
1 Introduction

Structural distribution and the subsequent localization of image data have gained much importance in the last decade owing to their wide range of applications. The goal is to isolate and identify distinct structures from their background and to determine their exact position and orientation in an acquired image [1]. Identification of different structures is performed using digital image processing techniques [2, 3] by retrieving non-redundant information from a test image. A plethora of image processing algorithms has evolved over time to recognize homogeneous image segments, regions of interest in image data, supervised segmentation of images, edges of acquired objects, and objects of distinct size, shape, color or texture in an image [4,5,6].

Image segmentation is a basic mechanism aimed at identifying specific Regions-of-Interest (ROI) in an acquired input image and thereby identifying and locating distinct structures. It subdivides the pixels of an input image into clusters with homogeneous features on the basis of four image properties, namely color, intensity, edges and texture [7,8,9]. Analyzing input images on the basis of these properties via image segmentation has led to several applications, including video surveillance, satellite imaging, face recognition (biometrics), image retrieval, image denoising, medical image analysis, and the recognition and classification of distinct structures in a given image [10]. Relatively little literature has been published on efficiently determining the structure-based distribution of acquired image data. Some works used Gaussian-Mixture models [11] or Dirichlet-Mixture models [12], while others implemented a segmentation model with a single starting kernel by estimating the maximum-likelihood factor [13]. However, non-Gaussian and asymmetric image-data distributions are difficult to estimate with Gaussian-Mixture models, and the results obtained are therefore unsatisfactory [14]. In such instances, the input image data are preferably modeled with Dirichlet-Mixture based distribution models, which are obtained by generalizing the Beta distribution to the multivariate domain. This paper implements a color based segmentation technique to automatically identify and localize different structures in an image. Color based image segmentation has become a major field of interest for estimating the structural distributions of image data. It further aids the identification of specific Regions-of-Interest and their associated properties in an acquired image for subsequent analysis.

Color based image segmentation is then followed by a suitable pattern recognition task for structure classification. Several classification algorithms have been reported in the literature, including neural networks, clustering techniques, edge based and fuzzy based techniques [15, 16]. However, the nearest neighbor algorithm is easy to implement and has a short execution time [17]; it is therefore selected for the classification task in this paper.

2 Materials and Methods

In this work, an algorithm for a real time image acquisition and structural distribution system has been developed and implemented using color-based image segmentation. The functional block diagram of the designed system is sketched in Fig. 1. It comprises modules for image acquisition, image processing and image segmentation.

Fig. 1. Block diagram for real time image acquisition and structural distribution system using color-based segmentation

An image is acquired in real time through the Image Acquisition Toolbox and is exported to the MATLAB workspace in .jpg format. Color based image segmentation has been explored and implemented to locate differently colored structures in the acquired image. This is followed by plotting the histograms of the individual red, green and blue planes, where the brightness at each intensity point indicates the pixel count. Finally, the segmented pixels are classified using the Nearest Neighbor rule. The detailed methodology is given in the flow chart in Fig. 2.

Fig. 2. Flow chart for object identification using color based segmentation

2.1 Image Acquisition

First, an image is acquired in real time using the Image Acquisition Toolbox by creating a video input object. A resolution of 288 × 352 is selected for the acquisition. A single frame of data is captured and exported to the MATLAB workspace in .jpg format.
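A minimal sketch of this acquisition step is given below. It assumes the MATLAB Image Acquisition Toolbox and a webcam exposed through the 'winvideo' adaptor; the adaptor name and the format string are device dependent and are used here only for illustration.

  % Acquisition sketch (assumes Image Acquisition Toolbox; adaptor name and
  % format string are device dependent).
  vid = videoinput('winvideo', 1, 'YUY2_352x288');  % 288 x 352 frames
  set(vid, 'ReturnedColorSpace', 'rgb');            % return RGB data
  frame = getsnapshot(vid);                         % capture a single frame
  imwrite(frame, 'test_image.jpg');                 % export to .jpg
  delete(vid);                                      % release the device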

2.2 Histogram Based Classification

Histogram based classification involves determining the number of pixels of an image at each individual intensity level [18]. First, the range of intensity values present in the image is estimated, and a graph of the number of pixels versus intensity is plotted. Constructing the histogram requires a single pass over the input image, performed for the whole image with a single color filter, in order to determine and store a running count of pixels at each intensity level. Since color is the most important parameter for extracting image features and identifying structures in the input image data, color-specific histograms can also be constructed: either a separate histogram for each of the red, green and blue color planes, or a three-dimensional histogram in which the three planes are represented by three axes. In the latter case, the pixel count is indicated by the brightness level at each intensity point [19]. Color histogram based classification techniques have been widely and successfully applied to content based retrieval systems [20, 21]. However, information about the spatial distribution of the respective colors is missing.
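A short sketch of the per-plane histogram construction is shown below; it assumes the Image Processing Toolbox and the acquired test image saved as 'test_image.jpg' in the previous step.

  % Per-channel histogram sketch (assumes Image Processing Toolbox).
  frame = imread('test_image.jpg');
  channels = {'Red', 'Green', 'Blue'};
  for k = 1:3
      subplot(3, 1, k);
      imhist(frame(:, :, k));                 % pixel count at each intensity level
      title([channels{k} ' plane histogram']);
  end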

The major limitation of histogram based classification lies in the fact that the comparison is performed on the basis of the identified color of the input image structure while completely ignoring its spatial or shape information. The plotted color histograms can be misleading, as two different input images with similar color content but different object/shape information may produce identical histograms. For example, it is hard to differentiate a blue and green ball from a blue and green disc if their color content is the same. Furthermore, color histogram based classification is highly sensitive to noise, including intensity variations due to lighting, and also suffers from quantization errors.
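As a toy illustration of this limitation (not part of the original experiment), randomly permuting the pixel positions of an image destroys all shape information yet leaves every channel histogram unchanged:

  % Toy illustration: shuffling pixel positions destroys shape information
  % but leaves the per-channel histograms identical.
  img = imread('test_image.jpg');
  [r, c, p] = size(img);
  idx = randperm(r * c);                      % random spatial permutation
  shuffled = img;
  for k = 1:p
      plane = img(:, :, k);
      shuffled(:, :, k) = reshape(plane(idx), r, c);
  end
  isequal(imhist(img(:, :, 1)), imhist(shuffled(:, :, 1)))  % returns true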

2.3 Color Based Segmentation

The proposed solution to the shortcomings of histogram based classification is to classify and identify structures in a given image using color based structural segmentation of the input image data. Segmentation of the input image involves estimating its constituent regions.

The proposed system uses the L*a*b (luminosity-chromaticity) color space to identify structures in the acquired input image. It has been identified as the most appropriate color space according to the specifications of the International Commission on Illumination, since every color visible to the human eye can be described in this space [22]. Furthermore, this color space has been found to be well suited to segmentation based on Euclidean distances between colors [22].

The ranges of the sample colors 'L', 'a' and 'b' are calculated for region classification. The L*a*b color space comprises the luminosity layer (L), also known as the brightness layer, the chromaticity layer (a), which indicates where a color falls along the red-green axis, and the chromaticity layer (b), which indicates where a color falls along the blue-yellow axis. Once the 'a' and 'b' values for each color marker are obtained, each pixel of the acquired image is classified using the Nearest Neighbor rule [17]. This involves calculating the Euclidean distance between the selected pixel and each color marker: the smaller the distance, the more closely the pixel matches that marker, and the pixel is labeled accordingly. Repeating this process assigns a color label to every pixel of the acquired image and yields a label matrix, which in turn is used to segment the structures of the input image by color.
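A condensed sketch of this labeling step is given below. It assumes the Image Processing Toolbox (rgb2lab; older releases use makecform/applycform instead) and that the mean 'a' and 'b' values of each color marker have already been computed from sample regions and stored in an n-by-2 matrix named color_markers, a name introduced here only for illustration.

  % Nearest-neighbor labeling sketch (assumes Image Processing Toolbox;
  % color_markers is an n-by-2 matrix of mean 'a' and 'b' values per marker).
  lab_img = rgb2lab(imread('test_image.jpg'));
  a = lab_img(:, :, 2);                       % red-green chromaticity layer
  b = lab_img(:, :, 3);                       % blue-yellow chromaticity layer
  n = size(color_markers, 1);
  dist = zeros([size(a), n]);
  for k = 1:n                                 % Euclidean distance to marker k
      dist(:, :, k) = sqrt((a - color_markers(k, 1)).^2 + ...
                           (b - color_markers(k, 2)).^2);
  end
  [~, label] = min(dist, [], 3);              % label matrix: nearest marker wins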

3 Result and Discussion

The various results obtained using histogram based classification and color-based segmentation are presented and discussed in this section. Figure 3 shows the test image acquired in real time using the Image Acquisition Toolbox. The original test image contains structures in four colors, including the blue background, comprising four balls and one disk. The differently colored objects have been included for a better understanding of the algorithm used for color based image segmentation.

Fig. 3. Test image acquired in real time using image acquisition toolbox

First, the individual histograms of the red, green and blue color planes are constructed and shown in Fig. 4. As discussed in Sect. 2.2, color histograms can be misleading because two different input images with similar color content but different object/shape information may produce identical histograms. Therefore, an efficient color based segmentation technique has been implemented to identify the differently colored structures; the corresponding results are shown in Fig. 5.

Fig. 4. Test image and corresponding histogram based classification (Color figure online)

Fig. 5. Test image and corresponding color based segmentation

It can be observed that each structure can be identified individually using color based segmentation. Furthermore, whereas histogram based classification provides only primary color information, the applied segmentation technique is capable of identifying and classifying both primary and secondary color structures with different spatial and shape information. The resulting scatter plot of the segmented, labeled pixels with their corresponding 'a' and 'b' values, obtained using the Nearest Neighbor rule, is shown in Fig. 6.

Fig. 6. Scatter plot of the segmented labeled pixels with corresponding 'a' and 'b' values (Color figure online)

As described in Sect. 2.3, the Euclidean distance of each pixel of the input image from every color marker has been calculated; the smaller the distance, the more closely the pixel is associated with the corresponding color. The scatter plot shows the population of each color in terms of its 'a' and 'b' values. Here, the segmentation of the four input colors red, green, blue and yellow is plotted.
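A short sketch of how such a scatter plot can be produced from the label matrix is given below; lab_img and label are assumed to come from the segmentation sketch in Sect. 2.3, and the marker order red, green, blue, yellow is an assumption made for illustration.

  % Scatter plot sketch: 'a' versus 'b' values of the labeled pixels,
  % reusing lab_img and label from the segmentation sketch in Sect. 2.3.
  a = lab_img(:, :, 2);
  b = lab_img(:, :, 3);
  colors = {'r', 'g', 'b', 'y'};              % assumed marker order: R, G, B, Y
  figure; hold on;
  for k = 1:numel(colors)
      mask = (label == k);
      plot(a(mask), b(mask), '.', 'Color', colors{k});
  end
  xlabel('a (red-green)'); ylabel('b (blue-yellow)');
  title('Scatter plot of segmented pixels in the a-b plane');
  hold off;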

4 Conclusion

An algorithm for the structural distribution of image data using color based image segmentation has been implemented in real time. The system is developed to locate differently colored structures in an acquired image along with their spatial and shape information. The Nearest Neighbor rule has been explored to classify the different color regions in the acquired image. Experimental results reveal the effectiveness of the color based segmentation algorithm for structural distribution as compared to histogram based classification. Future work will include the development of a medical image analysis system for tumor classification and a paper currency authentication system.