
1 Introduction

Image segmentation plays an important role in the field of medical imaging, and how to segment images accurately has long been a hot topic. Semi- or fully automatic segmentation algorithms can be divided into five main types according to their strategies: region of interest (ROI) division, edge detection, texture and feature analysis, deformable models, and hybrid or multi-scale-based methods [1, 2]. The deformable-model methods often use prior knowledge through active contour models and statistical models, or use no prior deformation information about the region of interest (ROI) at all. Active contour models, including the snake model and the level set, are very popular and are usually applied in a semi-automatic fashion. The hybrid methods combine different algorithms to optimize the segmentation result [2, 3].

2 Method

2.1 The Principle of PCNN Image Segmentation

Figure 68.1 shows the mathematical model of a PCNN neuron. It consists of three parts: the receptive field, the modulation field, and the pulse generator. The receptive field receives the influence of other neurons and the external input through two channels, the feeding channel F and the linking channel L. The feeding input F ij receives the external stimulus I ij and the pulses Y from the neighboring neurons. The linking input L ij receives only the pulses from the neighboring neurons. In the modulation field, F ij and L ij are combined, and the modulation result U ij is then sent to the pulse generator, which is composed of a threshold adjuster and a comparator. The U ij is compared with the dynamic threshold θ ij to decide whether the neuron fires or not. If U ij is greater than the threshold θ ij , the pulse generator outputs one and the dynamic threshold is enlarged accordingly; when θ ij again exceeds U ij , the output returns to zero. In this way a pulse sequence is generated. The corresponding mathematical model is expressed as follows:

Fig. 68.1

Model of PCNN neuron

$$ F_{ij} \left[ n \right] = \exp \left( { - \alpha_{F} } \right)F_{ij} \left[ {n - 1} \right] + V_{F} \sum {M_{ijkl} Y_{kl} \left[ {n - 1} \right]} + I_{ij} $$
(68.1)
$$ L_{ij} \left[ n \right] = \exp \left( { - \alpha_{L} } \right)L_{ij} \left[ {n - 1} \right] + V_{L} \sum {W_{ijkl} Y_{kl} \left[ {n - 1} \right]} $$
(68.2)
$$ U_{ij} \left[ n \right] = F_{ij} \left[ n \right]\left( {1 + \beta L_{ij} \left[ n \right]} \right) $$
(68.3)
$$ Y_{ij} \left[ n \right] = \left\{ {\begin{array}{*{20}c} {1,} & {U_{ij} \left[ n \right] > \theta_{ij} \left[ n \right]} \\ {0,} & {U_{ij} \left[ n \right] \le \theta_{ij} \left[ n \right]} \\ \end{array} } \right. $$
(68.4)
$$ \theta_{ij} \left[ n \right] = \exp \left( { - \alpha_{\theta } } \right)\theta_{ij} \left[ {n - 1} \right] + V_{\theta } Y_{ij} \left[ {n - 1} \right] $$
(68.5)

where i and j refer to the pixel positions in the image, k and l are the offsets within a symmetric neighborhood around the pixel, and n denotes the current iteration (discrete time step). M ijkl and W ijkl are the constant synaptic weight kernels, and V F, V L, and V θ are the magnitude scaling terms. The α F, α L, and α θ are the time decay constants of the PCNN neuron, and β is the linking strength.
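For concreteness, a minimal Python sketch of how Eqs. (68.1)–(68.5) can be iterated over a whole image is given below. The 3 × 3 weight kernels M and W, the use of scipy.ndimage.convolve for the neighborhood sums, and the order in which θ and Y are updated are illustrative assumptions rather than prescriptions of the original model.

```python
# Illustrative sketch of one time step of the full PCNN model, Eqs. (68.1)-(68.5).
# All arrays have the image shape; M and W are small synaptic weight kernels
# (assumed 3x3 here). Updating theta before Y is one common reading of the model.
import numpy as np
from scipy.ndimage import convolve

def pcnn_full_step(I, F, L, theta, Y,
                   alpha_F, alpha_L, alpha_theta,
                   V_F, V_L, V_theta, beta, M, W):
    """Advance the neuron states F, L, theta, Y by one iteration n."""
    feed = convolve(Y.astype(float), M)                  # neighborhood pulses via M
    link = convolve(Y.astype(float), W)                  # neighborhood pulses via W
    F = np.exp(-alpha_F) * F + V_F * feed + I            # Eq. (68.1)
    L = np.exp(-alpha_L) * L + V_L * link                # Eq. (68.2)
    U = F * (1.0 + beta * L)                             # Eq. (68.3)
    theta = np.exp(-alpha_theta) * theta + V_theta * Y   # Eq. (68.5), uses Y[n-1]
    Y = (U > theta).astype(np.uint8)                     # Eq. (68.4)
    return F, L, theta, Y
```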

From the point of view of image processing, this model still has some limitations in practical applications: it contains many parameters that must be adjusted, which is time-consuming and difficult. In order to further reduce the computational complexity, the improved PCNN model [4] is adopted as follows:

$$ F_{ij} \left[ n \right] = I_{ij} $$
(68.6)
$$ L_{ij} \left[ n \right] = V_{L} \sum {W_{ijkl} } Y_{kl} \left[ {n - 1} \right] $$
(68.7)
$$ \theta_{ij} \left[ n \right] = \left\{ {\begin{array}{*{20}l} {\exp \left( { - a/n} \right)\theta_{0} ,} & {Y_{ij} \left[ {n - 1} \right] = 1} \\ {\theta_{0} , } & {Y_{ij} \left[ {n - 1} \right] = 0} \\ \end{array} } \right. $$
(68.8)

When a neuron emits a pulse, it affects the neighboring neurons through the linking input, and nearby neurons with similar intensities are captured and fire in the same way. The pulse therefore spreads outward over the receptive field, and a single firing neuron can ignite a whole group of similar neurons. Using this synchronous-firing characteristic, image segmentation can be realized quickly.
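To make the simplified model concrete, the following sketch implements one iteration of Eqs. (68.6)–(68.8) together with the unchanged Eqs. (68.3) and (68.4). The 3 × 3 linking kernel W is an illustrative assumption; the default parameter values follow Step 1 of Sect. 2.3, where α = 10 is taken to be the decay constant a of Eq. (68.8).

```python
# Illustrative sketch of one iteration of the simplified PCNN, Eqs. (68.6)-(68.8),
# with Eqs. (68.3)-(68.4) unchanged. Default parameters follow Sect. 2.3;
# the linking kernel W is an assumption.
import numpy as np
from scipy.ndimage import convolve

def pcnn_step(I, Y_prev, n, V_L=0.5, beta=0.1, a=10.0, theta0=255.0):
    """Return the binary firing map Y[n] for stimulus I and previous map Y_prev."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                      # pulses from the 8 neighbours
    F = I                                                # Eq. (68.6)
    L = V_L * convolve(Y_prev.astype(float), W)          # Eq. (68.7)
    U = F * (1.0 + beta * L)                             # Eq. (68.3)
    # Eq. (68.8), taken literally: the threshold decays for neurons that fired
    # in the previous iteration and stays at theta0 for the others.
    theta = np.where(Y_prev == 1, np.exp(-a / n) * theta0, theta0)
    return (U > theta).astype(np.uint8)                  # Eq. (68.4)
```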

2.2 Fuzzy Mutual Information

Mutual information (MI) is a similarity measure of proven versatility and has been widely used in image processing [5]. This information-theoretic measure does not depend on any assumption about the data, nor on a particular functional form of the relationship between intensities. In image processing it is assumed that the statistical dependence between the gray levels of two images is largest when they are correctly aligned. The max-MI criterion has also been used for image segmentation. However, it does not always give the best segmentation result, because it is affected by changes of the gray values in the overlapping region of the images. Applying fuzzy theory to image segmentation, we put forward the fuzzy mutual information (FMI). This paper introduces an FMI based on the correlation coefficient as follows:

Given image A and B, the MI is defined as

$$ MI(A,B) = \sum\limits_{a,b} {p_{AB} \left( {a,b} \right)\log \frac{{p_{AB} \left( {a,b} \right)}}{{p_{A} \left( a \right) \cdot p_{B} \left( b \right)}}} $$
(68.9)

where p AB(a, b) is the joint probability distribution of the two images, and p A (a) and p B (b) are the marginal distributions of image A and image B, respectively. Then the FMI is given by

$$ FMI\left( {A,B} \right) = \sum {\sum {\left( {\rho \left( {a,b} \right)} \right)^{\alpha } p_{AB} \left( {a,b} \right)\log \frac{{p_{AB} \left( {a,b} \right)}}{{p_{A} \left( a \right) \cdot p_{B} \left( b \right)}}} } $$
(68.10)

where α is an adjustable factor greater than 0, and ρ(a, b) is the correlation coefficient of images A and B. FMI has the following properties (a small computational sketch follows the list):

  1. Symmetry: FMI(A, B) = FMI(B, A).

  2. Conversion: If the adjustable factor α = 0, then FMI is the same as MI.
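As a computational sketch, MI and FMI can be estimated from the joint gray-level histogram of the two images. Following the wording above, ρ is treated here as a single correlation coefficient between A and B; the number of histogram bins and the guard for constant images are additional assumptions.

```python
# Illustrative estimate of Eq. (68.9) (MI) and Eq. (68.10) (FMI) from a joint
# gray-level histogram. With alpha = 0 the function reduces to plain MI.
import numpy as np

def fuzzy_mutual_information(A, B, alpha=1.0, bins=256):
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    p_ab = joint / joint.sum()                    # joint distribution p_AB(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginal p_A(a)
    p_b = p_ab.sum(axis=0, keepdims=True)         # marginal p_B(b)

    # Correlation coefficient of the two images (set to 0 if either is constant).
    rho = np.corrcoef(A.ravel(), B.ravel())[0, 1]
    rho = 0.0 if np.isnan(rho) else abs(rho)

    denom = p_a * p_b
    nz = (p_ab > 0) & (denom > 0)                 # avoid log(0)
    mi = np.sum(p_ab[nz] * np.log(p_ab[nz] / denom[nz]))
    return float((rho ** alpha) * mi)             # Eq. (68.10); MI when alpha = 0
```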

2.3 Auto-Segmentation Algorithm

The algorithm can be implemented with the following steps (a Python sketch of the whole loop is given after the list):

  1. Setting the parameters: VL = 0.5, α = 10, θ0 = 255, β = 0.1, and iteration number n = 1.

  2. Inputting the normalized gray image to the PCNN as the external stimulus signal I ij.

  3. Iterating n = n + 1.

  4. Segmenting the image by PCNN.

  5. Computing the value of FMI. If FMI < FMImax, go to Step 3; otherwise stop the segmentation and the final result is obtained.
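Combining the two sketches above, one possible reading of Steps 1–5 is shown below: the simplified PCNN is iterated and the binary output whose FMI with the input image is maximal is kept. Treating the firing map as a 0/255 image for the FMI computation, the maximum iteration count, and stopping once FMI drops below its running maximum are assumptions about details the steps leave open.

```python
# One possible reading of the auto-segmentation loop of Sect. 2.3, reusing the
# pcnn_step and fuzzy_mutual_information sketches above. The input image is
# assumed to be on the 0-255 gray scale implied by theta0 = 255.
import numpy as np

def pcnn_fmi_segment(image, max_iter=30, alpha=1.0):
    Y = np.zeros_like(image, dtype=np.uint8)      # Step 1: no neuron has fired yet
    best_fmi, best_Y = -np.inf, Y
    for n in range(2, max_iter + 1):              # Step 3: n = n + 1
        Y = pcnn_step(image, Y, n)                # Step 4: segment by the PCNN
        fmi = fuzzy_mutual_information(image, Y * 255, alpha=alpha)
        if fmi > best_fmi:                        # Step 5: track the running maximum
            best_fmi, best_Y = fmi, Y
        else:
            break                                 # FMI has passed its maximum; stop
    return best_Y
```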

3 Results and Discussions

Figure 68.2 shows the segmentation results of tire images obtained by different algorithms. The images from left to right in the first row are the original tire image, the image with Gaussian noise, the image with salt-and-pepper noise, and the image with multiplicative noise, respectively.

Fig. 68.2

Segmentation results of tire images

Figure 68.3 shows the segmentation results of a medical cerebral CT image. Figure 68.3a is the original CT image, and Fig. 68.3b–d are the images segmented by Otsu, PCNN with max-entropy, and PCNN with max-FMI, respectively. Figure 68.4 shows the segmentation results of a breast tumor ultrasound image in the same order as Fig. 68.3. The PCNN with max-FMI algorithm again showed a better segmentation effect than the other two algorithms, especially at the boundaries of the region of interest. Despite the partial volume effect in CT, the CT image is clear enough to be segmented without pre-processing. The intracranial regions in the cerebral CT, such as the cerebrospinal fluid and the brain matter, are well segmented by PCNN with max-FMI, while there is too much noise in the images segmented by Otsu and PCNN with max-entropy. For the ultrasound image, there are too many small spots in Fig. 68.4b, which means that the Otsu algorithm suffered from the speckle noise. The boundary in Fig. 68.4c is overly smooth, and details are lost with the max-entropy PCNN algorithm. In contrast, the breast tumor boundary could be accurately segmented by the proposed algorithm, although speckle noise is inherent in ultrasound imaging. Furthermore, the proposed algorithm can segment the ultrasound image without pre-processing such as denoising or enhancement, which reduces the running time. The proposed algorithm therefore shows strong robustness against noise and high efficiency (Fig. 68.4).

Fig. 68.3

Segmentation result of CT image. a Original CT image. b Segmentation image with Otsu. c Segmentation image with max-entropy PCNN. d Segmentation image with max-FMI PCNN

Fig. 68.4

Segmentation result of breast ultrasound image. a Original ultrasound image. b Segmentation image with Otsu. c Segmentation image with max-entropy PCNN. d Segmentation image with max-FMI PCNN

4 Conclusion

In conclusion, we have introduced a new image auto-segmentation algorithm based on PCNN and FMI. The experiments confirmed that the proposed algorithm can effectively segment both CT and ultrasound images. The results suggest that the proposed algorithm has potential for application in medical image segmentation.