
1 Introduction

Brain image acquisitions are gradually becoming more information-rich thanks to the increasing resolution and versatility of brain imaging methods [1, 2]. Multiple Sclerosis (MS) is a chronic central nervous system disease that results in loss of sensory and motor function [2]. It is one of the most common diseases affecting the neurological abilities of young adults [1]. These neurological disabilities arise from an immune-mediated process that induces inflammation of the neurons, demyelination, and axonal damage [1].

Three-dimensional (3D) visualization of brain magnetic resonance imaging (MRI) has been a developing part of medical imaging for many years, and the literature surrounding it has been growing rapidly ever since. It aims at providing experts with both qualitative and quantitative information on the subject of study and at helping them comprehend it in its full dimensionality [3]. Despite the extensive research carried out on 3D visualization in MRI, only a few studies in the current literature have focused on the 3D reconstruction and visualization of the brain in MS subjects [4,5,6]. More specifically, in [4] the spherical harmonics method was used to estimate the 3D shape of the MS lesions and to determine the lesions' volume. In [5], the 3D Slicer tool was used to reconstruct brain MRI images and lesions. Furthermore, in order to estimate the loss of brain volume over time and assess the evolution of the MS disease, the FreeSurfer [7] and MIPAV [8] software tools were used. It was also recently proposed [6] that 3D texture analysis and 3D lesion visualization may be useful in following up the progression and development of the MS disease. In [6], the researchers used the isosurface rendering method and texture mapping techniques to reconstruct the MRI images and also estimated the volume of the lesions at two different time points (TP1-TP2).

The objective of this work is to propose and evaluate an integrated 3D reconstruction system for brain MS lesion visualization, able to reconstruct 2D MRI MS acquisitions into a 3D volume. The system comprises the following functions: 1) brain and lesion re-slicing and visualization in three different planes (sagittal, coronal, transverse); 2) visualization and comparison of four different brain MRI acquisition time points (TP1-TP4); 3) comparison of the manual and the automated [9] lesion segmentations, which are embedded into a 3D volume; 4) volume estimation [mm3] of each lesion; 5) approximation of the total volume of all lesions at TP1 to TP4; 6) comparison and quantification of the MS disease; and 7) follow-up of the development of the MS disease.

To the best of our knowledge, there are no other studies reported in the current literature in which the 3D reconstruction of brain images and lesions from MS subjects at TP1-TP4 has been developed, validated, or investigated.

2 Material and Methods

The 3D reconstruction integrated system proposed in this work was developed using the VisPy python library [10], which is used for interactive and high-level visualization. The system is complemented by a simple user interface developed with PyQt5 and other supplementary libraries and tools [11,12,13]. Figure 1 illustrates the flow diagram depicting the steps followed for reconstructing the brain MRI images and MS lesions into a 3D volume, which are described below.
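As an illustration of this integration, the following minimal sketch shows how a VisPy scene canvas can be embedded in a PyQt5 window; the window title, widget layout, and camera settings are illustrative assumptions and not the actual code of the proposed tool.

```python
import sys
from PyQt5 import QtWidgets
from vispy import scene
from vispy.app import use_app

# Minimal sketch (not the actual tool's code): a VisPy scene canvas
# embedded in a PyQt5 main window. Names and layout are illustrative only.
use_app('pyqt5')                          # make VisPy use the PyQt5 backend
qt_app = QtWidgets.QApplication(sys.argv)

window = QtWidgets.QMainWindow()
window.setWindowTitle('3D MS lesion viewer (sketch)')

canvas = scene.SceneCanvas(keys='interactive', bgcolor='black')
view = canvas.central_widget.add_view()
view.camera = scene.cameras.TurntableCamera(fov=60)
# 3D visuals (e.g. the volume rendering of Sect. 2.5) would be added to view.scene.

window.setCentralWidget(canvas.native)    # canvas.native is the underlying Qt widget
window.resize(800, 600)
window.show()
sys.exit(qt_app.exec_())
```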

2.1 Acquisition of Brain MRI Images

The brain MRI images used in this work were available from the International Symposium on Biomedical Imaging (ISBI) 2015 challenge [14], which provided 19 subjects with four different MRI scan sequences: T1-weighted magnetization prepared rapid gradient echo (T1w MPRAGE), proton density-weighted (PDw), T2-weighted (T2w), and T2w fluid attenuated inversion recovery (T2w FLAIR) [14]. In this work, only the T2w FLAIR images were used (Nr = 5). A 3.0 T MRI scanner was used to generate the images, with an inversion recovery time of 823 ms, an echo time of 68 ms, and a voxel size of 0.82 × 0.82 × 2.2 mm3 [14]. The MRI images were inhomogeneity corrected using the N4 method, a non-uniform intensity normalization [15], and then rigidly registered to a baseline (TP1) scan as described in [14]. In addition, the skull [16] and dura [17] were stripped from the images [14]. Finally, the N4 inhomogeneity correction was performed once again.

The entire dataset consisted of 19 subjects, of which only five were used in this study (Nr = 5, age = 43.5 ± 10.3 (mean ± std) years). Four of them had brain acquisitions at four consecutive time points (TP1-TP4) and one at five consecutive time points (TP1-TP5), separated by an average acquisition interval of one year [14]. The MRI brain scans consisted of 181 longitudinal slices of size 217 × 181 pixels. Manual blinded delineations were performed by two raters (R1, R2), with four and ten years of clinical experience in manual delineation, respectively [14].

2.2 MRI MS Lesion Preprocessing and Semi-automated Segmentation

Before the automated lesion segmentation [9], the MRI images were intensity normalized between 0 and 4095 as shown in (1) (see also Fig. 1, step 3). The lesion segmentation was performed using a Convolutional Neural Network (CNN) U-Net system [9]. This is a symmetric CNN architecture that uses an encoder and a decoder to extract spatial features from the images and to construct segmentation maps [18]. The basic architecture repeats a sequence of two 3 × 3 convolutions followed by a max-pooling operation with a pooling size of 2 × 2 and a stride of two [18]. The detailed architecture and additional information on the CNN model and the MRI lesion segmentation are documented in [9]. The model of [9] is also embedded in the tool proposed in this work, giving the user the opportunity to segment new MRI scans.
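For illustration only, the following sketch shows, in PyTorch, one instance of the repeated building block described above (two 3 × 3 convolutions followed by 2 × 2 max pooling with a stride of two); the channel sizes, padding, and activation functions are assumptions, since the exact configuration is documented in [9].

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One U-Net encoder stage: two 3x3 convolutions followed by 2x2 max pooling
    with a stride of two. Channel sizes and padding are illustrative, not those of [9]."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        features = self.double_conv(x)       # feature maps also passed to the decoder (skip connection)
        return self.pool(features), features

# Example: one single-channel FLAIR slice with an illustrative size
block = EncoderBlock(1, 64)
pooled, skip = block(torch.randn(1, 1, 128, 128))   # pooled shape: (1, 64, 64, 64)
```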

2.3 Image Normalization and Lesion Contour Generation

The original MRI image intensities were min-max normalized (see Fig. 1, step 3), using the MinMaxScaler function from the scikit-learn library, to map the intensity values to the range of 0 to 255 as follows:

$$ N_{i} = \frac{X_{i} - \min\left(\boldsymbol{X}\right)}{\max\left(\boldsymbol{X}\right) - \min\left(\boldsymbol{X}\right)} \cdot \left(T - D\right) + D $$
(1)

where Xi and Ni are the intensity and the normalized intensity of each pixel, respectively, X is the entire image array, T is the new maximum intensity, and D is the new minimum intensity.
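As an illustration, Eq. (1) can be implemented directly in NumPy (or, equivalently, with scikit-learn's MinMaxScaler); the sketch below maps a placeholder slice to the range [0, 255] and is not the actual implementation of the tool.

```python
import numpy as np

def normalize_intensities(image, new_min=0.0, new_max=255.0):
    """Min-max intensity normalization of an MRI slice or volume (Eq. 1)."""
    x_min, x_max = image.min(), image.max()
    scaled = (image - x_min) / (x_max - x_min)     # map to [0, 1]
    return scaled * (new_max - new_min) + new_min  # map to [D, T]

# Example with a placeholder slice of the size reported in Sect. 2.1
slice_raw = np.random.randint(0, 4096, size=(217, 181)).astype(np.float32)
slice_norm = normalize_intensities(slice_raw)      # values now in [0, 255]
```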

The lesions segmented by the system in [9] are provided in the form of a binary mask array, which can be used to estimate the lesion contour coordinates. The contour coordinates were estimated using OpenCV's findContours method [12] (see Fig. 1, step 4), which returns a list of the (x, y) coordinates of all contour points. The coordinates were then saved in a dictionary using the slice number as the key.
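A minimal sketch of this contour extraction step is given below; the OpenCV retrieval and approximation flags are assumptions, as the exact flags used are not reported.

```python
import cv2
import numpy as np

def contours_per_slice(mask_volume):
    """Extract lesion contours slice by slice from a binary segmentation mask
    and store them in a dictionary keyed by slice number (sketch only)."""
    contours_by_slice = {}
    for slice_idx, mask_slice in enumerate(mask_volume):
        binary = (mask_slice > 0).astype(np.uint8)
        # Retrieval/approximation flags are assumptions, not the reported settings.
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            contours_by_slice[slice_idx] = [c.reshape(-1, 2) for c in contours]
    return contours_by_slice
```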

2.4 3D Lesion Volume Estimation

In the next step, the segmented masks were passed to the SciPy label function, which labels all connected components of the array according to a 3 × 3 × 3 pixel cube structuring element filled with ones (see Fig. 1, step 5). The labeling method assumes that all pixels with zero intensity form the background of the image and are not included in the lesion mask. The structuring element defines the connectivity used to group neighboring lesion voxels into connected components. After iterating through all the segmented masks, the labeling method returns an array of the same shape as the input, in which all connected pixels of a component share the same integer label, together with an integer giving the number of 3D lesions found in the masks. To estimate each lesion's volume, the NumPy count_nonzero function is called for each of the estimated labels (see also Fig. 1, step 5).
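The following sketch illustrates the labeling and per-lesion volume estimation with SciPy and NumPy; converting voxel counts to mm3 with the voxel size of Sect. 2.1 is our assumption of how the physical volume is obtained.

```python
import numpy as np
from scipy import ndimage

def lesion_volumes(mask_volume, voxel_size_mm3=0.82 * 0.82 * 2.2):
    """Label 3D-connected lesion components and estimate each lesion's volume (sketch)."""
    structure = np.ones((3, 3, 3))                      # 3x3x3 cube of ones (26-connectivity)
    labeled, num_lesions = ndimage.label(mask_volume > 0, structure=structure)
    volumes = {}
    for label in range(1, num_lesions + 1):
        voxel_count = np.count_nonzero(labeled == label)
        # Assumption: physical volume = voxel count x voxel size (Sect. 2.1).
        volumes[label] = voxel_count * voxel_size_mm3
    return num_lesions, volumes
```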

2.5 3D Reconstruction

Prior to the 3D reconstruction, the user loads the json file, which contains the data for visualization (see Fig. 1, step 6). The proposed system then automatically draws the lesion contours on the original MRI brain images, using the OpenCV drawContours method [12] (see Fig. 1, step 7). In addition, the tool automatically multiplies the segmented masks with the original images; this operation isolates the lesions and allows them to be visualized separately (see also Figs. 2c) and 2d)). Subsequently, the OpenGL, NumPy, and VisPy libraries were used for interactive 3D visualization [10]. VisPy offers the ability to render the MS lesion volumes with the translucent direct volume rendering (DVR) method. DVR renders the lesion volume by casting virtual rays through the volume and projecting the result as a 2D image onto a plane. It retains the interior of the volume unchanged and provides spatial information between different structures [19], which is a key factor for the visualization and its validity, as indicated in [1, 2, 20].

The vertex shader (see Fig. 1, step 9) is a program written in the GLSL programming language that calculates the position of the vertices on the screen. It is also used to calculate how the virtual rays travel through the volume and decides which parts of the volume are visible and have to be drawn. The fragment shader (see Fig. 1, step 10), also written in GLSL, assigns the color and opacity of each pixel. Additionally, VisPy offers the ability to modify the volume by changing the shaders or attaching code to the shaders [10], and to re-slice the brain volume (see Fig. 1, step 13). All of the above properties give the user the ability to remove and replace slices in three different directions (coronal, sagittal, transverse). Furthermore, VisPy's canvas supplies basic interactive controls such as rotation, zooming, and changing the center of rotation.
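A minimal, self-contained sketch of the translucent volume rendering described above is shown below; the placeholder data and camera settings are illustrative and do not reproduce the full functionality (re-slicing, multiple time points) of the proposed tool.

```python
import numpy as np
from vispy import app, scene

# Placeholder volume; in practice this would be the normalized 3D MRI array
# (181 slices of 217 x 181 pixels stacked along the first axis).
mri_volume = np.random.rand(181, 217, 181).astype(np.float32)

canvas = scene.SceneCanvas(keys='interactive', show=True, bgcolor='black')
view = canvas.central_widget.add_view()

# 'translucent' casts virtual rays through the volume and composites the samples,
# so interior structures such as lesions remain visible.
scene.visuals.Volume(mri_volume, parent=view.scene, method='translucent')

# Turntable camera provides the rotation/zoom interaction mentioned above.
view.camera = scene.cameras.TurntableCamera(fov=60)

if __name__ == '__main__':
    app.run()
```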

Finally, one of the challenges of 3D reconstruction is to maintain the texture of the MRI image. In order to visualize the texture of the MRI as faithfully as possible, GLSL code is attached to the fragment shader (see Fig. 1, step 11), which changes the colormap and the opacity of each pixel. Specifically, a colormap of RGBA values based on the image intensities was generated in such a way that the brain volume is displayed in its original colors. As a result, pixels with zero intensity (black) are fully transparent, while the transparency decreases as the pixel intensity increases.
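The proposed system implements this behavior with custom GLSL attached to the fragment shader; as an alternative illustration of the same intensity-dependent opacity, the sketch below builds an RGBA colormap with VisPy's Colormap class, which could be passed to the Volume visual of the previous sketch.

```python
from vispy.color import Colormap

# Sketch of an intensity-dependent RGBA colormap: zero-intensity voxels are fully
# transparent and opacity grows with intensity. This uses vispy.color.Colormap
# instead of the custom GLSL fragment-shader code described in the text, and is
# shown here only as an illustration of the same effect.
alpha_cmap = Colormap([(0.0, 0.0, 0.0, 0.0),   # black, fully transparent
                       (0.5, 0.5, 0.5, 0.5),   # mid grey, semi-transparent
                       (1.0, 1.0, 1.0, 1.0)])  # white, fully opaque

# Usage with the previous sketch (hypothetical):
# scene.visuals.Volume(mri_volume, parent=view.scene,
#                      method='translucent', cmap=alpha_cmap)
```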

Fig. 1. Flow diagram of the proposed 3D reconstruction system for MRI brain images, showing all processing steps. Since MS lesions appear brighter in a FLAIR image, the reconstructed MS lesion volumes are more opaque and brighter than other brain tissues.

2.6 Evaluation Metrics

The following evaluation metrics were used to assess the proposed 3D reconstruction method: a) lesion load (L); b) volume (V); c) the correlation coefficient; and d) the t-test p-value. Furthermore, the average difference of the total number of lesions (LAD) and of the total volume (VAD) in mm3 between the two raters and the automated system [9] over all time points (TP1-TP4) were calculated as follows:

$$ LAD_{j,k} = \frac{\sum_{i} \left| L_{j}\left(TP_{i}\right) - L_{k}\left(TP_{i}\right) \right|}{N_{TP}} $$
(2)
$$ VAD_{j,k} = \frac{\sum_{i} \left| V_{j}\left(TP_{i}\right) - V_{k}\left(TP_{i}\right) \right|}{N_{TP}} $$
(3)

where L and V are the total number of lesions and the total lesion volume, respectively, j and k denote R1, R2, or the semi-automated segmentation system, TPi is the i-th time point of the given subject under evaluation, and NTP is the number of available time points.
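For illustration, the sketch below computes LAD (Eq. 2), the Pearson correlation, and the t-test p-value for hypothetical lesion counts; a paired t-test is assumed here, since the exact test variant is not stated.

```python
import numpy as np
from scipy import stats

# Hypothetical lesion counts per time point (TP1-TP4) for rater R1, rater R2,
# and the automated system; the numbers are placeholders, not study data.
L_r1   = np.array([30, 32, 35, 33])
L_r2   = np.array([28, 34, 36, 31])
L_auto = np.array([33, 35, 40, 36])

def lesion_avg_diff(l_j, l_k):
    """Average absolute difference of lesion counts over all time points (Eq. 2)."""
    return np.mean(np.abs(l_j - l_k))

lad_r1_r2 = lesion_avg_diff(L_r1, L_r2)

# Pearson correlation and (assumed) paired t-test between R1 and the automated system.
rho, _ = stats.pearsonr(L_r1, L_auto)
t_stat, p_value = stats.ttest_rel(L_r1, L_auto)

print(f"LAD(R1,R2) = {lad_r1_r2:.2f}, r = {rho:.2f}, p = {p_value:.3f}")
```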

3 Results

Figure 2 illustrates the 3D brain reconstruction of an MS patient aged 43.5 years at two consecutive time points (TP1, TP2), shown in the left and right columns in Figs. 2a) and 2b), respectively. In Figs. 2c) and 2d), re-slicing of the brain in three different directions after rotation (−34.0 x-axis, −39.5 y-axis) is shown. The rotations were performed to better visualize the interior of the brain. Figures 2e) and 2f) show the 3D reconstructed lesions for TP1 and TP2 after lesion extraction (number of lesions at TP1/TP2: 35/44; volume [mm3]: 31746/33382). It is noted that the 3D reconstructed lesions shown in Figs. 2e) and 2f) can be rotated and re-sliced according to the expert's input.

Table 1 presents a comparison of all MS lesions investigated in this study, based on the number of segmented lesions and their volume, as delineated by the two experts versus the semi-automated system proposed in [9], for all subjects investigated (Nr = 5) at TP1-TP5. The upper part of Table 1 tabulates the mean (±std) of the number of lesions and of the total lesion volume (mm3) at the different time points (TP1-TP5) for the segmentations performed by R1, R2, and the semi-automated CNN U-Net model. The middle section of Table 1 shows the correlation coefficient and the t-test p-value of the lesion segmentations and volumes per time point. The t-test p-value indicates the level of statistically significant difference between the manual and automated segmentations. Lastly, the lower section of Table 1 indicates the average difference of the lesion load and of the total lesion volume in mm3 between R1 vs R2, R1 vs the semi-automated system, and R2 vs the semi-automated system for each TP.

4 Discussion

The objective of this work was to provide MS clinicians with a tool that can be used to estimate, follow up, and compare MS lesion progression by reconstructing 2D MRI scans into a 3D volume. The tool can reconstruct and visualize up to four different TPs, which can be re-sliced, rotated, and switched between showing the entire brain or only the lesions. The tool also provides an estimation of the lesion load at each TP, as well as an estimation of each lesion's volume in mm3. The results presented in this study showed that the proposed 3D reconstruction method can be used to differentiate brain tissue and distinguish MS lesions by providing an improved 3D visualization. The method preserves the texture of the original MRI scans to an acceptable level (see also Fig. 2). It can also be observed from the upper part of Table 1 that the lesion load varies between different TPs. More specifically, the std of the mean number of lesions and of the lesion volume varies considerably between R1, R2, and the automated system. In contrast to the mean and std, the Pearson correlation between R1 and R2 (see middle part of Table 1) shows a good correlation of above 0.5 for practically all TPs. For TP1, the correlation for the number of lesions is very high (ρ = 0.91). The correlation coefficient for the number of lesions between R1 and R2 and the automated system at TP1 and TP2 is 0.86 and 0.81, respectively.

Fig. 2. a), b) MRI brain 3D reconstruction of an MS subject at TP1 and TP2 in the left and right columns, respectively. c), d) Re-slicing of the 3D reconstructed brain in three different planes (coronal, sagittal, transverse) (number of slices shown/total number of slices: 152/256, 165/256, 116/181). e), f) 3D reconstruction of the MRI MS lesions (total number of MS lesions at TP1/TP2: 35/44).

Table 1. Comparison of MS lesions based on the number of segmented lesions and their volume, as delineated by the two experts (R1, R2) vs the semi-automated segmentation system [9], for all subjects investigated in this study (Nr = 5) at TP1-TP5. Upper part: mean (±std) of the total number of lesions (-/) and of the total volume in mm3 (/-) for all subjects. Middle part: Pearson correlation (t-test p-value) of the total number of lesions (-/) and of the total volume in mm3 (/-). Lower part: average difference of the total number of lesions (-/) and of the total volume in mm3 (/-).

Different methods have been proposed in the current literature for the 3D reconstruction and/or quantification of MS brain lesions. More specifically, in [4] a method for estimating the shape and volume of MS lesions was presented. The 3D surface shape of the MS lesions was approximated from 2D contours using the spherical harmonics method. The volume was then calculated by virtually slicing the 3D lesion, summing the areas of the slices, and multiplying them by the virtual slice thickness. Cordovez et al. [5] visually and quantitatively compared two different time points in 11 MS patients using the FreeSurfer [7] and MIPAV [8] software. The 3D Slicer was used for the brain reconstruction, achieving a good 3D visualization of the lesions. A study on the 3D reconstruction of MS MRI brain scans was proposed in [6], where the surface of the brain was reconstructed using the isosurface method combined with texture mapping. In all of the above-mentioned studies, the drawbacks of the 3D reconstructions are twofold: the texture and the interior of the MRI scans were not maintained, and none of the studies offered interactive visualization features such as re-slicing of the brain. In addition, no other studies were found in the literature investigating brain visualization at four different time points for comparing the evolution of the MS disease, as proposed in this study. The software proposed in this study will be available for download at https://drive.google.com/drive/folders/1z7595UBTcsOzY7Hp9kCYsJ6tnHYQEfDM?usp=sharing. A number of other studies have also been reported recently on the reconstruction of MRI using deep neural networks [22, 23]. In [22], a deep neural network was proposed for the 4D reconstruction of aortic flow MRI data, while in [23] a deep recurrent neural network was presented to reduce MRI acquisition time and improve the MRI reconstruction.

5 Conclusions and Future Trends

The preliminary results presented in this study provide evidence that the proposed 3D reconstruction system could be applied in clinical practice in the future. It was shown that the system may provide reliable, quantitative information to neurologists for following up the evolution of the MS disease [21]. However, the results reported in this study (number of lesions and lesion volume [mm3]) should be further validated in future work on a larger number of subjects, as well as compared with additional studies from the literature. Moreover, lesion texture analysis needs to be incorporated into the proposed system in the form of rule display [24], aiding neurologists to better understand the MS pathogenesis, as shown in [21].