
1 Introduction

Digital Subtraction Angiography (DSA) is the gold standard imaging method for vessel visualization in the diagnosis and treatment of neurovascular disorders [3, 22]. A DSA acquisition is a sequence of X-ray images showing the passage of a contrast bolus through the vessels, with background elements removed. One application of DSA is the guidance of catheters for endovascular thrombectomy in the treatment of acute ischemic stroke [10]. In this context, registration of DSA images is useful for providing information on stroke diagnosis, diseased vessel location, treatment planning, and treatment effect. Image registration is an important aspect of this process for comparing serial changes in DSA images or evaluating differences across neurovascular treatments and across patients.

Image co-registration is a process in which two images are mapped onto the same coordinate frame. This process is widely used in the medical field for integrating or comparing image data obtained from different time points, viewing angles, and modalities [16, 24]. Currently, registration in the clinical setting is performed by manual selection of matching features between two DSA images to compute the affine transformation, which is then verified through visual inspection. However, manual image registration is inefficient and can be arduous when image datasets become large. Automatic methods for image co-registration are desired to provide a more efficient approach for analyzing large-scale studies and to provide consistency by removing inter-reader variability [4]. Yet, there has not been an extensive study identifying and comparing registration algorithms for the application of DSA registration.

Algorithms for image registration can be classified generally into feature-based and intensity-based methods [11, 25]. Feature-based methods include feature detection and matching steps, in which corresponding distinctive regions are identified between images. Feature detection algorithms include Speeded-Up Robust Features (SURF) [2], the Scale-Invariant Feature Transform (SIFT) [13], and SIFT-like algorithms [7]. After matching, some algorithms find the optimal transform with RANSAC [9] or MSAC [28]. In intensity-based methods, images are represented with descriptors such as area-based, landmark-based [5], or gradient-based representations [1, 12, 14]. Registration is performed by optimizing a criterion, such as mutual information [26], squared error, or correlation metrics [8]. Optimization is then performed either directly, for example with gradient descent, or heuristically, for example with simplex methods [18]. Other types of registration methods also exist, such as iterative closest point (ICP) for point cloud registration [21] or alignment in the Fourier domain. While there are many algorithms and approaches to registration, there has not yet been an investigation of how these algorithms perform relative to each other for the application of DSA registration.
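To make the feature-based pipeline concrete, the following minimal sketch detects and matches local features and then fits an affine transform robustly. It assumes OpenCV's Python bindings and grayscale image arrays; the choice of detector, ratio-test threshold, and RANSAC settings are illustrative, not those of any method evaluated below.

```python
import cv2
import numpy as np

def feature_based_affine(fixed, moving, ratio=0.75):
    """Estimate a 2x3 affine transform mapping `moving` onto `fixed`
    using SIFT features, ratio-test matching, and RANSAC."""
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(fixed, None)
    kp_m, des_m = sift.detectAndCompute(moving, None)

    # Match descriptors and keep matches that pass Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_m, des_f, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    src = np.float32([kp_m[m.queryIdx].pt for m in good])  # points in `moving`
    dst = np.float32([kp_f[m.trainIdx].pt for m in good])  # points in `fixed`

    # Robustly fit the affine matrix; `inliers` flags the matches RANSAC kept
    T, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return T, inliers
```

The intensity-based counterpart, which optimizes a similarity criterion directly over the transform parameters, is sketched after the method list in Sect. 2.2.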

In this paper, we identify and evaluate the performance of existing registration algorithms that can potentially be used for DSA co-registration. This work will be beneficial to physicians and researchers performing future image analysis on DSA images. This paper is organized as follows: we first describe how we identified existing image co-registration methods in Sect. 2.2. In Sect. 2.3, we then describe the strategy we used to evaluate the performance of the different image co-registration methods. In Sect. 3, we present the registration and evaluation results for each method, and in Sect. 4 we discuss the methods with the best performance. Finally, in Sect. 5 we conclude with a discussion of the influence our research may have on the relevant medical areas.

2 Methods

2.1 Dataset

The image dataset used in this study to evaluate our framework was collected from patients admitted to the UCLA stroke center and diagnosed with symptoms of acute ischemic stroke. The use of this dataset was approved by the local Institutional Review Board (IRB). Inclusion criteria for this study were: (1) a final diagnosis of acute ischemic stroke, (2) a last known well time within six hours at admission, and (3) Digital Subtraction Angiography (DSA) of the brain performed prior to and at completion of a clot retrieval procedure. An important feature of the dataset is therefore that additional vessels may be present in the images acquired after the procedure, due to the recanalization of those vessels. A total of 32 patients satisfied the above criteria and were included in this study. The patients had varying degrees of revascularization success. DSA images had a resolution of \(1024 \times 1024\) pixels.

2.2 Methods Identification

To start our evaluation, we identified available 2D-2D image co-registration methods that can potentially be applied to DSA registration. The search was performed on Google and GitHub with the following keyword combinations: ‘image registration’, ‘2d2d’, and ‘matlab source code’.

The 15 different methods we identified as potential algorithms for registering DSA images are as follows:

1. iat-aba-LKWarp: A gradient-based direct alignment method based upon the Lucas-Kanade algorithm [14], which is a local differential method using optical flow estimation. The optimum transformation T is found by minimizing the sum of squared errors between images as the criterion [1].

2. iat-aba-ECCWarp: A gradient-based direct alignment method which finds the optimum transformation T by maximizing the Enhanced Correlation Coefficient (ECC) between images as a similarity metric [8] (a minimal sketch of this style of direct alignment is given after this list).

3. iat-fba: A feature-based registration method that uses the Speeded Up Robust Features (SURF) algorithm to first detect interest points in the images and represent them with an area descriptor invariant to scale and rotation [2]. These features are matched based on the arc-cosine angle between descriptor vectors to determine correspondences. Registration is achieved by estimating the optimum transform that minimizes the distance between corresponding features under a RANSAC scheme [9].

4. matlab-inner: An intensity-based registration method which represents the extent of image matching with a mutual information similarity metric [17, 26]. Optimization is performed iteratively with a one-plus-one evolutionary algorithm to find the optimum transform [23].

5. MI-Opt: An intensity-based registration algorithm with a mutual information similarity metric and optimization with the Nelder-Mead simplex method [18].

6. lkwq007: An FFT-based registration technique that applies a high-pass filter and represents the spectra in log-polar coordinates. Rotation, scale, and translation parameters are extracted via phase correlation [20].

7. ZeLianWen-sift: A feature-based method using the scale-invariant feature transform (SIFT) to identify and represent features [13]. The features are matched and clustered via Hough transform voting. The optimal transformation is then found via least squares optimization.

8. ZeLianWen-sar-sift: A method using a SIFT-like algorithm robust to speckle noise for feature detection, originally used for synthetic-aperture radar (SAR) images [7, 15]. The features are matched, clustered, and the optimal transform is found via least squares.

9. hlpassion: A feature-based algorithm utilizing Matlab’s image processing tools [17]. Features are detected with the SURF algorithm and matched [2]. The optimal transformation is found using an M-estimator Sample Consensus (MSAC) algorithm.

10. MinhazPalasara: A method originally created for locating possible tumors in mammograms. The algorithm uses the SIFT method to detect features in the images. The features are matched, outliers are identified via the MSAC algorithm, and the optimal transformation is found by aligning the inlier features.

11. tonycao: A method originally created for registration of multi-modal microscopy images using image analogies [5]. Example matches are used to create a dictionary of appearance relations, and a sparse representation model is used to obtain image analogies. Registration is then performed via a least squares method.

12. homography: This method uses SURF to detect features, which are matched via direct linear transformation (DLT). The 2D-2D projective homography is then estimated using RANSAC and Levenberg-Marquardt optimization [6].

13. ImageRegistrationApp: An intensity-based method utilizing Matlab’s image processing toolbox. The optimum transform is found by minimizing the mean squared error with a gradient descent strategy.

14. demon: A non-rigid, fluid-like registration method in which a velocity (or movement) is defined at each pixel using gradient information [19, 27]. The velocity field is used to transform one image onto the other, and the optimal transformation is found with gradient descent [12].

15. ICP: Iterative Closest Point (ICP) is a widely used surface matching algorithm for point clouds. The DSA images are thresholded by Otsu’s method and randomly sampled for points. Registration is performed by assuming that the closest points between the point clouds correspond and minimizing the distances between these point correspondences [21].
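As referenced in method 2 above, the following minimal sketch illustrates this style of gradient-based direct alignment: an affine warp is refined iteratively by maximizing the Enhanced Correlation Coefficient. It assumes OpenCV's Python bindings (version 4.1 or later for the gaussFiltSize argument) and single-channel inputs of matching dtype; the iteration count and convergence tolerance are illustrative values.

```python
import cv2
import numpy as np

def ecc_affine(fixed, moving, iterations=200, eps=1e-6):
    """Directly align `moving` to `fixed` by maximizing the Enhanced
    Correlation Coefficient over a 2x3 affine warp.
    `fixed` and `moving` must share dtype (uint8 or float32)."""
    warp = np.eye(2, 3, dtype=np.float32)            # identity initialization
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT,
                iterations, eps)
    # findTransformECC refines `warp` in place and returns the final ECC value
    ecc, warp = cv2.findTransformECC(fixed, moving, warp,
                                     cv2.MOTION_AFFINE, criteria, None, 5)
    aligned = cv2.warpAffine(moving, warp, fixed.shape[::-1],
                             flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return warp, aligned
```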

Fig. 1. Example of Registration. An example of a well-registered image pair (a) with low circle overlap error (b) on the left. The right shows an example of failed registration (c) with high circle overlap error (d).

2.3 Methods Evaluation

We evaluate the co-registration algorithms on 32 DSA image pairs. As a pre-processing step, we first extract vessels from each image with a Frangi filter.
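As a rough illustration of this pre-processing step (assuming scikit-image is available; the filter scales and the global threshold shown here are placeholder values, not the settings used in the study), a binary vessel map can be obtained as follows.

```python
import numpy as np
from skimage.filters import frangi

def extract_vessels(dsa_frame, sigmas=range(1, 8, 2), threshold=0.05):
    """Enhance tubular structures with a Frangi filter and binarize.
    `dsa_frame` is a 2D array; dark vessels on a bright background are
    assumed, as in subtracted angiograms (hence black_ridges=True)."""
    img = dsa_frame.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)   # normalize to [0, 1]
    vesselness = frangi(img, sigmas=sigmas, black_ridges=True)
    return vesselness > threshold                     # binary vessel mask
```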

We then compute groundtruth registration parameters for each image pair by manually selecting six corresponding points in each image pair and computing the affine transformation matrix as our groundtruth. Each reference point was placed at established anatomical locations by a neurologist who was blinded to any other information in the study. This groundtruth transformation is compared to those obtained from the co-registration algorithms identified above in order to evaluate algorithm performance.
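For concreteness, the affine matrix implied by a set of manually selected correspondences can be recovered by linear least squares, as in the sketch below. The function and variable names are ours; the only study-specific detail carried over is that six point pairs were selected per image pair.

```python
import numpy as np

def affine_from_points(src, dst):
    """Least-squares affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of matching points, N >= 3 (six were used here)."""
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous (N, 3)
    # Solve src_h @ X = dst for the 3x2 parameter block X
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    T = np.eye(3)
    T[:2, :] = X.T                                   # rows [a b tx; c d ty]
    return T
```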

Circle Overlap Error. To evaluate algorithm accuracy, we define a circle overlap error metric. For each image pair, we have a groundtruth affine transformation \(T_{groundtruth}\) and an estimated affine transformation \(T_{estimated}\) from each registration method. We apply the inverses of both the groundtruth and estimated transformations to a unit circle C. This results in two elliptical regions that may or may not overlap.

The calculation for circle overlap error is represented in the following equation:

$$\begin{aligned} \text {Circle Overlap Error} = 1 - \frac{|M\cap F|}{|M\cup F|} \end{aligned}$$
(1)

where \(M = T^{-1}_{estimated}(C)\) is the elliptical region generated by applying the inverse estimated transform to the unit circle and \(F = T^{-1}_{groundtruth}(C)\) is the elliptical region generated by applying the inverse groundtruth transform to the unit circle. Note that a greater overlap between the elliptical regions, as in Fig. 1, corresponds to a more accurate estimated transformation and results in a smaller circle overlap error.
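One way to evaluate this metric numerically is to rasterize both elliptical regions on a common grid: a point p lies in \(T^{-1}(C)\) exactly when T(p) falls inside the unit circle. The sketch below follows this idea; the grid resolution is an arbitrary choice and the bounding box is derived from the mapped circle boundaries, neither of which is specified in the paper.

```python
import numpy as np

def circle_overlap_error(T_est, T_gt, n=800):
    """Rasterized circle overlap error between M = T_est^{-1}(C) and
    F = T_gt^{-1}(C), where C is the unit circle (Eq. 1).
    T_est, T_gt: 3x3 affine matrices."""
    # Sample the unit circle boundary and map it through both inverses to
    # obtain a bounding box that contains both elliptical regions.
    t = np.linspace(0.0, 2.0 * np.pi, 256)
    circle = np.stack([np.cos(t), np.sin(t), np.ones_like(t)])   # 3 x 256
    boundary = np.hstack([np.linalg.inv(T_est) @ circle,
                          np.linalg.inv(T_gt) @ circle])
    lo, hi = boundary[:2].min(axis=1), boundary[:2].max(axis=1)

    xs = np.linspace(lo[0], hi[0], n)
    ys = np.linspace(lo[1], hi[1], n)
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X.ravel(), Y.ravel(), np.ones(X.size)])      # 3 x n^2

    def inside(T):                 # p is in T^{-1}(C)  <=>  T(p) is in C
        q = T @ pts
        return q[0] ** 2 + q[1] ** 2 <= 1.0

    M, F = inside(T_est), inside(T_gt)
    union = np.logical_or(M, F).sum()
    inter = np.logical_and(M, F).sum()
    return 1.0 - inter / union if union else 1.0
```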

Parameter Error Calculation. Since the circle overlap error saturates at a value of 1 when there is no overlap between the elliptical regions, we introduce a second error metric, the parameter error. Parameter error is computed from the difference between the transformation parameters of the groundtruth transformation and the estimated transformation. The total parameter error is the Euclidean error between all of the groundtruth and estimated transform parameters. The affine transformation can be represented as a matrix

$$\begin{aligned} T = \begin{pmatrix} a & b & t_{x}\\ c & d & t_{y}\\ 0 & 0 & 1 \end{pmatrix} \quad \text { such that } \quad \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = T \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} \end{aligned}$$
(2)

where \((x, y, 1)\) are points in the original image and \((x',y',1)\) are the transformed points.

We can decompose the sub-matrix \(A = \bigl (\begin{smallmatrix} a & b\\ c & d \end{smallmatrix}\bigr )\) as follows:

$$\begin{aligned} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} \cos {\theta } & -\sin {\theta }\\ \sin {\theta } & \cos {\theta } \end{pmatrix} \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix} \begin{pmatrix} 1 & m \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} s_x \cos {\theta } & m s_x \cos {\theta } - s_y \sin {\theta }\\ s_x \sin {\theta } & m s_x \sin {\theta } + s_y \cos {\theta } \end{pmatrix} \end{aligned}$$

Note that \(\det {(A)} = ad-bc = s_x s_y\).

Table 1. Algorithm Registration Absolute Errors. The mean absolute error and standard deviation values for parameter error and circle overlap error are listed. Underlined values emphasize the 3 lowest errors for each category.

We can then compute 6 transformation parameters from T:

$$\begin{aligned} \begin{aligned} \text {Translation along x-axis and y-axis}&: t_x \text { and } t_y\\ \text {Rotation}&: \theta = \operatorname {arctan2}(c, a) \\ \text {Scale along x-axis}&: s_x = \frac{a}{\cos \theta } \text { or } \frac{c}{\sin \theta } \\ \text {Scale along y-axis}&: s_y = \frac{\det (A)}{s_x} \\ \text {Shear}&: m = \frac{b+s_y\sin \theta }{a} = \frac{d-s_y\cos \theta }{c} \end{aligned} \end{aligned}$$
(3)
Fig. 2. Parameter Error. Parameter error plots for the 6 registration parameters. Box plot lines represent the median and interquartile range of the errors over the 32 DSA pairs for each algorithm, with outliers outside the whiskers.

Fig. 3. Circle Overlap Error. Box plot lines represent the median and interquartile range of the circle overlap errors over the 32 DSA registration pairs for each algorithm. Outliers lie outside the whiskers. A lower circle overlap error represents a better registration result.

From this decomposition, we obtain the parameter vector \(\varvec{p} = (t_x, t_y, s_x, s_y, m, \theta )\) for both the groundtruth and estimated transformations. The parameter error vector is then obtained by

$$\begin{aligned} \varvec{p}_{err} = \varvec{p}_{estimated} - \varvec{p}_{groundtruth} \end{aligned}$$
(4)

For each method, we calculate its circle overlap error and parameter error for the 32 image pairs and use the sample mean for evaluation.
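The decomposition of Eq. (3) and the error vector of Eq. (4) translate directly into code. The sketch below is ours; it assumes 3x3 affine matrices in the form of Eq. (2) with \(s_x > 0\) and \(a \ne 0\).

```python
import numpy as np

def affine_parameters(T):
    """Decompose a 3x3 affine matrix into (tx, ty, sx, sy, m, theta), Eq. (3).
    theta is returned in radians."""
    a, b, tx = T[0]
    c, d, ty = T[1]
    theta = np.arctan2(c, a)            # rotation
    sx = np.hypot(a, c)                 # a / cos(theta) (= c / sin(theta))
    sy = (a * d - b * c) / sx           # det(A) / sx
    m = (b + sy * np.sin(theta)) / a    # shear
    return np.array([tx, ty, sx, sy, m, theta])

def parameter_error(T_est, T_gt):
    """Parameter error vector p_estimated - p_groundtruth, Eq. (4)."""
    return affine_parameters(T_est) - affine_parameters(T_gt)
```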

3 Results

The mean absolute error and standard deviation of the error metrics for each algorithm over the 32 DSA image pairs are shown in Table 1. For each error category, the three lowest values are identified. Registration with algorithms 10 and 8 resulted in the lowest mean circle overlap errors. Registration with algorithms 1 and 5 resulted in low absolute mean errors across multiple parameters.

The distribution of the error metrics for every algorithm is shown in Figs. 2 and 3, where the boxes show the quartiles of the data and are less sensitive to outliers. Algorithms 1, 2, 5, 10, and 15 have relatively low median errors and interquartile ranges across all six parameters. Algorithms 10 and 8 have the lowest median circle overlap errors. While algorithms 6, 7, and 8 have multiple measures with relatively low absolute mean error, they also have relatively high median errors and interquartile ranges of error.

Figure 4 shows sample results from some of the higher-performing algorithms.

Fig. 4. Sample Results from Highest Performing Algorithms. Example registration results from 5 algorithms are shown. Pink vessels represent the pre-treatment DSA images, and green vessels represent the post-treatment DSA images. A circle overlap error visualization is shown under each registration result. (Color figure online)

4 Discussion

The results presented in the previous section illustrate that algorithms 1 and 5 perform with low absolute mean and median errors, as well as a low interquartile range of error, suggesting consistent performance. Algorithms 2 and 15 also demonstrate low median errors and interquartile ranges of error. These methods are area-based alignment methods that use intensity or gradient information to perform registration. These intensity-based algorithms perform consistently, as reflected in their low interquartile ranges and low standard deviations.

Algorithms 8 and 10 perform accurately, as they result in low circle overlap errors and low median parameter errors, but they are also less stable, with more outliers in their error values. It should be noted that these outliers represent large registration failures and negatively impact the performance of these algorithms. Both algorithms apply feature-based methods to perform registration. These feature-based methods may be relatively more accurate, as indicated by the lower circle overlap errors and low median errors, but they are also less consistent in performance, as indicated by their high standard deviations and higher number of outliers.

Algorithms 1 and 2 are area-based direct alignment algorithms relying on the brightness constancy assumption:

$$\begin{aligned} I_1(x,y)=I_2(T(x,y))\end{aligned}$$
(5)

where T(x, y) warps the coordinates of \(I_{2}\)’s plane. These methods use image pair differences to estimate the transformation that brings the images together, optimizing a criterion computed from \(I_{1}(x,y)\) and \(I_{2}(T(x,y))\) to find the optimal transformation T. For algorithms 1 and 2, the criteria are the sum of squared errors and the Enhanced Correlation Coefficient, respectively. Algorithm 5 performs registration by optimizing the mutual information metric, a similarity metric computed from the intensity distributions of the image pair. Algorithm 15 uses intensity information to threshold each pixel and extract a point cloud, which is then matched with ICP. These intensity- and gradient-based methods likely work by incorporating information across an area or neighborhood of pixels to perform registration. Incorporating a more global approach may aid in successful registration from a larger range of starting positions, but may also lose accuracy due to loss of spatial resolution.

For algorithms 8 and 10, a SIFT feature descriptor is used to represent interesting regions of high contrast. These feature descriptors are invariant to scaling and orientation, but may not be invariant to varying ratios between contrast and background intensity or to transformations that change angle magnitudes. Therefore, when there are multiple correct matches between feature descriptors, alignment of the matches may result in a more accurate registration. But when the features between the images do not match well, a wrong set of matches can produce a completely wrong transformation. This likely contributes to the high variability of success that we see in the data.

5 Conclusion

In this paper, we identified existing image registration methods that can be applied to 2D DSA image registration and evaluated their performance using two accuracy measures: parameter error and circle overlap error. We find that gradient- and intensity-based registration methods perform consistently with low error, while feature-based registration methods perform more accurately but with more variability in success. Ultimately, an optimal DSA registration algorithm could incorporate elements of both intensity-based and feature-based approaches to achieve an optimal registration success rate and accuracy.