Abstract
Several methods have been proposed to correct motion in medical and non-medical applications, such as optical flow measurements, particle filter tracking, and image registration. In this paper, we designed experiments to test the accuracy and robustness of a recently proposed algorithm for subpixel image registration. In this case, the algorithm is used to correct the relative motion of the object and camera in pairs of images. This algorithm, phase-based Savitzky-Golay gradient-correlation (P-SG-GC), can achieve very high accuracy in finding synthetically applied translational shifts.
Experiments were performed using a camera, a flat object, a manual translational stage, and a manual rotational stage. The P-SG-GC algorithm was used to detect the flat object motion from the initial and shifted images for a set of control points on the surface of the object, which were automatically matched in subimages of 128 pixel × 128 pixel. A least-squares method was used to estimate the image transformation matrix that can register the shifted image to the initial image.
The results demonstrated that the P-SG-GC algorithm can accurately correct for the relative motion of the object and camera for a large range of applied shifts with a registration error less than 1 pixel. Furthermore, the P-SG-GC algorithm could detect the images in which the motion could not be corrected due to poorly matched control points between the initial and shifted images. We conclude that the P-SG-GC algorithm is an accurate and reliable algorithm that can be used to correct for object or camera motion.
1 Introduction
Motion artefacts between pairs of images can arise due to unwanted movements of either the imaging device or the imaging target. Motion artefacts are problematic in many applications. In medical images, breathing and movement of patients can cause distortions and artefacts that can confound diagnosis. Motion artefacts are also an issue in camera-based systems, especially where it is necessary to have a stabilised recording. For example, the use of a hand-held stereoscopic device (such as [1]) to record surface deformations of living skin is challenging, due to movement of the subjects’ limbs or the stereoscopic device. Correcting for such relative motion can improve the analysis of medical images and can increase the accuracy of measurements. For example, human knee cartilage mapping was improved after motion correction [2]. Motion correction has been performed using several methods, such as optical flow [3], particle filter tracking [4], adaptive block motion vectors filtering [5], and, most commonly, image registration [6,7,8,9,10,11]. However, most of the existing methods for subpixel image registration lack accuracy or robustness to large shifts. To address these limitations, a new method for subpixel image registration has been recently proposed by HajiRassouliha et al. [12]. This is a phase-based method that uses Savitzky-Golay differentiators in gradient correlation (P-SG-GC) [12]. The P-SG-GC algorithm can achieve high accuracies in finding synthetically applied shifts. The registration error of this algorithm was shown to be less than 0.0002 pixel, which is 60 times better than state-of-the-art algorithms [12]. Furthermore, the P-SG-GC algorithm is computationally efficient and performs well in low-textured images [12], which makes it suitable for motion correction in real-time applications.
In this study, a variety of manual translational and rotational shifts were applied to a flat object, which was imaged using a camera. The relative shifts between pairs of images were estimated using the P-SG-GC algorithm in a set of control points (centres of subimages of size 128 pixel × 128 pixel) in the initial and shifted images. The displacements of these control points were used to estimate the image transformation matrix using a least-squares method. Each shifted image was registered to the initial image and the registration error was calculated to indicate the accuracy of the P-SG-GC algorithm in correcting the motion.
2 Method
Figure 1 shows the experimental setup used to test the ability of the P-SG-GC algorithm to measure rigid motion. A single monochrome CMOS USB 3 camera (Flea3 FL3-U3-13Y3 M-C, Point Grey, Canada) equipped with a 12.5 mm lens was attached to a photographic stand in a position perpendicular to the surface of a flat target object. The target was attached to a manual linear translational stage, which was itself attached to a manual rotational stage. The combination of the two stages enabled the application of rigid in-plane shifts to the target, and the camera stand enabled moving the camera in the direction perpendicular to the surface of the object (Fig. 1) to provide a scaling effect in the camera images.
The following five experiments (E1 to E5) were designed and performed to test the ability of the P-SG-GC algorithm to detect and measure the relative motion of the target and camera:
E1. Translational shifts of the target: the target was shifted in 1 mm steps to a maximum of 5 mm (i.e. five translational shift values) using the translational stage.

E2. Rotational shifts of the target: the target was rotated in 0.5° steps to a maximum of 3° (i.e. six rotational shift values) using the rotational stage.

E3. Translational and rotational shifts of the target: the target was shifted in 1 mm steps, and at each step was rotated by 0.25°, to a maximum of 5 mm and 1.25° (i.e. five combinations of rotational and translational shift values) using the translational and rotational stages.

E4. Translational shifts of the camera: the camera was shifted in the direction perpendicular to the surface of the target in 2 mm steps to a maximum of 10 mm (i.e. to provide five scaling effects) using the photographic stand handle (Fig. 1).

E5. Translational shifts of the camera and the target: the camera was shifted as described in E4, but in 1 mm steps, and at each step 0.5 mm translational shifts were applied to the target using the translational stage (i.e. to provide five combinations of scaling and translation).
The images of E1 to E5 were divided into subimages of size 128 pixel × 128 pixel to measure localised motion of the object. These subimages were distributed uniformly across the surface of the target with a step increment of 20 pixel. The subimage size determines the maximum shift that the algorithm can identify. The number of steps and the maximum shift values in E1 to E5 were chosen according to the pixel size, the subimage size, and the ability of the P-SG-GC algorithm to estimate that type of motion. Note that this algorithm was developed for registration of images with translational shifts, but here was tested in a wider set of test conditions.
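The tiling described above can be sketched as follows. The paper specifies only the subimage size (128 pixel) and the step increment (20 pixel), so the exact placement in `control_point_grid` is an illustrative reconstruction, assuming every subimage must lie fully inside the image:

```python
import numpy as np

def control_point_grid(img_shape, sub=128, step=20):
    """Centres of sub x sub subimages tiled across an image with the given
    step, such that every subimage lies fully within the image bounds."""
    h, w = img_shape
    half = sub // 2
    ys = np.arange(half, h - half + 1, step)
    xs = np.arange(half, w - half + 1, step)
    return [(int(y), int(x)) for y in ys for x in xs]
```

Each returned (y, x) pair is a control point; the 128 pixel × 128 pixel neighbourhood around it is the subimage passed to the registration algorithm.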
In the first step, the P-SG-GC algorithm was used to measure shifts between all subimages of the initial and shifted images in the x and y directions (\( d_{x} \) and \( d_{y} \)) at each step of E1 to E5. The centres of the subimages in the initial image (\( C_{i} \)) were considered as control points. Thus, the corresponding control points (i.e. centres of subimages) in the shifted image (\( C_{s} \)) are given by Eq. 1.
where \( n \) is the subimage number, and \( N \) is the total number of subimages. The accuracy of the P-SG-GC algorithm in estimating \( d_{x} \) and \( d_{y} \) values for subimages of an image (and consequently \( C_{s} \left( n \right) \)) depends on several factors, including the subimage texture level, the magnitude of subimage shift, and the nature of the shift between the subimages. The integer error metric of the P-SG-GC algorithm (defined in [12]) was used as an indication of the level of confidence in estimating the shift between subimages. The threshold value of the integer error was set to 4 to provide an acceptable accuracy, and the control points with an error less than this threshold were considered precisely matched (i.e. \( \hat{C}_{s} \left( n \right) \) and \( \hat{C}_{i} \left( n \right) \)).
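Eq. 1 is not reproduced above; from the surrounding definitions it presumably reads \( C_{s}(n) = C_{i}(n) + (d_{x}(n), d_{y}(n)) \) for \( n = 1, \ldots, N \). The P-SG-GC shift estimator itself is described in [12] and is not reproduced here; as a rough stand-in, the integer part of the shift between two subimages can be found by plain phase correlation, the family of methods to which P-SG-GC belongs:

```python
import numpy as np

def integer_shift(sub_a, sub_b):
    """Integer-pixel shift of sub_b relative to sub_a via phase correlation.
    A simplified stand-in: P-SG-GC refines such an estimate to subpixel
    accuracy using Savitzky-Golay differentiators (see [12])."""
    # Normalised cross-power spectrum; its inverse FFT peaks at the shift.
    cross = np.conj(np.fft.fft2(sub_a)) * np.fft.fft2(sub_b)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Shifts beyond half the subimage size wrap around to negative values.
    dims = np.array(sub_a.shape, dtype=float)
    peak[peak > dims / 2] -= dims[peak > dims / 2]
    return peak  # (d_y, d_x)
```

The 128 pixel subimage size bounds the detectable wrap-free shift at roughly half the subimage, consistent with the remark above that the subimage size determines the maximum identifiable shift.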
The next step for correcting the motion between the initial and the shifted images is to find a geometric image transformation (T) that registers the images. Equation 2 shows the relation between the control points in the initial and the shifted image that can be used for registration [13].
where T is a 3 × 3 matrix given in Eq. 3 [13], and M is the number of subimages that had an acceptable match (i.e. integer errors less than 4).
In the transformation matrix, T, \( \left[ {\begin{array}{*{20}c} {a_{3} } \\ {a_{6} } \\ \end{array} } \right] \) is the translation vector, \( [a_{7} , a_{8} ] \) is the projection vector, and \( \left[ {\begin{array}{*{20}c} {a_{1} } & {a_{2} } \\ {a_{4} } & {a_{5} } \\ \end{array} } \right] \) defines rotation, scaling, and shearing [13]. The values of these elements identify the type of transformation, i.e. rigid, similarity, affine, projective, or some combination of these transformation types. Considering the applied shifts in E1 to E5, affine and projective image transformations are suitable for the registration of the images in this study. Affine transformations can describe translational, rotational, scaling, and shearing differences between images. However, their projection vector is zero, so they preserve parallelism between lines and cannot correct perspective effects. Projective transformations resolve this by adding a non-zero projection vector to the affine transformation matrix, and are therefore able to correct perspective effects, at the cost of two additional parameters (i.e. \( [a_{7} , a_{8} ] \)). The elements of T (\( [a_{1} , \ldots ,a_{8} ] \)) were estimated using a least-squares method based on control points from the initial and shifted images. The least-squares method was applied only to images in which more than 80% of the total number of control points (\( C_{s} \left( n \right) \) and \( C_{i} \left( n \right) \)) had an integer error less than 4 (\( \hat{C}_{s} \left( n \right) \) and \( \hat{C}_{i} \left( n \right) \)). Otherwise, the registration was flagged as unreliable because too few precisely matched control points were available to find an accurate transformation. This check helps to avoid incorrect registration of the images.
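The least-squares step can be illustrated for the affine case (six unknowns \( a_{1} , \ldots ,a_{6} \); the projective case adds \( a_{7} , a_{8} \) and is usually solved with a homogeneous, DLT-style system instead). This is a sketch of the standard overdetermined linear formulation, not the authors' exact implementation:

```python
import numpy as np

def fit_affine(C_i, C_s):
    """Least-squares affine transform T (3x3, last row [0, 0, 1]) mapping
    initial control points C_i (M, 2) onto shifted control points C_s (M, 2)."""
    M = C_i.shape[0]
    A = np.zeros((2 * M, 6))
    A[0::2, 0:2] = C_i
    A[0::2, 2] = 1.0   # x' = a1*x + a2*y + a3
    A[1::2, 3:5] = C_i
    A[1::2, 5] = 1.0   # y' = a4*x + a5*y + a6
    b = C_s.reshape(-1)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[a[0], a[1], a[2]],
                     [a[3], a[4], a[5]],
                     [0.0,  0.0,  1.0]])
```

With M well-distributed, precisely matched control points (M ≥ 3 for affine, ≥ 4 for projective), the overdetermined system averages out the per-point estimation noise in \( d_{x} \) and \( d_{y} \).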
Two methods were used to assess the accuracy of registration between the initial and shifted images. The first method was a qualitative assessment performed by overlaying the registered and the initial images and subtracting their intensity values. The second method was a quantitative method for estimating the registration error. The 2D subpixel displacements (D) between the control points of the registered and the initial images (\( \hat{C}_{s} \left( n \right) \) and \( \hat{C}_{i} \left( n \right) \)) were estimated using Eq. 4, and were averaged over all the subimages to find the registration error (Eq. 5).
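Eqs. 4 and 5 are not reproduced above; from the description, they presumably compute D(n) as the Euclidean distance between corresponding control points of the registered and initial images, averaged over all matched subimages. A minimal sketch of that reading:

```python
import numpy as np

def registration_error(C_reg, C_init):
    """Mean 2D displacement (in pixels) between matched control points of
    the registered and initial images (a presumed reading of Eqs. 4-5)."""
    D = np.linalg.norm(np.asarray(C_reg) - np.asarray(C_init), axis=1)  # Eq. 4
    return D.mean()                                                     # Eq. 5
```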
The accuracy of motion correction in a camera system is affected by optical distortions of the camera lens, especially radial distortion [14]. Lens distortion causes non-uniform pixel displacements over the image. Thus, the appearance of an object changes when the object is positioned in different locations of a camera image. This effect becomes evident in the estimation of registration error. We reduced this effect by undistorting the images using radial and tangential lens distortion coefficients estimated through a camera calibration process [14].
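The radial and tangential terms follow the Brown-Conrady model used by most camera-calibration toolboxes; the coefficients below (k1, k2, p1, p2) are generic placeholders, not values estimated in this study. The forward distortion in normalised image coordinates is:

```python
import numpy as np

def distort(xn, yn, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to
    normalised image coordinates (Brown-Conrady model). Undistortion
    inverts this mapping, typically by fixed-point iteration."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd
```

Because the distortion grows with the squared radius r2, uncorrected images show larger apparent displacements near the image corners, which would bias the registration error estimate described above.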
3 Results and Discussion
The results of experiments E1 to E5 are presented in Figs. 2, 3, 4, 5 and 6. In each figure, the initial image and the final image for that experiment were overlaid prior to and following correction to provide a qualitative indication of the accuracy of the P-SG-GC algorithm in correcting for the relative motion of the target and camera. The intensity values of the initial and the final images for each experiment were subtracted and were colour-coded with and without motion correction to illustrate the effect of motion correction. The registration error (Eq. 5) was found at each step of experiments E1 to E5, and is provided as a quantitative estimate of the accuracy of the P-SG-GC algorithm. The vertical axis of the graphs in Figs. 2, 3, 4, 5 and 6(e) shows the average displacement of all subimages for that particular applied shift. In the final image of E3, the control points with an integer error less than 4 (\( \hat{C}_{s} \left( n \right) \) and \( \hat{C}_{i} \left( n \right) \)) numbered less than 80% of the total number of control points. The P-SG-GC algorithm could thus detect the insufficiency of control points for correcting the motion in this case. For this reason, the result of the final image of this experiment is not presented in Fig. 4.
Comparison of Figs. 2, 3, 4, 5 and 6(e) illustrates that the affine and projective transformations yielded very similar registration errors, indicating that the perspective effects in the images were not large in E1 to E5. This was also observed in the estimated projective transformation matrices, in which the elements of the projection vectors had very small values.
Overall, the registration errors were less than 1 pixel for all motion correction tests in Figs. 2, 3, 4, 5 and 6(e), except for the fourth image of E3 (Fig. 4), for which the registration error was 1.52 pixel for an average displacement of 45.2 pixel (relative error 0.034).
Comparison of Figs. 2, 3, 4, 5 and 6(e) shows that the P-SG-GC algorithm performed best when correcting translational shifts (i.e. 0.04 pixel error for a 55.3 pixel average shift (relative error = 0.00072)). The largest registration error was for the combination of object translation and rotation (i.e. 1.52 pixel for a 45.2 pixel average shift (relative error = 0.034)). This finding was expected, since the P-SG-GC algorithm was developed for subpixel registration of images with translational shifts by matching subimages of the initial and shifted images. The P-SG-GC algorithm therefore has the most difficulty in matching subimages that are both rotated and translated, as in E3, where the maximum relative error was measured. Although the P-SG-GC algorithm was developed for finding translational shifts, its performance was acceptable for rotational shifts and scaling of the images.
The experiments of this study (E1 to E5) were designed to cover typical scenarios of motion correction problems. Even though the shifts were applied using translational and rotational stages, it is expected that the P-SG-GC algorithm could perform similarly in correcting motion in practical applications.
4 Conclusion
Image registration is the most common technique for correcting for motion in applications that require high accuracy [6,7,8,9,10,11]. In this paper, we tested the capability of a novel algorithm for subpixel image registration (P-SG-GC) using a series of experiments with a camera, a flat object, a translational stage, and a rotational stage. The P-SG-GC algorithm was used to estimate the motion in a series of control points in subimages of size 128 pixel × 128 pixel. Affine and projective transformations were determined using control points from subimages and a least-squares method to register the shifted images to the initial images.
The results showed that the P-SG-GC algorithm could accurately and reliably correct motion for a range of applied shifts. We found that the perspective effects were not large in these types of motions, and that affine transformations were sufficient for registration. The registration error was least for pure translational shifts (i.e. 0.04 pixel error for a 55.3 pixel shift (relative error = 0.00072) in experiment E1), and was largest for the combination of translational and rotational shifts (i.e. 1.52 pixel for a 45.2 pixel average shift (relative error = 0.034) in experiment E3). The P-SG-GC algorithm detected that precisely matched control points were insufficient for correcting the motion for the final image in E3. The results of this study indicate that P-SG-GC may be used as a reliable algorithm for correcting for the relative motion between a camera system and the target object. In particular, the P-SG-GC algorithm could be an accurate algorithm for correcting translational shifts.
In future, the performance of the P-SG-GC algorithm will be compared against competing algorithms for correcting motion in practical applications. In addition, possible approaches to improving the accuracy of the algorithm in rotational tests will be investigated.
References
Hajirassouliha, A., Kmiecik, B., Taberner, A.J., Nash, M.P., Nielsen, P.M.F.: A low-cost, hand-held stereoscopic device for measuring dynamic deformations of skin in vivo. In: 30th International Conference on Image and Vision Computing New Zealand (IVCNZ 2015) (2015)
Bron, E.E., Van Tiel, J., Smit, H., Poot, D.H.J., Niessen, W.J., Krestin, G.P., Weinans, H., Oei, E.H.G., Kotek, G., Klein, S.: Image registration improves human knee cartilage T1 mapping with delayed gadolinium-enhanced MRI of cartilage (dGEMRIC). Eur. Radiol. 23, 246–252 (2013)
Chang, J.-Y., Wen-Feng, H., Cheng, M.-H., Chang, B.-S.: Digital image translational and rotational motion stabilization using optical flow technique. IEEE Trans. Consum. Electron. 48, 108–115 (2002)
Yang, J., Schonfeld, D., Mohamed, M.: Robust video stabilization based on particle filter tracking of projected camera motion. IEEE Trans. Circuits Syst. Video Technol. 19, 945–954 (2009)
Vella, F., Castorina, A., Mancuso, M., Messina, G.: Digital image stabilization by adaptive block motion vectors filtering. IEEE Trans. Consum. Electron. 48, 796–801 (2002)
Crum, W.R., Hartkens, T., Hill, D.L.G.: Non-rigid image registration: theory and practice. Br. J. Radiol. 77, S140–S153 (2004)
Chang, H.-C., Lai, S.-H., Lu, K.-R.: A robust real-time video stabilization algorithm. J. Vis. Commun. Image Represent. 17, 659–673 (2006)
Kumar, S., Azartash, H., Biswas, M., Nguyen, T.: Real-time affine global motion estimation using phase correlation and its application for digital image stabilization. IEEE Trans. Image Process. 20, 3406–3418 (2011)
Erturk, S.: Digital image stabilization with sub-image phase correlation based global motion estimation. IEEE Trans. Consum. Electron. 49, 1320–1325 (2003)
Jenkinson, M., Bannister, P., Brady, M., Smith, S.: Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17(2), 825–841 (2002)
Xue, H., Shah, S., Greiser, A., Guetter, C., Littmann, A., Jolly, M.P., Arai, A.E., Zuehlsdorff, S., Guehring, J., Kellman, P.: Motion correction for myocardial T1 mapping using image registration with synthetic image estimation. Magn. Reson. Med. 67, 1644–1655 (2012)
Hajirassouliha, A., Taberner, A.J., Nash, M.P., Nielsen, P.M.F.: Subpixel phase-based image registration using Savitzky-Golay differentiators in gradient-correlation. IEEE Trans. Image Process. (2016, under review)
Goshtasby, A.: Chapter 9: transformation functions. In: Image Registration. Springer, London (2012)
Wöhler, C.: 3D Computer vision: efficient methods and applications. Part 1, Chap. 1 (2013)
© 2017 Springer International Publishing AG
HajiRassouliha, A., Taberner, A.J., Nash, M.P., Nielsen, P.M.F. (2017). Motion Correction Using Subpixel Image Registration. In: Zuluaga, M., Bhatia, K., Kainz, B., Moghari, M., Pace, D. (eds) Reconstruction, Segmentation, and Analysis of Medical Images. RAMBO HVSMR 2016 2016. Lecture Notes in Computer Science(), vol 10129. Springer, Cham. https://doi.org/10.1007/978-3-319-52280-7_2
Print ISBN: 978-3-319-52279-1. Online ISBN: 978-3-319-52280-7.