Abstract
This paper first analyses state-of-the-art corner detection algorithms and then proposes a novel corner detection approach based on a maximum point-to-chord distance. The proposed corner detector consists of three steps. First, the curves of the original image are extracted using the Canny edge detector. Second, the maximum point-to-chord distance method is applied to each curve to obtain the initial corner points. Third, non-maximum suppression and a threshold are used to remove corner points with low curvature and produce the final result. Different from the CPDA (chord-to-point distance accumulation) corner detector, our proposed detector neither accumulates the distances from a moving chord nor computes the accumulation for every point on a curve, and therefore achieves better speed while keeping good average repeatability and accuracy. Compared with existing methods, the proposed detector attains better performance in average repeatability and localization error under affine transforms, JPEG compression and Gaussian noise.
1 Introduction
Corner points in images carry critical information for describing object features and play a crucial and irreplaceable role in computer vision and image processing. Many computer vision tasks rely on the successful detection of corner points, including image matching, object recognition, object tracking, image retrieval and 3-D reconstruction [1,2,3,4]. Feature tracking is also a fundamental problem in image processing research; for the tracking problem, a set of efficient algorithms [5,6,7,8,9,10] has been proposed for tracking salient objects in images and videos.
However, there is still no strict mathematical definition of a corner. In the past decades, a substantial number of promising corner detection methods based upon different corner definitions have been proposed by vision researchers. The existing corner detection methods can be broadly classified into two classes: intensity-based [12,13,14,15,16,17,18,19] and contour-based methods [21,22,23,24,25,26,27]. Both categories of methods have their strengths and weaknesses, which makes corner detection a research hotspot in the fields of computer vision and image processing.
This paper is organized as follows. Section 2 gives a systematic review of state-of-the-art corner detection methods. Section 3 presents the new corner detector with a detailed flowchart. Section 4 compares the proposed corner detector with other popular detectors in terms of repeatability and localization accuracy under affine transforms, JPEG compression and Gaussian noise. Finally, a conclusion is given.
2 Literature Survey
This section presents a review of the existing literature on corner detection methods. In the literature, the terms “point feature”, “dominant point”, “critical point” and “corner” are taken as equivalent. However, the terms “interest point” and “salient point” include not only “corner”, but also junctions and blobs, as well as significant texture variation [11].
2.1 Intensity-Based Methods
The key to intensity-based corner detection is to extract gray-level variation and structural information. Moravec [12] considered corners as points which are not self-similar in an image. Harris and Stephens [13] presented an operator that modifies Moravec's interest operator, using first-order derivatives to approximate the second derivatives. Lowe [14] proposed the scale invariant feature transform (SIFT), which combines a scale invariant region detector with a descriptor based on the gradient distribution in the detected region. Bay et al. [15] presented the SURF detector, which locates feature points where the determinant of the Hessian reaches its maximum; its low complexity is enabled by employing box filters and integral images. Leutenegger et al. [16] proposed the BRISK detector, a method for keypoint detection, description and matching. Later, the KAZE detector [17] finds local extrema by diffusion filtering, which provides multi-scale spaces and preserves natural image boundaries. Ramakrishnan et al. [18] introduced a technique to accelerate Harris corner detection, which uses simple approximations to quickly prune away non-corners. Wang et al. [19] implemented an adaptive Harris corner detection algorithm based on an iterative threshold, an improvement of the Harris corner detection algorithm.
2.2 Contour-Based Methods
Contour-based methods first obtain an image's planar curves with an edge detector (e.g., the Canny edge detector [20]) and then analyze the shape of the contours to detect corners. The points of local curvature maxima, line intersections or rapid changes in edge direction are marked as corners. Kitchen and Rosenfeld [21] developed a corner measure based on the change of gradient direction along an edge contour multiplied by the local gradient magnitude; for image intensity \( I \) with first-order partial derivatives \( I_{x} ,I_{y} \) and second-order partial derivatives \( I_{xx} ,I_{xy} ,I_{yy} \), the measure is as follows:

$$ K = \frac{{I_{xx} I_{y}^{2} - 2I_{xy} I_{x} I_{y} + I_{yy} I_{x}^{2} }}{{I_{x}^{2} + I_{y}^{2} }} $$
Later, Mokhtarian and Suomela [22] proposed a curvature scale space (CSS) corner detector. For a given parametric vector equation of a planar curve \( \Gamma (u) = \{ x(u),y(u)\} \), the curvature at scale \( \sigma \) is defined as

$$ \kappa (u,\sigma ) = \frac{{\dot{X}(u,\sigma )\ddot{Y}(u,\sigma ) - \ddot{X}(u,\sigma )\dot{Y}(u,\sigma )}}{{\left[ {\dot{X}(u,\sigma )^{2} + \dot{Y}(u,\sigma )^{2} } \right]^{3/2} }} $$

where

$$ \dot{X}(u,\sigma ) = x(u) \otimes \dot{g}(u,\sigma ),\;\;\ddot{X}(u,\sigma ) = x(u) \otimes \ddot{g}(u,\sigma ), $$

$$ \dot{Y}(u,\sigma ) = y(u) \otimes \dot{g}(u,\sigma ),\;\;\ddot{Y}(u,\sigma ) = y(u) \otimes \ddot{g}(u,\sigma ). $$

Here, \( \otimes \) is the convolution operator, \( \sigma \) is the scale factor, and \( \dot{g}(u,\sigma ) \) and \( \ddot{g}(u,\sigma ) \) are the first- and second-order derivatives of the Gaussian \( g(u,\sigma ) \), respectively. To improve corner localization and noise suppression, an enhanced CSS algorithm [23] was proposed that uses different CSS scales for contours of different lengths. He and Yung [24] used an adaptive curvature threshold in a dynamic region of support to judge corners. The chord-to-point distance accumulation technique [25] has been applied to compute curvature and detect corners. Zhang and Shui [26] presented a contour-based corner detector using the angle difference of the principal directions of anisotropic Gaussian directional derivatives (ANDDs) on contours. Lin et al. [27] introduced two novel corner detectors that measure the response of contour points using the Manhattan and Euclidean distances.
3 Proposed Corner Detector
In this section, we give a new corner detection method using a maximum point-to-chord distance. Like most contour-based methods, our proposed corner detector first uses the Canny edge detector to extract the image's planar curves. Then the maximum point-to-chord distance algorithm is applied to each curve to obtain the initial corner points. Next, the curvature at each initial corner point is computed. Finally, non-maximum suppression and a threshold are used to remove weak corner points with low curvature, yielding the final corner points.
3.1 Planar Curves Extraction
The Canny edge detector is one of the most widely used edge detectors in contour-based corner detection and has become a standard gauge in edge detection. A pixel is marked as an edge pixel if the gradient magnitudes of the pixels on either side of it along the gradient direction are lower than its own. However, the output contours may have small gaps, and these gaps may contain corners. The small gaps form for two main reasons. First, the gradient magnitudes around junctions become very small, which results in the exclusion of junctions from the edge map. Second, at some branching edges the gradient magnitudes are not small, but the pixel is not the maximum along the gradient direction and is therefore discarded by non-maximum suppression. Therefore, filling the small gaps between contours before corner detection is necessary to avoid the loss of corners.
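The paper does not specify how the gaps are filled. One simple heuristic, sketched below under the assumption that extracted contours are available as point sequences, merges open curves whose endpoints lie within a few pixels of each other; the function name and the `max_gap` default are hypothetical choices for illustration.

```python
import numpy as np

def fill_small_gaps(curves, max_gap=3.0):
    """Merge open curves whose endpoints are closer than max_gap pixels.

    curves: list of (N, 2) float arrays of contour points in sequence.
    Repeatedly joins the first close endpoint pair found, reorienting the
    curves so the joined points are adjacent, until no small gap remains.
    """
    curves = [np.asarray(c, dtype=float) for c in curves]
    merged = True
    while merged and len(curves) > 1:
        merged = False
        for i in range(len(curves)):
            for j in range(i + 1, len(curves)):
                # Try all four endpoint pairings between curve i and curve j.
                ends_i = {0: curves[i][0], -1: curves[i][-1]}
                ends_j = {0: curves[j][0], -1: curves[j][-1]}
                for ei, pi in ends_i.items():
                    for ej, pj in ends_j.items():
                        if np.linalg.norm(pi - pj) <= max_gap:
                            a = curves[i] if ei == -1 else curves[i][::-1]
                            b = curves[j] if ej == 0 else curves[j][::-1]
                            curves[i] = np.vstack([a, b])
                            del curves[j]
                            merged = True
                            break
                    if merged:
                        break
                if merged:
                    break
            if merged:
                break
    return curves
```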
3.2 The Maximum Point-to-Chord Distance
After extracting the planar curves as described in Sect. 3.1, we use the maximum point-to-chord distance method to select corner points on each curve. The detailed algorithm is outlined as follows:
1. Let \( C \) be a set of \( N \) discrete points \( P_{1} \) to \( P_{N} \) that compose a curve in sequence, \( C = \left\{ {P_{1} ,P_{2} ,P_{3} , \ldots ,P_{N} } \right\} \).

2. Connect \( P_{1} \) and \( P_{N} \) with a line to obtain the chord \( L_{1,N} \).

3. Measure the perpendicular distance from every point of the curve \( C \) to the chord \( L_{1,N} \), denoted \( D = \left\{ {D_{1,L} ,D_{2,L} ,D_{3,L} , \ldots ,D_{N,L} } \right\} \).

4. Find the maximal distance \( D_{max} \) in \( D \) and the corresponding point \( P_{max} \).

5. If \( D_{max} \) exceeds a threshold \( T_{min} \), mark \( P_{max} \) as a corner point and divide the curve \( C \) into two curves \( C_{1} \) and \( C_{2} \) at \( P_{max} \).

6. Repeat steps 2–6 for \( C_{1} \) and \( C_{2} \), until the maximal distance \( D_{max} \) falls below the threshold \( T_{min} \) (Fig. 1).
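The recursion in these steps is closely related to Ramer–Douglas–Peucker polyline simplification. The sketch below is a minimal NumPy rendering of the steps, not the authors' implementation; the function names are ours, and the default \( T_{min} = 6 \) follows the parameter study in Sect. 4.3.

```python
import numpy as np

def point_to_chord_dist(points, a, b):
    """Perpendicular distance of each point to the chord through a and b."""
    chord = b - a
    n = np.linalg.norm(chord)
    if n == 0:  # degenerate chord (e.g., a closed curve's coincident ends)
        return np.linalg.norm(points - a, axis=1)
    diff = points - a
    # |cross(b - a, p - a)| / |b - a| is the 2-D point-to-line distance.
    return np.abs(chord[0] * diff[:, 1] - chord[1] * diff[:, 0]) / n

def max_chord_corners(curve, t_min=6.0):
    """Initial corner candidates via recursive max point-to-chord distance.

    curve: (N, 2) array of contour points in sequence.
    Returns the sorted indices of the recursion's split points (steps 2-6).
    """
    curve = np.asarray(curve, dtype=float)
    corners = []
    stack = [(0, len(curve) - 1)]
    while stack:
        lo, hi = stack.pop()
        if hi - lo < 2:               # no interior points left to test
            continue
        seg = curve[lo + 1:hi]
        d = point_to_chord_dist(seg, curve[lo], curve[hi])
        k = int(np.argmax(d))
        if d[k] > t_min:              # step 5: mark P_max and split the curve
            idx = lo + 1 + k
            corners.append(idx)
            stack.append((lo, idx))
            stack.append((idx, hi))
    return sorted(corners)
```

On an L-shaped polyline the recursion marks exactly the bend point, while a straight curve yields no candidates, since every point-to-chord distance is zero.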
3.3 False Corner Removal
After the maximum point-to-chord distance algorithm presented in Sect. 3.2, we obtain a series of initial corner points. Although the threshold \( T_{min} \) prevents most weak corners from being selected, the algorithm may still occasionally choose a weak corner as an output corner point. These false corners share a common characteristic: they are located on flat curves and have low curvature values. Thus, by removing initial corners with low curvature, the false corners can be eliminated. After the curvature of each initial corner point has been computed, a non-maximum suppression algorithm and a threshold are used to suppress corner points that have small curvature or are too close to other corner points. Figure 2 shows the false corner removal result.
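A minimal sketch of this removal step follows, assuming \( |\kappa| \) has already been computed at every point of the contour. The paper fixes the curvature threshold and suppression radius only by experiment, so the `min_curvature` and `radius` values here are hypothetical placeholders.

```python
import numpy as np

def remove_false_corners(candidates, curvature, min_curvature=0.05, radius=5):
    """Keep candidates whose curvature is large and a local maximum.

    candidates: indices of initial corners along one contour.
    curvature:  |kappa| evaluated at every point of that contour.
    A candidate survives if its curvature exceeds min_curvature and is the
    maximum within +/- radius points (non-maximum suppression), which also
    discards candidates crowded next to a stronger corner.
    """
    kept = []
    for idx in candidates:
        lo = max(0, idx - radius)
        hi = min(len(curvature), idx + radius + 1)
        if (curvature[idx] >= min_curvature
                and curvature[idx] >= np.max(curvature[lo:hi])):
            kept.append(idx)
    return kept
```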
4 Experimental Results and Performance Evaluation
In this section, we focus on experiments and performance evaluation. The proposed detector is compared with four popular detectors (Harris [13], BRISK [16], He and Yung [24] and CPDA [25]). Average repeatability and localization error are used to evaluate all detectors, including our proposed detector, with no manual intervention. The evaluation program can be run on a database of any size and applies basic transformations such as rotation, scaling, shear, JPEG quality compression and Gaussian noise. Each image in the input database undergoes these transformations, and the average repeatability and localization error are computed for each detector. Finally, the average repeatability and localization error curves are drawn to give a visual performance comparison of the detectors.
4.1 Database and Transformation
As can be seen from Fig. 3, fifteen images collected from a standard evaluation dataset [29] are used to evaluate all detectors, including our proposed detector.
Each image from the dataset is transformed by the following six types of transformations:

1. Rotations: rotate from −90° to 90° in 10° increments.

2. Uniform scaling: scale factors \( s_{x} = s_{y} \) in 0.1 increments from 0.5 to 2.0.

3. Non-uniform scaling: scale factor \( s_{x} = 1 \) and \( s_{y} \) in 0.1 increments from 0.5 to 2.0.

4. Shear transforms: shear factor \( c \) in 0.1 increments from −1.0 to 1.0, where

$$ \left[ {\begin{array}{*{20}c} {x^{\prime } } \\ {y^{\prime } } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 1 & c \\ 0 & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} x \\ y \\ \end{array} } \right] $$

5. JPEG quality compression: JPEG quality factor in 5% increments from 5% to 100%.

6. Gaussian noise: zero-mean white Gaussian noise with standard deviation from 1 to 15 in increments of 1.
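The geometric transformations in items 1–4 are plain 2 × 2 linear maps on pixel coordinates. As a sketch (JPEG compression and additive noise act on the image itself and are not shown), the paper's shear matrix and a rotation can be applied to corner coordinates like this; the function names are ours.

```python
import numpy as np

def shear_points(points, c):
    """Apply the shear transform [[1, c], [0, 1]] to an (N, 2) point array."""
    S = np.array([[1.0, c], [0.0, 1.0]])
    return points @ S.T          # x' = x + c*y, y' = y

def rotate_points(points, degrees):
    """Rotate (N, 2) points counterclockwise about the origin."""
    t = np.radians(degrees)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return points @ R.T
```

For example, shearing the point \( (2, 3) \) with \( c = 0.5 \) gives \( (3.5, 3) \), and rotating \( (1, 0) \) by 90° gives \( (0, 1) \).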
4.2 Evaluation Criterion
We employ the performance evaluation metrics used in [28]. The average repeatability and localization error reflect the robustness and consistency of the detectors under the different transformations introduced in Sect. 4.1.
The average repeatability \( R_{avg} \) measures the average proportion of corner points detected at the same positions in the original and transformed images. It is defined as

$$ R_{avg} = \frac{1}{2}\left( {\frac{{N_{r} }}{{N_{o} }} + \frac{{N_{r} }}{{N_{t} }}} \right) \times 100\% $$

where \( N_{o} \) and \( N_{t} \) denote the numbers of interest points detected in the original and transformed images respectively, and \( N_{r} \) is the number of repeated interest points between them. Let \( p_{i} \) be a corner point detected in the original image and \( q_{j} \) a corner point detected in the corresponding geometrically transformed image; \( p_{i} \) is counted as repeated if, after applying the transformation, it falls within a small neighborhood of some \( q_{j} \).
The localization error \( L_{e} \) is defined as the average distance between the corner points detected in the original images and those detected in the transformed images:

$$ L_{e} = \sqrt {\frac{1}{{N_{r} }}\sum\limits_{i = 1}^{{N_{r} }} {\left[ {(x_{oi} - x_{ti} )^{2} + (y_{oi} - y_{ti} )^{2} } \right]} } $$

where \( (x_{oi} ,y_{oi} ) \) and \( (x_{ti} ,y_{ti} ) \) are the locations of repeated corner \( i \) in the original and transformed images respectively.
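Both metrics can be sketched as follows. The matching radius `tol` and the greedy nearest-neighbor pairing are simplifying assumptions of this sketch; the evaluation in [28] likewise matches corners within a small fixed neighborhood, but its exact pairing procedure is not reproduced here.

```python
import numpy as np

def evaluate(orig, trans, transform, tol=4.0):
    """Average repeatability and localization error for two corner sets.

    orig:      (No, 2) corners detected in the original image.
    trans:     (Nt, 2) corners detected in the transformed image.
    transform: maps original coordinates into the transformed image.
    A corner counts as repeated if its mapped position lies within tol
    pixels of some detected corner in the transformed image.
    """
    mapped = np.array([transform(p) for p in orig], dtype=float)
    # Pairwise distances (No, Nt), then nearest match per original corner.
    dists = np.linalg.norm(mapped[:, None, :] - trans[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    repeated = nearest <= tol
    n_r = int(repeated.sum())
    r_avg = 0.5 * (n_r / len(orig) + n_r / len(trans))
    # RMS distance over repeated corners, matching the L_e definition.
    l_e = float(np.sqrt(np.mean(nearest[repeated] ** 2))) if n_r else float('inf')
    return r_avg, l_e
```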
4.3 Summary of the Proposed Parameter Setting
In this subsection, we summarize the parameter settings of the proposed detector. The threshold \( T_{min} \) and the non-maximum suppression threshold were decided by experimentation. Figure 4 shows the effect of the point-to-chord distance threshold \( T_{min} \) on the proposed corner detector. When it was set small, the average repeatability was relatively high, but the localization accuracy was quite low. When it was increased above 6, both the average repeatability and the localization error remained stable. Therefore, we chose \( T_{min} = 6 \) as the default for the detector. Figure 5 shows that the average repeatability did not change much as the non-maximum suppression threshold varied; thus, we selected the value that produced the least localization error as the default.
4.4 Comparative Results
In this section, a comparison of the average repeatability and localization error between the proposed detector and four other detectors (Harris [13], BRISK [16], He and Yung [24] and CPDA [25]) is presented.
The results of the average repeatability and localization error under the six transformations are shown in Fig. 6. In general, the corner detectors achieved their highest average repeatability under JPEG quality compression and their worst localization error under shear transformation. The proposed and CPDA corner detectors performed better than the other detectors under geometric transformations. Under JPEG quality compression and Gaussian noise, the proposed method achieves higher average repeatability and lower localization error than the other detectors. The experimental results show that the proposed detector attains better overall performance.
5 Conclusion
This paper proposed a new robust corner detection algorithm based on a maximum point-to-chord distance. Like most contour-based corner detectors, its first step is to extract the edge map of the original image and then extract edge contours from it. Compared with existing corner detection algorithms based on curvature calculation, the proposed algorithm does not need to calculate first- and second-order derivatives; it thus effectively avoids the calculation error caused by local variation and is very robust to noise. The experimental results show that the proposed corner detector performs better than the compared classical detectors in terms of robustness. Future work may further improve its detection performance and apply it to a wider range of computer vision tasks.
References
Zhu, J., Wu, S.: Multi-image matching for object recognition. IET Comput. Vis. 12(3), 350–356 (2018)
Yan, Y.: Cognitive fusion of thermal and visible imagery for effective detection and tracking of pedestrians in videos. Cognit. Comput. 10(1), 94–104 (2018)
Zhou, Y.: Hierarchical visual perception and two-dimensional compressive sensing for effective content-based color image retrieval. Cognit. Comput. 8(5), 877–889 (2016)
Bi, Y.X., Wei, S.M.: 3D reconstruction of high-speed moving targets based on HRR measurements. IET Radar Sonar Navig. 11(5), 778–787 (2017)
Ren, J.: Real-time modeling of 3-D soccer ball trajectories from multiple fixed cameras. IEEE Trans. Circuits Syst. Video Technol. 18(3), 350–362 (2008)
Ren, J.: Tracking the soccer ball using multiple fixed cameras. Comput. Vis. Image Underst. 113(5), 633–642 (2009)
Ren, J.: Multi-camera video surveillance for real-time analysis and reconstruction of soccer games. Mach. Vis. Appl. 21(6), 855–863 (2010)
Han, J.: Object detection in optical remote sensing images based on weakly supervised learning and high-level feature learning. IEEE Trans. Geosci. Remote Sens. 53(6), 3325–3337 (2015)
Liu, Q.: Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset. J. Electron. Imaging 26(6), 025–063 (2017)
Wang, Z.: A deep-learning based feature hybrid framework for spatiotemporal saliency detection inside videos. Neurocomputing 287, 68–83 (2018)
Mikolajczyk, K., Schmid, C.: Indexing based on scale invariant interest points. In: Proceedings of Eighth International Conference on Computer Vision, pp. 525–531 (2001)
Moravec, H. P.: Towards automatic visual obstacle avoidance. In: Proceedings of 5th International Joint Conference on Artificial Intelligence, p. 584 (1977)
Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of Alvey Vision Conference, University of Manchester, pp. 147–151 (1988)
Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2(60), 91–110 (2004)
Bay, H., Ess, A.: Speeded-up robust features (SURF). Comput. Vis. Image Underst. 110(3), 346–359 (2008)
Leutenegger, S., Chli, M., Siegwart, R. Y.: BRISK: binary robust invariant scalable keypoints. In: IEEE International Conference on Computer Vision (ICCV), pp. 6–13 (2011)
Alcantarilla, P.F., Bartoli, A., Davison, A.J.: KAZE features. In: Proceedings of European Conference on Computer Vision (ECCV), pp. 214–227 (2012)
Ramakrishnan, N., Wu, M.Q., Lam, S.K.: Enhanced low-complexity pruning for corner detection. J. Real-Time Image Proc. 1(1), 197–213 (2016)
Wang, Z. C., Li, R.: Adaptive Harris corner detection algorithm based on iterative threshold. Modern Phys. Lett. B 31(15) (2017)
Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)
Kitchen, L., Rosenfeld, A.: Gray-level corner detection. Pattern Recogn. Lett. 1(2), 95–102 (1982)
Mokhtarian, F., Suomela, R.: Robust image corner detection through curvature scale space. IEEE Trans. Pattern Anal. Mach. Intell. 20(12), 1376–1381 (1998)
Mokhtarian, F., Mohanna, F.: Enhancing the curvature scale space corner detector. In: Proceedings of Scandinavian Conference on Image Analysis, pp. 145–152 (2001)
He, X.C., Yung, N.H.C.: Corner detector based on global and local curvature properties. Opt. Eng. 47(5), 1–12 (2008)
Awrangjeb, M., Lu, G.: Robust image corner detection based on the chord-to-point distance accumulation technique. IEEE Trans. Multimedia 10(6), 1059–1072 (2008)
Zhang, W.C., Shui, P.L.: Contour-based corner detection via angle difference of principal directions of anisotropic Gaussian directional derivatives. Pattern Recognit. 48(9), 2785–2797 (2015)
Lin, X.Y., Zhu, C., Zhang, Q., et al.: Efficient and robust corner detectors based on second-order difference of contour. IEEE Signal Process. Lett. 24(9), 1393–1397 (2017)
Schmid, C., Mohr, R., Bauckhage, C.: Evaluation of interest point detectors. IJCV 37(2), 151–172 (2000)
The Image Database. http://figment.csee.usf.edu/edge/roc
© 2018 Springer Nature Switzerland AG
He, Y., Li, Y., Zhang, W. (2018). Robust Image Corner Detection Based on Maximum Point-to-Chord Distance. In: Ren, J., et al. Advances in Brain Inspired Cognitive Systems. BICS 2018. Lecture Notes in Computer Science(), vol 10989. Springer, Cham. https://doi.org/10.1007/978-3-030-00563-4_40
Print ISBN: 978-3-030-00562-7
Online ISBN: 978-3-030-00563-4