Abstract
The transient property of the liquid crystal (LC) lens, which passes through a continuum of intermediate lens states as it switches between its positive and negative lens states, is exploited to rapidly acquire multi-focus images without magnification change. A depth from focus (DFF) pipeline is proposed that recovers a low-error depth map and an all-in-focus image from the captured image stack. The resulting depth sensor has the advantages of a simple structure, low cost, and long service life.
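The DFF idea can be illustrated with a minimal sketch: for each pixel, score the sharpness of every frame in the focal stack with a focus measure (here a sum-modified-Laplacian, a common choice in the DFF literature), then take the winner-take-all argmax over frames to obtain a depth index and compose the all-in-focus image from the sharpest pixels. The function names and the specific focus measure are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def focus_measure(img):
    """Per-pixel sum-modified-Laplacian: |2v - left - right| + |2v - up - down|.
    Textured (in-focus) regions score high; smooth (defocused) regions score low."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    sml_x = np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:])
    sml_y = np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1])
    return sml_x + sml_y

def depth_from_focus(stack):
    """stack: (N, H, W) grayscale frames captured at N known focal settings.
    Returns (depth_index_map, all_in_focus_image)."""
    fm = np.stack([focus_measure(f) for f in stack])          # (N, H, W) scores
    depth_idx = np.argmax(fm, axis=0)                         # winner-take-all per pixel
    aif = np.take_along_axis(stack, depth_idx[None], axis=0)[0]
    return depth_idx, aif
```

In practice the raw argmax map is noisy; published pipelines typically add spatial regularization or filtering (e.g. guided or ring-difference filtering) on top of this winner-take-all step.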
Acknowledgment
This study was supported by Sichuan Science and Technology Programs (Grant No. 2021YJ0102).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Xiao, H., Liu, Z., Tan, B. et al. A Depth Sensor Based on Transient Property of Liquid Crystal Lens. Photonic Sens 13, 230230 (2023). https://doi.org/10.1007/s13320-022-0669-2