Abstract
We introduce an unsupervised GAN-based model for shading photorealistic hair animations. Our model is much faster than previous rendering algorithms and produces fewer artifacts than other neural image-translation methods. The main idea is to extend the CycleGAN structure so as to avoid a semitransparent hair appearance and to accurately reproduce the interaction of light with the scene. We use two constraints to ensure temporal coherence and highlight stability. Our approach outperforms previous methods in both quality and computational efficiency.
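Although this page gives only the abstract, the stated idea, a CycleGAN-style objective augmented with a temporal-coherence constraint, can be sketched in loss form. The sketch below is an illustration under stated assumptions, not the authors' implementation: the generator names (G, F_inv), the warp operator (e.g., derived from known inter-frame scene motion), and the L1 penalties are all assumptions.

```python
# Minimal sketch (PyTorch) of a cycle-consistency loss plus a
# temporal-coherence term, as a plausible reading of the abstract.
# All names and loss forms here are illustrative assumptions.
import torch
import torch.nn.functional as F

def cycle_loss(G, F_inv, real_a, real_b):
    """Cycle consistency: translating to the other domain and back
    should reproduce the input."""
    rec_a = F_inv(G(real_a))   # A -> B -> A
    rec_b = G(F_inv(real_b))   # B -> A -> B
    return F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b)

def temporal_loss(G, frame_t, frame_t1, warp):
    """Temporal coherence: shading frame t+1 directly should agree
    with the shaded frame t advected along the inter-frame motion."""
    return F.l1_loss(G(frame_t1), warp(G(frame_t)))
```

In training, such a temporal term would be weighted against the adversarial and cycle terms; a highlight-stability constraint could take an analogous form, comparing specular regions across frames.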
Acknowledgements
Zhi Qiao acknowledges receipt of Japanese Government (MEXT) scholarships. This work was partially supported by JSPS KAKENHI, Grant Number JP19K11990.
Author information
Zhi Qiao is a Ph.D. candidate at the University of Tokyo, Japan. His research interests center on computer graphics and computer vision. He is currently receiving a fellowship from the Ministry of Education, Culture, Sports, Science and Technology (MEXT).
Takashi Kanai is an associate professor in the Graduate School of Arts and Sciences, the University of Tokyo. His research interests include geometry processing and physics-based animation in computer graphics. He received his doctoral degree in engineering from the University of Tokyo in 1998. He is a member of ACM, ACM SIGGRAPH, IIEEJ (the Institute of Image Electronics Engineers of Japan), and IPSJ (Information Processing Society of Japan).
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
About this article
Cite this article
Qiao, Z., Kanai, T. A GAN-based temporally stable shading model for fast animation of photorealistic hair. Comp. Visual Media 7, 127–138 (2021). https://doi.org/10.1007/s41095-020-0201-9