Abstract
The application of neural networks in industrial settings, such as automated factories with bin-picking solutions, requires the costly production of large labeled datasets. This paper presents an automatic data-generation tool built around a procedural model of a cardboard box. We briefly demonstrate the capabilities of the system and its various parameters, and empirically validate the usefulness of the generated synthetic data by training a simple neural network. We make sample synthetic data generated by the tool publicly available.
Supported by the TERAIS project in the framework of the program Horizon-Widera-2021 of the European Union under the Grant agreement number 101079338.
1 Introduction
Automatic detection and localization of bins on a conveyor belt is an essential task in automated factories. This detection must be robust to guarantee the safe operation of robotic arms. It includes handling edge cases such as missing edges, occlusion, and variance in the materials and shapes of the bins. Moreover, in a specific scenario of package delivery factories, bins are made from a non-rigid cardboard material. These boxes are prone to various deformations, and their paper flaps are semi-randomly opened while being filled by workers and robots.
Analytical detection algorithms lack robustness and are hard to modify for new cases [5]. On the other hand, machine learning-based methods require data. Capturing real RGB-D samples in various factory scenarios is costly. The generation of synthetic data has therefore recently become a popular research topic [1, 6], underlined by the boom of commercial solutions such as NVIDIA Omniverse™.
Following our previous work [3], in this short submission we propose a novel data-generation tool for the automated generation of training data containing cardboard boxes. We evaluate the results of a neural network trained upon this novel data against a baseline synthetic generator, which has no automatic parametrization and cannot produce boxes with paper flaps.
2 Generating Data
This project aimed to create a high-level system for generating synthetic datasets of 3D bin scans using Blender 3D compiled into a Python module (bpy). We accomplished this by wrapping Blender's functionality into high-level classes representing the respective parts of the 3D scanning pipeline. Our pipeline simulates the real scanning process of a structured-light scanner. Render settings, scanner parameters, and the behavior of random parameter generation are fully customizable by the user. The output of our system comes in the form of structured point-cloud data. The camera transformation matrix and the volume box of the generated cardboard box are also exported and used as ground-truth data.
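The per-sample ground truth described above (camera transformation matrix plus the volume box) can be sketched as a small export routine. This is a minimal illustration only: the field names and the JSON layout are our assumptions, not the tool's actual export format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SampleGroundTruth:
    """Per-sample ground truth exported alongside the point cloud.

    Field names are illustrative; the tool's actual layout may differ.
    """
    camera_transform: list  # 4x4 camera-to-world matrix, row-major
    volume_box: list        # corner points of the box volume, in metres

def export_ground_truth(path, camera_transform, volume_box):
    """Write the ground truth of one generated sample to a JSON file."""
    gt = SampleGroundTruth(camera_transform, volume_box)
    with open(path, "w") as f:
        json.dump(asdict(gt), f, indent=2)
```

Keeping the ground truth in a plain, human-readable format makes it easy for downstream training scripts to load samples without depending on Blender.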
2.1 Parametric Cardboard Box
Variety in synthetic data can be achieved by randomizing the parameters of an appropriate parametric model [2]. To generate virtual cardboard boxes, we have created a parametric model which approximates the most significant box features; see Fig. 2a for a visual illustration. By changing the parameters, we are able to obtain a wide variety of virtual cardboard boxes. The box parameters are:
We have approximated a generic cardboard box as an object created using the corresponding sequence of steps as shown in Fig. 1. The steps include a series of extrusions, rounding corner edges, and adding thickness. The parametric model is implemented using Blender’s Geometry Nodes system [4].
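A parametrization of this kind can be sketched as follows. Note that the parameter names and sampling ranges below are our own guesses, mirroring the extrusion, corner-rounding, and thickness steps above; they are not the tool's actual interface.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class BoxParams:
    """Illustrative parameter set; all values in SI units (metres, radians)."""
    width: float
    depth: float
    height: float
    wall_thickness: float  # thickness added in the solidify step
    corner_radius: float   # rounding applied to corner edges
    flap_angles: list      # opening angle of each of the four paper flaps

def random_box_params(rng):
    """Sample one plausible set of box parameters (ranges are assumptions)."""
    return BoxParams(
        width=rng.uniform(0.2, 0.6),
        depth=rng.uniform(0.2, 0.6),
        height=rng.uniform(0.1, 0.4),
        wall_thickness=rng.uniform(0.003, 0.008),
        corner_radius=rng.uniform(0.002, 0.010),
        flap_angles=[rng.uniform(0.0, math.pi) for _ in range(4)],
    )
```

Sampling each flap angle independently reproduces the semi-randomly opened flaps observed in real package-delivery factories.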
In real production, a box is assembled by folding a sheet of cardboard. The resulting object can therefore be closely approximated in 2D. Such a 2D representation can serve as a UV map without visible seams, used for procedural shading of the parametric cardboard box; Fig. 2b shows the resulting rendered image.
2.2 Generation Parameters
The camera location was generated as a random unit vector in the positive XYZ octant of a sphere, scaled by a uniformly distributed random distance in the (1 m, 1.7 m) interval. The rotation of the scanner was then calculated such that the camera would point at the world origin. The generation of boxes utilized random distributions for multiple parameters; e.g., a single dimension was randomized as:
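The camera placement described above can be sketched in a few lines of plain Python: a random direction restricted to the positive octant, a uniform distance, and a look-at rotation toward the origin. The representation of the rotation as a column-major 3x3 matrix is our choice for illustration.

```python
import math
import random

def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def _normalize(a):
    n = math.sqrt(sum(c * c for c in a))
    return [c / n for c in a]

def random_camera_pose(rng):
    """Sample a scanner pose: position in the positive XYZ octant at a
    uniform distance in (1.0 m, 1.7 m), rotated to look at the origin."""
    # Random unit direction restricted to the positive octant
    # (absolute value of a Gaussian sample per axis).
    direction = _normalize([abs(rng.gauss(0.0, 1.0)) for _ in range(3)])
    distance = rng.uniform(1.0, 1.7)
    position = [c * distance for c in direction]
    # Look-at rotation: forward axis points from the camera to the origin.
    forward = [-c for c in direction]
    world_up = [0.0, 0.0, 1.0]
    right = _normalize(_cross(world_up, forward))
    up = _cross(forward, right)
    # Columns of R are the camera's right, up, and forward axes.
    rotation = [[right[i], up[i], forward[i]] for i in range(3)]
    return position, rotation
```

Sampling per-axis Gaussians and normalizing gives a direction uniform on the sphere; taking absolute values folds it into the positive octant.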
For our experiments, we set \(\sigma = 0.1\) and \(\gamma = 2.0\); both constants are in SI units.
3 Experiment
We have verified the added value of the proposed generator by training a neural network for 6D pose estimation of the cardboard boxes [3]. We have created two sets of synthetic training data, each consisting of 496 samples. The first set was generated using a baseline generator without the automated box parametrization and flaps, see Fig. 3a. The second set was generated using our novel tool. The data, together with loading scripts in Python, is publicly available.
3.1 Metrics
Translation of the box origin is evaluated using Euclidean distance: \(e_\textrm{TE}(\, \hat{\textbf{t}}, \textbf{t} \,) = \Vert \,\textbf{t} - \hat{\textbf{t}}\,\Vert _2\). For rotation, we use the model-independent angle distance between rotational axes calculated from the corresponding rotation matrices as: \( e_\textrm{RE}(\hat{R}, R) = \min _{\hat{R}' \in \{\hat{R}_1, \hat{R}_2\}} \arccos ((\textrm{Tr}(\hat{R}'R^{-1})-1)/2), \) where \(\textrm{Tr}\) is the matrix trace operator.
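The two metrics above translate directly into code. A minimal sketch, assuming rotations are given as 3x3 nested lists and that \(\hat{R}_1, \hat{R}_2\) are the two symmetry-equivalent predictions supplied by the caller:

```python
import math

def translation_error(t_hat, t):
    """Euclidean distance between estimated and ground-truth box origins."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t, t_hat)))

def rotation_error(r_hat_candidates, r):
    """Angle distance, minimised over the candidate predictions
    (R_hat_1, R_hat_2 in the text).  For a rotation matrix R^{-1} = R^T,
    so Tr(R_hat R^{-1}) reduces to the elementwise product below."""
    best = math.inf
    for r_hat in r_hat_candidates:
        trace = sum(r_hat[i][j] * r[i][j]
                    for i in range(3) for j in range(3))
        # Clamp against floating-point drift before taking the arccos.
        cos_angle = max(-1.0, min(1.0, (trace - 1.0) / 2.0))
        best = min(best, math.acos(cos_angle))
    return best
```

Clamping the cosine is a standard safeguard: numerical noise can push the trace expression slightly outside \([-1, 1]\), where `math.acos` raises an error.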
3.2 Evaluation
Table 1 compares networks trained on the two synthetic datasets. The validation set consists of 100 synthetic samples from the proposed generator; the test set contains 22 real-world samples captured by a PhoXi 3D Scanner. Figure 3 shows qualitative examples of the predictions. Note that the network receives only the 3D point cloud as input, without any information about the dimensions of the boxes.
We conclude that the novel generator helped the network generalize and learn to ignore paper flaps, showing the promise of improved synthetic-data tools for more successful training. Future work includes extending the tool with additional sources of variance, such as bins made from semi-transparent plastic, together with simulation of physical phenomena like light caustics in photo-realistic textures.
References
Chen, K., et al.: Sim-to-real 6D object pose estimation via iterative self-training for robotic bin picking. In: European Conference on Computer Vision (ECCV), pp. 533–550 (2022)
Fedorova, S., et al.: Synthetic 3D data generation pipeline for geometric deep learning in architecture. In: The International Archives of the Photogrammetry Remote Sensing and Spatial Information Sciences (ISPRS Congress), pp. 337–344 (2021)
Gajdošech, L., Kocur, V., Stuchlík, M., Hudec, L., Madaras, M.: Towards deep learning-based 6D bin pose estimation in 3D scan. In: VISAPP, pp. 545–552 (2022)
van Gumster, J., Lampel, J.: Procedural modeling with Blender's geometry nodes. In: SIGGRAPH Labs (2022). https://doi.org/10.1145/3532725.3538516
Katsoulas, D.: Localization of piled boxes by means of the hough transform. In: Michaelis, B., Krell, G. (eds.) DAGM 2003. LNCS, vol. 2781, pp. 44–51. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45243-0_7
Periyasamy, A.S., Schwarz, M., Behnke, S.: SynPick: a dataset for dynamic bin picking scene understanding. In: IEEE CASE, pp. 488–493 (2021)
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Kravár, P., Gajdošech, L., Madaras, M. (2023). Novel Synthetic Data Tool for Data-Driven Cardboard Box Localization. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science, vol 14254. Springer, Cham. https://doi.org/10.1007/978-3-031-44207-0_50