Abstract
In an era in which manufacturing is shifting from mass production to mass customization, advances in human-robot interaction in industry have taken many forms. However, reducing the amount of programming required from an expert by using natural modes of communication remains an open topic. In this paper, we present XRob, a platform and learning framework for human-robot cooperation. XRob builds on a task-based formalism and allows applications of human-robot interaction to be built in an intuitive way. It also enables the user to pursue different levels of shared autonomy between human and robot. The learning framework demonstrates that it can capture a complex human-robot cooperative assembly process while remaining intuitive to the user.
Zusammenfassung
Rapidly changing market situations increasingly demand flexibility in industrial production. To meet the challenge of high product variety, human-robot interaction in all its forms is gaining importance, driven by advances in research and development. However, reducing the programming effort, which is usually carried out by an expert, through natural modes of communication is still an open topic. In this work, the authors present XRob, a framework for learning, parameterizing and executing processes based on human-robot cooperation. The XRob framework builds on a task-based formalism and makes it possible to implement applications based on human-robot interaction in an intuitive way. With this platform, the user is able to realize different levels of shared autonomy between human and robot. The integrated learning framework shows that it can learn a complicated, cooperative assembly process and is intuitive to operate. In addition, the framework enables different users to learn different assembly processes.
1 Introduction
The paradigm shift from mass production to mass customization is driving production systems to handle more product variants, shorter life cycles and smaller batch sizes [1]. Robotics is expected to be one of the main enablers of this transition to the transformable factory of tomorrow [2]. However, reducing the amount of expert programming a robotic system requires, by using natural modes of communication, is still an open topic [1]. In this context, the task-level programming paradigm is employed to achieve easy and quick re-programming by non-experts.
One of the well-known early works on task-level programming [3] is based on a set of actions that alter the current world state. These actions can be perceived as formal descriptions of compliant robot motions and are composed of primitives. A primitive is a simple, atomic movement, typically a sensory input or a single robot motion, described using the Task Frame Formalism (TFF) [5]; such primitives can be combined to form a task [4]. In other words, the assembly task is broken down into action primitives the robot can interpret [6]. However, this involves modeling the world state and maintaining that model. As a result, when using such primitives directly, the difficulty of modeling a task grows rapidly with the task's complexity.
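As an illustration of this formalism, the following sketch models a TFF-style primitive as a per-axis control mode plus a stop condition, and a task as an ordered composition of primitives. All names (`Primitive`, `Task`, `AxisMode`, the stop-condition strings) are illustrative only and do not correspond to the actual XRob API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List

class AxisMode(Enum):
    """Control mode per task-frame axis, in the spirit of the TFF."""
    POSITION = "position"   # follow a position/velocity setpoint
    FORCE = "force"         # regulate a contact force/torque

@dataclass
class Primitive:
    """An atomic motion: a setpoint per task-frame axis plus a stop condition."""
    name: str
    modes: Dict[str, AxisMode]     # axis label -> control mode
    setpoints: Dict[str, float]    # axis label -> target (m, m/s, N or Nm)
    stop_condition: str            # e.g. "force_z > 2.0", checked against sensors

@dataclass
class Task:
    """A task is an ordered composition of primitives."""
    name: str
    primitives: List[Primitive] = field(default_factory=list)

    def append(self, p: Primitive) -> "Task":
        self.primitives.append(p)
        return self

# A guarded approach followed by a compliant insertion, sketched in TFF terms.
approach = Primitive(
    name="guarded_approach",
    modes={"z": AxisMode.POSITION},
    setpoints={"z": -0.05},
    stop_condition="force_z > 2.0",   # stop on first contact
)
insert = Primitive(
    name="compliant_insert",
    modes={"z": AxisMode.FORCE, "x": AxisMode.POSITION, "y": AxisMode.POSITION},
    setpoints={"z": 10.0, "x": 0.0, "y": 0.0},
    stop_condition="depth >= 0.02",
)
peg_in_hole = Task("peg_in_hole").append(approach).append(insert)
```

The sketch also hints at the modeling burden noted above: every stop condition and setpoint references world state that must be kept consistent by hand.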
In this paper, we showcase our work on building an assembly task from generic recipes; the terms recipe and skill are used interchangeably here. These recipes, each composed of a set of primitives, are abstracted at a higher level and hence form a bridge between complex tasks and the primitives. This follows the idea of high-level abstraction for generalization proposed in [3].
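The bridging role of a recipe can be sketched as a parameterized expansion from task-level parameters to a concrete primitive sequence, so the user never touches the primitives themselves. The recipe name, parameters and primitive names below are hypothetical examples, not part of XRob.

```python
from typing import Callable, Dict, List

# A recipe maps task-level parameters to a concrete primitive sequence,
# hiding the primitive and world-state details from the user.
Recipe = Callable[[Dict[str, str]], List[str]]

def pick_recipe(params: Dict[str, str]) -> List[str]:
    """Expand a 'pick' recipe into named primitives for the given object."""
    obj = params["object"]
    return [
        f"move_to_above({obj})",
        "open_gripper()",
        f"guarded_approach({obj})",
        "close_gripper()",
        "retract()",
    ]

RECIPES: Dict[str, Recipe] = {"pick": pick_recipe}

def instantiate(name: str, params: Dict[str, str]) -> List[str]:
    """Turn a recipe name plus user-supplied parameters into primitives."""
    return RECIPES[name](params)

steps = instantiate("pick", {"object": "cylinder_head_cover"})
```

Parameterizing the same recipe for a different object yields a different primitive sequence without any re-programming, which is the point of the abstraction.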
Depending on how these primitives are combined, several approaches are described in the literature. An approach using a visual programming tool to define flow control, supporting hierarchies and concurrency in a state-machine-like concept, is proposed in [7]. Parameterization of a pre-implemented recipe by a non-expert, to match the assembly task at hand, is of vital importance. For example, [9] proposes a three-stage LfD method that incorporates human-in-the-loop adaptation to iteratively correct a batch-learned policy and improve accuracy and precision. In [10], a programming-by-demonstration approach is described that allows users to generate skills and robot program primitives for later refinement and re-use.
Although such approaches have been proposed, few of them demonstrate their applicability in real-world industrial settings. Exceptions include [1, 8], where the authors partially show their approaches in practical demonstrations. In contrast, our work proposes and focuses on a task-based programming framework, XRob, that is capable of:
-
Easy, intuitive programming for non-experts
-
Programming-by-demonstration features with the help of a GUI
-
Applicability to human-robot interactive assembly tasks of varying complexity
These capabilities address key issues in manufacturing such as fast ramp-up, zero-defect inspection and the reduction of manual labor in assembly operations. The XRob platform features a flexible quality-inspection system that can be extended with a variety of sensors and offers intuitive configuration capabilities. Besides rapid reconfiguration of the system, safety issues also have to be taken into consideration. In the following sections, XRob and its constituent modules are briefly presented. Then, to showcase practical applicability, a cooperation scenario is evaluated in which the robot works in tandem with a human operator on the assembly of automotive combustion engines.
2 The XRob framework
The XRob software framework enables the creation of complex robot applications within minutes. It builds on easy-to-use features that significantly speed up commissioning and make operation more cost-efficient and flexible than common programming methods. The software architecture allows easy and intuitive creation of processes and configuration of the components of a robot system via a single user interface. Figure 1 provides an overview of the software components within the XRob framework. A detailed description of the components involved and an extended evaluation of the framework are given in [12].
The main software components (see Fig. 1) are tightly integrated and linked together to enable close communication. Depending on the application to be realized, the complexity of the system can be varied by activating or deactivating individual software modules, leading to high scalability of the XRob system. A brief description of the integrated software modules follows.
-
The Perception System component consists of modules that aggregate data about the current state of the workspace environment. These include means to digitalize the workspace via environment reconstruction and to localize the objects of interest in 3D.
-
The Planning and Execution system provides modules to carry out actions on the robotic target system, including the calculation of collision-free motion paths and the required interfaces to robotic systems.
-
For Application Development, the XRob framework provides an intuitive user interface that includes an interactive programming environment, as well as software modules to simulate and visualize robot motion paths and to acquire data via sensors.
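The scalability-by-module-selection described above can be sketched as a configuration object that activates only the modules an application needs. The module names and grouping below merely mirror the three components listed in this section; the `SystemConfig` API is an assumption for illustration, not XRob's configuration interface.

```python
from dataclasses import dataclass, field
from typing import Set

# Hypothetical module inventory, grouped like the three main components above.
ALL_MODULES = {
    "perception": {"environment_reconstruction", "object_localization_3d"},
    "planning_execution": {"path_planning", "robot_interface"},
    "application_development": {"gui", "simulation", "sensor_acquisition"},
}

@dataclass
class SystemConfig:
    """System complexity scales with the set of activated modules."""
    active: Set[str] = field(default_factory=set)

    def activate(self, module: str) -> None:
        if not any(module in group for group in ALL_MODULES.values()):
            raise ValueError(f"unknown module: {module}")
        self.active.add(module)

    def deactivate(self, module: str) -> None:
        self.active.discard(module)

# A minimal pick-and-place cell: localization, path planning, robot driver.
cfg = SystemConfig()
for m in ("object_localization_3d", "path_planning", "robot_interface"):
    cfg.activate(m)
```

A richer application (e.g. with simulation and a GUI) would simply activate more modules against the same inventory.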
3 Experimental evaluation
DIN ISO 10218 [13] draws a distinction between collaborative and non-collaborative operation of robot systems. Cooperation in our taxonomy is defined by temporal and spatial coincidence of robot and worker without a mutual task or physical interaction. As an example of cooperation we present a use case in which worker and robot perform their tasks in one common work cell at the same time.
Use Case: Cylinder Head Assembly for combustion engines: The assembly of a combustion engine includes the installation of a cylinder head cover. The installation is carried out manually by placing the cover with pre-inserted screws onto the motor block and tightening the screws with a manual power tool. The electronic screwdriver of the manual workplace is fitted with a push-start mechanism, an electronic control unit and a shut-off clutch. It therefore starts rotating when pushed onto the screw, and stops the screwing motion and retracts when a predefined torque is reached. For combustion engine assembly, the power tool shown in Fig. 2a has to perform the screw-tightening operations in the required order and with the required accuracy to meet a defined process quality (i.e. screw-in depth, torque). The working instruction of the workstation includes several additional process steps. While the robot performs the assembly task of pre-screwing the cylinder head cover, the worker performs various other assembly tasks.
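The shut-off behaviour of the screwdriver can be sketched as a simple threshold check over the measured torque: rotate until the predefined torque is reached, then stop. The function below is a hypothetical simulation for illustration, not the tool's actual control software, and the torque values are invented.

```python
from typing import Dict, List

def tighten(torque_samples: List[float], shutoff_torque: float) -> Dict:
    """Simulate the shut-off clutch: 'rotate' through the measured torque
    samples until the predefined threshold is reached, then stop."""
    torque = 0.0
    for step, torque in enumerate(torque_samples, start=1):
        if torque >= shutoff_torque:
            # Clutch opens: screwing motion stops and the tool retracts.
            return {"ok": True, "steps": step, "final_torque": torque}
    # Samples exhausted without reaching the threshold: screw not seated.
    return {"ok": False, "steps": len(torque_samples), "final_torque": torque}

# Torque rises as the screw seats; the clutch is set to open at 8.0 Nm.
result = tighten([0.5, 1.2, 3.0, 6.5, 8.3], shutoff_torque=8.0)
```

A real tool would of course sample torque continuously in a control loop; the point here is only the threshold-and-stop logic that defines the process quality criterion.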
Modeling in XRob: XRob contains functionality for 3D and 2D position-deviation compensation. Once the XRob recipe is parameterized, the robot system takes an overview image (as a 3D point cloud) for coarse orientation of the cylinder head. For fine position compensation (in 2D), a 2D sensor is positioned such that images can be calibrated and deviations can be converted from pixel space to metric space. This allows the robot to compensate automatically for imprecise object presentation. The process functionality for screwing is implemented as a subprogram in the robotic system and is triggered by an XRob function block. For parameterization, an XRob service for improved hand guidance is implemented: it allows the robot to be controlled during the parameterization phase not only via passive hand guidance using the robot's own compliant mode, but also by controlled dragging via a force-torque sensor whose axes are calibrated parallel to the robot's TCP frame.
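The pixel-to-metric conversion used for 2D deviation compensation reduces to a calibration scale and a subtraction. The sketch below assumes a simple isotropic scale obtained from a known reference distance; real camera calibration also handles distortion and non-square pixels, and the numbers are invented for illustration.

```python
import math
from typing import Tuple

def calibrate_scale(pixel_dist: float, metric_dist_mm: float) -> float:
    """mm per pixel, from a known reference distance in the calibrated image."""
    return metric_dist_mm / pixel_dist

def deviation_mm(expected_px: Tuple[float, float],
                 measured_px: Tuple[float, float],
                 mm_per_px: float) -> Tuple[float, float, float]:
    """Convert a position deviation from pixel space to metric space.

    Returns (dx, dy, magnitude) in millimetres; the robot offsets its
    taught pose by (dx, dy) to compensate for imprecise object presentation.
    """
    dx = (measured_px[0] - expected_px[0]) * mm_per_px
    dy = (measured_px[1] - expected_px[1]) * mm_per_px
    return dx, dy, math.hypot(dx, dy)

# Two fiducials 100 mm apart appear 500 px apart -> 0.2 mm per pixel.
scale = calibrate_scale(pixel_dist=500.0, metric_dist_mm=100.0)
dx, dy, err = deviation_mm(expected_px=(320, 240),
                           measured_px=(330, 215),
                           mm_per_px=scale)
```

With this scale, a 10 px horizontal and −25 px vertical image deviation corresponds to a 2 mm / −5 mm correction of the taught screwing pose.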
Demonstration: The demonstration of the cooperation use case is carried out in three expansion stages [11]. A standardized task, to be solved with the robotized tools, is presented to a selection of factory workers representative with regard to sex, age and experience, who are planned to be available during all three expansion stages. The workers were trained in a standardized routine and carried out the robot teach-in tasks multiple times. They were equipped with a head-mounted camera with eye tracking and were additionally observed via an external video camera. Video analysis (for timing and scope measurements), measurements of task effectiveness, standardized questionnaires for usability measures (NASA Task Load Index, SUS) [11], and generic feedback gathered in interviews were considered. In the first expansion stage, the effectiveness and simplicity of the user interface as implemented by the robot manufacturer were evaluated. The gathered data showed that, for participants of both use cases, the off-the-shelf touch panel was experienced as infeasible and too complex for controlling the robot during the teaching task (see Fig. 2b).
Parameterization and teach-in of process points play only a secondary role, while other activities (such as navigating multistage sub-menus) dominate (Fig. 2b). Overall, teaching in expansion stage 1 was rated low with respect to usability, user experience and acceptance. This can be explained by the fact that the actual teaching was only a fraction of the whole process, which was experienced as too complicated due to the touch panel.
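One of the usability measures used in the evaluation, the System Usability Scale (SUS), has a standard scoring rule that can be stated compactly: ten 1–5 Likert items, where odd-numbered items contribute (response − 1) and even-numbered items contribute (5 − response), with the sum scaled by 2.5 to a 0–100 score. The sketch below implements that standard rule; it is not part of XRob and the example responses are invented.

```python
from typing import Sequence

def sus_score(responses: Sequence[int]) -> float:
    """System Usability Scale: ten 1-5 Likert items mapped to a 0-100 score.

    Odd items (1-indexed) contribute (r - 1), even items (5 - r);
    the total is multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses in the range 1..5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-indexed here
                for i, r in enumerate(responses))
    return total * 2.5

# Fully positive answers (5 on odd items, 1 on even items) give the maximum.
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])
```

The alternating scoring reflects that odd SUS items are positively worded and even items negatively worded, so agreement is "good" only on the odd items.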
In expansion stage 2, the XRob programming system was introduced, providing linear workflows, feedback about the degree of program execution and subtask success, and the possibility of operation via a state-of-the-art tablet computer with better touch-screen performance. Tests in expansion stage 2 showed that better hand-guidance performance leads to better acceptance of that input modality as well as increased programming performance (Fig. 2c). Programming time could be reduced from 12 min to approximately 7 min (Fig. 2c) due to the new input modalities. On the other hand, stage 2 introduced new functionality: position-deviation compensation increases the programming effort, since an additional machine-vision application has to be parameterized, but it improves overall process stability.
4 Conclusion
This work presents the XRob platform, which allows models of human-robot interaction to be built in a flexible and intuitive way. Using this framework, the operator can pursue different kinds of task sharing in applications that require customized patterns of interaction. The work exemplifies the usability of such a flexible system in the automation chain. The demonstrated use case substantiates the potential of human-robot cooperation in future manufacturing scenarios.
References
Pedersen, M. R., et al. (2016): Robot skills for manufacturing: from concept to industrial deployment. Robot. Comput.-Integr. Manuf., 37, 282–291.
euRobotics (2013): Robotics 2020 strategic research agenda for robotics in Europe.
Nicolescu, M. N., et al. (2003): Natural methods for robot task learning: instructive demonstrations, generalization and practice. In Proceedings of the second international joint conference on autonomous agents and multiagent systems.
Finkemeyer, B., Kröger, T., Wahl, F. M. (2005): Executing assembly tasks specified by manipulation primitive nets. Adv. Robot., 19, 591–611.
Bruyninckx, H., De Schutter, J. (1996): Specification of force-controlled actions in the task frame formalism—a synthesis. IEEE Trans. Robot. Autom., 12, 581–589.
Mosemann, H., Wahl, F. M. (2001): Automatic decomposition of planned assembly sequences into skill primitives. IEEE Trans. Robot. Autom., 17, 709–718.
Steinmetz, F., Weitschat, R. (2016): Skill parametrization approaches and skill architecture for human-robot interaction. In 2016 IEEE international conference on automation science and engineering.
Pedersen, M. R., Krüger, V. (2015): Automated planning of industrial logistics on a skill-equipped robot. In IROS 2015 workshop task planning for intelligent robots in service and manufacturing, Hamburg, Germany.
Ko, W. K. H., Wu, Y., Tee, K. P., Buchli, J. (2015): Towards industrial robot learning from demonstration. In Proceedings of the 3rd international conference on human-agent interaction.
Stenmark, M., Topp, E. A. (2016): From demonstrations to skills for high-level programming of industrial robots. In 2016 AAAI fall symposium series.
Huber, A., Weiss, A., Minichberger, J., Ikeda, M. (2016): First application of robot teaching in an existing industry 4.0-environment. Does it really work? Societies.
Pichler, A., Akkaladevi, S. C., et al. (2017): Towards shared autonomy for robotic tasks in manufacturing. In Proc. FAIM 2017, Modena, Italy.
DIN EN ISO 10218-1 (2012): Industrieroboter – Sicherheitsanforderungen – Teil 1: Roboter. Berlin: Beuth.
Acknowledgements
This work is funded by the projects LERN4MRK (Austrian Ministry for Transport, Innovation and Technology) and AssistMe (FFG, 848653).
Akkaladevi, S.C., Plasch, M. & Pichler, A. Skill-based learning of an assembly process. Elektrotech. Inftech. 134, 312–315 (2017). https://doi.org/10.1007/s00502-017-0514-2