Abstract
Unlike conventional rigid-link parallel robots, cable-driven parallel robots (CDPRs) offer distinct advantages, including lower inertia, a higher payload-to-weight ratio, cost efficiency, and larger workspaces. However, because of the complexity of the cable configuration and the redundant actuation, model-based forward kinematics and motion control demand considerable effort and computation. This study addresses these challenges by introducing deep reinforcement learning (DRL) into the cable robot and achieves compensated motion control by estimating the actual position of the end-effector. We used a random behavior strategy on a CDPR to explore the environment, collect data, and train neural networks, and then applied the trained network to the CDPR to verify its efficacy. We also addressed the problem of asynchronous state observation and action execution by delaying action execution by one cycle and adding the action to be executed so that it matches the motion control command. Finally, we implemented the proposed control method on a high-payload cable robot system and verified its feasibility through simulations and experiments. The results demonstrate that the end-effector position estimation accuracy is improved compared with the numerical model-based forward kinematics solution, and that the position control error is reduced compared with both conventional open-loop control and open-loop control with tension distribution.
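The pipeline described in the abstract — exploring with a random behavior policy, collecting cable-length/position data, and training a neural network to estimate the end-effector position — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the planar 4-cable geometry, the network size, and the random stand-in for the one-cycle-delayed action input are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frame anchor points of a planar 4-cable CDPR (hypothetical geometry, meters).
anchors = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])

def cable_lengths(pos):
    """Inverse kinematics: distance from each frame anchor to the end-effector."""
    return np.linalg.norm(anchors - pos, axis=1)

# --- Collect data with a random behavior policy (exploration phase) ---
N = 2000
positions = rng.uniform(0.5, 1.5, size=(N, 2))      # randomly explored poses
prev_action = rng.normal(0.0, 0.01, size=(N, 2))    # stand-in for the delayed action
X = np.hstack([np.array([cable_lengths(p) for p in positions]), prev_action])
X = (X - X.mean(0)) / X.std(0)                      # normalize network inputs
Y = positions                                       # target: actual end-effector position

# --- Train a small MLP forward-kinematics estimator by gradient descent ---
W1 = rng.normal(0.0, 0.1, (X.shape[1], 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 2));          b2 = np.zeros(2)
lr = 0.05
for _ in range(4000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # estimated position
    err = pred - Y
    gW2 = h.T @ err / N; gb2 = err.mean(0)          # backprop through output layer
    dh = (err @ W2.T) * (1.0 - h**2)                # backprop through tanh
    gW1 = X.T @ dh / N;  gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
print(f"position-estimation RMSE: {rmse:.4f} m")
```

Appending the pending action to the input vector mirrors the paper's handling of asynchronous observation and execution: the network sees both the measured cable lengths and the command that has been issued but not yet executed.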
Ethics declarations
The authors declare that there are no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
One of the authors of this article is a guest editor of this special issue. We declare that the peer review process for this article was the same as the journal's general peer review process and that the authors (guest editor) did not handle the peer review of this article.
Additional information
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This study was supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (MOE) (2021RIS-002), by Chonnam National University (Grant number: 2023-0095-01), and by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea government (MOTIE) (P0008473, HRD Program for Industrial Innovation).
Huaishu Chen received his B.S. degree in mechanical engineering from the Department of Mechanical Engineering at Wenzhou University, Wenzhou, China, in 2019, and an M.S. degree in mechanical engineering from Chonnam National University, Korea, in 2022. His research interests are cable robots and deep reinforcement learning.
Min-Cheol Kim received his B.S. degree in mechanical engineering from Chonnam National University, Korea, in 2015. He is now pursuing a Ph.D. degree in mechanical engineering at Chonnam National University, Korea. His research interests are dynamics and control applications for cable-driven parallel robots in industrial fields.
Yeongoh Ko received his B.S. degree from the School of Mechanical Engineering, Chonnam National University, in 2021, and is now an M.S. candidate in the Department of Mechanical Engineering at Chonnam National University. His research interests are control and robotics.
Chang-Sei Kim Please see vol. 21, no. 3, p. 947, March, 2023 of this journal.
About this article
Cite this article
Chen, H., Kim, MC., Ko, Y. et al. Compensated Motion and Position Estimation of a Cable-driven Parallel Robot Based on Deep Reinforcement Learning. Int. J. Control Autom. Syst. 21, 3507–3518 (2023). https://doi.org/10.1007/s12555-023-0342-6