Abstract
Object handover is one of the fundamental tasks of service robots. This paper focuses on a robot-to-human object handover controller applied to a domestic service robot. People in need often face manual operation constraints caused by different body postures or declined physical functions. The preplanned handover strategy used in previous studies potentially increases their cognitive burden and makes it difficult to meet their needs for flexible real-time control and intuitive interaction. It also remains challenging to deal with sensing and prediction errors, motion planning, and coordination. The robot no longer provides assistance after an object is delivered to a preplanned handover location, so the receiver must make extra efforts to get the object released. Therefore, inspired by a human study, a handover controller is designed based on a manually guided handover strategy that solves problems ranging from grasp adjustment to object release during dynamic retraction motion. Flexible control and intuitive interaction are enabled through motion with continuous support from the robot. Without additional sensors or model learning, changes in motion and energy consumption are taken into account to determine an appropriate release timing. The results of robot-to-human handover experiments and the user studies indicate that, using the proposed controller with proper release timing, the receiver can freely pull an object to a suitable location as desired and smoothly obtain control of the object to complete a subsequent task.
1 Introduction
As people are living longer, it becomes necessary to provide easier access to assisted living service and care (Beard et al. 2016). In addition, the number of patients with motor impairments caused by various diseases is increasing (Johnson et al. 2019). Therefore, the ambient assisted living style using intelligent service robots has become an excellent choice (Robinson et al. 2014). Object handover is a joint action between the giver and receiver, which is one of the fundamental tasks of service robots (Sebanz and Knoblich 2009). Although many devices can be customized and controlled remotely (Majumder et al. 2017) in compliance with ethical requirements such as privacy protection (Zhang et al. 2022a, b), human–robot collaboration remains inevitable for activities of daily living (ADLs) (Mitzner et al. 2014).
As shown in Fig. 1, most existing handover controllers employ a preplanned handover strategy, which relies on preplanned motion patterns, preplanned handover locations, or predetermined release rules with strong restrictive effects (Ortenzi et al. 2021). Extensive data acquisition and model learning are required for parameter prediction and human–robot coordination. The receiver must become aware of the controller's behavior and follow its design rules; otherwise, they cannot effectively gain control of the object.
This paper focuses on robot-to-human handover in home-assistance scenarios, dedicated to serving people with motor impairments who are often bedridden or wheelchair-dependent but can smoothly take objects from a position that is comfortable for them. Due to manual operation constraints caused by different body postures or declined physical functions (Beard et al. 2016; Robinson et al. 2014; Qin et al. 2023), they have more individually varied requirements for handover control (Ardón et al. 2021). These issues pose challenges in handling sensing and prediction errors, synchronizing human–robot actions, and adapting to users' environments (Ortenzi et al. 2021; Abbink et al. 2012). The problem of completing the handover under the receiver's free intended motion remains unresolved. In addition, a robot should provide more support than merely placing an object in a predicted position and expecting the receiver to adjust their behavior. To avoid failures and improve acceptance, a handover controller needs to comply with the receiver instead of setting rules (Bedaf et al. 2016). More attention should be paid to fulfilling the receiver's needs for flexible real-time control and intuitive interaction.
Therefore, this paper proposes a human-centered service robot handover controller based on a manually guided handover strategy, as Fig. 1 shows. It interacts with the receiver and explores release timing through free manually guided motion. An object is delivered to a contact location without requiring accurate prediction. Rather than transferring the load at the first physical contact, the controller allows the receiver to pull the object to a preferred location and transfers the load when intention changes are deduced from motion changes.
In this paper, a preliminary human study is first conducted to verify the effectiveness of the manually guided handover strategy and to explore the appropriate release timing. Based on the findings, free manually guided motion is realized. Moreover, to achieve an effortless and smooth release, in addition to grip force modulation, the changes in motion and energy consumption are analyzed to predict changes in the receiver's intentions and determine the release timing. Finally, the performance of the proposed controller was evaluated at different release timings and compared with an existing controller.
The main contributions of this work are as follows:
- Designed from the receiver's perspective, the present study is, to our knowledge, the first to explore the interaction strategy and release timing for robot-to-human handovers with free manually guided motion.
- A human-to-human handover study was designed, and a human-inspired method for selecting release timing during motion was proposed. General laws of motion are applied without the need for additional sensors or model learning.
- A robot-to-human handover controller was designed and evaluated, offering a feasible approach for assisting individuals with varying manual operation constraints or preferences.
2 Related work
The proposed manually guided handover strategy represents a comprehensive and systematic solution for object handover. It differs from the preplanned handover strategy in terms of Interaction strategy, Handover location, Motion control, and Release behavior.
Interaction strategy Effective human–robot interaction serves as a bridge for mutual understanding between service robots and humans. By employing diverse interaction strategies, individuals can attain situation awareness (Endsley 1995; Vanderhaegen et al. 2023), make proper control decisions, and manage the object handover process. For human-to-human handovers, the human giver and receiver can share representations in multiple ways, enabling the receiver to coordinate actions and predict the required effort in advance (Ortenzi et al. 2021). In contrast, the motion of a robot employing the preplanned handover strategy remains unchanged, which hinders the receiver's perception of objects (Parastegari et al. 2018; Sebanz et al. 2006). The receiver needs to spend more effort taking control of handovers. A small initial applied force requires time for adjustment, while a large initial applied force generates motion overshoot (Chan et al. 2013).
Handover location A fundamental task for the preplanned handover strategy is to determine where to transfer (Cini et al. 2019; Moon et al. 2014) prior to the physical interaction. The handover location is influenced by many factors, including different body sizes (Parastegari et al. 2017), arm mobility capacities (Ardón et al. 2021), receiver locations, task requirements (Sisbot and Alami 2012), kinematic features (Liu et al. 2021), social acceptance and preferences (Koay et al. 2014). Depending on the specific scenario, different ergonomic models (Parastegari et al. 2017; Bestick et al. 2018) and cost functions (Sisbot and Alami 2012; Bestick et al. 2016) can be utilized to predict the object configurations and handover locations. To transfer the object closer, some controllers have utilized gaze (Moon et al. 2014), motion or hand tracking systems (Medina et al. 2016; Kshirsagar et al. 2022; Kupcsik et al. 2018; Prada et al. 2014). Although these predictions can effectively improve the handover experience through extensive data collection and modeling learning, many challenges must be addressed, including prediction errors, motion synchronization, and adaptation to the user’s environment. Changes in the receiver’s intentions and manual operation constraints are difficult to accommodate. Cognitive load and waiting time should not be ignored either.
Motion control Proper motion control around physical contact can improve the interaction experience and smooth the handover process. Extending the robotic arm along a linear trajectory while actively detecting force change patterns (Han and Yanco 2019) requires the receiver to adapt to a fixed trajectory. Generating human-like (Kajikawa et al. 1995) and human-aware motion (Sisbot and Alami 2012) is an effective compensation that achieves accurate positioning and softens the shock of contact. Using dynamic motion primitive-based control methods is another practical solution (Prada et al. 2014). However, it remains challenging to learn human behavior efficiently and to adapt to variable behavior patterns. In order to actively track the receiver's movements, their intentions need to be accurately recognized (Li et al. 2022). Control methods based on impedance control (Bohren et al. 2011) or admittance control (Haninger et al. 2022) work well to achieve this compliant motion control. Moreover, as a typical human–robot collaborative task, object handover can benefit from shared control. By integrating human subjective intentions into an automated handover control system, the burden of human control can be alleviated. However, promptly addressing potential conflicts between human and robot control remains a complex challenge that warrants further research (Vanderhaegen 2021).
Release behavior The release timing must be coordinated, as an early release may result in a fall of the object, while a delayed release may result in higher interaction force (Chan et al. 2013). To compensate for the shortcomings of a fixed time delay (Edsinger and Kemp 2007) or a fixed distance (Kshirsagar et al. 2022), force patterns (Gómez Eguíluz et al. 2019) are commonly used. A predetermined threshold (Medina et al. 2016; Kupcsik et al. 2018) can be predicted from the force/torque information at the fingertips or robot arm joints. Load changes (Prada et al. 2014) or an entire load transfer (Psomopoulou and Doulgeri 2015) can also be used. Studies have demonstrated that combining multiple cues is effective for release judgments, such as the duration of the receiver's gaze prior to touch (Grigore et al. 2013) or the haptic cue indicating pulling under sufficient load sharing (Costanzo et al. 2021). In addition, a force-related displacement (Bohren et al. 2011) can be used to evaluate the contact force's magnitude. However, these controllers trade off between handover smoothness and object safety (Chan et al. 2013). After reaching the preplanned handover location, the receiver can no longer get assistance from the robot and needs to make extra efforts to get the object released. A human-inspired controller (Chan et al. 2013) employs grip force modulation based on load change, allowing the receiver to take an object easily in the vertical direction. This method has been proven effective and has been adopted by many other controllers (e.g., Medina et al. 2016; Kupcsik et al. 2018). However, greater force is required when the pulling direction is changed (Parastegari et al. 2018).
3 Preliminary human-to-human handover study
Motivated by the review of prior studies, the interaction strategy and release timing for robot-to-human handovers with free manually guided motion were explored to reduce cognitive burden and minimize handover difficulty, using adaptable motions to facilitate mutual understanding of human–robot intentions. A human-to-human handover study was first conducted in a simulated scenario for preliminary validation. This allows for an exploration of the object handover behavior exhibited by human receivers and givers within contextually consistent cognitive conditions. The goal was not to design a study catering to the robot's perceptual capabilities. Instead, the appropriate strategies catering to the receiver's capabilities were explored.
Handovers were simulated using the manually guided handover strategy and the preplanned handover strategy, respectively. For the former, the giver was instructed to comply with the receiver's retraction movement by following the receiver's traction and releasing the object when he or she deemed it appropriate. For the latter, the giver needed to remain in a fixed location and release the object after feeling a specific pull. In addition, the receiver was told to take objects in a comfortable way, while the giver was told to ensure safety. The experiments were supervised by the experimenter together with the participants acting as receivers. A new trial was immediately conducted if a trial failed or was rejected based on their timely feedback, for example, if the receiver's movement was hindered or too much force was required (manually guided strategy), or if the moved distance was too large (preplanned strategy).
As Fig. 2 shows, to limit physical functions and immerse participants in the role, one participant acting as the receiver was invited to lie on a 0.75 m high bed, blindfolded, and receive an object using only the thumb, index, and middle fingers of one hand. A glass of water weighing 530 ± 0.5 g was delivered to approximately the right side of the receiver's head at 45\(^{\circ }\), 0.95 m high and 0.5 m away (Ardón et al. 2021; Parastegari et al. 2017; Koay et al. 2014) by another participant acting as the giver, using only one hand. The fragile container and heavy liquid were chosen to increase the risk of handover failure. In this setting, the giver naturally senses that the receiver's ability is limited and that they must take more responsibility for handover safety rather than relying on the receiver's ability. Meanwhile, the receiver becomes more sensitive to the giver's actions and forms high mental expectations. Note that the givers were not blindfolded because they were expected to make optimal choices with full perception. Motion features that remained consistent across handovers were considered to reflect the comprehensive judgment of a human giver with full perception.
Thirty-four participants (26 males, 8 females) aged between 23 and 32 were invited to join this study. None of them had been previously involved in the system design or related research. The ethical committee of the university approved the study, and all participants gave their informed consent before inclusion. They were randomly grouped into pairs and were instructed to perform the two types of handovers. To eliminate ordering effects, the two types of handovers were alternated. After 10 trials, the roles of giver and receiver were exchanged to complete another 10 trials. The participants were surveyed after each trial round. Finally, a free interview was conducted to gather insights about their experience. The survey was as follows.
1. Rate how easy it was to take the object for each handover (1—very hard to take, 5—very easy to take).
2. Rate the preference level for each handover (1—not at all preferred, 5—very much preferred).
After brief grasp adjustment, the variation of the object's acceleration is regular for one period (from Ps to Pt in Fig. 2a), after which no clear variation pattern is apparent. Moreover, object releases also cluster in this phase, as Fig. 2b and c show, especially between the acceleration reaching 100% of its maximum and dropping to −20% (152/170, 89.4%). The release counts passed the Shapiro–Wilk test \((p=0.134 >0.05)\), indicating a normal distribution. The object's average acceleration at release was 22.3% of the maximum, with a variance of 13.6%, during deceleration. Figure 2c also indicates that the release timing converges toward \(a_0\), the transition moment from acceleration to deceleration.
The Wilcoxon signed-rank test showed that the manually guided handover strategy made it significantly easier \((Z=-9.03, p<0.001)\) to pick up objects and was significantly more preferred \((Z=-9.04, p<0.001)\). Receivers described the experience as "[having] enough time to adjust [their grasp and movements]" and "[feeling] natural and relaxed". In contrast, the preplanned handover strategy made them "nervous" and "uncertain when the object would be released", especially when grasping in inappropriate positions. Similar feedback was received from givers.
The findings are:

- Human receivers adjust their grasp after physical contact; subsequent motion changes are relatively regular.
- The manually guided handover strategy makes it easier for human receivers to take objects and is preferred.
- Human givers prefer to release objects when the acceleration first drops and approaches zero.
4 Robot-to-human handover controller
Drawing upon the findings from the preliminary human-to-human handover study, a robot-to-human handover controller was designed, utilizing the manually guided handover strategy. As shown in Fig. 3, the controller uses compliant manually guided motion in response to the receiver’s pull. After providing tuning space for grasp adjustment without release judgment, it determines the release timing in motion based on consistent motion features.
4.1 Controller architecture
The proposed handover controller consists of two phases, as shown in Fig. 3: a pre-handover phase for posture adjustment and grip force modulation, and a physical exchange phase. The physical exchange phase is further divided into three sub-phases: Phase I, grasp adjustment; Phase II, release point selection; Phase III, load transfer at release.
In the pre-handover phase, the robot approaches from the side with gripper facing towards the receiver to reduce obstruction to the receiver’s actions (Ardón et al. 2021; Parastegari et al. 2017; Koay et al. 2014). Upon arrival, the robot will extend the object forward to the physical contact location P. This process is similar to a human giver’s reaching motion (Liu et al. 2021) and serves as a ready cue. The physical exchange phase starts when the receiver first contacts objects and ends when the giver fully releases objects.
The robotic arm switches between different control modes to ensure safety and fluency. During transportation, the position control mode is employed to deal with bumps and collisions (Chan et al. 2013). In Phases I and II, a torque control mode is used to achieve free manually guided motion. In Phase III, the position control mode is employed to avoid motion oscillations caused by load transfer and motion termination.
A handover failure occurs when the object moves outside the red dashed line without satisfying the release conditions. In these cases, once all external forces other than the load force disappear, the robotic arm returns to P.
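The phase structure and mode switching described above can be sketched as a small state machine. This is an illustrative sketch; the enum and function names below are ours, not from the paper:

```python
from enum import Enum, auto

class Phase(Enum):
    PRE_HANDOVER = auto()    # transport and reach to the contact location P
    GRIP_ADJUST = auto()     # Phase I: receiver adjusts the grasp
    RELEASE_SELECT = auto()  # Phase II: watch motion for the release timing
    LOAD_TRANSFER = auto()   # Phase III: open gripper, terminate motion

def control_mode(phase: Phase) -> str:
    """Select the arm control mode per phase (Sect. 4.1)."""
    if phase in (Phase.GRIP_ADJUST, Phase.RELEASE_SELECT):
        return "torque"    # free manually guided motion
    return "position"      # robustness during transport and load transfer
```

Keeping the mode decision in one place makes the switch back to position control at the moment of release explicit.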
4.2 Manually guided motion control
To achieve compliant motion behavior under the pulling force, a force-free controller inspired by Dong et al. (2019) and Hou et al. (2017) was employed in this work. Figure 4 shows the controller architecture.
The dynamic model of the robotic arm is designed as

$$\begin{aligned} {\varvec{M}}({\varvec{q}})\ddot{{\varvec{q}}}+{\varvec{C}}({\varvec{q}},\dot{{\varvec{q}}})\dot{{\varvec{q}}}+{\varvec{G}}({\varvec{q}})={\varvec{T}}+\varvec{{T}_{ext}}, \quad {\varvec{J}}\ddot{\varvec{\theta }}+{\varvec{T}}={\varvec{T}}_{{\varvec{m}}}-\varvec{T_f}, \end{aligned}$$
(1)

where \({\varvec{M}}({\varvec{q}})\), \({\varvec{C}}({\varvec{q}},\dot{{\varvec{q}}})\in R^{n \times n}\); \({\varvec{G}}({\varvec{q}})\), \({\varvec{q}}\), \(\dot{{\varvec{q}}}\), \(\ddot{{\varvec{q}}}\), \({\varvec{T}}\), \(\varvec{{T}_{ext}} \in R^n\). \({\varvec{M}}({\varvec{q}})\), \({\varvec{C}}({\varvec{q}}, \dot{{\varvec{q}}})\), \({\varvec{G}}({\varvec{q}})\) are the joint-space inertia, Coriolis, and gravity terms of the robotic arm with n joints, respectively; \({\varvec{q}}\), \(\dot{{\varvec{q}}}\), \(\ddot{{\varvec{q}}}\) are the joint angles, angular velocities, and angular accelerations, respectively; \({\varvec{T}}\), \(\varvec{{T}_{ext}}\) are the measured and external torques at the joint torque sensors, respectively. \({\varvec{J}}\in R^{n \times n}\) is the motor inertia matrix; \(\varvec{\theta }\), \(\ddot{\varvec{\theta }}\in R^n\) are the motor angles and angular accelerations; \({\varvec{T}}_{{\varvec{m}}}\in R^n\) is the output torque of the motors; \(\varvec{T_f}\in R^n\) represents the joint friction torques. Finally, the force-free controller is designed as

$$\begin{aligned} \varvec{{T}_{ext}}=\varvec{K_t}\left( {\varvec{T}}-{\varvec{G}}({\varvec{q}})\right) , \end{aligned}$$
(2)
where \(\varvec{K_t}=diag(K_{t 1}, K_{t 2},..., K_{t n} )\) is a constant matrix and diag (.) denotes the diagonal matrix. From Eq. (2), it can be seen that \({\varvec{T}}\) and \({\varvec{G}}\) are the main variables of \({\varvec{T}}_{\varvec{ext }}\). \({\varvec{T}}\) can be measured directly by the torque sensors on the joints of the robotic arm. \({\varvec{G}}\) represents the gravitational force acting on the robot arm and the object. For fixed loads, this term can be obtained by invoking the gravity estimation and parameter identification functions integrated inside the basic controller of the robotic arm. To address varying loads, relevant approaches can be found in the literature (Dong et al. 2019; Hou et al. 2017).
As a result, the controller effectively compensates for gravity and reduces the influence of friction and inertial forces, achieving stable movement activated by a small external force and self-balancing in the absence of external forces.
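As a rough numerical illustration, assume the external torque is estimated by scaling the difference between the measured joint torques and the gravity term with the constant diagonal gain \(\varvec{K_t}\) (a simplification of the force-free controllers in Dong et al. 2019 and Hou et al. 2017; the function name is ours):

```python
import numpy as np

def external_torque(T: np.ndarray, G: np.ndarray, K_t: np.ndarray) -> np.ndarray:
    """Estimate external joint torques for the force-free behavior.

    T   : measured joint torques (n,)
    G   : gravity torques of arm plus object (n,)
    K_t : constant diagonal gain matrix (n, n)
    """
    return K_t @ (T - G)
```

With no external force the measured torque equals the gravity term, the estimate vanishes, and the arm self-balances; a pull by the receiver produces a nonzero estimate that drives the compliant motion.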
4.3 Grip force modulation
To achieve a smooth release experience during manually guided motion, the grip force needs to be modulated at a suitable level that neither lets the object fall nor prolongs the release. The gripper used is three-fingered and multi-jointed without force sensors, which poses a challenge for modeling the grip force for different objects. However, since this is not the focus of this paper and the position control frequency of the gripper can reach 100 Hz, the force control requirements can be met by an open-loop position-based approach (Chan et al. 2013). The gripper is controlled by the number of turns the motor rotates, with 0 corresponding to fully open and 6800 to fully closed. The whole travel time is 1.2 s. The object mass m is obtained from the vertical force \(F_{out}\) at the end-effector.
Stationary state The friction and pull forces are approximately equal when the object is close to sliding. The forces were measured using an Imada ZTS-200N digital push-pull gauge. A quadratic polynomial is used to fit the relationship between the mass m and the rotation turns y to modulate the grip force (Chan et al. 2013):

$$\begin{aligned} y=b_1 m^2+b_2 m+b_3, \end{aligned}$$
(3)

where for the experimental cup used in Sect. 5.2, \(b_1=-1433\), \(b_2=3386\), \(b_3=1670\). The coefficient of determination of the fitted curve (\(R^2 > 0.99\)) indicates an excellent fit between the model and the actual data.
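A minimal sketch of the stationary-state modulation follows, assuming the fit takes the quadratic form \(y=b_1m^2+b_2m+b_3\) with the mass in kilograms (the paper reports the coefficients; the exact functional form and units are our assumption, and the helper name is ours):

```python
def rotation_turns(m: float, b1: float = -1433.0, b2: float = 3386.0,
                   b3: float = 1670.0) -> int:
    """Map object mass m (kg, assumed) to gripper motor turns y.

    Coefficients are the fitted values reported for the experimental cup.
    The result is clipped to the gripper travel: 0 = fully open,
    6800 = fully closed.
    """
    y = b1 * m**2 + b2 * m + b3
    return int(min(max(y, 0.0), 6800.0))
```

For the 667 g cup this yields a turn count comfortably inside the gripper's travel, i.e., a partial closure rather than a full clamp.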
Moving state As shown in Fig. 5b and d, when the object is pulled vertically downward, the required friction force \(F_f\) reaches its maximum, that is, \(F_f=F_{f 1}+F_{f 2}=m g+m a=M g\), where M is the virtual mass set for calculating the grip force:

$$\begin{aligned} M=\frac{m\left( g+a\right) }{g}. \end{aligned}$$
(4)

According to Parastegari et al. (2018), the relative acceleration of the object in the physical exchange phase is smaller than 4.5 m/s\(^2\). For safety, it is scaled up to 4.9 m/s\(^2\) (\(M=1.5\,m\)) to obtain y. The grip force can be further modulated according to the actual magnitude of the acceleration, which is discussed in detail in Sect. 5.5.
4.4 Energy consumption during release
The goal is to minimize the extra efforts required by the receiver to ensure an efficient release process. Prior to release, the robotic arm continues to assist in lifting the object and itself. The receiver is not responsible for the load, which enables them to retract with minimal effort. Figure 5a–e illustrates the representative motion and forces during a handover. The phases introduced correspond to Fig. 3. The robotic arm is considered to operate force-free, and full compensation is made for the object’s gravity.
In Phase I, the receiver has enough time to adjust the grasp before the object is released. Therefore, instead of ensuring a stable grasp, the focus is on analyzing energy consumption (Neranon 2018) to explore the appropriate release timing.
In Phase II, the object's velocity changes from 0 to \(v_0\). The energy E of the object is regarded as the sum of kinetic energy \(E_k\) and gravitational potential energy \(E_p\):

$$\begin{aligned} E=E_k+E_p=\frac{1}{2} m v^2+m g h, \end{aligned}$$
(6)

where h is the relative height and v is the current velocity. The object is supported by the robotic arm until it is released. The human receiver is only responsible for the kinetic energy change of the object, and not for the potential energy change, before release. According to the law of energy conservation, the work \(W_{ms}\) done by the receiver's pulling force \(F_h\) from the physical contact moment \(t_{s}\) to the release moment \(t_{0}\) is calculated as the increase in kinetic energy:

$$\begin{aligned} W_{ms}=\varDelta E_k=\frac{1}{2} m v_0^2. \end{aligned}$$
(7)
Figure 5e shows the release process in Phase III. The release is assumed to be complete when the object has completed a displacement of l. The velocity of the object changes from \(v_0\) at \(t_0\) to \(v_t\) at \(t_t\). According to the law of energy conservation,

$$\begin{aligned} W_h+W_f=\varDelta E_k+\varDelta E_p, \end{aligned}$$
(8)

where \(W_h\) is the work done by the receiver's pulling force, \(W_f\) is the work done by friction, \(\varDelta E_k=\frac{1}{2} m v_t^2-\frac{1}{2} m v_0^2\), \(\varDelta E_p=\pm mgl\sin \varphi\), and \(\alpha\), \(\theta\), \(\varphi\) are the angles of \(F_h\), v, and l, respectively. \(\varDelta E_p\) is positive for upward motion and negative for downward motion. \(v_t\) can be obtained from Eq. (8):

$$\begin{aligned} v_t=\sqrt{v_0^2+\frac{2\left( W_h+W_f-\varDelta E_p\right) }{m}}. \end{aligned}$$
(9)
The object’s velocity after release, \(v_t\), is influenced by \(W_f\), \(F_h\), \(\varDelta E_p\) and \(v_0\). To minimize the object’s velocity change with minimal \(F_h\), the grip force is modulated to reduce \(W_f\). \(\varDelta E_p\) can be negative to overcome \(W_f\) by selecting the appropriate timing for load transfer, which utilizes gravity to move the object downward. According to Eq. (7), the object’s velocity at the release moment, \(v_0\), is determined by \(W_{ms}\) and the resulting kinetic energy can help to overcome \(W_f\). The extension of Phase II could help to reduce the receiver’s effort. Therefore, it can be much easier to achieve an effortless and smooth release when \(v_0\) is large enough and the receiver is ready to take over the object.
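The energy bookkeeping above can be sketched numerically, assuming the scalar balance \(W_h+W_f=\varDelta E_k+\varDelta E_p\) implied by the text, with \(W_f\le 0\) when friction opposes the motion and \(\varDelta E_p<0\) for downward motion (the function name is ours):

```python
import math

def post_release_velocity(m: float, v0: float, W_h: float,
                          W_f: float, dE_p: float):
    """Object speed after completing the release displacement l.

    From W_h + W_f = dE_k + dE_p with dE_k = m(v_t^2 - v0^2)/2.
    Returns None if the balance would make v_t imaginary (the object
    would stop before completing the displacement).
    """
    v_sq = v0**2 + 2.0 * (W_h + W_f - dE_p) / m
    return math.sqrt(v_sq) if v_sq >= 0.0 else None
```

The sketch makes the qualitative claims visible: a negative \(\varDelta E_p\) (downward release) or a larger \(v_0\) both raise \(v_t\), offsetting the friction loss with less pulling work.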
5 Experiments
In this section, the performance of the controller employing the manually guided handover strategy was evaluated with three different release timings, and compared with an existing controller. The designs of several existing controllers informed the release-timing options.
5.1 Different controllers
Force is responsible for changes in motion, while a change in force expresses the receiver's intention. This implies that only when the goal (intention) of the motion changes does the receiver change the currently applied force, which in turn changes the motion. Therefore, motion changes are used to infer the receiver's intention. Different points in the retraction motion are strategically selected for quantitative analysis to explore the appropriate release timing (Kajikawa et al. 1995; Li et al. 2022). The receiver is presumed to be physically ready to apply forces and intervene in the object's motion when a significant change in the object's acceleration is observed. Transferring the load at these points is more likely to be consistent with their intentions and to be completed fluently. According to the results in Sects. 3 and 4.4, the release points shown in Fig. 6 are tested. The instantaneous velocity is estimated as the average velocity of the terminal positions over a 0.03 s sliding window, and the instantaneous acceleration is obtained from the velocity in the same way. Critical values, such as the maximum, are confirmed after a delay of three frames to reject jitter.
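The velocity estimation and the three-frame confirmation of critical values might look like the following sketch (the class and parameter names are ours; the 100 Hz cycle and 0.03 s window follow the text):

```python
from collections import deque

DT = 0.01  # 100 Hz control cycle

class MotionEstimator:
    """Finite-difference velocity over a sliding window of positions,
    with a peak detector that confirms the maximum only after `confirm`
    further frames have passed without a new peak (jitter rejection)."""

    def __init__(self, window: int = 3, confirm: int = 3):
        self.pos = deque(maxlen=window + 1)
        self.confirm = confirm
        self._peak = float("-inf")
        self._since_peak = 0

    def velocity(self, p: float) -> float:
        """Average velocity over the buffered window (0.03 s when full)."""
        self.pos.append(p)
        if len(self.pos) < 2:
            return 0.0
        span = (len(self.pos) - 1) * DT
        return (self.pos[-1] - self.pos[0]) / span

    def confirmed_max(self, value: float):
        """Return the peak once `confirm` frames pass without a new one."""
        if value > self._peak:
            self._peak, self._since_peak = value, 0
            return None
        self._since_peak += 1
        return self._peak if self._since_peak >= self.confirm else None
```

Acceleration can be estimated by feeding the velocity stream through a second instance of the same class.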
Manually guided handover controller 1 (MG1) releases the object when it first decelerates. It is expected that the object has been transferred to a suitable position and that the receiver's intentions are significantly changing (Li et al. 2022). The object's motion will then decelerate, which is compatible with the deceleration caused by the energy consumption at release.
Manually guided handover controller 2 (MG2) releases the object when its acceleration reaches the maximum. At this moment, the receiver is applying the maximum force (Neranon 2018).
Manually guided handover controller 3 (MG3) releases the object when its displacement reaches 1 cm. The original release condition in Bohren et al. (2011), a vertical displacement of 1 cm, was revised to avoid a high failure rate in the experimental setup.
Human-inspired handover controller (HI) releases the object upon detecting a vertical threshold force of 3 N (around 50% of the object's weight). The initial grip force is the same as that of the MG controllers, and the grip force is modulated linearly according to the perceived load change until the threshold is reached. This controller is representative of the preplanned handover strategy and has been shown to perform smoothly (Chan et al. 2013).
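The four release conditions can be summarized as simple predicates. The thresholds follow the text; the function signature and variable names are illustrative assumptions:

```python
def should_release(controller: str, a: float, a_max_confirmed,
                   displacement: float, vertical_force: float) -> bool:
    """Release predicates for the compared controllers (Sect. 5.1).

    a               : current object acceleration (after grasp adjustment)
    a_max_confirmed : confirmed acceleration maximum, or None if not yet
    displacement    : object displacement since first contact, in meters
    vertical_force  : measured vertical pull force, in newtons
    """
    if controller == "MG1":   # first deceleration
        return a < 0.0
    if controller == "MG2":   # acceleration has peaked (confirmed)
        return a_max_confirmed is not None
    if controller == "MG3":   # object displaced 1 cm
        return displacement >= 0.01
    if controller == "HI":    # vertical pull of 3 N (~50% object weight)
        return vertical_force >= 3.0
    raise ValueError(f"unknown controller: {controller}")
```

Writing the conditions side by side highlights that only MG1 and MG2 depend on the motion profile itself, which is what ties their release timing to the receiver's inferred intention.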
5.2 Implementation details
System setup The system is mainly composed of a mobile platform equipped with a 7-degree-of-freedom Kinova JACO2 lightweight robotic arm with a three-finger gripper. Each joint is equipped with a torque sensor. The system was implemented under the Robot Operating System (ROS) framework and operated at 100 Hz. In order to quantitatively evaluate the performance of each handover controller, an OptiTrack V120 Trio motion tracking system was utilized. The information obtained from this system was used solely for evaluation purposes. A cup filled with water weighing 667 ± 0.5 g was used.
Subjects Twenty-four participants (19 males, 5 females) aged between 22 and 55 from Southeast University were invited to participate in this study. None of them had been previously involved in the system design or related research. The ethical committee of the university approved the study, and all participants gave their informed consent before inclusion.
Procedure Each participant took part in the experiment individually. Each controller was experienced twice to familiarize participants with the task requirements. Similar to the setting in Sect. 3, participants were told to pick up the cup comfortably from the same place and bring it to the same location for drinking. Note that the control of the robotic arm would not change regardless of whether the receiver was blindfolded. Therefore, the performance of the four controllers was objectively evaluated without requiring participants to be blindfolded. A set of 16 trials was obtained using a balanced Latin square design. At the end of each trial, the participant was invited to take the survey described in Sect. 3. Finally, a free interview was conducted.
Metrics For the objective metrics, five quantitative metrics were employed for evaluation: Time for 1 cm, Release duration, Maximum acceleration, Trajectory length and Receiver effort. Each trial begins with physical contact and ends with the object completing a fixed displacement. Figure 7 shows two example trials. Since handovers using MG controllers involve transferring control of an object and changing its position, these metrics have been adapted based on the execution effect.
- Time for 1 cm This metric represents the time required from physical contact to produce a 1 cm displacement, which reflects representation-sharing capabilities and ease of taking (Chan et al. 2013).
- Release duration As illustrated in Fig. 7, for MG controllers, the release duration consists of two components: Time for 1 cm and the time elapsed from the release point to produce another 1 cm displacement. Notably, for HI, the gripper remains stationary before releasing, causing these two parts to overlap and making the Release duration equivalent to Time for 1 cm. A shorter Release duration indicates faster object release and a smoother overall process.
- Maximum acceleration This parameter is directly proportional to the maximum interaction force and motion overshoot (Parastegari et al. 2018). Smaller maximum acceleration values lead to a smoother object transfer process.
- Trajectory length This metric calculates the total length of the object's trajectory during one trial. It provides insight into the smoothness of the motion. If the trajectory is not smooth, motion overshoot occurs, resulting in a larger trajectory length.
- Receiver effort This is determined by the receiver's work W (Neranon 2018) done on the object from the physical contact moment \(t_s\) to the target location reaching moment \(t_{task}\). This metric measures the amount of effort the receiver has to make to take the object.
Specifically, for Receiver effort, the release process (Phase III), which is very short and mainly affected by friction and gravity, is ignored for simplicity. This is approximately fair for all four controllers. Therefore, \(W=W_{ms}+\sum |\varDelta E|\), where \(W_{ms}\) is obtained from Eq. (7) and \(\sum |\varDelta E|\) is the cumulative change in the sum of kinetic and potential energies, caused by human work and related to motion overshoot and force overshoot. Taking the two trials in Fig. 7 as an example, the energy value at each specific point is calculated according to Eq. (6).
\(E_{task}\) is a set constant value. \(E_{t1}\) is larger than \(E_{task}\), while \(E_{t2}\) is smaller than \(E_{task}\). This difference is attributed to human motion localization error. To make the comparison fair, the extra work \((E_{t1}-E_{task})\) needs to be subtracted and the shortfall \((E_{task}-E_{t2})\) needs to be added. Thus, the corrected effort becomes \(W-(E_{t1}-E_{task})\) in the first case and \(W+(E_{task}-E_{t2})\) in the second.
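The energy bookkeeping above can be made concrete with a short sketch. Eq. (6) and Eq. (7) of the paper are not reproduced here, so the function below takes \(W_{ms}\) as a precomputed input and evaluates only the \(\sum |\varDelta E|\) term from a sampled object trajectory; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def receiver_effort(t, pos, mass, w_ms=0.0, g=9.81):
    """Sketch of W = W_ms + sum(|dE|) over a sampled trajectory.

    t    : (N,) timestamps in seconds
    pos  : (N, 3) object positions in metres; column 2 is height
    w_ms : precomputed work term corresponding to Eq. (7), taken as given
    """
    t, pos = np.asarray(t, float), np.asarray(pos, float)
    vel = np.gradient(pos, t, axis=0)                 # finite-difference velocity
    kinetic = 0.5 * mass * (vel ** 2).sum(axis=1)     # 0.5 m v^2
    potential = mass * g * pos[:, 2]                  # m g h
    E = kinetic + potential
    return w_ms + np.abs(np.diff(E)).sum()            # cumulative |dE|
```

For instance, lifting a 1 kg object by 10 cm at constant speed yields a cumulative energy change of about 0.98 J, while a purely horizontal constant-velocity pull contributes nothing beyond \(W_{ms}\).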
For the subjective metrics, perceived ease of use and preference are evaluated through the free interview and the same survey in Sect. 3.
5.3 Controller performance
Figure 8 shows the motion trajectories of a representative participant using all four controllers.
In general, trajectories of handovers using MG1 had less vertical motion space than the others and were smoother than MG2 and MG3.
Table 1 and Fig. 9 show the overall performance of each controller.
Figure 10 shows examples of each controller’s performance.
In terms of the overall performance of these controllers, the Shapiro-Wilk normality test and Mauchly's test of sphericity were conducted. Only the data of Maximum acceleration passed both tests. The data of Time for 1 cm and Release duration for HI did not pass the Shapiro-Wilk normality test, possibly due to outliers caused by participants' frequent adjustments of the pulling direction. The remaining data did not pass Mauchly's test of sphericity. Therefore, the data of Maximum acceleration were analyzed using a one-way repeated-measures analysis of variance (ANOVA) with Bonferroni-corrected post hoc tests, and the other data were analyzed using the nonparametric Friedman test with Dunn–Bonferroni post hoc tests.
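This analysis pipeline can be illustrated with SciPy. The sketch below runs the nonparametric branch on synthetic data fabricated for the example; it uses pairwise Wilcoxon signed-rank tests with Bonferroni correction as a stand-in for the Dunn–Bonferroni procedure used in the paper:

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic metric values: 24 participants x 4 controllers
# (columns 0..3 standing in for MG1, MG2, MG3, HI), with the
# last column shifted to mimic a slower controller
data = rng.gamma(2.0, 0.25, size=(24, 4))
data[:, 3] += 1.0

# Normality screen per controller (Shapiro-Wilk)
normal = all(stats.shapiro(data[:, j]).pvalue > 0.05 for j in range(4))

# Omnibus nonparametric test across the four related samples
chi2, p = stats.friedmanchisquare(*data.T)

# Post hoc pairwise comparisons with Bonferroni correction
pairs = list(combinations(range(4), 2))
posthoc = {
    (i, j): min(stats.wilcoxon(data[:, i], data[:, j]).pvalue * len(pairs), 1.0)
    for i, j in pairs
}
```

If `normal` were true and sphericity held, a repeated-measures ANOVA (available in packages such as pingouin) would replace the Friedman test.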
The Friedman test revealed a significant difference in Time for 1 cm \((\chi ^2(3)=134.675,p<0.001)\): post hoc comparisons showed no significant difference between MG1, MG2, and MG3 \((p>0.05)\), but all three were significantly shorter than HI \((p<0.001)\). For Release duration \((\chi ^2(3)=53.809,p<0.001)\), MG1 was significantly shorter than MG3 \((p<0.001)\) and HI \((p<0.001)\), but not significantly different from MG2 \((p>0.05)\). MG2 \((p<0.001)\) and MG3 \((p<0.05)\) were also significantly shorter than HI.
The ANOVA results revealed a significant difference in Maximum acceleration \((F=67.536, p<0.001)\). The maximum acceleration generated by MG1 was significantly smaller than that of MG2, MG3 and HI \((p<0.001)\), but there was no significant difference between MG2 and MG3, or between MG2 and HI \((p>0.05)\).
The Friedman test results also showed significant differences in Trajectory length \((\chi ^2(3)=165.888,p<0.001)\) and Receiver effort \((\chi ^2(3)=181.700,p<0.001)\). Compared to the other three controllers, the Trajectory length of MG1 was significantly shorter \((p<0.001)\) and its Receiver effort was significantly smaller \((p<0.001)\).
In terms of changes in position, velocity and acceleration during the handover, the object's acceleration curves indicated that the MG controllers completed the release judgment during a relatively smooth phase of acceleration change. Prior to release, the object's acceleration was small. Upon release, the object accelerated rapidly after a short deceleration period, and the maximum acceleration value appeared next, reflecting motion overshoot.
5.4 Survey responses and interviews
Figure 11 shows the survey results.
The Friedman test results \((\chi ^2(3)=152.040,p<0.001)\) indicated that it was significantly easier to take objects using MG1 than HI \((p<0.001)\), whereas there was no significant difference among the MG controllers. Moreover, MG1 was significantly more preferred than the other three controllers \((\chi ^2(3)=117.239,p<0.001)\).
In the interviews, almost all participants stated that their ability to locate and manipulate the object decreased when lying down. They could take the object efficiently only once they realized the rules of HI. The MG controllers allowed them to take the object from different directions. With MG3, the consistency of their motions was interrupted and they lacked time to adjust their grasp. MG2 differed from MG1 in that it produced a significant motion overshoot. For MG1, no significant interruptions in motion consistency were reported, and participants could take objects in a more relaxed, fluent manner. Additionally, no one felt that the robotic arm followed too closely when using the MG controllers.
5.5 Discussion and limitation
According to the experimental results, the significantly shorter Time for 1 cm of the MG controllers indicated that the manually guided handover strategy allowed the receiver to easily change the object's motion and achieve intuitive interactions. The comparison of Release duration showed that MG1 and MG2 released faster. However, for Maximum acceleration, MG2 was significantly larger than MG1, indicating that MG2 relied on the receiver's pull to complete the rapid release, while the performance of MG1 confirmed the analysis in Sects. 4.4 and 5.1. Maximum acceleration, Trajectory length and Receiver effort are all associated with motion overshoot, and MG1 performed significantly better than the other three controllers on each of these metrics. This indicated that releasing an object at the first deceleration was easy to control, which reduced extra effort and unintended motion overshoot. The reduction in Receiver effort also reflected fuller utilization of the robotic arm. The survey responses and interviews yielded consistent results: MG1 was preferred over the other controllers.
This study uses 'compliance' instead of 'prediction', an innovative attempt from a kinematic point of view. It has the potential to complement handover strategies that focus on perception and prediction, addressing issues such as the burden and potential errors of additional sensing, learning, and prediction. However, user-centered control relies heavily on accurate decision-making, and the receiver's ability to correctly perceive the situation plays a crucial role in achieving this accuracy. Beyond enhancing situational awareness by reducing the burden on the receiver, and thereby improving control and the overall performance of the handover task, perceptual differences must be carefully considered, including discrepancies between visual observation and actual perception, as well as between perceived and objective indicators (Vanderhaegen et al. 2023). Our future work will prioritize addressing the risks associated with these perceptual discrepancies.
The experimental results indicate that the proposed handover control method enables flexible real-time control and intuitive interaction. The receiver can freely manipulate an object to a suitable location as desired and smoothly gain control over the object to complete subsequent tasks using the proposed controller with proper release timing. However, if the receiver’s abilities are insufficient to allow them to smoothly pick up objects from a comfortable position, the effectiveness of the controller will be compromised. Conditions such as trembling hands necessitate further research in the future.
The grip force was modulated using the maximum acceleration obtained in the previous handover test for safety reasons. To achieve a fair comparison, grip force modulation was not performed after MG1 had obtained its maximum acceleration, which represents a promising direction for future improvement. Both subjective and objective evaluations indicated that the proposed controller with proper release timing (MG1) might be promising for serving people with manual operation constraints caused by different body postures or declined physical functions. However, the target population still needs to be invited to participate in further studies to improve the controller design. Moreover, manually guided motion is sensitive to the robotic arm's control performance, and more control methods need to be explored to enhance the receiver experience. In addition, although handovers of large-mass objects are not common in home-assistance scenarios, they deserve further research.
6 Conclusion
From the perspective of the receiver, this paper proposes, for the first time, a method to achieve intuitive and flexible object handover control with unrestricted manually guided motion. Using a human-inspired approach, a preliminary human study is first conducted, in which a constrained cognitive condition is created through a simulated scenario. This allows the object handover behavior of human receivers and givers to be explored under contextually consistent cognitive conditions. Based on the findings of the human study, a robot-to-human handover controller is designed for home-assistance scenarios. A general law of motion is used, which is less restrictive and requires neither additional sensors nor model learning. The results of handover experiments comparing the proposed controller with different release timings against an existing controller showed that the proposed controller with proper release timing enables a safe, efficient, and easily controlled handover.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Abbink DA, Mulder M, Boer ER (2012) Haptic shared control: smoothly shifting control authority? Cognit Technol Work 14:19–28
Ardón P, Cabrera ME, Pairet E, Petrick RP, Ramamoorthy S, Lohan KS, Cakmak M (2021) Affordance-aware handovers with human arm mobility constraints. IEEE Robot Autom Lett 6(2):3136–3143
Beard JR, Officer A, De Carvalho IA, Sadana R, Pot AM, Michel JP, Lloyd-Sherlock P, Epping-Jordan JE, Peeters GG, Mahanani WR et al (2016) The world report on ageing and health: a policy framework for healthy ageing. Lancet 387(10033):2145–2154
Bedaf S, Draper H, Gelderblom GJ, Sorell T, de Witte L (2016) Can a service robot which supports independent living of older people disobey a command? The views of older people, informal carers and professional caregivers on the acceptability of robots. Int J Soc Robot 8(3):409–420
Bestick A, Bajcsy R, Dragan AD (2016) Implicitly assisting humans to choose good grasps in robot to human handovers. Proceedings of international symposium experiment robotics. Springer, pp 341–354
Bestick A, Pandya R, Bajcsy R, Dragan AD (2018) Learning human ergonomic preferences for handovers. In: Proceedings of IEEE international conference robotics automation (ICRA). pp 3257–3264
Bohren J, Rusu RB, Jones EG, Marder-Eppstein E, Pantofaru C, Wise M, Mösenlechner L, Meeussen W, Holzer S (2011) Towards autonomous robotic butlers: Lessons learned with the PR2. In: Proceedings IEEE international conference robotics automation. pp 5568–5575
Chan WP, Parker CA, Van der Loos HM, Croft EA (2013) A human-inspired object handover controller. Int J Robot Res 32(8):971–983
Cini F, Ortenzi V, Corke P, Controzzi M (2019) On the choice of grasp type and location when handing over an object. Sci Robot 4(27):eaau9757
Costanzo M, De Maria G, Natale C (2021) Handover control for human–robot and robot–robot collaboration. Front Robot AI 8:672995
Dong K, Liu H, Zhu X, Wang X, Xu F, Liang B (2019) Force-free control for the flexible-joint robot in human–robot interaction. Comput Electr Eng 73:9–22
Edsinger A, Kemp CC (2007) Human–robot interaction for cooperative manipulation: handing objects to one another. In: Proceedings 16th IEEE International Symposium Robot Human Interactions Communication. pp 1167–1172
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Factors 37(1):32–64
Gómez Eguíluz A, Rañó I, Coleman SA, McGinnity TM (2019) Reliable robotic handovers through tactile sensing. Auton Robots 43(7):1623–1637
Grigore EC, Eder K, Pipe AG, Melhuish C, Leonards U (2013) Joint action understanding improves robot-to-human object handover. In: Proceedings IEEE/RSJ international conference intelligent robots system. pp 4622–4629
Haninger K, Radke M, Vick A, Krüger J (2022) Towards high-payload admittance control for manual guidance with environmental contact. IEEE Robot Autom Lett 7(2):4275–4282
Han Z, Yanco H (2019) The effects of proactive release behaviors during human–robot handovers. In: Proceedings 14th ACM/IEEE International Conference Human–Robot Interaction. pp 440–448
Hou C, Wang Z, Zhao Y, Song G (2017) Load adaptive force-free control for the direct teaching of robots. Robot 39(4):439–448
Johnson CO, Nguyen M, Roth GA, Nichols E, Alam T, Abate D, Abd-Allah F, Abdelalim A, Abraha HN, Abu-Rmeileh NM et al (2019) Global, regional, and national burden of stroke, 1990–2016: a systematic analysis for the global burden of disease study 2016. Lancet Neurol 18(5):439–458
Kajikawa S, Okino T, Ohba K, Inooka H (1995) Motion planning for hand-over between human and robot. In: Proceedings of IEEE/RSJ international conference intelligent robots system/human robot interaction cooperative robots, vol 1. pp. 193–199
Koay KL, Syrdal DS, Ashgari-Oskoei M, Walters ML, Dautenhahn K (2014) Social roles and baseline proxemic preferences for a domestic service robot. Int J Soc Robot 6(4):469–488
Kshirsagar A, Ravi RK, Kress-Gazit H, Hoffman G (2022) Timing-specified controllers with feedback for human–robot handovers. In: Proceedings of IEEE International Symposium Robot Human Interaction Communication. pp 1313–1320
Kupcsik A, Hsu D, Lee WS (2018) Learning dynamic robot-to-human object handover from human feedback. Robot Res 1:161–176
Li Y, Yang L, Huang D, Yang C, Xia J (2022) A proactive controller for human-driven robots based on force/motion observer mechanisms. IEEE Trans Syst Man Cybern Syst 52:6211–6221
Liu D, Wang X, Cong M, Du Y, Zou Q, Zhang X (2021) Object transfer point predicting based on human comfort model for human–robot handover. IEEE Trans Instrum Meas 70:1–11
Majumder S, Aghayi E, Noferesti M, Memarzadeh-Tehran H, Mondal T, Pang Z, Deen MJ (2017) Smart homes for elderly healthcare-recent advances and research challenges. Sensors 17(11):2496
Medina JR, Duvallet F, Karnam M, Billard A (2016) A human-inspired controller for fluid human–robot handovers. In: Proceedings of IEEE-RAS International Conference Humanoid Robots, pp 324–331
Mitzner TL, Chen TL, Kemp CC, Rogers WA (2014) Identifying the potential for robotics to assist older adults in different living environments. Int J Soc Robot 6(2):213–227
Moon A, Troniak DM, Gleeson B, Pan MK, Zheng M, Blumer BA, MacLean K, Croft EA (2014) Meet me where i’m gazing: how shared attention gaze affects human–robot handover timing. In: Proceedings ACM/IEEE International Conference Human–Robot Interactions. pp 334–341
Neranon P (2018) Robot-to-human object handover using a behavioural control strategy. In: Proceedings IEEE international conference smart instrumentation, measurement, application. pp 1–6
Ortenzi V, Cosgun A, Pardi T, Chan WP, Croft E, Kulić D (2021) Object handovers: a review for robotics. IEEE Trans Robot 37(6):1855–1873
Parastegari S, Abbasi B, Noohi E, Zefran M (2017) Modeling human reaching phase in human–human object handover with application in robot–human handover. In: Proceedings of IEEE/RSJ International Conference Intelligent Robots System. pp 3597–3602
Parastegari S, Noohi E, Abbasi B, Žefran M (2018) Failure recovery in robot–human object handover. IEEE Trans Robot 34(3):660–673
Prada M, Remazeilles A, Koene A, Endo S (2014) Implementation and experimental validation of dynamic movement primitives for object handover. In: Proceedings of IEEE/RSJ international conference intelligent robots system. pp 2146–2153
Psomopoulou E, Doulgeri Z (2015) A human inspired stable object load transfer for robots in hand-over tasks. In: Proceedings of IEEE/RSJ international conference intelligent robots system. pp 491–496
Qin C, Song A, Wei L, Zhao Y (2023) A multimodal domestic service robot interaction system for people with declined abilities to express themselves. Intell Serv Robot 52:1–20
Robinson H, MacDonald B, Broadbent E (2014) The role of healthcare robots for older people at home: a review. Int J Soc Robot 6(4):575–591
Sebanz N, Knoblich G (2009) Prediction in joint action: what, when, and where. Top Cogn Sci 1(2):353–367
Sebanz N, Bekkering H, Knoblich G (2006) Joint action: bodies and minds moving together. Trends Cogn Sci 10(2):70–76
Sisbot EA, Alami R (2012) A human-aware manipulation planner. IEEE Trans Robot 28(5):1045–1057
Vanderhaegen F (2021) Heuristic-based method for conflict discovery of shared control between humans and autonomous systems—a driving automation case study. Robot Auton Syst 146:103867
Vanderhaegen F, Wolff M, Mollard R (2023) Repeatable effects of synchronizing perceptual tasks with heartbeat on perception-driven situation awareness. Cogn Syst Res 81:80–92
Zhang X, Sun X, Sun X, Sun W, Jha SK (2022a) Robust reversible audio watermarking scheme for telemedicine and privacy protection. CMC Comput Mat Contin 71(2):3035–3050
Zhang X, Zhang W, Sun W, Sun X, Jha SK (2022b) A robust 3-d medical watermarking based on wavelet transform for data protection. Comput Syst Sci Eng 41(3):1043–1056
Funding
This study was supported in part by the Basic Research Project of Leading Technology of Jiangsu Province under Grant No.BK20192004, in part by NSF of Jiangsu Province under Grant No.BK20232008; in part by Jiangsu Key Research and Development Plan under Grant No.BE2023023-1; in part by National Natural Science Foundation of China under grant numbers 92148205, 62272236, 62376128, in part by the Joint Fund Project 8091B042206, and in part by the Fundamental Research Funds for the Central Universities.
Author information
Authors and Affiliations
Contributions
Conceptualization: Chaolong Qin, Aiguo Song; Methodology: Chaolong Qin, Aiguo Song; Formal analysis and investigation: Chaolong Qin, Lifeng Zhu, Jianzhi Wang; Writing - original draft: Chaolong Qin; Writing - review and editing: Chaolong Qin, Aiguo Song, Lifeng Zhu, Xiaorui Zhang, Jianzhi Wang, Linhu Wei, Tianyuan Miao; Funding acquisition: Aiguo Song, Xiaorui Zhang; Resources: Aiguo Song; Supervision: Aiguo Song, Chaolong Qin, Lifeng Zhu. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Consent to participate
All participants were provided with information about the purpose, procedure, and potential risks of the experiments. Informed consent was obtained from all participating subjects.
Ethical approval
All the experiments were approved by the Ethics Committee of Southeast University and conformed to the Declaration of Helsinki.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Qin, C., Song, A., Zhu, L. et al. Exploring the interaction strategy and release timing for robot-to-human handovers with manually guided motion. Cogn Tech Work (2024). https://doi.org/10.1007/s10111-024-00773-7