1 Rehabilitation for Neurological Injury

As the world’s population ages, the demand for rehabilitation medicine services continues to increase. An older population means a higher prevalence of physical injuries (e.g., from falls) and mobility impairments, but also of neurological disabilities such as those experienced by stroke survivors. For rehabilitation-focused therapy (as opposed to therapy sessions for teaching compensatory techniques), strength training and neuroplasticity are regarded as key mechanisms of recovery. Neuroplasticity is the ability of the brain to alter and adapt its functionality through the formation of new neuronal connections. In stroke survivors, the repetitive exercise of affected neuromuscular pathways has been found to act as a catalyst for neuroplasticity [87]. As such, there is a strong motivation for rehabilitation services to emphasize repetitive physical exercise, both for patients’ strength training and for neuromuscular recovery. However, this also increases the burden on healthcare providers, not only through increased expenditures [60] but also through the higher physical demands on therapists who must support patients in these repeated exercises. This has created a need for innovation, as evidenced by the growth of research on technology-focused assistance over the last few decades [102]. This chapter presents two branches of innovation with immense potential in the field of physical rehabilitation: the concept of Learning from Demonstration (LfD) for robot programming, and the use of VR and AR displays to enhance serious games. Both innovations are discussed, and their applications to enhancing haptic interactions for physical rehabilitation are explained, in their respective sections.

1.1 Stroke

As the fifth leading cause of death globally, stroke causes approximately 6.5 million deaths each year [18]. It is defined as a clinical syndrome of presumed vascular origin, characterized by a disturbance of cerebral functions lasting more than 24 h or leading to death [64]. Common symptoms of stroke include a reduction in motor control (i.e., reduced movement, weakness, incoordination), sensory loss or alteration, impaired balance, and impaired tone (i.e., spasticity) [64]. In 2013, the prevalence of stroke among adults between 20 and 64 years of age reached approximately 11 million cases [44]. In Canada alone, stroke and other cardiovascular diseases (CVD) cost the healthcare system $22.2B in 2009 [117] and $20.9B in 2015 [60]. In the United States, the total cost of stroke and other CVD was estimated to be $316.1B in 2013 [18]. There is therefore a great incentive to make post-stroke rehabilitation as effective and efficient as possible, not only to lower its economic burden but also to improve the quality of life of a significant number of stroke survivors.

1.2 Neuroplasticity and Stroke Rehabilitation

Neuroplasticity can be defined as the ability of the brain to reorganize its structure, function, and connections for the purposes of development and learning or in response to the environment, disease, or injury [34]. For stroke patients, research has found that actively engaging in repetitive exercises can promote neuroplasticity, and this has been extensively explored for the purposes of aiding motor recovery [5, 34, 87]. Hemiparesis is one manifestation of post-stroke neuromotor disability to which the principles of neuroplasticity are applied. Hemiparesis can be described as paralysis of one side of the body (possibly also characterized by spasticity and sensory loss) [21] which, in the case of stroke, occurs as a result of a hemispheric lesion in the brain. Degeneration of motor neurons as a result of hemiparesis contributes greatly to the resultant muscle weakness [21, 98], and may be further compounded by hemispatial neglect [114, 150]. Constraint-Induced Movement Therapy (CIMT) is a family of treatments in which movement of a patient’s unaffected limbs is constrained while massed practice is performed with the affected limb; it has seen significant success when applied to hemiparetic stroke patients with the objectives of overcoming learned non-use and promoting neuroplasticity [96]. More recently, the facilitation of neuroplasticity has shifted away from having patients practice generic and repetitive single-limb exercises and towards bimanual exercises (for upper limb therapy) [22] or task-oriented therapy [79].

1.3 Activities of Daily Living

In the case of post-stroke rehabilitation, task-oriented therapy typically refers to the training and assessment of patients’ abilities to perform Activities of Daily Living (ADLs). The most basic ADLs typically include bathing, personal hygiene and grooming, dressing, toileting, functional mobility (i.e., locomoting and transferring to and from beds and chairs), and self-feeding [142]. The definition, however, can also encompass most actions that allow an individual to live independently. Conventionally, patients are trained through a combination of strength training and movement strategies, compensatory or otherwise, that allow them to perform ADLs; their capabilities are then assessed by having them perform the ADLs themselves. Therapists may also perform in-home training of patients to ensure they can successfully carry out ADLs. The use of ADLs as rehabilitation tasks themselves is not standard, although framing the focus of stroke rehabilitation around ADLs has been shown to improve independence and quality-of-life outcomes [6, 81].

1.4 Patient Motivation

Patient motivation is regarded as an important factor in predicting success rates for rehabilitation [55]. The physical and emotional impacts of events such as stroke affect the willingness of the patient to participate. While this motivation to rehabilitate can be influenced by multiple factors in the rehabilitation environment, such as family members, staff, and other stroke patients, a significant element lies in the rehabilitation exercises themselves. Some patients lack information about, and understanding of, the nature of their rehabilitation exercises [94]. Coupled with the long sessions and repetitive motions involved [43], this leaves patients uncertain about their rehabilitation outcomes. As a consequence, poor patient participation leads to longer inpatient rehabilitation stays and poorer motor improvement [84].

2 Haptics-Enabled Rehabilitation Robots

A major source of innovation in rehabilitation therapy is the inclusion of robots to facilitate treatment, an area that has seen significant development over the last three decades. The ability of robots to provide repetitive, high-intensity interactions without being subject to fatigue makes them an attractive means of providing the repetitive exercise that is fundamental to expediting a patient’s recovery [140]. A significant amount of research in the area has sought to improve the stability of these robots to make them patient-safe, as well as to give them the ability to adapt their behaviors, whether assisting or resisting a patient during exercise.

2.1 Brief History of Rehabilitation Robots

Initially, most robots used in rehabilitation were for assistive purposes. These robots did not aim to help the patient regain lost motor function; rather, they aimed to assist the patient in performing activities of daily living [147]. They were commonly seen as robots attached to wheelchairs to assist in eating and drinking, grabbing objects, and mobility [61]. It was not until the late 1980s that researchers started to pursue rehabilitation robotics for actual therapy use [86, 140]. Dedicated to assisting and augmenting motor function rehabilitation using robotic devices, research in rehabilitation robotics began as a way to alleviate therapists’ physical burden and produce more efficient rehabilitation techniques. In 1988, two double-link planar robots were coupled with a patient’s lower limb to provide continuous passive motion for rehabilitation [75]. This was soon followed by an upper-limb rehabilitation device in 1992, the MIT-MANUS, which was used for planar shoulder-and-elbow therapy [63]. Upper-limb rehabilitative devices were further developed after the advent of the MIT-MANUS. These include devices such as the Mirror-Image Movement Enabler (MIME) robotic device, which improved muscle movements through mirror-image training [88], and the Assisted Rehabilitation and Measurement (ARM) Guide, which functions both as an assessment and a rehabilitative tool [118]. Robotic rehabilitation targeting other areas of the body surfaced in the 2000s. These robotic devices allowed rehabilitation of areas such as the wrist [143] and the hand and fingers [145] for the upper limb, and gait and ankle training [29, 40] for the lower limb. More recently, robots designed for training patients to perform ADLs have been developed [58, 100].

2.2 Motivation for Robotic Rehabilitation

The inclusion of robots in therapy to provide therapist-robot-patient interactions presents distinct advantages over conventional therapist-patient interactions:

  • Conventional hand-over-hand intervention, in which therapists provide direct supervision and apply direct assistive/resistive forces as needed, is highly burdensome on the therapist. As a result, in practice, most therapy sessions are designed to maximize a patient’s self-direction, with a therapist designing and supervising interventions while providing mostly verbal guidance [129]. Otherwise, therapy sessions that involve the preferred hands-on interaction between the therapist and patient must be limited in duration and intensity (where intensity refers to the amount of activity performed), resulting in less practice for patients. Robots, on the other hand, can perform repetitive movements with superhuman accuracy and reliability without suffering from fatigue. This characteristic suits the repetitive nature of strength training and task-oriented therapies alike and can alleviate the physical burden on therapists.

  • Assessment in current rehabilitation practice is performed by using standardized assessments such as the Chedoke-McMaster Stroke Assessment [54], Fugl-Meyer Assessment of Sensorimotor Recovery After Stroke [52], and Modified Ashworth Scale [112]. These assessments are driven by therapists’ observations and are therefore examined strictly for validity and inter-rater reliability. While specific assessments may rate highly for either criterion, there can never be complete certainty in the results they provide due to the subjective nature of human raters and the resultant coarse resolution of the assessments. On the other hand, sensors in a robotic system can provide numerical measurements that can describe a patient’s performance during rehabilitation, which is ideal for supplementing the assessments mentioned.

  • The ability of robots to be automated is one of their most important strengths. In the context of facilitating rehabilitation, the automation of rehabilitation robots provides an opportunity to streamline therapy. For example, it becomes possible to time-share a single therapist across multiple patients more efficiently. Another example is intelligently automating the amount of assistance or resistance provided during therapy, a concept known as Assistance-as-Needed (AAN), which has received significant interest [20, 115, 130, 144]; a minimal sketch of such a graded-assistance law is given after this list. AAN makes it possible to introduce robotic assistance on a graduated basis, where the level of automation (from fully manual to fully autonomous) that best suits the needs of the therapist and patient can be automatically chosen.

  • Rehabilitative therapy, especially when presented in a “traditional” (i.e., face-to-face) manner, is inherently restricted by distance. Patients must either participate in rehabilitation sessions at a hospital or other rehabilitation center, or a therapist must visit a patient at their home. Where patients are situated in remote or otherwise difficult-to-access locations, providing rehabilitation may be exceedingly challenging and cost-inefficient. Telerehabilitation is the concept of providing rehabilitative support, assessment, and intervention over a distance, using internet-based communication as a medium for therapist-patient interaction [120]. This can take the form of purely audio or video communication; audiovisual communication with patient-robot (unilateral) interaction, with performance communicated over the internet; or haptic (bilateral) interaction between a therapist-side robot and a patient-side robot, also known as telerobotic therapy [11, 124, 125]. Telerehabilitation inherently addresses remote access and has received significant focus [27, 68]. Early indications from longitudinal studies of telerehabilitation also suggest small cost savings [71].
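
To make the AAN concept referenced above concrete, the following is a minimal sketch of a graded-assistance law in which a corrective spring force fades as measured patient performance improves. The gain law, function name, and parameter values are illustrative assumptions, not any published controller:

```python
import numpy as np

def aan_assist_force(x, x_target, k_max, performance):
    """Graded assistance: a corrective spring force whose stiffness fades
    as the patient's measured performance (0 = poor, 1 = good) improves."""
    stiffness = k_max * (1.0 - np.clip(performance, 0.0, 1.0))
    return stiffness * (np.asarray(x_target) - np.asarray(x))

# A struggling patient (performance 0.2) receives a strong pull towards
# the target, while a proficient one (0.9) is left mostly alone.
print(aan_assist_force([0.10, 0.00], [0.25, 0.05], k_max=200.0, performance=0.2))
print(aan_assist_force([0.10, 0.00], [0.25, 0.05], k_max=200.0, performance=0.9))
```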

2.3 Limitations of the Current State of the Art

Despite these advantages, however, robotic rehabilitation faces limitations. A selection of the most important limitations is given below:

  1. First and foremost, analyses of the efficacy of robotic rehabilitation are largely inconclusive as to whether it is more effective than, or merely as effective as, “conventional” therapy. Improvements in motor function have been shown in studies performed with the MIT-MANUS: post-stroke patients recruited for the study showed a statistically significant reduction in shoulder and elbow impairment compared to a group that underwent conventional therapy [141]. Likewise, post-stroke patients trained with the MIME system over an eight-week period showed improvements in reach and strength of the affected limb [89]; based on the Fugl-Meyer score, the MIME group produced better results than the conventional therapy group over the two months of therapy. On the other hand, motor improvements from training with the ARM Guide were found to be comparable to those of a control group receiving no robotic assistance [70]. That study hypothesized that the benefits of robotic rehabilitation may be tied directly to the interactions it provides, and that the robot-generated forces may provide no additional benefit. As a result, when put in context with the high initial costs of purchasing such robots, acceptance of robotic rehabilitation remains relatively low in clinical settings. The small participant sample sizes associated with each of these works further compound the uncertainty in the benefits of robotic rehabilitation.

  2. Rehabilitation robots have conventionally been programmed to provide interactions associated with a specific set of tasks, with no easy method of changing these tasks. As a result, the kinds of interactions a therapist can provide through the robotic medium are limited unless they or another person (usually a technician) are familiar with computer programming principles and can change the task and/or task-oriented behavior of the robot.

  3. Low patient motivation remains an issue in therapy even with the addition of robotics. Because robots allow for reduced therapist intervention, patients lose motivation due to the lack of encouragement, entertainment, and human interaction [30, 38, 93]; as noted earlier, motivation is an important predictor of successful rehabilitation outcomes.

3 Semi-autonomous Provision of Haptic Interactions for Robot-Assisted Intervention

An important consideration is that the field of rehabilitation robotics should focus on the use of robots as a supplement to conventional therapy and as enabling tools in the hands of therapists, rather than as replacements for them [65]. In this frame of mind, the field can be further developed even if rehabilitation robots are only as effective as (but not more effective than) conventional therapy methods, provided improvements are made in other areas (e.g., cost savings or freeing up therapists’ time), addressing Limitation 1. Providing semi-autonomy is one way to do so: semi-autonomy keeps the therapist in the loop but allows them to save time and effort by having the robot take a share of the intervention performed on the patient. Autonomy in robotics implies the existence of machine intelligence, which draws on the domain of machine learning research.

3.1 Machine Learning in Rehabilitation

The incorporation of machine learning algorithms in rehabilitation (robotic or conventional) has risen over the past two decades. The great majority of the literature focuses on the use of machine learning algorithms for classifying and recognizing a patient’s posture and movement, not for learning the interventions demonstrated by a therapist. Leightley et al. [82] evaluated the use of support vector machines (SVM) and random forest (RF) algorithms for learning and recognizing general human activities. Li et al. [85] used an SVM and a K-nearest neighbors (KNN) classifier to recognize gestures for hand rehabilitation exercises. Giorgino et al. [51] assessed the use of KNN, logistic regression (LR), and decision trees (DT) for identifying upper body posture using a flexible sensor system integrated into the patient’s clothes. McLeod et al. [99] compared the use of LR, naive Bayes (NB) classification, and a DT for discriminating between functional upper limb movements and those associated with walking.
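
To make this classification setting concrete, the sketch below trains an SVM to discriminate between two movement classes in the spirit of [82, 85]. The data are synthetic and the three movement features are hypothetical; this is not a reproduction of any cited study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-repetition features: range of motion (m),
# peak velocity (m/s), and a jerk-based smoothness score.
functional   = rng.normal([0.80, 1.20, 0.30], 0.10, size=(100, 3))
compensatory = rng.normal([0.55, 0.70, 0.90], 0.10, size=(100, 3))
X = np.vstack([functional, compensatory])
y = np.array([0] * 100 + [1] * 100)   # 0 = functional, 1 = compensatory

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
```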

However, the power of machine learning models is not limited to classifying movements. They also have the potential to provide predictions of a patient’s condition, which may serve as guides for planning the approach to rehabilitation. Zhu et al. [151] trained an SVM and a KNN classifier to predict a patient’s rehabilitation potential, both of which provided better predictive abilities than an assessment protocol currently used in the field. Yeh et al. [149] utilized an SVM to classify balance in able-bodied individuals and those with vestibular dysfunction. Begg et al. [17] also used an SVM to classify gait in younger, able-bodied participants and in elderly participants. Lastly, LeMoyne et al. [83] also implemented an SVM for classification of normal and hemiplegic ankle movement.

More recent applications of machine learning build on these works that classify both a patient’s movements and their condition, and now look to address the natural conclusion to this line of research: how rehabilitation systems should adjust either the task or their provided intervention based on these features of a patient. Barzilay et al. [15] train a neural network (NN) to adjust an upper limb rehabilitation task’s difficulty based on upper limb kinematics and electromyography (EMG) signals. Shirzad et al. [127] evaluate the use of KNN, NN, and discriminant analysis (DA) techniques for adjusting task difficulty in relation to a patient’s motor-performance and physiological features. Badesa et al. [14] perform a similar evaluation for perceptron learning algorithms, LR, DA, SVMs, NB, KNN, and K-center classifiers. Garate et al. [49] use a fuzzy logic algorithm to relate a patient’s joint kinematics to the motor primitive outputs of a Central Pattern Generator (CPG), which effectively provides assistance during gait through the control of an exoskeleton’s torques. Gui et al. [57] take a similar approach, this time using EEG measurements as the input to a DA algorithm that provides assistive exoskeleton trajectories through a CPG. It is important to note that in each of these works, the adaptation learned by the algorithms is not learned from demonstrations. Rather, these interactions are generated from predetermined models relating patient performance to task difficulty or desired assistance.

3.2 Learning from Demonstration for Haptic Interaction

Learning from Demonstration (LfD) describes a family of machine learning techniques in which a robot observes demonstrations of a task by a human operator (the “demonstration” phase) and learns a policy describing the desired task-oriented actions, which may or may not be acted upon by the robot in a later “reproduction” phase [13]. The terms “programming by demonstration” and “imitation learning” refer to the same concept. The policy learned through LfD techniques is central to its innovation and has seen implementation through mapping functions (classification and regression) or through system models (reinforcement learning) [9].

The advantages of using LfD techniques to program robots are clear. After the initial challenge of making the machine intelligent, i.e., teachable, programming the robot can be made as easy as physically holding the robot and moving it through a desired trajectory, a process known as kinesthetic teaching. Users themselves do not require knowledge of computer programming. The capabilities of the robot are completely dependent on the level of sophistication of the underlying learning algorithms and the number of sensors used to characterize a behavior; with highly sophisticated algorithms and sufficient sensors, it is possible to teach robots more complex aspects of tasks (e.g., understanding a user’s intent). The methodology of LfD also requires a human user to be involved in the programming process, meaning the aspect of interacting with an actual human is preserved and conveyed by means of imitation. Lastly, like any other implementation of machine learning for robotics, LfD allows for automation, which translates to time and cost savings.
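
As a concrete (if highly simplified) sketch of the mapping-function approach to LfD, the following encodes several noisy kinesthetic demonstrations of an assistive force profile with a Gaussian Mixture Model (GMM) and retrieves a policy via Gaussian Mixture Regression (GMR), the combination used in several of the works discussed later in this chapter. The task, data, and component count are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Five synthetic "demonstrations": assistive force f recorded against
# a scalar task-progress variable s in [0, 1].
rng = np.random.default_rng(1)
demos = []
for _ in range(5):
    s = np.linspace(0.0, 1.0, 100)
    f = 4.0 * np.sin(np.pi * s) + rng.normal(0.0, 0.2, s.size)
    demos.append(np.column_stack([s, f]))
data = np.vstack(demos)               # columns: [input s, output f]

gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(data)

def gmr(s_q):
    """E[f | s] under the fitted joint GMM (standard GMR)."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query input s_q.
    h = np.array([w[i] * np.exp(-0.5 * (s_q - mu[i, 0]) ** 2 / cov[i, 0, 0])
                  / np.sqrt(2.0 * np.pi * cov[i, 0, 0])
                  for i in range(gmm.n_components)])
    h /= h.sum()
    # Per-component conditional mean of the output dimension.
    cond = [mu[i, 1] + cov[i, 1, 0] / cov[i, 0, 0] * (s_q - mu[i, 0])
            for i in range(gmm.n_components)]
    return float(np.dot(h, cond))

print(f"Reproduced force at mid-task: {gmr(0.5):.2f}")   # close to 4.0
```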

The concept of semi-autonomous systems and LfD has seen extensive research in the past few decades. Application of LfD principles to human-robot interaction has naturally led to exploration of cooperative tasks. Calinon et al. [26] taught a robot to cooperatively lift a beam in a setup similar to what we propose here. Gribovskaya et al. [56] built upon the same work to ensure global asymptotic stability (GAS) of the system. Peternel et al. [116] created a variant to learn motion and compliance during a highly dynamic cooperative sawing task.

3.3 Learning Haptic Interactions Provided by a Therapist

LfD is an ideal method of introducing semi-autonomy into the field of rehabilitation robotics, one which our group has investigated since 2015. In addition to the benefits of enabling semi-autonomy mentioned previously, it is also a plausible method by which therapists with minimal programming experience can easily adjust not only the level of therapeutic assistance or resistance provided to a patient but also set up any number of different therapy tasks, addressing Limitation 2 (Fig. 10.1). This aspect of mutual adaptation, where users can explore and train robotic aides themselves, is an important step for rehabilitation robotics [16] and is proposed as a viable method of making robotic therapy cost-effective and personalized.

Fig. 10.1

An example of LfD for training a robot to provide haptic interactions that imitate a therapist’s intervention. In phases 1 and 2, the therapist will provide haptic interaction for the patient when performing a therapy task while the rehabilitation robot observes the intervention through kinesthetic teaching. The LfD algorithm will then be trained after phase 2. Later in phases 3 and 4, the robot will imitate the haptic interaction demonstrated by the therapist so as to allow the patient to practice in the absence of the therapist while still receiving haptic guidance

Few groups have applied LfD-based machine learning techniques specifically to the practice of physical therapy in rehabilitation medicine. Hansen et al. [59] use an adaptive logic network (ALN) to learn a model relating EMG signals to the timing of a patient’s activation of an assistive Functional Electrical Stimulation (FES) device during gait. Kostov et al. [77] perform similar work involving ALNs and inductive learning algorithms, instead relating foot pressure recordings to FES activation timing. Strazzulla et al. [132] use ridge regression techniques to learn myoelectric prosthetic control, characterized by EMG signals, during a user’s demonstrations.

Works that use LfD specifically to learn and reproduce the haptic interaction provided by a therapist during interventions represent one branch of the current state of the art in robotic rehabilitation. The merging of these two technologies exploits the hands-on nature of LfD-based robotic systems and addresses some of the shortcomings of robotic rehabilitation mentioned earlier (i.e., enabling cost savings and ease of programming). Lauretti et al. [80] optimized a system built on dynamic motor primitives for learning therapist-demonstrated paths for ADLs. Atashzar et al. [10] proposed a framework for both EMG- and haptics-based LfD, where the learning of therapeutic behaviors for an upper limb task was facilitated with an NN.

An extensive amount of research has been performed in this area by our group. Tao et al. [134] utilized a method based on linear least-squares regression to provide a simple estimate of the impedance inherent to a therapist’s intervention during cooperative performance of upper-limb ADLs with a patient. Maaref et al. [92] described the use of Gaussian Mixture Model (GMM)-based LfD as the underlying mechanism of an assist-as-needed paradigm, evaluating the system for providing haptic assistance in various upper-limb ADLs. Najafi et al. [107] learned the ideal task trajectory and interaction impedance provided by an able-bodied user with a GMM and provided user-experiment evaluations for an upper-limb movement therapy task. Martinez et al. [97] extended the Stable Estimator of Dynamical Systems (SEDS) learning algorithm developed in [76] in order to learn both motion- and force-based therapist interventions.

Our group has most recently extended the application of LfD-enhanced robotic rehabilitation in three works. Fong et al. [47] applied kinesthetic teaching principles to a robotic system in order to allow it to first learn and then imitate a therapist’s behavior when assisting a patient in a lower limb therapy task (Fig. 10.2). A therapist’s assistance in lifting a patient during treadmill-based gait therapy was statistically encoded by the system using a GMM (Fig. 10.3). Later, the therapist’s assistance was imitated by the robot, allowing the patient to continue practicing in the absence of the therapist. Preliminary experiments were performed by inexperienced users who took the role of an assisting therapist with healthy participants (wearing an elastic cord to simulate foot drop) playing the role of a patient. The system provided sufficient lifting assistance, but highlighted the importance of learning haptic interactions in the form of the therapist’s impedance as opposed to only their movement trajectories.

Fig. 10.2

Photo of the devices used for lifting assistance in [47]. In (a), the robot is moved by the therapist, who holds and presses on its end-effector force sensor. This provides lifting assistance to the patient, who walks at their selected pace on the treadmill while harnessed to the robot through the rope and clip. In (b), the motion tracker camera is shown placed in front of the patient so as to capture the positions of their toes, which are registered to markers placed on the tops of their shoes

Fig. 10.3

Data flow and interaction between different agents (i.e., the therapist, patient, robot, and motion tracker camera) throughout the LfD process in [47]. (a) Shows the process flow when the therapist and patient interact to provide training demonstrations for the GMM algorithm that is used. (b) Shows the process flow when the patient is practicing alone with assistance from the robot, which reproduces the learnt haptic interaction using GMR

Our group then applied a similar method of kinesthetic teaching for learning the impedance-based haptic interaction provided by a therapist during intervention in an upper-limb ADL [48]. The kinesthetic teaching process proposed that during performance of the task, the interaction forces exerted on the robot end-effector by each of the agents (task environment, patient, therapist) could be simplified as a set of spring forces, linearized about spatial points of the demonstration (Fig. 10.4). An estimate of the impedance-based interaction provided by the therapist could then be obtained by measuring the “performance differential”, i.e., differences in forces along the trajectory, between the patient practicing the task when assisted by the therapist and when attempting the task alone. Experimental validation of the system showed that the interaction impedance was faithfully reproduced, although the resolution of the learnt interaction model briefly produced inaccurate haptic interaction.

Fig. 10.4

A simplified diagram of position-based impedance retrieval for reproduction of the therapist’s behavior in [48]. In (a), activation weights for the first Gaussian component of a GMM (colored blue) are highest when the robot is in close proximity to the component. A stiffness constant is retrieved for the corresponding Gaussian and used to generate the forces learned from the therapist. In (b), a different stiffness constant is used when the patient progresses into the spatial coordinates associated with a different Gaussian component (colored red). In actual reproduction, the retrieved stiffness constant was composed of a mixture of the learned stiffness constants influenced by multiple components, instead of a single constant from the influence of a single component as shown. (c) shows the experimental setup used for validation
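
The following is a minimal numerical sketch of the “performance differential” idea described above: given matched samples of the interaction force with and without the therapist, one spring constant is fit by least squares around each anchor point. The one-dimensional simplification, sampling, and anchor spacing are illustrative assumptions, not the implementation in [48]:

```python
import numpy as np

def local_stiffness(x, f_diff, anchors, radius=0.05):
    """Fit one spring constant per anchor c, modelling the therapist's
    contribution locally as f_diff ~ k * (c - x).
    x: (N,) positions; f_diff: (N,) assisted-minus-alone force samples."""
    ks = []
    for c in anchors:
        near = np.abs(x - c) < radius          # samples near this anchor
        disp = c - x[near]
        # Least-squares spring constant k = <disp, f> / <disp, disp>.
        ks.append(float(disp @ f_diff[near] / (disp @ disp + 1e-9)))
    return np.array(ks)

# Synthetic check: a true stiffness of 50 N/m is recovered at each anchor.
rng = np.random.default_rng(2)
x = rng.uniform(0.0, 0.4, 500)
anchors = np.array([0.1, 0.2, 0.3])
f_diff = 50.0 * (anchors[np.abs(x[:, None] - anchors).argmin(1)] - x)
print(local_stiffness(x, f_diff, anchors))
```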

The GMM-based LfD system was also applied to a bilateral telerobotic setup to enable telerobotic rehabilitation for home-based delivery. A GMM and GMR-based approach to LfD was implemented with the purpose of learning therapeutic interventions in a collaborative ADL task, where the intervention was dependent on the patient’s upper limb position and velocity. By training the GMM with patient performance (represented by their limb velocity) as a model input, the LfD algorithm inherently learned the adaptive nature of the therapist’s intervention with respect to a patient’s level of ability (Figs. 10.5 and 10.6).

Fig. 10.5

Illustrations of the telerehabilitation system with LfD proposed by our group. The demonstration phase is shown in (a) where the patient interacts with the therapist, and the reproduction phase in (b) where the patient interacts with a slave robot that emulates the therapist’s behavior

Fig. 10.6

Experimental setup used by our group for bilateral haptics-enabled intervention. (a) shows the HD2 High Definition Haptic Device (Quanser Inc., Markham, Ontario, Canada) used as the master robot by the therapist; (b) shows the Motoman SIA-5F (Yaskawa America, Inc., Miamisburg, Ohio, USA) industrial robot that interacts with the patient

Lastly, our group also provided a comparison of the single-robot and telerobotic modalities previously implemented, referred to as Robot-Mediated and Telerobotic-Mediated Kinesthetic Teaching (RMKT and TMKT), for implementing LfD in robotic rehabilitation. The study provided incentive for rehabilitation-oriented systems to pursue RMKT designs, as demonstrations provided through that modality were found to be more consistent (Fig. 10.7).

Fig. 10.7

Experimental setups and results for our group’s work comparing RMKT and TMKT. (a) shows the RMKT setup: the therapist, patient, and robot force sensor hold and open the drawer together. (b) shows the master robot that is added in TMKT: the therapist holds the master robot and moves the task-side robot through a direct force reflection control loop. (c) shows a comparison of the consistency of user demonstrations (which is proposed to be inversely related to the variance in the trained model’s output) for the two modalities

4 Serious Games

First coined by Abt in his 1970 book [3] and popularized by Sawyer in 2002 [123], serious games have become a widely researched field of study, garnering a worldwide market worth €1.5 billion in 2010 [7]. It was not until the early 2000s, fueled by advancements in hardware and game development and by success in the commercial market, that researchers worldwide took an interest in the novel areas to which video games could be applied. Defined as games that serve a main purpose other than pure entertainment, serious games have been expanding into areas such as politics [146], the military [126], sports training [53], and health [135]. Serious games can be presented through any form of technology and can be of any genre. For instance, even commercial platforms intended for entertainment, such as the Nintendo Wii, the PlayStation 2, and the Microsoft Kinect, have been repurposed for research in the rehabilitation field [36, 39, 46]. Various conferences and seminars have emerged in response to the growing interest. In 2010, the Serious Play Conference [2] began as a leadership conference for developers of serious games and simulations from different fields of expertise in both industry and academia. To promote interdisciplinary research within the field, IEEE launched the first serious games conference in 2009, dubbed VS-GAMES: Games and Virtual Worlds for Serious Applications. Limitless in potential, games allow entertainment to be added to approaches to education, training, or rehabilitation. In essence, the purpose is to create an enjoyable environment for otherwise tedious activities.

4.1 Incorporating Haptic Interaction in Serious Games for Rehabilitation

Serious games have been shown to increase patient motivation [4, 19, 91], addressing Limitation 3. By combining serious games and robot-assisted rehabilitation, the patient becomes engaged in the exercise and may even “forget” that they are in a rehabilitation training session due to the immersion [111]. It has been shown that the combination of the two technologies in rehabilitation training leads to better outcomes than robotic assistance alone [24, 103].

The addition of serious games to the rehabilitation environment to augment physical therapeutic exercises has been shown to produce positive outcomes. Since the games can promote the use of both physical motions and mental processes, the patient becomes actively focused while doing exercises. One of the earlier documented applications of computer games in rehabilitation research was in 1993, for promoting arm reach using a Simon game in which patients were instructed to repeat a sequence of flashing lights by pressing coloured buttons [128]. The game structures the exercise such that performance components are actively acquired through goal-directed interaction with the environment instead of through random movements or mindless exercise [104].

The availability of various interfaces for serious gaming allows for adaptability and variety for the patient. From keyboards and mice to body tracking and head-mounted displays, a multitude of devices can be integrated into games. Robots and haptic interfaces can provide haptic feedback to patients to enable interaction with digital objects. For example, technologies like the Java Therapy System [119] allow different interfaces (traditional mice, force-feedback mice, force-feedback joysticks) to be used with games.

Serious gaming for rehabilitation does not end at the doors of the therapy clinic; it plays a key role in home-based rehabilitation. Patients who are discharged from hospital and sent home, but are required to do exercises by themselves, lack the motivational support of staff and peers in the clinic. The same applies to patients without access to clinics due to distance or other circumstances. Without motivational support, self-exercise often leads to low motivation and reduced adherence to the required daily exercise dose. Engaging patients at home with serious games for rehabilitation promotes adherence to the therapy.

Sources of motivation also come from the environment surrounding the patient. The therapist, the patient’s family, and peers in the rehabilitation clinic are all factors that can affect the patient’s motivation. Including these elements in a serious game further motivates patients by making the exercise seem more like a social activity. Jadhav et al. [66] developed a system that allows the therapist to take an active role in adjusting the complexity of the training regimen while the patient exercises in a remote environment. Multiplayer games with peers offer patients a shared experience and a sense of social acceptance [122]. Being able to cooperate or compete with one another heightens enjoyment and can potentially produce more intensive exercise than playing alone [108]. Furthermore, differences in performance levels between an able-bodied person and a post-stroke patient can be equalized such that both can take part in a rehabilitative game [95].

The type of information the patient receives during their exercise is also significant. In traditional rehabilitation practice, this may take the form of distinct changes in the difficulty of an exercise, the location of the user’s hand with respect to the target position, or general commentary from the therapist. Robotic therapy takes advantage of the ability to record quantitative information to accurately measure even minor improvements in the patient’s actions [78]. Coupled with games, the recorded data can be transformed into an exciting challenge to beat, such as using the patient’s progress as high scores or achievements to work towards, further engaging the patient [41, 135].

Moreover, since patients’ confidence may decrease when they recognize an increase in the intensity or difficulty of the task, the task difficulty can also be changed discreetly, unbeknownst to the patient. The game serves to draw attention away from the change, producing a masking effect.

Using games as a distractor can also alleviate pain. Patients can feel discouraged from continuing their rehabilitation exercises when they know that doing so inflicts pain. The same is often seen in pediatrics when children become difficult to manage due to a fear of needles. A study of patients undergoing burn rehabilitation therapy reported that less pain was experienced when the patients were distracted by VR; the patients spent less time thinking about the pain while distracted [62]. The pain can also be re-attributed to a different sensation (e.g., a needle’s sting can be represented as a warm object on the arm) in the VR environment to further draw attention away from it [113, 131].

5 Enhancing Haptic Interaction with Virtual and Augmented Reality

While serious games aid in motivating patients during rehabilitation exercises, they lack the sense of realism found in traditional rehabilitation practices. Real-world tasks involve interaction with real objects, such as peg-in-hole insertion, block stacking, and pick-and-place operations. Patients touch and see the objects they are interacting with at the exact same location, since the visual and haptic frames are aligned. However, the games used for rehabilitation are typically shown on a 2D screen in front of the user. The disconnect between the patient’s arm movement axis and how they see their cursor or avatar move on the screen may impose an unnecessary mental burden on patients, who must match their limb movements with what they see on the display. Furthermore, the scaling of movements may need to be accounted for if the workspace and screen are of different sizes. For patients whose cognitive functions are negatively affected by events such as a stroke or injury and who require upper-limb neurorehabilitation, this spatial disparity may make the task more difficult to perform than for those with no cognitive deficiency.

5.1 Motivation for Visual-Haptic Colocation in Rehabilitation

Visual-haptic colocation is the direct alignment of the virtual environment with the physical environment in terms of visual and physical interaction. In contrast, a typical rehabilitation environment has users performing rehabilitative tasks on a robot while looking at a screen for feedback on their movements, often in the form of a computer game. This spatial disparity between their arm movements and the on-screen movements may unintentionally impose an unnecessary mental load on the patient, as it requires a mental transformation between the two coordinate frames. This may be most evident in persons affected by mental disability [148]. To alleviate this problem, we propose to use a spatial AR setup in which the virtual and physical environments are aligned, allowing patients to control the rehabilitative tasks more intuitively.
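
A prerequisite for such colocation is calibrating the rigid transform between the robot’s (haptic) frame and the display’s frame. Below is a minimal sketch of the standard SVD-based (Kabsch) fit, assuming a small set of corresponding points has been collected in both frames (e.g., by touching projected calibration targets with the robot end-effector); it is illustrative rather than our group’s actual calibration procedure:

```python
import numpy as np

def fit_rigid_transform(p_robot, p_display):
    """Least-squares rotation R and translation t such that
    p_display ~ R @ p_robot + t (Kabsch algorithm).
    Both inputs are (N, 3) arrays of corresponding points."""
    pc, qc = p_robot.mean(axis=0), p_display.mean(axis=0)
    H = (p_robot - pc).T @ (p_display - qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

# Once calibrated, every haptic interaction point can be drawn exactly
# where the patient's hand is: x_display = R @ x_robot + t
```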

5.2 Virtual Reality and Augmented Reality

Game environments can be shown to users through different display techniques. This can range from a typical 2D computer screen to more state-of-the-art technologies such as the Microsoft Hololens. The environment presented to the user can be described through the Virtuality Continuum (VC) [101] (Fig. 10.8).

Fig. 10.8
figure 8

The Virtuality Continuum introduced by Milgram et al. to categorize different mixed reality environments

The concept of the VC illustrates how the presentation of objects is not limited to a purely virtual or purely real environment. Looking through a cell phone camera and seeing the real world is considered part of the left end of the continuum, the real environment. On the other hand, games like Super Mario Brothers belong purely to the virtual environment (or virtual reality) at the right end of the continuum. The term “Mixed Reality” arises when these two environments are combined within a single display. Depending on the proportion of real and virtual environments represented, the result can be categorized as either Augmented Reality (AR) or Augmented Virtuality (AV). In AR, virtual objects are integrated into the real environment; that is, the real world is augmented with virtual (computer-generated) objects. From the user’s viewpoint, these objects are seamlessly integrated as if they physically existed alongside the real-world objects, potentially allowing users to digitally interact with their surrounding environment. Games such as Pokemon Go, where virtual objects are overlaid on the camera feed, are considered AR. Conversely, AV adds real objects into the virtual environment. This can be imagined as displaying the user’s actual hand in a virtual environment to interact with virtual objects. It should not be confused with AR: in AV, the entire real environment is overlaid with a virtual one in which only certain real objects are unmasked such that they appear in the virtualized world [50]. It is important to note that these environments are not restricted to a specific type of display, such as a head-mounted display; even a simple 2D screen is capable of displaying them.

Although the majority of AR and VR development is geared towards visual use, some research extends these technologies to the other senses as well [137]. For example, applications of AR in surgery involve active sensorimotor augmentation techniques to either provide supplementary perceptual information or augment the motor control capabilities of the surgeon during robot-assisted surgeries [12].

5.2.1 Virtual Reality Displays

There are two main ways in which VR is presented: non-immersive VR through a 2D computer screen, or immersive 3D VR using a head-mounted display (HMD). The ReJoyce by Rehabtronics [1] is an example of a non-immersive VR system controlled by a passive robotic interface. A multitude of games can be selected and played using the various components incorporated into the robotic interface. As for immersive VR, the user is brought into a completely virtual 3D environment. The virtual surroundings can be configured in any manner regardless of genre or theme, allowing for creative ways of incorporating it into different applications in the healthcare field. Immersive VR has been used in studies such as treating phobias [33], assessing mild traumatic brain injury [121], gait therapy [67], and hand rehabilitation [31].

5.2.2 Augmented Reality Displays

Video see-through (VST), optical see-through (OST), and spatial AR (i.e., projection-based AR) are the three most common ways of displaying AR [137]. VST captures a video feed of the real world and overlays virtual objects directly on the live video [28]. Registration of the virtual objects can be done using fiducial markers attached to real-world surfaces [50]. By digitizing the real-world environment through a camera, it becomes much easier to manipulate the environment using image processing tools, as both the virtual objects and the real world are now in the same digital space. This means aspects such as the contrast, orientation, and size of a virtual object can be calibrated more easily against the real world. With OST, the virtual and real aspects are combined through a semi-transparent mirror [69]: the user sees through the display, and the virtual objects are reflected on it for the user to see. OST lets the user experience their environment directly without relying on the screen of a VST, so the perceived resolution of the real world is as good as the user’s eyesight. However, OST setups may be bulkier, as cameras, monitors, and a mirror are needed to assess the environment and display virtual objects. Finally, projection is another method of implementing AR [139]. Projection removes the need for HMDs or for looking at a separate screen to see the computer-generated images. It can be used on both flat and 3D surfaces, with cameras or motion trackers enabling direct interaction. Techniques such as projection mapping allow the virtual objects to “pop out” and appear to be one with the environment. However, projection is limited to low-light working environments, since it can easily be overpowered by other light sources.
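
As an illustration of marker-based registration for VST AR, the sketch below detects a fiducial marker and recovers its camera-relative pose using OpenCV’s ArUco module (the pre-4.7 cv2.aruco API from opencv-contrib-python). The camera intrinsics, file name, and marker size are placeholder assumptions:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],       # assumed camera intrinsics from a
              [0.0, 800.0, 240.0],       # prior calibration step
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assume negligible lens distortion
MARKER_SIZE = 0.05                        # marker side length in metres

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")           # one frame of the live video feed
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Marker corners in the marker's own frame (z = 0 plane), ordered to
    # match ArUco's top-left, top-right, bottom-right, bottom-left output.
    h = MARKER_SIZE / 2.0
    obj = np.array([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]],
                   dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2), K, dist)
    # rvec/tvec place the marker relative to the camera; a virtual object
    # rendered with this pose appears anchored to the real-world surface.
```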

5.3 Virtual and Augmented Reality Rehabilitation Systems

Multiple rehabilitation tasks that incorporate AR and VR have been published in the literature. VR systems present a completely virtual environment to the user, allowing for creative scenarios unbounded by the physical limitations of real environments. AR systems let the user stay within their familiar environment while enabling interaction with virtual objects displayed in the real world. A brief survey of related work is presented to give insight into the technologies currently available for rehabilitation research, categorized by visual technique and by the presence or absence of haptics.

5.3.1 Virtual Reality Rehabilitation Systems

There are two main types of displays used in VR systems: non-immersive and immersive. Flat display screens, such as a computer monitor, fall under non-immersive systems. An example of a product on the market that implements non-immersive VR is the ReJoyce Rehabilitation Workstation, with a multitude of interactive games that simulate a variety of ADL exercises [45]. However, it does not provide haptic feedback during the tasks, only visual and auditory feedback. Immersive VR systems often use HMDs to envelop the patient in a fully virtual world. An Oculus Rift and a non-haptic glove, for example, were utilized by Kaminer et al. [72] for an immersive pick-and-place task. For their upper limb rehabilitation exercises, Andaluz et al. [8] paired the Oculus with the Novint Falcon haptic device. There are, however, fully immersive VR systems that surround the user with a large projection display, such as the CAVE [35] or CAREN [42].

5.3.2 Augmented Reality Rehabilitation Systems

AR systems come in three main forms: VST, OST, or projection. The level of immersion varies with the display type, as some systems use HMDs while others use monitor screens for VST and OST. Examples of rehabilitation research utilizing VST systems include Burke et al. [25], Correa et al. [32], and Vidrios-Serrano et al. [138]. Burke implemented a reaching task through a game similar to Atari’s Breakout, using fiducial markers to track real objects and allow interaction with the virtual environment. Correa developed an AR game involving the replication of musical tunes by occluding colored cubes in the proper sequence. Vidrios-Serrano used a haptic device to provide haptic feedback to users as they viewed the environment through an HMD.

For OST setups, Trojan et al. [136] took a non-haptic approach in developing a mirror training rehabilitation system suitable for home use. Luo et al. [90] created an AR-based hand opening rehabilitation setup using an HMD and a haptic glove; the glove was used to simulate the sensation of holding a real object during their grasp-and-release task.

For projection setups, Hondori et al. [105] created a non-haptic tabletop system for post-stroke hand rehabilitation which incorporated different games. These included interacting with a projected box to play sounds, holding a cup to pour out virtual water, and grasping circles of different sizes. Finally, Khademi et al. [74] implemented a spatial AR setup with a haptic device for monitoring human arm impedance. They also performed a comparison between AR and VR displays for a pick-and-place task [73].

5.4 Dimensionality of Virtual and Augmented Reality Environments

An additional consideration for VR and AR rehabilitation systems lies in the number of dimensions in which virtualized rehabilitation tasks are presented. With the recent surge of developments in VR and AR technologies, the presentation of virtual environments (for rehabilitation or other purposes) in immersive 3D implementations has become more affordable and feasible. Most rehabilitation literature that incorporates serious games involves 2D non-immersive VR displays, or 2D or 3D AR displays without colocation of the visual and motor axes. Devices such as the ReJoyce Rehabilitation Workstation have additional suites of interactive 2D games to motivate patients and improve upper limb function after stroke [1, 45]. GenVirtual, created by Correa et al. [32], is a musically-oriented spatial 2D AR game in which the user replicates a song by touching virtual cubes, which light up in sequence, in the same order as demonstrated. Gama et al. [37] developed MirrARbilitation, a VST 2D non-colocated AR system to encourage and guide users in a shoulder abduction therapy exercise.

In [110], our group integrated spatial AR into robotic rehabilitation to provide colocation between visual and haptic feedback for human users participating in a rehabilitative game (Fig. 10.9). A comparison of the effectiveness of VR vs. AR (i.e., non-colocation vs. colocation of vision) was performed. Each visualization technique was also compared in the absence and presence of haptic feedback and of cognitive loading (CL) for the human user. The system was evaluated by having 10 able-bodied participants play a game targeting upper-limb rehabilitation under all 8 combinations of conditions, lasting approximately 3 min per condition. The results showed that spatial AR (corresponding to colocation of the visual frame and hand frame) led to the best user performance on the task regardless of the presence or absence of haptics. It was also observed that for users undergoing cognitive loading, the combination of spatial AR and haptics produced the best result in terms of task completion time.

Fig. 10.9

Experimental setup used by our group to evaluate the effects of AR technology on performance of 2D rehabilitation tasks. Left: Actual setup. Task is projected onto the desk surface (projector is not in view). Right: Top-down diagram of what the user sees

The use of 3D VR and AR systems with robotics can be seen as the current state of the art in research on visual-haptic colocation. Vidrios-Serrano et al. [138] used a VST 3D non-colocated AR system integrated with a Phantom Omni device to interact with the virtual environment in a rehabilitation exercise. Broeren et al. [23] and Murphy et al. [106] used a haptic immersive workbench to test both able-bodied and stroke-impaired persons for rehabilitation and assessment using their OST 3D colocated AR system. Swapp et al. [133] studied the effectiveness of a 3D stereoscopic display AR system for colocated haptic feedback, examining the benefits of incorporating visual-haptic colocation as opposed to having no colocation. However, the study did not explore its effects in rehabilitation exercises and did not adjust the display to adapt to the user’s head movements.

Our group presented a culmination of these developments in [109]. The authors developed a 3D spatial augmented reality (AR) display to colocate visual and haptic feedback to the user in three rehabilitative games (Fig. 10.10). A projection-based AR system was implemented with an off-the-shelf projector displaying the virtualized environment on a curved, smooth screen. Head tracking and virtualization of the user’s workspace were performed by a Microsoft Kinect sensor, allowing the projection to match the user’s perspective and thereby maximize immersion (Fig. 10.11). The haptic feedback for each task was provided by a Quanser High Definition Haptic Device (HD2) (Quanser, Inc., Markham, Ontario, Canada) and was intended both to provide the user with guiding cues and to simulate their physical interactions with the virtualized task. To simulate a rehabilitation scenario, able-bodied participants were put under cognitive load (CL) to mimic disability-induced cognitive deficiencies while performing the tasks. A within-subjects analysis of ten participants was carried out for the rehabilitative games. Comparison of user task performance for the same games between the AR and VR setups found that AR enabled superior performance with or without cognitive loading. This result was most evident in exercises requiring quick reaction times and movement. Furthermore, even though AR showed a significant advantage over VR, results for one of the tasks indicated that the recorded performance for AR was similar between the non-CL and CL cases, showing the ability of AR to alleviate the negative effects of CL.

Fig. 10.10

The three virtual games designed in [109]: Snapping which involved navigating a cloud of points (left), Catching which involved catching falling objects (centre), and Ball Dropping which involved accurately dropping objects (right)

Fig. 10.11

Experimental setup used in [109]. Left: Actual setup with the task projected onto the screen (projector is not in view). Right: Model of the setup created in Unity
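
For readers interested in how a projection can be made to match a tracked user’s perspective, the sketch below computes a head-tracked off-axis view-projection matrix for a planar screen following Kooima’s generalized perspective projection. It is an illustrative approximation (the system in [109] projected onto a curved screen and was built in Unity), with assumed screen-corner coordinates:

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """OpenGL-style off-axis perspective matrix."""
    return np.array([[2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
                     [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
                     [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
                     [0.0,       0.0,       -1.0,          0.0]])

def head_tracked_vp(pa, pb, pc, pe, n=0.1, f=10.0):
    """pa, pb, pc: screen lower-left, lower-right, upper-left corners (m);
    pe: tracked eye position. Returns the 4x4 view-projection matrix."""
    vr = (pb - pa) / np.linalg.norm(pb - pa)          # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)          # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -(va @ vn)                                    # eye-to-screen distance
    l, r = (vr @ va) * n / d, (vr @ vb) * n / d
    b, t = (vu @ va) * n / d, (vu @ vc) * n / d
    M = np.eye(4); M[:3, :3] = np.column_stack([vr, vu, vn]).T
    T = np.eye(4); T[:3, 3] = -pe
    return frustum(l, r, b, t, n, f) @ M @ T

# Example: a 0.6 m x 0.4 m screen, eye tracked 0.5 m away and off-centre.
pa, pb, pc = (np.array(v) for v in ([-0.3, 0.0, 0.0],
                                    [0.3, 0.0, 0.0], [-0.3, 0.4, 0.0]))
print(head_tracked_vp(pa, pb, pc, pe=np.array([0.1, 0.2, 0.5])))
```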

Lastly, our group presented a system comprising a robotic arm for recreating the physical dynamics of functional tasks and a 3D Augmented Reality (AR) display for immersive visualization of the tasks, for Functional Capacity Evaluation (FCE) of injured workers. While this system could be used to simulate a multitude of occupational tasks, the study focused on one specific functional task. Participants performed a virtual version of a workplace task using the robot-AR system, and a physical version of the same task without the system. Preliminary results for two able-bodied users indicated that the robot-AR system’s haptic interactions resulted in upper-limb kinematics resembling those measured in the real-life equivalent task (Fig. 10.12).

Fig. 10.12

Painting task experimental setups used by our group for robot- and AR-enhanced FCE. Top: the robot-AR condition. Bottom: the real-life equivalent condition. The projector is not shown. Through AR, the paint roller pops out in 3D from the perspective of the user, in a geometrically correct position and orientation relative to the robot end-effector. The Yaskawa Motoman SIA-5F robot provides haptic feedback to the user that matches their interactions with the virtual task

6 Future Directions for LfD- and VR/AR-Enhanced Rehabilitation

The innovations presented in this chapter represent part of the forefront of technology-focused rehabilitation medicine research. As such, many aspects must still be developed before these technologies see clinical adoption. A few of the most important directions for future work are selected here.

6.1 Exploration of LfD Algorithms That Define Models Across the Task Workspace

The incorporation of machine learning techniques is still relatively new in the field of rehabilitation robotics. As such, a wide range of learning algorithms is present in the literature, none of which has been established as a definitive best option. A possible future direction would be to explore and fairly compare LfD algorithms so as to create guidelines for which are optimal for rehabilitation task and interaction learning. Algorithms that generate global models from demonstrations (i.e., models that cover the entire task workspace) may represent a good starting point. In such models, desired haptic interactions are defined for all patient behaviors, which is desirable for safety and ease of programming. This could be achieved through simple methods such as surface fitting, but could also be extended to more advanced concepts such as fitting Riemannian manifolds, or the SEDS algorithm as seen in [97].

6.2 Clinical Trials and Validation

A common limitation of the majority of the technologies presented in this chapter is that they are proof-of-concept systems, or have not been validated for patient-safe interaction. It is crucial to validate these systems by conducting longitudinal studies with actual patients who have a disability. Systems incorporating either of the proposed technologies (LfD and VR/AR) should be compared with a similar traditional rehabilitation setup by analyzing patient neuromuscular and cognitive improvement outcomes in order to determine their effectiveness against current methods. Emphasis should also be placed on recruiting large sample sizes, as the majority of studies to date have reported results for relatively small samples.