1 The smart house concept

Many existing studies recognize the importance of smart houses as an alternative that supports independent living and home care for the elderly and people with disabilities (Stefanov et al., 2004). During the last decade, a number of demonstration projects and experimental smart houses have been developed in several European countries to test new technologies in practice (Berlo, 1998). The arrangement of different devices for home automation and their control by the user has become a central topic in many projects, such as HS-ADEPT (Hammond et al., 1996), the model house at the Eindhoven University of Technology, The Netherlands (Vermeulen and Berlo, 1997), the AID (Assisted Interactive Dwelling) project (Bonner, 1998), the SmartBo project in Sweden (Elger and Furugren, 1998), the Gloucester Smart House in the UK (Orpwood et al., 2001), and Domotics (Denissen, 2001). Smart house arrangements integrate systems for temperature and humidity control, automatic systems for window closure in the rain, remote control of entrance doors, kitchen appliances, lights, TV, video, phone, and computers, and measurement of carbon dioxide, chemical, and gas concentrations, etc. Some experimental smart houses are also intended to test advanced monitoring systems that observe the physical status of the inhabitants as well as their activities: the Care Flats in Tönsberg, Norway (Smart Homes, URL), the Welfare Techno Houses in Japan (Kawarada et al., 1999), and the Smart House at Curtin University (West et al., 2005). Usually, all home-installed systems are linked via a network for common control and information exchange.

The initial smart house design concept, primarily limited to home automation, remote control, and simple tele-health monitoring, was recently expanded to include technical devices that provide assistance in mobility/manipulation and advanced equipment for human-machine communication, which allow the user to perform various complicated everyday tasks independently. Current smart house design has become oriented toward rehabilitation and service robots. Various new research projects show that robotic systems may assist people with movement limitations in many ways. For example, the Robotic Room developed at the University of Tokyo includes a ceiling-mounted robotic arm that can bring objects to a bedridden user (Nakata et al., 1996). The smart house project developed at the INT is based on a layered control structure, which includes a MANUS rehabilitation robot (Abdulrazak et al., 2001; Ghorbel et al., 2004; Mokhtari et al., 2004; Feki et al., 2004). The MOVAID robotic system (Dario et al., 1999) was intended as a component of a smart house arrangement to assist people with disabilities. Its smart house organization included a mobile, semiautonomous robot and a number of fixed workstations into which the mobile unit could physically dock. The prototype robot was tested in operations that assist the user in meal preparation and service (heating a meal in a microwave and transporting the plate of food to the user on the mobile robot base), removing soiled linen from the bed, etc. The Domotic-robotic Integrated System (Johnson et al., 2004; GIVING-A-HAND System) developed at the ARTs Lab is a modular concept that subdivides the personal assistant into robotic modules and off-the-shelf domotic modules, integrated through a domotic network to create a smart house environment. The project included case studies of two task-restricted robot appliances for a local kitchen environment: a fetch-and-carry robot appliance (GIVING-A-HAND) and a robot appliance for assistance with eating (SELFEED). The CAPDI project proposed an adaptive kitchen for severely disabled people (Casals et al., 1999). The project includes a low-cost design solution for a fetch-and-carry robot with few degrees of freedom, controlled by computer vision.

This paper is organized as follows. First, the Intelligent Sweet Home concept is introduced in Section 2. In Section 3, the development of the intelligent bed, intelligent wheelchair, and robotic hoist is presented, and a localization method for the mobile base is proposed. Human-machine interfaces based on hand gestures, voice, and body movements are described in Section 4. Conclusions follow in Section 5.

2 KAIST's Intelligent Sweet Home

2.1 History

The smart house work at KAIST started in 1996 with the design of a prototype robotic arm, KARES I, mounted on a wheelchair. Various control modes and forms of human-robot interaction were tested during that project (Song et al., 1999). The development continued with the KARES II project (Bien et al., 2004), in which a novel, compact, and highly flexible robot was built. In parallel with that research, various new interface systems were developed and tested in other research projects of the HWRS-ERC. The next project stage emphasized refining the smart house concept, developing a common strategy for control and information exchange, and designing new advanced robotic modules. An experimental smart house, called Intelligent Sweet Home, was built at KAIST for practical testing of the new design solutions and concepts.

2.2 Survey

To develop a strategy that responds to the specific preferences and needs of potential Korean users, the research team drew heavily on the results of a questionnaire survey (Kim et al., 2003) that we conducted among patients of the National Rehabilitation Center, district hospitals, and nursing homes. We involved 70 people between the ages of 18 and 75 who have lower-limb dysfunctions or are elderly, the two groups that form the main portion of Korean people with movement impairments. We sought to gather users' opinions regarding the effectiveness of the assistive systems they use, the importance they assign to various everyday activities, important tasks that existing assistive systems cannot support, the difficulties these users experience when performing certain tasks with assistive devices, the activities for which they currently need help from an external caregiver, and directions for further improvement of assistive devices. We also asked why existing assistive devices are inapplicable or inefficient in a range of tasks (for example, too big, too small, too heavy, too noisy, requiring too much external power, too complicated, or difficult to control). The survey results helped us define the scope and target tasks of our smart house design. The people interviewed gave very high priority to activities such as independence in going outdoors, assistance in meal preparation, eating, drinking, control of home appliances from the bed or wheelchair, and bringing or removing objects while they are in bed. The results also revealed that most of the people interviewed consider it very important to feel comfortable in the bed and wheelchair, where they spend most of their time. The majority of those surveyed noted that they have difficulties when they need to change their body posture or want to transfer between the bed and wheelchair.

2.3 The Intelligent Sweet Home concept

From a functional point of view, the smart house can be considered a large-scale robot capable of interacting with its inhabitant(s) in a way that provides gentle assistance, health monitoring of vital parameters, living comfort (appropriate humidity, temperature, lighting), and monitoring of home security. The smart house can respond to situation changes or a user's demands by activating one or more different agents (hoist, robotic arm, bed, etc.). Multiple assistive systems in the smart house can cooperate in the execution of a single task using synchronized actions and information sharing. Depending on the sensory information, each agent can act autonomously in certain tasks and situations. Cooperation between the systems may expand the range of tasks in which the user can be assisted and allow given tasks to be executed more efficiently and precisely.

We based our smart house design concept on the following main considerations:

  (1) USER-ORIENTED CONTROL: The algorithm for processing task requests should prioritize the user's commands associated with urgent needs and sensor signals that indicate a worsening of the user's health or safety risks.

  (2) FLEXIBILITY: The smart house organization should take into account the nature of the user's disabilities. Customization, based on a modular approach and a common communication protocol, can make the design process much faster, easier, and more cost effective.

  (3) EFFICIENCY IN ASSISTANCE AND HUMAN-MACHINE INTERACTION: Home-installed technology should support users efficiently, allowing them to perform important everyday activities independently. Human-machine interaction should require minimal cognitive load while offering a high speed of information transfer.

  (4) HUMAN-FRIENDLY DESIGN: Human-machine interaction should be human friendly. The smart house should possess a high level of intelligence in its control, actions, and interactions. The machine response should take the user's health and emotional state into account, offering a simple and natural means of interaction. The smart house environment should recognize the user's direct commands as well as detect and properly interpret the user's intentions.

  (5) COMFORTABLE POSTURE AND MOBILITY: Smart house technology should allow simple body posture changes, either on the user's command or automatically, based on timing algorithms and the pressure distribution history. The design should also provide technology for smooth transportation to most places within the home, wheelchair control modes that require only simple user instructions and automatic steering, and devices for effortless transfer of the user between the bed and wheelchair.

2.4 System architecture

Our smart house design includes several robotic modules to assist motion and mobility, devices for human-machine interaction, sensors, and health-monitoring building blocks. A general block diagram of the Intelligent Sweet Home is shown in Fig. 1. All home-installed components are integrated via a central control unit. Based on the sensory information and the user's commands, the central unit generates a set of actions for each agent and the sequence in which these actions are executed. Most of the assistive modules developed (intelligent wheelchair, robotic arm, etc.) have been designed to perform their inherent functions as well as to cooperate in tasks with other related systems. This approach has several advantages: such modules can be used as stand-alone devices, and their control can be based on high-level commands when linked to another system. The cooperation among home-installed systems leads to very efficient and precise execution of given tasks and expands the range of everyday tasks in which the inhabitant can be assisted. When a new module is added to the initial architecture, the management system modifies the earlier control strategy to allow more flexible and efficient use of each module. All systems in the proposed smart house are connected via a home network that includes both wired and wireless communication modules.
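
As a simple illustration of this coordination scheme, the following Python sketch shows how a central management unit could dispatch high-level commands to registered agent modules over a common interface. It is only an assumption about how such a dispatcher might look; all class and method names are hypothetical and do not reflect the actual Intelligent Sweet Home implementation.

```python
# Minimal sketch of the agent/management-unit idea described above.
# All names (Agent, ManagementUnit, handle, ...) are hypothetical.

from typing import Callable, Dict, List


class Agent:
    """A home-installed module (bed, wheelchair, hoist, ...) accepting high-level commands."""

    def __init__(self, name: str):
        self.name = name
        self._handlers: Dict[str, Callable[[dict], None]] = {}

    def register(self, command: str, handler: Callable[[dict], None]) -> None:
        self._handlers[command] = handler

    def handle(self, command: str, params: dict) -> None:
        if command in self._handlers:
            self._handlers[command](params)
        else:
            print(f"[{self.name}] unsupported command: {command}")


class ManagementUnit:
    """Central unit: turns a user request into an ordered list of (agent, command) actions."""

    def __init__(self):
        self.agents: Dict[str, Agent] = {}

    def add_agent(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def execute(self, plan: List[tuple]) -> None:
        # A plan is an ordered sequence of (agent_name, command, params).
        for agent_name, command, params in plan:
            self.agents[agent_name].handle(command, params)


if __name__ == "__main__":
    bed = Agent("intelligent_bed")
    bed.register("move_bar", lambda p: print("bed: moving bar to", p["position"]))

    unit = ManagementUnit()
    unit.add_agent(bed)
    unit.execute([("intelligent_bed", "move_bar", {"position": "chest"})])
```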

Fig. 1. Block diagram of the Intelligent Sweet Home

The project exploits various ideas of human-machine interfaces (HMI), emphasizing solutions that do not require attachment of intermediate sensors to the user and that reduce the user's cognitive load in task planning, command setting, and receipt of feedback information from home-installed devices. For this purpose, we explored interface concepts based on hand gestures, voice, body movements, and an intention-reading technique. The same HMI can be used to control various agents; for example, the gesture-based soft remote control system can control the TV, the air conditioner, the position of the mobile robot, etc.

Our smart house architecture contains two types of robotic modules: robots that help in manipulation, and robots for assistance in posture and mobility. The set of robots for assisting in manipulation includes a bed-mounted robot, a wheelchair-mounted robot, and a mobile robot for object delivery. The bed-mounted robot is responsible for tasks related to serving the user while they are in bed (bringing objects, covering the user with the quilt, etc.). In our prototype, a MANUS robotic arm was adopted and used as the bed-mounted robot (Kim et al., 2002). In certain tasks, this robot is assisted by a small mobile robot that transports objects to or from the bed. The wheelchair-mounted robot (developed in the KARES project) is intended to assist the user in handling various objects, opening doors, etc., while they are in the wheelchair. A detailed description of the design and functionality of the robots for object handling can be found in some of our previous papers (Song et al., 1999; Bien et al., 2004). The robots for manipulation assistance may be excluded from the smart house structure when the user has a sufficient range of motion, adequate muscle strength in the upper limbs, and adequate hand dexterity.

The intelligent bed has been designed to provide maximum posture comfort. The bed can change its configuration automatically or upon command. The bed also provides physical support for the user when changing posture via a special force-sensitive bar mechanism. The robotic hoist is an autonomous robot that transfers the user from the bed to the wheelchair and vice versa. The intelligent wheelchair provides easy user access to various places indoors and out. The sensor-based wheelchair facilitates user control through automatic obstacle avoidance maneuvers. The robots for assistance in posture and mobility are described in greater detail in the next section.

An original ceiling-mounted navigation system, called Artificial Stars, has been developed to provide information about the positions and orientations of the installed mobile robotic systems. At present, it is applied only to robotic hoist navigation; later, the same navigation information will be used for planning the paths of the intelligent wheelchair and the mobile robot for object delivery (these links are shown with dashed lines in Fig. 1). The pressure sensor system detects the user's posture in the bed, and the pressure information is used to control the bed configuration.

Fig. 2. Intelligent bed robot system

Fig. 3. Structure of supporting bar

3 Assistive robotic systems

This section describes in detail some of the newly developed components of our Intelligent Sweet Home design (Fig. 1).

3.1 Intelligent bed robot system

To enhance the user's comfort in bed, various intelligent beds have been developed. Some of the existing research projects relating to bed design are oriented toward the user's entertainment and health monitoring. For example, a multimedia bed at the MIT Media Lab was equipped with a projection display mounted above the bed to provide computing capabilities (Lieberman and Selker, 2000). Projection on the ceiling can provide users with multimedia, computer games, electronic books, etc., eliminating the need to prop themselves up on their elbows or on a pillow. The SleepSmart/Morpheus project aims at improving sleep quality by monitoring vital signs and using an adjustable bed frame to alleviate sleep disorder symptoms, such as snoring and apnea (Van der Loos et al., 2003). However, the existing systems do not assist users in changing their position in bed. In our system, we attached a bar-type robotic arm, as shown in Fig. 2, to actively support the user's body when he/she intends to change posture and position. The same robotic arm can also deliver objects on the tray mounted on it.

The robotic arm is shaped as a frame composed of two supporting bars (1 and 2) connected by a horizontal bar (3). A tray (4) is attached to bar 3. The height of the supporting bars (denoted as h in Fig. 2) can be actively controlled by a 50 W DC motor (A). The frame is mounted on a mobile platform that allows the robotic arm to reach and serve the whole area of the bed by moving along the bed (this motion is denoted by l in Fig. 2). Two 50 W DC motors (B) mounted on either side of the bed drive the platform. Two torque sensors mounted on each supporting bar measure the two-dimensional force applied to the horizontal bar 3, as shown in Fig. 3. If the force applied to the bar contains only a horizontal component \(f_x\), the torques \(\tau_A\) and \(\tau_B\) measured by the two torque sensors have the same direction, whereas a vertical force component \(f_y\) creates torques with opposing directions. When the user needs to change position in bed, he/she first positions the horizontal bar of the robotic arm approximately near the chest using a voice command. The horizontal bar approaches the user in a manner consistent with the user's position and posture, as shown in Fig. 4. The user then grasps the horizontal bar and adjusts its position and height by pushing or pulling. The forces exerted by the user are sensed by the torque sensors and cause movement of the supporting bars. This "follow" mode of the robotic arm can be changed to "support" mode by a voice command. In "support" mode, the supporting bars are fixed, and the user can slightly change position and posture by leaning on the horizontal bar.
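
The text above specifies only the qualitative behavior of the torque readings (same direction for a horizontal push, opposite directions for a vertical one). The sketch below shows one way such readings could be turned into estimated force components and a "follow"-mode velocity command; the lever arm, gains, and deadband are assumed values for illustration, not parameters of the actual system.

```python
# Illustrative sketch only: recovering force components from the two torque
# readings of one supporting bar and turning them into "follow"-mode velocities.
# The lever arm r and the admittance gains are assumed values, not from the paper.

def estimate_force(tau_a: float, tau_b: float, r: float = 0.4):
    """Same-sign torques -> horizontal force f_x; opposite-sign torques -> vertical force f_y."""
    f_x = (tau_a + tau_b) / (2.0 * r)   # common-mode component
    f_y = (tau_a - tau_b) / (2.0 * r)   # differential component
    return f_x, f_y


def follow_mode_command(tau_a: float, tau_b: float,
                        k_l: float = 0.05, k_h: float = 0.05,
                        deadband: float = 2.0):
    """Map sensed forces to velocities of the platform (l) and of the bar height (h)."""
    f_x, f_y = estimate_force(tau_a, tau_b)
    v_l = k_l * f_x if abs(f_x) > deadband else 0.0  # move along the bed
    v_h = k_h * f_y if abs(f_y) > deadband else 0.0  # raise/lower the bar
    return v_l, v_h


if __name__ == "__main__":
    # User pushes the bar horizontally: both torques have the same sign.
    print(follow_mode_command(tau_a=6.0, tau_b=5.0))
```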

Fig. 4. Various positions of the horizontal bar with differing movement along the bed of each supporting bar

3.2 Pressure sensor system

To position the horizontal bar near the user's chest by a voice command and achieve automatic bending of the bed to match the user's intention, we applied posture recognition and analysis of the history of the user's movements on the bed. For this purpose, we distributed an array of pressure sensors over the bed surface, as shown in Fig. 5. Force-Sensing Resistors (FSRs) are used to measure the pressure at the contact points. The FSR is a thin-film sensor made from a piezoresistive polymer whose resistance decreases in proportion to the force applied to its active surface. We used pressure sensors that can measure pressures up to 10 kg/cm². Sensors were uniformly distributed 70 mm apart vertically and 50 mm apart horizontally. The dimensions of the pressure-sensitive mat are 1900 mm × 800 mm × 12 mm. As the mat covers a bed with a changeable shape, it is divided into three parts whose dimensions match the bed segments, so that the mat bends where the bed bends. To measure the pressure values, the signal acquisition box scans the pressure sensors sequentially using multiplexers, reads the pressure value of the selected sensor, and transmits the data to the main computer as a pressure distribution image. The update rate of the whole image is 10 Hz, and the signal from each sensor is digitized with a resolution of 10 bits. By analyzing the pressure data, the program recognizes both the posture and the motion of the user. The algorithm is summarized in Section 4.3.
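
A minimal sketch of the acquisition loop described above is given below, assuming a hypothetical read_sensor(row, col) function standing in for one multiplexed ADC read; the 10-bit sample range and 10 Hz frame rate follow the text, while the grid shape (24 × 14 = 336 sensors) is an assumption made only for the example.

```python
# Sketch of the sequential FSR scan described above.  The grid shape (24 x 14 = 336)
# and read_sensor() are assumptions for illustration; 10-bit samples and a 10 Hz
# frame rate follow the text.

import time
import random

ROWS, COLS = 24, 14          # assumed layout of the 336 FSR sensors
FRAME_PERIOD = 0.1           # 10 Hz update rate of the whole pressure image


def read_sensor(row: int, col: int) -> int:
    """Placeholder for one multiplexed ADC read (10-bit value, 0..1023)."""
    return random.randint(0, 1023)


def scan_pressure_image():
    """Scan every sensor once and return the frame as a list of rows."""
    return [[read_sensor(r, c) for c in range(COLS)] for r in range(ROWS)]


if __name__ == "__main__":
    for _ in range(3):                      # grab a few frames
        t0 = time.time()
        frame = scan_pressure_image()
        print("max pressure value in frame:", max(max(row) for row in frame))
        time.sleep(max(0.0, FRAME_PERIOD - (time.time() - t0)))
```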

Fig. 5. Pressure sensors on the bed

3.3 Intelligent wheelchair

Various intelligent wheelchairs have been developed in research projects oriented toward different design aspects. Some of the wheelchair projects highlight automatic wheelchair control and functionality, such as obstacle avoidance and path planning, to achieve complex maneuvers (Levine et al., 1999; Bourhis et al., 2001; Pries et al., 1998; Prassler et al., 2001; Simpson et al., 2004; Hamagami and Hirata, 2004). Others prioritize innovative human-machine interfaces (Yanco, 1998; Mazo, 2001; Pries et al., 1998; Bien et al., 2004; Moon et al., 2003). Our intelligent wheelchair design focused on effective obstacle avoidance in autonomous navigation using a tilted Laser Range Finder (LRF) scanning mechanism.

As a base for our robotic wheelchair prototype, we used a commercial powered wheelchair, model "Chairman" by Permobil (shown in Fig. 6). The wheelchair has two drive motors, four additional motors to adjust the wheelchair seat, a power module, and a seat-actuator control module with RS232 inputs. In addition to the existing design, we installed on the back of the wheelchair an industrial personal computer (PC) running Linux with the Real-Time Application Interface (RTAI) for real-time processing. Wheelchair safety and the precision of autonomous navigation require a rapid response of the control system to changes in the surroundings: if the processing of sensor information is delayed, obstacles will be detected too late and dangerous situations, such as a collision with static or moving objects, may occur. Two incremental encoders were mounted on the driving wheels to measure their rotation and to support localization of the current wheelchair position. The odometric data obtained are used to calculate the relative position and orientation of the wheelchair.
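
The relative pose update from the encoder data can be computed with the standard differential-drive dead-reckoning equations. The sketch below is illustrative only: the wheel radius, track width, and encoder resolution are assumed values, not the parameters of the Permobil platform.

```python
import math

# Assumed parameters (not the actual Permobil values).
WHEEL_RADIUS = 0.17      # m
TRACK_WIDTH = 0.56       # m, distance between the driving wheels
TICKS_PER_REV = 2048     # encoder resolution


def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Standard differential-drive dead reckoning from encoder increments."""
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / TRACK_WIDTH
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return x, y, theta


if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    for left, right in [(100, 120), (100, 120), (110, 110)]:
        pose = update_pose(*pose, left, right)
    print("relative pose (x, y, theta):", pose)
```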

Fig. 6. Intelligent wheelchair platform

Fig. 7. The robotic hoists developed

The wheelchair can operate in three modes: manual, semiautonomous, and autonomous. In manual mode, the user can control the wheelchair using a standard joystick and a touch-sensitive LCD. The user's commands are directly transferred to the power module and the wheelchair behaves as a conventional powered-wheelchair. In semiautonomous mode, the user sets the general direction of wheelchair movement. If the user's command does not lead to dangerous situations, the wheelchair controller executes the user's instructions exactly. Otherwise, the wheelchair controller automatically modifies the dangerous commands of the user to be safe instructions that allow the wheelchair to steer around obstacles and prevent collisions. In autonomous mode, the wheelchair autonomously plans the route to the goal and behaves as a sensor-based mobile robot.

The wheelchair senses its immediate surroundings and detects obstacles on its route using a two-dimensional LRF and a given global map. In several wheelchair systems, such as MAid (Prassler et al., 2001), the LRF is oriented parallel to the floor and scans only the horizontal plane at the height of the LRF; it therefore cannot detect obstacles below that height, or it requires additional sensors to detect them. In our system, by contrast, the LRF is mounted on an aluminum support at a height of 170 cm and inclined downward at 25°, as shown in Fig. 6; it scans a 180° field of view at 0.5° angular resolution, and each complete scan takes 26 ms. Because the LRF scans a plane inclined from the height of the LRF down to the ground, 3-D environmental information can be obtained by accumulating consecutive 2-D scans as the wheelchair moves (Kim et al., 2005).
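
A minimal sketch of this projection step is given below, assuming the 170 cm mounting height, 25° tilt, and 0.5° angular resolution quoted above. The frame conventions (x forward, y to the left, z up, with the LRF centered above the wheelchair origin) are simplifying assumptions, not the actual calibration used in the project.

```python
import math

TILT = math.radians(25.0)   # downward tilt of the LRF (from the text)
LRF_HEIGHT = 1.70           # mounting height in metres (from the text)


def scan_to_points(ranges, wheelchair_pose, angle_min=-math.pi / 2,
                   angle_step=math.radians(0.5)):
    """Convert one tilted 2-D LRF scan into 3-D points in the world frame.

    ranges: range readings in metres (180 deg sweep, 0.5 deg steps)
    wheelchair_pose: (x, y, theta) of the wheelchair when the scan was taken
    """
    x_w, y_w, theta = wheelchair_pose
    points = []
    for i, r in enumerate(ranges):
        phi = angle_min + i * angle_step          # beam angle within the scan plane
        # Point in the wheelchair frame: the tilt rotates the scan plane downwards.
        x_r = r * math.cos(phi) * math.cos(TILT)
        y_r = r * math.sin(phi)
        z_r = LRF_HEIGHT - r * math.cos(phi) * math.sin(TILT)
        # Into the world frame using the odometric pose.
        x = x_w + x_r * math.cos(theta) - y_r * math.sin(theta)
        y = y_w + x_r * math.sin(theta) + y_r * math.cos(theta)
        points.append((x, y, z_r))
    return points


if __name__ == "__main__":
    demo = scan_to_points([2.0] * 361, (0.0, 0.0, 0.0))
    print("first 3-D point of the demo scan:", demo[0])
```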

3.4 Robotic hoist

Many health equipment companies offer devices to transfer a user between a wheelchair and bed. Most are suitable for institutional use and require an external helper to assist in the transfer procedure (Multi-Lift and Easy Base systems (Access Unlimited, URL); AryCare Patient Support System (AryCare, URL)). In addition, some other systems (Ceiling Track Lift System (Aluminum Extrusion, URL)) require complicated home modification for installation. We have designed and tested a compact transfer system for automatic transfer of the user between bed, wheelchair, bathtub, stool, etc.

As a base for our robotic hoist design, we used a simple mechanical hoist system. In addition, we developed a mobile base and installed the hoist on it. A sling system, attached to a horizontal bar of the hoist, was designed to hold the user safely and comfortably with easy fitting and removal. Figure 7 shows our progress in the development of new transfer systems. The height of the hoist and the orientation of the horizontal bar can be controlled automatically, or by manual operation, to lift or set down the user. A simple pendant was developed for manual operation of the system. To control the mobile base and the lifting mechanism, we developed a controller based on a NOVA-7896F PC equipped with a Pentium III (700 MHz) CPU and WOW 7.3 Paran R3 operating system. Two 200 W BLDC motors drive the wheels with 80:1 speed reduction to deliver a maximum speed of 0.4 m/s.

The robotic hoist can be controlled in two modes. In manual mode, the user on the bed or wheelchair sends commands to the mobile base and navigates it using the pendant-type interface. After achieving the appropriate position and orientation, the hoist transfers the user. In automatic mode, the computer control system automatically navigates the mobile base until it reaches the desired position and orientation.

3.5 The Artificial Stars system

To simplify the navigation task and achieve precise positioning of the mobile base, many systems use fixed references, such as the ceiling of an indoor environment. The Minerva robot at Carnegie Mellon University compares current image data with mosaics of the ceiling generated as the robot drives around a museum (Thrun et al., 1999). This approach requires a complex and sophisticated algorithm and may yield an incorrect location depending on the lighting conditions. To reduce this limitation, the NorthStar system from Evolution Robotics Co., Ltd. uses infrared (IR) light spots projected onto the ceiling from a projector on a lower part of a wall and observed by a NorthStar detector on the mobile robot (NorthStar, URL). The position and orientation of the robot are calculated from the locations of the IR spots in the image data. However, this benefit comes at higher cost and complexity because of the projector. To overcome the disadvantages of the existing systems, we propose a simple and low-cost landmark navigation system, Artificial Stars, which consists of two parts: ceiling-mounted active landmarks and a PC camera that detects these landmarks. The PC camera is attached to the mobile robotic base, as shown in Fig. 8. Each landmark is composed of three IR-LEDs (model EL-8L, with a maximum wavelength of 940 nm and a viewing angle of 34°) arranged on a 4 cm × 4 cm PCB. To make the camera insensitive to ambient light, we covered the camera lens with an IR pass filter.

Fig. 8. PC camera attached to the mobile base

Fig. 9. Possible configurations for different landmarks with three LEDs (A, B, and C are LEDs)

To obtain the position and orientation information, we first detect landmarks in the camera image using the prior knowledge that each landmark consists of three light-emitting diodes (LEDs) arranged within a predefined range, and then compare the distances between each pair of LEDs. The pair with the shortest distance is taken as the first coordinate axis; for example, line AB in Fig. 9 indicates the first coordinate axis. The position of the LED labeled C in Fig. 9 with respect to the coordinate axis AB is used to determine the "north" or "south" direction and to identify each landmark.

Because the position of each landmark is known a priori in the global coordinates, the information from detected landmarks (landmark IDs) is used to determine the position of the mobile base. The orientation of the mobile base with respect to the landmark is determined by calculating the angle θ between the x axis in the camera coordinate and the line that connects A and B (Fig. 10). Because the camera is fixed to the mobile base, camera coordinates (x and y axes in Fig. 10) correspond to the orientation of the mobile base.
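
The following sketch illustrates one possible reading of this identification and orientation step from three detected blob centroids in the image. It is not the project's actual algorithm: the blob coordinates, the side-of-axis convention, and the way an A/B ambiguity would be resolved are assumptions made only for the example.

```python
import math
from itertools import combinations


def identify_landmark(leds):
    """Pick the axis pair (A, B) and the third LED (C) from three image points.

    leds: list of three (x, y) blob centroids in the camera image.
    A-B is the pair with the shortest distance (the first coordinate axis).
    """
    i, j = min(combinations(range(3), 2),
               key=lambda p: math.dist(leds[p[0]], leds[p[1]]))
    k = 3 - i - j
    return leds[i], leds[j], leds[k]


def landmark_orientation(a, b):
    """Angle between the camera x axis and the line through A and B."""
    return math.atan2(b[1] - a[1], b[0] - a[0])


def side_of_axis(a, b, c):
    """Sign of the cross product tells on which side of AB the LED C lies
    (used in the text to distinguish 'north'/'south' and the landmark identity)."""
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return 1 if cross > 0 else -1


if __name__ == "__main__":
    blobs = [(120.0, 80.0), (140.0, 80.0), (130.0, 110.0)]  # hypothetical centroids
    A, B, C = identify_landmark(blobs)
    print("theta =", math.degrees(landmark_orientation(A, B)), "deg,",
          "side =", side_of_axis(A, B, C))
```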

Figure 11 shows landmark prototypes attached to the ceiling. We assume that at least one landmark module is always visible to the camera. For cases where some landmarks in the area are not functioning or a landmark cannot be seen by the camera, dead reckoning is applied to complete the navigation information. In our tests, localization proved very robust under various illumination conditions, including darkness. We obtained a position accuracy better than 20 mm and an orientation accuracy better than 1° for a viewing area of 1.15 m × 0.88 m.

Currently, the Artificial Stars system is applied only to the navigation of the robotic hoist. After finishing all our tests, we may apply the same approach to the global navigation of the intelligent wheelchair and the mobile robot (shown in Fig. 1 with dashed lines), while the wheelchair continues to use the LRF for obstacle detection and outdoor navigation.

3.6 User transfer between the bed and wheelchair

The task involves synchronized actions of the intelligent bed, intelligent wheelchair, and robotic hoist. It is assumed that the user has lower-limb paralysis but sufficient range of motion and muscle strength in the upper limbs. The transfer task can be performed easily using the following steps, as illustrated in Figs. 12 and 13; a minimal coordination sketch is given after the step list. Figure 14 presents the scene where the transfer tests were performed with the hoist prototype.

  • Step 1: The user initiates the transfer task using a voice command and/or a hand gesture. The command is interpreted by the management system, which generates the action strategy and distributes the subtasks to the intelligent bed, intelligent wheelchair, and robotic hoist.

  • Step 2: The pressure sensor system provides information on the user's position and posture. The horizontal bar of the intelligent bed moves close to the user and assists the user in changing body posture on the bed.

  • Step 3: The position and posture information of the user is analyzed by the management system. The robotic hoist moves to the intelligent bed and lifts the user.

  • Step 4: The intelligent wheelchair moves to the bed and docks with the robotic hoist.

  • Step 5: The robotic hoist lowers the user onto the intelligent wheelchair when the wheelchair signals that docking is complete.

  • Step 6: The intelligent wheelchair navigates automatically to the target position.

  • Step 7: The robotic hoist returns to the recharge station.
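
The coordination sketch below illustrates how the management system could sequence these subtasks. It is only an assumption about the structure of such a coordinator: the agent routines (move_bar_to_user, lift_user, dock_with_hoist, ...) are hypothetical placeholders, not the actual Intelligent Sweet Home interfaces.

```python
# Minimal sketch of the step sequencing described above.  The agent commands
# are hypothetical placeholders.

TRANSFER_PLAN = [
    ("bed",        "move_bar_to_user"),    # Step 2
    ("hoist",      "move_to_bed"),         # Step 3
    ("hoist",      "lift_user"),
    ("wheelchair", "dock_with_hoist"),     # Step 4
    ("hoist",      "lower_user"),          # Step 5 (after the docking-complete signal)
    ("wheelchair", "go_to_target"),        # Step 6
    ("hoist",      "return_to_charger"),   # Step 7
]


def run_transfer(agents: dict) -> None:
    """Execute the transfer steps in order; each call blocks until the subtask reports done."""
    for agent_name, command in TRANSFER_PLAN:
        done = agents[agent_name](command)
        if not done:
            print("transfer aborted at:", agent_name, command)
            return
    print("transfer complete")


if __name__ == "__main__":
    # Dummy agents that immediately report success.
    dummy = {name: (lambda cmd, n=name: print(n, "->", cmd) or True)
             for name in ("bed", "hoist", "wheelchair")}
    run_transfer(dummy)
```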

Fig. 10. Calculation of the orientation angle in the image

Fig. 11. Active landmarks implemented with three IR-LEDs

Fig. 12. Task scenario for transfer

In Step 4, the LRF system is used for the automatic navigation of the wheelchair to dock with the robotic hoist, based on a priori shape information about the hoist. As the wheelchair approaches the robotic hoist, it detects the two long bars in the lower part of the hoist (B1 and B2 in Fig. 14) from the LRF scan data. Figure 15 shows an experimental result for posture estimation of the robotic hoist. Two clusters (C1 and C2) of the accumulated LRF scan data correspond to the detected bars of the robotic hoist. The point X denotes the center between the two tips of the clusters, which corresponds to the position where the wheelchair should stop. The point A is the intersection of the best-fitting lines through each cluster, and the line AX indicates the direction in which the wheelchair should approach to dock with the robotic hoist. In our experiments, we achieved a ±100 mm position accuracy for the center position and a ±4° angular accuracy for the orientation, with a processing time of 0.46 ms. In Fig. 15, the point O is the true center position between the two bar tips, while the point X denotes the center position estimated from the LRF data.
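
An illustrative reconstruction of this geometric step is sketched below: a principal-direction line fit to each cluster, the intersection point A, and the stop point X taken as the midpoint of the two bar tips. How the clusters and bar tips are segmented from the raw scan, and how outliers are rejected, is not specified in the text, so the tips are simply passed in as inputs here.

```python
import math


def fit_line(points):
    """Fit a line to a point cluster: returns (centroid, unit direction) from the
    principal direction of the 2-D covariance (handles any orientation)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)   # principal-axis angle
    return (cx, cy), (math.cos(angle), math.sin(angle))


def intersect(c1, d1, c2, d2):
    """Intersection point A of the two fitted lines (point + direction form).
    Assumes the two bars, and hence the lines, are not parallel."""
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    t = ((c2[0] - c1[0]) * (-d2[1]) - (c2[1] - c1[1]) * (-d2[0])) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])


def docking_target(cluster1, cluster2, tip1, tip2):
    """X = midpoint of the two bar tips (stop position); AX gives the approach direction."""
    (c1, d1), (c2, d2) = fit_line(cluster1), fit_line(cluster2)
    a = intersect(c1, d1, c2, d2)
    x = ((tip1[0] + tip2[0]) / 2.0, (tip1[1] + tip2[1]) / 2.0)
    heading = math.atan2(x[1] - a[1], x[0] - a[0])
    return x, heading
```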

Fig. 13. Sequence of the transfer task

Fig. 14. Transfer of a user from the bed to the wheelchair

4 Human-machine interfaces (HMI)

The acceptance of the new smart house design by the user depends critically on the human-machine interaction used in it. To provide a simple means of transferring information between the user and the devices installed in the home, we focused our efforts on designing a "human-friendly interface," that is, an interface that allows human-machine interaction in a very natural way, similar to communication between humans. We searched for technical solutions that are relatively robust to imprecise user commands, and sought an interface that can correctly interpret imperfect commands and can advise the user when the commands issued are inappropriate or inaccurate. Our processing algorithms not only recognize the user's instructions but also analyze the history of the communication episode to identify and judge the user's intentions. Such human-friendly interfaces require minimal initial training and do not demand that the user memorize special phrases or sequences of actions in advance. In addition, our understanding of a "human-friendly interface" includes techniques that do not require any special sensors to be attached to the user. Following this idea, we have concentrated our research on two main interfaces: interfaces based on voice commands and natural speech recognition, and interfaces based on gesture recognition.

Interfaces based on voice or gestures have been widely used. However, there are still no reliable methods for voice recognition in a noisy environment, and vision-based gesture recognition depends critically on the external illumination conditions. To alleviate these weaknesses, some systems combine both methods into a multimodal interface. For example, the Naval Research Lab implemented multimodal interfaces for the mobile robots Nomad 200 and RWI ATRV-Jr; these robots understand speech, hand gestures, and input from a handheld Palm Pilot or other Personal Digital Assistant (Perzanowski et al., 2001). Our system has been developed specifically for the home environment. CCD cameras with pan/tilt modules are mounted on the ceiling so that the user can issue commands from anywhere in the room, for example, from the bed or the wheelchair. In addition to the user's intentional commands, the smart house design also uses body motions to control the intelligent bed robot system.

Fig. 15. Detection of two bars and posture estimation of the robotic hoist

Fig. 16. Configuration of Soft Remote Control System

Fig. 17. Pointing hand and head detection

4.1 Gesture-based interface

To control robotic systems and other home-installed devices in a natural manner, we propose a system that observes and recognizes the user's hand gestures using ceiling-mounted CCD cameras, as shown in Fig. 16 (Bien et al., 2005).

In our system, we assume that only one user is in the room and that the user points at an object by stretching out one arm. The image data are acquired by three separate color CCD cameras with pan/tilt mechanisms mounted on the ceiling. The cameras are directed toward the user and monitor the user's hand motions. After enhancing the image quality and removing noise caused by luminance variations and shadows, a YUV transform is used to detect the skin-color region. Because skin color is predominantly red and contains little blue, these regions can be extracted by proper threshold values in the Y, U, and V images. We then apply a logical AND operation and open/close morphological operations to remove the residual noise and obtain pure skin-color regions. To segment the pointing hand and head, we choose two pointing-hand blobs and two head blobs from the three preprocessed images, considering the length, width, and size of the blobs, and then find the fingertip points and the center of gravity (COG) of the head, as shown in Fig. 17. To recognize the pointing direction, we calculate the 3-D positions of the fingertip point and the COG of the head from two images, as shown in Fig. 18 (Kohler, 1996). We first construct two lines \(s_1\) and \(s_2\) that pass through the camera centers \(C_{M,1}\) and \(C_{M,2}\) and the fingertip points \(P_1\) and \(P_2\) in each camera image, and determine the line segment \(\overline {P_{M,1} P_{M,2} }\) that is perpendicular to both \(s_1\) and \(s_2\). Its midpoint \(P_M\) is taken as the 3-D position of the fingertip. Similarly, we calculate the 3-D position of the COG of the head and form a pointing vector from the 3-D position of the COG of the head to that of the fingertip. Finally, the system finds the home appliance lying in the direction of the pointing vector, based on predefined position information for the appliances. If an appliance is found, the system indicates the recognized appliance with a light signal or a synthesized voice message. After the user's confirmation, the system controls the selected device or executes the preprogrammed task.
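
The midpoint construction described above (the 3-D point halfway along the common perpendicular of the two viewing rays) can be written in a few lines. In the sketch below, the camera centers and the ray directions through the fingertip image points are assumed to be already expressed in a common world frame, which in practice would come from camera calibration; the numeric values in the example are made up.

```python
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]


def midpoint_of_skew_lines(c1, u, c2, v):
    """3-D point closest to both rays: midpoint of the common perpendicular segment.

    c1, c2: camera centres; u, v: direction vectors of the rays through the
    fingertip image points (all in a common world frame).
    """
    w0 = sub(c1, c2)
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w0), dot(v, w0)
    denom = a * c - b * b                      # ~0 only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = (c1[0] + t * u[0], c1[1] + t * u[1], c1[2] + t * u[2])
    p2 = (c2[0] + s * v[0], c2[1] + s * v[1], c2[2] + s * v[2])
    return tuple((p1[i] + p2[i]) / 2.0 for i in range(3))


if __name__ == "__main__":
    # Hypothetical camera centres and ray directions toward the fingertip.
    fingertip = midpoint_of_skew_lines((0.0, 0.0, 2.5), (0.3, 0.2, -1.0),
                                       (1.5, 0.0, 2.5), (-0.2, 0.2, -1.0))
    print("estimated fingertip position:", fingertip)
```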

Fig. 18. 3-D position calculated from two 2-D images (with permission of Kohler)

To test our prototype system, we designed a 3 × 3 menu on the TV screen (Fig. 19) to select and execute the preprogrammed tasks, such as turning on the TV screen, delivering a newspaper, bringing a book from the bookshelf, massage service, removing the quilt, returning the book to the bookshelf, turning off the TV screen, calling an assistive robot, and watching TV.

Fig. 19. 3 × 3 menu for extended mode of Soft Remote Control System

4.2 Voice interface

A voice control algorithm should be based on simple commands composed of short, simple words. In our design, the commands were composed of three words or fewer, as shown in Table 1. Each command consists of an object part and an action part. The object part indicates the name of the subsystem to which the action will be applied, such as "robotic arm", "wheelchair", "TV", etc. The action part indicates the demanded action, such as "come", "on/off", "up/down", etc. Some examples of the voice commands used in our system are shown in Table 2.

Table 1 Configuration of the voice commands
Table 2 Examples of voice commands

Our system accepts clear synonyms of a word as well as the exact word for the action. When the action is specific enough to apply to only one appliance in the smart house, the command can consist of the action part alone. However, when the action part consists of only one syllable (such as "go"), it is more effective to use the action part together with the object part, as one-syllable words have a low recognition rate. The user can add commands to or delete commands from the set using a special command-management program. A feedback message from the subsystem that will execute the command precedes the command execution; the feedback is usually a voice-generated message.
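
To make the object/action structure concrete, the sketch below parses an utterance of up to three words into an object part and an action part and maps it to a placeholder device routine. The synonym sets, command table, and handler functions are invented for the example; the real system relies on the VoiceEz recognition module and its own command table.

```python
# Illustrative sketch of the "object part + action part" command structure.
# Synonym sets and device actions are made up for the example.

SYNONYMS = {
    "tv": {"tv", "television"},
    "wheelchair": {"wheelchair", "chair"},
    "on": {"on", "switch on", "turn on"},
    "come": {"come", "come here"},
}

COMMANDS = {             # (object, action) -> subsystem routine (placeholders)
    ("tv", "on"): lambda: print("TV: power on"),
    ("wheelchair", "come"): lambda: print("Wheelchair: approaching user"),
}


def canonical(phrase: str) -> str:
    for key, words in SYNONYMS.items():
        if phrase in words:
            return key
    return phrase


def parse_and_execute(utterance: str) -> bool:
    """Split an utterance of up to three words into object and action parts."""
    tokens = utterance.lower().split()
    for split in range(1, len(tokens)):
        obj = canonical(" ".join(tokens[:split]))
        act = canonical(" ".join(tokens[split:]))
        if (obj, act) in COMMANDS:
            print(f"confirming: {obj} / {act}")   # feedback precedes execution
            COMMANDS[(obj, act)]()
            return True
    return False


if __name__ == "__main__":
    parse_and_execute("television turn on")
    parse_and_execute("wheelchair come here")
```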

In our system, the voice signal is transmitted from a wireless microphone to the main computer of the Intelligent Sweet Home, where the recognition procedure is carried out. In our tests, we used a commercial voice-processing module, VoiceEz™, from Voiceware Co., Ltd (VoiceEz, URL).

4.3 Body posture/motion estimation

In our smart house, body posture and motion information are used to position the supporting bars of the intelligent bed close to the user, as shown in Fig. 4, and to control the bending of the bed according to the user's intention (body lifting and lowering). In addition, monitoring body posture and motion can provide information on the user's health and safety risks (poor balance, frequent turning over during sleep, falls, etc.). Many body-motion recognition studies have used video cameras (O'Rourke and Badler, 1980; Rehg and Kanade, 1994; Wren et al., 2000). However, the video camera approach cannot be applied to detect the body motion of a user lying in bed, because the body is usually hidden from the camera by a quilt. To overcome this problem, a static-charge-sensitive bed was developed (Alihanka et al., 1981); it not only monitors body movements on the bed but also measures respiration, heart rate, and twitch movements. A bed instrumented with temperature sensors was developed to measure gross movements, such as body turns (Tamura et al., 1993). Harada et al. applied a body model to analyze information from a pressure-sensitive bed and estimated the body posture (Harada et al., 2002). This approach can estimate subtle postures and motions between the main lying postures (supine and lateral). However, the procedure requires considerable computation time to determine the posture, as many parameters of the body model must be calculated.

In our system, we use a matrix of 336 pressure sensors and apply a hybrid learning method to reduce the computation time and achieve rapid estimation of the user's posture and motion. Our algorithm is based on Principal Component Analysis (PCA) and a Radial Basis Function (RBF) neural network (Haykin, 1999). The pressure distribution image has 336 features (equal to the number of FSR sensors). PCA is used to reduce the dimension of the data, and the reduced feature vectors are used as the input to an RBF neural network with Gaussian functions as its radial basis. The parameters of the RBF network are adjusted by a hybrid learning algorithm that combines the linear least-squares method, to update the weights of the network, with the gradient method, for the means and standard deviations of the Gaussian functions. The algorithm is described in detail in Seo et al. (2004).
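
A compact sketch of this pipeline is given below. It is a simplified stand-in, not the algorithm of Seo et al. (2004): the Gaussian centers are fixed (picked from the training data) rather than refined by gradient updates, the width is a single assumed constant, and the data in the demo are random; only the 336-feature input, the 20 retained principal components, and the four posture classes follow the text.

```python
# Simplified PCA + RBF posture classifier: least-squares output weights only,
# fixed Gaussian centres, assumed width.  Illustrative, not the paper's method.

import numpy as np

N_SENSORS = 336      # pressure image features (from the text)
N_COMPONENTS = 20    # principal components kept (from the text)
N_CLASSES = 4        # supine, right lateral, left lateral, sitting


def fit_pca(X, k=N_COMPONENTS):
    """Return the mean and the top-k principal directions of the data matrix X."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]


def rbf_features(Z, centres, sigma=1.0):
    """Gaussian radial basis activations for reduced feature vectors Z."""
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))


def train(X, labels, n_centres=30):
    mean, components = fit_pca(X)
    Z = (X - mean) @ components.T                       # PCA projection
    centres = Z[np.random.choice(len(Z), n_centres, replace=False)]
    Phi = rbf_features(Z, centres)
    T = np.eye(N_CLASSES)[labels]                       # one-hot targets
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)         # linear least squares
    return mean, components, centres, W


def classify(x, mean, components, centres, W):
    z = (x - mean) @ components.T
    phi = rbf_features(z[None, :], centres)
    return int(np.argmax(phi @ W))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((200, N_SENSORS))          # fake pressure images
    y = rng.integers(0, N_CLASSES, 200)       # fake posture labels
    model = train(X, y)
    print("predicted class:", classify(X[0], *model))
```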

Fig. 20. Posture classes for intention reading

We tested the algorithm on four postures: supine, right lateral, left lateral, and sitting (see Fig. 20). Table 3 shows the classification success rate and processing time as a function of the number of principal components (the number of largest eigenvalues) retained in PCA. In our experiments, the processing time increased with the number of principal components, and the success rate increased up to 30 principal components. As a tradeoff between success rate and processing time, we used 20 principal components, for which the proposed algorithm (PCA+RBFNN) showed the best overall performance. Body posture is recognized every 10 ms, and body motion is determined from the history of recognized postures. For example, if the supine posture is followed by the sitting posture, we can consider the motion to be body lifting.

Table 3 Processing time and classification success rate for the estimation of the posture

5 Conclusions and challenging problems

In this paper, we have proposed a robotic smart house, Intelligent Sweet Home, for the independent daily living of the elderly and people with disabilities. In particular, we have suggested and developed: (1) the concept of an assistive robotic house, which integrates various service robotic systems to help the inhabitants, human-friendly interfaces that recognize the user's intention in a natural manner, and a management system for effective cooperation among the subsystems; (2) an intelligent bed robot system that can assist the user's body movement and deliver objects to the user on a special tray; (3) an intelligent wheelchair that can move autonomously and possesses obstacle avoidance capability; (4) a robotic hoist that transfers the user from the bed to the wheelchair and vice versa; (5) a control interface based on hand gesture recognition; (6) a voice interface to control home appliances together with the hand gesture interface; (7) a posture monitoring system; and (8) a home network server and management system for reliable data exchange among home-installed systems. In addition, we have suggested and tested a localization method for the navigation of a mobile platform, called the Artificial Stars system, and an obstacle avoidance method using a tilted scanning mechanism. The developed Intelligent Sweet Home can assist in many daily activities, as shown in Table 4.

Table 4 Examples of assistive services by Intelligent Sweet Home

Experience gained from the development of the Intelligent Sweet Home has indicated to us the following directions for future improvements to the initial design concept:

  (1) Design of an effective algorithm for wheelchair navigation in an environment with moving obstacles (people in the wheelchair's path) may be based on the intention-reading technique. If a moving obstacle intends to diverge from the direction of the wheelchair's movement, the wheelchair can proceed without any avoidance behavior; otherwise, the wheelchair should modify its path considering the direction of the obstacle (Kang et al., 2001).

  (2) The development process should involve users not only in defining tasks but also in each step of the design process: survey, questionnaires, and statistical analysis; needs, task, and function analysis; design; implementation; evaluation; analysis; and redesign (revision). The feedback must cover technical aspects as well as user acceptance and aesthetic design.

  (3) Intention analysis and prediction should be used more widely to achieve more human-friendly interaction and human-centered technology. Control based on facial expression, eye gaze, and bio-signals can be used to cope with high levels of disability.

  (4) Emotion monitoring is another challenging subject: it would allow different services depending on the user's current emotional status. For example, relevant music can be suggested and the illumination can be adjusted. Moreover, while most existing systems are passive in acquiring emotional information, we expect that future systems will take emotion-invoking actions in a more effective manner (Do et al., 2001).

  (5) The technology will be further oriented toward custom-tailored design, in which the modular components of the smart house meet individual needs and characteristics in a cost-effective manner. Such technology will apply not only to the hardware systems but also to the assistive service itself, called "personalized service," in accordance with the user's preferences.

  (6) To understand the intention and preferences of the user more effectively, it is necessary to model and handle human factors, which are usually obscure, time-varying, and inconsistent. To deal with this difficulty, a learning capability is essential and plays a key role in effective human-machine interaction.

  (7) We will attempt to improve the system by testing different LED configurations of the Artificial Stars for higher localization accuracy, applying advanced line-segment matching of LRF data with a priori knowledge for smooth docking of the wheelchair and the robotic hoist, making active use of soft computing techniques for intention reading, and extending hand posture recognition in the soft remote control system to increase the number of gesture commands and handle multiple users.