1 Introduction

In writing this preface to the topic of Design for Assistive Augmentation, I take the opportunity to draw on my own work and experience to shed light on different aspects of assistive augmentation, and to address particular principles for its design.

I argue that the design for assistive augmentation should take the 3Ms into consideration—Mind, Might, and Magic.

1.1 Definitions

Before discussing the idea of designing for assistive augmentation, let's review the definitions of the three keywords—design, assistive and augmentation.

Design is the creation or implementation of a plan, a system or an object: the process of manipulating both form and function to satisfy known constraints and achieve certain objectives. Design involves the rational specification, analysis and solving of problems, together with reflective, even emotional, sense-making and improvisational creative action. In the current context, we are considering Interaction Design (IxD), User Interface (UI) Design, User Experience (UX) Design, User-Centered Design (UCD), and Universal Design (UD).

Assistive is an adjective often used together with technology, as in Assistive Technology (AT), to include devices and services, such as objects, equipment, software systems, applications and environments, designed or intended to aid people with different abilities in performing an activity or task, or in functioning, for work, health, and daily living in the world. This umbrella term encompasses accessible, assistive, adaptive, and rehabilitative devices that promote greater independence, enable or enhance functional capabilities, and reduce difficulties, in support of participation and achievement.

Augmentation is the action or process of strengthening, increasing, extending, accelerating, boosting, or reinforcing the value of a condition, in rate, amount or state. Specifically, we are concerned with the context of the Augmented Human, in which the design of assistive technology, user interfaces and interactions aims to integrate seamlessly with sensory input and perception, as well as with motions and behaviors in embodied interactions.

In the following section, let's discuss and link the idea of Assistive Augmentation (AA) to the concept of the 3Ms—Mind, Might and Magic.

1.2 Mind, Might and Magic

As human beings, we observe, reflect, and act. We build tools to make and fix things, to gather food and resources, to maintain health, comfort and safety, and to communicate, contribute, and belong to our communities. These assistive tools are augmentations of existing human capabilities. According to Maslow's theory of the hierarchy of human motivation [1], there are five levels of human needs—physiological, safety, love/belonging, esteem, and self-actualization—represented as a pyramid with the more basic needs at the bottom, as shown in Fig. 1. Whether we agree with the theory or not, we can acknowledge that there are universal human needs to be fulfilled, and that we all possess different abilities to achieve different degrees of fulfillment. The question, then, is: How can we design assistive augmentation to help advance and unlock human potential to achieve these different needs?

Fig. 1 Maslow's hierarchy of needs [2]

Earlier I outlined the definitions of design, assistive and augmentation. Before describing some projects as illustrative examples in the next sections, I would like to argue that any successful design of assistive augmentation should take the 3Ms into consideration: mind, might, and magic.

The Mind is about thinking, understanding, and planning. It is about ideas, and about making representations and decisions. The mind engages domain knowledge, retrieves relevant information, reasons about actions, and stimulates creative thoughts.

The Might is about the effects we have on the world and the efforts we engage in. Might represents the actual things we accomplish using our power and strength. The might is the execution of activities on objects and environments.

Finally, the third M, Magic, concerns the design and engineering of technology to the extent that form and function are seamlessly integrated into the fabric of daily life, with wonderful, exciting effects. Magic is about making things work in ways that appear fascinating, or enchanting. Science fiction writer Arthur C. Clarke is famously quoted as saying that any sufficiently advanced technology is indistinguishable from magic [3].

To sum up, I am advocating the 3Ms in designing assistive augmentation—Mind, to observe before acting, to be thoughtful, and to be open-minded; Might, to consider the capacity, ability, efficacy, and competency of both people and technology; and finally, Magic, to have technology wonderfully blended into everyday life activities.

2 A Way of Working—Building Computational Tools

The premise of Assistive Augmentation is that one can take on the challenge of integrating hardware and software technologies to improve user interfaces and to process contextual information, facilitating practical real-life applications.

Building computational tools is a way of working to solve problems and to innovate. Here, I share my personal research journey to show that one can find inspiration in many different contexts, coming from very different disciplines, on the way toward Design for Assistive Augmentation.

My interest in design computing started when I was a graduate student at the Harvard Graduate School of Design. Frustrated by the impoverished interfaces of advanced CAD software, I started programming and implemented sketching software to support design, and eventually my Ph.D. research, the Right Tool at the Right Time, an investigation of freehand drawing as an interface to knowledge-based design tools [4]. The important concept behind building the Right Tool at the Right Time is to detect the task at hand (reasoning about the context, anticipating the need) and then to provide the appropriate support (triggering different functions and knowledge-based systems).
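
To make the concept concrete, below is a minimal sketch of such context-triggered support. The recognizers, tool names, and dispatch rules are hypothetical placeholders for illustration, not the actual implementation from [4].

```python
# A minimal sketch of the "Right Tool at the Right Time" idea: recognize the
# context of a freehand sketch, then trigger the matching support tool.
# All recognizers and tool names below are hypothetical placeholders.
RULES = [
    (lambda ctx: "bubble diagram" in ctx, "case-based design aid"),
    (lambda ctx: "floor plan" in ctx,     "adjacency checker"),
    (lambda ctx: "section" in ctx,        "dimensional reasoner"),
]

def right_tool(sketch_context: str) -> str:
    """Anticipate the need from the recognized context and pick a support tool."""
    for matches, tool in RULES:
        if matches(sketch_context):
            return tool
    return "generic sketchbook"

print(right_tool("an early floor plan sketch"))  # -> adjacency checker
```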

Specifically, my research interests have spanned several areas over the years: (1) computer-aided design, especially sketch computing, (2) creativity and design cognition, including creativity support tools and design studies, (3) tangible and embedded interaction, and (4) computing for health.

2.1 Computer-Aided Design

Trained as an architect, my interest in design thinking led me to develop diagrammatic and sketching interfaces for computer-aided architectural design. I asked, "What's in a hand-drawn design diagram that a computer should understand?" [5]. To answer the question, I studied design drawing and developed several computer-based sketching tools to explore and support design activities. For example, in Thinking with Diagrams [6] I examined how architects use diagramming and sketching as tools to explore, discover, and develop ideas. The Design Sketches and Sketch Design Tools article [7] presents examples of various computational tools that can help designers perform design tasks such as image retrieval, visual analysis, dimensional reasoning, and spatial transformation. Figure 2a shows how the VR Sketchpad system [8] transforms a 2D furniture layout sketch of a room into a 3D virtual reality model for simulated walk-throughs, to understand the use implications of the space. Figure 2b shows how the Design Evaluator system [9] analyzes and annotates the spatial layout of a hospital floor plan based on design constraints, such as process flow or adjacency requirements defined in the program specification or in the rules and regulations of building codes.

Fig. 2 a 2D sketch is transformed to 3D in VR Sketchpad system. b Design Evaluator system analyzes and annotates spatial layout in hospital floorplan based on design constraints for process flow and adjacency requirements
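
To suggest the kind of rule checking the Design Evaluator performs, here is a minimal sketch under simplifying assumptions: rooms are reduced to named center points and adjacency requirements to maximum distances. The room names, coordinates, and rules are hypothetical, not drawn from the actual system.

```python
# A minimal, hypothetical sketch of Design Evaluator-style constraint checking:
# rooms as named points on a floor plan, adjacency rules from a program
# specification, and annotations for every violated constraint.
ROOMS = {  # (x, y) room centers on the floor plan, in meters
    "ER":        (0.0, 0.0),
    "Radiology": (8.0, 1.0),
    "Pharmacy":  (30.0, 12.0),
}
ADJACENCY_RULES = [("ER", "Radiology", 10.0)]  # pairs that must be within 10 m

def evaluate(rooms, rules):
    """Return an annotation for every violated adjacency constraint."""
    notes = []
    for a, b, max_dist in rules:
        (xa, ya), (xb, yb) = rooms[a], rooms[b]
        d = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        if d > max_dist:
            notes.append(f"{a}-{b}: {d:.1f} m apart, exceeds {max_dist} m")
    return notes

print(evaluate(ROOMS, ADJACENCY_RULES) or "all constraints satisfied")
```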

I have also worked with students to develop tools for sketching in 3D [10], including annotation in virtual environments [11] to support collaboration and communication [12], sketch-to-building simulation [13], design evaluation [9], space making [14], and sketch-to-fabrication [15, 16]. I have explored how sketching can serve as an interface for creating and interacting with 3D Virtual Reality environments [17].

2.2 Creativity and Design Cognition

To support creative design, I implemented diagram indexing and retrieval for shape-based finding of visual references [18], a graphics interpreter of design actions [19], the organization of a case-based design aid for architecture [20], as well as context recognition for designers' ambiguous intentions in their sketches [21]. In design studies and creativity and cognition research, I have worked with students on the analysis of design [22], the role of physical objects and environments in creativity [23], and patterns of design process and collaboration [24].

2.3 Tangible and Embedded Interaction

My transition from 3D sketching to tangible and physical sketching follows the idea of "thinking with your hands" [25]. Tangible computing research engages designers to manipulate and experiment with embedded computing, to "sketch" with the physical objects that comprise our built environment. Starting with the Navigational Blocks project [26] for navigating an information space, my journey through tangible computing projects has explored architectural form making [27], strategy games [28], energy awareness [29], construction kits [30], interactive furniture [31] and responsive environments [32].

For example, Fig. 3a shows interaction with a tourist kiosk: picking up and rotating the Navigational Blocks (Who, What, When, Where) retrieves historical stories about different tourist attractions. Figure 3b shows that assembling and configuring Posey parts can constitute an animal puppet with replaceable body parts that can be animated. These are examples of how tangible, physical objects with digital and contextual awareness can enhance interaction for information retrieval and display.

Fig. 3 a Interaction with Navigational Blocks to retrieve historical stories at tourist kiosk. b Dinosaur Pretend Play with configured Posey construction kit

2.4 Computing for Health Applications

I have worked on a variety of computing-for-health applications—technology to encourage hand-washing in a smart patient room [33], a glove for spinal cord injury rehabilitation [34], an object identification tool for the visually impaired [35], a mobile health and robotic companion for children [36], a system to promote active lifestyles among employees [37], and the ClockReader [38], a system for automatically scoring the Clock Drawing Test that doctors use to screen for mild cognitive impairment. Instead of asking "What's in a design sketch that a computer should understand?" this project asks "What's in a dementia patient's drawing that a computer should understand?"

2.4.1 Electronic Clock Drawing Test (for Early Detection of Alzheimer's Disease)

Early detection is crucial for better planning and treatment management of cognitive impairments such as Alzheimer's disease and other related disorders. The Clock Drawing Test (CDT) is a well-established and commonly used paper-and-pencil screening instrument [39]. After a patient draws each clock, the clinical staff spend time analyzing the results by measuring and scoring each criterion (e.g., whether the drawing has all 12 numbers, a long hand, a short hand, and a center, and whether the numbers are in the right locations and sequence). We developed the ClockReader System with automated recording and analysis, time stamps, and playback, so doctors and clinical staff can easily retrieve and monitor the progress of a patient's cognitive impairment, examine the drawing process, and present the data in joint diagnosis sessions [40]. Figure 4a shows the ClockReader system interface with areas for drawing output, analysis, and monitoring. Figure 4b shows the stylus pen and tablet computer setup for the ClockReader system. By implementing pen-and-tablet interaction with a simple user interface, we provide a computing environment that enables non-clinician or self-administered clock-drawing tests to be performed as frequently as needed, allows easy comparison of past drawings collected over time, and reveals time-based information such as the sequence of drawing (e.g., 1-3-6-9, or 10-11-12) and the idle duration between drawing marks (a longer pause may indicate difficulty in recall).

Fig. 4 a ClockReader system showing a patient's drawing (left), the automatic analysis and scoring (right), and past drawings displayed in the monitoring panel (bottom). b Stylus pen and tablet setup for ClockReader
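
As a rough illustration of this kind of automated scoring, below is a minimal sketch. The stroke representation, the criteria checked, and the pause threshold are simplifying assumptions for illustration, not the actual ClockReader implementation.

```python
from dataclasses import dataclass

@dataclass
class Stroke:
    label: str        # recognized mark, e.g., "3" for a digit or "hand"
    timestamp: float  # seconds since the test started

def score_clock(strokes: list[Stroke], pause_threshold: float = 5.0) -> dict:
    """Check a few hypothetical CDT criteria plus idle-time information."""
    digits = [s for s in strokes if s.label.isdigit()]
    present = {s.label for s in digits}
    all_numbers = all(str(n) in present for n in range(1, 13))
    # Were the numbers drawn in ascending order (1, 2, ..., 12)?
    in_sequence = [s.label for s in digits] == sorted(
        (s.label for s in digits), key=int)
    # Idle durations between marks; a long pause may indicate recall difficulty.
    gaps = [b.timestamp - a.timestamp for a, b in zip(strokes, strokes[1:])]
    long_pauses = [round(g, 1) for g in gaps if g > pause_threshold]
    return {"all_numbers": all_numbers,
            "in_sequence": in_sequence,
            "long_pauses": long_pauses}
```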

2.4.2 Digital Box and Block Test (Rehabilitation Independence)

Stroke is the leading cause of serious, long-term disability in the United States and worldwide [41]. Increasingly, stroke rehabilitation therapies are conducted in patients' homes. However, for clinical assessments, care providers still require patients to visit the clinic. The Digital Box and Block Test (DBBT) is a computational tool that aims to help medical professionals record and assess the rehabilitation progress of stroke patients with an easy setup [42]. Figure 5a shows the Box and Block Test augmented with a Kinect camera mounted above it to record and analyze the test. As shown in Fig. 5b, the time-based movement data of the hand (including fingers and arm) can then be displayed and compared between sessions. Embedding this technology in residential spaces could also help patients relearn and recall how to use their arms, hands and fingers. With the system, care providers would be able to more precisely detect, track, and monitor patients' post-stroke functional motor improvements remotely.

Fig. 5 a Digital Box and Block Test with Kinect setup above the desk. b DBBT screen showing detection of the positions of the fingers and the blocks, together with movement status
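
A minimal sketch of the session comparison idea follows, assuming a hypothetical stream of time-stamped hand positions in place of the actual Kinect pipeline; the metrics shown are illustrative, not the clinical measures used in [42].

```python
import math

# Hypothetical per-frame samples: (time_s, x, y, z) of the tracked hand,
# standing in for the Kinect skeleton stream used to record the test.
Sample = tuple[float, float, float, float]

def session_metrics(samples: list[Sample]) -> dict:
    """Summarize one test session from tracked hand positions."""
    path = 0.0
    speeds = []
    for (t0, *p0), (t1, *p1) in zip(samples, samples[1:]):
        d = math.dist(p0, p1)  # distance moved between consecutive frames
        path += d
        if t1 > t0:
            speeds.append(d / (t1 - t0))
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return {"path_length_m": round(path, 2),
            "mean_speed_mps": round(mean_speed, 2)}

def progress(before: dict, after: dict) -> dict:
    """Relative change between two sessions, e.g., week 1 versus week 4."""
    return {k: round((after[k] - before[k]) / before[k], 2)
            for k in before if before[k]}
```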

2.4.3 Mobile Music Touch (Haptic Learning, New Skills and Rehabilitation)

The Mobile Music Touch [34, 43], an instrumented lightweight glove, is designed to facilitate passive haptic learning of piano playing by tapping the corresponding finger for each key while the music plays, so one can learn to play a piece while doing other tasks or on the move. The vibration motors fitted on the Mobile Music Touch cue users which finger to use to play the next note. Figure 6 (left and middle) shows a user's hand in a converted golf glove playing on a lighted keyboard, and the picture on the right shows a fingerless version of the music glove with a strap-on hardware box. A pilot study with students with no musical background showed that participants made fewer fingering mistakes on songs that were cued with the glove than on songs that were not. A study of short-term use of the glove with quadriplegic patients showed improved sensation and mobility for people with spinal cord injury [44]. This is a good example of how a wearable device such as a music glove can facilitate passive music learning and engaging hand rehabilitation practice.

Fig. 6 Vibrating motors of Mobile Music Touch cue which fingers to play the piano
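
The core cueing loop is simple to sketch. Below is a minimal illustration; the key-to-finger map, the note timings, and the print-based motor driver are stand-in assumptions for the actual glove hardware.

```python
import time

# Hypothetical mapping from piano keys to glove fingers (0=thumb .. 4=pinky).
FINGER_FOR_KEY = {"C4": 0, "D4": 1, "E4": 2, "F4": 3, "G4": 4}

def vibrate(finger: int, duration_s: float) -> None:
    """Stand-in for driving the glove's vibration motor on one finger."""
    print(f"buzz finger {finger} for {duration_s:.2f}s")

def play_with_cues(song: list[tuple[str, float]]) -> None:
    """Cue the finger for each note as the music plays (passive haptic learning)."""
    for key, duration in song:
        vibrate(FINGER_FOR_KEY[key], duration)
        time.sleep(duration)  # keep the haptic cues in sync with the audio

play_with_cues([("C4", 0.5), ("D4", 0.5), ("E4", 1.0)])
```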

2.4.4 Tactile Teacher (Sensing Behaviors—Piano Learning, Experience Transfer)

In a piano lesson, a student often imitates the teacher's playing in terms of speed, dynamics, and fingering. This learning model employs visual and auditory perception for emulation, but it lacks tactile sensation, an important component of piano playing. To investigate how we could convey the tactile sensations of the teacher's keystrokes to the student's corresponding fingers, we implemented Tactile Teacher, an instrumented fingerless glove that detects finger taps on hard surfaces [45]. Recognizing that finger taps generate acoustic signals and cause vibrations, and after testing several different sensor placements and orientations (Fig. 7 left), we embedded three vibration sensors in the glove and used machine learning algorithms to analyze the sensor data. After a brief training procedure, the prototype (Fig. 7 right) identified single-finger taps with above 89% accuracy, and two-finger taps with around 85% accuracy. Wouldn't it be nice if Tactile Teacher could capture the piano-playing techniques of virtuoso pianists and then transfer the tactile sensations to learners through Mobile Music Touch? This also shows the potential of capturing other finger-based fine motor skills for training and rehabilitation in the future.

Fig. 7 Four configurations tested to determine the optimal sensor placements and orientations, resulting in an instrumented glove (configuration d) with vibration sensors for Tactile Teacher
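
To suggest the shape of such a tap-classification pipeline, here is a minimal sketch using scikit-learn. The feature set, the classifier choice, and the randomly generated placeholder data are illustrative assumptions; the published system's actual features and models may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) raw readings from the three vibration sensors."""
    return np.concatenate([
        window.max(axis=0),             # peak amplitude per sensor
        window.std(axis=0),             # energy proxy per sensor
        np.abs(window).argmax(axis=0),  # time index of each sensor's peak
    ])

# Placeholder data standing in for labeled taps collected during the brief
# per-user training procedure; labels are fingers (0=thumb .. 4=pinky).
rng = np.random.default_rng(0)
X_train = np.array([features(rng.normal(size=(64, 3))) for _ in range(200)])
y_train = rng.integers(0, 5, size=200)

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
tap_window = rng.normal(size=(64, 3))  # one incoming tap
print("predicted finger:", clf.predict(features(tap_window).reshape(1, -1))[0])
```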

2.5 Things That Think, Spaces That Sense and Places That Play

Over the years, the focus of my work has been applying design computing and human-centered computing knowledge to investigate and implement the vision of Things that Think, Spaces that Sense and Places that Play—a smart living environment in which computing technologies embedded in the built environment (e.g., objects, furniture, buildings, and spaces) support happy, healthy everyday living.

My recent work further explores the ideas of creating experience media and interactive computing projects for smart living environments. Specifically, at the Keio-NUS CUTE Center we explore the idea of "Creating Unique Technology for Everyone" through the use of Connective, Ubiquitous Technology for Embodiments, in the key areas of tangible interaction, augmented learning, and embodied experience [46].

3 Creating Unique Technology for Everyone

Design and Human-Computer Interaction are crucial components of information technologies in daily life, and they color our experience of computation and communication. As transdisciplinary researchers and designers, we have a mission to pursue the vision of Creating Unique Technology for Everyone through the use of Connective, Ubiquitous Technology for Embodiments. Here I describe several projects from the CUTE Center that demonstrate how mobile, ubiquitous, and physical and tangible computing can be used to augment or enhance human abilities and experiences.

3.1 Sensorendipity and SilverSense (Mobile Phone, Behavior Monitoring)

SilverSense is a smartphone-based activity monitoring system for senior citizens [47]. Utilizing a smartphone's built-in sensors (e.g., movement, location, light and sound levels), SilverSense uses the sensor data to facilitate the detection of old-age problems, such as dementia and falls, and stores the data so that caregivers and family members can access and visualize the activity history for monitoring and better lifestyle management. A collaboration with the People's Association Active Ageing Council, the project aims to provide convenient, non-intrusive monitoring of seniors' wellbeing, while connecting their family members and caregivers through a user-friendly interface. SilverSense is powered by Sensorendipity [48], a smartphone-based, web-enabled sensor platform developed to make smartphone sensors easy for web developers to use in real-time web applications. Figure 8 shows that the SilverSense mobile app provides sensor data visualizations for activity monitoring, and a display interface for the movement and location data.

Fig. 8 The SilverSense mobile application provides various visualizations of seniors' behaviors (left), such as movement and location information over time (right)
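
As a hint of how such sensor data can be turned into alerts, below is a minimal sketch of a naive fall-detection heuristic applied to an accelerometer stream. The thresholds and the data format are illustrative assumptions, not the detection logic SilverSense actually uses.

```python
import math

# Hypothetical accelerometer stream: (time_s, ax, ay, az) in m/s^2, as a
# web-enabled sensor platform like Sensorendipity might deliver to an app.
def detect_fall(stream, free_fall=3.0, impact=25.0, window_s=1.0):
    """Naive heuristic: near-zero g (free fall) followed soon by a large spike."""
    free_fall_t = None
    for t, ax, ay, az in stream:
        magnitude = math.sqrt(ax**2 + ay**2 + az**2)
        if magnitude < free_fall:
            free_fall_t = t
        elif free_fall_t is not None and magnitude > impact:
            if t - free_fall_t <= window_s:
                return t  # time of the suspected fall
    return None

events = [(0.0, 0, 0, 9.8), (0.5, 0, 0, 1.2), (0.7, 5, 3, 30.0)]
print("suspected fall at:", detect_fall(events), "s")
```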

3.2 SilverTune NinjaX (Transformable Toy for Music Therapy)

Music therapy is increasingly conducted in health care and clinical settings such as rehabilitation centers and nursing homes to assist older adults with physical disabilities or mental impairments due to dementia or stroke. SilverTune NinjaX (Fig. 9a) is a smart assistive device based on Ninja Track [49], a collaboration between the CUTE Center and Nanyang Polytechnic, informed by interviews with music therapists and occupational therapists from KTPH Geriatric Centre, AWWA Rehab & Elderly Care Centre, and AWWA Dementia Day Care Centre. While Ninja Track for Game was modified and incorporated into a first-person "fishing/fighting" game, Reel Blade [50, 51], as shown in Fig. 9b, Ninja Track for Music was revamped into customizable configurations emulating several types of musical instruments, with matching audio and play interactions, to cater to different preferences and therapeutic requirements in both individual and group settings, to quantitatively record therapeutic data, and to analyze performance to give multi-modal feedback to both the older adults and the therapists.

Fig. 9 a The SilverTune NinjaX device, in a foldable and flexible form with buttons, can be configured to play the sounds of different musical instruments (flute, saxophone, hand bell, drum stick and harp roll). b Reel Blade, a sword or fishing-reel game controller, also an extension of Ninja Track

3.3 Taste+ (Digital Stimulation for Taste Enhancement)

Taste+, a winning entry of the inaugural Design Challenge hosted by the Stanford Longevity Center [52, 53], is a spoon with built-in electronic controls that digitally enhances the sourness and saltiness of food for older people, without adding chemical flavoring ingredients, to compensate for a sense of taste diminished by old age or cognitive impairments. When the tongue touches the two silver electrodes at the bottom of the spoon, the taste sensations of food and beverages can be enhanced (to be saltier or more sour), potentially reducing a person's salt intake. Borrowing the metaphor of switching colors on a multi-color ballpoint pen, one can push a button to switch tastes, each with a corresponding color (ocean blue for salty, lime green for sour) [54]. Our recent Virtual Lemonade system [55] further explores the opportunity of sensing and teleporting the color and corresponding pH value of a glass of lemonade to a customized tumbler, to virtually simulate these properties with plain water. Figure 10 shows the Taste+ spoon design with its electrodes and push button for switching between the different tastes with corresponding LED colors (left), a study participant noting that the spoon tastes quite sour (middle), and the Virtual Lemonade system with its three main components: (1) the lemonade sensor, (2) the communication protocol, and (3) a customized tumbler acting as the lemonade simulator (right).

Fig. 10 (Left) Taste+ spoon prototype, (middle) a study participant confirming the sour taste, and (right) the Virtual Lemonade system with (1) sensor, (2) communication, and (3) simulation
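
The button-cycled taste modes can be sketched as a tiny state machine. The mode list, stimulation levels, and LED handling below are made-up placeholders for illustration, not Taste+'s actual firmware.

```python
from itertools import cycle

# Hypothetical taste modes: each pairs an electrode stimulation level with an
# LED color, cycled by a single push button like a multi-color ballpoint pen.
MODES = [
    {"name": "off",   "led": "none",       "current_uA": 0},
    {"name": "salty", "led": "ocean blue", "current_uA": 80},   # made-up level
    {"name": "sour",  "led": "lime green", "current_uA": 120},  # made-up level
]

class TasteSpoon:
    def __init__(self):
        self._modes = cycle(MODES)
        self.mode = next(self._modes)  # start in "off"

    def push_button(self):
        """Advance to the next taste mode and update the LED and electrodes."""
        self.mode = next(self._modes)
        print(f"mode={self.mode['name']}, led={self.mode['led']}, "
              f"stimulation={self.mode['current_uA']} uA")

spoon = TasteSpoon()
spoon.push_button()  # -> salty
spoon.push_button()  # -> sour
```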

3.4 AmbioTherm (Thermal and Ambient Environment for Presence Enhancement)

AmbioTherm is a wearable accessory for Head Mounted Displays (HMDs) that provides thermal and wind stimuli to simulate real-world environmental conditions and enhance the sense of presence in Virtual Reality (VR). With an Ambient Temperature Module attached to the user's neck and a Wind Simulation Module with fans placed in front of the user's face, the Control Module, communicating over Bluetooth, provides wind and thermal stimuli for VR environments such as a snowy mountain or a hot desert. Participants in our study reported that wearing AmbioTherm significantly improves the sensory and realism factors, contributing to an enhanced sense of presence compared with traditional VR experiences [56]. Figure 11 shows the AmbioTherm setup, which includes a Head Mounted Display, two fans driven by servo motors, and Peltier elements attached to the back of the neck, all connected to a microcontroller and a Bluetooth interface.

Fig. 11 AmbioTherm gives people the sensation of being in a hot desert or on a snowy mountain: a thermal module raises the temperature for heat, and two fans controlled by servo motors change wind direction to create active air motion in a cooler ambient environment
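
To indicate how a VR scene might drive the wearable, here is a minimal sketch of a scene-to-hardware command mapping sent over a serial link. The scene presets, field names, and JSON framing are hypothetical; the actual AmbioTherm firmware defines its own protocol.

```python
import json

# Hypothetical climate presets: the VR scene drives the Peltier element's
# target temperature and the servo-mounted fans' speed and sweep angle.
SCENES = {
    "hot_desert":     {"peltier_c": 38, "fan_pwm": 40, "sweep_deg": 20},
    "snowy_mountain": {"peltier_c": 15, "fan_pwm": 90, "sweep_deg": 60},
}

def make_command(scene: str) -> bytes:
    """Encode one newline-delimited control frame for the Control Module."""
    return (json.dumps(SCENES[scene]) + "\n").encode()

def send(frame: bytes) -> None:
    """Stand-in for a Bluetooth serial write to the microcontroller."""
    print("BT ->", frame.decode().strip())

send(make_command("snowy_mountain"))
```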

4 Discussion and Future Work

Earlier in this article I proposed the idea of the 3Ms (Mind, Might and Magic) as design principles for Assistive Augmentation. I argued that building computational tools is a way of working: a way to build objects to think with, to implement innovations, and to facilitate creative engagement, using tangible and embedded interaction, for health applications. I advocate environments for creativity that serve as labs for making things, and the aspiration of Creating Unique Technology for Everyone through the use of Connective, Ubiquitous Technology for Embodiments.

Then, you might ask, what kind of Design for Assistive Augmentation shall we work on? Let me provide you with some food for thought here.

As illustrated in Fig. 12, Nakakoji observed that people employ three types of physical tools, (1) dumbbells, (2) running shoes, and (3) skis, to help improve or enhance the performance of their physical activities. She suggested that researchers take this analogy into consideration when evaluating different creativity support systems [57]. For example, dumbbells help people develop muscles. Once muscles are developed, they can be used for other physical activities, not just for lifting dumbbells. Running shoes, on the other hand, help runners run faster or more comfortably. People can run without running shoes, but wearing them may result in a better running experience. Finally, skis enable people to ski. Skiing is a new kind of experience that cannot happen without the skis.

Fig. 12 Three types of tool support—a dumbbell to train abilities, b running shoes to enhance performance when worn, c skis that enable new experiences

As we engage in the Design for Assistive Augmentation, let’s remember to ask ourselves the following question: Are we developing dumbbells to help people to build muscles, running shoes to help people run faster, or skis to enable people to ski?

Since tools are designed for different purposes, their roles and effects should be considered accordingly in the research, development and evaluation process. While we can certainly find examples of tools that simultaneously embody multiple aspects, it is important for us to be aware of the differences among them, and not to arbitrarily apply a research design or evaluation framework that is appropriate for one type to the others. I also would like to encourage everyone to work on creating new types of physical and computational dumbbells, running shoes and skis to augment human capabilities.

This section focuses on the topic area of "Design for Assistive Augmentation". The succeeding three chapters in this section are (a) Designing Augmented, Domestic Environments to Support Ageing in Place, (b) Sensory Conversation: An Interactive Environment to Augment Social Communication in Autistic Children, and (c) FingerReader: A Finger-Wearable Assistive Augmenter. These three projects are good examples of the idea of "Design for Assistive Augmentation" because they demonstrate support for real-world applications, from understanding the context and the physical environment to addressing the challenges of capturing relevant sensory information.