Key elements of the emerging field of Assistive Augmentation are the substitution and enhancement of senses: means to augment senses, and means towards “augmented sensors”. We use the term “augmented sensors” to introduce the following subsections of this part of the volume, which focus on enhancing a particular sensory channel, remapping information from one sensory modality to another, and creating new sensing modalities (cf. Fig. 1). We do so by describing our vision of such technology as developed at the Augmented Human Lab, sketching out research thrusts and enablers, highlighting application domains, and speculating about the future of augmented sensors.

Fig. 1 Modes of augmented sensors: (a) enhancing a particular sensory channel; (b) remapping information from one sensory modality to another; (c) creating new sensing modalities

1 Research Thrusts and Enablers

We exemplify the challenges for Augmented Sensors through pertinent research conducted at the Augmented Human Lab (Singapore University of Technology and Design). This work proceeds along three highly interdisciplinary research thrusts (Fig. 2): (1) Novel User Input and Interaction Techniques, (2) Sensory Substitution and Fusion Technology, and (3) Cognitive Augmentation.

Fig. 2 Research thrusts of augmented sensors at the Augmented Human Lab (http://www.ahlab.org/projects). Example projects are listed in blue (1 http://www.ahlab.org/project/kyanite, 2 http://www.ahlab.org/project/fingerreader, 3 http://www.ahlab.org/project/muss-bits, 4 http://www.ahlab.org/project/hapticchair, 5 http://www.ahlab.org/project/sparkubes)

1.1 Novel User Input and Interaction Techniques

Current computer systems lack the contextual knowledge to offer relevant information at the right place and time. They are more like a tool, a hammer for instance: when you need to get some work done, you use the tool and give it explicit instructions. In contrast, what if the tool could guide you on what to do? What if your smartphone were able to inform you that you owe your friend $5 when you meet him? To tackle this, we need to research new ways to interact with computers (i.e. user inputs and interactions). For example, with advances in affective computing, deep neural networks, and GPU computing power, researchers have developed tools capable of understanding the user in a more holistic way. Perhaps researchers from various fields, including interaction design, machine learning, and ubiquitous computing, will have to leverage these advances to move to a paradigm outside of the ‘computer box’. In fact, the introduction of virtual reality devices such as the Oculus Rift and the Samsung GearVR requires interaction methods that go beyond traditional touch- and button-based interfaces. In addition, augmented reality interfaces such as the Microsoft HoloLens bring us out of our world into a detached reality. Given the physical space and energy constraints of such devices, we have to look beyond computer-vision-based gesture recognition techniques. New technologies (such as zSense [1]) are needed to increase the input expressivity of such resource-restricted devices.
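
To make this concrete, the sketch below shows how even two cheap proximity sensors can yield expressive input on a resource-restricted device. It is a toy illustration under our own assumptions about the sensor layout, event format and gesture labels, not zSense's actual sensing pipeline.

```python
# A toy sketch, not the zSense implementation: infer a swipe direction
# from the activation order of two small proximity sensors, the kind of
# low-power input a constrained wearable could use instead of
# camera-based gesture recognition. Sensor ids and labels are assumed.

def swipe_direction(events):
    """events: (timestamp, sensor_id) pairs, sensor_id in {'L', 'R'}."""
    order = [sensor for _, sensor in sorted(events)]
    if order[:2] == ["L", "R"]:
        return "swipe-right"      # left sensor fired first
    if order[:2] == ["R", "L"]:
        return "swipe-left"
    return None                   # single-sensor or ambiguous event

print(swipe_direction([(0.00, "L"), (0.07, "R")]))   # -> swipe-right
```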

1.2 Sensory Substitution and Fusion Technology

Drawing inspiration from novel ways of interacting with tools, one can imagine how limitless our capabilities would be if we could use these novel interactions to augment our sensory abilities. What if we were able to temporarily extend our field of view towards 360°, allowing us to see things happening around us and to anticipate a dangerous situation unfolding behind us? What if a person with deafness could perceive previously inaccessible auditory information through vibro-tactile feedback? We explored the former question in SpiderVision [2], a head-mounted display that enhances the human field of view for augmented awareness, and the latter in works such as the Haptic Chair [3]. Such approaches, we believe, will empower people to use the available communication bandwidth across our senses effectively, or even increase it.

Key to this approach are (i) sensory augmentation technology that makes individual senses more accurate and effective, (ii) sensory substitution technology that remaps sensory information (the Haptic Chair [3] and Music Sensory Substitution (MuSS) Bits [4]), and (iii) fusion technology that has the power to formulate new sensory modalities expanding upon vision, hearing, touch, smell and taste (Taste+ [5]).

1.3 Cognitive Augmentation

While it is fascinating to have new ways of interacting with the environment or to integrate our sensory modalities to enhance our performance, these new interactions may impose additional effort on our part. Further, we live in an era that requires us to constantly multi-task between a variety of activities, often leaving us overwhelmed by information overload. Even as you read this paragraph, you are using attention and memory. While information and tasks can be limitless, the cognitive processes humans possess, particularly attention and memory, are limited. The amount of cognitive resources required depends on the difficulty of the task as well as the number of tasks performed concurrently: the more complex the task, or the greater the number of tasks or items to be remembered, the higher the cognitive load [6]. Through Cognitive Augmentation, we seek to understand a user’s cognitive state and develop technologies that help users make more informed decisions with less cognitive effort. The knowledge gap we need to fill is a holistic understanding of the possibilities of merging different modalities afforded by technological advances. This is typically approached by designing and systematically refining a prototype system through a series of end-user experiments. We use a triangulated framework of objective and subjective approaches to study the user’s cognitive state as they perform a variety of tasks in controlled and natural environments. Drawing from research in psychology, neuroscience and information technology, this emerging field has implications for defence, rehabilitation and education.

1.4 Enablers: Technology and Design Innovation

Design for Acceptance: The success of any technology is determined by the ease of its acceptance and use by the user community. This is all the more crucial when designing assistive technologies, given the intention behind their creation. The cultural and experiential gap between researchers and end users can be especially large when developing such assistive technologies. Such a gap can lead to a situation where developers build products based solely on their own interpretation of the needs, a solution that can be ineffective and patronizing. Adopting a “User Sensitive Inclusive Design” process [7], which includes identifying specific techniques for eliciting information from the target user group and strategies for involving them in user experience studies, overcomes this gap to a large extent. Specially tailored focus group discussions, semi-structured in-depth interviews, and in-home observations designed to study the usability and user experience of the applications produce iterative results that effectively contribute to hardware and software development, and vice versa.

Customized Hardware and Software: Off-the-shelf hardware covers a wide range of modalities. However, it is developed with particular computing paradigms in mind, e.g. applications in robotics or consumer electronics. Both the software and hardware development that pertain to the research thrusts described above have unique requirements. As such, it is critical to develop customized hardware and software to prototype application scenarios of assistive devices (for example, our prior work FingerReader [8] and BWard [9]). Such assistive devices have been designed using custom-made printed circuit boards (PCBs), emerging sensing technologies [10], communication mediums (e.g. low-power WANs) and additive manufacturing, resulting in hardware prototypes that can be adapted to dynamic requirements.

2 Application Domains

In light of the research thrusts and enablers discussed above, we identify some potential implementation scenarios and outline some of the practical applications that the Augmented Human Lab has been working on. While some research thrusts find a direct implementation in the cases listed below, some of our work lies at the intersection of these research thrusts. A common thread underlying these applications is the enablers: customized solutions borne out of a user-centered design process. These projects illustrate the potential that augmentation holds for enabling change across diverse communities and capabilities.

2.1 Independent Living for the Ageing Population

Sustaining the capabilities, independence and resourcefulness of older adults, and helping them to age gracefully, is a key challenge we face today. Traditionally, technologies developed to improve the lives of the elderly have focused mainly on physiological needs and safety concerns. We believe that the opportunities for technology lie not just in memory, cognition and communication but also in sustaining the identity, self-reliance and self-worth of an individual. As such, we aim to design, develop and implement technologies that empower older adults and help them sustain the resourcefulness and independence that make a significant difference.

For instance, we developed StickEar [11] (Fig. 3), a wireless, re-deployable and reconfigurable sound-based sensor that empowers older adults to create a local wireless sensor network at home. It could, for example, help an elderly person with degenerative hearing loss know if someone is knocking at the door or if water is boiling in a whistling kettle. This approach can be extended to other sensor types, such as gas, water and temperature. StickEar can also be used as an output device, allowing a user to trigger a sound output on StickEar from their mobile device. Elderly users can locate objects they have misplaced by simply speaking into their mobile device and triggering an alarm sound on the StickEar attached to that object.
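
As a rough illustration of the sensing loop such a node implies, the sketch below classifies a single audio frame into coarse household events. The sampling rate, thresholds and event labels are our own illustrative assumptions, not StickEar's published pipeline.

```python
# A minimal sketch, assuming a StickEar-style node classifies sounds by
# loudness plus dominant frequency. All thresholds and labels here are
# illustrative assumptions, not the published implementation.
import numpy as np

SAMPLE_RATE = 8000      # Hz; assumed low-power sampling rate
RMS_GATE = 0.1          # amplitude gate before classifying

def classify_sound(frame):
    """Return a coarse event label for one audio frame, or None."""
    rms = np.sqrt(np.mean(frame ** 2))
    if rms < RMS_GATE:
        return None                               # too quiet: no event
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    peak = freqs[np.argmax(spectrum)]
    # Rough heuristics: kettle whistles are tonal and high-pitched,
    # door knocks are broadband and low-frequency.
    return "kettle-whistle" if peak > 1500 else "knock"

# Simulated 0.1 s frame containing a 2 kHz whistle-like tone.
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
event = classify_sound(0.5 * np.sin(2 * np.pi * 2000 * t))
if event:
    print("notify paired phone:", event)   # stand-in for the radio link
```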

Fig. 3 StickEar [11] prototype

In another project, WatchMe [12], we capitalized on the concept of remote sensing to understand the living behaviour of the elderly and use that information to alert family in cases of emergency. The WatchMe system (Fig. 4) is implemented on a regular smartwatch, with a focus on making ambient monitoring intuitive and seamless. For instance, a caretaker’s WatchMe can be paired with the WatchMe of the person who needs support using a simple tap gesture. In addition to the ease of pairing and switching among different caretakers, the wristwatch interface allows users to simply glance at their smartwatch to get a sense of the state of the remote user. We believe these types of seamless interactions create a healthy link between older adults and loved ones who might have busy schedules.
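
The glanceable state itself could be as simple as bucketing the time since the remote user's last detected activity. The sketch below is a minimal illustration under our own assumed state names and time windows, not the published WatchMe design.

```python
# A minimal sketch of glanceable-state logic for a WatchMe-style watch
# face: reduce the remote user's recent activity to one coarse state.
# State names and time windows are illustrative assumptions.

def glance_state(seconds_since_activity):
    if seconds_since_activity < 15 * 60:
        return "active"        # recent movement: show a calm colour
    if seconds_since_activity < 2 * 60 * 60:
        return "quiet"         # neutral, ambient indication
    return "check-in"          # long silence: prompt the caretaker

for s in (5 * 60, 90 * 60, 3 * 60 * 60):
    print(glance_state(s))     # -> active, quiet, check-in
```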

Fig. 4 WatchMe [12] prototype

2.2 Assistive Technology for People with Visual Impairments

It is estimated that about 285 million people worldwide have some form of visual impairment. While the severity of the condition varies from individual to individual, people with visual impairments still lack independence and proper technology to aid in everyday tasks. The major hurdles they face are (i) affordability, (ii) usability and (iii) social acceptance. Related technologies available on the market come with price tags in the order of thousands of dollars (e.g. OrCam at $2,500), require heavy instrumentation, involve a steep learning curve and are usually bulky; the latter also brands users as “special needs” persons. We observe that finger-worn interfaces remain an unexplored space for assistive user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. As such, we focused on developing a finger-worn interface to support a blind person in everyday tasks.

As a starting point, we designed and developed FingerReader [8] (Fig. 5) to assist blind users with reading printed text on the go. We introduced a novel computer vision algorithm for local-sequential text scanning that enables reading single lines or blocks of text, or skimming the text, with complementary multimodal feedback. The system is implemented in a small finger-worn form factor that enables manageable, eyes-free operation with trivial setup. The sustained, broad media coverage of our line of finger-worn devices underlines the significance of the problem at hand for an important community within our society. We plan to develop proof-of-concept assistive technologies that help people with sensory disabilities become more independent in their wayfinding and play a more active role in social relationships.
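
While the full computer vision pipeline is beyond this chapter, the core of the multimodal feedback can be pictured as a small control rule: compare the fingertip's vertical position with the tracked text baseline and emit a corrective haptic cue. The sketch below is our own illustrative reduction, with assumed pixel tolerances and cue names, not FingerReader's actual algorithm.

```python
# A minimal sketch of FingerReader-style guidance feedback: map the
# fingertip's vertical drift from the tracked text baseline to a
# haptic cue. Tolerance and cue names are illustrative assumptions.

def guidance_cue(finger_y, baseline_y, tolerance=8.0):
    """Map vertical drift (pixels, y grows downward) to a cue."""
    drift = finger_y - baseline_y
    if drift > tolerance:
        return "vibrate-top"       # drifted below the line: nudge up
    if drift < -tolerance:
        return "vibrate-bottom"    # drifted above the line: nudge down
    return "none"                  # on the line: keep reading aloud

for y in (100.0, 112.0, 88.0):
    print(guidance_cue(y, baseline_y=100.0))
    # -> none, vibrate-top, vibrate-bottom
```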

Fig. 5 FingerReader [8] prototype

2.3 Assistive Technology for the Deaf Community

Our work with communities having sensory disabilities extends beyond those with visual impairments. It is estimated that over 5% of the world’s population has some form of disabling hearing loss, affecting their ability to perceive speech and music in the environment. We explored the possibility of translating music, an auditory signal, into vibro-tactile feedback through the Haptic Chair [3] (Fig. 6), a sensory substitution interface that provides rich musical experiences to deaf users via ‘full-body haptic stimulation’.
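
One plausible way to picture such a translation, purely as a sketch under our own assumptions about band edges and actuator placement, is to split each audio frame into coarse frequency bands and drive one group of actuators per band.

```python
# A minimal sketch, assuming a Haptic Chair-style mapping from coarse
# frequency bands to body-located actuators. Band edges, sample rate
# and actuator names are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 16000
BANDS = {"back-low": (20, 250),
         "seat-mid": (250, 2000),
         "arms-high": (2000, 8000)}

def actuator_levels(frame):
    """Map one audio frame to a drive level per actuator group."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    levels = {}
    for name, (lo, hi) in BANDS.items():
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        levels[name] = float(np.sqrt(np.mean(band ** 2)))   # band energy
    return levels

# A 64 ms frame mixing a bass note with a bright overtone.
t = np.arange(0, 0.064, 1.0 / SAMPLE_RATE)
frame = np.sin(2 * np.pi * 110 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
print(actuator_levels(frame))    # back-low dominates, arms-high is weaker
```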

Fig. 6 Haptic Chair [3] prototype

In order to better understand how the system works in a more natural environment, we deployed it in a residential deaf school for daily use. It was encouraging to receive positive feedback on how this form of sensory substitution enabled even profoundly deaf users to “hear” a song.

Inspired by this, we developed Music Sensory Substitution (MuSS) Bits [4] (Fig. 7), small wearable plug-and-play sensor-display pairs that capture real-world sounds, extract the rhythm information and convert it into visual and vibrotactile output. We deployed a working prototype of MuSS Bits in the same school, focusing on conveying rhythm information to deaf performers. Our studies demonstrate its effectiveness in improving rhythm recreation for deaf children.
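
The rhythm-extraction step can be pictured as a simple onset detector over a short-term energy envelope, with each detected onset standing in for one vibrotactile pulse. The frame size and jump ratio below are our own assumptions, not the published MuSS Bits signal chain.

```python
# A minimal sketch of energy-envelope onset detection as a stand-in
# for MuSS-Bits-style rhythm extraction. Frame size and jump ratio
# are illustrative assumptions; each onset would trigger one
# vibrotactile pulse and LED flash on the display unit.
import numpy as np

def detect_onsets(signal, frame=256, ratio=1.8):
    """Return frame indices where short-term energy jumps sharply."""
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i*frame:(i+1)*frame] ** 2)
                       for i in range(n)])
    return [i for i in range(1, n)
            if energy[i] > ratio * (energy[i - 1] + 1e-9)]

# Simulated clicks at a steady beat.
sig = np.zeros(8000)
sig[::2000] = 1.0
print(detect_onsets(sig))   # -> [7, 15, 23]: one onset per later click
```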

Fig. 7 MuSS Bits [4] prototype

2.4 Sensing and Just-in-Time Information for Smart Health

Health care professionals report numerous shortcomings of existing bedside care systems: (i) most commercial systems provide only reactive support (for example, alerting the clinical staff once the patient has already fallen), and their alarms are disruptive, especially at night; (ii) most systems produce a high rate of false alarms and therefore lack both effectiveness and efficiency; (iii) although false positives are preferable to false negatives, they significantly increase alarm fatigue and the average reaction time of the clinical staff. Many design opportunities exist that go beyond reactive support. As such, with context-aware wearable and tangible interfaces, we aim to explore new ways of managing bedside care.

As a first step, we explored design opportunities together with stakeholders (doctors, clinical staff and patients) at Changi General Hospital (CGH) in Singapore, adopting a human-centered design process. We designed and developed BWard [9] (Fig. 8), a robust and reliable in situ early blood-leakage detection device tailored to the clinical needs and environment of CGH. The system consists of a reliable detection unit with a programmable audible and visual alarm, integrated seamlessly with the ward’s nurse-call monitoring systems. It could eliminate the need for medical staff to manually observe the wound site after dialysis catheters are removed.
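
Given the alarm-fatigue concerns above, one simple way to picture the escalation logic, as a sketch under our own assumed threshold and debounce count rather than BWard's actual firmware, is to require several consecutive above-threshold readings before raising the ward alarm.

```python
# A minimal sketch of debounced alarm escalation for a BWard-style
# leak detector: several consecutive above-threshold samples are
# required before alerting, cutting transient false alarms.
# Threshold and counts are illustrative assumptions.

class LeakAlarm:
    def __init__(self, threshold=0.6, required=3):
        self.threshold = threshold   # normalized sensor reading
        self.required = required     # consecutive hits before alarming
        self.hits = 0

    def update(self, reading):
        """Feed one sensor sample; return True when the alarm should fire."""
        self.hits = self.hits + 1 if reading > self.threshold else 0
        return self.hits >= self.required

alarm = LeakAlarm()
for r in (0.2, 0.7, 0.1, 0.7, 0.8, 0.9):   # a transient spike, then a leak
    if alarm.update(r):
        print("alert nurse-call system")   # fires once, on the last sample
```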

Fig. 8 BWard [9] prototype

2.5 Personalized and Continuous Rehabilitation

Rehabilitation training typically involves extensive repetitive range-of-motion and coordination exercises. This requires substantial effort from a therapist to supervise and assess the progress of a patient, yet in most cases the rehabilitation process cannot be performed with sufficient intensity due to limited human and financial resources [6]. Further, existing systems are typically bulky, complicated and ergonomically poor in design (e.g. the Sun SPOT sensor node [13], wearable sensors interconnected by wires [14]). To overcome these limitations, we augment current rehabilitation processes with responsive objects and serious gaming to increase motivation and provide personalised care. This includes physical/virtual rehabilitation game design, non-intrusive sensing device design, sensing system design and data analytics.

As one instantiation, our team developed a proof-of-concept prototype, SHRUG [15] (Fig. 9), in consultation with medical professionals dealing with stroke rehabilitation at St Andrew’s Community Hospital, Singapore. It has two elements: (1) a main rehabilitation device, based on the hospital apparatus enhanced with a sensor and a feedback system, and (2) a pole interface designed to interact with the main device. The pole interface provides a sense of ownership and enhances the gaming element: it displays the user’s score and ‘belongs’ to the user as a personal device across potentially multiple rehabilitation devices. A complementary information dashboard gives therapists access to performance data to enable personalized care. With this system, we expect to demonstrate in a real-world setting how interactive, gamified feedback can engage patients and empower therapists to provide personalized care.
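
As an illustration of the kind of scoring such a sensorized apparatus enables, the sketch below counts repetitions from a single range-of-motion signal using a two-threshold hysteresis rule. The thresholds and the signal are our own assumptions, not SHRUG's implementation.

```python
# A minimal sketch of repetition counting for a SHRUG-style device:
# score one rep each time the measured angle passes a high threshold
# after first returning below a low one (hysteresis avoids double
# counting near the threshold). Thresholds are illustrative assumptions.

def count_reps(angles, low=20.0, high=70.0):
    reps, armed = 0, True
    for a in angles:
        if armed and a >= high:
            reps += 1          # full extension reached: score one rep
            armed = False
        elif not armed and a <= low:
            armed = True       # returned to rest: ready for the next rep
    return reps

print(count_reps([10, 40, 80, 30, 15, 75, 90, 10]))   # -> 2
```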

Fig. 9 SHRUG [15] prototype

2.6 Interfaces to Support Learning

Learning does not occur in a vacuum. Any learning process typically involves learners and the learning tools or objects in the environment that the learner interacts with; in many cases, an adult or teacher who guides and enables the learning process is also present. The learner brings two states that affect learning behavior and outcome to a great extent [16]: (1) a cognitive state and (2) an affective/emotional state. The cognitive state includes executive functions such as working memory, inhibition and flexibility, but it is influenced by emotional states, which in turn affect learning. By understanding these underlying states during learning, we can design interfaces that best enable learning across age groups. Research has shown that when physical objects become interactive, they invite more engagement and become more playful.

As preliminary work, we explored play behaviour in children to understand how ordinary blocks can be made interactive and how such an addition influences play dynamics. Through free-play sessions, we observed the patterns that children formed using ordinary blocks versus SparKubes [17]. SparKubes (Fig. 10) are a set of stand-alone tangible objects that use the flow of light as their principle of operation; they are coded with simple behaviors and require no special instrumentation or setup. Our observations revealed that children not only tend to spend more time exploring the interactive features but also formed a greater variety of patterns using SparKubes [18] compared to ordinary blocks. Our findings also revealed that making ordinary objects interactive has the potential to increase the play value of an object, thereby making the interaction more engaging.
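
A minimal way to picture the light-flow behaviour, under our own assumption of a simple chain topology and a one-step-per-tick rule, is the following toy sketch; it is not the SparKubes firmware.

```python
# A toy sketch of SparKubes-style light flow: each tick, a cube passes
# its light state to the next cube in the chain. The chain topology
# and tick rule are illustrative assumptions.

def step(lights):
    """Shift the light one cube down the chain per tick."""
    return [False] + lights[:-1]

chain = [True, False, False, False]    # light enters at the first cube
for _ in range(3):
    chain = step(chain)
    print("".join("*" if on else "." for on in chain))
# .*..
# ..*.
# ...*
```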

Fig. 10 SparKubes [17] prototype

2.7 Interactive Media for Community Engagement

In the urban public arena, media platforms can serve as a space of creative and artistic engagement between people, exploring and building a sense of belonging and community. For example, SonicSG [22] has a kind of “double ontology” [23], with a visual/sonic/interactive aesthetic dimension situated along an urban recreational river walkway. Visitors were invited to participate in the work by pointing their mobile device browsers to sonic.sg, where they entered the postal code of their Singapore neighbourhood. After writing a birthday wish and submitting it, a pebble-drop ripple of light emanated from their neighbourhood location in the floating light display. This effect provided immediate feedback and public evidence of their participation, creating a connection between the audience and the work. Another “layer of connectedness” in the audience was then established by turning their mobile phones into a distributed array of “sonified personal pixels”: each phone slowly pulsed a colour and tone unique to its owner’s neighbourhood, at a rate that was a function of the number of other participants from the same neighbourhood. As participants moved around and explored the installation, a light and sound texture was created among the audience, reflecting both the dynamic diversity of neighbourhoods and the unified tapestry they collectively comprise as a nation. Apart from SonicSG, we have developed technologies to support urban and interactive media designs that blur the boundary between people, objects and environments (iSwarm [19] (Fig. 11), nZwarm [20], ReadBridge [21]).
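
The neighbourhood-to-pulse mapping can be pictured in a few lines of code. The postal-district parsing, base period and hue formula below are our own illustrative assumptions, not the deployed SonicSG logic.

```python
# A minimal sketch of a SonicSG-style mapping: pulse rate grows with
# the number of participants from the same neighbourhood, and colour
# is keyed to the postal district. All constants are assumptions.
from collections import Counter

participants = ["018956", "018956", "018956", "569830", "569830", "730742"]
counts = Counter(code[:2] for code in participants)  # 2-digit district key

def pulse_period(district, base=4.0, floor=0.5):
    """Seconds per pulse: more neighbours, faster pulsing (clamped)."""
    return max(floor, base / counts.get(district, 1))

def district_hue(district):
    """Stable hue (0-359) derived from the district code."""
    return (int(district) * 37) % 360

for d in sorted(counts):
    print(d, f"{pulse_period(d):.2f}s", district_hue(d))
```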

Fig. 11 SonicSG [22] prototype

3 Moving Forward

In this chapter, we summarized our endeavours along different research avenues of Assistive Augmentation. The illustrated research thrusts and application domains correspond to our vision at the Augmented Human Lab: enhancing how we live, work and play, and, most importantly, humanizing technology. This ranges from practical behavioral issues and understanding the real-life contexts in which technologies function, to understanding where technologies can be not just exciting or novel but can have a meaningful impact on the way people live.

Augmenting senses, or sensors, is key to this agenda. The highlighted application domains specifically focus on enhancing a particular sensory channel, remapping information from one sensory modality to another, and creating new sensing modalities. Moving forward, these exemplary projects contribute not only to specific communities but also have the potential for wider outreach. In line with the general agenda of Assistive Augmentation as a research field, the emphasis of these projects is on “enabling” rather than on “fixing”, an approach that opens up their potential to a broader range of applications.