
1 Background

Reconfigurable structures have existed since the early days of architecture. Doors and windows are reconfigurable architectural elements that shape-shift in response to the physical, environmental and social characteristics of an architectural space. Conventional examples of reconfigurable structures include doors, windows and room dividers, such as the Shōji of traditional Japanese architecture (Fig. 1). The Rietveld Schroder House, built in 1924, features movable parts that provide manually reconfigurable spaces, which enhance possible activities inside and outside the house [1, 2].

Fig. 1. Room in a Japanese house with a Shōji

With the rapid advancement of technology, the world will continue to absorb technology at the same pace. Technology is an indispensable part of architecture and, therefore, designers need to acquire new skills such as algorithmic thinking [3] and programming to implement new technologies effectively. This may gradually expand the role of designers to also become programmers, engineers and robotic scientists. In particular, reconfigurable architecture involves spaces featuring sensors, processors and actuators embedded in the physical space to sense, initiate dialogues and reconfigure in accordance with human needs.

Reconfigurable architecture is not simply architecture that is responsive or adaptive to changing conditions. Rather, reconfigurable architecture is based on a constant dialogue among three constituents: people, buildings and the environment. The dialogue involves multi-directional interactions among these constituents as they listen to each other (input), think (process) and talk (output) [4]. Gordon Pask’s Conversation Theory elaborates three types of interaction: human-to-human, human-to-machine and machine-to-machine [5]. Pask’s theory emphasizes the importance of real-time dialogues among people and machines, and how each defines and redefines the others in the context of the built and natural environments.

The Reconfigurable Wall System builds upon the concept of embodied human-computer interaction [6], in which architectural spaces integrate cyber-physical systems to augment the interaction among people, buildings and environments. In other words, the Reconfigurable Wall System is an exemplar case of reconfigurable architecture in which computation embedded in the built environment strengthens the link between people, buildings and environments. This paper investigates various ways of initiating dialogues among people, buildings and the surrounding environment through the Reconfigurable Wall System, from both design and functional points of view.

1.1 Contextual Conditions

Before discussing how to build and program a reconfigurable wall system, one should ask why such a system is needed in the first place. In his book A Pattern Language, Christopher Alexander discusses the shape of spaces as being either positive or negative. By positive he refers to spaces that people feel comfortable within; by negative he describes leftover spaces, which people typically neglect and which remain unused [7]. Alexander argues that a positive, social space resembles a convex geometry that embraces more people, in contrast to a straight wall, which “makes no sense in human or structural terms.” According to Alexander, to develop an inviting social space, the walls of an indoor space need an in-between formal state that blurs the boundary between static and dynamic qualities. This in-between formal state offers diverse opportunities for shape-shifting and for changing the nature of the space. In other words, different spaces can be grouped into a single space to deliver a multitude of functions within that space. This concept lies at the core of “reconfigurable environments” and informs the development of “spaces of many functions” [1].

The answer to why we need a reconfigurable system, therefore, is to create a versatile yet human space that meets occupants’ changing needs. The goal of our investigation is twofold: first, to design, fabricate and program a reconfigurable wall system; and second, to test the developed system. Different scenarios are examined to develop the dialogues that can be established among people, buildings and the environment.

2 Design Process

Our design-research team initially explored the potential of linking theoretical frameworks, physical spaces, reconfigurable architecture and embedded computation to meet occupants’ changing needs in an architectural space. Our multi-disciplinary team developed a common framework for designing a reconfigurable wall system that can react to social and environmental conditions. After pondering and debating different aspects of the structure, including its physicality and its user experience, as well as pattern development, the design team proposed alternative prototypes, which converged during a follow-up design phase into the Reconfigurable Wall System (see Fig. 3). Three possible scenarios for the reconfiguration of the wall system were identified:

1. embrace: in which the wall reconfigures into a concave shape and creates an inviting, positive space;

2. repellent: in which the wall reconfigures into a convex shape and creates an off-putting, negative space; and

3. delineate: in which the wall reconfigures into a straight bounding form.

These three formal states played a central role in the design and development of the wall system (see Figs. 2 and 3).

Fig. 2. Nine possible configurations of the wall system including embrace, repellent and delineate.

Fig. 3. TOP: exploded view of the Reconfigurable Wall System; BOTTOM: different wall reconfigurations including embrace, repellent and delineate.

To develop a reconfigurable structure that can offer the embrace, repellent and delineate configurations within a single system, we had to focus on both hardware fabrication and software development. Hardware fabrication involves the design of the physical structure and focuses on the geometry and its ability to transform, whereas software development involves the intelligence embedded within the wall system and includes the capacity for decision-making. Below we discuss the hardware fabrication and the software development of the Reconfigurable Wall System as it was iteratively designed, tested and fabricated.

2.1 Hardware Fabrication

The biggest challenge in developing the hardware component of the Reconfigurable Wall System was to design a structure that transforms from one state into another while preserving its topological characteristics, without performing any additive or subtractive operations. The design of the physical structure involved identifying appropriate tessellations for the surface mesh. During this phase, structures with different shapes and geometries were examined, and finally a triangular tessellation with hexagonal (p6m) symmetry [8] was selected. This mesh consists of interconnected equilateral triangles and allows for controlled movement of the structure. Additionally, the triangular grid enhances the smoothness of the mesh while offering the structure more freedom to grow and shrink (see Fig. 4). The grid structure also allows the surface to shape-shift by moving grid points up and down through attached actuators (see Fig. 5).
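As a rough illustration of this kind of tessellation, the short Python sketch below generates vertex coordinates for a small equilateral-triangle grid; the grid size and spacing are illustrative assumptions, not the dimensions of the built prototype.

```python
import math

def triangular_grid(rows, cols, spacing=1.0):
    """Generate vertex coordinates (x, y, z) for an equilateral-triangle grid.

    Alternate rows are offset by half the spacing, producing the interconnected
    equilateral triangles described above. All points start flat (z = 0);
    actuators would later displace selected points in z.
    """
    points = []
    row_height = spacing * math.sqrt(3) / 2  # vertical distance between rows
    for r in range(rows):
        x_offset = spacing / 2 if r % 2 == 1 else 0.0
        for c in range(cols):
            points.append((c * spacing + x_offset, r * row_height, 0.0))
    return points

# Example: a 5 x 5 vertex patch with 0.3 m spacing (illustrative values).
grid = triangular_grid(5, 5, spacing=0.3)
print(len(grid), "vertices; first vertex:", grid[0])
```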

Fig. 4. Grid experimentation and employed mechanism for transforming the surface mesh. All configurations were tested manually at early design stages with a focus on mechanisms, and without employing any electronics.

Fig. 5. Early prototype explaining the shape-shifting mechanism for surface mesh transformation

The early prototype was inspired by origami structures and comprised MDF pieces connected with duct tape. Several deficiencies were identified: (1) as the corner points of each triangle moved up and down to transform the associated triangle, the points could not preserve their perpendicularity to the surface (Fig. 5), which could limit the reconfigurability of the system as a whole; (2) the MDF pieces were heavy and required stronger servo motors to actuate the vertical shaft; and (3) in order to fold the constructed surface in different directions, we had to leave gaps between the MDF triangles to minimize friction between the comprising elements (see Fig. 4, bottom).

These lessons informed the development of a more refined prototype. In the final prototype, each point on the triangulated grid holds a flexible joint mechanism that connects six shafts at once. As each point moves in the Z direction, the shafts extend and allow for grid transformation. Figure 5 shows an early successful prototype; aluminum shafts were used to facilitate the actuation mechanism through six-axis 3D-printed joints.
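As a back-of-the-envelope check of this mechanism, the sketch below computes how much a shaft between two neighboring grid points must extend when one point is displaced vertically; the horizontal spacing is an assumed illustrative value, not a measurement of the prototype.

```python
import math

def shaft_length(horizontal_spacing, dz):
    """Length of a shaft between two grid points with vertical offset dz."""
    return math.hypot(horizontal_spacing, dz)

spacing = 0.3  # assumed horizontal distance between neighboring points (m)
for dz in (0.0, 0.05, 0.10, 0.15):
    extension = shaft_length(spacing, dz) - spacing
    print(f"dz = {dz:.2f} m -> shaft extension = {extension * 1000:.1f} mm")
```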

The final prototype was developed with four actuation points (Fig. 6). These points define two triangles, which were motorized to allow for grid transformation. In particular, four servo motors (20 kg·cm of torque) were attached to the corners of each triangle through aluminum shafts. The servo motors were programmed and powered through an Arduino microcontroller and an external power supply. Computer programs such as Rhinoceros’ Grasshopper and Firefly were used to facilitate communication between the data-collection device (a Microsoft Kinect) and the Arduino microcontroller. Finally, we used stretch fabric as a skin to cover the system.
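In the project this host-to-Arduino link is handled by Grasshopper and Firefly; the minimal Python sketch below shows the same idea with pyserial, sending target angles for the four servos over a serial connection. The port name, baud rate and the comma-separated message format are assumptions for illustration only, not the protocol used in the prototype.

```python
import serial
import time

PORT = "/dev/ttyACM0"   # assumed Arduino serial port
BAUD = 9600             # assumed baud rate

def send_angles(link, angles):
    """Send four servo angles (degrees) as a comma-separated line."""
    clamped = [max(0, min(180, int(a))) for a in angles]
    link.write((",".join(str(a) for a in clamped) + "\n").encode("ascii"))

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        time.sleep(2)                          # allow the Arduino to reset
        send_angles(link, [90, 90, 90, 90])    # flat ("delineate") posture
        time.sleep(1)
        send_angles(link, [60, 120, 60, 120])  # arbitrary test posture
```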

Fig. 6. Final prototype of the Reconfigurable Wall System

2.2 Software Development

The Reconfigurable Wall System was programmed through a package of software including Rhinoceros’ Grasshopper and Firefly, a Grasshopper plugin that serves as the primary communication tool between Grasshopper and the Arduino board [9]. Three different algorithms were developed to provide multiple dialogue modes: (1) preconfigured interaction; (2) responsive interaction (sensing the environment); and (3) predictive interaction (processing large data).

In the preconfigured interaction mode, the structure alters its shape based on hardcoded values and ultimately reaches predefined configurations. This mode is necessary for testing the functionality of the system, including both hardware and software; it also acts as a safety net in cases of malfunction and emergency. Moreover, the preconfigured interaction mode allows the surface mesh to be reconfigured manually, providing a convenient way for the occupant to override and reconfigure the system as needed. Hardcoded values were defined for the three wall configurations: embrace, repellent and delineate (Fig. 7).
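A minimal sketch of how such hardcoded presets could be organized is shown below; the angle values are illustrative placeholders rather than the calibrated values used in the prototype, and in practice the angles would be sent to the Arduino (e.g. via the serial sketch above).

```python
# Preconfigured interaction mode (sketch): named presets mapped to servo
# angles. The values below are illustrative placeholders only.
PRESETS = {
    "delineate": [90, 90, 90, 90],    # straight bounding wall
    "embrace":   [45, 135, 45, 135],  # concave, inviting configuration
    "repellent": [135, 45, 135, 45],  # convex, off-putting configuration
}

def preset_angles(name):
    """Look up the servo angles for a predefined configuration."""
    if name not in PRESETS:
        raise ValueError(f"unknown configuration: {name}")
    return PRESETS[name]

print(preset_angles("embrace"))
```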

Fig. 7. Manual configuration of the system using direct user input

In the responsive interaction mode, the system collects data from the surrounding environment through embedded sensors, analyzes the data and instructs the structure to reconfigure accordingly. The structure integrates a Microsoft Kinect motion sensor, whose “skeletal mapping” and depth information is processed within Rhinoceros and Grasshopper to understand nearby occupants [5] (Fig. 8). In this case, the wall system transforms from delineate to repellent only when occupants get close to particular spots on the structure. This mode is helpful in areas where distorting the straight surface is not desired but might be necessary. For instance, standing in the corridor of an airport at peak hours is undesirable; a temporary, barely perceptible transformation of the wall system changes the nature of the space to offer more room (Fig. 9).
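The sketch below illustrates the core of this mode under simplified assumptions: occupant positions (e.g. from Kinect skeletal tracking) are reduced to horizontal distances from each actuation point, and any point with an occupant inside a threshold radius is pushed toward the repellent posture. The actuation-point coordinates, threshold and angle values are hypothetical.

```python
import math

# Assumed x positions (m) of the four actuation points along the wall.
ACTUATION_X = [0.0, 0.6, 1.2, 1.8]
NEAR_THRESHOLD = 0.6   # assumed distance (m) below which the wall reacts
FLAT_ANGLE = 90        # delineate posture
REPEL_ANGLE = 135      # local repellent posture (illustrative)

def responsive_angles(occupant_positions):
    """Map occupant (x, depth) positions to one angle per actuation point.

    Each actuation point bulges toward 'repellent' only if some occupant is
    within NEAR_THRESHOLD of it; otherwise it stays flat.
    """
    angles = []
    for ax in ACTUATION_X:
        nearest = min(
            (math.hypot(ox - ax, depth) for ox, depth in occupant_positions),
            default=float("inf"),
        )
        angles.append(REPEL_ANGLE if nearest < NEAR_THRESHOLD else FLAT_ANGLE)
    return angles

# Example: one occupant 0.5 m in front of the second actuation point.
print(responsive_angles([(0.6, 0.5)]))  # -> [90, 135, 90, 90]
```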

Fig. 8. TOP: collecting data from the surrounding environment using Kinect; BOTTOM: reacting to the skeletal mapping viewed in Rhinoceros and Grasshopper.

Fig. 9. Testing of the responsive interaction mode

In the predictive interaction mode, larger data sets will be employed so that the system can make proactive decisions and refine its own behavior. Using recorded data and a series of conditional statements, the system decides which shape to take under specific conditions. For instance, the structure receives occupancy data and reconfigures its shape prior to the peak time of the day or a particular event. This mode has not been fully implemented yet; we aim to pursue further research on the algorithm to integrate the predictive interaction mode after testing the prototype in situ.
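As a sketch of the intended rule-based core of this mode, the example below chooses a configuration from recorded occupancy counts and the time of day; the thresholds, peak hours and mapping to configurations are hypothetical placeholders.

```python
# Predictive interaction mode (sketch): pick a configuration ahead of expected
# demand, using recorded occupancy and time of day. Values are hypothetical.
PEAK_HOURS = range(8, 10)   # assumed morning peak
CROWD_THRESHOLD = 20        # assumed occupancy above which space gets tight

def predict_configuration(hour, recent_occupancy):
    """Return the configuration the wall should take proactively."""
    expected = max(recent_occupancy) if recent_occupancy else 0
    if hour in PEAK_HOURS or expected > CROWD_THRESHOLD:
        return "repellent"   # free up circulation space before the rush
    if expected < 5:
        return "embrace"     # low occupancy: create an inviting social pocket
    return "delineate"

print(predict_configuration(hour=9, recent_occupancy=[12, 18, 25]))  # repellent
```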

3 Four Things Learned

Although the initial prototypes were designed, developed and tested by the research team in a laboratory setting, four things were learned which can inform the next steps of the project:

1. Alternative materials should be used for safety purposes to replace sharp edges/corners of the current prototype components.

2. Although the main intention is to develop a wall system that recognizes contextual conditions and reacts accordingly, we need to embed a control box for overriding the interaction modes in various unpredictable situations.

3. There is an excellent design-research opportunity to enhance multisensory experiences as the Reconfigurable Wall System takes the different shapes of embrace, repellent and delineate in response to different human emotions. Various color integrations would further enhance this opportunity.

4. The design of the mesh system depends on the expected activities that occur within the space. The triangular mesh surface utilized with the shaft mechanism offers great flexibility and works with a variety of structures of different designs.

4 Future Work: Intelligent Dialogues

We are currently in the process of designing and fabricating a more refined version of the Reconfigurable Wall System. The main focus is on developing a fully functioning modular unit that offers the three interaction modes: preconfigured, responsive and predictive. Ultimately, the modular unit will be integrated into the building system at a larger scale. User groups of different ages will be recruited to evaluate the usability of the system in situ. We aim to enhance the efficacy of the Reconfigurable Wall System through different usability studies.

In addition to the hardware adjustments and testing, our next experiment will also focus on advancing the software of the system. By integrating machine learning, the system will teach itself and adapt accordingly. Our embedded code will involve “algorithms that improve through experience” [10]. The ultimate goal is to design an intelligent system that controls the structure and improves its behavior over time. Here, the system itself will be responsible for adjusting the software and regulating its behavior. As a result, the dialogue between occupants, machines and the environment becomes an intelligent one.
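One possible, deliberately simple realization of such a learning loop is sketched below, assuming a scalar reward (e.g. occupant dwell time or explicit feedback) can be measured after each reconfiguration. This epsilon-greedy value estimate only indicates the intended direction and is not the project’s final learning algorithm.

```python
import random

# Sketch: slowly prefer configurations that earn higher reward. The reward
# signal (e.g. dwell time or occupant feedback) is an assumed input.
CONFIGS = ["embrace", "repellent", "delineate"]

class ConfigurationLearner:
    def __init__(self, epsilon=0.1, learning_rate=0.2):
        self.epsilon = epsilon
        self.learning_rate = learning_rate
        self.value = {c: 0.0 for c in CONFIGS}   # estimated reward per config

    def choose(self):
        if random.random() < self.epsilon:           # occasionally explore
            return random.choice(CONFIGS)
        return max(self.value, key=self.value.get)   # otherwise exploit

    def update(self, config, reward):
        """Move the value estimate toward the observed reward."""
        self.value[config] += self.learning_rate * (reward - self.value[config])

learner = ConfigurationLearner()
chosen = learner.choose()
learner.update(chosen, reward=1.0)  # reward would come from sensed responses
print(chosen, learner.value)
```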

5 Import for HCI International Community

For the larger HCI International community, the Reconfigurable Wall System is a case of research-through-design [11] focusing on cyber-physical programming in the context of the natural and built environments. As computing becomes ever more ubiquitous in our everyday lives, it will inevitably take root in the physical spaces we live in and increasingly converge with them to form reconfigurable, cyber-physical environments - a next frontier for HCI International.