Abstract
Over the past few decades, digital technologies have played an increasingly substantial role in how we relate to our environment. Through mobile devices, sensors, big data and ubiquitous computing, amongst other technologies, our physical surroundings can be augmented with intangible data, suggesting that architects of the future could start to view the increasingly digitally saturated world around them as an information-rich environment (McCullough in Ambient Commons: Attention in the Age of Embodied Information, The MIT Press, Cambridge, 2013). The adoption of computational design and digital fabrication processes has given architects and designers the opportunity to design and fabricate architecture with unprecedented material performance and precision. However, attempts to combine these tools with methods for navigating and integrating the vast amount of available data into the design process have been slow and complicated. This research proposes a method for data capture and visualization which, despite its infancy, shows potential for use in projects ranging in scale from the urban to the interior evaluation of existing buildings. The working research question is as follows: "How can we develop a near real-time data capture and visualization method, which can be used across multiple scales in various architectural design processes?"
Context
Contemporary design tools have become increasingly accessible and open, resulting in a highly flexible and customisable design environment (Davis and Peters 2013), enabling architects to interface directly with a wide variety of technologies, from simulation and analysis to robotics, microelectronics and sensing. This research project builds upon the Arduino open source electronics platform and employs the Grasshopper algorithmic modelling environment for Rhino, contributing to the democratisation of spatial sensing and data visualisation for architects. There have been several precedents in using sensors and data visualisation to map intangible layers within design environments with a focus towards implementation in landscape architecture (Fraguada et al. 2013; Melsom et al. 2012) or urban design (Offenhuber and Ratti 2014). Our project focuses on architectural design and integrates sensing and mapping of intangible data from the scale of the site through to the building and the interior.
This research develops bespoke spatial data gathering hardware and explores computational design workflows that permit the integration of this data into the architectural design process.
The project has two primary goals:
- To develop low-cost, mobile sensing and data-capturing units that are wearable and can be mounted to various manned or autonomous vehicles;
- To develop a computational toolset and workflows for the integration and digital representation of various data streams within an architectural design environment, allowing architects to navigate and manipulate these representations to:
  - gain a better understanding of the intangible spatial aspects; and
  - inform computational design models with more complex datasets.
First Prototype
The first prototype of the sensing unit was developed during the Sense, Adapt, Create Field Station summer school in Berlin in September 2016. Starting from the site and buildings of Teufelsberg, the workshop challenged students to reflect on our environment, which consists of several tangible and intangible layers, and to question how architecture relates to those fields. As part of this overall brief the students were introduced to electronics prototyping and several sensors. The summer school resulted in several field stations, or probes, that interacted with chosen fields of the site (Fig. 1).
The mobile sensing unit employed in the summer school was developed by the tutors of the sensing workshop, and logged soil humidity, carbon dioxide, ambient light, air humidity, temperature and wind speed levels as well as the orientation of the device. While this was useful within the framework of the workshop, it had some shortcomings: geolocation was not integrated into the sensing unit but was provided by an external smartphone, resulting in a difference in granularity between location and sensor data. The representation of the sensor data was thus limited to standard graphs in a spreadsheet (Ballestrem 2016).
The research was further developed into a second prototype at the Aarhus School of Architecture in 2017. Limitations of the first prototype were resolved, and several data capturing and visualization methods were further explored, as discussed in detail below.
Data Capture
The specific spatial conditions and needs of architectural design tasks place certain requirements on data capture methods that can be difficult to meet when dealing with imperceptible data and static measurement devices. Inconsistencies between the dimensionality of the data being recorded and its representation space can result in abstract re-interpretations and lead to false readings and ultimately, useless data. To model intangible information in a volumetric manner that is of value in an architectural design process it is necessary to move beyond static, point-based data collection methods and manoeuvre the hardware spatially whilst simultaneously linking the streamed data with the physical location.
Our solution was to develop an open-source, low-cost hardware framework for adaptive-resolution data capture at an architectural scale on the Arduino platform, providing easy access to a wide variety of sensor datasets and offering great potential for expansion and modular task assignment (Melsom et al. 2012). The base system was developed on a battery-powered microcontroller to which we connected additional hardware for geolocation and data recording, permitting us to simultaneously log incoming generic data alongside its coordinates, altitude and a timestamp. These data-packets were then appended to with project-specific sensor data, here humidity, temperature, sound and ambient light levels as well as a gaseous composition reading were collected (Fig. 2).
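As an illustration of the data packets described above, the sketch below assembles one log line combining a timestamp, geolocation and the project-specific sensor readings. The field order, field names and timestamp format are our assumptions for illustration, not the prototype's exact log format:

```python
import time

def format_packet(lat, lon, alt, humidity, temp_c, sound, light, gas):
    """Assemble one comma-separated data packet: timestamp and
    geolocation first, followed by the project-specific sensor
    readings. Field layout is illustrative, not the exact format
    used by the prototype."""
    fields = [time.strftime("%Y-%m-%dT%H:%M:%S"), lat, lon, alt,
              humidity, temp_c, sound, light, gas]
    return ",".join(str(f) for f in fields)
```

Keeping the geolocation fields in every packet is what later allows each reading to be remapped into the digital 3D environment.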
One of the problems with using low-cost, open-source hardware is the fidelity of the readings: spikes in the data are to be expected unless it is smoothed. To compensate for this, all of the gathered data was first logged at short intervals (5–15 ms) in groups of 100 readings. Each group is then averaged to a single value, which is logged, affecting the data as shown in Figs. 3, 4 and 5.
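The averaging step described above can be sketched as follows; discarding a trailing partial group is our assumption:

```python
def smooth(readings, group_size=100):
    """Average consecutive groups of raw readings to suppress spikes,
    returning one logged value per full group (a trailing partial
    group is discarded, an assumption on our part)."""
    averaged = []
    for i in range(0, len(readings) - group_size + 1, group_size):
        group = readings[i:i + group_size]
        averaged.append(sum(group) / group_size)
    return averaged
```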
With these basic tools, this project expands the previous work in this field by making it possible to assign the data to real-world coordinates which are then mapped to points in a digital 3D environment through a specific data structure which will be further detailed in the following section.
The link between the captured data and both its physical and digital position is non-trivial as it permits a bi-directional workflow between the data capture process and the visualisation and control model. This live interface, coupled with an autonomous physical motion platform, allows the iterative re-evaluation and sampling of the datascape at points where it is deemed important by the designer to have more information, resulting in an adaptive-resolution data capture process. Additionally, the developed workflow is motion-system agnostic, resulting in a higher diversity of possible landscapes for sampling, depending on the manoeuvrability, precision and range of the motion platform.
Data Visualization and Processing
As described in previous research (Fraguada et al. 2013; Melsom et al. 2012), visualizing and processing collected data is a difficult task, and it has therefore received considerable attention in this project. Our ambition is to enable designers to capture and visualize data in a CAD environment native to designers and architects, allowing them to stream and write data in a time-efficient way.
We interpret data through a modelling environment well known to the practice of architecture and design: Rhinoceros 3D, for which we have developed a series of custom tools and data interpretation methods in Grasshopper. This paper describes the tools related to the act of reading and writing data to the digital environment and visualizing the data. The methods of visualization enable the production of color gradient maps, volumetric models or simply permit the designer to query and alter the data. We believe that these functionalities can become important tools for architects in the future.
The methods revolve around a bespoke voxel modeler that we can read and write information from and to. Voxels are three-dimensional pixels: a volumetric grid of points where each point can be assigned a scalar value. In the case of this research each grid position has implicit knowledge of its neighboring environment, structured in a grid of cubes, as described in Fig. 6.
While there are many simpler methods for storing data, the structure described above is a necessity since we build a marching cubes algorithm on top of the voxels (Bourke 1994). We do not build the elements of Fig. 6 as explicit geometry but as a data structure, meaning we build arrays [x, y] with ID pointers for the specified information. Classically, marching cubes algorithms are used to make implicit surfaces (Bourke 1997), using various mathematical formulas to wrap geometries in a mesh. In this case, however, we use the technique not to generate such surfaces but to create volumetric representations of the data gathered by the sensor module.
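As a minimal sketch of where the neighbourhood structure matters, the first step of a marching cubes pass classifies each cube by which of its eight corner scalars exceed the iso-value; the full edge and triangle lookup tables (Bourke 1994) are omitted here:

```python
def cube_case_index(corner_values, iso):
    """First step of marching cubes: build the 8-bit case index
    marking which of a cube's 8 corner scalars lie above the
    iso-value. The index (0-255) selects an entry in the standard
    edge/triangle tables, which are omitted in this sketch."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value > iso:
            index |= 1 << bit
    return index
```

Cases 0 and 255 (all corners below or above the iso-value) produce no geometry, which is why the voxel grid's implicit corner ordering must be consistent across the whole field.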
It is these properties and the structure of voxel grids that allow them to excel at representing the regularly sampled but non-homogeneous data we are interested in. We have adopted this voxel data structure and developed a method for converting a 3D point position to an ID in the data structure, making it possible to read and write data very efficiently. The formula below uses the following information: p, a point in space; pMin, the first data point of the voxel grid; deltaX, deltaY and deltaZ, the spacing between the points in their respective directions; and nX, nY and nZ, the number of points in their respective directions. The formula is:
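Since formula (1) itself is not reproduced here, the following sketch shows one common way of implementing such a point-to-ID conversion, assuming a row-major layout with x varying fastest; the rounding and boundary-clamping behaviour are our assumptions:

```python
def point_to_id(p, p_min, delta, n):
    """Convert a 3D point p to a flat index in the voxel array.
    p, p_min: (x, y, z) tuples; delta: (deltaX, deltaY, deltaZ)
    spacings; n: (nX, nY, nZ) point counts per direction.
    Assumes row-major ordering with x varying fastest."""
    idx = []
    for axis in range(3):
        i = int(round((p[axis] - p_min[axis]) / delta[axis]))
        # clamp so samples just outside the grid map to its boundary
        idx.append(max(0, min(n[axis] - 1, i)))
    return idx[0] + idx[1] * n[0] + idx[2] * n[0] * n[1]
```

Because the conversion is a few arithmetic operations per point, reads and writes stay fast even for large streamed datasets.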
This means that in order to use our geolocation data it must first be converted into an XYZ coordinate, so that it can be converted to an ID using formula (1). We do so using a cylindrical equidistant projection (Weisstein 2017). This method of conversion distorts points increasingly the further they are from the equator, an issue we chose to overlook for two reasons: firstly, it is computationally efficient, since the factor cos(θ1) only needs to be computed once; secondly, the distortion is deemed negligible since the data points are remapped to the voxel field, whose precision depends on the voxel spacing. In formula (2), λ is longitude, θ is latitude and θ1 is a latitude in the middle of the sample data.
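As formula (2) is not reproduced here, the sketch below uses the standard cylindrical equidistant form, x = R(λ − λ0)cos(θ1), y = Rθ; the Earth-radius constant and a reference longitude λ0 = 0 are our assumptions:

```python
import math

def equidistant_project(lon_deg, lat_deg, lat_ref_deg, radius=6371000.0):
    """Cylindrical equidistant projection from degrees to planar
    metres. lat_ref_deg is the reference latitude theta_1 in the
    middle of the sample area; cos(theta_1) is computed once per
    call here and can be cached across a whole dataset."""
    scale = math.cos(math.radians(lat_ref_deg))
    x = radius * math.radians(lon_deg) * scale
    y = radius * math.radians(lat_deg)
    return x, y
```

The resulting XYZ coordinates (with altitude as z) can then be passed directly to the point-to-ID conversion of formula (1).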
This makes it possible to read and write data with a high degree of precision to and from the system in near real-time. Due to the infancy of the project we have tested the workflow using logged data for a small architectural installation in Vennelystparken, Aarhus, Denmark.
Using the remapped geolocations, it was possible to map the data to a voxel grid with three-metre spacing. This made it possible to represent the data in numerous ways, through colour gradients and coloured points (Figs. 7 and 8). This capability is not new, but we were able to extend the use of the data by converting it to volumetric representations, exemplified by Figs. 9 and 10, which show how sound data starts to take an abstract shape. The functionality and potentials of this method are presented in the following section.
Conclusion
The workflow of data capture and interpretation has been applied here to a simple design exercise, where the data has been used to visualize combinatorial data maps and create volumetric representations of the associated data. The project aims to build on previous work by offering data collection and representation methods at both the urban and building scale, including renovation projects and interior installations. One of the shortcomings of the workflow is the inability of the geolocation system to work indoors, posing a challenge for interior data collection at a smaller scale. While there are several systems available for indoor spatial triangulation, we chose to work with a GPS-based setup to provide initial data for the development of volumetric representations, which then became our focus. These volumetric representations can serve many purposes: they can be a new way of presenting the intangible information around us, but they could also be used as abstract zoning models, informing certain architectural decisions. This idea opens up new avenues of architectural thinking, similar to that of the draftsmen who explored the New York zoning laws in the early 1900s (Koolhaas 1994).
While the research is still in its early stages and needs to be developed further in larger case studies, the preliminary findings demonstrate that the developed method could be useful in the early stages of the design process, as it allows for the construction of abstract spatial data models within the design environment familiar to architects, and permits the integration of these models into a computational design workflow.
We plan to continue the development of the project via large scale testing through implementation in architectural projects and teaching workshops with students, where different modes of data capturing, geolocation and representation will be tested.
References
Ballestrem, M.: Sense Adapt Create. Fieldstations, TU Berlin, 10 Sept 2016. Accessed 6 June 2017
Bourke, P.: Polygonising a scalar field. Retrieved 21 Feb 2017, from http://paulbourke.net/geometry/polygonise/ (1994)
Bourke, P.: Implicit surfaces. Retrieved 7 June 2017, from http://paulbourke.net/geometry/implicitsurf/ (1997)
Davis, D., Peters, B.: Designing ecosystems, customising the architectural design environment with software plug-ins. In: Peters, B., De Kestelier, X. (eds.) Computation Works: The Building of Algorithmic Thought, pp. 124–131 (2013)
Fraguada, L., Girot, C., Melsom, J.: Ambient terrain. In: Stouffs, R., Sariyildiz, S. (eds.) Computation and Performance—Proceedings of the 31st eCAADe Conference, Faculty of Architecture, Delft University of Technology, Delft, The Netherlands, 18–20 Sept 2013, pp. 433–438 (2013)
Koolhaas, R.: Delirious New York. The Monacelli Press, New York (1994)
McCullough, M.: Ambient Commons: Attention in the Age of Embodied Information. The MIT Press, Cambridge (2013)
Melsom, J., Fraguada, L., Girot, C.: Synchronous horizons: redefining spatial design in landscape architecture through ambient data collection and volumetric manipulation. In: ACADIA 12: Synthetic Digital Ecologies [Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)] San Francisco 18–21 Oct 2012, pp. 355–361 (2012)
Offenhuber, D., Ratti, C. (eds.): Decoding the City: Urbanism in the Age of Big Data. Birkhäuser Verlag, Basel (2014)
Weisstein, E.W.: Cylindrical equidistant projection. From MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/CylindricalEquidistantProjection.html (2017)
© 2018 Springer Nature Singapore Pte Ltd.
Pedersen, J., Hughes, R., Cannaerts, C. (2018). Navigating the Intangible: Spatial-Data-Driven Design Modelling in Architecture. In: De Rycke, K., et al. Humanizing Digital Reality. Springer, Singapore. https://doi.org/10.1007/978-981-10-6611-5_37
Print ISBN: 978-981-10-6610-8
Online ISBN: 978-981-10-6611-5