
1 Introduction

In the early stages of the design process, the conceptual idea of the envisaged building and its design parameters are still vague and incomplete. While the built environment, the end product of this design process, can be represented concretely in the form of drawings or computer models, the initial design idea can usually only be formulated abstractly, for example as schematic functional descriptions or as topological constellations of spaces and/or relative proportions. A method commonly used in the early design phases is therefore to consult reference projects: by drawing on analogies from existing buildings or architectural designs, the designer can verify his or her ideas, identify relevant design parameters or explore new directions and possibilities. As part of a research project (referred to as Metis), funded by the German Research Foundation (DFG), innovative research methods are being developed to support design actions in the conceptual design phase. Approaches have been developed for the IT support and linking of two key design strategies that architects use when developing ideas: functional and conceptual drawings, and the use of reference material. To this end, a semantic fingerprint was proposed as a means of characterising a building in much the same way as a fingerprint identifies a person [4]. The same approach can also be used as a means of formulating architectural situations and, in turn, of identifying semantic similarities. The semantic fingerprint attempts to address the primary problem of the vague and incomplete nature of design ideas, creating a way of identifying analogous reference examples among existing buildings or building designs. The core aims of the Metis research project are:

  • To find ways of accessing implicit knowledge contained in reference projects

  • To formulate this knowledge in the form of graphs (a minimal sketch follows this list)

  • To develop methods and models for retrieving specified formal structures

  • To develop ways to specify, search for, and retrieve spatial configurations
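
As an illustration of what such a graph-based fingerprint could look like (a minimal sketch of our own, not an artefact of the Metis project; all type and property names are assumptions), rooms can be modelled as nodes and spatial relations as typed edges:

```typescript
// Illustrative only: a semantic fingerprint as a small typed graph of rooms and relations.

type RelationKind = "adjacent" | "passage";   // e.g. shared wall vs. walkable connection

interface RoomNode {
  id: string;
  functionLabel?: string;                     // e.g. "kitchen", "living room"
  areaSqm?: number;                           // optional, since early design ideas are vague
}

interface RelationEdge {
  from: string;                               // RoomNode.id
  to: string;                                 // RoomNode.id
  kind: RelationKind;
}

interface SemanticFingerprint {
  rooms: RoomNode[];
  relations: RelationEdge[];
}

// Tiny example: a kitchen adjacent to a living room, with a passage between them.
const example: SemanticFingerprint = {
  rooms: [
    { id: "r1", functionLabel: "kitchen", areaSqm: 12 },
    { id: "r2", functionLabel: "living room" },
  ],
  relations: [
    { from: "r1", to: "r2", kind: "adjacent" },
    { from: "r1", to: "r2", kind: "passage" },
  ],
};
```

Comparing two such graphs, for example by matching rooms and relations, is one conceivable basis for the similarity search addressed later in this paper.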

Working with references is an established methodology in the architectural design process. Functional diagrams and sketches are used to formulate initial ideas, for example. In the Metis research project, the focus is on formulating queries to the computer, which form the basis for searches in a digital building repository called “ar:searchbox” (located at TU Munich). The user study presented here examines the extent to which existing prototypes support the formulation of queries using freehand sketches and functional diagrams.

Even today, however, architects feel most comfortable with pen-and-paper-based conceptual sketching. As a starting point, a migration from pen and paper to simple computer tools is therefore required in order to achieve the further benefits stated as the aims of the project.

In this article, we describe two different graphical user interfaces that can be used by architects for conceptual design sketching. We first describe the structure and interaction principles of both GUIs. Afterwards we show their general usability and user acceptance by means of a user study.

The paper is organized as follows. Section 2 describes the classical style of conceptual sketching. Section 3 presents the two graphical user interfaces for migrating the classical sketching style towards computer tools. Section 4 describes the experimental setup of the user study. Section 5 presents the results of the user study, Sect. 6 discusses improvements to the GUIs based on the user feedback, and Sect. 7 concludes the paper.

2 Classical Style of Conceptual Sketching

Various tools and strategies are used in the architectural design process. “Every design tool serves the perception of external circumstances (capturing) as well as the expression of imaginations (the imprinting of inner design concepts onto a physical medium). Every design tool can either be descriptive (which means depicting, describing the given), or prescriptive (which means designing, displaying something new)” [2]. However, certain tools are more suitable as presentation tools (e.g. a CAD program or the drawing board) and others as thinking tools (e.g. freehand drawing or references). Thinking tools support the rapid materialization of thoughts so that the materialized fragments of design ideas can be perceived and evaluated. The knowledge gained flows back into the thought process, which can be described as a kind of circular dialogue of the designer with the design tool, as shown in Fig. 1. Buxton writes: “If you want to get the most out of a sketch, you need to leave big enough holes” [1].

Fig. 1. Sketch of a dialogue with a sketch.

Thinking tools are, for example, writing texts, making freehand drawings, and using references as “[...] concrete evidence in support of prediction [...]” [3]. In the early design stages, freehand drawings are often used because they are a familiar, efficient and natural way to quickly express and analyze ideas. Freehand drawings can be used to represent unfinished or fragmentary ideas and thoughts, because usually there is still no precise idea of the final result. Gänshirt writes: “The simplicity of the tool enforces a reduction to the essential” [2].

3 Graphical User Interfaces for Conceptual Sketching

We have developed two different digital tools for supporting architects during early design phases: TouchTect and the Metis WebUI.

3.1 TouchTect

TouchTect 2.0 is a Windows application that can be used to query data from multiple web services. It connects to GmlMatcher, Mediatum, Neo4j, the unified-query-service and the bim-server. The application aims at making these multiple databases appear as a single one. Pen-based interaction on tablet computers and multi-touch tables is supported, giving the architect the freedom to express ideas intuitively.
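
To illustrate the idea of making several back ends appear as one repository (a sketch under our own assumptions; this is not TouchTect's actual code, and all interface and class names below are hypothetical), a thin facade could expose a single search method and fan each query out to adapters for the individual services:

```typescript
// Hypothetical facade that lets several back-end services look like one repository.

interface BuildingQuery {
  // Query criteria; a possible shape is sketched in the next listing.
  [criterion: string]: unknown;
}

interface BuildingResult {
  id: string;
  name: string;
  source: string;                                   // which back end delivered the hit
}

interface BackendAdapter {
  readonly name: string;                            // e.g. one adapter per connected service
  search(query: BuildingQuery): Promise<BuildingResult[]>;
}

class UnifiedBuildingRepository {
  constructor(private readonly adapters: BackendAdapter[]) {}

  // Queries every adapter in parallel and merges the results into a single list.
  async search(query: BuildingQuery): Promise<BuildingResult[]> {
    const perBackend = await Promise.all(this.adapters.map(a => a.search(query)));
    return perBackend.flat();
  }
}
```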

Fig. 2. Screenshot of the TouchTect UI.

In the middle, the freehand sketching canvas supports the architect by letting him or her draw a design idea in a schematic way. On the left-hand side, different query criteria can be combined, such as searching for a building located in a certain city or for one that matches the hand drawing and has seven rooms. On the right-hand side, a preview of the search results is shown in the form of floor plans. By selecting a result, additional information such as pictures and a 3D visualisation of the building can be examined.
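
Such a combination of criteria could, for instance, be expressed as a single query object that bundles a location filter, a room-count constraint and the strokes of the freehand sketch; the field names below are our own assumptions, not TouchTect's API:

```typescript
// Illustrative combined query: metadata filters plus the freehand sketch itself.

interface Point { x: number; y: number; }

interface BuildingQuery {
  city?: string;                       // e.g. only buildings located in a certain city
  roomCount?: number;                  // e.g. exactly seven rooms
  sketchStrokes?: Point[][];           // the raw strokes drawn on the canvas
}

// Example: a building in Munich with seven rooms that also fits the current drawing.
const query: BuildingQuery = {
  city: "Munich",
  roomCount: 7,
  sketchStrokes: [[{ x: 0, y: 0 }, { x: 120, y: 0 }, { x: 120, y: 80 }]],
};
```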

3.2 Metis WebUI

The Metis WebUI is a tool inspired by a working method called the room schedule or space allocation plan. The idea of a room schedule in architecture is that a set of rooms is given as a requirement for a building (e.g. as a list). Some of these rooms may have a specific size or function, and there may be requirements concerning the adjacency of rooms as well as passages between rooms. It is a more abstract working method than direct sketching, but it is also used in practice, where a room schedule usually comes in the form of a list of requirements from the customer. Since the architect may have concrete ideas for some room layouts and only rough ideas about other rooms, we tried to build a tool that supports multiple abstraction levels, allowing design aspects to be specified (and respecified) as concretely or as abstractly as desired by the user.
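
As a minimal sketch of how such a room schedule might be captured as data (our own simplification for illustration, not the project's actual schema), each entry names a room with an optional size and function and lists required adjacencies and passages:

```typescript
// Hypothetical room-schedule entry: requirements only, no geometry yet.

interface RoomRequirement {
  name: string;                        // e.g. "kitchen"
  functionLabel?: string;              // intended use, if already known
  minAreaSqm?: number;                 // size requirement, if any
  mustBeAdjacentTo?: string[];         // rooms that should share a wall with this one
  mustHavePassageTo?: string[];        // rooms that must be directly reachable from this one
}

type RoomSchedule = RoomRequirement[];

// Example schedule as it might come from a customer.
const schedule: RoomSchedule = [
  { name: "kitchen", minAreaSqm: 10, mustBeAdjacentTo: ["living room"] },
  { name: "living room", mustHavePassageTo: ["kitchen", "hallway"] },
  { name: "hallway" },
];
```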

Fig. 3. Screenshot of Metis WebUI showing both rooms with concrete layout and rooms in bubble mode.

Figure 3 shows a screenshot of the Metis WebUI. The Metis WebUI runs inside a standard web browser and was written entirely in HTML5/Javascript. It allows combining abstract rooms, i.e. rooms that have no specified wall layout yet (displayed as bubbles), with rooms that have concrete wall layouts. Within the abstract mode (also referred to as bubble mode), the room's size and function can already be set, and rooms can be interconnected by neighborhood links (displayed as single lines) and passage links (displayed as double lines). A neighborhood link symbolizes that two rooms are located next to each other (they share a wall), while a passage link simply means that a person can physically move from one room to the other, either through a door or through a doorless passage. If the user has a concrete wall layout in mind, he or she can draw the shape of the room and afterwards place windows and doors in the room's walls. When a room is changed from bubble mode to a concrete room layout, its existing connections are retained as they are. Doors and windows can be resized and moved along the walls on which they were created, and multiple doors and windows per wall are allowed.

Except for one button that creates new rooms and a few helper functions, the interface is controlled entirely by radial menus. Single rooms can be moved with the mouse, and doors can be connected to other rooms or doors by passage links. The radial menus change with the element in focus, as depicted in Fig. 4. Smart features such as the autolink function help to speed up entering concepts.
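
A compact sketch of this data model, under our own naming assumptions rather than the prototype's actual code, could distinguish abstract bubble rooms from rooms with a concrete wall layout while leaving the links untouched when a room is made concrete:

```typescript
// Illustrative model: a room is either an abstract bubble or has a concrete wall polygon.

type LinkKind = "neighborhood" | "passage";

interface Link { kind: LinkKind; roomA: string; roomB: string; }

interface BubbleRoom {
  id: string;
  mode: "bubble";
  functionLabel?: string;
  areaSqm?: number;
}

interface ConcreteRoom {
  id: string;
  mode: "concrete";
  corners: { x: number; y: number }[];   // wall polygon on a fixed grid
  doors: number[];                       // simplified: indices of walls carrying a door
}

type Room = BubbleRoom | ConcreteRoom;

// Turning a bubble into a concrete room keeps the room id, so existing links
// (which only reference room ids) stay valid without any modification.
function makeConcrete(room: BubbleRoom, corners: { x: number; y: number }[]): ConcreteRoom {
  return { id: room.id, mode: "concrete", corners, doors: [] };
}
```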

Fig. 4. Screenshots of the different radial menus of the Metis WebUI.

4 Experimental Setup for User Study

We conducted a user study with 15 participants in which we examined how well people with an architectural background are able to express their ideas in a fictional design process with our user interfaces compared to established methods. For this purpose, we developed a specific design scenario: to design, from scratch, a rental apartment for a certain price in a big German city; no restrictions on the floor plan were given. The participants were asked to first create some free-style drawings and then to develop a design based on a space allocation plan. Each task initially had to be done on paper as the established method and directly afterwards with one of our user interfaces (TouchTect for the free-style drawing, the Metis WebUI for the space allocation plan). The participants had no specific time limit and were only minimally guided by the study's supervisors. For analysis purposes, the participants were videotaped and asked to fill out a questionnaire. Nearly all of the participants were affiliated with TU Munich and were therefore aware of the Metis project's content. Nevertheless, none of the participants had used the prototypes before the study. With one exception, all of the participants were familiar with typical architectural software.

5 Analysis of User Study

One way of evaluating the quality of a user interface is to assess its effectiveness (to what degree was the user capable of achieving his or her goals at all), its efficiency (how many resources, usually time, did the user need to achieve those goals) and the user's satisfaction (to what degree did the user “like” the interface). In order to measure these categories, we conducted a user study in which the participants were asked to perform an open drafting task (a situation that fits the purpose of the developed prototypes best). This experimental design comes with the problem that the final layouts drafted by the participants are not fixed, but depend on their ideas. We could have designed the experiment so that the participants only had to copy a given floor plan, which would have made the assessment of effectiveness easier, but such a task would contradict the intention of the prototypes. However, since the participants were asked to first do their drafts on paper and then use the examined prototypes, they usually just copied their previous work. In order to assess how much the traditional working methods are used by the participants, we asked them whether or not they had used them before the study (Fig. 5). As expected, the vast majority was familiar with the examined traditional methods.

Fig. 5. Use of the traditional working methods by the participants.

Fig. 6. Different effectiveness measurements. Please note that the terms functional schematic and room schedule are considered to be equal here.

In order to assess the users' effectiveness, we asked the participants to what degree the constructs displayed on the interface matched their imaginations and to what degree they could express their ideas using the interfaces (Fig. 6a, b, c and d). Although there were differences between our interfaces and the traditional working methods, the results were roughly comparable. For the majority of users, our tools appeared to be at least reasonably usable.

We asked a couple of questions that cannot be entirely classified into one of the three mentioned categories; the following questions lie somewhere between user satisfaction and efficiency. We asked the participants how difficult it was to handle the prototypes (see Fig. 7) and how exhausting they perceived the work with the examined interfaces to be (see Fig. 8). We asked how obstructive the use of mouse and keyboard (Metis WebUI) or the digital pen (TouchTect) was for them (see Fig. 9). We also asked about the amount of “perceived time” until the display of the interfaces matched the users' imaginations (Fig. 10a and b). For these questions, the results for our interfaces were quite similar to those for the traditional working methods. In other words, handling our prototypes appeared not much more time-consuming or exhausting than sketching on normal paper. In order to assess user satisfaction, we asked the participants to what degree they could imagine using the tools in real life (since, in retrospect, this question appears suggestive, we omit the results here).

Fig. 7. Difficulty for the users to handle the interfaces.

Fig. 8. Exhaustion comparison between the examined interfaces and the classical working methods.

Fig. 9. Comparison of obstruction of different input methods.

Fig. 10. Perceived time is a measure of both user satisfaction and efficiency.

6 Improvements to the GUIs Based on User Feedback

Even before conducting and analysing the user study, several issues were spotted by the domain experts and programmers. In order to create useful search queries, more freedom in entering concepts is helpful, but too much freedom may distract the user: in the Metis WebUI, a user may draw passage connections between two rooms, between two doors, or between a room and a door. The idea behind this freedom was that passage connections between two rooms are more abstract than passage connections between doors. But, as expected, some users did not understand the difference between these connections or did not notice one of the possibilities.

A similar problem concerns the level of abstraction of the links. When a room is created (which always happens in bubble mode), the user might have some rough ideas regarding its door connections to other rooms. When a concrete wall layout is entered, the existing connections are kept. If a user wants to refine a certain connection (i.e. replace it with another connection that goes through a door or passage in the concrete wall layout), he or she is forced to manually delete the existing connection and create a new one. Similarly, all connections through concrete doors or passages are deleted when a new wall layout is entered. Both behaviours are rather unintuitive. Fortunately, these problems did not appear often during the user study. One way to overcome this problem more elegantly is to employ a mechanism that suppresses the display of the more abstract connection in the presence of a more concrete one (i.e. a passage connection between a pair of rooms is not displayed if a connection exists between doors in the concrete wall layouts of those rooms). Likewise, a more advanced mechanism that allows wall layouts to be edited instead of replaced by new layouts could help to mitigate the mentioned problems.
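
A minimal sketch of such a suppression mechanism (hypothetical, using simplified link types of our own rather than the prototype's data model) could filter out room-to-room passage links whenever a door-level connection already exists between the same pair of rooms:

```typescript
// Illustrative filter: hide an abstract room-to-room passage link when a more
// concrete door-to-door connection between the same two rooms already exists.

interface RoomLink { roomA: string; roomB: string; }                       // abstract passage link
interface DoorLink { doorA: string; doorB: string; roomA: string; roomB: string; }

function connectsSamePair(abstractLink: RoomLink, doorLink: DoorLink): boolean {
  return (abstractLink.roomA === doorLink.roomA && abstractLink.roomB === doorLink.roomB) ||
         (abstractLink.roomA === doorLink.roomB && abstractLink.roomB === doorLink.roomA);
}

// Returns only those abstract links that are not superseded by a door-level connection.
function visibleRoomLinks(roomLinks: RoomLink[], doorLinks: DoorLink[]): RoomLink[] {
  return roomLinks.filter(link => !doorLinks.some(dl => connectsSamePair(link, dl)));
}
```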

Basically, most of the participants were capable of entering reasonable drawings using our prototypes. After some training, the participants were able to use most functions of the interfaces and spent a reasonable amount of time handling them. TouchTect, with its gestures, appeared to be easier and more intuitive to use than the Metis WebUI, especially when it comes to drawing room shapes and linking rooms. The biggest problem in the context of the Metis WebUI was that most participants did not make full use of the possibility to connect the rooms as intended, but left the rooms unconnected and focused on manually aligning the rooms with each other.

Apart from some rather minor cosmetic and usability flaws that attracted our attention during the study (e.g. some users confused the bubble mode with circle-shaped rooms), looking at the way architects used the Metis WebUI helped us to settle some design decisions we had not been entirely sure of beforehand. For example, in the examined prototype we had decided that the radial menus of doors and windows should only be accessible after pressing the corresponding button in the room's main radial menu, in order to reduce the number of buttons displayed simultaneously on the room. When looking at the videos, we realized that this slowed down and annoyed the participants considerably. Therefore, we are going to display the radial menu buttons of windows and doors in the next version of the prototype whenever the room is focused.

Likewise, the resizing/scaling functionality for concrete rooms is inconsistent: each room by itself can be resized steplessly (which is quite reasonable in bubble mode), but room shapes can only be edited on a fixed grid (we wanted to avoid free-style drawings in the Metis WebUI). Hence, rooms can be resized so that their wall lengths no longer match those of other rooms. In the next version of the Metis WebUI we want to tackle this problem with a new resizing function that snaps when the wall corners of a room match the size of a nearby room. Likewise, we want to incorporate a general snapping function for room movements. Snapped walls could be considered automatically connected by links, which would speed up the drafting process and appear more natural to the user. We also want to incorporate a “glue mode” in which rooms that are snapped together are automatically moved together. Additionally, a function that automatically links the walls of snapped rooms (and even automatically creates missing doors when rooms snap) is planned for future prototypes.

A general problem arises when considering the purpose of the interfaces: the tools can be regarded either as general (and possibly independent) drawing and thinking tools or as the interface to a search engine for similar floor plans. These purposes may diverge, as illustrated by the link-drawing issue: the connecting lines have a certain meaning for a later-attached search mechanism. If these lines have another meaning in the mental model of the user (or the user is not even aware of their existence), the user might think he or she expressed his or her thoughts correctly, but will get incongruous search results. Users should be aware of these semantics when the tools are considered as a search interface. Hence, we treated the interfaces as drawing and thinking tools when asking the participants to what degree the display matched their imaginations. We are also considering an “explode” and “implode” functionality that moves the rooms away from each other so that the floor plan looks like an exploded assembly drawing in a technical user manual and the user can see all existing connections. The implode function could then be used to revert the explosion and even snap rooms that were not snapped by the user before but are connected by links. In addition, a function for copying rooms was desired by some users.
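
As a rough sketch of the wall-snapping idea mentioned above (our own simplification, not the planned implementation; the function name and tolerance value are assumptions), a moved or resized wall coordinate could be snapped to the nearest wall coordinate of neighbouring rooms whenever it lies within a small tolerance:

```typescript
// Illustrative 1D snapping helper: snap a coordinate (e.g. the x position of a wall)
// to the closest candidate coordinate of nearby rooms, if it is within the tolerance.

function snapCoordinate(value: number, candidates: number[], tolerance = 0.25): number {
  let best = value;
  let bestDistance = tolerance;
  for (const candidate of candidates) {
    const distance = Math.abs(candidate - value);
    if (distance <= bestDistance) {      // keep the closest candidate within the tolerance
      best = candidate;
      bestDistance = distance;
    }
  }
  return best;
}

// Example: a wall dragged to x = 4.1 snaps to a neighbouring wall at x = 4.0.
const snappedX = snapCoordinate(4.1, [0, 4.0, 8.0]);   // -> 4.0
```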

Apart from the evaluated freehand drawings and the modeling of spatial schemata, other paradigms such as floor plan representations and the zoning of shapes will be examined in the future. Moreover, multimodal interaction strategies are necessary to let the architect freely use different abstractions of his or her design idea without interrupting the design process or losing data. An additional user study involving a test of the prototypes including the search functionality is also conceivable.

In the further course of the Metis project, the discussed Metis WebUI prototype as well as the TouchTect application will be combined with the retrieval system MetisCBR (and other retrieval systems). The system uses the case-based reasoning (CBR) technique to retrieve, from a case base (a special sort of database), the semantic fingerprints most similar to a given sketch. It is also based on the multi-agent retrieval paradigm, where each retrieval agent's task is to use the given similarity functions to retrieve the most similar fingerprint parts (such as rooms or their outer connections). The retrieval process will be controlled by a coordinator agent that is able to act as a case- and/or rule-based reasoner in order to find the best strategy for a particular user query. Hence, subsequent user studies that also take the users' reactions to the retrieved search results into account will have to be conducted.
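
To make the retrieval step more concrete, the following sketch (our own simplification; MetisCBR's actual similarity functions and agent framework are not reproduced here) scores the fingerprints in a case base against a query fingerprint and returns the best matches:

```typescript
// Illustrative case-based retrieval: rank stored fingerprints by a naive similarity score.

interface Fingerprint {
  rooms: string[];                       // simplified: room function labels only
  links: [string, string][];             // simplified: undirected connections between labels
}

// Naive similarity: overlap of room labels plus overlap of links, equally weighted.
function similarity(query: Fingerprint, candidate: Fingerprint): number {
  const roomOverlap =
    query.rooms.filter(r => candidate.rooms.includes(r)).length / Math.max(query.rooms.length, 1);
  const linkKey = (link: [string, string]) => [...link].sort().join("|");
  const candidateLinks = new Set(candidate.links.map(linkKey));
  const linkOverlap =
    query.links.filter(l => candidateLinks.has(linkKey(l))).length / Math.max(query.links.length, 1);
  return 0.5 * roomOverlap + 0.5 * linkOverlap;
}

// Returns the k most similar fingerprints from the case base.
function retrieve(query: Fingerprint, caseBase: Fingerprint[], k = 5): Fingerprint[] {
  return caseBase
    .map(candidate => ({ candidate, score: similarity(query, candidate) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(entry => entry.candidate);
}
```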

7 Conclusion

In this paper, we introduced two concepts for graphical user interfaces that could help to migrate state-of-the-art pen-and-paper working methods in early architectural design phases towards computer-based working methods. We described the state-of-the-art working methods as they are practised in architecture in general and motivated the new approach. We then described in detail the two graphical user interfaces that we designed. After that, we presented the setup of a user study to assess the viability of the interfaces and outlined the results of the user study conducted according to this setup. Finally, we discussed improvements derived from the findings of the user study. Having two viable concepts for user interaction, further research regarding search concepts for similar floor plans will be carried out so that automated assistance for early conceptual design phases in architecture can be provided in the future.