
1 Introduction

Design Research (DR) uses the scientific method to develop, test, and apply significant theoretical insights pertaining to design processes, designers, and design domains, that is, the application areas being designed for (e.g. Cross 2001; van den Akker et al. 2006; Hevner 2007; Stolterman 2008). Historically, DR combines descriptive research on designers and design with prescriptive methodological research on design processes, methods, and systems (e.g. Cross 1999, 2007; Bayazit 2004). As such, DR as a field holds a dual purpose of synthesizing knowledge of different types: design researchers deal, on the one hand, with experiential knowledge about design and with useful tools and methods for designers and, on the other hand, with significant theoretical contributions. It can be challenging, however, to reconcile the pragmatic goals of DR with the scholarly rigour of science in a way that both produces knowledge in the form of generalizable solutions to important classes of problems (practical insights) and yields rigorous contributions to academic knowledge and theory (scientific insights) (e.g. Cross 2004; Blessing and Chakrabarti 2009; McMahon 2012). A general challenge in DR is the focus on individual cases or situations and on practical impact, often with weak or opaque theoretical grounding (Love 2002; Blessing and Chakrabarti 2009). This is especially true for problem-based, solution-focused DR, labelled Design Inclusive Research (DIR) and Practice-Based Design Research (PBDR). In these types of processes, the design process and artefact are as much in focus as theory development or testing (Horváth 2007, 2008).

In fact, Design Research, and design practice especially, has a significant component of reflection in action (Schön 1983), which is not necessarily simple to codify into scientific knowledge. While this orientation supports achieving practical impact from research, generalizing the findings without knowledge or understanding of the underlying mechanisms is challenging, limiting both the contribution to scientific knowledge and the transfer of solutions. In other words, ‘If we understand nothing of the causal mechanisms, then we can only achieve a given outcome by accident at first and by rote thereafter’ (Briggs 2006). In recognition of this challenge, this chapter discusses knowledge synthesis, bringing together the perspectives of experimental Design Research, or Research in Design Context, which is treated extensively elsewhere in this book, with Design Inclusive Research and Practice-Based Design Research.

The rest of this chapter is structured as follows: Sect. 13.2 lays out the philosophical grounds and discusses challenges for knowledge synthesis in (experimental) design research. Section 13.3 proposes a methodological framework to overcome these challenges, specifically by developing and using design propositions as a nexus of knowledge synthesis. Section 13.4 focuses on expounding the connection between the evaluation of design artefacts, the propositions, and the associated knowledge claims. Finally, Sect. 13.5 presents the discussion and conclusions.

2 Challenges for Knowledge Creation in Design Research

This section lays out the philosophical framework and definitions for the discussion on knowledge synthesis. Building on that foundation, the discussion focuses on types and properties of knowledge and the challenges for knowledge synthesis in DR.

2.1 Philosophical Background and Assumptions

Horváth (2007) discusses three types of DR, called Research in Design Context (RIDC), Design Inclusive Research (DIR), and Practice-Based Design Research (PBDR). The distinguishing factor between these is the balance of focus between the generation of knowledge about design and the design of an artefact to satisfy specific needs. As presented in the context of this book, experimental design research often falls into the category of Research in Design Context, where the main focus is on generating knowledge about design processes, methods, and behaviours associated with design, as well as the products of design. In the other types of design research, DIR and PBDR, this interest in knowledge is paralleled by an interest or ambition to create solutions to existing problems in the form of design artefacts.

Epistemologically speaking, out of the three types of DR, DIR and PBDR in particular can be said to have adopted a pragmatic or instrumental approach to research, that is, giving precedence to the utility and fitness for purpose of the design artefact and using that utility as a measure for evaluating the artefact and its claims to knowledge, most explicitly in Information Systems (Hevner et al. 2004; Gill and Hevner 2011; Piirainen and Gonzalez 2014). It follows that the ‘knowledge interest’ in this type of DR has generally been technical, that is, to understand and control the phenomenon of interest within the problem area (c.f. Habermas 1966; Donsbach 2008). In contrast, RIDC is not limited to a technical interest; its framing can also be motivated by a positive or critical knowledge interest. The epistemological orientation of DR is manifest in the framing of research questions, design, and evaluation (Niehaves 2007; Gonzalez and Sol 2012).

The ontological starting point for this chapter is a common-sense realist viewpoint after Moore (1959): there is an external, independent reality. Differing from the earlier views of empiricists later known as (logical) positivists, Popper (e.g. 1978) argues that three ‘worlds’ exist: world one (W1), which is ‘real’ in the traditional sense, immutable and independent of the observer, a world of physical objects and events. The second world (W2) is the world of human observations and emotions, in effect a kind of representation of the first world inside the human psyche. The third world (W3) is the world of the artificial (Simon 1996). The third world contains the products of the human mind, such as language, ontologies, and theories, as well as their instantiations as physical design artefacts.

In the context of this chapter, we refer to Design Research as systematic inquiry into the art, practice, processes, methods of, and behaviours associated with design or the synthesis of artefacts and systems, and the behaviour and function of these artefacts (Cross 1999, 2007; Bayazit 2004). This denomination also encapsulates the terms Design Science and Design Studies unless specified otherwise. A potential source of disciplinary and etymological confusion in this chapter is that the field of Information Systems Research has developed a specific methodological framework called Design Science Research (DSR) independently of the traditions of Design Research, Design Studies, and Design Science (c.f. Winter 2008; Piirainen et al. 2010). Further, to relate this chapter to experimental Design Research, the latter represents a particular methodological orientation to DR. Later in this chapter, the relationship of DIR and PBDR to experimental approaches is discussed in detail.

Further, this chapter discusses knowledge creation and synthesis, which in this context are broader terms than theory building as described elsewhere in this book. The word knowledge is used in the sense of justified true beliefs, in the context of this chapter particularly beliefs about constructs, models, and methods related to design. Knowledge synthesis is used in a wide sense, encapsulating theory building as well as design, where conceptual functions are transformed into prescriptions of the structure of a design artefact, using knowledge from various sources to target the expected behaviours derived from the design problems (c.f. Gero 1990; Gero and Kannengiesser 2004).

2.2 Types of Design Research and Challenges of Knowledge Synthesis

Horváth (2007, 2008) proposes that the RIDC process resembles what might be called a traditional research process. In RIDC, the main focus is on theory development and testing, while the phenomenon and the corresponding unit and level of analysis may vary, from design methods and theories to behaviours exhibited by designers during the process (see, e.g. Parts II and III in this book).

The case is more challenging in DIR and PBDR, as the actual design occupies more space in the research process and the researchers are more involved in the actual design work (Fallman 2008). This interplay makes it harder to separate design work from research, or to control for various factors. However, in the neighbouring Information Systems field, there is a discussion on generating and integrating knowledge in what might be called Practice-Based or Design Inclusive Research (PBDR and DIR).

The challenges of knowledge development and testing relate to the interaction between the three worlds as described by Popper (op. cit. Sect. 13.2.1) and to the ensuing problems of observing and measuring the phenomena of interest. The challenge of acquiring reliable information or knowledge of W1 stems from the limits of the human condition in observing the real world and in translating our knowledge of any one of the worlds into representations able to convey that knowledge between the senders’ and receivers’ inner worlds (W2) through the artificial world (W3) (e.g. Simon 1985, 1986; Wright and Ayton 1986). The interaction of people and the interplay between the three worlds cannot be bypassed, especially when research questions relate to creativity, decision-making, the use of methods, or other aspects of human behaviour in design processes.

Generally, scientific knowledge is defined as a body or network of justified true beliefs, that is, in practical terms, beliefs about causal relations between ideas and actions that are backed by evidence from the world (W1–3, depending on the unit of analysis) in some way. However, knowledge can be considered more broadly in terms of its object. Jensen et al. (2007) propose a distinction between four types of knowledge: (1) know-what—descriptive knowledge about phenomena and the state of the world, causality, or relationships between phenomena; (2) know-why—explanations behind observable phenomena; (3) know-how—procedural knowledge, skills, and routines for accomplishing a given task; and (4) know-who—relational capital and knowledge about other people’s knowledge and capabilities (c.f. Table 13.1).

Table 13.1 Characterization of knowledge types (adapted from Jensen et al. 2007)

Know-what and know-why are the types of knowledge that constitute scientific theories, explaining phenomena through causal relations between constructs. Know-how and know-who are applied, representing capabilities to apply the different types of knowledge and achieve given ends with various means. Know-how specifically encapsulates experiential knowledge related to existing artefacts and theories, and their application to problems. In RIDC, the focus is more explicitly on the first two types insofar as the research aims to develop and evaluate theory, while DIR and PBDR may have a broader focus on know-how besides theory development. With regard to the ontological worlds discussed above, the types of knowledge may span all three, especially in the case of know-what and know-why. However, by inclination, know-how is more often associated with the artificial (W3), whereas know-who is associated with perceptions of other people’s knowledge (W2).

Contextualizing the types of knowledge to design more specifically, the relevant knowledge domains include, first, knowledge about the environment and domain of design, which includes general contextual understanding as well as the specific design problems and constraints (know-what, know-how). Second, there is extant ‘solution’ knowledge: existing artefacts (know-how) and theories applicable to the design problem, together with process or methodological knowledge that allows executing the design process (know-why and know-what). Third, there is design knowledge, the knowledge embodied in the product of design and the insights borne through design and evaluation (know-how). Even though there is a wealth of literature on design methods, knowledge about method seems hard to articulate; it is quite tacit and not easily transferable, as design problems appear unique. Schön’s (1983) classical exposition of reflection in action illustrates how consideration of the problem, solution, and process blend together in professional practice, and it takes special effort to articulate the underlying rationale of a design process or solution once the skill of design has been internalized (typical of know-how).

A key challenge in DR is synthesizing knowledge between the existing bodies of knowledge and emerging research findings. In DIR and especially PBDR, where the interest is more practical, the challenge is to make an explicit connection to existing knowledge (know-what and know-why). The practical focus tends to manifest itself as an interest in outcome variables pertaining to the design context or artefact, or the industry context, often named, e.g., key performance indicators (KPIs).

One facet of the solution to this challenge is explicating the logic between the phenomena, propositions, and observable variables that are named in existing research (know-what, know-why) and relevant to the research or design problem (know-what, know-how). In DR with a practical focus, the (outcome) variables that on the one hand characterize the problem space and on the other hand are associated with the perceived success of the design are a key attachment point for DIR and PBDR. These variables can link practical problem-solving to previous knowledge (know-what, know-why) and enable building theoretical design propositions. Further, the exploration of the problem space may lead to the identification of relevant design constraints (know-how), which give further variables and outline some key constructs to work with (know-what) (c.f. Robinson in Chap. 3 of this book on measurement and research designs for a treatment of constructs and variables).

The other side of this issue is the operationalization of existing theoretical knowledge (know-what, know-why). If a designer or design researcher aims to leverage existing knowledge in the form of theory (know-what, know-why) in the design, the theoretical propositions need to be conceptualized as constructs and operationalized in terms of measurable variables that can be pattern-matched to the problem space and its associated variables. This is the key to connecting existing knowledge of the solution space to the problem space as conceptualized through the relevant variables. It follows that the relevant discipline and body of theory to draw from are guided by these variables.
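To make this pattern matching concrete, the following minimal Python sketch shows one way to match theory constructs, operationalized as measurable variables, against the variables that characterize a problem space. All construct and variable names are hypothetical, chosen only for illustration:

```python
# A minimal sketch of pattern-matching operationalized constructs against
# problem-space variables; all names are hypothetical illustrations.

# Constructs from an existing theory, operationalized as measurable variables.
theory_operationalization = {
    "cognitive_load": ["task_completion_time_s", "error_count"],
    "shared_understanding": ["agreement_score", "rework_requests"],
}

# Variables that characterize the design problem space (e.g. observed KPIs).
problem_space_variables = {"task_completion_time_s", "error_count", "unit_cost"}

def match_constructs(theory, problem_vars):
    """Return the constructs whose operationalized variables overlap the problem space."""
    return {
        construct: sorted(problem_vars.intersection(variables))
        for construct, variables in theory.items()
        if problem_vars.intersection(variables)
    }

print(match_constructs(theory_operationalization, problem_space_variables))
# -> {'cognitive_load': ['error_count', 'task_completion_time_s']}
```

The matched constructs indicate which body of theory is likely to be relevant, while unmatched problem variables (here, unit_cost) signal where further constructs or disciplines would need to be drawn in.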

The caveat in this pattern-matching approach to searching for applicable knowledge is that, although it is said that a problem correctly stated contains its own solution (Simon 1996), design problems are underdetermined, in the sense that the constraints and variables can be set in different terms, with different implications for theory. Also, within an underdetermined problem, there is not necessarily one ‘right’ or even optimal solution. The solution or design artefact might be behavioural, technical, or a mix of socio-technical elements from different bodies of literature or disciplines. While this is an opportunity for multidisciplinary approaches, and as such a strength, it poses a challenge for defining the constructs and the corresponding units and levels of analysis rigorously.

To summarize the discussion on challenges of knowledge synthesis, they include but are not limited to the following:

  • Identifying the unit and level of analysis.

  • Identifying phenomena and constructs.

  • Operationalization of constructs in measurable variables.

  • Matching the theoretical constructs and variables to the problem space.

In practice, the challenges revolve around the pivot of identifying the level and unit of analysis, and the phenomena, so that they can be matched with the existing body of knowledge. This enables applying existing knowledge to the design problem and consolidating the emerging findings with existing knowledge, and by extension the accumulation of knowledge through corroboration, falsification, or modification of previous claims. The crux of the approach to answering these challenges is developing an explicit research framework and design propositions, which the next sections elaborate.

3 Knowledge Synthesis and Experimental Evaluation of Claims

Building on the previous discussion, this section focuses on the methodological aspects of overcoming the challenges in knowledge synthesis. The section first discusses formulating explicit design propositions as a bridge between the existing and emerging knowledge, and the design artefact. Second, it discusses the methodological framework for knowledge synthesis and evaluation of the design propositions, enabling transparent validation of knowledge claims.

3.1 Setting Design Propositions

Breaking from the convention in DR, the following discussion on design propositions uses the term Design Theory (DT) in a sense specific to the Design Science Research literature, to explore how knowledge synthesis can be codified in Design Inclusive Research (Gregor and Jones 2007). That is to say, DTs in this chapter are not prescriptive systems, rules, or methodologies to use in design processes, as in, e.g., general design theory (Reich 1995), axiomatic design (Suh 1998), and mid-century modernism (Cross 1999), or theories of design to explain design as practice or activity (Friedman 2003). Rather, DT as discussed here is a framework for describing the knowledge contribution in its context and setting explicit design propositions to be evaluated. As such, DTs or design propositions are products of design together with the design artefacts; they bridge between know-what, know-why, and know-how and act as a platform for knowledge synthesis in Design Research.

In this conception, the role of explicit design propositions is to bring transparency and consistency to design and evaluation by bridging the design requirements and the principles of form with the design propositions. Additionally, the propositions codify the reasoning and rationale behind the artefact and interface it explicitly with existing knowledge, both practical and theoretical. As such, they act as a link between practical problem-solving and contributions to knowledge (Walls et al. 1992; Gregor and Jones 2007). Finally, the propositions enable transparent, rigorous evaluation of the artefact and validation of the associated underlying and/or embedded theoretical claims, and by extension contribute to the validity and cohesiveness of knowledge (Piirainen and Briggs 2011; Gonzalez and Sol 2012).

DIR and PBDR are often ostensibly focused on creating knowledge of the know-how type, frequently in the form of design artefacts that may include methods (embodying process knowledge) and classes of artefacts including constructs, models, instantiations of the preceding, and tangible objects (embodying product knowledge) (adapting March and Smith 1995) to fill a certain (kind of) problem space (Markus et al. 2002). These artefacts are built on intuition and practice-based or experiential knowledge (know-how), on principles derived from existing theory by matching constructs and relations to the problem space (know-what, know-why), or on both (c.f. Table 13.1).

For the purposes of this chapter, the operational definition of a theory is that it establishes a causal link between constructs, predicting their interdependent behaviour. In the scheme of knowledge types (Table 13.1), validated theories are explicit and represent the types know-what and know-why. This does not, however, exclude integrating or synthesizing other forms of knowledge in the design propositions. On the contrary, the design propositions are modelled after theories precisely to enable synthesis between existing knowledge of different types and emerging research findings. The following list of questions outlines what constitutes a complete theoretical contribution (Dubin 1969; Bacharach 1989; Whetten 1989):

  • What constructs and factors are relevant to explanation of the phenomenon of interest?

  • How are the constructs related; what are the relationships?

  • Why are the constructs expected to behave as posited by the theory; what are the underlying dynamics of the interaction that manifest in the expected behaviour?

  • Who, where, and when?—What are the boundaries of the expected interaction; what is expected to happen between the constructs, where, and when? What is not supposed to happen? These questions set the geographic, social, and temporal limits or scope of a theory and its corresponding applicability.

Table 13.2 presents a framework for setting explicit design propositions in a way that enables knowledge synthesis. As discussed in Sect. 13.2.1, the pivot of knowledge synthesis is the formulation of explicit propositions that can be evaluated empirically and either falsified or corroborated in the research process. Building on Gregor and Jones (2007) and Piirainen and Briggs (2011), this formulation of propositions essentially conforms to the basic criteria of a theory, as it requires specifying constructs, their relations, explanations, and testable design propositions. The propositions, and the consideration of mutability in particular, are ex ante predictions from theory, to be tested during the evaluation of the artefact. These propositions enable corroborating or refuting the embedded theoretical propositions and improving the theory, which in turn enables contributing back to the knowledge base.

Table 13.2 A framework for setting design propositions
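To illustrate how such a framework could be codified in practice, the following Python sketch models a design proposition as a structured record. The fields are one illustrative reading of the elements discussed above (constructs, relations, rationale, scope, testable prediction, mutability), and the example content is hypothetical:

```python
# A minimal sketch of codifying an explicit design proposition; the field
# names and example content are hypothetical, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class DesignProposition:
    constructs: list   # What: constructs and factors relevant to the phenomenon
    relations: str     # How: posited relationships between the constructs
    rationale: str     # Why: underlying dynamics behind the expected behaviour
    scope: str         # Who/where/when: boundaries of applicability
    prediction: str    # Ex ante, testable prediction for the evaluation
    mutability: str    # Expected sensitivity to changes in context or use

proposition = DesignProposition(
    constructs=["facilitation support", "meeting productivity"],
    relations="Embedded facilitation support increases meeting productivity",
    rationale="Scripted guidance reduces coordination overhead",
    scope="Co-located design review meetings of 4-8 participants",
    prediction="Teams using the artefact complete reviews faster than controls",
    mutability="Effect expected to weaken for distributed or ad hoc meetings",
)
```

Keeping the propositions in such an explicit form makes it straightforward to trace each evaluation result back to the claim it tests.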

3.2 Experimental Evaluation of Design Propositions

The Design Science Research (DSR) framework was born in the field of Information Systems (Research). Hevner et al. (2004) laid out a set of guidelines or criteria for what is essentially DIR or PBDR. The key difference between design practice and Design Research is that DR by definition contributes to knowledge by solving classes of problems with artefacts that are evaluated through an instantiation in the given problem context, and contributes to the existing scientific knowledge base through this process. It has been said that the evaluation of design artefacts, design propositions, and the underlying claims to knowledge is what puts the ‘Research’ in Design Research in the sense of DIR and PBDR. Otherwise, ‘[w]ithout evaluation, we only have unsubstantiated … hypothesis that some … artifact will be useful for solving some problem’ (Venable et al. 2012). Table 13.3 presents the core guidelines for DIR as conceived in the DSR literature to set a framework for the methodological approach.

Table 13.3 Guidelines for Design Research (adapted from Hevner et al. 2004; Venable 2015)

The connection between relevance and rigour, that is, between the context of design (the (business) environment) and the scientific knowledge base built by previous research, is illustrated by the three related cycles of activity described in Fig. 13.1. These cycles are the relevance cycle (1), which links the environment with design, setting the problem space and informing design with the associated requirements and constraints, and later in the process instantiating the artefact and disseminating the results. The central design cycle (2) comprises the internal design process of DSR, where the problem space and solution space interface and an artefact is synthesized and evaluated until it satisfies the criteria set for the design. Finally, the rigour cycle (3) links DSR to the scientific knowledge base, informing the solution space and contributing back to knowledge based on the evaluation. As such, the framework integrates the perspectives of ‘design practice’, ‘design exploration’, and ‘design studies’ (Fallman 2008).

Fig. 13.1 The DSR framework and the three cycles (adapted from Hevner 2007; Cash and Piirainen 2015)

Within this framework, the design propositions framed in Table 13.2 describe the principles of form and function, i.e. the theoretical principles and other embedded knowledge, embodied by the design artefacts. As such, they are the products of the design cycle (2). The propositions are tested through the evaluation of the artefact (Walls et al. 1992; Markus et al. 2002; Gregor and Jones 2007), which in turn validates the underlying or embedded knowledge claims (Piirainen and Briggs 2011).

During the DR process, the relevance cycle (1) feeds design problems, requirements, and constraints to the process and carries the output of design to the environment. A secondary relevance cycle (1) occurs while the design is tested, demonstrated, and refined in the design cycle. The design cycle (2) interfaces with both the rigour cycle (3), as the rigour cycle feeds the design with theory and the evaluation with methodology, and the relevance cycle (1), as the artefact is piloted and evaluated. The rigour cycle (3) then feeds the principles of form and function to the design and feeds the findings from the evaluation of the artefact back to the knowledge base.
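As a rough illustration of how the cycles interleave in a DIR or PBDR process, the following runnable Python sketch shows the relevance cycle supplying requirements, the rigour cycle grounding the design and receiving the findings, and the design cycle iterating between building and evaluating. Every function and data item here is a hypothetical stand-in; this is a process skeleton, not an implementation of the framework:

```python
# A minimal, runnable sketch of the three-cycle process; all functions and
# data are hypothetical stand-ins for illustration only.

def elicit_requirements(environment):            # relevance cycle (1)
    return environment["requirements"]

def ground_in_theory(knowledge_base, reqs):      # rigour cycle (3)
    return [p for p in knowledge_base if p["addresses"] in reqs]

def evaluate(artefact, reqs):                    # design cycle (2)
    return {"satisfied": artefact["coverage"] >= len(reqs)}

def refine(artefact, findings):                  # design cycle (2)
    return {**artefact, "coverage": artefact["coverage"] + 1}

environment = {"requirements": ["traceability", "usability"]}
knowledge_base = [{"addresses": "traceability", "principle": "explicit propositions"}]

reqs = elicit_requirements(environment)                    # (1) set the problem space
artefact = {"principles": ground_in_theory(knowledge_base, reqs), "coverage": 0}
findings = evaluate(artefact, reqs)
while not findings["satisfied"]:                           # (2) build and evaluate
    artefact = refine(artefact, findings)
    findings = evaluate(artefact, reqs)
# (1) instantiate the artefact in the environment; (3) feed the findings back
print("design iterations:", artefact["coverage"], "->", findings)
```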

The relationship between the cycles can vary depending on the design problem and solution and the methodological design of a DR project. In RIDC-type projects, the rigour cycle (3) has the most importance, and the design (2) and relevance (1) cycles may even be viewed from the outside as objects of study. Moving to DIR and PBDR, the relative weight of the relevance (1) and design (2) cycles grows, and the research project envelops more of the design cycle (2).

Relating to the relevance cycle (1), a large theme in the discussion about reflective practice and the failure of Design Theory in the sense of rule-based design seems to amount to problem setting, i.e. uncovering the ‘right’ problem and the ‘right’ constraints (Schön 1983). In other words, a problem formulated correctly contains the kernel of its own solution (c.f. Simon 1996). This ‘correct’ problem framing, however, requires understanding the domain of the design and the application area (know-why, know-how), which is the subject of the relevance cycle (1).

Regarding the synthesis of different sources of knowledge in the design cycle (2), in reference to the discussion on the four types of knowledge (Table 13.1), the purpose of academic research is in the end to produce explicit knowledge of the know-what and know-why types. As discussed, design embodies the use of know-how as well as know-what and know-why; thus, the design cycle in DIR and PBDR acts as a process of knowledge synthesis. Additionally, the know-how of individual professionals interacts with the design process and the artefact in the interpretation of the design artefact when it is instantiated and used in the chosen context through the relevance cycle (1). Insofar as the design process and evaluation include feedback loops between the design and rigour cycles, or descriptive elements of the instantiation and its use, the know-how of the persons interacting with the design in the experimental context will be incorporated into the design and, through the rigour cycle, into the body of knowledge of the know-why variety.

For example, exploratory findings from the machinery industry indicate that one of the key prerequisites for creating value through designing products and services is understanding the users’ process and application (e.g. Piirainen and Viljamaa 2011), which means finding the ‘right’ problem framing and constraints that relate to the daily activities of the client and end-user. This also entails that the knowledge needed spans not only domain-specific technical knowledge (know-what, know-why), but also knowledge of the routines associated with the problem and knowledge about the behaviour of people (know-how, know-who).

Notably, regarding the rigour cycle (3), the framework is not axiomatic in the sense that it would have a fixed normative methodology. Rather, the DSR literature proposes a ‘meta-methodology’, a prescriptive methodological framework that enables the use of different research strategies, methods, and field designs, as well as epistemologies, within it. Thus, taken apart from established research methodologies, it is an ‘empty container’ that allows integrating different onto-epistemological and methodological approaches. The next section expands on the key issues of combining practical and theoretical contributions in design.

4 Evaluation of Design Artefacts and Knowledge Claims

This section focuses on methodological choices for evaluating design propositions within the framework presented in Sect. 13.3. As discussed, the explicit setting of design propositions and their evaluation is what sets Design Research apart from design as practice or artifice. In the same vein, the oft-stated purpose of evaluation is to examine whether the artefact proves to solve the design problem, following the pragmatic or instrumentalist logic that the underlying theoretical claim is true if the artefact is useful (c.f. James 1995; Gill and Hevner 2011). In more common-sense wording, evaluation ensures that the artefact fulfils its requirements and that the associated knowledge claims are sound. Venable et al. (2012) expand on this and propose that there are five purposes for evaluating design artefacts:

  1. Establishing the utility and efficacy (or lack thereof) of the design artefact for its stated purpose.

  2. Evaluating the formalized knowledge about the artefact’s utility for achieving its purpose, i.e. validating the design proposition and other theoretical claims attached to the artefact.

  3. Evaluating a design artefact in comparison with other artefacts designed for a similar purpose, i.e. establishing the performance of the artefact in relation to competing designs.

  4. Establishing side effects or undesirable consequences of the artefact.

  5. Identifying weaknesses and areas of improvement for a design artefact under development.

It is notable that four of the five listed purposes relate either entirely or mostly to the practical utility of the artefact. In the interest of promoting research rigour in knowledge synthesis, the following discussion focuses on the aspects related to evaluating design propositions and other attached theoretical claims.

Regarding the theoretical contribution and the validation of the underlying claims to knowledge (know-what, know-why, know-how) as codified by the design propositions, whether by corroboration or refutation, the artefact and its instantiation(s) are the interface between the world and the knowledge base. In the terms used previously, design artefacts belong to the artificial (W3), and their evaluation in an empirical context will yield information about the instantiation in the ‘real’ world (W1), as well as about the interplay between the artefact (W3), the context (W1), and the surrounding people (W2). Hevner et al. (2004) propose that evaluation can use multiple empirical methodologies, for either ‘artificial’ experimental evaluation in a controlled environment or ‘naturalistic evaluation’ (Venable 2006), as well as analytical methods, including logical proof that the artefact solves the problem, as illustrated in Table 13.4. In relation to the theme of experimental Design Research as outlined especially in the first part of this book, only the category of ‘experimental’ methods falls strictly under this heading.

Table 13.4 Examples of evaluation methods for design artefacts and their underlying knowledge claims, in illustrative descending order of representing the empirical context accurately (adapted from Hevner et al. 2004; Siau and Rossi 2011; Gonzalez and Sol 2012; Venable et al. 2012)

A less recognized task in evaluation is to verify whether the artefact actually instantiates the propositions and can be said to operationalize the theoretical claims. Any claims to knowledge are hollow if we cannot claim to know why exactly we get the observed results and what the attribution, or at least contribution, of the design artefact to those results is (Briggs 2006; Piirainen and Gonzalez 2014). This duality is referred to as ‘verification and validation’ in simulation modelling (e.g. Kleijnen 1995; Sargent 2005; Balci 2009; Sargent 2013). Translating this duality of purpose to the evaluation of design artefacts, verification corresponds to ascertaining that the instantiation of the design artefact is in fact built after the design, adheres sufficiently to the intended design principles of form and function, and operationalizes the theoretical claims under scrutiny (analytical evaluation). Validation corresponds to determining whether the behaviour of the artefact is as projected by the design propositions and sufficient in terms of solving the original problem (testing, experimental, and field evaluation).
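The distinction can be expressed as two separate checks. The following Python sketch, with hypothetical checks and numbers, separates verification (does the instantiation embody the intended principles?) from validation (does its observed behaviour match the propositions’ predictions?):

```python
# A minimal sketch separating verification from validation; the checks and
# numbers are hypothetical illustrations, not a general-purpose procedure.

def verify(instantiation, design_principles):
    """Verification: is the artefact built after the design, i.e. does it
    implement the intended principles of form and function (analytical)?"""
    return design_principles.issubset(instantiation["implements"])

def validate(observed_gain, predicted_gain, tolerance=0.05):
    """Validation: does the artefact behave as the design propositions
    project (testing, experimental, and field evaluation)?"""
    return abs(observed_gain - predicted_gain) <= tolerance

instantiation = {"implements": {"explicit_propositions", "guided_process"}}
print(verify(instantiation, {"explicit_propositions"}))   # True: design embodied
print(validate(observed_gain=0.42, predicted_gain=0.40))  # True: behaves as projected
```

Only when both checks pass can observed results be attributed to the operationalized claims rather than to an artefact that drifted from its design.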

Another related, and also less frequently discussed, dimension in evaluation is illustrated by McGrath’s (1981) ‘three-horned dilemma’. The dilemma is that in choosing a field design and methods, a researcher has to compromise between representativeness in a population, describing behaviour accurately, and taking the context into account, often by choosing to optimize one (or two) of these dimensions and ‘sitting uncomfortably’ on the remaining horn(s). In the design context, the less controlled the setting and the wider the adoption, the less controlled the use of the artefact and the more mutable the artefact and its uses become, making it harder to establish the attribution of the artefact to any observed changes in the system. On the other hand, the more controlled the evaluation, the more the artefact is abstracted from its ‘natural’ context, and thus the less ‘realistic’ the observations become. Thus, there tends to be a compromise between rigorously evaluating the design propositions and conducting the evaluation in a realistic environment. Lastly, representativeness in the population is, in experimental evaluation, mostly a question of sampling and resources; in a naturalistic setting, it becomes a question of the adoption and popularity of the artefact.

Essentially, this means that in terms of research design, triangulation between multiple methods enables better compromises between rigour and validity if complementary methods are chosen. A further aspect of complementarity is that choosing different methods enables answering questions regarding not only the functionality of the artefact in its given setting, but also aspects of its interplay with the users and other phenomena on the borders of the real (W1), the artificial (W3), and the social (W2) (for an extended discussion, c.f. the other chapters in this book and, e.g. Morgan and Smircich 1980; Cunliffe 2010). Figure 13.2 illustrates the compromise between accuracy of behaviour and realism of context across the different evaluation designs presented above. In this scheme, representativeness in the population is a question of sampling and the volume of fieldwork, and by extension of the resources reserved for the evaluation.

Fig. 13.2 Illustration of trade-offs in artefact evaluation designs [c.f. Table 13.4, bubble size for the purpose of illustration, not to scale; light grey colour indicates empirical design, dark grey analytical (non-empirical)]

In recognition of these compromises, it is recommended that an artefact be tried in controlled conditions, either through testing or experiments, before moving to instantiation in a real or naturalistic environment (Hevner 2007; Iivari 2007). Further, it has been argued that while experimental designs are by nature rigorous and, when properly designed, offer highly valid and generalizable results on specific hypotheses within the sampled population, case studies can be more illustrative of complex cause–effect relations, especially over time (Kitchenham et al. 1995).

In terms of knowledge synthesis, observational evaluation in naturalistic settings also enables capturing the emergent properties of the artefact over time, as well as any externalities, contributing to know-what as well as know-how. Further, experiments by nature tend to be scaled-down or abstracted representations of phenomena, simplified in order to exert better control over the phenomenon under study. In the light of the three-horned dilemma, it is advisable to implement methodological triangulation and to develop a progression of evaluation (including verification and validation) during the design process.

As discussed above in Sect. 13.3.2, the design process is often not linear, and there is often uncertainty about the framing of the actual problem and the constraints of the design, which may require some searching. Draft designs may act as convenient boundary objects for defining the design constraints, which is another reason to triangulate and to start with non-empirical evaluation, until the stakeholders are in sufficient agreement over the artefact, before moving to costly empirical evaluation. Some authors have also recommended, specifically for research settings where a design is instantiated in an organization with the intention of making it a permanent solution, that an additional descriptive in-depth case study on the mutability of the artefact, its use and function, and the associated issues be included in the field design (Lukka 2003; Piirainen and Gonzalez 2014). The intention is to uncover further insight into the principles of form and function of the artefact in their emergent form, which further contributes knowledge: know-what, know-why, and know-how.

The nature of a DIR or PBDR research process (as described in Sect. 13.3.2) and the evaluation or validation of design propositions also impose limitations on the applicability and generalizability of the knowledge (know-what, know-why, know-how) acquired through design. That is to say, design artefacts are only ever 100 % applicable to problems that are well defined and constrained, as well as stable, to start with. Further, the problem needs to conform to the same explicit and implicit constraints as the original design problem. If some of the constraints or requirements change, entirely or in priority, within a class of problems, the design may have to change; that is, the design artefact is mutable.

Another limitation is that when dealing with social processes and behaviour, knowledge about behaviour around the artefact is not definite but probabilistic. Thus, the prescriptions derived from design are ‘satisficing’ (Simon 1996); they meet or exceed a set of performance specifications with a given confidence. For example, experimental results may indicate that a design artefact will raise the productivity of a particular task by x per cent with 95 % confidence (or p = 0.05), given that college-educated people from a particular country within a certain age bracket use the artefact as originally prescribed (as limited by the experimental conditions, choice of population, and sample). When going outside this population and prescribed use script, the results become more uncertain and suggestive. The less controlled the use of the artefact in its environment, the more likely there are different interpretations and constructions of the artefact and its uses (e.g. Williams and Edge 1996); that is, if the artefact is found functional and useful for one problem or use, it is likely to be applied in a different context, in a different setting, in a different way, or to an altogether new problem not originally considered during design, which adds a degree of mutability. It follows that while DR has implications for practice, the knowledge is tentative and probabilistic, especially as the artefact moves outside the evaluation conditions.
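As a worked illustration of such a probabilistic, ‘satisficing’ claim, the following Python sketch computes a 95 % confidence interval for a mean productivity gain; the per-subject data are hypothetical and chosen only to make the arithmetic concrete:

```python
# A worked sketch of a probabilistic performance claim: a 95% confidence
# interval for the mean productivity gain; the data are hypothetical.
from statistics import mean, stdev

gains = [0.12, 0.08, 0.15, 0.10, 0.05, 0.11, 0.09, 0.14, 0.07, 0.13]
n = len(gains)
m, s = mean(gains), stdev(gains)
t_crit = 2.262  # two-sided t critical value for df = n - 1 = 9 at alpha = 0.05
half_width = t_crit * s / n ** 0.5
print(f"mean gain {m:.3f}, 95% CI [{m - half_width:.3f}, {m + half_width:.3f}]")
```

Such a claim holds with the stated confidence only for the sampled population and the prescribed use script; outside those conditions, the interval is at best suggestive.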

5 Discussion and Conclusions

Within the framework of experimental design research, the bulk of this book is focused on methods for research settings that can be called Research in Design Context. Similarly, design as a practice, design methods and processes, and design knowledge have been discussed extensively in the field of DR. The focus of this chapter has been on bringing these two discussions together by proposing methodological guidelines for conducting rigorous and relevant DR. Within this focus, particular stress has been placed on how to support knowledge synthesis and how to extract a theoretical contribution from DIR and PBDR. The key message of this chapter is that all types of DR, including DIR and PBDR, can be rigorous and can contribute to the knowledge base when the design activity is coupled with setting explicit design propositions that are embodied in the design, followed by rigorous evaluation of the design artefact and the underlying claims to knowledge.

As for other lessons, it is the author’s contention that the most usual apprehension towards a structured process in Design Research, at the more practice-based end of the spectrum, is the perception that adding structure and methodological rigour to a ‘designerly’ DIR or PBDR research project will constrain the design unduly and stifle creativity. However, these guidelines do not in any way constrain excellent design practice, nor are they meant to burden creativity. The methodological guidelines described here do not constrain the design cycle or prescribe hard rules that prevent the use of know-how, including existing best practices and mastery of design. Their purpose is rather to support making deliberate choices about the research design, so that excellent contributions to knowledge can be lifted from excellent design practice.

There is a danger, though, that too much focus on research execution may leave the actual design with less attention. Two types of errors manifest this risk: one is locking the problem space too early, which leads to arriving at an excellent solution to the wrong problem; the other is locking the solution space and focusing on a particular solution too early, possibly for a lack of effectuation (Drechsler and Hevner 2015). Both may lead to a solution that is sub-optimal for the stakeholders, although this may not detract from the value of the research as such.

These risks are compounded when working with explicit design propositions if the propositions are treated as a checklist item to be ticked off the to-do list as early as possible, which may drive the design to a premature lock-in. The purpose of the propositions is, on the contrary, to be an explicit codification of the principles of form and function of the design, and they need to live with the artefact. Otherwise, there is an additional risk that the evaluation will not in fact produce useful data for validating the knowledge claims.

A closely related risk is locking the evaluation field design, protocols, and instruments too early, before it is actually known what is being evaluated. Again, the evaluation design should not be chosen by rote from a list, but judiciously, following the type of the artefact and propositions, the corresponding level and unit of analysis, the research questions, and the researcher’s chosen onto-epistemological approach.

With these remarks and reflections, the closing proposal is that these guidelines are not intended to replace or surpass the art of design in Design Research, but to create a framework that enables the synthesis of knowledge by combining design excellence and creativity with the rigour necessary to derive excellent scientific contributions, know-what and know-why as well as know-how, from DR.