1 How to Develop Designer-Centred Methods?

Despite many years of continuous development, design method usage remains limited in design practice. One reason for this limited uptake is shortcomings in putting the designer as a human being in the focus of design methods. This view aligns with the position of Badke-Schaub et al. (2011), who argued for a designer-centred methodology more than a decade ago. Since then, there has been little change in research approaches to foster the development of designer-centred methods in product development. However, the demand to consider rational and unconscious thinking in product development is coming to the fore (Ehrlenspiel 2020). To develop designer-centred methods, empirical investigations are necessary that focus on the designer during product development.

In a first step, designer thinking and its influence on the process of designing need to be understood and quantified. However, making designer thinking accessible for assessment is particularly difficult: ways of thinking, by their nature, often remain unconscious or even subconscious. But to be designer-centred, methods need to address those ways of thinking, which therefore need to be made explicit. In the view of the authors, this is hindered by several challenges in the field of empirical design research. Those challenges are laid out in the following.

The main issue, as stated by Üreten et al. (2020), is the development of a valid operationalisation. Operationalisation determines how the aspect under investigation is made observable or measurable. A proper operationalisation is a prerequisite for assessing designer-related aspects that are relevant for product development.

Operationalisation also strongly relates to another challenge: the high effort and resources required for data collection, analysis and interpretation in empirical design research (Üreten et al. 2020). The more the operationalisation focusses solely on the relevant aspects, and the more it enables automation of assessment and analysis, the lower the effort and resources required. For example, to assess when a designer encounters a problem during development, research methods originating from the social sciences, such as retrospective interviews or concurrent think aloud, can be used to assess conscious designer thinking. This is very resource intensive, as all utterances have to be transcribed and interpreted afterwards, and measures to mitigate bias in interpretation have to be taken. Therefore, the designer's problems should be operationalised by objectively measurable variables. By using bio signals such as heart rate or eye-tracking metrics as variables and identifying threshold values, cognitive processes can be studied objectively (Lohmeyer and Meboldt 2016). By using algorithms, the evaluation can be accelerated through automation (Wolf et al. 2018).
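To illustrate the idea of threshold-based operationalisation, the following is a minimal sketch, assuming a hypothetical heart-rate signal; neither the baseline definition nor the threshold factor is taken from the cited studies.

```python
import numpy as np

def detect_problem_episodes(heart_rate, threshold=1.15):
    """Flag samples where heart rate exceeds a resting baseline by a factor.

    heart_rate: 1-D array of heart-rate samples (beats per minute)
    threshold: relative elevation treated as a candidate problem episode
               (illustrative value, not an empirically derived one)
    """
    hr = np.asarray(heart_rate, dtype=float)
    baseline = np.median(hr)  # simple per-participant resting baseline
    return hr > threshold * baseline

# Example: a brief elevation above the resting baseline is flagged.
hr = np.concatenate([np.full(120, 70.0), np.full(30, 85.0), np.full(120, 70.0)])
flags = detect_problem_episodes(hr)
print(int(flags.sum()), "samples flagged")  # -> 30
```

Such an algorithmic evaluation replaces manual transcript interpretation for this one variable, which is exactly where the effort reduction comes from.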

To develop operationalisations that enable quantitative measurement of aspects of designer thinking, a detailed understanding of the relevant aspects to be measured is necessary. To achieve this, exploratory and qualitative investigations of design in practice are needed as a prerequisite. However, in current investigations concerning designer thinking, data acquisition and analysis remain on a qualitative and interpretative level. This results in a lack of comparability of results, and extensive measures have to be undertaken to ensure objectivity. Objectivity could be increased by developing standardised instruments for quantitative data acquisition, comparable to IQ tests. This standardisation can also be achieved via evaluation algorithms, which inherently enable an objective and reliable evaluation.

Summing up, future investigations of designer thinking should enable quantitative data acquisition and analysis, which is necessary for data-driven development of design methods. This increases the comparability of results and at the same time reduces interpretation bias as well as the resources needed for research.

In order to develop design methods, Blessing and Chakrabarti (2009) suggest three steps after the research clarification within their research framework in DRM, a design research methodology: Understanding design, Developing support and Evaluation (see Chap. 9). To enable the development of designer-centred methods, this approach needs to be focussed on methods supporting designer thinking. Additionally, design method development should aim at producing quantitative results concerning the method impact, operationalised through proper variables, because this is needed for a comprehensive validation.

We therefore propose three steps of method development, connected by designer thinking (see Fig. 10.1) and aiming at quantification of effects through fitting operationalisation: (1) Assessing ways of designer thinking, (2) Design method synthesis, (3) Design method validation. Those three steps are further described in the following. An example of how to conduct designer-centred method development is elaborated in Sect. 10.2.

Fig. 10.1: "Designer thinking" as the central element connecting the three steps in the development of designer-centred methods

1.1 Assessing Ways of Designer Thinking

In order to develop designer-centred methods, the current situation needs to be understood by empirical investigation of designer thinking. The goal of these investigations should be the identification of best practices or problems and their causes (see Fig. 10.1). An important part of this step is to identify situations in which design engineers really need support from a design method. A lack of subjectively perceived need for support is one of the main reasons why design methods are seldom applied in design practice (Eisenmann and Matthiesen 2020). Additionally, by analysing the circumstances in which difficulties occur, a deeper understanding can be gained about the underlying causes. Those causes are needed for goal-oriented method development in the second step (Sect. 10.1.2) as well as for validation in the third step (Sect. 10.1.3). This first step can be divided into a qualitative and a quantitative phase.

In the qualitative phase, a detailed understanding of problems and their causes in designer thinking in practice should be gained. It is advisable to use research methods from the social sciences in this phase. Those range from established methods such as protocol analysis and think aloud, through focus group interviews, to the seldom-used human subject experiments that investigate alterations in thinking and behaviour. These research methods aim at putting the human being in the focus of the investigation, thus enabling the researcher's view to become designer-centred. Those investigations should start in a real design context to increase the external validity of identified problems and best practices.

In the following quantitative phase, the identified results should be verified by quantitative assessment. This should initially be conducted in a laboratory context, because such a more focused and less disturbed environment enables higher numbers of participants and therefore statistical analysis of occurring effects. A laboratory context also makes it possible to evaluate the objectivity and reliability of the chosen operationalisation. While the focus on a limited number of aspects is necessary to verify the occurrence of effects, study designs should aim at being as realistic as possible to include aspects relevant for practice.

Summing up, in the first step, research methods from the social sciences are used to assess ways of designer thinking in order to identify best practices and problems as well as their underlying causes. Quantification should then be used to verify the chosen operationalisation. This enables the subsequent design method development to focus on the relevant aspects of designer thinking.

1.2 Designer-Centred Method Synthesis

Designer-centred method synthesis addresses the question of how to overcome the causes of problems in the design process by focusing on the designer, so that a better result is achieved than without the method. As mentioned in Sect. 10.1.1, this can either be achieved by using best practices as a starting point or by searching for ways to influence designer thinking to overcome the occurring problems.

If the first step identified best practices, those should be made as explicit as possible to make them accessible for other designers. If the first step did not yield any best practices, it should be carefully considered whether there are established approaches in design research to overcome the occurring problems. For example, when dealing with problems in design decision making, existing methods and approaches of this area should be reviewed for their potential value in the current situation.

In many cases, causes of problems in designer thinking relate to more fundamental aspects of human thinking, like logical reasoning or the interpretation of information. Especially in those cases, existing approaches from other professional disciplines should be considered for design method development. Like research methods from the social sciences, approaches from other disciplines often require careful analysis and modification for use in a design context (Bender et al. 2002).

To ensure that the method not only influences designer thinking but also has an impact on the result of the design process, possible positive as well as negative effects should be anticipated. For example, an ideation method meant to support short-term designer thinking may take too long to apply for short-term memory to retain the relevant content, and therefore loses its effect.

Summing up, designer-centred methods should be synthesised by combining expertise in the product development process with methods from the social sciences or other disciplines and by drawing on existing approaches. An example of design method synthesis using approaches from psychology and intelligence analysis is elaborated in Sect. 10.2.2.

1.3 Design Method Validation

The third step, design method validation, aims at investigating whether the developed design method has the desired impact. In the case of designer-centred methods, this impact has to be investigated on two levels: (1) Does the design method influence designer thinking as anticipated? (2) Does the change in designer thinking lead to a better performance?

Marxen and Albers (2012) suggest investigating design methods through experimental research before implementing them in practice. In design method validation, it is advisable to use human subject experiments, because they enable the identification of causal relationships between design method application and its effects on designer thinking. Experiments are set up by comparing two groups of participants solving a design task that represents the originally occurring problems and enables reproducible data acquisition. The test group uses the newly developed method, while the other group, the control group, works intuitively or with a benchmark method. Üreten et al. (2019) summarise different aspects to consider when investigating design methods in experiments in the form of a concept map.
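As a minimal sketch of such a group comparison, the following assumes hypothetical per-participant outcome scores; the choice of test (Mann-Whitney U, suitable for the small samples typical of design studies) and all numbers are illustrative, not taken from the cited work.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant outcome scores (e.g. solution quality, 0-100).
test_group = np.array([72, 80, 68, 85, 77, 74, 81, 69])     # used the new method
control_group = np.array([61, 70, 58, 73, 66, 64, 71, 60])  # worked intuitively

# Non-parametric two-sample test: does the test group score higher?
u_stat, p_value = stats.mannwhitneyu(test_group, control_group,
                                     alternative="greater")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

# Rank-biserial effect size from pairwise comparisons, as a complement
# to the p-value (computed directly to stay independent of library versions).
greater = sum(t > c for t in test_group for c in control_group)
less = sum(t < c for t in test_group for c in control_group)
effect = (greater - less) / (len(test_group) * len(control_group))
print(f"rank-biserial r = {effect:.2f}")
```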

To investigate the influence of the developed design method on designer thinking, the operationalisations created in the first step (see Sect. 10.1.1) for quantitative measurement can be used. In this way, the change in designer thinking caused by method application can be quantified. The assessment of performance is a general challenge in design research, because it is concerned with the actual impact in practice, and assessment in practice is hampered by a multitude of disturbances. Design processes in companies are unique, which strongly reduces comparability: time required and costs vary significantly between similar design projects, caused by a multitude of influences from different stakeholders. Investigation in a controlled laboratory environment is therefore to be seen as a necessary step to enable a later transfer to the context of practice. A remaining challenge is to recruit design engineers for such human subject studies.

One aspect of raising performance is the reduction of the occurring problems detected in the first step. It needs to be verified whether the change in designer thinking indeed reduces those problems. For a comprehensive design method validation, the impact of this reduction of problems on performance needs to be investigated as well. Performance can relate to multiple different aspects, ranging from the time required for a task, through the quality of generated solutions, to the reduction of costs.

Summing up, to develop designer-centred methods, three steps are needed, connected by designer thinking as a central element (see Fig. 10.1). In the first step, ways of designer thinking are assessed in order to understand and quantify occurring problems and their causes. The second step then aims at developing a design method that influences designer thinking to overcome these problems, using best practices or approaches from design research or other disciplines. In the final step, the design method is validated by quantifying its impact on designer thinking and subsequently on design performance. For those steps, researchers need competences in engineering design as well as in research methods of the social sciences to put the designer in the centre of design method development.

2 Method Development to Overcome Cognitive Bias in Product Development

In the following, the three steps of designer-centred method development are demonstrated by means of an example.

Designing can be understood as an iterative problem-solving process (Albers and Braun 2011). To solve the design problem, design engineers have to acquire and interpret different data and information (see Chap. 11). Especially under high uncertainty, challenges arise in the interpretation of results in product development (Pottebaum and Gräßler 2020). Misinterpretations of information can lead to lengthy and expensive iterations in the design process. In psychology, systematic misinterpretations in human thinking are called cognitive biases. Although misinterpretations in design have a severe impact, cognitive biases in design have hardly been considered so far. Studies here mostly investigate the influence of cognitive biases on the synthesis of new ideas (cf. design fixation (Neroni et al. 2017)) or cognitive heuristics (cf. Bursac et al. 2017; Bursac et al. 2018; Tanaiutchawoot et al. 2019). In the following, the development of a method is described that aims to support designer thinking to overcome cognitive bias during failure analysis in engineering design. The design method is developed following the three steps of the approach presented in Sect. 10.1.

2.1 Assessing Ways of Designer Thinking: Identifying the Influence of Confirmation Bias on Designers’ Understanding of Problems

The first step of design method development started with a qualitative field study (see Nelius et al. 2021a) to capture challenges and their causes in a realistic setting. A problem-solving workshop addressing an actual problem occurring in a company was used as a research environment. The aim of the workshop was to identify the cause of the failure of a construction machine and to develop technical solutions to resolve the failure. Through observation of and reflection with the participants, several challenges were identified in the workshop. For the participants, the main challenge was to assess whether an identified cause of the failure was the actual one. During the workshop, it could be observed that mostly such information was explored that explained or supported the suspected cause of the problem. Information that contradicted the suspected cause was rarely searched for actively. Therefore, false failure causes were pursued several times over long periods until disconfirming information was found unintentionally. Because of this challenge, technical solutions were developed several times that did not solve the real problem (Nelius et al. 2021a).

The pursuit of false causes of problems could be traced back to the search for confirmatory information as the root cause. This mindset is known as confirmation bias, one of the aforementioned cognitive biases. Confirmation bias describes the tendency to seek and interpret information in a way that confirms one's own views (Nickerson 1998) and is to be seen as a particularly serious cognitive bias.

Whereas the influence of confirmation bias has already been studied in many disciplines (e.g. psychology, law, medicine, informatics), its influence in engineering design has not yet been investigated. It was therefore necessary to investigate the influence of confirmation bias on the search for and interpretation of information in the failure analysis of designers in a laboratory study. In this way, occurring problems, their causes and best practices of design engineers could be made accessible to investigation to support method synthesis.

Laboratory Study on the Confirmation Bias: Data Collection and Analysis

In order to replicate the challenges of the field study, a laboratory study was set up that was as close to reality as possible. The goal of the laboratory study was to quantify the influence of confirmation bias on the perception and interpretation of information by design engineers. The task depicts a real failure from the design department of a power tool manufacturer. During the development of a power tool, a premature component failure occurred in the prototype phase, which led to the failure of the entire device. The study participants had access to the usual information sources of a responsible developer: the entire power tool, the worn parts, a 3D model of the affected assembly and the technical drawing of the assembly. The participants had the task of analysing the cause of the failure and sketching a suitable technical solution. For the evaluation of the study, 12 students and 8 designers (with more than 8 years of professional experience) were considered.

Confirmation bias was expected to have an impact on both the interpretation of information and the search for information. These aspects of confirmation bias were operationalised as follows:

  • To capture participant misinterpretation, concurrent think aloud was used, in which the participants expressed their thoughts aloud during the task. The participants' assumptions about the cause of the failure, and the information they used for their analysis, were recorded over the course of the task. Each piece of information was assessed as to whether it confirmed or disconfirmed the failure cause from the participant's subjective perspective. In addition, it was assessed whether the information confirmed, disconfirmed, or was unrelated to the failure cause from an objective perspective. The coding was reviewed by a second person in order to obtain objective results. It is considered a sign of confirmation bias if misinterpretations occur more frequently in the confirming direction (neutral and disconfirming information is interpreted as confirming) than in the disconfirming direction (neutral and confirming information is interpreted as disconfirming); a minimal evaluation sketch follows after this list.

  • The perception of information during task processing was recorded via eye tracking. Here, we examined how long participants looked at confirming, neutral, and disconfirming information concerning the failure cause currently pursued. By combining this with participant statements, it was also possible to capture whether misinterpretations were related to low visual attention.
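The following is a minimal sketch of how such a two-perspective coding could be evaluated, assuming hypothetical labels; the error names follow the scheme above, while the data structure and example codings are purely illustrative.

```python
from collections import Counter

# Each piece of evidence is coded twice: how the participant used it
# (subjective) and what it objectively indicates for the pursued failure cause.
# Labels: "confirming", "disconfirming", "neutral".
codings = [
    {"subjective": "confirming", "objective": "neutral"},        # projection error
    {"subjective": "confirming", "objective": "disconfirming"},  # interpretation error
    {"subjective": "confirming", "objective": "confirming"},     # correct use
    {"subjective": "disconfirming", "objective": "disconfirming"},
]

def classify(coding):
    s, o = coding["subjective"], coding["objective"]
    if s == o:
        return "correct"
    if s == "confirming":
        # Neutral read as confirming: projection error;
        # disconfirming read as confirming: interpretation error.
        return "projection error" if o == "neutral" else "interpretation error"
    return "misread in disconfirming direction"

counts = Counter(classify(c) for c in codings)
print(counts)
# Confirmation bias shows as an excess of projection/interpretation errors
# over misreadings in the disconfirming direction.
```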

Laboratory Study: Results and Discussion

The results of the study (see Fig. 10.2) show that the participants use confirming evidence much more frequently to check their assumptions than disconfirming evidence. The occurring misinterpretations take place almost exclusively in the confirmatory direction: almost one third of the evidence the participants use as confirming in their argumentation is misinterpreted compared to the objective view of the evaluators. Participants wrongly interpret neutral information (projection error) as well as disconfirming evidence (interpretation error). The evidence participants use as disconfirming is almost completely interpreted correctly. Due to the dominance of the subjectively confirming evidence, most participants keep their assumed cause of the problem, even if it is wrong.

Fig. 10.2: Quantification of the confirmation bias within the participants' reasoning: misinterpretations such as projection and interpretation errors (hatched areas) occur systematically more often in the confirmatory direction (Nelius et al. 2020)

The eye-tracking data show that the participants' visual attention dwells significantly longer on objectively confirming evidence than on objectively disconfirming evidence. Misinterpretation of information is therefore also associated with low visual attention.

In many preceding studies in the state of the art, the higher frequency of statements about confirming evidence is seen as an indication of confirmation bias. However, since the difficulty of discovering evidence and the amount of available evidence are unknown, misinterpretation is a more appropriate operationalisation of confirmation bias. Through this operationalisation and the use of eye tracking, it was possible to quantify the influence of confirmation bias on both reasoning and visual attention during failure solving. It could be shown that the confirmation bias often leads to a wrong understanding of the problem. In industry, this incorrect understanding of the problem would have led to the development of unsuitable solutions and to lengthy and expensive iterations. The intensive use of disconfirming evidence can be understood as a best practice, which can be used for the synthesis of methods: information identified as disconfirming was used correctly more often, and disconfirming evidence more often led to the rejection of false assumptions.

2.2 Method Development: Design-ACH to Avoid Misunderstanding of Design Problems

Existing methods state that different failure causes should be identified and the most probable failure cause should be selected, but they give no guidance on how to overcome the confirmation bias.

Based on the findings from the laboratory study, the following aspects could be identified, which should be considered when developing a method to overcome confirmation bias:

  • Intensive analysis of evidence

    The eye-tracking data show that misinterpreted evidence is analysed for a shorter period of time. An intensive analysis and detailed modelling (e.g. with the C&C2 approach (Matthiesen 2021)) should therefore lead to fewer misinterpretations.

  • Focus on disconfirming evidence

    Evidence that was identified as disconfirming had mostly been interpreted correctly. A focus on disconfirming evidence can reduce the incidence of misinterpretation.

  • Falsifying assumptions with disconfirming evidence

    Subjectively disconfirming evidence led to the falsification of false assumptions and the identification of further assumptions. By focusing on the falsification of assumptions through disconfirming evidence, the pursuit of false assumptions can be directly counteracted.

Typical engineering design methods for failure analysis do not address the implications arising from confirmation bias, but an approach to do so could be identified in another professional discipline. The Analysis of Competing Hypotheses (ACH) is a method developed by Heuer (1999) for intelligence analysis. Its goal is the objective evaluation of multiple hypotheses for observed data. The ACH method was developed taking into account insights from cognitive psychology, decision theory and the philosophy of science; its aim is to overcome, or at least minimise, the analyst's weaknesses and thinking errors (Heuer 1999). The goal of the ACH method largely covers the identified needs for methodical support of designers in failure analysis. However, since, in contrast to engineering design, no experimental investigations concerning the evaluation of hypotheses are possible in intelligence analysis, the ACH does not provide any support in this regard. Therefore, the ACH was further developed into the Design-ACH for application in engineering design. For this purpose, the original eight steps of the ACH method were simplified to three steps, and a step for defining efficient hypothesis testing was added.

The approach of the Design-ACH includes four steps (see Fig. 10.3, left). In the first step, several hypotheses and pieces of circumstantial evidence are identified. In this process, hypotheses on the cause of the failure are generated, if possible in an interdisciplinary team, and evidence is collected for all hypotheses. The hypotheses and evidence are compared in a matrix (see Fig. 10.3, centre). In the cells, it is recorded whether the evidence confirms or disconfirms the hypotheses, or whether no statement can be made. The evaluation of the hypotheses is done row by row, that is, one piece of evidence after the other. This fosters the intensive analysis of evidence, as each piece of evidence is analysed in the light of all hypotheses. By constantly switching between the hypotheses, commitment to a single hypothesis, and thus the confirmation bias, should be reduced.

Fig. 10.3: Developed Design-ACH for root cause identification in engineering failure solving (Nelius et al. 2021b)

In the second step, the matrix is refined. Here, findings from step 1 are used to combine similar hypotheses and to establish new hypotheses and evidence. Evidence that does not allow prioritisation of the hypotheses (e.g. because it confirms all hypotheses) is removed from the matrix.

In the third step, evaluation and decision take place. Column by column, the probability of each hypothesis is evaluated. Here, the focus is on disconfirming evidence in order to falsify hypotheses that do not apply.

If none of the hypotheses can be selected as most probable based on the available information, an efficient hypothesis test is to be defined in step 4. In this step, investigations are defined with which the hypotheses remaining after step 3 can be falsified. For this purpose, the possible investigation results are preconceived and included in the matrix as circumstantial evidence. By evaluating the possible investigation results in relation to the hypotheses, the significance of the investigations can be estimated. After promising investigations have been carried out, the matrix is updated and the most probable hypothesis is selected.

The matrix presents the analysis results and conclusions in a clear and standardised way. It is therefore suitable for reviewing the conclusions, as a decision-making template for follow-up investigations, and as documentation of the failure analysis.
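To make the matrix logic concrete, here is a minimal data-structure sketch; the hypotheses, evidence and ratings are invented for illustration, and only the row/column logic follows the steps described above.

```python
from enum import Enum

class Rating(Enum):
    CONFIRMS = "+"
    DISCONFIRMS = "-"
    NO_STATEMENT = "0"

# Hypothetical failure hypotheses and circumstantial evidence.
hypotheses = ["H1: bearing wear", "H2: overload", "H3: material defect"]
evidence = {
    # evidence -> one rating per hypothesis (columns of the matrix)
    "E1: wear pattern on raceway": [Rating.CONFIRMS, Rating.NO_STATEMENT, Rating.DISCONFIRMS],
    "E2: load logs within spec":   [Rating.NO_STATEMENT, Rating.DISCONFIRMS, Rating.NO_STATEMENT],
    "E3: hardness within spec":    [Rating.NO_STATEMENT, Rating.NO_STATEMENT, Rating.DISCONFIRMS],
}

# Step 2: remove evidence that cannot prioritise the hypotheses
# (i.e. the same rating for every hypothesis).
matrix = {e: r for e, r in evidence.items() if len(set(r)) > 1}

# Step 3: evaluate column by column, focusing on disconfirming evidence.
for col, hypothesis in enumerate(hypotheses):
    against = [e for e, r in matrix.items() if r[col] is Rating.DISCONFIRMS]
    verdict = "falsified by " + ", ".join(against) if against else "remains plausible"
    print(f"{hypothesis}: {verdict}")
```

In this invented example, only H1 survives the column-wise falsification, which is exactly the narrowing-down effect the method aims at.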

2.3 Method Validation: Impact of the Design-ACH

The Design-ACH was evaluated in a laboratory study as well as in a case study in an industrial environment. Both studies are described in the following.

Laboratory Evaluation Study

For method validation in the laboratory, 7 students and 5 designers were trained with a simplified version of the Design-ACH. After the theoretical part of the training, the participants applied the method in a practical exercise under the guidance of a moderator. The moderator answered questions concerning the method and ensured its correct application. The previously presented task (see Sect. 10.2.1), which the participants worked on individually, was used for data collection.

The impact of the Design-ACH was operationalised through the reduction of the confirmation bias. By applying the Design-ACH, participants generated more hypotheses and used more evidence. The students benefited particularly, using twice as much confirming evidence and three times as much disconfirming evidence with the Design-ACH. The proportion of misinterpreted evidence was reduced by 24% across all participants. Without the method, 27% of all evidence was associated with confirmation bias; with the method, this value was reduced to 17%. In order to record the acceptance of the method, the participants were asked in a survey to rate the benefit of the method on a scale from 1 (low) to 7 (high). Although the students benefited significantly more from the application of the method, they did not rate its benefit as high (4.9/7) as the designers did (5.8/7).

The Design-ACH resulted in the confirmation bias occurring less frequently: false assumptions were rejected more often, and the proportion of misinterpreted circumstantial evidence decreased. However, difficulties were still observed in the application of the method. The proportion of disconfirming circumstantial evidence increased only slightly, and despite subjectively disconfirming circumstantial evidence, some assumptions were not discarded. Both difficulties should be reducible through moderation.

Case Study

In the case study, the Design-ACH was used in a problem-solving workshop. The aim of the case study was to qualitatively evaluate the applicability and usefulness of the method in an industrial context. After training the workshop participants, the Design-ACH was applied to a real failure occurring in the participants' company: the cause of a failure of a production machine was to be identified. The 13 participants applied the Design-ACH under moderation.

Initially, about half of the participants were convinced of one cause of the failure; alternative causes were hardly considered. At the beginning of the Design-ACH application, existing information was collected. It became clear that not all information was known beforehand, even by the employees involved in the failure solving. The existing assumptions were transformed into four testable hypotheses, and the collected information was transformed into meaningful evidence that allowed a statement on the probability of the hypotheses. The collection and discussion of existing information was described by the participants as an important step, as it made it possible to achieve a uniform understanding of the problem at hand. In addition, the amount of information was compressed to a manageable level by evaluating it in relation to the hypotheses.

The intensive discussion of the hypotheses while applying the Design-ACH had a significant impact by reducing the original fixation on individual causes of the failure. The application of the Design-ACH also influenced performance: through the structured evaluation within the framework of the Design-ACH, two of the four hypotheses on the cause of the failure could be excluded by identifying clear disconfirming evidence. To further narrow down the cause of the failure, precise follow-up investigations could be defined. The use of the Design-ACH was evaluated very positively in a survey and a reflection session with the participants. The greatest benefit was seen in the moderated application of the method.

To sum up, by developing the Design-ACH using approaches from psychology and intelligence analysis, an impact on both designer thinking and performance could be achieved. Designer thinking could be guided towards a thorough analysis of evidence and a focus on disconfirming evidence in order to falsify hypotheses. This impact on the confirmation bias could be quantified in a laboratory study. A change in performance could then be assessed in a case study, where positive effects of the Design-ACH on development could be identified.

Summing up, the three steps presented in Sect. 10.1 for developing designer-centred methods, assessing ways of designer thinking, design method synthesis and design method validation, could be successfully applied to develop a design method for overcoming confirmation bias in product development. Ways of designer thinking were assessed in a field study, and the occurring problems were attributed to the confirmation bias. This was then verified by operationalising the bias and quantifying its effects in a laboratory study. By using best-practice approaches identified in the initial field study and adapting the ACH method originating from another discipline, the Design-ACH could be developed. The design method was then validated in a laboratory study, which resulted in a quantified reduction of the confirmation bias and an increased success in failure solving. This success could be qualitatively reproduced in a case study concerned with solving a company's real problems in a workshop.

The research presented in Sect. 10.2 illustrates that the confirmation bias, as one example of cognitive biases, has a considerable effect on how data is interpreted and used for decisions in engineering design. Because product development is becoming more and more data-driven, cognitive biases are of high relevance for design engineers, as they negatively influence data interpretation. Future design methods should therefore enable design engineers to objectively interpret the growing amount of data in product development.

3 Implications for Future Method Development

In Sect. 10.1 we have described an approach on how designer-centred methods should be developed. Currently, the following points are often given too little attention:

  • The designer as a human being, with his or her abilities and limitations, is not considered enough in the development of design methods. Designer thinking as a means of putting the designer in the focus is seldom used.

  • Many investigations in design research are limited to qualitative statements, where the requirements of objectivity, reliability and validity are not fulfilled.

  • Design methods are often evaluated either only in case studies, without the possibility of replicating results, or only in laboratory studies, without providing statements about applicability in practice. To combine the advantages of both, relevant aspects of design practice need to be included in the laboratory. One particularly relevant aspect relates to the development process: in real product development, design engineers are able to test the system under development for its functionality. This leads to iterations, which are currently seldom represented in laboratory studies.

In Sect. 10.2 we have shown the development of a designer-centred method that takes the previously mentioned points into account. By means of a suitable operationalisation, it was possible to quantify the occurrence of the confirmation bias and its effects, and the use of eye tracking made further quantification through direct measurement possible. Investigations in both practice and the laboratory enabled the requirements of replicability and of practical significance to be taken into account. The following describes the derived implications for future research on design methods (additionally summarised in Fig. 10.4).

Fig. 10.4: V-shaped process of designer-centred method development between field and lab, accompanied by the vision of automated quantitative measurement to support development

Understanding Designer Thinking

Up to now, the capabilities and limitations of human thinking have rarely been taken into account in the development of design methods. However, this is a necessity for developing suitable methods. For this purpose, insights from the social sciences, and especially psychology, must increasingly be taken into account, and closer cooperation with these disciplines must be established. Consequently, research methods should be used that make it possible to understand and quantify aspects of designer thinking. Since it is often unknown which influences affect human thinking and action, it is mandatory to investigate design methods both in an environment that is as uninfluenced as possible in the field and in a replicable way under unchanging framework conditions in the laboratory.

Enabling Automated Measurement

Current studies in design research often use qualitative methods that rely on interpretation by a coder. For results to be as objective as possible, the coding must additionally be checked by further coders. Current quantitative methods provide additional insights; however, as shown in Sect. 10.2, the data must be combined with qualitative data to allow interpretation. This also requires high effort, which often limits the number of participants considered. The development of research methods that enable automated measurement could increase the objectivity of study results. Due to the reduced evaluation effort, more participants could be examined with the same resources.

To enable data-driven design method development, automated measurement methods are necessary. For this, indicators must be identified that can be captured via existing data from design (project plans, CAD data) or bio signals (heartbeat, eye movement, brain waves, muscular tension). Indicators must be developed through a combination of established qualitative methods and automated measurement methods, since meaningful relationships can only be identified through a prior qualitative understanding. This process will initially involve high effort. In the long term, however, such automated approaches will improve the quality of design research and make previously unfeasible investigations possible. With the use of automatically measured data, investigations in practice should also become possible.
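As a minimal sketch of how such an indicator could be anchored in prior qualitative understanding, the following compares hypothetical human codings of problem episodes with an automated bio-signal indicator; the data and the choice of Cohen's kappa as agreement measure are illustrative.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-episode labels: human coding from qualitative analysis
# (1 = problem episode) and an automated indicator, e.g. thresholded heart rate.
human_codes = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])
automated = np.array([1, 0, 1, 1, 1, 0, 1, 0, 0, 0])

# An automated indicator is only useful if it reproduces the qualitatively
# grounded judgement; chance-corrected agreement quantifies this.
kappa = cohen_kappa_score(human_codes, automated)
print(f"Cohen's kappa = {kappa:.2f}")
```

Only once such agreement is established can the automated indicator replace manual coding at scale.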

Representing Iterations in the Laboratory with Engineering Simulators

In order to enable design engineers to test the system under development, laboratory studies need to provide the possibility of actual manufacturing and testing. Through rapid prototyping technologies such as laser cutters or 3D printers, it is possible to include this highly relevant aspect of engineering design without requiring too much effort. In this way, development processes from ideation through detailed design to commissioning of the product can be simulated in a short timespan (Matthiesen et al. 2016). Study designs that include iterations through the functional testability of products in a laboratory context are called Engineering Simulators. By including iterations in laboratory studies, it is possible to simulate the most relevant aspects of practice.

The use of Engineering Simulators bears additional advantages resulting from the integration of iterations. By being able to actually test the designed system, study participants can reflect on the causes of functionality, or lack thereof, in their design. On the one hand, this motivates participants to design a functioning product; on the other hand, it represents designer behaviour and thinking during design processes in practice more realistically. Additionally, researchers can use the manufactured systems for performance evaluation: the functionality of the technical system no longer has to be evaluated by experts but can be measured by indicators of functionality predefined in the design task. Engineering Simulators can range from short tasks on small systems, enabling multiple iterations within several hours, as described in Matthiesen et al. (2016), to development processes spanning several days for more intricate systems (Omidvarkarjan et al. 2020).

Future study designs should aim at integration of relevant aspects such as iterations in order to bring practice to the reproducible laboratory context. Engineering Simulators are a fitting way to include those aspects.

The presented approach bears multiple potentials for future research. It focusses on the identification of problem causes and best practices in designer thinking; in this way, design method development becomes more targeted at the designer. When properly operationalised, designer thinking also fosters the assessment of method impact. The presented approach supports operationalisations that aim at quantifying effects, which fosters the comparability of results in design method validation. Consequently, methods validated in this way are more likely to be taken up in industry on a wider scale.