
1 Introduction

Axiomatic Design (AD) was introduced by Suh [1]. It has been used in a wide range of applications and in different ways. Here, the application is the parametrization of a design, i.e., the selection of design parameters to produce an efficient parametrization where the designer can vary the design parameters to efficiently navigate the functional space, e.g., for design optimization.

A central concept in Axiomatic Design is the design matrix, which represents the relationship between the design parameters \( \varvec{x}_{\varvec{D}} \) and the functional requirements \( \varvec{f}_{\varvec{R}} \). The relationship can be written as

$$ f_{R} = A \times x_{D} , $$

where A is the system matrix and \( \varvec{x}_{\varvec{D}} \) is a vector that is mapped onto \( \varvec{f}_{\varvec{R}} \) through A. This of course assumes linear relationships, although linearization can be used for nonlinear systems to illustrate the connectivity and the local behavior. The relationship between two input variables and two output variables can be written as

$$ \left( \begin{array}{c} f_{R1} \\ f_{R2} \end{array} \right) = \left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right)\left( \begin{array}{c} x_{D1} \\ x_{D2} \end{array} \right) $$

Axiomatic Design is based on two axioms. The first is the independence axiom, which states that a good design is uncoupled and hence only has diagonal elements in A. The second is the information axiom, which states that the design with the lowest information content is to be preferred. In Axiomatic Design the information content of a design is expressed as

$$ I = \log_{2} \frac{1}{{p_{s} }} $$

where \( p_{s} \) is the probability of success, i.e., the probability that a random design in the system range (design space) produces a solution that fulfills all the functional requirements. In [2], and subsequently in [3], this is calculated for a general system matrix A. In [4] it is discussed in more detail with respect to a system with two functional requirements. In [5] Axiomatic Design and the information axiom, and the consequences of coupled design, are connected to TRIZ [6].
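
As an illustrative example (the value of \( p_{s} \) here is assumed, not taken from any particular design), a design where only a quarter of the system range fulfills all functional requirements, i.e., \( p_{s} = 0.25 \), has an information content of

$$ I = \log_{2} \frac{1}{0.25} = 2\,{\text{bits}} $$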

At this stage we also introduce the functional characteristics y. These are the actual responses to the design parameters, while the functional requirements are the desired values of those functional characteristics. We also drop the index D on the design parameters for convenience. The design relation can then be written as

$$ \varvec{y} = A \times \varvec{x} $$

2 Design Space and Functional Space

The design space is here defined as the space within which the design parameters can vary, with the design parameters as axes. There is a corresponding functional space with the functional requirements as axes. There is also the requirement range, which defines the constraints in the functional space. In many cases there are no explicit bounds on the design parameters, although when using design optimization it is often desirable to have such bounds. Nevertheless, even if there are no constraints on the design parameters, the notion of design space is still useful, e.g., when comparing different designs or parametrizations.

According to the first axiom the best design is one where the functional requirements and the design parameters are uncoupled, that is, the matrix A is diagonal. This can be obtained in two ways. One is by having a system architecture where this comes out naturally. The other lies in the choice of design parameters. A very important activity in design is therefore the parametrization of the design. With a good parametrization a higher degree of decoupling can be achieved, see [7].

In relation to [2,3,4], where a given system range is compared to a given design range, we here look at the size of the design space required to encompass all of the requirement range (or design range), i.e., the region in the functional space that satisfies all of the functional requirements.

3 Sensitivity Analysis

Sensitivity analysis is an excellent tool for studying the relationships between system parameters and functional characteristics quantitatively. In this way, numerical values of the design matrix A can be obtained [8].

If there is a model of the system, sensitivity analysis can quickly give an overview of the couplings in the design and of which parts of the design are important for the desired behavior. The sensitivities can be obtained either through analytical or numerical differentiation. Sensitivity analysis can also be used to study the influence of disturbances and uncertainties in parameters and constants. Consider the system

$$ \varvec{y} = \varvec{f}\left( \varvec{x} \right), $$

where f is, in general, a nonlinear function. Using linearization around a nominal point, this can be written as

$$ \varvec{y}_{0} + \Delta \varvec{y} = \varvec{f}\left( {\varvec{x}_{0} } \right) + \varvec{J}\Delta \varvec{x}, $$

where J is the Jacobian, whose elements are defined as

$$ k_{ij} = \frac{{\partial y_{i} }}{{\partial x_{j} }} $$

This is hence an analytical representation of the design matrix A. If the system is complex and the sensitivity matrix large, it may be difficult to get an overview of the system, since the different parameters may have values of different orders of magnitude. The functional characteristics are normally also of different orders of magnitude. In order to make it easier to get an overview of the sensitivities, some kind of normalized, dimensionless sensitivities are needed. The first approach is to normalize the sensitivities with respect to the nominal values and employ the following definition:

$$ k_{ij}^{0} = \frac{{x_{j} }}{{y_{i} }}\frac{{\partial y_{i} }}{{\partial x_{j} }} $$

In this way a nondimensional value is obtained that indicates by how many percent a certain functional characteristic changes when a system parameter is changed by one percent. This makes it much easier to assess the relative importance of the different system parameters. The normalized version of the system matrix A is hereafter denoted A0.
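
As a sketch of how A0 can be obtained numerically when only a simulation model is available, the following Python fragment uses central finite differences around a nominal point. The function f, the nominal point and the step size are placeholders (assumptions), not part of the method described above.

```python
import numpy as np

def normalized_sensitivities(f, x0, rel_step=1e-6):
    """Normalized sensitivity matrix A0 around the nominal point x0.

    f  : callable mapping a design-parameter vector x to the vector of
         functional characteristics y(x).
    x0 : nominal design-parameter vector (entries assumed nonzero).
    """
    x0 = np.asarray(x0, dtype=float)
    y0 = np.asarray(f(x0), dtype=float)
    A0 = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        h = rel_step * x0[j]                     # step scaled to the nominal value
        xp, xm = x0.copy(), x0.copy()
        xp[j] += h
        xm[j] -= h
        dy_dxj = (np.asarray(f(xp), dtype=float)
                  - np.asarray(f(xm), dtype=float)) / (2.0 * h)  # central difference
        A0[:, j] = (x0[j] / y0) * dy_dxj         # k0_ij = (x_j / y_i) * dy_i/dx_j
    return A0
```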

Example: Electric Motorcycle

As a very simple example, an electric vehicle is used. It has the functional requirements range (at a constant speed of 70 km/h) and acceleration time (0–70 km/h), and the design parameters battery mass, \( m_{b} \), and motor power, \( P_{m} \).

Under some assumptions (only air resistance and constant speed), the range can be calculated as

$$ R = \frac{{2k_{b} m_{b} \eta }}{{C_{d} A_{0} \rho v^{2} }} $$

Here \( k_{b} \) is the battery energy density, \( m_{b} \) is the mass of the battery, \( \eta \) is the combined efficiency of battery and motor, \( C_{d} \) is the aerodynamic drag coefficient, \( A_{0} \) is the frontal area, \( \rho \) is the air density, and v is the vehicle speed. The acceleration time can be calculated as (assuming no air and rolling resistance, and constant power independent of speed)

$$ t_{a} = \frac{{mv^{2} }}{{2P\eta_{a} }}, $$

where the total mass is \( m = m_{0} + m_{b} \). The design relation is

$$ \left( {\begin{array}{*{20}l} R \hfill \\ {t_{a} } \hfill \\ \end{array} } \right) = K \times \left( {\begin{array}{*{20}l} {m_{b} } \hfill \\ {P_{m} } \hfill \\ \end{array} } \right) $$

The normalized sensitivity matrix K0 can be calculated as

$$ K^{0} = \left( \begin{array}{cc} \frac{m_{b}}{R}\frac{\partial R}{\partial m_{b}} & \frac{P_{m}}{R}\frac{\partial R}{\partial P_{m}} \\ \frac{m_{b}}{t_{a}}\frac{\partial t_{a}}{\partial m_{b}} & \frac{P_{m}}{t_{a}}\frac{\partial t_{a}}{\partial P_{m}} \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ \sigma_{m} & -1 \end{array} \right), $$

where

$$ \sigma_{m} = \frac{{m_{b} }}{{m_{0} + m_{b} }} $$

Most of the elements turn out to be trivial. It can be seen that increasing the battery mass has a directly proportional effect on the range through element \( K^{0}_{11} \). Furthermore, the acceleration time is inversely proportional to the motor power. If the battery mass \( m_{b} \) is small compared to the mass of the rest of the vehicle, \( m_{0} \), the first element on the second row also becomes small, and the system is almost decoupled.
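
The structure of \( K^{0} \) can be checked numerically. The following sketch implements the two expressions for R and \( t_{a} \) and differentiates them with central finite differences. The numerical parameter values (battery energy density, masses, power, drag data) are assumed purely for illustration and are not taken from this paper, and the acceleration efficiency is taken equal to \( \eta \) for simplicity.

```python
import numpy as np

# Assumed (illustrative) vehicle data -- not values from the paper.
k_b = 150.0 * 3600.0   # battery energy density [J/kg] (150 Wh/kg)
eta = 0.85             # combined battery/motor efficiency [-]
C_d = 0.6              # aerodynamic drag coefficient [-]
A_f = 0.6              # frontal area [m^2]
rho = 1.2              # air density [kg/m^3]
v   = 70.0 / 3.6       # speed [m/s]
m_0 = 120.0            # vehicle mass excluding the battery [kg]

def characteristics(x):
    """x = [m_b, P_m] -> y = [R, t_a] for the simplified model above."""
    m_b, P_m = x
    R   = 2.0 * k_b * m_b * eta / (C_d * A_f * rho * v**2)  # range [m]
    t_a = (m_0 + m_b) * v**2 / (2.0 * P_m * eta)            # acceleration time [s]
    return np.array([R, t_a])

def normalized_sensitivities(f, x0, rel_step=1e-6):
    """Central-difference normalized sensitivity matrix around x0."""
    x0, y0 = np.asarray(x0, float), np.asarray(f(x0), float)
    K0 = np.zeros((y0.size, x0.size))
    for j in range(x0.size):
        h = rel_step * x0[j]
        xp, xm = x0.copy(), x0.copy()
        xp[j] += h
        xm[j] -= h
        K0[:, j] = (x0[j] / y0) * (f(xp) - f(xm)) / (2.0 * h)
    return K0

x0 = np.array([m_0 / 4.0, 8000.0])   # m_b = m_0/4, P_m = 8 kW (assumed)
print(np.round(normalized_sensitivities(characteristics, x0), 3))
# [[ 1.   0. ]
#  [ 0.2 -1. ]]   i.e. sigma_m = m_b / (m_0 + m_b) = 0.2
```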

4 Design Information

The second axiom in Axiomatic Design concerns the minimization of information. There is a definition of design information in Suh [9], but to get a better understanding of design information we turn to information theory as introduced by Shannon [10]. This provides a tool for quantitatively describing information content in general. For the case of continuous variables, it can be written

$$ H_{c} = - \mathop \smallint \limits_{ - \infty }^{\infty } p\left( x \right){ \log }_{2} \left( {p\left( x \right)} \right){\text{d}}x $$

This gives a measure of the average information content of a variable x. Here p(x) is the probability density function. One problem with this expression is that it does not make sense unless x is dimensionless, since the probability density function has the unit of the inverse of x.

To resolve this, the probability density function p(x) needs to be related to another distribution m(x). The result is called the Kullback–Leibler divergence [11] from the distribution m(x). This is the relative entropy, and it is defined as

$$ H_{\text{rel}} = \mathop \smallint \limits_{ - \infty }^{\infty } p\left( x \right){ \log }_{2} \left( {\frac{p\left( x \right)}{m\left( x \right)}} \right){\text{d}}x $$

This is the difference in entropy between knowing that a random variable is distributed according to m(x) and knowing that it is distributed according to p(x). Furthermore, it represents a measure of information in bits. It can also be generalized to any dimensionality:

$$ I_{x} = H_{\text{rel}} = \mathop \smallint \limits_{ - \infty }^{\infty } \ldots \mathop \smallint \limits_{ - \infty }^{\infty } p\left( {x_{1} \ldots x_{n} } \right){ \log }_{2} \left( {\frac{{p\left( {x_{1} \ldots x_{n} } \right)}}{{m\left( {x_{1} \ldots x_{n} } \right)}}} \right){\text{d}}x_{1} \ldots {\text{d}}x_{n} $$

A rectangular distribution m(x) on the bounded interval \( x \in \left[ {x_{\hbox{min} } ,x_{\hbox{max} } } \right] \), with width \( x_{R} = x_{\hbox{max} } - x_{\hbox{min} } \), means that the design space is a space of equal possibilities, where no particular region can be considered more likely than another a priori. For rectangular probability distributions this can simply be written as

$$ I_{x} = \log_{2} \frac{{S_{1} }}{{S_{2} }}, $$

where \( S_{1} \) could be the design space and \( S_{2} \), e.g., the region of the design space that fulfills the requirement range. Hence, the amount of information needed to define a design relative to a design space can be calculated. A related quantity concerns the part of the design range \( S_{x} \) that falls outside the constraint (system) range \( S_{c} \), here called \( S_{w} \). The corresponding information is

$$ I_{w} = { \log }_{2} \frac{{S_{x} }}{{S_{c} }} $$

In Axiomatic Design this is seen as a measure of robustness, since a design with zero \( I_{w} \) has all of the design range within the constraint (system) range. An alternative use is when a flexible design is made, e.g., with a parametrization that makes it easy to adapt the design to different functional requirements.
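
As a small worked example with assumed numbers (not taken from the text above): if the design range is 1.5 times larger than the part of it that lies within the constraint range, then

$$ I_{w} = \log_{2} \frac{S_{x}}{S_{c}} = \log_{2} 1.5 \approx 0.58\,{\text{bit}} $$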

5 Functional Correlation

A measure of dependencies is the correlation of the functional characteristics, as discussed in [8]. It is limited to the interval [−1, 1], and the correlation matrix is symmetric, so it carries no information regarding the dominant direction of dependency. The correlation is a measure of the angle (cosine) between two row vectors in the sensitivity matrix. If the correlation is one, they are completely aligned; if it is zero, they are orthogonal; and if it is minus one, they point in opposite directions. The elements in the correlation matrix are calculated as (n being the number of columns in the design matrix, i.e., the number of design parameters)

$$ c_{ik} = \frac{{\frac{1}{n}\mathop \sum \nolimits_{j = 1}^{n} k_{ij}^{0} k_{kj}^{0} }}{{s_{i} s_{k} }} $$

Here the standard deviations in the sensitivities are

$$ s_{i} = \sqrt {\frac{1}{n}\mathop \sum \limits_{j = 1}^{n} \left( {k_{ij}^{0} } \right)^{2} } $$

With \( m_{b} = m_{0} /4 \) the correlation matrix for the electric motorcycle becomes

$$ {\mathbf{C}} = \left( {\begin{array}{*{20}l} 1 \hfill & {0.196} \hfill \\ {0.196} \hfill & 1 \hfill \\ \end{array} } \right) $$

This means that there is some correlation between range and acceleration. Note that the diagonal elements in the correlation matrix are always one, since they correspond to the correlation of a variable with itself. Furthermore, the matrix is always symmetric.
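
The value 0.196 can be reproduced directly from the definitions of \( c_{ik} \) and \( s_{i} \). The following Python sketch does this for the normalized sensitivity matrix of the motorcycle example with \( \sigma_{m} = 0.2 \) (i.e., \( m_{b} = m_{0}/4 \)).

```python
import numpy as np

def correlation_matrix(K0):
    """Correlation between the rows of a normalized sensitivity matrix."""
    n = K0.shape[1]                             # number of design parameters
    s = np.sqrt(np.sum(K0**2, axis=1) / n)      # s_i for each row
    return (K0 @ K0.T) / n / np.outer(s, s)     # c_ik = (1/n sum_j k0_ij k0_kj)/(s_i s_k)

sigma_m = 0.2                                   # m_b = m_0 / 4
K0 = np.array([[1.0,      0.0],
               [sigma_m, -1.0]])
print(np.round(correlation_matrix(K0), 3))
# [[1.    0.196]
#  [0.196 1.   ]]
```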

6 System Determinant and Design Controllability

The determinant, introduced here for the design matrix A0, has an interesting property. A geometric interpretation of the determinant is that it is the hyper volume in N-dimensional space (or the area in the case of two dimensions) spanned by the output when the input spans a unit hyper cube (or square), i.e., the set of N-dimensional vectors with each coordinate between 0 and 1. A small determinant means that the functional space resulting from a given design space is smaller, which means that less precision in the design parameters is needed. On the other hand, it also means that there are fewer possibilities to change the functional characteristics. In [10] the information channel is described in the same way as the design relation, that is

$$ y = A \times x $$

The total information in y can then according to [10] be calculated as

$$ H_{y} = - \log_{2} \det A + H_{x} $$

Assuming a rectangular probability distribution and normalized design variables as inputs, this can be written as

$$ I_{y} = \log_{2} \left( {\det A_{0} } \right) + I_{x} $$

or

$$ I_{y} = I_{A} + I_{x} , $$

where \( I_{A} = \log_{2} \det A_{0} \) is the information added or subtracted by the nature of the system matrix.

This indicates that the value of the determinant of \( \varvec{A}_{0} \) should be of interest also in the context of design. The value of the determinant represents an area in the case of two variables, and a volume of higher dimension in other cases. To have a high degree of controllability, it is desirable to have a high value of \( I_{A} \).

A decoupled design is also the design that maximizes the size of the functional space. There are, however, coupled designs that can have the same size. This can, e.g., be realized if the coordinate system of the design parameters is rotated 45°. Consider the following system matrix

$$ \varvec{A}_{0} = \left( {\begin{array}{*{20}l} 1 \hfill & 0 \hfill \\ 0 \hfill & 1 \hfill \\ \end{array} } \right) $$

If each element of the input vector is varied between 0 and 1 to span a design space, the left polygon in Fig. 3.1 is obtained in the output variables. The determinant is \( \det \varvec{A}_{0} = 1 \). If the system matrix is rotated 45° (π/4), it becomes

$$ \varvec{A}_{0} = \left( {\begin{array}{*{20}l} {1/\surd 2} \hfill & { - 1/\surd 2} \hfill \\ {1/\surd 2 } \hfill & {1/\surd 2} \hfill \\ \end{array} } \right) $$
Fig. 3.1 Left: Projection in functional space of the design space of an uncoupled design. Right: Projection in functional space of the design space of a coupled design with low correlation

This is a fully coupled system. However, the determinant is still \( \det \varvec{A}_{0} = 1 \), so the size of the projection of the design space in the functional space is the same as for the original uncoupled system, see the right part of Fig. 3.1.
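
A minimal numerical check of this invariance (a sketch, using the two matrices above):

```python
import numpy as np

A_uncoupled = np.eye(2)                                   # the uncoupled design matrix
theta = np.pi / 4                                         # 45 degree rotation
A_rotated = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])   # the fully coupled matrix above

# Both determinants equal 1 (up to rounding), so both designs project the unit
# design space onto a functional-space region of the same size.
print(np.linalg.det(A_uncoupled), np.linalg.det(A_rotated))
```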

The correlation matrix is also invariant, and is for both cases

$$ \varvec{C} = \left( {\begin{array}{*{20}l} 1 \hfill & 0 \hfill \\ 0 \hfill & 1 \hfill \\ \end{array} } \right) $$

Changing the sign of the element \( \varvec{A}_{0,12} \) means that the determinant becomes zero. A quick inspection of the system matrix shows a system that appears just as strongly coupled as before. This is, however, deceptive without examining the determinant, since the properties of the two systems are entirely different.

In Fig. 3.2 a value of \( \epsilon = 0.1 \) is subtracted from the off-diagonal elements of \( \varvec{A}_{0} \) (after the sign change) to create a small but nonzero determinant, to show the effect.

$$ \varvec{A}_{0} = \left( \begin{array}{cc} 1/\sqrt{2} & 1/\sqrt{2} - \epsilon \\ 1/\sqrt{2} - \epsilon & 1/\sqrt{2} \end{array} \right) $$
Fig. 3.2 Left: Projection in functional space of a design space of unit dimensions for a coupled design with high correlation. Right: The design space is increased to include the whole requirement range (of unit dimensions)

The determinant for this case is small, \( \det \varvec{A}_{0} = 0.131 \). This clearly shows that the nature of the coupling is as important as whether the design is coupled or not, and the determinant is a property that is invariant under rotation of the coordinate system of the design parameters. The correlation matrix becomes

$$ \varvec{C} = \left( {\begin{array}{*{20}l} 1 \hfill & { 0.988} \hfill \\ {0.988} \hfill & 1 \hfill \\ \end{array} } \right) $$

which is an indication of a strong coupling. The size of the projection in the functional space (gray polygon to the left in Fig. 3.2) is

$$ S_{Fx} = { \det } A S_{x} $$

Consider, e.g., a parametrized model that should be capable of reaching all points of the requirement range. This would, e.g., be desirable if an optimization is to be performed to search for the best point in the functional space.

For this case a very large design space (gray area) is needed to reach all parts of the requirement range. In this example, the design space has to be increased until the requirement range is totally enclosed by the projection of the design space. Here it has to be increased by a factor \( 1/\epsilon \) in each dimension. This means that the two-dimensional design space has to be increased to \( S_{x}^{\prime } = S_{x} /\epsilon^{2} \). The ratio between the projection of the adjusted design space and the requirement range now becomes

$$ \frac{S_{Fx}^{\prime }}{S_{FR}} = \frac{\det A}{\epsilon^{2}} $$

which means that a lot of the search might be wasted in areas outside of the range of the functional requirements. With \( \det A = 0.131 \) and \( \epsilon = 0.1 \), the ratio of the areas is 13.1. This means that the wasted information entropy is

$$ I_{w} = \log_{2} \frac{S_{Fx}^{\prime } - S_{FR}}{S_{FR}} = \log_{2} \left( \frac{\det A}{\epsilon^{2}} - 1 \right) = \log_{2} 12.14 = 3.60\,{\text{bit}} $$

This can be seen as the mismatch between requirement space and design space.
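
The numbers in this section can be reproduced with a short script. The sketch below builds the perturbed matrix (off-diagonal elements \( 1/\sqrt{2} - \epsilon \) after the sign change) and recomputes the determinant, the correlation, the area ratio and \( I_{w} \).

```python
import numpy as np

eps = 0.1
a = 1.0 / np.sqrt(2.0)
A0 = np.array([[a, a - eps],
               [a - eps, a]])

det_A = np.linalg.det(A0)                      # ~0.131

# Correlation of the rows of A0 (same definition as in Sect. 5).
n = A0.shape[1]
s = np.sqrt(np.sum(A0**2, axis=1) / n)
C = (A0 @ A0.T) / n / np.outer(s, s)           # off-diagonal element ~0.988

# Design space enlarged by 1/eps per dimension to cover the unit requirement range.
area_ratio = det_A / eps**2                    # S'_Fx / S_FR ~ 13.1
I_w = np.log2(area_ratio - 1.0)                # wasted information ~3.60 bit

print(round(det_A, 3), round(C[0, 1], 3), round(area_ratio, 2), round(I_w, 2))
# 0.131 0.988 13.14 3.6
```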

7 Discussion

A foundation for the argument for decoupling is that the functional characteristics are uncorrelated. However, in design there is certainly a great deal of correlation between functional requirements. For example, in a product family there might be several product variants of different sizes, each with functional characteristics that are more or less correlated to the size. For instance, transport aircraft that are designed for a high passenger capacity also tend to be designed for a long range, indicating a correlation between these requirements.

An analytical approach to produce a parametrization is to establish a set of sample designs that spans the important parts of the design space and analyze them using principal component analysis, which is best performed using singular value decomposition. This was demonstrated in [7].

8 Conclusions

In this paper, the functional correlation matrix and the determinant of the design matrix have been shown to provide valuable insights into the coupling of a system. In an uncoupled system the correlation matrix only has zero off-diagonal elements. However, there are also coupled systems that have only zero off-diagonal elements but that could be made uncoupled by rotating the coordinate system of the design parameters. Furthermore, it is shown using information theory that there is a strong relationship between the two axioms in axiomatic design. That is, an uncoupled system will have a low amount of wasted design space and require less design information than coupled systems, where the shape of the requirement range does not fit the projection of the design space.