1. Introduction

"We just churn and chase our tails until someone says that they won't be able to make the launch date." (Anonymous product development manager at an automobile manufacturer). The difficulty of accurately measuring individual activity progress within the context of the overall program goals is well understood by product development (PD) managers. The above quote is taken from a study of PD management practices at a large automotive company (Mar 1999). Progress oscillates between being on schedule (or ahead of schedule) and falling behind. In many instances, development tasks are repeated and no one knows why. This is a universal phenomenon in PD settings. For instance, in the software development realm, Cusumano and Selby (1995) report that progress is measured by the number of bugs that testers report to developers during the development process. They show reported bug counts oscillating repeatedly between high and low levels over the course of development. Similar oscillatory behavior in PD processes has been observed in aerospace (Browning et al. 2002), automotive (McDaniel 1996; Mar 1999), electronics (Wheelwright and Clark 1992), and information system development (Joglekar 2001) settings.

There is ample motivation for studying the churn phenomenon. The oscillatory nature of PD progress makes it hard to measure actual development progress and ultimately difficult to judge whether the project is on schedule or slipping. Other unfortunate consequences of churn may include significant increases in development time, organizational memory lapses regarding PD problem-solving know-how, and deteriorated morale amongst developers. There are few managerial guidelines available for dealing with churn. Typically, a lack of understanding of the underlying causes of churn leads to myopic resource allocation decisions.

In this paper, we take an information-processing view of PD by characterizing the development process as a sequence of problem-solving activities (Clark and Fujimoto 1991). Design churn is defined as a scenario where the total number of problems being solved (or progress being made) does not reduce (increase) monotonically as the project evolves over time.

There are several possible explanations for churn and this paper investigates one of them: the information dependencies among activities; that is, the structure of the development process. The information processing view postulates design decomposition to be a nested series of generation and testing activities (Simon 1996). If testing occurs simultaneously with the generation activities, then the process will not churn.[Footnote 1] In reality, generation-testing cycles have built-in delays. This paper develops a generation-testing model that can account for the integration of several generation groups in the presence of such delays.[Footnote 2] The structure of the development process inherently results in some of the information related to the design tasks being hidden, at times, from other developers and managers.[Footnote 3] Our premise is that in many development scenarios, design churn becomes an unintended consequence of information hiding. This is consistent with the ill-structured nature of design problems (Braha and Maimon 1998).[Footnote 4]

Imperfect evaluation in the test activity may also cause churn. For instance, some systems exhibit nonmonotonic change in either the variance or the expected value (and sometimes both) of design parameters due to uncertainty in performance evaluation (Browning et al. 2002). In order to avoid the confounding effects of variability (as it will only exacerbate churn), we deal only with the expected values and exclude performance variation as a plausible source of churn. Other explanations for churn are also possible. Exogenous changes (e.g., a change in customer requirements) to design objectives also lead to churn (Mar 1999). Again, such changes will only confound the analysis of our basic premise and are excluded from the model. Furthermore, oscillatory allocation of resources as in "firefighting" models (e.g., Repenning et al. 2001) and in behavioral choice models (e.g., Ford and Sterman 1999) exhibit churn-like behavior. These explanations are also excluded from our model based on similar rationales.

We explore our premise by developing a model for tracking the progress of PD processes while accounting for information hiding. Our model divides the development process into two interdependent task sets: local and system. The structure of this problem-solving process is set up such that local tasks, by definition, cannot hide information from system tasks about their individual progress and problems. On the other hand, system tasks may withhold information (gathered from local tasks) for limited periods of time before releasing it to local tasks. Between these releases, the information is hidden from local tasks, which work based on previously released information. Our model focuses on churning that is caused by these episodic releases of information.

Analysis of churn due to information hiding raises interesting questions about the convergence of the underlying system. We define PD convergence as a process in which the total number of problems being solved falls below an acceptable threshold. That is, the problem-solving activities result in a technically feasible design within a specified time frame.[Footnote 5] The main results obtained from the analysis of this model are summarized as follows:

  1. The existence of design churn is a fundamental characteristic of the decomposition and integration of design between local and system teams. More specifically, it is shown that design churn may be attributed to two modes. The first mode reflects the "fundamental churn" of the design process, and the second mode, termed "extrinsic churn", may be present depending on the relative rates of work completion and the rework induced between system and local tasks.

  2. It is possible for development processes to exhibit churning behavior under both converging and diverging scenarios. Conditions under which the total number of design problems associated with the system and local tasks converges to zero as the development time increases are presented.

The rest of the paper is organized as follows. In the next section, we discuss the literature relevant to information hiding and design churn. In Sect. 3, we propose a model for asynchronous information exchanges in a development environment. In Sect. 4.1, we introduce a PD model that involves a single local development team and single system integration team, and that accounts for information hiding. The basic model is formulated and analyzed in the rest of Sect. 4, where conditions for the convergence of the design process as well as "pure design churn" are presented. In Sect. 5, we present a generalized model that involves multiple local development teams that exchange information, under more general information release policies, with a corresponding system integration team. In Sect. 6, we apply the findings of the model to analyze the appearance design process for an automotive product development project. In Sect. 7, we discuss the managerial implications by identifying mitigation strategies to counter design churn in complex development processes.

2. Literature review

Information hiding is not a new concept in management science. For instance, in the supply chain management literature, information hiding has been justified on grounds of either asymmetrical or distorted availability of information (Lee et al. 1997). Similar ideas have been explored in a segment of PD literature. For instance, in software development projects, information hiding refers to the practice of keeping the implementation details of a software module hidden from other modules in the program (Sullivan et al. 2001). Typically, such practices are justified by the desire to reduce the coordination burden. However, formal models for capturing the effects of information hiding are rare in the PD literature.

There are several management science models that relate to one or more aspects of PD design churning. We group these models into the following categories: set-based concurrent engineering, resource allocation, and information dependency. These models are described next.

Sobek et al. (1999) describe a method to model convergence in Toyota's PD process, called set-based concurrent engineering (SBCE). With SBCE, Toyota's designers think about sets of design alternatives, rather than pursuing one alternative iteratively. As the development process progresses, they gradually narrow the set until they come to a final solution. SBCE literature does not focus on instances where design churn is possible; however, it is possible to extend these models to demonstrate and study churn (Mihm et al. 2001).

Information interdependency between development activities is an important feature of complex product development processes (Eppinger et al. 1994). Interdependency is manifested and measured by the amount of iteration and rework inherent in a PD process. The Design Structure Matrix (DSM) provides a simple mapping to capture interdependencies within a development process (Eppinger 2001; Browning 2001; Yassine and Braha 2003). As will be shown later in the paper, DSM models may exhibit divergent churn behavior; however, both Smith and Eppinger (1997) and Browning and Eppinger (2002) artificially suppress this behavior.

Resource allocation has been identified as a managerial lever for controlling the rate of PD process completion (Ahmadi and Wang 1999). Bohn (2000) and Repenning et al. (2001) define the "firefighting" syndrome as the preemption of important, but not urgent, development activities due to an imminent necessity or problem (referred to as a "fire") in another part of the same development project (or another development project). Moving resources from one part of the project to another (or from one project to another) may trigger a vicious cycle of firefighting. As a result, PD performance will oscillate. Conventional PD resource allocation studies (Adler et al. 1995; Loch and Terwiesch 1999) model waiting effects without focusing on design churn.

Our treatment of design churn builds on the PD literature of task concurrency, information dependency, and resource allocation. In particular, we use a DSM model as a building block and expand it by introducing asynchronous information delays among these constructs.[Footnote 6] In the next section, we establish the linkage between asynchronous interdependencies and the DSM.

3. Asynchronous information interdependency in design processes

In a large and complex PD project, different development groups work concurrently on multiple aspects of the process (Joglekar et al. 2001). Work progresses within each group through internal iteration. Coordination between groups takes place through system level testing or an integration group. Individual (i.e., local) groups provide status updates to the system group. This information is processed based on global considerations, which may result in rework for some of the individual groups. Figure 1 shows a schematic of the information exchanges within the PD process described above. On the left side of the figure, we show how a set of local development teams, working concurrently on a common project, interact through a system level team that coordinates and orchestrates their individual development efforts. The double-headed arrows indicate the two-way communication that takes place. The right side of the figure depicts the interaction process between a single local development team and the system team. The solid arrow indicates that local teams frequently provide the system team with updates regarding their progress, while the dotted arrow indicates that the system team provides intermittent feedback to the local team.

Fig. 1a, b. Local and system bifurcation of information

The frequency of system level feedback might depend on either exogenous considerations (such as suppliers' ability to provide updates) or endogenous considerations such as a system level test requiring a minimal turnaround time for a desired fidelity (Thomke and Bell 2001). If the synchronization is effectively instantaneous, for example during daily builds of Microsoft's development cycles, then we can think about the whole process in terms of a unified (combining local and system level) structure. Smith and Eppinger (1997) have developed a method using linear systems theory to analyze such models and identified controlling features of a unified iteration process. Unified iteration does not allow for information delays between local and system task execution. However, many PD processes are characterized by intermittent system feedback.[Footnote 7] Hence, we explore the management of multiple development teams coordinated through a system integration team and subject to periodic feedback (Joglekar and Yassine 2001).

The DSM shown in Fig. 2 captures the above development setup. The DSM is composed of blocks that represent several local development teams and a system integration team. The system team facilitates interactions between local teams as represented by the solid arrows in the figure. The local DSMs are internally updated at every time step (ΔT), and provide status information to the system DSM at periodic intervals \(t_{i,S}\). The system DSM provides updates to the local DSMs at periodic intervals \(T_1, T_2, \ldots, T_m\). The local and system update periods (i.e., the \(t_{i,S}\) and \(T_i\)) may or may not be synchronous; e.g., \(T_1 = k_1 \Delta T, \ldots, T_m = k_m \Delta T\), where the \(k_i\) are integer constants for all i. In addition, the dotted arrows demonstrate an instance where local teams are allowed to interact directly (i.e., without the facilitation of the system team), in which case the local DSM \(L_i\) provides status information to other local DSMs at periodic intervals \(t_{i,1}, t_{i,2}, \ldots, t_{i,m}\).

Fig. 2. DSM representation of a PD process showing local and system teams (\(L_i\) represents a local development team, S represents a system team, and the captions next to the arrows indicate the frequency of information update)

This type of DSM is not a pathological case. Numerous researchers have documented the existence of this local/system bifurcation (Sosa et al. 2000). Due to the time delays and asynchrony in information transfers between the system and the different local groups, the problem cannot be treated as a single DSM when studying the churning properties of the development process.

4. Asynchronous work transformation model: single local DSM case

4.1 Model formulation

First, we study a simplified version of the problem. We assume, without loss of generality, that there exists a single local DSM (containing the local tasks required for the development of a component and performed by a local development team) that exchanges information with a corresponding system DSM (containing the system tasks required for the development of the same component and performed by a system development team) at every time step. The system DSM releases information every T time steps.[Footnote 8] Consistent with Smith and Eppinger (1997), we specify that all the tasks associated with the local and system DSMs are internally updated at each iteration step. We label L(k) as the vector for the amount of "unfinished work" in the local tasks at time k. In the absence of any system feedback, the progress of L(k) is given by:

$$L(k) = \mathrm{W}^{\mathrm{L}} L(k - 1), \qquad k = 1, 2, \ldots$$
(1)

The amount of unfinished work can be measured by the time left to finalize a specific design, the number of engineering drawings requiring completion before the design is released, the number of engineering design studies required before design release, or the number of open issues that need to be addressed/resolved before design release, to name a few (McDaniel 1996). The appropriate choice depends on how the status of development progress is measured in the particular design environment. At this point, the choice of how to measure unfinished work is immaterial to building the theoretical foundations of the proposed model. However, this point will be revisited when we illustrate the application of the model in a specific development environment.

WL is the work transformation matrix that captures the fraction of rework created within a local group of tasks (Smith and Eppinger 1997). Equation 1 describes the work transformation during each iteration stage as follows. Individual local tasks finish a fraction of their work, given a constant completion rate specified by the diagonal of WL. However, this work creates some rework for other dependent local tasks. The off-diagonal elements of WL document such dependencies. The construction of WL is detailed in Appendix 1.
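To make Eq. 1 concrete, the local iteration can be simulated directly. Below is a minimal sketch with a hypothetical two-task WL; all rework fractions are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical 2-task local work transformation matrix (Eq. 1).
# Diagonal entries: fraction of a task's work still unfinished after one
# iteration step; off-diagonal entries: rework created for the other task.
W_L = np.array([[0.4, 0.2],
                [0.1, 0.5]])

L = np.array([1.0, 1.0])          # initial unfinished work in both tasks
history = [L.sum()]
for k in range(10):
    L = W_L @ L                   # L(k) = W^L L(k-1)
    history.append(L.sum())

# With the spectral radius of W^L below 1, total unfinished work decays
# monotonically: no churn in the absence of system feedback.
assert max(abs(np.linalg.eigvals(W_L))) < 1
assert all(a > b for a, b in zip(history, history[1:]))
```

Replacing WL with a matrix whose spectral radius exceeds 1 makes the same loop diverge, the behavior that unified-iteration models must suppress.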

We augment the state space for the above model by introducing two more vectors: S(k) and H(k). The vector S(k) represents the amount of unfinished work in all system tasks at time step k, and H(k) is a vector for the amount of finished system work at time step k that is ready to be transmitted to local tasks but remains hidden until it is released. We also define a matrix WS that corresponds to S(k) in a manner analogous to the relation between WL and L(k), that is

$$S(k) = \mathrm{W}^{\mathrm{S}} S(k - 1), \qquad k = 1, 2, \ldots$$
(2)

Combining both state equations (Eqs. 1 and 2) and incorporating both types of information exchanges (from local to system and vice versa), we obtain the state equation (Eq. 3). This equation assumes that the system transmits all the work withheld up until the last moment before data transmittal to local tasks.

$$ {\left[ {\begin{array}{*{20}c} {{L(k + 1)}} \\ {{S(k + 1)}} \\ {{H(k + 1)}} \\ \end{array} } \right]} = {\underbrace {{\left[ {\begin{array}{*{20}c} {{{\text{W}}^{{\text{L}}} }} & {0} & {0} \\ {{{\text{W}}^{{{\text{LS}}}} }} & {{{\text{W}}^{{\text{S}}} }} & {0} \\ {0} & {{{\text{W}}^{{{\text{SH}}}} }} & {I} \\ \end{array} } \right]}}_{{\mathbf{A}}^{{{\text{Hold}}}} }}{\left[ {\begin{array}{*{20}c} {{L(k)}} \\ {{S(k)}} \\ {{H(k)}} \\ \end{array} } \right]}(1 - \delta _{T} (k)) + {\text{ }}{\underbrace {{\left[ {\begin{array}{*{20}c} {{{\text{W}}^{{\text{L}}} }} & {{{\text{W}}^{{{\text{SL}}}} }} & {{\text{I}}} \\ {{{\text{W}}^{{{\text{LS}}}} }} & {{{\text{W}}^{{\text{S}}} }} & {0} \\ {0} & {{\text{0}}} & {0} \\ \end{array} } \right]}}_{{\mathbf{A}}^{{{\text{Release}}}} }}{\left[ {\begin{array}{*{20}c} {{L(k)}} \\ {{S(k)}} \\ {{H(k)}} \\ \end{array} } \right]}\delta _{T} (k) $$
(3)

In Eq. 3, \({\delta _{T} (k) = {\sum\limits_{j = 0}^\infty {\delta (k - jT)} }}\) is the periodic impulse train function, where \({\delta (k - n)}\) is the unit impulse (or unit sample) function defined as:

$$\delta (k - n) = \begin{cases} 1 & k = n \\ 0 & k \ne n \end{cases}$$
(4)

As can be seen from Eq. 3, the updated amount of design work for a local team depends on the team's previous amount of design work and the interaction with the system team; similar conclusions can be obtained regarding design tasks for the system team and hidden tasks. The matrix AHold is active at each iteration step except for every T periods when the system team releases its feedback to the local team and the matrix ARelease becomes active. WLS is a matrix that captures the rework fraction created by local tasks L(k) for the corresponding system tasks S(k). Similarly, when information is released by the system, the matrix WSL captures the rework fraction created directly by the system tasks S(k) for the local tasks L(k). WSH is a matrix that captures the rework created for the local tasks by the system tasks, and is placed in a hidden (or holding) state until it is time to be transmitted to local tasks. When no information is being released by the system to local tasks, the identity submatrix in AHold guarantees that finished system work is carried over to the next period. The identity submatrix in ARelease guarantees that finished system work is transmitted to local tasks, through H(k), every T time steps. Consequently, H(k) gets set to zero each T steps and is rebuilt in between. The construction of the work transformation matrices WL, WS, WLS, WSH, and WSL is dependent on the structure of the information exchanged within the development process. In Appendix 1, we specify (consistent with the case study presented in Sect. 6) the work transformation matrices based on the local and system DSMs \(\Omega ^{{\text{L}}} ,\Omega ^{{\text{S}}}_{{}} ,\) as well as the inter-component dependency matrices \(\Omega ^{{{\text{LS}}}} ,\Omega ^{{{\text{SL}}}} ,\) which represent the interaction between local and system teams.Footnote 9

Individual elements within the L, S, and H vectors refer to the same task. To illustrate the concept, consider the following two tasks: door trim design and garnish trim design related to the development of a car door. The state equations for this problem are shown in Eqs. 5 and 6 for the case when no information is being released by the system (e.g., the "body" integration team) to local tasks (e.g., the "door" design team), and for the case when information is released by the system, respectively.

In this example, \(L_1(k)\) and \(S_1(k)\) may designate the number of design problems or open issues associated with the door trim task, for the local design team and system integration team, respectively. \(H_1(k)\) refers to the number of door trim problems resolved by the system integration team awaiting release to the local design team. Any problem associated with the door trim design can reside in only one of these three states until it is fully resolved. Note that \(1 - w^{L}_{11}\) and \(1 - w^{L}_{22}\) are the fractions of \(L_1\) and \(L_2\), respectively, that can be completed in an autonomous manner in every time step. Furthermore, \(w^{L}_{12} L_{2}(k)\) and \(w^{L}_{21} L_{1}(k)\) are the amounts of rework that get created for tasks \(L_1\) and \(L_2\), respectively, as a consequence of this autonomous progress. Similar interpretations can be made for the system matrix (i.e., \(w^{S}_{ij}\)).

$${\left[ {\begin{array}{*{20}c} {{L_{1} (k + 1)}} \\ {{L_{2} (k + 1)}} \\ {{S_{1} (k + 1)}} \\ {{S_{2} (k + 1)}} \\ {{H_{1} (k + 1)}} \\ {{H_{2} (k + 1)}} \\ \end{array} } \right]} = {\left[ {\begin{array}{*{20}c} {{w^{L}_{{11}} }} & {{w^{L}_{{12}} }} & {0} & {0} & {0} & {0} \\ {{w^{L}_{{21}} }} & {{w^{L}_{{22}} }} & {0} & {0} & {0} & {0} \\ {{w^{{LS}}_{{11}} }} & {{w^{{LS}}_{{12}} }} & {{w^{S}_{{11}} }} & {{w^{S}_{{12}} }} & {0} & {0} \\ {{w^{{LS}}_{{21}} }} & {{w^{{LS}}_{{22}} }} & {{w^{S}_{{21}} }} & {{w^{S}_{{22}} }} & {0} & {0} \\ {0} & {0} & {{w^{{SH}}_{{11}} }} & {{w^{{SH}}_{{12}} }} & {1} & {0} \\ {0} & {0} & {{w^{{SH}}_{{21}} }} & {{w^{{SH}}_{{22}} }} & {0} & {1} \\ \end{array} } \right]}{\left[ {\begin{array}{*{20}c} {{L_{1} (k)}} \\ {{L_{2} (k)}} \\ {{S_{1} (k)}} \\ {{S_{2} (k)}} \\ {{H_{1} (k)}} \\ {{H_{2} (k)}} \\ \end{array} } \right]}$$
(5)
$${\left[ {\begin{array}{*{20}c} {{L_{1} (k + 1)}} \\ {{L_{2} (k + 1)}} \\ {{S_{1} (k + 1)}} \\ {{S_{2} (k + 1)}} \\ {{H_{1} (k + 1)}} \\ {{H_{2} (k + 1)}} \\ \end{array} } \right]} = {\left[ {\begin{array}{*{20}c} {{w^{L}_{{11}} }} & {{w^{L}_{{12}} }} & {{w^{{SL}}_{{11}} }} & {{w^{{SL}}_{{12}} }} & {1} & {0} \\ {{w^{L}_{{21}} }} & {{w^{L}_{{22}} }} & {{w^{{SL}}_{{21}} }} & {{w^{{SL}}_{{22}} }} & {0} & {1} \\ {{w^{{LS}}_{{11}} }} & {{w^{{LS}}_{{12}} }} & {{w^{S}_{{11}} }} & {{w^{S}_{{12}} }} & {0} & {0} \\ {{w^{{LS}}_{{21}} }} & {{w^{{LS}}_{{22}} }} & {{w^{S}_{{21}} }} & {{w^{S}_{{22}} }} & {0} & {0} \\ {0} & {0} & {0} & {0} & {0} & {0} \\ {0} & {0} & {0} & {0} & {0} & {0} \\ \end{array} } \right]}{\left[ {\begin{array}{*{20}c} {{L_{1} (k)}} \\ {{L_{2} (k)}} \\ {{S_{1} (k)}} \\ {{S_{2} (k)}} \\ {{H_{1} (k)}} \\ {{H_{2} (k)}} \\ \end{array} } \right]}$$
(6)
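The two-task hold/release dynamics of Eqs. 5 and 6 can be simulated directly. The sketch below uses hypothetical rework fractions (chosen for demonstration, not taken from the door trim case study) and shows the total unfinished local and system work jumping upward after release steps, i.e., design churn.

```python
import numpy as np

# Illustrative 2-task instance of Eqs. 5 and 6; all rework fractions are
# hypothetical numbers chosen for demonstration only.
W_L  = np.array([[0.3, 0.2], [0.1, 0.4]])   # local rework (w^L)
W_S  = np.array([[0.2, 0.1], [0.1, 0.2]])   # system rework (w^S)
W_LS = 0.2 * np.eye(2)                       # rework local -> system
W_SL = 0.3 * np.eye(2)                       # rework system -> local (on release)
W_SH = 0.3 * np.eye(2)                       # system rework placed on hold
I, Z = np.eye(2), np.zeros((2, 2))

A_hold = np.block([[W_L,  Z,    Z],          # Eq. 5: no release
                   [W_LS, W_S,  Z],
                   [Z,    W_SH, I]])
A_release = np.block([[W_L,  W_SL, I],       # Eq. 6: release step
                      [W_LS, W_S,  Z],
                      [Z,    Z,    Z]])

T = 5                                        # system releases every T steps
x = np.ones(6)                               # state [L1 L2 S1 S2 H1 H2]
total = [x[:4].sum()]                        # unfinished local + system work
for k in range(40):
    A = A_release if k % T == 0 else A_hold  # delta_T(k) = 1 at k = 0, T, 2T, ...
    x = A @ x
    total.append(x[:4].sum())

# Episodic releases make progress non-monotonic: the visible unfinished
# work jumps up right after a release step.
assert any(b > a for a, b in zip(total, total[1:]))
```

For these numbers the full-state total (including H) never increases; the apparent churn is confined to the visible L and S states, which is exactly the information-hiding effect the model isolates.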

4.2 Model analysis

In this section, we explore the fundamental characteristics of the model described in Eq. 3. All proofs are presented in Appendix 2.

First, we notice that Eq. 3 can be rewritten as follows:

$${x(k + 1) = A(k)x(k)}$$
(7)

where \(x(k) = {\left[ {\begin{array}{*{20}c} {L(k)} \\ {S(k)} \\ {H(k)} \\ \end{array} } \right]}\) and \(A(k) = {\left[ {\begin{array}{*{20}c} {{\text{W}}^{{\text{L}}} } & {\delta _{T} (k){\text{W}}^{{\text{SL}}} } & {\delta _{T} (k)I} \\ {{\text{W}}^{{\text{LS}}} } & {{\text{W}}^{{\text{S}}} } & {0} \\ {0} & {(1 - \delta _{T} (k)){\text{W}}^{{\text{SH}}} } & {(1 - \delta _{T} (k))I} \\ \end{array} } \right]}\)

Thus, the model described in Eq. 3 is a homogeneous linear difference system that is nonautonomous, or time-variant. Moreover, since the impulse train function \({\delta _{T} (k)}\) is periodic with period T (recall that the system DSM releases information every T time steps), we conclude that A(k+T)=A(k) for all k∈ℤ (where ℤ is the set of all positive integers). That is, the model described in Eq. 7 is a linear periodic system.

We now present some results, obtained using Floquet theory (Richards 1983), for the linear periodic system given in Eq. 7.[Footnote 10]

Definition 1

Matrix \({C = A(T - 1)A(T - 2) \cdots A(0)}\) is referred to as the monodromy matrix of Eq. 7.

In the following we assume that the monodromy matrix is diagonalizable.[Footnote 11] C is diagonalizable if and only if it has n linearly independent eigenvectors. A sufficient condition for C to be diagonalizable is that it has n distinct eigenvalues (Strang 1980). We cite the following result from Richards (1983) as Lemma 1, Theorem 1, and Corollary 1 to set up further analysis.

Lemma 1

Let C be a diagonalizable n×n matrix, and let T be any positive integer. Let us decompose C as \(C = S_{C} \Lambda _{C} S^{{ - 1}}_{C} ,\) where \({\Lambda _{C} }\) is a diagonal matrix of the eigenvalues of C, and \(S_C\) is the corresponding eigenvector matrix. Then, there exists some n×n matrix B such that \(B^T = C\). Moreover, \(B = S_{C} \Lambda _{B} S^{{ - 1}}_{C} ,\) where \(\Lambda _{B} = \sqrt[T]{{\Lambda _{C} }}.\)

The following result indicates that the analysis of the periodic system described in Eq. 7 is reduced to the study of a corresponding autonomous linear system.

Theorem 1

If y(k) is a solution of the autonomous linear system

$${y(k + 1) = By(k)}$$
(8)

then the general solution x(k) of the linear periodic system (Eq. 7) is given as follows:

$${x(k) = P(k)B^{k} g}$$
(9)

where P(k) is a nonsingular periodic matrix of period T, and \(g \in {\mathbf{R}}^{n}\) is a constant vector.[Footnote 12]

Corollary 1

The general solution x(k) of the linear periodic system (Eq. 7) is given by

$${x(k) = P(k)y(k)}$$
(10)

where y(k) is the general solution of the autonomous linear system (Eq. 8).

Corollary 1 has the following interesting interpretation for the information hiding problem in PD. We note that there are two sources of oscillation that govern the development of the total number of problems being solved as the project evolves over time. The first source is associated with the periodic matrix P(k) in Eq. 10, and reflects the "fundamental churn" of the process. This fundamental churn may be attributed to the intrinsic characteristic of information delays between local and system task execution. The second source of oscillation, termed "extrinsic churn", is associated with the properties of the linear autonomous system (Eq. 8) as discussed in Smith and Eppinger (1997). More specifically, positive real eigenvalues of B correspond to nonoscillatory behavior of the solution y(k). Negative and complex eigenvalues of B describe damped oscillations. The overall property of the linear periodic system (Eq. 7) is thus the combined effect of both sources of oscillation.

Corollary 1 allows the development of conditions under which the linear periodic system (Eq. 7) converges (i.e., as the time increases to infinity the total number of design problems associated with the system and local tasks converges to zero). We show in Sect. 4.3 that the eigenvalues and the eigenvectors of the matrix B determine conditions of convergence.
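The reduction from the periodic system to the autonomous one rests on the T-th root construction of Lemma 1, which can be checked numerically. A minimal sketch, using a hypothetical 2×2 monodromy matrix with distinct eigenvalues (so that it is guaranteed diagonalizable):

```python
import numpy as np

# Numerical check of Lemma 1: for a diagonalizable monodromy matrix C,
# B = S_C * Lambda_C^(1/T) * S_C^{-1} satisfies B^T = C. The matrix C
# below is hypothetical, chosen only so its eigenvalues are distinct.
C = np.array([[0.5, 0.2],
              [0.1, 0.4]])
T = 5

evals, S_C = np.linalg.eig(C)
assert len(set(np.round(evals, 12))) == len(evals)   # distinct eigenvalues

Lambda_B = np.diag(evals.astype(complex) ** (1.0 / T))
B = (S_C @ Lambda_B @ np.linalg.inv(S_C)).real       # root is real here
assert np.allclose(np.linalg.matrix_power(B, T), C)  # B^T = C
```

With negative or complex eigenvalues of C the root B is generally complex, which is the source of the damped oscillations ("extrinsic churn") discussed above.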

4.3 Conditions for stability

In this section, we present conditions under which the total number of design problems associated with the system and local tasks converges to zero as the time increases to infinity.

First, we note that the zero solution is an equilibrium point[Footnote 13] of Eq. 7. Next, we introduce the definitions of stability of the equilibrium point.

Definition 2

The equilibrium point x* is

  1. Stable if given \({\varepsilon > 0}\) there exists \({\delta = \delta (\varepsilon )}\) such that \({{\left\| {x_{0} - x^{*} } \right\|} < \delta }\) implies \({{\left\| {x(k) - x^{*} } \right\|} < \varepsilon }\) for all k≥0. x* is unstable if it is not stable.

  2. Globally attracting if \({\lim _{{k \to \infty }} x(k) = x^{*} }\) for any initial work vector \(x_0\).

  3. Asymptotically stable if it is stable and globally attracting.

Intuitively, the zero solution is stable if the total number of design problems associated with the system and local tasks remains bounded as the project evolves over time. Asymptotic stability requires the additional condition that the total number of design problems associated with the system and local tasks converges to the origin for any initial work vector.

When the PD process involves time delays and asynchrony in information transfer between the system and local groups, conditions for the convergence of the development process are of vital importance for PD management. Before we present stability conditions for the asynchronous work transformation model, we introduce the so-called Floquet exponents and Floquet multipliers of the linear periodic system (Eq. 7). The Floquet exponents are the eigenvalues \({\lambda }\) of B, while the corresponding eigenvalues \({\lambda ^{T} }\) of the monodromy matrix C are the Floquet multipliers. We have the following result:

Theorem 2

The zero solution of Eq. 7 is stable if and only if the Floquet exponents have magnitude less than or equal to 1, and asymptotically stable if and only if all the Floquet exponents have magnitude less than 1.

The following provides an additional result that explains the behavior of solutions of the asynchronous work transformation model:

Corollary 2

The zero solution of Eq. 7 is stable if the Floquet multipliers have magnitude less than or equal to 1 and asymptotically stable if all the Floquet multipliers have magnitude less than 1.

A direct consequence of Theorem 2 is that the Floquet exponents and their corresponding eigenvectors (i.e., eigenvectors of B) determine the rate and nature of convergence of the design process. Consistent with Smith and Eppinger (1997), we use the term design mode to refer to an eigenvalue of B along with its corresponding eigenvector.[Footnote 14] The magnitude of each eigenvalue of B determines the geometric rate of convergence of one of the design modes, while the corresponding eigenvector identifies the relative contribution of each of the various constituent tasks to the amount of work that jointly converges at the given geometric rate (Smith and Eppinger 1997). The eigenvector corresponding to the largest magnitude eigenvalue of B (the most slowly converging design mode) provides useful information regarding design tasks that require a significant amount of work. More specifically, the larger the magnitude of an element in that eigenvector, the more strongly the corresponding task contributes to the slowly converging design mode.
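These diagnostics can be computed directly from the monodromy matrix. The sketch below (again with hypothetical rework fractions, not the case-study data) forms C for the hold/release system, computes the Floquet multipliers and the magnitudes of the corresponding exponents, checks asymptotic stability, and inspects the eigenvector of the slowest design mode. Since B and C share eigenvectors under Lemma 1, the slowest mode can be read off C.

```python
import numpy as np

# Hold/release system of Sect. 4.1 with hypothetical rework fractions.
W_L  = np.array([[0.3, 0.2], [0.1, 0.4]])
W_S  = np.array([[0.2, 0.1], [0.1, 0.2]])
W_LS, W_SL, W_SH = 0.2 * np.eye(2), 0.3 * np.eye(2), 0.3 * np.eye(2)
I, Z = np.eye(2), np.zeros((2, 2))
A_hold = np.block([[W_L, Z, Z], [W_LS, W_S, Z], [Z, W_SH, I]])
A_release = np.block([[W_L, W_SL, I], [W_LS, W_S, Z], [Z, Z, Z]])

T = 5
# delta_T(k) = 1 at k = 0, T, 2T, ..., so A(0) = A_release and the
# remaining T-1 factors are A_hold: C = A(T-1) ... A(0) (Definition 1).
C = np.linalg.matrix_power(A_hold, T - 1) @ A_release

multipliers, vecs = np.linalg.eig(C)          # Floquet multipliers
exponent_mags = np.abs(multipliers) ** (1.0 / T)  # |Floquet exponents|

# All multipliers strictly inside the unit circle: the zero solution is
# asymptotically stable (Corollary 2) despite the churn along the way.
assert np.abs(multipliers).max() < 1

# Slowest design mode: eigenvector of the largest-magnitude multiplier.
# Its largest entries flag the tasks dominating late-stage convergence.
slow = np.abs(vecs[:, np.argmax(np.abs(multipliers))])
print("slowest-mode task weights:", np.round(slow / slow.max(), 2))
```

Scaling the rework fractions up until some multiplier leaves the unit circle turns the same code into a divergence detector for a proposed release period T.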

4.4 Conditions for "pure churn"

"Pure design churn" is defined as a scenario where development progress oscillates freely as the project evolves over time and neither convergence nor divergence occurs. Pure design churn means that the amount of unfinished work does not decrease simultaneously for all of the tasks. Instead, the amount of unfinished work shifts from task to task as the project unfolds. The above scenario is represented by particular solutions that are periodic, i.e., solutions x(k) where for all k∈ℤ, \({x(k + N) = x(k)}\) for some positive integer N. The following results hold:

Theorem 3

  1. The linear system (Eq. 7) has a periodic solution of period T if the monodromy matrix C has an eigenvalue equal to 1.

  2. The linear system (Eq. 7) has a periodic solution of period 2T if the monodromy matrix C has an eigenvalue equal to −1.

  3. If the largest magnitude eigenvalue of the monodromy matrix C equals 1 and is strictly greater in magnitude than any other eigenvalue, then the limiting behavior of the general solution of the linear system (Eq. 7) is periodic with period T.
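Case 2 of Theorem 3 can be checked on a toy monodromy matrix: an eigenvalue equal to −1 produces a solution that repeats every two periods. The matrix below is illustrative, not derived from any model in the paper:

```python
import numpy as np

# Toy monodromy matrix with eigenvalues -1 and 0.5 (illustrative only).
# Theorem 3(2): an eigenvalue equal to -1 yields a solution of period 2T.
C = np.array([
    [-1.0, 0.0],
    [0.0,  0.5],
])

# Start in the eigenvector of the -1 eigenvalue and iterate the period map.
x = np.array([1.0, 0.0])
states = [x.copy()]
for _ in range(4):
    x = C @ x
    states.append(x.copy())

# The state flips sign every period and returns to itself every two periods:
print(states[0], states[1], states[2])
```

This is "pure churn" in miniature: unfinished work neither converges nor diverges, but oscillates with period 2T.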

5. Asynchronous work transformation model: multiple local DSM case

In this section, we consider the general case where multiple local teams are coordinated through a system integration team and subject to periodic feedback. More specifically, the m local DSMs are internally updated and provide status information to the others (local and system DSMs) at every time step. The system DSM provides updates to the m local DSMs at periodic intervals T_1, T_2, ..., T_m, as shown in Fig. 2.

Let L_i denote the vector of unfinished work for the tasks of local team i (i=1,...,m) at time k. Let n_i denote the number of tasks in local team i, and let \(n = \sum n_i\) denote the total number of tasks across all of the local teams. Individual elements within the L_i (i=1,...,m), S_i, and H_i vectors refer, correspondingly, to the same task. In general, the system of equations is written as follows:

$$
\overbrace{\begin{bmatrix}
L_{1}(k+1)\\ \vdots\\ L_{m}(k+1)\\ S_{1}(k+1)\\ \vdots\\ S_{m}(k+1)\\ H_{1}(k+1)\\ \vdots\\ H_{m}(k+1)
\end{bmatrix}}^{x(k+1)}
=
\overbrace{\begin{bmatrix}
W^{L_{1}} & \cdots & W^{L_{m}L_{1}} & \delta_{T_{1}}(k)\,W^{S_{1}L_{1}} & \cdots & \delta_{T_{1}}(k)\,W^{S_{m}L_{1}} & \delta_{T_{1}}(k)\,I & 0 & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & 0 & \ddots & 0\\
W^{L_{1}L_{m}} & \cdots & W^{L_{m}} & \delta_{T_{m}}(k)\,W^{S_{1}L_{m}} & \cdots & \delta_{T_{m}}(k)\,W^{S_{m}L_{m}} & 0 & 0 & \delta_{T_{m}}(k)\,I\\
W^{L_{1}S_{1}} & \cdots & W^{L_{m}S_{1}} & w^{S}_{11} & \cdots & w^{S}_{1n} & 0 & 0 & 0\\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & 0 & 0 & 0\\
W^{L_{1}S_{m}} & \cdots & W^{L_{m}S_{m}} & w^{S}_{n1} & \cdots & w^{S}_{nn} & 0 & 0 & 0\\
0 & 0 & 0 & (1-\delta_{T_{1}}(k))\,W^{S_{1}H_{1}} & \cdots & (1-\delta_{T_{1}}(k))\,W^{S_{m}H_{1}} & (1-\delta_{T_{1}}(k))\,I & 0 & 0\\
0 & 0 & 0 & \vdots & \ddots & \vdots & 0 & \ddots & 0\\
0 & 0 & 0 & (1-\delta_{T_{m}}(k))\,W^{S_{1}H_{m}} & \cdots & (1-\delta_{T_{m}}(k))\,W^{S_{m}H_{m}} & 0 & 0 & (1-\delta_{T_{m}}(k))\,I
\end{bmatrix}}^{A(k)}
\overbrace{\begin{bmatrix}
L_{1}(k)\\ \vdots\\ L_{m}(k)\\ S_{1}(k)\\ \vdots\\ S_{m}(k)\\ H_{1}(k)\\ \vdots\\ H_{m}(k)
\end{bmatrix}}^{x(k)}
$$
(11)

In the above expression, W^{L_i} is a work transformation matrix that captures the fraction of rework created within the group of tasks of local team i, and W^S is the work transformation matrix that captures the fraction of rework created within the system tasks. W^{S_i,H_j} is an n_j×n_i matrix that captures the fraction of finished system work created by the system tasks S_i(k) for the local tasks L_j(k); this work is held in H_j(k) until the next scheduled information release. W^{L_i,L_j} is an n_j×n_i matrix that captures the fraction of rework created by the local tasks L_i(k) for the local tasks L_j(k). W^{L_i,S_j} is an n_j×n_i matrix that captures the fraction of rework created by the local tasks L_i(k) for the system tasks S_j(k). Since information is released by the system to local team i only at periodic intervals of T_i, the n_i×n_i diagonal submatrix (1 − δ_{T_i}(k))I guarantees that finished system work is carried over to the next period. When information is released by the system to local team j, the n_j×n_i matrix δ_{T_j}(k)W^{S_i,L_j} captures the fraction of rework created directly by the system tasks S_i(k) for the local tasks L_j(k). The n_i×n_i diagonal submatrix δ_{T_i}(k)I indicates that information is transmitted to the local tasks L_i(k) indirectly through the holding state H_i(k).
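For concreteness, the block structure of Eq. 11 can be sketched in code for the single-local-team case (m = 1). The matrix values are illustrative assumptions, and for brevity the sketch reuses W^{S,L} for the holding row (the paper's W^{S,H} may differ):

```python
import numpy as np

def delta(k: int, T: int) -> int:
    """Periodic release indicator: 1 when step k is an information-release step."""
    return 1 if k % T == 0 else 0

def build_A(k: int, WL, WS, WLS, WSL, T: int):
    """Assemble the periodic state matrix A(k) for one local team (m = 1).

    State ordering is [L; S; H]: local unfinished work, system unfinished
    work, and the holding state for finished system work awaiting release.
    Illustrative sketch of the block structure of Eq. 11, not the paper's code;
    the holding row reuses WSL as an assumption.
    """
    n = WL.shape[0]
    d = delta(k, T)
    I = np.eye(n)
    Z = np.zeros((n, n))
    return np.block([
        [WL,  d * WSL,       d * I],        # local row: gated system input + held work
        [WLS, WS,            Z],            # system row: always active
        [Z,   (1 - d) * WSL, (1 - d) * I],  # holding row: accumulates between releases
    ])
```

On a release step (δ = 1) the holding row zeroes out, transferring held work to the local tasks; between releases (δ = 0) the local tasks see no system input, matching the discussion above.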

The next result shows that the model described in Eq. 11 is a special case of a linear periodic system. Once the period of the matrix A(k) is identified, the monodromy matrix C can be determined, and the results presented in Sect. 4 can be readily employed.

Theorem 4

If the system team provides updates to the m local teams at periodic intervals T_1, T_2, ..., T_m, then the fundamental period T of the matrix A(k) is the least common multiple of T_1, T_2, ..., T_m; i.e., \(T = \mathrm{lcm}(T_1, T_2, \ldots, T_m)\).

By reasoning similar to Theorem 4, it can be shown that any periodic information release policy leads to a linear periodic system, and thus can be analyzed using the tools presented in Sect. 4. For example, the local teams may provide status information to the others (local and system teams) at periodic intervals t_1, t_2, ..., t_m, t_system rather than at every time step; or any team (local or system) may provide status information to the others at nonuniform (but periodic) intervals. Indeed, any such periodic information release policy can be transformed into a model where all elements a_ij(k) of the matrix A(k) are periodic functions (with possibly nonidentical periods). In this case, Theorem 4 can be adapted by letting the fundamental period T of A(k) be the least common multiple of the periods of the elements a_ij(k).
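Theorem 4 reduces to a least-common-multiple computation; a minimal sketch with hypothetical release intervals:

```python
from math import lcm

def fundamental_period(intervals):
    """Theorem 4 (sketch): the fundamental period of A(k) is the least
    common multiple of the system-to-local release intervals T_1, ..., T_m."""
    return lcm(*intervals)

# Hypothetical release intervals for three local teams:
T = fundamental_period([2, 3, 4])
print(T)
```

Once T is known, the monodromy matrix C is the product of A(k) over one period, and the results of Sect. 4 apply directly.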

6. Case study: the automotive appearance design process

In this section, we illustrate the asynchronous work transformation model with a real product development process previously reported by McDaniel (1996). We intend to demonstrate the internal process dynamics, show that oscillatory patterns arise in an asynchronous PD project, and assess several mitigation strategies by exploiting the results developed in this paper. In Sect. 6.1, we provide a general overview of the nominal automotive appearance design process. Section 6.2 demonstrates how to construct the underlying work transformation matrices. In Sect. 6.3, we analyze the base case model. Section 6.4 assesses the efficacy of churn mitigation strategies based on three operational scenarios. Finally, results of a sensitivity analysis are presented in Sect. 6.5.

6.1 Appearance design process overview

Appearance design refers to the process of designing all interior and exterior automobile surfaces for which appearance, surface quality, and operational interface are important to the customer. Such design items include, for example, exterior sheet metal and visible interior panels. Appearance design is the earliest of all physical design processes, and changes at this stage easily cascade into later development activities, causing costly rework. Such rework is avoided by having "stylists" (from the industrial design group) work closely with "engineers" (from the engineering design group). While stylists are responsible for the appearance of the vehicle, engineers are responsible for the feasibility of the design, ensuring that it meets functional, manufacturing, and reliability requirements. Figure 3 shows the industrial design process within the context of the overall automotive product development process. The industrial design portion is allotted approximately 52 weeks for completion in a typical vehicle program.

Fig. 3. Appearance design in relation to total development process

Records from the study company, shown in Fig. 4, indicate churning behavior for a specific vehicle program. While the curves presented in the figure show churn in both interior and exterior subsystem development, our analysis of the churn phenomenon will be limited to the interior design process involving the styling and engineering development organizations.

Fig. 4. Churning behavior observed in a family of vehicle programs (McDaniel 1996)

Information exchanges from styling to engineering take the form of wireframe CAD data generated from clay model scans, referred to as scan transmittals of surface data. Scan transmittals are scheduled at roughly six-week intervals (i.e., T=6). Information exchanges from engineering to styling occur on a weekly basis through a scheduled feasibility meeting, during which various engineering groups provide feedback to styling on infeasible design conditions. Under this information transfer setup, engineering is therefore the local team, as defined in our model, and styling is the system team.

In addition to the cross-functional information exchanges between styling and engineering, information flows also occur within functional groups. For example, within engineering, a hand clearance study would compile information about the front door trim panel and the front seat to determine whether the two components physically interfere, and whether the space between them meets minimum acceptable requirements.

Finally, an appropriate metric for measuring development progress needs to be defined. We choose the number of open design issues (or open problems), although in other development environments it may be convenient to use different measures of progress, such as the time to finish a design task. Our choice is justified by the fact that the company we are investigating tracks open issues on an ongoing basis through the minutes of the above-mentioned feasibility meetings. Time-to-completion estimates are normally forecasted based on the open issues status.

6.2 Construction of work transformation matrices

From the program management perspective, the vehicle interior is segmented into subsystems, or components. These components represent major subassemblies of the interior, such as the instrument panel, the front door trim panels, and the center console. This level of component aggregation is used primarily because these components have been the unit of management and budgetary control for engineering design work, and because the company defined a number of standard engineering design studies to be performed on each component at this level. The DSMs Ω^L and Ω^S for the engineering and industrial design processes are shown in Figs. 5a and 5b, respectively. The transformation of component-level design information to system-level design information, as used within the industrial design group, is captured by the "dependency" matrix Ω^{LS} in Fig. 5c. This transformation is typically performed on a weekly basis, when the engineering group provides feedback to the industrial design group on infeasible conditions. Similarly, the dependency matrix Ω^{SL} in Fig. 5d captures the impact of industrial design on the engineering process at each scan transmittal (on a six-week interval).

Fig. 5a–d. Local, system DSMs, and system/local conversion matrices

The average autonomous completion rates per component are shown along the diagonals of the local and system DSMs (i.e., Ω^L and Ω^S, respectively). To set a base level of normalized resource usage for each component, engineers defined the resource usage intensity required to accomplish the autonomous completion rates presented in Fig. 5 as one resource-week. The DSMs for styling and engineering were obtained by circulating a survey instrument to both groups. Respondents were asked to populate the DSM by estimating the pairwise coupling (i.e., dependency strength) between components using S, M, W, or N ratings (strong, medium, weak, or none, respectively). These ratings were converted into numerical values by assigning 0.3, 0.2, 0.1, and 0 to S, M, W, and N, respectively. The local and system DSMs, determined by averaging the survey responses, are shown in Figs. 5a and 5b. A complete explanation of the DSM and dependency matrices in Fig. 5 is given by McDaniel (1996).
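The survey-to-DSM conversion described above amounts to a simple lookup. A sketch with a hypothetical 3-component survey (the ratings grid below is invented for illustration; the mapping values are those reported in the text):

```python
import numpy as np

# Convert qualitative coupling ratings into numerical rework fractions,
# using the mapping reported in the text (S=0.3, M=0.2, W=0.1, N=0).
RATING_TO_FRACTION = {"S": 0.3, "M": 0.2, "W": 0.1, "N": 0.0}

def ratings_to_dsm(ratings):
    """Build a numerical DSM from a square grid of S/M/W/N survey ratings."""
    return np.array([[RATING_TO_FRACTION[r] for r in row] for row in ratings])

# Hypothetical survey responses for a 3-component subsystem:
survey = [
    ["N", "S", "W"],
    ["M", "N", "N"],
    ["W", "W", "N"],
]
dsm = ratings_to_dsm(survey)
```

In practice, one such matrix would be built per respondent and the results averaged, as done for Figs. 5a and 5b.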

6.3 Base case analyses

For the base case, the largest magnitude eigenvalue of B is 0.9943. Because this eigenvalue is close to 1, the system is stable under the above operating conditions but converges very slowly (see Theorem 2). By inspecting the eigenvector corresponding to the largest magnitude eigenvalue of B, we obtain the element magnitudes (in descending order) shown in Fig. 6.

Fig. 6. Eigenvector and corresponding total work

The interpretation of the ranking in Fig. 6 is that the larger the magnitude of an element in this eigenvector, the more strongly that element contributes to the slow convergence of this mode of the design process. Thus, the ranking of the eigenvector elements provides useful information for identifying the structure of the total work vector. This interpretation is supported by examining the cumulative work, obtained by simulating the design process for 52 weeks, as shown in Fig. 6. We see that the cumulative work associated with the local "instrument panel" (i.e., L_6) exceeds the work done on the other local tasks. This is primarily due to the large amount of work associated with the system instrument panel (see the cumulative work of S_6) and the long information delay (T=6) between local and system task execution. This phenomenon can be seen in the traces for individual local components shown in Fig. 7a: the instrument panel has the largest number of open design issues at every point in time. The oscillatory changes in design status induced by the new information contained in scan transmittals are also apparent. Finally, we observe that even in the complete absence of external changes, the appearance design process is not completed on time. Design rework and oscillatory behavior result from the decomposed process structure and product architecture, and cannot be eliminated from the appearance design process. We conclude that the appearance process must be redesigned to speed up convergence and mitigate churn.

Fig. 7a–d. The effect of mitigation strategies on the behavior of the system
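The base-case dynamics can be sketched numerically. The following toy simulation (a hypothetical two-task system with invented coupling values, not the appearance-design data) iterates x(k+1) = A(k)x(k) for 52 weeks and accumulates per-task work, the analogue of the cumulative-work ranking in Fig. 6:

```python
import numpy as np

def simulate(A_of_k, x0, weeks=52):
    """Simulate x(k+1) = A(k) x(k) and accumulate work per task.

    Returns the trajectory and a cumulative-work vector (here taken as the
    sum of outstanding work over time, a rough proxy for effort expended).
    """
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    cumulative = np.zeros_like(x)
    for k in range(weeks):
        cumulative += x          # outstanding work processed this week
        x = A_of_k(k) @ x
        traj.append(x.copy())
    return np.array(traj), cumulative

# Hypothetical 2-task periodic system with a 6-week release interval (T = 6).
W = np.array([[0.2, 0.0], [0.0, 0.3]])
coupling = np.array([[0.0, 0.4], [0.4, 0.0]])

def A_of_k(k):
    # Coupled rework is injected only on release weeks, as in the model.
    return W + (coupling if k % 6 == 0 else 0)

traj, cum = simulate(A_of_k, [100.0, 100.0])
```

Plotting `traj` would show the sawtooth jumps at each release week superimposed on an overall decay, qualitatively matching Fig. 7a.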

6.4 Mitigation scenarios

Recall that the development process is stable under the base operating conditions but converges slowly. McDaniel (1996) reported that several mitigation strategies were implemented by the engineering and styling teams in order to speed up the rate of design progression needed to meet the required completion date. The analysis developed in this paper provides insight into achieving stability for a diverging process or speeding up convergence for a slowly converging one. Note that some mitigation strategies might eliminate churn completely, while others mitigate churn by damping it at a quicker rate. In particular, three types of mitigation strategies can be applied:

  1. Increasing the autonomous design completion rate for each component (i.e., increasing the fraction of work that can be completed in an autonomous manner in every time step).

  2. Lessening the pairwise coupling (i.e., dependency strengths) between components.

  3. Increasing the frequency with which design information is transmitted from the industrial design to the engineering process (i.e., reducing the information delay T).

The first strategy can be implemented, for instance, by applying resources (work efforts) above the normalized base-case level, which results in increased progress on the independent, autonomous components. The extra resources may be obtained through design technology, personnel training, overtime, skill level, and other determinants of design productivity. The second and third strategies can be accomplished, for instance, by using knowledge of the intercomponent coupling to guide colocation or teaming arrangements (McCord and Eppinger 1993), or by using a variety of formal and informal mechanisms to facilitate the management of design information flows (Braha 2001).

Figures 7b and 7c present the effect of the first two mitigation strategies on the behavior of the base-case model. Scenario 1 represents expending 2.5 normalized resource-weeks, and scenario 2 represents modifying the engineering coupling structure by eliminating the weak dependencies. In both cases, the increase in total resource expenditure and the reduction in the magnitude of the engineering intercomponent dependencies are applied to the more complex local components (i.e., the center console (L_2), door trim panel (L_3), and instrument panel (L_6); see Fig. 7). Figure 7d shows the combined effect of these strategies on the total number of open issues.

Delays in the information flows (introduced by scan transmittal intervals) from the industrial design process to the engineering process have a destabilizing effect on system behavior. Figure 8 presents the behavior of the system for various information delays: increasing the information delay results in more extreme churning behavior. Moreover, even though all scenarios converge, the increased churning leads to slower convergence rates. Indeed, by inspecting the convergence rate (i.e., the largest magnitude eigenvalue of the matrix B) of the appearance design process for various delays between consecutive information releases, we observe that convergence slows monotonically as the delay grows. To illustrate the economic cost of churn, we inspect the total amount of work in the system over the "convergence" period (i.e., the time required to complete 99% of the initial total work). The work associated with an information delay of T=6 is about 10% more than the total work associated with a delay of T=1.

Fig. 8. The effect of delay on the churning behavior
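The qualitative effect of the release interval can be reproduced with a minimal two-state model: one local task plus a holding state that accumulates system feedback between releases. The rates wL and c below are illustrative assumptions, and the per-step convergence rate is taken as the T-th root of the monodromy spectral radius (consistent with the Floquet-exponent definition):

```python
import numpy as np

def release_rate(wL, c, T):
    """Per-step convergence rate of a minimal local/holding model (sketch).

    State is [L; H]: local unfinished work and held system feedback.
    Each step, the local task reworks a fraction wL of its work and deposits
    a fraction c into the holding state; every T steps the held work is
    released back to the local task.
    """
    C = np.eye(2)
    for k in range(T):
        d = 1 if k % T == 0 else 0
        A = np.array([[wL,          d],
                      [c * (1 - d), 1 - d]])
        C = A @ C
    return max(abs(np.linalg.eigvals(C))) ** (1.0 / T)

# Longer release intervals slow convergence, echoing Fig. 8.
rates = [release_rate(0.5, 0.3, T) for T in range(1, 7)]
```

Because held feedback accumulates between releases, lengthening T pushes the per-step rate toward 1, mirroring the monotone slowdown observed in the case study.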

We also notice in Fig. 6 that the accumulation of ongoing changes in the industrial design group related to the local instrument panel (see the cumulative work of H_6) is larger than the magnitudes of the other elements. Thus, it may be possible to reduce the impact of the accumulated design information by using differential delays among components; that is, by increasing the frequency with which design information is transmitted from the industrial design process to the local components that have the most destabilizing effect on total system performance. For instance, consider the scenario where the industrial design team provides updates to the local engineering tasks L_2, L_3, and L_6 at shorter periodic intervals of T_1<6 weeks (while maintaining the delay for the others at T_2=6 weeks). According to the multiple local DSM model of Sect. 5, the local DSM is now partitioned into two local teams, DSM1={L_2, L_3, L_6} and DSM2={L_1, L_4, L_5, L_7, L_8, L_9, L_10}. Applying the results of Sect. 5, Fig. 9 plots the convergence rate (i.e., the largest magnitude eigenvalue of the matrix B) for the base scenario under (1) five differential information release policies, T_1=j and T_2=6 for j=1, 2, ..., 5, and (2) uniform information release policies T=j for j=1, 2, ..., 5. As can be seen, the differential delay policy consistently achieves better "performance" (faster convergence) than the corresponding uniform policy; that is, the differential delay policy with T_1=j and T_2=6 achieves better performance than the uniform information release policy with delay T=j for every j=1, 2, ..., 5. Note that even though the relative changes in the maximum eigenvalues are small, the corresponding changes in project completion times were significant.

Fig. 9. The effect of delay policy on the largest eigenvalue

6.5 Sensitivity analysis

The model developed in this paper enables us to perform sensitivity analysis. For example, let α^L_2 be the autonomous completion rate of the local center console (corresponding to the element in row two, column two of the local DSM), and assume that the other elements of the local DSM keep the values specified in Fig. 5. Figure 10a plots the largest magnitude eigenvalue of B against α^L_2. As can be seen, any value α^L_2 > 0 has a stabilizing effect on the system behavior (see Theorem 2). A similar plot for the local overhead system (Fig. 10b) suggests that the convergence rate is insensitive to its autonomous completion rate as long as that rate exceeds 0.05. Consequently, any increase in total resource expenditure for a bottleneck component (such as the center console) will be effective in improving system performance.

Fig. 10a, b. The effect of autonomous completion rate on convergence
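A sensitivity sweep of this kind is straightforward to script. The 3-task rework matrix below is hypothetical (not the case-study DSM); the sweep varies one task's autonomous completion rate α and tracks the largest-magnitude eigenvalue, the analogue of Fig. 10a:

```python
import numpy as np

def spectral_radius(alpha):
    """Largest-magnitude eigenvalue of a hypothetical 3-task rework matrix
    in which task 0 retains a fraction (1 - alpha) of its work per step."""
    B = np.array([
        [1 - alpha, 0.1, 0.1],
        [0.1,       0.3, 0.1],
        [0.1,       0.1, 0.3],
    ])
    return max(abs(np.linalg.eigvals(B)))

# Sweep the autonomous completion rate from 0.05 to 0.95.
radii = [spectral_radius(a) for a in np.linspace(0.05, 0.95, 10)]
```

In this toy setting, raising α shrinks the dominant eigenvalue, i.e., faster autonomous completion of a bottleneck task speeds up overall convergence, consistent with the discussion above.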

7. Discussion and conclusion

7.1 Case study limitations

The model and scenario analysis presented in this paper aim to illustrate the fundamental process characteristics and to provide managerial insights into the effects of different mitigation strategies. The following observations will assist in assessing the limitations of our case study in context:

(A):

The interior design is completely separable from the exterior design. The overall appearance design process is subject to a number of influences as it operates within the total automotive product development cycle. These influences are considered inputs to the appearance design process. Here, the scenarios are constructed to represent as closely as possible an isolated interior design process that is independent of exterior design actions (e.g., full exterior carryover in which all relevant exterior design information is known to component engineering groups).

(B):

In order to facilitate downstream tooling design, prototype development, and testing, and ultimately to meet the desired product introduction date, the appearance design phase of the process must be completed within a specified amount of time. Thus, an important input to the appearance design process is the program work schedule, indicating planned progress in design feasibility and expected status at various program milestones. This input is used by the appearance design process (primarily by the engineering process) participants to assess the current state of the design versus the scheduled state, and to make corresponding adjustments in effort levels via resource allocation and workload policies. For instance, if a component design is behind schedule, that group will usually work overtime in an effort to catch up. In our model, we assume that the resource usage intensity required to accomplish the autonomous completion rate of the various tasks is uniform throughout the project.

(C):

The interior design is not free of midstream program direction changes. The term program direction refers collectively to the current set of assumptions regarding product content, performance, variable cost, investment, quality, and other program attributes. Changes in the program direction are considered by process participants to be a critical source of the difficulties encountered during appearance design. The detailed data on program direction changes are not available. In our model, we proceed by assuming that the program has perfect initial knowledge of the ultimate desired product content and component cost, investment, and quality level; however, we do not incorporate program direction changes over the duration of the project.

Owing to the above-mentioned limitations, duplicating the historical behavior of the specific vehicle program (shown in Fig. 4) remains beyond the scope of this study. This lack of replication of the progress history raises the issue of validity for the model and the case study.

7.2 Model validity

Ideally, we would validate the process structure, to establish consistency among the constructs included in the model, and compare the resulting process dynamics against ex-post records to establish the face validity of our model. Demonstrating external validity, i.e., the applicability of the model in generic settings, is also desirable. However, the complexity of the design process and the scarcity of hard data and good records on the appearance design process make a quantitative validation of the model questionable (Huber-Carol et al. 2002). We therefore restrict our discussion to a qualitative comparison of our results with the observed process data. In particular, we draw the reader's attention to the following:

(A):

The constructs of managerial relevance are considered by the model, and their interdependencies are based on interviews conducted in the field study (McDaniel 1996). We have neither added constructs nor deleted details from these field observations.

(B):

Support for the model's validity has also been obtained by asking engineers and engineering supervisors to rank the "complexity level" associated with the ten interior components captured in the full model (McDaniel 1996). The complexity levels are expressed by the number of standard engineering design studies associated with each component (which is indirectly affected by the "churn" and convergence of each component). The ranking of the top three components (i.e., instrument panel, door trim panel, and center console) is aligned with the three components identified as having the most destabilizing effect on total system performance, as depicted in Figs. 6 and 7. In particular, engineers have observed that the instrument panel is consistently behind schedule and has the largest number of open design issues for the longest period of time, as predicted by our model. It has also been observed that the appearance design process is not completed on time at the nominal date of week 52, as demonstrated by our model. In addition, the local components (e.g., the overhead system) found to be insensitive to their autonomous completion rates (see Sect. 6.5) are aligned with the engineers' complexity-level rankings.

(C):

The model reproduces the periodic jumps in the number of open issues that occur in response to updated styling design information. In the periods between scan transmittals, the engineering group works with the information it has about the vehicle design, undisturbed by styling updates, so work progresses relatively smoothly. Styling, however, reacts to the ongoing engineering design changes, and because information about this reaction reaches engineering only via scan transmittals, the release of updated styling information into the system results in some design issues being opened or reopened. This release drives the sharp changes in status, which represent setbacks in progress due to the structure of product development.

(D):

Process participants have observed that lengthy delays in the information exchanges between styling and engineering are a major factor in pushing each component's status further away from its scheduled status, in alignment with our earlier discussion (see Sect. 6.4).

More complete modeling and validation would be required to make more detailed observations. This requirement suggests possible record-keeping actions that could assist management at the subject company in deepening its understanding of the appearance design process. Detailed vehicle program records should be kept and analyzed, including such data as weekly component-level design status, weekly component resource usages, and number of times design files or drawings are exchanged or accessed, as well as a record of all product changes that were made. These types of records would have the additional benefit of enabling input–output correlation analysis to be used to quantitatively estimate the intercomponent coupling parameters, rather than relying on survey techniques. They would also improve the estimation procedure of the average autonomous completion rate per component (by which engineering personnel were asked to artificially decouple the interior components and make a professional judgment about how quickly each component could be designed on its own).

7.3 Conclusion

The model described in this paper provides managers with operational insights that explicitly capture the fundamental characteristics of a development process. It allows managers to experiment with "what-if" scenarios in order to explore and compare the effects of candidate improvement actions. A basic revelation of the model, however, is that design churn is an unavoidable phenomenon and a fundamental property of a decomposed development process in which the product or process architecture dictates delays in feedback information among the development groups. Consequently, the most significant insight this model offers managers is to avoid making myopic resource allocation decisions based on the observation of churn (Joglekar and Ford 2003). The fluctuation in development progress cannot be avoided, but it can be managed once managers understand its sources. Our model reveals several main sources of churn:

(a):

Interdependency of the process or product structure is apparent when development occurs within a monolithic group; however, it is usually hidden, ignored, or forgotten once the process is decomposed into multiple groups. Fully anticipating, understanding, and accommodating this structure can explain why tasks seem difficult, frustrating, and prone to change.

(b):

Concurrency of local and system execution may help expedite the development process; however, the timing and magnitude of feedback must be chosen carefully to give development groups enough time, between feedbacks, to understand and react to these feedback flows. If these flows are not carefully planned, they may destabilize the process by generating more rework than the development teams can handle.

(c) Feedback delays are an important factor in developing a clear understanding of the development process and play a major role in determining system stability. In combination with the interdependency structure, delays are the main reason why development problems (issues) believed to be solved (closed) tend to reappear (reopen) at later stages of development.
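The interplay of these sources can be illustrated with a toy simulation (all parameter values are hypothetical, and the equations are a sketch in the spirit of the model, not its exact formulation). Two coupled tasks complete work at an autonomous rate, but each task's finished work generates rework for the other task after a feedback delay, so total open work rises again after appearing nearly finished:

```python
import numpy as np

# Hypothetical two-task example. Each period, a fraction c of a task's
# remaining work is completed, but completed work generates rework for
# the other task (off-diagonal fractions in A) arriving d periods later.
A = np.array([[0.0, 0.6],
              [0.5, 0.0]])
c, d = 0.7, 3    # autonomous completion rate, feedback delay (periods)
n = 30           # simulation horizon

work = np.zeros((n, 2))       # open work per task over time
completed = np.zeros((n, 2))
work[0] = [1.0, 1.0]
for t in range(1, n):
    completed[t] = c * work[t - 1]
    # Delayed rework: feedback on work completed d periods ago.
    rework = A @ completed[t - d] if t - d >= 1 else np.zeros(2)
    work[t] = work[t - 1] - completed[t] + rework

total = work.sum(axis=1)
# Churn: total open work is not monotonically decreasing.
churn = any(total[t] > total[t - 1] for t in range(1, n))
print(churn)
```

With the delay `d` set to 0 in this sketch, the total declines monotonically; it is the delayed coupling that makes apparently finished work reopen.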

While exposing churn as a fundamental property of a decomposed development process, our model also provides managers with three strategies to combat design process churn, divergence, or slow convergence. These strategies are:

1. Timing-based strategies: these advocate minimizing delays for the specific tasks that contribute most to the slow convergence of the development process. Our model provides a quantitative approach to identifying these bottleneck tasks. Once they are identified, strategies for reducing their time delays should be implemented, including the early release of preliminary information and divisive overlapping (Krishnan et al. 1997). Our illustration shows that accelerating the synchronization frequency for all tasks may not be as effective as accelerating, by the same amount, the synchronization frequency for the bottleneck tasks alone.

2. Resource-based strategies: these allow local and system teams to work faster (as captured by the diagonal elements of both \(W^{L}\) and \(W^{S}\)) by incorporating more resources. Our illustration shows that working faster on all tasks simultaneously may not be as effective as allocating the same amount of resources to the bottleneck tasks only.

3. Rework-based strategies: these suggest that local groups ignore low-priority local or system feedback (as captured by low rework fractions in \(W^{L_i L_j}\) or \(W^{S_i L_j}\)). A similar strategy is to reduce the values of \(W^{L_i L_j}\) or \(W^{S_i L_j}\) by requiring that local or system teams not produce much feedback to local groups. Both strategies benefit from a modular architecture.
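The bottleneck-identification idea behind the timing- and resource-based strategies can be sketched numerically (hypothetical rework fractions; the spectral radius of the rework matrix is used as a proxy for the convergence rate, smaller meaning faster). The loop identifies the bottleneck task as the one whose rework reduction helps most, then compares a uniform cut with the same percentage-point effort concentrated on that task:

```python
import numpy as np

# Hypothetical 3-task rework matrix: W[i, j] is the fraction of task j's
# finished work that comes back as rework for task i (illustrative values).
W = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.6],
              [0.1, 0.5, 0.0]])

def rho(M):
    """Spectral radius: must be < 1 for rework to die out; smaller = faster."""
    return max(abs(np.linalg.eigvals(M)))

# Identify the bottleneck task: the one whose incoming-rework reduction
# (a fixed 20% cut) lowers the spectral radius the most.
scores = []
for i in range(len(W)):
    M = W.copy()
    M[i, :] *= 0.8              # cut task i's incoming rework by 20%
    scores.append(rho(M))
bottleneck = int(np.argmin(scores))

# Compare strategies under a stylized equal-effort assumption
# (effort = total percentage reduction in rework fractions):
uniform = rho(0.8 * W)          # 20% cut spread over every task
targeted = W.copy()
targeted[bottleneck, :] *= 0.4  # the same 3 x 20% concentrated on one task
print(bottleneck, round(float(uniform), 3), round(float(rho(targeted)), 3))
```

In this sketch the targeted cut yields a smaller spectral radius, i.e., faster convergence, than spreading the same total reduction over all tasks; the equal-effort accounting is for illustration only.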

All the above strategies are effective in mitigating the three sources of churn (i.e., interdependency, concurrency, and feedback delays) either individually or collectively. We have demonstrated the impact of these strategies using the automotive appearance design process.

We have developed a model for a development process based on decomposing it into two groups: local and system. The model incorporates two types of information flows: (1) flows within the local and system groups, which may generate internal rework; and (2) flows between the groups, namely status updates from local to system tasks and feedback from system to local tasks. These information flows influence both "fundamental" and "extrinsic" churn and determine the shape and rate of convergence of the development process.
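The two-group structure just described can be sketched as a linear update (the matrices and delay below are assumed for illustration, not the paper's calibrated parameters): local open work evolves under internal rework plus delayed feedback from the system group, while system open work evolves under internal rework plus status updates from the local group:

```python
import numpy as np

# Assumed illustrative parameters (not calibrated to the automotive study):
WL  = np.array([[0.4, 0.1],     # internal rework among local tasks
                [0.2, 0.3]])
WS  = np.array([[0.3, 0.2],     # internal rework among system tasks
                [0.1, 0.4]])
WSL = 0.3  * np.eye(2)          # status updates: local -> system
WLS = 0.25 * np.eye(2)          # feedback: system -> local
T = 2                           # feedback (information) delay in periods

steps = 40
xL = np.zeros((steps, 2))       # open local work per task
xS = np.zeros((steps, 2))       # open system work per task
xL[0] = [1.0, 1.0]
for t in range(1, steps):
    fb = WLS @ xS[t - T] if t >= T else np.zeros(2)
    xL[t] = WL @ xL[t - 1] + fb               # rework + delayed feedback
    xS[t] = WS @ xS[t - 1] + WSL @ xL[t - 1]  # rework + status updates

total = xL.sum(axis=1) + xS.sum(axis=1)
print(round(float(total[0]), 3), round(float(total[-1]), 6))
```

Because the feedback arrives `T` periods late, local teams react to stale system information; with stable matrices the total open work still converges, but along a path shaped by the cross flows.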

Several extensions to our model are possible. First, cost elements associated with the information release and information processing activities may be incorporated into our model. This may result in a convex formulation that allows for the optimal determination of the information delay T (e.g., Thomke and Bell 2001). Second, except for the local and system autonomous rates of completion, our model does not explicitly account for resource allocation policies. Explicitly incorporating resource allocation as a decision variable may therefore lead to the discovery of better resource allocation policies in the context of decomposed development processes. Finally, the linearity assumption in our model can be relaxed and nonlinear formulations developed; for example, the model can be modified to incorporate time-varying rework fractions that decrease as the development process unfolds.