Abstract
In this work, we present a non-intrusive polynomial chaos expansion (PCE) approach for topology optimization in the context of stationary heat conduction. The robust topology optimization problem and its sensitivity in both its variational and approximate forms are discussed. Sensitivity analysis of the statistical moments and the PCE are described in detail. The variational boundary value problems were solved using the finite element method. The material distribution approach with the SIMP model was employed to represent the design. The numerical examples presented show applications addressing heat generation with uncertain magnitude, heat generation at an uncertain location, and damage at an uncertain location. These examples demonstrate that uncertainty-based optimization is able to obtain more robust designs than deterministic approaches.
1 Introduction
The main goal of topology optimization is to distribute material inside a given design domain in order to obtain an optimum design [8, 56]. Due to the complexity of the resulting optimization problem, several mathematical structures have been developed for this task (see Sigmund and Maute [56] for a complete review), such as material distribution [8], homogenization [31, 58], shape optimization [1] and topological sensitivity [2, 48]. Topology optimization has also been applied to problems arising from several different phenomena, such as solid mechanics [8, 24, 49], fluid mechanics [11], heat transfer [12, 17, 27, 41], acoustics [68] and optics [34].
A long-standing question for researchers working on topology optimization is whether or not the optimum designs obtained are robust enough to be employed in practice. Several authors argued that designs obtained with topology optimization are frequently non-robust, in the sense that small perturbations to the imposed conditions of the physical problem may lead to very poor performance of the design. Indeed, the numerical examples of this work exhibit situations where this is the case when non-robust optimization is employed.
In the last decades, several works showed the importance of modeling uncertainties in optimization problems [5, 9, 54]. It has been observed that designs obtained in the deterministic context often present very poor performance when uncertainties are considered. For this reason, several researchers developed and improved strategies to include uncertainties into optimization problems, such as reliability-based design optimization [3, 22, 43, 64], risk optimization [5, 6, 29, 61] and robust optimization [9, 16].
In the case of topology optimization, a main issue concerning uncertainty-based approaches has always been the computational effort required. Uncertainty-based optimization always requires more computational effort than its deterministic counterpart. The increase in the required computational effort may even be on the order of thousands, when sampling schemes are employed. Since topology optimization is a computationally demanding problem, application of existing schemes from uncertainty-based optimization frequently becomes unfeasible from the computational point of view. However, several uncertainty-based optimization approaches have been successfully applied to topology optimization in the last years [18,19,20,21, 23, 25, 33, 35, 36, 38, 39, 45, 46, 53, 57, 59, 63, 69]. In particular, the works by Tootkaboni et al. [59], Keshavarzzadeh et al. [35], Martínez-Frutos et al. [46] and Zhang et al. [69] employed polynomial chaos expansion (PCE) in the context of topology optimization, but only covered applications from solid mechanics and optics.
In PCE, the relation between the random variables and the response of the phenomenon is written as a polynomial expansion that converges in the mean square sense. PCE was originally presented by Wiener [65] and reviewed by Cameron and Martin [14]. However, PCE received wider attention only several decades later, through the works of Ghanem and Spanos [28] and Xiu and Karniadakis [67]. The main advantage of PCE is that it requires much less computational effort than most other approaches when the number of random variables is small. Besides, it is known to be convergent for second-order stochastic processes [26], which is a mild condition for most cases of practical interest. Thus, accuracy issues may be resolved by increasing the order of the expansion employed. However, the approach becomes computationally demanding as the number of random variables increases, an issue known as the “curse of dimensionality.” This issue can be mitigated, to some extent, by employing sparse quadrature rules [4, 10, 13, 32, 42, 47, 50, 66]. However, it is known that sparse grids only significantly reduce the computational effort in the presence of several random variables (see Barthelmann et al. [4], Xiong et al. [66]). Here, we focus on problems with a small number of random variables, and thus sparse grids fall outside the scope of this work.
In this work, we present a non-intrusive polynomial chaos expansion (PCE) approach for topology optimization in the context of stationary heat conduction. Topology optimization addressing heat transfer problems has already been studied by Gersborg-Hansen et al. [27], Bruns [12], Coffin and Maute [17], Lohan et al. [41], among others. However, uncertainties have not been extensively addressed in past works, apart from the work by Schevenels et al. [53]. Since Schevenels et al. [53] focused on uncertainties arising from manufacturing errors (modeled as density field uncertainties evaluated with Monte Carlo sampling), we identify the need to better understand how uncertainties affect topology optimization in problems concerning heat transfer. In particular, the impact of different sources of uncertainties needs to be further studied, such as uncertainties on boundary conditions, heat generation and concentrated damage. This is the main novelty of this work.
This paper is organized as follows. In Sect. 2, we present the deterministic statement of the topology optimization problem. In Sect. 3, the robust statement is presented. Sensitivity analyses for both problems are presented in Sect. 4. The required statistical moments and their sensitivities are presented in Sect. 5. The PCE approach employed to evaluate the statistical moments is presented in Sect. 6. Some computational aspects are discussed in Sect. 7. Numerical examples are presented in Sect. 8. Finally, concluding remarks are presented in Sect. 9. A detailed development of the expressions required for sensitivity analysis is presented in the “Appendix.”
2 Deterministic topology optimization problem
We start from the topology optimization problem: Find a density field \(\rho \in L^2(\varOmega )\) such that
where
is the spatial \(L^2(\varOmega )\) norm of the temperature field \(u(\rho )\)Footnote 1, the first constraint is the variational boundary value problem that governs the stationary heat equation [7], \(\varOmega\) is the spatial domain, \(\varGamma\) is the boundary of the domain, \(\kappa \in L^2(\varOmega )\) is the thermal conductivity field, \({\hat{q}}\) are prescribed heat fluxes normal to the boundary (Neumann boundary conditions), \(f \in L^2(\varOmega )\) is the heat generation inside the domain, \({\mathcal {U}} \subset H^2(\varOmega )\) is the space of admissible solutions that satisfy the Dirichlet boundary conditions, \({\mathcal {V}} \subset H^2(\varOmega )\) is the space of admissible trial functions, the second constraint is a volume constraint, \(V_0\) is the volume of the domain, \(0 < \alpha \le 1\) is the volume fraction of material to be employed and \({\mathbf {y}}\) are the spatial coordinates.
We also assume that the relation between the thermal conductivity \(\kappa\) and density \(\rho\) is given by
where \(\kappa _0 > 1\) is the original conductivity of the material, \(p>1\) is a SIMP (solid isotropic material with penalization) exponent [8] and \(\epsilon >0\) is a relaxation parameter on the conductivity employed to avoid theoretical and numerical issues caused by null densities. We thus employ a material distribution approach with the SIMP model.
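As an illustration, the SIMP interpolation and its derivative (the latter is needed later for sensitivity analysis) can be sketched as follows. The exact relaxed form \(\kappa = \epsilon + (\kappa _0 - \epsilon )\rho ^p\) is an assumption consistent with the description above, and the function names are hypothetical:

```python
import numpy as np

def simp_conductivity(rho, kappa0=1.0, p=3.0, eps=1e-3):
    """SIMP-interpolated conductivity: kappa = eps + (kappa0 - eps) * rho**p.

    One common relaxed SIMP form; the exact relaxation used in the paper
    may differ slightly. eps keeps the conductivity positive at rho = 0.
    """
    rho = np.asarray(rho, dtype=float)
    return eps + (kappa0 - eps) * rho**p

def simp_conductivity_derivative(rho, kappa0=1.0, p=3.0, eps=1e-3):
    """Derivative d(kappa)/d(rho), used in the adjoint sensitivity expressions."""
    rho = np.asarray(rho, dtype=float)
    return p * (kappa0 - eps) * rho**(p - 1)
```

Note that the derivative vanishes as \(\rho \rightarrow 0\) for \(p > 1\), which is what drives intermediate densities toward 0 or 1 in SIMP.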
Employing the finite element method (FEM) with a uniform mesh composed of rectangular elements [7] and assuming that the density inside each finite element is constant, the above variational optimization problem can be approximated by the discrete optimization problem: Find the vector of element densities \(\varvec{\rho } \in {\mathbb {R}}^N\) such that
where
is the norm of the vector of nodal temperatures \(u_i(\varvec{\rho })\), g is an approximation to F (the notation g is employed here to comply with later notation considering uncertainties), V is the volume of the finite elements (all elements have the same volume since the mesh is assumed uniform), \({\mathbf {u}}\) is the vector of nodal temperatures obtained with FEM, the first constraint is the FEM approximation to the variational boundary value problem, \({\mathbf {K}}\) is the conductivity matrix, \({\mathbf {f}}\) is a vector that depends on the boundary conditions and heat generation inside the domain, the second constraint is a volume constraint, \(\rho _i\) is the density of material inside each finite element and N is the number of finite elements. For more details on this approximate problem, the reader can consult the work by Santos [52]. We also highlight that other assumptions on the FEM discretization can be employed, such as non-uniform meshes or triangular elements, with little modification of the above equations.
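To make the structure of the discrete problem concrete, the following sketch assembles a one-dimensional analog (the paper itself uses two-dimensional rectangular elements): element-wise SIMP conductivities, a global conductivity matrix \({\mathbf {K}}\), a load vector from uniform heat generation, and the performance measure \(g = V^{1/2} \Vert {\mathbf {u}} \Vert\). The 1-D setting and all names are illustrative assumptions:

```python
import numpy as np

def solve_heat_1d(rho, f=1.0, length=1.0, kappa0=1.0, p=3.0, eps=1e-3):
    """Minimal 1-D analog of the discrete problem K(rho) u = f.

    Element-wise SIMP conductivities, linear elements on a uniform mesh,
    Dirichlet condition u = 0 at the left end, insulated right end, and
    the performance measure g = V**0.5 * ||u|| with V the element volume.
    """
    rho = np.asarray(rho, dtype=float)
    n = len(rho)                      # number of elements
    h = length / n                    # element size (uniform mesh)
    kappa = eps + (kappa0 - eps) * rho**p
    # Assemble the global conductivity matrix from 2x2 element matrices.
    K = np.zeros((n + 1, n + 1))
    for e in range(n):
        ke = kappa[e] / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        K[e:e + 2, e:e + 2] += ke
    # Consistent load vector for uniform heat generation f.
    F = np.full(n + 1, f * h)
    F[0] *= 0.5
    F[-1] *= 0.5
    # Dirichlet condition u = 0 at the left end node.
    u = np.zeros(n + 1)
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])
    g = np.sqrt(h) * np.linalg.norm(u)   # V**0.5 * ||u||, V = h in 1-D
    return u, g
```

With \(\rho \equiv 1\) this reduces to \(-u'' = f\), \(u(0) = 0\), \(u'(1) = 0\), whose solution is \(u(x) = f(x - x^2/2)\); the linear-element FEM reproduces this at the nodes.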
3 Robust topology optimization problem
We now assume that the variational boundary value problem is affected by a vector of random variables \({\mathbf {X}}\). In this case, the temperature field becomes a stochastic field and its norm becomes a random variable. We then write the robust topology optimization problem: Find a density field \(\rho \in L^2(\varOmega )\) such that
where
is the spatial \(L^2(\varOmega )\) norm of the temperature field \(u(\rho ,{\mathbf {X}})\)Footnote 2, \({\text {E}}[ \ ]\) is the expected value, \({\text {Var}}[ \ ]\) is the variance and \(\eta \in [0,1]\) is a weighting factor.
The new objective function \({\overline{F}}\) is a weighted sum of the expected value and the standard deviation of the norm of the temperature field. Note that the weighted sum of the expected value and the standard deviation is frequently employed as the objective function in robust optimization [15, 53, 69]. Some authors also employ the variance instead of the standard deviation [6, 23]. The main idea in both cases is that we wish to minimize the expected value and the variability of the original objective function. This multi-objective optimization problem is then frequently stated as the weighted sum between both quantities. However, we highlight that other combinations can be employed if necessary, with little modification to the developments presented in this work.
In the same way the problem from Eq. (1) was approximated by the problem from Eq. (4), the problem from Eq. (6) can be approximated by the problem: Find the vector of densities \(\varvec{\rho } \in {\mathbb {R}}^N\) such that
where \(g = V^{1/2} \Vert {\mathbf {u}} \Vert\) as defined in Eq. (4) is an approximation to \(\Vert u \Vert\). This is the problem solved numerically in this work.
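As a sketch, given a set of sampled (or PCE-predicted) performances, the robust objective can be evaluated as below. The specific combination \({\overline{F}} = \eta \, {\text {E}}[g] + (1-\eta )\sqrt{{\text {Var}}[g]}\) is an assumption, consistent with the limiting cases \(\eta = 1\) (expected value only) and \(\eta = 0\) (variability only) discussed in the numerical examples:

```python
import numpy as np

def robust_objective(samples, eta):
    """Weighted sum of the mean and standard deviation of sampled performances.

    Assumes F_bar = eta * E[g] + (1 - eta) * Std[g]; eta = 1 recovers the
    pure expected-value problem and eta = 0 pure variability minimization.
    """
    g = np.asarray(samples, dtype=float)
    return eta * g.mean() + (1.0 - eta) * g.std(ddof=0)
```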
In order to employ gradient-based optimization algorithms to solve the above optimization problem, it is necessary to obtain the partial derivative of the new objective function. Since it is a combination of the expected value and the variance, we actually require the sensitivities of these statistical moments. These concepts are discussed in Sect. 5. However, we first discuss sensitivity analysis of the function F and its approximation g in the next section.
4 Sensitivity analysis
Sensitivity analysis of the deterministic topology optimization problem from Eqs. (1) and (4) was already covered in previous works. A general exposition of the subject is given by Bendsøe and Sigmund [8] and Haftka and Gürdal [30]. Sensitivity analysis of the problem presented previously was discussed in detail by Santos [52].
If the adjoint method is employed, sensitivity of the objective function F with respect to a perturbation \(\xi \in L^2(\varOmega )\) on the density field \(\rho\) can be written as (see “Appendix” for the details)
where \(\lambda \in {\mathcal {V}}\) is the solution to the adjoint problem
Using the FEM, the above variational sensitivity analysis problem can be approximated by
where \(\varvec{\lambda }\) is an approximation to the solution of the adjoint problem, given by
Note that the above expressions indicate that sensitivities \({\mathcal {D}}_{\rho }\) and \(\partial g/\partial \rho _k\) depend on the random variables \({\mathbf {X}}\). For the deterministic case, however, the required expressions for sensitivity analysis can be recovered by taking \({\mathbf {X}}\) as a vector of deterministic parameters.
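The discrete adjoint scheme can be checked on a toy problem: for a symmetric system \({\mathbf {K}}(\varvec{\rho }){\mathbf {u}} = {\mathbf {f}}\) with \(g = \Vert {\mathbf {u}} \Vert\), a single adjoint solve yields all partial derivatives, which can be verified against finite differences. The toy matrices below are hypothetical stand-ins for the FEM model:

```python
import numpy as np

def adjoint_sensitivity_demo():
    """Verify discrete adjoint sensitivities on a tiny symmetric system.

    Hypothetical toy problem: K(rho) = sum_k rho_k**p * K_k with fixed
    symmetric positive-definite "element" matrices K_k, load f, and
    g = ||u||. For symmetric K the adjoint solve is K lam = dg/du, and
    dg/drho_k = -lam @ (dK/drho_k) @ u.
    """
    rng = np.random.default_rng(0)
    p, n, ne = 3.0, 4, 3
    Ks = []
    for _ in range(ne):
        A = rng.standard_normal((n, n))
        Ks.append(A @ A.T + n * np.eye(n))   # symmetric positive definite
    f = rng.standard_normal(n)
    rho = np.array([0.4, 0.7, 0.9])

    def solve(rho):
        K = sum(r**p * Kk for r, Kk in zip(rho, Ks))
        u = np.linalg.solve(K, f)
        return K, u, np.linalg.norm(u)

    K, u, g = solve(rho)
    lam = np.linalg.solve(K, u / g)           # adjoint: K lam = dg/du
    grad = np.array([-(lam @ (p * r**(p - 1) * Kk) @ u)
                     for r, Kk in zip(rho, Ks)])

    # Central finite-difference check of each partial derivative.
    h = 1e-6
    fd = np.zeros(ne)
    for k in range(ne):
        dp = rho.copy(); dp[k] += h
        dm = rho.copy(); dm[k] -= h
        fd[k] = (solve(dp)[2] - solve(dm)[2]) / (2 * h)
    return grad, fd
```

The point of the adjoint method is that the cost of the gradient is one extra linear solve, independent of the number of design variables.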
5 Expected value and variance sensitivity
The expected value of some performance function g that depends on design parameters \(\varvec{\rho }\) and random vector \({\mathbf {X}}\) can be written as
where \(f_X\) is the probability density function (PDF) of the random vector \({\mathbf {X}}\) and \({\mathcal {S}} \subseteq {\mathbb {R}}^M\) is the support of \(f_X\). Assuming the design parameters do not affect the random variables, sensitivity with respect to \(\rho _k\) gives
assuming g satisfies the hypotheses of the dominated convergence theorem [37, 40, 55], so that differentiation and integration can be interchanged. In this case, we identify that the sensitivity of the expected value is simply the expected value of the sensitivity.
The variance of g, on the other hand, can be written as
Sensitivity with respect to design parameter \(\rho _k\) then gives, using the chain rule,
where \({\text {Cov}}[\ , \ ]\) denotes the covariance and the dominated convergence theorem was invoked again to interchange differentiation and expectation.
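For completeness, the covariance form follows from differentiating \({\text {Var}}[g] = {\text {E}}[g^2] - {\text {E}}[g]^2\) under the integral sign:

```latex
\begin{aligned}
\frac{\partial }{\partial \rho _k} {\text {Var}}[g]
&= \frac{\partial }{\partial \rho _k}\left( {\text {E}}[g^2] - {\text {E}}[g]^2 \right) \\
&= {\text {E}}\!\left[ 2 g \, \frac{\partial g}{\partial \rho _k} \right]
   - 2 \, {\text {E}}[g] \, {\text {E}}\!\left[ \frac{\partial g}{\partial \rho _k} \right] \\
&= 2 \, {\text {Cov}}\!\left[ g, \frac{\partial g}{\partial \rho _k} \right] .
\end{aligned}
```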
Analytical integration of the above expressions and their sensitivities is intractable in almost every practical situation. Employment of approximate sampling schemes, such as Monte Carlo simulation (MCS) [51], is feasible from the conceptual point of view, but leads to prohibitive computational requirements when evaluation of the performance function g is time-consuming. In this work, these issues are avoided using the expansion technique described in the next section.
6 Polynomial chaos expansion
Assuming the original performance function \(g(\varvec{\rho },{\mathbf {X}})\) has finite variance, in PCE we substitute it by a polynomial approximation \({\tilde{g}}\) of the form [14, 26, 28, 61, 62, 65, 67]
where \(\psi _{i}\) are basis functions from a polynomial set and \(c_{i}\) are coefficients to be determined. In this work, we obtain the basis functions by full tensor product [61, 62]. The resulting number of basis functions is \(n = (k+1)^M\), where k is the order of the polynomials employed for each random variable and M is the number of random variables. Once the approximation \({\tilde{g}}\) is built, probabilistic moments such as expected values and variances can be evaluated approximately with little computational effort using \({\tilde{g}}\) instead of g.
The PCE is obtained as to minimize the mean square error according to the distribution of the random vector \({\mathbf {X}}\) [14, 26, 65]
This results in a least squares approximation where the coefficients of the expansion can be found from the system of linear equations
with
and
If the polynomial basis employed is orthogonal with respect to \(f_X\), then \({\mathbf {A}}\) becomes a diagonal matrix and there is no need to solve a system of linear equations in order to obtain the expansion. This procedure is known in the literature as the Wiener–Askey scheme [67]. In this work, we use the Wiener–Askey scheme and thus Laguerre and Legendre polynomials are employed for expansion of gamma and uniform random variables, respectively.
From previous works, it is known that the partial derivatives of \({\tilde{g}}\) with respect to design parameter \(\rho _k\) can be approximated by the PCE of the partial derivatives of g. This is the approach employed in this work for evaluation of sensitivity of the PCE. More details on this subject are presented by Torii et al. [61, 62].
In this work, the expected values from Eqs. (20) and (21) are evaluated using full-tensor Gaussian quadrature, as discussed in Torii et al. [62]. The number of quadrature nodes employed for each random variable is equal to \(k+1\), where k is the order of the polynomial basis employed. This is enough to exactly integrate the polynomials of highest order that appear in Eq. (20). From the full tensor product, the total number of quadrature nodes is then \(m = (k+1)^M\), where M is the number of random variables. Note that as the number of random variables increases, the computational effort required to build the PCE grows exponentially, the “curse of dimensionality” mentioned previously. Consequently, PCE is generally recommended for problems with few random variables. A detailed discussion on this subject was previously presented by Torii et al. [61, 62]. A discussion on more refined quadrature schemes, such as sparse quadrature rules [10, 13, 32, 42, 47, 50, 66], is outside the scope of this work.
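A minimal sketch of this construction for independent uniform variables on \([-1,1]\) (Legendre basis, Gauss–Legendre quadrature, full tensor product) is given below. In the paper, gamma variables would use Laguerre polynomials instead; the function interface is an assumption:

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import leggauss, Legendre

def pce_moments(g, k, M):
    """Build a full-tensor Legendre PCE of g and return (mean, variance).

    Sketch under simplifying assumptions: the M random variables are
    independent and uniform on [-1, 1], and g accepts an array of length M.
    Coefficients come from projection via Gaussian quadrature with k+1
    nodes per variable, so m = (k+1)**M model evaluations are needed.
    """
    nodes, weights = leggauss(k + 1)          # 1-D Gauss-Legendre rule
    weights = weights / 2.0                   # uniform density on [-1, 1] is 1/2
    polys = [Legendre.basis(i) for i in range(k + 1)]
    norms = [1.0 / (2 * i + 1) for i in range(k + 1)]   # E[P_i(X)^2]

    mean = var = 0.0
    for alpha in product(range(k + 1), repeat=M):       # multi-indices
        c = 0.0
        for idx in product(range(k + 1), repeat=M):     # quadrature grid
            x = nodes[list(idx)]
            w = np.prod(weights[list(idx)])
            psi = np.prod([polys[a](xj) for a, xj in zip(alpha, x)])
            c += w * g(x) * psi                         # E[g * psi_alpha]
        norm2 = np.prod([norms[a] for a in alpha])      # E[psi_alpha^2]
        c /= norm2                                      # projection coefficient
        if all(a == 0 for a in alpha):
            mean = c                                    # E[g~] = c_0
        else:
            var += c**2 * norm2                         # Var[g~]
    return mean, var
```

Because the basis is orthogonal with respect to \(f_X\), the mean is the zeroth coefficient and the variance is a weighted sum of squared coefficients, so no linear system needs to be solved.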
7 Computational aspects
In order to avoid difficulties related to mesh dependency and checkerboard patterns, we employed the sensitivity filter used by Torii and de Faria [60]. Sensitivities were filtered as
where
\(r_f\) is the filtering radius and \(d_{ij}\) is the distance from the center of elements i and j. More details on sensitivity filtering are given by Bendsøe and Sigmund [8].
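The filtering step can be sketched as below, assuming the classic density-weighted cone filter with weights \(H_{ij} = \max (0, r_f - d_{ij})\); the exact weighting used by Torii and de Faria [60] may differ in detail:

```python
import numpy as np

def filter_sensitivities(centers, rho, dgdrho, r_f):
    """Density-weighted sensitivity filter (one common variant).

    centers: (N, 2) element-center coordinates, rho: (N,) densities,
    dgdrho: (N,) raw sensitivities, r_f: filter radius. Weights are the
    linearly decaying cone H_ij = max(0, r_f - d_ij).
    """
    c = np.asarray(centers, dtype=float)
    rho = np.asarray(rho, dtype=float)
    dgdrho = np.asarray(dgdrho, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)  # pairwise d_ij
    H = np.maximum(0.0, r_f - d)
    num = H @ (rho * dgdrho)
    den = np.maximum(rho, 1e-3) * H.sum(axis=1)  # clamp avoids division by ~0
    return num / den
```

For uniform densities and uniform sensitivities the filter leaves the sensitivities unchanged; its effect appears only where neighboring sensitivities differ.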
8 Numerical examples
In all examples, we take the relaxation parameter \(\epsilon = 0.001\), the SIMP exponent \(p = 3\) and the volume fraction \(\alpha = 0.25\). The origin of the x–y axes is positioned at the bottom-left corner in all examples. The optimization problems were solved using the function fmincon available in MATLAB. An interior point algorithm with a BFGS approximation of the Hessian was employed [44]. The stopping tolerances on the objective function, constraints and design vector were set to \(10^{-9}\). After the optimization problems were solved, the expected values and variances were evaluated with MCS using a sample size of \(10^3\). See Luenberger and Ye [44] for more details on numerical optimization algorithms.
8.1 Example 1: heat generation with uncertain magnitude
In the first example, we consider a square design domain with size \(1.0\times 1.0\), as represented in Fig. 1. The original thermal conductivity is \(k_0 = 1\). The top boundary is subjected to a prescribed temperature \(u = 0\) (i.e., a homogeneous Dirichlet condition). All other boundaries are insulated with \({\hat{q}} = 0\) (i.e., homogeneous Neumann conditions). The heat generation rates at the left and right halves of the domain are random variables \(X_1\) and \(X_2\), respectively. The random variables \(X_1\) and \(X_2\) have gamma distributions with expected values equal to 1.0. However, \(X_1\) has a standard deviation of 0.3, while \(X_2\) has a standard deviation of 0.1. A polynomial basis of order \(k = 1\) was employed, and thus four function evaluations are necessary to obtain the PCE. The design domain is represented with a uniform mesh of 10,000 \((100\times 100)\) rectangular finite elements. The filter radius was set to \(r_\mathrm{f} = 0.015\).
The optimum designs obtained with different values of the weighting factor \(\eta\) are presented in Fig. 2. The design obtained with deterministic optimization using the expected value of the random variables is also presented. Since the temperature field in this example depends linearly on the random variables, the optimum design obtained with \(\eta = 1.00\) is very similar to the deterministic one.
In the deterministic case, the optimum design obtained is symmetric, since both random variables have the same expected value. For decreasing values of \(\eta\), we observe that the designs are shifted to the left side of the domain, in order to cover the region with high variance of heat generation. The optimum designs are consequently non-symmetric when the variance of the response is considered, since the variances of the random variables differ.
The expected value, the square root of the variance (standard deviation), the coefficient of variation (\(\delta = \sqrt{{\text {Var}}}/{\text {E}}\)) and the number of function evaluations (n.f.e.) necessary to obtain the optimum designs \(\varvec{\rho }^*\) obtained are presented in Table 1. The n.f.e. corresponds to the number of times the finite element model was built to obtain the temperature field and its sensitivity. The expected value and the standard deviation of the performance obtained with first-order PCE for \(\eta = 0.50\) were 7.5939 and 1.1075, respectively. These results agree with the results given by MCS, indicating that a first-order expansion is sufficient in this example. Similar accuracy was obtained for the other designs presented.
We observe that as \(\eta\) is decreased, the optimization algorithm enforces a reduction in the variance of the response. For higher values of \(\eta\), on the other hand, the optimization algorithm focuses on minimizing the expected value of the response. Note that the n.f.e. necessary to obtain the robust designs is about four times that necessary to obtain the deterministic design, since each time the PCE is built four evaluations of the finite element model are necessary. However, the n.f.e. necessary to obtain those optimum designs is much smaller than is generally required to evaluate statistical moments by standard sampling schemes, as already observed in the work by Torii et al. [61]. In a similar problem, Schevenels et al. [53] observed that 100 function evaluations were necessary just to evaluate the statistical moments using MCS.
8.2 Example 2: heat generation at uncertain location
We now consider the problem represented in Fig. 3, with a square design domain of size \(1.0\times 1.0\). The original thermal conductivity is \(k_0 = 1\). A region of length 0.1 at the center of the bottom border is subjected to a prescribed temperature \(u = 0\) (i.e., a homogeneous Dirichlet condition). All other boundaries are insulated with \({\hat{q}} = 0\) (i.e., homogeneous Neumann conditions). There is uniform heat generation equal to \(f = 100\) inside a square region of size \(0.4\times 0.4\), centered at \((X_1,X_2)\). The coordinates \(X_1\) and \(X_2\) are uniform random variables with mean equal to 0.5 and standard deviation equal to 0.1. Consequently, the position of the region where heat is generated is uncertain. A polynomial basis of order \(k = 3\) was employed, and thus 16 function evaluations are necessary to obtain the PCE. The design domain is represented with a uniform mesh of 10,000 \((100\times 100)\) rectangular finite elements. The filter radius was set to \(r_\mathrm{f} = 0.015\).
The optimum designs obtained with different values of the weighting factor \(\eta\) are presented in Fig. 4. The design obtained with deterministic optimization using the expected value of the random variables is also presented. Note that since the temperature field does not depend linearly on the random variables, the deterministic design is different from the one obtained with \(\eta = 1.00\). The expected value and the standard deviation of the performance obtained with third-order PCE for \(\eta = 0.50\) were 28.6771 and 4.0453, respectively. These results agree with the results given by MCS, indicating that a third-order expansion is sufficient in this example. Similar accuracy was obtained for the other designs presented.
The expected value, the standard deviation, the coefficient of variation and the n.f.e. for the optimum designs \(\varvec{\rho }^*\) obtained are presented in Table 2. From these results, we observe that the deterministic design is very poor in comparison with the robust ones. This shows the importance of taking uncertainties into account in this case. We also observe that the design obtained with \(\eta = 0.00\) has a very high expected value, which indicates that minimization of the variance alone may lead to designs with no practical significance. Finally, in this example 16 evaluations of the finite element model are necessary to build the PCE, and thus, the n.f.e. necessary to obtain the robust designs is about 10 times that of the deterministic case. However, this computational effort is much less than that necessary for most standard sampling-based schemes.
8.3 Example 3: damage with unknown location
In the last example, we consider the rectangular design domain of size \(1.0\times 2.0\), represented in Fig. 5. The bottom boundary is subjected to a prescribed temperature \(u = 0\) (i.e., a homogeneous Dirichlet condition). All other boundaries are insulated with \({\hat{q}} = 0\) (i.e., homogeneous Neumann conditions). There is uniform heat generation equal to \(f = 1\) over the entire domain. However, the original thermal conductivity field is given by
where \(X_1\) and \(X_2\) are uniform random variables with means equal to 0.5 and 1.0 and standard deviations equal to 0.2 and 0.4, respectively. The field \(k_0\) for \(X_1 = 0.5\), \(X_2 = 1.0\) is represented in Fig. 5b. This \(k_0\) is employed to model damage with an unknown location inside the domain. A polynomial basis of order \(k = 3\) was employed, and thus 16 function evaluations are necessary to obtain the PCE. The design domain is represented with a uniform mesh of 9800 \((70\times 140)\) rectangular finite elements. The filter radius was set to \(r_\mathrm{f} = 0.030\).
The optimum designs obtained with different values of the weighting factor \(\eta\) are presented in Fig. 6. The design obtained with deterministic optimization using the expected value of the random variables is also presented. Only robust designs obtained with \(\eta = 1.00\) and \(\eta = 0.50\) are presented in this example, since the robust designs did not vary much when \(\eta\) was modified. The expected value and the standard deviation of the performance obtained with third-order PCE for \(\eta = 0.50\) were 41.6902 and 0.1180, respectively. We observe that the expected value agrees with the results given by MCS. The standard deviation, on the other hand, was not accurately approximated. However, the standard deviation in this example is very small and thus its accuracy does not affect the designs. Similar accuracy was obtained for the other designs presented (Table 3).
From these results, we first observe that the deterministic design is very poor in comparison with the robust ones. This occurs because in the deterministic case the damage is simply located at the center of the domain. However, since the location of the damage is defined by uniform random variables, the central position is not that significant from the probabilistic point of view. As a consequence, the robust designs are completely different from the deterministic one, and much more efficient from the probabilistic point of view.
Finally, we observe that the robust designs required about five times the computational effort required by the deterministic design. However, as observed in the previous examples, this computational effort is much smaller than what would be required by most sampling-based schemes.
9 Conclusions
In this work, we presented an application of PCE for gradient-based topology optimization. In this case, the objective function of the problem is a combination of the expected value and variance of the response. The variational boundary value problems were solved using the FEM. The material distribution approach with the SIMP model was employed to represent the design. A detailed development of the expressions required for sensitivity analysis was also presented.
The main novelty of this work is to present a general framework for PCE-based robust topology optimization in the context of heat conduction problems. We also present numerical examples involving heat generation at uncertain region and uncertain damage location, which have not been discussed before.
From the numerical examples, it is possible to observe the importance of taking uncertainties into account. In the second example, the expected performance of the robust designs is about 50% better than that obtained with the deterministic approach. In the last example, the expected performance of the robust designs is about 15% better than that of the deterministic design. The variability of the performance can also be drastically reduced in some cases. In the second example, the standard deviation of the performance is about 80% smaller than that of the deterministic design. These results highlight that uncertainty-based optimization is able to obtain more robust designs than deterministic approaches in the case of topology optimization considering heat conduction. Thus, the proposed approach has strong potential for application in practical design situations.
As observed in previous works, employment of PCE was computationally efficient for a small number of random variables. A third-order expansion was required to ensure accurate results in the last two examples, while a first-order expansion sufficed in the first example. For this reason, the proposed approach required about 5–10 times the computational effort of the deterministic approach. Even though this seems a drastic increase at first glance, it is much less than what would be required by other general approaches. Sampling-based techniques, for example, often lead to a computational effort increase on the order of hundreds, due to the required sample sizes. This is an important issue in the context of topology optimization, which is by its nature a computationally demanding problem. We emphasize that the presented examples are complex from the conceptual point of view, since they involve heat generation and damage with unknown location. This shows that PCE-based optimization approaches are stable and able to solve complex problems when employed in an appropriate manner.
Several aspects of this work should be further investigated in the future. Employment of sparse quadrature rules for building the PCE will be able to reduce the required computational effort in the presence of more random variables and higher-order expansions. This improvement will allow the study of more complex examples of practical importance, such as problems with several heat generation sources with uncertain location and magnitude. Employment of alternative optimization algorithms, such as stochastic gradient algorithms, should also be investigated. Finally, the \(L^2\) norm of the temperature field was employed here as the performance of the design. However, other quantities may be of practical importance in some applications, such as the heat flux and other norms of the temperature field (e.g., maximum norm). These questions remain as topics for future investigations.
Notes
Note that the term “norm” is used in this work to represent the spatial norm of the temperature field.
Note that the spatial norm of the temperature field now becomes a random variable, since the temperature field is not deterministic anymore.
References
Allaire G, Jouve F, Toader AM (2002) A level-set method for shape optimization. C R Math 334(12):1125–1130
Amstutz S, Novotny A, de Souza Neto E (2012) Topological derivative-based topology optimization of structures subject to Drucker–Prager stress constraints. Comput Methods Appl Mech Eng 233:123–136
Aoues Y, Chateauneuf A (2010) Benchmark study of numerical methods for reliability-based design optimization. Struct Multidiscip Optim 41(2):277–294
Barthelmann V, Novak E, Ritter K (2000) High dimensional polynomial interpolation on sparse grids. Adv Comput Math 12(4):273–288. https://doi.org/10.1023/A:1018977404843
Beck AT, Gomes WJ (2012) A comparison of deterministic, reliability-based and risk-based structural optimization under uncertainty. Probab Eng Mech 28:18–29
Beck AT, Gomes WJS, Lopez RH, Miguel LFF (2015) A comparison between robust and risk-based optimization under uncertainty. Struct Multidiscip Optim 52(3):479–492
Becker EB, Carey GF, Oden JT (1981) Finite elements: an introduction. Prentice-Hall, Upper Saddle River
Bendsøe MP, Sigmund O (2003) Topology optimization: theory, methods and applications. Springer, Berlin
Beyer H, Sendhoff B (2006) Robust optimization—a comprehensive review. Comput Methods Appl Mech Eng 196(33–34):3190–3218
Blatman G, Sudret B (2010) An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Probab Eng Mech 25(2):183–197
Borrvall T, Petersson J (2003) Topology optimization of fluids in Stokes flow. Int J Numer Methods Fluids 41(1):77–107
Bruns TE (2007) Topology optimization of convection-dominated, steady-state heat transfer problems. Int J Heat Mass Transf 50(15):2859–2873
Buzzard GT (2012) Global sensitivity analysis using sparse grid interpolation and polynomial chaos. Reliab Eng Syst Saf 107:82–89
Cameron R, Martin W (1947) The orthogonal development of non-linear functionals in series of Fourier–Hermite functionals. Ann Math 48(2):385–392
Carrasco M, Ivorra B, Ramos AM (2012) A variance-expected compliance model for structural optimization. J Optim Theory Appl 152(1):136–151. https://doi.org/10.1007/s10957-011-9874-7
Chaffart D, Ricardez-Sandoval LA (2018) Robust optimization of a multiscale heterogeneous catalytic reactor system with spatially-varying uncertainty descriptions using polynomial chaos expansions. Can J Chem Eng 96(1):113–131
Coffin P, Maute K (2016) Level set topology optimization of cooling and heating devices using a simplified convection model. Struct Multidiscip Optim 53(5):985–1003
da Silva GA, Beck AT (2018) Reliability-based topology optimization of continuum structures subject to local stress constraints. Struct Multidiscip Optim 57(6):2339–2355
da Silva GA, Cardoso EL (2017) Stress-based topology optimization of continuum structures under uncertainties. Comput Methods Appl Mech Eng 313:647–672
da Silva GA, Beck AT, Cardoso EL (2018) Topology optimization of continuum structures with stress constraints and uncertainties in loading. Int J Numer Methods Eng 113(1):153–178
dos Santos RB, Torii AJ, Novotny AA (2018) Reliability-based topology optimization of structures under stress constraints. Int J Numer Methods Eng 114(6):660–674
Du X, Chen W (2004) Sequential optimization and reliability assessment method for efficient probabilistic design. ASME J Mech Des 126(2):225–233
Dunning D, Kim HA (2013) Robust topology optimization: minimization of expected and variance of compliance. AIAA J 51(11):2656–2664
Duysinx P, Bendsøe MP (1998) Topology optimization of continuum structures with local stress constraints. Int J Numer Methods Eng 43(8):1453–1478
Eom Y, Yoo K, Park J, Han S (2011) Reliability-based topology optimization using a standard response surface method for three-dimensional structures. Struct Multidiscip Optim 43(2):287–295
Ernst O, Mugler A, Starkloff H, Ullmann E (2012) On the convergence of generalized polynomial chaos expansions. ESAIM Math Model Numer Anal 46(2):317–339
Gersborg-Hansen A, Bendsøe MP, Sigmund O (2006) Topology optimization of heat conduction problems using the finite volume method. Struct Multidiscip Optim 31(4):251–259
Ghanem R, Spanos P (1991) Stochastic finite elements: a spectral approach. Springer, Berlin
Gomes WJS, Beck AT (2013) Global structural optimization considering expected consequences of failure and using ANN surrogates. Comput Struct 126:56–68
Haftka RT, Gürdal Z (1992) Elements of structural optimization, 3rd edn. Kluwer, London
Hassani B, Hinton E (1998) A review of homogenization and topology optimization I: homogenization theory for media with periodic structure. Comput Struct 69(6):707–717
Hu C, Youn BD (2011) Adaptive-sparse polynomial chaos expansion for reliability analysis and design of complex engineering systems. Struct Multidiscip Optim 43(3):419–442
Jalalpour M, Tootkaboni M (2016) An efficient approach to reliability-based topology optimization for continua under material uncertainty. Struct Multidiscip Optim 53(4):759–772
Jensen JS, Sigmund O (2005) Topology optimization of photonic crystal structures: a high-bandwidth low-loss T-junction waveguide. JOSA B 22(6):1191–1198
Keshavarzzadeh V, Fernandez F, Tortorelli DA (2017) Topology optimization under uncertainty via non-intrusive polynomial chaos expansion. Comput Methods Appl Mech Eng 318:120–147
Kharmanda G, Olhoff N, Mohamed A, Lemaire M (2004) Reliability-based topology optimization. Struct Multidiscip Optim 26(5):295–307
Kolmogorov A (1950) Foundations of the theory of probability. Chelsea, Hartford
Liu JT, Gea HC (2018) Robust topology optimization under multiple independent unknown-but-bounded loads. Comput Methods Appl Mech Eng 329:464–479
Liu K, Paulino GH, Gardoni P (2016) Reliability-based topology optimization using a new method for sensitivity approximation—application to ground structures. Struct Multidiscip Optim 54(3):553–571
Loève M (1977) Probability theory I, 4th edn. Springer, Berlin
Lohan DJ, Dede EM, Allison JT (2017) Topology optimization for heat conduction using generative design algorithms. Struct Multidiscip Optim 55(3):1063–1077
Long Q, Scavino M, Tempone R, Wang S (2013) Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations. Comput Method Appl Mech Eng 259:24–39
Lopez RH, Beck AT (2012) RBDO methods based on form: a review. J Braz Soc Mech Sci 34(4):506–514
Luenberger DG, Ye Y (2008) Linear and nonlinear programming, 3rd edn. Springer, New York
Luo Y, Zhou M, Wang MY, Deng Z (2014) Reliability based topology optimization for continuum structures with local failure constraints. Comput Struct 143:73–84
Martínez-Frutos J, Herrero-Pérez D, Kessler M, Periago F (2018) Risk-averse structural topology optimization under random fields using stochastic expansion methods. Comput Methods Appl Mech Eng 330:180–206
Nobile F, Tempone R, Webster CG (2008) A sparse grid stochastic collocation method for partial differential equations with random input data. SIAM J Numer Anal 46(5):2309–2345
Novotny AA, Sokolowsky J (2013) Topological derivatives in shape optimization. Springer, Berlin
Pereira JT, Fancello EA, Barcellos CS (2004) Topology optimization of continuum structures with material failure constraints. Struct Multidiscip Optim 26(1–2):50–66
Ren X, Yadav V, Rahman S (2016) Reliability-based design optimization by adaptive-sparse polynomial dimensional decomposition. Struct Multidiscip Optim 53(3):425–452
Ross SM (2006) Simulation, 4th edn. Elsevier, Amsterdam
Santos DPS (2017) Topology optimization in the context of the stationary heat equation. Undergraduate Technical Report. Translated from the original Otimização topológica no contexto da equação do calor em regime estacionário
Schevenels M, Lazarov BS, Sigmund O (2011) Robust topology optimization accounting for spatially varying manufacturing errors. Comput Methods Appl Mech Eng 200(49–52):3613–3627
Schuëller G, Jensen H (2008) Computational methods in optimization considering uncertainties: an overview. Comput Methods Appl Mech Eng 198:2–13
Shiryaev A (1995) Probability, 2nd edn. Springer, Berlin
Sigmund O, Maute K (2013) Topology optimization approaches: a comparative review. Struct Multidiscip Optim 48(6):1031–1055
Silva M, Tortorelli D, Norato J, Ha C, Bae H (2010) Component and system reliability-based topology optimization using a single-loop method. Struct Multidiscip Optim 41(1):87–106
Suzuki K, Kikuchi N (1991) A homogenization method for shape and topology optimization. Comput Methods Appl Mech Eng 93(3):291–318
Tootkaboni M, Asadpoure A, Guest JK (2012) Topology optimization of continuum structures under uncertainty—a polynomial chaos approach. Comput Methods Appl Mech Eng 201–204:263–275
Torii A, de Faria J (2017) Structural optimization considering smallest magnitude eigenvalues: a smooth approximation. J Braz Soc Mech Sci Eng 39(5):1745–1754
Torii A, Lopez R, Miguel L (2017) A gradient based polynomial chaos approach for risk optimization. J Braz Soc Mech Sci Eng 39(7):2905–2915
Torii A, Lopez R, Miguel L (2017) Probability of failure sensitivity analysis using polynomial expansion. Probab Eng Mech 48:76–84
Torii AJ, Novotny AA, dos Santos RB (2016) Robust compliance topology optimization based on the topological derivative concept. Int J Numer Methods Eng 106(11):889–903
Tu J, Choi KK, Park YH (1999) A new study on reliability-based design optimization. J Mech Des 121:557–564
Wiener N (1938) The homogeneous chaos. Am J Math 60(4):897–936
Xiong F, Greene S, Chen W, Xiong Y, Yang S (2010) A new sparse grid based method for uncertainty propagation. Struct Multidiscip Optim 41(3):335–349
Xiu D, Karniadakis GE (2002) The Wiener–Askey polynomial chaos for stochastic differential equations. SIAM J Sci Comput 24(2):619–644
Yoon GH, Jensen JS, Sigmund O (2007) Topology optimization of acoustic–structure interaction problems using a mixed finite element formulation. Int J Numer Methods Eng 70(9):1049–1075
Zhang X, He J, Takezawa A, Kang Z (2018) Robust topology optimization of phononic crystals with random field uncertainty. Int J Numer Methods Eng. https://doi.org/10.1002/nme.5839
Acknowledgements
The authors would like to thank CNPq, Brazil, for financial support of this research.
Technical Editor: José Roberto de França Arruda.
Appendix: Sensitivity analysis
1.1 Variational boundary value problem
Consider the following stochastic variational boundary value problem
which is the constraint of the robust topology optimization problem from Eq. (6). For a deterministic perturbation \(\delta \in L^2(\varOmega )\) with finite size \(h>0\), the conductivity field becomes \(\kappa (\rho ,{\mathbf {X}}) + h \delta\). Since the above variational problem is elliptic, it is known that this perturbation leads to a finite perturbation of the temperature field, \(u + h w\), \(w \in H^2(\varOmega )\). The result is the perturbed stochastic variational boundary value problem
where we assumed that the perturbation \(h \delta\) on the conductivity field does not affect boundary conditions and internal heat generation (i.e., the right-hand side of the previous equation).
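The two weak forms referenced above are not reproduced in this extraction. As a hedged sketch under standard assumptions (internal heat generation f; Neumann data omitted for brevity), the nominal and perturbed problems read:

```latex
\begin{aligned}
&\text{find } u \in \mathcal{U}:\quad
\int_\Omega \kappa(\rho,\mathbf{X})\,\nabla u \cdot \nabla v \,\mathrm{d}\Omega
= \int_\Omega f\, v \,\mathrm{d}\Omega
\quad \forall v \in \mathcal{V}, \\
&\text{find } u + hw:\quad
\int_\Omega \bigl(\kappa(\rho,\mathbf{X}) + h\delta\bigr)\,\nabla (u + hw) \cdot \nabla v \,\mathrm{d}\Omega
= \int_\Omega f\, v \,\mathrm{d}\Omega
\quad \forall v \in \mathcal{V},
\end{aligned}
```

consistent with the assumption that the perturbation affects neither the boundary conditions nor the right-hand side.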
Subtraction of the two above equations gives
as can be checked by the reader. Sensitivity of the stochastic variational boundary value problem then follows from
which results in the variational boundary value problem
The above problem defines the sensitivity w of the temperature field caused by an infinitesimal perturbation \(\delta\) on the conductivity field \(\kappa\) for a given temperature field u. Note that since both \(u + h w\) and u must satisfy the Dirichlet boundary conditions, w must be null where Dirichlet conditions are imposed (i.e., w must satisfy homogeneous Dirichlet boundary conditions). This is represented by \(w \in {\mathcal {U}}^0\) in the equation above. Finally, note that the problem and, consequently, its solution w depend on the random variables \({\mathbf {X}}\).
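As a sketch consistent with the derivation just described (the paper's numbered equations are not reproduced here), subtracting the two weak forms, dividing by h and letting h tend to zero yields the standard sensitivity problem:

```latex
\text{find } w \in \mathcal{U}^0:\quad
\int_\Omega \kappa(\rho,\mathbf{X})\,\nabla w \cdot \nabla v \,\mathrm{d}\Omega
= -\int_\Omega \delta\,\nabla u \cdot \nabla v \,\mathrm{d}\Omega
\quad \forall v \in \mathcal{V}.
```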
1.2 Direct method
In order to obtain the sensitivity of the objective function from Eq. (1), we consider a perturbed temperature field \(u + h w\), \(w \in H^2(\varOmega )\). We then have
Sensitivity of \(\Vert u \Vert ^2\) then gives
as can be checked by the reader. By the chain rule, we then get
If the above expression is employed to evaluate the sensitivity of the function F with respect to a perturbation on the conductivity field \(\kappa\), we can write
where w is the solution of the problem from Eq. (29), where a perturbation \(\delta\) is applied to the conductivity field. Note that the above sensitivity depends on the random variables \({\mathbf {X}}\), since both u and w depend on \({\mathbf {X}}\).
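Assuming, as stated in the abstract, that the performance measure is the spatial \(L^2\) norm of the temperature field, the computation sketched above reads:

```latex
\|u\|^2 = \int_\Omega u^2 \,\mathrm{d}\Omega,
\qquad
D\bigl(\|u\|^2\bigr)[w] = 2\int_\Omega u\,w \,\mathrm{d}\Omega,
\qquad
DF[w] = \frac{1}{\|u\|}\int_\Omega u\,w \,\mathrm{d}\Omega,
```

where the last expression follows from the chain rule applied to \(F = \|u\| = (\|u\|^2)^{1/2}\).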
If sensitivity is evaluated with Eq. (33), then we must solve the variational boundary value problem from Eq. (29) several times, once for each perturbation direction \(\delta\) on the conductivity field. This would result in time-consuming computational routines. To avoid this issue, it is possible to employ the adjoint method.
1.3 Adjoint method
Note that Eq. (33) can be rewritten as
since the term included is null from Eq. (29). If we choose v equal to a special \(\lambda \in {\mathcal {V}}\) that satisfies
then the sensitivity becomes
Finally, if we have \({\mathcal {U}}^0 = {\mathcal {V}}\) (i.e., the trial functions are null where Dirichlet boundary conditions are imposed), a condition that is generally satisfied by standard approaches, we can rewrite the problem from Eq. (35) as
The problem from Eq. (37) is known as the adjoint problem, and its solution \(\lambda\) is known as the adjoint solution. Once the adjoint solution is obtained, the sensitivity given by Eq. (36) does not require the solution of additional boundary value problems. The approach is thus advantageous from the computational point of view. This approach to sensitivity analysis is known as the adjoint method.
Also note that since \(\lambda \in {\mathcal {V}}\), in standard approaches such as Galerkin methods \(\lambda\) must satisfy the same conditions as the trial functions. Consequently, \(\lambda\) must be null where Dirichlet boundary conditions are imposed, i.e., \(\lambda\) must satisfy homogeneous Dirichlet boundary conditions. Moreover, we observe that the right-hand side of Eq. (37) is actually the sensitivity of the function F from Eq. (33).
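The computational advantage described above is easiest to see in a discrete setting. The following is a minimal sketch on a hypothetical 1D conduction toy problem (not the paper's formulation; function and variable names are ours): with \(K(\kappa)u = f\) and the discrete objective \(F = u^\mathsf{T} u\), a single adjoint solve \(K\lambda = \partial F/\partial u = 2u\) replaces one forward solve per design variable, and the sensitivity with respect to each element conductivity is \(-\lambda^\mathsf{T}(\partial K/\partial \kappa_e)u\).

```python
import numpy as np

def assemble_K(kappa, h):
    """Global conductivity matrix for a 1D bar with len(kappa) elements."""
    n = len(kappa)  # n elements, n + 1 nodes
    K = np.zeros((n + 1, n + 1))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e, k in enumerate(kappa):
        K[e:e + 2, e:e + 2] += (k / h) * ke
    return K

def objective_and_sensitivity(kappa, f, h=1.0):
    """F(kappa) = u^T u on the free DOFs, with u = 0 at node 0 (Dirichlet).

    Adjoint method: one extra solve K lam = dF/du = 2u gives the full
    gradient dF/dkappa_e = -lam^T (dK/dkappa_e) u for every element e.
    """
    n = len(kappa)
    K = assemble_K(kappa, h)[1:, 1:]   # eliminate the Dirichlet node
    u = np.linalg.solve(K, f)          # forward (state) solve
    lam = np.linalg.solve(K, 2.0 * u)  # adjoint solve; K is symmetric
    # pad with the fixed DOF to simplify element-level products
    u_full = np.concatenate(([0.0], u))
    lam_full = np.concatenate(([0.0], lam))
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    grad = np.array([-lam_full[e:e + 2] @ ke @ u_full[e:e + 2]
                     for e in range(n)])
    return u @ u, grad
```

A finite-difference check on the returned gradient confirms the adjoint expression; note that for the self-adjoint conduction operator the adjoint system reuses the forward operator, so a single factorization serves both solves.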
Finally, for a given relation between the material conductivity and density \(\kappa : (\rho ,{\mathbf {X}}) \rightarrow \kappa\) (e.g., the one from Eq. (3)), an infinitesimal perturbation \(\xi\) on the density field causes, by the chain rule, a perturbation \(\delta\) on the conductivity field given by
Thus, if perturbations on the density field \(\rho\) are considered, Eq. (36) can be rewritten as
The expressions presented in Sect. 4 also include in their notation the dependence of the sensitivity on the random variables.
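The exact material law of Eq. (3) is not reproduced in this extraction. Assuming a SIMP-type interpolation, a common choice for heat conduction (the symbols \(\kappa_{\min}\), \(\kappa_0\) and the penalization exponent p are our notation, not necessarily the paper's), the chain-rule perturbation takes the form:

```latex
\kappa(\rho,\mathbf{X}) = \kappa_{\min} + \rho^{p}\bigl(\kappa_0(\mathbf{X}) - \kappa_{\min}\bigr)
\quad\Rightarrow\quad
\delta = \frac{\partial \kappa}{\partial \rho}\,\xi
= p\,\rho^{p-1}\bigl(\kappa_0(\mathbf{X}) - \kappa_{\min}\bigr)\,\xi.
```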
Torii, A.J., Santos, D.P.S. & de Medeiros, E.M. Robust topology optimization for heat conduction with polynomial chaos expansion. J Braz. Soc. Mech. Sci. Eng. 42, 284 (2020). https://doi.org/10.1007/s40430-020-02367-6