
1 Introduction

Since Aristotle at least, philosophers have privileged abstract theoretical knowledge over practical knowledge, but today this assumption is generally regarded as far too simple (Bechtel and Hamilton 2006). Philosophers of science are paying more attention to the rich detail and complexity of scientific knowledge by asking new questions about explanation, styles of reasoning, and idealizations (Wimsatt 2007). Engineering, which integrates scientific principles in its design and analysis of artifacts, has long been neglected as a result of this bias toward theoretical knowledge, but it offers a goldmine for philosophical work on explanation and idealization. Engineers often engage with the world using a variety of conceptual and mathematical models, which in turn represent the world through idealizations and are used to explain events and designs. In particular, engineering is philosophically interesting because of the ways it differs from and is similar to a long-cherished philosopher's conception of what scientific theorizing and explanation look like.

The central focus of this chapter is the use of models to explain events and solve problems in engineering. How are models used? What epistemic constraints do they reveal on the part of the engineers who build them? What connections are there between models and the world? What differentiates a good model from a bad one? What inferences do models of engineering license, and how? In what ways do models of engineering tell the truth? Notice that these are current questions in the philosophy of science and are not peculiar to engineering. The answers offered by careful attention to engineering, however, may be.

I will not even attempt to answer all of these questions here. My aims instead are, first, to show through an examination of cases that engineering is ripe for such inquiry and, second, to argue that any plausible account of model-based epistemology in engineering will have to deal with many varied kinds of models in multiple contexts.

In what follows, I will briefly discuss three models that are typical of broader trends of model usage in engineering. First, engineers learning how to flush rivet in the 1930s had in mind a conceptual model of what types of rivet design are plausible. This conceptual model guided the trial-and-error testing process and eventually served to explain optimum design standards.

Second, control volume analysis (CVA) is an instance in which engineering scientists took a theoretical model from physics and adapted it to better fit the problems that engineers face. It shows how everyday engineering knowledge reflects those physical problems, and how engineers idealize, focusing on just those parts of a system needed to solve a problem.

Finally, I will discuss the numerical and physical models used in the Interagency Performance Evaluation Taskforce's failure analysis of the levees in New Orleans after Hurricane Katrina. In this case, models are used to explain why the levees failed, and they collectively employ independent lines of analysis to reach greater confidence in that explanation. These three cases show a diversity in the kinds of models engineers use as well as in the epistemic strategies by which they try to understand the world.

A word of caution is in order before moving on to cases. My survey of engineering's diversity of model kinds and uses is situated within a broader context. Two of my examples are taken from rich case studies in Walter Vincenti's What Engineers Know and How They Know It, a landmark in the history and philosophy of engineering. Vincenti, an aeronautical engineering professor at Stanford University, used his case studies to generate a framework to describe engineering knowledge. I will build on his analysis and focus on the roles models play, but I wish to remain agnostic about a central theme in Vincenti's analysis: Vincenti holds that it is advisable to treat "science and technology as separate spheres of knowledge" (Vincenti 1990, p. 4, quoting Wise).

My main work here is to point to interesting epistemological questions raised by models in engineering, to show their diversity, and to begin a conversation about how models work in these cases and others like them. The extent to which, and the ways in which, they are similar to and different from the models and model-based epistemic strategies in "science" seems to me a question well worth taking up, but only after we have a good understanding of models as they are used in engineering. I will not argue this thesis directly here, but the cases below point to some difficulties in drawing lines between "scientific" and "technological" uses of models and their attendant epistemologies. Whether "scientific" and "technological" models should be treated differently from an epistemic perspective is therefore best left as an open question, as is the issue of what the distinction comes to in these cases.

2 Theoretical Background on Explanation, Laws and Models

Some philosophical accounts of science and engineering have focused on explanation as a central intellectual activity in science and engineering (Woodward 2003; Pitt 1999). The best-known and most influential account is the deductive-nomological (DN) account of explanation, established in Hempel and Oppenheim (1948). The DN model holds that events are explained if their occurrence could have been logically derived from a set of fundamental scientific laws along with observations about the past or present state of the system of interest. This emphasis on laws was highly influential for early accounts of scientific theory, as well as in rare accounts of technological theory (such as Bunge 1972). In the 1970s and 1980s, many philosophers began to reject the strict characterization of theory as axiomatic laws, thus casting the DN account into doubt (Suppe 1974; Cartwright 1983). Scientific explanation today is an area of ongoing research, with no consensus view yet established. To date, the only prominent account of explanation in technology and engineering, Pitt (1999), still relies on a DN-based account.

Given recent critical accounts undermining the importance of scientific laws, some philosophers have focused on models as the cornerstone of scientific theory (Giere 1988; Cartwright 1983, 1999; Lloyd 1994; Woodward 2003). No model (much less a law) ever completely represents the world, as it always leaves out, or idealizes away, some aspects of the world. As it seems impossible for models to have an exactly matching correspondence with real systems, how do models relate to the world? Some philosophers, like Ronald Giere, hold that models are neither true nor false, as they think models do not directly represent the world at all. Instead of using truth as a criterion to evaluate a model's relationship with the world, Giere holds that communities of experts judge them to be similar to the world. In Giere's similarity-based account, the degree to which a model is successful in representing the world is based upon conventions within fields of research. This approach seems fruitful (and has even been applied to engineering theory in Cuevas-Badallo (2005)), but some find similarity-based accounts to be problematic, in part due to the vagaries of using a subjective criterion such as similarity (Callender and Cohen 2006).

There are competing philosophical accounts that still place a heavy emphasis on scientific truth. Nancy Cartwright has long been a harsh and influential critic of traditional conceptions of scientific law. Cartwright (1983) argues that the laws of physics “lie” because they work by idealizing away all the confounding factors that we know to be operating in the real world. They are therefore only true of the sanitized world of the scientists’ imagination, rather than true of the world as we find it. Laws can be made true, of course, by relaxing their idealizing assumptions, but then, according to Cartwright, laws lose their explanatory power. Focusing on how general accounts are made to accurately describe specific situations in the world provides an illuminating way of thinking about what engineers do, and how they come to know, explain, and intervene on the world. Engineers idealize, but do their idealizations lie?

The following three cases showcase different instances of idealization and explanation surrounding models used in engineering. Following from a particular notion of truth, each case suggests the different ways in which engineering models can be said to lie and to tell the truth.

3 Paradigm Cases of Engineering

3.1 Flush Riveting

Engineering is intimately connected with design, but it is not encompassed by it. Whereas design might take a purely instrumental and aesthetic approach, engineers push the boundaries of their designs with scientific analysis of how the world works. A prominent and, ultimately, scientific dimension of that analysis involves trial-and-error testing, in which different designs are tested and evaluated. Even in highly developed fields like aeronautical engineering, trial-and-error analysis is an intimate part of the design process, as is illustrated by Vincenti's analysis of flush rivets. While Vincenti does not claim this, I think the work done by these engineers can be interpreted as being guided by conceptual models: mental conceptions of what the rivet should be like and how it will perform. Engineers used these conceptual models of what flush rivet designs should look like to guide the testing process and to bound ultimate explanations of what the best designs should be.

Flush rivets – rivets that are embedded flush with a metallic surface – arose in the aeronautical industry between 1930 and 1950 (Vincenti 1990, pp. 170–199). Generally, riveting joins together two pieces of metal by way of a metal rivet placed through matching holes; the rivet is sealed by upsetting, or depressing, it on one side. This can leave a protrusion extending beyond the plate surface. A flush rivet has this protrusion removed, making its surface continuous with that of the plate.

Flush riveting existed before the 1930s, having been used in ships and land-based structures since the 1830s. This history, however, provided little insight for aeronautical work (p. 175): applying flush riveting to aircraft was more difficult because the thin, lightweight plates used in aircraft wings were too fragile to be riveted by the normal methods. The motivation for flush riveting in aircraft was obvious from the start: protruding rivets caused unnecessary drag. As aircraft grew faster, the small speed increase that flush riveting offered came to be viewed as necessary for aircraft development, and research quickly commenced.

Three corporations kept written records of their research into flush riveting: Douglas Aircraft Company, Curtiss-Wright Corporation, and Bell Aircraft. All three developed approaches that dimpled the sheet around the rivet, but each company came to different conclusions about what type of riveting process to use. Each design was judged by a multitude of criteria: whether or not the rivet could be successfully placed, the ability to work with differently sized plates, cost, ease of installation, and strength. The process of determining and explaining why one design was better than another was complex. For example, all three corporations began by using a 78° head angle for their rivets, but Douglas Aircraft's testing found that the 78° rivets were too steep for their purposes because they caused the sheet to become too brittle. As an alternative, Douglas adopted a 100° head angle rivet. Curtiss-Wright saw the same problem with the 78° head angle, but attempted to keep that angle while counteracting the increased brittleness with a different flush-riveting process, the machine countersink. They reduced the size of the rivet head and altered the positioning of the shaft, and were able to obtain secure results with the smaller head angle. Bell Aircraft eventually tested both methods but found problems with each; they opted instead for much larger 120° rivets. There were further complications that made it difficult to decide upon optimum rivet designs. Airplane designers, when trying to assess the strength of the different kinds of flush rivets and their ability to handle the loads across plates on the surface of the aircraft, were unable to find an analytical answer. According to Vincenti, designers' knowledge of rivet strength "came entirely from experiment" (p. 189).

Each of these corporations worked for several years using its particular method, with each being generally satisfied with its results. The industry converged on a 100° head angle over the course of the 1940s, but not because one method clearly showed itself to be the best – aerospace industry associations wanted unified standards, so as to cut down on the number of costly rivet assembly tools needed for manufacturing (p. 192). Despite the lack of a conclusive best practice, the 100° rivet was deemed acceptable. Vincenti observes that "flush riveting today is still not a closed book," emphasizing that the level of detail required to manage the various problems in dealing with rivets was and still is complex, and that cutting-edge aerospace techniques have still not removed the need for rigorous trial-and-error testing (p. 193).

The flush riveting example holds numerous lessons for a philosophy of engineering. There are first the lessons that Vincenti himself draws. As many (but not all!) engineers often have to do, these aeronautical engineers had to create optimum design standards for a new type of rivet, without direct guidance from any physical or theoretical “first principles.” Further, each company’s development of a different rivet standard shows that design choices are not always determined on the basis of testing results. The eventual convergence of the industry on the 100° rivet shows both how technological development can go in unexpected directions, as well as how design decisions are affected by engineering needs such as low cost, standardization and ease of manufacturability.

Beyond Vincenti, trial-and-error testing obviously can be an efficient strategy for attempting to justify and explain the value of particular designs. However, designs are not constructed de novo: existing precedents provide a context for every level of the engineer's approach. For example, engineers often stick with previously existing tools; here, the existing prominence and ease of riveting in aerospace and other industries made it an efficient basis for new designs. Similarly, the design and interpretation of tests is also theoretically determined; basic stress theory and mathematics can describe how stresses on the wing will propagate, and how much stress each rivet might be forced to bear. From theory, engineers thus define tentative initial goals for what a successful test should look like. Trial-and-error analysis is clearly neither simple nor derivative, but is always bounded by a context. Its success in many fields indicates the value of trial-and-error testing as a general epistemic strategy.

In part due to this context surrounding both design and trial-and-error testing, I suggest that we think of engineers as utilizing a conceptual model to guide the design and testing processes. The key here is that the testing described above was not a random process: context, existing tools and practices, and basic understanding play a guiding role. I label this guiding context a conceptual model. While Giere's model-based view of theories might likewise claim that engineers use conceptual models, he does not consider how such models guide the search for acceptable rivet designs and, perhaps more importantly, guide away from unacceptable testing and design practices. As noted above, the liberal use of the term model to describe general conceptual knowledge is not without objection (Godfrey-Smith 2005), but it may prove a useful way of examining the role and nature of an engineer's knowledge.

For example, what connection does an engineer's conceptual model have to the justification of the final design? Is there any sense in which an engineer's testing and conceptual knowledge can explain the choice of rivet head angle? Explanation, conceived in a model-dependent sense, could result from a set of tests guided from the beginning by a conceptual model, probing the effect of different design choices against that model. In this way, a model establishes what an observed test result means, and gives a predetermined sense of what a plausible design should look like. This complex process of explaining a design has no place in the more linear DN account of explanation.

James Woodward (2003) has given an account of causal explanation based on the idea of manipulability: once one understands how a given manipulation of a "cause" will manipulate its corresponding effect, one can claim to have explained the event. This notion of explanation is highly interventionist, and perhaps for design engineers explanation is achieved when the designers understand how changing the design would affect its tangible behavior in the field. In this explanatory context, it does not matter whether the engineers can describe the scientific details underlying the cause and effect, so long as they know how to produce the desired effect.

3.2 Control Volume Analysis

Outside of the design process, and closer to areas traditionally associated with scientific experience, engineers can characterize scientific principles differently than traditional scientists do. To illustrate this, Vincenti focuses on the exclusive use by engineers of the method of control volume analysis (CVA) (pp. 112–136). CVA begins from simple equations that define the conservation of mass, momentum, energy and entropy of a system, and focuses on how to apply these equations to arbitrarily established control volumes (Fox et al. 2004, pp. 99–152). CVA uses the equations of conservation to derive new equations that describe the change of various properties within the control volume over time. Often employed in fluid mechanics problems such as the one shown in Fig. 9.1, control volumes are usually placed around systems of interest, such as the inside of a piston or the interior of a pipe where fluid flow occurs. With the previously derived CVA equations, the engineer can calculate incoming, outgoing and stored properties within a variety of possible control volumes.

Fig. 9.1 Example of fluid flow in a pipe, where CVA can be used to calculate an unknown variable based on known areas and volumes and the conservation of mass
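
To make the kind of example in Fig. 9.1 concrete, here is a minimal sketch, in Python, of the calculation a control volume licenses. The pipe dimensions and velocities are illustrative assumptions, not values from Vincenti or Fox et al. For steady, incompressible flow, conservation of mass over the control volume reduces to the continuity relation A1V1 = A2V2, from which the unknown outlet velocity follows.

```python
import math

def outlet_velocity(d_inlet, v_inlet, d_outlet):
    """Outlet velocity (m/s) for steady, incompressible pipe flow."""
    a_inlet = math.pi * d_inlet ** 2 / 4.0    # inlet cross-sectional area (m^2)
    a_outlet = math.pi * d_outlet ** 2 / 4.0  # outlet cross-sectional area (m^2)
    # Mass conservation over the control volume: rho*A1*V1 = rho*A2*V2;
    # density cancels because the flow is incompressible.
    return v_inlet * a_inlet / a_outlet

# Hypothetical pipe narrowing from 10 cm to 5 cm diameter:
v2 = outlet_velocity(d_inlet=0.10, v_inlet=2.0, d_outlet=0.05)
print(f"outlet velocity: {v2:.1f} m/s")  # 8.0 m/s: the flow speeds up 4x
```

Note how much the control volume ignores: everything about the fluid's path between the two cross-sections is idealized away, yet the overall result the engineer typically needs is still recovered.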

Vincenti argues that CVA is used exclusively by engineers, while physicists rarely apply the tool. The counterpart in physics to CVA is control mass analysis. This form of analysis does not track a fixed region of space, but instead follows a group of mass particles as they move. Applying control mass analysis to fluid mechanics problems often runs into enormous complexity, as a group of particles can disperse beyond what can easily be tracked mathematically. For the physicist, analyzing such complex motion could be highly desirable, but engineers are often tightly constrained by the budgets and schedules of their projects. Solving the conservation equations, adapted to a control volume, is easier and more efficient than tracking the paths of particles as they move. Further, as Vincenti also points out, engineering problems concerned with fluid flow often focus on overall results rather than on a detailed description of events throughout the volume of fluid motion.

CVA represents one area where the epistemic demands of the working engineer affect the type of analysis the engineer uses. Vincenti's historical account of the development of CVA shows how the need for fast, accessible analysis led academic engineers to refine the CVA tool. Famous academic engineers like Ludwig Prandtl and Theodore von Kármán refined the concept, and it was later picked up by American academic engineers who wrote the technique into engineering textbooks, partly due to the ease with which it could be learned by engineering students. Similar economic and educational constraints are capable of affecting how engineers use and develop other conceptual tools.

CVA also raises issues regarding idealization in engineering. Here the idealization comes in two ways: from defining the boundaries of the system, excluding parts that do in fact exist; and from examining only those properties inside the control volume that figure in the conservation equations, rendering irrelevant to the model any behavior (such as turbulence) that might actually occur. Control volumes and the equations of CVA clearly serve as a model of a system, and highlight the aspects of a system that can be ignored by the engineer. By removing some features of a physical system, a control volume model allows knowledge from physics and thermodynamics to be applied to determine property changes in the volume. This idealization clearly favors the general over the particular, and as long as the relevant assumptions of the CVA analysis (such as that the generation of energy inside the volume is known, or that the flux of mass and energy leaving the system is accurately known) are realistic enough, the model's predictions about property changes over time will be realistic enough. Some scientists and philosophers have written about tradeoffs between generality, precision and realism in other scientific models (Levins 1966; Wimsatt 2007). Examining engineering models and the tradeoffs they make between these desiderata could provide a way to continue analyzing how economic and problem-driven concerns affect the ways engineers approach and understand the world.

3.3 Numerical and Physical Models Used in the Failure Analysis of the New Orleans Levees

Engineering does not occur in a vacuum. In the wake of Hurricane Katrina and the tremendous failure of the levees in New Orleans, the Interagency Performance Evaluation Taskforce (IPET) systematically analyzed the numerous levee and floodwall failures using a set of models (IPET 2007). Whereas some levees failed after being overtopped by flooding water, some of the most lethal failures occurred before the levees were overtopped, at water levels that the levees were designed to withstand. One such failure occurred at the 17th St. Canal, where the breach flooded 85% of the downtown area and has been estimated to have caused half of the fatalities that occurred during Katrina (Seed et al. 2006). The social and ethical dimensions underpinning the Katrina disaster are likely the most important ones to address in preventing future failures, but it is vitally important that IPET and the Army Corps of Engineers understand, as accurately as possible, the technical causes of an event like the canal failure.

Katrina emphasizes how engineers can be called upon to explain failures and to try to prevent them in the future. In some cases of failure, such as with the levees, it is impossible to replicate the original conditions of failure, and investigating engineers have to make the most of an initially limited set of data. As is common practice in engineering, IPET engineers used multiple models to interpret incomplete information and to explain the cause of the levee failures. The levee models here are particularly interesting because they present both a set of questions for future analysis and a way for philosophical reflection to lead to better engineering practice. The IPET engineers, given post-failure information collected from the site along with rough knowledge of floodwater height at the time of failure, used a variety of models that are in some ways independent of one another.

Understanding the forms and degrees of independence between the models is a conceptual question that relates to the physical first principles, idealizations and data sets used by the models, and it has a direct bearing on the level of confidence engineers should have in the accuracy of their model projections. As is clear in the levee models, they are based on different principles, and the importance of questioning the scope of agreement between these models is established by subsequent criticism of the IPET analysis.

The different modeling tools employed to analyze many of the Katrina levee failures are discussed below, drawing on the 17th St. Canal analysis.

3.3.1 Finite Element Analysis Models (IPET 2007, V-45-V-52)

Finite element analysis (FEA) is a commonly used practice in engineering that subdivides an object into discrete "elements," which are then subjected to forces or energy fluxes that are balanced across the elements. Elements are typically one-dimensional beams or interfaces or two-dimensional blocks (three-dimensional bricks are also available but are rarely used in practice). Finite element analysis is in some ways merely a method of solving differential equations of force equilibrium, conservation of mass, and continuity of displacement collectively across the domain being modeled for the specified boundary conditions, thus allowing FEA to solve problems such as the settlement of an embankment or seepage through an earth dam. The accuracy of an FEA is highly dependent upon the grid resolution employed to model the system, and the results of an analysis depend upon both the boundary conditions and the inputs to (demands upon) the system. FEA can generate detailed information on stresses and strains in each part of the system and can model the response of the system over time.
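
As a gesture at how the element-assembly-and-solve pattern works, here is a minimal sketch of a one-dimensional finite element analysis of an axially loaded elastic bar, written in Python with NumPy. It is a toy under stated assumptions (linear elasticity, uniform elements, illustrative material constants); the IPET analyses used far richer two-dimensional soil-structure models.

```python
import numpy as np

E, A = 200e9, 1e-4        # assumed Young's modulus (Pa) and cross-section area (m^2)
n_elem, length = 4, 2.0   # four equal elements over a 2 m bar
L_e = length / n_elem
k = E * A / L_e           # axial stiffness of one element (N/m)

n_nodes = n_elem + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):   # assemble each element's 2x2 stiffness block
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0],
                                         [-1.0, 1.0]])

f = np.zeros(n_nodes)
f[-1] = 1000.0            # 1 kN tensile load at the free end

# Boundary condition: node 0 is fixed, so solve only the remaining DOFs.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

strain = np.diff(u) / L_e              # strain in each element
stress = E * strain                    # stress in each element (Pa)
print("tip displacement (m):", u[-1])  # analytic check: F*L/(E*A) = 1e-4 m
print("element stresses (Pa):", stress)  # uniform F/A = 1e7 Pa, as expected
```

The same skeleton scales up: finer grids and richer element types buy resolution at the cost of more detailed input requirements, which is exactly the tradeoff noted below for the IPET soil models.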

FEA models were used to generate factor-of-safety values: the calculated maximum yield strengths of the soil divided by the stresses calculated in the FEA model for the flood conditions along an assumed failure surface. The factors of safety in the FEA analyses conducted by the IPET team were all above one, indicating that failure would not occur, except in models which assumed the creation of a gap between the floodwall and the canal embankment. Because the assumption of a gap between the wall and soil was the only way to predict a failure prior to overtopping for the 17th St. Canal (a possibility likewise supported by the subsequent models), the IPET team concluded that gap formation contributed to the failure, which the model also predicted to occur in a clay layer beneath the levee.

3.3.2 Limit Equilibrium Assessment Model (IPET 2007, V-41-V-43)

Limit equilibrium analysis (LEA) is a relatively old method, developed by civil engineers, for analyzing the stability of slopes and embankments; it is valued for its simplicity, accuracy, and ease of computation (Duncan and Wright 2005). LEA analyzes possible failure along a slope by positing a failure surface (here a circular one). Given the known loads caused by the weight of the water and of the soil, moment and force equilibrium are applied to establish the forces along the postulated shear plane. A shear strength is assigned to each portion of the postulated failure surface based upon assumed strength parameters and the applied normal forces. A factor of safety is calculated as either the resisting moment about the center of rotation due to the shear strength divided by the driving moment due to the induced shear forces or, for sliding block failures, the integrated resisting shear strength divided by the integrated applied shear stress. In an LEA analysis, all possible ("kinematically admissible") failure surfaces must be evaluated to find the surface with the lowest factor of safety. By definition, a factor of safety less than one on any surface means that the applied shear stress along that surface is great enough to exceed the shear strength along that surface and cause slippage across that failure plane.
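
To illustrate the factor-of-safety logic in miniature, the following Python sketch evaluates several hypothetical circular failure surfaces for a φ = 0 (undrained clay) slope, the classical "Swedish circle" form of LEA in which the circle radius cancels out of the moment ratio. All slice weights, base angles, arc lengths and the shear strength are invented for illustration; this is the general form of the method, not the IPET computation itself.

```python
import math

def swedish_circle_fs(c_u, slices):
    """Factor of safety for one trial circular surface in phi = 0 soil.

    slices: list of (weight_N, base_angle_rad, base_length_m) tuples.
    The circle radius R multiplies both moments and cancels, so
    FS = (c_u * sum of base lengths) / (sum of W * sin(alpha)).
    """
    driving = sum(w * math.sin(a) for w, a, _ in slices)   # sum W sin(alpha)
    resisting = sum(c_u * l for _, _, l in slices)         # sum c_u * l
    return resisting / driving

# Hypothetical trial surfaces, each discretized into three slices.
trial_surfaces = {
    "shallow": [(50e3, 0.2, 1.2), (80e3, 0.4, 1.3), (60e3, 0.6, 1.5)],
    "medium":  [(70e3, 0.3, 1.4), (110e3, 0.5, 1.5), (80e3, 0.7, 1.8)],
    "deep":    [(90e3, 0.4, 1.6), (140e3, 0.6, 1.8), (100e3, 0.8, 2.2)],
}

c_u = 25e3  # assumed uniform undrained shear strength (Pa)
fs = {name: swedish_circle_fs(c_u, s) for name, s in trial_surfaces.items()}
print(fs)
# The verdict rests on the minimum over all admissible surfaces:
print("critical surface:", min(fs, key=fs.get))
```

The need to sweep all kinematically admissible surfaces, not just one family of them, is exactly the point pressed in the NRC criticism discussed below.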

Analysis of failure was done by examining what conditions would generate a factor of safety less than one. The LEA model indicated that for the factor of safety to drop below one, a gap had to emerge between the floodwall and the canal-side soil; with such a gap present, failure was more likely to occur at the flood-level heights seen during Katrina.

3.3.3 Centrifuge Models (IPET 2007, V-43-V-45)

Physical centrifuge models attempt to replicate at a small scale the performance of geotechnical systems, e.g. the flooding of a levee. Because of difficulties in physically modeling the in situ geometry and properties of the ground, the strength of centrifuge testing lies primarily in the identification of mechanisms of failure, as well as in the calibration and validation of numerical models, i.e. by numerically modeling the centrifuge test. The spinning of a centrifuge models the gravity-induced body stresses in the soil mass (which govern its shear strength) and causes the water to exert stresses roughly approximating those experienced during flooding. In a centrifuge model test, physical dimensions in the model scale according to the centrifugal acceleration: at an acceleration of 30 times gravity (30 g's), lengths in the model are scaled by a factor of 30 with respect to the prototype. The capacity of the centrifuge (in terms of g's and payload weight) determines the size of the physical model that can be tested based upon this similarity relationship.
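
The similarity relationship can be made concrete with a short sketch; the levee height, soil unit weight and g level below are illustrative assumptions. The key point is that scaling lengths down by N while spinning at N g leaves the gravity-induced stresses at homologous depths unchanged, which is why a small model can reveal full-scale failure mechanisms.

```python
def centrifuge_model_length(prototype_length_m, g_level):
    """Model dimension at the given centrifugal acceleration (in g's)."""
    return prototype_length_m / g_level

# A hypothetical 15 m levee cross-section tested at 30 g:
print(f"model height: {centrifuge_model_length(15.0, 30):.2f} m")  # 0.50 m

# Stress similarity: effective unit weight scales up by N while depth
# scales down by N, so vertical stress at a homologous point matches:
gamma, z, N = 18e3, 6.0, 30          # unit weight (N/m^3), depth (m), g level
sigma_prototype = gamma * z          # ~108 kPa at 6 m depth in the field
sigma_model = (N * gamma) * (z / N)  # identical stress in the spinning model
assert abs(sigma_prototype - sigma_model) < 1e-9
```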

A centrifuge model test of an idealized levee system was conducted by the IPET team (IPET V-43). As with the FEA and LEA models, the centrifuge model suggested that a gap between the floodwall and the canal-side soil was necessary for failure. It likewise suggested that the location of failure was along a layer of clay at the bottom of the structure.

To synthesize the results of the three studies cited above: the levee breach was analyzed by IPET using three different models. However, each of the models used initial conditions (levee cross section size, soil properties, etc.) based upon observed data from the field, and thus, in a certain sense, each model is based upon roughly similar initial conditions. The underlying analytical principles behind each model are significantly different. The principles for the models at hand are:

  • FEA models solve equations of stress and strain (which incorporate more physical phenomena than the LEA static equilibrium analysis) across small elements throughout the levee.

  • LEA solves static equilibrium equations over an aggregated plane where failure is assumed to occur, determining a factor of safety for a given load.

  • Centrifuge models are idealized physical models subjected to similar loading as experienced in the field, with the experiment results used to identify the governing mechanisms of behavior in the field.

The physical model is perhaps fundamentally different from the mathematical models. The LEA model solves broad equations of static equilibrium along assumed failure slopes, which avoids calculating strains (it only calculates stress on the postulated failure surface) and does not attempt to understand the integrated system behavior throughout the levee. The FEA model includes static equilibrium equations, but is more comprehensive in its stress analysis; however, given its greater resolution and more detailed input soil property requirements, it is subject to errors in soil strength calculations.

No set of models can be completely independent of one another, but the differences between these models are enough to warrant increased epistemic confidence as a result of robust agreement between largely independent means. Within theoretical ecology, Richard Levins (1966) described robustness as a matter of truths lying at the "intersection of independent lies." This is a provocative phrasing, but what he showed was how agreement among independent models, each with its own inadequacies and idealizations, significantly increases one's confidence in model results. The metaphor of an intersection of independent lies helps to characterize how independent models, all of which have their own flaws and inaccuracies, can converge on what is happening in physical systems.
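
One stylized way to see the epistemic force of such agreement is a toy Bayesian calculation. This is an illustration of Levins's point, not a reconstruction of how IPET reasoned, and the prior and likelihood ratios are invented: if each model is an independent, fallible indicator of the gap hypothesis, then agreement multiplies the evidence.

```python
def posterior_after_agreement(prior, likelihood_ratios):
    """Posterior probability of a hypothesis H after several independent
    models all report results favoring it; likelihood_ratios[i] is
    P(model i agrees | H) / P(model i agrees | not H)."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr                 # independence: the evidence multiplies
    return odds / (1.0 + odds)

# Invented numbers: a 50/50 prior and three modestly reliable models
# (FEA, LEA, centrifuge), each 4x likelier to indicate a gap if one formed.
print(posterior_after_agreement(0.5, [4.0]))            # one model:  0.80
print(posterior_after_agreement(0.5, [4.0, 4.0, 4.0]))  # all three: ~0.98
```

The multiplication only goes through to the extent that the models really are independent, which is precisely what makes the question of their independence epistemically pressing.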

Each model suggested a gap was necessary, but understanding whether the model results were in agreement was not simple. In response to early drafts of the IPET report, some engineers questioned the scope of the agreement between the different models. The National Research Council, focusing on the two numerical models, is largely complimentary: “The IPET has used two independent analysis methods for evaluating mechanisms of failure at the breach sites” (NRC 2006, p. 11). With the limit equilibrium method, however, they assess the breadth of the analysis used and argue that “It is not clear why the IPET calculations have been restricted to circular arc failure mechanisms, which apparently are not the critical mechanisms associated with levee breaching as they are inconsistent with Finite Element analyses and physical model tests the IPET has conducted … The IPET team has not reported on analyses for planar and other alternative sliding surfaces and has thus opened itself to criticisms of its conclusions” (ibid).

In the early IPET draft, were the FEA and LEA models really in agreement with one another in explaining the 17th St. Canal failure? Both models concluded that the soil underneath the levee was the location of failure, though it is apparently less clear what the exact mechanism was. The limit equilibrium analysis focused on circular arc failure mechanisms, which, as noted in the NRC quote above, were inconsistent with the finite element model. The IPET team later performed the additional analysis, but the point remains that understanding agreement can be difficult even amongst highly trained engineers.

Levins's discussion of robustness provides a way to characterize this example. Given two largely independent methods of analysis, there was clear agreement about the location of the failure, but there might not have been a similar "intersection" as to the exact causal mechanism of the failure. The independent models agreed that gap formation contributed to the failure, making that conclusion robust. The NRC's call for better integration of the different model outputs would serve to change the nature and robustness of the models' agreement.

This strikes me as an area in which philosophers, working in partnership with engineers, can help to generate conceptual clarity that can lead to better failure analyses, which in turn can help design safer levees.

4 Conclusion: How the Models of Engineering Tell the Truth

The examples used in this chapter serve as a rough sketch of a taxonomy of models relevant in engineering: conceptual models (used in flush riveting), analytical models (CVA), numerical models (FEA and LEA) and physical models (centrifuge). These models are used for diverse purposes, and they help guide the engineer's search for understanding. These kinds of models may be fundamentally different from one another (what commonality does a conceptual model share with a physical model or a numerical model?), but possible differences do not undermine the proposed research examining how each kind of model is used to understand and represent the world (Godfrey-Smith 2005). My examples were chosen from traditionally recognized engineering practice, and analyzing them raises questions about whether there is any meaningful distinction between the models and theory used in science and the models and theory used in engineering. The point remains that what engineers do has been philosophically underexamined, and here I have tried to present a range of kinds of models in context in an effort to show that philosophers' neglect of engineering is not well founded. Indeed, the question of whether engineering and science are different enough to require separate understandings should turn importantly on how we understand model use in both areas of study.

Nancy Cartwright, in her 1983 book How the Laws of Physics Lie, helped provoke a sea change in thinking about scientific laws, truth, and explanation. Part of this lying comes in the form of idealizing assumptions – of describing a world that we can track rather than the world as it presents itself to us. Models, of course, are very often tractable only if we make similarly heroic assumptions. Levins, who was mentioned in the last section, was motivated in part by noticing that his models contained so many assumptions that they were literally false. In this, engineering is no different. However, the epistemic and efficiency constraints on engineers may force them to do a better job of telling the truth, and it may thus be that engineering, despite having been neglected and dismissed as unscientific, describes and explains the world more accurately than science properly so called. Whether this is the case we will not know until we understand engineering models and how they work in epistemic and practical contexts.