Introduction

Engineering design is an essential part of the process of constructing and maintaining modern complex systems such as computers, smartphones, airplanes, power plants, and urban areas. All these artifacts have complex chemical, electrical, and mechanical features, which can only be understood and controlled by relying on scientific theories. In this sense, modern technology is science based. It is impossible to imagine how modern forms of artificial materials, control systems, and structural systems would have been possible without advanced scientific mathematical modeling. At the same time it is also true that modern mathematics and natural science depend heavily on technology. It is quite striking that some of the most theoretical questions in physics and cosmology, for instance, about the origin of life and the formation of matter, require the most advanced machines that have ever been built. The purpose of the Large Hadron Collider at CERN is to produce events which might shed light on the formation of matter. This machine is built with the aim of understanding the most fundamental aspects of the material world; all the technological benefits that flow from experiments with it are side effects. So, natural science and technology are intertwined and interdependent in a complex way.

In spite of the close interdependence between science and technology, they are two different activities with their own specific logics. This has been forcefully documented by, among others, Derek de Solla Price in the essay The Difference Between Science and Technology:

We have the position, then, that in the normal growth science begets more science and technology begets more technology. The pyramid-like exponential growths parallel each other, and there exists what the modern physicist would call a weak interaction – at the educational level and the popular book and the Scientific American stage – that serves just to keep the two largely independent growths in phase (Price 1975, pp. 129–130).

The two exponential growth processes – science and technology – have the same formal structure and their own internal autonomous logic of development. Science develops in the following way:

Science is a sort of growing jigsaw puzzle with a dozen sexes, and wherever there is a family of knowledge – an annual supply of knowledge – children are produced. Old knowledge gives rise to new at an exponential rate. From time to time new subdivisions of knowledge appear, but the general process goes on without let or hindrance, without fail in times of poverty and war, without hurrying in times of need. There is, strangely enough, very little man can do to make knowledge come more or less quickly or to make it come in the directions we may wish (ibid., p. 129).

And technology has a similar growth pattern:

Technology, the other twin, grows, I believe, in a very similar fashion. It is evident to any historian of technology that almost all innovations are produced from previous innovations rather than from an injection of any new scientific knowledge. We do not see it so well just because the technologists are keeping quiet rather than shouting from the rooftops as the scientists do (ibid., p. 129).

So, according to Price, science and technology constitute two parallel processes that interact weakly, but where the main driving forces in each process are internal problems. New scientific results lead to new scientific problems which are the basis for new scientific results, and new technical innovations lead to new technological challenges which result in new innovations. Although there are many other determining forces in this development, and although science and technology today are more intertwined than ever before, this seems to be an essentially correct picture. Science has its own internal logic, and so does engineering.

Price describes the situation from the outside. In this paper we will shed some more light on the interaction between science and engineering by having a closer look at the way in which knowledge is produced within the two fields.

The Subject Matter of Science and Engineering

The subject matter of natural science is not, in general, the ordinary objects and phenomena we see in our daily lives. The objects and phenomena under scrutiny by scientists are carefully delimited from their environment. They are idealized and transformed to such a degree that many of them do not have natural counterparts. For instance, when molecular biologists or pharmacologists study how various drugs are transported over biological membranes, they do it by taking test samples of tissue, cultivating them in artificial environments, and eventually decomposing them into molecular parts. This is a long and complex process which is difficult to control and interpret correctly. So, what the scientist eventually measures are variables, e.g. concentrations of enzymes, proteins, etc., which are related to the original biological tissue in a very indirect and complex way. The biologist cannot see the membrane directly, but only indirectly as constructed pictures made by advanced equipment like electron microscopes, MR scanners, ultrasound instruments and the like.

The situation in engineering research is similar. Studies of the strength of various building components, for instance, are in many cases based on laboratory test samples which are isolated and manipulated in ways that make them appear very different from the way they would look at real building sites.

The process of preparation and delimitation of objects so that they can be studied scientifically leads to the construction of what we are going to call model objects. These objects are abstract constructions. But they are the objects that scientific theories are about, and the construction of these objects is an essential part of scientific reasoning.

It is well-known that reasoning in natural and engineering science involves complex structures of hypothetico-deductive systems where principles, laws, and empirical generalizations are hierarchically ordered with respect to epistemological significance. We will briefly discuss such systems and their role in scientific reasoning.

As model objects are, in a sense, theoretical constructions, we must face the problem of how scientific theories are related to reality. The paper ends with a few reflections on truth and reality.

It should be noted that engineering science has as its main objects of research artifacts like pumps, industrial plants, electronic devices and buildings. It is impossible to define such artificial objects completely in naturalistic terms. They have an intentional side, and they are defined and understood in terms of their operational principles in the sense of Polanyi (see Polanyi (1958), p. 176). One must consider the nature of the goals they serve. For instance, a water pump serves the goal of pumping water from a well, and such a machine is not well defined unless this goal is taken into consideration. As artifacts ultimately must serve certain functions in society, engineering science and work must include considerations of immaterial and intentional phenomena like production, planning, organization and other circumstances of general economic and societal significance. These aspects traditionally belong to the social sciences and in some cases the humanities. Hence, engineering science covers an extremely broad spectrum of activities. Many engineering fields of study cannot be characterized as belonging only to the natural sciences. However, in this paper we will only deal with those naturalistic aspects of engineering research which are epistemologically similar to what one finds in the natural sciences. Intentional phenomena will not be taken up in this paper.

Theory, Research Objects and Model Objects

As already mentioned, scientists manipulate, idealize and transform their research objects in the laboratory so that the properties and partial aspects of the object in which they are interested are isolated and appear as pure as possible. They eliminate all kinds of irrelevant and disturbing properties so that they can be as certain as possible that the single aspect or property under study is present and not disturbed by irrelevant factors. For example, when solid-state physicists are studying magnetic properties of metals they abstract from the mechanical and chemical properties of these metals and they make sure that all possible disturbing magnetic fields are kept away. We call this process of delimiting a research object a preparation process.

It is evident that this process of preparation of the research object requires theoretical considerations. The solid-state physicists must know that there are no interactions between gravitational and magnetic forces and they must know theoretically how to shield off other fields. Without such knowledge they would not be able to prepare a research object correctly and their experimental work would be useless. Consequently, the preparation process requires theory both as a tool for selecting relevant properties of the research object and as a tool for assessing the correctness of the preparation of the research object. It requires theoretical considerations to make sure that the research object is an adequate representation of reality.

By “reality” we mean the world as it looks to us as ordinary people. We do not mean anything metaphysically complex. What we want to stress is that the natural sciences, and also many engineering research fields, are not directly about the world as it appears to us. They are about highly idealized and abstract features of this world. Even for engineering research it is often difficult to see how such research has anything relevant to say about the world as we know it, and it is not unproblematic to transfer results from the laboratory to the real world and use them for actual engineering design.

Preparation of Research Objects

The preparation process involves at least three partial processes. First, the part of reality we want to study is delimited from its environment. We may, for instance, be interested in studying how a certain drug influences a membrane in the brain of a human being. The object under study is delimited to this membrane, and the membrane is, so to speak, isolated from the other parts of the human being.

Secondly, we ignore all properties of the membrane except some of its electrochemical properties which we want to measure. This is an abstraction process. We abstract from all properties except one or a few which we want to study experimentally. In this way we end up with a generic object. All relevant properties are lifted to this generic membrane. Consequently, when relevant properties of the concrete membrane are determined they are considered as properties of all membranes of that generic type. Research is not about properties of concrete tokens but about those properties that can be lifted to generic types of objects.

Thirdly, we also idealize the object. We consider the membrane to be an ideal, homogeneous and generic exemplar of the actual membrane. That is, the parameters we are measuring are supposed to be valid for any membrane of the correct type and not only valid for the actual membrane under consideration. All inhomogeneities and imperfections of the actual membrane are smoothed out. In praxis this is done by considering variations in measurements as noise which must be eliminated by, for instance, statistical analysis.
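The statistical elimination of noise just mentioned can be made concrete with a minimal sketch. In the code below all measurement values are invented for illustration only; the spread of repeated laboratory measurements is treated as noise around the single value attributed to the ideal, generic membrane:

```python
import statistics

# Hypothetical repeated measurements of a single membrane parameter
# (say, a concentration in arbitrary units); the numbers are invented
# purely for illustration.
measurements = [4.93, 5.11, 4.87, 5.02, 5.19, 4.95, 5.08]

# Variation between runs is treated as noise around the value attributed
# to the ideal, generic membrane, and is smoothed out statistically.
mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / len(measurements) ** 0.5

print(f"generic parameter estimate: {mean:.2f} +/- {sem:.2f} (SEM)")
```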

The result of the preparation process is an isolated object demarcated from its surroundings and processed such that it can be manipulated in a controlled way. It is considered as an instance of a perfect, generic object. Consequently, the research object can be defined as an isolated and manipulated part of the world which is conceptualized as an instance of a generic object.

We have given a biological example. But it is an easy matter to give engineering examples. Consider, for instance, the study of oscillations of steel bars. When we want to study eigenoscillations of steel bars we delimit the bar from disturbing parts of its environment, we abstract from all properties that are irrelevant for the study of its vibrations, and we idealize by considering it as a perfect, homogeneous bar. Although we of course are making measurements on real bars we consider these measurements as being properties of the perfect, generic bar. All variations and irregularities caused by the imperfections of the actual bar in the laboratory are smoothed out and considered as unintended disturbances. The bar is conceptualized as a generic bar.
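To show what the theory says about such a model object, here is a minimal sketch of the bending eigenfrequencies of a perfect, homogeneous bar with free ends, according to classical Euler-Bernoulli beam theory. The material and geometry values are illustrative assumptions, not data from any actual experiment:

```python
import math

# Eigenfrequencies of an ideal, homogeneous steel bar with free-free ends
# (Euler-Bernoulli beam theory). All parameter values are assumptions
# chosen only to make the sketch runnable.
E = 210e9      # Young's modulus of steel [Pa]
rho = 7850.0   # density of steel [kg/m^3]
L = 1.0        # length of the bar [m]
d = 0.02       # diameter of the circular cross-section [m]

A = math.pi * d ** 2 / 4    # cross-sectional area [m^2]
I = math.pi * d ** 4 / 64   # second moment of area [m^4]

# Roots (beta_n * L) of the free-free frequency equation cos(x)cosh(x) = 1
for n, bL in enumerate([4.7300, 7.8532, 10.9956], start=1):
    omega = (bL / L) ** 2 * math.sqrt(E * I / (rho * A))  # angular frequency [rad/s]
    print(f"mode {n}: f = {omega / (2 * math.pi):7.1f} Hz")
```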

Construction of Model Objects

The research object has been defined as the result of a preparation process. Although the object is physically real it is considered as an abstract entity. The preparation process involves at least three aspects: (i) delimitation of the object, (ii) abstraction from irrelevant properties, and (iii) idealization. This process leads to a complex artifact, the research object , which is an isolated and manipulated entity conceptualized as a perfect generic object. All basic properties measured and analyzed in the laboratory are comprehended as properties of the generic object. The actual object represents the generic object.

As an example consider measurements of the gravitational field of the Earth as part of precision surveying. In mountainous areas like Greenland gravitational measurements are made at locations at different altitudes. Such measurements are not comparable unless they are transformed in such a way that they refer to the same altitude. That is done by introducing the geoid. Broadly speaking, the geoid is defined to be the object bounded by the equipotential surface of the gravitational field at sea level. This surface is an abstract generic object used to define the form of the Earth as a research object. The geoid is an abstract planet which is as similar to the Earth as possible. It has the same mass, the same axis of rotation, the same angular velocity, etc. But it does not exist as a real object. It is a conceptual model. When geodesists talk about the gravitational field they usually refer to the gravitational field of the geoid. All kinds of geodetic measurements are planned and prepared in such a way that they can be construed as measurements of properties of the geoid. From a geodetic point of view the Earth is identified with the geoid. The actual Earth represents this generic object, the geoid.
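One standard way of reducing such measurements to the geoid is the free-air correction, which refers a gravity value observed at altitude h down to sea level using a gradient of roughly 0.3086 mGal per metre. The sketch below applies it to invented station values:

```python
# Free-air reduction: gravity observed at altitude h is referred down to
# the geoid (sea level). The gradient of about 0.3086 mGal/m is standard;
# station names, gravity values and altitudes are invented for illustration.
FREE_AIR_GRADIENT = 0.3086  # mGal per metre

stations = [
    ("A", 982150.0, 120.0),  # (name, observed g [mGal], altitude [m])
    ("B", 982020.0, 540.0),
]

for name, g_obs, h in stations:
    g_geoid = g_obs + FREE_AIR_GRADIENT * h
    print(f"station {name}: g reduced to the geoid = {g_geoid:.1f} mGal")
```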

Consequently, research objects are not only objects as they appear to us directly as part of the environment. They are delimited and modified in such a way that they can be regarded as instances of abstract generic objects. Such abstract generic objects are called model objects. Consequently, a research object is an entity which can be viewed as an instance of a model object. Model objects are abstract conceptual entities, and they are the proper targets of scientific theories. The geoid is a model object. The real Earth is an empirical object. But when the Earth is studied by geodesists it is represented by the geoid. All measurements are reduced in such a way that they appear as being made on the abstract model object: the geoid.

So far we have discussed two interrelated processes: preparation of the research object and construction of a model object which conceptually represents the research object. The research object is prepared in such a way that it fits the model object as truthfully as possible. Usually further idealizations are made during the theoretical investigation of the research object and its properties. Thus, if one is studying vibrations of a homogeneous bar it is often assumed that these vibrations can be described by linear equations, or, if that is not sufficient, second order non-linearities are taken into account. In this way we construct even more abstract and idealized model objects. Whole series of model objects may be constructed during this further generalization process. In many cases this is a necessary condition for existing theories to be applicable. More and more abstract model objects are constructed in such a way that it becomes possible to carry through numerical or analytical solutions of the fundamental equations which describe the problem under consideration.

As a specific example of such a model object construction in an engineering field we will consider a project in which certain dynamic and structural features of a human shin-bone were studied experimentally (Thomsen 1990). We want to know how such bones react when they are exposed to various kinds of loading conditions. Knowledge of this kind is important in many medical contexts. For instance, it is important as a basis for design when various kinds of prostheses are being developed. The research object was in this case confined to mechanical properties of a well-defined carved piece of a shin-bone. Only structural eigenoscillations in a certain frequency domain were considered, and the object was prepared by cutting it out of a real human leg, gluing on sensors, freezing it, etc. The model object that conceptually represented this research object was a perfect bar with definite physical properties. This generic object was conceived as the carrier of the oscillations. The laboratory measurements were made on the real piece of shin-bone cut out of a human being, and they were reduced in such a way that they could be considered as originating from the model object. That is, the research object was a technically prepared piece of shin-bone conceptualized as a perfect bar.

The theory relevant for the study of mechanical properties of such objects is taken from solid mechanics. This theory is based on, among other things, fundamental mechanics, and it contains theoretical analyses of dynamic properties of various kinds of objects: bars, plates and other geometric configurations with plastic or elastic properties. Within this theory we have massive knowledge about, say, stress–strain relations of various idealized objects. This knowledge is presented in a mathematical form and is about model objects which are idealized to such a degree that a mathematical analysis is feasible. The research object is represented by such a model object.

A real human shin-bone has a highly complex geometry. It consists of several types of substances, for instance, bone tissue and marrow, with very different mechanical properties. The substances are also inhomogeneous and anisotropic, and vast biological variations exist between different individuals. The inhomogeneities of the various substances cannot be directly eliminated in the research object. But when it comes to theoretical and mathematical considerations it is necessary to increase the level of idealization and construct a model object which fits into the theoretical framework. The inhomogeneities must in some way or other be reduced before it is possible to fit the shin-bone into a theoretical framework. In the example under consideration the research object was represented as a so-called Timoschenko beam. Only some of the constitutive properties of the shin-bone were then represented in the model object. They comprised, in the final definition, “dynamic and structural properties of a rectilinear, twisted, non uniform Timoschenko beam which was made up of two linearly elastic and transversally isotropic compounds and one perfectly flexible compound” (Thomsen 1990).

The actual piece of human shin-bone was in this case considered as a non-uniform Timoschenko beam. But other models might have been chosen. Depending on available theory and the purpose of the analysis one might have been led to other choices. In fact, an important part of the project consisted of deliberations about which other possible models might have been relevant for the analysis. The analysis actually showed that the structurally very complex human shin-bone had astonishingly simple dynamic properties. Therefore, it was concluded that, for practical purposes, it might be possible to reproduce its dynamic properties with a simpler model. Other researchers had already suggested that a uniform Euler beam might be adequate. Consequently, several possible ways of reducing the complexity of the model object were analyzed. It was finally concluded that from a practical point of view it would be sufficient to reduce the complex model to a simple uniform bar model with concentrated inertia contributions.

Thus the representation of a research object by a model object involves an element of choice. Usually, there are several possible theoretical models that can be applied in a given situation, and the scientist must decide which one is best for the purpose. Such decisions are especially important in engineering science. They are based on deliberations about which goal the research is supposed to serve and about the theoretical and computational possibilities. Complex models which represent the research object as faithfully as possible can be considered as reference models which form a theoretical framework against which more applicable simple models can be validated. Which simple model is eventually chosen as the most appropriate one depends heavily on the purpose the simple model is supposed to serve.

There is also a kind of construction involved. The model object is a conceptual object that the scientist must construct in such a way that it represents the research object as faithfully as possible. In the actual case, it was argued that a specific Timoschenko beam constructed by the researcher was an adequate representation of a generic shin-bone. The Timoschenko beam representation would serve as a reference model and simpler models useful for design could be validated by comparing them with the Timoschenko beam model.
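The validation logic just described can be illustrated with a minimal sketch: a candidate model is accepted for a given purpose if its predictions stay within a chosen tolerance of the reference model. All frequencies and the tolerance below are hypothetical placeholders, not results from Thomsen (1990):

```python
# Hypothetical predictions of the first eigenfrequencies [Hz]; the
# reference values stand in for the Timoschenko beam model, the simple
# values for a uniform bar model. None of these numbers are real data.
reference_freqs = [310.0, 845.0, 1620.0]
simple_freqs = [302.0, 861.0, 1690.0]

TOLERANCE = 0.05  # accept up to 5 % deviation for the assumed design purpose

def acceptable(reference, candidate, tol):
    """True if every candidate prediction is within tol of the reference."""
    return all(abs(c - r) / r <= tol for r, c in zip(reference, candidate))

print("simple model acceptable:", acceptable(reference_freqs, simple_freqs, TOLERANCE))
```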

We shall illustrate the preparation process with another engineering example. In this project, methods for calculating reinforced structural elements of concrete were developed (Andreasen 1988). When structural elements of concrete are reinforced, i.e. the concrete is deposited around reinforcing iron beams, the strength of the elements is increased. The increase in strength depends, among other things, on the anchorage. When the beams are ribbed, the strength of the anchorage is greater. In the project, methods for calculating the load-carrying capacity of ribbed reinforcing beams were developed.

There are two main theories dealing with such structural calculations: one is about anchorage, i.e. the equilibrium of the loading and the resulting forces acting in the structural elements; the other is about how materials respond to the resulting “inner” forces. Consequently, two different, but interrelated, processes of preparation of the research object were undertaken, and this led to two different model objects.

In relation to the anchorage the problem was delimited in the following way. Only static load was considered. Movements between the concrete and the iron beams were considered to be unlikely and were, therefore, excluded. Consequently, only failures between the concrete holding on to the reinforcement and other parts of the concrete were considered. Further idealizations about the geometry of the anchorage were made. For example, it was assumed that the concrete surrounding the beams was axisymmetric about the beam axis, and that loads were evenly distributed. These delimitations, abstractions and idealizations resulted in an idealized model object of anchorage.

Similarly, a model object reflecting the dynamic behavior of the concrete was constructed. Properties like elastic effects, creep, and hysteresis were excluded, and the material was assumed to be homogeneous and perfectly plastic. The idealization into a perfectly plastic material is a very radical step. No material is perfectly plastic, and certainly not concrete. Nonetheless, this model object was chosen because calculation methods based on the theory of plasticity are simple and lead to relatively safe results when applied in an appropriate way.

Scientific Reasoning

In the last century, philosophy of science was dominated by a logical-linguistic view of scientific theories. Philosophers were more interested in the final products of science, namely scientific theories, and not so much in the process of discovery. They concentrated on issues about the justification and truth of scientific theories, and they required theories to be hypothetico-deductive systems expressed in an appropriate language. The kind of reasoning that led up to the construction of scientific theories was not of great interest. Consequently, most philosophy of science in that period overestimated the importance of well-established final scientific theories and underestimated the main body of scientific activity, namely, the reasoning involved in the development of new scientific ideas, concepts and theories. This attitude towards science studies was drastically changed by the appearance of Thomas Kuhn's theory of scientific paradigms, in which the role of communication and argumentation within the scientific community came to be of central importance for a proper understanding of science. However, it took several decades before Kuhn's insights were fully appreciated.

It is, of course, true that many kinds of scientific analyses can be cast into a hypothetico-deductive form, and hypothetico-deductive reasoning is an important part of scientific rationality. However, it is not a complete description of scientific rationality. It only gives a characterization of some very mature forms of scientific arguments. It is the form in which many results are presented in scientific journals. But it is not a form of representation that can be applied during the research process. During this process all kinds of reasoning are relevant, for instance, analogical reasoning, intuitive considerations, application of elucidating metaphors, and, especially in engineering science, praxis-based reasonability considerations. Consequently, most scientific reasoning does not originally have a hypothetico-deductive form. In situations where one does not have a complete theory it might even be impossible to identify a deductive hierarchy. We only have a system of loosely related and vaguely defined conceptual models.

Another, but related, serious shortcoming of the logical-linguistic view of scientific theories concerns the fact that according to this view a theory is a general schematic framework. The basic laws are abstract schemata that do not have a concrete semantic meaning unless they are interpreted in concrete situations. That is, they only have concrete meaning when interpreted in connection with a concrete model object. But they do not by themselves give a method by which it is possible to identify the models which are essential for their own semantic interpretation. The way in which research objects and model objects are identified is an important aspect of the scientific activity. It is true that knowledge of hypothetico-deductive systems is in many situations essential for the preparation process and for the definition of model objects. But it is not the only form of knowledge that is involved in this process. The ability to delimit concrete research objects and to identify model objects which represent them is a complex cognitive skill which is learned during scientific training. It involves both linguistically expressed knowledge and tacitly given kinds of knowledge (in the sense of Polanyi) concerning model identification. The schematic laws and propositions in hypothetico-deductive systems would not be semantically well-defined unless they were based on this knowledge.

The way in which implicit and tacitly given knowledge actually functions during scientific reasoning is very beautifully expressed by Heisenberg. He describes the way in which Niels Bohr reasoned when he was doing atomic physics:

Bohr must surely know that he starts from contradictory assumptions which cannot be correct in their present form. But he has an unerring instinct for using these very assumptions to construct fairly convincing models of atomic processes. Bohr uses classical mechanics or quantum theory just as a painter uses his brushes and colours. Brushes do not determine the picture, and colours are never the full reality; but if he keeps the picture before his mind’s eye, the artist can use the brush to convey, however inadequately, his own mental picture to others. Bohr knows precisely how atoms behave during light emission, in chemical processes and in many other phenomena, and this has helped him to form an intuitive picture of the structure of different atoms; a picture he can only convey to other physicists by such inadequate means as electron orbits and quantum conditions. It is not at all certain that Bohr himself believes that electrons revolve inside the atom. The fact that he cannot yet express it by adequate linguistic or mathematical techniques is no disaster. On the contrary, it is a great challenge. (Heisenberg 1972, pp. 36–37)

What Heisenberg is saying here is that Bohr, in his reasoning about the structure of atoms, operates with not very precisely defined model objects describing the structure of atoms. These model objects can be used to represent various atomic processes as seen in the laboratory, for instance, light emission phenomena (line spectra) and various aspects of chemical processes. These model conceptions can be communicated to other physicists by applying concepts from classical physics augmented with new quantum physical principles. But this augmented form of classical physics would be cryptic, senseless or even contradictory if not interpreted against the background of Bohr's conceptual models of the atom. To understand atomic physics at the beginning of the twentieth century would imply being able to

  1. understand the conceptual models developed by Bohr, Einstein, Heisenberg and many other physicists,
  2. understand in which way these models actually represented research objects, e.g., light emission phenomena, and
  3. relate these conceptual objects to selected theories, laws, and principles from classical physics and quantum theory.

Thus, the scientific activity at that time consisted of many other things besides descriptions and deductions within hypothetico-deductive systems.

The situation in modern science is of course completely similar. A solid-state physicist also knows various important model objects, he knows how to relate such objects to concrete research objects, and he knows how to relate these objects to modern theories, laws and computational principles. This is not only true of basic science but also of engineering science. Thus, as we have seen, the study of reinforced concrete also requires the construction of model objects as representations of research objects and as objects of theoretical analysis.

Model Building Skills

The basic assumptions and skills of a scientific discipline are acquired by taking courses and participating in daily scientific work. Although the instrumental training in formal theories and laboratory techniques is important, it constitutes only some of the cognitive capabilities that are built up during the training to become a scientist. As we have seen, perhaps the most important part of scientific work consists of establishing conceptual models in which the problems under investigation can be represented in a way that makes solutions possible. Formal theories like fluid mechanics, thermodynamics and other fundamental physical formalisms are logical instruments which make it possible to describe model objects and research objects in a precise mathematical manner.

The ability to construct, select and elaborate conceptual models is a skill that must be established before one can claim to have acquired scientific competence. It is part of this skill to be able to identify and prepare research objects. It is also part of it to be able to construct and analyze model objects which represent research objects in a proper scientific way. These activities require conceptualizations of the world and, as such, the capability of constructing conceptual models.

This skill makes it possible to understand and apply formal scientific theories, and it makes it possible to interpret what we see in the laboratory. On the other hand, scientific theories and laws are important tools of this skill. They deliver the central conceptual tools that are necessary for a proper differentiation between adequate and inadequate models. Consequently, formal theoretical structures do not by themselves characterize a scientific discipline, but they are indispensable conceptual tools which scientists need in order to be able to prepare research objects and construct relevant model object representations.

Not all kinds of conceptualizations are allowed. A central part of establishing a scientific paradigm is to restrict the class of possible conceptualizations to those which fit into the ontological view that is characteristic of the science. At this point laws and formal theories play a central role. It is a main purpose of scientific principles to give descriptions of the scientific ontology.

Consider, for instance, classical electrodynamics. When Maxwell developed the classical field equations he had in mind a rather concrete model of the ether (see, for instance, Nersessian 1992). The ether was considered as a fluid, magnetism was conceived as vortices in that fluid, and electric currents consisted of small particles that flowed between the vortices. This mechanical model of the ether was the basis and source of inspiration for his derivation of the equations. The full scope of his equations could, at that time, only be understood relative to this or similar models of the ether. Only much later, when the special theory of relativity was introduced, did it become possible to dispense with a concrete ether model.

During the process of developing this model building skill a certain ontological view of the world is established. This is not done in an explicit way but implicitly, by learning which model objects are allowed and which are prohibited. For instance, when learning classical electrodynamics at the end of the nineteenth century, model objects had to be in accordance with the implicit and to some extent vague conception of the ether. Model constructions violating the established but tacit conception of the ether would be rejected as being too odd, too unrealistic, or too imaginative to be worth working on.

These assumptions define a definite scientific world view. It is established during a cognitive process by which the scientists actually learn the conceptual system of the research field, the basic laws, fundamental model objects, experimental methods, ways of preparing research objects, etc. All this is integrated into one specific way of comprehending the world. We call this a theoretical framework.

Hierarchical Levels in a Theoretical Framework

The system of laws within a theoretical framework is hierarchically organized. At the highest level one has abstract laws like energy conservation. They are very general structural descriptions of all kinds of systems and they can be considered as universal constraints that all kinds of physical systems must satisfy. We call these laws principles because they are part of the ontological characterization of the physical world. Our conception of nature requires that these principles are valid and that they govern all natural processes. Consequently, if they had to be changed or given up it would imply great changes in our scientific worldview.

At a lower level we have more concrete laws like, for instance, Newton's law of gravitation and Coulomb's law of force between charged particles. Usually these laws are not considered as principles. They are less general, although they are more than just empirical generalizations. The fact that both electrical and gravitational forces vary inversely with the square of the distance has far-reaching implications for the nature of physical phenomena. If these inverse square laws had to be modified it would be necessary to invoke radical changes in mechanics and electrodynamics to make these theories fit the phenomena. These laws are extremely well corroborated both theoretically and empirically.
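Written out side by side, the two laws share the same inverse square form:

$$ F_{\mathrm{grav}} = G\,\frac{m_1 m_2}{r^2}, \qquad F_{\mathrm{el}} = \frac{1}{4\pi {\varepsilon}_0}\,\frac{q_1 q_2}{r^2} $$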

When it comes to the study of more concrete phenomena like the strength of various materials, for instance steel and concrete, we find laws at a still lower level. It is not possible to deduce the strength properties of concrete within a well-defined hypothetico-deductive system. We are, in a way, in a situation similar to the one Bohr was in when he studied the nature of light emission. The laws which control, say, the process of rupture of blocks of concrete are not known and cannot be deduced from basic physics. Consequently, it has been necessary to develop a set of laws based on both empirical and theoretical considerations.

An Example from Engineering Science

To illustrate this we will again look at modern engineering methods for the calculation of structural elements of concrete and reinforced concrete. Many materials are to some degree elastic. When they are exposed to loading (compression, bending, tension, etc.) up to a certain point they follow Hooke's law, that is, they regain their shape when the loading ceases and deformations are proportional to the loading. But when materials are exposed to loadings beyond a certain point Hooke's law is no longer valid. The relation between load and deformation is no longer linear and the material may not regain its shape when the loading ceases. The relation between load and deformation, which can also be expressed as a stress–strain relation, might look as shown in Fig. 10.1.

Fig. 10.1 General stress–strain relation

Hooke’s law, which states the linear relationship between load and deformation, is a law of very low generality. It is an empirical generalization that only holds for deformations up to a certain limit. Genuine empirical generalizations may be empirically extremely well-corroborated, but they do not play a deep theoretical role and they would very easily be revised if observations required it.

Materials that follow Hooke’s law are called elastic. No material is perfectly elastic, but many materials can be considered as being elastic within a certain range. A material is called perfectly plastic if the deformation continues without increasing the load, i.e. the stress–strain relation is horizontal. A material is perfectly elastic–plastic if it is perfectly elastic up to a point and thereafter perfectly plastic. The stress–strain relation of such materials is shown in Fig. 10.2.

Fig. 10.2 Stress–strain relation of perfectly elastic–plastic material

Such materials do not exist in nature, but they are model objects which give reasonable and approximate descriptions of many existing materials. Important examples are many types of steel, for which these model objects have been used extensively for many years.
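As a minimal sketch, the perfectly elastic–plastic model object of Fig. 10.2 can be written down directly; the modulus and yield stress below are rough, illustrative values for a mild steel:

```python
# Perfectly elastic-plastic model object (cf. Fig. 10.2) under monotonic
# loading: Hooke's law up to the yield stress, constant stress thereafter.
# E and f_y are illustrative values, roughly those of a mild steel.
E = 210e9    # Young's modulus [Pa]
f_y = 235e6  # yield stress [Pa]

def stress(strain):
    """Stress [Pa] of the ideal elastic-plastic material at a given strain."""
    elastic = E * strain
    if abs(elastic) <= f_y:
        return elastic                  # elastic branch (Hooke's law)
    return f_y if strain > 0 else -f_y  # plastic branch (stress capped at yield)

yield_strain = f_y / E  # about 0.11 % for these values
for eps in (0.0005, yield_strain, 0.005):
    print(f"strain {eps:.4%} -> stress {stress(eps) / 1e6:6.1f} MPa")
```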

When dimensioning a structural element one can, for a given load, calculate the conditions necessary for keeping the deformations within permissible limits. In this way it is possible to find safety conditions against collapse or yielding. Such calculations are based on the elastic properties of the material. That is, it is assumed that the material is perfectly elastic up to a certain point, and that the deformations are within the range of elasticity of the material. Calculations of structural elements of concrete and reinforced concrete have for a long time mainly been based on such elasticity properties. The fact that the stress–strain relation is not entirely linear in actual materials has been compensated for by using safety factors.
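A hedged sketch of such an elastic dimensioning rule: the working stress is kept a safety factor below the yield stress, which for a simple axially loaded member fixes the required cross-section. All numbers are assumptions chosen only for illustration:

```python
# Elastic dimensioning with a global safety factor: keep the working stress
# a factor gamma below the yield stress. Load, material and factor values
# are assumed for illustration only.
f_y = 235e6   # yield stress [Pa]
gamma = 1.7   # assumed global safety factor
N = 400e3     # axial design load [N]

allowable_stress = f_y / gamma        # [Pa]
required_area = N / allowable_stress  # [m^2]

print(f"required cross-section: {required_area * 1e4:.1f} cm^2")
```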

However, one can also try to calculate the necessary strength against yielding and eventual collapse. Such calculations are based on plasticity theory. A difficulty when doing so for concrete constructions is that it is not known in advance which part of the element will participate in the collapse. Consequently, it is not possible to reduce the problem to a given single set of differential equations. Another difficulty when studying concrete is that concrete is far from being perfectly plastic. It has a work curve similar to the one in Fig. 10.1. For small loads it is rather close to being elastic, but for greater loads it only poorly resembles a plastic material. That is, the calculations are based on a model object that corresponds relatively badly to the research object. The development of modern plasticity theory has been an attempt to overcome these difficulties.

At the beginning of the twentieth century only minor works on structural elements of concrete based on plasticity considerations existed. However, in the thirties a new important and productive development was initiated. In 1931 the Danish engineering scientist K.W. Johansen proposed a practical method for calculation of certain types of homogeneously reinforced slabs. His method was an extension of a method suggested by another Danish engineer, Aage Ingerslev.

The method was based on the plastic properties of reinforced concrete, using the often observed fact that concrete structures, when collapsing, yield along certain lines, the so-called yield lines. In the beginning the method was primarily meant as a pragmatic way of getting results, and it was justified by empirical observations. But later on the idea of yield lines gained theoretical significance. In his dissertation from 1943 Johansen writes:

In 1931 I gave an extended technical theory of yield lines. At that time I considered the theory as a practical approximation method, but a later review of the experiment convinced me about the reality and theoretical justification of the yield lines. This conception was further enhanced by my own experiments with small model plates, and I, therefore, began a more comprehensive theoretical investigation which in 1934 led to the mathematical theory of yield lines in slabs. (Johansen 1943)

The theory of yield lines was an important step towards a method for determining where in the material yielding would occur, and as such it was a significant step in the direction of modern plasticity theory. However, this early theory of yield lines only made it possible to calculate upper bound solutions for the load-carrying capacity. A more complete theory was developed independently by Russian and American researchers and published in the fifties. In fact, the Russian formulation coincided with Johansen's, but it was unknown to the Western world until the fifties. The complete theory of plasticity contains methods for calculating both upper and lower bound solutions.

As we have seen, it is possible to study structural elements of concrete from two different perspectives. On the one hand, we can base the theory on the elastic properties of materials. Within this view concrete is considered as behaving as an elastic material up to a load which leads to rupture. Determination of rupture conditions and other properties of the material is based on the assumption that the material is elastic up to rupture. That gives one theoretical framework on which both practical and theoretical analyses of structural elements can be founded. Structural elements are then construed as model objects which are within the range of elasticity.

On the other hand, one can assume a plasticity-theoretical perspective on structural elements. Within this view concrete is considered as a rigid-plastic material, which means that no deformations occur for stresses up to a certain limit, the yield point. For stresses at the yield point arbitrarily large deformations are possible without any change in stresses. Although concrete is far from being a rigid-plastic material it is possible within this framework to develop a general theory of rupture which fits experimental data reasonably well.

The existence of these two very distinct theoretical views of concrete reflects nicely the situation in engineering science. As it is not possible to deduce material properties directly from basic physical theory we must develop theoretical views and model objects from experimental observations and those fundamental theories which seem to imply the best possible practical methods. We are free to comprehend the situation in any possible way as long as it leads to applicable results. Even contradictory views may be developed.

Both elasticity and plasticity theory can be developed within existing physical theory, and, even though they in many cases may lead to different results, both theories can be used to solve practical construction problems. However, they cannot both hold at the same time, and to some extent it is fair to say that neither of them is right, as they are both based on highly idealized models of concrete. They are constructed by generalizing two different areas of the stress–strain curve. But they both lead to results that are safe and technically applicable. As in the case of Bohr, a skilled expert has the capability to select those models and theories that are most adequate as tools for solving a given construction problem.

As an illustration of laws at an intermediate level between principles and empirical generalizations we will present some ideas from plasticity theory. A rigid-plastic object is characterized by a system of generalized stresses, $Q_1, \dots, Q_n$, and strains, $q_1, \dots, q_n$. The product

$$ W = Q_1 q_1 + \cdots + Q_n q_n $$

represents the virtual work per unit volume.

There are two fundamental laws which govern objects of this kind, namely the yield condition and the yield law.

The yield condition gives information about which combinations of stresses can cause rupture; it can be written as $f(Q_1, \dots, Q_n) = 0$, defining the yield surface. The yield law determines the properties of the strains during yielding. It says that the strains $q_1, \dots, q_n$ must be proportional to the outward-directed normal to the yield surface, which mathematically means

$$ {q}_i = \lambda \frac{\partial f}{\partial {Q}_i} $$

where λ is a positive constant.

This law, which is also called von Mises' flow law, can be derived from a general variational principle introduced by Ludwig von Mises. He introduced the hypothesis that the stresses corresponding to a given strain field assume such values that the work W becomes as large as possible. That is, the material strives against deformation. From this hypothesis and the yield condition it is easy to derive the yield law.
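The derivation can be sketched as a constrained maximization: maximize the work $W = Q_1 q_1 + \cdots + Q_n q_n$ over the stresses, subject to the yield condition $f(Q_1, \dots, Q_n) = 0$. Introducing a Lagrange multiplier $\lambda$ and setting the derivatives to zero gives

$$ \frac{\partial }{\partial {Q}_i}\left({\sum}_k {Q}_k {q}_k - \lambda f\right) = {q}_i - \lambda \frac{\partial f}{\partial {Q}_i} = 0, $$

which is exactly the yield law stated above.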

These constitutive equations of plasticity theory are not expressions of universal basic physical laws like, say, energy conservation. They are empirical generalizations based on careful observations of the failure properties of various kinds of materials. Although von Mises' flow rule can be derived from a principle of maximum work, it is still a hypothesis that requires further justification from a more fundamental understanding of the structure of solids. Until such a deeper explanation is found, von Mises' flow rule must be considered as a well-corroborated hypothesis which has important practical applications. But besides being empirical generalizations, these equations also serve as a theoretical framework within which studies of structural elements can be organized. Therefore, they are intermediate-level laws like Newton's law of gravitational force and Coulomb's law of force between charged particles.

It follows from these examples that there are many kinds of natural laws and that they can be organized hierarchically with respect to their generality. Laws like energy conservation are valid for all kinds of physical interactions, whereas laws like Newton's law of gravitation hold only for mechanical interactions. At an even lower level of abstraction we have the laws of plasticity theory, which are valid only for bodies which can be considered as approximately plastic, and, finally, there are laws which only hold for specific types of material like, for instance, concrete and steel. The most abstract laws are also those which are the most difficult to revise, mainly because they form part of our scientific ontology. They are epistemologically basic and therefore impossible to revise without changing central parts of our scientific worldview, whereas low level laws can be changed when required without changing our view of nature.

It is interesting to notice that in many cases a scientific law is first introduced into a scientific framework as a rather low level empirical generalization or as a heuristic tool for better calculations. An example of this is Johansen's theory of yield lines discussed above. When he introduced this theory in the early thirties he himself considered it a practical method of calculation. But later on, when he studied the experimental results more carefully and further developed the idea, he realized that the idea of yield lines might have a deeper theoretical meaning. This was further elaborated by himself and other scientists around the world, and it finally led to the modern plastic theoretical analysis of concrete. Epistemologically, this development is similar to the development of the concept of a photon. The idea of light quanta was introduced by Einstein in 1905 as a heuristic, mathematical tool. Only many years later, in the 1920s, was it realized that light quanta were real particles, and only much later was the idea of light particles, i.e., photons, incorporated into modern physics. It had to wait until the idea of quantized fields became accepted.

Truth and Reality

From the analysis above it follows that scientific statements are claims about model objects and not directly about the world as it exists independently of us. Usually scientific statements are true for model objects, but they may be false, or at least only approximately true, as characterizations of the world. For instance, when engineers calculate the strength of a shin-bone or of a construction element they have in mind a conceptual model which is used as a basis for setting up their equations. The equations give a true description of the model object, and, if they can be solved, it is possible to produce true statements about the model object. But the model object is not the reality. It is an abstract, idealized, conceptual model of the research object, which in turn has been transformed by the preparation process.

The test piece of shin-bone is not a human shin-bone. It is a manipulated piece which has been cut out of a real human body and has been modified to such an extent that it is possible to produce stable measurements on it. Furthermore, it has been modified in such a way that only certain important features of it, which are related to some of its mechanical properties, have been controlled. The test shin-bone, i.e. the research object, is a laboratory artifact. It is a non-trivial problem how this object is related both to the real human shin-bone, as it exists in a living human being, and to the model object, which is the object that the theories are about. Data are produced by making measurements on the research object, i.e. the test shin-bone, but they are interpreted as claims about the idealized model object. They are used to “put blood and flesh” on the Timoschenko beam, that is, the data are reduced in such a way that they can be considered as statements about the dimensions and oscillations of the Timoschenko beam. However, the results of this scientific analysis are regarded as giving information not only about the model object or the research object but also about the real human shin-bone. Consequently, data are not only measured on an artifact that is fabricated in the laboratory. They are interpreted as being about a highly abstract model object and, finally, they are believed to give real information about a piece of reality. This complex process, whereby a piece of reality is delimited, generalized, abstracted, idealized and finally identified as the object which scientific propositions are about, must be reversed in order to give information about the original piece of reality.

If these considerations are true they raise serious questions about how scientific theories can be said to give true information about reality. How can highly idealized knowledge about model objects which are only very remotely related to the part of the world they are supposed to represent lead to reliable knowledge about actual phenomena in the world? How can we be sure that statements about the strength of concrete building elements, based on calculations on highly idealized model objects, also hold true for real constructions? Fortunately, experience tells us that it is, in fact, possible in many cases to base real constructions on theoretical calculations. But we still have the epistemological problem of accounting for how that is possible.

Immanuel Kant introduced the distinction between the world in itself and the world as it appears to us. As we are finite beings and only have limited cognitive capacities we cannot know the world as it is in itself. All objects we identify and develop knowledge about are already shaped by our forms of perception and by our conceptual system. Things in themselves are not accessible to us. Only things as they appear to us can be known. In science one goes even further. Only objects that are abstract and conceptual in nature – model objects – are accessible to scientific scrutiny. Hence, scientific statements about the world do not in any sense refer directly to objects in a world completely independent of us. They refer neither to things in themselves nor to objects as they appear to us in practical life.

This strange situation has motivated some modern philosophers to claim that the objects of science – the scientific world – are a social construction. The model objects of science are social constructions based on our interests and social attitudes. Our theories about these objects are also social constructions. Consequently, science is a product of creative imagination that is circular in a serious way. Its conception of reality and its theories of this reality are constructions of our mind that are governed more by social values than by confrontation with an objective reality.

The social constructivist view has some good points, as it is true that science is only able to develop true knowledge about model objects, and as the construction of these abstract objects is inevitably based on our interests and epistemological possibilities. Furthermore, experiments are in many cases developed in such a way that they manipulate objects and processes in our environment with the intention of approximating the abstract model objects as closely as possible. When we force Nature to “fit our ideas” in this way we very often work with technological constructions which are at the borderline of what is technically possible. Usually, the experimental set-up is so complex and badly understood that it is nearly impossible to differentiate between real effects stemming from the research object and unexpected properties of the experimental set-up. Where to stop an experiment and how to interpret the outcome is often a matter of choice. The scientific community makes this choice. If we want to avoid social constructivism we must in some way explain how “nature strikes back” on our conceptual constructions.

This problem requires a deeper analysis. But let us conclude by suggesting a possible answer. A scientific theory is rendered true if it holds that (i) its statements are true for the model objects (in a correspondence sense of true), and that (ii) the model objects sufficiently approximate the research objects. The fit between abstract model objects and laboratory produced research objects is difficult to estimate. It requires that both kinds of objects are modified, and that involves both conceptual reconstructions and engineering of physical objects in the laboratory. These modifications cannot be done arbitrarily. The conceptual reconstructions must comply with consistency and other epistemic requirements and engineering of laboratory objects is limited by practical and physical constraints. Consequently, we cannot arbitrarily force the fitting process to converge; it may easily diverge and develop in a direction that does not serve our interests. If this process diverges or does not stabilize, aspects of the theory under scrutiny will be overthrown not by arbitrary decisions but by being unable to comply with constraints given by Nature.

It is true that scientific experiments always allow several interpretations and it is up to us – the scientific community – to choose the one that fits best into our scientific world. Therefore, experiments that are at the borderline of what is technically possible are, in particular, not acceptable standards for deciding between truth and falsity. They admit several interpretations, and our choice must be constrained by other norms and standards of the scientific community in order to be uniquely determined.

However, sometimes the experimental praxis leads to anomalous situations where new qualitative properties of Nature show themselves. Such situations constitute natural, non-social conditions that often require a reorganization of the theoretical framework. The history of science provides abundant examples of this. Descriptions of these originally anomalous phenomena appear in textbooks, often named after the scientists involved in their discovery: Newton's rings, the photo-electric effect, the Compton effect, the Zeeman effect, the Hall effect, etc. Such situations, when they appear within a scientific discipline, first of all indicate that a phenomenon has appeared which cannot be reduced to irregularities of the equipment and the experimental and theoretical techniques involved. Furthermore, the adjustment of the theoretical framework must take the new phenomenon into account. This cannot be done in a sociologically free way; it may even lead to changes in the social structure of the scientific community.

When these various constraints are respected, the stability and convergence between conceptual constructions and laboratory manipulations may lead to a worldview which cannot in any sense claim to be a true picture of reality as it is in itself. This claim of metaphysical realism must be given up. But at least the convergence results in a view that respects the constraints that the world puts on us. We do not know what reality is in itself, but we know that it constrains us as just described.

This view leads to special problems for engineering design. Usually, new forms of design involve processes that are badly understood. We may be able to model the processes and construct devices that fit our models. But it is still an open question how well the models fit the practical reality, and, therefore, it is often unknown how the devices will behave when they are no longer under controlled laboratory conditions. Consequently, engineers face further problems. As scientists they are able to build models that to some extent describe and explain natural processes under abstract and idealized conditions. But the devices that they design and construct must live outside the controlled laboratory conditions. The abstract, idealized conditions may not hold out there, and the scope of our scientific theories is too limited to cover these circumstances. New technologies must cope with the unknown. Their ultimate test is historical. Their success will eventually follow from how well they are adapted to our practical life. Luckily, it is an incontestable fact that they by and large do adapt.