
1 Science and Technology: The Standard View

In the mid-twentieth century, the philosophy of science was dominated by a point of view closely connected with the logical positivist tradition as developed before the Second World War in Vienna by the so-called Vienna Circle, whose prominent figures included Moritz Schlick and Rudolf Carnap, and in Berlin by Hans Reichenbach and Carl Hempel. It was this tradition that established philosophy of science as an autonomous branch of philosophy and set out its standards. The goal was to provide an analysis of the scientific method, of the nature of scientific theories and of scientific explanation, and the tool to be employed was symbolic logic. Even though Karl Popper (1959) criticized the logical positivists’ ideas about scientific method, his falsificationism shared their goals and their tools of philosophical analysis of science. This point of view has been so influential in university departments that it is called in the literature the standard view of philosophy of science.

The pillars of the standard view were the ideas that a clear-cut distinction can be made between how scientific hypotheses are discovered and how they are justified, that the business of philosophers of science is justification, that a division between an observational language and a theoretical language can be drawn, and that the knowledge content of a scientific theory can be fully expressed in a formal language. The distinction between the context of discovery and the context of justification made it possible to leave the sociological and psychological components of the process of scientific theorizing out of the picture of the scientific method, and it went together with the idea that observational and experimental practices belong to the context of discovery, and therefore need not be analyzed, while their outcomes can be distilled into a crystal-clear observational language that constitutes the neutral ground against which theoretical hypotheses can be tested.

As a consequence of these two ideas, it was believed that theories could be detached from the complex process of the collection and organization of scientific data behind them, that they could be represented as linguistic entities (sentences of a theoretical language), and that scientific data themselves could be posited as linguistic entities (sentences of the observational language) that stand in a direct relation of reference and truth to physical entities (sense data, numerical readings on instruments, etc.). The application of logic and of certain cognitive virtues, such as simplicity, consilience, predictive power, and explanatory power, guarantees the choice of the overall best theory among the competitors, given the available empirical evidence.

Scientific change was seen as the process of successively incorporating earlier successful theories into the framework of their successors, so that factual and predictive control over nature cumulatively increases over time, and this cumulative progress is objective and universal. It was believed to be objective in two senses: the observational language is intersubjectively available to all impartial observers, and both observational sentences and theoretical sentences can be translated and formulated in the language of mathematical logic. Science is universal since the methodological norms of science are invariably instantiated in various cultures and at different times.

This idea of cumulative scientific progress had its counterpart in the idea of a cumulative technological progress following naturally and smoothly from the increasing predictive control over nature provided by pure science, the so-called linear or osmotic model of R&D: Aichholzer and Schienstock (1994), Ruivo (1994). Technology is applied science, and scientific research is a necessary and sufficient condition for technological innovation (the science push); therefore, the goal of government must be to finance basic research that is not profitable for private enterprises but will be beneficial for the economy and society at large: Bush (1945). This view of technological progress was coupled with traditional organization theory, which conceived of the organization as a machine for information processing that produces the knowledge needed to attain a given goal by applying formal, systematic, and codified procedures to quantifiable data.

2 The New Philosophy of Science

The new philosophy of science is a point of view with a strong historical and social orientation that was largely influential in the period from the 1960s to the 1980s. It is mainly associated with the historian of science Thomas Kuhn (1970) and the philosopher Paul Feyerabend (1975). Earlier authors who prepared the way were Willard Quine (1953), with his criticism of the possibility of distinguishing between theoretical and observational languages, and Norwood Hanson (1958), who strongly underlined that observations are always theory-laden.

A common point of all these authors, even though they placed different emphasis on it, was the rejection of the theoretical/observational sentence dichotomy on the basis of the claim that observational sentences are always seriously infected by theory, and therefore a pure observational language cannot exist. In fact, they privileged theory over observation, making explicit the view that scientific theorizing is always prior to good experimental practice, whereas the standard view (although this is not the case with Popper) privileged observation over theory. As a consequence of the difficulty of drawing a border between theory and observation, they criticized the idea that the meaning of observational sentences is invariant across theoretical change, thus calling into question the cumulative view of scientific development. Together with meaning invariance, the objective basis of theory evaluation and theory choice was lost: logic plus the implementation of cognitive virtues was no longer sufficient to choose the best theory at the bar of evidence, and non-epistemic factors, such as social and psychological factors, entered the picture beyond the idealized logic of justification. A privileged, intersubjective access to the plane of observation, to what there is, providing one true description of the physical world, was denied.

According to Kuhn’s famous book, The Structure of Scientific Revolutions, scientists always look at the world wearing the glasses of a paradigm. His concept of paradigm has two meanings: in the first, a paradigm is the entire constellation of beliefs, values, commitments, theories, and techniques shared by the members of a scientific community; in the second, it is a particular item of this constellation, that is, the models and exemplars of good scientific practice that exemplify the explicit and implicit rules guiding the problem-solving activities of scientists working within a paradigm. Theories are not superseded by their successors because of an accumulation of evidence against them, or because they are falsified, but because they are worse than the theories that supersede them at solving outstanding scientific problems and at choosing new relevant scientific problems. Kuhn rejected the idea that knowledge grows just in case our theories succeed in producing better representations of reality. For him a scientific theory is better than its predecessors only in the sense that it is a better instrument for formulating and solving puzzles, not because it is a better representation of what the physical world is really like.

But Kuhn’s theory of paradigms could not provide an unproblematic account of scientific change: if paradigms are incommensurable because there is no meaning invariance across them, so too are the problems they define and the criteria of their solutions. Thus, the way was left open for those who claimed that, in the end, non-epistemic factors are decisive for theory choice. This step was taken by Paul Feyerabend and a new generation of sociologists of science, following in the 1970s the path opened by The Structure of Scientific Revolutions. Feyerabend claimed that the notion that progress in science is made through a paradigm is an illusion, as is the idea that science is a problem-solving activity. He called himself a ‘dadaist’ and argued for theoretical pluralism, attacking what he considered the two fundamental claims of empiricism: that new theories must contain or be consistent with the results and content of the theories they replace, and that meaning is invariant across theory change. If there is no meaning invariance and if theories can be logically inconsistent with one another, then there is no basis for a unique scientific method that guides scientific practice overall.

The new philosophy of science must be understood in the context of the 1960s: it had a feedback relationship with radical political movements, such as Science for the People in the USA, and more generally with the social crisis of science at that time, which called into question the faith in science and technology and their beneficial effect on society: Carson (1962), De Solla Price (1963), Rose and Rose (1976). The crisis of the standard view in philosophy went hand in hand with the crisis of the linear R&D model and its view that science and technology, managed by experts, are capable of solving any kind of problem that may arise in connection with economic growth and its impact on society and the natural environment.

3 The Sociology of Science and STS Studies

The ideas of the new philosophy of science had an important impact on the sociology of science in the 1970s and 1980s. In the traditional approach, inspired by the works of Robert Merton (1973), the task of the sociologist was understood as the study of the system of social relations that allows scientific communities to implement the scientific method and extend objective knowledge. It was not the task of sociology to analyze the content of the products of the scientific method. But now there was no longer a unique scientific method, and there was nothing for philosophy of science to discover about how to reliably acquire knowledge of the world. If non-epistemic factors, which are the sociologist’s business, play such an important role in the production of scientific results, then the analysis of this product also becomes the sociologist’s business: Barnes, Bloor, and Henry (1996).

The so-called strong programme in the sociology of knowledge, developed by the Edinburgh School, claimed that social theory can describe and explain both the production of science and the product of science, because science itself is an elaborate social system for deciding what to say, how to talk about the world, and for making social decisions about technical matters: Barnes (1974, 1977), Bloor (1991). The products of science, the scientific facts, are artefacts of social practice, and scientific knowledge is whatever a cognitive community collectively endorses or agrees upon through the pragmatics of social consensus. Scientific change is a matter of linguistic redescription and the generation of new discourses, compelled by interaction with phenomena and directed by changes in social interests and cognitive needs. Incommensurability is not a problem, since no one language of the scientific culture can be objectively preferred to any other.

From a philosophical point of view, these ideas reflect a sometimes uncompromising relativism and some form of social constructivism: Collins and Pinch (1993), Knorr-Cetina (1981), Woolgar (1988). They have produced some interesting studies in the history of science, such as those by Shapin and Schaffer on the seventeenth-century scientific revolution: Shapin and Schaffer (1985), Shapin (1994), and in the anthropology of science, which look at how scientists actually work doing experiments in laboratories: Latour and Woolgar (1986), Collins (1992), Pickering (1984). Bruno Latour and Steve Woolgar interpret the laboratory as a literary text, where consensus is politically negotiated about which inscriptions of the text (traces, spots or points on screens or scales, recorded numbers, spectra, and so on, the hard data of logical positivism) can be considered scientific facts. According to Harry Collins’ analysis, what constitutes an experimental result is decided by negotiations within the scientific community, driven by factors such as careers, the social and cognitive interests of the scientists, or the perceived utility for future work. More recently, Andrew Pickering (1995) has proposed that an experiment is a dialectic of resistance and accommodation between the experimental apparatus and its running, the theory of the apparatus, and the theory of the phenomenon under study: a successful experiment realizes a mutual agreement among all these factors. In its strongest formulation, social constructivism says that only social facts exist, that is, facts about the existence of the constructions we call scientific facts.

It is true that there are no numbers, spots, or spectra out there in the world, and that human practices performed in a socially organized context are necessary for accessing scientific facts, but one cannot infer from this that these facts are solely social constructs. It is true that laboratory-made facts are produced, maintained, and understood under controlled conditions, but one must not forget that these facts cannot be produced without the operation of underlying causal processes that can operate also in the absence of theoretical knowledge and beyond the intentions of human agents. As the historian Carlo Ginzburg wrote:

the fashionable injunction to study reality as a text should be supplemented by the awareness that no text can be understood without a reference to extratextual realities. (Ginzburg 1994, p. 295)

The tide of social constructionist theories that inflected many branches of the humanities also caused the birth of a new academic field: in the 1970s some American universities (such as Pennsylvania, Cornell, Carnegie Mellon, and Stanford) began the first STS programmes, aimed at the social, political, and economic analysis of science and technology. These studies were the academic response to the economic and political problems raised by scientific-technological development and to the dissatisfaction with the traditional conception of science and technology: Mitcham and Mackey (1972), Spiegel-Rösing and De Solla Price (1977). The acronym STS has two different readings that indicate two different traditions in this field of study: González García, López Cerezo, and Luján López (1996). If we read it as Science and Technology Studies, it is a research field belonging to the European tradition, which goes back to the above-mentioned works in the sociology of science and initially set its interests mainly on scientific theories, moving only later to the study of technology, while maintaining a strong theoretical characterization: Bijker, Hughes, and Pinch (1987), Bijker and Law (1992), Collins (1990), Latour (1987), Jasanoff (1995), Webster (1991). If we read it as Science, Technology and Society, it indicates the American tradition, which from the beginning has studied technology and its impact on society, paying particular attention to ethical and normative issues and to social and political philosophy, starting with the pioneering work of Lewis Mumford (1934): Durbin (1987), Fuller (1993), Ihde (1979), Mitcham (1994).

The main currents in this field agree that it is impossible to distinguish between science and technology, and that technological factors are very important for the development of pure science itself. Technology is understood as a social process, and technological determinism is criticized, together with the “linear model”, because in contemporary sociotechnical systems there are social factors (technical, organizational, cultural, political, and economic) which interact with technological factors. Technological development is a process of variation and selection, and decisions about which of the technological variants are viable social choices are the result of negotiations between the actors of a network that includes scientists, engineers, business leaders, and politicians: the interests of social actors shape technology, but technology, in turn, changes social relations. In Latour’s actor-network theory (1987), not only human actors are nodes of this network but also material objects, and both are members of a new conceptual category: they are actants. Recently, the notion of eco-technological systems has been formulated, in which technologies are integrated into broader social systems that may have similarities with ecosystems: Hughes (2006).

4 The Semantic View

Philosophers of science took two different paths to meet the challenges of the new philosophy of science and of deconstructionist theories: either they turned to history to see how science in fact works, or they remained faithful to the idea of a formal analysis of science, but with a different concept of what a scientific theory is. The latter way produced in the 1970s what is called the semantic view of scientific theories: van Fraassen (1980), Giere (1988), Sneed (1971), Suppe (1974), Suppes (2003). The semantic view considers a scientific theory not as a body of propositions that can be literally true or false in the real world, but as a complex description that is true of some models of systems in the real world.

In mathematical logic a model is a structure that makes all the sentences of a theory true, where a theory is a (deductively closed) set of sentences in a formal language. For example, any structure in which all the axioms and theorems of Euclidean geometry are true is a model of Euclidean geometry. In this sense, a model is an abstract object, and a theory is viewed as a collection of many alternative models with which we try to represent, explain, and predict aspects of observed phenomena. The practice of science is the attempt to embed observed regularities within a model of a theory, so that any real system exhibiting that regularity may be treated as a system satisfying the theory. High-level scientific hypotheses, such as the fundamental law of Newtonian mechanics (F = m × a), are not literally true of any real system: they simply define a class of models, that is, the class of Newtonian systems, whose members are all those structures to which the quantities F, m, and a apply and for which the law is literally true.
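
The class-defining role of a law can be rendered schematically in the set-theoretic style of Suppes. The following formulation is only an illustrative sketch, and the particular choice of components is ours, not a quotation from any of the authors cited above:

\[
\mathcal{M} = \langle P, T, s, m, f \rangle \ \text{is a Newtonian particle system iff} \quad f(p,t) = m(p)\,\frac{d^{2}s(p,t)}{dt^{2}} \ \text{for all } p \in P,\ t \in T,
\]

where \(P\) is a set of particles, \(T\) an interval of times, \(s\) the position function, \(m\) the mass function, and \(f\) the force function. Within any such structure the law is literally true by definition; whether a given real system is (approximately) one of these structures is a separate, empirical hypothesis.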

Then the problem is how a model is connected with the world. Ronald Giere says that we make a hypothesis about the existence of a similarity in structure between the model and the real system. The problem is that hypotheses of this kind go beyond what the approach can afford, and it is not clear how to choose which respects of similarity are the relevant ones. According to Patrick Suppes’ view, a theory is a hierarchical set of models with different degrees of abstraction, ranging from empirical models, or models of data, describing experimental evidence, to abstract mathematical models: data themselves are an abstraction from the practical activity of producing them. The plane of observation has become an eventful region where scientists produce, process, and fit observations into a model of the data. A criticism raised against the semantic view is that in this activity of data making many types of models are involved that are not structural models in the sense required by the theory. Therefore it can account neither for how these models are constructed nor for how they work.

5 Knowing That and Knowing How

The other way to meet the challenge of deconstructionist theories was to take Kuhn’s lesson to look at the history of science and the practices of science more seriously than Kuhn himself did, avoiding the tribute he still paid to the idols of the standard view. Indeed, the new philosophy of science maintained, in common with the view it criticized, the idea that all scientific knowledge is propositional in content, and thus that all forms of knowing-how are to be transformed into knowing-that. The standard view privileged observation over theory; Kuhn and Feyerabend privileged theory over observation, concentrating on paradigms, conceptual schemes, and the methods they drive; but both failed to appreciate the common ground provided by instruments, experimental practices, and shared skills, which makes judgments of commensurability (and incommensurability) possible. Paradigms do not carry with them a particular batch of instruments and experimental procedures that are understandable only in terms of that particular paradigm; rather, people working with different paradigms also share tools, procedures, and direct experiences, and they live in the same phenomenological world. Gaston Bachelard (1949) and Michael Polanyi (1964) can be considered forerunners of this approach, in different respects.

Starting from the 1980s, this new approach has challenged the theoretical/observational dichotomy by seeing experimentation and experimental techniques as central to scientific practice: science is driven by practice rather than by theory and observation, and experiments often have a life of their own, independent of theory: Ackermann (1985), Franklin (1986, 1990), Galison (1987, 1997), Gooding, Pinch, and Schaffer (1989), Gooding (1990), Hacking (1983), Hull (1988), Pickering (1992, 1995). The authors following this approach claim that science is largely skill-based, network-based, and laboratory-based, and can be located somewhere between the activities of individuals and the material, cultural, and cognitive frameworks they inhabit. Therefore, they attempt to reconstruct the material culture of science, that complex network of skills, competences, negotiations, and intellectual and material resources from which stable patterns of scientific practice and experimental results emerge. Networks of this kind embody a knowing-how that cannot be captured by the notion that understanding is a knowing-that, abstractly expressed through representational and propositional tools such as sets of models and sets of sentences. Practicing a theory is not a matter of understanding a theory’s formal expressions, but rather the business of adopting and transmitting through practice a set of mental technologies used in contextualized applications of the theory to problem solving.

For example, Allan Franklin speaks of epistemological strategies, to be applied in the design of experiments, that provide arguments for the correctness of the experiment even though they cannot be explicitly defined as a set of formal rules. This kind of approach shares many themes with social constructivism and the sociological approach outlined above, but a fundamental difference is that for people like Ian Hacking, Allan Franklin, and Peter Galison, experimental results are, in the end, accepted because of epistemological arguments, while people like Bloor, Collins, and Pickering deny that epistemological arguments play a decisive role.

Research cultures are constructed in local contexts, but they can then travel beyond the confines of the scientific communities that gave them birth and make communication among different contexts possible. This overall picture can explain why translation is possible between different communities: theories, instruments, and experimental practices do not change together in one great rupture of paradigms; usually they change at different times, piece by piece, and what are points of discontinuity in theory are not so in the material culture of experiments. Galison has put forward the concept of the trading zone, that is, a spatially located zone (laboratories) or a virtual one (networks of labs connected through the web) where people meet, theoretical scientists meet experimental scientists, engineers meet scientists, scientific subcultures meet each other, and where interlanguages (pidgins or creole languages) are spoken that are embodied in objects and procedures. Knowledge moves across boundaries, and coordination around specific problems and sites is possible even where globally shared meanings are not. Meanings do not travel all at once in great conceptual schemes or paradigms, but partially and piecemeal.

6 The Theory of the Knowledge-Creating Company

Michael Polanyi made an important distinction between tacit knowledge and explicit knowledge.

When we are relying on our awareness of something (A) for attending to someone else (B), we are but subsidiarily aware of A. The thing B to which we are thus focally attending, is then the meaning of A. The focal object B is always identifiable, while things like A, of which we are subsidiarily aware, may be unidentifiable. The two kinds of awareness are mutually exclusive: when we switch our attention to something of which we have hitherto been subsidiarily aware, it loses its previous meaning. Such is briefly, the structure of tacit knowledge. Now to the distinction between tacit and explicit knowledge. Things of which we are focally aware can be explicitly identified; but no knowledge can be made wholly explicit. For one thing, the meaning of language, when in use, lies in its tacit component; for another, to use language involves actions of our body of which we have only a subsidiary awareness. Hence tacit knowing is more fundamental than explicit knowing: we can know more than we can tell and we can tell nothing without relying on our awareness of things we may not be able to tell. Things that we can tell, we know by observing them, those that we cannot tell, we know by dwelling in them. All understanding is based on our dwelling in the particulars of that which we comprehend. Such indwelling is a participation of ours in the existence of that which we comprehend; it is Heidegger’s being-in-the-world. (Polanyi 1964, p. x)

Polanyi’s ideas about the importance of tacit knowledge are particularly relevant because they have influenced Ikujiro Nonaka’s theory of the knowledge-creating company: Nonaka (1994), Nonaka and Takeuchi (1995), Nonaka and Toyama (2003). Nonaka’s theory embeds the concepts of tacit and explicit knowledge in a model composed of four modes of knowledge acquisition, namely socialization, externalization, combination, and internalization. The model has the form of a spiral, starting from tacit knowledge (knowing-how), passing through explicit knowledge (knowing-that), and ending again with new embodied tacit knowledge, where the interaction between tacit knowledge and explicit knowledge is amplified through the conversion of knowledge from one mode to the other.

Knowledge creation starts with socialization, the process by which people convert their personal (tacit) knowledge, consisting of skills, mental models, and beliefs that shape the perception of the world, into shared experiences, which are mostly time- and space-specific.

In the socialization process, the phenomenological method of seeing things as they are is effective. By ‘indwelling’ or ‘living in’ the world, individuals accumulate and share tacit knowledge about the world that surrounds them. (Nonaka & Toyama 2003, p. 5)

Then, in the externalization process, tacit knowledge is articulated into explicit knowledge by means of dialogue within the organization and with the help of metaphors, analogies, models, and hypotheses. During the combination process, explicit knowledge is manipulated and shared throughout the organization by building up theories, models, and codified procedures, also making use of formal languages. This newly created explicit (and linguistic) knowledge is converted again into tacit knowledge by individuals through the internalization process, by learning by doing and by developing shared mental models and technical know-how. The cycle of the four modes can be sketched schematically, as shown below.
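
For readers who find a schematic rendering helpful, the spiral just described can be sketched as a simple state cycle. The following Python fragment is only an illustrative sketch of the four SECI conversion modes and their ordering, not an implementation drawn from Nonaka’s own work; all names in it are hypothetical.

```python
from enum import Enum

class Mode(Enum):
    """The four SECI conversion modes (Nonaka 1994), in spiral order."""
    SOCIALIZATION = "tacit -> tacit: sharing experience in a common context"
    EXTERNALIZATION = "tacit -> explicit: articulation via metaphor, analogy, models"
    COMBINATION = "explicit -> explicit: systematizing into theories and procedures"
    INTERNALIZATION = "explicit -> tacit: learning by doing, embodied know-how"

def seci_spiral(turns: int):
    """Yield successive conversion modes; each full turn of the cycle
    returns to tacit knowledge at an amplified, organization-wide level."""
    for turn in range(turns):
        for mode in Mode:
            yield turn + 1, mode

# Two full turns of the spiral:
for turn, mode in seci_spiral(2):
    print(f"turn {turn}: {mode.name.lower():16s} {mode.value}")
```

The point of the sketch is only that the modes form an ordered cycle rather than a set of independent activities: each pass through internalization seeds the tacit knowledge from which the next socialization phase begins.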

Art scholar Mary Jane Jacob has pointed out the parallels between Nonaka’s theory of knowledge production and John Dewey’s philosophy of learning by doing, in which knowledge is conceptualized as a dialectical process of interaction between man and his environment, going through active phases (doing) and passive phases (undergoing): Jacob (2013). A basic difference between the two models lies in the fact that the second one is a collective model:

in contrast to Dewey’s relational model, in which new ideas are formed in thoughtful reflection by the individual, Nonaka and Toyama’s model places emphasis on the sharing and interaction of one’s ideas in relation to those of others. Nonaka and Toyama employ the Japanese word Ba to denote this space of shared context. (Jacob 2013, p. 106)

This notion of Ba, a knowledge-creating place for firms, is similar to Galison’s notion of a trading zone for scientific material culture:

The knowledge-creating process is necessarily context-specific in terms of time, space, and relationship with others. Knowledge cannot be created in vacuum, and needs a place where information is given meaning through interpretation to become knowledge. […] Building on the concept that was originally proposed by the Japanese philosopher Kitaro Nishida (1970, 1990), we define ba as a shared context in motion, in which knowledge is shared, created, and utilized. Ba provides the energy, quality, and places to perform the individual knowledge conversions and to move along the knowledge spiral. (Nonaka & Toyama 2003, p. 6)

Ba is a dynamic self-organizing structure which is created and disappears according to the needs of the organization; its boundaries are fluid, persons can come and go, and contradictory beliefs are confronted and can eventually be synthesized: Nonaka and Toyama (2003), Nonaka, Toyama, and Konno (2000). Ba is a zone open to experimentation, communication, and understanding, where new knowledge can occur.