What is the initiative, and what is the vital impulse, for seeking new knowledge? Is there a significant differentiation between basic and applied research when it comes to learning the truth? How much does society trust the scientific truth, and what is the impact of science on the social, technological and ethical development of a society? Do all the above imply that science provides certainty? Yet, certainty is no longer an issue, since the end of certainties was already established by Prigogine (1996).

This chapter aims to investigate the way science has been working around the phenomena, how these have been analyzed, and which potential criteria, in a narrow and a broader sense, have been used for assessing their substance and for exposing the results to the rest of the scientific community for approval and acceptance.

Introduction

The way scientists mainly perceive a phenomenon stands in a direct dialectic relationship to the context of the upcoming hypothesis. That hypothesis needs to have a straight-line reference to a specific question about this phenomenon per se, or about a futuristic version of it: a vision of how things may differ, and of whether and how that could be possible and functional. At this point, a note on the differences between scientific and technological research could be made to correlate the outcome with possibility and functionality, respectively. Furthermore, it is implied that the hypothesis may be formed either as a “descriptive” goal or a “predictive” one. Nevertheless, the initial “definitive portrayal” of the phenomenon, which establishes the baseline for either of the aforementioned goals, has an apparent dependency on what each one of us anticipates regarding this phenomenon, i.e. what we hypothesize that this sensed natural development will do (for us).

Hence, a hypothesis is, to a large extent, a human construction supported by the personal interpretations, cognitional formulations, understanding and questioning, which come together in the form of an “interest”. In that sense, the hypothesis is created within a certain “atmosphere”, through which the expressions utilized by humans for presenting their interest, and the relevant ways for a satisfactory road map to an outcome, are perceived. The goal of the hypothesis, to find its potential expression, goes via the logical combination of empirical observations and conclusive theoretical and practical interpretations. Thus, incidents occurring in the real world can be translated into phenomena when recognized through the human senses (Kanavouras and Coutelieris 2017c) and placed within the framework of theory and knowledge available at that very specific time (Kuhn 1962). Consequently, the main drivers for scientific and technological developments may, to a great extent, be defined as the hypothesis formation, the particular interest of the human beings, the related means to satisfy their interest, and the outcome of the research applied for that purpose.

Before discussing the perception of a phenomenon, it is crucial to acknowledge and take into account Prigogine’s claims about time, chaos and the laws of nature. According to Newton, nature’s laws provide a descriptive account of a time-less, deterministic universe, where predictions are dictated by absolute certainty. If time is a fallacy and/or time is reversible, then it is meaningless to deal with any predictions. In this context, Ilya Prigogine’s pioneering research led to radical changes. Scientists constantly discover inconsistencies, fluctuations and instabilities leading to evolutionary formations, in all fields, from cosmology to molecular biology. Time-reversible processes are rare in the real world; non-reversibility seems to be the rule. No one prior to Prigogine had ever managed to take this rather obvious, non-reversible time flow into account in the laws of Physics. In La fin des certitudes, Prigogine combines concepts he had previously introduced in his works into an austere, coherent scientific worldview.

A phenomenon cannot be independent, in its perception, from a theoretical scheme that engulfs it within a structurally holistic scheme. At the same time, theories support and are supported by the phenomena. At a consequent stage, a current theory may hence explain why some phenomena occur (or not) by modeling the parameters, causes and conditions that frame their occurrence (or non-occurrence), for the purposes of experimental prediction and control. Alternatively, a theory may explain a lawful regularity among empirical events by producing a theoretical/cognitional model of the causes or conditions that, if fulfilled, necessitate the lawful regularity among these events.

Theoretical issues raised by a researcher, which are formulated into sharp and accurate “technological” questions, may require reproducing the phenomena within the lab, in order to mirror theory against reality according to the re-produced corresponding expressions of the phenomena under given conditions. The point is that an experiment created within a given lab “environment” has a unique outcome, which should be understood—to a certain extent—against an existing or an emerging theoretical scheme. Within this process, it is the experimenter who should produce specific “technological devices”/experiments, through which a decisive answer to these questions (the hypothesis) has to be elicited. More pending issues, riddles and open points may also follow a gradual implementation into the way the experimentation will unfold within its space and time frame. An obvious impact on the research subjectivity and its outcome may now be acknowledged.

Through a follow-up step of an explanation and interpretation process, human engagement is essential for understanding what has actually happened during the experiment. Humans work towards that by providing the expressions of governing principles while, at the same time, also trying to make more sense of themselves as “professionals”, of their world, and of their mode of being part of it (Popper and Eccles 1977).

Summing up the individual characteristics being examined and “adding” them up into a total picture, in order to obtain a comprehensive research process, may not seem appropriate when the total system’s behavior is to be understood as dialogic, emerging through its inherent interaction(s) between self and other parties (Goffman 1959).

The aforementioned discussion concerns the struggle towards the creation of additional valid and justified knowledge parts. Although the form of the creation, presentation and criticism of these outcomes has developed various side forms through the years, the core of this justification process remains mainly stable and quite standardized within the various scientific fields. On the other hand, the rhythm of knowledge production has been increasing in size and quality in parallel to the technological, research-assisting developments, leading to enormous accumulations of data, reports, papers and information, asking for either equally developed management systems, or increased selectivity and quality over quantity. Nevertheless, the way humans have been selectively collecting the existing knowledge parts available for a given system includes the objectivized pre-understanding, as well as an interpreter and an inquirer. Potential users of the available scientific knowledge parts may be sharing a theoretical and practical pre-understanding within the professional communities. The variability deriving from this may define the multiple horizons of pre-understanding, while understanding also occurs as an iterated reciprocal movement between (the meaning of) a part and (the meaning of) a whole to which that part belongs. A part only makes sense within a whole, yet the whole does not make sense except in terms of a coherent configuration of its parts (Gadamer 1994). Both the individual and the whole do not remain independent of each other, or unaffected by any relevant conceptual development which dialectically combines both through the evolution process. A historical reference to any scientific area may easily support this point, since within any scientific paradigm a coherent relationship has developed that has either strengthened this paradigm or led to its abolishment and the consequent change of the paradigm.
In that sense, a dialectic relationship is established not only among the existing, up-to-date concepts but, to a broader extent, with those to come as well, since they question each other a priori. The mode and nature of these dialectic relationships should, to a certain extent, define the evolution of the history of science and consequently the history of human civilization. The dialectic of any invention with the societal background of its time is an interesting example in asking “what if”. What if the Greeks had invented steam power utilization 2000 years ago and had applied it to a steam-turned wheel? How would the industrial revolution, if at all, have developed since then? Where would human civilization be by now? Why did this not happen? Initially a seemingly pure metaphysical question, it has nowadays become of interest to historians of science and technology to respond to such challenges, in an attempt to recognize the drivers behind evolution and to investigate the parameters and conditions that had altered, and keep on altering, “progress”.

Finally, “understanding” contains the information-derived knowledge. Understanding contains specific knowledge about a subject, situation, etc., or about how something works. Besides the ability to appreciate something, understanding also refers to the human power of abstract thought. It is the individual’s perception or judgment of a situation.

Therefore, knowledge has an inherent “engineering functionality” mechanism which transforms the existing knowledge, through means of appropriate justification, into (scientific) understanding. That, according to Capurro (1987), belongs together with technology, to the will-to-power as knowledge, to the will-to-know or, in other words, to a “technology of/for knowledge”. Furthermore, the hermeneutical paradigm offers a framework for the foundation of various relevance criteria such as systems relevance and individual relevance or suitable applicability (Lancaster 1979; Salton and McGill 1983). Nevertheless, this distinction is not enough. According to Froehlich (1994), hermeneutics can provide a more productive framework for modeling systems and user criteria. This framework should include a hermeneutic of (a) users, (b) information collection, and (c) mediation through the system. Yet, even when this happens, the process of interpretation is essential for the constitution of meaning (Capurro 1985).

In order to overcome any issues raised by the complexity of the phenomena, the human factor engagement and the data collection, in this book we propose an independent, knowledge-engineering based method, allowing the experimenter scientist to design and perform the reproduction of the phenomena in the lab, via a clearly defined experimentation “device”. The background of this methodology will be further unfolded within this as well as within the consequent chapters of this book.

A. The World of Phenomena

Etymologically speaking, the phenomenon has a long history, deriving from ancient Greek and the verb “phainomai” (φαίνομαι, I appear); it was also used to express philosophical opinions on visual incidents and reality. An important background in using the term implies that a phenomenon has to be observable, visible and particular, appearing frequently under given conditions. It also signifies a unique and rather important event. Its regularity allows us to express the phenomenon with a normative generalization that approximates a law, turning it into a universal truth, thus scientific knowledge. In many instances, this specific regularity is itself called the phenomenon. Still, phenomena need to be resolved. But could that be possible with descriptive laws, and can we reveal the causes behind the phenomena?

Being in such close encounter with the human senses, phenomena were thought of by a few ancient philosophers as transformed objects of the senses, in contrast to the pure substance of things, the permanent reality. In later years, phenomena were opposed to cognition products, “things as they are”. The philosopher Kant (1724–1804) moved that discussion into the modern science of his times and declared the ‘things as they are in themselves’ (noumena) as not knowable; physical science was defined as the science of the effects rather than that of the phenomena, following the normative framework of positivism or the conceptual toolkit of phenomenology (Kant 1781/1998).

Nevertheless, reported phenomena and their corresponding recorded effects have been progressively accumulated in science, mostly accepted solely on the basis of affirmative experimental indications. Based on that, we may distinguish the phenomena as the events which can be recorded by a trained, therefore skillful and competent, observer, who is then not interfering in the phenomena, while their recorded effects connect the phenomena to the experimentation process, through which latter process the phenomena become known and may be described. The way these effects have been recorded and reported, through research and outcomes heavily involving humans, eventually makes them the products of a significant human interference with nature. It is apparently part of the purpose of applied scientific research to attempt a norm or regularity. At first glance, this may be seen as acceptable or non-acceptable by the scientific community only when placed against a given theoretical background.

Consequently, creating an event at the lab denotes an invention skill, since it involves tools, fabricated devices, means of control, the surrounding space or environment, articulated research and successful applications, in some cases within more than merely one innovative series of reproductive processes. As mentioned above, the experimentation, as much as the hypothesis, is developed, evolved and potentially concluded within a certain “atmosphere” characteristic of the particular lab. The “lab” then becomes more than a battlefield for research; it is a complete micro-world mapped for expertise, equipment and facilities. But might this then imply that particular phenomena and events may only form a reality within a certain device and a certain human, as well as technological, background? Does it mean that a researcher in a lab creates his/her own reality, along with the experts around, one that may not even be reproducible or repeatable, thus particular and incidental in time and space? How may such research be validated and accepted by others?

When scientists are forming and collecting experiences, their perceptions come together only contingently, so that no necessity of their connection exists or can become evident within the perceptions themselves. Apprehension is only an association of the manifold of empirical intuition; therefore, no representation of the necessity for a combined existence of the appearances that it compares, in space and time, is to be encountered in it. Apparently, then, experience encounters the cognition of the objects through our senses and hence through human perception. So, the relation in the existence of the manifold is to be represented in it not as it is juxtaposed in time but as it is objectively conceptualized in time. Yet, since time itself cannot be perceived, the determination of the existence of objects in time can only come about through their combination in time in general, therefore only through an “a priori” connecting of concepts. Now, since the objects carry along their necessity for existence, experience may thus be possible only through a representation of the necessary connection of the perceptions of the objects.

Research as a Concept

Through the centuries, some of the most prominent philosophers have consented that if something is error-free, then it is logic! But where does logic derive from? Could we ever reach the bottom of logic? Is there an answer to every question? In 1931 an unpredictable response was given: the Austrian mathematician Gödel (April 28, 1906–January 14, 1978), at the age of 25, proved the inadequacy of logic. Mathematics, in a peculiar way, begins earlier than logic, while logic’s tinder does not originate from a divine truth but from very fundamental human acknowledgements!

One of the most precious chapters of Philosophy is Gödel’s famous incompleteness theorems (Gödel 1931).

  1.

    If the system is consistent, it cannot be complete.

  2.

    The consistency of the axioms cannot be proved within the system.

What might consistent and complete mean? A logical system is consistent when it lacks contradictions, i.e. when a sentence cannot be true and false simultaneously. A system is complete when every one of its sentences can be proved either true or false within it.

Therefore, Gödel proved that a system that is both consistent and complete cannot exist. If it is complete, then it cannot be consistent, and vice versa. In simple terms, it will contain either contradictions or questions that cannot be answered.

In his second theorem, the Austrian logician clarified that the axioms cannot sufficiently prove the consistency of a theory. Hence, Logic, which mathematicians through the centuries had been striving to identify with mathematics, suffered a horrible and incurable wound. It soon proved to be the Achilles’ heel of mathematics, rendering it quasi-incompetent to describe science as a whole. The birth of mathematics, and of any other science, does not kick off from absolute logic but from intuition itself!
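In symbols, the two theorems can be stated in a standard modern textbook form (this rendering, including the notation, is supplied here for illustration and is not the chapter’s own): for an effectively axiomatized theory T containing elementary arithmetic,

```latex
\begin{align*}
&\text{(1) If } T \text{ is consistent, there exists a sentence } G_T
  \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T;\\
&\text{(2) If } T \text{ is consistent, then } T \nvdash \mathrm{Con}(T),
\end{align*}
```

where Con(T) is the arithmetized statement of T’s own consistency. Thus T is incomplete, and T cannot certify its own consistency from within.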

The Human Factor

Nowadays more than ever, a human engaged in scientific research is not an isolated inquirer trying to reach others or the outside world from his or her encapsulated mind/brain, but someone already sharing the world with others. Modern knowledge is supposed to be shared by the scientific community. Hence, modern knowledge may emerge neither from the primacy of rational or scientific thought as qualitatively superior to all other types of discourse, nor via human subjectivity, as opposed to objectivity, in which inter-subjectivity and conceptuality play only minor roles, nor even from the Platonic idea of human knowledge being something separate from the knower.

Acquiring knowledge in the modern way, via a system, includes on the one side the objectivized pre-understanding and on the other side the interpreter or inquirer. Potential users of scientific knowledge share a theoretical and practical pre-understanding with, for instance, professional communities; that is what defines the various horizons of pre-understanding. Among those, in a later chapter of this book, we shall further discuss the similarity of knowledge through an appropriate classification scheme.

Scientific research seeks explanations by systematically using a predefined set of procedures in order to collect evidence and other data not determined in advance, along with findings that are applicable beyond the limited framework of the study. In that context, qualitative research shares these characteristics and is especially effective for obtaining culturally specific information about the values, opinions, behaviors, and social contexts of particular populations. It seeks to understand a given research problem from the perspectives of the local population it involves (Mack et al. 2005). The strength of qualitative research lies in its ability to provide complex textual descriptions of how people experience a given research issue. It provides information about the “human” side of an issue—that is, the often contradictory behaviors, beliefs, opinions, emotions, and relationships of individuals. In a similar vein, qualitative methods are also effective in identifying intangible factors, such as social norms, socioeconomic status, gender roles, ethnicity, and religion, whose role in the research issue may not be readily apparent. Finally, when used along with quantitative methods, qualitative research can help us to interpret and better understand the complex reality of a given situation and the implications of quantitative data.

Human engagement with the world of phenomena occurs in multiple domains (Roth 1987). As material entities or living organisms in a physical world, we give expression to basic principles such as that of motion. Yet another domain of human engagement is the domain of the sign, a semiotic domain where humans try to make sense of, and to conceptualize within a structured and realistic model, themselves, their world, and their manner of being in it, by means of signs and symbols (Peirce 1931–1936, 1958; Popper and Eccles 1977). When engaged in scientific processes, humans carry these occurrences along. Yet, as researchers and analytical thinkers, they want to know not only the characteristics of the “real” (the rules of nature in the case of physical science) but also the practices by which the real is produced and sustained.

Through the historic process of humans’ understanding, a focus on “relationships” rather than “separate entities” was developed. Within this context, summing up the individual characteristics being examined, “adding them up” to make the whole, has not been considered an appropriate skill compared to the total system’s dialogic behavior emerging in the interaction between self and other participants (Goffman 1959; Miller 1982).

According to Anderson (2009), the real world can be divided into two parts: one constructed of material conditions and the other of social practices. In that sense, laws would immediately change if or when the scientific community decided on what it considered a better explanation. A scientific field can be formed through the twin principles of public presentation and critical review, or “critical rationalism”, expressed by Popper (1959/2002), and what might now be called public (rather than self) reflexivity (Kobayashi 2003). A publicly reflexive science (one that looks back on itself as an object of study) accepts its constituting action and enters the objectivity struggle to attain universal knowledge.

Although qualitative findings are often extended to phenomena with characteristics similar to those in the study, gaining a rich and complex understanding of a specific empirical context or phenomenon typically takes precedence over eliciting data that can be generalized to other scientific areas or applications. In this sense, qualitative research differs slightly from scientific research in general, but it is still a type of scientific research. As Anderson (2009) notes, qualitative research “is used to indicate a set of text-based or observational methods that are themselves used as companions to quantitative methods (Wilk 2001). It is used to point to an independent set of methodologies that can be used with or without quantitative methods, but remain within the same epistemological framework (e.g., Chick 2000). Finally, it is used to designate an entirely different paradigm of science that is not only independent of quantitative methods but also of its epistemological foundation (Denzin and Lincoln 1994).”

In metric empiricism, a term coined by Anderson (2009) for quantitative research, explanation is located in the individual (an epistemological requirement known as methodological individualism). This means that everything needed to explain some activity of the “individual” must be found within the boundaries of the individual. Metric empiricism works from a metric logic of quantities and rates, similarities and differences, dependence and independence, operations and results. It depends on “things” which have clear boundaries (such as “thing” and “not thing”) and mutually exclusive characteristics (such as “thing 1 not thing 2”). One can count things, measure their boundaries, figure their proportion among other things, and so on, to create all sorts of information found in the metric empiricist explanation. This requirement for achieving good explanations is why such theoretical structures as attitudes, scripts, and schemata have developed: they are contained within the mind of the individual and not yet accessible through direct scientific practices. In accordance with typical cognitive theory, that constitutes the engine for most of metric empiricism, while the differences have to do with the characteristic explanation that is produced by each form (Anderson 2009).

In the same context, Anderson (2009) also uses the term “hermeneutic empiricism” (HE) to underscore the paradigmatic nature of qualitative research. The hermeneutic empiricist not only tells the story but also interprets it. In the process of employing HE, she then works from a narrative logic of routines and actions, critical instances and episodes, conversation and discourse, results and phenomena. For the narrative logic to deliver, the quality of the interpretation needs to be proven. Thus, interpretations are based on agents that both generate the action and depict some cultural understanding. They have a recognizable action arc with a beginning, a middle and an end, and, last but not least, they have motive, intentionality, and consequence. With the goal of arriving at a final statement, a transcendental law needs to hold for all conditions within the scope of the law. Although that still leaves a lot of wiggle room for new work, it is much closer to an ultimate settlement of a topic without the need for additional work by the scientific community.

The methodological approach of a technological world seeks primarily to modify the potential of new knowledge and to acquire deeper understanding, with the aim of exploring the potential of recreating the phenomena. This is much aligned with Heelan’s (1997) suggestion to revisit and review the natural sciences from the perspective of hermeneutic philosophy. In any case, the goal has been a clearer, or at least a different, assessment of the status of theoretical explanatory knowledge and its relation to the life-world. Furthermore, some sense of how the current logical empiricist and hermeneutic traditions relate to one another, with respect to the short-term explanatory goals of science and the long-term goals of knowledge, may possibly be gained.

Hermeneutic theories, while emphasizing the collective and the relational, acknowledge the contribution of the particular individual as an active, performing initiator, albeit one who is also an agent of collective understanding. Evidence and claim must preserve the individual’s contribution (Newell 1986), as placed within the potential knowledge level of nature (Anderson 2009).

Interpretations and Explanations

According to Vattimo (1989) and Capurro (1995), the key in today’s knowledge society is our relation to what we do not know, in and through what we believe we know. One of the major challenges of a scientific community is the constructive overcoming of knowledge’s partiality on phenomena, leading to a fully transparent and complete perspective upon a potentially chaotic and yet creative empiricism.

Creating a knowledge database initially needs a pre-defined field of knowledge, which is usually dealt with under a classification in the strict terminology of the field. That might actually be an objectivized pre-understanding collection of the phenomena descriptors, following specifically coded classes of data that can be interactively elaborated and enriched by the scientific community. That shall serve as an epistemic paradigm that conceives the information retrieval process as more than just an interpretation process. The individual who has gathered information creates knowledge structures in order to actively interact with the system (Anderson 2009).

The appropriate philosophical approach to the method of interpretation is triggered by the breakdown of a task and begins by calling on the deep structure of pre-theoretical, pre-categorical understanding of being which is found in the life-world. Inquiry is awakened when a directed question is formed, which, like all directed questions, already implicitly contains an outline of a search and discovery strategy aiming at uncovering a solution. The question construed in this case is not yet in an articulated form. Only later does it achieve an adequate expression in what philosophers of science call an “explanation”.

Philosophers have wondered whether science might be better off abandoning the pursuit of explanation. Duhem (1954), among other philosophers of science, claimed that “explanatory knowledge would have to be a kind of knowledge so exalted as to be forever beyond the reach of ordinary scientific inquiry: it would have to be knowledge of the essential natures of things. Something that neo-Kantians, empiricists, and practical men of science could all agree was neither possible nor perhaps even desirable” (Strevens 2006).

There follows an active dialogue and actions seeking practical fulfillment in the awareness that the sought-for understanding has presented itself and made itself manifest to the inquirer. If understanding is absent, search resumes, dipping again into the available resources. This hermeneutical circle of inquiry is repeated until a solution presents itself within a new cultural praxis in the life-world.
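The iterative structure of this circle of inquiry can be sketched as a simple loop. This is only a toy illustration of the repeat-until-understanding pattern described above; the function, predicate, and resource names are hypothetical, not part of any cited hermeneutic framework:

```python
def hermeneutic_circle(question, resources, understood, max_rounds=10):
    """Toy model of the circle of inquiry: re-read the available
    resources against the question, carrying the provisional
    interpretation forward, until understanding presents itself
    or the rounds are exhausted."""
    interpretation = None
    for _ in range(max_rounds):
        for part in resources:
            # Each part is (re)interpreted against the provisional whole.
            candidate = (question, part, interpretation)
            if understood(candidate):
                return candidate  # understanding has presented itself
            interpretation = candidate  # the horizon shifts; search resumes
    return None  # no solution within the available resources

# Example: "understanding" occurs once a part matching the question is met.
result = hermeneutic_circle(
    "steam", ["wheel", "steam engine", "turbine"],
    understood=lambda c: c[0] in c[1])
```

The design point is that each pass changes the interpreter’s horizon (the carried `interpretation`), so a part rejected in one round may be understood differently in the next, which is the sense in which the circle is reciprocal rather than merely repetitive.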

Within this book, the cyclic development based on the understanding of existing knowledge, the seeking of missing knowledge, and progress through a repeatable process will be extensively discussed, in an attempt to provide a methodological approach for engineering research in the most efficient and complete, in time, way.

Realizing Physical Phenomena

Gadamer (2004) had spoken of the horizon as “the range of vision that includes everything that can be seen from a particular vantage point. Applying this vision on the thinking mind, we talk about the narrowness of the horizon, or of its potential expansion, or the opening up of new horizons, and so forth.” Following Gadamer’s line of thought, Vamanu (2013) noted that the configuration of prejudgments, as it is related to a domain of objects, constitutes a ‘horizon of understanding’. This horizon is a boundary for all possible phenomena we can experience either as the familiar ones or as those potentially cognizable. Furthermore, the horizon mainly constitutes a limit for understanding and a condition of possibility that is constantly shifting. Accordingly, we may recognize the aforementioned “horizon of understanding” as the knowledge’s technology potentiality within the known world, prospectively to be increased and improved only via the optimization of its inherent property, i.e. the engineering of the knowledge. That may be achievable through a better understanding of the phenomena and the contribution of the combined reciprocity of theory and technology.

In developing this concept, we shall follow Seely’s (1984) point: in facing the problem of adherence between human behaviors and experimental procedures—considered to be typically scientific—we may be obstructing practical answers to engineering problems, while at the same time failing to improve the theoretical understanding of these problems. Thus, the collection and analysis of data via a repeated sequence of reproducing experimental “devices” is an essential step towards the problems still apparent and yet to be identified.

The Essentials for Understanding

It is indeed both the data collection and the analysis that unfold the experimentation method and process, connect the analogous opening of knowledge and, hence, lead us to the understanding of nature. This anticipating process constitutes the aforementioned circle of understanding, holding normative implications for research.

Within the circle of understanding, an iterated reciprocal movement occurs between the meaning of a part and the meaning of the whole to which that part belongs, under the assumption that a part only makes sense within a whole. Yet the whole, in turn, does not make sense except in terms of a coherent configuration of its parts. This circle of understanding contains the information-derived knowledge; information and knowledge together constitute the expert system called the hermeneutical cycle. Therefore, the hermeneutical cycle, in its totality as a “device”, depends on an inherent knowledge-engineering functionality which transforms the existing knowledge, via appropriate means of justification, into understanding: one of the forms of knowledge technology.

The data collected regarding the phenomena, and thus the development of knowledge, signify the integrated details within a phenomenon that shall provide a coherent and meaningful whole of the world of phenomena. Every new finding has to be accepted and embedded in the pool of existing experience (a potential “barrier”) while, on the other hand, it may improve the value of this existing knowledge after its acceptance (a promising “advantage”). After becoming part of the existing knowledge, each new discovery moves one step beyond the subjection of the circle of understanding. The consequence for the understanding and design of experimentation systems is that, in setting up a knowledge database, the fragmentation of information forces us to create the conditions of possibility for the retrieval of the knowledge pieces.

In order to move towards a precise revelation of an empirical-theoretical system, the experiment will have to satisfy three requirements:

  (a) it must be synthetic, so that it represents not a contradictory but a possible world;

  (b) it must satisfy the criterion of demarcation, i.e. it must not be metaphysical but must represent a world of possible experience; and

  (c) it must be distinguishable, as a system representing our world of experience, from other similar systems by being submitted to tests which it has to stand up against (Popper 1959/2002).

A rather glaring point requiring further consideration concerns the well-defined experimental “device”, which needs to be engineered by the researchers well in advance and in a purely methodological manner. When considering a phenomenon’s ability to engage in the experimental “device”, we move our thinking from a deterministic stance towards a potential/probabilistic approach, focusing on the classification of the systemic organization through its categorical descriptors. This conception does not simply judge the phenomena’s abilities; it also enhances our understanding of how the environment influences the phenomena’s progress, and of the systemic capability to withstand the potential disclaiming of a hypothesis and ultimately pursue a specific roadmap.

The consequent usage of the operational profile of a system, justified through, among others, physical and mathematical theories, cognition of optimal alternative solutions, benefits, risks, and the factorization and analysis of means and targets, will lead to a methodological concept for an engineering-based design of the systemic descriptors. A potential replacement of standing techniques by new ones, albeit obeying comparatively different principles, allows those members of the research community who wish to reaffirm the substantial rationality of the scientific approach and its results to advance.

The perception of a physical phenomenon, as an awareness perceived by scientists, is included within the first two rows of Table 1, namely “The World of Phenomena” and “Scientific Knowledge”. The following two rows (“Similarity of Knowledge” and “Classification of Knowledge”) correspond to the compulsory step-wise procedures that need to be satisfied in order to guide the resulting decision regarding the experimentation outcome, as presented in the last two rows (“Experimental Design” and “Experimenting Engineering”). Furthermore, Table 1 indicatively describes two modes of transition occurring in the experimentation phase: from experience to technology, through the classification of the existing knowledge, and from technology to technique, through the experimental design. The first transition is the starting point of the cognitional experimental design process, while the second supports the practical execution of the experiment. The common operator in all of the above procedures is the human, a critical factor who perceives the phenomenon under investigation, categorizes the existing knowledge, identifies the gaps (i.e. the potential fields for further, necessary research), detects the internal and external similarities of the phenomenon and, finally, integrates all of the above within the appropriate experimental design before selecting any available technological means to potentially fulfill the aforementioned shortages.

Table 1 A summary of a simplified three steps process for realizing physical phenomena, existing knowledge and experimentation

The interaction of the above matrix with the human factor is bi-directional: understanding is the pathway from matrix to humans, while action is the reverse pathway from humans to matrix. Hence, understanding advances technology, which leads to better techniques and so increases the pure experience of the community (via the scientists’ accepted work), which in turn feeds back into the research that follows.

An important note to be made here is that an experimental design can be adequately available if and only if the rows named “Similarity of Knowledge” and “Classification of Knowledge” are suitably filled. Evidently, the first two rows (“The World of Phenomena” and “Scientific Knowledge”) must be filled first, as has been stated elsewhere (Kanavouras and Coutelieris 2017a). In the end, a fully, properly and acceptably filled matrix (Table 1) may describe the adequate realization of a physical phenomenon under investigation, identify the potential areas of scant information (i.e. the fields towards which the research should be directed) and guide the experimenters towards satisfying the research inquiries.
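As a minimal illustration, the fill-order constraint just described can be expressed as a simple dependency check. The row names below are taken from Table 1, but the data structure and function are our own hypothetical sketch, not part of the authors’ published method:

```python
# Hypothetical sketch: Table 1 as an ordered list of rows, where a row
# may only be filled once every row above it has already been filled.
ROWS = [
    "The World of Phenomena",
    "Scientific Knowledge",
    "Similarity of Knowledge",
    "Classification of Knowledge",
    "Experimental Design",
    "Experimenting Engineering",
]

def can_fill(row: str, filled: set) -> bool:
    """A row is available only if all preceding rows are already filled."""
    index = ROWS.index(row)
    return all(prev in filled for prev in ROWS[:index])

# Example: "Similarity of Knowledge" requires the first two rows,
# while "Experimental Design" is not yet available.
filled = {"The World of Phenomena", "Scientific Knowledge"}
print(can_fill("Similarity of Knowledge", filled))  # True
print(can_fill("Experimental Design", filled))      # False
```

The check mirrors the text: the last two rows cannot be meaningfully completed before the similarity and classification rows, which in turn presuppose the first two.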

Apparently, several pathways might produce analogous results for specific questions. Nevertheless, Coutelieris and Kanavouras (2017) demonstrated that all of these potential matrices are similar and thus produce equivalent outcomes for a given phenomenon. The cross-section of the matrices filled per scientific hypothesis does not constitute a theory regarding this phenomenon. Since infinitely many matrices may be necessary to produce a new cross-section theory, each filled matrix (as in Table 1) either further completes and supports an existing theory or actively disputes its validity.

Essentially, the proposed matrix of Table 1 highlights the way the circle of understanding a phenomenon under question runs. Every innovative observation, simulation result, or idea whatsoever shall inevitably need proofs, which can be justifiably obtained through a suitably filled matrix. A methodological approach defining a specific and well-posed transition roadmap from the description of a system to the formation of a prediction was described in detail by Coutelieris and Kanavouras (2016). According to the authors, in order to link experimentation to knowledge it is necessary to observe and realize the relevant facts, to produce a survey of existing knowledge regarding the phenomenon, and to identify the gaps along with the unnecessary repetitions within this knowledge survey. Thus, a classification scheme is necessary to capture every new finding. The scheme will essentially function as a knowledge pool in which newly derived knowledge is to be accepted and embedded within the existing experience. The ultimate aim is the improvement of the existing knowledge’s value.

A consequence regarding the understanding and design of experimentation systems is that, in setting up a knowledge database, the fragmentation of information forces us to create the conditions of possibility for the retrieval of the knowledge pieces. Observable incidents can be studied through observations and/or mathematical simulations, focusing on the cohesions among the systemic quantities (variables, parameters, etc.) rather than on the quantities themselves.

The Cycle of Understanding

Given that the cycle of understanding refers to a specific phenomenon, Kanavouras and Coutelieris (2017b) showed that it is possible to methodologically approach the supportive evidential background of a theory through a validation process of the theory-related phenomena. Such a process is based on their level of internal similarity (Coutelieris and Kanavouras 2017) and the consequent similarity of the conclusive macroscopic categorical descriptors named “outcomes”. The elementary concept is to define the criteria necessary to assure that such similarity does exist. In that case, the world of the specific phenomenon, as perceived by scientists, constitutes a four-dimensional vector space where each vector corresponds to a specific level of knowledge regarding the phenomenon under question. Given the mathematically well-defined pathway from the satisfaction of the similarity criteria to the estimation of the values of the parameters in the mathematical expressions (equations) regarding the phenomenon, internal similarity is expressed as a linear mapping over this vector space. In this context, each new cycle of understanding might put an existing theory in question and, consequently, generate a new one instead.
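Under this vector-space reading, and with notation that is our own hypothetical shorthand (the authors do not fix symbols here), the internal similarity may be sketched as a linear map over the four-dimensional knowledge space:

```latex
% Hypothetical notation: V \cong \mathbb{R}^4 is the space whose four
% coordinates are the levels of knowledge regarding the phenomenon.
\[
  T : V \longrightarrow V, \qquad
  T(\alpha\,\mathbf{u} + \beta\,\mathbf{v})
    = \alpha\,T(\mathbf{u}) + \beta\,T(\mathbf{v}),
  \qquad \alpha, \beta \in \mathbb{R},
\]
\[
  \mathbf{u} \sim \mathbf{v}
  \quad \Longleftrightarrow \quad
  T(\mathbf{u}) = \mathbf{v}
  \ \text{ for some admissible linear } T .
\]
```

Here \(\mathbf{u}\) and \(\mathbf{v}\) stand for the knowledge vectors of two phenomena, and “admissible” means that \(T\) is obtained through the satisfaction of the similarity criteria mentioned above; this is a sketch of the idea, not the authors’ formal definition.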

Popper (1959/2002) mentioned that a theoretical system has to be (a) free from contradiction (whether self-contradiction or mutual contradiction), which is equivalent to the demand that not every arbitrarily chosen statement is deducible from it; and (b) independent, i.e. it must not contain any axiom deducible from the remaining axioms (in other words, a statement is to be called an axiom only if it is not deducible within the rest of the system). These two conditions concern the axiom system as such. Regarding the relation of the axiom system to the bulk of the theory, the axioms should be (c) sufficient for the deduction of all statements belonging to the theory which is to be axiomatized, and (d) necessary for the same purpose, which means that they should contain no superfluous assumptions. In a theory thus axiomatized, it is possible to investigate the mutual dependence of the various parts of the system.

In order to tackle such systems, we encourage a proper handling of the “theories-technology-technique” triptych. It is this combination that will fully uncover the existing knowledge, let us categorize it, and reveal any potential gaps. Direct inter- and intra-developmental procedures among these areas of knowledge, as shown in Fig. 1, initiate developmental outcomes from each of the above sources of empiricism. Similarly, engineering has theoretical, technological or technical sources of origin within sequential developments (Mitcham 1999). This implies that the region where the three areas meet and overlap constitutes the pure and applied zone for paradigms in each system (see Kanavouras and Coutelieris 2017b).

Fig. 1

The overlaying area of the three major empiricist contributors constitutes the existing pure knowledge of natural phenomena

To fulfill such an approach, we propose to work in a cyclic mode, satisfying the intention that, on every cycle, each and every peripheral experience has the same distance from the “center”. At the center of the cycle of experiences, the knowledge regarding each field may be positioned. Such a center could be named the “dominating paradigm” which, according to Kuhn (1962), is usually established and commonly accepted through years of empirical research and cognitional understanding. It is also constantly subjected to revisions, criticism and reconstructions, until more problems than answers emerge from research in that field. Eventually, research will lead to a so-called “scientific revolution” that, through the new knowledge, shall produce and establish a “new paradigm”.

Therefore, such a cyclic process shall now be described as an eight-step process, depicted in Fig. 2. The cycle contains the natural phenomena and a clear involvement of the experimenter in defining them. Each and every step of this cycle may be perceived as an individual process in itself, while each step constitutes, in a sense, an individual “device” or “gear” with a unique and particular functionality within the knowledge-evolution process. Apparently, some of these steps rely mostly on the scholarly knowledge available at a certain time period and on practical experience (technical or technological), while others, although making use of accepted parameters, depend mainly on the cognitional and individual interpretations of the researchers. Nonetheless, both will lead human decision-making towards setting, defining and working on the research efficiency based on its outcome. Furthermore, experimental proficiency will now be reflected through the validation of the knowledge gaps (filled, remaining, or even newly opened) and the efficient use of resources for each particular research plan (Kanavouras and Coutelieris 2017b).

Fig. 2

The cycle of understanding the physical world and increasing knowledge of physical phenomena

In brief, the cycle is triggered by the description of the empirically defined systemic categories which, when placed into a certain context, shall form the hypothesis. Any hypothesis, hence, needs to be presented through the relevant mathematical representations of the physical or chemical phenomena related to the system, as well as of the naturally developed sequences of phenomena occurring within an event. Having established and identified such laws and relationships for the hypothesis, the available knowledge may then be analyzed and logically combined within the systemic and hypothesis boundaries, eventually leading to its proper classification. That, consequently, makes the experimenter capable of identifying the research gaps and selecting experimental conditions that will allow a proper expression of the relevant physical and natural parameters. Securing or disclaiming the hypothesis through this approach will initiate a new underlying cycle for the newly refined hypothesis. This roadmap is presented step by step and conclusively supported in Fig. 2.

In Fig. 2, each transition within the cycle represents a specific procedure towards deep insight into the phenomenon under investigation. More precisely, for a given system, a falsifiable hypothesis must be defined (arrow 1) in order to be able to study the system under this hypothesis. To proceed further with understanding, it is necessary to apply the relevant laws and principles (arrow 2) and to express them mathematically (arrow 3) before obtaining the results (arrow 4). After obtaining this information, it is necessary to gain cognition about the system (arrow 5) and to classify the existing knowledge (arrow 6) in order to identify the existing gaps and to design and perform new experiments (arrow 7), which will unavoidably define a new system, and the cycle will restart (arrow 8).
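The eight arrows above can be sketched as a simple repeating sequence. This is a hypothetical illustration of Fig. 2 only; the step names paraphrase the arrows and are not the authors’ code:

```python
from itertools import cycle

# Hypothetical sketch of the eight arrows of Fig. 2 as a cyclic sequence.
STEPS = [
    "define a falsifiable hypothesis",         # arrow 1
    "apply the relevant laws and principles",  # arrow 2
    "express them mathematically",             # arrow 3
    "obtain the results",                      # arrow 4
    "gain cognition about the system",         # arrow 5
    "classify the existing knowledge",         # arrow 6
    "design and perform new experiments",      # arrow 7
    "define a new system (restart)",           # arrow 8
]

def run_cycle(n_steps: int) -> list:
    """Walk n_steps transitions around the cycle, wrapping after arrow 8."""
    stepper = cycle(STEPS)
    return [next(stepper) for _ in range(n_steps)]

# After eight transitions the cycle restarts with a refined hypothesis.
print(run_cycle(9)[8])  # "define a falsifiable hypothesis"
```

The wrap-around after arrow 8 captures the text’s point that disclaiming or securing a hypothesis does not end the process but seeds the next, refined cycle.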

Consequently, in order to progress with the cycle’s steps or to validate and evaluate any research plan and objective, the classification of the existing knowledge is considered absolutely essential. This knowledge classification (step 6 of Fig. 2) calls for a satisfactory process that includes a thorough review of the available data and the experience gained, along with a selection of both the systemic descriptors and their main participating classes. Then, knowledge can be captured within those class-cells that have been mainly defined via the participants in the system.

In order to work according to this methodology, we need to implement the systemic characteristics, predicates, attributes, qualities or properties as part of the systemic experiences that control the evolution of the phenomena within a given system placed in a particular environment. Furthermore, we also wish to propose a comprehensive methodology for fully meeting the research requirements. As a methodological consequence, the conclusions of research and studies should focus on determining the terms and conditions under which any particular outcome satisfactorily fits the given hypothesis. Moreover, the particular systemic participants, as recorded within the existing knowledge, need to be classified in order to point out their interrelationships (Kanavouras and Coutelieris 2017b).

For example, in order to approach the “food-packaging-environment” system, we shall consider it an engineering-based system following the “in-process-out” scheme. We may accordingly define “in” as constituted of matter and energy, “process” as the relationships within and along the matter and energy, and “out” as the outcome perceived by the senses. The scheme of such a system is given in Fig. 3. The main knowledge classes of this scheme are further discussed in the following section.

Fig. 3

The four categorical descriptors of a system in the center and their main knowledge classes

In particular, matter, which in the case of packed food refers to all the materials constituting the system, consists of the following essential groups or knowledge classes: (i) each and every experience, or empirical evidence; (ii) the properties; (iii) the qualities; and (iv) the characteristics, for any systemic possibility in general. Accordingly, “experience” may refer to the physical dimensions; the “properties” of the matter can be related to the mass-transfer phenomena among systemic materials; the “qualities”, in a sense, adhere to the indicators, markers or factors relevant to the materials; and the “characteristics” propose the molecular chemistry, internal composition, recipes and concentrations of interest to the hypothesis. Based on the above scheme, experimenters essentially modify the evolution “possibilities” of the system by critically utilizing knowledge and then making cognitional decisions on how justifiably it complies with the hypothesis in question and, most importantly, on the appropriateness of this knowledge. Having achieved that, innovative approaches (ideas), inventive applications (technology), practicalities (techniques) and/or alternative rationality (scientific theories) may emerge.

An assortment of contributing properties/attributes of packaging (materials, means and processes), that can be described in physical terms and measured within analytical capabilities, can potentially become model parameters for mathematical expressions. Hence, according to the aforementioned four categories and the knowledge classes within each one, a list of properties, specifications or attributes could be used to further elaborate on each and every group of “experiences”, “properties”, “qualities” and “characteristics” for the hypothesis, regarding packaging in particular.

For a “food-packaging-environment” system that may undergo spoilage and adulteration during storage, the available “energy” is now discussed. This descriptor defines the progress of the phenomena within, as well as among, the elements of the matter. It may be further broken down into the experience, which includes the system’s energy requirements (empirically measured); the energy properties (which may include energy transfer, storage and/or system capacity); the qualities (energy transformations within the boundaries of the system); and, finally, the characteristics (connecting the energy to its rate/coefficient, frequency, wavelengths and similar knowledge descriptors).

An inherent systemic uniqueness is reflected in the “relationships” among its contributors. Accordingly, “relationships” may break down into the experience (sensed by the species’ reactivity); the affinity properties (of the species for each other); the qualities (such as the rates/coefficients, sequences and/or levels of expression); and the characteristics (indicating the contributors’ inter- and intra-dependencies, adequacies, deficiencies, imposed restrictions, catalytic performance, etc.).

Frequently, the “outcome” has been the one and only component of the data collected in several studies. In this class, the measured experience mainly refers to quantity; the properties of the outcome are adjacent to its identification; the qualities signify the selected indicator(s), marker(s) and their significance; and the characteristics can be related to the molecular chemistry of the evolved compounds.
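The four categorical descriptors and their four knowledge classes (Figs. 3 and 4) can be pictured as a 4 × 4 grid in which empty cells mark research gaps. The sketch below is our illustrative assumption, not the authors’ implementation; the few cell entries are abbreviated examples taken from the text:

```python
# Hypothetical sketch: descriptors x knowledge classes; cells hold the
# recorded knowledge, and None marks a gap towards which research is directed.
DESCRIPTORS = ["matter", "energy", "relationships", "outcome"]
CLASSES = ["experience", "properties", "qualities", "characteristics"]

grid = {d: {c: None for c in CLASSES} for d in DESCRIPTORS}

# Abbreviated entries from the packed-food example in the text:
grid["matter"]["experience"] = "physical dimensions"
grid["matter"]["properties"] = "mass transfer among systemic materials"
grid["energy"]["qualities"] = "energy transformations within the system"
grid["outcome"]["experience"] = "measured quantity"

def knowledge_gaps(g):
    """Return the (descriptor, class) cells still lacking recorded knowledge."""
    return [(d, c) for d in DESCRIPTORS for c in CLASSES if g[d][c] is None]

print(len(knowledge_gaps(grid)))  # 12 of the 16 cells remain to be filled
```

Listing the empty cells is one concrete way of performing step 6 of the cycle: the gap list tells the experimenter where further, necessary research should be directed.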

Conclusively, geometry, volume content, area, permeation, transparency, pack integrity, food-packaging active/passive interactions, activation energy and thermal properties, energy transport, energy transfer, thermal capacity, amounts, polarity, active/passive interactions, active reactivity sites, sorption, etc. are, in the opinion of the authors, essential considerations for packaging, carefully selected with the food in mind, as summarized in Fig. 4.

Fig. 4

Exemplary accomplishment of filling the four knowledge classes and the conditions for testing the hypothesis against them

In Fig. 4, the four main knowledge classes of the categorical descriptors of a system are presented, along with the conditions for essentially altering and controlling (engineering) the evolution “possibilities” of the system. In a sense, this last, fifth, column is the practical outcome of this method and is of interest to the experimenters. These are the conditions under which the phenomena may be expressed, or which have an impact on the level of their expression. The research interest can be fully and clearly revealed via this approach, i.e. when the experimenters correlate the properties, specifications and attributes with the conditions that may impact them. These conditions then become appropriate for experimentation, directly linked to the measured empiricism and, through that, to the knowledge classes and systemic categories. Therefore, these conditions will test, at minimum risk, the justified packaging that shall provide the least possibility of disclaiming the food-preservation hypothesis (Kanavouras and Coutelieris 2017b).

B. Research in Practice

At the starting point of a research attempt, scientists apparently face a series of questions, dilemmas or even obstacles that they have to overcome in order to establish a research “protocol” they trust and can support. Let us recall some of the questions that, at some point, we have all faced. For example, how may we proceed regarding the set-up and use of a particular research scheme? Do we have all the information needed to replicate an experiment? How can we trust the differences between the tools, equipment, conditions and materials we are using and those used by others? How may the experimental set-ups be compared with one another? Have we, in the end, wasted our research efforts and resources, or have we actually managed to look behind the closed curtains of the phenomena and reveal a hidden stage of physical activities, systemic performances, theoretical extensions or mismatches? How similar might the findings or statements of one article be to those of another? Furthermore, how could we really comment on their level of similarity, and how may we select one to implement over another, i.e. when is something an exception or a contingency? Even if some of these questions have not been faced before, we believe we can all agree they are valid and important, and that their answers may have a significant impact on the quality of any research and experiment. Therefore, in the following section of this book, we shall try to approach the answers, yet without answering them in a unique way, which is left to each reader with the interest to do so.

Creating and Reproducing Phenomena

We certainly have to ask ourselves about the origin of a phenomenon, its originality and uniqueness and furthermore, our ability to be present during its occurrence and evolution, or whether we may reproduce a phenomenon under our supervision within certain space and time.

One of the cardinal rules of philosophy is that one must always “save the phenomena”: whatever else a philosophical explanation might do, it should always account for the way things seem to the spectator, to us. The principle of “saving the phenomena” is a powerful tool of criticism. The necessity of saving the phenomena is obvious for another reason as well, namely when one considers the relationship between an explanation and its explanandum, as philosophers of science technically call what an explanation explains. Meanwhile, an aspect of a phenomenon apparent at one level might not be apparent at another. Since ancient Greek times, philosophers like Plato (428/7–348/7 BC), and even earlier Parmenides of Elea (515–450 BC), saved the phenomena while claiming that the sensible world is, in a sense, illusory: in their attempt to explain that phenomena are illusory, they did explain those phenomena (Baggini and Fosl 2003).

The discussion regarding the creation of a phenomenon attracts increased interest and becomes more meaningful when a phenomenon may be reported, recorded or even reproduced although no solid theory is available to explain it or to embrace it within its establishments. Whether such a theory may ever be completed is an additional issue. On the other hand, many phenomena were created only after a theory had been established and accepted.

At the same time, there are theoretical entities which are used during experiments in order to facilitate their functionality, support the conclusions and establish new theoretical relationships via the practical handling of such entities (see the entities used in modern physics regarding the structure of matter). In other words, during experimentation we may use theoretical entities, or even prepare and handle them under certain conditions based on their known properties. Additionally, we create new devices and structures, also based on their properties, in such a way that, eventually, a specific outcome will be obtained. The aforementioned properties may have previously been derived from pre-experiments, and they may now be used as reasoned properties for the new experiments. In that sense, these properties are phenomena in themselves. Creating phenomena is then based on the capability of isolating and using these properties as raw materials for creating and reproducing new phenomena. The theoretical entities are thus not the object of the experiment; rather, they constitute the means to proceed with it, since otherwise the lack of materials would hinder the experiment.

Experimentation

According to Kant (1781/1998), experience deals with the raw materials provided by our sensible sensations. Thus, experience is the first product brought forth by our understanding. It is for this very reason the first teaching and, in its progress, so inexhaustible in new instruction that the chain of life in all future generations will never lack new information to be gathered on this terrain. Nevertheless, it is far from the only field to which our understanding can be restricted. It tells us, to be sure, what is, but never that it must necessarily be thus and not otherwise. For that very reason it gives us no true universality, and reason, which is so desirous of this kind of cognition, is more stimulated than satisfied by it. Now such universal cognitions, which at the same time have the character of inner necessity, must be clear and certain in themselves, independently of experience; hence one calls them a priori cognitions, whereas that which is merely borrowed from experience is, as it is put, cognized only a posteriori, or empirically.

What is especially remarkable is that even among our experiences, cognitions are mixed in which must have their origin a priori and which perhaps serve only to establish connections among our representations of the senses. For, if one removes from our experiences everything that belongs to the senses, there still remain certain original concepts and the judgments generated from them, which must have arisen entirely a priori, independently of experience. This could be said because they enable one to say more about the objects that appear to the senses than mere experience would teach, or at least make one believe that one can say this, and to make assertions that contain true universality and strict necessity, the likes of which merely empirical cognition can never afford.

Literature Search

The literature available for the engineering sciences and technology has moved from predominantly journal articles to a body of literature that also includes books, encyclopedias and conference proceedings, among others. Engineering science covers many branches such as food, civil, mechanical, electrical, environmental, marine, etc. It is an applied science that also covers basic fields such as chemistry, analysis and processes, based on a sound knowledge of the pure sciences, including mathematics and biology. Engineers are also involved in developing engineering standards to promote and facilitate world trade.

Principal sources of information have emerged, highly specialized in topics and thematic fields, providing researchers with extensive and varied pieces of information and data along with critical reviews of highly appreciated topics. It is inevitable, though, that researchers may omit some information sources in any type of search that mainly considers English-language sources, although in the case of local conferences foreign-language materials may be included.

The Language of Science; the Terms of Truth

A critical point of consideration in the research approach, besides the language itself, is the variability in terminology for the same or similar meanings. Such new words may signal new constructs, indicating a lively science that is pushing against the boundaries of what is commonly used. It is prima facie plausible to postulate that there is nothing more to understanding a text than understanding the sentences composing it, and that there is nothing more to understanding a sentence than understanding the words which compose it. The meaning of a complex expression is supposed to be fully determined by its structure and the meanings of its constituents (Szabó 2013).

Terminology is mostly used by authors and researchers for best describing their ideas, methods, tools, equipment and knowledge, and much less for presenting findings, results and conclusions. It is not uncommon, particularly for young researchers, to have such new words trouble them in understanding their readings. To make things worse, messy nomenclature in the keywords frequently used as indicators of research and descriptors of the work may further confuse and irritate a researcher of the literature, and may break down the bridges between one researcher’s observations and another researcher’s needs.

Language processing is a complex skill which becomes routinized once one has gained experience at all the important levels of understanding expressions: the phonological, the semantic, the syntactic and the pragmatic. Over the course of time, sounds, words, sentences and entire texts are automatically classified in one’s cognitive system (Nehamas 1987), and therefore language processing takes place largely unconsciously under standard conditions. If a difficulty arises in the language-comprehension process, and if one is not able to immediately understand one or more linguistic expressions, then cognitive resources in the form of attention are activated, generating an interpretative hypothesis. This conscious process is often modeled as an interactive process across all relevant levels of information processing: the phonological, the semantic, the syntactic and the pragmatic.

The process of parsing, during which the words in a linguistic expression are transformed from a written sentence into a mental representation with a combined meaning of the words, as studied by cognitive scientists, is especially relevant. During this procedure, the meaning of a sentence is processed phrase-by-phrase, and people tend to integrate both semantic and syntactic cues in order to achieve an incremental understanding of a statement or a text (Pinker 1994). It was Wittgenstein, moreover, who commented in his later work Philosophische Untersuchungen not on the meaning of a word as perceived by a human mind in isolation, but rather on the meaning obtained only through the actual use of this word (Wittgenstein 1953).

A nexus of meaning, connected with a specific linguistic expression or a specific text, is construed by the author against the background of his goals, beliefs, and other mental states while interacting with his natural and social environment: such a construal of meaning is a complex process and involves both the conscious and unconscious use of symbols. Text interpretation can be conceptualized as the activity directed at correctly identifying the meaning of a text by virtue of accurately reconstructing the nexus of meaning that has arisen in connection with that text (Mantzavinos 2016).

As Rescher (1997) points out: “The crucial point, then, is that any text has an envisioning historical and cultural context and that the context of a text is itself not simply textual—not something that can be played out solely and wholly in the textual domain. This context of the texts that concern us constrains and limits the viable interpretations that these texts are able to bear. The process of deconstruction—of interpretatively dissolving any and every text into a plurality of supposedly merit-equivalent construction—can and should be offset by the process of reconstruction which calls for viewing texts within their larger contexts. After all, texts inevitably have a setting—historical, cultural, and authorial—on which their actual meaning is critically dependent”.

What lies at the heart of this epistemic activity, i.e., of inventing interpretations as reconstructions of nexuses of meaning with respect to different aims, and how it can be best methodically captured is the subject of the application of the hypothetico-deductive method in the case of meaningful material as a plausible way to account for the epistemic activity of text interpretation (Føllesdal 1979; Tepe 2007).

Specific problems of interpretation, arising within specific disciplines like jurisprudence, theology and literature, have been the focus of philosophical approaches. The aim was indeed to show what kind of general problems of interpretation are treated by the discipline of hermeneutics and to identify some important procedures leading to their efficacious solution—always keeping in mind that these procedures, like all epistemological procedures, are bound to remain fallible.

Hypothetico-deductive or deductive nomological method has been originally debated in connection with the philosophical theory of scientific explanation and can help establish hermeneutic objectivity, ultimately based on a critical discussion among the participants to the discourse on the appropriateness of different interpretations regarding the fulfillment of the diverse aims of interpretation. Inter-subjective intelligibility, testability with the use of evidence, rational argumentation and objectivity are, thus, feasible also in the case of text interpretation. A series of examples from diverse disciplines demonstrate this.

It has indeed been the case that the main protagonists, Hempel (January 8, 1905–November 9, 1997) and Popper (July 28, 1902–September 17, 1994), have portrayed scientific activity as exclusively an explanatory activity—largely aiming at answering “why?”-questions (Hempel 1962, 1948; Popper 1959/2002, 1963/2002). They both notably argued that what lies at the heart of scientific enquiry is a starting hypothesis, e.g. that lead is heavier than water: if this is true, it is possible to deduce certain other claims, true ones that follow from it; one obvious example is that lead sinks in water! This influential and very often only implicitly shared view that all scientific activity is explanatory need not be followed, however. Moreover, answers to “what was the case?”-questions, rather than only to “why?”-questions, can be allowed to enter the field of science, appropriately accommodating the activities of all those whose daily work consists in text interpretation. The application of the hypothetico-deductive method is a way to show that the standards currently used when dealing with problems of explanation—intersubjective intelligibility, testability with the use of evidence, rational argumentation and objectivity—can also apply to problems of interpretation.
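The hypothetico-deductive cycle described above—state a hypothesis, deduce an observable consequence, and confront it with evidence—can be sketched schematically. The following is a minimal illustrative sketch, not a method from the source: the density values and helper names are assumptions chosen only to reproduce the lead-and-water example.

```python
# Illustrative hypothetico-deductive sketch (hypothetical helpers, standard
# density values in g/cm^3; not taken from the source text).

DENSITY = {"lead": 11.34, "water": 1.00}

def hypothesis_holds() -> bool:
    """Hypothesis: lead is heavier (denser) than water."""
    return DENSITY["lead"] > DENSITY["water"]

def deduced_prediction() -> str:
    """Deduction: if the hypothesis is true, lead must sink in water."""
    return "sinks" if hypothesis_holds() else "floats"

def survives_test(observed: str) -> bool:
    """Test: the hypothesis survives only if the deduced consequence
    matches what is actually observed."""
    return deduced_prediction() == observed

# A matching observation corroborates (but never proves) the hypothesis;
# a mismatch would falsify it, in Popper's sense.
```

The point of the sketch is the direction of inference: the observation never generates the hypothesis, it only confronts a consequence deduced from it.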

It is only the institutionalization of the possibility of criticism that can lead to the correction of errors when these evaluations and choices are involved. Our fallible judgments are all we have, here as elsewhere, and enabling a critical discussion is the prerequisite of making informed choices (Mantzavinos 2016).

Information

The information-seeking process is basically a context-interpretation process of the people who store different kinds of knowledge in various conceptual formats. This process has a meaning within fixed contexts of understanding such as: thesauri, key words and classification schemes. Considering the process of storage and retrieval of information from the hermeneutical point of view, it could be considered as the articulation of the relationship between the existential world-openness of the inquirer, his/her different horizons of pre-understanding and the established horizon of the system.

The fragmentation of information forces us to create the conditions of possibility for the retrieval of the pieces because their common context remains tacitly implied. The partialization opens the possibility for different perspectives of interpretation. This situation can be described in terms similar to those used by Heidegger to analyze the structure of understanding: the general conceptual background (Vorhabe), the specific viewpoint (Vorsicht), and the corresponding terminology (Vorgriff) (Capurro 2000).

Yet the system called ‘empirical science’ is intended to represent only one world: the ‘real world’ or the ‘world of our experience’. In order for an empirical theoretical system to become a little more precise, it will have to satisfy three requirements: first, it must be synthetic, so that it may represent a non-contradictory, a possible world. Secondly, it must satisfy the criterion of demarcation, i.e. it must not be metaphysical, but must represent a world of possible experience. Thirdly, it must be a system distinguished in some way from other such systems as the one which represents our world of experience. The system that represents our world of experience is the one which has been submitted to tests, and has stood up to tests (Popper 1959/2002).

Knowledge obtained via experience is rather a public ‘domain’ where scientists can share the products of their understanding obtained through a well-defined experimental, observational and in-nature-reproducing framework. It is implied that common ways of working, mainstream actions, terminology and communication conditions have been developed, using scientific language and other media. How may the scientific knowledge interpreted by one scientist be understood by another, if not via logical reasoning, common goals, set-up norms, definitions, similarities, gaps, criticism, testing questions, etc., aiming at the most efficient common knowledge and understanding of all-time developments, transformations and innovations? Finally, though subject to change under transmission, it is not on this account devoid of truth; rather, meaning is the instrument through which truthfulness makes its appearance in the life-world.

The hermeneutic cycle was conceived in terms of the mutual relationship between the text as a whole and its individual parts, or in terms of the relation between text and tradition. With Heidegger, however, the hermeneutic cycle refers to something completely different: the interplay between our self-understanding and our understanding of the world. The hermeneutic cycle is no longer perceived as a helpful procedural tool, but entails a cognitional task with which each of us is confronted. What matters, Heidegger claims, is the attempt to enter the cycle in the right way. A suitable investigation into the ontological conditions of the system, one that ought to work back on the way in which universal cognition is led, is essential in the initiation of the cycle. What is needed is therefore not just an analysis of the way in which we de facto are trained by history, but a set of quasi-transcendental principles of validity in terms of which the claims of the tradition may be subjected to evaluation.

Gadamer (February 11, 1900–March 13, 2002) recognized the importance of pre-judgments, a less loaded word in English than ‘prejudices’, in the understanding of language. For him, respect for authority and the traditions of one’s community are paramount; meaning has to precede understanding. Habermas (June 18, 1929–), on the contrary, believes that the project of cultural and philosophical modernity—which he distinguishes from societal modernity and the crisis in advanced capitalism, namely the working-out of the Enlightenment’s enthronement of reason—is not dead, as the postmodernists in philosophy and in the arts assert, but is yet incomplete; this is probably the most important aspect of his position, and perhaps one that informs all his thought. For Habermas, reason should be the basis of the consensus upon which communication depends, not authority.

Hermeneutics, Habermas argues, must be completed by a critical theory of society. Following the above expressions of hermeneutics and attempting a transfer from the philosophical to the technological sector, we may claim their parallel compliance with each other. In our view, the scientific evidence historically collected over time needs an evaluation method in order to be validated by quasi-transcendental principles. Such a method may only be applicable within classes of knowledge, rather than to individual knowledge points. It is only via the classification of knowledge that a critical approach to scientific knowledge may be accomplished, since the criticism emerges directly through the systemic descriptors, at the three levels of possibility.

In addition to meanings seen on the basis of a common tradition of interpretation (with its presumption of historical continuity), legitimacy can be gained by other meanings independently of any presumption that there exists a historical continuity of meaning with the source through a common tradition of life, action, and interpretation. Each acting within its own horizon of research purposes is in dialogue with relevant data through its own empirical processes of testing and measurement (Heelan 1997).

The hermeneutic method is an ongoing interpretative work that tries to provide a philosophical foundation for understanding how quantitative empirical methods give meaning to empirical contents, and how theory-laden data depend on the self-assessed presentation (classification) of the measured entity as a public knowledge entity. Thus, via the knowledge classification, the hermeneutic framework will be capable of emphasizing: the interpretative nature of the domain of objects of information science and of the researcher’s access to it; the effects of the researcher’s prejudgments on her choice of research topics and on her approach to them; the importance of iteration in the process of data collection and of the integration of ‘anomalous’ details into a coherent, meaningful whole at the level of data analysis; and the importance of repeating the study of the same phenomenon to acquire an ever richer understanding of it.

In particular, the double role of measurements with equipment, assisting in creating and refining both theoretical and technological meanings, will be addressed as part of the reproduction of the natural phenomena within an experimental “device”.

Nevertheless, the information is an opportunity for characteristic formations within the hidden dimension of language. The information can become a voice within the polyphonic nature of human logos, if and only if it is interrelated to the whole range of its hidden potentialities. If it is not, then we will have no more than an information society. The key issue in today’s knowledge society is our relation to what we do not know in and through what we believe we know (Anderson 2009).

Data

Data collection and analysis set off the cycle of understanding. The further unfolding of experimentation, connected to the parallel unfolding of knowledge and hence of the understanding of nature, constitutes the cycle of understanding anticipation, containing normative implications for research. For a hermeneutic circularity, the knowledge of the phenomena, and thus the formation of knowledge, implies that it is the ‘anomalous’ details within a phenomenon that, when integrated, provide a coherent, meaningful entity of the world of phenomena. The apparent “anomalies” have still to be identified, collected and analyzed at the level of data collection, via a repeated sequence of reproducing experimental “devices”. With hermeneutic theories in need of being empirically adequate, scientists should make significant and accountable choices for a human accomplishment within the semiotic domain. This truth construction accommodates the characteristics of the phenomenal world but is not determined by them (Schultz 1965; Weber 1974).

Whatever operations are performed for gathering information about people or objects of interest in a study, some trace of what is detected must be captured and recorded. All the traces together are called “data”. In the natural sciences, data come from the empirical experiences of the researchers, either via direct or, most of the time, indirect means, tools and equipment. These are in particular called raw data, even though they are very frequently the outcome of software or another type of treatment applied to the analysis of a phenomenon taking place during the experiment. A further processing of the raw data may be required in order to closely approach, form or imply an answer to the initial question. It could as well be that the treatment of the raw data is selected in order to best fit and support a pre-set opinion, point or objective of the researcher, concluding in a certainly biased outcome, remark and conclusion. Then, it is also possible that treated data require an additional writing skill which, as mentioned already, could be expressed and interpreted on an individual basis, let alone that the individuals collecting the data may be different from those analyzing them and even writing about them.

In that process, the deep understanding and appreciation of the theory supporting and defining the analytical equipment, the impact of technology on the handling and outcome of the resulting values, and the human factor itself need to be considered as significant factors of data collection and analysis. It is, in total, an additional matter of trust and confidence regarding the manipulation and inspection of raw data to become processed, (most often statistically) analyzed and revealing. Furthermore, data rarely tell a simple univocal story, and this should not come as a surprise! With this in mind, the question regarding the quality of the data collected and their truthful reflection of the phenomena intended to be examined follows each and every single experimentation/data collection methodology.

Starting from these findings, one may wonder how to situate them in the context of existing knowledge, outside the typological varieties of research studies’ publications and rather within a more scientific, natural-laws-based and relationship-dependent approach, with a full inclusion of parameters, descriptors, conditions and outcomes. We may say so since, although there are many variations of research strategies that represent various ways of confronting a common set of problems, besides the nomenclature differences, in the midst of variety many familiar ideas may be encountered. Their application, though, requires a certain type of expertise in the strategic methodological classification of the data and knowledge, as well as in the design of useful information extraction from them. With strong support for that saying, we wish to further propose and discuss a simple framework for categorizing studies, as well as a conceptual basis for recognizing justifiable connections to further research planning and execution.

To cite Kant (1781/1998) again, for every concept there is requisite, first, the logical form of a concept (of thinking) in general, and then, second, the possibility of giving it an object to which it is to be related. Without this latter it has no sense, and is entirely empty of content, even though it may still contain the logical function for making a concept out of whatever sort of data there are.

Literature Briefs

It is apparently essential to search for and read the scientific publications, provided one has the experience of doing so, via having received relevant education, training or guidelines for practical approaches. The aim is to maximize the benefit of understanding, appreciating and ultimately utilizing their content, meanings, methods, examples, data, theories, opinions, ideas or even failures. All the aforementioned “benefits” contribute to the formation of a hypothesis, or provide guidelines for formulating a methodology, understanding and appreciating, through the compliance of the derived results with those previously produced by others.

It is of no surprise to mention that although an enormous number of articles are submitted for publication daily, from various research institutes, groups of scientists and organizations, under numerous scientific programs, to many types of publishing journals of various prestige and acknowledgment, only a fraction of those are selected as worth publishing. Within this framework, specialized editors are doing their best in managing the workload and subjectively handling the work coming to their tables, (needless to say) with a certain publication policy and a certain goal for the “best” possible selections.

So, we may use the word research as a term describing a multistage, multidisciplinary process, with formal rules that usually describe each step, the first one being a carefully defined question, followed by the design of a systematic way of information collection and knowledge extraction. It is this process that may distinguish academic research from other casual types of searches for goods and daily news and information on a topic.

The perception of scientific research may be formalized under six perceptions, some widely held and others found among certain groups of readers: complexity of results, conflicting results, trivial topics, impractical studies, absence of commitment and care, and conflict with other sources of “truth”. The first one, complexity, is a characteristic of both research-based knowledge structures and the way scholars think. Needless to say, complexity is also the cause of apparent conflicts among research findings.

The following questions, selectively placed in three groups, are a simple, though by no means exhaustive, attempt at organizing, but also revealing, the areas of importance and trouble in reading, understanding and applying research results on the basis of the knowledge provided. These concerns by no means have the motive of discouraging researchers. On the contrary, the aim is to allow researchers to advance towards a realistic understanding of the status of a, sometimes, accorded research. It is also an attempt to bring together those topics that need to be considered in order to bridge the gap between worshiped and impractical research. The gap between like and do, believe and act, trust and apply. The gap to bring the researcher from being a follower to becoming an independent leader.

Engineering of the Experimentation

According to Popper (1959/2002), science can be considered from various standpoints, and not only from an epistemological one; for example, one could perceive it as a biological or as a sociological phenomenon. As such it might be described as an implement comparable perhaps to some of our industrial equipment or even machinery. Science may be regarded as a means of production—as the last word in ‘roundabout production’. Even from this point of view science is no more closely connected with ‘our experience’ than other instruments or means of production. And even if we look at it as gratifying our intellectual needs, its connection with our experiences does not differ in principle from that of any other objective structure. Admittedly it is not incorrect to say that science is ‘… an instrument’ whose purpose is ‘…to predict from immediate or given experiences later experiences, and even as far as possible to control them’. Popper does not think, however, that this talk about experiences contributes to clarity. He comments that it has hardly more point than, say, the not incorrect characterization of an oil derrick by the assertion that its purpose is to give us certain experiences: not oil, but rather the sight and smell of oil; not money, but rather the feeling of having money.

Experience deriving through experimentation with natural phenomena within an experimentation “device” performs as a distinctive method whereby one theoretical system may be distinguished from others. Hence, empirical science seems to be characterized not only by its logical form but, in addition, by its distinctive method. The theory of knowledge, whose task is the analysis of the method or procedure peculiar to empirical science, may accordingly be described as a theory of the empirical method—a theory of what is usually called ‘experience’ (Popper 1959/2002).

Engineers, on the other hand, prefer to see the world as subject to the laws and regulations of an ideally perfectly programmable, fully functional and strictly predetermined technical machine. A machine with complementary parts, comparatively and logically inseparable from competition, carries a certain amount of “noise”, i.e. a certain amount of inconsistency in the machine’s operational status, produced by the interference of the contributing parts at their interfaces. Provided that this “situation” remains constant, this “noise” can urge the system into a new but mostly controlled status, a re-organization on a new basis, considering all outcomes of the interferences within the system as an inherent part of it. Engineers must keep the disorder of each system within strict limits and in regions of artificially low system complexity. The hyper-complexity that sometimes develops in a system corresponds to a qualitative “jump” of the system via enhancing certain features and weakening others. The new situation may have a positive impact on the world outside the planned system, with similar consequences for its controlling engineer. Thus, super-complex systems are hierarchically looser, less specialized and not strictly centralized, largely dominated by strategies and skills, dependent on inter-relationships, and all subject to a non-controlled “situation”, much alike the loss of muscle control (ataxia), noise and error.

Research Planning

As in any other type of planning, research requires decisions that facilitate the data collection, their validity and credibility. Validity, a major quality indicator for the scientific community, generally denotes the condition of being justified, therefore verified and accepted as “real”, on the design and execution level of the scientific work, in regard to (i) the question (hypothesis) asked in the study (internal validity), as well as (ii) the validity of the results in the present and the future, in different places and for other scientists (external validity). Both validities are extremely important for engineers, who are in favor of applications, technical approvals and efficiency-indicator-based qualifications.

Studies designed to resolve all possible threats to a consistent and true inference, although rather rarely found in the literature, ascribe significant importance to controlling validity, reliability and other elusive qualities present in a study, towards a certain degree of certainty in the success of an experiment. Considering that, any study is as close to producing reliably valid results as the human skills and efforts can devise. A skillful research planner develops a plan that approaches the optimum and reaches asymptotically the best possible outcome, according to a predefined opinion. This describes the aim to maximize the significantly desired outcomes, while minimizing the significant non-desirable ones.

What then makes sense is a number of typical procedures for decision-making, utilizing the optimization objectives and targeting effort minimization at the lowest cost possible. Procedures that initiate, develop and conclude dialectically with the available raw materials (inputs-outputs-humans) within particular circumstances. This implies that there is not a single solution to the issues, simply because the issues are not unique and completely repeatable after all. Yet the method is not a problem, as it follows the suggested particular process rather independently of the shape of the issues. Meaning that the engineering method itself should not allow any space for multiple solutions to appear for a given problem, or otherwise the problem will be classified as a “non-engineering” one. The selection of a method among various alternatives, based on comparing the input-output relationships, makes engineering design a distinct approach compared to other design forms.

The structure of such activities is considered to differ from the scientific methodology of knowing and learning, though at the same time it stands in strong analogy to it. Design is proposed as the unique method for practical activities, differing in fact only in objectives, targets and aims, and potentially in the raw materials used.

A final remark concerns the relationship of engineering design to the self-restrictions implied, since engineers only deal with materialistic reality. Any imaginary conception can only be sourced from rational natural physics and the latest developments of mechanics. Furthermore, engineering ideas are subjected to mathematical treatment, analysis and criticism, and are thus much farther from the influence of the human senses.

Aristotle’s underestimation of the significance of experimentation changed in the 17th century, when Francis Bacon suggested that we should not simply observe pure nature, but also need to manage the world in order to reveal its secrets. The Scientific Revolution established this view and upgraded the experiment to the brilliant pathway to true knowledge. Nevertheless, the history of science is mainly the history of theories.

Once Bacon’s essays and philosophies regarding experimentation and observation were first accepted and established, they soon enhanced people’s desire to take advantage of them to harness nature for profit. Inevitably, the “study” and “research” of nature proved to be less about changing traditional attitudes and beliefs and more about stimulating the economy. By the middle of the following, 18th, century the Scientific Revolution gave birth to the Industrial Revolution, which radically and dramatically transformed the daily lives of people around the Globe. Western society has been pushed forward on Bacon’s model for the past three centuries. People often neglect the crucial and vital role doubt played in Bacon’s philosophy. Even with powerful microscopes, there is still a lot that human senses miss.

Accounts such as the ones in this book identify with considerations on experiments and research in general, and are registered in the sphere of intuitions, trying to get their “grip” on applied cases. Surely experiments have their own theory in the background, with prejudgments and a certain distance from the theoretical considerations of science. Yet observation, experimentation and analogy have been considered essential features of the basic sciences. Observation recognizes, reports and articulates the facts clearly and in detail; analogy reveals the similarities among phenomena; and experimentation brings to light new facts, thus progressing knowledge. Observation, as guided by analogy, leads to experimentation, while the analogy that may be confirmed by the experiment becomes a scientifically established truth.

The value of the experiment has been consolidated since the 18th century, but the meaning of the experiment, as perceived by scientists, has been profoundly criticized. The experiment was perceived as a mechanism that would bring a certain result and outcome, yet in some cases it was proposed only as a support to cognition, which should come one step before the experiment in order for the latter to have a meaningful contribution. It is the idea that must drive the experiment. Still, in some cases experiments are performed simply out of curiosity and may—or may not—be the initial stages of upcoming thorough investigations on a solid hypothesis base. That last point indicates a circular approach to knowledge with a variable starting point. There is no universal initiation step, especially among the various scientific areas of interest and the epistemic level each area has reached so far. It goes without saying that a significant observation may be the starting point. On the contrary, it has been proposed that each purposeful experiment is dominated by theoretical embodiments. Of course, such observations cannot do or mean anything by themselves. In certain cases, phenomena provoke enthusiasm but remain without further use or exploitation, as nobody can see what they really mean, how they are correlated to something else or how they may be used. Some experiments, highly sophisticated in their conceptualization, have been conducted entirely through an accepted theory. Such experiments were unique and therefore influential for the historic development of the particular scientific field, critical and most widely accepted by scientists as “worth trying”, although their outcomes were not necessarily broadly accepted at the time.

At the same time, great theories were based on pre-experiments, yet others have fallen behind due to the lack of connection to the real world. Even some recorded phenomena remain useless without a solid theory. On special, exultant occasions, theories and experiments meet even though they originate from different scientific fields. That brings us to the particular example of astronomy, where experimentation may not exist and all is based on observations. In this case experiments may only be those mimicking other scientific fields, approximating the phenomena, yet only after theory has completed a thorough approach, giving birth to a well-shaped theory. Finally, let us mention the shelf-existing experiments that may exist for long, independent of a proper theoretical assignation. These are experiments that may provide solid, accurate, novel results, yet ones not particularly straightforward or easy to explain by the theories and beliefs existing at the time of execution. These were not rejected; nevertheless, they were also not accepted by all.

According to Hacking (1983), in a broader picture, the experiment is the second part of scientific activity, next to theory. While theory is an attempt at “representing” the world, the experiment is an attempt at “intervening” in it. When experimenting, the focus is on an attempt to control the phenomenon in question via available, well-defined and hopefully well-controlled conditions. Therefore, we may say that every experiment actually guides the expression of the phenomenon according to the conditions selected and established within the lab, i.e. within a given artificial “world”. Since theory is based on speculations, cognitive pictures of an object and a qualitative view of the world, it may have no direct relationship to the real world or the phenomena. The relationship between theory and experiment also defines the scientific methodology to be adopted. An additional activity establishes the connection between these views and the world: the activity of calculation. This activity refers to the mathematical transformation of an initial theoretical hypothesis in accordance with the real world. In that process, the experiment is connected via the calculation ‘threshold’ to the theory. Calculation is the intermediate step between theories and experiments. Furthermore, the creation of models may also take place. Models remain indirectly dependent on, and indirectly compatible with, either the theories or the experiments, while the phenomena may be described by phenomenological laws connected to these models. A synergy among theory, calculation and experiment presupposes the acceptance of those theoretical entities about which the hypothesis will be intensively made.

By no means should we suggest that theory and experiments are clearly distinct entities, separate until their merger. Their interrelationship is obvious in experimentation, where each step has to deal with (i) a theoretical approach or a device, and (ii) equipment or a system that has itself been constructed, invented or built based on an internal theory that guided the technology and technique experimenters apply in the lab. Inventions are indeed the outcome of a process in which theory and experiments are applied to a practical solution, although there are inventions that have preceded a theory. Such a case is Watt's steam engine, the evolutionary outcome of pre-existing attempts made well before the theoretical establishment of thermodynamics. The experiments were the invention efforts for technological advancement at the heart of the Industrial Revolution era. Thermodynamics has been a theory that organized, summarized, utilized and extended the experimental knowledge and its practical applications.

But do all the phenomena really exist in the world, longing for the proper experiment to reveal, describe and present them to scientists? In many cases the answer is negative. In physics, for example, phenomena are reproduced or developed in a lab under well-defined and controlled conditions that are, at the same time, technically produced through other, analogously preceding, phenomena. Such conditions allow the isolation, distinction and stabilization of the phenomena. And these are the characteristics that permit their reproducibility at lab scale. Even if we assume that the lab conditions may be reproduced in nature without the intervention of the experimenters, that is possible only after the whole preparation process has been fully completed.

Reproducing a phenomenon is not that simple and implies a series of activities, including designing, teaching and learning how to execute an experiment. Most importantly, it implies knowing when an experiment actually functions. Such skills and capabilities can be obtained in the lab (under and within a certain human and social environment), making the recording of a phenomenon a rather obscure situation.

Inevitably, questions and debates arise regarding observations, the picturing of reality, critical experiments, measurement accuracy and the significance of data values, as well as the nature of experiments, the role of theory within them and the role of scientists overall. At the same time, the above points also imply an inherent and individual way of speaking about them. Are words and language also significant in talking about observations, facts and truth during an experimental observation, or is each and every observation embedded within a (specific?) theoretical consideration?

Moreover, in certain cases, the ability to have a device capable of revealing the phenomena in a credible way is much more important than observation. Still, the observer should be extremely competent in order to get the most out of a device or equipment used in data collection, especially for phenomenally paradoxical events that have previously been neglected or ignored by others. Such remarks may be a cornerstone of forthcoming work, but the upcoming experiments should, and usually do, go beyond simple observation. Training and education may have a part in developing an observational capability which nowadays goes well beyond the human senses.

It is not uncommon for experimenting scientists to place observation at the top of their work, and the instruments, equipment and devices at the center of their ability to collect data. Consequently, both of the above play the primary role in their conclusions, theoretical remarks and criticism. Thus, science progresses hand in hand with the technology and techniques applied in the lab. We may still debate whether there are indeed significant differences between observations and theory, but we are aware of the strong impact our decision and adoption of a certain belief may have on our work. Such a case is deciding which things we may observe, and whether that decision initiates from a theoretical impulse or from an independent and pure interest.

Experimental Design

Although the objectives of quantitative and qualitative research are not mutually exclusive, their approaches to translating the world involve distinct research techniques and thus separate skill sets. Experience in quantitative methods is not required, but neither is it a disadvantage. What is essential for our purposes, rather, is that all qualitative data collectors have a clear understanding of the differences between qualitative and quantitative research, in order to avoid confusing the one technique with the other. Whatever a researcher's experience in either approach might be, a general grasp of the premises and objectives motivating each helps develop and improve competence in the qualitative data collection techniques detailed in this guide.

A theory explains why some event occurs (or does not occur) by providing a model of the causes or conditions that control its occurrence (or non-occurrence). Since an event mainly consists of the participating, structural phenomena that follow apparently well-defined natural laws, these phenomena are in focus when explaining the event, and consequently the goal of the model is experimental prediction and control. Alternatively, a theory may explain a regularity among events by providing a model of the causes or conditions that, if fulfilled, necessitate the “regularity” noticed in these events. Theoretical questions that reach the researcher have to be formulated into sharp and accurate “technological” questions, in order to reproduce nature in the lab and reflect theory onto the reality of the phenomena's expression, or vice versa. It is the experimenter who has to formulate certain “technological devices”, experimental equipment, through which a concrete answer to these questions has to be elicited. Of course, more questions may be pending or may appear due to, and through, the process itself.

Concluding on whether a “technological” need is to be excluded, not necessarily in total and forever, proceeds via the questions to be answered. Such questions may also be part of a gradual implementation into the unfolding experimentation process. Apparently, such an implementation will have a rather obvious impact on the research's subjectivity and its outcome.

Within the goal of sensitive research, still associated with the need to face questions, it is a common aim to identify all possible sources of error while at the same time trying to avoid generalizations. Hence, researcher-selected theoretical approaches and structures dominate the design of the experimental work up to its execution, and even more so up to a conclusive formalism in the laboratory, predominantly a theoretical one. Nevertheless, Seely (1984) concluded that adherence to behaviors and experimental procedures commonly considered typically scientific has obstructed the development of practical answers to engineering problems, while at the same time failing to improve the theoretical understanding of these problems.

A consequence of the notion of horizons of understanding for research is that the researcher should be able to articulate the configuration of prejudgments bearing on the phenomenon, in awareness of the fact that complete self-transparency in this respect is utopian. If there is a methodical dimension to understanding, it can only refer to the researcher's ability to make any potential outcome of understanding as explicit as feasible, and possibly to keep in check the erroneous prejudgments that block the researcher's access to the studied object. The presence of peculiarities in the data is a good indicator that we have not understood the phenomenon completely. A consequence regarding the understanding and design of experimentation systems is that in setting up, say, a bibliographic database, the fragmentation of information forces us to create the conditions of possibility for retrieving the pieces. We need conceptual backgrounds, for instance the scope of a database, specific viewpoints (classification schemes), and terminology. The result is an objectivized or fixed pre-understanding. These backgrounds belong to historical, cultural and/or linguistic situations. There is no knowledge in itself.

One of the lessons of philosophical hermeneutics is exactly this: intellectual innovation of this sort depends on, indeed is a manifestation of, the self-renewing power of tradition, of its dynamism, and of its interpretability and reinterpretability. We can be open and have expectations with regard to the subject matter of a text because we belong to a culture or tradition (familial, educational, cultural, disciplinary, professional, organizational, etc.) which equips us with prejudgments, stereotypes and prejudices about anything that can become a matter of possible (research) interest. We can in principle understand only those phenomena with which we share a kind of meaning (Vamanu 2013).

Experimental Strategy and Experimental Results

Experimenting means reproducing, testing, improving and standardizing phenomena. And phenomena are hard to reproduce in a standard way. For this reason, we need to produce them, rather than simply discover or unmask them, which leads to a laborious and time-consuming procedure, or actually to more than one. These procedures include designing a functioning experiment, finding out how to make an experiment work, and, even more, knowing when the experiment functions. With these in mind, observing seems less significant in experimentation. What counts is the competency of detecting the odd, the erroneous, the manipulative or the falsified in the environment, equipment and tools used in observing a phenomenon. It is then the equipment itself that should function, rather than the experimenter, who simply records the observations. For that, education, experience and practical thinking-designing-executing may contribute to unbiased results.

Repeating an experiment implies a way to evaluate its repeatability, and statistics have been extensively applied to that task. But what about the original experiment? Does it still maintain its validity versus the repetitions, and can we actually repeat an experiment? Typically, repeating an experiment is an attempt to improve it towards a more standard, less biased version of the phenomenon. While repeating involves similar or similarly functioning equipment, similar or similarly observing humans, a similar or similarly functioning environment and so on, in some cases it troubles the analysts over whether they should keep the results of a repetition or not. Apparently, what matters is the increase in the accuracy of a measurement, aiming to exclude systematic errors, rather than working on statistical means and standard deviations of a large amount of data that may accumulate systematic errors within less accurate or even erroneous results.
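The distinction drawn above between random scatter and systematic error can be illustrated with a minimal numerical sketch (the replicate values are hypothetical, not data from this chapter):

```python
import statistics

# Hypothetical replicate measurements of a quantity whose true value is 10.0;
# each reading carries random noise plus a constant systematic bias of about +0.3.
replicates = [10.31, 10.28, 10.35, 10.27, 10.33]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)

# Averaging repetitions reduces the random scatter (reflected in sd), but the
# mean still carries the systematic bias: repetition alone cannot exclude
# a constant systematic error.
print(f"mean = {mean:.3f}, sd = {sd:.3f}, residual bias = {mean - 10.0:+.3f}")
```

The sketch makes the paragraph's point concrete: means and standard deviations over many repetitions may leave a systematic error entirely untouched, which is why increasing the accuracy of the measurement itself matters.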

Engineering Based Design

Within the various Engineering Studies, engineering design and methodology represent an essential part of the studies and of the experience obtained through education. And although there is much discussion regarding the methodological approaches, it is the design methodology itself that distinguishes engineering from scientific methodology.

In the end, engineering design is the effort to solve cognitively, using the available knowledge, the construction issues, so as to provide the most economical and least time-consuming process of construction, the construction itself, or both. It is a systematic effort at minimizing effort. It starts as a cognitive effort and ends as a physical effort. Everything in between may be called design. The cognitive effort during the design is not knowledge in itself (either scientific or engineering), but rather a pre-construction end-point at which we need to conclude that this is the way to go.

Every design has to be abstract and provide the means for constructing, for bringing to reality, each of its conclusive points, even (at the same time) as potentially only one of the means for creating a more general model. The language behind the design, or for the design, is related to the functional relationships existing within the specific environment in which the design has been developed or the particular area it is meant to be applied to. Examples include the process flow diagram for chemical engineers, the functional diagram for mechanical engineers and the circuit diagram for electrical engineers, as essential tools during the cognitive phase.

Lab

As mentioned above, the reproduction of a phenomenon has to take place within a well-defined frame that has given components under strict control of conditions. We shall call that a device which allows the transformation of the applied forces. Devices are designed and constructed for a specific reason. In experimentation, such devices may be viewed as extensions of the human senses and human functions upon the natural phenomena, and consequently the measuring electronic equipment as an extension of the human nervous system. The first part of the previous sentence concerns a technical experience of a natural potential, while the second part refers to a technical means for the realization of an imaginary potential, in relation to a broader discussion regarding the relationship between human intuitive plausibility and experimentation.

The insights gained in the lab can be extrapolated to the world beyond, and this is a critical maintained assumption underlying laboratory experiments; this principle is denoted as generalizability. Of course, several different formulations have been used to depict the relationship between the lab and the field, such as parallelism, external validity and ecological validity. Parallelism is traced to Shapley (1964) and is said to be established if the results found in the laboratory hold in other, particularly real-world, situations under ceteris paribus conditions (Wilde 1981; Smith 1982). Campbell and Stanley (1963) introduced external validity as follows: “external validity asks the question of generalizability: To what populations, settings, treatment variables, and measurement variables can this effect be generalized?” Ecological validity has taken on a multifarious set of meanings, including the notion that a study is ecologically valid if “one can generalize from observed behavior in the laboratory to natural behavior in the world” (Schmuckler 2001). But confusion arises because it is clear that Egon Brunswik coined the term ecological validity to indicate the degree of correlation between a proximal (e.g., retinal) cue and the distal (e.g., object) variable to which it is related (Brunswik 1955, 1956).

For that reason, a scientific experiment is not merely a magnifying lens. In sciences outside the social or political field, it usually needs to be independent of human energy, although it needs control or guidance by humans, in the sense that humans (researchers, experimenters) “drive” the reproduction of the phenomenon, via the experimental set-up, in a certain direction, along a prefigured roadmap (with available or applied tools, means, utensils and apparatuses). The fundamental energy, the inherent driving force, originates within the phenomenon, the environmental factors and the participating matter, through the naturally expressed relationships and processes. The experiment should be and should remain independent of human energy, for the phenomenon to develop as it is, not as we want it to be. The object under experimentation is then radically transformed from a static object into a carrier and producer of functions or specific physical, chemical/electrical processes. The fundamental establishment within the experiment is the transition of matter and energy to an output, through the device (set-up). Parenthetically, the measuring equipment is, in a sense, the set of tools provided to the phenomenon as a means of communication between humans and nature.

The devices are closed systems that can be analyzed in kinetic or dynamic terms, describing combinations of resistant substances capable of evolving in certain directions and transacting work. The picture of the experiments may be described as closed evolution chains, or as combinations of resilient parts naturally ordered to produce a given outcome. The quality of this work, or in other words its favorable or unfavorable outcome, defines the level of efficiency or effectiveness of this device's functionality. It is then an engineer's role to change either the kinetics, or the kinetics and the dynamics, of the device (system) in order to deliver a particular outcome. The way the individual parts of this device are ordered constitutes the mechanism of the device: the way the input becomes an output. The mechanisms within a device are to be considered at the initial step of setting up the device, but also as a hint in defining the case studies, hypotheses or proposals of and for a research project.

The fundamental scope of Engineering is to describe the input-output transformations and to compare either the inputs or the outputs in quantified terms (quantities) as much as in qualified ones (types). Therefore, devices must be distinguished from simple structures. The design and build-up of such experimental devices, and the incorporation in them of the evolution-related processes of a phenomenon, amounts to the availability not simply of a natural object but also of a procedure, as any particular phenomenon shall and should only take place within the device. The more distant this device remains from human influence, the more objective the device's procedures will be, and thus the independence of the phenomenon from external uncontrolled influences shall remain. The product produced by the device is based on the phenomena that take place and the way they evolve therein. These phenomena, and their consequent product, will be further defined by the system's dynamics, developed in the time and space of the device, as they were allowed to evolve by the experimenter, the lab or the environment. Hence, differences in tools, utensils and apparatuses, even when they seem alike, produce a different product when applied at smaller or larger scales of matter and energy. The transition between significantly different scales has been a major engineering riddle, only enhanced by the complexity of the phenomena to be reproduced. The phenomena, the procedures and the (natural) object making the procedures possible then have to be approached as total procedures, handled as devices, and controlled and guided as such. Accordingly, the variability in the outcomes of such a device, as well as the operators' impact, will after all indicate the level of human control and/or manipulation applied to the device.
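The input-output view of a device sketched above can be made concrete with a minimal example (the function and the numbers are assumptions for illustration only):

```python
def device_efficiency(energy_in: float, useful_energy_out: float) -> float:
    """Efficiency of an input-output transformation, as a fraction 0..1."""
    if energy_in <= 0:
        raise ValueError("input must be positive")
    return useful_energy_out / energy_in

# e.g. 500 J supplied to the device, 180 J recovered as useful work,
# the remainder dissipated within the device's internal processes.
eta = device_efficiency(500.0, 180.0)
print(f"efficiency = {eta:.2f}")
```

Comparing inputs and outputs in such quantified terms is precisely what distinguishes a device from a simple structure in the engineering sense used here.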

An interesting point concerns the fatigue of a device. This can be related to the exhaustion of the inherent energy, the de-structuring of the device's structure and the transformations towards a produced work, let alone the reduction in matter or the accumulation of “toxic” substances (by-products). In many instances, the recording of the device's operational life is indeed the scope of the research, and its rate the actual experimentation outcome. Even more, the deliberate change of this rate may be the ultimate target for a scientist, who becomes an engineer of the phenomenon, guiding its application in a much-desired direction. This target could be an additional reason for setting up and working towards a continuous, cyclic research evolution and developments in applied sciences.

Following the aforementioned approach, it is rather obvious that the experiment needs more than a researcher; it needs an engineer, a person capable of running the device in a well-defined operational way, within a well-established functional procedure and with a standardized perception. The phenomenon that is incorporated and reproduced in an experiment becomes a technical system. Thus, it is less of an independent entity that may deliver an unpredictable number of different and distinct relationships, and becomes instead a moment in a system of predefined relationships. The phenomenon (its relationships or processes) is transformed into a primary reality within which matter (substances) functions as moments.

A final concern regarding experiments as devices for reproducing phenomena is the human factor and the role of the lab as a social-scientific environment in which each and every expertise is developed, grown, established and propagated. Within that environment, as in every society, many rules have been established and are followed as working directives. Needless to mention the impact of such rules on the degrees of freedom of the participating scientists in carrying out an experiment, performing a design, executing a protocol and elaborating on the results. The connoisseurs and their expertise are in many cases well assigned among the members, restricting and imposing the working, handling/managing, performing and even understanding processes. It is quite interesting how technologies reflect back on their creators and users, affecting their self-images, self-understanding and self-interpretations. Then, experiments are who we are, and vice versa. In conclusion, the phenomena and their devices cannot be independent of their frame, namely their systemic world and its social interchange with the experimenters, the totality of which we call the meaning of the phenomena, or simply the experiment.

Equipment

Instrumentation has a major role in the method as well as in the outcome of any experimental set-up. In certain cases, the quality and quantity of the data collected are closely connected to the consistency of the collection process, bringing up the reliability of both the instrument and the mode of its use.

Calibration and checking are an essential pre-step in the experimentation process, well in advance of the actual usage and application of the equipment in the study itself. Manuals, guidelines and procedures may be available, or even developed by the researchers as part of the experimental set-up. In some cases, due to their close impact on the experimenters' experience and empirical connection to the phenomenon under investigation, they need to be clearly mentioned in the literature as an inherent part of the experimental data collection, analysis and validation. That may allow another group to replicate the experiment and/or correct, improve or criticize its results.
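A minimal sketch of such a calibration pre-step, assuming a linear instrument response and hypothetical reference standards, could look as follows:

```python
def fit_line(xs, ys):
    """Least-squares fit of ys = a * xs + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

standards = [0.0, 10.0, 20.0, 30.0]   # certified reference values (hypothetical)
readings = [0.4, 10.6, 20.2, 30.8]    # what the instrument reported

a, b = fit_line(standards, readings)

def corrected(reading):
    # Invert the calibration line to correct later measurements.
    return (reading - b) / a

print(f"slope = {a:.3f}, offset = {b:.3f}, corrected(15.5) = {corrected(15.5):.2f}")
```

Reporting the calibration procedure (and fits such as this) alongside the data is what allows another group to replicate or criticize the results.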

It is a great challenge for both scholars and experimenters to take a critical approach to differing results among replicates, in combination with the conditions applied and the control means used to check for human interference and non-predicted parameters. At its best, this may allow them not just to fit the results into the existing theory or practice, but to go one step further: to extend and expand the areas of understanding into the complex fields of unpredicted, yet well-known and defined, peculiarities of the experimentation process overall. After all, it is mistakes or faults that have led to different observations and consequent new discoveries in science and technology.

Nevertheless, the analytical and combinatorial ability of the experimenter and scholar to reveal such opportunities, and to take advantage of an understanding of the relevant factors behind the incomprehensible phenomena, shall allow for an engineering of the phenomena and the creation of a new experimenting machine functioning towards a different, but desirable, outcome. Both the skill of analysis and the combination of knowledge in support of understanding the world seem of paramount importance in planning, executing and validating an experiment. This is realized only when it is grounded on sound knowledge, not simply in its totality but in each and every one of its classes, participating and contributing to the lab-scale reproduction of the world.

An important dilemma, though, regards the point in time at which the confidence, reliability, analysis and combination needed for the execution and understanding of the phenomena, and for a valuable answer to the question (hypothesis), are established. In general, it is the particular form of research that determines whether the above confirmations may be established in advance, simply due to the amount of data to be collected and the degree of human versus instrumental intervention in doing so. In the case of basic, applied and engineering-based experiments, there is great confidence in the instruments customarily accepted by the specific scientific community, leaving the human factor behind, while the amount of data (or its collection and recording frequency) is usually considered adequate in the light of the operator's experience, execution consistency, and time, cost and equipment availability. Numerous publications have occasionally appeared owing to a newly developed analytical device creating new types of data collection, claiming high accuracy and maximum sensitivity in the recording of details, mainly due to recent advancements in the technology of the device rather than in the theory behind the analysis of the phenomena involved.

Tools

Consider every piece of equipment as a machine, in other words as a structural totality of individual parts brought together in a certain operational and functional manner to perform a certain set of actions, with an accepted performance efficiency, accuracy, repeatability and reproducibility, and with an operational input and a recordable outcome. It is then easy to conclude on the importance of the tools needed to operate such a machine. Not just internal but also external tools are involved, including calibration tools (machines in themselves), maintenance tools, means and methods, people (human machines with operational functionalities, skills and capabilities), or even the environmental conditions that influence or collaborate with the equipment to allow its intended functionality (for instance, a thermometer has to be exposed to a thermal environment in order to function).

Efficiency, Effectiveness, Economy

In defining design for engineers, we may mention the inherent intention to fulfill a construction, i.e. a reified intention. During this process, the design incorporates the well-known techniques and technologies, as well as the scientific principles, to define a cognitional object in such a way that this object can be naturally materialized as intended. It is also a repetitive process of decision making for designs that utilize the available resources in the most optimal way; thus the human mind is invited to fulfill the needs in the best possible way.

But besides optimization, it is also fit-for-purpose, or bounded rationality, that is included in the design process and its targets. The optimization logic may distinguish among the command variables (means or entries), the fixed parameters (rules or principles) and, finally, the constraints (targets or outcomes), but the process in fact aims at defining those values of the above parameters that will provide a widely accepted sum. Such a sum should be compatible with the constraints while at the same time maximizing the utility function within a given environment or conditional background. Given the complexity of real situations, it is good enough to have a design method that allows the selection among a number of potential alternative solutions, so that the aforementioned fit-for-purpose or bounded rationality is met at the highest possible level, or at least provides a solution better than the existing ones.
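The selection logic described here, keeping only the alternatives compatible with the constraints and then maximizing utility over them, can be sketched as follows (candidates, constraints and utility scores are all hypothetical):

```python
# Hypothetical design alternatives with their costs, durations and utilities.
candidates = [
    {"name": "A", "cost": 80, "time": 5, "utility": 7.0},
    {"name": "B", "cost": 120, "time": 3, "utility": 9.5},  # violates cost limit
    {"name": "C", "cost": 95, "time": 4, "utility": 8.2},
]

MAX_COST, MAX_TIME = 100, 6   # fixed parameters acting as constraints

# Keep only the constraint-compatible designs, then pick the best of them.
feasible = [c for c in candidates
            if c["cost"] <= MAX_COST and c["time"] <= MAX_TIME]
best = max(feasible, key=lambda c: c["utility"])
print(best["name"])
```

Note that the highest-utility candidate overall (B) is discarded: the method settles for the best solution within the constraints, which is the fit-for-purpose, bounded-rationality attitude described above.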

Results

The interpretation of the derived results for a system may also be expressed as a potentially alternative systemic model. Hence, the outcome will be a system of analytic statements (since it will be true by agreement). Such an interpretation cannot, therefore, be regarded as empirical or scientific, since it cannot be disproved by the falsification of its consequences, for these, too, must be analytic. For a system to be interpreted as an empirical or scientific hypothesis, the primitive terms occurring in the system are to be regarded as ‘extra-logical constants’. In this way, the statements of the system become statements about empirical objects, that is to say, synthetic statements. This leads, however, to difficulties, because a definite empirical meaning is assigned to the experimentation concept by correlating it with certain experimentally determined objects belonging to the real world. Thus, the experimentation concept can be regarded as a symbol of those objects.

As Popper notes (1959/2002) “it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics. This possibility is particularly important when, in the course of the evolution of a science, one system of statements is being explained by means of a new—a more general—system of hypotheses which permits the deduction not only of statements belonging to the first system, but also of statements belonging to other systems. In such cases, it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems”.

If the hermeneutical circle describes the structure of understanding, the ‘fusion of horizons’ represents its mode (Gadamer 2004). Understanding emerges as an event out of this sort of fusion; the interpreter's horizon has expanded and been enriched as a result of this ‘merger’, acquiring a wider and more sophisticated view of the phenomenon. The event in which ‘the tension’ between the separate horizons of the researcher and of the object is ‘dissolved’, as they ‘merge with each other’ (Gadamer 1989), can be achieved via knowledge classification (see Vamanu 2013).

Thus, when we deal with research objects (especially those connected to the past), prejudgments about these objects derive from a ‘chain of interpretations’ which has accumulated to become an integral part of the objects, and thus of our knowledge of them, as it mediates our access and relation to these objects. In this respect, new questions, new concerns, and new contexts of research enable different understandings of the same object and transform understanding into an unending endeavor.

A chain of logical reasoning can, for empirical scientists, be validated, according to Popper (1959/2002), only when it has been broken up into many small steps, each easy to check via mathematical or logical techniques of transforming sentences. Raised doubts can then only point to errors in the steps of the proof, or prompt a re-thinking of the matter. Descriptions of experiments present empirical scientific statements that can be tested by skillful experimenters.

Engineering

A word of Latin origin, ingeniare: mid-14c., enginour, “constructor of military engines,” from Old French engigneor, “engineer, architect, maker of war-engines; schemer” (12c.), from Late Latin ingeniare (see engine). The general sense of “inventor, designer” is recorded from the early 15c.; the civil sense, in reference to public works, is recorded from c. 1600, but it did not become the common meaning of the word until the 19c. (hence the lingering distinction as civil engineer). The meaning “locomotive driver” is first attested in 1832, American English. A “maker of engines” in ancient Greece was a mekhanopoios.

Although the meaning has changed greatly in Europe and the USA, today the term refers to the art of applying science to the optimum use of natural resources for humanity’s benefit. It is the conception and execution of a plan for a construction or a system that functions under, and responds to, certain conditions in the best possible way. In a sense, it is a cognitional study process for designing a device or a system that shall solve a problem effectively or confront a certain necessity. An engineer should therefore go beyond the actual making, fabricating or constructing, to managing, designing or studying in a systematic way. The engineer has to establish and order the particular engineering framework that puts all elements together.

Furthermore, engineering science, i.e. the systematic knowledge of making and the relevant engineering crafts, contains the traditional as well as the newest branches that have particular application targets. Therein lies the distinction between engineering and technology: the former deals with the broader problems of application, while the latter deals with more specific and particular issues. Needless to say, the term may carry either a mechanical-technical or a social-sciences signification. Yet both promote the conceptual creation of material artifacts, together with the many elements and influences that interact within this primary procedure, derive from it, are influenced by its different forms and ultimately affect them in turn. In either case, such a creative process is highly guided by modern science, giving birth to a series of relevant references.

Mathematics

Mathematics and physics are the two theoretical cognitions of reason that are supposed to determine their objects a priori, the former entirely purely, the latter at least in part purely but also following the standards of sources of cognition other than pure reason. According to Kant (1781/1998), since the early times of the ancient Greek philosophers, mathematics has travelled the sound path of a self-sustained science. Yet, it was not as easy as it was for logic—in which reason has to do only with itself—to trace that quasi-royal path, or rather itself to pioneer it; rather, he claims that mathematics was left groping about for a long time (chiefly among the Egyptians), and that its transformation is to be ascribed to a revolution, brought about by the happy inspiration of a single man in an attempt from which the road to be taken onward could no longer be missed, and the secure course of a science was entered on and prescribed for all time and to an infinite extent. The history of this revolution in the way of thinking—which was far more important than the discovery of the way around the famous Cape—and of the fortunate ones who brought it about, has not been preserved for us. But the legend handed down to us by Diogenes Laertius—who names the reputed inventor of the smallest elements of geometrical demonstrations, even of those that, according to common judgment, stand in no need of proof—proves that the memory of the alteration wrought by the discovery of this new path in its earliest footsteps must have seemed exceedingly important to mathematicians, and was thereby rendered unforgettable. A new light broke upon the first person who demonstrated the isosceles triangle, whether he was called “Thales” or had some other name. For he found that what he had to do was not to trace what he saw in this figure, or even trace its mere concept, and read off, as it were, from the properties of the figure; but rather that he had to produce the latter from what he himself thought into the object and presented (through construction) according to a priori concepts, and that in order to know something securely a priori he had to ascribe to the thing nothing except what followed necessarily from what he himself had put into it in accordance with its concept.

Mathematical Modeling

Modeling is a scientific field where the use of mathematics is inevitable. As discussed above, mathematics actually offers a closed structure (language) to describe physical problems and to study them through the solutions (mathematical entities such as functions, along with several operators over them) obtained by this mathematical description. To model a phenomenon, it is necessary to separate the unknowns/variables from the parameters affecting them and to describe the processes with operators and tensors acting on the variables in several manners, so as to (i) produce equations, (ii) superimpose the conditions on the boundaries in order to sufficiently describe any singularities present and, finally, (iii) use the appropriate mathematical techniques/methods to solve the equations, thus obtaining a set of mathematical entities (usually functions) that represent the physical entities under investigation. The application of this solution to a variety of similar problems allows for the transition of its value from the world of mathematics to the real world of physical phenomena. The following graph further clarifies this recursive relationship between “nature” and mathematics (Fig. 5).

Fig. 5 The correspondence of mathematics to the physical world

Two main approaches are encountered in the modeling world: deterministic modeling and stochastic modeling. To produce a deterministic model, one should apply fundamental principles and laws in the field where the processes under investigation occur, and describe these principles mathematically in the form of specific terms containing rates of variability (derivatives) for the quantities in question. By further applying balances, these terms form differential equations, along with conditions applied on the boundaries of the domain in which the processes are supposed to take place. In fact, the produced system of equations is independent of the phenomenon itself that has to be described, since the mathematical theories used for achieving a solution actually consist of applications (i.e. methods) based on broader and deeper mathematical theories.
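As a minimal illustrative sketch of this recipe (the first-order decay law, the rate constant k, the initial value N0 and the step size are all invented for the example), a balance of the form dN/dt = −kN with the condition N(0) = N0 can be solved both in closed form and by a simple numerical method, showing how the mathematical solution stands independently of whichever physical phenomenon it happens to describe:

```python
import math

def analytic(N0, k, t):
    """Closed-form solution of the balance dN/dt = -k*N with N(0) = N0."""
    return N0 * math.exp(-k * t)

def euler(N0, k, t_end, dt=1e-4):
    """Explicit Euler integration of the same differential equation."""
    N, t = N0, 0.0
    while t < t_end:
        N += -k * N * dt  # apply the rate term (the 'balance') over one step
        t += dt
    return N

# Illustrative parameter values; the same machinery applies to radioactive
# decay, first-order chemical kinetics, quality loss, etc.
N0, k, t_end = 100.0, 0.5, 2.0
print(analytic(N0, k, t_end))  # ~36.79
print(euler(N0, k, t_end))     # numerically close to the analytic value
```

The same pair of functions would serve any phenomenon obeying a first-order balance; only the interpretation of N and k ties the mathematics back to the real world.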

Stochastic modeling is quite analogous to the thermodynamic approach to nature. In accordance with this approach, each medium is represented by a finite number of elements that have pre-specified degrees of freedom. At each time step, every element alters its situation (position, momentum, etc.), “selecting” one of several possible new situations/values under the limitation of a pre-defined probability. Although the variability on the microscopic scale is randomly defined, the macroscopic behavior of the system, described by statistically derived quantities such as the mean, deviation, etc., is usually in excellent agreement with the expectations coming from the deterministic view of the simulated system. In that sense, this type of simulation is compatible with the view established by the thermodynamic approach to phenomena: even though at the level of particles the activity is largely random and disordered, when the various essential particles become a unity as a whole (i.e. when we consider a great number of them as one, along with their interactions and interrelationships), the system is statistically (inductively) predictable.
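This point can be sketched with an unbiased random walk (the particle count, step count and seed below are arbitrary choices for the illustration): each element “selects” a step of ±1 at random, yet the ensemble statistics land very close to the deterministic prediction, namely a mean displacement of zero and a variance equal to the number of steps:

```python
import random
import statistics

def random_walk_ensemble(n_particles=2000, n_steps=200, seed=1):
    """Microscopically random walks whose ensemble is macroscopically predictable."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_particles):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))  # each element 'selects' a new state at random
        finals.append(x)
    return finals

positions = random_walk_ensemble()
mean = statistics.fmean(positions)
var = statistics.pvariance(positions)
print(mean)  # close to 0 (deterministic expectation)
print(var)   # close to n_steps = 200 (the diffusive prediction)
```

No individual trajectory is predictable, but the statistically derived quantities of the whole are, which is exactly the inductive predictability described above.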

Apparently, both approaches produce solutions that depend not only on the parameters affecting the physical system but also on mathematical quantities that are intrinsic properties of the theories, methods and tools used in the particular simulation. This cohesion between nature and mathematics goes beyond the descriptive abilities that mathematics offers as a well-structured universal language. It can be attributed to the “nature of mathematics”, that is, its inherent capability to keep a distance from the phenomenon in which it appears. The involvement of mathematics in the description of physical problems allows for applications in terms of engineering, since engineering may be considered as the application of the cycle of understanding to the sensory world, i.e. what is perceived via the senses in combination with the available knowledge background.

Speculations, Calculations, Models and Approximations

Pragmatic science has a structure of its own, following a certain articulation of a theory that is then exposed to experimentation. The initial speculations may not be related to the real world, while, in many recorded cases, verification required new methods and technologies in the experimentation process. Thus, both theory and experiment need to be articulated accordingly. A theoretical approach to either may be called a calculation, i.e. a mathematical version of a given speculation for a better fit to the world. It is the speculation that aims at a qualified structure of a scientific field, while the experimentation targets its own existence within this field. What the calculation process does is bridge the two, by a hypothetico-deductive model. This connection allows for a quantitative agreement between theory and experiment. Somewhere in between these steps appear the models for the phenomena and the models for the theories, simple mathematical expressions and footprints, and representations of the world. The human senses and cognition, assisted by advanced technological tools, render these models feasible. The consequence of such an action is the creation of an impression of the phenomena, the theories and their interconnection via simplified mathematical propositions. Within this frame, the phenomena become a reality and theories approach the truth ever more closely. Models of varying type, complexity and number, correlated or independent, may be used within the same theory as the only accepted representations of the phenomena. In some cases, models have proven more resilient than theory, since there is more truth in incompatible models than in a given sophisticated theory.

There is a tremendous number of phenomena, but only a few relatively simple laws apply in nature. However, not all of these laws may be applied to all phenomena. The trend is towards the successful use of more incompatible models for the phenomena in everyday work than before, so the ultimate end-point shall be a plethora of models and a unified theory. But even so, the vast majority of science and technology will remain unattached, as there is a sustained need for applications as they develop case by case. For that, engineering should be established on the comprehensible bases of theories, models, phenomena and proofs, allowing for well-justified approaches to practical solutions. An engineering-based, justified application for manipulating and managing the phenomena has an additional circular effect on the origination of the models and the understanding of the phenomena, with a potential impact on the theories as well, through a cycle of understanding and re-organizing our sensory experiences and cognitional processes.

Now, as Kant wonders (1781/1998) “why is it that here the secure path of science still could not be found? Is it perhaps impossible? Why then has nature afflicted our reason with the restless striving for such a path, as if it were one of reason’s most important occupations? Still more, how little cause have we to place trust in our reason if in one of the most important parts of our desire for knowledge it does not merely forsake us but even entices us with delusions and in the end, betrays us! Or if the path has merely eluded us so far, what indications may we use that might lead us to hope that in renewed attempts we will be luckier than those who have gone before us?”

We should think that the examples of mathematics and natural science, which have become what they are now through a revolution brought about all at once, are remarkable enough. This is why we could reflect on the essential element constituting the change in the ways of thinking that has been so advantageous to them and, at least experimentally, imitate it insofar as their analogy with metaphysics (rational cognition) might permit.

Underlying the literalist view is a series of cropped understandings (misunderstandings?) of the nature and role of mathematical models. This concerns, for example, how theories relate to the empirical world, the nature of truth, and particularly knowledge as only a short causal objective snapshot, in contrast with knowledge as a long-term dynamic, historical and social assessment that functions of necessity in a cultural milieu. The latter, being praxis-laden, does not need or support unlimited univocity, precision, or causality (see Heelan 1997).

Finally, according to Popper (1959/2002) the admissible values of the ‘unknowns’ (or variables) which appear in a system of equations are in some way or another determined by it. Even if the system of equations does not suffice for a unique solution, it does not allow every conceivable combination of values to be substituted for the ‘unknowns’ (variables). Rather, the system of equations characterizes certain combinations of values or value systems as admissible, and others as inadmissible; it distinguishes the class of admissible value systems from the class of inadmissible value systems. Correspondingly, systems of concepts can be distinguished as admissible or as inadmissible by means of what might be called a ‘statement-equation’. A statement-equation is obtained from a propositional function or statement-function; this is an incomplete statement, in which one or more ‘blanks’ occur. Now what Popper calls a ‘statement equation’ is obtained if we decide, with respect to some statement function, to admit only such values for substitution as turn this function into a true statement. By means of this statement-equation a definite class of admissible value-systems is defined, namely the class of those which satisfy it.
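Popper's distinction can be sketched concretely (the particular statement function, x + 2y = 8, and its finite domain are invented for the illustration): a propositional function with ‘blanks’ becomes a statement-equation once we admit only those substitutions that turn it into a true statement, thereby splitting all value systems into an admissible and an inadmissible class:

```python
def statement_function(x, y):
    """A propositional function: an incomplete statement with 'blanks' x and y."""
    return x + 2 * y == 8  # true only for certain substitutions

# A finite domain of candidate values for each blank (illustrative choice).
domain = range(0, 9)

# The 'statement-equation' defines the class of admissible value systems:
# exactly those substitutions that make the statement function true.
admissible = [(x, y) for x in domain for y in domain if statement_function(x, y)]

print(admissible)              # [(0, 4), (2, 3), (4, 2), (6, 1), (8, 0)]
print(statement_function(1, 1))  # False: (1, 1) belongs to the inadmissible class
```

Just as a system of equations need not fix a unique solution yet still excludes most combinations of values, the statement-equation here admits five value systems and rules out the remaining seventy-six.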