1 Theories in Business and Information Systems Engineering

1.1 Introduction

Even though the idea of science enjoys an impressive reputation, there seems to be no precise conception of science. On the one hand, there is no unified definition of the extension of activities subsumed under the notion of science. According to the narrow conception that is common in Anglo-Saxon countries, science is restricted to those disciplines that investigate nature and aim at explanation and prediction of natural phenomena. A wider conception that can be found in various European countries includes social sciences, the humanities and engineering. On the other hand and related to the first aspect, there is still no general consensus on the specific characteristics of scientific discoveries and scientific knowledge.

1.2 Theory and Science

The demarcation problem in the philosophy of science is how to distinguish between science and non-science. Some argue that the demarcation between science and non-science is a pseudo-problem that would best be replaced by focusing on the distinction between reliable and unreliable knowledge, without bothering to ask whether that knowledge is scientific or not. Nevertheless, there seems to be one answer to Kant’s question concerning the difference between scientific insights and the dreams of a spirit-seer that is accepted by many: at its core, scientific knowledge is based on theories. Therefore, research should be aimed at the construction and testing of theories. However, this conclusion is satisfactory only at first sight, because the concept of theory itself lacks a unified and commonly accepted definition. There seem to be various reasons for this surprising lack of conceptual clarity at the foundation of an enterprise that aims at linguistic precision.

First, the term “theory” is used for different kinds of epistemological constructions, which makes it difficult to develop a satisfactory general conception. Philosophy of science does not provide us with an accepted concept of theory either (Godfrey-Smith 2003). Formal theories developed using the axiomatic method, as practiced in mathematics and logic, are not necessarily motivated by observations from the empirical world. Their truth can be proved, i.e., they can be verified with respect to the underlying axioms. Theories in the empirical sciences usually aim at reliable descriptions of reality. Therefore, their justification depends on some form of confrontation with a conception of reality that is shaped by underlying epistemological and ontological assumptions. In the case of (neo)positivist approaches, this kind of justification is based on the correspondence theory of truth, which in turn has its background in a (critical) realist view of the world. Some philosophers of science aim at a (partially) formalized conception of empirical theories. The semantic view (Suppe 1989) regards theories as comprising sets of mathematical models and sets of models with an empirical claim; (testable) hypotheses then serve to link both kinds of models. The ‘non-statement view’ of theories aims at specifying a formal structure, also called an “architectonic”, which should be suited to represent the “‘essential’ features of empirical knowledge ...” (Balzer et al. 1987, p. xvii). The formal structure comprises a set of so-called potential models (interpretations) of the underlying conceptual framework. Hermeneutic approaches, which are based rather on different forms of constructivism or idealism, make use of the coherence or the consensus theory of truth. In addition, it is questionable whether truth is always the only justification criterion (Frank 2006).

Second, the actual use of the term is not only ambiguous but also ambivalent. A clear distinction between scientific (theoretical) and non-scientific knowledge is not trivial, if not impossible (Laudan 1983). Furthermore, studies in the sociology of science show that scientific knowledge contributions are not independent of external factors such as incentives, expected reputation or power games (Feyerabend 1993; Kuhn 1964; Latour and Woolgar 1986). Sometimes it may seem that a theory is the result of a social construction – somebody has named it as such and this proposal was legitimized by being published in a top-tier journal – rather than of an epistemological distinction.

1.3 Theories in Our Field

The lack of a satisfactory conception of theory is especially critical in Information Systems and Business and Information Systems Engineering (BISE), respectively. The wide range of research in our field comprises not only empirical theories, but also formal theories and the design of elaborate artifacts. At the same time, leading journals emphasize the need for theories, thereby creating a situation that invites confusion. Various publications have addressed this problem.

Gregor (2006) in particular helps clarify the use of theories in Information Systems. However, her work is mainly restricted to (neo-)positivist ideas of theory (Popper, Hempel/Oppenheim) and accounts neither for the peculiarities of formal theories nor for the conceptions of theory found in our neighboring disciplines of economics, informatics, and management science, or in several sub-communities of BISE. Frank (2006) suggests a meta-conception of scientific knowledge that covers empirical, formal and design contributions, but does not provide a correspondingly wide conception of theory.

The situation is even worse when it comes to criteria that help assess the quality of theories – especially with respect to the epistemological value of probabilistic propositions, which are used by the majority of theoretical contributions in our field (Lim et al. 2009) and which Popper refused to accept as proper theories. The problems caused by an ambiguous conception of theory in our field have been known for some time. In a recent debate triggered by Avison and Malaurent (2014), who question what they call the “theory fetish in information systems”, Markus (2014, p. 342) concludes “... that conflicting notions of theory and theoretical contribution, rather than sheer overemphasis on theory, may lie at the heart of the problem that Avison and Malaurent identified.”

A close look at theories relevant for our field yields a wide range of examples that are substantially different. For example, in informatics, theoretical fields such as automata theory, computability theory, complexity theory, or computational learning theory, which are typically based on the axiomatic method, constitute foundations for engineering sub-disciplines such as data engineering, data mining, and operations research. In those fields that focus on human behavior and action systems, many researchers follow a neo-positivist research paradigm with a concept of theory that leans on the one common in the natural sciences. However, some researchers in these fields prefer hermeneutic approaches, e.g., for conducting case studies. The corresponding research methods not only replace the idea of scientific objectivity with subjectivity; they sometimes deny the need for generalization.

The neo-positivist conception is challenged by a further principal concern that is directly related to a current subject of our research: the digital transformation. It is questionable whether research can provide an orientation for change if it is focused on actual or past patterns of developing and using IT. Instead, it may be more appropriate to emphasize the original notion of theory (“theoría”): to transcend the “factual” world by contemplation. For us that means looking beyond current patterns of developing and using IT or, in other words, developing justified (!) models of possible future worlds (Rorty 1999; Frank 2006) that serve those who will live in the future as an inspiration and a meaningful orientation. Such constructions cannot be validated by confronting them with reality, since they deliberately differ from it.

Fields that make heavy use of formal models and methods are arguably very important for our discipline. They emphasize the power of mathematics and logic for representing scientific knowledge. While such constructions come with obvious advantages, as they allow for computing and proving, they also raise the problem of how to decide whether there is a valid empirical interpretation for socio-economic systems and whether actors can be expected to follow the rules of logic.

On the other hand, there are researchers who follow a more empiricist agenda, but aim to reconstruct their theories with formal models. This is particularly important in our field, as human behavior cannot easily be characterized by a simple set of axioms. Empirical models of behavior can then be used to contrast the axioms used in theory. For example, independence of irrelevant alternatives is an axiom typically used in social choice theory. However, experimental research has found that human subjects often change their preferences over two alternatives when faced with an extended set of alternatives.
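
In one common textbook rendering (a sketch for illustration, not a quotation from the sources cited here), the axiom requires that the preference between two alternatives x and y be unaffected by the presence of a third alternative z:

\[
x \succ_{\{x,y\}} y \;\Longleftrightarrow\; x \succ_{\{x,y,z\}} y
\]

The experimentally documented decoy (attraction) effect violates exactly this condition: adding a suitably positioned third option z can reverse the revealed preference between x and y.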

1.4 Theory and BISE Identity

The theoretical foundation of a scientific discipline has a substantial impact on its identity, and the identity of the IS discipline has been the subject of significant discussion in the past. Some colleagues see themselves in the tradition of computer science and operations research, and they draw heavily on certain branches of mathematics, theoretical computer science (in particular algorithms and complexity theory), and statistics. Some colleagues are closer to economics and draw on economic theory, most notably microeconomics and industrial economics. Finally, the work of many colleagues is rooted in psychology and sociology, in particular when it comes to user perception and adoption of information systems.

Of course, the underlying theory has a substantial influence on the research being done and the criteria used to evaluate research. Some argue that IS needs to develop its own theories, distinct from those of its reference disciplines. After all, it is not even easy to characterize what constitutes a theory, and the understanding of this differs across these reference disciplines. In any case, the current state of the discussion on theory in IS appears unsatisfactory.

Because IT plays a role in ever more aspects of our lives, IS academics have looked into an ever-growing number of subjects and IT-driven phenomena. Sometimes these phenomena are related to finance (e.g., crowdfunding), sometimes to marketing (e.g., online shopping behavior), sometimes to systems engineering (e.g., enterprise architecture management), and sometimes to labor economics (e.g., online job markets). Nowadays, research topics in BISE are largely interdisciplinary. While it is important to analyze all of these topics, our community is not the only one looking at these phenomena. It is important that we bring certain methods and theories to the table – a particular point of view that adds to the work of others in a valuable way. This is one, but of course not the only, reason why it is important to be aware of the theoretical foundations of our work.

While some may regard a discussion of theories as a mere philosophical exercise, we are convinced that a reflection on the foundations of our work – and its intended outcome – is essential. Without considering the existing variety of theory conceptions in our discipline, we cannot develop elaborate ideas of the ultimate goals of our work, of the justification and evaluation of research, of scientific progress, and of proper ways to document scientific knowledge.

1.5 Contributions

We have collected the views of colleagues on the importance and nature of theories in their field. This was intended not only to lead to a summary of different theoretical streams relevant to our research, but also to influence the discussion about curricula in our field. We asked them to address the following questions:

  • Which conception of theory is central to your area of research?

  • How do you evaluate progress in your field and what would you describe as a long-term goal?

  • In which way does theory guide design and engineering in your field and how does it impact practice?

  • How do you evaluate the quality of theories in your field?

The contributions we received confirm that a debate on theory in our field is both challenging and inspiring. It is challenging because there is a variety of clearly different perspectives on the subject, which indicates not only that we lack a common conception of theory, but that it might even be illusory to aim at one. At the same time, such a debate promises that “the object of our thought becomes progressively clearer” (Berger and Luckmann 1966) through the multitude of perspectives on it.

David Avison and Julien Malaurent used the opportunity to comment on their contribution to a debate on theory they had organized earlier (Avison and Malaurent 2014). There they questioned the “theory fetish” they observed in IS research and suggested that research would benefit from a more relaxed notion of theory, which they referred to as “theory light”. In their present contribution they emphasize that they did not mean to give up the quest for theory in IS research, but that there should be the opportunity to publish ideas without referring to a rigorous notion of theory. Avison and Malaurent seem to assume that there is a common conception of theory in IS, since they do not discuss the conception of theory as such.

Peter Fettke focuses on particularities of research in Business and Information Systems Engineering (BISE) compared to IS. He argues that IS follows a model of research that has matured in the natural sciences, while BISE is rooted in engineering. While he regards referring to theories as a common, if not mandatory, part of research in IS, he suggests that there are conceptual frameworks in BISE that are not called theory, but might as well qualify as such. While Fettke is reluctant to offer a definition of theory, he has a clear preference for a concept of theory that emphasizes the identification of cause–effect relationships.

Dirk Hovorka proposes an inspiring relativist view on theory. He criticizes as misleading the common idea that a theory is a static linguistic structure that enables problem solving. Instead, he proposes a more dynamic view: theories, as well as the conception of theory, are in a state of flux; they are representations of the ongoing discourse that constitutes the idea of science. Since such a discourse may stress a multiplicity of different perspectives on the subject of thought, theories may take different forms and serve different purposes. Therefore, according to Hovorka, it would be inappropriate to aim at a common or integrated conception of theory. At the same time, such a view on theory implies giving up the common idea of scientific progress, because it denies the existence of criteria that would allow a clear discrimination between competing contributions to a common knowledge base.

In their research, Jan Krämer and Daniel Schnurr follow a micro-economic paradigm that makes heavy use of mathematical models. Therefore, it does not come as a surprise that the conception of theory they suggest shows clear similarities to the notion of theory in mathematics. They regard models as interpretations of formal theories that help mediate between abstract structures and reality. To serve this purpose, models need to be designed with assumptions about the targeted domain in mind, which in turn requires some sort of empirical analysis. Hence, they claim that models serve as an instrument to develop appropriate formal theories that can be turned into theories with an empirical claim. They do not, however, advocate a purely realist conception of models. Instead, they regard models as analytical tools that may deliberately deviate from factual properties of reality.

Benjamin Müller distinguishes between positivist and non-positivist conceptions of theory and poses the question which one is more appropriate. He argues that scientific progress is likely to result from integrating and consolidating findings that are brought about by different research methods and paradigms. Consequently, he proposes that accounting for multiple perspectives should be a pivotal criterion for evaluating the quality of theories. He also advocates conducting research on post-adoption, that is, going beyond simplified models of technology adoption and focusing on new patterns of (inter-)action that may emerge after the adoption of new technologies.

Leena Suhl’s view on theories reflects her work in operations research. She argues that operations research calls for enriching formal theories with empirical theories from the targeted domains, especially from economics, but also from fields such as manufacturing or marketing. Suhl suggests that the use of different types of theories contributes to the strength of the field, because it requires looking at the research subject from different perspectives. Therefore, she advises against aiming for a common conception of theory or even a comprehensive unified theory of Business and Information Systems Engineering. Instead, she suggests building and maintaining a common repository of relevant theories and methods that fosters reuse.

Bernhard Thalheim argues that conceptual models are indispensable instruments of research in our field. Therefore, he proposes a general model theory that is suited to guide a more reflective construction, use and evaluation of models. For this purpose, he suggests a conception of model and discusses its relationship to the concept of theory. Since he regards models as primary subjects of scientific thought, he recommends supplementing a general model theory with a theory of reasoning that would include foundational elements of reasoning about the construction, analysis, and use of models.

Prof. Dr. Martin Bichler

Technical University of Munich

Prof. Dr. Ulrich Frank

University of Duisburg-Essen

2 A Call for ‘Theory Light’ Papers

In our original paper published in the Journal of Information Technology (Avison and Malaurent 2014), we argued that papers in our top journals need not only emphasize theoretical contributions, but could also, for example, emphasize new arguments, facts, patterns and relationships, and thereby be ‘theory light’ and yet still make a major contribution to the discipline of information systems (IS). We gave some examples of such papers from IS and other management disciplines. We also provided several reasons for our concern about the present stress on theory in our journals, giving full explanations in that original paper:

  1. Authors may be tempted to revert to ‘ideal types’ in our understanding process to make sense of the data within a theoretical framework.

  2. Authors may be tempted to distort the description of the research setting so that it better fits the chosen theory or theories.

  3. There is no ‘recipe’ to help authors somehow fit the data to a theory, and there are too few reflective accounts of how any potential gap between theory and data can be addressed, so authors may be tempted to choose only those data that fit the story.

  4. Authors may be tempted to choose theories that are related to ‘fashion’, or to the fact that a theory developed in another discipline has yet to be ‘borrowed’ into IS, in order to provide an ‘original’ theoretical contribution, rather than to select a theory on grounds of suitability.

  5. The requirement to emphasize theory in all our published papers has an opportunity cost, as authors lose the opportunity to develop other valuable contributions fully because of space constraints. Moving into ‘unexplored territories and arguments’ requires supporting explanations to make the contributions convincing.

  6. The requirement of a theoretical contribution in every paper makes some of these ‘contributions’ somewhat trivial. Many papers may contain ‘theoretical filling’ rather than making a substantial theoretical contribution. It is this ‘window dressing’ that downplays theory, as it does not give theory the weight it deserves and suggests that IS is ‘weak theoretically’. Thus IS papers that do stress theory should deepen IS theory rather than simply ‘add to the mass’.

As we stated in our original paper, all these concerns are not about appropriate emphasis on theory, but about the danger of inappropriate emphasis or inappropriate use of theory or theoretical frameworks. We therefore argued for (and provided examples of) some papers being ‘theory light’, where theory plays (or pretends to play) no significant part in the paper and the contribution lies elsewhere.

We are particularly concerned that too few papers published in the top journals of our discipline impact practice. Published articles are often a posteriori interpretations of cases or datasets, and the connections between academic IS researchers and practitioners remain too limited and uncertain. For this reason we have been particularly keen to promote the use of action research (Avison et al. 2016).

Our paper has had an impact, leading, for example, to six rich commentaries published in the same issue of the Journal of Information Technology, but it has also sometimes been misinterpreted and misrepresented. For that reason we now emphasize what we did not say! For example:

  1. We did not argue for atheoretical or theory-free research. This would suggest an anti-theoretical stance that we do not share. We argue for papers to be accepted in our top journals that either make an excellent theoretical contribution or make an excellent contribution elsewhere.

  2. Our position is not the same as that of a grounded theorist, who might start from a tentative theory-free stance but is expected to create theory when making sense of the data. Papers based on the grounded theory approach are therefore expected to discuss the theoretical contributions of the research.

  3. We did not argue that theory should not be a key element of doctoral studies. Doctoral students should have a thorough grasp of theory. They need to demonstrate knowledge and use of theory as part of their qualification.

  4. We did not suggest that ‘anything goes’ in ‘theory light’ papers. Indeed, we suggested that authors and reviewers ask themselves ten questions which might apply to all qualitative papers, but are especially important in ‘theory light’ papers. These questions are: (1) Is it interesting? (2) Is it original? (3) Is it rigorous? (4) Is it authentic? (5) Is it plausible? (6) Does it show criticality? (7) Is there access to the original data? (8) Is the approach appropriate? (9) Is it done well? (10) Is it timely? Again, each of these questions is discussed in the paper.

  5. We do not regard ‘theory light’ papers as easier to research or write, nor did we imply a less rigorous reviewing process, a lowering of standards for our leading journals, or an easier read. On the contrary, responding positively to our ten questions above suggests that these contributions need to be especially good ones.

The acid test for any paper (including ‘theory light’ ones) is the following high barrier: Is it probable that the paper will stimulate future research that will substantially alter IS theory and/or practice? Following this path, we should see more papers in our leading journals that are truly original, challenging, and exciting, and fewer – dare we say – formulaic ones.

Dr. David Avison

Dr. Julien Malaurent

ESSEC Business School

3 Towards a Coherent View on Information Systems

Scientists have odious manners, except when you prop up their theory; then you can borrow money of them. – Mark Twain

3.1 Business Informatics as an Academic Field of Inquiry

Talking about theories depends on the underlying notion of theory. First, I would like to point out that academic fields of inquiry have developed very different understandings of what science and an acceptable theory are. It is impossible to give a complete overview of all answers. However, I would like to open the discourse and make some important preliminary remarks.

Table 1 shows four triples of corresponding words in English, French and German. This synopsis clearly shows that different terms are used in German and French for the English word “science” (McCloskey 1984). This fact is of major importance because it makes indisputably clear that the terms “science” and “Wissenschaft” are not interchangeable in all sentences without altering the truth value of statements. Hence, speakers from different language communities, particularly from English and German speaking ones, have different conceptions in mind when talking about science or Wissenschaft. According to McCloskey (1984, p. 97), while in German and French the science word “merely means ‘disciplined inquiry,’ as distinct from... journalism or common sense”, in English the “august word connotes of numbers, laboratory coats, and decisive experiments publicly observed”. In fact, whenever German speakers use the term “Wissenschaft” in the sense of Geisteswissenschaft or Ingenieurwissenschaft, English speakers do not use the term “science” at all.

Table 1 Synopsis of terms denoting academic fields of inquiry in different languages (based on McCloskey 1984)

Therefore, if we talk about Information Systems or Business and Information Systems Engineering (BISE) as a science, our understanding of science has to be clarified. While Information Systems is strongly rooted in science, BISE has its origin in engineering. In the following, I use the term “Business Informatics” – in analogy to Bioinformatics or Health Informatics – as an umbrella term for Information Systems and BISE. Table 2 summarizes the foci of different academic disciplines studying information systems.

Table 2 Focus of different academic disciplines studying information systems

3.2 What is a Theory in Business Informatics?

Analyzing the usage of the term “theory” in different communities is one approach to answering the question of what a theory is in Business Informatics. Table 3 aggregates the results of two quantitative literature reviews conducted by Lim et al. (2009) (with a focus on Information Systems) and Houy et al. (2014) (with a focus on BISE).

Table 3 Most cited theories in Business Informatics (The ranking points are calculated as the arithmetic mean of the ranking points a theory obtained in the two rankings. A theory ranked first gets 1 point, a theory ranked second gets 2 points, etc.)
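
Expressed as a formula (a restatement of the caption’s rule; the symbols are ours): if a theory t is ranked at position r_1(t) by Lim et al. (2009) and at position r_2(t) by Houy et al. (2014), its score is

\[
\mathrm{points}(t) = \frac{r_1(t) + r_2(t)}{2}
\]

so, for instance, a theory ranked 2nd in one review and 4th in the other obtains 3 ranking points, with lower scores indicating higher aggregate prominence.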

These results show:

  • Pluralistic orientation: Table 3 only depicts the most cited theories in Business Informatics; in total, more than 200 theories were identified. This result shows that there exists no clear and distinct theoretical research paradigm in the sense of Kuhn (1996). Although there are some competing theories (e.g., resource-based view versus market-based view), most theories have different application areas and can be seen as complementary.

  • Theory as an umbrella term: Sometimes the term “theory” is used as an umbrella term for different theoretical approaches, e.g., organization theory, decision theory or systems theory include very different theoretical approaches.

  • Different reference disciplines: Theories used in Business Informatics are rooted in different academic fields of inquiry, e.g., microeconomics (game theory), strategic management (resource-based view), or organizational sciences (organizational theory).

  • Mathematical and empirical theories: Some theories have an empirical content, e.g., transaction cost theory. The empirical content of other theories is debatable, e.g., systems theory or game theory. Other theories, e.g., graph theory, do not have any empirical content at all.

  • Descriptive and normative theories: The term “theory” is used in a descriptive as well as a normative sense. For instance, it is well-known that decision theory has two different branches, normative/rational decision theory and descriptive decision theory.

Although such quantitative literature analyses can give important and interesting insights into the usage of the term “theory” in Business Informatics, the results should be critically reflected upon: (1) The presented analysis is based on the premise that a theory is present wherever the term “theory” is used. Although the idea that the meaning of a word is given by its usage is appealing, it should be noted that it would be a classical logical fallacy to derive a normative notion of what a theory is solely from a descriptive analysis. (2) Since the term “theory” is used very differently, it is prima facie plausible that there exists more than one conception of the idea of “theory”. My following contribution relies on the premise that the term “theory” can be explicated differently.

3.3 Two Major Design Theories in Business Informatics

The analysis above shows that design theories are clearly underrepresented among the top Business Informatics theories (Gregor 2006). However, it cannot be concluded from this result that there are no important design theories in Business Informatics. Note that there are many important theories in other branches of academic inquiry which do not carry the term “theory” in their name, e.g., geometry, thermodynamics or evolution. In fact, some very important research results in Business Informatics are not labeled as theory at all. Let me introduce two examples which have had a major influence within the German Business Informatics community:

  • Model of Integrated Information Systems (IIV) developed by Mertens (2012): Work on this model started in the late 1960s and it was further developed for more than 40 years. The model shows how different application systems in the manufacturing industry are conceptually integrated.

  • Architecture of Integrated Information Systems (ARIS) developed by Scheer (1994): Scheer developed ARIS as an instrument that systematizes the different aspects of describing and developing information systems. For each aspect and layer, particular instruments are introduced and integrated. This model was developed in the late 1980s and is still used in different versions.

Although both works can easily be criticized for several reasons (e.g., a bias towards the manufacturing industry, not every construction step is explicated), the mentioned examples are two major instances of design theories. This is not merely my opinion; the statement can easily be substantiated by taking a look at the history of these contributions (Mertens’ work has been developed up to its 18th edition; Scheer’s major work on ARIS has been translated into English, Chinese, Russian and other languages). There are numerous examples of dissertations and research articles which are based on the design theories developed by Mertens and Scheer, although the literature analysis shows that they are not explicitly labeled as theory. Furthermore, at many German-speaking universities, these works provide the classical textbooks for an introductory course in Business Informatics.

To summarize, although both design theories are not explicitly called “theories” and therefore do not appear in the above-mentioned literature analysis, it would be a mistake not to subsume this work under the umbrella term “(design) theories” of Business Informatics.

3.4 Theoretical Progress: A Multi-Perspective Understanding of Theory

By and large, there are good arguments to question the idea of scientific progress in general (Kuhn 1996). However, when understanding academic inquiry as a problem-solving activity that follows a particular research paradigm, I think it is possible to see some important developments which can be called progress. Depending on the research tradition, such progress can have very different roots and epistemic qualities (Hacking 1983). Figure 1 provides an overview of four main perspectives.

Fig. 1 Different perspectives on Business Informatics

  • Business Informatics as mathematics: From the perspective of mathematics, the formal structure of information systems is of major importance. Empirical insights are out of the scope of this perspective. The primary method is formal proof. Progress is achieved by formalizing general ideas and proving interesting statements. Example: The seminal paper by Kindler (2006) introduces and formalizes a framework for a formal execution semantics of Event-driven Process Chains (EPC). The significant progress of this work is a mathematically sound definition of the non-local behavior of EPCs.

  • Business Informatics as a science: Real phenomena are described, explained, understood and often generalized by using a theory about these phenomena. Experiments are the scientific method par excellence. From this perspective, there are different areas for improvement, mainly theoretical progress (finding a new theory explaining a phenomenon), empirical progress (identifying or describing a new phenomenon), and methodological progress (improving an existing method or inventing a new one). Example: The seminal paper by Davis (1989) explains the acceptance of information technology. Davis shows that perceived ease of use and perceived usefulness are strong predictors of user acceptance of information technology (theoretical progress; a simplified formal sketch follows after this list). Additionally, he develops and validates measurement instruments for all introduced constructs (methodological progress).

  • Business Informatics as engineering: New, more powerful and astonishing information technologies are created in academic or industrial laboratories and ultimately tested in reality. Research and development as well as prototyping are the primary research methods. Example: The seminal work by Scheer (1994) on the Architecture of Integrated Information Systems (ARIS). The significant contribution of Scheer’s work is a comprehensive framework for describing and developing business information systems. Furthermore, a powerful software package was developed which demonstrates the feasibility and usefulness of this innovative approach. The experiences with this prototype provided the foundation for the development of the ARIS Platform, which later became the market-leading system for business process management.

  • Business Informatics as philosophy: Developing new ideas and perspectives and criticizing well-known approaches is important for the philosophy of information systems. Speculation, discourse, analysis, argument and debate are the major elements of the methods used from this perspective. Example: Wand and Weber (1988) present the idea of using ontology as a foundation of information systems research and set the philosophical starting point and foundation for a broad research stream (Fettke 2006). Another example, on the meta-level of research on Business Informatics, is the seminal work by Hevner et al. (2004), who explicitly discuss the importance of design science research in information systems. Both works offer very fresh and fruitful views for and on research in Business Informatics. The significant contribution of Wand and Weber is a completely new foundation for conducting research. Hevner et al. introduce clear guidelines for conducting design science.
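
To make the “science” example above more concrete: the core relationship of the Technology Acceptance Model is often rendered in a simplified regression form roughly like the following (a schematic illustration, not Davis’s original specification):

\[
\mathit{BI} = \beta_0 + \beta_1\,\mathit{PU} + \beta_2\,\mathit{PEOU} + \varepsilon
\]

where BI denotes the behavioral intention to use a technology, PU perceived usefulness, PEOU perceived ease of use, and ε an error term. Davis’s finding amounts to the claim that the estimated β₁ and β₂ are substantial.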

Again, I would like to point out that the different perspectives often stress different aspects of progress. However, the ultimate goal is to provide a coherent view on information systems. Identified contradictions in practice or theory are an important sign of a lack of coherence and call for more research. Furthermore, different perspectives on information systems have to be integrated. Such an integration provides a richer picture of how information systems are, can be, or should be.

As I stated before, different academic fields of inquiry have developed different understandings of what a theory is. However, I would like to mention that there exists a standard view on theory in the philosophy of science, which I would like to discuss in more detail in the following.

3.5 A Narrower View on Theory: The Standard View in Philosophy of Science

If one asks what a theory is, there are of course different answers to this question (Fettke and Loos 2004). In the broadest sense, a theory is the result of an academic inquiry. As such, it can be understood as justified true beliefs which are framed and often specifically named. However, the term “theory” is often used in a narrower sense. For example, compare the five theory types described by Gregor (2006), namely theory for: (I) analysis, (II) explaining, (III) predicting, (IV) explaining and predicting, and (V) design & action.

Compared to the concept of theory introduced by Gregor, the standard view of the philosophy of science is much narrower (Bunge 1998b; Ladyman 2001). According to the standard view, a theory is a culminating point of scientific endeavor. A theory is a hypothetical-deductive system which contains presumptions and at least one scientific law statement covering a cause–effect relationship (formalized as A → B). Euclidean geometry was for a long time regarded as the ideal formulation of a theory. However, it is now well known that Euclid’s geometry does not fully fit the real world, and other geometries have been developed. Newtonian mechanics is another example of a theory in this sense. We know that this theory is still successfully applied in everyday reasoning, although it is not correct when very large velocities or very large masses are involved. Under these conditions, relativity theory must be used for correct reasoning.
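
The hypothetical-deductive structure of such a theory can be illustrated by the classical deductive-nomological schema of Hempel and Oppenheim (a textbook sketch; the predicates A and B are placeholders):

\[
\frac{\forall x\,\big(A(x) \rightarrow B(x)\big), \quad A(a)}{B(a)}
\]

A law statement, together with a statement of antecedent conditions, deductively entails the statement describing the phenomenon to be explained or predicted.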

From my point of view, there are good reasons to place cause–effect relationships at the core of an academic discipline or theory (note that this statement does not contradict my preliminary remarks as long as one accepts the unproblematic premise that there are different conceptions of what a theory is). However, for an application-oriented discipline, solely searching for cause–effect relationships is not sufficient. Business Informatics should not only be interested in cause–effect relationships, but should also research means–end relationships (Bunge 1998a; Chmielewicz 1994; Zelewski 1995).

3.6 The Importance and Foundation of Technological Rules

Business Informatics investigates information systems. Such investigations aim at representing and explaining existing information systems. According to the standard view of theory, a scientific law constitutes the core of a theory. In contrast, an application-oriented discipline such as Business Informatics is not only interested in scientific laws but also in technological rules (formally: “B per A!”; Bunge 1998a; Maaß and Storey 2015). In other words, Business Informatics works on new, possible information systems (Frank 2006; Müller 1990, p. 8). Two design types can be distinguished. First, a new system can be described (“to-be system”). Although not always explicitly mentioned, the mode of description is: “It is possible that ...”. Such a description represents an information system as it could or should be. Second, a new process can be described (“to-be process”). A planned process describes an action plan of how a possible system can be implemented or how an objective can be achieved.

Technological rules do not represent existing systems; they guide the development of new information systems. It is impossible to assign truth values to statements about possible systems by comparing the stated possibility with actual reality. Instead, one can only ask whether it is possible to implement or to realize such designs or whether it is desirable to make a planned system reality.
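
Schematically, the contrast between the two kinds of statements can be put as follows (using Bunge’s notation as quoted above; the glosses are mine):

\[
\underbrace{A \rightarrow B}_{\text{law statement: true or false}}
\qquad \text{versus} \qquad
\underbrace{B \text{ per } A!}_{\text{technological rule: effective or ineffective}}
\]

The law describes that B occurs whenever A occurs; the rule prescribes bringing about A in order to achieve B, and it is assessed by its effectiveness rather than by a truth value.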

Typical examples of technological rules are (Fettke 2008):

  • Business Model Engineering: “Customer-orientation improves profit!” (Davenport and Short 1990).

  • Business Process Engineering: “Using process models is more efficient!” (Scheer 1994).

  • Business Software Engineering: “Adding people to a late project makes it later!” (Brooks 1975).

The most important question is how such technological rules can be justified. Or, more generally: What is the interdependence between theories (in the narrower sense) and technological rules?

Often, from the perspective of pure science, it is argued that engineering is only an application of such law statements. Although some renowned proponents, e.g., Popper (1957), formulated the idea that theories can easily be transformed into technological rules by so-called tautological transformations, I believe the interrelationship between both concepts is much more complex (Houy et al. 2010, 2015). For example, the following aspects must be taken into account: (1) “Man has known how to make children without having the remotest idea about the reproduction process” (Bunge 1998a, p. 143). (2) Theories are sometimes still used for design purposes even when it is widely accepted that they are not true; e.g., Newtonian mechanics is still used for calculating satellite orbits. (3) Not every law statement can effectively be used as a technological rule; e.g., if one has no means to make the antecedent of the law true, it is impossible to use the law by a simple tautological transformation. Nevertheless, knowing the law might still be useful for technological purposes. (4) Particularly in Business Informatics, it is questionable whether all known empirically identified patterns or regularities qualify as causal relationships. For example, it is debatable whether the construct “perceived ease of use” of the Technology Acceptance Model has a causal effect on system acceptance. (5) Social systems engineers have to deal with self-fulfilling or self-defeating predictions.

To conclude, from an application-oriented perspective it does make sense to conduct academic inquiries which are not theory-grounded (in the narrower sense) but practically successful.

3.7 On the Quality of Theories in Business Informatics

A lack of cumulative research, the pursuit of short-lived fads, and the absence of long-term, ambitious research goals are well-known shortcomings of our field which many others have criticized before (Hirschheim and Klein 2003; Steininger et al. 2009). Instead of repeating these still-relevant criticisms, I would like to put more emphasis on another aspect.

In his contribution to this discussion, Dirk Hovorka already referenced Kuhn’s concept of the disciplinary matrix, which constitutes not only the identity of a discipline but also the values of a research community. In other words, it is worthwhile to take a more detailed look at our disciplinary matrix in order to elaborate on the quality of theories in our field.

The textbooks of a discipline are one important factor constituting the disciplinary matrix. First, textbooks are major sources for introducing students to a field and demonstrating what is well known and well accepted in that discipline. Second, textbooks are also useful for practitioners as points of reference to the most significant results. Metaphorically speaking, they are symbols of the body of knowledge of a discipline.

A few years ago, some colleagues conducted a detailed analysis of Business Informatics textbooks and obtained remarkable and thought-provoking results (Frank and Lange 2004; Schauer and Strecker 2007). I do not want to recapitulate and update this analysis here. Instead, I would like to pose the following question: How do our textbooks deal with theories?

Without conducting a detailed analysis of how theories are referenced and described in our textbooks, I conjecture that the theories mentioned before do not play a central role in these introductory texts. This might have different reasons; e.g., it might take some time until a theory newly introduced in a major research outlet is included in a textbook.

As said before, there are also well-established theories in Business Informatics (e.g., the Technology Acceptance Model and the two design theories by Mertens and Scheer mentioned above). I know there are some textbooks which adequately cover these theories. However, other textbooks do not describe or even mention these well-known theories at all. What can be the reason for this omission?

If we exclude the possibility that these textbooks do not represent the disciplinary matrix adequately, one explanation may be that the authors of these textbooks do not identify the mentioned theories as part of the disciplinary matrix of our discipline. If my assumption is true, then it can be concluded that our disciplinary matrix is not coherent anymore, but might be cracked.

3.8 Conclusion

When discussing what theory is and its role in academic inquiry, it must be clear that different fields of inquiry have very different answers to these questions. From the wider perspective of scientific progress, it can be argued that this situation can be harmful but also very productive. However, it is necessary that different fields of knowledge create a coherent view of what information systems are.

According to the standard view of theory in philosophy of science, a theory is a set of statements with at least one nomological law. Such statements are of major importance for the understanding and design of information systems. Although there are some candidates for such statements in the context of Business Informatics, it is clear that there are very few examples which are able to constitute the core of our discipline. However, there exist well-known examples for (design) theories which can be seen as the core of Business Informatics.

In the future, it is necessary to develop a more coherent picture of the different approaches to information systems. I propose to distinguish between two types of approaches, namely black box and white box theorizing. In a black box approach, technology is viewed as a black box whose inner components are invisible to the theory; they are abstracted away. Typical examples of black box theories are the Technology Acceptance Model or studies on success factors of ERP systems. Such an approach to theorizing has its strengths. It provides a higher level of abstraction because the concrete implementation is not regarded as important for the theory. Furthermore, the complexity of real information systems is effectively reduced.

However, black box approaches are established on the premise that technology is simply given. Such approaches are blind with respect to design decisions inside the black box, which might have a huge impact on theorizing about it. Per definitionem, they do not generate knowledge about the inner structure and functions of technology. What our discipline needs are more white box theories providing a coherent view on information systems and their inner components.

Prof. Dr. Peter Fettke

German Research Center for Artificial Intelligence (DFKI)

and Saarland University

4 Science as Practice: Theory-as-Discourse

4.1 Introduction

When Latour’s climate scientist explains why his own claims, and not those of the climate-change deniers, should be believed, he does not invoke theory or models. He does not summon explanatory power or predictive accuracy. Nor does he retreat to an argument about instruments or data or simulations. Rather, he responds: “If people don’t trust the institution of science, we’re in serious trouble” (Latour 2013, p. 3). He appeals to the fragile and ill-defined institution that engages in a specific form of discourse. It is this discourse of science that this essay highlights, as the disciplinary context in which the concept of theory makes any sense at all.

The assertions that theory is the pinnacle of research (Gregor 2006; Straub 2009), that scientific knowledge is based on theories, and that the primary contribution of research is theory have become IS folklore and are only rarely contested (for examples see Avison and Malaurent 2014; Hambrick 2007). The claim that “conflicting notions of theory and theoretical contribution, rather than sheer overemphasis on theory” (Markus 2014, p. 342) is the cause of problems for the field assumes that a unitary view of theory is desirable. Further, it obscures the differences among the discursive, material, and instrumental contexts in which theory makes sense. Many authors discuss theory as a thing-in-itself, as an isolated entity to be reified, bounded and celebrated above all else. This preoccupation diminishes the other disciplinary research contributions that are required for a theory to be cogent (Hovorka and Boell 2015). Certainly theory is important and requires attention, but it is critical to position our understanding of theory within the distinctive disciplinary contexts through which theory, as a discourse, is created, critiqued, evolved, and adjudicated.

Through historical analysis, Kuhn captures this discourse in his original sense of paradigm, a term he subsequently abandoned for the broader concept of disciplinary matrix. This matrix is composed, at least in part, of symbolic generalizations, models, exemplars, instruments, and values (e.g., precision, prediction, generalizability, design). While Kuhn acknowledges that the list is incomplete, its components illustrate some of the shared commitments of a scientific practice.

It is noteworthy that in Kuhn’s extensive writing theory is not prioritized as a defining component of disciplinary integrity or legitimacy. Instead, disciplines are characterized by their paradigm or disciplinary matrix. The primary meaning of paradigm (and a key component of the disciplinary matrix) is the exemplar: the texts, teaching cases, and narratives which “contain not only the key theories and laws, but also...the applications of those theories in the solution of important problems, along with the new experimental or mathematical techniques (such as the chemical balance in Traité élémentaire de chimie and the calculus in Principia Mathematica) employed in those applications.” (Bird 2011). Theory and models are important but not “king” or the primary contribution of research. The elevation of theory as the premier contribution in scientific practice and the basis of knowledge misrepresents the role of theory in the broader discourse of scientific inquiry.

In Kuhn’s normal science, scientists are occupied with matching facts and observations to extant theory, and with articulating what is implicit in theory. Scientists must “premise current theory as the rules of the game. His objective is to solve a puzzle... at which others have failed and current theory is required to define the puzzle...” (Kuhn 1965). Theory becomes fixed as a reified entity used to solve specific problems. Discussion in IS frequently focuses on this normal-science image of theory as a reified object with essential characteristics. But during revolutionary science, in which the fundamentals of a disciplinary matrix change, Kuhn reveals fluidity in the conception of theory among practicing scientists who share the same commitments. The interpretation of a theory, and even what it means to be a theory, is subject to situated contestation and revision and is specific to the scientific problem at hand. Kuhn’s normal-revolutionary science distinction reveals that there is no clean separation of a theory from the disciplinary matrix, the discourse, in which it is embedded. As communities develop and change, theories are contested, supported or rejected, critiqued, expanded or simplified. Accounts of revolutionary science reveal an image of contestation, where ontological perspectives, theories of instruments and measurement, observations, ideas, things, marks, practices, and truth vie for recognition.

From this we can see that theory cannot be cleanly separated from the discourse regarding observations, instruments, measurements, methods, and the values by which scientific activity is evaluated. Every theory is a discourse composed of the individual papers which, taken together, present argumentation for a specific account of a phenomenon. This account is only understood by the community on the basis of the disciplinary matrix which the community shares and within which the theory is grounded.

The introduction to this special section and some of the contributors acknowledge that IS, BISE, informatics, management science, and other specializations are overlapping, yet distinctive, fields of inquiry. As new research communities and subspecialties proliferate over time (e.g., Big Data, Q-BISE, DSR) there will perforce be many theory discourses between and within disciplines. Within each community, what counts as factual, as a construct, as valid, or as explanatory also changes. The set of publications, conference talks, teaching materials – the discourse – becomes an intellectual space where ideas clash. The theory-as-discourse is an area defined by what we know, but it is also a zone of contestation, not of revolution, but of ideas competing against each other to disclose what worlds are created by theory.

The consequence of conceptualizing theory as an ongoing discursive-instrumental argument rather than a category used to include/exclude specific instances is that there is no essential characteristic form or function of theory. One of Kuhn’s central contributions was the recognition that practicing scientists do not follow a set of rules that enable coordinated research activity. Rather, the shared disciplinary matrix of each community is exhibited in the exemplars used to enroll researchers into the practice. Theory and models are only a part of the community’s exemplars and are embedded in the discourse in each community. Thus theory-as-discourse takes on a multiplicity of forms and functions including:

  • An aspiration – what we wish we knew.

  • A condensation – what we think we know.

  • A compounding – (nothing accumulates in an unaltered form).

  • A guide – what is worthy of our time.

  • A value – what is worthy of knowing.

4.2 Reflections on this Special Section

The variety in conceptions of theory exhibited in this special section evidences the primary argument I have put forward. In summary, different intellectual communities articulate theory in a variety of ways. Theory is viewed: (a) as a law-like cause-effect relationship that may be used to develop practical technological rules (Thalheim, in this section), (b) as a set of models, which are themselves simplified abstractions of reality (Krämer and Schnurr, in this section), and (c) as a foundation for specific domain-oriented sub-disciplines (Suhl, in this section). There is some agreement among these papers that theory differs among disciplines (Avison and Malaurent as well as Fettke, in this section). In addition, Müller (in this section) notes the relationship between different onto-epistemologies that disclose different phenomena, and the theorizing that identifies and accounts for those phenomena. For example, the phenomenon of IS use, which is grounded in a Cartesian separation of user and object (Weber 2012), is de-centered in a non-dualist ontology (Barad 1996; Riemer and Johnston 2012).

These different conceptions do not present a compelling argument that IS/BISE and design communities should search for a unifying conceptual ground upon which to construct “theory for everyone,” or for an integrated conception of theory across communities. Rather, they evidence the position argued in this essay that different conceptions of theory are not only inevitable but essential for the different communities within IS/BISE, design and engineering to progress. It is not possible or desirable to reconcile or integrate the many descriptions of theory such that every science community would agree on a single set of normative criteria. For example, IS is composed of multiple intellectual communities (Larsen et al. 2008). These communities have differing goals and values, and their different ontological foundations disclose different phenomena. Some communities in IS and BISE focus on explaining and predicting known phenomena. Recognizing the multiple forms and interpretations of explanation (Hovorka 2004) and of prediction (Hacking 1999) renders Gregor’s (2006) theory types equivocal, in that the development and assessment of explanatory or predictive theories differs depending on the specific form of explanation or prediction implicated in the theory discourse. IS design- and engineering-oriented communities are more like architectural practice (Lee 1991) in their focus on creating new realities and emergent phenomena rather than retrospective explanation or specific future predictions. But they are different practices and consider theory quite differently. In each community the resulting theory-as-discourse has different criteria for development, for contribution, for progress, and for adjudication of quality. In some communities, increasing the absolute accuracy of prediction is valuable. In other communities, increasing the business utility of prediction indicates progress. For some, the creation of novel or problem-solving artifacts constitutes contribution and intellectual progress. But often progress can only be judged in retrospect, as technologies or new processes derived from scientific inquiry come to dominate the landscape. Broadly, there are multiple distinctions for progress, including increasing correspondence of representations to observed phenomena, increasing coherence of a set of beliefs held to be true, and pragmatism. These adjudications further illustrate the inevitability of different theory discourses within and among the IS/BISE and design communities, as each community enacts theory-as-discourse in relation to its own shared commitments to knowing the world.

A flexible and many-valent theory-as-discourse does not lead to arbitrary or relativistic conceptions of theory. The instrumental and discursive theory-as-discourse proposed here is implied by Pickering’s “mangle” (Pickering 1995) and by Hacking’s (1992) “motley of science.” The dialectic of resistance and accommodation in scientific practice provides severe criteria for objectivity at both community and individual levels. These may include demands for falsifiability, avoidance of post-hoc and ad-hoc modifications, and the preference for theories which predict new phenomena over theories that explain what is already known (Pickering 1995). These, and other shared commitments of the institution of science, are the background against which communities adjudicate the quality of each theory-as-discourse. As scientific practice is enacted, the instruments, symbolic generalizations, models, and values are challenged, supported, critiqued, and evolve. The material phenomena themselves resist and push back, revealing a realm in which researchers and their instruments struggle to make things work (Pickering 1992). Material reality resists capture by experiments, denies measurement, and confounds instruments. Accommodation occurs when researchers enact conceptual, instrumental, or other reconfigurations to overcome resistance (Pickering 1995). The dialectic of resistance and accommodation thus results in further changes in the theory discourse. When material resistance becomes extreme, a theory-as-discourse will no longer elaborate “a distinct realm of facts, phenomena, and understandings of the world” (Pickering 1995, p. 202), and it is abandoned. For example, Wegener’s theory of continental drift (Wegener 1966), first published in 1915, was dismissed as eccentric, footloose, preposterous, and improbable. But new instruments (e.g., sonar, magnetometers), the disclosure of new phenomena (e.g., ocean ridges and trenches, earthquake zones), new theory (e.g., sea-floor spreading, magnetic field reversal), and new models (e.g., continental drift, lithosphere dynamics) entered the theory-as-discourse, resulting in the abandonment of contracting-earth theory and the broad acceptance of Plate Tectonics – albeit 50 years later.

The theoretical discourse culminating in Plate Tectonics illustrates that the phenomenon itself changed as symbolic generalizations, instruments, models, and new exemplars became part of the disciplinary matrix. It is only within this discourse, in its entirety, that Plate Tectonics theory makes the world comprehensible. Theory-as-discourse acknowledges the variety of contributions composing a community’s disciplinary matrix and contextualizes the social-political-material-discursive practice of scientific institutions. This position liberates us from an unresolvable debate on what theory is or should be. In rejuvenating the discussion of the full spectrum of potential research contributions which constitute a disciplinary matrix, we may restore theory to an appropriate position and regain confidence in the institution of science itself.

Dr. Dirk Hovorka

University of Sydney

5 Microeconomically Founded Information Systems Research

5.1 Introduction

It is our fundamental understanding that the main purpose of IS research, like that of most other research disciplines, should be the development of robust theories, which can then inform us about the likely answers to our research questions. What is notable, although not unique, about IS research is that the research questions we pursue are concerned not only with the understanding, explanation, and possibly prediction of real-world phenomena, but also with how we can shape the institutions (North 1991; Roth 2002) that govern these phenomena in order to achieve a certain goal (cf. Gregor 2006). In this regard, IS research takes a theory-guided engineering perspective.

Consider the domain of electronic markets, for example. IS research may be interested in why an observed (e.g., technology induced) market behavior occurs, which market outcomes are likely under a given scenario, but also how markets should be designed in order to achieve a desirable outcome.

In the following we will develop and discuss what we call an idealized microeconomically founded IS research process cycle, depicted in Fig. 2, which reflects our view that fruitful IS theories can be built upon formal, analytic models. Such models are in turn founded upon both stylized facts that are derived from empirical regularities observed in reality and the existing body of knowledge stemming from robust theories. By reality, we denote the objects and processes of investigation that research intends to describe or understand. Scientific inquiries are concerned either with realizations of the past or with potential future states. Researchers perceive reality through empirical observation and data gathering, which is naturally constrained and imperfect. Models, which in themselves are the foundation of theory, can then be used to explain, predict, and design instances of the real world. Finally, models, and thus also theory, are evaluated and refined with respect to their ability to inform us about past or future real-world phenomena. This can be achieved in field or laboratory studies, either by validating or falsifying theory-guided hypotheses, by comparing a theory’s predictions with actual future outcomes, or by evaluating the success of theory-informed design proposals and engineering approaches in actual applications.
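To make one pass through this cycle tangible, the following toy sketch (our addition, not part of the original framework; the linear demand “reality,” all numbers, and all steps are invented for illustration) observes a noisy process, derives a stylized fact, fits a formal model, and confronts a prediction with a new observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Reality": an unknown data-generating process (here: linear demand plus noise).
def reality(price):
    return 100 - 2.5 * price + rng.normal(0, 5, size=price.shape)

# 1. Empirical observation and data gathering (constrained and imperfect).
prices = rng.uniform(5, 30, size=50)
demand = reality(prices)

# 2. Stylized fact: demand appears to fall roughly linearly in price.
# 3. Formal model: q = a + b*p, with parameters estimated by least squares.
b, a = np.polyfit(prices, demand, deg=1)

# 4. Theory-guided prediction for a future scenario (price of 20) ...
predicted = a + b * 20.0

# 5. ... evaluated against a fresh observation, closing the cycle.
observed = reality(np.array([20.0]))[0]
print(f"model: q = {a:.1f} {b:+.2f}*p; predicted q(20) = {predicted:.1f}; observed = {observed:.1f}")
```

In an actual research project, step 5 would of course be a field or laboratory study rather than a further draw from the same simulated process.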

The research paradigm described here is more specific than (but not contradictory to) more general IS research paradigms (cf. Frank 2006), such as design science (cf., e.g., Hevner et al. 2004). Nevertheless, we will argue that theories developed under this framework are suitable for pursuing all four fundamental goals of IS research, namely analysis, explanation, prediction, and prescription/design (cf. Gregor 2006). It is not our intention, however, to evaluate or judge different IS research approaches, but rather to motivate why we believe that the proposed microeconomically founded research paradigm is one of several appropriate means to rigorously develop relevant IS theories.

Fig. 2 Idealized microeconomically founded IS research process cycle

5.2 The Building Blocks of Microeconomically Founded Theory Development

5.2.1 Theory as a Set of Models

In general, theory has been characterized as the “basic aim of science” (Kerlinger 1986, p. 8) and is often referred to as “the answer to queries of why” (Kaplan and Merton, cited by Sutton and Staw 1995, p. 378). According to Weick (2005, p. 396), a theory may be measured by its success to “explain, predict, and delight”.

In explaining our precise understanding of “theory”, we start from the premise that the main task of theory is the integration of findings of individual studies into a modular but coherent body of knowledge that connects research agendas based on a shared terminology and provides a microfoundation. Revision and extension of theory is achieved in iterative steps through new or modified models that may either re-investigate central assumptions, thus deepening the theory’s microfoundation, or create meta-models by further abstraction based on the existing body of knowledge. By this means, a microfounded theory serves as an anchor (Dasgupta 2002) and provides building blocks for new research projects and further theory-building.

In our view, robust theories are the result of deduction and induction from a host of formal models. Therefore, theory can be viewed as a classified set or series of models (Morgan and Knuuttila 2012). In the philosophy of science, this integral role of models as part of the structure of theory has been supported by the Semantic View and further emphasized by the Pragmatic View (Winther 2015). Consequently, a clear distinction between theory and its models is difficult in general, and even more so if the analysis of theoretical models is deemed the central part of scientific activity.

At the extreme, a single model can already be the foundation of a theory, although probably not a very robust one. In this regard, the understanding of a robust theory in the social sciences may differ from that in the natural sciences, because theory in the social sciences can be very context dependent: the subjectivity of decision makers, i.e., their beliefs, information, and view of the world, substantially shapes their choices and actions (Hausman 2013). For example, Dasgupta (2002, p. 63) noted that “the physicist, Steven Weinberg, once remarked that when you have ‘seen’ one electron, you have seen them all. [...] When you have observed one transaction, you have not observed them all. More tellingly, when you have met one human being, you have by no means met them all”. This is why a robust theory in the social sciences should regularly be built upon a set of models, each of which takes a different perspective on a particular issue and explores a slightly different set of assumptions, such that the boundaries of the theory become transparent.

5.2.2 Models as the Mediator Between Theory and Reality

This understanding of theory shifts our attention to the development of suitable models. Models as idealizations (Morgan and Knuuttila 2012) serve as representations of reality that are obtained by simplification, abstraction (see, e.g., the work of Cartwright 2005; Hausman 1990) and/or isolation (Mäki 1992, 2012). But they may also be created as pure constructions, i.e., exaggerated caricatures (Gibbard and Varian 1978), fictional constructs (Sugden 2000), or heuristic devices that “mimic [...] some stylized features of the real system” (Morgan and Knuuttila 2012, p. 64). Gilboa et al. (2014) suggested that economic models serve as analogies that allow for case-based reasoning and contribute to the body of knowledge through inductive inference rather than through deductive, rule-based reasoning. We advocate the use of formal, analytic models in this context, because such models make transparent the assumptions that lead to a proposition, and possibly a normative statement, upon which a robust theory, and ultimately a robust explanation or prediction, can be built. Note that mathematical formalization is a sufficient, but not a necessary, prerequisite for developing a formal model, because it allows one to precisely formulate its subject domain, making it an “exact science” (Griesemer 2013, p. 299). Moreover, Dasgupta (2002, p. 70f.) argued that in building a theory “prior intuition is often of little help. That is why mathematical modeling has proved to be indispensable”. The analytic approach provides researchers with a toolbox to deal with especially hard and complex problems. By means of logical verification, propositions can be shown to be internally true with regard to the underlying assumptions.
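As a minimal illustration of such logical verification (our own sketch, not an example from the text), the following SymPy fragment derives the Nash equilibrium of a textbook Cournot duopoly; the assumptions (linear inverse demand, constant marginal cost, simultaneous quantity choice) are stated explicitly, and the proposition follows deductively from them:

```python
import sympy as sp

# Explicit assumptions: two firms, linear inverse demand, constant marginal cost.
q1, q2, a, b, c = sp.symbols('q1 q2 a b c', positive=True)

price = a - b * (q1 + q2)      # linear inverse demand
profit1 = (price - c) * q1     # firm 1's profit
profit2 = (price - c) * q2     # firm 2's profit

# First-order conditions define the best responses; solving them jointly
# yields the Cournot-Nash equilibrium quantities.
foc = [sp.diff(profit1, q1), sp.diff(profit2, q2)]
equilibrium = sp.solve(foc, [q1, q2], dict=True)[0]
print(equilibrium)   # {q1: (a - c)/(3*b), q2: (a - c)/(3*b)}
```

Replacing an assumption, say the demand function, and re-running the derivation immediately reveals which conclusions depend on it, which is precisely the transparency argued for above.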

In general, the goal of a model is to “capture only those core causal factors, capacities or the essentials of a causal mechanism that bring about a certain target phenomenon” (Morgan and Knuuttila 2012, p. 53). Such an abstraction is the prerequisite for conducting a deductive analysis within a particular scenario of interest. What we consider particularly important for developing relevant models is that a model’s microfoundation should contain elements of both theory and reality. On the one hand, a model’s assumptions should reflect stylized empirical facts that are well grounded in observed empirical regularities or relevant future scenarios. Such empirical facts can be derived directly from gathered data (most likely with measurement error), may already be the result of extended data analysis, e.g., in the form of detected patterns or correlations, or may be identified by means of a literature review (Houy et al. 2015). However, stylized empirical facts need not (yet) be supported by any theory. This also enables us to incorporate insights of theory-free empirical analysis [particularly (big) data analytics or machine learning] into formal models, which may then lead to a theory that can explain the empirical regularities. On the other hand, a model’s assumptions may also be derived from the existing body of knowledge, i.e., from theory. This exemplifies the dual view on the relationship between models and theory: although models are used to advance theory, theory is also used to produce and inform models.

A main line of attack against analytic models is to argue that they are not realistic and that model-driven theory is thus useless because there is nothing to learn about reality. This criticism is amplified in the social sciences, where models are context dependent, as argued above. This naive understanding, however, falls short. First, as we have just mentioned, good models should be grounded in stylized empirical facts. Second, there is an inherent trade-off between accuracy and generality, achieved through simplicity (Gilboa et al. 2014). Scholars experienced in the domain of modeling generally agree that too much complexity impedes the explanatory power and the interpretability of models. For example, Schwab et al. (2011, p. 1115) stated that in order “to formulate useful generalizations, researchers need to focus on the most fundamental, pervasive, and inertial causal relations. To guide human action, researchers need to develop parsimonious, and simple models that humans understand”. In the words of Lucas (1980, p. 697), “a ‘good’ model [...] will not be exactly more ‘real’ than a poor one, but will provide better imitations”. In this context, the statistician George Box coined the famous phrase that “all models are wrong, but some are useful” (Box 1979, p. 2), clarifying that a model must inherently be unrealistic in a dogmatic sense (see Mäki 2012 for a discussion), but that models in fact enable us to understand real phenomena by abstracting from the complexity of reality. To exemplify this, Robinson (1962, p. 33) argued that “a model which took account of all the variegation of reality would be of no more use than a map at the scale of one to one”. Of course, an interesting model must also exceed a pure tautology, i.e., the results that can be deduced from its assumptions are usually not a priori clear but may be surprising (Koopmans 1957; Morgan and Knuuttila 2012). This requirement can be paraphrased by a quote supposedly due to Einstein: “Everything should be made as simple as possible, but not simpler”.

Furthermore, we wish to emphasize that over and beyond the explanatory function of formal models, the modeling process itself may prove valuable for understanding a particular scenario. Moreover, a model is an instrument to express an individual’s perception of a problem and may therefore serve as a communication device. Gibbard and Varian (1978, p. 669) stated that “perhaps, it is initially unclear what is to be explained, and a model provides a means of formulation”.

5.2.3 Empirical Analyses as the Means to Evaluate Theory

According to our theory-centric research view, empirical analysis serves two core functions: (1) as described above, it is a means to derive stylized facts in order to motivate model assumptions, or likewise, to evaluate the plausibility of proposed assumptions; (2) as will be described next, it is also a means to evaluate the quality of a theory as a whole. In the context of IS research, we conceive of three main ways in which theory can be evaluated.

First, empirical analysis, foremost field and laboratory studies, can be employed to falsify [in the spirit of Lakatos and Popper (Hausman 2013; Backhouse 2012)], and more ambitiously to validate, theoretically derived hypotheses. While field studies have the advantage of high external validity, they can generally be challenged on the premise that it is difficult to establish causal effects due to problems of (unobserved) confounding variables and endogeneity. At a fundamental level, this gives rise to doubts whether empirical observations are able to falsify (a fortiori validate) theory at all. These concerns are magnified by the context-specific nature of field studies and a lack of control over the environment that encompasses investigations. Laboratory experiments may be able to mitigate some of these concerns through the systematic variation of treatment conditions, randomization of subjects, and augmented control by the researcher. Their high internal validity, albeit at the cost of external validity, facilitates the isolation of causal relationships and makes the falsification of theoretical propositions more easily justifiable (Guala 2005). Furthermore, laboratory experiments facilitate the process of de-idealization (Morgan and Knuuttila 2012), i.e., the generalization of the model context beyond its well-defined assumptions by successively relaxing the assumptions until the theory’s established hypotheses begin to break down. Ultimately, however, laboratory and field studies are complementary means to a similar end.
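A minimal sketch of this logic (our own illustration; the treatment effect, noise level, and sample sizes are invented) confronts a theory-guided hypothesis, e.g., “the new market design raises trading efficiency,” with data from a randomized laboratory experiment using a standard two-sample test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical lab experiment: subjects randomized into control and
# treatment conditions (e.g., two market designs); one efficiency
# measurement per subject.
control = rng.normal(loc=0.70, scale=0.10, size=40)
treatment = rng.normal(loc=0.78, scale=0.10, size=40)

# Welch's t-test: a small p-value speaks against the null hypothesis of
# no difference; a large one fails to corroborate the theory.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```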

Second, empirical analysis can evaluate the accuracy of theory-driven predictions over time. Although hypotheses may also be regarded as model predictions, the focus here lies less on the falsification of suggested causal relationships and more on the correct qualitative assessment of the impact of future scenarios. With regard to its ability to predict future states of reality [in the sense of Friedman 1953], a microfounded theory draws on its ability to explain observations at the macro level, based on an understanding of the underlying mechanisms and the necessary conditions. By this means, theory-driven predictions are likely to be more robust to changes of real systems, as underlying causes can be identified and theory can be modified accordingly (Dasgupta 2002). Moreover, formal analysis allows for experimentation and the evaluation of counterfactuals. Two remarks should be made in this context: First, there exists an inherent trade-off between a theory’s simplicity and its predictive accuracy. While a simple model or theory may apply more generally and is able to make more robust qualitative predictions, it will almost certainly be too simple to make accurate quantitative predictions. In turn, the reverse holds true for complex models. This is akin to what is known as the bias-variance trade-off in statistics (cf. Hastie et al. 2009). Second, even if a theory’s prediction is accurate, this does not “prove” in a deductive sense that the theory is valid. We may only apply what is known as abductive inference here, that is, we can infer that a theory was sufficient to predict the phenomenon of interest, but not that it was necessary, i.e., the only possible theory to be sufficient.
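This trade-off is easy to visualize numerically. In the following sketch (our addition; the sine “reality,” the noise level, and the polynomial degrees are arbitrary choices), a very simple and a very complex model both tend to predict worse out of sample than a moderately complex one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy observations of an unknown process.
x = np.sort(rng.uniform(-3, 3, 60))
y = np.sin(x) + rng.normal(0, 0.3, size=x.size)

x_train, y_train = x[::2], y[::2]    # half the data for fitting ...
x_test, y_test = x[1::2], y[1::2]    # ... the other half for out-of-sample evaluation

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: out-of-sample MSE = {mse:.3f}")
# Typically, both the underfit (degree 1) and the overfit (degree 9)
# model predict worse out of sample than the moderate (degree 3) one.
```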

Third, and possibly most interesting in the context of IS research, empirical studies can serve as a testbed for theory-driven design proposals. In this context, laboratory experiments can be seen as an intermediate economic engineering step, similar to a wind tunnel in traditional engineering, where the design proposals (e.g., a proposed market design or regulatory institution) can be evaluated under idealized conditions that mirror those assumptions under which the theory was developed. If the proposed design performs well (relative to the intended goal) in the laboratory then it should be taken to the field for further evaluation. If, however, the proposed design already fails to perform in the laboratory, then there is little reason to believe that it would perform well in the field (Plott 1987). Consequently, the design, and most probably also the underlying theory, would need revision already at this stage.

5.3 Conclusions

Recently, several scholars in the fields of management (Locke 2007; Hambrick 2007) and IS (Avison and Malaurent 2014), among others, have criticized excessive adherence to theory and argued that a scientific contribution can also be made without theory. While we are sympathetic to this view, we strongly believe that the development of robust theories is at the core of the scientific endeavor. However, we also believe that these models and theories should be both (1) well grounded in stylized empirical facts that are the result of inductive research efforts and (2) evaluated and refined through empirical analyses based on field studies and laboratory experiments. To this end, we have motivated and discussed a microeconomically founded IS research paradigm that we deem suitable for developing theories in our field that are rigorous and relevant. In this spirit, the long-term goal of microeconomically founded IS research is the development of robust and stable theories that have been refined through several repetitions of the depicted research process cycle.

Prof. Dr. Jan Krämer

Daniel Schnurr, M.Sc.

Universität Passau

6 Theory in the Age of Post-Adoption

6.1 Introduction

To put first things first: I think of theory and theorizing as the key task of any science and feel that our discipline’s attention is increasingly shifting in that direction. This is evidenced by seminal contributions (e.g., Burton-Jones et al. 2015; Gregor 2006; Weber 2012), special sections in key journals (e.g., MISQ and JAIS), and dedicated conference tracks (esp. at ICIS, ECIS, and HICSS). In my opinion, this is a welcome shift from methods to theories – or from how to what we research – that brings a dormant discussion to the center stage: what is theory?

This shift also comes with controversy: While I personally don’t agree with the “theory fetish” Avison and Malaurent (2014) diagnose, I think they do our discipline a great service by bringing this discussion to the fore. However, I believe that this issue’s editorial points in the right direction when it refers to Markus’ (2014, p. 342) observation that “conflicting notions of theory and theoretical contribution, rather than sheer overemphasis on theory, may lie at the heart of the problem [...].” In light of an increasing recognition of the debate about what theory is, it comes as no surprise that Becker et al. (2015) find that “rethinking the theoretical foundations of the IS discipline” is among the top three grand challenges in our discipline’s future development – both in terms of relevance and impact.

6.2 The Field of Post-Adoption

One arena in which I believe this challenge to be particularly pressing is post-adoption. As a response to criticism of simple models of technology adoption, the post-adoption research community is shaping up to develop more elaborate models of what happens across multiple levels once technology starts to interact with individuals’ actions and larger organizational, market, and societal structures. The resultant research opportunities resonate with the German Informatics Society’s grand challenge of omnipresent human-computer interaction, and socio-technical issues, broadly speaking, are among the key issues in the BISE community as well (Becker et al. 2015). Outside of academia, post-adoption research comes at a time when many organizations are thinking about how to engage in digital transformation in order to leverage modern information and communication technologies.

Of course, this is not a new issue. Its roots date back to the 1970s (esp. Bostrom and Heinen 1977a, b) and beyond (e.g., Emery and Trist 1960; Woodward 1958). Recently, however, post-adoption research has mainly been characterized by an intense ontological and epistemological debate and a resultant fragmentation of its results – that is, its theories.

(1) Which conception of theory is central to your area of research?

The two main contestants in this debate come with different conceptions of theory: those advocating the ontological separability of social and material aspects on one side, and those promoting ontological inseparability on the other (Mueller et al. 2012). Recently, these camps have begun to rally under new banners such as “critical realism” versus “agential realism” (Leonardi 2013) or “weak sociomateriality” versus “strong sociomateriality” (Jones 2014), respectively.

While the interested reader can find more elaborate explanations of these camps in Leonardi (2013) and Jones (2014), the camps’ assumptions about ontology and epistemology are central to the debate on theory. On the one hand, the separability camp subscribes to a realist ontology and a mostly representational epistemology. For them, material and social aspects exist independently of any actor and the theorist’s key job is to determine which is which and how they interact once they meet in practice. Works by Mutch (2010, 2013) and Mingers (2000) – who strongly draw on Bhaskar (1979) – investigate how such a paradigmatic setup can facilitate the study of technology in social systems, and papers by Burton-Jones and Grange (2013) or Volkoff, Strong, and colleagues (e.g., Strong and Volkoff 2010; Volkoff et al. 2007) deliver excellent exemplars of how this philosophical position helps develop theoretical models of post-adoption mechanisms and processes.

On the other hand, the inseparability camp grants ontological equality to all entities involved in a phenomenon. These entities, however, do not depend on any objective reality, nor are they an attribute of human (or, for that matter, non-human) agency. They rather emerge within entanglements through material-discursive practices. Such an entanglement, or phenomenon, is the ontological entity that is sociomaterial. This means that, ontologically speaking, all phenomena are inseparably social and material and that any attempt to separate the two is an arbitrary decision by an agent – be it an actor in one of our studies or the researcher. This stresses a deviation from the representational epistemology discussed above and suggests a shift towards performative (and diffractive) thinking. This camp, rooted in Barad’s (2003) work, was made popular in IS by Orlikowski and Scott (esp. Orlikowski 2010; Orlikowski and Scott 2008), and studies by Scott and Orlikowski (2014) themselves and by Schultze (2011) illustrate the tenets of this paradigmatic position and its conception of theory.

While my own thinking increasingly gravitates towards the realist position (e.g., Lauterbach et al. 2014) – mainly because I find respective field studies easier to design – I strongly believe that both positions should not be seen as fundamentally irreconcilable opposites. Rather, I would like to think that there is a level beyond the current discussion on which we could explore how insights from these two perspectives complement each other. However, the (seeming) opposition between the camps creates a key challenge to this (perhaps naive) belief: Are the mostly positivistic conceptions of theory and theorizing still useful (let alone valid) in the neo-positivist world of the critical realists or in the non-positivist world of the agential realists or – particularly – in a world that seeks to move beyond their distinction?

(2) How do you evaluate progress in your field? What is a long-term goal?

It is this challenge that also drives progress: Over the last five years, progress in this domain has probably best been described by the emergence of new theoretical perspectives and our discipline’s increasing command of the underlying paradigmatic positions. While the former is evidenced by a growing number of studies employing some form of sociomaterial thinking (e.g., Hultin and Mähring 2014; Introna and Hayes 2011; Johri 2011; Jones 2014), the latter is underlined by the various attempts to better structure the debate’s philosophical roots (e.g., Jones 2014; Leonardi 2013).

However, a challenge I see here is that many seem to have been motivated by some instance of paradigmatic inconvenience to develop their own variant of the ontological and epistemological foundations. Looking at the larger body of sociomaterial studies published recently, irreconcilable differences seem to hamper our discipline’s ability to integrate and synthesize theoretical findings, which I argued for above – an essential prerequisite for the development of a cumulative tradition and a competition of theories to retain the most powerful explanations (Weick 1989).

Consequently, a long-term goal I think worthy of exploration is to turn away from a theory for every one towards a theory for everyone – even if we may have to stop calling it theory then. That is, carefully discussing if and how paradigmatic differences influence our findings, what we mean when we talk of theory, and our ability to compare, contrast, and combine insights into the interplay of technology, social structures, and individuals’ behaviors. Hovorka (this section) makes an excellent observation when he points out that the different communities involved in such an integration effort will likely also realize differences in what they mean by theory and how they judge its progress and quality. Nevertheless, I feel that this plurality of perspectives still gravitates around the interplay of technology, social structures, and individuals’ behaviors as a common phenomenon. Wouldn’t it thus seem logical to try to learn from each other? To me, this thought resonates with the debate between Avison and Malaurent (2014) and Markus (2014) as much as it seems to be on Barad’s (2003) mind. Also, a debate seeking to transcend philosophical differences seems a promising approach to not simply reproduce the philosophical discussions from outside the BISE community, but to actually contribute to advancing these debates – a concern for this domain that can be traced back as far as Williams and Edge’s (1996) seminal paper. Consequently, in order to help the post-adoption domain and its theories grow, revisiting paradigmatic assumptions to explore options for complementarity of findings is an essential prerequisite for integrating and consolidating our various findings towards a shared understanding.

(3) How is theory guiding design and engineering and how does it impact practice?

While much of the debate in this field might seem esoteric, I see three important links between this paradigmatic debate and practice. First, I believe that our research in this domain enables managers to better express their experiences. This is inspired by a steering committee meeting I attended three years ago in which I pitched the post-adoption research my team and I intended to do to a potential host company. While the team and I expected that the philosophical aspects might be ill-matched to the audience, the participating executives quickly adopted the concepts presented to them and retold their experiences in this newfound language. The ensuing discussion allowed them to make sense of each other’s experiences, pinpoint problems, and devise solutions – and resulted in exciting insights for research.

Second, I see important links to the design and engineering of future systems. Insights from this domain of IS research are beginning to shed light on how people interact with technology, make sense of it, and transform what they do through it (e.g., Burton-Jones and Grange 2013; Liang et al. 2015) as well as on how we design the projects that introduce these technologies (e.g., Strong et al. 2014; Wagner et al. 2010). While not yet prominent, some IS research hints towards this research’s impact on how we design technologies and their interfaces, particularly when recognizing material properties and their impact on resultant practices (e.g., Jones 2014; Leonardi 2012). I like the thought Brynjolfsson and McAfee (2014) introduce: Increasingly, we will have to think of technology and how we design it not (only) as a potential replacement for human work, but as a meaningful augmentation that complements human work. This will lead to new forms of technology and interface design just as much as to new patterns of interaction between humans and technology. In the long run, this understanding will inform the development of truly intelligent and self-adapting technologies.

Third, on a more abstract but all the more important level, better understanding of what technology is, how we relate to it, and how it shapes our lives also has an ethical dimension. While underexplored in our field thus far, technology is in the process of fundamentally reshaping our life and how we live it.

Taking these three together, advanced sensemaking and expression will allow for expanded description, analysis, and explanation of the interplay of technology, social structures, and individuals’ behaviors. Such an improved understanding of post-adoption research’s key phenomenon will transform technologies, behaviors, and social structures. Thus, there seems to be nothing quite so practical as a sound understanding of what technology means for us, how we relate to it, and how it influences our behaviors; all of which needs to be grounded in a solid paradigmatic understanding of the theories we develop to help explain these issues.

(4) How do you evaluate the quality of theories in your field?

Much like elsewhere, the basic evaluation of theories in the post-adoption field is conducted through a social process towards consensus among a panel of reviewers, editors, and authors. The key tenet of this process, to me, especially for conceptual pieces mostly focused on theory and theorizing, is whether a newly proposed theory succeeds in convincing peers. To this end, its power to transform our thinking is one of the key aspects I believe to be important in new theoretical contributions. This resonates strongly with DiMaggio’s (1995) idea of theory as narrative with a touch of enlightenment, as well as with my own steering committee experience shared above.

As such, the question of whether a new theoretical perspective helps to make sense of things we observe in practice, but cannot quite explain so far, seems like a key aspect of a theory’s quality. For this, Popper (1980) develops the metaphor of theories as “[...] nets cast to catch what we call ‘the world’; to rationalize, to explain and to master it” (p. 59). Again, DiMaggio (1995) offers a brilliant perspective on theory as being constructed “post hoc,” which to me suggests that many theories might best not be evaluated by any quantitative indicator, but by their potential to inspire and transform thinking.

This also alerts us to the fact that no theory should be looked at in isolation. Beyond any one single theory alone, a good theory also engages in a detailed discussion of rival explanations, boundary-spanning constructs, and its own boundaries. While often neglected in complex manuscripts already pressured for space, this engagement with what else we know is essential to link any theoretical insight back to the larger discourse and its attempt to build a cumulative core of knowledge on the phenomenon we study. Based on my own experiences (e.g., Mueller and Raeth 2012), I particularly appreciate multi-paradigmatic and multi-theoretical work that consciously compares and contrasts what we can see from one perspective with what we would see from another. In the long run, such comparative working will contribute to what Weick (1989) calls disciplined imagination, that is, theorizing as a process of variation, selection, and retention.

Of course the ability to do so depends on understanding the underlying paradigmatic assumptions and on being willing to focus on commonalities and overlaps rather than differences. Above, I hinted towards my belief that the post-adoption community is not yet at a point where such a synthesis is possible. The last five years rather seem to inspire the metaphor of the “Tower of Babel” instead of letting us hope for the coming of a “Babelfish” for theories and insights (as borrowed from Douglas Adams’ best-selling “Hitchhiker’s Guide to the Galaxy” series).

6.3 Challenges on the Way Ahead

In the next five years, however, I am confident that this domain will witness a tremendous discussion and – hopefully – advancement of theory and theorizing. Regardless of which of the above-mentioned camps researchers subscribe to, both will likely be united in their quest for post-positivist theories: neo-positivist, realist scholars on one side and non-positivist scholars on the other. This will come with a shift away from the conceptual monopoly that positivistic, representational constructions of theory have held in the discourse so far. In fact, the upcoming working conference of the IFIP working group 8.2 to be held this December just before ICIS has set out to explore “new encounters with technology and organization” that go “beyond Interpretivism” (from the call for papers), and I am excited to see what this will produce.

Future debates like this will have to address a wide spectrum of issues: from the redefinition of basic theory taxonomy (e.g., is the term “construct” also applicable to describe theories that do not follow a realist ontology and a representational epistemology?) to quite practical concerns (e.g., means of representation; Gregor 2006). This will also lead to an intense debate on what theory really is and new quality criteria that theories have to live up to, preferably also across paradigmatic positions (see, e.g., Burton-Jones et al. 2015 or Lee 2014 for notable early contributions). Reading Hovorka’s contribution to this section, I feel that the post-adoption community is on the brink of realizing and discussing its theories-as-discourses – both in terms of their contents (immediate theories) as well as on a philosophical level (meta-theoretical considerations). While the current fragmentation of these discourses seems to hamper the integration of our various understandings of the post-adoption phenomenon, its heterogeneity must not be seen as something evil per se. Quite to the contrary, I join Scott and Orlikowski (2013) in appreciating the plurality of current studies and also think that Lyytinen and King (2004) make an excellent point when they advocate plurality as a driver of innovation that makes sure that a discipline stays current and maintains a reasonable level of plasticity to adapt to changes in the phenomena it studies.

At the end of the day, all research in this domain strives to better understand the interplay (or intraplay) of technology, social structures, and individuals’ behaviors. In the years ahead, I personally hope that the focus will not only be on the content (i.e., the theory itself), but also on two equally important aspects: First, the meaning of theory – or what comes beyond theory – in order to help integrate what we learn about post-adoption. Second, the process of theorizing in order to help aspiring theorists – like myself – hone the skills and craft of writing and reasoning that constitute theorizing.

Dr. Benjamin Müller

University of Groningen

7 Business and Decision Analytics in BISE: How much Theory do we Need?

As a scientific discipline, BISE is based on a theoretical foundation that includes different theories depending on the focus and perspective of a given subcommunity. The BISE subcommunity considered here, due to its focus on analytical methods and decision support systems, uses quantitative methods to build and analyze descriptive, predictive, and prescriptive models that support decision makers in practice. Here we use the term “Business and Decision Analytics” for this subarea. The quantitative methods draw from a rich theoretical basis in mathematics, statistics, computer science, and operations research, among others. It is not a main goal of BISE researchers to develop new theories in mathematics or operations research, but they need an understanding of theory in order to be able to select the right solution approach for each problem and research task. As a generalization and abstraction, new theoretical findings can be established based on BISE research in this area.

Theories in statistics, artificial intelligence, and data modeling form the basis of business and decision analytics, and researchers develop new models and methods to analyze data and compute various indicators to guide business decisions. Mathematics, algorithm theory, and software engineering are important to guide business analysts and software developers in building optimization systems to compute optimal or near-optimal solutions for complex decision problems in business applications.

The models that represent decision problems from practice tend to be quite large and difficult, so solution methods are needed that can cope with large models and scale according to practical needs. Knowledge of complexity theory helps researchers to classify algorithmic solution methods and to judge their suitability for a given decision problem. It is not a main goal of a BISE researcher to prove the worst-case complexity of an algorithm, but rather to assess which methods are able to generate the best possible solutions that can be realized in practice with today’s technologies.
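The practical weight of this distinction is easy to demonstrate. The following sketch (our own toy example with invented numbers) contrasts an exact, exponential-time search with a fast greedy heuristic on a tiny knapsack-style selection problem; the heuristic scales, but complexity theory warns us it may sacrifice optimality:

```python
from itertools import combinations

# Toy selection problem (invented numbers): choose projects with given
# values and resource weights subject to a capacity limit.
values = [60, 100, 120, 70]
weights = [10, 20, 30, 15]
capacity = 40

# Exact search: guaranteed optimal, but exponential in the number of
# items -- complexity theory tells us not to expect this to scale.
best = 0
for r in range(len(values) + 1):
    for subset in combinations(range(len(values)), r):
        if sum(weights[i] for i in subset) <= capacity:
            best = max(best, sum(values[i] for i in subset))

# Greedy heuristic: sort by value density, O(n log n), often good
# enough in practice but not optimal.
order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
total, load = 0, 0
for i in order:
    if load + weights[i] <= capacity:
        load += weights[i]
        total += values[i]

print(f"exact optimum = {best}, greedy heuristic = {total}")   # 180 vs 160
```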

Fuzzy set theory or alternative uncertainty theories, including stochastics, can form the basis of modeling approaches for preference elicitation and optimization when the available data is uncertain. Discrete-event simulation traditionally uses stochastic distributions to model uncertain data. Decision theory can be used as a basis for designing systems for multicriteria decision support. Some decision support approaches can be built using game theory to represent autonomous actors in agent-based systems.
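As a small illustration of the discrete-event idea (our own sketch; the arrival rate, service rate, and customer count are arbitrary), the following fragment simulates a single-server queue with exponential inter-arrival and service times and compares the simulated mean waiting time with the closed-form M/M/1 prediction:

```python
import random

random.seed(7)

# M/M/1 station: Poisson arrivals (rate 0.8), exponential service (rate 1.0).
arrival_rate, service_rate, n_customers = 0.8, 1.0, 100_000

clock, server_free_at, total_wait = 0.0, 0.0, 0.0
for _ in range(n_customers):
    clock += random.expovariate(arrival_rate)    # next arrival event
    start = max(clock, server_free_at)           # wait if the server is busy
    total_wait += start - clock
    server_free_at = start + random.expovariate(service_rate)

# Queueing theory predicts Wq = rho / (mu - lambda) = 0.8 / 0.2 = 4.0 here;
# the simulation should come close to this value.
print(f"simulated mean waiting time ~ {total_wait / n_customers:.2f}")
```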

Modeling is a very important step in developing solutions for decision situations. The best modeling approach should be selected based on the structure and goals of the decision problem. Optimization models, simulation models, data mining models and multicriteria decision models, among others, have their own application areas, and each modeling technology requires a certain structure of the decision problem. A unified modeling theory is still missing and would be helpful for selecting a suitable modeling approach (see Thalheim, in this section).

A main challenge the business and decision analytics subcommunity faces today is the increasing complexity of decisions in the progressively dynamic environment of today’s business, especially in supply, manufacturing, and service networks (see Fink et al. 2015; Mertens et al. 2015). The increasing interaction of various entities in complex business networks is not yet well understood. At the same time, today’s powerful information technology allows large amounts of structured digital data to be used for decision-making. “Big data” together with cloud technologies provides far more opportunities to analyze and generate supporting information for decision makers than have been realized so far.

A main research goal of the business and decision analytics subcommunity is to develop new models, methods and systems to be able to model and analyze the complex networks and interactions of their entities. New approaches are needed that include uncertainties and consider robustness aspects, thus providing support to help practitioners improve decision making. To achieve this goal, an interdisciplinary approach is necessary. We need expertise in modeling, algorithms, software engineering, and business theories.

The long-term research goal of the business and decision analytics subcommunity is thus to develop and improve models and methods that help to understand and analyze the dynamic environment of today’s business. Evaluation of research progress should therefore assess to what extent new decision models cover relevant areas in business that have not been fully understood until now, as well as how good the proposed methods are at solving and analyzing these models. The models and methods developed should be evaluated against the problem structure and needs of the business world and, at the same time, against the scientific state of the art and relevant theory. The natural goal is thus to combine rigor and relevance and to produce relevant research results at a high level of scientific rigor.

An expert in research and/or practice of business and decision analytics needs interdisciplinary skills and usually combines knowledge of several disciplines such as information systems, mathematical models and methods, business processes, computer science, software engineering, and data science with decision support techniques. In these disciplines theories have been developed that build a theoretical foundation and thus establish the discipline as a scientific research area. Some of the relevant theories are domain-specific and focus on a given application domain, such as ERP, revenue management or recommender systems, and others are of general nature, such as graph theory or complexity theory.

Besides theoretical knowledge, a business and decision analytics professional needs awareness of all competences necessary to complete modeling and system development projects that provide support for business decision makers and processes. Typically, the following competencies are needed:

  • To understand the domain and the specific decision problem.

  • To select a suitable modeling approach: simulation, optimization, MCDM, data analysis etc.

  • To set up a correct model, combining domain knowledge with modeling knowledge and experience.

  • To select the right solution approach and to implement and configure it.

  • If necessary, to develop and test new solution methods.

  • To integrate new quantitative models into an existing business information system, incl. design of database interfaces, user interfaces, communication networks, etc.

  • To interpret the solution for the decision makers.

Typical textbooks on decision support systems and operations research cover most of the relevant areas (see, e.g., Turban et al. 2014); however, they mostly focus on methodological aspects and ignore many areas that are important from the information systems point of view.

The question arises whether the subarea of business and decision analytics in BISE involves or needs its own theories, or whether it is sufficient for it to be based on theories of neighboring disciplines, the combination and integration of which is no doubt a very challenging task in every single project. To my understanding, it does not seem promising to try to develop one unified, comprehensive theory for the complete subcommunity; the area is simply too multi-faceted, constantly evolving, and without sharp boundaries. Its basis would be many theories from the neighboring disciplines, and an expert should have an understanding of the most important ones and be able to combine various aspects of them in each single research and development project.

However, it might be possible and helpful to develop a classification or taxonomy of business and decision analytics that could be called a theory. Such a structured and comprehensive view (though not necessarily covering all aspects) would help to understand the area and to select the right approach and right methods for a given problem.

Individual researchers and practitioners have collected a lot of experience and established strict rules as well as heuristic rules of thumb that help in structuring certain decision problems, selecting the right models and methods, and embedding the system components into an existing IS environment. This knowledge and experience may form the basis for a theory in the sense of a classification, taxonomy, and/or rule system. Such a taxonomy would ideally involve aspects such as application areas, modeling and solving methods, decision support components, as well as integration into business information and communication systems (see Table 4).

Table 4 Examples of components to be included in a classification system for business and decision analytics

A comprehensive taxonomy would be helpful in introducing the area to students and professionals and in communicating the concepts of business and decision analytics. In practice, many objects can be assigned to two or more classes. Nevertheless, the classification would help in assigning an object and selecting the right approach to solve a given business decision task.
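To hint at what such a rule system could look like in executable form (a purely hypothetical sketch; the problem features and the rules are invented examples, not an established taxonomy), a few selection rules might be encoded and queried as follows:

```python
# Each rule maps a feature pattern of a decision problem to a candidate
# modeling approach; all features and rules are illustrative only.
RULES = [
    (lambda p: p["objectives"] > 1,                  "multicriteria decision model"),
    (lambda p: p["uncertain_data"] and p["dynamic"], "stochastic simulation model"),
    (lambda p: p["well_structured"],                 "mathematical optimization model"),
    (lambda p: p["large_historic_data"],             "data mining / predictive model"),
]

def suggest_approaches(problem):
    matches = [name for rule, name in RULES if rule(problem)]
    return matches or ["further analysis needed"]

problem = {"objectives": 1, "uncertain_data": True, "dynamic": True,
           "well_structured": False, "large_historic_data": True}
print(suggest_approaches(problem))
# -> ['stochastic simulation model', 'data mining / predictive model']
```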

Prof. Dr. Leena Suhl

University of Paderborn

8 Towards a Theory of (Conceptual) Models

8.1 Introduction

A theory is, in general, any systematic and coherent collection of ideas that relate to a specific subject. The notion of theory varies across scientific disciplines (Kondakov 1974; Seiffert and Radnitzky 1992; Thiel 2004).

  1. A theory can be understood as a practice-oriented apprenticeship, as a counterpart of acting and of practice, as a systematic generalization of experience, and a system of main ideas.

  2. A (scientific) theory is a “systematic ideational structure of broad scope, conceived by the human imagination, that encompasses a family of empirical (experiential) laws regarding regularities existing in objects and events, both observed and posited. A scientific theory is a structure suggested by these laws and is devised to explain them in a scientifically rational manner. In attempting to explain things and events, the scientist employs (1) careful observation or experiments, (2) reports of regularities, and (3) systematic explanatory schemes (theories).” (Bosco et al. 2015).

  3. A theory can also be understood as an offer, i.e., a scientific, explicit, and systematic discussion of foundations and methods, with critical reflection, and as a system of assured conceptions providing a holistic understanding. Many scientific and engineering disciplines use this constructive understanding of the notion of theory. A constructive theory is a collection of settled instruction conceptions (e.g., concepts, rules, laws, conditions) for (system) development within practical (technical) and quality (esthetic) norms, according to the goals of construction, and guided by some background. A theory is understood as the underpinning of engineering, similar to architecture theory (Semper 1851) and the approaches of Vitruvius and L. B. Alberti. Constructive theories in Computer Science and Business Informatics draw on four kinds of methods: systematic (deductive mathematical or inductive logical), engineering-oriented abductive or compositional, application-driven, and electronics-oriented component methods.

A theory in the third sense combines explicative and prognostic functions. It is applicative, explicative, exploitative, explorative, and implicative on the one side, and preindicating, prognosticative, and predictive on the other. Gregor (2006) associates models with construction-oriented theories for the area of information systems. She distinguishes (1) theories for analyzing, (2) theories for explaining, (3) theories for predicting, (4) theories for explaining and predicting, and (5) theories for design and action. Her main orientation is, however, constructive: models for analysis, explanation, prediction, and construction.

8.2 Models – The Third Dimension of Science

Models are one of the – if not the – central elements of Computer Science and Business Informatics. Research in these disciplines considers models as artifacts that are constructed in a certain way and prepared for their utilization. Models might also be mental models and thought concepts. Models are used in utilization scenarios such as the construction of systems, verification, optimization, explanation, and documentation. In these scenarios they function as instruments.

Given the utilization scenarios, we may use models as perception models, mental models, situation models, experimentation models, formal models, mathematical models, conceptual models, computational models, inspiration models, physical models, visualization models, representation models, diagrammatic models, exploration models, heuristic models, informative models, instructive models, etc. They are a means for some purpose (or better: they function within a certain utilization scenario), are often volatile after having been used, and are useful inside and often useless outside the utilization scenario.

8.2.1 Elements of a General Modeling Theory

A general theory of models should provide answers to questions such as: What is a model? What are its essential elements? Which kinds of models reflect which tasks and support the solution of which problems? Which methods must be provided for a proper use of the model? Which methods support the development and modernization of models? In which cases is the model adequate? What are its limits, and where should the model not be used? In which cases can we rely on a model? What are good models? Which models are effective? Which properties can be proven for models? How can models be integrated and composed? What are the correct activities for modeling? What is the added value of a model? Who can use the model, and how? What are the background theories of modeling? Why should this model be used where it is used? In what way? And by what means?

A general modeling theory generalizes the variety of model notions. Here, language matters: it enables or disables. The theory allows for managing the complexity of models and methods. Model development methods and model utilization methods should be defined in a similar way as in the natural sciences. The theory should also refer to good utilization stories and to best practices.

8.2.2 Models Within the Dichotomy of Theory and State of Affairs

Classical science and also Computer Science and Business Informatics consider models to reflect a certain state of affairs, a certain part of reality, or certain observations. They might also depict parts and pieces of a theory. So, models seem to be placed between the state of affairs and theories. Figure 3 shows the classical understanding of this dichotomy.

This two-dimensional reasoning seems, however, too simple. Models form a further and orthogonal means and are different from theories and also different from the state of affairs.

Fig. 3 Models as characterization of situations, representation of a theory, or a mixture of both

8.2.3 The Development of Sciences

Disciplines often use a combination of empirical research that mainly describes natural phenomena, of theory-oriented research that develops concept worlds, of computational research that simulates complex phenomena, and of data exploration research that unifies theory, experiment, and simulation (Gray 2007). Thus Fig. 4 distinguishes four generations of sciences.

Fig. 4 The four generations of sciences

Models are a main instrument in all four generations. Their function, however, is different as illustrated in Fig. 5.

Fig. 5 Some model functions in the four generations of sciences

8.2.4 Extending the Two Dimensions of the Dichotomy by a Third Dimension

The classical dichotomy of reality and theories should be extended by a third dimension. Theories explain the state of affairs. They are results of explorations of reality. Models provide an understanding of a theory and illustrate reality. For Computer Science and Business Informatics, the relationship is similar. We might, for instance, use schemata as models. The theory behind them could be, for instance, a concept theory.

Models are therefore the third dimension of science (Thalheim and Nissen 2015a). Figure 6 depicts this understanding.

Fig. 6 Models – the third dimension of science and more specifically models in Business Informatics

8.3 The Conception of the (Conceptual) Model

A model is a well-formed, adequate, and dependable instrument that represents origins.

Its criteria of well-formedness, adequacy, and dependability must be commonly accepted by its community of practice within some context and correspond to the functions that a model fulfills in utilization scenarios.

The model should be well-formed according to specific well-formedness criteria. As an instrument or more specifically an artifact, a model comes with its background, e.g., with paradigms, assumptions, postulates, language, thought community, etc. The background is often given only in an implicit form.

A well-formed instrument is adequate for a collection of origins if it is analogous to the origins to be represented according to specific analogy criteria, it is more focused (e.g., simpler, truncated, more abstract or reduced) than the origins being modeled, and if it sufficiently satisfies its purpose.

Well-formedness enables an instrument to be justified by empirical corroboration according to its objectives, by rational coherence and conformity explicitly stated through formulas, by falsifiability, and by stability and plasticity.

The instrument is sufficient by its quality characterization for internal quality, external quality, and quality in use, or through quality characteristics (Thalheim 2010) such as correctness, generality, usefulness, comprehensibility, parsimony, robustness, novelty, etc. Sufficiency is typically combined with some assurance evaluation (tolerance, modality, confidence, and restrictions).

A well-formed instrument is called dependable if it is sufficient and justified for some of the justification properties and some of the sufficiency characteristics.
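
Read as a checklist, the criteria above can be encoded in a few lines. This is a hypothetical reading of the definitions, with predicate names of our own choosing rather than the author's notation:

```python
# Hypothetical encoding of the model criteria as a checklist; the predicate
# names paraphrase the text and are our own, not the author's notation.

from dataclasses import dataclass

@dataclass
class Adequacy:
    analogous_to_origins: bool  # satisfies the specific analogy criteria
    more_focused: bool          # simpler, truncated, more abstract, or reduced
    satisfies_purpose: bool     # sufficiently satisfies its purpose

    def holds(self) -> bool:
        return self.analogous_to_origins and self.more_focused and self.satisfies_purpose

@dataclass
class Dependability:
    justified: bool   # for some of the justification properties
    sufficient: bool  # for some of the sufficiency (quality) characteristics

    def holds(self) -> bool:
        return self.justified and self.sufficient

def qualifies_as_model(well_formed: bool, adequacy: Adequacy,
                       dependability: Dependability) -> bool:
    """A well-formed, adequate, and dependable instrument qualifies as a model."""
    return well_formed and adequacy.holds() and dependability.holds()
```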

8.3.1 Scenarios and Functions of a Model

Models function as instruments in some usage scenarios and a given usage spectrum. Their function in these scenarios is a combination of functions such as explanation, optimization-variation, validation-verification-testing, reflection-optimization, exploration, hypothetical investigation, documentation-visualization, and description-prescription functions. A model functions effectively in some of the scenarios and less effectively in others; a hypothetical encoding of this assignment is sketched below. The function determines the purpose and the objective (or goal) of the model. The functioning of models is supported by methods. Such methods support tasks such as defining, constructing, exploring, communicating, understanding, replacing, substituting, documenting, negotiating, optimizing, validating, verifying, testing, reporting, and accounting. A model is effective if it can be deployed according to its objectives.
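
The following sketch assigns model functions to usage scenarios. The function names follow the text; the scenario names are invented for illustration:

```python
# Invented illustration: model functions per usage scenario. The scenario
# names are hypothetical; the function names follow the text.

from enum import Enum, auto

class ModelFunction(Enum):
    EXPLANATION = auto()
    OPTIMIZATION_VARIATION = auto()
    VALIDATION_VERIFICATION_TESTING = auto()
    REFLECTION_OPTIMIZATION = auto()
    EXPLORATION = auto()
    HYPOTHETICAL_INVESTIGATION = auto()
    DOCUMENTATION_VISUALIZATION = auto()
    DESCRIPTION_PRESCRIPTION = auto()

# A model functions effectively in some scenarios and less effectively in others.
scenario_functions = {
    "requirements elicitation": {ModelFunction.EXPLORATION,
                                 ModelFunction.DOCUMENTATION_VISUALIZATION},
    "system construction":      {ModelFunction.DESCRIPTION_PRESCRIPTION},
    "quality assurance":        {ModelFunction.VALIDATION_VERIFICATION_TESTING},
}
```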

8.3.2 Conceptual Models

An information systems or database model is typically a schematic description of a system, theory, or phenomenon of an origin that accounts for known or inferred properties of the origin and may be used for further study of the origin’s characteristics.

Conceptual models are models enhanced by concepts and integrated into a space of conceptions. Conceptional modeling is modeling with associations to concepts and conceptions. A conceptual model incorporates concepts into the model. Hence, Fig. 6 can now be revisited for this case, and we arrive at Fig. 7. A small sketch of this enhancement follows.
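
To make the enhancement concrete, the earlier schema sketch can be extended so that a model element carries an explicit association to a concept. Again, all names are hypothetical illustrations:

```python
# Hypothetical sketch: enhancing a schema element with an explicit concept
# association, yielding a rudimentary conceptual model element.

from dataclasses import dataclass

@dataclass
class Concept:
    """A concept from the space of conceptions behind the model."""
    name: str
    definition: str

@dataclass
class ConceptualElement:
    """A model element together with the concept it incorporates."""
    element_name: str  # e.g., the name of an entity type in the schema
    concept: Concept

agent = Concept("Agent", "An actor that can take part in activities.")
conceptual_person = ConceptualElement("Person", agent)
```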

Fig. 7. Conceptual modeling as descriptive or rudimentary conceptual modeling for database models

8.3.3 Reasoning Theory within a Theory of Models

A general theory of reasoning must therefore cover many different aspects. We may structure these aspects by a pattern for the specification of reasoning support for modeling acts or steps (Thalheim 2011, 2012b, 2014; Thalheim and Nissen 2015b); a minimal sketch of the pattern follows the list:

  • the modeling acts with their specifics (Thalheim 2010);

  • the foundation of the modeling acts, with the theory that is going to support each act, the techniques that can be used for the start, completion, and support of the modeling act, and the reasoning techniques that can be applied at each step (Thalheim 2012a);

  • the partners involved, with their obligations, permissions, and restrictions, with their roles and rights, and with the parts they play;

  • the aspects that are under consideration for the current modeling acts;

  • the consumed and produced elements of the instrument that are under consideration during work;

  • the resources that must be obtained, that can be used or that are going to be modified during a modeling act.
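
As a minimal record-style sketch of this pattern (the field names paraphrase the six aspects above and are our own encoding, not a notation from the cited works):

```python
# Minimal sketch of the reasoning-support pattern as a record; field names
# paraphrase the six aspects listed above and are our own encoding.

from dataclasses import dataclass, field

@dataclass
class Partner:
    role: str
    obligations: list[str] = field(default_factory=list)
    permissions: list[str] = field(default_factory=list)
    restrictions: list[str] = field(default_factory=list)

@dataclass
class ReasoningSupport:
    modeling_act: str             # the modeling act with its specifics
    foundation: list[str]         # supporting theory and applicable techniques
    partners: list[Partner]       # involved partners with the parts they play
    aspects: list[str]            # aspects under consideration for the act
    consumed_elements: list[str]  # elements of the instrument consumed during work
    produced_elements: list[str]  # elements of the instrument produced during work
    resources: list[str]          # resources obtained, used, or modified
```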

Consider, for instance, the reasoning that aims at realization objectives. It includes specific facets such as

  • to command, to require, to compel, and to make someone do something by means of supporting acts such as communicating, requesting, bespeaking, ordering, forbidding, prohibiting, interdicting, proscribing;

  • to ask, to expect, to consider obligatory, to request and expect by means of specific supporting acts such as transmitting, communicating, calling for, demanding;

  • to want, to need, to require by means of supporting acts of wanting, needing, requiring;

  • to necessitate, to ask, to postulate, to need, to take, to involve, to call for, to demand, or to require as useful, just, or proper.

The reasoning that is geared towards operation, relevant properties, model objectives, the model itself, and towards construction, assessment, and guarantees can be characterized in a similar form.

8.4 Theories and (Conceptual) Models

Thalheim and Nissen (2015a) distinguish between ‘models’ (models as representations or artifacts), ‘to model’ (methods of model development and model utilization), and ‘modeling’ (systematic, well-founded, and matured model development and model utilization); the three together are abbreviated as MMM.

8.4.1 Art, Science, and Culture of Modeling

Art (in the broad sense, as used, e.g., in D. E. Knuth’s “The Art of Computer Programming”) is based on creative skills and imagination in the MMM community and produces models as instruments that allow an easy and simple utilization in given scenarios. It requires the conscious development of well-formed models, and its products are intended to be contemplated and appreciated as adequate and dependable. We claim that an MMM art has already been developed but has not yet been compiled into a holistic body of knowledge.

Engineering, however, requires a creative application of scientific principles to the design, development, and utilization of models, to forecasting the effect of model application, and to effectively handling the co-evolution of systems and models according to the function of models in utilization scenarios. It requires an MMM science and culture.

An MMM science additionally contains, beyond MMM theories, methodologies, matured guidelines for modeling practice, and well-founded algorithms and methods for the development and utilization of models. Culture is “a system of shared values, which distinguishes members of one group or category of people from those of another group; culture is therefore intrinsic in the mind of individuals and it can be measured” (Hofstede et al. 2010). An MMM culture is the collective programming of the mind in one MMM community of practice. It will differ between different areas of Computer Science and Business Informatics.

8.4.2 The MMM Theory as a Lacuna of CS and BI Research

Hartmann and Frigg (2014) consider models and modeling to be one of the lacunas of modern research: “Models play an important role in science. But despite the fact that they have generated considerable interest among philosophers, there remain significant lacunas in our understanding of what models are and of how they work.” The book by Thalheim and Nissen (2015a) tries to close this gap on the basis of surveys of models, of approaches to modeling activities, and of modeling in various sciences (archeology, arts, biology, business informatics, chemistry, computer science, economics, electrotechnics, environmental sciences, farming, geosciences, historical sciences, languages, marine science, mathematics, medicine, ocean sciences, pedagogical science, philosophy, philology, physics, political sciences, sociology, and sports). An MMM theory is still one of the difficult research topics of Computer Science and Business Informatics. The development of a settled conception of models is the first step. The next step is the treatment of modeling activities and of modeling. Establishing an MMM culture seems to constitute the task of the next decade.

Bernhard Thalheim

Christian-Albrechts-Universität zu Kiel