1 Introduction

This article provides an introduction to the Festschrift that has been put together on the occasion of Franz Baader’s 60th birthday to celebrate his fundamental and highly influential scientific contributions. We start with a brief and personal overview of Franz’s career, listing some important collaborators, places, and scientific milestones, and then provide first-person accounts of how each one of us came into contact with Franz and how we benefitted from his collaboration and mentoring. Our selection is not intended to be complete; it is in fact strongly biased by our own personal experience and preferences. Many of Franz’s contributions that we had to leave out are discussed in later chapters of this volume.

Franz was born in 1959 in Spalt, Germany, a small village known for its hop growing and its interesting Bavarian/Franconian accent—which seems to manifest especially after the consumption of hopped beverages. After high school and military service he studied computer science (Informatik) in nearby Erlangen. This included programming with punch cards and the use of the vi editor. Rumour has it that Franz still enjoys using some of this technology today. He continued with his Ph.D. on unification and rewriting, under the supervision of Klaus Leeb, an unconventional academic who clearly strengthened Franz’s capacity for independent research and critical thought. As a Ph.D. student, Franz also worked as a teaching assistant, with unusually high levels of responsibility.

In 1989, Franz completed his Ph.D. thesis Unifikation und Reduktionssysteme für Halbgruppenvarietäten and moved to the German Research Center for Artificial Intelligence (DFKI) in Saarbrücken as a post-doctoral researcher. It was there that he encountered description logic—then known as “terminological knowledge representation systems” or “concept languages”—met collaborators such as Bernhard Nebel, Enrico Franconi, Philipp Hanschke, Bernhard Hollunder, Werner Nutt, Jörg Siekmann, and Gerd Smolka, and added another dimension to his multi-faceted research profile.

After four years at DFKI, in 1993, Franz successfully applied for his first professorship at RWTH Aachen and shortly thereafter for his first DFG project on “Combination of special deduction procedures”, which allowed him to hire his first externally funded Ph.D. student, Jörn Richts. Within a year of working in Aachen, he had assembled his first research group, consisting of four Ph.D. students: Can Adam Albayrak, Jörn Richts, and Uli Sattler, together with Diego Calvanese, who was visiting for a year from Rome. This group continued to grow substantially over the next decade, supported by various other DFG and EU-funded research projects as well as a DFG research training group.

In 2002, Franz applied for the Chair of Automata Theory at TU Dresden. Unsurprisingly, his sterling research track record made him the front runner for this post. As a consequence, a group of impressive size moved from Aachen to Dresden, including Franz and his family of three, Sebastian Brandt, Jan Hladik, Carsten Lutz, Anni-Yasmin Turhan, and Uli Sattler (Fig. 1).

Fig. 1. Most of the people who moved with Franz to Dresden, from left to right: Carsten, Anni, Franz, Uli, Sebastian; Jan took the photo (2001).

They all settled quickly in the beautiful city on the river Elbe, only to experience a millennial flood in their first year. The bottom foot and a half of Franz’s new family home in Dresden was flooded—but the damage was fortunately manageable thanks to the house’s unusual design and the family’s quick thinking in moving everything to the house’s higher level. In the following years, Franz continued to grow his group even further, attracting many more students and grants, including, most notably, the QuantLA research training group on Quantitative Logics and Automata, which he founded and has led from its beginnings to today. He also became an ECCAI Fellow in 2004 and was Dean of the Faculty of Computer Science at TU Dresden from 2012 to 2015.

Throughout his career, Franz has supervised 26 Ph.D. students, five of whom successfully went on to receive their habilitation. He co-authored a textbook on Term Rewriting and one on Description Logic, and co-edited the Description Logic Handbook, all of which have become standard references in their respective fields. At the time of this writing, according to Google Scholar, his publications have been cited more than 29,000 times. With more than 11,000 citations, the Description Logic Handbook [BCM+03] is his most cited work. The Term Rewriting textbook [BN98] is second with more than 3,000 citations, while his research paper on tractable extensions of the description logic \(\mathcal {EL}\) [BBL05] takes an impressive third place with more than 1,000 citations. All this provides an excellent example of the high impact that Franz’s work has had across several research areas.

2 Contributions

The following subsections provide first-person accounts of how each one of us came into contact with Franz and ended up enjoying a fruitful collaboration that has spanned many years. Nerdy as we are, we proceed in the order of our earliest joint paper with him.

2.1 Uli Sattler: Classification and Subsumption in Expressive Description Logics

In 1993, Franz started his first professorship at RWTH Aachen University, where I joined his young research group in 1994 as one of his first Ph.D. students. I had never heard of description logics but relied on recommendations from former colleagues of Franz from Erlangen who assured me that Franz was a rising star and would make a great Ph.D. supervisor. My first task was to catch up on the already huge body of work that various people, including Franz, had established around description logics.

In the early 90s, Franz worked with Bernhard Hollunder in Saarbrücken on Kris [BH91], a terminological knowledge representation system that implemented classification, realisation, and retrieval for extensions of \(\mathcal {ALCN}\) (e.g., with feature (dis-)agreement) with respect to acyclic TBoxes and ABoxes. This was a rather brave endeavour at the time, especially after the then recent (1989) and surprising result by Manfred Schmidt-Schauß that subsumption in KL-ONE is undecidable, and accumulating evidence (by Bernhard Nebel, Klaus Schild, and others) that reasoning in all description logics becomes intractable once general TBoxes are included. The more common reaction to these insights was to severely restrict the expressive power or to move to even more expressive, undecidable description logics. Franz and Bernhard, however, went for a decidable yet intractable logic, where

[...] the price one has to pay is that the worst case complexity of the algorithms is worse than NP. But it is not clear whether the behaviour for “typical” knowledge bases is also that bad. [BH91]

The reasoner implemented in Kris was based on a “completion algorithm” developed by Schmidt-Schauß, Smolka, Nutt, and Hollunder, which would later be called tableau-based. In [BHN+92], the authors introduce a first Enhanced Traversal method: a crucial optimisation that reduces the number of subsumption tests during classification from \(n^2\) to \(n \log n\) and has been used and further enhanced in all tableau-based reasoners for expressive DLs. Another highly relevant optimisation first employed in Kris is lazy unfolding, which enables early clash detection; again, both techniques have been successfully employed and refined in other reasoners.

Franz also developed significant extensions of these first tableau-based algorithms, which required the design of novel, sophisticated techniques: for example, [Baa91a] describes a tableau algorithm for \(\mathcal {ALC}\) extended with regular expressions on roles. This required not only a suitable cycle detection mechanism (now known as blocking) but also a distinction between good and bad cycles. Moreover, in this line of work, internalisation of TBoxes was first described, a technique that can be used to reduce reasoning problems w.r.t. a general TBox to pure concept reasoning and that turned out to be a powerful tool for assessing the computational complexity of a description logic.
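To give a flavour of internalisation (in one common formulation, which may differ in detail from the original presentation in [Baa91a]): a concept \(C\) is satisfiable w.r.t. a general TBox \(\mathcal {T} = \{C_1 \sqsubseteq D_1, \ldots , C_n \sqsubseteq D_n\}\) if and only if the single concept
\[ C \;\sqcap\; \forall (r_1 \cup \cdots \cup r_k)^{*}.\bigl((\lnot C_1 \sqcup D_1) \sqcap \cdots \sqcap (\lnot C_n \sqcup D_n)\bigr) \]
is satisfiable, where \(r_1, \ldots , r_k\) are the roles occurring in \(C\) and \(\mathcal {T}\). The value restriction along the reflexive-transitive closure of all roles forces the axioms of \(\mathcal {T}\) to hold at every element reachable from an instance of \(C\), which suffices because \(\mathcal {ALC}\)-style logics cannot distinguish a model from its connected part.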

Another significant extension relates to qualifying number restrictions [HB91]: together with Bernhard Hollunder, Franz discovered the yo-yo problem and solved it by introducing explicit (in)equalities between individuals into the completion system, thus avoiding non-termination. They also introduced the first choose rule to avoid the incorrectness caused by some tricky behaviour of qualifying number restrictions.

I have mentioned these technical contributions here to illustrate the kind of research that was going on at the time, and the many significant contributions Franz was already making at this early stage of his career. In addition, I want to point out the ways in which Franz influenced the description logic community, its methodologies, and its value system: as mentioned above, he was an early advocate of understanding computational complexity beyond the usual worst case. Moreover, he has always been an amazing explainer and campaigner. He spent a lot of energy on discussions with colleagues and students about the trio of soundness, completeness, and termination—and why it matters in description logic reasoners and related knowledge representation systems. And he developed very clear proof techniques to show that a subsumption or satisfiability algorithm is indeed sound, complete, and terminating. More generally, we appreciate Franz as a strong supporter of clarity (in proofs, definitions, descriptions, etc.) and as somebody who quickly recognises the murky “then a miracle occurs” part in a proof or finds an elegant way to improve a definition. On the occasion of his 60th birthday, I would like to say “Happy birthday, Franz, and thank you for the clarity!”.

2.2 Cesare Tinelli: Unification Theory, Term Rewriting and Combination of Decision Procedures

It is easy to make the case for the significance of Franz’s body of work and its long-lasting impact in several areas of knowledge representation and automated reasoning. Given my expertise, I could comment on the importance of his work in (term) unification theory, where he has produced several results [Baa89, Baa91b, BS92, Baa93, BS95, BN96, Baa98, BM10, BBBM16, BBM16] and written authoritative compendia [BS94, BS98c, BS01] on the topic. I could talk about his contributions to term rewriting, which include both research advances [Baa97] and the publication of a widely used textbook on “term rewriting and all that” [BN98]. I could say how his interest in the general problem of combining formal systems has led him to produce a large number of results on the combination of decision procedures or solvers for various problems [BS92, BS95, Baa97, BS98b, BT02a, BT02b, BGT04, BG05] and to create a conference focused on combination, FroCoS [BS96], which is now at its 12th edition besides being one of the founding member conferences of IJCAR, the biennial International Joint Conference on Automated Reasoning. These topics are covered by the contributions in this volume by Peter Baumgartner and Uwe Waldmann, Maria Paola Bonacina et al., Veena Ravishankar et al., Christophe Ringeissen, Manfred Schmidt-Schauß, and Yoni Zohar et al. So, instead, I prefer to focus on more personal anecdotes which nevertheless illustrate why we are celebrating the man and his work with this volume.

At the start of my Ph.D. studies in the early 1990s I became interested in constraint logic programming and automated deduction. I was attracted early on by the problem of combining specialised decision procedures modularly and integrating them into general-purpose proof procedures. However, I found the foundational literature on general-purpose theorem proving and related areas such as term rewriting somewhat unappealing because of what I thought was an excessive reliance on syntactic methods for proving correctness properties of the various proof calculi and systems. This was in contrast with much of the foundational work in (constraint) logic programming, which was based on elegant and more intuitive algebraic and model-theoretic arguments. I also struggled to understand the literature on the combination of decision procedures, which I found wanting in clarity and precision.

This was the background when, while searching for related work, I came across a paper by a certain Franz Baader and Klaus Schulz on combining unification procedures for disjoint equational theories [BS92]. The paper presented a new combination method that, in contrast to previous ones, could be used to combine both decision procedures for unification and procedures for computing complete sets of unifiers. The method neatly extended a previous one by Manfred Schmidt-Schauß and was the start of a series of combination results by Franz and Klaus of increasing generality and formal elegance [BS92, BS95, BS98b]. This line of work was also significant for often relying on algebraic arguments to prove the main theoretical results, for instance by exploiting the fact that certain free models of an equational theory are canonical for unification problems, or that computing unifiers in a combined theory can be reduced to solving equations in a model of the theory that is a specific amalgamation of the free models of the component theories.

Those papers, together with their associated technical reports, which included more details and full proofs, had a great impact on me. They showed how one could push the state of the art in automated reasoning with new theoretical results based on solid mathematical foundations while keeping a keen eye on practical implementation concerns. They were remarkable examples of how to write extremely rigorous theoretical material that was nonetheless quite understandable because the authors had clearly put great care into: explicitly highlighting the technical contribution and relating it to previous work; explaining the intuitive functioning of the new method; formatting the description of the method so that it was pleasing to the eye and easy to follow; explaining how the method, described at an abstract level, could be instantiated to concrete and efficient implementations; and providing extensive proofs of the theoretical results that clearly explained all the intermediate steps.

Based on the early example of [BS92], I set out to write a modern treatment of the well-known combination procedure by Nelson and Oppen [NO79] along the lines of Franz’s paper while trying to achieve similar levels of quality. Once I had made enough progress, I contacted Franz by email, telling him about my attempts and asking for advice on how to address some challenges in my correctness proof. To my surprise and delight, he promptly replied to this email from an unknown Ph.D. student at the University of Illinois, and went on to provide his advice over the course of a long email exchange. I wrote the paper mostly as an exercise, as a way for me to understand the Nelson-Oppen method and explain it well to other novices like me. When I finished it and sent it to Franz for feedback, he encouraged me to submit it to the very first edition of a new workshop on combining systems he had started with Klaus Schulz, FroCoS 1996. Not only was the paper accepted [TH96], it also became a widely cited reference in the field that later came to be known as Satisfiability Modulo Theories, or SMT. As several people told me in person, the popularity of that paper is in large part due to its clarity and precision, both in the description of the method and in the correctness proof, again something that I took from Franz’s papers.
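To give a flavour of the combination idea at play here (a toy illustration with made-up constraints, not an example taken from [NO79] or [TH96]): consider checking the satisfiability of \(x + y = z \wedge f(z) \ne f(x + y)\) over the combination of linear arithmetic and uninterpreted functions. Purification introduces a fresh variable \(w\) for the arithmetic term \(x + y\) occurring under \(f\), yielding the arithmetic part \(x + y = z \wedge w = x + y\) and the equational part \(f(z) \ne f(w)\). The arithmetic procedure detects the entailed equality \(z = w\) between shared variables and propagates it to the other procedure, which then derives a contradiction with \(f(z) \ne f(w)\). Hence the combined formula is unsatisfiable, even though each pure part is satisfiable on its own.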

After we met in person at FroCoS 1996, Franz proposed that we work on combination problems together, an opportunity I immediately accepted. That was the start of a long-lasting collaboration on the combination of decision procedures for the word problem [BT97, BT99, BT00, BT02b] and other more general problems [BT02a, BGT06]. That collaboration gave me the opportunity to appreciate Franz’s vast knowledge and prodigious intellect. More importantly, it also gave me precious insights into how to develop abstract formal frameworks to describe automated reasoning methods, with the goal of understanding and proving their properties. It taught me how to develop soundness, completeness, and termination arguments and turn them into reader-friendly mathematical proofs. I learned from him how to constantly keep the reader in mind when writing a technical paper, for instance by using consistent and intuitive notation, defining everything precisely while avoiding verbosity, using redundancy judiciously to remind the reader of crucial points or notions, and so on.

While I eventually learned a lot from other outstanding researchers and collaborators as well, my early exposure to Franz’s work and my collaboration with him have profoundly affected the way I do research and write technical papers. I have actively tried over the years to pass on to my students and junior collaborators the principles and the deep appreciation of good, meticulous writing that I have learned from Franz.

Thank you, Franz, for being a collaborator and a model. It has been an honour and a pleasure. Happy 60th birthday, with many more to follow!

2.3 Carsten Lutz: Concrete Domains and the \(\mathcal {EL}\) Family

When I was a student of computer science at the University of Hamburg, I became interested in the topic of description logics and decided that I would like to do a Ph.D. in that area. At the time, I was particularly fascinated by concrete domains, the extension of DLs with concrete qualities such as numbers and strings as well as operations on them, as in \(\mathsf {Professor} \sqcap \exists \mathsf {age}. {=}_{60}\). Franz was the definitive authority on DLs: he seemed to have written at least half of all important papers and, what was especially spectacular for me, this guy had actually invented concrete domains (together with Hanschke [BHs91]). I was thus very happy when I was accepted as a Ph.D. student in his group at RWTH Aachen. Under Franz’s supervision, I continued to study concrete domains and eventually wrote my Ph.D. thesis on the subject. I learned a lot during that time and I feel that I have especially benefitted from Franz’s uncompromising formal rigor and from his ability to identify interesting research problems and to ask the right questions (even if, many times, I had no answer). Concrete domains are a good example. He identified the integration of concrete qualities into DLs as the important question that it is and came up with a formalization that was completely to the point and has never been questioned since. In fact, essentially the same setup has since been used in other areas such as XML, constraint LTL, and data words; it would be interesting to reconsider concrete domains today from the perspective of the substantial developments in those areas. Over the years, Franz has continued to make interesting contributions to concrete domains, for example by adding aggregation (with Sattler [BS98a]) and, rather recently, by bringing into the picture uncertainty in the form of probability distributions over numerical values (with Koopmann and Turhan [BKT17]).
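To illustrate the kind of modelling this enables (an example in the spirit of, though not necessarily verbatim from, [BHs91]): with a concrete domain providing comparison predicates over the natural numbers, features such as \(\mathsf {age}\), and feature chains such as \(\mathsf {father}\,\mathsf {age}\) (the age of the father), one can write \(\mathsf {Human} \sqcap \exists (\mathsf {age}, \mathsf {father}\,\mathsf {age}).{<}\) to describe humans who are younger than their fathers, where \({<}\) is the binary order predicate of the concrete domain.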

Another great line of research that Franz has pursued, and that I had the pleasure to be involved in, concerns lightweight DLs, in particular those of the \(\mathcal {EL}\) family. In the early 2000s, there was a strong trend towards identifying more and more expressive DLs that would still be decidable. However, Franz also always maintained an interest in DLs with limited expressive power and better complexity of reasoning. The traditional family of inexpressive DLs was the \(\mathcal {FL}\) family, but there even very basic reasoning problems are coNP-complete. In 2003, Franz wrote two IJCAI papers in which he considered \(\mathcal {EL}\), which was unusual at the time, showing (among other things) that subsumption can be decided in polynomial time even in the presence of terminological cycles [Baa03a, Baa03b]. A bit later, this positive result was extended to general concept inclusions (GCIs) in joint work with Brandt and myself [Bra04, BBL05]. What followed was a success story. In joint work with Meng Suntisrivaraporn, we implemented the first \(\mathcal {EL}\) reasoner, called Cel, which demonstrated that reasoning in \(\mathcal {EL}\) is not only possible in polynomial time, but also very efficient and robust in practice. Many other reasoners have followed, the most prominent one today being Elk. We also explored the limits of polynomial time reasoning in the \(\mathcal {EL}\) family [BLB08], and this resulted in a member of the \(\mathcal {EL}\) family of DLs being standardized as a profile of the W3C’s OWL 2 ontology language. Nowadays, \(\mathcal {EL}\) is one of the best-established families of DLs, widely used in many applications and also studied in several chapters of this volume, including the ones by Marcelo Finger, by Rafael Peñaloza, by Loris Bozzato et al. and by Ana Ozaki et al. Already in our initial work on \(\mathcal {EL}\) with GCIs, we invented a particular kind of polynomial time reasoning procedure, the one that was also implemented in Cel. This type of procedure is now known as consequence-based reasoning and has found applications far beyond the \(\mathcal {EL}\) family of DLs. In fact, a survey on 15 years of consequence-based reasoning is presented in this volume in the chapter by David Tena Cucala, Bernardo Cuenca Grau and Ian Horrocks. It was a tremendous pleasure and privilege to have been involved in all this, together with you, Franz, and building on your prior work. Happy birthday!
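As a toy illustration of this style of reasoning (with hypothetical concept and role names, and a simplified rule set rather than the exact calculus of [BBL05]): from the GCIs \(\mathsf {Prof} \sqsubseteq \exists \mathsf {teaches}.\mathsf {Course}\), \(\mathsf {Course} \sqsubseteq \mathsf {Event}\), and \(\exists \mathsf {teaches}.\mathsf {Event} \sqsubseteq \mathsf {Teacher}\), a consequence-based procedure first derives \(\mathsf {Prof} \sqsubseteq \exists \mathsf {teaches}.\mathsf {Event}\) by propagating the second GCI under the existential restriction, and then \(\mathsf {Prof} \sqsubseteq \mathsf {Teacher}\) by applying the third. All derived subsumptions are of such simple forms over the polynomially many concepts occurring in the input, which is essentially what keeps the procedure polynomial.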

2.4 Frank Wolter: Modal, Temporal, and Action Logics

I first met Franz in the summer of 1997 at ESSLLI in Aix-en-Provence, where I (jointly with Michael Zakharyaschev) organised a workshop on combining logics and where, I believe, Franz gave a course introducing description logics. I am not entirely sure about the course, as I very clearly recall our conversations about description logics but not at all any description logic lectures. At that point, after having failed to sell modal logic to mathematicians, I was looking for new ways of applying modal logic in computing and/or AI. And there the applications were right in front of me! As Franz quickly explained, description logic is nothing but modal logic, but much more relevant and with many exciting new applications and open problems. As long as one did not try to axiomatize description logics, there would be a huge interest in the description logic community in using techniques from modal logic and also in combining modal and description logics. So that is what I did over the next 22 years. The most obvious way to do description logic as a modal logician was to carefully read the papers on modal description logics that Franz et al. had just published [BL95, BO95], ask Franz what he regarded as interesting problems that were left open, and try to solve as many of them as possible (fortunately, Franz has the amazing ability to pose many more open problems than one could ever hope to solve). But Franz did not only pose open problems! He himself continued to work on temporal description logics with Ghilardi and Lutz [BGL12] and, more recently, with Stefan Borgwardt and Marcel Lippmann on combining temporal and description logics to design temporal query languages for monitoring purposes [BL14, BBL15], a topic on which Franz gave an invited keynote address at the Vienna Summer of Logic in 2014.
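The correspondence mentioned above is worth spelling out (in the well-known formulation usually attributed to Schild, not a new observation of ours): concept names play the role of propositional variables, \(\exists r.C\) corresponds to the modal formula \(\Diamond _r C\), and \(\forall r.C\) to \(\Box _r C\), so that, for instance, the \(\mathcal {ALC}\) concept \(A \sqcap \exists r.\lnot B\) is nothing but the multimodal \(\mathbf {K}\) formula \(p \wedge \Diamond _r \lnot q\), and concept satisfiability coincides with modal satisfiability.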

Other exciting collaborations developed over the years: when working together on combining description logics (with themselves) [BLSW02], I learned a lot from Franz’s work on combining equational theories and on combining computational systems in general. Working together on the connection between tableaux and automata [BHLW03] was an excellent opportunity to let Franz explain to me what a tree automaton actually is. We only briefly worked together on extending description logics by action formalisms [BLM+05], but later Franz, together with Gerhard Lakemeyer and many others, developed an amazing theory combining GOLOG and description logics; details of this collaboration are given in the article “Situation Calculus Meets Description Logics” by Jens Claßen, Gerhard Lakemeyer, and Benjamin Zarrieß in this volume. So, Franz, many thanks for both the problems and the solutions. It has been a great pleasure to work with you over so many years. Happy Birthday!

2.5 Anni-Yasmin Turhan: Non-standard Inferences in Description Logics

When I studied computer science at the University of Hamburg, a project on knowledge representation sparked my interest in description logics. I found the formal properties, the simplicity, and the elegance of these logics immediately appealing. As I soon noticed, most of the fundamental research results on description logics had been achieved by Franz and his collaborators, and so, after completing my studies, I was keen to join Franz’s group in Aachen to start my Ph.D. studies. There I started to work on his research project on non-standard inferences in description logics, together with Sebastian Brandt.

Non-standard inferences are a collection of reasoning services for description logic knowledge bases that complement the traditional reasoning problems such as subsumption or satisfiability. The idea is that they assist users in developing, maintaining and integrating knowledge bases. In order to build and augment knowledge bases, inferences that generalize knowledge can be important. Together with Ralf Küsters and Ralf Molitor, Franz had investigated the most specific concept, which generalizes knowledge about an object into a concept description, and the least common subsumer, which generalizes a set of concepts into a single one. Together, these two inferences give rise to example-driven learning of new concepts. Their initial results were achieved for \(\mathcal {EL}\) concepts without a general TBox [BKM99]. At that time it was rather non-standard to work on inexpressive, lightweight description logics, as many research efforts were dedicated to satisfiability of highly expressive logics. The overall approach to generalization is to ensure the instance-of or the subsumption relationship of the resulting concept by an embedding into the normalized input. This method was explored by us in several settings [BST07]. Franz also lifted this to general TBoxes [Baa03a, Baa03b], which then led to the famed polynomial-time reasoning algorithms for \(\mathcal {EL}\). These are based on canonical models and simulations that are widely used today and have, in turn, fueled further research on new non-standard inferences such as conservative extensions and the computation of modules, which were investigated in great detail by Carsten Lutz and Frank Wolter.
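A small example of the generalization computed by the least common subsumer (with hypothetical concept and role names, and no TBox): the least common subsumer of the \(\mathcal {EL}\) concepts \(\mathsf {Parent} \sqcap \exists \mathsf {hasChild}.\mathsf {Doctor}\) and \(\mathsf {Parent} \sqcap \exists \mathsf {hasChild}.\mathsf {Professor}\) is \(\mathsf {Parent} \sqcap \exists \mathsf {hasChild}.\top \): it subsumes both inputs and is subsumed by every other common subsumer, and it can be read off the product of the description trees of the two input concepts.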

Besides generalization, Franz also introduced and investigated other non-standard inferences that compute “leaner” representations of concepts. One instantiation of this idea is to compute syntactically minimal but equivalent rewritings of concepts, and another is to compute “(upper) approximations” of concepts written in an expressive description logic in a less expressive one [BK06]. Franz combined his research interests and great expertise in unification and knowledge representation in a strand of work on matching in description logics, an inference mainly used to detect redundancies in knowledge bases. Here Franz made many contributions for sub-Boolean description logics—in recent years predominantly in collaboration with Morawska and Borgwardt [BM10, BBM12, BBM16]. Franz’s many contributions on versatile non-standard inferences demonstrate that his research topics are, in the overwhelming majority, driven by a clear motivation that is often drawn from practical applications. Furthermore, they oftentimes establish connections between several sub-areas of knowledge representation and theoretical computer science.
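To illustrate matching (again with made-up names): given the concept pattern \(\mathsf {Parent} \sqcap \exists \mathsf {hasChild}.X\), where \(X\) is a concept variable, and the concept \(\mathsf {Parent} \sqcap \exists \mathsf {hasChild}.(\mathsf {Doctor} \sqcap \mathsf {Rich})\), the substitution that maps \(X\) to \(\mathsf {Doctor} \sqcap \mathsf {Rich}\) is a matcher, since applying it to the pattern yields a concept equivalent to the given one; finding such matchers is what allows one to detect that a concept is a redundant instance of a more general pattern.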

Once a knowledge base is built, explaining and removing unwanted consequences might become necessary as well. This can be done by computing justifications, i.e., the minimal sets of axioms that are “responsible” for the consequence. This inference was and still is intensively investigated by Franz and his group—especially in collaboration with Rafael Peñaloza. They have mapped out the connection between computing justifications and weighted automata and are studying gentle, i.e., more fine-grained, repairs that detect the responsible parts of axioms [BKNP18]. Rafael tells the full story in “Explaining Axiom Pinpointing” in this volume. Their contributions on justifications are also fruitfully applied in other areas of knowledge representation, such as inconsistency-tolerant reasoning or nonmonotonic reasoning. This is underlined by Gerhard Brewka and Markus Ulbricht in their article on “Strong Explanations for Nonmonotonic Reasoning” and also by Vinícius Bitencourt Matos et al. in their article on “Pseudo-contractions as gentle repairs” presented in this volume.
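A minimal example of a justification (with hypothetical axioms): if a TBox contains \(A \sqsubseteq B\), \(B \sqsubseteq C\), and \(D \sqsubseteq E\), then the unwanted consequence \(A \sqsubseteq C\) has the justification \(\{A \sqsubseteq B,\; B \sqsubseteq C\}\): removing either of these two axioms breaks the entailment, while the third axiom plays no role in it. A gentle repair would go further and weaken, rather than remove, one of the responsible axioms.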

So, what started out as non-standard inferences in description logics—I seem to remember that Franz even coined that term—has become a well-established part of the wider research field. This is certainly due to Franz’s ability to identify clear motivation for research questions, his passion for clear explanations and his relentless pursuit of excellence. It has been truly fascinating for me to see this versatile research area grow and evolve over the many years that I have worked with him.

Franz, I thank you for the many opportunities I was given for example-driven learning, and for the gentle explanations. Happy birthday!

3 Final Words

Although this article and this volume are by no means a complete overview, we hope that the reader will gain some insight into the remarkable breadth and depth of the research contributions that Franz Baader has made in the last 30+ years. What is more, he has achieved all this while keeping up his favourite pastimes such as skiing and cycling, while being a proud and loving father to his three children—and without ever cutting off his ponytail (see Fig. 2).

Fig. 2. Franz in a typical pose, giving a clear explanation of a technically complex point.

We hope that Franz will enjoy reading about our views of his research record and our experience in working with him, as well as the many articles in this Franzschrift. We thank him for his advice, guidance, and friendship, and wish him—again and all together now

a very happy birthday and many happy returns!