1 History

Mathematical Fuzzy Logics [51, 60] have a long tradition with roots going back to the many-valued logics of Łukasiewicz, Gödel, and Kleene [57, 68, 73] and the Fuzzy Set Theory of Zadeh [111]. Their purpose is to model vagueness or imprecision in the real world, by introducing new degrees of truth as additional shades of gray between the Boolean true and false. For example, one can express the distinction between a person x having a high fever or a low fever as the degree of truth of the logical statement \(\mathsf {Fever} (x)\). One of the central properties of fuzzy logics is truth functionality—the truth degree of a complex logical formula is uniquely determined by the truth degrees of its subformulas. This is a fundamental difference from other quantitative logics like probabilistic or possibilistic logics [56, 83]. The semantics of fuzzy logics are thus given by functions interpreting the logical constructors conjunction, implication, and negation. For example, the truth degree of the conjunction \((\mathsf {Fever} \wedge \mathsf {Headache})(x)\) can be computed as a function of the degrees of \(\mathsf {Fever} (x)\) and \(\mathsf {Headache} (x)\). The functions proposed by Zadeh [111] are a popular choice because they lead to good computational properties. A different approach uses operators called triangular norms (t-norms) and their associated residua to interpret conjunction and implication [60, 69].

More recently, Description Logics (DLs) were developed as fragments of first-order logic, with a focus on their computational properties [3]. They use concepts (unary predicates, such as \(\mathsf {Fever} \) and \(\mathsf {Headache} \)) and roles (binary predicates like \(\mathsf {hasSymptom} \)) to describe knowledge about the world. For example, the description logic axiom

$$\begin{aligned} \exists \mathsf {hasDiagnosis}.\mathsf {Flu} \sqsubseteq \exists \mathsf {hasSymptom}.\mathsf {Fever} \sqcap \exists \mathsf {hasSymptom}.\mathsf {Headache} \end{aligned}$$
(1)

says that whenever patients are diagnosed with flu, they must have the symptoms fever and headache, i.e. fever and headache are necessary symptoms for a flu diagnosis. Different choices of concept constructors (such as conjunction \(\sqcap \) and existential restrictions \(\exists \)) can be used to tailor the description logic to the needs of a specific domain, ranging from lightweight to very expressive logics, for which highly optimized reasoning systems have nevertheless been developed. The first Fuzzy Description Logics (FDLs) were developed based on Zadeh’s fuzzy semantics [95, 106, 110], and classical DL algorithms were extended to deal with the additional expressivity provided by the truth degrees. Since then, a multitude of combinations of description logics with fuzzy semantics have been investigated; they have been the subject of several surveys and monographs [10, 24, 47, 74, 102], and many FDL reasoners have been implemented [1, 14, 23, 86, 107]. Returning to the example axiom (1), FDLs enable us to grade a flu diagnosis as mild or severe, based on the severity of its symptoms.

In this survey, we focus on a prototypical FDL based on the classical description logic \(\mathcal {ALC}\), and demonstrate the effects of different fuzzy semantics on the complexity of reasoning with general TBoxes. Section 2 introduces the basic syntax, semantics, and reasoning problems of the logic, and the subsequent sections deal with different kinds of semantics, sorted roughly by their complexity: Zadeh semantics and semantics based on finitely many degrees of truth (Sect. 3), semantics based on the Gödel t-norm (Sect. 4), and other t-norms such as the Łukasiewicz t-norm or the product t-norm (Sect. 5). We conclude with a discussion of related logics and reasoning problems, and some open problems.

2 The Prototypical Fuzzy Description Logic

As a prototypical FDL, we briefly introduce a generic fuzzy extension of \(\mathcal {ALC}\), called \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\), where \( {\textsf {L}}\) denotes an algebra specifying the fuzzy semantics, and \(\mathfrak {N}\) denotes the presence of an additional concept constructor \(\boxminus \) called residual negation. In the following sections, we instantiate this generic definition with different concrete semantics.

Truth degrees. We consider algebras of the form \({ {\textsf {L}}} =(\underline{{ {\textsf {L}}}},\mathrel {*} _{ {\textsf {L}}} ,\Rightarrow _{ {\textsf {L}}} )\) where \(\underline{{ {\textsf {L}}}}\) is a totally ordered set of truth degrees, including 0 (false) and 1 (true); \(\mathrel {*} _{ {\textsf {L}}} \) is a t-norm, i.e. an associative, commutative, and monotonic binary operator on \(\underline{{ {\textsf {L}}}}\) that has unit 1; and \(\Rightarrow _{ {\textsf {L}}} \) is a binary operator on \(\underline{{ {\textsf {L}}}}\) called the implication function. To simplify the notation, we usually denote the set \(\underline{{ {\textsf {L}}}}\) as \( {\textsf {L}}\). In most cases, \(\Rightarrow _{ {\textsf {L}}} \) is a residuum of \(\mathrel {*} _{ {\textsf {L}}} \) (formally defined in Sect. 4), and the associated residual negation is the function \(x\mapsto x\Rightarrow _{ {\textsf {L}}} 0\), for \(x\in { {\textsf {L}}} \). Another popular fuzzy negation function is the involutive negation defined by \(x\mapsto 1-x\). In the following, we consider a logic that uses both involutive and residual negation.
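
To make the role of these operators concrete, the following minimal Python sketch models such an algebra together with the residual and involutive negations; all names are illustrative and not taken from any FDL system, and the instantiation anticipates the Łukasiewicz operators formally introduced in Sect. 5.

```python
from dataclasses import dataclass
from typing import Callable

# A minimal, illustrative model of an algebra L = (L, *_L, =>_L) over [0,1].
@dataclass
class Algebra:
    tnorm: Callable[[float, float], float]    # *_L: associative, commutative, monotone, unit 1
    implies: Callable[[float, float], float]  # =>_L: the implication function (often a residuum)

    def residual_negation(self, x: float) -> float:
        return self.implies(x, 0.0)           # x => 0

def involutive_negation(x: float) -> float:
    return 1.0 - x                            # the involutive negation x -> 1 - x

# Example instantiation: the Lukasiewicz t-norm and its residuum (see Sect. 5).
LUK = Algebra(tnorm=lambda x, y: max(x + y - 1.0, 0.0),
              implies=lambda x, y: min(1.0 - x + y, 1.0))
```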

Syntax. Let \(\mathsf {N_I}\), \(\mathsf {N_C}\), and \(\mathsf {N_R}\) be mutually disjoint sets of individual-, concept-, and role names. Concepts of \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\) are defined by the following grammar rule, where \(A\in \mathsf {N_C} \) and \(r\in \mathsf {N_R} \):

$$\begin{aligned} C {:}{:}= A \mid \lnot C \mid \boxminus C \mid C\sqcap C \mid \exists r.C \mid \forall r.C \end{aligned}$$

A TBox is a finite set of general concept inclusions (GCIs) \(\langle C\sqsubseteq D \ge q\rangle \), where C and D are concepts and \(q\in [0,1]\). An ABox is a finite set of concept assertions \(\langle C(a)\bowtie q\rangle \) and role assertions \(\langle r(a,b)\bowtie q\rangle \) with \(a,b\in \mathsf {N_I} \), \(r\in \mathsf {N_R} \), C a concept, \({\bowtie }\in \{\le ,\ge \}\), and \(q\in [0,1]\). An ontology is composed of a TBox and an ABox. We refer to GCIs and assertions as axioms.
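
As a small illustration, a fuzzy counterpart of axiom (1) and a few assertions could be written down as plain data as follows; the tuple encoding, the degree 0.8, and the individual names are purely hypothetical and only fix a concrete representation for the sketches later in this section.

```python
# Concepts as nested tuples (purely illustrative encoding):
#   ('name', A), ('not', C), ('rnot', C), ('and', C, D), ('exists', r, C), ('forall', r, C)
FLU_GCI = (
    ('exists', 'hasDiagnosis', ('name', 'Flu')),                      # C
    ('and', ('exists', 'hasSymptom', ('name', 'Fever')),
            ('exists', 'hasSymptom', ('name', 'Headache'))),          # D
    0.8,                                                               # <C ⊑ D >= 0.8>
)
TBOX = [FLU_GCI]
ABOX = [('concept', ('name', 'Fever'), 'mary', '>=', 0.7),             # <Fever(mary) >= 0.7>
        ('role',    'hasSymptom', 'mary', 'f1', '>=', 1.0)]            # <hasSymptom(mary, f1) >= 1.0>
```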

Semantics. Interpretations \(\mathcal {I} =(\Delta ^\mathcal {I},\cdot ^\mathcal {I})\) consist of a non-empty set \(\Delta ^\mathcal {I} \), called domain, and an interpretation function \(\cdot ^\mathcal {I} \) that maps every \(a\in \mathsf {N_I} \) to an element \(a^\mathcal {I} \in \Delta ^\mathcal {I} \), every \(A\in \mathsf {N_C} \) to a function \(A^\mathcal {I}:\Delta ^\mathcal {I} \rightarrow { {\textsf {L}}} \), and every \(r\in \mathsf {N_R} \) to \(r^\mathcal {I}:\Delta ^\mathcal {I} \times \Delta ^\mathcal {I} \rightarrow { {\textsf {L}}} \). This function is extended to concepts C by similarly assigning them a truth function \(C^\mathcal {I}:\Delta ^\mathcal {I} \rightarrow { {\textsf {L}}} \) as follows. For every \(x\in \Delta ^\mathcal {I} \),

$$\begin{aligned} (\lnot C)^\mathcal {I} (x)&:= 1-C^\mathcal {I} (x); \\ (\boxminus C)^\mathcal {I} (x)&:= C^\mathcal {I} (x)\Rightarrow _{ {\textsf {L}}} 0; \\ (C\sqcap D)^\mathcal {I} (x)&:= C^\mathcal {I} (x)\mathrel {*} _{ {\textsf {L}}} D^\mathcal {I} (x); \\ (\exists r.C)^\mathcal {I} (x)&:= \sup _{y\in \Delta ^\mathcal {I}} r^\mathcal {I} (x,y)\mathrel {*} _{ {\textsf {L}}} C^\mathcal {I} (y); \text { and} \\ (\forall r.C)^\mathcal {I} (x)&:= \inf _{y\in \Delta ^\mathcal {I}} r^\mathcal {I} (x,y)\Rightarrow _{ {\textsf {L}}} C^\mathcal {I} (y). \end{aligned}$$

The interpretation \(\mathcal {I}\) satisfies (or is a model of)

  • the GCI \(\langle C\sqsubseteq D \ge q\rangle \) iff \(C^\mathcal {I} (x)\Rightarrow _{ {\textsf {L}}} D^\mathcal {I} (x)\ge q\) holds for all \(x\in \Delta ^\mathcal {I} \);

  • the concept assertion \(\langle C(a)\bowtie q\rangle \) iff \(C^\mathcal {I} (a^\mathcal {I})\bowtie q\);

  • the role assertion \(\langle r(a,b)\bowtie q\rangle \) iff \(r^\mathcal {I} (a^\mathcal {I},b^\mathcal {I})\bowtie q\); and

  • the ontology \(\mathcal {O}\) iff it satisfies all axioms in \(\mathcal {O}\).

We are interested in deciding consistency in \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\), i.e. whether a given ontology has a model.
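
To make the semantics tangible, here is a naive evaluator over a finite interpretation, so that the suprema and infima above become maxima and minima. It reuses the illustrative Algebra and tuple encodings from the earlier sketches; it is not an actual reasoning procedure, since consistency asks whether some model exists, but it shows how the truth functions compose.

```python
from typing import Dict, Tuple, List

Domain = List[str]
FuzzySet = Dict[str, float]                    # x -> degree in L
FuzzyRel = Dict[Tuple[str, str], float]        # (x, y) -> degree in L

def eval_concept(c, dom: Domain, A: Dict[str, FuzzySet], R: Dict[str, FuzzyRel], alg) -> FuzzySet:
    """Compute C^I: Domain -> L for a concept c in the tuple encoding above.
    Unmentioned concept/role memberships default to degree 0."""
    kind = c[0]
    if kind == 'name':
        return {x: A.get(c[1], {}).get(x, 0.0) for x in dom}
    if kind == 'not':                          # involutive negation
        s = eval_concept(c[1], dom, A, R, alg)
        return {x: 1.0 - s[x] for x in dom}
    if kind == 'rnot':                         # residual negation: C^I(x) => 0
        s = eval_concept(c[1], dom, A, R, alg)
        return {x: alg.implies(s[x], 0.0) for x in dom}
    if kind == 'and':
        l = eval_concept(c[1], dom, A, R, alg)
        r = eval_concept(c[2], dom, A, R, alg)
        return {x: alg.tnorm(l[x], r[x]) for x in dom}
    if kind == 'exists':                       # sup becomes max on a finite domain
        s = eval_concept(c[2], dom, A, R, alg)
        return {x: max(alg.tnorm(R.get(c[1], {}).get((x, y), 0.0), s[y]) for y in dom) for x in dom}
    if kind == 'forall':                       # inf becomes min on a finite domain
        s = eval_concept(c[2], dom, A, R, alg)
        return {x: min(alg.implies(R.get(c[1], {}).get((x, y), 0.0), s[y]) for y in dom) for x in dom}
    raise ValueError(f"unknown constructor {kind}")

def satisfies_gci(C, D, q, dom, A, R, alg) -> bool:
    """Check <C ⊑ D >= q>: (C^I(x) => D^I(x)) >= q for all x in the domain."""
    cI = eval_concept(C, dom, A, R, alg)
    dI = eval_concept(D, dom, A, R, alg)
    return all(alg.implies(cI[x], dI[x]) >= q for x in dom)
```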

Witnessed models. The semantics of the existential restriction \(\exists r.C\) computes the supremum over a potentially infinite set of truth degrees, which may lead to unwanted or unexpected results. For that reason, witnessed models have been introduced [61], in which this supremum—and, dually, the infimum from \(\forall r.C\)—is required to be reached by some domain element, becoming a maximum (or minimum, respectively). After having been introduced for FDLs, witnessed models were also studied in the context of fuzzy predicate logics [62,63,64]. Following the standard approach in FDLs, we henceforth implicitly restrict all models to be witnessed; this is particularly relevant for Sects. 4 and 5. For work that does not enforce this restriction, we refer the interested reader to [24, 29, 32, 37, 40, 48, 61].

In the following sections, we instantiate the logic \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\) with different semantics, represented by various choices for the algebra \( {\textsf {L}}\), and discuss the effect of these choices on the complexity of deciding (witnessed) consistency.

3 Zadeh and Finitely Valued Semantics

We start with the “traditional” FDL \(\mathsf {Z}\)-\(\mathfrak {N}\mathcal {ALC}\), where \(\mathsf {Z} =([0,1],\mathrel {*} _\mathsf {Z},\Rightarrow _\mathsf {Z})\) is given by the Gödel (or minimum) t-norm \(x\mathrel {*} _\mathsf {Z} y=\min \{x,y\}\) and the Kleene-Dienes implication \(x\Rightarrow _\mathsf {Z} y=\max \{1-x,y\}\) [55, 68]. These functions were initially proposed in the context of Fuzzy Set Theory [111] to model the intersection and inclusion of fuzzy sets. This semantics allows various simplifications in \(\mathsf {Z}\)-\(\mathfrak {N}\mathcal {ALC}\). For example, \(\lnot C\) is equivalent to \(\boxminus C\), and—as in classical logic—\(\forall r.C\) can be expressed as \(\lnot \exists r.\lnot C\). This is not always the case for other semantics. It was noticed early on that this choice of functions also results in an effectively finitely valued FDL, i.e. that the set of truth degrees can be restricted without loss of generality to be finite, as long as it contains 0, 0.5, and 1, and is closed under application of the involutive negation \(x\mapsto 1-x\) [97]. For this reason, we see it as a special case of finitely valued FDLs, where \( {\textsf {L}}\) is a fixed, finite set, which can be assumed to be of the form \(\{0,\frac{1}{n},\dots ,\frac{n-1}{n},1\}\), and \(\mathrel {*} _{ {\textsf {L}}} \) and \(\Rightarrow _{ {\textsf {L}}} \) are arbitrary functions as introduced in Sect. 2. In such logics, the restriction to witnessed models is not necessary anymore, since any supremum over a finite set of values is automatically a maximum. Reasoning approaches for Zadeh and finitely valued FDLs can be divided into three classes: tableaux-, crispification-, and automata-based algorithms.
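
Before turning to these algorithms, the following sketch (illustrative, not taken from [97]) makes the finiteness phenomenon concrete: closing the degrees occurring in an ontology, together with 0, 0.5, and 1, under the Zadeh connectives and the involutive negation yields only finitely many values.

```python
# Zadeh semantics: Goedel (minimum) t-norm with the Kleene-Dienes implication.
zadeh_tnorm   = lambda x, y: min(x, y)
zadeh_implies = lambda x, y: max(1.0 - x, y)

def reachable_degrees(degrees):
    """Close a finite set of degrees (plus 0, 0.5, 1) under the Zadeh connectives."""
    r = lambda v: round(v, 10)                        # guard against floating-point noise
    vals = {r(v) for v in degrees} | {0.0, 0.5, 1.0}
    while True:
        new = ({r(1.0 - v) for v in vals}
               | {r(zadeh_tnorm(a, b)) for a in vals for b in vals}
               | {r(zadeh_implies(a, b)) for a in vals for b in vals})
        if new <= vals:
            return sorted(vals)
        vals |= new

print(reachable_degrees({0.3, 0.8}))   # [0.0, 0.2, 0.3, 0.5, 0.7, 0.8, 1.0] -- finite
```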

Tableaux algorithms were developed as extensions of classical tableaux techniques for description logics. The basic idea is to iteratively apply tableaux rules to decompose complex assertions into simpler ones. For example, \({\langle (C\sqcap D)(a)\ge q\rangle }\) is split into \(\langle C(a)\ge q_1\rangle \) and \(\langle D(a)\ge q_2\rangle \), where \(q_1\) and \(q_2\) are nondeterministically chosen such that \(q=q_1\mathrel {*} _{ {\textsf {L}}} q_2\). For an assertion \(\langle (\exists r.C)(a)\ge q\rangle \) involving an existential restriction, a new individual x has to be introduced, together with assertions \(\langle r(a,x)\ge q_1\rangle \) and \(\langle C(x)\ge q_2\rangle \) as above. In general, there are many choices for \(q_1\) and \(q_2\), which result in a high degree of nondeterminism; however, for some semantics (e.g. Zadeh) the rules can be simplified. Since there are only finitely many possibilities for such assertions over a single individual, appropriate blocking conditions can ensure termination of these algorithms. Fuzzy tableaux algorithms have been developed, starting from the initial works in [95, 96, 106], for very expressive extensions of \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\) with more concept constructors [38, 43, 88, 89, 92, 93]. They usually do not provide tight complexity bounds for deciding consistency, but they have been successfully implemented in FDL reasoners like fuzzyDL [23] and FiRE [86].
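
The nondeterministic choice in the conjunction rule can be pictured as follows for a finite set of degrees; this is a toy sketch, not an actual tableau implementation. Under the Zadeh (minimum) semantics the rule collapses to the single choice \(q_1=q_2=q\).

```python
def conjunction_rule_choices(q, degrees, tnorm):
    """All ways to split <(C ⊓ D)(a) >= q> into <C(a) >= q1> and <D(a) >= q2> with q = q1 * q2."""
    return [(q1, q2) for q1 in degrees for q2 in degrees if tnorm(q1, q2) == q]

degrees = [0.0, 0.25, 0.5, 0.75, 1.0]
print(conjunction_rule_choices(0.5, degrees, min))
# [(0.5, 0.5), (0.5, 0.75), (0.5, 1.0), (0.75, 0.5), (1.0, 0.5)]
```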

Crispification algorithms are also among the first reasoning methods developed for finitely valued FDLs [97, 100]. The idea is to translate the fuzzy ontology into a classical DL ontology, and then use optimized algorithms for classical reasoning. In this translation, each concept name A is replaced by finitely many classical concept names \(A_{\ge q}\) that represent all those individuals that belong to A with a degree of at least q. The order structure on \( {\textsf {L}}\) is then expressed by GCIs \(A_{\ge q_2}\sqsubseteq A_{\ge q_1}\) where \(q_1< q_2\). Since the order has to be preserved also for role names, these reductions introduce so-called role inclusion axioms, which usually do not increase the complexity of reasoning. Finally, the original fuzzy axioms are recursively translated into classical axioms by using the new concept and role names. The ontology resulting from this translation is consistent in the classical sense iff the original fuzzy ontology is consistent. The first reductions for general finitely valued semantics incurred an exponential blowup [13, 15, 16, 20, 22, 75], which can however be avoided by preprocessing the ontology [35]; again, this problem does not occur when using the Zadeh semantics [11, 12, 97]. Based on these polynomial translations, it can be shown that deciding consistency in finitely valued FDLs has the same complexity as in the underlying classical DLs. In particular, consistency in \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\) with finite \( {\textsf {L}}\) is ExpTime-complete. It was pointed out recently [35] that some reductions [16, 20, 75] are incorrect for so-called number restrictions, which make it possible to restrict the number of r-successors of a particular type; unfortunately, no alternative reduction has been found so far. The crispification approach has been implemented in the FDL reasoner DeLorean [14].
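
A minimal sketch of the core of this translation, assuming a fixed finite set of degrees (the naming scheme A_ge_q is hypothetical): one classical concept name per positive threshold, and one order GCI per pair of adjacent thresholds, so that this part of the reduction stays linear in the number of degrees.

```python
def crispify_concept_name(A, degrees):
    """Generate the classical names A_{>=q} and the order GCIs A_{>=q'} ⊑ A_{>=q} for q < q'."""
    thresholds = sorted(d for d in set(degrees) if d > 0)         # A_{>=0} is trivially everything
    names = [f"{A}_ge_{q}" for q in thresholds]
    order_gcis = [(f"{A}_ge_{hi}", f"{A}_ge_{lo}")                # (sub-concept, super-concept)
                  for lo, hi in zip(thresholds, thresholds[1:])]  # adjacent pairs suffice (transitivity)
    return names, order_gcis

names, gcis = crispify_concept_name("Fever", [0, 0.25, 0.5, 0.75, 1])
print(gcis)   # [('Fever_ge_0.5', 'Fever_ge_0.25'), ('Fever_ge_0.75', 'Fever_ge_0.5'), ('Fever_ge_1', 'Fever_ge_0.75')]
```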

Automata-based algorithms generalize similar techniques from classical DLs. The basic idea behind these algorithms is to use tree automata to decide the existence of a forest-shaped model of the ontology [36, 41, 45]. In some cases, e.g. in the presence of nominals (concepts that can refer to specific individual names), a forest-shaped model does not need to exist, but the automata-based techniques can be adapted [25]. One disadvantage of this approach is that it cannot handle ABoxes naturally. In fact, a pre-completion step, which is based on the tableau rules mentioned before, is necessary to ensure correctness of the algorithms [38]. On the other hand, automata-based algorithms are useful for finding tight complexity bounds. In particular, using the notions from [4], one can show that consistency in \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\) is PSpace-complete when restricted to so-called acyclic TBoxes [41].

In summary, reasoning in finitely valued FDLs (including those using Zadeh semantics) is usually decidable and has the same complexity as in the underlying classical DLs; moreover, efficient implementations of reasoning algorithms are available. The only known exceptions are FDLs that are less expressive than \( {\textsf {L}}\)-\(\mathfrak {N}\mathcal {ALC}\), where additional truth degrees can actually increase the complexity of reasoning, e.g. from P to ExpTime in \(\mathcal {E\!L}\)  [26].

4 Gödel Semantics

Hájek initiated a systematic investigation of FDLs with t-norm-based semantics in [61], founded on his work on Mathematical Fuzzy Logic [60, 65]. Such semantics use the infinite set of truth degrees [0, 1] and t-norms \(\mathrel {*} _{ {\textsf {L}}} \) as introduced in Sect. 2. However, as the implication function they use an associated residuum \(\Rightarrow _{ {\textsf {L}}} \) that satisfies, for all \(x,y,z\in [0,1]\), the equivalence \(x\mathrel {*} _{ {\textsf {L}}} z\le y\) iff \(z\le x\Rightarrow _{ {\textsf {L}}} y\). We consider here only continuous t-norms \(\mathrel {*} _{ {\textsf {L}}} \), which ensures that the residuum is unique [69]. It has long been known [82] that every continuous t-norm can be constructed, using so-called ordinal sums, from three fundamental t-norms: the Gödel t-norm (which is also used in the Zadeh semantics), the Łukasiewicz t-norm, and the product t-norm. Without going into the details of this construction, we focus here on these fundamental t-norms themselves; note, however, that many results for their combinations have been obtained in the literature.
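
For a continuous (in fact, any left-continuous) t-norm, the residuum mentioned above can equivalently be written as

$$\begin{aligned} x\Rightarrow _{ {\textsf {L}}} y = \sup \{z\in [0,1] \mid x\mathrel {*} _{ {\textsf {L}}} z\le y\}, \end{aligned}$$

i.e. it is the largest degree whose combination with x still lies below y. Instantiating this with the Gödel t-norm yields the residuum given below; the residua of the Łukasiewicz and product t-norms are given in Sect. 5.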

In this section, we consider the first of the three fundamental t-norms. As mentioned earlier, the Gödel semantics \(\mathsf {G} =([0,1],\mathrel {*} _\mathsf {G},\Rightarrow _\mathsf {G})\) differs from the Zadeh semantics only by using as implication function the Gödel residuum, which is defined, for all \(x,y\in [0,1]\), as

$$\begin{aligned} x\Rightarrow _\mathsf {G} y = {\left\{ \begin{array}{ll} 1 &{} \text {if}\ x\le y, \\ y &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

The first work to deal with Gödel semantics in FDLs was [13]. There, the authors attempted to replicate the ideas successfully employed for the Zadeh semantics, and to show that reasoning can be restricted to a finite subset of truth degrees. Unfortunately, this claim turned out to be incorrect. The main reason lies in the fact that the constructors \(\boxminus \) and \(\forall \), which require the Gödel residuum in their interpretation, can be used to enforce the existence of truth degrees that are strictly greater than others, yet arbitrarily close to them.

Using this insight, it was later shown in [31] that \(\mathsf {G}\)-\(\mathfrak {N}\mathcal {ALC}\) does not have the finite model property (FMP). That is, there exist consistent \(\mathsf {G}\)-\(\mathfrak {N}\mathcal {ALC}\) ontologies that have only infinite models. Since classical \(\mathcal {ALC}\) has the FMP, a direct consequence of this result is that it is impossible to apply the crispification approach from finitely valued FDLs to the Gödel semantics. This result was strengthened further by showing that some ontologies can only be satisfied by models that use infinitely many different truth degrees [31].

Interestingly, it turns out that satisfiability of a \(\mathsf {G}\)-\(\mathfrak {N}\mathcal {ALC}\) ontology does not depend on the precise truth degrees used by an interpretation, but rather on their relative order. Moreover, only finitely many such orderings are relevant. Hence, a new crispification approach was developed in [44], where the classical concepts represent orderings between the degrees given to different fuzzy concepts. Building on this idea, a tableaux-based method, which decomposes complex concepts into simpler (ordered) concepts, was developed in [46]. From these algorithms, it was possible to obtain complexity bounds for reasoning in FDLs under Gödel semantics that mostly match those of reasoning in their classical variants. In particular, deciding consistency in \(\mathsf {G}\)-\(\mathfrak {N}\mathcal {ALC}\) is again ExpTime-complete.

5 Łukasiewicz and Product Semantics

We now consider the remaining two of the three fundamental continuous t-norms. Following the formalization of [60], the product semantics is given by the algebra \(\mathsf {\Pi } =([0,1],\mathrel {*} _\mathsf {\Pi },\Rightarrow _\mathsf {\Pi })\), with \(x\mathrel {*} _\mathsf {\Pi } y=x\cdot y\) and

$$\begin{aligned} x\Rightarrow _\mathsf {\Pi } y = {\left\{ \begin{array}{ll} 1 &{} \text {if}\ x\le y, \\ \frac{y}{x} &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

The Łukasiewicz semantics is given by \(\textsf {\L } =([0,1],\mathrel {*} _\textsf {\L } ,\Rightarrow _\textsf {\L } )\), where

$$\begin{aligned} x\mathrel {*} _\textsf {\L } y&= \max \{x+y-1,0\} \text { and} \\ x\Rightarrow _\textsf {\L } y&= \min \{1-x+y,1\}. \end{aligned}$$
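
The residuation property from Sect. 4 can be checked mechanically for both algebras. The following sketch (illustrative definitions, following the formulas above) does so exhaustively on a grid of rational degrees; exact Fractions avoid floating-point artefacts.

```python
from fractions import Fraction
from itertools import product as cartesian

# Product and Lukasiewicz algebras (illustrative definitions following the formulas above).
prod_tnorm   = lambda x, y: x * y
prod_implies = lambda x, y: Fraction(1) if x <= y else y / x

luk_tnorm   = lambda x, y: max(x + y - 1, Fraction(0))
luk_implies = lambda x, y: min(1 - x + y, Fraction(1))

# Exhaustive check of  x * z <= y  iff  z <= (x => y)  on the grid {0, 1/20, ..., 1}.
grid = [Fraction(i, 20) for i in range(21)]
for tnorm, implies in [(prod_tnorm, prod_implies), (luk_tnorm, luk_implies)]:
    assert all((tnorm(x, z) <= y) == (z <= implies(x, y))
               for x, y, z in cartesian(grid, repeat=3))
print("residuation holds on the grid")
```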

After t-norm-based semantics were proposed for FDLs in [61], many tableaux algorithms were developed for these logics [17, 18, 59, 87, 103, 104]. Most of them are based on a novel combination of traditional tableaux algorithms with mixed integer programming solvers. The former decompose complex assertions into smaller ones, while generating a set of constraints; the latter are then used to find a solution for these constraints. In a simple example, \({\langle (C\sqcap D)(a)\ge 0.5\rangle }\) is decomposed into \(\langle C(a)\ge x\rangle \) and \(\langle D(a)\ge y\rangle \), and the constraint \(x\cdot y\ge 0.5\) is added to the set of inequations (under product semantics). In the end, an external solver computes a solution to these inequations, e.g. \(\{x\mapsto 0.8, y\mapsto 0.7\}\). Unfortunately, as observed in [5, 9], these logics also lack the FMP and, consequently, none of these algorithms can decide consistency in the presence of GCIs. In fact, these algorithms are correct only for so-called unfoldable TBoxes (similar to acyclic TBoxes). A detailed discussion of the reasons for restricting the expressivity of the ontologies can be found in [2]. Different algorithms, tailored towards FDLs using the product and Łukasiewicz t-norms in the absence of GCIs, were used to show decidability in [1, 47, 48, 61].
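
Purely as an illustration of the constraint-solving step (real systems encode such constraints for mixed integer programming solvers rather than searching), a naive grid search already finds a solution for the toy constraint from the example above; function and variable names are hypothetical.

```python
def solve_by_grid_search(constraints, grid=None):
    """Return the first grid point satisfying all constraints, or None (illustration only)."""
    grid = grid or [i / 20 for i in range(21)]          # 0, 0.05, ..., 1
    for x in grid:
        for y in grid:
            if all(c(x, y) for c in constraints):
                return {'x': x, 'y': y}
    return None

constraints = [lambda x, y: x * y >= 0.5]               # from decomposing <(C ⊓ D)(a) >= 0.5>
print(solve_by_grid_search(constraints))                # {'x': 0.5, 'y': 1.0}; {x: 0.8, y: 0.7} also works
```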

Surprisingly, many such FDLs were subsequently shown to have an undecidable consistency problem, first for product semantics [5,6,7], and then for Łukasiewicz semantics [49]. These results are based on reductions from the Post Correspondence Problem. They were later generalized to cover a variety of combinations of t-norms and concept constructors [33, 39], including both \(\mathsf {\Pi }\)-\(\mathfrak {N}\mathcal {ALC}\) and \(\textsf {\L } \)-\(\mathfrak {N}\mathcal {ALC}\), even if all axioms are restricted to be crisp, i.e. of the form \(\langle \alpha \ge 1\rangle \). At the same time, it was discovered that these results are quite sensitive to the choice of concept constructors and axioms allowed. For example, consistency in \(\mathsf {\Pi }\)-\(\mathfrak {N}\mathcal {ALC}\) becomes decidable if the constructor \(\lnot \) and assertions of the form \(\langle \alpha \le q\rangle \) are disallowed. Indeed, it was shown that, to decide consistency in this restricted logic, it suffices to consider classical interpretations, which use only the truth values 0 and 1; effectively, the logic cannot even be considered to be fuzzy [30, 33]. On the other hand, consistency in \(\textsf {\L } \)-\(\mathfrak {N}\mathcal {ALC}\) remains undecidable even without \(\lnot \) and \(\forall \), and with only crisp axioms [39]. Very recently, it was discovered that even \(\textsf {\L } \)-\(\mathcal {E\!L}\), which extends a logic with polynomial complexity in the classical case, has an undecidable consistency problem [27, 28]; however, no such result is known for the variant \(\mathsf {\Pi }\)-\(\mathcal {E\!L}\) with product semantics. For a detailed discussion of the border between decidability and undecidability in t-norm-based FDLs, we refer the reader to [33].

6 Related Notions

Much research effort has been devoted to extending FDLs towards even more expressive languages. For example, more complex assertions like \(\langle \mathsf {Tall} (a)>\mathsf {Tall} (b)\rangle \) make it possible to compare fuzzy degrees between different individuals a and b. Usually, such extensions do not affect the complexity of consistency [31, 44,45,46]. Going one step further, one can also allow comparisons inside concepts, like \(\mathsf {Tall} >\forall \mathsf {friend}.\mathsf {Tall} \), representing the set of all people that are taller than all their friends. These latter extensions have so far been studied only for the Zadeh semantics [67, 72].

The papers [21, 70, 109] propose aggregation operators that generalize concept conjunction and allow expressing, for example, weighted sums of truth degrees, like a \(\frac{1}{2}\mathsf {Comfortable} +\frac{1}{2}\mathsf {Cheap} \) hotel. Due to their generality, however, one has to be careful not to obtain an undecidable logic, e.g. by restricting attention to unfoldable TBoxes. Similarly, one can replace the quantifiers in FDLs by more general functions [85], or introduce fuzzy modifiers like \(\mathsf {very}\)  [66, 112] that can scale and transform the interpretations of concepts, e.g. \(\mathsf {very\,Tall}\).

A different direction for generalization is to allow the truth degrees to be only partially ordered. In particular, extensions of description logics with lattice-based semantics have been studied in [24, 25, 36, 41, 43, 58, 100]. If the underlying lattice is finite, then methods similar to those described in Sects. 3 and 4 can be used to show tight complexity bounds or provide reductions to classical reasoning. Other (infinite) semantics remain largely unexplored, but they are expected to be undecidable in most cases.

The standard approach to integrate datatypes into description logics is to include concrete domains, which provide access to new concrete predicates whose interpretation is fixed, e.g. the total order on the natural numbers. In [80], the classical description logic \(\mathcal {ALC}\) is extended with the use of fuzzy concrete domains, which provide built-in fuzzy predicates. In the context of FDLs, fuzzy concrete domains have also been studied in [18, 98], but are usually restricted to unary concrete predicates.

A different reasoning task often considered in description logics is to answer conjunctive queries (CQs) over an ontology \(\mathcal {O}\). More precisely, one is interested in retrieving all the individuals that satisfy some given properties in every model of \(\mathcal {O}\). In FDLs this problem becomes more involved, since individuals may satisfy the query to some intermediate truth degree. This has motivated different approaches and solutions to CQ answering over fuzzy ontologies, for example by crispification or adaptation of classical CQ answering techniques such as query rewriting [35, 75, 78, 84, 99].

A lot of research has also been done on fuzzy extensions of less expressive description logics like \(\mathcal {E\!L}\)  [8, 26, 42, 76, 90], \(\mathcal {FL}_0\)  [34], and rule-based languages [77, 94, 108]. Since the semantics of the standard Web Ontology Language OWL 2 is based on classical description logics, several proposals have been made for a fuzzy extension of OWL 2 in order to make FDLs more accessible, and fuzzy plug-ins for ontology editors have been developed [19, 88, 91].

We have focused here mainly on the theoretical aspects of FDLs and in particular on the complexity of reasoning in ontologies built with these languages. The importance of this study is highlighted by the applications that have been considered, which include medicine [81], information retrieval [79, 101], recommendation [50, 53], and detection [52, 54]. Another important aspect is the problem of constructing such ontologies in the first place. Approaches for learning FDL axioms from data have been suggested in [71, 105].

7 Conclusions

As can be seen from this survey, FDLs are a very active research topic. Particularly during the current decade, much effort has been made to understand the computational properties of this family of logics. As a result, the computational complexity of deciding consistency of FDL ontologies and other related reasoning tasks is mostly known. The most notable exceptions are perhaps the cases with restricted expressivity—either by limiting the concept constructors as in \(\mathcal {FL}_0\) or DL-Lite, or by constraining the available axioms as for acyclic TBoxes—with general t-norm semantics.

Apart from the identified undecidable logics, there are several candidate languages that can be used to effectively represent and reason about imprecise knowledge. Interestingly, these cases are still expressive enough for the needs of some existing applications, and mostly retain the same computational complexity as their underlying classical formalisms. However, the development of efficient, scalable reasoners is still ongoing. Future work should focus on the development and implementation of specialized optimizations. These reasoners can then be used to further promote the use of FDLs in practical applications.