
Perhaps the simplest form of constructing a statement with two constituent parts is joining them by using the linguistic connectives and and or, as done, for instance, with the elemental parts “x is P” and “y is Q” for obtaining either “x is P and y is Q”, or “x is P or y is Q”, with P and Q acting, respectively, in the universes X and Y. This shows that, initially, the particle-words and and or operate in the product universe X × Y (X carrying P, and Y carrying Q) and, hence, that their behavior/action should be searched for in it.

In plain language, and contrary to artificial languages, these particles are not “logical constants”, but are endowed with a meaning that should be specified in each context; the meanings of “P and Q” and “P or Q” depend not only on the meanings of P and Q, but also on those of and and or. A first question is, thus, how to capture the meanings of “P and Q” and “P or Q”.

4.1. How can the action of and be captured? In the classical setting, in which the parts joined by and are precise and hence specified by subsets P and Q of the respective universes of discourse X and Y, the “new” predicative word “P and Q” := P & Q = P · Q, in X × Y, is understood as “x is P and y is Q” \( \Leftrightarrow \) “(x, y) is P & Q”, and is specified in X × Y by the Cartesian product P × Q of the subsets P and Q.

In general, once the qualitative meanings of P in X and Q in Y are known, it should hold that if x 1 is less P than x 2 and y 1 is less Q than y 2, then (x 1, y 1) is less P & Q than (x 2, y 2), shortened by

$$ {<_{P}} \times {<_{Q}}\, \subseteq \,{<_{P \& Q}} , $$

This shows that < P&Q is not empty provided the Cartesian product < P × < Q is not empty and, consequently, that P & Q is then measurable. If it can be accepted that “(x, y) is P & Q” implies “x is P” and also “y is Q”, then both would be equivalent, that is, < P&Q  = < P × < Q , as happens in the precise case.

Concerning the specification of measures m P&Q once measures m P and m Q are specified, it should be noticed that a priori nothing guarantees the existence of a two-variable function A: [0, 1] × [0, 1] → [0, 1] (an and-function), such that m P&Q  = A o (m P × m Q ), that is, that the measure of P & Q is decomposable, or functionally expressible, by means of the measures of P and Q. In the case of nondecomposability, the only thing that can be done is to check directly that m P&Q verifies, in the graph (X × Y, < P&Q ), the three axioms of a measure, something not often easy to do. In addition, proving that a given measure m P&Q is nondecomposable can only be done directly, by finding at least two pairs (x, y) and (x′, y′) with the same values of m P and m Q but different values of m P&Q . It is for reasons of this kind that decomposable measures are usually preferred in praxis; they save the designer from such checking, even if he or she cannot forget to consciously assume that decomposability is but a hypothesis.
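For illustration only, the following sketch shows, on an invented finite example, what the decomposability hypothesis amounts to: a measure m P&Q is decomposable when pairs sharing the same values of m P and m Q always receive the same value of m P&Q. All names and figures below are hypothetical.

```python
# Toy illustration of decomposability (all data hypothetical).
from itertools import product

X = ["x1", "x2", "x3"]
Y = ["y1", "y2"]

m_P = {"x1": 0.2, "x2": 0.7, "x3": 0.7}   # an assumed measure of P on X
m_Q = {"y1": 0.4, "y2": 1.0}              # an assumed measure of Q on Y

def decomposable(m_PQ):
    """True if m_PQ(x, y) only depends on the pair (m_P(x), m_Q(y))."""
    seen = {}
    for x, y in product(X, Y):
        key = (m_P[x], m_Q[y])
        if key in seen and seen[key] != m_PQ[(x, y)]:
            return False   # same component values, different joint value
        seen[key] = m_PQ[(x, y)]
    return True

# Decomposable: m_{P&Q} = A o (m_P x m_Q) with the and-function A = min.
m_and = {(x, y): min(m_P[x], m_Q[y]) for x, y in product(X, Y)}
print(decomposable(m_and))     # True

# Nondecomposable: (x2, y1) and (x3, y1) share (m_P, m_Q) = (0.7, 0.4)
# but are given different values.
m_bad = dict(m_and)
m_bad[("x3", "y1")] = 0.1
print(decomposable(m_bad))     # False
```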

Hence finding those functions A is relevant for the praxis and, in principle, provided it were < P × < Q  = < P&Q , the designer should verify the following properties.

  1. (1)

    Because (x 1, y 1) < P&Q (x 2, y 2) is equivalent to x 1 < P x 2 and y 1 < Q y 2, it is m P (x 1) ≤ m P (x 2) and m Q (y 1) ≤ m Q (y 2). Hence a function A nondecreasing in its two variables suffices to have

    $$ A\left( m_{P} (x_{1}),\; m_{Q} (y_{1}) \right) \le A\left( m_{P} (x_{2}),\; m_{Q} (y_{2}) \right) \Leftrightarrow m_{P\& Q} (x_{1},\, y_{1}) \le m_{P\& Q} (x_{2},\, y_{2}); $$

    that is, under this presumption, m P&Q verifies the first axiom of a measure.

  2. (2)

    Provided that (x, y) being maximal for < P&Q implies that x is maximal for < P or y is maximal for < Q , and that A (1, b) = A (a, 1) = 1 for all a, b in [0, 1], then m P&Q (x, y) = A (1, m Q (y)) (or A (m P (x), 1)) = 1. Notice that if both maximal characters coincide, the property A (1, 1) = 1 suffices.

  3. (3)

    Provided that (x, y) being minimal for < P&Q implies that x is minimal for < P or y is minimal for < Q , and that A (0, b) = A (a, 0) = 0 for all a, b in [0, 1], then m P&Q (x, y) = 0. If both minimal characters coincide, the property A (0, 0) = 0 suffices.

Consequently, under good enough conditions, a function A nondecreasing in each variable, taking the value 1 at (1, 1) and 0 at (0, 0), suffices to count with the measure m P&Q  = A o (m P × m Q ) for the meaning of P & Q.
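As a rough numerical companion to these conditions, the sketch below checks, on a finite grid of [0, 1], whether a candidate two-variable function is nondecreasing in each argument and takes the required boundary values; the candidate functions listed are merely common illustrative choices, not prescriptions.

```python
# Grid check of the minimal and-function conditions stated above:
# nondecreasing in each variable, A(1, 1) = 1, A(0, 0) = 0.

GRID = [i / 10 for i in range(11)]

def is_and_function(A):
    if A(1.0, 1.0) != 1.0 or A(0.0, 0.0) != 0.0:
        return False
    for x1 in GRID:
        for x2 in GRID:
            if x1 > x2:
                continue
            for z in GRID:
                # monotonicity in each argument separately
                if A(x1, z) > A(x2, z) or A(z, x1) > A(z, x2):
                    return False
    return True

candidates = {
    "min": min,
    "product": lambda a, b: a * b,
    "Lukasiewicz": lambda a, b: max(0.0, a + b - 1.0),
    "arithmetic mean": lambda a, b: (a + b) / 2,
}
for name, A in candidates.items():
    print(name, is_and_function(A))   # all four candidates pass these conditions
```

Note that the arithmetic mean also passes these minimal conditions; it is the further algebraic properties discussed next (associativity, commutativity, idempotency) that separate the candidates.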

An analogous reasoning can be made for representing a more complex statement such as “(x is P and y is Q) and z is R”, with m (P&Q)&R  = A 2 (A 1 (m P × m Q ), m R ), for A 1 corresponding to P & Q, and A 2 to (P & Q) & R; in praxis, A 1 and A 2 are usually considered to be coincidental, but this cannot always be presumed. For instance, in a large statement with several appearances of and, it should be previously checked whether all of them can be represented, or not, by the same function A.

Note that one such function is A (x, y) = min (x, y), a commutative and associative operation easily extended to n variables, for instance, A (x, y, z) = min (x, y, z), without taking care of the order in which x, y, z are grouped. This is not always possible with a nonassociative two-variable function A, for instance, the arithmetical mean (x + y)/2 which, nevertheless, is commutative, unlike the weighted mean (3x + 2y)/5.
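A quick numeric illustration of this remark, with arbitrary values and the three functions just mentioned:

```python
# min is associative, so its n-ary extension is grouping-independent; the
# arithmetic mean is commutative but not associative; the weighted mean
# (3x + 2y)/5 is not even commutative.  Values are arbitrary.

mean = lambda a, b: (a + b) / 2
wmean = lambda a, b: (3 * a + 2 * b) / 5

x, y, z = 0.2, 0.6, 0.9

print(min(min(x, y), z) == min(x, min(y, z)))                        # True
print(round(mean(mean(x, y), z), 3), round(mean(x, mean(y, z)), 3))  # 0.65 0.475
print(round(wmean(x, y), 3), round(wmean(y, x), 3))                  # 0.36 0.44
```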

A remark is still in order for the case P = Q, “(x, y) is P & P”, in which, in principle, if “x is P and y is P”, then “x is P” and also “y is P”, but without knowing whether P & P coincides with P or not; in the first case, the function A should verify A (x, x) = x for all x in [0, 1], and not only for x = 0 and x = 1, but in the second case functions A with, in general, A (x, x) ≠ x are required. Notice that presuming A (x, x) = x implies presuming that the meaning of P & P coincides with that of P, something that, clear with the precise meanings of mathematical terms, is not so clear with the imprecise terms of plain language. For instance, if we can accept that “7 is prime and 7 is prime”, shortened as “7 is prime and prime”, means nothing else than “7 is prime”, it is not so with “Richard is crazy and crazy”, because in this kind of plain-language statement the and seems to be used to reinforce the adjective towards, perhaps, the meaning of “very crazy”.

Thus, functions A should be selected according to the properties that can be assumed for the actually used linguistic and. For instance, selecting a commutative A means accepting that P & Q = Q & P, accepting an associative A corresponds to accepting (P & Q) & R = P & (Q & R), and accepting an idempotent one (A (x, x) = x) is to accept P & P = P; that is, the selection of A cannot be done blindly without checking, in each case, whether its algebraic properties are effectively verified in what is linguistically expressed and tried to be represented by A. Otherwise, such properties would be imposed on language, an artificial supposition that can affect the solution of the problem under consideration. Selecting A is a matter of design that needs to take into account the reality of the considered piece of language. The same properties cannot be presumed and assigned to the language of mathematics and to the language describing something in the world. For instance, if what the statement “7 is prime and less than 11” describes clearly leads it to commute, what the statement “He started crying, and entered the room” describes does not immediately lead to applying the commutative law to its possible symbolic representations; doing so implies a risk the designer should be, at least, able to evaluate. For instance, in the statements “He is Italian and tall, and he started crying and entered the room”, and “He is Italian and tall, and he entered the room and started crying”, the several “and” particles cannot, in principle, be supposed to keep the same properties; different and-functions should be selected for representing them.
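The following sketch tests, on a grid and for a few illustrative candidates, the three algebraic properties just discussed; it is only a numerical aid for the design decision, not a substitute for inspecting how the linguistic and is actually used.

```python
# Grid tests of commutativity, associativity and idempotency of a candidate
# and-function; candidates are illustrative only.

GRID = [i / 10 for i in range(11)]
EPS = 1e-9

def commutative(A):
    return all(abs(A(x, y) - A(y, x)) < EPS for x in GRID for y in GRID)

def associative(A):
    return all(abs(A(A(x, y), z) - A(x, A(y, z))) < EPS
               for x in GRID for y in GRID for z in GRID)

def idempotent(A):
    return all(abs(A(x, x) - x) < EPS for x in GRID)

for name, A in {"min": min,
                "product": lambda a, b: a * b,
                "arithmetic mean": lambda a, b: (a + b) / 2}.items():
    print(name, commutative(A), associative(A), idempotent(A))
# min             True True  True
# product         True True  False  (associative but not idempotent: P & P != P)
# arithmetic mean True False True   (idempotent but not associative)
```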

4.2. What about the linguistic or? That is, how can the meaning of “P or Q” (in symbols, P + Q), acting through the elemental statements “x is P or y is Q”, shortened by “(x, y) is P + Q”, be captured and symbolically represented?

In this case what can be presumed is just (< P ∪ < Q ) ⊆ < P+Q , which can be viewed as coming from “if x is P, then x is P or Q”, whatever Q may be, and allows us to state that P + Q has a qualitative meaning provided at least one of P or Q has a qualitative meaning, that is, provided one of < P or < Q is not empty; P + Q is measurable provided P or Q is measurable.

It cannot always be presumed that P + P coincides with P, as it does in the classical case with precise words represented by sets, where P ∪ P = P. For instance, if the “or” is exclusive, that is, if P + Q is understood as “either P, or Q”, which can be interpreted as “P or Q, but not both”, then P + P can mean nothing. It happens, for instance, in the classical crisp case, where (P ∪ P) ∩ (P ∩ P)c = P ∩ Pc = Ø. Of course, (P + P) · (P · P)′ should be interpreted in each case, and it can be measurable or meaningless because, even provided P were measurable and P + P were not meaningless, (P · P)′ could be so. As has been said, the designer of a representation of P + Q or P · Q should be able to check how, in the used piece of language and for the current problem, and and or are used.

As in the case of P · Q, to find a measure m P + Q decomposable with respect to m P and m Q , and once supposed that (< P ∪ < Q ) coincides with < P + Q , an or-function O: [0, 1] × [0, 1] → [0, 1] such that m P + Q  = O o (m P × m Q ) should be found. For it,

  1. (1)

    Because “x 1 is less P than x 2” or “y 1 is less Q than y 2” is equivalent to “(x 1, y 1) is less P + Q than (x 2, y 2)”, then

    $$ m_{P} (x_{1}) \le m_{P} (x_{2})\;{\text{or}}\;m_{Q} (y_{1}) \le m_{Q} (y_{2})\;{\text{implies}}\;m_{P + Q} (x_{1},\, y_{1}) \le m_{P + Q} (x_{2},\, y_{2}). $$

    Provided O is nondecreasing in both variables, it would follow that m P + Q (x, y) = O (m P (x), m Q (y)) verifies the first axiom of a measure for the meaning of P + Q.

  2. (2)

    Provided the maximal character of (x, y) for < P + Q were to coincide with x being maximal for < P and y maximal for < Q , then m P + Q (x, y) = O (1, 1) and, hence, provided the function O verified O (1, 1) = 1, it would follow that m P + Q (x, y) = 1. Provided only one of x or y were maximal for its respective qualitative meaning, then it should be O (1, b) = 1, or O (a, 1) = 1, for all a, b in [0, 1]; that is, 1 should be absorbing for the function O.

  3. (3)

    Analogously, with the preserving character of minimality for (x, y), it follows that m P + Q (x, y) = O (0, 0) and, hence, provided O (0, 0) = 0, it would follow that m P + Q (x, y) = 0. Provided only one of x or y were minimal for its respective qualitative meaning, then it should be O (0, b) = 0, or O (a, 0) = 0.

Consequently, supposing that O is nondecreasing in both variables and verifies O (0, 0) = 0 and O (1, 1) = 1, and under good conditions for maximals and minimals, m P + Q  = O o (m P × m Q ) is a measure for P + Q.
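The same kind of rough grid check used for and-functions can be applied to candidate or-functions; the three candidates below are common illustrative choices, and the absorbing condition is only needed in the situation described in point (2).

```python
# Grid check of or-function conditions: nondecreasing in each variable,
# O(0, 0) = 0, O(1, 1) = 1, and (when required) 1 absorbing: O(1, y) = O(x, 1) = 1.

GRID = [i / 10 for i in range(11)]
EPS = 1e-9

def is_or_function(O):
    bounds = abs(O(0.0, 0.0)) < EPS and abs(O(1.0, 1.0) - 1.0) < EPS
    monotone = all(O(x1, z) <= O(x2, z) + EPS and O(z, x1) <= O(z, x2) + EPS
                   for x1 in GRID for x2 in GRID if x1 <= x2 for z in GRID)
    absorbing = all(abs(O(1.0, y) - 1.0) < EPS and abs(O(y, 1.0) - 1.0) < EPS
                    for y in GRID)
    return bounds and monotone and absorbing

for name, O in {"max": max,
                "probabilistic sum": lambda a, b: a + b - a * b,
                "bounded sum": lambda a, b: min(1.0, a + b)}.items():
    print(name, is_or_function(O))   # each of the three candidates passes
```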

Note that or-functions O show, under the above conditions, the same axioms as and-functions A, and hence in praxis they should be distinguished by some additional property, inasmuch as they could otherwise imply the identity P + Q = P · Q which, at least if P ≠ Q, sounds very strange. In the classical crisp case, it corresponds with the identity P ∪ Q = P ∩ Q, equivalent to P = Q because P ⊆ P ∪ Q = P ∩ Q ⊆ Q, and Q ⊆ P ∪ Q = P ∩ Q ⊆ P; an equivalence that, nevertheless, when P or Q are not precise, is not so clear in general.

To obtain such a distinction between A and O, let’s assume that the words P and Q verify

$$ P \cdot Q\;{\text{implies}}\;P + Q, $$

leading to m P · Q  ≤ m P + Q , and hence to the coherence condition:

$$ A \le O,\;{\text{that}}\;{\text{is}}\;A\, (x,\,y )\le O\, (x,\,y ),\;{\text{for}}\;{\text{all}}\;x,\;y\;{\text{in}}\;[0,\, 1], $$

with the equality only acceptable in a pathological case. This shows that if A = min, it should be min ≤ O with O ≠ min, for instance, O = max; and that if O = max, then A ≤ max with A ≠ max, such as A = product. In principle, there is an enormous number of and-functions and or-functions able, respectively, to represent the measures of the used linguistic conjunction and disjunction; that is, they can show very different full meanings.
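A numerical counterpart of this coherence condition, checked on a grid for a few illustrative pairings:

```python
# Grid check of the coherence condition A <= O between a candidate and-function
# and a candidate or-function; pairs violating it should not be used together.

GRID = [i / 20 for i in range(21)]

def coherent(A, O):
    return all(A(x, y) <= O(x, y) + 1e-12 for x in GRID for y in GRID)

prod = lambda a, b: a * b
print(coherent(prod, max))   # True : product <= max everywhere
print(coherent(min, max))    # True : min <= max, equal only when x = y
print(coherent(max, min))    # False: an "or" below an "and" violates coherence
```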

4.3. Let’s now consider the case in which P and Q are known by means of their corresponding designed membership functions μ P and μ Q , and what one tries to obtain directly are the membership functions μ P · Q and μ P + Q for, respectively, the linguistic labels “P and Q” and “P or Q”. Suppose an and-function A and an or-function O exist (A ≤ O), with which μ P · Q (x, y) = A (μ P (x), μ Q (y)) and μ P + Q (x, y) = O (μ P (x), μ Q (y)), for all x in X and y in Y. Then,

  • The commutative character of the conjunction and the disjunction, P · Q = Q · P, P + Q = Q + P, requires A (x, y) = A (y, x), and O (x, y) = O (y, x), for all x, y in [0, 1]; that is, A and O should be commutative operations.

  • The idempotent characters, P · P = P, P + P = P, require A (x, x) = x and O (x, x) = x for all x in [0, 1]; that is, A and O should be idempotent operations.

  • The associative characters, P · (Q · R) = (P · Q) · R, P + (Q + R) = (P + Q) + R, require that A and O are associative operations, that is, verify A (x, A (y, z)) = A (A(x, y), z), O (x, O (y, z)) = O (O(x, y), z), for all x, y, z in [0, 1].

  • The distributive characters, P · (Q + R) = (P · Q) + (P · R), P + (Q · R) = (P + Q) · (P + R), require, respectively, the properties A (x, O (y, z)) = O (A (x, y), A (x, z)), and O (x, A (y, z)) = A (O (x, y), O (x, z)), for all x, y, z in [0, 1]; that is, A is distributive over O, and O over A. It is the case, for instance, if A = min and O = max, but not if A = prod and O = max (where the second identity fails), as the sketch below illustrates.
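A sketch of those two distributivity checks on a finite grid; the function names are illustrative, and the tolerance only guards against floating-point noise.

```python
# Check A(x, O(y, z)) = O(A(x, y), A(x, z)) and O(x, A(y, z)) = A(O(x, y), O(x, z))
# on a grid, for (min, max) and (product, max).

GRID = [i / 10 for i in range(11)]
EPS = 1e-12

def A_over_O(A, O):
    return all(abs(A(x, O(y, z)) - O(A(x, y), A(x, z))) < EPS
               for x in GRID for y in GRID for z in GRID)

def O_over_A(A, O):
    return all(abs(O(x, A(y, z)) - A(O(x, y), O(x, z))) < EPS
               for x in GRID for y in GRID for z in GRID)

prod = lambda a, b: a * b
print(A_over_O(min, max), O_over_A(min, max))    # True True : both laws hold
print(A_over_O(prod, max), O_over_A(prod, max))  # True False: the second one fails
```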

And so on; for instance, if P + P′ is such that its measure is one, m P + P′ (x) = O (m P (x), m P′ (x)) = 1, then, provided the measure of P′ were expressible by a negation function N, it would be O (m P (x), N (m P (x))) = 1, giving the possibility of finding O and N by solving the numerical functional equation

$$ O\left( {a,\,N\,(a)} \right) = 1,\;{\text{for}}\;{\text{all}}\;a\;{\text{in}}\;[0,\; 1], $$

and provided it were possible by benefiting from the properties O and N can have. Analogously, if P · P′ has measure zero, m P · P′ (x) = A (m P (x), N (m P (x))) = 0 [*], then A and N can eventually be found through solving the numerical functional equation

$$ A\left( {a,\;N\, (a )} \right) = 0\;{\text{for}}\;{\text{all}}\;a\;{\text{in}}\;[0,\; 1 ]. $$

That is, in praxis, and-functions, or-functions, and negation functions can eventually be found through solving some functional equations; for such a goal, the continuity of such functions is a good help, as illustrated further on, apart from the usual necessity that the resulting functions be continuous as a counterpart of the predicates’ flexibility. In addition, and for instance, solutions for the case in which P · P a (with P a an antonym of P) has measure zero also follow from solving the last equation: inasmuch as m P a (x) ≤ m P′ (x), it follows from [*] that A (m P (x), m P (s (x))) = 0.
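As a purely numerical illustration (with the standard negation N(a) = 1 − a assumed), the Łukasiewicz pair solves both functional equations on a grid, while the pair (max, min) solves neither except at the endpoints:

```python
# Check O(a, N(a)) = 1 and A(a, N(a)) = 0 on a grid, with N(a) = 1 - a.
# The Lukasiewicz or/and pair solves both; (max, min) does not.

GRID = [i / 10 for i in range(11)]
EPS = 1e-9

N = lambda a: 1.0 - a
luka_or = lambda a, b: min(1.0, a + b)           # bounded sum
luka_and = lambda a, b: max(0.0, a + b - 1.0)    # bounded difference

print(all(abs(luka_or(a, N(a)) - 1.0) < EPS for a in GRID))   # True
print(all(abs(luka_and(a, N(a))) < EPS for a in GRID))        # True
print(all(abs(max(a, N(a)) - 1.0) < EPS for a in GRID))       # False (fails at a = 0.5)
print(all(abs(min(a, N(a))) < EPS for a in GRID))             # False (fails at a = 0.5)
```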

All this shows, in a new light, the relevance of carefully designing, in accordance with how things actually are, the measures intervening in linguistic descriptions; in the end, measures are but membership functions even if, as said before, such functions are not always measures, but just approximations to them, whose continuity is important for solving the corresponding functional equations, and can come from the flexibility of the considered imprecise words.

4.4. What has been presented in both Chap. 3 and the last Sects. 4.1–4.3 opens a window to consider what can be understood by a (primitive) algebra of fuzzy sets, namely to establish a general enough algebra of fuzzy sets allowing its particularization in each concrete problem. That is, defining among fuzzy sets an algebraic structure endowed with just a minimal number of axioms, necessary not only for counting with an operative general view of fuzzy sets in relation to their linguistic labels, in short with language, but for having the possibility of designing and effectively computing with fuzzy sets. In fact, for constructing a theory of “computing with words”, a calculus is needed in each context. For such a goal it is required not to consider the linguistic collectives/fuzzy sets in themselves, but all their potential states/membership functions, that is, the functions in the set [0, 1]X of all functions X → [0, 1], counting with the important difference that each bivalued function f: X → {0, 1} actually and univocally represents the single crisp subset f −1(1) of X, something without a parallel for fuzzy sets and the functions in [0, 1]X (except for those in its subset {0, 1}X).

Such an algebraic structure should be established, after defining a suitable ordering in [0, 1]X, with two binary operations, (·) and (+), and a unary one (′) representing, respectively, the “and”, “or”, and “not” of language where, as said before, the validity of the laws imposed on such operations should be checked in each case. For this reason a structure with very few laws is important; yet without some laws there is no mathematical structure able to allow a calculus; they are necessary for developing the consequences of its acceptance, and to count with a solid base for computation, reducible to the classical case when the represented words are precise. What has been said offers a way to establish such a general structure.

Hence, for instance, the ordering defined among the functions in [0, 1]X should reduce to the ordering among subsets or, equivalently, to the ordering among its characteristic functions, namely \( {\mathbf{P}}\, \subseteq\, {\mathbf{Q}} \Leftrightarrow \mu_{P} (x) \le \mu_{Q} (x) \), for all x in X. Thus, the order chosen for membership functions in [0, 1]X can be the pointwise one:

$$ \mu \le \sigma \Leftrightarrow \mu \,\text{(}x\text{)} \le \sigma \,(x),\;{\text{for}}\;{\text{all}}\;x\;{\text{in}}\;X, $$

under which the minimum is the function μ 0 (x) = 0 for all x in X, the maximum is μ 1 (x) = 1 for all x in X, and it is μ 0 ≤ σ ≤ μ 1 for all σ in [0, 1]X; functions σ appear under such ordering in the interval of functions [μ 0, μ 1] between, respectively, the empty Ø and the total set X. The functions μ r (x) = r, r in [0, 1], for all x in X, are the constant functions reduced in the classical case to only those with r = 0 and r = 1. Additionally, it is \( \mu = \sigma \Leftrightarrow \mu \le \sigma \) and σ ≤ μ. With this:
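A small sketch of this pointwise order on a finite, invented universe (all values hypothetical):

```python
# Pointwise order between membership functions over a finite universe X.

X = ["a", "b", "c", "d"]

mu_0 = {x: 0.0 for x in X}      # minimum of the order (empty set)
mu_1 = {x: 1.0 for x in X}      # maximum of the order (total set X)
mu    = {"a": 0.1, "b": 0.5, "c": 0.9, "d": 0.0}
sigma = {"a": 0.3, "b": 0.5, "c": 1.0, "d": 0.2}

def leq(f, g):
    """f <= g iff f(x) <= g(x) for every x in X."""
    return all(f[x] <= g[x] for x in X)

print(leq(mu, sigma))                   # True : mu <= sigma at every point
print(leq(sigma, mu))                   # False: hence mu != sigma
print(leq(mu_0, mu) and leq(mu, mu_1))  # True : mu_0 <= mu <= mu_1 always holds
```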

  • A basic algebra of fuzzy sets (BAF) is a quintet ([0, 1]X, ≤; ·, +; ′) satisfying the laws, or axioms:

    1. (1)

      For negation: μ ≤ σ \( \Rightarrow \) σ′ ≤ μ′; (μ 1)′ = μ 0; (μ 0)′ = μ 1.

    2. (2)

      For conjunction: μ ≤ σ \( \Rightarrow \) μ · α ≤ σ · α, and α · μ ≤ α · σ, for all α in [0, 1]X; μ · μ 1 = μ 1 · μ = μ, for all μ in [0, 1]X.

    3. (3)

      For disjunction: μ ≤ σ \( \Rightarrow \) μ + α ≤ σ + α, and α + μ ≤ α + σ, for all α in [0, 1]X; μ + μ 0 = μ 0 + μ = μ, for all μ in [0, 1]X.

    4. (4)

      Of coherence: Provided μ and σ belong to {0, 1}X, then it would be μ · σ = min (μ, σ), μ + σ = max (μ, σ), μ′ = 1 − μ (σ′ = 1 − σ), all of them belonging to {0, 1}X.

Note that such algebras are neither presumed to verify all the laws classically supposed between sets, nor even those that are usually supposed between fuzzy sets. A BAF is but a “formal skeleton” to which other laws could be added when suitable, such as when searching for operations allowing the satisfaction of Aristotle’s principles of noncontradiction, μ · μ′ = μ 0, of excluded-middle, μ + μ′ = μ 1, or the commutative law μ · σ = σ · μ, among others.
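As a minimal sketch, and only under the assumption that the three connectives are decomposable by numerical functions (the case discussed in consequence (j) below), the coherence axiom (4) can be checked mechanically for a chosen triplet; the construction below is illustrative, not a definitive formalization.

```python
# Pointwise connectives on a finite universe, plus a check of axiom (4):
# on {0, 1}-valued functions the operations must reduce to min, max and 1 - x.

from itertools import product

X = ["a", "b", "c"]

def conj(mu, sigma, A): return {x: A(mu[x], sigma[x]) for x in X}
def disj(mu, sigma, O): return {x: O(mu[x], sigma[x]) for x in X}
def neg(mu, N):         return {x: N(mu[x]) for x in X}

def coherence_axiom(A, O, N):
    """Axiom (4): classical behaviour on every pair of crisp functions."""
    for values in product([0.0, 1.0], repeat=2 * len(X)):
        mu = dict(zip(X, values[:len(X)]))
        sigma = dict(zip(X, values[len(X):]))
        if conj(mu, sigma, A) != conj(mu, sigma, min):   return False
        if disj(mu, sigma, O) != disj(mu, sigma, max):   return False
        if neg(mu, N) != neg(mu, lambda v: 1.0 - v):     return False
    return True

print(coherence_axiom(min, max, lambda a: 1.0 - a))           # True
print(coherence_axiom(lambda a, b: a * b,                     # product
                      lambda a, b: a + b - a * b,             # probabilistic sum
                      lambda a: 1.0 - a))                     # 1 - id : True
```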

Anyway, a few consequences follow from the small number of laws a BAF verifies.

  1. (a)

    Axiom 4 is independent of axioms 1, 2, and 3.

    In fact, defining μ* (x) = 1 − μ (1 − x), for μ in [0, 1][0, 1] (the functions from [0, 1] to [0, 1]) and all x in [0, 1], it is easy to check that the operation μ ↦ μ* verifies axiom 1; but if μ is the membership function of [0, 0.4], then μ* is the membership function of [0, 0.6), not coincidental with the complement (0.4, 1] of [0, 0.4]. Hence μ* cannot be taken as a (coherent) negation of μ.

  2. (b)

    μ · σ ≤ min (μ, σ) ≤ max (μ, σ) ≤ μ + σ.

    In fact, from μ ≤ μ 1, it follows that μ · σ ≤ μ 1· σ = σ, and from σ ≤ μ 1, it analogously follows that μ · σ ≤ μ; hence, μ · σ ≤ min (μ, σ) ≤ max (μ, σ). From μ 0 ≤ μ follows that σ = μ 0 + σ ≤ μ + σ, and from μ 0 ≤ σ it follows that μ ≤ μ + σ; that is, max (μ, σ) ≤ μ + σ.

  3. (c)

    μ · μ 0 = μ 0 · μ = μ 0, and μ + μ 1 = μ 1 + μ = μ 1.

    In fact, the first follows from μ 0 ≤ μ · μ 0 ≤ μ 0, and the second from μ 1 ≤ μ + μ 1 ≤ μ 1.

  4. (d)

    It is obvious that the only case in which ([0, 1]X, ≤; ·, +; ′) can be a lattice with negation, is with · = min, and + = max.

    In this case, if (′) is a strong negation, that is, verifies (μ′)′ = μ for all functions μ, it is easy to prove that the laws of duality (μ + σ)′ = μ′ · σ′ and (μ · σ)′ = μ′ + σ′ hold. Hence, in this case, disjunction and conjunction are not independent, but (through the negation) one depends on the other, and the BAF is a De Morgan algebra.

  5. (e)

    With an “abstract” partially ordered set Ω with a maximum 1 and a minimum 0, instead of [0, 1]X, and keeping within its elements the former axioms 1, 2, and 3 of a BAF, both ortholattices and De Morgan algebras, and Boolean algebras in particular, are instances of such new and abstract algebraic structures; in them, the role of {0, 1}X in Axiom (4) is played by the subset {0, 1}. Nevertheless, no BAF on [0, 1]X can have all the properties of a Boolean algebra; indeed, because such a case is that of a lattice, it should be · = min and + = max, and for no negation (′) is it possible to have μ r · μ r ′ = μ 0 for all r in [0, 1], because min (μ r (x), μ r ′ (x)) = 0 for all x in X implies r = 0. There is no way, under the BAF’s axioms, of endowing all the membership functions of fuzzy sets with either the Boolean structure of the characteristic functions of sets, or with the weaker one of an ortholattice. On the contrary, provided [0, 1]X were endowed with a Boolean structure, it would follow, through Stone’s characterization theorem of Boolean algebras as algebras of sets, the very surprising fact that an isomorphism exists between membership functions and characteristic functions, with which fuzzy sets would not, actually, be substantially different from sets.

    In sum, the BAF structure seems to be general enough for representing the states of fuzzy sets, and any other, eventually different, primitive axiomatics for [0, 1]X should not be able to deduce such a result.

  6. (f)

    Concerning the validity of the laws of duality, holding, as said, within the lattice structure, they don’t hold in all BAF, even if they can hold in some particular cases.

    For instance, were the connectives decomposable by a triplet (T, S, N), where S is the N-dual of T and N is strong, that is, S (x, y) = N (T (N (x), N (y))) and N o N = id, then it would obviously hold that (μ + σ)′ = μ′ · σ′ and, because it is also S (N (x), N (y)) = N (T (x, y)), it would also hold that (μ · σ)′ = μ′ + σ′. Notwithstanding, with T = product, S = max, and N = 1 − id, it is \( 1-x\,\cdot\,y \ne { \hbox{max} }\,(1-x,\;1-y) = 1-{ \hbox{min} }\,(x,y) \Leftrightarrow x\,\cdot\,y \ne { \hbox{min} }\,(x,\;y) \), and hence the laws of duality are not valid. The laws of duality do not generally hold in a BAF, and their validity depends on the particular expression of the connectives (a numerical check appears after this list).

    Anyway, in a BAF with + = max, and regardless of the conjunction, the law of semi-duality μ′ + σ′ ≤ (μ · σ)′ holds, and with · = min and regardless of the disjunction, the other semi-duality law (μ + σ)′ ≤ μ′ · σ′ holds. For instance, if · = min, and because it is μ ≤ μ + σ and σ ≤ μ + σ, it follows that (μ + σ)′ ≤ μ′ and (μ + σ)′ ≤ σ′, from which it follows that (μ + σ)′ = (μ + σ)′ · (μ + σ)′ ≤ μ′ · σ′. With + = max, one can proceed analogously.

    Of course, if + = max, and · = min, both inequalities jointly hold but are reduced to equalities. In general, conjunction and disjunction are, nevertheless, independent operations.

  7. (g)

    The conjunction (·) is idempotent, μ · μ = μ, if and only if · = min, and the disjunction (+) is idempotent, μ + μ = μ, if and only if + = max.

    That these two operations are idempotent is well known, but the reciprocal also holds. Let’s suppose that (·) is idempotent; that is, μ · μ = μ holds for all functions μ. Because it is min (μ, σ) · min (μ, σ) = min (μ, σ), and min (μ, σ) ≤ μ, min (μ, σ) ≤ σ, it follows that min (μ, σ) = min (μ, σ) · min (μ, σ) ≤ μ · σ which, together with (b), gives min (μ, σ) = μ · σ. A similar proof applied to max and + shows max = +.

  8. (h)

    All BAFs verify Kleene’s law μ · μ′ ≤ σ + σ′, for all μ and σ in [0, 1]X, which reduces to μ 0 ≤ μ 1 if μ and σ are in {0, 1}X, or if the BAF verifies one of the Aristotelian principles μ · μ′ = μ 0 or μ + μ′ = μ 1.

    Because at each x it is either μ (x) ≤ σ (x) or σ (x) ≤ μ (x), in the first case it is (μ · μ′) (x) ≤ min (μ (x), μ′ (x)) ≤ μ (x) ≤ σ (x) ≤ max (σ (x), σ′ (x)) ≤ (σ + σ′) (x). In the second case the same result follows analogously, and hence Kleene’s law is proven for all BAFs; it is also exercised numerically in the sketch after this list.

  9. (i)

    Concerning the absorption laws, μ · (μ + σ) = μ, and μ + (μ ·σ) = μ, for all μ and σ in [0, 1]X, the first holds if and only if · = min, and the second if and only if + = max.

    That these two formulae hold with, respectively, min and max, is evident. To prove the reciprocal, by supposing that the first holds, just taking σ = μ 0, it follows that μ · μ = μ, for all μ, implies · = min. Analogously, and by taking in the second σ = μ 1, it follows that μ + μ = μ, for all μ, which leads to + = max.

  10. (j)

    Provided the operations of a BAF were decomposable by numerical functions F, G: [0, 1] × [0, 1] → [0, 1], for conjunction and disjunction, respectively, and N: [0, 1] → [0, 1], for negation, what properties do these three functions enjoy? Obviously, such properties are the following.

    1. (a)

      Properties of N: x ≤ y \( \Rightarrow \) N (y) ≤ N (x); N (0) = 1; N (1) = 0.

    2. (b)

      Properties of F: x ≤ y \( \Rightarrow \) F (x, z) ≤ F (y, z), F (z, x) ≤ F (z, y), for all z in [0, 1]; and F(1, x) = F(x, 1) = x.

    3. (c)

      Properties of G: x ≤ y \( \Rightarrow \) G (x, z) ≤ G (y, z), G (z, x) ≤ G (z, y), for all z in [0, 1]; and G (0, x) = G (x, 0) = x.

    4. (d)

      Coherence: If x, y belong to {0, 1}, then F (x, y) = min (x, y), G (x, y) = max (x, y), and N (x) = 1 − x.

      Hence ([0, 1], ≤; F, G; N) inherits the corresponding BAF structure with the set {0, 1} playing the role of {0, 1}X; for instance, functions F and G verify F ≤ min ≤ max ≤ G and, jointly with N, also verify Kleene’s law, F (x, N (x)) ≤ G (y, N (y)), for all x, y in [0, 1]. Reciprocally, provided F, G, and N were to satisfy the former conditions, the functionally expressed operations defined by μ · σ = F o (μ × σ), μ + σ = G o (μ × σ), and μ′ = N o μ, would endow [0, 1]X with a BAF structure.

      Thus, the former properties are necessary and sufficient for counting with a decomposable BAF and, because there is a big multiplicity of triplets (F, G, N) verifying such properties, the number of BAFs that can be defined on [0, 1]X is just enormous; at each practical problem requiring decomposable “and”, “or”, and “not”, additional properties should be added to the functions F, G, and N for the goal of counting with a calculus. Such additional properties should be imposed according to what can be checked in language, so as not to suppose properties nonexistent among its words and statements; it is also worth noticing that in some practical cases it could be unnecessary to take all three operations as decomposable, but just some of them. Let’s repeat anew that such decisions are but a matter of design in each particular problem or subject.

  11. (k)

    Concerning the distributive laws, μ · (α + β) = μ · α + μ · β, and μ + (α · β) = (μ + α) · (μ + β), it is obvious that they hold simultaneously when · = min and + = max. Nevertheless, if the first holds for all μ, α, and β, then taking α = β = μ 1 implies μ + μ = μ for all μ, and hence + = max, regardless of (·); concerning the second, α = β = μ 0 implies μ · μ = μ for all μ, and thus · = min regardless of (+).

    Hence, with + = max the distributive law of · over + holds, and with · = min that of + over ·; of course, both hold simultaneously only with min and max. If both laws hold simultaneously in arithmetic, in plain language their joint validity cannot always be supposed.
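To close the list, here is a small numerical exercise of some of the above consequences: (b) and Kleene’s law (h) for the decomposable triplet (product, probabilistic sum, 1 − id), and the duality failure of item (f) for (product, max, 1 − id); it is only an illustration on a grid, not a proof.

```python
# Pointwise checks, on a grid, of: (b) F <= min <= max <= G; (f) failure of full
# duality for (product, max, 1 - id) but validity of semi-duality; (h) Kleene's law.

GRID = [i / 10 for i in range(11)]
EPS = 1e-9

F = lambda a, b: a * b              # conjunction: product
G = lambda a, b: a + b - a * b      # disjunction: probabilistic sum
N = lambda a: 1.0 - a               # negation: 1 - id

# (b): F <= min <= max <= G everywhere on the grid.
print(all(F(x, y) <= min(x, y) + EPS and max(x, y) <= G(x, y) + EPS
          for x in GRID for y in GRID))                                  # True

# (f): with (product, max, 1 - id) the duality (mu.sigma)' = mu' + sigma' fails...
print(all(abs(N(x * y) - max(N(x), N(y))) < EPS
          for x in GRID for y in GRID))                                  # False
# ...while the semi-duality mu' + sigma' <= (mu.sigma)' still holds.
print(all(max(N(x), N(y)) <= N(x * y) + EPS
          for x in GRID for y in GRID))                                  # True

# (h): Kleene's law F(x, N(x)) <= G(y, N(y)) for all x, y.
print(all(F(x, N(x)) <= G(y, N(y)) + EPS for x in GRID for y in GRID))   # True
```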

4.5. To end this chapter, let’s make a last reflection on what assigning an algebraic structure to the membership functions amounts to. For such a goal, it can be illustrative enough to look at the case of the triplet (min, max, 1 − id), the most frequently used in the applications of fuzzy sets, and in which most of the Boolean laws are preserved, as are, for instance, the two distributive laws, but not those of noncontradiction, μ · μ′ = μ 0, and excluded-middle, μ + μ′ = μ 1, the only Boolean laws that do not hold with the connectives min and max. The distributive laws were already rejected in the reasoning physicists conduct on quantum mechanics with their specialized theoretical language, based on the algebra of a Hilbert space; although they hold with precise words, not even a general reason is known for their validity when imprecise words are used in language, and hence supposing them can violate what is behind a description of something in plain language. By incorrectly supposing just one of them, a law is added to language that can lead to concluding something not real, and that can introduce a lack of confidence in the corresponding conclusions. That the distributive laws don’t hold in the language of quantum physics suffices to show that these laws cannot always be taken for granted.

Analogous comments can be made for any law not appearing in the list of axioms defining a BAF; therefore, each time one such addition seems to be necessary for computing, what its addition can imply should be carefully checked in each case. Note that, in fact, the laws presumed in a BAF are almost coincidental with those that were needed to reach some properties of meaning.

All this shows the importance of paying careful attention to the design of the membership functions and the connectives that could appear in such design, by previously acquiring the best available information on the contextual behavior of their linguistic labels; that is, to a good comprehension of what they actually express. There is no universal algebra with imprecise words.