1 Introduction

The concept of fuzzy automata was introduced by Wee in 1967. Thereafter, many authors, such as Santos (1975, 1976, 1977) and Lee and Zadeh (1969), contributed to this field; for details, we refer to Kandel and Lee (1980) and Wechler (1978), and especially to the recent works by Mordeson and Malik (2000, 2002). Fuzzy automata also have many important applications, such as learning systems, models of computing with words, pattern recognition, and database theory (Mordeson and Malik 2002; Ying 2002).

Recently, Qiu (2001) and Asveld (2003) proposed to study fuzzy automata based on residuated logic, and the proposed method provides a tool for studying fuzzy automata in the frame of many-valued logic. However, these concepts and conclusions are mostly obtained by directly fuzzifying the classical case, and therefore have a poor level structure. To state this more precisely, automata are the mathematical models used to recognize formal languages in classical computation theory, and the fuzzy automata proposed so far take truth values in the unit interval [0, 1] with max–min composition; we call them classical fuzzy automata in this paper. To overcome this limitation, we study automata theory with truth values in more general structures: lattice-ordered monoids.

It is well known in classical automata theory (Hopcroft and Ullman 1979) that the following four ways of representing a language (a regular language, to be precise) \(\mathcal{L}\) are equivalent:

  1. (i)

    \(\mathcal{L}\) is recognized by some deterministic finite state automaton.

  2. (ii)

    \(\mathcal{L}\) is recognized by some nondeterministic finite state automaton.

  3. (iii)

    \(\mathcal{L}\) is described by some regular expression.

  4. (iv)

    \(\mathcal{L}\) is generated by some regular grammar.

The same results hold for fuzzy regular languages with truth values in [0, 1] and with max–min composition (Bělohlávek 2002; Kandel and Lee 1980; Malik and Mordeson 2000, 2002; Močkoř 2002; Santos 1975, 1976, 1977; Shen 1996; Thomason and Marinos 1974). For generalized lattice-valued languages, it has been shown in Li (2003) that (i) and (ii) are not equivalent for some truth-value lattice-ordered monoids: the nondeterministic lattice-valued finite automaton (LA) is more powerful than the deterministic lattice-valued finite automaton (DLA) in recognizing fuzzy languages. An important problem then arises: how to describe LA or DLA by regular expressions and regular grammars. Since the problem of regular expressions has been resolved in Li and Pedrycz (2004), the issue of interest here is the relationship between LA and L-valued regular grammars (LRG).

2 Some basic concepts

We first give some basic concepts we use in this paper.

Definition 2.1. Given a lattice L, we use ∨ and ∧ to denote the supremum and infimum operations on L, respectively, with 0 and 1 being the least and the greatest elements. Assume that there is a binary operation • (called multiplication) on L such that (L, •, e) is a monoid with identity e∈L. We call L a lattice-ordered monoid (a slight modification of the notion of lattice-ordered monoid in Birkhoff (1940)) if it satisfies the following two conditions:

  1. (i)

∀ a∈L, a • 0=0 • a=0;

  2. (ii)

∀ a, b, c∈L, a • (b∨c)=(a • b)∨(a • c), and (b∨c) • a=(b • a)∨(c • a).

For a lattice-ordered monoid, since we are only concerned with the multiplication • and the finite supremum operation ∨, in what follows a lattice-ordered monoid will be denoted by (L, •, ∨).

We give some examples of lattice-ordered monoids.

Example 2.1. (1) Let (L, ∧, ∨) be a distributive lattice (Balbes and Dwinger 1974) and let • = ∧; then L is a lattice-ordered monoid, and the identity of the multiplication is 1.

(2) Let (L, •, ∨) be a lattice-ordered monoid with identity e. We use L(n) to denote the set of all n×n matrices with values in L. The multiplication on L(n), denoted ○, is defined as the sup-• composition, and ∨ is taken pointwise. That is, for two n×n matrices A=(a_{ij}) and B=(b_{ij}) with values in L, if A○B=C=(c_{ij}), then \(c_{ij} = \bigvee\nolimits_{k = 1}^n {(a_{ik} \bullet b_{kj} )} ,\) and if A∨B=D=(d_{ij}), then d_{ij}=a_{ij}∨b_{ij}. Then (L(n), ○, ∨) is also a lattice-ordered monoid, whose identity is the diagonal matrix E=diag(e, e, ..., e) with e as the diagonal element (a small computational sketch of this construction is given after this example). In general, the multiplication on L(n) is not commutative, even if the multiplication on L is commutative.

(3) Let • be any uninorm (Li and Shi 2000; Yager 1994) on [0, 1]. If 0 • 1=0, then ([0, 1], •, ∨) is a commutative lattice-ordered monoid. In particular, if • is a t-norm on [0, 1], then ([0, 1], •, ∨) is a commutative lattice-ordered monoid with identity e=1.

(4) Quantales are complete lattice-ordered monoids: L is a quantale if L is a complete lattice and the multiplication satisfies the following infinite distributive laws (Li et al. 2002; Rosenthal 1990):

$$ a \bullet \left( {\bigvee\nolimits_t {b_t } } \right) = \bigvee\nolimits_t {(a \bullet b_t )} ,\;{\text{and}}\;\left( {\bigvee\nolimits_t {b_t } } \right) \bullet a = \bigvee\nolimits_t {(b_t \bullet a)} . $$
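
The matrix construction in item (2) above is straightforward to carry out computationally. The following is a minimal Python sketch and not part of the original development: a lattice-ordered monoid is assumed to be given by its join and multiplication as plain functions, here instantiated with the distributive lattice ([0, 1], ∧, ∨) of item (1), so that • = ∧ = min and ∨ = max; the names LatticeMonoid and mat_compose are ours.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LatticeMonoid:
    """A lattice-ordered monoid (L, •, ∨) given by its operations and distinguished elements."""
    join: Callable[[object, object], object]   # finite supremum ∨
    mult: Callable[[object, object], object]   # multiplication •
    zero: object                               # least element 0
    unit: object                               # identity e of •

def mat_compose(L: LatticeMonoid, A: List[list], B: List[list]) -> List[list]:
    """sup-• composition of n×n matrices: (A ○ B)_ij = ∨_k (a_ik • b_kj), as in Example 2.1(2)."""
    n = len(A)
    C = [[L.zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = L.zero
            for k in range(n):
                acc = L.join(acc, L.mult(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# Example 2.1(1): ([0, 1], ∧, ∨) with • = ∧ = min is a lattice-ordered monoid with e = 1.
godel = LatticeMonoid(join=max, mult=min, zero=0.0, unit=1.0)
A = [[0.3, 0.9], [0.0, 0.5]]
B = [[0.7, 0.2], [1.0, 0.4]]
print(mat_compose(godel, A, B))   # [[0.9, 0.4], [0.5, 0.4]]
```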

In the following, we always assume that L is a lattice-ordered monoid.

Definition 2.2. An LA (lattice-valued finite automaton) is a five-tuple,

$$ M = (X,U,\delta ,\sigma _0 ,\sigma _1 ), $$

where X and U are finite nonempty sets, δ: X×U → F(X), and σ0, σ1: X → L are L-fuzzy sets on X; here F(X) denotes the set of all L-fuzzy subsets of X, i.e., maps from X to L. The elements of X are called states, and the elements of U are called (input) symbols. δ is called the fuzzy transition function, and σ0, σ1 are called the fuzzy initial state and the fuzzy final state, respectively.

We can regard δ(x, u)(y) as the grade of membership that the next state of M is y, given that the present state of M is x and the input symbol u is applied. For simplicity we write δ(x, u)(y) as δ(x, u, y).

Let U* denote the set of all finite words over U, and let Λ denote the empty word. Then U* is the free monoid generated by U under concatenation. For θ∈U*, |θ| denotes the length of θ.

Following Li (2003), we can extend δ to X×U*, denoted δ*: X×U* → F(X), as follows:

  1. (i)

∀ y∈X, if y=x, then δ*(x, Λ, y)=e; otherwise δ*(x, Λ, y)=0;

  2. (ii)

∀ θ∈U*, u∈U, \(\delta ^* (x,\theta u,y) = \bigvee\nolimits_{z \in X} {[\delta ^* (x,\theta ,z) \bullet \delta (z,u,y)]} .\)

For the extension δ*, since L is a lattice-ordered monoid, the following equation holds for any θ∈U* (Li 2003): if θ=θ1θ2, then

$$ \delta ^* (x,\theta _1 \theta _2 ,y) = \bigvee\nolimits_{z \in X} {[\delta ^* (x,\theta _1 ,z) \bullet \delta ^* (z,\theta _2 ,y)]} . $$

Definition 2.3. Suppose that M=(X, U, δ, σ0, σ1) is an LA. Then the L-valued language f_M∈F(U*) accepted (or recognized) by M is defined as follows:

$$ f_M (\theta ) = \sigma _0 \circ \delta _\theta \circ \sigma _1 = \bigvee\nolimits_{x,y \in X} {[\sigma _0 (x) \bullet \delta ^* (x,\theta ,y) \bullet \sigma _1 (y)]} . $$

An element f∈F(U*) is called an L-language on U; an L-language accepted by some LA is called an LA-language.

For two LAs M1 and M2, we say that they are equivalent if they accept the same LA-language, that is, \(f_{M_1 } = f_{M_2 } .\)
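
The recursive definition of δ* and the acceptance degree of Definition 2.3 translate directly into a small program. The following Python sketch is ours, not part of the paper; it assumes X and U are small finite sets and, for concreteness, takes the lattice-ordered monoid to be ([0, 1], •, ∨) with • the ordinary product, as in the examples of Sect. 3. Any other choice of (join, mult, zero, unit) satisfying Definition 2.1 could be substituted.

```python
from itertools import product

# Assumed lattice-ordered monoid ([0, 1], •, ∨): • = product, ∨ = max, 0 = 0.0, e = 1.0.
join, mult, ZERO, UNIT = max, lambda a, b: a * b, 0.0, 1.0

def delta_star(X, delta, x, theta):
    """Return the map y -> δ*(x, θ, y); delta maps (state, symbol, state) -> grade (missing key = 0)."""
    grades = {y: (UNIT if y == x else ZERO) for y in X}       # δ*(x, Λ, ·)
    for u in theta:
        nxt = {y: ZERO for y in X}
        for z, y in product(X, X):                            # δ*(x, θu, y) = ∨_z δ*(x, θ, z) • δ(z, u, y)
            nxt[y] = join(nxt[y], mult(grades[z], delta.get((z, u, y), ZERO)))
        grades = nxt
    return grades

def accept_degree(X, delta, sigma0, sigma1, theta):
    """f_M(θ) = ∨_{x, y} σ0(x) • δ*(x, θ, y) • σ1(y)  (Definition 2.3)."""
    best = ZERO
    for x in X:
        g = delta_star(X, delta, x, theta)
        for y in X:
            best = join(best, mult(mult(sigma0.get(x, ZERO), g[y]), sigma1.get(y, ZERO)))
    return best
```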

In classical automata theory we have the notions of deterministic and nondeterministic finite automata. The notion of an LA generalizes that of a nondeterministic automaton, so an LA is nondeterministic in nature. In the following we present a deterministic counterpart of the LA.

Definition 2.4. A DLA (deterministic lattice-valued finite automaton) is a five-tuple,

$$ M = (X,U,\delta ,x_0 ,\sigma _1 ), $$

where X, U, and σ1 are defined as for an LA, x0∈X is the initial state, and δ: X×U → X is the transition function.

The extension of δ to U* is the same as in the classical case. Then the L-language f_M∈F(U*) accepted by a DLA is defined as

$$ \forall \theta \in U^* ,\quad f_M (\theta ) = \sigma _1 (\delta ^* (x_0 ,\theta )). $$

However, unlike the classical case, Li (2003) has shown that in general LA and DLA are not equally powerful.
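
For a DLA, the run on a word is a single state trajectory, so the acceptance degree of Definition 2.4 needs only one table lookup per symbol. A minimal sketch in the same spirit as the one above (the function name is ours; δ is assumed total, and a missing entry is treated here as rejection with grade 0):

```python
def dla_accept_degree(delta, sigma1, x0, theta, zero=0.0):
    """f_M(θ) = σ1(δ*(x0, θ)) for a DLA; delta maps (state, symbol) -> state."""
    x = x0
    for u in theta:
        if (x, u) not in delta:   # δ is assumed total; a gap is treated as grade 0
            return zero
        x = delta[(x, u)]
    return sigma1.get(x, zero)
```

For instance, on the DLA of Example 4.1 below, this returns σ1(x3)=0.8 for θ=u1³u2.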

In classical automata theory, regular grammars and finite automata are essentially the same, in that either one is a specification of a regular language. It is therefore natural to study the corresponding (distinct) fuzzy grammars for LA and DLA.

Definition 2.5. (1) A lattice-valued grammar is a quadruple G=(N, T, P, S), where

  1. (i)

    N is a finite set whose elements are called nonterminal symbols.

  2. (ii)

T is a finite set whose elements are called terminal symbols and T∩N=∅.

  3. (iii)

S∈N is called the starting symbol, and S does not occur on the right-hand side of any fuzzy production.

  4. (iv)

    P is a finite collection of fuzzy productions over \(T \cup N\) and

    $$ P = \left\{ {u\xrightarrow{\rho }v|u \in (N \cup T)^* N(N \cup T)^* ,v \in (N \cup T)^* } \right\},\;{\text{where}}\;\rho \in L - \{ 0\} . $$

    We may interpret ρ(u, v) as the grade of membership that u will be replaced by v.

If \(u\xrightarrow{\rho }v\) is a production and α, β∈(N∪T)*, then αvβ is said to be directly derivable from αuβ. In this case, we write

$$ \alpha u\beta \mathop \Rightarrow \limits^\rho \alpha v\beta . $$

If u_i∈(N∪T)* for i=1, 2, ..., p and u_{i+1} is directly derivable from u_i for i=1, 2, ..., p−1, we say that u_p is derivable from u1 and write

$$ u_1 {\mathop \Rightarrow \limits^\rho}^* \,u_p . $$

We call

$$ u_1 \mathop \Rightarrow \limits^{\rho _1 } u_2 \mathop \Rightarrow \limits^{\rho _2 } \ldots \mathop \Rightarrow \limits^{\rho _{p - 1} } u_p $$

the derivation chain of u_p (from u1), where ρ=ρ1 • ρ2 • ... • ρ_{p−1}.

(2) The L-language f_G∈F(T*) generated by an L-valued grammar G is defined as follows:

$$ f_G (\theta ) = \bigvee {\left\{ {\rho :S {\mathop \Rightarrow \limits^\rho}^* \,\theta } \right\}} $$

L-valued grammars are classified according to the form of the production rules used (Chomsky hierarchy). These grammars are sometimes described as follows:

Definition 2.6. Let G=(N, T, P, S) be an L-valued grammar.

  1. 1.

G is unrestricted (of type 0) if there are no restrictions on the form of the production rules. Accordingly, f_G is called an L-valued type-0 language.

  2. 2.

G is context-sensitive if for every \(u\xrightarrow{\rho }v \in P,\) u, v∈(T∪N)*, ρ(u, v)>0 implies |u|≤|v|. Accordingly, f_G is called an L-valued context-sensitive language.

  3. 3.

G is context-free if for every \(u\xrightarrow{\rho }v \in P,\) u, v∈(T∪N)*, ρ(u, v)>0 implies |u|≤|v| and u∈N. Accordingly, f_G is called an L-valued context-free language.

  4. 4.

G is weak regular if for every \(u\xrightarrow{\rho }v \in P,\) u, v∈(T∪N)*, ρ(u, v)>0 implies u∈N and v∈T⁺B with B∈N∪{Λ}, or u=S and v=Λ. Accordingly, f_G is called an L-valued weak regular language.

  5. 5.

G is regular if for every \(u\xrightarrow{\rho }v \in P,\) u, v∈(T∪N)*, ρ(u, v)>0 implies u∈N and v∈TB with B∈N∪{Λ}, or u=S and v=Λ. Accordingly, f_G is called an L-valued regular language.

Two L-valued grammars G1 and G2 are said to be equivalent if they generate the same language, that is, \(f_{G_1 } = f_{G_2 } .\)

Theorem 2.1. L-valued weak regular grammars (LWRG, for short) are equivalent to L-valued regular grammars (LRG, for short), where "equivalent" means that they generate the same class of L-languages.

Proof From the definitions, every LRG is itself an LWRG, so every L-language generated by an LRG is generated by an LWRG. It remains to show that every L-language generated by an LWRG can also be generated by an LRG.

Let G1=(N1, T, P1, S) be an LWRG, and denote the language generated by G1 by \(f_{G_1 } .\) We define an LRG G=(N, T, P, S) such that \(f_G = f_{G_1 } .\)

(i) For each production

$$ x \to a_1 a_2 \ldots a_m y \in P_1 , $$
(1)

where a_i∈T for i=1, 2, ..., m and x, y∈N1.

When m=1, we have a1∈T, \(x,y \in N_1 \subseteq N\) and x→a1y∈P, as desired.

When m≥2, \(x,y \in N_1 \subseteq N,\) and we introduce new nonterminal symbols ξ1, ξ2, ..., ξ_{m−1}; denote the set of all such new symbols by N0 and set N=N1∪N0.

Let ρ_G and \(\rho _{G_1 } \) denote the grades of membership of productions in G and G1, respectively. Now we can mimic the production (Eq. 1) by means of productions x→a1ξ1, ξ1→a2ξ2, ..., ξ_{m−1}→a_m y∈P with

$$ \begin{aligned} \rho _G & (x \to a_1 \xi _1 ) = \rho _G (\xi _1 \to a_2 \xi _2 ) = \ldots = \rho _G (\xi _{m - 2} \to a_{m - 1} \xi _{m - 1} ) = e; \\ \rho _G & (\xi _{m - 1} \to a_m y) = \rho _{G_1 } (x \to a_1 a_2 \ldots a_m y). \\ \end{aligned} $$

Then \(\begin{aligned} \rho _G (x \to a_1 a_2 \ldots a_m y) = & \rho _G (x \to a_1 \xi _1 ) \bullet \rho _G (\xi _1 \to a_2 \xi _2 ) \bullet \ldots \\ & \bullet \rho _G (\xi _{m - 2} \to a_{m - 1} \xi _{m - 1} ) \bullet \rho _G (\xi _{m - 1} \to a_m y) = \rho _{G_1 } (x \to a_1 a_2 \ldots a_m y). \\ \end{aligned} \)

(ii) For each production

$$ x \to b_1 b_2 \ldots b_n \in P_1 $$
(2)

where b_i∈T for i=1, 2, ..., n and x∈N1. When n=1, we have b1∈T and x→b1∈P, as desired. When n≥2, \(x \in N_1 \subseteq N.\) We introduce new nonterminal symbols η1, η2, ..., η_{n−1}∈N (also collected in N0, so that N=N1∪N0). Now we can mimic the production (Eq. 2) by means of productions

$$ x \to b_1 \eta _1 ,\eta _1 \to b_2 \eta _2 , \ldots \eta _{n - 1} \to b_n \in P $$

with ρ_G(x→b1η1)=ρ_G(η1→b2η2)=...=ρ_G(η_{n−2}→b_{n−1}η_{n−1})=e and \(\rho _G (\eta _{n - 1} \to b_n ) = \rho _{G_1 } (x \to b_1 b_2 \ldots b_n ).\) Then \(\begin{aligned} \rho _G (x \to b_1 b_2 \ldots b_n ) = & \rho _G (x \to b_1 \eta _1 ) \bullet \rho _G (\eta _1 \to b_2 \eta _2 ) \bullet \ldots \\ & \bullet \rho _G (\eta _{n - 2} \to b_{n - 1} \eta _{n - 1} ) \bullet \rho _G (\eta _{n - 1} \to b_n ) = \rho _{G_1 } (x \to b_1 b_2 \ldots b_n ). \\ \end{aligned} \)

Clearly G thus specified is an L-valued regular grammar. Thus, from Eqs. 1 and 2 we conclude that \(f_{G_1 } \subseteq f_G .\)

To prove the reverse inclusion, suppose that for some θ∈T* there is a derivation \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta \) in G. Let G2=(N, T, P∪P1, S), so that G2 has all the productions of both G1 and G; then certainly there is a derivation

$$ S{\mathop \Rightarrow \limits^\rho}^* \,\theta $$
(3)

in G2 with \(\rho _{G_2 } (S \longrightarrow \theta ) = \rho _G (S \longrightarrow \theta ).\)

We shall show that there is a derivation of θ in G1 by induction on the number of symbols from N\N1 appearing in the derivation (3). If no such symbol appears, then (3) is already a derivation in G1. Otherwise, the first appearance of a symbol of N\N1 is based either on a production

$$ x \to a_1 \xi _1 \quad {\text{with}}\;\rho _{G_2 } (x \to a_1 \xi _1 ) = \rho _G (x \to a_1 \xi _1 ), $$

where x→a1a2...a_m y is a production in G1, or on a production

$$ x \to b_1 \eta _1 \quad {\text{with}}\;\rho _{G_2 } (x \to b_1 \eta _1 ) = \rho _G (x \to b_1 \eta _1 ), $$

where x→b1b2...b_n is a production in G1. Consider the first case: since the ξ_i belong to N\N1, the only way the derivation can continue through them is by the changes from ξ1 to a2ξ2, ..., ξ_{m−1} to a_m y. Then the sequence of transitions

$$ \begin{aligned} x \to a_1 \xi _1 ,\xi _1 \to a_2 \xi _2 , \ldots ,\xi _{m - 1} \to & a_m y \\ {\text{with}}\quad \rho _{G_2 } (x \to a_1 \xi _1 ) = & \rho _G (x \to a_1 \xi _1 ),\rho _{G_2 } (\xi _1 \to a_2 \xi _2 ) \\ = & \rho _G (\xi _1 \to a_2 \xi _2 ), \ldots ,\rho _{G_2 } (\xi _{m - 1} \to a_m y) \\ = & \rho _G (\xi _{m - 1} \to a_m y) \\ \end{aligned} $$

can be replaced by the single transition x→a1a2...a_m y in G2 with \(\begin{aligned} \rho _{G_2 } (x \to a_1 a_2 \ldots a_m y) = & \rho _{G_2 } (x \to a_1 \xi _1 ) \bullet \rho _{G_2 } (\xi _1 \to a_2 \xi _2 ) \bullet \ldots \\ & \bullet \rho _{G_2 } (\xi _{m - 1} \to a_m y) = \rho _G (x \to a_1 \xi _1 ) \bullet \rho _G (\xi _1 \to a_2 \xi _2 ) \bullet \ldots \\ & \bullet \rho _G (\xi _{m - 1} \to a_m y) = \rho _G (x \to a_1 a_2 \ldots a_m y) = \rho _{G_1 } (x \to a_1 a_2 \ldots a_m y). \\ \end{aligned} \)

Similarly, in the second case the derivation must subsequently involve the changes from η1 to b2η2, ..., η_{n−1} to b_n, and these n transitions can be replaced by a single transition in G2 from x to b1b2...b_n with

$$ \begin{aligned} \rho _{G_2 } (x \to b_1 b_2 \ldots b_n ) = & \rho _{G_2 } (x \to b_1 \eta _1 ) \bullet \rho _{G_2 } (\eta _1 \to b_2 \eta _2 ) \bullet \ldots \\ & \bullet \rho _{G_2 } (\eta _{n - 1} \to b_n ) = \rho _G (x \to b_1 \eta _1 ) \bullet \rho _G (\eta _1 \to b_2 \eta _2 ) \bullet \ldots \\ & \bullet \rho _G (\eta _{n - 1} \to b_n ) = \rho _G (x \to b_1 b_2 \ldots b_n ) = \rho _{G_1 } (x \to b_1 b_2 \ldots b_n ). \\ \end{aligned} $$

In both cases, the derivation (3) is replaced by one with fewer occurrences of symbols from N\N1 and with \(\rho _{G_2 } (\theta ) = \rho _G (\theta );\) it then follows by induction that \(f_{G_2 } \subseteq f_{G_1 } .\) Hence certainly \(f_G \subseteq f_{G_1 } .\)

Thereby, \(f_G = f_{G_1 } ;\) that is, LWRG and LRG are equivalent as far as the languages they generate are concerned. □

Example 2.2. Suppose G1=(N1, T, P1, S) is an LWRG, where N1={S, A, B}, T={a, b, c},

$$ P_1 = \left\{ {S\xrightarrow{{\rho _1 }}abA,A\xrightarrow{{\rho _2 }}bbB,B\xrightarrow{{\rho _3 }}cA,A\xrightarrow{{\rho _4 }}ac,B\xrightarrow{{\rho _5 }}bc} \right\}. $$

Construct an LRG G=(N, T, P, S), where

$$ \begin{aligned} P = & \left\{ {S\xrightarrow{e}a\xi _1 ,\xi _1 \xrightarrow{{\rho _1 }}bA,A\xrightarrow{e}b\xi _2 ,\xi _2 \xrightarrow{{\rho _2 }}bB,B\xrightarrow{{\rho _3 }}cA,A\xrightarrow{e}} \right.a\eta _1 , \\ & \quad \left. {\eta _1 \xrightarrow{{\rho _4 }}c,B\xrightarrow{e}b\eta _2 ,\eta _2 \xrightarrow{{\rho _5 }}c} \right\}\;{\text{and}}\;N = N_1 \cup \left\{ {\xi _1 ,\xi _2 ,\eta _1 ,\eta _2 } \right\}. \\ \end{aligned} $$

Clearly, for θ=ab³cac the derivation of θ in G1 is as follows:

$$ S\xrightarrow{{\rho _1 }}abA\xrightarrow{{\rho _2 }}abbbB\xrightarrow{{\rho _3 }}ab^3 cA\xrightarrow{{\rho _4 }}ab^3 cac, $$

so \(f_{G_1 } (\theta ) = \rho _1 \bullet \rho _2 \bullet \rho _3 \bullet \rho _4 .\)

The derivation of θ in G is as follows:

$$ S\xrightarrow{e}a\xi _1 \xrightarrow{{\rho _1 }}abA\xrightarrow{e}ab^2 \xi _2 \xrightarrow{{\rho _2 }}ab^3 B\xrightarrow{{\rho _3 }}ab^3 cA\xrightarrow{e}ab^3 ca\eta _1 \xrightarrow{{\rho _4 }}ab^3 cac, $$

then \(f_G (\theta ) = e \bullet \rho _1 \bullet e \bullet \rho _2 \bullet \rho _3 \bullet e \bullet \rho _4 = \rho _1 \bullet \rho _2 \bullet \rho _3 \bullet \rho _4 = f_{G_1 } (\theta ).\)
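
The splitting step used in the proof of Theorem 2.1 to build G from G1 is purely structural: every intermediate production introduced receives the grade e, and only the last production keeps the original grade, so no multiplication is performed during the construction itself. The following Python sketch (ours) stores a production as a tuple (left-hand side, list of terminals, tail nonterminal or None, grade), with grades left symbolic; applied to P1 of Example 2.2 it reproduces the production set P above, up to the spelling of the fresh nonterminals (X1, ..., X4 instead of ξ1, ξ2, η1, η2).

```python
def split_production(lhs, terminals, tail, grade, fresh):
    """Split x -> a1...am y (or x -> b1...bn) into chain productions as in Theorem 2.1.

    Assumes at least one terminal; every intermediate production gets grade 'e',
    the last one keeps `grade`.  `fresh` yields unused nonterminal names.
    """
    out, cur = [], lhs
    for a in terminals[:-1]:
        nxt = next(fresh)                       # new nonterminal (a ξ_k or η_k)
        out.append((cur, a, nxt, "e"))
        cur = nxt
    out.append((cur, terminals[-1], tail, grade))
    return out

# Example 2.2: the productions P1 of the LWRG G1, with symbolic grades rho1..rho5.
P1 = [("S", ["a", "b"], "A", "rho1"),
      ("A", ["b", "b"], "B", "rho2"),
      ("B", ["c"], "A", "rho3"),
      ("A", ["a", "c"], None, "rho4"),
      ("B", ["b", "c"], None, "rho5")]

fresh = (f"X{k}" for k in range(1, 100))        # fresh nonterminals X1, X2, ...
P = [q for prod in P1 for q in split_production(*prod, fresh)]
for q in P:
    print(q)
```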

3 Lattice-valued grammars and LA

In Theorems 3.1 and 3.2, we shall show that L-valued regular languages coincide with LA-languages.

Theorem 3.1 Let G=(N, T, P, S) be an L-valued regular grammar. Then there exists an LA M such that f_M=f_G.

Proof Construct an LA M=(X, U, δ, σ0, σ1), where U=T, X=N∪{Z}, σ0=e/S, σ1=(e/Z)+(f_G(Λ)/S), and

$$ \delta (x,u,y) = \left\{ {\begin{array}{*{20}l} {\rho (x \to uy)} & {x,y \in N} \\ {\rho (x \to u)} & {x \in N,y = Z} \\ 0 & {{\text{otherwise}}} \\ \end{array} } \right. $$

For any σ, τ∈(T∪N)*, \(\rho (\sigma \to \tau ) = \vee \{ \rho :\sigma {\mathop \Rightarrow \limits^\rho}^* \,\tau \} .\) We show f_G=f_M below.

Let θ∈T*. If θ=Λ, then f_M(θ)=f_G(Λ), as desired.

When θ≠Λ, suppose θ=u1u2...u_n, where u_i∈T for i=1, 2, ..., n. If \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta ,\) there must exist some derivation of θ of the form

$$ S\mathop \Rightarrow \limits^{\rho _1 } u_1 A_1 \mathop \Rightarrow \limits^{\rho _2 } u_1 u_2 A_2 \Rightarrow \ldots \mathop \Rightarrow \limits^{\rho _{n - 1} } u_1 u_2 \ldots u_{n - 1} A_{n - 1} \mathop \Rightarrow \limits^{\rho _n } u_1 u_2 \ldots u_{n - 1} u_n , $$

where \(A_{i - 1} \xrightarrow{{\rho _i }}u_i A_i \in P\) for i=1, 2, ..., n−1, A0=S, and the last production is \(A_{n - 1} \xrightarrow{{\rho _n }}u_n \in P,\) with ρ=ρ1 • ρ2 • ... • ρ_n.

Turning to the constructed LA, by the definition of δ we get that

$$ \delta ^* (S,\theta ,Z) \geq \delta (S,u_1 ,A_1 ) \bullet \delta (A_1 ,u_2 ,A_2 ) \bullet \ldots \bullet \delta (A_{n - 1} ,u_n ,Z) = \rho _1 \bullet \rho _2 \bullet \ldots \bullet \rho _n = \rho , $$

then f_M(θ)=σ0 ○ δ_θ ○ σ1 ≥ σ0(S) • δ*(S, θ, Z) • σ1(Z)=δ*(S, θ, Z) ≥ ρ.

Thus, we have shown that f_G(θ)≤f_M(θ) for any θ∈U*, that is, f_G⊆f_M.

Conversely, f_M(θ)=δ*(S, θ, Z)≥δ(S, u1, A1) • δ(A1, u2, A2) • ... • δ(A_{n−1}, u_n, Z) for any A1, ..., A_{n−1}∈X. Let δ(S, u1, A1)=ρ1, ..., δ(A_{n−1}, u_n, Z)=ρ_n and ρ=ρ1 • ρ2 • ... • ρ_n; then there exists a corresponding derivation \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta \) in G. Since f_M(θ) is the supremum of all such ρ and each such ρ≤f_G(θ), we get f_M(θ)≤f_G(θ), that is, f_M⊆f_G.

Therefore f_M=f_G; that is, L-valued regular languages can be recognized by LAs. □

We give an example to illustrate the proof of the above theorem.

Example 3.1. Let L=[0, 1], let • be the ordinary multiplication of real numbers, and let G=(N, T, P, S) be an LRG, where N={S, A, B} and T={a, b}. The productions in P are as follows:

$$ P:S\xrightarrow{{0.5}}aA,S\xrightarrow{{0.6}}aB,A\xrightarrow{{0.7}}aA,B\xrightarrow{{0.4}}aB,A\xrightarrow{{0.3}}b,B\xrightarrow{{0.8}}b. $$

Construct an LA, M=(X, U, δ, σ0, σ1), where

$$ U = \{ a,b\} ,X = \{ S,A,B,Z\} ,\sigma _0 = \frac{e} {S},\sigma _1 = \frac{e} {Z} + \frac{{f_G (\Lambda )}} {S}, $$

and the transition function is given by

$$ \begin{array}{*{20}l} {\delta (S,a,A) = 0.5} & {\delta (S,a,B) = 0.6} & {\delta (A,a,A) = 0.7,} \\ {\delta (B,a,B) = 0.4} & {\delta (A,b,Z) = 0.3} & {\delta (B,b,Z) = 0.8.} \\ \end{array} $$

For θ=a³b, the derivations of θ are as follows:

$$ \begin{array}{*{20}l} {({\text{i}})\;S\xrightarrow{{0.5}}aA\xrightarrow{{0.7}}aaA\xrightarrow{{0.7}}aaaA\xrightarrow{{0.3}}aaab.} \\ {({\text{ii}})\;S\xrightarrow{{0.6}}aB\xrightarrow{{0.4}}aaB\xrightarrow{{0.4}}aaaB\xrightarrow{{0.8}}aaab.} \\ \end{array} $$

So f_G(a³b)=(0.5 × 0.7 × 0.7 × 0.3) ∨ (0.6 × 0.4 × 0.4 × 0.8)=0.0768, and f_M(a³b)=δ*(S, a³b, Z)=[δ(S, a, A) • δ(A, a, A) • δ(A, a, A) • δ(A, b, Z)] ∨ [δ(S, a, B) • δ(B, a, B) • δ(B, a, B) • δ(B, b, Z)]=0.0768. Thereby f_M(θ)=f_G(θ).
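
The value 0.0768 can be recomputed mechanically by forward-propagating δ* from S, exactly as in the sketch given after Definition 2.3; since σ0=e/S and σ1=e/Z here, f_M(θ)=δ*(S, θ, Z). The snippet below is self-contained and ours, not part of the example.

```python
# Example 3.1: f_M(a^3 b) by forward propagation of δ* from S (product monoid, e = 1).
X = ["S", "A", "B", "Z"]
delta = {("S", "a", "A"): 0.5, ("S", "a", "B"): 0.6, ("A", "a", "A"): 0.7,
         ("B", "a", "B"): 0.4, ("A", "b", "Z"): 0.3, ("B", "b", "Z"): 0.8}
grades = {x: (1.0 if x == "S" else 0.0) for x in X}            # δ*(S, Λ, ·)
for u in "aaab":
    grades = {y: max(grades[z] * delta.get((z, u, y), 0.0) for z in X) for y in X}
print(round(grades["Z"], 6))                                   # 0.0768 = f_G(a^3 b)
```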

Theorem 3.2 Let M=(X, U, δ, σ0, σ1) be an LA. Then there exists an LRG G such that f_G=f_M.

Proof We define an LRG G=(N, T, P, S) as follows: T=U and N=X∪{S}. The productions in P are defined as follows:

  1. (i)

    \(x\xrightarrow{{\rho _1 }}uy \in P\) for each δ(x, u, y)≠ 0, where ρ1=δ(x, u, y).

  2. (ii)

    If δ(x, u, y)≠ 0 and σ1(y)≠ 0, then \(x\xrightarrow{{\rho _2 }}u \in P,\) where \(\rho _2 = \bigvee\nolimits_{y \in X} {[\delta (x,u,y) \bullet \sigma _1 (y)]} .\)

  3. (iii)

    If δ(x, u, y)≠ 0 and σ0(x)≠ 0, then \(S\xrightarrow{{\rho _3 }}uy \in P,\) where \(\rho _3 = \bigvee\nolimits_{x \in X} {[\sigma _0 (x) \bullet \delta (x,u,y)]} .\)

  4. (iv)

    If δ(x, u, y)≠ 0, σ0(x)≠ 0 and σ1(y)≠ 0, then \(S\xrightarrow{{\rho _4 }}u \in P(u \in U \cup \{ \Lambda \} ),\) where \(\rho _4 = \bigvee\nolimits_{x,y \in X} {[\sigma _0 (x) \bullet \delta (x,u,y) \bullet \sigma _1 (y)]} .\)

Obviously, such a grammar G is an LRG. We show that f_G=f_M below.

Let θ∈T*. If θ=Λ, then f_G(Λ)=ρ(S → Λ)=f_M(Λ).

When θ≠Λ, suppose θ=u1u2...u_n, where u_i∈T for i=1, 2, ..., n.

If \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta ,\) there must exist some derivation of θ of the form

$$ S\mathop \Rightarrow \limits^{\rho _1 } u_1 A_1 \mathop \Rightarrow \limits^{\rho _2 } u_1 u_2 A_2 \Rightarrow \ldots \mathop \Rightarrow \limits^{\rho _{n - 1} } u_1 u_2 \ldots u_{n - 1} A_{n - 1} \mathop \Rightarrow \limits^{\rho _n } u_1 u_2 \ldots u_{n - 1} u_n = \theta , $$

where \(A_{i - 1} \xrightarrow{{\rho _i }}u_i A_i \in P,i = 1,2, \ldots n\;{\text{and}}\;A_0 = S,A_n = u_n \;{\text{with}}\;\rho = \rho _1 \bullet \rho _2 \bullet \ldots \bullet \rho _n .\) From the definition of P, we know that

$$ \begin{aligned} f_M (\theta ) = & \sigma _0 \circ \delta _\theta \circ \sigma _1 = \bigvee\nolimits_{A_0 ,A_n \in X} {[\sigma _0 (A_0 ) \bullet \delta ^* (S,\theta ,A_n )]} \\ \geq & \bigvee\nolimits_{A_0 ,A_n \in X} {[\sigma _0 (A_0 ) \bullet \delta (S,u_1 ,A_1 ) \bullet \delta (A_1 ,u_2 ,A_2 ) \bullet \ldots \bullet \delta (A_{n - 1} ,u_n ,A_n ) \bullet \sigma _1 (A_n )]} \\ = & \bigvee\nolimits_{A_0 \in X} {[\sigma _0 (A_0 ) \bullet \delta (S,u_1 ,A_1 )]} \bullet \delta (A_1 ,u_2 ,A_2 ) \bullet \ldots \bullet \bigvee\nolimits_{A_n \in X} {[\delta (A_{n - 1} ,u_n ,A_n ) \bullet \sigma _1 (A_n )]} \\ = & \rho _1 \bullet \rho _2 \bullet \ldots \bullet \rho _n = \rho \\ \end{aligned} $$

Thus, we have shown that f_G(θ)≤f_M(θ) for any θ∈U*, that is, f_G⊆f_M.

Conversely,

$$ \begin{aligned} f_M (\theta ) = & \sigma _0 \circ \delta _\theta \circ \sigma _1 = \bigvee\nolimits_{A_1 , \ldots ,A_n \in X} {(\sigma _0 (A_0 ) \bullet \delta (S,u_1 ,A_1 ) \bullet \delta (A_1 ,u_2 ,A_2 ) \bullet \ldots \bullet \delta (A_{n - 1} ,u_n ,A_n ) \bullet \sigma _1 (A_n ))} \\ \geq & \bigvee\nolimits_{A_0 \in X} {[\sigma _0 (A_0 ) \bullet \delta (S,u_1 ,A_1 )]} \bullet \delta (A_1 ,u_2 ,A_2 ) \bullet \ldots \bullet \bigvee\nolimits_{A_n \in X} {[\delta (A_{n - 1} ,u_n ,A_n ) \bullet \sigma _1 (A_n )]} . \\ \end{aligned} $$

For any A1, ..., A_n∈X, let \(\begin{array}{*{20}l} {\bigvee\nolimits_{A_0 \in X} {[\sigma _0 (A_0 ) \bullet \delta (S,u_1 ,A_1 )]} \bullet \delta (A_1 ,u_2 ,A_2 ) \bullet \ldots \bullet \bigvee\nolimits_{A_n \in X} {[\delta (A_{n - 1} ,u_n ,A_n ) \bullet \sigma _1 (A_n )]} } \\ {\quad = \rho _1 \bullet \rho _2 \bullet \ldots \bullet \rho _n = \rho ;} \\ \end{array} \) then there exists a corresponding derivation \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta \) in G. Since f_M(θ) is the supremum of all such ρ and each such ρ≤f_G(θ), it follows that f_M(θ)≤f_G(θ) for any θ∈U*, that is, f_M⊆f_G.

Therefore f_M=f_G; that is, the languages recognized by LAs are L-valued regular languages. □

We give an example to illustrate the proof of the above theorem.

Example 3.2 Let L=[0, 1], let • be the ordinary multiplication of real numbers, and let M=(X, U, δ, σ0, σ1) be an LA, where U={u1, u2}, X={x1, x2, x3}, σ0=(0.9/x1)+(0.5/x2), σ1=(0.5/x2)+(0.8/x3), and

$$ \delta (x_1 ,u_1 ,x_2 ) = 0.8,\delta (x_2 ,u_1 ,x_3 ) = 0.7,\delta (x_3 ,u_1 ,x_3 ) = 0.4,\delta (x_3 ,u_2 ,x_2 ) = 0.5. $$

Construct an LRG G=(N, T, P, S), where N={S, x1, x2, x3} and T={u1, u2}. The productions in P are as follows:

$$ \begin{array}{*{20}l} {S\xrightarrow{{0.36}}u_1 ,S\xrightarrow{{0.72}}u_1 x_2 ,S\xrightarrow{{0.35}}u_1 x_3 ,x_1 \xrightarrow{{0.8}}u_1 x_2 ,x_2 \xrightarrow{{0.7}}u_1 x_3 ,x_3 \xrightarrow{{0.4}}u_1 x_3 ,x_3 \xrightarrow{{0.5}}u_2 x_2 ,} \\ {x_1 \xrightarrow{{0.4}}u_1 ,x_2 \xrightarrow{{0.56}}u_1 ,x_3 \xrightarrow{{0.32}}u_1 ,x_3 \xrightarrow{{0.25}}u_2 .} \\ \end{array} $$

For θ=u1²u2u1,

$$ \begin{aligned} f_M (\theta ) = & \sigma _0 \circ \delta _\theta \circ \sigma _1 \\ = & [\sigma _0 (x_1 ) \bullet \delta ^* (x_1 ,\theta ,x_3 ) \bullet \sigma _1 (x_3 )]\bigvee {[\sigma _0 (x_2 ) \bullet \delta ^* (x_2 ,\theta ,x_3 ) \bullet \sigma _1 (x_3 )]} = 0.14112. \\ \end{aligned} $$

Correspondingly, the derivations of θ in G are as follows:

$$ \begin{array}{*{20}l} {({\text{i}})\;S\xrightarrow{{\rho _1 }}u_1 x_2 \xrightarrow{{\rho _2 }}u_1 u_1 x_3 \xrightarrow{{\rho _3 }}u_1 u_1 u_2 x_2 \xrightarrow{{\rho _4 }}u_1 u_1 u_2 u_1 .} \\ {({\text{ii}})\;S\xrightarrow{{\rho '_1 }}u_1 x_3 \xrightarrow{{\rho '_2 }}u_1 u_1 x_3 \xrightarrow{{\rho '_3 }}u_1 u_1 u_2 x_2 \xrightarrow{{\rho '_4 }}u_1 u_1 u_2 u_1 .} \\ \end{array} $$

Furthermore in (i), we have,

$$ \begin{aligned} \rho _1 = & \sigma _0 (x_1 ) \bullet \delta (x_1 ,u_1 ,x_2 ) = 0.9 \times 0.8 = 0.72, \\ \rho _2 = & \delta (x_2 ,u_1 ,x_3 ) = 0.7, \\ \rho _3 = & \delta (x_3 ,u_2 ,x_2 ) = 0.5, \\ \rho _4 = & \delta (x_2 ,u_1 ,x_3 ) \bullet \sigma _1 (x_3 ) = 0.56, \\ \rho = & \rho _1 \bullet \rho _2 \bullet \rho _3 \bullet \rho _4 = 0.72 \times 0.7 \times 0.5 \times 0.56 = 0.14112, \\ \end{aligned} $$

in (ii)

$$ \begin{aligned} \rho ^{\prime}_1 = & \sigma _0 (x_2 ) \bullet \delta (x_2 ,u_1 ,x_3 ) = 0.5 \times 0.7 = 0.35, \\ \rho ^{\prime}_2 = & \delta (x_3 ,u_1 ,x_3 ) = 0.4, \\ \rho ^{\prime}_3 = & \delta (x_3 ,u_2 ,x_2 ) = 0.5, \\ \rho ^{\prime}_4 = & \delta (x_2 ,u_1 ,x_3 ) \bullet \sigma _1 (x_3 ) = 0.56, \\ \rho ^{\prime} = & \rho ^{\prime}_1 \bullet \rho ^{\prime}_2 \bullet \rho ^{\prime}_3 \bullet \rho ^{\prime}_4 = 0.35 \times 0.4 \times 0.5 \times 0.56 = 0.0392. \\ \end{aligned} $$

Then f_G(θ)=ρ ∨ ρ′=0.14112; thus f_M(θ)=f_G(θ).
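
The production grades listed in this example can be recomputed directly from the four clauses (i)-(iv) in the proof of Theorem 3.2. The following self-contained Python sketch (ours; product monoid, as in the example) rebuilds all eleven productions together with their grades:

```python
# Example 3.2: productions of the LRG of Theorem 3.2, rebuilt from the LA data.
X, U = ["x1", "x2", "x3"], ["u1", "u2"]
sigma0 = {"x1": 0.9, "x2": 0.5}
sigma1 = {"x2": 0.5, "x3": 0.8}
delta = {("x1", "u1", "x2"): 0.8, ("x2", "u1", "x3"): 0.7,
         ("x3", "u1", "x3"): 0.4, ("x3", "u2", "x2"): 0.5}
d = lambda x, u, y: delta.get((x, u, y), 0.0)

P = {}
for x in X:
    for u in U:
        for y in X:
            if d(x, u, y) > 0:                                     # (i)  x -> u y, grade δ(x, u, y)
                P[(x, u + " " + y)] = d(x, u, y)
        r2 = max(d(x, u, y) * sigma1.get(y, 0.0) for y in X)       # (ii) x -> u
        if r2 > 0:
            P[(x, u)] = r2
for u in U:
    for y in X:
        r3 = max(sigma0.get(x, 0.0) * d(x, u, y) for x in X)       # (iii) S -> u y
        if r3 > 0:
            P[("S", u + " " + y)] = r3
    r4 = max(sigma0.get(x, 0.0) * d(x, u, y) * sigma1.get(y, 0.0)
             for x in X for y in X)                                # (iv) S -> u
    if r4 > 0:
        P[("S", u)] = r4

for (lhs, rhs), grade in sorted(P.items()):
    print(lhs, "->", rhs, round(grade, 4))
```

The eleven printed productions and their grades agree with the display above.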

Corollary 3.1 L-valued regular grammars are equivalent to LAs.

Proof Straightforward by Theorem 3.1 and Theorem 3.2.□

By Theorem 2.1 and Corollary 3.1 we immediately obtain the following.

Corollary 3.2 L-valued automata are equivalent to L-valued weak regular grammars.

4 Characterization of DLA by L-valued regular grammars

Definition 4.1 (1) G=(N, T, P, S) is called an L-valued deterministic regular grammar (DLRG, for short) if every production in P has one of the following three forms:

$$ A\xrightarrow{e}uB\;{\text{or}}\;A\xrightarrow{\rho }u\;{\text{or}}\;S\xrightarrow{\rho }\Lambda ,\quad {\text{where}}\;A,B \in N,u \in T,\quad {\text{and}}\;\rho \in L - \{ 0\} . $$

Accordingly, f_G is called an L-valued deterministic regular language. Obviously, a DLRG is a special kind of LRG.

Similarly, if u_i∈(N∪T)* for i=1, 2, ..., p and u_{i+1} is directly derivable from u_i for i=1, 2, ..., p−1, we say that u_p is derivable from u1 and write

$$ u_1 {\mathop \Rightarrow \limits^\rho}^* \,u_p $$

We call

$$ u_1 \mathop \Rightarrow \limits^{\rho _1 } u_2 \mathop \Rightarrow \limits^{\rho _2 } \ldots \mathop \Rightarrow \limits^{\rho _{p - 1} } u_p $$

the derivation chain of u_p (from u1). However, here ρ1=ρ2=...=ρ_{p−2}=e and ρ=ρ1 • ρ2 • ... • ρ_{p−1}=ρ_{p−1}.

(2) The L-language f_G∈F(T*) generated by a DLRG G is defined as follows:

$$ f_G (\theta ) = \bigvee {\left\{ {\rho :S{\mathop \Rightarrow \limits^\rho}^* \,\theta } \right\}} $$

Remark 4.1 Suppose that G is a DLRG; then the image set of f_G is finite, that is, R_f={f_G(θ) | θ∈T*} is a finite subset of L.

Since \(f_G (\theta ) = \vee \{ \rho :S {\mathop \Rightarrow \limits^\rho}^* \,\theta \} ,\) it is clear that R_f is completely determined by the productions of the form \(A\xrightarrow{\rho }u.\) Since P is finite, there are only finitely many such ρ.

Lemma 4.1 (Li 2003) Let M=(X, U, δ, x0, σ1) be an LA, where δ: X×U → 2^X, σ1: X → L, x0∈X, and \(f_M (\theta ) = \bigvee {[\sigma _1 (x):x \in \delta ^* (x_0 ,\theta )]} .\) Then there exists a DLA N=(Y, U, η, y0, τ) such that f_M=f_N.

Proof Take Y=2^X, define η: Y×U → Y by \(\eta (Z,u) = \cup _{x \in Z} \delta (x,u),\) let y0={x0}, and take τ: Y → L as \(\tau (Z) = \bigvee\nolimits_{x \in Z} {\sigma_1 (x).} \) Then N=(Y, U, η, y0, τ) is a DLA, and η*(y0, θ)=δ*(x0, θ). Thus \(f_N (\theta ) = \tau (\eta ^* (y_0 ,\theta )) = \bigvee\nolimits_{x \in \eta ^* (y_0 ,\theta )} {\sigma_1 (x)} = \bigvee\nolimits_{x \in \delta ^* (x_0 ,\theta )} {\sigma_1 (x)} = f_M (\theta ),\) that is, f_N=f_M. □
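
Lemma 4.1 is the familiar subset construction, applied to the crisp transition map and pushing σ1 through by suprema. A minimal Python sketch (ours), assuming δ is given as a dict from (state, symbol) to a set of states and the join on L is max; only subsets reachable from {x0} are generated, which does not affect f_N, since f_N(θ)=τ(η*(y0, θ)) only ever evaluates reachable subsets.

```python
def determinize(U, delta, x0, sigma1, join=max, zero=0.0):
    """Subset construction of Lemma 4.1: returns (Y, eta, y0, tau) of the DLA N.

    delta maps (state, symbol) -> set of next states; tau(Z) = ∨_{x ∈ Z} σ1(x).
    """
    y0 = frozenset({x0})
    Y, eta, tau = {y0}, {}, {}
    stack = [y0]
    while stack:
        Z = stack.pop()
        t = zero
        for x in Z:
            t = join(t, sigma1.get(x, zero))
        tau[Z] = t
        for u in U:
            nxt = frozenset(y for x in Z for y in delta.get((x, u), set()))
            eta[(Z, u)] = nxt
            if nxt not in Y:
                Y.add(nxt)
                stack.append(nxt)
    return Y, eta, y0, tau
```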

Theorem 4.1 L-valued deterministic regular languages can be accepted by DLA.

Proof Let G=(N, T, P, S) be a DLRG, and construct an LA M=(X, U, δ, x0, σ1), of the form considered in Lemma 4.1, where U=T and x0=S. Examine all the productions in G, and let P1 denote the set of productions of the form \(A\xrightarrow{\rho }u,\) that is, \(P_1 = \left\{ {x\xrightarrow{{\rho _i }}u|i = 1,2, \ldots ,m} \right\},\) where m is the number of such productions.

For any (x, u)∈N×T, define δ as follows:

$$ \delta (x,u) = \left\{ {\begin{array}{*{20}l} {\left\{ {y|x\xrightarrow{e}uy \in P} \right\} \cup \left\{ {Z_i } \right\}} & {x\xrightarrow{\rho }u \in P_1 } \\ {\left\{ {y|x\xrightarrow{e}uy \in P} \right\}} & {x\xrightarrow{\rho }u \notin P_1 } \\ \end{array} } \right. $$

and define σ1 as σ1=(ρ1/Z1)+(ρ2/Z2)+...+(ρ_m/Z_m)+(f_G(Λ)/S), where ρ_i is the grade of the i-th production x→u in P1. Furthermore, X=N∪{Z_i | i=1, 2, ..., m}.
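
This construction of δ and σ1 is easy to mechanize. The sketch below (ours) takes the chain productions x →e uy and the terminal productions x →ρ u as two lists and returns the set-valued transition map, the initial state, and σ1 in exactly the format expected by the determinize sketch given after Lemma 4.1:

```python
def dlrg_to_la(chain, final, start, f_lambda=0.0):
    """Theorem 4.1 (sketch): build the set-valued LA from a DLRG.

    chain: list of (x, u, y) for productions x ->e u y;
    final: list of (x, u, rho) for productions x ->rho u;
    f_lambda: the value f_G(Λ), attached to the start symbol S.
    """
    delta, sigma1 = {}, {}
    for i, (x, u, rho) in enumerate(final, 1):
        z = f"Z{i}"                                  # one new state Z_i per terminal production
        sigma1[z] = rho
        delta.setdefault((x, u), set()).add(z)
    for (x, u, y) in chain:
        delta.setdefault((x, u), set()).add(y)
    if f_lambda:
        sigma1[start] = f_lambda                     # σ1(S) = f_G(Λ)
    return delta, start, sigma1
```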

Let θ∈T*. If θ=Λ, then f_M(θ)=∨{σ1(x) : x∈δ*(x0, Λ)}=σ1(S)=f_G(Λ).

When θ≠Λ, suppose that θ=u1u2...u_n, where u_i∈T for i=1, 2, ..., n. If \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta ,\) there must exist some derivation of the form

$$ S\mathop \Rightarrow \limits^e u_1 A_1 \mathop \Rightarrow \limits^e u_1 u_2 A_2 \Rightarrow \ldots \mathop \Rightarrow \limits^e u_1 u_2 \ldots u_{n - 1} A_{n - 1} \mathop \Rightarrow \limits^\rho u_1 u_2 \ldots u_{n - 1} u_n , $$

where \(A_{i - 1} \xrightarrow{e}u_i A_i \in P\) for i=1, 2, ..., n−1, A0=S, and \(A_{n - 1} \xrightarrow{\rho }u_n \in P.\) Furthermore, ρ(S → θ)=ρ(A_{n−1} → u_n)=ρ.

Turning to the constructed LA, by the definition of δ and σ1 we get that A1∈δ(x0, u1); A2∈δ(A1, u2); ...; A_{n−1}∈δ(A_{n−2}, u_{n−1}); Z_t∈δ(A_{n−1}, u_n) for some 1≤t≤m.

Then f_M(θ)=∨{σ1(Z) : Z∈δ*(x0, θ)} ≥ σ1(Z_t)=ρ(A_{n−1} → u_n)=ρ. Thus, we have shown that f_G(θ)≤f_M(θ) for any θ∈U*, that is, f_G⊆f_M.

Conversely, if f_M(θ)=∨{σ1(Z) : Z∈δ*(x0, θ)} ≥ σ1(Z_t)=ρ(A_{n−1} → u_n)=ρ, then there exists a corresponding derivation \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta ;\) this implies that if ρ≤f_M(θ), then ρ≤f_G(θ), that is, f_M⊆f_G.

Therefore, f_M=f_G. From Lemma 4.1, f_M can also be accepted by a DLA; thus f_G can be accepted by a DLA. □

Theorem 4.2 The languages accepted by DLA are L-valued deterministic regular languages.

Proof Let M=(X, U, δ, x0, σ1) be a DLA, and define a grammar G=(N, T, P, S), where T=U, N=X, and S=x0.

Examine all the transitions of M: for every y=δ(x, u), where x, y∈X and u∈U, we define a corresponding production \(x\xrightarrow{e}uy \in P.\) Let Y={x∈X | σ1(x)≠0}, so \(Y \subseteq X.\)

If δ(x, u)∈Y, then we also define a production of the second kind: \(x\xrightarrow{{\rho _i }}u\) with ρ_i(x → u)=σ1(Z_i), where Z_i=δ(x, u) and Z_1, ..., Z_m is an enumeration of Y (m=|Y|). In addition, if σ1(x0)≠0, we include the production S → Λ with grade σ1(x0), so that f_G(Λ)=f_M(Λ).

Clearly, such a grammar G is a DLRG. We show f_M=f_G below.

Let θ∈T*. If θ=Λ, then f_G(θ)=ρ(S → Λ)=f_M(Λ).

When θ≠Λ, suppose that θ=u1u2...u_n, where u_i∈T for i=1, 2, ..., n; then the run of M on θ can be written as

$$ \delta (x_0 ,u_1 ) = A_1 ;\delta (A_1 ,u_2 ) = A_2 ; \ldots ,\delta (A_{n - 2} ,u_{n - 1} ) = A_{n - 1} ;\delta (A_{n - 1} ,u_n ) = Z(Z \in Y), $$

so f M (θ)=σ1*(x0, θ))=σ1(Z).

Correspondingly, from the construction of G we know that if \(S{\mathop \Rightarrow \limits^\rho}^* \,\theta ,\) there must exist some derivation of θ of the form

$$ S\mathop \Rightarrow \limits^e u_1 A_1 \mathop \Rightarrow \limits^e u_1 u_2 A_2 \Rightarrow \ldots \mathop \Rightarrow \limits^e u_1 u_2 \ldots u_{n - 1} A_{n - 1} \mathop \Rightarrow \limits^\rho u_1 u_2 \ldots u_{n - 1} u_n , $$

where \(A_{i - 1} \xrightarrow{e}u_i A_i \in P,i = 1,2, \ldots ,n - 1,A_0 = S = x_0 ;A_{n - 1} \xrightarrow{\rho }u_n \in P.\) Furthermore, ρ(S → θ)=ρ(A_{n−1} → u_n)=σ1(Z); thus f_G=f_M. □

Example 4.1 Let L=[0, 1], let • be the ordinary multiplication of real numbers, and let M=(X, U, δ, x0, σ1) be a DLA, where U={u1, u2}, X={x1, x2, x3, x4}, x0=x1, σ1=(0.5/x2)+(0.8/x3), δ(x1, u1)=x2, δ(x2, u1)=x3, δ(x3, u1)=x4, δ(x4, u2)=x3, and δ(x3, u2)=x2.

Construct an LRG G=(N, T, P, S), where N={x1, x2, x3, x4}, T={u1, u2}, S=x1. The fuzzy productions in P are as follows:

$$ \begin{array}{*{20}l} {({\text{a}})\;x_1 \xrightarrow{e}u_1 x_2 ,x_2 \xrightarrow{e}u_1 x_3 ,x_3 \xrightarrow{e}u_1 x_4 ,x_4 \xrightarrow{e}u_2 x_3 ,x_3 \xrightarrow{e}u_2 x_2 \;{\text{and}}} \\ {({\text{b}})\;x_1 \xrightarrow{{\rho _1 }}u_1 ,x_2 \xrightarrow{{\rho _2 }}u_1 ,x_4 \xrightarrow{{\rho _2 }}u_2 ,x_3 \xrightarrow{{\rho _1 }}u_2 } \\ \end{array} $$

and ρ1=σ1(x2)=0.5, ρ2=σ1(x3)=0.8.

For θ=u1³u2, its derivation in G is as follows: \(x_1 \xrightarrow{e}u_1 x_2 \xrightarrow{e}u_1^2 x_3 \xrightarrow{e}u_1^3 x_4 \xrightarrow{{\rho _2 }}u_1^3 u_2 ,\) so f_G(θ)=ρ2=0.8, and f_M(θ)=σ1(δ*(x1, θ))=σ1(x3)=0.8; thus f_M(θ)=f_G(θ).
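
The two production families (a) and (b) in this example can be read off mechanically from the DLA data, following the proof of Theorem 4.2. A small self-contained sketch (ours):

```python
# Example 4.1: productions of Theorem 4.2 built from the DLA data.
delta = {("x1", "u1"): "x2", ("x2", "u1"): "x3", ("x3", "u1"): "x4",
         ("x4", "u2"): "x3", ("x3", "u2"): "x2"}
sigma1 = {"x2": 0.5, "x3": 0.8}                      # Y = {x | σ1(x) ≠ 0} = {x2, x3}
chain = [(x, u, y, "e") for (x, u), y in delta.items()]                    # family (a): x ->e u y
final = [(x, u, sigma1[y]) for (x, u), y in delta.items() if y in sigma1]  # family (b): x ->ρ u
print(chain)
print(final)   # [('x1', 'u1', 0.5), ('x2', 'u1', 0.8), ('x4', 'u2', 0.8), ('x3', 'u2', 0.5)]
```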

Thereby, from Theorems 4.1 and 4.2 we immediately obtain the following conclusion:

Corollary 4.1. For an L-language f over a finite set U, f can be accepted by a DLA iff f can be generated by a DLRG.

In Li (2003), a necessary and sufficient condition on the lattice-ordered monoid L was given which ensures that any LA is equivalent to some DLA; from it we obtain a similar conclusion for LRG and DLRG.

Corollary 4.2. For any LRG G, there exists a DLRG G′ such that f_G=f_{G′} iff the lattice-ordered monoid L satisfies the following condition: for any finite subset L′ of L, the subalgebra of (L, •, ∨) generated by L′ is finite.

Distributive lattices always satisfy the condition stated in Corollary 4.2; that is, for any distributive lattice L, if L′ is a finite subset of L, then the sublattice of L generated by L′ is also finite. We thus deduce the following corollary.

Corollary 4.3 If (L, ∧, ∨) is a distributive lattice, then any LRG is equivalent to some DLRG.

5 Conclusion

Regular grammars with truth values in a lattice-ordered monoid, and the languages generated by them, are studied in this paper. We show the equivalence between LA and LRG in the sense of recognizing fuzzy languages. Since LA are not equivalent to DLA in their power of recognizing fuzzy languages, we define a special kind of regular grammar, the DLRG, and show that such grammars are equivalent to DLA.

Of course, much more work should be done in this direction. Other related topics, such as the minimization of LA, the special (algebraic) properties of LA languages from the point of view of level structure, and the formal model of computing with words (Ying 2002) based on LRG and DLRG, should be studied in future work.