1 Introduction

Hexagonal arrays generated by grammars are found in the literature with insight into computations in pattern recognition, picture processing, and scene analysis [4, 6]. Some of the classical formalisms to generate hexagonal arrays are hexagonal kolam array grammars (HKAG) [6] and hexagonal array rewriting grammars (HAG) [7]. In the HKAG model, hexagonal arrays on a triangular grid were viewed as two-dimensional representations of three-dimensional blocks, and several scene analytic transformations were discussed. Hexagonal array rewriting grammars (HAG) are a generalization of HKAG. Hexagonal Wang systems (HWS) and hexagonal tiling systems (HTS) are an equivalent pair of formal devices for the recognizable class of hexagonal arrays (HREC) [1]. Hexagonal tile rewriting grammars (HTRG) [8] and regional hexagonal tile rewriting grammars (RHTRG) [2] are more recent tiling-based hexagonal array rewriting models with greater generative capacity. In the generalization of rectangular grammar models, Prusa [5] has recently defined a grammar device which extends the rectangular context-free kolam array grammar model and gains generative capacity. This model generates rectangular pictures by parallel application of rules, thereby maintaining a grid-like structure at each stage of the derivation of a picture. Hexagonal grammar models with parallel derivation rules, however, are very rare in the literature, and the existing ones admit only limited growth patterns. Continuing this quest for generalization of hexagonal models, we propose a hexagonal version of the Prusa grammar to generate context-free hexagonal picture languages, and we compare its generative capacity with that of other existing models.

The paper is organized in the following manner. In Sect. 2, we recall some basic notions of hexagonal pictures and languages. In Sect. 3, we introduce the hexagonal Prusa grammar (HPG) with examples for illustration. In Sect. 4, we present comparison results of HPG with other hexagonal models with respect to their generative capacity.

2 Preliminaries

Let T be a finite alphabet of symbols. A hexagonal picture p over T is a hexagonal array of symbols of T.

With respect to a triad of triangular axes x, y, z, the coordinates of each element of hexagonal picture can be fixed [1]. The origin of reference is the upper left vertex, having co-ordinates (1, 1, 1).

The set of all hexagonal arrays over the alphabet T is denoted by T **H, and the set of all non-empty hexagonal arrays over T is denoted by T ++H. T + denotes the set of all non-empty strings in the three directions parallel to the triangular axes. A non-empty hexagonal picture language L over T is a subset of T ++H.

For p ∈ T ++H, let \( \hat{p} \) be the hexagonal array obtained by surrounding p with a special boundary symbol # \( \notin \) T.

Given a picture p ∈ T ++H, if ℓ, m, n denote the number of elements in the borders of p parallel to the x-, y-, z-directions, respectively, then the triple (ℓ, m, n) is called the size of the picture p, denoted by |p| = (ℓ, m, n). Let p ijk denote the symbol in p with coordinates (i, j, k), where 1 ≤ i ≤ ℓ, 1 ≤ j ≤ m, 1 ≤ k ≤ n. Let T (ℓ, m, n)H be the set of hexagonal pictures of size (ℓ, m, n). A typical hexagonal array of size (ℓ, m, n) can be denoted by [p ijk ](ℓ,m,n)H. A hexagonal picture of size (2, 2, 2) is called a hexagonal tile. We denote the set of all hexagonal tiles contained in a picture \( \hat{p} \) by [[\( \hat{p} \)]].
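As an illustration of this bookkeeping (and only of the bookkeeping: the triangular coordinate system of [1] imposes more structure than is modelled here), a hexagonal picture can be sketched as a mapping from coordinate triples (i, j, k) to symbols, with the size |p| = (ℓ, m, n) read off as the maximal index along each axis. All names below are ours, chosen for illustration:

```python
# Minimal sketch: a hexagonal picture as a map from coordinates (i, j, k)
# to symbols. Only the size computation of the text is modelled; the
# geometric constraints of the triangular coordinate system are omitted.
class HexPicture:
    def __init__(self, cells):
        # cells: dict mapping (i, j, k) coordinate triples to symbols
        self.cells = dict(cells)

    def size(self):
        """Return |p| = (l, m, n): the maximal index along each axis."""
        l = max(i for i, _, _ in self.cells)
        m = max(j for _, j, _ in self.cells)
        n = max(k for _, _, k in self.cells)
        return (l, m, n)

# A toy picture whose indices range over {1, 2} on each axis.
tile = HexPicture({(i, j, k): "a" for i in (1, 2) for j in (1, 2) for k in (1, 2)})
```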

The notions of non-convex hexagonal arrays called arrowheads, and arrowhead catenations in six directions are adapted as in [1, 6].

3 Hexagonal Prusa Grammar

In this section, we give a formal definition of hexagonal Prusa grammar and some simple examples of languages generated by these grammars.

Definition 1

A Hexagonal Prusa Grammar (HPG) is a tuple \( \left\langle {N,T,P,S} \right\rangle \) where N is a finite set of non-terminals, T is a finite set of terminals, P ⊆ N × [(N ∪ T) ++H ∪ (N ∪ T) +], and S ∈ N is the start symbol.

Definition 2

Let \( G = \left\langle {N,T,P,S} \right\rangle \) be a HPG. We define a hexagonal picture language L(G, C) over T, for every C ∈ N, by the following recursive rules.

  1. Terminal rule: If C → X is in P, and X ∈ (T ++H ∪ T +), then X ∈ L(G, C).

  2. Mixed rule: Let C → X be a production in P with

$$ X \in \bigcup\limits_{{\ell^{\prime } ,m^{\prime } ,n^{\prime } \ge 1}} {(N \cup T)^{{(\ell^{\prime } ,\,m^{\prime } ,\,n^{\prime } )H}} } $$

and let Q ijk (1 ≤ i ≤ ℓ′, 1 ≤ j ≤ m′, 1 ≤ k ≤ n′), where (ℓ′, m′, n′) = |X|, be the pictures such that

  (a) if X ijk ∈ T, then Q ijk  = X ijk .

  (b) if X ijk ∈ N, then Q ijk ∈ L[G, X ijk ].

Then, if \( Q = [Q_{ijk} ]^{{ (\ell^{\prime } ,m^{\prime } ,n^{\prime } )H}} \) is defined through string catenation or arrowhead catenation, then Q ∈ L[G, C].

The set L[G, C] contains all and only the pictures that can be obtained by applying a finite sequence of rules (1) and (2). The hexagonal language L[G] generated by the grammar G is defined to be the language L[G, S]. \( \mathcal{L}\left( {\text{HPG}} \right) \) is the class of all languages generated by these grammars. Languages in \( \mathcal{L}\left( {\text{HPG}} \right) \) are called HPG languages.
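To make the interplay of terminal and mixed rules concrete, the following sketch instantiates the recursion of Definition 2 in a deliberately simplified one-dimensional setting: pictures are plain strings, non-terminals are uppercase letters, terminals are lowercase, and the only catenation is string concatenation. The function name and encoding are our own illustrative choices, not part of the formal model:

```python
from itertools import product

def generate(rules, start, max_len):
    """Bounded fixpoint enumeration of L(G, C) for every non-terminal C.

    rules: dict mapping each non-terminal (uppercase) to a list of
    right-hand sides, each a tuple of symbols over N and T (lowercase).
    Mirrors the recursion of Definition 2 with strings in place of
    hexagonal arrays; every non-terminal must have at least one rule.
    """
    lang = {C: set() for C in rules}
    changed = True
    while changed:
        changed = False
        for C, rhss in rules.items():
            for rhs in rhss:
                if all(s.islower() for s in rhs):
                    # Terminal rule: the right-hand side is itself a picture.
                    candidates = ["".join(rhs)]
                else:
                    # Mixed rule: replace each non-terminal X by any picture
                    # already known to belong to L(G, X), then catenate.
                    choices = [[s] if s.islower() else sorted(lang[s]) for s in rhs]
                    candidates = ["".join(c) for c in product(*choices)]
                for w in candidates:
                    if len(w) <= max_len and w not in lang[C]:
                        lang[C].add(w)
                        changed = True
    return lang[start]

# A string analogue of the grammar of Example 1: S -> SH | aa, H -> a.
L = generate({"S": [("S", "H"), ("a", "a")], "H": [("a",)]}, "S", 5)
```

With the length bound 5, the fixpoint yields {aa, aaa, aaaa, aaaaa}, matching the repeated application of the mixed rule described in the definition.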

Example 1

The language

$$ L{}_{ 1} = \left\{ {\begin{array}{*{20}c} {} & a & {} & a & {} \\ a & {} & a & {} & a \\ {} & a & {} & a & {} \\ \end{array} ,\quad \begin{array}{*{20}c} {} & a & {} & a & {} & a & {} \\ a & {} & a & {} & a & {} & a \\ {} & a & {} & a & {} & a & {} \\ \end{array} ,\quad \begin{array}{*{20}c} {} & a & {} & a & {} & a & {} & a & {} \\ a & {} & a & {} & a & {} & a & {} & a \\ {} & a & {} & a & {} & a & {} & a & {} \\ \end{array} , \ldots } \right\} $$

is generated by HPG \( G_{ 1} = \left\langle {N,T,P,S} \right\rangle \) where N = {H, S}, T = {a} and

\( P = \left\{ {S \to SH ,\quad S \to \begin{array}{*{20}c} {} & a & {} & a & {} \\ a & {} & a & {} & a \\ {} & a & {} & a & {} \\ \end{array} \, ,\;\quad H \to \begin{array}{*{20}c} a & {} \\ {} & a \\ a & {} \\ \end{array} } \right\} \). By applying terminal rules, \( H \to \begin{array}{*{20}c} a & {} \\ {} & a \\ a & {} \\ \end{array} \, ,\;\quad S \to \begin{array}{*{20}c} {} & a & {} & a & {} \\ a & {} & a & {} & a \\ {} & a & {} & a & {} \\ \end{array} \) we get \( \begin{array}{*{20}c} a & {} \\ {} & a \\ a & {} \\ \end{array} \in L [G ,H ] \) and \( \begin{array}{*{20}c} {} & a & {} & a & {} \\ a & {} & a & {} & a \\ {} & a & {} & a & {} \\ \end{array} \in L [G ,S ] \).

Now, by applying mixed rule S → SH, we have \( \begin{array}{*{20}c} {} & a & {} & a & {} & a & {} \\ a & {} & a & {} & a & {} & a \\ {} & a & {} & a & {} & a & {} \\ \end{array} \in L [G ,S ] \).

The repeated application of mixed rule S → SH generates all the members of L 1.
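In the row-by-row rendering used in the displays above, this derivation can be mimicked directly; the helper below is a sketch (with names of our own choosing) that returns the member of L 1 obtained after k applications of the mixed rule S → SH:

```python
def l1_member(k):
    """L1 picture after k applications of S -> SH (k >= 0), rendered as
    three rows of symbols: top, middle, and bottom borders."""
    # Terminal rule for S: the smallest hexagon of L1.
    top, mid, bot = ["a", "a"], ["a", "a", "a"], ["a", "a"]
    for _ in range(k):
        # Each S -> SH step catenates H's picture: one 'a' per row.
        top.append("a")
        mid.append("a")
        bot.append("a")
    return top, mid, bot
```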

Example 2

The language

$$ L{}_{ 2} = \left\{ {\begin{array}{*{20}c} {} & 1& {} & 1& {} \\ 1& {} & 2& {} & 1\\ {} & 1& {} & 1& {} \\ \end{array} \, ,\;\;\begin{array}{*{20}c} {} & 1& {} & 1& {} & 1& {} \\ 1& {} & 2& {} & 2& {} & 1\\ {} & 1& {} & 1& {} & 1& {} \\ \end{array} \, ,\;\;\begin{array}{*{20}c} {} & 1& {} & 1& {} & 1& {} & 1& {} \\ 1& {} & 2& {} & 2& {} & 2& {} & 1\\ {} & 1& {} & 1& {} & 1& {} & 1& {} \\ \end{array} \, ,\; \ldots } \right\} $$

can be generated by the HPG \( G_{ 2} = \left\langle {N,T,P,S} \right\rangle \), where N = {S, A, B, H} and T = {1, 2}.

$$ P = \left\{ {S \to AH\; /\;\begin{array}{*{20}c} {} & 1& {} & 1& {} \\ 1& {} & 2& {} & 1\\ {} & 1& {} & 1& {} \\ \end{array} \, ,\quad A \to AB\; /\;\begin{array}{*{20}c} {} & 1& {} & 1& {} \\ 1& {} & 2& {} & 2\\ {} & 1& {} & 1& {} \\ \end{array} \, ,\quad B \to \begin{array}{*{20}c} 1& {} \\ {} & 2\\ 1& {} \\ \end{array} \, ,\quad H \to \begin{array}{*{20}c} 1& {} \\ {} & 1\\ 1& {} \\ \end{array} \,} \right\} $$

From the terminal rules, we have \( \begin{array}{*{20}c} {} & 1& {} & 1& {} \\ 1& {} & 2& {} & 1\\ {} & 1& {} & 1& {} \\ \end{array} \in L [G ,S ] \) and \( \begin{array}{*{20}c} 1 & {} \\ {} & 2 \\ 1 & {} \\ \end{array} \in L [G ,B ] \). Parallel application of the mixed rules A → AB and S → AH repeatedly produces the language L 2.
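As with Example 1, the members of L 2 can be enumerated directly from the derivation scheme; the helper below (an illustrative sketch, names ours) returns the picture with t occurrences of the symbol 2 in its middle row:

```python
def l2_member(t):
    """L2 picture with t >= 1 occurrences of 2 in the middle row.

    The terminal picture for S gives t = 1; each application of the
    mixed rule A -> AB contributes one further column with a 2.
    """
    top = ["1"] * (t + 1)
    mid = ["1"] + ["2"] * t + ["1"]
    bot = ["1"] * (t + 1)
    return top, mid, bot
```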

We now introduce non-terminal normal form for hexagonal Prusa grammars.

Definition 3

A hexagonal Prusa grammar \( G = \left\langle {N,T,P,S} \right\rangle \) is in non-terminal normal form (NNF) iff every rule in P has the form either C → t or C → X, where C ∈ N, X ∈ (N ++H ∪ N +), and t ∈ T.
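The NNF condition can be checked mechanically once the layout of right-hand sides is abstracted away to the list of symbols they contain; the helper below is an illustrative sketch of our own, not part of the formal development:

```python
def is_nnf(rules, nonterminals, terminals):
    """Check the NNF condition of Definition 3.

    rules: list of (C, rhs) pairs, where rhs is the list of symbols
    occurring in the right-hand side picture (its layout is irrelevant
    to the NNF condition, so it is abstracted away here).
    """
    for C, rhs in rules:
        if C not in nonterminals:
            return False
        if len(rhs) == 1 and rhs[0] in terminals:
            continue  # rule of the form C -> t
        if all(s in nonterminals for s in rhs):
            continue  # rule of the form C -> X with X over N only
        return False  # mixed terminals and non-terminals: not NNF
    return True
```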

4 Comparison Results

In this section, we present comparison results of HPG with other hexagonal models with respect to its generative power. We use \( \mathcal{L}\left[ X \right] \) to denote the family of languages generated or recognized by the device X.

Theorem 1

$$ \mathcal{L}\left[ {\text{CFHAG}} \right] \subset \mathcal{L}\left[ {\text{HPG}} \right]. $$

Proof

Consider a context-free hexagonal array grammar (CFHAG) [7] G in Chomsky normal form. It may contain six types of rules:

C → A \( \oplus_{1} \) B, C → A \( \oplus_{2} \) B, C → A \( \oplus_{3} \) B, and their duals, where \( \oplus_{1} ,\oplus_{2} ,\oplus_{3} \) denote the three catenation operations in the directions parallel to the triangular axes [1, 6].

Now, the rule C → A \( \oplus_{1} \) B is equivalent to a HPG rule of the form C → BA.

The rule C → A \( \oplus_{2} \) B is equivalent to a HPG rule of the form \( C \to A^{B} . \)

The rule C → A \( \oplus_{3} \) B is equivalent to a HPG rule of the form \( C \to A_{B} . \)

Similarly, we can form equivalent rules in HPG for the corresponding dual rules in CFHAG. The terminal rules C → t are identical in both grammars. This shows that \( \mathcal{L}\left[ {\text{CFHAG}} \right] \; \subseteq \;\mathcal{L}\left[ {\text{HPG}} \right]. \)

The inclusion is strict, as the language L 2 of Example 2 cannot be generated by any CFHAG, as proved in [2].

Theorem 2

\( \mathcal{L}\left[ {\text{HPG}} \right] \) and \( \mathcal{L}\left[ {\text{HTS}} \right] \) are incomparable but not disjoint.

Proof

It is already proved in [2] that the language L 1 of Example 1 is recognizable by a hexagonal tiling system [1]. So \( L_{ 1} \in \mathcal{L}\left( {\text{HPG}} \right) \cap \mathcal{L}\left( {\text{HTS}} \right). \)

Now, consider the language L 4, which consists of palindromic left arrowheads over T = {a, b}.

L 4 can be generated by the HPG, \( G = \left\langle {N,T,P,S} \right\rangle \) with N = {S}, T = {a, b},

$$ P = \left\{ {S \to \begin{array}{*{20}c} {} & a \\ S & {} \\ {} & a \\ \end{array} ,S \to \begin{array}{*{20}c} {} & b \\ S & {} \\ {} & b \\ \end{array} ,S \to \begin{array}{*{20}c} {} & a \\ a & {} \\ {} & a \\ \end{array} \quad /\quad \begin{array}{*{20}c} {} & a \\ b & {} \\ {} & a \\ \end{array} \quad /\quad \begin{array}{*{20}c} {} & b \\ a & {} \\ {} & b \\ \end{array} \quad /\quad \begin{array}{*{20}c} {} & b \\ b & {} \\ {} & b \\ \end{array} } \right\} $$

But there is no local strategy available to remember the characters appearing in corresponding positions above and below the arrowhead vertex. So L 4 cannot be generated by any HTS.

In [3], the authors have given a hexagonal tiling system generating a language L parallel, which consists of all hexagonal pictures over Σ = {a, b} in which the elements in corresponding positions of the borders parallel to the x-direction are identical. Since the productions of a HPG have no control for producing such patterns, L parallel cannot be generated by any HPG.

Theorem 3

$$ \mathcal{L}\left( {\text{HPG}} \right) \subset \mathcal{L}\left[ {\text{RHTRG}} \right]. $$

Proof

Consider a HPG G in NNF. Without loss of generality, we assume that the non-terminals in the right part of any rule of G are all distinct.

Let us define a RHTRG [2] G′ equivalent to G. The terminal rules are easy to handle. For a non-terminal rule of G, \( C \to A^{B} \), the equivalent rule in G′ is

$$ C \to \left[ {\left[ {\begin{array}{*{20}c} {} & {} & {} & {} & \# & {} & \# & {} & \# & {} & {} \\ {} & {} & {} & \# & {} & B & {} & B & {} & \# & {} \\ {} & {} & \# & {} & B & {} & B & {} & B & {} & \# \\ {} & \# & {} & A & {} & A & {} & B & {} & \# & {} \\ \# & {} & A & {} & A & {} & A & {} & \# & {} & {} \\ {} & \# & {} & A & {} & A & {} & \# & {} & {} & {} \\ {} & {} & \# & {} & \# & {} & \# & {} & {} & {} & {} \\ \end{array} } \right]} \right] $$

Similarly, for the rules \( C \to A_{B} \) and \( C \to BA \) in G, equivalent rules can be formed in G′. Therefore, \( \mathcal{L}\left( {\text{HPG}} \right) \subseteq \mathcal{L}\left( {\text{RHTRG}} \right). \)

For strict inclusion, consider the language L 5 over the alphabet T = {a, b, c, x}, consisting of pictures of size (2, 2, n), n ≥ 4, with misaligned palindromic borders in the z-direction, such that no (2, 2, 2) hexagonal subpicture contains the symbol c in both of the z-directional borders. An RHTRG generating L 5 can easily be defined, with one of its variable-size rules being

$$ S \to \left[ {\left[ {\begin{array}{*{20}c} {} & {} & \# & {} & \# & {} & \# & {} & \# & {} & \# & {} & \# & {} & \# & {} & {} \\ {} & \# & {} & {A_{1} } & {} & {A_{1} } & {} & {A_{1} } & {} & {A_{1} } & {} & {C_{1} } & {} & {C_{1} } & {} & \# & {} \\ \# & {} & X & {} & X & {} & X & {} & X & {} & X & {} & X & {} & X & {} & \# \\ {} & \# & {} & {C_{2} } & {} & {C_{2} } & {} & {A_{2} } & {} & {A_{2} } & {} & {A_{2} } & {} & {A_{2} } & {} & \# & {} \\ {} & {} & \# & {} & \# & {} & \# & {} & \# & {} & \# & {} & \# & {} & \# & {} & {} \\ \end{array} } \right]} \right] $$

This language cannot be generated by any HPG G′: a production of G′ of the form \( S \to \begin{array}{*{20}c} {} & A & {} & C & {} \\ X & {} & X & {} & X \\ {} & C & {} & A & {} \\ \end{array} \), where A generates the z-directional palindromic string and C, X generate z-directional strings of c's and x's, respectively, cannot restrict the occurrences of the c's; hence a hexagonal subpicture of the form \( \begin{array}{*{20}c} {} & d & {} & c & {} \\ x & {} & x & {} & x \\ {} & c & {} & d & {} \\ \end{array} \), where d ∈ {a, b}, may occur, which is not allowed in any picture of L 5.

5 Conclusions

HPG is a simple type of hexagonal array rewriting model with parallel application of rewriting rules in each derivation step, which gives it considerably greater generative capacity than CFHAG. However, the lack of control in its production rules makes the model less general than RHTRG. Since hexagonal tiling patterns and kolam patterns are applications of CFHAG, HPG can also produce these patterns. Other pattern recognition tasks for this model remain to be explored, and may further depend on the development of good parsing algorithms.