1 Introduction

As part of the effort to extend the classical theory of iterated function systems due to J. Hutchinson (see [2]), R. Miculescu and A. Mihail introduced the concept of a generalized iterated function system (see [7] and [9]), obtained by considering contractions from \(X^{p}\) into X rather than contractions from X into itself (here (X,d) is a metric space and p is a natural number). Sufficient conditions for the existence and uniqueness of the attractor of a generalized iterated function system (for short, GIFS) \(\mathcal {F}=((X,d),(f_{i})_{i\in \{1,2,...,L\}})\) have been provided, together with an upper bound for the Hausdorff–Pompeiu distance between the attractors of two such GIFSs and an upper bound for the Hausdorff–Pompeiu distance between the attractor of such a GIFS and an arbitrary compact subset of X; moreover, the continuous dependence of the attractor on the functions \(f_{i}\) was proved. In recent years, this concept has been intensively studied. Let us mention some lines of research regarding this subject:

In [15], F. Strobin and J. Swaczyna extended the concept of GIFS by using weaker types of generalized contractions, similar to those introduced by F. Browder (see [1]) or J. Matkowski (see [5]). In [14], Strobin emphasized the fact that the set of attractors generated by GIFSs is larger than that generated by iterated function systems. Other related topics can be found in [4, 6, 8, 10, 11, 12, 13] and [16].

Moreover, in [3], Strobin and his collaborators provided four algorithms which generate images of attractors of GIFSs, one of them being the deterministic algorithm for GIFSs (a counterpart of the classical deterministic algorithm for iterated function systems). Note that in [3] one can also find a list of papers dealing with algorithms generating images of the attractors of iterated function systems.

In this paper, we present another algorithm (called the grid algorithm) allowing the generation of images of the attractors of GIFSs on finite dimensional spaces, and we compare it with the deterministic algorithm for GIFSs. The deterministic algorithm for GIFSs consists of choosing a finite set of points and applying each of the constitutive functions of the system to all p-tuples of points from this set, obtaining in this way a new finite set of points. To the new set of points, we again apply each of the constitutive functions of the system. Continuing the procedure described above, we approximate the attractor. The main idea of the grid algorithm is to simplify the deterministic algorithm by dividing, at each step, the space that we are working with into small pieces and choosing, for each such piece, just one point.

2 Preliminaries

Given a metric space (X,d), we adopt the following notation:

$$ P_{cp}(X)\overset{not}{=}\{K\subseteq X\mid K\text{ is non-empty and compact}\}\text{.} $$

For \(K_{1},K_{2}\in P_{cp}(X)\), we consider:

$$ d(K_{1},K_{2})\overset{def}{=}\underset{x\in K_{1}}{\sup} d(x,K_{2})\text{,} $$

where \(d(x,K_{2})\overset {def}{=}\underset {y\in K_{2}}{\inf } d(x,y)\).

The function \(h:P_{cp}(X)\times P_{cp}(X)\rightarrow \lbrack 0,\infty )\) given by:

$$ h(K_{1},K_{2})=\max \{d(K_{1},K_{2}),d(K_{2},K_{1})\}\text{,} $$

for every \(K_{1},K_{2}\in P_{cp}(X)\), turns out to be a metric, called the Hausdorff–Pompeiu metric.

If (X,d) is complete, then (Pcp(X),h) is complete.
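For finite sets of points in \(\mathbb {R}^{M}\) (the situation arising in the algorithms below), the Hausdorff–Pompeiu distance can be computed directly from the definition. The following Python sketch is our own illustration; the function name and array conventions are not taken from the cited papers:

```python
import numpy as np

def hausdorff(K1, K2):
    """Hausdorff-Pompeiu distance between two finite sets of points,
    passed as arrays of shape (n, M) and (m, M)."""
    K1, K2 = np.asarray(K1, float), np.asarray(K2, float)
    # pairwise Euclidean distances d(x, y)
    dist = np.linalg.norm(K1[:, None, :] - K2[None, :, :], axis=2)
    d12 = dist.min(axis=1).max()  # sup over K1 of d(x, K2)
    d21 = dist.min(axis=0).max()  # sup over K2 of d(y, K1)
    return max(d12, d21)
```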

Given a metric space (X,d) and \(p\in \mathbb {N}^{\ast }\), by \(X^{p}\) we denote the Cartesian product of p copies of X. We endow \(X^{p}\) with the maximum metric \(d_{\max }\) defined by:

$$ d_{\max} ((x_{1},...,x_{p}),(y_{1},...,y_{p}))=\max \{d(x_{1},y_{1}),...,d(x_{p},y_{p})\}\text{,} $$

for all \((x_{1},...,x_{p}),(y_{1},...,y_{p})\in X^{p}\).

Definition 2.1

A generalized iterated function system (of order p) is a pair \(\mathcal {F}=((X,d),(f_{i})_{i\in \{1,2,...,L\}})\), where (X,d) is a metric space, \(p,L\in \mathbb {N}^{\ast }\) and \(f_{i}:X^{p}\rightarrow X\) is a contraction for each i ∈{1,...,L}. The function \(F_{\mathcal {F}}:(P_{cp}(X))^{p}\rightarrow P_{cp}(X)\), described by:

$$ F_{\mathcal{F}}(K_{1},...,K_{p})=\underset{i\in \{1,...,L\}}{\cup} f_{i}(K_{1}\times ...\times K_{p})\text{,} $$

for all \(K_{1},...,K_{p}\in P_{cp}(X)\), is called the fractal operator associated with \(\mathcal {F}\).

We shall use the abbreviation GIFS for a generalized iterated function system.

Theorem 2.2

(see Theorem 3.9 from [9]). Given a complete metric space (X,d) and a GIFS \(\mathcal {F}=((X,d),(f_{i})_{i\in \{1,...,L\}})\) of order p, there exists a unique \(A_{\mathcal {F}}\in P_{cp}(X)\) such that:

$$ F_{\mathcal{F}}(A_{\mathcal{F}},...,A_{\mathcal{F}})=A_{\mathcal{F}}\text{.} $$

In addition, for every \(K_{1},...,K_{p}\in P_{cp}(X)\), the sequence \((K_{n})_{n}\) defined by

$$ K_{n+p}=F_{\mathcal{F}}(K_{n},...,K_{n+p-1})\text{,} $$

for every \(n\in \mathbb {N}^{\ast }\), converges, with respect to the Hausdorff–Pompeiu metric, to \(A_{\mathcal {F}}\).

Definition 2.3

In the framework of the above theorem, the set \(A_{\mathcal {F}}\) is called the fractal generated by \(\mathcal {F}\).

Remark 2.4

(see Remark 12 from [3]). In the framework of the above definition, the function \(\mathcal {G}_{\mathcal {F}} :P_{cp}(X)\rightarrow P_{cp}(X)\), described by:

$$ \mathcal{G}_{\mathcal{F}}(K)=F_{\mathcal{F}}(K,...,K)=\underset{ i\in \{1,...,L\}}{\cup} f_{i}(K\times...\times K)\text{,} $$

for all \(K\in P_{cp}(X)\), is a contraction on the complete metric space \((P_{cp}(X),h)\), since its Lipschitz constant is at most \(\max \{lip(f_{1}),...,lip(f_{L})\}<1\).
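To make \(\mathcal {G}_{\mathcal {F}}\) concrete, here is a minimal Python sketch (ours, purely illustrative) evaluating the operator on a finite set of points, each \(f_{i}\) being a function of p points returning one point:

```python
from itertools import product

def apply_G(fs, K, p):
    """One application of G_F: union of f_i(u_1, ..., u_p) over all
    p-tuples (u_1, ..., u_p) of points from the finite set K."""
    return {f(*u) for f in fs for u in product(K, repeat=p)}
```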

3 The presentation of the algorithms

For \((x_{1},...,x_{M})\in \mathbb {R}^{M}\), we shall use the following notation:

$$ [(x_{1},...,x_{M})]=([x_{1}],...,[x_{M}])\text{,} $$

where [x] designates the greatest integer less than or equal to the real number x.

In the sequel, without loss of generality,

$$ \mathcal{F}=(([0,D]^{M},d),\{f_{1},...,f_{L}\})\text{,} $$

where \(L,M\in \mathbb {N}^{\ast }\) and d is the Euclidean distance in \(\mathbb {R}^{M}\), will be a generalized iterated function system of order p ≥ 2 (so \(f_{i}:([0,D]^{M})^{p}\rightarrow \lbrack 0,D]^{M}\) for every i ∈{1,...,L}).

We shall use the following notation:

  • \(\max \{lip(f_{1}),...,lip(f_{L})\}\overset {not}{=}C<1\)

  • β = pM.

We also consider the following functions:

  • \( F_{\mathcal {F}}:(P_{cp}([0,D]^{M}))^{p}\rightarrow P_{cp}([0,D]^{M})\) described by

    $$ F_{\mathcal{F}}(K_{1},...,K_{p})=f_{1}(K_{1}\times...\times K_{p})\cup ...\cup f_{L}(K_{1}\times...\times K_{p})\text{,} $$

for all \(K_{1},...,K_{p}\in P_{cp}([0,D]^{M})\)

  • \(\mathcal {G}_{\mathcal {F}}:P_{cp}([0,D]^{M})\rightarrow P_{cp}([0,D]^{M})\) described by

$$ \mathcal{G}_{\mathcal{F}}(K)=F_{\mathcal{F}}(K,...,K)\text{,} $$

for every \(K\in P_{cp}([0,D]^{M})\)

  • \((n_{k})_{k\in \mathbb {N}^{\ast } }\) a sequence of natural numbers.

For a finite set \(K_{0}\in P_{cp}([0,D]^{M})\), we shall use the following notations:

  • $$ A_{k}\overset{not}{=}\mathcal{G}_{\mathcal{F}}^{[k]}(K_{0})\text{,} $$

where \(k\in \mathbb {N}\)

  • $$ \overset{\sim} {A_{k}}\overset{not}{=}\{\frac{D}{n_{k}}[\frac{n_{k}}{D} f_{l}(u_{1},...,u_{p})]\mid u_{1},...,u_{p}\in \overset{\sim} {A}_{k-1}\text{,} l\in \{1,...,L\}\}\text{,} $$

where \(k\in \mathbb {N}^{\ast } \) and \(\overset {\sim } {A_{0}}=K_{0}\)

  • $$ \frac{D\sqrt{M}}{n_{k}}\overset{not}{=}\varepsilon_{k}\text{,} $$

where \(k\in \mathbb {N}\).

Let us recall the pseudocode for the deterministic algorithm for a GIFS (see [3]):

[Pseudocode of the deterministic algorithm for GIFSs, reproduced in [3] as a figure.]
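Since the pseudocode itself appears only as an image, we add a hedged Python sketch of our reading of it: starting from a finite set \(K_{0}\), iterate \(A_{k}=\mathcal {G}_{\mathcal {F}}(A_{k-1})\). The function names are our own, not taken from [3].

```python
from itertools import product

def deterministic_gifs(fs, K0, p, steps):
    """Deterministic algorithm: iterate A_k = G_F(A_{k-1}), where
    G_F(K) is the union of f_i over all p-tuples of points from K."""
    A = set(K0)
    for _ in range(steps):
        A = {f(*u) for f in fs for u in product(A, repeat=p)}
    return A
```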

Now, let us present the pseudocode for our new algorithm.

[Pseudocode of the grid algorithm, given as a figure in the original.]
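The grid algorithm replaces each newly computed point by the representative of the grid cell of mesh \(\frac {D}{n_{k}}\) containing it, so that every cell contributes at most one point. A Python sketch of this step, following the definition of \(\overset {\sim }{A_{k}}\) above (names ours):

```python
from itertools import product
from math import floor

def grid_gifs(fs, K0, p, D, n, steps):
    """Grid algorithm: after applying each f_l to all p-tuples from the
    previous set, snap every coordinate to the grid of mesh D / n_k,
    i.e. compute (D / n_k) * [ (n_k / D) * f_l(u_1, ..., u_p) ]."""
    A = set(K0)
    for k in range(steps):
        nk = n[k]  # grid resolution at this step
        A = {tuple(D / nk * floor(nk / D * c) for c in f(*u))
             for f in fs for u in product(A, repeat=p)}
    return A
```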

4 The complexity of the algorithms

By \(x_{k}\), we denote the number of points computed at step k of the deterministic algorithm, and by \(y_{k}\) the number of points computed at step k of the grid algorithm.

A. The case of the deterministic algorithm

We have \(x_{k+1}\leq L(x_{k})^{p}\), so, with the notation \(z_{k}\overset {not}{=}\ln x_{k}\), we obtain \(z_{k+1}\leq \ln L+pz_{k}\) for every \(k\in \mathbb {N}\). Therefore \(z_{k}\leq \frac {p^{k}-1}{p-1}\ln L+p^{k}z_{0}\), i.e.:

$$ x_{k}\leq \frac{1}{L^{\frac{1}{p-1}}}(x_{0}L^{\frac{1}{p-1}})^{p^{k}}\text{,} $$
(1)

for every \(k\in \mathbb {N}\).

Note that, according to Remark 2.4, we have \(h(A_{k},A_{\mathcal {F}})\leq \frac {h(A_{0},A_{1})}{1-C}C^{k}\) for every \(k\in \mathbb {N}\). Therefore, in order to be sure that Ak approximates the attractor \(A_{\mathcal {F}}\) with accuracy \(\varepsilon \frac {h(A_{0},A_{1})}{1-C}\), we need \(k>\log _{C^{-1}}(\varepsilon ^{-1})\). Hence, based on (1), the quantity \(\frac {1}{ L^{\frac {1}{p-1}}}(x_{0}L^{\frac {1}{p-1}})^{p^{\log _{C^{-1}}(\varepsilon ^{-1})}}=\frac {1}{L^{\frac {1}{p-1}}}(x_{0}L^{\frac {1}{p-1}})^{(\frac {1}{ \varepsilon } )^{\log _{C^{-1}}(p)}}\) describes the number of points that we have to compute in order to be sure that Ak is an approximation of \(A_{ \mathcal {F}}\) with an error less than \(\varepsilon \frac {h(A_{0},A_{1})}{1-C} \).

Conclusion

The complexity of the deterministic algorithm is described by the function \(\mathcal {C}_{c}:(0,\infty )\rightarrow \mathbb {R}\) given by:

$$ \mathcal{C}_{c}(\varepsilon )=(x_{0}L^{\frac{1}{p-1}})^{(\frac{1}{ \varepsilon} )^{\log_{\frac{1}{C}}(p)}}\text{,} $$

for every ε > 0.
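To see how quickly this bound grows, one can evaluate \(\mathcal {C}_{c}\) numerically; the sample parameters below are our own arbitrary choice:

```python
from math import log

def C_c(eps, x0, L, p, C):
    """Complexity bound of the deterministic algorithm:
    (x0 * L^(1/(p-1))) ** ((1/eps) ** log_{1/C}(p))."""
    return (x0 * L ** (1 / (p - 1))) ** ((1 / eps) ** (log(p) / log(1 / C)))

# For p = 2, L = 3, x0 = 1, C = 0.5 the bound is doubly exponential in 1/eps:
for eps in (0.5, 0.1, 0.05):
    print(eps, C_c(eps, x0=1, L=3, p=2, C=0.5))
```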

B. The case of the grid algorithm

Remark 4.1

Since \(y_{k+1}\leq L(n_{k})^{\beta }\) for every \(k\in \mathbb {N}^{\ast }\), up to the step N we have to compute \(L\overset {N}{\underset {k=1}{\sum }}(n_{k})^{\beta }=L(D\sqrt {M})^{\beta }\overset {N}{\underset {k=1}{\sum }}(\frac {1}{\varepsilon _{k}})^{\beta }\) points.

Remark 4.2

We have

$$ h(\overset{\sim} {A_{k}},\mathcal{G}_{\mathcal{F}}(\overset{\sim} {A} _{k-1}))\leq \varepsilon_{k}\text{,} $$

for every \(k\in \mathbb {N}^{\ast }\). Indeed, every point of \(\overset {\sim }{A_{k}}\) is obtained from a point of \(\mathcal {G}_{\mathcal {F}}(\overset {\sim }{A}_{k-1})\) by rounding each of its M coordinates down to the grid of mesh \(\frac {D}{n_{k}}\), which moves the point by at most \(\frac {D\sqrt {M}}{n_{k}}=\varepsilon _{k}\).

Remark 4.3

We have

$$ h(\overset{\sim} {A_{0}},A_{\mathcal{F}})\leq diam([0,D]^{M})=D\sqrt{M}\text{ .} $$

As the inequality:

$$ h(\overset{\sim} {A_{k}},A_{\mathcal{F}})\leq h(\overset{\sim} {A_{k}}, \mathcal{G}_{\mathcal{F}}(\overset{\sim} {A}_{k-1}))+h(\mathcal{G}_{\mathcal{ F}}(\overset{\sim} {A}_{k-1}),\mathcal{G}_{\mathcal{F}}(A_{\mathcal{F}}))\leq $$
$$ \overset{\text{Remarks 2.4 and 4.2}}{\leq} \varepsilon_{k}+Ch(\overset{\sim} {A}_{k-1},A_{\mathcal{F}})\text{,} $$

is valid for every \(k\in \mathbb {N}^{\ast } \), we get:

$$ h(\overset{\sim} {A_{k}},A_{\mathcal{F}})\leq \varepsilon_{k}+C\varepsilon_{k-1}+C^{2}\varepsilon_{k-2}+...+C^{k-2}\varepsilon_{2}+C^{k-1}\varepsilon_{1}+C^{k}h(\overset{\sim} {A_{0}},A_{\mathcal{F}}) \text{,} $$

so, taking into account Remark 4.3, we obtain:

$$ h(\overset{\sim} {A_{k}},A_{\mathcal{F}})\leq \varepsilon_{k}+C\varepsilon_{k-1}+C^{2}\varepsilon_{k-2}+...+C^{k-2}\varepsilon_{2}+C^{k-1}\varepsilon_{1}+C^{k}D\sqrt{M}\text{,} $$

for every \(k\in \mathbb {N}^{\ast }\). Consequently, we arrive at the following problem: given a fixed natural number N and ε > 0 such that \(\frac {\varepsilon }{C^{N}}-D\sqrt {M}>0\), find the minimum of the function \(f:(0,\infty )^{N}\rightarrow \lbrack 0,\infty )\), given by:

$$ f(\varepsilon_{1},...,\varepsilon_{N})=\overset{N}{\underset{k=1}{\sum} }(\frac{1}{\varepsilon_{k}})^{\beta} \text{,} $$

for every \(\varepsilon _{1},...,\varepsilon _{N}\in (0,\infty )\), with the constraint:

$$ \varepsilon_{N}+C\varepsilon_{N-1}+C^{2}\varepsilon_{N-2}+...+C^{N-2}\varepsilon_{2}+C^{N-1}\varepsilon_{1}+C^{N}D\sqrt{M} =\varepsilon \text{.} $$
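Before solving the problem analytically, one can check the shape of the minimizer numerically. The following sketch (ours, with arbitrary sample parameters) uses scipy to minimize f under the constraint and compares the result with the solution \(\varepsilon _{k}^{0}=k_{N}C^{\frac {k}{\beta +1}}\) derived below:

```python
import numpy as np
from scipy.optimize import minimize

C, beta, N, D_sqrt_M, eps = 0.5, 4.0, 6, 1.0, 0.2  # eps/C**N - D_sqrt_M > 0

weights = C ** np.arange(N - 1, -1, -1)   # C^{N-1}, ..., C, 1 on eps_1..eps_N
target = eps - C**N * D_sqrt_M            # right-hand side of the constraint

f = lambda e: np.sum(e ** (-beta))
constraint = {"type": "eq", "fun": lambda e: weights @ e - target}
res = minimize(f, x0=np.full(N, target / N), bounds=[(1e-9, None)] * N,
               constraints=[constraint], method="SLSQP")

# The Lagrange-multiplier solution predicts eps_k proportional to C^{k/(beta+1)}:
prediction = C ** (np.arange(1, N + 1) / (beta + 1))
print(res.x / prediction)  # the ratios should be (nearly) one constant, k_N
```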

We adopt the following notations:

  • \(t\overset {not}{=}C^{-\frac {\beta } {\beta +1}N}-1\)

  • \(K_{1}\overset {not}{=}\frac {C^{\frac {1}{\beta +1}}-C}{C^{\frac {1}{ \beta +1}}}=1-C^{\frac {\beta } {\beta +1}}\)

  • \(K_{2}\overset {not}{=}K_{1}^{-\beta -1}\)

  • \(K_{3}\overset {not}{=}K_{2}\varepsilon ^{-\beta } \)

  • \(a\overset {not}{=}\frac {D\sqrt {M}}{\varepsilon } \)

  • \(y\overset {not}{=}\frac {1}{C^{N}}\).

Since we are going to use the method of Lagrange multipliers, we consider the function F = f + λg, where \(\lambda \in \mathbb {R}\) and the function \(g:(0,\infty )^{N}\rightarrow \mathbb {R}\) is given by:

$$ g(\varepsilon_{1},...,\varepsilon_{N})=\varepsilon_{N}+C\varepsilon_{N-1}+...+C^{N-2}\varepsilon_{2}+C^{N-1}\varepsilon_{1}+C^{N}D\sqrt{M} -\varepsilon \text{,} $$

for every \(\varepsilon _{1},...,\varepsilon _{N}\in (0,\infty )\). The equation \(\frac {\partial F}{\partial \varepsilon _{k}}=0\), i.e. \(-\beta (\varepsilon _{k})^{-\beta -1}+\lambda C^{N-k}=0\), has the solution:

$$ {\varepsilon_{k}^{0}}=k_{N}C^{\frac{k}{\beta +1}}\text{,} $$
(2)

for every k, where \(k_{N}=\frac {1}{C^{\frac {N}{\beta +1}}}(\frac {\beta } { \lambda } )^{\frac {1}{\beta +1}}\). As \(g({\varepsilon _{1}^{0}},...,\varepsilon _{N}^{0})=0\), we get:

$$ k_{N}(C^{\frac{N}{\beta +1}}+C^{1+\frac{N-1}{\beta +1}}+C^{2+\frac{N-2}{ \beta +1}}+...+C^{N-2+\frac{2}{\beta +1}}+C^{N-1+\frac{1}{\beta +1}} )=\varepsilon -C^{N}D\sqrt{M}\text{,} $$

i.e.:

$$ k_{N}C^{\frac{N}{\beta +1}}(1+C^{\frac{\beta}{\beta +1}}+C^{2\frac{\beta}{\beta +1}}+...+C^{(N-2)\frac{\beta}{\beta +1}}+C^{(N-1)\frac{\beta}{\beta +1}})=\varepsilon -C^{N}D\sqrt{M}\text{,} $$

so \(k_{N}C^{\frac {N}{\beta +1}}\frac {(C^{\frac {\beta } {\beta +1}})^{N}-1}{C^{ \frac {\beta } {\beta +1}}-1}=\varepsilon -C^{N}D\sqrt {M}\), which implies \( k_{N}\frac {C^{N}-C^{\frac {N}{\beta +1}}}{C-C^{\frac {1}{\beta +1}}}=\frac { \varepsilon -C^{N}D\sqrt {M}}{C^{\frac {1}{\beta +1}}}\). The last equality takes the form \(k_{N}\frac {C^{-\frac {\beta } {\beta +1}N}-1}{C^{\frac {1}{ \beta +1}}-C}=\frac {\frac {\varepsilon } {C^{N}}-D\sqrt {M}}{C^{\frac {1}{\beta +1}}}\). Thus, we obtain:

$$ k_{N}=\frac{K_{1}}{t}(\frac{\varepsilon} {C^{N}}-D\sqrt{M})\text{.} $$
(3)

We have:

$$ f({\varepsilon_{1}^{0}},...,{\varepsilon_{N}^{0}}) = \overset{N}{\underset{k=1}{\sum}}({\varepsilon_{k}^{0}})^{-\beta} \overset{(2)}{=}(k_{N})^{-\beta} \overset{N}{\underset{k=1}{\sum}}C^{-\frac{\beta}{\beta +1}k} = (k_{N})^{-\beta} C^{-\frac{\beta}{\beta +1}}\frac{(C^{-\frac{\beta}{\beta +1}})^{N}-1}{C^{-\frac{\beta}{\beta +1}}-1}= $$
$$ \overset{(3)}{=}t^{\beta} K_{1}^{-\beta} (\frac{\varepsilon}{C^{N}}-D\sqrt{M})^{-\beta} \frac{t}{1-C^{\frac{\beta}{\beta +1}}}=t^{\beta +1}(\frac{\varepsilon}{C^{N}}-D\sqrt{M})^{-\beta} \frac{K_{1}^{-\beta}}{K_{1}}= $$
$$ =t^{\beta +1}(\frac{\varepsilon}{C^{N}}-D\sqrt{M})^{-\beta} K_{1}^{-\beta -1}=K_{2}(\frac{\varepsilon}{C^{N}}-D\sqrt{M})^{-\beta}(C^{-\frac{\beta}{\beta +1}N}-1)^{\beta +1}\text{.} $$

Therefore, the last equality can be written as:

$$ f({\varepsilon_{1}^{0}},...,{\varepsilon_{N}^{0}})=K_{3}(y^{\frac{\beta} { \beta +1}}-1)^{\beta +1}(y-a)^{-\beta} \text{.} $$
(4)

As the right-hand side of (4) gives us the optimal number of points that we have to compute, after N steps, in order to approximate \(A_{\mathcal {F}}\) by \(\overset {\sim }{A}_{N}\) with an error not greater than ε, we need to find the minimum value of the function \(h:(a,\infty )\rightarrow \mathbb {R}\) given by:

$$ h(y)=K_{3}(y^{\frac{\beta} {\beta +1}}-1)^{\beta +1}(y-a)^{-\beta} \text{,} $$

for every y ∈ (a,∞). One can easily see that:

$$ h^{^{\prime} }(y)=K_{3}\beta (y^{\frac{\beta} {\beta +1}}-1)^{\beta} (y-a)^{-\beta -1}(1-ay^{-\frac{1}{\beta +1}})\text{,} $$

for every y ∈ (a,∞). As we can suppose that a > 1 (since we are interested in the case when ε is small), \(\underset {y\rightarrow \infty }{\lim } h(y)=K_{3}\) and \(\underset {\underset {y>a}{y\rightarrow a}}{\lim } h(y)=\infty \), we conclude that h attains its minimum at \(y=a^{\beta +1}\), and the minimum value is:

$$ h(a^{\beta +1})=K_{3}(a^{\beta} -1)^{\beta +1}(a^{\beta +1}-a)^{-\beta} = $$
$$ =K_{3}\frac{a^{\beta} -1}{a^{\beta} }=\varepsilon^{-\beta} (1-C^{\frac{ \beta} {\beta +1}})^{-\beta -1}\frac{(\frac{D\sqrt{M}}{\varepsilon} )^{\beta} -1}{(\frac{D\sqrt{M}}{\varepsilon} )^{\beta} }\text{,} $$

so

$$ \underset{\underset{\varepsilon >0}{\varepsilon \rightarrow 0}}{\lim} \frac{ h(a^{\beta +1})}{(1-C^{\frac{\beta} {\beta +1}})^{-\beta -1}(\frac{1}{ \varepsilon} )^{\beta} }=1\text{.} $$

Conclusion

The complexity of the grid algorithm is described by the function \(\mathcal {C}_{g}:(0,\infty )\rightarrow \mathbb {R} \) given by:

$$ \mathcal{C}_{g}(\varepsilon )=(1-C^{\frac{\beta} {\beta +1}})^{-\beta -1}(\frac{1}{\varepsilon} )^{pM}\text{,} $$

for every ε > 0.

In the final part of this section, we mention that (in order to avoid very complicated computations) we did not pay attention to the fact that the best values of \(n_{k}\) and N that we obtained (namely \(\frac {D\sqrt {M}}{{\varepsilon _{k}^{0}}}\) and \((\beta +1)\frac {\ln (\frac {\varepsilon }{D\sqrt {M}})}{\ln (C)}\)) may not be integers. In practice, we can work with \(n_{k}=[\frac {D\sqrt {M}}{{\varepsilon _{k}^{0}}}]+1\) and \(N=[(\beta +1)\frac {\ln (\frac {\varepsilon }{D\sqrt {M}})}{\ln (C)}]+1\) without a significant change, as sketched below.
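A small helper (ours, purely illustrative) implementing these integer choices, where \(k_{N}\) is taken from (3):

```python
from math import floor, log, sqrt

def grid_parameters(eps, C, D, M, p):
    """Integer N and grid resolutions n_1, ..., n_N suggested above;
    assumes eps is small enough that a = D*sqrt(M)/eps > 1."""
    beta = p * M
    N = floor((beta + 1) * log(eps / (D * sqrt(M))) / log(C)) + 1
    K1 = 1 - C ** (beta / (beta + 1))
    t = C ** (-beta * N / (beta + 1)) - 1
    kN = K1 / t * (eps / C**N - D * sqrt(M))            # formula (3)
    eps_opt = [kN * C ** (k / (beta + 1)) for k in range(1, N + 1)]
    return N, [floor(D * sqrt(M) / e) + 1 for e in eps_opt]
```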

5 Final remarks

Remark 5.1

We have:

$$ \underset{\underset{\varepsilon >0}{\varepsilon \rightarrow 0}}{\lim} \frac{ \mathcal{C}_{g}(\varepsilon )}{\mathcal{C}_{c}(\varepsilon )}=\underset{ \underset{\varepsilon >0}{\varepsilon \rightarrow 0}}{\lim} \frac{(1-C^{ \frac{\beta} {\beta +1}})^{-\beta -1}(\frac{1}{\varepsilon} )^{pM}}{(x_{0}L^{ \frac{1}{p-1}})^{(\frac{1}{\varepsilon} )^{\log_{\frac{1}{C}}(p)}}}=0\text{,} $$

so the grid algorithm is more efficient than the deterministic algorithm.
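Numerically, the ratio tends to 0 quite fast; a quick check with sample parameters of our own choice:

```python
from math import log

C, p, M, L, x0 = 0.5, 2, 2, 3, 1   # sample parameters (ours)
beta = p * M

C_g = lambda eps: (1 - C ** (beta / (beta + 1))) ** (-beta - 1) * (1 / eps) ** (p * M)
C_c = lambda eps: (x0 * L ** (1 / (p - 1))) ** ((1 / eps) ** (log(p) / log(1 / C)))

for eps in (0.5, 0.1, 0.05, 0.02):
    print(eps, C_g(eps) / C_c(eps))   # tends to 0 as eps -> 0
```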

Remark 5.2

As \(\left \vert u-[u+\frac {1}{2}]\right \vert \leq \frac {1} {2}\), we can improve our grid algorithm (which is based on the inequality |u − [u]| < 1) in the following way:

[Pseudocode of the improved grid algorithm, given as a figure in the original.]
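Under the same conventions as the earlier sketch, the only change is that coordinates are rounded to the nearest grid point via [u + 1/2] (names ours):

```python
from itertools import product
from math import floor

def grid_gifs_rounded(fs, K0, p, D, n, steps):
    """Grid algorithm with rounding to the nearest grid point,
    using [u + 1/2] instead of [u], which halves the rounding error."""
    A = set(K0)
    for k in range(steps):
        nk = n[k]
        A = {tuple(D / nk * floor(nk / D * c + 0.5) for c in f(*u))
             for f in fs for u in product(A, repeat=p)}
    return A
```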

Remark 5.3

On the one hand, repeating the arguments used in Section 4, part A, for the case of an iterated function system (i.e., p = 1), we obtain that the complexity of the corresponding algorithm is described by the function \(\mathcal {C}:(0,\infty )\rightarrow \mathbb {R}\) given by:

$$ \mathcal{C}(\varepsilon )=(\frac{1}{\varepsilon} )^{\frac{\ln L}{\ln \frac{1} {C}}}\text{,} $$

for every ε > 0, so the contraction constant C appears in the exponent of \(\frac {1}{\varepsilon }\). We stress the fact that, since \(\underset {\underset {C<1}{C\rightarrow 1}}{\lim } \frac {1}{\ln \frac {1}{C}}=\infty \), the closer C is to 1, the larger the number of points that we have to compute in order to approximate the attractor with an error less than ε.

On the other hand, in the formula that gives \(\mathcal {C}_{g}(\varepsilon )\), the constant C is involved only in the coefficient \((1-C^{\frac {\beta }{\beta +1}})^{-\beta -1}\).

Moreover, note that \(\underset {\underset {C<1}{C\rightarrow 1}}{\lim } \frac {\mathcal {C}_{g}(\varepsilon )}{\mathcal {C}_{c}(\varepsilon )}=0\) for each ε ∈ (0,1).

6 Examples

In Section 4, we compared the algorithms with respect to a fixed preassigned error. In this section, our goal is to get an optimal image (with respect to the computer that we worked with) for three examples. For this reason, we chose a version of the grid algorithm with \(n_{k}=k^{2}\), the error being less than \(D\sqrt {M}\left (\frac {1}{n^{2}}+C\frac {1}{(n-1)^{2}}+C^{2}\frac {1}{(n-2)^{2}}+...+C^{n}\right )\), where n is the number of steps and C is the contraction constant of the system.
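For this variant, the stated error bound after n steps can be evaluated directly; a small sketch (ours):

```python
from math import sqrt

def error_bound(n, C, D, M):
    """Error bound for the grid variant with n_k = k^2 after n steps:
    D*sqrt(M) * (1/n^2 + C/(n-1)^2 + ... + C^(n-1)/1^2 + C^n)."""
    return D * sqrt(M) * (sum(C**j / (n - j) ** 2 for j in range(n)) + C**n)

print(error_bound(8, 0.4, 1.0, 2))  # e.g. 8 steps, contraction constant 0.4
```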

A.

Consider the GIFS \(\mathcal {F}=(([0,1]^{2},d),\{f_{1},f_{2},f_{3}\})\), where, for x = (x1,y1) and y = (x2,y2), we have:

$$ f_{1}(x,y)=(0.2x_{1}+0.2y_{2};0.2x_{2}+0.1y_{2}) $$
$$ f_{2}(x,y)=(0.15x_{1}+0.07x_{2}+0.4;0.15y_{1}+0.07y_{2}) $$

and

$$ f_{3}(x,y)=(0.15y_{1}+0.07x_{2};0.15x_{1}+0.07y_{2}+0.04)\text{.} $$
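Assuming the grid_gifs sketch given after the pseudocode in Section 3, this example can be reproduced along the following lines (the wiring is ours):

```python
# Points are pairs; x = (x1, y1) and y = (x2, y2) as in the text.
def f1(x, y):
    return (0.2 * x[0] + 0.2 * y[1], 0.2 * y[0] + 0.1 * y[1])

def f2(x, y):
    return (0.15 * x[0] + 0.07 * y[0] + 0.4, 0.15 * x[1] + 0.07 * y[1])

def f3(x, y):
    return (0.15 * x[1] + 0.07 * y[0], 0.15 * x[0] + 0.07 * y[1] + 0.04)

# A = grid_gifs([f1, f2, f3], K0={(0.0, 0.0)}, p=2, D=1.0,
#               n=[k**2 for k in range(1, 9)], steps=8)
```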

Using the deterministic algorithm, we get the image indicated in Fig. 1 and using the grid algorithm, we get the image in Fig. 2.

Fig. 1 Image obtained using the deterministic algorithm running 4 steps in 10 s

Fig. 2 Image obtained using the grid algorithm running 8 steps in less than 10 s

The deterministic algorithm ran 4 steps in 10 s, while the grid algorithm ran 8 steps in less than 10 s.

Figure 1 has approximately \(3^{2^{4}}=43,046,721\) points, while Fig. 2 comprises around 20,000 points.

Remark 6.1

If we allow the deterministic algorithm to run 5 steps, it needs about 90 min and we get an image very similar to the one in Fig. 2.

B.

Consider the GIFS \(\mathcal {F}=(([0,1]^{2},d),\{f_{1},f_{2}\})\), where, for x = (x1,y1) and y = (x2,y2), we have:

$$ f_{1}(x,y)=(0.1x_{1}+0.16y_{1}-0.01x_{2}+0.3y_{2};-0.05y_{1}+0.15x_{2}+0.15y_{2}) $$

and

$$ f_{2}(x,y)=(0.09x_{1}-0.1y_{1}-0.15x_{2}+0.14y_{2}+0.4;0.14x_{1}+0.14y_{1}+0.14x_{2}+0.04) \text{.} $$

The deterministic algorithm yields the image in Fig. 3 and the grid algorithm produces the image in Fig. 4.

Fig. 3 Image produced using the deterministic algorithm, which needed 20 s to run 5 steps

Fig. 4 Image produced using the grid algorithm, which needed about 10 min to run 14 steps

The deterministic algorithm needed 20 s to run 5 steps, while the grid algorithm needed about 10 min to run 14 steps.

Figure 3 consists of about \(2^{2^{5}}=4,294,967,296\) points, while Fig. 4 is built up using around 2,000,000 points.

Remark 6.2

On the computer that we used, the deterministic algorithm would need 42 days to run 6 steps.

Fig. 5 Image obtained using the deterministic algorithm, which ran 5 steps in about 2 min

C.

Consider the GIFS \(\mathcal {F}=(([0,1]^{2},d),\{f_{1},f_{2}\})\), where, for x = (x1,y1) and y = (x2,y2), we have:

$$ f_{1}(x,y)=(0.5x_{1}-0.5y_{1}+0.001x_{2}+0.45;0.5x_{1}+0.5y_{1}+0.001y_{2}-0.05) $$

and

$$ f_{2}(x,y)=(0.2x_{1}+0.01x_{2}+0.14y_{2}+0.147;0.2y_{1}+0.01y_{2}+0.105) \text{.} $$

The image in Fig. 5 shows what we get by running the deterministic algorithm, and the image in Fig. 6 what we obtain using the grid algorithm.

Fig. 6 Image obtained using the grid algorithm, which ran 22 steps in about 2 min

Both algorithms ran for about 2 min, the deterministic one running 5 steps and the grid one 22 steps.

Figure 5 consists of about \(2^{2^{5}}=4,294,967,296\) points, while Fig. 6 is made up of circa 217,800 points.

Remark 6.3

Even though the number of points making up Fig. 5 is considerably larger than the number of points building up Fig. 6, one can observe that the grid algorithm produces a much better approximation of the attractor.