Introduction

Almost automorphic functions belong to a hierarchy of function classes that begins with periodic functions. They were first introduced in the literature by Bochner [3] as a natural generalization of the classical family of almost periodic functions. Almost periodicity itself is one of the most attractive topics in the qualitative theory of differential equations because of its significance and applications in physics, mathematical biology, control theory, and other related fields.

In general, the search for almost automorphic solutions of dynamical systems is more complicated, since the fundamental property of uniform continuity, which holds for almost periodic functions, need not hold for almost automorphic functions. Recurrent neural networks (RNNs), in particular, are nonlinear dynamical systems bearing some resemblance to biological neural networks in the brain. Recently, many RNNs have been developed and applied extensively in fields such as signal processing, pattern recognition, optimization, robotics and control, and associative memories. For instance, in [13], the authors analyzed the following model:

$$\begin{aligned} \dot{x_{i}}\left( t\right) =-d_{i}h_{i}(x_{i}\left( t\right) ) +\sum \limits _{j=1}^{n}a_{ij}\left( t\right) f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( t\right) g_{j}(x_{j}\left( t-\tau _{j}\left( t\right) \right) )+J_{i}\left( t\right) . \end{aligned}$$

By using the Lyapunov functional method together with classical techniques such as M-matrix theory and topological degree theory, they established the global exponential stability and the existence of periodic solutions. As is well known, when modeling neural networks, both discrete and distributed time delays should be taken into account. Thus, in [16], the authors studied the dynamics of a class of recurrent neural networks with mixed delays and variable coefficients, described by the following integro-differential equations:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x_{i}}\left( t\right) =-d_{i}h_{i}(x_{i}\left( t\right) ) +\sum \limits _{j=1}^{n}a_{ij}\left( t\right) f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( t\right) g_{j}(x_{j}\left( t-\tau _{1}\left( t\right) \right) )\\ \quad \quad \quad \quad +\sum \limits _{j=1}^{n}c_{ij}\left( t\right) \int \limits _{t-\tau _{2} }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}\left( s\right) )\mathrm{d}s+J_{i}\left( t\right) \!. \end{array} \right. \end{aligned}$$

Recently, Xiang and Cao [19] investigated the almost periodic oscillatory behavior for the following system:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x_{i}}\left( t\right) =-d_{i}(x_{i}\left( t\right) ) +\sum \limits _{j=1}^{n}a_{ij}\left( t\right) f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( t\right) g_{j}(x_{j}\left( t-\tau _{ij}\left( t\right) \right) )\\ \quad \quad \quad \quad +\sum \limits _{j=1}^{n}c_{ij}\left( t\right) \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}\left( s\right) )\mathrm{d}s+J_{i}\left( t\right) \!. \end{array} \right. \end{aligned}$$

By utilizing a suitable functional analysis technique and the fact that the kernel is a piecewise continuous integrable function satisfying

$$\begin{aligned} \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) \mathrm{d}s=1,\quad \quad \int \limits _{0}^{\infty }K_{ij}\left( s\right) \mathrm{exp}\left( \alpha s\right) \mathrm{d}s<+\infty , \end{aligned}$$

the authors derived new conditions for the existence of an almost periodic solution for this model. Motivated by the above discussions, in this paper we give some new sufficient conditions ensuring the existence and attractivity of an almost automorphic solution by employing the Banach fixed point theorem and a differential inequality technique. More precisely, we are concerned with the following recurrent neural networks:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x_{i}}\left( t\right) =-d_{i}\left( t\right) x_{i}(t)+\sum \limits _{j=1}^{n}a_{ij}\left( t\right) f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( t\right) g_{j}\left( x_{j}\left( t-\tau _{j}\left( t\right) \right) \right) \\ \quad \quad \quad \quad +\sum \limits _{j=1}^{n}c_{ij}\left( t\right) \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}\left( s\right) )\mathrm{d}s+J_{i}\left( t\right) \\ x_{i}\left( t\right) =\widehat{x}_{i}\left( t\right) ,\quad -\infty <t\le 0,\quad i=1,\ldots ,n. \end{array} \right. \end{aligned}$$
(1)

This model has been the object of intensive analysis by numerous authors in recent years since it represents a generalized class of RNNs with mixed delays and variable coefficients. Furthermore, from the mathematical point of view, systems with variable coefficients and infinitely distributed delays differ from those with constant coefficients and finite delays, and classical mathematical methods do not apply directly. However, research has focused mainly on the existence and stability of periodic and/or almost periodic solutions (see, for example, [4, 6, 7, 11, 13, 15–17, 19] and the references therein) and, to the best of our knowledge, the almost automorphic behavior of recurrent neural networks has never been considered. So the principal motivation of this paper is to establish the existence and the stability of the unique almost automorphic solution of the model (1).

Clearly, the study of the dynamics of system (1) covers many previous models, such as the models above, and the concept of almost automorphy is more general than almost periodicity.

The paper is organized in the following way. In “Almost Automorphic Functions”, we recall the basic properties of almost automorphic functions. In “System Description and Preliminaries”, we introduce some necessary notations, definitions and preliminaries which will be used later. In “Existence of the Almost Automorphic Solution” and “Global Exponential Stability of Almost Automorphic Solution”, several sufficient conditions are derived for the existence and the global exponential stability of the unique almost automorphic solution of Eq. (1) in a suitable convex set of \(AA(\mathbb{R }, \mathbb{R }^{n})\). In “Illustrative Examples”, we present two illustrative examples; in particular, we study the cases \(n=2\) and \(n=3\). Finally, conclusions and some comments are drawn in “Conclusion”.

It should be mentioned that the main results include Theorems 2 and 3.

Almost Automorphic Functions

In this section, we would like to recall some basic notations and results related to the concept of almost automorphy which shall come into play later on.

Definition 1

[2] A continuous function \(f:\mathbb{R }\longrightarrow \mathbb{R }^{n}\) is said to be almost automorphic if for every sequence of real numbers \((s_{n}^{\prime })_{n\in \mathbb{N }}\) there exists a subsequence \( (s_{n})_{n\in \mathbb{N }}\) such that

$$\begin{aligned} g(t):=\lim _{n\rightarrow \infty }f(t+s_{n}) \end{aligned}$$

is well defined for each \(t\in \mathbb{R }\), and

$$\begin{aligned} \lim _{n\rightarrow \infty }g(t-s_{n})=f(t) \end{aligned}$$

for each \(t\in \mathbb{R }\).

Remark 1

Note that the function \(g\) in the definition above is measurable but not necessarily continuous. Moreover, if \(g\) is continuous, then \(f\) is uniformly continuous. Besides, if the convergence above is uniform in \(t\in \mathbb{R }\), then \(f\) is almost periodic. Denote by \(AA(\mathbb{R },\mathbb{R }^{n})\) the collection of all almost automorphic functions; one has the inclusions

$$\begin{aligned} AP(\mathbb{R },\mathbb{R }^{n})\subset AA(\mathbb{R },\mathbb{R }^{n})\subset BC( \mathbb{R },\mathbb{R }^{n}), \end{aligned}$$

where \(AP(\mathbb{R },\mathbb{R }^{n})\) and \(BC(\mathbb{R },\mathbb{R }^{n})\) are respectively the collection of all almost periodic functions and the set of bounded continuous functions from \(\mathbb{R }\) to \(\mathbb{R }^{n}.\)

      Among other things, almost automorphic functions satisfy the following properties:

Theorem 1

[9] For all \(f,f_{1},f_{2}\in AA(\mathbb{R },\mathbb{R }^{n}),\) one has

  1.

    \(f_{1}+f_{2}\in AA(\mathbb{R },\mathbb{R }^{n}).\)

  2.

    \(\lambda f\in AA(\mathbb{R },\mathbb{R }^{n})\) for any scalar \(\lambda \in \mathbb{R }.\)

  3.

    \(f_{\alpha }\in AA(\mathbb{R },\mathbb{R }^{n})\), where \(f_{\alpha }:\mathbb{R }\longrightarrow \mathbb{R }^{n}\) is defined by \(f_{\alpha }\left( \cdot \right) =f(\cdot +\alpha ).\)

  4.

    The range \(R_{f}:=\left\{ f(t),t\in \mathbb{R }\right\} \) is relatively compact in \(\mathbb{R }^{n}\); thus \(f\) is bounded in norm.

  5.

    If \(f_{n}\longrightarrow f\) uniformly on \(\mathbb{R }\), where \(f_{n}\in AA(\mathbb{R },\mathbb{R }^{n})\), then \(f\in AA(\mathbb{R },\mathbb{R }^{n}).\)

  6.

    \(\left( AA(\mathbb{R },\mathbb{R }^{n}),\parallel \cdot \parallel _{\infty }\right) \) is a Banach space.

Definition 2

A function \(f\in C(\mathbb{R }\times \mathbb{R }^{n},\mathbb{R }^{n})\) is said to be almost automorphic in \(t\in \mathbb{R }\) for each \(x\in \mathbb{R }^{n}\) if for every sequence of real numbers \((s_{n}^{\prime })_{n\in \mathbb{N }}\) there exists a subsequence \( (s_{n})_{n\in \mathbb{N }}\) such that

$$\begin{aligned} g(t,x):=\lim _{n\rightarrow \infty }f(t+s_{n},x) \end{aligned}$$

is well defined for each \(t\in \mathbb{R },x\in \mathbb{R }^{n}\) and

$$\begin{aligned} \lim _{n\rightarrow \infty }g(t-s_{n},x)=f(t,x) \end{aligned}$$

for each \(t\in \mathbb{R }\) and \(x\in \mathbb{R }^{n}.\) The collection of such functions will be denoted by \(AA\left( \mathbb{R }\times \mathbb{R }^{n},\mathbb{R }^{n}\right) .\)

Example 1

A classical example of an almost automorphic function which is not almost periodic, as it is not uniformly continuous, is the function defined by

$$\begin{aligned} f(t)=\cos \left( \frac{1}{2+\sin t+\sin \pi t}\right) ,\quad t\in \mathbb{R }. \end{aligned}$$
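Numerically, the failure of uniform continuity is visible as increasingly sharp oscillations near the times where \(2+\sin t+\sin \pi t\) comes close to \(0\). The following short script (a sketch using NumPy and Matplotlib; it is not part of the original analysis) plots \(f\) on a long window:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(t):
    # almost automorphic but not almost periodic: the argument of cos blows up
    # whenever 2 + sin(t) + sin(pi t) approaches 0
    return np.cos(1.0 / (2.0 + np.sin(t) + np.sin(np.pi * t)))

t = np.linspace(0.0, 200.0, 400_000)   # fine grid: the spikes are very narrow
plt.plot(t, f(t), linewidth=0.3)
plt.xlabel("t")
plt.ylabel("f(t)")
plt.show()
```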

Example 2

The function

$$\begin{aligned} f(t,x)=\sin \frac{1}{2+\cos t+\cos \sqrt{2}t}\cos x, \end{aligned}$$

is almost automorphic in \(t\in \mathbb{R }\) for each \(x\in X\), where \(X=L^{2}\left( \left[ 0,1\right] \right) \!.\)

System Description and Preliminaries

The model of the recurrent neural network considered in this paper is described by the following state equations:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{x_{i}}\left( t\right) =-d_{i}\left( t\right) x_{i}(t)+\sum \limits _{j=1}^{n}a_{ij}\left( t\right) f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( t\right) g_{j}\left( x_{j}\left( t-\tau _{j}\left( t\right) \right) \right) \\ \quad \quad \quad \quad +\sum \limits _{j=1}^{n}c_{ij}\left( t\right) \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}\left( s\right) )\mathrm{d}s+J_{i}\left( t\right) \\ x_{i}\left( t\right) =\widehat{x}_{i}\left( t\right) ,\quad -\infty <t\le 0,\quad i=1,\ldots ,n, \end{array} \right. \end{aligned}$$

where \(n\) is the number of neurons in the network, \(x_{i}(t)\) denotes the state of the \(i\)th neuron at time \(t\), and \(f_{j}(x_{j}(t))\), \(g_{j}(x_{j}(t))\) and \(h_{j}(x_{j}(t))\) are the activation functions of the \(j\)th neuron at time \(t\). The functions \(a_{ij}\left( \cdot \right) \), \(b_{ij}\left( \cdot \right) \) and \(c_{ij}\left( \cdot \right) \) denote, respectively, the connection weights, the discretely delayed connection weights, and the distributively delayed connection weights of the \(j\)th neuron on the \(i\)th neuron. \( J_{i}\left( \cdot \right) \) is the external bias on the \(i\)th neuron, \( d_{i}\left( \cdot \right) \) denotes the rate with which the \(i\)th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs, and \(\tau _{j}\left( \cdot \right) \) is the time delay.
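To make the structure of the right-hand side of \((1)\) concrete, the sketch below evaluates it at a single time \(t\) in Python, with the delayed state and the distributed-delay integrals supplied by the caller; all names are illustrative and nothing here is prescribed by the paper.

```python
import numpy as np

def rhs(t, x, x_delayed, conv, d, a, b, c, J, f, g, h):
    """Right-hand side of system (1) at time t (a sketch; names are illustrative).

    x         : current state x(t), shape (n,)
    x_delayed : vector with components x_j(t - tau_j(t)), shape (n,)
    conv      : matrix with entries int_{-infty}^t K_ij(t-s) h_j(x_j(s)) ds, shape (n, n)
    d, J      : callables t -> (n,);  a, b, c : callables t -> (n, n)
    """
    return (-d(t) * x
            + a(t) @ f(x)
            + b(t) @ g(x_delayed)
            + (c(t) * conv).sum(axis=1)   # sum_j c_ij(t) * conv_ij
            + J(t))
```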

Let us list some assumptions which will be used in the paper.

  • \((H_{1})\) The activation functions \(f_{j},g_{j}\) and \(h_{j}\) are assumed to be globally Lipschitz continuous, that is, there exist \(L_{f_{j}},L_{g_{j}},\) \(L_{h_{j}}>0\) such that for all \(u,v\in \mathbb{R }\)

    $$\begin{aligned}&\left| f_{j}(u)-f_{j}(v)\right| <L_{f_{j}}\left| u-v\right| \! ,\quad \left| g_{j}(u)-g_{j}(v)\right| <L_{g_{j}}\left| u-v\right| \!,\\&\left| h_{j}(u)-h_{j}(v)\right| <L_{h_{j}}\left| u-v\right| \!. \end{aligned}$$

    Furthermore, we suppose that \(f_{j}(0)=g_{j}(0)=h_{j}(0)=0\).

  • \((H_{2})\) \(J(\cdot )=\left( J_{1}(\cdot ),\ldots ,J_{n}(\cdot )\right) \in AA(\mathbb{R },\mathbb{R }^{n})\) and for all \(1\le i,j\le n\) the functions \( a_{ij}\left( \cdot \right) \), \(b_{ij}\left( \cdot \right) ,\) \(c_{ij}\left( \cdot \right) \) and \(d_{i}\left( \cdot \right) \) are almost automorphic.

  • \(\left( H_{3}\right) \) In addition (see the numerical sketch after this list),

    $$\begin{aligned} r=\max _{1\le i\le n}\sup \limits _{s\in \mathbb{R } }\left( \frac{\sum \nolimits _{j=1}^{n}\left( \left| a_{ij}\left( s\right) \right| L_{f_{j}} +\left| b_{ij}\left( s\right) \right| L_{g_{j}}+\frac{M}{ w}\left| c_{ij}\left( s\right) \right| L_{h_{j}}\right) }{\widetilde{d}} \right) <1, \end{aligned}$$

    where for all \(1\le i\le n\)

    $$\begin{aligned} \widetilde{d_{i}}=\min \limits _{\xi \in \mathbb{R } }d_{i}\left( \xi \right) ,\quad \widetilde{d}=\min \limits _{1\le i\le n}\widetilde{d_{i}}. \end{aligned}$$
  • \(\left( H_{4}\right) \) The kernel \(K_{ij}\left( \cdot \right) \) is almost automorphic and there exist \(M>0\) and \(w>0\) such that

    $$\begin{aligned} \left| K_{ij}\left( t\right) \right| \le M\mathrm{e}^{-tw}. \end{aligned}$$
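As announced in \(\left( H_{3}\right) \), the contraction constant \(r\) can be estimated numerically by sampling the coefficient functions on a grid of times; the sketch below assumes the user supplies callables for the coefficient matrices and the constants of \((H_{1})\), \((H_{3})\) and \((H_{4})\) (all names are illustrative).

```python
import numpy as np

def contraction_constant(a, b, c, Lf, Lg, Lh, M, w, d_tilde, s_grid):
    """Approximate r of (H3) by maximizing over the finite sample s_grid.

    a, b, c    : callables s -> n-by-n coefficient matrices
    Lf, Lg, Lh : length-n arrays of Lipschitz constants
    """
    r = 0.0
    for s in s_grid:
        rows = np.abs(a(s)) @ Lf + np.abs(b(s)) @ Lg + (M / w) * np.abs(c(s)) @ Lh
        r = max(r, rows.max() / d_tilde)
    return r   # (H3) requires r < 1
```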

Existence of the Almost Automorphic Solution

In this section, we establish results on the existence and uniqueness of the almost automorphic solution of \((1)\). First, we recall and prove some technical lemmas which are necessary for the first main theorem of this paper.

Lemma 1

(see [8]) Let \(f:\mathbb{R }\times \mathbb{R }^{n}\rightarrow \mathbb{R }^{n}\) be an almost automorphic function in \(t\in \mathbb{R }\) for each \(x\in \mathbb{R }^{n}\) and assume that \(f\) satisfies a Lipschitz condition in \(x\) uniformly in \(t\) \(\in \mathbb{R }\). Let \(\varphi \) : \( \mathbb{R }\) \(\longrightarrow \) \(\mathbb{R }^{n}\) be an almost automorphic function. Then the function

$$\begin{aligned} \phi :t\longmapsto \phi \left( t\right) =f\left( t,\varphi \left( t\right) \right) \end{aligned}$$

is almost automorphic.

Lemma 2

Suppose that assumptions \((H_{1})\) and \((H_{4})\) hold and \(x_{j}(\cdot )\in AA( \mathbb{R },\mathbb{R })\). Then

$$\begin{aligned} \phi :t\longmapsto \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}\left( s\right) )\mathrm{d}s \end{aligned}$$

belongs to \(AA( \mathbb{R },\mathbb{R })\).

Proof

By the composition theorem for almost automorphic functions [9], the function \(\psi :s\longmapsto h_{j}(x_{j}\left( s\right) )\) belongs to \(AA(\mathbb{R },\mathbb{R })\) whenever \(x_{j}\in AA(\mathbb{R }, \mathbb{R }).\) Now, let \(\left( s_{n}^{\prime }\right) \) be a sequence of real numbers. By \(\left( H_{4}\right) \) we can extract a subsequence \(\left( s_{n}\right) \) of \(\left( s_{n}^{\prime }\right) \) such that for all \(t,s\in \mathbb{R }\)

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }K_{ij}\left( t-s+s_{n}\right) =K_{ij}^{1}\left( t-s\right) ,\quad \lim \limits _{n\rightarrow +\infty }K_{ij}^{1}\left( t-s-s_{n}\right) =K_{ij}\left( t-s\right) \!, \end{aligned}$$

and

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }\psi \left( t+s_{n}\right) =\psi ^{1}\left( t\right) ,\quad \lim \limits _{n\rightarrow +\infty }\psi ^{1}\left( t-s_{n}\right) =\psi \left( t\right) \!. \end{aligned}$$

Set

$$\begin{aligned} \phi ^{1}:t\longmapsto \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) \psi ^{1}(s)\mathrm{d}s. \end{aligned}$$

Clearly

$$\begin{aligned} \left| \phi \left( t+s_{n}\right) -\phi ^{1}\left( t\right) \right|&= \left| \int \limits _{-\infty }^{t+s_{n}}K_{ij}\left( t+s_{n}-s\right) \psi (s)\mathrm{d}s-\int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) \psi ^{1}(s)\mathrm{d}s \right| \\&= \left| \int \limits _{-\infty }^{t}K_{ij}\left( t-u\right) \psi (u+s_{n})\mathrm{d}u-\int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) \psi ^{1}(s)\mathrm{d}s\right| \\&\le \int \limits _{-\infty }^{t}\left| K_{ij}\left( t-s\right) \right| \left| \psi (s+s_{n})-\psi ^{1}(s)\right| \mathrm{d}s \\&\le \int \limits _{-\infty }^{t}M\mathrm{e}^{-\left( t-s\right) w}\left| \psi (s+s_{n})-\psi ^{1}(s)\right| \mathrm{d}s. \end{aligned}$$

By the well-known Lebesgue dominated convergence theorem and the bound in \(\left( H_{4}\right) ,\) we have for all \(t\in \mathbb{R }\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\phi (t+s_{n})=\phi ^{1}\left( t\right) \!. \end{aligned}$$

Similarly for each \(t\in \mathbb{R }\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\phi ^{1}(t-s_{n})=\phi \left( t\right) \!, \end{aligned}$$

which implies that \(\phi :t\longmapsto \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}\left( s\right) )\mathrm{d}s\) belongs to \(AA( \mathbb{R },\mathbb{R })\). \(\square \)
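In practice, the convolution term of Lemma 2 can be approximated by truncating the improper integral; by \((H_{4})\), the neglected tail is at most \((M/w)\,\mathrm{e}^{-wT}\sup \left| h(x)\right| \). A minimal sketch (illustrative names; the commented example reuses an almost automorphic function from “Almost Automorphic Functions”):

```python
import numpy as np

def convolution_term(ts, K, h, x, T=30.0, dt=1e-3):
    """Approximate phi(t) = int_{-infty}^t K(t - s) h(x(s)) ds at the times ts,
    truncating the integral to [t - T, t] (rectangle rule)."""
    u = np.arange(0.0, T, dt)             # u = t - s ranges over [0, T)
    weights = K(u) * dt
    return np.array([np.sum(weights * h(x(t - u))) for t in ts])

# Example:
# K = lambda u: np.exp(-u); h = np.tanh
# x = lambda s: np.cos(1.0 / (2.0 + np.sin(s) + np.sin(np.sqrt(2.0) * s)))
# values = convolution_term(np.linspace(0.0, 50.0, 501), K, h, x)
```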

Lemma 3

Suppose that assumptions \(\mathrm{(H_{1}),(H_{2})}\) and \(\mathrm{(H_{4})}\) hold. Define the nonlinear operator \(\varGamma \) by: for each \(\varphi \in AA(\mathbb{R },\mathbb{R }^{n})\)

$$\begin{aligned} (\varGamma \varphi )(t)&= col\left\{ \int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{s}^{t}d_{i}\left( \xi \right) d\xi }\left[ \sum \limits _{j=1}^{n}a_{ij}\left( s\right) f_{j}(\varphi _{j}\left( s\right) )\right. \right. \\&\quad +\sum \limits _{j=1}^{n}b_{ij}\left( s\right) g_{j}\left( \varphi _{j}\left( s-\tau _{j}\left( s\right) \right) \right) \\&\quad \left. \left. +\sum \limits _{j=1}^{n}c_{ij}\left( s\right) \int \limits _{-\infty }^{s}K_{ij}\left( s-u\right) h_{j}(\varphi _{j}\left( u\right) )\mathrm{d}u+J_{i}\left( s\right) \right] \mathrm{d}s\right\} \!\!. \end{aligned}$$

Then \(\varGamma \) maps \(AA(\mathbb{R },\mathbb{R }^{n})\) into itself.

Proof

First of all, let us check that \(\varGamma \) is well defined. Indeed, by Theorem 1, the space \(AA(\mathbb{R },\mathbb{R }^{n})\) is translation invariant. Besides, by Lemmas 1 and 2 the function

$$\begin{aligned}&\chi _{i} :s\longmapsto \sum \limits _{j=1}^{n}a_{ij}\left( s\right) f_{j}(\varphi _{j}\left( s\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( s\right) g_{j}\left( \varphi _{j}\left( s-\tau _{j}\left( s\right) \right) \right) \\&\quad \quad +\sum \limits _{j=1}^{n}c_{ij}\left( s\right) \int \limits _{-\infty }^{s}K_{ij}\left( s-u\right) h_{j}\left( \varphi _{j}\left( u\right) \right) \mathrm{d}u+J_{i}\left( s\right) \end{aligned}$$

belongs to \(AA(\mathbb{R },\mathbb{R })\). Consequently we can write

$$\begin{aligned} (\varGamma \varphi )(t):=col\left\{ \int \limits _{-\infty }^{t}\exp \left( -\int \limits _{s}^{t}d_{i}\left( \xi \right) d\xi \right) \chi _{i}\left( s\right) \mathrm{d}s\right\} \!. \end{aligned}$$

Let \(\left( s_{n}^{\prime }\right) \) be a sequence of real numbers. By \(\left( H_{2}\right) \) and the almost automorphy of \(\chi _{i}\), we can extract a subsequence \(\left( s_{n}\right) \) of \(\left( s_{n}^{\prime }\right) \) such that for all \(t,s\in \mathbb{R }\)

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }d_{i}\left( t+s_{n}\right) =d_{i}^{1}\left( t\right) ,\qquad \lim \limits _{n\rightarrow +\infty }d_{i}^{1}\left( t-s_{n}\right) =d_{i}\left( t\right) \end{aligned}$$

and

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }\chi _{i}\left( t+s_{n}\right) =\chi _{i}^{1}\left( t\right) ,\qquad \lim \limits _{n\rightarrow +\infty }\chi _{i}^{1}\left( t-s_{n}\right) =\chi _{i}\left( t\right) \!. \end{aligned}$$

Set

$$\begin{aligned} (\varGamma ^{1}\varphi )(t):=\int \limits _{-\infty }^{t}\exp \left( -\int \limits _{s}^{t}d_{i}^{1}\left( \xi \right) d\xi \right) \chi _{i}^{1}\left( s\right) \mathrm{d}s. \end{aligned}$$

It follows, after the change of variables \(u=s-s_{n}\) (with \(\sigma =\xi -s_{n}\) in the inner integral), that

$$\begin{aligned} (\varGamma \varphi )(t+s_{n})-(\varGamma ^{1}\varphi )(t)&= \int \limits _{-\infty }^{t+s_{n}}\mathrm{e}^{-\int \limits _{s}^{t+s_{n}}d_{i}\left( \xi \right) d\xi }\chi _{i}\left( s\right) \mathrm{d}s-\int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{s}^{t}d_{i}^{1}\left( \xi \right) d\xi }\chi _{i}^{1}\left( s\right) \mathrm{d}s \\&= \int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{u}^{t}d_{i}\left( \sigma +s_{n}\right) d\sigma }\chi _{i}\left( u+s_{n}\right) \mathrm{d}u-\int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{u}^{t}d_{i}^{1}\left( \xi \right) d\xi }\chi _{i}^{1}\left( u\right) \mathrm{d}u \\&= \int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{u}^{t}d_{i}\left( \sigma +s_{n}\right) d\sigma }\left( \chi _{i}\left( u+s_{n}\right) -\chi _{i}^{1}\left( u\right) \right) \mathrm{d}u \\&\quad +\int \limits _{-\infty }^{t}\left( \mathrm{e}^{-\int \limits _{u}^{t}d_{i}\left( \sigma +s_{n}\right) d\sigma }-\mathrm{e}^{-\int \limits _{u}^{t}d_{i}^{1}\left( \xi \right) d\xi }\right) \chi _{i}^{1}\left( u\right) \mathrm{d}u. \end{aligned}$$

Again by the Lebesgue Dominated Convergence Theorem we obtain immediately that \(\forall t\in \mathbb{R }\)

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }(\varGamma \varphi )(t+s_{n})=(\varGamma ^{1}\varphi )(t). \end{aligned}$$

The same approach proves that for all \(t\in \mathbb{R }\)

$$\begin{aligned} \lim \limits _{n\rightarrow +\infty }(\varGamma ^{1}\varphi )(t-s_{n})=(\varGamma \varphi )(t). \end{aligned}$$

Consequently, the function \(\varGamma \varphi \) belongs to \(AA(\mathbb{R }, \mathbb{R }^{n}).\) \(\square \)

Theorem 2

Suppose that assumptions \(\mathrm{(H_{1})-(H_{4})}\) hold. Then the recurrent neural network (1) has a unique almost automorphic solution in the region

$$\begin{aligned} \mathcal{B }=B(\varphi _{0},r)=\left\{ \varphi \in AA(\mathbb{R },\mathbb{R } ^{n}),\left\| \varphi -\varphi _{0}\right\| \le \frac{r\left\| J\right\| _{\infty }}{\widetilde{d}\left( 1-r\right) }\right\} \!, \end{aligned}$$

where

$$\begin{aligned} \varphi _{0}\left( t\right) =\left( \begin{array}{c} \int \limits _{-\infty }^{t}\exp \left( -\int \limits _{s}^{t}d_{1}\left( \xi \right) d\xi \right) J_{1}\left( s\right) \mathrm{d}s \\ \vdots \\ \vdots \\ \int \limits _{-\infty }^{t}\exp \left( -\int \limits _{s}^{t}d_{n}\left( \xi \right) d\xi \right) J_{n}\left( s\right) \mathrm{d}s \end{array} \right) \!. \end{aligned}$$

Proof

Set

$$\begin{aligned} \mathcal{B }&= B(\varphi _{0},r) \\&= \left\{ \varphi \in AA(\mathbb{R },\mathbb{R }^{n}),\left\| \varphi -\varphi _{0}\right\| \le \frac{r\left\| J\right\| _{\infty }}{ \widetilde{d}\left( 1-r\right) }\right\} \!. \end{aligned}$$

Clearly, \(\mathcal{B }\) is a closed convex subset of \(AA(\mathbb{R },\mathbb{R }^{n})\). One has immediately

$$\begin{aligned} \left\| \varphi _{0}\right\| _{\infty }&= \max _{1\le i\le n}\sup _{t\in \mathbb{R }}\left| \int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{s}^{t}d_{i}\left( \xi \right) d\xi }J_{i}\left( s\right) \mathrm{d}s\right| \\&\le \left\| J\right\| _{\infty }\max _{1\le i\le n}\sup _{t\in \mathbb{R }}\int \limits _{-\infty }^{t}\mathrm{e}^{-\left( t-s\right) \widetilde{d_{i}} }\mathrm{d}s \\&= \frac{\left\| J\right\| _{\infty }}{\widetilde{d}}. \end{aligned}$$

Therefore, for any \(\varphi \in \mathcal{B }\) and by using the estimate just obtained, we see easily that

$$\begin{aligned} \left\| \varphi \right\|&\le \left\| \varphi -\varphi _{0}\right\| +\left\| \varphi _{0}\right\| \\&\le \frac{r\left\| J\right\| _{\infty }}{\widetilde{d}\left( 1-r\right) }+\frac{\left\| J\right\| _{\infty }}{\widetilde{d}}=\frac{ \left\| J\right\| _{\infty }}{\widetilde{d}\left( 1-r\right) }. \end{aligned}$$

Now we prove that \(\varGamma \) is a self-mapping from \(\mathcal{B }\) to \( \mathcal{B }\). In fact, for arbitrary \(\varphi \in \mathcal{B }\), it follows that:

$$\begin{aligned}&\left\| (\varGamma \varphi )-\varphi _{0}\right\| \\&\quad =\max _{1\le i\le n}\sup _{t\in \mathbb{R }}\left| \int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{s}^{t}d_{i}\left( \xi \right) d\xi }\left\{ \sum \limits _{j=1}^{n}a_{ij}\left( s\right) f_{j}(\varphi _{j}\left( s\right) )+\sum \limits _{j=1}^{n}b_{ij}\left( s\right) g_{j}\left( \varphi _{j}\left( s-\tau _{j}\left( s\right) \right) \right) \right. \right. \\&\qquad \left. \left. +\sum \limits _{j=1}^{n}c_{ij}\left( s\right) \int \limits _{-\infty }^{s}K_{ij}\left( s-\sigma \right) h_{j}\left( \varphi _{j}\left( \sigma \right) \right) d\sigma \right\} \mathrm{d}s\right| \\&\quad \le \max _{1\le i\le n}\sup _{s\in \mathbb{R }}\frac{\sum \nolimits _{j=1}^{n}L_{f_{j}}\left| a_{ij}\left( s\right) \right| +\sum \nolimits _{j=1}^{n}L_{g_{j}}\left| b_{ij}\left( s\right) \right| + \frac{M}{w}\sum \nolimits _{j=1}^{n}L_{h_{j}}\left| c_{ij}\left( s\right) \right| }{\widetilde{d}}\left\| \varphi \right\| _{\infty } \\&\quad = r\left\| \varphi \right\| _{\infty }\le \frac{r\left\| J\right\| _{\infty }}{\widetilde{d}\left( 1-r\right) }, \end{aligned}$$

which implies that \(\left( \varGamma \varphi \right) \in \mathcal{B }.\) Next, we prove that the mapping \(\varGamma \) is a contraction on \(\mathcal{B }\). In view of \((H_{1})\), for any \(\varphi ,\psi \in \mathcal{B }\), we get the following estimates:

$$\begin{aligned} \left\| (\varGamma \varphi )-(\varGamma \psi )\right\|&\le \max _{1\le i\le n}\sup _{t\in \mathbb{R }}\int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{s}^{t}d_{i}\left( \xi \right) d\xi }\left\{ \sum \limits _{j=1}^{n}\left| a_{ij}\left( s\right) \right| \left| f_{j}(\varphi _{j}\left( s\right) )-f_{j}(\psi _{j}\left( s\right) )\right| \right. \\&\quad +\sum \limits _{j=1}^{n}\left| b_{ij}\left( s\right) \right| \left| g_{j}\left( \varphi _{j}\left( s-\tau _{j}\left( s\right) \right) \right) -g_{j}\left( \psi _{j}\left( s-\tau _{j}\left( s\right) \right) \right) \right| \\&\quad \left. +\sum \limits _{j=1}^{n}\left| c_{ij}\left( s\right) \right| \int \limits _{-\infty }^{s}\left| K_{ij}\left( s-\sigma \right) \right| \left| h_{j}(\varphi _{j}\left( \sigma \right) )-h_{j}(\psi _{j}\left( \sigma \right) )\right| d\sigma \right\} \mathrm{d}s \\&\le \max _{1\le i\le n}\sup _{t\in \mathbb{R }}\int \limits _{-\infty }^{t}\mathrm{e}^{-\int \limits _{s}^{t}d_{i}\left( \xi \right) d\xi }\left\{ \sum \limits _{j=1}^{n}\left| a_{ij}\left( s\right) \right| L_{f_{j}}\left| \varphi _{j}\left( s\right) -\psi _{j}\left( s\right) \right| \right. \\&\quad +\sum \limits _{j=1}^{n}\left| b_{ij}\left( s\right) \right| L_{g_{j}}\left| \varphi _{j}\left( s-\tau _{j}\left( s\right) \right) -\psi _{j}\left( s-\tau _{j}\left( s\right) \right) \right| \\&\quad \left. +\sum \limits _{j=1}^{n}\left| c_{ij}\left( s\right) \right| L_{h_{j}}\int \limits _{-\infty }^{s}\left| K_{ij}\left( s-\sigma \right) \right| \left| \varphi _{j}\left( \sigma \right) -\psi _{j}\left( \sigma \right) \right| d\sigma \right\} \mathrm{d}s \\&\le \max _{1\le i\le n}\sup _{s\in \mathbb{R }}\left( \frac{\sum \nolimits _{j=1}^{n}\left( \left| a_{ij}\left( s\right) \right| L_{f_{j}}+\left| b_{ij}\left( s\right) \right| L_{g_{j}}+\frac{M}{w}\left| c_{ij}\left( s\right) \right| L_{h_{j}}\right) }{\widetilde{d}}\right) \left\| \varphi -\psi \right\| . \end{aligned}$$

Then from \((H_{3})\) it follows that \(\varGamma \) is a contraction on \(\mathcal{B }\). So there exists a unique almost automorphic solution \(x^{*}\in \mathcal{B }\) of \(\left( 1\right) \), that is, \(\varGamma \left( x^{*}\right) =x^{*}.\) \(\square \)
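The proof is constructive in the usual Banach fixed point sense: starting from \(\varphi _{0}\) and iterating \(\varGamma \) converges geometrically to \(x^{*}\), with ratio \(r\). A minimal driver for this iteration is sketched below, assuming gamma is a user-supplied discretization of \(\varGamma \) (for instance, \(\varphi \) sampled on a time grid with the improper integrals truncated); the names are illustrative.

```python
import numpy as np

def picard_iterate(gamma, phi0, tol=1e-8, max_iter=200):
    """Fixed-point iteration phi_{k+1} = gamma(phi_k).

    When gamma is a contraction with ratio r < 1 (Theorem 2), the error decays
    like r**k, so about log(tol) / log(r) iterations suffice.
    """
    phi = np.asarray(phi0, dtype=float)
    for _ in range(max_iter):
        nxt = gamma(phi)
        if np.max(np.abs(nxt - phi)) < tol:
            return nxt
        phi = nxt
    return phi
```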

Global Exponential Stability of Almost Automorphic Solution

In this section, we shall discuss the global exponential stability of the almost automorphic solution of system \((1)\). For any solution

$$\begin{aligned} x\left( t\right) =\left( x_{1}\left( t\right) ,\ldots ,x_{n}\left( t\right) \right) \end{aligned}$$

and almost automorphic solution

$$\begin{aligned} x^{*}\left( t\right) =\left( x_{1}^{*}\left( t\right) ,\ldots ,x_{n}^{*}\left( t\right) \right) \end{aligned}$$

of system \(\left( 1\right) \) with the initial condition

$$\begin{aligned} x_{i}\left( t\right) =\widehat{x}_{i}\left( t\right) \!,\quad -\infty <t\le 0,\quad 1\le i\le n, \end{aligned}$$

set

$$\begin{aligned} z(t)=x\left( t\right) -x^{*}\left( t\right) =\left( x_{1}\left( t\right) -x_{1}^{*}\left( t\right) ,\ldots ,x_{n}\left( t\right) -x_{n}^{*}\left( t\right) \right) \!. \end{aligned}$$

Definition 3

The almost automorphic solution \(x^{*}\left( \cdot \right) =\left( x_{1}^{*}\left( \cdot \right) ,\ldots ,x_{n}^{*}\left( \cdot \right) \right) \) of the RNNs is said to be globally exponentially stable if, for any solution \(x\left( \cdot \right) =\left( x_{1}\left( \cdot \right) ,\ldots ,x_{n}\left( \cdot \right) \right) \), there exist constants \(M>0\) and \(\mu >0\) such that for all \(t>0\)

$$\begin{aligned} \left\| x^{*}\left( t\right) -x\left( t\right) \right\| \le M\mathrm{e}^{-\mu t}. \end{aligned}$$

Definition 4

[12] (The upper-right Dini derivative)

Let \(f:\mathbb{R }\longrightarrow \mathbb{R }\) be a continuous function; then the upper-right Dini derivative \(D^{+}f(t)\) is defined by

$$\begin{aligned} D^{+}f(t)=\overline{\lim _{h\rightarrow 0^{+}}}\frac{f\left( t+h\right) -f\left( t\right) }{h}. \end{aligned}$$

Remark 2

If \(f\) is differentiable and \(f(t)\ne 0\), the upper-right Dini derivative \(D^{+}\left| f(t)\right| \) of \(\left| f(t)\right| \) is given by

$$\begin{aligned} D^{+}\left| f(t)\right| =\mathrm{sign}\left( f\left( t\right) \right) \frac{\mathrm{d}f\left( t\right) }{\mathrm{d}t}, \end{aligned}$$

where sign\(\left( \cdot \right) \) is the signum function.
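This identity is easy to sanity-check numerically for a differentiable \(f\) at a point where it does not vanish; the sketch below (illustrative, with \(f=\sin \)) compares a one-sided difference quotient of \(\left| f\right| \) with \(\mathrm{sign}(f)f^{\prime }\).

```python
import numpy as np

def dini_upper_right(F, t, hs=np.logspace(-8, -3, 24)):
    # crude estimate of D+ F(t) = limsup_{h -> 0+} (F(t + h) - F(t)) / h
    return max((F(t + h) - F(t)) / h for h in hs)

f, df, t0 = np.sin, np.cos, 1.3          # a point with f(t0) != 0
lhs = dini_upper_right(lambda t: abs(f(t)), t0)
rhs = np.sign(f(t0)) * df(t0)
print(lhs, rhs)                          # agree up to discretization error
```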

Remark 3

It is well known that global exponential stability is a strong form of stability, since it implies uniform asymptotic stability. Furthermore, exponential stability is important in applications since it is robust to various types of perturbations.

Theorem 3

Suppose that assumptions \(\mathrm{(H_{1})-(H_{4})}\) hold and let \( x^{*}\left( t\right) =\left( x_{1}^{*}\left( t\right) ,\ldots ,x_{n}^{*}\left( t\right) \right) \) be the unique almost automorphic solution of Eq. \((1)\) in \(\mathcal{B }.\) If, for all \(1\le i\le n\) and every sufficiently small \(t>0\),

$$\begin{aligned} (H_{5})\quad \widetilde{d}-\sum \limits _{j=1}^{n}\left( L_{f_{j}} \overline{a_{ij}}+\mathrm{e}^{\tau t}\overline{b_{ij}}L_{g_{j}}+\overline{c_{ij}} L_{h_{j}}\int \limits _{0}^{+\infty }K_{ij}\left( \rho \right) \mathrm{e}^{t\rho }d\rho \right) >0, \end{aligned}$$

where \(\tau =\max \nolimits _{1\le j\le n}\sup \nolimits _{t\in \mathbb{R }}\tau _{j}\left( t\right) \) and

$$\begin{aligned} \overline{a_{ij}}=\sup _{t\in \mathbb{R }}\left| a_{ij}\left( t\right) \right| ,\quad \overline{b_{ij}}=\sup _{t\in \mathbb{R }}\left| b_{ij}\left( t\right) \right| ,\quad \overline{c_{ij}}=\sup _{t\in \mathbb{R }}\left| c_{ij}\left( t\right) \right| . \end{aligned}$$

then \(x^{*}\left( \cdot \right) \) is globally exponentially stable.

Proof

For \(1\le i\le n,\) set

$$\begin{aligned} \psi _{i}(t)=t-\widetilde{d_{i}}+\sum \limits _{j=1}^{n}\left( L_{f_{j}}\overline{a_{ij}} +\mathrm{e}^{\tau t}\overline{b_{ij}}L_{g_{j}}+\overline{c_{ij}}L_{h_{j}}\int \limits _{0}^{+ \infty }K_{ij}\left( \rho \right) \mathrm{e}^{t\rho }d\rho \right) \!. \end{aligned}$$

It is clear that the functions \(t\longmapsto \psi _{i}(t), 1\le i\le n, \) are continuous on \(\mathbb{R }^{+}\), and hypothesis \((H_{5})\) states precisely that \(\psi _{i}(t)<0\) for all sufficiently small \(t>0\). Thus, there exists a sufficiently small constant \(\mu >0\) such that

$$\begin{aligned} \psi _{i}(\mu )<0,\quad 1\le i\le n. \end{aligned}$$

Take an arbitrary \(\epsilon >0\) and set, for all \(1\le j\le n,\)

$$\begin{aligned} z_{j}\left( t\right) =\left| x_{j}^{*}\left( t\right) -x_{j}\left( t\right) \right| \mathrm{e}^{\mu t}. \end{aligned}$$

Let \(M=\max \nolimits _{1\le j\le n}\sup \nolimits _{t\le 0}z_{j}\left( t\right) \), which is finite for bounded initial histories. Then, for all \(1\le j\le n\) and all \(t\le 0,\) one has

$$\begin{aligned} z_{j}\left( t\right) \le M<M+\epsilon . \end{aligned}$$
(2)

In the following, we shall prove that for all \(t>0,\)

$$\begin{aligned} z_{j}\left( t\right) \le M+\epsilon . \end{aligned}$$

Suppose the contrary and, for \(1\le j\le n\), denote \(A_{j}=\left\{ t>0,\ z_{j}\left( t\right) >M+\epsilon \right\} \). It follows that there exists \(1\le j_{0}\le n\) such that \(A_{j_{0}}\ne \emptyset \). Let

$$\begin{aligned} t_{j}=\left\{ \begin{array}{ll} \inf \left( A_{j}\right) &{} \text{ if } A_{j}\ne \emptyset ,\\ +\infty &{} \text{ if } A_{j}=\emptyset . \end{array} \right. \end{aligned}$$

Clearly \(t_{j}>0\) and, for all \(t<t_{j}\), one has

$$\begin{aligned} z_{j}\left( t\right) \le M+\epsilon . \end{aligned}$$

Let us denote \(t_{s}=\min \nolimits _{1\le j\le n}t_{j}\), the minimum being attained at some index which we denote by \(s\). It follows that \( 0<t_{s}<+\infty \) and that \(z_{j}\left( t\right) \le M+\epsilon \) for all \(t\le t_{s}\) and all \(1\le j\le n\). Note that

$$\begin{aligned} z_{s}\left( t_{s}\right) =M+\epsilon , \qquad D^{+}z_{s}\left( t_{s}\right) \ge 0. \end{aligned}$$

Now since \(x_{j}\left( \cdot \right) \) and \(x_{j}^{*}\left( \cdot \right) \) are solutions of \(\left( 1\right) ,\) we get

$$\begin{aligned} 0&\le D^{+}z_{s}\left( t_{s}\right) =D^{+}\left[ \left| x_{s}^{*}\left( t\right) -x_{s}\left( t\right) \right| \mathrm{e}^{\mu t}\right] _{\mid t=t_{s}} \\&= \mu \mathrm{e}^{\mu t_{s}}\left| x_{s}^{*}\left( t_{s}\right) -x_{s}\left( t_{s}\right) \right| +\mathrm{e}^{\mu t_{s}}\,\mathrm{sign}\left( x_{s}^{*}\left( t_{s}\right) -x_{s}\left( t_{s}\right) \right) \\&\quad \times \left\{ -d_{s}\left( t_{s}\right) \left( x_{s}^{*}\left( t_{s}\right) -x_{s}\left( t_{s}\right) \right) +\sum \limits _{j=1}^{n}a_{sj}\left( t_{s}\right) \left[ f_{j}(x_{j}^{*}\left( t_{s}\right) )-f_{j}(x_{j}\left( t_{s}\right) )\right] \right. \\&\quad +\sum \limits _{j=1}^{n}b_{sj}\left( t_{s}\right) \left[ g_{j}\left( x_{j}^{*}\left( t_{s}-\tau _{j}\left( t_{s}\right) \right) \right) -g_{j}\left( x_{j}\left( t_{s}-\tau _{j}\left( t_{s}\right) \right) \right) \right] \\&\quad \left. +\sum \limits _{j=1}^{n}c_{sj}\left( t_{s}\right) \int \limits _{-\infty }^{t_{s}}K_{sj}\left( t_{s}-\rho \right) \left[ h_{j}\left( x_{j}^{*}\left( \rho \right) \right) -h_{j}\left( x_{j}(\rho )\right) \right] d\rho \right\} \\&\le \left( \mu -d_{s}\left( t_{s}\right) \right) \mathrm{e}^{\mu t_{s}}\left| x_{s}^{*}\left( t_{s}\right) -x_{s}\left( t_{s}\right) \right| +\sum \limits _{j=1}^{n}\left| a_{sj}\left( t_{s}\right) \right| L_{f_{j}}z_{j}\left( t_{s}\right) \\&\quad +\mathrm{e}^{\mu \tau }\sum \limits _{j=1}^{n}\left| b_{sj}\left( t_{s}\right) \right| L_{g_{j}}z_{j}\left( t_{s}-\tau _{j}\left( t_{s}\right) \right) +\sum \limits _{j=1}^{n}\left| c_{sj}\left( t_{s}\right) \right| L_{h_{j}}\int \limits _{0}^{+\infty }K_{sj}\left( u\right) \mathrm{e}^{\mu u}z_{j}(t_{s}-u)\mathrm{d}u \\&\le \left( M+\epsilon \right) \left( \mu -\widetilde{d}_{s}+\sum \limits _{j=1}^{n}\left( \overline{a_{sj}}L_{f_{j}}+\mathrm{e}^{\mu \tau }\overline{b_{sj}}L_{g_{j}}+\overline{c_{sj}}L_{h_{j}}\int \limits _{0}^{+\infty }K_{sj}\left( u\right) \mathrm{e}^{\mu u}\mathrm{d}u\right) \right) \!. \end{aligned}$$

It follows that

$$\begin{aligned} \mu -\widetilde{d}_{s}+\sum \limits _{j=1}^{n}\left( \overline{a_{sj}}L_{f_{j}}+\mathrm{e}^{\mu \tau }\overline{b_{sj}}L_{g_{j}}+\overline{c_{sj}}L_{h_{j}}\int \limits _{0}^{+\infty }K_{sj}\left( u\right) \mathrm{e}^{\mu u}\mathrm{d}u\right) \ge 0, \end{aligned}$$

that is, \(\psi _{s}\left( \mu \right) \ge 0\), which contradicts the fact that \(\psi _{s}\left( \mu \right) <0.\) Thus we obtain that for all \(t>0,\)

$$\begin{aligned} z_{j}\left( t\right) \le M+\epsilon ,\quad \text{ i.e., }\quad \left| x_{j}\left( t\right) -x_{j}^{*}\left( t\right) \right| \le \left( M+\epsilon \right) \mathrm{e}^{-\mu t}. \end{aligned}$$

Note that \(\Vert x\left( t\right) -x^{*}\left( t\right) \Vert =\max \nolimits _{1\le j\le n}\vert x_{j}\left( t\right) -x_{j}^{*}\left( t\right) \vert ;\) then, letting \( \epsilon \rightarrow 0^{+},\) we obtain for all \(t>0\)

$$\begin{aligned} \Vert x\left( t\right) -x^{*}\left( t\right) \Vert \le M \mathrm{e}^{-\mu t}. \end{aligned}$$

The proof of this theorem is now completed. \(\square \)

Remark 4

It should be noted that, if we let \(f_{j}=g_{j}\) and \(c_{ij}=0\), the model \((1)\) reduces to the one investigated in [5]. Besides, in [10] and [20], Gopalsamy et al. analyzed the global stability of the following system:

$$\begin{aligned} \dot{x_{i}}\left( t\right) =-a_{i}x_{i}(t)+\sum \limits _{j=1}^{n}p_{ij}\left( t\right) \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}(s))\mathrm{d}s+J_{i},\quad 1\le i\le n, \end{aligned}$$

as a model for neural networks involving distributed time delays arising from signal propagation. Owing to the differences in the methods employed, the results in this paper and those in the above references are different. Therefore, our results are novel and have some significance in theory as well as in applications of almost periodic oscillatory neural networks. On the other hand, a different approach is used in [14] to obtain several sufficient conditions for the existence and attractivity of an almost periodic solution for a class of recurrent neural networks similar to \( (1)\). Note that in that study the kernel \(K_{ij}\) is a piecewise continuous integrable function satisfying

$$\begin{aligned} \int \limits _{0}^{+\infty }K_{ij}\left( s\right) \mathrm{d}s=1,\quad \int \limits _{0}^{+\infty }sK_{ij}\left( s\right) \mathrm{d}s=1,\quad \forall 1\le i,j\le n. \end{aligned}$$

Let us note that this last hypothesis is not needed in the present paper.

Remark 5

In [1], similar techniques are used to study pseudo almost periodic solutions with \(d_{i}\left( t\right) =d_{i}.\) Recently, Quin et al. [18] investigated the existence, uniqueness and stability of almost periodic solutions for a class of delayed neural networks. First, our results can be seen as a generalization and improvement of [18], since in models (1) and (3) of [18] the authors considered the almost periodic case without the distributed delay term

$$\begin{aligned} \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}(s))\mathrm{d}s. \end{aligned}$$

Second, in our study we work in the space of almost automorphic functions, which contains the set of almost periodic functions. Furthermore, the methods are quite different. Consequently, Theorems 2 and 3 generalize and improve Theorem 3.1 in [18].

Illustrative Examples

The sufficient conditions for the existence and stability of a class of delayed RNNs presented in this paper are demonstrated by a couple of examples and numerical simulations.

Example 1

First, let us apply our main results to the following special three-dimensional system:

$$\begin{aligned} \dot{x_{i}}\left( t\right)&= -d_{i}\left( t\right) x_{i}(t)+\sum \limits _{j=1}^{3}a_{ij}\left( t\right) f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{3}b_{ij}\left( t\right) g_{j}(x_{j}\left( t-\tau \right) ) \\&+ \sum \limits _{j=1}^{3}c_{ij}\left( t\right) \int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}(s))\mathrm{d}s+J_{i}\left( t\right) , \end{aligned}$$

where

$$\begin{aligned} \left( \begin{array}{c} d_{1}\left( t\right) \\ d_{2}\left( t\right) \\ d_{3}\left( t\right) \end{array} \right) =\left( \begin{array}{c} 3+\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) \\ 7+3\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) \\ 5+2\sin \left( \frac{1}{2+\cos t+\sin \sqrt{2}t}\right) \end{array} \right) \end{aligned}$$

for all \(t\in \mathbb{R },\)

$$\begin{aligned} f_{j}(t)=g_{j}(t)=h_{j}(t)=\frac{\left| t+1\right| -\left| t-1\right| }{2} \end{aligned}$$

and

$$\begin{aligned} K_{ij}\left( t\right)&= \cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) -1\quad \Longrightarrow \quad K_{ij}\left( t\right) \le \mathrm{e}^{-t},\\ \left( a_{ij}\right)&= \left( \begin{array}{c@{\quad }c@{\quad }c} \frac{2\cos t+\cos \sqrt{2}t}{10} &{} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{10} &{} \frac{\sin \left( \frac{1}{1+\sin t+\sin \sqrt{5}t} \right) }{10} \\ \frac{2\cos t+0.1\cos \sqrt{3}t}{10} &{} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{3}t}\right) }{10} &{} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{10} \\ \frac{2\sin t+\sin \sqrt{2}t}{10} &{} \frac{\sin \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{5} &{} \frac{\sin \left( \frac{1}{2+\sin t+\sin \sqrt{3}t}\right) }{5} \end{array} \right) ,\\ \left( b_{ij}\right)&= \left( \begin{array}{c@{\quad }c@{\quad }c} \frac{2\sin \sqrt{2}t+\sin \sqrt{3}t}{10} &{} \frac{\cos \left( \frac{1}{ 2+\sin t+\sin \sqrt{2}t}\right) }{5} &{} \frac{\sin \left( \frac{1}{1+\sin t+\sin \sqrt{5}t}\right) }{10} \\ \frac{2\cos \sqrt{5}t+\cos \sqrt{3}t}{10} &{} \frac{\cos \left( \frac{1}{ 2+\sin t+\sin \sqrt{3}t}\right) }{10} &{} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{5} \\ \frac{2\sin \sqrt{3}t+\cos \sqrt{2}t}{10} &{} \frac{\sin \left( \frac{1}{ 2+\sin t+\sin \sqrt{2}t}\right) }{10} &{} \frac{\sin \left( \frac{1}{2+\sin t+\sin \sqrt{3}t}\right) }{5} \end{array} \right) \!,\\ \left( c_{ij}\right)&= \left( \begin{array}{c@{\quad }c@{\quad }c} \frac{2\cos t+3\cos \sqrt{2}t}{10} &{} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{10} &{} \frac{\sin \left( \frac{1}{1+\sin t+\sin \sqrt{5}t}\right) }{5} \\ \frac{2\cos t+\cos \sqrt{3}t}{10} &{} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{3}t}\right) }{10} &{} \frac{3\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{10} \\ \frac{\sin t+\sin \sqrt{2}t}{10} &{} \frac{\sin \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{5} &{} \frac{\sin \left( \frac{1}{2+\sin t+\sin \sqrt{3}t} \right) }{5} \end{array} \right) \!, \end{aligned}$$

and

$$\begin{aligned} J\left( t\right) =\left( \begin{array}{c} \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{5} \\ \frac{\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{2} \\ \frac{3\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{10} \end{array} \right) \!. \end{aligned}$$

It follows that

$$\begin{aligned} r&= \max _{1\le i\le 3}\sup _{s\in \mathbb{R }}\left( \frac{ \sum \nolimits _{j=1}^{3}\left| a_{ij}\left( s\right) \right| L_{f_{j}} +\sum \nolimits _{j=1}^{3}\left| b_{ij}\left( s\right) \right| L_{g_{j}}+ \frac{M}{w}\sum \nolimits _{j=1}^{3}\left| c_{ij}\left( s\right) \right| L_{h_{j}}}{\widetilde{d}}\right) \\&= \max _{1\le i\le 3}\sup _{s\in \mathbb{R }}\left( \frac{ \sum \nolimits _{j=1}^{3}\left| a_{ij}\left( s\right) \right| +\sum \nolimits _{j=1}^{3}\left| b_{ij}\left( s\right) \right| +\sum \nolimits _{j=1}^{3}\left| c_{ij}\left( s\right) \right| }{\widetilde{d}}\right) \\&< \max \left( \frac{0.5+0.6+0.8}{2},\frac{0.5+0.6+0.7}{2},\frac{0.7+0.5+0.6 }{2}\right) \\&< 1. \end{aligned}$$
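This bound can be double-checked with a few lines of Python, replacing each coefficient function by its entrywise sup-norm bound (for instance \(\left| 2\cos t+3\cos \sqrt{2}t\right| \le 5\), so \(\left| c_{11}(t)\right| \le 0.5\)); the matrices below are these bounds, not part of the original text.

```python
import numpy as np

# entrywise sup-norm bounds of the coefficient matrices of this example
A = np.array([[0.3,  0.1, 0.1], [0.21, 0.1, 0.1], [0.3, 0.2, 0.2]])
B = np.array([[0.3,  0.2, 0.1], [0.3,  0.1, 0.2], [0.3, 0.1, 0.2]])
C = np.array([[0.5,  0.1, 0.2], [0.3,  0.1, 0.3], [0.2, 0.2, 0.2]])
d_tilde, M_over_w = 2.0, 1.0             # L_f = L_g = L_h = 1 here
r_bound = ((A + B + M_over_w * C).sum(axis=1) / d_tilde).max()
print(r_bound)                           # 0.95 < 1, matching the estimate above
```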

Furthermore, condition \((H_{5})\) is also satisfied; therefore, all conditions of Theorem 3 hold, and the delayed recurrent neural network has a unique almost automorphic solution in the convex set

$$\begin{aligned} \mathcal{B }=B(\varphi _{0},r)=\left\{ \varphi \in AA(\mathbb{R },\mathbb{R } ^{3}),\left\| \varphi -\varphi _{0}\right\| \le 4.75 \right\} . \end{aligned}$$

Moreover, the solution is exponentially stable (see Fig. 1).

Fig. 1 Behavior of the almost automorphic solution with the initial condition \(\left( 0.5, 0.2, 0.3\right) \)
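Condition \((H_{5})\) can be checked the same way. Reusing the bound matrices A, B, C and d_tilde from the sketch above, and using \(\int _{0}^{+\infty }\mathrm{e}^{-\rho }\mathrm{e}^{\mu \rho }d\rho =1/(1-\mu )\) for \(\mu <1\) under the kernel bound \(K_{ij}(t)\le \mathrm{e}^{-t}\), the following scan locates the admissible exponential rates; the value tau = 1.0 is an assumption for illustration, since the example leaves the constant delay unspecified.

```python
import numpy as np

tau = 1.0                                # assumed value of the constant delay

def psi(mu):
    # psi_i(mu) with each d_i replaced by the conservative lower bound d_tilde
    kernel = 1.0 / (1.0 - mu)
    return mu - d_tilde + (A + np.exp(tau * mu) * B + kernel * C).sum(axis=1)

for mu in np.linspace(0.001, 0.999, 999):
    if not np.all(psi(mu) < 0):
        print("psi_i(mu) < 0 holds up to mu ~", mu)
        break
```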

Example 2

Now let us consider the following recurrent neural network \(\left( n=2\right) \)

$$\begin{aligned} \dot{x_{i}}\left( t\right)&= -d_{i}\left( t\right) x_{i}(t)+\sum \limits _{j=1}^{2}a_{ij}f_{j}(x_{j}\left( t\right) )+\sum \limits _{j=1}^{2}b_{ij}g_{j}(x_{j}\left( t-\tau \right) ) \\&\quad +\sum \limits _{j=1}^{2}c_{ij}\int \limits _{-\infty }^{t}K_{ij}\left( t-s\right) h_{j}(x_{j}(s))\mathrm{d}s+J_{i}\left( t\right) , \end{aligned}$$

where

$$\begin{aligned} \left( \begin{array}{c} d_{1}\left( t\right) \\ d_{2}\left( t\right) \end{array} \right) =\left( \begin{array}{c} 3+\cos \sqrt{5}t \\ 5+\sin \sqrt{3}t \end{array} \right) \quad \Longrightarrow \quad \widetilde{d}=2 \end{aligned}$$

for all \(t\in \mathbb{R }\) and for all \(1\le j\le 2\)

$$\begin{aligned} f_{j}(t)=g_{j}(t)=h_{j}(t)=\frac{\left| t+1\right| -\left| t-1\right| }{2}. \end{aligned}$$

Set, for all \(1\le i,j\le 2,\)

$$\begin{aligned} K_{ij}\left( t\right)&= \cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) -1\Longrightarrow K_{ij}\left( t\right) \le \mathrm{e}^{-t}\\ \left( a_{ij}\right)&= \left( \begin{array}{c@{\quad }c} \frac{1}{5} &{} \frac{-1}{5} \\ \frac{3}{10} &{} \frac{1}{5} \end{array} \right) ,\left( b_{ij}\right) =\left( \begin{array}{c@{\quad }c} \frac{1}{5} &{} \frac{1}{2} \\ \frac{4}{5} &{} \frac{-1}{10} \end{array} \right) ,\left( c_{ij}\right) =\left( \begin{array}{c@{\quad }c} \frac{1}{5} &{} \frac{1}{5} \\ \frac{1}{10} &{} \frac{-3}{10} \end{array} \right) \end{aligned}$$

and

$$\begin{aligned} J\left( t\right) =\left( \begin{array}{c} \frac{\cos \sqrt{3}t+\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{5} \\ \frac{\sin \sqrt{5}t+3\cos \left( \frac{1}{2+\sin t+\sin \sqrt{2}t}\right) }{10} \end{array} \right) \!. \end{aligned}$$

It follows that

$$\begin{aligned} r&= \max _{1\le i\le 2}\left( \frac{\sum \nolimits _{j=1}^{2}L_{f_{j}}\left| a_{ij} \right| +\sum \nolimits _{j=1}^{2}\left| b_{ij} \right| L_{g_{j}}+\frac{M}{w}\sum \nolimits _{j=1}^{2} \left| c_{ij} \right| L_{h_{j}}}{\widetilde{d}} \right) \\&= \max _{1\le i\le 2}\left( \frac{\sum \nolimits _{j=1}^{2}\left| a_{ij} \right| +\sum \nolimits _{j=1}^{2}\left| b_{ij} \right| +\sum \nolimits _{j=1}^{2}\left| c_{ij} \right| }{\widetilde{d}}\right) \\&< \max \left( \frac{1.5}{2},\frac{1.9}{2}\right) \\&< 1. \end{aligned}$$

Furthermore, condition \((H_{5})\) is also satisfied; therefore, all conditions of Theorem 3 hold, and the delayed recurrent neural network above has a unique almost automorphic solution which is exponentially stable (see Fig. 2).

Fig. 2 Behavior of the almost automorphic solution with the initial condition \(\left( 0.4, 0.3\right) \)
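Figure 2 can be reproduced qualitatively with a direct Euler discretization of the network; the sketch below assumes \(\tau =1\) (the example leaves the constant delay unspecified), truncates the distributed-delay integral at length \(T=20\), and uses a constant pre-history equal to the initial condition \((0.4,0.3)\). Step sizes and names are illustrative.

```python
import numpy as np

sigma = lambda x: np.clip(x, -1.0, 1.0)   # (|x+1| - |x-1|) / 2
q = lambda t: np.cos(1.0 / (2.0 + np.sin(t) + np.sin(np.sqrt(2.0) * t)))
Kf = lambda u: q(u) - 1.0                 # the kernel of this example
A = np.array([[0.2, -0.2], [0.3,  0.2]])
B = np.array([[0.2,  0.5], [0.8, -0.1]])
C = np.array([[0.2,  0.2], [0.1, -0.3]])

dt, tau, T, t_end = 0.01, 1.0, 20.0, 60.0
n_hist, n_tau, steps = int(T / dt), int(tau / dt), int(t_end / dt)
x = np.zeros((n_hist + steps + 1, 2))
x[: n_hist + 1] = np.array([0.4, 0.3])    # constant history, x(0) = (0.4, 0.3)
u = np.arange(1, n_hist + 1) * dt         # lags for the truncated convolution
w = Kf(u) * dt

for k in range(n_hist, n_hist + steps):
    t = (k - n_hist) * dt
    d = np.array([3.0 + np.cos(np.sqrt(5.0) * t), 5.0 + np.sin(np.sqrt(3.0) * t)])
    J = np.array([(np.cos(np.sqrt(3.0) * t) + q(t)) / 5.0,
                  (np.sin(np.sqrt(5.0) * t) + 3.0 * q(t)) / 10.0])
    # int_0^T K(u) h(x(t - u)) du by the rectangle rule over the stored history
    conv = (w[:, None] * sigma(x[k - 1 :: -1][:n_hist])).sum(axis=0)
    dx = -d * x[k] + A @ sigma(x[k]) + B @ sigma(x[k - n_tau]) + C @ conv + J
    x[k + 1] = x[k] + dt * dx
# x[n_hist:] now holds the trajectory on [0, t_end]
```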

Conclusion

As is well known, time delays are likely to be present in the implementation of neural networks due to the finite switching speed of amplifiers. Time delays in the response can influence the stability of a network by causing oscillatory and unstable characteristics. In this paper, the existence and uniqueness of the almost automorphic solution for recurrent neural networks with variable coefficients and time-varying delays have been studied. Furthermore, several sufficient conditions have been proposed to guarantee the global exponential stability of the almost automorphic solution. Hence, we improve the results of [4, 14, 16] and [19], since these papers considered the periodic and almost periodic situations. Moreover, our criteria are easy to check and apply in practice and are of prime importance and great interest in many application fields and in the design of networks. Finally, two illustrative examples are given to demonstrate the effectiveness of the obtained results. In a future study, we will attempt to extend the results of this paper to the space of pseudo almost automorphic functions. Recall that the concept of pseudo almost automorphy generalizes that of pseudo almost periodicity; in fact, a pseudo almost automorphic function is the sum of an almost automorphic function and an ergodic perturbation. Besides, another important task will be to consider the exponential synchronization of this class of RNNs with mixed delays.