1 Introduction

In 1996, Meyer-Baese studied and put forward the competitive neural network model in [1]. As one of the popular artificial neural networks, competitive neural networks have received significant attention. From the viewpoint of biology, there are two kinds of human memory: short-term memory (STM) and long-term memory (LTM); STM presents fast neural activity, while LTM presents unsupervised and slow synaptic modifications. Accordingly, competitive neural networks contain two timescales, one dealing with the fast change of the neural state and the other with the slow change of the synapses caused by external stimulation. They are a kind of unsupervised learning neural network, in which the input and output of the single-layer network are fully interconnected, and they are widely used in optimization design, pattern recognition, signal processing, control theory and so on [13]. Dynamics of competitive neural networks with different time scales can be found in [4-10]. Whether in biological or man-made neural networks, the synapses between neurons inevitably exhibit time delays, and the connection weights between neurons are time-varying, which may lead to oscillation, divergence and even instability. At present, a variety of dynamic behaviors of competitive neural networks with delays have been studied, such as singular perturbation [1], periodicity [11], stability [4, 12-17], synchronization [8, 10, 18-22] and so on. The studies of the dynamic behaviors of competitive neural networks with delays mainly focus on constant delays [4, 11, 12, 18], bounded time-varying delays [8, 13, 14, 16, 17, 19, 20], mixed delays (i.e., bounded time-varying delays and distributed delays) [15, 21, 22], etc.

It is well known that stability plays a very important role in the applications of competitive neural networks. Thus, various kinds of stability of competitive neural networks with delays have been widely studied and a great many results have been obtained (see [4, 12-17]). Global exponential stability of competitive neural networks with constant and time-varying delays was studied by constructing Lyapunov functionals in [4, 14], respectively. In [12], the existence and global exponential stability of the equilibrium of competitive neural networks with different time scales and multiple delays were discussed by the nonlinear Lipschitz measure method and by constructing a suitable Lyapunov functional. In [13], exponential stability of competitive neural networks with time-varying and distributed delays was studied by inequality techniques and properties of M-matrices. In [15, 16], multistability of competitive neural networks with time-varying and distributed delays was studied by using inequality techniques. Global stability and convergence of the equilibrium point for delayed competitive neural networks with different time scales and discontinuous activations were investigated by employing the Leray-Schauder alternative theorem in multi-valued analysis, the linear matrix inequality technique and a generalized Lyapunov-like method in [17].

Different from the above mentioned delays, the proportional delay is an unbounded time-varying delay. The proportional delay function \(\tau (t)=(1-q)t,\,0<q<1\), is a kind of unbounded delay which often arises in many fields such as physics, biological systems and control theory. At the same time, because of the differences between proportional delays and other delays, past results about the stability of neural networks with delays cannot be directly applied to neural networks with proportional delays. The class of proportional delay differential equations, to which neural networks with proportional delays belong, is an important kind of unbounded delay differential equation and is widely used in many fields, such as light absorption in stellar matter and nonlinear dynamical systems. Hence, research on the dynamic behaviors of neural networks with proportional delays has important theoretical and practical value. The dynamical behaviors of neural networks with proportional delays have been studied in [23-31]. In [23], dissipativity of a class of cellular neural networks (CNNs) with proportional delays was investigated by using inner product properties. In [24-26, 28], Zhou discussed the global exponential stability and asymptotic stability of CNNs with multi-proportional delays by employing matrix theory and constructing Lyapunov functionals, respectively. Delay-dependent exponential synchronization of recurrent neural networks (RNNs) with multiple proportional delays was studied in [28] by constructing an appropriate Lyapunov functional. The results on the dynamical behaviors of neural networks with proportional delays in [24-26, 28] mainly rely on constructing appropriate Lyapunov functionals. It is well known that constructing new Lyapunov functionals is very difficult, and no general method is available. At present, there are other research methods; for example, by constructing nonlinear delay differential inequalities, Zhou studied the global exponential stability of bidirectional associative memory neural networks with proportional delays in [27] and [29]. In [30], stability criteria for high-order networks with proportional delay were established based on the matrix measure and the Halanay inequality. In [31], new explicit conditions ensuring that the state trajectories of the system do not exceed a certain threshold over a pre-specified finite time interval were obtained by matrix inequalities.

The advantage of neural networks with proportional delays is that the network's running time can be controlled according to the delays allowed by the network. Thus, it is not only theoretically interesting but also practically valuable to establish sufficient conditions for the stability of neural networks with proportional delays. Until now, results on the exponential stability of competitive neural networks with proportional delays have not been obtained. Inspired by [27], the aim of this paper is to discuss the exponential stability of competitive neural networks with proportional delays. By using a fixed point theorem, the existence and uniqueness of the equilibrium point of the system is proved. Furthermore, by constructing an appropriate delay differential inequality, two sufficient conditions for the exponential stability of the equilibrium, one delay-independent and one delay-dependent, are obtained. Finally, several examples and their simulations are given to illustrate the effectiveness of the obtained results.

The rest of the paper is organized as follows. In Sect. 2, the model and preliminaries that will be used later are presented. Through suitable transformations, competitive neural networks with multi-proportional delays can be turned into competitive neural networks with multi-constant delays and variable coefficients. In Sect. 3, some novel sufficient conditions are derived for the existence, uniqueness and stability of the equilibrium point. In Sect. 4, several examples and their simulations are given to show the effectiveness of the obtained results. In Sect. 5, conclusions are provided.

2 Model and Preliminaries

Consider the following competitive neural networks with multi-proportional delays

$$\begin{aligned} \left\{ \begin{array}{l} { STM}:\varepsilon \frac{{\text {d}x}_{i}(t)}{\text {d}t}=-a_{i}x_{i}(t)+\sum \limits _{j=1}^{n}b_{ij}f_{j}(x_{j}(t))+\sum \limits _{j=1}^{n}c_{ij}f_{j}(x_{j}(q_{j}t))\\ \qquad \qquad \;\;\qquad + B_{i}\sum \limits _{j=1}^{n}d_{j}m_{ij}(t)+I_{i},\\ { LTM}:\frac{\text {d}m_{ij}(t)}{\text {d}t}=-m_{ij}(t)+d_{j}f_{i}(x_{i}(t)),\end{array}\right. \end{aligned}$$
(2.1)

for \(t\ge 1\), \(i,j=1,2,\ldots ,n\), where \(x_{i}(t)\) denotes the current activity level of neuron \(i\); \(m_{ij}(t)\) denotes the synaptic efficiency; \(a_{i}>0\) is the changing rate of neuron \(i\); \(b_{ij}\) and \(c_{ij}\) are constants which denote the strength of connectivity between cells \(j\) and \(i\) at time \(t\) and at time \(q_{j}t\), respectively; \(d_{j}\) is a given constant; \(q_{j}\) is the proportional delay factor and satisfies \(0<q_{j}\le 1\), \(q_{j}t=q_{j}\cdot t=t-(1-q_{j})t\), in which \((1-q_{j})t\) is the corresponding time delay function and \((1-q_{j})t\rightarrow +\infty \) as \(t\rightarrow +\infty \) for \(q_{j}\ne 1\); \(q=\min \limits _{1\le j\le n}\{q_{j}\}\); \(I_{i}\) denotes the external input; \(B_{i}>0\) is the external stimulus intensity; \(\varepsilon >0\) is the fast time scale of the STM, and in this paper we take \(\varepsilon =1\) for convenience; \(f_{i}(x_{i}(t))\) is the nonlinear activation function.

Let \(s_{i}(t)=\sum \limits _{j=1}^{n}d_{j}m_{ij}(t)\), then (2.1) can be written as

$$\begin{aligned} \left\{ \begin{array}{l} { STM}:\frac{{\text {d}x}_{i}(t)}{\text {d}t}=-a_{i}x_{i}(t)+\sum \limits _{j=1}^{n}b_{ij}f_{j}(x_{j}(t))+\sum \limits _{j=1}^{n}c_{ij}f_{j}(x_{j}(q_{j}t))\\ \qquad \qquad \qquad +B_{i}s_{i}(t)+I_{i},\\ { LTM}:\frac{\text {d}s_{i}(t)}{\text {d}t}=-s_{i}(t)+\alpha f_{i}(x_{i}(t)),\end{array}\right. \end{aligned}$$
(2.2)

for \(t\ge 1\), with the initial values

$$\begin{aligned} \left\{ \begin{array}{ll} x_{i}(t)=x_{i0},\\ s_{i}(t)=s_{i0},\end{array}\right. ~t\in [q,1], \end{aligned}$$
(2.3)

where \(\alpha =\sum \limits _{j=1}^{n}d_{j}^{2}>0\), and \(x_{i0}\), \(s_{i0}\), \(i=1,2,\ldots ,n\), are constants. We write \(x(0)=(x_{10},x_{20},\ldots ,x_{n0})^{T}\) and \(s(0)=(s_{10},s_{20},\ldots ,s_{n0})^{T}\) for \(t\in [q,1]\).
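For clarity, the LTM equation in (2.2) follows from the definition of \(s_{i}(t)\) by a direct computation:

$$\begin{aligned} \frac{\text {d}s_{i}(t)}{\text {d}t}=\sum \limits _{j=1}^{n}d_{j}\frac{\text {d}m_{ij}(t)}{\text {d}t}=\sum \limits _{j=1}^{n}d_{j}\big [-m_{ij}(t)+d_{j}f_{i}(x_{i}(t))\big ]=-s_{i}(t)+\Big (\sum \limits _{j=1}^{n}d_{j}^{2}\Big )f_{i}(x_{i}(t))=-s_{i}(t)+\alpha f_{i}(x_{i}(t)). \end{aligned}$$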

Assumption 1

\(f_{j}(\cdot )\) is bounded and satisfies a Lipschitz condition, that is, there exist \(A_{j}>0\) and \(L_{j}>0\) such that, for all \(\varsigma ,\zeta \in \mathbb {R}\),

$$\begin{aligned} \left\{ \begin{array}{ll} |f_{j}(\cdot )|<\mathrm{A}_{j},\quad f_{j}(0)=0,\\ |f_{j}(\varsigma )-f_{j}(\zeta )|\le L_{j}|\varsigma -\zeta |,\quad j=1,2,\ldots ,n. \end{array}\right. \end{aligned}$$
(2.4)

Assumption 2

\(s_{i}(\cdot )\) is bounded, that is, there exists \(D_{i}>0\) such that

$$\begin{aligned} |s_{i}(\cdot )|\le D_{i}<+\infty ,\quad i=1,2,\ldots ,n. \end{aligned}$$

Remark 2.1

In (2.1), \((1-q_{j})t\rightarrow +\infty \) as \(q_{j}\ne 1,~t\rightarrow +\infty \), so the stability results in [4, 12-17] cannot be directly applied to (2.1).

Consider the following transformations defined by

$$\begin{aligned} y_{i}(t)=x_{i}(\text {e}^{t}),\quad u_{i}(t)=s_{i}(\text {e}^{t}),\quad i=1,2,\ldots ,n, \end{aligned}$$
(2.5)

then (2.2) can be equivalently turned into a competitive neural network with multi-constant delays and variable coefficients (see [25])

$$\begin{aligned} \left\{ \begin{array}{l} { STM}:\frac{\text {d}y_{i}(t)}{{\text {d}}t}=\text {e}^{t}\{-a_{i}y_{i}(t)+\sum \limits _{j=1}^{n}b_{ij}f_{j}(y_{j}(t))+\sum \limits _{j=1}^{n}c_{ij}f_{j}(y_{j}(t-\tau _{j}))\\ \qquad \qquad \qquad +B_{i}u_{i}(t)+I_{i}\},\\ { LTM}:\frac{\text {d}u_{i}(t)}{\text {d}t}=\text {e}^{t}\{-u_{i}(t)+\alpha f_{i}(y_{i}(t))\},\end{array}\right. \end{aligned}$$
(2.6)

for \(t\ge 0\), with the initial values

$$\begin{aligned} \left\{ \!\begin{array}{ll} y_{i}(t)=\varphi _{i}(t),\\ u_{i}(t)=\psi _{i}(t), \end{array}\right. ~t\in [-\tau ,0], \end{aligned}$$
(2.7)

where \(\tau =\max \limits _{1\le j\le n}\{\tau _{j}\}\), in which \(\tau _{j}=-\log q_{j}\ge 0\), \(\varphi _{i}(t)=x_{i0}\), \(\psi _{i}(t)=s_{i0}\), \(t\in [-\tau ,0]\), \(y(0)=(y_{10},y_{20},\ldots ,y_{n0})^{T}\), \(u(0)=(u_{10},u_{20},\ldots ,u_{n0})^{T}\).
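The equivalence of (2.2) and (2.6) can be checked directly: with \(y_{i}(t)=x_{i}(\text {e}^{t})\) and \(u_{i}(t)=s_{i}(\text {e}^{t})\),

$$\begin{aligned} \frac{\text {d}y_{i}(t)}{\text {d}t}=\text {e}^{t}\,\dot{x}_{i}(\text {e}^{t}),\qquad x_{j}(q_{j}\text {e}^{t})=x_{j}(\text {e}^{t-\tau _{j}})=y_{j}(t-\tau _{j}),\qquad \tau _{j}=-\log q_{j}, \end{aligned}$$

so evaluating (2.2) at time \(\text {e}^{t}\) yields (2.6); moreover, \(t\ge 1\) in (2.2) corresponds to \(t\ge 0\) in (2.6), and the initial interval \([q,1]\) is mapped onto \([-\tau ,0]\).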

It follows from Assumption 2 that

$$\begin{aligned} |u_{i}(t)|=|s_{i}(\text {e}^{t})|\le D_{i}<+\infty , \end{aligned}$$
(2.8)

namely, \(u_{i}(t)\) is bounded.

Remark 2.2

It is very easy to verify that (2.2) and (2.6) have the same equilibria. Hence, to study the stability of the equilibrium of (2.2), we only need to consider the stability of the equilibrium of (2.6).

3 Main Results

In this section, we shall establish some sufficient conditions to ensure the global exponential stability of system (2.6).

Theorem 3.1

Under Assumptions 1 and 2, if the following conditions

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{i}-\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|)L_{j}>B_{i},\\ 1>\alpha L_{i},~~~i=1,2,\ldots ,n\end{array}\right. \end{aligned}$$
(3.1)

hold, then system (2.6) has a unique equilibrium point.

Proof

A point \({(y^*,u^*)^{T}}\) is said to be an equilibrium of system (2.6) if it satisfies the following equations

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{i}y_{i}^{*}=\sum \limits _{j=1}^{n}(b_{ij}+c_{ij})f_{j}(y_{j}^{*})+B_{i}u_{i}^{*}+I_{i},\\ u_{i}^{*}=\alpha f_{i}(y_{i}^{*}),\end{array}\right. \end{aligned}$$
(3.2)

in which \({y^{*}}=(y_{1}^{*},y_{2}^{*},\ldots ,y_{n}^{*})^{T}\), \({u^{*}}=(u_{1}^{*},u_{2}^{*},\ldots ,u_{n}^{*})^{T}.\)

Define the mapping \(Q(\theta )=(F(\theta ),G(\theta ))^{T}\), where \(\theta ={(y,u)^{T}}\),

\({F(\theta )=(F_{1}(\theta ), F_{2}(\theta ), \ldots , F_{n}(\theta ))^{T}}\), \({G(\theta )=(G_{1}(\theta ), G_{2}(\theta ), \ldots , G_{n}(\theta ))^{T}}\),

in which

$$\begin{aligned} \left\{ \! \begin{array}{ll} F_{i}(\theta )=a_{i}^{-1}[\sum \limits _{j=1}^{n}(b_{ij}+c_{ij})f_{j}(y_{j})+B_{i}u_{i}+I_{i}],\\ G_{i}(\theta )=\alpha f_{i}(y_{i}).\end{array}\right. \end{aligned}$$
(3.3)

Then it follows from (3.3), (2.8) and Assumption 1 that

$$\begin{aligned} \left\{ \! \begin{array}{ll} |F_{i}(\theta )|\le a_{i}^{-1}\left[ \sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|)A_{j}+B_{i}D_{i}+|I_{i}|\right] \le r,\\ |G_{i}(\theta )|\le \alpha A_{i}\le r,\end{array}\right. \end{aligned}$$
(3.4)

where \(r=\max \{ r_{1},r_{2}\} \), in which \(r_{1}\) and \(r_{2}\) are respectively:

$$\begin{aligned} \left\{ \! \begin{array}{ll} r_{1}=\max \limits _{1\le i\le n}\Bigg \{a_{i}^{-1}\left[ \sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|)A_{j}+B_{i}D_{i}+|I_{i}|\right] \Bigg \},\\ r_{2}=\max \limits _{1\le i\le n}\big \{\alpha A_{i}\big \}.\end{array}\right. \end{aligned}$$
(3.5)

Then \(\theta ={{(y,u)}}^{T}\in [-r,r]^{2n}\) implies \(Q(\theta )=(F(\theta ),G(\theta ))^{T}\in [-r,r]^{2n}\). Moreover, by the continuity of \(f_{j}(\cdot )\), the mapping \(Q:[-r,r]^{2n}\rightarrow [-r,r]^{2n}\) is continuous. By Brouwer's fixed point theorem, \(Q\) has at least one fixed point \({(y^{*},u^{*})^{T}}\), i.e., an equilibrium point of system (2.6).
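The construction of the map Q in (3.3) can also be illustrated numerically. The following sketch iterates Q for a hypothetical two-neuron parameter set satisfying (3.1); note that Brouwer's theorem only guarantees existence of a fixed point, and the plain iteration below is merely a heuristic that happens to converge for such contractive parameter choices.

```python
import numpy as np

# hypothetical two-neuron parameters satisfying (3.1); purely illustrative
a = np.array([2.0, 2.0])
b = np.array([[0.3, 0.1], [0.1, 0.3]])
c = np.array([[0.2, 0.0], [0.0, 0.2]])
B = np.array([0.5, 0.5])
I = np.array([0.1, -0.2])
alpha = 0.5
f = lambda x: 0.4 * np.tanh(x)        # Lipschitz constant L_j = 0.4

def Q(theta):
    """One application of the map Q(theta) = (F(theta), G(theta)) from (3.3)."""
    y, u = theta[:2], theta[2:]
    F = (b @ f(y) + c @ f(y) + B * u + I) / a
    G = alpha * f(y)
    return np.concatenate([F, G])

theta = np.zeros(4)
for _ in range(200):                  # naive fixed-point iteration theta <- Q(theta)
    theta = Q(theta)
print("approximate fixed point (y*, u*):", theta)
```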

Next we prove the uniqueness of the equilibrium \({(y^{*},u^{*})^{T}}\). Suppose that system (2.6) has another equilibrium \({(y^{**},u^{**})^{T}}\); we shall show that \(y_{i}^{*}=y_{i}^{**}\) and \(u_{i}^{*}=u_{i}^{**}\), \(i=1,2,\ldots ,n.\)

Case 1. If \(y^{*}\ne y^{**}\) and \(u^{*}= u^{**}\), then there is an index \(d\) such that \(y_{d}^{*}\ne y_{d}^{**}\) and \(y_{j}^{*}= y_{j}^{**}\) for \(j\ne d\), while \(u_{i}^{*}=u_{i}^{**}\), \(i=1,2,\ldots ,n.\) From (3.1) and (3.2), we can get

$$\begin{aligned} \left\{ \! \begin{array}{ll} [a_{d}-(|b_{dd}|+|c_{dd}|)L_{d}]|y_{d}^{*}-y_{d}^{**}|\le 0,\\ 0=|u_{i}^{*}-u_{i}^{**}|\le \alpha L_{i}|y_{i}^{*}-y_{i}^{**}|,\end{array}\right. \end{aligned}$$
(3.6)

Since \(a_{d}-(|b_{dd}|+|c_{dd}|)L_{d}>B_{d}>0\) by (3.1), the first inequality in (3.6) forces \(y_{d}^{*}=y_{d}^{**}\), which is a contradiction; so this case cannot occur.

Case 2. If \(y^{*}\ne y^{**}\) and \(u^{*}\ne u^{**}\), then there are indices \(d\) and \(k\) such that \(y_{d}^{*}\ne y_{d}^{**}\) and \(u_{k}^{*}\ne u_{k}^{**}\).

When \(d=k\), it follows from (3.2) that

$$\begin{aligned} \left\{ \! \begin{array}{ll} [a_{d}-(|b_{dd}|+|c_{dd}|)L_{d}]|y_{d}^{*}-y_{d}^{**}|\le B_{d} |u_{d}^{*}- u_{d}^{**}| ,\\ |u_{d}^{*}- u_{d}^{**}|\le \alpha L_{d}|y_{d}^{*}-y_{d}^{**}|,\end{array}\right. \end{aligned}$$
(3.7)

Since \(y_{d}^{*}\ne y_{d}^{**}\) and, by (3.1), \(a_{d}-(|b_{dd}|+|c_{dd}|)L_{d}\ge a_{d}-\sum \limits _{j=1}^{n}(|b_{dj}|+|c_{dj}|)L_{j}>B_{d}\) and \(\alpha L_{d}<1\), it follows from (3.7) that

$$\begin{aligned} \left\{ \! \begin{array}{ll} |y_{d}^{*}-y_{d}^{**}|< |u_{d}^{*}- u_{d}^{**}| ,\\ |u_{d}^{*}- u_{d}^{**}|< |y_{d}^{*}-y_{d}^{**}|,\end{array}\right. \end{aligned}$$
(3.8)

which is a contradiction.

When \(d\ne k\), it follows from (3.2) that

$$\begin{aligned} |u_{k}^{*}- u_{k}^{**}|\le \alpha L_{k}|y_{k}^{*}-y_{k}^{**}|. \end{aligned}$$
(3.9)

If \(y_{k}^{*}=y_{k}^{**}\), then (3.9) gives \(u_{k}^{*}=u_{k}^{**}\), which contradicts the choice of \(k\). If \(y_{k}^{*}\ne y_{k}^{**}\), then repeating the argument of the case \(d=k\) with \(d\) replaced by \(k\) again leads to a contradiction. All in all, the equilibrium of system (2.6) is unique. \(\square \)

Suppose

$$\begin{aligned} \left\{ \! \begin{array}{ll} K_{1}=\max \limits _{1\le i\le n}\Big \{\sup \limits _{-\tau \le s\le 0}|y_{i}(s)-y_{i}^{*}|\Big \},\\ K_{2}=\max \limits _{1\le i\le n}\Big \{\sup \limits _{-\tau \le s\le 0}|u_{i}(s)-u_{i}^{*}|\Big \},\end{array}\right. \end{aligned}$$
(3.10)

where at least one of \(K_{1}\) and \(K_{2}\) is positive (if both were zero, the initial value would coincide with the equilibrium and there would be nothing to prove); for instance, if \(K_{2}=0\), then \(u_{i}(t)=u_{i}^{*}\) for \(t\in [-\tau ,0].\)

Theorem 3.2

Under Assumptions 1 and 2, if (3.1) holds, then there exists a positive constant \(\eta \) such that

$$\begin{aligned} \left\{ \! \begin{array}{ll} |y_{i}(t)-y_{i}^{*}|\le K\text {e}^{-\eta t},\\ |u_{i}(t)-u_{i}^{*}|\le K\text {e}^{-\eta t},\end{array}\right. \end{aligned}$$
(3.11)

for \(t\ge 0\), \(i=1,2,\ldots ,n\), where \(K=\max \big \{K_{1},K_{2}\big \}>0\) and \(K_{1},K_{2}\) are given by (3.10); that is, the equilibrium point \({(y^*,u^*)^{T}}\) of system (2.6) is globally exponentially stable.

Proof

From Theorem 3.1, system (2.6) has a unique equilibrium point \({(y^*,u^*)^{T}}\); next we prove that it is globally exponentially stable. By (2.6) and Assumption 1, for \(t>0\) we have

$$\begin{aligned} \left\{ \! \begin{array}{ll} D^{+}|y_{i}(t)-y_{i}^{*}|\le \text {e}^{t}\Bigg \{-a_{i}|y_{i}(t)-y_{i}^{*}|+\sum \limits _{j=1}^{n}|b_{ij}|L_{j}|y_{j}(t)-y_{j}^{*}|\\ \qquad +\sum \limits _{j=1}^{n}|c_{ij}|L_{j}|y_{j}(t-\tau _{j})-y_{j}^{*}|+B_{i}|u_{i}(t)-u_{i}^{*}|\Bigg \},\\ D^{+}|u_{i}(t)-u_{i}^{*}|\le \text {e}^{t}\big \{-|u_{i}(t)-u_{i}^{*}|+\alpha L_{i}|y_{i}(t)-y_{i}^{*}|\big \}.\end{array}\right. \end{aligned}$$
(3.12)

Defining functions as follows,

$$\begin{aligned} \left\{ \! \begin{array}{ll} \Phi _{i}(\mu _{i})=a_{i}-{\mu _{i}}-\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|\text {e}^{\mu _{i}{\tau }})L_{j}-B_{i},\\ \Psi _{i}(\upsilon _{i})=1-{\upsilon _{i}}-\alpha L_{i},\end{array}\right. \end{aligned}$$
(3.13)

where \(\mu _{i},~\upsilon _{i}\in [0,+\infty )\). Noting (3.1), we have

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{i}-\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|)L_{j}-B_{i}\ge \varsigma ,\\ 1-\alpha L_{i}\ge \varsigma ,~~~ i=1,2,\ldots ,n,\end{array}\right. \end{aligned}$$
(3.14)

where \(\varsigma =\min \{\varsigma _{1},\varsigma _{2}\}\), in which

$$\begin{aligned} \left\{ \! \begin{array}{ll} \varsigma _{1}=\min \limits _{1\le i\le n}\Big \{a_{i}-\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|)L_{j}-B_{i}\Big \}>0,\\ \varsigma _{2}=\min \limits _{1\le i\le n}\big \{1-\alpha L_{i}\big \}>0.\end{array}\right. \end{aligned}$$
(3.15)

It follows from (3.13) and (3.14) that \(\Phi _{i}(0)\ge \varsigma \) and \(\Psi _{i}(0)\ge \varsigma \). Obviously, \(\Phi _{i}(\mu _{i})\) and \(\Psi _{i}(\upsilon _{i})\) are continuous, and \(\Phi _{i}(\mu _{i})\rightarrow -\infty \), \(\Psi _{i}(\upsilon _{i})\rightarrow -\infty \) as \(\mu _{i}\rightarrow +\infty \), \(\upsilon _{i}\rightarrow +\infty .\) So there are \(\tilde{\mu }_{i}, \tilde{\upsilon }_{i}\in (0,+\infty )\) such that

$$\begin{aligned} \left\{ \! \begin{array}{ll} \Phi _{i}(\tilde{\mu }_{i})=a_{i}-\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|\text {e}^{\tilde{\mu }_{i}\tau })L_{j}-\tilde{\mu }_{i}-B_{i}=0,\\ \Psi _{i}(\tilde{\upsilon }_{i})=1-\tilde{\upsilon }_{i}-\alpha L_{i}=0.\end{array}\right. \end{aligned}$$
(3.16)

Thus, there exists a constant \(\eta \), which satisfies \(0<\eta <\min \limits _{1\le i\le n}\big \{\tilde{\mu }_{i},\tilde{\upsilon }_{i}\big \}\), such that

$$\begin{aligned} \left\{ \! \begin{array}{ll} \Phi _{i}(\eta )=a_{i}-\eta -\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|\text {e}^{\eta \tau })L_{j}-B_{i}>0,\\ \Psi _{i}(\eta )=1-\eta -\alpha L_{i}>0.\end{array}\right. \end{aligned}$$
(3.17)
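In practice, an admissible decay rate \(\eta \) can be computed from (3.16)-(3.17) by one-dimensional root finding. A minimal sketch (our own helper, assuming the parameters are supplied as NumPy-compatible arrays and scalars):

```python
import numpy as np
from scipy.optimize import brentq

def admissible_eta(a, b, c, B, L, alpha, tau, margin=0.99):
    """Return a decay rate eta satisfying (3.17), obtained by solving
    Phi_i(mu) = 0 and Psi_i(nu) = 0 from (3.13) for every i."""
    mu_roots, nu_roots = [], []
    for i in range(len(a)):
        Phi = lambda mu: a[i] - mu - np.sum((np.abs(b[i]) + np.abs(c[i]) * np.exp(mu * tau)) * L) - B[i]
        # Phi_i(0) > 0 by (3.1) and Phi_i decreases to -infinity, so a root exists; bracket it.
        hi = 1.0
        while Phi(hi) > 0:
            hi *= 2.0
        mu_roots.append(brentq(Phi, 0.0, hi))
        nu_roots.append(1.0 - alpha * L[i])   # explicit root of Psi_i
    return margin * min(min(mu_roots), min(nu_roots))

# e.g., with the data of Example 4.2 below:
# admissible_eta([4, 5], [[2, 0], [0, 1]], [[2, 0], [0, 5]], [1, 1], [0.4, 0.4], 0.5, 0.9163)
```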

Accordingly, define functions \(Y_{i}(t)\) and \(U_{i}(t)\) as follows

$$\begin{aligned} \left\{ \! \begin{array}{ll} Y_{i}(t)=\text {e}^{\eta t}|y_{i}(t)-y_{i}^{*}|,\\ U_{i}(t)=\text {e}^{\eta t}|u_{i}(t)-u_{i}^{*}|,\end{array}\right. ~~~~t\in [-\tau ,+\infty ). \end{aligned}$$
(3.18)

We obtain the following inequalities

$$\begin{aligned}&D^{+}Y_{i}(t)=\eta \text {e}^{\eta t}|y_{i}(t)-y_{i}^{*}|+\text {e}^{\eta t}D^{+}|y_{i}(t)-y_{i}^{*}| \nonumber \\&\quad \le \eta Y_{i}(t)+\text {e}^{\eta t}\text {e}^{t}\Bigg \{-a_{i}|y_{i}(t)-y_{i}^{*}|+\sum \limits _{j=1}^{n}|b_{ij}|L_{j}|y_{j}(t)-y_{j}^{*}|\nonumber \\&\qquad +\sum \limits _{j=1}^{n}|c_{ij}|L_{j}|y_{j}(t-\tau _{j})-y_{j}^{*}|+B_{i}|u_{i}(t)-u_{i}^{*}|\Bigg \}\nonumber \\&\quad =\eta Y_{i}(t)+\text {e}^{t}\Bigg \{-a_{i}Y_{i}(t)+\sum \limits _{j=1}^{n}|b_{ij}|L_{j}Y_{j}(t)\nonumber \\&\qquad +\sum \limits _{j=1}^{n}|c_{ij}|L_{j}\text {e}^{\eta \tau _{j}}Y_{j}(t-\tau _{j})+B_{i}U_{i}(t)\Bigg \}\nonumber \\&\quad \le -(a_{i}-\eta )\text {e}^{t}Y_{i}(t)+\text {e}^{t}\sum \limits _{j=1}^{n}\big (|b_{ij}|+|c_{ij}|\text {e}^{\eta \tau }\big )L_{j}\sup _{s\in [t-\tau ,t]}Y_{j}(s)\nonumber \\&\qquad +\text {e}^{t}B_{i}U_{i}(t), \end{aligned}$$
(3.19)

and

$$\begin{aligned}&D^{+}U_{i}(t)=\eta \text {e}^{\eta t}|u_{i}(t)-u_{i}^{*}|+\text {e}^{\eta t}D^{+}|u_{i}(t)-u_{i}^{*}|\nonumber \\&\quad \le \eta U_{i}(t)+\text {e}^{\eta t}\text {e}^{t}\{-|u_{i}(t)-u_{i}^{*}|+\alpha L_{i}|y_{i}(t)-y_{i}^{*}|\}\nonumber \\&\quad \le -(1-\eta )\text {e}^{t}U_{i}(t)+\alpha L_{i}\text {e}^{t}Y_{i}(t). \end{aligned}$$
(3.20)

By (3.10) and (3.18), we know that

$$\begin{aligned} \left\{ \! \begin{array}{ll} Y_{i}(t)\le K,\\ U_{i}(t)\le K,\end{array}\right. ~~~t\in [-\tau ,0]. \end{aligned}$$
(3.21)

We claim that

$$\begin{aligned} \left\{ \! \begin{array}{ll} Y_{i}(t)\le K,\\ U_{i}(t)\le K,\end{array}\right. ~~~~t\in [0,+\infty ). \end{aligned}$$
(3.22)

First, for any \(d>1\), we prove that

$$\begin{aligned} \left\{ \! \begin{array}{ll} Y_{i}(t)<dK,\\ U_{i}(t)<dK.\end{array}\right. ~~~~~t\in [-\tau ,+\infty ). \end{aligned}$$
(3.23)

Suppose that (3.23) does not hold. Then there is a first time \(t_{1}>0\) at which one of the functions in (3.23) reaches \(dK\); say it is a component \(Y_{k}\) (the case in which some \(U_{k}\) reaches \(dK\) first is treated analogously by means of (3.17) and (3.20)), so that

$$\begin{aligned} Y_{k}(t)<dK,~Y_{k}(t_{1})=dK, ~~t\in [-\tau ,t_{1}), \end{aligned}$$

while \(Y_{i}(t)\le dK\) for \(i\ne k\) and \(U_{i}(t)\le dK\) for \(t\in [-\tau ,t_{1}]\), \(i=1,2,\ldots ,n\). Then we have \(D^{+}Y_{k}(t_{1})\ge 0\). On the other hand, it follows from (3.17) and (3.19) that

$$\begin{aligned} 0\le D^{+}Y_{k}(t_{1})\le -\Bigg (a_{k}-\eta -\sum \limits _{j=1}^{n}(|b_{kj}|+|c_{kj}|\text {e}^{\eta \tau })L_{j}-B_{k}\Bigg )\text {e}^{t_{1}}dK<0, \end{aligned}$$
(3.24)

which is a contradiction. Thus \(Y_{i}(t)<dK\) and \(U_{i}(t)<dK\) for all \(t\in [-\tau ,+\infty )\). Letting \(d\rightarrow 1\), we obtain \(Y_{i}(t)\le K\) and \(U_{i}(t)\le K\); that is, the claim (3.22) holds. By (3.18), this is exactly (3.11); namely, the unique equilibrium \((y^{*},u^{*})^{T}\) of system (2.6) is globally exponentially stable. \(\square \)

In view of the proof of Theorem 3.2, we obtain the following delay-dependent sufficient condition.

Theorem 3.3

Under Assumptions 1 and 2, if there exists a constant \(\eta >0\) such that the following conditions

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{i}-\eta -\sum \limits _{j=1}^{n}(|b_{ij}|+|c_{ij}|\text {e}^{\eta \tau })L_{j}-B_{i}>0,\\ 1-\eta -\alpha L_{i}>0,~~~i=1,2,\ldots ,n,\end{array}\right. \end{aligned}$$
(3.25)

hold, then system (2.6) has a unique equilibrium \((y^{*},u^{*})^{T}\), and there exists a positive constant \(K\) such that

$$\begin{aligned} \left\{ \! \begin{array}{ll} |y_{i}(t)-y_{i}^{*}|\le K\text {e}^{-\eta t},\\ |u_{i}(t)-u_{i}^{*}|\le K\text {e}^{-\eta t},\end{array}\right. \end{aligned}$$
(3.26)

hold for \(t\ge 0\), where \(K=\max \big \{K_{1},K_{2}\big \}>0\), \(K_{1}\) and \(K_{2}\) are given by (3.10), \(\tau =\max \limits _{1\le j\le n}\{\tau _{j}\}\) and \(\tau _{j}=-\log q_{j}\ge 0\); that is, the equilibrium point of system (2.6) is globally exponentially stable.

4 Illustrative Examples

In this section, several examples are given to show the effectiveness of the conditions given in this paper.

Example 4.1

Consider the following competitive neural networks

$$\begin{aligned} \left\{ \! \begin{array}{llllll} \dot{x}_{1}(t)=-3x_{1}+f_{1}(x_{1}(t))+f_{1}(x_{1}(qt))+m_{11}(t)+m_{12}(t)-1,\\ \dot{x}_{2}(t)=-4x_{2}+f_{2}(x_{2}(t))+2f_{2}(x_{2}(qt))+m_{21}(t)+m_{22}(t)+2,\\ \dot{m}_{11}(t)=-m_{11}(t)+f_{1}(x_{1}(t)),\\ \dot{m}_{12}(t)=-m_{12}(t),\\ \dot{m}_{21}(t)=-m_{21}(t)+f_{2}(x_{2}(t)),\\ \dot{m}_{22}(t)=-m_{22}(t),\end{array}\right. \end{aligned}$$
(4.1)

where \(A=\left( \begin{array}{cc} 3 &{} 0 \\ 0 &{} 4 \\ \end{array} \right) \), \(B=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(C=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 2 \\ \end{array} \right) , \) \(B^{\tau }=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(q=0.5\), \(d_{1}=d_{2}=1\), \(f_{i}(y_{i})=0.4\tanh (y_{i})\), \(i=1,2\), then \(L_{1}=L_{2}=0.4,~\alpha =d_{1}^{2}+d_{2}^{2}=2\).

Let \(s_{i}(t)=\sum \limits _{j=1}^{2}d_{j}m_{ij}(t)\); then system (4.1) can be turned into

$$\begin{aligned} \left\{ \! \begin{array}{llll} \dot{x}_{1}(t)=-3x_{1}+f_{1}(x_{1}(t))+f_{1}(x_{1}(qt))+s_{1}(t)-1,\\ \dot{x}_{2}(t)=-4x_{2}+f_{2}(x_{2}(t))+2f_{2}(x_{2}(qt))+s_{2}(t)+2,\\ \dot{s}_{1}(t)=-s_{1}(t)+f_{1}(x_{1}(t)),\\ \dot{s}_{2}(t)=-s_{2}(t)+f_{2}(x_{2}(t)).\end{array}\right. \end{aligned}$$
(4.2)

Let \(y_{i}(t)=x_{i}(\text {e}^{t}),u_{i}(t)=s_{i}(\text {e}^{t}) \); then (4.2) is equivalent to the following system

$$\begin{aligned} \left\{ \! \begin{array}{llll} \dot{y}_{1}(t)=\text {e}^{t}\{-3y_{1}+f_{1}(y_{1}(t))+f_{1}(y_{1}(t-\tau ))+u_{1}(t)-1\},\\ \dot{y}_{2}(t)=\text {e}^{t}\{-4y_{2}+f_{2}(y_{2}(t))+2f_{2}(y_{2}(t-\tau ))+u_{2}(t)+2\},\\ \dot{u}_{1}(t)=\text {e}^{t}\{-u_{1}(t)+f_{1}(y_{1}(t))\},\\ \dot{u}_{2}(t)=\text {e}^{t}\{-u_{2}(t)+f_{2}(y_{2}(t))\}.\end{array}\right. \end{aligned}$$
(4.3)

We compute and obtain that

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{1}-\sum \limits _{j=1}^{2}(|b_{1j}|+|c_{1j}|)L_{j}-B_{1}=1.2>0,\\ \alpha L_{1}=0.8<1,\end{array}\right. \end{aligned}$$

and

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{2}-\sum \limits _{j=1}^{2}(|b_{2j}|+|c_{2j}|)L_{j}-B_{2}=1.8>0,\\ \alpha L_{2}=0.8<1.\end{array}\right. \end{aligned}$$

Therefore, by Theorems 3.1 and 3.2, system (4.2) has a unique equilibrium and it is globally exponentially stable. By Matlab, we obtain that the equilibrium of (4.2) is \((-0.5257,0.7516,-0.1884,0.2485)^{T}\); the Matlab simulation result is presented in Fig. 1.

Fig. 1 The trajectories of system (4.2) for \((x(0),s(0))^{T}=(-0.2,-0.5,0.2,-0.8)^{T}\)
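As a numerical cross-check of Example 4.1, the following sketch (our own script, not the one used to produce Fig. 1) verifies condition (3.1) and solves the equilibrium equations of system (4.2) with SciPy; the result should be close to the equilibrium reported above.

```python
import numpy as np
from scipy.optimize import fsolve

# data of Example 4.1
a = np.array([3.0, 4.0]); B = np.array([1.0, 1.0]); I = np.array([-1.0, 2.0])
b = np.array([[1.0, 0.0], [0.0, 1.0]]); c = np.array([[1.0, 0.0], [0.0, 2.0]])
L = np.array([0.4, 0.4]); alpha = 2.0
f = lambda x: 0.4 * np.tanh(x)

# delay-independent condition (3.1): both printed vectors should be positive
print(a - (np.abs(b) + np.abs(c)) @ L - B, 1.0 - alpha * L)

# equilibrium of (4.2) as written: a_i x_i = sum_j (b_ij + c_ij) f_j(x_j) + s_i + I_i,  s_i = f_i(x_i)
def eqs(z):
    x, s = z[:2], z[2:]
    return np.concatenate([a * x - (b + c) @ f(x) - B * s - I, s - f(x)])

print(fsolve(eqs, np.zeros(4)))   # approximately (x_1*, x_2*, s_1*, s_2*)
```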

Example 4.2

Consider the following competitive neural networks

$$\begin{aligned} \left\{ \! \begin{array}{llllll} \dot{x}_{1}(t)=-4x_{1}+2f_{1}(x_{1}(t))+2f_{1}(x_{1}(q_{1}t))+0.5m_{11}(t)+0.5m_{12}(t)+1,\\ \dot{x}_{2}(t)=-5x_{2}+f_{2}(x_{2}(t))+5f_{2}(x_{2}(q_{2}t))+0.5m_{21}(t)+0.5m_{22}(t)+2,\\ \dot{m}_{11}(t)=-m_{11}(t)+0.5f_{1}(x_{1}(t)),\\ \dot{m}_{12}(t)=-m_{12}(t),\\ \dot{m}_{21}(t)=-m_{21}(t)+0.5f_{2}(x_{2}(t)),\\ \dot{m}_{22}(t)=-m_{22}(t),\end{array}\right. \end{aligned}$$
(4.4)

where \(A=\left( \begin{array}{cc} 4 &{} 0 \\ 0 &{} 5 \\ \end{array} \right) \), \(B=\left( \begin{array}{cc} 2 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(C=\left( \begin{array}{cc} 2 &{} 0 \\ 0 &{} 5 \\ \end{array} \right) , \) \(B^{\tau }=\left( \begin{array}{cc} 1 &{} 0 \\ 0 &{} 1 \\ \end{array} \right) , \) \(q_{1}=0.4,\) \(q_{2}=0.6\), \(d_{1}=d_{2}=0.5\), \(f_{i}(x_{i})=0.5(\tanh (0.4x_{i})+\cos (0.4x_{i})),~i=1,2.\)

Let \(s_{i}(t)=\sum \limits _{j=1}^{2}d_{j}m_{ij}(t)\); then system (4.4) can be turned into

$$\begin{aligned} \left\{ \! \begin{array}{llll} \dot{x}_{1}(t)=-4x_{1}+2f_{1}(x_{1}(t))+2f_{1}(x_{1}(q_{1}t))+s_{1}(t)+1,\\ \dot{x}_{2}(t)=-5x_{2}+f_{2}(x_{2}(t))+5f_{2}(x_{2}(q_{2}t))+s_{2}(t)+2,\\ \dot{s}_{1}(t)=-s_{1}(t)+0.5f_{1}(x_{1}(t)),\\ \dot{s}_{2}(t)=-s_{2}(t)+0.5f_{2}(x_{2}(t)).\end{array}\right. \end{aligned}$$
(4.5)

Let \(y_{i}(t)=x_{i}(\text {e}^{t}),\) \(u_{i}(t)=s_{i}(\text {e}^{t})\); then system (4.5) becomes the following system

$$\begin{aligned} \left\{ \! \begin{array}{llll} \dot{y}_{1}(t)=\text {e}^{t}\{-4y_{1}+2f_{1}(y_{1}(t))+2f_{1}(y_{1}(t-\tau _{1}))+u_{1}(t)+1\},\\ \dot{y}_{2}(t)=\text {e}^{t}\{-5y_{2}+f_{2}(y_{2}(t))+5f_{2}(y_{2}(t-\tau _{2}))+u_{2}(t)+2\},\\ \dot{u}_{1}(t)=\text {e}^{t}\{-u_{1}(t)+0.5f_{1}(y_{1}(t))\},\\ \dot{u}_{2}(t)=\text {e}^{t}\{-u_{2}(t)+0.5f_{2}(y_{2}(t))\}.\end{array}\right. \end{aligned}$$
(4.6)

By computing, \(\tau _{1}=-\log q_{1}=0.9163\), \(\tau _{2}=-\log q_{2}=0.5108\), \(\tau =\max \{\tau _{1},\tau _{2}\}=0.9163\), \(L_{1}=L_{2}=0.4\) and \(\alpha =d_{1}^{2}+d_{2}^{2}=0.5\). Taking \(\eta =0.2\), we compute and obtain that

$$\begin{aligned} \left\{ \! \begin{array}{lll} a_{1}-\sum \limits _{j=1}^{2}(|b_{1j}|+|c_{1j}|\text {e}^{\eta \tau })L_{j}-\eta -B_{1}=1.0391>0,\\ 1-\eta -\alpha L_{1}=0.6>0,\end{array}\right. \end{aligned}$$

and

$$\begin{aligned} \left\{ \! \begin{array}{ll} a_{2}-\sum \limits _{j=1}^{2}(|b_{2j}|+|c_{2j}|\text {e}^{\eta \tau })L_{j}-\eta -B_{2}=0.9978>0,\\ 1-\eta -\alpha L_{2}=0.6>0.\end{array}\right. \end{aligned}$$
Fig. 2 The trajectories of system (4.5) for \((x(0),s(0))^{T}=(2.0,1.0,-2.0,-1.0)^{T}\)

Then it follows from Theorems 3.1 and 3.3 that system (4.5) has a unique equilibrium and it is globally exponentially stable. Using Matlab, the equilibrium point of system (4.5) is found to be \((1.2473,1.6426,0.3248,0.3422)^{T}\). The Matlab simulation result is presented in Fig. 2.
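The delay-dependent condition (3.25) for Example 4.2 can likewise be verified numerically; the short sketch below (our own script) reproduces the quantities computed above.

```python
import numpy as np

# data of Example 4.2
a = np.array([4.0, 5.0]); B = np.array([1.0, 1.0])
b = np.array([[2.0, 0.0], [0.0, 1.0]]); c = np.array([[2.0, 0.0], [0.0, 5.0]])
L = np.array([0.4, 0.4]); alpha = 0.5
q = np.array([0.4, 0.6]); tau = np.max(-np.log(q))   # tau = 0.9163...
eta = 0.2

# delay-dependent condition (3.25): all printed entries should be positive
print(a - eta - (np.abs(b) + np.abs(c) * np.exp(eta * tau)) @ L - B)
print(1.0 - eta - alpha * L)
```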

Example 4.3

Consider the following competitive neural networks

$$\begin{aligned} \left\{ \! \begin{array}{rll} { STM}:\varepsilon \frac{{\text {d}x}_{i}(t)}{\text {d}t}&{}=-a_{i}x_{i}(t)+\sum \limits _{j=1}^{2}D_{ij}f_{j}(x_{j}(t))+\sum \limits _{j=1}^{2}D_{ij}^{\tau }f_{j}(x_{j}(qt))\\ &{}\quad +\,B_{i}s_{i}(t),\\ { LTM}:\frac{\text {d}s_{i}(t)}{\text {d}t}&{}=-s_{i}(t)+f_{i}(x_{i}(t)),~i=1,2,\end{array}\right. \end{aligned}$$
(4.7)

where \(\varepsilon =1\), \(A=\left( \begin{array}{cc} 2.2 &{} 0 \\ 0 &{} 2.2 \\ \end{array} \right) \), \(D=\left( \begin{array}{cc} -1 &{} 0.3 \\ 0.3 &{} -1 \\ \end{array} \right) , \) \(D^{\tau }=\left( \begin{array}{cc} -1.2 &{} 0.5 \\ 0.5 &{} -1.2 \\ \end{array} \right) , \) \(B=\left( \begin{array}{cc} -0.1 &{} 0 \\ 0 &{} 0.3 \\ \end{array} \right) , \) the activation is given by \(f_{i}(s)=0.01\sin (s)\) with \(L_{i}=0.01\). \(q=0.5\).

We compute and obtain that

$$\begin{aligned}&\left\{ \! \begin{array}{lll} a_{1}-\sum \limits _{j=1}^{2}(|b_{1j}|+|c_{1j}|)L_{j}-B_{1}=2.286>0,\\ 1-\alpha L_{1}=0.08>0,\end{array}\right. \\&\left\{ \! \begin{array}{ll} a_{2}-\sum \limits _{j=1}^{2}(|b_{2j}|+|c_{2j}|)L_{j}-B_{2}=1.886>0,\\ 1-\alpha L_{2}=0.08>0.\end{array}\right. \end{aligned}$$

Then it follows from Theorems 3.1 and 3.2 that system (4.7) has a unique equilibrium and it is globally exponentially stable. The Matlab simulation result is presented in Fig. 3. Except for the time delay and the activation function, the data of Example 4.3 are the same as those of Example 4.1 in [32]. The activation function in [32] is discontinuous, while in this paper we choose a continuous and bounded one. The time delay in [32] is time-varying and bounded, while in this paper it is a proportional delay, which is unbounded and time-varying. Thus, the results obtained in [32] cannot be directly applied to Example 4.3 of this paper. In terms of the time delay terms, the results obtained in this paper are less conservative than the previous ones.

Fig. 3 The trajectories of system (4.7) for \((x(0),s(0))^{T}=(1,2,-1,-2)^{T}\)
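For readers who wish to reproduce simulations such as Fig. 3 without Matlab, the sketch below integrates the proportional-delay system (4.7) with a fixed-step Euler scheme, storing the whole trajectory and interpolating the delayed state \(x(qt)\). The step size and horizon are our own illustrative choices, and the initial values are taken constant on \([q,1]\) as in (2.3).

```python
import numpy as np

# data of system (4.7) in Example 4.3
a = np.array([2.2, 2.2])
D = np.array([[-1.0, 0.3], [0.3, -1.0]])
Dtau = np.array([[-1.2, 0.5], [0.5, -1.2]])
B = np.array([-0.1, 0.3])
f = lambda x: 0.01 * np.sin(x)
q, h, T = 0.5, 0.01, 40.0

ts = np.arange(1.0, T, h)                  # the model (2.1) is posed for t >= 1
x = np.zeros((len(ts), 2)); s = np.zeros((len(ts), 2))
x[0] = [1.0, 2.0]; s[0] = [-1.0, -2.0]     # initial values, constant on [q, 1] as in (2.3)

for k in range(len(ts) - 1):
    t = ts[k]
    if q * t <= 1.0:                       # delayed state still lies in the initial interval
        xd = x[0]
    else:                                  # otherwise interpolate the stored trajectory
        xd = np.array([np.interp(q * t, ts[:k + 1], x[:k + 1, i]) for i in range(2)])
    dx = -a * x[k] + D @ f(x[k]) + Dtau @ f(xd) + B * s[k]   # STM of (4.7), epsilon = 1
    ds = -s[k] + f(x[k])                                     # LTM of (4.7)
    x[k + 1] = x[k] + h * dx
    s[k + 1] = s[k] + h * ds

print(x[-1], s[-1])                        # should approach the zero equilibrium of (4.7)
```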

5 Conclusions

In this paper, by using a fixed point theorem and constructing a delay differential inequality, we have discussed the global exponential stability of a class of competitive neural networks with multi-proportional delays, and we have obtained two novel sufficient conditions, one delay-independent and one delay-dependent, which ensure the existence, uniqueness and global exponential stability of the equilibrium of the system. The method is based on constructing a delay differential inequality rather than a Lyapunov functional, and the resulting conditions can be easily checked. Different from prior works, the delays considered here are proportional delays, which are unbounded and time-varying. In terms of the time delay terms, the results obtained in this paper are less conservative than the previous results.