1 Introduction

1.1 Background

COVID-19 is a respiratory illness caused by the novel coronavirus SARS-CoV-2. Since its emergence in late 2019, COVID-19 has had a significant impact on human health, as well as social and economic well-being worldwide. The effects of COVID-19 on humans can range from mild to severe, with the most severe cases resulting in hospitalization, long-term disability, and even death. The symptoms of COVID-19 can include fever, cough, shortness of breath, fatigue, body aches, loss of taste or smell, and gastrointestinal symptoms [1]. In severe cases, the virus can lead to respiratory failure, septic shock, and multi-organ failure. While COVID-19 is most commonly associated with respiratory symptoms, the virus has also been shown to affect other systems of the body, including the cardiovascular, nervous, and gastrointestinal systems.

Studying the effects of COVID-19 on humans is necessary for several reasons. First and foremost, understanding the mechanisms by which the virus causes illness can help healthcare professionals develop effective treatments and vaccines. Additionally, studying the long-term effects of COVID-19 can help researchers understand the potential impact of the virus on the health and well-being of individuals who have recovered from the illness. In summary, studying the effects of COVID-19 on humans is critical for developing effective treatments and vaccines, understanding the long-term impact of the virus on health and well-being, and developing policies and interventions to mitigate the social and economic impacts of the pandemic.

Mathematical models have become essential tools for studying the spread of infectious diseases, including COVID-19 [2,3,4,5]. For instance, Ndaïrou et al. [6] proposed a deterministic model for the spread of COVID-19 with special focus on the transmissibility of super-spreader individuals. Biswas et al. [7] investigated a compartmental model to study the dynamics and future trend of the COVID-19 outbreak in India. Khan and Atangana [8] developed an infectious disease model for Omicron. They analyzed the equilibria of the model and the local asymptotic stability of the disease-free equilibrium. Raza et al. [9] developed an SIVR epidemic model to study the crowding effects of coronavirus.

Nonstandard finite difference methods, particularly in the realm of fractional modeling, offer distinct advantages when it comes to capturing phenomena characterized by anomalous diffusion and other fractional-order dynamics [10]. By incorporating concepts from fractional calculus, such as fractional derivatives or integrals, these methods excel at representing intricate dynamics that possess memory and long-range dependencies. Furthermore, adopting spatio-temporal modeling approaches provides valuable insights into the spatial dynamics of epidemics [11]. This spatial perspective aids in identifying high-risk areas, optimizing resource allocation, and evaluating the effectiveness of targeted interventions, ultimately enhancing the ability to control and manage the spread of infectious diseases.

Tilahun and Alemneh [12] formulated the following deterministic mathematical model for COVID-19 transmission in Ethiopia by a SEIR model:

$$\begin{aligned} \left\{ \begin{aligned} \frac{\textrm{d}S(t)}{\textrm{d}t}&=\Pi +\eta R(t)-\beta (\sigma _1 I(t)+\sigma _2 E(t))S(t)-\mu S(t),\\ \frac{\textrm{d}E(t)}{\textrm{d}t}&=\beta (\sigma _1 I(t)+\sigma _2 E(t))S(t)-(\delta +\mu )E(t),\\ \frac{\textrm{d}I(t)}{\textrm{d}t}&=\tau \delta E(t)-(\epsilon +\rho +\mu )I(t),\\ \frac{\textrm{d}R(t)}{\textrm{d}t}&=(1-\tau )\delta E(t)+\epsilon I(t)-(\mu +\eta )R(t), \end{aligned} \right. \end{aligned}$$
(1.1)

where the meanings of the variables and parameters are given in Table 1.

Table 1 Variables and parameters used in model (1.1)

The analysis of model (1.1) includes a qualitative examination of major features such as the disease-free equilibrium, the basic reproduction number, and the stability of the equilibria. According to [12], the basic reproduction number of the model is

$$\begin{aligned} R_0 =\frac{ \beta \Pi (\sigma _1\tau \delta +\sigma _2 (\epsilon +\rho +\mu ) )}{\mu (\delta +\mu )(\epsilon +\rho +\mu )}. \end{aligned}$$

They further analyzed and obtained that the disease-free equilibrium of the model is locally asymptotically stable and globally asymptotically stable when \(R_0 < 1\).
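For concreteness, the closed-form expression for \(R_0\) above is easy to evaluate numerically. The sketch below uses illustrative placeholder parameter values, not the fitted values from [12]:

```python
def basic_reproduction_number(beta, Pi, sigma1, sigma2, tau,
                              delta, epsilon, rho, mu):
    """R0 = beta*Pi*(sigma1*tau*delta + sigma2*(epsilon+rho+mu))
            / (mu*(delta+mu)*(epsilon+rho+mu))."""
    return (beta * Pi * (sigma1 * tau * delta
                         + sigma2 * (epsilon + rho + mu))
            / (mu * (delta + mu) * (epsilon + rho + mu)))

# Illustrative parameter values (hypothetical, for demonstration only).
params = dict(beta=2e-4, Pi=50.0, sigma1=1.0, sigma2=0.5,
              tau=0.6, delta=0.2, epsilon=0.1, rho=0.05, mu=0.02)
R0 = basic_reproduction_number(**params)
print(f"R0 = {R0:.4f}")  # compare against the threshold value 1
```

Whether the disease dies out or persists in the deterministic model is then read off by comparing this value with 1.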

However, the existence and stability of endemic equilibrium was not discussed. In the “Appendix”, we give the endemic equilibrium of the equivalent model of deterministic model (1.1) and prove its local asymptotic stability when \(R_0>1\).

1.2 Stochastic model formulation

Stochastic mathematical models are valuable for studying infectious diseases because they allow researchers to account for the inherent randomness and uncertainty associated with disease transmission [13, 14]. Raza et al. [15] proposed a stochastic nonstandard finite difference model to investigate the spread of Nipah virus. Hamam et al. [16] utilized an evolutionary approach to study the stochastic modeling of Lassa Fever, a viral hemorrhagic fever. By considering stochasticity in their model, they were able to capture the inherent uncertainty and variability in the transmission dynamics of this disease. Furthermore, a stochastic cancer virotherapy model incorporating immune responses has been developed to analyze the dynamics of cell populations in cancer treatment [17]. This model explores the complex interactions between viruses, cancer cells, and the immune system, providing insights into the efficacy and limitations of virotherapy as a potential cancer treatment approach. In the case of COVID-19, this uncertainty is particularly significant given the rapidly changing nature of the pandemic and the evolving understanding of how the disease spreads.

One of the primary benefits of stochastic mathematical models is that they can provide more realistic and accurate predictions of the course of the pandemic compared to deterministic models [18,19,20,21]. Deterministic models assume that the epidemic unfolds in a predictable way based on a set of fixed parameters, while stochastic models account for the inherent randomness of disease transmission and the impact of random events, such as superspreader events or localized outbreaks, on the spread of the disease.

Another benefit of stochastic models is their ability to provide probabilistic forecasts of the course of the pandemic. Probabilistic forecasts are important because they allow policymakers to assess the likelihood of various outcomes and make informed decisions based on the potential risks and benefits associated with different interventions.

Additionally, stochastic models can be used to estimate important parameters related to the disease, such as the transmission rate, the basic reproductive number, and the effectiveness of various interventions. These estimates can help inform public health policies and interventions aimed at controlling the spread of the disease.

In conclusion, the benefits of using stochastic mathematical models to study COVID-19 are significant. These models provide a more realistic and accurate representation of disease transmission and can be used to provide probabilistic forecasts of the course of the pandemic. They also provide estimates of important disease parameters, which can inform public health policies and interventions aimed at controlling the spread of the disease.

The concept of environmental perturbations in the stochastic model has been introduced by May [22] to capture the effect of uncertainties in the real world. To achieve this, all the parameters in the model have been assumed to be prone to random fluctuations, which can be effectively described using Brownian motion. In particular, the contact rate \(\beta \) is highly sensitive to environmental disturbances. Therefore, we have considered \(\beta \) as a random variable, which is denoted by \(\beta (t)\). There are two commonly used methods for modeling the effect of environmental perturbations. The first method involves the assumption that the random variable is perturbed by Gaussian linear white noise [23], while the second approach considers the scenario where disease transmission rates are influenced by random factors in the environment and tend towards the mean value over time. In this approach, the parameter \(\beta (t)\) is assumed to follow a separate stochastic differential equation (SDE), which is forced to be around the asymptotic mean [24]. This process is also known as a mean-reverting process, where the classical type is the Ornstein–Uhlenbeck (OU) process [25].

Taking inspiration from the method proposed in [26], we have discussed and compared two methods for perturbing \(\beta (t)\) and present the analytical comparison below. The first method involves assuming that the random variables \(\beta (t)\) can be accurately modeled by linear functions of white noise. In such a scenario, it follows that

$$\begin{aligned} \beta \rightarrow \bar{\beta }+\rho \frac{\textrm{d}B(t)}{\textrm{d}t}, \end{aligned}$$
(1.2)

where \(\bar{\beta }\) is the long-run mean level of the contact rate, \(\rho ^2\) denotes the intensity of the white noise and B(t) is a standard Brownian motion defined on a complete probability space. Upon direct integration of equation (1.2), the average disease contact rate over an interval [0, t] is

$$\begin{aligned} \frac{1}{t}\int _0^t \beta (\tau ) \textrm{d}\tau =\bar{\beta }+\rho \frac{ B(t)}{t}\sim {\mathbb {N}}\left( \bar{\beta },\frac{\rho ^2}{t}\right) . \end{aligned}$$

Indeed, it is apparent that as the time interval approaches zero, the average contact rate tends towards infinity. This behavior is problematic because it implies that the average value of parameters, such as the disease contact rate, becomes increasingly unstable as the time interval decreases. Such instability is unrealistic and impractical for modeling purposes.
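This instability can be seen directly in simulation. A minimal Monte Carlo check (with hypothetical values of \(\bar{\beta }\) and \(\rho \)) confirms that the sample variance of the averaged contact rate matches \(\rho ^2/t\) and blows up as \(t \rightarrow 0\):

```python
import numpy as np

# Under perturbation (1.2), the time average of beta over [0, t] is
# Normal(beta_bar, rho^2 / t): its variance blows up as t -> 0.
rng = np.random.default_rng(0)
beta_bar, rho = 0.5, 0.1          # illustrative values
n_paths = 200_000

def avg_contact_rate_var(t):
    # (1/t) * integral of beta = beta_bar + rho * B(t)/t,  B(t) ~ N(0, t)
    B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
    return np.var(beta_bar + rho * B_t / t)

for t in (1.0, 0.1, 0.01):
    print(f"t = {t:5.2f}: empirical var = {avg_contact_rate_var(t):.4f}, "
          f"rho^2/t = {rho**2 / t:.4f}")
```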

These observations suggest that the use of white noise to model environmental changes may have inherent limitations. Recent research has highlighted these limitations, particularly in the context of disease transmission. It has been shown that considering uncertainty as white noise underestimates the severity of major disease outbreaks. On the other hand, the OU model has been shown to accurately predict the process of disease propagation [27]. This model incorporates temporal correlations and persistence, providing a more realistic representation of uncertainty in disease transmission dynamics.

To address the issue of potential negative values for \(\beta (t)\) when directly utilizing the OU process, we introduce a modification. Instead of modeling \(\beta (t)\) directly, we consider the natural logarithm of \(\beta (t)\), denoted as \(\ln \beta (t)\), and apply a mean-reverting SDE to this transformed variable:

$$\begin{aligned} \textrm{d}\ln \beta (t) =\theta \left( \ln \bar{\beta }-\ln \beta (t) \right) \textrm{d}t+\xi \textrm{d}B (t), \end{aligned}$$

where \(\theta \) denotes the speed of reversion and \(\xi \) is noise intensity. Letting \(x(t) = \ln \beta (t)\) and \({\bar{x}} = \ln \bar{\beta }\), then x(t) satisfies the OU SDE:

$$\begin{aligned} \textrm{d}x(t) =\theta \left( {\bar{x}}-x(t) \right) \textrm{d}t+\xi \textrm{d}B (t), \end{aligned}$$
(1.3)

Assuming that \(x(0) = \ln \bar{\beta }\), from [28] it follows that \(x(t)\sim {\mathbb {N}}\left( \ln \bar{\beta },\frac{\xi ^2}{2\theta }(1-e^{-2\theta t})\right) \). Thus \(\beta (t)\) is log-normally distributed with mean \(\bar{\beta }e^{\frac{\xi ^2}{4\theta }(1-e^{-2\theta t})}\) and variance \(\bar{\beta }^2 (e^{\frac{\xi ^2}{ \theta }(1-e^{-2\theta t})}-e^{\frac{\xi ^2}{ 2\theta }(1-e^{-2\theta t})})\), and as \(t\rightarrow \infty \) its probability density approaches a stationary log-normal density with mean \(\bar{\beta }e^{\frac{\xi ^2}{4\theta }}\). As the time interval decreases sufficiently, the variance, which indicates the level of variability in the disease contact rate, gradually tends towards zero. This suggests a stable and consistent disease contact rate over short time intervals. As a result, this modeling approach appears more reasonable.
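The stationary behavior described here can be checked by simulating (1.3) with its exact one-step discretization: the long-run time average of \(\beta (t) = e^{x(t)}\) should approach \(\bar{\beta }e^{\xi ^2/(4\theta )}\). All parameter values below are illustrative:

```python
import numpy as np

# Exact one-step update of the OU equation (1.3):
#   x_{k+1} = x_bar + (x_k - x_bar)*e^{-theta*dt}
#             + xi*sqrt((1 - e^{-2*theta*dt})/(2*theta)) * Z_k
rng = np.random.default_rng(1)
theta, xi, beta_bar, dt = 2.0, 0.5, 0.3, 0.01   # illustrative values
x_bar = np.log(beta_bar)

n_steps, burn_in = 200_000, 20_000
x = np.empty(n_steps)
x[0] = x_bar
decay = np.exp(-theta * dt)
sd = xi * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
for k in range(n_steps - 1):
    x[k + 1] = x_bar + (x[k] - x_bar) * decay + sd * rng.standard_normal()

beta = np.exp(x[burn_in:])
mean_theory = beta_bar * np.exp(xi**2 / (4 * theta))
print(f"time average of beta(t): {beta.mean():.4f}")
print(f"stationary mean:         {mean_theory:.4f}")
```

Note that \(\beta (t) = e^{x(t)}\) is positive along the whole path, in contrast with the white-noise perturbation (1.2).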

By employing the mean-reverting SDE for \(\ln \beta (t)\), we ensure that the resulting process remains within realistic and non-negative values. This modification preserves the inherent properties of the OU process, such as temporal correlations and persistence, while preventing the occurrence of negative transmission rates. This approach allows us to capture the stochastic nature of disease transmission and the impact of environmental fluctuations, while maintaining the realism and feasibility of the model. By modeling the logarithm of \(\beta (t)\) with a mean-reverting SDE, we strike a balance between incorporating realistic dynamics and avoiding unrealistic scenarios with negative transmission rates.

Taken together, these findings emphasize the importance of using appropriate modeling approaches that account for the temporal nature of environmental changes and capture the complex dynamics of disease transmission. The use of the OU model offers a more accurate and reliable framework for understanding and predicting the spread of diseases, addressing the limitations associated with modeling uncertainty as white noise.

Therefore, we obtain the following stochastic COVID-19 model:

$$\begin{aligned} \left\{ \begin{aligned} \textrm{d}x(t)&=\theta \left( {\bar{x}}-x(t) \right) \textrm{d}t+\xi \textrm{d}B (t),\\ \textrm{d}S (t)&=(\Pi +\eta R(t) -e^{x(t)}(\sigma _1 I(t)+\sigma _2 E(t))S(t)-\mu S (t))\textrm{d}t,\\ \textrm{d}E (t)&=(e^{x(t)}(\sigma _1 I(t)+\sigma _2 E(t))S(t)-(\delta +\mu )E(t))\textrm{d}t,\\ \textrm{d}I (t)&=(\tau \delta E(t)-(\epsilon +\rho +\mu )I(t))\textrm{d}t,\\ \textrm{d}R(t)&=((1-\tau )\delta E(t)+\epsilon I(t)-(\mu +\eta )R(t))\textrm{d}t. \end{aligned} \right. \end{aligned}$$
(1.4)
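A minimal Euler–Maruyama sketch of model (1.4) is given below. Only the OU component \(x(t)\) carries noise; the compartments are advanced by their drift. All parameter values and initial conditions are hypothetical placeholders, not values fitted to data:

```python
import numpy as np

# Euler-Maruyama discretization of the stochastic model (1.4).
rng = np.random.default_rng(2)
theta, xi, beta_bar = 1.0, 0.3, 1.0              # OU parameters (illustrative)
Pi, eta, mu = 5.0, 0.01, 0.02                    # demography (illustrative)
sigma1, sigma2 = 0.002, 0.001                    # transmission weights
delta, tau, epsilon, rho = 0.2, 0.6, 0.1, 0.05   # progression/recovery/death
x_bar = np.log(beta_bar)

dt, n_steps = 0.01, 50_000
x, S, E, I, R = x_bar, 190.0, 5.0, 3.0, 2.0      # S+E+I+R < Pi/mu = 250
for _ in range(n_steps):
    force = np.exp(x) * (sigma1 * I + sigma2 * E) * S   # force of infection
    dx = theta * (x_bar - x) * dt + xi * np.sqrt(dt) * rng.standard_normal()
    dS = (Pi + eta * R - force - mu * S) * dt
    dE = (force - (delta + mu) * E) * dt
    dI = (tau * delta * E - (epsilon + rho + mu) * I) * dt
    dR = ((1 - tau) * delta * E + epsilon * I - (mu + eta) * R) * dt
    x, S, E, I, R = x + dx, S + dS, E + dE, I + dI, R + dR

print(f"final state: S={S:.1f}, E={E:.1f}, I={I:.1f}, R={R:.1f}")
print(f"S+E+I+R = {S + E + I + R:.1f} < Pi/mu = {Pi / mu:.1f}")
```

With a simultaneous update of all components, the total population keeps the bound \(S+E+I+R < \Pi /\mu \) along the discrete trajectory, mirroring the invariant set established in Sect. 2.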

By comparing our research with existing results, we have made the following significant contributions:

  • Addressing the limitations of white noise models: Recent studies have demonstrated that stochastic models constructed with white noise tend to underestimate the severity of disease outbreaks. In contrast, our research incorporates the OU process as a suitable tool for modeling uncertainty in the transmission rate [27]. This approach allows for more accurate predictions of the severity and progression of the COVID-19 pandemic. Notably, our stochastic SEIRS epidemic model (1.4) with a log OU process introduces a more realistic and biologically meaningful framework for studying the spread of COVID-19.

  • Novel Lyapunov function construction and stochastic analysis: We contribute to the field by developing a novel approach that combines Lyapunov function construction methods with stochastic process theory. By doing so, we derive sufficient conditions for the existence of a stationary distribution and the extinction of the disease within the model. Specifically, we demonstrate that the model exhibits a stationary distribution when \(R_0^s > 1\), while disease extinction occurs when \(R_0^e < 1\).

  • Exact expression for the probability density function: By defining the quasi-epidemic equilibrium \(E^\star \), we provide an exact expression for the probability density function of a stable distribution in the vicinity of \(E^\star \). This contribution enhances our understanding of the distributional characteristics and behavior of the model near the quasi-epidemic equilibrium.

  • Comprehensive numerical simulations and real case data analysis: We conducted various numerical simulations to illustrate the impact of environmental noise on the dynamics of the stochastic model. Furthermore, we validated our findings by comparing the outcomes of our stochastic model with real case data from Ethiopia, covering the period from March 2020 to July 2021. This comparison highlights the practical relevance and applicability of our stochastic model in analyzing and understanding real-world epidemic dynamics.

In summary, our research provides valuable insights into the limitations of white noise models, introduces a more realistic stochastic framework for modeling the spread of COVID-19, establishes conditions for the existence of stationary distributions and disease extinction, and incorporates numerical simulations and real case data analysis for validation and practical applicability.

In this paper, we will explore the benefits of using mathematical models, particularly stochastic mathematical model (1.4), to study COVID-19. The paper is structured as follows: Sect. 2 presents the preliminaries and investigates the existence of the unique global solution. Sections 3 and 4 explore the stationary distribution and the probability density function of the stochastic model, respectively. Section 5 discusses the sufficient conditions for the extinction of disease. In Sect. 6, we perform several numerical simulations to demonstrate the theoretical findings presented in this paper. Finally, we provide a comprehensive conclusion to the paper.

2 Preliminaries

2.1 Useful lemmas

Throughout this paper, let \((\Omega , {\mathscr {F}}, \{{\mathscr {F}}_t\}_{t\ge 0},{\mathbb {P}})\) be a complete probability space with a filtration \(\{{\mathscr {F}}_t\}_{t\ge 0}\) satisfying the usual conditions (i.e. it is increasing and right continuous while \({\mathscr {F}}_0\) contains all \({\mathbb {P}}\)-null sets). We also let \({\mathbb {R}}^n_+=\{x=(x_1,\dots ,x_n)\in {\mathbb {R}}^n: x_i > 0, i=1,\dots ,n\}\). If M is a matrix, its transpose is denoted by \(M^T\). Let \({\mathbb {N}}_k\) denote a k-dimensional normal distribution, where k is a positive integer.

Lemma 2.1

[29,30,31] Suppose there exists a bounded closed domain \({\mathbb {D}}\subset {\mathbb {R}}^d\) with a regular boundary such that, for any initial value \(X(0) \in {\mathbb {R}}^d\),

$$\begin{aligned} \liminf _{t\rightarrow +\infty }\frac{1}{t}\int _0^t {\mathbb {P}}(\tau ,X(0),{\mathbb {D}})\textrm{d}\tau >0, \ \ a.s., \end{aligned}$$

where \({\mathbb {P}}(\tau ,X(0),{\mathbb {D}})\) is the transition probability of X(t). Then system (1.4) possesses a solution which has the Feller property. In addition, system (1.4) admits at least one invariant probability measure on \({\mathbb {R}}^d\), which means that system (1.4) has at least one ergodic stationary distribution on \({\mathbb {R}}^d\).

Lemma 2.2

Consider the stochastic differential equation

$$\begin{aligned} \textrm{d}(\ln \beta (t))=\theta (\ln \bar{\beta }-\ln \beta (t))\textrm{d}t+\xi \textrm{d}B(t), \end{aligned}$$
(2.1)

where \(\theta \) and \(\xi \) are positive constants and B(t) is a standard Brownian motion. Then:

  1. (i)
    $$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\int _0^t \left| \beta (s)-\bar{\beta }\right| \textrm{d}s \le e^{\bar{x}}\left( 1+e^{\frac{\xi ^2}{\theta }}-2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}}. \end{aligned}$$
  2. (ii)

    For \(n>0\),

    $$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t \beta ^n (s)\textrm{d}s= (\bar{\beta })^n e^{\frac{n^2\xi ^2}{4 \theta }}. \end{aligned}$$

Proof

(i) According to the ergodicity of x(t) and the strong law of large numbers, we obtain

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\int _0^t \left| e^{x (s)}-\bar{\beta }\right| \textrm{d}s&=\int _{-\infty }^{+\infty } \left| e^{x (\nu )}-e^{\bar{x }}\right| \rho (\nu )\textrm{d}\nu \\&\le \left( \int _{-\infty }^{+\infty } \left( e^{x (\nu )}-e^{\bar{x }}\right) ^2\rho (\nu )\textrm{d}\nu \right) ^{\frac{1}{2}} \left( \int _{-\infty }^{+\infty } 1^2 \rho (\nu )\textrm{d}\nu \right) ^{\frac{1}{2}}\\&=\left( \int _{-\infty }^{+\infty } \left( e^{x (\nu )}-e^{\bar{x }}\right) ^2\rho (\nu )\textrm{d}\nu \right) ^{\frac{1}{2}}\\&=\left( e^{2\bar{x }+\frac{\xi ^2}{\theta }}+e^{2\bar{x }}-2e^{2\bar{x }+\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}}\\&=e^{\bar{x }}\left( 1+e^{\frac{\xi ^2}{\theta }}-2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}}, \end{aligned}$$

where

$$\begin{aligned} \rho (\nu )=\frac{\sqrt{\theta }}{ \sqrt{\pi }\xi } e^{-\frac{\theta (\nu -\bar{x })^2}{\xi ^2}}. \end{aligned}$$

(ii) Denote \(x(t)=\ln \beta (t)\) and \({\bar{x}}=\ln \bar{\beta }\). Then (2.1) becomes

$$\begin{aligned} \textrm{d}x(t)=\theta ( {\bar{x}}- x(t))\textrm{d}t+\xi \textrm{d}B(t). \end{aligned}$$
(2.2)

Then we have

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t \beta ^n (s)\textrm{d}s&= \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t e^{nx(s)}\textrm{d}s \\&= \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t e^{\frac{\sqrt{2\theta }(x(s)-{\bar{x}})}{\xi }\cdot \frac{n\xi }{\sqrt{2\theta }}} e^{{\bar{x}} n}\textrm{d}s. \end{aligned}$$

Let \(v(t)=\frac{\sqrt{2\theta }(x(t)-{\bar{x}})}{\xi }\), then it is obvious that the stationary distribution of v(t) obeys \({\mathbb {N}}(0,1)\). Therefore, we have

$$\begin{aligned} \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t \beta ^n (s)\textrm{d}s= & {} (\bar{\beta })^n \int _{-\infty }^{+\infty } \frac{1}{\sqrt{2\pi }} e^{-\frac{v^2}{2}} e^{\frac{n\xi }{\sqrt{2\theta }}v}\textrm{d}v \\= & {} (\bar{\beta })^n \int _{-\infty }^{+\infty } \frac{1}{\sqrt{2\pi }} e^{-\frac{\left( v-\frac{n\xi }{\sqrt{2\theta }}\right) ^2}{2}} e^{\frac{n^2\xi ^2}{ 4\theta } }\textrm{d}v\\= & {} (\bar{\beta })^n e^{\frac{n^2\xi ^2}{4\theta } }. \end{aligned}$$

\(\square \)
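Lemma 2.2(ii) can also be checked numerically via the ergodic theorem: the time average of \(\beta ^n\) equals the expectation of \(e^{nx}\) under the stationary law \(x \sim {\mathbb {N}}(\ln \bar{\beta }, \xi ^2/(2\theta ))\). Parameter values below are illustrative:

```python
import numpy as np

# Monte Carlo check of Lemma 2.2(ii):
#   E[beta^n] = beta_bar^n * exp(n^2 * xi^2 / (4*theta))
rng = np.random.default_rng(3)
theta, xi, beta_bar, n = 1.5, 0.4, 0.8, 3        # illustrative values

# Stationary law of x(t): N(ln(beta_bar), xi^2 / (2*theta)).
x = rng.normal(np.log(beta_bar), xi / np.sqrt(2 * theta), size=2_000_000)
empirical = np.mean(np.exp(n * x))
theoretical = beta_bar**n * np.exp(n**2 * xi**2 / (4 * theta))
print(f"empirical  E[beta^n] = {empirical:.5f}")
print(f"Lemma 2.2(ii) value  = {theoretical:.5f}")
```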

Next, we give a lemma on five-dimensional positive definite matrices.

Lemma 2.3

[32] For a symmetric matrix \(\Omega _0\), if \(\Omega _0\) satisfies \(\Xi _0^2+A_0\Omega _0+\Omega _0 A_0^T=0\), where

$$\begin{aligned} \Xi _0 = \begin{pmatrix} 1 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 0 &{} 0 \end{pmatrix}, \qquad A_0 = \begin{pmatrix} -\vartheta _1 &{} -\vartheta _2 &{} -\vartheta _3 &{} -\vartheta _4 &{} -\vartheta _5\\ 1 &{} 0 &{} 0 &{} 0 &{} 0\\ 0 &{} 1 &{} 0 &{} 0 &{} 0\\ 0 &{} 0 &{} 1 &{} 0 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0 \end{pmatrix}, \end{aligned}$$

with

$$\begin{aligned}&\vartheta _1,\vartheta _2,\vartheta _3,\vartheta _4,\vartheta _5>0,\quad \vartheta _1\vartheta _2-\vartheta _3>0,\quad \vartheta _3\vartheta _4-\vartheta _2\vartheta _5>0,\\&\vartheta _3(\vartheta _1\vartheta _2-\vartheta _3)-\vartheta _1(\vartheta _1\vartheta _4-\vartheta _5)>0,\\&(\vartheta _1\vartheta _2-\vartheta _3)(\vartheta _3\vartheta _4-\vartheta _2\vartheta _5) -(\vartheta _1\vartheta _4-\vartheta _5)^2>0. \end{aligned}$$

Then

$$\begin{aligned} \Omega _0 = \begin{pmatrix} \vartheta _{11} &{} 0 &{} -\vartheta _{22} &{} 0 &{} \vartheta _{33} \\ 0 &{} \vartheta _{22} &{} 0 &{} -\vartheta _{33} &{} 0\\ -\vartheta _{22} &{} 0 &{} \vartheta _{33} &{} 0 &{} -\vartheta _{44}\\ 0 &{} -\vartheta _{33} &{} 0 &{} \vartheta _{44} &{} 0\\ \vartheta _{33} &{} 0 &{} -\vartheta _{44} &{} 0 &{} \vartheta _{55} \end{pmatrix} \end{aligned}$$

is a positive definite matrix, where

$$\begin{aligned} \vartheta _{11}= & {} \frac{\vartheta _2(\vartheta _3\vartheta _4-\vartheta _2\vartheta _5)-\vartheta _4 (\vartheta _1\vartheta _4-\vartheta _5)}{2[(\vartheta _1\vartheta _2-\vartheta _3)(\vartheta _3\vartheta _4 -\vartheta _2\vartheta _5)-(\vartheta _1\vartheta _4-\vartheta _5)^2] },\\ \vartheta _{22}= & {} \frac{ \vartheta _3\vartheta _4-\vartheta _2\vartheta _5 }{2[(\vartheta _1\vartheta _2-\vartheta _3)(\vartheta _3\vartheta _4-\vartheta _2\vartheta _5) -(\vartheta _1\vartheta _4-\vartheta _5)^2] },\\ \vartheta _{33}= & {} \frac{ \vartheta _1\vartheta _4-\vartheta _5 }{2[(\vartheta _1\vartheta _2-\vartheta _3)(\vartheta _3\vartheta _4-\vartheta _2\vartheta _5) -(\vartheta _1\vartheta _4-\vartheta _5)^2] },~ \\ \vartheta _{44}= & {} \frac{\vartheta _1\vartheta _2-\vartheta _3 }{2[(\vartheta _1\vartheta _2-\vartheta _3)(\vartheta _3\vartheta _4-\vartheta _2\vartheta _5) -(\vartheta _1\vartheta _4-\vartheta _5)^2] },\\ \vartheta _{55}= & {} \frac{\vartheta _3(\vartheta _1\vartheta _2-\vartheta _3)-\vartheta _1 (\vartheta _1\vartheta _4-\vartheta _5)}{2[(\vartheta _1\vartheta _2-\vartheta _3)(\vartheta _3\vartheta _4 -\vartheta _2\vartheta _5)-(\vartheta _1\vartheta _4-\vartheta _5)^2]}. \end{aligned}$$
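Lemma 2.3 is straightforward to verify numerically. The sketch below uses \((\vartheta _1,\dots ,\vartheta _5) = (5,10,10,5,1)\), the coefficients of \((\lambda +1)^5\), which satisfies all the stated positivity conditions; it checks both the Lyapunov identity \(\Xi _0^2+A_0\Omega _0+\Omega _0 A_0^T=0\) and the positive definiteness of \(\Omega _0\):

```python
import numpy as np

# Numerical check of Lemma 2.3 with the illustrative choice
# (v1,...,v5) = (5,10,10,5,1), the coefficients of (lambda+1)^5.
v1, v2, v3, v4, v5 = 5.0, 10.0, 10.0, 5.0, 1.0
D = 2 * ((v1*v2 - v3) * (v3*v4 - v2*v5) - (v1*v4 - v5)**2)

t11 = (v2 * (v3*v4 - v2*v5) - v4 * (v1*v4 - v5)) / D
t22 = (v3*v4 - v2*v5) / D
t33 = (v1*v4 - v5) / D
t44 = (v1*v2 - v3) / D
t55 = (v3 * (v1*v2 - v3) - v1 * (v1*v4 - v5)) / D

Omega0 = np.array([[ t11,  0.0, -t22,  0.0,  t33],
                   [ 0.0,  t22,  0.0, -t33,  0.0],
                   [-t22,  0.0,  t33,  0.0, -t44],
                   [ 0.0, -t33,  0.0,  t44,  0.0],
                   [ t33,  0.0, -t44,  0.0,  t55]])
A0 = np.array([[-v1, -v2, -v3, -v4, -v5],
               [1.0, 0.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0, 1.0, 0.0]])
Xi0_sq = np.diag([1.0, 0.0, 0.0, 0.0, 0.0])

residual = np.abs(Xi0_sq + A0 @ Omega0 + Omega0 @ A0.T).max()
eig_min = np.linalg.eigvalsh(Omega0).min()
print(f"Lyapunov residual: {residual:.2e}")
print(f"smallest eigenvalue of Omega0: {eig_min:.6f}")
```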

2.2 Existence and uniqueness of the global solution

Firstly, we give the following fundamental theorem with respect to a unique global solution of stochastic model (1.4).

Theorem 2.1

For any initial value (x(0), S(0), E(0),  \(I(0),R(0))\in {\mathbb {R}}\times {\mathbb {R}}_+^4\), there exists a unique solution (x(t), S(t), E(t), I(t), R(t)) of model (1.4) on \(t \ge 0\), and the solution remains in \({\mathbb {R}}\times {\mathbb {R}}_+^4\) almost surely (a.s.).

Proof

The beginning and the end of the proof are similar to those in [33], so we only present the most crucial Lyapunov function. Define a \(C^2\)-function as follows

$$\begin{aligned} U_0&= (e^{x}-x-1)+(S-1- \ln S)+(E-1-\ln E)\\&\quad + (I-1- \ln I)+(R-1-\ln R), \end{aligned}$$

where the non-negativity of \(U_0\) can be obtained through the inequality \(z-1-\ln z \ge 0\) for \(z >0\). Applying Itô’s formula to \(U_0\), we have

$$\begin{aligned} {\mathcal {L}}U_0&= \theta (e^{x}-1)({\bar{x}}-x)+\frac{\xi ^2}{2} e^{x} +\left( 1-\frac{1}{S}\right) \left( \Pi +\eta R-e^x(\sigma _1 I+\sigma _2 E)S-\mu S\right) \\&\quad +\left( 1-\frac{1}{E}\right) \left( e^x(\sigma _1 I+\sigma _2 E)S-(\delta +\mu )E\right) +\left( 1-\frac{1}{I}\right) \left( \tau \delta E-(\epsilon +\rho +\mu )I\right) \\&\quad +\left( 1-\frac{1}{R}\right) \left( (1-\tau )\delta E+\epsilon I-(\mu +\eta )R\right) \\&\le \theta (e^{x}-1)({\bar{x}}-x)+\frac{\xi ^2}{2} e^{x} +e^x(\sigma _1 I+\sigma _2 E) +\Pi +\delta + \epsilon +\rho +\eta +3\mu . \end{aligned}$$

Notice that

$$\begin{aligned} (S+E+I+R)'=\Pi -\mu (S+E+I+R)-\rho I \le \Pi -\mu (S+E+I+R), \end{aligned}$$

Thus

$$\begin{aligned} S+E+I+R\le {\mathcal {N}}_0 \triangleq \left\{ \begin{aligned}&\frac{\Pi }{\mu },&\text{ if } S(0)+E(0)+I(0)+R(0)\le \frac{\Pi }{\mu },\\&S(0)+E(0)+I(0)+R(0),&\text{ if } S(0)+E(0)+I(0)+R(0)>\frac{\Pi }{\mu }. \end{aligned} \right. \end{aligned}$$
(2.3)

Therefore, combining (2.3), one gets

$$\begin{aligned} {\mathcal {L}}U_0&\le \theta (e^{x}-1)({\bar{x}}-x)+\frac{\xi ^2}{2} e^{x} +e^x(\sigma _1 I+\sigma _2 E) +\Pi +\delta + \epsilon +\rho +\eta +3\mu \\&\le f_0(x)+\Pi +\delta + \epsilon +\rho +\eta +3\mu , \end{aligned}$$

where

$$\begin{aligned} f_0(x)= \theta (e^{x}-1)({\bar{x}}-x)+\left( \frac{\xi ^2}{2}+ \frac{\Pi (\sigma _1 +\sigma _2 )}{\mu }\right) e^{x}. \end{aligned}$$

Note that \(f_0(x)\) tends to negative infinity as x tends to either negative or positive infinity. Therefore, \(f_0(x)\) is bounded above on \({\mathbb {R}}\). Then we have

$$\begin{aligned} {\mathcal {L}}U_0\le \sup _{x\in {\mathbb {R}}}\{f_0(x)\}+\Pi +\delta + \epsilon +\rho +\eta +3\mu \le K_0, \end{aligned}$$

where \(K_0\) is a positive constant. The rest of the proof is similar to that of Theorem 3.1 in Yang et al. [33] and is therefore omitted. This completes the proof. \(\square \)
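The boundedness of \(f_0\) used in this estimate can be illustrated numerically with a coarse grid search (the parameter values below are hypothetical):

```python
import numpy as np

# f0(x) = theta*(e^x - 1)*(x_bar - x) + c*e^x with
# c = xi^2/2 + Pi*(sigma1 + sigma2)/mu; f0 -> -inf as x -> +/- inf,
# so the supremum is attained at an interior point of the real line.
theta, x_bar = 1.0, np.log(0.5)                  # illustrative values
c = 0.3**2 / 2 + 5.0 * (0.002 + 0.001) / 0.02    # xi^2/2 + Pi*(s1+s2)/mu

def f0(x):
    return theta * (np.exp(x) - 1) * (x_bar - x) + c * np.exp(x)

xs = np.linspace(-50.0, 50.0, 400_001)
vals = f0(xs)
i = int(np.argmax(vals))
print(f"sup f0 over grid: {vals[i]:.4f} at x = {xs[i]:.4f}")
```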

Remark 2.1

Theorem 2.1 demonstrates that for any initial value \((x(0),S(0),E(0),I(0),R(0))\in {\mathbb {R}} \times {\mathbb {R}}_+^4\), there exists a unique global solution (x(t), S(t), E(t),  \( I(t),R(t))\in {\mathbb {R}} \times {\mathbb {R}}_+^4\) a.s. of system (1.4).

Since

$$\begin{aligned} (S+E+I+R)' \le \Pi -\mu (S+E+I+R), \end{aligned}$$

one gets

$$\begin{aligned}{} & {} \!\!\!S(t)+E(t)+I(t)+R(t)\le \frac{\Pi }{\mu }\\{} & {} \quad +e^{-\mu t}\left( S(0)+E(0)+I(0)+R(0)-\frac{\Pi }{\mu }\right) , \end{aligned}$$

Thus, if \(S(0)+E(0)+I(0)+R(0)<\frac{\Pi }{\mu }\), then \( S(t)+E(t)+I(t)+R(t)<\frac{\Pi }{\mu }\) a.s. Hence, the region

$$\begin{aligned} \Gamma= & {} \left\{ (x,S,E,I,R)\in {\mathbb {R}} \times {\mathbb {R}}_+^4:\right. \\{} & {} \quad \left. S+E+I+R<\frac{\Pi }{\mu }\right\} \end{aligned}$$

is a positively invariant set of system (1.4). From now on, we always assume that the initial value \((x(0),S(0),E(0), I(0),R(0))\in \Gamma \).

3 Ergodic property and stationary distribution

In the deterministic COVID-19 epidemic model (1.1), the stability of the endemic equilibrium reflects the long-term spread of the disease. For the stochastic model (1.4), which takes stochastic factors into account, the persistence and ergodicity of the disease are instead characterized by the existence of a stationary distribution.

Define

$$\begin{aligned} R_0^s=\frac{\bar{\beta }\Pi \left( \sigma _1\tau \delta e^{\frac{\xi ^2}{12\theta }}+\sigma _2e^{\frac{\xi ^2}{8\theta }}(\epsilon +\rho +\mu ) \right) }{\mu (\delta +\mu )(\epsilon +\rho +\mu )}. \end{aligned}$$
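Numerically, \(R_0^s\) reduces to the deterministic \(R_0\) (with \(\beta = \bar{\beta }\)) as \(\xi \rightarrow 0\), and the factors \(e^{\xi ^2/(12\theta )}\) and \(e^{\xi ^2/(8\theta )}\) increase it for \(\xi > 0\). A quick check with illustrative parameter values:

```python
import numpy as np

def R0s(beta_bar, Pi, sigma1, sigma2, tau, delta,
        epsilon, rho, mu, xi, theta):
    """R0^s of the stochastic model, as defined above."""
    num = beta_bar * Pi * (sigma1 * tau * delta * np.exp(xi**2 / (12 * theta))
          + sigma2 * np.exp(xi**2 / (8 * theta)) * (epsilon + rho + mu))
    return num / (mu * (delta + mu) * (epsilon + rho + mu))

# Hypothetical parameter values, for illustration only.
p = dict(beta_bar=2e-4, Pi=50.0, sigma1=1.0, sigma2=0.5, tau=0.6,
         delta=0.2, epsilon=0.1, rho=0.05, mu=0.02)
for xi in (0.0, 0.2, 0.5):
    print(f"xi = {xi:.1f}: R0^s = {R0s(**p, xi=xi, theta=1.0):.4f}")
```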

Theorem 3.1

Assume that \(R_0^s >1\), then the stochastic system (1.4) admits at least one ergodic stationary distribution.

Proof

We divide the proof into three steps: first we construct suitable stochastic Lyapunov functions, then we construct a compact set, and finally we verify its ergodicity using above Lyapunov functions and the set. Consider

$$\begin{aligned} {\mathcal {L}} (-\ln E)&= -\frac{\sigma _1 I e^x (N -E -I -R )}{E} -\sigma _2 e^x (N -E -I -R ) + \delta +\mu , \\ {\mathcal {L}} (-\ln (N-E-I-R))&\le -\frac{\Pi }{N-E-I-R} +e^x(\sigma _1 I+\sigma _2 E) +\mu , \\ {\mathcal {L}} (-\ln I)&= -\frac{\tau \delta E}{I}+\epsilon +\rho +\mu . \end{aligned}$$

Then define

$$\begin{aligned} U_1=-\ln E-(a_1+a_2)\ln (N-E-I-R)-a_3\ln I, \end{aligned}$$

where the positive constants \(a_i\) \((i=1,2,3)\) will be determined later.

$$\begin{aligned} {\mathcal {L}} U_1&\le -\frac{\sigma _1 I e^x (N -E -I -R )}{E}-\frac{a_1\Pi }{N-E-I-R} -\frac{a_3\tau \delta E}{I} -\sigma _2 e^x (N -E -I -R ) -\frac{a_2\Pi }{N-E-I-R} \\&\quad + \delta +\mu +(a_1+a_2 )\mu +a_3(\epsilon +\rho +\mu ) +(a_1+a_2)e^x(\sigma _1 I+\sigma _2 E)\\&\le -3\root 3 \of {a_1 a_3\Pi \sigma _1\tau \delta e^x} -2\sqrt{a_2\Pi \sigma _2 e^x} + \delta +\mu +(a_1+a_2 )\mu +a_3(\epsilon +\rho +\mu )+(a_1+a_2)e^x(\sigma _1 I+\sigma _2 E)\\&= -3\root 3 \of {a_1 a_3\Pi \sigma _1\tau \delta \bar{\beta }e^{\frac{\xi ^2}{12\theta }}} -2\sqrt{a_2\Pi \sigma _2 \bar{\beta }e^{\frac{\xi ^2}{8\theta }}} + \delta +\mu +(a_1+a_2 )\mu +a_3(\epsilon +\rho +\mu ) +(a_1+a_2)e^x(\sigma _1 I+\sigma _2 E)+f_1(x), \end{aligned}$$

where

$$\begin{aligned} f_1(x)= 3\root 3 \of {a_1 a_3\Pi \sigma _1\tau \delta }\left( \bar{\beta }^{\frac{1}{3}} e^{\frac{\xi ^2}{36\theta }}-e^{\frac{x}{3}} \right) +2\sqrt{a_2\Pi \sigma _2 }\left( \bar{\beta }^{\frac{1}{2}} e^{\frac{\xi ^2}{16\theta }}-e^{\frac{x}{2 }} \right) . \end{aligned}$$

Choose

$$\begin{aligned} a_1=\frac{\Pi \sigma _1\tau \delta \bar{\beta }e^{\frac{\xi ^2}{12\theta }}}{\mu ^2 (\epsilon +\rho +\mu )}, \quad a_2=\frac{\Pi \sigma _2 \bar{\beta }e^{\frac{\xi ^2}{8\theta }}}{\mu ^2},\quad a_3=\frac{\Pi \sigma _1\tau \delta \bar{\beta }e^{\frac{\xi ^2}{12\theta }}}{\mu (\epsilon +\rho +\mu )^2}. \end{aligned}$$

Then we have

$$\begin{aligned} {\mathcal {L}} U_1&\le -\frac{\Pi \sigma _1\tau \delta \bar{\beta }e^{\frac{\xi ^2}{12\theta }}}{\mu (\epsilon +\rho +\mu )} -\frac{\Pi \sigma _2 \bar{\beta }e^{\frac{\xi ^2}{8\theta }}}{\mu } + \delta +\mu +(a_1+a_2)e^x(\sigma _1 I+\sigma _2 E)+f_1(x)\\&= -(R_0^s-1)(\delta +\mu )+(a_1+a_2) e^x(\sigma _1 I+\sigma _2 E)+f_1(x). \end{aligned}$$

Note that

$$\begin{aligned} e^x\le a_4 e^{2x }+\frac{1}{4a_4},~ e^x\le a_5 e^{2x }+\frac{1}{4a_5}. \end{aligned}$$

Then

$$\begin{aligned} {\mathcal {L}} U_1&\le -(R_0^s-1)(\delta +\mu )+ \sigma _1(a_1+a_2)\left( a_4 e^{2x }+\frac{1}{4a_4}\right) I +\sigma _2(a_1+a_2)\left( a_5 e^{2x }+\frac{1}{4a_5}\right) E +f_1(x)\\&\le -(R_0^s-1)(\delta +\mu ) +\frac{\Pi (a_1+a_2)(\sigma _1a_4+\sigma _2a_5) e^{2x }}{\mu } +\frac{\sigma _1(a_1+a_2)}{4a_4}I +\frac{\sigma _2(a_1+a_2)}{4a_5}E +f_1(x)\\&= -(R_0^s-1)(\delta +\mu ) +\frac{\Pi (a_1+a_2)(\sigma _1a_4+\sigma _2a_5) }{\mu }\bar{\beta }^2 e^{\frac{\xi ^2}{\theta }} +\frac{\sigma _1(a_1+a_2)}{4a_4}I +\frac{\sigma _2(a_1+a_2)}{4a_5}E +f_1(x)+f_2(x), \end{aligned}$$

where

$$\begin{aligned} f_2(x)=\frac{\Pi (a_1+a_2)(\sigma _1a_4+\sigma _2a_5) }{\mu }\left( e^{2x }-\bar{\beta }^2 e^{\frac{\xi ^2}{\theta }} \right) . \end{aligned}$$

Choose

$$\begin{aligned} a_4= & {} \frac{\mu (R_0^s-1)(\delta +\mu ) }{4\Pi \sigma _1\bar{\beta }^2 e^{\frac{\xi ^2}{\theta }} (a_1+a_2)},\\ a_5= & {} \frac{\mu (R_0^s-1)(\delta +\mu )}{4\Pi \sigma _2\bar{\beta }^2 e^{\frac{\xi ^2}{\theta }}(a_1+a_2)}, \end{aligned}$$

such that

$$\begin{aligned} {\mathcal {L}} U_1\le & {} -\frac{1}{2}(R_0^s-1)(\delta +\mu ) +\frac{\sigma _1(a_1+a_2)}{4a_4}\\{} & {} \quad I +\frac{\sigma _2(a_1+a_2)}{4a_5}E +f_1(x)+f_2(x). \end{aligned}$$

Define

$$\begin{aligned} U_2= & {} U_1+\frac{\sigma _1(a_1+a_2)}{4a_4(\epsilon +\rho +\mu )} I. \\ {\mathcal {L}} U_2\le & {} -\frac{1}{2}(R_0^s-1)(\delta +\mu ) \\{} & {} \quad +\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \\{} & {} \quad E +f_1(x)+f_2(x). \end{aligned}$$

Next define

$$\begin{aligned} U_3=-\ln S-\ln I-\ln R-\ln \left( \frac{\Pi }{\mu }-S-E-I-R\right) +e^x-x-1, \end{aligned}$$

we have

$$\begin{aligned} {\mathcal {L}} U_3\le & {} -\frac{\Pi }{S} -\frac{\tau \delta E}{I}-\frac{(1-\tau )\delta E}{R}\\{} & {} +\frac{\Pi -\mu (S+E+I+R)-\rho I}{\frac{\Pi }{\mu }-S-E-I-R} +\epsilon + \rho +\eta +2\mu \\{} & {} +e^x(\sigma _1 I+\sigma _2 E)+\theta ({\bar{x}}-x)(e^x-1)+\frac{\xi ^2 e^x}{2}\\\le & {} -\frac{\Pi }{S} -\frac{\tau \delta E}{I} -\frac{(1-\tau )\delta E}{R} -\frac{ \rho I}{\frac{\Pi }{\mu }-S-E-I-R}\\{} & {} +\theta ({\bar{x}}-x)(e^x-1)+\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x\\{} & {} +\epsilon + \rho +\eta +3\mu . \end{aligned}$$

Finally define

$$\begin{aligned} U_4= M_0U_2+U_3, \end{aligned}$$

where \(M_0\) is a sufficiently large constant satisfying

$$\begin{aligned}{} & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu ) +\sup _{x\in {\mathbb {R}}}\left\{ \theta ({\bar{x}}-x)(e^x-1)\right. \nonumber \\{} & {} \quad \left. +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x \right\} +\epsilon + \rho +\eta +3\mu \nonumber \\{} & {} \le -2. \end{aligned}$$
(3.1)

Then we have

$$\begin{aligned} {\mathcal {L}} U_4\le & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu )\\{} & {} +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E\\{} & {} +M_0f_1(x)+M_0f_2(x)\\{} & {} -\frac{\Pi }{S} -\frac{\tau \delta E}{I}-\frac{(1-\tau )\delta E}{R} -\frac{ \rho I}{\frac{\Pi }{\mu }-S-E-I-R}\\{} & {} +\theta ({\bar{x}}-x)(e^x-1)+\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x\\{} & {} +\epsilon + \rho +\eta +3\mu \\= & {} f_3(x,S,E,I,R)+M_0f_1(x)+M_0f_2(x), \end{aligned}$$

where

$$\begin{aligned} f_3(x,S,E,I,R)= & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu )\nonumber \\{} & {} +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E\nonumber \\{} & {} -\frac{\Pi }{S} -\frac{\tau \delta E}{I}-\frac{(1-\tau )\delta E}{R} -\frac{ \rho I}{\frac{\Pi }{\mu }-S-E-I-R}\nonumber \\{} & {} +\theta ({\bar{x}}-x)(e^x-1) +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x\nonumber \\{} & {} +\epsilon + \rho +\eta +3\mu . \end{aligned}$$
(3.2)

Then, we construct a compact set \({\mathbb {D}}\subset \Gamma \) as follows

$$\begin{aligned} {\mathbb {D}}= & {} \Bigg \{(x,N,E,I,R)\in \Gamma :\kappa \le e^{x}\le \frac{1}{\kappa },~\kappa \le S,~\kappa \le E,\\{} & {} \kappa ^2\le I,~\kappa ^2 \le R,~ S+E+I+R\le \frac{\Pi }{\mu }-\kappa ^3 \Bigg \} \end{aligned}$$

such that \(f_3(x,N,E,I,R) \le -1\) for any \((x,N,E,I,R)\in \Gamma {\setminus } {\mathbb {D}}:={\mathbb {D}}^c\). Then let \({\mathbb {D}}^c=\bigcup _{i=1}^7 {\mathbb {D}}_i^c\), where

$$\begin{aligned} {\mathbb {D}}_1^c= & {} \{(x,N,E,I,R)\in \Gamma :0<e^{x}<\kappa \},\\ {\mathbb {D}}_2^c= & {} \{(x,N,E,I,R)\in \Gamma :\frac{1}{\kappa }<e^{x}\},\\ {\mathbb {D}}_3^c= & {} \{(x,N,E,I,R)\in \Gamma : 0<E<\kappa \},\\ {\mathbb {D}}_4^c= & {} \{(x,N,E,I,R)\in \Gamma : \kappa \le E,0<I<\kappa ^2\},\\ {\mathbb {D}}_5^c= & {} \{(x,N,E,I,R)\in \Gamma : \kappa \le E,0<R<\kappa ^2\},\\ {\mathbb {D}}_6^c= & {} \{(x,N,E,I,R)\in \Gamma : 0<S<\kappa \},\\ {\mathbb {D}}_7^c= & {} \left\{ (x,N,E,I,R)\in \Gamma : \kappa ^2\le I,\right. \\{} & {} \quad \left. \frac{\Pi }{\mu }-\kappa ^3<S+E+I+R\right\} , \end{aligned}$$

where \(\kappa \in (0,1)\) is a sufficiently small constant satisfying the following inequalities:

$$\begin{aligned} \frac{\theta }{2}(1-\kappa )(\ln \kappa -\bar{x})+\sup _{x\in {\mathbb {R}}}\{f_4(x)\} \le -1, \end{aligned}$$
(3.3)

with

$$\begin{aligned} f_4(x)= & {} \left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x \\{} & {} \quad + M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \\{} & {} \quad \frac{\Pi }{\mu }+\epsilon + \rho +\eta +3\mu , \end{aligned}$$
$$\begin{aligned}{} & {} \frac{\theta }{2} \left( \frac{1}{\kappa }-1\right) (\ln \kappa +\bar{x})+\sup _{x\in {\mathbb {R}}}\{f_4(x)\} \le -1, \nonumber \\ \end{aligned}$$
(3.4)
$$\begin{aligned}{} & {} M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \kappa \le 1, \nonumber \\ \end{aligned}$$
(3.5)
$$\begin{aligned}{} & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{\tau \delta }{\kappa }\le 1,\nonumber \\ \end{aligned}$$
(3.6)
$$\begin{aligned}{} & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{(1-\tau )\delta }{\kappa }\le 1,\nonumber \\ \end{aligned}$$
(3.7)
$$\begin{aligned}{} & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{\Pi }{\kappa }\le 1,\nonumber \\ \end{aligned}$$
(3.8)
$$\begin{aligned}{} & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{\rho }{\kappa }\le 1.\nonumber \\ \end{aligned}$$
(3.9)

Case 1 If \((x,N,E,I,R)\in {\mathbb {D}}_1^c\), i.e. \(x\in (-\infty ,\ln \kappa )\), from (3.2) and (3.3), we have

$$\begin{aligned} f_3( x,N,E,I,R )\le & {} \frac{\theta }{2}({\bar{x}}-x)(e^x-1)+f_4(x) \\{} & {} \le \frac{\theta }{2}(1-\kappa )(\ln \kappa -\bar{x})\\{} & {} \quad +\sup _{x\in {\mathbb {R}}}\{f_4(x)\} \le -1. \end{aligned}$$

Case 2 If \((x,N,E,I,R)\in {\mathbb {D}}_2^c\), i.e. \(x\in (-\ln \kappa ,\infty )\), from (3.2) and (3.4), we have

$$\begin{aligned} f_3( x,N,E,I,R )\le & {} \frac{\theta }{2}({\bar{x}}-x)(e^x-1)+f_4(x)\\{} & {} \le \frac{\theta }{2} \left( \frac{1}{\kappa }-1\right) (\ln \kappa +\bar{x})\\{} & {} +\sup _{x\in {\mathbb {R}}}\{f_4(x)\} \le -1. \end{aligned}$$

Case 3 If \((x,N,E,I,R)\in {\mathbb {D}}_3^c\), then from (3.1), (3.2) and (3.5), we have

$$\begin{aligned} f_3( x,N,E,I,R )\le & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu ) \\{} & {} \quad +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} \right. \\{} & {} \quad \left. +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E \\{} & {} \quad +\theta ({\bar{x}}-x)(e^x-1)\nonumber \\{} & {} \quad +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) \\{} & {} \quad e^x+\epsilon + \rho +\eta +3\mu \\{} & {} \le M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} \right. \\{} & {} \quad \left. +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \kappa -2 \\{} & {} \le -1. \end{aligned}$$

Case 4 If \((x,N,E,I,R)\in {\mathbb {D}}_4^c\), then from (3.1), (3.2) and (3.6), we have

$$\begin{aligned} f_3(x,N,E,I,R)\le & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu )\\{} & {} +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} \right. \\{} & {} \quad \left. +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E -\frac{\tau \delta E}{I} \\{} & {} +\theta ({\bar{x}}-x)(e^x-1)\\{} & {} +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) \\{} & {} e^x+\epsilon + \rho +\eta +3\mu \\{} & {} \le M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } \right. \\{} & {} \quad \left. {+}\frac{\sigma _2\tau \delta (a_1{+}a_2)}{4a_4(\epsilon {+}\rho {+}\mu )}\right) \frac{\Pi }{\mu } {-}\frac{\tau \delta }{\kappa } {-}2\\{} & {} \le -1. \end{aligned}$$

Case 5 If \((x,N,E,I,R)\in {\mathbb {D}}_5^c\), from (3.1), (3.2) and (3.7), we have

$$\begin{aligned} f_3(x,N,E,I,R)\le & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu )\\{} & {} +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E \\{} & {} -\frac{(1-\tau )\delta E}{R}+\theta ({\bar{x}}-x)(e^x-1)\\{} & {} +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x+\epsilon + \rho +\eta +3\mu \\\le & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{(1-\tau )\delta }{\kappa } -2\\\le & {} -1. \end{aligned}$$

Case 6 If \((x,N,E,I,R)\in {\mathbb {D}}_6^c\), from (3.1), (3.2) and (3.8), we have

$$\begin{aligned} f_3(x,N,E,I,R)\le & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu )\\{} & {} \quad +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} \right. \\{} & {} \quad \left. +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E -\frac{\Pi }{S} \\{} & {} \quad +\theta ({\bar{x}}-x)(e^x-1)\\{} & {} \quad +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) \\{} & {} \quad e^x+\epsilon + \rho +\eta +3\mu \\\le & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } \right. \\{} & {} \left. +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{\Pi }{\kappa } -2\\{} & {} \quad \le -1. \end{aligned}$$

Case 7 If \((x,N,E,I,R)\in {\mathbb {D}}_7^c\), from (3.1), (3.2) and (3.9), we have

$$\begin{aligned} f_3(x,N,E,I,R)\le & {} -\frac{M_0}{2}(R_0^s-1)(\delta +\mu )\\{} & {} +M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} \right. \\{} & {} \left. +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \\{} & {} E -\frac{ \rho I}{\frac{\Pi }{\mu }-S-E-I-R} \\{} & {} +\theta ({\bar{x}}-x)(e^x-1)\\{} & {} +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) \\{} & {} e^x+\epsilon + \rho +\eta +3\mu \\\le & {} M_0\left( \frac{\sigma _1(a_1+a_2) }{\kappa } \right. \\{} & {} \left. +\frac{\sigma _2\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) \frac{\Pi }{\mu } -\frac{\rho }{\kappa } -2\\\le & {} -1. \end{aligned}$$

In summary, we have \(f_3(x,N,E,I,R)\le -1\) for all \((x,N,E,I,R)\in {\mathbb {D}}^c\).

Step 3. (The existence and ergodicity of the solution of system (1.4)): The function \(U_4(x,S,E,I,R)\) tends to \(\infty \) as x, S, E, I, R or \(S+E+I+R\) approaches the boundary of \({\mathbb {R}}\times {\mathbb {R}}_+^4\) or as \(||(x,S,E,I,R)||\rightarrow \infty \). Thus, there exists a point \(({\tilde{x}},{\tilde{S}},{\tilde{E}},{\tilde{I}},{\tilde{R}})\) in the interior of \(\Gamma \) at which \(U_4\) attains its minimum value.

Therefore \(U=U_4-U_4({\tilde{x}},{\tilde{S}},{\tilde{E}},{\tilde{I}},{\tilde{R}})\) is a non-negative \(C^2\)-function. Applying Itô’s formula to U, we have

$$\begin{aligned} {\mathcal {L}}U\le f_3(x,N,E,I,R)+M_0f_1(x)+M_0f_2(x). \end{aligned}$$

For any initial value \((x(0),S(0),E(0),I(0),R(0)) \in \Gamma \) and any interval [0, t], integrating \({\mathcal {L}}U\) over [0, t] and taking expectations, we get

$$\begin{aligned} 0\le & {} \frac{{\mathbb {E}}U ( x(t),S(t),E(t),I(t),R(t))}{t}\nonumber \\= & {} \frac{{\mathbb {E}}U(x(0),S(0),E(0),I(0),R(0))}{t}\nonumber \\{} & {} \quad +\frac{1}{t}\int _0^t {\mathbb {E}}({\mathcal {L}}U(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\textrm{d}\tau \nonumber \\\le & {} \frac{{\mathbb {E}}U (x(0),S(0),E(0),I(0),R(0))}{t}\nonumber \\{} & {} \quad +\frac{1}{t}\int _0^t {\mathbb {E}}(f_3(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\textrm{d}\tau \nonumber \\{} & {} \quad +M_0 \root 3 \of {a_1 a_3\Pi \sigma _1\tau \delta }\left( \bar{\beta }^{\frac{1}{3}} e^{\frac{\xi ^2}{36\theta }}-{\mathbb {E}}\left( \frac{1}{t}\int _0^t e^{\frac{x(\tau )}{3}}\textrm{d}\tau \right) \right) \nonumber \\{} & {} \quad +M_0\sqrt{a_2\Pi \sigma _2 }\left( \bar{\beta }^{\frac{1}{2}} e^{\frac{\xi ^2}{16\theta }}-{\mathbb {E}}\left( \frac{1}{t}\int _0^t e^{\frac{x(\tau )}{2}}\textrm{d}\tau \right) \right) \nonumber \\{} & {} \quad +\frac{M_0\Pi (a_1+a_2) (\sigma _1a_4 +\sigma _2 a_5)}{\mu }\nonumber \\{} & {} \quad \left( {\mathbb {E}}\left( \frac{1}{t}\int _0^t e^{ 2x(\tau ) }\textrm{d}\tau \right) -\bar{\beta }^2 e^{\frac{\xi ^2}{\theta }} \right) . \end{aligned}$$
(3.10)

According to Lemma 2.2, we have

$$\begin{aligned}{} & {} \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t e^{\frac{x(\tau )}{3} } \textrm{d}\tau = \bar{\beta }^{\frac{1}{3}} e^{\frac{\xi ^2}{36\theta }},~\nonumber \\{} & {} \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t e^{\frac{x(\tau )}{2} } \textrm{d}\tau = \bar{\beta }^{\frac{1}{2}} e^{\frac{\xi ^2}{16\theta }},~\nonumber \\{} & {} \lim _{t\rightarrow \infty } \frac{1}{t}\int _0^t e^{2x(\tau ) } \textrm{d}\tau =\bar{\beta }^2 e^{\frac{\xi ^2}{\theta }}. \end{aligned}$$
(3.11)

Letting \(t\rightarrow \infty \) and substituting (3.11) into (3.10), we obtain

$$\begin{aligned} 0\le & {} \liminf _{t\rightarrow \infty } \frac{{\mathbb {E}}U (x(0),S(0),E(0),I(0),R(0))}{t}\\{} & {} \quad +\liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t {\mathbb {E}}(f_3(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\textrm{d}\tau \\= & {} \liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t {\mathbb {E}}(f_3(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\\{} & {} \quad \textrm{d}\tau , \ \ a.s.. \end{aligned}$$

On the other hand, note that

$$\begin{aligned} f_3(x,S,E,I,R) \le M_1,\ \ \forall (x,S,E,I,R)\in \Gamma , \end{aligned}$$

where

$$\begin{aligned} M_1= & {} \sup _{(x,S,E,I,R)\in \Gamma }\left\{ M_0\left( \frac{\sigma _2(a_1+a_2)}{4a_5} +\frac{\sigma _1\tau \delta (a_1+a_2)}{4a_4(\epsilon +\rho +\mu )}\right) E\right. \\{} & {} \left. -\frac{\Pi }{S} -\frac{\tau \delta E}{I}-\frac{(1-\tau )\delta E}{R}\right. \\{} & {} \left. -\frac{ \rho I}{\frac{\Pi }{\mu }-S-E-I-R} +\theta ({\bar{x}}-x)(e^x-1)\right. \\{} & {} \left. +\left( \frac{\Pi (\sigma _1+\sigma _2)}{\mu }+\frac{\xi ^2 }{2} \right) e^x+\epsilon + \rho +\eta +3\mu \right\} \\{} & {} <+\infty . \end{aligned}$$

Hence we have

$$\begin{aligned}{} & {} \liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t {\mathbb {E}}(f_3(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\textrm{d}\tau \\{} & {} \quad =\liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t {\mathbb {E}}(f_3(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\\{} & {} \quad \textbf{1}_{\left\{ (x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau ))\in {\mathbb {D}}\right\} }\textrm{d}\tau \\{} & {} \qquad +\liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t {\mathbb {E}}(f_3(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )))\\{} & {} \quad \textbf{1}_{\left\{ (x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau ))\in {\mathbb {D}}^c\right\} }\textrm{d}\tau \\{} & {} \quad \le M_1\liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t \\{} & {} \quad \textbf{1}_{\left\{ (x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau ))\in {\mathbb {D}}\right\} }\textrm{d}\tau \\{} & {} \qquad -\liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t\textbf{1}_{\left\{ (x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau ))\in {\mathbb {D}}^c\right\} }\textrm{d}\tau \\{} & {} \quad = (M_1+1)\liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t \textbf{1}_{\left\{ (x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau ))\in {\mathbb {D}}\right\} }\\{} & {} \quad \textrm{d}\tau -1. \end{aligned}$$

This implies that

$$\begin{aligned}{} & {} \liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t \textbf{1}_{\left\{ (x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau ))\in {\mathbb {D}}\right\} }\nonumber \\{} & {} \quad \textrm{d}\tau \ge \frac{1}{M_1+1}>0, \ \ a.s.. \end{aligned}$$
(3.12)

Denote by \({\mathbb {P}}(t,(x(t),S(t),E(t),I(t),R(t )), \Omega )\) the transition probability that (x(t), S(t), E(t), I(t), R(t)) belongs to the set \(\Omega \). Making use of Fatou’s lemma [29], we have

$$\begin{aligned}{} & {} \liminf _{t\rightarrow \infty }\frac{1}{t}\int _0^t {\mathbb {P}}(\tau ,(x(\tau ),S(\tau ),E(\tau ),I(\tau ),R(\tau )), {\mathbb {D}})\nonumber \\{} & {} \quad \textrm{d}\tau \ge \frac{1}{M_1+1}>0, \ \ a.s.. \end{aligned}$$
(3.13)

According to Lemma 2.1, system (1.4) has at least one stationary distribution \(\eta (\cdot )\) on \(\Gamma \), which is ergodic and has the Feller property. This completes the proof. \(\square \)
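The ergodic limits of Lemma 2.2 used in (3.11) can also be checked by direct simulation of the OU process \(\textrm{d}x=\theta ({\bar{x}}-x)\textrm{d}t+\xi \textrm{d}B(t)\), whose stationary law is \(N({\bar{x}},\xi ^2/(2\theta ))\), so the time average of \(e^{cx}\) converges to \(e^{c{\bar{x}}+c^2\xi ^2/(4\theta )}\). The following Monte Carlo sketch uses illustrative parameter values, not values from the paper:

```python
import numpy as np

# Monte Carlo check of the ergodic averages in (3.11).
# Illustrative parameters (placeholders, not model-calibrated values).
theta, xi, beta_bar = 1.0, 0.5, 1.0
xbar = np.log(beta_bar)
dt, n_steps = 0.01, 200_000

rng = np.random.default_rng(0)
x = np.empty(n_steps)
x[0] = xbar                                   # start at the stationary mean
decay = np.exp(-theta * dt)                   # exact one-step decay factor
scale = xi * np.sqrt((1.0 - decay**2) / (2.0 * theta))  # exact noise scale
for k in range(n_steps - 1):
    x[k + 1] = xbar + (x[k] - xbar) * decay + scale * rng.standard_normal()

# Theoretical limits: beta_bar^c * exp(c^2 * xi^2 / (4 theta))
targets = {1/3: beta_bar**(1/3) * np.exp(xi**2 / (36 * theta)),
           1/2: beta_bar**(1/2) * np.exp(xi**2 / (16 * theta)),
           2.0: beta_bar**2 * np.exp(xi**2 / theta)}
avgs = {c: np.exp(c * x).mean() for c in targets}
for c in targets:
    print(c, avgs[c], targets[c])
```

The exact one-step update avoids discretization bias, so the time averages match the limits in (3.11) up to Monte Carlo error.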

4 Probability density function

To obtain more information about the dynamics and statistical properties of the stochastic model (1.4), in this section we analyze its probability density function. First, letting \(N (t)= S(t)+E(t)+I(t)+R(t)\), we obtain the following equivalent formulation of the stochastic model (1.4):

$$\begin{aligned} \left\{ \begin{aligned} \textrm{d}x(t)&=\theta \left( {\bar{x}}-x(t) \right) \textrm{d}t+\xi \textrm{d}B (t),\\ \textrm{d}N (t)&=(\Pi -\mu N(t)-\rho I(t))\textrm{d}t,\\ \textrm{d}E (t)&=(e^{x(t)}(\sigma _1 I(t)+\sigma _2 E(t))(N(t)-E(t)-I(t)-R(t))-(\delta +\mu )E(t))\textrm{d}t,\\ \textrm{d}I (t)&=(\tau \delta E(t)-(\epsilon +\rho +\mu )I(t))\textrm{d}t,\\ \textrm{d}R(t)&=((1-\tau )\delta E(t)+\epsilon I(t)-(\mu +\eta )R(t))\textrm{d}t. \end{aligned} \right. \end{aligned}$$
(4.1)

Similar to the analysis in “Appendix”, if

$$\begin{aligned} R_0^p=\frac{ \bar{\beta }\Pi \left( \sigma _1\tau \delta +\sigma _2 (\epsilon +\rho +\mu ) \right) }{\mu (\delta +\mu )(\epsilon +\rho +\mu )}>1, \end{aligned}$$
(4.2)

we can obtain that the stochastic model (4.1) has a unique quasi-infected equilibrium \(\Theta ^\star =(\ln \bar{\beta }, E^\star ,I^\star ,R^\star ,N^\star )\), where

$$\begin{aligned}{} & {} \!\!\!(N^\star ,E^\star ,I^\star ,R^\star )\\{} & {} \quad =\left( \frac{\Pi }{\mu } -\frac{\rho I^\star }{\mu },\frac{\epsilon +\rho +\mu }{\tau \delta }I^\star ,I^\star ,\right. \\{} & {} \qquad \left. \quad \frac{ (1-\tau )( \rho +\mu ) +\epsilon }{(\mu +\eta )\tau }I^\star \right) ,\\{} & {} I^\star =\frac{\frac{\Pi }{\mu }\left( 1-\frac{1}{R_0^p}\right) }{1+\frac{\rho }{\mu }+\dfrac{\epsilon +\rho +\mu }{\tau \delta } +\frac{(1-\tau )(\rho +\mu )+\epsilon }{\tau (\mu +\eta )}}>0. \end{aligned}$$

Then let \(Y=(y_1,y_2,y_3,y_4,y_5)^T=(x -\bar{x},E-E^\star ,I-I^\star ,R-R^\star , N-N^\star )^T\). Linearizing model (4.1) around \(\Theta ^\star \), we obtain the corresponding linearized system:

$$\begin{aligned} \left\{ \begin{aligned} \textrm{d}{y_1}&=-\theta y_1\textrm{d}t+\xi \textrm{d}B (t),\\ \textrm{d}{y_2}&{=}(p_{21}y_1{-}p_{22} y_2{-}p_{23}y_3{-}p_{24}y_4{+}p_{24} y_5)\textrm{d}t,\\ \textrm{d}{y_3}&=( p_{32} y_2-p_{33} y_3)\textrm{d}t,\\ \textrm{d}{y_4}&=( p_{42} y_2+ p_{43} y_3- p_{44} y_4)\textrm{d}t,\\ \textrm{d}{y_5}&=(-p_{53} y_3-p_{55} y_5)\textrm{d}t, \end{aligned} \right. \nonumber \\ \end{aligned}$$
(4.3)

where

$$\begin{aligned} \begin{aligned}&p_{21}= \bar{\beta }(\sigma _1 I^\star +\sigma _2 E^\star )(N^\star -E^\star -I^\star -R^\star ),\\&p_{22}= \bar{\beta }(\sigma _1 I^\star +\sigma _2 E^\star )-\bar{\beta }\sigma _2(N^\star -E^\star -I^\star -R^\star )+\delta +\mu ,\\&p_{23}= \bar{\beta }(\sigma _1 I^\star +\sigma _2 E^\star )-\bar{\beta }\sigma _1(N^\star -E^\star -I^\star -R^\star ),\\&p_{24}= \bar{\beta }(\sigma _1 I^\star +\sigma _2 E^\star ), ~ p_{32}=\tau \delta ,~ p_{33}=\epsilon +\rho +\mu ,\\&p_{42}=(1- \tau )\delta ,~ p_{43}=\epsilon ,~ p_{44}=\mu +\eta , \\&p_{53}=\rho ,~ p_{55}=\mu . \end{aligned} \end{aligned}$$

The model (4.3) can be equivalently written as

$$\begin{aligned} \textrm{d}Y(t)=P Y(t)\textrm{d}t+\Xi \textrm{d}B(t), \end{aligned}$$

where

$$\begin{aligned} P=\begin{pmatrix} -\theta &{}0 &{}0 &{}0 &{}0 \\ p_{21} &{}-p_{22} &{}-p_{23} &{}-p_{24} &{}p_{24} \\ 0 &{}p_{32} &{}-p_{33} &{}0 &{}0 \\ 0 &{}p_{42} &{} p_{43} &{}-p_{44}&{} 0 \\ 0 &{}0 &{}-p_{53} &{}0&{}-p_{55} \\ \end{pmatrix},~ \Xi =\begin{pmatrix} \xi &{}0 &{}0 &{}0 &{}0\\ 0 &{}0 &{}0 &{}0 &{}0\\ 0 &{}0&{}0 &{}0&{}0 \\ 0 &{}0&{}0 &{}0&{}0 \\ 0 &{}0&{}0 &{}0&{}0 \end{pmatrix}. \end{aligned}$$

Theorem 4.1

If \(R_0^p> 1\) and \(\tau \epsilon +(1-\tau )(\epsilon +\rho -\eta )\ne 0\), then the stationary solution \((x(t), E(t), I(t), R(t), N(t))\) to system (4.1) around \(\Theta ^\star =(\ln \bar{\beta }, E^\star ,I^\star ,R^\star ,N^\star )\) follows the normal distribution \({\mathbb {N}}_5(\Theta ^\star ,\Sigma )\), where

$$\begin{aligned} \Sigma= & {} (\xi \tau \delta \rho \eta \bar{\beta }(\sigma _1 I^\star +\sigma _2 E^\star )(N^\star -E^\star -I^\star -R^\star ) )^2 \\{} & {} \quad (T_3 T_2 T_1)^{-1}\Omega [(T_3 T_2 T_1)^{-1}]^T, \end{aligned}$$

and matrices \(T_1\), \(T_2\), \(T_3\) and \(\Omega \) are defined in the following proof.

Proof

The local probability density function has been widely studied in recent years. According to the theoretical analysis in [33,34,35], the covariance matrix \(\Sigma \) is determined by

$$\begin{aligned} \Xi ^2+P \Sigma +\Sigma P^T=0. \end{aligned}$$
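In practice, this matrix equation can be solved with a standard continuous-time Lyapunov solver. The sketch below uses illustrative numerical values for \(\theta \), \(\xi \) and the \(p_{ij}\); they are placeholders, not quantities derived from the model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Solve Xi^2 + P*Sigma + Sigma*P^T = 0 for the covariance matrix Sigma.
# All numbers below are illustrative placeholders.
theta, xi = 0.5, 0.2
p21, p22, p23, p24 = 0.30, 0.80, 0.20, 0.10
p32, p33 = 0.25, 0.60
p42, p43, p44 = 0.15, 0.10, 0.50
p53, p55 = 0.05, 0.10

P = np.array([
    [-theta, 0.0,  0.0,  0.0,  0.0],
    [ p21,  -p22, -p23, -p24,  p24],
    [ 0.0,   p32, -p33,  0.0,  0.0],
    [ 0.0,   p42,  p43, -p44,  0.0],
    [ 0.0,   0.0, -p53,  0.0, -p55],
])
Xi = np.diag([xi, 0.0, 0.0, 0.0, 0.0])

# solve_continuous_lyapunov solves P X + X P^T = Q, so take Q = -Xi^2.
Sigma = solve_continuous_lyapunov(P, -Xi @ Xi)
print(np.max(np.abs(Xi @ Xi + P @ Sigma + Sigma @ P.T)))  # residual
```

The same call applies verbatim once the \(p_{ij}\) are computed from the model parameters.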

Let

$$\begin{aligned} T_1=\begin{pmatrix} 1 &{}0 &{}0 &{}0 &{}0\\ 0 &{}1 &{}0 &{}0 &{}0\\ 0 &{}0 &{}1 &{}0 &{}0\\ 0 &{}0 &{}-\frac{p_{42}}{p_{32}} &{}1 &{}0\\ 0 &{}0 &{}0 &{}0 &{}1 \end{pmatrix} \end{aligned}$$

such that

$$\begin{aligned} P_1= & {} T_1 P T_1^{-1}\\= & {} \begin{pmatrix} -\theta &{}0 &{}0 &{}0 &{}0 \\ p_{21} &{}-p_{22} &{} -p_{23}-\frac{p_{24}p_{42} }{p_{32}} &{} -p_{24} &{}p_{24} \\ 0 &{}p_{32} &{}-p_{33} &{}0 &{}0 \\ 0 &{}0 &{}\frac{p_{32}p_{43}+p_{33}p_{42}-p_{42}p_{44}}{p_{32}} &{}-p_{44} &{}0 \\ 0 &{}0 &{}-p_{53} &{}0 &{}-p_{55} \end{pmatrix}. \end{aligned}$$

Note that \(p_{32}p_{43}+p_{33}p_{42}-p_{42}p_{44}=\delta (\tau \epsilon +(1-\tau )(\epsilon +\rho -\eta ))\ne 0\). Then let

$$\begin{aligned} T_2=\begin{pmatrix} 1 &{}0 &{}0 &{}0 &{}0\\ 0 &{}1 &{}0 &{}0 &{}0\\ 0 &{}0 &{}1 &{}0 &{}0\\ 0 &{}0 &{}0 &{}1 &{}0\\ 0 &{}0 &{}0 &{}\frac{p_{32}p_{53}}{p_{32}p_{43}+p_{33}p_{42}-p_{42}p_{44}} &{}1 \end{pmatrix} \end{aligned}$$

such that

$$\begin{aligned} P_2=T_2 P_1 T_2^{-1}=\begin{pmatrix} -\theta &{}0 &{}0 &{}0 &{}0 \\ p_{21} &{}-p_{22} &{}-p_{23}-\frac{p_{24}p_{42} }{p_{32}} &{}-p_{24}\left( 1+\frac{p_{32} p_{53}}{p_{32}p_{43}+p_{33}p_{42}-p_{42}p_{44}}\right) &{}p_{24} \\ 0 &{}p_{32} &{}-p_{33} &{}0 &{}0 \\ 0 &{}0 &{}\frac{p_{32}p_{43}+p_{33}p_{42}-p_{42}p_{44}}{p_{32}} &{}-p_{44} &{}0 \\ 0 &{}0 &{}0 &{}\frac{p_{32}p_{53}(p_{44}-p_{55})}{p_{32}p_{43}+p_{33}p_{42}-p_{42}p_{44}} &{}-p_{55} \end{pmatrix}. \end{aligned}$$

Here \(p_{44}-p_{55}=\eta \ne 0\). Then we denote \(Q=(0,0,0,0,1)\) and \(T_3=(Q P_2^4,Q P_2^3,Q P_2^2,Q P_2,Q)^T\) such that

$$\begin{aligned} P_3=T_3 P_2 T_3^{-1}= \begin{pmatrix} -p_1 &{}-p_2 &{}-p_3 &{}-p_4 &{}-p_5 \\ 1 &{}0 &{}0 &{}0 &{} 0 \\ 0 &{}1 &{}0 &{} 0 &{} 0 \\ 0 &{}0 &{}1 &{}0 &{}0 \\ 0 &{}0 &{}0 &{}1 &{}0 \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} p_1&=\theta +\varrho _1, ~ p_2=\varrho _1\theta +\varrho _2,~ p_3=\varrho _2\theta +\varrho _3,~\\ p_4&=\varrho _3\theta +\varrho _4,~ p_5=\varrho _4\theta ,\\ \varrho _1&= p_{22 }+ p_{33 }+ p_{44 }+ p_{55} >0,\\ \varrho _2&= p_{22}p_{33 }+ p_{23}p_{32 }+ p_{22}p_{44 }+ p_{24}p_{42 }\\&\quad + p_{22}p_{55 }+ p_{33}p_{44 }+ p_{33}p_{55 }+ p_{44}p_{55},\\ \varrho _3&= p_{22}p_{33}p_{44 }+ p_{23}p_{32}p_{44 }+ p_{24}p_{32}p_{43 }\\&\quad + p_{24}p_{33}p_{42 }+ p_{22}p_{33}p_{55 }+ p_{23}p_{32}p_{55 }\\&\quad + p_{24}p_{32}p_{53 }+ p_{22}p_{44}p_{55 }+ p_{24}p_{42}p_{55 }\\&\quad + p_{33}p_{44}p_{55},\\ \varrho _4&= p_{22}p_{33}p_{44}p_{55 }+ p_{23}p_{32}p_{44}p_{55 }+ p_{24}p_{32}p_{43}p_{55 }\\&\quad + p_{24}p_{33}p_{42}p_{55 }+ p_{24}p_{32}p_{44}p_{53}. \end{aligned} \end{aligned}$$

The characteristic polynomial of \(P_3\) is

$$\begin{aligned} |\lambda \textbf{I}-P_3|= (\lambda +\theta )\psi (\lambda ), \end{aligned}$$

where \(\psi (\lambda )=\lambda ^4+ \varrho _1\lambda ^3+\varrho _2\lambda ^2+\varrho _3\lambda +\varrho _4\). Next we shall prove that the matrix P satisfies the conditions in Lemma 2.3.

Based on the proof in “Appendix”, it is not difficult to obtain the following model,

$$\begin{aligned} \left\{ \begin{aligned} {\frac{\textrm{d}N}{\textrm{d}t}}&=\Pi -\mu N-\rho I,\\ {\frac{\textrm{d}E}{\textrm{d}t}}&=\bar{\beta }(\sigma _1 I+\sigma _2 E)(N-E-I-R){-}(\delta +\mu )E,\\ {\frac{\textrm{d}I}{\textrm{d}t}}&=\tau \delta E-(\epsilon +\rho +\mu )I,\\ {\frac{\textrm{d}R}{\textrm{d}t}}&=(1-\tau )\delta E+\epsilon I-(\mu +\eta )R. \end{aligned} \right. \end{aligned}$$
(4.4)

which has a unique locally asymptotically stable endemic equilibrium \((N^\star ,E^\star ,I^\star ,R^\star )\).
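As a numerical sanity check, the formulas for \((N^\star ,E^\star ,I^\star ,R^\star )\) can be substituted into the right-hand side of (4.4). The parameter values below are hypothetical, chosen only so that \(R_0^p>1\):

```python
import numpy as np

# Verify that (N*, E*, I*, R*) is an equilibrium of (4.4).
# Hypothetical parameters (placeholders, not Table 2 values).
Pi, mu, rho, delta, tau = 10.0, 0.1, 0.05, 0.2, 0.7
eps, eta = 0.1, 0.05
sigma1, sigma2, beta_bar = 0.02, 0.01, 1.0

R0p = beta_bar * Pi * (sigma1 * tau * delta + sigma2 * (eps + rho + mu)) \
      / (mu * (delta + mu) * (eps + rho + mu))
assert R0p > 1  # required for the endemic equilibrium to exist

denom = 1 + rho / mu + (eps + rho + mu) / (tau * delta) \
        + ((1 - tau) * (rho + mu) + eps) / (tau * (mu + eta))
I = (Pi / mu) * (1 - 1 / R0p) / denom
N = Pi / mu - rho * I / mu
E = (eps + rho + mu) * I / (tau * delta)
R = ((1 - tau) * (rho + mu) + eps) * I / ((mu + eta) * tau)
S = N - E - I - R

# Right-hand side of (4.4); zero up to floating-point round-off.
rhs = np.array([
    Pi - mu * N - rho * I,
    beta_bar * (sigma1 * I + sigma2 * E) * S - (delta + mu) * E,
    tau * delta * E - (eps + rho + mu) * I,
    (1 - tau) * delta * E + eps * I - (mu + eta) * R,
])
print(rhs)
```

All four components vanish at \((N^\star ,E^\star ,I^\star ,R^\star )\), confirming that it is an equilibrium.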

The Jacobian matrix of system (4.4) at \((N^\star ,E^\star ,I^\star ,R^\star )\) is:

$$\begin{aligned} J_{(N^\star ,E^\star ,I^\star ,R^\star )}=\begin{pmatrix} -p_{22} &{}-p_{23} &{}-p_{24} &{}p_{24} \\ p_{32} &{}-p_{33} &{}0 &{}0 \\ p_{42} &{} p_{43} &{}-p_{44}&{} 0 \\ 0 &{}-p_{53} &{}0&{}-p_{55} \\ \end{pmatrix} \end{aligned}$$

Then we obtain

$$\begin{aligned}{} & {} |\lambda \textbf{I} - J_{(N^\star ,E^\star ,I^\star ,R^\star )}|= \lambda ^4\\{} & {} \quad + \varrho _1\lambda ^3+\varrho _2\lambda ^2+\varrho _3\lambda +\varrho _4=0. \end{aligned}$$

Thus, since the endemic equilibrium of (4.4) is locally asymptotically stable, all four roots of \(\psi (\lambda )\) have negative real parts. This implies that all five roots of \((\lambda +\theta )\psi (\lambda )\) have negative real parts. According to the Routh–Hurwitz criterion, one has

$$\begin{aligned} \begin{aligned}&p_1,p_2,p_3,p_4,p_5>0,~ p_1p_2-p_3>0,~ \\&p_3p_4-p_2p_5>0, \\&p_3(p_1p_2-p_3)-p_1(p_1p_4-p_5)>0,\\&\Delta =(p_1p_2-p_3)(p_3p_4-p_2p_5)\\&\qquad -(p_1p_4-p_5)^2>0. \end{aligned} \end{aligned}$$
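These inequalities can be illustrated numerically: for any monic quintic whose roots all have negative real parts, the listed Routh–Hurwitz quantities are positive. A sketch with illustrative roots (not model-derived values):

```python
import numpy as np

# Coefficients of (lambda + 0.5)(lambda + 1)(lambda + 2)(lambda + 3)(lambda + 4),
# a stable quintic; np.poly maps roots to monic polynomial coefficients.
roots = [-0.5, -1.0, -2.0, -3.0, -4.0]
_, p1, p2, p3, p4, p5 = np.poly(roots)

checks = [
    p1 > 0, p2 > 0, p3 > 0, p4 > 0, p5 > 0,
    p1 * p2 - p3 > 0,
    p3 * p4 - p2 * p5 > 0,
    p3 * (p1 * p2 - p3) - p1 * (p1 * p4 - p5) > 0,
    (p1 * p2 - p3) * (p3 * p4 - p2 * p5) - (p1 * p4 - p5) ** 2 > 0,
]
print(all(checks))  # True
```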

Denote

$$\begin{aligned} \begin{aligned} \omega _{11}&=\frac{p_2(p_3p_4-p_2p_5)-p_4(p_1p_4-p_5)}{2\Delta }, \\ \omega _{22}&=\frac{ p_3p_4-p_2p_5 }{2\Delta }, ~\omega _{33}=\frac{ p_1p_4-p_5 }{2\Delta },\\ \omega _{44}&=\frac{p_1p_2-p_3 }{2\Delta },~\\ \omega _{55}&=\frac{p_3(p_1p_2-p_3)-p_1(p_1p_4-p_5)}{2\Delta }, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \Omega = \begin{pmatrix} \omega _{11} &{} 0&{}-\omega _{22}&{}0 &{}\omega _{33} \\ 0 &{}\omega _{22} &{}0 &{}-\omega _{33} &{}0\\ -\omega _{22} &{}0 &{}\omega _{33} &{}0 &{}-\omega _{44}\\ 0 &{}-\omega _{33} &{}0 &{}\omega _{44} &{}0\\ \omega _{33} &{}0 &{}-\omega _{44} &{}0 &{}\omega _{55} \end{pmatrix}. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned}{} & {} (T_3 T_2 T_1)\Xi ^2(T_3 T_2 T_1)^T+P_3[(T_3 T_2 T_1)\Sigma (T_3 T_2 T_1)^T]\\{} & {} \quad +[(T_3 T_2 T_1)\Sigma (T_3 T_2 T_1)^T]P_3^T=0. \end{aligned}$$

From Lemma 2.3, we obtain that \((T_3 T_2 T_1)\Sigma (T_3 T_2 T_1)^T=(\xi p_{21} p_{32} p_{53}(p_{44}-p_{55}))^2\Omega \) is a positive definite matrix. Hence

$$\begin{aligned}{} & {} \Sigma =(\xi \tau \delta \rho \eta \bar{\beta }(\sigma _1 I^\star +\sigma _2 E^\star )(N^\star -E^\star -I^\star -R^\star ) )^2\\{} & {} \quad (T_3 T_2 T_1)^{-1}\Omega [(T_3 T_2 T_1)^{-1}]^T \end{aligned}$$

is also positive definite. \(\square \)
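The checkerboard pattern of \(\Omega \) reflects a general fact about companion-form systems driven by noise in the first component only: the solution of the associated Lyapunov equation is symmetric, positive definite, and vanishes at every entry whose row and column indices have different parity. A numerical sketch with illustrative coefficients \(p_1,\dots ,p_5\) (the entries can be compared against the \(\omega _{ij}\) formulas above):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Companion matrix P3 for a stable quintic with illustrative coefficients,
# driven by unit noise in its first component.
p = np.array([10.5, 40.0, 67.5, 49.0, 12.0])   # p1..p5 of a stable quintic
P3 = np.zeros((5, 5))
P3[0, :] = -p              # first row: -p1 ... -p5
P3[1:, :-1] = np.eye(4)    # subdiagonal of ones

Q = np.zeros((5, 5))
Q[0, 0] = 1.0              # e1 e1^T (rank-one noise intensity)
X = solve_continuous_lyapunov(P3, -Q)   # solves P3 X + X P3^T = -Q
print(np.round(X, 6))
```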

5 Exponential extinction of the disease

Denote

$$\begin{aligned} R_0^e{} & {} =R_0^p\\{} & {} \quad {+}\frac{\bar{\beta }\left( 1{+}e^{\frac{\xi ^2}{\theta }}{-}2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}} \max \left\{ \dfrac{R_0^p(\epsilon {+}\rho {+}\mu )}{\bar{\beta }},\dfrac{ \sigma _2 \Pi }{ \mu }\right\} }{\min \left\{ \dfrac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\epsilon +\rho +\mu \right\} }, \end{aligned}$$

where \(R_0^p\) is defined in (4.2).

Theorem 5.1

If \(R_0^e <1\), then

$$\begin{aligned} \limsup _{t\rightarrow \infty }\frac{1}{t}\ln \left( R_0^p E(t){+}\frac{ \Pi \bar{\beta }\sigma _1}{\mu (\epsilon {+}\rho {+}\mu )} I(t)\right) {<}0,\ \ a.s., \end{aligned}$$

which implies that the exposed individuals E and infected individuals I in system (1.4) go extinct exponentially in the long run.

Proof

First we define a \(C^2\)-function

$$\begin{aligned} P=b_1 E+b_2 I, \end{aligned}$$

where \(b_1\) and \(b_2\) are positive constants to be determined later. Then applying Itô’s formula to \(\ln P\), we have

$$\begin{aligned} \begin{aligned} {\mathcal {L}}(\ln P)&\le \frac{1}{b_1 E+b_2 I}\left[ \frac{b_1 \Pi e^x \sigma _1}{\mu }I+\frac{b_1 \Pi e^x \sigma _2}{\mu }E\right. \\&\quad \left. -(b_1(\delta +\mu )-b_2 \tau \delta ) E-b_2(\epsilon +\rho +\mu )I \right] \\&=\frac{1}{b_1 E+b_2 I}\left[ \frac{b_1 \Pi \bar{\beta }\sigma _1}{\mu }I+\frac{b_1 \Pi \bar{\beta }\sigma _2}{\mu }E \right. \\&\quad \left. -(b_1(\delta +\mu )-b_2 \tau \delta ) E-b_2(\epsilon +\rho +\mu )I \right] \\&\quad +\frac{1}{b_1 E+b_2 I}\left[ \frac{b_1 \Pi \sigma _1 (e^x-\bar{\beta }) }{\mu }I\right. \\&\quad \left. +\frac{b_1 \Pi \sigma _2 (e^x-\bar{\beta }) }{\mu }E\right] \\&\le \frac{1}{b_1 E+b_2 I}\left[ \frac{b_1 \Pi \bar{\beta }\sigma _1}{\mu }I+\frac{b_1 \Pi \bar{\beta }\sigma _2}{\mu }E\right. \\&\quad \left. -(b_1(\delta +\mu )-b_2 \tau \delta ) E-b_2(\epsilon +\rho +\mu )I \right] \\&\quad +\max \left\{ \frac{b_1\sigma _1 \Pi }{b_2\mu },\frac{ \sigma _2 \Pi }{ \mu }\right\} |e^x-\bar{\beta }|. \end{aligned} \end{aligned}$$

Choose

$$\begin{aligned}{} & {} b_1=R_0^p=\frac{ \Pi \bar{\beta }\left( \sigma _1\tau \delta +\sigma _2 (\epsilon +\rho +\mu ) \right) }{\mu (\delta +\mu )(\epsilon +\rho +\mu )},\\{} & {} b_2=\frac{ \Pi \bar{\beta }\sigma _1}{\mu (\epsilon +\rho +\mu )}, \end{aligned}$$

such that

$$\begin{aligned} b_1(\delta +\mu )-b_2 \tau \delta = \frac{ \Pi \bar{\beta }\sigma _2}{\mu },~ b_2(\epsilon +\rho +\mu )=\frac{ \Pi \bar{\beta }\sigma _1}{\mu }. \end{aligned}$$
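Indeed, substituting \(b_1\) and \(b_2\),

$$\begin{aligned} b_1(\delta +\mu )-b_2 \tau \delta =\frac{ \Pi \bar{\beta }\left( \sigma _1\tau \delta +\sigma _2 (\epsilon +\rho +\mu ) \right) }{\mu (\epsilon +\rho +\mu )}-\frac{ \Pi \bar{\beta }\sigma _1\tau \delta }{\mu (\epsilon +\rho +\mu )}=\frac{ \Pi \bar{\beta }\sigma _2}{\mu }, \end{aligned}$$

and \(b_2(\epsilon +\rho +\mu )=\frac{ \Pi \bar{\beta }\sigma _1}{\mu }\) is immediate.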

Then we have

$$\begin{aligned} {\mathcal {L}}(\ln P)\le & {} \frac{1}{R_0^p E+b_2 I}\left[ \frac{ \Pi \bar{\beta }\sigma _2}{\mu }\right. \nonumber \\{} & {} \left. (R_0^p-1)E +\frac{ \Pi \bar{\beta }\sigma _1}{\mu }(R_0^p-1)I \right] \nonumber \\{} & {} +\max \left\{ \frac{R_0^p\sigma _1 \Pi }{b_2\mu },\frac{ \sigma _2 \Pi }{ \mu }\right\} |e^x-\bar{\beta }|\nonumber \\{} & {} =\min \left\{ \frac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\frac{ \Pi \bar{\beta }\sigma _1}{b_2\mu }\right\} (R_0^p-1){\mathbb {I}}_{\{R_0^p< 1\}}\nonumber \\{} & {} +\max \left\{ \frac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\frac{ \Pi \bar{\beta }\sigma _1}{b_2\mu }\right\} (R_0^p-1){\mathbb {I}}_{\{R_0^p\ge 1\}}\nonumber \\{} & {} \quad +\max \left\{ \frac{R_0^p\sigma _1 \Pi }{b_2\mu },\frac{ \sigma _2 \Pi }{ \mu }\right\} |e^x-\bar{\beta }|.\nonumber \\ \end{aligned}$$
(5.1)

Integrating (5.1) from 0 to t and dividing both sides by t, and noting that \(R_0^e<1\) implies \(R_0^p< 1\), one gets

$$\begin{aligned}{} & {} \frac{\ln P(t)-\ln P(0)}{t}\le \min \left\{ \frac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\frac{ \Pi \bar{\beta }\sigma _1}{b_2\mu }\right\} (R_0^p-1)\nonumber \\{} & {} \quad +\max \left\{ \frac{R_0^p\sigma _1 \Pi }{b_2\mu },\frac{ \sigma _2 \Pi }{ \mu }\right\} \times \left( \frac{1}{t}\int _0^t\left| e^{x(\tau )}-\bar{\beta }\right| \textrm{d}\tau \right) .\nonumber \\ \end{aligned}$$
(5.2)

From Lemma 2.2, we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{1}{t}\int _0^t \left| e^{x(\tau )}-\bar{\beta }\right| \textrm{d}\tau \le e^{\bar{x}}\left( 1+e^{\frac{\xi ^2}{\theta }}-2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}},\nonumber \\ \end{aligned}$$
(5.3)
Table 2: Values of the parameters in (6.1).

Taking the limit superior as \(t\rightarrow \infty \) on both sides of (5.2) and combining with (5.3), we have

$$\begin{aligned}{} & {} \limsup _{t\rightarrow \infty }\frac{\ln P(t)}{t}\le - \min \left\{ \frac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\frac{ \Pi \bar{\beta }\sigma _1}{b_2\mu }\right\} (1-R_0^p) \\{} & {} \quad +\max \left\{ \frac{R_0^p(\epsilon +\rho {+}\mu )}{\bar{\beta }},\frac{ \sigma _2 \Pi }{ \mu }\right\} e^{\bar{x}} \left( 1+e^{\frac{\xi ^2}{\theta }}-2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}}\\{} & {} ={-}\min \left\{ \frac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\epsilon {+}\rho {+}\mu \right\} (1{-}R_0^e). \end{aligned}$$

\(\square \)

6 Numerical simulations

In this section, we perform a comprehensive numerical evaluation of the stochastic COVID-19 model. Our main objective is to investigate the impact of the OU process on disease transmission; to this end, we employ Milstein’s higher-order method [36]. The parameter values in Table 2 are used for the numerical simulations, ensuring that our findings are consistent with the theoretical framework established in the literature. These numerical techniques yield valuable insights into the transmission dynamics of COVID-19, which can guide public health interventions to mitigate its spread. First, we derive the following discretization equation:

$$\begin{aligned} \left\{ \begin{aligned} x_{i+1}=&x_{i}+\theta (\ln \bar{\beta }- x_{i}) \Delta t+\xi \chi _{i}\sqrt{\Delta t},\\ S_{i+1}=&S_{i}+\left( \Pi +\eta R_{i}-e^{x_i}(\sigma _1 I_{i}+\sigma _2 E_{i})S_{i}-\mu S_{i}\right) \Delta t,\\ E_{i+1}=&E_{i}+\left( e^{x_i}(\sigma _1 I_{i}+\sigma _2 E_{i})S_{i}-(\delta +\mu )E_{i}\right) \Delta t,\\ I_{i+1}=&I_{i}+\left( \tau \delta E_{i}-(\epsilon +\rho +\mu )I_{i}\right) \Delta t,\\ R_{i+1}=&R_{i}+\left( (1-\tau )\delta E_{i}+\epsilon I_{i}-(\mu +\eta )R_{i}\right) \Delta t,\\ \end{aligned} \right. \nonumber \\ \end{aligned}$$
(6.1)

where \(\left( x_i, S_i, E_i, I_i, R_i\right) \) denotes the value of the i-th iteration of the discretization equation; the time increment is denoted by \(\Delta t > 0\); \(\chi _i\) \((i=1,2,\dots ,n)\) are independent random variables that follow the standard normal distribution, and the other parameters are shown in Table 2. The initial value is chosen as \((x(0),S(0),E(0),I(0),R(0))=(\ln \bar{\beta },200,20,10,10)\).
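As a minimal sketch, the iteration above can be implemented as follows. Since the OU noise here is additive (constant diffusion coefficient \(\xi \)), Milstein’s correction term vanishes and the x-update coincides with the Euler–Maruyama step. The parameter names in the dictionary are our own labels, and any numerical values used with this sketch are placeholders, not the entries of Table 2.

```python
import numpy as np

def simulate(params, x0, S0, E0, I0, R0, dt=0.01, n_steps=10_000, seed=0):
    """Iterate scheme (6.1): a mean-reverting OU process x(t) for the
    log transmission rate, driving the SEIR compartments."""
    theta, xi, beta_bar = params["theta"], params["xi"], params["beta_bar"]
    Pi, mu, eta = params["Pi"], params["mu"], params["eta"]
    sigma1, sigma2 = params["sigma1"], params["sigma2"]
    delta, tau = params["delta"], params["tau"]
    eps, rho = params["epsilon"], params["rho"]
    rng = np.random.default_rng(seed)
    x, S, E, I, R = x0, S0, E0, I0, R0
    path = [(x, S, E, I, R)]
    for _ in range(n_steps):
        chi = rng.standard_normal()  # chi_i ~ N(0, 1)
        force = np.exp(x) * (sigma1 * I + sigma2 * E) * S  # force of infection
        # all right-hand sides use the i-th values, as in (6.1)
        x, S, E, I, R = (
            x + theta * (np.log(beta_bar) - x) * dt + xi * chi * np.sqrt(dt),
            S + (Pi + eta * R - force - mu * S) * dt,
            E + (force - (delta + mu) * E) * dt,
            I + (tau * delta * E - (eps + rho + mu) * I) * dt,
            R + ((1 - tau) * delta * E + eps * I - (mu + eta) * R) * dt,
        )
        path.append((x, S, E, I, R))
    return np.array(path)
```

With \(\xi =0\) the x-component has zero drift at \(x(0)=\ln \bar{\beta }\), so the scheme reproduces the deterministic model, which gives a quick sanity check of the implementation.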

Example 6.1

(Stationary distribution) To analyze the stationary distribution of model (1.4), we select the parameter values as presented in Table 2. By calculation, we determined that

$$\begin{aligned} R_0^s{} & {} =\frac{\bar{\beta }\Pi \left( \sigma _1\tau \delta e^{\frac{\xi ^2}{12\theta }}+\sigma _2e^{\frac{\xi ^2}{8\theta }}(\epsilon +\rho +\mu ) \right) }{\mu (\delta +\mu )(\epsilon +\rho +\mu )}\\{} & {} =2.8218>1. \end{aligned}$$

This indicates the potential for sustained disease transmission. Based on Theorem 3.1, it can be concluded that system (1.4) possesses at least one stationary distribution.
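For reference, the thresholds \(R_0^s\) and \(R_0^p\) can be evaluated directly from their formulas. The parameter values used below are illustrative placeholders (Table 2 is not reproduced here); the test merely checks the structural property that the two thresholds coincide when the noise intensity \(\xi \) vanishes.

```python
import math

def R0_s(beta_bar, Pi, sigma1, sigma2, tau, delta, epsilon, rho, mu, xi, theta):
    """Stationary-distribution threshold R_0^s from Theorem 3.1."""
    num = beta_bar * Pi * (
        sigma1 * tau * delta * math.exp(xi**2 / (12 * theta))
        + sigma2 * math.exp(xi**2 / (8 * theta)) * (epsilon + rho + mu)
    )
    return num / (mu * (delta + mu) * (epsilon + rho + mu))

def R0_p(beta_bar, Pi, sigma1, sigma2, tau, delta, epsilon, rho, mu):
    """R_0^p: the same expression with the noise corrections removed."""
    num = beta_bar * Pi * (sigma1 * tau * delta + sigma2 * (epsilon + rho + mu))
    return num / (mu * (delta + mu) * (epsilon + rho + mu))
```

Note that \(R_0^s\) is increasing in \(\xi \) and reduces to \(R_0^p\) at \(\xi =0\), consistent with the remark in the Conclusion that the stochastic and deterministic thresholds agree in the noise-free limit.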

In Fig. 1, we provide phase diagrams and frequency histograms for the variables S(t), E(t), I(t), and R(t) in accordance with the stochastic model. These diagrams visually represent the dynamics and distribution of these variables over time. Additionally, to facilitate comparison, we include simulations of the deterministic model (1.1) in Fig. 1. The parameter values for the deterministic model are identical to those specified in Table 2.

This side-by-side representation allows for a comprehensive comparison between the stochastic and deterministic models, providing insights into the similarities and differences in their behavior and outcomes. By examining both the phase diagrams and frequency histograms, we can gain a deeper understanding of the dynamics, variability, and stationary distribution associated with model (1.4).

Fig. 1

The left column displays the numbers of susceptible individuals S, exposed individuals E, infected individuals I, and recovered individuals R in both the deterministic model (1.1) and stochastic model (1.4). On the right column, the corresponding frequency histograms of the stochastic solutions are presented

Example 6.2

(Probability density function) In this example, we aim to investigate the probability density function of the model near the quasi-endemic equilibrium. To achieve this, we utilize the same set of parameters as in Example 6.1, resulting in a basic reproduction number

$$\begin{aligned} R_0^p= & {} \frac{\bar{\beta }\Pi (\sigma _1\tau \delta +\sigma _2 (\epsilon +\rho +\mu ) )}{\mu (\delta +\mu )(\epsilon +\rho +\mu )}\\= & {} 2.8101>1,~ \tau \epsilon +(1-\tau )(\epsilon +\eta -\rho )\\= & {} 0.1949\ne 0. \end{aligned}$$
Fig. 2

The left column displays the quantities of exposed individuals E, infected individuals I, recovered individuals R and total individuals N in system (1.4). The corresponding frequency histograms and marginal density functions are shown in the middle column. The right column displays the fitted plots of the corresponding frequency histograms and marginal density functions

According to Theorem 4.1, the solution (x(t), E(t),  I(t), R(t), N(t)) follows a normal density function

$$\begin{aligned} \Psi (x,E,I,R,N)\sim {\mathbb {N}}_5(\Theta ^\star ,\Sigma ), \end{aligned}$$

where \(\Theta ^\star =(\ln \bar{\beta },E^\star ,I ^\star ,R^\star ,N^\star )= ( \ln 0.0143,\) 320.7531, 94.4525, 125.9259, 841.3887) and

$$\begin{aligned} \Sigma = \begin{pmatrix} 0.016667&{} 1.1245&{} 0.11813&{} 0.088699&{} -0.00014954\\ 1.1245&{} 244.61&{} 48.249&{} 44.567&{} -0.090838\\ 0.11813&{} 48.249&{} 14.208&{} 16.03&{} -0.055561\\ 0.088699&{} 44.567&{} 16.03&{} 20.122&{} -0.091503\\ -0.00014954&{} -0.090838&{} -0.055561&{} -0.091503&{} 0.001389\\ \end{pmatrix}. \end{aligned}$$

Since \(x\sim {\mathbb {N}} (\ln \bar{\beta }, \frac{\xi ^2}{2\theta })\), here we focus on the marginal density functions of E(t), I(t), R(t), and N(t):

$$\begin{aligned} \begin{aligned}&\frac{\partial \Psi (x,E,I,R,N)}{\partial E}\sim {\mathbb {N}} (320.7531, 244.61),\\&\frac{\partial \Psi (x,E,I,R,N)}{\partial I}\sim {\mathbb {N}} ( 94.4525, 14.208),\\&\frac{\partial \Psi (x,E,I,R,N)}{\partial R}\sim {\mathbb {N}} (125.9259, 20.122),\\&\frac{\partial \Psi (x,E,I,R,N)}{\partial N}\sim {\mathbb {N}} (841.3887, 0.001389). \end{aligned} \end{aligned}$$
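These marginal laws follow from a standard property of the multivariate normal distribution: the i-th marginal of \({\mathbb {N}}_5(\Theta ^\star ,\Sigma )\) is the one-dimensional normal \({\mathbb {N}}(\Theta ^\star _i,\Sigma _{ii})\). A short check that the quoted variances are indeed the diagonal entries of \(\Sigma \):

```python
import numpy as np

# Theta* and Sigma as reported above for Example 6.2.
mean = np.array([np.log(0.0143), 320.7531, 94.4525, 125.9259, 841.3887])
Sigma = np.array([
    [0.016667,    1.1245,    0.11813,   0.088699,  -0.00014954],
    [1.1245,      244.61,    48.249,    44.567,    -0.090838],
    [0.11813,     48.249,    14.208,    16.03,     -0.055561],
    [0.088699,    44.567,    16.03,     20.122,    -0.091503],
    [-0.00014954, -0.090838, -0.055561, -0.091503,  0.001389],
])
# marginal variances of E, I, R, N are the last four diagonal entries
marginal_vars = np.diag(Sigma)[1:]
```

The first diagonal entry, 0.016667, is the variance \(\frac{\xi ^2}{2\theta }\) of the stationary OU component x.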
Fig. 3

Computer simulations for the stochastic solutions of exposed individuals E and diseased individuals I under the parameter conditions in Example 6.3

In Fig. 2, the middle and right columns illustrate the corresponding marginal density functions and frequency histograms, respectively, based on a total of 9,000,000 iteration points. By examining the marginal density functions, we gain insights into the distribution patterns and variability of the variables of interest. The frequency histograms provide a visual representation of the occurrence frequencies of different values for the variables, complementing the analysis of the probability density function.

This comprehensive analysis of the probability density function and frequency histograms enables a detailed examination of the distributional characteristics of E(t), I(t), R(t), and N(t) in the vicinity of the quasi-endemic equilibrium. These findings contribute to a better understanding of the dynamics and variability of the model, providing valuable insights into the probabilistic nature of the system’s behavior.

Example 6.3

(Extinction of the disease) In this example, we investigate the potential extinction of COVID-19 within the stochastic model. We select \(\bar{\beta }=0.0044\) and \(\xi =0.2\), while the remaining parameters are specified in Table 2. Then we have

$$\begin{aligned} R_0^e= & {} R_0^p\\{} & {} \quad {+}\frac{\bar{\beta }\left( 1{+}e^{\frac{\xi ^2}{\theta }}{-}2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}} \max \left\{ \dfrac{R_0^p(\epsilon {+}\rho {+}\mu )}{\bar{\beta }},\dfrac{ \sigma _2 \Pi }{ \mu }\right\} }{\min \left\{ \dfrac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\epsilon +\rho +\mu \right\} }\\= & {} 0.9900<1, \end{aligned}$$

According to Theorem 5.1, this implies that the disease compartments E and I will eventually go extinct in the long term. To visually demonstrate this extinction phenomenon, we present Fig. 3. The figure provides graphical evidence supporting the theoretical conclusions by showcasing the dynamics of the compartments E and I over time. The results highlight the gradual decline and eventual elimination of these infectious compartments, confirming the extinction of COVID-19 within the stochastic model.

Through this analysis, we provide insights into the potential for disease extinction within the stochastic model, emphasizing that \(R_0^e<1\) is a sufficient condition for extinction. The findings highlight the importance of understanding the underlying dynamics and the impact of key parameters in predicting the long-term behavior of infectious diseases such as COVID-19.
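The extinction index \(R_0^e\) can be sketched directly from its definition as a function of \(R_0^p\) and the noise parameters. The argument values in the test below are illustrative placeholders, not those of Table 2; the assertions only check the structural facts that the noise correction is nonnegative and vanishes at \(\xi =0\), so that \(R_0^e\) reduces to \(R_0^p\) in the noise-free limit.

```python
import math

def R0_e(r0p, beta_bar, Pi, sigma2, epsilon, rho, mu, xi, theta):
    """Extinction index R_0^e from Theorem 5.1; r0p is the value of R_0^p."""
    # noise factor (1 + e^{xi^2/theta} - 2 e^{xi^2/(4 theta)})^{1/2},
    # nonnegative since e^a + 1 >= 2 e^{a/2} >= 2 e^{a/4} for a >= 0
    noise = math.sqrt(1 + math.exp(xi**2 / theta)
                      - 2 * math.exp(xi**2 / (4 * theta)))
    top = beta_bar * noise * max(r0p * (epsilon + rho + mu) / beta_bar,
                                 sigma2 * Pi / mu)
    bottom = min(Pi * beta_bar * sigma2 / (r0p * mu), epsilon + rho + mu)
    return r0p + top / bottom
```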

Example 6.4

(Variation trends of \(R_0^s\), \(R_0^p\) and \(R_0^e\)) Figure 4 shows the trends of \(R_0^s\), \(R_0^p\) and \(R_0^e\) for different \(\bar{\beta }\in [0.0045,0.006]\). According to Theorems 3.1, 4.1 and 5.1, with the other parameters as shown in Table 2, it can be found that

  • At least one ergodic stationary distribution is admitted when \(\bar{\beta }\in [0.0050677, 0.006)\);

  • The global solution follows a unique normal density function when \(\bar{\beta }\in [0.0050888, 0.006)\);

  • The disease will go to extinction when \(\bar{\beta }\in [0.0045, 0.0049147)\).

Fig. 4

The variation trends of \(R_0^s\), \(R_0^p\) and \(R_0^e\) with different \(\bar{\beta }\in [0.0045,0.006]\). The other parameters are shown in Table 2

Example 6.5

In this example, we estimate the parameters of the stochastic model (1.4) based on real data from COVID-19 infection cases in Ethiopia. Specifically, we consider monthly confirmed COVID-19 cases in Ethiopia from March 31, 2020, to July 31, 2021, as provided in Table 3 of our study. The data source is referenced as [37], and some model parameters are estimated based on the literature.

Using the parameter values from Table 3, we calculate that \(R_0^s=3.7315>1\), which indicates that the disease has the potential to sustain transmission in the population. A comparison between the solutions of the stochastic model and the actual case data is depicted in Fig. 5, allowing us to visually assess the performance and fitting of the model.

By incorporating real data and comparing the model outputs with the observed cases, we can evaluate the model’s ability to capture the underlying dynamics and trends of COVID-19 in Ethiopia. This analysis provides insights into the model’s accuracy and its potential utility for predicting and managing the spread of the disease.

Table 3 COVID-19 total confirmed cases in Ethiopia and parameters used in Example 6.5
Fig. 5

The fitted data to the reported cases using stochastic model (1.4) for Ethiopia from March 31, 2020 to July 31, 2021

7 Conclusion

This paper presents a novel stochastic model for COVID-19 epidemics that is driven by the Ornstein–Uhlenbeck (OU) process. The main objective is to explore the dynamic properties of COVID-19 transmission. Drawing on the theory of mean-reverting OU processes, it becomes apparent that simulations of environmental disturbances with mean-reverting properties are more realistic than other approaches. Additionally, the unique properties of OU processes enable us to build models that more closely reflect reality and enable us to explore the dynamic properties of epidemic models in greater depth.

Despite the potential benefits of using the OU process in this way, there has been little research conducted in this area, and the underlying theory is highly divergent. As such, our primary focus in this paper is on the methodology and theory of the dynamic behavior of models driven by OU processes. Specifically, we aim to analyze and demonstrate the dynamic properties of the stochastic model (1.4) through the following results:

  1. (i)

    For any initial value \((x(0),S(0),E(0),I(0),R(0))\in {\mathbb {R}}\times {\mathbb {R}}_+^4\), system (1.4) has a unique global solution \((x(t),S(t),E(t),I(t),R(t))\in {\mathbb {R}}\times {\mathbb {R}}_+^4\) a.s. Moreover,

    $$\begin{aligned} \Gamma= & {} \Bigg \{(x,S,E,I,R)\in {\mathbb {R}} \times {\mathbb {R}}_+^4: \\{} & {} \quad S+E+I+R<\frac{\Pi }{\mu }\Bigg \} \end{aligned}$$

    is positive invariant for model (1.4).

  2. (ii)

    If

    $$\begin{aligned} R_0^s=\frac{\bar{\beta }\Pi \left( \sigma _1\tau \delta e^{\frac{\xi ^2}{12\theta }}+\sigma _2e^{\frac{\xi ^2}{8\theta }}(\epsilon +\rho +\mu ) \right) }{\mu (\delta +\mu )(\epsilon +\rho +\mu )}>1, \end{aligned}$$

    then the stochastic system (1.4) admits at least one ergodic stationary distribution.

  3. (iii)

    If

    $$\begin{aligned} R_0^p=\frac{\bar{\beta }\Pi \left( \sigma _1\tau \delta +\sigma _2 (\epsilon +\rho +\mu ) \right) }{\mu (\delta +\mu )(\epsilon +\rho +\mu )}> 1 \end{aligned}$$

    and \(\tau \epsilon +(1-\tau )(\epsilon +\eta -\rho )\ne 0\), then the stationary solution (x(t), E(t), I(t), R(t), N(t)) of system (1.4) around \(\Theta ^\star =(\ln \bar{\beta }, E^\star ,I^\star ,R^\star ,N^\star )\) follows a five-dimensional normal distribution.

  4. (iv)

    If

    $$\begin{aligned} R_0^e= & {} R_0^p\\{} & {} {+}\frac{\bar{\beta }\left( 1{+}e^{\frac{\xi ^2}{\theta }}-2e^{\frac{\xi ^2}{4\theta }}\right) ^{\frac{1}{2}} \max \left\{ \dfrac{R_0^p(\epsilon {+}\rho {+}\mu )}{\bar{\beta }},\dfrac{ \sigma _2 \Pi }{ \mu }\right\} }{\min \left\{ \dfrac{ \Pi \bar{\beta }\sigma _2}{R_0^p\mu },\epsilon {+}\rho {+}\mu \right\} }\\{} & {} <1, \end{aligned}$$

    then the disease of system (1.4) will tend to zero exponentially with probability one.

Upon closer examination, we observe that when the intensity of the random noise \(\xi \) is equal to zero, the values of \(R_0^s\), \(R_0^p\), and \(R_0^e\) are identical. This implies that the stochastic model (1.4) reduces to the deterministic model (1.1) under this condition. It is worth noting that the results presented above for the stochastic model fully encompass the corresponding results for the deterministic model, so the insights gained from the stochastic analysis apply directly to the deterministic model in this special case.

Furthermore, we found that the deterministic model has a unique endemic equilibrium. Moreover, when it exists, this equilibrium is locally asymptotically stable. This finding is noteworthy as it provides significant insight into the behavior of SEIR models, particularly in terms of their stability. Moreover, this gives a new approach for researchers in the field, as it inspires further investigation into the stability of similar epidemic models.

The findings we have presented above can help to deepen our understanding of the dynamics underlying both deterministic and stochastic models. By gaining a better understanding of the dynamics of these models, we may be able to uncover new insights into the spread and control of infectious diseases. These insights, in turn, may have significant practical implications for managing infectious diseases more effectively.

On the other hand, the utilization of nonstandard finite difference methods, especially in fractional modeling, allows us to effectively capture anomalous diffusion and fractional-order phenomena. Additionally, incorporating spatial considerations into epidemic models offers important insights into the spatial dynamics of epidemics, enabling more informed decision-making and intervention strategies. These topics will be directions for our future research.