1 Introduction

The regulation of information encoding and transmission in biological systems has intrigued and occupied mathematicians and physicists for decades. One of the earliest published papers along these lines is Timoféeff-Ressovsky et al. (1935), which played a major role in motivating Erwin Schrödinger to give the 1943 Dublin lectures that are immortalized in Schrödinger (1944). The regulation of information retrieval started to become understood very quickly after the seminal work of Jacob and Monod (Jacob et al. 1960; Jacob and Monod 1961) elucidating the nature of the regulation of lactose production in bacteria. The molecular apparatus carrying out this procedure in bacteria, involving transcription of DNA to produce mRNA and the translation of the mRNA to ultimately produce an effector protein, was named an 'operon' by them. In Fig. 1 we have illustrated the operon concept using the lactose (lac) operon as an example (Jacob et al. 1960; Jacob and Monod 1961).

Fig. 1

A cartoon representation of the operation of the lac operon enabling bacteria to utilize lactose as an energy source in the absence of glucose, as elucidated by Jacob et al. (1960), Jacob and Monod (1961). The process starts (upper left) when the operator region (dark-blue) is free of active repressor molecules so RNA polymerase can attach to the DNA and start moving along the structural genes to produce mRNA. Once the mRNA is fully formed, ribosomes start the translation process which, for the lac operon, produces, in sequence, \(\beta \)-galactosidase, permease, and transacetylase. The permease facilitates the transport of extracellular lactose to the intracellular space (bottom right), while the \(\beta \)-galactosidase is essential for the conversion of the internalized lactose into allolactose. Allolactose, in turn, is able to bind to active repressor molecules thereby inactivating them and giving rise to the positive feedback nature of the lac operon. Modified from Yildirim and Mackey (2003)

Rather astonishingly, mathematical models of the process of transcription and translation rapidly appeared (Goodwin et al. 1963; Goodwin 1965). These first attempts were swiftly followed by an analysis of a simple repressible operon (Griffith 1968a) and an inducible operon (Griffith 1968b). These and other results were summarized in the Tyson and Othmer (1978) review which is still relevant today.

Though Goodwin clearly noted the existence of significant delays in both transcription and translation in Goodwin et al. (1963), and thought that the delays might have significant dynamic influences, he did not examine their potential effects. Apparently the first to incorporate constant transcriptional and translational delays into the Goodwin model was Banks (1977) and then MacDonald (1977), followed in rapid succession by Banks and Mahaffy (1978a, 1978b), an der Heiden (1979, 1983) and Mahaffy and Pao (1984). These were followed by a number of subsequent investigations.

Since the processes of transcription and translation are rather complicated, the assumption of constant delay may limit our ability to appreciate the richness of dynamics that the process of protein production can impose on the cell. The goal of this paper is to derive a Goodwin-like delay-differential equation (DDE) model with state dependent delays that we feel may more closely correspond to biological reality, explore the potential dynamics in both the repressible and inducible cases, and contrast these dynamics with those of a system with constant delays.

This paper is rather long and detailed, and a summary of the contents may be of help to the reader. Section 2 outlines the basic operon equations starting with a summary of the Goodwin model in Sect. 2.1 and a summary of the equations we derive in this paper in Sect. 2.2. Section 2.3 details the full derivation of the model equations we study here while Sect. 2.4 summarizes the functional forms for the transcription initiation rates that we use for inducible and repressible operons.

Section 2.5 contains our arguments for the nature and form of the transcriptional and translational velocities in the two types of operons that lead naturally to the state dependent delays that are central to our study. Section 2.6 deals with the quantitative and qualitative nature of the possible equilibrium states of our model equations developed in Sect. 2.3, and the following Sect. 2.7 gives a linearization procedure in the neighborhood of these steady states that is not fully justified mathematically but easily understood by most readers. (The analytically exact linearization is to be found in “Appendix C” and leads to precisely the same result). These linearizations are needed for stability determinations based on the eigenvalues evaluated at the equilibria.

Section 3 contains the details of the numerical methods we have used in our numerical studies of this model, while the following Sect. 4 contains the extensive details of our numerical studies for both the repressible (Sect. 4.1) and inducible (Sect. 4.2) operon models. In both cases we have found that significant new types of dynamics are introduced by the state dependency of the delays. In the inducible operon model we found a stable periodic orbit as well as tristability between a periodic orbit and two steady states. In the repressible operon model we found bistability between two steady states as well as between a periodic orbit and a steady state. All of these results are obtained with one state dependent delay and are not present in the corresponding DDE model with a constant delay. In addition, in the repressible operon model we found evidence of a homoclinic bifurcation of Shilnikov type (Shilnikov 1965; Kuznetsov 2004), indicating the potential for complex dynamics. Finally, in both types of operons there are stable periodic orbits, where a short burst of transcription and translation is interspersed with longer periods of quiescence. These orbits represent a pulse-generating mechanism on a sub-cellular level and may be connected to the phenomenon of transcriptional bursting (Tunnacliffe and Chubb 2020). The main body of the paper concludes with a discussion and summary in Sect. 5 and is followed by three mathematical appendices. "Appendix A" treats the semiflows arising from our basic model with state dependent delays, "Appendix B" considers some aspects of the nature of the model solutions including positivity and the global attractor, while "Appendix C" treats the linearization mentioned above.

2 Basic operon equations

2.1 The Goodwin model

The Goodwin (1965) model for operon dynamics considers a large population of cells, each of which contains one copy of a particular operon, and we use that as a basis for discussion. We let \((M, I, E)\) respectively denote the mRNA, intermediate protein, and effector protein concentrations. For a generic operon the dynamics are assumed to be given by (Goodwin et al. 1963; Goodwin 1965; Griffith 1968a, b; Othmer 1976; Selgrade 1979)

$$\begin{aligned} \dfrac{dM}{dt}(t)&= {{\mathcal {F}}}(E(t)) -\gamma _M M(t), \end{aligned}$$
(2.1)
$$\begin{aligned} \dfrac{dI}{dt}(t)&= \beta _I M(t) -\gamma _I I(t), \end{aligned}$$
(2.2)
$$\begin{aligned} \dfrac{dE}{dt}(t)&= \beta _E I(t) - \gamma _E E(t). \end{aligned}$$
(2.3)

It is assumed here that the flux \({{\mathcal {F}}}\) (in units of \([\text{ concentration }\cdot \text{ time}^{-1}]\)) of initiation of mRNA production is a function of the effector level E. Furthermore, the model assumes that the fluxes of protein and metabolite production are proportional (at rates \(\beta _I,\beta _E\) respectively) to the amount of mRNA and intermediate protein respectively. All three of the components \((M, I, E)\) are subject to degradation at rates \(\gamma _M, \gamma _I, \gamma _E\). The parameters \(\beta _I,\beta _E,\gamma _M\), \(\gamma _I\) and \(\gamma _E\) have dimensions [time\(^{-1}\)].
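As a concrete illustration, the system (2.1)–(2.3) can be integrated numerically. The sketch below is ours, not from the paper: it assumes a hypothetical repressible Hill-type flux \({{\mathcal {F}}}(E)=1/(1+E^n)\) and unit rate constants, chosen purely for illustration.

```python
# Forward-Euler integration of the Goodwin model (2.1)-(2.3).
# The flux F(E) = 1/(1 + E^n) and the unit rate constants are
# illustrative assumptions, not parameter values from the paper.

def goodwin_step(M, I, E, dt, n=4.0,
                 beta_I=1.0, beta_E=1.0,
                 gamma_M=1.0, gamma_I=1.0, gamma_E=1.0):
    F = 1.0 / (1.0 + E**n)            # assumed repressible initiation flux
    dM = F - gamma_M * M              # Eq. (2.1)
    dI = beta_I * M - gamma_I * I     # Eq. (2.2)
    dE = beta_E * I - gamma_E * E     # Eq. (2.3)
    return M + dt * dM, I + dt * dI, E + dt * dE

def simulate(T=50.0, dt=0.01):
    M = I = E = 0.0
    for _ in range(int(T / dt)):
        M, I, E = goodwin_step(M, I, E, dt)
    return M, I, E
```

With these rates the equilibrium satisfies \(E^*=1/(1+E^{*4})\), i.e. \(E^{*5}+E^*=1\), so \(E^*\approx 0.755\), and the trajectory settles onto it; with equal rate constants \(M^*=I^*=E^*\).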

2.2 The effects of cell growth and state dependent transcription and translation delays

We will study an extended Goodwin model taking into account the effects of cell growth and delays which are introduced by state dependent transcription and translation processes. The cell growth affects the volume and hence the concentrations of all the molecules in the cell.

The following sections are devoted to the derivation of the generalization of the Goodwin model:

$$\begin{aligned} \dfrac{dM}{dt}(t)&= \beta _M e^{-\mu \tau _M}\dfrac{v_M(E(t))}{v_M(E(t-\tau _M))}f(E(t-\tau _M))-{\bar{\gamma }}_M M(t), \end{aligned}$$
(2.4)
$$\begin{aligned} \dfrac{dI}{dt}(t)&= \beta _I e^{-\mu \tau _I}\dfrac{v_I(M(t))}{v_I(M(t-\tau _I))} M(t-\tau _I) -{\bar{\gamma }}_I I(t), \end{aligned}$$
(2.5)
$$\begin{aligned} \dfrac{dE}{dt}(t)&= \beta _E I(t) -{\bar{\gamma }}_E E(t). \end{aligned}$$
(2.6)

In Eqs. (2.4)–(2.6) there are several changes to be noted relative to the original Goodwin model (2.1)–(2.3). The first is the introduction of the two delay terms \(E(t-\tau _M)\) and \(M(t-\tau _I)\) indicating that E and M are now to be evaluated at a time in the past due to the non-zero times required for transcription and translation. From a dynamic point of view, the presence of these delays can have a dramatic effect.

The second change is the appearance of the terms \(e^{-\mu \tau _M}\) and \(e^{-\mu \tau _I}\), which respectively account for an effective dilution of the maximal mRNA production and intermediate protein fluxes because the cell is growing at a rate \(\mu \) (in units of [time\(^{-1}\)]).

The third change from (2.1)–(2.3) to (2.4)–(2.6) is the alteration of the decay rates \(\gamma _i\) to \({\bar{\gamma }}_i \equiv \gamma _i + \mu \) because the dilution due to cell growth leads to an effective increase in the rate of destruction.

A fourth change is the replacement of \({{\mathcal {F}}}\) in (2.1) by \(\beta _M f\) in (2.4). Here \(\beta _M\) is the maximal production rate (in units of \([\text{ concentration }\cdot \text{ time}^{-1}]\)) possible and f is the fraction of free operator sites on the operon, a function that will vary between a maximal value of 1 and a minimal value in (0, 1). We remark that \(\beta _M\) thus has different units than the linear rate constants \(\beta _I\) and \(\beta _E\).

Finally, velocity ratio terms of the form \(\frac{v_j(w(t))}{v_j(w(t-\tau ))}\) appear in (2.4) and (2.5) as a consequence of the delays \(\tau _M\) and \(\tau _I\) being non-constant and depending on the state variables, as explained in Sect. 2.5.

2.3 Evolution equations incorporating state-dependent transcription and translation rates

Transcription is initiated when RNA polymerase (RNAP) is recruited to the promoter region by one or more transcription factors, partially unwinds the promoter DNA to form the transcription bubble, and subsequently leaves the promoter region, moving along the DNA. If multiple initiations take place in rapid succession, then transcribing RNAPs start to interfere with each other, and as a result the average velocity of the individual RNAPs will decrease. This, in turn, leads to an increased time of transcription.

Translation is initiated by assembly of the ribosome on the initiation region of the mRNA. Ribosomes catalyze subsequent binding of codon-specific transfer RNAs (tRNAs) to the mRNA and transfer of the attached amino acid to the nascent polypeptide. Subsequent translocation of the ribosome completes the cycle.

The result is a bio-polymerization process whose velocity depends on current demand for both ribosomes and tRNAs, which is affected both by the number of actively translated mRNAs and the growth rate of the cell. Both transcription and translation share key characteristics that lead to a common model of these processes. The most basic model, from which other models are derived, is a stochastic Totally Asymmetric Simple Exclusion Process (TASEP) model for particles hopping on a strand with a finite number of discrete sites that represent nucleotides (Derrida et al. 1993; Schütz and Domany 1993; Kolomeisky 1998; Shaw et al. 2003; Zia et al. 2011).

It should be noted that in eukaryotes (as opposed to our consideration in this paper of prokaryotic gene regulation) transcription takes place in the nucleus while translation takes place in the cytoplasm and the consequent transport of intermediate from cytoplasm into the nucleus gives rise to a transport delay that may, on occasion, be considered as state dependent (Ahmed and Verriest 2017; Wang and Pei 2021).

2.3.1 mRNA dynamics

For the mRNA molecules we start with the mRNA transcripts and consider their density r(t, a) at time t and location a along the DNA, so

$$\begin{aligned} \int _{a_1}^{a_2}r(t,a)da \end{aligned}$$

is the number of mRNA molecules with positions between \(a_1\) and \(a_2\), \(0\le a_1<a_2\le a_M\), where \(a_M\) is the end of the transcription region. The velocity of transcription along the DNA is given by a function \(v_M\), and we assume that the actual velocity of the process depends on the value w(t) of a function w, to be determined later. If the transcription process takes place without any loss of mRNA transcripts, then the evolution equation for the density r(t, a) is given by

$$\begin{aligned} \dfrac{\partial r}{\partial t}(t,a) + v_M(w(t)) \dfrac{\partial r}{\partial a}(t,a) = 0. \end{aligned}$$
(2.7)

We look for a differential equation

$$\begin{aligned} \frac{dm}{dt}(t)=p(t)-\gamma _Mm(t) \end{aligned}$$

for the number m(t) of complete mRNA molecules at time t, with a constant rate \(\gamma _M>0\) of degradation and a production function p which describes the contribution of the release of completed mRNA molecules at time t to the rate of change \(\frac{dm}{dt}(t)\). In order to determine p(t) consider the number of mRNA molecules undergoing transcription at time t, which is

$$\begin{aligned} J(t)=\int _0^{a_M}r(t,a)da. \end{aligned}$$

Using Eq. (2.7) we have a balance equation for J,

$$\begin{aligned} \frac{dJ}{dt}(t)=v_M(w(t))r(t,0)-v_M(w(t))r(t,a_M), \end{aligned}$$

where the term \(v_M(w(t))r(t,0)\) represents the contribution of the initiation of transcription of mRNA molecules to \(\frac{dJ}{dt}(t)\), and \(-v_M(w(t))r(t,a_M)\) is the release rate of completed mRNA molecules. Therefore the term \(v_M(w(t))r(t,a_M)\) is the desired contribution p(t) to \(\frac{dm}{dt}(t)\). Using characteristics we obtain

$$\begin{aligned} p(t)=v_M(w(t))r(t,a_M)=v_M(w(t))r(t-\tau ,0), \end{aligned}$$

with the time \(\tau =\tau _M(t)\) needed for production of mRNA molecules which reach the final length \(a_M\) at time t,

$$\begin{aligned} a_M = \int ^t_{t-\tau _M(t)}v_M(w(s))ds = \int ^0_{-\tau _M(t)}v_M(w(t+s))ds. \end{aligned}$$
(2.8)

We arrive at

$$\begin{aligned} p(t)&= v_M(w(t))r(t-\tau _M(t),0)\\&= \frac{v_M(w(t))}{v_M(w(t-\tau _M(t)))}[v_M(w(t-\tau _M(t)))r(t-\tau _M(t),0)], \end{aligned}$$

where the term \([v_M(w(t-\tau _M(t)))r(t-\tau _M(t),0)]=F(t-\tau _M(t))\) stands for the onset of transcription of mRNA molecules at time \(t-\tau _M(t)\). The differential equation for m thus becomes

$$\begin{aligned} \frac{dm}{dt}(t)&= \frac{v_M(w(t))}{v_M(w(t-\tau _M(t)))}[v_M(w(t-\tau _M(t)))r(t-\tau _M(t),0)]-\gamma _Mm(t)\nonumber \\&= \frac{v_M(w(t))}{v_M(w(t-\tau _M(t)))}F(t-\tau _M(t))-\gamma _Mm(t). \end{aligned}$$
(2.9)

We now switch to a description of transcription in terms of molecule concentration, rather than numbers of molecules. Since the concentration M is related to the number of molecules m by \(M=m/V\), we have

$$\begin{aligned} \dfrac{dm}{dt}(t) = \dfrac{dM}{dt}(t) V(t) + M(t)\dfrac{dV}{dt}(t)= V(t)\dfrac{dM}{dt}(t) + \mu V(t)M(t), \end{aligned}$$

under the assumption that the cells are growing exponentially with \(\tfrac{dV}{dt}(t) = \mu V(t)\). Consequently, noting that \(V(t)=e^{\mu \tau _M(t)}V(t - \tau _M(t))\), we can rewrite (2.9) as

$$\begin{aligned} \dfrac{dM}{dt}(t)&= \frac{1}{V(t)}\dfrac{dm}{dt}(t)-\mu M(t) \\&=\dfrac{v_M(w(t))}{v_M(w(t-\tau _M(t)))} e^{-\mu \tau _M(t)} \dfrac{ F (t-\tau _M(t))}{V(t - \tau _M(t))} - (\gamma _M + \mu ) M(t). \end{aligned}$$

We express the initiation flux \(\frac{F(\ldots )}{V(\ldots )}\) in concentration units as

$$\begin{aligned} \dfrac{F(t-\tau _M(t))}{V(t - \tau _M(t))}=:\beta _M f(w(t-\tau _M(t))) \end{aligned}$$

where \(\beta _M\) is the maximal initiation flux (units of [\(\text{ concentration }\cdot \text{ time}^{-1}\)]) and f stands for the fraction of free operator sites on the operon, a function that will vary between a minimal value in (0, 1) and a maximal value of 1.

As derived in Mackey et al. (2016, Chapter 1), the initiation flux is a function of concentration of the effector molecule E, and the velocity \(v_M\) also depends on E (Sect. 2.5). Therefore for the transcription process \(w=E\) and we obtain

$$\begin{aligned} \dfrac{dM}{dt}(t) =\dfrac{v_{M}(E(t))}{v_{M}(E(t-\tau _{M}(t)))} e^{-\mu \tau _{M}(t)} \beta _M f(E(t-\tau _{M}(t))) - (\gamma _M + \mu ) M(t), \nonumber \\ \end{aligned}$$
(2.10)

together with Eq. (2.8) for the delay \(\tau _M(t)\), which depends on the function E.

2.3.2 Intermediate dynamics

We assume that the initiation of the translational production of the mRNA into intermediate protein is a relatively simple process and, unlike the transcription process, not under regulatory control.

For the intermediate molecules, we use i(t, b) to describe their density at time t and location b along the mRNA. We assume that the translation is proceeding at a velocity \(v_I(q)\) along the mRNA. This velocity may depend on q, where q is to be determined. Analogous to the transcription process we arrive at

$$\begin{aligned} \dfrac{di}{dt}(t) = \dfrac{v_I(q(t))}{v_I(q(t-\tau _I(t)))} \beta _I m(t-\tau _I(t)) - \gamma _I i(t). \end{aligned}$$
(2.11)

In Sect. 2.5 we argue that \(q=M\), the concentration of mRNA. Therefore switching to a concentration description using \(I(t)= \frac{i(t)}{V(t)}\), following the derivation of (2.10) we can rewrite (2.11) in the form

$$\begin{aligned} \dfrac{dI}{dt}(t) =\dfrac{v_I(M(t))}{v_I(M(t-\tau _I(t)))} e^{-\mu \tau _I(t)} \beta _I M(t-\tau _I(t)) - (\gamma _I + \mu ) I(t). \end{aligned}$$
(2.12)

2.3.3 Effector dynamics

The effector dynamics are the easiest because there is neither transcription nor translation involved. Rather the production of the effector is assumed to be proportional to the intermediate level i at a rate \(\beta _E\), while the effector is destroyed at a rate \(\gamma _E\). Thus

$$\begin{aligned} \dfrac{de}{dt}(t) = \beta _E i(t) - \gamma _E e(t), \end{aligned}$$
(2.13)

and, changing the description from numbers to concentrations, we have simply that

$$\begin{aligned} \dfrac{dE}{dt}(t) = \beta _E I(t) - (\gamma _E + \mu ) E(t). \end{aligned}$$
(2.14)

2.3.4 Putting it all together

Denote the transcriptional velocity by \(v_M(E(t))\) and the translational velocity by \(v_I(M(t))\). Further let \({\bar{\gamma }}_M = \gamma _M + \mu ,{\bar{\gamma }}_I = \gamma _I + \mu ,{\bar{\gamma }}_E = \gamma _E + \mu \). Then we can write the state dependent forms of (2.4)–(2.6) as

$$\begin{aligned} \dfrac{dM}{dt}(t)&= \beta _M e^{-\mu \tau _{M}(t)} \dfrac{v_M(E(t))}{v_M(E(t-\tau _{M}(t)))} f(E(t-\tau _M(t))) -{\bar{\gamma }}_M M(t), \end{aligned}$$
(2.15)
$$\begin{aligned} \dfrac{dI}{dt}(t)&= \beta _I e^{-\mu \tau _I(t)} \dfrac{v_I({M(t)})}{v_I(M(t-\tau _I(t)))} M(t-\tau _I(t)) -{\bar{\gamma }}_I I(t), \end{aligned}$$
(2.16)
$$\begin{aligned} \dfrac{dE}{dt}(t)&= \beta _E I(t) -{\bar{\gamma }}_E E(t). \end{aligned}$$
(2.17)

These equations are supplemented by the two additional equations which define the delays \(\tau _M\) and \(\tau _I\) by threshold conditions, namely

$$\begin{aligned} a_M&= \int _{t-\tau _M(t)}^{t} v_M(E(s)) ds=\int _{-\tau _M(t)}^{0} v_M(E(t+s)) ds \end{aligned}$$
(2.18)
$$\begin{aligned} a_I&= \int _{t-\tau _I(t)}^{t} v_I(M(s)) ds=\int _{-\tau _I(t)}^{0} v_I(M(t+s)) ds. \end{aligned}$$
(2.19)

We write \(\tau _M(t)\) and \(\tau _I(t)\) for the state-dependent delays, but from (2.18) and (2.19) it is clear that the value of each is determined by the values of E(t) or M(t) respectively over the whole integration interval. Using the Banach space notation of the Appendices we ought to write \(\tau _M(E_t)\) and \(\tau _I(M_t)\) for these delays where \(E_t\) and \(M_t\) are functions defined by \(E_t(\theta )=E(t-\theta )\) and \(M_t(\theta )=M(t-\theta )\). But, to hopefully make the presentation accessible to readers who are not comfortable with Banach spaces, we will avoid any Banach space notation in the main body of the text and continue to write \(\tau _M(t)\) and \(\tau _I(t)\) for the delays at time t.
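The threshold conditions (2.18)–(2.19) can be inverted numerically: given stored history values of E on a uniform grid, one accumulates the integral of \(v_M(E)\) backwards in time until it reaches \(a_M\). A minimal sketch of this inversion (the function name and the trapezoidal discretization are our own, not from the paper):

```python
def tau_from_threshold(v, E_hist, dt, a):
    """Solve a = integral_{t-tau}^{t} v(E(s)) ds for tau.

    E_hist[k] approximates E(t - k*dt), so index 0 is the most
    recent value and larger indices reach further into the past.
    """
    acc = 0.0
    for k in range(len(E_hist) - 1):
        # trapezoid rule over the segment [t-(k+1)*dt, t-k*dt]
        seg = 0.5 * (v(E_hist[k]) + v(E_hist[k + 1])) * dt
        if acc + seg >= a:
            # linear interpolation within the final partial segment
            frac = (a - acc) / seg
            return (k + frac) * dt
        acc += seg
    raise ValueError("stored history too short to satisfy the threshold")
```

For a constant history the threshold condition reduces to \(\tau = a_M/v_M(E)\), which the routine reproduces exactly; this also provides a convenient correctness check.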

Velocity ratio terms such as those appearing in (2.15) and (2.16) are ubiquitous in distributed state-dependent DDE problems with either threshold conditions (Craig et al. 2016) or with randomly distributed maturation times (Cassidy et al. 2019). Bernard (2016) explains very clearly why they arise.

2.4 The control of transcription initiation rates

The determination of how effector concentrations modify the fraction of free operator sites, f, has been dealt with by a number of authors. Here we merely summarize the nature of f for inducible and repressible systems; see Mackey et al. (2016, Chapter 1) for details.

For a repressible operon, f is a monotone decreasing function

$$\begin{aligned} f(E) = \frac{1+ K_1E^n}{1 + KE^n}, \end{aligned}$$
(2.20)

where \(K > K_1\), \(n > 1\), so there is maximal repression for large E. For an inducible system f is a monotone increasing function of the form

$$\begin{aligned} f(E) = \frac{1+ K_1E^n}{K + K_1E^n}, \end{aligned}$$
(2.21)

where \(K > 1\), \(n > 1\). Maximal induction occurs for very large E.

Note that both (2.20) and (2.21) are special cases of

$$\begin{aligned} f(E) = \dfrac{1+K_1E^n}{A+BE^n} \end{aligned}$$
(2.22)

The constants \(A,B \ge 0\) are defined in Table 1.
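The three forms (2.20)–(2.22) translate directly into code. A short sketch (function names are ours; the parameter values in the checks below are illustrative only):

```python
def f_general(E, K1, A, B, n):
    # general form (2.22)
    return (1.0 + K1 * E**n) / (A + B * E**n)

def f_repressible(E, K1, K, n):
    # (2.20): A = 1, B = K with K > K1; monotone decreasing in E
    return f_general(E, K1, 1.0, K, n)

def f_inducible(E, K1, K, n):
    # (2.21): A = K, B = K1 with K > 1; monotone increasing in E
    return f_general(E, K1, K, K1, n)
```

The repressible form falls from 1 at \(E=0\) toward \(K_1/K\) for large E, while the inducible form rises from 1/K toward 1, matching the limiting behaviour described in the text.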

2.5 Transcriptional and translational velocities

In this section we discuss cellular processes that affect the transcriptional and translational velocities \(v_M \) and \(v_I \). Both transcription and translation are polymerization processes in which small parts are assembled into a long polymer chain by an enzymatic reaction catalyzed by a large complex.

For the transcription process nucleotides A, C, G, T are incorporated by RNA polymerase (RNAP) into an mRNA chain, and for the translation process amino acids are incorporated by ribosomes into a polypeptide that, upon folding, becomes a functional protein. Velocities of both processes depend on a sufficient and timely supply of nucleotides and amino acids, respectively. The availability, or paucity, of the parts may result in changes in velocity from position to position along the strand (Zia et al. 2011). The abundance of the parts reflects the overall growth rate of the cell: faster growth leads to greater demand on resources and slower transcription (\(v_M\)) and translation (\(v_I \)) velocities. Therefore for an inducible operon, the transcription velocity \(v_M(E) \) is a decreasing function of the concentration of the effector E and for a repressible operon \(v_M(E)\) is an increasing function of the concentration of the effector E.

The velocity of translation depends on the number of initiations of the translation process, which is directly proportional to the concentration M of mRNA. Since greater demand on amino acid availability results in a lower elongation velocity of ribosomes, the translational velocity \(v_I(M)\) is a decreasing function of M.

There is a second process that may affect the elongation velocity: elongation interference between multiple RNAPs or multiple ribosomes (Klumpp and Hwa 2008; Klumpp 2011). The velocity of elongation decreases with the number of RNAPs and ribosomes that elongate at the same time. Since this number is proportional to the initiation rate, the velocity \(v_M=v_M(E)\) is a decreasing function of the concentration of E for an inducible operon and an increasing function of the concentration of E for a repressible operon. The velocity \(v_I(M)\) is a decreasing function of M.

Both availability of nucleotides and peptides and elongation interference support the following assumptions on the velocities \(v_M\) and \(v_I\):

$$\begin{aligned} v_M&=v_M(E) \quad \text { is a decreasing function of } E \text { for an inducible operon} \\ v_M&=v_M(E) \quad \text { is an increasing function of } E \text { for a repressible operon} \\ v_I&= v_I(M) \quad \text { is always a decreasing function of } M. \end{aligned}$$

There are no analytic expressions for the dependence of \(v_M(E)\) and \(v_I(M)\) on E and M respectively, but we have made assumptions for modeling purposes and these are detailed in Table 1. Specifically we have assumed that they can be represented by Hill functions with parameters determining the maximum, minimum and half-maximal values as well as a parameter (m or \(m_I\)) which controls the slope. We do not offer any detailed stoichiometric justification for these assumptions, but rather assume that they will capture the essential nature of their dependencies.

Table 1 Summary of the form of the fraction f of free operators as determined by their stoichiometry, and the Hill function forms we have assumed for the transcriptional and translational velocities

The parameters \(v_M^{min}, v_I^{min}\) describe the minimal velocities of transcription and translation, respectively. While individual polymerases and ribosomes may briefly pause their elongation, in our model, where M and I are concentrations in a large population of cells, we assume \(v_I^{min}>0\) and \(v_M^{min}>0\). Violation of this assumption would cause significant problems both with our theory and numerical simulations. The minimal velocity being strictly positive ensures that the maximal delay is bounded, since from (2.18) and (2.19)

$$\begin{aligned} \tau _M(t)\le \frac{a_M}{v_M^{min}}, \qquad \tau _I(t)\le \frac{a_I}{v_I^{min}}. \end{aligned}$$

Similarly the maximal velocities define the minimal delays. Interesting dynamics can occur when delays become large and in what follows we will often take \(v_M^{min}\) as a bifurcation parameter and study what happens as \(v_M^{min}\rightarrow 0\) and consequently \(\tau _M(t)\) becomes large.
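Since the contents of Table 1 are not reproduced in the text, the Hill forms below are assumed stand-ins for the velocity functions, and the delay bounds simply restate the inequalities above:

```python
def v_increasing(x, v_min, v_max, K, m):
    # assumed Hill form rising from v_min to v_max
    # (e.g. v_M(E) for a repressible operon)
    return v_min + (v_max - v_min) * x**m / (K**m + x**m)

def v_decreasing(x, v_min, v_max, K, m):
    # assumed Hill form falling from v_max to v_min
    # (e.g. v_M(E) for an inducible operon, or v_I(M))
    return v_min + (v_max - v_min) * K**m / (K**m + x**m)

def delay_bounds(a, v_min, v_max):
    # from the threshold conditions: a/v_max <= tau(t) <= a/v_min
    return a / v_max, a / v_min
```

For example, with \(a_M=10\), \(v_M^{min}=0.1\) and \(v_M^{max}=10\) (illustrative values), the transcription delay is confined to the interval [1, 100]; letting \(v_M^{min}\rightarrow 0\) pushes the upper bound to infinity, which is the bifurcation scenario studied later.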

2.6 Equilibria

We next consider the steady states \((M^*,I^*,E^*)\) of (2.15)–(2.19). From (2.18) and (2.19) at steady state the delays satisfy

$$\begin{aligned} \tau _M=\tau _M^*(E^*):=\frac{a_M}{v_M(E^*)}, \qquad \tau _I=\tau _I^*(M^*):=\frac{a_I}{v_I(M^*)}. \end{aligned}$$
(2.23)

Then, at the steady state, equations (2.15)–(2.17) simplify to

$$\begin{aligned} 0&= \beta _M e^{-\mu \tau _M^*(E^*)}f(E^*) - {\bar{\gamma }}_M M^*, \end{aligned}$$
(2.24)
$$\begin{aligned} 0&= \beta _I e^{-\mu \tau _I^*(M^*)} M^* -{\bar{\gamma }}_I I^*, \end{aligned}$$
(2.25)
$$\begin{aligned} 0&= \beta _E I^* -{\bar{\gamma }}_E E^*. \end{aligned}$$
(2.26)

We rearrange (2.24) to obtain

$$\begin{aligned} M^* = \dfrac{\beta _M}{{\bar{\gamma }}_M}e^{-\mu \tau _M^*(E^*)}f(E^*), \end{aligned}$$
(2.27)

and then substituting this and (2.26) into (2.25) we find that the steady state must satisfy a single equation for \(E^*\):

$$\begin{aligned} 0 = g_E(E^*):= \frac{\beta _M\beta _I\beta _E}{{\bar{\gamma }}_M{\bar{\gamma }}_I{\bar{\gamma }}_E} e^{-\mu (\tau _I^*(M^*)+\tau _M^*(E^*))}f(E^*)-E^* \end{aligned}$$
(2.28)

where the argument of \(\tau _I^*\) is given by (2.27).

With the functions \(v_M\), \(v_I\) and f defined as in Table 1, we have \(v_M(E)\in [v_M^{min},v_M^{max}]\) and \(v_I(M)\in [v_I^{min},v_I^{max}]\), so

$$\begin{aligned} \tau _M\in [a_M/v_M^{max},a_M/v_M^{min}] \,\,\text{ and } \,\, \tau _I\in [a_I/v_I^{max},a_I/v_I^{min}] \end{aligned}$$

while \(f(E)\in (0,1]\). Thus \(g_E(0)>0\) and

$$\begin{aligned} g_E(E) \le \frac{\beta _M\beta _I\beta _E}{{\bar{\gamma }}_M{\bar{\gamma }}_I{\bar{\gamma }}_E}-E. \end{aligned}$$

Therefore, \(g_E(E)<0\) for all E sufficiently large, and by the intermediate value theorem there is at least one solution \(E^*>0\) to \(g_E(E^*)=0\). This defines a steady state \((M^*,I^*,E^*)\). It also follows that any steady-state solution must satisfy

$$\begin{aligned} E^*\le \frac{\beta _M\beta _I\beta _E}{{\bar{\gamma }}_M{\bar{\gamma }}_I{\bar{\gamma }}_E}, \qquad I^*\le \frac{\beta _M\beta _I}{{\bar{\gamma }}_M{\bar{\gamma }}_I}. \end{aligned}$$
Fig. 2
figure 2

Inducible constant delays with f as defined in Table 1. a Illustration of one or three solutions to (2.29) for different values of \(C_{\beta \gamma }\). b Because \(f'\) is unimodal, \(g'(E)\) has at most two sign changes, so (2.29) cannot have more than three solutions

If there is no cell growth, and thus \(\mu =0\), and/or if both delays are constant and independent of the state-variables (\(v_M^{max}=v_M^{min}\) and \(v_I^{max}=v_I^{min}\)) then equation (2.28) reduces to the form

$$\begin{aligned} 0 = g_E(E) = C_{\beta \gamma }f(E)-E \end{aligned}$$
(2.29)

for a suitably defined constant \(C_{\beta \gamma }>0\). The solutions of (2.29) and similar equations are well-studied in the context of monotone cyclic feedback systems both with and without constant delay (Othmer 1976; Tyson and Othmer 1978; Yildirim et al. 2004). For the repressible case f(E) is monotone decreasing, hence \(g_E(E)\) is also monotone decreasing and there is a unique steady state. For the inducible case f(E) is non-negative and monotone increasing. Here the number of steady states depends on the exact form of f. With f defined as in Table 1, which has a unique inflection point at some \(E>0\) where \(f''(E)=0\), there will be at most three steady states, as shown in Yildirim et al. (2004) and illustrated in Fig. 2.
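The one-or-three alternative for the inducible case is easy to reproduce numerically. In the sketch below we take the hypothetical values \(K_1=1\), \(K=10\), \(n=4\) (chosen for illustration, not taken from the paper) and count sign changes of \(g_E(E)=C_{\beta \gamma }f(E)-E\) on a grid:

```python
def f_inducible(E, K1=1.0, K=10.0, n=4.0):
    # inducible fraction of free operator sites, Eq. (2.21)
    return (1.0 + K1 * E**n) / (K + K1 * E**n)

def count_roots(C, E_max=6.0, steps=6000):
    # count sign changes of g_E(E) = C*f(E) - E on a uniform grid;
    # since g_E(E) < 0 for E large, each sign change marks a root
    h = E_max / steps
    count = 0
    prev = C * f_inducible(0.0)          # g_E(0) = C*f(0) > 0
    for k in range(1, steps + 1):
        E = k * h
        cur = C * f_inducible(E) - E
        if prev * cur < 0.0:
            count += 1
        prev = cur
    return count
```

With these parameters a small \(C_{\beta \gamma }\) gives a single steady state while a larger value gives three, mirroring panel (a) of Fig. 2.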

Fig. 3
figure 3

Examples of the function \(g_E(E)\) defined by (2.28) with functions defined in Table 1. a and b show a repressible and an inducible example with a single state-dependent delay, and parameter values given in Table 2. c and d show examples with two state-dependent delays with the same parameter values, except for those stated in (2.30) and (2.31) respectively

In many DDEs the delays appear only in the arguments of the state variables, and so do not affect the computation of the steady states. This is also the case with our model when \(\mu =0\), and the computation of the steady states from (2.24)–(2.26) is independent of the delays in that case.

With state-dependent delays and cell growth, and thus \(\mu >0\), the behaviour of the model (2.15)–(2.19) is quite different. Now, the delays \(\tau _I\) and \(\tau _M\) enter explicitly into (2.15)–(2.19) and hence (2.28). While the delay will be constant in time on any steady-state solution, with state-dependent delays the value of the delay will depend on the state variable, which may change the structure of the phase-space of the dynamical system.

As an example consider the repressible case with state-dependent transcription velocity (but constant translation velocity). From Table 1 the transcription velocity \(v_M(E)\) is a monotonic increasing function of E, and hence at steady state (from (2.23)) the transcription delay is a monotonic decreasing function of E. Then \(g_E(E^*)\) contains the product of a monotonic increasing function \(e^{-\mu \tau _M^*(E^*)}\) and monotonic decreasing function \(f(E^*)\). The product in \(g_E(E)\) need not be monotonic and we can no longer conclude that there is a unique steady state for the repressible case. This is illustrated in panel (a) of Fig. 3 which shows an example where \(g_E(E)\) has three zeros corresponding to three different steady states of the model for the repressible case.
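The non-monotonicity of the product \(e^{-\mu \tau _M^*(E)}f(E)\) is easy to verify numerically. The parameter values below are illustrative stand-ins (Table 2 is not reproduced in the text): with \(\mu =1\), \(a_M=10\) and an increasing Hill-type \(v_M\), the product first rises and then falls.

```python
import math

def v_M(E, v_min=0.1, v_max=10.0, K=1.0, m=4.0):
    # increasing transcription velocity (repressible case), assumed Hill form
    return v_min + (v_max - v_min) * E**m / (K**m + E**m)

def f_repressible(E, K1=0.1, K=10.0, n=4.0):
    # repressible fraction of free operator sites, Eq. (2.20)
    return (1.0 + K1 * E**n) / (1.0 + K * E**n)

def product(E, mu=1.0, a_M=10.0):
    # the factor e^{-mu*tau_M^*(E)} f(E) appearing in g_E, with
    # tau_M^*(E) = a_M / v_M(E) from (2.23)
    return math.exp(-mu * a_M / v_M(E)) * f_repressible(E)
```

With these values the product is roughly 3e-7 at \(E=0.5\), 0.014 at \(E=1\) and 0.006 at \(E=2\): increasing and then decreasing, so \(g_E\) need not be monotone and uniqueness of the repressible steady state can fail.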

For an inducible operon, the situation is reversed. The velocity \(v_M(E)\) is a decreasing function and so \(e^{-\mu \tau _M^*(E^*)}\) is a decreasing function of \(E^*\), while the function \(f(E^*)\) is an increasing function of its argument. This can again lead to additional steady states, and Fig. 3b shows an example where \(g_E(E)\) has five zeros corresponding to five different steady states in the model for the inducible case with state-dependent transcription velocity, but constant translation velocity. The full parameter sets for both of these examples are listed in Table 2.

Table 2 Parameters used for the repressible and inducible examples of Fig. 3a, b

In the previous examples we set \(v_I^{min}=v_I^{max}=v_I\) so the translation velocity \(v_I(M)=v_I\) was constant, as was the translation delay \(\tau _I=a_I/v_I\). If we allow \(v_I^{min}<v_I^{max}\) then the translation delay \(\tau _I(M)\) becomes a second state-dependent delay, and in \(g_E(E^*)\) the term \(e^{-\mu \tau _M^*(E^*)}f(E^*)\) is multiplied by an additional factor \(e^{-\mu \tau _I^*(M^*)}\). With the translation velocity defined as in Table 1 we see that \(e^{-\mu \tau _I^*(M^*)}\) is a monotonic increasing function of \(M^*\). However, \(M^*\) itself is defined by (2.27), which again contains the product of \(e^{-\mu \tau _M^*(E^*)}\) and \(f(E^*)\) discussed above. Although a full analysis of this case is beyond the scope of this paper, we note that by changing a few parameters from their values in Table 2 it is possible to obtain additional steady states. For the repressible case with

$$\begin{aligned} K=10, \quad n=10, \quad v_I^{min}=0.05, \quad v_I^{max}=0.5, \end{aligned}$$
(2.30)

and with both the delays \(\tau _M^*(E^*)\) and \(\tau _I^*(M^*)\) state-dependent, we obtain 5 co-existing steady states, as shown in Fig. 3c. For the inducible case with

$$\begin{aligned} v_I^{min}=1.1, \end{aligned}$$
(2.31)

we obtain 7 co-existing steady states, where again both delays are state-dependent.

Taken together the examples of Fig. 3 suggest that there can be

$$\begin{aligned} 1+2\chi _I+2n_\tau , \qquad \chi _I=\left\{ \begin{array}{cl} 0, &{} \text { repressible case} \\ 1, &{} \text { inducible case} \end{array}\right. \end{aligned}$$
(2.32)

steady states where \(n_\tau \) is the number of delays which are state-dependent. We cannot prove that this is the maximum possible number of steady states, but we can construct examples with this many steady states in a systematic way.
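To illustrate how a state-dependent transcription delay creates additional steady states, the following sketch counts the sign changes of a function with the same structure as \(g_E\): a monotonically increasing factor \(e^{-\mu \tau _M^*(E)}\) multiplying a monotonically decreasing repressible nonlinearity, minus a linear term. All functional forms and parameter values are hypothetical, chosen for illustration, and are not the values of Table 2 (the paper's computations use Matlab; this sketch is in Python).

```python
import math

# Illustrative (hypothetical) repressible example: the functional forms and
# parameter values below are chosen for demonstration only.
mu, a_M = 1.0, 3.0                      # growth rate and gene length (assumed)
v_min, v_max = 1.0, 10.0                # transcription velocity bounds

def v_M(E):
    """Monotone increasing transcription velocity (Hill-type switch near E=1)."""
    return v_min + (v_max - v_min) * E**10 / (1.0 + E**10)

def f(E):
    """Monotone decreasing (repressible) nonlinearity, switching near E=2."""
    return 1.0 / (1.0 + (E / 2.0)**10)

def g_E(E, c=10.0):
    """Scalar steady-state function: the increasing factor exp(-mu*tau_M(E))
    times the decreasing factor f(E), minus a linear loss term."""
    tau_M = a_M / v_M(E)                # steady-state transcription delay (2.23)
    return c * math.exp(-mu * tau_M) * f(E) - E

# Count sign changes of g_E on a fine grid: each one brackets a steady state.
grid = [0.01 + 0.005 * k for k in range(1000)]   # E in (0, 5]
vals = [g_E(E) for E in grid]
zeros = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
print(zeros)   # three steady states despite the repressible nonlinearity
```

With these (tuned) values the increasing exponential factor switches on near \(E=1\), before the repressible nonlinearity switches off near \(E=2\), producing three zeros, in line with the count \(1+2n_\tau \) for the repressible case with one state-dependent delay.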

Fig. 4
figure 4

a For the inducible model, the constant function \(e^{-\mu \tau _I(M^*)}\) (broken line) as in Fig. 3b, and the state-dependent function \(e^{-\mu \tau _I(M^*)}\) (solid line) as in Fig. 3d. b The behaviour of the corresponding functions \(g_E(E)\) for \(E\approx 2\). The black vertical lines separate the intervals on which \(g_E(E)\) is increasing or decreasing

We illustrate this by showing how the example of the inducible operon with two state-dependent delays and 7 steady states in Fig. 3d is constructed from the example with one state-dependent delay and 5 steady states in Fig. 3b. The only difference between the two examples is that in (2.28) the term \(e^{-\mu \tau _I^*(M)}\) is constant in the first example, but not in the second. To make \(\tau _I\) state-dependent we take \(v_I^{min}<v_I^{max}\) and \(m_I\gg 0\), so that the translation velocity \(v_I(M)\) is close to a step function, which results in \(e^{-\mu \tau _I^*(M^*)}\) also being essentially a switching function. In Fig. 3b we identify that for \(E\approx 2\) we have \(0<g_E(E)\ll 1\) with \(g_E'(E)>0\), and we set the switching function to act at this point by using (2.27) to define \(M_{50}\) via

$$\begin{aligned} M_{50} = \dfrac{\beta _M}{{\bar{\gamma }}_M}e^{-\mu \tau _M^*(2)}f(2). \end{aligned}$$

Then using (2.27) directly we obtain \(e^{-\mu \tau _I^*(M^*)}\) as a function of \(E^*\) as shown in Fig. 4a. With this element included in \(g_E(E^*)\) the function is modified so that \(g_E'(2)<0\), and the function gains an additional maximum and minimum for \(E\approx 2\), as shown in Fig. 4b. From there, parameters can be adjusted as needed to ensure \(g_E(E)\) has a zero between each sign change of \(g_E'(E)\).

Fig. 5
figure 5

One-parameter continuations of the steady states as the parameter \(v_M^{min}\) is varied, obtained by plotting the zero contour of the function \(g_E\). All the other parameters take the same values as in the corresponding panel of Fig. 3. The red circles indicate the co-existing steady states already seen in Fig. 3. In the limit as \(v_M^{min}\rightarrow v_M^{max}\) the delay \(\tau _M\) ceases to be state-dependent (color figure online)

The function \(g_E\) can be used to effectively perform a one-parameter continuation of the steady states by varying E and one other parameter, and plotting a single contour of the function corresponding to \(g_E=0\). In Fig. 5 we demonstrate four examples of one-parameter continuation in the parameter \(v_M^{min}\) starting from the cases illustrated in Fig. 3. Given that we obtained our examples with several co-existing steady states by constructing the function \(g_E(E)\) to have multiple nearby zeros, and hence multiple local extrema close to zero, it should be no surprise to see that the steady states from Fig. 3 only co-exist over a small interval of \(v_M^{min}\) values and that some of them are destroyed in fold bifurcations. We note also that as \(v_M^{min}\) is increased, in the limit as \(v_M^{min}\rightarrow v_M^{max}\) the delay \(\tau _M\) becomes constant, and by (2.32) the number of steady states will be reduced. In particular in the cases of Fig. 5a, b when \(v_M^{min}=v_M^{max}\) there is no state-dependency in the model and there can only be 1 or 3 steady states in the repressible and inducible cases, respectively.
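The collapse of steady states as \(v_M^{min}\rightarrow v_M^{max}\) can be illustrated numerically. The sketch below counts the zeros of an illustrative \(g_E\)-type function at two values of \(v_M^{min}\); all functional forms and parameter values are hypothetical, and are not those used for Fig. 5.

```python
import math

# Hypothetical repressible example: count the zeros of a g_E-type function
# as the lower transcription velocity bound v_min is varied.
mu, a_M, v_max, c = 1.0, 3.0, 10.0, 10.0

def g_E(E, v_min):
    v = v_min + (v_max - v_min) * E**10 / (1.0 + E**10)   # increasing v_M(E)
    f = 1.0 / (1.0 + (E / 2.0)**10)                        # repressible f(E)
    return c * math.exp(-mu * a_M / v) * f - E

def n_steady_states(v_min):
    grid = [0.01 + 0.005 * k for k in range(1000)]         # E in (0, 5]
    vals = [g_E(E, v_min) for E in grid]
    return sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)

# Strong state dependence gives three co-existing steady states, while in the
# limit v_min -> v_max the delay becomes constant and, consistent with (2.32),
# only one steady state survives in the repressible case.
print(n_steady_states(1.0), n_steady_states(10.0))
```

Somewhere between these two parameter values the extra pair of steady states must disappear, which is exactly the fold-bifurcation scenario seen in the continuations of Fig. 5.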

Figure 5 indicates that the steady states of (2.15)–(2.19) undergo fold bifurcations. Hopf bifurcations are also ubiquitous in DDEs, and are already known to occur in the repressible case with constant delays. Hence in the following sections we will study the dynamics and bifurcations of the system (2.15)–(2.19), returning to the examples of this section.

2.7 Linearization by expansion

To determine the stability of the steady states considered in Sect. 2.6, we linearize the system (2.15)–(2.17) with (2.18) and (2.19) in a neighborhood of each steady state and examine the nature of the characteristic values.

This can be done rigorously using a functional analytic approach in an appropriate Banach space, and this derivation is presented in “Appendix C”. However, that approach will not be accessible to many readers, so here we present an alternative heuristic derivation using elementary techniques which arrives at exactly the same characteristic equation as in “Appendix C”.

Assuming linear behaviour of the solution for a small perturbation from the steady state \((M^*, I^*, E^*)\), we begin by setting

$$\begin{aligned} M(t)&= M^*+\mathcal {E}_M e^{\lambda t}, \end{aligned}$$
(2.33)
$$\begin{aligned} I(t)&= I^*+\mathcal {E}_I e^{\lambda t}, \end{aligned}$$
(2.34)
$$\begin{aligned} E(t)&= E^*+\mathcal {E}_E e^{\lambda t}. \end{aligned}$$
(2.35)

We denote the delays at the steady state by \(\tau _M^*(E^*)\) and \(\tau _I^*(M^*)\), as defined in (2.23), and again write \(\tau _M(t)\) and \(\tau _I(t)\) for the time varying delays on a solution close to the steady state (even though as noted after (2.19) these delay terms are properly functions in a Banach space).

From the threshold condition (2.18), Taylor expanding the integrand around the steady state we obtain

$$\begin{aligned} v_M(E^*)\tau _M^*(E^*)&= a_M = \int _{t-\tau _M(t)}^{t} v_M(E(s))ds = \int _{-\tau _M(t)}^{0} v_M(E(s+t)) ds \nonumber \\&= \int _{-\tau _M(t)}^{0} v_M(E^*+\mathcal {E}_Ee^{\lambda (s+t)}) ds \nonumber \\&= \int _{-\tau _M(t)}^{0} v_M(E^*)+v_M'(E^*)\mathcal {E}_Ee^{\lambda (s+t)}+\mathcal {O}(\mathcal {E}_E^2) ds \nonumber \\&= v_M(E^*)\tau _M(t) + \mathcal {E}_E v_M'(E^*) e^{\lambda t}\int _{-\tau _M(t)}^{0} e^{\lambda s} ds+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$
(2.36)

Note that for \(\lambda \ne 0\),

$$\begin{aligned} \int _{-a}^0 e^{\lambda s}\,ds = \frac{1}{\lambda }(1-e^{-a\lambda }) \end{aligned}$$
(2.37)

while

$$\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{1}{\lambda }(1-e^{-a\lambda })=a=\int _{-a}^0 e^{0 s}\,ds, \end{aligned}$$

so \((1-e^{-a\lambda })/\lambda \) has a removable singularity at \(\lambda =0\). Therefore we can use (2.37) for all \(\lambda \in {\mathbb {C}}\) and (2.36) becomes

$$\begin{aligned} v_M(E^*)\tau _M^*(E^*)&= v_M(E^*)\tau _M(t)\nonumber \\&\qquad +\mathcal {E}_Ee^{\lambda t} \frac{v_M'(E^*)}{\lambda }(1-e^{-\lambda \tau _M(t)})+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$
(2.38)

Notice that \(v_M(E^*)>0\) is required for \(\tau _M^*(E^*)\), defined by (2.23), to be finite; this is ensured by the assumption that \(v_M^{min}>0\). Hence we may rearrange (2.38) as

$$\begin{aligned} \tau _M(t) = \tau _M^*(E^*) -\mathcal {E}_E e^{\lambda t}\frac{v_M'(E^*)}{\lambda v_M(E^*)}(1-e^{-\lambda \tau _M(t)})+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$

Noting that this implies that \(\tau _M(t) = \tau _M^*(E^*)+\mathcal {O}(\mathcal {E}_E)\), we obtain

$$\begin{aligned} \tau _M(t) = \tau _M^*(E^*) -\mathcal {E}_E e^{\lambda t}\frac{v_M'(E^*)}{\lambda v_M(E^*)}(1-e^{-\lambda \tau _M^*(E^*)})+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$
(2.39)

With (2.39), the factor \(e^{-\mu \tau _M(t)}\) in (2.15) behaves as

$$\begin{aligned} e^{-\mu \tau _M(t)}&= e^{-\mu \tau _M^*(E^*)}e^{\mathcal {E}_E\frac{\mu v_M'(E^*)}{\lambda v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})}e^{-\mu \mathcal {O}(\mathcal {E}_E^2)} \nonumber \\&= e^{-\mu \tau _M^*(E^*)}\left[ 1+\mathcal {E}_E\frac{\mu v_M'(E^*)}{\lambda v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})+\mathcal {O}(\mathcal {E}_E^2)\right] \left[ 1+\mathcal {O}(\mathcal {E}_E^2)\right] \nonumber \\&= e^{-\mu \tau _M^*(E^*)}\left[ 1+\mathcal {E}_E\frac{\mu v_M'(E^*)}{\lambda v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})\right] +\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$
(2.40)

For the fraction term \(v_M(E)/v_M(E(t-\tau _M(t)))\) in the differential equation, we apply Taylor expansion around the steady state to \(v_M(E)\) and \(1/v_M(E(t-\tau _M(t)))\) separately and then take the product, which gives:

$$\begin{aligned} v_M(E)= & {} v_M(E^*)+v_M'(E^*)\mathcal {E}_E e^{\lambda t}+\mathcal {O}(\mathcal {E}_E^2), \\ \frac{1}{v_M(E(t-\tau _M(t)))}= & {} \frac{1}{v_M(E^*)}+ \left( -\frac{v_M'(E^*)}{v_M(E^*)^2}\right) \mathcal {E}_E e^{\lambda (t-\tau _M(t))}+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$

Thus

$$\begin{aligned} \frac{v_M(E)}{v_M(E(t-\tau _M(t)))}&= 1+\mathcal {E}_E\frac{v_M'(E^*)}{v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M(t)})+\mathcal {O}(\mathcal {E}_E^2) \nonumber \\&= 1+\mathcal {E}_E\frac{v_M'(E^*)}{v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$
(2.41)

Following the derivation of (2.40) and (2.41), similarly we have

$$\begin{aligned} e^{-\mu \tau _I(t)} = e^{-\mu \tau _I^*(M^*)}\left[ 1+\mathcal {E}_M\frac{\mu v_I'(M^*)}{\lambda v_I(M^*)}e^{\lambda t}(1-e^{-\lambda \tau _I^*(M^*)})\right] +\mathcal {O}(\mathcal {E}_M^2), \end{aligned}$$
(2.42)
$$\begin{aligned} \frac{v_I(M)}{v_I(M(t-\tau _I(t)))} = 1+\mathcal {E}_M\frac{v_I'(M^*)}{v_I(M^*)}e^{\lambda t}(1-e^{-\lambda \tau _I^*(M^*)})+\mathcal {O}(\mathcal {E}_M^2). \end{aligned}$$
(2.43)

Now we use these expansions to linearize the system (2.15)–(2.17) equation by equation. Substituting the perturbations (2.33) and (2.35) into (2.15) and using the expansions (2.40) and (2.41) we have

$$\begin{aligned} \mathcal {E}_M\lambda e^{\lambda t}&= \dfrac{d}{dt}(M^*+\mathcal {E}_Me^{\lambda t})\\&= \beta _M\left[ e^{-\mu \tau _M^*(E^*)}\left( 1+\mathcal {E}_E\frac{\mu v_M'(E^*)}{\lambda v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})\right) +\mathcal {O}(\mathcal {E}_E^2)\right] \times \\&\qquad \left[ 1+\mathcal {E}_E\frac{v_M'(E^*)}{v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})+\mathcal {O}(\mathcal {E}_E^2)\right] f(E^* +\mathcal {E}_E e^{\lambda (t-\tau _M(t))}) \\&\qquad \quad -{\bar{\gamma }}_M(M^*+\mathcal {E}_Me^{\lambda t}) \\&= \beta _M e^{-\mu \tau _M^*(E^*)}\left[ 1+\mathcal {E}_E\frac{v_M'(E^*)}{v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})(1+\frac{\mu }{\lambda })\right] \times \\&\qquad \left[ f(E^*)+ \mathcal {E}_E f'(E^*) e^{\lambda (t-\tau _M(t))}\right] -{\bar{\gamma }}_M(M^*+\mathcal {E}_Me^{\lambda t})+\mathcal {O}(\mathcal {E}_E^2) \\&= \beta _M e^{-\mu \tau _M^*(E^*)}\bigg ( f(E^*)+\mathcal {E}_E f(E^*)\frac{v_M'(E^*)}{v_M(E^*)}e^{\lambda t}(1-e^{-\lambda \tau _M^*(E^*)})(1+\frac{\mu }{\lambda }) \\&\qquad + \mathcal {E}_E f'(E^*)e^{\lambda (t-\tau _M^*(E^*))} \bigg ) -{\bar{\gamma }}_M(M^*+\mathcal {E}_Me^{\lambda t})+\mathcal {O}(\mathcal {E}_E^2). \end{aligned}$$

Using the equality (2.24) and multiplying by \(e^{-\lambda t}\), this simplifies to

$$\begin{aligned} \mathcal {E}_M\lambda&= \mathcal {E}_E\beta _M e^{-\mu \tau _M^*(E^*)}\bigg (f(E^*)\frac{v_M'(E^*)}{v_M(E^*)}(1-e^{-\lambda \tau _M^*(E^*)})(1+\frac{\mu }{\lambda })\nonumber \\&\qquad \quad + f'(E^*)e^{-\lambda \tau _M^*(E^*)}\bigg ) -\mathcal {E}_M{\bar{\gamma }}_M+\mathcal {O}(\mathcal {E}_E^2) \end{aligned}$$
(2.44)

For the second differential Eq. (2.16), substituting the perturbations (2.33) and (2.34) and the expansions (2.42) and (2.43) we similarly find that

$$\begin{aligned} \mathcal {E}_I\lambda e^{\lambda t}&= \dfrac{d}{dt}(I^*+\mathcal {E}_Ie^{\lambda t})\\&= \beta _I e^{-\mu \tau _I^*(M^*)}\left( M^*+\mathcal {E}_M e^{\lambda (t-\tau _I^*(M^*))} +\mathcal {E}_M M^*\frac{v_I'(M^*)}{v_I(M^*)}\times \right. \\&\qquad \qquad \qquad e^{\lambda t}(1-e^{-\lambda \tau _I^*(M^*)})(1+\frac{\mu }{\lambda })\Big )-{\bar{\gamma }}_I(I^* +\mathcal {E}_Ie^{\lambda t})+\mathcal {O}(\mathcal {E}_M^2). \end{aligned}$$

Using the equality (2.25) and multiplying by \(e^{-\lambda t}\) we obtain

$$\begin{aligned} \mathcal {E}_I\lambda&= \mathcal {E}_M\beta _I e^{-\mu \tau _I^*(M^*)}\left( e^{-\lambda \tau _I^*(M^*)}+M^*\frac{v_I'(M^*)}{v_I(M^*)}(1-e^{-\lambda \tau _I^*(M^*)})(1+\frac{\mu }{\lambda })\right) \nonumber \\&\qquad -{\bar{\gamma }}_I\mathcal {E}_I+\mathcal {O}(\mathcal {E}_M^2). \end{aligned}$$
(2.45)

Lastly, the case of the differential equation (2.17) is simpler, since it is linear with no delays. Substituting the perturbations (2.34) and (2.35) we have

$$\begin{aligned} \mathcal {E}_E\lambda e^{\lambda t} = \frac{d}{dt}(E^*+\mathcal {E}_Ee^{\lambda t}) = \beta _E(I^*+\mathcal {E}_I e^{\lambda t})-{\bar{\gamma }}_E(E^*+\mathcal {E}_E e^{\lambda t}). \end{aligned}$$

Using the equality (2.26), and multiplying by \(e^{-\lambda t}\) this simplifies to

$$\begin{aligned} \mathcal {E}_E\lambda = \mathcal {E}_I\beta _E-\mathcal {E}_E{\bar{\gamma }}_E. \end{aligned}$$
(2.46)

Combining (2.44), (2.45) and (2.46), and dropping the higher order terms gives the linear system

$$\begin{aligned} A(\lambda ) \begin{pmatrix} \mathcal {E}_M \\ \mathcal {E}_I \\ \mathcal {E}_E \end{pmatrix} = \left( \begin{matrix} -{\bar{\gamma }}_M-\lambda &{} 0 &{} A_{13}\\ A_{21} &{} -{\bar{\gamma }}_I-\lambda &{} 0\\ 0 &{} \beta _E &{} -{\bar{\gamma }}_E-\lambda \end{matrix}\right) \begin{pmatrix} \mathcal {E}_M \\ \mathcal {E}_I \\ \mathcal {E}_E \end{pmatrix} =0 \end{aligned}$$
(2.47)

where the \(A_{13}\) and \(A_{21}\) entries of the \(3\times 3\)-matrix \(A(\lambda )\) are defined by

$$\begin{aligned}&A_{13} = \beta _M e^{-\mu \tau _{M}^*(E^{*})} \left( f(E^*)\frac{v_{M}'(E^*)}{v_{M}(E^*)}(1-e^{-\lambda \tau _{M}^*(E^{*})})(1+\frac{\mu }{\lambda })+f'(E^{*})e^{-\lambda \tau _{M}^*(E^{*})} \right) ,\\&A_{21} = \beta _I e^{-\mu \tau _I^*(M^*)}\left( M^*\frac{v_I'(M^*)}{v_I(M^*)}(1-e^{-\lambda \tau _I^*(M^*)})(1+\frac{\mu }{\lambda }) +e^{-\lambda \tau _I^*(M^*)}\right) . \end{aligned}$$

The characteristic equation of (2.15)–(2.17) is

$$\begin{aligned} \varDelta (\lambda )=\det (A(\lambda ))=0, \end{aligned}$$
(2.48)

with \(\varDelta (\lambda )\) given by

$$\begin{aligned} \varDelta (\lambda ) = ({\bar{\gamma }}_M+\lambda )({\bar{\gamma }}_I+\lambda )({\bar{\gamma }}_E+\lambda ) -\beta _M\beta _I\beta _E e^{-\mu (\tau _{I}^*(M^*)+\tau _{M}^*(E^*))}k(\lambda ), \end{aligned}$$
(2.49)

where

$$\begin{aligned} k(\lambda )&=\left( \frac{v_M'(E^*)}{v_M(E^*)}f(E^*)( 1-e^{-\lambda \tau _M^*(E^*)})\Bigl (1+\frac{\mu }{\lambda }\Bigr ) + f'(E^*)e^{-\lambda \tau _M^*(E^*)} \right) \nonumber \\&\qquad \times \left( \frac{v_I'(M^*)}{v_I(M^*)}M^*(1- e^{-\lambda \tau _I^*(M^*)})\Bigl (1+\frac{\mu }{\lambda }\Bigr ) +e^{-\lambda \tau _I^*(M^*)} \right) . \end{aligned}$$
(2.50)

Exactly the same characteristic equation is derived completely rigorously in “Appendices A–C”, culminating in equation (C.12).

In contrast to the rigorous variational approach used in the appendices, here we assert without proof that all the quantities of interest can be written as functions of the perturbation parameters \(\mathcal {E}_M\), \(\mathcal {E}_I\) and \(\mathcal {E}_E\). For example in Eq. (2.39) we have Taylor expanded the state-dependent delay \(\tau _M\) as a function of \(\mathcal {E}_E\). To justify that rigorously requires functional analysis, and this is done in Proposition A1 in “Appendix A”.

Another drawback of the derivation above is that there is no theory to show that stability of steady states is determined by the characteristic Eq. (2.48). However, since for our model both approaches lead to the same characteristic equation, the theory relating stability to the characteristic equation applies (Hartung et al. 2006). Therefore the stability of equilibria of the system (2.15)–(2.17) is determined by characteristic values arising from (2.48).
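To make the use of the characteristic equation concrete, the following sketch locates a real characteristic value for a characteristic function of the reduced form that arises when both velocities are constant (a cubic in \(\lambda \) plus a single delayed exponential term). The decay rates, the gain c and the total delay T are hypothetical values chosen for illustration, not parameters of the model.

```python
import math

# Reduced characteristic function: cubic plus one delayed exponential term.
# All values are hypothetical (an inducible-type example with gain c > 0).
gM, gI, gE, c, T = 1.0, 1.0, 1.0, 8.0, 1.0

def Delta(lam):
    return (gM + lam) * (gI + lam) * (gE + lam) - c * math.exp(-lam * T)

# Delta(0) = 1 - 8 < 0 while Delta(2) > 0, so there is a real positive
# characteristic value and the corresponding steady state is unstable.
lo, hi = 0.0, 2.0
for _ in range(200):                 # bisection to machine precision
    mid = 0.5 * (lo + hi)
    if Delta(lo) * Delta(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(root)                          # real characteristic value near 0.62
```

Complex characteristic values, as needed to detect Hopf bifurcations, require a root finder in the complex plane rather than bisection; this is the role played by fsolve in Sect. 3.1.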

3 Numerical methods

In this section, we describe numerical methods to study the distributed state-dependent delay model (2.15)–(2.19). We would like to conduct one-parameter continuation of steady states and periodic orbits and compute local stability and bifurcations in Matlab (Mathworks 2020). The standard package for performing numerical bifurcation analysis of DDEs in Matlab is DDE-BIFTOOL (Sieber et al. 2015). Unfortunately, although it can handle constant or discrete state-dependent delays, DDE-BIFTOOL cannot be applied directly to problems with distributed state-dependent delays defined by threshold conditions such as (2.18) and (2.19). Likewise, the built-in Matlab function ddesd for solving DDE initial value problems is also only implemented for discrete delays.

In Eqs. (2.18) and (2.19) the delay \(\tau _M\) or \(\tau _I\) can be determined by adjusting the lower limit of the integral until the integral has the desired value \(a_M\). Naive numerical implementations would use a bisection or secant iteration to determine the delay, which would necessitate evaluating the integral at each step of the iteration. This would be very slow to compute and would become the main bottleneck slowing down numerical computations. Another problem in evaluating this integral is that the numerical DDE solvers that have been implemented in Matlab are all written for discrete delays and only give access to the value of the solution \(u(t-\tau _j)\) at the discrete delays, whereas to evaluate the integral in (2.18) or (2.19) we require the values of the integrand across the whole interval.

We describe below two different implementations of (2.15)–(2.19) in DDE-BIFTOOL, neither of which requires an iteration to find the delays, and also show how to apply ddesd to solve the initial value problem.

3.1 Steady state computations: linearization correction

As discussed in Sect. 2.6, we can obtain steady states of (2.15)–(2.19) from the scalar function \(g_E(E^*)\) defined in (2.28). Any solution to \(g_E(E^*)=0\) gives the E component of a steady state with corresponding M and I components given by (2.24) and (2.26).

We would like to conduct one-parameter continuation of steady states and compute local stability using DDE-BIFTOOL, but as noted above it cannot be directly applied to solve DDEs when the delay is defined by a threshold condition. Nevertheless, at a steady state the threshold integral conditions (2.18) and (2.19) become integrals of constant functions. Consequently, the delays are defined by (2.23) and can be treated as discrete delays. Therefore we are able to implement the system (2.15)–(2.17) together with (2.23) in DDE-BIFTOOL and use it to continue the steady states. This approach also allows us to locate fold bifurcations of steady states. However, although replacing (2.18) and (2.19) by (2.23) preserves the existence of the steady states, the characteristic values, and hence the stability of the steady states, are altered (as detailed in Wang (2020) for a related model). The reason is that the integration of the exponential perturbations along the solution (see Sect. 2.7) is not included.

To recover the correct stability information when using the modified problem (2.15)–(2.17) with discrete delays (2.23), we perform a linearization correction using the characteristic equation. The characteristic roots of the steady state of the modified problem are taken as “seed values” which are then corrected by applying the Matlab nonlinear system solver fsolve to the exact characteristic Eq. (2.48) for the original model (2.15)–(2.19). This works well at the majority of points along the continuation branch; however, it behaves poorly at some points, leading to spurious bifurcations. In addition, sometimes the algorithm does not converge, and sometimes the solver converges to an already found characteristic value. As well as creating duplicates of characteristic values, this leads to some characteristic values being missed, and so does not reliably classify bifurcations.

To deal with these issues, we remove any characteristic values at which the algorithm fails to converge, as well as any duplicate values. To resolve the issue of missing characteristic values, we use the corrected characteristic values from the previous point on the branch as a second set of “seed values” to compute additional characteristic values, where again we remove duplicates. The removal of duplicate characteristic values is somewhat dangerous, because it could result in missing genuine instances of characteristic values with multiplicity larger than one. However, in practice, we did not encounter this problem.
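A sketch of this seed-and-correct workflow is given below, with a generic complex Newton iteration standing in for fsolve and an illustrative cubic-plus-delay characteristic function standing in for (2.48); the seed values are hypothetical stand-ins for roots of the modified discrete-delay problem.

```python
import cmath

# Illustrative characteristic function (NOT the model's Delta from (2.48)).
def Delta(lam):
    return (1 + lam)**3 - 8 * cmath.exp(-lam)

def dDelta(lam):
    return 3 * (1 + lam)**2 + 8 * cmath.exp(-lam)

def correct(seed, tol=1e-12, maxit=50):
    """Refine a seed characteristic value by Newton iteration."""
    lam = complex(seed)
    for _ in range(maxit):
        step = Delta(lam) / dDelta(lam)
        lam -= step
        if abs(step) < tol:
            return lam
    return None                       # discard seeds that fail to converge

seeds = [0.5, 0.7]                    # two seeds converging to the same root
roots = []
for s in seeds:
    lam = correct(s)
    if lam is not None and all(abs(lam - r) > 1e-8 for r in roots):
        roots.append(lam)             # keep only distinct corrected roots
print(roots)                          # a single corrected root near 0.62
```

As in the text, discarding duplicates risks losing genuine multiple characteristic values, so in practice a second set of seeds (from the previous continuation point) is used to fill in any roots that are missed.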

Once the corrected characteristic roots are computed at each steady state, we obtain the correct stability information. Hopf bifurcations occur when two complex conjugate characteristic values cross the imaginary axis, or equivalently when the number of characteristic values with positive real part changes by two. Fold bifurcations happen when a real characteristic value changes sign, which we can detect by the number of characteristic values with positive real part changing by one. Because we obtain the stability from a modified problem, and did not alter the DDE-BIFTOOL subroutines whose built-in linearization is inappropriate for our model, we are not able to use additional DDE-BIFTOOL functionality, such as the routines that determine the criticality of Hopf bifurcations and perform normal form computations.

3.2 Steady state and periodic orbit computation: delay discretization

While the approach of Sect. 3.1 allows us to compute the stability of steady states and hence to detect fold and Hopf bifurcations, it cannot be used to compute periodic orbits, because the delays (2.18)–(2.19) would not be constant on periodic orbits.

The only way to tackle the full distributed state-dependent delay operon model (2.15)–(2.19) is to evaluate the integrals in the threshold conditions (2.18)–(2.19). While this cannot be done exactly in DDE-BIFTOOL, it is enough to evaluate the integral to sufficient accuracy using a numerical quadrature scheme.

As the two delays are of similar form, we will describe the method for approximating \(\tau _M\) by discretizing the integral in (2.18) using the composite trapezoidal method and seeking the value of \(\tau _M\) that satisfies (2.18). To do this we introduce extra “dummy” delays as follows.

With \(a_M\) fixed and \(v_M(E) \in [v_M^{min}, v_M^{max}]\), it follows that \(\tau _M \in [a_M/v_M^{max}, a_M/v_M^{min}]\). To obtain the state-dependent delay \(\tau _M\) that satisfies the threshold condition (2.18), we discretize the interval \(\left[ t-a_M/v_M^{max}, t \right] \) uniformly with a sequence of mesh points

$$\begin{aligned} t=x_0> x_1> \cdots > x_N=t-\frac{a_M}{v_M^{max}} \end{aligned}$$

and define N constant “dummy” delays

$$\begin{aligned} \tau _j=t-x_j=t-\frac{j}{N}\frac{a_M}{v_M^{max}}, \qquad j=1, 2,\ldots , N. \end{aligned}$$

In particular, as \(\tau _M \geqslant a_M/v_M^{max}\), there is no need for detection of the delay \(\tau _M\) over the interval \([t-a_M/v_M^{max}, t]\).

On the interval \([t-a_M/v_M^{min}, t-a_M/v_M^{max}]\) where the delay \(\tau _M\) lies, we detect it as follows. Divide the interval \([t-a_M/v_M^{min}, t-a_M/v_M^{max}]\) into N equal width subintervals,

$$\begin{aligned} t-\frac{a_M}{v_M^{max}}=x_N> x_{N+1}>\cdots > x_{2N}=t-\frac{a_M}{v_M^{min}}, \end{aligned}$$

which implies that

$$\begin{aligned} x_{N+j} = t-\frac{a_M}{v_M^{max}}-j \frac{a_M}{N} (\frac{1}{v_M^{min}}-\frac{1}{v_M^{max}}), \qquad j=1, 2,\ldots , N. \end{aligned}$$

We then define another N constant “dummy” delays

$$\begin{aligned} \tau _{N+j}=t-x_{N+j}, \qquad j=1, 2,\ldots , N. \end{aligned}$$

To compute \(\tau _M\), we take advantage of the functionality of DDE-BIFTOOL which allows state-dependent delays to be defined as functions of the other delays and the solution values at those delays. We let

$$\begin{aligned} J(j)=\int _{x_j}^{x_0} v_M(E(s))ds = \int _{t- \tau _j}^t v_M(E(s))ds, \qquad j=1, 2, \ldots , 2N \end{aligned}$$

and let \(J_h(j)\) be the numerical approximation of J(j) using the composite trapezoidal rule,

$$\begin{aligned} J_h(j) = \sum _{k=0}^{j-1} \frac{1}{2}\left( v_M(E(x_k))+v_M(E(x_{k+1}))\right) (x_k-x_{k+1}). \end{aligned}$$

We look for the largest j such that \(J_h(j) \leqslant a_M\). Since

$$\begin{aligned} a_M&> \int _{t-a_M/v_M^{max}}^t v_M(E(s))ds \\&\approx \frac{a_M}{N v_M^{max}}\left[ \frac{1}{2}v_M(E(t))+\frac{1}{2}v_M(E(t-\frac{a_M}{v_M^{max}}))+\sum _{j=1}^{N-1}v_M(E(t-\tau _j))\right] \\&= J_h(N), \end{aligned}$$

we successively add subintervals to the integral until we find j such that

$$\begin{aligned} a_M \geqslant J_h(j) \quad \text {and} \quad a_M < J_h(j+1). \end{aligned}$$
(3.1)

With such a j, we have \(\tau _M\in [\tau _j,\tau _{j+1})\). To locate \(\tau _M\) more precisely, consider

$$\begin{aligned} a_M = \int _{t-\tau _M}^t v_M(E(s))ds =\int _{t-\tau _M}^{t-\tau _j} v_M(E(s))ds+\int _{t-\tau _j}^t v_M(E(s))ds, \end{aligned}$$

which implies

$$\begin{aligned} a_M-J_h(j) \approx \int _{t-\tau _M}^{t-\tau _j} v_M(E(s))ds. \end{aligned}$$
(3.2)

Applying the trapezoidal rule again, we have

$$\begin{aligned} \int _{t-\tau _M}^{t-\tau _j} v_M(E(s)) ds \approx \frac{\tau _M-\tau _j}{2}[v_M(E(t-\tau _M))+v_M(E(t-\tau _j))], \end{aligned}$$
(3.3)

and using a linear interpolation of \(v_M(E(s))\) on the subinterval of width \(h=\tau _{j+1}-\tau _j\) (which is consistent with the trapezoidal method) we have

$$\begin{aligned} v_M(E(t-\tau _M))&= v_M(E(t-\tau _j))\nonumber \\&\qquad +\frac{\tau _M-\tau _j}{h}[v_M(E(t-\tau _{j+1}))-v_M(E(t-\tau _j))]. \end{aligned}$$
(3.4)

Substituting (3.3) and (3.4) into (3.2) gives

$$\begin{aligned} a_M- J_h(j)&\approx \frac{\tau _M-\tau _j}{2}\Big [\Big (2-\frac{\tau _M-\tau _j}{h}\Big )v_M(E(t-\tau _j)) \\&\qquad \qquad \qquad \qquad +\frac{\tau _M-\tau _j}{h}v_M(E(t-\tau _{j+1}))\Big ]. \end{aligned}$$

Rearranging this we find that \(\tau _M\) is given as the solution of \(k(\tau _M)=0\) where

$$\begin{aligned} k(\tau _M)&= \frac{(\tau _M-\tau _j)^2}{2h}[v_M(E(t-\tau _j))-v_M(E(t-\tau _{j+1}))] \nonumber \\&\qquad \qquad \qquad \qquad -(\tau _M-\tau _j)v_M(E(t-\tau _j))+(a_M-J_h(j)). \end{aligned}$$
(3.5)

Note that (3.5) is a quadratic function of \(\tau _M\) and the condition (3.1) guarantees that \(k(\tau _M)\) has a zero for \(\tau _M \in [\tau _j, \tau _{j+1}]\). Applying the quadratic formula to (3.5), we obtain the solution

$$\begin{aligned} \tau _M&= \tau _j + \frac{h\,v_{M}(E(x_j))}{v_M(E(x_j))-v_M(E(x_{j+1}))} \nonumber \\&\qquad - \frac{h\sqrt{v_{M}(E(x_j))^2-\frac{2}{h}(a_M-J_h(j))(v_{M}(E(x_j))-v_{M}(E(x_{j+1})))}}{v_M(E(x_j))-v_M(E(x_{j+1}))} \end{aligned}$$
(3.6)

where the minus sign in the quadratic formula ensures that the root \(\tau _M\in [\tau _j,\tau _{j+1}]\) whether \(k(\tau _M)\) is a concave up or concave down parabola.
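As a concrete check of this construction, the sketch below implements the dummy-delay mesh, the bracketing search (3.1), and the quadratic solve, keeping the subinterval width \(h=\tau _{j+1}-\tau _j\) explicit. The integrand standing in for \(v_M(E(s))\) is hypothetical linear test data, for which the trapezoidal rule and the interpolation step are exact, so the recovered delay should match the exact threshold root; the degenerate case of a (near-)constant integrand, where the quadratic collapses to a linear equation, is also handled.

```python
import math

# Recover tau_M from a_M = int_{t - tau_M}^{t} v_M(E(s)) ds at t = 0.
a_M, v_min, v_max, N = 1.5, 0.5, 2.0, 40

def v(s):                       # stands for v_M(E(s)); hypothetical test data
    return 2.0 + 0.5 * s        # linear on the relevant interval s <= 0

# Dummy delays: uniform on [0, a_M/v_max], then uniform on the interval
# [a_M/v_max, a_M/v_min] in which tau_M must lie.
tau = [j / N * a_M / v_max for j in range(N + 1)]
tau += [a_M / v_max + j / N * a_M * (1 / v_min - 1 / v_max)
        for j in range(1, N + 1)]

def J_h(j):                     # composite trapezoidal rule over [-tau_j, 0]
    return sum(0.5 * (v(-tau[k]) + v(-tau[k + 1])) * (tau[k + 1] - tau[k])
               for k in range(j))

j = max(k for k in range(2 * N) if J_h(k) <= a_M)   # J_h(j) <= a_M < J_h(j+1)
h = tau[j + 1] - tau[j]
vj, vj1 = v(-tau[j]), v(-tau[j + 1])
R = a_M - J_h(j)
if abs(vj - vj1) < 1e-12 * vj:                      # degenerate: constant integrand
    tau_M = tau[j] + R / vj
else:                                               # quadratic formula, minus sign
    tau_M = tau[j] + h * (vj - math.sqrt(vj**2 - 2 * R * (vj - vj1) / h)) / (vj - vj1)

print(tau_M)     # exact threshold root 4 - sqrt(10) = 0.8377... is recovered
```

For this linear test integrand the exact threshold condition reads \(2\tau -\tau ^2/4=3/2\), with root \(\tau =4-\sqrt{10}\), which the discretized scheme reproduces to rounding error.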

With this implementation we are able to apply DDE-BIFTOOL directly to an approximation to the system (2.15)–(2.19). Since the stability computations are carried out within DDE-BIFTOOL (as opposed to the linearization correction technique described in Sect. 3.1) we are able to use the full functionality of DDE-BIFTOOL which allows us to determine criticality of bifurcations and also to compute branches of periodic orbits emanating from Hopf bifurcations.

The choice of parameters for the numerical discretization is somewhat delicate. If the discretization is too coarse convergence issues arise in the branch continuation, while finer discretizations allow for a smoother continuation of branches with larger continuation steps, but at the cost of each step being very slow. This arises because the numerical linear algebra problems at the heart of the approximate Newton method in each DDE-BIFTOOL continuation step increase in complexity with both the number of delays and the size of the collocation problem. The total number of delays in the discretized problem is \(2N+2\), composed of the 2N dummy delays, the (assumed constant) delay \(\tau _I\), and the computed state-dependent delay \(\tau _M\) given by (3.6).

For the computations in Sect. 4 we use degree 4 or 5 collocation polynomials and 20 to 40 mesh intervals resulting in 80 to 200 collocation points on the periodic orbit. For stability computations of steady states we took \(N=32\) which results in 65 constant delays and one state-dependent delay. For computation of periodic orbits we took \(N=48\) resulting in close to one hundred delays in the discretized problem.

The computation of each step of the continuation is quite slow compared to the implementation of Sect. 3.1. The algorithms give consistent results on problems for which both can be applied (with bifurcation points agreeing to between 3 and 5 significant digits of accuracy), but the algorithm of this section is more widely applicable. For the results shown in Sect. 4, we mainly use the discretization method described in this section, with the linearization correction method of Sect. 3.1 used to validate the results.

3.3 Solving initial value problems (IVPs)

Simulating IVPs allows us to investigate the dynamics in parameter regimes where none of the steady states are stable. In Sect. 4 we find stable periodic orbits which do not arise from Hopf bifurcations by following this procedure.

The Matlab routine ddesd solves DDE initial value problems with discrete state-dependent delays. While we would like to use ddesd to study (2.15)–(2.19), we need to address the issue of implicitly defined delays.

For simplicity, as in the preceding sections, we treat \(\tau _I\) as a constant delay which is defined by (2.23). We deal with the state-dependent delay \(\tau _M\) defined by (2.18) by differentiating the integral in (2.18) with respect to t to obtain

$$\begin{aligned} 0=v_M(E(t))-\Big (1-\frac{d\tau _{M}}{dt}(t)\Big )v_M(E(t-\tau _M)), \end{aligned}$$

which implies that

$$\begin{aligned} \frac{d\tau _{M}}{dt}(t)= 1-\frac{v_M(E(t))}{v_M(E(t-\tau _M))}. \end{aligned}$$
(3.7)

We can thus solve the system (2.15)–(2.19) as an initial value problem by considering the system of three equations (2.15)–(2.17) augmented by (3.7) to define the evolution of the state-dependent delay \(\tau _M\) along with the constant delay \(\tau _I=a_I/v_I\) where \(v_I=v_I^{min}= v_I^{max}\). The case where \(\tau _I\) is state-dependent can be handled similarly.

Although this trick avoids the need to evaluate the integral in (2.18) during the simulation, care needs to be taken, since information is lost when differentiating: while a solution of (2.18) also solves (3.7), the converse is not necessarily true. To ensure that our solution of (3.7) also solves (2.18), we specify history functions so that (2.18) is satisfied at time \(t=t_0\). In particular, we require \(\tau _M(t_0)\) to satisfy

$$\begin{aligned} a_M= \int _{t_0-\tau _M(t_0)}^{t_0} v_M(E(s)) ds. \end{aligned}$$
(3.8)

This will depend on the choice of the history function E(t) defined for \(t \leqslant t_0\). In general we need to evaluate this integral only once. Even this can be avoided if \(E(t)=E_0\) is constant for \(t \leqslant t_0\) since then (3.8) simplifies to \(a_M=\tau _M(t_0) v_M(E_0)\) which implies that

$$\begin{aligned} \tau _M(t_0)=\frac{a_M}{v_M(E_0)}. \end{aligned}$$
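For a non-constant history the initial delay must instead be found numerically from (3.8). A sketch of this computation follows, with hypothetical \(v_M\) and history function: since the integrand is positive and bounded between the extreme velocities, the residual is strictly increasing in \(\tau\) and the root is bracketed, so a scalar root finder suffices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Hypothetical constants, velocity, and history (illustrative only)
a_M, vmin, vmax = 1.0, 0.2, 1.0
t0 = 0.0

def v_M(E):                          # assumed Hill-type transcription velocity
    return vmin + (vmax - vmin) / (1.0 + E**3)

def E_hist(s):                       # assumed history, defined for s <= t0
    return 0.5 + 0.1 * np.sin(s)

def residual(tau):                   # integral in (3.8) minus a_M
    val, _ = quad(lambda s: v_M(E_hist(s)), t0 - tau, t0)
    return val - a_M

# The integrand lies in [vmin, vmax], so the residual is strictly increasing
# in tau and the root is bracketed by a_M/vmax and a_M/vmin.
tau0 = brentq(residual, a_M / vmax, a_M / vmin)
print(tau0)

# Sanity check: for a constant history E(s) = E0 the root reduces to the
# closed form a_M / v_M(E0) given above.
E0 = 0.5
assert abs(brentq(lambda tau: tau * v_M(E0) - a_M, a_M / vmax, a_M / vmin)
           - a_M / v_M(E0)) < 1e-10
```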

Although we do not need to solve the integral threshold condition (2.18) during the numerical computation, once a numerical solution is computed it is very easy to evaluate the integral on the right hand side of (2.18) to check how close it is to \(a_M\). In all the examples presented in Sect. 4, this defect is smaller than \(10^{-5}\) at the final time, indicating 5 or more digits of accuracy in the computation of the threshold condition across the interval of computation.

To find the period of a stable periodic solution, we take advantage of the idea of a Poincaré section. While ddesd has a built-in event detection facility which could be used to detect periodicity, it slows down the numerical solution drastically. Instead, once the simulation is complete, we fit a spline to the numerical solution and use the spline functions within Matlab to obtain the crossings of the Poincaré section and the maxima and minima of solutions, and hence the period and amplitude information.
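This post-processing step can be sketched as follows, here applied to a synthetic periodic signal standing in for a computed solution \(E(t)\), and using Python splines rather than Matlab's: upward crossings of the Poincaré section give the period, and roots of the spline's derivative give the extrema.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic periodic signal with a known period (illustrative stand-in)
period_true = 7.0
t = np.linspace(0.0, 21.0, 2001)
E = 0.5 + 0.3 * np.sin(2 * np.pi * t / period_true)

level = 0.5                                   # Poincare section: E = level
cs = CubicSpline(t, E - level)                # shift so section crossings are roots
dcs = cs.derivative()
roots = cs.roots(extrapolate=False)
up = roots[dcs(roots) > 0]                    # keep upward crossings only
period = np.mean(np.diff(up))                 # period estimate

ext = dcs.roots(extrapolate=False)            # interior extrema of the spline
amp_max = cs(ext).max() + level
amp_min = cs(ext).min() + level
print(period, amp_max, amp_min)
```

Averaging over several successive crossings, as done here, also damps the small interpolation error of any single crossing.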

Once we find a stable periodic orbit, the solution may be continued as one parameter is varied, either by performing additional numerical IVP solves to find a periodic orbit for a perturbed parameter set, or by importing the numerically computed periodic solution into DDE-BIFTOOL and using the discretization of Sect. 3.2 to continue the solution. The DDE-BIFTOOL discretization has the advantage that it can equally well find stable and unstable periodic orbits, and we will use it in Sect. 4 to detect fold bifurcations of periodic orbits where the stability of the periodic orbit changes.

ddesd can only be used for the continuation of stable periodic orbits, which is useful for validating the DDE-BIFTOOL results. To perform continuation with ddesd we use the stable periodic solution at each iteration as the history function for the next computation when the continuation parameter is slightly changed. For a small perturbation in the continuation parameter value, we expect to converge to the stable periodic solution, which should still lie in its basin of attraction. Care needs to be taken when doing this: after perturbing the parameters we need to recompute the initial value of the state-dependent delay \(\tau _M(t_0)\) so that the integral condition (3.8) is satisfied at the initial time \(t_0\) with the new parameter set and the history function given by the numerical solution for the previous parameter set.

4 Dynamics of repressible and inducible operons with state-dependent delays

In this section, we explore the dynamics of the Goodwin operon model (2.15)–(2.19) incorporating state-dependent delays. We will mainly focus on the case where the transcription delay \(\tau _M\) is state-dependent and the translation delay \(\tau _I\) is constant. Then equations (2.15)–(2.19) simplify to

$$\begin{aligned} \begin{aligned} \dfrac{dM}{dt}(t)&= \beta _M e^{-\mu \tau _M(t)} \dfrac{v_M(E(t))}{v_M(E(t-\tau _M(t)))} f(E(t-\tau _M(t))) -{\bar{\gamma }}_M M(t), \\ \dfrac{dI}{dt}(t)&= \beta _I e^{-\mu \tau _I} M(t-\tau _I) -{\bar{\gamma }}_I I(t), \\ \dfrac{dE}{dt}(t)&= \beta _E I(t) -{\bar{\gamma }}_E E(t), \\ a_M&= \int _{t-\tau _M(t)}^{t} v_M(E(s)) ds=\int _{-\tau _M(t)}^{0} v_M(E(t+s)) ds, \end{aligned} \end{aligned}$$
(4.1)

with \(\tau _I=a_I/v_I\) where \(v_I=v_I^{min}= v_I^{max}\). The respective functions for a repressible or inducible system are defined in Table 1. We will treat the minimum transcription velocity, \(v_M^{min}\), as a bifurcation parameter.
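To show the structure of the augmented system, here is a self-contained method-of-steps sketch of (4.1) together with (3.7). All parameter values and the Hill forms of \(f\) and \(v_M\) are hypothetical stand-ins for the entries of Tables 1 and 2, so the sketch illustrates the computation rather than reproducing the paper's results.

```python
import numpy as np

# Hypothetical parameters and Hill-type functions (illustrative only)
beta_M, beta_I, beta_E = 1.0, 1.0, 1.0
gam_M, gam_I, gam_E = 0.5, 0.5, 0.5
mu = 0.1
a_M, a_I, v_I = 1.0, 1.0, 1.0
vmin, vmax = 0.2, 1.0

def v_M(E):                     # assumed transcription velocity, decreasing in E
    return vmin + (vmax - vmin) / (1.0 + E**3)

def f(E):                       # assumed repressible regulation function
    return 1.0 / (1.0 + E**3)

tau_I = a_I / v_I               # constant translation delay
dt, T = 0.005, 20.0
N = int(round(T / dt))
t = np.linspace(0.0, T, N + 1)
M = np.zeros(N + 1); I = np.zeros(N + 1); E = np.zeros(N + 1); tau = np.zeros(N + 1)
M0, I0, E0 = 0.0, 0.0, 0.5     # constant history for t <= 0
M[0], I[0], E[0] = M0, I0, E0
tau[0] = a_M / v_M(E0)          # initial delay from (3.8)

def hist(arr, k, s, const):     # evaluate a state component at a past time s
    if s <= 0.0:
        return const            # constant history
    return np.interp(s, t[:k + 1], arr[:k + 1])

def rhs(s, y, k):               # right-hand side of (4.1) augmented by (3.7)
    Mv, Iv, Ev, tv = y
    E_lag = hist(E, k, s - tv, E0)
    M_lag = hist(M, k, s - tau_I, M0)
    ratio = v_M(Ev) / v_M(E_lag)
    return np.array([
        beta_M * np.exp(-mu * tv) * ratio * f(E_lag) - gam_M * Mv,
        beta_I * np.exp(-mu * tau_I) * M_lag - gam_I * Iv,
        beta_E * Iv - gam_E * Ev,
        1.0 - ratio])           # equation (3.7)

for k in range(N):              # Heun's method; both delays exceed dt, so
    y = np.array([M[k], I[k], E[k], tau[k]])   # lagged values lie in the past
    f1 = rhs(t[k], y, k)
    f2 = rhs(t[k + 1], y + dt * f1, k)
    M[k + 1], I[k + 1], E[k + 1], tau[k + 1] = y + 0.5 * dt * (f1 + f2)

# a posteriori defect check of the threshold condition (2.18) at final time
s_grid = np.linspace(T - tau[-1], T, 2001)
v_vals = v_M(np.array([hist(E, N, s, E0) for s in s_grid]))
defect = abs(np.sum(0.5 * (v_vals[1:] + v_vals[:-1]) * np.diff(s_grid)) - a_M)
print(defect)
```

The a posteriori defect check at the end plays the role of the accuracy test described in Sect. 3.3.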

4.1 Repressible operon with one state-dependent delay

Recall that when there are no state-dependent delays there are only two possibilities for a repressible system. Namely there is either a globally stable steady state, or a globally stable limit cycle which arises through a supercritical Hopf bifurcation from the steady state. We already showed in Sect. 2.6 (see Figs. 3a and 5a) that it is possible for a repressible system with one state-dependent delay to have multiple steady states, as well as fold bifurcations of steady states. In this section we will explore the dynamics of the repressible system in more depth to reveal the possible dynamics and bifurcations that may arise.

Fig. 6
figure 6

Bifurcation diagram of the model (4.1) for a repressible system, with parameters defined in Table 2 except \(v_M^{min}\) which is taken as the bifurcation parameter. Solid lines represent stable objects including stable steady state (in green) and stable limit cycle (maximum amplitude in red and minimum amplitude in blue). Steady states are represented using the E-component of the solution, and the amplitude of periodic solutions is taken from the maximum and minimum of the E(t) on the periodic solution. Dashed lines represent unstable objects including unstable steady states (depending on the number of eigenvalues with positive real part, green for one, black for two and gray for three and more) and an unstable limit cycle. Bifurcations are listed in Table 3 (color figure online)

Fig. 7
figure 7

Repressible system (4.1) with parameters as defined in Table 2 showing the orbits from Fig. 6 at \(v_M^{min}=0.01\). a A projection of the phase-space dynamics into the ME plane in \(\mathbb {R}^2\) with curves formed by the points (M(t), E(t)), \(t \in \mathbb {R}\) along periodic solutions (M(t), I(t), E(t)), with squares denoting steady states (colour-coded according to the dimension of their unstable manifold). b The three components of the stable periodic solution (color figure online)

Fig. 8
figure 8

Periodic orbits from the bifurcation diagram in Fig. 6. Left column: stable periodic orbits. Right column: unstable periodic orbits. The colormap in each column indicates values of the continuation parameter \(v_M^{min}\) (color figure online)

We begin by returning to the example from Sect. 2.6 and consider the state-dependent delay system (4.1) with the repressible parameter set defined in Table 2. The bifurcation diagram in Fig. 6 was computed using DDE-BIFTOOL as detailed in Sect. 3, and extends the diagram previously shown in Fig. 5a to show steady state solutions, periodic orbits along with their stability, as well as Hopf and fold bifurcations. These bifurcations are listed in Table 3.

Table 3 Steady state bifurcations seen on the branches in Fig. 6

When \(v_M^{min} = v_M^{max}\) both delays \(\tau _M\) and \(\tau _I\) are constant, and there can only be one steady state. With the repressible parameter values in Table 2 this steady state is stable. As \(v_M^{min}\) is decreased there is a fold bifurcation at \(v_M^{min}=0.0174\) giving rise to a pair of additional steady states, one of which is stable. Therefore there is bistability between steady states for the repressible model with \(\tau _M\) state-dependent. However, the bistability region is very narrow, since at \(v_M^{min}=0.0172\) there is a Hopf bifurcation from one of the steady states giving rise to a stable periodic orbit. Consequently, for \(v_M^{min}<0.0172\) there is bistability between a steady state and a limit cycle.

There is another Hopf bifurcation at \(v_M^{min}=0.0162\) that gives rise to an unstable limit cycle. Unstable periodic orbits are unlikely to be detected via numerical simulation, but it is possible to compute and follow the unstable periodic orbits in DDE-BIFTOOL for \(v_M^{min} < 0.0162\) as shown in Fig. 6. This Hopf bifurcation results in the coexistence of a stable steady state, two unstable steady states, a stable limit cycle and an unstable limit cycle.

Figure 7a shows these coexisting objects at \(v_M^{min} = 0.01\) in a projection of phase space onto the ME plane. Since DDEs define infinite dimensional dynamical systems, low dimensional projections of phase space are often used to visualise dynamics, but the projection will, in general, not be one-to-one. Therefore some orbits may appear to intersect in the projection, even though that is impossible in phase space due to uniqueness of solutions. As an illustration of the information that is lost in projection, consider the stable limit cycle at \(v_M^{min}=0.01\), which is shown over one period in Fig. 7b, but is represented by the closed green curve in Fig. 7a and by just two points in Fig. 6.

Figure 8 shows the evolution of the stable and unstable limit cycles generated in the Hopf bifurcations as \(v_M^{min}\) decreases. Illustrated are the E component of the limit cycle for different values of \(v_M^{min}\) as well as the transcription velocity \(v_M(E(t))\) and the delay \(\tau _M\) as functions of t on the periodic solution. Comparing the two columns of Fig. 8 we see that the stable limit cycles remain fairly sinusoidal over the parameter range, while the unstable limit cycles have larger period than the stable ones and also larger ratios between the maximum and minimum values of the time-dependent components shown.

4.1.1 Homoclinic bifurcation

Now we change two parameter values from the previous example and consider the repressible model (4.1) with parameter values in Table 2 except for \(n=15\) and \(v_M^{max}=1\). We again take \(v_M^{min}\) as the bifurcation parameter.

When \(v_M^{min} = v_M^{max} =1\) both delays \(\tau _M\) and \(\tau _I\) are constant with \(\tau _M=\tau _I=1\). In this case the constant delay repressible model has an unstable steady state and a globally stable limit cycle. This limit cycle can be found by simulating the DDE system (as described in Sect. 3.3) using ddesd and then continuing the solution using DDE-BIFTOOL (see Sect. 3.2).

Fig. 9
figure 9

Bifurcation diagram of the model (4.1) for a repressible system with constant \(\tau _I\). Parameter values are as in Table 2 except \(n=15\), \(v_M^{max}=1\) and \(v_M^{min}\). Line specifications can be found in Fig. 6

When the parameter value \(v_M^{min}\) is decreased the delay \(\tau _M\) becomes state-dependent and the amplitude of the stable periodic orbit gradually increases, as shown in the bifurcation diagram in Fig. 9. Bifurcations are listed in Table 4. Similar to the previous example, there is a fold bifurcation when \(v_M^{min}\) is very small which leads to two additional steady states, one of which is stable. Thus in this example we obtain two unstable steady states which co-exist with a single stable steady state. There is also an unstable limit cycle generated by a Hopf bifurcation, also similar to the previous example. We are not able to find stable limit cycles that co-exist with the stable steady state.

Table 4 Steady state bifurcations seen in Fig. 9
Fig. 10
figure 10

a The period of the stable periodic orbit (shown in Fig. 9) grows dramatically as \(v_M^{min}\) decreases. b Periodic orbit at \(v_M^{min} = 0.03\). c Projection of the phase space dynamics into the ME plane at \(v_M^{min} = 0.0197\). The open square marks the steady state \((M^*, E^*)\) at the fold bifurcation. d Periodic orbit at \(v_M^{min} = 0.0197\)

This example differs from the previous example in the behaviour of the stable limit cycle. We are able to find the limit cycle only for \(v_M^{min}\ge 0.0197\), with the period increasing dramatically as \(v_M^{min}\rightarrow 0.0197\) as shown in Fig. 10a, which suggests that a homoclinic bifurcation may occur. For \(v_M^{min} = 0.03\) the stable limit cycle is shown in Fig. 10b, and appears to behave like a relaxation oscillator with \((M(t),I(t),E(t))\approx (0,0,0)\) for much of the time, with one burst of production each period. This periodic solution may have an interesting biological interpretation (see Sect. 5). Namely, the burst of transcription is followed in short succession by a burst of protein production, and this protein represses the initiation of mRNA transcription for the majority of the period. Only when this repression is released does another burst of transcription follow.

The last limit cycle that we are able to compute for \(v_M^{min}=0.0197\) is shown in Fig. 10c, d. If there were a homoclinic orbit then the limit cycle would have to approach a saddle-like steady state. However, in the phase space plot in panel (c), the periodic orbit is always far from the only steady state (denoted by the solid square) that exists for \(v_M^{min}=0.0197\). On the other hand, we do observe that the orbit passes through the region of phase space containing the ‘ghost’ of the saddle steady state destroyed in the fold bifurcation at \(v_M^{min}=0.01961\). Panel (d) also shows the solution close to this ghost steady state for \(t\in (20,180)\), which is most of the period.

In this example it seems that a homoclinic bifurcation occurs very close to the fold bifurcation where the steady state with saddle stability is destroyed. This suggests that our parameter set is close to a higher co-dimension bifurcation where the homoclinic and fold bifurcations coincide. We investigate this further in the next example.

4.1.2 Zero-Hopf bifurcation and 3DL transition

Next we change a single parameter value from the example shown in Figs. 9 and 10 to consider the model (4.1) in the repressible case with the Hill coefficient in the transcription velocity \(m=15\) (in both the previous examples we took \(m=3\)). All the other parameter values remain the same as in the previous example, so \(n=15\), \(v_M^{max}=1\) and the rest of the parameters as defined in Table 2. The resulting bifurcation diagram is shown in Fig. 11, and the bifurcations are listed in Table 5.

Fig. 11
figure 11

Bifurcation diagram of the model (4.1) for a repressible system with constant \(\tau _I\). Parameter values are as in Table 2 except \(m=n=15\), \(v_M^{max}=1\) and \(v_M^{min}\). a Line specifications can be found in Fig. 6. b The same as a except the stable and unstable periodic orbits are represented as a solid red and blue dashed curve respectively using the 1-norm (4.2) of the periodic solution (color figure online)

Table 5 Bifurcation information associated with Fig. 11

There are several significant differences between the bifurcation diagram in Fig. 11 and the previous case in Fig. 9. Considering first just the steady states, we see that there is an additional fold bifurcation and that all the steady states now lie on a single continuous branch of steady states with two fold bifurcations. As in the previous example there is a single segment of stable steady states, but it loses stability in a subcritical Hopf bifurcation at \(v_M^{min}=0.072792\) whereas in the previous example the stable steady state was destroyed in a fold bifurcation. Comparing the insets in Figs. 9 and 11 we see that the Hopf and the fold bifurcation both occur in each example but, importantly, their order on the branch is reversed. Therefore there must be an intermediate value of \(m\in (3,15)\) where the two bifurcations will coincide in a so-called zero-Hopf or fold-Hopf bifurcation. The codimension-two zero-Hopf bifurcation is known to generate homoclinic orbits and bifurcations (Kuznetsov 2004), which is further evidence for the existence of homoclinic orbits in the state-dependent delay operon model (4.1).

Fig. 12
figure 12

a, b Stable, and, c, d unstable branches of periodic orbits from the bifurcation diagram in Fig. 11

Consideration of the periodic orbits shown in Fig. 11 provides further evidence supporting the existence of homoclinic orbits. While we could imagine that the two branches of periodic orbits shown in Fig. 11a might join up to form one continuous branch, that is not what happens. A different representation of the periodic solutions on the bifurcation diagram is appropriate when considering periodic orbits close to homoclinic. In Fig. 11a the periodic orbits are represented by two curves showing their amplitude. Figure 11b shows exactly the same bifurcation diagram, except that a periodic orbit of period T is now represented by the 1-norm of its E(t) component:

$$\begin{aligned} \Vert E\Vert _1=\frac{1}{T}\int _{0}^{T}E(t)\,dt. \end{aligned}$$
(4.2)

This representation of periodic orbits is useful because the 1-norm of a periodic orbit approaches the value of \(E^*\) as a periodic orbit approaches either a Hopf bifurcation or a homoclinic bifurcation at the steady state \((M^*,I^*,E^*)\). In Fig. 11b the stable and unstable periodic orbits are each represented by a single curve using (4.2), and in the inset the periodic solution branches can be seen to both be approaching the intermediate steady state.
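The 1-norm (4.2) is a simple quadrature over one period of a sampled solution. The synthetic orbit below, with hypothetical values of \(E^*\), amplitude, and period, illustrates that the 1-norm recovers the mean level \(E^*\) of the oscillation:

```python
import numpy as np

# Synthetic periodic orbit E(t) = E* + A sin(2*pi*t/T); the values of
# E*, A, and T are arbitrary stand-ins for a computed periodic solution.
E_star, A, T = 0.81, 0.3, 12.0
t = np.linspace(0.0, T, 1201)
E = E_star + A * np.sin(2 * np.pi * t / T)

# trapezoidal rule for (1/T) * integral of E over one period
norm1 = np.sum(0.5 * (E[1:] + E[:-1]) * np.diff(t)) / T
print(norm1)   # close to E_star = 0.81
```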

Figure 12 shows the evolution of both the amplitude and period of the branch of stable and branch of unstable periodic orbits. The rapidly increasing period at the end of each branch suggests that both terminate in homoclinic bifurcations. This can be seen even more clearly by viewing the periodic orbits in phase space.

Fig. 13
figure 13

Stable periodic orbit for repressible system (4.1) with \(v_M^{min}=0.071577\), \(v_M^{max}=1\), \(m=n=15\), and other parameters as in Table 2. a The stable periodic solution. b The part of the periodic orbit very close to the middle steady state. c A projection of the phase-space dynamics into the ME plane showing the stable periodic solution and steady states (colour coded according to the dimension of their unstable manifold). d Detail of the phase space showing the periodic orbit passing very close to the middle steady state. Also shown are the projection of the linear unstable manifold (in dashed blue) and the leading linear stable manifold (in dashed green) of this steady state (color figure online)

Figure 13 shows the last limit cycle that we are able to compute on the branch of stable periodic orbits with \(v_M^{min}=0.071577\). Panel (a) shows all three components of the periodic orbit as well as the unstable steady state from the middle segment of the branch of steady states. This shows that the system spends most of the time close to this steady state with just a short burst of production once per period.

Recall that DDEs define infinite dimensional dynamical systems whose phase space consists of function segments defined over a time interval equal to the largest delay. It follows that for two solutions to be close in phase space, it is necessary that they are close in coordinate space for a time interval longer than the largest delay. With the parameters in this example \(\tau _I=1\), while at the steady state \(E^*=0.81\) and \(\tau _M=9.155\); thus the largest delay is close to 10. Figure 13b shows a zoomed view of the part of the periodic orbit closest to the steady state just before the burst. This shows that all three components of the periodic solution agree with the steady state values to three significant digits over a time interval several times larger than the delay, thus confirming that the periodic orbit passes close to the steady state in phase space.

Figure 13c shows a projection of the phase space dynamics onto the ME plane showing the stable periodic orbit and the three coexisting steady states at \(v_M^{min}=0.071577\). The periodic orbit appears to pass close to all three steady states, but that is an illusion created partly by the projection from infinite dimensions to \(\mathbb {R}^2\) and partly by the compressed scale needed to show the large amplitude bursting periodic orbit. Also note that one of the steady states is asymptotically stable, and so it is impossible for a periodic orbit to lie in its basin of attraction.

Figure 13d shows that the periodic orbit passes close to just the middle steady state, and also shows the behaviour of the solution near to this steady state. The leading characteristic values of the intermediate unstable steady state \((M^*, I^*, E^*)=(0.8515, 0.8100, 0.8100)\) at \(v_M^{min}= 0.071577\), where stable periodic orbits cease to exist, are obtained from (2.48) as

$$\begin{aligned} \lambda _1&=0.49051, \\ \lambda _2&= -0.017729, \\ \lambda _{3,4}&=-0.018217 \pm 0.70175i. \end{aligned}$$

Following the theory of Sect. 2.7 this leads to linearized solutions close to the steady state \((M^*,I^*,E^*)\) of the form

$$\begin{aligned} \left( \begin{array}{c} M(t) \\ I(t) \\ E(t) \end{array}\right) = \left( \begin{array}{c} M^* \\ I^* \\ E^* \end{array}\right) +Ce^{\lambda t} \left( \begin{array}{c} \mathcal {E}_M \\ \mathcal {E}_I \\ \mathcal {E}_E \end{array}\right) \end{aligned}$$

where the constant eigenvector \((\mathcal {E}_M,\mathcal {E}_I,\mathcal {E}_E)\) lies in the nullspace of the matrix \(A(\lambda )\) defined in (2.47). The corresponding eigenvectors for \(\lambda _1\) and \(\lambda _2\) are computed as

$$\begin{aligned} v_1=\begin{pmatrix} 1\\ 0.39077\\ 0.26222 \end{pmatrix}, \quad v_2=\begin{pmatrix} 1\\ 0.98572\\ 1.0035 \end{pmatrix}. \end{aligned}$$

Since \(\lambda _1\) is the only characteristic value with positive real part the linear unstable manifold of the steady state is defined by \(e^{\lambda _1 t}v_1\). When projecting phase space into the ME plane this line has slope \(\mathcal {E}_E/\mathcal {E}_M\). The stable manifold of the steady state is infinite-dimensional and so cannot be represented in the ME plane, but the dominant part of the linear stable manifold (with slowest decay) is given by \(e^{\lambda _2 t}v_2\).

The projections of both the dominant part of the linear stable manifold and the linear unstable manifold are shown in Fig. 13d. The stable periodic orbit is seen to approach the steady state along a direction that is tangential to the dominant stable manifold before leaving along a direction that is tangential to the unstable manifold. Since the orbit passes very close to the steady state, the passage through the neighbourhood of the steady state takes a very long time. This results in the large period of the orbit.

Fig. 14
figure 14

Unstable periodic orbit for repressible system (4.1) with \(v_M^{min}=0.071622\), and the other parameters the same as in Fig. 13. Panels a to d as described in caption to Fig. 13

Fig. 15
figure 15

Configuration of leading negative real and complex-conjugate eigenvalues as \(v_M^{min}\) varies. A 3-dimensional transition (Kalia et al. 2019) occurs on the middle branch of unstable steady states as in Fig. 11

Figure 14 is similar to Fig. 13 but shows the last periodic orbit that we are able to compute on the branch of unstable periodic orbits with \(v_M^{min}=0.071622\). Comparing the two figures we see that the unstable periodic orbit is quite different to the stable orbit. Figure 14a, b show that again the periodic orbit is close to the intermediate steady state for most of the period except for a short burst (or antiburst) of depressed production.

The characteristic values and corresponding eigenvectors can also be computed for the unstable periodic orbit, but the value of \(v_M^{min}\) only differs in the third significant digit between the two examples. The characteristic values and eigenvectors agree with those above to the third significant digit. The phase space plots in Fig. 14c, d again show a periodic orbit close to homoclinic approaching the steady state near the dominant linear stable manifold and leaving tangential to the linear unstable manifold.

There are two significant differences from the stable periodic orbit. Firstly, the unstable periodic orbit leaves the neighbourhood of the steady state in the opposite direction to the stable periodic orbit, which results in the production being decreased rather than increased during the burst. Secondly, it is apparent in Fig. 14b, d that the periodic orbit is not tangential to the dominant part of the linear stable manifold but rather oscillates about it. This seems to arise because there is only a small difference between the real part of \(\lambda _2\) and that of the next characteristic values, which occur as a complex conjugate pair \(\lambda _{3,4}\). Furthermore, as shown in Fig. 15, for a very nearby value of \(v_M^{min}\), the leading negative real eigenvalue \(\lambda _2\) and complex-conjugate eigenvalues \(\lambda _{3,4}\) exchange order. Such a transition in a system also having one real positive eigenvalue is called a 3-dimensional or 3DL transition and is associated with rich Shilnikov homoclinic bifurcation structures (Kalia et al. 2019).

4.2 Inducible operon with one state-dependent delay

We now turn to consider the Goodwin model (4.1) with one state-dependent delay in the case of an inducible operon with functions defined in Table 1. Recall that with both delays constant (and also in the absence of delays) an inducible system with \(n>1\) can have either a single globally stable steady state, or there can be two locally stable steady states and an unstable intermediate steady state. There are no other possibilities when using the functions in Table 1 (Yildirim et al. 2004).

We will show that an inducible operon with state-dependent transcription delay \(\tau _M\) can support stable and unstable periodic orbits and that these can be generated in supercritical or subcritical Hopf bifurcations, or in fold bifurcations of periodic orbits.

4.2.1 Inducible supercritical Hopf bifurcation

Fig. 16
figure 16

Bifurcation diagram of the model (4.1) for an inducible system with parameter values as defined in Table 6 and \(v_M^{min}\) treated as a continuation/bifurcation parameter. Line specifications can be found in Fig. 6. The amplitudes of the E-component of periodic solutions \(\mathbb {R}\rightarrow \mathbb {R}^3\) are shown. The vertical dotted line at \(v_M^{min} = 0\) separates the biologically realistic case \(v_M^{min}>0\) from the biologically unrealistic case \(v_M^{min}<0\) (see text). Bifurcations occurring for \(v_M^{min}>0\) are detailed in Table 7

Fig. 17
figure 17

Inducible system (4.1) with parameters as defined in Table 6 showing the orbits from Fig. 16 at \(v_M^{min}=0.01\). a A projection of the phase space dynamics into the ME plane showing periodic orbits represented by closed curves, and steady states by squares (whose colour indicates the number of unstable eigenvalues as in Fig. 13). b The three components of the stable periodic solution

We begin by considering the inducible operon model (4.1) with parameters defined in Table 6. With this parameter set and \(v_M^{min}=v_M^{max}=1\), both delays are constant and the model has a single globally stable steady state. For \(v_M^{min}<1\) the transcription delay becomes state-dependent and several bifurcations occur, as shown in Fig. 16 and listed in Table 7.

Table 6 Parameters for inducible operon example of Fig. 16
Table 7 Bifurcation information associated with Fig. 16

As \(v_M^{min}\) is decreased there is first a fold bifurcation which creates two additional steady states. This results in bistability between two stable steady states for \(v_M^{min}\in (0.08,0.354)\) separated by an intermediate unstable steady state. This configuration is well known for inducible operons with constant delays, but here the bifurcation to three steady states is induced by varying the state-dependency of the delay \(\tau _M\).

Reducing \(v_M^{min}\) further, an unexpected event occurs: the upper steady state loses stability in a supercritical Hopf bifurcation, creating a stable periodic orbit which exists for \(v_M^{min}<0.08\). This stable periodic orbit coexists with one stable and two unstable steady states. Thus we have an interval of bistability between a limit cycle and a steady state for an inducible operon.

The stable periodic orbit and a projection of phase space into the ME plane are shown in Fig. 17 for \(v_M^{min}=0.01\). We suspect that the periodic orbit exists for all \(v_M^{min}>0\), but the numerical discretization of the threshold integral described in Sect. 3.2 requires \(v_M^{min}\) to be bounded away from zero, so we only compute periodic orbits for \(v_M^{min}\ge 0.01\).

The linearization correction method of Sect. 3.1 avoids discretizing the integral and is applicable even when \(v_M^{min}<0\). Though \(v_M^{min}<0\) leads to negative transcription velocities, which are not physiological, it can be computationally useful. This is demonstrated in Fig. 16, where continuation through negative values of \(v_M^{min}\) reveals that the different branches of steady states are joined at a fold bifurcation with \(v_M^{min}<0\). This allows computation of all the physiological steady states by continuation of a single branch.

4.2.2 Inducible subcritical hopf bifurcation

We now change just one parameter value from the previous example and consider the inducible state-dependent transcription delay operon model (4.1) with parameters as in Table 6, except for the Hill coefficient in the transcription velocity function which we now set to \(m=4\).

Fig. 18
figure 18

Bifurcation diagram of the inducible operon model (4.1) with \(m=4\) and all other parameters as defined in Table 6. Bifurcations are listed in Table 8

Table 8 Steady state and periodic orbit bifurcations for the example shown in Fig. 18

Comparing Fig. 18 with the previous example in Fig. 16 we see that changing the value of m from 2 to 4 results in two important changes in the bifurcations. Firstly, in Fig. 18 both the fold bifurcations on the branch of steady states now occur for positive values of \(v_M^{min}\). Consequently for \(v_M^{min}\in (0.064,0.285)\) there are three co-existing steady states, while for both larger and smaller values of \(v_M^{min}>0\) there is a unique stable steady state.

Fig. 19
figure 19

Stable and unstable periodic orbits on the branch of periodic orbits emanating from the subcritical Hopf bifurcation in Fig. 18. The colormap indicates the value of the continuation parameter \(v_M^{min}\). The periodic orbits are shown in a amplitude of the E-component, b period, c profile in E, d profile in M, e delay \(\tau _M\). Panel f shows a projection of phase space onto the ME plane when \(v_M^{min} = 0.2\) and tristability occurs. Periodic orbits are represented by closed curves, and steady states by squares (whose colour indicates the number of unstable eigenvalues as in Fig. 13) (color figure online)

The second important difference between the two examples is that the Hopf bifurcation on the upper segment of steady states at \(v_M^{min}=0.196\) in Fig. 18 is subcritical resulting in a branch of unstable periodic orbits. The change in the criticality of this Hopf bifurcation between the two examples implies that for some intermediate value \(m\in (2,4)\) there is a Bautin bifurcation at which the criticality switches. Bautin bifurcations are well studied (Kuznetsov 2004) and in a two-parameter unfolding generate a branch of fold bifurcations of periodic orbits.

The branch of unstable periodic orbits emanating from the subcritical Hopf bifurcation terminates in the fold bifurcation of periodic orbits seen in Fig. 18 at \(v_M^{min}=0.20812\), at which the periodic orbit becomes stable. As a consequence of the subcritical Hopf bifurcation and fold of periodic orbits there are stable periodic orbits for \(v_M^{min}<0.208\) (to the left of the fold bifurcation of periodic orbits) and co-existing stable steady states for \(v_M^{min}\in (0.196,0.285)\) (to the right of the Hopf bifurcation). This creates a small parameter interval of tristability for \(v_M^{min}\in (0.196,0.208)\), between the Hopf bifurcation and the fold of periodic orbits, for which a stable periodic orbit coexists with two stable steady states. Figure 19f shows the dynamics when \(v_M^{min}=0.2\) in the tristability region, in a projection of phase space onto the ME plane. The branch of periodic orbits emanating from the Hopf bifurcation at \(v_M^{min}=0.19603\) crosses \(v_M^{min}=0.2\) twice and both the stable and unstable periodic orbits are shown in the phase portrait.

The other panels of Fig. 19 show the evolution of the periodic orbit from the Hopf bifurcation on this branch with separate colour maps for the stable and unstable legs of the branch.

For \(v_M^{min} \in (0.20812, 0.28543)\) there is bistability between two steady states, and for \(v_M^{min} < 0.19603\) there is bistability between a periodic orbit and a steady state. There is also a second Hopf bifurcation at \(v_M^{min}=0.1228\) which generates small amplitude unstable periodic orbits shown on the bifurcation diagram in Fig. 18.

4.2.3 Fold bifurcation of periodic orbits

For our final example of inducible operon dynamics we return to the example from Sect. 2.6 and consider the system (4.1) with one state-dependent delay and the inducible parameter set defined in Table 2.

Fig. 20
figure 20

Bifurcation diagram of the model (4.1) for an inducible system with parameters defined in Table 2 except \(v_M^{min}\) which is taken as the bifurcation parameter. Red circles denote the five co-existing steady states at \(v_M^{min}=0.05\). All other lines and symbols are defined as in Figs. 6 and 9 (color figure online)

Table 9 Bifurcation information associated with Fig. 20

The bifurcation diagram in Fig. 20 extends the diagram previously shown in Fig. 5b to show steady state solutions and periodic orbits along with their stability, as well as Hopf and fold bifurcations. The bifurcations are listed in Table 9.

We already saw in Sect. 2.6 that when \(v_M^{min}=v_M^{max}\) and thus both delays are constant, there are three co-existing steady states. Two of these are stable and the intermediate steady state is unstable. When \(v_M^{min}\) is reduced the delay \(\tau _M\) becomes state-dependent and a number of bifurcations may occur. The lower stable steady state remains stable and does not undergo any bifurcations. The intermediate steady state remains unstable for all \(v_M^{min}>0\) but does undergo a Hopf bifurcation. The upper branch of steady states loses stability in a fold bifurcation. There is also another fold bifurcation and several Hopf bifurcations on this branch. Considering all the branches together there may be up to five co-existing steady states, but as the bifurcation diagram shows, there are only ever one or two co-existing stable steady states.

Fig. 21
figure 21

a Amplitude and b Period for the branch of periodic orbits from Fig. 9. c The solution components and d the projection into phase space of the periodic orbit at the fold bifurcation \(v_M^{min}=0.1597\). e E component and f phase space projection for a simulation with \(v_M^{min}=0.1605\) and initial function close to the intermediate steady state

The fold bifurcation at which the steady state loses stability (at \(v_M^{min}=0.048865\)) is immediately followed by a Hopf bifurcation (at \(v_M^{min}=0.048868\)), indicating that this inducible operon is close to a zero-Hopf bifurcation (in Sect. 4.1 we inferred existence of a zero-Hopf bifurcation for a repressible operon).

The branch of periodic orbits emanating from the Hopf bifurcation is shown in Fig. 21. The bifurcation is a supercritical Hopf bifurcation from an unstable steady state, which gives rise to a branch of unstable periodic orbits bifurcating to the right. The amplitude and period of these orbits are shown in Fig. 21a, b. Interestingly, moving along the branch away from the Hopf bifurcation the period decreases as the amplitude increases until there is a fold bifurcation of periodic orbits at \(v_M^{min}=0.1597\) creating a segment of stable periodic orbits on the branch. The periodic orbit at the fold bifurcation is shown in Fig. 21c, d. For \(v_M^{min}>0.1597\) there is no longer a periodic orbit, but it is still possible to have transient oscillatory dynamics. Figure 21e, f show an example of this for \(v_M^{min}=0.1605\), where an initial function close to the unstable intermediate steady state generates a solution with large oscillations for 200 time units before the solution converges to the stable steady state. When the phase space projection of this solution in Fig. 21f is compared to the periodic orbit at the fold of periodic orbits (in Fig. 21d) it is clear that we are seeing a ghost of the periodic orbit.
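The long transient near the vanished orbit is the generic "ghost" effect of a fold (saddle-node) bifurcation. A minimal, purely illustrative sketch of this slowdown, using the scalar normal form \(\dot{x} = \mu + x^2\) rather than the operon model, shows the characteristic \(1/\sqrt{\mu}\) scaling of the time spent in the bottleneck:

```python
def passage_time(mu, dt=1e-3, x0=-1.0, x1=1.0):
    """Time for x' = mu + x**2 to travel from x0 to x1 (forward Euler).

    Just past the fold (mu slightly positive) the ghost of the vanished
    fixed points creates a bottleneck near x = 0 where motion is slow;
    the exact passage time is (atan(x1/sqrt(mu)) - atan(x0/sqrt(mu)))/sqrt(mu),
    which grows like pi/sqrt(mu) as mu -> 0+.
    """
    x, t = x0, 0.0
    while x < x1:
        x += dt * (mu + x * x)
        t += dt
    return t

slow = passage_time(1e-4)  # bottleneck time ~ 312 units
fast = passage_time(1e-2)  # bottleneck time ~ 29 units
```

The same mechanism explains why the simulated solution in Fig. 21e oscillates for a long time near the profile of the periodic orbit that existed just below the fold.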

Fig. 22
figure 22

Stable periodic orbit at \(v_M^{min}=0.0001\) seen in Fig. 20. The periodic orbit is shown in a solution in all three components over one period, b projection of phase space into ME plane, c velocity \(v_M(E(t))\) and d delay \(\tau _M(t)\)

For \(v_M^{min}\in (0.048865,0.1597)\) the branch of stable periodic orbits coexists with two stable steady states, creating another example of tristability of solutions. The stable periodic orbit at the left end of the branch with \(v_M^{min}=0.0001\) is shown in Fig. 22.

The transcription velocity is essentially zero for nearly all of the period, with just a short burst of transcription when E is close to its minimum. This sudden release of mRNA gives the M component of the solution the characteristic form of a relaxation oscillator, even though the other components of the solution are smooth.

The variation of the delay as a function of time seen in Fig. 22d shows that the delay is very far from being constant. The delay increases linearly on the segment of the orbit for which the transcription velocity is zero, and so no transcripts are being completed. During this time the effector E concentration is high and thus the transcription initiation rate f(E) is high; at the same time, though, the delay \(\tau (E)\) is also increasing. Only when the concentration of the effector E drops sufficiently does transcription proceed, during the last quarter of the period.
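The linear growth of the delay follows directly from the threshold condition that defines it. Writing that condition in the generic form used for such models (the constant \(a_M\) stands for the amount of transcript to be completed; the notation here is schematic rather than copied from the model equations),

```latex
\int_{t-\tau_M(t)}^{t} v_M(E(s))\,ds = a_M,
\qquad\Longrightarrow\qquad
\dot{\tau}_M(t) = 1 - \frac{v_M(E(t))}{v_M\bigl(E(t-\tau_M(t))\bigr)},
```

where the second relation is obtained by differentiating the first with respect to \(t\). Whenever the current transcription velocity \(v_M(E(t))\) is near zero, \(\dot{\tau}_M \approx 1\) and the delay grows with slope one, as seen in Fig. 22d.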

Finally, we remark that numerically computing a branch of periodic orbits emanating from a Hopf bifurcation close to a co-dimension two zero-Hopf bifurcation in the system (4.1), with the threshold integral discretized as in Sect. 3.2, is highly delicate. For this reason we were not able to compute this branch starting from the Hopf bifurcation. Instead, noting that for small values of \(v_M^{min}\) there are three steady states, but only the lower one is stable, we performed a numerical simulation of the dynamics as described in Sect. 3.3 starting close to the upper unstable steady state. This simulation converged to the stable periodic orbit. This periodic orbit was then continued in DDE-BIFTOOL to find the fold bifurcation of periodic orbits and to follow the branch of unstable periodic orbits back to the Hopf bifurcation.
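The simulation step of this procedure can be sketched with fixed-step Euler integration and linear interpolation of the stored history. The toy scalar equation below, \(\dot{x}(t) = -x(t-\tau(x(t)))\) with a hypothetical delay law \(\tau(x) = 1 + 0.5\sin x\), only illustrates the bookkeeping for a state-dependent delay, not the model (4.1) or the scheme of Sect. 3.3:

```python
import math

def simulate_sd_dde(f, tau, history, t_end, dt=1e-3):
    """Fixed-step Euler for x'(t) = f(x(t), x(t - tau(x(t)))).

    `history` gives x(t) for t <= 0; past values at off-grid times are
    recovered by linear interpolation of the stored trajectory.
    """
    # Pre-fill enough history to cover the largest delay we expect.
    n_hist = int(10.0 / dt)
    ts = [(-n_hist + k) * dt for k in range(n_hist + 1)]
    xs = [history(t) for t in ts]
    for _ in range(int(t_end / dt)):
        t, x = ts[-1], xs[-1]
        t_past = t - tau(x)
        i = int((t_past - ts[0]) / dt)        # grid cell containing t_past
        w = (t_past - ts[i]) / dt             # interpolation weight in [0, 1)
        x_past = (1 - w) * xs[i] + w * xs[i + 1]
        xs.append(x + dt * f(x, x_past))
        ts.append(t + dt)
    return ts, xs

# Toy run: constant history x = 1 for t <= 0, delay varying with the state.
ts, xs = simulate_sd_dde(lambda x, xp: -xp,
                         lambda x: 1.0 + 0.5 * math.sin(x),
                         lambda t: 1.0, t_end=50.0)
```

For this toy equation the delay stays below \(\pi/2\), so the solution decays to zero through damped oscillations; production-quality solvers instead adapt the step and locate delay-induced discontinuities, which is part of why computations near the zero-Hopf point are delicate.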

4.3 Two state-dependent delays

Fig. 23
figure 23

Bifurcation diagram of the model (2.15)–(2.19) with \(\tau _M\) and \(\tau _I\) both state-dependent. a Repressible case with same parameters as in Fig. 5c. b Inducible case with same parameters as in Fig. 5d. Symbols and lines are as defined in Fig. 6, except for the red circles which denote the co-existing steady states shown in Fig. 3c, d (color figure online)

Table 10 Bifurcation information associated with Fig. 23a
Table 11 Bifurcation information associated with Fig. 23b

We briefly return to the model (2.15)–(2.19) with two state-dependent delays. Figure 23 shows the stability of the steady states and also the steady state bifurcations for the two examples first considered in Fig. 5c, d in Sect. 2.6. The principal bifurcations are listed in Tables 10 and 11. For the inducible case our numerical code found many pairs of complex conjugate characteristic values crossing the imaginary axis, indicating the possibility of many Hopf bifurcations. In Table 11 we only list Hopf bifurcations occurring at steady states with three or fewer unstable eigenvalues, as Hopf bifurcations with more unstable directions will never change the stability of the steady state and will only generate periodic orbits with multiple unstable Floquet multipliers.
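Such eigenvalue counts can be sanity-checked against scalar constant-delay test equations, for which characteristic values can be located directly. A minimal sketch (Newton iteration on the characteristic function of \(\dot{x}(t) = -b\,x(t-\tau)\), not of the operon model) finds the rightmost root:

```python
import cmath

def rightmost_root(b, tau, guess=0.1 + 1.0j, steps=50):
    """Newton iteration on h(l) = l + b*exp(-l*tau), the characteristic
    function of x'(t) = -b*x(t - tau); converges to the root nearest
    the initial guess, which here is the rightmost pair."""
    l = guess
    for _ in range(steps):
        h = l + b * cmath.exp(-l * tau)
        dh = 1 - b * tau * cmath.exp(-l * tau)
        l -= h / dh
    return l

# The stability boundary of this test equation is b*tau = pi/2, where
# the rightmost complex pair crosses the imaginary axis.
stable = rightmost_root(1.0, 1.0)    # b*tau < pi/2: Re < 0
unstable = rightmost_root(1.0, 2.0)  # b*tau > pi/2: Re > 0
```

Repeating the iteration from a ladder of initial guesses with increasing imaginary part yields the further root pairs whose crossings our code detects.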

Compared to the examples from the preceding sections we see that allowing the second delay to also be state-dependent can result in additional co-existing steady states (consistent with (2.32)); however, these extra steady states are unstable and there do not seem to be additional stable invariant objects. In Fig. 23a there are four unstable equilibria and one stable equilibrium, suggesting the existence of one or more stable periodic orbits. In Fig. 23b for low \(v_M^{min}\) there are two unstable equilibria and one stable equilibrium, again suggesting the existence of a stable periodic orbit.

5 Discussion and summary

This paper studies the Goodwin model of operon dynamics in the presence of state-dependent delays in the processes of transcription and translation. The dependence of delays on the state of the system was considered previously (Monk 2003; Verdugo and Rand 2007; Ahmed and Verriest 2017; Wang and Pei 2021) and justified by the existence of transportation delays of mRNA export through the nuclear membrane. We argue that the availability of building blocks for mRNA and protein synthesis, as well as traffic jams of transcribing polymerases and translating ribosomes, affect the velocities of these processes in a way that depends on the state of the cell. In contrast with membrane transportation delays, these effects may also influence prokaryotic operons.

The focus of the paper is on exploring the potential operon dynamics in the presence of state-dependent delays in both transcription and translation, and contrasting their richness with the dynamics of constant delay systems. We consider two different situations: a repressible and an inducible operon.

In the repressible case with state-dependent transcriptional delay \(\tau _M\) and constant translational delay \(\tau _I\) we find bistability either between two steady states or between a steady state and a stable periodic orbit (Figs. 5, 6, 11). This periodic orbit has the characteristics of a relaxation oscillator, where the velocity of transcription is very low for the majority of the period, only to produce a brief ’spike’ of transcription which results in subsequent spikes of the translated protein and the effector protein, Fig. 10.

We also found compelling evidence for the existence of complicated dynamics in the repressible case. In particular, by tracing the trajectory of the stable periodic orbit just before it disappears, we found that it passes very close to a saddle point, which it approaches along the dominant linearized stable direction and leaves along the one-dimensional unstable manifold (Fig. 13). An unstable periodic orbit behaves very similarly at a near identical parameter value, except that it leaves along the one-dimensional unstable manifold in the opposite direction (Fig. 14). This suggests that these periodic orbits disappear in a homoclinic bifurcation. Furthermore, linearization at the saddle point at nearby parameters shows that the dominant stable real eigenvalue becomes a dominant complex pair (Fig. 15) whose magnitude satisfies the assumptions guaranteeing the existence of a Shilnikov-type chaotic set (Shilnikov 1965; Kuznetsov 2004). We found two parameter values near each other such that at one the stable periodic orbit loses stability through a Hopf bifurcation and at the other through a fold bifurcation, and inferred that at some intermediate parameter value those two bifurcations will coincide in a codimension-two zero-Hopf bifurcation.
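For completeness, the eigenvalue condition being invoked is the classical one: if the saddle has a real unstable eigenvalue \(\lambda > 0\) and a dominant stable complex pair \(\rho \pm i\omega\) with \(\rho < 0\), then a homoclinic orbit to this saddle-focus is accompanied by chaotic dynamics (Shilnikov 1965) when the saddle quantity is positive,

```latex
\sigma = \rho + \lambda > 0, \qquad\text{equivalently}\qquad |\rho| < \lambda,
```

i.e. when the spiralling contraction toward the saddle is weaker than the expansion away from it.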

When both \(\tau _M\) and \(\tau _I\) are variable we show that the system can admit 5 steady states, Figs. 5c and 23a. Further exploration of the range of dynamics in this case is left for future studies.

In the inducible case when the delays are constant we can have either a single steady state or bistability between two steady states. The dynamics are richer in the presence of variable delays.

We start with two situations when \(\tau _M\) and \(\tau _I\) are constant: either (1) there is a unique steady state or (2) there are 3 steady states, two of which are stable. We then allow \(\tau _M\) to vary while \(\tau _I\) remains constant. In case (1) we find coexistence of the steady state with a stable periodic orbit, or with another steady state, Fig. 16; and in case (2) we find that there are an additional 2 equilibria for a total of 5. This results in tristability between two equilibria and a periodic orbit (Figs. 18 and 20). The stable periodic orbits have features of a relaxation oscillator; the velocity of transcription remains close to zero for the majority of the period, only to show a rapid increase in a short burst, Fig. 22.

When both \(\tau _M\) and \(\tau _I\) are variable the system can have up to 7 steady states, Figs. 5d and 23b. Further exploration of the dynamics in this case is again left for future studies.

The presence of relaxation type oscillations in both inducible and repressible operons provides an intriguing source of pulse generation on a subcellular level. Among several periodic behaviors that have been experimentally observed we focus on transcriptional bursting (Chong et al. 2014; Lenstra et al. 2016; Tunnacliffe and Chubb 2020). The production of mRNA from some genes does not produce a steady stream of mRNAs, but rather proceeds in bursts of production interspersed by periods of quiescence. The most popular model that describes this phenomenon is the telegraph model (Peccoud and Ycart 1995) where, during the periods when the transcription factor (TF) is bound to the promoter, RNA polymerases repeatedly initiate transcription, while when the TF is off the promoter, initiation stops. While the data support temporal coupling between TF binding and the initiation of transcriptional bursts, the durations of the binding times and bursts are not equal. For instance, in yeast, an average TF (GAL4) binding time of 34 s initiates a mean burst duration of around 2.5 min (Tunnacliffe and Chubb 2020).
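The telegraph model itself is simple to simulate exactly. A minimal Gillespie-type sketch is given below; the rate constants are hypothetical, chosen only to produce visible bursts, and are not fitted to any of the data cited above:

```python
import random

def telegraph(k_on, k_off, k_tx, delta, t_end, seed=1):
    """Exact stochastic simulation of the telegraph model (Peccoud and
    Ycart 1995): the promoter toggles OFF <-> ON with rates k_on/k_off,
    mRNA is made at rate k_tx only while ON and decays at rate delta."""
    rng = random.Random(seed)
    t, on, m, trace = 0.0, False, 0, []
    while t < t_end:
        rates = [k_on if not on else k_off,  # promoter toggle
                 k_tx if on else 0.0,        # transcription initiation
                 delta * m]                  # first-order mRNA decay
        total = sum(rates)
        t += rng.expovariate(total)          # exponential waiting time
        r = rng.uniform(0.0, total)          # pick which event fired
        if r < rates[0]:
            on = not on
        elif r < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1
        trace.append((t, m))
    return trace

# Bursty regime: slow promoter toggling, fast transcription while ON.
trace = telegraph(k_on=0.1, k_off=0.5, k_tx=10.0, delta=0.5, t_end=200.0)
```

In this regime the mean burst size is \(k_{tx}/k_{off}\) transcripts per ON period, while the stationary mean copy number is \((k_{tx}/\delta)\,k_{on}/(k_{on}+k_{off})\); the deterministic relaxation oscillations discussed above would act on top of, and could entrain, this intrinsic stochastic bursting.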

We propose that the bursting periodic solutions that we observed in this paper may be one of the mechanisms supporting or enhancing transcriptional bursting. This may be in addition to other proposed mechanisms related to DNA supercoiling (Chong et al. 2014), chromatin opening, scaffold presence at initiation site or pulses of nuclear localization (Lenstra et al. 2016).