1 Introduction

The simplest linear scalar Stieltjes integral equation (also known as a generalized ordinary differential equation or a measure differential equation, see e.g. [9, 10]) has the form

$$\begin{aligned} z(t)=z(t_0)+\int _{t_0}^t z(s)\,\textrm{d}P(s), \quad t\in [a,b], \end{aligned}$$
(1.1)

where \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation. Under certain conditions on the jumps of P, this equation has a unique solution on \([a,b]\) for each initial condition \(z(t_0)=z_0\). The precise conditions are stated in Sect. 3, where we also recall an explicit formula for the solution.

Besides Eq. (1.1), it makes sense to study two related Stieltjes integral equations of the form

$$\begin{aligned} x(t)&=x(t_0)+\int _{t_0}^t x(s-)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$
(1.2)
$$\begin{aligned} y(t)&=y(t_0)-\int _{t_0}^t y(s+)\,\textrm{d}P(s),\quad t\in [a,b]. \end{aligned}$$
(1.3)

As explained in [5], these equations are adjoint (or dual) to each other in the sense that the product of their solutions is a constant function, and the corresponding linear operators satisfy an analogue of Lagrange’s identity. If P is left-continuous, then Eq. (1.2) reduces to Eq. (1.1); similarly, if P is right-continuous, then Eq. (1.3) reduces to Eq. (1.1) with P replaced by \(-P\). Also, as pointed out in [5], Eqs. (1.2) and (1.3) include as special cases various forms of linear \(\Delta \)- and \(\nabla \)-dynamic equations studied in the time scales calculus.

Equation (1.2) was investigated much earlier in [3], where it was observed that in contrast to Eq. (1.1), bounded variation of P is sufficient for the existence and uniqueness of solutions on \([t_0,b]\), and no further conditions on the jumps of P are needed. Existence and uniqueness of solutions to Eqs. (1.2) and (1.3) and their nonhomogeneous counterparts was also discussed in [5], but no explicit formulas for the solutions were given there. Hence, the main goal of this paper is to provide solution formulas for Eqs. (1.2) and (1.3), as well as for the corresponding nonhomogeneous equations

$$\begin{aligned} u(t)&=g(t)+\int _{t_0}^t u(s-)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$
(1.4)
$$\begin{aligned} v(t)&=g(t)-\int _{t_0}^t v(s+)\,\textrm{d}P(s),\quad t\in [a,b]. \end{aligned}$$
(1.5)

The paper [3] contains solution formulas for Eqs. (1.2) and (1.4) in the special case when \(t_0=a\), but no proofs are given there. While it is not too difficult to guess the correct formula for the homogeneous equation, a rigorous proof requires some amount of work. Moreover, our solution formula for Eq. (1.4) is simpler than in [3], applies to any choice of \(t_0\in [a,b]\), and has less restrictive assumptions on the jumps of P.

Throughout this paper, the integrals on the right-hand sides of Eqs. (1.1)–(1.5) are understood as Kurzweil–Stieltjes integrals (also known as Perron–Stieltjes integrals). We need only some basic properties of these integrals, which are summarized in Sect. 2. A much more comprehensive treatment is available in [9]. Alternatively, following [3], it would be possible to work with the Young integral, which coincides with the Kurzweil–Stieltjes integral if the integrand and integrator are regulated and one of them has bounded variation (cf. [9, Theorem 6.13.1]).

2 Preliminaries

Given a regulated function \(g:[a,b]\rightarrow {\mathbb {C}}\), we use the following notation:

$$\begin{aligned} \Delta ^+g(t)={\left\{ \begin{array}{ll} g(t+)-g(t)&{}\text{ if } t\in [a,b),\\ 0&{}\text{ if } t=b, \end{array}\right. } \quad \quad \Delta ^-g(t)={\left\{ \begin{array}{ll} g(t)-g(t-)&{}\text{ if } t\in (a,b],\\ 0&{}\text{ if } t=a. \end{array}\right. } \end{aligned}$$

Also, we let \(\Delta g(t)=\Delta ^+g(t)+\Delta ^-g(t)\) for \(t\in [a,b]\). It will be useful to observe that if \(t_0\in [a,b)\), then

$$\begin{aligned} \lim _{\delta \rightarrow 0+}\Delta ^+g(t_0+\delta )&=\lim _{\delta \rightarrow 0+}(g(t_0+\delta +)-g(t_0+\delta ))=g(t_0+)-g(t_0+)=0, \end{aligned}$$
(2.1)
$$\begin{aligned} \lim _{\delta \rightarrow 0+}\Delta ^-g(t_0+\delta )&=\lim _{\delta \rightarrow 0+}(g(t_0+\delta )-g(t_0+\delta -))=g(t_0+)-g(t_0+)=0 \end{aligned}$$
(2.2)

(cf. [9, Corollary 4.1.9]). Similar relations hold for \(\delta \rightarrow 0-\) and \(t_0\in (a,b]\).

The next result can be found in [6, Lemma 2.7].

Lemma 2.1

If \(f:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation and

$$\begin{aligned} f(t)\not =0,\ t\in [a,b];\quad f(t+)\not =0,\ t\in [a,b);\quad f(t-)\not =0,\ t\in (a,b], \end{aligned}$$

then 1/f has bounded variation.

We will often use the next result, which can be found in [11, Proposition 2.12] and shows how to evaluate integrals whose integrands vanish except on a countable set.

Lemma 2.2

Let \(f:[a,b]\rightarrow {\mathbb {R}}\) be a function which is zero except on a countable set \(\{t_1,t_2,\ldots \}\subset [a,b]\), and let the sum \(\sum _i f(t_i)\) be absolutely convergent. Then, for every regulated function \(g:[a,b]\rightarrow {\mathbb {R}}\), we have

$$\begin{aligned} \int _a^b f(t)\,\textrm{d}g(t)=\sum _{t\in [a,b]}f(t)\Delta g(t), \end{aligned}$$

with the convention that \(\Delta g(a)=\Delta ^+g(a)\) and \(\Delta g(b)=\Delta ^-g(b)\).

In particular, if \(f:[a,b]\rightarrow {\mathbb {R}}\) is an arbitrary function and \(g:[a,b]\rightarrow {\mathbb {R}}\) is regulated, then for every \(\tau \in [a,b]\) we have

$$\begin{aligned} \int _a^b f(t)\chi _{\{\tau \}}(t)\,\textrm{d}g(t)=f(\tau )\Delta g(\tau ). \end{aligned}$$
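
To illustrate the conventions of Lemma 2.2, here is a minimal sketch in Python (the encoding of the integrator by its one-sided jumps is our own illustrative choice, not taken from [11]); the continuous part of g contributes nothing to the sum, so only the jump data enters.

```python
# Jump data of a regulated g on [a, b] = [0, 2]: jumps[s] = (Delta^- g(s),
# Delta^+ g(s)). The continuous part of g plays no role in Lemma 2.2.
a, b = 0.0, 2.0
jumps = {0.0: (0.0, 0.5), 1.0: (0.3, -0.2), 2.0: (0.4, 0.0)}

def jump(t):
    dm, dp = jumps.get(t, (0.0, 0.0))
    if t == a:                 # convention: Delta g(a) = Delta^+ g(a)
        return dp
    if t == b:                 # convention: Delta g(b) = Delta^- g(b)
        return dm
    return dm + dp

def ks_countable_support(f, support):
    """Right-hand side of Lemma 2.2 for f vanishing outside `support`."""
    return sum(f(t) * jump(t) for t in support)

# Special case: f * chi_{tau} integrates to f(tau) * Delta g(tau).
f = lambda t: t ** 2 + 1.0
print(ks_countable_support(f, [1.0]))   # f(1) * (0.3 - 0.2) = 0.2
```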

The next result is a basic existence theorem for the Kurzweil–Stieltjes integral, and also describes the properties of indefinite integrals (see [9, Theorem 6.3.11 and Corollary 6.5.5]).

Theorem 2.3

Suppose that \(f,g:[a,b]\rightarrow {\mathbb {R}}\) are regulated and one of them has bounded variation. Then the integral \(\int _a^b f\,{\textrm{d}}g\) exists. Moreover, for every \(t_0\in [a,b]\), the function

$$\begin{aligned} h(t)=\int _{t_0}^t f\,{\textrm{d}}g,\quad t\in [a,b], \end{aligned}$$

is regulated and satisfies

$$\begin{aligned} h(t+)&=h(t)+f(t)\Delta ^+g(t),\quad t\in [a,b),\\ h(t-)&=h(t)-f(t)\Delta ^-g(t),\quad t\in (a,b]. \end{aligned}$$

Throughout the paper, we occasionally use the notation

$$\begin{aligned} \int _a^{t+}f\,\textrm{d}g=\lim _{\delta \rightarrow 0+}\int _a^{t+\delta }f\,\textrm{d}g,\quad \quad \int _a^{t-}f\,\textrm{d}g=\lim _{\delta \rightarrow 0+}\int _a^{t-\delta }f\,\textrm{d}g, \end{aligned}$$

and similarly for limits with respect to the lower bound of the integral.

We need two convergence results for Stieltjes integrals; the first of them is the following bounded convergence theorem (see [9, Theorem 6.8.13] or [7, Theorem 6.3]).

Theorem 2.4

Suppose that a function \(g:[a,b]\rightarrow {\mathbb {R}}\) has bounded variation and let \(f_n:[a,b]\rightarrow {\mathbb {R}}\), \(n\in {\mathbb {N}}\), be a sequence of functions satisfying the following conditions:

  (i)

    The integral \(\int _a^b f_n\,{\mathrm d}g\) exists for each \(n\in {\mathbb {N}}\).

  (ii)

    \(\lim _{n{\rightarrow }\infty } f_n(t)=f(t)\) for all \(t\in [a,b]\).

  (iii)

    There exists a constant \(M\ge 0\) such that \(|f_n(t)|\le M\) for all \(n\in {\mathbb {N}}\) and \(t\in [a,b]\).

Then the integral \(\int _a^b f\,\textrm{d}g\) exists and

$$\begin{aligned} \lim _{n{\rightarrow }\infty }\left( \sup _{t\in [a,b]} \left| \int _a^t f_n\,\textrm{d}g-\int _a^t f\,\textrm{d}g\right| \right) =0. \end{aligned}$$

The following uniform convergence theorem is a special case of [9, Theorem 6.8.8].

Theorem 2.5

Let \(g_n:[a,b]\rightarrow {\mathbb {R}}\), \(n\in {\mathbb {N}}\), be a sequence of functions such that \(\sup _{n\in {\mathbb {N}}}\,\mathop {\textrm{var}}g_n<\infty \) and \(g_n\rightrightarrows g\). If \(f_n:[a,b]\rightarrow {\mathbb {R}}\), \(n\in {\mathbb {N}}\), is a sequence of regulated functions such that \(f_n\rightrightarrows f\), then

$$\begin{aligned} \lim _{n{\rightarrow }\infty }\int _a^b f_n\,\textrm{d}g_n=\int _a^b f\,\textrm{d}g. \end{aligned}$$

Next, let us derive a Fubini-type theorem for iterated Kurzweil–Stieltjes integrals. A result of this type is available in [10, Corollary 1.46], but its assumptions are difficult to verify in practice. Therefore, our goal is to obtain an easier-to-use theorem.

We need the following facts: For each nondecreasing function \(g:{\mathbb {R}}\rightarrow {\mathbb {R}}\), there is a corresponding Lebesgue–Stieltjes measure denoted by \(\mu _g\), and it is a Borel measure. If \(f:[a,b]\rightarrow {\mathbb {R}}\) is such that the Lebesgue–Stieltjes integral \(\int _{(a,b)}f\,\textrm{d}\mu _g\) exists and has finite value, then the Kurzweil–Stieltjes integral \(\int _a^b f\,\textrm{d}g\) exists as well, and we have the relation

$$\begin{aligned} \int _a^b f\,\textrm{d}g=f(a)\Delta ^{+}g(a)+\int _{(a,b)}f\,\textrm{d}\mu _g+f(b)\Delta ^-g(b) \end{aligned}$$

(see [9, Theorem 6.12.3]). This relation will be used in the proof of the next result.
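
For illustration, consider a nondecreasing pure jump function g on [0, 1] (our own test data). Then \(\mu _g\) restricted to (a, b) is a point mass at the interior jump, and since the integrand chosen below is continuous, a Riemann–Stieltjes sum over a fine partition reproduces the Kurzweil–Stieltjes value, so both sides of the relation can be compared numerically:

```python
import math

# g on [0, 1]: nondecreasing pure jump function with Delta^+ g(0) = alpha,
# Delta^- g(tau) = c at an interior point tau, and Delta^- g(1) = beta.
a, b, tau = 0.0, 1.0, 0.4
alpha, c, beta = 0.3, 0.5, 0.2

def g(t):
    return (alpha if t > a else 0.0) \
         + (c if t >= tau else 0.0) \
         + (beta if t >= b else 0.0)

f = lambda t: math.cos(t) + t            # continuous integrand

# A Riemann-Stieltjes sum over a fine partition; it converges to the
# Kurzweil-Stieltjes integral here because f is continuous.
n = 100000
ks = sum(f((i + 0.5) / n) * (g((i + 1) / n) - g(i / n)) for i in range(n))

# f(a)*Delta^+ g(a) + integral of f over (a,b) w.r.t. mu_g + f(b)*Delta^- g(b);
# mu_g restricted to (a, b) is c times the Dirac measure at tau.
rhs = f(a) * alpha + c * f(tau) + f(b) * beta
print(ks, rhs)                           # agree up to the partition error
```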

Theorem 2.6

If \(g:[a,b]\rightarrow {\mathbb {R}}\) and \(h:[c,d]\rightarrow {\mathbb {R}}\) have bounded variation and \(f:[a,b]\times [c,d]\rightarrow {\mathbb {R}}\) is Borel measurable and bounded, then

$$\begin{aligned} \int _a^b\left( \int _c^d f(x,y)\,\textrm{d}h(y)\right) \textrm{d}g(x)=\int _c^d\left( \int _a^b f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y). \end{aligned}$$

Proof

Since each function of bounded variation can be written as the difference of two nondecreasing functions, it suffices to prove the theorem for the case when \(g:[a,b]\rightarrow {\mathbb {R}}\) and \(h:[c,d]\rightarrow {\mathbb {R}}\) are nondecreasing. We extend g and h to the whole real line by letting \(g(x)=g(a)\) for all \(x<a\), \(h(y)=h(c)\) for all \(y<c\), \(g(x)=g(b)\) for all \(x>b\), and \(h(y)=h(d)\) for all \(y>d\). Let \(\mu _g\) and \(\mu _h\) be the corresponding Lebesgue–Stieltjes measures.

We also extend f to \({\mathbb {R}}^2\) by letting \(f(x,y)=0\) for all \((x,y)\in {\mathbb {R}}^2{\setminus }[a,b]\times [c,d]\); note that f is bounded and Borel measurable on \({\mathbb {R}}^2\). Fix an arbitrary \(\varepsilon >0\). The double integral \(\int _{(a-\varepsilon ,b+\varepsilon )\times (c-\varepsilon ,d+\varepsilon )}f\,\textrm{d}\mu _g\otimes \mu _h\) (where \(\mu _g\otimes \mu _h\) is the product measure) exists and has finite value. Applying the Fubini theorem to this integral and using the relation between Kurzweil–Stieltjes and Lebesgue–Stieltjes integrals, we get

$$\begin{aligned}{} & {} \int _a^b\left( \int _c^d f(x,y)\,\textrm{d}h(y)\right) \textrm{d}g(x)= \int _{a-\varepsilon }^{b+\varepsilon }\left( \int _{c-\varepsilon }^{d+\varepsilon } f(x,y)\,\textrm{d}h(y)\right) \textrm{d}g(x)\\{} & {} \quad =\int _{(a-\varepsilon ,b+\varepsilon )}\left( \int _{(c-\varepsilon ,d+\varepsilon )} f(x,y)\,\textrm{d}\mu _h(y)\right) \textrm{d}\mu _g(x)\\{} & {} \quad =\int _{(c-\varepsilon ,d+\varepsilon )}\left( \int _{(a-\varepsilon ,b+\varepsilon )} f(x,y)\,\textrm{d}\mu _g(x)\right) \textrm{d}\mu _h(y)\\{} & {} \quad =\int _{c-\varepsilon }^{d+\varepsilon }\left( \int _{a-\varepsilon }^{b+\varepsilon } f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y)=\int _c^d\left( \int _a^b f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y). \end{aligned}$$

\(\square \)

Corollary 2.7

If \(g,h:[a,b]\rightarrow {\mathbb {R}}\) have bounded variation and \(f:[a,b]\times [a,b]\rightarrow {\mathbb {R}}\) is Borel measurable and bounded, then

$$\begin{aligned}{} & {} \int _a^b\left( \int _a^x f(x,y)\,\textrm{d}h(y)\right) \textrm{d}g(x)\\{} & {} \quad =\int _a^b\left( \int _y^b f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y) +\sum _{y\in (a,b]}f(y,y)\Delta ^-g(y)\Delta h(y)\\{} & {} \qquad -\sum _{x\in [a,b)}f(x,x)\Delta ^+h(x)\Delta g(x), \end{aligned}$$

with the convention that \(\Delta g(a)=\Delta ^+g(a)\) and \(\Delta h(b)=\Delta ^-h(b)\).

Proof

Using Theorem 2.6 and Lemma 2.2, we get

$$\begin{aligned}{} & {} \int _a^b\left( \int _a^x f(x,y)\,\textrm{d}h(y)\right) \textrm{d}g(x)=\int _a^b\left( \int _a^x f(x,y)\chi _{[a,x]}(y)\,\textrm{d}h(y)\right) \textrm{d}g(x)\\{} & {} \quad =\int _a^b\left( \int _a^b f(x,y)\chi _{[a,x]}(y)\,\textrm{d}h(y)\right) \textrm{d}g(x)-\int _a^b\left( \int _x^b f(x,y)\chi _{[a,x]}(y)\,\textrm{d}h(y)\right) \textrm{d}g(x)\\{} & {} \quad =\int _a^b\left( \int _a^b f(x,y)\chi _{[y,b]}(x)\,\textrm{d}g(x)\right) \textrm{d}h(y)-\int _a^bf(x,x)\Delta ^+h(x)\,\textrm{d}g(x)\\{} & {} \quad =\int _a^b\left( \int _a^b f(x,y)\chi _{[y,b]}(x)\,\textrm{d}g(x)\right) \textrm{d}h(y)-\sum _{x\in [a,b)}f(x,x)\Delta ^+h(x)\Delta g(x)\\{} & {} \quad =\int _a^b\left( \int _a^y f(x,y)\chi _{[y,b]}(x)\,\textrm{d}g(x)\right) \textrm{d}h(y)+\int _a^b\left( \int _y^b f(x,y)\chi _{[y,b]}(x)\,\textrm{d}g(x)\right) \textrm{d}h(y)\\{} & {} \qquad -\sum _{x\in [a,b)}f(x,x)\Delta ^+h(x)\Delta g(x)\\{} & {} \quad =\int _a^b\left( \int _a^y f(x,y)\chi _{[y,b]}(x)\,\textrm{d}g(x)\right) \textrm{d}h(y)+\int _a^b\left( \int _y^b f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y)\\{} & {} \qquad -\sum _{x\in [a,b)}f(x,x)\Delta ^+h(x)\Delta g(x)\\{} & {} \quad =\int _a^bf(y,y)\Delta ^- g(y)\,\textrm{d}h(y)+\int _a^b\left( \int _y^b f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y)\\{} & {} \qquad -\sum _{x\in [a,b)}f(x,x)\Delta ^+h(x)\Delta g(x) \\{} & {} \quad =\sum _{y\in (a,b]}f(y,y)\Delta ^- g(y)\Delta h(y)+\int _a^b\left( \int _y^b f(x,y)\,\textrm{d}g(x)\right) \textrm{d}h(y)\\{} & {} \qquad -\sum _{x\in [a,b)}f(x,x)\Delta ^+h(x)\Delta g(x). \end{aligned}$$

\(\square \)
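
Since both sides of the identity in Corollary 2.7 reduce to finite sums when g and h are finite step functions, the corollary can be tested numerically. The following sketch does this with arbitrary test data of our own, using the classical evaluation of a Kurzweil–Stieltjes integral with a finite-step integrator (point values at the interior jumps plus the one-sided jumps at the endpoints):

```python
import math

# Pure jump functions on [A, B], encoded as {point: (Delta^-, Delta^+)};
# arbitrary test data.
A, B = 0.0, 1.0
gj = {0.0: (0.0, 0.4), 0.3: (0.5, -0.2), 0.7: (0.1, 0.3), 1.0: (0.6, 0.0)}
hj = {0.0: (0.0, -0.3), 0.3: (0.2, 0.4), 0.5: (0.7, 0.1), 1.0: (0.2, 0.0)}

def dm(j, t): return j.get(t, (0.0, 0.0))[0]    # Delta^- at t
def dp(j, t): return j.get(t, (0.0, 0.0))[1]    # Delta^+ at t

def ks(f, j, c, d):
    """KS integral of f over [c, d] w.r.t. the finite-step function j:
    f(c)*Delta^+ j(c) + sum_{s in (c,d)} f(s)*Delta j(s) + f(d)*Delta^- j(d)."""
    if c >= d:
        return 0.0
    return (f(c) * dp(j, c) + f(d) * dm(j, d)
            + sum(f(s) * (dm(j, s) + dp(j, s)) for s in j if c < s < d))

f = lambda x, y: math.exp(x - y) + x * y        # bounded Borel on the square

lhs = ks(lambda x: ks(lambda y: f(x, y), hj, A, x), gj, A, B)
rhs = ks(lambda y: ks(lambda x: f(x, y), gj, y, B), hj, A, B)

pts = set(gj) | set(hj)
# Correction sums, with Delta h(B) = Delta^- h(B) and Delta g(A) = Delta^+ g(A):
corr1 = sum(f(y, y) * dm(gj, y) * (dm(hj, y) + (dp(hj, y) if y < B else 0.0))
            for y in pts if y > A)
corr2 = sum(f(x, x) * dp(hj, x) * (dp(gj, x) + (dm(gj, x) if x > A else 0.0))
            for x in pts if x < B)
print(lhs, rhs + corr1 - corr2)                 # the two values coincide
```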

We will deal with integrals having complex-valued integrands and integrators. Given a pair of functions \(f,g:[a,b]\rightarrow {\mathbb {C}}\) with real parts \(f_1,g_1\) and imaginary parts \(f_2,g_2\), we define

$$\begin{aligned} \int _a^b f\,{\textrm{d}}g=\int _a^b (f_1+if_2)\,{\textrm{d}}(g_1+ig_2)=\int _a^b f_1\,{\textrm{d}}g_1-\int _a^b f_2\,{\textrm{d}}g_2+i\left( \int _a^b f_1\,{\textrm{d}}g_2+\int _a^b f_2\,{\textrm{d}}g_1\right) \end{aligned}$$

whenever the integrals on the right-hand side exist. Using this definition, it is not difficult to check that all results mentioned in this section remain valid for complex-valued functions.

3 The Homogeneous Case

Let us begin by summarizing the basic results for the simplest linear scalar Stieltjes integral equation

$$\begin{aligned} z(t)=z(t_0)+\int _{t_0}^t z(s)\,\textrm{d}P(s), \quad t\in [a,b], \end{aligned}$$
(3.1)

where \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation. It is well known that if \(1+\Delta ^+P(t)\ne 0\) for all \(t\in [a,t_0)\) and \(1-\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\), then Eq. (3.1) has a unique solution satisfying \(z(t_0)=z_0\) for each \(z_0\in {\mathbb {C}}\). This result can be found in several sources; see e.g. [8, Theorem 2.7] or [9, Theorem 8.5.1].

The solution corresponding to \(z_0=1\) is known as the generalized exponential function (see [8] and [9, Section 8.5]), and is denoted by \(t\mapsto e_{\textrm{d}P}(t,t_0)\). The solution corresponding to a general \(z_0\in {\mathbb {C}}\) is then given by \(z(t)=z_0e_{\textrm{d}P}(t,t_0)\). The generalized exponential function has the following basic properties (see [8, Theorem 3.2] or [9, Theorem 8.5.3]):

  • The function \(t\mapsto e_{\textrm{d}P}(t,t_0)\) is regulated on \([a,b]\) and satisfies

    $$\begin{aligned} e_{\textrm{d}P}(t+,t_0)&=(1+\Delta ^+P(t))\,e_{\textrm{d}P}(t,t_0),\quad t\in [a,b), \end{aligned}$$
    (3.2)
    $$\begin{aligned} e_{\textrm{d}P}(t-,t_0)&=(1-\Delta ^-P(t))\,e_{\textrm{d}P}(t,t_0),\quad t\in (a,b]. \end{aligned}$$
    (3.3)
  • \(e_{\textrm{d}P}(t,s)\,e_{\textrm{d}P}(s,r)=e_{\textrm{d}P}(t,r)\) for every \(t,s,r\in [a,b]\).

  • \(e_{\textrm{d}P}(t,s)=e_{\textrm{d}P}(s,t)^{-1}\) for every \(t,\,s\in [a,b]\).

  • If P is continuous, then \(e_{\textrm{d}P}(t,t_0)=e^{P(t)-P(t_0)}\) for every \(t\in [a,b]\).

These properties lead to the following explicit formula for the generalized exponential function:

$$\begin{aligned} e_{\textrm{d}P}(t,t_0)={\left\{ \begin{array}{ll} 1,&{}t=t_0,\\ \dfrac{e^{P(t-)-P(t_0+)}}{e^{\sum _{s\in (t_0,t)}\Delta P(s)}}\,\, \dfrac{\prod \nolimits _{s\in [t_0,t)}(1+\Delta ^+P(s))}{\prod \nolimits _{s\in (t_0,t]}(1-\Delta ^-P(s))},&{}t>t_0,\\ {} \dfrac{e^{\sum _{s\in (t,t_0)}\Delta P(s)}}{e^{P(t_0-)-P(t+)}}\,\, \dfrac{\prod \nolimits _{s\in (t,t_0]}(1-\Delta ^-P(s))}{\prod \nolimits _{s\in [t,t_0)}(1+\Delta ^+P(s))},&{}t<t_0. \end{array}\right. } \end{aligned}$$
(3.4)

The idea behind this formula is simple: For \(t>t_0\) and a continuous function P, the correct formula would be \(e^{P(t)-P(t_0)}\). For each discontinuity point \(s\in (t_0,t)\), the ratio of the right and left limits of this function is \(e^{\Delta P(s)}\), but the correct ratio according to the relations (3.2) and (3.3) should be \((1+\Delta ^+ P(s))/(1-\Delta ^-P(s))\); this is where the additional terms in Eq. (3.4) come from. The endpoints \(t_0\) and t have to be treated separately, because only one-sided limits are relevant here.

A rigorous proof of Eq. (3.4) can be found in [9, Theorem 8.5.4] (which deals only with the case \(t>t_0\), but the formula for \(t<t_0\) follows from the relation \(e_{\textrm{d}P}(t,t_0)=e_{\textrm{d}P}(t_0,t)^{-1}\)). The idea is to verify the formula for functions P having at most finitely many discontinuities, and then use functions of this type to approximate a general function P with bounded variation.
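
As a quick sanity check of Eq. (3.4), the following sketch implements the formula for a P built from a continuous part and finitely many jumps (our own test data, chosen so that \(1+\Delta ^+P\ne 0\) and \(1-\Delta ^-P\ne 0\) everywhere), and verifies the product property \(e_{\textrm{d}P}(t,s)\,e_{\textrm{d}P}(s,r)=e_{\textrm{d}P}(t,r)\) at randomly chosen points:

```python
import math, random

# P on [0, 1]: continuous BV part PC plus jumps[s] = (Delta^- P(s), Delta^+ P(s)),
# with 1 + Delta^+ P(s) != 0 and 1 - Delta^- P(s) != 0 everywhere.
PC = lambda t: math.sin(2 * t) - t
jumps = {0.2: (0.3, -0.4), 0.5: (-0.6, 0.25), 0.8: (0.5, 0.1)}

def P_left(t):   # P(t-)
    return PC(t) + sum(dm + dp for s, (dm, dp) in jumps.items() if s < t)

def P_right(t):  # P(t+)
    return PC(t) + sum(dm + dp for s, (dm, dp) in jumps.items() if s <= t)

def e_dP(t, t0):  # formula (3.4)
    if t == t0:
        return 1.0
    if t > t0:
        expo = P_left(t) - P_right(t0) \
             - sum(dm + dp for s, (dm, dp) in jumps.items() if t0 < s < t)
        num = math.prod(1 + dp for s, (_, dp) in jumps.items() if t0 <= s < t)
        den = math.prod(1 - dm for s, (dm, _) in jumps.items() if t0 < s <= t)
    else:
        expo = sum(dm + dp for s, (dm, dp) in jumps.items() if t < s < t0) \
             - (P_left(t0) - P_right(t))
        num = math.prod(1 - dm for s, (dm, _) in jumps.items() if t < s <= t0)
        den = math.prod(1 + dp for s, (_, dp) in jumps.items() if t <= s < t0)
    return math.exp(expo) * num / den

random.seed(1)
for _ in range(5):
    t, s, r = (random.uniform(0, 1) for _ in range(3))
    print(abs(e_dP(t, s) * e_dP(s, r) - e_dP(t, r)))   # zero up to rounding
```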

It might be worth pointing out that if \(g:[a,b]\rightarrow {\mathbb {R}}\) is a left-continuous nondecreasing function, \(p:[a,b]\rightarrow {\mathbb {C}}\) is integrable with respect to g, and \(P(t)=\int _{t_0}^t p(s)\,\textrm{d}g(s)\) for all \(t\in [a,b]\), then the formula (3.4) contains as a special case the formulas for the g-exponential function developed recently in [1, 2, 4] in the context of Stieltjes differential equations.

Our goal in the present paper is to obtain similar formulas for the following pair of integral equations:

$$\begin{aligned} x(t)&=x(t_0)+\int _{t_0}^t x(s-)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$
(3.5)
$$\begin{aligned} y(t)&=y(t_0)-\int _{t_0}^t y(s+)\,\textrm{d}P(s),\quad t\in [a,b]. \end{aligned}$$
(3.6)

As explained in [5], these equations have to be interpreted carefully: In the first integral, the value \(x(s-)\) should be understood as x(s) when s coincides with \(\min (t_0,t)\). Similarly, in the second integral, the value \(y(s+)\) should be understood as y(s) when s coincides with \(\max (t_0,t)\). Keeping in mind that \(\int _{t_0}^t f\,\textrm{d}g=-\int ^{t_0}_t f\,\textrm{d}g\) if \(t<t_0\), we arrive at the following definition of solutions.

Definition 3.1

A regulated function \(x:[a,b]\rightarrow {\mathbb {C}}\) is called a solution of Eq. (3.5) if it satisfies

$$\begin{aligned} x(t)={\left\{ \begin{array}{ll} x(t_0)+\int _{t_0}^t \left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s),&{}t\ge t_0, \\ x(t_0)-\int _{t}^{t_0} \left( x(s-)\chi _{(t,t_0]}(s)+x(t)\chi _{\{t\}}(s)\right) \textrm{d}P(s),&{}t\le t_0. \end{array}\right. } \end{aligned}$$

A regulated function \(y:[a,b]\rightarrow {\mathbb {C}}\) is called a solution of Eq. (3.6) if it satisfies

$$\begin{aligned} y(t)={\left\{ \begin{array}{ll} y(t_0)-\int _{t_0}^t \left( y(s+)\chi _{[t_0,t)}(s)+y(t)\chi _{\{t\}}(s)\right) \textrm{d}P(s),&{}t\ge t_0, \\ y(t_0)+\int _{t}^{t_0} \left( y(s+)\chi _{[t,t_0)}(s)+y(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s),&{}t\le t_0. \end{array}\right. } \end{aligned}$$

Remark 3.2

As pointed out in [5, Remark 6.4], there is a close relation between solutions of Eqs. (3.5) and (3.6): A function \(y:[a,b]\rightarrow {\mathbb {C}}\) is a solution of the equation

$$\begin{aligned} y(t)=y(t_0)-\int _{t_0}^t y(s+)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$

if and only if the function \(x:[-b,-a]\rightarrow {\mathbb {C}}\) given by \(x(t)=y(-t)\) is a solution of the equation

$$\begin{aligned} x(t)=x(-t_0)+\int _{-t_0}^t x(s-)\,\textrm{d}{\widetilde{P}}(s),\quad t\in [-b,-a], \end{aligned}$$

where \({\widetilde{P}}(s)=-P(-s)\) for \(s\in [-b,-a]\). Both solutions are understood in the sense of Definition 3.1. Moreover, \(\Delta ^+{\widetilde{P}}(t)=\Delta ^-P(-t)\) for all \(t\in [-b,-a)\), and \(\Delta ^-{\widetilde{P}}(t)=\Delta ^+P(-t)\) for all \(t\in (-b,-a]\).

These observations are easily extended to nonhomogeneous equations that will be studied in Sect. 4: A function \(y:[a,b]\rightarrow {\mathbb {C}}\) is a solution of the equation

$$\begin{aligned} y(t)=g(t)-\int _{t_0}^t y(s+)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$

if and only if the function \(x:[-b,-a]\rightarrow {\mathbb {C}}\) given by \(x(t)=y(-t)\) is a solution of the equation

$$\begin{aligned} x(t)={\tilde{g}}(t)+\int _{-t_0}^t x(s-)\,\textrm{d}{\widetilde{P}}(s),\quad t\in [-b,-a], \end{aligned}$$

where \({\widetilde{P}}\) is as before, and \({\tilde{g}}(t)=g(-t)\) for all \(t\in [-b,-a]\).

To derive explicit formulas for solutions of Eqs. (3.5) and (3.6), we need suitable relations for the one-sided limits of solutions.

Lemma 3.3

Suppose that \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation and \(t_0\in [a,b]\).

  1.

    If \(x:[a,b]\rightarrow {\mathbb {C}}\) is a solution of Eq. (3.5) on \([a,b]\), then

    $$\begin{aligned} x(t-)(1+\Delta P(t))&=x(t)(1+\Delta ^+P(t)),&t\in (a,t_0),\\ x(t-)(1+\Delta ^-P(t))&=x(t),&t\in [t_0,b],\\ x(t+)&=x(t)(1+\Delta ^+P(t)),&t\in [a,t_0],\\ x(t+)&=x(t-)(1+\Delta P(t)),&t\in (t_0,b). \end{aligned}$$
  2.

    If \(y:[a,b]\rightarrow {\mathbb {C}}\) is a solution of Eq. (3.6) on \([a,b]\), then

    $$\begin{aligned} y(t+)(1+\Delta ^+P(t))&=y(t),&t\in [a,t_0],\\ y(t+)(1+\Delta P(t))&=y(t)(1+\Delta ^-P(t)),&t\in (t_0,b),\\ y(t-)&=y(t)(1+\Delta ^-P(t)),&t\in [t_0,b],\\ y(t-)&=y(t+)(1+\Delta P(t)),&t\in (a,t_0). \end{aligned}$$

Proof

The first two equalities were already proved in [5] (see the proof of Lemma 6.5 there). Let us prove the third equality. For each \(t\in [a,t_0)\), we calculate (with the help of Lemma 2.2, Theorem 2.4, and the relation (2.1))

$$\begin{aligned} x(t+)= & {} \lim _{\delta \rightarrow 0+}\left( x(t_0)-\int _{t+\delta }^{t_0}\left( x(s-)\chi _{(t+\delta ,t_0]}(s)+x(t+\delta ) \chi _{\{t+\delta \}}(s)\right) \textrm{d}P(s)\right) \\= & {} \lim _{\delta \rightarrow 0+}\left( x(t_0)-\int _{a}^{t_0}x(s-)\chi _{(t+\delta ,t_0]}(s)\,\textrm{d}P(s)-x(t+\delta )\Delta ^+P(t+\delta )\right) \\= & {} x(t_0)-\int _{a}^{t_0}x(s-)\chi _{(t,t_0]}(s)\,\textrm{d}P(s)\\= & {} x(t_0)-\int _{t}^{t_0}\left( x(s-)\chi _{(t,t_0]}(s)+x(t)\chi _{\{t\}}(s)\right) \textrm{d}P(s)+x(t)\Delta ^+P(t)\\= & {} x(t)+x(t)\Delta ^+P(t)=x(t)(1+\Delta ^+P(t)). \end{aligned}$$

For \(t=t_0\), we get (using Lemma 2.2, Theorem 2.4, and the relation (2.2))

$$\begin{aligned} x(t_0+)= & {} x(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t_0+\delta }\left( x(s-)\chi _{(t_0,t_0+\delta ]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\right) \\= & {} x(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{b}\left( x(s-)\chi _{(t_0,t_0+\delta )}(s)\right. \right. \\{} & {} \quad \left. \left. +x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)+x(t_0+\delta -)\Delta ^-P(t_0+\delta )\right) \\= & {} x(t_0)+\int _{t_0}^{b}x(t_0)\chi _{\{t_0\}}(s)\,\textrm{d}P(s)=x(t_0)+x(t_0)\Delta ^+P(t_0)\\= & {} x(t_0)(1+\Delta ^+P(t_0)), \end{aligned}$$

which proves the third equality. To get the fourth one, let \(t\in (t_0,b)\) and calculate (again using Lemma 2.2, Theorem 2.4, and the relation (2.2))

$$\begin{aligned} x(t+)= & {} x(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t+\delta }\left( x(s-)\chi _{(t_0,t+\delta ]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\right) \\= & {} x(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{b}\left( x(s-)\chi _{(t_0,t+\delta )}(s)\right. \right. \\{} & {} \qquad \left. \left. +x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)+x(t+\delta -)\Delta ^-P(t+\delta )\right) \\= & {} x(t_0)+\int _{t_0}^{b}\left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\\= & {} x(t_0)+\int _{t_0}^{t}\left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)+\int _{t}^{b}x(s-)\chi _{(t_0,t]}(s)\,\textrm{d}P(s)\\= & {} x(t)+x(t-)\Delta ^+P(t)=x(t-)(1+\Delta ^-P(t))+x(t-)\Delta ^+P(t)=x(t-)(1+\Delta P(t)). \end{aligned}$$

The remaining four equalities can be proved similarly, but it is easier to utilize the relation between solutions of Eqs. (3.6) and (3.5) described in Remark 3.2. \(\square \)

The relations described in the previous lemma lead us to conjecture that the explicit formulas for the solutions of Eqs. (3.5) and (3.6) with initial conditions \(x(t_0)=x_0\) and \(y(t_0)=y_0\) might look as follows:

$$\begin{aligned} x(t)={\left\{ \begin{array}{ll} x_0,&{}{}t=t_0,\\ x_0\dfrac{e^{P(t-)-P(t_0+)}}{e^{\sum _{s\in (t_0,t)}\Delta P(s)}}(1+\Delta ^+P(t_0))\prod \limits _{s\in (t_0,t)}(1+\Delta P(s))(1+\Delta ^-P(t)),&{}{}t>t_0,\\ x_0\dfrac{e^{\sum _{s\in (t,t_0)}\Delta P(s)}}{e^{P(t_0-)-P(t+)}}\dfrac{1}{1+\Delta ^+P(t)}\dfrac{1}{\prod \limits _{s\in (t,t_0)}(1+\Delta P(s))}\dfrac{1}{1+\Delta ^-P(t_0)},&{}{}t<t_0, \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.7)
$$\begin{aligned} y(t)={\left\{ \begin{array}{ll} y_0,&{}t=t_0,\\ y_0\dfrac{e^{\sum _{s\in (t_0,t)}\Delta P(s)}}{e^{P(t-)-P(t_0+)}}\dfrac{1}{1+\Delta ^+P(t_0)}\dfrac{1}{\prod \limits _{s\in (t_0,t)}(1+\Delta P(s))}\dfrac{1}{1+\Delta ^-P(t)},&{}t>t_0,\\ y_0\dfrac{e^{P(t_0-)-P(t+)}}{e^{\sum _{s\in (t,t_0)}\Delta P(s)}}(1+\Delta ^+P(t))\prod \limits _{s\in (t,t_0)}(1+\Delta P(s))(1+\Delta ^-P(t_0)),&{}t<t_0. \end{array}\right. }\nonumber \\ \end{aligned}$$
(3.8)
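
Before turning to the proofs, here is a small numerical sanity check of the first formula (a sketch with illustrative data: \(t_0=a=0\) and P consisting of a linear part plus three jumps). It implements Eq. (3.7) for \(t\ge t_0\) and evaluates the residual of Eq. (3.5), computing the jump part of the integral via Lemma 2.2, the left limits via the relations of Lemma 3.3, and the continuous part by quadrature:

```python
import math

# Illustrative data: on [0, 1], P(s) = alpha*s plus jumps,
# jumps[s] = (Delta^- P(s), Delta^+ P(s)); t0 = a = 0.
alpha, t0 = 0.8, 0.0
jumps = {0.0: (0.0, 0.5), 0.4: (0.3, -0.2), 0.7: (-0.4, 0.6)}

def P_left(t):   # P(t-)
    return alpha * t + sum(dm + dp for s, (dm, dp) in jumps.items() if s < t)

def P_right(t):  # P(t+)
    return alpha * t + sum(dm + dp for s, (dm, dp) in jumps.items() if s <= t)

def x(t):        # formula (3.7) with x_0 = 1, for t >= t0
    if t == t0:
        return 1.0
    expo = P_left(t) - P_right(t0) \
         - sum(dm + dp for s, (dm, dp) in jumps.items() if t0 < s < t)
    prod = math.prod(1 + dm + dp for s, (dm, dp) in jumps.items() if t0 < s < t)
    return (math.exp(expo) * (1 + jumps.get(t0, (0, 0))[1]) * prod
            * (1 + jumps.get(t, (0, 0))[0]))

# Residual of x(t) = 1 + int_{t0}^t x(s-) dP(s) at a non-jump point t:
t = 0.9
integral = x(t0) * jumps[t0][1]               # x(t0) * Delta^+ P(t0)
for s, (dm, dp) in jumps.items():             # interior jump terms (Lemma 2.2)
    if t0 < s < t:
        integral += (x(s) / (1 + dm)) * (dm + dp)  # x(s-) = x(s)/(1+Delta^-P(s))
cuts = sorted([t0, t] + [s for s in jumps if t0 < s < t])
for p, q in zip(cuts, cuts[1:]):              # continuous part: alpha * int x ds
    m = 20000
    h = (q - p) / m                           # midpoint rule on smooth pieces
    integral += alpha * h * sum(x(p + (i + 0.5) * h) for i in range(m))
print(abs(x(t) - (1.0 + integral)))           # small (quadrature error only)
```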

We begin by verifying the formulas in the case when P has at most finitely many discontinuities.

Lemma 3.4

Suppose that \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation and at most finitely many discontinuities.

  1.

    If \(1+\Delta ^-P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (a,t_0)\), and \(1+\Delta ^+P(t)\ne 0\) for all \(t\in [a,t_0)\), then the function \(x:[a,b]\rightarrow {\mathbb {C}}\) given by Eq. (3.7) is a solution of Eq. (3.5).

  2.

    If \(1+\Delta ^+P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (t_0,b)\), and \(1+\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\), then the function \(y:[a,b]\rightarrow {\mathbb {C}}\) given by Eq. (3.8) is a solution of Eq. (3.6).

Proof

Let us prove the first statement. Since P has at most finitely many discontinuities, the sums and products in the definition of x have finitely many terms, and the definition makes sense.

Suppose that the discontinuities of P are contained among the points

$$\begin{aligned} a=t_{-m}<t_{-m+1}<\cdots<t_0<t_1<\cdots < t_n=b. \end{aligned}$$

It is clear that Eq. (3.5) holds if \(t=t_0\). Next, suppose that \(t\in (t_0,t_1)\). Then, using Lemma 2.2 and Theorem 2.3, we obtain

$$\begin{aligned}{} & {} x_0+\int _{t_0}^{t}\left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\\{} & {} \quad =x_0+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t_0+\delta }\left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\right. \\{} & {} \qquad \left. +\int _{t_0+\delta }^{t}x(s-)\chi _{(t_0,t]}(s)\,\textrm{d}P(s)\right) \\{} & {} \quad =x_0+x_0\Delta ^+P(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0+\delta }^{t}x(s-)\chi _{(t_0,t]}(s)\,\textrm{d}P(s)\right) . \end{aligned}$$

Since \(x(s-)=x(s)=x_0(1+\Delta ^+P(t_0))e^{P(s)-P(t_0+)}\) for all \(s\in (t_0,t_1)\), the previous line equals

$$\begin{aligned}{} & {} x_0(1+\Delta ^+P(t_0))\left( 1+\int _{t_0+}^t e^{P(s)-P(t_0+)}\,\textrm{d}P(s)\right) \\{} & {} \quad =x_0(1+\Delta ^+P(t_0))e^{P(t)-P(t_0+)}=x(t). \end{aligned}$$

Thus, Eq. (3.5) holds for all \(t\in (t_0,t_1)\).

If Eq. (3.5) holds for \(t\in (t_{i-1},t_i)\) with \(i\in \{1,\ldots ,n\}\), then it also holds for \(t=t_i\), because

$$\begin{aligned}{} & {} x_0+\int _{t_0}^{t_i}\left( x(s-)\chi _{(t_0,t_i]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\\{} & {} \quad =x_0+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t_i-\delta }\left( x(s-)\chi _{(t_0,t_i]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\right) +x(t_i-)\Delta ^-P(t_i)\\{} & {} \quad =x(t_i-)+x(t_i-)\Delta ^-P(t_i)=x(t_i-)(1+\Delta ^-P(t_i))=x(t_i), \end{aligned}$$

where the last equality follows from the definition of x.

Next, we show that if Eq. (3.5) holds for \(t\in [t_0,t_i]\) with \(i\in \{1,\ldots ,n-1\}\), then it also holds for \(t\in (t_i,t_{i+1})\). For these values of t, we have

$$\begin{aligned}{} & {} x_0+\int _{t_0}^{t}\left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s)\\{} & {} \quad =x_0+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t_{i}-\delta }\left( x(s-)\chi _{(t_0,t]}(s)+x(t_0)\chi _{\{t_0\}}(s)\right) \textrm{d}P(s) \right. \\{} & {} \qquad \left. +\int _{t_{i}-\delta }^{t_i+\delta }x(s-)\,\textrm{d}P(s) +\int _{t_i+\delta }^tx(s-)\,\textrm{d}P(s)\right) \\{} & {} \quad =\lim _{\delta \rightarrow 0+}x(t_{i}-\delta ) +\lim _{\delta \rightarrow 0+}\int _{t_{i}-\delta }^{t_i+\delta }x(s-)\,\textrm{d}P(s) +\lim _{\delta \rightarrow 0+}\int _{t_i+\delta }^tx(s-)\,\textrm{d}P(s)\\{} & {} \quad =x(t_i-)+x(t_i-)\Delta P(t_i)+\int _{t_i+}^tx(s-)\,\textrm{d}P(s)\\{} & {} \quad =x(t_i-)(1+\Delta P(t_i))+\int _{t_i+}^tx(s-)\,\textrm{d}P(s). \end{aligned}$$

Using the fact that

$$\begin{aligned} x(s-)= & {} x(s)=x(t_{i}-)(1+\Delta P(t_{i}))\frac{e^{P(s)-P(t_{i}-)}}{e^{\Delta P(t_{i})}}\\= & {} x(t_{i}-)(1+\Delta P(t_{i}))e^{P(s)-P(t_{i}+)},\quad s\in (t_i,t_{i+1}), \end{aligned}$$

we continue in the previous calculation and obtain

$$\begin{aligned}{} & {} x(t_i-)(1+\Delta P(t_i))\left( 1+\int _{t_i+}^t e^{P(s)-P(t_{i}+)}\,\textrm{d}P(s)\right) \\{} & {} \quad =x(t_i-)(1+\Delta P(t_i))(e^{P(t)-P(t_{i}+)})=x(t). \end{aligned}$$

The arguments so far show that Eq. (3.5) holds for all \(t\in [t_0,b]\), and it remains to consider \(t<t_0\). We begin with \(t\in (t_{-1},t_0)\). Since P is continuous at t, we obtain

$$\begin{aligned}{} & {} x_0-\int _{t}^{t_0} \left( x(s-)\chi _{(t,t_0]}(s)+x(t)\chi _{\{t\}}(s)\right) \textrm{d}P(s)=x_0-\int _{t}^{t_0} x(s-)\,\textrm{d}P(s)\\{} & {} \quad =x_0-\lim _{\delta \rightarrow 0+}\left( \int _{t}^{t_0-\delta } x(s-)\,\textrm{d}P(s)+\int _{t_0-\delta }^{t_0} x(s-)\,\textrm{d}P(s)\right) \\{} & {} \quad =x_0-x(t_0-)\Delta ^-P(t_0)-\int _{t}^{t_0-} x(s-)\,\textrm{d}P(s). \end{aligned}$$

Using the fact that

$$\begin{aligned} x(s-)=x(s)=\frac{x_0}{1+\Delta ^-P(t_0)}e^{P(s)-P(t_0-)},\quad s\in (t_{-1},t_0), \end{aligned}$$

we continue in the previous calculation and get

$$\begin{aligned}{} & {} x_0-x(t_0-)\Delta ^-P(t_0)-\frac{x_0}{1+\Delta ^-P(t_0)}\int _{t}^{t_0-} e^{P(s)-P(t_0-)}\textrm{d}P(s)\\{} & {} \quad =x_0-\frac{x_0}{1+\Delta ^-P(t_0)}\Delta ^-P(t_0)+\frac{x_0}{1+\Delta ^-P(t_0)}\left( e^{P(t)-P(t_0-)}-1\right) \\{} & {} \quad =\frac{x_0}{1+\Delta ^-P(t_0)}e^{P(t)-P(t_0-)}=x(t), \end{aligned}$$

which shows that Eq. (3.5) holds for all \(t\in (t_{-1},t_0)\).

Next, suppose that Eq. (3.5) holds for all \(t\in (t_{-i},t_{-i+1})\) with \(i\in \{1,\ldots ,m\}\), and let us show that it holds for \(t=t_{-i}\). We calculate

$$\begin{aligned} \begin{aligned}{}&{} x_0-\int _{t_{-i}}^{t_0} \left( x(s-)\chi _{(t_{-i},t_0]}(s)+x(t_{-i})\chi _{\{t_{-i}\}}(s)\right) \textrm{d}P(s)\\{}&{} \quad =x_0-\int _{t_{-i}}^{t_0} x(s-)\chi _{(t_{-i},t_0]}(s)\,\textrm{d}P(s)-x(t_{-i})\Delta ^+P(t_{-i})\\{}&{} \quad =x_0-\lim _{\delta \rightarrow 0+}\left( \int _{t_{-i}+\delta }^{t_0} x(s-)\chi _{(t_{-i}+\delta ,t_0]}(s)\,\textrm{d}P(s)\right) -x(t_{-i})\Delta ^+P(t_{-i})\\{}&{} \quad =x_0-\lim _{\delta \rightarrow 0+}\left( \int _{t_{-i}+\delta }^{t_0} \left( x(s-)\chi _{(t_{-i}+\delta ,t_0]}(s)+x(t_{-i}+\delta )\chi _{\{t_{-i}+\delta \}}(s)\right) \textrm{d}P(s)\right. \\{}&{} \qquad \left. -x(t_{-i}+\delta )\Delta ^+P(t_{-i}+\delta )\right) -x(t_{-i})\Delta ^+P(t_{-i}) \\{}&{} \quad =x(t_{-i}+)-x(t_{-i})\Delta ^+P(t_{-i})\\{}&{} \quad =x(t_{-i})(1+\Delta ^+P(t_{-i}))-x(t_{-i})\Delta ^+P(t_{-i})=x(t_{-i}). \end{aligned} \end{aligned}$$

Finally, we show that if Eq. (3.5) holds for \(t\in [t_{-i},t_0]\) with \(i\in \{1,\ldots ,m-1\}\), then it also holds for \(t\in (t_{-i-1},t_{-i})\). Since P is continuous at these values of t, we have

$$\begin{aligned} x_0-\int _{t}^{t_0} \left( x(s-)\chi _{(t,t_0]}(s)+x(t)\chi _{\{t\}}(s)\right) \textrm{d}P(s)=x_0-\int _{t}^{t_0} x(s-)\,\textrm{d}P(s)\nonumber \\ =x_0-\int _{t}^{t_{-i}-} x(s-)\,\textrm{d}P(s)-x(t_{-i}-)\Delta P(t_{-i}) -\int _{t_{-i}+}^{t_0} x(s-)\,\textrm{d}P(s).\nonumber \\ \end{aligned}$$
(3.9)

To evaluate the first integral in (3.9), observe that

$$\begin{aligned} x(s-)=x(s)=\frac{x(t_{-i}+)}{1+\Delta P(t_{-i})}e^{P(s)-P(t_{-i}-)},\quad s\in (t_{-i-1},t_{-i}), \end{aligned}$$

and therefore

$$\begin{aligned}{} & {} -\int _{t}^{t_{-i}-} x(s-)\,\textrm{d}P(s)=\frac{x(t_{-i}+)}{1+\Delta P(t_{-i})}\int _{t_{-i}-}^t e^{P(s)-P(t_{-i}-)} \,\textrm{d}P(s)\\{} & {} \quad =\frac{x(t_{-i}+)}{1+\Delta P(t_{-i})}\left( e^{P(t)-P(t_{-i}-)}-1\right) . \end{aligned}$$

Combining the first and the last term in (3.9) and using continuity of P on a right neighborhood of \(t_{-i}\), we get

$$\begin{aligned} \begin{aligned}{}&{} x_0-\int _{t_{-i}+}^{t_0} x(s-)\,\text {d}P(s) =x_0-\lim _{\delta \rightarrow 0+}\int _{t_{-i}+\delta }^{t_0}\bigl ( x(s-)\chi _{(t_{-i}+\delta ,t_0]}(s) \\{}&{} \quad +x(t_{-i}+\delta )\chi _{\{t_{-i}+\delta \}}(s)\bigr ) \text {d}P(s)=x(t_{-i}+). \end{aligned} \end{aligned}$$

Substituting the previous results back into (3.9), we obtain

$$\begin{aligned}{} & {} x(t_{-i}+)-x(t_{-i}-)\Delta P(t_{-i})+\frac{x(t_{-i}+)}{1+\Delta P(t_{-i})}\left( e^{P(t)-P(t_{-i}-)}-1\right) \\{} & {} \quad =x(t_{-i}+)-x(t_{-i}+)\frac{\Delta P(t_{-i})}{1+\Delta P(t_{-i})}+\frac{x(t_{-i}+)}{1+\Delta P(t_{-i})}\left( e^{P(t)-P(t_{-i}-)}-1\right) \\{} & {} \quad =\frac{x(t_{-i}+)}{1+\Delta P(t_{-i})}\left( e^{P(t)-P(t_{-i}-)}\right) =x(t). \end{aligned}$$

Hence, Eq. (3.5) holds for all \(t\in [a,t_0]\).

The fact that y is a solution of Eq. (3.6) can be verified similarly, but it is easier to use the relation between solutions of Eqs. (3.6) and (3.5) described in Remark 3.2. Indeed, let \({\widetilde{x}}:[-b,-a]\rightarrow {\mathbb {C}}\) be given by \({\widetilde{x}}(t)=y(-t)\), \(t\in [-b,-a]\), and \({\widetilde{P}}:[-b,-a]\rightarrow {\mathbb {C}}\) be given by \({\widetilde{P}}(s)=-P(-s)\), \(s\in [-b,-a]\). Then

$$\begin{aligned} {\widetilde{x}}(t)={\left\{ \begin{array}{ll} y_0,&{}t=-t_0,\\ y_0\dfrac{e^{\sum _{s\in (t,-t_0)}\Delta {\widetilde{P}}(s)}}{e^{-{\widetilde{P}}(t+)+{\widetilde{P}}(-t_0-)}}\dfrac{1}{1+\Delta ^-{\widetilde{P}}(-t_0)}\dfrac{1}{\prod \limits _{s\in (t,-t_0)}(1+\Delta {\widetilde{P}}(s))}\dfrac{1}{1+\Delta ^+{\widetilde{P}}(t)},&{}t<-t_0,\\ y_0\dfrac{e^{-{\widetilde{P}}(-t_0+)+{\widetilde{P}}(t-)}}{e^{\sum _{s\in (-t_0,t)}\Delta {\widetilde{P}}(s)}}(1+\Delta ^-{\widetilde{P}}(t))\prod \limits _{s\in (-t_0,t)}(1+\Delta {\widetilde{P}}(s))(1+\Delta ^+{\widetilde{P}}(-t_0)),&{}t>-t_0. \end{array}\right. } \end{aligned}$$

Hence, using the first part of the present lemma, we conclude that \({\tilde{x}}\) satisfies

$$\begin{aligned} {\widetilde{x}}(t)={\widetilde{x}}(-t_0)+\int _{-t_0}^t {\widetilde{x}}(s-)\,\textrm{d}{\widetilde{P}}(s),\quad t\in [-b,-a]. \end{aligned}$$

By Remark 3.2, this means that \(y(t)=\widetilde{x}(-t)\) is a solution of Eq. (3.6). \(\square \)

Relaxing the condition that P has at most finitely many discontinuities, we arrive at the first main result of this paper.

Theorem 3.5

Suppose that \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation.

  1.

    If \(1+\Delta ^-P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (a,t_0)\), and \(1+\Delta ^+P(t)\ne 0\) for all \(t\in [a,t_0)\), then the function \(x:[a,b]\rightarrow {\mathbb {C}}\) given by Eq. (3.7) is a solution of Eq. (3.5).

  2.

    If \(1+\Delta ^+P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (t_0,b)\), and \(1+\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\), then the function \(y:[a,b]\rightarrow {\mathbb {C}}\) given by Eq. (3.8) is a solution of Eq. (3.6).

Proof

Let us prove the first statement. Thanks to Lemma 3.4, it suffices to consider the case when P has infinitely many discontinuities. Since P has bounded variation, there are countably many of them. Let \(D=\{s_k\}_{k=1}^\infty \) be a sequence of all discontinuity points contained in \((a,b)\). Since \(\sum _{k=1}^\infty |\Delta P(s_k)|<\infty \) (see [9, Corollary 2.3.8]), the infinite sums and products in the definition of x are absolutely convergent. Moreover, the assumption \(1+\Delta P(t)\ne 0\) for \(t\in (a,t_0)\) implies that the infinite product in the denominator is nonzero, and the definition of x makes sense.

Consider the Jordan decomposition \(P=P^{\mathrm C}+P^{\mathrm B}\), where

$$\begin{aligned} P^{\mathrm B}(t)&=P(a)+\Delta ^+P(a)\,\chi _{(a,b]}(t)+\Delta ^-P(b)\,\chi _{\{b\}}(t) \\ {}&\quad +\sum _{k=1}^\infty \Big (\Delta ^+P(s_k)\,\chi _{(s_k,b]}(t) +\Delta ^-P(s_k)\,\chi _{[s_k,b]}(t)\Big ), \quad t\in [a,b], \end{aligned}$$

is the jump part of P,  and \(P^{\mathrm C}=P-P^{\mathrm B}\) is the continuous part of P (see [9], Theorem 2.6.1 and its proof). For each \(n\in {\mathbb {N}},\) let \(P_n=P^{\mathrm C}+P^{\mathrm B}_n,\) where

$$\begin{aligned} P_n^{\mathrm B}(t)&=P(a)+\Delta ^+P(a)\,\chi _{(a,b]}(t)+\Delta ^-P(b)\,\chi _{\{b\}}(t)\\&\quad +\sum _{k=1}^n\Big (\Delta ^+P(s_k)\,\chi _{(s_k,b]}(t) +\Delta ^-P(s_k)\,\chi _{[s_k,b]}(t)\Big ), \quad t\in [a,b]. \end{aligned}$$

Note that the discontinuities of \(P_n\) are contained in the finite set \(\{a,s_1,\ldots ,s_n,b\}\). Hence, by Lemma 3.4, the function \(x_n:[a,b]\rightarrow {\mathbb {C}}\) given by

$$\begin{aligned} x_n(t)={\left\{ \begin{array}{ll} x_0,&{}t=t_0,\\ x_0\dfrac{e^{P_n(t-)-P_n(t_0+)}}{e^{\sum _{s\in (t_0,t)}\Delta P_n(s)}}(1+\Delta ^+P_n(t_0))\prod \limits _{s\in (t_0,t)}(1+\Delta P_n(s))(1+\Delta ^-P_n(t)),&{}t>t_0,\\ x_0\dfrac{e^{\sum _{s\in (t,t_0)}\Delta P_n(s)}}{e^{P_n(t_0-)-P_n(t+)}}\dfrac{1}{1+\Delta ^+P_n(t)}\dfrac{1}{\prod \limits _{s\in (t,t_0)}(1+\Delta P_n(s))}\dfrac{1}{1+\Delta ^-P_n(t_0)},&{}t<t_0, \end{array}\right. } \end{aligned}$$

satisfies

$$\begin{aligned} x_n(t)=x_0+\int _{t_0}^t x_n(s-)\,\textrm{d}P_n(s),\quad t\in [a,b] \end{aligned}$$
(3.10)

(in the sense of Definition 3.1).

The sequence \(\{P_n\}_{n=1}^\infty \) is convergent to P in the BV norm (see the proof of [9, Lemma 2.6.5]). Thus, the variations of \(P_n\), \(n\in {\mathbb {N}}\), are uniformly bounded, and \(\{P_n\}_{n=1}^\infty \) is uniformly convergent to P. Therefore, \(P_n(t-)\rightrightarrows P(t-)\) and \(P_n(t+)\rightrightarrows P(t+)\) with respect to \(t\in [a,b]\); see [9, Lemma 4.2.3].

Furthermore, observe that \(\Delta ^+P_n(s_k)=\Delta ^+P(s_k)\) and \(\Delta ^-P_n(s_k)=\Delta ^-P(s_k)\) for all \(k\in \{1,\ldots ,n\}\). Thus, if \(t>t_0\), then

$$\begin{aligned} \begin{aligned} \left| \sum _{s\in (t_0,t)}\Delta P(s)-\sum _{s\in (t_0,t)}\Delta P_n(s)\right| =\left| \sum _{s\in (t_0,t){\setminus }\{s_1,\ldots ,s_n\}}\Delta P(s)\right| \le \sum _{s\in (a,b){\setminus }\{s_1,\ldots ,s_n\}}|\Delta P(s)|, \end{aligned} \end{aligned}$$

and therefore \(\sum _{s\in (t_0,t)}\Delta P_n(s)\rightrightarrows \sum _{s\in (t_0,t)}\Delta P(s)\) as \(n\rightarrow \infty \), where the convergence is uniform with respect to \(t\in (t_0,b)\). Similarly, \(\sum _{s\in (t,t_0)}\Delta P_n(s)\rightrightarrows \sum _{s\in (t,t_0)}\Delta P(s)\) as \(n\rightarrow \infty \), uniformly with respect to \(t\in (a,t_0)\).

Also, if \(t>t_0\), then

$$\begin{aligned}{} & {} \left| \prod _{s\in (t_0,t)}(1+\Delta P(s))-\prod _{s\in (t_0,t)}(1+\Delta P_n(s))\right| \\{} & {} \quad =\left| \prod _{s\in (t_0,t)\cap \{s_1,\ldots ,s_n\}}(1+\Delta P(s))\right| \left| \prod _{s\in (t_0,t){\setminus }\{s_1,\ldots ,s_n\}}(1+\Delta P(s))-1\right| \\{} & {} \quad \le \prod _{s\in (a,b)}(1+|\Delta P(s)|)\left( e^{\sum _{s\in (a,b){\setminus }\{s_1,\ldots ,s_n\}}|\Delta P(s)|}-1\right) , \end{aligned}$$

and therefore \(\prod _{s\in (t_0,t)}(1+\Delta P_n(s))\rightrightarrows \prod _{s\in (t_0,t)}(1+\Delta P(s))\) as \(n\rightarrow \infty \), uniformly with respect to \(t\in (t_0,b)\). Similarly, \(\prod _{s\in (t,t_0)}(1+\Delta P_n(s))\rightrightarrows \prod _{s\in (t,t_0)}(1+\Delta P(s))\) as \(n\rightarrow \infty \), uniformly with respect to \(t\in (a,t_0)\).

These considerations show that \(x_n(t)\rightrightarrows x(t)\), and consequently also \(x_n(t-)\rightrightarrows x(t-)\) with respect to \(t\in [a,b]\). Passing to the limit as \(n\rightarrow \infty \) in the relation (3.10) and using Theorem 2.5, we conclude that x is a solution of Eq. (3.5).

The second statement can be proved in a similar way. \(\square \)

Remark 3.6

Note that if x and y are given by the formulas (3.7) and (3.8), then \(x(t)y(t)=x_0y_0\) for all \(t\in [a,b]\), which is in agreement with [5, Theorem 6.2].

We now show that the solutions discussed in Theorem 3.5 are in fact unique. The proof is similar to the proof of [9, Lemma 7.4.4].

Theorem 3.7

Suppose that \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation.

  1.

    If \(1+\Delta ^-P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (a,t_0)\), and \(1+\Delta ^+P(t)\ne 0\) for all \(t\in [a,t_0)\), then for each \(x_0\in {\mathbb {C}}\), Eq. (3.5) has a unique solution on \([a,b]\) satisfying \(x(t_0)=x_0\).

  2.

    If \(1+\Delta ^+P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (t_0,b)\), and \(1+\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\), then for each \(y_0\in {\mathbb {C}}\), Eq. (3.6) has a unique solution on \([a,b]\) satisfying \(y(t_0)=y_0\).

Proof

Let us prove the first statement. Suppose that Eq. (3.5) has two solutions whose value at \(t_0\) is \(x_0\). Let x be their difference; it suffices to show that x vanishes on \([a,b]\). Clearly, x is a solution of

$$\begin{aligned} x(t)=\int _{t_0}^t x(s-)\,\textrm{d}P(s),\quad t\in [a,b] \end{aligned}$$

(in the sense of Definition 3.1). Then \(x(t_0)=0\), and Lemma 3.3 implies \(x(t_0+)=x(t_0)(1+\Delta ^+P(t_0))=0\). Denote \(\alpha (t)=\mathop {\textrm{var}}(P,[t_0,t])\) for all \(t\in [t_0,b]\). This function is nondecreasing, and has a finite limit \(\alpha (t_0+)\). Hence, there exists a \(\delta \in (0,b-t_0)\) such that

$$\begin{aligned} 0\le \alpha (t_0+\delta )-\alpha (t_0+)\le 1/2. \end{aligned}$$

For each \(t\in (t_0,t_0+\delta ]\), we have

$$\begin{aligned} |x(t)|&=\left| x(t_0)\Delta ^+P(t_0)+\int _{t_0+}^{t}x(s-)\,\textrm{d}P(s)\right| \\&=\left| \int _{t_0+}^{t}x(s-)\,\textrm{d}P(s)\right| \le \sup _{s\in [t_0,t_0+\delta ]}|x(s)|\cdot (\alpha (t_0+\delta )-\alpha (t_0+)), \end{aligned}$$

and therefore

$$\begin{aligned} \sup _{s\in [t_0,t_0+\delta ]}|x(s)|\le \frac{1}{2}\sup _{s\in [t_0,t_0+\delta ]}|x(s)|, \end{aligned}$$

but this is possible only if x vanishes on \([t_0,t_0+\delta ]\).

Let \(T=\sup \{\tau \in (t_0,b]:x=0 \text{ on } [t_0,\tau ]\}\). Note that \(x(T-)=0\), and Lemma 3.3 implies \(x(T)=x(T-)(1+\Delta ^-P(T))=0\). If \(T<b\), we can use the same argument as before to show that x vanishes on \([t_0,T+\eta ]\) for some \(\eta >0\). This would contradict the definition of T; therefore \(T=b\), which means that x vanishes on \([t_0,b]\).

To show that x vanishes on \([a,t_0]\) as well, we can use essentially the same approach with the following modifications: To begin with, we need \(x(t_0-)=0\); this follows from Lemma 3.3, which says that \(x(t_0-)(1+\Delta ^-P(t_0))=x(t_0)=0\), and from the assumption \(1+\Delta ^-P(t_0)\ne 0\). Then, as before, we conclude that x vanishes on \([t_0-\delta ,t_0]\) for a certain \(\delta >0\). Next, we let \(T=\inf \{\tau \in [a,t_0):x=0 \text{ on } [\tau ,t_0]\}\). Then \(x(T)=0\), because \(0=x(T+)=x(T)(1+\Delta ^+P(T))\), and because \(1+\Delta ^+P(T)\ne 0\). Finally, if \(T>a\), we use the relation \(x(T-)(1+\Delta P(T))=x(T+)\) to conclude that \(x(T-)=0\) (since \(1+\Delta P(T)\ne 0\)), and arrive at a contradiction by finding a larger interval \([T-\eta ,t_0]\) on which x vanishes.

The second statement is proved in a similar way. \(\square \)

Remark 3.8

We point out that [5, Theorem 6.8] also guarantees existence and uniqueness of solutions to Eq. (3.5), but gives no explicit formula, and has stronger assumptions on P. In addition to the conditions imposed in the first part of Theorems 3.5 and 3.7, the result in [5] requires that \(1+\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\). The reason is that this additional condition was needed in [5] to convert Eq. (3.5) into a Volterra–Stieltjes integral equation, while we have avoided this procedure and worked directly with Eq. (3.5).

Similar remarks apply to solutions of Eq. (3.6).

Finally, let us show that solutions of Eqs. (3.5) and (3.6) have bounded variation.

Theorem 3.9

Suppose that \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation.

  • If \(x:[a,b]\rightarrow {\mathbb {C}}\) is a solution of Eq. (3.5), then it has bounded variation.

  • If \(y:[a,b]\rightarrow {\mathbb {C}}\) is a solution of Eq. (3.6), then it has bounded variation.

Proof

Let us prove the first statement. If \(x:[a,b]\rightarrow {\mathbb {C}}\) is a solution of Eq. (3.5), then for each interval \([c,d]\subset [a,b]\), we have the estimate

$$\begin{aligned} |x(d)-x(c)|=\left| \int _c^d x(s-)\,\textrm{d}P(s)\right| \le \sup _{t\in [a,b]}|x(t)|\cdot \mathop {\textrm{var}}(P,[c,d]), \end{aligned}$$

from which it is clear that \(\mathop {\textrm{var}}(x,[a,b])\le \sup _{t\in [a,b]}|x(t)|\cdot \mathop {\textrm{var}}(P,[a,b])\).

The proof of the second statement is similar. \(\square \)

4 The Nonhomogeneous Case

In the present section, we focus on the nonhomogeneous versions of Eqs. (3.5) and (3.6), i.e., on the equations

$$\begin{aligned} u(t)&=g(t)+\int _{t_0}^t u(s-)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$
(4.1)
$$\begin{aligned} v(t)&=g(t)-\int _{t_0}^t v(s+)\,\textrm{d}P(s),\quad t\in [a,b], \end{aligned}$$
(4.2)

for given functions \(g,P:[a,b]\rightarrow {\mathbb {C}}\). As in the homogeneous case, the value \(u(s-)\) in the first integral should be understood as u(s) when \(s=\min (t_0,t)\), and \(v(s+)\) in the second integral should be understood as v(s) when \(s=\max (t_0,t)\).

If P has bounded variation, then the mappings

$$\begin{aligned} u&\mapsto \int _{t_0}^t u(s-)\,\textrm{d}P(s), \quad t\in [a,b],\\ v&\mapsto -\int _{t_0}^t v(s+)\,\textrm{d}P(s), \quad t\in [a,b], \end{aligned}$$

are compact linear operators on the space of regulated functions \({\mathcal {G}}([a,b])\) equipped with the supremum norm; the proof of compactness is the same as in [9, Theorem 7.4.2]. Thus, it follows from the Fredholm alternative theorem that the assumptions of Theorem 3.7, which guarantee that the corresponding homogeneous equations have only the trivial solution, also imply existence and uniqueness of solutions of the nonhomogeneous equations (4.1) and (4.2) for each regulated function g.

However, our goal is to obtain explicit formulas for u and v that involve solutions x and y of the corresponding homogeneous equations, i.e., we are looking for two versions of the variation of constants formula for the two nonhomogeneous equations.

The next two lemmas focus on forward solutions, i.e., solutions on \([t_0,b]\). As soon as we settle this case, we will deal with solutions on \([a,t_0]\) using the symmetry described in Remark 3.2.

Lemma 4.1

Suppose that \(g:[t_0,b]\rightarrow {\mathbb {C}}\) is regulated, \(P:[t_0,b]\rightarrow {\mathbb {C}}\) has bounded variation, \(1+\Delta ^+P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (t_0,b)\), and \(1+\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\). Let \(x:[t_0,b]\rightarrow {\mathbb {C}}\) be given by Eq. (3.7) with \(x_0=1\). Then the function \(u:[t_0,b]\rightarrow {\mathbb {C}}\) given by

$$\begin{aligned} u(t)=x(t)\left( g(t_0)+\int _{t_0}^t\left( \frac{\chi _{[t_0,t)}(s)}{x(s+)}+\frac{\chi _{\{t\}}(s)}{x(t)}\right) \textrm{d}g(s)\right) ,\quad t\in [t_0,b], \end{aligned}$$
(4.3)

satisfies

$$\begin{aligned} u(t)=g(t)+\int _{t_0}^t \left( \chi _{(t_0,t]}(s)u(s-)+\chi _{\{t_0\}}(s)u(t_0)\right) \textrm{d}P(s),\quad t\in [t_0,b].\nonumber \\ \end{aligned}$$
(4.4)

Proof

Let us check that the definition of u makes sense: The assumptions together with Eq. (3.7) imply that \(x(t)\ne 0\) for all \(t\in [t_0,b]\). Lemma 3.3 implies that \(x(t+)\ne 0\) for all \(t\in [t_0,b)\), and \(x(t-)\ne 0\) for all \(t\in (t_0,b]\). Moreover, the integral exists because the integrator is regulated and the integrand has bounded variation (this is a consequence of Lemma 2.1 and Theorem 3.9).

Equation (4.4) obviously holds for \(t=t_0\), because \(u(t_0)=x(t_0)g(t_0)=g(t_0)\). To check it for \(t>t_0\), we need to calculate \(u(t-)\):

$$\begin{aligned} u(t-)= & {} x(t-)\left( g(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t-\delta }\frac{\chi _{[t_0,t-\delta )}(s)}{x(s+)}\,\textrm{d}g(s)+\frac{\Delta ^-g(t-\delta )}{x(t-\delta )}\right) \right) \\= & {} x(t-)\left( g(t_0)+\int _{t_0}^{t}\frac{\chi _{[t_0,t)}(s)}{x(s+)}\,\textrm{d}g(s)\right) , \quad t\in (t_0,b]. \end{aligned}$$

Hence, if \(t>t_0\), the right-hand side of Eq. (4.4) equals

$$\begin{aligned}{} & {} \quad g(t)+\int _{t_0}^t \chi _{(t_0,t]}(s)u(s-)\,\textrm{d}P(s)+u(t_0)\Delta ^+P(t_0)\\{} & {} \quad =g(t)+\int _{t_0}^t \chi _{(t_0,t]}(s)x(s-)\left( g(t_0)+\int _{t_0}^{s}\frac{\chi _{[t_0,s)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )\right) \,\textrm{d}P(s)\\{} & {} \qquad +x(t_0)g(t_0)\Delta ^+P(t_0)\\{} & {} \quad =g(t)+g(t_0)\left( \int _{t_0}^t \chi _{(t_0,t]}(s)x(s-)\,\textrm{d}P(s)+x(t_0)\Delta ^+P(t_0)\right) \\{} & {} \qquad +\int _{t_0}^t \int _{t_0}^{s}\frac{\chi _{(t_0,t]}(s)x(s-)\chi _{[t_0,s)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )\,\textrm{d}P(s)\\{} & {} \quad =g(t)+g(t_0)\int _{t_0}^t \left( \chi _{(t_0,t]}(s)x(s-)+\chi _{\{t_0\}}(s)x(t_0)\right) \,\textrm{d}P(s)\\{} & {} \qquad +\int _{t_0}^t \int _{t_0}^{t}\frac{\chi _{(t_0,t]}(s)x(s-)\chi _{[t_0,s)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )\,\textrm{d}P(s)\\{} & {} \quad =g(t)+g(t_0)(x(t)-1)+\int _{t_0}^t \int _{t_0}^{t}\frac{\chi _{(t_0,t]}(s)x(s-)\chi _{[t_0,s)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )\,\textrm{d}P(s), \end{aligned}$$

where the last equality follows from the fact that x is a solution of Eq. (3.5) with \(x_0=1\) (see the first statement of Theorem 3.5).

The functions \((s,\tau )\mapsto \chi _{(t_0,t]}(s)\) and \((s,\tau )\mapsto \chi _{[t_0,s)}(\tau )\) are obviously bounded and Borel measurable on \([t_0,t]\times [t_0,t]\). The functions \((s,\tau )\mapsto x(s-)\) and \((s,\tau )\mapsto \frac{1}{x(\tau +)}\) have the same properties, because \(s\mapsto x(s-)\) and \(\tau \mapsto \frac{1}{x(\tau +)}\) are univariate regulated functions, and are therefore bounded and uniform limits of step functions. Thus, we can use Theorem 2.6 to interchange the order of integrals. Observing that \(\chi _{(t_0,t]}(s)\chi _{[t_0,s)}(\tau )=\chi _{[t_0,t)}(\tau )\chi _{(\tau ,t]}(s)\) (both sides are nonzero only if \(t_0\le \tau <s\le t\)), we obtain

$$\begin{aligned}{} & {} g(t)+g(t_0)(x(t)-1)+\int _{t_0}^t \int _{t_0}^{t}\frac{\chi _{[t_0,t)}(\tau )\chi _{(\tau ,t]}(s)x(s-)}{x(\tau +)}\,\textrm{d}P(s)\,\textrm{d}g(\tau )\\{} & {} \quad =x(t)g(t_0)+g(t)-g(t_0)+\int _{t_0}^t \frac{\chi _{[t_0,t)}(\tau )}{x(\tau +)}\left( \int _{t_0}^{t}\chi _{(\tau ,t]}(s)x(s-)\,\textrm{d}P(s)\right) \textrm{d}g(\tau ). \end{aligned}$$

Let us calculate the inner integral. If \(t_0\le \tau <t\), then

$$\begin{aligned}{} & {} \int _{t_0}^{t}\chi _{(\tau ,t]}(s)x(s-)\,\textrm{d}P(s) =\int _{t_0}^{t}\chi _{(t_0,t]}(s)x(s-)\,\textrm{d}P(s)-\int _{t_0}^{t}\chi _{(t_0,\tau ]}(s)x(s-)\,\textrm{d}P(s)\\{} & {} \quad =\int _{t_0}^{t}\chi _{(t_0,t]}(s)x(s-)\,\textrm{d}P(s)-\int _{t_0}^{\tau }\chi _{(t_0,\tau ]}(s)x(s-)\,\textrm{d}P(s)-x(\tau -)\Delta ^+P(\tau )\\{} & {} \quad =x(t)-x(\tau )-x(\tau -)\Delta ^+P(\tau )\\{} & {} \quad =x(t)-x(\tau -)(1+\Delta ^- P(\tau ))-x(\tau -)\Delta ^+P(\tau )\\{} & {} \quad =x(t)-x(\tau -)(1+\Delta P(\tau ))=x(t)-x(\tau +), \end{aligned}$$

where we used the fact that x is a solution of Eq. (3.5), as well as Lemma 3.3.

Returning to the main calculation, we see that the right-hand side of Eq. (4.4) equals

$$\begin{aligned}{} & {} x(t)g(t_0)+g(t)-g(t_0)+\int _{t_0}^t \frac{\chi _{[t_0,t)}(\tau )}{x(\tau +)} (x(t)-x(\tau +))\,\textrm{d}g(\tau )\\{} & {} \quad =x(t)g(t_0)+g(t)-g(t_0)+x(t)\int _{t_0}^t \frac{\chi _{[t_0,t)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )-\int _{t_0}^t {\chi _{[t_0,t)}(\tau )}\,\textrm{d}g(\tau )\\{} & {} \quad =x(t)\left( g(t_0)+\int _{t_0}^t \frac{\chi _{[t_0,t)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )\right) +g(t)-g(t_0)-(g(t-)-g(t_0))\\{} & {} \quad =x(t)\left( g(t_0)+\int _{t_0}^t \frac{\chi _{[t_0,t)}(\tau )}{x(\tau +)}\,\textrm{d}g(\tau )+\frac{\Delta ^-g(t)}{x(t)}\right) =u(t), \end{aligned}$$

as we wished to prove. \(\square \)
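
To make the formula (4.3) concrete, here is a numerical sketch with illustrative data: a pure jump P (jumps at \(t_0\) and at one interior point \(\tau \)) and a smooth g, so that the point-mass terms in Eq. (4.3) vanish and the integral reduces to an ordinary one. The script evaluates u from Eq. (4.3) and checks the residual of Eq. (4.4):

```python
import math

# Illustrative data: t0 = 0, b = 1; P is a pure jump function with
# Delta^+ P(t0) = p0 and a jump (dm, dp) at tau; g is smooth.
t0, b, tau = 0.0, 1.0, 0.5
p0, dm, dp = 0.5, 0.3, -0.2
g  = lambda t: 2 * t + math.cos(t)
dg = lambda t: 2 - math.sin(t)        # density of the (continuous) g

def x(t):            # solution (3.7) with x_0 = 1: a step function here
    if t == t0:
        return 1.0
    v = 1.0 + p0                      # factor (1 + Delta^+ P(t0))
    if t == tau:
        v *= 1.0 + dm                 # factor (1 + Delta^- P(t)) at t = tau
    elif t > tau:
        v *= 1.0 + dm + dp            # factor (1 + Delta P(tau)) for t > tau
    return v

def x_plus(t):       # x(t+) = x(t)(1 + Delta^+ P(t)), cf. Lemma 3.3
    return (1.0 + p0) * (1.0 + dm + dp) if t >= tau else 1.0 + p0

def integral(t, m=20000):             # int_{t0}^t dg(s)/x(s+), split at tau
    total = 0.0
    for p, q in ((t0, min(t, tau)), (min(t, tau), t)):
        if q > p:
            h = (q - p) / m           # midpoint rule on each smooth piece
            total += h * sum(dg(p + (i + 0.5) * h) / x_plus(p + (i + 0.5) * h)
                             for i in range(m))
    return total

def u(t):            # formula (4.3); g is continuous, so Delta^- g(t) = 0
    return x(t) * (g(t0) + integral(t))

def u_left(t):       # u(t-), as computed in the proof above
    x_l = x(t) / (1.0 + dm) if t == tau else x(t)   # x(t-), cf. Lemma 3.3
    return x_l * (g(t0) + integral(t))

# Residual of Eq. (4.4) at a point t > tau; P has no jump at t itself,
# so the term u(t-)*Delta^- P(t) vanishes.
t = 0.9
rhs = g(t) + u(t0) * p0 + u_left(tau) * (dm + dp)
print(abs(u(t) - rhs))               # small: quadrature error only
```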

Lemma 4.2

Suppose that \(g:[t_0,b]\rightarrow {\mathbb {C}}\) is regulated, \(P:[t_0,b]\rightarrow {\mathbb {C}}\) has bounded variation, \(1+\Delta ^+P(t_0)\ne 0\), \(1+\Delta P(t)\ne 0\) for all \(t\in (t_0,b)\), and \(1+\Delta ^-P(t)\ne 0\) for all \(t\in (t_0,b]\). Let \(y:[t_0,b]\rightarrow {\mathbb {C}}\) be given by Eq. (3.8) with \(y_0=1\). Then the function \(v:[t_0,b]\rightarrow {\mathbb {C}}\) given by

$$\begin{aligned} v(t)=y(t)\left( g(t_0)+\int _{t_0}^t\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \textrm{d}g(s)\right) ,\quad t\in [t_0,b],\nonumber \\ \end{aligned}$$
(4.5)

satisfies

$$\begin{aligned} v(t)=g(t)-\int _{t_0}^t \left( \chi _{[t_0,t)}(s)v(s+) +\chi _{\{t\}}(s)v(t)\right) \textrm{d}P(s),\quad t\in [t_0,b].\nonumber \\ \end{aligned}$$
(4.6)

Proof

The fact that the definition of v makes sense can be verified as in the proof of Lemma 4.1.

Equation (4.6) obviously holds for \(t=t_0\), because \(v(t_0)=y(t_0)g(t_0)=g(t_0)\). Next, we need to calculate \(v(t+)\) for \(t\in [t_0,b)\):

$$\begin{aligned}{} & {} v(t+)=y(t+)\left( g(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{t+\delta }\left( \frac{\chi _{(t_0,t+\delta ]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \,\textrm{d}g(s)\right) \right) \\{} & {} \quad =y(t+)\left( g(t_0)+\lim _{\delta \rightarrow 0+}\left( \int _{t_0}^{b}\left( \frac{\chi _{(t_0,t+\delta )}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \,\textrm{d}g(s)+\frac{\Delta ^-g(t+\delta )}{y(t+\delta -)}\right) \right) \\{} & {} \quad =y(t+)\left( g(t_0)+\int _{t_0}^{b}\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \,\textrm{d}g(s)\right) \end{aligned}$$

If \(t=t_0\), the integrand in the last integral reduces to \(\chi _{\{t_0\}}(s)\), and the value of the integral is \(\Delta ^+g(t_0)\). On the other hand, for \(t>t_0\), we have

$$\begin{aligned}{} & {} \int _{t_0}^{b}\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \,\textrm{d}g(s) =\int _{t_0}^{t}\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \,\textrm{d}g(s)\\{} & {} \qquad +\int _{t}^{b}\frac{\chi _{(t_0,t]}(s)}{y(s-)}\,\textrm{d}g(s)\\{} & {} \quad =\int _{t_0}^{t}\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \,\textrm{d}g(s)+\frac{\chi _{(t_0,b)}(t)}{y(t-)}\Delta ^+g(t). \end{aligned}$$

The results for \(t=t_0\) and \(t>t_0\) can be combined into a single formula, and we get

$$\begin{aligned}{} & {} v(t+)=y(t+)\left( g(t_0)+\int _{t_0}^{t}\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \textrm{d}g(s)\right. \\{} & {} \qquad \left. +\chi _{(t_0,b)}(t)\frac{\Delta ^+g(t)}{y(t-)}+\chi _{\{t_0\}}(t)\Delta ^+g(t_0)\right) . \end{aligned}$$

Hence, if \(t>t_0\), the right-hand side of Eq. (4.6) equals

$$\begin{aligned}{} & {} g(t)-\int _{t_0}^t \chi _{[t_0,t)}(s)v(s+)\,\textrm{d}P(s)-v(t)\Delta ^-P(t)\\{} & {} \quad =g(t)-v(t)\Delta ^-P(t)\\{} & {} \qquad -\!\int _{t_0}^t \chi _{[t_0,t)}(s)y(s+)\left( g(t_0)\!+\!\int _{t_0}^{s}\left( \frac{\chi _{(t_0,s]}(\tau )}{y(\tau -)}\!+\!\chi _{\{t_0\}}(\tau )\right) \textrm{d}g(\tau )\right. \\{} & {} \qquad \left. +\chi _{(t_0,b)}(s)\frac{\Delta ^+g(s)}{y(s-)} +\!\chi _{\{t_0\}}(s)\Delta ^+g(t_0)\right) \textrm{d}P(s)\\{} & {} \quad =g(t)-v(t)\Delta ^-P(t)-g(t_0)\int _{t_0}^t \chi _{[t_0,t)}(s)y(s+)\,\textrm{d}P(s)\\{} & {} \qquad -\int _{t_0}^t \chi _{[t_0,t)}(s)y(s+)\left( \int _{t_0}^{s}\left( \frac{\chi _{(t_0,s]}(\tau )}{y(\tau -)}+\chi _{\{t_0\}}(\tau )\right) \textrm{d}g(\tau )\right) \textrm{d}P(s)\\{} & {} \qquad -\int _{t_0}^t \chi _{[t_0,t)}(s)y(s+)\left( \chi _{(t_0,b)}(s)\frac{\Delta ^+g(s)}{y(s-)} +\chi _{\{t_0\}}(s)\Delta ^+g(t_0)\right) \textrm{d}P(s)\\{} & {} \quad =g(t)-v(t)\Delta ^-P(t)+g(t_0)(y(t)-1+y(t)\Delta ^-P(t))\\{} & {} \qquad -\int _{t_0}^t\int _{t_0}^{s} \chi _{[t_0,t)}(s)\chi _{(t_0,s]}(\tau )\frac{y(s+)}{y(\tau -)}\,\textrm{d}g(\tau )\,\textrm{d}P(s) \\{} & {} \qquad -\int _{t_0}^t \chi _{[t_0,t)}(s)y(s+)\chi _{(t_0,t]}(s)\Delta ^+g(t_0)\,\textrm{d}P(s)\\{} & {} \qquad -\int _{t_0}^t \chi _{(t_0,t)}(s)\frac{y(s+)}{y(s-)}\Delta ^+g(s)\,\textrm{d}P(s)-y(t_0+)\Delta ^+g(t_0)\Delta ^+P(t_0)\\{} & {} \quad =g(t)-g(t_0)+g(t_0)y(t)(1+\Delta ^-P(t))-v(t)\Delta ^-P(t)\\{} & {} \qquad -\int _{t_0}^t\int _{t_0}^{s} \chi _{(t_0,t)}(s)\chi _{(t_0,s]}(\tau )\frac{y(s+)}{y(\tau -)}\,\textrm{d}g(\tau )\,\textrm{d}P(s)\\{} & {} \qquad -\Delta ^+g(t_0)\int _{t_0}^t \chi _{(t_0,t)}(s)y(s+)\,\textrm{d}P(s)\\{} & {} \qquad -\sum _{s\in (t_0,t)}\frac{y(s+)}{y(s-)}\Delta ^+g(s)\Delta P(s)-y(t_0+)\Delta ^+g(t_0)\Delta ^+P(t_0) \end{aligned}$$

(we have used Lemma 2.2 to obtain the sum on the last line). We claim that the iterated integral can be rewritten as follows:

$$\begin{aligned}{} & {} \int _{t_0}^t\int _{t_0}^{s}\chi _{(t_0,t)}(s)\chi _{(t_0,s]}(\tau )\frac{y(s+)}{y(\tau -)}\,\textrm{d}g(\tau )\,\textrm{d}P(s) \end{aligned}$$
(4.7)
$$\begin{aligned}{} & {} =-\sum _{\tau \in (t_0,t)} \frac{y(\tau +)}{y(\tau -)}\Delta P(\tau )\Delta ^+ g(\tau )-y(t-)\int _{t_0}^t\frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\,\textrm{d}g(\tau )\nonumber \\{} & {} \qquad +g(t-)-g(t_0+). \end{aligned}$$
(4.8)

Let us postpone the verification of this identity and first show how it is used to finish the preceding calculation of the right-hand side of Eq. (4.6) for \(t>t_0\):

$$\begin{aligned}{} & {} g(t)-g(t_0)+g(t_0)y(t)(1+\Delta ^-P(t))-v(t)\Delta ^-P(t) +y(t-)\int _{t_0}^t\frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\,\textrm{d}g(\tau )\\{} & {} \qquad -(g(t-)-g(t_0+))-\Delta ^+g(t_0)\int _{t_0}^t \chi _{(t_0,t)}(s)y(s+)\, \textrm{d}P(s) \\{} & {} \qquad -y(t_0+)\Delta ^+g(t_0)\Delta ^+P(t_0)\\{} & {} \quad =\Delta ^-g(t)+\Delta ^+g(t_0)+g(t_0)y(t)(1+\Delta ^-P(t))-v(t)\Delta ^-P(t)\\{} & {} \qquad +y(t-)\left( \frac{v(t)}{y(t)}-g(t_0)-\Delta ^+g(t_0)-\frac{\Delta ^-g(t)}{y(t-)}\right) \\{} & {} \qquad +\Delta ^+g(t_0)\left( y(t)-1+y(t)\Delta ^-P(t)+y(t_0+) \Delta ^+P(t_0)\right) \\{} & {} \quad -y(t_0+)\Delta ^+g(t_0)\Delta ^+P(t_0)\\{} & {} \quad =\Delta ^-g(t)+g(t_0)y(t)(1+\Delta ^-P(t))-v(t)\Delta ^-P(t)\\{} & {} \qquad +(1+\Delta ^-P(t))v(t)-y(t-)g(t_0+)-\Delta ^-g(t) +\Delta ^+g(t_0)y(t)\left( 1+\Delta ^-P(t)\right) \\{} & {} \quad =g(t_0+)y(t)(1+\Delta ^-P(t))+v(t)-y(t-)g(t_0+). \end{aligned}$$

The first and the last terms cancel each other, because \(y(t-)=y(t)(1+\Delta ^-P(t))\) (see Lemma 3.3). Thus, the only remaining term is \(v(t)\), as we wished to prove.

To finish the proof, it remains to verify the identity (4.7)–(4.8). To calculate the iterated integral, we apply Corollary 2.7 and interchange the order of integration. The fact that the integrand is bounded and Borel measurable is verified in the same way as in the proof of Lemma 4.1. Observing also that \(\chi _{(t_0,t)}(s)\chi _{(t_0,s]}(\tau )=\chi _{(t_0,t)}(\tau )\chi _{[\tau ,t)}(s)\) (both sides are nonzero only if \(t_0<\tau \le s<t\)), we obtain:

$$\begin{aligned}{} & {} \int _{t_0}^t\int _{t_0}^{s}\chi _{(t_0,t)}(s)\chi _{(t_0,s]}(\tau )\frac{y(s+)}{y(\tau -)}\,\textrm{d}g(\tau )\,\textrm{d}P(s) \nonumber \\{} & {} \quad =\int _{t_0}^t\int _{\tau }^{t}\chi _{(t_0,t)}(\tau )\chi _{[\tau ,t)}(s)\frac{y(s+)}{y(\tau -)}\,\textrm{d}P(s)\,\textrm{d}g(\tau )\nonumber \\{} & {} \qquad +\sum _{s\in (t_0,t]}\chi _{(t_0,t)}(s)\chi _{(t_0,s]}(s)\frac{y(s+)}{y(s-)}\Delta ^-P(s)\Delta g(s)\nonumber \\{} & {} \qquad -\sum _{\tau \in [t_0,t)}\chi _{(t_0,t)}(\tau )\chi _{(t_0,\tau ]}(\tau )\frac{y(\tau +)}{y(\tau -)}\Delta ^+g(\tau )\Delta P(\tau )\nonumber \\{} & {} \quad =\int _{t_0}^t \frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\left( \int _{\tau }^{t}\chi _{[\tau ,t)}(s) y(s+)\,\textrm{d}P(s)\right) \textrm{d}g(\tau )\nonumber \\{} & {} \qquad +\sum _{s\in (t_0,t)}\frac{y(s+)}{y(s-)}\Delta ^-P(s)\Delta g(s)-\sum _{\tau \in (t_0,t)}\frac{y(\tau +)}{y(\tau -)}\Delta ^+g(\tau )\Delta P(\tau ). \end{aligned}$$
(4.9)

For \(\tau \in (t_0,t)\), the value of the inner integral in (4.9) is

$$\begin{aligned}{} & {} \quad \int _{\tau }^{t}\chi _{[\tau ,t)}(s) y(s+)\,\textrm{d}P(s)=y(\tau +)\Delta ^+P(\tau )\\{} & {} \qquad +\int _{t_0}^{t}\chi _{(\tau ,t)}(s) y(s+)\,\textrm{d}P(s)\\{} & {} \quad =y(\tau +)\Delta ^+P(\tau )+\int _{t_0}^{t}\chi _{[t_0,t)}(s) y(s+)\,\textrm{d}P(s)-\int _{t_0}^{t}\chi _{[t_0,\tau )}(s) y(s+)\,\textrm{d}P(s)\\{} & {} \qquad -\int _{t_0}^{t}\chi _{\{\tau \}}(s) y(s+)\,\textrm{d}P(s)\\{} & {} \quad =y(\tau +)\Delta ^+P(\tau )+\int _{t_0}^{t}\chi _{[t_0,t)}(s) y(s+)\,\textrm{d}P(s)\\{} & {} \qquad -\int _{t_0}^{\tau }\chi _{[t_0,\tau )}(s) y(s+)\,\textrm{d}P(s)-y(\tau +)\Delta P(\tau )\\{} & {} \quad =-y(t)+1-y(t)\Delta ^-P(t)+y(\tau )-1+y(\tau )\Delta ^-P(\tau )-y(\tau +)\Delta ^- P(\tau )\\{} & {} \quad =y(\tau )(1+\Delta ^-P(\tau ))-y(t)(1+\Delta ^-P(t))-y(\tau +)\Delta ^- P(\tau )\\{} & {} \quad =y(\tau -)-y(t-)-y(\tau +)\Delta ^- P(\tau ) \end{aligned}$$

(the last equality follows from Lemma 3.3). Therefore, the iterated integral in (4.9) equals

$$\begin{aligned}{} & {} \int _{t_0}^t \frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\left( \int _{\tau }^{t}\chi _{[\tau ,t)}(s) y(s+)\,\textrm{d}P(s)\right) \textrm{d}g(\tau )\\{} & {} \quad =\int _{t_0}^t \frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\left( y(\tau -)-y(t-)-y(\tau +)\Delta ^- P(\tau )\right) \textrm{d}g(\tau )\\{} & {} \quad =\int _{t_0}^t \chi _{(t_0,t)}(\tau )\,\textrm{d}g(\tau )-y(t-)\int _{t_0}^t\frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\,\textrm{d}g(\tau )\\{} & {} \qquad -\int _{t_0}^t \chi _{(t_0,t)}(\tau )\frac{y(\tau +)}{y(\tau -)}\Delta ^- P(\tau )\,\textrm{d}g(\tau )\\{} & {} \quad =g(t-)-g(t_0+)-y(t-)\int _{t_0}^t\frac{\chi _{(t_0,t)}(\tau )}{y(\tau -)}\,\textrm{d}g(\tau )\\{} & {} \qquad -\sum _{\tau \in (t_0,t)} \frac{y(\tau +)}{y(\tau -)}\Delta ^- P(\tau )\Delta g(\tau ) \end{aligned}$$

(we have used Lemma 2.2), and the proof of the identity (4.7)–(4.8) is complete. \(\square \)
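Before stating the theorem, let us illustrate the formula from Lemma 4.2 on a minimal example of our own (a sanity check, not part of the development): take \([a,b]=[0,2]\), \(t_0=0\), a continuous function g, and \(P=c\,\chi _{[1,2]}\) with \(1+c\ne 0\), so that \(\Delta ^-P(1)=c\) is the only nonzero jump. The homogeneous equation \(y(t)=1-\int _0^t y(s+)\,\textrm{d}P(s)\) forces \(y(1-)=y(1)(1+c)\) (see Lemma 3.3) with \(y(1-)=1\), and since g is continuous, the term \(\chi _{\{t_0\}}\) in the solution formula contributes nothing. Hence

$$\begin{aligned} y=\chi _{[0,1)}+\frac{1}{1+c}\,\chi _{[1,2]},\qquad v(t)=y(t)\left( g(0)+\int _0^t\frac{\textrm{d}g(s)}{y(s-)}\right) =g(t)-\frac{c\,g(1)}{1+c}\,\chi _{[1,2]}(t). \end{aligned}$$

A direct substitution confirms that \(v(t)=g(t)-\int _0^t v(s+)\,\textrm{d}P(s)\): for \(t\ge 1\), the integral picks up the single contribution \(v(1)\Delta ^-P(1)=c\,g(1)/(1+c)\), and it vanishes for \(t<1\).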

The next theorem is our second main result; it provides explicit solution formulas for the nonhomogeneous Stieltjes integral equations (4.1) and (4.2).

Theorem 4.3

Suppose that \(g:[a,b]\rightarrow {\mathbb {C}}\) is regulated, \(P:[a,b]\rightarrow {\mathbb {C}}\) has bounded variation, \(1+\Delta P(t)\ne 0\) for all \(t\in (a,t_0)\cup (t_0,b)\), \(1+\Delta ^+P(t)\ne 0\) for all \(t\in [a,t_0]\), and \(1+\Delta ^-P(t)\ne 0\) for all \(t\in [t_0,b]\).

  1.

    If \(x:[a,b]\rightarrow {\mathbb {C}}\) is given by Eq. (3.7) with \(x_0=1\), then the function \(u:[a,b]\rightarrow {\mathbb {C}}\) given by

    $$\begin{aligned} u(t)=x(t)\left( g(t_0)+\int _{t_0}^t \frac{1}{x(s+)}\,\textrm{d}g(s)\right) ,\quad t\in [a,b] \end{aligned}$$
    (4.10)

    (with the convention that \(x(s+)\) is understood as \(x(s)\) when \(s=\max (t,t_0)\)) is the unique solution of the equation

    $$\begin{aligned} u(t)=g(t)+\int _{t_0}^t u(s-)\,\textrm{d}P(s),\quad t\in [a,b] \end{aligned}$$
    (4.11)

    (with the convention that \(u(s-)\) is understood as \(u(s)\) when \(s=\min (t,t_0)\)).

  2.

    If \(y:[a,b]\rightarrow {\mathbb {C}}\) is given by Eq. (3.8) with \(y_0=1\), then the function \(v:[a,b]\rightarrow {\mathbb {C}}\) given by

    $$\begin{aligned} v(t)=y(t)\left( g(t_0)+\int _{t_0}^t \frac{1}{y(s-)}\,\textrm{d}g(s)\right) ,\quad t\in [a,b] \end{aligned}$$
    (4.12)

    (with the convention that \(y(s-)\) is understood as \(y(s)\) when \(s=\min (t,t_0)\)) is the unique solution of the equation

    $$\begin{aligned} v(t)=g(t)-\int _{t_0}^t v(s+)\,\textrm{d}P(s),\quad t\in [a,b] \end{aligned}$$
    (4.13)

    (with the convention that \(v(s+)\) is understood as \(v(s)\) when \(s=\max (t,t_0)\)).
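Spelled out with indicator functions, the endpoint conventions mean that for \(t\in [t_0,b]\) the formulas (4.10) and (4.12) read

$$\begin{aligned} u(t)&=x(t)\left( g(t_0)+\int _{t_0}^t\left( \frac{\chi _{[t_0,t)}(s)}{x(s+)}+\frac{\chi _{\{t\}}(s)}{x(t)}\right) \textrm{d}g(s)\right) ,\\ v(t)&=y(t)\left( g(t_0)+\int _{t_0}^t\left( \frac{\chi _{(t_0,t]}(s)}{y(s-)}+\chi _{\{t_0\}}(s)\right) \textrm{d}g(s)\right) , \end{aligned}$$

where the second formula uses \(y(t_0)=1\); these are exactly the forms provided by Lemmas 4.1 and 4.2 and invoked in the proof below.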

Proof

Let us show that the function u given by Eq. (4.10) is a solution of Eq. (4.11). The fact that Eq. (4.11) holds for all \(t\in [t_0,b]\) follows from Lemma 4.1, whose assumptions are satisfied. To deal with \(t\in [a,t_0]\), we use Remark 3.2 and look for a solution of the symmetric equation

$$\begin{aligned} v(t)={\tilde{g}}(t)-\int _{-t_0}^t v(s+)\,\textrm{d}{\widetilde{P}}(s),\quad t\in [-t_0,-a], \end{aligned}$$

where \({\tilde{g}}(t)=g(-t)\) and \({\widetilde{P}}(t)=-P(-t)\).
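Directly from these definitions, the jumps of \({\widetilde{P}}\) are mirror images of the jumps of P:

$$\begin{aligned} \Delta ^+{\widetilde{P}}(t)={\widetilde{P}}(t+)-{\widetilde{P}}(t)=P(-t)-P((-t)-)=\Delta ^-P(-t),\quad t\in [-b,-a), \end{aligned}$$

and similarly \(\Delta ^-{\widetilde{P}}(t)=\Delta ^+P(-t)\) for \(t\in (-b,-a]\).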

Observe that \(1+\Delta ^+{\widetilde{P}}(-t_0)=1+\Delta ^- P(t_0)\ne 0\), \(1+\Delta {\widetilde{P}}(t)=1+\Delta P(-t)\ne 0\) for all \(t\in (-t_0,-a)\), and \(1+\Delta ^-{\widetilde{P}}(t)=1+\Delta ^+ P(-t)\ne 0\) for all \(t\in (-t_0,-a]\). Hence, according to Lemma 4.2, the previous equation has the solution

$$\begin{aligned} v(t)=y(t)\left( {\tilde{g}}(-t_0)+\int _{-t_0}^t\left( \chi _{(-t_0,t]}(s)\frac{1}{y(s-)} +\chi _{\{-t_0\}}(s)\right) \textrm{d}{\tilde{g}}(s)\right) ,\quad t\in [-t_0,-a], \end{aligned}$$

where, by Remark 3.2, \(y(t)=x(-t)\). Thus, the corresponding solution of Eq. (4.11) on \([a,t_0]\) is

$$\begin{aligned}{} & {} u(t)=v(-t)=y(-t)\left( {\tilde{g}}(-t_0)+\int _{-t_0}^{-t}\left( \chi _{(-t_0,-t]} (s)\frac{1}{y(s-)}+\chi _{\{-t_0\}}(s)\right) \textrm{d}{\tilde{g}}(s)\right) \\{} & {} \qquad =x(t)\left( g(t_0)-\int _{t}^{t_0}\left( \chi _{[t,t_0)}(s)\frac{1}{x(s+)}+\chi _{\{t_0\}}(s)\right) \textrm{d}g(s)\right) , \end{aligned}$$

the last equality being a consequence of the substitution theorem \(\int _{\phi (c)}^{\phi (d)}f(s)\,\textrm{d}g(s)=\int _c^d f(\phi (t))\,\textrm{d}g(\phi (t))\); see [9, Theorem 6.6.5 and Exercise 6.6.6] with \(\phi (x)=-x\). The result agrees with the function given by Eq. (4.10) for all \(t\in [a,t_0]\): since \(x(t_0)=1\), the value of the integrand at \(s=t_0\) is \(1=1/x(t_0)\), in accordance with the convention in Eq. (4.10).

The function u is the unique solution of Eq. (4.11): if there were two different solutions, their difference would be a nontrivial solution of the corresponding homogeneous equation, which would contradict Theorem 3.7.

In a similar way, one can show that the function v given by Eq. (4.12) is a solution of Eq. (4.13) on \([a,b]\): the fact that Eq. (4.13) holds for all \(t\in [t_0,b]\) follows from Lemma 4.2. To deal with \(t\in [a,t_0]\), we use Remark 3.2 and look for a solution of the symmetric equation

$$\begin{aligned} u(t)={\tilde{g}}(t)+\int _{-t_0}^t u(s-)\,\textrm{d}{\widetilde{P}}(s),\quad t\in [-t_0,-a], \end{aligned}$$

where \({\tilde{g}}\) and \({\widetilde{P}}\) are as before. According to Lemma 4.1, the solution is

$$\begin{aligned} u(t)=x(t)\left( {\tilde{g}}(-t_0)+\int _{-t_0}^t\left( \frac{\chi _{[-t_0,t)}(s)}{x(s+)}+\frac{\chi _{\{t\}}(s)}{x(t)}\right) \textrm{d}{\tilde{g}}(s)\right) ,\quad t\in [-t_0,-a], \end{aligned}$$

where, by Remark 3.2, \(x(t)=y(-t)\). Thus, the corresponding solution of Eq. (4.13) on \([a,t_0]\) is

$$\begin{aligned}{} & {} v(t)=u(-t)=x(-t)\left( {\tilde{g}}(-t_0)+\int _{-t_0}^{-t}\left( \frac{\chi _{[-t_0,-t)}(s)}{x(s+)}+\frac{\chi _{\{-t\}}(s)}{x(-t)}\right) \textrm{d}{\tilde{g}}(s)\right) \\{} & {} \qquad =y(t)\left( g(t_0)-\int ^{t_0}_{t}\left( \frac{\chi _{(t,t_0]}(s)}{y(s-)}+\frac{\chi _{\{t\}}(s)}{y(t)}\right) \textrm{d}g(s)\right) , \end{aligned}$$

which agrees with Eq. (4.12). Again, it follows from Theorem 3.7 that this solution is unique. \(\square \)
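To illustrate part 1, consider again the single-jump example introduced after Lemma 4.2 (\([a,b]=[0,2]\), \(t_0=0\), g continuous, \(P=c\,\chi _{[1,2]}\), \(1+c\ne 0\)). The homogeneous equation \(x(t)=1+\int _0^t x(s-)\,\textrm{d}P(s)\) yields \(x(1)=x(1-)(1+c)\) with \(x(1-)=1\), so

$$\begin{aligned} x=\chi _{[0,1)}+(1+c)\,\chi _{[1,2]},\qquad u(t)=x(t)\left( g(0)+\int _0^t\frac{\textrm{d}g(s)}{x(s+)}\right) =g(t)+c\,g(1)\,\chi _{[1,2]}(t), \end{aligned}$$

and indeed \(u(t)=g(t)+\int _0^t u(s-)\,\textrm{d}P(s)\), the integral contributing \(u(1-)\Delta ^-P(1)=c\,g(1)\) for \(t\ge 1\). Comparing with the example after Lemma 4.2, the jump of P enters the two adjoint solutions reciprocally: through the factor \(1+c\) for u and through \((1+c)^{-1}\) for v.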