1 Introduction

An index is a sequence of positive integers, including the empty sequence \(\emptyset \). An index \({\textbf{k}}=(k_1,\ldots ,k_r)\) is said to be admissible if either \(k_r>1\) or \({\textbf{k}}=\emptyset \). The number \(|{\textbf{k}}|:=k_1+\cdots +k_r\) is called the weight of \({\textbf{k}}=(k_1,\ldots ,k_r)\), and r its depth. Let \(I(k,r)\) be the set of all indices of weight k and depth r. For an admissible index \({\textbf{k}}=(k_1,\ldots ,k_r)\), the multiple zeta value (MZV) is the real number defined by

$$\begin{aligned} \zeta ({\textbf{k}})= \zeta (k_1,\ldots ,k_r):= \sum _{0<m_1<\cdots <m_r} \frac{1}{m_1^{k_1}\cdots m_r^{k_r}}. \end{aligned}$$
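The definition can be checked numerically on truncations. The sketch below (ours; the helper name is illustrative, and convergence is slow, roughly \(\log N/N\)) observes Euler's classical identity \(\zeta (1,2)=\zeta (3)\), which in the increasing-index convention above reads \(\sum _{0<m_1<m_2} m_1^{-1}m_2^{-2}=\zeta (3)\).

```python
def mzv_truncated(k, N):
    """Partial sum of zeta(k_1,...,k_r) over 0 < m_1 < ... < m_r <= N."""
    r = len(k)
    total = 0.0

    def rec(pos, start, prod):
        # choose m_pos strictly larger than the previous index
        nonlocal total
        if pos == r:
            total += prod
            return
        for m in range(start, N + 1):
            rec(pos + 1, m + 1, prod / m ** k[pos])

    rec(0, 1, 1.0)
    return total

zeta3 = mzv_truncated((3,), 20000)     # zeta(3) = 1.2020569...
print(mzv_truncated((1, 2), 2000))     # approaches zeta(3) as N grows
```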

Conventionally, \(\zeta (\emptyset )\) is understood as 1. Let \({\mathcal {Z}}\) be the \(\mathbb {Q}\)-algebra generated by all multiple zeta values. Set

where is the constant term of the shuffle regularized polynomial. Then the symmetric multiple zeta value (SMZV) is an element of \({\mathcal {Z}}/\zeta (2){\mathcal {Z}}\) defined as

SMZVs were introduced by Kaneko and Zagier in [5] as real counterparts of finite multiple zeta values (FMZVs). Moreover, Kaneko and Zagier conjectured that there is a one-to-one correspondence between SMZVs and FMZVs; this is called the Kaneko–Zagier conjecture. The conjecture implies that any relation among SMZVs takes the same form as the corresponding one among FMZVs, and vice versa.

In [6], for an index \({\textbf{k}}=(k_1,\ldots ,k_r)\), Hirose defined the refined symmetric multiple zeta value \(\zeta _{RS}({\textbf{k}})\) (RSMZV) as an element of \({\mathcal {Z}}[2\pi i]\) in terms of iterated integrals and gave the following expression:

This expression shows immediately that \(\zeta _{RS}({\textbf{k}})\) is a lift of \(\zeta _{{\mathcal {S}}}({\textbf{k}})\):

(1.1)

The definition of RSMZVs, (1.1), and the method of iterated integrals discussed in [2, 3] together lead to weighted sum formulas for SMZVs, which are the main results of this article.

Main Theorem

(Weighted sum formulas, cf. [3]) Let \(\lambda _1,\lambda _2,\xi _1\), and \(\xi _2\) be indeterminates. For \(r,s\in \mathbb {Z}_{\ge 0}\), we have

(1.2)

We remark that (1.2) holds without reduction modulo \(\zeta (2){\mathcal {Z}}\). The weighted sum formulas for FMZVs were proved by Kamano in [3]. Following [3], by setting

$$\begin{aligned} w({\textbf{k}}):= {\left\{ \begin{array}{ll} 0&{}(k_1>1),\\ m&{}(k_1=\cdots =k_m=1, k_{m+1}>1), \end{array}\right. } \end{aligned}$$

for \({\textbf{k}}=(k_1,\ldots ,k_r)\) and specializing the parameter \((\lambda _1,\lambda _2,\xi _1,\xi _2)=(1,0,0,1)\) in (1.2), we have the following corollary.

Corollary 1.1

For \(k\in \mathbb {Z}_{>0}\) and \(r\in \mathbb {Z}_{\ge 0}\), we have

$$\begin{aligned} \sum _{{\textbf{k}}\in I(k+r,k)} w({\textbf{k}}) \zeta _{\mathcal {S}}({\textbf{k}}) = (-1)^{r-1} \zeta _{\mathcal {S}}(\overbrace{1,\ldots ,1}^{k-1},r+1). \end{aligned}$$
(1.3)
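To make the left-hand side of (1.3) concrete, here is a small enumeration script (ours; the helper names `indices` and `w` follow the notation \(I(k,r)\) and \(w({\textbf{k}})\) above, with the all-ones case of w taken to be the depth):

```python
def indices(k, r):
    """All indices of weight k and depth r, i.e. the set I(k, r)."""
    if r == 0:
        return [()] if k == 0 else []
    return [(first,) + rest
            for first in range(1, k - r + 2)   # leave room for r-1 more parts
            for rest in indices(k - first, r - 1)]

def w(kk):
    """Number of leading 1's of kk (0 if k_1 > 1; depth of kk if all parts are 1)."""
    m = 0
    for part in kk:
        if part != 1:
            break
        m += 1
    return m

# e.g. for k = 2, r = 2 the sum in (1.3) runs over I(4, 2):
for kk in indices(4, 2):
    print(kk, w(kk))
```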

Substituting \((\lambda _1,\lambda _2,\xi _1,\xi _2)=(1,1,-1,1)\) in (1.2) yields the following corollary.

Corollary 1.2

For \(k\in \mathbb {Z}_{>0}\) and an even integer \(r\ge 2\), we have

$$\begin{aligned} \sum _{{\textbf{k}}\in I(k+r,k)} 2^{w({\textbf{k}})} \zeta _{\mathcal {S}}({\textbf{k}})=0. \end{aligned}$$
(1.4)

The proofs of (1.3) and (1.4) are exactly the same as for FMZVs; for the details of Corollaries 1.1 and 1.2, see [3].

Remark

Neither (1.3) nor (1.4) holds without reduction modulo \(\zeta (2){\mathcal {Z}}\), because the proofs use the symmetric sum formula [7].

We denote by \({\mathfrak {S}}_n\) the symmetric group of degree n. For \(p,q\in \mathbb {Z}_{\ge 0}\), Kamano introduced in [3]

$$\begin{aligned} W_{p,q}:={} & {} \{\sigma \in {\mathfrak {S}}_{p+q+1}\mid \sigma (1)<\cdots<\sigma (p), \sigma (p+1)<\cdots <\sigma (p+q),\\{} & {} \quad \sigma (p+q+1)=p+q+1\}, \end{aligned}$$

and for \(\sigma \in W_{p,q}\), \({\varvec{\lambda }}=(\lambda _1,\ldots ,\lambda _p)\) and \({\varvec{\lambda }}'=(\lambda _{p+1},\ldots ,\lambda _{p+q})\), he defined \(P_i^\sigma ({\varvec{\lambda }},{\varvec{\lambda }}')\ \ (1\le i\le p+q)\) such that

$$\begin{aligned}{} & {} \sum _{i=1}^{p-1} \lambda _i \int _{t_i}^{t_{i+1}} \frac{dt'}{t'} + \lambda _p \int _{t_p}^{t_{p+q+1}} \frac{dt'}{t'} + \sum _{j=p+1}^{p+q-1} \lambda _j \int _{t_j}^{t_{j+1}} \frac{dt'}{t'} + \lambda _{p+q} \int _{t_{p+q}}^{t_{p+q+1}} \frac{dt'}{t'}\nonumber \\{} & {} \quad = \sum _{i=1}^{p+q} P_i^\sigma ({\varvec{\lambda }},{\varvec{\lambda }}') \int _{t_{\sigma ^{-1}(i)}}^{t_{\sigma ^{-1}(i+1)}} \frac{dt'}{t'}, \end{aligned}$$

which are uniquely determined. Then the argument in [3, Section 3], combined with the iterated integrals for RSMZVs, works well, and we obtain the following theorem.
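For illustration (our computation, in the notation above): when \(p=q=1\), \(W_{1,1}=\{\sigma \in {\mathfrak {S}}_3\mid \sigma (3)=3\}\) consists of the identity and the transposition swapping 1 and 2. On the region \(t_1<t_2\) (corresponding to the identity), collecting the forms on each subinterval gives

$$\begin{aligned} \lambda _1 \int _{t_1}^{t_3} \frac{dt'}{t'} + \lambda _2 \int _{t_2}^{t_3} \frac{dt'}{t'} = \lambda _1 \int _{t_1}^{t_2} \frac{dt'}{t'} + (\lambda _1+\lambda _2) \int _{t_2}^{t_3} \frac{dt'}{t'}, \end{aligned}$$

so \(P_1^{\mathrm{id}}=\lambda _1\) and \(P_2^{\mathrm{id}}=\lambda _1+\lambda _2\); on the region \(t_2<t_1\) one finds in the same way \(P_1^\sigma =\lambda _2\) and \(P_2^\sigma =\lambda _1+\lambda _2\). Compare the coefficients \(\lambda _1^{i_1}(\lambda _1+\lambda _2)^{i_2}\) and \(\lambda _2^{i_1}(\lambda _1+\lambda _2)^{i_2}\) appearing in the computations of \(I^{(1)}_{r,s}\) and \(I^{(2)}_{r,s}\) in Sect. 3.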

Theorem 1.1

For indeterminates \(\lambda _m,\xi _m\ (1\le m\le p+q)\) and \(r,s\in \mathbb {Z}_{\ge 0}\), we have

We now describe the structure of this article. In Sect. 2, we review fundamental facts on iterated integrals and the definition of RSMZVs. In Sect. 3, by using iterated integrals, we give the proof of our main result.

2 Preparation for proof

To prove the main result, let us introduce some notions concerning regularized iterated integrals. Our basic references are [1, 6]. We write a pair of a point \(p\in \mathbb {C}\) and a nonzero tangent vector \(v\in T_p\mathbb {C}=\mathbb {C}\) as \(v_p\). Fix \(a_1,\ldots ,a_n\in \{0,1\}\). Let \(\gamma :[0,1]\rightarrow \mathbb {C}\) be a cuspidal path from \(v_p\) to \(w_q\), that is, a continuous piecewise smooth map with \(\gamma \bigl ((0,1)\bigr )\subset \mathbb {C}\setminus \{0,1\}\), \(\gamma (0)=p\), \(\gamma '(0)=v\), \(\gamma (1)=q\), and \(\gamma '(1)=-w\). For the path \(\gamma \), define a function \(F_\gamma :(0,\frac{1}{2})\rightarrow \mathbb {C}\) by

$$\begin{aligned} F_\gamma (\epsilon ) := \underset{\epsilon<t_1<\cdots<t_n<1-\epsilon }{\int } \prod _{i=1}^n \frac{d\gamma (t_i)}{\gamma (t_i)-a_i}. \end{aligned}$$

It is known [1, 6] that \(F_\gamma (\epsilon )\) has an asymptotic expansion: there exist complex numbers \(c_0,c_1,\ldots ,c_n\in \mathbb {C}\) such that

$$\begin{aligned} F_\gamma (\epsilon ) = c_0+\sum _{k=1}^nc_k(\log \epsilon )^k +O(\epsilon \log ^{n+1}\epsilon ) \end{aligned}$$

as \(\epsilon \rightarrow 0\). Then the regularized iterated integral \(I_\gamma (v_p;a_1,\ldots ,a_n;w_q)\) is defined by

$$\begin{aligned} I_\gamma (v_p;a_1,\ldots ,a_n;w_q):=c_0. \end{aligned}$$
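As the simplest divergent example (our computation): for the straight path \(\gamma (t)=t\) with the tangential base points \(1_0\) and \((-1)_1\), \(n=1\), and \(a_1=1\),

$$\begin{aligned} F_\gamma (\epsilon ) = \int _\epsilon ^{1-\epsilon } \frac{dt}{t-1} = \log \epsilon - \log (1-\epsilon ) = \log \epsilon + O(\epsilon ), \end{aligned}$$

so \(c_0=0\) and \(c_1=1\), and hence \(I_\gamma (1_0;1;(-1)_1)=0\): the logarithmic divergence at the endpoint is discarded by the regularization.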

Note that, within a homotopy class, \(c_0,c_1,\ldots ,c_n\) are independent of the choice of the representative \(\gamma \).

Hereafter, two tangential base points \(0'\) and \(1'\) are understood as

$$\begin{aligned} 0'=1_0,\qquad 1'=(-1)_1. \end{aligned}$$

Let \({{\,\textrm{dch}\,}}\) denote the straight path from \(0'\) to \(1'\) [1]. By using this path, is given by

where \(\{0\}^k\) means \(\overbrace{0,\ldots ,0}^k\) for \(k\in \mathbb {Z}_{\ge 0}\). We define the path \(\alpha \) from \(1'\) to \(1'\) circling 1 once counterclockwise, and the composite path \(\beta :={{\,\textrm{dch}\,}}\cdot \ \alpha \cdot {{\,\textrm{dch}\,}}^{-1}\) (see Figures 1, 2, and 3).

Fig. 1 the path \({{\,\textrm{dch}\,}}\)

Fig. 2 the path \(\alpha \)

Fig. 3 the path \(\beta \)

Let \(\alpha ^n=\overbrace{\alpha \cdots \alpha }^n\) for \(n\in \mathbb {Z}_{>0}\).

Lemma 2.1

([1, Theorem 3.253, Example 3.261]) For any sequence \(a_1,\ldots ,a_m\in \{0,1\}\), we have

$$\begin{aligned} I_{{{\,\textrm{dch}\,}}}(0';a_1,\ldots ,a_m;1')&= (-1)^m I_{{{\,\textrm{dch}\,}}^{-1}}(1';a_m,\ldots ,a_1;0') \in {\mathcal {Z}}, \\ I_{\alpha ^n}(1';a_1,\ldots ,a_m;1')&= {\left\{ \begin{array}{ll} \dfrac{(2\pi in)^m}{m!}&{}(a_1=\cdots =a_m=1), \\ 0&{}(\text {otherwise}). \end{array}\right. } \end{aligned}$$
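The second formula of Lemma 2.1 can be observed numerically in the case \(m=1\): the loop \(\alpha ^n\) winds n times counterclockwise around 1, so the integral of \(dz/(z-1)\) along it is \(2\pi in\). A minimal sketch (ours; the radius, base point, and step count are arbitrary choices, not taken from [1]):

```python
import cmath

def winding_integral(n, steps=100000):
    """Numerically integrate dz/(z-1) along a circle of radius 0.5 around 1,
    traversed n times counterclockwise (left-endpoint Riemann sums)."""
    r = 0.5
    total = 0j
    prev = 1 + r  # starting point z(0)
    for k in range(1, steps + 1):
        z = 1 + r * cmath.exp(2j * cmath.pi * n * k / steps)
        total += (z - prev) / (prev - 1)
        prev = z
    return total

print(winding_integral(1))  # close to 2*pi*i
```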

Set \(\beta _n={{\,\textrm{dch}\,}}\cdot \ \alpha ^n \cdot {{\,\textrm{dch}\,}}^{-1}\). According to [6, Proposition 4], by Lemma 2.1, the iterated integral \(I_{\beta _n}(0';a_1,\ldots ,a_m;0')\) with \((a_1,\ldots ,a_m)=(1,\{0\}^{k_1-1},\ldots ,1,\{0\}^{k_r-1},1)\) is written as

Since the equation above holds for all \(n\in \mathbb {Z}_{>0}\), we may replace \(2\pi in\) with an indeterminate T as follows. For \(a_1,\ldots ,a_m\in \{0,1\}\), there exists a unique \(L(a_1,\ldots ,a_m;T)\in T{\mathcal {Z}}[T]\) such that

$$\begin{aligned} I_{\beta _n}(0';a_1,\ldots ,a_m;0')=L(a_1,\ldots ,a_m;2\pi in). \end{aligned}$$

Thus we put

$$\begin{aligned} \zeta _{RS}(k_1,\ldots ,k_r;T) := \frac{(-1)^r}{T} L(1,\{0\}^{k_1-1},\ldots ,1,\{0\}^{k_r-1},1;T) \end{aligned}$$

and get

which plays an important role in the proof of the main result. Note that \(L(a_1,\ldots ,a_m;T)\) is an abbreviation of the symbol \(L(e_{a_1}\cdots e_{a_m};T)\) introduced in [6].

Lemma 2.2

(cf. [2, Lemma 2.1])

$$\begin{aligned} \biggl ( \int _t^X \frac{d\beta _n(t')}{\beta _n(t')-1} \biggr )^i&= i! \underset{t<p_1<\cdots<p_i<X}{\int } \prod _{a=1}^i \frac{d\beta _n(p_a)}{\beta _n(p_a)-1}, \\ \biggl ( \int _t^X \frac{d\beta _n(t')}{\beta _n(t')} \biggr )^j&= j! \underset{t<q_1<\cdots<q_j<X}{\int } \prod _{b=1}^j \frac{d\beta _n(q_b)}{\beta _n(q_b)}. \end{aligned}$$

Proof

We have

$$\begin{aligned} \begin{aligned} \biggl ( \int _t^X \frac{d\beta _n(t')}{\beta _n(t')-1} \biggr )^i&= \underset{\begin{array}{c} t<p_1<X \\ \vdots \\ t<p_i<X \end{array}}{\int } \prod _{m=1}^i \frac{d\beta _n(p_m)}{\beta _n(p_m)-1} \\&= \sum _{\sigma \in {\mathfrak {S}}_i} \underset{\begin{array}{c} t<p_{\sigma (1)}<\cdots< p_{\sigma (i)}<X \end{array}}{\int } \prod _{m=1}^i \frac{d\beta _n(p_m)}{\beta _n(p_m)-1} \\&= i! \underset{t<p_1<\cdots<p_i<X}{\int } \prod _{m=1}^i \frac{d\beta _n(p_m)}{\beta _n(p_m)-1}, \end{aligned} \end{aligned}$$

which implies the first equality. The second equality follows by the same method, which completes the proof. \(\square \)
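The identity of Lemma 2.2 is the elementary symmetrization \(\bigl (\int _t^X g\bigr )^i=i!\int _{t<p_1<\cdots <p_i<X}\prod _a g(p_a)\), valid for any integrand. A numerical sketch for a real-valued g on [0, 1] (ours; the function name is illustrative):

```python
import math

def power_vs_simplex(g, i, N=500):
    """Compare (int_0^1 g)^i with i! times the ordered-simplex integral
    int_{0<p_1<...<p_i<1} g(p_1)...g(p_i), both by midpoint discretization."""
    h = 1.0 / N
    v = [g((k + 0.5) * h) * h for k in range(N)]
    power = sum(v) ** i
    # c[k] = sum over p_1 < ... < p_j = k of the products, built up over j
    c = v[:]
    for _ in range(i - 1):
        prefix, new = 0.0, []
        for k in range(N):
            new.append(v[k] * prefix)
            prefix += c[k]
        c = new
    return power, math.factorial(i) * sum(c)
```

For example, for \(g(u)=u\) and \(i=3\), both values approximate \((1/2)^3=1/8\).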

The following lemma is a key ingredient in the proof of our main result.

Lemma 2.3

Fix a sufficiently small \(\epsilon >0\). For \(X\in [\epsilon ,1-\epsilon ]\) and a continuous function \(f:\beta _n((\epsilon ,1-\epsilon ))^m\rightarrow \mathbb {C}\), we have

$$\begin{aligned}{} & {} \int _{\beta _n^{-1}(1-X)}^{\beta _n^{-1}(1-\epsilon )} dz_m \int _{\beta _n^{-1}(1-X)}^{z_m} dz_{m-1} \cdots \int _{\beta _n^{-1}(1-X)}^{z_2} dz_1 f(z_1,\ldots ,z_m) \\{} & {} \quad = \sum _{i=0}^m \biggl ( \int _{\beta _n^{-1}(\epsilon )}^{\beta _n^{-1}(1-\epsilon )} dz_m \int _{\beta _n^{-1}(\epsilon )}^{z_m} dz_{m-1} \cdots \int _{\beta _n^{-1}(\epsilon )}^{z_{i+2}} dz_{i+1} \biggr )\\{} & {} \qquad \times \biggl ( \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} dz_i \int _{\beta _n(X)}^{z_i} dz_{i-1} \cdots \int _{\beta _n(X)}^{z_2} dz_1 \biggr ) f(z_1,\ldots ,z_m). \end{aligned}$$

Proof

The key idea is to replace the path from \(\beta _n^{-1}(1-X)\) to \(z\in \beta _n((\epsilon ,1-\epsilon ))\) by two paths: one starting at \(\beta ^{-1}_n(1-X)=\beta _n(X)\) and going backward to \(\beta _n(1-\epsilon )\), and the other starting at \(\beta _n(1-\epsilon )=\beta ^{-1}_n(\epsilon )\) and going forward to z. See Figure 4 for the case \(n=1\).

Fig. 4 replacement of the path \(\beta _n^{-1}\)

We repeatedly apply this separating method to each iterated integral as follows:

$$\begin{aligned}{} & {} \int _{\beta _n^{-1}(1-X)}^{\beta _n^{-1}(1-\epsilon )} dz_m \int _{\beta _n^{-1}(1-X)}^{z_m} dz_{m-1} \cdots \int _{\beta _n^{-1}(1-X)}^{z_2} dz_1 f(z_1,\ldots ,z_m) \\{} & {} \quad = \biggl ( \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} + \int _{\beta _n^{-1}(\epsilon )}^{\beta _n^{-1}(1-\epsilon )} \biggr ) dz_m \\{} & {} \qquad \times \int _{\beta _n^{-1}(1-X)}^{z_m} dz_{m-1} \cdots \int _{\beta _n^{-1}(1-X)}^{z_2} dz_1 f(z_1,\ldots ,z_m) \\{} & {} \quad = \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} dz_m \int _{\beta _n(X)}^{z_m} dz_{m-1} \cdots \int _{\beta _n(X)}^{z_2} dz_1 f(z_1,\ldots ,z_m) \\{} & {} \qquad + \int _{\beta _n^{-1}(\epsilon )}^{\beta _n^{-1}(1-\epsilon )} dz_m \biggl ( \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} + \int _{\beta _n^{-1}(\epsilon )}^{z_m} \biggr ) dz_{m-1} \int _{\beta _n^{-1}(1-X)}^{z_{m-1}} dz_{m-2} \cdots \\{} & {} \qquad \times \int _{\beta _n^{-1}(1-X)}^{z_2} dz_1 f(z_1,\ldots ,z_m)\\{} & {} \quad = \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} dz_m \int _{\beta _n(X)}^{z_m} dz_{m-1} \cdots \\{} & {} \qquad \times \int _{\beta _n(X)}^{z_2} dz_1 f(z_1,\ldots ,z_m) \\{} & {} \qquad + \int _{\beta _n^{-1}(\epsilon )}^{\beta _n^{-1}(1-\epsilon )} dz_m \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} dz_{m-1} \int _{\beta _n(X)}^{z_{m-1}} dz_{m-2} \cdots \int _{\beta _n(X)}^{z_2} dz_1 f(z_1,\ldots ,z_m) \\{} & {} \qquad + \int _{\beta _n^{-1}(\epsilon )}^{\beta _n^{-1}(1-\epsilon )} dz_m \int _{\beta _n^{-1}(\epsilon )}^{z_m} dz_{m-1} \biggl ( \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} + \int _{\beta _n^{-1}(\epsilon )}^{z_{m-1}} \biggr ) dz_{m-2}\\{} & {} \qquad \times \int _{\beta _n^{-1}(1-X)}^{z_{m-2}} dz_{m-3} \cdots \int _{\beta _n^{-1}(1-X)}^{z_2} dz_1 f(z_1,\ldots ,z_m)\\{} & {} \quad =\cdots = \sum _{i=0}^m \biggl ( \int _{\beta _n^{-1}(\epsilon )}^{\beta _n^{-1}(1-\epsilon )} dz_m \int _{\beta _n^{-1}(\epsilon )}^{z_m} dz_{m-1} \cdots \int _{\beta _n^{-1}(\epsilon )}^{z_{i+2}} dz_{i+1} \biggr )\\{} & 
{} \qquad \times \biggl ( \int _{\beta _n(X)}^{\beta _n(1-\epsilon )} dz_i \int _{\beta _n(X)}^{z_i} dz_{i-1} \cdots \int _{\beta _n(X)}^{z_2} dz_1 \biggr ) f(z_1,\ldots ,z_m), \end{aligned}$$

which completes the proof. \(\square \)

Remark

Although we proved Lemmas 2.2 and 2.3 directly, they are consequences of general formulas. For a cuspidal path \(\gamma :[0,1] \rightarrow \mathbb {C}\) from a point x to a point y and \(a, b \in (0,1)\), consider the possibly divergent integral

$$\begin{aligned} \int _{\gamma , a, b} w_{a_{1}} \cdots w_{a_{m}} := \underset{a<t_{1}<\cdots<t_{m}<b}{\int } \frac{d \gamma \left( t_{1}\right) }{\gamma \left( t_{1}\right) -a_{1}} \cdots \frac{d \gamma \left( t_{m}\right) }{\gamma \left( t_{m}\right) -a_{m}} \end{aligned}$$

with \(a_1,\ldots ,a_m\in \{0,1\}\). Then, Lemma 2.2 follows from the shuffle product

where the shuffle of \( w_{a_{1}} \cdots w_{a_{m}}\) and \(w_{b_{1}} \cdots w_{b_{n}}\) with \(b_1,\ldots ,b_n\in \{0,1\}\) denotes the sum over all interleavings of the two words that keep the order of the letters within each word. Moreover, Lemma 2.3 is essentially a combination of the path composition formula

$$\begin{aligned} \int _{\gamma , a, b} w_{a_{1}} \cdots w_{a_{m}} = \sum _{j=0}^{m} \int _{\gamma , a, c} w_{a_{1}} \cdots w_{a_{j}} \int _{\gamma , c, b} w_{a_{j+1}} \cdots w_{a_{m}} \end{aligned}$$

and

$$\begin{aligned} \int _{\gamma , a, b} w_{a_{1}} \cdots w_{a_{m}} = (-1)^m \int _{\gamma ^{-1}, 1-b, 1-a} w_{a_{m}} \cdots w_{a_{1}}. \end{aligned}$$
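The combinatorial mechanism behind the path composition formula can be seen on the constant integrand (our illustration): splitting the simplex \(\{a<t_1<\cdots <t_m<b\}\) according to how many of the \(t_i\) lie below c gives

$$\begin{aligned} \frac{(b-a)^m}{m!} = \underset{a<t_1<\cdots<t_m<b}{\int } dt_1\cdots dt_m = \sum _{j=0}^{m} \frac{(c-a)^j}{j!}\, \frac{(b-c)^{m-j}}{(m-j)!}, \end{aligned}$$

which is the binomial expansion of \(\bigl ((c-a)+(b-c)\bigr )^m/m!\). Lemma 2.3 applies the same splitting, combined with the reversal of the path, to move integrations from \(\beta _n^{-1}\) to \(\beta _n\).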

Remark

Lemma 2.3 plays the same role as [3, Lemma 2.1]. We will see in (3.2) below that only the term \(i=m\) survives, while the other terms vanish modulo \(2\pi in{\mathcal {Z}}[2\pi in]\).

3 Proof of main theorem

Proof

(Proof of Main Theorem) Fix a sufficiently small \(\epsilon >0\). For \(X\in [\epsilon ,1-\epsilon ]\), we define

$$\begin{aligned} I_{r,s}(X){} & {} =I_{r,s}(\lambda _1,\lambda _2,\xi _1,\xi _2;X) \\{} & {} :=\frac{1}{r!s!} \underset{\begin{array}{c} \epsilon<t<X\\ \epsilon<u<X \end{array}}{\int } \biggl ( \lambda _1 \int _t^{X} \frac{d\beta _n(t')}{\beta _n(t')-1} + \lambda _2 \int _u^{X} \frac{d\beta _n(u')}{\beta _n(u')-1} \biggr )^r\nonumber \\{} & {} \quad \times \biggl ( \xi _1 \int _t^{X} \frac{d\beta _n(t')}{\beta _n(t')} + \xi _2 \int _u^{X} \frac{d\beta _n(u')}{\beta _n(u')} \biggr )^s\times \frac{d\beta _n(t)d\beta _n(u)}{(\beta _n(t)-1)(\beta _n(u)-1)}. \end{aligned}$$

We calculate \(I_{r,s}(X)\) in two ways. By using the binomial expansion and Lemma 2.2, we have

$$\begin{aligned}{} & {} I_{r,s}(X) \\{} & {} = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \frac{\lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2}}{i_1!i_2!j_1!j_2!} \underset{\epsilon<t<X}{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \biggl ( \int _t^{X} \frac{d\beta _n(t')}{\beta _n(t')-1} \biggr )^{i_1} \biggl ( \int _t^{X} \frac{d\beta _n(t')}{\beta _n(t')} \biggr )^{j_1} \\{} & {} \quad \times \underset{\epsilon<u<X}{\int } \frac{d\beta _n(u)}{\beta _n(u)-1} \biggl ( \int _u^{X} \frac{d\beta _n(u')}{\beta _n(u')-1} \biggr )^{i_2} \biggl ( \int _u^{X} \frac{d\beta _n(u')}{\beta _n(u')} \biggr )^{j_2} \\{} & {} = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2} \underset{\begin{array}{c} \epsilon<t<p_1<\cdots<p_{i_1}<X \\ \epsilon<t<q_1<\cdots<q_{j_1}<X \\ \epsilon<u<x_1<\cdots<x_{i_2}<X \\ \epsilon<u<y_1<\cdots<y_{j_2}<X \end{array}}{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \prod _{a=1}^{i_1} \frac{d\beta _n(p_a)}{\beta _n(p_a)-1}\\{} & {} \quad \times \prod _{b=1}^{j_1} \frac{d\beta _n(q_b)}{\beta _n(q_b)} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{c=1}^{i_2} \frac{d\beta _n(x_c)}{\beta _n(x_c)-1} \prod _{d=1}^{j_2} \frac{d\beta _n(y_d)}{\beta _n(y_d)} \\{} & {} = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2} \sum _{ \begin{array}{c} a_1+\cdots +a_{i_1+j_1}=i_1 \\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array} } \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}<X \\ \epsilon<u<\nu _1<\cdots<\nu _{i_2+j_2}<X \end{array} }{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \\{} & {} \quad \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l}, \end{aligned}$$

where we put

$$\begin{aligned} a_k = {\left\{ \begin{array}{ll} 1&{}\text {if}\, \mu _k\, \text {is one of}\, p_1,\ldots ,p_{i_1}, \\ 0&{}\text {if }\,\mu _k\, \text {is one of} \,q_1,\ldots ,q_{j_1}, \end{array}\right. } \qquad b_l = {\left\{ \begin{array}{ll} 1&{}\text {if }\,\nu _l\, \text {is one of} \,x_1,\ldots ,x_{i_2}, \\ 0&{}\text {if}\, \nu _l\, \text {is one of }\,y_1,\ldots ,y_{j_2}, \end{array}\right. } \end{aligned}$$

for \(1\le k\le i_1+j_1\) and \(1\le l\le i_2+j_2\). Since \(\beta _n^{-1}(t)=\beta _n(1-t)\), we have

$$\begin{aligned}{} & {} \underset{\epsilon<u<\nu _1<\cdots<\nu _{i_2+j_2}<X}{\int } \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \\{} & {} \quad = (-1)^{i_2+j_2+1} \underset{1-X<\nu _{i_2+j_2}<\cdots<\nu _1<u<1-\epsilon }{\int } \prod _{l=1}^{i_2+j_2} \frac{d\beta ^{-1}_n(\nu _l)}{\beta ^{-1}_n(\nu _l)-b_l} \frac{d\beta ^{-1}_n(u)}{\beta ^{-1}_n(u)-1}. \end{aligned}$$

From Lemma 2.3, we obtain

$$\begin{aligned}{} & {} (-1)^{i_2+j_2+1} \underset{1-X<\nu _{i_2+j_2}<\cdots<\nu _1<u<1-\epsilon }{\int } \prod _{l=1}^{i_2+j_2} \frac{d\beta ^{-1}_n(\nu _l)}{\beta ^{-1}_n(\nu _l)-b_l} \frac{d\beta ^{-1}_n(u)}{\beta ^{-1}_n(u)-1} \\{} & {} \quad =(-1)^{i_2+j_2+1} \biggl ( \underset{X<\nu _{i_2+j_2}<\cdots<\nu _1<u<1-\epsilon }{\int } \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \frac{d\beta _n(u)}{\beta _n(u)-1} \\{} & {} \qquad + \sum _{i=1}^{i_2+j_2+1} \underset{X<\nu _{i_2+j_2}<\cdots<\nu _i<1-\epsilon }{\int } \prod _{l=i}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l}\\{} & {} \qquad \times \underset{\epsilon<\nu _{i-1}<\cdots<\nu _1<u<1-\epsilon }{\int } \prod _{m=1}^{i-1} \frac{d\beta ^{-1}_n(\nu _m)}{\beta ^{-1}_n(\nu _m)-b_m} \frac{d\beta ^{-1}_n(u)}{\beta ^{-1}_n(u)-1} \biggr ), \end{aligned}$$

Thus we have

$$\begin{aligned}{} & {} \frac{(-1)^{r+2}}{2\pi in} \underset{\epsilon<X<1-\epsilon }{\int } I_{r,s}(X) \frac{d\beta _n(X)}{\beta _n(X)-1} \nonumber \\{} & {} \quad = \sum _{\begin{array}{c} i_1+i_2=r \nonumber \\ j_1+j_2=s \end{array}} (-1)^{i_2+j_2+1} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2} \sum _{ \begin{array}{c} a_1+\cdots +a_{i_1+j_1}=i_1 \nonumber \\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array} } \frac{(-1)^{k+2}}{2\pi i n} \nonumber \\{} & {} \qquad \times \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}< X<\nu _{i_2+j_2}<\cdots<\nu _1<u<1-\epsilon \end{array} }{\int } \frac{d\beta _n(t)}{\beta _n(t)-1}\nonumber \\{} & {} \qquad \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(X)}{\beta _n(X)-1}\nonumber \\{} & {} \qquad \times \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \frac{d\beta _n(u)}{\beta _n(u)-1}\nonumber \\{} & {} \qquad +P^\epsilon _{r,s}(2\pi in), \end{aligned}$$
(3.1)

where \(P^\epsilon _{r,s}(2\pi in)\) is given by

$$\begin{aligned}{} & {} P^\epsilon _{r,s}(2\pi in) =\sum _{\begin{array}{c} i_1+i_2=r\\ j_1+j_2=s \end{array}} (-1)^{i_2+j_2+1} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2} \sum _{ \begin{array}{c} a_1+\cdots +a_{i_1+j_1}=i_1 \\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array} } \sum _{i=1}^{i_2+j_2+1} \frac{(-1)^{k+i+2}}{2\pi in} \\{} & {} \quad \quad \quad \quad \quad \quad \quad \quad \times \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}<X<\nu _{i_2+j_2}<\cdots<\nu _i<1-\epsilon \\ \epsilon<u<\nu _1<\cdots<\nu _{i-1}<1-\epsilon \end{array}}{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \\{} & {} \quad \quad \quad \quad \quad \quad \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(X)}{\beta _n(X)-1} \prod _{l=i}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \\{} & {} \quad \quad \quad \quad \quad \quad \times \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{m=1}^{i-1} \frac{d\beta _n(\nu _m)}{\beta _n(\nu _m)-b_m}. \end{aligned}$$

Hence the regularization of (3.1), with \(2\pi in\) replaced by T, is equal to

$$\begin{aligned}{} & {} \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} (-1)^{i_2+j_2+1} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2}\nonumber \\{} & {} \quad \times \sum _{\begin{array}{c} {\textbf{k}}\in I(i_1+j_1+1,i_1+1) \\ {\textbf{l}}\in I(i_2+j_2+1,i_2+1) \end{array}} \biggl ( \zeta _{RS}({\textbf{k}},{\textbf{l}};T) + \sum _{i=0}^{i_2} \sum _{m=0}^{l_i-1} \frac{(-1)^{k+i+2}}{T} \nonumber \\{} & {} \quad \times L(1,\{0\}^{k_0-1},\ldots ,1,\{0\}^{k_{i_1}-1},1, \{0\}^{l_{i_2}-1} ,1,\ldots ,\{0\}^{l_{i+1}-1},1,\{0\}^{m+1};T)\nonumber \\{} & {} \quad \times L(1,\{0\}^{l_0-1},\ldots ,1,\{0\}^{l_i-m};T) \biggr ). \end{aligned}$$
(3.2)

Next, we give another expression of \(I_{r,s}(X)\). By dividing the region of integration, we have

$$\begin{aligned} I_{r,s}(X) = I^{(1)}_{r,s}(X) + I^{(2)}_{r,s}(X), \end{aligned}$$

where we set

$$\begin{aligned} I^{(1)}_{r,s}(X)&:=\frac{1}{r!s!}\underset{\epsilon<t<u<X}{\int }\cdots , \\ I^{(2)}_{r,s}(X)&:=\frac{1}{r!s!}\underset{\epsilon<u<t<X}{\int }\cdots . \end{aligned}$$

Using the binomial expansion and Lemma 2.2, we have

$$\begin{aligned}{} & {} I^{(1)}_{r,s}(X) \\{} & {} \quad = \frac{1}{r!s!} \underset{\begin{array}{c} \epsilon<t<u<X \end{array}}{\int } \biggl ( \lambda _1 \int _t^u \frac{d\beta _n(t')}{\beta _n(t')-1} + (\lambda _1+\lambda _2) \int _u^X \frac{d\beta _n(u')}{\beta _n(u')-1} \biggr )^r \\{} & {} \qquad \times \biggl ( \xi _1 \int _t^u \frac{d\beta _n(t')}{\beta _n(t')} + (\xi _1+\xi _2) \int _u^X \frac{d\beta _n(u')}{\beta _n(u')} \biggr )^s \frac{d\beta _n(t)d\beta _n(u)}{(\beta _n(t)-1)(\beta _n(u)-1)} \\{} & {} \quad = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \frac{\lambda _1^{i_1}(\lambda _1+\lambda _2)^{i_2}\xi _1^{j_1}(\xi _1+\xi _2)^{j_2}}{i_1!i_2!j_1!j_2!} \\{} & {} \qquad \times \underset{\epsilon<t<u<X}{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \biggl ( \int _t^u \frac{d\beta _n(t')}{\beta _n(t')-1} \biggr )^{i_1} \biggl ( \int _t^u \frac{d\beta _n(t')}{\beta _n(t')} \biggr )^{j_1}\\{} & {} \qquad \times \biggl ( \int _u^X\frac{d\beta _n(u')}{\beta _n(u')-1}\biggr )^{i_2} \biggl ( \int _u^X \frac{d\beta _n(u')}{\beta _n(u')} \biggr )^{j_2} \frac{d\beta _n(u)}{\beta _n(u)-1} \\{} & {} \quad = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \lambda _1^{i_1}(\lambda _1+\lambda _2)^{i_2}\xi _1^{j_1}(\xi _1+\xi _2)^{j_2} \\{} & {} \qquad \times \underset{\begin{array}{c} \epsilon<t<p_1<\cdots<p_{i_1}<u<x_1<\cdots<x_{i_2}<X \\ \epsilon<t<q_1<\cdots<q_{j_1}<u<y_1<\cdots<y_{j_2}<X \end{array}}{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \prod _{a=1}^{i_1} \frac{d\beta _n(p_a)}{\beta _n(p_a)-1}\\{} & {} \qquad \times \prod _{b=1}^{j_1} \frac{d\beta _n(q_b)}{\beta _n(q_b)} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{c=1}^{i_2} \frac{d\beta _n(x_c)}{\beta _n(x_c)-1} \prod _{d=1}^{j_2} \frac{d\beta _n(y_d)}{\beta _n(y_d)} \\{} & {} \quad = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \lambda _1^{i_1}(\lambda _1+\lambda _2)^{i_2}\xi _1^{j_1}(\xi _1+\xi _2)^{j_2}\\{} & {} \qquad \times \sum _{ \begin{array}{c} a_1+\cdots 
+a_{i_1+j_1}=i_1 \\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array}} \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}<u<\nu _1<\cdots<\nu _{i_2+j_2}<X \end{array} }{\int } \frac{d\beta _n(t)}{\beta _n(t)-1}\\{} & {} \qquad \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l}. \end{aligned}$$

So we obtain

$$\begin{aligned}{} & {} \underset{\epsilon<X<1-\epsilon }{\int } I_{r,s}^{(1)}(X) \frac{d\beta _n(X)}{\beta _n(X)-1} \\{} & {} \quad = \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \lambda _1^{i_1}(\lambda _1+\lambda _2)^{i_2}\xi _1^{j_1}(\xi _1+\xi _2)^{j_2}\\{} & {} \qquad \times \sum _{ \begin{array}{c} a_1+\cdots +a_{i_1+j_1}=i_1\\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array}} \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}<u<\nu _1<\cdots<\nu _{i_2+j_2}<X<1-\epsilon \end{array}}{\int } \frac{d\beta _n(t)}{\beta _n(t)-1}\\{} & {} \qquad \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \frac{d\beta _n(X)}{\beta _n(X)-1}. \end{aligned}$$

Similarly, the calculation of \(I^{(2)}_{r,s}(X)\) yields

$$\begin{aligned}{} & {} \underset{\epsilon<X<1-\epsilon }{\int } I_{r,s}^{(2)}(X) \frac{d\beta _n(X)}{\beta _n(X)-1} \\{} & {} \begin{aligned}&= \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} \lambda _2^{i_1}(\lambda _1+\lambda _2)^{i_2}\xi _2^{j_1}(\xi _1+\xi _2)^{j_2}\\&\quad \times \sum _{ \begin{array}{c} a_1+\cdots +a_{i_1+j_1}=i_1 \\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array} } \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}<u<\nu _1<\cdots<\nu _{i_2+j_2}<X<1-\epsilon \end{array} }{\int } \frac{d\beta _n(t)}{\beta _n(t)-1} \\&\quad \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \frac{d\beta _n(X)}{\beta _n(X)-1}. \end{aligned} \end{aligned}$$

Thus

$$\begin{aligned}{} & {} \frac{(-1)^{r+2}}{2\pi i n} \underset{\epsilon<X<1-\epsilon }{\int } I_{r,s}(X)\frac{d\beta _n(X)}{\beta _n(X)-1} \nonumber \\= & {} \frac{(-1)^{r+2}}{2\pi i n} \underset{\epsilon<X<1-\epsilon }{\int } \biggl ( I^{(1)}_{r,s}(X)+I_{r,s}^{(2)}(X) \biggr ) \frac{d\beta _n(X)}{\beta _n(X)-1} \nonumber \\= & {} \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} (\lambda _1^{i_1}\xi _1^{j_1}+\lambda _2^{i_1}\xi _2^{j_1}) (\lambda _1+\lambda _2)^{i_2}(\xi _1+\xi _2)^{j_2} \sum _{ \begin{array}{c} a_1+\cdots +a_{i_1+j_1}=i_1 \\ b_1+\cdots +b_{i_2+j_2}=i_2 \end{array} } \frac{(-1)^{k+2}}{2\pi i n} \nonumber \\{} & {} \times \underset{ \begin{array}{c} \epsilon<t<\mu _1<\cdots<\mu _{i_1+j_1}<u<\nu _1<\cdots<\nu _{i_2+j_2}<X<1-\epsilon \end{array} }{\int } \frac{d\beta _n(t)}{\beta _n(t)-1}\nonumber \\{} & {} \times \prod _{k=1}^{i_1+j_1} \frac{d\beta _n(\mu _k)}{\beta _n(\mu _k)-a_k} \frac{d\beta _n(u)}{\beta _n(u)-1} \prod _{l=1}^{i_2+j_2} \frac{d\beta _n(\nu _l)}{\beta _n(\nu _l)-b_l} \frac{d\beta _n(X)}{\beta _n(X)-1}. \end{aligned}$$
(3.3)

Hence the regularization of (3.3), with \(2\pi in\) replaced by T, is equal to

$$\begin{aligned}{} & {} \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} (\lambda _1^{i_1}\xi _1^{j_1}+\lambda _2^{i_1}\xi _2^{j_1}) (\lambda _1+\lambda _2)^{i_2}(\xi _1+\xi _2)^{j_2}\sum _{\begin{array}{c} {\textbf{k}}\in I(i_1+j_1+1,i_1+1) \\ {\textbf{l}}\in I(i_2+j_2+1,i_2+1) \end{array}} \zeta _{RS}({\textbf{k}},{\textbf{l}};T).\nonumber \\ \end{aligned}$$
(3.4)

By (3.2) and (3.4), we have

$$\begin{aligned}{} & {} \sum _{ \begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array} } \biggl ( (-1)^{i_2+j_2} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2} + (\lambda _1^{i_1}\xi _1^{j_1}+\lambda _2^{i_1}\xi _2^{j_1}) (\lambda _1+\lambda _2)^{i_2}(\xi _1+\xi _2)^{j_2} \biggr )\nonumber \\{} & {} \quad \times \sum _{ \begin{array}{c} {\textbf{k}}\in I(i_1+j_1+1,i_1+1) \\ {\textbf{l}}\in I(i_2+j_2+1,i_2+1) \end{array} } \zeta _{RS}({\textbf{k}},{\textbf{l}};T) =P^{{{\,\textrm{reg}\,}}}_{r,s}(T), \end{aligned}$$
(3.5)

where \(P^{{{\,\textrm{reg}\,}}}_{r,s}(T)\) is equal to

$$\begin{aligned}{} & {} \sum _{\begin{array}{c} i_1+i_2=r \\ j_1+j_2=s \end{array}} (-1)^{i_2+j_2+1} \lambda _1^{i_1}\xi _1^{j_1} \lambda _2^{i_2}\xi _2^{j_2} \\{} & {} \quad \times \sum _{\begin{array}{c} {\textbf{k}}\in I(i_1+j_1+1,i_1+1) \\ {\textbf{l}}\in I(i_2+j_2+1,i_2+1) \end{array}} \sum _{i=0}^{i_2} \sum _{m=0}^{l_i-1} \frac{(-1)^{k+i}}{T}\\{} & {} \quad \times L(1,\{0\}^{k_0-1},\ldots ,1,\{0\}^{k_{i_1}-1},1, \{0\}^{l_{i_2}-1}, 1,\ldots ,\{0\}^{l_{i+1}-1},1,\{0\}^{m+1};T) \\{} & {} \quad \times L(1,\{0\}^{l_0-1},\ldots ,1,\{0\}^{l_i-m};T). \end{aligned}$$

We see that \(P^{{{\,\textrm{reg}\,}}}_{r,s}(T)\) is a polynomial in T without constant term, because \(L(a_1,\ldots ,a_m;T)\in T{\mathcal {Z}}[T]\). Therefore, putting \(T=0\) in (3.5) yields (1.2). This completes the proof. \(\square \)