1 Introduction

It is well known that, in closed polyacetylene-like molecular chains with an even number of carbon atoms (e.g. benzene), the valence electrons arrange themselves on every other bond. This phenomenon is well understood in the Peierls model, introduced in 1930 (see [10, p.108] and [2]), which is a simple nonlinear functional describing, in particular, polyacetylene chains. This model is invariant under translations by one site, but there is a symmetry breaking: the minimizers are dimerized, in the sense that they are 2-periodic but not 1-periodic. This is known as the Peierls instability or Peierls distortion, and it is responsible for the high diamagnetism and low conductivity of certain materials such as bismuth [4].

In this paper, we study the Peierls model with temperature, and describe the corresponding phase diagram. We prove the existence of a critical temperature below which the chain is dimerized and above which the chain is 1-periodic. We characterize this critical temperature and study the transition around it. In order to state our main results, let us first recall what is known for the Peierls model without temperature.

1.1 The Peierls Model at Null Temperature

We focus on the case of even chains: we consider a periodic linear chain with \(L=2N\) classical atoms (for an integer \(N \ge 2\)), together with quantum non-interacting electrons. We denote by \(t_i\) the distance between the i-th and \((i + 1)\)-th atoms and set \(\{ {{\textbf {t}}}\}:=~\{ t_1, \ldots , t_L\}\). By periodicity, we mean that the atom indices are taken modulo L. The electrons are represented by a one-body density matrix \(\gamma \), which is a self-adjoint operator on \(\ell ^2(\{1, \ldots , L\}) \simeq {\mathbb {C}}^L\), satisfying the Pauli principle \(0 \le \gamma \le 1\). In this simple model, the electrons can hop between nearest-neighbor atoms and feel a Hamiltonian of the form

$$\begin{aligned} T = T( \{ {{\textbf {t}}}\} ):= \begin{pmatrix} 0 &{}\quad t_1 &{}\quad 0 &{}\quad 0 &{}\quad \cdots &{}\quad t_{L} \\ t_1 &{}\quad 0 &{}\quad t_2 &{}\quad \cdots &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad t_2 &{}\quad 0 &{}\quad t_3 &{}\quad \cdots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \cdots &{}\quad t_{L-2} &{}\quad 0 &{}\quad t_{L-1} \\ t_{L} &{}\quad 0 &{}\quad \cdots &{}\quad 0 &{}\quad t_{L-1} &{}\quad 0 \end{pmatrix}. \end{aligned}$$
(1)
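As a quick illustration (ours, not part of the paper; it assumes the NumPy library is available), the following snippet builds \(T(\{ {{\textbf {t}}}\})\) for an arbitrary even chain and checks that its spectrum is symmetric with respect to the origin, via the unitary equivalence \(U T U^{-1} = -T\) with \(U = \mathrm{diag}((-1)^i)\):

```python
# Illustration (not from the paper): the hopping matrix T({t}) of Eq. (1),
# and a check that, for L even, T is unitarily equivalent to -T through
# U = diag((-1)^i), so that its spectrum is symmetric about the origin.
import numpy as np

def hopping_matrix(t):
    """Periodic hopping matrix of Eq. (1); t[i] couples atoms i and i+1 (mod L)."""
    L = len(t)
    T = np.zeros((L, L))
    for i in range(L):
        T[i, (i + 1) % L] = T[(i + 1) % L, i] = t[i]
    return T

L = 8
rng = np.random.default_rng(0)
t = 1.0 + 0.1 * rng.random(L)              # arbitrary positive distances t_i
T = hopping_matrix(t)
U = np.diag([(-1.0) ** i for i in range(L)])
assert np.allclose(U @ T @ U, -T)          # U is its own inverse here
ev = np.linalg.eigvalsh(T)
assert np.allclose(np.sort(ev), np.sort(-ev))
```

This symmetry is used below to minimize over \(\gamma \) and to show \(\textrm{Tr}\,(T) = 0\).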

The Peierls energy of such a system reads [5, 8,9,10, 12]

$$\begin{aligned} {\widetilde{{{\mathcal {E}}}}}_{\textrm{full}}^{(L)}(\{ {\tilde{{{\textbf {t}}}}} \}, \gamma ):= \frac{1}{2}g \sum _{i=1}^{L}(\tilde{t_i} - b)^2 + 2 \textrm{Tr} \, (T \gamma ). \end{aligned}$$

The first term is the distortion energy of the atoms. Here, \(b > 0\) is the equilibrium distance between two atoms and \(g > 0\) is the rigidity of the chain. The second term models the electronic energy of the valence electrons (the factor 2 accounts for the spin). By scaling, setting \(\tilde{t_i} = b t_i\) and \(\mu = gb\), we have \({\widetilde{{{\mathcal {E}}}}}_{\textrm{full}}^{(L)}(\{ {\tilde{t}} \}, \gamma ) = b {{\mathcal {E}}}_{\textrm{full}}^{(L)}(\{ t\}, \gamma )\), with the energy

$$\begin{aligned} \boxed { {{\mathcal {E}}}^{(L)}_{\textrm{full}}(\{ {{\textbf {t}}}\}, \gamma ):= \frac{\mu }{2} \sum _{i=1}^{L}(t_i - 1)^2 + 2 \textrm{Tr} \, (T \gamma ). } \end{aligned}$$
(2)

There is only one parameter in the model, which is the strength \(\mu > 0 \). In the so-called half-filled model, this energy is minimized over all \(t_i > 0\) and all one-body density matrices (there is no constraint on the number of electrons):

$$\begin{aligned} \boxed { E^{(L)}:= \min \left\{ {{\mathcal {E}}}^{(L)}_{\textrm{full}}(\{ {{\textbf {t}}}\}, \gamma ), \quad {{\textbf {t}}}\in \mathbb {R}_+^L, \quad 0 \le \gamma = \gamma ^* \le 1 \right\} .} \end{aligned}$$

One can perform the minimization in \(\gamma \) first. We get

$$\begin{aligned} \min _{0 \le \gamma = \gamma ^* \le 1} 2 \textrm{Tr} \, \left( T \gamma \right) = 2 \textrm{Tr} \, \left( T {\mathbbm {1}} (T < 0 )\right) = - \textrm{Tr} \, ( | T | ) = - \textrm{Tr} \, \left( \sqrt{T^2} \right) , \end{aligned}$$
(3)

where we used that T is unitarily equivalent to \(-T\), so that its spectrum is symmetric with respect to the origin. The optimal density matrix in this case is \(\gamma _* = {\mathbbm {1}}(T < 0)\), which has \(\textrm{Tr} \, (\gamma _* ) = N\) electrons (hence the name half-filled). The energy simplifies into

$$\begin{aligned} E^{(L)} = \min \left\{ {{\mathcal {E}}}^{(L)}(\{ {{\textbf {t}}}\}), \quad {{\textbf {t}}}\in \mathbb {R}_+^L \right\} , \quad \text {with} \quad {{\mathcal {E}}}^{(L)}(\{ {{\textbf {t}}}\}):= \frac{\mu }{2} \sum _{i=1}^{L}(t_i - 1)^2 -\textrm{Tr} \, (\sqrt{T^2}). \end{aligned}$$

The energy \({{\mathcal {E}}}^{(L)}\) only depends on \(\{ {{\textbf {t}}}\}\), and is translationally invariant, in the sense that \({{\mathcal {E}}}^{(L)}(\{{{\textbf {t}}}\}) = {{\mathcal {E}}}^{(L)}(\{\tau _k {{\textbf {t}}}\})\) where \(\{\tau _k {{\textbf {t}}}\}:= \{ t_{k+1}, \ldots , t_{k+L} \}\). However, the minimizers of this energy are usually 2-periodic, as proved by Kennedy and Lieb [5] and Lieb and Nachtergaele [7]. More specifically, they proved the following:

\(\underline{\hbox {Case}~L \equiv 0 \mod 4.}\) There are two minimizing configurations for \(E^{(2N)}\), of the form

$$\begin{aligned} t_i = W + (-1)^i\delta \text{ or } t_i = W - (-1)^i\delta , \qquad \text {with }\delta > 0. \end{aligned}$$
(4)

These two configurations are called dimerized configurations [6]: they are 2-periodic but not 1-periodic. In other words, it is energetically favorable for the chain to break the 1-periodicity of the model. We prove in Appendix A that the corresponding gain of energy is actually exponentially small in the limit \(\mu \rightarrow \infty \).
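The dimerization can be observed concretely with a small numerical experiment (ours, assuming NumPy; a crude grid search over the two-parameter family \(t_i = W + (-1)^i \delta \), so the values are only indicative). For \(L = 8\) and \(\mu = 1\), the optimal \(\delta \) is strictly positive:

```python
# Grid-search sketch (ours): minimize E^{(L)} over dimerized configurations
# t_i = W + (-1)^i * delta, with L = 8 (L ≡ 0 mod 4) and mu = 1.
import numpy as np

def energy(W, delta, mu=1.0, L=8):
    t = W + delta * np.array([(-1) ** i for i in range(1, L + 1)], dtype=float)
    if np.any(t <= 0):                       # the distances t_i must stay positive
        return np.inf
    T = np.zeros((L, L))
    for i in range(L):
        T[i, (i + 1) % L] = T[(i + 1) % L, i] = t[i]
    # (mu/2) sum (t_i - 1)^2 - Tr sqrt(T^2)
    return 0.5 * mu * np.sum((t - 1.0) ** 2) - np.sum(np.abs(np.linalg.eigvalsh(T)))

Ws = np.linspace(1.0, 3.0, 81)
deltas = np.linspace(0.0, 1.5, 151)
E = np.array([[energy(W, d) for d in deltas] for W in Ws])
iW, idelta = np.unravel_index(np.argmin(E), E.shape)
assert deltas[idelta] > 0.0                  # the minimizer is dimerized
assert E[iW, idelta] < E[:, 0].min() - 1e-6  # strictly below all 1-periodic states
```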

\(\underline{\hbox {Case}~L \equiv 2 \mod 4.}\) This case is similar, but the minimizers may have \(\delta = 0\) for small values of L or large values of \(\mu \) (see also [6]). There is \(0< \mu _c(L) < \infty \) such that, for \(0< \mu < \mu _c(L)\), there are still two dimerized minimizers, as in (4), while for \(\mu > \mu _c(L)\), there is only one minimizer, which is 1-periodic, that is, \(\delta = 0\).

In all cases (with L even), one can restrict the minimization problem to configurations \(\{ {{\textbf {t}}}\}\) of the form \(t_i = W \pm (-1)^i \delta \) and obtain a minimization problem with only two parameters.
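The two-parameter reduction makes the \(L \equiv 2 \mod 4\) dichotomy easy to observe numerically. In the following sketch (ours, assuming NumPy; the two values of \(\mu \) are merely chosen to lie well below, resp. well above, the expected \(\mu _c(6) = O(1)\)), the minimizer for \(L = 6\) is dimerized in the first case only:

```python
# Grid-search sketch (ours) of the L ≡ 2 mod 4 dichotomy at zero temperature.
import numpy as np

def energy(W, delta, mu, L=6):
    t = W + delta * np.array([(-1) ** i for i in range(1, L + 1)], dtype=float)
    if np.any(t <= 0):                       # keep all distances positive
        return np.inf
    T = np.zeros((L, L))
    for i in range(L):
        T[i, (i + 1) % L] = T[(i + 1) % L, i] = t[i]
    return 0.5 * mu * np.sum((t - 1.0) ** 2) - np.sum(np.abs(np.linalg.eigvalsh(T)))

def best_delta(mu):
    Ws = np.linspace(0.5, 7.0, 131)
    deltas = np.linspace(0.0, 4.0, 161)
    E = np.array([[energy(W, d, mu) for d in deltas] for W in Ws])
    return deltas[np.unravel_index(np.argmin(E), E.shape)[1]]

d_soft, d_stiff = best_delta(0.3), best_delta(5.0)
assert d_soft > 0.0    # small mu: two dimerized minimizers
assert d_stiff == 0.0  # large mu: unique 1-periodic minimizer
```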

Although L is always even in the present paper, let us mention that molecules with L odd and very large have been studied at zero temperature by Garcia Arroyo and Séré [3]. In that case one gets “kink solutions” in the limit \(L\rightarrow \infty \).

1.2 The Peierls Model with Temperature, Main Results

In the present article, we extend these results to the positive temperature case by modifying the Peierls model in order to take the entropy of the electrons into account. We denote by \(\theta \) the temperature (the letter T is reserved for the matrix in (1)). Following the general scheme described in [1, Section 4], the free energy is now given by (compare with (2))

$$\begin{aligned} \boxed { {{\mathcal {F}}}_{{\textrm{full}}, \theta }^{(L)}(\{ {{\textbf {t}}}\}, \gamma ):= \frac{\mu }{2} \sum _{i=1}^{L}(t_i - 1)^2 + 2 \left\{ \textrm{Tr} \, (T \gamma ) + \theta \textrm{Tr} \, (S(\gamma )) \right\} }, \end{aligned}$$
(5)

with \(S(x):= x \log (x) + (1 - x) \log (1-x)\) the usual entropy function. We consider again the minimization over all one-body density matrices and study the minimization problem

$$\begin{aligned} \boxed { F^{(L)}_\theta := \min \left\{ {{\mathcal {F}}}_{{\textrm{full}}, \theta }^{(L)}(\{ {{\textbf {t}}}\}, \gamma ), \quad {{\textbf {t}}}\in \mathbb {R}_+^L, \quad 0 \le \gamma = \gamma ^* \le 1 \right\} .} \end{aligned}$$

There are now two parameters in the model, namely \(\mu \) and \(\theta \). The main goal of the paper is to study the phase diagram in the \((\mu , \theta )\) plane.

As in (3), one can perform the minimization in \(\gamma \) first (see Sect. 2.1 for the proof).

Lemma 1.1

We have

$$\begin{aligned} \min _{0 \le \gamma \le 1} 2 \left\{ \textrm{Tr} \, (T \gamma ) + \theta \textrm{Tr} \, (S(\gamma ))\right\} = - \textrm{Tr} \, \left( h_\theta (T^2) \right) , \end{aligned}$$
(6)

with the function

$$\begin{aligned} h_\theta (x):= 2 \theta \log \left( 2 \cosh \left( \frac{\sqrt{x}}{2 \theta } \right) \right) . \end{aligned}$$

The minimization problem on the left-hand side of (6) has the unique minimizer \(\gamma _* = (1+{\textrm{e}}^{ T/\theta })^{-1}\).
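This identity is easy to check numerically (our sketch, assuming NumPy; it relies on \(\textrm{Tr}\,(T) = 0\), which holds for the hopping matrix (1)):

```python
# Check (ours) of Lemma 1.1: the Fermi-Dirac state gamma_* attains the value
# -Tr h_theta(T^2), and any other admissible gamma gives a larger free energy.
import numpy as np

rng = np.random.default_rng(1)
L, theta = 8, 0.3
t = 1.0 + 0.2 * rng.random(L)
T = np.zeros((L, L))
for i in range(L):
    T[i, (i + 1) % L] = T[(i + 1) % L, i] = t[i]   # hopping matrix of Eq. (1)

S = lambda g: g * np.log(g) + (1 - g) * np.log(1 - g)   # entropy function
ev = np.linalg.eigvalsh(T)
g_star = 1.0 / (1.0 + np.exp(ev / theta))          # eigenvalues of gamma_*
value = 2.0 * np.sum(ev * g_star + theta * S(g_star))
rhs = -np.sum(2 * theta * np.log(2 * np.cosh(np.abs(ev) / (2 * theta))))
assert abs(value - rhs) < 1e-10                    # gamma_* attains -Tr h_theta(T^2)

Q, _ = np.linalg.qr(rng.standard_normal((L, L)))   # a random admissible gamma
u = rng.uniform(0.05, 0.95, L)
gamma = Q @ np.diag(u) @ Q.T
assert 2.0 * (np.trace(T @ gamma) + theta * np.sum(S(u))) > rhs
```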

The properties of the function \(h_\theta \) are given in Proposition 2.1. The free Peierls energy therefore simplifies into a minimization problem in \(\{ {{\textbf {t}}}\}\) only:

$$\begin{aligned} F^{(L)}_\theta = \inf \left\{ {{\mathcal {F}}}^{(L)}_\theta ( \{{{\textbf {t}}}\} ), \ {{\textbf {t}}}\in \mathbb {R}^L_+ \right\} , \quad \text {with} \quad {{\mathcal {F}}}^{(L)}_\theta ( \{{{\textbf {t}}}\}):= \frac{\mu }{2} \sum _{i=1}^L (t_i - 1)^2 - \textrm{Tr} \, \left( h_\theta (T^2) \right) . \end{aligned}$$
(7)

Our first theorem states that minimizers are always 2-periodic, and that they become 1-periodic when the temperature is large enough (phase transition).

Theorem 1.2

For any \(L=2N\), with N an integer and \(N \ge 2 \), there exists a critical temperature \(\theta _c^{(L)}:=\theta _c^{(L)}(\mu ) \ge 0\) such that:

  • for \(\theta \ge \theta _c^{(L)}\), the minimizer of \({{\mathcal {F}}}_\theta ^{(L)}\) is unique and 1-periodic;

  • for \(\theta \in (0, \theta _c^{(L)})\) (this set is empty if \(\theta _c^{(L)} = 0\)), there are exactly two minimizers, which are dimerized, of the form (4).

In addition,

  1. (i)

    If \(L\equiv 0\mod 4\), this critical temperature is positive (\(\theta _c^{(L)}(\mu ) > 0\) for all \(\mu > 0\)).

  2. (ii)

    If \(L\equiv 2\mod 4\), there is \(\mu _c:= \mu _c(L) > 0\) such that for \(\mu \le \mu _c\), \(\theta _c^{(L)} \) is positive (\(\theta _c^{(L)} >0\)), whereas for \(\mu > \mu _c\), \(\theta _c^{(L)} = 0\). Moreover, as a function of L we have \(\mu _c(L) \sim \frac{2}{\pi }\ln (L)\) at \(+\infty \).

This theorem only deals with an even number L of atoms. One expects a similar behavior for L odd and large, but the arguments in the proof are not sufficient to guarantee this: they only imply that the minimizer is 1-periodic when the temperature is large enough (see Remark 2.4). We do not know exactly what happens for a small positive temperature and an odd number L.

We postpone the proof of Theorem 1.2 until Sect. 2. The first part uses the concavity of the function \(h_\theta \) on \(\mathbb {R}_+\), while the proofs of (i) and (ii) are based on the Euler–Lagrange equations.

As in the null temperature case, minimizers are always 2-periodic; hence, the minimization problem reduces to one over the two variables W and \(\delta \). Actually, we have

$$\begin{aligned} F_\theta ^{(2N)} = (2N) \min \left\{ g_\theta ^{(2N)}(W, \delta ), \quad W \ge 0, \ \delta \ge 0\right\} , \end{aligned}$$

with the energy per unit atom (the following expression is justified in Eq. (13))

$$\begin{aligned} g_\theta ^{(2N)}(W, \delta ) = \frac{\mu }{2} \left[ (W- 1)^2 + \delta ^2 \right] - \frac{1}{2N} \sum _{k=1}^{2N} h_\theta \left( 4 W^2 \cos ^2 \left( \frac{2 k \pi }{2N} \right) + 4 \delta ^2 \sin ^2 \left( \frac{2 k \pi }{2N} \right) \right) . \end{aligned}$$
(8)

We recognize a Riemann sum in the last expression. This suggests that we can take the thermodynamic limit \(L \rightarrow \infty \). This limit is quite standard in the physics literature on long polymers: many theoretical papers present models of polymers at null temperature that are directly written for infinite chains (see e.g. [12]).
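The convergence of this Riemann sum can be checked directly (our sketch, assuming NumPy; the sample point \((W, \delta , \theta , \mu ) = (1.1, 0.2, 0.2, 2)\) is arbitrary):

```python
# Check (ours): the Riemann sum in Eq. (8) converges, as N grows, to the
# integral defining g_theta in Lemma 1.3 below.
import numpy as np

def h_theta(x, theta):
    return 2.0 * theta * np.log(2.0 * np.cosh(np.sqrt(x) / (2.0 * theta)))

def g_finite(W, delta, theta, mu, N):
    k = np.arange(1, 2 * N + 1)
    arg = (4 * W**2 * np.cos(k * np.pi / N)**2
           + 4 * delta**2 * np.sin(k * np.pi / N)**2)
    return 0.5 * mu * ((W - 1)**2 + delta**2) - np.mean(h_theta(arg, theta))

def g_limit(W, delta, theta, mu, M=20000):
    s = (np.arange(M) + 0.5) * 2 * np.pi / M   # midpoint rule on [0, 2*pi]
    arg = 4 * W**2 * np.cos(s)**2 + 4 * delta**2 * np.sin(s)**2
    return 0.5 * mu * ((W - 1)**2 + delta**2) - np.mean(h_theta(arg, theta))

params = (1.1, 0.2, 0.2, 2.0)
errs = [abs(g_finite(*params, N) - g_limit(*params)) for N in (4, 16, 64, 256)]
assert errs[-1] <= errs[0] and errs[-1] < 1e-8
```

Since the integrand is smooth and periodic, the convergence is in fact very fast.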

We define the thermodynamic limit free energy (per unit atom) as

$$\begin{aligned} \boxed { \displaystyle f_\theta := \displaystyle \liminf _{N \rightarrow + \infty } \frac{1}{2N}F_\theta ^{(2N)}}. \end{aligned}$$
(9)

As expected, we have the following (see Sect. 3.1 for the proof).

Lemma 1.3

We have \(f_\theta = \min \left\{ g_\theta (W, \delta ), \quad W \ge 0, \ \delta \ge 0 \right\} \) with

$$\begin{aligned} g_\theta (W, \delta ):= \frac{\mu }{2} \left[ (W- 1)^2 + \delta ^2 \right] - \frac{1}{2 \pi }\int _0^{2 \pi } h_\theta \left( 4 W^2 \cos ^2 ( s ) + 4 \delta ^2 \sin ^2 (s) \right) {\textrm{d}}s. \end{aligned}$$

The next theorem is similar to Theorem 1.2 and shows the existence of a critical temperature for the thermodynamic model. Its proof is postponed until Sect. 3.2 and is based on the study of the Euler–Lagrange equations.

Theorem 1.4

  There is a critical (thermodynamic) temperature \(\theta _c = \theta _c(\mu )\), positive for every \(\mu > 0\), such that for all \(\theta \ge \theta _c\) the minimizer of \(g_\theta \) satisfies \(\delta = 0\), whereas for all \(\theta < \theta _c\), it satisfies \(\delta > 0\).

In the large \(\mu \) limit, we have

$$\begin{aligned} \theta _c(\mu ) \sim C\exp \left( - \frac{\pi }{4} \mu \right) , \quad \text {with} \quad C \approx 0.61385. \end{aligned}$$

This reflects the fact that for an infinite chain, there is a transition between the dimerized states (\(\delta > 0\)), which are insulating (actually, one can show that the gap of the T matrix is of order \(\delta \)), and the 1-periodic state (with \(\delta = 0\)), which is metallic, as the temperature increases. This can be interpreted as an insulating/metallic transition for polyacetylene. Such a phase transition has been observed experimentally in the blue bronze in [11]. We display in Fig. 1 (left) the map \(\mu \mapsto \theta _c(\mu )\) in the \((\mu , \theta )\) plane.

In (9), we only consider the limit \(L = 2N \rightarrow \infty \) to define the thermodynamic critical temperature \(\theta _c\). Note that the cases \(L\equiv 0\mod 4\) and \(L\equiv 2\mod 4\) merge when L tends to infinity: this is consistent with the fact that the critical stiffness \(\mu _c(L)\) tends to infinity as \(L\rightarrow \infty \) in Theorem 1.2. We also expect odd chains to behave like even chains, but the study of the odd case is more delicate since we do not have an analogue of (8) and we leave it for future work.

Finally, we study the nature of the transition. It is not difficult to see that \(\delta \rightarrow 0\) as \(\theta \rightarrow \theta _c\). Actually, there is a bifurcation around this critical temperature, see also Fig. 1 (right).

Theorem 1.5

There is \(C > 0\) such that \(\delta (\theta ) = C\sqrt{(\theta _c - \theta )_+} + o\left( \sqrt{(\theta _c - \theta )_+}\right) .\)

We postpone the proof of Theorem 1.5 until Sect. 3.3. It mainly uses the implicit function theorem. The value of C is explicit and is given in the proof.

Fig. 1

Numerical simulations. (Left) the critical temperature \(\mu \mapsto \theta _c(\mu )\) and its asymptotic \(Ce^{-\frac{\pi }{4}\mu }\). (Right) The bifurcation of \(\delta \) in the thermodynamic model. We took \(\mu = 2\), and the critical temperature is found to be \(\theta _c = 0.2112\)

2 Proofs in the Finite Chain Peierls Model with Temperature

We now provide the proofs of our results. We gather in this section the proofs for the finite \(L = 2N\) model and postpone those for the thermodynamic model to the next section.

2.1 Proof of Lemma 1.1 and Properties of the h Functional

First, we justify the functional \({{\mathcal {F}}}^{(L)}_\theta \) appearing in (7) and provide the proof of Lemma 1.1.

Proof

We study the minimization problem

$$\begin{aligned} \min _{0 \le \gamma \le 1} 2 \left\{ \textrm{Tr} \, (T \gamma ) + \theta \textrm{Tr} \, (S(\gamma ) ) \right\} . \end{aligned}$$

Any critical point \(\gamma ^*\) of the functional satisfies the Euler–Lagrange equation

$$\begin{aligned} T + \theta S'(\gamma _*) = 0, \quad \text {that is} \quad T + \theta \ln \left( \frac{\gamma _*}{1 - \gamma _*} \right) = 0. \end{aligned}$$
(10)

There is therefore only one such critical point, given by

$$\begin{aligned} \gamma _* = \frac{1}{1 + {\textrm{e}}^{ T/\theta }} = \frac{{\textrm{e}}^{-T/(2 \theta )}}{2 \cosh (T/(2 \theta ))}, \quad \text {hence} \quad 1 - \gamma _* = \frac{1}{1 + {\textrm{e}}^{ -T/\theta }} = \frac{{\textrm{e}}^{T/(2 \theta )}}{2 \cosh (T/(2 \theta ))}. \end{aligned}$$

By convexity of the functional, this critical point is the minimizer. For this one-body density matrix, we obtain using (10)

$$\begin{aligned} 2 \left\{ \textrm{Tr} \, (T \gamma _*) + \theta \textrm{Tr} \, (S(\gamma _*) ) \right\}&= 2 \textrm{Tr} \, \left( \gamma _* \left[ T + \theta \ln \left( \frac{\gamma _*}{1 - \gamma _*}\right) \right] + \theta \ln (1 - \gamma _* )\right) \\&= 2 \theta \textrm{Tr} \, (\ln (1 - \gamma _*)) = \textrm{Tr} \, ( T ) - 2 \theta \textrm{Tr} \, \left( \ln \left[ 2 \cosh (T/(2\theta )) \right] \right) . \end{aligned}$$

Finally, since T is unitarily equivalent to \(-T\), we have \(\textrm{Tr} \, (T) = 0\). This gives, as wanted,

$$\begin{aligned} \min _{0 \le \gamma \le 1} 2 \left\{ \textrm{Tr} \, (T \gamma ) + \theta \textrm{Tr} \, (S(\gamma ) ) \right\} = - \textrm{Tr} \, \left( h_\theta (T^2) \right) , \quad \text {with} \quad h_\theta (x):= 2 \theta \ln \left( 2 \cosh \left( \frac{ \sqrt{x}}{2\theta } \right) \right) . \end{aligned}$$

\(\square \)

Let us gather here some properties of the function \(h_\theta \) that we will use throughout the article.

Proposition 2.1

We have \(h_\theta (x) = \theta h \left( \frac{x}{4 \theta ^2} \right) \) and \(h'_\theta (x) = \frac{1}{4 \theta } h' \left( \frac{x}{4 \theta ^2} \right) \), with

$$\begin{aligned} h(y) = 2 \log (2 \cosh ( \sqrt{y} )), \quad \text {and} \quad h'(y) = \dfrac{\tanh (\sqrt{y})}{\sqrt{y}}. \end{aligned}$$

In particular, h (hence \(h_\theta \)) is positive, increasing and concave. We have \(\lim _{y \rightarrow 0} h'(y) = 1\), and the inequality \(h_\theta (x) \ge \sqrt{x}\), valid for all \(\theta > 0\) and all \(x \ge 0\). In addition, we have the pointwise convergence \(h_\theta (x) \rightarrow \sqrt{x}\) as \(\theta \rightarrow 0\).

The last part shows that we recover the model at zero temperature. The concavity of h comes from the fact that \(h'\) is positive and decreasing. Another way to see it is to note that \(-h_\theta (t^2) = \min _{0 \le g \le 1} 2 \left\{ t \left( g - \tfrac{1}{2} \right) + \theta S(g) \right\} \) is a minimum of affine functions of t, hence concave in t. The inequality \(h_\theta (x) \ge \sqrt{x}\) comes from \(2 \cosh (x) \ge {\textrm{e}}^x\).
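These properties can be verified numerically (our sketch, assuming NumPy; finite differences stand in for the analytic derivative):

```python
# Checks (ours) of Proposition 2.1: the closed form of h', the monotonicity
# of h' (which gives concavity of h), and the bound h_theta(x) >= sqrt(x).
import numpy as np

h = lambda y: 2 * np.log(2 * np.cosh(np.sqrt(y)))
hp = lambda y: np.tanh(np.sqrt(y)) / np.sqrt(y)

y = np.linspace(0.1, 30.0, 300)
num_hp = (h(y + 1e-6) - h(y - 1e-6)) / 2e-6       # finite-difference h'
assert np.allclose(num_hp, hp(y), atol=1e-5)      # h'(y) = tanh(sqrt y)/sqrt y
assert np.all(np.diff(hp(y)) < 0)                 # h' decreasing, so h is concave

for theta in (0.1, 1.0, 10.0):
    x = np.linspace(0.0, 50.0, 500)
    ht = 2 * theta * np.log(2 * np.cosh(np.sqrt(x) / (2 * theta)))
    assert np.all(ht >= np.sqrt(x) - 1e-12)       # h_theta(x) >= sqrt(x)
```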

2.2 Proof of Theorem 1.2: Existence of a Critical Temperature

We now study the minimizers of \({{\mathcal {F}}}^{(L)}_\theta (\{ {{\textbf {t}}}\})\) in (7), which we recall is given by

$$\begin{aligned} {{\mathcal {F}}}^{(L)}_\theta (\{ {{\textbf {t}}}\}):= \frac{\mu }{2} \sum _{i=1}^L (t_i - 1)^2 - \textrm{Tr} \, \left( h_\theta (T^2) \right) . \end{aligned}$$

First, we prove that the minimizers are always 2-periodic. We then study the existence of a critical temperature. For the first part, our strategy follows closely the argument of Kennedy and Lieb in [5] and relies on the concavity of \(h_\theta \).

\(\underline{\hbox {All minimizers are}\,\,2-\hbox {periodic}}\). Recall that if \(x \mapsto \varphi (x)\) is concave over \(\mathbb {R}_+\), then \(A \mapsto \textrm{Tr} \, (\varphi (A))\) is concave over the set of positive matrices. Applying this property to \(h_\theta \), which is concave on \(\mathbb {R}_+\), we have

$$\begin{aligned} \textrm{Tr} \, ( h_\theta (T^2)) \le \textrm{Tr} \, (h_\theta ( \langle T^2 \rangle )), \end{aligned}$$

where \(\langle T^2 \rangle \) is defined as in [5] as the average of \(T^2\) over all translations:

$$\begin{aligned} \langle T^2 \rangle = \frac{1}{L}\sum _{k=1}^L \Theta _k T^2 \Theta _k^{-1}, \text{ with } \Theta _k = \Theta _1^k \text{ and } \Theta _1:= \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 1 &{}\quad \cdots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \cdots &{}\quad 1 \\ 1 &{}\quad 0 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \end{pmatrix}. \end{aligned}$$

This implies the lower bound

$$\begin{aligned} F^{(L)}_\theta \ge G_\theta ^{(L)} \end{aligned}$$
(11)

where

$$\begin{aligned} G^{(L)}_\theta:= & {} \inf \left\{ {{\mathcal {G}}}^{(L)}_\theta ( \{{{\textbf {t}}}\} ), \ {{\textbf {t}}}\in \mathbb {R}^L_+ \right\} , \quad \text {with} \quad \nonumber \\ {{\mathcal {G}}}_\theta ^{(L)}(\{{{\textbf {t}}}\})= & {} \frac{\mu }{2} \sum _{i=1}^L (t_i - 1)^2 - \textrm{Tr} \, \left( h_\theta (\langle T^2 \rangle ) \right) . \end{aligned}$$
(12)

In addition, we have equality in (11) iff the optimal \(\{{{\textbf {t}}}\}\) for \(G_\theta ^{(L)}\) satisfies \(T(\{ {{\textbf {t}}}\})^2 = \langle T(\{ {{\textbf {t}}}\})^2 \rangle \). Note that

$$\begin{aligned} T^2 = \begin{pmatrix} t_L^2 + t_1^2 &{} 0 &{} t_1 t_2 &{} 0 &{} \cdots &{} 0\\ 0 &{} t_1^2 + t_2^2 &{} 0 &{} t_2 t_3 &{} \cdots &{} t_L t_1 \\ t_1 t_2 &{} 0 &{} t_2^2 + t_3^2 &{} 0 &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ t_{L-1} t_L &{} 0 &{} \cdots &{} 0 &{} t_{L-2}^2 + t_{L-1}^2 &{} 0 \\ 0 &{} t_L t_1 &{} \cdots &{} t_{L-2} t_{L-1} &{} 0 &{} t_{L}^2 + t_1^2 \end{pmatrix}. \end{aligned}$$

So we have \(T(\{ {{\textbf {t}}}\})^2 = \langle T(\{ {{\textbf {t}}}\})^2 \rangle \) iff \(t_i^2 + t_{i+1}^2\) and \(t_i t_{i+1}\) are independent of i. This happens only if \(\{ {{\textbf {t}}}\}\) is 2-periodic.
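This equality condition can be tested numerically (our sketch, assuming NumPy): \(\langle T^2 \rangle \) equals \(T^2\) for a dimerized chain, but not for a generic configuration.

```python
# Check (ours): <T^2> = T^2 for a dimerized chain t_i = W + (-1)^i * delta,
# and <T^2> differs from T^2 for a generic (random) configuration.
import numpy as np

def hop(t):
    L = len(t)
    T = np.zeros((L, L))
    for i in range(L):
        T[i, (i + 1) % L] = T[(i + 1) % L, i] = t[i]
    return T

def average_T2(t):
    """Average of T^2 over all cyclic translations Theta_k."""
    L = len(t)
    Theta = np.roll(np.eye(L), 1, axis=1)   # translation matrix Theta_1
    T2 = hop(t) @ hop(t)
    P = np.eye(L)
    A = np.zeros((L, L))
    for _ in range(L):
        P = Theta @ P
        A += P @ T2 @ P.T / L
    return A

L = 8
t_dim = 1.0 + 0.3 * np.array([(-1.0) ** i for i in range(1, L + 1)])
assert np.allclose(average_T2(t_dim), hop(t_dim) @ hop(t_dim))      # 2-periodic
t_rnd = 1.0 + 0.3 * np.random.default_rng(2).random(L)
assert not np.allclose(average_T2(t_rnd), hop(t_rnd) @ hop(t_rnd))  # generic
```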

Introducing the variables (our notation slightly differs from the one in [5]: we put \(z^2\) instead of z, so that all quantities \((x, y, z)\) are homogeneous)

$$\begin{aligned} x:= \frac{1}{L} \sum _{i=1}^L t_i, \quad y^2:= \frac{1}{L} \sum _{i=1}^L t_i^2, \quad z^2 = \frac{1}{L} \sum _{i=1}^L t_i t_{i+1}, \end{aligned}$$

we obtain \(\langle T^2 \rangle = 2 y^2 \mathbb {I}_L + z^2 \Omega _L\) with \(\Omega _L:= \Theta _2 + \Theta _2^*\), and

$$\begin{aligned} {{\mathcal {G}}}^{(L)}_\theta (\{{{\textbf {t}}}\}) = {\widetilde{{{\mathcal {G}}}}}^{(L)}_\theta (x,y,z):= \frac{\mu L}{2}(y^2 - 2x + 1) - \textrm{Tr} \, \left( h_\theta (2y^2 \mathbb {I}_L + z^2 \Omega _L) \right) . \end{aligned}$$

The function \({\widetilde{{{\mathcal {G}}}}}^{(L)}_\theta \) is much easier to study, as it only depends on the three variables \((x, y, z)\). Let us identify the triplets \((x, y, z)\) coming from a 2-periodic or 1-periodic state.

Lemma 2.2

 

  • For all \({{\textbf {t}}}\in \mathbb {R}^L_+\), the corresponding triplet \((x, y, z)\) belongs to

    $$\begin{aligned} X:= \left\{ (x, y, z) \in \mathbb {R}^3_+, \quad y^2 \ge x^2, \quad z^2 \ge \max \{ 0, 2 x^2 - y^2 \} \right\} . \end{aligned}$$
  • If \(L = 2N\) is even, the configuration \({{\textbf {t}}}\) is 2-periodic of the form (4) iff the triplet \((x, y, z)\) belongs to

    $$\begin{aligned}{} & {} X_2:= \left\{ (x, y, z) \in \mathbb {R}^3_+ \ \text {of the form} \ x = W,\, y^2= W^2 + \delta ^2, \ z^2 = W^2 - \delta ^2 \right\} . \end{aligned}$$

    This happens iff \(z^2 = 2x^2 - y^2\).

  • The configuration \({{\textbf {t}}}\) is 1-periodic, of the form \({{\textbf {t}}}= (W, \ldots , W)\), iff \((x, y, z)\) belongs to

    $$\begin{aligned} X_1:= \left\{ (x, y, z) \in \mathbb {R}^3_+ \ \text {of the form} \ x = y = z = W \right\} . \end{aligned}$$

    This happens iff \(z^2 = 2x^2 - y^2\) and \(x = y\).

Proof

By Cauchy–Schwarz, we have

$$\begin{aligned} x^2 = \frac{1}{L^2} \left( \sum _{i=1}^L t_i \right) ^2 \le \frac{1}{L} \sum _{i=1}^L t_i^2 = y^2, \end{aligned}$$

which is the first inequality. Next, we have

$$\begin{aligned} z^2 = \frac{1}{2 L} \sum _{i=1}^L \left[ (t_i + t_{i+1})^2 - t_i^2 - t_{i+1}^2 \right] = \frac{1}{2 L} \sum _{i=1}^L (t_i + t_{i+1})^2 - y^2. \end{aligned}$$

On the other hand, we have by Cauchy–Schwarz,

$$\begin{aligned} x^2 = \left( \frac{1}{2 L} \sum _{i=1}^L (t_i + t_{i+1}) \right) ^2 \le \frac{1}{4 L} \sum _{i=1}^L (t_i + t_{i+1})^2. \end{aligned}$$

This proves that \(z^2 \ge 2 x^2 - y^2\). The other parts of the lemma can be easily checked. \(\square \)

Lemma 2.3

For any integer \(L > 2\) and all \(\theta \ge 0\), the minimizers of \({\widetilde{{{\mathcal {G}}}}}^{(L)}_\theta \) over X belong to \(X_2\).

Proof

Let us fix x and y, and look at the minimization over the variable z only. Setting \(Z:= z^2\), we see that

$$\begin{aligned} \varphi : Z \mapsto \textrm{Tr} \, \left( h_\theta (2 y^2 \mathbb {I}_L + Z \Omega _L) \right) \end{aligned}$$

is concave. In addition, the derivative of \(\varphi \) at \(Z = 0\) equals

$$\begin{aligned} \varphi '(0) = \textrm{Tr} \, \left( h_\theta ' (2 y^2 ) \Omega _L \right) = h_\theta ' (2 y^2 ) \textrm{Tr} \, ( \Omega _L ) = 0, \end{aligned}$$

where we used that \(\Omega _L\) only has null elements on its diagonal. We deduce that \(\varphi \) is decreasing on \(\mathbb {R}_+\). So the minimizer of \({\widetilde{{{\mathcal {G}}}}}^{(L)}_\theta \) must saturate the lower bound constraint \(z^2 = \max \{ 0, 2x^2 - y^2\}\).

We now claim that the optimal triplet \((x, y, z)\) satisfies \(2x^2 - y^2 \ge 0\). Assume by contradiction that \(2x^2 - y^2 < 0\), so that \(z^2 = 0\). We have

$$\begin{aligned} {\widetilde{{{\mathcal {G}}}}}^{(L)}_\theta (x,y,0)= & {} \frac{\mu L}{2}(y^2 - 2x + 1) - \textrm{Tr} \, \left( h_\theta (2y^2) \right) \\= & {} L \left( \frac{\mu }{2} (y^2 - 2x + 1) - h_\theta (2 y^2) \right) . \end{aligned}$$

This function is decreasing in x, so the optimal x saturates the constraint \(x^2 = y^2\). But in this case, we have \(2x^2 - y^2 = y^2 \ge 0 \), a contradiction. This proves that, at the optimum, we have \(2 x^2 - y^2 \ge 0\) and \(z^2 = 2x^2 - y^2\). Finally, \((x, y, z)\) belongs to \(X_2\). \(\square \)

Let \((x_*, y_*, z_*) \in X_2\) be the minimizer of \({\widetilde{{{\mathcal {G}}}}}_\theta ^{(L)}\), and let \(W \ge 0\) and \(\delta \ge 0\) be so that \(x_* = W\), \(y_*^2 = W^2 + \delta ^2\), and \(z_*^2 = W^2 - \delta ^2\). Let \({{\textbf {t}}}_*\) be one of the two 2-periodic states \(W \pm (-1)^i \delta \). We have \(T(\{ {{\textbf {t}}}_* \})^2 = \langle T(\{ {{\textbf {t}}}_* \})^2 \rangle \), which leads to the chain of inequalities

$$\begin{aligned} F_\theta ^{(L)} \ge G_\theta ^{(L)} \ge \min _{(x, y,z) \in X} {\widetilde{{{\mathcal {G}}}}}_\theta ^{(L)} = {\widetilde{{{\mathcal {G}}}}}_\theta ^{(L)}(x_*, y_*, z_*) = {{\mathcal {G}}}_\theta ^{(L)}(\{ {{\textbf {t}}}_* \}) = {{\mathcal {F}}}_\theta ^{(L)} ( \{ {{\textbf {t}}}_* \}) \ge F_\theta ^{(L)}. \end{aligned}$$

We therefore have equalities everywhere. Since only the 2-periodic states \(W \pm (-1)^i \delta \) give the optimal triplet \((x_*, y_*, z_*)\), they are the only minimizers. This proves that all minimizers of \({{\mathcal {F}}}_\theta ^{(L)}\) are 2-periodic. There are two dimerized minimizers if \(\delta > 0\), and a unique 1-periodic minimizer if \(\delta = 0\).

Remark 2.4

In the case of odd chains, we still have the equation \(F_\theta ^{(L)} \ge G_\theta ^{(L)}\) in (11). However, the optimal triplet \((x_*, y_*, z_*)\) does not usually come from a state \(\{ {{\textbf {t}}}_* \}\): an odd chain cannot be dimerized. It can, however, come from such a state if \(\delta = 0\), that is, if \({{\textbf {t}}}_*\) is actually one-periodic. One can therefore prove that also for odd chains, minimizers become 1-periodic for large enough temperature.

\(\underline{\hbox {Existence of the critical temperature}.}\) Since all minimizers are 2-periodic, we can parametrize \({{\mathcal {G}}}_\theta ^{(L)}\) as a function of \((W, \delta )\) instead of \(\{ {{\textbf {t}}}\}\). So we write (in what follows, we normalize by L to get the energy per atom)

$$\begin{aligned} g^{(L)}_\theta (W, \delta ) = \frac{\mu }{2} \left[ (W- 1)^2 + \delta ^2 \right] - \frac{1}{L} \textrm{Tr} \, \left( h_\theta ( 2 (W^2 + \delta ^2) \mathbb {I}_L + (W^2 - \delta ^2) \Omega _L ) \right) . \end{aligned}$$

To evaluate the last trace, we compute the spectrum of \(\Omega _L\). We have, for all \(1 \le k \le L\),

$$\begin{aligned} \Omega _L {\textbf {e}}_k = 2 \cos \left( \frac{4 k \pi }{L} \right) {\textbf {e}}_k, \quad \text {where} \quad {\textbf {e}}_ k = (1, {\textrm{e}}^{2i \pi k/L}, {\textrm{e}}^{2 \cdot 2i \pi k /L}, \ldots , {\textrm{e}}^{(L-1) \cdot 2i \pi k /L})^T. \end{aligned}$$

So

$$\begin{aligned} \sigma \left( \Omega _L \right) = \left\{ 2 \cos \left( \frac{4 k \pi }{L} \right) , \quad 1 \le k \le L \right\} . \end{aligned}$$

This shows that

$$\begin{aligned} g^{(L)}_\theta (W, \delta )&= \frac{\mu }{2} \left[ (W- 1)^2 + \delta ^2 \right] - \frac{1}{L}\sum _{k=1}^L h_\theta \left( 2 (W^2 + \delta ^2) +2 (W^2 - \delta ^2) \cos \left( \frac{4 k \pi }{L} \right) \right) \nonumber \\&= \frac{\mu }{2} \left[ (W- 1)^2 + \delta ^2 \right] - \frac{1}{L} \sum _{k=1}^L h_\theta \left( 4 W^2 \cos ^2 \left( \frac{2 k \pi }{L} \right) + 4 \delta ^2 \sin ^2 \left( \frac{2 k \pi }{L} \right) \right) , \nonumber \\ \end{aligned}$$
(13)

which is the expression given in (8). The function \(g_\theta \) appearing in Lemma 1.3 has a similar expression, but we replace the last Riemann sum by the corresponding integral.
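As a consistency check (ours, assuming NumPy), the spectral sum in (13) can be compared with a direct diagonalization of T for a dimerized chain:

```python
# Cross-check (ours) of Eq. (13): for a dimerized chain, the direct evaluation
# of Tr h_theta(T^2) agrees with the explicit sum over the spectrum of Omega_L.
import numpy as np

theta, W, delta, L = 0.3, 1.2, 0.25, 10
h = lambda x: 2 * theta * np.log(2 * np.cosh(np.sqrt(x) / (2 * theta)))

t = W + delta * np.array([(-1.0) ** i for i in range(1, L + 1)])
T = np.zeros((L, L))
for i in range(L):
    T[i, (i + 1) % L] = T[(i + 1) % L, i] = t[i]
direct = np.sum(h(np.linalg.eigvalsh(T) ** 2))     # Tr h_theta(T^2)

k = np.arange(1, L + 1)
spectral = np.sum(h(4 * W**2 * np.cos(2 * k * np.pi / L)**2
                    + 4 * delta**2 * np.sin(2 * k * np.pi / L)**2))
assert abs(direct - spectral) < 1e-10
```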

First, we prove that for \(\theta \) large enough, the minimizer is 1-periodic (corresponding to \(\delta = 0\)).

Lemma 2.5

For all \(\theta \ge \frac{1}{\mu }\), the minimizer of \({{\mathcal {G}}}_\theta ^{(L)}\) satisfies \(\delta = 0\). The same holds for the function \(g_\theta \) (thermodynamic limit case).

Proof

We prove the result in the thermodynamic limit, but the proof works similarly at fixed L. Let \((W_1, 0)\) denote the minimizer of \(g_\theta \) among 1-periodic configurations (that is with the extra constraint that \(\delta = 0\)). Writing that \(\partial _W g_\theta (W_1, 0)= 0\), we obtain that

$$\begin{aligned} \mu (W_1 - 1) = \frac{W_1}{ \pi \theta } \int _0^{2 \pi } h'\left( \dfrac{W_1^2 \cos ^2(s)}{\theta ^2}\right) \cos ^2(s) {\textrm{d}}s. \end{aligned}$$
(14)

For any other configuration \((W, \delta )\), we write \(W = W_1 + \varepsilon \) and obtain

$$\begin{aligned} g_\theta (W_1 + \varepsilon , \delta ) - g_\theta (W_1, 0)&= \frac{\mu }{2} \left[ 2(W_1 - 1) \varepsilon + \varepsilon ^2 + \delta ^2 \right] \\&\quad - \frac{\theta }{2 \pi } \int _0^{2 \pi } \left[ h \left( \dfrac{ (W_1 + \varepsilon )^2 \cos ^2(s) + \delta ^2 \sin ^2(s)}{\theta ^2 }\right) \right. \\ {}&\quad \left. - h \left( \dfrac{W_1^2 \cos ^2 (s)}{\theta ^2} \right) \right] {\textrm{d}}s. \end{aligned}$$

Using that h is concave, we have \(h(a + b) - h(a) \le h'(a)b\), so, with \(a = W_1^2 \cos ^2(s) / \theta ^2\) and \(b = \left[ \delta ^2 \sin ^2(s) + (2W_1\varepsilon + \varepsilon ^2) \cos ^2( s) \right] /\theta ^2\), we get

$$\begin{aligned}&g_\theta (W_1 + \varepsilon , \delta ) - g_\theta (W_1, 0)\\&\ge \mu (W_1 - 1) \varepsilon + \frac{\mu }{2} \varepsilon ^2 + \frac{\mu }{2} \delta ^2 \\&\quad - \frac{1}{2 \pi \theta } \int _0^{2 \pi } h' \left( \dfrac{ W_1^2 \cos ^2 (s)}{\theta ^2} \right) \cdot \left[ \delta ^2 \sin ^2(s) + (2W_1\varepsilon + \varepsilon ^2) \cos ^2( s) \right] {\text {d}}s. \end{aligned}$$

Using (14), the term linear in \(\varepsilon \) vanishes. In addition, since \(h'' < 0\) on \(\mathbb {R}_+\), we have \(h'(x) \le ~h'(0)=~1\). This gives

$$\begin{aligned}&g_\theta (W_1 + \varepsilon , \delta ) - g_\theta (W_1, 0) \ge \left( \frac{\mu }{2} - \frac{1}{2 \theta } \right) \varepsilon ^2 + \left( \frac{\mu }{2} - \frac{1}{2 \theta } \right) \delta ^2 . \end{aligned}$$

The right-hand side is positive whenever \(\theta > \frac{1}{\mu }\). At \(\theta = \frac{1}{\mu }\), it vanishes, but the previous inequality is then strict for \((\varepsilon , \delta ) \ne (0,0)\), since \(h'(x) < 1\) for all \(x > 0\) and \(W_1 > 0\). This proves the result. \(\square \)

In what follows, we define the critical temperature \(\theta _c = \theta _c(\mu )\) by

$$\begin{aligned} \theta _c:= \inf \{ \theta \in \mathbb {R}_+, \ \text {the minimizer of } g_{\theta '}\text { has }\delta = 0\text { for all }\theta ' \ge \theta \}. \end{aligned}$$

We define similarly \(\theta _c^{(L)} = \theta _c^{(L)}(\mu )\) for the case of finite chains.

\(\underline{\hbox {Study of the critical temperature in the case}\, L \in 2 \mathbb {N}.}\) We now study \(\theta _c^{(L)}\) with \(L = 2N\), \(N \ge 2\), and prove that it is strictly positive if \(L \equiv 0 \mod 4\), and that, if \(L \equiv 2 \mod 4\), there is \(\mu _c = \mu _c(L)\) so that \(\theta _c^{(L)}(\mu ) > 0\) iff \(\mu < \mu _c(L)\).

For fixed \(\theta \), any minimizing configuration \((W, \delta )\) satisfies the Euler–Lagrange equations

$$\begin{aligned} (\partial _W g_\theta ^{(L)}, \partial _\delta g_\theta ^{(L)})(W, \delta ) = (0,0). \end{aligned}$$

This gives the set of equations

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu (W - 1) &{} = \displaystyle \frac{2W}{\theta } \frac{1}{L}\sum _{k = 1}^{L} h'\left( \frac{W^2}{\theta ^2} \cos ^2( \tfrac{2k\pi }{L}) + \frac{\delta ^2}{\theta ^2} \sin ^2( \tfrac{2k\pi }{L}) \right) .\cos ^2(\tfrac{2k\pi }{L}) \\ \mu \delta &{} = \displaystyle \frac{2\delta }{\theta } \frac{1}{L}\sum _{k = 1}^{L} h'\left( \frac{W^2}{\theta ^2} \cos ^2(\tfrac{2k\pi }{L}) + \frac{\delta ^2}{\theta ^2} \sin ^2( \tfrac{2k\pi }{L})\right) .\sin ^2(\tfrac{2k\pi }{L}). \end{array}\right. }\nonumber \\ \end{aligned}$$
(15)

Note that the second equation always admits the trivial solution \(\delta = 0\). This corresponds to the critical point among 1-periodic configurations. It is the unique solution if \(\theta \ge \theta _c^{(L)}\), but for \(\theta \in (0, \theta _c^{(L)})\), there are other critical points, corresponding to the dimerized configurations. Actually, as \(\theta \) varies, we expect two branches of solutions: the branch of 1-periodic configurations and the branch of dimerized configurations. These two branches cross only at \(\theta = \theta _c\) (see Fig. 1 (right)).

In order to focus on the branch of dimerized configurations, we factor out the \(\delta \) factor in the second equation. Now, \(\delta = 0\) is no longer a solution, unless we are exactly at the critical temperature \(\theta _c^{(L)}\). So, in order to find this critical temperature, we seek the solution, in \((W, \theta )\), of (we multiply the second equation by W for clarity)

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu (W - 1) &{} = \displaystyle \frac{2W}{\theta } \frac{1}{L}\sum _{k = 1}^{L} h'\left( \frac{W^2}{\theta ^2} \cos ^2( \tfrac{2k\pi }{L})\right) .\cos ^2(\tfrac{2k\pi }{L}) \\ \mu W &{} = \displaystyle \frac{2 W}{\theta } \frac{1}{L}\sum _{k = 1}^{L} h'\left( \frac{W^2}{\theta ^2} \cos ^2(\tfrac{2k\pi }{L})\right) .\sin ^2(\tfrac{2k\pi }{L}). \end{array}\right. } \end{aligned}$$
(16)

Lemma 2.6

For all \(\mu > 0\), there is a unique solution \((W, \theta )\) of (16) in the case \(L \equiv 0 \mod 4\), whereas if \(L \equiv 2 \mod 4\), there is some value \(\mu _c:=\mu _c(L)\) such that (16) has no solution for \(\mu > \mu _c\) and has a unique one if \(\mu \le \mu _c\). Moreover, in the latter case, \(\mu _c(L)\sim \frac{2}{\pi }\ln (L)\) as \(L \rightarrow +\infty \).

Proof

We write \(L = 2N\) and note that the terms k and \(k + N\) give the same contribution. Taking the difference of the second and first equations of (16), we obtain

$$\begin{aligned} \mu = - \displaystyle \frac{2 W}{\theta } \frac{1}{N}\sum _{k = 1}^{N} h'\left( \frac{W^2}{\theta ^2} \cos ^2(\tfrac{k\pi }{N})\right) .\cos (\tfrac{2k\pi }{N}). \end{aligned}$$

Recall that \(h'(t) = \frac{\tanh (\sqrt{t})}{\sqrt{t}}\) for \(t\ne 0\) and \(h'(0) = 1\). The point \(t = 0\) therefore plays a special role. The argument of \(h'\) equals 0 for \(k =\frac{N}{2}\), which happens only if \(N \equiv 0 \mod 2\) (that is \(L \equiv 0 \mod 4\)). In this case, the equation becomes, with \(x:= \frac{W}{\theta }\) (we write \(L = 2N = 4n\))

$$\begin{aligned} \mu = -\frac{1}{n} \sum _{\tiny {\begin{matrix} k = 1 \\ k\ne n \end{matrix}} }^{2n}\frac{\tanh \left( x\cos (\frac{k\pi }{2n})\right) }{\cos (\frac{k\pi }{2n})}.\cos (\tfrac{k\pi }{n}) + \frac{x}{n}=: {{\mathcal {J}}}_{2n}(x). \end{aligned}$$
(17)

The function \({{\mathcal {J}}}_{2n}\) is smooth and satisfies \({{\mathcal {J}}}_{2n}(0) = 0\). The first sum is uniformly bounded for \(x \in \mathbb {R}_+\) while the second term diverges, so \({{\mathcal {J}}}_{2n}(+ \infty ) = + \infty \). We claim that \({{\mathcal {J}}}_{2n}\) is increasing. The intermediate value theorem then gives the existence and uniqueness of the solution of \({{\mathcal {J}}}_{2n}(x) = \mu \) on \(\mathbb {R}_+\). This gives \(\frac{W}{\theta } = {{\mathcal {J}}}_{2n}^{-1}(\mu )\). We then deduce, respectively, \(\theta \) and W from the first and second equations of (16). This proves that (16) has a unique solution. The corresponding temperature is the critical temperature \(\theta _c^{(L)}\).

It remains to prove that \({{\mathcal {J}}}_{2n}\) is increasing. Splitting the sum in (17) into two sums of size \(n-1\), we get

$$\begin{aligned} {{\mathcal {J}}}_{2n}(x) = \frac{1}{n}\left( x - \tanh (x)\right) + \frac{1}{n}\sum _{k=1}^{n-1}\left( \frac{\tanh \left( x\sin \left( \frac{k\pi }{2n}\right) \right) }{\sin \left( \frac{k\pi }{2n}\right) } - \frac{\tanh \left( x\cos \left( \frac{k\pi }{2n}\right) \right) }{\cos \left( \frac{k\pi }{2n}\right) }\right) .\cos \left( \tfrac{k\pi }{n}\right) . \end{aligned}$$

Its derivative is given by

$$\begin{aligned} {{\mathcal {J}}}_{2n}'(x)= & {} \frac{1}{n}\left( 1 - \frac{1}{\cosh ^2(x)}\right) \\{} & {} + \frac{1}{n} \sum _{k = 1}^{n-1}\left( \frac{1}{\cosh ^2\left( x\sin \left( \frac{k\pi }{2n}\right) \right) } - \frac{1}{\cosh ^2\left( x\cos \left( \frac{k\pi }{2n}\right) \right) } \right) .\cos (\tfrac{k\pi }{n}). \end{aligned}$$

For all \(s \in [0, 1]\), the function

$$\begin{aligned} \left[ \cosh ^{-2}\left( x\sin \left( \tfrac{\pi }{2}s\right) \right) - \cosh ^{-2}\left( x\cos \left( \tfrac{\pi }{2}s\right) \right) \right] .\cos (\pi s) \end{aligned}$$

is nonnegative (both factors are nonnegative if \(s \in [0, 1/2]\), and both are nonpositive if \(s \in [1/2, 1]\)). This shows that \({{\mathcal {J}}}_{2n}\) is increasing, as wanted.
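For \(L \equiv 0 \mod 4\), the construction above translates directly into a numerical scheme: evaluate \(\mathcal {J}_{2n}\) from (17), invert it by bisection (using the monotonicity just proved), and recover W and \(\theta\) from the first equation of (16). A sketch in Python; the chain length \(L = 16\) and the values of \(\mu\) are arbitrary test choices.

```python
import math

def J2n(x, n):
    # the function J_{2n} of (17), for a chain of length L = 4n
    total = x / n
    for k in range(1, 2 * n + 1):
        if k == n:
            continue
        c = math.cos(k * math.pi / (2 * n))
        total -= math.tanh(x * c) / c * math.cos(k * math.pi / n) / n
    return total

def theta_c_L(mu, n):
    # bisection for J_{2n}(x) = mu (J_{2n} is increasing with J_{2n}(0) = 0)
    lo, hi = 0.0, 100.0 * (1.0 + mu)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J2n(mid, n) < mu else (lo, mid)
    x = 0.5 * (lo + hi)
    # first equation of (16) with W = x * theta: since
    # (2W/theta) h'(x^2 c^2) c^2 = 2 tanh(x|c|) |c|, it reads mu (W - 1) = R(x)
    L = 4 * n
    R = 2.0 / L * sum(math.tanh(x * abs(math.cos(2 * k * math.pi / L)))
                      * abs(math.cos(2 * k * math.pi / L)) for k in range(1, L + 1))
    W = 1.0 + R / mu
    return W / x   # theta = W / x

print("theta_c for L = 16, mu = 1:", theta_c_L(1.0, 4))
```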

In the case where N is odd (that is, \(L \equiv 2 \mod 4\)), the argument of \(h'\) never vanishes, and we simply have (we write \(L = 2N = 4n + 2\))

$$\begin{aligned} \mu = -\frac{1}{2n+1} \sum _{k = 1}^{2n+1}\frac{\tanh \left( x\cos (\frac{k\pi }{2n+1})\right) }{\cos (\frac{k\pi }{2n+1})}.\cos (\tfrac{2 k\pi }{2n+1}) =: {{\mathcal {J}}}_{2n+1}(x). \end{aligned}$$

We claim again that \({{\mathcal {J}}}_{2n+1}\) is increasing (see below). However, we now have

$$\begin{aligned} \lim _{x \rightarrow \infty } {{\mathcal {J}}}_{2n + 1}(x) = -\frac{1}{2n+1} \sum _{k = 1}^{2n+1}\frac{\cos (\tfrac{2 k\pi }{2n+1})}{\left| \cos (\frac{k\pi }{2n+1}) \right| } =: \mu _c(L). \end{aligned}$$
(18)

If \(\mu \in (0, \mu _c(L))\), we can again apply the intermediate value theorem and deduce that the equation \({{\mathcal {J}}}_{2n+1}(x) = \mu \) has the unique solution \(x = {{\mathcal {J}}}_{2n+1}^{-1}(\mu )\). We deduce as before that there is a unique solution of the system (16) in this case. If instead \(\mu > \mu _c(L)\), then the system (16) has no solution.

Let us prove that \({{\mathcal {J}}}_{2n+1}\) is increasing (this will eventually prove that \(\mu _c(L) > 0\)). Its derivative is given by

$$\begin{aligned} (2n+1) {{\mathcal {J}}}_{2n+1}'(x)= & {} - \sum _{k=1}^{2n+1}\frac{\cos \left( \frac{2k\pi }{2n+1}\right) }{\cosh ^2\left( x\cos \left( \frac{k\pi }{2n+1}\right) \right) }\\= & {} - \frac{1}{\cosh ^2(x)} - 2 \sum _{k=1}^n \frac{\cos \left( \frac{2k\pi }{2n+1}\right) }{\cosh ^2\left( x\cos \left( \frac{k\pi }{2n+1}\right) \right) }. \end{aligned}$$

In the last equality, we isolated the \(k = 2n+1\) term and used the change of variables \(k' = 2n+1 - k\) for \(n+1 \le k \le 2n\). When \(1 \le k \le \frac{2n+1}{4}\), we have \(\cos \left( \frac{2k\pi }{2n+1}\right) \ge 0\), while \(\frac{1}{\sqrt{2}} \le \cos (\tfrac{k \pi }{2n+1}) \le 1\). On the other hand, if \(\frac{2n+1}{4} \le k \le n\), we have \(\cos \left( \frac{2k\pi }{2n+1}\right) \le 0\), and \(0 \le \cos (\tfrac{k \pi }{2n+1}) \le \frac{1}{\sqrt{2}}\). In both cases, we deduce that

$$\begin{aligned} \forall k \in \{ 1, \ldots , n \}, \quad - \frac{\cos \left( \frac{2k\pi }{2n+1}\right) }{\cosh ^2\left( x\cos \left( \frac{k\pi }{2n+1}\right) \right) } \ge - \frac{\cos \left( \frac{2k\pi }{2n+1}\right) }{\cosh ^2(\frac{x}{\sqrt{2}})}. \end{aligned}$$

Summing over k, and using that

$$\begin{aligned} 2 \sum _{k=1}^n \cos \left( \frac{2k\pi }{2n+1}\right) = \sum _{k=1}^{2n} \cos \left( \frac{2k\pi }{2n+1}\right) = \sum _{k=1}^{2n+1} \cos \left( \frac{2k\pi }{2n+1}\right) - 1 = -1, \end{aligned}$$

we obtain the lower bound

$$\begin{aligned} (2n+1) {{\mathcal {J}}}_{2n+1}'(x) \ge - \frac{1}{\cosh ^2(x)} + \frac{1}{\cosh ^2(\frac{x}{\sqrt{2}})} \ge 0, \end{aligned}$$

which proves that \({{\mathcal {J}}}_{2n+1}\) is increasing.

Finally, we estimate \(\mu _c(L)\), defined in (18). We rewrite \(\mu _c(L)\) as

$$\begin{aligned} \mu _c(L)= & {} \frac{1}{2n+1} \sum _{k=1}^{2n+1} f\left( \tfrac{k}{2n+1} \right) + \frac{1}{2n+1} \sum _{k=1}^{2n+1} \dfrac{1}{\pi | \frac{k}{2n+1} - \frac{1}{2}|},\\{} & {} \quad \text {with} \quad f(s):= - \dfrac{\cos (2 \pi s)}{| \cos (\pi s) |} - \frac{1}{\pi | s - \frac{1}{2}|}. \end{aligned}$$

We recognize a Riemann sum in the first term. Since the function f is integrable on [0, 1] (there is no singularity at \(s = \frac{1}{2}\)), this term converges to the integral of f. For the second term, we recognize a harmonic sum. More specifically, we have

$$\begin{aligned} \frac{1}{2n+1} \sum _{k=1}^{2n+1} \dfrac{1}{\pi | \frac{k}{2n+1} - \frac{1}{2}|} = \frac{1}{\pi } \sum _{k=1}^{2n+1} \dfrac{1}{ | k - n - \frac{1}{2}|} \sim \frac{2}{\pi } \sum _{k'=1}^n \frac{1}{( k' - \frac{1}{2})} \sim \frac{2}{\pi } \log (n) \sim \frac{2}{\pi } \log (L). \end{aligned}$$

This proves that \(\mu _c(L) \sim \frac{2}{\pi }\log (L)\) as \(L \rightarrow +\infty \) and completes the proof. \(\square \)
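The asymptotics \(\mu _c(L) \sim \frac{2}{\pi }\log (L)\) can be observed by evaluating the finite sum (18) directly (a Python sketch; the chain lengths are arbitrary test choices). As the proof suggests, the convergence of the ratio to 1 is slow, of order \(1/\log L\).

```python
import math

def mu_c(n):
    # mu_c(L) as defined in (18), for a chain of length L = 4n + 2
    N = 2 * n + 1
    return -sum(math.cos(2 * k * math.pi / N) / abs(math.cos(k * math.pi / N))
                for k in range(1, N + 1)) / N

for n in (10, 100, 1000):
    L = 4 * n + 2
    print(L, mu_c(n), 2.0 / math.pi * math.log(L))
```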

3 Proofs in the Thermodynamic Model

We now focus on the thermodynamic model.

3.1 Proof of Lemma 1.3: Justification of the Thermodynamic Model

First, we show that this model is indeed the limit of the finite chain model as \(L \rightarrow \infty \). We denote by \(f_\theta ^{(2N)}\) the minimum of \(g_\theta ^{(2N)}\) (so \(f_\theta ^{(2N)} = \frac{1}{2N} F_\theta ^{(2N)}\)) and by \({\widetilde{f}}_\theta \) the minimum of \(g_\theta \). Our goal is to prove that \({\widetilde{f}}_\theta = f_\theta \), where we recall that \(f_\theta := \liminf _N f_\theta ^{(2N)}\).

We denote by \((W_{2N}, \delta _{2N})\) the optimizer of \(g_\theta ^{(2N)}\), and by \((W_*, \delta _*)\) the one of \(g_\theta \). First, from the pointwise convergence \(g_\theta ^{(2N)}(W, \delta ) \rightarrow g_\theta (W, \delta )\), we obtain

$$\begin{aligned} {\widetilde{f}}_\theta = g_\theta (W_*, \delta _*) = \lim _{N \rightarrow \infty } g_\theta ^{(2N)}(W_*, \delta _*) \ge \liminf _{N \rightarrow \infty } f_\theta ^{(2N)} = f_\theta . \end{aligned}$$

For the other inequality, we use that \(h_\theta (x) \le \sqrt{x} + 2\theta \ln (2)\), so

$$\begin{aligned} g_\theta ^{(2N)}(W, \delta ) \ge \frac{\mu }{2} \left[ (W-1)^2 + \delta ^2 \right] - \sqrt{W^2 + \delta ^2} -2\theta \ln (2). \end{aligned}$$

In particular, \(g_\theta ^{(2N)}\) is bounded from below and coercive, uniformly in N, so the sequence of optimizers \((W_{2N}, \delta _{2N})\) is bounded in \(\mathbb {R}^2_+\). Up to extracting a subsequence (not displayed), we may assume that

$$\begin{aligned} f_\theta = \lim _{N \rightarrow \infty } f_\theta ^{(2N)} = \lim _{N \rightarrow \infty } g_\theta ^{(2N)} (W_{2N}, \delta _{2N}), \quad \text {and} \quad \lim _{N \rightarrow \infty } (W_{2N}, \delta _{2N}) =: (W_\infty , \delta _\infty ). \end{aligned}$$

We then have

$$\begin{aligned} f_\theta = \lim _{N \rightarrow \infty } g_\theta ^{(2N)}(W_{2N}, \delta _{2N}) = \lim _{N \rightarrow \infty } g_\theta (W_{2N}, \delta _{2N}) + \lim _{N \rightarrow \infty } \left[ g_\theta ^{(2N)} - g_\theta \right] (W_{2N}, \delta _{2N}). \end{aligned}$$

The first limit equals \(g_\theta (W_\infty , \delta _\infty )\), by continuity of the functional \(g_\theta \). For the second limit, we use that \(g_\theta ^{(2N)} - g_\theta \) is the difference between an integral and a corresponding Riemann sum. If \({{\mathcal {I}}}_N(s)\) denotes the integrand, this difference is controlled by \(\frac{c}{2N} \sup _{s} \Vert {{\mathcal {I}}}_N'(s) \Vert \). In our case, \({{\mathcal {I}}}_N(s) = h_\theta (4 W_{2N}^2 \cos ^2(\pi s) + 4 \delta _{2N}^2 \sin ^2(\pi s))\), whose derivative is uniformly bounded in N, since \((W_{2N}, \delta _{2N})\) is bounded. This proves that the second limit vanishes, hence

$$\begin{aligned} f_\theta = g_\theta (W_\infty , \delta _\infty ) \ge {\widetilde{f}}_\theta . \end{aligned}$$

We conclude that \(f_\theta = {\widetilde{f}}_\theta \). In particular, by uniqueness of the minimizer of \(g_\theta \), we must have \((W_\infty , \delta _\infty ) = (W_*, \delta _*)\), and the whole sequence \((W_{2N}, \delta _{2N})\) converges to \((W_*, \delta _*)\).

3.2 Proof of Theorem 1.4: Estimation of the Critical Temperature

We now study the properties of \(\theta _c\), the critical temperature in the thermodynamic limit. Reasoning as in the finite L case, the critical temperature \(\theta _c\) can be found by solving the equations in \((W, \theta )\) (compare with (16))

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu (W - 1) &{} =\displaystyle \frac{W}{ \pi \theta } \int _0^{2 \pi } h' \left( \dfrac{ W^2 \cos ^2 (s)}{\theta ^2} \right) \cdot \cos ^2 (s) {\textrm{d}}s \\ \mu W &{} = \displaystyle \frac{W}{\pi \theta } \int _0^{2 \pi }h' \left( \dfrac{ W^2 \cos ^2 (s)}{\theta ^2} \right) \cdot \sin ^2 (s) {\textrm{d}}s. \end{array}\right. } \end{aligned}$$

Using again the expression \(h'(t) = \frac{\tanh (\sqrt{t})}{\sqrt{t}}\), and splitting the integrals over \((0, 2 \pi )\) into four integrals over intervals of length \(\pi /2\), this is also

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu (W - 1) &{} = \displaystyle \frac{4}{ \pi } \int _0^{\pi /2} \tanh \left( \dfrac{W \cos (s)}{\theta } \right) \cdot \cos (s) {\textrm{d}}s \\ \mu W &{} = \displaystyle \frac{4}{\pi } \int _0^{\pi /2} \tanh \left( \dfrac{W \cos (s) }{\theta }\right) \frac{\sin ^2(s)}{\cos (s)} {\textrm{d}}s. \end{array}\right. } \end{aligned}$$
(19)

Let us prove that this system always admits a unique solution. The proof is similar to the previous \(L \equiv 0 \mod 4\) case. Taking the difference of the two equations gives, with \(x:= \frac{W}{\theta }\),

$$\begin{aligned} \mu = -\frac{4}{\pi }\int _0^{\pi /2} \tanh \left( x\cos (s)\right) .\frac{\cos (2s)}{\cos (s)} {\textrm{d}}s =: {{\mathcal {J}}}\left( x\right) . \end{aligned}$$
(20)

The function \({{\mathcal {J}}}\) is differentiable on \(\mathbb {R}_+\) with derivative given by

$$\begin{aligned} {{\mathcal {J}}}'(x) = \frac{4}{\pi }\int _{0}^{\pi /4} \left( \frac{1}{\cosh ^2(x\sin (s))} - \frac{1}{\cosh ^2(x\cos (s))}\right) .\cos (2s) {\textrm{d}}s. \end{aligned}$$

The integrand is positive for all \(s \in (0, \pi /4)\), so \({{\mathcal {J}}}\) is a strictly increasing function on \(\mathbb {R}_+\), and since \({{\mathcal {J}}}([0, +\infty )) = [0, +\infty )\), we get \(x = \frac{W}{\theta } = {{\mathcal {J}}}^{-1}(\mu )\). The first equation of (19) gives

$$\begin{aligned} \mu (x\theta - 1) = \frac{4}{\pi }\int _{0}^{\pi /2}\tanh \left( x\cos (s)\right) .\cos (s){\textrm{d}}s. \end{aligned}$$

This proves that \(\theta _c\) is well defined and depends only on \(\mu \).
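The inversion of \({{\mathcal {J}}}\) is easy to carry out numerically. Below is a sketch in Python; the quadratures use a simple midpoint rule, and the bisection bracket and test values of \(\mu\) are arbitrary choices.

```python
import math

_M = 2000
_NODES = [(j + 0.5) * (math.pi / 2) / _M for j in range(_M)]
_DS = (math.pi / 2) / _M

def J(x):
    # J(x) from (20); the integrand extends continuously (with value -x) at s = pi/2
    return -4.0 / math.pi * _DS * sum(
        math.tanh(x * math.cos(s)) * math.cos(2 * s) / math.cos(s) for s in _NODES)

def critical_point(mu):
    # bisection for J(x) = mu, using that J is increasing and unbounded
    lo, hi = 0.0, 10.0 * math.exp(math.pi * mu / 4.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < mu else (lo, mid)
    x = 0.5 * (lo + hi)
    # first equation of (19) with W = x * theta
    rhs = 4.0 / math.pi * _DS * sum(math.tanh(x * math.cos(s)) * math.cos(s)
                                    for s in _NODES)
    W = 1.0 + rhs / mu
    return W, W / x   # (W, theta_c)

W2, t2 = critical_point(2.0)
print("mu = 2:  W =", W2, " theta_c =", t2)
```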

We now estimate this critical temperature. We are interested in the large \(\mu \) limit. First, since \(\mathbb {R}\ni u\mapsto \tanh (u)\) is a bounded function, the first equation shows that \(\mu (W - 1)\) is uniformly bounded in \(\mu \), so \(W = 1 + O(\mu ^{-1})\) as \(\mu \rightarrow \infty \). Then, we must have \(\theta \rightarrow 0\) as \(\mu \rightarrow \infty \) in order to satisfy the second equation, whose left-hand side diverges. Using the dominated convergence theorem in the first integral gives

$$\begin{aligned} \frac{4}{ \pi } \int _0^{\pi /2} \tanh \left( \dfrac{W}{\theta } \cos (s) \right) \cdot \cos (s) {\textrm{d}}s \xrightarrow [\theta \rightarrow 0]{} \frac{4}{ \pi } \int _0^{\pi /2} \cos (s) {\textrm{d}}s = \frac{4}{\pi }, \end{aligned}$$

so the first equation gives

$$\begin{aligned} W = 1 + \frac{4}{\pi \mu } + o\left( \frac{1}{\mu }\right) . \end{aligned}$$

We now evaluate the integral on the right-hand side of the second equation in the limit \(\theta \rightarrow 0\). It is convenient to make the change of variables \(s \mapsto \pi /2 - s\), so we compute

$$\begin{aligned} I(\theta ):= \int _0^{\pi /2} \tanh \left( \frac{W}{\theta } \sin (s) \right) \frac{\cos ^2(s)}{\sin (s)} {\textrm{d}}s. \end{aligned}$$

In order to evaluate \(I(\theta )\) as \(\theta \rightarrow 0\), we write \(I = I_1 + I_2\) with

$$\begin{aligned} I_1:= & {} \int _0^{\pi /2} \tanh \left( \frac{W}{\theta } \sin (s) \right) \frac{\cos (s)}{\sin (s)} {\textrm{d}}s \quad \text {and} \quad \\ {}{} & {} \quad I_2:= \int _0^{\pi /2} \tanh \left( \frac{W}{\theta } \sin (s) \right) \frac{\cos (s)(\cos (s) - 1)}{\sin (s)} {\textrm{d}}s. \end{aligned}$$

For the first integral, we make the change of variable \(u = \frac{W}{\theta } \sin (s)\) and get

$$\begin{aligned} I_1= & {} \int _0^{\frac{W}{\theta }} \frac{\tanh \left( u \right) }{u} {\textrm{d}}u = \ln \left( \frac{W}{\theta } \right) + c_1 + o(1), \quad \text {with} \quad \\ c_1:= & {} \int _0^1 \frac{\tanh (u)}{u} {\textrm{d}}u + \int _1^\infty \frac{\tanh (u) - 1 }{u} {\textrm{d}}u. \end{aligned}$$

The value of \(c_1\) is computed numerically to be \(c_1 \approx 0.8188\). For the second integral \(I_2\), we remark that the integrand is uniformly bounded in \(\theta \) and s, so \(I_2 = O(1)\). Actually, since \(\theta \rightarrow 0\), we have by the dominated convergence theorem that

$$\begin{aligned} I_2 = \int _0^{\pi /2} \frac{\cos (s)( \cos (s) - 1)}{\sin (s)} {\textrm{d}}s + o(1) = \ln (2) - 1 + o(1). \end{aligned}$$

Altogether, we obtain that

$$\begin{aligned} I(\theta ) = \ln \left( \frac{W}{\theta } \right) + c_2 + o(1), \quad \text {with} \quad c_2 = c_1 + \ln (2) - 1 \approx 0.512. \end{aligned}$$

Together with the second equation of (19), we obtain

$$\begin{aligned} \mu = \frac{4}{\pi W} \left( \ln \left( \frac{W}{\theta } \right) + c_2 + o(1) \right) \end{aligned}$$

which gives, as wanted, in the limit \(\mu \rightarrow \infty \) (using \(\frac{\pi \mu W}{4} = \frac{\pi \mu }{4} + 1 + o(1)\))

$$\begin{aligned} \theta _c(\mu ) \sim C\exp \left( - \frac{\pi }{4} \mu \right) \quad \text{ with } \quad C = e^{c_2 - 1} \approx 0.61385. \end{aligned}$$
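The constants \(c_1\), \(c_2\) and C can be checked with a few lines of numerics (Python; the Simpson rule and the truncation of the tail integral at \(u = 40\), where the integrand is of size \(e^{-80}\), are arbitrary but safe choices).

```python
import math

def simpson(f, a, b, m=2000):
    # composite Simpson rule with 2*m subintervals
    step = (b - a) / (2 * m)
    odd = sum(f(a + (2 * j - 1) * step) for j in range(1, m + 1))
    even = sum(f(a + 2 * j * step) for j in range(1, m))
    return (f(a) + f(b) + 4.0 * odd + 2.0 * even) * step / 3.0

# c1 = int_0^1 tanh(u)/u du + int_1^infty (tanh(u) - 1)/u du
c1 = simpson(lambda u: math.tanh(u) / u if u > 0.0 else 1.0, 0.0, 1.0) \
     + simpson(lambda u: (math.tanh(u) - 1.0) / u, 1.0, 40.0)
c2 = c1 + math.log(2.0) - 1.0
C = math.exp(c2 - 1.0)   # theta_c(mu) ~ C * exp(-pi*mu/4)
print("c1 =", c1, " c2 =", c2, " C =", C)
```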

3.3 Proof of Theorem 1.5: Study of the Phase Transition

In the previous section, we found the critical temperature. We now study the bifurcation of \(\delta \) around this temperature. The critical points of \(g_\theta \) are given by the Euler–Lagrange equations

$$\begin{aligned} {\left\{ \begin{array}{ll} \mu \left( W - 1 \right) &{} = \displaystyle \frac{W}{ \pi \theta } \int _0^{2 \pi } h' \left( \dfrac{ W^2 \cos ^2 (s) + \delta ^2 \sin ^2 (s)}{\theta ^2} \right) \cdot \cos ^2 (s) {\textrm{d}}s \\ \mu W &{} = \displaystyle \frac{ W }{\pi \theta } \int _0^{2 \pi } h' \left( \dfrac{ W^2 \cos ^2 (s) + \delta ^2 \sin ^2 (s)}{\theta ^2} \right) \cdot \sin ^2 (s) {\textrm{d}}s. \end{array}\right. } \end{aligned}$$

Recall that one can remove the 1-periodic minimizers by factoring out \(\delta \) in the second equation. This gives a set of equations involving \(\delta \) through the variable \(\Delta := \delta ^2\) only. In what follows, we fix \(\mu \) and set (we multiply the equations by \(\theta /W\) in order to simplify the computations afterward)

$$\begin{aligned} {{\mathcal {F}}}\left( \theta ; (W, \Delta ) \right) := {\left\{ \begin{array}{ll} \displaystyle \mu \theta \left( 1 - \frac{1}{W} \right) - \frac{1}{ \pi } \int _0^{2 \pi } h' \left( \dfrac{ W^2 \cos ^2 (s) + \Delta \sin ^2 (s)}{\theta ^2} \right) \cdot \cos ^2 (s) {\textrm{d}}s \\ \displaystyle \mu \theta - \frac{ 1}{\pi } \int _0^{2 \pi } h' \left( \dfrac{ W^2 \cos ^2 (s) + \Delta \sin ^2 (s)}{\theta ^2} \right) \cdot \sin ^2 (s) {\textrm{d}}s. \end{array}\right. } \end{aligned}$$

Recall that \({{\mathcal {F}}}\left( \theta _c; (W_*, 0) \right) = (0, 0)\), where \(W_*\) is the optimal W at the critical temperature. If \({{\mathcal {F}}}\left( \theta ; (W, \Delta ) \right) = (0, 0)\) with \(\Delta > 0\), the configurations \((W, \pm \sqrt{\Delta })\) are minimizers of \(g_\theta \). If \({{\mathcal {F}}}\left( \theta ; (W, \Delta ) \right) = (0, 0)\) with \(\Delta < 0\), it does not correspond to a physical solution.

We want to apply the implicit function theorem for \({{\mathcal {F}}}\) at the point \((\theta _c; (W_*, 0))\). In order to do so, we first record all derivatives. We denote by \({{\mathcal {F}}}= ({{\mathcal {F}}}_1, {{\mathcal {F}}}_2)\) the components of \({{\mathcal {F}}}\). The derivatives of \({{\mathcal {F}}}\), evaluated at \(\Delta = 0\), \(\theta = \theta _c\) and \(W = W_*\) are given by

$$\begin{aligned}{} & {} {\left\{ \begin{array}{ll} \partial _W {{\mathcal {F}}}_1 &{} = \dfrac{\mu \theta _c}{W_*^2} - \dfrac{2 W_*}{\theta _c^2} A \\ \partial _W {{\mathcal {F}}}_2 &{} = - \dfrac{2 W_*}{\theta _c^2} B \\ \end{array}\right. }, \quad {\left\{ \begin{array}{ll} \partial _\Delta {{\mathcal {F}}}_1 &{} = - \dfrac{1}{\theta _c^2} B \\ \partial _\Delta {{\mathcal {F}}}_2 &{} = - \dfrac{1}{\theta _c^2} C \\ \end{array}\right. }, \quad \\{} & {} \quad \text {and} \quad {\left\{ \begin{array}{ll} \partial _\theta {{\mathcal {F}}}_1 &{} = \mu \left( 1 - \frac{1}{W_*} \right) + 2 \dfrac{W_*^2}{\theta _c^3} A \\ \partial _\theta {{\mathcal {F}}}_2 &{} = \mu + 2 \dfrac{W_*^2}{\theta _c^3} B \\ \end{array}\right. }. \end{aligned}$$

where we set (we split the integral in four parts of size \(\pi /2\))

$$\begin{aligned} {\left\{ \begin{array}{ll} A:= \displaystyle \frac{4}{\pi } \int _0^{\pi /2} h'' \left( \dfrac{ W_*^2 \cos ^2 (s) }{\theta _c^2} \right) \cdot \cos ^4(s) {\textrm{d}}s \\ B:= \displaystyle \frac{4}{\pi } \int _0^{\pi /2} h'' \left( \dfrac{ W_*^2 \cos ^2 (s) }{\theta _c^2} \right) \cdot \sin ^2 (s) \cos ^2(s) {\textrm{d}}s \\ C:= \displaystyle \frac{4}{\pi } \int _0^{\pi /2} h'' \left( \dfrac{ W_*^2 \cos ^2 (s) }{\theta _c^2} \right) \cdot \sin ^4(s) {\textrm{d}}s. \end{array}\right. } \end{aligned}$$

Since h is concave, A, B and C are negative. In addition, by the Cauchy–Schwarz inequality, we have

$$\begin{aligned} B^2 \le A C. \end{aligned}$$
(21)

The Jacobian \(J:= \left( \partial _{(W, \Delta )} {{\mathcal {F}}}\right) (\theta _c; (W_*, 0))\) is of the form

$$\begin{aligned} J = \begin{pmatrix} \frac{\mu \theta _c}{W_*^2} - \frac{2 W_*}{ \theta _c^2} A &{} - \frac{1}{\theta _c^2} B \\ - \frac{ 2 W_* }{\theta _c^2} B &{} - \frac{1}{ \theta _c^2} C \end{pmatrix}, \quad \text {and} \quad \det J = - \dfrac{\mu }{W_*^2 \theta _c} C + \dfrac{2W_*}{\theta _c^4} (AC - B^2). \end{aligned}$$

Since \(C < 0\) and \(B^2 - AC \le 0\), we have \(\det J > 0\), so J is invertible. We can therefore apply the implicit function theorem to \({{\mathcal {F}}}\) at \((\theta _c, (W_*, 0))\). There is a function \(\theta \mapsto (W(\theta ), \Delta (\theta )) \) so that, locally around \((\theta _c, (W_*, 0))\), we have

$$\begin{aligned} {{\mathcal {F}}}(\theta , (W, \Delta )) = 0, \quad \text {iff} \quad (W, \Delta ) = (W(\theta ), \Delta (\theta )). \end{aligned}$$

The derivatives \((W'(\theta ), \Delta '(\theta ))\) are given by

$$\begin{aligned} \begin{pmatrix} W'(\theta _c) \\ \Delta '(\theta _c) \end{pmatrix} = - J^{-1} \begin{pmatrix} \partial _\theta {{\mathcal {F}}}_1 \\ \partial _\theta {{\mathcal {F}}}_2 \end{pmatrix} = \dfrac{-1}{\det J} \begin{pmatrix} - \frac{1}{ \theta _c^2} C &{} \frac{1}{\theta _c^2} B \\ \frac{ 2 W_* }{\theta _c^2} B &{} \frac{\mu \theta _c}{W_*^2} - \frac{2 W_*}{ \theta _c^2} A \end{pmatrix} \begin{pmatrix} \mu \left( 1 - \frac{1}{W_*}\right) + 2 \frac{W_*^2}{\theta _c^3} A \\ \mu + 2 \frac{W_*^2}{\theta _c^3} B \end{pmatrix}. \end{aligned}$$

This gives

$$\begin{aligned} \Delta '(\theta _c) = \dfrac{-1}{\det J} \left( \frac{2 W_* \mu }{\theta _c^2} \right) \left( (B - A) + \dfrac{\mu \theta _c^3}{2W_*^3} \right) . \end{aligned}$$
(22)

We claim that \(B \ge A\) (for the proof see below). This shows that \(\Delta '(\theta _c) < 0\). So, restoring the variable \(\delta ^2\), we have

$$\begin{aligned} \delta ^2(\theta ) \approx - \Delta '(\theta _c) (\theta _c - \theta )_+, \quad \text {and finally}, \quad \boxed {\delta (\theta ) = \sqrt{- \Delta '(\theta _c) } \cdot \sqrt{(\theta _c - \theta )_+} (1 + o(1)).} \end{aligned}$$

It remains to prove that \(B \ge A\). This comes from the fact that \(h''\) is increasing and negative. First, we notice that |A| and |C| are of the form

$$\begin{aligned} | A | = \frac{4}{\pi } \int _0^{\pi /2} f(s) g(s) {\textrm{d}}s, \quad | C | = \frac{4}{\pi } \int _0^{\pi /2} f(s) g(\pi /2 - s) {\textrm{d}}s, \end{aligned}$$

with \(f(s):= \left| h''(W_*^2 \cos ^2(s)/\theta _c^2) \right| \) and \(g(s):= \cos ^4(s)\). The functions f and g are both decreasing on \([0, \frac{\pi }{2}]\). By re-arrangement, we deduce that \(| A | > | C |\). Actually, we have

$$\begin{aligned} |A | - | C | = \frac{4}{\pi } \int _0^{\pi /4} \left( f(s) - f\left( \frac{\pi }{2} - s\right) \right) \left( g(s) - g\left( \frac{\pi }{2} - s\right) \right) {\textrm{d}}s > 0. \end{aligned}$$

Together with the Cauchy–Schwarz inequality (21), this gives \(| B |^2 \le | A | \cdot | C | < | A |^2\). Since A and B are negative, we get \(B > A\), as wanted. This concludes the proof of Theorem 1.5.
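As a final numerical illustration, the quantities A, B, C, \(\det J\) and \(\Delta '(\theta _c)\) can be evaluated by quadrature, reusing the bisection-based inversion of (20) sketched earlier; \(\mu = 1\) is an arbitrary test value. This is a check of the properties used above: all three integrals are negative, \(B^2 \le AC\), and \(\det J > 0\).

```python
import math

M = 2000
NODES = [(j + 0.5) * (math.pi / 2) / M for j in range(M)]
DS = (math.pi / 2) / M

def hpp(t):
    # h''(t) = (sqrt(t) sech^2(sqrt(t)) - tanh(sqrt(t))) / (2 t^{3/2}), h''(0) = -1/3
    u = math.sqrt(t)
    if u < 1e-6:
        return -1.0 / 3.0 + 4.0 * t / 15.0
    return (u / math.cosh(u)**2 - math.tanh(u)) / (2.0 * u**3)

def J(x):   # J(x) from (20), midpoint rule
    return -4.0 / math.pi * DS * sum(
        math.tanh(x * math.cos(s)) * math.cos(2 * s) / math.cos(s) for s in NODES)

mu = 1.0
lo, hi = 0.0, 10.0 * math.exp(math.pi * mu / 4.0)
for _ in range(100):   # bisection for J(x) = mu
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if J(mid) < mu else (lo, mid)
x = 0.5 * (lo + hi)
W = 1.0 + 4.0 / math.pi * DS * sum(
    math.tanh(x * math.cos(s)) * math.cos(s) for s in NODES) / mu
theta = W / x   # so that W_*^2 / theta_c^2 = x^2

def moment(weight):
    return 4.0 / math.pi * DS * sum(
        hpp(x**2 * math.cos(s)**2) * weight(s) for s in NODES)

A = moment(lambda s: math.cos(s)**4)
B = moment(lambda s: math.sin(s)**2 * math.cos(s)**2)
Cc = moment(lambda s: math.sin(s)**4)
detJ = -mu * Cc / (W**2 * theta) + 2.0 * W / theta**4 * (A * Cc - B**2)
dprime = -1.0 / detJ * (2.0 * W * mu / theta**2) * ((B - A) + mu * theta**3 / (2.0 * W**3))
print("A, B, C:", A, B, Cc, " det J:", detJ, " Delta'(theta_c):", dprime)
```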