1 Introduction and the main results

In this paper, we study the existence and concentration of solutions for the following fractional Schrödinger–Poisson system

$$\begin{aligned} \left\{ \begin{array}{ll} \varepsilon ^{2s}(-\Delta )^s u +V(x)u+\phi u=K(x)|u|^{p-2}u,\,\,\text {in}~\mathbb {R}^3,\\ \\ \varepsilon ^{2s}(-\Delta )^s \phi =u^2,\,\,\text {in}~\mathbb {R}^3. \end{array} \right. \end{aligned}$$
(1.1)

Here \(\varepsilon >0\) is a small parameter, \(\frac{3}{4}<s<1\) is a fixed constant, \(4<p<2_s^*\), where \(2_s^*:=\frac{6}{3-2s}\) is the fractional critical exponent in dimension 3, and \((-\Delta )^s\) is the fractional Laplacian of order s, which can be defined via the Fourier transform by \((-\Delta )^su=\mathcal {F}^{-1}(|\xi |^{2s}\mathcal {F}u)\). In (1.1), the first equation is a fractional nonlinear Schrödinger equation in which the potential \(\phi \) satisfies the second equation, a fractional Poisson equation. For this reason, (1.1) is referred to as a fractional nonlinear Schrödinger–Poisson system (also called a Schrödinger–Maxwell system).
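Note that the assumption \(s>\frac{3}{4}\) ensures that the range \(4<p<2_s^*\) is nonempty, since

$$\begin{aligned} 2_s^*=\frac{6}{3-2s}>4\iff 6>4(3-2s)\iff s>\frac{3}{4}. \end{aligned}$$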

In the local case \(s=1\), (1.1) reduces to the following system

$$\begin{aligned} \left\{ \begin{array}{ll} -\varepsilon ^2\Delta u+V(x)u+\phi u=K(x)g(u),\,\,\text {in}\,\mathbb {R}^3,\\ \\ -\varepsilon ^2\Delta \phi = u ^2,\,\,\text {in}\,\mathbb {R}^3, \end{array} \right. \end{aligned}$$
(1.2)

which is called the Hartree–Fock equation for \(\varepsilon =1\) in [30]. A similar system posed on a bounded domain was introduced by Benci in [4] as a model in semiconductor theory. For more physical aspects of (1.2) we refer to [5] and the references therein.

In the past decades, systems like (1.2) have been studied extensively by means of variational tools; see [1, 25, 36, 44, 48] and the references therein for the existence of solutions. The concentration behavior of solutions has also been studied in several papers. In [37], Ruiz and Vaira constructed multibump solutions whose bumps concentrate around a local minimum of the potential V. In [19], by using the Ljusternik–Schnirelmann theory, He proved that the system (1.2) has at least \(cat_{\varLambda _\delta }(\varLambda )\) positive solutions for \(\varepsilon >0\) small. The critical case was considered in [20], where He and Zou proved that system (1.2) possesses a positive ground state solution which concentrates around the global minimum of V. In [23], Ianni and Vaira considered the following system

$$\begin{aligned} \left\{ \begin{array}{ll} -\varepsilon ^2\Delta u +V(x)u+\phi u=f(u),\,\,\text {in}~\mathbb {R}^3,\\ \\ -\Delta \phi =u^2,\,\,\text {in}~\mathbb {R}^3. \end{array} \right. \end{aligned}$$

The authors proved the existence of a single bump solution which concentrates on the critical points of V(x). In [11], D’Aprile and Wei constructed a family of radially symmetric solutions concentrating around a sphere. See [45] for the concentration phenomena for a Schrödinger–Poisson system with competing potentials.

If \(\phi \equiv 0\), (1.1) reduces to a fractional Schrödinger equation of the form

$$\begin{aligned} \varepsilon ^{2s}(-\Delta )^s u+V(x)u=f(x,u),\,\,\,x\in \mathbb {R}^N. \end{aligned}$$
(1.3)

Solutions of Eq. (1.3) correspond to standing wave solutions of the fractional Schrödinger equation of the form

$$\begin{aligned} i\varepsilon \frac{\partial \psi }{\partial t}=\varepsilon ^{2s}(-\Delta )^s\psi +V(x)\psi -f(x,|\psi |),\,\,\,x\in \mathbb {R}^N, \end{aligned}$$

that is, solutions of the form \(\psi (x,t)=e^{-iEt/\varepsilon }u(x)\), where E is a constant and u(x) is a solution of (1.3). The fractional Schrödinger equation is a fundamental equation in fractional quantum mechanics. It was discovered by Laskin [27, 28] as a result of extending the Feynman path integral from Brownian-like to Lévy-like quantum mechanical paths: the path integral over Brownian trajectories leads to the classical Schrödinger equation, while the path integral over Lévy trajectories leads to the fractional Schrödinger equation. In contrast to the case of the classical Laplacian, the usual analysis tools for elliptic PDEs cannot be applied directly to (1.3), since \((-\Delta )^s\) is a nonlocal operator. In [7], Caffarelli and Silvestre developed a powerful extension method which transforms the nonlocal equation (1.3) into a local one posed on a half-space. Recently, in [13], the authors gave a survey of fractional Sobolev spaces and presented some fundamental techniques for fractional Laplacian equations.

Since then, there have been a number of works concerning the existence, multiplicity and concentration of solutions to the nonlinear fractional Schrödinger equation (1.3) via variational methods. See [8, 16, 18, 31, 38, 39] for the existence of solutions. The concentration phenomenon was considered independently in [9, 12] via a Lyapunov–Schmidt reduction argument. After that, the concentration problem was studied in several very recent works. Solutions concentrating around a global minimum of the potential V were constructed in [17]. For concentration around a local minimum of the potential V, see [2, 21] for the subcritical and the critical case, respectively. See also [10] for a similar work with \(s=\frac{1}{2}\) and a nonlocal term. A different type of concentration phenomenon for (1.3) with competing potentials was studied in [29, 39].

To the best of our knowledge, there are only a few results concerning the existence of solutions to (1.1), namely [33, 41, 47]. In [41], Teng adapted the monotonicity trick (see, for example, Jeanjean and Tanaka [24]) to obtain the existence of ground state solutions to

$$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s u +V(x)u+\phi u=\mu |u|^{q-1}u+|u|^{2_s^*-2}u,\,\,\text {in}~\mathbb {R}^3,\\ \\ (-\Delta )^{t} \phi =\alpha u^2,\,\,\text {in}~\mathbb {R}^3, \end{array} \right. \end{aligned}$$

for \(q\in (2,2_s^*-1)\). See [42] for the subcritical case. In [33], the authors considered the following system

$$\begin{aligned} \left\{ \begin{array}{ll} \varepsilon ^{2s}(-\Delta )^s u +V(x)u+\phi u=g(u),\,\,\text {in}~\mathbb {R}^3,\\ \\ \varepsilon ^\theta (-\Delta )^{\frac{\alpha }{2}} \phi =\gamma _\alpha u^2,\,\,\text {in}~\mathbb {R}^3, \end{array} \right. \end{aligned}$$

and adapted some ideas of [3] to establish the multiplicity of solutions for small \(\varepsilon \), where g is subcritical at infinity. A positive solution of a system similar to (1.1) with \(V=0\) was obtained in [47].

It is natural to ask about the asymptotic behavior of solutions of (1.1) as \(\varepsilon \rightarrow 0\). As far as we know, this question has not been considered before, and it presents several difficulties. The first is that there is a competition between the potentials V and K: each tries to attract ground states to its minimum and maximum points, respectively, which makes it difficult to determine the concentration position of the solutions. This kind of problem can be traced back to [43]; see also [14, 15] for a different concentration phenomenon for a Dirac equation and for an elliptic system of Hamiltonian type. The second is that, as mentioned above, the fractional Laplacian \((-\Delta )^s\) is nonlocal, which brings essential differences with respect to elliptic equations involving the classical Laplacian, for instance concerning regularity and the maximum principle.

In this paper, we give an answer to the above question. First, we obtain a positive ground state solution via the Nehari manifold method for each \(\varepsilon >0\) small enough. To study the concentration behavior of these solutions as \(\varepsilon \rightarrow 0\), we establish \(L^\infty \) and decay estimates for them. Finally, we determine a concrete set, defined in terms of the potentials V and K, as the concentration position of these solutions. Roughly speaking, the ground state solutions concentrate at points x where V(x) is small or K(x) is large. In a special case, we show that, as \(\varepsilon \rightarrow 0\), these ground state solutions concentrate around points which are simultaneously minimum points of the potential V and maximum points of the potential K.

Before stating our theorems, we first give some notations. Set

$$\begin{aligned} \begin{aligned}&V_{min}:=\min _{x\in \mathbb {R}^3} V,\ \ \ \mathcal {V}:= \left\{ x\in \mathbb {R}^3:V(x)=V_{min}\right\} ,\ \ \ V_\infty := \liminf _{|x|\rightarrow \infty }V(x),\\&K_{max}:=\max _{x\in \mathbb {R}^3} K,\ \mathcal {K}:= \left\{ x\in \mathbb {R}^3:K(x)=K_{max}\right\} ,\ K_\infty :=\limsup _{|x|\rightarrow \infty }K(x). \end{aligned} \end{aligned}$$

To describe our results, we assume that V and K satisfy the following conditions:

\((A_0)\) :

\(V,K\in L^\infty (\mathbb {R}^3)\) are uniformly continuous and \(V_{\min }>0,\inf K>0\);

either

\((A_1)\) :

\(V_{\min }<V_\infty <+\infty \) and there exists \(x_1\in \mathcal {V}\) such that \(K(x_1)\ge K(x)\) for \(|x|\ge R\) with \(R>0\) sufficiently large;

or

\((A_2)\) :

\(K_{\max }>K_\infty \ge \inf K>0\) and there exists \(x_2\in \mathcal {K}\) such that \(V(x_2)\le V(x)\) for \(|x|\ge R\) with \(R>0\) sufficiently large.

Obviously, if \((A_1)\) holds, we can assume \(K(x_1)=\max \limits _{x\in \mathcal {V}}K(x)\), and set

$$\begin{aligned} \mathcal {H}_1=\left\{ x\in \mathcal {V}:K(x)=K(x_1)\}\cup \{x\notin \mathcal {V}:K(x)>K(x_1)\right\} . \end{aligned}$$

If \((A_2)\) holds, we can assume \(V(x_2)=\min \limits _{x\in \mathcal {K}}V(x)\), and set

$$\begin{aligned} \mathcal {H}_2=\left\{ x\in \mathcal {K}:V(x)=V(x_2)\}\cup \{x\notin \mathcal {K}:V(x)<V(x_2)\right\} . \end{aligned}$$

Clearly, \(\mathcal {H}_1\) and \(\mathcal {H}_2\) are bounded sets. Moreover, if \(\mathcal {V}\cap \mathcal {K}\ne \emptyset \), then \(\mathcal {H}_1=\mathcal {H}_2=\mathcal {V}\cap \mathcal {K}\).
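To verify the last assertion, pick any \(x^*\in \mathcal {V}\cap \mathcal {K}\); since \(x_1\) maximizes K over \(\mathcal {V}\),

$$\begin{aligned} K_{\max }\ge K(x_1)=\max _{x\in \mathcal {V}}K(x)\ge K(x^*)=K_{\max }, \end{aligned}$$

so \(K(x_1)=K_{\max }\). Consequently \(\{x\in \mathcal {V}:K(x)=K(x_1)\}=\mathcal {V}\cap \mathcal {K}\) and \(\{x\notin \mathcal {V}:K(x)>K(x_1)\}=\emptyset \), that is, \(\mathcal {H}_1=\mathcal {V}\cap \mathcal {K}\); the argument for \(\mathcal {H}_2\) is analogous.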

Now we state our main results as follows.

Theorem 1.1

Assume that \((A_0)\) and \((A_1)\) hold, then for all small \(\varepsilon >0\):

  1. (i)

    The system (1.1) has a positive ground state solution \((\omega _\varepsilon ,\phi _{\omega _\varepsilon })\);

  2. (ii)

    \(\omega _\varepsilon \) possesses a global maximum point \(x_\varepsilon \) such that, up to a subsequence, \(x_\varepsilon \rightarrow x_0\) as \(\varepsilon \rightarrow 0\), \(\lim \limits _{\varepsilon \rightarrow 0}dist(x_\varepsilon , \mathcal {H}_1)=0\), and \(v_\varepsilon (x):=\omega _\varepsilon (\varepsilon x+x_\varepsilon )\) converges in \(H^s(\mathbb {R}^3)\) to a positive ground state solution of

    $$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s u +V(x_0)u+\phi u=K(x_0)|u|^{p-2}u,\,\,\text {in}~\mathbb {R}^3,\\ \\ (-\Delta )^s \phi =u^2,\,\,\text {in}~\mathbb {R}^3. \end{array} \right. \end{aligned}$$

    In particular if \(\mathcal {V}\cap \mathcal {K}\ne \emptyset \), then \(\lim \limits _{\varepsilon \rightarrow 0}dist(x_\varepsilon ,\mathcal {V}\cap \mathcal {K})=0\), and up to a subsequence, \(v_\varepsilon \) converges in \(H^s(\mathbb {R}^3)\) to a positive ground state solution of

    $$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s u +V_{\min }u+\phi u=K_{\max }|u|^{p-2}u,\,\,\text {in}~\mathbb {R}^3,\\ \\ (-\Delta )^s \phi =u^2,\,\,\text {in}~\mathbb {R}^3. \end{array} \right. \end{aligned}$$
  3. (iii)

    There exists a constant \(C>0\) such that

    $$\begin{aligned} \omega _\varepsilon (x)\le \frac{C \varepsilon ^{3+2s}}{ \varepsilon ^{3+2s}+|x- x_\varepsilon |^{3+2s}},\ \forall x\in \mathbb {R}^3. \end{aligned}$$

Theorem 1.2

Assume that \((A_0)\) and \((A_2)\) hold. Then all the conclusions of Theorem 1.1 remain true with \(\mathcal {H}_1\) replaced by \(\mathcal {H}_2\).

In the sequel, we only give the detailed proof of Theorem 1.1, since the argument for Theorem 1.2 is similar.

This paper is organized as follows. In Sect. 2, we provide some preliminary Lemmas which will be used later. In Sect. 3, we prove the existence of positive ground state solutions. In Sect. 4, we study the concentration phenomenon and the convergence of ground state solutions. In Sect. 5, we obtain the decay estimate of the solutions, which is of polynomial rather than exponential form. Finally, we give the Proof of Theorem 1.1.

2 Preliminary results

Throughout this paper, we denote by \(\Vert {\cdot }\Vert _p\) the usual norm of the space \(L^p(\mathbb {R}^3)\), \(1\le p<\infty \), by \(\Vert {\cdot }\Vert _\infty \) the norm of the space \(L^\infty (\mathbb {R}^3)\), and by C or \(C_i\,(i=1,2,\ldots )\) positive constants which may change from line to line.

First, we collect some preliminary results for the fractional Laplacian. We define the homogeneous fractional Sobolev space \(\mathcal {D}^{s,2}(\mathbb {R}^3)\) as the completion of \(\mathcal {C}_0^\infty (\mathbb {R}^3)\) with respect to the norm

$$\begin{aligned} \Vert u\Vert _{\mathcal {D}^{s,2}}:=\left( \iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{|u(x)-u(y)|^2}{|x-y|^{3+2s}}dxdy\right) ^{\frac{1}{2}}=[u]_{H^s}. \end{aligned}$$

We denote by \(H^s(\mathbb {R}^3)\) the standard fractional Sobolev space, defined as the set of \(u\in \mathcal {D}^{s,2}(\mathbb {R}^3)\) satisfying \(u\in L^2(\mathbb {R}^3)\) with the norm

$$\begin{aligned} \Vert u\Vert _{H^s}^2=\iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{|u(x)-u(y)|^2}{|x-y|^{3+2s}}dxdy+\int _{\mathbb {R}^3}u^2dx=[u]_{H^s}^2+\Vert u\Vert _2^2. \end{aligned}$$

Also, in light of [13, Proposition 3.4 and Proposition 3.6], we have

$$\begin{aligned} \left\| (-\Delta )^{\frac{s}{2}}u\right\| _2^2=\int _{\mathbb {R}^3}|\xi |^{2s}|\hat{u}(\xi )|^2d\xi =\frac{1}{2}C(s)\iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{|u(x)-u(y)|^2}{|x-y|^{3+2s}}dxdy, \end{aligned}$$

where \(\hat{u}\) stands for the Fourier transform of u and

$$\begin{aligned} C(s)=\bigg (\int _{\mathbb {R}^3}\frac{1-\cos \xi _1}{|\xi |^{3+2s}}d\xi \bigg )^{-1},~\xi =(\xi _1,\xi _2,\xi _3). \end{aligned}$$

As a consequence, the norms on \(H^s(\mathbb {R}^3)\) defined below

$$\begin{aligned} \begin{aligned}&u\mapsto \bigg (\int _{\mathbb {R}^3}u^2dx+\iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{|u(x)-u(y)|^2}{|x-y|^{3+2s}}dxdy\bigg )^{\frac{1}{2}}\\&u\mapsto \bigg (\int _{\mathbb {R}^3}u^2dx+\int _{\mathbb {R}^3}|\xi |^{2s}|\hat{u}(\xi )|^2d\xi \bigg )^{\frac{1}{2}}\\&u\mapsto \bigg (\int _{\mathbb {R}^3}u^2dx+\Vert (-\Delta )^{\frac{s}{2}}u\Vert _2^2\bigg )^{\frac{1}{2}} \end{aligned} \end{aligned}$$

are all equivalent. Moreover, \((-\Delta )^su\) can be equivalently represented as (see [13, Lemma 3.2])

$$\begin{aligned} (-\Delta )^su(x)=-\frac{C(s)}{2}\int _{\mathbb {R}^3}\frac{u(x+y)+u(x-y)-2u(x)}{|y|^{3+2s}}dy,\ \forall \,x\in \mathbb {R}^3. \end{aligned}$$
(2.1)

We denote \(\Vert {\cdot }\Vert _{H^s}\) by \(\Vert {\cdot }\Vert \) in the sequel for convenience.

By the Lax–Milgram theorem, for every \(u\in H^s(\mathbb {R}^3)\) there exists a unique \(\phi _u^s\in \mathcal {D}^{s,2}(\mathbb {R}^3)\) such that \((-\Delta )^s\phi _u^s=u^2\), and \(\phi _u^s\) can be expressed by

$$\begin{aligned} \phi _u^s(x)=C_s\int _{\mathbb {R}^3}\frac{ u^2(y)}{|x-y|^{3-2s}}dy,\quad \forall \,x\in \mathbb {R}^3, \end{aligned}$$

which is called the s-Riesz potential (see [26] or [7]), where

$$\begin{aligned} C_s=\frac{1}{\pi ^{\frac{3}{2}}}\frac{\varGamma \left( \frac{3}{2}-s\right) }{2^{2s}\varGamma (s)}. \end{aligned}$$
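Combining this representation with Fubini's theorem, the nonlocal term appearing throughout the paper can be written as a symmetric double integral:

$$\begin{aligned} \int _{\mathbb {R}^3}\phi _u^su^2dx=C_s\iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{u^2(x)u^2(y)}{|x-y|^{3-2s}}dxdy. \end{aligned}$$

In particular, \(\int _{\mathbb {R}^3}\phi _u^su^2dx\ge 0\), in accordance with Lemma 2.1(i) below.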

Making the change of variable \(x\mapsto \varepsilon x\), we can rewrite the system (1.1) as the following equivalent system

$$\begin{aligned} \left\{ \begin{array}{ll}(-\Delta )^s u +V(\varepsilon x)u+\phi u=K(\varepsilon x)|u|^{p-2}u,\,\,\text {in}~\mathbb {R}^3,\\ \\ (-\Delta )^s \phi =u^2,\,\,\text {in}~\mathbb {R}^3. \end{array} \right. \end{aligned}$$
(2.2)

If u is a solution of the system (2.2), then \(\omega (x):=u(\frac{x}{\varepsilon })\) is a solution of the system (1.1); a short verification is given below. Thus, to study the system (1.1), it suffices to study the system (2.2).
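Indeed, the Fourier definition of \((-\Delta )^s\) yields the scaling identity \((-\Delta )^s[u(\cdot /\varepsilon )](x)=\varepsilon ^{-2s}\big ((-\Delta )^su\big )(\frac{x}{\varepsilon })\). Hence, setting \(\phi _\omega (x):=\phi (\frac{x}{\varepsilon })\), where \(\phi \) is the solution of the second equation of (2.2), we get

$$\begin{aligned} \varepsilon ^{2s}(-\Delta )^s\omega (x)+V(x)\omega (x)+\phi _\omega (x)\omega (x) =\Big [(-\Delta )^s u+V(\varepsilon y)u+\phi u\Big ]\Big |_{y=\frac{x}{\varepsilon }} =K(x)|\omega (x)|^{p-2}\omega (x), \end{aligned}$$

and likewise \(\varepsilon ^{2s}(-\Delta )^s\phi _\omega =\omega ^2\), so that \((\omega ,\phi _\omega )\) indeed solves (1.1). In view of the presence of the potential V(x), we introduce the subspace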

$$\begin{aligned} H_\varepsilon =\bigg \{u\in H^s\left( \mathbb {R}^3\right) :\int _{\mathbb {R}^3}V(\varepsilon x)u^2dx<\infty \bigg \}, \end{aligned}$$

which is a Hilbert space equipped with the inner product

$$\begin{aligned} (u,v)_\varepsilon =\int _{\mathbb {R}^3}(-\Delta )^{\frac{s}{2}} u(-\Delta )^{\frac{s}{2}} vdx+\int _{\mathbb {R}^3}V(\varepsilon x)uvdx, \end{aligned}$$

and the equivalent norm

$$\begin{aligned} \Vert u\Vert _\varepsilon ^2=(u,u)_\varepsilon =\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}} u|^2dx+\int _{\mathbb {R}^3}V(\varepsilon x)u^2dx. \end{aligned}$$

Moreover, it can be proved that \((u,\phi _u^s)\in H_\varepsilon \times \mathcal {D}^{s,2}(\mathbb {R}^3)\) is a solution of (2.2) if and only if \(u\in H_\varepsilon \) is a critical point of the functional \(\mathcal {I}_\varepsilon :H_\varepsilon \rightarrow \mathbb {R}\) defined as

$$\begin{aligned} \mathcal {I}_\varepsilon (u)=\frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\frac{1}{2}\int _{\mathbb {R}^3}V(\varepsilon x)u^2dx+\frac{1}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx-\frac{1}{p}\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx, \end{aligned}$$
(2.3)

where \(\phi _u^s\) is the unique solution of the second equation in (2.2). Note that \(2\le \frac{12}{3+2s}\le 2_s^*\) when \(s\ge \frac{1}{2}\); hence, by the Hölder inequality and the Sobolev inequality (see Lemma 2.3 below), we have

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^3}\phi _u^su^2dx&\le \left( \int _{\mathbb {R}^3}|u|^{\frac{12}{3+2s}}dx\right) ^{\frac{3+2s}{6}} \left( \int _{\mathbb {R}^3}|\phi _u^s|^{2_s^*}dx\right) ^{\frac{1}{2_s^*}}\\&\le C\left( \int _{\mathbb {R}^3}|u|^{\frac{12}{3+2s}}dx \right) ^{\frac{3+2s}{6}}\Vert \phi _u^s\Vert _{\mathcal {D}^{s,2}}\\&\le C \Vert u\Vert ^2\Vert \phi _u^s\Vert _{\mathcal {D}^{s,2}}<\infty . \end{aligned} \end{aligned}$$
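In fact, testing the fractional Poisson equation \((-\Delta )^s\phi _u^s=u^2\) with \(\phi _u^s\) itself gives \(\Vert (-\Delta )^{\frac{s}{2}}\phi _u^s\Vert _2^2=\int _{\mathbb {R}^3}\phi _u^su^2dx\), which is comparable to \(\Vert \phi _u^s\Vert _{\mathcal {D}^{s,2}}^2\); hence the estimate above also yields

$$\begin{aligned} \Vert \phi _u^s\Vert _{\mathcal {D}^{s,2}}\le C\Vert u\Vert ^2\quad \text {and}\quad \int _{\mathbb {R}^3}\phi _u^su^2dx\le C\Vert u\Vert ^4, \end{aligned}$$

consistently with Lemma 2.1(iii) below.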

Therefore, the functional \(\mathcal {I}_\varepsilon \) is well-defined for every \(u\in H_\varepsilon \) and belongs to \(C^1(H_\varepsilon ,\mathbb {R})\). Moreover, for any \(u,v\in H_\varepsilon \), we have

$$\begin{aligned} \begin{aligned} \langle \mathcal {I}_\varepsilon ^\prime (u),v\rangle =&\int _{\mathbb {R}^3}(-\Delta )^{\frac{s}{2}}u(-\Delta )^{\frac{s}{2}}vdx+\int _{\mathbb {R}^3}V(\varepsilon x)uvdx\\&+\int _{\mathbb {R}^3}\phi _u^suvdx-\int _{\mathbb {R}^3}K(\varepsilon x)|u|^{p-2}uvdx. \end{aligned} \end{aligned}$$
(2.4)

The properties of the function \(\phi _u^s\) are given in the following Lemma (see [41, Lemma 2.3]).

Lemma 2.1

For any \(u\in H^s(\mathbb {R}^3)\) and \(s\in [\frac{1}{2},1)\), we have

  1. (i)

    \(\phi _u^s\ge 0\);

  2. (ii)

    \(\phi _u^s:H^s(\mathbb {R}^3)\rightarrow \mathcal {D}^{s,2}(\mathbb {R}^3)\) is continuous and maps bounded sets into bounded sets;

  3. (iii)

    \(\int _{\mathbb {R}^3}\phi _u^s u^2dx\le C\Vert u\Vert _{\frac{12}{3+2s}}^4\le C\Vert u\Vert ^4\);

  4. (iv)

    If \(u_n\rightharpoonup u\) in \(H^s(\mathbb {R}^3)\), then \(\phi _{u_n}^s \rightharpoonup \phi _u^s\) in \(\mathcal {D}^{s,2}(\mathbb {R}^3)\);

  5. (v)

      If \(u_n\rightarrow u\) in \(H^s(\mathbb {R}^3)\), then \(\phi _{u_n}^s \rightarrow \phi _u^s\) in \(\mathcal {D}^{s,2}(\mathbb {R}^3)\) and \(\int _{\mathbb {R}^3}\phi _{u_n}^su_n^2dx\rightarrow \int _{\mathbb {R}^3}\phi _u^su^2dx\).

Define \(N:H^s(\mathbb {R}^3)\rightarrow \mathbb {R}\) by

$$\begin{aligned} N(u)=\int _{\mathbb {R}^3}\phi _u^su^2dx. \end{aligned}$$
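Since \(\phi _u^s\) depends quadratically on u and the Riesz kernel is symmetric, the Gâteaux derivative of N is given, for \(u,v\in H^s(\mathbb {R}^3)\), by

$$\begin{aligned} \langle N^\prime (u),v\rangle =4\int _{\mathbb {R}^3}\phi _u^suvdx, \end{aligned}$$

which explains why the term \(\frac{1}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx\) in (2.3) contributes exactly \(\int _{\mathbb {R}^3}\phi _u^suvdx\) to (2.4).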

The next Lemma shows that the functionals N and \(N^\prime \) possess a BL-splitting property, which is similar to the well-known Brezis–Lieb Lemma ([6]).

Lemma 2.2

([41, Lemma 2.4]) Assume that \(s>\frac{3}{4}\). Let \(u_n\rightharpoonup u\) in \(H^s(\mathbb {R}^3)\) and \(u_n\rightarrow u\) a.e. in \(\mathbb {R}^3\). Then

  1. (i)

    \(N(u_n-u)=N(u_n)-N(u)+o(1)\);

  2. (ii)

    \(N^\prime (u_n-u)=N^\prime (u_n)-N^\prime (u)+o(1)\), in \((H^s(\mathbb {R}^3))^*\).

The following embedding results for fractional Sobolev space can be found in [13].

Lemma 2.3

There exists a constant C, depending only on s, such that

$$\begin{aligned} \Vert u\Vert _{2_s^*}^2\le C \iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{|u(x)-u(y)|^2}{|x-y|^{3+2s}}dxdy, \end{aligned}$$

for every \(u\in H^s(\mathbb {R}^3)\). Moreover, \(H^s(\mathbb {R}^3)\) is continuously embedded into \(L^r(\mathbb {R}^3)\) for any \(r\in [2,2_s^*]\) and compactly embedded into \(L_{loc}^r(\mathbb {R}^3)\) for any \(r\in [1,2_s^*)\).

The following vanishing Lemma is a version of the concentration-compactness principle of P. L. Lions; see [22, Lemma 3.6], [18] and [38, Lemma 2.4].

Lemma 2.4

If \(\{u_n\}\) is bounded in \(H^s(\mathbb {R}^3)\) and for some \(R>0\) and \(2\le r<2_s^*\) we have

$$\begin{aligned} \sup \limits _{x\in \mathbb {R}^3}\int _{B_R(x)}|u_n|^rdx\rightarrow 0\ \text {as}\ n\rightarrow \infty , \end{aligned}$$

then \(u_n\rightarrow 0\) in \(L^t(\mathbb {R}^3)\) for any \(2<t<2_s^*\).

The following Lemma implies that the functional \(\mathcal {I}_\varepsilon \) possesses the Mountain Pass structure (see [34] or [46]).

Lemma 2.5

The functional \(\mathcal {I}_\varepsilon \) possesses the following properties

  1. (i)

    there exist \(\alpha ,\rho >0\), such that \(\mathcal {I}_\varepsilon (u)\ge \alpha \) if \(\Vert u\Vert _\varepsilon =\rho \);

  2. (ii)

    there exists an \(e\in H_\varepsilon \) with \(\Vert e\Vert _\varepsilon >\rho \) such that \(\mathcal {I}_\varepsilon (e)<0\).

Proof

  1. (i)

    For any \(u\in H_\varepsilon {\setminus } \{0\}\), by Lemma 2.1(i) and the Sobolev inequality, we have

    $$\begin{aligned} \begin{aligned} \mathcal {I}_\varepsilon (u)&=\frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\frac{1}{2}\int _{\mathbb {R}^3}V(\varepsilon x) u^2dx+\frac{1}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx\\&\quad \ -\frac{1}{p}\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx\\&\ge \frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\frac{1}{2}\int _{\mathbb {R}^3}V(\varepsilon x) u^2dx-\frac{1}{p}K_{\max }\int _{\mathbb {R}^3}|u|^pdx\\&\ge \frac{1}{2}\Vert u\Vert _\varepsilon ^2-C\Vert u\Vert _\varepsilon ^p. \end{aligned} \end{aligned}$$

Since \(p>4\), we can choose some \(\rho >0\) and \(\alpha >0\) such that

    $$\begin{aligned} \mathcal {I}_\varepsilon (u)\ge \alpha \,\,\text {with}\,\, \Vert u\Vert _\varepsilon =\rho . \end{aligned}$$
  2. (ii)

    For any \(u\in H_\varepsilon {\setminus } \{0\}\), we have

    $$\begin{aligned} \begin{aligned} \mathcal {I}_\varepsilon (tu)&=\frac{t^2}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+ \frac{t^2}{2} \int _{\mathbb {R}^3}V(\varepsilon x) u^2dx\\&\quad +\frac{ t^4}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx-\frac{t^p}{p}\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx\\&\le \frac{t^2}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\frac{t^2}{2}\int _{\mathbb {R}^3}V(\varepsilon x) u^2dx+\frac{t^4}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx\\&\quad - \frac{t^p}{p}\inf K\int _{\mathbb {R}^3}|u|^pdx\\&\rightarrow -\infty \,\,as\,\, t\rightarrow \infty . \end{aligned} \end{aligned}$$

    Thus, we can choose \(e=t^*u\) for some \(t^*>0\) large enough such that (ii) holds.

\(\square \)

Lemma 2.6

Let \(\{u_n\}\) be a \((PS)_c\) sequence for \(\mathcal {I}_\varepsilon \). Then \(\{u_n\}\) is bounded in \(H_\varepsilon \).

Proof

Let \(\{u_n\}\subset H_\varepsilon \) be a \((PS)_c\) sequence for \(\mathcal {I}_\varepsilon \), that is

$$\begin{aligned} \mathcal {I}_\varepsilon (u_n)\rightarrow c~~ \text {and}~~ \mathcal {I}_\varepsilon ^\prime (u_n)\rightarrow 0\quad \text {as}~~n\rightarrow +\infty . \end{aligned}$$

Therefore, we have

$$\begin{aligned} \begin{aligned} c+1+\Vert u_n\Vert _\varepsilon&\ge \mathcal {I}_\varepsilon (u_n)-\frac{1}{4}\langle \mathcal {I}_\varepsilon ^\prime (u_n),u_n\rangle \\&=\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u_n|^2dx+\frac{1}{4}\int _{\mathbb {R}^3} V(\varepsilon x)u_n^2dx\\ {}&\quad \ +\left( \frac{1}{4}-\frac{1}{p}\right) \int _{\mathbb {R}^3}K(\varepsilon x)|u_n|^pdx\\&\ge \frac{1}{4}\Vert u_n\Vert _\varepsilon ^2, \end{aligned} \end{aligned}$$

for n large enough, which implies that \(\{u_n\}\) is bounded in \(H_\varepsilon \). \(\square \)

To characterize the least energy, we define the Nehari manifold by

$$\begin{aligned} \mathcal {N}_\varepsilon =\left\{ u\in H_\varepsilon {\setminus } \{0\}:\langle \mathcal {I}_\varepsilon ^\prime (u),u\rangle =0\right\} . \end{aligned}$$

Thus, for any \(u\in \mathcal {N}_\varepsilon \), we have that

$$\begin{aligned} \int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\int _{\mathbb {R}^3}V(\varepsilon x)u^2dx+ \int _{\mathbb {R}^3}\phi _u^su^2dx=\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx. \end{aligned}$$
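Combining this constraint with the definition (2.3) of \(\mathcal {I}_\varepsilon \), the energy on \(\mathcal {N}_\varepsilon \) reduces to

$$\begin{aligned} \mathcal {I}_\varepsilon (u)=\frac{1}{4}\Vert u\Vert _\varepsilon ^2+\left( \frac{1}{4}-\frac{1}{p}\right) \int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx,\quad u\in \mathcal {N}_\varepsilon , \end{aligned}$$

an expression of the same type as the one used in the proof of Lemma 2.6, which will reappear repeatedly in the sequel.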

Lemma 2.7

For any \(u\in H_\varepsilon {\setminus } \{0\}\), we have

  1. (i)

    There exists a unique \(t_\varepsilon =t_\varepsilon (u)>0\) such that \(t_\varepsilon u\in \mathcal {N_\varepsilon }\). Moreover, \(\mathcal {I}_\varepsilon (t_\varepsilon u)=\max \limits _{t\ge 0} \mathcal {I}_\varepsilon (tu)\).

  2. (ii)

    There exist \(T_1>T_2>0\) independent of \(\varepsilon >0\) such that \(T_2\le t_\varepsilon \le T_1\).

Proof

  1. (i)

    For \(t>0\), let

    $$\begin{aligned} \begin{aligned} g(t)&=\mathcal {I}_\varepsilon (tu)=\frac{ t^2}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\frac{t^2}{2}\int _{\mathbb {R}^3}V(\varepsilon x)u^2dx\\&\qquad +\frac{ t^4}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx-\frac{t^p}{p}\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx. \end{aligned} \end{aligned}$$

    Then we have

$$\begin{aligned} g(t)\ge \frac{t^2}{2}\Vert u\Vert _\varepsilon ^2-\frac{K_{\max }t^p}{p}\int _{\mathbb {R}^3}|u|^pdx \ge \frac{t^2}{2}\Vert u\Vert _\varepsilon ^2-Ct^p\Vert u\Vert _\varepsilon ^p. \end{aligned}$$

    Since \(4<p<2_s^*\), \(g(t)>0\) for small \(t>0\). Moreover, by Lemma 2.1(iii), we get

    $$\begin{aligned} g(t)\le \frac{t^2}{2}\Vert u\Vert _\varepsilon ^2+Ct^4\Vert u\Vert _\varepsilon ^4-\frac{t^p}{p}\int _{\mathbb {R}^3}|u|^pdx. \end{aligned}$$

Hence \(g(t)\rightarrow -\infty \) as \(t\rightarrow \infty \), and g attains a positive maximum at some \(t_\varepsilon =t_\varepsilon (u)>0\), so that \(g^\prime (t_\varepsilon )=0\) and \(t_\varepsilon u\in \mathcal {N}_\varepsilon \). The condition \(g^\prime (t)=0\) is equivalent to

    $$\begin{aligned} \frac{\Vert u\Vert _\varepsilon ^2}{t^2}+\int _{\mathbb {R}^3}\phi _u^su^2dx=t^{p-4}\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx. \end{aligned}$$
    (2.5)

    Suppose that there exist \(t_\varepsilon ^\prime>t_\varepsilon >0\) such that \(t_\varepsilon ^\prime u, t_\varepsilon u\in \mathcal {N}_\varepsilon \). It follows from (2.5) that

$$\begin{aligned} \left( \frac{1}{{t_\varepsilon ^\prime }^2}-\frac{1}{t_\varepsilon ^2} \right) \Vert u\Vert _\varepsilon ^2=\left( {t_\varepsilon ^\prime }^{p-4}- t_\varepsilon ^{p-4}\right) \int _{\mathbb {R}^3}K\left( \varepsilon x\right) |u|^pdx, \end{aligned}$$

which is impossible, since for \(t_\varepsilon ^\prime>t_\varepsilon >0\) and \(p>4\) the left-hand side is negative while the right-hand side is positive.

  2. (ii)

    By \(t_\varepsilon u\in \mathcal {N}_\varepsilon \) and Lemma 2.1(iii), we have

    $$\begin{aligned} C_1 t_\varepsilon ^2\Vert u\Vert ^2+C_2 t_\varepsilon ^4\Vert u\Vert ^4\ge & {} t_\varepsilon ^2\Vert u\Vert _\varepsilon ^2+ t_\varepsilon ^4\int _{\mathbb {R}^3}\phi _u^su^2dx=t_\varepsilon ^p\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx\\\ge & {} C_3t_\varepsilon ^p\int _{\mathbb {R}^3}|u|^pdx. \end{aligned}$$

    Thus, there exists a \(T_1>0\) independent of \(\varepsilon \) such that \(t_\varepsilon \le T_1\).

On the other hand, using \(t_\varepsilon u\in \mathcal {N}_\varepsilon \) again and Lemma 2.1(i), we have

$$\begin{aligned} t_\varepsilon ^2\Vert u\Vert ^2\le t_\varepsilon ^p\int _{\mathbb {R}^3}K(\varepsilon x)|u|^pdx, \end{aligned}$$

which yields that there exists a \(T_2>0\) independent of \(\varepsilon \) such that \(t_\varepsilon \ge T_2\). \(\square \)

In order to obtain a ground state solution, we need a characterization of the least energy. Following [35], we define

$$\begin{aligned} c_\varepsilon =\inf \limits _{\gamma \in \varGamma }\max \limits _{t\in [0,1]}\mathcal {I}_\varepsilon {(\gamma (t))},~c_\varepsilon ^*=\inf \limits _{u\in \mathcal { N}_\varepsilon }\mathcal {I}_\varepsilon (u),~c_\varepsilon ^{**}=\inf \limits _{u\in H_\varepsilon {\setminus }\{0\}}\max \limits _{t\ge 0}\mathcal {I}_\varepsilon (tu), \end{aligned}$$

where \(\varGamma =\{\gamma \in C([0,1],H_\varepsilon ):\gamma (0)=0,\,\mathcal {I}_\varepsilon (\gamma (1))\le 0,\,\gamma (1)\ne 0\}\).

By a standard argument (see [35, 46]), we have

Lemma 2.8

\(c_\varepsilon =c_\varepsilon ^*=c_\varepsilon ^{**}>0\).

For any \(a,b>0\), consider the autonomous problem

$$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s u +au+\phi u=b|u|^{p-2}u,\,\,\text {in}~\mathbb {R}^3,\\ \\ (-\Delta )^s \phi =u^2,\,\,\text {in}~\mathbb {R}^3, \end{array} \right. \end{aligned}$$
(2.6)

and the corresponding energy functional

$$\begin{aligned} \mathcal {I}_{ab}(u)=\frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+\frac{a}{2}\int _{\mathbb {R}^3}u^2dx+\frac{1}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx-\frac{b}{p}\int _{\mathbb {R}^3}|u|^pdx, \end{aligned}$$

defined for \(u\in H^s(\mathbb {R}^3)\). It is easy to check that \(\mathcal {I}_{ab}(u)\) possesses the Mountain Pass structure and hence \(\mathcal {I}_{ab}(u)\) has a bounded (PS)-sequence, and its least energy has the same characterization as stated in Lemma 2.8. Using the fact that \(\mathcal {I}_{ab}(u)\) is invariant under translation, we see that \(\gamma _{ab}=\inf \limits _{u\in \mathcal {N}_{ab}}\mathcal {I}_{ab}(u)\) is attained, where \(\gamma _{ab}\) is the Mountain Pass level and \(\mathcal {N}_{ab}\) is the Nehari manifold of \(\mathcal {I}_{ab}\).

Lemma 2.9

Let \(a_j > 0\) and \(b_j>0\), \(j=1,2\), with \(a_1\le a_2\) and \(b_1\ge b_2\). Then \(\gamma _{a_1b_1}\le \gamma _{a_2b_2}\). In particular, if one of the inequalities is strict, then \(\gamma _{a_1b_1}<\gamma _{a_2b_2}\).

Proof

Let \(u \in \mathcal {N}_{a_2b_2}\) be such that

$$\begin{aligned} \gamma _{a_2b_2}=\mathcal {I}_{a_2b_2}(u)=\max _{t>0}\mathcal {I}_{a_2b_2}(tu). \end{aligned}$$

Let \(u_0=t_1u\) be such that \(\mathcal {I}_{a_1b_1}(u_0) = \max \limits _{t>0}\mathcal {I}_{a_1b_1}(tu)\). One has

$$\begin{aligned} \begin{aligned} \gamma _{a_2b_2}&=\mathcal {I}_{a_2b_2}(u)\ge \mathcal {I}_{a_2b_2}(u_0)\\&=\mathcal {I}_{a_1b_1}(u_0)+\frac{1}{2}(a_2-a_1) \int _{\mathbb {R}^3}|u_0|^2dx+\frac{1}{p}\left( b_1-b_2\right) \int _{\mathbb {R}^3}|u_0|^pdx\\&\ge \gamma _{a_1b_1}. \end{aligned} \end{aligned}$$

Thus \(\gamma _{a_1b_1}\le \gamma _{a_2b_2}\). Moreover, if one of the inequalities \(a_1\le a_2\), \(b_1\ge b_2\) is strict, then, since \(u_0\ne 0\), the corresponding intermediate term above is strictly positive, and hence \(\gamma _{a_1b_1}<\gamma _{a_2b_2}\). This completes the proof. \(\square \)

Without loss of generality, up to a translation, we may assume that

$$\begin{aligned} x_1=0\in \mathcal {V}, \end{aligned}$$

so

$$\begin{aligned} V(0)=V_{\min }\ \ \text {and}\ \ \kappa :=K(0)\ge K(x)\ \text {for all}\ |x|\ge R. \end{aligned}$$

Lemma 2.10

\(\limsup \limits _{\varepsilon \rightarrow 0}c_\varepsilon \le \gamma _{V_{\min }\kappa }\).

Proof

Denote \(V^c(x)=\max \{c,V(x)\}\), \(K^d(x)=\min \{d,K(x)\}\), \(V_\varepsilon ^c(x)=V^c(\varepsilon x)\) and \(K_\varepsilon ^d(x)=K^d(\varepsilon x)\), where \(c,d\) are positive constants. Define the auxiliary functional as follows:

$$\begin{aligned} \mathcal {I}_\varepsilon ^{cd}(u):=\frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx+ \frac{1}{2} \int _{\mathbb {R}^3}V_\varepsilon ^c(x) u^2dx+\frac{ 1}{4}\int _{\mathbb {R}^3}\phi _u^su^2dx-\frac{1}{p}\int _{\mathbb {R}^3}K_\varepsilon ^d(x) |u|^pdx, \end{aligned}$$

for any \(u\in H^s(\mathbb {R}^3)\). Since \(V_\varepsilon ^c\ge c\) and \(K_\varepsilon ^d\le d\) pointwise, this implies that \(\mathcal {I}_{cd}(u)\le \mathcal {I}_\varepsilon ^{cd}(u)\), and thus \(\gamma _{cd}\le c_\varepsilon ^{cd}\), where \(c_\varepsilon ^{cd}\) is the least energy of \(\mathcal {I}_\varepsilon ^{cd}\). By the definitions of \(V_{\min }\) and \(K_{\max }\), we get \(V_\varepsilon ^{V_{\min }}(x)=V(\varepsilon x)\) and \(K_\varepsilon ^{K_{\max }}(x)=K(\varepsilon x)\). Therefore, we have

$$\begin{aligned} \mathcal {I}_\varepsilon ^{V_{\min } K_{\max }}(u)=\mathcal {I}_\varepsilon (u), \end{aligned}$$
(2.7)

and \(V_\varepsilon ^{V_{\min }}(x)\rightarrow V(0)=V_{min}\), \(K_\varepsilon ^{K_{\max }}(x)\rightarrow K(0)=\kappa \) uniformly on bounded sets of x as \(\varepsilon \rightarrow 0\).

Now, we claim \(\limsup \limits _{\varepsilon \rightarrow 0}c_\varepsilon ^{V_{\min } K_{\max }}\le \gamma _{V_{\min } \kappa }\).

Indeed, let w be a ground state solution of \(\mathcal {I}_{V_{\min } \kappa }\), that is, \(\mathcal {I}_{V_{\min } \kappa }(w)=\gamma _{V_{\min }\kappa }\), then there exists \(t_\varepsilon >0\) such that \(t_\varepsilon w\in \mathcal {N}_\varepsilon ^{V_{\min } K_{\max }}\), where \(\mathcal {N}_\varepsilon ^{V_{\min }K_{\max }}\) is the Nehari manifold of the functional \(\mathcal {I}_\varepsilon ^{V_{\min }K_{\max }}\). Thus

$$\begin{aligned} c_\varepsilon ^{V_{\min }K_{\max }}\le \mathcal {I}_\varepsilon ^{V_{\min }K_{\max }}\left( t_\varepsilon w\right) =\max _{t\ge 0}\mathcal {I}_\varepsilon ^{V_{\min }K_{\max }}(tw). \end{aligned}$$

One has

$$\begin{aligned} \begin{aligned} \mathcal {I}_\varepsilon ^{V_{\min }K_{\max }}(t_\varepsilon w)&= \mathcal {I}_{V_{\min } \kappa }(t_\varepsilon w)+ \frac{1}{2}\int _{\mathbb {R}^3}\left( V_\varepsilon ^{V_{\min }}(x)- V_{min}\right) |t_\varepsilon w|^2dx\\&\quad +\frac{1}{p}\int _{\mathbb {R}^3} \left( \kappa -K_\varepsilon ^{K_{\max }}(x)\right) |t_\varepsilon w|^pdx. \end{aligned} \end{aligned}$$
(2.8)

By Lemma 2.7(ii), we may assume that \(t_\varepsilon \rightarrow t_0\) as \(\varepsilon \rightarrow 0\). Since \(w\in L^2(\mathbb {R}^3)\), for any \(\eta >0\) there exists \(R>0\) such that

$$\begin{aligned} \int _{\mathbb {R}^3{\setminus } B_R(0)}|w|^2dx<\eta . \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^3}\left( V_\varepsilon ^{V_{\min }}\left( x\right) -V_{min}\right) |t_\varepsilon w|^2dx=\int _{\mathbb {R}^3}\left( V_\varepsilon ^{V_{\min }}\left( x\right) -V_{min}\right) |t_0 w|^2dx+o\left( 1\right) \\&\quad =\int _{\mathbb {R}^3{\setminus } B_R\left( 0\right) }\left( V_\varepsilon ^{V_{\min }} \left( x\right) -V_{min}\right) |t_0 w|^2dx+\int _{B_R\left( 0\right) } \left( V_\varepsilon ^{V_{\min }}\left( x\right) -V_{min}\right) |t_0 w|^2dx+o\left( 1\right) \\&\quad \le Ct_0^2\eta +o\left( 1\right) +o\left( 1\right) , \end{aligned} \end{aligned}$$

where we use the fact that \(V_\varepsilon ^{V_{\min }}(x)\rightarrow V_{\min }\) uniformly in \(x\in B_{R}(0)\). Since \(\eta >0\) is arbitrary, we obtain

$$\begin{aligned} \int _{\mathbb {R}^3}\left( V_\varepsilon ^{V_{\min }}\left( x\right) -V_{min}\right) |t_\varepsilon w|^2dx=o\left( 1\right) . \end{aligned}$$

Similarly, we have

$$\begin{aligned} \int _{\mathbb {R}^3}\left( \kappa -K_\varepsilon ^{K_{\max }}\left( x\right) \right) |t_\varepsilon w|^pdx=o\left( 1\right) . \end{aligned}$$

Thus, by (2.8), we have

$$\begin{aligned} \mathcal {I}_\varepsilon ^{V_{\min }K_{\max }}(t_\varepsilon w)=\mathcal {I}_{V_{\min }\kappa }(t_\varepsilon w)+o(1)\rightarrow \mathcal {I}_{V_{\min }\kappa }(t_0 w)\quad \text {as}\, \varepsilon \rightarrow 0. \end{aligned}$$
(2.9)

Consequently

$$\begin{aligned} c_\varepsilon ^{V_{\min }K_{\max }}\le \mathcal {I}_\varepsilon ^{V_{\min }K_{\max }}(t_\varepsilon w)\rightarrow \mathcal {I}_{V_{\min } \kappa }(t_0 w)\le \max _{t\ge 0}\mathcal {I}_{V_{\min } \kappa }(t w)=\mathcal {I}_{V_{\min } \kappa }(w)=\gamma _{V_{\min }\kappa }. \end{aligned}$$

From (2.7), we obtain \(c_\varepsilon ^{V_{\min }K_{\max }}=c_\varepsilon \). This completes the proof. \(\square \)

3 Existence of ground state solutions

Lemma 3.1

\(c_\varepsilon \) is attained at some positive \(u_\varepsilon \in H_\varepsilon \) for small \(\varepsilon >0\).

Proof

By Lemma 2.5, we see that the functional \(\mathcal {I}_\varepsilon \) possesses the Mountain Pass structure. Using a version of the Mountain Pass theorem without the (PS) condition (see [46]), there exists a sequence \(\{u_n\}\subset H_\varepsilon \) such that

$$\begin{aligned} \mathcal {I}_\varepsilon (u_n)\rightarrow c_\varepsilon \quad \text {and}\quad \mathcal {I}_\varepsilon ^\prime (u_n)\rightarrow 0\quad \text {as}\ n\rightarrow \infty . \end{aligned}$$

By Lemma 2.6, we know that \(\{u_n\}\) is bounded in \(H_\varepsilon \). Assume that \(u_n\rightharpoonup u_\varepsilon \) in \(H_\varepsilon \), then by Lemmas 2.2(ii) and 2.3, we have \(\mathcal {I}_\varepsilon ^\prime (u_\varepsilon )=0\). If \(u_\varepsilon \ne 0\), it is easy to check that \(\mathcal {I}_\varepsilon (u_\varepsilon )=c_\varepsilon \). Next we show that \(u_\varepsilon \ne 0\) for small \(\varepsilon >0\).

Assume by contradiction that there exists a sequence \(\varepsilon _j\rightarrow 0\) such that \(u_{\varepsilon _j}=0\). Then \(u_n\rightharpoonup 0\) in \(H_{\varepsilon _j}\), and thus \(u_n\rightarrow 0\) in \(L_{loc}^t(\mathbb {R}^3)\) for \(t\in [1,2_s^*)\) and \(u_n(x)\rightarrow 0\) a.e. in \(\mathbb {R}^3\).

By \((A_1)\), choose \(b\in (V_{\min },V_\infty )\) and consider the functional \(\mathcal {I}_{\varepsilon _j}^{b\kappa }\). Let \(t_n>0\) be such that \(t_n u_n\in \mathcal {N}_{\varepsilon _j}^{b\kappa }\); from Lemma 2.7(ii), \(\{t_n\}\) is bounded, and we may assume \(t_n\rightarrow t_0\) as \(n\rightarrow \infty \). By \((A_1)\) again, the set \(O_{\varepsilon _j}:=\{x\in \mathbb {R}^3:V({\varepsilon _j} x)<b\ \text {or}\ K({\varepsilon _j} x)> \kappa \}\) is bounded. Notice that \(\mathcal {I}_{\varepsilon _j}(t_nu_n)\le \mathcal {I}_{\varepsilon _j}(u_n)\). We obtain

$$\begin{aligned} \begin{aligned} c_{\varepsilon _j}^{b \kappa }&\le \mathcal {I}_{\varepsilon _j}^{b \kappa }(t_nu_n)\\&=\mathcal {I}_{\varepsilon _j}(t_nu_n)+\frac{1}{2}\int _{\mathbb {R}^3} \left( V_{\varepsilon _j}^b(x)-V({\varepsilon _j}x)\right) |t_nu_n|^2dx\\&\quad +\frac{1}{p}\int _{\mathbb {R}^3}\big (K({\varepsilon _j}x)-K_{\varepsilon _j}^\kappa (x)\big )|t_nu_n|^pdx\\&=\mathcal {I}_{\varepsilon _j}(t_nu_n)+\frac{1}{2}\int _{O_{\varepsilon _j}}\big (b-V({\varepsilon _j}x)\big )|t_nu_n|^2dx\\&\quad +\frac{1}{p}\int _{O_{\varepsilon _j}}\big (K({\varepsilon _j}x)-\kappa \big )|t_nu_n|^pdx\\&\le \mathcal {I}_{\varepsilon _j}(t_nu_n)+o(1)\le \mathcal {I}_{\varepsilon _j}(u_n)+o(1)=c_{\varepsilon _j}. \end{aligned} \end{aligned}$$

Notice that \(\gamma _{b\kappa }\le c_{\varepsilon _j}^{b\kappa }\), hence \(\gamma _{b\kappa }\le c_{\varepsilon _j}\). By virtue of Lemma 2.10, letting \(\varepsilon _j\rightarrow 0\) yields

$$\begin{aligned} \gamma _{b\kappa }\le \gamma _{V_{\min }\kappa }, \end{aligned}$$

which is impossible since, by Lemma 2.9, \(\gamma _{V_{\min }\kappa }<\gamma _{b\kappa }\). Therefore, \(c_\varepsilon \) is attained at some \(u_\varepsilon \ne 0\) for small \(\varepsilon >0\).

It remains to prove that the solution \(u_\varepsilon \) is positive. Put \(u_\varepsilon ^\pm =\max \{\pm u_\varepsilon ,0\}\), the positive (negative) part of \(u_\varepsilon \). We note that all the calculations above can be repeated word by word, replacing \(\mathcal {I}_\varepsilon \) with the functional

$$\begin{aligned} \mathcal {I}_\varepsilon ^+(u_\varepsilon )= & {} \frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u_\varepsilon |^2dx+\frac{1}{2}\int _{\mathbb {R}^3}V(\varepsilon x)u_\varepsilon ^2dx\\&+\frac{1}{4}\int _{\mathbb {R}^3}\phi _{u_\varepsilon }^su_\varepsilon ^2dx-\frac{1}{p}\int _{\mathbb {R}^3}K(\varepsilon x)|u_\varepsilon ^+|^pdx. \end{aligned}$$

In this way we get a ground state solution \(u_\varepsilon \) of the equation

$$\begin{aligned} (-\Delta )^s u_\varepsilon +V(\varepsilon x)u_\varepsilon +\phi _{u_\varepsilon }^s u_\varepsilon =K(\varepsilon x)|u_\varepsilon ^+|^{p-2}u_\varepsilon ^+,\,\,\text {in}\ \mathbb {R}^3. \end{aligned}$$
(3.1)

Using \(u_\varepsilon ^-\) as a test function in (3.1) we obtain

$$\begin{aligned} \int _{\mathbb {R}^3}(-\Delta )^\frac{s}{2} u_\varepsilon \cdot (-\Delta )^\frac{s}{2} u_\varepsilon ^-dx+\int _{\mathbb {R}^3} V(\varepsilon x)|u_\varepsilon ^-|^2dx+\int _{\mathbb {R}^3}\phi _{u_\varepsilon }^s(u_\varepsilon ^-)^2dx=0. \end{aligned}$$
(3.2)

On the other hand,

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^3}(-\Delta )^\frac{s}{2} u_\varepsilon \cdot (-\Delta )^\frac{s}{2} u_\varepsilon ^-dx&=\frac{1}{2}C(s)\iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{(u_\varepsilon (x)-u_\varepsilon (y))(u_\varepsilon ^-(x)-u_\varepsilon ^-(y))}{|x-y|^{3+2s}}dxdy\\&\ge \frac{1}{2}C(s)\left[ \int _{\{u_\varepsilon>0\}\times \{u_\varepsilon<0\}}\frac{(u_\varepsilon (x)-u_\varepsilon (y))(-u_\varepsilon ^-(y))}{|x-y|^{3+2s}}dxdy\right. \\&\quad +\int _{\{u_\varepsilon<0\}\times \{u_\varepsilon<0\}}\frac{(u_\varepsilon ^-(x)-u_\varepsilon ^-(y))^2}{|x-y|^{3+2s}}dxdy\\&\quad +\left. \int _{\{u_\varepsilon <0\}\times \{u_\varepsilon >0\}}\frac{(u_\varepsilon (x)-u_\varepsilon (y))u_\varepsilon ^-(x)}{|x-y|^{3+2s}}dxdy\right] \\&\ge 0. \end{aligned} \end{aligned}$$

Thus, it follows from (3.2) and Lemma 2.1(i) that \(u_\varepsilon ^-=0\), and hence \(u_\varepsilon \ge 0\). Moreover, if \(u_\varepsilon (x_0)=0\) for some \(x_0\in \mathbb {R}^3\), then the equation gives \((-\Delta )^su_\varepsilon (x_0)=0\), and by (2.1) we have

$$\begin{aligned} (-\Delta )^su_\varepsilon (x_0)=-\frac{C(s)}{2}\int _{\mathbb {R}^3}\frac{u_\varepsilon (x_0+y)+u_\varepsilon (x_0-y)-2u_\varepsilon (x_0)}{|y|^{3+2s}}dy, \end{aligned}$$

therefore,

$$\begin{aligned} \int _{\mathbb {R}^3}\frac{u_\varepsilon (x_0+y)+u_\varepsilon (x_0-y)}{|y|^{3+2s}}dy=0, \end{aligned}$$

and, since \(u_\varepsilon \ge 0\) forces the integrand to be nonnegative, this yields \(u_\varepsilon \equiv 0\), a contradiction. Therefore, \(u_\varepsilon \) is a positive solution of the system (2.2), and the proof is complete. \(\square \)

4 Concentration and convergence of ground state solutions

This section is devoted to the concentration behavior of the ground state solutions \(u_\varepsilon \) as \(\varepsilon \rightarrow 0\). We will prove the following results.

Theorem 4.1

Let \(u_\varepsilon \) be a solution of the system (2.2) given by Lemma 3.1, then \(u_\varepsilon \) possesses a global maximum point \(y_\varepsilon \) such that, up to a subsequence, \(\varepsilon y_\varepsilon \rightarrow x_0\) as \(\varepsilon \rightarrow 0\), \(\lim \limits _{\varepsilon \rightarrow 0}dist(\varepsilon y_\varepsilon , \mathcal {H}_1)=0\) and \(v_\varepsilon (x):=u_\varepsilon (x+y_\varepsilon )\) converges in \(H^s(\mathbb {R}^3)\) to a positive ground state solution of

$$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s u +V(x_0)u+\phi u=K(x_0)|u|^{p-2}u,\,\,\text {in}\ \mathbb {R}^3,\\ (-\Delta )^s \phi =u^2,\,\,\text {in}\ \mathbb {R}^3. \end{array} \right. \end{aligned}$$

In particular, if \(\mathcal {V}\cap \mathcal {K}\ne \emptyset \), then \(\lim \limits _{\varepsilon \rightarrow 0}dist(\varepsilon y_\varepsilon ,\mathcal {V}\cap \mathcal {K})=0\), and up to a subsequence, \(v_\varepsilon \) converges in \(H^s(\mathbb {R}^3)\) to a positive ground state solution of

$$\begin{aligned} \left\{ \begin{array}{ll} (-\Delta )^s u +V_{\min }u+\phi u=K_{\max }|u|^{p-2}u,\,\,\text {in}\ \mathbb {R}^3,\\ \\ (-\Delta )^s \phi =u^2,\,\,\text {in}\ \mathbb {R}^3. \end{array} \right. \end{aligned}$$

Lemma 4.1

There exists \(\varepsilon ^*>0\) such that, for all \(\varepsilon \in (0,\varepsilon ^*)\), there exist \(\{y_\varepsilon \}\subset \mathbb {R}^3\) and \(\tilde{R}\), \(\sigma >0\) such that

$$\begin{aligned} \int _{B_{\tilde{R}}(y_\varepsilon )}u_\varepsilon ^2dx\ge \sigma . \end{aligned}$$

Proof

Assume by contradiction that there exists a sequence \(\varepsilon _j\rightarrow 0\) as \(j\rightarrow \infty \), such that for any \(R>0\),

$$\begin{aligned} \lim _{j\rightarrow \infty }\sup _{y\in \mathbb {R}^3}\int _{B_R(y)}u_{\varepsilon _j}^2dx=0. \end{aligned}$$

Thus, by Lemma 2.4, we have

$$\begin{aligned} u_{\varepsilon _j}\rightarrow 0\ \text {in}\ L^q(\mathbb {R}^3)\ \text {for}\ 2<q<2^*_s, \end{aligned}$$

which, jointly with Lemma 2.1(iii), gives

$$\begin{aligned} \int _{\mathbb {R}^3}\phi _{u_{\varepsilon _j}}^su_{\varepsilon _j}^2dx\rightarrow 0~\hbox {as}~ j\rightarrow \infty , \end{aligned}$$

and hence

$$\begin{aligned} \Vert u_{\varepsilon _j}\Vert _{\varepsilon _j}^2=\int _{\mathbb {R}^3}K(\varepsilon _j x)|u_{\varepsilon _j}|^pdx-\int _{\mathbb {R}^3}\phi _{u_{\varepsilon _j}}^su_{\varepsilon _j}^2dx \rightarrow 0~\hbox {as}~j\rightarrow \infty . \end{aligned}$$

Thus, \(\mathcal {I}_{\varepsilon _j}(u_{\varepsilon _j})\rightarrow 0\) as \(j\rightarrow \infty \), which contradicts the fact that \(\mathcal {I}_{\varepsilon _j}(u_{\varepsilon _j})=c_{\varepsilon _j}\ge \alpha >0\), where \(\alpha \) is the constant from Lemma 2.5(i), which can be chosen independently of \(\varepsilon \). \(\square \)

Set \(v_\varepsilon (x):=u_\varepsilon (x+y_\varepsilon )\), then \(v_\varepsilon \) satisfies

$$\begin{aligned} (-\Delta )^sv_\varepsilon +V(\varepsilon (x+y_\varepsilon ))v_\varepsilon +\phi _{v_\varepsilon }^sv_\varepsilon =K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^{p-2}v_\varepsilon , \end{aligned}$$
(4.1)

with energy

$$\begin{aligned} \begin{aligned} \mathcal {J}_\varepsilon (v_\varepsilon )&=\frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx +\frac{1}{2}\int _{\mathbb {R}^3}V(\varepsilon (x+y_\varepsilon ))v_\varepsilon ^2dx+\frac{1}{4}\int _{\mathbb {R}^3}\phi _{v_\varepsilon }^sv_\varepsilon ^2dx\\&\quad -\frac{1}{p}\int _{\mathbb {R}^3}K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^p dx\\&=\mathcal {J}_\varepsilon (v_\varepsilon )-\frac{1}{4}\left\langle \mathcal {J}_\varepsilon ^\prime (v_\varepsilon ),v_\varepsilon \right\rangle \\&=\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx +\frac{1}{4}\int _{\mathbb {R}^3}V(\varepsilon (x+y_\varepsilon ))v_\varepsilon ^2dx\\&\quad +\left( \frac{1}{4}-\frac{1}{p}\right) \int _{\mathbb {R}^3}K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^p dx\\&=\mathcal {I}_\varepsilon (u_\varepsilon )-\frac{1}{4}\langle \mathcal {I}_\varepsilon ^\prime (u_\varepsilon ),u_\varepsilon \rangle =\mathcal {I}_\varepsilon (u_\varepsilon )=c_\varepsilon . \end{aligned} \end{aligned}$$

We may assume that \(v_\varepsilon \rightharpoonup u\) in \(H^s(\mathbb {R}^3)\) and \(v_\varepsilon \rightarrow u\) in \(L_{loc}^t(\mathbb {R}^3)\) for \(t\in [1,2_s^*)\); by Lemma 4.1, \(\int _{B_{\tilde{R}}(0)}u^2dx\ge \sigma \), and hence \(u\ne 0\).

Since \(V,K\in L^\infty (\mathbb {R}^3)\), without loss of generality, we may assume that \(V(\varepsilon y_\varepsilon )\rightarrow V_0\) and \(K(\varepsilon y_\varepsilon )\rightarrow K_0\) as \(\varepsilon \rightarrow 0\).

Lemma 4.2

u is a positive ground state solution of

$$\begin{aligned} (-\Delta )^su+V_0u+\phi _{u}^su=K_0|u|^{p-2}u. \end{aligned}$$
(4.2)

Proof

Since V and K are uniformly continuous, one has

$$\begin{aligned} |V(\varepsilon (x+y_\varepsilon ))-V(\varepsilon y_\varepsilon )|\rightarrow 0\ \text {and}\ |K(\varepsilon (x+y_\varepsilon ))-K(\varepsilon y_\varepsilon )|\rightarrow 0\ \text {as}\ \varepsilon \rightarrow 0 \end{aligned}$$

uniformly on bounded sets of \(x\in \mathbb {R}^3\). Then, we get

$$\begin{aligned} \begin{aligned} |V(\varepsilon (x+y_\varepsilon ))-V_0|&\le |V(\varepsilon (x+y_\varepsilon ))-V(\varepsilon y_\varepsilon )|+|V(\varepsilon y_\varepsilon )-V_0|\rightarrow 0, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} |K(\varepsilon (x+y_\varepsilon ))-K_0|&\le |K(\varepsilon (x+y_\varepsilon ))-K(\varepsilon y_\varepsilon )|+|K(\varepsilon y_\varepsilon )-K_0|\rightarrow 0 \end{aligned} \end{aligned}$$

as \(\varepsilon \rightarrow 0\) uniformly on bounded sets of \(x\in \mathbb {R}^3\). Therefore, \(V(\varepsilon (x+y_\varepsilon ))\rightarrow V_0\) and \(K(\varepsilon (x+y_\varepsilon ))\rightarrow K_0\) as \(\varepsilon \rightarrow 0\) uniformly on bounded sets of \(x\in \mathbb {R}^3\). Consequently, by (4.1), for any \(\varphi \in C_0^\infty (\mathbb {R}^3)\),

$$\begin{aligned} \begin{aligned} 0&=\lim _{\varepsilon \rightarrow 0}\int _{\mathbb {R}^3}\big ((-\Delta )^sv_\varepsilon +V(\varepsilon (x+y_\varepsilon ))v_\varepsilon +\phi _{v_\varepsilon }^sv_\varepsilon -K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^{p-2}v_\varepsilon \big )\varphi dx\\&=\int _{\mathbb {R}^3}\big ((-\Delta )^su+V_0u +\phi _{u}^su-K_0|u|^{p-2}u\big )\varphi dx, \end{aligned} \end{aligned}$$

which implies that u solves (4.2) with energy

$$\begin{aligned} \begin{aligned} \mathcal {I}_{V_0K_0}(u):&=\frac{1}{2}\int _{\mathbb {R}^3}|(-\Delta )^\frac{s}{2}u|^2dx+\frac{1}{2}V_0\int _{\mathbb {R}^3}u^2dx +\frac{1}{4}\int _{\mathbb {R}^3}\phi _{u}^su^2dx-\frac{1}{p}K_0\int _{\mathbb {R}^3}|u|^pdx\\&=\mathcal {I}_{V_0K_0}(u)-\frac{1}{4}\left\langle \mathcal {I}_{V_0K_0}^\prime (u),u\right\rangle \\&=\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx +\frac{1}{4}V_0\int _{\mathbb {R}^3}u^2dx+\left( \frac{1}{4}-\frac{1}{p}\right) K_0\int _{\mathbb {R}^3}|u|^p dx\\&\ge \gamma _{V_0K_0}. \end{aligned} \end{aligned}$$

By Fatou’s lemma and the Proof of Lemma 2.10, we have

$$\begin{aligned} \begin{aligned} \gamma _{V_0K_0}&\le \frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx +\frac{1}{4}V_0\int _{\mathbb {R}^3}u^2dx+\left( \frac{1}{4}-\frac{1}{p}\right) K_0\int _{\mathbb {R}^3}|u|^p dx\\&\le \liminf _{\varepsilon \rightarrow 0}\bigg [\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx +\frac{1}{4}\int _{\mathbb {R}^3}V(\varepsilon (x+y_\varepsilon ))v_\varepsilon ^2dx\\&\quad +\left( \frac{1}{4}-\frac{1}{p}\right) \int _{\mathbb {R}^3}K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^p dx\bigg ]\\&=\liminf _{\varepsilon \rightarrow 0}\mathcal {J}_\varepsilon (v_\varepsilon )\\&\le \limsup _{\varepsilon \rightarrow 0}\mathcal {I}_\varepsilon (u_\varepsilon )\\&\le \gamma _{V_0K_0}. \end{aligned} \end{aligned}$$

Consequently,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\mathcal {J}_\varepsilon (v_\varepsilon )=\lim _{\varepsilon \rightarrow 0}c_\varepsilon =\mathcal {I}_{V_0K_0}(u)=\gamma _{V_0K_0}. \end{aligned}$$
(4.3)

Therefore, u is a ground state solution of the limit problem (4.2). As in the Proof of Lemma 3.1, u is positive. \(\square \)

Lemma 4.3

\(\{\varepsilon y_\varepsilon \}\) is bounded.

Proof

Suppose to the contrary that, after passing to a subsequence, \(|\varepsilon y_\varepsilon |\rightarrow \infty \). Recall that \(V(\varepsilon y_\varepsilon )\rightarrow V_0\) and \(K(\varepsilon y_\varepsilon )\rightarrow K_0\) as \(\varepsilon \rightarrow 0\). Since \(V_\infty >V_{\min }\) by \((A_1)\) and \(\kappa =K(0)\ge K(x)\) for all \(|x|\ge R\), we deduce that \(V_0\ge V_\infty > V_{\min }\) and \(K_0\le \kappa \). So it follows from Lemma 2.9 that \(\gamma _{V_0K_0}>\gamma _{V_{\min }\kappa }\).

However, by (4.3) and Lemma 2.10, \(c_\varepsilon \rightarrow \gamma _{V_0K_0}\le \gamma _{V_{\min }\kappa }\), which is a contradiction. Therefore, \(\{\varepsilon y_\varepsilon \}\) is bounded. \(\square \)

After extracting a subsequence, we may assume that \(\varepsilon y_\varepsilon \rightarrow x_0\) as \(\varepsilon \rightarrow 0\); then, by the continuity of V and K, \(V_0=V(x_0)\) and \(K_0=K(x_0)\).

Lemma 4.4

\(\lim \limits _{\varepsilon \rightarrow 0}dist(\varepsilon y_\varepsilon ,\mathcal {H}_1)=0\).

Proof

It suffices to show that \(x_0\in \mathcal {H}_1\). We argue by contradiction: if \(x_0\notin \mathcal {H}_1\), then it is easy to check, using \((A_1)\) and Lemma 2.9, that \(\gamma _{V(x_0)K(x_0)}>\gamma _{V_{\min }\kappa }\). Therefore, by Lemma 2.10, we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}c_\varepsilon =\gamma _{V(x_0)K(x_0)}>\gamma _{V_{\min }\kappa }\ge \lim _{\varepsilon \rightarrow 0}c_\varepsilon , \end{aligned}$$

which is absurd. \(\square \)

Lemma 4.5

\(v_\varepsilon \rightarrow u\) in \(H^s(\mathbb {R}^3)\).

Proof

Recalling that u is a ground state solution of (4.2), we have

$$\begin{aligned} \begin{aligned}&\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx\le \liminf \limits _{\varepsilon \rightarrow 0}\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx\le \limsup \limits _{\varepsilon \rightarrow 0}\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx\\&\quad \le \limsup \limits _{\varepsilon \rightarrow 0}\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx+\liminf \limits _{\varepsilon \rightarrow 0}\frac{1}{4}\int _{\mathbb {R}^3}V(\varepsilon (x+y_\varepsilon ))v_\varepsilon ^2dx-\frac{1}{4}V_0\int _{\mathbb {R}^3}u^2dx\\&\qquad +\liminf \limits _{\varepsilon \rightarrow 0}\left( \frac{1}{4}-\frac{1}{p}\right) \int _{\mathbb {R}^3}K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^pdx -\left( \frac{1}{4}-\frac{1}{p}\right) K_0\int _{\mathbb {R}^3}|u|^pdx\\&\quad \le \limsup \limits _{\varepsilon \rightarrow 0}\bigg [\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx+\frac{1}{4}\int _{\mathbb {R}^3}V(\varepsilon (x+y_\varepsilon ))v_\varepsilon ^2dx\\&\qquad +\left( \frac{1}{4}-\frac{1}{p}\right) \int _{\mathbb {R}^3}K(\varepsilon (x+y_\varepsilon ))|v_\varepsilon |^pdx \bigg ]-\frac{1}{4}V_0\int _{\mathbb {R}^3}u^2dx-\left( \frac{1}{4}-\frac{1}{p}\right) K_0\int _{\mathbb {R}^3}|u|^pdx\\&\quad =\frac{1}{4}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx. \end{aligned} \end{aligned}$$

Consequently,

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_\varepsilon |^2dx=\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx. \end{aligned}$$

Similarly, we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _{\mathbb {R}^3}V(\varepsilon (x+y_\varepsilon ))v_\varepsilon ^2dx=V_0\int _{\mathbb {R}^3}u^2dx. \end{aligned}$$

Notice that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\left( \int _{\mathbb {R}^3}V \left( \varepsilon (x+y_\varepsilon )\right) v_\varepsilon ^2dx-V_0 \int _{\mathbb {R}^3}v_\varepsilon ^2dx\right) =0. \end{aligned}$$

Thus

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0} \left\{ \int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_ \varepsilon |^2dx+V_0\int _{\mathbb {R}^3}v_\varepsilon ^2dx\right\} =\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}u|^2dx +V_0\int _{\mathbb {R}^3}u^2dx. \end{aligned}$$

Together with \(v_\varepsilon \rightharpoonup u\) in \(H^s(\mathbb {R}^3)\) and the fact that the quantity on the left-hand side is the square of a norm equivalent to \(\Vert {\cdot }\Vert _{H^s}\) (recall that \(V_0\ge V_{\min }>0\)), this gives \(v_\varepsilon \rightarrow u\) in \(H^s(\mathbb {R}^3)\). \(\square \)

To establish the \(L^\infty \)-estimate of the ground state solutions, we first recall the following result, which can be found in [16, (5.1.1) and (5.1.2)], where it is stated without proof.

Lemma 4.6

Suppose that \(f:\mathbb {R}\rightarrow \mathbb {R}\) is convex and Lipschitz continuous with Lipschitz constant L, and that \(f(0)=0\). Then \(f(u)\in H^s(\mathbb {R}^3)\) for each \(u\in H^s(\mathbb {R}^3)\), and

$$\begin{aligned} (-\Delta )^sf(u)\le f^\prime (u)(-\Delta )^su \end{aligned}$$
(4.4)

in the weak sense.

Proof

First, we claim that \(f(u)\in H^s(\mathbb {R}^3)\) for \(u\in H^s(\mathbb {R}^3)\). In fact

$$\begin{aligned} \begin{aligned}{}[f(u)]_{\mathcal {D}^{s,2}}&=\left( \iint _{\mathbb {R}^3\times \mathbb {R}^3}\frac{|f(u(x))-f(u(y))|^2}{|x-y|^{3+2s}}dxdy\right) ^{\frac{1}{2}}\\&\le \left( \iint _{\mathbb {R}^3\times \mathbb {R}^3} \frac{L^2|u(x)-u(y)|^2}{|x-y|^{3+2s}}dxdy\right) ^{\frac{1}{2}}\\&=L [u]_{\mathcal {D}^{s,2}}, \end{aligned} \end{aligned}$$

which implies that \(f(u)\in \mathcal {D}^{s,2}(\mathbb {R}^3)\). Moreover,

$$\begin{aligned} \int _{\mathbb {R}^3}|f(u)|^2dx=\int _{\mathbb {R}^3}|f(u)-f(0)|^2dx\le \int _{\mathbb {R}^3}L^2|u|^2dx<\infty , \end{aligned}$$

which yields that \(f(u)\in L^2(\mathbb {R}^3)\). Therefore, the claim is true.

Next we show that (4.4) holds. Observe that \(f^\prime \) exists a.e. in \(\mathbb {R}\) since f is Lipschitz continuous. For \(\psi \in C_{0}^{ \infty }(\mathbb {R}^3,\mathbb {R})\) with \(\psi \ge 0\), combining (2.1) with the convexity of f, there holds

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^3}(-\Delta )^s (f(u))\psi dx\\&\quad =-\frac{1}{2}C(s)\int \int _{\mathbb {R}^3\times \mathbb {R}^3}\frac{f(u(x+y))+f(u(x-y))-2f(u(x))}{|y|^{3+2s}}\psi (x) dydx,\\&\quad =-\frac{1}{2}C(s)\int \int _{\mathbb {R}^3\times \mathbb {R}^3}\frac{f(u(x+y))-f(u(x))+f(u(x-y))-f(u(x))}{|y|^{3+2s}}\psi (x) dydx\\&\quad \le -\frac{1}{2}C(s)\int \int _{\mathbb {R}^3\times \mathbb {R}^3}\frac{f^\prime (u(x))(u(x+y)-u(x))+f^\prime (u(x))(u(x-y)-u(x))}{|y|^{3+2s}}\psi (x) dydx\\&\quad =-\frac{1}{2}C(s)\int \int _{\mathbb {R}^3\times \mathbb {R}^3}\frac{f^\prime (u(x))[u(x+y)+u(x-y)-2u(x)]}{|y|^{3+2s}}\psi (x) dydx\\&\quad =\int _{\mathbb {R}^3}f^\prime (u)(-\Delta )^s u\psi dx. \end{aligned} \end{aligned}$$

This completes the proof. \(\square \)

Remark 1

In fact, from the above arguments, one can see that (4.4) holds for a.e. \(x\in \mathbb {R}^3\). Moreover, Lemma 4.6 is true for general dimension N.

The following Lemma plays a fundamental role in the study of the behavior of the maximum points of the solutions; its proof relies on the Moser iteration method [32].

Lemma 4.7

Let \(\varepsilon _n\rightarrow 0\) and \(v_{\varepsilon _n}\) be a solution of the following problem

$$\begin{aligned} (-\Delta )^s v_{\varepsilon _n} +V(\varepsilon _n(x+y_{\varepsilon _n}))v_{\varepsilon _n}+\phi _{v_{\varepsilon _n}}^s v_{\varepsilon _n}=K(\varepsilon _n(x+y_{\varepsilon _n}))|v_{\varepsilon _n}|^{p-2}v_{\varepsilon _n},\,\,\text {in}~\mathbb {R}^3, \end{aligned}$$
(4.5)

where \(y_{\varepsilon _n}\) is given in Lemma 4.1. Then \(v_{\varepsilon _n}\in L^\infty (\mathbb {R}^3)\) and there exists \(C>0\) such that

$$\begin{aligned} \Vert v_{\varepsilon _n}\Vert _\infty \le C,\ \text {uniformly in}\ n\in \mathbb {N}. \end{aligned}$$

Moreover, \(v_{\varepsilon _n}\rightarrow u\) in \(L^q(\mathbb {R}^3),\forall ~q\in [2,+\infty )\).

Proof

For simplicity of notation, we denote \(v_{\varepsilon _n}\) and \(y_{\varepsilon _n}\) by \(v_n\) and \(y_n\), respectively. Define

$$\begin{aligned} h(x,v_n):=K(\varepsilon _n(x+y_n))|v_n|^{p-2}v_n-V(\varepsilon _n(x+y_n))v_n-\phi _{v_n}^sv_n. \end{aligned}$$

From Lemma 4.5, \(\{v_n\}\) is bounded in \(H^s(\mathbb {R}^3)\), and hence in \(L^q(\mathbb {R}^3)\) for any \(q\in [2,2_s^*]\). So there exists some \(C>0\) such that

$$\begin{aligned} \Vert v_n\Vert _q\le C, \end{aligned}$$

uniformly in n. Since \(v_n\) is a solution of (4.5), in order to estimate \(h(x,v_n)\) we first bound the nonlocal term:

$$\begin{aligned} \begin{aligned} \phi _{v_n}^s(x)&=\int _{\mathbb {R}^3}\frac{v_n^2(y)}{|x-y|^{3-2s}}dy=\int _{\{|x-y|\le 1\}}\frac{v_n^2(y)}{|x-y|^{3-2s}}dy+\int _{\{|x-y|> 1\}}\frac{v_n^2(y)}{|x-y|^{3-2s}}dy\\&\le \int _{\{|x-y|\le 1\}}\frac{v_n^2(y)}{|x-y|^{3-2s}}dy+\int _{\{|x-y|> 1\}}v_n^2(y)dy\\&\le \left( \int _{\{|x-y|\le 1\}}\frac{1}{|x-y|^{(3-2s)t^\prime }}dy \right) ^{\frac{1}{t^\prime }} \left( \int _{\{|x-y|\le 1\}}v_n^{2t}(y)dy\right) ^{\frac{1}{t}}+C\\&\le C, \end{aligned} \end{aligned}$$

where \(\frac{1}{t}+\frac{1}{t^\prime }=1\), \(t^\prime (3-2s)<3\) and \(2t\in [2,2_s^*]\); such a choice of exponents is possible because \(\frac{3}{4}<s<1\) (see the verification following (4.6)). Therefore, we have

$$\begin{aligned} |h(x,v_n)|\le C(|v_n|+|v_n|^{p-1})\le C(1+|v_n|^{2_s^*-1}). \end{aligned}$$
(4.6)
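
For the reader's convenience, we verify that exponents t and \(t^\prime \) as above indeed exist; this is precisely where the restriction \(s>\frac{3}{4}\) enters. Since \(t^\prime =\frac{t}{t-1}\),

$$\begin{aligned} t^\prime (3-2s)<3\ \Longleftrightarrow \ t(3-2s)<3(t-1)\ \Longleftrightarrow \ t>\frac{3}{2s},\qquad 2t\le 2_s^*\ \Longleftrightarrow \ t\le \frac{3}{3-2s}, \end{aligned}$$

and the interval \(\left( \frac{3}{2s},\frac{3}{3-2s}\right] \) is nonempty if and only if \(3-2s<2s\), that is, \(s>\frac{3}{4}\).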

Let \(T>0\). Following [16], we define

$$\begin{aligned} f(t)=\left\{ \begin{array}{ll} 0, & \text {if}\ t\le 0,\\ t^\beta , & \text {if}\ 0<t<T,\\ \beta T^{\beta -1}(t-T)+T^\beta , & \text {if}\ t\ge T, \end{array} \right. \end{aligned}$$

with \(\beta >1\) to be determined later. Since f is convex and Lipschitz with constant \(L_0=\beta T^{\beta -1}\) and \(f(0)=0\), by Lemma 4.6, we have \(f(v_n)\in \mathcal {D}^{s,2}(\mathbb {R}^3)\) and

$$\begin{aligned} (-\Delta )^sf(v_n)\le f^\prime (v_n)(-\Delta )^sv_n \end{aligned}$$
(4.7)

in the weak sense. Thus, from \(f(v_n)\in \mathcal {D}^{s,2}(\mathbb {R}^3)\), the self-adjointness of the operator \((-\Delta )^{s/2}\) and (4.6)–(4.7), we have

$$\begin{aligned} \begin{aligned} \Vert f(v_n)\Vert _{2_s^*}^2&\le C\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}f(v_n)|^2dx =C\int _{\mathbb {R}^3}f(v_n)(-\Delta )^sf(v_n)dx\\&\le C\int _{\mathbb {R}^3}f(v_n)f^\prime (v_n)(-\Delta )^sv_n dx= C\int _{\mathbb {R}^3}f(v_n)f^\prime (v_n)h(x,v_n) dx\\&\le C\int _{\mathbb {R}^3}f(v_n)f^\prime (v_n)dx+C\int _{\mathbb {R}^3}f(v_n)f^\prime (v_n)v_n^{2_s^*-1}dx. \end{aligned} \end{aligned}$$

Using the fact that \(f(v_n)f^\prime (v_n)\le \beta ^2 v_n^{2\beta -1}\) and \(v_n f^\prime (v_n)\le \beta f(v_n)\), we have

$$\begin{aligned} \bigg (\int _{\mathbb {R}^3}\big (f(v_n)\big )^{2_s^*}dx\bigg )^{\frac{2}{2_s^*}} \le C\beta ^2\bigg (\int _{\mathbb {R}^3}v_n^{2\beta -1}dx+\int _{\mathbb {R}^3}\big (f(v_n)\big )^2v_n^{2_s^*-2}dx\bigg ), \end{aligned}$$
(4.8)

where C is a positive constant that does not depend on \(\beta \). Notice that the last integral is well defined for each fixed T in the definition of f. Indeed,

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^3}\big (f(v_n)\big )^2v_n^{2_s^*-2}dx&= \int _{\{v_n\le T\} }\big (f(v_n)\big )^2v_n^{2_s^*-2}dx+\int _{\{v_n>T\}}\big (f(v_n)\big )^2v_n^{2_s^*-2}dx\\&\le T^{2\beta -2}\int _{\mathbb {R}^3}v_n^{2_s^*}dx+C\int _{\mathbb {R}^3}v_n^{2_s^*}dx<\infty . \end{aligned} \end{aligned}$$
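
We also record, for completeness, a short verification of the two elementary inequalities \(f(v_n)f^\prime (v_n)\le \beta ^2 v_n^{2\beta -1}\) and \(v_nf^\prime (v_n)\le \beta f(v_n)\) used in deriving (4.8). For \(t\le 0\) both sides vanish; for \(0<t<T\) we have \(f(t)f^\prime (t)=\beta t^{2\beta -1}\le \beta ^2t^{2\beta -1}\) and \(tf^\prime (t)=\beta t^\beta =\beta f(t)\); and for \(t\ge T\),

$$\begin{aligned} f(t)=\beta T^{\beta -1}t-(\beta -1)T^\beta \le \beta T^{\beta -1}t\le \beta t^\beta \quad \text {and}\quad \beta f(t)-tf^\prime (t)=\beta (\beta -1)T^{\beta -1}(t-T)\ge 0, \end{aligned}$$

so that \(f(t)f^\prime (t)\le \beta t^\beta \cdot \beta T^{\beta -1}\le \beta ^2t^{2\beta -1}\) and \(tf^\prime (t)\le \beta f(t)\) hold in this case as well.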

We now choose \(\beta \) in (4.8) such that \(2\beta -1=2_s^*\) and denote it by \(\beta _1\), that is,

$$\begin{aligned} \beta _1:=\frac{2_s^*+1}{2}. \end{aligned}$$
(4.9)

Let \(\hat{R}>0\) be a constant to be fixed later. Turning to the last integral in (4.8) and applying Hölder's inequality with exponents \(\gamma :=\frac{2_s^*}{2}\) and \(\gamma ^\prime :=\frac{2_s^*}{2_s^*-2}\), we obtain

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^3}\big (f(v_n)\big )^2v_n^{2_s^*-2}dx\\&\quad = \int _{\{v_n\le \hat{R}\} }\big (f(v_n)\big )^2v_n^{2_s^*-2}dx+\int _{\{v_n>\hat{R}\}}\big (f(v_n)\big )^2v_n^{2_s^*-2}dx\\&\quad \le \int _{\{v_n\le \hat{R}\} }\frac{\big (f(v_n)\big )^2}{v_n}\hat{R}^{2_s^*-1}dx +\bigg (\int _{\mathbb {R}^3}\big (f(v_n)\big )^{2_s^*}dx\bigg )^{\frac{2}{2_s^*}}\bigg (\int _{\{v_n>\hat{R}\}}v_n^{2_s^*}dx\bigg )^{\frac{2_s^*-2}{2_s^*}}. \end{aligned} \end{aligned}$$
(4.10)

By the Monotone Convergence Theorem (and uniformly in n, since \(v_n\rightarrow u\) in \(L^{2_s^*}(\mathbb {R}^3)\) by Lemma 4.5 and the Sobolev embedding), we can choose \(\hat{R}\) large enough so that

$$\begin{aligned} \bigg (\int _{\{v_n>\hat{R}\}}v_n^{2_s^*}dx\bigg )^{\frac{2_s^*-2}{2_s^*}}\le \frac{1}{2C\beta _1^2}, \end{aligned}$$

where C is the constant appearing in (4.8). Therefore, we can absorb the last term in (4.10) into the left-hand side of (4.8) to get

$$\begin{aligned} \bigg (\int _{\mathbb {R}^3}\big (f(v_n)\big )^{2_s^*}dx\bigg )^{\frac{2}{2_s^*}} \le 2C\beta _1^2\bigg (\int _{\mathbb {R}^3}v_n^{2_s^*}dx+\hat{R}^{2_s^*-1}\int _{\mathbb {R}^3}\frac{\big (f(v_n)\big )^2}{v_n}dx\bigg ). \end{aligned}$$

Now, using the fact that \(f(v_n)\le v_n^{\beta _1}\) and (4.9) once again on the right-hand side, and letting \(T\rightarrow \infty \), we obtain

$$\begin{aligned} \bigg (\int _{\mathbb {R}^3}v_n^{2_s^*\beta _1}dx\bigg )^{\frac{2}{2_s^*}} \le 2C\beta _1^2\bigg (\int _{\mathbb {R}^3}v_n^{2_s^*}dx+\hat{R}^{2_s^*-1}\int _{\mathbb {R}^3}v_n^{2_s^*}dx\bigg ). \end{aligned}$$

Therefore,

$$\begin{aligned} v_n\in L^{2_s^*\beta _1}(\mathbb {R}^3),~\forall ~n\in \mathbb {N}, \end{aligned}$$
(4.11)

and

$$\begin{aligned} \Vert v_n\Vert _{2_s^*\beta _1}\le C, \end{aligned}$$
(4.12)

uniformly in n.

Suppose now that \(\beta >\beta _1\). Using \(f(v_n)\le v_n^\beta \) on the right-hand side of (4.8) and letting \(T\rightarrow \infty \), we get

$$\begin{aligned} \bigg (\int _{\mathbb {R}^3}v_n^{2_s^*\beta }dx\bigg )^{\frac{2}{2_s^*}} \le C\beta ^2\bigg (\int _{\mathbb {R}^3}v_n^{2\beta -1}dx+\hat{R}^{2_s^*-1}\int _{\mathbb {R}^3}v_n^{2\beta +2_s^*-2}dx\bigg ). \end{aligned}$$
(4.13)

Set \(c_0:=\frac{2_s^*(2_s^*-1)}{2(\beta -1)}\) and \(c_1:=2\beta -1-c_0\). Notice that, since \(\beta >\beta _1\), we have \(0<c_0<2_s^*\) and \(c_1>0\). Hence, applying Young's inequality with exponents \(\gamma :=\frac{2_s^*}{c_0}\) and \(\gamma ^\prime :=\frac{2_s^*}{2_s^*-c_0}\), we have

$$\begin{aligned} \begin{aligned} \int _{\mathbb {R}^3}v_n^{2\beta -1}dx&\le \frac{c_0}{2_s^*}\int _{\mathbb {R}^3}v_n^{2_s^*}dx+\frac{2_s^*-c_0}{2_s^*}\int _{\mathbb {R}^3}v_n^{\frac{2_s^*c_1}{2_s^*-c_0}}dx\\&\le \int _{\mathbb {R}^3}v_n^{2_s^*}dx+\int _{\mathbb {R}^3}v_n^{2\beta +2_s^*-2}dx\\&\le C\left( 1+\int _{\mathbb {R}^3}v_n^{2\beta +2_s^*-2}dx\right) , \end{aligned} \end{aligned}$$

with \(C>0\) independent of \(\beta \). Plugging this into (4.13), we obtain

$$\begin{aligned} \bigg (\int _{\mathbb {R}^3}v_n^{2_s^*\beta }dx\bigg )^{\frac{2}{2_s^*}}\le C\beta ^2\left( 1+\int _{\mathbb {R}^3}v_n^{2\beta +2_s^*-2}dx\right) , \end{aligned}$$

with C changing from line to line, but remaining independent of \(\beta \). Therefore

$$\begin{aligned} \bigg (1+\int _{\mathbb {R}^3}v_n^{2_s^*\beta }dx \bigg )^{\frac{1}{2_s^*(\beta -1)}}\le \big (C\beta ^2\big )^{\frac{1}{2(\beta -1)}}\bigg (1+\int _{\mathbb {R}^3}v_n^{2\beta +2_s^*-2}dx\bigg )^{\frac{1}{2(\beta -1)}}. \end{aligned}$$
(4.14)
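
We note that the constant \(c_0\) has been chosen exactly so that the exponent produced by Young's inequality matches the exponent \(2\beta +2_s^*-2\) appearing in (4.13) and (4.14): indeed,

$$\begin{aligned} \frac{2_s^*c_1}{2_s^*-c_0}=2\beta +2_s^*-2\ \Longleftrightarrow \ 2_s^*(2\beta -1-c_0)=(2\beta +2_s^*-2)(2_s^*-c_0)\ \Longleftrightarrow \ 2c_0(\beta -1)=2_s^*(2_s^*-1), \end{aligned}$$

which is precisely the definition \(c_0=\frac{2_s^*(2_s^*-1)}{2(\beta -1)}\).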

Repeating this argument, we define a sequence \(\beta _m\), \(m\ge 1\), by the recurrence

$$\begin{aligned} 2\beta _{m+1}+2_s^*-2=2_s^*\beta _m. \end{aligned}$$

Since this relation is equivalent to \(\beta _{m+1}-1=\frac{2_s^*}{2}(\beta _m-1)\), iterating gives

$$\begin{aligned} \beta _{m+1}-1=\left( \frac{2_s^*}{2}\right) ^m(\beta _1-1). \end{aligned}$$

Substituting this into (4.14), one has

$$\begin{aligned} \bigg (1+\int _{\mathbb {R}^3}v_n^{2_s^*\beta _{m+1}}dx\bigg )^{\frac{1}{2_s^*(\beta _{m+1}-1)}}\le \big (C\beta _{m+1}^2\big )^{\frac{1}{2(\beta _{m+1}-1)}}\bigg (1+\int _{\mathbb {R}^3}v_n^{2_s^*\beta _m}dx\bigg )^{\frac{1}{2_s^*(\beta _m-1)}}. \end{aligned}$$

Define \(C_{m+1}:=C\beta _{m+1}^2\) and

$$\begin{aligned} A_m:=\bigg (1+\int _{\mathbb {R}^3}v_n^{2_s^*\beta _m}dx\bigg )^{\frac{1}{2_s^*(\beta _m-1)}}. \end{aligned}$$

Then

$$\begin{aligned} A_{m+1}\le (C_{m+1})^{\frac{1}{2(\beta _{m+1}-1)}}A_m,~m=1,2,\ldots . \end{aligned}$$

Iterating this inequality, we conclude that there exists a constant \(C_0>0\), independent of m, such that

$$\begin{aligned} A_m\le \prod _{k=1}^{m}C_k^{\frac{1}{2(\beta _k-1)}}A_1\le C_0A_1,~\forall m. \end{aligned}$$
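
The existence of such a constant \(C_0\) can be checked directly: since \(C_k=C\beta _k^2\), \(\beta _k-1=\left( \frac{2_s^*}{2}\right) ^{k-1}(\beta _1-1)\) and \(2_s^*>2\),

$$\begin{aligned} \log \prod _{k=1}^{m}C_k^{\frac{1}{2(\beta _k-1)}}=\sum _{k=1}^{m}\frac{\log C+2\log \beta _k}{2(\beta _1-1)}\left( \frac{2}{2_s^*}\right) ^{k-1}\le \sum _{k=1}^{\infty }\frac{|\log C|+2\log \beta _k}{2(\beta _1-1)}\left( \frac{2}{2_s^*}\right) ^{k-1}<\infty , \end{aligned}$$

because \(\log \beta _k\) grows only linearly in k while \(\left( \frac{2}{2_s^*}\right) ^{k-1}\) decays geometrically.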

Thus, from (4.11),

$$\begin{aligned} \Vert v_n\Vert _\infty \le C_0A_1<\infty , \end{aligned}$$
(4.15)

and hence \(v_n\in L^{\infty }(\mathbb {R}^3)\). By (4.12),

$$\begin{aligned} \Vert v_n\Vert _\infty \le C, \end{aligned}$$
(4.16)

uniformly in \(n\in \mathbb {N}\). Finally, by the interpolation inequality \(\Vert v_n-u\Vert _q\le \Vert v_n-u\Vert _2^{\frac{2}{q}}\Vert v_n-u\Vert _\infty ^{1-\frac{2}{q}}\) and the convergence \(v_n\rightarrow u\) in \(L^2(\mathbb {R}^3)\), we have \(v_n\rightarrow u\) in \(L^q(\mathbb {R}^3)\) for all \(q\in [2,+\infty )\). This finishes the proof of Lemma 4.7. \(\square \)

Lemma 4.8

\(v_n(x)\rightarrow 0\) as \(|x|\rightarrow \infty \) uniformly in n.

Proof

Observe that \(v_n\) satisfies the equation

$$\begin{aligned} (-\Delta )^sv_n+v_n=\varUpsilon _n,~x\in \mathbb {R}^3, \end{aligned}$$

where

$$\begin{aligned} \varUpsilon _n(x)=v_n(x)-V(\varepsilon _n(x+y_n))v_n(x)-\phi _{v_n}^s(x)v_n(x)+K(\varepsilon _n(x+y_n))|v_n(x)|^{p-2}v_n(x),~x\in \mathbb {R}^3. \end{aligned}$$

Putting \(\varUpsilon (x)=u(x)-V(x_0)u(x)-\phi _u^s(x)u(x)+K(x_0)|u(x)|^{p-2}u(x)\), by Lemma 4.7, we see that

$$\begin{aligned} \varUpsilon _n\rightarrow \varUpsilon ~\text {in}~L^q(\mathbb {R}^3),~\forall ~q\in [2,+\infty ), \end{aligned}$$

and there exists a \(C_2>0\) such that

$$\begin{aligned} \Vert \varUpsilon _n\Vert _\infty \le C_2,~\forall ~n\in \mathbb {N}. \end{aligned}$$

From [18], we have that

$$\begin{aligned} v_n(x)=\mathcal {G}*\varUpsilon _n=\int _{\mathbb {R}^3}\mathcal {G}(x-y)\varUpsilon _n(y)dy, \end{aligned}$$

where \(\mathcal {G}\) is the Bessel Kernel

$$\begin{aligned} \mathcal {G}(x)=\mathcal {F}^{-1}\left( \frac{1}{1+|\xi |^{2s}}\right) . \end{aligned}$$

It is known from [18, Theorem 3.3] that \(\mathcal {G}\) is positive, radially symmetric and smooth in \(\mathbb {R}^3{\setminus } \{0\}\), that there is \(C>0\) such that \(\mathcal {G}(x)\le \frac{C}{|x|^{3+2s}}\), and that \(\mathcal {G}\in L^q(\mathbb {R}^3)\) for all \(q\in [1,\frac{3}{3-2s})\). Arguing as in the proof of [2, Lemma 2.6], we conclude that

$$\begin{aligned} v_n(x)\rightarrow 0\quad \text {as}~|x|\rightarrow \infty , \end{aligned}$$
(4.17)

uniformly in \(n\in \mathbb {N}\). \(\square \)

Proof of Theorem 4.1 First, we claim that there exists \(\rho _0>0\) such that \(\Vert v_n\Vert _\infty \ge \rho _0\) for all \(n\in \mathbb {N}\). In fact, suppose by contradiction that \(\Vert v_n\Vert _\infty \rightarrow 0\) as \(n\rightarrow \infty \). Then there exists \(n_0\in \mathbb {N}\) such that

$$\begin{aligned} K_{\max }\Vert v_n\Vert _\infty ^{p-2}<\frac{V_{\min }}{2}\quad \text {for}~n>n_0. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_n|^2dx+\int _{\mathbb {R}^3}V(\varepsilon _n (x+y_n))v_n^2dx\\&\quad \le \int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_n|^2dx+\int _{\mathbb {R}^3}V(\varepsilon _n (x+y_n))v_n^2dx+\int _{\mathbb {R}^3}\phi _{v_n}^sv_n^2dx\\&\quad =\int _{\mathbb {R}^3}K(\varepsilon _n(x+y_n))|v_n|^pdx\\&\quad \le K_{\max }\Vert v_n\Vert _\infty ^{p-2}\int _{\mathbb {R}^3}v_n^2dx. \end{aligned} \end{aligned}$$
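
Indeed, for \(n>n_0\) the right-hand side can be absorbed into the left-hand side: using \(V\ge V_{\min }\) and the chain of inequalities above,

$$\begin{aligned} \int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_n|^2dx+\frac{V_{\min }}{2}\int _{\mathbb {R}^3}v_n^2dx\le \int _{\mathbb {R}^3}|(-\Delta )^{\frac{s}{2}}v_n|^2dx+\int _{\mathbb {R}^3}V(\varepsilon _n (x+y_n))v_n^2dx-\frac{V_{\min }}{2}\int _{\mathbb {R}^3}v_n^2dx\le 0. \end{aligned}$$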

This implies that \(\Vert v_n\Vert =0\) for \(n>n_0\), which is impossible because \(v_n\rightarrow u\) in \(H^s(\mathbb {R}^3)\) and \(u\ne 0\). Hence the claim holds.

From [40, Proposition 2.9], we see that \(v_n\in C^{1,\alpha }(\mathbb {R}^3)\) for any \(\alpha <2s-1\). Thus, by (4.17) and the claim above, \(v_n\) has a global maximum point \(p_n\), and \(p_n\in B_{R_0}(0)\) for some \(R_0>0\). Hence, a global maximum point of \(u_{\varepsilon _n}\) is given by \(p_n+y_n\). Define \(\psi _n(x):=u_{\varepsilon _n}(x+y_n+p_n)\), where \(v_n(x)=u_{\varepsilon _n}(x+y_n)\). Since \(\{p_n\}\subset B_{R_0}(0)\) is bounded, \(\{\varepsilon _n(p_n+y_n)\}\) is bounded and, up to a subsequence, \(\varepsilon _n(p_n+y_n)\rightarrow x_0\in \mathcal {H}_1\). It follows from the boundedness of \(\{u_{\varepsilon _n}\}\) that \(\{\psi _n\}\) is bounded in \(H^s(\mathbb {R}^3)\), and we may assume that \(\psi _n\rightharpoonup \psi \) in \(H^s(\mathbb {R}^3)\) and \(\psi _n\rightarrow \psi \) in \(L_{loc}^q(\mathbb {R}^3)\) for \(q\in [1,2_s^*)\). On the other hand, by Lemma 4.1, we have

$$\begin{aligned} \int _{B_{\tilde{R}+R_0}(0)}\psi _n^2(x)dx\ge \int _{\{|x+p_n|<\tilde{R}\}}\psi _n^2(x)dx=\int _{B_{\tilde{R}}(y_n)}u_{\varepsilon _n}^2(x)dx\ge \sigma , \end{aligned}$$

so we obtain \(\psi \ne 0\). Moreover, arguing as above, we see that \(\psi \) is a ground state solution of (4.2) and \(\psi _n\rightarrow \psi \) in \(H^s(\mathbb {R}^3)\). Therefore, \(\psi _n\) possesses the same properties as \(v_n\), and we may assume that \(y_n\) is a global maximum point of \(u_{\varepsilon _n}\). Then, by Lemmas 4.1–4.5 above, one obtains Theorem 4.1. \(\square \)

5 Decay estimates

In this section, we establish decay estimates for \(v_n\).

Lemma 5.1

There exists \(C>0\) such that

$$\begin{aligned} v_n(x)\le \frac{C}{1+|x|^{3+2s}},\ \forall \ x\in \mathbb {R}^3. \end{aligned}$$

Proof

According to [18, Lemma 4.2], there exists a continuous function \(\bar{\omega }\) such that

$$\begin{aligned} 0<\bar{\omega }(x)\le \frac{C}{1+|x|^{3+2s}}, \end{aligned}$$
(5.1)

and

$$\begin{aligned} (-\Delta )^s\bar{\omega }+\frac{V_{\min }}{2}\bar{\omega }=0,\quad \text {in}\, \mathbb {R}^3{\setminus } B_{\bar{R}}(0) \end{aligned}$$
(5.2)

for some suitable \(\bar{R}>0\). Thanks to (4.17), we have that \(v_n(x)\rightarrow 0\) as \(|x|\rightarrow \infty \) uniformly in n. Therefore, for some large \(R_1>0\), we obtain

$$\begin{aligned} \begin{aligned} (-\Delta )^sv_n+\frac{V_{\min }}{2}v_n&=(-\Delta )^sv_n+V(\varepsilon _n(x+y_n)) v_n-\left( V(\varepsilon _n(x+y_n))-\frac{V_{\min }}{2}\right) v_n\\&=-\phi _{v_n}^sv_n+K(\varepsilon _n(x+y_n))|v_n|^{p-2}v_n- \left( V(\varepsilon _n(x+y_n))-\frac{V_{\min }}{2}\right) v_n\\&\le \left( K_{\max }|v_n|^{p-2}-\frac{V_{\min }}{2}\right) v_n\\&\le 0, \end{aligned} \end{aligned}$$
(5.3)

for \(x\in \mathbb {R}^3{\setminus } B_{R_1}(0)\). Now we take \(R_2:=\max \{\bar{R},R_1\}\) and set

$$\begin{aligned} z_n:=(m+1)\bar{\omega }-bv_n, \end{aligned}$$
(5.4)

where \(m:=\sup \limits _{n\in \mathbb {N}}\Vert v_n\Vert _\infty <\infty \) and \(b:=\min \limits _{\bar{B}_{R_2}(0)} \bar{\omega }>0\). We next show that \(z_n\ge 0\) in \(\mathbb {R}^3\). To this end, suppose by contradiction that there is a sequence \(\{x_n^j\}\) such that

$$\begin{aligned} \inf _{x\in \mathbb {R}^3} z_n(x)=\lim _{j\rightarrow \infty } z_n\left( x_n^j\right) <0. \end{aligned}$$
(5.5)

Notice that

$$\begin{aligned} \lim _{|x|\rightarrow \infty }\bar{\omega }(x)=0. \end{aligned}$$

Jointly with (4.17), we obtain

$$\begin{aligned} \lim \limits _{|x|\rightarrow \infty } z_n(x)=0, \end{aligned}$$

uniformly in \(n\in \mathbb {N}\). Consequently, the sequence \(\{x_n^j\}\) is bounded and therefore, up to a subsequence, we may assume that \(x_n^j\rightarrow x_n^*\) as \(j\rightarrow \infty \) for some \(x_n^*\in \mathbb {R}^3\). Hence (5.5) becomes

$$\begin{aligned} z_n(x_n^*)=\inf _{x\in \mathbb {R}^3} z_n(x)<0. \end{aligned}$$
(5.6)

From (5.6) and (2.1), we have

$$\begin{aligned} (-\Delta )^sz_n(x_n^*)=-\frac{C(s)}{2}\int _{\mathbb {R}^3}\frac{z_n(x_n^*+y)+z_n(x_n^*-y)-2z_n(x_n^*)}{|y|^{3+2s}}dy\le 0. \end{aligned}$$
(5.7)

By (5.4), the definition of b and \(0\le v_n\le m\), we get

$$\begin{aligned} z_n(x)\ge (m+1)\bar{\omega }(x)-bm\ge mb+\bar{\omega }(x)-mb=\bar{\omega }(x)>0,\quad \text {in}~B_{R_2}(0). \end{aligned}$$

Therefore, combining this with (5.6), we see that

$$\begin{aligned} x_n^*\in \mathbb {R}^3{\setminus } B_{R_2}(0). \end{aligned}$$
(5.8)

From (5.2)–(5.3), we conclude that

$$\begin{aligned} (-\Delta )^sz_n+\frac{V_{\min }}{2}z_n\ge 0,~\text {in}~\mathbb {R}^3{\setminus } B_{R_2}(0). \end{aligned}$$
(5.9)
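
Indeed, by the linearity of \((-\Delta )^s\) and the definition (5.4) of \(z_n\), recalling that \(R_2=\max \{\bar{R},R_1\}\), for \(x\in \mathbb {R}^3{\setminus } B_{R_2}(0)\) we have

$$\begin{aligned} (-\Delta )^sz_n+\frac{V_{\min }}{2}z_n=(m+1)\left( (-\Delta )^s\bar{\omega }+\frac{V_{\min }}{2}\bar{\omega }\right) -b\left( (-\Delta )^sv_n+\frac{V_{\min }}{2}v_n\right) \ge 0, \end{aligned}$$

where the first bracket vanishes by (5.2) and the second one is nonpositive by (5.3).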

Thanks to (5.8), we can evaluate (5.9) at the point \(x_n^*\); recalling (5.6) and (5.7), we conclude that

$$\begin{aligned} 0\le (-\Delta )^sz_n(x_n^*)+\frac{V_{\min }}{2}z_n(x_n^*)<0, \end{aligned}$$

which is a contradiction; hence \(z_n(x)\ge 0\) in \(\mathbb {R}^3\). That is, \(v_n\le (m+1)b^{-1}\bar{\omega }\), which, together with (5.1), implies that

$$\begin{aligned} v_n(x)\le \frac{C}{1+|x|^{3+2s}},\ \forall \ x\in \mathbb {R}^3. \end{aligned}$$

This completes the proof. \(\square \)

Proof of Theorem 1.1

Define \(\omega _n(x):=u_n(\frac{x}{\varepsilon _n})\). Then \(\omega _n\) is a positive ground state solution of system (1.1), and \(x_{\varepsilon _n}:=\varepsilon _n y_n\) is a maximum point of \(\omega _n\). By Theorem 4.1, conclusions (i) and (ii) of Theorem 1.1 hold. Moreover, by Lemma 5.1, we have

$$\begin{aligned} \begin{aligned} \omega _n(x)&=u_n\left( \frac{x}{\varepsilon _n}\right) =v_n\left( \frac{x}{\varepsilon _n}-y_n\right) \\&\le \frac{C}{1+|\frac{x}{\varepsilon _n}-y_n|^{3+2s}}\\&=\frac{C \varepsilon _n^{3+2s}}{ \varepsilon _n^{3+2s}+|x-\varepsilon _n y_n|^{3+2s}}\\&=\frac{C \varepsilon _n^{3+2s}}{ \varepsilon _n^{3+2s}+|x- x_{\varepsilon _n}|^{3+2s}},~\forall ~x\in \mathbb {R}^3. \end{aligned} \end{aligned}$$

Thus, the proof of Theorem 1.1 is completed. \(\square \)