1 Introduction

Recent mathematical studies of fluid mechanics have found that molecular dissipation is often better modelled by fractional powers of the Laplacian, \(-(-\Delta )^\alpha \) with \(\alpha >0\). In this paper, for \({\alpha ,\beta }\in (0,1)\), we consider the following generalized magnetohydrodynamic (MHD) system

$$\begin{aligned} \left\{ \begin{array}{ll} \partial _{t}u+u\cdot \nabla {u}=-\nabla {P}+b\cdot \nabla {b}-\eta (-\Delta )^{\alpha } u,&{}x\in {\mathbb {R}}^{3}, t>{0},\\ \partial _{t}b+u\cdot \nabla {b}=b\cdot \nabla {u}-\mu (-\Delta )^{\beta }b, &{}x\in {\mathbb {R}}^{3}, t>{0},\\ \nabla \cdot {u}=\nabla \cdot {b}=0,&{}x\in {\mathbb {R}}^{3}, t>{0},\\ (u,b)|_{t=0}=(u_0(x),b_0(x)),&{}x\in {\mathbb {R}}^3, \end{array} \right. \end{aligned}$$
(1.1)

with \(\eta \), \(\mu \) positive constants. Here \(u=u(x,t)=(u_1(x,t),u_2(x,t),u_3(x,t))\), \(b=b(x,t)= (b_1(x,t),b_2(x,t),b_3(x,t))\) and \(P=P(x,t)\) are non-dimensional quantities corresponding to the flow velocity, the magnetic field and the total kinetic pressure at the point \((x,t)\); \(u_0(x)\) and \(b_0(x)\) are the initial velocity and magnetic field, satisfying \(\nabla \cdot {u}_0=0\) and \(\nabla \cdot {b}_0=0\), respectively. Denoting the Fourier transform of a function z by \({\hat{z}}\), the fractional Laplacian is defined by

$$\begin{aligned} \widehat{(-\Delta )^{\alpha }{z}}(\xi )=|\xi |^{2\alpha }{\hat{z}}(\xi ). \end{aligned}$$
(1.2)

More details on \((-\Delta )^{\alpha }\) can be found in Chapter 5 of Stein’s book [34].
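To make (1.2) concrete, here is a minimal numerical sketch that applies \((-\Delta )^{\alpha }\) on a one-dimensional periodic grid via the FFT. It is only a toy stand-in for \({\mathbb {R}}^3\); the function name and grid size are our own choices.

```python
import numpy as np

def fractional_laplacian_1d(z, alpha, length=2 * np.pi):
    """Apply (-Delta)^alpha to samples of a periodic function on [0, length)
    by multiplying the Fourier transform by |xi|^(2*alpha), as in (1.2)."""
    n = len(z)
    # with d = length/n these are the integer wavenumbers 0, 1, ..., -1
    xi = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    return np.real(np.fft.ifft(np.abs(xi) ** (2 * alpha) * np.fft.fft(z)))

# sin(j*x) is an eigenfunction with eigenvalue |j|^(2*alpha); for |j| = 1
# the output equals the input for every alpha > 0.
x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
out = fractional_laplacian_1d(np.sin(x), alpha=0.75)
```

For instance, with \(\alpha =1/2\) the mode \(\sin (2x)\) is multiplied by \(|2|^{2\alpha }=2\), consistent with the symbol \(|\xi |^{2\alpha }\).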

When \(\alpha =\beta =1\), (1.1) reduces to the standard incompressible MHD equations. The MHD equations govern the dynamics of the velocity field u and the magnetic field b in electrically conducting fluids such as plasmas [2, 31]. Fundamental mathematical issues, such as the global regularity of their solutions, have generated extensive research, and many interesting results have been obtained. For example, Schonbek et al. [32] studied the large time behaviour of solutions to the n-dimensional (n-D) \((2\leqslant {n}\leqslant 4)\) MHD equations in weighted Sobolev spaces. They obtained very interesting results on the upper and lower bounds of the \(L^2\) decay. He and Xin [17, 18] considered the 3D MHD equations and showed that, if u satisfies

$$\begin{aligned} \nabla {u}\in {L^q(0,T;L^{p}({\mathbb {R}}^3))}\quad \text {for}\quad \frac{3}{p} +\frac{2}{q}=2\quad \text {with}\quad 1<q\leqslant 2, \end{aligned}$$
(1.3)

then the solution (u, b) is regular on [0, T]. Cao and Wu [4] established two regularity criteria for the 3D MHD equations:

$$\begin{aligned} u_z\in {L^q(0,T;L^{p}({\mathbb {R}}^3))}\quad \text {for}\quad \frac{3}{p} +\frac{2}{q}\leqslant 1\quad \text {with}\quad p\geqslant 3 \end{aligned}$$
(1.4)

and

$$\begin{aligned} P_z\in {L^q(0,T;L^{p}({\mathbb {R}}^3))}\quad \text {for}\quad \frac{3}{p} +\frac{2}{q}\leqslant \frac{7}{4}\quad \text {with}\quad p\geqslant \frac{12}{7}. \end{aligned}$$
(1.5)

That is, any solution (u, b) of the 3D MHD equations is regular if the derivative of u in one direction, say along the z-axis, is bounded in \(L^q(0,T;L^{p}({\mathbb {R}}^3))\) with (p, q) satisfying (1.4), or if the derivative of P in one direction satisfies (1.5). Readers may refer to [3, 5, 8, 13, 24, 27, 28, 30, 31, 33, 38, 39, 41] for more details.

The generalization of dissipation in the above manner has been implemented for other fluid systems, including the Navier-Stokes, Boussinesq, and surface quasi-geostrophic equations; see [6, 7, 11, 19, 20, 21, 25]. Studying these generalized equations has enabled researchers to gain a deeper understanding of the strengths and weaknesses of available mathematical methods and techniques and, in some cases, has motivated and inspired the invention of new methods. In the remainder of this introduction, we present the known results on generalized MHD equations in three major parameter domains: \((\text {i})~\eta =0,\,\mu >0\), \((\text {ii})~\eta =0,\,\mu =0\) and \((\text {iii})~\eta>0,\,\mu >0\).

When \(\eta =0,\,\mu >0\), (1.1) turns into the generalized MHD equations without viscous diffusion. In particular, if \(b\equiv 0\), (1.1) reduces to the 3D Euler equations, for which Beale, Kato and Majda [1] showed that if a solution of the system is initially smooth and loses its regularity at some later time, then the maximum vorticity necessarily grows without bound as the critical time approaches; equivalently, if the vorticity remains bounded, a smooth solution persists. Constantin [9] and Constantin et al. [10] generalized this result by linking the vorticity directions to the possibility of blow-up.

When \(\eta =0,\mu =0\), (1.1) becomes the ideal MHD equations. In order to extend the result of [1], Caflisch, Klapper and Steele [3] derived a necessary condition for singularity development in the ideal MHD equations. Gibbon and Ohkitani [14] investigated the regularity of a class of stretched solutions to the 3D ideal MHD equations through analytical criteria and pseudo-spectral computations.

When \(\eta>0,\,\mu >0\), Wu [40] showed that the n-D \((n\geqslant 3)\) generalized MHD equations possess global weak solutions corresponding to any \(L^2\) initial data with any \(\alpha >0\) and \(\beta >0\). Moreover, weak solutions associated with

$$\begin{aligned} \alpha \geqslant \frac{1}{2}+\frac{n}{4},~~~\beta \geqslant \frac{1}{2}+\frac{n}{4} \end{aligned}$$
(1.6)

are actually global classical solutions when their initial data are sufficiently smooth. As a special consequence, smooth solutions of the 3D generalized MHD equations with

$$\begin{aligned} \alpha \geqslant \frac{5}{4},~~~~~\beta \geqslant \frac{5}{4} \end{aligned}$$
(1.7)

do not develop finite-time singularities. So far the best result for the global regularity of the n-D generalized MHD equations has been derived in [45], where it has been proved that the system is globally regular as long as the following conditions

$$\begin{aligned} \alpha \geqslant \frac{1}{2}+\frac{n}{4}, ~~~\beta >0,~~~\alpha +\beta \geqslant {1}+\frac{n}{2} \end{aligned}$$
(1.8)

are satisfied. Tran, Yu and Zhai [35] extended the above results to the case \(\beta =0\): they considered the n-D generalized MHD equations with hyper-viscosity and zero resistivity, and proved that the system has a unique global classical solution if the following condition is satisfied:

$$\begin{aligned} \alpha \geqslant 1+\frac{n}{2}. \end{aligned}$$
(1.9)

Yamazaki [48] investigated an n-D generalized MHD system and proved its global well-posedness with logarithmically supercritical dissipation and diffusion, with a logarithmic power improved in comparison with the previous works [35, 45]. When \(n=2\), Tran, Yu and Zhai [36] showed that smooth solutions of the system are global in the following three cases:

$$\begin{aligned} \begin{aligned}&(\text {i})~~\alpha \geqslant \frac{1}{2},~~\beta \geqslant {1};\\&(\text {ii})~~0\leqslant \alpha \leqslant \frac{1}{2},~~2\alpha +\beta >2;\\&(\text {iii})~~\alpha \geqslant 2,~~\beta =0. \end{aligned} \end{aligned}$$
(1.10)

They also showed that in the inviscid case \(\eta =0\), if \(\beta >1\), smooth solutions are global as long as the direction of the magnetic field remains smooth enough. Interested readers can refer to [40, 42, 43, 44, 46] for more details.
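Since the sufficient conditions (1.6), (1.8) and (1.9) recalled above are plain inequalities in \((n,\alpha ,\beta )\), they are easy to tabulate. The sketch below is our own bookkeeping of the quoted inequalities, not code from the cited works; exact rational arithmetic avoids rounding artefacts at the critical borderline.

```python
from fractions import Fraction

def global_regularity_criteria(n, alpha, beta):
    """Return the list of sufficient conditions, among (1.6), (1.8), (1.9),
    that the pair (alpha, beta) satisfies in space dimension n."""
    crit = Fraction(1, 2) + Fraction(n, 4)      # the exponent 1/2 + n/4
    met = []
    if alpha >= crit and beta >= crit:
        met.append("(1.6)")
    if alpha >= crit and beta > 0 and alpha + beta >= 1 + Fraction(n, 2):
        met.append("(1.8)")
    if beta == 0 and alpha >= 1 + Fraction(n, 2):
        met.append("(1.9)")
    return met
```

For \(n=3\), the borderline pair \(\alpha =\beta =5/4\) satisfies both (1.6) and (1.8), while \((\alpha ,\beta )=(5/2,0)\) falls only under (1.9).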

There are few results, to our knowledge, on the asymptotic stability of solutions to problem (1.1). The first aim of this paper is to show the global existence and uniqueness of the classical solution to (1.1) in the whole space \({\mathbb {R}}^{3}\) by an energy method which refines the works of Guo and Wang [16] and Wang [37], under the assumption that the \(H^3\)-norm of the initial data is small, while the higher order derivatives can be arbitrarily large. Assuming in addition that the initial data belong to the homogeneous negative-index Sobolev space \({{\dot{H}}^{-s}({\mathbb {R}}^3)}\), we establish the asymptotic behavior of solutions as time goes to infinity by energy analysis, which is the second aim of this paper.

For simplicity, we introduce several notations which will be used throughout the sequel. We denote \(\Vert (u,b)\Vert _{H^N}:=\Vert u\Vert _{H^N}+\Vert b\Vert _{H^N}\), and omit the variables x, t of functions if it does not cause any confusion. We use \(H^s({\mathbb {R}}^3),~s\in {\mathbb {R}}\), to denote the usual Sobolev spaces with norm \(\Vert \cdot \Vert _{H^s}\) and \(L^p({\mathbb {R}}^3)~(1\leqslant {p}\leqslant \infty )\) to denote the usual \(L^p\) space with norm \(\Vert \cdot \Vert _{L^p}\). \(\partial ^k\) with an integer \(k\geqslant 0\) stands for the usual spatial derivatives of order k. When \(k<0\) or k is not a positive integer, \(\partial ^{k}\) stands for \(\Lambda ^{k}\), where \(\Lambda =(-\Delta )^{1/2}\) for notational convenience.

The result of global existence to (1.1) reads as follows.

Theorem 1.1

Assume \((u_0,b_0)\in {H^{N}({\mathbb {R}}^3)}\times {H^{N}({\mathbb {R}}^3)}\) for \({N}\geqslant {3}\), \({\alpha ,~\beta }\in (\frac{1}{2},1)\). There exists a constant \({\varepsilon _0}>0\), such that if

$$\begin{aligned} \Vert u_0\Vert _{H^3({\mathbb {R}}^3)}+\Vert b_0\Vert _{H^3({\mathbb {R}}^3)}\leqslant \varepsilon _0, \end{aligned}$$
(1.11)

then system (1.1) admits a unique global solution (u, b) satisfying for all \(t\geqslant 0\),

$$\begin{aligned} \begin{aligned}&\Vert u(t)\Vert _{H^N({\mathbb {R}}^3)}^2+\Vert b(t)\Vert _{H^N({\mathbb {R}}^3)}^2 +\int _{0}^{t}\left( \Vert \partial ^{\alpha }u(\tau )\Vert _{H^N({\mathbb {R}}^3)}^2 +\Vert \partial ^{\beta }b(\tau )\Vert _{H^N({\mathbb {R}}^3)}^2\right) \text {d}\tau \\&\quad \leqslant ~{C}\left( \Vert u_0\Vert _{H^N({\mathbb {R}}^3)}^2+\Vert b_0\Vert _{H^N({\mathbb {R}}^3)}^2\right) , \end{aligned} \end{aligned}$$
(1.12)

where C is a positive constant independent of t.

Our second result concerns the asymptotic decay rates of solutions to (1.1). We introduce the homogeneous negative index Sobolev space \({{\dot{H}}^{-s}({\mathbb {R}}^3)}\):

$$\begin{aligned} {{\dot{H}}^{-s}({\mathbb {R}}^3)}:=\big \{f\in {L^2({\mathbb {R}}^3)}: \big \Vert |\xi |^{-s}{\hat{f}}(\xi )\big \Vert _{L^2({\mathbb {R}}^3)}<\infty \big \} \end{aligned}$$
(1.13)

endowed with the norm \(\Vert f\Vert _{{\dot{H}}^{-s}({\mathbb {R}}^3)}:=\big \Vert |\xi |^{-s}{\hat{f}}(\xi )\big \Vert _{L^2({\mathbb {R}}^3)}\). Thanks to the mass conservation, we can find that \({\dot{H}}^{-s}({\mathbb {R}}^3)\) is a natural function space for system (1.1). Under the assumption that the \({\dot{H}}^{-s}({\mathbb {R}}^3)\) norms of the initial data are small, we derive the decay rates of solutions to (1.1) and of their higher order spatial derivatives. More precisely, we have the following decay estimates.

Theorem 1.2

Let the assumptions in Theorem 1.1 hold. Furthermore, if \((u_0,b_0)\in {\dot{H}}^{-s}({\mathbb {R}}^3)\times {\dot{H}}^{-s}({\mathbb {R}}^3)\) for some \(s\in [0,\frac{3}{2})\), then for any \(t>0\), the solution (u, b) of (1.1) obtained in Theorem 1.1 with suitably small \(\varepsilon _0\) has the following decay rates:

$$\begin{aligned} \begin{aligned}&\Vert \partial ^{k}u(t)\Vert _{L^2({\mathbb {R}}^3)}+\Vert \partial ^{k}b(t) \Vert _{L^2({\mathbb {R}}^3)}\leqslant {C}(1+t)^{-\dfrac{s+k}{2\min \{\alpha ,\beta \}}}, ~~~(k=0,1,\cdots ,N-1)\\ \end{aligned} \end{aligned}$$
(1.14)

and

$$\begin{aligned} \begin{aligned}&\Vert \partial ^{N}u(t)\Vert _{L^2({\mathbb {R}}^3)}+\Vert \partial ^{N}b(t) \Vert _{L^2({\mathbb {R}}^3)}\leqslant {C}(1+t)^{-\dfrac{s+N-1}{2\min \{\alpha ,\beta \}}}, \end{aligned} \end{aligned}$$
(1.15)

where C is a positive constant independent of t.
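At the linear level, rates of the kind appearing in (1.14) stem from the Fourier multiplier \(|\xi |^{m}e^{-t|\xi |^{2\alpha }}\) with \(m=s+k\): elementary calculus shows its supremum over \(\xi \) equals \(\big (\tfrac{m}{2\alpha et}\big )^{m/(2\alpha )}\), hence decays like \(t^{-m/(2\alpha )}\). The following numerical sketch of this pointwise bound is our own illustration (grid and helper name chosen for convenience), restricted to a single fractional heat semigroup.

```python
import numpy as np

def multiplier_peak(m, alpha, t):
    """Grid approximation of sup_{xi>0} |xi|^m * exp(-t*|xi|^(2*alpha)).
    The maximizer satisfies xi^(2*alpha) = m/(2*alpha*t), giving the exact
    peak value (m/(2*alpha*e*t))^(m/(2*alpha)), i.e. decay t^(-m/(2*alpha))."""
    xi = np.linspace(1e-6, 10.0, 200_000)
    return np.max(xi ** m * np.exp(-t * xi ** (2 * alpha)))

# take m = s + k = 1.5 and alpha = 0.75 purely for illustration
peak_t1 = multiplier_peak(1.5, 0.75, 1.0)   # exact value here: e^{-1}
peak_t4 = multiplier_peak(1.5, 0.75, 4.0)   # quadrupling t divides it by 4
```

Quadrupling t multiplies the peak by \(4^{-m/(2\alpha )}\), which is exactly the algebraic decay rate claimed for this multiplier.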

Remark 1

In the proof of Theorem 1.1, we need the assumption \(\alpha ,\beta >{1/2}\). However, it is still unknown whether this assumption is optimal. The optimality of the lower bound for \(\alpha ,\beta \) may indeed be questioned, particularly because the initial data are small; see for instance [15].

Remark 2

Notice that for the global existence of the solution in Theorem 1.1, we only assume that \(\Vert u_0\Vert _{H^3({\mathbb {R}}^3)}+\Vert b_0\Vert _{H^3({\mathbb {R}}^3)}\) is small enough, while the higher order derivatives can be arbitrarily large. The constraint \(s<{3}/{2}\) in Theorem 1.2 stems from applying Lemma 2.5, which is used to estimate the nonlinear terms when performing the negative estimate via \(\Lambda ^{-s}\).

As far as we know, there are few studies on the decay estimates for MHD equations. Recently, in [12], the authors focused on a system of the 2D MHD equations with the kinematic dissipation given by the fractional operator \((-\Delta )^\alpha \) and the magnetic diffusion by partial Laplacian. They developed a systematic approach for systems with partial dissipation to extract large-time decay rates for solutions.

Studies such as [35, 36, 40, 45] all considered system (1.1) under the condition that \(\alpha \) or \(\beta \) is greater than or equal to 1. Our results, Theorems 1.1 and 1.2, are established under the condition \(1/2<\alpha ,\,\beta <1\). Therefore, we need some new ideas and techniques (see the proofs of Lemmas 3.1 and 3.2) to control the terms \(b\cdot \nabla {b}\) and \(b\cdot \nabla {u}\) in the proofs of global existence and asymptotic stability of solutions to (1.1). Yamazaki [47] considered the 3D damped Euler equations and proved global well-posedness for small initial data in a critical Besov space. In fact, although the chosen spaces are different, our proof of Theorem 1.1 is similar to that of Proposition 2.3 in [47].

The rest of this paper is arranged as follows. In Sect. 2, we give some useful inequalities which are fundamental to the arguments. In Sect. 3, we establish the a priori estimates and the local existence of the classical solution to (1.1), and then complete the proof of Theorem 1.1. Finally, Sect. 4 is devoted to deriving the decay estimates and proving Theorem 1.2. For convenience, we will write \(a\lesssim {b}\) if \(a\leqslant {C}b\), where the positive constant C depends only on the parameters coming from the problem.

2 Preliminary

In this section, we introduce some lemmas which will be used in the next section.

Lemma 2.1

Let \(0\leqslant {k,\,m}\leqslant {l}\) and \(1\leqslant {p,\,q,\,r}\leqslant {\infty }\). Then we have

$$\begin{aligned} \Vert \partial ^{m}f\Vert _{L^{p}({\mathbb {R}}^3)}\lesssim \Vert \partial ^{k}f \Vert _{L^{q}({\mathbb {R}}^3)}^{1-\theta }\Vert \partial ^{l}f\Vert _{L^{r}({\mathbb {R}}^3)}^{\theta }, \end{aligned}$$
(2.1)

where \(\theta \in [0,1]\) and \(k,\,m,\,l\) satisfy

$$\begin{aligned} \frac{m}{3}-\frac{1}{p}=\left( \frac{k}{3}-\frac{1}{q}\right) (1-\theta )+\left( \frac{l}{3} -\frac{1}{r}\right) \theta . \end{aligned}$$

Especially, when \(p=\infty \), we require that \(\theta \in (0,1)\), \(k\leqslant {m+1}\) and \(l\geqslant {m+2}\).

Proof

One can refer to [29, p125, Theorem] for instance. \(\square \)

Lemma 2.2

Let \(k\geqslant {1}\) be an integer and define the commutator

$$\begin{aligned} {[}\partial ^{k},f]g=\partial ^{k}(fg)-f\partial ^{k}g. \end{aligned}$$
(2.2)

Then we have

$$\begin{aligned} \big \Vert [\partial ^{k},f]g\big \Vert _{L^{p}({\mathbb {R}}^{n})}\lesssim \Vert \partial {f}\Vert _{L^{p_1}({\mathbb {R}}^{n})} \Vert \partial ^{k-1}g\Vert _{L^{p_2}({\mathbb {R}}^{n})} +\Vert \partial ^{k}f\Vert _{L^{p_3}({\mathbb {R}}^{n})}\Vert g\Vert _{L^{p_4}({\mathbb {R}}^{n})}, \end{aligned}$$
(2.3)

and for \(k\geqslant {0}\)

$$\begin{aligned} \Vert \partial ^{k}(fg)\Vert _{L^{p}({\mathbb {R}}^{n})}\lesssim \Vert f\Vert _{L^{p_1}({\mathbb {R}}^{n})} \Vert \partial ^{k}g\Vert _{L^{p_2}({\mathbb {R}}^{n})} +\Vert \partial ^{k}f\Vert _{L^{p_3}({\mathbb {R}}^{n})}\Vert g\Vert _{L^{p_4}({\mathbb {R}}^{n})}, \end{aligned}$$
(2.4)

where \(p,\,p_2,\,p_3\in (1,\infty )\) with \(\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}=\frac{1}{p_3}+\frac{1}{p_4}\).

Proof

For \(p=p_2=p_3=2\), it can be proved by using Lemma 2.1. For the general cases, one may refer to [22, Lemma 3.1]. \(\square \)

Lemma 2.3

(Kato-Ponce’s commutator estimates.) Let \(s>0\) and \(1<p<\infty \). Then

$$\begin{aligned} \big \Vert [(-\Delta )^{s/2},f]g\big \Vert _{L^{p}({\mathbb {R}}^{n})} \lesssim \Vert \partial {f}\Vert _{L^{p_1}({\mathbb {R}}^{n})} \Vert (-\Delta )^{(s-1)/2}g\Vert _{L^{p_2}({\mathbb {R}}^{n})} +\Vert (-\Delta )^{s/2}f\Vert _{L^{p_3}({\mathbb {R}}^{n})}\Vert g\Vert _{L^{p_4}({\mathbb {R}}^{n})}, \end{aligned}$$
(2.5)

and

$$\begin{aligned} \Vert (-\Delta )^{s/2}(fg)\Vert _{L^{p}({\mathbb {R}}^{n})} \lesssim \Vert f\Vert _{L^{p_1}({\mathbb {R}}^{n})} \Vert (-\Delta )^{s/2}g\Vert _{L^{p_2}({\mathbb {R}}^{n})} +\Vert (-\Delta )^{s/2}f\Vert _{L^{p_3}({\mathbb {R}}^{n})}\Vert g\Vert _{L^{p_4}({\mathbb {R}}^{n})} \end{aligned}$$
(2.6)

with \(1<p_j\leqslant \infty \,(j=1,\,4)\) and \(1<p_j<\infty \,(j=2,\,3)\) such that \(\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}=\frac{1}{p_3}+\frac{1}{p_4}\).

Proof

One can refer to [23] for instance. \(\square \)

Lemma 2.4

Let \(\alpha >0\), \(s\geqslant 0\) and \(k\geqslant 0\). Then

$$\begin{aligned} \Vert \partial ^{k}f\Vert _{L^2({\mathbb {R}}^n)}\lesssim \Vert \partial ^{k+\alpha } f\Vert _{L^2({\mathbb {R}}^n)}^{1-\theta }\Vert \Lambda ^{-s}f\Vert _{L^2({\mathbb {R}}^n)}^\theta \end{aligned}$$
(2.7)

with \(\theta =\frac{\alpha }{s+k+\alpha }\).

Proof

By the Parseval theorem and Hölder’s inequality, we can easily get (2.7). See [50] for instance.

\(\square \)
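The proof of Lemma 2.4 is essentially Hölder's inequality on the Fourier side: \(|\xi |^{2k}=(|\xi |^{2(k+\alpha )})^{1-\theta }(|\xi |^{-2s})^{\theta }\) holds precisely when \(\theta =\frac{\alpha }{s+k+\alpha }\), so (2.7) in fact holds with constant 1. The sketch below checks this on a random discrete spectrum, a one-dimensional stand-in for the \({\mathbb {R}}^n\) integral; all names are our own.

```python
import numpy as np

def sobolev_norms(spec2, xi, k, alpha, s):
    """Fourier-sum versions of ||d^k f||_{L^2}, ||d^{k+alpha} f||_{L^2} and
    ||Lambda^{-s} f||_{L^2} for squared coefficients spec2 >= 0 on xi != 0."""
    w = np.abs(xi)
    return (np.sqrt(np.sum(w ** (2 * k) * spec2)),
            np.sqrt(np.sum(w ** (2 * (k + alpha)) * spec2)),
            np.sqrt(np.sum(w ** (-2 * s) * spec2)))

rng = np.random.default_rng(0)
xi = np.arange(1.0, 65.0)          # nonzero frequencies only
spec2 = rng.random(64)             # random squared Fourier coefficients
k, alpha, s = 2, 0.6, 1.0
theta = alpha / (s + k + alpha)    # the exponent of Lemma 2.4
lhs, high, low = sobolev_norms(spec2, xi, k, alpha, s)
```

Since Hölder's inequality holds for discrete sums as well, the bound \(\texttt{lhs} \leqslant \texttt{high}^{1-\theta }\,\texttt{low}^{\theta }\) holds here with constant 1, up to floating-point error.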

Lemma 2.5

Assume that \(1<p<q<\infty \), \(0<s<3\) and \(\frac{1}{q}+\frac{s}{3}=\frac{1}{p}\). It holds that

$$\begin{aligned} \Vert \Lambda ^{-s}f\Vert _{L^q({\mathbb {R}}^3)}\lesssim ~\Vert f\Vert _{L^p({\mathbb {R}}^3)}. \end{aligned}$$
(2.8)

Proof

It follows from the Hardy-Littlewood-Sobolev theorem, and one can see [34, p119, Theorem 1] for instance. \(\square \)

3 Proof of Local and Global Existence

In this section, we investigate the global existence of solutions to (1.1). Since the fractional powers \(-(-\Delta )^\alpha \) and \(-(-\Delta )^\beta \) with \(\alpha ,\,\beta \in (\frac{1}{2},1)\) cause some new mathematical challenges, some new ideas and techniques are needed here. First of all, we derive the a priori estimates for solutions of (1.1) as follows.

Lemma 3.1

Let \(\alpha ,~\beta \in (\frac{1}{2},1)\) and \(N\geqslant 3\). Suppose that (u, b) is a solution of (1.1). Then there exists a small enough \(\varepsilon >0\) such that if

$$\begin{aligned} \Vert (u,b)\Vert _{H^3}\leqslant \varepsilon , \end{aligned}$$
(3.1)

we have

$$\begin{aligned} \Vert (u(t),b(t))\Vert _{H^N}^2+\int _{0}^{t}\left( \Vert \partial ^{\alpha }u(\tau ) \Vert _{H^N}^2+\Vert \partial ^{\beta }b(\tau )\Vert _{H^N}^2\right) \text {d}\tau \leqslant {C_1}\Vert (u_0,b_0)\Vert _{H^N}^2 \end{aligned}$$
(3.2)

hold for any \(t\geqslant 0\), where \(C_1\) is a positive constant independent of t.

Proof

For \(0\leqslant {k}\leqslant {N}\), applying \(\partial ^k\) to the first two equations of (1.1), and taking the inner product with \(\partial ^k{u}\) and \(\partial ^k{b}\), respectively, we obtain

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\text {d}}{\text {d}t}\left( \Vert \partial ^k{u}\Vert _{L^2}^2 +\Vert \partial ^k{b}\Vert _{L^2}^2\right) +\eta \Vert \partial ^{k+\alpha }u\Vert _{L^2}^2 +\mu \Vert \partial ^{k+\beta }b\Vert _{L^2}^2\\&\quad =-\int _{{\mathbb {R}}^3}\partial ^{k}\left( (u\cdot \nabla ){u}\right) \cdot \partial ^{k}u\text {d}x +\int _{{\mathbb {R}}^3}\partial ^{k}\left( (b\cdot \nabla ){b}\right) \cdot \partial ^{k}u\text {d}x\\&\qquad -\int _{{\mathbb {R}}^3}\partial ^{k}\left( (u\cdot \nabla ){b}\right) \cdot \partial ^{k}b\text {d}x +\int _{{\mathbb {R}}^3}\partial ^{k}\left( (b\cdot \nabla ){u}\right) \cdot \partial ^{k}b\text {d}x\\&\quad =:I_1+I_2+I_3+I_4. \end{aligned} \end{aligned}$$
(3.3)

It is obvious that \(I_1=I_3=I_2+I_4=0\) in the case \(k=0\). Now we are going to estimate the terms \(I_1\)-\(I_4\) for \(0<k\leqslant {N}\).

The estimate for \(I_1\). First, using Lemma 2.1, we have the following inequality:

$$\begin{aligned} \Vert \partial {u}\Vert _{L^{\frac{3}{2\alpha }}}\lesssim \Vert u\Vert _{L^2}^{\frac{3 (2\alpha -1)}{2(1+\alpha )}} \Vert \partial ^{1+\alpha }u\Vert _{L^2}^{\frac{5-4\alpha }{2(1+\alpha )}}\lesssim \Vert u\Vert _{H^3}. \end{aligned}$$
(3.4)
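The two exponents in (3.4) must sum to one and satisfy the index relation of Lemma 2.1 with \(m=1\), \(p=3/(2\alpha )\), \(k=0\), \(q=r=2\) and \(l=1+\alpha \). This bookkeeping can be checked by exact rational arithmetic; the helper names below are our own.

```python
from fractions import Fraction

def gn_balance(m, p, k, q, l, r, theta, n=3):
    """Index relation of Lemma 2.1:
    m/n - 1/p == (k/n - 1/q)*(1 - theta) + (l/n - 1/r)*theta."""
    lhs = Fraction(m) / n - Fraction(1) / p
    rhs = ((Fraction(k) / n - Fraction(1) / q) * (1 - theta)
           + (Fraction(l) / n - Fraction(1) / r) * theta)
    return lhs == rhs

def exponents_3_4(a):
    """The two exponents appearing in (3.4), as functions of alpha = a."""
    return (3 * (2 * a - 1) / (2 * (1 + a)), (5 - 4 * a) / (2 * (1 + a)))
```

The same bookkeeping can be applied to the companion estimates (3.8) and (3.9) below.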

Recalling that \(\nabla \cdot {u}=0\), it holds that

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^3}(u\cdot \nabla )\partial ^ku\cdot \partial ^ku\text {d}x =-\frac{1}{2}\int _{{\mathbb {R}}^3}u\cdot \nabla |\partial ^k{u}|^2\text {d}x=0. \end{aligned} \end{aligned}$$
(3.5)

Employing (3.5), \(I_1\) can be written as

$$\begin{aligned} \begin{aligned} I_1=-\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (u\cdot \nabla ){u}\right) -(u\cdot \nabla )\partial ^ku\right) \cdot \partial ^ku\text {d}x. \end{aligned} \end{aligned}$$
(3.6)

Then applying Kato-Ponce’s commutator estimate (2.5) in Lemma 2.3 and Hölder’s inequality, together with (3.4), we arrive at

$$\begin{aligned} \begin{aligned} I_1\lesssim&\Vert \partial ^{k}\left( (u\cdot \nabla ){u}\right) -(u\cdot \nabla ) \partial ^ku\Vert _{L^{\frac{6}{3+2\alpha }}}\Vert \partial ^{k}u\Vert _{L^{\frac{6}{3-2\alpha }}}\\ \lesssim&\Vert \partial {u}\Vert _{L^{\frac{3}{2\alpha }}}\Vert \partial ^{k}u\Vert _{L^{\frac{6}{3-2\alpha }}}^2\\ \lesssim&\Vert u\Vert _{H^3}\Vert \partial ^{k+\alpha }u\Vert _{L^2}^2\\ \lesssim&\varepsilon \Vert \partial ^{k+\alpha }u\Vert _{L^2}^2. \end{aligned} \end{aligned}$$
(3.7)

The estimates for \(I_2\) and \(I_4\). For the terms \(I_2\) and \(I_4\), employing Lemma 2.1, it follows that

$$\begin{aligned} \begin{aligned} \Vert \partial {u}\Vert _{L^{\frac{3}{2\beta }}}\lesssim \Vert u\Vert _{L^2}^{\frac{2\alpha +4\beta -3}{2(1+\alpha )}} \Vert \partial ^{1+\alpha }u\Vert _{L^2}^{\frac{5-4\beta }{2(1+\alpha )}}\lesssim \Vert u\Vert _{H^3} \end{aligned} \end{aligned}$$
(3.8)

and

$$\begin{aligned} \begin{aligned} \Vert \partial {b}\Vert _{L^{\frac{3}{\alpha +\beta }}}\lesssim \Vert b\Vert _{L^2}^{\frac{2\alpha +4\beta -3}{2(1+\beta )}} \Vert \partial ^{1+\beta }b\Vert _{L^2}^{\frac{5-2(\alpha +\beta )}{2(1+\beta )}}\lesssim \Vert b\Vert _{H^3}. \end{aligned} \end{aligned}$$
(3.9)

Indeed, inspired by [49], noting that

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}^3}(b\cdot \nabla )\partial ^kb\cdot \partial ^ku\text {d}x+\int _{{\mathbb {R}}^3} (b\cdot \nabla )\partial ^ku\cdot \partial ^kb\text {d}x\\&\quad = \int _{{\mathbb {R}}^3}(b\cdot \nabla )(\partial ^ku\cdot \partial ^kb)\text {d}x =-\int _{{\mathbb {R}}^3}(\nabla \cdot {b})(\partial ^ku\cdot \partial ^kb)\text {d}x=0, \end{aligned} \end{aligned}$$
(3.10)

we can obviously find that

$$\begin{aligned} \begin{aligned} I_2+I_4=&\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (b\cdot \nabla ){b}\right) -(b\cdot \nabla )\partial ^kb\right) \cdot \partial ^ku\text {d}x\\&+\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (b\cdot \nabla ){u}\right) -(b\cdot \nabla )\partial ^ku\right) \cdot \partial ^kb\text {d}x. \end{aligned} \end{aligned}$$
(3.11)

Therefore, applying (2.5), Hölder’s and Cauchy’s inequalities, along with (3.8) and (3.9), we have

$$\begin{aligned} \begin{aligned} I_2+I_4\lesssim&\left( \Vert \partial {b}\Vert _{L^{\frac{3}{\alpha +\beta }}} \Vert \partial ^kb\Vert _{L^{\frac{6}{3-2\beta }}} +\Vert \partial ^kb\Vert _{L^{\frac{6}{3-2\beta }}}\Vert \partial {b}\Vert _{L^{\frac{3}{\alpha +\beta }}}\right) \Vert \partial ^ku\Vert _{L^{\frac{6}{3-2\alpha }}}\\&+\left( \Vert \partial {b}\Vert _{L^{\frac{3}{\alpha +\beta }}}\Vert \partial ^ku\Vert _{L^{\frac{6}{3-2\alpha }}} +\Vert \partial ^kb\Vert _{L^{\frac{6}{3-2\beta }}}\Vert \partial {u} \Vert _{L^{\frac{3}{2\beta }}}\right) \Vert \partial ^kb\Vert _{L^{\frac{6}{3-2\beta }}}\\ \lesssim&\Vert \partial {b}\Vert _{L^{\frac{3}{\alpha +\beta }}} \Vert \partial ^{k+\alpha }u\Vert _{L^2}\Vert \partial ^{k+\beta }b\Vert _{L^2} +\Vert \partial {u}\Vert _{L^{\frac{3}{2\beta }}}\Vert \partial ^{k+\beta }b\Vert _{L^2}^2\\ \lesssim&\varepsilon \left( \Vert \partial ^{k+\alpha }u\Vert _{L^2}^2 +\Vert \partial ^{k+\beta }b\Vert _{L^2}^2\right) . \end{aligned} \end{aligned}$$
(3.12)

The estimate for \(I_3\). Similarly, we estimate the term \(I_3\). Due to \(\nabla \cdot {u}=0\), we obtain

$$\begin{aligned} \begin{aligned} \int _{{\mathbb {R}}^3}(u\cdot \nabla )\partial ^kb\cdot \partial ^kb\text {d}x =-\frac{1}{2}\int _{{\mathbb {R}}^3}u\cdot \nabla |\partial ^k{b}|^2\text {d}x=0. \end{aligned} \end{aligned}$$
(3.13)

Owing to the same arguments in (3.6)–(3.7), recalling (2.5), (3.8) and (3.9), together with Hölder’s and Cauchy’s inequalities, we observe

$$\begin{aligned} \begin{aligned} I_3=&-\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (u\cdot \nabla ){b} \right) -(u\cdot \nabla )\partial ^kb\right) \cdot \partial ^kb\text {d}x\\ \lesssim&\left( \Vert \partial {u}\Vert _{L^{\frac{3}{2\beta }}}\Vert \partial ^{k}b\Vert _{L^{\frac{6}{3-2\beta }}} +\Vert \partial ^{k}u\Vert _{L^{\frac{6}{3-2\alpha }}}\Vert \partial {b} \Vert _{L^{\frac{3}{\alpha +\beta }}}\right) \Vert \partial ^kb\Vert _{L^{\frac{6}{3-2\beta }}}\\ \lesssim&\Vert u\Vert _{H^3}\Vert \partial ^{k+\beta }b\Vert _{L^2}^2+\Vert b\Vert _{H^3}\Vert \partial ^{k+\alpha }u\Vert _{L^2}\Vert \partial ^{k+\beta }b\Vert _{L^2}\\ \lesssim&\varepsilon \left( \Vert \partial ^{k+\alpha }u\Vert _{L^2}^2+\Vert \partial ^{k+\beta }b\Vert _{L^2}^2\right) . \end{aligned} \end{aligned}$$
(3.14)

Hence, plugging (3.7), (3.12) and (3.14) into (3.3), and summing up with respect to k from 0 to N, we obtain

$$\begin{aligned} \begin{aligned} \frac{\text {d}}{\text {d}t}\left( \Vert u\Vert _{H^N}^2+\Vert b\Vert _{H^N}^2\right) +C\left( \Vert \partial ^\alpha {u}\Vert _{H^N}^2+\Vert \partial ^\beta {b}\Vert _{H^N}^2\right) \leqslant {0}. \end{aligned} \end{aligned}$$
(3.15)

Then integrating it from 0 to t, we complete the proof of Lemma 3.1. \(\square \)

Next, we prove the local existence for (1.1) by induction. The key is to construct appropriate approximate solutions. We construct the solution sequence \((X^j)_{j\geqslant 0}:=(u^j,~b^j)_{j\geqslant 0}\) by iteratively solving the following Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} \partial _{t}u^{j+1}+u^j\cdot \nabla {u^{j+1}}=-\nabla {P^{j+1}}+b^j\cdot \nabla {b^{j+1}}-\eta (-\Delta )^{\alpha }u^{j+1},&{}x\in {\mathbb {R}}^{3}, t>{0},\\ \partial _{t}b^{j+1}+u^j\cdot \nabla {b^{j+1}}=b^j\cdot \nabla {u^{j+1}}-\mu (-\Delta )^{\beta }b^{j+1},&{}x\in {\mathbb {R}}^{3}, t>{0},\\ \nabla \cdot {u^{j+1}}=\nabla \cdot {b^{j+1}}=0,&{}x\in {\mathbb {R}}^{3}, t>{0},\\ \end{array} \right. \end{aligned}$$
(3.16)

where

$$\begin{aligned} \begin{aligned}&(u^{j+1},b^{j+1})|_{t=0}=(u_0(x),b_0(x)):=X_0, ~~~~~~~x\in {\mathbb {R}}^3 \end{aligned} \end{aligned}$$
(3.17)

for \(j\geqslant 0\). Set \(X^0=0\) and solve (3.16) with \(j=0\) to obtain \(X^1\). Similarly, we define \(X^j\) iteratively.
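The structure of (3.16), namely linearizing by freezing the convecting fields at step j, can be mimicked on a scalar toy ODE: for \(u'=-u+u^2\), freeze one factor of the quadratic term at the previous iterate, so every step solves a linear problem with an explicit integrating-factor solution. This is only our own illustration of the iteration pattern, not the PDE scheme itself.

```python
import numpy as np

# Toy analogue of (3.16): each step solves the LINEAR problem
#   d/dt u^{j+1} = -u^{j+1} + u^j * u^{j+1},
# whose solution is u^{j+1}(t) = u0 * exp(-t + int_0^t u^j ds).
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def picard_step(u_prev, u0):
    """One iteration: integrating-factor solution of the frozen-coefficient ODE."""
    integral = np.concatenate(
        ([0.0], np.cumsum((u_prev[1:] + u_prev[:-1]) * dt / 2)))  # trapezoid rule
    return u0 * np.exp(-t + integral)

u0 = 0.1                      # small data, in the spirit of Lemma 3.3
u = np.zeros_like(t)          # start from X^0 = 0, as in the text
diffs = []
for _ in range(6):
    u_new = picard_step(u, u0)
    diffs.append(np.max(np.abs(u_new - u)))
    u = u_new
```

With small data the successive sup-norm differences contract by roughly a fixed factor per step, mirroring the uniform bound sought in Lemma 3.3.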

Lemma 3.2

Let \(\alpha ,\,\beta \in (\frac{1}{2},1)\). Suppose that initial data \((u_0,b_0)\in {H^N}\times {H^N}\) with \(N\geqslant 3\). Then there exists a constant \(T_1>0\) such that (1.1) possesses a unique classical solution satisfying

$$\begin{aligned} (u,b)\in {L^\infty (0,T_1;H^N)}~~\text {and}~~(\partial ^\alpha {u},\partial ^\beta {b})\in {L^2(0,T_1;H^N)}. \end{aligned}$$

Proof

The readers may refer to the proof of Proposition 3.6 in [26] by using mollifier and Picard theorem (see [26, Theorem 3.1]). We omit the details here for brevity. \(\square \)

Lemma 3.3

Assume \(\alpha ,\,\beta \in (\frac{1}{2},1)\). There exist small constants \(\varepsilon _0>0\), \(T_2>0\) and \(\varepsilon _1>0\) such that if \(\Vert (u_0,b_0)\Vert _{H^3}\leqslant \varepsilon _0\), then for any \(j\geqslant 0\), \((u^j,b^j)\in {C}([0,T_2];H^3\times {H}^3)\) is well-defined and

$$\begin{aligned} \begin{aligned} \sup _{0\leqslant {t}\leqslant {T_2}}\Vert (u^j,b^j)\Vert _{H^3}\leqslant \varepsilon _1, ~\text {for}~j\geqslant 0. \end{aligned} \end{aligned}$$
(3.18)

Proof

We prove it by induction. Suppose that (3.18) holds for some \(j\geqslant 0\), with \(\varepsilon _1>0\) small enough to be specified later. To prove (3.18) for \(j+1\), we need some energy estimates on \((u^{j+1},b^{j+1})\). Applying \(\partial ^k\) to equations (3.16)\(_1\) and (3.16)\(_2\), taking the inner product with \(\partial ^{k}{u}^{j+1}\) and \(\partial ^{k}{b}^{j+1}\), respectively, we arrive at

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\text {d}}{\text {d}t}\left( \Vert \partial ^k{u}^{j+1}\Vert _{L^2}^2 +\Vert \partial ^k{b}^{j+1}\Vert _{L^2}^2\right) +\eta \Vert \partial ^{k+\alpha }{u}^{j+1}\Vert _{L^2}^2+\mu \Vert \partial ^{k+\beta }b^{j+1}\Vert _{L^2}^2\\&\quad = -\int _{{\mathbb {R}}^3}\partial ^k\left( (u^j\cdot \nabla ){u}^{j+1} \right) \cdot \partial ^k{u}^{j+1}\text {d}x +\int _{{\mathbb {R}}^3}\partial ^k\left( ({b}^j\cdot \nabla ){b}^{j+1} \right) \cdot \partial ^k{u}^{j+1}\text {d}x\\&\qquad -\int _{{\mathbb {R}}^3}\partial ^k\left( (u^j\cdot \nabla ){b}^{j+1} \right) \cdot \partial ^k{b}^{j+1}\text {d}x +\int _{{\mathbb {R}}^3}\partial ^k\left( ({b}^j\cdot \nabla ){u}^{j+1}\right) \cdot \partial ^k{b}^{j+1}\text {d}x\\&\quad := {R_1}+{R_2}+{R_3}+{R_4}. \end{aligned} \end{aligned}$$
(3.19)

Now we are going to estimate the terms \({R_1}\), \({R_2}\), \({R_3}\) and \({R_4}\). First of all, we deal with the term \({R_1}\) in the three cases \(k=0\), \(k=1\) and \(2\leqslant {k}\leqslant 3\). For \(k=0\), recalling that \(\nabla \cdot {u}^j=0\), we arrive at

$$\begin{aligned} R_1=-\frac{1}{2}\int _{{\mathbb {R}}^3}u^j\cdot \nabla |u^{j+1}|^2\text {d}x=0. \end{aligned}$$
(3.20)

For \(k=1\), based on \(\nabla \cdot {u}^j=0\), together with Lemma 2.1, Hölder’s and Young’s inequalities, we observe

$$\begin{aligned} \begin{aligned} R_1=&-\frac{1}{2}\int _{{\mathbb {R}}^3}u^j\cdot \nabla |\partial {u}^{j+1}|^2\text {d}x -\int _{{\mathbb {R}}^3}\partial {u}^j\cdot \nabla {u}^{j+1}\cdot \partial {u}^{j+1}\text {d}x\\ \lesssim&\Vert \partial {u}^j\Vert _{L^{\frac{3}{2\alpha }}}\Vert \partial {u}^{j +1}\Vert _{L^{\frac{6}{3-2\alpha }}}\Vert \partial {u}^{j+1}\Vert _{L^{\frac{6}{3-2\alpha }}}\\ \lesssim&\Vert u^j\Vert _{H^3}\Vert \partial ^{1+\alpha }u^{j+1}\Vert _{L^2}^2\\ \leqslant&{C}\Vert u^j\Vert _{H^3}^2\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2 +\frac{\eta }{16}\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2. \end{aligned} \end{aligned}$$
(3.21)

For \(2\leqslant {k}\leqslant 3\), noting that \(\nabla \cdot {u}^j=0\), it follows that

$$\begin{aligned} \int _{{\mathbb {R}}^3}(u^j\cdot \nabla )\partial ^ku^{j+1}\cdot \partial ^ku^{j+1}\text {d}x =-\frac{1}{2}\int _{{\mathbb {R}}^3}u^j\cdot \nabla |\partial ^ku^{j+1}|^2\text {d}x=0. \end{aligned}$$
(3.22)

Recalling (2.5) in Lemma 2.3 again, along with Lemma 2.1, Hölder’s and Young’s inequalities, we have

$$\begin{aligned} \begin{aligned} R_1=&-\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (u^j\cdot \nabla ){u^{j+1}}\right) -(u^j\cdot \nabla )\partial ^ku^{j+1}\right) \cdot \partial ^ku^{j+1}\text {d}x\\ \lesssim&\left( \Vert \partial {u}^j\Vert _{L^{\frac{3}{2\alpha }}}\Vert \partial ^k{u}^{j+1}\Vert _{L^{\frac{6}{3-2\alpha }}} +\Vert \partial ^k{u}^j\Vert _{L^2}\Vert \partial {u}^{j+1}\Vert _{L^{\frac{3}{\alpha }}}\right) \Vert \partial ^k{u}^{j+1}\Vert _{L^{\frac{6}{3-2\alpha }}}\\ \lesssim&\Vert u^j\Vert _{H^3}\Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}\Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}\\&+\Vert \partial ^k{u}^j\Vert _{L^2}\Vert \partial ^{\alpha }u^{j+1}\Vert _{L^2}^\theta \Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}^{1-\theta }\Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}\\ \lesssim&\Vert u^j\Vert _{H^3}\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2\\ \leqslant&{C}\Vert u^j\Vert _{H^3}^2\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2+\frac{\eta }{16}\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2, \end{aligned} \end{aligned}$$
(3.23)

where \(\theta =\frac{4\alpha +2k-5}{2k}\in [0,1)\) if \(\alpha \in (\frac{1}{2},1)\).
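The exponent \(\theta =\frac{2k+4\alpha -5}{2k}\) used in (3.23) can be verified against the index relation of Lemma 2.1 by exact rational arithmetic; the sketch below uses our own helper names.

```python
from fractions import Fraction

def theta_3_23(k, a):
    """The interpolation exponent claimed after (3.23), with a = alpha."""
    return (2 * k + 4 * a - 5) / Fraction(2 * k)

def balances(k, a, th):
    """Index relation of Lemma 2.1 (n = 3) for the interpolation
    ||d u||_{L^{3/a}} <~ ||d^a u||_{L^2}^th * ||d^{k+a} u||_{L^2}^{1-th}:
    1/3 - a/3 == (a/3 - 1/2)*th + ((k + a)/3 - 1/2)*(1 - th)."""
    lhs = Fraction(1, 3) - a / 3
    rhs = (a / 3 - Fraction(1, 2)) * th + ((k + a) / 3 - Fraction(1, 2)) * (1 - th)
    return lhs == rhs
```

For every \(k\in \{2,3\}\) and rational \(\alpha \in (\frac{1}{2},1)\) tested, the formula lands in \((0,1)\) and balances the indices exactly.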

Secondly, we estimate the terms \(R_2\) and \(R_4\). For \(k=0\), using \(\nabla \cdot {b}^{j}=0\) and integration by parts, we find that

$$\begin{aligned} \begin{aligned} R_2+R_4=&\int _{{\mathbb {R}}^3}(b^j\cdot \nabla )b^{j+1}\cdot {u}^{j+1}\text {d}x +\int _{{\mathbb {R}}^3}(b^j\cdot \nabla )u^{j+1}\cdot {b}^{j+1}\text {d}x\\ =&\int _{{\mathbb {R}}^3}(b^j\cdot \nabla )({u}^{j+1}\cdot {b}^{j+1})\text {d}x =-\int _{{\mathbb {R}}^3}(\nabla \cdot {b}^j)({u}^{j+1}\cdot {b}^{j+1})\text {d}x=0. \end{aligned} \end{aligned}$$
(3.24)

For \(k=1\), applying Lemma 2.1, integration by parts, and Hölder’s and Young’s inequalities, we conclude

$$\begin{aligned} \begin{aligned} R_2+R_4=&-\int _{{\mathbb {R}}^3}b^j\cdot \nabla {b}^{j+1}\cdot \partial ^2u^{j+1}\text {d}x -\int _{{\mathbb {R}}^3}b^j\cdot \nabla {u}^{j+1}\cdot \partial ^2b^{j+1}\text {d}x\\ \lesssim&\Vert b^j\Vert _{L^\infty }\Vert \partial {b}^{j+1}\Vert _{L^2}\Vert \partial ^2u^{j+1}\Vert _{L^2} +\Vert b^j\Vert _{L^\infty }\Vert \partial {u}^{j+1}\Vert _{L^2}\Vert \partial ^2b^{j+1}\Vert _{L^2}\\ \lesssim&\Vert b^j\Vert _{H^3}\Vert \partial ^\beta {b}^{j+1}\Vert _{H^3}\Vert \partial ^\alpha {u}^{j+1}\Vert _{H^3}\\ \leqslant&{C}\Vert b^j\Vert _{H^3}^2\Vert \partial ^{\beta }b^{j+1}\Vert _{H^3}^2 +\frac{\eta }{16}\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2. \end{aligned} \end{aligned}$$
(3.25)

For \(2\leqslant {k}\leqslant 3\), employing \(\nabla \cdot {b}^{j+1}=0\), we deduce

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}^3}(b^j\cdot \nabla )\partial ^{k}b^{j+1}\cdot \partial ^ku^{j+1}\text {d}x +\int _{{\mathbb {R}}^3}(b^j\cdot \nabla )\partial ^{k}u^{j+1}\cdot \partial ^kb^{j+1}\text {d}x\\&\quad = \int _{{\mathbb {R}}^3}(b^j\cdot \nabla )(\partial ^{k}u^{j+1}\cdot \partial ^kb^{j+1})\text {d}x=0. \end{aligned} \end{aligned}$$
(3.26)

Owing to (3.26), \(R_2+R_4\) can be written as

$$\begin{aligned} \begin{aligned} R_2+R_4=&\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (b^j\cdot \nabla ){b^{j+1}}\right) -(b^j\cdot \nabla )\partial ^kb^{j+1}\right) \cdot \partial ^ku^{j+1}\text {d}x\\&+\int _{{\mathbb {R}}^3}\left( \partial ^{k}\left( (b^j\cdot \nabla ){u^{j+1}}\right) -(b^j\cdot \nabla )\partial ^ku^{j+1}\right) \cdot \partial ^kb^{j+1}\text {d}x. \end{aligned} \end{aligned}$$
(3.27)

According to Lemma 2.1, together with (2.5), Hölder’s and Young’s inequalities, we obtain

$$\begin{aligned} \begin{aligned} R_2+R_4\lesssim&\left( \Vert \partial {b}^j\Vert _{L^{\frac{3}{\alpha +\beta }}} \Vert \partial ^k{b}^{j+1}\Vert _{L^{\frac{6}{3-2\beta }}} +\Vert \partial ^kb^{j}\Vert _{L^2}\Vert \partial {b}^{j+1}\Vert _{L^{\frac{3}{\alpha }}} \right) \Vert \partial ^k{u}^{j+1}\Vert _{L^{\frac{6}{3-2\alpha }}}\\&+\left( \Vert \partial {b}^j\Vert _{L^{\frac{3}{\alpha +\beta }}}\Vert \partial ^k{u}^{j+1}\Vert _{L^{\frac{6}{3-2\alpha }}} +\Vert \partial ^kb^{j}\Vert _{L^2}\Vert \partial {u}^{j+1}\Vert _{L^{\frac{3}{\beta }}} \right) \Vert \partial ^k{b}^{j+1}\Vert _{L^{\frac{6}{3-2\beta }}}\\ \lesssim&\Vert b^j\Vert _{H^3}\Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}\Vert \partial ^{k+\beta }b^{j+1}\Vert _{L^2}\\&+\Vert \partial ^k{b}^j\Vert _{L^2}\Vert \partial ^{\beta }b^{j+1}\Vert _{L^2}^{\theta } \Vert \partial ^{k+\beta }b^{j+1}\Vert _{L^2}^{1-\theta }\Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}\\&+\Vert \partial ^k{b}^j\Vert _{L^2}\Vert \partial ^{\alpha }u^{j+1}\Vert _{L^2}^{\theta } \Vert \partial ^{k+\alpha }u^{j+1}\Vert _{L^2}^{1-\theta }\Vert \partial ^{k+\beta }b^{j+1}\Vert _{L^2}\\ \lesssim&\Vert b^j\Vert _{H^3}\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}\Vert \partial ^{\beta }b^{j+1}\Vert _{H^3}\\ \leqslant&{C}\Vert b^j\Vert _{H^3}^2\Vert \partial ^{\beta }b^{j+1}\Vert _{H^3}^2 +\frac{\eta }{16}\Vert \partial ^{\alpha }u^{j+1}\Vert _{H^3}^2, \end{aligned} \end{aligned}$$
(3.28)

where \(\theta =\frac{2(\alpha +\beta )+2k-5}{2k}\in [0,1)\) if \(\alpha ,\,\beta \in (\frac{1}{2},1)\).

Finally, the term \(R_3\) can be estimated by the same arguments as in (3.20)–(3.23):

$$\begin{aligned} \begin{aligned} R_3\leqslant&{C}\Vert u^j\Vert _{H^3}^2\Vert \partial ^{\beta }b^{j+1}\Vert _{H^3}^2 +\frac{\mu }{16}\Vert \partial ^{\beta }b^{j+1}\Vert _{H^3}^2. \end{aligned} \end{aligned}$$
(3.29)

Therefore, substituting the estimates for \(R_1\), \(R_2+R_4\) and \(R_3\) into (3.19) and summing up with respect to \(k\) from 0 to 3, we have

$$\begin{aligned} \begin{aligned}&\frac{\text {d}}{\text {d}t}\left( \Vert {u}^{j+1}\Vert _{H^3}^2 +\Vert {b}^{j+1}\Vert _{H^3}^2\right) +C\left( \Vert \partial ^\alpha {u}^{j+1}\Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1} \Vert _{H^3}^2\right) \\&\quad \leqslant {C}\left( \Vert u^j\Vert _{H^3}^2+\Vert b^j\Vert _{H^3}^2\right) \left( \Vert \partial ^\alpha {u}^{j+1}\Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1}\Vert _{H^3}^2\right) . \end{aligned} \end{aligned}$$
(3.30)

Integrating in time, we find that

$$\begin{aligned} \begin{aligned}&\Vert X^{j+1}(t)\Vert _{H^3}^2 +\int _{0}^{t}\left( \Vert \partial ^\alpha {u}^{j+1}(\tau )\Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1} (\tau )\Vert _{H^3}^2\right) \text {d}\tau \\&\quad \leqslant \Vert X_0\Vert _{H^3}^2 +C\int _{0}^{t}\Vert X^j(\tau )\Vert _{H^3}^2\left( \Vert \partial ^\alpha {u}^{j+1}(\tau ) \Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1}(\tau )\Vert _{H^3}^2\right) \text {d}\tau , \end{aligned} \end{aligned}$$
(3.31)

which, combined with the inductive assumption, implies

$$\begin{aligned} \begin{aligned}&\Vert X^{j+1}(t)\Vert _{H^3}^2 +\int _{0}^{t}\left( \Vert \partial ^\alpha {u}^{j+1}(\tau )\Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1}(\tau )\Vert _{H^3}^2\right) \text {d}\tau \\&\quad \leqslant \varepsilon _0^2+C\varepsilon _1^2\int _{0}^{t}\left( \Vert \partial ^\alpha {u}^{j+1}(\tau )\Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1}(\tau )\Vert _{H^3}^2\right) \text {d}\tau , \end{aligned} \end{aligned}$$
(3.32)

for any \(0\leqslant {t}\leqslant {T_2}\). Choosing the constants \(\varepsilon _0\), \(\varepsilon _1\) and \(T_2\) suitably small (e.g. so that \(C\varepsilon _1^2\leqslant \frac{1}{2}\) and \(2\varepsilon _0^2\leqslant \varepsilon _1^2\)), we obtain

$$\begin{aligned} \begin{aligned} \Vert X^{j+1}(t)\Vert _{H^3}^2 +\int _{0}^{t}\left( \Vert \partial ^\alpha {u}^{j+1}(\tau )\Vert _{H^3}^2+\Vert \partial ^\beta {b}^{j+1}(\tau )\Vert _{H^3}^2\right) \text {d}\tau \leqslant \varepsilon _1^2, \end{aligned} \end{aligned}$$
(3.33)

which implies that \(\Vert X^{j+1}(t)\Vert _{H^3}^2\leqslant \varepsilon _1^2\). By induction, \(\Vert X^{j}(t)\Vert _{H^3}^2\leqslant \varepsilon _1^2\) holds for all \(j\geqslant 0\) and \(0\leqslant {t}\leqslant {T_2}\). This completes the proof of Lemma 3.3. \(\square \)

Proof of Theorem 1.1

Let \({T^*}=\min \{T_1,~T_2\}\). From the proofs of Lemmas 3.1 and 3.2, if \(\Vert (u_0,b_0)\Vert _{H^3}\leqslant \varepsilon _0\), then the corresponding limit function satisfies

$$\begin{aligned} \begin{aligned} \sup _{0\leqslant {t}\leqslant {T^*}}\Vert (u(t),b(t))\Vert _{H^3}\leqslant \varepsilon _1, \end{aligned} \end{aligned}$$
(3.34)

where \(T_1\) and \(T_2\) are given in Lemmas 3.1 and 3.2. Now we prove by contradiction that the solution exists globally. Let \(M_1=\min \{\varepsilon _0,~\varepsilon _1,~\varepsilon _2\}\). Suppose that \(\Vert (u_0,b_0)\Vert _{H^3}\leqslant \frac{M_1}{2\sqrt{1+C_1}}\), where \(C_1\) is given in Lemma 3.1. We define the lifespan of solutions to the Cauchy problem (1.1) by

$$\begin{aligned} \begin{aligned} T=\sup \big \{t|\sup _{0\leqslant {s}\leqslant {t}}\Vert (u(s),b(s))\Vert _{H^3}\leqslant {M_1}\big \}. \end{aligned} \end{aligned}$$
(3.35)

Since

$$\begin{aligned} \begin{aligned} \Vert (u_0,b_0)\Vert _{H^3}\leqslant \frac{M_1}{2\sqrt{1+C_1}}\leqslant \frac{M_1}{2}<{M_1}\leqslant \varepsilon _0, \end{aligned} \end{aligned}$$
(3.36)

then \(T>0\) follows from the local existence result (Lemma 3.2) and a continuation argument. If T is finite, it follows from the definition of T that

$$\begin{aligned} \begin{aligned} \sup _{0\leqslant {s}\leqslant {T}}\Vert (u(s),b(s))\Vert _{H^3}={M_1}. \end{aligned} \end{aligned}$$
(3.37)

On the other hand, from a priori estimates, we observe

$$\begin{aligned} \begin{aligned} \sup _{0\leqslant {s}\leqslant {T}}\Vert (u(s),b(s))\Vert _{H^3}\leqslant \sqrt{C_1}\Vert (u_0,b_0)\Vert _{H^3} \leqslant \frac{M_1\sqrt{C_1}}{2\sqrt{1+C_1}}\leqslant \frac{M_1}{2}. \end{aligned} \end{aligned}$$
(3.38)

Thus (3.37) contradicts (3.38), so T cannot be finite. That is, \(\Vert (u(t),b(t))\Vert _{H^3}\leqslant \varepsilon _1\) for any \(t\geqslant 0\) provided \(\Vert (u_0,b_0)\Vert _{H^3}\leqslant \varepsilon _0\).

Therefore, the global existence of solutions to (1.1) follows from the local existence in Lemma 3.2 and the a priori estimates in Lemma 3.1 via a standard continuity argument. In short, the global existence and uniqueness of solutions to (1.1), together with the estimates (1.12), have been proved. \(\square \)

4 Proof of Decay Estimates

In this section, we prove Theorem 1.2 by the energy method. Firstly, we may assume that there exists a positive constant \(M_2>1\) such that

$$\begin{aligned} \begin{aligned} \Vert u_0\Vert _{{\dot{H}}^{-s}}^2+\Vert b_0\Vert _{{\dot{H}}^{-s}}^2\leqslant {M_2}^2, \end{aligned} \end{aligned}$$
(4.1)

since \((u_0,b_0)\in {\dot{H}}^{-s}\times {\dot{H}}^{-s}\).

Lemma 4.1

Assume that \(\alpha ,\,\beta \in (0,1)\). Suppose that

$$\begin{aligned} \begin{aligned} \Vert u(t)\Vert _{{\dot{H}}^{-s}}^2+\Vert b(t)\Vert _{{\dot{H}}^{-s}}^2\leqslant 2{M_2}^2,~~~~t\in [0,T], \end{aligned} \end{aligned}$$
(4.2)

where \(0<s<\frac{3}{2}\). Then for any \(t\in [0,T]\) and all \(k=0,1,\cdots ,N-1\), we obtain

$$\begin{aligned} \begin{aligned} \Vert \partial ^ku\Vert _{L^2}^2+\Vert \partial ^kb\Vert _{L^2}^2\leqslant {C}M_2^{\frac{2}{\sigma _1}}(1+t)^{-\frac{s+k}{\sigma _1}}, \end{aligned} \end{aligned}$$
(4.3)

where \(\sigma _1=\min \{\alpha ,\,\beta \}\). Moreover, for some positive constant \(\kappa >1\) and any \(t\in [0,T]\), we have

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\text {d}}{\text {d}t}\left( \Vert \Lambda ^{-s}u\Vert _{L^2}^2+\Vert \Lambda ^{-s}b\Vert _{L^2}^2\right) +\eta \Vert \partial ^\alpha \Lambda ^{-s}u\Vert _{L^2}^2+\mu \Vert \partial ^\beta \Lambda ^{-s}b\Vert _{L^2}^2\\&\quad \leqslant CM_2^{\frac{2}{\sigma _1}}\varepsilon _0^{\frac{s}{2}}\left( \Vert \Lambda ^{-s}u\Vert _{L^2}+\Vert \Lambda ^{-s}b\Vert _{L^2}\right) (1+t)^{-\kappa }, \end{aligned} \end{aligned}$$
(4.4)

where C is a positive constant independent of t.

Proof

To derive (4.3), we apply Lemma 2.4 to obtain

$$\begin{aligned} \begin{aligned} \Vert \partial ^ku\Vert _{L^2}\leqslant {C}\Vert u\Vert _{{\dot{H}}^{-s}}^{\frac{\alpha }{s+k+\alpha }}\Vert \partial ^{k+\alpha }u\Vert _{L^2}^{\frac{s+k}{s+k+\alpha }} \end{aligned} \end{aligned}$$
(4.5)

and

$$\begin{aligned} \begin{aligned} \Vert \partial ^kb\Vert _{L^2}\leqslant {C}\Vert b\Vert _{{\dot{H}}^{-s}}^{\frac{\beta }{s+k+\beta }}\Vert \partial ^{k+\beta }b\Vert _{L^2}^{\frac{s+k}{s+k+\beta }}. \end{aligned} \end{aligned}$$
(4.6)

Then by collecting the above estimates (4.5) and (4.6), we deduce

$$\begin{aligned} \begin{aligned} \Vert \partial ^ku\Vert _{L^2}^2+\Vert \partial ^kb\Vert _{L^2}^2\leqslant 2CM_2^{\frac{2}{s+k+\sigma _1}} \left( \Vert \partial ^{k+\alpha }u\Vert _{L^2}^2+\Vert \partial ^{k+\beta }b\Vert _{L^2}^2\right) ^{\frac{s+k}{s+k+\sigma _1}}, \end{aligned} \end{aligned}$$
(4.7)

which, together with (3.15) in Lemma 3.1, yields that

$$\begin{aligned} \begin{aligned} \frac{\text {d}}{\text {d}t}\left( \Vert \partial ^ku\Vert _{L^2}^2+\Vert \partial ^kb \Vert _{L^2}^2\right) +CM_2^{-\frac{2}{s+k}} \left( \Vert \partial ^ku\Vert _{L^2}^2+\Vert \partial ^kb\Vert _{L^2}^2\right) ^{\frac{s+k +\sigma _1}{s+k}}\leqslant 0. \end{aligned} \end{aligned}$$
(4.8)

By a direct calculation, it follows that

$$\begin{aligned} \begin{aligned} \Vert \partial ^ku\Vert _{L^2}^2+\Vert \partial ^kb\Vert _{L^2}^2 \leqslant&{C}M_2^{\frac{2}{\sigma _1}}\left[ \left( \Vert \partial ^ku_0\Vert _{L^2}^2 +\Vert \partial ^kb_0\Vert _{L^2}^2\right) +t\right] ^{-\frac{s+k}{\sigma _1}}\\ \leqslant&{C}M_2^{\frac{2}{\sigma _1}}(1+t)^{-\frac{s+k}{\sigma _1}}. \end{aligned} \end{aligned}$$
(4.9)
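For clarity, (4.9) follows by integrating (4.8) as a differential inequality: setting \(y(t)=\Vert \partial ^ku\Vert _{L^2}^2+\Vert \partial ^kb\Vert _{L^2}^2\), \(\gamma =\frac{\sigma _1}{s+k}\) and \(c=CM_2^{-\frac{2}{s+k}}\), (4.8) reads \(y'+cy^{1+\gamma }\leqslant 0\), so that

$$\begin{aligned} \left( y^{-\gamma }\right) '=-\gamma {y}^{-\gamma -1}y'\geqslant \gamma {c} \quad \Longrightarrow \quad y(t)\leqslant \left( y(0)^{-\gamma }+\gamma {c}t\right) ^{-\frac{1}{\gamma }}, \end{aligned}$$

and absorbing the constants, using \((\gamma {c})^{-\frac{1}{\gamma }}\lesssim {M}_2^{\frac{2}{\sigma _1}}\), gives the bound in (4.9).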

Therefore, combining Lemma 3.1 and (4.9), we get (4.3).

Now we are going to estimate (4.4). Applying \(\Lambda ^{-s}\) to equations (1.1)\(_1\) and (1.1)\(_2\), and taking the \(L^2\) inner products with \(\Lambda ^{-s}u\) and \(\Lambda ^{-s}b\), respectively, we conclude

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\text {d}}{\text {d}t}\left( \Vert \Lambda ^{-s}u\Vert _{L^2}^2+\Vert \Lambda ^{-s}b\Vert _{L^2}^2\right) +\eta \Vert \partial ^\alpha \Lambda ^{-s}u\Vert _{L^2}^2+\mu \Vert \partial ^\beta \Lambda ^{-s}b\Vert _{L^2}^2\\&\quad = -\int _{{\mathbb {R}}^3}\Lambda ^{-s}(u\cdot \nabla {u})\cdot \Lambda ^{-s}u\text {d}x +\int _{{\mathbb {R}}^3}\Lambda ^{-s}(b\cdot \nabla {b})\cdot \Lambda ^{-s}u\text {d}x\\&\qquad -\int _{{\mathbb {R}}^3}\Lambda ^{-s}(u\cdot \nabla {b})\cdot \Lambda ^{-s}b\text {d}x +\int _{{\mathbb {R}}^3}\Lambda ^{-s}(b\cdot \nabla {u})\cdot \Lambda ^{-s}b\text {d}x\\&\quad := F_1+F_2+F_3+F_4. \end{aligned} \end{aligned}$$
(4.10)

For the term \(F_1\), recalling Lemma 2.5 and Hölder’s inequality, we obtain

$$\begin{aligned} \begin{aligned} F_1\lesssim&\Vert \Lambda ^{-s}(u\cdot \nabla {u})\Vert _{L^2}\Vert \Lambda ^{-s}u\Vert _{L^2}\\ \lesssim&\Vert u\cdot \nabla {u}\Vert _{L^{\frac{6}{3+2s}}}\Vert \Lambda ^{-s}u\Vert _{L^2}\\ \lesssim&\Vert u\Vert _{L^{\frac{3}{s}}}\Vert \partial {u}\Vert _{L^2}\Vert \Lambda ^{-s}u\Vert _{L^2}. \end{aligned} \end{aligned}$$
(4.11)

Similarly, \(F_2\), \(F_3\) and \(F_4\) can be estimated as

$$\begin{aligned}&F_2\lesssim \Vert b\Vert _{L^{\frac{3}{s}}}\Vert \partial {b}\Vert _{L^2}\Vert \Lambda ^{-s}u\Vert _{L^2},\nonumber \\ \end{aligned}$$
(4.12)
$$\begin{aligned}&F_3\lesssim \Vert u\Vert _{L^{\frac{3}{s}}}\Vert \partial {b}\Vert _{L^2}\Vert \Lambda ^{-s}b\Vert _{L^2} \end{aligned}$$
(4.13)

and

$$\begin{aligned} \begin{aligned} F_4\lesssim \Vert b\Vert _{L^{\frac{3}{s}}}\Vert \partial {u}\Vert _{L^2}\Vert \Lambda ^{-s}b\Vert _{L^2}. \end{aligned} \end{aligned}$$
(4.14)

From Lemma 2.1, we have the following inequalities

$$\begin{aligned} \begin{aligned} \Vert u\Vert _{L^{\frac{3}{s}}}\lesssim \Vert u\Vert _{L^2}^{\frac{1+2s}{4}}\Vert \partial ^2u\Vert _{L^2}^{\frac{3-2s}{4}} \end{aligned} \end{aligned}$$
(4.15)

and

$$\begin{aligned} \begin{aligned} \Vert b\Vert _{L^{\frac{3}{s}}}\lesssim \Vert b\Vert _{L^2}^{\frac{1+2s}{4}}\Vert \partial ^2b\Vert _{L^2}^{\frac{3-2s}{4}}. \end{aligned} \end{aligned}$$
(4.16)
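The exponents in (4.15)–(4.16) come from the same scaling count: in \({\mathbb {R}}^3\), membership in \(L^{\frac{3}{s}}\) corresponds to the homogeneous regularity index \(\frac{3}{2}-s\), while \(u\in {L}^2\) and \(\partial ^2u\in {L}^2\) carry indices 0 and 2, so

$$\begin{aligned} \frac{3}{2}-s=\theta \cdot 0+(1-\theta )\cdot 2 \quad \Longrightarrow \quad \theta =\frac{1+2s}{4},\qquad 1-\theta =\frac{3-2s}{4}, \end{aligned}$$

and both exponents lie in \((0,1)\) precisely because \(s\in (0,\frac{3}{2})\).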

Then plugging (4.11)–(4.14) into (4.10), using (4.15)–(4.16) and the decay estimate (4.3), we have

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\text {d}}{\text {d}t}\left( \Vert \Lambda ^{-s}u\Vert _{L^2}^2+\Vert \Lambda ^{-s}b\Vert _{L^2}^2\right) +\eta \Vert \partial ^\alpha \Lambda ^{-s}u\Vert _{L^2}^2+\mu \Vert \partial ^\beta \Lambda ^{-s}b\Vert _{L^2}^2\\&\quad \leqslant C\left( \Vert u\Vert _{L^{\frac{3}{s}}}+\Vert b\Vert _{L^{\frac{3}{s}}}\right) \left( \Vert \partial {u}\Vert _{L^2}+\Vert \partial {b}\Vert _{L^2}\right) \left( \Vert \Lambda ^{-s}u\Vert _{L^2}+\Vert \Lambda ^{-s}b\Vert _{L^2}\right) \\&\quad \leqslant C\left( \Vert u\Vert _{L^2}+\Vert b\Vert _{L^2}\right) ^{\frac{1+2s}{4}} \left( \Vert \partial ^2u\Vert _{L^2}+\Vert \partial ^2b\Vert _{L^2}\right) ^{\frac{3-2s}{4}}\\&\qquad \cdot \left( \Vert \partial {u}\Vert _{L^2}+\Vert \partial {b}\Vert _{L^2}\right) \left( \Vert \Lambda ^{-s}u\Vert _{L^2}+\Vert \Lambda ^{-s}b\Vert _{L^2}\right) \\&\quad \leqslant {C}M_2^{\frac{2}{\sigma _1}}\varepsilon _0^{\frac{1+2s}{4}}\left( \Vert \Lambda ^{-s}u\Vert _{L^2}+\Vert \Lambda ^{-s}b\Vert _{L^2}\right) (1+t)^{-\frac{s+1}{2\sigma _1}}(1+t)^{-\frac{s+2}{2\sigma _1}\cdot \frac{3-2s}{4}}\\&\quad \leqslant {C}M_2^{\frac{2}{\sigma _1}}\varepsilon _0^{\frac{s}{2}}\left( \Vert \Lambda ^{-s}u\Vert _{L^2}+\Vert \Lambda ^{-s}b\Vert _{L^2}\right) (1+t)^{-\kappa }, \end{aligned} \end{aligned}$$
(4.17)

where

$$\begin{aligned} \kappa =\frac{s+1}{2\sigma _1}+\frac{s+2}{2\sigma _1}\cdot \frac{3-2s}{4}>1 \end{aligned}$$
(4.18)

since \(s\in (0,\frac{3}{2})\) and \(\sigma _1\in (0,1)\). This completes the proof of Lemma 4.1. \(\square \)
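For completeness, the claim \(\kappa >1\) can be verified directly: since \(\sigma _1\in (0,1)\),

$$\begin{aligned} \kappa =\frac{1}{\sigma _1}\left( \frac{s+1}{2}+\frac{(s+2)(3-2s)}{8}\right) =\frac{10+3s-2s^2}{8\sigma _1}\geqslant \frac{10}{8}=\frac{5}{4}>1, \end{aligned}$$

because the concave quadratic \(10+3s-2s^2\) attains its minimum value 10 on \([0,\frac{3}{2}]\) at the endpoints \(s=0\) and \(s=\frac{3}{2}\).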

Proof of Theorem 1.2

By Lemma 4.1, the decay estimate (1.14) can be obtained from (4.3) provided that we can close the a priori assumption (4.2) for some constant \(M_2>1\). Now we show that (4.2) holds true. According to (4.4), we observe

$$\begin{aligned} \begin{aligned} \Vert \Lambda ^{-s}u\Vert _{L^2}^2+\Vert \Lambda ^{-s}b\Vert _{L^2}^2\leqslant&{C}M_2^{\frac{2}{\sigma _1}}\varepsilon _0^{\frac{s}{2}} \int _{0}^{t}\left( \Vert \Lambda ^{-s}u(\tau )\Vert _{L^2}+\Vert \Lambda ^{-s}b(\tau )\Vert _{L^2} \right) (1+\tau )^{-\kappa }\text {d}\tau \\&+\left( \Vert \Lambda ^{-s}u_0\Vert _{L^2}^2+\Vert \Lambda ^{-s}b_0\Vert _{L^2}^2\right) \\ \leqslant&{C}M_2^{\frac{2}{\sigma _1}}\varepsilon _0^{\frac{s}{2}} \sup _{0\leqslant {\tau }\leqslant {t}} \left( \Vert \Lambda ^{-s}u(\tau )\Vert _{L^2}^2+\Vert \Lambda ^{-s}b(\tau )\Vert _{L^2}^2 \right) ^{\frac{1}{2}}\\&\times \int _{0}^{t}(1+\tau )^{-\kappa }\text {d}\tau +\left( \Vert \Lambda ^{-s}u_0\Vert _{L^2}^2+\Vert \Lambda ^{-s}b_0\Vert _{L^2}^2\right) \\ \leqslant&{C}M_2^{\frac{2}{\sigma _1}}\varepsilon _0^{\frac{s}{2}} \sup _{0\leqslant {\tau }\leqslant {t}} \left( \Vert \Lambda ^{-s}u(\tau )\Vert _{L^2}^2+\Vert \Lambda ^{-s}b(\tau )\Vert _{L^2}^2 \right) ^{\frac{1}{2}}\\&+\left( \Vert \Lambda ^{-s}u_0\Vert _{L^2}^2+\Vert \Lambda ^{-s}b_0\Vert _{L^2}^2\right) \end{aligned} \end{aligned}$$
(4.19)

by \(\kappa >1\). For convenience, we set

$$\begin{aligned} \begin{aligned} {\mathcal {M}}(t):=\sup _{0\leqslant {\tau }\leqslant {t}} \left( \Vert \Lambda ^{-s}u(\tau )\Vert _{L^2}^2+\Vert \Lambda ^{-s}b(\tau )\Vert _{L^2}^2 \right) ^{\frac{1}{2}}, \end{aligned} \end{aligned}$$
(4.20)

then using Young’s inequality, it holds that

$$\begin{aligned} \begin{aligned} {\mathcal {M}}^2(t)\leqslant {M_2}^2+{C}M_2^{\frac{2}{\sigma _1}} \varepsilon _0^{\frac{s}{2}}{\mathcal {M}}(t) \leqslant \frac{1}{4}{\mathcal {M}}^2(t)+M_2^2+CM_2^{\frac{4}{\sigma _1}}\varepsilon _0^{s} \end{aligned} \end{aligned}$$
(4.21)

for some positive constant C independent of \(M_2\) and \(\varepsilon _0\). Then, if we choose \(\varepsilon _0\) suitably small such that \(CM_2^{\frac{4}{\sigma _1}}\varepsilon _0^{s}\leqslant \frac{1}{2}M_2^2\), we can find that

$$\begin{aligned} \begin{aligned} \Vert \Lambda ^{-s}u(t)\Vert _{L^2}^2+\Vert \Lambda ^{-s}b(t)\Vert _{L^2}^2\leqslant {\mathcal {M}}^2(t)\leqslant 2M_2^2, \end{aligned} \end{aligned}$$
(4.22)

which closes the a priori assumption (4.2).

Therefore, by a standard continuity argument we obtain (1.14)–(1.15). This completes the proof of Theorem 1.2. \(\square \)