1 Introduction

In this paper we are interested in the analysis of the fattening phenomenon for evolutions of sets according to nonlocal curvature flows. Fattening is a particular kind of singularity which arises in the evolution of boundaries by their (local or nonlocal) curvatures, and more generally in the geometric evolution of manifolds, and it is related to the nonuniqueness of geometric solutions to the flow. The fattening phenomenon has been studied for the mean curvature flow for a long time, and a complete characterization of the initial data which develop fattening is still missing. In the case of the plane, it is known that smooth compact level curves never develop an interior, due to a result by Grayson on the evolution of regular compact curves. This result is no longer valid for the fractional mean curvature flow in the plane, as proved recently in [12]. We recall that examples of fattening of nonregular or noncompact curves in the plane for the mean curvature flow have been given in [3, 13, 15], where, in particular, the fattening of the evolution starting from the cross is proved. Finally, nonfattening for strictly starshaped initial data is proved in [20], whereas nonfattening of convex and mean convex initial data is proved in [1], see also [2, 4].

In this paper we start the analysis of the fattening phenomenon (mostly in the plane) for general nonlocal curvature flows. This problem has not yet been considered in the literature apart from the result in [9] about nonfattening for convex initial data under fractional mean curvature evolution in any space dimension.

Here we will show that some results which are true for the mean curvature flow are still valid, such as nonfattening for regular initial data with positive curvature or strictly starshaped initial data.

Nevertheless, in general, some different behaviors with respect to the mean curvature flow arise, due to the fact that the fattening phenomenon is very sensitive to the strength of the nonlocal interactions. We discuss in particular the evolution starting from the cross in the plane, which develops fattening only if the interactions are sufficiently strong. Moreover, we show an example of a closed curve with positive curvature which fattens, and an example of a closed curve whose evolution by the fractional mean curvature flow does not present fattening, unlike its evolution by the classical mean curvature flow.

We now introduce the mathematical setting in which we work. Given an initial set \(E_0\subset {\mathbb {R}}^n\), we define its evolution \(E_t\) for \(t>0\) according to a nonlocal curvature flow as follows: the velocity at a point \(x\in \partial E_t\) is given by

$$\begin{aligned} \partial _t x\cdot \nu =-H^K_{E_t}(x) \end{aligned}$$
(1.1)

where \(\nu \) is the outer normal to \(\partial E_t\) at x. The quantity \(H_E^K(x)\) is the K-curvature of E at x, which is defined in the forthcoming formula (1.4). More precisely, we take a function \(K:{\mathbb {R}}^n{\setminus }\{0\}\rightarrow [0,+\infty )\) which is a rotationally invariant kernel, namely

$$\begin{aligned} K(x)=K_0(|x|), \end{aligned}$$
(1.2)

for some \(K_0:(0,+\infty ) \rightarrow [0,+\infty )\). We assume that

$$\begin{aligned}&\min \{1, |x|\}\, K(x) \in L^1({\mathbb {R}}^n),\nonumber \\&\quad \text { i.e. }\int _0^1 \rho ^{n}\,K_0(\rho )\,d\rho +\int _1^{+\infty } \rho ^{n-1}\,K_0(\rho )\,d\rho <+\infty . \end{aligned}$$
(1.3)
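For instance, the fractional kernel \(K(x)=\frac{1}{|x|^{n+s}}\) with \(s\in (0,1)\) satisfies (1.3): in this case \(K_0(\rho )=\rho ^{-n-s}\), so \(\int _0^1 \rho ^{n}\,K_0(\rho )\,d\rho =\int _0^1 \rho ^{-s}\,d\rho <+\infty \) and \(\int _1^{+\infty } \rho ^{n-1}\,K_0(\rho )\,d\rho =\int _1^{+\infty } \rho ^{-1-s}\,d\rho <+\infty \).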

Given \(E\subset {\mathbb {R}}^n\) and \(x\in \partial E\), we define the K-curvature of E at x by

$$\begin{aligned} H^K_E(x):=\lim _{\varepsilon \searrow 0} \int _{{\mathbb {R}}^n{\setminus } B_\varepsilon (x)}\Big ( \chi _{{\mathbb {R}}^n{\setminus } E}(y)-\chi _E(y)\Big )\,K(x-y)\,dy, \end{aligned}$$
(1.4)

where, as usual,

$$\begin{aligned} \chi _E(y):=\left\{ \begin{matrix} 1 &{} { \text{ if } }y\in E, \\ 0 &{} { \text{ if } }y\not \in E. \end{matrix} \right. \end{aligned}$$

We point out that (1.3) is a very mild integrability assumption, which is compatible with the structure of nonlocal minimal surfaces (see e.g. condition (1.5) in [11]) and fits the requirements in [8, 16] for the existence and uniqueness of the level set flow associated to (1.1) (see Appendix A for the details about this matter).

Furthermore, when \(K(x)= \frac{1}{|x|^{n+s}}\) for some \(s\in (0,1)\), we will denote the K-curvature of a set E at a point x as \(H^s_E(x)\), and we refer to it as the fractional mean curvature of the set E at x.
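As a concrete illustration of (1.4), the following short numerical sketch (with a hypothetical value of s) approximates \(H^s_{B_R}\) at a boundary point of a disk. Writing the principal value integral in polar coordinates centered at the boundary point, the angular contribution of \(\chi _{{\mathbb {R}}^2{\setminus } B_R}-\chi _{B_R}\) at distance \(\rho \) equals \(2\pi -4\arccos \big (\frac{\rho }{2R}\big )\) for \(\rho <2R\) and \(2\pi \) for \(\rho \geqslant 2R\); the computed values illustrate the scaling \(H^s_{B_R}=R^{-s}H^s_{B_1}\).

```python
import numpy as np
from scipy.integrate import quad

def frac_curvature_disk(R, s):
    # Angular measure of chi_{complement} - chi_{B_R} at distance rho from the
    # boundary point: 2*pi - 4*arccos(rho/(2R)) for rho < 2R, and 2*pi beyond.
    inner, _ = quad(lambda rho: (2*np.pi - 4*np.arccos(rho/(2*R))) * rho**(-(1+s)),
                    0.0, 2*R)
    outer = 2*np.pi * (2*R)**(-s) / s   # closed form of the tail integral on (2R, +infinity)
    return inner + outer

s = 0.5                                     # hypothetical value of the fractional parameter
print(frac_curvature_disk(1.0, s))          # H^s of the unit disk at a boundary point
print(frac_curvature_disk(2.0, s) * 2**s)   # should agree, by the scaling H^s_{B_R} = R^{-s} H^s_{B_1}
```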

While the setting in (1.4) makes perfect sense for sets with \(C^{1,1}\)-boundaries, as customary we also use the notion of K-curvature for sets which are locally the graphs of continuous functions: in this case, the K-curvature may also be infinite and the definition is understood in the viscosity sense (see [8, 16] and Section 5 in [6]).

We observe that the curvature defined in (1.4) is the first variation of the following nonlocal perimeter functional, see [7, 17],

$$\begin{aligned} \mathrm {Per}_K(E):= \int _{E}\int _{{\mathbb {R}}^n{\setminus } E} K(x-y)\,dx\,dy, \end{aligned}$$
(1.5)

and so the geometric evolution law in (1.1) can be interpreted as the \(L^2\) gradient flow of this perimeter functional, as proved in [8].

The existence and uniqueness of solutions for the K-curvature flow in (1.1) in the viscosity sense have been investigated in [16] by introducing the level set formulation of the geometric evolution problem (1.1) and a proper notion of viscosity solution. We refer to [8] for a general framework for the analysis via the level set formulation of a wide class of local and nonlocal translation-invariant geometric flows.

The level set flow associated to (1.1) can be defined as follows. Given an initial set \(E\subset {\mathbb {R}}^n\) and \(C:=\partial E\), we choose a bounded Lipschitz continuous function \(u_E:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) such that

$$\begin{aligned}&C=\{x\in {\mathbb {R}}^n \text { s.t. } u_E(x)=0\}=\partial \{x\in {\mathbb {R}}^n\text { s.t. } u_E(x)\geqslant 0\}\\ {\text{ and } }&E=\{x\in {\mathbb {R}}^n\text { s.t. } u_E(x)\geqslant 0\}. \end{aligned}$$

Let also \(u_E(x,t)\) be the viscosity solution of the following nonlocal parabolic problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t u(x,t)+|Du(x,t)| H^K_{\{y| u(y,t)\geqslant u(x,t)\}}(x)=0,\\ u(x,0)= u_E(x). \end{array}\right. } \end{aligned}$$
(1.6)

Then the level set flow of C is given by

$$\begin{aligned} \Sigma _E(t):=\{x\in {\mathbb {R}}^n\text { s.t. } u_E(x,t)=0\}. \end{aligned}$$
(1.7)

We associate to this level set the outer and inner flows defined as follows:

$$\begin{aligned} E^+(t) := \{x\in {\mathbb {R}}^n\text { s.t. } u_E(x,t)\geqslant 0\} \qquad {\text{ and }} \qquad E^-(t):= \{x\in {\mathbb {R}}^n\text { s.t. } u_E(x,t)>0\}.\nonumber \\ \end{aligned}$$
(1.8)

We observe that the equation in (1.6) is geometric, so if we replace the initial condition with any function \(u_0\) with the same level sets \(\{u_0 \geqslant 0\}\) and \(\{ u_0 > 0 \}\), the evolutions \(E^+(t)\) and \(E^-(t)\) remain the same. For more details, we refer to Appendix A.
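For the reader's convenience, the following rough numerical sketch illustrates the level set formulation (1.6)–(1.8) on a coarse grid. It is only an explicit discretization under several simplifying assumptions (a fractional kernel with a hypothetical exponent, truncation of the kernel at the grid scale in place of the principal value, nonlocal interactions restricted to the computational box, explicit Euler time stepping), and it is not the scheme analyzed in [8, 16].

```python
import numpy as np

s = 0.5                                   # hypothetical fractional exponent
N, L = 40, 2.0                            # coarse grid: N x N points on the box [-L, L]^2
h = 2 * L / N
xs = np.linspace(-L + h / 2, L - h / 2, N)
X, Y = np.meshgrid(xs, xs, indexing="ij")

u = 1.0 - np.sqrt(X**2 + Y**2)            # initial level set function of the unit disk

# kernel matrix between all pairs of grid points, cut off for |z| < h
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
Kmat = np.where(dist >= h, 1.0 / np.maximum(dist, h) ** (2 + s), 0.0) * h**2

dt, steps = 1e-3, 20
for _ in range(steps):
    uf = u.ravel()
    # chi_{E^c}(y) - chi_E(y) with E = {y : u(y) >= u(x)}, for every grid point x
    sign = np.where(uf[None, :] < uf[:, None], 1.0, -1.0)
    H = (Kmat * sign).sum(axis=1).reshape(N, N)   # discrete K-curvature of the superlevel set
    gx, gy = np.gradient(u, h)
    u = u - dt * np.sqrt(gx**2 + gy**2) * H       # explicit Euler step of (1.6)

# the superlevel set {u >= 0} remains (approximately) a disk of smaller area
print("area of {u >= 0} after evolution:", (u >= 0).sum() * h**2)
```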

The K-curvature flow has been recently studied from different perspectives, in particular in the case of the fractional mean curvature flow, taking into account geometric features such as conservation of the positivity of the fractional mean curvature, conservation of convexity and formation of neckpinch singularities, see [9, 12, 18].

In this paper, we analyze the possible lack of uniqueness for the geometric evolution, i.e. the situation in which \(\partial E^+(t)\ne \partial E^-(t)\), in terms of the fattening properties of the zero level set of the viscosity solutions. To this end, we give the following definition:

Definition 1.1

We say that fattening occurs at time \(t>0\) if the set \(\Sigma _E(t)\), defined in (1.7), has nonempty interior, i.e.

$$\begin{aligned} {\mathrm{int}}(E^+(t){\setminus } E^-(t))\ne \varnothing . \end{aligned}$$

We point out that in [9, Section 6], in the case of fractional (anisotropic) mean curvature flow in any dimension, it has been proved that if the initial set \(E\subseteq {\mathbb {R}}^n\) is convex, then the evolution remains convex for all \(t>0\) and \(E^+(t)=\overline{E^{-}(t)}\), so fattening never occurs.

We start with a result about nonfattening of bounded regular sets with positive K-curvature (for the classical case of the mean curvature flow, see [1, 2, 4]).

Theorem 1.2

Let (1.2) and (1.3) hold. Let \(E\subset {\mathbb {R}}^n\) be a compact set of class \(C^{1,1}\) and assume that there exists \(\delta >0\) such that

$$\begin{aligned} {H_E^K(x)\geqslant \delta \quad \text{ for } \text{ every } x\in \partial E.} \end{aligned}$$
(1.9)

Then \(\Sigma _{ E}(t)\) has empty interior for every t.

We point out that, to get the result in Theorem 1.2, the assumption on the regularity of the sets cannot be completely dropped: indeed in the forthcoming Theorem 1.10 we will provide an example of a bounded set in the plane, with a “Lipschitz-type” singularity and with positive K-curvature, which develops fattening.

1.1 Evolution of the cross

We consider now the cross in \({\mathbb {R}}^2\), i.e.

$$\begin{aligned} {{\mathcal {C}}}:=\big \{ x=(x_1,x_2)\in {\mathbb {R}}^2 { \text{ s.t. } } |x_1|\geqslant |x_2|\big \}. \end{aligned}$$
(1.10)

It is well known, see [13], that the evolution of the cross according to the curvature flow immediately develops fattening for \(t>0\). So, an interesting question is whether the same phenomenon also appears for general nonlocal curvature flows as in (1.1), for kernels which satisfy (1.2) and (1.3). We show that actually the fattening feature in nonlocal curvature flows is very sensitive to the specific properties of the kernel since it depends on the strength of the interactions: we identify in particular two classes of kernels, giving fattening of the cross in the first class, i.e. for kernels which satisfy (1.13), (1.14) below, and nonfattening of the cross in the second class, i.e. for kernels which satisfy (1.19) below.

Remark 1.3

Recalling the notation in (1.7), we observe that

$$\begin{aligned} \big \{ x=(x_1,x_2)\in {\mathbb {R}}^2 { \text{ s.t. } } |x_1|= |x_2|\big \}\subseteq \Sigma _{{\mathcal {C}}}(t)\qquad { \text{ for } \text{ all } } t>0. \end{aligned}$$
(1.11)

Indeed, up to a rotation of the coordinate system, we write \({\mathcal {C}}=\{(y_1,y_2)\in {\mathbb {R}}^2\text { s.t. }y_1y_2\geqslant 0\}\). Define a bounded Lipschitz function \(u_0\) such that \(u_0(y_1,y_2)=u_0(-y_1,-y_2)=-u_0(-y_1,y_2)=-u_0(y_1,-y_2)\), and such that \({\mathcal {C}}= \{(y_1,y_2)\in {\mathbb {R}}^2\text { s.t. }u_0(y_1,y_2)\geqslant 0\}\). Then the solution to (1.6) with initial condition \(u_0\) satisfies

$$\begin{aligned} u(y_1,y_2, t)=u(-y_1,-y_2,t)=-u(-y_1,y_2,t)=-u(y_1,-y_2,t), \end{aligned}$$

see Appendix A. In particular this implies that \(\big \{ (y_1,y_2)\in {\mathbb {R}}^2 { \text{ s.t. } } y_1y_2=0\big \}\subseteq \big \{ (y_1,y_2)\in {\mathbb {R}}^2 { \text{ s.t. } }u(y_1,y_2, t)=0\big \}= \Sigma _{{\mathcal {C}}}(t)\), that is (1.11) once we rotate back.

We introduce the function

$$\begin{aligned} \Psi (r):=\int _{B_{r/4}(7r/4,0)} K(x)\,dx. \end{aligned}$$
(1.12)

In our framework, the function \(\Psi (r)\) plays a crucial role in quantitative K-curvature estimates, also in view of a suitable barrier that will be discussed in Proposition 3.1 later on. Notice that when \(K(x)=\frac{1}{|x|^{2+s}}\) with \(s\in (0,1)\), the function \(\Psi (r)\) reduces, up to multiplicative constants, to \(\frac{1}{r^s}\).
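Indeed, for this kernel the change of variables \(x=rz\) gives

$$\begin{aligned} \Psi (r)=\int _{B_{r/4}(7r/4,0)} \frac{dx}{|x|^{2+s}} =\int _{B_{1/4}(7/4,0)} \frac{r^2\,dz}{r^{2+s}\,|z|^{2+s}} =\frac{1}{r^s}\int _{B_{1/4}(7/4,0)} \frac{dz}{|z|^{2+s}}. \end{aligned}$$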

We suppose that the kernel K satisfies

$$\begin{aligned} \int _{0}^1 \frac{d\rho }{\Psi (\rho )}<+\infty . \end{aligned}$$
(1.13)

We will need also the following technical assumption: there exists \(r_0>0\) such that for all \(r\in (0,r_0)\),

$$\begin{aligned} \inf _{p\in B_{3\sqrt{2}\,r}}\int _{B_{r/4}(3r/4,0)-p} K(x)\,dx >0. \end{aligned}$$
(1.14)

This assumption is trivially satisfied if \(K>0\) in \(B_{(3\sqrt{2}+1)r_0}\).

Under these conditions, we have that, for short times, the set \(\Sigma _{ {{\mathcal {C}}} }(t)\) contains a ball centered at the origin (see Fig. 1), according to the following result:

Fig. 1. The fattening phenomenon described in Theorem 1.4

Theorem 1.4

Assume that (1.2),  (1.3), (1.13) and (1.14) hold true. For \(r\in (0,1)\), we define

$$\begin{aligned} \Lambda (r):=\int _0^r \frac{d\rho }{\Psi (\rho )}. \end{aligned}$$
(1.15)

Then, there exists \(T>0\) such that

$$\begin{aligned} B_{r(t)}\subset \Sigma _{ {{\mathcal {C}}} }(t) \end{aligned}$$
(1.16)

for any \(t\in (0,T)\), where r(t) is defined implicitly by

$$\begin{aligned} \Lambda (r(t))=t. \end{aligned}$$
(1.17)

We notice that the setting in (1.15) is well defined in view of the structural assumption in (1.13), and that \(\Lambda (r)\), as defined in (1.15), is strictly increasing, which makes the implicit definition in (1.17) well posed.

Remark 1.5

We point out that the structural assumptions in (1.3) and (1.13) are satisfied by kernels of the form \(K(x)= \frac{1}{|x|^{2+s}}\) for some \(s\in (0,1)\), or more generally by kernels such that

$$\begin{aligned}&K\in L^1({\mathbb {R}}^2{\setminus } B_1)\qquad \text { and }\qquad \frac{1}{C\,|x|^\alpha }\leqslant K(x)\leqslant \frac{C}{|x|^\beta },\nonumber \\&\quad \text{ with }~\alpha >1, \beta <3, C\geqslant 1, \text{ for } \text{ any } x\in B_1. \end{aligned}$$
(1.18)

Indeed, the upper bound for K in (1.18) plainly implies (1.3). Moreover, the lower bound for K in (1.18) implies that

$$\begin{aligned} \Psi (r)=\int _{B_{r/4}(7r/4,0)} K(x)\,dx\geqslant \int _{B_{r/4}(7r/4,0)} \frac{1}{C\,|x|^\alpha }\,dx\geqslant \frac{1}{C\,(2r)^{\alpha }}|B_{r/4}|=C_0\, r^{2-\alpha } \end{aligned}$$

where \(C_0>0\) is independent of r; since \(\alpha >1\), this yields (1.13). Finally, as for (1.14), we observe that it is trivially satisfied, since the lower bound in (1.18) gives that \(K>0\) in \(B_1\).

Note that r(t) defined in (1.17) satisfies \(r(t)\geqslant C_0 t^{\frac{1}{\alpha -1}}\); in particular, in the case \(K(x)=\frac{1}{|x|^{2+s}}\), r(t) is proportional to \(t^{\frac{1}{1+s}}\).
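As a minimal numerical sketch of the implicit definition in (1.17), assuming \(\Psi (\rho )=c\,\rho ^{-s}\) as for the fractional kernel (the values of s and c below are hypothetical), one can compute \(\Lambda \) by quadrature and invert it by bisection, recovering the closed form \(r(t)=(c(1+s)t)^{\frac{1}{1+s}}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

s, c = 0.5, 1.0                                           # hypothetical values: Psi(rho) = c * rho^{-s}
Lam = lambda r: quad(lambda rho: rho**s / c, 0.0, r)[0]   # Lambda(r) = int_0^r d rho / Psi(rho)

def r_of_t(t):
    # invert the relation Lambda(r(t)) = t by bisection on a bracket where Lambda - t changes sign
    return brentq(lambda r: Lam(r) - t, 1e-12, 1.0)

for t in (1e-4, 1e-3, 1e-2):
    print(t, r_of_t(t), (c * (1 + s) * t) ** (1 / (1 + s)))   # numerical inverse vs the closed form
```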

As a counterpart of Theorem 1.4, we show that the fattening phenomenon does not occur in straight crosses when the interaction kernel has sufficiently strong integrability properties. Namely, we have that:

Theorem 1.6

Assume (1.2) and (1.3). Suppose also that

$$\begin{aligned} \begin{aligned}&{K_0\leqslant K_1, \text{ with } K_1 \text{ nonincreasing } \text{ and }}\\&\Phi (r):=\int _{ [-r,r]\times {\mathbb {R}}} K_1(|x|)\,dx<+\infty , \end{aligned} \end{aligned}$$
(1.19)

for any \(r>0\), and that

$$\begin{aligned} \lim _{\delta \searrow 0} \int _{\delta }^{1} \frac{d\tau }{\Phi (\tau )}=+\infty . \end{aligned}$$
(1.20)

Then

$$\begin{aligned} {\text{ the } \text{ evolution } \text{ of }~{{\mathcal {C}}} \text{ under } \text{ the } K\text{-curvature } \text{ flow } \text{ coincides } \text{ with }~{{\mathcal {C}}} \text{ itself. }} \end{aligned}$$
(1.21)

Remark 1.7

We notice that conditions  (1.3), (1.19) and (1.20) are satisfied by kernels K such that \(K_0\) is nonincreasing, and which satisfy

$$\begin{aligned} K\in L^1({\mathbb {R}}^2{\setminus } B_1)\qquad \text { and }\qquad K(x)\leqslant \frac{C}{|x|^\alpha },\quad { \text{ with }~\alpha \in (0,1], C>0, \text{ for } \text{ any } x\in B_1}. \nonumber \\ \end{aligned}$$
(1.22)

Indeed, we observe first that in this case (1.3) is automatically satisfied. Moreover, from (1.22), we can take \(K_1:=K_0\) in (1.19) and have that

$$\begin{aligned} \Phi (r)= & {} \int _{ [-r,r]\times {\mathbb {R}}} K_0(|x|)\,dx\\\leqslant & {} \int _{ B_r} \frac{C}{|x|^\alpha }\,dx+ \int _{[-r,r]^2{\setminus } B_r} K_0(|x|)\,dx +\int _{ [-r,r]\times ((-\infty ,-r]\cup [r,+\infty ))} K_0(|x|)\,dx\\\leqslant & {} C r^{2-\alpha } +4 r\int _r^{+\infty } K_0(x_2)\,dx_2\\\leqslant & {} C r^{2-\alpha } +C r\left( \int _r^1 \frac{dx_2}{x_2^\alpha } +1\right) \\\leqslant & {} Cr|\log r|, \end{aligned}$$

up to renaming \(C>0\). Since \(\int _{\delta }^{1/2} \frac{d\tau }{\tau |\log \tau |}=\log |\log \delta |-\log \log 2\rightarrow +\infty \) as \(\delta \searrow 0\), condition (1.20) is satisfied.

We also observe that condition (1.22) is somewhat complementary to (1.18).

1.2 A remark on K-minimal cones

As a byproduct of the results that we discussed in Sect. 1.1, we observe that actually the cross is not a K-minimal set for the K-perimeter in \({\mathbb {R}}^2\), obtaining an alternative (and more general) proof of a result discussed in Proposition 5.2.3 of [5] for the fractional perimeter (see [19] for a full regularity theory of fractional minimal cones in the plane).

For this, we define

$$\begin{aligned} \mathrm {Per}_K(E, B_R) := \int _{E\cap B_R}\int _{{\mathbb {R}}^2{\setminus } E} K(x-y)\,dx\,dy + \int _{E{\setminus } B_R }\int _{B_R{\setminus } E} K(x-y)\,dx\,dy.\nonumber \\ \end{aligned}$$
(1.23)

Then, we say that E is a minimizer for \(\mathrm {Per}_K\) in the ball \(B_R\) if

$$\begin{aligned} \mathrm {Per}_K(E, B_R)\leqslant \mathrm {Per}_K(F,B_R) \end{aligned}$$

for every measurable set F such that \(E{\setminus } B_R=F{\setminus } B_R\).

Also, a measurable set \(E\subset {\mathbb {R}}^2\) is said to be K-minimal for the K-perimeter if it is a minimizer for \(\mathrm {Per}_K\) in every ball \(B_R\). Then, we have:

Proposition 1.8

Let (1.2) and (1.3) hold, and assume that K is not identically zero. Then \({\mathcal {C}}\subseteq {\mathbb {R}}^2\), as defined in (1.10), is not K-minimal for the K-perimeter.

1.3 Fractional curvature evolution of starshaped sets

Now we restrict ourselves to the case of homogeneous kernels K, i.e. we consider the case (up to multiplicative constants) in which

$$\begin{aligned} K_0(r)=\frac{1}{r^{n+s}}, \qquad {\text{ with } }s\in (0,1). \end{aligned}$$
(1.24)

We start by observing that strictly starshaped sets never fatten, similarly to what happens for the (local) curvature flow (see [20]). A similar result has also been observed in [9, Remark 6.4].

Proposition 1.9

Assume (1.24). Let \({\mathbb {S}}^{n-1}=\{\omega \in {\mathbb {R}}^n{ \text{ s.t. } } |\omega |=1\}\), \( f:{\mathbb {S}}^{n-1}\rightarrow (0, +\infty )\) be a continuous positive function and \(E\subset {\mathbb {R}}^n\) be such that

$$\begin{aligned} E=\{0\}\cup \left\{ x\in {\mathbb {R}}^n\text { s.t. } x\ne 0, |x|\leqslant f\left( \frac{x}{|x|}\right) \right\} . \end{aligned}$$
(1.25)

Then, the set \(\Sigma _{E}(t)\) has empty interior for all \(t>0\).

Now we restrict ourselves to the case of the plane, so \(n=2\). We show that in general, for starshaped sets E which do not satisfy (1.25), we can expect either fattening or nonfattening. We provide two different examples of such sets in \({\mathbb {R}}^2\), which are particularly interesting in our opinion, since they model two different types of singularities that can arise in the geometric evolution of closed curves in \({\mathbb {R}}^2\), that is, the “Lipschitz-type” singularity and the “cusp” singularity. The first example is the “double droplet” in Fig. 2, namely

$$\begin{aligned} {{\mathcal {G}}}:= {{\mathcal {G}}}_+\cup {{\mathcal {G}}}_-\subseteq {\mathbb {R}}^2, \end{aligned}$$
(1.26)

where \({{\mathcal {G}}}_+\) is the convex hull of \(B_1(-1,1)\) with the origin, and \({{\mathcal {G}}}_-\) the convex hull of \(B_1(1,-1)\) with the origin. The second example is given by two tangent balls

$$\begin{aligned} {{\mathcal {O}}}:= B_1(-1,0)\cup B_1(1,0)\subseteq {\mathbb {R}}^2. \end{aligned}$$
(1.27)

We prove that the fattening phenomenon occurs in the first case, whereas it does not occur in the second. It is also interesting to observe that the evolution of \({\mathcal {O}}\) by curvature flow immediately develops fattening, see [3].

Fig. 2. The double droplet \({{\mathcal {G}}}\)

Fig. 3. The fattening phenomenon described in Theorem 1.10

We start by considering the evolution of the set \({{\mathcal {G}}}\) defined in (1.26). Note that this provides an example of a bounded set with positive K-curvature (being contained in a cross with zero K-curvature), whose evolution develops fattening near the origin, as sketched in Fig. 3 and detailed in the following statement.

Theorem 1.10

Assume (1.24) with \(n=2\). Then there exist \({{\hat{c}}} \), \(T>0\) such that

$$\begin{aligned} B_{r(t)}\subset \Sigma _{ {{\mathcal {G}}} }(t) \end{aligned}$$
(1.28)

for any \(t\in (0,T)\), where

$$\begin{aligned} r(t):={{\hat{c}}} t^{1/(1+s)}. \end{aligned}$$
(1.29)

Remark 1.11

The same result as in Theorem 1.10 holds more generally for kernels \(K_0\) which satisfy  (1.2),  (1.3), (1.13) and

$$\begin{aligned} \frac{{\underline{a}}}{r^{2+s}}\leqslant K_0(r)\leqslant \frac{{\overline{a}}}{r^{2+s}}\qquad { \text{ for } \text{ all } } r>0 \end{aligned}$$
(1.30)

for some suitable \({{\overline{a}}}\geqslant {{\underline{a}}}>0\).

We now consider the case of two tangent balls as in (1.27), and we show that the evolution of \({{\mathcal {O}}}\) presents no fattening phenomenon, according to the statement below.

Theorem 1.12

Assume (1.24) with \(n=2\). Then the set \(\Sigma _{{\mathcal {O}}}(t)\) has empty interior for all \(t>0\).

The evolution of the double ball is sketched in Fig. 4: roughly speaking, the set shrinks at its outer boundary, emanating some mass from the origin, but it does not develop “gray regions” at its boundary.

Fig. 4. The evolution of two tangent balls described in Theorem 1.12

The rest of the paper is organized as follows. Section 2 deals with the fact that the evolution starting from regular sets with positive K-curvature does not fatten and it contains the proof of Theorem 1.2. In Sect. 3 we prove the fattening of the evolution starting from the cross in \({\mathbb {R}}^2\), under assumption (1.13), as stated in Theorem 1.4.

In Sect. 4, we show that under assumption (1.19) the evolution starting from the cross in \({\mathbb {R}}^2\) does not fatten, but coincides with the cross itself, that is we prove Theorem 1.6.

Section 5 contains the proof of the fact that the cross in \({\mathbb {R}}^2\) is never a K-minimal set for \(\mathrm {Per}_K\), thus establishing Proposition 1.8.

The last three sections present the evolution under the fractional curvature flow, i.e., we assume that \(K(x)=\frac{1}{|x|^{n+s}}\). In particular, Sect. 6 is devoted to the proof of the fact that the fractional curvature evolution of strictly starshaped sets does not present fattening, which gives Proposition 1.9.

In Sect. 7, we show an example in \({\mathbb {R}}^2\) of a compact set with positive K-curvature, that is the double droplet, whose fractional curvature evolution presents fattening, thus proving Theorem 1.10.

Then, in Sect. 8 we show that the fractional curvature evolution starting from two tangent balls in \({\mathbb {R}}^2\) does not fatten, which establishes Theorem 1.12.

In Appendix A we review some basic facts about the level set flow; moreover, we provide some auxiliary results about comparison with geometric barriers and other basic properties of the evolution which are exploited in the proofs of the main results.

1.3.1 Notation

We denote by \(B_r\subset {\mathbb {R}}^n\) the ball of radius r centered at the origin, and by \(B_r(x_1,x_2,\dots , x_n)\) the ball of radius r and center \(x=(x_1,x_2,\dots , x_n)\in {\mathbb {R}}^n\).

Moreover \(e_1=(1,0,\dots , 0)\), \(e_2=(0,1, 0,\dots , 0)\) etc, and \({\mathbb {S}}^{n-1}= \{\omega \in {\mathbb {R}}^n\text { s.t. }|\omega |=1\}\).

For a given closed set E, and for any \(x\in {\mathbb {R}}^n{\setminus } E\) we denote by \(\text {dist}(x, E)\) the distance from x to E, that is

$$\begin{aligned} \text {dist}(x, E):= \inf _ {y\in E} |x-y|. \end{aligned}$$

Moreover, we will denote by \(d_{E}(x)\) the signed distance function to \(C=\partial E\), with the convention that it is positive inside E and negative outside, that is

$$\begin{aligned} d_E(x)={\left\{ \begin{array}{ll} \text {dist}(x, {\mathbb {R}}^n{\setminus } E) &{}{\text{ if } } x\in {\overline{E}},\\ -\text {dist}(x, E) &{}{\text{ if } } x\in {\mathbb {R}}^n{\setminus } E. \end{array}\right. } \end{aligned}$$
(1.31)

Finally, given two sets \(E, F\subset {\mathbb {R}}^n\), we denote by d(E, F) the distance between the boundary of E and the boundary of F, that is

$$\begin{aligned} d(E,F):=\min _{\begin{array}{c} {x\in \partial E}\\ {y\in \partial F} \end{array}} |x-y|. \end{aligned}$$
(1.32)
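As an elementary illustration of the sign convention in (1.31), the following small Python snippet (a simple sketch, not used in the proofs) evaluates the signed distance to the cross \({{\mathcal {C}}}=\{|x_1|\geqslant |x_2|\}\) of (1.10), whose boundary consists of the two lines \(x_2=\pm x_1\):

```python
import numpy as np

def signed_dist_cross(x1, x2):
    """Signed distance (1.31) to the cross C = {|x1| >= |x2|}:
    positive inside C, negative outside; the unsigned distance is the
    distance to the nearest of the two lines x2 = x1 and x2 = -x1."""
    d = np.minimum(np.abs(x1 - x2), np.abs(x1 + x2)) / np.sqrt(2.0)
    return np.where(np.abs(x1) >= np.abs(x2), d, -d)

print(signed_dist_cross(1.0, 0.0))   #  1/sqrt(2): inside the cross
print(signed_dist_cross(0.0, 1.0))   # -1/sqrt(2): outside the cross
print(signed_dist_cross(1.0, 1.0))   #  0: on the boundary
```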

2 Regular sets of positive K-curvature and proof of Theorem 1.2

Proof of Theorem 1.2

We recall the continuity in \(C^{1,1}\) of the K-curvature proved in [8]. Namely, if \(E^\varepsilon \) is a family of compact sets with boundaries in \(C^{1,1}\) such that \(E^\varepsilon \rightarrow E\) in \(C^{1,1}\) (in the sense that the boundaries converge in \(C^{1}\) and are of class \(C^{1,1}\) uniformly in \(\varepsilon \)) and \(x^\varepsilon \in \partial E^\varepsilon \rightarrow x\in \partial E\), then \(H^K_{E^\varepsilon }(x^\varepsilon )\rightarrow H^K_E(x)\), as \( \varepsilon \searrow 0\).

Now, let E be as in the statement of Theorem 1.2, and define, for \(r>0\),

$$\begin{aligned} E^r:=\{x\in {\mathbb {R}}^n{ \text{ s.t. } } d_E(x)\geqslant -r\}. \end{aligned}$$

Then, using also (1.9), we find that there exists \(\varepsilon _0>0\) such that, for all \(\varepsilon \in (0, \varepsilon _0) \), there exists \(0<\delta (\varepsilon )\leqslant \delta \) such that

$$\begin{aligned} \min _{x\in \partial E^\varepsilon } H^K_{E^\varepsilon }(x)\geqslant \delta (\varepsilon ) >0. \end{aligned}$$

Fix \(\varepsilon <\varepsilon _0\) and let \({\bar{\delta }}:=\inf _{\eta \in [0, \varepsilon ]} \delta (\eta )>0\). Fix \(0<h<{{\bar{\delta }}}\). For all \(t\in \left[ 0, \frac{\varepsilon }{{{\bar{\delta }}}}\right] \) we define

$$\begin{aligned} C(t):=E^{\varepsilon -({{\bar{\delta }}}-h)t}. \end{aligned}$$

We observe that C(t) is a supersolution to (1.1), in the sense that it satisfies (A.6). Indeed,

$$\begin{aligned} \partial _t x\cdot \nu =-{{\bar{\delta }}}+h\geqslant -H^K_{C(t)}(x)+h. \end{aligned}$$

Since \(E\subseteq E^\varepsilon =C(0)\), by Proposition A.10, we get that

$$\begin{aligned} E^{+}(s)\subseteq C(s)=E^{\varepsilon -({{\bar{\delta }}}-h)s} \ \ \ { \text{ for } \text{ all } } s\in \left( 0, \frac{\varepsilon }{{{\bar{\delta }}}}\right] \quad {\text{ with } } d\left( E^+(s),E^{\varepsilon -({{\bar{\delta }}}-h)s}\right) \geqslant \varepsilon . \end{aligned}$$

This implies that \(E^{+}(s)\subseteq E\) for all \(s\in \left[ 0, \frac{\varepsilon }{{{\bar{\delta }}}}\right] \) and for all \(h<{{\bar{\delta }}}\) and moreover that

$$\begin{aligned} d\left( E^+(s), E\right) \geqslant d(E^{+}(s),E^{\varepsilon -({{\bar{\delta }}}-h)s})-d(E^{\varepsilon -({{\bar{\delta }}}-h)s}, E)\geqslant ({{\bar{\delta }}}-h)s. \end{aligned}$$

Then, by the Comparison Principle in Corollary A.8, we get that

$$\begin{aligned}&E^{+}(t+s)\subseteq E^{-}(t), \quad {\text{ with } } d\left( E^{+}(t+s),E^{-}(t)\right) \geqslant ({{\bar{\delta }}}-h)s\nonumber \\&\quad { \text{ for } \text{ all } } t>0, s\in \left( 0, \frac{\varepsilon }{{{\bar{\delta }}}}\right] , h<{{\bar{\delta }}}. \end{aligned}$$
(2.1)

Therefore, recalling Proposition A.12, we get

$$\begin{aligned} | {\mathrm{int}}\, (E^+(t)){\setminus } \overline{E^-(t)}|\leqslant & {} \limsup _{s\searrow 0}\Big (| {\mathrm{int}}\, (E^+(t))|- | E^+(t+s)|\Big )\\= & {} |{\mathrm{int}} (E^+(t))|-\liminf _{s\searrow 0} | E^+(t+s)| \leqslant 0. \end{aligned}$$

This gives the desired statement in Theorem 1.2. \(\square \)

3 K-curvature of the perturbed cross and proof of Theorem 1.4

In this section we work in \({\mathbb {R}}^2\); we exploit the notation in (1.12) and consider the cross \({{\mathcal {C}}}\subseteq {\mathbb {R}}^2\) introduced in (1.10). Furthermore, we define, for any \(r>0\), the “perturbed cross”

$$\begin{aligned} {{\mathcal {C}}}_r := [-r,r]^2 \cup {{\mathcal {C}}}\subseteq {\mathbb {R}}^2. \end{aligned}$$
(3.1)

Then, the following result holds true:

Proposition 3.1

Assume that (1.2) and (1.3) hold true in \({\mathbb {R}}^2\). Then, we have that

$$\begin{aligned} H^K_{ {{\mathcal {C}}}_r }(p)\leqslant 0 \end{aligned}$$
(3.2)

for any \(p\in \partial { {{\mathcal {C}}}_r }\). Also, for any \(t\in [-r,r]\),

$$\begin{aligned} H^K_{ {{\mathcal {C}}}_r }(t,r)\leqslant -2\Psi (r). \end{aligned}$$
(3.3)

Proposition 3.1 provides the cornerstone to detect the fattening phenomenon of the K-curvature flow emanating from the cross and leads to the proof of Theorem 1.4. To prove Proposition 3.1, we give the following auxiliary result:

Lemma 3.2

Assume that (1.2) and (1.3) hold true in \({\mathbb {R}}^2\). Then, for any \(t\in [-r,r]\),

$$\begin{aligned} H^K_{ {{\mathcal {C}}}_r }(t,r)\leqslant -2\Psi (r). \end{aligned}$$
Fig. 5. The set \({{\mathcal {D}}}_r\)

Proof

Let

$$\begin{aligned} {{\mathcal {T}}}_r:=\Big ( (-r,r)^2{\setminus }{{\mathcal {C}}}\Big )\cap \{x_2<0\} \end{aligned}$$

and

$$\begin{aligned} {{\mathcal {D}}}_r:={{\mathcal {C}}}_r{\setminus } {{\mathcal {T}}}_r, \end{aligned}$$

see Fig. 5. Notice that \({{\mathcal {C}}}_r\) is the disjoint union of \({{\mathcal {D}}}_r\) and \({{\mathcal {T}}}_r\), hence

$$\begin{aligned} \chi _{ {{\mathcal {C}}}_r }=\chi _{ {{\mathcal {D}}}_r }+\chi _{ {{\mathcal {T}}}_r }, \end{aligned}$$

while \({\mathbb {R}}^2{\setminus }{{\mathcal {D}}}_r\) is the disjoint union of \({\mathbb {R}}^2{\setminus }{{\mathcal {C}}}_r\) and \({{\mathcal {T}}}_r\), which gives that

$$\begin{aligned} \chi _{ {\mathbb {R}}^2{\setminus }{{\mathcal {D}}}_r }=\chi _{ {\mathbb {R}}^2{\setminus }{{\mathcal {C}}}_r}+\chi _{{{\mathcal {T}}}_r}. \end{aligned}$$

Hence, we find that

$$\begin{aligned} \chi _{{\mathbb {R}}^2{\setminus }{{\mathcal {C}}}_r}-\chi _{{{\mathcal {C}}}_r}= \chi _{{\mathbb {R}}^2{\setminus }{{\mathcal {D}}}_r}-\chi _{{{\mathcal {D}}}_r} -2\chi _{{{\mathcal {T}}}_r}. \end{aligned}$$
(3.4)

Now, we claim that, for any \(t\in [-r,r]\),

$$\begin{aligned} H^K_{ {{\mathcal {D}}}_r }(t,r)\leqslant 0. \end{aligned}$$
(3.5)

To this end, we partition \({\mathbb {R}}^2\) into different regions, as depicted in Fig. 6, and we use the notation, for each set \(Y\subseteq {\mathbb {R}}^2\),

$$\begin{aligned} {{\mathcal {H}}}(Y):=\lim _{\varepsilon \searrow 0}\int _{Y{\setminus } B_\varepsilon (t,r)} K\big (x-(t,r)\big )\,dx. \end{aligned}$$
(3.6)

In this way, we can write (1.4) as

$$\begin{aligned} H^K_{ {{\mathcal {D}}}_r }(t,r)= & {} {{\mathcal {H}}}(C)+ {{\mathcal {H}}}(D)+ {{\mathcal {H}}}(U')+ {{\mathcal {H}}}(V')+ {{\mathcal {H}}}(W')- {{\mathcal {H}}}(A)- {{\mathcal {H}}}(B)\nonumber \\&-\, {{\mathcal {H}}}(U)-{{\mathcal {H}}}(V)-{{\mathcal {H}}}(W). \end{aligned}$$
(3.7)

On the other hand, we can use symmetric reflections across the horizontal straight line passing through the pole (t, r) to conclude that \({{\mathcal {H}}}(U)={{\mathcal {H}}}(U')\). Similarly, we see that \({{\mathcal {H}}}(V)={{\mathcal {H}}}(V')\) and \({{\mathcal {H}}}(W)={{\mathcal {H}}}(W')\). As a consequence, the identity in (3.7) becomes

$$\begin{aligned} H^K_{ {{\mathcal {D}}}_r }(t,r) = {{\mathcal {H}}}(C)+ {{\mathcal {H}}}(D)- {{\mathcal {H}}}(A)- {{\mathcal {H}}}(B). \end{aligned}$$
(3.8)
Fig. 6. Splitting the set \({{\mathcal {D}}}_r\) and its complement into isometric regions

Now we consider the straight line \(\ell :=\{x_2=x_1-t+r\}\). Notice that \(\ell \) passes through the point (t, r) and it is parallel to two edges of the cross \({{{\mathcal {C}}}_r}\). Considering the framework in Fig. 6, reflecting the set D across \(\ell \) we obtain a set \(D'\subseteq B\), and we write \(B=D'\cup E\), for a suitable slab E. Similarly, we reflect the set A across \(\ell \) to obtain a set \( A'\) which is contained in C, and we write \(C= A'\cup F\), for a suitable slab F, see Fig. 7.

Fig. 7. Reflecting D and A across \(\ell \), being \(E:=B{\setminus } D'\) and \(F:=C{\setminus } A'\)

In further detail, if \(T:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) is the reflection across \(\ell \), then T is an isometry with \(T(t,r)=(t,r)\), hence \(|T(x)-(t,r)|=|T(x)-T(t,r)|=|x-(t,r)|\) for every \(x\in {\mathbb {R}}^2\), and thus, by (1.2),

$$\begin{aligned} K\big (T(x)-(t,r)\big )=K_0(|T(x)-(t,r)|)=K_0\big (|x-(t,r)|\big )=K\big (x-(t,r)\big ). \end{aligned}$$

Accordingly, since \(D=T(D')\),

$$\begin{aligned} \begin{aligned} {{\mathcal {H}}}(B)-{{\mathcal {H}}}(D)&= \int _{B} K\big (x-(t,r)\big )\,dx - \int _{T(D')} K\big (x-(t,r)\big )\,dx\\&= \int _{B} K\big (x-(t,r)\big )\,dx - \int _{D'} K\big (x-(t,r)\big )\,dx\\&= \int _{E} K\big (x-(t,r)\big )\,dx, \end{aligned} \end{aligned}$$
(3.9)

and similarly

$$\begin{aligned} {{\mathcal {H}}}(C)-{{\mathcal {H}}}(A)= \int _{F} K\big (x-(t,r)\big )\,dx. \end{aligned}$$
(3.10)

Now we consider the straight line \(\ell ':=\{x_2=-x_1+t+r\}\). Notice that \(\ell '\) passes through the point (t, r) and it is perpendicular to \(\ell \). We let \(E'\) be the reflection across \(\ell '\) of the set E and we notice that \(E'\supseteq F\). Therefore

$$\begin{aligned} \int _{E} K\big (x-(t,r)\big )\,dx=\int _{E'} K\big (x-(t,r)\big )\,dx \geqslant \int _{F} K\big (x-(t,r)\big )\,dx. \end{aligned}$$

From this, (3.9) and (3.10), we obtain

$$\begin{aligned} {{\mathcal {H}}}(C)+ {{\mathcal {H}}}(D)- {{\mathcal {H}}}(A)- {{\mathcal {H}}}(B)= & {} \int _{F} K\big (x-(t,r)\big )\,dx-\int _{E} K\big (x-(t,r)\big )\,dx\\\leqslant & {} 0. \end{aligned}$$

This and (3.8) imply the desired result in (3.5).

Then, by (3.4) and (3.5),

$$\begin{aligned} H^K_{ {{\mathcal {C}}}_r }(t,r)=H^K_{ {{\mathcal {D}}}_r }(t,r)-2 \int _{ {{\mathcal {T}}}_r } K\big (y-(t,r)\big )\, dy\leqslant 0-2\Psi (r), \end{aligned}$$

where we have used that \({{\mathcal {T}}}_r\) contains a ball of radius r/4 whose center lies at distance 7r/4 from the point (t, r), so that, in light of (1.2) and (1.12), \(\int _{ {{\mathcal {T}}}_r } K\big (y-(t,r)\big )\, dy\geqslant \Psi (r)\). This gives the desired result. \(\square \)

With this, we are now in the position of completing the proof of Proposition 3.1 via the following argument:

Proof of Proposition 3.1

The claim in (3.3) follows from Lemma 3.2. In addition, we have that \({{\mathcal {C}}}\subset {{\mathcal {C}}}_r\), due to (3.1). We also observe that if \(p\in (\partial {{\mathcal {C}}}_r){\setminus } [-r,r]^2\), then \(p\in \partial {{\mathcal {C}}}\). Consequently, by (1.4), for any \(p\in (\partial {{\mathcal {C}}}_r){\setminus } [-r,r]^2\), we have that

$$\begin{aligned} H^K_{{\mathcal {C}}}(p)\geqslant H^K_{{{\mathcal {C}}}_r}(p). \end{aligned}$$
(3.11)

Also, by symmetry, we see that \(H^K_{{\mathcal {C}}}(p)=0\) at any point  \(p\in \partial {{\mathcal {C}}}\), hence (3.11) gives that \(H^K_{{{\mathcal {C}}}_r}(p)\leqslant 0\) for any \(p\in (\partial {{\mathcal {C}}}_r){\setminus } [-r,r]^2\). Since this inequality is also valid when \( p\in (\partial {{\mathcal {C}}}_r)\cap [-r,r]^2\), due to (3.3), the proof of (3.2) is complete. \(\square \)

With Proposition 3.1, we can now construct inner and outer barriers as in Corollary A.11 to complete the proof of Theorem 1.4. This auxiliary construction goes as follows.

Lemma 3.3

Let \({{\mathcal {C}}}_r\) be as in (3.1). Let \(R:=3\sqrt{2}\,r\) and define, for \(\lambda \in \left[ 0,\frac{r}{2}\right) \),

$$\begin{aligned} {{\mathcal {C}}}_r^\lambda :=\left\{ x\in {\mathbb {R}}^2 { \text{ s.t. } } d_{ {{\mathcal {C}}}_r }(x)\geqslant -\lambda \right\} . \end{aligned}$$
(3.12)

Then, for any \(p\in (\partial {{\mathcal {C}}}_r^\lambda ){\setminus } B_R\), we have that \(H^K_{{{\mathcal {C}}}_r^\lambda }(p)\leqslant 0\).

Fig. 8. The set \({{\mathcal {C}}}_r^\lambda \), touched from inside at a boundary point by a translation of \({{\mathcal {C}}}\)

Proof

We observe that if \(p\in (\partial {{\mathcal {C}}}_r^\lambda ) {\setminus } B_R\), then \(\partial {{\mathcal {C}}}_r^\lambda \) in the vicinity of p is a segment, and there exists a vertical translation of \({{\mathcal {C}}}\) by a vector \(v_0:=\pm \sqrt{2}\,\lambda \,e_2\) such that \(p\in {{\mathcal {C}}}+v_0\) and \({{\mathcal {C}}}+v_0\subset {{\mathcal {C}}}_r^\lambda \), see Fig. 8. From this, we find that

$$\begin{aligned} H^K_{{{\mathcal {C}}}_r^\lambda }(p)\leqslant H^K_{{{\mathcal {C}}}+v_0}(p)= H^K_{{{\mathcal {C}}}}(p-v_0)=0, \end{aligned}$$

as desired. \(\square \)

With this, we are ready to complete the proof of Theorem 1.4, by arguing as follows.

Proof of Theorem 1.4

The proof is based on the construction of suitable families of geometric sub and supersolutions starting from the perturbed cross \({\mathcal {C}}_r\), as defined in (3.1), to which we apply Corollary A.11.

We observe that

$$\begin{aligned} {\mathcal {C}}=\bigcap _{r>0} {\mathcal {C}}_r. \end{aligned}$$

Moreover, we see that

$$\begin{aligned} d_{{\mathcal {C}}}(x)\leqslant d_{{\mathcal {C}}_r}(x)\leqslant d_{{\mathcal {C}}}(x)+r. \end{aligned}$$

These observations, together with the Comparison Principle in Theorem A.5 and Remark A.6, imply that

$$\begin{aligned} {\mathcal {C}}^+(t)=\bigcap _{r>0} {\mathcal {C}}_r^+(t),\qquad { \text{ for } \text{ all } } t>0. \end{aligned}$$
(3.13)

Analogously, one can define

$$\begin{aligned} {\mathcal {C}}^r := ({\mathbb {R}}^2{\setminus } {\mathcal {C}})\cup [-r,r]^2. \end{aligned}$$
(3.14)

Let \(\Psi \) be as defined in (1.12). Given \(r\in (0,r_0)\), where \(r_0\) is as in (1.14), we define \(r_*(t)\) to be the solution to the ODE

$$\begin{aligned} \dot{r}_*(t)= \Psi (r_*(t)) \end{aligned}$$
(3.15)

with initial datum \(r_*(0)=r\). We fix \(T>0\) such that \(r_*(t)<r_0\) for all \(t\in [0,T]\). Recalling the definition of \(\Lambda \) in (1.15), it is easy to check that

$$\begin{aligned} \Lambda (r_*(t))=t+\Lambda (r),\qquad { \text{ for } \text{ all } } t\in (0,T]. \end{aligned}$$
(3.16)
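Indeed, by (1.15) and (3.15),

$$\begin{aligned} \frac{d}{dt}\Lambda (r_*(t))=\frac{\dot{r}_*(t)}{\Psi (r_*(t))}=1, \end{aligned}$$

and \(\Lambda (r_*(0))=\Lambda (r)\), which gives (3.16).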

Now, by (1.17) and (3.16), we see that

$$\begin{aligned} \Lambda (r_*(t))=\Lambda (r(t))+\Lambda (r)\geqslant \Lambda (r(t)). \end{aligned}$$
(3.17)

Now, recalling the setting in (3.12), we take into account the sets \({{\mathcal {C}}}_{r_*(t)}\) and \({{\mathcal {C}}}_{r_*(t)}^\lambda \), with \(\lambda \in \left[ 0,\frac{r}{2}\right) \) and \(t\in [0,T]\), and we claim that these sets satisfy the assumptions in Corollary A.11, item (ii). To this end, we observe that, in the vicinity of the angular points of \({{\mathcal {C}}}_r\), the complement of \({{\mathcal {C}}}_r\) is a convex set, and therefore condition (A.9) is satisfied by \({{\mathcal {C}}}_{r_*(t)}\). Also, we take

$$\begin{aligned}&\delta _1: =\inf _{t\in [0,T]} \Psi (r_*(t)),\\&\delta _2:=\inf _{t\in [0,T]} \inf _{p\in B_{3\sqrt{2}r_*(t)}}\int _{B_{r_*(t)/4}(3r_*(t)/4,0)-p} K(y)\,dy\qquad {\text{ and }}\qquad \delta :=\min \{\delta _1,\delta _2\}. \end{aligned}$$

Notice that \(\delta >0\) thanks to (1.13) and (1.14). Then, by Proposition 3.1 and (3.15), we get that at any point \(x=(x_1,x_2)\) of \(\partial {{\mathcal {C}}}_{r_*(t)}\) with \(x_2=\pm r_*(t)\), we have that

$$\begin{aligned} -H^K_{ {{\mathcal {C}}}_{r_*(t)} }(x) \geqslant 2\Psi (r_*(t)) = \dot{r}_*(t)+\Psi (r_*(t)) \geqslant \dot{r}_*(t) +\delta _1\geqslant \partial _t x\cdot \nu (x)+\delta .\quad \nonumber \\ \end{aligned}$$
(3.18)

In addition, if \(x=(x_1,x_2)\in (\partial {{\mathcal {C}}}_{r_*(t)})\cap B_{4R}\) and \(|x_2|> r_*(t)\), we have that

$$\begin{aligned} -H^K_{ {{\mathcal {C}}}_{r_*(t)} }(x)\geqslant & {} -H^K_{ {{\mathcal {C}}} }(x)+\int _{B_{ r_*(t)/4}(3 r_*(t)/4,0)} K(y-x)\,dy\\= & {} \int _{B_{ r_*(t)/4}(3 r_*(t)/4,0)-x} K(y)\,dy\geqslant \delta _2\geqslant \delta = \partial _t x\cdot \nu (x)+\delta . \end{aligned}$$

This and (3.18) give that condition (A.8) is fulfilled by \({{\mathcal {C}}}_{r_*(t)}\).

Furthermore, in light of Lemma 3.3, we know that, for any \(x\in (\partial {{\mathcal {C}}}_{r_*(t)}^\lambda ) {\setminus } B_R\),

$$\begin{aligned} H^K_{{{\mathcal {C}}}_{r_*(t)}^\lambda }(x)\leqslant 0=\partial _t x\cdot \nu (x), \end{aligned}$$

which says that condition (A.15) is fulfilled by \({{{\mathcal {C}}}_{r_*(t)}^\lambda }\).

Therefore, we are in the position of using Corollary A.11, item (ii). In this way, we find that

$$\begin{aligned} {\mathcal {C}}_{r_*(t)}\subseteq {\mathcal {C}}_r^+(t),\qquad { \text{ for } \text{ all } } t\in [0,T]. \end{aligned}$$

Hence, recalling (3.17) and that \(\Lambda \) is strictly increasing, we have \(r_*(t)\geqslant r(t)\), and therefore

$$\begin{aligned} {\mathcal {C}}_{r(t)}\subseteq {\mathcal {C}}_r^+(t),\qquad { \text{ for } \text{ all } } t\in [0,T]. \end{aligned}$$

Taking intersections, in view of (3.13), we obtain that

$$\begin{aligned} {\mathcal {C}}_{r(t)}\subseteq {\mathcal {C}}^+(t),\qquad { \text{ for } \text{ all } } t\in [0,T]. \end{aligned}$$
(3.19)

Analogously, one can use the setting in (3.14), combined with Corollary A.11, item (i), and deduce that

$$\begin{aligned} {\mathcal {C}}^{r(t)}\subseteq ({\mathbb {R}}^2{\setminus } {\mathcal {C}})^+(t)\qquad { \text{ for } \text{ all } } t\in [0, T]. \end{aligned}$$
(3.20)

By (3.19) and (3.20) we get

$$\begin{aligned} \left[ -r(t), r(t)\right] ^2= {\mathcal {C}}_{r(t)}\cap {\mathcal {C}}^{r(t)}\subseteq {\mathcal {C}}^+(t)\cap ({\mathbb {R}}^2{\setminus } {\mathcal {C}})^+(t) =\Sigma _{{\mathcal {C}}}(t), \end{aligned}$$

which implies (1.16), as desired. \(\square \)

4 Moving boxes, weak interaction kernels and proof of Theorem 1.6

To simplify some computations, in this section we perform a rotation of coordinates so that

$$\begin{aligned} {\mathcal {C}}=\{x\in {\mathbb {R}}^2\text { s.t. } x_1x_2\geqslant 0\}. \end{aligned}$$
(4.1)

To prove Theorem 1.6, it is convenient to consider “expanding boxes” built by the following sets. For any \(r\in (0,1)\), we define

$$\begin{aligned} {{\mathcal {N}}}_r:=\Big ( [r,+\infty )\times [r,+\infty )\Big )\cup \Big ( (-\infty ,-r]\times (-\infty ,-r]\Big ), \end{aligned}$$
(4.2)

see Fig. 9.

Fig. 9. The set \({{\mathcal {N}}}_r\)

Then, recalling the notation in (1.19), we have:

Lemma 4.1

Assume that K satisfies (1.2), (1.3) and (1.19) in \({\mathbb {R}}^2\). Then, for any \(p\in \partial {{\mathcal {N}}}_r\),

$$\begin{aligned} H^K_{ {{\mathcal {N}}}_r }(p) \leqslant 2\,\Phi (2r). \end{aligned}$$
Fig. 10. Simplifications in the computations of Lemma 4.1

Proof

We denote by A and B the two connected components of \({ {{\mathcal {N}}}_r }\) and consider the straight line \(\ell \) passing through p and tangent to \({ {{\mathcal {N}}}_r }\) at p: see Fig. 10. By reflection across \(\ell \), we can consider the regions \(A'\) and \(B'\) which are symmetric to A and B, respectively. In particular, if \(p=(p_1,p_2)\) and \(M(x_1,x_2):=(2p_1-x_1,x_2)\), we have that \(M(A\cup B)=A'\cup B'\) and \(M(B_\varepsilon (p))=B_\varepsilon (p)\), and therefore

$$\begin{aligned} \int _{(A'\cup B'){\setminus } B_\varepsilon (p)} K(p-y)\,dy= & {} \int _{M((A\cup B){\setminus } B_\varepsilon (p))} K(p-y)\,dy\\= & {} \int _{(A\cup B){\setminus } B_\varepsilon (p)} K(p-Mx)\,dx\\= & {} \int _{(A\cup B){\setminus } B_\varepsilon (p)} K(-p_1+x_1,p_2-x_2)\,dx \\= & {} \int _{(A\cup B){\setminus } B_\varepsilon (p)} K(p-x)\,dx, \end{aligned}$$

thanks to (1.2). Then, denoting by

$$\begin{aligned} T:=\big ( {\mathbb {R}}^2{\setminus }{ {{\mathcal {N}}}_r }\big ){\setminus } \big (A'\cup B'\big ), \end{aligned}$$

which is the “white region” in Fig. 10, we see that

$$\begin{aligned} H^K_{ {{\mathcal {N}}}_r }(p)= & {} \lim _{\varepsilon \searrow 0} \int _{(A'\cup B'){\setminus } B_\varepsilon (p)} K(p-x)\,dx -\int _{(A\cup B){\setminus } B_\varepsilon (p)} K(p-x)\,dx\nonumber \\&\quad + \int _{T} K(p-x)\,dx\nonumber \\= & {} \int _{T} K(p-x)\,dx. \end{aligned}$$
(4.3)

Up to rotations, we may assume that

$$\begin{aligned} T= \big ( {\mathbb {R}}\times [-r,r]\big )\cup \big ( [-r,3r]\times (-\infty ,-r]\big ). \end{aligned}$$
(4.4)

Recalling  (1.19), and that \(p_1=r\), we get

$$\begin{aligned} \int _{[-r,3r]\times (-\infty ,-r]} K(x-p)\,dx\leqslant & {} \int _{[-r,3r]\times (-\infty ,-r]} K_1(|x-p|)\,dx\nonumber \\\leqslant & {} \int _{[-r,3r]\times {\mathbb {R}}} K_1(|x-p|)\,dx = \Phi (2r) \end{aligned}$$
(4.5)

where \(\Phi \) is defined in (1.19). Moreover, since \(p_1=r\) and \(p_2\geqslant r\), and \(K_1\) is nonincreasing, we get that \(K_1(|x-p|)\leqslant K_1(|x-(r,r)|)\), for every \(x\in {\mathbb {R}}\times [-r,r]\). As a consequence,

$$\begin{aligned} \int _{{\mathbb {R}}\times [-r,r]} K(x-p)\,dx\leqslant & {} \int _{{\mathbb {R}}\times [-r,r]} K_1(|x-p|)\,dx\\\leqslant & {} \int _{{\mathbb {R}}\times [-r,r]} K_1(|x-(r,r)|)\,dx \\\leqslant & {} \int _{{\mathbb {R}}\times [-r,3r]} K_1(|x-(r,r)|)\,dx= \Phi (2r). \end{aligned}$$

From this and (4.5), and recalling (4.4), we obtain that

$$\begin{aligned} \int _{T} K(p-x)\,dx\leqslant 2\Phi (2r). \end{aligned}$$

This and (4.3) give the desired result. \(\square \)

For \(\lambda \in (0,r)\) we define the sets

$$\begin{aligned} {{\mathcal {N}}}_{r}^\lambda :=\{ x\in {\mathbb {R}}^2 { \text{ s.t. } } d_{ {{\mathcal {N}}}_{r} }(x)\geqslant -\lambda \}. \end{aligned}$$
(4.6)

We observe that for any \(x\in \partial {{\mathcal {N}}}_{r}^\lambda \) there exists a unique point \(x'\in \partial {{\mathcal {N}}}_{r}\) such that \(|x-x'|=d({\mathcal {N}}_{r}^\lambda ,{\mathcal {N}}_{r})=\lambda \). Letting \(v_x:=x-x'\), it follows that \({{\mathcal {N}}}_{r}+v_x\subset {{\mathcal {N}}}_{r}^\lambda \), see Fig. 11. This and Lemma 4.1 give that

$$\begin{aligned} H^K_{ {{\mathcal {N}}}_{r}^\lambda }(x) \leqslant H^K_{ {{\mathcal {N}}}_{r} }(x-v_x)\leqslant 2\Phi (2r)\qquad \text {for any }x\in \partial {{\mathcal {N}}}_{r}^\lambda . \end{aligned}$$
(4.7)
Fig. 11. The set \({{\mathcal {N}}}_r^\lambda \), touched from inside at a boundary point by a translation of \({{\mathcal {N}}}_r\)

With this preliminary work, we can prove Theorem 1.6.

Proof of Theorem 1.6

We note that \({{\mathcal {M}}}_{r}:={{\mathcal {N}}}_{r}^{r/2}\subseteq {\mathcal {C}}\), where \({\mathcal {C}}\) is defined in (4.1) and \({{\mathcal {N}}}_{r}^{r/2}\) in (4.6), with \(\lambda =r/2\). Moreover, we have that \(d({\mathcal {C}},{{\mathcal {M}}}_r)=r/2>0\). Hence, by Corollary A.8 we get that \( {{\mathcal {M}}}_{r}^+(t)\subseteq {\mathcal {C}}^-(t)\) for all \(t>0\). In particular, since

$$\begin{aligned} \bigcup _{r>0} {\mathcal {M}}_r={\mathrm{int}}({\mathcal {C}}), \end{aligned}$$

we see that

$$\begin{aligned} \bigcup _{r>0} {{\mathcal {M}}}_{r}^+(t)= {\mathcal {C}}^-(t). \end{aligned}$$
(4.8)

Our aim is to construct, starting from \({ {{\mathcal {M}}}_{r} }\), a continuous family of geometric subsolutions and then apply Proposition A.10. Given \(\varrho \in (0,1)\), we define

$$\begin{aligned} F_\varrho (r):=\int _\varrho ^r \frac{d\vartheta }{6\Phi (2\vartheta )}. \end{aligned}$$

Notice that \(F_\varrho \) is strictly increasing, so we can consider its inverse \(G_\varrho \) in such a way that \(F_\varrho (G_\varrho (t))=t\). Then, for \(t\in [0, T]\), we set \( r_\varrho (t):= G_\varrho (t)\) and we consider the evolving sets  \( { {{\mathcal {M}}}_{r_\varrho (t)} }\). We remark that

$$\begin{aligned} F_\varrho (\varrho )=0=F_\varrho (G_\varrho (0))=F_\varrho (r_\varrho (0)), \end{aligned}$$

and so \(r_\varrho (0)=\varrho \). In addition, the outer normal velocity of \({ {{\mathcal {M}}}_{r_\varrho (t)} }\) is

$$\begin{aligned} -\dot{r}_\varrho (t)+\frac{1}{2}\dot{r}_\varrho (t)= & {} -\frac{1}{2}G_\varrho '(t)=-\frac{1}{2F_\varrho '(G_\varrho (t))}=-3 {\,\Phi (2G_\varrho (t))}\nonumber \\= & {} -3{\,\Phi (2r_\varrho (t))}. \end{aligned}$$
(4.9)

So, if

$$\begin{aligned} \delta :=\Phi (2\varrho )=\min _{r\in [\varrho , r_\varrho (T)]} \Phi (2r), \end{aligned}$$

we have that

$$\begin{aligned} \partial _t x\cdot \nu (x)=-\frac{1}{2}\dot{r}_\varrho (t)=-2\Phi (2r_\varrho (t)) - \Phi (2r_\varrho (t))\leqslant - H^K_{{\mathcal {M}}_{r_\varrho (t)}}(x) -\delta \qquad \end{aligned}$$
(4.10)

for all \(x\in \partial {\mathcal {M}}_{r_\varrho (t)}\), thanks to (4.7).

We observe that (4.10) says that (A.8) is satisfied by \({\mathcal {M}}_{r_\varrho (t)}\). So, to exploit Corollary A.11, we now want to check that condition (A.15) is satisfied by the set

$$\begin{aligned} {{\mathcal {M}}}_{r_\varrho (t)}^\lambda :=\{ x\in {\mathbb {R}}^2 { \text{ s.t. } } d_{ {{\mathcal {M}}}_{r_\varrho (t)} }(x)\geqslant -\lambda \}\qquad \text{ for } \lambda \in (0, \varrho ). \end{aligned}$$

We exploit again the estimate (4.7) which gives that

$$\begin{aligned} H^K_{ {{\mathcal {M}}}_{r_\varrho (t)}^\lambda }(x) \leqslant 2\Phi (2r_{\varrho }(t))\qquad \text {for any }x\in \partial {{\mathcal {M}}}_{r_\varrho (t)}^\lambda . \end{aligned}$$

Thus, in view of (4.9),

$$\begin{aligned} \partial _t x\cdot \nu (x)=-\frac{1}{2}\dot{r}_\varrho (t)= -3{\,\Phi (2r_\varrho (t))}\leqslant -H^K_{ {{\mathcal {M}}}_{r_\varrho (t)}^\lambda }(x). \end{aligned}$$

This gives that \({{\mathcal {M}}}_{r_\varrho (t)}^\lambda \) satisfies condition (A.15) and therefore we can apply Corollary A.11, item (ii).

Then, it follows that, for all \(\varrho \in (0,1)\),

$$\begin{aligned} {{\mathcal {M}}}_{r_\varrho (t)}\subseteq {{\mathcal {M}}}_{\varrho }^+(t). \end{aligned}$$
(4.11)

Also, for any \(t>0\), we claim that

$$\begin{aligned} \lim _{\varrho \searrow 0} r_\varrho (t)=0. \end{aligned}$$
(4.12)

To prove this, we argue by contradiction and suppose that \(r_{\varrho _k}(t)\geqslant a_0\), for some \(a_0>0\) and some infinitesimal sequence \(\varrho _k\). Then,

$$\begin{aligned} t=F_{\varrho _k}(G_{\varrho _k}(t))= F_{\varrho _k}(r_{\varrho _k}(t))\geqslant F_{\varrho _k}(a_0) =\int _{\varrho _k}^{a_0} \frac{d\vartheta }{6\,\Phi (2\vartheta )}=\frac{1}{12} \int _{2\varrho _k}^{2a_0} \frac{d\tau }{\Phi (\tau )}. \end{aligned}$$

This is in contradiction with (1.20) and so it proves (4.12).

In view of (4.12), we find that

$$\begin{aligned} \bigcup _{\varrho >0} {{\mathcal {M}}}_{r_\varrho (t)}=\text {int } {\mathcal {C}}. \end{aligned}$$

So, recalling (4.8) and (4.11), we conclude that

$$\begin{aligned} \text {int } {\mathcal {C}} =\bigcup _{\varrho>0} {{\mathcal {M}}}_{r_\varrho (t)}\subseteq \bigcup _{\varrho >0} {{\mathcal {M}}}_{\varrho }^+(t)= {\mathcal {C}}^-(t)\qquad {\text{ for } \text{ all } }t\in [0,T]. \end{aligned}$$
(4.13)

Analogously, one can define

$$\begin{aligned}&{{\mathcal {N}}}^{r}:=\Big ( (-\infty , -r]\times [r,+\infty )\Big )\cup \Big ( [r,+\infty )\times (-\infty ,-r]\Big ), \\&{\mathcal {M}}^r=({{\mathcal {N}}}^{r})^{r/2}:=\{ x\in {\mathbb {R}}^2 { \text{ s.t. } } d_{ {{\mathcal {N}}}^{r} }(x)\geqslant -r/2 \}. \end{aligned}$$

and see that

$$\begin{aligned} \text {int } ({\mathbb {R}}^2{\setminus } {\mathcal {C}})\subseteq {\mathbb {R}}^2{\setminus } {\mathcal {C}}^+(t)\qquad {\text{ for } \text{ all } }t\in [0,T]. \end{aligned}$$
(4.14)

Putting together (4.13) and (4.14), we conclude that

$$\begin{aligned} \text {int } {\mathcal {C}} \subseteq {\mathcal {C}}^-(t)\subseteq {\mathcal {C}}^+(t) \subseteq {\mathcal {C}}, \end{aligned}$$

and so \(\Sigma _{{\mathcal {C}}}(t)=\partial {\mathcal {C}}\), thus establishing (1.21). \(\square \)

5 K-minimal cones and proof of Proposition 1.8

In this section we show that \({\mathcal {C}}\subseteq {\mathbb {R}}^2\), as defined in (1.10), is never a K-minimal set, under the assumptions (1.2) and (1.3), namely we prove Proposition 1.8. This will be proved using the family of perturbed crosses \({\mathcal {C}}_r\) introduced in (3.1) and the fact that \(H^K_E\) is the first variation of the nonlocal perimeter \(\mathrm {Per}_K\) defined in (1.5), as shown in [8].

Proof of Proposition 1.8

With the notation in (1.10) and (3.1), we claim that there exists \(r>0\) such that, for all \(R>\sqrt{2}\,r\),

$$\begin{aligned} \mathrm {Per}_K({\mathcal {C}}_{r}, B_R) < \mathrm {Per}_K({\mathcal {C}}, B_R). \end{aligned}$$
(5.1)

Let \(r>0\) and \(R>\sqrt{2}r\), so that \( {\mathcal {C}}_{r}{\setminus } B_R= {\mathcal {C}}{\setminus } B_R\). Let

$$\begin{aligned} W_r:= {\mathcal {C}}_{r}{\setminus } {\mathcal {C}} \subseteq B_R. \end{aligned}$$

Let \(\delta \in (0, r)\) and \(K_\delta (y):=K(y)(1-\chi _{B_\delta }(y))\). We define \(\mathrm {Per}_\delta (E)\) as in (1.5), \(\mathrm {Per}_\delta (E,B_R)\) as in (1.23), and \(H^\delta _E\) as in (1.4), with \(K_\delta \) in place of K. In this setting, we get that

$$\begin{aligned} \mathrm {Per}_{\delta } (W_r)= & {} \mathrm {Per}_\delta (W_r, B_R)= \mathrm {Per}_\delta ({\mathcal {C}}_{r}, B_R) -\mathrm {Per}_\delta ({\mathcal {C}}, B_R)\nonumber \\&+ 2\int _{W_r}\int _{{\mathcal {C}}}K_\delta (x-y)\,dx\,dy. \end{aligned}$$
(5.2)

We also observe that

$$\begin{aligned} \mathrm {Per}_{\delta } (W_r)= & {} \int _{W_r}\int _{{\mathbb {R}}^2{\setminus } W_r} K_\delta (x-y)\,dx\,dy \\= & {} \int _{W_r}\int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r}} K_\delta (x-y)\,dx\,dy + \int _{W_r}\int _{ {\mathcal {C}}} K_\delta (x-y)\,dx\,dy. \end{aligned}$$

Substituting this identity into (5.2), we find that

$$\begin{aligned} \begin{aligned} \mathrm {Per}_\delta ({\mathcal {C}}_{r}, B_R) -\mathrm {Per}_\delta ({\mathcal {C}}, B_R)\,&=\mathrm {Per}_{\delta } (W_r)- 2\int _{W_r}\int _{{\mathcal {C}}}K_\delta (x-y)\,dx\,dy\\&=\int _{W_r}\int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r}} K_\delta (x-y)\,dx\,dy \\&\quad - \, \int _{W_r}\int _{ {\mathcal {C}}} K_\delta (x-y)\,dx\,dy. \end{aligned} \end{aligned}$$
(5.3)

Now, given \(x=(x_1,x_2)\in W_r\), we have that \(x\in \partial {\mathcal {C}}_{r(x)}\), with \(r(x):=|x_2|\in (0,r]\), where the notation of (3.1) has been used. Then, by Lemma 3.2, we have that

$$\begin{aligned} H^\delta _{{\mathcal {C}}_{r(x)}}(x)\leqslant -2\Psi _\delta (r(x)), \end{aligned}$$
(5.4)

where \(\Psi _\delta \) is as in (1.12) with \(K_\delta \) in place of K, that is

$$\begin{aligned} \Psi _\delta (s):=\int _{B_{s/4}(7s/4,0)} K_\delta (x)\,dx\geqslant 0. \end{aligned}$$

We write (5.4) as

$$\begin{aligned} -2\Psi _\delta (r(x))\,&\geqslant \int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r(x)}} K_\delta (x-y)\,dy-\int _{{\mathcal {C}}_{r(x)}} K_\delta (x-y)\,dy\\&=\int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r(x)}} K_\delta (x-y)\,dy- \int _{{\mathcal {C}}} K_\delta (x-y)\,dy- \int _{W_{r(x)}} K_\delta (x-y)\,dy\\&=\int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r}} K_\delta (x-y)\,dy +\int _{W_r{\setminus } W_{r(x)}}K_\delta (x-y)\,dy\\&\quad -\int _{{\mathcal {C}}} K_\delta (x-y)\,dy- \int _{W_{r(x)}} K_\delta (x-y)\,dy. \end{aligned}$$

Therefore, integrating over \(x\in W_r\),

$$\begin{aligned} \begin{aligned}&\int _{W_r}\int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r}} K_\delta (x-y)\,dx\,dy-\int _{W_r} \int _{{\mathcal {C}}} K_\delta (x-y)\,dx\,dy\\&\quad \leqslant \int _{W_r}\int _{W_{r(x)}} K_\delta (x-y)\,dx\,dy- \int _{W_r}\int _{W_r{\setminus } W_{r(x)}}K_\delta (x-y)\,dx\,dy\\&\qquad - \, 2\int _{W_r}\Psi _\delta (r(x))\,dx\\&\quad = 2\int _{W_r}\int _{W_{r(x)}} K_\delta (x-y)\,dx\,dy- \int _{W_r}\int _{W_r}K_\delta (x-y)\,dx\,dy\\&\qquad -2\int _{W_r}\Psi _\delta (r(x))\,dx . \end{aligned} \end{aligned}$$
(5.5)

We now observe that

$$\begin{aligned} W_r=\{x\in {\mathbb {R}}^2 { \text{ s.t. } }|x_2|>|x_1| { \text{ and } } |x_2|<r\}, \end{aligned}$$

and thus

$$\begin{aligned}&2\int _{W_r}\int _{W_{r(x)}} K_\delta (x-y)\,dx\,dy\\&\quad = \int _{x\in W_r}\left( \int _{y\in W_{r(x)}} K_\delta (x-y)\,dy\right) \,dx +\int _{y\in W_r}\left( \int _{x\in W_{r(y)}} K_\delta (x-y)\,dx\right) \,dy\\&\quad = \int _{\{ |x_1|<|x_2|<r\}}\left( \int _{ \{ |y_1|<|y_2|<r(x)\} } K_\delta (x-y)\,dy\right) \,dx\\&\qquad + \int _{\{ |y_1|<|y_2|<r\}}\left( \int _{ \{ |x_1|<|x_2|<r(y)\} } K_\delta (x-y)\,dx\right) \,dy\\&\quad = \int _{\{ |x_1|<|x_2|<r\}}\left( \int _{ \{ |y_1|<|y_2|<|x_2|\} } K_\delta (x-y)\,dy\right) \,dx\\&\qquad + \int _{\{ |y_1|<|y_2|<r\}}\left( \int _{ \{ |x_1|<|x_2|<|y_2|\} } K_\delta (x-y)\,dx\right) \,dy\\&\quad = \int _{\{ |x_1|<|x_2|<r\}}\left( \int _{ \{ |y_1|<|y_2|<|x_2|\} } K_\delta (x-y)\,dy\right) \,dx\\&\qquad + \int _{\{ |x_1|<|x_2|<r\}}\left( \int _{ \{\max \{|y_1|,|x_2|\}<|y_2|<r\} } K_\delta (x-y)\,dy\right) \,dx\\&\quad =\int _{\{ |x_1|<|x_2|<r\}}\left( \int _{ \{ |y_1|<|y_2|<r\} } K_\delta (x-y)\,dy\right) \,dx =\int _{W_r}\int _{W_r} K_\delta (x-y)\,dx\,dy. \end{aligned}$$

Hence, plugging this information into (5.5), we conclude that

$$\begin{aligned} \int _{W_r}\int _{{\mathbb {R}}^2{\setminus } {\mathcal {C}}_{r}} K_\delta (x-y)\,dx\,dy-\int _{W_r} \int _{{\mathcal {C}}} K_\delta (x-y)\,dx\,dy\leqslant -2\int _{W_r}\Psi _\delta (r(x))\,dx. \end{aligned}$$

This and (5.3) give that

$$\begin{aligned} \mathrm {Per}_\delta ({\mathcal {C}}_{r}, B_R) -\mathrm {Per}_\delta ({\mathcal {C}}, B_R)\leqslant -2\int _{W_r}\Psi _\delta (r(x))\,dx. \end{aligned}$$
(5.6)

Now, as \(\delta \searrow 0\), we have that \(\mathrm {Per}_\delta ({\mathcal {C}}_{r}, B_R)\rightarrow \mathrm {Per}_K({\mathcal {C}}_{r}, B_R)\) and \(\mathrm {Per}_\delta ({\mathcal {C}}, B_R)\rightarrow \mathrm {Per}_K({\mathcal {C}}, B_R)\), by the Dominated Convergence Theorem, see [8]. Moreover, \(\Psi _\delta (s)\rightarrow \Psi (s)=\int _{B_{s/4}(7s/4,0)} K(x)\,dx\) a.e. and in \(L^1(0,1)\) by the Dominated Convergence Theorem (observe that \(\Psi \in L^1(0,1)\) by assumption (1.3)).

So, letting \(\delta \searrow 0\) in (5.6), we end up with

$$\begin{aligned} \mathrm {Per}_K({\mathcal {C}}_{r}, B_R) -\mathrm {Per}_K({\mathcal {C}}, B_R)\leqslant -2 \int _{W_r}\Psi (|x_2|) \,dx. \end{aligned}$$
(5.7)

Recalling that K is not identically zero, we take a Lebesgue point \(\tau _0\in (0,+\infty )\) of \(K_0\) such that \(K_0(\tau _0)>0\). Then,

$$\begin{aligned} \lim _{\varepsilon \searrow 0}\frac{1}{2\varepsilon }\int _{\tau _0-\varepsilon }^{\tau _0+\varepsilon } K_0(\tau )\,d\tau =K_0(\tau _0)>0. \end{aligned}$$

Consequently, we take \(\varepsilon _0>0\) such that for all \(\varepsilon \in (0,\varepsilon _0]\) we have that

$$\begin{aligned} \int _{\tau _0-\varepsilon }^{\tau _0+\varepsilon } K_0(\tau )\,d\tau \geqslant \varepsilon K_0(\tau _0). \end{aligned}$$
(5.8)
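We point out that (5.8) simply quantifies the Lebesgue point property above: for \(\varepsilon \) small enough, the averages satisfy

$$\begin{aligned} \frac{1}{2\varepsilon }\int _{\tau _0-\varepsilon }^{\tau _0+\varepsilon } K_0(\tau )\,d\tau \geqslant \frac{K_0(\tau _0)}{2}, \end{aligned}$$

which is exactly (5.8).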

Then, if \({{\bar{\varepsilon }}}:=\min \left\{ \varepsilon _0,\frac{\tau _0}{100}\right\} \) and \( r\in \left[ \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}, \frac{4\tau _0}{7}+\frac{{{\bar{\varepsilon }}}}{14}\right] \), we have that

$$\begin{aligned} \begin{aligned}&\frac{7r}{4}+\frac{r}{8}= \frac{15r}{8}\geqslant \frac{15\tau _0}{14}-\frac{15{{\bar{\varepsilon }}}}{112} \geqslant \tau _0+{{\bar{\varepsilon }}}\\ {\text{ and } }\;&\frac{7r}{4}-\frac{r}{8}=\frac{13 r}{8}\leqslant \frac{13\tau _0}{14}+\frac{13{{\bar{\varepsilon }}}}{112} \leqslant \tau _0-{{\bar{\varepsilon }}}. \end{aligned} \end{aligned}$$
(5.9)
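For the reader's convenience, we verify the first inequality in (5.9) (the second one being analogous): since \(r\geqslant \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}\) and \({{\bar{\varepsilon }}}\leqslant \frac{\tau _0}{100}\), we have

$$\begin{aligned} \frac{15r}{8}\geqslant \frac{15\tau _0}{14}-\frac{15{{\bar{\varepsilon }}}}{112} =\tau _0+\frac{8\tau _0}{112}-\frac{15{{\bar{\varepsilon }}}}{112} \geqslant \tau _0+\frac{800\,{{\bar{\varepsilon }}}}{112}-\frac{15{{\bar{\varepsilon }}}}{112} \geqslant \tau _0+{{\bar{\varepsilon }}}. \end{aligned}$$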

Now we cover the ring \(A_r:=B_{({7r}/{4})+({r}/{8})}{\setminus } B_{({7r}/{4})-({r}/{8})}\) by \(N_0\) balls of radius \(r/4\) centered at \(\partial B_{7r/4}\), with \(N_0\) independent of r. Then

$$\begin{aligned} \frac{13\pi r}{4}\int _{({7r}/{4})-({r}/{8})}^{({7r}/{4})+({r}/{8})} K_0(\tau )\,d\tau\leqslant & {} 2\pi \int _{({7r}/{4})-({r}/{8})}^{({7r}/{4})+({r}/{8})} \tau \,K_0(\tau )\,d\tau \\= & {} \int _{ A_r } K_0(|x|)\,dx\\\leqslant & {} N_0\, \int _{B_{r/4}(7r/4,0)} K_0(|x|)\,dx=N_0\,\Psi (r), \end{aligned}$$

thanks to (1.12).
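We also note that \(N_0\) can indeed be chosen independently of r, by scaling: since

$$\begin{aligned} A_r=r\,A_1 \qquad {\text{ and }}\qquad B_{r/4}(x_0)=r\,B_{1/4}(x_0/r)\quad {\text{ for } \text{ every } }x_0\in \partial B_{7r/4}, \end{aligned}$$

any fixed finite covering of \(A_1\) by balls of radius 1/4 centered at \(\partial B_{7/4}\) dilates to a covering of \(A_r\) by the same number of balls of radius \(r/4\) centered at \(\partial B_{7r/4}\).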

Using this, (5.8) and (5.9), we obtain that, for any \(r\in \left[ \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}, \frac{4\tau _0}{7}+\frac{{{\bar{\varepsilon }}}}{14}\right] \),

$$\begin{aligned} \begin{aligned} \Psi (r) \,&\geqslant \frac{13\pi r}{4N_0}\int _{({7r}/{4})-({r}/{8})}^{({7r}/{4})+({r}/{8})} K_0(\tau )\,d\tau \\&\geqslant \frac{\tau _0}{4N_0}\int _{\tau _0-{{\bar{\varepsilon }}}}^{\tau _0+{{\bar{\varepsilon }}}} K_0(\tau )\,d\tau \\&\geqslant \frac{{{\bar{\varepsilon }}}\tau _0\,K_0(\tau _0)}{4N_0}\\&=:{{\bar{c}}}. \end{aligned} \end{aligned}$$
(5.10)

Then, if \(r_0:=\frac{4\tau _0}{7}+\frac{{{\bar{\varepsilon }}}}{14}\), we have that

$$\begin{aligned} W_{r_0}\supset \left( 0,\frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}\right) \times \left( \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}, \frac{4\tau _0}{7}+\frac{{{\bar{\varepsilon }}}}{14}\right) \end{aligned}$$

and therefore

$$\begin{aligned}&\int _{W_{r_0}}\Psi (|x_2|) \,dx\geqslant \left( \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}\right) \,\int _{ \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}}^{ \frac{4\tau _0}{7}+\frac{{{\bar{\varepsilon }}}}{14}} \Psi (x_2)\,dx_2\geqslant \frac{{{\bar{c}}}\,{{\bar{\varepsilon }}}}{7}\; \left( \frac{4\tau _0}{7}-\frac{{{\bar{\varepsilon }}}}{14}\right) , \end{aligned}$$

where (5.10) has been used in the last inequality. In particular,

$$\begin{aligned} \int _{W_{r_0}}\Psi (|x_2|) \,dx>0, \end{aligned}$$

which, combined with (5.7), implies the claim in (5.1) with \(r:=r_0\).

Then, in light of (5.1), we get that \({\mathcal {C}}\) is not a K-minimal set, thus completing the proof of Proposition 1.8. \(\square \)

6 Strictly starshaped domains and proof of Proposition 1.9

Proof of Proposition 1.9

We observe that, due to the assumption in (1.25), for every \(\lambda >1\) there exists \(\delta _\lambda >0\) such that the distance between \(\partial E\) and \(\partial (\lambda E)\) is at least \(\delta _\lambda \). Therefore, for any \(\lambda >1\), from Corollary A.8 and Lemma A.13, we deduce that

$$\begin{aligned} E^+(\lambda ^{1+s}t)\subseteq E^-_\lambda (\lambda ^{1+s}t)= \lambda E^-\left( t\right) . \end{aligned}$$

Then, for \(\lambda >1\), since \(\lambda ^{-1} E^+(\lambda ^{1+s}t)\subseteq E^-(t)\subseteq \overline{E^-(t)}\),

$$\begin{aligned} |{\mathrm{int}} (E^+(t)){\setminus } \overline{E^-(t)}|\leqslant & {} |{\mathrm{int}} (E^+(t)){\setminus } \lambda ^{-1} E^+(\lambda ^{1+s}t)|\\= & {} |{\mathrm{int}} (E^+(t))|-\lambda ^{-n} | E^+(\lambda ^{1+s}t)|. \end{aligned}$$

Also, by Proposition A.12,

$$\begin{aligned} \liminf _{\lambda \searrow 1} | E^+(\lambda ^{1+s}t)|\geqslant |{\mathrm{int}} (E^+(t))|. \end{aligned}$$

Therefore we get

$$\begin{aligned} |{\mathrm{int}} (E^+(t)){\setminus } \overline{E^-(t)}|\leqslant & {} \limsup _{\lambda \searrow 1}\Big (|{\mathrm{int}} (E^+(t))|-\lambda ^{-n} | E^+(\lambda ^{1+s}t)|\Big )\\= & {} |{\mathrm{int}} (E^+(t))|-\liminf _{\lambda \searrow 1} \lambda ^{-n} | E^+(\lambda ^{1+s}t)| \leqslant 0. \end{aligned}$$

This gives the desired statement. \(\square \)

7 Perturbed double droplet and proof of Theorem 1.10

In this section, the state space is \({\mathbb {R}}^2\). Recalling the notation in (1.26), given \(r\in \left( 0,\frac{1}{2}\right) \) we set

$$\begin{aligned} {{\mathcal {G}}}_r:= [-r,r]^2\cup {{\mathcal {G}}}_0\subseteq {\mathbb {R}}^2, \end{aligned}$$
(7.1)

where  \({{\mathcal {G}}}_0\) is the union in \({\mathbb {R}}^2\) of \({{\mathcal {B}}}^{+}\), which is the convex envelope between \(B_1(\sqrt{2},0)\) and the origin, and \({{\mathcal {B}}}^{-}\), which is the convex envelope between \(B_1(-\sqrt{2},0)\) and the origin, see Fig. 12.

Fig. 12  The set \({{\mathcal {G}}}_r\)

Now, for fixed \(\delta \in (0,r)\), we denote by \({{\mathcal {B}}}_{\delta }^+\) the convex envelope between \(B_{1-\delta }(\sqrt{2},0)\) and the origin, and by \({{\mathcal {B}}}^{-}_\delta \) the convex envelope between \(B_{1-\delta }(-\sqrt{2},0)\) and the origin. We let

$$\begin{aligned} {{\mathcal {G}}}_{\delta ,r}:=\big ([-2r,2r]\times [-r,r]\big )\cup {{\mathcal {B}}}_{\delta }^+\cup {{\mathcal {B}}}_{\delta }^-. \end{aligned}$$

Then we can estimate the K-curvature of \({{\mathcal {G}}}_{\delta ,r}\) as follows:

Lemma 7.1

Assume that (1.2), (1.3) and (1.24) hold true in \({\mathbb {R}}^2\). Then, there exists \(c_\sharp \in (0,1)\) such that the following statement holds true. If \(r\in (0,c_\sharp )\) and \(\delta \in (0,c_\sharp ^4 r)\), then

$$\begin{aligned} H^s_{ {{\mathcal {G}}}_{\delta ,r} }(p)\leqslant \frac{1}{c_\sharp } \end{aligned}$$
(7.2)

for any \(p\in \partial { {{\mathcal {G}}}_{\delta ,r} }\). In addition, for any \(p\in (\partial { {{\mathcal {G}}}_{\delta ,r} })\cap ([-2r,2r]\times [-r,r])\),

$$\begin{aligned} H^s_{ {{\mathcal {G}}}_{\delta ,r} }(p)\leqslant -\frac{c_\sharp }{r^s}. \end{aligned}$$
(7.3)

Proof

Let \(\alpha (\delta )\) be the angle at \(x=0\) in \({\mathcal {B}}_\delta ^+\). Observe that when \(\delta =0\) this angle is \(\pi /2\), and moreover there exist \(\delta _0>0\) and \(C_0>0\) such that \(|\alpha (\delta )-\frac{\pi }{2}| \leqslant C_0\delta \) for all \(0<\delta <\delta _0\). In particular, we may assume that \(\alpha (\delta )\geqslant \pi /3\). We then fix \(\delta \leqslant r<\delta _0\).

First of all, note that if \(p=(p_1,p_2)\in \partial { {{\mathcal {G}}}_{\delta ,r} }\) with \(p_1\geqslant \sqrt{2}-\frac{(1-\delta )^2}{\sqrt{2}}\) (resp. \(p_1\leqslant -\sqrt{2}+\frac{(1-\delta )^2}{\sqrt{2}}\)), then \(p\in \partial B_{1-\delta }(\sqrt{2},0)\) (resp. \(p\in \partial B_{1-\delta }(-\sqrt{2},0)\)), and therefore

$$\begin{aligned}&H^s_{ {{\mathcal {G}}}_{\delta ,r} }(p)\leqslant H^s_{B_{1-\delta }(\sqrt{2},0)}(p)=c(1)(1-\delta )^{-s}\\&\qquad \left( \text {resp. } H^s_{ {{\mathcal {G}}}_{\delta ,r} }(p)\leqslant H^s_{B_{1-\delta }(-\sqrt{2},0)}(p)=c(1) (1-\delta )^{-s}\right) \end{aligned}$$

where \(c(1)= H^s_{B_1}\) denotes the (constant) s-curvature of the unit ball at any of its boundary points.
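This identity follows from the scaling and translation invariance of the fractional curvature associated with the kernel in (1.24): for every \(\lambda >0\), every set E and every \(x\in \partial E\),

$$\begin{aligned} H^s_{\lambda E}(\lambda x)=\lambda ^{-s}\,H^s_{E}(x), \end{aligned}$$

which, applied to \(E:=B_1\) and \(\lambda :=1-\delta \), gives that the s-curvature of \(B_{1-\delta }(\pm \sqrt{2},0)\) at any of its boundary points equals \(c(1)(1-\delta )^{-s}\).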

Fig. 13  A possible configuration for the points p and q

We now take \(c_\sharp \in (0,1)\), to be chosen conveniently small in what follows. We notice that \({{\mathcal {S}}}:=(\partial {{\mathcal {G}}}_{\delta ,r})\cap \{|x_2|=r\}\) consists of four points. We take \(p=(p_1,p_2)\in \partial { {{\mathcal {G}}}_{\delta ,r} }\) such that there exists \(q\in {{\mathcal {S}}}\) with \(|p-q|<c_\sharp r\) (see e.g. Fig. 13 for a possible configuration).

Then,

$$\begin{aligned} \begin{aligned}&\lim _{\varepsilon \searrow 0} \int _{B_{\sqrt{c_\sharp }\,r}(p){\setminus } B_\varepsilon (p)} \Big (\chi _{{\mathbb {R}}^2{\setminus } { {{\mathcal {G}}}_{\delta ,r} }}(y)- \chi _{ {{\mathcal {G}}}_{\delta ,r} }(y)\Big ) \frac{1}{|p-y|^{2+s}}\,dy\\&\quad \leqslant - \iint _{(0,\pi /6)\times ( {c_\sharp }\,r,\,\sqrt{c_\sharp }\,r)} \frac{1}{\rho ^{1+s}}\,d\vartheta \,d\rho =- \frac{\pi }{6s}\frac{1}{c_\sharp ^{s/2}r^s}\left( \frac{1}{c_\sharp ^{s/2}} -1 \right) , \end{aligned} \end{aligned}$$
(7.4)

while

$$\begin{aligned} \int _{{\mathbb {R}}^2{\setminus } B_{\sqrt{c_\sharp }\,r}(p)} \Big (\chi _{{\mathbb {R}}^2{\setminus } { {{\mathcal {G}}}_{\delta ,r} }}(y)- \chi _{ {{\mathcal {G}}}_{\delta ,r} }(y)\Big ) \frac{dy}{|p-y|^{2+s}}\leqslant & {} 2\pi \int _{ \sqrt{c_\sharp } r }^{+\infty }\frac{1}{\rho ^{1+s}}\,d\rho \\= & {} \frac{2\pi }{s}\frac{1}{c_\sharp ^{s/2}r^s}. \end{aligned}$$

As a consequence,

$$\begin{aligned} H^K_{ {{\mathcal {G}}}_{\delta ,r} }(p)\leqslant - \frac{\pi }{6s}\frac{1}{c_\sharp ^{s/2}r^s}\left( \frac{1}{c_\sharp ^{s/2}}-1\right) + \frac{2\pi }{s}\frac{1}{c_\sharp ^{s/2}r^s}\leqslant -c_\sharp \frac{1}{r^s} \end{aligned}$$

as long as \(c_\sharp \) is sufficiently small, which implies (7.3) (and also (7.2)) in this case.

Now consider \(p \in \partial { {{\mathcal {G}}}_{\delta ,r} }\) such that \(|p_2|\ne r\) and \(d(p,{\mathcal {S}})\geqslant c_\sharp r\). If \(p\in \partial B_{1-\delta }(\pm \sqrt{2},0)\), then (7.2) follows from the bound obtained at the beginning of the proof. Otherwise, note that we can define a set \({ { {{\mathcal {G}}} } }'\) with \(C^{1,1}\)-boundary (uniformly in \(\delta \) and r) such that \({ { {{\mathcal {G}}}_{\delta ,r} } }\subset { { {{\mathcal {G}}} } }'\) and \({ { {{\mathcal {G}}} } }'{\setminus } B_{1/8}= { { {{\mathcal {G}}}_{\delta ,r} } }{\setminus } B_{1/8}\). Then, we obtain that

$$\begin{aligned} C'\geqslant H^K_{ { {{\mathcal {G}}} }' }(p)\geqslant H^K_{ { {{\mathcal {G}}}_{\delta ,r} } }(p)-C'', \end{aligned}$$

for some \(C'\), \(C''>0\), depending only on the local \(C^{1,1}\)-norms of the boundary of \({ { {{\mathcal {G}}} } }'\), and this gives (7.2) in this case.

Finally, note that \( {{\mathcal {G}}}_{\delta ,r}\subseteq {\mathcal {C}}_r\), where \({\mathcal {C}}_r\) is the perturbed cross defined in (3.1). So, if \(p\in \partial { {{\mathcal {G}}}_{\delta ,r} } \cap ([-r,r]\times [-r,r])\), then \(p\in \partial {\mathcal {C}}_r\). Moreover, by Lemma 3.2 and the definition of \(\Psi \) in (1.12),

$$\begin{aligned} H^s_{ {{\mathcal {C}}}_{r} }(p)\leqslant -2\Psi (r)=-C \frac{1}{r^s} \end{aligned}$$

where \(C>0\) is a universal constant: indeed, by the homogeneity of the kernel in (1.24), \(\Psi (r)=\int _{B_{r/4}(7r/4,0)} K(x)\,dx=r^{-s}\int _{B_{1/4}(7/4,0)} K(z)\,dz=\Psi (1)\,r^{-s}\), so that one can take \(C=2\Psi (1)\). In this case, we notice that \({ {{\mathcal {G}}}_{\delta ,r} }\) and \({{\mathcal {C}}}_r\) coincide in \(B_r\), and, outside such a neighborhood of the origin, they differ by four portions of cones (passing in the vicinity of \({{\mathcal {S}}}\)) with opening bounded by \(C_0\delta \). That is, if we set

$$\begin{aligned} { {{\mathcal {D}}}_{\delta ,r} }:=\big ( { {{\mathcal {G}}}_{\delta ,r} }{\setminus } {{\mathcal {C}}}_r\big )\cup \big ({{\mathcal {C}}}_r{\setminus } { {{\mathcal {G}}}_{\delta ,r} }\big ), \end{aligned}$$

we have that

$$\begin{aligned}&\int _{{ {{\mathcal {D}}}_{\delta ,r} }} \frac{dy}{|p-y|^{2+s}}\\&\quad \leqslant C_2\,\left[ \iint _{(0,C_1\delta )\times (c_\sharp r/2, 10r]}\frac{\rho \,d\vartheta \,d\rho }{ (c_\sharp \,r/2)^{2+s}} +\iint _{(0,C_1\delta )\times ( 10r,+\infty )}\frac{\rho \,d\vartheta \,d\rho }{ \rho ^{2+s}} \right] \\&\quad \leqslant \frac{C_3\,\delta }{c_\sharp ^{2+s}\,r^s} \leqslant \frac{c_\sharp }{r^s}, \end{aligned}$$

thanks to our assumption on \(\delta \). Consequently

$$\begin{aligned} \big | H^K_{ { {{\mathcal {G}}}_{\delta ,r} } }(p)-H^K_{ {{\mathcal {C}}}_r }(p)\big |\leqslant \frac{c_\sharp }{r^s} \end{aligned}$$

and so, making use of (3.3) and (1.30),

$$\begin{aligned} H^K_{ { {{\mathcal {G}}}_{\delta ,r} } }(p) \leqslant H^K_{ {{\mathcal {C}}}_r }(p)+ \frac{c_\sharp }{r^s}\leqslant -\frac{c_*}{r^s}+ \frac{c_\sharp }{r^s}\leqslant - \frac{c_*}{2\,r^s}, \end{aligned}$$

for a suitable \(c_*>0\), as long as \(c_\sharp >0\) is sufficiently small. This establishes (7.3) (and also (7.2)) in this case. \(\square \)

With these auxiliary computations, we can now complete the proof of Theorem 1.10, by arguing as follows.

Proof of Theorem 1.10

Let \(c_\sharp >0\) be as in Lemma 7.1, \(0<\varepsilon <c_\sharp /2\) and \(c_\star :=((c_\sharp -\varepsilon )\,(1+s))^{1/(1+s)}\). We define r(t) such that \(\dot{r}(t)= (c_\sharp -\varepsilon ) r(t)^{-s}\), with \(r(0)=0\). So, we have that \(r(t)=c_\star t^{1/(1+s)}\). Let also

$$\begin{aligned} \delta (t):=\frac{1}{c_\sharp -\varepsilon }\,\int _0^t \frac{d\tau }{r(\tau )}= \frac{1+s}{(c_\sharp -\varepsilon )\,c_\star \,s}\; t^{s/(1+s)} . \end{aligned}$$
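For the reader's convenience, we recall how these explicit expressions are obtained: separating variables in \(\dot{r}(t)=(c_\sharp -\varepsilon )r(t)^{-s}\) with \(r(0)=0\) gives

$$\begin{aligned} \frac{d}{dt}\,\frac{(r(t))^{1+s}}{1+s}=(r(t))^s\,\dot{r}(t)=c_\sharp -\varepsilon , \qquad {\text{ hence } }\quad r(t)=\big ((c_\sharp -\varepsilon )(1+s)\,t\big )^{1/(1+s)}=c_\star \,t^{1/(1+s)}, \end{aligned}$$

and then the formula for \(\delta (t)\) follows by integrating \(1/r(\tau )=c_\star ^{-1}\tau ^{-1/(1+s)}\) over \((0,t)\).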

We now estimate the outer normal velocity of \({ { { {{\mathcal {G}}}_{\delta (t),r(t)} } } }\) via Lemma 7.1. First of all, from (7.3) at \(p\in (\partial { { { {{\mathcal {G}}}_{\delta (t),r(t)} } } })\cap \{|x_2|=r(t), |x_1|<\sqrt{2}\}\) we get

$$\begin{aligned} \dot{r}(t)=\frac{c_\sharp -\varepsilon }{(r(t))^s}\leqslant -H^s_{ {{\mathcal {G}}}_{\delta (t),r(t)} }(p)-\frac{\varepsilon }{c_\star ^s t^{s/(1+s)}}. \end{aligned}$$

Moreover, the shrinking velocity at \(x\in (\partial { { { {{\mathcal {G}}}_{\delta (t),r(t)} } } }) {\setminus }\{|x_2|=r(t)\}\) is at least \(r(t){{\dot{\delta }}}(t)=1/(c_\sharp -\varepsilon )\). This implies that at every \(x\in (\partial { { { {{\mathcal {G}}}_{\delta (t),r(t)} } } }) {\setminus }\{|x_2|=r(t)\}\) we get

$$\begin{aligned} \partial _t x\cdot \nu (x)\leqslant -\frac{1}{c_\sharp -\varepsilon }\leqslant -H^s_{ {{\mathcal {G}}}_{\delta (t),r(t)} }(x)-\frac{\varepsilon }{c_\sharp (c_\sharp -\varepsilon )} \end{aligned}$$

by (7.2), since \(\frac{1}{c_\sharp -\varepsilon }=\frac{1}{c_\sharp }+\frac{\varepsilon }{c_\sharp (c_\sharp -\varepsilon )}\). Therefore, by Proposition A.10, we get that

$$\begin{aligned} B_{ c_\star \,t^{1/(1+s)} }\subseteq { { { {{\mathcal {G}}}_{\delta (t),r(t)} } } }\subseteq {\mathcal {G}}^+(t). \end{aligned}$$
(7.5)

Conversely, since \({{\mathcal {G}}}\) is contained in the cross \({{\mathcal {C}}}\), it follows from Corollary A.8 and Theorem 1.4 that

$$\begin{aligned} B_{c_o t^{1/(1+s)}}\subseteq ({\mathbb {R}}^2{\setminus }{\mathcal {C}})^+(t)\subseteq ({\mathbb {R}}^2{\setminus }{{\mathcal {G}}})^+(t). \end{aligned}$$

From this and (7.5) it follows that

$$\begin{aligned} B_{{\hat{c}}\, t^{1/(1+s)}}\subseteq {\mathcal {G}}^+(t)\cap ({\mathbb {R}}^2{\setminus }{{\mathcal {G}}})^+(t)=\Sigma _{{\mathcal {G}}}(t) \end{aligned}$$

with \({\hat{c}}:=\min \{ c_\star ,\,c_o\}\), which proves (1.28). \(\square \)

8 Perturbation of tangent balls and proof of Theorem 1.12

Also in this section, the state space is \({\mathbb {R}}^2\). The idea to prove Theorem 1.12 is to construct inner barriers using “almost tangent” balls and to take advantage of the scale invariance given by the homogeneous kernels in (1.24). For this, given \(\delta \in \left[ 0,\frac{1}{8}\right] \), we consider the set

$$\begin{aligned} {{\mathcal {Z}}}_{\delta ,r}:= B_r\big ( (1+\delta )r,0\big )\cup B_r\big ( (-1-\delta )r,0\big )\subseteq {\mathbb {R}}^2. \end{aligned}$$

Then, we have that the nonlocal curvature of \({{\mathcal {Z}}}_{\delta ,r}\) is always controlled from above by that of the ball, and it becomes negative in the vicinity of the origin. More precisely:

Lemma 8.1

Assume (1.24) with \(n=2\). Then, for any \(p\in \partial {{\mathcal {Z}}}_{\delta ,r}\) we have that

$$\begin{aligned} H^K_{ {{\mathcal {Z}}}_{\delta ,r} }(p)\leqslant \frac{C}{r^s}, \end{aligned}$$
(8.1)

for some \(C>0\). In addition, there exists \(c\in (0,1)\) such that if \(\delta \in (0,c^2)\) and \( p\in (\partial {{\mathcal {Z}}}_{\delta ,r})\cap B_{cr}\) then

$$\begin{aligned} H^K_{ {{\mathcal {Z}}}_{\delta ,r} }(p)\leqslant -c. \end{aligned}$$
(8.2)

Proof

Notice that \(\partial {{\mathcal {Z}}}_{\delta ,r}\subseteq \big ( \partial B_r\big ( (1+\delta )r,0\big )\big )\cup \big (\partial B_r\big ( (-1-\delta )r,0\big ) \big )\). Moreover, \({{\mathcal {Z}}}_{\delta ,r}\supseteq B_r\big ( (1+\delta )r,0\big )\), as well as \({{\mathcal {Z}}}_{\delta ,r}\supseteq B_r\big ( (-1-\delta )r,0\big )\), hence, in view of (1.4), the nonlocal curvature of \({{\mathcal {Z}}}_{\delta ,r}\) is less than or equal to that of \(B_r\), which proves (8.1).

Now we prove (8.2). For this, up to scaling, we assume that \(r:=1\) and we take \(p\in (\partial {{\mathcal {Z}}}_{\delta ,1})\cap B_{c}\). Without loss of generality, we also suppose that \(p_1\), \(p_2>0\) and we observe that

$$\begin{aligned} B_c(-2c,0)\subseteq B_1 \big ( (-1-\delta ),0\big ), \end{aligned}$$
(8.3)

as long as c is small enough. Indeed, if \(x\in B_c(-2c,0)\) then we can write \(x=-2ce_1+\rho e\), for some \(\rho \in [0,c)\) and \(e\in {\mathbb {S}}^1\), and so

$$\begin{aligned} |x-(-1-\delta )e_1|= & {} |(1+\delta -2c)e_1+\rho e|\leqslant |1+\delta -2c|+\rho \\\leqslant & {} (1+\delta -2c)+c=1+\delta -c\leqslant 1+c^2-c<1. \end{aligned}$$

This proves (8.3).

Hence, from (1.4) and (8.3), the nonlocal curvature of \({{\mathcal {Z}}}_{\delta ,1}\) at p is less than or equal to the nonlocal curvature of \(B_1 \big ( (1+\delta ),0\big )\), which is bounded by some \(C>0\), minus the contribution coming from \(B_c(-2c,0)\). That is,

$$\begin{aligned} -H^K_{ {{\mathcal {Z}}}_{\delta ,1} }(p)\geqslant -C+ \int _{ B_c(-2c,0) }\frac{dx}{|x-p|^{2+s}}= -C+ \int _{ B_c(2c+p_1,p_2) }\frac{dy}{|y|^{2+s}}.\qquad \end{aligned}$$
(8.4)

Also, if \(y\in B_c(2c+p_1, p_2)\), we have that \(|y|\leqslant |y-2ce_1-p|+|2ce_1+p|\leqslant c+2c+|p|\leqslant 4c\), and so

$$\begin{aligned} \int _{ B_c(2c+p_1,p_2) }\frac{dy}{|y|^{2+s}}\geqslant \frac{c_0\, c^2}{c^{2+s}}=\frac{c_0}{c^s}, \end{aligned}$$

for some \(c_0>0\) (for instance, one can take \(c_0:=\pi \,4^{-(2+s)}\), since the ball of integration has measure \(\pi c^2\) and \(|y|\leqslant 4c\) on it). So we insert this information into (8.4) and we obtain

$$\begin{aligned} -H^K_{ {{\mathcal {Z}}}_{\delta ,1} }(p)\geqslant -C+\frac{c_0}{c^s}\geqslant \frac{c_0}{2c^s} \end{aligned}$$

as long as c is sufficiently small. This completes the proof of (8.2), as desired. \(\square \)

From Lemma 8.1, we can control the geometric flow of the double tangent balls from inside, with barriers that shrink on the sides and make some mass emanate from the origin:

Lemma 8.2

There exist \(\delta _0\in (0,1)\) and \({{\bar{C}}}>0\) such that if \(\delta \in (0,\delta _0)\), then

$$\begin{aligned} {{\mathcal {O}}}^-(\delta )\supset \bigcup _{\sigma \in (-\delta ^2,\delta ^2)}\Big ( B_{1-{{\bar{C}}}\delta } \big (1-{{\bar{C}}}\delta , 0\big ) \cup B_{1-{{\bar{C}}}\delta }\big (-1+{{\bar{C}}}\delta , 0\big )+\sigma e_2 \Big ). \end{aligned}$$

Proof

Fix \(\varepsilon \in (0,1)\), to be taken arbitrarily small in what follows. Let \(\mu \in [0,\sqrt{\varepsilon }]\) and let, for any \(t\in [0,(1-\varepsilon )/C_0)\),

$$\begin{aligned} \varepsilon _\mu (t):= \varepsilon -\mu t \qquad {\text{ and }}\qquad r(t):=1-\varepsilon -C_0\,t, \end{aligned}$$

with \(C_0>0\) to be chosen conveniently large. We consider an inner barrier consisting of two balls of radius r(t) which, for any \(t\in [0,(1-\varepsilon )/C_0)\), remain at distance \(2\varepsilon _\mu (t)\). Namely, we set

$$\begin{aligned} {{\mathcal {F}}}_{\varepsilon ,\mu }(t):= B_{r(t)}\big (r(t)+\varepsilon _\mu (t),0\big ) \cup B_{r(t)}\big (-r(t)-\varepsilon _\mu (t),0\big ). \end{aligned}$$
(8.5)
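In particular, as long as \(\varepsilon _\mu (t)\geqslant 0\), the two balls in (8.5) are indeed at mutual distance \(2\varepsilon _\mu (t)\), since

$$\begin{aligned} {\mathrm{dist}}\Big (B_{r(t)}\big (r(t)+\varepsilon _\mu (t),0\big ),\, B_{r(t)}\big (-r(t)-\varepsilon _\mu (t),0\big )\Big ) =2\big (r(t)+\varepsilon _\mu (t)\big )-2r(t)=2\varepsilon _\mu (t). \end{aligned}$$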

Notice that

$$\begin{aligned} {{\mathcal {O}}}\supseteq {{\mathcal {F}}}_{\varepsilon ,\mu }(0)+\sigma e_2 \qquad { \text{ for } \text{ any } }\sigma \in (-\varepsilon ,\varepsilon ),\qquad {\text{ with } }\ d(\partial {\mathcal {O}}, {{\mathcal {F}}}_{\varepsilon ,\mu }(0)+\sigma e_2)>0. \end{aligned}$$
(8.6)

We also observe that the vectorial velocity of this set is the superposition of a normal velocity \(-\dot{r} \nu \), where \(\nu \) is the interior normal, and a translation velocity \(\pm (\dot{r}+{{\dot{\varepsilon }}}_\mu )e_1\), with the plus sign for the ball on the right and the minus sign for the ball on the left. The inner normal velocity of this set is therefore equal to

$$\begin{aligned} \Big (-\dot{r} \nu \pm (\dot{r}+{{\dot{\varepsilon }}}_\mu )e_1\Big )\cdot \nu =-\dot{r}\pm (\dot{r}+{{\dot{\varepsilon }}}_\mu )\nu _1 = C_0\,(1\mp \nu _1)\mp \mu \nu _1. \end{aligned}$$
(8.7)
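To check (8.7), one can follow, say on the ball on the right (the one on the left being symmetric), the boundary point with a fixed interior normal \(\nu \), namely \(x(t)=\big (r(t)+\varepsilon _\mu (t)\big )e_1-r(t)\,\nu \); then

$$\begin{aligned} \dot{x}(t)\cdot \nu =\Big ((\dot{r}+{{\dot{\varepsilon }}}_\mu )e_1-\dot{r}\,\nu \Big )\cdot \nu =-\dot{r}+(\dot{r}+{{\dot{\varepsilon }}}_\mu )\,\nu _1 =C_0\,(1-\nu _1)-\mu \,\nu _1, \end{aligned}$$

in agreement with (8.7) (with the upper signs).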

Now, taking a point p on \(\partial {{\mathcal {F}}}_{\varepsilon ,\mu }(t)\), we distinguish two cases. Either \(p\in B_c\), where c is the constant given in Lemma 8.1, or \(p\in {\mathbb {R}}^2{\setminus } B_c\). In the first case, we have that

$$\begin{aligned} C_0\,(1\mp \nu _1)\mp \mu \nu _1\geqslant C_0\,(1-|\nu _1|)-\mu \geqslant 0-\mu \geqslant -\sqrt{\varepsilon }>-c. \end{aligned}$$

This and (8.7) give that the inner normal velocity of \({{\mathcal {F}}}_{\varepsilon ,\mu }(t)\) at p is larger than \(-c\), and therefore greater than \(H^K_{ {{\mathcal {Z}}}_{\delta ,r} }(p)\), thanks to (8.2).

If instead \(p\in {\mathbb {R}}^2{\setminus } B_c\), we have that \(|\nu _1(p)|\leqslant 1-c_0\), for a suitable \(c_0\in (0,1)\), depending on c, and therefore

$$\begin{aligned} C_0\,(1\mp \nu _1)\mp \mu \nu _1\geqslant & {} C_0\,(1-|\nu _1|)-\mu \geqslant C_0\,c_0-\mu \geqslant C_0\,c_0-1 \geqslant \frac{C_0\,c_0}{2} \\\geqslant & {} \frac{C_0\,c_0}{2^{s+1} \,(r(t))^s}, \end{aligned}$$

as long as \(C_0\) is sufficiently large. This and (8.7) give that the inner normal velocity of \({{\mathcal {F}}}_{\varepsilon ,\mu }(t)\) at p is strictly larger than \(\frac{C_0\,c_0}{2^{s+1} \,(r(t))^s}\), which, if \(C_0\) is chosen conveniently big, is in turn strictly larger than \(H^K_{ {{\mathcal {Z}}}_{\delta ,r} }(p)\), thanks to (8.1).

In any case, we have shown that the inner normal velocity of \({{\mathcal {F}}}_{\varepsilon ,\mu }(t)\) at p is strictly larger than \(H^K_{ {{\mathcal {Z}}}_{\delta ,r} }(p)\). This implies that \({{\mathcal {F}}}_{\varepsilon ,\mu }(t)\) is a strict subsolution according to Proposition A.10.

Then, by (8.6) and Proposition A.10

$$\begin{aligned} {{\mathcal {O}}}^-(t) \supseteq \bigcup _{\sigma \in (-\varepsilon ,\varepsilon )} \Big ({{\mathcal {F}}}_{\varepsilon ,\mu }(t)+\sigma e_2 \Big ), \end{aligned}$$
(8.8)

for any \(t\in [0,(1-\varepsilon )/C_0)\).

Now, taking \(\mu :=\sqrt{\varepsilon }\) in (8.5), we see that

$$\begin{aligned} {{\mathcal {F}}}_{\varepsilon ,\sqrt{\varepsilon }}(t) = B_{1-\varepsilon -C_0t}\big (1-\sqrt{\varepsilon }t-C_0 t,0\big ) \cup B_{1-\varepsilon -C_0t}\big (-(1-\sqrt{\varepsilon }t-C_0 t),0\big ) \end{aligned}$$

for all \(t\in [0,\,(1-\varepsilon )/C_0]\). In particular, taking \(t:=\sqrt{\varepsilon }\),

$$\begin{aligned} {{\mathcal {F}}}_{\varepsilon ,\sqrt{\varepsilon }}(\sqrt{\varepsilon }) = B_{1-\varepsilon -C_0\sqrt{\varepsilon }}\big (1-\varepsilon -C_0 \sqrt{\varepsilon },0\big ) \cup B_{1-\varepsilon -C_0\sqrt{\varepsilon }}\big (-(1-\varepsilon -C_0 \sqrt{\varepsilon }),0\big ), \end{aligned}$$

and the latter are two tangent balls at the origin. From this and (8.8), we deduce that

$$\begin{aligned} {{\mathcal {O}}}^-(\sqrt{\varepsilon })\supseteq & {} \bigcup _{\sigma \in (-\varepsilon ,\varepsilon )} \Big (B_{1-\varepsilon -C_0\sqrt{\varepsilon }}\big (1-\varepsilon -C_0 \sqrt{\varepsilon },0\big ) \\&\cup&B_{1-\varepsilon -C_0\sqrt{\varepsilon }}\big (-(1-\varepsilon -C_0 \sqrt{\varepsilon }),0\big )+\sigma e_2 \Big ), \end{aligned}$$

and this implies the desired result by choosing \(\delta :=\sqrt{\varepsilon }\) and \({{\bar{C}}}:= 2(C_0+1)\). \(\square \)

We can now complete the proof of Theorem 1.12 in the following way:

Proof of Theorem 1.12

We observe that, in the setting of Lemma A.13, the result in Lemma 8.2 can be written as

$$\begin{aligned} {{\mathcal {O}}}^-(\delta )\supseteq (1-{{\bar{C}}}\delta ){{\mathcal {O}}}^+(0)\qquad \text {with }\quad d( {{\mathcal {O}}}^-(\delta ), (1-{{\bar{C}}}\delta ){{\mathcal {O}}}^+(0))\geqslant \delta ^2 \end{aligned}$$

for all \(\delta \in (0,\delta _0)\).

Fix now \(C\geqslant {{\bar{C}}}\) and let \({{\mathcal {U}}}:=(1-C\delta ){{\mathcal {O}}}^+(0)\). Then, by Corollary A.8, we have

$$\begin{aligned} {{\mathcal {O}}}^-(t+\delta )\supseteq {{\mathcal {U}}}(t) \end{aligned}$$
(8.9)

for all \(t\geqslant 0\).

Now, in view of Lemma A.13,

$$\begin{aligned} {{\mathcal {U}}}(t)=(1-C\delta )\;{{\mathcal {O}}}^+\left( \frac{t}{(1-C\delta )^{1+s}}\right) \end{aligned}$$

and so, combining with (8.9),

$$\begin{aligned} {{\mathcal {O}}}^-(t+\delta )\supseteq (1- C\delta )\;{{\mathcal {O}}}^+\left( \frac{t}{(1-C\delta )^{1+s}}\right) . \end{aligned}$$

Consequently, for any \(t\geqslant \delta \), we can estimate the measure of the fattening set as

$$\begin{aligned} \begin{aligned} \big | {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) {\setminus } \overline{{{\mathcal {O}}}^-(t)}\big |\,&\leqslant \left| {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) {\setminus } (1-C\delta )\;{{\mathcal {O}}}^+\left( \frac{t-\delta }{(1-C\delta )^{1+s}}\right) \right| \\&= \left| {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) \right| -\left| (1-C\delta )\;{{\mathcal {O}}}^+\left( \frac{t-\delta }{(1-C\delta )^{1+s}}\right) \right| \\&= \left| {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) \right| -(1-C\delta )^2\,\left| {{\mathcal {O}}}^+\left( \frac{t-\delta }{(1-C\delta )^{1+s}}\right) \right| . \end{aligned} \end{aligned}$$
(8.10)

We now fix \(t_0\geqslant \delta \) and choose \(C=C(t_0)\geqslant {{\bar{C}}}\) such that

$$\begin{aligned} t\leqslant \frac{t-\delta }{(1-C\delta )^{1+s}}\qquad \text {for all }t\geqslant t_0\,. \end{aligned}$$
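Such a choice is indeed possible: for instance, one can take \(C:=\max \{{{\bar{C}}},1/t_0\}\), since then, for every \(t\geqslant t_0\) and every \(\delta \in (0,1/C)\),

$$\begin{aligned} (1-C\delta )^{1+s}\leqslant 1-C\delta \leqslant 1-\frac{\delta }{t_0}\leqslant 1-\frac{\delta }{t}, \qquad {\text{ that } \text{ is, } }\quad t\leqslant \frac{t-\delta }{(1-C\delta )^{1+s}}. \end{aligned}$$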

So, by Proposition A.12, we get that

$$\begin{aligned} \liminf _{\delta \searrow 0} \left| {{\mathcal {O}}}^+\left( \frac{t-\delta }{(1-C\delta )^{1+s}}\right) \right| \geqslant \big | {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) \big |. \end{aligned}$$

This and (8.10) yield that, for \(t\geqslant t_0\),

$$\begin{aligned}&\big | {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) {\setminus } \overline{{{\mathcal {O}}}^-(t)}\big | \\&\quad \leqslant \limsup _{\delta \searrow 0} \left( \left| {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) \right| -(1-C\delta )^2\,\left| {{\mathcal {O}}}^+\left( \frac{t-\delta }{(1-C\delta )^{1+s}}\right) \right| \right) \\&\quad = \left| {\mathrm{int}}\left( {{\mathcal {O}}}^+(t)\right) \right| -\liminf _{\delta \searrow 0}(1-C\delta )^2\,\left| {{\mathcal {O}}}^+\left( \frac{t-\delta }{(1-C\delta )^{1+s}}\right) \right| \leqslant 0. \end{aligned}$$

Since \(t_0\) was chosen arbitrarily, this completes the proof of Theorem 1.12. \(\square \)