1 Introduction

We determine the maximum measure of non-trivial 3-wise intersecting families, and discuss the stability of the optimal structure. To make the statement precise let us start with some definitions.

Let \(n\ge t\ge 1\) and \(r\ge 2\) be integers. For a finite set X let \(2^X\) denote the power set of X. We say that a family of subsets \(\mathcal G\subset 2^{X}\) is r-wise t-intersecting if \(|G_1\cap \cdots \cap G_r|\ge t\) for all \(G_1,\ldots ,G_r\in \mathcal G\). If \(t=1\) then we omit t and say an r-wise intersecting family to mean an r-wise 1-intersecting family.

Let \(0<p<1\) be a real number and let \(q=1-p\). For \(\mathcal G\subset 2^{X}\) we define its measure (or p-measure) \(\mu _p(\mathcal G:X)\) by

$$\begin{aligned} \mu _p(\mathcal G:X):=\sum _{G\in \mathcal G}p^{|G|} q^{|X|-|G|}. \end{aligned}$$

We mainly consider the case \(X=[n]\), where \([n]:=\{1,2,\ldots ,n\}\). In this case we just write \(\mu _p(\mathcal G)\) to mean \(\mu _p(\mathcal G:[n])\).

We say that an r-wise t-intersecting family \(\mathcal G\subset 2^{[n]}\) is non-trivial if \(|\bigcap \mathcal G|<t\), where \(\bigcap \mathcal G:=\bigcap _{G\in \mathcal G}G\). Let us denote the maximum p-measure of such families by \(M_r^t(n,p)\), that is,

$$\begin{aligned} M_r^t(n,p):=\max \{\mu _p(\mathcal G):\mathcal G\subset 2^{[n]}\text { is non-trivial }r\text {-wise }t\text {-intersecting}\}. \end{aligned}$$

If a family \(\mathcal G\subset 2^{[n]}\) is non-trivial r-wise t-intersecting, then so is

$$\begin{aligned} \mathcal G':=\mathcal G\sqcup \{G\sqcup \{n+1\}:G\in \mathcal G\}\subset 2^{[n+1]}. \end{aligned}$$

Since \(\mathcal G\) and \(\mathcal G'\) have the same p-measure, the function \(M_r^t(n,p)\) is non-decreasing in n for fixed r, t, p, and we can define

$$\begin{aligned} M_r^t(p):=\lim _{n\rightarrow \infty }M_r^t(n,p). \end{aligned}$$

For simplicity if \(t=1\) then we just write \(M_r(n,p)\) and \(M_r(p)\).

What is generally known about \(M_r(n,p)\) and \(M_r(p)\)? For the case \(r=2\) we have the following.

$$\begin{aligned} M_2(p)={\left\{ \begin{array}{ll} p &{} \text {if } 0<p\le \frac{1}{2},\\ 1 &{} \text {if } \frac{1}{2}<p<1. \end{array}\right. } \end{aligned}$$

Indeed it is easy to see that \(M_2(n,\frac{1}{2})=\frac{1}{2}\), and it is known from [1] that \(M_2(n,p)<p\) for \(p<\frac{1}{2}\). Thus \(M_2(p)\le p\) for \(p\le \frac{1}{2}\). On the other hand we construct a non-trivial 2-wise intersecting family by \(\mathcal F:=(\{F\in 2^{[n]}:1\in F\}{\setminus }\{\{1\}\})\cup \{[2,n]\}\), where \([i,j]:=[j]{\setminus }[i-1]\). Then we have \(\mu _p(\mathcal F)=p-pq^{n-1}+qp^{n-1}\rightarrow p\) as \(n\rightarrow \infty \). Thus \(M_2(p)=p\) for \(p\le \frac{1}{2}\). For the case \(p>\frac{1}{2}\) we construct a non-trivial 2-wise intersecting family \(\mathcal G:=\{F\in 2^{[n]}:|F|>n/2\}\). Then \(\mu _p(\mathcal G)=\sum _{k>n/2}\left( {\begin{array}{c}n\\ k\end{array}}\right) p^kq^{n-k}\rightarrow 1\) as \(n\rightarrow \infty \), and so \(M_2(p)=1\) for \(p>\frac{1}{2}\).

The case \(p=\frac{1}{2}\) (and arbitrary \(r\ge 2\)) is also known. Brace and Daykin [5] determined the maximum size of non-trivial r-wise intersecting families. In other words, they determined \(M_r(n,\frac{1}{2})\). To state their results we define a non-trivial r-wise intersecting family \(\textrm{BD}_r(n)\) by

$$\begin{aligned} \textrm{BD}_r(n):=\{F\in 2^{[n]}:|F\cap [r+1]|\ge r\}. \end{aligned}$$

Then \(\mu _p(\textrm{BD}_r(n))=(r+1)p^rq+p^{r+1}\).
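As a quick illustration of these definitions (a brute-force sanity check of ours, not used anywhere in the proofs), the following Python sketch enumerates \(\textrm{BD}_r(n)\) for a small n, verifies that it is non-trivial and r-wise intersecting, and confirms the measure formula above.

```python
from itertools import combinations, product

def mu(p, family, n):
    """p-measure of a family of subsets of [n] (subsets as frozensets)."""
    return sum(p ** len(F) * (1 - p) ** (n - len(F)) for F in family)

def bd(r, n):
    """BD_r(n) = {F subset of [n] : |F meet [r+1]| >= r}."""
    ground = range(1, n + 1)
    head = set(range(1, r + 2))
    return [frozenset(S) for k in range(n + 1) for S in combinations(ground, k)
            if len(set(S) & head) >= r]

r, n, p = 3, 6, 0.45
fam = bd(r, n)

# non-trivial: no element lies in every member of the family
assert frozenset.intersection(*fam) == frozenset()
# r-wise intersecting: every r members (with repetition) share an element
assert all(frozenset.intersection(*T) for T in product(fam, repeat=r))
# the measure matches the closed formula (r+1) p^r q + p^(r+1)
q = 1 - p
assert abs(mu(p, fam, n) - ((r + 1) * p ** r * q + p ** (r + 1))) < 1e-12
```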

Theorem 1

(Brace and Daykin [5]) For \(r\ge 2\) we have \(M_r(n,\frac{1}{2})=\mu _{\frac{1}{2}}(\textrm{BD}_r(n))\). If \(r\ge 3\) then \(\textrm{BD}_r(n)\) is the only optimal family (up to isomorphism) whose measure attains \(M_r(n,\frac{1}{2})\).

Here two families \(\mathcal F,\mathcal G\subset 2^{[n]}\) are isomorphic if there is a permutation \(\tau \) on [n] such that \(\mathcal F=\{\{\tau (g):g\in G\}:G\in \mathcal G\}\). In this case we write \(\mathcal F\cong \mathcal G\).

The other known case is when p is close to \(\frac{1}{2}\). In this case we can extend Theorem 1 as follows, provided \(r\ge 8\).

Theorem 2

[20] Let \(r\ge 8\). Then there exists \(\epsilon =\epsilon (r)>0\) such that \(M_r(n,p)=\mu _p(\textrm{BD}_r(n))\) for \(|p-\frac{1}{2}|<\epsilon \), and \(\textrm{BD}_r(n)\) is the only optimal family (up to isomorphism).

In [20] it is conjectured that the same holds for \(r=6\) and 7 as well. On the other hand there is a construction showing that if \(r\le 5\) then \(M_r(n,p)>\mu _p(\textrm{BD}_r(n))\) for p close to 1/2.

In this paper we focus on the case \(r=3\). We determine \(M_3(p)\) for all p.

Theorem 3

For non-trivial 3-wise intersecting families we have

$$\begin{aligned} M_3(p)={\left\{ \begin{array}{ll} p^2 &{} \text {if } p\le \frac{1}{3},\\ 4p^3q+p^4 &{} \text {if } \frac{1}{3}\le p\le \frac{1}{2},\\ p &{} \text {if } \frac{1}{2}<p\le \frac{2}{3},\\ 1 &{} \text {if } \frac{2}{3}<p<1. \end{array}\right. } \end{aligned}$$
Fig. 1: The graph of \(M_3(p)\)

In the case of \(M_2(p)\) there is a jump at \(p=\frac{1}{2}\). In the case of \(M_3(p)\) there are two jumps, at \(p=\frac{1}{2}\) and at \(p=\frac{2}{3}\), as shown in Fig. 1, and \(M_3(p)\) is continuous at \(p=\frac{1}{3}\) but not differentiable at this point. We also note that \(\mu _p(\textrm{BD}_3(n))=4p^3q+p^4\).

The most interesting part is the case \(\frac{1}{3}\le p\le \frac{1}{2}\). In this case we determine \(M_3(n,p)\) and the corresponding optimal structure.

Theorem 4

Let \(\frac{1}{3}\le p\le \frac{1}{2}\). Then we have \(M_3(n,p)=\mu _p(\textrm{BD}_3(n))\). Moreover, \(\textrm{BD}_3(n)\) is the only optimal family (up to isomorphism), that is, if \(\mathcal F\subset 2^{[n]}\) is a non-trivial 3-wise intersecting family with \(\mu _p(\mathcal F)=M_3(n,p)\) then \(\mathcal F\cong \textrm{BD}_3(n)\).

We also consider the stability of the optimal family for \(\frac{1}{3}\le p\le \frac{1}{2}\). Roughly speaking we will claim that if a non-trivial 3-wise intersecting family has measure close to \(M_3(n,p)\) then the family is close to \(\textrm{BD}_3(n)\) in structure. A similar result is known for 2-wise t-intersecting families. For comparison with our case let \(\frac{1}{3}<p<\frac{2}{5}\) and \(t=2\). Note that \(\textrm{BD}_3(n)\) is a 2-wise 2-intersecting family. If \(\mathcal F\subset 2^{[n]}\) is a 2-wise 2-intersecting family, then it follows from the Ahlswede–Khachatrian theorem (see Theorem 11) that \(\mu _p(\mathcal F)\le \mu _p(\textrm{BD}_3(n))\). Moreover, if \(\mu _p(\mathcal F)\) is close to \(\mu _p(\textrm{BD}_3(n))\) then \(\mathcal F\) is close to \(\textrm{BD}_3(n)\). This follows from a stability result (corresponding to Theorem 11) proved by Ellis, Keller, and Lifshitz. Here we include a version due to Filmus applied to the case \(\frac{1}{3}<p<\frac{2}{5}\) and \(t=2\).

Theorem 5

(Ellis–Keller–Lifshitz [6], Filmus [7]) Let \(\frac{1}{3}<p<\frac{2}{5}\). There is a constant \(\epsilon _0=\epsilon _0(p)\) such that the following holds. If \(\mathcal F\subset 2^{[n]}\) is a 2-wise 2-intersecting family with \(\mu _p(\mathcal F)=\mu _p(\textrm{BD}_3(n))-\epsilon \), where \(\epsilon <\epsilon _0\), then there is a family \(\mathcal G\cong \textrm{BD}_3(n)\) such that \(\mu _p(\mathcal F\triangle \mathcal G)=O(\epsilon )\), where the hidden constant depends on p only.

We note that the condition \(\mu _p(\mathcal F\triangle \mathcal G)=O(\epsilon )\) in Theorem 5 cannot be replaced with the condition \(\mathcal F\subset \mathcal G\). To see this, consider a 2-wise 2-intersecting family

$$\begin{aligned} \mathcal F=\left( \textrm{BD}_3(n){\setminus }\{\{1,3,4\},\{2,3,4\}\}\right) \cup \{[n]{\setminus }\{3,4\}\}. \end{aligned}$$
(1)

Then \(\mu _p(\mathcal F)=\mu _p(\textrm{BD}_3(n))-2p^3q^{n-3}+p^{n-2}q^{2}\rightarrow \mu _p(\textrm{BD}_3(n))\) as \(n\rightarrow \infty \), but \(\mathcal F\) is not contained in \(\textrm{BD}_3(n)\) (or any isomorphic copy of \(\textrm{BD}_3(n)\)).

Note that a non-trivial r-wise t-intersecting family is necessarily an \((r-1)\)-wise \((t+1)\)-intersecting family. (Otherwise there are \(r-1\) members whose intersection has size exactly t, and then every member of the family must contain these t elements for the family to be r-wise t-intersecting, which contradicts non-triviality.) Thus Theorem 5 also holds if we replace the assumption that \(\mathcal F\) is a 2-wise 2-intersecting family with the assumption that \(\mathcal F\) is a non-trivial 3-wise intersecting family. We also note that the family \(\mathcal F\) defined by (1) is 2-wise 2-intersecting, but not 3-wise intersecting. This suggests the possibility of a stronger stability result for non-trivial 3-wise intersecting families than for 2-wise 2-intersecting families.

Conjecture 1

Let \(\frac{1}{3}<p\le \frac{1}{2}\). There is a constant \(\epsilon _0=\epsilon _0(p)\) such that the following holds. If \(\mathcal F\subset 2^{[n]}\) is a non-trivial 3-wise intersecting family with \(\mu _p(\mathcal F)=\mu _p(\textrm{BD}_3(n))-\epsilon \), where \(\epsilon <\epsilon _0\), then there is a family \(\mathcal G\cong \textrm{BD}_3(n)\) such that \(\mathcal F\subset \mathcal G\).

We verify the conjecture for the case \(\frac{2}{5}\le p\le \frac{1}{2}\) provided that the family is shifted. Here we say that a family \(\mathcal F\subset 2^{[n]}\) is shifted if, whenever \(F\in \mathcal F\) and \(\{i,j\}\cap F=\{j\}\) for some \(1\le i<j\le n\), we have \((F{\setminus }\{j\})\cup \{i\}\in \mathcal F\). The following is our main result in this paper.

Theorem 6

Let \(\frac{2}{5}\le p\le \frac{1}{2}\), and let \(\mathcal F\subset 2^{[n]}\) be a shifted non-trivial 3-wise intersecting family. If \(\mathcal F\not \subset \textrm{BD}_3(n)\) then \(\mu _p(\mathcal F)<\mu _p(\textrm{BD}_3(n))-0.0018\).

For the proof of Theorem 6 we divide the family into some subfamilies. These subfamilies are not only 3-wise intersecting, but also satisfy some additional intersection conditions. To capture the conditions we need some more definitions. We say that r families \(\mathcal F_1,\ldots ,\mathcal F_r\subset 2^{[n]}\) are r-cross t-intersecting if \(|F_1\cap \cdots \cap F_r|\ge t\) for all \(F_1\in \mathcal F_1,\ldots ,F_r\in \mathcal F_r\). If moreover \(\mathcal F_i\ne \emptyset \) for all \(1\le i\le r\), then the r families are called non-empty r-cross t-intersecting. As usual we say r-cross intersecting to mean r-cross 1-intersecting. The following result is used to prove Theorem 6.

Theorem 7

Let \(\frac{1}{3}\le p\le \frac{1}{2}\). If \(\mathcal F_1,\mathcal F_2,\mathcal F_3\subset 2^{[n]}\) are non-empty 3-cross intersecting families, then

$$\begin{aligned} \mu _p(\mathcal F_1)+ \mu _p(\mathcal F_2)+ \mu _p(\mathcal F_3) \le 3p. \end{aligned}$$
(2)

Suppose, moreover, that \(\frac{1}{3}<p\le \frac{1}{2}\), all \(\mathcal F_i\) are shifted, and \(\bigcap F=\emptyset \), where the intersection is taken over all \(F\in \mathcal F_1\cup \mathcal F_2\cup \mathcal F_3\). Then,

$$\begin{aligned} \mu _p(\mathcal F_1)+ \mu _p(\mathcal F_2)+ \mu _p(\mathcal F_3) \le 3p-\epsilon _p, \end{aligned}$$
(3)

where \(\epsilon _p=(2-3p)(3p-1)\).

The first inequality (2) is an easy consequence of a recent result on r-cross t-intersecting families obtained by Gupta et al. [13], while the second inequality (3) is proved by solving linear programming (LP) problems. We mention that equality holds in (2) only if \(|\bigcap _{F\in \mathcal F_1\cup \mathcal F_2\cup \mathcal F_3} F|=1\) unless \(p=\frac{1}{3}\). This fact will not be used for the proof of Theorem 6, but it follows easily from (3) and Lemma 1.

Here we outline the proof of Theorem 6. This is done by solving LP problems as follows. First we divide \(\mathcal F\) into subfamilies, say, \(\mathcal F=\mathcal F_1\cup \mathcal F_2\cup \cdots \cup \mathcal F_k\). Let \(x_i=\mu _p(\mathcal F_i)\). These subfamilies satisfy some additional conditions, which give us (not necessarily linear) constraints on the variables \(x_i\). Under the constraints we need to maximize \(\mu _p(\mathcal F)=\sum _{i=1}^kx_i\). In principle this problem can be solved by the method of Lagrange multipliers, but in practice it is not so easy even if a concrete p is fixed. To overcome the difficulty we first replace non-linear constraints with weaker piecewise linear constraints. In this way the problem turns into an LP problem with the parameter p, which can be solved efficiently provided p is given. Next, instead of solving this primal problem, we seek a feasible solution to the dual LP problem with the parameter. Finally the desired upper bound for \(\mu _p(\mathcal F)\) is obtained by the weak duality theorem. As for related proof techniques, we refer to [24] for applications of LP methods to some other extremal problems, and to [18, 19] for SDP methods.

In the next section we gather tools we use to prove our results. Then in Sect. 3 we deduce Theorem 3 and Theorem 4 from Theorem 6. In Sect. 4 we prove Theorem 7, whose proof is a prototype of the proof of Theorem 6. In Sect. 5 we prove our main result Theorem 6. Finally in the last section we discuss possible extensions to non-trivial r-wise intersecting families for \(r\ge 4\), and a related k-uniform problem. In particular we include counterexamples to a recent conjecture posed by O’Neill and Verstraëte [17] (cf. Balogh and Linz [4]).

2 Preliminaries

2.1 Shifting

For \(1\le i<j\le n\) we define the shifting operation \(\sigma _{i,j}:2^{[n]}\rightarrow 2^{[n]}\) by

$$\begin{aligned} \sigma _{i,j}(\mathcal G):=\{G_{i,j}:G\in \mathcal G\}, \end{aligned}$$

where

$$\begin{aligned} G_{i,j}:={\left\{ \begin{array}{ll} (G{\setminus }\{j\})\cup \{i\}&{}\text {if }j\in G,\ i\not \in G,\text { and }(G{\setminus }\{j\})\cup \{i\}\not \in \mathcal G,\\ G &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

By definition \(\mu _p(\mathcal G)=\mu _p(\sigma _{i,j}(\mathcal G))\) follows. We say that \(\mathcal G\) is shifted if \(\mathcal G\) is invariant under any shifting operations, in other words, if \(G\in \mathcal G\) then \(G_{i,j}\in \mathcal G\) for all \(1\le i<j\le n\). If \(\mathcal G\) is not shifted then \(\sum _{G\in \mathcal G}\sum _{g\in G}g>\sum _{G'\in \sigma _{i,j}(\mathcal G)}\sum _{g'\in G'}g'\) for some \(i<j\), and so starting from \(\mathcal G\) we get a shifted \(\mathcal G'\) by applying shifting operations repeatedly finitely many times. It is not difficult to check that if \(\mathcal G\) is r-wise t-intersecting, then so is \(\sigma _{i,j}(\mathcal G)\). Therefore if \(\mathcal G\) is an r-wise t-intersecting family, then there is a shifted r-wise t-intersecting family \(\mathcal G'\) with \(\mu _p(\mathcal G')=\mu _p(\mathcal G)\). It is also true that if \(\mathcal G_1,\ldots ,\mathcal G_r\) are r-cross t-intersecting families, then there are shifted r-cross t-intersecting families \(\mathcal G'_1,\ldots ,\mathcal G_r'\) with \(\mu _p(\mathcal G_i)=\mu _p(\mathcal G'_i)\) for all \(1\le i\le r\).
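For small ground sets the shifting operation can be carried out directly. The following Python sketch (ours, purely illustrative) implements \(\sigma _{i,j}\), iterates it until the family is invariant, and checks that the p-measure is preserved and that the result is shifted; the starting family is an arbitrary example.

```python
from itertools import combinations

def mu(p, family, n):
    return sum(p ** len(F) * (1 - p) ** (n - len(F)) for F in family)

def shift(family, i, j):
    """sigma_{i,j}: replace j by i in a member G (when j in G, i not in G),
    unless the resulting set already lies in the family."""
    out = set()
    for G in family:
        if j in G and i not in G:
            H = frozenset((G - {j}) | {i})
            out.add(H if H not in family else G)
        else:
            out.add(G)
    return out

def make_shifted(family, n):
    """Apply shifting operations until the family is invariant."""
    fam = set(family)
    changed = True
    while changed:
        changed = False
        for i in range(1, n):
            for j in range(i + 1, n + 1):
                new = shift(fam, i, j)
                if new != fam:
                    fam, changed = new, True
    return fam

n, p = 5, 0.45
fam = {frozenset(S) for S in combinations(range(1, n + 1), 3)} | {frozenset({2, 4})}
shifted = make_shifted(fam, n)
assert abs(mu(p, fam, n) - mu(p, shifted, n)) < 1e-12     # measure is preserved
assert all(shift(shifted, i, j) == shifted                # result is shifted
           for i in range(1, n) for j in range(i + 1, n + 1))
```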

For the proof of Theorem 4 we use the fact that if \(\sigma _{i,j}(\mathcal G)\cong \textrm{BD}_3(n)\) then \(\mathcal G\cong \textrm{BD}_3(n)\). More generally the following holds.

Lemma 1

Let n, a, b be integers with \(a\ge 1\), \(b\ge 0\), and \(n\ge a+2b\), and let \(\mathcal F=\{F\subset [n]:|F\cap [a+2b]|\ge a+b\}\). If \(\mathcal G\subset 2^{[n]}\) satisfies \(\sigma _{i,j}(\mathcal G)=\mathcal F\) then \(\mathcal G\cong \mathcal F\).

The above result is well-known, see, e.g., Lemma 6 in [16] for a proof and a history. We note that the condition \(\sigma _{i,j}(\mathcal G)=\mathcal F\) can be replaced with \(\sigma _{i,j}(\mathcal G)\cong \mathcal F\). Indeed if \(\sigma _{i,j}(\mathcal G)=\mathcal F'\) and \(\mathcal F'\cong \mathcal F\), then by Lemma 1 (and by renaming the vertices) we have \(\mathcal G\cong \mathcal F'\), and so \(\mathcal G\cong \mathcal F\). By choosing \(a=r-1\) and \(b=1\), we see that if \(\sigma _{i,j}(\mathcal G)\cong \textrm{BD}_r(n)\) then \(\mathcal G\cong \textrm{BD}_r(n)\).

For \(G,H\subset [n]\) we say that G shifts to H, denoted by \(G\leadsto H\), if \(G=\emptyset \), or if \(|G|\le |H|\) and the ith smallest element of G is greater than or equal to that of H for each \(i\le |G|\). Note that the relation \(\leadsto \) is transitive, and this fact will be used later (Claims 16 and 17).

We say that \(\mathcal G\) is inclusion maximal if \(G\in \mathcal G\) and \(G\subset H\) imply \(H\in \mathcal G\). Since we are interested in the maximum measure of non-trivial 3-wise intersecting families, we always assume that families are inclusion maximal. If \(\mathcal G\) is shifted and inclusion maximal, then \(G\in \mathcal G\) and \(G\leadsto H\) imply \(H\in \mathcal G\).

2.2 Duality in linear programming

For later use we briefly record the weak duality theorem in linear programming. See e.g., chapter 6 in [12] for more details.

A primal linear programming problem (P) is formalized as follows.

maximize::

\(c^{\mathsf T}x\),

subject to::

\(Ax\le b\) and \(x\ge 0\).

The corresponding dual programming problem (D) is as follows.

minimize::

\(b^{\mathsf T}y\),

subject to::

\(A^{\mathsf T}y\ge c\) and \(y\ge 0\).

Theorem 8

(Weak duality) For each feasible solution x of (P) and each feasible solution y of (D) we have \(c^{\mathsf T}x\le b^{\mathsf T}y\).

2.3 Tools for the proof of Theorem 7

Let n, t, a be fixed positive integers with \(t\le a\le n\). Define two families \(\mathcal A\) and \(\mathcal B\) by

$$\begin{aligned} \mathcal A&=\{F\subset [n]:|F\cap [a]|\ge t\}, \\ \mathcal B&=\{F\subset [n]:[a]\subset F\}. \end{aligned}$$

Then \(\mu _p(\mathcal A)=1-\sum _{j=0}^{t-1}\left( {\begin{array}{c}a\\ j\end{array}}\right) p^jq^{a-j}\), and \(\mu _p(\mathcal B)=p^a\). Let \(\mathcal F_1=\mathcal A\), \(\mathcal F_2=\cdots =\mathcal F_r=\mathcal B\). Then \(\mathcal F_1,\ldots ,\mathcal F_r\) are r-cross t-intersecting families with \(\sum _{i=1}^r\mu _p(\mathcal F_i)=\mu _p(\mathcal A)+(r-1)\mu _p(\mathcal B)\). The next result is a special case of Theorem 1.4 in [13], which states that the above construction is the best choice to maximize the sum of p-measures of non-empty r-cross t-intersecting families provided \(p\le \frac{1}{2}\).

Theorem 9

(Gupta–Mogge–Piga–Schülke [13]) Let \(r\ge 2\) and \(0<p\le \frac{1}{2}\). If \(\mathcal F_1,\ldots ,\mathcal F_r\subset 2^{[n]}\) are non-empty r-cross t-intersecting families, then

$$\begin{aligned} \sum _{i=1}^r\mu _p(\mathcal F_i)\le \max \bigg \{\bigg (1-\sum _{j=0}^{t-1}\left( {\begin{array}{c}a\\ j\end{array}}\right) p^jq^{a-j}\bigg )+(r-1)p^a:t\le a\le n\bigg \}. \end{aligned}$$

We need the non-empty condition to exclude the case \(\mathcal F_1=\emptyset \), \(\mathcal F_2=\cdots =\mathcal F_r=2^{[n]}\).

Lemma 2

Let \(0<p\le \frac{1}{2}\). Suppose that \(\mathcal F_1,\mathcal F_2 \subset 2^{[n]}\) are 2-cross intersecting families.

  1. (i)

    \(\mu _p(\mathcal F_1)+\mu _p(\mathcal F_2)\le 1\).

  2. (ii)

    \(\mu _p(\mathcal F_1)\mu _p(\mathcal F_2)\le p^2\).

Proof

(i) If one of the families is empty, then the inequality clearly holds. So suppose that both families are non-empty. Then, by Theorem 9, we have

$$\begin{aligned} \mu _p(\mathcal F_1) + \mu _p(\mathcal F_2) \le \max \{(1-q^a)+p^a:1\le a\le n\}. \end{aligned}$$

Thus it suffices to show that \(1-q^a+p^a\le 1\), or equivalently, \(p^a\le (1-p)^a\) for all \(a\ge 1\). Indeed this follows from the assumption \(p\le \frac{1}{2}\).

(ii) This is proved in [21] as Theorem 2. \(\square \)

Lemma 3

Let \(0<p\le \frac{1}{2}\), and \(t\ge 2\). If \(\mathcal F_1,\mathcal F_2, \mathcal F_3 \subset 2^{[n]}\) are non-empty 3-cross t-intersecting families, then \(\mu _p(\mathcal F_1)+\mu _p(\mathcal F_2)+\mu _p(\mathcal F_3)\le 1\).

Proof

Note that if \(t\ge 2\) then 3-cross t-intersecting families are 3-cross 2-intersecting families. Thus, using Theorem 9, it suffices to show that

$$\begin{aligned} f(p,a):=(1-q^a-apq^{a-1})+2p^a \le 1 \end{aligned}$$

for \(a\ge t\). This inequality follows from the fact that f(p, a) is increasing in p, and \(f(\frac{1}{2},a)\) is non-decreasing in a for \(a=2,3,\ldots \), and \(\lim _{a\rightarrow \infty }f(\frac{1}{2},a)=1\). \(\square \)
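The inequality \(f(p,a)\le 1\) used above can also be sanity-checked numerically on a grid (such a check is of course not a proof):

```python
def f(p, a):
    """f(p, a) = (1 - q^a - a p q^(a-1)) + 2 p^a, as in the proof of Lemma 3."""
    q = 1.0 - p
    return (1 - q ** a - a * p * q ** (a - 1)) + 2 * p ** a

# grid check: f(p, a) <= 1 for 0 < p <= 1/2 and a >= 2
assert max(f(k / 1000, a) for k in range(1, 501) for a in range(2, 60)) <= 1 + 1e-12
```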

2.4 Random walk

Here we extend the random walk method to deal with p-measures of r-cross t-intersecting families possibly with different p-measures. The method was originally introduced by Frankl in [9].

Let \(r\ge 2\) be a positive integer. For \(1\le i\le r\) let \(p_i\) be a real number with \(0<p_i<1-\frac{1}{r}\), and let \(q_i=1-p_i\). Let \({\alpha }(p_i)\) be the unique root in (0, 1) of the equation

$$\begin{aligned} X=p_i+q_iX^r, \end{aligned}$$
(4)

and let \({\beta }={\beta }(p_1,\ldots ,p_r)\) be the unique root in (0, 1) of the equation

$$\begin{aligned} X=\prod _{i=1}^r(p_i+q_iX). \end{aligned}$$
(5)

Consider two types of random walks, \(A_i\) and B, in the two-dimensional grid \(\mathbb Z^2\). Both walks start at the origin, and at each step a walk moves from \((x,y)\) to \((x,y+1)\) (one step up), or from \((x,y)\) to \((x+1,y)\) (one step to the right). At every step the type \(A_i\) walk takes one step up with probability \(p_i\), and one step to the right with probability \(q_i\). On the other hand, at step j, the type B walk takes one step up with probability \(p_i\), and one step to the right with probability \(q_i\), where \(i\in [r]\) is determined by \(i\equiv j\pmod r\). Let \(L_j\) denote the line \(y=(r-1)x+j\).

Claim 1

Let \(r\ge 2\) and \(t\ge 1\) be integers. Then we have

$$\begin{aligned} \mathbb P(\text {the type }A_i\text { walk hits the line }L_t)&={\alpha }(p_i)^t,\\ \mathbb P(\text {the type }B\text { walk hits the line }L_{rt})&={\beta }(p_1,\ldots ,p_r)^t. \end{aligned}$$

Proof

Let \(x_i(t)\) denote the probability that the walk \(A_i\) hits the line \(L_t\). After the first step of the walk, it is at (0, 1) with probability \(p_i\), or at (1, 0) with probability \(q_i\). From (0, 1) the probability for the walk hitting \(L_t\) is \(x_i(t-1)\), and from (1, 0) the probability is \(x_i(t-1+r)\). Therefore we have

$$\begin{aligned} x_i(t)=p_i x_i(t-1)+q_i x_i(t-1+r). \end{aligned}$$
(6)

Let \(a_j\) be the number of walks from (0, 0) to \(P_j:=(j,(r-1)j+t)\) which touch \(L_t\) only at \(P_j\). (It is known that \(a_j=\frac{t}{rj+t}\left( {\begin{array}{c}rj+t\\ j\end{array}}\right) \), but we do not need this fact.) Then we have \(x_i(t)=\sum _{j\ge 0}a_jp_i^{(r-1)j+t}q_i^j\). If a walk touches the line \(L_{t+1}\), then the walk needs to hit \(L_t\) somewhere, say, at \(P_j\) for the first time. Then the probability that the walk hits \(L_{t+1}\) starting from \(P_j\) is equal to \(x_i(1)\). Thus we have

$$\begin{aligned} x_i(t+1)=\sum _{j\ge 0}{(a_jp_i^{(r-1)j+t}q_i^j)}\,x_i(1)=x_i(t)x_i(1), \end{aligned}$$

and so we can write \(x_i(t)=z^t\), where \(z:=x_i(1)\). Substituting this into (6) and dividing both sides by \(z^{t-1}\) we see that z is a root of the equation (4). Let \(f(X):=p_i+q_iX^r-X\). Then we have \(f(0)=p_i>0\), \(f(1)=0\), \(f'(1)=q_ir-1>0\), and \(f''(X)=q_ir(r-1)X^{r-2}>0\). Thus the equation \(f(X)=0\), or equivalently, (4) has precisely two roots in [0, 1], that is, \(\alpha (p_i)\) and 1. We claim that \(z\ne 1\). Indeed we have \(\lim _{t\rightarrow \infty }x_i(t)=\lim _{t\rightarrow \infty }z^t=0\) because a step in the type \(A_i\) walk reduces, on average, \(y-(r-1)x\) by \((r-1)-rp_i>0\). Consequently we have \(z=\alpha (p_i)\), and so \(x_i(t)=\alpha (p_i)^t\).

Next let y(t) denote the probability that the walk B hits the line \(L_{rt}\). After the first r steps, it is at \((x,r-x)\) for some \(0\le x\le r\) with probability

$$\begin{aligned} \sum _{J\in \left( {\begin{array}{c}[r]\\ x\end{array}}\right) }\prod _{i\in [r]{\setminus } J}p_i\prod _{j\in J}q_j. \end{aligned}$$

From \((x,r-x)\) the probability for the walk hitting \(L_{rt}\) is \(y(x+t-1)\). This yields

$$\begin{aligned} y(t)=\sum _{x=0}^ry(x+t-1) \sum _{J\in \left( {\begin{array}{c}[r]\\ x\end{array}}\right) }\prod _{i\in [r]{\setminus } J}p_i\prod _{j\in J}q_j. \end{aligned}$$
(7)

Let \(b_s\) be the number of walks from (0, 0) to \(Q_s:=(s,(r-1)s+rt)\) which touch \(L_{rt}\) only at \(Q_s\). Then we have

$$\begin{aligned} y(t)=\sum _{s\ge 0}b_s \sum _{J\in \left( {\begin{array}{c}[r(s+t)]\\ s\end{array}}\right) }\prod _{i\in [r(s+t)]{\setminus } J} p_i\prod _{j\in J}q_j. \end{aligned}$$

If a walk touches the line \(L_{r(t+1)}\), then the walk needs to hit \(L_{rt}\) somewhere, say, at \(Q_s\) for the first time. Then the probability that the walk hits \(L_{r(t+1)}\) starting from \(Q_s\) is equal to y(1). Thus we have

$$\begin{aligned} y(t+1)= \sum _{s\ge 0}b_s \sum _{J\in \left( {\begin{array}{c}[r(s+t)]\\ s\end{array}}\right) }\prod _{i\in [r(s+t)]{\setminus } J} p_i\prod _{j\in J}q_j\,y(1) =y(t)y(1), \end{aligned}$$

and so \(y(t)=w^t\), where \(w:=y(1)\). Substituting this into (7) and dividing both sides by \(w^{t-1}\) we have

$$\begin{aligned} w=\sum _{x=0}^r w^x \sum _{J\in \left( {\begin{array}{c}[r]\\ x\end{array}}\right) }\prod _{i\in [r]{\setminus } J}p_i \prod _{j\in J}q_j=\prod _{i=1}^r(p_i+q_iw). \end{aligned}$$

Thus w is a root of the equation (5). Let \(g(X):=\prod _{i=1}^r(p_i+q_iX)-X\). Then we have \(g(0)=\prod _i p_i>0\), \(g(1)=0\), \(g'(1)=\sum _i q_i>0\), and \(g''(X)>0\). Thus the equation \(g(X)=0\), or equivalently, (5) has precisely two roots in [0, 1], that is, \(\beta \) and 1. But we can exclude the possibility \(w=1\) in the same way as in the previous case. Thus we have \(w=\beta \) and so \(y(t)=\beta ^t\). \(\square \)

Claim 2

Let \(\mathcal F_1,\ldots ,\mathcal F_r\subset 2^{[n]}\) be shifted r-cross t-intersecting families. Then, for all \((F_1,\ldots ,F_r)\in \mathcal F_1\times \cdots \times \mathcal F_r\), there exists \(j=j(F_1,\ldots ,F_r)\in [n]\) such that \(\sum _{i=1}^r|F_i\cap [j]|\ge t+(r-1)j\).

This is Proposition 8.1 in [9]. We include a simple proof for convenience.

Proof

Suppose the contrary. Then there exists a counterexample \((F_1,\ldots ,F_r)\in \mathcal F_1\times \cdots \times \mathcal F_r\), which we choose so that \(|F_1\cap \cdots \cap F_r|\) is minimal. Let j be the t-th element of \(F_1\cap \cdots \cap F_r\). Then we have

$$\begin{aligned} \sum _{i=1}^r|F_i\cap [j]|<t+(r-1)j=|F_1\cap \cdots \cap F_r\cap [j]|+(r-1)|[j]|. \end{aligned}$$

Thus there exists some \(i\in [j-1]\) such that i is not contained in (at least) two of the r subsets, say, \(i\not \in F_1\cup F_2\). By the shiftedness we have \(F_1':=(F_1{\setminus }\{j\})\cup \{i\}\in \mathcal F_1\). Then \(|F_1\cap [j]|=|F_1'\cap [j]|\) and so \((F_1',F_2,\ldots ,F_r)\) is also a counterexample. But this contradicts the minimality because \(|F_1'\cap F_2\cap \cdots \cap F_r|<|F_1\cap F_2\cap \cdots \cap F_r|\). \(\square \)

Let \(\mathcal F_1,\ldots ,\mathcal F_r\subset 2^{[n]}\) be families of subsets. For each \((F_1,\ldots ,F_r)\in \mathcal F_1\times \cdots \times \mathcal F_r\) we define a vector w by

$$\begin{aligned} w=w(F_1,\ldots ,F_r):=(w_1^{(1)},w_2^{(1)},\ldots ,w_r^{(1)},\ldots , w_1^{(n)},w_2^{(n)},\ldots ,w_r^{(n)})\in \{0,1\}^{rn}, \end{aligned}$$

where

$$\begin{aligned} w_i^{(j)}={\left\{ \begin{array}{ll} 1&{}\text { if }j\in F_i, \\ 0&{}\text { if }j\not \in F_i. \end{array}\right. } \end{aligned}$$

We can view w as an rn-step walk whose k-th step is up (resp. right) if the k-th entry of w is 1 (resp. 0) for \(1\le k\le rn\).

Claim 3

Let \(\mathcal F_1,\ldots ,\mathcal F_r\subset 2^{[n]}\) be shifted r-cross t-intersecting families. Then, for all \((F_1,\ldots ,F_r)\in \mathcal F_1\times \cdots \times \mathcal F_r\), the walk \(w(F_1,\ldots ,F_r)\) hits the line \(L_{rt}\).

Proof

Let \(j=j(F_1,\ldots ,F_r)\) be from Claim 2, and let \(w=w(F_1,\ldots ,F_r)\) be the corresponding walk. In the first rj steps of w there are at least \(t+(r-1)j\) up steps, and so at most \(rj-(t+(r-1)j)=j-t\) right steps. This means that the walk w hits the line \(L_{rt}\) within the first rj steps. \(\square \)

Theorem 10

Let \(p_1,\ldots ,p_r\) be positive real numbers less than \(1-\frac{1}{r}\), and let \(\mathcal F_1,\ldots ,\mathcal F_r\subset 2^{[n]}\) be r-cross t-intersecting families. Then we have \(\prod _{i=1}^r\mu _{p_i}(\mathcal F_i)\le \beta ^t\), where \(\beta \) is the root of the equation (5).

Proof

Since the shifting operation preserves r-cross t-intersecting property and p-measures, we may assume that all \(\mathcal F_i\) are shifted. We have

$$\begin{aligned} \prod _{i=1}^r\mu _{p_i}(\mathcal F_i) = \prod _{i=1}^r\sum _{F_i\in \mathcal F_i}p_i^{|F_i|}q_i^{n-|F_i|} = \sum _{(F_1,\ldots ,F_r)\in \mathcal F_1\times \cdots \times \mathcal F_r} \prod _{i=1}^rp_i^{|F_i|}q_i^{n-|F_i|}. \end{aligned}$$

Using Claim 3 the RHS is

$$\begin{aligned} \le \mathbb P(\text {type }B\hbox { walk hits }L_{rt}\hbox { in the first }rn \hbox { steps}) \le \mathbb P(\text {type }B\hbox { walk hits }L_{rt}) = \beta ^t, \end{aligned}$$

where the last equality follows from Claim 1. \(\square \)

By comparing (4) and (5) it follows that if \(p_1=\cdots =p_r=:p\) then \(\beta (p,\ldots ,p)=\alpha (p)^r\). If \(r=3\) then it is not so difficult to verify that \(\beta (p_1,p_2,p_3)\le \alpha (p_1)\alpha (p_2)\alpha (p_3)\), see [15] for more details, and we have the following.

Lemma 4

Let \(0<p_1,p_2,p_3<\frac{2}{3}\) and t be a positive integer. If \(\mathcal F_1,\mathcal F_2,\mathcal F_3\subset 2^{[n]}\) are 3-cross t-intersecting, then

$$\begin{aligned} \mu _{p_1}(\mathcal F_1) \mu _{p_2}(\mathcal F_2) \mu _{p_3}(\mathcal F_3) \le (\alpha (p_1)\alpha (p_2)\alpha (p_3))^t, \end{aligned}$$

where

$$\begin{aligned} \alpha (p):=\frac{1}{2}\left( \sqrt{\frac{1+3p}{1-p}}-1\right) . \end{aligned}$$
(8)
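Numerically, \(\alpha (p)\) and \(\beta (p_1,\ldots ,p_r)\) can be obtained by bisection from (4) and (5). The following sketch (ours) checks formula (8), the identity \(\beta (p,p,p)=\alpha (p)^3\), and the inequality of Lemma 4 for a few sample values of \(p_i\).

```python
def bisect(f, lo, hi, iters=200):
    """Bisection for a root of f in (lo, hi), assuming f(lo) and f(hi) have opposite signs."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def alpha(p, r=3):
    """Root of X = p + (1-p) X^r in (0, 1), i.e. equation (4)."""
    return bisect(lambda x: p + (1 - p) * x ** r - x, 0.0, 1.0 - 1e-9)

def beta(ps):
    """Root of X = prod_i (p_i + (1-p_i) X) in (0, 1), i.e. equation (5)."""
    def f(x):
        prod = 1.0
        for p in ps:
            prod *= p + (1 - p) * x
        return prod - x
    return bisect(f, 0.0, 1.0 - 1e-9)

closed = lambda p: (((1 + 3 * p) / (1 - p)) ** 0.5 - 1) / 2   # formula (8)
p1, p2, p3 = 0.40, 0.45, 0.50
assert all(abs(alpha(p) - closed(p)) < 1e-9 for p in (p1, p2, p3))
assert abs(beta([p1] * 3) - alpha(p1) ** 3) < 1e-9            # beta(p,p,p) = alpha(p)^3
assert beta([p1, p2, p3]) <= alpha(p1) * alpha(p2) * alpha(p3) + 1e-9
```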

2.5 Tools for the proof of Theorem 6

Let \(0<p_1<p_2<1\) be fixed. Let \(\mathbb R^{[p_1,p_2]}\) denote the set of real-valued functions defined on the interval \([p_1,p_2]:=\{x\in \mathbb R:p_1\le x\le p_2\}\). We will bound a convex function \(g\in \mathbb R^{[p_1,p_2]}\) by a linear function connecting \((p_1,g(p_1))\) and \((p_2,g(p_2))\). To this end, define an operator \(L_{p_1,p_2}:\mathbb R^{[p_1,p_2]}\rightarrow \mathbb R^{[p_1,p_2]}\) by

$$\begin{aligned} (L_{p_1,p_2}(g))(p):=\frac{g(p_2)-g(p_1)}{p_2-p_1}(p-p_1)+g(p_1). \end{aligned}$$

By definition we have the following.

Claim 4

Let \(g\in \mathbb R^{[p_1,p_2]}\) be a convex function. Then \(g(p)\le (L_{p_1,p_2}(g))(p)\) for \(p\in [p_1,p_2]\).

The function \(\alpha =\alpha (p)\) defined by (8) is convex because \(\frac{\partial ^2\alpha (p)}{\partial p^2} =\frac{6p}{(1+3p)^2q^2}\big (\frac{1+3p}{q} \big )^{1/2}>0\). Thus by Claim 4 we have the following.

Claim 5

For \(\frac{2}{5}\le p\le \frac{1}{2}\) it follows that \(\alpha (p)\le {\tilde{\alpha }}(p)\), where

$$\begin{aligned} {\tilde{\alpha }}(p)&:=(L_{\frac{2}{5},\frac{1}{2}}(\alpha ))(p)= (-3 - 12\sqrt{5} + 5\sqrt{33})/6 + (30\sqrt{5} - 10\sqrt{33})p/6\\&\approx -0.185 + 1.60607 p. \end{aligned}$$

Let \(\textrm{AK}(n,t,p)\) denote the maximum p-measure \(\mu _p(\mathcal G)\) of 2-wise t-intersecting families \(\mathcal G\subset 2^{[n]}\).

Theorem 11

(Ahlswede and Khachatrian [2]) Let

$$\begin{aligned} \frac{i}{t+2i-1}\le p\le \frac{i+1}{t+2i+1}. \end{aligned}$$

Then \(\textrm{AK}(n,t,p)=\mu _p(\mathcal A(n,t,i))\), where

$$\begin{aligned} \mathcal A(n,t,i)=\{A\subset [n]:|A\cap [t+2i]|\ge t+i\}. \end{aligned}$$

Moreover, if \(\frac{i}{t+2i-1}< p< \frac{i+1}{t+2i+1}\) (resp. \(p=\frac{i}{t+2i-1}\)) then \(\mu _p(\mathcal G)=\textrm{AK}(n,t,p)\) if and only if \(\mathcal G\cong \mathcal A(n,t,i)\) (resp. \(\mathcal G\cong \mathcal A(n,t,i-1)\) or \(\mathcal G\cong \mathcal A(n,t,i)\)).

Let

$$\begin{aligned} f_t(p):=\limsup _{n\rightarrow \infty } \textrm{AK}(n,t,p). \end{aligned}$$

By Katona’s t-intersection theorem we have \(f_t(\frac{1}{2})=\frac{1}{2}\). For \(p<\frac{1}{2}\), by Theorem 11, we have \(\textrm{AK}(n,t,p)=\max \{\mu _p(\mathcal A(n,t,i)):i\le \frac{n-t}{2}\}\), and \(\textrm{AK}(n,t,p)\) is non-decreasing in n. In this case we have

$$\begin{aligned} f_t(p)=\lim _{n\rightarrow \infty }\textrm{AK}(n,t,p)=\sum _{j=t+i}^{t+2i} \left( {\begin{array}{c}t+2i\\ j\end{array}}\right) p^jq^{t+2i-j}, \end{aligned}$$

where \(i=\left\lfloor \frac{(t-1)p}{1-2p}\right\rfloor \). The function \(f_t(p)\) is left-continuous at \(p=\frac{1}{2}\).

Claim 6

Let \(t\ge 2\) be fixed. Then \(f_t(p)\) is a convex function in p.

Proof

First suppose that \(\frac{i}{t+2i-1}<p<\frac{i+1}{t+2i+1}\). Then \(f_t(p)=\sum _{j=t+i}^{t+2i}\left( {\begin{array}{c}t+2i\\ j\end{array}}\right) p^jq^{t+2i-j}=:g(p)\), and we have

$$\begin{aligned} \frac{\partial ^2}{\partial p^2}\,f_t(p)= \frac{(2i+t)!}{(i+t-1)!\,i!}\,p^{t-2+i}q^{i-1} (i+t-1-(2i+t-1)p) > 0. \end{aligned}$$

Next let \(p_0=\frac{i}{t+2i-1}\). If p is slightly larger than \(p_0\) then we have the same \(f_t(p)=g(p)\) as above, and if p is slightly smaller than \(p_0\) then \(f_t(p)=\sum _{j=t+i-1}^{t+2i-2}\left( {\begin{array}{c}t+2i-2\\ j\end{array}}\right) p^jq^{t+2i-2-j}=:h(p)\). Since \(h(p)<g(p)\) for \(p>p_0\) and \(h(p)>g(p)\) for \(p<p_0\), we see that the left derivative of \(f_t(p)\) at \(p=p_0\) does not exceed the right derivative. \(\square \)

By Claim 4 and Claim 6 we have the following.

Claim 7

For \(\frac{2}{5}\le p\le \frac{1}{2}\) it follows that \(\textrm{AK}(n,t,p)\le \tilde{a}_t(p)\), where \({\tilde{a}}_t=L_{\frac{2}{5},\frac{1}{2}}(f_t)\).

For convenience we record the \({\tilde{a}}_t\) which will be used to prove Theorem 6.

$$\begin{aligned} {\tilde{a}}_2(p)&= \tfrac{1}{2} + (401(p-\tfrac{1}{2}))/125\approx -1.104 + 3.208 p,\\ {\tilde{a}}_3(p)&= \tfrac{1}{2} + (1565029(p-\tfrac{1}{2}))/390625\approx -1.50324 + 4.00647 p,\\ {\tilde{a}}_4(p)&= \tfrac{1}{2} + (5391614441(p-\tfrac{1}{2}))/1220703125 \approx -1.70841 + 4.41681 p ,\\ {\tilde{a}}_5(p)&= \tfrac{1}{2} + (17729648464189(p-\tfrac{1}{2}))/3814697265625 \approx -1.82386 + 4.64772 p. \end{aligned}$$
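These coefficients can be reproduced exactly: \({\tilde{a}}_t\) is the chord of \(f_t\) between \(p=\frac{2}{5}\) and \(p=\frac{1}{2}\), and \(f_t(\frac{1}{2})=\frac{1}{2}\). A short exact computation (ours, in rational arithmetic) recovers the slopes of \({\tilde{a}}_2\) and \({\tilde{a}}_3\).

```python
from fractions import Fraction
from math import comb, floor

def f_t(t, p):
    """The limit measure f_t(p) for p < 1/2, computed exactly."""
    i = floor((t - 1) * p / (1 - 2 * p))
    m = t + 2 * i
    q = 1 - p
    return sum(comb(m, j) * p ** j * q ** (m - j) for j in range(t + i, m + 1))

p0 = Fraction(2, 5)
for t, printed_slope in [(2, Fraction(401, 125)), (3, Fraction(1565029, 390625))]:
    slope = (Fraction(1, 2) - f_t(t, p0)) / (Fraction(1, 2) - p0)
    assert slope == printed_slope   # matches the slope of a~_t displayed above
```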

3 Proof of Theorem 3 and Theorem 4

In this section we deduce Theorem 3 from Theorem 4, and then deduce Theorem 4 from Theorem 6 whose proof is given in the next section.

Proof of Theorem 3

Let \(\mathcal F\subset 2^{[n]}\) be a non-trivial 3-wise intersecting family with \(\mu _p(\mathcal F)=M_3(n,p)\). We may assume that \(\mathcal F\) is shifted and inclusion maximal. Since \(\mathcal F\) is a non-trivial 3-wise intersecting family, it is also 2-wise 2-intersecting, and so \(M_3(n,p)\le \textrm{AK}(n,2,p)\).

Claim 8

If \(p<\frac{1}{3}\) then \(M_3(p)=p^2\).

Proof

Let \(p<\frac{1}{3}\) be fixed. Then we have \(\mu _p(\mathcal F)=M_3(n,p)\le \textrm{AK}(n,2,p)=p^2\). Moreover \(\mathcal G\cong \mathcal A(n,2,0)\) is the only 2-wise 2-intersecting family with \(\mu _p(\mathcal G)=p^2\). Since \(\mathcal A(n,2,0)\) is not non-trivial 3-wise intersecting, we get \(M_3(n,p)<p^2\).

On the other hand we can construct a non-trivial 3-wise intersecting family \(\mathcal F_1\) by

$$\begin{aligned} \mathcal F_1=\{F\subset [n]:[2]\subset F,\,F\cap [3,n]\ne \emptyset \}\sqcup \{[n]{\setminus }\{1\}\}\sqcup \{[n]{\setminus }\{2\}\}. \end{aligned}$$

Then it follows that

$$\begin{aligned} \mu _p(\mathcal F_1)=p^2(1-q^{n-2})+2p^{n-1}q \rightarrow p^2 \text { as } n\rightarrow \infty . \end{aligned}$$

Thus we have \(M_3(p)=p^2\) for \(p<\frac{1}{3}\). \(\square \)

Claim 9

If \(\frac{1}{3}\le p\le \frac{1}{2}\) then \(M_3(p)=4p^3q+p^4\).

Proof

This is an immediate consequence of Theorem 4. \(\square \)

Claim 10

If \(\frac{1}{2}<p\le \frac{2}{3}\) then \(M_3(p)=p\).

Proof

Let \(\frac{1}{2}<p\le \frac{2}{3}\) be fixed. It is known from [8, 10, 22] that 3-wise intersecting families \(\mathcal G\) have p-measure at most p for \(p\le \frac{2}{3}\), and moreover if \(\mu _p(\mathcal G)=p\) then \(|\bigcap \mathcal G|=1\) for \(p<\frac{2}{3}\). Thus we have \(M_3(n,p)<p\) for \(\frac{1}{2}<p<\frac{2}{3}\) and \(M_3(n,\frac{2}{3})\le \frac{2}{3}\).

On the other hand, let us define a non-trivial 3-wise intersecting family \(\mathcal F_2\) by

$$\begin{aligned} \mathcal F_2=\{F\subset [n]:1\in F,\,|F\cap [2,n]|\ge n/2\}\sqcup \{[2,n]\}. \end{aligned}$$

Then it follows that, for fixed p,

$$\begin{aligned} \mu _p(\mathcal F_2)=p\sum _{k\ge n/2}\left( {\begin{array}{c}n-1\\ k\end{array}}\right) p^kq^{n-1-k}+qp^{n-1}\rightarrow p \text { as } n\rightarrow \infty . \end{aligned}$$

Thus we have \(M_3(p)=p\) for \(\frac{1}{2} <p\le \frac{2}{3}\). \(\square \)

Claim 11

If \(\frac{2}{3}<p<1\) then \(M_3(p)=1\).

Proof

Let \(\frac{2}{3}<p<1\) be fixed. Clearly we have \(M_3(n,p)\le 1\) and \(M_3(p)\le 1\). Let us define a non-trivial 3-wise intersecting family \(\mathcal F_3\) by

$$\begin{aligned} \mathcal F_3=\{F\subset [n]:|F|>\tfrac{2}{3}n\}. \end{aligned}$$

Then \(\mu _p(\mathcal F_3)=\sum _{i>\frac{2}{3}n}\left( {\begin{array}{c}n\\ i\end{array}}\right) p^iq^{n-i}\rightarrow 1\) as \(n\rightarrow \infty \). Thus we have \(M_3(p)=1\) for \(p>\frac{2}{3}\). \(\square \)

This completes the proof of Theorem 3 assuming Theorem 4. \(\square \)

Proof of Theorem 4

Let \(\mathcal F\subset 2^{[n]}\) be a non-trivial 3-wise intersecting family.

First suppose that \(\frac{1}{3}\le p<\frac{2}{5}\). Note that \(\mathcal F\) is 2-wise 2-intersecting, and \(\mathcal A(n,2,1)=\textrm{BD}_3(n)\). Thus it follows from Theorem 11 that \(\mu _p(\mathcal F)\le \mu _p(\textrm{BD}_3(n))\). Moreover equality holds if and only if \(\mathcal F\cong \textrm{BD}_3(n)\) for \(p>\frac{1}{3}\). If \(p=\frac{1}{3}\) then \(\mu _p(\mathcal F)=\mu _p(\textrm{BD}_3(n))\) if and only if \(\mathcal F\cong \textrm{BD}_3(n)\) or \(\mathcal A(n,2,0)\), but the latter is not non-trivial 3-wise intersecting, and so \(\mathcal F\cong \textrm{BD}_3(n)\) must hold.

Next suppose that \(\frac{2}{5}\le p\le \frac{1}{2}\). If \(\mathcal F\) is shifted then by Theorem 6 we have \(\mu _p(\mathcal F)\le \mu _p(\textrm{BD}_3(n))\) with equality holding if and only if \(\mathcal F=\textrm{BD}_3(n)\). The same inequality holds without assuming that \(\mathcal F\) is shifted (see the first paragraph in Sect. 2). In this case, by Lemma 1, we have \(\mu _p(\mathcal F)=\mu _p(\textrm{BD}_3(n))\) if and only if \(\mathcal F\cong \textrm{BD}_3(n)\). \(\square \)

4 Proof of Theorem 7

4.1 Proof of (2) of Theorem 7

Let \(\frac{1}{3}\le p\le \frac{1}{2}\), and let \(\mathcal F_1,\mathcal F_2,\mathcal F_3\) be non-empty 3-cross intersecting families. By Theorem 9 with \(r=3\) we have \(\sum _{i=1}^3\mu _p(\mathcal F_i)\le \max \{(1-q^a)+2p^a:a\in [n]\}\). So we need to show that \(f(p,a)\ge 0\), where

$$\begin{aligned} f(p,a):=3p-(1-q^a)-2p^a, \end{aligned}$$

for all \(\frac{1}{3}\le p\le \frac{1}{2}\) and all \(1\le a\le n\).

If \(a=1\) then \(f(p,1)=3p-(1-q)-2p=0\), and we are done. So we may assume that \(2\le a\le n\), and we show that \(f(p,a)>0\).

If \(p=\frac{1}{3}\) then \(f(\frac{1}{3},a)=1-1+(\frac{2}{3})^a-2(\frac{1}{3})^a= (\frac{1}{3})^a(2^a-2)>0\). We claim that f(pa) is increasing in p, which yields \(f(p,a)\ge f(\frac{1}{3},a)>0\) (for \(a\ge 2\)). We have

$$\begin{aligned} \frac{\partial f}{\partial p}(p,a)=3-aq^{a-1}-2ap^{a-1}. \end{aligned}$$
(9)

Fix p and let g(a) denote the RHS of (9). We have \(g(2)=1-2p>0\) for \(\frac{1}{3}\le p<\frac{1}{2}\). Next we show that g(a) is increasing in a. For this we have

$$\begin{aligned} g(a+1)-g(a)=(ap-q)q^{a-1}+2(aq-p)p^{a-1}, \end{aligned}$$

and we need to show that the RHS is positive. Since \(aq-p\ge ap-q\) it suffices to show that \(ap-q\ge 0\), or equivalently, \(a\ge \frac{1-p}{p}\). Indeed \(a\ge 2\ge \frac{1-p}{p}\) because \(p\ge \frac{1}{3}\). Thus g(a) is increasing in a, and \(g(a)\ge g(2)>0\) as needed. \(\square \)

4.2 Proof of (3) of Theorem 7

Recall that, for \(i<j\), we write \([i,j]:=\{i,i+1,\ldots ,j\}=[j]{\setminus }[i-1]\).

We divide \(\mathcal F_i=\{\{1\}\sqcup A:A\in \mathcal A_i\}\cup \mathcal B_i\), where

$$\begin{aligned} \mathcal A_i&:=\{F{\setminus }\{1\}:1\in F\in \mathcal F_i\}\subset 2^{[2,n]},\\ \mathcal B_i&:=\{F:1\not \in F\in \mathcal F_i\}\subset 2^{[2,n]}. \end{aligned}$$

Since \(\mathcal F_i\) is non-empty and shifted, we have \(\mathcal A_i\ne \emptyset \). Let \(a_i=\mu _p(\mathcal A_i:[2,n])>0\) and \(b_i=\mu _p(\mathcal B_i:[2,n])\ge 0\). Then \(\sum _{i=1}^3\mu _p(\mathcal F_i)=\sum _{i=1}^3(pa_i+qb_i)\). Without loss of generality we may assume that \(b_1\ge b_2\ge b_3\). If \(b_1=0\) then \(\mathcal B_i=\emptyset \) for all i. In this case \(1\in \bigcap F\), where the intersection is taken over all \(F\in \mathcal F_1\cup \mathcal F_2\cup \mathcal F_3\), a contradiction. So we may assume that \(b_1\ne 0\), that is, \(\mathcal B_1\ne \emptyset \).

Claim 12

Let \(\{i,j,k\}=[3]\).

  1. (1)

    If \(\{\mathcal A_i,\mathcal A_j,\mathcal B_k\}\) are all non-empty, then they are 3-cross intersecting, and \(a_i+a_j+b_k\le 3p\).

  2. (2)

    If \(\{\mathcal A_i,\mathcal B_j,\mathcal B_k\}\) are all non-empty, then they are 3-cross 2-intersecting, and \(a_i+b_j+b_k\le 1\).

Proof

Item (1) follows from the assumption that \(\mathcal F_i,\mathcal F_j,\mathcal F_k\) are 3-cross intersecting, and (2) of Theorem 7.

To show (2), suppose, to the contrary, that there exist three subsets \(A_i\in \mathcal A_i\), \(B_j\in \mathcal B_j\), \(B_k\in \mathcal B_k\), and \(x\in [2,n]\) such that \(\{x\}\supset A_i\cap B_j\cap B_k\). By definition we have \(F_i:=\{1\}\cup A_i\in \mathcal F_i\) and \(F_k:=B_k\in \mathcal F_k\). By the shiftedness we have \(F_j:=(B_j{\setminus }\{x\})\cup \{1\}\in \mathcal F_j\). Then \(F_i\cap F_j\cap F_k=\emptyset \), a contradiction. Thus \(\{\mathcal A_i,\mathcal B_j,\mathcal B_k\}\) are 3-cross 2-intersecting, and the inequality follows from Lemma 3. \(\square \)

Now we will show that \(\sum _{i=1}^3\mu _p(\mathcal F_i)\le 3p-\epsilon _p\), where \(\epsilon _p=(2-3p)(3p-1)\).

4.2.1 Case \(\mathcal B_2=\mathcal B_3=\emptyset \)

Since \(\{\mathcal B_1,\mathcal A_2,\mathcal A_3\}\) are 3-cross intersecting, any two of them are 2-cross intersecting. Then, by (2) of Theorem 7 and Lemma 2, we have the following.

Claim 13

  1. (1)

    \(b_1+a_2+a_3\le 3p\),

  2. (2)

    \(b_1+a_2\le 1\),

  3. (3)

    \(b_1+a_3\le 1\).

We solve the following linear programming problem:

maximize::

\(\sum _{i=1}^3\mu _p(\mathcal F_i)=p(a_1+a_2+a_3)+qb_1\),

subject to::

(1)–(3) in Claim 13, and \(0\le a_i\le 1\) for all i, \(0\le b_1\le 1\).

The corresponding dual problem is (Table 1)

minimize::

\(3py_1+\sum _{i=2}^7 y_i\),

subject to::

\(y_4\ge p\), \(y_1+y_2+y_5\ge p\), \(y_1+y_3+y_6\ge p\), \(y_1+y_2+y_3+y_7\ge q\), and \(y_i\ge 0\) for all i.

Table 1 Case \(\mathcal B_2=\mathcal B_3=\emptyset \)

A feasible solution is given by \(y_1=3p-1\), \(y_2=y_3=1-2p\), \(y_4=p\), \(y_5=y_6=y_7=0\), and the corresponding value of the objective function is

$$\begin{aligned} 3p(3p-1)+2(1-2p)+p=2-6p+9p^2=3p-\epsilon _p. \end{aligned}$$

Then it follows from Theorem 8 (weak duality) that the same bound applies to the primal problem, and so \(\sum _{i=1}^3\mu _p(\mathcal F_i)\le 3p-\epsilon _p\).
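The dual feasibility of this solution and the value \(3p-\epsilon _p\) can be verified by hand; the following small sketch (ours) repeats the verification numerically over a grid of \(p\in [\frac{1}{3},\frac{1}{2}]\), using exactly the dual constraints listed above.

```python
def check_dual(p, tol=1e-12):
    q = 1 - p
    # dual variables from the text: y1,...,y7
    y = {1: 3 * p - 1, 2: 1 - 2 * p, 3: 1 - 2 * p, 4: p, 5: 0.0, 6: 0.0, 7: 0.0}
    feasible = (
        all(v >= -tol for v in y.values())
        and y[4] >= p - tol                          # constraint from a_1
        and y[1] + y[2] + y[5] >= p - tol            # constraint from a_2
        and y[1] + y[3] + y[6] >= p - tol            # constraint from a_3
        and y[1] + y[2] + y[3] + y[7] >= q - tol     # constraint from b_1
    )
    value = 3 * p * y[1] + sum(y[i] for i in range(2, 8))
    eps = (2 - 3 * p) * (3 * p - 1)
    return feasible and abs(value - (3 * p - eps)) < 1e-9

assert all(check_dual(1 / 3 + k * (1 / 6) / 200) for k in range(201))
```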

4.2.2 Case \(\mathcal B_2\ne \emptyset \), \(\mathcal B_3=\emptyset \)

In this case \(\{\mathcal A_1,\mathcal B_2,\mathcal A_3\}\) and \(\{\mathcal B_1,\mathcal A_2,\mathcal A_3\}\) are both 3-cross intersecting, and \(\{\mathcal B_1,\mathcal B_2,\mathcal A_3\}\) are 3-cross 2-intersecting. Thus we have the following.

Claim 14

  1. (1)

    \(b_1+a_2+a_3\le 3p\),

  2. (2)

    \(a_1+b_2+a_3\le 3p\),

  3. (3)

    \(b_1+b_2+a_3\le 1\),

  4. (4)

    \(b_1+a_2\le 1\),

  5. (5)

    \(a_1+b_2\le 1\).

We solve the following linear programming problem:

maximize::

\(\sum _{i=1}^3\mu _p(\mathcal F_i)=p(a_1+a_2+a_3)+q(b_1+b_2)\),

subject to::

(1)–(5) in Claim 14, and \(0\le a_i\le 1\) for all i, \(0\le b_j\le 1\) for all j.

The corresponding dual problem is (Table 2)

minimize::

\(3p(y_1+y_2)+\sum _{i=3}^{10} y_i\),

subject to::

\(y_2+y_5+y_6\ge p\), \(y_1+y_4+y_7\ge p\), \(y_1+y_2+y_3+y_8\ge p\), \(y_1+y_3+y_4+y_9\ge q\), \(y_2+y_3+y_5+y_{10}\ge q\), and \(y_i\ge 0\) for all i.

Table 2 Case \(\mathcal B_2\ne \emptyset , \mathcal B_3=\emptyset \)

A feasible solution is given by \(y_1=y_6=y_7=y_8=y_9=y_{10}=0\), \(y_2=3p-1\), \(y_3=y_5=1-2p\), \(y_4=p\), and the corresponding value of the objective function is

$$\begin{aligned} 3p(3p-1)+2(1-2p)+p=3p-\epsilon _p. \end{aligned}$$

Thus, by the weak duality, we have \(\sum _{i=1}^3\mu _p(\mathcal F_i)\le 3p-\epsilon _p\).

4.2.3 Case \(\mathcal B_2\ne \emptyset \), \(\mathcal B_3\ne \emptyset \)

Let \(\{i,j,k\}=[3]\). Then families \(\{\mathcal A_i,\mathcal A_j,\mathcal B_k\}\) are 3-cross intersecting, and families \(\{\mathcal A_i,\mathcal B_j,\mathcal B_k\}\) are 3-cross 2-intersecting. Thus we have the following.

Claim 15

  1. (1)

    \(b_1+a_2+a_3\le 3p\),

  2. (2)

    \(a_1+b_2+a_3\le 3p\),

  3. (3)

    \(a_1+a_2+b_3\le 3p\),

  4. (4)

    \(a_1+b_2+b_3\le 1\),

  5. (5)

    \(b_1+a_2+b_3\le 1\),

  6. (6)

    \(b_1+b_2+a_3\le 1\).

We solve the following linear programming problem:

maximize::

\(\sum _{i=1}^3\mu _p(\mathcal F_i)=p(a_1+a_2+a_3)+q(b_1+b_2+b_3)\),

subject to::

(1)–(6) in Claim 15, and \(0\le a_i\le 1\) for all i, \(0\le b_j\le 1\) for all j.

The corresponding dual problem is (Table 3)

minimize::

\(3p(y_1+y_2+y_3)+\sum _{i=4}^{12} y_i\),

subject to::

\(y_2+y_3+y_4+y_7\ge p\), \(y_1+y_3+y_5+y_8\ge p\), \(y_1+y_2+y_6+y_9\ge p\), \(y_1+y_5+y_6+y_{10}\ge q\), \(y_2+y_4+y_6+y_{11}\ge q\), \(y_3+y_4+y_5+y_{12}\ge q\), and \(y_i\ge 0\) for all i.

Table 3 Case \(\mathcal B_2\ne \emptyset ,\mathcal B_3\ne \emptyset \)

A feasible solution is given by \(y_1=3p-1\), \(y_2=y_3=y_7=y_8=y_9=y_{10}=y_{11}=y_{12}=0\), \(y_4=p\), \(y_5=y_6=1-2p\), and the corresponding value of the objective function is

$$\begin{aligned} 3p(3p-1)+p+2(1-2p)=3p-\epsilon _p. \end{aligned}$$

Thus we have \(\sum _{i=1}^3\mu _p(\mathcal F_i)\le 3p-\epsilon _p\).

This completes the proof of (3) of Theorem 7. \(\square \)

5 Proof of Theorem 6

Let \(\frac{2}{5}\le p\le \frac{1}{2}\), and let \(\mathcal F\subset 2^{[n]}\) be a non-trivial 3-wise intersecting family. Suppose that \(\mathcal F\) is shifted, inclusion maximal, and \(\mathcal F\not \subset \textrm{BD}_3(n)\). We may also assume that \(\mathcal F\) is size maximal (with respect to 3-wise intersection condition), that is, for every \(G\not \in \mathcal F\), the larger family \(\mathcal F\cup \{G\}\) is no longer 3-wise intersecting. Our goal is to show that

$$\begin{aligned} \mu _p(\mathcal F)<\mu _p(\textrm{BD}_3(n))-0.0018. \end{aligned}$$

For \(I\subset [3]\) define \(\mathcal F_I\subset 2^{[n]}\) and \(\mathcal G_I\subset 2^{[4,n]}\) by

$$\begin{aligned} \mathcal F_I&=\{F\in \mathcal F:F\cap [3]=I\},\\ \mathcal G_I&=\{F{\setminus }[3]:F\in \mathcal F_I\}. \end{aligned}$$

Let \(x_I=\mu _p(\mathcal G_I:[4,n])\). Then we have

$$\begin{aligned} \mu _p(\mathcal F_I)=p^{|I|}q^{3-|I|}x_I, \end{aligned}$$

and

$$\begin{aligned} \mu _p(\mathcal F) =\sum _{I\subset [3]} p^{|I|}q^{3-|I|}x_I. \end{aligned}$$
(10)

For simplicity we often write \(\mathcal G_I\) or \(x_I\) without braces and commas, e.g., we write \(\mathcal G_{12}\) to mean \(\mathcal G_{\{1,2\}}\). Let \(\bar{I}\) denote \([3]{\setminus } I\).

Claim 16

If \(I,J\subset [3]\) satisfy \(I\leadsto J\) then \(\mathcal G_I\subset \mathcal G_J\) and \(x_I\le x_J\).

Proof

Suppose that \(G\in \mathcal G_I\). Then \(I\cup G\in \mathcal F_I\). Since \(\mathcal F\) is shifted, inclusion maximal, and \(I\leadsto J\), we have that \(J\cup G\in \mathcal F_J\), and so \(G\in \mathcal G_J\). Thus \(\mathcal G_I\subset \mathcal G_J\), and so \(x_I\le x_J\). \(\square \)

Applying Claim 16 to the diagram in Fig. 2, we get Claim 17.

Fig. 2: Poset induced by shifting and inclusion

Claim 17

We have \(x_\emptyset \le x_3\le x_2\le x_1\le x_{13}\le x_{12}\le x_{123}\), and \(x_2\le x_{23}\le x_{13}\).

Let \(I_1,I_2,I_3\subset [3]\). Define a \(3\times 3\) matrix \(M=M(I_1,I_2,I_3)=(m_{i,j})\) by

$$\begin{aligned} m_{i,j}={\left\{ \begin{array}{ll} 1 &{} \text {if }j\in I_i,\\ 0 &{} \text {if }j\not \in I_i. \end{array}\right. } \end{aligned}$$

Then \(I_1\cap I_2\cap I_3=\emptyset \) if and only if every column of M contains (at least one) 0. In this case we say that M is acceptable, and let \(\tau :=7-s\), where s is the total sum of \(m_{i,j}\).

Claim 18

Let \(M(I_1,I_2,I_3)\) be acceptable. If \(\{\mathcal G_{I_1},\mathcal G_{I_2},\mathcal G_{I_3}\}\) are all non-empty, then they are 3-cross \(\tau \)-intersecting, and any two of them are 2-cross \(\tau \)-intersecting.

Proof

Let us start with two concrete examples.

The first example is the case \(I_1=I_2=\{1\}\), \(I_3=\{2,3\}\), and so \(s=1+1+2=4\), \(\tau =7-4=3\). We show that \(\{\mathcal G_1,\mathcal G_1,\mathcal G_{23}\}\) are 3-cross 3-intersecting. Suppose the contrary. Then there are \(G_1,G_1'\in \mathcal G_1\), \(G_{23}\in \mathcal G_{23}\) and \(x,y\in [4,n]\) such that \(G_1\cap G_1'\cap G_{23}\subset \{x,y\}\). Let \(F_{12}=\{1,2\}\cup (G_1{\setminus }\{x\})\), \(F_{13}=\{1,3\}\cup (G_1'{\setminus }\{y\})\), and \(F_{23}=\{2,3\}\cup G_{23}\in \mathcal F_{23}\). By the shiftedness we have \(F_{12}\in \mathcal F_{12}\) and \(F_{13}\in \mathcal F_{13}\). But \(F_{12}\cap F_{13}\cap F_{23}=\emptyset \), a contradiction.

The next example is the case \(I_1=I_2=I_3=\emptyset \), and so \(\tau =7\). We show that \(\{\mathcal G_\emptyset ,\mathcal G_\emptyset ,\mathcal G_\emptyset \}\) are 3-cross 7-intersecting, that is, \(\mathcal G_\emptyset \) is 3-wise 7-intersecting. Suppose the contrary. Then there are \(F,F',F''\in \mathcal F_\emptyset \) and \(x_1,\ldots ,x_6\in [4,n]\) such that \(F\cap F'\cap F''\subset \{x_1,\ldots ,x_6\}\). Let \(F_{12}:=(F{\setminus }\{x_1,x_2\})\cup \{1,2\}\in \mathcal F_{12}\), \(F'_{13}:=(F'{\setminus }\{x_3,x_4\})\cup \{1,3\}\in \mathcal F_{13}\), and \(F''_{23}:=(F''{\setminus }\{x_5,x_6\})\cup \{2,3\}\in \mathcal F_{23}\). Then we have \(F_{12}\cap F'_{13}\cap F''_{23}=\emptyset \), a contradiction.

The following proof for the general case was suggested by one of the referees. Suppose for contradiction that there are sets \(G_i\in \mathcal G_{I_i}\) such that \(|G_1\cap G_2\cap G_3|\le \tau -1=6-s\). The matrix M has s “taken” places out of 9. “Reserve” one empty spot in each column (these are available since the matrix is acceptable). We are left with \(6-s\) non-reserved empty spots, say \(r_i\) of them in row i. Shift \(r_i\) elements of \(G_1\cap G_2\cap G_3\) from \(G_i\) to the empty spots in row i to construct a new set \(F_i\); by the shiftedness of \(\mathcal F\) we have \(F_i\in \mathcal F\). By construction, \(F_1, F_2, F_3\) have empty intersection, a contradiction. \(\square \)

5.1 Case \(\mathcal G_1=\emptyset \)

In this case, by Claim 17, we have \(\mathcal G_\emptyset =\mathcal G_3=\mathcal G_2=\mathcal G_1=\emptyset \).

First suppose that \(\mathcal G_{23}\ne \emptyset \). If \(\bigcap G\ne \emptyset \), where the intersection is taken over all \(G\in \mathcal G_{12}\cup \mathcal G_{13}\cup \mathcal G_{23}\), then since the family is shifted, \(4\in \bigcap G\). Since \(\mathcal F\subset \mathcal F_{23}\cup \mathcal F_{13}\cup \mathcal F_{12}\cup \mathcal F_{123}\), we have \(|F\cap [4]|\ge 3\) for every \(F\in \mathcal F\). This means that \(\mathcal F\subset \textrm{BD}_3(n)\), which contradicts our assumption. Therefore we have \(\bigcap G=\emptyset \). Moreover, the families \(\mathcal G_{12},\mathcal G_{13},\mathcal G_{23}\) are 3-cross intersecting by Claim 18. Thus we can apply (3) of Theorem 7 with \(\min \{\epsilon _p:\frac{2}{5}\le p\le \frac{1}{2}\}=\frac{4}{25}\) to get

$$\begin{aligned} x_{12}+x_{13}+x_{23}\le 3p-\epsilon _p\le 3p-0.16. \end{aligned}$$

Next suppose that \(\mathcal G_{23}=\emptyset \). By Lemma 2 we have \(x_{12}+x_{13}+x_{23}=x_{12}+x_{13}\le 1\le 3p-0.2\). Thus in both cases we have \(x_{12}+x_{13}+x_{23}\le 3p-0.16\). Then it follows from (10) that

$$\begin{aligned} \mu _p(\mathcal F)&= p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\\&\le p^2q(3p-0.16)+p^3\\&=4p^3q+p^4-0.16p^2q. \end{aligned}$$

Noting that \(\mu _p(\textrm{BD}_3(n))=4p^3q+p^4\) and \(p^2q\ge \frac{12}{125}=0.096\) for \(\frac{2}{5}\le p\le \frac{1}{2}\), we have

$$\begin{aligned} \mu _p(\mathcal F)\le 4p^3q+p^4-0.16\cdot 0.096<\mu _p(\textrm{BD}_3(n))-0.01, \end{aligned}$$

as needed.
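The constants used in this case can be double-checked with a few lines of arithmetic (ours):

```python
# epsilon_p = (2-3p)(3p-1) >= 4/25 and p^2 q >= 12/125 on [2/5, 1/2], hence 0.16 * p^2 q > 0.01
ps = [0.4 + k * 0.1 / 1000 for k in range(1001)]
assert min((2 - 3 * p) * (3 * p - 1) for p in ps) >= 4 / 25 - 1e-12
assert min(p * p * (1 - p) for p in ps) >= 12 / 125 - 1e-12
assert 0.16 * 12 / 125 > 0.01
```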

5.2 Case \(\mathcal G_1\ne \emptyset \) and \(\mathcal G_2=\emptyset \)

If \(\mathcal G_{23}=\emptyset \) then \([2,n]\not \in \mathcal F\). This means that there are \(F,F'\in \mathcal F\) such that \(F\cap F'=\{1\}\). (Otherwise all \(F,F'\in \mathcal F\) intersect on [2, n] and we could add [2, n] to \(\mathcal F\), which contradicts the assumption that \(\mathcal F\) is size maximal.) In this case all subsets in \(\mathcal F\) must contain 1, which contradicts the assumption that \(\mathcal F\) is non-trivial. So we may assume that \(\mathcal G_{23}\ne \emptyset \). Then both \(\mathcal F_1\) and \(\mathcal F_{23}\) are non-empty, and so the families \(\mathcal F_{13}, \mathcal F_{12}, \mathcal F_{123}\) are also non-empty by Claim 17.

By Claim 18 we have the following.

Claim 19

  1. (1)

    \(\{\mathcal G_1,\mathcal G_1,\mathcal G_{23}\}\) are 3-cross 3-intersecting, and so \(\mathcal G_1\) is 2-wise 3-intersecting.

  2. (2)

    \(\{\mathcal G_{12},\mathcal G_{13},\mathcal G_{23}\}\) are 3-cross intersecting, and so \(\{\mathcal G_{12},\mathcal G_{13}\}\) are 2-cross intersecting.

  3. (3)

    \(\{\mathcal G_1,\mathcal G_{23},\mathcal G_{123}\}\) are 3-cross intersecting, and so both \(\{\mathcal G_1,\mathcal G_{123}\}\) and \(\{\mathcal G_{23},\mathcal G_{123}\}\) are 2-cross intersecting.

  4. (4)

    \(\{\mathcal G_{1},\mathcal G_{12},\mathcal G_{23}\}\) are 3-cross 2-intersecting.

  5. (5)

    \(\{\mathcal G_{1},\mathcal G_{23},\mathcal G_{23}\}\) are 3-cross 2-intersecting, and so \(\mathcal G_{23}\) is 2-wise 2-intersecting.

Claim 20

  1. (1)

    \(\min \{x_1,x_{23}\}\le {\tilde{\alpha }}^3\) (see Claim 5 for the definition of \({\tilde{\alpha }}\)).

  2. (2)

    \(x_{12}+c x_{13}\le p(c+1)\), where \(c=\frac{1}{2p}(1+\sqrt{1-4p^2})\).

  3. (3)

    \(x_1+x_{123}\le 1\).

  4. (4)

    \(x_{23}+x_{123}\le 1\).

  5. (5)

    \(x_{1}+x_{12}+x_{23} \le 1\).

  6. (6)

    \(x_{23}\le {\tilde{a}}_2\) (see Claim 7 for the definition of \({\tilde{a}}_t\)).

  7. (7)

    \(x_1\le {\tilde{a}}_3\).

Proof

Item (1): By Lemma 4 with (1) of Claim 19, we have \(x_1^2x_{23}\le \alpha ^9\). Then, using \(\alpha \le {\tilde{\alpha }}\) from Claim 5, we get \((\min \{x_1,x_{23}\})^3\le x_1^2x_{23}\le \alpha ^9\le {\tilde{\alpha }}^9\).

Item (2): By (i) of Lemma 2 with (2) of Claim 19, we have \(x_{13}+x_{12}\le 1\). Moreover, by (ii) of Lemma 2 with \(x_{13}\le x_{12}\) from Claim 17, we have \(x_{13}^2\le x_{13}x_{12}\le p^2\) and \(x_{13}\le p\). By solving \(x_{13}x_{12}=p^2\) and \(x_{13}+x_{12}=1\) with \(x_{13}\le x_{12}\) we get \((x_{13},x_{12})=(\frac{1}{2}(1-\sqrt{1-4p^2}),\frac{1}{2}(1+\sqrt{1-4p^2})) =(1-cp,cp)\). Also, by solving \(x_{13}x_{12}=p^2\) and \(x_{13}=x_{12}\) we get \((x_{13},x_{12})=(p,p)\). Thus \((x_{13},x_{12})\) lies on or below the line connecting these two points, that is, \(x_{12}\le c(p-x_{13})+p\) (see Fig. 3).

Fig. 3: Item (2) of Claim 20: \((x_{13}, x_{12})\) lies in the gray area

Items (3) and (4): These follow from (i) of Lemma 2 with (3) of Claim 19.

Item (5): This follows from Lemma 3 with (4) of Claim 19.

Item (6): This follows from Claim 7 with (5) of Claim 19.

Item (7): This follows from Claim 7 with (1) of Claim 19. \(\square \)
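Item (2) of Claim 20 can also be sanity-checked by randomly sampling the region shown in Fig. 3 (an illustration only, not a proof):

```python
import random

# every point with x13 <= x12, x13 + x12 <= 1 and x13 * x12 <= p^2
# should satisfy x12 + c * x13 <= p (c + 1)
random.seed(0)
for _ in range(5):
    p = random.uniform(0.4, 0.5)
    c = (1 + (1 - 4 * p * p) ** 0.5) / (2 * p)
    for _ in range(20000):
        x13, x12 = random.uniform(0, 1), random.uniform(0, 1)
        if x13 <= x12 and x13 + x12 <= 1 and x13 * x12 <= p * p:
            assert x12 + c * x13 <= p * (c + 1) + 1e-9
```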

Recall from (10) that \(\mu _p(\mathcal F)=pq^2x_1+p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\).

5.2.1 Subcase \(x_1\le x_{23}\).

We solve the following linear programming problem:

maximize::

\(pq^2x_1+p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\),

subject to::

\(x_1-x_{23}\le 0\), (1)–(6) in Claim 20, and \(x_I\ge 0\) for all I.

The corresponding dual problem is (Table 4)

minimize::

\({\tilde{\alpha }}^3 y_1+p(c+1) y_2+y_3+y_4+y_5+{\tilde{a}}_2 y_6\),

subject to::

\(y_0+y_1+y_3+y_5\ge pq^2\), \(-y_0+y_4+y_5+y_6\ge p^2q\), \(c y_2\ge p^2q\), \(y_2+y_5\ge p^2q\), \(y_3+y_4\ge p^3\), and \(y_i\ge 0\) for all i.

Table 4 Subcase \(x_1\le x_{23}\)

A feasible solution is given by \(y_0=y_6=0\), \(y_1=pq^2-p^2q(1-\frac{2}{c})-p^3\), \(y_2=y_4=\frac{p^2q}{c}\), \(y_3=p^3-\frac{p^2q}{c}\), \(y_5=p^2q(1-\frac{1}{c})\), and the corresponding value of the objective function is

$$\begin{aligned} p\left( p + p^2 - p^3 + {\tilde{\alpha }}^3(1 - 3p + p^2)\right) -\frac{1}{c} \,p^2 q(1-2{\tilde{\alpha }}^3-p). \end{aligned}$$
(11)

By the Taylor expansion of \(\frac{1}{c}\) at \(p=\frac{2}{5}\) it follows that \(\frac{1}{c}>d(p)\), where \(d(p)=\frac{1375 p^2}{216}-\frac{325p}{108}+\frac{37}{54}\) for \(\frac{2}{5}\le p\le \frac{1}{2}\). Since \(1-2{\tilde{\alpha }}^3-p>0\) in this interval, the value (11) satisfies

$$\begin{aligned}&< p\left( p + p^2 - p^3 + {\tilde{\alpha }}^3(1 - 3p + p^2)\right) -d(p)p^2 q(1-2{\tilde{\alpha }}^3-p) =:f(p)\\&\approx - 0.00633167 p + 0.490037 p^2 + 3.71975 p^3 - 8.76595 p^4 \\&\qquad \quad +21.3084 p^5 - 61.7755 p^6 + 95.9036 p^7 - 52.7438 p^8. \end{aligned}$$

Let \(g(p):=(4p^3q+p^4 -0.00194)-f(p)\), and let \(g^{(i)}(p)\) denote the i-th derivative. Let \(\frac{2}{5}\le p\le \frac{1}{2}\). We have \(g^{(6)}(p)\approx 1063314.937p^2-483354.3468p+44478.39041\) and \(g^{(6)}(p)>0\). Thus \(g^{(4)}(p)\) is convex. Since \(g^{(4)}(\frac{2}{5})<-213<0\) and \(g^{(4)}(\frac{1}{2})<-112<0\), we have \(g^{(4)}(p)<0\). Thus \(g^{(3)}(p)\) is decreasing in p. Since \(g^{(3)}(\frac{2}{5})<-7<0\) we have \(g^{(3)}(p)<0\). Thus \(g'(p)\) is concave. Since \(g'(\frac{2}{5})>0.23>0\) and \(g'(\frac{1}{2})>0.34>0\), we have \(g'(p)>0\) and g(p) is increasing in p. Finally we have \(g(\frac{2}{5})>3\times 10^{-6}>0\), and so \(g(p)>0\). This means that the value (11) is less than \(4p^3q+p^4 -0.00194\).

Then by the weak duality theorem we have \(\mu _p(\mathcal F)<4p^3q+p^4-0.00194\).
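As a numerical sanity check (independent of the derivative argument above), one can also evaluate g on a fine grid of \([\frac{2}{5},\frac{1}{2}]\). Here is a minimal Python sketch, using the approximate coefficients of f displayed above; the same kind of check can be run for the polynomial g of the next subcase.

```python
import numpy as np

# Coefficients of the approximate polynomial f(p) displayed above
# (constant term first; f has no constant term).
F_COEFFS = [0.0, -0.00633167, 0.490037, 3.71975, -8.76595,
            21.3084, -61.7755, 95.9036, -52.7438]

def g(p):
    """g(p) = (4p^3 q + p^4 - 0.00194) - f(p), with q = 1 - p."""
    f = sum(c * p**i for i, c in enumerate(F_COEFFS))
    return 4 * p**3 * (1 - p) + p**4 - 0.00194 - f

ps = np.linspace(0.4, 0.5, 100_001)
print(g(ps).min())  # stays positive; smallest (about 3e-6) near p = 2/5
```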

5.2.2 Subcase \(x_{23}\le x_1\)

We solve the following linear programming problem:

maximize:

\(pq^2x_1+p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\),

subject to:

\(x_{23}-x_1\le 0\), (1)–(5) and (7) in Claim 20, and \(x_I\ge 0\) for all I.

The corresponding dual problem is the following (Table 5):

minimize:

\({\tilde{\alpha }}^3 y_1+p(c+1) y_2+ y_3+y_4+y_5+{\tilde{a}}_3 y_7\),

subject to:

\(-y_0+y_3+y_5+y_7\ge pq^2\), \(y_0+y_1+y_4+y_5\ge p^2q\), \(c y_2\ge p^2q\), \(y_2+y_5\ge p^2q\), \(y_3+y_4\ge p^3\), and \(y_i\ge 0\) for all i.

Table 5 Subcase \(x_{23}\le x_1\)

A feasible solution is given by \(y_0=y_4=0\), \(y_1=y_2=p^2 q(1-\frac{1}{c})\), \(y_3=p^3\), \(y_5=p^2 q(1-\frac{1}{c})\), \(y_7=p q^2-p^2 q(1-\frac{1}{c})-p^3\). Noting that \(c+\frac{1}{c}=\frac{1}{p}\), the corresponding value of the objective function is

$$\begin{aligned} p^3\left( -{\tilde{\alpha }}^3+{\tilde{a}}_3-1\right) +p^2 \left( {\tilde{\alpha }}^3-3{\tilde{a}}_3+2\right) +{\tilde{a}}_3 p -\frac{1}{c}\,p^2q\left( {\tilde{\alpha }}^3-{\tilde{a}}_3+2 p+1\right) . \end{aligned}$$

Then, using \({\tilde{\alpha }}^3-{\tilde{a}}_3+2 p+1>0\) and \(\frac{1}{c}>d(p)\) (see the previous subsection), the above value satisfies

$$\begin{aligned}&< -1.50324 p + 8.79901 p^2 - 3.86493 p^3 - 26.8212 p^4\\&\qquad \quad + 30.6062 p^5 + 12.8608 p^6 - 47.9518 p^7 + 26.3719 p^8=:f(p). \end{aligned}$$

Let \(g(p):=(4p^3q+p^4 -0.00182)-f(p)\). Let \(J_1=\{p\in \mathbb R:\frac{2}{5}\le p\le \frac{9}{20}\}\), \(J_2=\{p\in \mathbb R:\frac{9}{20}\le p\le \frac{1}{2}\}\), and \(J=J_1\cup J_2\). We need to show that \(g(p)>0\) for \(p\in J\).

First we show that \(g^{(4)}(p)<0\) for \(p\in J_1\). We have \(g^{(7)}(p)<0\) for \(p\in J\). Thus \(g^{(5)}(p)\) is concave. Since \(g^{(5)}(\frac{2}{5})>0\) and \(g^{(5)}(\frac{9}{20})>0\), we have \(g^{(5)}(p)>0\) for \(p\in J_1\). Thus \(g^{(4)}(p)\) is increasing in \(p\in J_1\). Since \(g^{(4)}(\frac{9}{20})<0\), we have \(g^{(4)}(p)<0\) for \(p\in J_1\).

Next we show that \(g^{(4)}(p)<0\) for \(p\in J_2\). Since \(g^{(7)}(p)<0\), \(g^{(6)}(p)\) is decreasing in p. Since \(g^{(6)}(\frac{9}{20})<0\), \(g^{(4)}(p)\) is concave for \(p\in J_2\). Let \(h(p)=481.064p-381.36\) be the tangent to \(g^{(4)}(p)\) at \(p=\frac{9}{20}\). Then we have \(g^{(4)}(p)\le h(p)\le h(\frac{1}{2})\) for \(p\in J_2\). Since \(h(\frac{1}{2})<0\), we have \(g^{(4)}(p)<0\) for \(p\in J_2\).

Let \(p\in J\). We have shown that \(g^{(4)}(p)<0\). Then \(g^{(2)}(p)\) is concave. Since \(g^{(2)}(\frac{2}{5})>0\) and \(g^{(2)}(\frac{1}{2})>0\), we have \(g^{(2)}(p)>0\). Thus \(g'(p)\) is increasing in p. Since \(g'(\frac{2}{5})>0\), we have \(g'(p)>0\) and g(p) is increasing in p. Since \(g(\frac{2}{5})>0\), we have \(g(p)>0\) as needed.

Thus it follows from the weak duality theorem that \(\mu _p(\mathcal F)<4p^3q+p^4-0.0018\).

5.3 Case \(\mathcal G_2\ne \emptyset \), \(\mathcal G_3=\emptyset \)

Using Claim 18 we have the following.

Claim 21

  1. (1)

    \(\{\mathcal G_1,\mathcal G_1,\mathcal G_2\}\) are 3-cross 4-intersecting, and so \(\mathcal G_1\) is 2-wise 4-intersecting.

  2. (2)

    \(\{\mathcal G_2,\mathcal G_{13},\mathcal G_{13}\}\) are 3-cross 2-intersecting, and so \(\mathcal G_{13}\) is 2-wise 2-intersecting.

  3. (3)

    \(\{\mathcal G_2,\mathcal G_{13},\mathcal G_{123}\}\) are 3-cross intersecting, and so \(\{\mathcal G_{13},\mathcal G_{123}\}\) are 2-cross intersecting.

  4. (4)

    \(\{\mathcal G_{2},\mathcal G_{12},\mathcal G_{13}\}\) are 3-cross 2-intersecting.

  5. (5)

    \(\{\mathcal G_{1},\mathcal G_{2},\mathcal G_{123}\}\) are 3-cross 2-intersecting.

Claim 22

  1. (1)

    \(x_1\le {\tilde{a}}_4\).

  2. (2)

    \(x_{13}\le {\tilde{a}}_2\).

  3. (3)

    \(x_2\le {\tilde{\alpha }}^4\).

  4. (4)

    \(x_{13}+x_{123}\le 1\).

  5. (5)

    \(x_{1}+x_{12}+x_{23}\le 1\).

  6. (6)

    \(x_{2}+x_{12}+x_{13}\le 1\).

  7. (7)

    \(x_{1}+x_{2}+x_{123}\le 1\).

Proof

Item (3): By Lemma 4 with (1) of Claim 21, we have \(x_1^2x_2\le \alpha ^{12}\). Then, using \(x_2\le x_1\) from Claim 17 and Claim 5, we get \(x_2^3\le x_1^2x_2\le \alpha ^{12}\le {\tilde{\alpha }}^{12}\), and hence \(x_2\le {\tilde{\alpha }}^4\).

Item (5) is from Claim 20. Indeed all inequalities in Claim 20 are still valid in this case.

The other items follow from Claim 21, Claim 7, Lemma 2, and Lemma 3. \(\square \)

We solve the following linear programming problem:

maximize:

\(pq^2(x_1+x_2)+p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\),

subject to:

(1)–(7) in Claim 22, and \(x_I\ge 0\) for all I.

The corresponding dual problem is the following (Table 6):

minimize:

\({\tilde{a}}_4y_1+{\tilde{a}}_2y_2+{\tilde{\alpha }}^4 y_3+ y_4+ y_5+y_6+y_7\),

subject to:

\(y_3+y_6+y_7\ge pq^2\), \(y_1+y_5+y_7\ge pq^2\), \(y_5\ge p^2q\), \(y_2+y_4+y_6\ge p^2q\), \(y_5+y_6\ge p^2q\), \(y_4+y_7\ge p^3\), and \(y_i\ge 0\) for all i.

Table 6 Case \(\mathcal G_2\ne \emptyset \)

A feasible solution is given by \(y_1=pq(1-2p)\), \(y_2=p^2(1-2p)\), \(y_3=pq^2\), \(y_4=p^3\), \(y_5=p^2 q\), \(y_6=y_7=0\). Then the corresponding value of the objective function is

$$\begin{aligned}&{\tilde{a}}_4pq(1-2p) + {\tilde{a}}_2p^2(1-2p) +{\tilde{\alpha }}^4 pq^2+p^2\\&\qquad \approx -1.70723 p + 9.39501 p^2 - 10.639 p^3 - 1.74811 p^4 \\&\qquad \qquad + 13.3146 p^5 - 16.3729 p^6 + 6.6536 p^7\\&\qquad < 4p^3q+p^4-0.00436, \end{aligned}$$

and so \(\mu _p(\mathcal F)<4p^3q+p^4-0.004\).
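Again, the dual solution above can be checked to be feasible directly; each constraint holds with equality:

$$\begin{aligned} y_3+y_6+y_7&=pq^2,\qquad y_1+y_5+y_7=pq(1-2p)+p^2q=pq^2,\\ y_5&=p^2q,\qquad \ \ y_2+y_4+y_6=p^2(1-2p)+p^3=p^2q,\\ y_5+y_6&=p^2q,\qquad y_4+y_7=p^3, \end{aligned}$$

and \(y_1,y_2\ge 0\) because \(p\le \frac{1}{2}\).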

5.4 Case \(\mathcal G_3\ne \emptyset \), \(\mathcal G_\emptyset =\emptyset \)

Using Claim 18 we have the following.

Claim 23

  1. (1)

    \(\{\mathcal G_3,\mathcal G_{12},\mathcal G_{12}\}\) are 3-cross 2-intersecting, and so \(\mathcal G_{12}\) is 2-wise 2-intersecting.

  2. (2)

    \(\{\mathcal G_3,\mathcal G_{12},\mathcal G_{123}\}\) are 3-cross intersecting, and so \(\{\mathcal G_{12},\mathcal G_{123}\}\) are 2-cross intersecting.

Claim 24

  1. (1)

    \(x_1\le {\tilde{a}}_4\).

  2. (2)

    \(x_{12}\le {\tilde{a}}_2\).

  3. (3)

    \(x_2\le {\tilde{\alpha }}^4\).

  4. (4)

    \(x_{12}+x_{123}\le 1\).

  5. (5)

    \(x_{1}+x_{12}+x_{23}\le 1\).

  6. (6)

    \(x_{2}+x_{12}+x_{13}\le 1\).

  7. (7)

    \(x_{1}+x_{2}+x_{123}\le 1\).

  8. (8)

    \(x_3-x_2\le 0\).

  9. (9)

    \(x_{23}-x_{13}\le 0\).

  10. (10)

    \(x_{13}-x_{12}\le 0\).

Proof

Items (1), (3), (6), and (7) are from Claim 21. Items (2) and (4) follow from Claim 23, Claim 7, and Lemma 2. Item (5) is from Claim 19. The other items are from Claim 17. \(\square \)

We solve the following linear programming problem:

maximize:

\(pq^2(x_1+x_2+x_3)+p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\),

subject to:

(1)–(10) in Claim 24, and \(x_I\ge 0\) for all I.

The corresponding dual problem is the following (Table 7):

minimize:

\({\tilde{a}}_4y_1+{\tilde{a}}_2y_2+{\tilde{\alpha }}^4 y_3+ y_4+ y_5+y_6+y_7\),

subject to:

\(y_8\ge pq^2\), \(y_3+y_6+y_7-y_8\ge pq^2\), \(y_1+y_5+y_7\ge pq^2\), \(y_5+y_9\ge p^2q\), \(y_6-y_9+y_{10}\ge p^2q\), \(y_2+y_4+y_5+y_6-y_{10}\ge p^2q\), \(y_4+y_7\ge p^3\), and \(y_i\ge 0\) for all i.

Table 7 Case \(\mathcal G_3\ne \emptyset \)

We distinguish the following two subcases.

5.4.1 Subcase \(\frac{2}{5}\le p\le 0.453264\)

A feasible solution is given by \(y_1=p q^2\), \(y_2=p^2(3-4p)\), \(y_3=2pq^2\), \(y_4=p^3\), \(y_5=y_6=y_7=0\), \(y_8=pq^2\), \(y_9=p^2q\), \(y_{10}=2p^2q\), and the corresponding value of the objective function is

$$\begin{aligned}&p\left( {\tilde{a}}_4q^2+{\tilde{a}}_2(3-4p)p+2{\tilde{\alpha }}^4q^2+p^2\right) \\&\quad \approx -1.70606 p + 4.43558 p^2 + 5.72241 p^3 - 16.7467 p^4\\&\qquad \quad + 26.6293 p^5 - 32.7457 p^6 + 13.3072 p^7\\&\qquad <4p^3q+p^4-0.00404377. \end{aligned}$$

Thus \(\mu _p(\mathcal F)<4p^3q+p^4-0.004\).

5.4.2 Subcase \(0.453264\le p\le \frac{1}{2}\)

A feasible solution is given by \(y_1=y_6=y_9=0\), \(y_2=p(1-2p)\), \(y_3=pq\), \(y_4=p(3p-p^2-1)\), \(y_5=y_{10}=p^2q\), \(y_7=pq(1-2p)\), \(y_8=pq^2\), and the corresponding value of the objective function is

$$\begin{aligned}&p\left( {\tilde{a}}_2(1-2p)+{\tilde{\alpha }}^4q+p\right) \\&\qquad \approx -1.10283 p + 6.37415 p^2 - 5.84563 p^3 - 3.59536 p^4 + 9.71927 p^5 - 6.6536 p^6\\&\qquad <4p^3q+p^4-0.00404377. \end{aligned}$$

Thus \(\mu _p(\mathcal F)<4p^3q+p^4-0.004\).

5.5 Case \(\mathcal G_\emptyset \ne \emptyset \)

Using Claim 18 we have the following.

Claim 25

  1. (1)

    \(\{\mathcal G_\emptyset ,\mathcal G_\emptyset ,\mathcal G_\emptyset \}\) are 3-cross 7-intersecting, that is, \(\mathcal G_\emptyset \) is 3-wise 7-intersecting.

  2. (2)

    \(\{\mathcal G_1,\mathcal G_1,\mathcal G_\emptyset \}\) are 3-cross 5-intersecting, and so \(\mathcal G_1\) is 2-wise 5-intersecting.

  3. (3)

    \(\{\mathcal G_\emptyset ,\mathcal G_{123},\mathcal G_{123}\}\) are 3-cross intersecting, and so \(\{\mathcal G_{123},\mathcal G_{123}\}\) are 2-cross intersecting.

Claim 26

  1. (1)

    \(x_\emptyset \le {\tilde{\alpha }}^7\).

  2. (2)

    \(x_2\le {\tilde{\alpha }}^4\).

  3. (3)

    \(x_1\le {\tilde{a}}_5\).

  4. (4)

    \(x_{123}\le p\).

  5. (5)

    \(x_{1}+x_{12}+x_{23}\le 1\).

  6. (6)

    \(x_3-x_2\le 0\).

  7. (7)

    \(x_{23}-x_{13}\le 0\).

  8. (8)

    \(x_{13}-x_{12}\le 0\).

  9. (9)

    \(x_{12}-x_{123}\le 0\).

Proof

Items (1), (3), and (4) follow from Claim 25, Lemma 4, Claim 5, Claim 7, and Lemma 2. Items (2) and (5) are from Claim 22 and Claim 20, respectively. The other items are from Claim 17. \(\square \)

We solve the following linear programming problem:

maximize:

\(q^3 x_\emptyset +pq^2(x_1+x_2+x_3)+p^2q(x_{12}+x_{13}+x_{23})+p^3x_{123}\),

subject to:

(1)–(9) in Claim 26, and \(x_I\ge 0\) for all I.

The corresponding dual problem is the following (Table 8):

minimize:

\({\tilde{\alpha }}^7 y_1+{\tilde{\alpha }}^4 y_2+{\tilde{a}}_5 y_3+ p y_4+ y_5\),

subject to:

\(y_1\ge q^3\), \(y_6\ge p q^2\), \(y_2-y_6\ge p q^2\), \(y_3+y_5\ge p q^2\), \(y_5+y_7\ge p^2 q\), \(-y_7+y_8\ge p^2 q\), \(y_5-y_8+y_9\ge p^2 q\), \(y_4-y_9\ge p^3\), and \(y_i\ge 0\) for all i.

Table 8 Case \(\mathcal G_\emptyset \ne \emptyset \)

We distinguish the following two subcases.

5.5.1 Subcase \(\frac{2}{5}\le p\le 0.424803\)

A feasible solution is given by \(y_1=q^3\), \(y_2=2pq^2\), \(y_3=y_6=pq^2\), \(y_4=p^2(3-2p)\), \(y_5=0\), \(y_7=p^2q\), \(y_8=2p^2 q\), \(y_9=3p^2 q\), and the corresponding value of the objective function is

$$\begin{aligned}&{\tilde{\alpha }}^7 q^3 + 2{\tilde{\alpha }}^4 pq^2+ {\tilde{a}}_5 pq^2+p^3(3-2p)\\&\qquad \approx -27.5644 p^{10}+104.919p^9-157.051 p^8+132.065p^7-82.6061 p^6+39.2544p^5\\&\qquad \qquad -7.70344 p^4-6.68845p^3+8.19629 p^2-1.82104p-7.41682\cdot 10^{-6}\\&\qquad <4p^3q+p^4-0.004322. \end{aligned}$$

Thus \(\mu _p(\mathcal F)<4p^3q+p^4-0.004\).

5.5.2 Subcase \(0.424803\le p\le \frac{1}{2}\)

A feasible solution is given by \(y_1=q^3\), \(y_2=2pq^2\), \(y_3=pq(1-2p)\), \(y_4=p^2\), \(y_5=y_8=y_9=p^2q\), \(y_6=pq^2\), \(y_7=0\), and the corresponding value of the objective function is

$$\begin{aligned}&{\tilde{\alpha }}^7 q^3 + 2{\tilde{\alpha }}^4 pq^2 + {\tilde{a}}_5 pq(1-2p)+p^2\\&\qquad \approx -27.5644 p^{10}+104.919p^9-157.051 p^8+132.065p^7-82.6061 p^6+39.2544p^5\\&\qquad \qquad -1.05572 p^4-16.16p^3+11.0202 p^2-1.82104p-7.41682\cdot 10^{-6}\\&\qquad <4p^3q+p^4-0.004322. \end{aligned}$$

Thus \(\mu _p(\mathcal F)<4p^3q+p^4-0.004\).

This completes the proof of Theorem 6. \(\square \)

6 Concluding remarks

In this section we discuss possible extensions and related problems.

6.1 Non-trivial r-wise intersecting families for \(r\ge 4\)

We have determined \(M_2(p)\) and \(M_3(p)\) for all p. Let us consider \(M_r(p)\) for the general case \(r\ge 4\). Some of the facts we used for the cases \(r=2,3\) extend easily to general r, as follows.

Proposition 1

Let \(r\ge 2\).

  1. (1)

    For \(s=0,1,\ldots ,r-1\) we have \(M_r(p)\ge p^s\) for \(p>\frac{r-s-1}{r-s}\).

  2. (2)

    \(M_r(p)=p^{r-1}\) for \(0<p\le \frac{1}{r}\).

  3. (3)

    \(M_r(p)=p\) for \(\frac{r-2}{r-1}<p\le \frac{r-1}{r}\).

  4. (4)

    \(M_r(p)=1\) for \(\frac{r-1}{r}<p<1\).

Proof

Item (1): We construct a non-trivial r-wise intersecting family

$$\begin{aligned} \mathcal F_r(n,s):=\{[s]\cup G:G\subset [s+1,n],\, |G|\ge \tfrac{r-s-1}{r-s}n\}\cup \{[n]{\setminus }\{i\}:1\le i\le s\}. \end{aligned}$$

Then \(\mu _p(\mathcal F_r(n,s))\rightarrow p^s\) as \(n\rightarrow \infty \): for \(p>\frac{r-s-1}{r-s}\) the p-measure of \(\{G\subset [s+1,n]:|G|\ge \tfrac{r-s-1}{r-s}n\}\) tends to 1 by the law of large numbers, while the s sets \([n]{\setminus }\{i\}\) contribute only \(s\,p^{n-1}q=o(1)\); cf. [11].

Item (2): By item (1) with \(s=r-1\) we have \(M_r(p)\ge p^{r-1}\). On the other hand, a non-trivial r-wise intersecting family is 2-wise \((r-1)\)-intersecting, and by Theorem 11 the p-measure of the family is at most \(p^{r-1}\) if \(p\le \frac{1}{r}\).

Item (3): By item (1) with \(s=1\) we have \(M_r(p)\ge p\). On the other hand, it is known from [8, 10, 22] that an r-wise intersecting family has p-measure at most p if \(p\le \frac{r-1}{r}\).

Item (4): By item (1) with \(s=0\) we have \(M_r(p)\ge 1\), and so \(M_r(p)=1\) by definition of \(M_r(p)\). \(\square \)

Even for the case \(r=4\) the exact value of \(M_4(p)\) is not known for \(\frac{1}{4}<p\le \frac{2}{3}\), except at \(p=\frac{1}{2}\). In this case Proposition 1 and Theorem 1 give us the following. For simplicity here we write \(\textrm{bd}_r(p)=\lim _{n\rightarrow \infty }\mu _p(\textrm{BD}_r(n))\) and \(f_r(p,s)=\lim _{n\rightarrow \infty }\mu _p(\mathcal F_r(n,s))\).

Fact 1

For non-trivial 4-wise intersecting families we have the following:

$$\begin{aligned} M_4(p){\left\{ \begin{array}{ll} =f_4(p,3)= p^3 &{} \text { if } \quad 0<p\le \frac{1}{4},\\ \ge \textrm{bd}_4(p) = 5p^4q+p^5 &{} \text { if }\quad \frac{1}{4}\le p<\frac{1}{2},\\ =\textrm{bd}_4(p) = 5p^4q+p^5 &{} \text { if }\quad p=\frac{1}{2},\\ \ge f_4(p,2)= p^2 &{}\text { if }\quad \frac{1}{2}<p\le \frac{1+\sqrt{17}}{8},\\ \ge \textrm{bd}_4(p) = 5p^4q+p^5 &{}\text { if }\quad \frac{1+\sqrt{17}}{8}<p\le \frac{2}{3},\\ =f_4(p,1)=p &{}\text { if }\quad \frac{2}{3}<p\le \frac{3}{4},\\ =1 &{}\text { if }\quad \frac{3}{4}<p<1. \end{array}\right. } \end{aligned}$$
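The threshold \(\frac{1+\sqrt{17}}{8}\) above is simply the point where \(f_4(p,2)=p^2\) and \(\textrm{bd}_4(p)\) cross:

$$\begin{aligned} p^2\ge 5p^4q+p^5=5p^4-4p^5 \;\Longleftrightarrow \; 1\ge 5p^2-4p^3 \;\Longleftrightarrow \; (p-1)(4p^2-p-1)\ge 0, \end{aligned}$$

and for \(0<p<1\) this holds if and only if \(4p^2-p-1\le 0\), that is, \(p\le \frac{1+\sqrt{17}}{8}\). The constant \(\frac{1+\sqrt{21}}{10}\) appearing below arises in the same way from comparing \(p^3\) with \(\textrm{bd}_5(p)\).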

Conjecture 2

For \(r\ge 2\) it holds that \(M_r(p)=\textrm{bd}_r(p)\) for \(\frac{1}{r}\le p\le \frac{1}{2}\).

It is known that \(M_r(p)=\textrm{bd}_r(p)\) if \(r\ge 13\) and \(\frac{1}{2}\le p\le \frac{1}{2}+\epsilon _r\) for some \(\epsilon _r>0\), see [11]. Note also that \(M_5(p)\ge f_5(p,3)>\textrm{bd}_5(p)\) for \(\frac{1}{2}<p<\frac{1+\sqrt{21}}{10}\).

Problem 1

Let \(0<p\le \frac{r-1}{r}\), and let \(\mathcal F\subset 2^{[n]}\) be a non-trivial r-wise intersecting family. Is it true that

$$\begin{aligned} M_r(p)\le \max \{\textrm{bd}_r(p),\, f_r(p,1),\ldots , f_r(p,r-1)\}\,? \end{aligned}$$

6.2 Uniform families

One can consider non-trivial r-wise intersecting k-uniform families, that is, families in \(\left( {\begin{array}{c}[n]\\ k\end{array}}\right) :=\{F\subset [n]:|F|=k\}\), and ask the maximum size. Let us construct some candidate families to address this problem. For \(1\le s\le r-1\) and \(r-s+1\le y\le k-s+1\), let \(j_0:=\lceil \frac{(r-s-1)y+1}{r-s}\rceil \). Note that \(j_0<y\) and \(j_0\) is the minimum integer j satisfying \((r-s)j\ge (r-s-1)y+1\). Let \(\mathcal F_r(n,k,s,y):=\mathcal A\cup \mathcal B\), where

$$\begin{aligned} \mathcal A&:=\{A\in \genfrac(){0.0pt}1{[n]}{k}:[s]\subset A,\,|A\cap [s+1,s+y]|\ge j_0\},\\ \mathcal B&:=\{B\in \genfrac(){0.0pt}1{[n]}{k}:|B\cap [s]|=s-1,\,[s+1,s+y]\subset B\}. \end{aligned}$$

Then the family \(\{A{\setminus }[s]:A\in \mathcal A\}\) is \((r-s)\)-wise intersecting due to the choice of \(j_0\) and y: if \(G_1,\ldots ,G_{r-s}\subset [s+1,s+y]\) each have size at least \(j_0\), then \(\sum _i|G_i|\ge (r-s)j_0>(r-s-1)y\), so the \(G_i\) have a common element. Thus \(\mathcal F_r(n,k,s,y)\) is a k-uniform non-trivial r-wise intersecting family. In particular,

$$\begin{aligned} \mathcal F_r(n,k,1,r)=\textrm{BD}_r(n)\cap \genfrac(){0.0pt}1{[n]}{k}\quad (j_0=r-1), \end{aligned}$$

and \(\mathcal F_r(n,k,r-1,k-r+2)\) (\(j_0=1\)) is the so-called Hilton–Milner family. Note that different parameters may give the same family, e.g., \(\mathcal F_r(n,k,1,r)=\mathcal F_r(n,k,s,r-s+1)\) for all \(1\le s\le r-1\).

Conjecture 3

(O’Neill and Verstraëte [17]) Let \(k>r\ge 2\) and \(n\ge kr/(r-1)\). Then the unique extremal non-trivial r-wise intersecting families in \(\left( {\begin{array}{c}[n]\\ k\end{array}}\right) \) are \(\mathcal F_r(n,k,1,r)\) and \(\mathcal F_r(n,k,r-1,k-r+2)\) (up to isomorphism).

O’Neill and Verstraëte proved the conjecture if \(n\ge r+e(k^22^k)^{2^k}(k-r)\). This bound can be reduced to \(n>(1+\frac{r}{2})(k-r+2)\) using the Ahlswede–Khachatrian theorem for non-trivial 2-wise t-intersecting families in [3], together with the fact that an r-wise intersecting family is 2-wise \((r-1)\)-intersecting; see [4] for more details. The case \(r=2\) of the conjecture is the Hilton–Milner theorem [14]. The case \(r=3\) is studied in [23], where a k-uniform version of Theorem 3 is obtained, from which it follows that the conjecture fails if n and k are sufficiently large and roughly \(\frac{1}{2}<\frac{k}{n}\le \frac{2}{3}\). In this case \(\mathcal F_3(n,k,1,k-1)\) or \(\mathcal F_3(n,k,1,k)\) has size larger than \(\mathcal F_3(n,k,1,3)\) and \(\mathcal F_3(n,k,2,k-1)\) (see Theorem 4 in [23]). Balogh and Linz [4] verified that \(\mathcal F_3(11,7,1,7)\) is indeed a counterexample to the conjecture. They constructed the families \(\mathcal F_r(n,k,1,(r-1)i+1)\) (\(i\le \lfloor \frac{k-1}{r-1}\rfloor \)), and suggested that the largest of these families could be a counterexample if \(n\approx kr/(r-1)\). Here we show that Conjecture 3 fails whenever \(r\ge 3\), n and k are sufficiently large, and k/n lies roughly in \((\frac{r-2}{r-1},\frac{r-1}{r})\). More precisely we prove the following. Let \(M_r(n,k)\) denote the maximum size of a non-trivial r-wise intersecting family in \(\left( {\begin{array}{c}[n]\\ k\end{array}}\right) \).

Theorem 12

Let \(r\ge 3\). For every \(\epsilon >0\) and every \(\delta >0\), there exists \(n_0\in \mathbb N\) such that for all integers n and k with \(n>n_0\) and \(\frac{r-2}{r-1}+\epsilon<\frac{k}{n}<\frac{r-1}{r}-\epsilon \), we have

$$\begin{aligned} (1-\delta )\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \le M_r(n,k)<\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) . \end{aligned}$$

Before proving this result, let us check that it gives counterexamples to the conjecture. To this end, suppose that \(\frac{k}{n}=p\), and n and k are sufficiently large. Then we have

$$\begin{aligned}&|\mathcal F_r(n,k,1,r)|=(r+1)\left( {\begin{array}{c}n-r-1\\ k-r\end{array}}\right) +\left( {\begin{array}{c}n-r-1\\ k-r-1\end{array}}\right) \approx \left( (r+1)p^{r}q+p^{r+1}\right) \left( {\begin{array}{c}n\\ k\end{array}}\right) ,\\&|\mathcal F_r(n,k,r-1,k-r+2)|=\left( {\begin{array}{c}n-r+1\\ k-r+1\end{array}}\right) -\left( {\begin{array}{c}n-k-1\\ k-r+1\end{array}}\right) +r-1 \approx p^{r-1} \left( {\begin{array}{c}n\\ k\end{array}}\right) . \end{aligned}$$

If \(p>\frac{1}{r}\) then \((r+1)p^rq+p^{r+1}>p^{r-1}\). Indeed if \(k>r\) and \(n\le r(k-r+2)\), then \(|\mathcal F_r(n,k,1,r)|>|\mathcal F_r(n,k,r-1,k-r+2)|\). If moreover \(p=\frac{k}{n}\le \frac{r-1}{r}\), then

$$\begin{aligned} \lim _{n,k\rightarrow \infty } |\mathcal F_r(n,k,1,r)|/\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) =(r+1)p^{r-1}q+p^r\le \frac{8}{9}, \end{aligned}$$

where equality holds if and only if \(r=3\) and \(p=\frac{2}{3}\). This implies that under the assumptions in Theorem 12 we have \(\max \{|\mathcal F_r(n,k,1,r)|,\,|\mathcal F_r(n,k,r-1,k-r+2)|\}<(1-\delta )\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \) for \(0<\delta <\frac{1}{9}\).
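For completeness, the bound \(\frac{8}{9}\) can be obtained as follows. Writing \(h_r(p):=(r+1)p^{r-1}q+p^r=(r+1)p^{r-1}-rp^r\), we have

$$\begin{aligned} h_r'(p)=p^{r-2}\left( (r^2-1)-r^2p\right) >0 \quad \text { for } 0<p<\tfrac{r^2-1}{r^2}, \end{aligned}$$

so \(h_r\) is increasing on \((0,\frac{r-1}{r}]\) and \(h_r(p)\le h_r(\frac{r-1}{r})=2\left( \frac{r-1}{r}\right) ^{r-1}\), which is decreasing in r and equals \(\frac{8}{9}\) exactly when \(r=3\) (and then \(p=\frac{2}{3}\)).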

Proof of Theorem 12

The upper bound \(M_r(n,k)<\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \) was proved by Frankl in [9].

We prove the lower bound. Let r be fixed, and let \(\epsilon >0\) and \(\delta >0\) be given. Let \(\frac{r-2}{r-1}<p<\frac{r-1}{r}\), and \(k=pn\). Let \(c>0\) be a constant depending only on r and \(\delta \) (specified later), and let

$$\begin{aligned} J_{n,p}=\{j\in \mathbb N:|j-p^2n|\le c\sqrt{n}\}. \end{aligned}$$

For \(j\in J_{n,p}\) let

$$\begin{aligned} \theta _j(n,p)=\frac{\left( {\begin{array}{c}pn\\ j\end{array}}\right) \left( {\begin{array}{c}n-pn\\ pn-j\end{array}}\right) }{\left( {\begin{array}{c}n\\ pn\end{array}}\right) } =\frac{\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k\\ k-j\end{array}}\right) }{\left( {\begin{array}{c}n\\ k\end{array}}\right) }. \end{aligned}$$

Let \(\mathop {\textrm{erf}}(z)\) denote the error function, that is,

$$\begin{aligned} \mathop {\textrm{erf}}(z)=\frac{2}{\sqrt{\pi }}\int _0^z\exp (-x^2)\,dx. \end{aligned}$$

Then, by Lemma 5 of [23], we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{j\in J_{n,p}}\theta _j(n,p)=\mathop {\textrm{erf}}\left( \frac{3c}{\sqrt{2}p}\right) . \end{aligned}$$

The RHS is a function decreasing in p (for fixed c), and we have

$$\begin{aligned} \min _{p\in [\frac{r-2}{r-1},\frac{r-1}{r}]} \mathop {\textrm{erf}}\left( \frac{3c}{\sqrt{2} p}\right) =\mathop {\textrm{erf}}\left( \frac{3rc}{\sqrt{2}(r-1)}\right) . \end{aligned}$$

The latter is increasing in c and approaches 1 as \(c\rightarrow \infty \). Thus we can choose \(c>0\) so that \(\mathop {\textrm{erf}}\left( \frac{3rc}{\sqrt{2}(r-1)}\right) >1-\frac{\delta }{3}\), and we fix such a c.

We will show that \(|\mathcal F_r(n,k,1,k)|>(1-\delta )\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \). Let \(j_0=\lceil \frac{(r-2)k+1}{r-1}\rceil \). Choose \(n_1\) so that if \(n>n_1\) then

$$\begin{aligned} \sum _{j\in J_{n,p}}\theta _j(n,p)>1-\frac{\delta }{2} \end{aligned}$$
(12)

for all p with \(\frac{r-2}{r-1}\le p\le \frac{r-1}{r}\). Next choose \(n_2\) so that if \(\frac{r-2}{r-1}+\epsilon<p<\frac{r-1}{r}-\epsilon \), \(n>n_2\), and \(k=pn\), then \(j_0<pk-c\sqrt{n}\) and \(pk+c\sqrt{n}<k-1\). Then we have \(J_{n,p}\subset [j_0,k-1]\). Finally choose \(n_3\) so that if \(n>n_3\) then \(p-\frac{c}{q\sqrt{n}}>(1-\frac{\delta }{2})p\), and let \(n_0:=\max \{n_1,n_2,n_3\}\).

We have

$$\begin{aligned} |\mathcal F_r(n,k,1,k)| \ge \sum _{j=j_0}^{k-1}\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k-1\\ k-j-1\end{array}}\right) >\sum _{j\in J_{n,p}}\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k-1\\ k-j-1\end{array}}\right) . \end{aligned}$$

Each summand in the RHS can be written as \(\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k-1\\ k-j-1\end{array}}\right) =\frac{k-j}{n-k}\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k\\ k-j\end{array}}\right) \). For \(j\in J_{n,p}\) we have \(j<p^2n+c\sqrt{n}\) and

$$\begin{aligned} \frac{k-j}{n-k}=\frac{p-\frac{j}{n}}{1-p}>\frac{1}{q}\left( p-p^2-\frac{c}{\sqrt{n}}\right) =p-\frac{c}{q\sqrt{n}} >\left( 1-\frac{\delta }{2}\right) p, \end{aligned}$$

where we used \(n>n_3\) in the last inequality. We then have

$$\begin{aligned} \left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k-1\\ k-j-1\end{array}}\right) > \left( 1-\frac{\delta }{2}\right) p\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k\\ k-j\end{array}}\right) . \end{aligned}$$

The RHS can be rewritten as \(\left( 1-\frac{\delta }{2}\right) \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \theta _j(n,p) \) because \(\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k\\ k-j\end{array}}\right) =\theta _j(n,p)\left( {\begin{array}{c}n\\ k\end{array}}\right) \) and \(p\left( {\begin{array}{c}n\\ k\end{array}}\right) =\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \). Finally we have

$$\begin{aligned} M_r(n,k)&\ge |\mathcal F_r(n,k,1,k)|\\&>\sum _{j\in J_{n,p}}\left( {\begin{array}{c}k\\ j\end{array}}\right) \left( {\begin{array}{c}n-k-1\\ k-j-1\end{array}}\right) \\&>\left( 1-\frac{\delta }{2}\right) \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \sum _{j\in J_{n,p}} \theta _j(n,p) \\&>\left( 1-\frac{\delta }{2}\right) ^2 \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \qquad \text {(by (12))}\\&>\left( 1-\delta \right) \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) , \end{aligned}$$

and this is the lower bound we needed. \(\square \)

Fact 1 suggests that the conjecture could be false even if \(\frac{k}{n}<\frac{r-2}{r-1}\). For example we have \(|\mathcal F_4(41,26,2,25)|>|\mathcal F_4(41,26,1,4)|>|\mathcal F_4(41,26,3,24)|\). Noting that \(\frac{1+\sqrt{17}}{8}\approx 0.64\), we can expect \(\mathcal F_4(n,k,2,k-1)\) to be larger than \(\mathcal F_4(n,k,1,4)\) if \(\frac{1}{2}<\frac{k}{n}<0.64\) and n and k are sufficiently large. Indeed we have

$$\begin{aligned}&|\mathcal F_4(1000,514,2,513)|/|\mathcal F_4(1000,514,1,4)|\approx 1.03254,\\&|\mathcal F_4(1000,630,2,629)|/|\mathcal F_4(1000,630,1,4)|\approx 1.0165,\\&|\mathcal F_4(1000,650,2,649)|/|\mathcal F_4(1000,650,1,4)|\approx 0.98655. \end{aligned}$$
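For readers who want to experiment with such sizes (and with Problem 2 below), here is a minimal counting sketch in Python based directly on the definition of \(\mathcal F_r(n,k,s,y)=\mathcal A\cup \mathcal B\); the function name is ours, not from the literature.

```python
from math import comb

def size_F(n, k, r, s, y):
    """|F_r(n,k,s,y)| = |A| + |B|, assuming 1 <= s <= r-1 and r-s+1 <= y <= k-s+1."""
    # j0 is the least integer j with (r-s)j >= (r-s-1)y + 1
    j0 = ((r - s - 1) * y + 1 + (r - s) - 1) // (r - s)
    # A: k-sets containing [s] and meeting [s+1, s+y] in at least j0 elements
    size_A = sum(comb(y, j) * comb(n - s - y, k - s - j)
                 for j in range(j0, min(y, k - s) + 1))
    # B: k-sets missing exactly one element of [s] and containing all of [s+1, s+y]
    size_B = s * comb(n - s - y, k - s + 1 - y)
    return size_A + size_B

# e.g. the first ratio displayed above:
# size_F(1000, 514, 4, 2, 513) / size_F(1000, 514, 4, 1, 4)
```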

Problem 2

Let \(k>r\ge 2\) and \(n\ge kr/(r-1)\), and let \(\mathcal F\subset \left( {\begin{array}{c}[n]\\ k\end{array}}\right) \) be a non-trivial r-wise intersecting family. Is it true that

$$\begin{aligned} |\mathcal F|\le \max \{|\mathcal F_r(n,k,s,y)|:1\le s\le r-1,\,r-s+1\le y\le k-s+1 \}\,? \end{aligned}$$