1 Introduction

Alan Selman was a pioneer and a leader in elucidating the structure of complexity classes. He initiated many of the most important concepts of structural complexity theory, he investigated them brilliantly, and he inspired generations of computer scientists to contribute to this endeavor.

Our objective in this paper is to show how resource-bounded dimension, which is a generalization of classical Hausdorff dimension, can extend Selman’s research program in fruitful new directions. To this end, we present three new results, one bringing the Point-to-Set Principle into complexity classes, one on dimension and p-selective sets, and one on dimension and disjoint NP pairs. The rest of this introduction motivates and explains these three results.

Hausdorff dimension, developed in 1919 [6, 16], is a scheme for assigning a dimension \({\dim }_{\mathrm {H}}(E)\) to every subset E of a given metric space. Assume for a moment that this metric space is a Euclidean space \(\mathbb {R}^{n}\). Then \({\dim }_{\mathrm {H}}(\mathbb {R}^{n}) = n\), and the Hausdorff dimension is monotone, i.e., \(E \subseteq F\) implies that \({\dim }_{\mathrm {H}}(E) \leq {\dim }_{\mathrm {H}}(F)\). For integers d = 0,…,n, subsets E of \(\mathbb {R}^{n}\) that are intuitively d-dimensional have \({\dim }_{\mathrm {H}}(E) = d\). However, every real number s ∈ [0,n] is the Hausdorff dimension of infinitely many (in fact, \(2^{|\mathbb {R}|}\) many) subsets of \(\mathbb {R}^{n}\). In general, \({\dim }_{\mathrm {H}}(E) < n\) implies that E is a Lebesgue measure 0 subset of \(\mathbb {R}^{n}\). (The converse does not hold.) Hausdorff dimension can thus be regarded as a measure of the “sizes” of Lebesgue measure 0 subsets of \(\mathbb {R}^{n}\). Hausdorff dimension has become a powerful tool for investigations in fractal geometry, probability theory, and other areas of mathematical analysis [2, 6, 35, 40].

We momentarily shift the focus of our discussion from Euclidean spaces \(\mathbb {R}^{n}\) to another metric space, the Cantor space C consisting of all decision problems, which are equivalently regarded as subsets of \(\{0,1\}^{*}\) or as infinite binary sequences. At the beginning of the present century, the first author proved a theorem characterizing Hausdorff dimension in C in terms of betting strategies called gales, which are minor but convenient generalizations of martingales. Based on this characterization, he introduced two related methods for effectivizing Hausdorff dimension, i.e., imposing computability or complexity constraints on these gales. The first of these methods [24], called resource-bounded dimension, imposes Hausdorff dimension structure on complexity classes. For example, this theory defines, for every subset X of C, a quasipolynomial-time (i.e., \(n^{\text{polylog}\,n}\)-time) dimension \({\dim }_{\text {qp}}(X)\) in such a way that \({\dim }(X\mid \text {EXP}) = {\dim }_{\text {qp}}(X \cap \text {EXP})\) is a coherent notion of the dimension of X within the complexity class EXP = TIME\((2^{\text{poly}})\). The second method [25], algorithmic dimension (also called constructive dimension or effective dimension), has to date been more widely investigated, partly because of its interactions with algorithmic randomness (i.e., Martin-Löf randomness [34]) and partly because of its applications to classical fractal geometry [29, 30]. Algorithmic dimension plays a motivating role in this paper, but resource-bounded dimension is our main topic.

Several recent results in algorithmic fractal dimensions are based on the 2017 Point-to-Set Principle introduced by the first two authors [28]. This principle is a family of theorems, the first of which says that, for any set \(E \subseteq \mathbb {R}^{n}\),

$$ {\dim}_{\mathrm{H}}(E) = \min_{A\in\mathbf{C}}\sup_{x\in E}{\dim}^{A}(x), $$
(1.1)

where \({\dim }^{A}(x)\) is the algorithmic dimension of the individual point x relative to the oracle A. This theorem completely characterizes the classical Hausdorff dimensions of sets E in terms of the relativized algorithmic dimensions of their elements x. The term “classical” here does not mean “old,” but rather refers to mathematical concepts and theorems that, like Hausdorff dimension, do not involve computability or logic in their formulations. Thus the left-hand side of (1.1) is classical, but the right-hand side, involving computability, is not. The characterization theorem (1.1) is called the Point-to-Set Principle for Hausdorff dimension, because it enables one to prove lower bounds on the Hausdorff dimensions of sets by reasoning about the relativized algorithmic dimensions of judiciously chosen individual points in those sets. The paper [28] also proved a second instance of the Point-to-Set Principle that characterizes another classical fractal dimension, the packing dimension [6], in a manner dual to (1.1). These instances of the Point-to-Set Principle have recently been used to prove several new theorems in classical fractal geometry [27, 31,32,33]. The authors also recently extended (1.1) and its dual from \(\mathbb {R}^{n}\) to arbitrary separable metric spaces and to Hausdorff and packing dimensions with very general gauge families [31].

The above instances of the Point-to-Set Principle characterize classical fractal dimensions of sets in terms of the relativized algorithmic dimensions of the individual elements of those sets. In Section 4 below, we prove more general instances of the Point-to-Set Principle that characterize the classical (or perhaps somewhat effective) dimensions of sets in C in terms of the relativized, more effective dimensions of the individual elements of those sets. One example of this says that, for every subset X of C,

$$ {\dim}_{\mathrm{H}}(X)= \min_{B\in\mathbf{C}}\sup_{A\in X}{\dim}_{\text{p}}^{B}(A). $$
(1.2)

That is, we can replace the algorithmic dimension on the right-hand side of (1.1) by the more effective polynomial-time dimension. Another example characterizes the quasipolynomial-time dimension of each subset X of C by

$$ {\dim}_{\text{qp}}(X)= \min_{g\in \text{qp}}\sup_{A\in X}{\dim}_{\text{p}}^{g}(A), $$
(1.3)

i.e., in terms of the more effective polynomial-time dimensions of the individual elements A of X. The instances (1.2) and (1.3) are special cases of Theorem 4.2 in Section 4.

In 1979, Alan Selman adapted Jockusch’s computability-theoretic notion of semirecursive sets [19], creating the complexity-theoretic notion of p-selective sets [37]. Briefly, a decision problem \(A \subseteq \{0,1\}^{*}\) is p-selective, and we write A ∈p-SEL, if there is a polynomial-time algorithm that, given an ordered pair (x,y) of strings \(x,y\in\{0,1\}^{*}\), outputs a string z ∈{x,y} such that \(\{x,y\}\cap A\neq\emptyset\) implies \(z\in A\). (We note that the terms “p-selective” and “P-selective” have both been widely used for this notion. In fact, both have been used in papers with Selman as an author.) Every set \(A\in\text{P}\) is clearly p-selective, but there are uncountably many p-selective sets, so the converse does not hold. There is an extensive literature on p-selective sets and the related notions that they have spawned. We especially refer the reader to the books by Hemaspaandra and Torenvliet [17] and Zimand [42] and the references therein.
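
As a concrete illustration (ours, not from the paper), the standard left cuts of real numbers show that p-selectivity reaches far beyond P: the selector below never consults r, so it works even when r is uncomputable and the left cut is undecidable. The Python sketch is purely illustrative, and the names `dyadic` and `selector` are ad hoc.

```python
from fractions import Fraction

def dyadic(x: str) -> Fraction:
    """The dyadic rational 0.x named by the binary string x."""
    return sum(Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(x))

def selector(x: str, y: str) -> str:
    """A polynomial-time selector for every left cut L(r) = {w : 0.w < r}:
    if either input lies in L(r), then so does the dyadically smaller one."""
    return x if dyadic(x) <= dyadic(y) else y

# The selector does not depend on r, so every left cut is p-selective,
# even though L(r) is undecidable whenever r is not a computable real.
print(selector("01", "1"))   # -> "01"
```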

Selman [37] proved that no p-selective set can be \({\leq ^{P}_{m}}\)-hard for EXP and that, if P≠NP, then no p-selective set can be \({\leq ^{P}_{m}}\)-hard for NP. In order to extend the class of provably intractable problems, the first author [21] defined a language H to be weakly \({\leq ^{P}_{m}}\)-hard for EXP if μ(Pm(H)∣EXP)≠ 0, i.e., if the set Pm(H) of languages A such that \(A{\leq ^{P}_{m}} H\) does not have measure 0 in EXP in the sense of resource-bounded measure [22, 23, 42]. Buhrman and Longpré [3] and, independently, Wang [41] proved that μ(Pm(p-SEL)∣EXP) = 0, where for a class \(X\subseteq \mathbf {C}\), \(P_{m}(X)=\bigcup _{H\in X}(P_{m}(H))\). It follows that no p-selective set can be weakly \({\leq ^{P}_{m}}\)-hard for EXP. (They in fact proved the stronger result that this also holds for \(\leq ^{P}_{tt}\)-reductions.) See [42] for a host of related results.

After the development of resource-bounded dimension [24], Ambos-Spies, Merkle, Reimann, and Stephan [1] defined a language H to be partially \({\leq ^{P}_{m}}\)-hard for EXP if \({\dim }(P_{m}(H)\mid \text {EXP}) > 0\). It is clear that weak hardness implies partial hardness, and it was shown in [1] that the converse does not hold. In Section 5 we use Theorem 4.2 (i.e., the Point-to-Set Principle) to prove that \({\dim }(P_{m}(\text {qp}\)-SEL)∣EXP) = 0, where the set qp-SEL of qp-selective sets is the obvious quasipolynomial-time analog of p-SEL. This implies that no qp-selective set can be partially \({\leq ^{P}_{m}}\)-hard for EXP and that, if \({\dim }(\text {NP}\mid \text {EXP}) > 0\), then no qp-selective set can be \({\leq ^{P}_{m}}\)-hard for NP.

In 1984, Even, Selman, and Yacobi [5] defined a promise problem to be an ordered pair (A,B) of disjoint languages. A solution of a promise problem (A,B) is an algorithm or other device that decides any separator of (A,B), i.e., any language S such that \(A \subseteq S\) and \(S\cap B=\emptyset\). Intuitively, we are promised that every input will be an element of \(A\cup B\), so we are only required to correctly distinguish inputs in A from inputs in B.

A disjoint NP pair is a promise problem (A,B) with A,B ∈NP. Disjoint NP pairs were first investigated by Selman and collaborators to better understand public key cryptosystems [5, 13, 18, 38]. Razborov [36] later established a deep connection between disjoint NP pairs and propositional proof systems, associating with each propositional proof system a canonical disjoint NP pair. Glaßer, Selman, Sengupta, and Zhang [8,9,10,11] investigated this connection further, and it is now known that the degree structure of propositional proof systems under the natural notion of proof simulation is identical to the degree structure of disjoint NP pairs under reducibility of separators. See [12] for a survey of this and related results and [4] for more recent work.

In 2012, Fortnow, the first author, and the third author [7] investigated strong hypotheses involving the intractability of disjoint NP pairs. Among other things, this paper proved that

$$ \mu(\text{disjNP}\mid\text{disjEXP}) \neq 0 \implies \mu(\text{NP}\mid\text{EXP}) \neq 0 $$
(1.4)

and that μ(NP∣EXP)≠ 0 implies the existence, for every k, of disjoint NP pairs that cannot be separated in \(2^{n^{k}}\) time. (Here disjNP is the set of disjoint NP pairs, and disjEXP is the set of disjoint EXP pairs, the latter endowed with a natural measure.)

In Section 6, we prove a dimension-theoretic analog of (1.4), namely that

$$ {\dim}(\text{disjNP}\mid\text{disjEXP}) = 1 \implies {\dim}(\text{NP}\mid\text{EXP}) > 0. $$
(1.5)

Our proof of (1.5) is somewhat simplified by the use of Theorem 4.2 (i.e., the Point-to-Set Principle).

2 Resource Bounds

We work in the Cantor space C consisting of all decision problems (i.e., languages) \(A\subseteq \{0,1\}^{*}\). We identify each decision problem A with its characteristic sequence

$$\llbracket s_{0}\in A\rrbracket\ \llbracket s_{1}\in A\rrbracket\ \llbracket s_{2}\in A\rrbracket\ldots,$$

where \(s_{0},s_{1},s_{2},\ldots\) is the standard enumeration of \(\{0,1\}^{*}\) and

$$\llbracket\varphi\rrbracket=\textbf{if}\ \varphi\ \textbf{then}\ 1\ \textbf{else}\ 0$$

is the Boolean value of a statement φ. We thus regard C as either the power set \(\mathcal {P}(\{0,1\}^{*})\) of \(\{0,1\}^{*}\) or as the set \(\{0,1\}^{\omega}\) of all infinite binary sequences, whichever is most convenient in a given context.
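
For concreteness, the following sketch (ours; the helper names are ad hoc, and the standard enumeration is taken to be length-lexicographic) computes a finite prefix of the characteristic sequence of a language given by a membership predicate.

```python
from itertools import count, product

def standard_enumeration():
    """Yield s_0 = lambda, s_1 = 0, s_2 = 1, s_3 = 00, ... in
    length-lexicographic order."""
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def characteristic_prefix(in_A, n: int) -> str:
    """The first n bits [[s_0 in A]] [[s_1 in A]] ... of the characteristic
    sequence of the language decided by the predicate in_A."""
    gen = standard_enumeration()
    return "".join("1" if in_A(next(gen)) else "0" for _ in range(n))

# Example: A = {w : |w| is even}, a language in P.
print(characteristic_prefix(lambda w: len(w) % 2 == 0, 8))  # -> "10011110"
```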

A resource bound in this paper is any one of several classes of functions from \(\{0,1\}^{*}\) to \(\{0,1\}^{*}\) that we now specify.

The largest resource bound is the set

$$\text{all}=\big\{f\mid f:\{0,1\}^{*}\to\{0,1\}^{*}\big\}.$$

We also use the resource bound

$$\text{comp}=\{f\in\text{all}\mid f\ \text{is computable}\}.$$

As in [20, 22, 24], we define a hierarchy G0,G1,G2,… of classes of growth rates \(f:\mathbb {N}\to \mathbb {N}\) by the following recursion. (All logarithms in this paper are base-2.)

$$ \begin{array}{@{}rcl@{}} G_{0}&=&\{f\mid (\exists k)(\forall^{\infty} n)f(n)\leq kn\}\\ G_{i+1}&=&2^{G_{i}(\log n)}=\left\{f \middle| (\exists g\in G_{i})(\forall^{\infty} n)f(n)\leq 2^{g(\log n)}\right\}. \end{array} $$

Note that \(G_{0}\) is the class of O(n) growth rates and that \(G_{1}\) is the class of polynomially bounded growth rates. For each \(i\in \mathbb {N}\), define a canonical growth rate \(\hat {g}_{i}\in G_{i}\) by \(\hat {g}_{0}(n)=2n\) and \(\hat {g}_{i+1}(n)=2^{\hat {g}_{i}(\log n)}\). It is easy to verify that each \(G_{i}\) is closed under composition, that each \(f\in G_{i}\) is \(o(\hat {g}_{i+1})\), and that each \(\hat {g}_{i}\) is \(o(2^{n})\). Thus all growth rates in the \(G_{i}\)-hierarchy are subexponential.
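
The following small sketch (ours, for intuition only) evaluates the canonical growth rates numerically and makes the jump from polynomial (i = 1) to quasipolynomial (i = 2) growth visible.

```python
import math

def g_hat(i: int, n: float) -> float:
    """Canonical growth rate: g_hat_0(n) = 2n and
    g_hat_{i+1}(n) = 2**(g_hat_i(log n)), with base-2 logarithms."""
    if i == 0:
        return 2 * n
    return 2 ** g_hat(i - 1, math.log2(n))

n = 2 ** 10
print(g_hat(1, n))   # 2**(2 log n) = n**2                    -> 1048576.0
print(g_hat(2, n))   # 2**((log n)**2), quasipolynomial in n  -> 2**100 ~ 1.27e30
```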

Within the resource bound comp, we use the resource bounds

$$\text{p}_{i}=\{f\in\text{all}\mid f\ \text{is computable in}\ G_{i}\ \text{time}\}\quad(i\geq 1)$$

and

$$\text{p}_{i}\text{space}=\{f\in\text{all}\mid f\ \text{is computable in}\ G_{i}\ \text{space}\}\quad(i\geq 1).$$

(The length of the output is included as part of the space used in computing f.) We write p for the polynomial-time resource bound p1 and qp for the quasipolynomial-time resource bound p2. Similarly the notations pspace and qpspace denote the space resource bounds p1space and p2space, respectively.

In this paper, a resource bound Γ or Δ is one of the classes all, comp, \(\text{p}_{i}\) (i ≥ 1), \(\text{p}_{i}\text{space}\) (i ≥ 1) defined above. We will also use relativizations \({\varDelta}^{A}\) or \({\varDelta}^{g}\) of a resource bound Δ to oracles \(A\subseteq \{0,1\}^{*}\) or function oracles \(g:\{0,1\}^{*}\to\{0,1\}^{*}\).

A constructor is a function \(\delta:\{0,1\}^{*}\to\{0,1\}^{*}\) such that δ(w) is a proper extension of w (i.e., w is a proper prefix of δ(w)) for all \(w\in\{0,1\}^{*}\). The result of a constructor δ is the unique sequence R(δ) ∈C such that \(\delta^{n}(\lambda)\) is a prefix of R(δ) for all \(n\in \mathbb {N}\). (Here \(\delta^{n}(\lambda)\) is the n-fold application of δ to the empty string λ.)
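
A minimal sketch (ours; the toy constructor is an arbitrary example, not one used in the paper) of how iterating a constructor produces longer and longer prefixes of its result:

```python
import math

def run_constructor(delta, n: int) -> str:
    """Return delta^n(lambda), the n-fold application of the constructor
    delta to the empty string; these strings are prefixes of R(delta)."""
    w = ""
    for _ in range(n):
        v = delta(w)
        assert v.startswith(w) and len(v) > len(w), "delta must properly extend w"
        w = v
    return w

# Toy constructor: extend w by one bit, which is 1 iff |w| is a perfect square.
# It runs in polynomial time, so its result R(delta) lies in E = R(p) (see fact 3 below).
delta = lambda w: w + ("1" if math.isqrt(len(w)) ** 2 == len(w) else "0")
print(run_constructor(delta, 10))   # -> "1100100001"
```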

The result class of a resource bound Δ is the class R(Δ) consisting of all languages R(δ) such that \(\delta\in{\varDelta}\) is a constructor. The following facts are easily verified.

  1. 1.

    R(all) = C.

  2. 2.

    R(comp) = DEC, the set of all decidable languages.

  3. 3.

    For all i ≥ 1,

    $$R(\text{p}_{i})=\text{E}_{i}=\text{TIME}(2^{G_{i-1}}).$$

    In particular,

    $$R(\text{p})=\text{E}=\text{TIME}(2^{\text{linear}})$$

    and

    $$R(\text{qp})=\text{EXP}=\text{TIME}(2^{\text{poly}}).$$
  4. 4.

    For all i ≥ 1,

    $$R(\text{p}_{i}\text{space})=\text{E}_{i}\text{SPACE}=\text{SPACE}(2^{G_{i-1}}).$$

    In particular,

    $$R(\text{p}\text{space})=\text{ESPACE}=\text{SPACE}(2^{\text{linear}})$$

    and

    $$R(\text{qp}\text{space})=\text{EXPSPACE}=\text{SPACE}(2^{\text{poly}}).$$

Many of our functions will be of the form \(f:D\to [0,\infty )\), where D is a discrete domain such as \(\{0,1\}^{*}\) or \(\mathbb {N}\times \{0,1\}^{*}\) and \([0,\infty )\) is the set of nonnegative real numbers. If Δ is a resource bound, then such a function f is Δ-computable if there is a rational-valued function \(\hat {f}:D\times \mathbb {N}\to \mathbb {Q}\cap [0,\infty )\) such that \(\hat {f}\in {\varDelta }\) (with r coded in unary and \(\hat {f}(x,r)\) coded in binary) and \(\lvert \hat {f}(x,r)-f(x)\rvert \leq 2^{-r}\) for all \(x\in D\) and \(r\in \mathbb {N}\).
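
A small sketch (ours) of the approximation interface that Δ-computability requires: the approximator takes the precision parameter r and must return a rational within \(2^{-r}\) of the true value. Here f(x) = sqrt(|x| + 1) and the bisection routine `f_hat` are illustrative choices, not anything used in the paper.

```python
from fractions import Fraction

def f_hat(x: str, r: int) -> Fraction:
    """A rational 2**(-r)-approximation of f(x) = sqrt(|x| + 1), computed by
    bisection.  The precision r would be supplied in unary and the value
    returned in binary, as in the definition above."""
    target = len(x) + 1
    lo, hi = Fraction(0), Fraction(target + 1)
    while hi - lo > Fraction(1, 2 ** r):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid * mid <= target else (lo, mid)
    return lo

print(float(f_hat("0101", 20)))   # ~ sqrt(5) = 2.2360679...
```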

We say that f is lower semicomputable if there is a computable function \(\hat {f}:D\times \mathbb {N}\to \mathbb {Q}\cap [0,\infty )\) such that the following two conditions hold for all xD.

  1. (i)

    For all \(t\in \mathbb {N}\), \(\hat {f}(x,t)\leq \hat {f}(x,t+1)\leq f(x)\).

  2. (ii)

    \(\displaystyle \lim _{t\to \infty }\hat {f}(x,t)=f(x)\).

3 Resource-Bounded Dimensions

This section briefly reviews the elements of resource-bounded dimension developed in [24].

Definition 1

  1. 1.

    For \(s\in [0,\infty )\), an s-gale is a function \(d:\{0,1\}^{*}\to [0,\infty )\) such that, for all \(w\in\{0,1\}^{*}\),

    $$d(w)=2^{-s}[d(w0)+d(w1)].$$
  2. 2.

    A martingale is a 1-gale.

Observation 3.1

[25] A function \(d:\{0,1\}^{*}\to [0,\infty )\) is an s-gale if and only if the function \(d^{\prime }:\{0,1\}^{*}\to [0,\infty )\) defined by \(d^{\prime }(w)=2^{(1-s)|w|}d(w)\) is a martingale.
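
The correspondence in Observation 3.1 is easy to check mechanically. The sketch below (ours; the particular martingale is an arbitrary example) verifies both averaging conditions on all short strings.

```python
import math
from itertools import product

s = 0.5

def martingale(w: str) -> float:
    """A simple martingale: repeatedly bet all capital on the next bit being 0."""
    return 0.0 if "1" in w else float(2 ** len(w))

def s_gale(w: str) -> float:
    """The s-gale matched to the martingale above by Observation 3.1:
    d(w) = 2**((s - 1)|w|) * d'(w), where d' is the martingale."""
    return 2.0 ** ((s - 1) * len(w)) * martingale(w)

strings = ["".join(b) for n in range(7) for b in product("01", repeat=n)]
assert all(math.isclose(martingale(w), (martingale(w + "0") + martingale(w + "1")) / 2)
           for w in strings)
assert all(math.isclose(s_gale(w), 2 ** (-s) * (s_gale(w + "0") + s_gale(w + "1")))
           for w in strings)
print("martingale and s-gale conditions verified on all strings of length <= 6")
```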

An s-gale d succeeds on a language \(A\subseteq \{0,1\}^{*}\), and we write \(A\in S^{\infty }[d]\), if

$$\limsup_{w\to A} d(w)=\infty,$$

where the limit superior is taken over successively longer prefixes of A.

Notation 1

For \(X\subseteq \mathbf {C}\), let \(\mathcal {G}(X)\) be the set of all \(s\in [0,\infty )\) such that there is an s-gale d for which \(X\subseteq S^{\infty }[d]\).

Readers unfamiliar with fractal geometry can safely use the following characterization as the definition of the Hausdorff dimension \({\dim }_{\mathrm {H}}(X)\) of each set \(X\subseteq \mathbf {C}\).

Theorem 3.2 (Gale Characterization of Hausdorff Dimension [24])

For all \(X\subseteq \mathbf {C}\),

$${\dim}_{\mathrm{H}}(X)=\inf\mathcal{G}(X).$$

Intuitively, an s-gale is a strategy for betting on the successive bits of languages AC. The payoffs of these bets are fair if s = 1 and unfair if s < 1. Intuitively and roughly, Theorem 3.2 says that the Hausdorff dimension of X is the most hostile betting environment in which a gambler can succeed on every language AX.

Motivated by the above characterization of classical Hausdorff dimension, the first author defined resource-bounded dimensions and algorithmic dimensions as follows.

Notation 2

[24, 25] Let Δ be a resource bound, and let \(X\subseteq \mathbf {C}\).

  1. 1.

    \(\mathcal {G}_{{\varDelta }}(X)\) is the set of all \(s\in [0,\infty )\) such that there is a Δ-computable s-gale d for which \(X\subseteq S^{\infty }[d]\).

  2. 2.

    \(\mathcal {G}_{\text {alg}}(X)\) is the set of all \(s\in [0,\infty )\) such that there is a lower semicomputable s-gale d for which \(X\subseteq S^{\infty }[d]\).

Definition 2

[24, 25] Let Δ be a resource bound, let \(X\subseteq \mathbf {C}\), and let AC.

  1. 1.

    The Δ-dimension of X is

    $${\dim}_{{\varDelta}}(X)=\inf\mathcal{G}_{{\varDelta}}(X).$$
  2. 2.

    The Δ-dimension of X in R(Δ) is

    $${\dim}(X\mid R({\varDelta}))={\dim}_{{\varDelta}}(X\cap R({\varDelta})).$$
  3. 3.

    The Δ-dimension of A is

    $${\dim}_{{\varDelta}}(A)={\dim}_{{\varDelta}}(\{A\}).$$
  4. 4.

    The algorithmic dimension of X is

    $${\dim}_{\text{alg}}(X)=\inf\mathcal{G}_{\text{alg}}(X).$$
  5. 5.

    The algorithmic dimension of A is

    $${\dim}(A)={\dim}_{\text{alg}}(\{A\}).$$

(Algorithmic dimension has also been called constructive dimension and effective dimension.)

The papers [24, 25] showed that the above-defined dimensions are coherent, well-behaved “versions” of Hausdorff dimension. All of the defined dimensions lie in [0,1], and each of them attains every real value in [0,1]. The dimensions 1., 2., and 4. have the crucial dimension properties that they are monotone in X and that they are stable in the sense that the dimension of \(X\cup Y\) is the maximum of the dimensions of X and Y. Classical Hausdorff dimension (i.e., \({\dim }_{\mathrm {H}}={\dim }_{\text {all}}\)) is also countably stable, meaning that

$$ {\dim}_{\mathrm{H}}\left( \bigcup_{i\in I} X_{i}\right)=\sup_{i\in I}{\dim}_{\mathrm{H}}(X_{i}) $$
(3.1)

holds for all countable index sets I. The dimensions 1. and 2. are not countably stable for Δ smaller than all, but they are Δ-countably stable in that (3.1) holds if the countable union is “Δ-effective.” The algorithmic dimension 4. is absolutely stable in the sense that (3.1) holds, regardless of whether I is countable. In particular, this implies that, for all \(X\subseteq \mathbf {C}\),

$$ {\dim}_{\text{alg}}(X)=\sup_{A\in X}{\dim}(A). $$
(3.2)

As a consequence of (3.2), investigations of algorithmic dimension focus almost entirely on the dimensions \({\dim }(A)\) of individual languages (or, in other contexts, individual sequences or individual points in a metric space) A.

Turning to complexity classes, i.e., the cases where Δ is some resource bound \(\text{p}_{i}\) or \(\text{p}_{i}\text{space}\), the dimension 2. is non-degenerate in the sense that \({\dim }(R({\varDelta })\mid R({\varDelta }))=1\). If \(X\subseteq R({\varDelta })\) is finite or even “Δ-countable,” then \({\dim }(X\mid R({\varDelta }))=0\). This implies, for example, that, for each fixed \(k\in \mathbb {N}\),

$$ {\dim}(\text{TIME}(2^{kn})\mid \text{E})={\dim}(\text{TIME}(2^{n^{k}})\mid \text{EXP})=0. $$
(3.3)

Finally, we mention interactions of dimensions with randomness. A language A ∈C is Δ-random if no Δ-computable martingale succeeds on it [22]. A language A ∈C is algorithmically random (or Martin-Löf random [34]) if no lower semicomputable martingale succeeds on it. Since, by Observation 3.1, an s-gale with s ≤ 1 succeeds only on languages on which its associated martingale succeeds, \({\dim }_{{\varDelta }}(A)=1\) holds for every Δ-random language and \({\dim }(A)=1\) holds for every algorithmically random language. In neither case does the converse hold.

4 The Point-to-Set Principle

As noted in the introduction, previous instances of the Point-to-Set Principle have characterized classical fractal dimensions of sets in terms of the relativized algorithmic dimensions of the elements of these sets. Here we make the Point-to-Set Principle more widely applicable by proving instances of it in which “classical” and “algorithmic” are replaced by resource bounds Δ and Γ, respectively, with Γ smaller (“more effective”) than Δ.

To this end, we partially order our resource bounds by

$$\text{p}_{i}<\text{p}_{i+1}<\text{comp},$$
$$\text{p}_{i}\text{space}<\text{p}_{i+1}\text{space}<\text{comp},$$

and

$$\text{p}_{i}\leq\text{p}_{i}\text{space}$$

for all i ≥ 1 and

$$\text{comp}<\text{all}.$$

Aside from reflecting current knowledge about the inclusions among these classes, this ordering has the crucial property that, if Γ and Δ are resource bounds with Γ < Δ, then Δ parametrizes Γ in the sense that there is a function \(f\in{\varDelta}\) such that

$${\varGamma}=\{f_{k}\mid k\in\mathbb{N}\},$$

where each \(f_{k}:\{0,1\}^{*}\to \{0,1\}^{*}\) is the kth slice of f, defined by \(f_{k}(x)=f(0^{k}1x)\) for all \(x\in\{0,1\}^{*}\). Moreover, this parametrization relativizes in the sense that, for each function oracle \(g:\{0,1\}^{*}\to\{0,1\}^{*}\), there is a function \(f^{g}\in{\varDelta}^{g}\) such that

$${\varGamma}^{g}=\{{f_{k}^{g}}\mid k\in\mathbb{N}\}.$$
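
A minimal sketch (ours; `f` is a toy parametrizing function, not one of the universal functions whose existence is asserted here) of the slicing mechanism \(f_{k}(x)=f(0^{k}1x)\):

```python
def slice_of(f, k: int):
    """The k-th slice f_k of f, defined by f_k(x) = f(0^k 1 x)."""
    return lambda x: f("0" * k + "1" + x)

def f(u: str) -> str:
    """Toy parametrizing function: on input 0^k 1 x, append k ones to x,
    so its k-th slice is the map x -> x + "1"*k."""
    k = u.index("1")          # length of the leading block of zeros
    return u[k + 1:] + "1" * k

f_2 = slice_of(f, 2)
print(f_2("0101"))   # -> "010111"
```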

Theorem 4.1

If Γ and Δ are resource bounds with Γ < Δ, then for each function oracle \(g:\{0,1\}^{*}\to\{0,1\}^{*}\), there is a \({\varDelta}^{g}\)-computable function \(d^{g}\) such that \(\{{d_{k}^{g}}\mid k\in \mathbb {N}\}\) is the set of all martingales that are \({\varGamma}^{g}\)-computable and satisfy \({d_{k}^{g}}(\lambda )\leq 1\).

Proof

This is implicit in the proofs of the time and space hierarchy theorems [15, 39] (minus the “disagreement” step of the diagonalizations), together with the well-known fact that these proofs relativize. □

The following theorem is the main result of this section.

Theorem 4.2 (Point-to-Set Principle for Resource-Bounded Dimensions)

If Γ and Δ are resource bounds with Γ < Δ, then, for all \(X\subseteq \mathbf {C}\),

$$ {\dim}_{{\varDelta}}(X)=\min_{g\in{\varDelta}}\sup_{A\in X}{\dim}_{{\varGamma}}^{g}(A). $$
(4.1)

Theorem 4.2 follows immediately from the following two lemmas, which we prove separately.

Lemma 4.3

If Γ, Δ, and X are as in Theorem 4.2 and gΔ, then

$$ {\dim}_{{\varDelta}}(X)\leq\sup_{A\in X}{\dim}_{{\varGamma}}^{g}(A). $$
(4.2)

Lemma 4.4

If Γ, Δ, and X are as in Theorem 4.2, then there exists gΔ such that, for all AX,

$$ {\dim}_{{\varGamma}}^{g}(A)\leq{\dim}_{{\varDelta}}(X). $$
(4.3)

Proof of Lemma 4.3

Let Γ, Δ, X, and g be as given, and let \(s\in \mathbb {Q}\) satisfy

$$ s>\sup_{A\in X}{\dim}_{{\varGamma}}^{g}(A). $$
(4.4)

It suffices to show that

$$ {\dim}_{{\varDelta}}(X)\leq s. $$
(4.5)

Since Γ < Δ, Theorem 4.1 tells us that there is a \({\varDelta}^{g}\)-computable function \(d^{g}:\{0,1\}^{*}\to [0,\infty )\) such that the set \(\{{d_{k}^{g}}\mid k\in \mathbb {N}\}\) of all slices of \(d^{g}\) is the set of all martingales that are \({\varGamma}^{g}\)-computable and satisfy \({d_{k}^{g}}(\lambda )\leq 1\). In fact, since \(g\in{\varDelta}\), this function \(d^{g}\) is Δ-computable. Define the function \(d^{g,s}:\{0,1\}^{*}\to [0,\infty )\) so that

$$d^{g,s}(0^{k}1x)=2^{(s-1)|x|}d^{g}(0^{k}1x)$$

holds for all \(k\in \mathbb {N}\) and \(x\in\{0,1\}^{*}\). Then \(d^{g,s}\) is Δ-computable, and Observation 3.1 tells us that \(\{d_{k}^{g,s}\mid k\in \mathbb {N}\}\) is the set of all \({\varGamma}^{g}\)-computable s-gales that satisfy \(d_{k}^{g,s}(\lambda )\leq 1\). Define \(d:\{0,1\}^{*}\to [0,\infty )\) by

$$ d=\sum\limits_{k=0}^{\infty} 2^{-k}d_{k}^{g,s}. $$
(4.6)

Then d is a Δ-computable s-gale, so to confirm (4.5) it suffices to show that

$$ X\subseteq S^{\infty}[d]. $$
(4.7)

For this, let \(A\in X\). Then, by (4.4), there is a \({\varGamma}^{g}\)-computable s-gale \(\tilde {d}\), which we may assume satisfies \(\tilde{d}(\lambda)\leq 1\) (by scaling), such that \(A\in S^{\infty }[\tilde {d}]\). Then there exists \(k\in \mathbb {N}\) such that \(d_{k}^{g,s}=\tilde {d}\), whence \(A\in S^{\infty }[d_{k}^{g,s}]\). But then (4.6) tells us that

$$\limsup_{w\to A}d(w)\geq 2^{-k}\limsup_{w\to A} d_{k}^{g,s}(w)=\infty,$$

whence (4.7) holds. □

Proof of Lemma 4.4

Let Γ, Δ, and X be as given, and let \(s\in \mathbb {Q}\) satisfy

$$ s>{\dim}_{{\varDelta}}(X). $$
(4.8)

It suffices to exhibit \(g\in{\varDelta}\) such that, for all \(A\in X\),

$$ {\dim}_{{\varGamma}}^{g}(A)\leq s. $$
(4.9)

By (4.8), there is a Δ-computable s-gale d such that

$$ X\subseteq S^{\infty}[d]. $$
(4.10)

Observe that d can be chosen to be linearly bounded, that is, with |d(w)| = O(|w|). Let \(g=\hat {d}\in {\varDelta }\) testify to the Δ-computability of d as defined in Section 2. Then d is a \({\varGamma}^{g}\)-computable s-gale, and (4.10) tells us that, for all \(A\in X\), \(A\in S^{\infty }[d]\), whence (4.9) holds.□

This completes the proof of Theorem 4.2.

The Point-to-Set Principle for Hausdorff dimension [28], stated in the context of C, says that, for all \(X\subseteq \mathbf {C}\),

$$ {\dim}_{\mathrm{H}}(X)=\min_{B\in\mathbf{C}}\sup_{A\in X}{\dim}^{B}(A), $$
(4.11)

thus characterizing the classical Hausdorff dimension of X in terms of the relativized algorithmic dimensions of its individual elements. Since \({\dim }_{\text {all}}={\dim }_{\mathrm {H}}\), Theorem 4.2 tells us, for example, that we also have, for all \(X\subseteq \mathbf{C}\),

$$ {\dim}_{\mathrm{H}}(X)=\min_{B\in\mathbf{C}}\sup_{A\in X}{\dim}_{\text{p}}^{B}(A). $$
(4.12)

5 Selectivity

Definition 3

[37] For any resource bound Δ, a language \(A\subseteq \{0,1\}^{*}\) is Δ-selective if there is a selector function \(f\in{\varDelta}\) such that, for all pairs \(a,b\in\{0,1\}^{*}\), we have f(〈a,b〉) ∈{a,b} and

$$a\in A\ \text{or}\ b\in A\implies f(\langle a,b\rangle)\in A,$$

where \(\langle\cdot,\cdot\rangle:\{0,1\}^{*}\times\{0,1\}^{*}\to\{0,1\}^{*}\) is a standard pairing function.

Theorem 5.1

If \(A,B\in\mathbf{C}\) and \(g:\{0,1\}^{*}\to\{0,1\}^{*}\) are such that B is \(\text{p}^{g}\)-selective and \(A{\leq _{m}^{P}} B\), then \({\dim }_{\text {p}}^{g}(A)=0\).

Proof

Let A, B, and g be as in the theorem statement. Let \(f\in\text{p}^{g}\) be a selector for B, let \(h:\{0,1\}^{*}\to\{0,1\}^{*}\) be a \({\leq _{m}^{P}}\)-reduction from A to B, and let s > 0. We will show that \({\dim }_{\text {p}}^{g}(A)\leq s\) by constructing an s-gale that succeeds on A and is computable in polynomial time relative to g.

Let \(k\in \mathbb {N}\) be sufficiently large so that

$$ \frac{2^{ks}}{k+1}>1. $$
(5.1)

We will consider blocks of k consecutive strings. For each \(q\in \mathbb {N}\), define the directed graph \(G_{q}\) whose vertex set is {0,…,k − 1} and edge set is

$$\left\{(i,j)\mid f(\langle h(s_{q k+i}),h(s_{q k+j})\rangle)=h(s_{q k+j})\right\}.$$

Notice that if \(s_{qk+i}\in A\) and \(s_{qk+j}\notin A\), then \(h(s_{qk+i})\in B\) and \(h(s_{qk+j})\notin B\). In this situation, the edge (i,j) cannot be present in \(G_{q}\), and more generally there cannot be any path from i to j in \(G_{q}\).

Let \(G^{\prime }_{q}\) be the directed acyclic graph obtained by contracting each strongly connected component of \(G_{q}\) to a single vertex. Define a linear order \(\prec_{q}\) on {0,…,k − 1} by topologically sorting \(G^{\prime }_{q}\), breaking ties within each strongly connected component arbitrarily. In this order, \(i\prec_{q}j\) implies that there is a path from i to j in \(G_{q}\).

Thus, if \(i\prec_{q}j\) and \(s_{qk+i}\in A\), then \(s_{qk+j}\in A\). Extending \(\prec_{q}\) by defining \(i\prec_{q}k\) for all i ∈{0,…,k − 1}, it follows that

$$ A\cap\{s_{q k},\ldots,s_{q k+k-1}\}=\{s_{q k+j}\mid i\preceq_{q} j\} $$
(5.2)

for some i ∈{0,…,k}.
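
The construction of the order \(\preceq_{q}\) is concrete enough to transcribe directly. The sketch below (ours, purely illustrative; the selector is handed a Python pair instead of a pairing function, and `enum`, `f`, `h` are ad hoc stand-ins) sorts vertices by decreasing reachability, which is one way of topologically sorting the condensation of \(G_{q}\).

```python
def block_order(q: int, k: int, f, h, enum):
    """Order {0, ..., k-1} for block q: build G_q from the selector f and the
    reduction h, take the transitive closure, and list vertices in decreasing
    order of how many vertices they can reach (ties, in particular within
    strongly connected components, are broken by the sort's stability)."""
    hs = [h(enum(q * k + i)) for i in range(k)]
    edge = [[f((hs[i], hs[j])) == hs[j] for j in range(k)] for i in range(k)]
    reach = [[i == j or edge[i][j] for j in range(k)] for i in range(k)]
    for m in range(k):                       # Floyd-Warshall; k is a constant
        for i in range(k):
            for j in range(k):
                reach[i][j] = reach[i][j] or (reach[i][m] and reach[m][j])
    return sorted(range(k), key=lambda i: -sum(reach[i]))

# Toy instance: B = strings beginning with "1", selected by preferring the
# argument that begins with "1"; h is the identity reduction, so A = B here.
enum = lambda m: ["", "0", "1", "00", "01", "10", "11", "000"][m]
f = lambda ab: ab[0] if ab[0].startswith("1") else ab[1]
h = lambda x: x
print(block_order(1, 3, f, h, enum))
# -> [0, 1, 2]: block 1 holds s_3, s_4, s_5 = 00, 01, 10, and membership in B
#    (only "10") is indeed a suffix of this order, as in (5.2).
```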

Define \(d:\{0,1\}^{*}\to [0,\infty )\) and, for each i ∈{0,…,k}, \(d_{i}:\{0,1\}^{*}\to [0,\infty )\) recursively as follows. For \(w\in\{0,1\}^{*}\), let qk + j = |w|, where j ∈{0,…,k − 1}.

  • For all i ∈{0,…,k}, \(d_{i}(\lambda)=d(\lambda)=1\).

  • \(d(w)=\frac {1}{k+1}{\sum }_{i=0}^{k} d_{i}(w)\).

  • For all i ∈{0,…,k} and j = 0,

    $$ \begin{array}{@{}rcl@{}} d_{i}(w0)&= \left\{\begin{array}{ll} 0&\text{if}\ i\preceq_{q} j\\ 2^{s}d(w)&\text{otherwise}, \end{array}\right.\\ d_{i}(w1)&= \left\{\begin{array}{ll} 2^{s}d(w)&\text{if}\ i\preceq_{q} j\\ 0&\text{otherwise}. \end{array}\right. \end{array} $$
  • For all i ∈{0,…,k} and j ∈{1,…,k − 1},

    $$ \begin{array}{@{}rcl@{}} d_{i}(w0)&= \left\{\begin{array}{ll} 0&\text{if}\ i\preceq_{q} j\\ 2^{s}d_{i}(w)&\text{otherwise}, \end{array}\right.\\ d_{i}(w1)&= \left\{\begin{array}{ll} 2^{s}d_{i}(w)&\text{if}\ i\preceq_{q} j\\ 0&\text{otherwise}. \end{array}\right. \end{array} $$

Informally, each \(d_{i}\) represents a betting strategy, and d is an aggregate betting strategy that evenly re-allocates its capital among the \(d_{i}\) after each block of k bits. Observe that d is an s-gale, although the individual \(d_{i}\) are not.
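
To make the recursion concrete, the following sketch (ours, purely illustrative; `order_of_block` is assumed to supply the order \(\preceq_{q}\), e.g. as produced by the previous sketch) evaluates d on a prefix w bit by bit.

```python
def evaluate_gale(w: str, s: float, k: int, order_of_block):
    """Evaluate the s-gale d from the proof of Theorem 5.1 on the prefix w.
    order_of_block(q) returns the list of {0, ..., k-1} in the order used for
    block q; the extra strategy index i = k bets that no string of the block
    is in A."""
    def precedes(q, i, j):                  # i "preceq_q" j for j in {0,...,k-1}
        if i == k:
            return False
        pos = order_of_block(q)
        return pos.index(i) <= pos.index(j)

    d_i = [1.0] * (k + 1)                   # current values of the d_i
    d = 1.0                                 # current value of d (their average)
    for n, bit in enumerate(w):
        q, j = divmod(n, k)
        base = [d] * (k + 1) if j == 0 else d_i   # re-pool capital at block starts
        d_i = [(2 ** s) * base[i] if (bit == "1") == precedes(q, i, j) else 0.0
               for i in range(k + 1)]
        d = sum(d_i) / (k + 1)
    return d

# k = 3 and s = 0.8 satisfy (5.1); with the identity order on every block, the
# all-ones characteristic sequence gains a factor 2**(k*s)/(k+1) per block.
print(evaluate_gale("111" * 4, 0.8, 3, lambda q: [0, 1, 2]))   # -> about 3.03
```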

Now consider \(d(A\upharpoonright n)\). If n = 0, then \(d(A\upharpoonright n)=1\). Otherwise, n = qk + j for some \(q\in \mathbb {N}\) and j ∈{1,…,k}. Let i ∈{0,…,k} be the value satisfying (5.2) for this q. Then

$$ \begin{array}{@{}rcl@{}} d(A\upharpoonright n)&\geq& \frac{d_{i}(A\upharpoonright n)}{k+1}\\ &=&\frac{2^{js}d(A\upharpoonright (n-j))}{k+1}\\ &\geq&\frac{2^{(qk+j)s}}{(k+1)^{q+1}}\\ &\geq&\frac{2^{s}}{k+1}\left( \frac{2^{ks}}{k+1}\right)^{q}, \end{array} $$

where the third step follows by applying the same bound inductively at the block boundaries. By inequality (5.1), this lower bound is unbounded as q grows, so \(\liminf _{n\to \infty }d(A\upharpoonright n)=\infty \). Therefore the s-gale d succeeds on A. Furthermore, for all \(w\in\{0,1\}^{*}\), the value d(w) can be computed in polynomial time relative to g by:

  • k calls to the polynomial-time reduction function h on inputs

    $$s_{qk},\ldots,s_{qk+k-1},$$

    each of which has length \(O(\log |w|)\);

  • \(k^{2}\) calls to the selector function f, one for each ordered pair of h-images of strings from \(\{s_{qk},\ldots,s_{qk+k-1}\}\); f runs in polynomial time relative to g; and

  • standard graph algorithms on \(G_{q}\), which has k = O(1) vertices.

We conclude that \({\dim }_{\text {p}}^{g}(A)\leq s\). Since s > 0 was arbitrary, the theorem follows. □

Lemma 5.2

Let \(\text {qp}^{\prime }\) be the set of all functions in qp whose output length is polynomially bounded. There is a function \(h\in \text {qp}^{\prime }\) such that \(\text {qp}^{\prime }=\text {p}^{h}\).

Proof

By standard techniques of clocking Turing machines and bounding their running times and output lengths, we can form an enumeration \(M_{0},M_{1},M_{2},\ldots\) of Turing machines such that \(\text {qp}^{\prime }\) is exactly the set of functions computed by Turing machines in this list. Define \(h:\{0,1\}^{*}\to\{0,1\}^{*}\) by

$$ h(u)= \left\{\begin{array}{ll} M_{k}(x)&\text{if}\ u=0^{k}1x\\ \lambda&\text{if \textit{u} does not contain a 1.} \end{array}\right. $$

It is clear that \(\text {p}^{h}=\text {qp}^{\prime }\).□

Theorem 5.3

\({\dim }(P_{m}(\text {qp-SEL})\mid \text {EXP})=0\).

Proof

Let h be as in Lemma 5.2, and let \(A\in P_{m}(\text{qp-SEL})\). Then there exist a language \(B\in\mathbf{C}\) and a function \(g\in \text {qp}^{\prime }=\text {p}^{h}\) such that \(A{\leq _{m}^{P}} B\) and g is a selector for B, i.e., B is \(\text{p}^{h}\)-selective. (A qp-selector always outputs one of its two inputs, so its output length is bounded by its input length; hence it lies in \(\text{qp}^{\prime}\).) By Theorem 5.1, then, \({\dim }_{\text {p}}^{h}(A)=0\). This holds for all \(A\in P_{m}(\text{qp-SEL})\), so we can apply Theorem 4.2:

$$ \begin{array}{@{}rcl@{}} {\dim}_{\text{qp}}(P_{m}(\text{qp-SEL})) &\leq&\sup\limits_{A\in P_{m}(\text{qp-SEL})}{\dim}_{\text{p}}^{h}(A)\\ &=&0. \end{array} $$

Since, by definition and monotonicity,

$${\dim}(P_{m}(\text{qp-SEL})\mid\text{EXP})={\dim}_{\text{qp}}(P_{m}(\text{qp-SEL})\cap\text{EXP})\leq{\dim}_{\text{qp}}(P_{m}(\text{qp-SEL})),$$

this completes the proof. □

Corollary 5.4

No qp-selective set is partially \({\leq _{m}^{P}}\)-hard for EXP.

Corollary 5.5

If \({\dim }(\text {NP}\mid \text {EXP})>0\), then no qp-selective set is \({\leq ^{P}_{m}}\)-hard for NP.

6 Disjoint NP Pairs

In this section we improve the results in [7] by proving that the dimension of disjNP in disjEXP is related to the dimension of NP inside EXP.

Definition 4

[14, 26] For \(s\in [0,\infty )\) and a distribution β on an alphabet Σ, a β-s-gale is a function \(d:{\varSigma }^{*}\to [0,\infty )\) such that, for all \(w\in{\varSigma}^{*}\),

$$d(w)=\sum\limits_{a\in{\varSigma}}d(wa)\beta(a)^{s}.$$

A β-s-gale succeeds on a language \(A\subseteq {\varSigma }^{*}\), and we write \(A\in S^{\infty }[d]\), if

$$\limsup_{w\to A}d(w)=\infty.$$

Let Δ be a resource bound, β a distribution on an alphabet Σ, and \(X\subseteq \mathcal {P}({\varSigma }^{*})\). Then \(\mathcal {G}_{{\varDelta },\beta }(X)\) denotes the set of all \(s\in [0,\infty )\) such that there is a Δ-computable β-s-gale d for which \(X\subseteq S^{\infty }[d]\), and the (Δ,β)-dimension of X is

$${\dim}_{{\varDelta},\beta}(X)=\inf\mathcal{G}_{{\varDelta},\beta}(X).$$

We code disjoint pairs as in [7], using the alphabet {0,1,− 1}. For a pair (A,B), 1 corresponds to A, − 1 to B, and 0 to \((A\cup B)^{c}\).

We fix the probability distribution \(\gamma_{0}\) on {0,1,− 1} given by \(\gamma_{0}(0)=1/4\) and \(\gamma_{0}(1)=\gamma_{0}(-1)=3/8\), which is the natural distribution used in [7]. For disjoint pairs we write \({\dim }_{{\varDelta }}(X)\) for \({\dim }_{{\varDelta },\gamma _{0}}(X)\). Theorem 4.2 extends routinely to this setting.
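
For concreteness, a small sketch (ours; the predicates and `enum` are ad hoc) of the coding of a disjoint pair as a sequence over {0, 1, −1}:

```python
def encode_pair(in_A, in_B, n: int, enum):
    """The first n symbols, over {0, 1, -1}, of the coding of the disjoint
    pair (A, B): 1 for strings in A, -1 for strings in B, 0 for the rest."""
    out = []
    for m in range(n):
        s = enum(m)
        assert not (in_A(s) and in_B(s)), "the pair must be disjoint"
        out.append(1 if in_A(s) else -1 if in_B(s) else 0)
    return out

# Toy disjoint pair: A = strings ending in "1", B = {"00"}.
enum = lambda m: ["", "0", "1", "00", "01", "10", "11"][m]
print(encode_pair(lambda w: w.endswith("1"), lambda w: w == "00", 7, enum))
# -> [0, 0, 1, -1, 1, 0, 1]
```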

The main theorem of this section is the following.

Theorem 6.1

If \({\dim }(\text {disjNP}\mid \text {disjEXP})=1\), then \({\dim }(\text {NP}\mid \text {EXP})>0\).

The proof of Theorem 6.1 is based on the following two results and Theorem 4.2.

Theorem 6.2

Let β be a positive distribution on {0,1}, \(X\subseteq \mathbf {C}\), and \(g:\{0,1\}^{*}\to\{0,1\}^{*}\). If \({\dim }_{\text {p}}^{g}(X)=0\), then \({\dim }_{\text {p},\beta }^{g}(X)<1\).

Theorem 6.3

Let β = (1/4,3/4) and \(g:\{0,1\}^{*}\to\{0,1\}^{*}\). If \({\dim }_{\text {p},\beta }^{g}(\text {NP}) < 1\), then \({\dim }_{\text {p}}^{g}(\text {disjNP})<1\).

Theorem 6.2 is a consequence of the following lemma.

Lemma 6.4

Let \(g:\{0,1\}^{*}\to\{0,1\}^{*}\), let \(X\subseteq\mathbf{C}\), let s be such that \({\dim }_{\text {p}}^{g}(X)<s\), and let β be a distribution on {0,1}. If \(\max \limits (\beta (0), \beta (1))< 2^{-s}\), then \({\dim }_{\text {p},\beta }^{g}(X)<1\).

Proof of Lemma 6.4

Let \(s^{\prime }> s\) and t ∈ (0,1) be such that \(\max \limits (\beta (0), \beta (1))<2^{-s^{\prime }/t}\). Let d be a \(\text{p}^{g}\)-computable s-gale with \(X\subseteq S^{\infty}[d]\). Define

$$d^{\prime}(wb)=d^{\prime}(w) \frac{d(wb)}{2^{s}d(w)}\frac{1}{\beta(b)^{t}}.$$

Then \(d^{\prime }\) is a \(\text{p}^{g}\)-computable β-t-gale. Furthermore,

$$d^{\prime}(w)\ge d (w) 2^{-s|w|}\frac{1}{\beta(w)^{t}} > d (w) 2^{-s|w|} 2^{s^{\prime}|w|},$$

where \(\beta(w)\) denotes \(\prod_{i<|w|}\beta(w[i])\), and therefore \(X\subseteq S^{\infty }[d]\subseteq S^{\infty }[d^{\prime }]\), whence \({\dim}_{\text{p},\beta}^{g}(X)\leq t<1\). □

Theorem 6.3 is a consequence of the following lemma.

Lemma 6.5

Let \(g:\{0,1\}^{*}\to\{0,1\}^{*}\), γ a positive distribution on {0,1,− 1}, β a distribution on {0,1} with β(0) = γ(0), and \(X\subseteq \mathbf {C}\) a class that is closed under union. If \({\dim }_{\text {p},\beta }^{g}(X)<1\), then \({\dim }_{\text {p},\gamma }^{g}(\text {disj} X)<1\).

Proof of Lemma 6.5

Let s < 1 be such that \({\dim }_{\text {p},\beta }^{g}(X)<s\), let d be a \(\text{p}^{g}\)-computable β-s-gale succeeding on X, and let \( s^{\prime } \in (0,1)\) be such that \(\beta (1)^{s} \ge \gamma (1)^{s^{\prime }}+ \gamma (-1)^{s^{\prime }}\) and \(\beta (0)^{s}\ge \gamma (0)^{s^{\prime }}\).

We define a \(\text{p}^{g}\)-computable γ-\(s^{\prime }\)-gale D by

$$ \begin{array}{@{}rcl@{}}D(w0)&=&D(w)\frac{d(\overline{w}0)}{d(\overline{w})}\frac{\beta(0)^{s}}{\gamma(0)^{s^{\prime}}}\\ D(w1)=D(w-1)&=&D(w)\frac{d(\overline{w}1)}{d(\overline{w})}\frac{\beta(1)^{s}}{\gamma(1)^{s^{\prime}}+\gamma(-1)^{s^{\prime}}},\\ \end{array} $$

where

$$ \begin{array}{@{}rcl@{}}\overline{w}[i]=0&\text{if}&w[i]=0\\ \overline{w}[i]=1&\text{if}&w[i]=1\ \mathrm{ or }\ w[i]=-1.\\ \end{array} $$

That is, if w is a prefix of (A,B) then \(\overline {w}\) is a prefix of \(A\cup B\).

Notice that \(D(w)\ge d(\overline {w})\) for every w.

Thus, if (A,B) ∈disjX, then \(A\cup B\in X\) (because X is closed under union), so d succeeds on \(A\cup B\) and hence D succeeds on (A,B).□

Proof of Theorem 6.1

We prove the contrapositive. Suppose that \({\dim }(\text {NP}\mid \text {EXP})=0\). By Theorem 4.2, there is a g ∈qp such that \({\dim }_{\text {p}}^{g}(\text {NP})=0\).

Let β = (1/4,3/4). By Theorem 6.2, \({\dim }_{\text {p},\beta }^{g}(\text {NP})<1\). By Theorem 6.3, \({\dim }_{\text {p}}^{g}(\text {disjNP})<1\).

Using Theorem 4.2 again, \({\dim }(\text {disjNP}\mid \text {disjEXP})= {\dim }_{\text {qp}}(\text {disjNP})<1\).□