1 Introduction

There are various geometric constants associated with a normed space, which are useful for a quantitative understanding of the geometry of the space and which also play an important role in the study of other related problems of functional analysis. The James constant is one of the most prominent geometric constants associated with the space, which measures the “non-squareness” of the unit ball of a normed space. Our motivation behind this article is to illustrate the central role played by isosceles orthogonality, a natural generalization of the usual orthogonality in an inner product space, in studying various geometric constants, including the James constant. Before proceeding further, let us fix the notation and terminology.

Let \({\mathbb {X}}, {\mathbb {Y}}\) denote real normed spaces. Let \(B_{\mathbb {X}} = \{ x \in {\mathbb {X}} : \Vert x\Vert \le 1\} \) and \(S_{\mathbb {X}} = \{x\in {\mathbb {X}}: \Vert x\Vert =1\}\) denote the unit ball and the unit sphere of \({\mathbb {X}},\) respectively. For a non-empty convex subset S of \({\mathbb {X}},\) an element \(z \in S\) is said to be an extreme point of S if \( z = (1-t)x + ty \) for some \( t \in (0,1) \) and \(x,y \in S\) implies that \( x=y=z.\) The set of all extreme points of \(B_{\mathbb {X}}\) is denoted by \(E_{{\mathbb {X}}}.\) A normed space \({\mathbb {X}}\) is said to be strictly convex if \( E_{{\mathbb {X}}} = S_{{\mathbb {X}}}.\) An element \(x \in {\mathbb {X}}\) is said to be isosceles orthogonal [7] to an element \(y \in {\mathbb {X}} \), denoted as \(x\perp _I y\), if \( \Vert x + y\Vert = \Vert x - y\Vert .\) Geometrically, this means that the two diagonals \( x + y \) and \( x - y \) of the parallelogram formed by the vectors x and y have equal length. We refer the readers to [1, 2, 8] for more information related to this topic. An element \(x \in {\mathbb {X}}\) is said to be approximate isosceles orthogonal [5] to y if, for a given \(\epsilon \in [0, 1), \) \(|\Vert x+y\Vert ^2 - \Vert x-y\Vert ^2| \le 4\epsilon \Vert x\Vert \Vert y\Vert ,\) and in this case we write \( x \perp _I^{\epsilon } y.\) Note that approximate isosceles orthogonality is symmetric, and therefore, so is exact isosceles orthogonality.
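These definitions are easy to experiment with numerically. The following minimal Python sketch (our own illustration; the chosen vectors and the max-norm are arbitrary examples, not taken from the text) checks exact and approximate isosceles orthogonality in \(\ell _\infty ^2\) and the symmetry of the relation.

```python
# Illustrative check of (approximate) isosceles orthogonality; the chosen
# vectors and the max-norm are our own examples, not taken from the text.

def norm_inf(v):
    return max(abs(c) for c in v)

def is_isosceles(x, y, norm, tol=1e-12):
    # x  ⊥_I  y  iff  ||x + y|| = ||x - y||
    s = norm(tuple(a + b for a, b in zip(x, y)))
    d = norm(tuple(a - b for a, b in zip(x, y)))
    return abs(s - d) < tol

def is_approx_isosceles(x, y, eps, norm):
    # x  ⊥_I^eps  y  iff  | ||x+y||^2 - ||x-y||^2 | <= 4*eps*||x||*||y||
    s = norm(tuple(a + b for a, b in zip(x, y)))
    d = norm(tuple(a - b for a, b in zip(x, y)))
    return abs(s**2 - d**2) <= 4 * eps * norm(x) * norm(y)

x, y = (1.0, 0.0), (0.0, 1.0)
print(is_isosceles(x, y, norm_inf))              # True: ||x+y|| = ||x-y|| = 1
print(is_isosceles(y, x, norm_inf))              # True: the relation is symmetric
print(is_approx_isosceles(x, y, 0.0, norm_inf))  # True: eps = 0 recovers the exact case
```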

We now mention the definitions of the following geometric constants, to be studied throughout this paper.

Definition 1.1

[6] Let \({\mathbb {X}}\) be a normed space.

  (i)

    The James constant, denoted by \(J({\mathbb {X}}), \) is defined as

    $$\begin{aligned} J({\mathbb {X}}) = \sup \Big \{\min \big \{\Vert x+y\Vert , \Vert x-y\Vert \big \} : x, y \in S_{\mathbb {X}} \Big \}. \end{aligned}$$
  (ii)

    For \( x \in S_{\mathbb {X}},\) the local James constant, denoted by \(\beta (x),\) is defined as

    $$\begin{aligned} \beta (x) = \sup \Big \{\min \big \{\Vert x+y\Vert , \Vert x-y\Vert \big \} : y\in S_{\mathbb {X}}\Big \}. \end{aligned}$$
  (iii)

    The Schäffer constant, denoted by \( S({\mathbb {X}}), \) is defined as

    $$\begin{aligned} S({\mathbb {X}}) = \inf \Big \{\max \big \{\Vert x+y\Vert , \Vert x-y\Vert \big \} : x, y\in S_{\mathbb {X}} \Big \}. \end{aligned}$$
  (iv)

    For each \( x \in S_{\mathbb {X}},\) the local Schäffer constant at x, denoted by \(\alpha (x)\), is defined as

    $$\begin{aligned} \alpha (x) = \inf \Big \{\max \big \{\Vert x+y\Vert , \Vert x-y\Vert \big \} : y \in S_{\mathbb {X}}\Big \}. \end{aligned}$$

We note from [6] that for a given normed space \({\mathbb {X}},\) \(\sqrt{2} \le J({\mathbb {X}}) \le 2.\) Moreover, \( {\mathbb {X}} \) is said to be uniformly non-square if and only if \( J({\mathbb {X}}) < 2. \) It is also known that \( J({\mathbb {X}}) = \sqrt{2} \) whenever \({\mathbb {X}}\) is an inner product space but the converse is not true, in general. In [9], the authors studied the normed spaces with James constant \(\sqrt{2}.\)
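For intuition, the bounds \(\sqrt{2} \le J({\mathbb {X}}) \le 2\) can be observed numerically. The following rough brute-force sketch (our own illustration; the grid resolution is an arbitrary choice) estimates \(J({\mathbb {X}})\) for the Euclidean plane, where the lower bound is attained, and for \(\ell _\infty ^2,\) which is not uniformly non-square.

```python
import math

def james_estimate(norm, n=360):
    # crude grid estimate of J(X) = sup_{x,y in S_X} min{||x+y||, ||x-y||}
    sphere = []
    for k in range(n):
        t = 2 * math.pi * k / n
        d = (math.cos(t), math.sin(t))
        r = norm(d)
        sphere.append((d[0] / r, d[1] / r))  # radially project onto S_X
    return max(min(norm((x[0] + y[0], x[1] + y[1])),
                   norm((x[0] - y[0], x[1] - y[1])))
               for x in sphere for y in sphere)

euclidean = lambda v: math.hypot(v[0], v[1])
max_norm = lambda v: max(abs(v[0]), abs(v[1]))

print(round(james_estimate(euclidean), 3))  # 1.414, i.e. sqrt(2): the lower bound
print(round(james_estimate(max_norm), 3))   # 2.0: l_inf^2 is not uniformly non-square
```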

Generalizations of the notions of the James constant and the local James constant were introduced in [11] in the following way. For \( \lambda \in (0, 1),\) the generalized James constant, denoted by \(J(\lambda , {\mathbb {X}}),\) is defined as

$$\begin{aligned} J(\lambda , {\mathbb {X}}) = \sup \Big \{\min \big \{\Vert \lambda x + (1-\lambda ) y\Vert , \Vert \lambda x - (1-\lambda ) y\Vert \big \} : x, y \in S_{{\mathbb {X}}} \Big \} \end{aligned}$$

and for \(x\in S_{\mathbb {X}},\) the generalized local James constant, denoted by \( \beta (\lambda , x), \) is defined as

$$\begin{aligned} \beta (\lambda , x) = \sup \Big \{\min \big \{\Vert \lambda x+(1-\lambda )y\Vert , \Vert \lambda x - (1-\lambda )y\Vert \big \} : y \in S_{\mathbb {X}} \Big \}. \end{aligned}$$

We also need two other well-known geometric constants, the modulus of smoothness and the modulus of convexity, denoted by \( \rho _{{\mathbb {X}}}(\epsilon ) \) and \( \delta _{{\mathbb {X}}}(\epsilon ), \) respectively, which are defined as

$$\begin{aligned} \rho _{{\mathbb {X}}}(\epsilon ) = \sup \Big \{ 1 - \frac{\Vert x + y\Vert }{2} : x,y \in S_{{\mathbb {X}}}, \Vert x - y \Vert \le \epsilon \Big \},\\ \delta _{{\mathbb {X}}}(\epsilon ) = \inf \Big \{ 1 - \frac{\Vert x + y\Vert }{2} : x,y \in S_{{\mathbb {X}}}, \Vert x - y \Vert \ge \epsilon \Big \}, \end{aligned}$$

where \( \epsilon \in [0,2].\) We note from [10, Cor. 5] that \( \delta _{{\mathbb {X}}}\) is a continuous function on [0, 2), whereas from [14], \(\rho _{{\mathbb {X}}}\) is continuous on [0, 2]. Another modulus of smoothness, \( \rho '_{{\mathbb {X}}}(\epsilon ), \) is defined as

$$\begin{aligned} \rho '_{{{\mathbb {X}}}}(\epsilon ) = \sup _{x, y\in S_{\mathbb {X}}}\Big \{\frac{\Vert x + \epsilon y\Vert + \Vert x - \epsilon y\Vert }{2}-1\Big \}. \end{aligned}$$

or, equivalently,

$$\begin{aligned} \rho '_{{\mathbb {X}}}(\epsilon ) = \sup \Big \{\frac{\Vert x + y\Vert + \Vert x - y\Vert }{2}-1 : \Vert x\Vert =1, \Vert y\Vert \le \epsilon \Big \}. \end{aligned}$$

Observe that \(\rho '_{{{\mathbb {X}}}}(\epsilon )\) is not equivalent to \(\rho _{{\mathbb {X}}}(\epsilon )\) (see [3, Th. 1]).

Given any \(x, y \in {\mathbb {X}},\) we denote by \([x, y\rangle \) the ray starting from x and passing through y,  i.e., \( [x, y\rangle = \{ (1-t)x + ty : t \ge 0\},\) and by \([x, y]\) the closed line segment between x and y,  i.e., \([x, y] = \{(1-t) x + ty : 0\le t \le 1\}.\) Another important concept to be used in this paper is that of orientation. Following [4], we say that x precedes y in a two-dimensional Banach space \( {\mathbb {X}}\) if \(x_1y_2 - x_2y_1 > 0,\) where \(x = (x_1, x_2), y=(y_1, y_2) \in {\mathbb {X}},\) and in this case we write \( x \prec y.\) Of course, here \( {\mathbb {X}} \) is identified with \( {\mathbb {R}}^2 \) in the obvious way. We note from [8, Cor. 2.4] that for any \(x \in S_{\mathbb {X}}\) there exists a unique (except for the sign) \(y \in S_{\mathbb {X}}\) such that \(x \perp _I y.\) In particular, whenever it is given that \( x \perp _I y,\) without loss of generality we can assume that \( -y \prec x \prec y.\) We also consider the attainment set \( M_{J({\mathbb {X}})} \) of the James constant:

$$\begin{aligned} M_{J({\mathbb {X}})} = \{(x, y) \in S_{\mathbb {X}} \times S_{\mathbb {X}} : \min \{\Vert x+y\Vert , \Vert x-y\Vert \} = J({\mathbb {X}})\}. \end{aligned}$$

When \({\mathbb {X}}\) is finite-dimensional it is easy to see that \(M_{J({\mathbb {X}})} \ne \emptyset .\)

We explore the attainment problem for the generalized James constant and also study its converse. We illustrate the crucial role played by isosceles orthogonality in the whole scheme of things. In two-dimensional polyhedral Banach spaces, we make an observation which is computationally effective for finding the values of the James constant in each case. We also study approximate isosceles orthogonality from a geometric point of view and discuss its connections with the modulus of convexity.

We end this section by mentioning the following known results, which are essential in our present work.

Lemma 1.2

[12, Prop. 31] Let \({\mathbb {X}}\) be a two-dimensional Banach space. Let \(x, y, z \not = \theta ,\) \(x\not = z,\) with \([0, y\rangle \) lying in between \([0, x\rangle \) and \([0, z\rangle ,\) and suppose that \(\Vert y\Vert = \Vert z\Vert .\) Then \(\Vert x-y\Vert \le \Vert x-z\Vert .\) In particular, if \({\mathbb {X}}\) is strictly convex, then we always have strict inequality.

Lemma 1.3

[6, Lemma 2.2] Let \({\mathbb {X}}\) be a two-dimensional Banach space and let \(x \in S_{\mathbb {X}}.\) Then there exists a unique \(y \in S_{\mathbb {X}}\) such that \(\alpha (x) = \beta (x) = \Vert x+y\Vert = \Vert x-y\Vert .\)

Theorem 1.4

[6, Th. 3.3] Let \({\mathbb {X}}\) be a normed space. Then

$$\begin{aligned} J({\mathbb {X}}) = \sup \big \{\epsilon : \epsilon < 2-2\delta _{{\mathbb {X}}}(\epsilon )\}. \end{aligned}$$

Proposition 1.5

[6, Prop. 2.8] Let \({\mathbb {X}}\) be a two-dimensional Banach space. If \(S_{\mathbb {X}}\) is affinely homeomorphic to a convex symmetric body in the two-dimensional Euclidean space \({\mathbb {R}}^2\) which is invariant under a rotation of \(\frac{\pi }{4},\) then \(J({\mathbb {X}}) = \sqrt{2}.\)

Theorem 1.6

[8, Th. 2.3] Let \({\mathbb {X}}\) be a two-dimensional Banach space and let \(x \in {\mathbb {X}}\) be non-zero. Then for each number \(0 \le r \le \Vert x\Vert ,\) there exists a unique \(y \in rS_{{\mathbb {X}}}\) (except for the sign) such that \(x \perp _I y.\)

Moreover, if \({\mathbb {X}}\) is strictly convex then for each \(r \in [0, +\infty ),\) there exists a unique \(y \in rS_{{\mathbb {X}}}\) (except for the sign) such that \(x \perp _I y.\)

2 Main Results

In [6], Gao and Lau proved that in a two-dimensional Banach space \({\mathbb {X}},\) if \(x, y \in S_{{\mathbb {X}}}\) are such that \(x \perp _{I} y,\) then \(\beta (x) = \beta (y) = \Vert x - y\Vert = \Vert x + y \Vert .\) We begin by establishing a similar result for the generalized local James constant \(\beta (\lambda , x),\) from which the above result follows directly as the particular case \(\lambda = \frac{1}{2}.\)

Proposition 2.1

Let \( {\mathbb {X}}\) be a two-dimensional Banach space and \( x, y \in S_{{\mathbb {X}}}.\) If \(x \perp _{I} (\frac{1-\lambda }{\lambda }) y,\) where \( \lambda \in (0, 1),\) then \( \beta (\lambda , x) = \Vert \lambda x + (1 - \lambda )y\Vert = \Vert \lambda x - (1 - \lambda )y\Vert . \)

Proof

Let \( x \perp _{I}(\frac{1-\lambda }{\lambda }) y.\) Then we get \( \Vert \lambda x + (1 - \lambda )y\Vert = \Vert \lambda x - (1 - \lambda )y\Vert .\) Clearly, for any \( z\not = \pm y\) we have \((1 - \lambda ) z \not = \pm (1 - \lambda )y.\) Consider the following four sets:

$$\begin{aligned} C_1 = \{(1-\lambda ) \frac{(1-t)x + t y}{\Vert (1-t)x + t y\Vert } : 0 \le t \le 1\},\\ C_2 = \{(1-\lambda ) \frac{(1-t)y - t x}{\Vert (1-t)y - t x\Vert } : 0 \le t \le 1\},\\ C_3 = \{(1-\lambda ) \frac{-(1-t)x - t y}{\Vert -(1-t)x - t y\Vert } : 0 \le t \le 1\},\\ C_4 = \{(1-\lambda ) \frac{-(1-t)y + t x}{\Vert -(1-t)y + t x\Vert } : 0 \le t \le 1\}, \end{aligned}$$

whose union is the circle of radius \(|1-\lambda |\) and the sets \(C_i\) intersect only at \( \pm (1-\lambda )x, \pm (1-\lambda )y.\) Observe that for any \(z\in S_{\mathbb {X}},\) we have \((1 - \lambda )z \in C_i,\) for some \( i,~ 1\le i \le 4. \) Let us assume that \( (1 - \lambda )z \in C_1.\) Then applying Lemma 1.2 it is straightforward to observe that \(\Vert \lambda x + (1 - \lambda )z\Vert \ge \Vert \lambda x + (1 - \lambda )y\Vert \) whereas \(\Vert \lambda x - (1 - \lambda ) z\Vert \le \Vert \lambda x - (1 - \lambda )y\Vert .\) Therefore, we obtain, \( \min \{\Vert \lambda x - (1 - \lambda )y\Vert , \Vert \lambda x + (1 - \lambda )y\Vert \} = \Vert \lambda x - (1 - \lambda )y\Vert \ge \Vert \lambda x - (1 - \lambda )z\Vert \ge \min \{\Vert \lambda x - (1 - \lambda )z\Vert , \Vert \lambda x + (1 - \lambda )z\Vert \}.\) If \( (1-\lambda )z \in C_i,\) for some \(i\in \{2, 3, 4\}\) then we can proceed similarly to conclude that \( \min \{\Vert \lambda x - (1 - \lambda )y\Vert , \Vert \lambda x + (1 - \lambda )y\Vert \} \ge \min \{\Vert \lambda x - (1 - \lambda )z\Vert , \Vert \lambda x + (1 - \lambda )z\Vert \}.\) As \(z \in S_{{\mathbb {X}}}\) is arbitrary, we get \( \beta (\lambda , x) = \min \{\Vert \lambda x - (1 - \lambda )y\Vert , \Vert \lambda x + (1 - \lambda )y\Vert \}= \Vert \lambda x + (1 - \lambda )y\Vert = \Vert \lambda x - (1 - \lambda )y\Vert . \)

\(\square \)

To determine the value of \(J(\lambda , {\mathbb {X}})\) of a normed space \({\mathbb {X}},\) we observe the following:

Remark 2.2

Following Proposition 2.1, it is easy to observe that for a given \( \lambda \in (0,1),\)

$$\begin{aligned} J(\lambda , {\mathbb {X}})= & {} \sup \Big \{ \Vert \lambda x + (1 - \lambda )y\Vert : x,y \in S_{{\mathbb {X}}}, x \perp _{I}(\frac{1-\lambda }{\lambda }) y \Big \}\\= & {} \sup \Big \{\Vert \lambda x - (1 - \lambda )y\Vert : x,y \in S_{{\mathbb {X}}}, x \perp _{I}(\frac{1-\lambda }{\lambda }) y \Big \}. \end{aligned}$$

Therefore, to find the generalized James constant \(J(\lambda , {\mathbb {X}}),\) for a given \(\lambda \in (0, 1),\) we only need to consider the subset \( \{(x, y) \in S_{{\mathbb {X}}} \times S_{{\mathbb {X}}} : x\perp _{I} (\frac{1-\lambda }{\lambda }) y\} \subseteq S_{{\mathbb {X}}} \times S_{{\mathbb {X}}}. \)

In the following theorem, we study the converse of Proposition 2.1.

Theorem 2.3

Let \({\mathbb {X}}\) be a strictly convex normed space and \(x \in S_{ {\mathbb {X}}}, \lambda \in (0, 1). \) If \( \beta (\lambda , x) = \min \{\Vert \lambda x + (1-\lambda ) y\Vert , \Vert \lambda x - (1-\lambda ) y\Vert \},\) for some \(y \in S_{\mathbb {X}}, \) then \( x \perp _{I} (\frac{1-\lambda }{\lambda })y.\)

Proof

Clearly \( x \ne \pm y.\) Since x, y are linearly independent, consider the two-dimensional subspace \({\mathbb {Y}} = \textrm{span}\{x, y\}.\) If possible, let us assume that \( x \not \perp _{I} (\frac{1-\lambda }{\lambda })y.\) Then either \( \Vert x + (\frac{1-\lambda }{\lambda })y\Vert > \Vert x - (\frac{1-\lambda }{\lambda })y\Vert \) or \( \Vert x - (\frac{1-\lambda }{\lambda })y\Vert > \Vert x + (\frac{1-\lambda }{\lambda })y\Vert .\) Without loss of generality we assume that \( \Vert x + (\frac{1-\lambda }{\lambda })y\Vert > \Vert x - (\frac{1-\lambda }{\lambda })y\Vert ,\) so that \( \beta (\lambda , x) = \Vert \lambda x - (1-\lambda )y\Vert .\) Applying Theorem 1.6, there exists a unique \( z \in S_{{\mathbb {Y}}}\) (except for the sign) such that \(x\perp _{I} \frac{1-\lambda }{\lambda } z.\) Observe that either

(i) the ray \([0, (1-\lambda )y\rangle \) lies in between the rays \([0, \lambda x \rangle \) and \([0, (1- \lambda ) z \rangle \) or

(ii) the ray \([0, (1-\lambda )y\rangle \) lies in between the rays \([0, \lambda x \rangle \) and \([0, -(1- \lambda )z \rangle .\)

Assume that (i) holds. Since \(\lambda x, (1-\lambda ) y, (1- \lambda ) z \ne \theta \) and \(\Vert (1-\lambda )y\Vert = \Vert (1- \lambda )z\Vert ,\) applying Lemma 1.2, together with the assumption that \( {\mathbb {X}}\) is strictly convex, we conclude that \( \Vert \lambda x - (1- \lambda )y\Vert < \Vert \lambda x - (1- \lambda ) z\Vert = \Vert \lambda x + (1-\lambda )z\Vert .\) This implies that \( \beta (\lambda , x) < \min \{\Vert \lambda x- (1-\lambda )z\Vert , \Vert \lambda x + (1-\lambda )z\Vert \},\) a contradiction to the definition of \(\beta (\lambda , x).\) If (ii) holds, we can proceed similarly. Thus we must have \( x \perp _{I} (\frac{1-\lambda }{\lambda })y.\) \(\square \)

It is easy to see that \( \beta (\frac{1}{2}, x) = \frac{1}{2}\beta (x),\) for any \(x \in S_{\mathbb {X}}.\) Therefore, taking \(\lambda = \frac{1}{2},\) we state the following result as a particular case of Theorem 2.3 that studies the converse of [6, Lemma 2.2(i)].

Theorem 2.4

Let \({\mathbb {X}} \) be a strictly convex normed space and let \(x_0 \in S_{{\mathbb {X}}}.\) If \( y_0 \in S_{{\mathbb {X}}}\) is such that \(\beta (x_0) = \min \{ \Vert x_0-y_0\Vert , \Vert x_0+y_0\Vert \},\) then \(x_0 \perp _{I} y_0.\)

The following example illustrates that the condition of strict convexity in the above theorem cannot be relaxed in general.

Example 2.5

Let \({\mathbb {X}} = \ell _\infty ^2\) and let \(x_0 = (1, 0) \in S_{{\mathbb {X}}}.\) To compute \(\beta ((1,0)),\) we observe that any \(y \in S_{\ell _\infty ^2}\) can be written as either \( y = (\alpha , \pm 1)\) or \( y = (\pm 1, \alpha ), \) where \(-1\le \alpha \le 1.\) It is straightforward to observe that whenever \(y = (\alpha , \pm 1),\) \(\min \{\Vert x_0-y\Vert _\infty , \Vert x_0+y\Vert _\infty \} = 1.\) On the other hand, \(\min \{\Vert x_0-y\Vert _\infty , \Vert x_0+y\Vert _\infty \} = |\alpha | \le 1,\) when \(y = (\pm 1, \alpha ).\) Therefore, \(\beta ((1, 0)) = 1.\) Clearly, for any \(y = (\alpha , \pm 1)\) with \( 0< \alpha < 1,\) we have that

$$\begin{aligned} \min \{\Vert x_0-y\Vert _\infty , \Vert x_0+y\Vert _\infty \} = \min \{ 1, |1+\alpha |\} = 1 = \beta ((1, 0)). \end{aligned}$$

In particular, we observe that isosceles orthogonality is not a necessary condition for the attainment of \( \beta (x), \) where \( x \in S_{{\mathbb {X}}}. \)
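The computation in Example 2.5 can be double-checked by brute force. In the sketch below (our own illustration; the discretization of the square \(S_{\ell _\infty ^2}\) is an arbitrary choice), the supremum defining \(\beta ((1, 0))\) evaluates to 1, and the pair \(x_0 = (1, 0),\) \(y = (\frac{1}{2}, 1)\) attains it without being isosceles orthogonal.

```python
# Brute-force verification of Example 2.5 in l_inf^2 (illustrative sketch;
# the sampling of the unit square's boundary is our own choice).
def norm_inf(v):
    return max(abs(v[0]), abs(v[1]))

def square_sphere(n=201):
    # points on the boundary of [-1, 1]^2, the unit sphere of l_inf^2
    ts = [-1 + 2 * k / (n - 1) for k in range(n)]
    return ([(t, 1.0) for t in ts] + [(t, -1.0) for t in ts] +
            [(1.0, t) for t in ts] + [(-1.0, t) for t in ts])

x0 = (1.0, 0.0)
beta = max(min(norm_inf((x0[0] + y[0], x0[1] + y[1])),
               norm_inf((x0[0] - y[0], x0[1] - y[1]))) for y in square_sphere())
print(beta)  # 1.0, in agreement with beta((1, 0)) = 1

y = (0.5, 1.0)                               # a pair attaining the supremum
s = norm_inf((x0[0] + y[0], x0[1] + y[1]))   # 1.5
d = norm_inf((x0[0] - y[0], x0[1] - y[1]))   # 1.0
print(min(s, d), s == d)  # 1.0 False: attainment without isosceles orthogonality
```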

Remark 2.6

For another local constant \( \alpha (x),\) introduced in [6], using similar arguments as in Theorem 2.4, we conclude that if \( x_0, y_0 \in S_{{\mathbb {X}}}\) with \( \max \{\Vert x_0 - y_0\Vert , \Vert x_0 + y_0\Vert \} = \alpha (x_0)\) then \(x_0 \perp _{I} y_0,\) provided \({\mathbb {X}}\) is strictly convex.

Regarding the attainment of the local James constant \( \beta (x) \) in an arbitrary Banach space, we have already noticed that if there exist \(x_0, y_0 \in S_{{\mathbb {X}}}\) such that \(\min \{\Vert x_0 - y_0\Vert , \Vert x_0 + y_0\Vert \} = \beta (x_0)\) then \(x_0 \not \perp _{I} y_0,\) in general. However, as illustrated in the following theorem, we are going to observe a stronger behavior with respect to isosceles orthogonality, in the case of attainment of the corresponding global constant \(J({\mathbb {X}}).\)

Theorem 2.7

Let \({\mathbb {X}}\) be a normed space. Let \(u, v \in S_{{\mathbb {X}}}\) be such that \(\min \{\Vert u-v\Vert , \Vert u+v\Vert \} = J({\mathbb {X}}).\) Then \(u\perp _{I} v.\)

Proof

We prove the theorem by considering the following two possible cases.

Case (i): Let us assume that \(J({\mathbb {X}}) = 2.\) Then \(\min \{\Vert u-v\Vert , \Vert u+v\Vert \}= 2. \) It is trivial to see that \(\max \{\Vert x+y\Vert , \Vert x-y\Vert \} \le 2,\) for all \(x,y \in S_{{\mathbb {X}}}.\) Therefore, it necessarily follows that \(\Vert u+ v\Vert = \Vert u-v\Vert = 2,\) i.e., \(u \perp _{I} v.\)

Case (ii): Suppose that \(J({\mathbb {X}}) < 2.\) Consider the set \(S = \{\epsilon \in [0, 2) : \epsilon < 2-2\delta _{{\mathbb {X}}}(\epsilon )\},\) where \(\delta _{{\mathbb {X}}}(\epsilon )= \inf \{1-\frac{\Vert x+y\Vert }{2} : x, y \in S_{{\mathbb {X}}} ~ and ~ \Vert x-y\Vert \ge \epsilon \}.\) From Theorem 1.4, we observe that \( \sup S = J({\mathbb {X}}) < 2.\) Suppose on the contrary that \(u \not \perp _{I} v.\) Without loss of generality, let us assume that \(\Vert u+v\Vert > \Vert u - v\Vert .\) Also, let \(\Vert u - v\Vert = \epsilon _0 = J({\mathbb {X}}) <2 .\) Then, \(1- \frac{\Vert u+v\Vert }{2} < 1- \frac{\epsilon _0}{2}\) implies that \(\delta _{{\mathbb {X}}}(\epsilon _0) < 1 - \frac{\epsilon _0}{2},\) i.e., \(\epsilon _0 < 2 - 2\delta _{{\mathbb {X}}}(\epsilon _0).\) Therefore, \(\epsilon _0 \in S. \) Now, from [10, Cor. 5], we note that \(\delta _{{\mathbb {X}}}(\epsilon )\) is a continuous function on [0, 2). Therefore, it is easy to verify that S is an open set in \( {\mathbb {R}}, \) with its usual topology. Since \(\epsilon _0 \in S\) and S is open, it follows that there exists \( \mu _0 > 0 \) such that \( \epsilon _0 + \mu _0 \in S, \) which contradicts our assumption that \( \sup S = J({\mathbb {X}}) = \epsilon _0. \) Hence \(\Vert u - v\Vert = \Vert u+ v\Vert ,\) i.e., \(u \perp _{I} v,\) as claimed. \(\square \)

Remark 2.8

For \(x \in S_{\mathbb {X}}\) and \(\epsilon \in [0, 1),\) let us consider the set \(A(x, \epsilon ) = \{ y\in S_{\mathbb {X}} : x\perp _I^{\epsilon } y \}.\) Now it is easy to see that for any \(\epsilon > 0,\) if \(z \in S_{\mathbb {X}} \setminus A(x, \epsilon )\) then \(\min \{ \Vert x+z\Vert , \Vert x-z\Vert \} < J({\mathbb {X}}).\) For, otherwise, from Theorem 2.7 we obtain \(x \perp _I z,\) and hence \(z \in A(x, \epsilon ),\) which contradicts \(z \in S_{\mathbb {X}} \setminus A(x, \epsilon ).\)

In view of Example 2.5, it is natural to speculate whether strict convexity is essential for Theorem 2.4. We answer this in the negative by means of the following explicit example, constructed with the help of Theorem 2.7.

Let us recall from [9] that for each \(\theta \in {\mathbb {R}},\) the \(\theta \)-rotation matrix \(R(\theta )\) is given by

$$\begin{aligned} R(\theta ) = \begin{pmatrix} \cos \theta &{} -\sin \theta \\ \sin \theta &{} \cos \theta \end{pmatrix}. \end{aligned}$$

A norm \(\Vert . \Vert \) on \( {\mathbb {R}}^2\) is said to be \( \theta \)-invariant if \(R(\theta )\) is an isometry on \(({\mathbb {R}}^2, \Vert . \Vert ).\)

Example 2.9

Let \({\mathbb {X}}\) be the two-dimensional Banach space, identified with \( {\mathbb {R}}^2, \) endowed with the norm \( \Vert (x, y)\Vert = \max \{|x|, |y|, 2^{-1/2} (|x| + |y|)\} \) for any \((x, y) \in {\mathbb {R}}^2.\) It is easy to verify that \(S_{\mathbb {X}}\) is a regular octagon, with vertices \(\pm v_1 = \pm (1, \sqrt{2}-1), \pm v_2= \pm (\sqrt{2}-1, 1), \pm v_3=\pm (1-\sqrt{2}, 1), \pm v_4= \pm (-1, \sqrt{2}-1).\) The unit sphere is shown in the following figure:

It is easy to see that the given norm on \({\mathbb {R}}^2\) is \(\frac{\pi }{4}\)-invariant. Let \(E_1\) be the edge joining the vertices \(-v_4\) and \(v_1.\) Since the \(\frac{\pi }{4}\)-rotation maps \(S_{{\mathbb {X}}}\) isometrically onto itself, the following two conditions are equivalent.

(i) If for any \({\widetilde{x}}\in E_1\) there exists an \({\widetilde{y}} \in S_{{\mathbb {X}}}\) such that \(\min \{\Vert {\widetilde{x}} - {\widetilde{y}}\Vert , \Vert {\widetilde{x}} + {\widetilde{y}}\Vert \} = \beta ({\widetilde{x}})\) then \({\widetilde{x}} \perp _{I} {\widetilde{y}}.\)

(ii) If for any \({\widetilde{x}}\in S_{{\mathbb {X}}}\) there exists an \({\widetilde{y}} \in S_{{\mathbb {X}}}\) such that \(\min \{\Vert {\widetilde{x}} - {\widetilde{y}}\Vert , \Vert {\widetilde{x}} + {\widetilde{y}}\Vert \} = \beta ({\widetilde{x}})\) then \({\widetilde{x}} \perp _{I} {\widetilde{y}}.\)

We will show that (i) holds true. Any \(u\in E_1\) can be written as \( u = (1, \gamma ),\) where \(|\gamma |\le \sqrt{2}-1.\) Also, given any \(u = (1, \gamma ) \in E_1,\) we have \(v = \pm (-\gamma , 1)\in S_{\mathbb {X}}\) such that \( u \perp _I v.\) From Lemma 1.3, we obtain that \( \beta (u)= \Vert u - v\Vert = \sqrt{2}. \) On the other hand, from Proposition 1.5 it follows that \(J({\mathbb {X}}) = \sqrt{2}.\) This implies that \( (u, v) \in M_{J({\mathbb {X}})}.\)

Since \(\beta ({\widetilde{x}}) = \sqrt{2}\) for any \({\widetilde{x}} \in E_1,\) it is easy to see that \(\min \{ \Vert {\widetilde{x}}+{\widetilde{y}}\Vert , \Vert {\widetilde{x}}-{\widetilde{y}}\Vert \}= \beta ({\widetilde{x}})\) implies that \(({\widetilde{x}}, {\widetilde{y}})\in M_{J({\mathbb {X}})}.\) Now applying Theorem 2.7, we conclude that \({\widetilde{x}} \perp _{I} {\widetilde{y}}.\)

In particular, Theorem 2.4 may indeed hold true for certain Banach spaces which are not strictly convex.
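The computations underlying Example 2.9 are easy to verify numerically. The sketch below (our own illustration; the sample values of \(\gamma \) are arbitrary choices) checks that \(u = (1, \gamma )\) and \(v = (-\gamma , 1)\) lie on \(S_{\mathbb {X}}\) and satisfy \(\Vert u + v\Vert = \Vert u - v\Vert = \sqrt{2}.\)

```python
import math

def octagon_norm(v):
    # the norm of Example 2.9: ||(x, y)|| = max{|x|, |y|, (|x| + |y|)/sqrt(2)}
    x, y = v
    return max(abs(x), abs(y), (abs(x) + abs(y)) / math.sqrt(2))

for gamma in (0.0, 0.2, math.sqrt(2) - 1):   # sample values with |gamma| <= sqrt(2) - 1
    u = (1.0, gamma)
    v = (-gamma, 1.0)
    # both points lie on the unit sphere of the octagonal norm
    assert abs(octagon_norm(u) - 1.0) < 1e-12 and abs(octagon_norm(v) - 1.0) < 1e-12
    s = octagon_norm((u[0] + v[0], u[1] + v[1]))
    d = octagon_norm((u[0] - v[0], u[1] - v[1]))
    print(round(s, 6), round(d, 6))  # both 1.414214: u is isosceles orthogonal to v
```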

In view of Theorem 2.7, we may also consider the Schäffer constant, a complementary notion to the James constant. Using the method of [6, Th. 3.3], it can be shown similarly that:

$$\begin{aligned} S({\mathbb {X}}) = \inf \{ \epsilon : \epsilon > 2 - 2\rho _{{\mathbb {X}}}(\epsilon )\}. \end{aligned}$$

Recall that \( \rho _{{\mathbb {X}}}(\epsilon )\) is continuous and convex on [0, 2] (see [14]). Therefore, applying similar methods as in Theorem 2.7, we obtain the following result.

Theorem 2.10

Let \({\mathbb {X}}\) be a normed space. Let \( u, v \in S_{{\mathbb {X}}}\) be such that \(\max \{\Vert u - v\Vert , \Vert u + v\Vert \} = S({\mathbb {X}}). \) Then \(u \perp _{I} v.\)

Next we show that in a two-dimensional polyhedral Banach space, the James constant is always attained at one of the extreme points of the unit ball. To achieve this we need the following lemma.

Lemma 2.11

Let \({\mathbb {X}}\) be a two-dimensional Banach space. Let \( v_1, v_2 \in S_{\mathbb {X}}\) be such that \( v_1 \not = \pm v_2\) and \(v_1 \prec v_2.\) Suppose that \(w_1, w_2 \in S_{\mathbb {X}}\) are such that \( v_i \perp _I w_i\) and \( -w_i \prec v_i \prec w_i, \) for \( i \in \{1, 2\}.\) Then \( w_1 \prec w_2.\)

Proof

It follows from Theorem 1.6 that \( w_1 \ne \pm w_2.\) Suppose on the contrary that \( w_1 \not \prec w_2.\) Then \( w_2 \prec w_1.\) Therefore, the only possibility is that \( v_1 \prec v_2 \prec w_2 \prec w_1 \prec -v_1.\) This implies that the ray \( [0, w_2\rangle \) lies in between the rays \([0, v_1\rangle \) and \([0, w_1\rangle \) and the ray \( [0, v_2\rangle \) lies in between the rays \([0, v_1\rangle \) and \([0, w_2\rangle .\) Now applying Lemma 1.2 we get,

$$\begin{aligned} \Vert v_1 - w_2\Vert \le \Vert v_1 - w_1\Vert = \Vert v_1 + w_1\Vert \le \Vert v_1 + w_2\Vert , \end{aligned}$$

and

$$\begin{aligned} \Vert v_1 - w_2 \Vert = \Vert w_2 - v_1\Vert \ge \Vert w_2 - v_2 \Vert = \Vert w_2 + v_2\Vert \ge \Vert w_2 + v_1\Vert = \Vert v_1 + w_1\Vert . \end{aligned}$$

Thus \( \Vert v_1 + w_2\Vert = \Vert v_1 - w_2\Vert ,\) which shows that \(v_1\) is isosceles orthogonal to \(w_2.\) Since \( w_1 \ne \pm w_2,\) this contradicts the uniqueness assertion of Theorem 1.6. Therefore, \( w_1 \prec w_2,\) as desired. \(\square \)

The following important remark, which is immediate from the above lemma, is also relevant for the proof of our next theorem.

Remark 2.12

Let \({\mathbb {X}}\) be a two-dimensional Banach space. Let \( v_1, v_2 \in S_{\mathbb {X}}\) be such that \( v_1 \not = \pm v_2\) and let \(w_1, w_2 \in S_{\mathbb {X}}\) be such that \( v_i \perp _I w_i\) and \( -w_i \prec v_i \prec w_i, \) for \( i \in \{1, 2\}.\) Without loss of generality we can assume that \(v_1 \prec v_2.\) Suppose \( v \in S_{\mathbb {X}}\) is such that the ray \( [0, v\rangle \) lies in between the rays \( [0, v_1\rangle \) and \([0, v_2\rangle ,\) which implies that \(v_1 \prec v \prec v_2,\) and let \( w \in S_{\mathbb {X}}\) be such that \( v \perp _I w\) and \( -w \prec v \prec w.\) From Lemma 2.11, it can be concluded that \( w_1 \prec w \prec w_2.\) In other words, the ray \([0, w\rangle \) lies in between the rays \( [0, w_1\rangle \) and \([0, w_2\rangle .\)

We are now in a position to prove the following result.

Theorem 2.13

Let \( {\mathbb {X}}\) be a two-dimensional polyhedral Banach space. Then there exists \(z \in E_{\mathbb {X}}\) such that \(\beta (z) = J({\mathbb {X}}),\) i.e., \( \Vert z + y\Vert = \Vert z - y\Vert = J({\mathbb {X}}),\) where \(y \in S_{{\mathbb {X}}}\) and \( z \perp _{I} y.\)

Proof

Let \(v_1, v_2\) be two extreme points of \( B_{\mathbb {X}}\) such that \( v_1 \prec v_2 \) and \( tv_1 + (1-t) v_2 \in S_{{\mathbb {X}}}, \) for all \(t \in [0,1].\) Then there exist \( w_1, w_2 \in S_{{\mathbb {X}}}\) such that \( v_1 \perp _{I} w_1, v_2 \perp _I w_2\) and \( w_1 \prec w_2,\) by using Lemma 2.11. We consider the following two cases:

Case 1: At first we consider that \( \lambda w_1 + (1 - \lambda )w_2 \in S_{\mathbb {X}},\) for all \( \lambda \in [0, 1].\) For any \(v \in [v_1, v_2],\) we can write \(v = t_0v_1 + (1-t_0)v_2,\) for some \(t_0 \in [0, 1].\) Take \(w \in S_{\mathbb {X}}\) such that \( v \perp _I w.\) By virtue of Remark 2.12, it follows that the ray \( [0, w\rangle \) lies in between the rays \( [0, w_1\rangle \) and \([0, w_2\rangle .\) Now, if \( w = t_0w_1+ (1- t_0)w_2,\) then using Lemma 1.3, we get

$$\begin{aligned} \beta (v)= & {} \Vert v - w\Vert \\= & {} \Vert t_0v_1 + (1-t_0)v_2 - t_0w_1 - (1-t_0)w_2\Vert \\\le & {} t_0\Vert v_1-w_1\Vert + (1-t_0)\Vert v_2-w_2\Vert \\= & {} t_0 \beta (v_1) + (1-t_0) \beta (v_2). \end{aligned}$$

If \( w \not = t_0w_1+ (1- t_0)w_2\) then either \(\Vert v - w\Vert \le \Vert v- (t_0w_1+ (1- t_0)w_2)\Vert \) or \(\Vert v - w\Vert > \Vert v - (t_0w_1+ (1- t_0)w_2) \Vert .\) Applying Lemma 1.2, it is straightforward to see that in the latter case we have \(\Vert v + w\Vert \le \Vert v + t_0w_1+ (1- t_0)w_2\Vert .\) Therefore, we get,

$$\begin{aligned} \beta (v)= & {} \Vert v \pm w\Vert \\= & {} \Vert t_0v_1 + (1-t_0)v_2 \pm w\Vert \\\le & {} \Vert t_0v_1 + (1-t_0)v_2 \pm (t_0w_1 + (1-t_0)w_2)\Vert \\\le & {} t_0\Vert v_1\pm w_1\Vert + (1-t_0)\Vert v_2 \pm w_2\Vert \\= & {} t_0 \beta (v_1) + (1-t_0) \beta (v_2). \end{aligned}$$

Therefore, \( \beta (v) \le \max \{ \beta (v_1), \beta (v_2)\}.\)

Case 2: Let \(\{\lambda w_1 + (1 -\lambda )w_2: \lambda \in [0,1] \}\not \subset S_{\mathbb {X}}.\) Assume that there exist k extreme points \(x_1, x_2, \ldots , x_k\) lying in between the rays \( [0, w_1\rangle \) and \([0, w_2\rangle \) such that \(w_1\prec x_1 \prec x_2\prec \ldots \prec x_k\prec w_2 . \) Then following Remark 2.12, we get \( z_1, z_2, \ldots , z_k \in [v_1, v_2]\) such that \( v_1\prec z_1\prec z_2 \prec \ldots \prec z_k \prec v_2\) and \( z_i \perp _{I} x_i,\) for \( 1 \le i \le k. \) Considering the segments \([v_1, z_1],\) \( [z_1, z_2], \ldots , [z_k, v_2] \) in place of \([v_1, v_2]\) and applying similar arguments as in Case 1, we get, for any \(v \in [v_1, v_2],\)

$$\begin{aligned} \beta (v)\le & {} \max \{ \beta (v_1), \beta (z_1), \ldots , \beta (z_k), \beta (v_2) \}\\= & {} \max \{ \beta (v_1), \beta (x_1), \ldots , \beta (x_k), \beta (v_2)\}. \end{aligned}$$

Therefore, we observe that for any \( v \in [v_1, v_2],\) there exists \(z \in E_{\mathbb {X}}\) such that \(\beta (v) \le \beta (z).\) As \(v_1, v_2\) are chosen arbitrarily, we can conclude that for any \( v \in S_{\mathbb {X}}\) there exists \(z \in E_{\mathbb {X}}\) such that \( \beta (v) \le \beta (z).\) This completes the proof of the theorem. \(\square \)

The following remark is immediate from Theorem 2.13.

Remark 2.14

Let \({\mathbb {X}}\) be a two-dimensional polyhedral Banach space. Suppose that \(\pm v_1, \pm v_2, \ldots , \pm v_m\) are the extreme points of \(B_{\mathbb {X}}.\) From Theorem 2.13, it can be easily seen that to find the James constant \( J({\mathbb {X}}), \) we only need to deal with the extreme points of the unit ball of \({\mathbb {X}}.\) Indeed, we can compute the James constant \( J({\mathbb {X}}) \) in a more efficient way by the formula:

$$\begin{aligned} J({\mathbb {X}}) = \max _{1 \le i \le m} \beta (v_i) = \max \{ \Vert v_i + w_i \Vert : 1 \le i \le m,~ w_i \in S_{\mathbb {X}} ~ and ~ v_i \perp _I w_i\}. \end{aligned}$$
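Remark 2.14 suggests a simple computational procedure. The sketch below (our own illustration; the supremum over y is approximated on a finite sample of \(S_{\mathbb {X}},\) whose resolution is an arbitrary choice) evaluates \(\beta \) only at the extreme points. For \(\ell _\infty ^2,\) whose extreme points are \((\pm 1, \pm 1),\) it returns the expected value \(J(\ell _\infty ^2) = 2.\)

```python
import math

def sphere_sample(norm, n=2000):
    # radially project n directions onto the unit sphere of the given norm
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        d = (math.cos(t), math.sin(t))
        r = norm(d)
        pts.append((d[0] / r, d[1] / r))
    return pts

def beta(x, norm, sample):
    # local James constant beta(x) = sup_y min{||x+y||, ||x-y||}, approximated
    return max(min(norm((x[0] + y[0], x[1] + y[1])),
                   norm((x[0] - y[0], x[1] - y[1]))) for y in sample)

def james_polyhedral(extreme_points, norm, n=2000):
    # per Remark 2.14: J(X) is the maximum of beta over the extreme points
    sample = sphere_sample(norm, n)
    return max(beta(v, norm, sample) for v in extreme_points)

max_norm = lambda v: max(abs(v[0]), abs(v[1]))
# one vertex per +- pair suffices, since beta(x) = beta(-x)
print(round(james_polyhedral([(1.0, 1.0), (1.0, -1.0)], max_norm), 3))  # 2.0
```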

In the following example, we will show the applicability of Theorem 2.13 towards explicitly computing the James constant, as described in Remark 2.14.

Example 2.15

Consider a two-dimensional polyhedral Banach space \({\mathbb {X}}\) whose unit sphere is an irregular hexagon, as shown in the following figure:

The vertices of \(B_{\mathbb {X}}\) are \( \pm v_1=\pm (1, -1),\pm v_2= \pm (1, 1), \pm v_3 = \pm (\frac{1}{2}, 2).\) Clearly, \(\beta (x) = \beta (-x),\) for any \(x \in S_{\mathbb {X}},\) so that we only need to calculate \(\beta (1, -1), \beta (1, 1), \) \( \beta (\frac{1}{2}, 2).\) By a straightforward computation, we have \( (1, -1) \perp _I \pm (\frac{9}{13}, \frac{21}{13}),\) \((1, 1) \perp _I \pm (-\frac{5}{17}, \frac{25}{17})\) and \((\frac{1}{2}, 2) \perp _I \pm (1, -\frac{2}{7}).\) Using Lemma 1.3, we obtain that

$$\begin{aligned} \beta (1, -1)= & {} \Vert (1, -1) + (\frac{9}{13}, \frac{21}{13})\Vert = \Vert (\frac{22}{13}, \frac{8}{13}) \Vert = \frac{22}{13},\\ \beta (1, 1)= & {} \Vert (1, 1) + (-\frac{5}{17}, \frac{25}{17})\Vert = \Vert (\frac{12}{17}, \frac{42}{17})\Vert = \frac{22}{17},\\ \beta (\frac{1}{2}, 2)= & {} \Vert (\frac{1}{2}, 2) + (1, -\frac{2}{7})\Vert = \Vert (\frac{3}{2}, \frac{12}{7})\Vert = \frac{11}{7}. \end{aligned}$$

Thus \( J({\mathbb {X}}) = \max \{ \frac{22}{13}, \frac{22}{17}, \frac{11}{7}\} = \frac{22}{13}.\)
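Since every vector appearing in Example 2.15 has rational coordinates, the computation can be double-checked in exact arithmetic. The following sketch is ours: the three functionals in `hex_norm` are read off from the edge equations \(x = 1,\) \(2x + y = 3\) and \(-2x + 3y = 5\) of the hexagon, and the pairs \((v_i, w_i)\) are exactly those of the example.

```python
from fractions import Fraction as F

# Norm whose unit ball is the hexagon with vertices +/-(1,-1), +/-(1,1),
# +/-(1/2,2): the maximum of the absolute values of the supporting
# functionals of its three edge-pairs.
def hex_norm(p):
    x, y = p
    return max(abs(x), abs((2*x + y) / 3), abs((-2*x + 3*y) / 5))

def add(p, q): return (p[0] + q[0], p[1] + q[1])
def sub(p, q): return (p[0] - q[0], p[1] - q[1])

pairs = [
    ((F(1), F(-1)),   (F(9, 13), F(21, 13))),   # v1 and w1 with v1 ⊥_I w1
    ((F(1), F(1)),    (F(-5, 17), F(25, 17))),  # v2 and w2
    ((F(1, 2), F(2)), (F(1), F(-2, 7))),        # v3 and w3
]

betas = []
for v, w in pairs:
    assert hex_norm(v) == 1 and hex_norm(w) == 1       # both unit vectors
    assert hex_norm(add(v, w)) == hex_norm(sub(v, w))  # isosceles orthogonality
    betas.append(hex_norm(add(v, w)))

print(betas)       # [Fraction(22, 13), Fraction(22, 17), Fraction(11, 7)]
print(max(betas))  # 22/13, the James constant J(X)
```

The assertions confirm that each \(w_i\) lies on the unit sphere, that \(v_i \perp _I w_i,\) and that \(J({\mathbb {X}}) = \frac{22}{13}.\)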

The above example illustrates that the problem of finding the James constant in a two-dimensional polyhedral Banach space \({\mathbb {X}} \) is equivalent to calculating the local constant \(\beta (x)\) only for the finitely many extreme points of \( B_{{\mathbb {X}}}.\)

In the rest of the article, we study approximate isosceles orthogonality and its role in the attainment of the modulus of convexity, an important geometric constant associated with a given normed space. We begin with the following basic observation.

Proposition 2.16

Let \( {\mathbb {X}} \) be a normed space and let \(x, y \in S_{{\mathbb {X}}} \) with \(x \ne \pm y.\) Then there exists an \(\epsilon \in [0, 1)\) such that \( x \perp _{I}^{\epsilon } y.\)

Proof

If \( x \perp _I y \) then we are done by taking \( \epsilon = 0. \) Suppose that \( x \not \perp _I y.\) Since \(x \ne \pm y,\) both \(\Vert x+y\Vert \) and \(\Vert x-y\Vert \) lie in \((0, 2],\) so that \( |\Vert x+y\Vert ^2 - \Vert x-y \Vert ^2| < 4,\) and since \( x \not \perp _I y,\) this quantity is nonzero. Thus \( |\Vert x+y\Vert ^2 - \Vert x-y \Vert ^2| = 4 - \epsilon _0,\) for some \( 0<\epsilon _0<4. \) Therefore, choosing \( \epsilon \in [\frac{4-\epsilon _0}{4}, 1) \) we conclude that \( |\Vert x+ y\Vert ^2 - \Vert x - y \Vert ^2 |\le 4\epsilon ,\) i.e., \( x \perp _I^{\epsilon } y. \) \(\square \)
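The proof in effect computes the smallest admissible \(\epsilon ,\) namely \(\frac{1}{4}|\Vert x+y\Vert ^2 - \Vert x-y\Vert ^2|\) for unit vectors x, y. A minimal numerical illustration in the Euclidean plane (the particular vectors are our choice; there \(\Vert x+y\Vert ^2 - \Vert x-y\Vert ^2 = 4\langle x, y\rangle \)):

```python
import math

def eps_min(x, y):
    # Smallest eps with |‖x+y‖^2 - ‖x-y‖^2| <= 4*eps, for unit vectors x, y.
    s = math.hypot(x[0] + y[0], x[1] + y[1]) ** 2
    d = math.hypot(x[0] - y[0], x[1] - y[1]) ** 2
    return abs(s - d) / 4

# Unit vectors at a 60-degree angle; in the Euclidean plane
# eps_min(x, y) = |<x, y>| = cos(60°) = 0.5.
x = (1.0, 0.0)
y = (math.cos(math.pi / 3), math.sin(math.pi / 3))
e = eps_min(x, y)
assert 0 <= e < 1            # Proposition 2.16: some eps in [0, 1) works
assert abs(e - 0.5) < 1e-9
```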

For a given \(\epsilon \in [0,2],\) let us define the set:

$$\begin{aligned} M_{\delta _{{\mathbb {X}}}(\epsilon )} = \Big \{ (x, y) \in S_{\mathbb {X}} \times S_{\mathbb {X}} : \Vert x - y\Vert \ge \epsilon \text{ and } 1- \frac{\Vert x +y\Vert }{2} = \delta _{{\mathbb {X}}}(\epsilon ) \Big \}. \end{aligned}$$

\( M_{\delta _{{\mathbb {X}}}(\epsilon )} \) is called the attainment set of \(\delta _{{\mathbb {X}}}(\epsilon ),\) for any \(\epsilon \in [0, 2].\) It is clear that whenever \({\mathbb {X}}\) is finite-dimensional, \(M_{\delta _{{\mathbb {X}}}(\epsilon )} \ne \emptyset .\) Our next result shows that the attainment of \(\delta _{{\mathbb {X}}}(\epsilon )\) is closely related to approximate isosceles orthogonality.

Theorem 2.17

Let \({\mathbb {X}}\) be a normed space. Let \( M_{\delta _{{\mathbb {X}}}(\epsilon )} \ne \emptyset ,\) for some \(\epsilon \in (0, 2).\) Then there exists \((u_0, v_0) \in M_{\delta _{{\mathbb {X}}}(\epsilon )}\) such that \( u_0 \perp _I^{\epsilon _0} v_0,\) where \(\epsilon _0 = |1+ \delta _{{\mathbb {X}}}(\epsilon )^2 - 2\delta _{{\mathbb {X}}}(\epsilon )- \frac{\epsilon ^2}{4}|\in [0, 1).\)

Proof

Suppose that \((u, v) \in M_{\delta _{{\mathbb {X}}}(\epsilon )}.\) Since \(\epsilon \in (0, 2),\) it is clear that \( u \ne \pm v.\) Consider the set \( P_u = \{ w \in S_{\mathbb {X}} : \Vert u - w\Vert = \epsilon \}.\) We claim that there exists \(w' \in P_u\) such that \((u, w') \in M_{\delta _{{\mathbb {X}}}(\epsilon )}.\) If \(v \in P_u\) then our claim holds true. Let us now assume that \(v \notin P_u.\) Suppose on the contrary that the claim is not true. Then clearly, \( \delta _{{\mathbb {X}}}(\epsilon ) < 1 - \frac{\Vert u+w\Vert }{2}\) for all \(w \in P_u,\) i.e., \( \Vert u+v\Vert > \Vert u + w\Vert .\) Considering the two-dimensional subspace \({\mathbb {Y}} = span\{u, v\}\) and applying Lemma 1.2, we obtain that \(\Vert u - v\Vert \le \Vert u -w\Vert \) for all \( w \in P_u.\) As \(v \notin P_u,\) we have \(\Vert u-v\Vert < \Vert u-w\Vert \) for all \(w \in P_u,\) which is a contradiction to the fact that \( \Vert u-v\Vert \ge \epsilon .\) This establishes our claim. It is now easy to observe that there exists \((u_0, v_0) \in M_{\delta _{{\mathbb {X}}}(\epsilon )}\) such that \(\Vert u_0 -v_0\Vert = \epsilon .\) This implies that \(|\Vert u_0 + v_0\Vert ^2 - \Vert u_0 - v_0\Vert ^2| = 4 |1+ \delta _{{\mathbb {X}}}(\epsilon )^2 - 2\delta _{{\mathbb {X}}}(\epsilon )- \frac{\epsilon ^2}{4}|.\) Let \(\epsilon _0 = |1+ \delta _{{\mathbb {X}}}(\epsilon )^2 - 2\delta _{{\mathbb {X}}}(\epsilon )- \frac{\epsilon ^2}{4}|.\) Then \( 0 \le \epsilon _0 < 1\) and \(|\Vert u_0 + v_0\Vert ^2 - \Vert u_0 - v_0\Vert ^2| = 4\epsilon _0,\) which shows that \( u_0 \perp _I^{\epsilon _0} v_0.\) \(\square \)
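For a Hilbert space the modulus of convexity is known in closed form, \(\delta (\epsilon ) = 1 - \sqrt{1 - \frac{\epsilon ^2}{4}},\) so the conclusion of Theorem 2.17 can be checked directly. The sketch below (with \(\epsilon = 1.2,\) our choice) builds an attainment pair on the Euclidean circle and verifies \(u_0 \perp _I^{\epsilon _0} v_0\) with \(\epsilon _0\) given by the theorem's formula:

```python
import math

eps = 1.2                                  # any eps in (0, 2); our choice
d = 1 - math.sqrt(1 - eps**2 / 4)          # delta(eps) for a Hilbert space
eps0 = abs(1 + d*d - 2*d - eps**2 / 4)     # eps_0 from Theorem 2.17

# An attainment pair (u0, v0) on the Euclidean circle with ‖u0 - v0‖ = eps:
t = math.asin(eps / 2)
u0 = (math.cos(t),  math.sin(t))
v0 = (math.cos(t), -math.sin(t))

lhs = abs(math.hypot(u0[0] + v0[0], u0[1] + v0[1]) ** 2
          - math.hypot(u0[0] - v0[0], u0[1] - v0[1]) ** 2)
assert abs(math.hypot(u0[0] - v0[0], u0[1] - v0[1]) - eps) < 1e-9
assert abs(1 - math.hypot(u0[0] + v0[0], u0[1] + v0[1]) / 2 - d) < 1e-9
assert 0 <= eps0 < 1
assert lhs <= 4 * eps0 + 1e-9              # u0 ⊥_I^{eps0} v0
```

Here \(\epsilon _0 = |1 - \frac{\epsilon ^2}{2}| = 0.28,\) and the bound \(|\Vert u_0+v_0\Vert ^2 - \Vert u_0-v_0\Vert ^2| = 4\epsilon _0\) is attained with equality, as in the proof.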

In case \({\mathbb {X}}\) is strictly convex, we have the following corollary to the above theorem.

Corollary 2.18

Let \({\mathbb {X}}\) be a strictly convex normed space and let \( \epsilon \in (0,2).\) If \((u, v) \in M_{\delta _{{\mathbb {X}}}(\epsilon )}\) then \(u \perp _{I}^{\epsilon _0} v ,\) where \(\epsilon _0 = |1+ \delta _{{\mathbb {X}}}(\epsilon )^2 - 2\delta _{{\mathbb {X}}}(\epsilon )- \frac{\epsilon ^2}{4}|\in [0, 1).\)

Proof

Given \( \epsilon \in (0, 2),\) we only need to show that for any \((u, v) \in M_{\delta _{{\mathbb {X}}}(\epsilon )},\) it necessarily follows that \( \Vert u-v\Vert = \epsilon .\) Suppose on the contrary that \( \Vert u-v\Vert > \epsilon .\) Consider the set \( P_u = \{ w \in S _{{\mathbb {X}}} : \Vert u-w \Vert = \epsilon \}.\) Clearly, \( v \not \in P_u\) and for any \( w \in P_u,\) we have that \( \Vert u-v \Vert > \Vert u-w\Vert .\) Therefore, by Lemma 1.2, together with strict convexity, we get \( \Vert u+v\Vert < \Vert u+w\Vert \) and so \( 1 - \frac{1}{2}\Vert u+v\Vert > 1 - \frac{1}{2}\Vert u+w\Vert ,\) which contradicts the fact that \(\delta _{{\mathbb {X}}}(\epsilon ) = 1-\frac{\Vert u+v\Vert }{2}.\) Now proceeding similarly as in the proof of Theorem 2.17, we obtain the desired conclusion. \(\square \)

In connection with the explicit computation of \( \delta _{{\mathbb {X}}}(\epsilon ), \) the following remark seems relevant.

Remark 2.19

For \( \epsilon \in (0, 2),\) let us consider the set:

$$\begin{aligned} G_\epsilon = \{ (u, v) \in S_{{\mathbb {X}}} \times S_{{\mathbb {X}}} : u \perp _{I}^{\epsilon _0} v \text{ and } \Vert u - v\Vert = \epsilon \}, \end{aligned}$$

where \(\epsilon _0 =|1+ \delta _{{\mathbb {X}}}(\epsilon )^2 - 2\delta _{{\mathbb {X}}}(\epsilon )- \frac{\epsilon ^2}{4}|. \) Clearly, \( G_\epsilon \) is a closed set with respect to the usual product topology defined on \( {\mathbb {X}} \times {\mathbb {X}}.\) It can be readily seen that whenever \({\mathbb {X}}\) is finite-dimensional, there exists \( (u_1, v_1) \in G_\epsilon \) such that \( \delta _{{\mathbb {X}}}(\epsilon ) = 1 - \frac{\Vert u_1 + v_1\Vert }{2}. \) Therefore, we conclude that to find the value of \( \delta _{{\mathbb {X}}}(\epsilon ), \) for any \( \epsilon \in (0, 2),\) we only need to take into account the subset \( G_\epsilon .\)
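Remark 2.19 says that, in finite dimensions, it suffices to minimize over pairs at distance exactly \(\epsilon .\) A brute-force sketch on the Euclidean circle (grid size and tolerance are ad hoc choices, and we impose only the distance condition from \(G_\epsilon ,\) which does not change the minimum) compares the two computations against the closed-form Hilbert-space modulus:

```python
import math

def delta_on_circle(eps, n=200000, exact=False):
    # Brute-force inf of 1 - ‖u+v‖/2 over unit-vector pairs; by rotation
    # invariance fix u = (1, 0) and let v range over the circle.
    # exact=True restricts to ‖u - v‖ ≈ eps, as in the set G_eps.
    best = 1.0
    for k in range(n + 1):
        a = 2 * math.pi * k / n
        dist = 2 * math.sin(a / 2)                 # ‖u - v‖ for v at angle a
        keep = abs(dist - eps) < 1e-4 if exact else dist >= eps
        if keep:
            best = min(best, 1 - abs(math.cos(a / 2)))   # 1 - ‖u+v‖/2
    return best

eps = 1.0
closed_form = 1 - math.sqrt(1 - eps**2 / 4)        # Hilbert-space delta(eps)
assert abs(delta_on_circle(eps) - closed_form) < 1e-3
assert abs(delta_on_circle(eps, exact=True) - closed_form) < 1e-3
```

Both the unrestricted minimization and the one restricted to distance-\(\epsilon \) pairs recover \(\delta (\epsilon ),\) in accordance with the remark.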

In [13], the authors explored the geometric structure of the approximate Birkhoff-James orthogonality set. Motivated by this, we study the same in the case of approximate isosceles orthogonality, in our next theorem. For this purpose, we consider the \( \epsilon \)-approximate isosceles orthogonality set \( A(x, \epsilon ), \) corresponding to the vector \( x \in S_{{\mathbb {X}}} \) and \( \epsilon \in [0, 1), \) as defined in Remark 2.8:

$$\begin{aligned} A(x, \epsilon ) = \{ y \in S_{{\mathbb {X}}} : x \perp _{I}^{\epsilon } y \}. \end{aligned}$$

We end the present article with the following characterization of \( A(x, \epsilon ). \)

Theorem 2.20

Let \({\mathbb {X}}\) be a two-dimensional Banach space and let \( \epsilon \in [0, 1).\) Then for any \(x \in S_{\mathbb {X}},\) \(A(x, \epsilon ) = D \cup (-D),\) where D is a connected subset of \(S_{{\mathbb {X}}}.\)

Proof

We note from Theorem 1.6 that for \(x \in S_{{\mathbb {X}}},\) there exists a unique (except for the sign) \(y \in S_{{\mathbb {X}}}\) such that \(x \perp _{I} y.\) For each \(t \in [0,1],\) let \(u_t =\frac{(1-t)x+ty}{\Vert (1-t)x+ty\Vert } \) and \(v_t= \frac{-(1-t)x+ty}{\Vert -(1-t)x+ty\Vert }.\) Consider the sets \( R = \{ t\in [0, 1] : x \perp _{I}^{\epsilon }u_t\}, \) and \( L = \{ t\in [0, 1] : x \perp _{I}^{\epsilon }v_t\}.\) Clearly, \(R, L \ne \emptyset ,\) since \(1 \in R\cap L.\) Next we prove that R and L are closed. Suppose \(\{t_n\}_{n\in {\mathbb {N}}} \subset R\) is such that \(t_n \rightarrow t.\) Then \(x \perp _{I}^{\epsilon } u_{t_n},\) i.e., for every \( n \in {\mathbb {N}},\) we have \(|\Vert x+ u_{t_n}\Vert ^2 - \Vert x- u_{t_n}\Vert ^2| \le 4\epsilon .\) Letting \(n \rightarrow \infty \) and using the continuity of the norm, we obtain \(|\Vert x+ u_{t}\Vert ^2 -\Vert x - u_{t}\Vert ^2| \le 4\epsilon .\) Therefore, \(x \perp _{I}^{\epsilon } u_t.\) This proves that R is closed. Similarly, it can be shown that L is also closed.

Let \(t_R = \inf R\) and let \(t_L = \inf L.\) Then using Lemma 1.2, for any \(t\in [0, 1]\) with \(t\ge t_R,\) we get that \( \Vert x - u_t\Vert \ge \Vert x - u_{t_R}\Vert \) and \( \Vert x + u_t\Vert \le \Vert x + u_{t_R}\Vert .\) This gives \(|\Vert x + u_t\Vert ^2- \Vert x - u_t\Vert ^2| \le |\Vert x + u_{t_R}\Vert ^2 -\Vert x - u_{t_R}\Vert ^2| \le 4\epsilon . \) Therefore, \(x \perp _{I}^{\epsilon } u_t. \) Similarly, one can show that for any \(t \in [0,1]\) with \(t \ge t_L,\) \(x \perp _{I}^{\epsilon } v_t.\) Consider \(D = \Big \{ \frac{s u_{t_R} + (1-s)v_{t_L}}{\Vert s u_{t_R} + (1-s)v_{t_L}\Vert } : 0 \le s \le 1 \Big \}.\) Clearly, D is connected. Moreover, it is easy to see that \( D \cup (-D) \subset A(x, \epsilon ). \) Also, the reverse inclusion \( A(x, \epsilon ) \subset D \cup (-D) \) follows readily from the description of the sets R and L. This completes the proof of the theorem. \(\square \)
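The structure asserted in Theorem 2.20 can also be seen numerically. In the Euclidean plane, \(\Vert x+y\Vert ^2 - \Vert x-y\Vert ^2 = 4\langle x, y\rangle ,\) so for unit vectors \(x \perp _{I}^{\epsilon } y\) reduces to \(|\langle x, y\rangle | \le \epsilon .\) The sketch below (the vector x, the value of \(\epsilon \) and the sampling resolution are our choices) counts the maximal circular arcs of sampled directions lying in \( A(x, \epsilon ) \) and finds exactly two, namely D and \(-D\):

```python
import math

# For unit vectors in the Euclidean plane, ‖x+y‖^2 - ‖x-y‖^2 = 4<x, y>,
# so x ⊥_I^eps y is exactly |<x, y>| <= eps.
def in_A(x, y, eps):
    s = math.hypot(x[0] + y[0], x[1] + y[1]) ** 2
    d = math.hypot(x[0] - y[0], x[1] - y[1]) ** 2
    return abs(s - d) <= 4 * eps

x, eps, n = (1.0, 0.0), 0.3, 3600
flags = [in_A(x, (math.cos(2 * math.pi * k / n),
                  math.sin(2 * math.pi * k / n)), eps) for k in range(n)]

# Number of maximal circular runs of sampled directions lying in A(x, eps);
# flags[k - 1] with k = 0 wraps around, so runs are counted cyclically.
arcs = sum(1 for k in range(n) if flags[k] and not flags[k - 1])
assert arcs == 2        # exactly two antipodal arcs: D and -D
```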