
1 Introduction

We shall introduce the basic notions that will be used throughout the paper.

Let X be a linear topological (Hausdorff) space over the field of real numbers \(\mathbb{R}\). Denote \(\mathbb{R}_{+} = [0,\infty )\) and \(\mathbb{R}^{+} = [0,\infty ]\), and adopt the convention \(0 \cdot \infty = \infty \cdot 0 = 0\). Let A be a subset of X. As usual, for \(\alpha \in \mathbb{R}\),

$$\displaystyle{\alpha A:=\{ y \in X: y =\alpha x\ \mathrm{for}\ x \in A\}.}$$

We shall call A symmetric provided that \(A = -A\). Moreover, a set \(A \subset X\) is said to be bounded (sequentially) (see [6]) iff for every sequence \(\{t_{n}\} \subset \mathbb{R}\) with \(t_{n} \rightarrow 0\) as \(n \rightarrow \infty\) and every sequence \(\{x_{n}\} \subset A\), the sequence \(\{t_{n} \cdot x_{n}\} \subset X\) satisfies \(t_{n} \cdot x_{n} \rightarrow 0\) as \(n \rightarrow \infty\).

We also recall the idea of generalized metric space (briefly gms) introduced by Luxemburg (see [5] and also [2]). Let X be a set. A function

$$\displaystyle{d: X \times X \rightarrow [0,\infty ]}$$

is called a generalized metric on X, provided that for all x, y, z ∈ X,

  1. (i)

    d(x, y) = 0 if and only if x = y,

  2. (ii)

    d(x, y) = d(y, x),

  3. (iii)

    d(x, y) ≤ d(x, z) + d(z, y).

A pair (X, d) is called a generalized metric space.

Clearly, every metric space is a generalized metric space.

Analogously, for a linear space X, we can define a generalized norm and a generalized normed space.

Let us note that any generalized metric d is a continuous function.

Indeed, if \(x_{n},x,y_{n},y \in (X,d)\) for \(n \in \mathbb{N}\) (the set of all natural numbers) and

$$\displaystyle{x_{n} \rightarrow x,\quad y_{n} \rightarrow y\quad \mathrm{as}\quad n \rightarrow \infty }$$

i.e. \(d(x_{n},x) \rightarrow 0\) and \(d(y_{n},y) \rightarrow 0\) as \(n \rightarrow \infty\), then in the case \(d(x,y) <\infty\) one can prove, in the standard way, that

$$\displaystyle{d(x_{n},y_{n}) \rightarrow d(x,y)\quad \mathrm{as}\quad n \rightarrow \infty.}$$

But if \(d(x,y) = \infty\), then for every \(\varepsilon> 0\) we have

$$\displaystyle{d(x,y) \leq d(x,x_{n}) + d(x_{n},y_{n}) + d(y_{n},y)}$$

and hence, choosing \(n_{0} \in \mathbb{N}\) so that \(d(x,x_{n}) + d(y_{n},y) <\varepsilon\) for \(n> n_{0}\),

$$\displaystyle{\infty = d(x,y) \leq d(x_{n},y_{n})+\varepsilon,}$$

i.e. \(d(x_{n},y_{n}) = \infty\) for \(n> n_{0}\) and consequently

$$\displaystyle{\infty = d(x_{n},y_{n}) \rightarrow d(x,y) = \infty \quad \mathrm{as}\quad n \rightarrow \infty,}$$

as claimed.
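The two cases above (finite and infinite distance) can be illustrated numerically. The following sketch is not part of the original argument and all names are illustrative; it uses a generalized metric on two disjoint copies of the real line, with infinite distance between the copies:

```python
import math

def d(p, q):
    # Generalized metric on two disjoint copies of the real line:
    # a point is (component, coordinate); distinct components are
    # infinitely far apart, so d takes values in [0, inf].
    (ci, a), (cj, b) = p, q
    return abs(a - b) if ci == cj else math.inf

# Finite case: x_n -> x and y_n -> y within one component,
# and d(x_n, y_n) -> d(x, y) = 2.
finite = [d((0, 1.0 + 1.0 / n), (0, 3.0 - 1.0 / n)) for n in range(1, 1000)]

# Infinite case: x_n and y_n live in different components, so
# d(x_n, y_n) = inf for every n, matching d(x, y) = inf.
infinite = [d((0, 1.0 + 1.0 / n), (1, 2.0)) for n in range(1, 1000)]
```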

2 Generalized Minkowski Functionals

Now we shall prove the following basic result.

Theorem 1.

Let X be a linear topological (Hausdorff) space over \(\mathbb{R}\) and let a subset U of X satisfy the conditions:

  1. (i)

    U is a convex (nonempty) set,

  2. (ii)

    U is a symmetric set.

Then the function \(p: X \rightarrow \mathbb{R}^{+}\) defined by the formula

$$\displaystyle{ p(x):= \left \{\begin{array}{ll} \inf \{t> 0: x \in tU\},\quad x \in X,&\mbox{ if $A_{x}\neq \emptyset $,} \\ \infty, &\mbox{ if $A_{x} =\emptyset $,} \end{array} \right. }$$
(1)

where

$$\displaystyle{ A_{x}:=\{ t> 0: x \in tU\},\quad x \in X, }$$
(2)

has the properties:

$$\displaystyle{ if\ x = 0,\ then\ p(x) = 0, }$$
(3)
$$\displaystyle{ p(\alpha x) = \vert \alpha \vert p(x)\ for\ x \in X\ and\ \alpha \in \mathbb{R}, }$$
(4)
$$\displaystyle{ p(x + y) \leq p(x) + p(y)\ for\ x,y \in X. }$$
(5)

Proof.

Clearly, \(p(x) \in [0,\infty ]\) for x ∈ X. Note that 0 ∈ U: for any u ∈ U also \(-u \in U\) by (ii), and \(0 = \frac{1} {2}u + \frac{1} {2}(-u) \in U\) by (i). Hence, by the definition (1), we get (3). To prove (4), consider first the case α > 0 (if α = 0, the property (4) is obvious). Assume that \(p(x) <\infty\) for x ∈ X. Then we have

$$\displaystyle\begin{array}{rcl} \alpha p(x)& =& \alpha \inf \left \{s> 0: x \in sU\right \} =\alpha \inf \left \{\frac{t} {\alpha }> 0: x \in \frac{t} {\alpha } U\right \} {}\\ & =& \inf \left \{\frac{t} {\alpha } \cdot \alpha> 0: x \in \frac{t} {\alpha } U\right \} =\inf \left \{t> 0: x \in \frac{t} {\alpha } U\right \} {}\\ & =& \inf \left \{t> 0: \alpha x \in tU\right \} = p(\alpha x). {}\\ \end{array}$$

If \(p(x) = \infty\), then {t > 0: x ∈ tU} = ∅. Therefore,

$$\displaystyle{\left \{t> 0: \alpha x \in tU\right \} =\alpha \left \{\frac{t} {\alpha }> 0: \alpha x \in tU\right \} =\alpha \left \{\frac{t} {\alpha }> 0: x \in \frac{t} {\alpha } U\right \} =\emptyset,}$$

and consequently \(p(\alpha x) = \infty\), i.e. (4) holds true.

Now consider the case α < 0. Taking into account that (ii) implies that tU is also symmetric for every \(t \in \mathbb{R}\), one gets for x ∈ X with \(p(x) <\infty\)

$$\displaystyle{p(-x) =\inf \left \{t> 0: - x \in tU\right \} =\inf \left \{t> 0: x \in tU\right \} = p(x).}$$

If \(p(x) = \infty\), then

$$\displaystyle{\emptyset = \left \{t> 0: x \in tU\right \} = \left \{t> 0: - x \in tU\right \},}$$

which implies also \(p(-x) = \infty\), and consequently

$$\displaystyle{ p(-x) = p(x)\quad \mathrm{for}\ \mathrm{any}\ x \in X. }$$
(6)

Thus, for α < 0, x ∈ X and in view of the first part of the proof,

$$\displaystyle{p(\alpha x) = p(-\alpha x) = -\alpha p(x) = \vert \alpha \vert p(x),}$$

i.e. (4) has been verified.

Finally, if \(p(x) = \infty\) or \(p(y) = \infty\), then (5) is satisfied. So assume that x, y ∈ X and

$$\displaystyle{p(x) <\infty \quad \mathrm{and}\quad p(y) <\infty.}$$

Take an \(\varepsilon> 0\). From the definition (1), there exist numbers \(t_{1} \in A_{x}\) and \(t_{2} \in A_{y}\) (so \(t_{1} \geq p(x)\), \(t_{2} \geq p(y)\)) such that

$$\displaystyle{0 <t_{1} <p(x) + \frac{1} {2}\varepsilon,\qquad 0 <t_{2} <p(y) + \frac{1} {2}\varepsilon.}$$

The convexity of U implies that

$$\displaystyle{ \frac{x + y} {t_{1} + t_{2}} = \frac{t_{1}} {t_{1} + t_{2}} \cdot \frac{x} {t_{1}} + \frac{t_{2}} {t_{1} + t_{2}} \cdot \frac{y} {t_{2}} \in U}$$

and consequently

$$\displaystyle{x + y \in (t_{1} + t_{2})U,}$$

which means that \(t_{1} + t_{2} \in A_{x+y}\).

Hence

$$\displaystyle{p(x + y) \leq t_{1} + t_{2} <p(x) + \frac{1} {2}\varepsilon + p(y) + \frac{1} {2}\varepsilon = p(x) + p(y)+\varepsilon,}$$

and since \(\varepsilon> 0\) is arbitrary, (5) follows. This concludes the proof. □ 

Example 1.

Consider \(X = \mathbb{R} \times \mathbb{R}\), \(U = (-1,1) \times \{ 0\}\).

Then

$$\displaystyle\begin{array}{rcl} p(x)& =& \left \{\begin{array}{ll} \inf \{t> 0: (x_{1},0) \in tU\},&\mbox{ for $x = (x_{1},0)$,}\\ \infty, &\mbox{ for $x = (x_{ 1},y_{1}),\ y_{1}\neq 0$,} \end{array} \right. {}\\ p(x)& =& \left \{\begin{array}{ll} \vert x_{1}\vert,&\mbox{ for $x = (x_{1},0)$,}\\ \infty, &\mbox{ for $x = (x_{ 1},y_{1}),\ y_{1}\neq 0$,} \end{array} \right.{}\\ \end{array}$$

because \(\{t> 0: (x_{1},y_{1}) \in tU\} =\emptyset\) for \(y_{1}\neq 0\).

We see that p is a generalized norm in \(\mathbb{R}^{2} = \mathbb{R} \times \mathbb{R}\) (p takes values in \([0,\infty ]\)).
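The formula above can be checked numerically. The sketch below is illustrative only (the helper names are not from the text); it approximates p by bisection on the dilation parameter t, for the set U of Example 1 viewed as the open segment \((-1,1) \times \{ 0\}\) in \(\mathbb{R}^{2}\):

```python
import math

def in_U(v):
    # Membership test for U of Example 1, viewed as the open
    # segment (-1, 1) x {0} inside R^2.
    x, y = v
    return y == 0.0 and -1.0 < x < 1.0

def minkowski(v, hi=1e6, tol=1e-9):
    # p(v) = inf{t > 0 : v in t*U}, approximated by bisection;
    # returns math.inf when A_v = {t > 0 : v in t*U} is empty.
    x, y = v
    if x == 0.0 and y == 0.0:
        return 0.0                      # property (3): p(0) = 0
    if not in_U((x / hi, y / hi)):
        return math.inf                 # even huge dilates of U miss v
    lo_t, hi_t = 0.0, hi
    while hi_t - lo_t > tol:
        mid = (lo_t + hi_t) / 2.0
        if in_U((x / mid, y / mid)):    # v in mid*U ?
            hi_t = mid
        else:
            lo_t = mid
    return hi_t
```

In agreement with the example, `minkowski((0.5, 0.0))` is approximately 0.5 = |x₁|, while any point off the axis receives the value ∞.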

Remark 1.

We shall call the function \(p: X \rightarrow \mathbb{R}^{+}\) defined by (1) the generalized Minkowski functional of U (also called a generalized seminorm).

Remark 2.

Under some stronger assumptions (see e.g. [3]), the function p is called the Minkowski functional of U.

The next basic property of the functional p is given in

Theorem 2.

Suppose that the assumptions of Theorem  1 are satisfied. If, moreover, U is bounded (sequentially), then

$$\displaystyle{ p(x) = 0\quad \Rightarrow \quad x = 0. }$$
(7)

Proof.

Assume that p(x) = 0 for some x ∈ X and suppose that x ≠ 0. From the definition of p(x), for every \(\varepsilon _{n} = \frac{1} {n}\), \(n \in \mathbb{N}\), there exists a \(t_{n}> 0\) such that \(x \in t_{n}U\) and \(t_{n} <\frac{1} {n}\). Hence, \(x = t_{n}x_{n}\) with \(x_{n} \in U\) for \(n \in \mathbb{N}\), and by the boundedness of U, \(x = t_{n}x_{n} \rightarrow 0\) as \(n \rightarrow \infty\). But the constant sequence x converges to x, and since X is Hausdorff, limits are unique; hence x = 0, which is a contradiction and completes the proof. □ 

Remark 3.

Under the assumptions of Theorem 2, the generalized Minkowski functional is a generalized norm in X.

Let us note the following useful

Lemma 1.

Let \(U \subset X\) be a convex set and 0 ∈ U. Then

$$\displaystyle{ \alpha U \subset U }$$
(8)

for all 0 ≤ α ≤ 1.

The simple proof of this Lemma is omitted here.
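For completeness, the omitted argument fits in one line: for \(u \in U\) and \(0 \leq \alpha \leq 1\), the convexity of U and the assumption 0 ∈ U give

$$\displaystyle{\alpha u =\alpha u + (1-\alpha ) \cdot 0 \in U,}$$

whence \(\alpha U \subset U\).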

Next we prove

Lemma 2.

Let U be as in Theorem 1 . If, moreover, U does not contain half-lines, then

$$\displaystyle{p(x) = 0\quad \Rightarrow \quad x = 0.}$$

Proof.

Suppose, on the contrary, that p(x) = 0 for some x ≠ 0. By the definition of p(x), for every \(\varepsilon> 0\) there exists a t with \(0 <t <\varepsilon\) such that x ∈ tU. Fix r > 0 and take \(\varepsilon <\frac{1} {r}\). Clearly, \(\frac{x} {t} \in U\). Furthermore,

$$\displaystyle{rx = \frac{x} {t} (tr) =\alpha \frac{x} {t},\qquad \mathrm{where}\ \alpha = tr <1.}$$

By Lemma 1, \(rx =\alpha \frac{x} {t} \in \alpha U \subset U\). Since r > 0 was arbitrary, there exists an x ≠ 0 such that rx ∈ U for every r > 0, i.e. U contains a half-line, which contradicts the assumptions on U. This yields our statement. □ 

We have also

Lemma 3.

Let U be as in Theorem  1 . Then

$$\displaystyle{ [p(x) = 0\ \Rightarrow \ x = 0]\quad \Rightarrow \quad \mbox{ U does not contain half-lines.} }$$
(9)

Proof.

Assume the left-hand side of (9) and suppose, on the contrary, that there exists an x ≠ 0 such that rx ∈ U for every r > 0. Hence

$$\displaystyle{x \in \frac{1} {r}U\quad \mathrm{for}\quad r> 0}$$

and therefore

$$\displaystyle{\frac{1} {r} \in \left \{t> 0: x \in tU\right \}}$$

which implies that p(x) = 0. From the assumed implication \(p(x) = 0\ \Rightarrow \ x = 0\) we get x = 0, which is a contradiction. This establishes the implication (9) and ends the proof. □ 

Therefore, Lemmas 2 and 3 can be combined into the following

Proposition 1.

Let the assumptions of Theorem  1 be satisfied. Then the generalized Minkowski functional p for U is a generalized norm iff U does not contain half-lines.

3 Properties of the Generalized Minkowski Functionals

In this part we start with the following

Theorem 3.

Let X be a linear topological (Hausdorff) space over \(\mathbb{R}\) and let \(f: X \rightarrow \mathbb{R}^{+}\) be any function with properties:

$$\displaystyle{ f(\alpha x) = \vert \alpha \vert f(x)\ for\ all\ x \in X\ and\ \alpha \in \mathbb{R}, }$$
(10)
$$\displaystyle{ f(x + y) \leq f(x) + f(y)\ for\ all\ x,y \in X. }$$
(11)

Define

$$\displaystyle{ U:=\{ x \in X: f(x) <1\}. }$$
(12)

Then

  1. a)

    U is a symmetric set,

  2. b)

    U is a convex (nonempty) set,

  3. c)

    f = p, i.e. f is the generalized Minkowski functional of U.

Proof.

The conditions a) and b) follow directly from the definition (12) and properties (10) and (11), respectively. To prove c), first assume that \(f(x) <\infty\). Then, using (10), for t > 0

$$\displaystyle\begin{array}{rcl} & & x \in tU = tf^{-1}([0,1)) \Leftrightarrow \frac{x} {t} \in f^{-1}([0,1)) {}\\ & & \quad \Leftrightarrow f\left (\frac{x} {t} \right ) \in [0,1) \Leftrightarrow \frac{1} {t}f(x) \in [0,1) \Leftrightarrow f(x) \in [0,t), {}\\ \end{array}$$

i.e. \(x \in tU \Leftrightarrow f(x) \in [0,t)\) for t > 0.

Thus

$$\displaystyle{A_{x} =\{ t> 0: x \in tU\} =\{ t> 0: f(x) \in [0,t)\},}$$

whence

$$\displaystyle{p(x) =\inf A_{x} =\inf \{ t> 0: f(x) \in [0,t)\} = f(x).}$$

Now let \(f(x) = \infty\). Suppose, on the contrary, that \(p(x) <\infty\). Then by the definition of p,

$$\displaystyle{\{t> 0: x \in tU\}\neq \emptyset,}$$

which implies that there exists t > 0 such that x ∈ tU, thus also

$$\displaystyle\begin{array}{rcl} \frac{x} {t} & \in & U = f^{-1}([0,1)), {}\\ \frac{1} {t}f(x)& \in & [0,1) {}\\ \end{array}$$

and finally f(x) ∈ [0, t), which is impossible since \(f(x) = \infty\). This completes the proof. □ 
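Part c) can also be verified numerically for a concrete choice of f. The sketch below is illustrative only: it assumes \(X = \mathbb{R}^{2}\) and takes the Euclidean norm as f (which satisfies (10) and (11)), then recovers f as the Minkowski functional of the set U from (12):

```python
import math

def f(v):
    # A concrete function satisfying (10) and (11):
    # the Euclidean norm on R^2.
    return math.hypot(v[0], v[1])

def p(v, hi=1e6, tol=1e-9):
    # Minkowski functional (1) of U = {x : f(x) < 1} from (12),
    # approximated by bisection on the dilation parameter t.
    if f(v) == 0.0:
        return 0.0
    lo_t, hi_t = 0.0, hi
    while hi_t - lo_t > tol:
        mid = (lo_t + hi_t) / 2.0
        # v in mid*U  <=>  f(v / mid) < 1, by property (10)
        if f((v[0] / mid, v[1] / mid)) < 1.0:
            hi_t = mid
        else:
            lo_t = mid
    return hi_t
```

For instance, `p((3.0, 4.0))` approximates f((3, 4)) = 5, in line with part c).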

The next result reads as follows.

Theorem 4.

Let X, f, U be as in Theorem  3 . If U is sequentially bounded, then f = p is a generalized norm.

Proof.

Assume that \(f(x) = p(x) = 0\). One has

$$\displaystyle{p(x) =\inf \{ t> 0: x \in tU\} = 0,}$$

therefore, for every \(\varepsilon _{n} = \frac{1} {n}\), \(n \in \mathbb{N}\), there exists \(0 <t_{n} <\frac{1} {n}\) such that \(x \in t_{n}U\), i.e. \(x = t_{n}u_{n}\), where \(u_{n} \in U\) for \(n \in \mathbb{N}\). Since U is sequentially bounded,

$$\displaystyle{x = t_{n}u_{n} \rightarrow 0\quad \mathrm{as}\quad n \rightarrow \infty,}$$

thus x = 0, as claimed. □ 

4 Continuity of the Generalized Minkowski Functionals

Let’s note the following

Theorem 5.

Let the assumptions of Theorem 1 be satisfied. Then

$$\displaystyle{ \mbox{ $p$ is continuous at zero}\quad \Rightarrow \quad 0 \in int\ U. }$$
(13)

Proof.

From the assumption, for \(0 <\varepsilon <1\), there exists a neighbourhood V of zero such that

$$\displaystyle{p(u) <\varepsilon \quad \mathrm{for}\quad u \in V.}$$

Then p(u) < 1 for every u ∈ V; hence for each u ∈ V there exists \(t \in (0,1)\) with u ∈ tU, and by Lemma 1, \(tU \subset U\), so u ∈ U. Therefore \(V \subset U\), i.e. \(0 \in \mathrm{int}\ U\), which proves the implication (13). □ 

We have also

Theorem 6.

Let the assumptions of Theorem 1 be satisfied. Then

$$\displaystyle{ 0 \in int\ U\quad \Rightarrow \quad \mbox{ $p$ is continuous at zero.} }$$
(14)

Proof.

Let \(U_{0}\) be a neighbourhood of zero such that \(U_{0} \subset U\). Suppose, on the contrary, that there exists an \(\varepsilon _{0}> 0\) such that for every neighbourhood V of zero there exists an x ∈ V with \(p(x) \geq \varepsilon _{0}\). Take \(V = V _{n} = \frac{1} {n}U_{0}\), \(n \in \mathbb{N}\) (clearly \(V _{n}\) is a neighbourhood of zero). Then there exists an \(x_{n} \in \frac{1} {n}U_{0} \subset \frac{1} {n}U\) such that

$$\displaystyle{ p(x_{n}) \geq \varepsilon _{0}\quad \mathrm{for}\quad n \in \mathbb{N}. }$$
(15)

Now take n such that \(\frac{1} {n} <\varepsilon _{0}\). Since \(x_{n} \in \frac{1} {n}U\), i.e. \(\frac{1} {n} \in A_{x_{n}}\), one has

$$\displaystyle{p(x_{n}) =\inf \{ t> 0: x_{n} \in tU\} \leq \frac{1} {n},}$$

i.e.

$$\displaystyle{p(x_{n}) \leq \frac{1} {n} <\varepsilon _{0}}$$

which contradicts the inequality (15) and completes the proof. □ 

We have even more.

Theorem 7.

Let the assumptions of Theorem 1 be satisfied. Then

$$\displaystyle{ 0 \in int\ U\quad \Rightarrow \quad \mbox{ $p$ is continuous.} }$$
(16)

Proof.

First of all, observe that there exists a neighbourhood V of zero contained in U. By the continuity of scalar multiplication, \(\frac{1} {n}x \rightarrow 0\) for every x ∈ X, so

$$\displaystyle{\frac{1} {n}x \in V \quad \mathrm{for}\quad n> n_{0},}$$

and hence

$$\displaystyle{A_{x} =\{ t> 0: x \in tU\}\neq \emptyset \quad \mathrm{for}\ \mathrm{all}\ x \in X.}$$

Therefore, we have \(p(x) <\infty\) for any x ∈ X. Since p is also convex and, by Theorem 6, continuous at zero, the famous theorem of Bernstein–Doetsch (see e.g. [1]) implies that p is continuous on X, which ends the proof. □ 

Remark 4.

To see that the condition 0 ∈ int U is essential in Theorems 5 and 6, the reader is referred to Example 1.

Eventually, taking into account Theorems 5 and 7, we can state the following useful result about the continuity of the generalized Minkowski functionals.

Proposition 2.

Under the assumptions of Theorem 1 , the equivalence

$$\displaystyle{ \mbox{ $p$ is continuous}\quad \Leftrightarrow \quad 0 \in int\ U }$$
(17)

holds true.

5 Kolmogorov Type Result

Let X be a linear space (over \(\mathbb{R}\) or \(\mathbb{C}\), the set of all complex numbers) which is also a generalized metric space. We say that X is a generalized linear-metric space if the operations of addition and multiplication by scalars are continuous, i.e. if \(x_{n} \rightarrow x\) and \(y_{n} \rightarrow y\), then \(x_{n} + y_{n} \rightarrow x + y\) and \(tx_{n} \rightarrow tx\) (with respect to the generalized metric in X).

For example, if a generalized metric is induced by a generalized norm, then we get a generalized linear-metric space.

We shall prove the following.

Theorem 8.

Let (X,ϱ) be a generalized linear-metric space over \(\mathbb{R}\) . Suppose that \(U \subset X\) is an open, convex and sequentially bounded set. Then there exists a generalized norm \(\|\cdot \|\) such that the generalized metric induced by this norm is equivalent to the generalized metric ϱ.

Proof.

Take a point \(x_{0} \in U\); then

$$\displaystyle{V:= (U - x_{0}) \cap (x_{0} - U)}$$

is an open, convex, symmetric and sequentially bounded subset of X (we omit the details here).

Define

$$\displaystyle{ \|x\|:= \left \{\begin{array}{ll} \inf \{t> 0: x \in tV \},\quad x \in X,&\mbox{ if $A_{x}\neq \emptyset $,} \\ \infty, &\mbox{ if $A_{x} =\emptyset $.} \end{array} \right. }$$
(18)

By Theorems 1 and 2 (applied with U = V, so that here \(A_{x}:=\{ t> 0: x \in tV \}\)) we see that this function is a generalized norm.

At first we shall show the implication:

$$\displaystyle{ \varrho (x_{n},0) \rightarrow 0\quad \Rightarrow \quad \|x_{n}\| \rightarrow 0. }$$
(19)

To this end, take \(\varepsilon> 0\). The set \(\varepsilon V\) is open: indeed, \(f(x) = \frac{1} {\varepsilon } x\), x ∈ X, is a continuous function and

$$\displaystyle{f^{-1}(V ) =\varepsilon V,}$$

so \(\varepsilon V\) is open as the preimage of the open set V. Since \(\varrho (x_{n},0) \rightarrow 0\) and \(\varepsilon V\) is a neighbourhood of zero, we get \(x_{n} \in \varepsilon V\) for \(n> n_{0}\) and consequently

$$\displaystyle{\|x_{n}\| <\varepsilon \quad \mathrm{for}\quad n> n_{0}}$$

i.e. (19) is satisfied.

Conversely, assume that \(\|x_{n}\| \rightarrow 0\) as \(n \rightarrow \infty\) (so \(\|x_{n}\| <\infty\) for \(n> n_{0}\)). By the definition (18), for every \(\varepsilon _{n} = \frac{1} {n}\), \(n> n_{0}\), there exists \(t_{n}> 0\) such that

$$\displaystyle{\|x_{n}\| \leq t_{n} <\| x_{n}\| +\varepsilon _{n}\quad \mathrm{and}\quad x_{n} \in t_{n}V.}$$

Since \(\varepsilon _{n} \rightarrow 0\) and \(\|x_{n}\| \rightarrow 0\), we get \(t_{n} \rightarrow 0\) as \(n \rightarrow \infty\). Also \(\frac{x_{n}} {t_{n}} \in V\), and since V is sequentially bounded,

$$\displaystyle{t_{n}\left (\frac{x_{n}} {t_{n}} \right ) = x_{n} \rightarrow 0\quad \mathrm{as}\quad n \rightarrow \infty }$$

in the generalized metric ϱ, i.e. \(\varrho (x_{n},0) \rightarrow 0\), which ends the proof. □ 

Remark 5.

If ϱ is a metric, then from Theorem 8 we get the Kolmogorov result (see [4]).