1 Introduction and Main Results

This article is mainly concerned with mappings \(f :\,{\mathbb C}^n\rightarrow {\mathbb C}^n\), written in coordinates as

$$\begin{aligned} f(Z) =(f_1(Z),\ldots ,f_n(Z)), \quad Z= (z_1,\ldots ,z_n). \end{aligned}$$

We say that f is a polynomial map if each component function \(f_i :\,{\mathbb C}^n\rightarrow {\mathbb C}\) is a polynomial in the n variables \(z_1, \ldots , z_n\), for \(1\le i\le n\). A polynomial map \(f :\,{\mathbb C}^n\rightarrow {\mathbb C}^n\) is called invertible if it has an inverse map which is also a polynomial map.

Let \(Df:=\left( \frac{\partial f_j}{\partial z_i}\right) _{n\times n}\), \( 1\le i,j\le n\), be the Jacobian matrix of f. The Jacobian determinant is denoted by \(\det Df\). If a polynomial map f is invertible with inverse \(g=f^{-1}\), then \(g\circ f=\mathrm{id}\), and the chain rule gives \(\det Dg(f(Z))\cdot \det Df(Z)=1\); hence \(\det Df\) must be a nonzero complex constant. The converse question is much more difficult. A polynomial map f is called a Keller map if \(\det Df\) is a nonzero complex constant. The Jacobian conjecture (JC) asserts that every Keller map \(f:\,{\mathbb C}^n\rightarrow {\mathbb C}^n\) is globally invertible. This conjecture remains open for every dimension \(n\ge 2\); in dimension one, it is simple. We remark that the JC was originally formulated by Keller [13] in 1939 for polynomial maps with integer coefficients. In fact, Bialynicki-Birula and Rosenlicht [4] proved that a polynomial map is invertible if it is injective.
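For readers who wish to experiment, the following sympy snippet (our illustration; the map \(f(x,y)=(x,y+x^2)\) is a sample choice, not taken from the paper) verifies the chain-rule argument on a concrete invertible polynomial map:

```python
# Minimal sympy sketch: for the invertible polynomial map f(x, y) = (x, y + x^2)
# with polynomial inverse g(x, y) = (x, y - x^2), one has g o f = id and
# det Df * det Dg = 1, so det Df is forced to be a nonzero constant (here 1).
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x, y + x**2])
g = sp.Matrix([x, y - x**2])

Df, Dg = f.jacobian([x, y]), g.jacobian([x, y])
gof = g.subs({x: f[0], y: f[1]}, simultaneous=True)

assert sp.expand(gof - sp.Matrix([x, y])) == sp.zeros(2, 1)   # g o f = id
assert sp.expand(Df.det() * Dg.det()) == 1                    # constant Jacobian
```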

It is a simple exercise to see that the JC is true if it holds for polynomial mappings whose Jacobian determinant is 1, and thus, after a suitable normalization, one can assume that \(\det Df= 1\). The JC is attractive because of the simplicity of its statement. Moreover, because there are so many ways to approach it, the JC has been studied extensively from calculus to complex analysis to algebraic topology, and from commutative algebra to differential algebra to algebraic geometry. Indeed, some faulty proofs have even been published. The JC is stated as one of the eighteen challenging problems for the twenty-first century proposed, with brief details, by Fields Medalist Stephen Smale [19]. For the importance, history, a detailed account of the research on the JC and equivalent conjectures, and related investigations, we refer, for example, to [3] and the excellent book of van den Essen [11] and the references therein. See also [5,6,7,8,9,10, 14, 23, 24]. We would like to point out that in 1980, Wang [22] showed that every Keller map of degree less than or equal to 2 is invertible. In 1982, Bass et al. [3] (see also [5, 25]) showed that it suffices to prove the JC for all \(n\ge 2\) and all Keller mappings of the form \(f(Z)=Z+H(Z)\), where \(Z= (z_1,\ldots ,z_n)\), and H is cubic homogeneous, i.e., \(H=(H_1, \ldots , H_n)\) with \(H_i(Z)=(L_i(Z))^3\) and \(L_i(Z)=a_{i1}z_1+\cdots +a_{in}z_n\), \(1\le i\le n\). A cubic homogeneous map f of this form is called a Drużkowski or cubic linear map. Moreover, polynomial mappings from \({\mathbb C}^n\) to \({\mathbb C}^n\) are better behaved than polynomial mappings from \({\mathbb R}^n\) to \({\mathbb R}^n\). Indeed, Pinchuk [17] constructed an explicit example of a non-invertible polynomial map \(f:{\mathbb R}^2\rightarrow {\mathbb R}^2\) with \(\det Df(X)\ne 0\) for all \(X\in {\mathbb R}^2\). In any case, the study of the JC has given rise to several surprising results and interesting relations in various directions and from different perspectives. For instance, Abdesselam [1] formulated the JC as a question in perturbative quantum field theory and pointed out that any progress on this question would be beneficial not only for mathematics, but also for theoretical physics, as it would enhance our understanding of perturbation theory.
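As a concrete illustration of the cubic linear form, here is a minimal sympy sketch of a Drużkowski map in dimension 2; the choice \(L_1=z_2\), \(L_2=0\) is ours, not from the paper:

```python
# A toy Druzkowski (cubic linear) map f(Z) = Z + H(Z) with H_i = (L_i)^3,
# here L_1 = z2 and L_2 = 0. The Keller condition det Df = 1 holds, and the
# inverse is again polynomial.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f = sp.Matrix([z1 + z2**3, z2])          # f = Z + H(Z), H = ((z2)^3, 0)
assert f.jacobian([z1, z2]).det() == 1   # Keller map

finv = sp.Matrix([z1 - z2**3, z2])       # explicit polynomial inverse
comp = finv.subs({z1: f[0], z2: f[1]}, simultaneous=True)
assert sp.expand(comp - sp.Matrix([z1, z2])) == sp.zeros(2, 1)
```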

The main purpose of this work is to identify the Keller maps for which the JC is true.

Theorem 1

The Jacobian conjecture is true for mappings \(F(X)=(A \circ f \circ B)(X),\) where \(X=(x_1,\ldots , x_n)\in {\mathbb R}^n\), A and B are linear mappings such that \(\det A \cdot \det B \ne 0\), \(f=(u_1,\ldots ,u_n),\)

$$\begin{aligned} u_k(X)=x_k+\gamma _k\left[ \alpha _2(x_1+\cdots +x_n)^2+\alpha _3(x_1+\cdots +x_n)^3 +\cdots +\alpha _m(x_1+\cdots +x_n)^m\right] \end{aligned}$$

for \(k=1,\ldots ,n\), \(\alpha _j,\gamma _k\in {\mathbb R}\) with \(\sum _{k=1}^n\gamma _k=0\) and \(m\in {\mathbb N}\).
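Before turning to the proof, one can sanity-check Theorem 1 symbolically; in the following sympy sketch, the values \(n=3\), \(m=3\), \(\gamma =(1,1,-2)\) and \(\alpha _2=\alpha _3=1\) are our sample choices:

```python
import sympy as sp

X = sp.symbols('x1 x2 x3')
gamma = (1, 1, -2)                       # the gamma_k sum to 0
z = sum(X)
phi = z**2 + z**3                        # alpha_2 = alpha_3 = 1

f = sp.Matrix([X[k] + gamma[k] * phi for k in range(3)])
assert sp.expand(f.jacobian(list(X)).det()) == 1      # det Df = 1 identically

# since the gamma_k sum to 0, f preserves z, so subtracting gamma_k*phi inverts f
g = sp.Matrix([X[k] - gamma[k] * phi for k in range(3)])
gof = g.subs(dict(zip(X, f)), simultaneous=True)
assert sp.expand(gof - sp.Matrix(X)) == sp.zeros(3, 1)
```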

Often it is convenient to identify X in \({\mathbb C}^n\) (resp. \({\mathbb R}^n\)) with an \(n\times 1\) matrix with complex (resp. real) entries. It is interesting to know whether there are other polynomial mappings for which the JC is true in \({\mathbb C}^n\) (resp. \({\mathbb R}^n\)). In this connection, we note that in the case \(n=2\) of Theorem 1, it is possible to prove the following:

Theorem 2

With \(X=(x,y)\in {\mathbb R}^2\), consider \(f(X)=(u_1(X),u_2(X))\), where \(u_k(X)\) for \(k=1,2\) are as in Theorem 1 and

$$\begin{aligned} {\tilde{f}}(X)=(u_1(X)+W(X),u_2(X)+w(X)), \end{aligned}$$

where W and w are homogeneous polynomials of degree \((m+1)\) in x and y. If \(\det D{\tilde{f}}(X)\equiv 1\), then \( {\tilde{f}}=A^{-1} \circ F\circ A,\) where A is a non-degenerate linear mapping and

$$\begin{aligned} F(X)=\left( u_1(X)+\alpha _{m+1}(x+y)^{m+1}, u_2(X)-\alpha _{m+1}(x+y)^{m+1}\right) , \end{aligned}$$

for some real constant \(\alpha _{m+1}\). The Jacobian conjecture is true for the mapping \({{\tilde{f}}}\).

Remark 1

It follows from the proof of Theorem 2 that A equals the identity matrix I if \(f(X)\not \equiv X\).

In connection with Theorems 1 and 2, it is interesting to note that in the case \(n=2\), the mappings F defined in Theorem 1 provide a complete description of the Keller mappings F for which \(\mathrm{deg}\, F \le 3\) (see [21]).

Next, we denote by \({\widehat{\mathcal {P}}}_n(m)\) the set of all polynomial mappings \(F:\,{\mathbb R}^n\rightarrow {\mathbb R}^n\) of degree less than or equal to m such that \(DF(0)=I\) and \(F(0)=0\). Let \(P_n(m)\) be the subset consisting of those mappings \(f\in {\widehat{\mathcal {P}}}_n(m)\) which satisfy the conditions of Theorem 1. Also, we introduce

$$\begin{aligned} {\mathcal {P}}_n(m)=\{F\in {\widehat{{\mathcal {P}}}}_n(m):\, F \text{ is injective}\}. \end{aligned}$$

If \(f,g\in P_n(m)\), then

$$\begin{aligned} f(X)=X+u(X)\gamma ~ \text{ and } ~ g(X)=X+v(X)\delta , \end{aligned}$$

where \(X=(x_1,\dots ,x_n)\), \(\gamma =(\gamma _1,\dots ,\gamma _n)\), \(\delta =(\delta _1,\dots ,\delta _n)\),

$$\begin{aligned} u(X)=\sum _{k=2}^m\alpha _k(x_1+\dots +x_n)^k ~ \text{ and } ~ v(X)=\sum _{k=2}^m\beta _k(x_1+\dots +x_n)^k, \end{aligned}$$

such that \(\sum _{k=1}^n\delta _k=\sum _{k=1}^n\gamma _k=0\). Here the \(\alpha _k\) and \(\beta _k\) are some constants. It is obvious that \(f\circ g\) is injective, but it is perhaps unexpected that the composition \(f\circ g\) also belongs to \({\mathcal {P}}_n(m)\), i.e., that its degree does not exceed m. This circumstance allows us to generalize Theorem 1 significantly; a symbolic check of a concrete instance is sketched below.
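Here is the promised check, a sympy sketch with sample data of our own choosing (\(n=3\), \(m=3\)):

```python
# Composition of two maps of the displayed form is again of the same type:
# X + u(X)*gamma + v(X)*delta; in particular its degree is at most m, not m^2.
import sympy as sp

X = sp.symbols('x1 x2 x3')
z = sum(X)
gamma, delta = (1, 2, -3), (-3, 1, 2)    # both sum to 0
u = 2*z**2 - z**3                        # sample u, v of degree m = 3
v = z**2 + 4*z**3

f = sp.Matrix([X[k] + u * gamma[k] for k in range(3)])
g = sp.Matrix([X[k] + v * delta[k] for k in range(3)])

fog = f.subs(dict(zip(X, g)), simultaneous=True)
expected = sp.Matrix([X[k] + u * gamma[k] + v * delta[k] for k in range(3)])
assert sp.expand(fog - expected) == sp.zeros(3, 1)
```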

Theorem 3

For \(n\ge 3\), consider the mapping \(F:\,{\mathbb R}^n\rightarrow {\mathbb R}^n\) defined by \(F(X)=(u_1,\ldots ,u_n)\), where

$$\begin{aligned} u_k(X)=x_k+p_k^{(2)}(x_1+\dots +x_n)^2+\cdots +p_k^{(m)}(x_1+\dots +x_n)^m~ \quad (k=1,\ldots , n), \end{aligned}$$

and \(p_k^{(l)}\) are constants satisfying the condition \(\sum _{k=1}^np_k^{(l)}=0\) for all \(l=2,\ldots ,m\). Then the Jacobian conjecture is true for F; indeed, \(F=f_1\circ \cdots \circ f_N\), where each \(f_j\) \((j=1, \ldots , N)\) is a polynomial map of the form f given in Theorem 1.

From the proofs of Theorems 1, 2 and 3, it is easy to see that these results continue to hold even if we replace \({\mathbb R}^n\) by \({\mathbb C}^n\). The proofs of Theorems 1, 2 and 3 will be presented in Sect. 3. In Sect. 2, we present conditions for injectivity of functions defined on a convex domain.

2 Injectivity Conditions on Convex Domains

Discussions of, and several sufficient conditions for, global injectivity can be found in [2, 16]. In the following, we state and prove several results on injectivity on convex domains.

Theorem 4

Let \(D\subset {\mathbb R}^n\) be convex and \(f:\, D\rightarrow {\mathbb R}^n\) belong to \(C^1(D)\). Then \(f=(f_1,\ldots ,f_n)\) is injective in D if for every \(X_1, X_2\in D\) \((X_1\ne X_2)\) and \(\gamma (t)=X_1+t(X_2-X_1)\) for \(t\in [0,1]\), \(\det A\ne 0,\) where

$$\begin{aligned} A=(a_{ij})_{n\times n} ~ \text{ with } ~a_{ij}=\int _0^1 \frac{\partial f_j}{\partial x_i} (\gamma (t))\,\mathrm{d}t, \, 1\le i,j\le n . \end{aligned}$$

Proof

Let \(X_1,X_2\in D\) be two distinct points. Since D is convex, the point \(\gamma (t)\) lies in D for each \(t\in [0,1]\), and thus, we have

$$\begin{aligned} f(X_2)-f(X_1) =\int _\gamma \mathrm{d}f(\zeta ) = \int _\gamma D f(\zeta )\,\mathrm{d}\zeta = \int _0^1 (Df)(\gamma (t))(X_2-X_1)\,\mathrm{d}t . \end{aligned}$$

Taking the assumptions into account, we deduce that \(f(X_2)-f(X_1)\ne 0\) for all \(X_1,X_2\in D\) \((X_1\ne X_2)\) provided that for every \(X\in {\mathbb R}^n \backslash \{0\},\)

$$\begin{aligned} \left( \int _0^1 Df(\gamma (t)) \,\mathrm{d}t\right) X\ne 0, \end{aligned}$$

which holds whenever \(\det A\ne 0.\) The proof is complete. \(\square \)
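The following sympy sketch illustrates Theorem 4 on an example of our own: for \(f(x,y)=(x+\sin (y)/2,\, y-\sin (x)/2)\), the averaged Jacobian along any segment has diagonal entries 1 and off-diagonal entries of modulus at most 1/2, so \(\det A\ge 3/4>0\) and f is globally injective (f is not polynomial; Theorem 4 only requires \(C^1\)):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Matrix([x + sp.sin(y) / 2, y - sp.sin(x) / 2])
Df = f.jacobian([x, y])                  # [[1, cos(y)/2], [-cos(x)/2, 1]]

# averaged Jacobian A along the sample segment from X1 = (0, 0) to X2 = (3, -1)
X1, X2 = sp.Matrix([0, 0]), sp.Matrix([3, -1])
seg = X1 + t * (X2 - X1)
A = Df.subs({x: seg[0], y: seg[1]}).applyfunc(
    lambda e: sp.integrate(e, (t, 0, 1)))

print(float(A.det()))                    # ~1.01 on this segment; always >= 3/4
```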

We remark that Theorem 4 has an obvious generalization to functions defined on convex domains \(D \subseteq {\mathbb C}^n\). In this case, the Jacobian matrix of f, i.e., \(Df=\left( \frac{\partial f_j}{\partial z_i}\right) _{n\times n}\), \( 1\le i,j\le n\), is used. Moreover, using Theorem 4, we may easily obtain the following simple result.

Corollary 1

Let \(D \subseteq {\mathbb R}^n\) be a convex domain and \(f=(f_1,\ldots ,f_n)\) belong to \(C^1(D)\). If for every line L in \({\mathbb R}^n\), with \(L\cap D\ne \emptyset ,\)

$$\begin{aligned} \det \left( \frac{\partial f_j}{\partial x_i}(X_{i,j})\right)_{n\times n} \ne 0 ~\text{ for every } X_{i,j}\in L \cap D, \end{aligned}$$

then f is injective in D.

Proof

Let \(X_1,X_2\in D\) be two distinct points and \(\gamma (t)=X_1+t(X_2-X_1)\), \(t\in [0,1]\), be the line segment joining \(X_1\) and \(X_2\). Then, for every \(i, j=1,\ldots , n\), the mean value theorem for integrals (applicable since \(f\in C^1(D)\)) gives

$$\begin{aligned} \int _0^1 \frac{\partial f_j}{\partial x_i} (\gamma (t))\,\mathrm{d}t=\frac{\partial f_j}{\partial x_i} (X_{i,j}), \end{aligned}$$

where \(X_{i,j}\in (X_1,X_2)\). The desired conclusion follows from Theorem 4. \(\square \)

Corollary 2

Let \(D \subseteq {\mathbb C}\) be a convex domain, \(z=x+iy\) and \(f(z)=u(x,y)+iv(x,y)\) be analytic in D. Then f is injective in D whenever for every \(z_1,z_2\in D\), we have

$$\begin{aligned} \det \begin{bmatrix}u_x(z_1)&-v_x(z_2)\\v_x(z_2)&u_x(z_1)\end{bmatrix} =u_x^2(z_1)+v_x^2(z_2)\ne 0 . \end{aligned}$$

Corollary 2 shows that if \(u_x\ne 0\) or \(v_x\ne 0\) throughout a convex domain D, then the analytic function \(f=u+iv\) is univalent (injective) in D. Thus, it is a sufficient condition for univalence and is different from the necessary condition \(f'(z)\ne 0\); the latter only says that \(u_x\) and \(v_x\) have no common zeros in D. The reader may compare with the well-known Noshiro–Warschawski theorem, which asserts that if f is analytic in a convex domain D in \({\mathbb C}\) and \(\mathrm{Re}\, f'(z)>0\) in D, then f is univalent in D. See also Corollary 4.

Throughout we let \({\mathbb R}^n_{+}=\{X=(x_1,\ldots , x_n)\in {\mathbb R}^n:\, x_1>0\}\) and \(S_{n-1}=\{X \in {\mathbb R}^n:\, \Vert X\Vert =1\}\), the unit sphere in the Euclidean space \({\mathbb R}^n\).

Theorem 5

Let \(D\subseteq {\mathbb R}^n\) be a convex domain, \(f:\,D\rightarrow {\mathbb R}^n\) belong to \(C^1(D)\) and \( \Omega =f(D)\). Then f is injective if and only if there exists a \(\phi \in C^1(\Omega ),\) \(\phi :\,\Omega \rightarrow {\mathbb R}^n,\) satisfying the following property: for every \(X_0\in S_{n-1}\) there exists a unitary matrix \(U=U(X_0)\) with

$$\begin{aligned} U \times (D\phi )(f(X))\times Df(X)X_0\in {\mathbb R}^n_+ \end{aligned}$$
(1)

for every \(X\in D\).

Proof

Let \(X_1,X_2\in D\). Then the line segment \(\gamma (t)=X_1+t(X_2-X_1)\), \(t\in [0,1]\), connecting these points lies in the convex domain D. We denote \(\psi =\phi \circ f\) and observe that

$$\begin{aligned} \psi (X_2)-\psi (X_1)=\int _\gamma d\psi (\zeta )=\int _0^1d(\psi \circ \gamma )(t)=\int _0^1 (D\psi )(\gamma (t))\times (X_2-X_1)\,\mathrm{d}t. \end{aligned}$$

If \(X_2\ne X_1,\) we may let

$$\begin{aligned} X_0=\frac{X_2-X_1}{||X_2-X_1||} \in S_{n-1}. \end{aligned}$$

Sufficiency (\(\Leftarrow \)): We assume (1) and show that f is injective on D. Indeed, (1) implies that

$$\begin{aligned} U(\psi (X_2)-\psi (X_1))=\Vert X_2-X_1\Vert \int _0^1U(D\psi )(\gamma (t))X_0\,\mathrm{d}t\ne 0 . \end{aligned}$$

Then the first component \(a_1(t)\) of \(U(D\psi )(\gamma (t))X_0=(a_1(t),\ldots ,a_n(t))\in {\mathbb R}^n_+\), by definition, satisfies the positivity condition \(a_1(t)>0\) for each t, and thus, \(\int _0^1a_1(t)\,\mathrm{d}t\ne 0\). Hence \(\psi (X_2)\ne \psi (X_1)\), and since \(\psi =\phi \circ f\), we conclude that \(f(X_2)\ne f(X_1)\) for every \(X_1,X_2\, (X_1\ne X_2)\) in D.

Necessity (\(\Rightarrow \)): Assume that f is injective in D. Then we may let \(\phi =f^{-1}\) and choose a unitary matrix U such that \(UX_0=(1,0,\dots ,0)\). This implies that \(\psi (X)\equiv X\), and thus,

$$\begin{aligned} U\times (D\psi )(\gamma (t))X_0\equiv (1,0,\ldots ,0) \end{aligned}$$

and (1) holds. \(\square \)

It is now appropriate to state a several complex variables analog of Theorem 5. Following standard practice, for \(D\subset {\mathbb C}^n\), we consider \(f\in C^1(D),\) \(\Omega =f(D)\subset {\mathbb C}^n,\) \(\phi \in C^1(\Omega ),\) \(\phi :\,\Omega \longrightarrow {\mathbb C}^n\), \(Z=(z_1,\ldots ,z_n)\in {\mathbb C}^n,\) \(\overline{Z}=(\bar{z}_1,\ldots ,\bar{z}_n)\), and \(\psi =\phi \circ f\). We frequently write these mappings as functions of the independent complex variables Z and \(\overline{Z}\), namely as \(f(Z,\overline{Z})\), \(\phi (W,\overline{W})\) and \(\psi (Z,\overline{Z})\). Denote, as usual,

$$\begin{aligned} \partial \psi = \frac{\partial (\psi _1,\ldots ,\psi _n)}{\partial ( {z}_1,\ldots , {z}_n)} =\begin{pmatrix} \displaystyle \frac{\partial \psi _1}{\partial z_1} &\cdots &\displaystyle \frac{\partial \psi _1}{\partial z_n}\\ \vdots & \ddots &\vdots \\ \displaystyle \frac{\partial \psi _n}{\partial z_1}&\cdots & \displaystyle \frac{\partial \psi _n}{\partial z_n} \end{pmatrix} ~\text{ and } ~\overline{\partial }\psi =\frac{\partial (\psi _1,\ldots ,\psi _n)}{\partial (\bar{z}_1,\ldots ,\bar{z}_n)}. \end{aligned}$$

Here it is convenient to use \(\partial \psi \) instead of \(D\psi \). Then for \(\gamma (t)=Z_1+t(Z_2-Z_1)\), \(t\in (0,1)\), we have

$$\begin{aligned}&\psi (Z_2,\overline{Z_2})-\psi (Z_1,\overline{Z_1}) = \int _\gamma d\psi (Z,\overline{Z})\\&\quad =\int _0^1\left[ \partial \psi (\gamma (t),\overline{\gamma (t)})\cdot (Z_2-Z_1) +\overline{\partial }\psi (\gamma (t),\overline{\gamma (t)})\cdot \overline{(Z_2-Z_1)}\right] \mathrm{d}t. \end{aligned}$$

Thus, Theorem 5 takes the following form.

Theorem 6

Let \(D\subseteq {\mathbb C}^n\) be a convex domain, \(f:\,D\rightarrow {\mathbb C}^n\) belong to \(C^1(D)\) and \(\Omega =f(D)\). Then f is injective in D if and only if there exists a \(\phi \in C^1(\Omega ),\) \(\phi :\,\Omega \rightarrow {\mathbb C}^n\) satisfying the following property with \(\psi =\phi \circ f\): for every \(Z_0\in {\mathbb C}^n\), \(\Vert Z_0\Vert =1\), there exists a unitary complex matrix \(U=U(Z_0)\) such that for every \(Z\in D,\) one has

$$\begin{aligned} U\left[ \partial \psi (Z,\overline{Z})Z_0+\overline{\partial }\psi (Z,\overline{Z})\overline{Z}_0\right] \in \{Z\in {\mathbb C}^n:\, \mathrm{Re}\,z_1>0\}. \end{aligned}$$

In particular, Theorem 6 is applicable to pluriharmonic mappings. In the case of planar harmonic mappings \(f=h+\overline{g}\), where h and g are analytic in the unit disk \(\mathbb {D} := \{z\in {\mathbb C}:\, |z| < 1 \}\), Theorem 6 takes the following form, which is another criterion for injectivity, a harmonic analog of \(\Phi \)-like mappings; see [12, Theorem 1].

Corollary 3

Let \(f=h+\overline{g}\) be harmonic on a convex domain \(D\subset {\mathbb C}\) and \(\Omega =f(D)\). Then f is univalent in D if and only if there exists a complex-valued function \(\phi =\phi (w,\overline{w})\) in \(C^1(\Omega )\) such that for every \(\epsilon \) with \(|\epsilon |=1\), there exists a real number \(\gamma =\gamma (\epsilon )\) satisfying

$$\begin{aligned} \mathrm{Re}\,\big \{e^{i\gamma }\big (\partial \phi (f(z),\overline{f(z)}) +\epsilon \overline{\partial }\phi (f(z),\overline{f(z)})\big )\big \}>0 ~ \text{ for } \text{ all } z\in D, \end{aligned}$$

where \(\partial =\frac{\partial }{\partial z}\) and \(\overline{\partial }=\frac{\partial }{\partial \overline{z}}\).

Several consequences and examples of Corollary 3 are discussed in [12], and they seem to be very useful. Another univalence criterion for harmonic mappings of \({\mathbb D}\) was obtained in [20]. Moreover, using Corollary 3, it is easy to obtain the following sufficient condition for the univalence of \(C^1\) functions.

Corollary 4

([15]) Let D be a convex domain in \({\mathbb C}\) and f be a complex-valued function of class \(C^1(D)\). Then f is univalent in D if there exists a real number \(\gamma \) such that

$$\begin{aligned} \mathrm{Re}\,\big (e^{i\gamma } f_z(z) \big )> \big |f_{\overline{z}}(z)\big | ~ \text{ for } \text{ all } z\in D. \end{aligned}$$

For example, if \(f=h+\overline{g}\) is a planar harmonic mapping in the unit disk \({\mathbb D}\) and if there exists a real number \(\gamma \) such that

$$\begin{aligned} \mathrm{Re}\,\big \{e^{i\gamma } h'(z)\big \}>|g'(z)|~ \text{ for } z\in {\mathbb D}, \end{aligned}$$
(2)

then f is univalent in \({\mathbb D}\). In [18], it was shown that harmonic functions \(f=h+\overline{g}\) satisfying condition (2) in \({\mathbb D}\) are indeed univalent and close-to-convex in \({\mathbb D}\), i.e., the complement of the image region \(f({\mathbb D})\) is the union of non-intersecting rays (except that the origin of one ray may lie on another one of the rays).
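For instance, a quick numerical check (our example, not from the paper) of condition (2) with \(\gamma =0\) for \(h(z)=z\) and \(g(z)=z^2/2\):

```python
# On the unit disk, Re h'(z) = 1 > |z| = |g'(z)|, so condition (2) holds with
# gamma = 0 and f = h + conj(g) is univalent (and close-to-convex) there.
import numpy as np

rng = np.random.default_rng(0)
zs = rng.uniform(-1, 1, 4000) + 1j * rng.uniform(-1, 1, 4000)
zs = zs[np.abs(zs) < 1]                  # random sample of the unit disk

h_prime = np.ones_like(zs)               # h(z) = z,      h'(z) = 1
g_prime = zs                             # g(z) = z^2/2,  g'(z) = z
assert np.all(h_prime.real > np.abs(g_prime))   # condition (2), gamma = 0
```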

Moreover, using Theorem 6, one can also obtain a sufficient condition for p-valent mappings.

Corollary 5

Suppose that \(D\subseteq {\mathbb R}^n\) is a domain such that \(D=\cup _{m=1}^pD_m,\) where the \(D_m\) are convex for \(m=1,\ldots ,p\). Furthermore, let \(f\in C^1(D),\) \(\Omega =f(D)\subset {\mathbb R}^n\) and \(\phi \in C^1(\Omega )\) be such that for every \(X_m\in S_{n-1}\) and each \(m=1,\ldots ,p,\) there exists a unitary matrix \(U_m:=U_m(X_m)\) with

$$\begin{aligned} U_m \times (D\phi )(f(X))\times (Df)(X)\times X_m\in {\mathbb R}^n_{+} \end{aligned}$$

for every \(X\in D_m,\) \(m=1,\ldots ,p.\) Then f is at most p-valent in D.

3 Proofs of Theorems 1, 2 and 3

Proof of Theorem 1

It is enough to prove the theorem in the case \(A=B=I\). For convenience, we let \(z=x_1+\cdots +x_n\). Then

$$\begin{aligned} u_k(X)=x_k+\gamma _k\left[ \alpha _2 z^2+\alpha _3 z^3+\cdots +\alpha _m z^m\right] \end{aligned}$$

for \(k=1,\ldots ,n\), \(\alpha _j,\gamma _k\in {\mathbb R}\) with \(\sum _{k=1}^n\gamma _k=0\) and \(X=(x_1,\ldots , x_n)\).

We first prove that \(\det Df(X)\equiv 1.\) To this end, we introduce

$$\begin{aligned} L(X)=2\alpha _2 z +3\alpha _3 z^2+\cdots +m\alpha _m z^{m-1}. \end{aligned}$$

Then a computation gives, with \(\delta ^j_k\) denoting the Kronecker delta,

$$\begin{aligned} \frac{\partial u_k}{\partial x_j}=\delta ^j_k+\gamma _kL, \end{aligned}$$

and

$$\begin{aligned} \det Df= \begin{vmatrix}1+\gamma _1L&\gamma _1L&\ldots&\gamma _1L\\ \gamma _2L&1+\gamma _2L&\ldots&\gamma _2L\\ \vdots&\vdots&\vdots&\vdots \\ \gamma _nL&\gamma _nL&\ldots&1+\gamma _nL\end{vmatrix} =I_1+\gamma _1L I_2, \end{aligned}$$

where, by linearity of the determinant in its first row,

$$\begin{aligned} I_1 = \begin{vmatrix}1&0&\ldots&0\\ \gamma _2L&1+\gamma _2L&\ldots&\gamma _2L\\ \vdots&\vdots&\vdots&\vdots \\ \gamma _nL&\gamma _nL&\ldots&1+\gamma _nL\end{vmatrix} ~ \text{ and } ~ I_2=\begin{vmatrix}1&1&\ldots&1\\ \gamma _2L&1+\gamma _2L&\ldots&\gamma _2L\\ \vdots&\vdots&\vdots&\vdots \\ \gamma _nL&\gamma _nL&\ldots&1+\gamma _nL\\\end{vmatrix}. \end{aligned}$$

We will now show by induction that

$$\begin{aligned} \det Df(X)=1+\left( \sum _{k=1}^{n}\gamma _k\right) L(X). \end{aligned}$$

Obviously, for \(n=2\), we have \(\det Df(X)=1+(\gamma _1+\gamma _2)L\) for \(\gamma _1,\gamma _2\in {\mathbb R}\).

Next, we suppose that \(\det Df(X)=1+(\gamma _1+\cdots +\gamma _{p})L\) holds for \(p=n-1\), where \(\gamma _1,\ldots ,\gamma _{p}\in {\mathbb R}\). We need to show that it is true for \(p=n\). Clearly,

$$\begin{aligned} I_1=\begin{vmatrix}1+\gamma _2 L&\ldots&\gamma _2 L\\ \vdots&\vdots&\vdots \\ \gamma _nL&\ldots&1+\gamma _nL \end{vmatrix}=1+(\gamma _2+\cdots +\gamma _n)L \end{aligned}$$

and, subtracting \(\gamma _iL\) times the first row from the \(i\)-th row for each \(i=2,\ldots ,n\),

$$\begin{aligned} I_2=\begin{vmatrix}1&1&1&1&\ldots&1\\ 0&1&0&0&\ldots&0 \\ 0&0&1&0&\ldots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots \\ 0&0&0&0&\ldots&1\end{vmatrix}=1. \end{aligned}$$

Since \(\det Df=I_1+\gamma _1L I_2\), using the above together with the hypothesis \(\sum _{k=1}^n\gamma _k=0\), we conclude that

$$\begin{aligned} \det Df(X)=1+(\gamma _1+\gamma _2+\cdots +\gamma _n)L(X)\equiv 1. \end{aligned}$$

Applying Theorem 4, we finally show that f is indeed a univalent mapping. For convenience, we denote \(L_*=\displaystyle \int _0^1L(\gamma (t))\,\mathrm{d}t\) and obtain that

$$\begin{aligned} \det \left( \int _0^1\frac{\partial u_k}{\partial x_j}(\gamma (t) )\,\mathrm{d}t\right) ^n_{j,k=1}&= \begin{vmatrix}1+\gamma _1 L_*&\gamma _1 L_*&\ldots&\gamma _1 L_* \\ \gamma _2 L_*&1+\gamma _2 L_*&\ldots&\gamma _2 L_*\\ \vdots&\vdots&\vdots&\vdots \\ \gamma _n L_*&\gamma _n L_*&\ldots&1+\gamma _n L_*\\ \end{vmatrix}\\&= 1+(\gamma _1+\cdots +\gamma _n)L_* = 1 \end{aligned}$$

for all \(\gamma (t)\) as in Theorem 4. Thus, by Theorem 4, f is univalent in \({\mathbb R}^n.\) \(\square \)
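The determinant formula established in the proof is an instance of the matrix determinant lemma \(\det (I+uv^{T})=1+v^{T}u\); the following sympy cross-check (with \(n=4\) as our sample size) confirms it symbolically:

```python
import sympy as sp

L = sp.symbols('L')
gam = sp.symbols('g1:5')                        # gamma_1, ..., gamma_4
M = sp.eye(4) + L * sp.Matrix(gam) * sp.ones(1, 4)

# det(I + L * gamma * ones^T) = 1 + (gamma_1 + ... + gamma_n) * L
assert sp.expand(M.det() - (1 + L * sum(gam))) == 0
```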

Remark 2

The injectivity of the mapping f in Theorem 1 can also be obtained directly from its explicit form, without the use of Theorem 4. Indeed, if \(X=(x_1,\ldots , x_n)\), \(Y=(y_1,\ldots , y_n)\), \(z=\sum _{j=1}^nx_j\), \(\varphi (z) = \alpha _2z^2 + \cdots +\alpha _mz^m\), \(f=(u_1,\ldots ,u_n),\) where

$$\begin{aligned} u_k(X)=x_k+\gamma _k\varphi (z) \end{aligned}$$

for \(k=1,\ldots ,n\), and \(\alpha _j,\gamma _k\) are constants with

$$\begin{aligned} \sum _{k=1}^n\gamma _k=0, \end{aligned}$$
(3)

then f is a globally injective mapping. This is because \(f(X)=f(Y)\) implies

$$\begin{aligned} x_k+\gamma _k\varphi \left( \sum _{j=1}^nx_j\right) =y_k+\gamma _k\varphi \left( \sum _{j=1}^ny_j\right) ~ \text{ for } k=1,\ldots ,n\text{, } \end{aligned}$$
(4)

Summing (4) over \(k=1,\ldots ,n\) and using (3), we obtain \(\sum _{k=1}^nx_k=\sum _{k=1}^ny_k\), so that the two \(\varphi \)-terms in (4) coincide. Finally, from (4), it follows that \(x_k=y_k\) for each k, and hence, \(X=Y\).

Proof of Theorem 2

Consider \(f(X)=(u_1(X),u_2(X))\), where \(X=(x,y)\in {\mathbb R}^2\) and

$$\begin{aligned} u_1(x,y)=x+\left[ \alpha _2(x+y)^2+\alpha _3(x+y)^3+\cdots +\alpha _m(x+y)^m\right] \end{aligned}$$

and

$$\begin{aligned} u_2(x,y)=y-\left[ \alpha _2(x+y)^2+\alpha _3(x+y)^3+\cdots +\alpha _m(x+y)^m\right] \end{aligned}$$

for \(\alpha _j\in {\mathbb R}\), \(j=2, \ldots ,m\). Let

$$\begin{aligned} {\tilde{f}}(X)=(u_1(X)+W(X),u_2(X)+w(X)), \end{aligned}$$

where W and w are as mentioned in the statement.

As in the proof of Theorem 1, we see easily that

$$\begin{aligned} \det D{\tilde{f}}&= \begin{vmatrix}1+L +W_x&L +W_y\\ -L +w_x&1-L +w_y \end{vmatrix}\\&= 1+(1+ L)w_y+(1- L)W_x +LW_y - Lw_x+(W_xw_y-w_xW_y) \end{aligned}$$
(5)

which is identically 1, by the hypothesis of Theorem 2. Moreover, comparing the terms of highest degree 2m in (5) (equivalently, allowing \(\Vert X\Vert \rightarrow \infty \)), it follows that \(W_xw_y-w_xW_y=0\). We now show that this gives the relation

$$\begin{aligned} w=\lambda W \end{aligned}$$

for some constant \(\lambda \). If both w and W vanish identically, there is nothing to prove. If \(w\equiv 0\) and \(W\not \equiv 0\), then we choose \(\lambda =0\); by symmetry, the case \(W\equiv 0\) and \(w\not \equiv 0\) is handled with the roles of w and W interchanged. Therefore, it suffices to consider the case \(W\not \equiv 0 \not \equiv w\). We denote \(t=x/y\). Using the definitions of w(X) and W(X) in the statement, we may conveniently write

$$\begin{aligned} w(X)=\sum _{k=0}^{m+1}a_{k}x^ky^{m+1-k} = y^{m+1}\sum _{k=0}^{m+1}a_{k}t^k =: y^{m+1}p(t) \end{aligned}$$

and similarly,

$$\begin{aligned} W(X)=\sum _{k=0}^{m+1}\beta _{k}x^ky^{m+1-k}=: y^{m+1}q(t). \end{aligned}$$
(6)

Using these, we find that

$$\begin{aligned} w_x(X)&= \sum _{k=1}^{m+1}ka_{k}x^{k-1}y^{m+1-k}=y^{m}p'(t) ~ \text{ and } \\ w_y(X)&= \sum _{k=0}^{m}(m+1-k)a_{k}x^ky^{m-k} = y^{m}((m+1)p(t)-tp'(t)). \end{aligned}$$

Similarly, we see that

$$\begin{aligned} W_x(X)= y^{m}q'(t) ~\text{ and } ~W_y(X)= y^{m}((m+1)q(t)-tq'(t)). \end{aligned}$$

Then

$$\begin{aligned} \frac{W_y(X)}{W_x(X)}= \frac{w_y(X)}{w_x(X)} &\Longleftrightarrow \frac{(m+1)q(t)-tq'(t)}{q'(t)} =\frac{(m+1)p(t)-tp'(t)}{p'(t)}\\ &\Longleftrightarrow \frac{q'(t)}{q(t)} =\frac{p'(t)}{p(t)}, \end{aligned}$$

and the last relation, by integration, gives \(p(t)=\lambda q(t)\) for some constant \(\lambda \). Thus, we have the desired claim \(w=\lambda W\). Consequently, by (5), \(\det D{{\tilde{f}}}(X)\equiv 1\) implies that

$$\begin{aligned} (1+ L)w_y+(1- L)W_x + LW_y - Lw_x\equiv 0 \end{aligned}$$

which, by the relation \(w=\lambda W\), becomes

$$\begin{aligned} (\lambda + L\lambda + L)W_y+(1- L- L\lambda )W_x \equiv 0. \end{aligned}$$
(7)

Comparing the terms of lowest degree m in (7) (equivalently, allowing \(\Vert X\Vert \rightarrow 0\)), we see that \(\lambda W_y(X)+W_x(X)=0\), which is equivalent to

$$\begin{aligned} \sum _{k=1}^{m+1}\left[ \lambda (m+1-(k-1))\beta _{k-1}+k\beta _{k}\right] x^{k-1}y^{m+1-k} =0. \end{aligned}$$

This gives the condition \(\lambda (m+2-k)\beta _{k-1}+k\beta _{k}=0\) for \(k=1, \ldots , m+1\). From the last relation, it follows easily that

$$\begin{aligned} \beta _k=\frac{(-\lambda )^k (m+1)!}{k!(m+1-k)!}\beta _0 ~ \text{ for } k=1, \ldots , m+1 \end{aligned}$$

and thus, using this and (6), we obtain that

$$\begin{aligned} W(X)=\beta _0(y-\lambda x)^{m+1}. \end{aligned}$$

Since

$$\begin{aligned} W_x(X)=-\lambda (m+1)\beta _0(y-\lambda x)^{m} ~ \text{ and } ~ W_y(X)=(m+1)\beta _0(y-\lambda x)^{m}, \end{aligned}$$

comparing the terms of highest degree in (7) (equivalently, allowing \(\Vert X\Vert \rightarrow \infty \), and making use of the expression for L from the proof of Theorem 1 with \(n=2\)), we obtain, in the case \(L\not \equiv 0\) and \(\alpha _m \ne 0,\)

$$\begin{aligned} \alpha _{m}(1+\lambda )^{2}=0, \end{aligned}$$

which gives the condition \(\lambda =-1\). If \(\alpha _m =0\), we choose instead the largest j for which \(\alpha _j\ne 0\); the resulting condition again gives \(\lambda =-1\). Consequently, we end up with the forms

$$\begin{aligned} W(X)=\beta _0(x+y)^{m+1}=\alpha _{m+1}(x+y)^{m+1} \end{aligned}$$

and

$$\begin{aligned} w(X)=-\beta _0(x+y)^{m+1}=-\alpha _{m+1}(x+y)^{m+1}. \end{aligned}$$

If \(L\equiv 0\), then

$$\begin{aligned} {\tilde{f}}(X)=(x+\beta _0(y-\lambda x)^{m+1},y+\lambda \beta _0(y-\lambda x)^{m+1}). \end{aligned}$$

If at the same time \(\lambda \ne 0\), then we denote

$$\begin{aligned} A= \left( \begin{array}{cc} -\lambda & 0 \\ 0 & 1 \end{array}\right) , ~ F(X)=A{\tilde{f}}(A^{-1}(X)) ~ \text{ and } ~ \alpha _{m+1}=-\lambda \beta _0. \end{aligned}$$

Thus, we have \({\tilde{f}}=A^{-1} \circ F\circ A\) and

$$\begin{aligned} F(X)=\left( x+\alpha _{m+1}(x+y)^{m+1},y -\alpha _{m+1}(x+y)^{m+1}\right) . \end{aligned}$$
(8)

If \(L\equiv 0\) and \(\lambda =0\), then \(w \equiv 0\) and \(W(X)=\beta _0y^{m+1}\) so that

$$\begin{aligned} {\tilde{f}}(X)=(x+\beta _0y^{m+1},y). \end{aligned}$$

Now, we denote

$$\begin{aligned} A= \left( \begin{array}{cc} 1 & 0 \\ -1 & 1 \end{array}\right) ~ \text{ and } ~ \alpha _{m+1}=\beta _0. \end{aligned}$$

This gives \({\tilde{f}}=A^{-1} \circ F\circ A\) and F has form (8). The proof is complete. \(\square \)
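The final conjugation step can be checked symbolically; in the following sympy sketch, the case \(L\equiv 0\), \(\lambda =0\) with \(m=2\) and \(\beta _0=1\) is our sample choice:

```python
import sympy as sp

x, y = sp.symbols('x y')
A = sp.Matrix([[1, 0], [-1, 1]])
F = sp.Matrix([x + (x + y)**3, y - (x + y)**3])     # form (8), alpha_{m+1} = 1

AX = A * sp.Matrix([x, y])
FAX = F.subs({x: AX[0], y: AX[1]}, simultaneous=True)
conj = sp.expand(A.inv() * FAX)                     # A^{-1} o F o A

assert conj == sp.Matrix([x + y**3, y])             # = f-tilde for beta_0 = 1
```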

The proof of Theorem 3 requires some preparation.

Lemma 1

Suppose that \(G_j\in P_n(m)\) for \(j=1,\ldots ,N\) and \(G_j(X)=X+u^{(j)}(X)\gamma ^{(j)}\), where \(u^{(j)}(X)=\sum _{l=2}^m a_l^{(j)}z^l\) with \(z=x_1+\cdots +x_n\), \(\gamma ^{(j)}=(\gamma ^{(j)}_1,\ldots ,\gamma ^{(j)}_n)\), \(\sum _{k=1}^n\gamma ^{(j)}_k=0\) for all j, and \(a_l^{(j)}\) are some constants. Then \(G_1\circ \cdots \circ G_N=:F\in {\mathcal {P}}_n(m)\), and

$$\begin{aligned} F(X)=X+u^{(1)}(X)\gamma ^{(1)}+\cdots +u^{(N)}(X)\gamma ^{(N)}. \end{aligned}$$

Proof

We first prove the lemma for \(N=2\) and then extend it to the composition of N mappings, \(N\ge 2\). Let \(G_j\in P_n(m)\) for \(j=1,2\) and

$$\begin{aligned} G_j(X)=X+u^{(j)}(X)\gamma ^{(j)}. \end{aligned}$$

Also, introduce

$$\begin{aligned} G_2(X)=(g^{(2)}_1,g^{(2)}_2, \ldots , g^{(2)}_n) ~ \text{ and } ~L_p(X)=\sum _{l=2}^m la_l^{(p)}z^{l-1}~ \text{ for } p=1,2. \end{aligned}$$

We observe that

$$\begin{aligned} g^{(2)}_1(X)+ \cdots + g^{(2)}_n(X) =z + \sum _{l=2}^m \left[ a_l^{(2)} \sum _{k=1}^n \gamma ^{(2)}_k z^{l} \right] =z \end{aligned}$$

and therefore,

$$\begin{aligned} L_1(G_2(X))= L_1(X) ~ \text{ and } ~D{G_1}(G_2(X))=D{G_1}(X). \end{aligned}$$

Finally, for the case of \(N=2\), one has

$$\begin{aligned} DF(X)= D{G_1}(G_2(X))D{G_2}(X) = D{G_1}(X)D{G_2}(X)=A^{(1)}(L_1)A^{(2)}(L_2) =( r_{ij})_{n\times n}, \end{aligned}$$

where \(A^{(p)}(L_p)\) for \(p=1,2\) are the two \(n\times n\) matrices given by

$$\begin{aligned} A^{(p)}(L_p) = \left( {\begin{array}{cccc} 1+\gamma ^{(p)}_1L_p & \gamma ^{(p)}_1L_p & \cdots & \gamma ^{(p)}_1L_p\\ \gamma ^{(p)}_2L_p & 1+\gamma ^{(p)}_2L_p & \cdots & \gamma ^{(p)}_2L_p\\ \vdots & \vdots & \ddots & \vdots \\ \gamma ^{(p)}_nL_p & \gamma ^{(p)}_nL_p & \cdots & 1+\gamma ^{(p)}_nL_p\\ \end{array} } \right) , \quad p=1,2. \end{aligned}$$

If \(i\ne j\), then a computation gives

$$\begin{aligned} r_{ij}= \gamma _i^{(1)}L_1L_2\sum _{k=1}^n \gamma _k^{(2)}+\gamma _i^{(1)}L_1+\gamma _i^{(2)}L_2 = \gamma _i^{(1)}L_1+\gamma _i^{(2)}L_2. \end{aligned}$$

Similarly, for \(i=j\), we have

$$\begin{aligned} r_{ii}= \gamma _i^{(1)}L_1L_2\sum _{k=1}^n \gamma _k^{(2)}+1+\gamma _i^{(1)}L_1+\gamma _i^{(2)}L_2=1+\gamma _i^{(1)}L_1+\gamma _i^{(2)}L_2. \end{aligned}$$

Consequently, we obtain that

$$\begin{aligned} DF(X)= \left( {\begin{array}{cccc} 1+\gamma ^{(1)}_1L_1+ \gamma ^{(2)}_1L_2 & \gamma ^{(1)}_1L_1+ \gamma ^{(2)}_1L_2 & \cdots & \gamma ^{(1)}_1L_1+ \gamma ^{(2)}_1L_2\\ \gamma ^{(1)}_2L_1+\gamma ^{(2)}_2L_2 & 1+\gamma ^{(1)}_2L_1+\gamma ^{(2)}_2L_2 & \cdots & \gamma ^{(1)}_2L_1+\gamma ^{(2)}_2L_2\\ \vdots & \vdots & \ddots & \vdots \\ \gamma ^{(1)}_nL_1+\gamma ^{(2)}_nL_2 & \gamma ^{(1)}_nL_1+\gamma ^{(2)}_nL_2 & \cdots & 1+\gamma ^{(1)}_nL_1+\gamma ^{(2)}_nL_2\\ \end{array} } \right) . \end{aligned}$$

Moreover, it follows easily that

$$\begin{aligned} F(X)&=(G_1\circ G_2)(X)= \left( {\begin{array}{c} x_1+\gamma ^{(1)}_1u^{(1)}(X)+\gamma ^{(2)}_1u^{(2)}(X) \\ x_2+\gamma ^{(1)}_2u^{(1)}(X)+\gamma ^{(2)}_2u^{(2)}(X) \\ \vdots \\ x_n+\gamma ^{(1)}_nu^{(1)}(X)+\gamma ^{(2)}_nu^{(2)}(X) \\ \end{array} } \right) \\&= X+u^{(1)}(X)\gamma ^{(1)}+u^{(2)}(X)\gamma ^{(2)}. \end{aligned}$$

Similarly, in the case of the composition of three mappings \(G_1\), \(G_2\) and \(G_3\) (i.e., for \(N=3\)), the Jacobian matrix of \(F=G_1\circ G_2\circ G_3\) is given by

$$\begin{aligned} DF(X)= D{G_1}((G_2\circ G_3)(X))D{G_2}(G_3(X))D{G_3}(X) = D{G_1} (X ) D{G_2} (X) D{G_3} (X) = ( r_{ij})_{n\times n}, \end{aligned}$$

where for \(i\ne j\),

$$\begin{aligned} r_{ij}=\gamma _i^{(1)}L_1+\gamma _i^{(2)}L_2+\gamma _i^{(3)}L_3, ~ L_3(X)=\sum _{l=2}^m la_l^{(3)}z^{l-1}; \end{aligned}$$

and

$$\begin{aligned} r_{ii}=1+\gamma _i^{(1)}L_1+\gamma _i^{(2)}L_2+\gamma _i^{(3)}L_3. \end{aligned}$$

Moreover, we find that

$$\begin{aligned} F(X)=(G_1\circ G_2\circ G_3)(X)= X+u^{(1)}(X)\gamma ^{(1)}+u^{(2)}(X)\gamma ^{(2)}+u^{(3)}(X)\gamma ^{(3)}. \end{aligned}$$

This process may be continued inductively to complete the proof. \(\square \)

Remark 3

We remark that the lemma is of interest only when the dimension of the space is at least 3. For \(n=2\), it gives nothing new in comparison with Theorem 1. However, for \(n>2\), we obtain from Lemma 1 new mappings in \({\mathcal {P}}_n(m)\) not belonging to \(P_n (m)\).

Example 1

Let \(m=3=n\), \(N=2\), \(a_l^{(1)}=l-1\), \(a_l^{(2)}=2+l\), \(\gamma ^{(1)}=(1,2,-3)\), \(\gamma ^{(2)}=(-3,1,2)\) and apply Lemma 1 for the mappings \(G_1,G_2\in P_3(3)\).

Define \(F(X)=(G_1\circ G_2)(X)=(u_1(X),u_2(X),u_3(X))\), where \(X=(x_1,x_2,x_3)\), \(z=x_1+x_2+x_3\). Since

$$\begin{aligned} F(X)= & {} X+u^{(1)}(X)\gamma ^{(1)}+u^{(2)}(X)\gamma ^{(2)}\\= & {} X+ (z^2+2z^3)(1,2,-3)+(4z^2+5z^3)(-3,1,2), \end{aligned}$$

a comparison gives

$$\begin{aligned} u_1(X)=x_1-11z^2-13z^3,~ u_2(X)=x_2+6z^2+9z^3 ~ \text{ and } ~u_3(X)=x_3+5z^2+4z^3. \end{aligned}$$

According to Lemma 1, we obtain that \(F\in {\mathcal {P}}_3(3)\). We now show that \(F\notin P_3(3)\). For this, it is enough to show that there are no vectors \(\Gamma =(\Gamma _1,\Gamma _2)\) and \(A=(A_1,A_2)\) in \({\mathbb R}^2\) such that

$$\begin{aligned} \Gamma _1 A=(-11,-13),~ \Gamma _2 A=(6,9)~ \text{ and } ~ (-\Gamma _1-\Gamma _2)A=(5,4). \end{aligned}$$

It is easy to see that the above system of equations has no solution (the vectors \((-11,-13)\) and (6, 9) are not proportional), which means that \(F\notin P_3(3)\). \(\Box \)
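The computation in Example 1 can be cross-checked with the following sympy sketch (our own verification):

```python
import sympy as sp

X = sp.symbols('x1 x2 x3')
z = sum(X)
u1 = z**2 + 2*z**3                       # a_l^(1) = l - 1, l = 2, 3
u2 = 4*z**2 + 5*z**3                     # a_l^(2) = 2 + l
g1, g2 = (1, 2, -3), (-3, 1, 2)

G1 = sp.Matrix([X[k] + u1 * g1[k] for k in range(3)])
G2 = sp.Matrix([X[k] + u2 * g2[k] for k in range(3)])
F = G1.subs(dict(zip(X, G2)), simultaneous=True)    # F = G1 o G2

expected = sp.Matrix([X[0] - 11*z**2 - 13*z**3,
                      X[1] + 6*z**2 + 9*z**3,
                      X[2] + 5*z**2 + 4*z**3])
assert sp.expand(F - expected) == sp.zeros(3, 1)
assert sp.expand(F.jacobian(list(X)).det()) == 1    # F is a Keller map
```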

Now, we prove our final result, which produces further mappings in \({\mathcal {P}}_n(m)\) and generalizes Theorem 1 significantly in a natural way.

Proof of Theorem 3

The idea of the proof is to present F as a composition of mappings \(G_j\in P_n(m)\) for \(j=1,\ldots ,N\). For \(G_j\), we may write \(G_j(X)=(u_1^{(j)},\ldots ,u_n^{(j)}),\) where

$$\begin{aligned} u_k^{(j)}(X)=x_k+\gamma _k^{(j)}\sum _{l=2}^{m}A_l^{(j)}z^l,\quad \sum _{k=1}^n\gamma _k^{(j)}=0 ~ \text{ for } \text{ all } j=1, \ldots , N\text{, } \end{aligned}$$

and \(A_l^{(j)}\) are some constants.

By Lemma 1, the possibility of choosing \(F=G_1 \circ \cdots \circ G_N\) means the following: there exist two sets of numbers

$$\begin{aligned} \left\{ \gamma _k^{(j)}\right\} ^{n,\,N}_{k=1,\,j=1} \text{ with } \sum _{k=1}^n\gamma _k^{(j)}=0~ \text{ for } \text{ all } j=1, \ldots , N\text{, } \end{aligned}$$

and

$$\begin{aligned} \left\{ A_l^{(j)}\right\} ^{m,\,N}_{l=2,\,j=1} \end{aligned}$$

such that for all \(k=1, \ldots , n\),

$$\begin{aligned} \gamma _k^{(1)}A_l^{(1)}+\gamma _k^{(2)}A_l^{(2)}+\cdots +\gamma _k^{(N)}A_l^{(N)}=p_k^{(l)} \end{aligned}$$
(9)

holds for each \(l=2, \ldots , m\). To solve this task, for every \(j=1,\ldots ,N\), we fix a set of nonzero numbers \(\left\{ \gamma _k^{(j)}\right\} ^{n}_{k=1}\) such that \(\sum _{k=1}^n\gamma _k^{(j)}=0\) and the rank of the matrix \(\left( \gamma _k^{(j)}\right) ^{n,\,N}_{k=1,\,j=1}\) is equal to \((n-1)\) (we may take \(N>n\)). Then, for each \(l=2,\ldots ,m\), the linear system (9) has one and the same coefficient matrix \(\left( \gamma _k^{(j)}\right) ^{n,\,N}_{k=1,\,j=1}\) of rank \((n-1)\), with \(A_l^{(1)},\dots ,A_l^{(N)}\) playing the role of the unknowns. At the same time, in each system (9), the rank of the augmented matrix \(\left( \gamma _k^{(j)}\,\big |\, p_k^{(l)}\right)\) equals the rank \((n-1)\) of the coefficient matrix, since

$$\begin{aligned} \sum _{k=1}^np_k^{(l)}=0=\sum _{k=1}^n\gamma _k^{(j)} \quad \text{ for } \text{ each } \text{ l } \text{ and } \text{ j }. \end{aligned}$$

Therefore, according to the Kronecker–Capelli theorem, each of the systems of equations (9) \((l=2,\ldots ,m)\) has a solution \((A_l^{(1)},\ldots ,A_l^{(N)})\). This finishes the proof of Theorem 3. \(\square \)
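To illustrate the last step, the following sympy computation solves one system (9) with the coefficient columns \(\gamma ^{(1)}=(1,2,-3)\), \(\gamma ^{(2)}=(-3,1,2)\) from Example 1 and a zero-sum right-hand side of our own choosing:

```python
import sympy as sp

# columns are gamma^(1) = (1, 2, -3) and gamma^(2) = (-3, 1, 2); rank 2 = n - 1
G = sp.Matrix([[1, -3], [2, 1], [-3, 2]])
p = sp.Matrix([7, -2, -5])               # sample right-hand side with zero sum

A1, A2 = sp.symbols('A1 A2')
print(sp.linsolve((G, p), [A1, A2]))     # {(1/7, -16/7)}: system (9) is solvable
```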

Remark 4

The injectivity of F in Theorem 3 can also be easily obtained from the explicit form of F, analogously to Remark 2 following Theorem 1.

Remark 5

For \(n\ge 3\), Theorem 3 significantly expands the set \(P_n(m)\) of mappings for which the JC holds. Indeed, not counting the parameters of the matrices A and B from Theorem 1, the set \(P_n(m)\) has \([(m-1)+(n-1)]\) free parameters, whereas the set of polynomial mappings from Theorem 3 has \((m-1)(n-1)\) such parameters.