
1 Introduction

Many applications, not only in computer vision, require a solution of a homogeneous system of linear equations Ax = 0 or of a non-homogeneous system of linear equations Ax = b. Several numerical methods are implemented in standard numerical libraries. However, a purely numerical solution does not allow further symbolic manipulation. Moreover, the equations Ax = 0 and Ax = b are treated as different problems, and Ax = 0 in particular is often not solved quite correctly, as users tend to impose an additional condition on the unknown x (usually setting \( x_{k} = 1 \) or similar).

In the following, we show the equivalence of the extended cross-product (also called the outer product or progressive product) with the solution of both types of linear systems of equations, i.e. Ax = 0 and Ax = b.

Many problems in computer vision, computer graphics and visualization are \( 3 \)-dimensional. Therefore, specific numerical approaches can be applied to speed up the solution. In the following, the extended cross-product is introduced in the “classical” notation using the “×” symbol.

2 Extended Cross Product

Let us consider the standard cross-product of two vectors \( \varvec{a} = \left[ {a_{1} \text{,}\,a_{2} \text{,}\,a_{3} } \right]^{T} \) and \( \varvec{b} = \left[ {b_{1} \text{,}\,b_{2} \text{,}\,b_{3} } \right]^{T} \). Then the cross-product is defined as:

$$ \varvec{a} \times \varvec{b} = \det \left[ {\begin{array}{*{20}c} \varvec{i} & \varvec{j} & \varvec{k} \\ {a_{1} } & {a_{2} } & {a_{3} } \\ {b_{1} } & {b_{2} } & {b_{3} } \\ \end{array} } \right] $$
(1)

where: \( \varvec{i} = \left[ {1\text{,}\,0\text{,}\,0} \right]^{T} \), \( \varvec{j = }\left[ {0\text{,}\,1\text{,}\,0} \right]^{T} \text{,}\,\varvec{k = }\left[ {0\text{,}\,0\text{,}\,1} \right]^{T} \).

If a matrix form is needed, then we can write:

$$ \varvec{a} \times \varvec{b} = \left[ {\begin{array}{*{20}r} \hfill 0 & \hfill { - a_{3} } & \hfill {a_{2} } \\ \hfill {a_{3} } & \hfill 0 & \hfill { - a_{1} } \\ \hfill { - a_{2} } & \hfill {a_{1} } & \hfill 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {b_{1} } \\ {b_{2} } \\ {b_{3} } \\ \end{array} } \right] $$
(2)

In some applications the matrix form is more convenient.
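
As a minimal illustration, Eq. (2) can be implemented directly; this is a sketch with our own naming (`Vec3`, `cross3`), not code from the original paper:

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Cross-product a x b, written out from the matrix-vector product of Eq. (2):
// the skew-symmetric matrix built from a is applied to b.
Vec3 cross3(const Vec3& a, const Vec3& b) {
    return { a[1] * b[2] - a[2] * b[1],
             a[2] * b[0] - a[0] * b[2],
             a[0] * b[1] - a[1] * b[0] };
}
```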

Let us introduce the extended cross-product of three vectors \( \varvec{a} = \left[ {a_{1} ,\, \ldots \,,a_{n} } \right]^{T} \), \( \varvec{b} = \left[ {b_{1} ,\, \ldots \,,b_{n} } \right]^{T} \) and \( \varvec{c} = \left[ {c_{1} ,\, \ldots \,,c_{n} } \right]^{T} \), \( n = 4 \), as:

$$ \varvec{a} \times \varvec{b} \times \varvec{c} = \det \left[ {\begin{array}{*{20}c} \varvec{i} & \varvec{j} & \varvec{k} & \varvec{l} \\ {a_{1} } & {a_{2} } & {a_{3} } & {a_{4} } \\ {b_{1} } & {b_{2} } & {b_{3} } & {b_{4} } \\ {c_{1} } & {c_{2} } & {c_{3} } & {c_{4} } \\ \end{array} } \right] $$
(3)

where: \( \varvec{i} = \left[ {1,\,0,\,0,\,0} \right]^{T} \), \( \varvec{j} = \left[ {0,\,1,\,0,\,0} \right]^{T} \), \( \varvec{k} = \left[ {0,\,0,\,1,\,0} \right]^{T} \), \( \varvec{l} = \left[ {0,\,0,\,0,\,1} \right]^{T} \).

It can be shown that there exists a matrix form for the extended cross-product representation:

$$ \varvec{a} \times \varvec{b} \times \varvec{c} = \left( { - 1} \right)^{n + 1} \left[ {\begin{array}{*{20}r} \hfill 0 & \hfill { - \delta_{34} } & \hfill {\delta_{24} } & \hfill { - \delta_{23} } \\ \hfill {\delta_{34} } & \hfill 0 & \hfill { - \delta_{14} } & \hfill {\delta_{13} } \\ \hfill { - \delta_{24} } & \hfill {\delta_{14} } & \hfill 0 & \hfill { - \delta_{12} } \\ \hfill {\delta_{23} } & \hfill { - \delta_{13} } & \hfill {\delta_{12} } & \hfill 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ {c_{3} } \\ {c_{4} } \\ \end{array} } \right] $$
(4)

where \( n = 4 \) in this case, and \( \delta_{ij} \) are the sub-determinants formed by columns i, j of the matrix T defined as:

$$ \varvec{T = }\left[ {\begin{array}{*{20}l} {a_{1} } \hfill & {a_{2} } \hfill & {a_{3} } \hfill & {a_{4} } \hfill \\ {b_{1} } \hfill & {b_{2} } \hfill & {b_{3} } \hfill & {b_{4} } \hfill \\ \end{array} } \right] $$
(5)

e.g. sub-determinant \( \delta_{24} = \det \left[ {\begin{array}{*{20}c} {a_{2} } & {a_{4} } \\ {b_{2} } & {b_{4} } \\ \end{array} } \right] \) etc.
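
The matrix form (4) translates directly into code. The following is a minimal C++ sketch (the names `Vec4` and `cross4` and the reuse of the six δ values are our assumptions, not the paper’s listing); each \( \delta_{ij} \) is computed once:

```cpp
#include <array>

using Vec4 = std::array<double, 4>;

// Extended cross-product a x b x c in 4D, Eq. (4). The six sub-determinants
// delta_ij of the 2x4 matrix T = [a; b] of Eq. (5) are computed once and reused.
Vec4 cross4(const Vec4& a, const Vec4& b, const Vec4& c) {
    const double d12 = a[0] * b[1] - a[1] * b[0];
    const double d13 = a[0] * b[2] - a[2] * b[0];
    const double d14 = a[0] * b[3] - a[3] * b[0];
    const double d23 = a[1] * b[2] - a[2] * b[1];
    const double d24 = a[1] * b[3] - a[3] * b[1];
    const double d34 = a[2] * b[3] - a[3] * b[2];
    // The factor (-1)^{n+1} = -1 for n = 4 is folded into the signs below.
    return {  d34 * c[1] - d24 * c[2] + d23 * c[3],
             -d34 * c[0] + d14 * c[2] - d13 * c[3],
              d24 * c[0] - d14 * c[1] + d12 * c[3],
             -d23 * c[0] + d13 * c[1] - d12 * c[2] };
}
```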

The extended cross-product in the 5-dimensional case is defined as:

$$ \varvec{a} \times \varvec{b} \times \varvec{c} \times \varvec{d} = \det \left[ {\begin{array}{*{20}c} \varvec{i} & \varvec{j} & \varvec{k} & \varvec{l} & \varvec{n} \\ {a_{1} } & {a_{2} } & {a_{3} } & {a_{4} } & {a_{5} } \\ {b_{1} } & {b_{2} } & {b_{3} } & {b_{4} } & {b_{5} } \\ {c_{1} } & {c_{2} } & {c_{3} } & {c_{4} } & {c_{5} } \\ {d_{1} } & {d_{2} } & {d_{3} } & {d_{4} } & {d_{5} } \\ \end{array} } \right] $$
(6)

where: \( \varvec{i} = \left[ {1,0,0,0,0} \right]^{T} \), \( \varvec{j} = \left[ {0,1,0,0,0} \right]^{T} \), \( \varvec{k} = \left[ {0,0,1,0,0} \right]^{T} \), \( \varvec{l} = \left[ {0,0,0,1,0} \right]^{T} \), \( \varvec{n} = \left[ {0,0,0,0,1} \right]^{T} \).

It can be shown that there exists a matrix form as well:

$$ \varvec{a} \times \varvec{b} \times \varvec{c} \times \varvec{d} = \left( { - 1} \right)^{n + 1} \left[ {\begin{array}{*{20}r} \hfill 0 & \hfill { - \delta_{345} } & \hfill {\delta_{245} } & \hfill { - \delta_{235} } & \hfill {\delta_{234} } \\ \hfill {\delta_{345} } & \hfill 0 & \hfill { - \delta_{145} } & \hfill {\delta_{135} } & \hfill { - \delta_{134} } \\ \hfill { - \delta_{245} } & \hfill {\delta_{145} } & \hfill 0 & \hfill { - \delta_{125} } & \hfill {\delta_{124} } \\ \hfill {\delta_{235} } & \hfill { - \delta_{135} } & \hfill {\delta_{125} } & \hfill 0 & \hfill { - \delta_{123} } \\ \hfill { - \delta_{234} } & \hfill {\delta_{134} } & \hfill { - \delta_{124} } & \hfill {\delta_{123} } & \hfill 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {d_{1} } \\ {d_{2} } \\ {d_{3} } \\ {d_{4} } \\ {d_{5} } \\ \end{array} } \right] $$
(7)

where \( n = 5 \) in this case, and \( \delta_{ijk} \) are the sub-determinants formed by columns i, j, k of the matrix \( \varvec{T} \) defined as:

$$ \varvec{T} = \left[ {\begin{array}{*{20}c} {a_{1} } & {a_{2} } & {a_{3} } & {a_{4} } & {a_{5} } \\ {b_{1} } & {b_{2} } & {b_{3} } & {b_{4} } & {b_{5} } \\ {c_{1} } & {c_{2} } & {c_{3} } & {c_{4} } & {c_{5} } \\ \end{array} } \right] $$
(8)

e.g. sub-determinant \( \delta_{245} \) is defined as:

$$ \delta_{245} = \det \left[ {\begin{array}{*{20}c} {a_{2} } & {a_{4} } & {a_{5} } \\ {b_{2} } & {b_{4} } & {b_{5} } \\ {c_{2} } & {c_{4} } & {c_{5} } \\ \end{array} } \right] = a_{2} \det \left[ {\begin{array}{*{20}c} {b_{4} } & {b_{5} } \\ {c_{4} } & {c_{5} } \\ \end{array} } \right] - a_{4} \det \left[ {\begin{array}{*{20}c} {b_{2} } & {b_{5} } \\ {c_{2} } & {c_{5} } \\ \end{array} } \right] + a_{5} \det \left[ {\begin{array}{*{20}c} {b_{2} } & {b_{4} } \\ {c_{2} } & {c_{4} } \\ \end{array} } \right] $$
(9)

In spite of the “complicated” description above, this approach leads to a faster computation in the case of lower dimensions, see Sect. 12.

3 Projective Representation and Duality Principle

Projective representation and its application in computations are often considered to be mysterious or too complex. Nevertheless, we use it naturally and very frequently in the form of fractions, e.g. a/b. We also know that fractions help us to express values which cannot be expressed precisely due to the limited length of the mantissa, e.g. \( 1/3 = 0.333 \ldots = 0.\bar{3} \).

In the following we will explore projective representation, actually rational fractions, and its applicability.

3.1 Projective Representation

Projective extension of the Euclidean space is used commonly in computer graphics and computer vision, mostly for geometric transformations. However, in the computational sciences, the projective representation is generally not used. This chapter briefly introduces basic properties and mutual conversions. A more detailed description of the projective representation and its applications can be found in [12, 15, 20].

The given point \( \varvec{X} = \left( {X,Y} \right) \) in the Euclidean space \( E^{2} \) is represented in homogeneous coordinates as \( \varvec{x} = \left[ {x,y:w} \right]^{T} \), \( w \ne 0 \). It can be seen that \( \varvec{x} \) is actually a line in the space of homogeneous coordinates, with the origin excluded, representing a single point of the projective space. Mutual conversions are defined as:

$$ X = \frac{x}{w}\quad \quad \quad Y = \frac{y}{w} $$
(10)

where: \( w \ne 0 \) is the homogeneous coordinate. Note that the homogeneous coordinate w is actually a scaling factor with no physical meaning, while x, y are values with physical units in general.
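
As a small illustration of the conversion in Eq. (10), a sketch with our own naming, assuming \( w \ne 0 \):

```cpp
#include <array>

// A projective point: [x, y : w], w != 0.
using PPoint2 = std::array<double, 3>;

// Conversion of Eq. (10) to the Euclidean space; the only place
// where a division is actually required.
std::array<double, 2> toEuclidean(const PPoint2& p) {
    return { p[0] / p[2], p[1] / p[2] };  // assumes w != 0
}
```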

The projective representation gives us nearly double precision, as the mantissas of x (resp. y) and w are both used for the value representation. However, we have to distinguish two different data types, i.e.

  • Projective representation of an n-dimensional value \( \varvec{X} = \left( {X_{1} ,\, \ldots \,,X_{n} } \right) \), represented by a one-dimensional array \( \varvec{x} = \left[ {x_{1} ,\, \ldots \,,x_{n} :x_{w} } \right]^{T} \), e.g. the coordinates of a point, which is fixed to the origin of the coordinate system.

  • Projective representation of an n-dimensional vector (in the mathematical meaning) \( \varvec{A} = \left( {A_{1} ,\, \ldots \,,A_{n} } \right) \), represented by a one-dimensional array \( \varvec{a} = \left[ {a_{1} ,\, \ldots \,,a_{n} :a_{w} } \right]^{T} \). In this case the homogeneous coordinate \( a_{w} \) is actually just a scaling factor. A vector is not fixed to the origin of the coordinate system; it is “movable”.

Therefore a user should pay attention to the correctness of operations. Another interesting application of the projective representation is rational trigonometry [19].

3.2 Principle of Duality

The projective representation also offers one very important property, the principle of duality. The principle of duality in \( E^{2} \) states that any theorem remains true when we interchange the words “point” and “line”, “lie on” and “pass through”, “join” and “intersection”, “collinear” and “concurrent”, and so on. Once a theorem has been established, the dual theorem is obtained as described above [1, 5, 9, 14]. A similar duality is valid for \( E^{3} \) as well, i.e. the terms “point” and “plane” are dual, etc.; it can be shown that the operations “join” and “meet” are dual as well.

This helps a lot in solving some geometrical problems. In the following, we demonstrate it on very simple geometrical problems: the intersection of two lines (resp. three planes) and the computation of a line given by two points (resp. a plane given by three points).

4 Solution of Ax = b

The solution of a non-homogeneous system of linear equations Ax = b is needed in many computational tasks.

For simplicity of explanation, let us consider a simple example of the intersection computation of two lines \( p_{1} \) and \( p_{2} \) in \( E^{2} \) given as:

$$ p_{1} :a_{1} X + b_{1} Y + c_{1} = 0\;\;\;\;\;\;\;\;\;\;\;\;\;p_{2} :a_{2} X + b_{2} Y + c_{2} = 0 $$
(11)

The intersection point of those two lines is given as the solution of the linear system of equations Ax = b:

$$ \left[ {\begin{array}{*{20}c} {a_{1} } & {b_{1} } \\ {a_{2} } & {b_{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} X \\ Y \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} { - c_{1} } \\ { - c_{2} } \\ \end{array} } \right] $$
(12)

Generally, for the given system of n linear equations with n unknowns in the form Ax = b, the solution is given as:

$$ X_{i} = \frac{{\det \left( {\varvec{A}_{i} } \right)}}{{\det \left( \varvec{A} \right)}}\quad i = 1\text{,}\, \ldots \,\text{,}n $$
(13)

where A is a regular \( n \times n \) matrix with a non-zero determinant, the matrix \( \varvec{A}_{i} \) is the matrix A with the \( i^{th} \) column replaced by the vector b, and \( \varvec{X} = \left[ {X_{1} ,\, \ldots \,,X_{n} } \right]^{T} \) is the vector of unknown values.

In a low-dimensional case, using general methods for the solution of linear equations, e.g. Gaussian elimination etc., is computationally expensive. Also, the division operation is computationally expensive and decreases the precision of the solution.

Usually, a condition like if \( \left| {\det \left( \varvec{A} \right)} \right| < eps \) then EXIT is used to handle “close to singular” cases. Of course, nobody knows what value of eps is appropriate.
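
For comparison with the projective approach developed below, here is a sketch of the “standard” Euclidean solution of Eqs. (11)–(13) with the criticized eps test (the function name and the default eps value are our assumptions):

```cpp
#include <cmath>
#include <optional>
#include <utility>

// Intersection of the lines of Eq. (11) via Cramer's rule, Eq. (13).
std::optional<std::pair<double, double>>
intersectLinesEuclidean(double a1, double b1, double c1,
                        double a2, double b2, double c2,
                        double eps = 1e-12) {
    const double det = a1 * b2 - a2 * b1;           // det(A)
    if (std::fabs(det) < eps) return std::nullopt;  // "close to singular" exit
    const double X = (-c1 * b2 + c2 * b1) / det;    // det(A_1) / det(A)
    const double Y = (-a1 * c2 + a2 * c1) / det;    // det(A_2) / det(A)
    return std::make_pair(X, Y);
}
```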

5 Solution of Ax = 0

There is another very simple geometrical problem: the determination of a line p given by two points \( \varvec{X}_{1} = \left( {X_{1} ,\,Y_{1} } \right) \) and \( \varvec{X}_{2} = \left( {X_{2} ,\,Y_{2} } \right) \) in \( E^{2} \). This seems to be quite a simple problem, as we can write:

$$ aX_{1} + bY_{1} + c = 0\;\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,aX_{2} + bY_{2} + c = 0 $$
(14)

i.e. it leads to a solution of a homogeneous system of equations Ax = 0:

$$ \left[ {\begin{array}{*{20}c} {X_{1} } & {Y_{1} } & 1 \\ {X_{2} } & {Y_{2} } & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} a \\ b \\ c \\ \end{array} } \right] = 0 $$
(15)

In this case, we obtain a one-parametric set of solutions, as Eq. (15) can be multiplied by any value \( q \ne 0 \) and the line remains the same.

There is a problem: we know that lines and points are dual in the \( E^{2} \) case, so the question is why the solutions of Ax = b and Ax = 0 are not dual. However, if the projective representation is used, the duality principle remains valid, as follows.

6 Solution of Ax = b and Ax = 0

Let us consider again the intersection of two lines \( \varvec{p}_{1} = \left[ {a_{1} ,\,b_{1} :c_{1} } \right]^{T} \) and \( \varvec{p}_{2} = \left[ {a_{2} ,\,b_{2} :c_{2} } \right]^{T} \), leading to a solution of the non-homogeneous linear system Ax = b, which is given as:

$$ p_{1} :a_{1} X + b_{1} Y + c_{1} = 0\;\;\;\;\;\;\;\;\;\;p_{2} :a_{2} X + b_{2} Y + c_{2} = 0 $$
(16)

If the equations are multiplied by \( w \ne 0 \) we obtain:

$$ \begin{aligned} p_{1} :a_{1} X + b_{1} Y + c_{1} \triangleq \quad \quad \quad p_{2} :a_{2} X + b_{2} Y + c_{2} \triangleq \hfill \\ a_{1} x + b_{1} y + c_{1} w = 0\quad \quad \quad \quad a_{2} x + b_{2} y + c_{2} w = 0 \hfill \\ \end{aligned} $$
(17)

where \( \triangleq \) means “projectively equivalent to”, as x = wX and y = wY.

Now we can rewrite the equations to the matrix form as Ax = 0:

$$ \left[ {\begin{array}{*{20}c} {a_{1} } & {b_{1} } & {c_{1} } \\ {a_{2} } & {b_{2} } & {c_{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} x \\ y \\ w \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ \end{array} } \right] $$
(18)

where \( \varvec{x} = \left[ {x,y:w} \right]^{T} \) is the intersection point in the homogeneous coordinates.

In the case of the computation of a line given by two points in homogeneous coordinates, i.e. \( \varvec{x}_{1} = \left[ {x_{1} ,y_{1} :w_{1} } \right]^{T} \) and \( \varvec{x}_{2} = \left[ {x_{2} ,y_{2} :w_{2} } \right]^{T} \), Eq. (14) is multiplied by \( w_{i} \ne 0 \). Then, we get a solution in the matrix form as Ax = 0, i.e.

$$ \left[ {\begin{array}{*{20}c} {x_{1} } & {y_{1} } & {w_{1} } \\ {x_{2} } & {y_{2} } & {w_{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} a \\ b \\ c \\ \end{array} } \right] = 0 $$
(19)

Now, we can see that the formulation leads in both cases to the same numerical problem: the solution of a homogeneous linear system of equations.

However, a solution of a homogeneous linear system of equations is not quite straightforward, as there is a one-parametric set of solutions and all of them are projectively equivalent. It can be seen that the solution of Eq. (18), i.e. the intersection of two lines in \( E^{2} \), is equivalent to:

$$ \varvec{x} = \varvec{p}_{1} \times \varvec{p}_{2} $$
(20)

and due to the principle of duality we can write for a line given by two points:

$$ \varvec{p} = \varvec{x}_{1} \times \varvec{x}_{2} $$
(21)
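
Reusing the `cross3` sketch from Sect. 2 (our naming), both dual problems collapse to a single operation each; no division and no eps test are needed as long as the result is kept projective:

```cpp
// Eq. (20): intersection point x = [x, y : w]^T of two lines p1, p2 = [a, b : c]^T.
Vec3 intersectLines(const Vec3& p1, const Vec3& p2) { return cross3(p1, p2); }

// Eq. (21): line p = [a, b : c]^T joining two points x1, x2 (the dual problem).
Vec3 joinPoints(const Vec3& x1, const Vec3& x2)     { return cross3(x1, x2); }
```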

In the three-dimensional case we can use the extended cross-product [12, 15, 16].

A plane \( \rho :aX + bY + cZ + d = 0 \) given by three points \( \varvec{x}_{1} = \left[ {x_{1} ,\,y_{1} ,\,z_{1} :w_{1} } \right]^{T} \), \( \varvec{x}_{2} = \left[ {x_{2} ,y_{2} ,z_{2} :w_{2} } \right]^{T} \) and \( \varvec{x}_{3} = \left[ {x_{3} ,y_{3} ,z_{3} :w_{3} } \right]^{T} \) is determined in the projective representation as:

$$ \varvec{\rho}= \left[ {a,b,c:d} \right]^{T} = \varvec{x}_{1} \times \varvec{x}_{2} \times \varvec{x}_{3} $$
(22)

and the intersection point x of three planes \( \varvec{\rho }_{1} = \left[ {a_{1} ,\,b_{1} ,\,c_{1} :d_{1} } \right]^{T} \), \( \varvec{\rho}_{2} = \left[ {a_{2} ,\,b_{2} ,\,c_{2} :d_{2} } \right]^{T} \) and \( \varvec{\rho}_{3} = \left[ {a_{3} ,\,b_{3} ,\,c_{3} :d_{3} } \right]^{T} \) is determined in the projective representation as:

$$ \varvec{x} = \left[ {x,y,z:w} \right]^{T} =\varvec{\rho}_{1} \times\varvec{\rho}_{2} \times\varvec{\rho}_{3} $$
(23)

due to the duality principle.

It can be seen that no division operation is needed if the result can be left in the projective representation. The approach presented above has another great advantage: it allows symbolic manipulation, as we have avoided a numerical solution, and the precision is nearly doubled as well.
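
In code, the dual pair of Eqs. (22)–(23) reuses the `cross4` sketch from Sect. 2 (our naming):

```cpp
// Eq. (22): plane rho = [a, b, c : d]^T through three projective points.
Vec4 planeFromPoints(const Vec4& x1, const Vec4& x2, const Vec4& x3) {
    return cross4(x1, x2, x3);
}

// Eq. (23): intersection point x = [x, y, z : w]^T of three planes (dual).
Vec4 pointFromPlanes(const Vec4& r1, const Vec4& r2, const Vec4& r3) {
    return cross4(r1, r2, r3);
}
```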

7 Barycentric Coordinates Computation

Barycentric coordinates are often used in many engineering applications, not only in geometry. The barycentric coordinates computation leads to a solution of a system of linear equations. However, it was shown that a solution of a linear system of equations is equivalent to the extended cross-product [12–14]. Therefore it is possible to compute barycentric coordinates using the cross-product, which is convenient for the application of SSE instructions or for GPU-oriented computations. Let us demonstrate the proposed approach on a simple example again.

Given a triangle in \( E^{2} \) defined by points \( \varvec{ x}_{i} = [x_{i} \text{,}\,y_{i} :1]^{T} \), \( i = 1\text{,}\, \ldots \,,3 \), the barycentric coordinates of the point \( \varvec{x}_{0} = [x_{0} \text{,}\,y_{0} :1]^{T} \) can be computed as follows:

$$ \begin{array}{*{20}c} {\lambda_{1} x_{1} + \lambda_{2} x_{2} + \lambda_{3} x_{3} = x_{0} } \\ {\lambda_{1} y_{1} + \lambda_{2} y_{2} + \lambda_{3} y_{3} = y_{0} } \\ {\lambda_{1} + \lambda_{2} + \lambda_{3} = 1} \\ \end{array} $$
(24)

For simplicity, we set \( w_{i} = 1 \), \( i = 1,\, \ldots \,,3 \). It means that we have to solve a system of linear equations Ax = b:

$$ \left[ {\begin{array}{*{20}c} {x_{1} } & {x_{2} } & {x_{3} } \\ {y_{1} } & {y_{2} } & {y_{3} } \\ 1 & 1 & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\lambda_{1} } \\ {\lambda_{2} } \\ {\lambda_{3} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {x_{0} } \\ {y_{0} } \\ 1 \\ \end{array} } \right] $$
(25)

If the points are given in the projective space with homogeneous coordinates \( \varvec{x}_{i} = [x_{i} ,y_{i} :w_{i} ]^{T} \), \( i = 1,\, \ldots \,,3 \) and \( \varvec{x}_{0} = [x_{0} ,y_{0} :w_{0} ]^{T} \), it can be easily proved that, due to the multilinearity, we need to solve the linear system Ax = b:

$$ \left[ {\begin{array}{*{20}c} {x_{1} } & {x_{2} } & {x_{3} } \\ {y_{1} } & {y_{2} } & {y_{3} } \\ {w_{1} } & {w_{2} } & {w_{3} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\lambda_{1} } \\ {\lambda_{2} } \\ {\lambda_{3} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {x_{0} } \\ {y_{0} } \\ {w_{0} } \\ \end{array} } \right] $$
(26)

Let us define new vectors containing a row of the matrix A and vector b as:

$$ \varvec{x} = [x_{1} ,x_{2} ,x_{3} ,x_{0} ]^{T} \quad \varvec{y} = [y_{1} ,y_{2} ,y_{3} ,y_{0} ]^{T} \quad \varvec{w} = [w_{1} ,w_{2} ,w_{3} ,w_{0} ]^{T} $$
(27)

The projective barycentric coordinates \( \varvec{\xi}= [\xi_{1} ,\xi_{2} ,\xi_{3} :\xi_{w} ]^{T} \) are given as:

$$ \lambda_{1} = - \frac{{\xi_{1} }}{{\xi_{w} }} \quad \lambda_{2} = - \frac{{\xi_{2} }}{{\xi_{w} }} \quad \lambda_{3} = - \frac{{\xi_{3} }}{{\xi_{w} }} $$
(28)

i.e.

$$ \lambda_{i} = - \frac{{\xi_{i} }}{{\xi_{w} }}\quad i = 1,\, \ldots \,,3 $$
(29)

Using the extended cross product, the projective barycentric coordinates are given as:

$$ \varvec{\xi}= \varvec{x} \times \varvec{y} \times \varvec{w} = \det \left[ {\begin{array}{*{20}c} \varvec{i} & \varvec{j} & \varvec{k} & \varvec{l} \\ {x_{1} } & {x_{2} } & {x_{3} } & {x_{0} } \\ {y_{1} } & {y_{2} } & {y_{3} } & {y_{0} } \\ {w_{1} } & {w_{2} } & {w_{3} } & {w_{0} } \\ \end{array} } \right] = \left[ {\xi_{1} ,\xi_{2} ,\xi_{3} :\xi_{w} } \right]^{T} $$
(30)

where \( \varvec{i} = \left[ {1,0,0,0} \right]^{T} ,\,\varvec{j} = \left[ {0,1,0,0} \right]^{T} ,\,\varvec{k} = \left[ {0,0,1,0} \right]^{T} ,\,\varvec{l} = \left[ {0,0,0,1} \right]^{T} \)
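
A sketch of Eqs. (26)–(30) in code, reusing `cross4` from Sect. 2 (the function name is ours); the division by \( \xi_{w} \) is needed only if the Euclidean \( \lambda_{i} \) are required:

```cpp
// Projective points p_i = [x_i, y_i : w_i]^T of the triangle and the point p0.
// Returns the Euclidean barycentric coordinates (lambda_1, lambda_2, lambda_3).
std::array<double, 3> barycentric2D(const Vec3& p1, const Vec3& p2,
                                    const Vec3& p3, const Vec3& p0) {
    // Rows of the matrix of Eq. (26) extended by the right-hand side, Eq. (27).
    const Vec4 x{ p1[0], p2[0], p3[0], p0[0] };
    const Vec4 y{ p1[1], p2[1], p3[1], p0[1] };
    const Vec4 w{ p1[2], p2[2], p3[2], p0[2] };
    const Vec4 xi = cross4(x, y, w);   // [xi_1, xi_2, xi_3 : xi_w], Eq. (30)
    return { -xi[0] / xi[3], -xi[1] / xi[3], -xi[2] / xi[3] };  // Eq. (29)
}
```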

Similarly in the \( E^{3} \) case, given a tetrahedron in \( E^{3} \) defined by points \( \varvec{x}_{i} = [x_{i} ,y_{i} ,z_{i} :w_{i} ]^{T} \), \( i = 1,\, \ldots \,,4 \), and the point \( \varvec{x}_{0} = [x_{0} ,y_{0} ,z_{0} :w_{0} ]^{T} \), the row vectors are defined as:

$$ \begin{array}{*{20}l} {\varvec{x} = [x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{0} ]^{T} } \hfill & {\varvec{y} = [y_{1} ,y_{2} ,y_{3} ,y_{4} ,y_{0} ]^{T} } \hfill \\ {\varvec{z} = [z_{1} ,z_{2} ,z_{3} ,z_{4} ,z_{0} ]^{T} } \hfill & {\varvec{w} = [w_{1} ,w_{2} ,w_{3} ,w_{4} ,w_{0} ]^{T} } \hfill \\ \end{array} $$
(31)

Then projective barycentric coordinates are given as:

$$ \varvec{\xi}= \varvec{x} \times \varvec{y} \times \varvec{z} \times \varvec{w} = [\xi_{1} ,\xi_{2} ,\xi_{3} ,\xi_{4} :\xi_{w} ]^{T} $$
(32)

The Euclidean barycentric coordinates are given as:

$$ \lambda_{1} = - \frac{{\xi_{1} }}{{\xi_{w} }} \quad \lambda_{2} = - \frac{{\xi_{2} }}{{\xi_{w} }}\quad \lambda_{3} = - \frac{{\xi_{3} }}{{\xi_{w} }}\quad \lambda_{4} = - \frac{{\xi_{4} }}{{\xi_{w} }} $$
(33)

i.e.

$$ \lambda_{i} = - \frac{{\xi_{i} }}{{\xi_{w} }}\quad i = 1,\, \ldots \,,4 $$
(34)

What a simple and elegant solution!

The presented computation of barycentric coordinates is simple and convenient for GPU use or SSE instructions. Moreover, as we have assumed from the very beginning, there is no need to convert the projective values to the Euclidean notation. As a direct consequence, we save a lot of computational time and also increase the robustness of the computation, especially due to the elimination of division operations. As the result is represented as a rational fraction, the precision is nearly equivalent to a doubled mantissa precision and exponent range.

Let us again present the advantages of the projective representation on simple examples.

8 Intersection of Two Planes

The intersection of two planes \( \rho_{1} \) and \( \rho_{2} \) in \( E^{3} \) is seemingly a simple problem, but it is surprisingly computationally expensive, Fig. 1. Let us consider the “standard” solution in the Euclidean space and a solution using the projective approach.

Fig. 1. A line as the intersection of two planes

Given two planes \( \rho_{1} \) and \( \rho_{2} \) in \( E^{3} \):

$$ \varvec{\rho}_{1} = [a_{1} ,b_{1} ,c_{1} :d_{1} ]^{T} = [\varvec{n}_{1}^{T} :d_{1} ]^{T} \quad\varvec{\rho}_{2} = [a_{2} ,b_{2} ,c_{2} :d_{2} ]^{T} = [\varvec{n}_{2}^{T} :d_{2} ]^{T} $$
(35)

where \( \varvec{n}_{1} \) and \( \varvec{n}_{2} \) are the normal vectors of those planes.

Then the directional vector s of a parametric line \( \varvec{X}\left( t \right) = \varvec{X}_{0} + \varvec{s}t \) is given by a cross product:

$$ \varvec{s} = \varvec{n}_{1} \times \varvec{n}_{2} \equiv [a_{3} , b_{3} ,c_{3} ]^{T} $$
(36)

and point \( \varvec{X}_{0} \in E^{3} \) of the line is given as:

$$ \begin{array}{*{20}c} {X_{0} = \frac{{d_{2} \left| {\begin{array}{*{20}c} {b_{1} } & {c_{1} } \\ {b_{3} } & {c_{3} } \\ \end{array} } \right| - d_{1} \left| {\begin{array}{*{20}c} {b_{2} } & {c_{2} } \\ {b_{3} } & {c_{3} } \\ \end{array} } \right|}}{DET}} & {Y_{0} = \frac{{d_{2} \left| {\begin{array}{*{20}c} {a_{3} } & {c_{3} } \\ {a_{1} } & {c_{1} } \\ \end{array} } \right| - d_{1} \left| {\begin{array}{*{20}c} {a_{3} } & {c_{3} } \\ {a_{2} } & {c_{2} } \\ \end{array} } \right|}}{DET}} \\ {Z_{0} = \frac{{d_{2} \left| {\begin{array}{*{20}c} {a_{1} } & {b_{1} } \\ {a_{3} } & {b_{3} } \\ \end{array} } \right| - d_{1} \left| {\begin{array}{*{20}c} {a_{2} } & {b_{2} } \\ {a_{3} } & {b_{3} } \\ \end{array} } \right|}}{DET}} & {DET = \left| {\begin{array}{*{20}c} {a_{1} } & {b_{1} } & {c_{1} } \\ {a_{2} } & {b_{2} } & {c_{2} } \\ {a_{3} } & {b_{3} } & {c_{3} } \\ \end{array} } \right|} \\ \end{array} $$
(37)

It can be seen that the formula above is quite difficult to remember and its derivation is not simple. It should be noted that there is again a severe problem with stability and robustness if a condition like \( \left| {DET} \right| < eps \) is used. Also the formula is not convenient for GPU or SSE applications. There is another equivalent solution based on Plücker coordinates and duality application, see [12, 16].

Let us explore a solution based on the projective representation explained above.

Given two planes \( \rho_{1} \) and \( \rho_{2} \). Then the directional vector s of their intersection is given as:

$$ \varvec{s} = \varvec{n}_{1} \times \varvec{n}_{2} $$
(38)

We want to determine the point \( \varvec{x}_{0} \) of the line given as an intersection of those two planes. Let us consider a plane \( \rho_{0} \) passing through the origin of the coordinate system with the normal vector \( \varvec{n}_{0} \) equivalent to s, Fig. 1. This plane \( \rho_{0} \) is represented as:

$$ \varvec{\rho}_{0} = \left[ {a_{0} ,b_{0} , c_{0} :0} \right]^{T} = [\varvec{s}^{T} :0]^{T} $$
(39)

Then the point \( \varvec{x}_{0} \) is simply determined as an intersection of three planes \( \rho_{1} ,\rho_{2} ,\rho_{0} \) as:

$$ \varvec{x}_{0} =\varvec{\rho}_{1} \times\varvec{\rho}_{2} \times\varvec{\rho}_{0} = \left[ {x_{0} ,y_{0} , z_{0} :w_{0} } \right]^{T} $$
(40)

It can be seen that the proposed algorithm is simple, easy to understand, elegant and convenient for SSE and GPU applications, as it uses vector-vector operations.
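
A sketch of Eqs. (38)–(40), reusing `cross3` and `cross4` from Sect. 2 (our naming):

```cpp
// Line of intersection of two planes r1, r2 = [a, b, c : d]^T:
// direction s by Eq. (38), auxiliary plane rho_0 by Eq. (39),
// projective point x0 on the line by Eq. (40).
void planePlaneLine(const Vec4& r1, const Vec4& r2, Vec3& s, Vec4& x0) {
    s  = cross3({ r1[0], r1[1], r1[2] }, { r2[0], r2[1], r2[2] });
    const Vec4 r0{ s[0], s[1], s[2], 0.0 };
    x0 = cross4(r1, r2, r0);
}
```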

9 Closest Point on the Line Given as an Intersection of Two Planes

Another example of the advantages of the projective notation is finding the closest point on a line, given as an intersection of two planes \( \rho_{1} \) and \( \rho_{2} \), to the given point \( \varvec{\xi}\in E^{3} \), Fig. 2.

Fig. 2. The closest point to the given point on an intersection of two planes

A solution in the Euclidean space, proposed in [8], is based on a solution of a system of linear equations using Lagrange multipliers, leading to a \( \left( {5 \times 5} \right) \) matrix:

$$ \left[ {\begin{array}{*{20}c} 2 & 0 & 0 & {n_{1x} } & {n_{2x} } \\ 0 & 2 & 0 & {n_{1y} } & {n_{2y} } \\ 0 & 0 & 2 & {n_{1z} } & {n_{2z} } \\ {n_{1x} } & {n_{1y} } & {n_{1z} } & 0 & 0 \\ {n_{2x} } & {n_{2y} } & {n_{2z} } & 0 & 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} x \\ y \\ z \\ \lambda \\ \mu \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {2\xi_{x} } \\ {2\xi_{y} } \\ {2\xi_{z} } \\ {\varvec{p}_{1} \cdot \varvec{n}_{1} } \\ {\varvec{p}_{2} \cdot \varvec{n}_{2} } \\ \end{array} } \right] $$
(41)

where: \( \varvec{p}_{1} \), resp. \( \varvec{p}_{2} \) are points on planes \( \rho_{1} \), resp. \( \rho_{2} \), with a normal vector \( \varvec{n}_{1} \), resp. \( \varvec{n}_{2} \). Coordinates of the closest point \( \varvec{x} = \left[ {x,y,z} \right]^{T} \) on the intersection of two planes to the point \( \varvec{ \xi } = \left( {\xi_{x} ,\xi_{y} ,\xi_{z} } \right) \) are given as a solution of this system of linear equations. Note that the point \( \varvec{ \xi } \) is given in the Euclidean space.

Let us consider a solution based on the projective representation. The proposed approach is based on basic geometric transformations with the following steps:

  1. Translation of the planes \( \varvec{\rho}_{1} \), \( \varvec{\rho }_{2} \) and the point \( \varvec{\xi } = \left[ {\xi_{x} ,\xi_{y} ,\xi_{z} :1} \right]^{T} \) so that the point \( \varvec{\xi} \) is at the origin of the coordinate system, i.e. using the transformation matrix T for the point translation and the matrix \( \left( {\varvec{T}^{T} } \right)^{ - 1} = \varvec{T}^{ - T} \) for the translation of the planes [11, 14, 16].

  2. Intersection computation of the two translated planes; the result is a line given by the directional vector s and the point \( \varvec{x}_{0} \).

  3. Translation of the point \( \varvec{x}_{0} \) back by the inverse translation using the matrix \( \varvec{T}^{ - 1} \).

The translation matrices are defined as:

$$ \begin{aligned} \hfill \\ \begin{array}{*{20}c} {\varvec{T} = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 & { - \xi_{x} } \\ 0 & 1 & 0 & { - \xi_{y} } \\ 0 & 0 & 1 & { - \xi_{z} } \\ 0 & 0 & 0 & 1 \\ \end{array} } \right]} & {\varvec{T}^{ - T} = \left[ {\begin{array}{*{20}c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ {\xi_{x} } & {\xi_{y} } & {\xi_{z} } & 1 \\ \end{array} } \right]} \\ {\varvec{T}^{'} = \left[ {\begin{array}{*{20}c} {\xi_{w} } & 0 & 0 & { - \xi_{x} } \\ 0 & {\xi_{w} } & 0 & { - \xi_{y} } \\ 0 & 0 & {\xi_{w} } & { - \xi_{z} } \\ 0 & 0 & 0 & {\xi_{w} } \\ \end{array} } \right]} & {} \\ \end{array} \hfill \\ \end{aligned} $$
(42)

If the point \( \varvec{\xi } \) is given in the projective space, i.e. \( \varvec{\xi } = \left[ {\xi_{x} ,\xi_{y} ,\xi_{z} :\xi_{w} } \right]^{T} \), \( \xi_{w} \ne 1 \) and \( \xi_{w} \ne 0 \), then the matrix \( \varvec{T}^{'} \) is used instead of the matrix T.

It can be seen that the computation is simpler, more robust and convenient for SSE or GPU-oriented applications. It should be noted that the formula is more general, as the point \( \varvec{\xi} \) can be given in the projective space, and no division operations are needed.
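
A sketch of the three steps above, reusing `planePlaneLine` from Sect. 8 (names and the Euclidean form of ξ are our assumptions). For a plane, the translation by \( \varvec{T}^{ - T} \) only changes the d coefficient, and the back-translation of the projective point \( \varvec{x}_{0} \) amounts to adding w-scaled ξ:

```cpp
// Closest point on the intersection line of planes r1, r2 to the Euclidean
// point xi = (ex, ey, ez); the result is projective, [x, y, z : w]^T.
Vec4 closestPointOnIntersection(Vec4 r1, Vec4 r2,
                                double ex, double ey, double ez) {
    // Step 1: translate the planes by T^{-T}: d' = a*ex + b*ey + c*ez + d.
    r1[3] += r1[0] * ex + r1[1] * ey + r1[2] * ez;
    r2[3] += r2[0] * ex + r2[1] * ey + r2[2] * ez;
    // Step 2: intersect the translated planes; x0 is the point of the line
    // closest to the origin, since rho_0 passes through the origin.
    Vec3 s; Vec4 x0;
    planePlaneLine(r1, r2, s, x0);
    // Step 3: translate x0 back by T^{-1} (projectively: add w-scaled xi).
    return { x0[0] + x0[3] * ex, x0[1] + x0[3] * ey,
             x0[2] + x0[3] * ez, x0[3] };
}
```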

10 Symbolic Manipulations

Symbolic manipulations are very important and help to find or simplify computational formulas, avoid singularities, etc. As the extended cross-product is distributive and anti-commutative, like the cross-product in \( E^{3} \), similar rules are valid, i.e. in \( E^{3} \):

$$ \begin{aligned} \varvec{a} \times \left( {\varvec{b} + \varvec{c}} \right) & = \varvec{a} \times \varvec{b} + \varvec{a} \times \varvec{c} \\ \varvec{a} \times \varvec{b} & = - \varvec{b} \times \varvec{a} \\ \end{aligned} $$
(43)

In the case of the extended cross-product, i.e. in the projective notation \( P^{3} \), we actually formally have operations in \( E^{4} \):

$$ \begin{aligned} \varvec{a} \times \left( {\varvec{b} + \varvec{c}} \right) \times \varvec{d} & = \varvec{a} \times \varvec{b} \times \varvec{d} + \varvec{a} \times \varvec{c} \times \varvec{d} \\ \varvec{a} \times \varvec{b} \times \varvec{c} & = - \varvec{b} \times \varvec{a} \times \varvec{c} \\ \end{aligned} $$
(44)

This can be easily proved by applications of rules for operations with determinants.

However, for a general understanding, a more general theory is to be used, namely Geometric Algebra [2–4, 6, 7, 10, 18], in which the extended cross-product is called the outer product and the above identities are rewritten as:

$$ \begin{aligned} \varvec{a} \wedge \left( {\varvec{b} + \varvec{c}} \right) \wedge \varvec{d} & = \varvec{a} \wedge \varvec{b} \wedge \varvec{d} + \varvec{a} \wedge \varvec{c} \wedge \varvec{d} \\ \varvec{a} \wedge \varvec{b} \wedge \varvec{c} & = - \varvec{b} \wedge \varvec{a} \wedge \varvec{c} \\ \end{aligned} $$
(45)

where “\( \wedge \)” is the operator of the outer product, which is equivalent to the cross-product in \( E^{3} \), and “\( \vee \)” is the operator of the inner product, which is equivalent to the dot product in \( E^{3} \).

In geometric algebra, the geometric product is defined as:

$$ \varvec{ab} = \varvec{a} \vee \varvec{b} + \varvec{a} \wedge \varvec{b} $$
(46)

i.e. in the case of \( E^{3} \) we can write:

$$ \varvec{ab} = \varvec{a} \cdot \varvec{b} + \varvec{a} \times \varvec{b} $$
(47)

and we get something “strange”, as a scalar and a vector (actually a bivector) are summed together. But it is a valid result, and ab is called the geometric product [18].

However, if the projective representation is used, we need to be a little bit careful with the operations equivalent to the standard operations in the Euclidean space.

11 Example of Application

Let us consider a simple example in 3-dimensional space. Assume that Ax = b is a system of linear equations, i.e.:

$$ \left[ {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & {a_{13} } \\ {a_{21} } & {a_{22} } & {a_{23} } \\ {a_{31} } & {a_{32} } & {a_{33} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x_{1} } \\ {x_{2} } \\ {x_{3} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {b_{1} } \\ {b_{2} } \\ {b_{3} } \\ \end{array} } \right] $$
(48)

and we want to explore \( \xi = \varvec{c} \cdot \varvec{x} \), where \( \varvec{c} = \left[ {c_{1} ,c_{2} ,c_{3} } \right]^{T} \).

In the “standard” approach, a system of linear equations has to be solved numerically, or symbolic manipulation has to be used. We can rewrite Eq. (48) using the projective representation as:

$$ \left[ {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & {a_{13} } & { - b_{1} } \\ {a_{21} } & {a_{22} } & {a_{23} } & { - b_{2} } \\ {a_{31} } & {a_{32} } & {a_{33} } & { - b_{3} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\bar{x}_{1} } \\ {\bar{x}_{2} } \\ {\bar{x}_{3} } \\ {\bar{x}_{w} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 0 \\ \end{array} } \right] \,\& \, x_{i} = \frac{{\bar{x}_{i} }}{{\bar{x}_{w} }} $$
(49)

The conversion to the Euclidean space is given as:

$$ x_{i} = \frac{{\bar{x}_{i} }}{{\bar{x}_{w} }}\quad i = 1, \ldots ,3 $$
(50)

Then using equivalence of the extended cross-product and solution of a linear system of equations we can write:

$$ \bar{\varvec{x}} = \bar{\varvec{a}}_{1} \times \bar{\varvec{a}}_{2} \times \bar{\varvec{a}}_{3} $$
(51)

where: \( \varvec{ \bar{x}} = \left[ {\bar{x}_{1} ,\bar{x}_{2} ,\bar{x}_{3} :\bar{x}_{w} } \right]^{T} \), \( \bar{\varvec{a}}_{i} = \left[ {a_{i1} ,a_{i2} ,a_{i3} : - b_{i} } \right]^{T} \), \( i = 1, \ldots ,3 \). It should be noted that the result is actually in the 3-dimensional projective space.
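
A sketch of Eq. (51) in code, reusing `cross4` from Sect. 2 (our naming); note that no division is performed:

```cpp
// Solution of the 3x3 system Ax = b of Eq. (48) as an extended cross-product
// of the rows [a_i1, a_i2, a_i3 : -b_i], Eq. (51). The result is the
// projective vector [x1, x2, x3 : xw]; Euclidean x_i = x_i / xw, Eq. (50).
Vec4 solve3x3(const std::array<Vec3, 3>& A, const Vec3& b) {
    const Vec4 r1{ A[0][0], A[0][1], A[0][2], -b[0] };
    const Vec4 r2{ A[1][0], A[1][1], A[1][2], -b[1] };
    const Vec4 r3{ A[2][0], A[2][1], A[2][2], -b[2] };
    return cross4(r1, r2, r3);
}
```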

In many cases, the result of the computation does not necessarily have to be converted to the Euclidean space. If it is left in the projective representation, we save division operations and increase the precision of the computation, as the mantissa is actually nearly doubled (the mantissas of \( \bar{x}_{i} \) and \( \bar{x}_{w} \)). Robustness is increased as well, as we have not made any specific assumptions about the collinearity of the planes. Let a scalar value \( \xi \in E^{1} \) be given as:

$$ \xi = \varvec{c} \cdot \varvec{x} $$
(52)

The scalar value \( \xi \) can be expressed as a homogeneous vector \( \bar{\varvec{\xi }} \) in the projective notation as:

$$ \bar{\xi }^{T} = \left[ {\bar{\xi }:\bar{\xi }_{w} } \right] \quad \& \quad \bar{\xi }_{w} = 1 $$
(53)

Generally, the value in the Euclidean space is given as \( \xi = \frac{{\bar{\xi }}}{{\bar{\xi }_{w} }} \). Extension to the 3-dimensional case is straightforward.

As an example, let us consider a test whether the given point \( \bar{\xi } = \left[ {\bar{\xi }_{1} ,\bar{\xi }_{2} ,\bar{\xi }_{3} :\bar{\xi }_{w} } \right]^{T} \) lies on a plane given by three points \( \varvec{x}_{i} \), \( i = 1,\, \ldots \,,3 \), using the projective notation. The plane \( \varvec{\rho} \) is given as:

$$ \varvec{\rho}= \varvec{x}_{1} \times \varvec{x}_{2} \times \varvec{x}_{3} = \left[ {a,b,c:d} \right]^{T} $$
(54)

and the given point has to fulfill the condition \( \bar{\xi } \cdot\varvec{\rho}= a\bar{\xi }_{1} + b\bar{\xi }_{2} + c\bar{\xi }_{3} + d\bar{\xi }_{w} = 0 \).

We know that:

$$ \varvec{a} \times \varvec{b} \times \varvec{c} = \det \left[ {\begin{array}{*{20}c} \varvec{i} & \varvec{j} & \varvec{k} & \varvec{l} \\ {a_{1} } & {a_{2} } & {a_{3} } & {a_{4} } \\ {b_{1} } & {b_{2} } & {b_{3} } & {b_{4} } \\ {c_{1} } & {c_{2} } & {c_{3} } & {c_{4} } \\ \end{array} } \right] = - \left[ {\begin{array}{*{20}r} \hfill 0 & \hfill { - \delta_{34} } & \hfill {\delta_{24} } & \hfill { - \delta_{23} } \\ \hfill {\delta_{34} } & \hfill 0 & \hfill { - \delta_{14} } & \hfill {\delta_{13} } \\ \hfill { - \delta_{24} } & \hfill {\delta_{14} } & \hfill 0 & \hfill { - \delta_{12} } \\ \hfill {\delta_{23} } & \hfill { - \delta_{13} } & \hfill {\delta_{12} } & \hfill 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {c_{1} } \\ {c_{2} } \\ {c_{3} } \\ {c_{4} } \\ \end{array} } \right] $$
(55)

where: \( \varvec{i} = \left[ {1\text{,}\,0\text{,}\,0\text{,}\,0} \right]^{T} \), \( \varvec{j} = \left[ {0,1,0,0} \right]^{T} \), \( \varvec{k} = \left[ {0\text{,}\,0\text{,}\,1\text{,}\,0} \right]^{T} \), \( \varvec{l} = \left[ {0\text{,}\,0\text{,}\,0\text{,}\,1} \right]^{T} \). Then, the test \( \varvec{\xi}\cdot\varvec{\rho}= 0 \) is actually:

$$ \left[ {\xi_{1} ,\xi_{2} ,\xi_{3} :\xi_{w} } \right]\left[ {\begin{array}{*{20}r} \hfill 0 & \hfill { - \delta_{34} } & \hfill {\delta_{24} } & \hfill { - \delta_{23} } \\ \hfill {\delta_{34} } & \hfill 0 & \hfill { - \delta_{14} } & \hfill {\delta_{13} } \\ \hfill { - \delta_{24} } & \hfill {\delta_{14} } & \hfill 0 & \hfill { - \delta_{12} } \\ \hfill {\delta_{23} } & \hfill { - \delta_{13} } & \hfill {\delta_{12} } & \hfill 0 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {x_{3} } \\ {y_{3} } \\ {z_{3} } \\ {w_{3} } \\ \end{array} } \right] = 0 $$
(56)

It means that we are getting a bilinear form:

$$ \bar{\xi }^{T} \varvec{B x}_{3} = 0 $$
(57)

where B is an antisymmetric matrix with a null diagonal. So we can analyze such conditions more deeply in an analytical form, i.e. we can explore the formula on a symbolic level. It is also possible to derive some additional information for the \( \xi \) value, resp. the \( \bar{\xi } \) value, if the projective notation is used. This approach can be directly extended to the d-dimensional space using geometric algebra [18].

12 Efficiency of Computation and GPU Code

Let us consider the reliability and the cost of computation of the “standard” approach using Cramer’s rule with determinants. For the given system of n linear equations with n unknowns in the form Ax = b, the solution is given as:

$$ X_{i} = \frac{{\det \left( {\varvec{A}_{i} } \right)}}{{\det \left( \varvec{A} \right)}}\quad i = 1\text{,}\, \ldots \,\text{,}n $$
(58)

In the projective notation using homogeneous coordinates we can actually write \( \varvec{x} = \left[ {x_{1} ,\, \ldots \,,x_{n} :w} \right]^{T} \), where \( w = \det \left( \varvec{A} \right) \) and \( x_{i} = \det \left( {\varvec{A}_{i} } \right) \), \( i = 1,\, \ldots \,,n \).

The projective representation not only enables us to postpone division operations, but also offers some additional advantages, as follows. Computing determinants is a quite computationally expensive task. However, for the 2–4 dimensional cases there are some advantages in using the extended cross-product, as explained below (Table 1).

Table 1. Cost of determinant computation

Generally the computational expenses are given as:

$$ {\text{Det}}_{(k + 1) \times (k + 1)} = k\,{\text{Det}}_{k \times k} + k\,(\text{“}\pm\text{”}) $$
(59)

The total cost of computation, if the generalized Cramer’s rule is used, is as follows (Table 2):

Table 2. Cost of cross-product computation

Computational expenses for the generalized cross-product matrix-based formulation, if partial intermediate computations are reused, are as follows (Table 3).

Table 3. Cost of cross-product computation with subdeterminants

It means that for the 2-dimensional and 4-dimensional cases, the expected speed-up \( \upsilon \) is:

$$ \upsilon \cong \frac{{\text{Cramer's rule}}}{{\text{partial summation}}} \doteq 2 $$
(60)

In real implementations on a CPU, SSE instructions can be used, which are more convenient for vector-vector operations, and some steps can be performed in parallel. An additional speed-up can be achieved by using a GPU for the computation.

In the case of higher dimensions, modified standard algorithms can be used, including iterative methods [17]. Also, as the projective representation nearly doubles the precision of the computation, if single precision is used on a GPU (only a few GPUs compute in double precision), the result after conversion to the Euclidean representation is nearly equivalent to double precision.

13 GPU Code

Many of today’s computational systems can use GPU support, which allows fast parallel processing. The approach presented above offers a significant speed-up, as the “standard” cross-product is implemented in hardware as an instruction, and the extended cross-product for 4D can be implemented as follows.
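
A sketch of one possible implementation (our naming), built only from 3D dot and cross operations so that each component maps onto hardware vector instructions; in CUDA or GLSL, the 3-component swizzles (a.yzw etc.) and the built-in dot()/cross() functions can be used directly:

```cpp
// 4D extended cross-product via four scalar triple products: component i is
// (+/-) the determinant of the 3x3 minor that omits column i, cf. Eq. (3).
Vec4 cross4_gpu(const Vec4& a, const Vec4& b, const Vec4& c) {
    auto yzw = [](const Vec4& v) { return Vec3{ v[1], v[2], v[3] }; };
    auto xzw = [](const Vec4& v) { return Vec3{ v[0], v[2], v[3] }; };
    auto xyw = [](const Vec4& v) { return Vec3{ v[0], v[1], v[3] }; };
    auto xyz = [](const Vec4& v) { return Vec3{ v[0], v[1], v[2] }; };
    auto dot3 = [](const Vec3& u, const Vec3& v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    };
    return {  dot3(yzw(a), cross3(yzw(b), yzw(c))),   // +det(columns 2,3,4)
             -dot3(xzw(a), cross3(xzw(b), xzw(c))),   // -det(columns 1,3,4)
              dot3(xyw(a), cross3(xyw(b), xyw(c))),   // +det(columns 1,2,4)
             -dot3(xyz(a), cross3(xyz(b), xyz(c))) }; // -det(columns 1,2,3)
}
```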

In general, it can be seen that the solution of linear systems of equations on a GPU for a small dimension n is simple, fast and can be performed in parallel.

14 Conclusion

The projective representation is not widely used for general computation, as it is mostly considered applicable to the computer graphics and computer vision fields only. In this paper, the equivalence of the extended cross-product and the solution of a linear system of equations has been presented. The presented approach is especially convenient for the 3-dimensional and 4-dimensional cases applicable in many engineering and statistical computations, in which a significant speed-up can be obtained using SSE instructions or a GPU. The presented approach also transforms the solution of a system of linear equations to the extended cross-product in a matrix form, which enables symbolic manipulations.

Direct application of the presented approach has also been demonstrated on the barycentric coordinates computation and simple geometric problems.

The presented approach avoids division operations, as the denominator is actually stored in the homogeneous coordinate w. This leads to significant computational savings and increases the precision and robustness of the computation, as the division operation is the longest one and decreases precision the most.

The presented approach also enables the derivation of new and more computationally efficient formulae in other computational fields.