The theory of scalar- and tensor-valued functions constitutes the mathematical framework on which the modeling of elasticity, plasticity, and damage in polycrystalline metallic materials is built. In this chapter, we provide the basic concepts and key mathematical results to be used in the rest of the book.

We begin by presenting a concise survey of the basic results of vector algebra. This is also a natural starting point for the development of tensor algebra.

1.1 Elements of Vector Algebra

From elementary geometry, we know that to every three-dimensional point space, E, we can associate a vector space, V. An element of E is a point in space and a free vector connects any two points. A free vector is characterized by direction, magnitude, and sense. Free vectors can be added together and multiplied by numbers.

The generalization of the properties of free vectors of elementary geometry led to the general concept of vector space.

  • Definition of a Vector Space

A set V is called a vector space over the field R of real numbers, and its elements are called vectors, if the following conditions are fulfilled:

  1. (I)

    To any pair of vectors \({\mathbf{u}},{\mathbf{v}} \in V,\) corresponds a vector \({\mathbf{u}} + {\mathbf{v}} \in V,\) called the sum of these vectors, such that:

    1. (V.1)

      \({\mathbf{u}} + {\mathbf{v}} = {\mathbf{v}} + {\mathbf{u}}\) (commutativity),

    2. (V.2)

      For any three vectors \({\mathbf{u}} ,{\mathbf{ v}} ,{\mathbf{ w}}:{\mathbf{u}} + \left( {{\mathbf{v}} + {\mathbf{w}}} \right) = \left( {{\mathbf{u + v}}} \right){\mathbf{ + w}}\) (associativity),

    3. (V.3)

      There exists an element of V, called the zero vector (or null vector), denoted by \({\mathbf{0}}\) such that for any vector \({\mathbf{u}} \in V{:}\,{\mathbf{u}} = {\mathbf{u}} + {\mathbf{0}}\).

    4. (V.4)

      For any vector u, there exists another vector, denoted –u, such that \({\mathbf{u}} + \left( { - {\mathbf{u}}} \right) = {\mathbf{0}}.\)

  2. (II)

    The product of any vector \({\mathbf{u}} \in V\) with a real number \(\alpha\) is also a vector. It has the following properties:

    1. (V.5)

      For any \(\alpha ,\beta \in R,\alpha \left( {\beta {\mathbf{u}}} \right) = \left( {\alpha \beta } \right){\mathbf{u}}\) (associativity),

    2. (V.6)

      \(\left( {\alpha + \beta } \right){\mathbf{u}} = \alpha {\mathbf{u}} + \beta {\mathbf{u}}\) (distributivity relative to number addition),

    3. (V.7)

      \(\alpha \left( {{\mathbf{u}} + {\mathbf{v}}} \right) = \alpha {\mathbf{u}} + \alpha {\mathbf{v}}\) (distributivity relative to vector addition),

    4. (V.8)

      \(1{\mathbf{u}} = {\mathbf{u}}.\)

Using the above axioms, it can be shown that the following relations hold:

$$0{\mathbf{u}} = {\mathbf{0}}, \left( { - 1} \right){\mathbf{u}} = - {\mathbf{u}}, \alpha {\mathbf{0}} = {\mathbf{0}}.$$

The difference between any two vectors \({\mathbf{u}}\) and \({\mathbf{v}}\) is defined as:

$${\mathbf{u}} - {\mathbf{v}} = {\mathbf{u + }}\left( { - {\mathbf{v}}} \right).$$
  • Linear Independence of Vectors

Definition

A set of n vectors \({\mathbf{u}}_{1} ,{\mathbf{u}}_{2} ,\) …, \({\mathbf{u}}_{\text{n}}\) is said to be linearly independent if the relation:

$$\alpha_{1} {\mathbf{u}}_{1} {\mathbf{ + }}\alpha_{2} {\mathbf{u}}_{2} + \cdots + \alpha_{n} {\mathbf{u}}_{n} = {\mathbf{0}},$$

with \(\alpha_{1} , \ldots ,\alpha_{n} \in R,\) holds only if: \(\alpha_{1} = \alpha_{2} = \cdots = \alpha_{n} = 0.\) Otherwise, the set of vectors is said to be linearly dependent.

  • Dimension of a Vector Space

Definition

A vector space V is called n-dimensional, if in V there exists at least one set of n linearly independent vectors, and any set containing n + 1 vectors is linearly dependent.

  • Basis of a Vector Space

Definition

In an n-dimensional vector space V, any set of n linearly independent vectors is called a basis of V.

  • Inner Product

Definition

Let V be a vector space. A mapping which associates to any vectors \({\mathbf{u}}\) and \({\mathbf{v}} \in V\) a real number, denoted \({\mathbf{u}} \cdot {\mathbf{v}},\) is called an inner product if it satisfies the following properties:

  1. (I1)

    \({\mathbf{u}} \cdot {\mathbf{v}} = {\mathbf{v}}{ \cdot }{\mathbf{u}}\) (commutativity);

  2. (I2)

    For any \(\alpha \in R{:}\,\left( {\alpha {\mathbf{u}}} \right) \cdot {\mathbf{v}} = \alpha \left( {{\mathbf{u}} \cdot {\mathbf{v}}} \right)\) (associativity with respect to multiplication with real numbers);

  3. (I3)

    \({\mathbf{u}} \cdot \left( {{\mathbf{v}} + {\mathbf{w}}} \right) = {\mathbf{u}} \cdot {\mathbf{v}} + {\mathbf{u}} \cdot {\mathbf{w}}\) (distributivity with respect to vector addition);

  4. (I4)

    \({\mathbf{u}} \cdot {\mathbf{u}} \ge 0\);

  5. (I5)

    \({\mathbf{u}} \cdot {\mathbf{u}} = 0\) if and only if \({\mathbf{u}} = {\mathbf{0}}.\)

The scalar product can then be used to define the norm (or magnitude) of any vector \({\mathbf{u}} \in V\). The norm of the vector \({\mathbf{u}}\) is defined by:

$$\left| {\mathbf{u}} \right| = \sqrt {{\mathbf{u}} \cdot {\mathbf{u}}},$$
(1.1)

and a vector with unit norm is termed a unit vector. By definition, two vectors are said to be orthogonal if their inner product is zero.

  • Euclidean Vector Space

Definition

A vector space V endowed with an inner product is called a Euclidean vector space.

  • Einstein Summation Convention

In this book, we adopt the Einstein summation convention, which states that whenever the same letter subscript occurs twice in a term, that subscript is to be given all possible values and the results added together. For example, for i = 1, …, 3: \(u_{i}^{2} = u_{1}^{2} + u_{2}^{2} + u_{3}^{2}\).
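Numerically, the summation convention corresponds to a contraction over the repeated index. The following minimal sketch in Python/NumPy (an illustrative aside; the vector chosen is arbitrary) evaluates \(u_{i} u_{i}\):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])

# u_i u_i: the repeated subscript i is given the values 1, ..., 3
# and the results are added together
sum_of_squares = np.einsum('i,i->', u, u)
assert np.isclose(sum_of_squares, 1.0 + 4.0 + 9.0)
```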

  • Components of a Vector

Theorem 1.1

Let \({\mathbf{g}}_{1} ,{\mathbf{g}}_{2} ,\) …, \({\mathbf{g}}_{n}\) be a basis for the n-dimensional vector space V. Any vector \({\mathbf{u}} \in V\) may be uniquely represented as a linear combination of the basis vectors \({\mathbf{g}}_{\text{i}}\), i = 1, …, n, i.e.,

$${\mathbf{u}} = u_{1} {\mathbf{g}}_{1} + \cdots + u_{n} {\mathbf{g}}_{n} ,$$
(1.2)

The numbers (or scalars) \(u_{i}\) are called the components of u relative to this basis.

Proof

Since V is an n-dimensional vector space, the set of n + 1 vectors \(\left\{ {{\mathbf{g}}_{1} ,{\mathbf{g}}_{2} , \ldots ,{\mathbf{g}}_{n} ,{\mathbf{u}}} \right\}\) is linearly dependent. Hence, there exists a set of real numbers \(\alpha ,\alpha_{1}\), …, \(\alpha_{n}\), not all of them zero, such that

$$\alpha {\mathbf{u}} + \alpha_{1} {\mathbf{g}}_{1} + \cdots + \alpha_{n} {\mathbf{g}}_{n} = {\mathbf{0}},$$
(1.3)

Note that \(\alpha\) ought to be nonzero. Indeed, if \(\alpha = 0\) the above equation reduces to \(\alpha_{1} {\mathbf{g}}_{1} + \cdots + \alpha_{n} {\mathbf{g}}_{n} = {\mathbf{0}}\), and since \({\mathbf{g}}_{1} ,{\mathbf{g}}_{2}\), …, \({\mathbf{g}}_{n}\) are linearly independent, this would imply that all \(\alpha_{i}\) ought to be zero. Since \(\alpha \ne 0\), from Eq. (1.3) it follows that \({\mathbf{u}} = u_{k} {\mathbf{g}}_{k}\), with \(u_{k} = - \alpha_{k} /\alpha\), k = 1, …, n.

Therefore, \({\mathbf{u}}\) is a linear combination of the basis vectors. Furthermore, the numbers \(u_{k}\) are uniquely determined. Indeed, suppose that \({\mathbf{u}}\) may also be expressed as

$${\mathbf{u}} = u_{k}^{\prime } {\mathbf{g}}_{k} ,$$
(1.4)

Subtracting Eq. (1.2) from Eq. (1.4), we obtain

$$\left( {u_{k}^{\prime } - u_{k} } \right){\mathbf{g}}_{k} = {\mathbf{0}}.$$

Given that vectors \({\mathbf{g}}_{k}\) form a basis, it follows that necessarily \(u_{k}^{\prime } = u_{k}\).

Using Theorem 1.1 in conjunction with the properties (I2) and (I3), it can be easily shown that the inner product between any two vectors \({\mathbf{u}}\) and \({\mathbf{v}}\) can be expressed in component form as:

$${\mathbf{u}} \cdot {\mathbf{v}} = g_{km} {\text{u}}_{k} {\text{v}}_{m} , \quad {\text{with }}\quad g_{km} = {\mathbf{g}}_{k} \cdot {\mathbf{g}}_{m} ,k,m = 1, \ldots, n$$
(1.5)

Obviously, due to the commutativity of the inner product (i.e., property (I1)),

$$g_{km} = g_{mk} .$$

Given that \(\left\{ {{\mathbf{g}}_{k} } \right\}\) form a basis, it can also be easily shown that the determinant of the matrix \(\left[ {g_{km} } \right]\) is nonzero.

  • Orthonormal Basis

A basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} , \ldots ,{\mathbf{e}}_{n} } \right\}\) of the n-dimensional vector space V is called orthonormal, if any two vectors of the basis are mutually orthogonal and of unit length, i.e.,

$${\mathbf{e}}_{i} \cdot {\mathbf{e}}_{j} = \delta_{ij} ,$$
(1.6)

where \(\delta_{ij}\) denotes the Kronecker delta symbol,

$$\delta_{ij} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {{\text{if}}\,\,i = j} \hfill \\ {0,} \hfill & {{\text{otherwise}} .} \hfill \\ \end{array} } \right.$$
(1.7)

Note that in view of the orthonormality condition (1.6), the components of a vector \({\mathbf{u}}\) relative to the orthonormal basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} , \ldots ,{\mathbf{e}}_{n} } \right\}\) are:

$$u_{k} = {\mathbf{u}} \cdot {\mathbf{e}}_{k} .$$
(1.8)

Let \({\mathbf{u}}\) and \({\mathbf{v}}\) be an arbitrary pair of vectors having components \(u_{k} ,{\text{v}}_{k}\) relative to the same basis. Then, using Eqs. (1.5) and (1.6), we obtain:

$${\mathbf{u}} \cdot {\mathbf{v}} = u_{k} {\text{v}}_{k} .$$
(1.9)
  • Cross Product

Definition

A mapping which associates to any vectors \({\mathbf{u}}\) and \({\mathbf{v}} \in V\) a vector denoted \({\mathbf{u}} \times {\mathbf{v}}\), is called the cross product (or vector product) of u and v if it satisfies the following properties:

  1. (C1)

    \({\mathbf{u}} \times {\mathbf{v}} = - {\mathbf{v}} \times {\mathbf{u}}\) for any \({\mathbf{u}},{\mathbf{v}} \in V\) (anti-commutativity);

  2. (C2)

    \(\left( {\alpha {\mathbf{v}} + \beta {\mathbf{w}}} \right) \times {\mathbf{u}} = \alpha \left( {{\mathbf{v}} \times {\mathbf{u}}} \right) + \beta \left( {{\mathbf{w}} \times {\mathbf{u}}} \right)\) for any \({\mathbf{u}},{\mathbf{v}},{\mathbf{w}} \in V\) and \(\alpha ,\beta \in R\);

  3. (C3)

\({\mathbf{u}} \cdot \left( {{\mathbf{u}} \times {\mathbf{v}}} \right) = 0\) for any \({\mathbf{u}},{\mathbf{v}} \in V\);

  4. (C4)

    \(\left( {{\mathbf{u}} \times {\mathbf{v}}} \right) \cdot \left( {{\mathbf{u}} \times {\mathbf{v}}} \right) = \left( {{\mathbf{u}} \cdot {\mathbf{u}}} \right)\left( {{\mathbf{v}} \cdot {\mathbf{v}}} \right) - \left( {{\mathbf{u}} \cdot {\mathbf{v}}} \right)^{2}\) for any \({\mathbf{u}},{\mathbf{v}} \in V.\)

Using the above properties, it can be easily shown that \({\mathbf{u}} \times {\mathbf{v}} = {\mathbf{0}}\) if and only if \({\mathbf{u}}\) and \({\mathbf{v}}\) are linearly dependent.

  • Scalar Triple Product

The scalar triple product of three vectors \({\mathbf{u}},{\mathbf{v}},{\mathbf{w}}\), denoted by \(\left[ {{\mathbf{u}},{\mathbf{v}},{\mathbf{w}}} \right]\), is defined by:

$$[{\mathbf{u}},{\mathbf{v}},{\mathbf{w}}] = {\mathbf{u}} \cdot \left( {{\mathbf{v}} \, \times \, {\mathbf{w}}} \right).$$
(1.10)
  • Properties of the Scalar Triple Product

  • The scalar triple product is invariant under a circular permutation of the members of the product, i.e., \([{\mathbf{u}},{\mathbf{v}},{\mathbf{w}}] = [{\mathbf{v}},{\mathbf{w}},{\mathbf{u}}] = [{\mathbf{w}},{\mathbf{u}},{\mathbf{v}}].\)

  • The sign of the scalar triple product is reversed when any two members of the product are interchanged, i.e., \([{\mathbf{u}},{\mathbf{v}},{\mathbf{w}}] = - [{\mathbf{u}},{\mathbf{w}},{\mathbf{v}}] = - [{\mathbf{v}},{\mathbf{u}},{\mathbf{w}}] = - [{\mathbf{w}},{\mathbf{v}},{\mathbf{u}}].\)

  • The scalar triple product is equal to zero if and only if \({\mathbf{u}},{\mathbf{v}}\) and \({\mathbf{w}}\) are linearly dependent.

  • For any \({\mathbf{u}},{\mathbf{v}},{\mathbf{t}},{\mathbf{w}} \in V\) and \(\alpha ,\beta \in R\): \([\alpha {\mathbf{u}} + \beta {\mathbf{v}},{\mathbf{t}},{\mathbf{w}}] = \alpha [{\mathbf{u}},{\mathbf{t}},{\mathbf{w}}] + \beta [{\mathbf{v}},{\mathbf{t}},{\mathbf{w}}].\)

In a three-dimensional vector space, there exists an orthonormal basis \(\left( {{\mathbf{e}}_{k} } \right)_{k = 1 ,\ldots, 3}\). Based on the properties of the cross product and scalar triple product, it follows that:

$${\mathbf{e}}_{2} \times {\mathbf{e}}_{3} = \left[ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right]{\mathbf{e}}_{1} ,\quad {\mathbf{e}}_{3} \times {\mathbf{e}}_{1} = \left[ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right]{\mathbf{e}}_{2} ,\quad {\mathbf{e}}_{1} \times {\mathbf{e}}_{2} = \left[ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right]{\mathbf{e}}_{3}$$
(1.11)
$$\left[ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right] = \pm 1$$
(1.12)

Let \(\varepsilon_{ijk}\) designate the Ricci symbol, which takes the value 1 when (i, j, k) is a cyclic permutation of 1, 2, 3, the value −1 when (i, j, k) is an anticyclic permutation of 1, 2, 3, and is otherwise zero. Therefore,

$${\mathbf{e}}_{i} \times {\mathbf{e}}_{j} = \pm \varepsilon_{ijk} {\mathbf{e}}_{k} .$$
(1.13)

Two bases are said to be similar if their triple products have the same sign.

A basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\) is said to be positively oriented if \(\left[ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right] > 0.\)

The formula for the cross product between any two vectors in terms of their components relative to the orthonormal basis \(\left( {{\mathbf{e}}_{k} } \right)_{k = 1, \ldots ,3}\) is found by using the axioms (C1)–(C2) and Eq. (1.13):

$${\mathbf{u}} \times {\mathbf{v}} = \pm \varepsilon_{ijk} u_{i} {\text{v}}_{j} {\mathbf{e}}_{k} ,$$
(1.14)

Also, using Eq. (1.14) one obtains the formula for the scalar triple product of any three vectors \({\mathbf{u}},{\mathbf{v}},{\mathbf{w}}\) to be:

$$\left[ {{\mathbf{u}},{\mathbf{v}},{\mathbf{w}}} \right] = \pm \varepsilon_{ijk} u_{i} {\text{v}}_{j} {\text{w}}_{k} .$$
(1.15)
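The component formulas (1.14) and (1.15) can be checked numerically. The sketch below (illustrative only; it assumes a positively oriented orthonormal basis, so the "+" sign applies) builds the Ricci symbol and compares against NumPy's built-in cross product and determinant:

```python
import numpy as np

# Ricci (permutation) symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # cyclic permutations of (1, 2, 3)
    eps[i, k, j] = -1.0  # anticyclic permutations

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])
w = np.array([2.0, 1.0, 0.0])

# Eq. (1.14): (u x v)_k = eps_ijk u_i v_j
cross = np.einsum('ijk,i,j->k', eps, u, v)
assert np.allclose(cross, np.cross(u, v))

# Eq. (1.15): [u, v, w] = eps_ijk u_i v_j w_k, i.e., the determinant of the
# matrix whose rows are the components of u, v, w
triple = np.einsum('ijk,i,j,k->', eps, u, v, w)
assert np.isclose(triple, np.linalg.det(np.array([u, v, w])))
```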

If the basis is positively oriented, the scalar triple product is the determinant of the matrix having on the first row the components of \({\mathbf{u}}\), on the second row the components of \({\mathbf{v}}\), and on the third row the components of \({\mathbf{w}}\). In elementary geometry, the scalar product of any two nonzero free vectors \({\mathbf{u}}\) and \({\mathbf{v}}\) is designated by \({\mathbf{u}} \cdot {\mathbf{v}}\) and is defined as:

$${\mathbf{u}} \cdot {\mathbf{v}} = \left| {\mathbf{u}} \right|\left| {\mathbf{v}} \right|{ \cos }\left(\theta \right),$$
(1.16)

where \(\left| {\mathbf{u}} \right|\) and \(\left| {\mathbf{v}} \right|\) designate the magnitude (or length) of each vector and \(\theta\) is the angle between the two vectors. If one of the two vectors is zero, their inner product is, by definition, zero.

By definition, the cross product \({\mathbf{u}} \times {\mathbf{v}}\) of two free vectors \({\mathbf{u}}\) and \({\mathbf{v}}\) which are linearly independent is a vector that is orthogonal to both \({\mathbf{u}}\) and \({\mathbf{v}}\), and therefore normal to the two-dimensional plane containing them. The magnitude of \({\mathbf{u}} \times {\mathbf{v}}\) is given by,

$$\left| {{\mathbf{u}} \times {\mathbf{v}}} \right| = \left| {\mathbf{u}} \right|\left| {\mathbf{v}} \right|\sin \theta \;\;{\text{for }}(0 < \theta < \pi )$$
(1.17)

where \({\theta }\) is the angle between the vectors \({\mathbf{u}},{\mathbf{v}}.\)

It can be easily shown that the free vector space is three-dimensional (any three vectors which are not coplanar form a basis) and that the scalar product defined by Eq. (1.16) satisfies the properties (I1)–(I5) and the cross product defined by Eq. (1.17) satisfies the axioms (C1)–(C4), i.e., the space of free vectors is endowed with an inner product and a vector product.

Therefore, the 3-D physical space is a Euclidean vector space. In this space, the scalar triple product is, up to sign, the volume of the parallelepiped defined by the respective vectors. If this volume is nonzero, then the three vectors are linearly independent. If \({\mathbf{u}},{\mathbf{v}},{\mathbf{w}}\) are linearly independent, then the triad \(\left\{ {{\mathbf{u}},{\mathbf{v}},{\mathbf{w}}} \right\}\) forms a basis.

  • Cartesian Coordinate Frame

A Cartesian coordinate frame for the three-dimensional Euclidean space consists of a reference point O called the origin together with a positively oriented orthonormal basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\). Being positively oriented, the basis vectors satisfy:

$${\mathbf{e}}_{i} \cdot {\mathbf{e}}_{j} = \delta_{ij} , \quad {\text{and}}\quad\left[ {{\mathbf{e}}_{i},{\mathbf{e}}_{j} ,{\mathbf{e}}_{k} } \right] = \varepsilon_{ijk} .$$

So far, we have provided a concise survey of basic results of vector algebra. A vector is also referred to as a first-order tensor, while a scalar is a tensor of order zero. In the next section, we shall introduce the concept of a second-order tensor and its properties.

1.2 Elements of Tensor Algebra

1.2.1 Second-Order Tensors

Definition

A second-order tensor is a linear transformation of the vector space V into itself. Specifically, a second-order tensor \({\mathbf{T}}\) assigns to an arbitrary vector \({\mathbf{v}}\) a vector denoted by \({\mathbf{Tv}}\) in such a way that for any vectors \({\mathbf{u}}\) and \({\mathbf{v}}\), and any real numbers \(\alpha\) and \(\beta\):

$${\mathbf{T}}\left( {\alpha {\mathbf{u}} + \beta {\mathbf{v}}} \right) = \alpha \left( {{\mathbf{Tu}}} \right) \, + \beta \left( {{\mathbf{Tv}}} \right) \,$$
(1.18)

The set of second-order tensors on the three-dimensional Euclidean vector space is denoted by L. From here on, a second-order tensor will simply be called a tensor.

We say that two tensors \({\mathbf{T}}\) and \({\mathbf{U}}\) are equal if,

$${\mathbf{Tv}} = {\mathbf{Uv}}, \quad \forall {\mathbf{v}} \in V.$$

The null tensor, denoted by \({\mathbf{O}}\), assigns to any vector \({\mathbf{v}}\) the zero vector, and the identity tensor \({\mathbf{I}}\) assigns to \({\mathbf{v}}\) the vector \({\mathbf{v}}\) itself:

$${\mathbf{O}}{\mathbf{v}} = {\mathbf{0}},\quad {\mathbf{I}}{\mathbf{v}} = {\mathbf{v}},\quad \forall {\mathbf{v}} \in V.$$

The sum \({\mathbf{T}} + {\mathbf{U}}\) of tensors \({\mathbf{T}}\) and \({\mathbf{U}}\) and the product \(a{\mathbf{T}}\) of a tensor \({\mathbf{T}}\) and a real number (scalar) a are defined as follows,

$$\begin{aligned} \left( {{\mathbf{T}} + {\mathbf{U}}} \right){\mathbf{v}} & = {\mathbf{Tv}} + {\mathbf{Uv}}\quad\forall {\mathbf{v}} \in V \\ \left( {a{\mathbf{T}}} \right){\mathbf{v}} & = a\left( {{\mathbf{Tv}}} \right),\quad\forall {\mathbf{v}} \in V,a \in R. \\ \end{aligned}$$

Moreover, for any tensor T, there exists another tensor, denoted \({-}{\mathbf{T}}\), such that:

$$\left( { - {\mathbf{T}}} \right){\mathbf{v}} = - \left( {{\mathbf{T}}{\mathbf{v}}} \right),\quad {\mathbf{T}} + \left( { - {\mathbf{T}}} \right) = {\mathbf{O}}.$$

It can be easily shown from their definitions that \({\mathbf{I}},{\mathbf{O}},\left( {{-}{\mathbf{T}}} \right),{\mathbf{T}} + {\mathbf{U}},a{\mathbf{T}}\) are actually linear transformations [i.e., satisfy the requirement (1.18)].

On the basis of the same definitions, it can be readily established that the set of all tensors L, endowed with addition and scalar multiplication, is a vector space [i.e., the axioms (V.1)–(V.8) concerning addition, scalar multiplication, and the existence of a null element are satisfied, see Sect. 1.1]. It will be later shown that L is nine-dimensional.

  • Multiplication of Tensors

The rule for multiplication (or composition) of tensors is:

$$\left( {{\mathbf{AB}}} \right){\mathbf{u}} = {\mathbf{A}}({\mathbf{Bu}})\quad\forall {\mathbf{A}},{\mathbf{B}} \in L\quad {\text{and }}\quad \forall {\mathbf{u}} \in V$$
(1.19)

We leave it to the reader to establish that:

$$\begin{aligned} &\alpha \left( {{\mathbf{AB}}} \right) = \left( {\alpha {\mathbf{A}}} \right){\mathbf{B}} = {\mathbf{A}}\left( {\alpha {\mathbf{B}}} \right)\quad\forall {\mathbf{A}},{\mathbf{B}} \in L \text{ and } \alpha \in R \\ & {\mathbf{A}}\left( {{\mathbf{B + C}}} \right) = {\mathbf{AB + AC}} \\ & \left( {{\mathbf{A + B}}} \right){\mathbf{C}} = {\mathbf{AC + BC}} \\ & {\mathbf{A}}\left( {{\mathbf{BC}}} \right) = \left( {{\mathbf{AB}}} \right){\mathbf{C}} \\ & {\mathbf{AO = OA}} = {\mathbf{O}}, \, {\mathbf{AI = IA}} = {\mathbf{A}} \, \\ \end{aligned}$$
(1.20)

In order to construct bases in the vector space of all second-order tensors, L, we now introduce the concept of tensor product of two vectors.

  • Tensor Product (Dyadic Product) of Two Vectors

Definition

The tensor product or dyadic product of two vectors \({\mathbf{u}},{\mathbf{v}}\) is a tensor, denoted by \({\mathbf{u}} \otimes {\mathbf{v}}\), and defined by:

$$({\mathbf{u}} \otimes {\mathbf{v}})\left( {\mathbf{w}} \right) = {\mathbf{u}}({\mathbf{v}} \cdot {\mathbf{w}})\quad\forall {\mathbf{w}} \in V$$
(1.21)

The proof that \({\mathbf{u}} \otimes {\mathbf{v}}\) is actually a second-order tensor follows from the properties of the inner product (see axioms (I1)–(I5)). Furthermore, the properties

$$\begin{aligned} (\alpha_{1} {\mathbf{w}}_{1} + \alpha_{2} {\mathbf{w}}_{2} ) \otimes {\mathbf{u}} & = \alpha_{1} \left( {{\mathbf{w}}_{1} \otimes {\mathbf{u}}} \right) + \alpha_{2} \left( {{\mathbf{w}}_{2} \otimes {\mathbf{u}}} \right) \\ {\mathbf{u}} \otimes (\alpha_{1} {\mathbf{w}}_{1} + \alpha_{2} {\mathbf{w}}_{2} ) & = \alpha_{1} \left( {{\mathbf{u}} \otimes {\mathbf{w}}_{1} } \right) + \alpha_{2} \left( {{\mathbf{u}} \otimes {\mathbf{w}}_{2} } \right), \\ \end{aligned}$$
(1.22)

can be easily deduced from (1.21) by using the properties of commutativity and distributivity with respect to addition of the inner product of two vectors [i.e., axioms (I1) and (I3)].

Let \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\) be a positively oriented orthonormal basis of the three-dimensional space. We have the following identity:

$${\mathbf{e}}_{\text{i}} \otimes {\mathbf{e}}_{\text{i}} = {\mathbf{I}}.$$
(1.23)

Proof

Note that for any vector \({\mathbf{v}}\),

$$\left( {{\mathbf{e}}_{\text{i}} \otimes {\mathbf{e}}_{\text{i}} } \right){\mathbf{v}} = \left( {{\mathbf{v}} \cdot {\mathbf{e}}_{\text{i}} } \right){\mathbf{e}}_{\text{i}} = {\mathbf{v}} = {\mathbf{Iv}}.$$

Theorem 1.2

The set of tensors \(\left\{ {{\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right\}\) with k, m = 1, …, 3 forms a basis of L, which is thus a nine-dimensional vector space. Moreover, any tensor \({\mathbf{T}}\) admits the representation

$${\mathbf{T}} = T_{km} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} \quad {\text{with }}\quad T_{km} {\mathbf{ = }}{\mathbf{e}}_{k} \cdot {\mathbf{T}}{\mathbf{e}}_{m} ,k,m = 1, \ldots, 3.$$
(1.24)

Proof

Assuming that there exist real numbers \(\lambda_{km}\), with k, m = 1, …, 3 such that,

$$\lambda_{km} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} = {\mathbf{O}},$$

we get,

$${\mathbf{0}} = {\mathbf{O}}{\mathbf{e}}_{l} = \left( {\lambda_{km} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right){\mathbf{e}}_{l} = \left( {\lambda_{km} {\mathbf{e}}_{k} } \right)\left( {{\mathbf{e}}_{m} \cdot {\mathbf{e}}_{l} } \right) = \lambda_{km} {\mathbf{e}}_{k} \delta_{ml} = \lambda_{kl} {\mathbf{e}}_{k} .$$
(1.25)

Since \(\left\{ {{\mathbf{e}}_{k} } \right\}\) is a basis, it follows that \(\lambda_{kl} = 0\), for any k, l = 1, …, 3. Consequently, \(\left\{ {{\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right\}\), k, m = 1, …, 3 form a linearly independent set of tensors in the space L.

Let us consider now an arbitrary tensor \({\mathbf{T}}\) and denote by \(T_{km}\) the components relative to the orthonormal basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\) of the vector \({\mathbf{Te}}_{m}\), so that,

$${\mathbf{T}}{\mathbf{e}}_{m} = \, T_{km} {\mathbf{e}}_{k} \quad {\text{and }}\quad T_{km} {\mathbf{ = }}{\mathbf{e}}_{k} \cdot {\mathbf{T}}{\mathbf{e}}_{m} .$$

Using the orthonormality of the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\), the properties of the tensor product and the above relation, it follows that for an arbitrary vector \({\mathbf{v}}\) of components \(v_{s}\) relative to this basis, we have:

$$\begin{aligned} & \left( {{\mathbf{T}} - T_{km} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right){\mathbf{v}} = \left( {{\mathbf{T}} - T_{km} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right)\left( {{\text{v}}_{s} {\mathbf{e}}_{s} } \right) = {\text{v}}_{s} {\mathbf{T}}{\mathbf{e}}_{s} - {\text{v}}_{s} T_{km} {\mathbf{e}}_{k} \left( {{\mathbf{e}}_{m} \cdot {\mathbf{e}}_{s} } \right) \\ & = {\text{ v}}_{s} \left( {T_{ks} {\mathbf{e}}_{k} - T_{km} {\mathbf{e}}_{k} \delta_{ms} } \right) = {\text{ v}}_{s} \left( {T_{ks} {\mathbf{e}}_{k} - T_{ks} {\mathbf{e}}_{k} } \right) \, = \, {\mathbf{0}} \\ \end{aligned}$$
(1.26)

Hence, \({\mathbf{T}}\) admits the representation given by Eq. (1.24).

The nine real numbers \(T_{km}\), uniquely defined by Eq. (1.24), are called the Cartesian components of the tensor \({\mathbf{T}}\) relative to the basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}.\) If \({\mathbf{v}} = {\mathbf{Tu}}\), we also have by Eq. (1.24)

$${\mathbf{v = }}\left( {T_{km} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right){\mathbf{u = }}T_{km} u_{m} {\mathbf{e}}_{k} ,$$

and hence,

$${\text{v}}_{k} = T_{km} u_{m} .$$
(1.27)

Based on the representation given by Eq. (1.24), it follows that:

$$\left( {{\mathbf{AB}}} \right)_{km} = A_{kp} B_{pm} .$$
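In an orthonormal basis, the component formulas above reduce tensor manipulations to ordinary matrix algebra. A minimal NumPy sketch (the random test data are an arbitrary choice, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # components T_km of a tensor T
u = rng.standard_normal(3)

# Eq. (1.27): the components of v = Tu are v_k = T_km u_m
v = np.einsum('km,m->k', T, u)
assert np.allclose(v, T @ u)

# Composition, Eq. (1.19): (AB)_km = A_kp B_pm, i.e., the matrix product
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
assert np.allclose(np.einsum('kp,pm->km', A, B), A @ B)

# Dyadic product, Eq. (1.21): (a x b) w = (b . w) a
a, b, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(np.outer(a, b) @ w, np.dot(b, w) * a)
```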
  • Transpose of a Tensor

Definition

Associated with any tensor \({\mathbf{T}}\), there is a unique tensor denoted \({\mathbf{T}}^{T}\), called the transpose of \({\mathbf{T}}\), defined as:

$$\left( {{\mathbf{T}}^{T} {\mathbf{u}}} \right) \cdot {\mathbf{v}} = {\mathbf{u}} \, \cdot \, {\mathbf{Tv}}\,{\text{for any }}{\mathbf{u}},{\mathbf{v}} \in V.$$
(1.28)

The above rule defines \({\mathbf{T}}^{T}\) in a unique way. At the same time, using the above definition, the linearity of \({\mathbf{T}}\), and the properties of the scalar product in V, it can be shown that \({\mathbf{T}}^{T}\) is a linear mapping, hence, a second-order tensor. Denoting by \(T_{km}\) and \(T^{T}_{km}\) the components of \({\mathbf{T}}\) and \({\mathbf{T}}^{T}\) in the basis \(\left\{ {{\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right\}\), k, m = 1, …, 3, according to the definition of the components of a tensor given by Eq. (1.24), we have:

$$T_{km}^{T} = \, {\mathbf{e}}_{k} \cdot{\mathbf{T}}^{T} {\mathbf{e}}_{m} = {\mathbf{e}}_{m} \cdot {\mathbf{T}}{\mathbf{e}}_{k} = T_{mk} ,$$

Hence, \({\mathbf{T}}^{\text{T}} = T_{mk} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} .\)

In other words, the matrix of the Cartesian components of the transpose tensor \({\mathbf{T}}^{T}\) is the transpose of the matrix of the components of the tensor \({\mathbf{T}}.\) Also, it follows from the definition of the transpose of a tensor that,

$$\left( {{\mathbf{T}}^{\text{T}} } \right)^{\text{T}} = {\mathbf{T}}, \, \left( {{\mathbf{TU}}} \right)^{\text{T}} = \, {\mathbf{U}}^{\text{T}} {\mathbf{T}}^{\text{T}} {\text{ for any }}{\mathbf{T}},{\mathbf{U}} \in {\text{L}}$$

and,

$$\left( {{\mathbf{u}} \otimes {\mathbf{v}}} \right)^{\text{T}} = {\mathbf{v}} \otimes {\mathbf{u}} {\text{ for any }} {\mathbf{u}},{\mathbf{v}} \in V.$$

Definition

A tensor \({\mathbf{T}}\) is called symmetric if,

$${\mathbf{T}}^{T} = {\mathbf{T}}$$

and skew or antisymmetric if,

$${\mathbf{T}}^{T} = - {\mathbf{T}}$$

If the tensor \({\mathbf{T}}\) is symmetric, the matrix of its components is also symmetric; if the tensor \({\mathbf{T}}\) is antisymmetric, the matrix of its components is also antisymmetric. Consequently, in the three-dimensional vector space, a symmetric tensor has six independent components, and an antisymmetric tensor has three independent components.

Moreover, if \({\varvec{\Omega}}\) is an antisymmetric tensor, all its diagonal components are zero, and there exists a unique vector \({\varvec{\upomega}}\) such that,

$${\varvec{\Omega}}\,{\mathbf{u}} = {\varvec{\upomega}} \times {\mathbf{u}}{\text{ for any }}{\mathbf{u}} \in V.$$
(1.29)

If \({\mathbf{T}}\) is an arbitrary tensor, the symmetric part, \({\mathbf{T}}^{S}\), of \({\mathbf{T}}\) and the skew-symmetric part, \({\mathbf{T}}^{A}\), of \({\mathbf{T}}\) are defined as:

$${\mathbf{T}}^{S} = \displaystyle\frac{1}{2}\left( {{\mathbf{T}} + {\mathbf{T}}^{\text{T}} } \right),\quad {\mathbf{T}}^{\text{A}} = \displaystyle\frac{1}{2}\left( {{\mathbf{T}} - {\mathbf{T}}^{\text{T}} } \right),$$

such that,

$${\mathbf{T}} = {\mathbf{T}}^{\text{S}} + {\mathbf{T}}^{\text{A}}$$
(1.30)

The above identity demonstrates that an arbitrary tensor \({\mathbf{T}}\) can be uniquely expressed as the sum of a symmetric tensor and an antisymmetric tensor. Moreover, on the basis of (1.29)–(1.30) it follows that the set of symmetric tensors, denoted \(L_{S}\), forms a six-dimensional subspace of L, while the set of all skew-symmetric tensors, denoted \(L_{A}\), forms a three-dimensional subspace of L.
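The decomposition (1.30) and the axial vector of Eq. (1.29) are easily verified numerically. A minimal sketch (the component formula used for the axial vector, \(\omega = (\Omega_{32}, \Omega_{13}, \Omega_{21})\), follows from writing out Eq. (1.29) in components):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))

# Eq. (1.30): unique split into symmetric and antisymmetric parts
TS = 0.5 * (T + T.T)
TA = 0.5 * (T - T.T)
assert np.allclose(T, TS + TA)

# Eq. (1.29): the axial vector w of the skew part, Omega u = w x u;
# for a skew tensor, w = (Omega_32, Omega_13, Omega_21)
w = np.array([TA[2, 1], TA[0, 2], TA[1, 0]])
u = rng.standard_normal(3)
assert np.allclose(TA @ u, np.cross(w, u))
```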

  • Trace of a Tensor

Definition

The trace of the tensor \({\mathbf{T}}\), denoted tr \(\left( {\mathbf{T}} \right)\), is the real number given by,

$$tr({\mathbf{T}}) = T_{kk} ,\quad k = 1, \ldots , 3.$$
(1.31)

where \(T_{km}\) are the components of \({\mathbf{T}}\) in the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}.\)

It can be easily seen that the trace is a linear function from L to R, and that,

$$\begin{aligned} & tr \, \left( {{\mathbf{u}} \otimes {\mathbf{v}}} \right) = {\mathbf{u}}\cdot{\mathbf{v}}\,{\text{ for any }}{\mathbf{u}},{\mathbf{v}} \in V, \\ & tr\left( {{\mathbf{T}}^{\text{T}} } \right) = {tr}\left( {\mathbf{T}} \right),\quad tr({\mathbf{AB}}) = tr({\mathbf{BA}}) \, \text{ for any tensors }\, {\mathbf{A}},{\mathbf{B}},{\mathbf{T}} \in L. \end{aligned}$$
(1.32)
  • Inner Product (Contracted Product) of Two Tensors

Definition

The inner product (contracted product) of any two tensors \({\mathbf{T}}\) and \({\mathbf{U}}\), denoted by \({\mathbf{T}} :{\mathbf{U}}\) is the real number:

$${\mathbf{T}}:{\mathbf{U}} = tr\left( {{\mathbf{TU}}^{T} } \right)$$
(1.33)

It is easily seen that this operation, defined on the Cartesian product L × L and having values in R, satisfies the axioms (I1)–(I5) of a scalar product over the vector space of second-order tensors. Moreover, if \(T_{km}\) and \(U_{km}\) are the components of \({\mathbf{T}}\) and \({\mathbf{U}}\) relative to the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\), then,

$${\mathbf{T}}\text{:}{\mathbf{U}} = \, T_{km} U_{km} .$$
(1.34)

This scalar product can be used to define the norm (also called the magnitude) of any tensor \({\mathbf{T}}\), as the real number,

$$\left\| {\mathbf{T}} \right\| = \left( {{\mathbf{T}}:{\mathbf{T}}} \right)^{1/2} = \sqrt {T_{km} T_{km} } .$$
(1.35)

From the definition of the scalar product of second-order tensors, it follows that for any vectors \({\mathbf{u}},{\mathbf{v}},{\mathbf{a}},{\mathbf{b}} \in V\),

$$\left( {{\mathbf{a}} \otimes {\mathbf{b}}} \right):\left( {{\mathbf{u}} \otimes {\mathbf{v}}} \right) = \left( {{\mathbf{a}}\cdot{\mathbf{u}}} \right)\left( {{\mathbf{b}}\cdot{\mathbf{v}}} \right)$$
(1.36)

In particular, if \(\left\{ {{\mathbf{e}}_{k} } \right\}\) with k = 1, …, 3 is an orthonormal basis in V,

$$\begin{aligned} & \left( {{\mathbf{e}}_{i} \otimes {\mathbf{e}}_{j} } \right):\left( {{\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right) = \left( {{\mathbf{e}}_{i} \cdot {\mathbf{e}}_{k} } \right)\left( {{\mathbf{e}}_{j} \cdot {\mathbf{e}}_{m} } \right) = \\ & \delta_{ik} \delta_{jm} = \left\{ {\begin{array}{*{20}l} {1,} \hfill & {{\text{if }}i = {k},{j} = {m}} \hfill \\ {0,} \hfill & {\text{otherwise}} \hfill \\ \end{array} } \right. \\ \end{aligned}$$
(1.37)

Hence, \(\left\{ {{\mathbf{e}}_{k} \otimes {\mathbf{e}}_{m} } \right\}\) with k, m = 1, …, 3 is an orthonormal basis in the space L of second-order tensors.
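These relations translate directly into code; a minimal sketch (with arbitrary random components, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))
U = rng.standard_normal((3, 3))

# Eqs. (1.33)-(1.34): T:U = tr(T U^T) = T_km U_km
inner = np.trace(T @ U.T)
assert np.isclose(inner, np.sum(T * U))

# Eq. (1.35): ||T|| = (T:T)^(1/2)
norm = np.sqrt(np.sum(T * T))
assert np.isclose(norm, np.linalg.norm(T))  # the Frobenius norm
```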

In many computations, it is useful to present the components \(T_{km}\) of a tensor \({\mathbf{T}}\) relative to a given Cartesian basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\) k = 1, …, 3 as the 3 × 3 matrix:

$${\mathbf{T}} = \left[ {T_{km} } \right] = \left[ {\begin{array}{*{20}c} {T_{11} } & {T_{12} } & {T_{13} } \\ {T_{21} } & {T_{22} } & {T_{23} } \\ {T_{31} } & {T_{32} } & {T_{33} } \\ \end{array} } \right].$$
  • Determinant of a Tensor

Definition

The determinant of a tensor \({\mathbf{T}}\), denoted by det \({\mathbf{T}}\), is defined by:

$$\det {\mathbf{T}} = \det \left[ {T_{km} } \right] = \varepsilon_{pqr} T_{1p} T_{2q} T_{3r} ,$$
(1.38)

where \(\left[ {T_{km} } \right]\) denotes the matrix of the Cartesian components of \({\mathbf{T}}\) in the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\) and p, q, r = 1, …, 3. From this definition, it follows that for any tensors \({\mathbf{T}},{\mathbf{U}}\) and real number \(\alpha\):

$$\det \left( {\alpha {\mathbf{T}}} \right) = \alpha^{3} \det \left( {\mathbf{T}} \right),\det {\mathbf{T}}^{T} = \det {\mathbf{T}},\det \left( {{\mathbf{TU}}} \right) = \left( {\det {\mathbf{T}}} \right)\left( {\det {\mathbf{U}}} \right)$$
(1.39)

If det \({\mathbf{T}}\) = 0, the tensor \({\mathbf{T}}\) is said to be singular.
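Definition (1.38) can be checked against a library determinant; a minimal sketch (illustrative only):

```python
import numpy as np

# Permutation symbol, as in Eq. (1.13)
eps = np.zeros((3, 3, 3))
for p, q, r in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[p, q, r], eps[p, r, q] = 1.0, -1.0

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))

# Eq. (1.38): det T = eps_pqr T_1p T_2q T_3r (rows of the component matrix)
det = np.einsum('pqr,p,q,r->', eps, T[0], T[1], T[2])
assert np.isclose(det, np.linalg.det(T))
```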

  • Inverse of a Second-Order Tensor

If \(\det {\mathbf{T}} \ne 0\), the tensor \({\mathbf{T}}\) is said to be invertible (or non-singular) since there exists a unique tensor, called the inverse of \({\mathbf{T}}\), and denoted by \({\mathbf{T}}^{ - 1}\) such that

$${\mathbf{TT}}^{ - 1} = {\mathbf{T}}^{ - 1} {\mathbf{T}} = {\mathbf{I}}.$$
(1.40)

From Eqs. (1.39) and (1.40), it follows that

$$\det {\mathbf{T}}^{ - 1} = \left( {\det {\mathbf{T}}} \right)^{ - 1} ,\left( {{\mathbf{TU}}} \right)^{ - 1} = {\mathbf{U}}^{ - 1} {\mathbf{T}}^{ - 1} ,\left( {{\mathbf{T}}^{T} } \right)^{ - 1} = \left( {{\mathbf{T}}^{ - 1} } \right)^{T}$$
(1.41)
  • Orthogonal Tensors

A special class of tensors has as a defining property the invariance of the scalar product of any two vectors.

Definition

A tensor \({\mathbf{Q}}\) is orthogonal if,

$${\mathbf{Q}}{\mathbf{u}} \cdot {\mathbf{Q}}{\mathbf{v}} = {\mathbf{u}}{ \cdot }{\mathbf{v}}{\text{ for any }}{\mathbf{u}},{\mathbf{v}} \in V.$$
(1.42)

Taking \({\mathbf{u}} = {\mathbf{v}}\) in the above equation, it follows that

$$\left| {{\mathbf{Q}}{\mathbf{u}}} \right| = \left| {\mathbf{u}} \right|,$$

so an orthogonal tensor applied to any vector \({\mathbf{u}}\) preserves its length. Furthermore, from Eq. (1.42) we obtain

$$\displaystyle\frac{{{\mathbf{Q}}{\mathbf{u}} \cdot {\mathbf{Q}}{\mathbf{v}}}}{{\left| {{\mathbf{Q}}{\mathbf{u}}} \right|\left| {{\mathbf{Q}}{\mathbf{v}}} \right|}} = \displaystyle\frac{{{\mathbf{u}}{ \cdot }{\mathbf{v}}}}{{\left| {\mathbf{u}} \right|\left| {\mathbf{v}} \right|}},$$

so the angle between two vectors \({\mathbf{u}}\) and \({\mathbf{v}}\) is preserved whenever \({\mathbf{u}}\) and \({\mathbf{v}}\) are transformed by an orthogonal tensor \({\mathbf{Q}}.\) Since by definition of the transpose of a tensor [see Eq. (1.28)],

$${\mathbf{Q}}{\mathbf{u}} \cdot {\mathbf{Q}}{\mathbf{v}} = {\mathbf{u}}{ \cdot }\left\{ {{\mathbf{Q}}^{T} \left( {{\mathbf{Q}}{\mathbf{v}}} \right)} \right\} = {\mathbf{u}}{ \cdot }\left\{ {\left( {{\mathbf{Q}}^{T} {\mathbf{Q}}} \right){\mathbf{v}}} \right\},$$

making use of Eq. (1.42) we obtain that a necessary and sufficient condition for Q to be orthogonal is

$${\mathbf{Q}}^{T} {\mathbf{Q}} = \, {\mathbf{I}}.$$
(1.43)

From Eq. (1.43), it follows \(\det \left( {{\mathbf{Q}}^{T} {\mathbf{Q}}} \right) = \left( {\det {\mathbf{Q}}} \right)^{2}\), hence,

$$\det {\mathbf{Q}} = \pm 1, \ {\mathbf{Q}}^{T} = {\mathbf{Q}}^{ - 1} .$$
(1.44)

\({\mathbf{Q}}\) is said to be a proper orthogonal tensor if \(\det {\mathbf{Q}} = 1\) and an improper orthogonal tensor if \(\det {\mathbf{Q}} = - 1\).

Note also that from Eq. (1.42), it follows that if \(\left\{ {{\mathbf{e}}_{k} } \right\}\) is an orthonormal basis, the set \(\left\{ {{\mathbf{Qe}}_{k} } \right\}\) also forms an orthonormal basis.
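A rotation about \({\mathbf{e}}_{3}\) furnishes a concrete proper orthogonal tensor on which Eqs. (1.42)–(1.44) can be verified; a minimal sketch (the angle is an arbitrary choice):

```python
import numpy as np

# A proper orthogonal tensor: rotation by angle t about e_3
t = 0.7
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# Eq. (1.43): Q^T Q = I, and Eq. (1.44): det Q = +1 (proper)
assert np.allclose(Q.T @ Q, np.eye(3))
assert np.isclose(np.linalg.det(Q), 1.0)

# Eq. (1.42): the inner product of any two vectors is preserved
rng = np.random.default_rng(4)
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose((Q @ u) @ (Q @ v), u @ v)
```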

  • Change of Coordinate System: Transformation Matrix and Transformation Rules of Vector and Second-Order Tensor Components

Let us assume now that \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\) and \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\) are positively oriented orthonormal bases of the three-dimensional space. Relative to these bases, an arbitrary vector \({\mathbf{u}}\) has the components \(u_{i}\) and \(u_{i}^{*}\), respectively, i = 1, …, 3. Then,

$$u_{j} = q_{ji} u_{i}^{*} \quad {\text{with }}\quad q_{ji} = {\mathbf{e}}_{\text{j}} \cdot {\mathbf{e}}_{\text{i}}^{ *} ,i,j = 1, \ldots, 3$$
(1.45)

or, in matrix form:

$$\varvec{u}_{{(e_{i} )}} = {\mathbf{Q}}\varvec{u}_{{(e_{{_{i} }}^{*} )}}$$

where \({\mathbf{Q}} = \left[ {q_{ij} } \right]\) is the transformation matrix from the basis \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\) to the basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\), and \(\varvec{u}_{{(e_{i} )}} = \left( {u_{1} ,u_{2} ,u_{3} } \right),\ \varvec{u}_{{(\varvec{e}_{{_{i} }}^{*} )}} = \left( {u_{1}^{*} ,u_{2}^{*} ,u_{3}^{*} } \right)\) are the components of \({\mathbf{u}}\) in the respective basis.

Proof

First, let us express each of the vectors of the basis \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\) relative to the basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}.\) For fixed i,

$${\mathbf{e}}_{i}^{ * } = \left( {{\mathbf{e}}_{i}^{ * } \cdot {\mathbf{e}}_{j} } \right){\mathbf{e}}_{j} = q_{ji} {\mathbf{e}}_{j} = q_{1i} {\mathbf{e}}_{1} + q_{2i} {\mathbf{e}}_{2} + q_{3i} {\mathbf{e}}_{3}$$
(1.46)

i.e., the column “i” of the matrix \({\mathbf{Q}}\) contains the components of \({\mathbf{e}}_{i}^{ * }\) relative to the basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}.\) Therefore, relative to the basis given by \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\), the vector \({\mathbf{u}}\) can be expressed as:

$${\mathbf{u}} = u_{i}^{*} {\mathbf{e}}_{i}^{*} = u_{i}^{*} q_{ji} {\mathbf{e}}_{j}$$
(1.47)

In view of Theorem 1.1, the representation of a vector as a linear combination of the vectors of a given basis is unique. Thus, from Eq. (1.47) it follows that for fixed j, the component \(u_{j}\) of the vector \({\mathbf{u}}\) is:

$$u_{j} = q_{ji} u_{i}^{*}\quad {\text{or }}\quad\varvec{u}_{{(e_{i} )}} = {\mathbf{Q}}\varvec{u}_{{(e_{{_{i} }}^{*} )}} .$$

As already mentioned, the transformation matrix \({\mathbf{Q}}\) is orthogonal, and accordingly, \({\mathbf{Q}}^{ - 1} = {\mathbf{Q}}^{T}\) [Eq. (1.44)], thus,

$$\varvec{u}_{{({\mathbf{e}}_{{_{i} }}^{*} )}} \varvec{ = }{\mathbf{Q}}^{T} \varvec{u}_{{({\mathbf{e}}_{i} )}} .$$

In a similar manner, it can be shown that the transformation rule for tensor components is:

$$T_{km} = q_{kr} q_{ms} T_{rs}^{*} \quad {\text{and}}\;\;T_{rs}^{*} = q_{kr} q_{ms} T_{km}^{{}} ,k,r,m,s = 1, \ldots ,3,$$

or

$${\mathbf{T}} = \, {\mathbf{QT}}^{*} {\mathbf{Q}}^{T} \quad {\text{and }}\quad {\mathbf{T}}^{*} = {\mathbf{Q}}^{T} {\mathbf{T}}{\mathbf{Q}}$$
(1.48)

Orthogonal tensors and their properties are of great importance for the description of the mechanical response of polycrystalline materials. For instance, intrinsic crystal symmetries are characterized by various orthogonal tensors or transformations (see Chap. 3).

Remark

It is important to note that the trace and determinant of a tensor are invariants, i.e., have the same value irrespective of the Cartesian coordinate system in which the tensor is described. Indeed, using the transformation rule given by Eq. (1.48), it follows that:

$$tr({\mathbf{T}}) = T_{kk} = q_{kr} q_{ks} T_{rs}^{*} = \delta_{rs} T_{rs}^{*} = tr({\mathbf{T}}^{*} ),$$
$$\det \left( {{\mathbf{T}}^{*} } \right) = \det \left( {{\mathbf{Q}}^{T} } \right)\det \left( {{\mathbf{T}}{\mathbf{Q}}} \right) = \det \left( {{\mathbf{Q}}^{ - 1} } \right)\det \left( {\mathbf{T}} \right)\det \left( {\mathbf{Q}} \right) = \det ({\mathbf{T}})$$
(1.49)
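The transformation rules (1.45) and (1.48), together with the invariance (1.49), can be exercised numerically. In the sketch below (illustrative; the starred components and the rotation used to build \({\mathbf{Q}}\) are arbitrary choices), the components in one basis are computed from those in the other:

```python
import numpy as np

# Transformation matrix Q = [q_ij], q_ij = e_i . e_j*; here the starred
# basis is obtained from {e_i} by a rotation about e_3
t = 0.3
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

rng = np.random.default_rng(5)
u_star = rng.standard_normal(3)       # components of u in {e_i*}
T_star = rng.standard_normal((3, 3))  # components of T in {e_i* x e_j*}

# Eq. (1.45): u = Q u*, and its inverse u* = Q^T u
u = Q @ u_star
assert np.allclose(u_star, Q.T @ u)

# Eq. (1.48): T = Q T* Q^T and T* = Q^T T Q
T = Q @ T_star @ Q.T
assert np.allclose(T_star, Q.T @ T @ Q)

# Eq. (1.49): trace and determinant are invariant under the change of basis
assert np.isclose(np.trace(T), np.trace(T_star))
assert np.isclose(np.linalg.det(T), np.linalg.det(T_star))
```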
  • Invariants of a Second-Order Tensor and Spectral Theorem

In the mechanics of deformable bodies, an important role is played by the eigenvalues and eigenvectors of various second-order tensors, such as Cauchy’s stress tensor. Thus, we briefly present the definitions of eigenvalues, eigenvectors, and invariants, highlighting some important properties of symmetric tensors.

  • Eigenvalues and Eigenvectors of Second-Order Tensors

Definition

A scalar \(\lambda\) is said to be an eigenvalue of a tensor T if there exists a nonzero vector u, such that

$${\mathbf{Tu}} = \lambda {\mathbf{u}};$$
(1.50)

where \({\mathbf{u}}\) is called an eigenvector of \({\mathbf{T}}\) associated with the eigenvalue \(\lambda\). Conversely, a nonzero vector \({\mathbf{u}}\) is said to be an eigenvector of \({\mathbf{T}}\) if there exists a real number \(\lambda\) such that Eq. (1.50) holds. Note that in this case \(\lambda\) is an eigenvalue of \({\mathbf{T}}\) associated with \({\mathbf{u}}\).

The set of all vectors \({\mathbf{u}}\) satisfying Eq. (1.50) forms a subspace of V, which is called the characteristic space of \({\mathbf{T}}\) corresponding to the eigenvalue \(\lambda\). A unit eigenvector of \({\mathbf{T}}\) is called a principal direction of the tensor \({\mathbf{T}}\). Equation (1.50) implies that \(\lambda\) is an eigenvalue of \({\mathbf{T}}\) if and only if it is a real root of the algebraic equation,

$$\det ({\mathbf{T}} - \lambda {\mathbf{I}}) = 0$$
(1.51)

The above equation is called the characteristic equation of \({\mathbf{T}}.\)

Let \(\left\{ {{\mathbf{e}}_{k} } \right\}\), k = 1, …, 3 be an orthonormal basis. By expanding the determinant in Eq. (1.51), the characteristic equation can be written as a third-order algebraic equation for \(\lambda\):

$$\lambda^{3} - I_{T} \lambda^{2} - II_{T} \lambda - III_{T} = 0,$$
(1.52)

where

$$I_{T} = T_{11} + T_{22} + T_{33} = tr{\mathbf{T}},$$
(1.53)
$$II_{T} = - \left| {\begin{array}{*{20}c} {T_{22} } & {T_{23} } \\ {T_{32} } & {T_{33} } \\ \end{array} } \right| - \left| {\begin{array}{*{20}c} {T_{11} } & {T_{13} } \\ {T_{31} } & {T_{33} } \\ \end{array} } \right| - \left| {\begin{array}{*{20}c} {T_{11} } & {T_{12} } \\ {T_{21} } & {T_{22} } \\ \end{array} } \right| = \displaystyle\frac{1}{2}\left[ {tr\left( {{\mathbf{T}}^{2} } \right) - \left( {tr{\mathbf{T}}} \right)^{2} } \right]$$
(1.54)
$$III_{T} = \det [T_{km} ] = \det {\mathbf{T}}{\mathbf{.}}$$
(1.55)

The scalars \(I_{T} ,II_{T} ,III_{T}\) are referred to as the principal invariants of \({\mathbf{T}}\), with \(I_{T}\) being called the first invariant, \(II_{T}\) the second invariant, and \(III_{T}\) the third invariant of the tensor \({\mathbf{T}}\).

It is important to recall that since the trace and determinant of any tensor do not depend on the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\) [see Eq. (1.49)], \(I_{T} ,II_{T} ,III_{T}\) have the same values irrespective of the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\) used to write the characteristic equation, i.e., they are invariants relative to a change of basis in V.

Let \(\lambda_{1} ,\lambda_{2} ,\lambda_{3}\) be the roots of the third-order characteristic Eq. (1.52). Classical linear algebra results in conjunction with Eqs. (1.53)–(1.55) lead to

$$\begin{aligned} I_{T} & = tr{\mathbf{T}} = \lambda_{1} + \lambda_{2} + \lambda_{3} \\ II_{T} & = \displaystyle\frac{1}{2}\left[ {tr\left( {{\mathbf{T}}^{2} } \right) - \left( {tr{\mathbf{T}}} \right)^{2} } \right] = - \left( {\lambda_{1} \lambda_{2} + \lambda_{2} \lambda_{3} + \lambda_{3} \lambda_{1} } \right), \\ III_{T} & = \det {\mathbf{T}} = \lambda_{1} \lambda_{2} \lambda_{3} . \\ \end{aligned}$$
(1.56)

The next result, whose proof we omit, is a central theorem of linear algebra and one of great importance in modeling the behavior of materials.

  • Cayley–Hamilton Theorem

A symmetric second-order tensor \({\mathbf{T}}\) satisfies its own characteristic equation, i.e.,

$${\mathbf{T}}^{3} - I_{T} {\mathbf{T}}^{2} - II_{T} {\mathbf{T}} - III_{T} {\mathbf{I}} = {\mathbf{O}}.$$
(1.57)
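A quick numerical check of Eqs. (1.53)–(1.57) for a randomly generated symmetric tensor (a minimal sketch, not part of the proof; note the sign convention adopted for \(II_{T}\)):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)  # a symmetric tensor

# Principal invariants, Eqs. (1.53)-(1.55)
I1 = np.trace(T)
I2 = 0.5 * (np.trace(T @ T) - np.trace(T) ** 2)
I3 = np.linalg.det(T)

# Eq. (1.56): the same invariants from the eigenvalues
lam = np.linalg.eigvalsh(T)
assert np.isclose(I1, lam.sum())
assert np.isclose(I2, -(lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0]))
assert np.isclose(I3, lam.prod())

# Cayley-Hamilton, Eq. (1.57): T^3 - I_T T^2 - II_T T - III_T I = O
CH = T @ T @ T - I1 * (T @ T) - I2 * T - I3 * np.eye(3)
assert np.allclose(CH, np.zeros((3, 3)))
```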

It can also be shown (see, e.g., Halmos [3]) that:

  • Spectral Theorem

A symmetric second-order tensor \({\mathbf{T}}\) has three real eigenvalues (not necessarily distinct), and there exists an orthonormal basis \(\left\{ {{\mathbf{n}}_{1} ,{\mathbf{n}}_{2} ,{\mathbf{n}}_{3} } \right\}\) of corresponding eigenvectors such that:

$${\mathbf{T}} = \lambda_{1} \varvec{n}_{1} \otimes \varvec{n}_{1} + \lambda_{2} \varvec{n}_{2} \otimes \varvec{n}_{2} + \lambda_{3} \varvec{n}_{3} \otimes \varvec{n}_{3} .$$
(1.58)
  • If \(\lambda_{1} ,\lambda_{2}\) and \(\lambda_{3}\) are distinct, the characteristic spaces of \({\mathbf{T}}\) are one-dimensional vector subspaces of V, generated by the principal directions \({\mathbf{n}}_{1} ,{\mathbf{n}}_{2}\) and \({\mathbf{n}}_{3}\), respectively.

  • If two principal values are equal, say \(\lambda_{1} \ne \lambda_{2} = \lambda_{3}\), then \({\mathbf{T}}\) has only two distinct characteristic spaces, namely the line generated by \({\mathbf{n}}_{1}\) and the plane perpendicular to \({\mathbf{n}}_{1}\), and the representation (1.58) reduces to:

    $${\mathbf{T}} = \lambda_{1} \varvec{n}_{1} \otimes \varvec{n}_{1} + \lambda_{2} \left( {{\mathbf{I}} - \varvec{n}_{1} \otimes \varvec{n}_{1} } \right).$$
    (1.59)
  • If \(\lambda_{1} = \lambda_{2} = \lambda_{3}\), then \({\mathbf{T}}\) has a single characteristic space, and:

    $${\mathbf{T}} = \lambda_{1} {\mathbf{I}}.$$
    (1.60)

The relations (1.58)–(1.60) give the spectral decomposition of the tensor \({\mathbf{T}}.\)

Proof

To prove that the eigenvalues of a symmetric tensor \({\mathbf{T}}\) are all real, we will show that if \(\lambda\) is a root of the characteristic Eq. (1.52), then \(\lambda = \bar{\lambda }.\) Indeed, if \(\lambda = a + ib\), \(\left( {i = \sqrt { - 1} } \right)\), there exists a nonzero \({\mathbf{u}} = {\mathbf{v}} + i{\mathbf{w}}\) such that \({\mathbf{Tu}} = \lambda {\mathbf{u}}.\) Writing this latter equation in component form relative to the basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\) and separating the real and imaginary parts, we have:

$$T_{km} v_{m} - av_{k} + bw_{k} = 0,\quad {\text{and }}\;T_{km} w_{m} - aw_{k} - bv_{k} = 0,\quad{\text{with }}\;k,m = 1 \ldots 3$$
(1.61)

Since \(T_{km} = T_{mk}\), multiplying the first of Eqs. (1.61) by \(w_{k}\), the second by \(v_{k}\), summing over k, and subtracting the resulting equations, we obtain:

$$b\left( {v_{k} v_{k} + w_{k} w_{k} } \right) = b\left( {\left| {\mathbf{v}} \right|^{2} + \left| {\mathbf{w}} \right|^{2} } \right) = 0.$$

Since \({\mathbf{u}}\) is nonzero, from the above equation it follows that b = 0, and thus \(\lambda \in R.\) On the other hand, the characteristic equation is of order 3, and it has three roots (not necessarily distinct).

Assuming that the eigenvalues \(\lambda_{1} ,\lambda_{2} ,\lambda_{3}\) are distinct, and denoting by \({\mathbf{n}}_{1} ,{\mathbf{n}}_{2} ,{\mathbf{n}}_{3}\) the corresponding principal directions of the respective eigenvalues, we have:

$${\mathbf{T}}\varvec{n}_{1} = \lambda_{1} \varvec{n}_{1} ,{\mathbf{T}}\varvec{n}_{2} = \lambda_{2} \varvec{n}_{2} ,{\mathbf{T}}\varvec{n}_{3} = \lambda_{3} \varvec{n}_{3} .$$
(1.62)

It can be easily shown that the eigenvectors of a symmetric tensor \({\mathbf{T}}\) corresponding to two distinct eigenvalues are mutually orthogonal, hence \({\mathbf{n}}_{1} ,{\mathbf{n}}_{2}\) and \({\mathbf{n}}_{3}\) form an orthonormal basis. Next, using successively Eq. (1.23) and the definition of the dyadic product, we can express:

$${\mathbf{T}} = {\mathbf{TI}} = {\mathbf{T}}\left( {\varvec{n}_{i} \otimes \varvec{n}_{i} } \right) = \left( {{\mathbf{T}}\varvec{n}_{i} } \right) \otimes \varvec{n}_{i} = \sum\limits_{i = 1}^{3} {\lambda_{i} } \left( {\varvec{n}_{i} \otimes \varvec{n}_{i} } \right).$$

The proof for the other two cases (i.e., repeated roots) can be obtained in a similar manner.

Equation (1.58) is referred to as the spectral decomposition of a symmetric second-order tensor. The spectral theorem is of great importance for the theory of elasticity and plasticity.
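In numerical practice, the spectral decomposition (1.58) is obtained from a symmetric eigensolver; a minimal sketch (np.linalg.eigh returns the real eigenvalues together with an orthonormal set of eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)  # a symmetric tensor

# Spectral theorem, Eq. (1.58): the columns of N are the eigenvectors n_i
lam, N = np.linalg.eigh(T)
T_rec = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
assert np.allclose(T, T_rec)
```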

  • Positive-Definite Tensor, Polar Decomposition Theorem

Definition

A tensor \({\mathbf{T}}\) is said to be positive semi-definite if for any vector \({\mathbf{u}}\):

$${\mathbf{u}} \cdot {\mathbf{Tu}} \ge 0.$$

If the stronger requirement,

$${\mathbf{u}} \cdot {\mathbf{Tu}} > 0 \quad \forall {\mathbf{u}} \ne {\mathbf{0}}$$
(1.63)

is fulfilled, \({\mathbf{T}}\) is said to be positive-definite. Using the above definition, it follows that the eigenvalues of a symmetric positive-definite tensor are strictly positive. Hence,

$$\det {\mathbf{T}} > 0,$$

and \({\mathbf{QTQ}}^{T}\) is symmetric and positive-definite for any proper orthogonal tensor \({\mathbf{Q}}.\) This implies that any symmetric positive-definite tensor \({\mathbf{T}}\) admits an inverse. Moreover, from the spectral theorem [Eq. (1.58)], it follows that the inverse of \({\mathbf{T}}\) has the following spectral representation:

$${\mathbf{T}}^{ - 1} = \sum\limits_{i = 1}^{3} {\lambda_{i}^{ - 1} } \left( {\varvec{n}_{i} \otimes \varvec{n}_{i} } \right),$$

where \(\lambda_{i}\) are the eigenvalues of \({\mathbf{T}}\) and \(\left\{ {{\mathbf{n}}_{1} ,{\mathbf{n}}_{2} ,{\mathbf{n}}_{3} } \right\}\) are the associated eigenvectors (with corresponding representations deduced from Eqs. (1.59) and (1.60), respectively, if the eigenvalues \(\lambda_{i}\) are not distinct).

Another important result in the mechanics of materials, obtained using the spectral theorem, concerns the existence of the square root of a positive-definite tensor. It can be shown that given a symmetric positive semi-definite tensor \({\mathbf{T}}\), there exists a unique symmetric and positive semi-definite tensor U, called the square root of \({\mathbf{T}}\), and denoted \(\sqrt {\mathbf{T}}\), such that

$${\mathbf{U}}^{2} = {\mathbf{T}}{\mathbf{.}}$$
(1.64)

Indeed, if \({\mathbf{T}} = \lambda_{1} \varvec{n}_{1} \otimes \varvec{n}_{1} + \lambda_{2} \varvec{n}_{2} \otimes \varvec{n}_{2} + \lambda_{3} \varvec{n}_{3} \otimes \varvec{n}_{3}\), with \(\lambda_{1} \ge 0,\)\(\lambda_{2} \ge 0,\lambda_{3} \ge 0,\) then the tensor defined by \(\sqrt{\mathbf{T}} = \sum\nolimits_{i = 1}^{3} {\sqrt {\lambda_{i} } } \left( {\varvec{n}_{i} \otimes \varvec{n}_{i} } \right)\) is symmetric, positive semi-definite, and satisfies the requirement (1.64).

  • Polar Decomposition Theorem

Any invertible tensor \({\mathbf{A}}\) with \(\det {\mathbf{A}}{ > 0}\) has two unique multiplicative decompositions,

$${\mathbf{A }} = {\mathbf{RU}}{\text{ and }}{\mathbf{A}} = {\mathbf{VR}},$$

with \({\mathbf{U}}\) and \({\mathbf{V}}\) symmetric and positive-definite, and \({\mathbf{R}}\) orthogonal.
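Both the square root of Eq. (1.64) and the polar decomposition can be constructed from the spectral decomposition. The sketch below is illustrative; the construction \({\mathbf{U}} = \sqrt{{\mathbf{A}}^{T}{\mathbf{A}}}\), \({\mathbf{R}} = {\mathbf{A}}{\mathbf{U}}^{-1}\) is one standard route, not necessarily the most efficient one:

```python
import numpy as np

def tensor_sqrt(T):
    """Square root of a symmetric positive (semi-)definite tensor, Eq. (1.64),
    built from the spectral decomposition, Eq. (1.58)."""
    lam, N = np.linalg.eigh(T)
    return sum(np.sqrt(lam[i]) * np.outer(N[:, i], N[:, i]) for i in range(3))

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3))
if np.linalg.det(A) < 0:      # ensure det A > 0, as the theorem requires
    A[0] = -A[0]

# Right decomposition A = RU: U = sqrt(A^T A), R = A U^{-1}
U = tensor_sqrt(A.T @ A)
R = A @ np.linalg.inv(U)
assert np.allclose(R.T @ R, np.eye(3))   # R is orthogonal
assert np.allclose(A, R @ U)

# Left decomposition A = VR, with V = sqrt(A A^T) and the same R
V = tensor_sqrt(A @ A.T)
assert np.allclose(A, V @ R)
```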

  • Deviator of a Symmetric Tensor

Definition

The deviator of a nonzero symmetric tensor \({\mathbf{T}}\), denoted \({\mathbf{T}^\prime}\), is defined as:

$${\mathbf{T}}^{\prime } = {\mathbf{T}} - \displaystyle\frac{{tr{\mathbf{T}}}}{3}{\mathbf{I}}.$$
(1.65)

To simplify the writing, let us denote \(p = \left( {tr{\mathbf{T}}} \right)/3.\)

Note that \({\mathbf{T}^\prime}\) is symmetric and traceless (\(tr{\mathbf{T}^\prime} = 0\)), and its second and third invariants can be expressed in terms of the invariants of \({\mathbf{T}}\) as:

$$\begin{aligned} I_{{{\mathbf{T}}^{\prime } }} & = 0 \\ II_{{{\mathbf{T}}^{\prime } }} & = \displaystyle\frac{1}{2}\left[ {tr\left( {{\mathbf{T}}^{\prime 2} } \right)} \right] = II_{T} + 3p^{2} , \\ III_{{{\mathbf{T}}^{\prime } }} & = \det \left( {{\mathbf{T}}^{\prime } } \right) = III_{T} + p\left( {II_{T} } \right) + 2p^{3} . \\ \end{aligned}$$
(1.66)
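The relations (1.66) are easily confirmed numerically; a minimal sketch for a random symmetric tensor:

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 3))
T = 0.5 * (A + A.T)

p = np.trace(T) / 3.0
Tdev = T - p * np.eye(3)  # Eq. (1.65)

# Second invariant with the sign convention of Eq. (1.54)
II = lambda X: 0.5 * (np.trace(X @ X) - np.trace(X) ** 2)

# Eq. (1.66)
assert np.isclose(np.trace(Tdev), 0.0)
assert np.isclose(II(Tdev), II(T) + 3.0 * p**2)
assert np.isclose(np.linalg.det(Tdev),
                  np.linalg.det(T) + p * II(T) + 2.0 * p**3)
```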

Lemma 1.1

Let \({\mathbf{T}}\) be a symmetric second-order tensor, and \(\lambda_{i}\), i = 1, …, 3, its principal values. Let \(\Gamma _{n} = \lambda_{1}^{n} + \lambda_{2}^{n} + \lambda_{3}^{n}\), where n is a positive integer. Then, the following recurrence relation holds:

$$\Gamma _{n + 1} = \left( {3p} \right)\Gamma _{n} + II_{T}\Gamma _{n - 1} + III_{T}\Gamma _{n - 2}\quad {\text{for }}n \ge 2$$
(1.67)

Proof

Let us note that by definition, \(\Gamma _{0} = 3\), and from Eq. (1.56) we have:

$$\begin{aligned}\Gamma _{1} & = I_{T} = 3p, \\\Gamma _{2} & = \left( {\lambda_{1} + \lambda_{2} + \lambda_{3} } \right)^{2} - 2\left( {\lambda_{1} \lambda_{2} + \lambda_{1} \lambda_{3} + \lambda_{2} \lambda_{3} } \right) = \left( {3p} \right)^{2} + 2II_{T} \\\Gamma _{3} & = \lambda_{1}^{3} + \lambda_{2}^{3} + \lambda_{3}^{3} = \left( {3p} \right)\Gamma _{2} + II_{T}\Gamma _{1} + 3III_{T} = 27p^{3} + 9pII_{T} + 3III_{T} \\ \end{aligned}$$

On the other hand,

$$\begin{aligned}\Gamma _{n + 1} & = \left( {\lambda_{1}^{n} + \lambda_{2}^{n} + \lambda_{3}^{n} } \right)\left( {\lambda_{1} + \lambda_{2} + \lambda_{3} } \right) - \lambda_{1}^{n} \left( {\lambda_{2} + \lambda_{3} } \right) - \lambda_{2}^{n} \left( {\lambda_{1} + \lambda_{3} } \right) \\ & \quad - \lambda_{3}^{n} \left( {\lambda_{1} + \lambda_{2} } \right) \\ \end{aligned}$$

Therefore,

$$\begin{aligned}\Gamma _{n + 1} & = \left( {3p} \right)\Gamma _{n} - \left( {\lambda_{1} \lambda_{2} + \lambda_{2} \lambda_{3} + \lambda_{3} \lambda_{1} } \right)\left( {\lambda_{1}^{n - 1} + \lambda_{2}^{n - 1} + \lambda_{3}^{n - 1} } \right) \\ & \quad + \left( {\lambda_{1} \lambda_{3} \lambda_{2}^{n - 1} + \lambda_{2} \lambda_{3} \lambda_{1}^{n - 1} + \lambda_{1} \lambda_{2} \lambda_{3}^{n - 1} } \right) \\ \end{aligned}$$

or,

$$\begin{aligned}\Gamma _{n + 1} & = \left( {3p} \right)\Gamma _{n} - \left( {\lambda_{1} \lambda_{2} + \lambda_{2} \lambda_{3} + \lambda_{3} \lambda_{1} } \right)\left( {\Gamma _{n - 1} } \right) \\ & \quad + \lambda_{1} \lambda_{2} \lambda_{3} \left( {\lambda_{1}^{n - 2} + \lambda_{2}^{n - 2} + \lambda_{3}^{n - 2} } \right) \\ \end{aligned}$$

Further substitution of Eq. (1.56) leads to the recurrence relation Eq. (1.67).

Another useful result, of importance in defining yield criteria for isotropic materials with the same behavior in tension and compression, is given below.

Lemma 1.2

For any integer \(n \ge 1\), the following recurrence relation holds:

$$\begin{aligned}\Gamma _{2n + 4} & =\Gamma _{2n + 2} \left( {\lambda_{1}^{2} + \lambda_{2}^{2} + \lambda_{3}^{2} } \right) -\Gamma _{2n} \left( {\lambda_{1}^{2} \lambda_{2}^{2} + \lambda_{1}^{2} \lambda_{3}^{2} + \lambda_{2}^{2} \lambda_{3}^{2} } \right) \\ & \quad +\Gamma _{2n - 2} \left( {\lambda_{1}^{2} \lambda_{2}^{2} \lambda_{3}^{2} } \right) \\ \end{aligned}$$
(1.68)

Proof

$$\begin{aligned}\Gamma _{2n + 4} & = \left( {\lambda_{1}^{2n + 4} + \lambda_{2}^{2n + 4} + \lambda_{3}^{2n + 4} } \right) = \left( {\lambda_{1}^{2n + 2} + \lambda_{2}^{2n + 2} + \lambda_{3}^{2n + 2} } \right)\left( {\lambda_{1}^{2} + \lambda_{2}^{2} + \lambda_{3}^{2} } \right) \\ & \quad - \lambda_{1}^{2n + 2} \left( {\lambda_{2}^{2} + \lambda_{3}^{2} } \right) - \lambda_{2}^{2n + 2} \left( {\lambda_{1}^{2} + \lambda_{3}^{2} } \right) - \lambda_{3}^{2n + 2} \left( {\lambda_{1}^{2} + \lambda_{2}^{2} } \right) \\ \end{aligned}$$

or,

$$\begin{aligned}\Gamma _{2n + 4} & =\Gamma _{2n + 2} \left( {\lambda_{1}^{2} + \lambda_{2}^{2} + \lambda_{3}^{2} } \right) -\Gamma _{2n} \left( {\lambda_{1}^{2} \lambda_{2}^{2} + \lambda_{1}^{2} \lambda_{3}^{2} + \lambda_{2}^{2} \lambda_{3}^{2} } \right) \\ & \quad + \lambda_{1}^{2} \lambda_{2}^{2} \lambda_{3}^{2n} + \lambda_{1}^{2} \lambda_{3}^{2} \lambda_{2}^{2n} + \lambda_{2}^{2} \lambda_{3}^{2} \lambda_{1}^{2n} \\ \end{aligned}$$

Further collecting the last three terms in the above expression, we obtain,

$$\Gamma _{2n + 4} =\Gamma _{2n + 2} \left( {\lambda_{1}^{2} + \lambda_{2}^{2} + \lambda_{3}^{2} } \right) -\Gamma _{2n} \left( {\lambda_{1}^{2} \lambda_{2}^{2} + \lambda_{1}^{2} \lambda_{3}^{2} + \lambda_{2}^{2} \lambda_{3}^{2} } \right) + \lambda_{1}^{2} \lambda_{2}^{2} \lambda_{3}^{2}\Gamma _{2n - 2} ,$$ which is the recurrence relation Eq. (1.68).
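Both recurrence relations can be confirmed numerically; a minimal sketch for an arbitrary choice of principal values:

```python
import numpy as np

lam = np.array([1.3, -0.4, 2.1])  # principal values (an arbitrary choice)
p = lam.sum() / 3.0
II = -(lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0])  # Eq. (1.56)
III = lam.prod()

G = lambda n: np.sum(lam ** n)  # Gamma_n = lam_1^n + lam_2^n + lam_3^n

# Lemma 1.1, Eq. (1.67), valid for n >= 2
for n in range(2, 8):
    assert np.isclose(G(n + 1), 3.0*p*G(n) + II*G(n - 1) + III*G(n - 2))

# Lemma 1.2, Eq. (1.68), valid for n >= 1
s2 = np.sum(lam ** 2)
s22 = (lam[0]*lam[1])**2 + (lam[0]*lam[2])**2 + (lam[1]*lam[2])**2
s222 = lam.prod() ** 2
for n in range(1, 6):
    assert np.isclose(G(2*n + 4), G(2*n + 2)*s2 - G(2*n)*s22 + G(2*n - 2)*s222)
```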

1.2.2 Higher-Order Tensors

  • Tensor of Order n

Definition

A tensor of order n (or \(n{\text{th}}\)-order tensor) is a linear mapping that assigns to each vector \({\mathbf{u}}\) a tensor of order (n − 1), for \(n \ge 3.\) This definition, in conjunction with that of a second-order tensor given in the previous subsection, allows the iterative introduction of tensors of arbitrary order. We denote by \(L_{n}\) the set of all tensors of order n, \(n \ge 3.\) The sum \({\mathbf{A}} + {\mathbf{B}}\) of any two \(n{\text{th}}\)-order tensors \({\mathbf{A}}\) and \({\mathbf{B}}\), and the product \(\alpha {\mathbf{A}} = {\mathbf{A}}\alpha\) of an \(n{\text{th}}\)-order tensor and a real number \(\alpha\) are defined by the equations:

$$\left( {{\mathbf{A}} + {\mathbf{B}}} \right){\mathbf{v}} = {\mathbf{Av}} + {\mathbf{Bv}},\left( {\alpha {\mathbf{A}}} \right){\mathbf{v}} = \alpha \left( {{\mathbf{Av}}} \right).$$
(1.69)

As in the case of second-order tensors, it is easy to see that \(L_{n}\), endowed with the above operations and with similar definitions for the zero tensor and the opposite tensor \(\left( { - {\mathbf{A}}} \right)\), forms a vector space.

Definition

The tensor product or dyadic product of n vectors \({\mathbf{u}}_{i}\), with i = 1, …, n, is an \(n{\text{th}}\)-order tensor, denoted by \({\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} \otimes \ldots \otimes {\mathbf{u}}_{n}\), defined by:

$$({\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} \otimes \ldots \otimes {\mathbf{u}}_{n} )\left( {\mathbf{w}} \right) = \left( {{\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} \otimes \ldots \otimes {\mathbf{u}}_{n - 1} } \right)({\mathbf{u}}_{n} \cdot {\mathbf{w}}) \quad \forall {\mathbf{w}} \in V.$$
(1.70)

Note that for n = 2 the above definition reduces to the definition of a dyadic product of any two vectors given by Eq. (1.21). In particular, the tensor product of three vectors \({\mathbf{u}}_{1} ,{\mathbf{u}}_{2} ,{\mathbf{u}}_{3} \in V\) is a third-order tensor \({\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} \otimes {\mathbf{u}}_{3}\), which assigns to any vector \({\mathbf{a}}\) the second-order tensor \(\left( {{\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} } \right)\left( {{\mathbf{u}}_{3} \cdot {\mathbf{a}}} \right)\), so:

$$({\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} \otimes {\mathbf{u}}_{3} ){\mathbf{a}} = \left( {{\mathbf{u}}_{1} \otimes {\mathbf{u}}_{2} } \right)\left( {{\mathbf{u}}_{3} \cdot {\mathbf{a}}} \right)\quad\forall {\mathbf{a}} \in V$$
(1.71)

In the mechanics of deformable bodies, third-order tensors play a relatively limited role. However, in order to introduce the gradient of a second-order tensor field, and thereby the divergence of a second-order tensor field, we must use third-order tensor fields.

The products \({\mathbf{e}}_{{k_{1} }} \otimes {\mathbf{e}}_{{k_{2} }} \otimes \ldots \otimes {\mathbf{e}}_{{k_{n} }}\), with \(k_{1} , \ldots ,k_{n} = 1, \ldots ,3\), form a basis of \(L_{n}\). Hence, \(L_{n}\), the vector space of \(n{\text{th}}\)-order tensors, is \(3^{n}\)-dimensional, and every tensor \({\mathbf{A}}\) can be uniquely written in the form:

$${\mathbf{A}} = A_{{k_{1} \ldots k_{n} }} {\mathbf{e}}_{{k_{1} }} \otimes {\mathbf{e}}_{{k_{2} }} \otimes \ldots \otimes {\mathbf{e}}_{{k_{n} }} ,$$
(1.72)

where the scalars \(A_{{k_{1} \ldots k_{n} }}\) are the Cartesian components of \({\mathbf{A}}\) in the considered basis. Furthermore, if \({\mathbf{T}} = {\mathbf{Av}}\) with \({\mathbf{T}} = T_{{k_{1} \ldots k_{n - 1} }} {\mathbf{e}}_{{k_{1} }} \otimes {\mathbf{e}}_{{k_{2} }} \otimes \ldots \otimes {\mathbf{e}}_{{k_{n - 1} }}\), then by applying the definition (1.70) we obtain the expression of the components of the (n − 1)th-order tensor \({\mathbf{T}}\) in terms of the components of \({\mathbf{A}}\) and of the vector \({\mathbf{v}}\) as:

$$T_{{k_{1} \ldots k_{n - 1} }} = A_{{k_{1} \ldots k_{n} }} v_{{k_{n} }} .$$
  • Fourth-Order Tensors

The dimension of the vector space of fourth-order tensors, \(L_{4}\), is \(3^{4} = 81\), and any fourth-order tensor \({\varvec{\Phi}}\) can be expressed in a unique way as a linear combination of fourth-order dyads \({\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l} \otimes {\mathbf{e}}_{m} \otimes {\mathbf{e}}_{n}\), k, l, m, n = 1, …, 3; i.e.,

$${\varvec{\Phi}} =\Phi _{klmn} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l} \otimes {\mathbf{e}}_{m} \otimes {\mathbf{e}}_{n} ,$$
(1.73)

the numbers \(\Phi _{klmn}\), k, l, m, n = 1, …, 3, being the components of \({\varvec{\Phi}}\) in the considered basis. If \({\mathbf{A}} = {\varvec{\Phi}}{\mathbf{v}}\), the components of the third-order tensor \({\mathbf{A}}\) are:

$$A_{klm} =\Phi _{klmn} {\text{v}}_{n} .$$
(1.74)
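
Contractions such as Eq. (1.74) map directly onto `numpy.einsum`; a brief sketch (array names are ours) for a randomly chosen fourth-order tensor:

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.standard_normal((3, 3, 3, 3))   # components of a fourth-order tensor
v = rng.standard_normal(3)

A = np.einsum('klmn,n->klm', Phi, v)      # A_klm = Phi_klmn v_n, Eq. (1.74)
T = np.einsum('klm,m->kl', A, v)          # contracting once more gives a second-order tensor
assert A.shape == (3, 3, 3) and T.shape == (3, 3)
```
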
  • Transformation Rules for the Components of Fourth-Order Tensors

If \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\) is another positively oriented orthonormal basis of the three-dimensional space V, then the components of a fourth-order tensor in the basis \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\) are:

$$\Phi _{rstu}^{ * } = q_{kr} q_{ls} q_{mt} q_{nu}\Phi _{klmn} ,$$
(1.75)

where \({\mathbf{Q}} = \left[ {q_{ij} } \right]\) is the transformation matrix from the new basis \(\left\{ {{\mathbf{e}}_{1}^{ * } ,{\mathbf{e}}_{2}^{ * } ,{\mathbf{e}}_{3}^{ * } } \right\}\) to the old basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\) and r, s, t, u = 1, …, 3.

Relative to any orthonormal basis, the fourth-order identity tensor \({\mathbf{I}}_{4}\) has the components:

$${\mathbf{I}}_{4} = \delta_{km} \delta_{ln} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l} \otimes {\mathbf{e}}_{m} \otimes {\mathbf{e}}_{n} .$$
(1.76)
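
The representation (1.76) can be assembled from Kronecker deltas and checked against the identity property of \({\mathbf{I}}_{4}\); a minimal sketch:

```python
import numpy as np

delta = np.eye(3)
# (I4)_klmn = delta_km * delta_ln, Eq. (1.76)
I4 = np.einsum('km,ln->klmn', delta, delta)

T = np.random.default_rng(1).standard_normal((3, 3))
# the contraction (I4 T)_kl = (I4)_klmn T_mn returns T itself
assert np.allclose(np.einsum('klmn,mn->kl', I4, T), T)
```
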
  • Contracted Products Between Tensors

In the previous section, we defined the inner product (contracted product) of any two second-order tensors [see Eq. (1.33)]. In the following, we introduce contracted products between various \(n{\text{th}}\)-order tensors, which will be later used to define anisotropic yield criteria in terms of transformed tensors (see Chap. 5).

Definition

The left dot product and right dot product (contracted products) of a vector \({\mathbf{v}}\) and a second-order tensor \({\mathbf{T}}\) are the vectors defined as:

$$\begin{aligned} {\mathbf{v}}\cdot{\mathbf{T}} & = {\mathbf{T}}^{T} {\mathbf{v}} \\ {\mathbf{T}}\cdot{\mathbf{v}} & = {\mathbf{Tv}} \\ \end{aligned}$$
(1.77)

Relative to an orthonormal basis \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\),

$${\mathbf{v}}\cdot{\mathbf{T}} = v_{k} T_{kl} {\mathbf{e}}_{l} ,\quad {\mathbf{T}}\cdot{\mathbf{v}} = T_{kl} v_{l} {\mathbf{e}}_{k}$$
(1.78)

with k, l = 1, …, 3.
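
Equations (1.77)–(1.78) reduce to familiar matrix-vector products; a short numerical check (names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(3)
T = rng.standard_normal((3, 3))

left = np.einsum('k,kl->l', v, T)     # (v . T)_l = v_k T_kl, Eq. (1.78)
right = np.einsum('kl,l->k', T, v)    # (T . v)_k = T_kl v_l
assert np.allclose(left, T.T @ v)     # v . T = T^T v, Eq. (1.77)
assert np.allclose(right, T @ v)      # T . v = T v
```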

Definition

The left dot product and right dot product (contracted products) of a vector \({\mathbf{v}}\) and a third-order tensor \({\mathbf{A}}\) are the second-order tensors defined as:

$$\begin{aligned} {\mathbf{v}}\cdot{\mathbf{A}} & = v_{k} A_{klm} {\mathbf{e}}_{l} \otimes {\mathbf{e}}_{m} , \\ {\mathbf{A}}\cdot{\mathbf{v}} & = {\mathbf{Av}} = A_{klm} v_{m} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l} . \\ \end{aligned}$$
(1.79)

with k, l, m = 1, …, 3.

Definition

The contracted product of a fourth-order tensor \({\varvec{\Phi}}\) and a second-order tensor \({\mathbf{T}}\) is a second-order tensor defined as:

$${\varvec{\Phi}}{\mathbf{T}} =\Phi _{klmn} T_{mn} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l} .$$
(1.80)

Using the transformation rules of the components of vectors, second-order, third-order, and fourth-order tensors [see Eqs. (1.45), (1.48), (1.74)–(1.75)], it can be shown that these contracted products are independent of the basis used.

Remark

Note that if \({\mathbf{U}} = {\varvec{\Phi}}{\mathbf{T}}\) then the inner product of \({\mathbf{U}}\) with any second-order tensor \({\mathbf{B}} = B_{kl} {\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l}\) is given by:

$${\mathbf{B}}\cdot{\mathbf{U}} =\Phi _{klmn} B_{kl} T_{mn}$$
(1.81)

and in particular the norm of \({\mathbf{U}}\) is:

$$\left\| {\mathbf{U}} \right\|^{2} = {\mathbf{U}}\cdot{\mathbf{U}} = {\mathbf{U}}\cdot{\varvec{\Phi}}{\mathbf{T}} =\Phi _{klmn} U_{kl} T_{mn}$$
(1.82)
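
Both identities follow by writing the contractions in components; a quick numerical verification with random arrays (our naming):

```python
import numpy as np

rng = np.random.default_rng(3)
Phi = rng.standard_normal((3, 3, 3, 3))
T = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

U = np.einsum('klmn,mn->kl', Phi, T)                     # U = Phi T, Eq. (1.80)
lhs = np.einsum('kl,kl->', B, U)                         # inner product B . U
rhs = np.einsum('klmn,kl,mn->', Phi, B, T)               # Eq. (1.81)
assert np.isclose(lhs, rhs)
assert np.isclose(np.sum(U * U),
                  np.einsum('klmn,kl,mn->', Phi, U, T))  # Eq. (1.82)
```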

Remark

On the basis of the definition and properties of the contracted product between a fourth-order tensor and a second-order tensor, it can be concluded that a fourth-order tensor can be regarded as a linear mapping of the vector space \(L\) of second-order tensors into itself. Therefore, we can introduce the product, or composition, of two fourth-order tensors using the usual rule of composition of functions.

  • Product (Composition) of Fourth-Order Tensors

Definition

The product (or composition) of any two fourth-order tensors \({\varvec{\Phi}}\) and \({\varvec{\Psi}}\) is the fourth-order tensor defined as:

$$\left( {{\varvec{\Phi}}\,{\varvec{\Psi}}} \right)\left( {\mathbf{T}} \right) = {\varvec{\Phi}}\left( {{\varvec{\Psi}}{\mathbf{T}}} \right)\;{\text{for any }}{\mathbf{T}} \in L.$$
(1.83)

Let \(\left\{ {{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right\}\) be an orthonormal basis; the product \({\mathbf{L}} = {\varvec{\Phi}}\,{\varvec{\Psi}}\) then has the components:

$$L_{klmn} =\Phi _{klrs}\Psi _{rsmn}, \quad \text{with}\,k, l, m, n = 1, \ldots, 3.$$

It can be easily shown that the fourth-order identity tensor \({\mathbf{I}}_{4}\) [see Eq. (1.76)] has the following properties:

\({\mathbf{I}}_{4} {\mathbf{T}} = {\mathbf{T}}\) for any second-order tensor \({\mathbf{T}}\); and, for any fourth-order tensor \({\varvec{\Phi}}\),

$${\mathbf{I}}_{4} {\varvec{\Phi}} = {\varvec{\Phi}}{\mathbf{I}}_{4} = {\varvec{\Phi}}.$$
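
These rules are again plain index contractions; the sketch below (our naming) checks the component formula for \({\mathbf{L}} = {\varvec{\Phi}}\,{\varvec{\Psi}}\) and the identity properties of \({\mathbf{I}}_{4}\):

```python
import numpy as np

rng = np.random.default_rng(4)
Phi = rng.standard_normal((3, 3, 3, 3))
Psi = rng.standard_normal((3, 3, 3, 3))
T = rng.standard_normal((3, 3))

apply = lambda F, X: np.einsum('klmn,mn->kl', F, X)      # F X, Eq. (1.80)
L4 = np.einsum('klrs,rsmn->klmn', Phi, Psi)              # L_klmn = Phi_klrs Psi_rsmn
assert np.allclose(apply(L4, T), apply(Phi, apply(Psi, T)))       # Eq. (1.83)

I4 = np.einsum('km,ln->klmn', np.eye(3), np.eye(3))      # Eq. (1.76)
assert np.allclose(np.einsum('klrs,rsmn->klmn', I4, Phi), Phi)    # I4 Phi = Phi
assert np.allclose(np.einsum('klrs,rsmn->klmn', Phi, I4), Phi)    # Phi I4 = Phi
```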

Given the above properties of \({\mathbf{I}}_{4}\) and of the product of fourth-order tensors, the inverse and transpose of a fourth-order tensor are defined in the same manner as the inverse and transpose of second-order tensors [see definitions and Eq. (1.40)].

  • Transpose of a Fourth-Order Tensor

Definition

Associated with any fourth-order tensor \({\varvec{\Phi}}\), there is a fourth-order tensor called the transpose of \({\varvec{\Phi}}\), denoted by \({\varvec{\Phi}}^{T}\) such that:

$${\mathbf{A}}:\left( {{\varvec{\Phi}}^{T} {\mathbf{B}}} \right) = {\mathbf{B}}:{\varvec{\Phi}}\,{\mathbf{A}},\quad\forall {\mathbf{A}},{\mathbf{B}} \in L$$
(1.84)

It can be easily shown that the above requirement uniquely defines the transpose \({\varvec{\Phi}}^{T}\) and that it is indeed a fourth-order tensor, its components relative to an orthonormal basis being:

$$\Phi _{klmn}^{T} =\Phi _{mnkl}$$
(1.85)
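
In components, Eq. (1.85) is an axis permutation of the array \(\Phi _{klmn}\); a sketch verifying the defining relation (1.84):

```python
import numpy as np

rng = np.random.default_rng(5)
Phi = rng.standard_normal((3, 3, 3, 3))
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

PhiT = np.transpose(Phi, (2, 3, 0, 1))    # (Phi^T)_klmn = Phi_mnkl, Eq. (1.85)
lhs = np.einsum('kl,kl->', A, np.einsum('klmn,mn->kl', PhiT, B))
rhs = np.einsum('kl,kl->', B, np.einsum('klmn,mn->kl', Phi, A))
assert np.isclose(lhs, rhs)               # A : (Phi^T B) = B : (Phi A)
```
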
  • Symmetric Fourth-Order Tensors

Definition

A fourth-order tensor \({\varvec{\Phi}}\) is symmetric if,

$${\varvec{\Phi}}^{T} = {\varvec{\Phi}}.$$
(1.86)

Therefore, it follows that if \({\varvec{\Phi}}\) is symmetric its components satisfy the requirements:

$$\Phi _{klmn} =\Phi _{mnkl} , \quad {\text{with }}\quad k,l,m,n = 1,2,3.$$
(1.87)

We shall denote by \(L_{4}^{S}\) the set of all symmetric fourth-order tensors. From Eq. (1.87), it follows that a symmetric fourth-order tensor has only 45 independent components (dimension of \(L_{4}^{S}\) = 45). When introducing anisotropy using the linear transformation approach (see Chap. 5), an important role is played by those symmetric fourth-order tensors which also satisfy the additional symmetry property:

$${\varvec{\Phi}}\left( {{\mathbf{T}}^{T} } \right) = {\varvec{\Phi}}{\mathbf{T}} ,\quad \forall {\mathbf{T}} \in L.$$
(1.88)

Denoting by \(\Phi _{klmn}\) the components of the symmetric fourth-order tensor \({\varvec{\Phi}}\), it follows that the requirements (1.87) and (1.88) imply that:

$$\Phi _{klmn} =\Phi _{lkmn} =\Phi _{klnm} =\Phi _{mnkl} ,\quad {\text{with}}\quad k,l,m,n = 1 \ldots 3,$$
(1.89)

so the tensor \({\varvec{\Phi}}\) has only 21 independent components. Note that the above symmetry requirements imply

$$\left( {{\varvec{\Phi}}{\mathbf{T}}} \right)^{T} = {\varvec{\Phi}}{\mathbf{T}}, \quad\forall {\mathbf{T}} \in {\text{L}}$$

so,

$${\varvec{\Phi}}{\mathbf{T}} = {\varvec{\Phi}}{\mathbf{T}}^{S} ,$$

where \({\mathbf{T}}^{S}\) denotes the symmetric part of the second-order tensor \({\mathbf{T}}\) [see Eq. (1.30)]. An immediate consequence is that:

$${\varvec{\Phi}}\,{\varvec{\Omega}} = {\mathbf{0}},\quad{\text{for any skew tensor }}{\varvec{\Omega}}.$$
(1.90)

This means that a symmetric fourth-order tensor \({\varvec{\Phi}}\), having the additional symmetry properties of Eq. (1.89) is not a one-to-one mapping of L, the space of second-order tensors. However, a symmetric tensor \({\varvec{\Phi}}\) satisfying the symmetry conditions of Eq. (1.89) may admit an inverse in the space of symmetric fourth-order tensors.

Let us first note that the tensor \({\hat{\mathbf{I}}}\) defined as:

$$\hat{I}_{klmn} = \displaystyle\frac{1}{2}\left( {\delta_{km} \delta_{ln} + \delta_{kn} \delta_{lm} } \right),$$
(1.91)

is indeed a fourth-order symmetric tensor and satisfies the additional symmetry requirements of Eq. (1.89). Moreover, it has the following property:

$${\hat{\mathbf{I}}}{\varvec{\Phi}} = {\varvec{\Phi}}{\hat{\mathbf{I}}} = {\varvec{\Phi}},\quad {\text{for any fourth-order tensor }}{\varvec{\Phi}}{\text{ satisfying the symmetry conditions (1.89)}}.$$
(1.92)

In other words, \({\hat{\mathbf{I}}}\) is the unit tensor in the space of symmetric fourth-order tensors. By analogy with the definition of positive-definiteness of second-order tensors [see Eq. (1.63)], we say that a fourth-order tensor \({\varvec{\Phi}} \in L_{4}\) is positive-definite if:

$${\mathbf{T}}{:}\,{\varvec{\Phi}}{\mathbf{T}} \ge 0,\quad {\text{for any symmetric tensor }}{\mathbf{T}}$$
(1.93)

and,

$${\mathbf{T}}{:}\,{\varvec{\Phi}}{\mathbf{T}} = 0\quad{\text{if and only if }}{\mathbf{T}} = {\mathbf{0}}.$$

It can be easily seen that if a symmetric fourth-order tensor \({\varvec{\Phi}}\) is positive-definite, there exists a symmetric fourth-order tensor \({\varvec{\Psi}}\) such that:

$${\varvec{\Phi}}{\varvec{\Psi}} = {\varvec{\Psi}}{\varvec{\Phi}} = {\hat{\mathbf{I}}}.$$
(1.94)

This result is of great importance for the theory of elasticity, since it ensures that the inverse of the stiffness tensor exists and is positive-definite. In the mathematical theory of plasticity, use is also made of the deviatoric counterpart of \({\hat{\mathbf{I}}}\). This symmetric, deviatoric fourth-order tensor is generally denoted by \({\mathbf{K}}\), and its components with respect to any Cartesian coordinate system are given by:

$$K_{ijkl} = \displaystyle\frac{1}{2}\left( {\delta_{ik} \delta_{jl} + \delta_{il} \delta_{jk} } \right) - \displaystyle\frac{1}{3}\delta_{ij} \delta_{kl}$$
(1.95)
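
Both \({\hat{\mathbf{I}}}\) and \({\mathbf{K}}\) are conveniently built from Kronecker deltas; the following sketch (our naming) checks that \({\mathbf{K}}\) returns the deviator of the symmetric part of its argument and that it is a projector:

```python
import numpy as np

d = np.eye(3)
# Eq. (1.91): Ihat_klmn = (delta_km delta_ln + delta_kn delta_lm) / 2
Ihat = 0.5 * (np.einsum('km,ln->klmn', d, d) + np.einsum('kn,lm->klmn', d, d))
# Eq. (1.95): K = Ihat - (1/3) delta_kl delta_mn
K = Ihat - np.einsum('kl,mn->klmn', d, d) / 3.0

T = np.random.default_rng(6).standard_normal((3, 3))
Ts = 0.5 * (T + T.T)                        # symmetric part of T
dev = Ts - (np.trace(Ts) / 3.0) * d         # its deviator
assert np.allclose(np.einsum('klmn,mn->kl', K, T), dev)
# K is idempotent under composition: K K = K (a projector)
assert np.allclose(np.einsum('klrs,rsmn->klmn', K, K), K)
```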

1.3 Elements of Vector and Tensor Calculus

In this section, we provide a brief review of differentiation of functions of a scalar variable t (e.g., time). Differentiation of a scalar function of a tensor and ensuing identities are essential in calculating the plastic strain-rate tensor once the expression of the plastic potential is known.

In this section, components of vectors and tensors are relative to a fixed orthonormal basis \(\left\{ {{\mathbf{e}}_{k} } \right\}\), k = 1, …, 3. The position vector of a point M in space will be denoted by \({\mathbf{x}} = x_{k} {\mathbf{e}}_{k}\), with \(x_{1} ,x_{2} ,x_{3}\) being the Cartesian coordinates of M in the Cartesian coordinate system \(\left( {O,{\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3} } \right).\)

  • Derivative of a Point Function of a Scalar

Definition

The derivative of a point function \({\mathbf{x}}\left( t \right)\) of a scalar variable t, denoted \({\dot{\mathbf{x}}}\left( t \right)\), is a vector function defined as:

$${\dot{\mathbf{x}}}\left( t \right) = \mathop {\lim }\limits_{h \to 0} \displaystyle\frac{{{\mathbf{x}}\left( {t + h} \right) - {\mathbf{x}}\left( t \right)}}{h}$$
(1.96)

Given a scalar f, vector \({\mathbf{v}}\), or second-order tensor function \({\mathbf{T}}\) of the scalar variable t, we write:

$$\begin{aligned} \dot{f}\left( t \right) & = \displaystyle\frac{{\text{d}f(t)}}{{\text{d}t}} = \mathop {\lim }\limits_{h \to 0} \displaystyle\frac{{f\left( {t + h} \right) - f\left( t \right)}}{h}, \\ {\dot{\mathbf{v}}}\left( t \right) & = \dot{v}_{i} \left( t \right){\mathbf{e}}_{i} , \\ {\dot{\mathbf{T}}}\left( t \right) & = \dot{T}_{ij} \left( t \right){\mathbf{e}}_{i} \otimes {\mathbf{e}}_{j} , \quad i,j = 1, \ldots, 3. \\ \end{aligned}$$

Using the above definition, it can be shown that for any nonzero tensor function \({\mathbf{T}}(t)\),

$$\displaystyle\frac{\text{d}}{{\text{d}t}}\left\| {{\mathbf{T}}(t)} \right\| = \displaystyle\frac{{{\mathbf{T}}(t):{\dot{\mathbf{T}}}(t)}}{{\left\| {{\mathbf{T}}(t)} \right\|}}.$$
(1.97)
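
Equation (1.97) can be probed by finite differences along any smooth tensor path; in the sketch below the path \({\mathbf{T}}(t)\) is our own arbitrary choice:

```python
import numpy as np

def T_of_t(t):
    # an arbitrary smooth tensor path
    return np.array([[np.cos(t), t, 0.0],
                     [t**2, 1.0 + t, np.sin(t)],
                     [0.0, 1.0, 2.0 * t]])

t, h = 0.7, 1e-6
Tdot = (T_of_t(t + h) - T_of_t(t - h)) / (2 * h)   # central difference
norm_rate = (np.linalg.norm(T_of_t(t + h))
             - np.linalg.norm(T_of_t(t - h))) / (2 * h)
T = T_of_t(t)
# Eq. (1.97): d/dt ||T|| = (T : Tdot) / ||T||  (Frobenius norm)
assert np.isclose(norm_rate, np.sum(T * Tdot) / np.linalg.norm(T))
```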

The boundaries of the regions of three-dimensional Euclidean space on which these functions are defined are assumed to have continuity and differentiability properties sufficient to ensure that the boundary-value problems are well-posed. Specifically, the domain of definition is a bounded open set D whose boundary, denoted \(\partial D\), is a closed regular surface (i.e., the unit normal field over the bounding surface is well-defined).

Definition

A function that assigns to each point of a region D a scalar, vector, or tensor function is called scalar, vector, or tensor field on D, respectively.

A vector or tensor field is said to be of class \(C^{n}\) on D if its components with respect to a fixed coordinate system are continuous on D together with their partial derivatives up to the \(n{\text{th}}\)-order.

Note that these regularity properties are independent of the chosen basis.

  • Gradient of a Scalar, Vector, or Tensor Field

Consider a scalar field \(\varphi :D \to R\) of class \(C^{1}\). The gradient of \(\varphi\), denoted \({\text{grad}}\,\varphi\), is the vector field:

$${\text{grad}}\,\varphi \left( {\mathbf{x}} \right) = \displaystyle\frac{\partial \varphi }{{\partial x_{i} }}{\mathbf{e}}_{i} , \quad i = 1 , \ldots, 3.$$
(1.98)

To differentiate a function \(f\left( {{\mathbf{x}}(t)} \right)\), where f is a scalar field and \({\mathbf{x}}\left( t \right)\) a point function of the real variable t, the chain rule is used in conjunction with the above definition:

$$\displaystyle\frac{\text{d}}{{\text{d}t}}\left( {f\left( {{\mathbf{x}}(t)} \right)} \right) = grad\;f\left( {\mathbf{x}} \right) \cdot {\dot{\mathbf{x}}}\left( t \right) = \displaystyle\frac{\partial f}{{\partial x_{i} }}\dot{x}_{i} .$$
  • Gradient, Curl, Divergence of a Vector Field

Definition

Let \({\mathbf{u}}\left( {\mathbf{x}} \right)\) be a vector field of class \(C^{1}\) in D. The gradient of \({\mathbf{u}}\left( {\mathbf{x}} \right)\) is the second-order tensor field,

$$\text{grad}\,{\mathbf{u}}\left( {\mathbf{x}} \right) = \displaystyle\frac{{\partial u_{i} }}{{\partial x_{j} }}{\mathbf{e}}_{i} \otimes {\mathbf{e}}_{j} ,$$

the curl of \({\mathbf{u}}\left( {\mathbf{x}} \right)\) is a vector field defined as:

$${\text{curl}}\,{\mathbf{u}}\left( {\mathbf{x}} \right) = \varepsilon_{mrs} \displaystyle\frac{{\partial u_{r} \left( {\mathbf{x}} \right)}}{{\partial x_{s} }}{\mathbf{e}}_{m} ,$$
(1.99)

and the divergence of \({\mathbf{u}}\left( {\mathbf{x}} \right)\) is the scalar:

$${\text{div}}\,{\mathbf{u}}\left( {\mathbf{x}} \right) = tr({\text{grad}}\,{\mathbf{u}}\left( {\mathbf{x}} \right)) = \displaystyle\frac{{\partial u_{k} }}{{\partial x_{k} }}\left( {\mathbf{x}} \right).$$
(1.100)

We define the Laplace operator \(\Delta\) for scalar and vector fields as:

$$\Delta \varphi \left( {\mathbf{x}} \right) = {\text{div}}\left( {\text{grad}\varphi \left( {\mathbf{x}} \right)} \right),$$
(1.101)

and,

$$\Delta {\mathbf{u}}\left( {\mathbf{x}} \right) = {\text{div}}\left( {\text{grad}\ {\mathbf{u}}\left( {\mathbf{x}} \right)} \right).$$
(1.102)

The operators grad, curl, div, and \(\Delta\) are linear mappings and are independent of the coordinate system (for proof, see, e.g., Malvern [5]).

  • Gradient, Curl, Divergence of a Tensor Field

Definition

Let \({\mathbf{T}}:D \to L\) be a second-order tensor field of class \(C^{1}\) on D. The gradient of \({\mathbf{T}}\) is the third-order tensor field defined as follows:

$$\text{grad}\,{\mathbf{T}}\left( {\mathbf{x}} \right) = \displaystyle\frac{{\partial T_{lm} \left( {\mathbf{x}} \right)}}{{\partial x_{k} }}{\mathbf{e}}_{k} \otimes {\mathbf{e}}_{l} \otimes {\mathbf{e}}_{m} ,$$
(1.103)

the curl of T\(\left( {\mathbf{x}} \right)\) is the second-order tensor field,

$$\text{curl}\,{\mathbf{T}}\left( {\mathbf{x}} \right) = \varepsilon_{ijk} \displaystyle\frac{{\partial T_{lj} \left( {\mathbf{x}} \right)}}{{\partial x_{k} }}{\mathbf{e}}_{l} \otimes {\mathbf{e}}_{i} ,$$
(1.104)

the divergence of T\(\left( {\mathbf{x}} \right)\) is the vector field:

$${\text{div}}\,{\mathbf{T}}\left( {\mathbf{x}} \right) = \displaystyle\frac{{\partial T_{ij} \left( {\mathbf{x}} \right)}}{{\partial x_{j} }}{\mathbf{e}}_{i} ,$$

while the Laplacian of T \(\left( {\mathbf{x}} \right)\) is the tensor field:

$$\Delta {\mathbf{T}}\left( {\mathbf{x}} \right) = \displaystyle\frac{{\partial^{2} T_{ij} }}{{\partial x_{k} \partial x_{k} }}{\mathbf{e}}_{i} \otimes {\mathbf{e}}_{j} .$$
(1.105)
  • Differentiation of a Scalar Function of a Tensor

Definition

For a scalar function \(\Phi ({\mathbf{A}} )\) of a second-order tensor variable \({\mathbf{A}} ,\) the derivative \(\partial\Phi ({\mathbf{A}} ) /{\partial }{\mathbf{A}}\) is the tensor function defined such that:

$$\displaystyle\frac{{\partial\Phi ({\mathbf{A}} )}}{{\partial {\mathbf{A}}}}:{\mathbf{B}}=\mathop {\lim }\limits_{s \to 0} \displaystyle\frac{{\Phi ({\mathbf{A}} + s{\mathbf{B}} )-\Phi ({\mathbf{A}} )}}{s} \quad \forall {\mathbf{B}} \in L.$$
(1.106)

It follows that:

$$\left[ {\displaystyle\frac{{\partial\Phi ({\mathbf{A}} )}}{{\partial {\mathbf{A}}}}} \right]_{ij} = \displaystyle\frac{{\partial\Phi ({\mathbf{A}} )}}{{\partial A_{ij} }}.$$
(1.107)

Note that if \({\mathbf{A}}\) is a symmetric second-order tensor, \(\displaystyle\frac{{\partial\Phi ({\mathbf{A}} )}}{{\partial {\mathbf{A}}}}\) is a symmetric second-order tensor. This result is central to the theory of plasticity, where the plastic strain-rate tensor is obtained by differentiating the plastic potential with respect to the (symmetric) stress tensor.
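
Definition (1.106) also provides a practical recipe for verifying gradients numerically. For instance, for \(\Phi(\mathbf{A}) = tr(\mathbf{A}^{2})\), Eq. (1.107) gives \(\left[\partial\Phi/\partial\mathbf{A}\right]_{ij} = 2A_{ji}\); a sketch (our naming):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))   # an arbitrary direction in L

Phi = lambda X: np.trace(X @ X)   # Phi(A) = tr(A^2)
dPhi_dA = 2.0 * A.T               # from Eq. (1.107): dPhi/dA_ij = 2 A_ji

s = 1e-6
directional = (Phi(A + s * B) - Phi(A - s * B)) / (2.0 * s)   # Eq. (1.106)
assert np.isclose(np.sum(dPhi_dA * B), directional)
```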

1.4 Elements of the Theory of Tensor Representation

1.4.1 Symmetry Transformations and Groups

We will use the following notations:

L: the set of second-order tensors on V;

L+: the set of all second-order tensors \({\mathbf{A}}\) with \(\det \left( {\mathbf{A}} \right) > 0\);

Sym: the set of all symmetric second-order tensors;

PSym+: the set of all symmetric and positive-definite second-order tensors;

Orth: the set of all orthogonal tensors on V;

Orth+: the set of all rotations (proper orthogonal group).

Definition

Let \(\varDelta \subset L\) and let G be a subgroup of Orth. We say that a scalar function \(\Phi{:}\,\varDelta \to R\) is invariant relative to the group G if for any \({\mathbf{T}} \in \varDelta\) and for any \({\mathbf{Q}} \in G\), we have:

$$\Phi \left( {\mathbf{T}} \right) =\Phi \left( {{\mathbf{QTQ}}^{T} } \right).$$
(1.108)

Similarly, a vector function \(h{:}\,V \to V\) is invariant relative to the group G if for any \({\mathbf{v}} \in V\) and for any \({\mathbf{Q}} \in G\), we have:

$$h\left( {{\mathbf{Qv}}} \right) = {\mathbf{Q}}h\left( {\mathbf{v}} \right)$$
(1.109)

A tensor function \(S{:}\,\varDelta \to L\) is invariant relative to the group G if for any \({\mathbf{T}} \in \varDelta\) and for any \({\mathbf{Q}} \in G\), we have:

$$S\left( {{\mathbf{QTQ}}^{T} } \right) = {\mathbf{Q}}S\left( {\mathbf{T}} \right){\mathbf{Q}}^{T}$$
(1.110)

Definition

An isotropic function is a function invariant relative to the full orthogonal group.

In Sect. 1.2, we have shown that:

  1. (a)

    The determinant, det, and the trace function, tr, are isotropic functions;

  2. (b)

    The principal invariants

$$I_{T} = tr{\mathbf{T}};\quad II_{T} = \displaystyle\frac{1}{2}\left[ {\left( {tr{\mathbf{T}}} \right)^{2} - tr({\mathbf{T}}^{2} )} \right];\quad III_{T} = \det ({\mathbf{T}})$$
    (1.111)

of a symmetric tensor \({\mathbf{T}}\) are isotropic functions.
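
Both statements are easy to confirm numerically: generate a rotation \({\mathbf{Q}}\) (e.g., from a QR factorization) and compare the invariants of \({\mathbf{T}}\) and \({\mathbf{QTQ}}^{T}\). The sketch below uses our own helper names:

```python
import numpy as np

rng = np.random.default_rng(8)
S = rng.standard_normal((3, 3))
T = 0.5 * (S + S.T)                      # a symmetric tensor

Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:                 # make Q a proper rotation
    Q[:, 0] = -Q[:, 0]
Tstar = Q @ T @ Q.T

def invariants(X):
    """Principal invariants of Eq. (1.111)."""
    I1 = np.trace(X)
    I2 = 0.5 * (np.trace(X)**2 - np.trace(X @ X))
    I3 = np.linalg.det(X)
    return np.array([I1, I2, I3])

assert np.allclose(invariants(T), invariants(Tstar))
```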

  • Representation Theorems for Isotropic Scalar Functions

Let us denote by \({\mathbf{I}}\left( \varDelta \right) = \left\{ {{\mathbf{I}}_{T} \left| {{\mathbf{T}} \in \varDelta } \right.} \right\}\) the set of all possible lists of invariants for symmetric tensors. Next, we present several representation theorems for scalar functions which are due to Cauchy and Wang [6].

  • Representation Theorem for Isotropic Scalar Function of a Symmetric Tensor

A scalar function \(\Phi :\varDelta \to R\), where \(\varDelta \subset Sym\), is isotropic if and only if there exists a function \(\hat{\Phi }:{\mathbf{I}}\left( \varDelta \right) \to R\) such that,

$$\Phi \left( {\mathbf{T}} \right) = \hat{\Phi }\left( {{\mathbf{I}}_{T} } \right)\,{\text{for any }}{\mathbf{T}} \in \varDelta$$
(1.112)

Proof

That Eq. (1.112) defines an isotropic function is a direct consequence of Theorem 1.1. To prove the converse statement, let us assume that \(\Phi\) is isotropic. It is sufficient to prove that if any two symmetric tensors \({\mathbf{T}}_{1}\) and \({\mathbf{T}}_{2}\) have the same spectrum, i.e., the same invariants [see Eq. (1.111)] then \(\Phi \left( {{\mathbf{T}}_{1} } \right) =\Phi \left( {{\mathbf{T}}_{2} } \right).\) Indeed, if \({\mathbf{I}}_{{{\mathbf{T}}_{1} }} = {\mathbf{I}}_{{{\mathbf{T}}_{2} }}\) then \({\mathbf{T}}_{1}\) and \({\mathbf{T}}_{2}\) have the same eigenvalues \(\lambda_{i}\), i = 1, …, 3. By the spectral theorem, there exist orthonormal bases \(\left\{ {{\mathbf{e}}_{i} } \right\}\) and \(\left\{ {{\mathbf{f}}_{i} } \right\}\) such that \({\mathbf{T}}_{1} = \sum\nolimits_{i} {\lambda_{i} } {\mathbf{e}}_{i} \otimes {\mathbf{e}}_{i}\) and \({\mathbf{T}}_{2} = \sum\nolimits_{i} {\lambda_{i} } {\mathbf{f}}_{i} \otimes {\mathbf{f}}_{i}\). Let \({\mathbf{Q}}\) be the orthogonal transformation from one basis to the other, i.e., \({\mathbf{Q}}\left( {{\mathbf{f}}_{i} } \right) = {\mathbf{e}}_{i}\). Then,

$${\mathbf{QT}}_{2} {\mathbf{Q}}^{T} = \sum\limits_{i} {\lambda_{i} } {\mathbf{Q}}\left( {{\mathbf{f}}_{i} \otimes {\mathbf{f}}_{i} } \right){\mathbf{Q}}^{T} = \sum\limits_{i} {\lambda_{i} } \left( {{\mathbf{Qf}}_{i} } \right) \otimes \left( {{\mathbf{Qf}}_{i} } \right) = \sum\limits_{i} {\lambda_{i} } {\mathbf{e}}_{i} \otimes {\mathbf{e}}_{i} = {\mathbf{T}}_{1}$$
(1.113)

But since \(\Phi\) is isotropic, \(\Phi \left( {{\mathbf{T}}_{2} } \right) =\Phi \left( {{\mathbf{QT}}_{2} {\mathbf{Q}}^{T} } \right)\); thus, by Eq. (1.113), we obtain that \(\Phi \left( {{\mathbf{T}}_{1} } \right) =\Phi \left( {{\mathbf{T}}_{2} } \right)\). So, if \(\Phi\) is an isotropic scalar function, it depends on \({\mathbf{T}}\) only through its invariants. Representation theorems for isotropic scalar-valued functions of an arbitrary number of symmetric tensors, skew-symmetric tensors, and vectors have been derived by Wang [6].

  • Representation Theorem for Isotropic Scalar Function

A scalar function \(\Phi \left( {{\mathbf{T}}_{1} ,{\mathbf{T}}_{2} , \ldots, {\mathbf{T}}_{a} ,{\mathbf{W}}_{1} ,{\mathbf{W}}_{2} , \ldots, {\mathbf{W}}_{b} ,{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{k} } \right)\), where \({\mathbf{T}}_{\text{i}} ,{\mathbf{W}}_{\text{j}} ,{\mathbf{v}}_{\text{k}}\) are respectively an arbitrary number of symmetric tensors, skew-symmetric tensors, and vectors, is isotropic if and only if there exists a scalar function \({\hat{\Phi }}\left( {I_{{{\mathbf{T}}_{1} ,{\mathbf{T}}_{2} , \ldots ,{\mathbf{T}}_{\text{a}} ,{\mathbf{W}}_{1} ,{\mathbf{W}}_{2} , \ldots ,{\mathbf{W}}_{\text{b}} ,{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots , {\mathbf{v}}_{\text{k}} }} } \right)\) such that

$$\begin{aligned} &\Phi \left( {{\mathbf{T}}_{1} ,{\mathbf{T}}_{2} , \ldots ,{\mathbf{T}}_{a} ,{\mathbf{W}}_{1} ,{\mathbf{W}}_{2} , \ldots , {\mathbf{W}}_{b} ,{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots ,{\mathbf{v}}_{k} } \right) \\ & = \hat{\Phi }\left( {I_{{{\mathbf{T}}_{1} ,{\mathbf{T}}_{2} , \ldots , {\mathbf{T}}_{a} ,{\mathbf{W}}_{1} ,{\mathbf{W}}_{2} , \ldots , {\mathbf{W}}_{b} ,{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots , {\mathbf{v}}_{k} }} } \right) \end{aligned}$$
(1.114)

where \(I_{{{\mathbf{T}}_{1} ,{\mathbf{T}}_{2} , \ldots ,{\mathbf{T}}_{a} ,{\mathbf{W}}_{1} ,{\mathbf{W}}_{2} , \ldots , {\mathbf{W}}_{b} ,{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots , {\mathbf{v}}_{k} }}\) is an irreducible set of invariants for the arguments of the function \(\Phi\).

By definition, a set of invariants is called a “functional basis” for the list of arguments if any arbitrary scalar function of these arguments can be expressed in terms of these basic invariants. A functional basis is called irreducible if none of its elements can be expressed as a function of the others. The complete list of invariants for the set of arguments \({\mathbf{T}}_{1} ,{\mathbf{T}}_{2} , \ldots , {\mathbf{T}}_{a} ,{\mathbf{W}}_{1} ,{\mathbf{W}}_{2} , \ldots , {\mathbf{W}}_{b} ,{\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots , {\mathbf{v}}_{k}\) is obtained by considering all the (unordered) combinations of one, two, three, and four variables given in Table 1.1.

Table 1.1 Irreducible isotropic functional bases

For example, the representation for an isotropic scalar function of two symmetric second-order tensors, \({\mathbf{T}}_{ 1} ,{\mathbf{T}}_{2}\) involves ten invariants, i.e.,

$$\begin{aligned} I_{{{\mathbf{T}}_{1} ,{\mathbf{T}}_{2} }} & = \{ \text{tr}({\mathbf{T}}_{1} ),\text{tr}({\mathbf{T}}_{1}^{2} ),\text{tr}({\mathbf{T}}_{1}^{3} ),\text{tr}({\mathbf{T}}_{2} ),\text{tr}({\mathbf{T}}_{2}^{2} ),\text{tr}({\mathbf{T}}_{2}^{3} ),\text{tr}({\mathbf{T}}_{1} {\mathbf{T}}_{2} ), \\ & \quad \text{tr}({\mathbf{T}}_{1}^{2} {\mathbf{T}}_{2} ),\text{tr}({\mathbf{T}}_{1} {\mathbf{T}}_{2}^{2} ),\text{tr}({\mathbf{T}}_{1}^{2} {\mathbf{T}}_{2}^{2} )\} . \\ \end{aligned}$$
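
These ten invariants are evaluated with a few matrix products; the helper below (our naming) also illustrates that the basis is unchanged when both tensors are rotated by the same \({\mathbf{Q}}\):

```python
import numpy as np

def isotropic_basis(T1, T2):
    """The ten invariants of the pair (T1, T2) listed above."""
    tr = np.trace
    return np.array([
        tr(T1), tr(T1 @ T1), tr(T1 @ T1 @ T1),
        tr(T2), tr(T2 @ T2), tr(T2 @ T2 @ T2),
        tr(T1 @ T2), tr(T1 @ T1 @ T2), tr(T1 @ T2 @ T2),
        tr(T1 @ T1 @ T2 @ T2),
    ])

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 3)); T1 = 0.5 * (A + A.T)
B = rng.standard_normal((3, 3)); T2 = 0.5 * (B + B.T)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # an orthogonal tensor
# the basis is unchanged when both arguments are rotated together
assert np.allclose(isotropic_basis(T1, T2),
                   isotropic_basis(Q @ T1 @ Q.T, Q @ T2 @ Q.T))
```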

To establish representation theorems for isotropic tensor functions, we need to first prove the following lemma given by Wang [6].

Lemma 1.3

Let \({\mathbf{I}}\) be the second-order unit tensor and \({\mathbf{T}}\) a symmetric tensor, with eigenvalues \(\lambda_{1} ,\lambda_{2} ,\lambda_{3}\) and corresponding eigenvectors \({\mathbf{e}}_{1} ,{\mathbf{e}}_{2} ,{\mathbf{e}}_{3}\).

  1. (a)

    If all the eigenvalues \(\lambda_{i}\) are distinct, then \(\left\{ {{\mathbf{I}},{\mathbf{T}},{\mathbf{T}}^{2} } \right\}\) are linearly independent;

  2. (b)

    If \({\mathbf{T}}\) has exactly two distinct eigenvalues, then \(\left\{ {{\mathbf{I}},{\mathbf{T}}} \right\}\) are linearly independent.

Proof

  1. (a)

    To prove that the set \(\left\{ {{\mathbf{I}},{\mathbf{T}},{\mathbf{T}}^{2} } \right\}\) is linearly independent, we must show that,

    $${a}{\mathbf{T}}^{2} + {b}{\mathbf{T}} + {c}{\mathbf{I}} = {\mathbf{0}},$$
    (1.115)

only if \({a} = {b} = {c} = 0.\) Since \({\mathbf{T}}\left( {{\mathbf{e}}_{i} } \right) = \lambda_{i} {\mathbf{e}}_{i}\) and \({\mathbf{T}}^{2} \left( {{\mathbf{e}}_{i} } \right) = \left( {\lambda_{i} } \right)^{2} {\mathbf{e}}_{i}\), from Eq. (1.115) it follows that,

$$\left( {a\lambda_{i}^{2} + b\lambda_{i} + c} \right){\mathbf{e}}_{i} = {\mathbf{0}},\quad {\text{so that}}\quad a\lambda_{i}^{2} + b\lambda_{i} + c = 0\quad {\text{for any }}i = 1,2,3.$$
(1.116)

The determinant of the homogeneous algebraic system (1.116) in the unknowns \({a},{b}\), and \({c}\) is:

$$\Delta = \left| {\begin{array}{*{20}c} {\lambda_{1}^{2} } & {\lambda_{1} } & 1 \\ {\lambda_{2}^{2} } & {\lambda_{2} } & 1 \\ {\lambda_{3}^{2} } & {\lambda_{3} } & 1 \\ \end{array} } \right|$$
(1.117)

Since the eigenvalues \(\lambda_{i}\) are distinct, \(\Delta \ne 0\); hence the unique solution of (1.116) is \({a} = {b} = {c} = 0\), and \(\left\{ {{\mathbf{I}},{\mathbf{T}},{\mathbf{T}}^{2} } \right\}\) are linearly independent.

  1. (b)

    To establish the linear independence of \(\left\{ {{\mathbf{I}},{\mathbf{T}}} \right\}\) we must show that,

    $${a}{\mathbf{I}} + {b}{\mathbf{T}} = {\mathbf{0}},$$
    (1.118)

only if \({a} = {b} = 0.\) Since \({\mathbf{T}} = \lambda_{1} {\mathbf{e}} \otimes {\mathbf{e}} + \lambda_{2} \left( {{\mathbf{I}} - {\mathbf{e}} \otimes {\mathbf{e}}} \right)\), where \({\mathbf{e}}\) is the eigenvector associated with \(\lambda_{1}\), from Eq. (1.118) it follows that,

$$\left\{ {\begin{array}{*{20}l} {{a} + {b}\lambda_{1} = 0} \hfill \\ {{a} + {b}\lambda_{2} = 0} \hfill \\ \end{array} } \right..$$
(1.119)

The eigenvalues \(\lambda_{1}\) and \(\lambda_{2}\) being distinct, Eq. (1.118) holds only if \({a} = {b} = 0.\) Thus, \({\mathbf{I}}\) and \({\mathbf{T}}\) are linearly independent.

1.4.2 Representation Theorems for Orthotropic Scalar Functions

Certain anisotropic materials, such as transversely isotropic materials and some crystalline solids (for a detailed discussion of crystal classes and the respective symmetry groups, see Chap. 3), are characterized by preferred directions and planes, i.e., by certain vectors \({\mathbf{m}}_{1} ,{\mathbf{m}}_{2}, \ldots, {\mathbf{m}}_{\text{p}}\) and tensors \({\mathbf{M}}_{1} ,{\mathbf{M}}_{2} , \ldots , {\mathbf{M}}_{\text{q}}\). The symmetry group G of such materials preserves these characteristics and is of the form:

$$G = \left\{ {\left. {{\mathbf{Q}} \in {\text{O}}} \right|{\mathbf{Q}}\,{\mathbf{m}}_{1} = {\mathbf{m}}_{1} , \ldots ,{\mathbf{Q}}\,{\mathbf{m}}_{\text{p}} = {\mathbf{m}}_{\text{p}} ,{\mathbf{Q}}\,{\mathbf{M}}_{1} {\mathbf{Q}}^{T} = {\mathbf{M}}_{1} , \ldots ,{\mathbf{Q}}\,{\mathbf{M}}_{\text{q}} {\mathbf{Q}}^{T} = {\mathbf{M}}_{\text{q}} } \right\}$$
(1.120)

Theorem 1.3

A function f is invariant relative to the symmetry group G if and only if it can be represented by an isotropic function \(\hat{{f}}\):

$$\begin{aligned} & {f}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots ,{\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots, {\mathbf{A}}_{b} ) \\ & = \hat{{f}}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots, {\mathbf{A}}_{b} \text{,}{\mathbf{m}}_{1}, \ldots, {\mathbf{m}}_{p} ,{\mathbf{M}}_{1} , \ldots, {\mathbf{M}}_{q} ) \\ \end{aligned}$$
(1.121)

Proof

While in the theorem the function f can be scalar-valued, vector-valued, or tensor-valued, we present the proof only for scalar-valued functions. The proofs for vector-valued and tensor-valued functions are similar; for both, we refer the reader to the paper of I-Shih [4].

Assume that f admits the representation given by Eq. (1.121). We need to show that f is invariant relative to G, i.e.,

$${f}({\mathbf{Qv}}_{1} , \ldots, {\mathbf{Qv}}_{a} ,{\mathbf{QA}}_{1} {\mathbf{Q}}^{{\mathbf{T}}} , \ldots, {\mathbf{QA}}_{b} {\mathbf{Q}}^{{\mathbf{T}}} ) = {f}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} , \ldots, {\mathbf{A}}_{b} )\quad\forall {\mathbf{Q}} \in {\text{G}}.$$

Since,

$$\begin{aligned} & {f}({\mathbf{Qv}}_{1} , \ldots, {\mathbf{Qv}}_{a} ,{\mathbf{QA}}_{1} {\mathbf{Q}}^{T} , \ldots, {\mathbf{QA}}_{b} {\mathbf{Q}}^{T} ) \\ & = \hat{{f}}({\mathbf{Qv}}_{1} , \ldots, {\mathbf{Qv}}_{a} ,{\mathbf{QA}}_{1} {\mathbf{Q}}^{T} , \ldots, {\mathbf{QA}}_{b} {\mathbf{Q}}^{T} ,{\mathbf{m}}_{1} , \ldots, {\mathbf{m}}_{p} ,{\mathbf{M}}_{1} , \ldots, {\mathbf{M}}_{q} ) \\ & = \hat{{f}}({\mathbf{Qv}}_{1} , \ldots ,{\mathbf{Qv}}_{a} ,{\mathbf{QA}}_{1} {\mathbf{Q}}^{T} , \ldots ,{\mathbf{QA}}_{b} {\mathbf{Q}}^{T} ,{\mathbf{QQ}}^{T} {\mathbf{m}}_{1} , \ldots , {\mathbf{QQ}}^{T} {\mathbf{m}}_{p} ,{\mathbf{QQ}}^{T} {\mathbf{M}}_{1} {\mathbf{QQ}}^{T} , \ldots , {\mathbf{QQ}}^{T} {\mathbf{M}}_{q} {\mathbf{QQ}}^{T} ) \\ \end{aligned}$$

and \(\hat{{f}}\) is isotropic, it follows that:

$$\begin{aligned} & {f}({\mathbf{Qv}}_{1} , \ldots ,{\mathbf{Qv}}_{a} ,{\mathbf{QA}}_{1} {\mathbf{Q}}^{{\mathbf{T}}} , \ldots, {\mathbf{QA}}_{b} {\mathbf{Q}}^{{\mathbf{T}}} ) \\ & = \hat{{f}}({\mathbf{v}}_{1} , \ldots {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} , \ldots {\mathbf{A}}_{b} ,{\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{m}}_{1} ,\ldots, {\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{m}}_{p} ,{\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{M}}_{1} {\mathbf{Q}}, \ldots ,{\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{M}}_{q} {\mathbf{Q}}) \\ \end{aligned}$$

Since \({\mathbf{Q}} \in G\), we have: \({\mathbf{Qm}}_{1} = {\mathbf{m}}_{1}\), …, \({\mathbf{Qm}}_{p} = {\mathbf{m}}_{p}\) and \({\mathbf{QM}}_{1} {\mathbf{Q}}^{T} = {\mathbf{M}}_{1}\), …, \({\mathbf{QM}}_{q} {\mathbf{Q}}^{T} = {\mathbf{M}}_{q}\).

Therefore,

$$\begin{aligned} & \hat{{f}}({\mathbf{v}}_{1} , \ldots {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} , \ldots ,{\mathbf{A}}_{b} ,{\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{m}}_{1} , \ldots , {\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{m}}_{p} ,{\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{M}}_{1} {\mathbf{Q}}, \ldots ,{\mathbf{Q}}^{{\mathbf{T}}} {\mathbf{M}}_{q} {\mathbf{Q}}) \\ & = \hat{{f}}({\mathbf{v}}_{1} , \ldots ,{\mathbf{v}}_{a} ,{\mathbf{A}}_{1} , \ldots ,{\mathbf{A}}_{b} ,{\mathbf{m}}_{1} , \ldots , {\mathbf{m}}_{p} ,{\mathbf{M}}_{1} , \ldots ,{\mathbf{M}}_{q} ) = {f}({\mathbf{v}}_{1} , \ldots , {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} , \ldots , {\mathbf{A}}_{b} ). \\ \end{aligned}$$

Transverse isotropy is characterized by a preferred direction n. Its symmetry group is:

$$G_{T} = \left\{ {\left. {{\mathbf{Q}} \in {\text{O}}} \right|{\mathbf{Q}}\,{\mathbf{n}} = {\mathbf{n}}} \right\}.$$
(1.122)

(See also Chap. 5). By the above theorem, we have the following result:

A transversely isotropic function \({f}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots, {\mathbf{A}}_{b} )\) can be represented as:

$${f}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots ,{\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots ,{\mathbf{A}}_{b} ) = {{\hat{{f}}}}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots ,{\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots ,{\mathbf{A}}_{b} ,{\mathbf{n}})$$

where \(\hat{{f}}\) is an isotropic function.
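
This representation can be probed numerically: any function built from joint invariants of its arguments and \({\mathbf{n}}\) is unchanged under rotations that leave \({\mathbf{n}}\) fixed. In the sketch below, Rodrigues' formula and the sample \(\hat{f}\) are our own choices for illustration:

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])            # preferred direction

def rot_about(axis, theta):
    """Rodrigues' formula: rotation by theta about a unit axis."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# an isotropic function of (A, n): built from joint invariants only
f_hat = lambda A, m: np.trace(A) + m @ A @ m + m @ (A @ A) @ m
f = lambda A: f_hat(A, n)                # candidate transversely isotropic function

rng = np.random.default_rng(10)
S = rng.standard_normal((3, 3)); A = 0.5 * (S + S.T)
Q = rot_about(n, 0.83)                   # Q n = n, so Q belongs to G_T
assert np.allclose(Q @ n, n)
assert np.isclose(f(Q @ A @ Q.T), f(A))  # invariance relative to G_T
```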

Orthotropy is characterized by symmetry with respect to three mutually orthogonal planes.

Another important result is given by:

Theorem 1.4

Any orthotropic function \({f}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots, {\mathbf{A}}_{b} )\) can be represented as:

$${f}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots, {\mathbf{A}}_{b} ) = \hat{{f}}({\mathbf{v}}_{1} ,{\mathbf{v}}_{2} , \ldots, {\mathbf{v}}_{a} ,{\mathbf{A}}_{1} ,{\mathbf{A}}_{2} , \ldots, {\mathbf{A}}_{b} ,{\mathbf{N}}_{1} ,{\mathbf{N}}_{2} )$$
(1.123)

where \(\hat{{f}}\) is an isotropic function, \({\mathbf{N}}_{1} = {\mathbf{n}}_{1} \otimes {\mathbf{n}}_{1}\), and \({\mathbf{N}}_{2} = {\mathbf{n}}_{2} \otimes {\mathbf{n}}_{2}\), with \({\mathbf{n}}_{1}\) and \({\mathbf{n}}_{2}\) unit normals to two of the orthotropy planes (see I-Shih [4]).

The above results were used by Cazacu and Barlat [1, 2] to derive yield criteria for orthotropic and transversely isotropic metallic materials (see Chap. 5).