1 Introduction

Consider an isotropic linear elastic material which occupies an open, bounded, simply connected domain \(D\subset {\mathbb {R}^{2}}\), let \(\varGamma \) be a portion of the boundary \(\partial D\), and set \(\varGamma _1=\partial D\setminus \varGamma \). In the absence of body forces, the equilibrium equations for the Cauchy stress tensor \(\varvec{\sigma }(\mathbf{x })\) are given by

$$\begin{aligned} \nabla \cdot {\varvec{\sigma }}(\mathbf{x })=0,\quad \mathbf x ~{\in }\ D. \end{aligned}$$
(1)

The stresses \(\varvec{\sigma }_{ij}\) are related to the strains \(\varvec{\varepsilon }_{ij}\) through the constitutive law (Hooke’s law) as follows:

$$\begin{aligned} \varvec{\sigma }_{ij}(\mathbf{x })=2\mu \varvec{\varepsilon }_{ij}+\lambda \delta _{ij}\sum _{k=1}^2\varvec{\varepsilon }_{kk}, \end{aligned}$$
(2)

with \(\delta _{ij}\) the Kronecker delta. \(\lambda \) and \(\mu \) are the Lamé constants, which are related to the shear modulus \(G\) and Poisson’s ratio \(\nu \) by \(\lambda =\frac{2G\nu }{1-2\nu }\) and \(\mu =G\).
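For illustration, the conversion from \((G,\nu )\) to the Lamé constants can be coded directly from this relation. The following is a small Python sketch (the paper's own computations are carried out in MATLAB), using the copper-alloy constants \(G=3.35\times 10^{10}\,\hbox {N}/\hbox {m}^2\), \(\nu =0.34\) quoted later in the numerical examples:

```python
# Lamé constants from the shear modulus G and Poisson's ratio nu,
# using the relations stated in the text: lambda = 2*G*nu/(1 - 2*nu), mu = G.
def lame_constants(G, nu):
    lam = 2.0 * G * nu / (1.0 - 2.0 * nu)
    mu = G
    return lam, mu

# Copper-alloy constants used in the numerical examples of Section 4.
G, nu = 3.35e10, 0.34
lam, mu = lame_constants(G, nu)
```

For plane stress, \(\nu \) would first be replaced by \(\nu /(1+\nu )\), as described later in the text.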

The strains \(\varvec{\varepsilon }_{ij}\) are defined as a differential operator on the displacement vector \(\mathbf u (\mathbf{x })\) by

$$\begin{aligned} \varvec{\varepsilon }_{ij}(\mathbf{x })=\frac{1}{2}\left( \frac{\partial u_i(\mathbf{x })}{\partial x_j}+\frac{\partial u_j(\mathbf{x })}{\partial x_i}\right) . \end{aligned}$$
(3)

This equation is often referred to as the compatibility equation for small deformations.

On combining the three field equations (1)–(3) and eliminating \(\varvec{\varepsilon }\) and \(\varvec{\sigma }\), the displacement field \(\mathbf u \) is found to be governed by the Cauchy–Navier equation

$$\begin{aligned} \mu \Delta \mathbf{u }+(\lambda +\mu )\nabla \nabla \cdot \mathbf{u }=0,\quad \mathbf{x }~{\in }\ D. \end{aligned}$$
(4)

Let \(\mathbf{n }\) be the unit normal to the boundary \(\partial {D}\) directed into the exterior of \(D\), and \(T_n\mathbf u \) be the traction vector at a point \(\mathbf{x }\in \partial D\), defined by

$$\begin{aligned} (T_n\mathbf u )_i(\mathbf{x })=\sum _{j=1}^2\varvec{\sigma }_{ij}(\mathbf{x })n_j. \end{aligned}$$

In this paper, we consider the following Cauchy problem: given Cauchy data \(\mathbf f \) and \(\mathbf t \) on \(\varGamma \), find \(\mathbf u =(u_1,u_2)^\top \), such that \(\mathbf u \) satisfies Eq. (4) and the following boundary conditions

$$\begin{aligned} \mathbf u =\mathbf f ,\quad T_n\mathbf u =\mathbf t ,\quad \,\hbox {on}\, \varGamma . \end{aligned}$$
(5)

We will determine both the displacement and the traction vectors on \(\varGamma _1=\partial D\backslash \varGamma \).

In the above formulation of the boundary conditions (5), it can be seen that the boundary \(\varGamma \) is overspecified by prescribing both the displacement and the traction vectors, whilst the boundary \(\partial D\setminus \varGamma \) is underspecified since both the displacement and the traction vectors are unknown and have to be determined. This problem, termed the Cauchy problem, is much more difficult to solve, both analytically and numerically, than the direct problem, since the solution does not satisfy the general conditions of well-posedness. The existence and uniqueness of the solution to such problems have been well established by Calderon [5]. Moreover, it is well known that they are ill-posed, i.e. the solutions do not depend continuously on the Cauchy data, and a small perturbation in the data may result in a large change in the solutions ([1, 10]).

There are many numerical methods in the literature for solving the Cauchy problem in linear elasticity. Liu [16] proposed a Lie-group integrator method for the solution of the inverse Cauchy problem in linear or nonlinear elasticity. Baranger and Andrieux [3] gave an optimization method. The energy error minimization method for the 3D Cauchy problem was investigated by Andrieux and Baranger [2]. Sun et al. [33] gave an integral equation method for the 3D elastostatic Cauchy problem, constructing a regularized solution from a single-layer potential to solve this ill-posed problem. We refer to [19, 20] for the alternating iterative boundary element method. Comino et al. [6] applied the alternating iterative method [13] to the Cauchy problem in two-dimensional anisotropic elasticity, using the boundary element method (BEM) for the numerical implementation. Delvare et al. [7] gave a least-squares fitting method which solves the Cauchy problem through a sequence of optimization problems under equality constraints. Durand et al. [8] gave an iterative method for solving axisymmetric Cauchy problems in linear elasticity based on the finite element method. Marin and Johansson [29, 30] investigated alternating iterative algorithms with relaxation procedures for the Cauchy problem in linear isotropic elasticity. The BEM was combined successfully with the conjugate gradient method and a stopping criterion based on a Monte-Carlo simulation of the generalized cross-validation (GCV) criterion by Turco [35]. For other iterative methods, we refer to [21]. Marin and Lesnic [23, 24, 26] investigated the BEM via singular value decomposition, regularization, and Landweber iteration, and the MFS was used by Marin and Lesnic [25]. Turco [34] used the BEM to discretize the problem, along with a strategy based on the Tikhonov regularization method completed by the GCV criterion, in order to make the solution process entirely automatic. The BEM was also investigated by Marin et al. [22] and Zabaras et al. [36].
The FEM was investigated by Maniatty [17], Martin et al. [31] and Schnur and Zabaras [32]. Later, Marin and Johansson [30] applied the alternating iterative algorithm [13] to Cauchy problems in linear elasticity. The application of the MFS, in conjunction with the Tikhonov regularization method, to the numerical solution of the Cauchy problem in three-dimensional isotropic linear elasticity was studied by Marin [27]. We refer to [4, 14, 28] for other methods.

The main purpose of this paper is to provide a novel MFS for the Cauchy problem in two-dimensional linear elasticity. The main idea is to approximate the solution of the Cauchy problem (4)–(5) by the following form

$$\begin{aligned} \mathbf u (\mathbf{x })=\mathbf{c }+\sum _{j=1}^{M}\mathbf U (\mathbf{x },\mathbf{y }^j)\mathbf a ^j, \quad \sum _{j=1}^{M}\mathbf a ^j=\mathbf{0 }, \end{aligned}$$

where \(\mathbf a ^j=(a_j,b_j)^\top \), and \(\mathbf U (\mathbf{x },\mathbf{y })\) is the fundamental solution matrix [15] given by

$$\begin{aligned} \mathbf U _{ij}(\mathbf{x }-\mathbf{y }) =C_1\left( C_2\delta _{ij}\ln |\mathbf{x }-\mathbf{y }|+\frac{({x}_i-{y}_i)({x}_j-{y}_j)}{|\mathbf{x }-\mathbf{y }|^2}\right) , \end{aligned}$$

with \(C_1={1}/[8\pi G(1-\bar{\nu })]\) and \(C_2=4\bar{\nu }-3\). We set \(\bar{\nu }=\nu /(1+\nu )\) for plane stress and \(\bar{\nu }=\nu \) for plane strain.
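The fundamental matrix is straightforward to evaluate from this formula. The following Python/NumPy sketch (illustrative material constants; the paper's own implementation is in MATLAB) also records a property used later in the paper: under the scaling \(\mathbf{x }\rightarrow \hat{\alpha }\mathbf{x }\), \(\mathbf{y }\rightarrow \hat{\alpha }\mathbf{y }\), the matrix picks up the additive diagonal term \(C_1C_2\ln \hat{\alpha }\,\delta _{ij}\):

```python
import numpy as np

def U_matrix(x, y, G, nu_bar):
    """Kelvin fundamental matrix U_ij(x - y) for 2D elastostatics,
    U_ij = C1 * (C2 * delta_ij * ln|x-y| + (x_i-y_i)(x_j-y_j)/|x-y|^2)."""
    C1 = 1.0 / (8.0 * np.pi * G * (1.0 - nu_bar))
    C2 = 4.0 * nu_bar - 3.0
    d = np.asarray(x, float) - np.asarray(y, float)
    r = np.linalg.norm(d)
    return C1 * (C2 * np.log(r) * np.eye(2) + np.outer(d, d) / r**2)

# Illustrative (not the paper's) constants: G = 1, nu_bar = 0.3.
G, nu_bar = 1.0, 0.3
x, y = np.array([0.3, -0.7]), np.array([2.0, 1.5])
U1 = U_matrix(x, y, G, nu_bar)
```

Note that \(\mathbf U \) is symmetric in \(i,j\), as visible from the formula.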

The MFS is a meshless method. In comparison with the BEM and the FEM, the MFS requires no interior or surface meshing, which makes it extremely attractive for solving problems on complicated boundaries; for this reason the method has become increasingly popular. The ease with which the MFS handles complex boundaries makes it an ideal candidate for problems in which the boundary is of major importance or requires special attention, such as free boundary problems. For these reasons, the MFS has been used increasingly over the last decade for the numerical solution of inverse problems. Excellent surveys of the MFS and related methods over the past three decades have been presented by Fairweather and Karageorghis [9] and Karageorghis et al. [11].

The outline of this paper is as follows. In Sect. 2, we formulate the invariance property of the solution of the boundary value problem and then present the invariant MFS. In Sect. 3, we solve the resulting equations by the Tikhonov regularization method with the Morozov discrepancy principle. Finally, several numerical examples illustrate the effectiveness of our method, and its accuracy is compared with that of the classical MFS.

2 Formulation and Solution Method

2.1 The Invariance Property of the Solution

Let \(D\subset {\mathbb {R}^{2}}\) be a bounded and connected domain with piecewise smooth boundary \(\partial {D}\). We consider the following boundary value problem: given \(\mathbf{g }\) on \(\partial D\), find \(\mathbf u \) such that \(\mathbf u \) satisfies

$$\begin{aligned} (\mathbf{P.1 }) ~~~~~~~ {\left\{ \begin{array}{ll} \mu \Delta \mathbf{u }+(\lambda +\mu )\nabla \nabla \cdot \mathbf{u }=0,\quad \mathrm{in}\ D,\\ \mathbf u =\mathbf{g },~~\mathrm{on}\ \partial D. \end{array}\right. } \end{aligned}$$
(6)

Next we consider another problem: \(D'\subset {\mathbb {R}^{2}}\) is a bounded and connected domain such that for every \(\mathbf{x }\in D\) we have \(\mathbf{x }'=\hat{\alpha }\mathbf{x }\in D'\) with \(\hat{\alpha }>0\); i.e. \(D'\) is an enlarged or compressed copy of \(D\). Let \(\mathbf u '\) be the solution of the following problem:

$$\begin{aligned} (\mathbf{P.2 })~~~~~~~ {\left\{ \begin{array}{ll} \mu \Delta \mathbf{u '}+(\lambda +\mu )\nabla \nabla \cdot \mathbf{u '}=0,\quad \mathrm{in}\ D',\\ \mathbf u '=\mathbf{g }',~~\mathrm{on}\ \partial D'. \end{array}\right. } \end{aligned}$$
(7)

If we fix the boundary conditions \(\mathbf{g }({\varvec{x}})=\mathbf{g }'(\hat{\alpha }\mathbf{x })\) for \(\mathbf{x }\in \partial D\), it is well known that the two solutions are related by \(\mathbf u (\mathbf{x })=\mathbf u '(\hat{\alpha }\mathbf{x })\) for \(\mathbf{x }\in D\). We refer to this invariance under a trivial coordinate change in the problem description as invariance under scaling of coordinates.

We will show that (P.1) and (P.2) are essentially equivalent.

First, we address the solvability of problems (P.1) and (P.2). It is well known that each problem has a unique solution in \(\mathbf{H }^{1}\), and \(\mathbf u \) has the following double-layer representation for a charge density \(\varvec{\varphi }({\varvec{x}})\in L^2(\partial D)\):

$$\begin{aligned} \mathbf u (\mathbf{x })=\int _{\partial D}T_{n(\mathbf{y })}\mathbf U (\mathbf{x },\mathbf{y })\varvec{\varphi }({\varvec{y}})\mathrm{d}s_y,~~~{\varvec{x}}\in D, \end{aligned}$$
(8)

whilst \(\mathbf u '\), with a charge density \(\varvec{\phi }({\varvec{x}}')\in L^2(\partial D')\), is given by

$$\begin{aligned} \mathbf u '({\varvec{x}}')=\int _{\partial D'}T_{n({\varvec{y}}')}\mathbf U ({\varvec{x}}',{\varvec{y}}')\varvec{\phi }({\varvec{y}}')\mathrm{d}s_{y'},~~~{\varvec{x}}'\in D', \end{aligned}$$
(9)

where \(T_{n({\varvec{y}})}\mathbf U ({\varvec{x}},{\varvec{y}})=(\mathbf T _{ij})\) is given by

$$\begin{aligned} \mathbf T _{ij}(\mathbf{x },\mathbf{y })&= \frac{C_3}{r(\mathbf x ,\mathbf y )}\left[ \left( C_4\delta _{ij}+2\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_i}\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_j}\right) \frac{\partial r(\mathbf{x },\mathbf{y })}{\partial n(\mathbf{y })}\right. \\&\quad -\left. C_4\left( \frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_i}n_j(\mathbf{y })-\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_j}n_i(\mathbf{y })\right) \right] , \end{aligned}$$

with \(r(\mathbf{x },\mathbf{y })=|\mathbf{x }-\mathbf{y }|, C_3=-\frac{1}{4\pi (1-\bar{\nu })}\) and \(C_4=1-2\bar{\nu }\). From the jump relation [15, p. 18], for (8) and (9), we get

$$\begin{aligned} \mathbf{g }({\varvec{x}})=\int _{\partial D}\mathbf T ({\varvec{x}},{\varvec{y}})\varvec{\varphi }({\varvec{y}})\mathrm{d}s_y-\frac{1}{2}\varvec{\varphi }({\varvec{x}}),~~~{\varvec{x}}\in {\partial D}, \end{aligned}$$
(10)

and

$$\begin{aligned} \mathbf{g }'({\varvec{x}}')=\int _{\partial D'}\mathbf T ({\varvec{x}}',{\varvec{y}}')\varvec{\phi }({\varvec{y}}')\mathrm{d}s_{y'}-\frac{1}{2}\varvec{\phi }({\varvec{x}}'),~~~{\varvec{x}}'\in \partial D', \end{aligned}$$
(11)

respectively. Thus, we can determine the charge densities \(\varvec{\varphi }({\varvec{x}})\) and \(\varvec{\phi }({\varvec{x}}')\).

By introducing the operator

$$\begin{aligned} \mathbf K \varvec{\varphi }({\varvec{x}})=2\int _{\partial D}\mathbf T ({\varvec{x}},{\varvec{y}})\varvec{\varphi }({\varvec{y}})\mathrm{d}s_y,~~{\varvec{x}}\in \partial D, \end{aligned}$$

we will get the charge density

$$\begin{aligned} \varvec{\varphi }({\varvec{x}})=-2(\mathbf I -\mathbf K )^{-1}\mathbf{g }({\varvec{x}}),~~{\varvec{x}}\in \partial D, \end{aligned}$$

provided that \(\mathbf I -\mathbf K \) is invertible.

Theorem 1

The operator \(\mathbf I -\mathbf K \) is injective.

Proof

It suffices to prove that the homogeneous equation \((\mathbf I -\mathbf K )\varvec{\varphi }=0\) has only the trivial solution. Define a double-layer potential \(\varvec{v}\) by (8). Then the jump relation [15, p. 18] gives \(\varvec{v}_{-}=\frac{1}{2}(\mathbf K \varvec{\varphi }-\varvec{\varphi })=0\). Therefore, the potential \(\varvec{v}\) solves the boundary value problem in \(D\) with homogeneous Dirichlet boundary condition, so \(\varvec{v}=0\) in \(D\). This yields \({T}_n \varvec{v}_{-}=0\), and by the jump relation \({T}_n \varvec{v}_{+}={T}_n \varvec{v}_{-}=0\) on \(\partial D\). We observe that \(\varvec{v}=\mathcal {O}(\frac{1}{|{\varvec{x}}|})\) as \(|{\varvec{x}}|\rightarrow \infty \). The uniqueness of the exterior boundary value problem for the Cauchy–Navier equation then yields that \(\varvec{v}\) vanishes in the exterior of \(D\), i.e. \(\varvec{v}=0\) in \(\mathbb {R}^2\setminus D\). Hence, by the jump relation, \(\varvec{\varphi }=\varvec{v}_{+}-\varvec{v}_{-}=0\) on \(\partial D\), which completes the proof. \(\square \)

By Theorem 1 and the Fredholm alternative (\(\mathbf K \) is a compact operator), \(\mathbf I -\mathbf K \) is invertible, so we can obtain \(\varvec{\varphi }\) and \(\varvec{\phi }\) from (10) and (11), respectively. We will now show that \(\varvec{\varphi }({\varvec{x}})=\varvec{\phi }(\hat{\alpha } {\varvec{x}})\) for \({\varvec{x}} \in \partial D\).

Let us examine why (6) and (7) have the invariance property. From (9) and the change of variables \(\mathbf{x }'=\hat{\alpha }\mathbf{x }\), \(\mathbf{y }'=\hat{\alpha }\mathbf{y }\), we have

$$\begin{aligned}&{u_1}'(\mathbf{x }')\nonumber \\&\quad =\int _{\partial D'}\left( \mathbf T _{11}(\mathbf{x }',\mathbf{y }')\phi _1(\mathbf y ')+\mathbf T _{12}(\mathbf{x }',\mathbf{y }')\phi _2(\mathbf y ')\right) \mathrm{d}s_{y'}\nonumber \\&\quad =\int _{\partial D'}\frac{C_3}{r(\mathbf{x }',\mathbf{y }')}\left( C_4+2\left( \frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial y'_1}\right) ^2\right) \frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial n(\mathbf{y }')}\phi _1(\mathbf y ')\mathrm{d}s_{y'}\nonumber \\&\qquad +\int _{\partial D'}\frac{C_3}{r(\mathbf{x }',\mathbf{y }')}\left[ 2\frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial y'_1}\frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial y'_2}\frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial n(\mathbf{y }')}\nonumber \right. \\&\qquad \left. -C_4\left( \frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial y'_1}n_2(\mathbf{y }')-\frac{\partial r(\mathbf{x }',\mathbf{y }')}{\partial y'_2}n_1(\mathbf{y }')\right) \right] \phi _2(\mathbf{y }')\mathrm{d}s_{y'}\nonumber \\&\quad =\int _{\partial D}\frac{C_3}{\hat{\alpha }r(\mathbf{x },\mathbf{y })}\left( C_4+2\left( \frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_1}\right) ^2\right) \frac{\partial r(\mathbf{x },\mathbf{y })}{\partial n(\mathbf{y })}\phi _1(\hat{\alpha }\mathbf{y })\hat{\alpha }\mathrm{d}s_{y}\nonumber \\&\qquad +\int _{\partial D}\frac{C_3}{\hat{\alpha }r(\mathbf{x },\mathbf{y })}\left[ 2\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_1}\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_2}\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial n(\mathbf{y })}\nonumber \right. \\&\qquad \left. 
-C_4\left( \frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_1}n_2(\mathbf{y })-\frac{\partial r(\mathbf{x },\mathbf{y })}{\partial y_2}n_1(\mathbf{y })\right) \right] \phi _2(\hat{\alpha }\mathbf{y })\hat{\alpha }\mathrm{d}s_{y}\nonumber \\&\quad =\int _{\partial D}\left( \mathbf T _{11}(\mathbf{x },\mathbf{y })\phi _1(\hat{\alpha }\mathbf{y })+\mathbf T _{12}(\mathbf{x },\mathbf{y })\phi _2(\hat{\alpha }\mathbf{y })\right) \mathrm{d}s_{y}. \end{aligned}$$
(12)

This means that (11) takes the following form:

$$\begin{aligned} \mathbf{g }'(\hat{\alpha }{\mathbf{x }})=\int _{\partial D}\mathbf T (\mathbf{x },\mathbf{y })\varvec{\phi }(\hat{\alpha }{\varvec{y}})\mathrm{d}s_y-\frac{1}{2}\varvec{\phi }(\hat{\alpha }\mathbf{x }),~~~\mathbf x \in \partial D. \end{aligned}$$
(13)

From (10), (13) and Theorem 1, it can be seen that \(\varvec{\varphi }(\mathbf{x })=\varvec{\phi }(\hat{\alpha }\mathbf{x })\), provided that \(\mathbf{g }(\mathbf{x })=\mathbf{g }'(\hat{\alpha }\mathbf{x })\) for \(\mathbf x \in \partial D\).
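The change of variables in (12) rests on the homogeneity of the kernel: since \(\partial r/\partial y_i\) and \(\partial r/\partial n(\mathbf{y })\) are invariant under \(\mathbf{x }\rightarrow \hat{\alpha }\mathbf{x }\), \(\mathbf{y }\rightarrow \hat{\alpha }\mathbf{y }\), the factor \(1/r\) gives \(\mathbf T (\hat{\alpha }\mathbf{x },\hat{\alpha }\mathbf{y })=\hat{\alpha }^{-1}\mathbf T (\mathbf{x },\mathbf{y })\), which exactly cancels the Jacobian \(\hat{\alpha }\,\mathrm{d}s_y\). A minimal numerical check of this property (Python/NumPy sketch, evaluating \(\mathbf T _{ij}\) directly from the formula above, with illustrative values):

```python
import numpy as np

def T_matrix(x, y, n, nu_bar):
    """Traction kernel T_ij(x, y) of the double-layer potential, as in the text."""
    C3 = -1.0 / (4.0 * np.pi * (1.0 - nu_bar))
    C4 = 1.0 - 2.0 * nu_bar
    x, y, n = (np.asarray(v, float) for v in (x, y, n))
    d = y - x
    r = np.linalg.norm(d)
    dr = d / r                 # dr/dy_i = (y_i - x_i)/r, scale invariant
    drdn = dr @ n              # dr/dn(y), also scale invariant
    T = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            T[i, j] = (C3 / r) * ((C4 * (i == j) + 2.0 * dr[i] * dr[j]) * drdn
                                  - C4 * (dr[i] * n[j] - dr[j] * n[i]))
    return T

nu_bar = 0.3                                   # illustrative value
x, y = np.array([0.2, 0.1]), np.array([1.0, -0.5])
n = np.array([0.6, 0.8])                       # a unit normal at y
alpha_hat = 3.0
T1 = T_matrix(x, y, n, nu_bar)
T2 = T_matrix(alpha_hat * x, alpha_hat * y, n, nu_bar)
```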

By the above discussion, we obtain the following relation between (P.1) and (P.2):

$$\begin{aligned} \mathbf u (\mathbf{x })=\mathbf u '(\hat{\alpha }\mathbf{x }),~\mathbf x \in D, \end{aligned}$$

provided that \(\mathbf{g }(\mathbf{x })=\mathbf{g }'(\hat{\alpha }\mathbf{x })\) for \(\mathbf{x }\in \partial D\).

2.2 The Invariant MFS

One popular method for solving problems (P.1) and (P.2) is the MFS. The classical MFS approximates \(\mathbf u \) in the following form:

$$\begin{aligned} \mathbf u ^{(M)}_C(\mathbf{x })=\sum _{j=1}^{M}\mathbf U (\mathbf{x },\mathbf{y }^j)\mathbf a ^j \end{aligned}$$
(14)

where \(\mathbf{y }^j, j=1,\ldots ,M,\) are source points chosen suitably on the boundary of a domain \(B\) with \(\overline{D}\subset B\). The coefficients \(a_j\) and \(b_j\) are determined from the boundary conditions, which are imposed as interpolation conditions at the collocation points.

However, the approximation \(\mathbf u ^{(M)}_C(\mathbf{x })\) of \(\mathbf u \) constructed in this way lacks an essential property, i.e., the invariance under trivial coordinate changes in the problem description such as scaling of coordinates:

$$\begin{aligned} \mathbf{x }\rightarrow \hat{\alpha }\mathbf{x },\quad \mathbf{y }^j\rightarrow \hat{\alpha }\mathbf{y }^j \end{aligned}$$
(15)

and the origin shift for the boundary data:

$$\begin{aligned} \mathbf{g }(\mathbf{x })\rightarrow \mathbf{g }(\mathbf{x })+\mathbf a \quad (\mathbf a :\mathrm {constant}~ \mathrm {vector}). \end{aligned}$$
(16)

To be more specific, we expect that the approximation \(\mathbf u ^{(M)}_C(\mathbf{x })\) should transform as

$$\begin{aligned} \mathbf u ^{(M)}_C(\mathbf{x })\rightarrow \mathbf u '^{(M)}_C(\hat{\alpha } \mathbf{x }),\end{aligned}$$
(17)
$$\begin{aligned} \mathbf u ^{(M)}_C(\mathbf{x })\rightarrow \mathbf u '^{(M)}_C(\mathbf{x })+\mathbf a \end{aligned}$$
(18)

under the transformations (15) and (16); this, however, is not the case for \(\mathbf u ^{(M)}_C(\mathbf{x })\). Indeed, for \(\mathbf{u '}^{(M)}_C=({u'_1}^{(M)}_C,{u'_2}^{(M)}_C)^{\top }\), we have

$$\begin{aligned}&{u'_1}^{(M)}_C(\hat{\alpha }\mathbf{x })\nonumber \\&\quad =C_1\sum _{j=1}^{M}\left( a_j\left( C_2\ln |\hat{\alpha }(\mathbf{x }-\mathbf{y }^j)|+\frac{(x_1-y_1^j)^2}{|\mathbf{x }-\mathbf{y }^j|^2}\right) +b_j\frac{(x_1-y_1^j)(x_2-y_2^j)}{|\mathbf{x }-\mathbf{y }^j|^2}\right) \nonumber \\&\quad =\sum _{j=1}^{M}\left( C_1C_2 a_j\ln |\hat{\alpha }|+a_j\mathbf U _{11}(\mathbf{x }-\mathbf{y }^j)+b_j\mathbf U _{12}(\mathbf{x }-\mathbf{y }^j)\right) \nonumber \\&\quad =C_1C_2 \sum _{j=1}^{M}a_j\ln |\hat{\alpha }|+\sum _{j=1}^{M}\left( a_j\mathbf U _{11}(\mathbf{x }-\mathbf{y }^j)+b_j\mathbf U _{12}(\mathbf{x }-\mathbf{y }^j)\right) , \end{aligned}$$

and

$$\begin{aligned} {u'_2}^{(M)}_C(\hat{\alpha }\mathbf{x })=C_1C_2 \sum _{j=1}^{M}b_j\mathrm{ln}|\hat{\alpha }|+\sum _{j=1}^{M}\left( a_j\mathbf U _{21}(\mathbf{x }-\mathbf{y }^j)+b_j\mathbf U _{22}(\mathbf{x }-\mathbf{y }^j)\right) . \end{aligned}$$

Thus, we have

$$\begin{aligned} \mathbf u '^{(M)}_C(\hat{\alpha }\mathbf{x })=C_1C_2 \sum _{j=1}^{M}\mathbf a ^j{\ln |\hat{\alpha }|}+\mathbf u ^{(M)}_C( \mathbf{x }). \end{aligned}$$
(19)

In general, \(\sum _{j=1}^{M}\mathbf a ^j\ne 0\), and thus \(\mathbf u '^{(M)}_C(\hat{\alpha }\mathbf{x })\ne \mathbf u ^{(M)}_C( \mathbf{x })\). Property (18), i.e. invariance under the origin shift, is easily satisfied, while (17) is not.

From Sect. 2.1, we know that the analytic solution \(\mathbf u \) has the invariance property \(\mathbf u (\mathbf{x })=\mathbf u '(\hat{\alpha }\mathbf{x })\). We expect the meshless method to share this invariance property; more precisely, we should have \(\mathbf u '^{(M)}_C(\hat{\alpha }\mathbf{x })=\mathbf u ^{(M)}_C( \mathbf{x })\) whenever the collocation points \(\mathbf{x }\rightarrow \hat{\alpha }\mathbf{x }\) and the source points \(\mathbf{y }^j\rightarrow \hat{\alpha }\mathbf{y }^j\) are simultaneously stretched or compressed by the same scale factor. From (19), however, we know that this is not the case for the traditional MFS \(\mathbf u _C^{(M)}(\mathbf{x })\) defined by (14). Based on this consideration, the invariant MFS assumes an approximation of the following form:

$$\begin{aligned} \mathbf u ^{(M)}_I(\mathbf{x })=\mathbf c +\sum _{j=1}^{M}\mathbf U (\mathbf{x },\mathbf{y }^j)\mathbf a ^j, \end{aligned}$$
(20)

where \(\mathbf c =(c_1,c_2)^\top \). However, the appended constant in (20) is not the essential point; the major difference between the conventional MFS and the invariant MFS is the following constraint on the coefficients \(\mathbf a ^j\):

$$\begin{aligned} \sum _{j=1}^{M}\mathbf a ^j=\mathbf 0 . \end{aligned}$$
(21)

From (19), we see that under the constraint (21) the invariant approximation \(\mathbf u _{I}^{(M)}\) enjoys the expected invariance properties.
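Relation (19) and its cancellation under the constraint (21) are easy to check numerically. A minimal Python/NumPy sketch (illustrative constants and random source points; the paper's own implementation is in MATLAB):

```python
import numpy as np

G, nu_bar = 1.0, 0.3          # illustrative material constants
C1 = 1.0 / (8.0 * np.pi * G * (1.0 - nu_bar))
C2 = 4.0 * nu_bar - 3.0

def U(x, y):
    """Fundamental matrix U(x - y) from the text."""
    d = x - y
    r = np.linalg.norm(d)
    return C1 * (C2 * np.log(r) * np.eye(2) + np.outer(d, d) / r**2)

rng = np.random.default_rng(0)
M = 6
ys = rng.normal(size=(M, 2)) + 5.0           # source points away from x
a = rng.normal(size=(M, 2))                  # coefficients a^j (sum != 0 in general)
x = np.array([0.3, -0.2])
alpha = 2.5

# Classical MFS sum and its scaled version: relation (19) predicts a shift.
u = sum(U(x, ys[j]) @ a[j] for j in range(M))
u_scaled = sum(U(alpha * x, alpha * ys[j]) @ a[j] for j in range(M))
shift = C1 * C2 * np.log(alpha) * a.sum(axis=0)

# Enforcing constraint (21) by removing the mean of the a^j kills the shift.
a0 = a - a.mean(axis=0)                      # now sum_j a0^j = 0
u0 = sum(U(x, ys[j]) @ a0[j] for j in range(M))
u0_scaled = sum(U(alpha * x, alpha * ys[j]) @ a0[j] for j in range(M))
```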

We now use the invariant MFS to solve the Cauchy problem. In the collocation method, the coefficients are determined from the interpolation conditions at \(N\) collocation points \(\mathbf{x }^i\) uniformly distributed on \(\varGamma \), by solving the system of \(2N+1\) vector equations consisting of (21) and

$$\begin{aligned} \left\{ \begin{array}{ll} \mathbf u ^{(M)}_I(\mathbf{x }^i)=\mathbf f (\mathbf{x }^i),\\ T_n\mathbf u ^{(M)}_I(\mathbf{x }^i)=\mathbf t (\mathbf{x }^i) \end{array} \right. \end{aligned}$$
(22)

We recast (21) and (22) as a system of \(4N+2\) linear algebraic equations with \(2M+2\) unknowns, which can be generically written as

$$\begin{aligned} \fancyscript{A}\mathbf d =\mathbf h , \end{aligned}$$
(23)

where the matrix \(\fancyscript{A}\), the unknown vector \(\mathbf d \) and the right-hand side \(\mathbf h \) are given by

$$\begin{aligned}&\fancyscript{A}_{(k-1)N+i,(l-1)M+j}=\mathbf U _{kl}(\mathbf{x }^i,\mathbf{y }^j),\quad \fancyscript{A}_{(k+1)N+i,(l-1)M+j}=\mathbf T _{kl}(\mathbf{x }^i,\mathbf{y }^j),\\&\fancyscript{A}_{i,2M+1}=1,\quad \fancyscript{A}_{N+i,2M+2}=1,\quad \fancyscript{A}_{4N+l,(l-1)M+j}=1,\\&\mathbf d _j=a_j,\quad \mathbf d _{M+j}=b_j,\quad \mathbf d _{2M+k}=c_k,\\&\mathbf h _{(k-1)N+i}=u_k(\mathbf{x }^i),\quad \mathbf h _{(k+1)N+i}=t_k(\mathbf{x }^i),\quad \mathbf h _{4N+l}=0,\\&i=1,\ldots ,N, \quad j=1,\ldots ,M,\quad k,l=1,2. \end{aligned}$$

The other elements of \(\fancyscript{A}\) are zero.

In order to determine the coefficients \(a_j\) and \(b_j\) uniquely, the number \(N\) of boundary collocation points and the number \(M\) of source points must satisfy \(2N\ge M\). However, the system of linear algebraic equations (23) cannot be solved reliably by direct methods, such as the least squares method, since such an approach would produce a highly unstable solution. Thus, we solve it by a regularization method.

3 Regularization Method for Solving the Linear Algebraic Equations

In this section, we use the Tikhonov regularization method with the Morozov discrepancy principle (see [12]) to solve the system (23).

In general, the right-hand side is contaminated by measurement noise, and we consider the perturbed equations

$$\begin{aligned} \fancyscript{A}\mathbf d ^\delta =\mathbf h ^\delta . \end{aligned}$$
(24)

Here, \(\mathbf{h }^\delta \) is the measured noisy data, satisfying

$$\begin{aligned} \mathbf{h}_i^\delta =\mathbf{h}_i+\delta \mathrm {rand}(i)\mathbf{h}_i, \end{aligned}$$

where \(\delta \) denotes the noise level.
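This multiplicative noise model can be realized as in the following Python sketch, where \(\mathrm {rand}(i)\) is interpreted as a uniformly distributed random number in \([0,1)\), as produced by MATLAB's rand (an assumption; a sign-symmetric or Gaussian draw could be substituted):

```python
import numpy as np

def add_noise(h, delta, rng):
    """Perturb each entry: h_i^delta = h_i + delta * rand(i) * h_i."""
    return h * (1.0 + delta * rng.random(h.shape))

rng = np.random.default_rng(1)
h = np.array([1.0, -2.0, 3.0])
h_delta = add_noise(h, 0.05, rng)   # delta = 5% noise level
```

By construction, the entrywise perturbation is bounded by \(\delta |\mathbf{h}_i|\).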

It is well known that the linear system (24) is ill-conditioned. An important tool for the analysis of rank-deficient and discrete ill-posed problems is the singular value decomposition (SVD). The matrix \(\fancyscript{A}\) in (24) is decomposed as

$$\begin{aligned} \fancyscript{A}=\varvec{W}\varvec{\Sigma }{\varvec{V}^\top }. \end{aligned}$$

Here \(\varvec{W}\) and \(\varvec{V}\), of dimensions \((4N+2)\times (2M+2)\) and \((2M+2)\times (2M+2)\), respectively, are matrices whose columns are the orthonormal vectors \(\varvec{W}_i\) and \(\varvec{V}_i\), the left and right singular vectors, and \(\varvec{\Sigma }=\mathrm{diag}(\lambda _1,\ldots ,\lambda _{2M+2})\) is a diagonal matrix with non-negative diagonal elements in decreasing order, the singular values of \(\fancyscript{A}\). Such a decomposition makes explicit the degree of ill-conditioning of \(\fancyscript{A}\), through the ratio between the maximum and the minimum singular values, and also allows us to write the solution of the system (24) in the following form:

$$\begin{aligned} \mathbf d =\sum _{i=1}^{2M+2}\frac{\varvec{W}_i^\top \mathbf h }{\lambda _i}\varvec{V}_i. \end{aligned}$$

For ill-conditioned matrices, some singular values are typically extremely small; this expansion thus clearly brings out the difficulty of such ill-posed discrete problems. The degree of ill-conditioning is measured by the condition number

$$\begin{aligned} \hbox {Cond}(\fancyscript{A})=\frac{\lambda _1}{\lambda _{2M+2}}. \end{aligned}$$
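In NumPy, the SVD expansion of the least-squares solution and the condition number read, for instance (a random matrix stands in for the \((4N+2)\times (2M+2)\) MFS system; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(10, 6))          # stand-in for the MFS collocation matrix
h = rng.normal(size=10)

# Thin SVD: A = W * diag(s) * V^T, singular values s in decreasing order.
W, s, Vt = np.linalg.svd(A, full_matrices=False)

# Solution as the SVD expansion  d = sum_i (W_i^T h / lambda_i) V_i.
d_svd = Vt.T @ ((W.T @ h) / s)

# Cond(A) = lambda_max / lambda_min.
cond = s[0] / s[-1]
```

Division by the small trailing singular values is precisely where noise in \(\mathbf h \) gets amplified, motivating the regularization below.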

For a given and fixed regularization parameter \(\alpha \), the Tikhonov regularization of system (24) is to solve the following equations

$$\begin{aligned} (\alpha I + \fancyscript{A}^\top \fancyscript{A})\mathbf d _\alpha ^\delta =\fancyscript{A}^\top \mathbf h ^\delta ,\quad \alpha >0. \end{aligned}$$
(25)

By introducing the regularization operator

$$\begin{aligned} R_\alpha :=(\alpha I + \fancyscript{A}^\top \fancyscript{A})^{-1}\fancyscript{A}^\top , \quad \mathrm {for}\ \alpha >0, \end{aligned}$$

we obtain the regularized solution \(\mathbf d ^\delta _{\alpha }=R_\alpha \mathbf h ^\delta \) of equations (24).
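For fixed \(\alpha \), computing \(\mathbf d ^\delta _{\alpha }\) amounts to one solve with the symmetric positive definite matrix \(\alpha I + \fancyscript{A}^\top \fancyscript{A}\); a Python sketch (the random system is an illustrative stand-in):

```python
import numpy as np

def tikhonov(A, h, alpha):
    """Solve (alpha*I + A^T A) d = A^T h for the regularized solution d_alpha."""
    n = A.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ h)

rng = np.random.default_rng(3)
A = rng.normal(size=(12, 5))
h = rng.normal(size=12)
d_small = tikhonov(A, h, 1e-12)   # for tiny alpha, close to least squares
```

As \(\alpha \rightarrow 0\) the regularized solution approaches the least-squares solution, while larger \(\alpha \) increases the residual \(\Vert \fancyscript{A}\mathbf d _\alpha -\mathbf h \Vert \) in exchange for stability.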

A suitable regularization parameter \(\alpha \) is crucial for the accuracy of the regularized solution. We refer to [18] for investigations of regularization parameters and error estimates. A search for the best parameter is beyond the scope of this work, and we choose the regularization parameter \(\alpha \) by the Morozov discrepancy principle. The computation of \(\alpha (\delta )\) can be carried out with Newton’s method [33]. The derivative of the mapping \(\alpha \mapsto \mathbf d ^\delta _{\alpha }\) is given by the solution of the equation \((\alpha I + \fancyscript{A}^\top \fancyscript{A})\frac{d}{d\alpha }\mathbf d ^\delta _{\alpha }=-\mathbf d ^\delta _{\alpha }\), as is easily seen by differentiating (25) with respect to \(\alpha \). With the regularization parameter \(\alpha ^*\) fixed, we solve (24) to obtain the regularized solution \(\mathbf d ^\delta _{\alpha ^*}=R_{\alpha ^*} \mathbf h ^\delta \). Substituting the approximation \(\mathbf d ^\delta _{\alpha ^*}\) of \(\mathbf d \) into (20), we obtain the approximation of \(\mathbf u ^{(M)}_I(\mathbf{x })\).
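The derivative formula above can be verified and used in a Newton step for the discrepancy equation \(F(\alpha )=\Vert \fancyscript{A}\mathbf d _\alpha ^\delta -\mathbf h ^\delta \Vert ^2-\bar{\delta }^2=0\). A hedged Python sketch (random stand-in system; the discrepancy target of 10% of \(\Vert \mathbf h \Vert \) is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(12, 5))
h = rng.normal(size=12)
n = A.shape[1]
AtA, Ath = A.T @ A, A.T @ h

def d_alpha(alpha):
    """Tikhonov solution d_alpha of (alpha*I + A^T A) d = A^T h."""
    return np.linalg.solve(alpha * np.eye(n) + AtA, Ath)

alpha = 0.5
d = d_alpha(alpha)
# Derivative of alpha -> d_alpha, from (alpha*I + A^T A) d' = -d_alpha.
dprime = np.linalg.solve(alpha * np.eye(n) + AtA, -d)

# One Newton update for F(alpha) = ||A d_alpha - h||^2 - target^2.
target = 0.1 * np.linalg.norm(h)       # assumed discrepancy level
res = A @ d - h
F = res @ res - target**2
Fprime = 2.0 * res @ (A @ dprime)      # chain rule, using d' above
alpha_next = alpha - F / Fprime
```

Iterating this update (with a safeguard keeping \(\alpha >0\)) yields \(\alpha (\delta )\) in practice.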

In order to analyze the accuracy of the numerical results, we introduce the errors \(err(u_i)\) and \(err(t_i)\), \(i=1,2\), given by

$$\begin{aligned} err(u_i)=\frac{\left\{ \sum _{l=1}^{L}\left\| {u_i}^{(an)}(\mathbf{x }^l)-{u_i}^{({\alpha })}(\mathbf{x }^l)\right\| _2^2\right\} ^{\frac{1}{2}}}{\left\{ \sum _{l=1}^{L}\left\| {u_i}^{(an)}(\mathbf{x }^l)\right\| _2^2\right\} ^{\frac{1}{2}}}, \end{aligned}$$

and

$$\begin{aligned} err(t_i)=\frac{\left\{ \sum _{l=1}^{L}\left\| {t_i}^{(an)}(\mathbf{x }^l)-{t_i}^{({\alpha })}(\mathbf{x }^l)\right\| _2^2\right\} ^{\frac{1}{2}}}{\left\{ \sum _{l=1}^{L}\left\| {t_i}^{(an)}(\mathbf{x }^l)\right\| _2^2\right\} ^{\frac{1}{2}}}, \end{aligned}$$

where \(\mathbf{x }^l, l=1,\ldots ,L\), are \(L\) uniformly distributed points on the underspecified boundary \(\varGamma _1\). Here \(\mathbf u ^{(an)}\) and \(\mathbf t ^{(an)}\) are the exact displacement and traction vectors, respectively, and \(\mathbf u ^{({\alpha })}\) and \(\mathbf t ^{({\alpha })}\) are the numerical displacement and traction vectors obtained for the value \({\alpha }\) of the regularization parameter. When \({u_i}^{(an)}=0\) or \({t_i}^{(an)}=0\), we use

$$\begin{aligned} err(u_i)=\left[ \frac{1}{L}\sum _{l=1}^{L}\left\| {u_i}^{({\alpha })}(\mathbf{x }^l)\right\| _2^2\right] ^{\frac{1}{2}}, \end{aligned}$$

or

$$\begin{aligned} err(t_i)=\left[ \frac{1}{L}\sum _{l=1}^{L}\left\| \frac{{t_i}^{({\alpha })}(\mathbf{x }^l)}{10^{10}}\right\| _2^2\right] ^{\frac{1}{2}}. \end{aligned}$$
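These error measures are ordinary relative \(\ell ^2\) norms over the \(L\) test points, with a fallback to a root-mean-square absolute error when the exact quantity vanishes identically. A Python sketch (omitting the \(10^{10}\) traction scaling, which the text applies only in the zero-traction case):

```python
import numpy as np

def rel_err(exact, numeric):
    """Relative l2 error over the test points; falls back to an RMS
    absolute error when the exact quantity vanishes identically."""
    exact = np.asarray(exact, float)
    numeric = np.asarray(numeric, float)
    denom = np.linalg.norm(exact)
    if denom == 0.0:
        return np.sqrt(np.mean(numeric**2))
    return np.linalg.norm(exact - numeric) / denom
```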

4 Numerical Examples and Discussion

In this section, we report some examples that demonstrate the effectiveness of our algorithm. The algorithm is implemented in MATLAB. We consider an isotropic linear elastic medium characterized by the material constants \(G=3.35\times 10^{10}\hbox {N}/{\hbox {m}^2}\) and \(\nu =0.34\), corresponding to a copper alloy. We obtain \(\lambda \) and \(\mu \) from \(\mu =G\) and \(\lambda =\frac{2G\nu }{1-2\nu }\); for plane stress, \(\nu \) is replaced by \(\frac{\nu }{1+\nu }\).

In the forthcoming examples, we choose the source points on \(\partial B=\{{\mathbf{x }}\in \mathbb {R}^2:~x_1^2+x_2^2=25^2\}\) (except in Example 3), offset from the real boundary.

Example 1

[Simply connected, smooth geometry] Consider the case in which the exact solution of the Navier equation is

$$\begin{aligned} u_i(x_1,x_2)=\frac{\lambda +\mu }{2\mu (2\lambda +\mu )}\sigma _0x_i,\quad i=1,2,\quad \sigma _0=1.5\times 10^{10}\hbox {N}/{\hbox {m}^2}. \end{aligned}$$

In this example, we set \(D=\{{\mathbf{x }}\in \mathbb {R}^2:~x_1^2+x_2^2<1\}\), and \(\varGamma \) is a portion of the unit circle with the parametric representation \(\varGamma (\varTheta )=\{{\mathbf{x }}\in \partial D:~0\le \theta ({\mathbf{x }})\le \varTheta \}\), where \(\theta ({\mathbf{x }})\) is the polar angle of \({\mathbf{x }}=(\cos \theta , \sin \theta )\). Here, \(t_j=\sigma _0n_j\).

First, let \(\varTheta =\frac{\pi }{4}\), so that \(\varGamma _1=\partial D\backslash \varGamma \) is the remaining portion of the unit circle. Figure 1 shows the numerical solutions with different levels of noise for Example 1 with 40 source points and 20 collocation points; it can be seen that the numerical solutions are stable approximations to the exact solution.

Fig. 1

The exact solution and the numerical solutions with different levels of noise for Example 1

Table 1 compares the accuracy errors of the invariant MFS and the classical MFS for Example 1 with \(\varTheta =\frac{\pi }{2}\), using 80 source points and 40 collocation points. It can be seen that both the invariant MFS and the classical MFS are effective for the Cauchy problem.

Table 1 Compare the accuracy errors with the MFS for Example 1 with \(\varTheta =\frac{\pi }{2}\)

In order to investigate the sensitivity of the method with respect to the measure of the boundary on which the Cauchy data are available, we set \(\varTheta =\frac{\pi }{2}, \pi , \frac{3\pi }{2}\). Tables 2, 3 and 4 present the regularization parameters and the relative \(L^2\) errors for the numerical solutions on the boundary \(\varGamma _1\) with 120 source points and 60 collocation points. From these tables, it is readily seen that the numerical approximation for larger \(\varTheta \) is more stable and accurate. It should also be noted that the numerical solution converges to the exact solution as the level of noise decreases.

Table 2 Relative \(L^2\) errors on boundary \(\varGamma _1\) for different noise levels for Example 1 with \(\varTheta =\frac{\pi }{2}\)
Table 3 Relative \(L^2\) errors on boundary \(\varGamma _1\) for different noise levels for Example 1 with \(\varTheta ={\pi }\).
Table 4 Relative \(L^2\) errors on boundary \(\varGamma _1\) for different noise levels for Example 1 with \(\varTheta =\frac{3\pi }{2}\).

Moreover, in order to analyze the numerical stability of the proposed method, we investigate the influence of the amount of noise added to the Cauchy data on the numerical results. Tables 5 and 6 show the numerical results with 80 source points and 40 collocation points in the following cases: (i) given noisy displacements and exact tractions; (ii) given exact displacements and noisy tractions. From the tables, we can see that noisy displacements do not affect the numerical results for the tractions, whereas noisy tractions do affect the numerical results for the displacements. We believe the reason is that the noisy tractions affect the regularization parameters. More precisely, the regularization parameter chosen by the Morozov discrepancy principle is fixed in case (i), i.e. \(\alpha =7.14\times 10^{-17}\), whereas in case (ii) it varies with the noise level, i.e. \(\alpha =8.42\times 10^{-5}, 2.11\times 10^{-4}, 3.24\times 10^{-4}\).
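The selection of \(\alpha \) by the Morozov discrepancy principle can be sketched generically: one seeks the \(\alpha \) whose Tikhonov solution has a residual norm matching the noise level \(\delta \). The following is an illustrative sketch (bisection on \(\log \alpha \) for a generic linear system), not the paper's implementation:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Solve min ||A c - b||^2 + alpha ||c||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def morozov_alpha(A, b, delta, lo=1e-16, hi=1.0, iters=60):
    """Bisect on log(alpha) for the alpha whose residual norm is ~ delta."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(A @ tikhonov_solve(A, b, mid) - b)
        if r > delta:
            hi = mid   # residual too large: decrease alpha
        else:
            lo = mid   # residual below delta: increase alpha
    return np.sqrt(lo * hi)
```

This exploits the fact that the residual norm is monotonically increasing in \(\alpha \), so the discrepancy equation has a unique root once \(\delta \) lies between the minimal and maximal residuals.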

Table 5 Relative \(L^2\) errors for noisy displacements and exact tractions for Example 1 with \(\varTheta =\frac{\pi }{2}\)
Table 6 Relative \(L^2\) errors for exact displacements and noisy tractions for Example 1 with \(\varTheta =\frac{\pi }{2}\)

Example 2

[Non-convex, smooth geometry] In the previous example, the exact solution is simple and \(D\) is a rather simple unit disk. In this example, we consider the case in which the exact solution to the Navier equation is

$$\begin{aligned} u_1(x_1,x_2)=\frac{\lambda +\mu }{\mu (3\lambda +2\mu )}\sigma _0x_1x_2,\quad u_2(x_1,x_2)=-\frac{\lambda +\mu }{\mu (3\lambda +2\mu )}\sigma _0[x_1^2-1+\nu x_2^2]. \end{aligned}$$

\(\partial D\) is a non-convex kite-shaped curve described by the parametric representation

$$\begin{aligned} {\mathbf{x }}(t)=0.5({\cos } t+0.65{\cos } (2t)-0.65,1.5 {\sin } t), ~~t\in [0,2\pi ], \end{aligned}$$

see Fig. 2. We give the Cauchy data on \(\varGamma =\{{\mathbf{x }}\in \partial D:~0\le t\le \pi \}\), and \(\varGamma _1=\partial D\backslash \varGamma \). Here, \(t_1=\sigma _0x_2n_1, t_2=0\).
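The kite-shaped boundary can be discretized directly from its parametric form. A minimal sketch; placing the source points on a dilated copy of the boundary is a common MFS choice assumed here for illustration, not prescribed by the text:

```python
import numpy as np

def kite_boundary(n):
    """n collocation points on the kite x(t), t in [0, 2*pi)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x1 = 0.5 * (np.cos(t) + 0.65 * np.cos(2.0 * t) - 0.65)
    x2 = 0.5 * 1.5 * np.sin(t)
    return np.column_stack([x1, x2])

def kite_sources(n, scale=2.0):
    """n source points on a dilated copy of the boundary (assumed choice)."""
    return scale * kite_boundary(n)
```

The Cauchy portion \(\varGamma \) then corresponds to the parameter range \(0\le t\le \pi \), i.e. the first half of the points when \(n\) is even.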

Fig. 2 The solution domain in Example 2

Figure 3 shows the numerical solutions with different levels of noise for Example 2 with 120 source points and 60 collocation points; it can be seen that the numerical solutions also give stable approximations to the exact solution, even for this non-convex domain.

Fig. 3 The exact solution and the numerical solutions with different levels of noise for Example 2

Example 3

[Doubly connected, smooth geometry] Consider the case in which the exact solution to the Navier equation is

$$\begin{aligned} u_1(x_1,x_2)=\frac{3\lambda +2\mu }{4\mu (2\lambda +\mu )}\sigma _0x_1,\quad u_2(x_1,x_2)=-\frac{\lambda }{4\mu (2\lambda +\mu )}\sigma _0x_2. \end{aligned}$$
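This displacement field can be verified symbolically to satisfy the Cauchy–Navier equation (4); a quick check with sympy:

```python
import sympy as sp

x1, x2, lam, mu, s0 = sp.symbols('x1 x2 lambda mu sigma_0', positive=True)
u1 = (3*lam + 2*mu) / (4*mu*(2*lam + mu)) * s0 * x1
u2 = -lam / (4*mu*(2*lam + mu)) * s0 * x2

div_u = sp.diff(u1, x1) + sp.diff(u2, x2)
lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)

# Residuals of mu*Laplace(u) + (lam + mu)*grad(div u); both vanish
res1 = sp.simplify(mu * lap(u1) + (lam + mu) * sp.diff(div_u, x1))
res2 = sp.simplify(mu * lap(u2) + (lam + mu) * sp.diff(div_u, x2))
print(res1, res2)  # 0 0
```

Since the field is linear in \(x_1, x_2\), both the Laplacian and the gradient of the (constant) divergence vanish identically, for any \(\lambda \) and \(\mu \).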

In this example, let \(D=\{\mathbf{x }\in \mathbb {R}^2: ~1<x_1^2+x_2^2<16\}\), and let the Cauchy boundary be \(\varGamma =\{\mathbf{x }\in \mathbb {R}^2: ~x_1^2+x_2^2=16\}\); \(\varGamma _1=\partial D\backslash \varGamma \) is then the inner circle \(\{\mathbf{x }\in \mathbb {R}^2: ~x_1^2+x_2^2=1\}\). Here, \(t_1=\sigma _0n_1, t_2=0\). The source points are located on \(\partial B=\{\mathbf{x }\in \mathbb {R}^2: ~x_1^2+x_2^2=6^2\}\cup \{\mathbf{x }\in \mathbb {R}^2: x_1^2+x_2^2=0.5^2\}\).

Figure 4 shows the numerical solutions with different levels of noise for Example 3 with 120 source points and 60 collocation points. Figure 5 gives the corresponding error profiles. From these figures, it can be seen that the numerical solution converges to the exact solution as the level of noise decreases.

Fig. 4 The exact solution and the numerical solutions with different levels of noise for Example 3

Fig. 5 Error profiles with different levels of noise for Example 3

Example 4

[L-type piecewise smooth geometry] The exact solution to the Navier equation is

$$\begin{aligned} u_1(x_1,x_2)=\frac{\lambda +\mu }{\mu (3\lambda +2\mu )}\sigma _0x_1x_2,\quad u_2(x_1,x_2)=-\frac{\lambda +\mu }{\mu (3\lambda +2\mu )}\sigma _0[x_1^2-1+\nu x_2^2]. \end{aligned}$$

In this example, we set \(D=\{\mathbf{x }\in \mathbb {R}^2: ~-1<x_1,x_2<1\}\), \(\varGamma =\{\mathbf{x }\in \partial D: ~x_2=-1\}\cup \{\mathbf{x }\in \partial D: ~x_1=-1\}\), and \(\varGamma _1=\partial D\backslash \varGamma \). We only present the numerical results on \(s(\mathbf{x })=\{\mathbf{x }\in \partial D: ~x_1=1\}\). Here, \(t_1=\sigma _0x_2n_1, t_2=0\).

Figure 6 shows the numerical solutions with different levels of noise for Example 4 with 120 source points and 60 collocation points. From the figure, it can be seen that the numerical solution converges to the exact solution as the level of noise decreases.

Fig. 6 The exact solution and the numerical solutions with different levels of noise for Example 4

In order to investigate the influence of the choice of \(B\), Fig. 7 gives the accuracy errors corresponding to the unknown boundary data as the radius \(R\) varies, with 120 source points and 60 collocation points. From this figure, we can see that these errors decrease as \(R\) becomes large.

Fig. 7 Error profiles with different \(R\) with \(\delta =0.01\) for Example 4

Table 7 gives the accuracy errors corresponding to the unknown boundary data for different regularization parameters for Example 4 with 200 source points and 100 collocation points. From Table 7, we can see that the minimum errors are attained for \(\alpha \in (10^{-8},10^{-7})\), an interval which contains the regularization parameter \({\alpha }=8.6\times {10^{-8}}\) chosen by the discrepancy principle.
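Such a sweep over candidate regularization parameters can be sketched generically as follows; the system matrix and data here are placeholders, not the paper's collocation system:

```python
import numpy as np

def error_sweep(A, b, c_exact, alphas):
    """Relative L2 error of the Tikhonov solution for each candidate alpha."""
    n = A.shape[1]
    errs = []
    for alpha in alphas:
        c = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
        errs.append(np.linalg.norm(c - c_exact) / np.linalg.norm(c_exact))
    return np.array(errs)

alphas = 10.0 ** np.arange(-12.0, -2.0)  # candidate parameters 1e-12 ... 1e-3
```

For noisy data on an ill-conditioned system the resulting error curve is typically U-shaped, and the discrepancy-principle parameter should land near its minimum, as Table 7 illustrates.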

Table 7 The accuracy errors corresponding to the unknown boundary data with different regularization parameters for Example 4, \(\delta =0.01\)

Figure 8 shows the accuracy errors corresponding to the unknown boundary data with \(\alpha =4\times 10^{-7}\) as the number of collocation points \(N\) increases. Moreover, Fig. 9 shows the numerical solutions for the different numbers of collocation points \(N=20,~40,~60\), where the regularization parameters are chosen by the discrepancy principle. These results indicate that accurate numerical solutions for the displacement and traction vectors on the underspecified boundary can be obtained using a relatively small number of collocation points \(N\).

Fig. 8 Error profiles with different \(N\) with \(\delta =0.01\) for Example 4

Fig. 9 The exact solution and the numerical solutions obtained using collocation points \(N=20, 40, 60\) for Example 4 with \(\delta =0.01\)

Finally, in order to illustrate the ill-posedness of this problem clearly, we compute the condition number Cond(\(\fancyscript{A}\)), which is found to be \(3.3\times 10^{33}\). Since the minimum singular value is close to zero, this problem is ill-posed, and the introduction of a regularization parameter is necessary.
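The condition number can be read off from the singular values of the collocation matrix. A minimal illustration on a nearly singular toy matrix (\(\fancyscript{A}\) itself is not reproduced here):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12]])   # nearly rank-deficient toy matrix
s = np.linalg.svd(A, compute_uv=False)
cond = s[0] / s[-1]  # blows up as the smallest singular value approaches 0
```

A huge ratio of largest to smallest singular value means that noise in the data is amplified enormously by a naive solve, which is exactly why Tikhonov regularization is required.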

5 Conclusions

In this paper, we study the application of the invariant MFS to the Cauchy problem in two-dimensional linear elasticity, based on the Tikhonov regularization method with the Morozov discrepancy principle. Through the use of the double-layer potential function, we establish the invariance property for a problem with two different descriptions. Then, we formulate the MFS with an added constant and an additional constraint to accommodate this invariance property. Moreover, the invariant MFS retains the advantages of the classical MFS. Solutions on several kinds of domains are numerically tested. From the examples, we can see that both the modified MFS and the classical MFS are effective and stable, whilst the modified MFS additionally preserves a very basic natural property of the analytic solution, namely, invariance under trivial coordinate changes in the problem description.

This method can also be used to deal with other problems in linear elasticity, such as the inverse boundary determination problem and boundary value problems; these will be reported elsewhere.