Introduction

Discontinuity mapping is a fundamental task for rock mass characterisation (Barton et al. 1974; ISRM 1978; Priest 1993; Kulatilake and Wu 1984; Mauldon 1998; Zhang and Einstein 1998; Li et al. 2014; Zhu et al. 2014). Rock discontinuities in outcrops can appear in the form of planar surfaces or embedded traces as shown in Fig. 1. Collecting geological information on rock discontinuities is difficult, time-consuming, and often dangerous when using traditional field mapping and hand-held direct measuring devices (Ferrero et al. 2009).

Fig. 1

Exposed rock discontinuities in outcrops: a planar surfaces and b embedded traces

Digital image technology can be used to collect geological information in steep, inaccessible areas. The relative 2-D geometric relations of discontinuity traces can be extracted using general-purpose image-processing methods that account for changes in pixel intensities (Franklin et al. 1988; Crosta 1997; Reid and Harrison 2000; Hadjigeorgiou et al. 2003; Lemy and Hadjigeorgiou 2003). Due to the lack of 3-D perspective, these approaches typically provide trace length estimation, trace probability statistics, and discrete fracture networks in two dimensions, but they cannot measure the orientation and spatial distribution of discontinuities in three dimensions.

Currently, several techniques are available for creating high-resolution 3-D representations of a rock surface, such as photogrammetry (Roncella et al. 2004; Sturzenegger and Stead 2009; Tannant 2015) and terrestrial and aerial LiDAR (Gigli and Casagli 2011). Automated extraction of rock discontinuities is typically based on identifying the change in the principal curvatures of the vertices (Umili et al. 2013), searching for the best-fit planes (Otoo et al. 2011; Gigli and Casagli 2011), Normal Tensor Voting theory (Li et al. 2016) or moving a sample window or cube through a point cloud using geometric regional trend analysis software such as PlaneDetect (Lato et al. 2009; Lato and Vöge 2012; Vöge et al. 2013), DiAna 3D (Gigli and Casagli 2011), Split-FX (Slob et al. 2005; Slob et al. 2007) or Coltop 3D (Jaboyedoff et al. 2007). These methods are mainly suitable for exposed planar surfaces of discontinuities as shown in Fig. 1a. However, when the rock face is dominated by embedded discontinuity traces as shown in Fig. 1b, existing software is typically unsuccessful in extracting the discontinuity orientations.

This paper proposes a methodology for automated discontinuity mapping by identifying the 2-D discontinuity traces in image data and linking these features to their 3-D point coordinates in a point cloud acquired by terrestrial laser scanning or photogrammetry. The proposed method exploits the complementary advantages of point cloud and image data: discontinuity traces are detected visually in the image data, and their spatial coordinates are acquired digitally by fusing the point cloud and image data.

Methodology

A terrestrial laser scanner can acquire millions of highly accurate points to create a 3-D point cloud representation of a rock face. Most terrestrial laser scanners also feature a camera fixed in a coaxial orientation relative to the scanner that can synchronously take photographs to record 2-D digital images of the rock surface. Another approach to gathering similar data is the use of multiple camera stations and photogrammetry software to process the acquired images. Using either workflow in the field, the resulting data consist of a large set of point coordinates in an (x, y, z) coordinate system and images recording the spectral information in the scene, expressed as a matrix in an RGB format. The proposed methodology takes advantage of both types of data and involves three steps, as illustrated in Fig. 2.

  • Part I: The pixels corresponding to a discontinuity trace are extracted from 2-D images. This process involves the following sub-steps: convert an RGB image into a grey-scale image, extract pixels that outline traces, remove pixels that are ‘noise’ pixels, and extract the trace skeleton using a hybrid global and local threshold method.

  • Part II: The 3-D coordinates corresponding to the pixels that are classified as a trace are extracted by a coordinate transformation between the image coordinates and the point cloud coordinates. The camera lens calibration parameters and the camera orientation are used to establish the coordinate transform relationship between the image coordinates and point cloud coordinates.

  • Part III: The best-fit plane through the 3-D points corresponding to each individual trace is found. The dip, dip direction, trace length, and location of each discontinuity are acquired by analysing the geometrical features of the 3-D coordinates for each trace.

Fig. 2

Flowchart of the proposed method

Convert image to grey-scale format

Digital images of the rock face in an RGB format are converted into a grey-scale format as an M × N matrix of grey values (pixels) containing the luminance information, as shown in Fig. 3 and as expressed by Eq. 1.

Fig. 3

a Image showing a joint trace and b close-up 70 × 70 pixel image of the joint trace after grey-scale conversion

$$ F\left(M,N\right)=\left[\begin{array}{ccc}f\left(1,1\right)& \cdots & f\left(1,N\right)\\ {}\vdots & \ddots & \vdots \\ {}f\left(M,1\right)& \cdots & f\left(M,N\right)\end{array}\right] $$
(1)

where F(M, N) is an M × N matrix of grey values (pixels) and f(i, j) is the grey value of a pixel. Digital images in an RGB format are converted into a grey-scale format according to the equation, grey value = (R × 30 + G × 59 + B × 11 + 50)/100, where R, G and B are the RGB values.
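As a minimal sketch of this conversion (assuming an 8-bit RGB image already loaded as a NumPy array; the function name and the use of NumPy are illustrative, not from the original):

```python
import numpy as np

def rgb_to_grey(rgb):
    """Convert an M x N x 3 uint8 RGB image to an M x N grey-value matrix
    using grey = (30R + 59G + 11B + 50) / 100 as stated in the text."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    grey = (30 * r + 59 * g + 11 * b + 50) // 100   # the +50 rounds to the nearest integer
    return grey.astype(np.uint8)
```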

Extract outlines of traces as black pixels

Discontinuity traces observed in the field and in images tend to be darker than the rock surrounding them. Using this feature, as shown in Fig. 3, image segmentation methods can be used to extract the outline of traces from image data. However, many traditional image segmentation methods used for discontinuity traces, such as the local threshold algorithm, Otsu algorithm, maximum entropy algorithm, and local mean algorithm (Lane et al. 2000; Cui 2002), do not work well when confronted with complex trace distributions in an image. Therefore, a new image segmentation method based on a hybrid global and local threshold algorithm is proposed in this study. This method involves three steps (a code sketch follows the step descriptions):

  • Step 1: Set local light grey colour to white (local threshold algorithm)

The minimum grey value in a local region of an image is used to determine whether a pixel remains unchanged or is replaced by a white value. The minimum grey value in a local 7 × 7 pixel grid window centred on the point (x, y) is min(x, y), where x, y are the pixel grid coordinates in the image. The average grey value in a larger 70 × 70 pixel grid window, also centred on the point (x, y), is ave(x, y). The pixel grey value at the centre of the local grid f(x, y) is changed to 0 (white) if min(x, y) < ave(x, y); otherwise, the value remains unchanged (Eq. 2). This process is repeated until all pixels within the image have been analysed. Note that, for each cycle, the algorithm is applied to the original image. When this process is completed for all pixels in the image, a new image is generated from the grey values of each local grid centre.

$$ \left\{\begin{array}{c}f\left(x,y\right)=0\kern3.25em \mathit{\min}\left(x,y\right)< ave\left(x,y\right)\\ {}f\left(x,y\right)=f\left(x,y\right)\kern2.25em \mathit{\min}\left(x,y\right)\ge ave\left(x,y\right)\end{array}\right. $$
(2)
  • Step 2: Otsu algorithm (global threshold algorithm)

The Otsu method relies on the maximum between-class variance between the background region and the target region. The total number of pixels in an image is N while n(g) is the total number of pixels with a grey value g, as shown in Eq. 3. The maximum grey value in the image is gmax. The probability distributions of pixel values Pg in the histogram of an image are represented by Eq. 4. The threshold T0 of the Otsu algorithm can be expressed as shown in Eq. 5.

$$ N=\sum \limits_{g=1}^{g_{max}}{n}_{(g)} $$
(3)
$$ {P}_g=\frac{n_{(g)}}{N}\kern2em \sum \limits_{g=1}^{g_{max}}{P}_g=1 $$
(4)
$$ {T}_0=\underset{1\le g\le {g}_{max}}{\mathrm{argmax}}\left\{{\sigma}_B^2(g)\right\} $$
(5)

where \( {\sigma}_B^2(g) \) is the between-class variance for a candidate threshold g.

  • Step 3: Binarization

Finally, any non-white pixel is replaced with a value of 0 (black) to create the final image containing only black and white pixels.
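A minimal sketch of the three steps is given below, assuming the grey-scale image is a NumPy array with 0 = black and 255 = white. The scipy.ndimage filters and skimage's Otsu routine are substitutions for whichever implementations the authors used, and, because the text labels 0 as white, the direction of the Step 1 comparison here follows the stated purpose of that step (pixels whose 7 × 7 minimum is not darker than the 70 × 70 mean are treated as background and set to white).

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def hybrid_threshold(grey, local_win=7, mean_win=70):
    """Hybrid local/global segmentation of dark traces on a lighter rock face."""
    grey = grey.astype(np.float64)

    # Step 1 (local threshold): minimum over a 7x7 window and mean over a 70x70
    # window, both centred on each pixel; where no locally dark feature is
    # present, the pixel is treated as background and set to white.
    local_min = ndimage.minimum_filter(grey, size=local_win)
    local_mean = ndimage.uniform_filter(grey, size=mean_win)
    step1 = np.where(local_min < local_mean, grey, 255.0)

    # Step 2 (global threshold): Otsu threshold on the locally pre-cleaned image.
    t0 = threshold_otsu(step1.astype(np.uint8))

    # Step 3 (binarisation): remaining dark pixels become black trace pixels,
    # everything else becomes white background.
    return np.where(step1 <= t0, 0, 255).astype(np.uint8)
```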

Taking Fig. 1b as an example, Fig. 4 compares the trace outlines extracted using traditional methods and the proposed method. Figure 4a and b show that using the local threshold or the Otsu method alone yields an image that does not adequately identify the important joint traces. Figure 4c shows that the main traces are intermittent when the maximum entropy method is used, and Fig. 4d shows that the secondary traces are not distinct when the local mean method is used. Figure 4e shows that both the main and the secondary traces are connected and distinct with the proposed method, making it superior to the other methods.

Fig. 4

Results from using different trace extraction methods: a local threshold, b Otsu, c maximum entropy, d local mean, and e proposed method

Image clean-up and refinement

Closer examination of Fig. 4e shows that there are isolated white pixels within regions where actual discontinuity traces occur, isolated black pixels in regions of solid rock, and irregular boundaries at the edges of the traces. Therefore, further image processing is used to ‘clean’ the image. The following three steps are used to eliminate the isolated white and black pixels and to smooth the irregular boundaries; a code sketch follows the step descriptions.

  • Step 1: A morphological dilation algorithm is used to eliminate isolated white pixels in regions where there is a trace and isolated black pixels in regions where intact rock exists. Dilation is defined as follows: the dilated image A ⊕ B is the set of all offsets x for which the structuring element B, after reflection and translation by x, still intersects the original image A, as indicated in Eq. 6 (Cui 2002).

$$ A\oplus B=\left\{x\ |\ {\left(\hat{B}\right)}_x\cap A\ne \varnothing \right\} $$
(6)

Specific processing: a small 3 × 3 pixel window is used as the structuring element B and is scanned over every pixel of the original image A; if any of the surrounding eight pixels is black, the target pixel is set to black in the output image; otherwise, it is set to white. Eventually, the black regions of the original binary image are enlarged. This process eliminates isolated white or black pixels and expands the coverage of black pixels representing traces.

  • Step 2: Small clusters of black pixels that are smaller than a minimum detection threshold for trace lengths are removed. This image filtering is accomplished using the bwareaopen function in Matlab. The criterion for eliminating a small cluster is determined by testing on the actual image.

  • Step 3: A median filtering algorithm is used to smooth the irregular edges of traces using a moving neighbourhood window of 5 × 5 pixels. The median filtering algorithm is defined as follows: n numbers in a given array [a1, a2, ⋯, an] are arranged in order of their value, and the median is labelled med[a1, a2, ⋯, an]. When n is an odd number, the median is the value at the intermediate position of the ordered array; when n is an even number, the median is the average of the two values in the middle positions of the ordered array. After an array [x(i, j)]M × N is median filtered by the neighbourhood window An, the resulting array y(i, j) can be expressed as indicated in Eq. 7 (Cui 2002).

$$ y\left(i,j\right)=\underset{A_n\left(i,j\right)}{\mathrm{med}}\left[x\left(i,j\right)\right] $$
(7)

where An(i, j) is the neighbourhood window of the pixel (i, j); n is the number of pixels in the window.
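A compact sketch of the three clean-up steps is given below, assuming `binary` is the black-and-white trace image from the previous section (0 = trace, 255 = background). The scipy.ndimage calls stand in for the Matlab routines named in the text (e.g. bwareaopen), and the minimum cluster size is illustrative.

```python
import numpy as np
from scipy import ndimage

def clean_trace_image(binary, min_cluster=50, median_win=5):
    """Step 1: dilate traces, Step 2: drop small clusters, Step 3: median smooth."""
    traces = binary == 0                      # boolean mask of black trace pixels

    # Step 1: 3x3 dilation of the trace pixels; this fills isolated white pixels
    # inside traces and thickens the trace coverage.
    traces = ndimage.binary_dilation(traces, structure=np.ones((3, 3), dtype=bool))

    # Step 2: remove clusters of trace pixels smaller than the minimum detection
    # threshold (equivalent in spirit to Matlab's bwareaopen).
    labels, n = ndimage.label(traces)
    sizes = ndimage.sum(traces, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_cluster) + 1)
    traces = traces & keep

    # Step 3: 5x5 median filter to smooth irregular trace boundaries.
    smooth = ndimage.median_filter(traces.astype(np.uint8), size=median_win) > 0

    return np.where(smooth, 0, 255).astype(np.uint8)
```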

Figure 5a–d shows, respectively, the initial image and the images resulting from each of the three abovementioned steps. The white spots, black spots and irregular boundaries are eliminated or smoothed to create a simplified image of the traces.

Fig. 5

Image processing: a initial image, b isolated white pixels turned black, c small clusters of black pixels turned white, and d trace outlines smoothed

Thin trace outline

A trace-thinning operation is used to repeatedly remove pixels on the outside boundary of a trace, while maintaining the original shape and connectivity of the pixels defining the trace, until the trace is one pixel wide (Kapur et al. 1989; Mauldon 1998). The pixel-thinning operation uses a moving neighbourhood window of 3 × 3 pixels and deletes the target pixel only when the following rules are satisfied (a sketch using a library thinning routine follows the rules):

  (1) The number of white dots among the eight pixels of its neighbourhood window ranges from 2 to 6;

  (2) The distribution of white dots among the eight pixels of its neighbourhood window is continuous;

  (3) The upper neighbourhood pixel, the left neighbourhood pixel and the right neighbourhood pixel are not all white dots;

  (4) The upper neighbourhood pixel, the left neighbourhood pixel and the below neighbourhood pixel are not all white dots.
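Rather than re-implementing these deletion rules, a comparable one-pixel-wide skeleton can be obtained with a standard library thinning routine; the sketch below uses skimage.morphology.skeletonize as a stand-in and assumes the cleaned image from the previous section (0 = trace, 255 = background).

```python
import numpy as np
from skimage.morphology import skeletonize

def thin_traces(cleaned):
    """Reduce trace outlines to a 1-pixel-wide skeleton."""
    traces = cleaned == 0                 # foreground = black trace pixels
    skeleton = skeletonize(traces)        # iterative boundary-pixel removal
    return np.where(skeleton, 0, 255).astype(np.uint8)
```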

Remove small branches

The removal of small branches further cleans the trace outlines to focus only on the larger features. This is accomplished by identifying and removing short trace branches that extend from a main branch. The procedure involves endpoint identification, branch point identification and branch removal; a code sketch follows the description of these steps.

  (1) Endpoint identification

A 3 × 3 pixel window is used to evaluate every pixel in an image. When the centre pixel is white and only one pixel in the window is black, this pixel is identified as an endpoint.

  (2) Branch point identification

A 3 × 3 pixel window is used to evaluate every pixel in an image. When the centre pixel is black and the number of white pixels in the window is 3 or 4, a 5 × 5 pixel window is then used to detect and record the number of white pixels in the ring of the 5 × 5 window that lies outside the 3 × 3 window. If this number equals the number of white pixels inside the 3 × 3 window, the detected point is a branch point.

  (3) Branch removal

The trace outline is separated into connected segments using the previously identified end points and branch points. A segment between two branch points is classified as a main branch, and a segment between an endpoint and a branch point is classified as a secondary branch as shown in Fig. 6a. The number of pixels Ni within each secondary branch is calculated. A desired minimum branch size, Nt, is then used to remove all smaller branches by replacing black with white pixels in the image, as shown in Fig. 6b. Figure 7 demonstrates the results of trace thinning and removal of small branches. For the branch removal in this example, the minimum number of pixels in a branch was set to 10.
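The sketch below illustrates the endpoint/branch-point logic and the pruning criterion on a one-pixel-wide skeleton stored as a boolean array (True = trace). It uses a neighbour-counting convolution instead of the explicit 3 × 3 and 5 × 5 window tests described above, so it is an approximation of the procedure, not a transcription of it.

```python
import numpy as np
from scipy import ndimage

# 3x3 kernel that counts the 8 neighbours of each skeleton pixel.
NEIGHBOUR_KERNEL = np.array([[1, 1, 1],
                             [1, 0, 1],
                             [1, 1, 1]])

def prune_small_branches(skeleton, n_t=10):
    """Remove secondary branches (endpoint-to-branch-point segments) shorter than n_t pixels."""
    skel = skeleton.astype(bool)
    neighbours = ndimage.convolve(skel.astype(np.uint8), NEIGHBOUR_KERNEL,
                                  mode='constant', cval=0)
    endpoints = skel & (neighbours == 1)       # exactly one skeleton neighbour
    branch_pts = skel & (neighbours >= 3)      # three or more skeleton neighbours

    # Cutting the skeleton at the branch points separates main and secondary branches.
    segments, n = ndimage.label(skel & ~branch_pts, structure=np.ones((3, 3)))
    pruned = skel.copy()
    for seg_id in range(1, n + 1):
        seg = segments == seg_id
        # A secondary branch contains an endpoint; drop it if it is too short.
        if (seg & endpoints).any() and seg.sum() < n_t:
            pruned &= ~seg
    return pruned
```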

Fig. 6

Endpoint, branch point and small branch removal: a connected pixels and b identification of branch for removal

Fig. 7

Extraction of a trace skeleton: a trace thinning and b removal of small branches

Link black pixel location in an image to 3-D coordinates in the field

Many LiDAR scanners also collect a photograph from a camera mounted on the scanner. It is possible to find the 3-D coordinates of pixel locations in an acquired image by establishing a transformation relationship between the coordinate systems for the point cloud data and image data. This transformation is accomplished using a coaxial rotation between the scanner and the camera (Hu 2009; Zhao 2011; Yan 2014).

Camera and scanner parameters

In general, a camera fixed to a LiDAR scanner uses a lens with a fixed focal length. The internal orientation parameters, or lens calibration, for the camera can be obtained from the technical specifications or by performing a camera calibration (Wang 2013) as shown in Fig. 8. The external orientation parameters of the camera for each photograph depend on a transformation matrix between the camera coordinate system and the local coordinate system. The transformation between the scanner and the camera involves four coordinate systems: the image coordinate system (ICS), the camera coordinate system (CCS), the scanner coordinate system (SCS) and the engineering coordinate system (ECS). There is a coaxial rotational relationship between the scanner and the camera. As an example, Eqs. 8, 9 and 10 show the transformation matrices for a Riegl LMS-Z420i scanner.

Fig. 8

Calibration parameters of camera

  • Mounting Matrix: Coordinate transformation matrix between CCS and SCS.

$$ M=\left[\begin{array}{cccc}-0.003492804& -0.000179553& 0.999993884& -0.247474454\\ -0.022285786& -0.999751608& -0.00025735& 0.0141491500\\ 0.9997455400& -0.022286548& 0.003487935& -0.022638301\\ 0.0000000000& 0.0000000000& 0.000000000& 1.0000000000\end{array}\right] $$
(8)
  • COP Matrix: Coordinate system rotation matrix between CCS at the shooting position of camera and CCS at the initial position of camera.

$$ COP=\left[\begin{array}{cccc}-0.993114537& -0.117147411& 0.000000000& 0.0000000000\\ 0.1171474110& -0.993114537& 0.000000000& 0.0000000000\\ 0.0000000000& 0.0000000000& 1.000000000& 0.0000000000\\ 0.0000000000& 0.0000000000& 0.000000000& 1.0000000000\end{array}\right] $$
(9)
  • SOP Matrix: Coordinate system rotation and translation matrix between SCS and ECS.

$$ SOP=\left[\begin{array}{cccc}0.291751063& 0.952970224& 0.082030910& -53.699865\\ -0.926339611& 0.302876887& -0.223965437& -72.74738\\ -0.238277659& -0.010646327& 0.971138720& 3.1823090\end{array}\right] $$
(10)

Coordinate system conversion

The engineering coordinates are indicated as Pw(Xw, Yw, Zw), the camera coordinates are indicated as Pu(Xu, Yu, Zu) and the image coordinates are indicated as (x, y). The relationship between ECS and CCS is expressed by Eq. 11 (Hu 2009).

$$ \left[\begin{array}{c}{X}_u\\ {Y}_u\\ {Z}_u\\ 1\end{array}\right]=\left[\begin{array}{cc}R& t\\ {0}^T& 1\end{array}\right]\left[\begin{array}{c}{X}_w\\ {Y}_w\\ {Z}_w\\ 1\end{array}\right]=\left[\begin{array}{cccc}{r}_{11}& {r}_{12}& {r}_{13}& {t}_1\\ {r}_{21}& {r}_{22}& {r}_{23}& {t}_2\\ {r}_{31}& {r}_{32}& {r}_{33}& {t}_3\\ 0& 0& 0& 1\end{array}\right]\left[\begin{array}{c}{X}_w\\ {Y}_w\\ {Z}_w\\ 1\end{array}\right]=M\cdot {COP}^{-1}\cdot {SOP}^{-1}\left[\begin{array}{c}{X}_w\\ {Y}_w\\ {Z}_w\\ 1\end{array}\right] $$
(11)

where t is the translation vector giving the position of the origin of ECS in CCS, and R is the orthogonal rotation matrix, which satisfies Eq. 12.

$$ \kern2em \left\{\begin{array}{c}{r}_{11}^2+{r}_{12}^2+{r}_{13}^2=1\\ {}{r}_{21}^2+{r}_{22}^2+{r}_{23}^2=1\\ {}{r}_{31}^2+{r}_{32}^2+{r}_{33}^2=1\end{array}\right. $$
(12)

where rij is an element in the rotation matrix R.

The relationship between CCS and ICS is expressed by Eq. 13 (Hu 2009).

$$ {Z}_u\left[\begin{array}{c}x\\ y\\ 1\end{array}\right]=\left[\begin{array}{cccc}f& 0& 0& 0\\ 0& f& 0& 0\\ 0& 0& 1& 0\end{array}\right]\left[\begin{array}{c}{X}_u\\ {Y}_u\\ {Z}_u\\ 1\end{array}\right] $$
(13)

where f is the lens focal length.

The conversion between ECS and ICS is expressed by Eq. 14 (Hu 2009).

$$ {Z}_u\left[\begin{array}{c}x\\ y\\ 1\end{array}\right]=\left[\begin{array}{cccc}f& 0& 0& 0\\ 0& f& 0& 0\\ 0& 0& 1& 0\end{array}\right]\left[\begin{array}{c}{X}_u\\ {Y}_u\\ {Z}_u\\ 1\end{array}\right]=\left[\begin{array}{cccc}f& 0& 0& 0\\ 0& f& 0& 0\\ 0& 0& 1& 0\end{array}\right]M\cdot {COP}^{-1}\cdot {SOP}^{-1}\left[\begin{array}{c}{X}_w\\ {Y}_w\\ {Z}_w\\ 1\end{array}\right] $$
(14)

where all variables in Eq. 14 are the same as in Eqs. 9, 10, 11, 12, and 13.
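The chain of Eqs. 11, 13 and 14 can be sketched as follows, assuming the Mounting, COP and SOP matrices are available as 4 × 4 homogeneous NumPy arrays (the SOP matrix padded with a final [0 0 0 1] row), that the focal length f is expressed in the same units as the image coordinates, and that lens distortion (Fig. 8) has already been corrected.

```python
import numpy as np

def project_point(Pw, M, COP, SOP, f):
    """Project an engineering-coordinate point Pw = (Xw, Yw, Zw) to image coordinates (x, y)."""
    pw_h = np.append(np.asarray(Pw, dtype=float), 1.0)          # homogeneous ECS point

    # Eq. 11: ECS -> CCS via the mounting, COP and SOP matrices.
    pu_h = M @ np.linalg.inv(COP) @ np.linalg.inv(SOP) @ pw_h
    Xu, Yu, Zu = pu_h[:3]

    # Eq. 13: perspective projection with focal length f (CCS -> ICS).
    x = f * Xu / Zu
    y = f * Yu / Zu
    return x, y
```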

Trace coordinates

After data matching, the points representing a trace in the point cloud data and the trace pixels in the image will not exactly overlap in a common coordinate system. The coordinates along a trace can be obtained by interpolation within a triangular irregular network constructed from a point cloud using an orthogonal projection plane relative to the scanner.

  • Step 1: Data preparation.

The joint traces extracted from the image data are discretised into a series of finite discrete points according to a distance interval related to the point density of the point cloud data. Using Eqs. 11 to 14, the point cloud data are projected onto a plane perpendicular to the viewing direction of the scanner/camera. Delaunay triangulation is then used to generate a triangular irregular network (Zhou and Liu 2006).

  • Step 2: Interpolation to find 2-D trace coordinates.

It is assumed that the coordinates of three vertex points in a mesh unit ∆V1V2V3 are V1(x1, y1), V2(x2, y2) and V3(x3, y3) and the coordinate of a discrete point on the trace line is P(x, y) in ICS. In Fig. 9, the point P may lie within or outside the triangular mesh unit. Three area coordinates L1, L2 and L3, for point P are defined as follows (Zeng 2014):

$$ {L}_1=\frac{A_1}{A};{L}_2=\frac{A_2}{A};{L}_3=\frac{A_3}{A} $$
(15)
Fig. 9

Geometric relationship between an interpolated point P and a triangular mesh unit: a within the unit and b outside the unit

Eq. 15 can be transformed into Eq. 16

$$ \left.\begin{array}{c}{L}_1=\frac{\left({x}_3-{x}_2\right)\left(y-{y}_2\right)-\left(x-{x}_2\right)\left({y}_3-{y}_2\right)}{2A}\\ {L}_2=\frac{\left({x}_1-{x}_3\right)\left(y-{y}_3\right)-\left(x-{x}_3\right)\left({y}_1-{y}_3\right)}{2A}\\ {L}_3=\frac{\left({x}_2-{x}_1\right)\left(y-{y}_1\right)-\left(x-{x}_1\right)\left({y}_2-{y}_1\right)}{2A}\end{array}\right\} $$
(16)

where A, A1, A2 and A3, respectively, denote the areas of ∆V1V2V3, ∆V2V3P, ∆V3V1P and ∆V1V2P.

In Fig. 9, the vertex points in a mesh unit are numbered counter-clockwise. When P lies inside a mesh unit, L1 > 0, L2 > 0 and L3 > 0. When P lies outside of a mesh unit, at least one of the three area coordinates is less than zero. Therefore, whether the discrete point P lies inside or outside of a triangular grid ∆V1V2V3 can be determined by checking if L1, L2 and L3 are all greater than zero.

  • Step 3: Interpolation to find 3-D trace coordinates.

The z coordinate of a discrete point P can then be found by area-coordinate interpolation from the coordinates V1(x1, y1, z1), V2(x2, y2, z2) and V3(x3, y3, z3) of the three vertex points of the mesh unit containing P in ECS, together with the coordinate P(x, y) of the discrete point in ICS, i.e. z = L1z1 + L2z2 + L3z3 (Zeng 2014).
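A sketch of Steps 1 to 3 follows, assuming the point cloud has already been projected to 2-D coordinates `xy` with associated values `z`, and that the discretised trace points lie in the same projection plane; scipy.spatial.Delaunay supplies the triangular irregular network and its barycentric (area-coordinate) machinery, which corresponds to Eqs. 15 and 16.

```python
import numpy as np
from scipy.spatial import Delaunay

def interpolate_trace_z(xy, z, trace_pts):
    """Interpolate z for trace points from a TIN built on the projected point cloud.

    xy        : (N, 2) array of projected point-cloud coordinates
    z         : (N,)   array of corresponding depth/elevation values
    trace_pts : (K, 2) array of discretised trace points in the same plane
    Returns an array of K interpolated z values (NaN where a point falls outside the TIN).
    """
    tin = Delaunay(xy)
    simplex = tin.find_simplex(trace_pts)             # containing triangle, -1 if outside
    z_out = np.full(len(trace_pts), np.nan)

    inside = simplex >= 0
    verts = tin.simplices[simplex[inside]]             # (k, 3) vertex indices

    # Area (barycentric) coordinates L1, L2, L3 of each trace point in its triangle.
    X = tin.transform[simplex[inside], :2]             # (k, 2, 2) affine transforms
    Y = trace_pts[inside] - tin.transform[simplex[inside], 2]
    b = np.einsum('kij,kj->ki', X, Y)                  # first two barycentric coordinates
    L = np.c_[b, 1.0 - b.sum(axis=1)]                  # (k, 3) with L1 + L2 + L3 = 1

    # z = L1*z1 + L2*z2 + L3*z3 for the three vertices of the containing triangle.
    z_out[inside] = np.einsum('ki,ki->k', L, z[verts])
    return z_out
```

Points whose area coordinates are not all positive fall outside the TIN and are returned as NaN, which matches the inside/outside test described above.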

Discontinuity characterisation

After extracting the discontinuity trace coordinates using a combination of the point cloud and image data, the geometric parameters and spatial position of the discontinuity can be determined. To ensure that these parameters are meaningful, the coordinate system of the point cloud should match the geodetic coordinate system used at the site of interest.

Dip and dip direction

For a discrete set of points representing the jth trace, the coordinates of any three points are \( \left({x}_{j{k}_1},{y}_{j{k}_1},{z}_{j{k}_1}\right) \), \( \left({x}_{j{k}_2},{y}_{j{k}_2},{z}_{j{k}_2}\right) \) and \( \left({x}_{j{k}_3},{y}_{j{k}_3},{z}_{j{k}_3}\right) \), where k1, k2 and k3 are any three integers between 1 and n. These coordinates can be found by a ray-scanning method. A plane incorporating any three points can be represented as a triangular facet composed of \( \left({x}_{j{k}_1},{y}_{j{k}_1},{z}_{j{k}_1}\right) \), \( \left({x}_{j{k}_2},{y}_{j{k}_2},{z}_{j{k}_2}\right) \) and \( \left({x}_{j{k}_3},{y}_{j{k}_3},{z}_{j{k}_3}\right) \), as shown in Fig. 10. To recover a reasonably accurate planar orientation, the maximum interior angle of this triangular facet, φjmax, should be less than 150°. This threshold guarantees that the three points on a discontinuity trace form a well-conditioned triangle from which the plane equation can be determined; if the maximum interior angle exceeds the threshold, the three points are nearly collinear, which degrades the accuracy of the calculation or prevents the plane equation of the discontinuity from being determined at all.

Fig. 10

Trace plane determined by three points

When φjmax ≤ 150°, the normal vector of the jth trace triangular facet is calculated as follows:

$$ \overrightarrow{n}=\left|\begin{array}{ccc}i& j& k\\ {}\left({x}_{j{k}_2}-{x}_{j{k}_1}\right)& \left({y}_{j{k}_2}-{y}_{j{k}_1}\right)& \left({z}_{j{k}_2}-{z}_{j{k}_1}\right)\\ {}\left({x}_{j{k}_3}-{x}_{j{k}_1}\right)& \left({y}_{j{k}_3}-{y}_{j{k}_1}\right)& \left({z}_{j{k}_3}-{z}_{j{k}_1}\right)\end{array}\right| $$
(17)

let,

$$ {a}_j=\left({y}_{j{k}_2}-{y}_{j{k}_1}\right)\left({z}_{j{k}_3}-{z}_{j{k}_1}\right)-\left({y}_{j{k}_3}-{y}_{j{k}_1}\right)\left({z}_{j{k}_2}-{z}_{j{k}_1}\right) $$
$$ {b}_j=\left({x}_{j{k}_3}-{x}_{j{k}_1}\right)\left({z}_{j{k}_2}-{z}_{j{k}_1}\right)-\left({x}_{j{k}_2}-{x}_{j{k}_1}\right)\left({z}_{j{k}_3}-{z}_{j{k}_1}\right) $$
$$ {c}_j=\left({x}_{j{k}_2}-{x}_{j{k}_1}\right)\left({y}_{j{k}_3}-{y}_{j{k}_1}\right)-\left({x}_{j{k}_3}-{x}_{j{k}_1}\right)\left({y}_{j{k}_2}-{y}_{j{k}_1}\right) $$

The dip and dip direction of the jth trace are represented by

$$ \mathrm{Dip}:\kern0.5em {\uptheta}_j=\operatorname{arccos}\left(\frac{c_j}{\sqrt{a_j^2+{b}_j^2+{c}_j^2}}\right) $$
(18)
$$ \mathrm{Dip}\ \mathrm{direction}:\kern0.5em {\varnothing}_j=\left\{\begin{array}{ll}\operatorname{arccos}\left(\frac{a_j}{\sqrt{a_j^2+{b}_j^2}}\right)& \left({b}_j\ge 0\right)\\ {180}^{\circ }+\operatorname{arccos}\left(\frac{-{a}_j}{\sqrt{a_j^2+{b}_j^2}}\right)& \left({b}_j<0\right)\end{array}\right. $$
(19)
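A direct transcription of Eqs. 17 to 19 is sketched below; how the resulting azimuth maps onto a geographic dip direction depends on how the x and y axes of the engineering coordinate system are oriented, which is not specified here.

```python
import numpy as np

def dip_and_dip_direction(p1, p2, p3):
    """Dip and dip direction (degrees) of the plane through three trace points (x, y, z),
    with z vertical; Eqs. 17-19."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    a, b, c = np.cross(p2 - p1, p3 - p1)       # normal vector components (Eq. 17)

    # Eq. 18; abs() keeps the dip in 0-90 degrees (Eq. 18 assumes an upward normal).
    dip = np.degrees(np.arccos(abs(c) / np.sqrt(a**2 + b**2 + c**2)))

    # Eq. 19; for a horizontal plane (a = b = 0) the dip direction is undefined.
    horiz = np.hypot(a, b)
    if horiz == 0.0:
        return dip, 0.0
    if b >= 0:
        dip_dir = np.degrees(np.arccos(a / horiz))
    else:
        dip_dir = 180.0 + np.degrees(np.arccos(-a / horiz))
    return dip, dip_dir
```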

Trace length

  (1) Full trace length

The trace length is the distance between the endpoints of a trace that are projected onto the orthogonal projection plane relative to the scanner/camera. The two endpoints on the jth trace are (xj1, yj1, zj1) and (xjn, yjn, zjn), and the coordinates of two projection points corresponding to two endpoints are \( \left({x}_{j1}^{\prime },{y}_{j1}^{\prime },{z}_{j1}^{\prime}\right) \) and \( \left({x}_{jn}^{\prime },{y}_{jn}^{\prime },{z}_{jn}^{\prime}\right) \). The coordinate transformation relation between two endpoints and two projection points can be represented by

$$ \left[{x}_{j1}^{\prime}\kern0.5em {y}_{j1}^{\prime}\kern0.5em {z}_{j1}^{\prime}\right]=\left[\begin{array}{ccc}{x}_{j1}& {y}_{j1}& {\mathrm{z}}_{j1}\end{array}\right]\bullet \left(\begin{array}{ccc}{\alpha}_1& {\alpha}_2& {\alpha}_3\end{array}\right) $$
(20)
$$ \left[{x}_{jn}^{\prime}\kern0.5em {y}_{jn}^{\prime}\kern0.5em {z}_{jn}^{\prime}\right]=\left[\begin{array}{ccc}{x}_{jn}& {y}_{jn}& {\mathrm{z}}_{jn}\end{array}\right]\bullet \left(\begin{array}{ccc}{\alpha}_1& {\alpha}_2& {\alpha}_3\end{array}\right) $$
(21)

where α1, α2 and α3 are the three orthogonal unit vectors defining the coordinate system of the orthogonal projection plane relative to the scanner and the camera.

The full trace length of the jth trace can be expressed by

$$ {L}_j=\sqrt{{\left({x}_{jn}^{\prime }-{x}_{j1}^{\prime}\right)}^2+{\left({y}_{jn}^{\prime }-{y}_{j1}^{\prime}\right)}^2+{\left({z}_{jn}^{\prime }-{z}_{j1}^{\prime}\right)}^2} $$
(22)
  (2) Mean trace length

The mean trace length can be calculated using a circular window sampling method (Zhang and Einstein 1998). An automated trace sampling procedure (Umili et al. 2013) is used. Trace lines are projected onto an orthogonal projection plane relative to the scanner/camera. Then, the centres of circular windows with different radii are placed in locations with dense traces. The mean trace length is estimated using the following equation (Zhang and Einstein 1998):

$$ \hat{\mu}=\frac{\pi \left(\hat{N}+\hat{N_0}-\hat{N_2}\right)}{2\left(\hat{N}-{\hat{N}}_0+{\hat{N}}_2\right)} wr $$
(23)

where N is the total number of traces intersecting the window (N = N0 + N1 + N2), N0 is the number of traces with both ends censored, N1 is the number of traces with one end censored and one end observable, N2 is the number of traces with both ends observable, and wr is the radius of the window.
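A small helper implementing Eq. 23 (a sketch; the censoring counts must come from the circular-window sampling itself):

```python
import math

def mean_trace_length(n_traces, n0, n2, radius):
    """Zhang & Einstein (1998) circular-window estimator of mean trace length (Eq. 23).

    n_traces : total number of traces intersecting the window (N)
    n0       : traces with both ends censored by the window
    n2       : traces with both ends observable inside the window
    radius   : window radius (m)
    """
    return math.pi * (n_traces + n0 - n2) / (2.0 * (n_traces - n0 + n2)) * radius
```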

Case study

Data acquisition

Scans of a roadside slope along the Cao-Wu highway in China were captured. Point cloud data and image data were acquired with a Riegl LMS-Z420i terrestrial laser scanner and a Nikon D100 camera coaxially fixed to the top of the scanner. Scanning was performed from one observation station located approximately 25 m from the rock face. The point cloud had 189,215 points with a point resolution of approximately 5 mm. Figure 11 shows a photograph of the rock face and the 3-D point cloud. Vegetation cover is a significant obstacle to automated extraction of rock discontinuities by photogrammetry or LiDAR; when vegetation completely covers the discontinuity traces, it seriously affects the correctness and accuracy of the analysis. In this case study, the vegetation covered only a small portion of the discontinuity traces, and the raw point cloud data were filtered to remove points corresponding to vegetation, trimmed to cover a smaller area of the rock cut where the discontinuities are clearly exposed, and resampled to generate a convenient, uniformly distributed data set using Riscan Pro software.

Fig. 11

a Point cloud data and b digital image data of a roadside slope

Trace texture mapping and coordinate acquisition

The joint traces from the image data shown in Fig. 11b were extracted using the proposed methods, as shown in Fig. 12. Figure 12b shows the trace skeleton linked to the 3-D coordinates from the point cloud data, obtained by data matching and trace coordinate acquisition. The number of points representing the traces was reduced to simplify the geological analysis, as shown in Fig. 12c. Figure 12d shows that the individual traces are automatically identified, classified and labelled with different colours. The bwlabel function in Matlab labels individual traces according to the connectivity of the trace point sets in a binary image, and the label2rgb function is used to display each connected component in a different colour.
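For readers not working in Matlab, the same labelling step can be sketched with scipy.ndimage (the bwlabel/label2rgb calls above are the ones actually used; this equivalent is an assumption about alternative tooling):

```python
import numpy as np
from scipy import ndimage

def label_traces(skeleton):
    """Assign an integer label to each connected trace in a boolean skeleton image
    (8-connectivity); label 0 is the background."""
    labels, n_traces = ndimage.label(skeleton, structure=np.ones((3, 3)))
    return labels, n_traces
```

Each integer label can then be mapped to a distinct display colour, analogous to label2rgb.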

Fig. 12

Trace extraction from analysis of an image and corresponding point cloud. a Traces texture extraction b Data matching c Reducing the number of points d Extracted traces

Joint characteristics

For each extracted trace, the dip, dip direction, and trace length were obtained using the point coordinates along the traces. The joint properties are listed in Table 1. For visualisation purposes, the dip and dip direction were plotted on a lower-hemisphere, equal-angle, stereonet as shown in Fig. 13. The stereonet features three dominant clusters of poles, which reflect the three sets of discontinuities present in this section of the rock cut.

Table 1 Extraction of discontinuity characteristics compared with manual measurements
Fig. 13

Lower-hemisphere stereonet showing discontinuity orientations

According to the circular window sampling method, the 3-D trace coordinates are projected onto an orthogonal projection plane relative to the scanner. Two sets of circular windows with three different radii (4.3, 3.2 and 2.15 m) were placed in locations with dense traces to evaluate the mean trace length and the trace midpoint density, as shown in Fig. 14 and Table 2. The midpoint density of the traces ranges between 0.026 m−2 and 0.069 m−2, with a mean of 0.042 m−2. The mean trace length ranges from 5.02 m to 11.25 m, and the average value is 8.05 m.

Fig. 14

Traces sampled using circular windows of three radii at two locations

Table 2 Statistical calculation of mean trace length by circular window sampling method

Automatic mapping was found to yield discontinuity orientations equivalent to those afforded by manual mapping with a geological compass. However, the mapping efficiency can be much higher with the proposed method. The use of a combination of data from an image and a point cloud was an improvement over the use of point cloud data alone. Further research and testing will be conducted to ensure that this method works for a wide range of rock mass conditions and rock face topologies.

Conclusion

This paper describes a new method for automated 3-D mapping of discontinuity traces that appear in exposures of rock faces. The method offers an advantage over many existing methods by using both point cloud data and image data captured for a region of interest. It exploits the complementary strengths of the two data types: discontinuity traces are detected visually in the image data, and their spatial coordinates are acquired digitally by coordinate interpolation based on matching of the point cloud and image data. A series of image-processing techniques improves the trace extraction and reduces the effect of noisy data in detecting the traces. As demonstrated by the case study, the proposed method can efficiently and accurately extract embedded discontinuity traces from a fusion of point cloud data and image data. The method can complement and extend traditional survey methods.