1 Introduction

It is undoubtedly true that reality-capture technology has experienced exceptional progress in the last decade, with the result that huge amounts of data can now be acquired relatively quickly. Traditional barriers to acquiring and storing monumental data effectively and efficiently have thus been significantly lowered. However, this evolution in reality-capture techniques has not yet translated into generalized use beyond digital documentation. So far, most contributions in the field of Heritage have been limited to providing realistic and detailed digital models of archaeological objects and spaces.

A review of the literature and our own experience indicate that analyses of the acquired data that would provide information of real use to archaeologists and other Heritage experts continue to be performed manually, through a visual assessment of the element to be analysed. This process is often time-consuming, resulting in high economic costs. In addition, concerns frequently arise regarding the subjectivity and variability of the results obtained.

There is, therefore, a growing and undeniable need for methods that automatically extract useful information from the data currently obtained from the technology for capturing reality.

Nowadays, numerous groups at an international level are working on the generation of digital models of historical-artistic Heritage obtained with 3D scanners and colour cameras; however, only a minority has gone one step beyond the mere generation of models and is dedicated to extracting information from them. Amongst others, the Stanford University group in the USA has been a reference in this field since the end of the last century and the beginning of the present one (http://graphics.stanford.edu/). This group has developed projects recognized worldwide, such as The Digital Michelangelo Project [1] or the Digital Forma Urbis Romae [2]. The group led by Roberto Scopigno at the Istituto di Scienza e Tecnologie dell’Informazione in Pisa has also been working for more than 25 years on 3D digitization, surface reconstruction, and visualization and interpretation of Cultural Heritage (http://vcg.isti.cnr.it/) [3,4,5]. In Spain, several teams have approached the use of the laser scanner with an archaeological objective. Worthy of mention is the Research Group on Photogrammetry and Laser Scanner (GIFLE) of the Universitat Politècnica de València (http://gifle.webs.upv.es), which in recent years has produced a number of scientific publications in this regard [6, 7]. The authors of the present chapter also have extensive experience in the application of Computer Vision techniques and procedures to the investigation and conservation of Heritage. Examples include the restitution of Aeneas’s sculptural group, belonging to the collection of the National Museum of Roman Art in Mérida, which has had notable scientific and media impact [8, 9], the digitization of the Fori Porticus and the Temple of Diana of Augusta Emerita (Fig. 1), and the Theatre of Segobriga [10].

Fig. 1. Photo and 3D model of the temple of Diana of Augusta Emerita.

The use of thermography applied to the study of pieces of Heritage has already come a long way too, mainly due to its non-invasive nature, which allows inspecting surfaces without contact [11, 12]. In [13], the authors present a proposal for the detection of pathologies in monuments from thermal images taken at different points in time.

Nevertheless, there are currently few groups in the world that have combined both types of sensors, i.e. the 3D laser scanner and the thermal camera, to create a single model with geometry, colour and temperature information. In recent years, some authors have performed procedures of association between thermal images and point clouds oriented to studies of energy efficiency in modern buildings. Alba et al. [14] have developed a bi-camera system consisting of an infrared camera, a digital camera and a 3D laser scanner, with which they integrate the information and use it to locate, display and analyse anomalies in contemporary architecture. Borrmann et al. [15, 16] have created a 3D thermal modelling system using a LIDAR and a low-resolution infrared camera (160 × 120 pixels) mounted on an Irma3D mobile robot, designed to accelerate the scanning and recording processes. Due to the fixed position of the camera, its field of view is reduced, so scanning from a distance is necessary for tall buildings, which results in thermal data acquisition of very low resolution. Lagüela et al. [17] also introduced a method to project RGB and thermal data onto the surface of point cloud meshes of buildings with the aim of carrying out energy efficiency studies on them.

Approaches in the field of Heritage, however, have been rather scarce. Cabrelles et al. [18] proposed using 3D point clouds and cameras to provide photorealistic digital models and, in addition, to add thermal images that offer information from which the current state of preservation of a monument can be inferred. They show the results obtained in a tomb in Petra (Jordan).

Given our experience in laser sensors and their application to the field of Cultural Heritage, as well as in the use of thermal information in problems solved with Vision techniques, we considered the time was ripe to endeavour to develop a line of research such as the one described in this work.

The rest of the chapter is organized as follows. Section 2 presents some questions we aim to provide answers for as the research progresses. In Sect. 3 we take a look at this new research line, highlighting the originality of the proposal for the two branches of knowledge involved in it, Computer Vision and Archaeology. Section 4 is taken up with some initial results of our work. Finally, Sect. 5 presents the conclusions.

2 Some Questions We Intend to Answer

We propose an investigation from two different but complementary perspectives. On the one hand, we present the point of view of the Conservation and Restoration of Cultural Heritage, which is concerned with what may be termed the practical applications of the research. On the other hand, work in the field of Computer Vision provides tools to be used in basic scientific research in order to identify and correct, as far as possible, impact problems in the field of Archaeology. The ultimate goal is the generation of work strategies to enhance the quality of the service provided to the visitors of Heritage sites and, consequently, their satisfaction.

Regarding the first aspect, our main aim is to detect humidity, cracks and imperfections in structures (even those hidden from view) with a non-invasive/non-destructive technique. Achieving this would allow us to keep the archaeological artefact or monument under control and to proceed to a quick and effective intervention. In this sense, we pose the challenge of developing a basic tool for the detection of the described defects, to obtain results that complement, complete or even substitute those that until now have been obtained and controlled either manually or by using invasive methodologies. Essentially, our work has two purposes: firstly, we would like to demonstrate that it is possible to act on a work of art with minimum intervention, avoiding the currently used invasive techniques that consist of performing tests in several areas in order to detect possible damage. Secondly, we aim to set up a methodology with which, having determined the problems in advance (mainly those that cannot be seen), work on Heritage Buildings/pieces can be carried out more precisely and effectively, so that the buildings can be accessible to the public again as soon as possible.

To achieve the second objective, we need to develop techniques that combine the information obtained from a laser scanner, a colour camera and a thermal camera to provide a representation of the acquired object/scenario in which each 3D point has an associated colour value and temperature, with the intention of automatically generating semantically rich thermal models of Heritage Buildings that are useful for working with the BIM methodology. Additionally, the information stored in the thermal point cloud would lead to considerable savings in working hours and human resources, since all the useful data would be available at the same time. We would focus on different areas such as Architecture, Sculpture or Epigraphy to identify problems that could be solved using this multisensor information.

In the field of Architecture, the importance of having 3D digital models of historical buildings and architectural remains has become evident in recent times. If we add the data provided by the thermographic camera, we obtain a model which not only reflects the external reality of the monument but can also help us to unveil hidden structures (pipes, drainage systems…) without the need for excavations and the consequent destruction of the archaeological remains.

Furthermore, we would like to know whether this technology can help us to segment and classify building materials (wood, brick, marble…), especially in those places unperceived by the human eye. If so, and based on the principle that shape, position and the sort of building material used speak to us about ways of working that can define workshops and epochs, we would like to develop software that would help scientists to discern these patterns of work. This way, the archaeological problem of identifying whether two monuments were built by the same itinerant workshop could be solved. In some cases, this would help to corroborate what other sources show (e.g. the planimetry of the buildings and the sculptures decorating them lead us to believe that the Roman theatres of Lisbon (Portugal) and Medellín (Spain) were built by the same workshop [19]; could the information obtained from the multisensor 3D model help to confirm this supposition?). In other cases, it would be a brand-new discovery of something hitherto unknown. And, related to this, could it help to reinforce the supposed chronology of a building?

In other cases, we know that monuments were restored in antiquity or in recent times but without following the current rules of restoration. So, based on the principle outlined above, would it be possible to know in which zones and when a monument was restored? Continuing with the Roman epoch, it is well known that the Circus of Augusta Emerita (Mérida, Spain) was restored at the end of the Roman Empire [20]. Would it still be possible for us to detect something of this restoration? Was it done where, and to the extent, the epigraphy says?

Related to this, we can highlight one other problem in Archaeology, not only in the field of Architecture but in Sculpture and Epigraphy too: knowing whether there are remains of the stucco and polychromy that decorated them. In antiquity, buildings, statues and inscriptions were painted with bright, showy colours. From the Renaissance on, they have been displayed in white, and the remains were cleaned of any possible stains. In recent years, some groups from different Universities and Museums [21] have argued that an artefact/building cannot be completely understood if we are unaware of this aspect of it, and they have developed techniques to detect the invisible remains of pigments. With the application of this technology, we aim to obtain a complete digital replica of the pieces from which we would be able to obtain not only measurements, shapes, etc., but also data on colour. Some authors are already working on this line of research. For example, Poksińska et al. [22] present a method for the detection of polychromy in whitewashed walls using thermography. Doni et al. [23] employ thermographic images for the analysis of surface, subsurface and structural features of several illuminations belonging to a 15th century antiphonary.

Finally, in the detection of building materials, one branch could even be separated to form an independent research line, i.e. identifying ancient white marble, which is a real problem for scientists. Again, we intend to use Computer Vision techniques to be able to discern the different types of marmi antichi, avoiding those currently used invasive techniques that include cutting off a little part of the piece [24]. The key may lie in thermography, by studying the reaction of impurities and different types of crystals to changes in temperature.

To conclude, we should point out that the two aspects outlined at the beginning of this section are complementary since knowledge provided by basic research can lead to a better and more effective intervention on works of art. In the same way, the intervention itself can provide us with data to help in the development of scientific research.

3 The Novelty of Our Proposal

In the field of research in Computer Vision, the novelty of this new line focuses on automatically extracting information from a point cloud where each point has associated information on geometry, colour and temperature, which is obtained after incorporating thermal information to the data acquired by 3D laser scanners. New information fusion techniques will have to be developed to combine data acquired with these different types of sensors. So far, few research groups around the world have addressed the fusion of these technologies applied to the study and conservation of Heritage.

With regard to archaeological research, the key question is avoiding, as far as possible, contact with the piece, so as not to produce any type of damage in the interaction. In the area of conservation and restoration of Heritage, we intend to act on concrete and very specific problems that do not alter the remains of the object/monument. Therefore, to be able to use non-invasive/non-destructive techniques that provide at least the same data as the visual-manual studies used so far will be a qualitative leap in archaeological work and a major advance in scientific research.

The results of the research carried out will be embodied in the creation of a software tool that will facilitate and improve the work of specialists in Cultural Heritage. Having a working tool for digitized models from which to obtain accurate data is an important novelty with respect to what has been achieved so far, since this technology has been limited only to the documentation and reproduction of Archaeological objects/spaces. Using the valuable information generated by the proposed fusion of technologies means opening up a new field of action and applications within the research and conservation of Heritage and, consequently, improving its transfer to society.

Aside from results that can be directly applied in the field of Archaeology, Conservation/Preservation of Cultural Heritage or even in the field of Cultural Tourism, we would like to develop a ramification that would involve fields from Civil to Electrical Engineering and from Computer Vision to History, and periods from antiquity to modern techniques of energy efficiency; in short, we aim to raise BIM (Building Information Models) to the higher level of HBIM (Heritage/Historic Building Information Models).

In past decades, BIM has gradually been incorporated by the construction sector due to its many benefits and savings in resources during the design, planning, and construction of new buildings, and is in fact common practice at present. However, for the maintenance, refurbishment or deconstruction of existing buildings this methodology has barely been used [25]. For Heritage Buildings, until now primary data have been 3D geometric models showing physical conditions. But these data alone are neither sufficient nor very useful. Without the possibility of obtaining information in a complete, aggregated and easily accessible way, there remains a lack of knowledge and usability. The importance of semantically enriched 3D models to provide a more comprehensive repository of any architectural Heritage Building has been evidenced by the growing interest of recent research in this field. Nevertheless, applications are still in their early stages due to the multiple challenges of the topic. Recently, Merchán et al. [26] proposed a new subdivision of the dimensions that HBIM should contain that deals with this new line of development in more detail. As a natural continuation of previous works in which 3D points have been used for the creation of BIM models of civil buildings [27], we would like to continue directing our efforts to produce more and more complete HBIM models which are at present at their first stages of development. We aim to make the leap from generating models from 3D dense point clouds with colour and temperature information which is, at present, carried out manually (and is often slow, expensive and, in some cases, incomplete) to developing techniques that permit automation of the process of HBIM generation.

4 Initial Results

As previously stated, we are currently at the initial stages of this new line of research. Concerning our intention to automatically generate semantically rich thermal models of Heritage Buildings, one of the most important milestones achieved so far is the development of a hybrid 3D laser scanner fitted with colour and thermal cameras. This system provides point clouds with both colour and thermal information (i.e. temperature). It consists of a Riegl VZ-400 3D laser scanner that supports a Nikon D90 colour camera and a FLIR AX5 thermal camera (see Fig. 2). The scanner covers an area of 360° × 100° with a precision of 3 mm (one sigma at 100 m range). The colour and thermal cameras have resolutions of 4288 × 2848 pixels and 640 × 512 pixels, respectively.

Fig. 2. Hybrid 3D scanner. The universal coordinate system used is illustrated.

Calibrating the acquisition systems with each other so that the thermal data can be projected onto the 3D point cloud is the key operation of this process. Several solutions have been proposed in this regard. Ham et al. [28] have a system composed only of a FLIR E60 thermal camera, capable of acquiring thermal and digital images. A 3D model, both thermal and spatial, is generated by extrinsic and intrinsic calibration (made with a panel composed of 42 LED bulbs). The models are then aligned using characteristic points chosen by the user. Rangel et al. [29] use a system consisting of a Microsoft Kinect depth camera and a Jenoptik IR-TCM 640 thermal camera. A calibration panel is used to obtain the geometric relationship between the two cameras. These authors carried out an exhaustive study of the material used in the panel and its geometric distribution, since the reference points must be visible in both the depth and the thermal image. Borrmann et al. in [30] developed a system consisting of a Riegl VZ-400 3D laser scanner and an Optris PI160 thermal camera. A calibration panel is again used to obtain the relation between both reference systems, in this case made up of 30 incandescent bulbs. The acquisition system created by Mader et al. [31] is not integrated into a single device like the previous ones, but consists of three drones, each equipped with a different sensor. One of the drones, fitted with a Hokuyo UTM-30LX laser rangefinder, is responsible for obtaining the 3D point cloud of the scene. The other two drones, equipped with an RGB camera and a FLIR A65 thermal camera respectively, obtain colour data and thermal information. The calibration between the geometric data and the images is achieved by means of a pattern of markers identifiable by all the sensors. Along the lines of the methods discussed above, Wang et al. [32] use a chessboard pattern to register the data obtained with a LiDAR and a thermal camera. The pattern contains gaps in the white boxes so that, when it is placed in front of a hot body, common features can be seen in both the thermal image and the point cloud to perform the registration.

Other authors propose methodologies that allow 3D information to be combined with thermal information acquired through independent systems. Lagüela et al. [17] propose a methodology that registers the relation between geometric and thermal data by means of commercial software; points in common between both scenes are chosen manually and the data are registered. For the acquisition of data, these authors use a Riegl LMS-Z390i laser scanner and a NEC TH9260 thermal camera. In this case, the calibration panel (composed of 64 light bulbs) is used for the intrinsic calibration of the thermal camera and the subsequent correction of the distortion. González-Aguilera et al. [33], on the other hand, present a method for automatic registration of the information from both sensors by identifying singular points in the 3D point cloud (obtained using a Photon 80 laser scanner) and the thermal images (obtained by a FLIR ThermaCAM SC640). Finally, López-Fernández et al. [34] also describe a method that registers information from two independent systems. First, they acquire thermal images of each wall with a NEC TH9260 thermal camera, and then they acquire point clouds with an indoor mapping system consisting of a Hokuyo UTM-30LX 2D laser scanner, an IMU and two dual-channel encoders. The registration between both systems is carried out by manually selecting homologous features in both images.

The fundamental differences between our system and those described above lie in three aspects. First, our calibration procedure is novel: it uses targets (beacons) that incorporate both thermal and reflectance discriminants, which increases the accuracy and efficiency of the system. Second, the position of the targets is not restricted to small regions, as it is with active beacons (bulbs) [30, 31], targets on small boards [29, 32] or image features [17, 28, 33, 34]. In fact, our beacons can cover a wide area of the scene, in positions far from the scanner and without restrictions on their placement; as a consequence, calibration is more reliable and accurate. Third, many of the referenced systems do not address the completeness of the observable space, so they obtain only a partial thermal map of the scene. Our system can produce a complete 3D thermal map of the space because it integrates views acquired from one or several positions, obtaining an accumulated sampled map that covers the whole scenario.

The system has been designed to operate in two phases (see Fig. 3). Phase 1 consists of the calibration of the elements of the system and is executed only once. The calibration results are used as the basis for the acquisition of 3D-thermal data in phase 2.

Fig. 3. 3D thermal scanner operation scheme.

In order to achieve the highest quality images, an intrinsic calibration process is performed to obtain the parameters that model the distortion present in the images, both for the RGB camera and for the thermal camera, using the method proposed by Heikkilä et al. [35]. The thermal camera, in addition, suffers from a distortion known as vignetting, which causes a darkening of the edges of the image and which must also be corrected.
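As an illustration of the vignetting step, the correction can be sketched as a flat-field division, assuming a reference frame of a uniform-temperature target is available (the function and variable names below are ours, not from the chapter):

```python
import numpy as np

def correct_vignetting(thermal_img: np.ndarray, flat_field: np.ndarray) -> np.ndarray:
    """Flat-field vignetting correction: divide the image by the
    normalized response recorded for a uniform-temperature target."""
    gain = flat_field.astype(np.float64)
    gain /= gain.max()                 # gain = 1 where the optics attenuate least
    gain = np.clip(gain, 1e-3, None)   # guard against division blow-up at dark corners
    return thermal_img / gain
```

In practice the reference frame would be captured against a blackbody or other uniform radiator; more elaborate schemes fit a smooth (e.g. radial polynomial) gain surface instead of using the raw frame.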

Associating pixels of the thermal image with points of the 3D cloud requires the calculation of the corresponding projective transformation matrix. This transformation relates the coordinates of the point cloud, \( (X_{p}, Y_{p}, Z_{p}) \), to the coordinates of the corresponding projected point in the thermal image, \( (X_{f}, Y_{f}) \), in pixels. The transformation is modelled by the following equation:

$$ \begin{pmatrix} \lambda X_{f} \\ \lambda Y_{f} \\ \lambda \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} & r_{13} & r_{14} \\ r_{21} & r_{22} & r_{23} & r_{24} \\ r_{31} & r_{32} & r_{33} & r_{34} \end{pmatrix} \begin{pmatrix} X_{p} \\ Y_{p} \\ Z_{p} \\ 1 \end{pmatrix} $$
(1)

Equation (1) can be expressed as:

$$ \begin{pmatrix} X_{f} \\ Y_{f} \end{pmatrix} = \begin{pmatrix} X_{p} & Y_{p} & Z_{p} & 1 & 0 & 0 & 0 & 0 & -X_{f} X_{p} & -X_{f} Y_{p} & -X_{f} Z_{p} \\ 0 & 0 & 0 & 0 & X_{p} & Y_{p} & Z_{p} & 1 & -Y_{f} X_{p} & -Y_{f} Y_{p} & -Y_{f} Z_{p} \end{pmatrix} \begin{pmatrix} r_{11} \\ r_{12} \\ r_{13} \\ r_{14} \\ r_{21} \\ r_{22} \\ r_{23} \\ r_{24} \\ r_{31} \\ r_{32} \\ r_{33} \end{pmatrix} $$
(2)

For n pairs of corresponding coordinates, an overdetermined system is formed:

$$ \begin{pmatrix} X_{f1} \\ Y_{f1} \\ \vdots \\ X_{fn} \\ Y_{fn} \end{pmatrix} = \begin{pmatrix} X_{p1} & Y_{p1} & Z_{p1} & 1 & 0 & 0 & 0 & 0 & -X_{f1} X_{p1} & -X_{f1} Y_{p1} & -X_{f1} Z_{p1} \\ 0 & 0 & 0 & 0 & X_{p1} & Y_{p1} & Z_{p1} & 1 & -Y_{f1} X_{p1} & -Y_{f1} Y_{p1} & -Y_{f1} Z_{p1} \\ \vdots & & & & & & & & & & \vdots \\ X_{pn} & Y_{pn} & Z_{pn} & 1 & 0 & 0 & 0 & 0 & -X_{fn} X_{pn} & -X_{fn} Y_{pn} & -X_{fn} Z_{pn} \\ 0 & 0 & 0 & 0 & X_{pn} & Y_{pn} & Z_{pn} & 1 & -Y_{fn} X_{pn} & -Y_{fn} Y_{pn} & -Y_{fn} Z_{pn} \end{pmatrix} \begin{pmatrix} r_{11} \\ r_{12} \\ \vdots \\ r_{32} \\ r_{33} \end{pmatrix} $$
(3)

This can be expressed compactly as \( \mathbf{C} = \mathbf{W}\mathbf{P} \), where \( \mathbf{C} \) is the vector of pixel coordinates and \( \mathbf{P} \) the vector of unknown parameters.

Once the pixel coordinates \( (X_{f}, Y_{f}) \) and the corresponding 3D coordinates \( (X_{p}, Y_{p}, Z_{p}) \) of the associated points are known, and imposing \( r_{34} = 1 \), Eq. (3) is solved in the least-squares sense by:

$$ \mathbf{P} = \left( \mathbf{W}^{T} \mathbf{W} \right)^{-1} \mathbf{W}^{T} \mathbf{C} $$
(4)
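A minimal NumPy sketch of this direct linear transformation estimate (Eqs. (2)–(4)) might look as follows; the function names are ours, and `np.linalg.lstsq` is used instead of forming the normal equations explicitly, which gives the same least-squares solution with better numerical conditioning:

```python
import numpy as np

def estimate_projection(pts3d: np.ndarray, pix: np.ndarray) -> np.ndarray:
    """Estimate the 3x4 projective matrix of Eq. (1) with r34 = 1,
    stacking two rows per correspondence as in Eqs. (2)-(3)."""
    n = len(pts3d)
    W = np.zeros((2 * n, 11))
    C = np.zeros(2 * n)
    for i, ((Xp, Yp, Zp), (Xf, Yf)) in enumerate(zip(pts3d, pix)):
        W[2 * i]     = [Xp, Yp, Zp, 1, 0, 0, 0, 0, -Xf * Xp, -Xf * Yp, -Xf * Zp]
        W[2 * i + 1] = [0, 0, 0, 0, Xp, Yp, Zp, 1, -Yf * Xp, -Yf * Yp, -Yf * Zp]
        C[2 * i], C[2 * i + 1] = Xf, Yf
    P, *_ = np.linalg.lstsq(W, C, rcond=None)   # least-squares solution of C = W P
    return np.append(P, 1.0).reshape(3, 4)      # reinsert r34 = 1

def project(R: np.ndarray, pts3d: np.ndarray) -> np.ndarray:
    """Apply Eq. (1): homogeneous projection of 3D points to pixels."""
    h = R @ np.vstack([pts3d.T, np.ones(len(pts3d))])
    return (h[:2] / h[2]).T
```

With at least six well-distributed correspondences the 11 parameters are determined; additional targets overdetermine the system and average out measurement noise.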

In practice, in the calibration procedure, reflective targets are glued onto small plastic ice cubes. This facilitates their location both in the reflectance image associated with the 3D point cloud and in the thermal image. Corresponding points are matched by means of a search algorithm that locates the targets in the images, followed by a refinement step, and the consistency between the corresponding sets is then verified.
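The target-location step can be sketched, under the assumption that targets appear as compact bright blobs in both modalities, as connected-component extraction over a thresholded image (pure NumPy; the helper names are ours, not the chapter's algorithm):

```python
import numpy as np

def label_components(mask: np.ndarray):
    """4-connected component labelling of a boolean mask (flood fill)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        count += 1
        stack = [seed]
        labels[seed] = count
        while stack:
            r, c = stack.pop()
            for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = count
                    stack.append((rr, cc))
    return labels, count

def target_centroids(img: np.ndarray, thresh: float) -> np.ndarray:
    """Centroids (row, col) of the blobs brighter than `thresh`."""
    labels, n = label_components(img > thresh)
    return np.array([np.argwhere(labels == k).mean(axis=0)
                     for k in range(1, n + 1)])
```

The centroids found in each modality can then be paired (e.g. by mutual nearest neighbour after a coarse alignment) and refined to sub-pixel accuracy before feeding Eq. (3).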

Data acquisition is performed in three sequential phases:

  • First, the 3D scanner captures the 3D coordinates of points reached by the laser in a space defined by the vertical (λ) and horizontal (θ) angle range set by the user. Usually the range of λ is maintained at [30°, 130°], while the range of θ is set at will.

  • Secondly, the RGB and thermal cameras perform, starting from θ = 0, the number of captures required to completely cover the range of θ chosen in the session. Depending on the field of view of the cameras, each capture is performed after turning the system at a horizontal angle that guarantees a minimum overlap between consecutive thermal images.

  • Third, images are preprocessed and temperature values are assigned to the point cloud taken in the first phase. Preprocessing consists essentially of the elimination of vignetting and distortion correction.
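As a sketch of the third phase, once the projection matrix of each capture is calibrated, temperature assignment reduces to projecting the cloud into the already preprocessed thermal image and sampling it (the function name and the nearest-pixel sampling choice are ours, not necessarily the implemented scheme):

```python
import numpy as np

def assign_temperatures(points: np.ndarray, R: np.ndarray,
                        thermal: np.ndarray, default: float = np.nan) -> np.ndarray:
    """Project 3D points with the calibrated 3x4 matrix R (Eq. (1)) and
    attach the temperature of the nearest thermal pixel. Points projecting
    outside the image, or lying behind the camera, keep `default`."""
    h = R @ np.vstack([points.T, np.ones(len(points))])
    valid = h[2] > 0                        # keep only points in front of the camera
    h2 = np.where(valid, h[2], 1.0)         # dummy divisor for invalid points
    cols = np.round(h[0] / h2).astype(int)  # X_f (column)
    rows = np.round(h[1] / h2).astype(int)  # Y_f (row)
    inside = (valid & (rows >= 0) & (rows < thermal.shape[0])
                    & (cols >= 0) & (cols < thermal.shape[1]))
    temps = np.full(len(points), default)
    temps[inside] = thermal[rows[inside], cols[inside]]
    return temps
```

A real pipeline would additionally resolve occlusions (a point visible to the scanner may be hidden from the camera) and blend the overlapping captures of consecutive images.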

This system has been tested in indoor building environments with very good results. We present here, as a case study, findings obtained in the building that housed the winch of the Nueva Concepción mine in Almadenejos (Spain): the so-called Baritel de San Carlos. Considered a jewel of industrial archaeology worldwide, this curious building dates from the last years of the 18th century and the first years of the 19th, and is in a reasonably good state of conservation. It consists of a single space, polygonal on the outside (16 equal sides) and circular in the interior (17 m in diameter). It is finished in the interior with a spherical-conical vault and on the outside with a ceramic-tiled 16-sided pitched roof. Driven by animal traction, the winch in its interior (which is not conserved) was used to introduce and extract men and materials from the mine (see Fig. 4).

Fig. 4. Baritel de San Carlos: exterior and interior views.

To generate the thermal point cloud of this Heritage Building, 13 scanner shots were taken from 5 positions, with the following approximate volume of data: 78.3 million spatial points, of which 55% have an assigned colour and 3% contain thermal information. In the data collection process, 40 reflective targets were used. The range in θ was [0°, 360°], with a scanning step of 0.08°. Fifteen photographs were taken for a complete rotation of the scanner. Data acquisition took 47 s for the scanner (over the 360° range) and 80 s for colour and thermal imaging.

Figure 5 shows the initial results for the thermal model using a pseudocolour code. The complete thermal cloud of the interior of the building, depicted from an external viewpoint, is shown at the top. The cones of points seen coming out of it correspond to areas of doors and windows. The lower part of the figure shows two distinct interior zones. There are no large differences in temperature values, these being around 20 °C, hence the colour homogeneity. The beacons used for calibration appear as red points, and the humid areas of the walls are easily identifiable.

Fig. 5. (a) Thermal cloud of the interior of Baritel de San Carlos. (b) Details of interior zones of the Heritage Building.

Continuing with the idea of automatic generation of semantically rich thermal models that could be used in HBIM methodology, we have also taken the first steps in a line of research that is currently open: the automatic segmentation of dense 3D point clouds provided by laser scanners.

Segmentation of point clouds is usually solved by region growing methods, which use the similarity between the normal directions of the planes fitted locally at each 3D point [36, 37]. Recently, a procedure that classifies point clouds into architectural elements based on support vector machines (SVM) has been proposed [38]. Figure 6 shows some preliminary results obtained by applying the method proposed in [36] to the 3D point cloud of Nuestra Señora de la Candelaria church in Fuente del Maestre (Badajoz, Spain).

Fig. 6. Segmentation (a, b) of a partial point cloud of Nuestra Señora de la Candelaria church in Fuente del Maestre (Spain) using the method proposed in [36].

In the near future, we plan to add the information provided by the colour images and the temperatures acquired with the thermal camera to the geometric information contained in the point clouds to enhance the segmentation procedure.
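A minimal sketch of such normal-based region growing is shown below (brute-force neighbour search and our own function names; real implementations for the dense clouds mentioned above would use spatial indexing, and [36, 37] add further refinements):

```python
import numpy as np

def estimate_normals(pts: np.ndarray, k: int = 10):
    """Surface normal at each point from a PCA of its k nearest
    neighbours (brute-force O(n^2) search, fine for a sketch)."""
    d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    normals = np.empty_like(pts)
    for i, idx in enumerate(nn):
        q = pts[idx] - pts[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(q, full_matrices=False)
        normals[i] = vt[-1]          # direction of least variance
    return normals, nn

def region_growing(pts: np.ndarray, k: int = 10, angle_deg: float = 10.0):
    """Label points by growing regions across neighbours whose normal
    directions deviate by less than `angle_deg`."""
    normals, nn = estimate_normals(pts, k)
    cos_t = np.cos(np.radians(angle_deg))
    labels = np.full(len(pts), -1)
    current = -1
    for seed in range(len(pts)):
        if labels[seed] != -1:
            continue
        current += 1
        labels[seed] = current
        stack = [seed]
        while stack:
            i = stack.pop()
            for j in nn[i]:
                if labels[j] == -1 and abs(normals[i] @ normals[j]) > cos_t:
                    labels[j] = current
                    stack.append(j)
    return labels
```

On real scans one would grow only across spatially close neighbours, and the colour and temperature channels could serve as additional similarity cues in the growing criterion.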

5 Conclusions

In the 21st century, with the growth of digitization and globalized knowledge, a multidisciplinary approach seems to be the most effective way of working. Thus, eminently humanistic subjects need to know how to take advantage of the opportunities that current technology provides. It is with that spirit, and with the intention of promoting intercultural and interdisciplinary cooperation, that new lines of research such as this one are born.

The incorporation of thermal cameras into the data obtained with laser scanners and colour cameras to generate thermal semantic models of Heritage Buildings is a subject scarcely undertaken until now by research groups worldwide. Using the valuable information generated by the proposed fusion of technologies means opening up a new field of action and applications within the research and conservation of Heritage, which will be not only a qualitative leap in archaeological work but also a major advance in scientific research.