Introduction

The development of semantically rich digital facility models, known as building information modelling (BIM), is garnering increasing attention in the architecture, engineering, construction and facility management (AEC/FM) community due to these models' ability to enhance communication between the different stakeholders involved in the various stages of the building process [1].

For the past several years, most countries in northern Europe have been involved in the widespread implementation of BIM technology, especially in the domain of new construction. Numerous studies describe methodologies, processes, case studies and standards in the field of BIM [2]. Examples can be drawn from countries such as the United Kingdom [3, 4] and Scandinavian countries such as Finland [5] and Norway [6]. Other relevant examples can be found in the United States [7], Singapore [8], Australia [9] and Canada [10].

Like many other countries, Italy faces a challenging situation: on the one hand, it is in the midst of a severe economic crisis, and on the other, there is a compelling need to manage large buildings set in historical contexts, which are often subject to artistic and landscape protection by the Cultural Heritage Ministry. The recent sequence of earthquakes in Italy (L'Aquila 2009, Emilia 2012, Amatrice 2016, Visso 2016, Norcia 2016) demonstrates the seismic vulnerability of the entire territory and the need to preserve and protect the historical built heritage.

In this respect, it seems clear that efforts should be directed towards the restoration and management of existing buildings. If we consider that many of the historical centres of Italian towns are UNESCO sites, the need for the integrated development of standards and techniques for data acquisition, processing and integration becomes evident.

Cultural heritage sites and our valuable historical architectural heritage require high-resolution 3D models in order to derive significant added value from their digitalization [11]. These models are increasingly available thanks to rapid technological progress in acquisition methods based on laser scanning and/or digital photogrammetry [12, 13], and to several international projects such as those promoted by the CyArk organization. However, very little research has explored the advantages and critical issues of BIM methodologies in the cultural heritage domain [14,15,16,17], in particular how to characterize and improve accuracy, precision, quality, graphic/alphanumeric representation and data enrichment in compliance with the acquired data.

We are living through one of the most significant periods of technological innovation in history. The most recent research perspectives in the field of integrated survey procedures are directed towards the design of a mechatronic system that optimizes computing systems, improving the efficiency and effectiveness of the entire process and drawing on a wealth of interdisciplinary knowledge to structure a hierarchy of knowledge to share with other professionals for the protection of cultural heritage (Fig. 1).

Fig. 1 The H-BIM process in relation to the life-cycle stages

This chapter is structured as follows: after presenting an overview of data acquisition, processing and integration, the chapter deals with other relevant issues in the field of parametric modelling and semantics, level of accuracy, data enrichment and organization, graphic and numeric representation, and sharing (interoperability), after which it concludes with a discussion of future research directions.

Data Acquisition

The digitalization of cultural heritage starts with a survey phase, the result of which is a digital model: a virtual replica of the object of study that is as detailed and accurate as current technologies permit. The reverse engineering process thus provides a virtual scale model that is coherent in metric terms and enriched with information on the aesthetic appearance of the materials and their reflectance properties.

The latest available technologies and methods in the field of cultural heritage 3D data acquisition take advantage of two approaches: active sensors, such as the well-known terrestrial laser scanning (TLS), and passive methods, such as digital photogrammetry (close-range or UAV-based), which nowadays is coupled with the automation offered by computer vision through structure-from-motion (SfM) techniques. In both TLS and SfM, the final output is typically a coloured point cloud, a metric product that can be used in the subsequent 3D modelling phase.

TLS systems are well-known range-based systems capable of measuring the 3D coordinates of millions of points in a few seconds [18]. The acquisition strategy is based on a laser beam emitted by the instrument following one of two main approaches used in cultural heritage surveying [19] (triangulation-based scanners are not considered in this classification). In the first approach, called time-of-flight (ToF), the distance is measured by shooting a laser pulse at an object and measuring how long it takes for the pulse to bounce back. The second strategy, called phase-shift, emits a continuous, modulated laser beam; the scanner measures the phase shift of the returning energy to calculate distances.
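
To make the two ranging principles concrete, the sketch below computes a distance from a round-trip time and from a measured phase shift. This is our illustration of the standard formulas, not code from any scanner vendor; the timing and modulation values are arbitrary.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Time-of-flight: the pulse travels to the object and back."""
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Phase-shift: distance from the phase delay of a modulated beam.

    Unambiguous only within half the modulation wavelength, c / (2 * f).
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

print(tof_distance(333e-9))             # ~49.9 m for a 333 ns round trip
print(phase_shift_distance(2.0, 10e6))  # ~4.8 m for 2 rad at 10 MHz
```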

Today, with improvements in computer vision algorithms, digital photogrammetry has become an affordable and reliable technique for architectural 3D acquisition and processing. It provides a low-cost solution (especially through packages capable of fully automatic 3D reconstruction) compared with the initial investment required for TLS equipment, comprising both instruments and processing software. Packages such as Agisoft PhotoScan, 3DF Zephyr, Pix4D, ReCap 360, ContextCapture and MicMac are able to reconstruct a textured 3D model (point cloud and mesh) starting from an appropriate set of images taken according to photogrammetric criteria.

Apart from a comparable metric precision, each technique has its own peculiarities, with advantages and disadvantages depending on the shape and morphology of the object.

Starting from the assumption that both techniques deliver a point cloud (i.e., the first product needed to realize a correct 3D model that can be input into a BIM), a comparison can focus on acquisition strategy, data-processing time, accuracy and cost.

The first step for both techniques is the acquisition project. This phase is planned according to the shape of the object and follows several criteria. For TLS acquisition, these can be summarized as follows:

  • Overlaps of adjacent scans need to be greater than 30%.

  • Scan density, which is connected to the type of laser used and to the minimum detail to be acquired, needs to be defined.

  • The angle of incidence needs to be considered; it is related to the distance between the laser and the acquired object and to the orientation angle.

On the other hand, the photogrammetric acquisition rules for point cloud generation can be summarized as follows:

  • Overlaps between images should be more than 80%.

  • Normal and convergent images should be acquired at different distances.

  • Evaluation of the ground sample distance (GSD) according to the scale of the final representation (see the sketch below).
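
The GSD evaluation can be made operational with the standard relation between pixel size, focal length and object distance; a minimal sketch follows, with illustrative camera values.

```python
def ground_sample_distance(pixel_size_mm: float, focal_length_mm: float,
                           distance_m: float) -> float:
    """GSD in metres per pixel: pixel size x object distance / focal length."""
    return (pixel_size_mm / focal_length_mm) * distance_m

# e.g., a 4-micron pixel, a 35 mm lens, a facade photographed from 10 m:
gsd = ground_sample_distance(0.004, 35.0, 10.0)
print(f"GSD = {gsd * 1000:.2f} mm/pixel")  # ~1.14 mm/pixel
```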

The next step relates to data processing: extracting the final point cloud from TLS data is not computationally expensive, because the instrument measures the 3D coordinates directly. Data processing in the photogrammetric approach, on the contrary, is very time-consuming, since dense cloud generation, especially at a high level of detail, requires a large amount of computational time.

In terms of accuracy, it is important to note that both techniques need to be combined with a traditional total station survey in order to reference the acquired object. The surveyed points are used to correctly perform the bundle block adjustment in the photogrammetric process and to evaluate the final point clouds, with the measured points serving as ground truth (i.e., checkpoints). According to several studies, point clouds achieved by the photogrammetric process are comparable to those surveyed by laser scanner [20,21,22,23,24,25]. With laser scanning, the sharpness of the edges in the point clouds is better preserved than in those automatically obtained with digital photogrammetry algorithms, which suffer from a certain smoothness. Moreover, SfM packages often tend to automatically fill holes in the point cloud by introducing additional geometry (which corresponds, in principle, to the occluded parts in laser scanning models); this geometry must be deleted from the model to avoid confusion during the 3D modelling phase.
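
As an illustration of the checkpoint evaluation described above, the following NumPy sketch computes the 3D root mean square error between surveyed checkpoint coordinates and the corresponding points identified in the cloud; the coordinates are placeholders.

```python
import numpy as np

# Checkpoint coordinates from the total station survey (ground truth)
surveyed = np.array([[0.000, 0.000, 0.000],
                     [5.012, 0.003, 0.001],
                     [5.008, 4.995, 2.103]])
# The same points identified in the registered point cloud
measured = np.array([[0.004, -0.002, 0.001],
                     [5.015, 0.006, -0.003],
                     [5.001, 4.999, 2.110]])

residuals = measured - surveyed
rmse = np.sqrt((residuals ** 2).sum(axis=1).mean())  # 3D RMSE
print(f"3D RMSE on checkpoints: {rmse * 1000:.1f} mm")
```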

Finally, we should address the cost of the two systems. The TLS is not a low-cost instrument, whereas the photogrammetric approach is supported by the open source community (several open source applications for point cloud generation are available today) and is thus cost-efficient. For these reasons, photogrammetric techniques are now more often used for surveying architectural elements and details (doors, capitals, plasters, frames) that can be represented with sufficient resolution and accuracy.

Data Processing and Integration

The first result of a survey campaign using TLS is the collection of a certain number of scans, each containing millions of measured points. After the point clouds are acquired, they have to be processed for use in 3D modelling.

The processing steps are well known, and the procedure is quite straightforward.

The first stage of the process consists of the following steps:

  • Point-cloud colouring.

  • Scan registration.

  • Georeferencing/referencing (according to the reference system used).

  • Filtering/export of data (interoperable file).

This approach is commonly carried out using the software delivered with the scanner and, with minimal differences, is the same for all the main scanner manufacturers. Point-cloud colouring is the first step, assigning an RGB value to each measured point. The RGB values are extracted from the images acquired by the camera coupled to the TLS. Typically, the camera is embedded in the instrument; only a few instruments (e.g., Riegl) use a camera mounted on top of the device. Using an external camera, it is possible to obtain images with a higher resolution than those from the embedded camera, although the high-dynamic-range (HDR) sensors in newer TLS instruments (such as the Faro Focus 3D or the Z + F IMAGER 5010C) can acquire images with a resolution comparable to an external one. The scan registration step merges multiple scans into a common point cloud, and here it is possible to follow two paths: registration based on the shape of the object using an iterative closest point (ICP) approach [26, 27], or registration using markers positioned on the object.
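
Shape-based registration with the ICP approach can be sketched with the open source Open3D library, as below; the file names, threshold and initial guess are assumptions, and in practice this step usually runs inside the manufacturer's software.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("scan_02.ply")  # scan to be aligned
target = o3d.io.read_point_cloud("scan_01.ply")  # reference scan

threshold = 0.05  # max correspondence distance in metres (assumption)
init = np.eye(4)  # coarse alignment, e.g., from markers or manual picking

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.fitness, result.inlier_rmse)  # registration quality indicators
source.transform(result.transformation)    # apply the estimated pose
```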

With the ICP approach, it is advisable to place some markers as well, since these are useful (though not compulsory, as natural points can be used) for the next step of georeferencing the registered point clouds.

To do this, a topographic network is first established to define the reference system. Starting from the known points, a side-shot approach is used to measure the markers for scan georeferencing.
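
Conceptually, georeferencing via markers reduces to estimating the rigid transformation that maps marker coordinates from the scan system onto their surveyed coordinates. The following is a least-squares sketch using the classic SVD-based (Kabsch) solution in NumPy; it is our illustration, not a specific survey package's routine.

```python
import numpy as np

def rigid_transform(scan_pts: np.ndarray, geo_pts: np.ndarray):
    """Least-squares R, t such that geo ~= R @ scan + t (Kabsch algorithm).

    Requires at least three non-collinear marker correspondences.
    """
    cs, cg = scan_pts.mean(axis=0), geo_pts.mean(axis=0)
    H = (scan_pts - cs).T @ (geo_pts - cg)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cg - R @ cs
    return R, t

# Residuals on the markers themselves give a first accuracy estimate:
# residuals = geo_pts - (scan_pts @ R.T + t)
```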

Naturally, it is possible to use these points to evaluate the accuracy of the registration phase and to better optimize the process. The overall model is composed of multiple overlapping point clouds and represents the first material and geometric description of the analysed building. Nevertheless, it has to be considered a discrete model, a numerical sampling of 3D points derived from the object; even a dense point cloud does not necessarily describe all of the targeted surfaces mathematically.

On the other hand, the photogrammetric workflow (using the SfM approach) consists of the following steps:

  • Tie-point extraction (using SIFT or similar algorithms [28, 29]; a matching sketch follows this list).

  • Image orientation—relative orientation.

  • Marker measurement (markers are used as ground control points; the measured points are placed on the images in which they are visible).

  • Adjustment, absolute orientation and camera calibration [30, 31].

  • Dense cloud generation [32].

  • 2D and 3D model realization.
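
The tie-point extraction step can be illustrated with OpenCV's SIFT implementation and Lowe's ratio test, a sketch of what SfM packages run at scale over all image pairs; the image names are placeholders.

```python
import cv2

img1 = cv2.imread("IMG_0001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("IMG_0002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
# Lowe's ratio test keeps only distinctive correspondences
tie_points = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(tie_points)} candidate tie points between the two images")
```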

It is important to underscore that in both methodologies, the use of measured markers is preferred in order to control the accuracy of the achieved products and to allow complete integration, since the final outputs are then in a common reference system.

As a general rule, the simplest way to obtain traditional representations (orthophotos) associated with chromatic information consists of extracting a triangular mesh. The corresponding image is then calibrated and projected onto that mesh to achieve a textured model.

The creation of the mesh can be achieved using different processing software, following one of two paths: the first consists of creating a highly detailed mesh which, however, does not take into account the morphological characteristics of the surfaces; the second creates an "intelligent" mesh whose generative algorithms compute angular discontinuities according to user-defined thresholds.

The textured mesh models, however, present a number of issues, for example their large file size, which is directly related to the resolution and accuracy of the point clouds. Several techniques are available to achieve "lean" digital models, such as decimating the mesh [33] or re-topologizing it (that is, rebuilding an existing mesh with more or less the same volume and shape but with a different mesh layout) using sub-displaced modelling [34,35,36]. Processed this way, the models can also be used for both online and desktop interactive visualization [37].
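
Mesh decimation, the first of the "leaning" techniques cited above, can be sketched with Open3D's quadric decimation; the file name and the 10% target are illustrative choices, not recommendations.

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("textured_model.ply")
print(f"input: {len(mesh.triangles)} triangles")

# Reduce to ~10% of the original triangles while preserving overall shape
lean = mesh.simplify_quadric_decimation(
    target_number_of_triangles=len(mesh.triangles) // 10)
lean.compute_vertex_normals()  # recompute shading normals after decimation
o3d.io.write_triangle_mesh("textured_model_lean.ply", lean)
```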

Another relevant issue is the correct geometric description of the generated surfaces. Although it is possible to obtain a very high-resolution mesh, it carries no semantic information until it is critically interpreted.

The first step is to semantically subdivide the point cloud and to identify and classify each building (or building element) according to a shared ontology of historical buildings that defines the hierarchies between the whole and its parts. Point clouds can be subdivided according to several levels of detail coherent with the rules of architectural representation at different scales, and can be resampled in relation to the requested level of detail of the project.
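
Resampling a (segmented) cloud to the level of detail required by a given representation scale can be sketched with voxel downsampling in Open3D; the scale-to-voxel mapping below is our illustrative assumption, not a standard.

```python
import open3d as o3d

# Illustrative voxel sizes per representation scale (assumptions, in metres)
SCALE_TO_VOXEL = {"1:200": 0.05, "1:100": 0.02, "1:50": 0.01}

pcd = o3d.io.read_point_cloud("facade_segment.ply")
for scale, voxel in SCALE_TO_VOXEL.items():
    resampled = pcd.voxel_down_sample(voxel_size=voxel)
    o3d.io.write_point_cloud(f"facade_{scale.replace(':', '_')}.ply", resampled)
```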

Parametric Modelling and Semantics

Accurate and detailed instrumental metric acquisition and the subsequent integration of the data (occlusion areas) generate a database that visually represents the envelopes of the acquired surfaces. However, such information has no intelligence until it is semantically interpreted and geometrically parameterized.

Scientific reports about the restitution process in a BIM environment applied to cultural heritage can follow different procedures.

Interesting work has been conducted by Armeni et al. on automatic semantic recognition in a point cloud [38]. Through complex algorithms that interpret the statistical presence of points in particular alignments, the software can identify the separation of spaces and automatically recognize objects.

In any case, a knowledge-based approach to historical architecture is fundamental for recognizing single architectural components; understanding their links and relationships with the building as a whole is crucial for achieving a proper classification and semantic decomposition of the architectural elements and of the components/details that form them.

One very interesting approach is the use of historical treatises (e.g., Palladio’s), where this semantic decomposition is well structured and explained. This procedure can be used to move from the general to the specific and vice versa using knowledge about the relations between the parts. It is thus possible to create a library of profiles and components for adaptation and reuse in other projects [39,40,41,42]. Another possibility is based on the direct interpretation of survey data, once again recognizing and identifying the architectural components through their critical interpretation (geometric analysis of typical shapes, compositional rules, positioning and orientation constraints) to formalize the semantic structure and make it explicit in an analytical parametrized language. Furthermore, keeping in mind the goal to create a library of shared and reusable architectural components, a formal and typological classification is needed in order to detect the regulatory rules, invariants and the variation in the architectural style [11, 43, 44].

However, not all architectural components (e.g., vaults or irregular components) can be correctly modelled in commercial platforms [45]. In this case, great support could be provided by shape grammars and procedural modelling, which require significant computer science competence [46]; another possibility is to use reverse modelling software to create a non-uniform rational basis spline (NURBS) surface and then import that surface into the BIM platform [47, 48].

"Reasoning" about architectural components means carrying out "local" modelling to create semantics-aware libraries; in a "global" modelling approach, by contrast, each modelled element refers directly to the point cloud.

Therefore, "global" modelling requires referencing the point cloud in a unique coordinate system, so that there is a direct correspondence between the topographic reference system of the geometric database (the point cloud) and the reference system used for parametric modelling.

"Local" modelling does not require knowing that relationship; it is only necessary that the cropped point cloud be aligned with the reference planes used for modelling the component.

The global modelling approach (Fig. 2) is usually preferred to overcome the difficulty of using point clouds within a commercial platform's library interface (e.g., Revit families). Over the past few years, several plug-ins (e.g., CloudWorx, PointSense, PointCab4Revit, Scan2BIM) have been released that can manage point clouds in both the project interface and the single-component creation interface. These plug-ins can easily extract x-ray orthoviews from the point cloud and crop and segment it in real time. They can also measure the deviation between the point cloud and the 3D-modelled geometry. In this way, both the modelling and assessment phases are facilitated.

Fig. 2 H-BIM of Poggio Rusco Church (Mantua, Italy)

A second distinction allows us to reflect on a different approach arising from the experiences of several research groups who have investigated H-BIM: given that 3D geometric parametric modelling of architectural elements is time-consuming, should an object in an H-BIM be geometrically parametric or not? What seems to be a provocation actually underscores a different approach to the issue of modelling existing artefacts (or portions of them).

A rigorous approach [47] assumes the flexibility and reusability of the different components through the geometric parameterization of dimensions and relationships, typical of systemic and cross-parameterization operations. But is this always useful? There are many unique pieces in classical architecture that do not require such flexibility. This is the case with decorations depicting anthropomorphic and zoomorphic elements, such as altars. Another example is existing elements that do not fit any standard building-component category in reverse modelling, as in the case of vaults with extremely irregular surfaces.

These are the best examples with which to demonstrate the added value of new BIM approaches, implementing the alphanumeric component and going beyond the typically static nature of those particular geometric conformations. In this respect, international guidelines and standards draw a clear distinction between graphic content and the minimum level of reliability of the information associated with the modelled components. This distinction can be applied to projects involving surveyed elements, as described in the following section.

Measuring the Level of Accuracy

In the architectural survey field, the geometric accuracy of digital models refers to the assessment of the deviation between the real object and its virtual representation. For a BIM model, the process is conducted by measuring the deviation of the H-BIM model, and in particular of the building object models (BOMs), from the survey data (Figs. 3 and 4) [11, 49, 50].
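
A sketch of how this deviation can be quantified with Open3D: sample points on the modelled H-BOM surface and measure their distances to the survey cloud. The file names and the sample count are placeholders.

```python
import numpy as np
import open3d as o3d

survey = o3d.io.read_point_cloud("survey_cloud.ply")   # ground truth
model = o3d.io.read_triangle_mesh("hbom_element.obj")  # modelled H-BOM
model_pts = model.sample_points_uniformly(number_of_points=100_000)

# Distance from each sampled model point to its nearest survey point
dist = np.asarray(model_pts.compute_point_cloud_distance(survey))
print(f"mean deviation: {dist.mean() * 1000:.1f} mm")
print(f"95th percentile: {np.percentile(dist, 95) * 1000:.1f} mm")
```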

Fig. 3 Point cloud comparison with a historical building object model (H-BOM)

Fig. 4 Sample of point cloud deviation for a building object model (H-BOM)

However, it is possible to refer to another, perhaps less intuitive, type of level of accuracy that takes into account how well an object is known. If we consider the structural analysis of historical masonry, for instance, the verification is calibrated through a "confidence factor" that allows one to overestimate the forces in the case of missing information. For each case, it is possible to classify the artefact and to associate particular mechanical characteristics with it. In this sense, the confidence factor can be amplified or reduced without resorting to invasive procedures that could irreversibly harm the historical masonry. It therefore refers to the quantification of the levels of accuracy of the knowledge directly linked to the virtual elements.

Considering the level of geometric accuracy related to the architectural survey, the references provided by regulations and international standards are very scarce [17]. An example of a level-of-accuracy definition was set out in 2012 by COBIM2012, Series 2 [51]. In this document, the authors establish the concept of an Inventory BIM model, whose level of accuracy is defined as follows:

Accuracy levels of Inventory model—Requirement—The structures of old buildings are almost always somewhat slanted, sloping, curved or otherwise inexact in their geometry. Striving for “absolute” accuracy in the Inventory model is not appropriate […]. The allowed measurement deviations for the Inventory model are: 10 mm on corner points of building elements, 25 mm on surfaces, e.g. walls and floors, 50 mm for old irregular structures such as roof structures.

Therefore, the definition of the level of accuracy is mainly related to “geometric” requirements, according to the inventory BIM rules. This definition does not take into account particular instances of simplified graphic detail associated with specific H-BIM models used in facility management (FM) or energy analysis.
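
By way of illustration, a deviation measured as in the previous section could be checked against the COBIM2012 thresholds quoted above with a few lines of Python; the category names are our own labels for the three quoted cases.

```python
# COBIM2012 Series 2 tolerances for the Inventory model (metres)
COBIM_TOLERANCE = {
    "corner_point": 0.010,  # corner points of building elements
    "surface": 0.025,       # surfaces such as walls and floors
    "irregular": 0.050,     # old irregular structures, e.g., roofs
}

def within_tolerance(deviation_m: float, category: str) -> bool:
    """True if a measured model-to-survey deviation meets COBIM2012."""
    return deviation_m <= COBIM_TOLERANCE[category]

print(within_tolerance(0.018, "surface"))       # True: 18 mm <= 25 mm
print(within_tolerance(0.018, "corner_point"))  # False: 18 mm > 10 mm
```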

This relates to the old dilemma of the "representation scale" of the 3D model, partly superseded in the BIM environment by the notion of different levels of detail. It is therefore evident that different uses of a BIM model [52] can be associated with different levels of accuracy.

An interesting approach that could associate a level of accuracy with a possible H-BIM workflow was elaborated on the basis of guidelines provided by Penn State University [52, 53]. The work identifies 25 "BIM uses", organized by design development phase. BIM uses are categorized into five primary purposes: gathering, generating, analysing, communicating and realizing.

Although the document is complex, no references are available in terms of metric accuracy or the level of development (LoD), which are explained later. In our opinion, an effort should be made to identify the relationship between H-BIM uses and accuracy because, at present, there is no reference on the topic of existing buildings. A first attempt was proposed in the work of Biagini and Donato [11].

Another international document that deals with the level of accuracy is the GSA document, volume 3 [54]. In Sect. 2.3 ("Types of Deliverables from 3D Data") and Table 2, tolerances as a function of deliverable type are reported. Moreover, the document defines "tolerance" in relation to the BIM model as:

the allowable dimensional deviation in the deliverable from truth (…). Some examples of tolerances are: (1) Point cloud: the distance between two points in a point cloud as compared to the true distance between the same two points in the actual scene should be less than or equal to the specified tolerance, (2) Plan: the difference between the length of a wall length in a 2D plan and the actual wall length should be less than the specified tolerance ([54], pp. 5–6).

An elaboration of international and local standards is missing and must be developed. In this regard, the work group UNI/CT 033/GL 05 of the Italian Organization for Standardization (UNI) is preparing documents on the LoD of historical buildings, in particular with reference to UNI 11337, proposing additional sections for the restoration and renewal of historical and monumental buildings.

In short, the level of accuracy plays a more meaningful role in the BIM environment today than in the past: the better a cultural heritage asset is known, the more complete (and accurate) the representations stored in the database associated with the modelled shapes, as analysed below.

Data Enrichment and Organization

The survey acquisition phase should be followed by the implementation of the informative apparatus, whose data are extremely heterogeneous. The digitalization of all these data allows the creation of a database associated with the surveyed elements. The survey is therefore an integrated process of metric measurement whose final purpose is the representation and documentation of an existing building. It takes into account several kinds of documents, including archival records, photographs, previous surveys, dimensioned sketches, material analyses, and the analysis and representation of degradation phenomena (Fig. 5).

Fig. 5 Dimensioned sketches and related picture

We are currently living in a time of numerous technological advancements, especially in the architectural survey and design fields. In any case, the statements derived from reference legislation and treatises remain valid. The introduction of the new BIM methodologies in the construction process also includes surveying and the use of surveys in the field of cultural heritage. In contrast to what has been done in the past, this process allows the creation of a unique database that holds both geometric and non-geometric information, and that is implementable and freely accessible.

Importantly, an architectural survey is not only connected to what one sees, the "skin" of the building; it is also a more complex operation consisting of the retrieval of archival sources, through which it is possible to retrace the building techniques of the time and understand the transformations the artefact has undergone.

The organization of archival sources under a common umbrella that relates them to the main features of a building and to other possible information, such as historical or current pictures, helps the work of historians or restorers. Indeed, they can investigate the reason for the project and can carry out simulations on possible hypotheses. All of this can occur in a unique environment that holds all the information on the object of study, organized according to common taxonomies.

Considering the specific field of restoration, information assets can be integrated by analysing materials and related evaluations that record the graphic and numeric representation of different decay phenomena. This allows the researcher to simultaneously identify the causes and extent of decay and the description of the proposed interventions.

Another relevant topic is the integration of geometric information with the internal stratigraphy of architectural elements. Data can be acquired using sophisticated non-invasive techniques, most of which (e.g., acoustic, electromagnetic) can highlight discontinuities in walls, historical vaults, coffered and other ceilings. Nevertheless, while useful, some of these techniques are very costly; sometimes, recognizing the element and associating it with a typological element derived from historical manuals can predict the internal composition with reasonable accuracy.

Taking into account the above considerations, it can be assumed that a BIM approach to cultural heritage acts as a mechatronic hinge among all the involved disciplines, for the following reasons:

  • It allows the optimization of computing systems, thus improving the efficiency and effectiveness of the entire process.

  • It makes use of interdisciplinary knowledge that enables the researcher to organize the hierarchical levels of knowledge to share with the professionals involved, for future interventions and for the protection of cultural heritage.

The latest studies (applications) conducted in the restoration field further develop this relationship by bidirectionally relating graphic design and information. The most well-established procedures explain the different decay processes through the overlay of 2D patterns. This type of representation is typically done directly on drawings, which are orthogonal projections. If the object of study is characterized by curved surfaces (e.g., curved walls, vaulted surfaces), such information will not be represented accurately. Working on a textured 3D model (i.e., a textured mesh) overcomes this issue, because the mapping is done directly on the 3D surfaces and the computed quantities are correct.

Another issue concerns the difficulty of associating parameters with detailed 2D representations. In the parametric modelling environment, it is possible to use adaptive 3D components that are able to adhere to each surface and, at the same time, to record non-geometric information (e.g., date and author of the survey, assembled system and degraded finishing material, ID code and description of the proposed interventions).
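
As a sketch of the non-geometric record such an adaptive component might carry, the following dataclass mirrors the fields listed above; the field names and sample values are our illustration, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecayRecord:
    """Non-geometric data attached to an adaptive mapping component."""
    survey_date: date
    surveyor: str
    assembled_system: str        # e.g., the wall finishing build-up
    degraded_material: str
    intervention_id: str         # ID code of the proposed intervention
    intervention_description: str

record = DecayRecord(date(2017, 5, 12), "C. Santagati",
                     "lime plaster on brick masonry", "plaster",
                     "INT-014", "consolidation and reintegration of plaster")
```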

After data acquisition and management, in terms of processing and data organization, the representation phase can be examined in depth (Fig. 6).

Fig. 6 Semantic subdivision in H-BOM and overlapped information for the restoration

Graphic and Alphanumeric Representation

As is well known, BIM platforms allow multiple data to be associated with a single virtual component. A first group of data controls the graphic content: international guidelines identify "grade" levels for managing the graphic representation of building components in orthogonal projections and spatial views, congruent with the different levels of detail.

A second group concerns alphanumeric data: the international standard identifies the LoD [55] parameter as the degree of reliability of information that can be expected from data contained in the digital model.

Regarding the graphic views associated with different levels of detail, the modelling procedure can be based on dimensioned sketches drawn during the survey, consistent with the different representation scales and with the main scale used for architectural drawings (Fig. 7).

Fig. 7 Grade levels applied to a historical building object model (H-BOM)

From a purely graphic point of view, the resolution degrees of the identified models can be divided into the following grades:

  • Grade 0 (G0), Schematic: at this level, a symbolic or schematic drawing is given in two dimensions and out of scale (e.g., the typical symbols of plan devices).

  • Grade 1 (G1), Concept: this is a 3D depiction with the minimum possible detail. There are no associated metric components, and the depiction has approximate dimensions. The model scale is variable according to the represented object, between 1:500 and 1:200/1:100 scale.

  • Grade 2 (G2), Defined: this is a 3D model with a good level of detail, sufficient to identify its topological, formal and dimensional characteristics. Also, some of the metric characteristics are presented. It can contain 2D detail drawings and can include complete metadata and data. The model scale is variable according to the represented object, between 1:50 and 1:20 scale.

  • Grade 3 (G3), Rendered: this is a 3D model that is equal to that in grade 2 in terms of technical and informative aspects, but it contains much more accurate graphic features and a photorealistic representation.

From the definitions above, it is easy to see that the different grade values closely follow the design development and represent increasing graphic detail. In the case of cultural heritage surveys, and particularly regarding the purposes for which the work is intended, the accuracy of acquisition will be defined, as will the so-called graphic error and the related iconographic/symbolic graphic choices, depending on the chosen representation scale. As a consequence, for these specific purposes the schematic level (grade 0) is not very useful, nor is grade 3, which is often not considered within the set of conventional outputs. The choice of which parts should be represented in conventional orthogonal projection, rather than modelled in three dimensions, becomes an essential issue: only 3D components can be linked to the specific informative content that is congruent with the nature and diversity of the data found.

Regarding the LoD specification, such classifications are useful for new building interventions, where the measurement of the LoD is usually based on economic, topological, construction and maintenance information. In the case of interventions on historical buildings, other variables should be included to allow critical analysis of the richness of the information available; this can be done to measure the reliability of the survey. A more complete survey affords a greater opportunity to integrate the contributions of the various stakeholders who participated in the study.

This procedure includes the retrieval of design archives, photos of the current state, metric survey techniques and degradation surveys. Through the creation of shared parameters (usable for multiple types of components and across several projects), we are able to relate new data to the surveyed element. The added information can then be incorporated into the model through labels, schedules or thematic views.

Sharing

As with interventions on new construction, H-BIM modelling aims to support the management of a life-cycle process that is of high quality, efficient, safe and in compliance with sustainable development goals. According to COBIM2012 [5], building information models (and components) can be used throughout the life cycle of the building, starting from the initial design and continuing during use and facility management (FM) after the construction project is concluded.

One of the main advantages of BIM technology that has not yet been fully explored is the ability to share information among the several stakeholders in the design process in order to avoid needless duplication of information.

The need has therefore arisen to facilitate effective exchanges of information between various software applications by defining standards for sharing data.

Establishing a data archive that gathers all project information in a single integrated project database (IPDB) represents the ideal platform for integration and interaction among the key players in the design phase, thus rationalizing and structuring the process of consulting and subsequently reusing the data needed to describe the building design.

Each data transfer inevitably involves duplication of information, risk of error and thus loss of time. The quest for software interoperability attempts to remove these obstacles in order to enable the individual players in the various stages of the complex design process to share and exchange data automatically. Information exchange is a necessary condition for achieving an exchange of knowledge, because information, to become operational, needs to be interpreted.

Rather than interoperability between software, we choose to discuss semantic interoperability [56], which can be defined as the process that enables software applications or user systems to interpret the meaning of the information exchanged. The ability to manage semantic heterogeneity currently represents a key challenge for information systems integration.

The difficulty is that the meaning changes according to the context and the stage in the process, and various specialist domains have different design requirements, which give rise to different information models. Intuitive and customizable processes thus must be established for handling and editing the data, including its operational meaning [57].

From an information technology (IT) perspective, exchanges are possible both directly and through appropriate file formats (proprietary or not). The Industry Foundation Classes (IFC) standard is a building data model (BDM) developed specifically for the construction industry; it is not proprietary and aims to represent the set of objects commonly used in the building world.
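
As a minimal illustration of consuming IFC data programmatically, the sketch below uses the open source IfcOpenShell library to enumerate entities in a model; the file name is a placeholder.

```python
import ifcopenshell

model = ifcopenshell.open("historical_building.ifc")

# Enumerate walls and their identifying attributes
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)

# Proxy elements often carry the non-standard historical components
proxies = model.by_type("IfcBuildingElementProxy")
print(f"{len(proxies)} proxy elements (candidate custom H-BIM objects)")
```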

Horizontal interoperability can be described as the ability to exchange information between platforms that explore the same themes; vertical interoperability, which is certainly more complex, can be achieved when sharing data can be referred to software that integrates their knowledge using a multidisciplinary approach.

The exchanges of data between the source and the receiving software and the interoperability of BIM products at different phases are still limited due to incomplete or ambiguous translations [45]. Most of the applications are still limited to research and academic communities [58].

One-way interfaces are often used to connect the BIM methodology to expert applications, even though this connection is considered a real added value to the process, as in some case histories relating to finite element method (FEM) analysis of complex historical buildings [46, 48].

We also cite the International Framework for Dictionaries (IFD) [59], defined in ISO 12006-3 [37] and later renamed the buildingSMART Data Dictionary (bSDD), as a terminology standard for BIM libraries and ontologies [17]. It is an object-oriented database of multilingual terms that define concepts used in the construction industry and their respective IFC characteristics, such as denotations of objects, parts, attributes, units or values [60].

While information and data exchange is destined to shortly reach a reasonable level of interoperability among machines, the interpretation and understanding of designers remains latent, which ultimately reduces their exchange of knowledge.

Conclusion

It is clear that the current interest in BIM technology has increased the demand for digital representation techniques in the construction industry. Photogrammetry and laser scanning can make a remarkable contribution to the development of interoperable BIM, where the geometric component must be integrated with survey data. Some reviews [1, 17, 61] have demonstrated satisfactory results for modern buildings, but these results do not transfer well to cultural heritage buildings [62]. This essay has focused on enriching the 3D model from the early stages of data collection and segmentation.

This task will require new advanced manual, semi-automatic and fully automated solutions to create tools for as-built BIM creation.

Therefore, it is important to establish methodologies for performance evaluation; currently, no standard evaluation metrics have been established for as-built BIM creation [1].

The term "measurability" has multiple meanings. While its geometric meaning is well established and repeatedly mentioned in the documents that define the various metric survey operations, the same cannot be said for its ontological meaning, intended as a quantitative assessment of the reliability of a survey.

Future tests will likely lead to the preparation of intervention protocols replicable on a large scale, evaluating the possibility of a critical mediation between Italian public works laws and the specifications of LoD and graphic detail (grade), applying them to architectural heritage in order to develop proposals and measure their pertinence [63].

This essay has described research related to improving the efficiency and effectiveness of more conventional procedures. The final model synthesizes the information, which is usually fragmented (primarily because it belongs to different representations), and enriches it with new methods of analysis and management of data. This opens up the possibility for new scenarios of “knowledge” [44], particularly regarding the restoration and preservation of built architectural heritage, but also for the dissemination and enhancement of the historical architecture. In this regard, taking into account the statements of the European Charter of Architectural Heritage, the cultural asset is defined as “an irreplaceable expression of the diversity of cultures”, and the aim of conservation disciplines is to “preserve the aesthetics and value of the monument” [64]. Indeed, the elements that constitute historical architecture are not only physical artefacts, but hold intangible value, based on highly integrated approaches of programmed preservation [45].

The EU INCEPTION project, funded under the H2020 call "Reflective 7: Advanced 3D modelling", moves in this direction. Among its many objectives is the development of an open-standard Semantic Web platform for accessing, processing and sharing interoperable digital models resulting from 3D surveys and data capture.

Therefore, one must imagine an evolutionary route that begins with a building information model implicitly containing very heterogeneous data. This process is characterized by survey, thematic analysis and maintenance of the building, venturing out towards a possible building knowledge model [56, 65], and is thus intended to be used to gain knowledge of the building structure.

A true cultural step forward must be made in the future, and technology based on human cognitive and communication processes should be used. Intuitive and customizable methods will be important in moving from information to knowledge.

Authors' Contribution

The Introduction and Conclusion were written by all authors. Data Acquisition was edited by F. Chiabrando. Data Processing and Integration was edited by F. Chiabrando, M. Lo Turco and C. Santagati. Parametric Modelling and Semantics was edited by C. Santagati. Measuring the Level of Accuracy was edited by V. Donato. Data Enrichment and Organization was edited by M. Lo Turco. Graphic and Alphanumeric Representation was edited by C. Santagati and V. Donato. Sharing was edited by V. Donato and M. Lo Turco.