1 Introduction

Throughout history, human beings have felt the need to visually represent what they wish to communicate. The evolution of graphical representation methods has been most noticeable in the second half of the twentieth century, when drawing moved from paper to the computer screen. This development continues with the advance of technology, which has made it possible to create two- or three-dimensional spaces thanks to computer graphics.

Computer graphic representation systems have become very important in architecture. Thanks to them, it is possible to visualize the final appearance of a project before its execution.

Virtual reality techniques aim to show us, in the most realistic way possible, what is generated using computer software. With the use of electronic devices, it has become possible to immerse oneself in three-dimensional architectural space, whether the space modeled is real or not.

Among the different virtual reality techniques applied to the representation and valorization of architectural heritage, we will focus on comparing the performance of 360° spherical panoramic photographs with the real-time visualization of a model generated by photogrammetric and three-dimensional survey, rendered using the Unreal Engine 4 videogame engine. In this way, we determine which method is most effective for representing an existing historical building.

The building studied is the Almudín of the city of Valencia, a historical monument that has evolved over the course of history, from a simple portico where mercantile activities were carried out in the twelfth century to its current configuration of basilical typology with a covered courtyard (Añón et al. 1996). This research will allow us to perform a virtual visit of this monument in an immersive and interactive way through digital platforms, either from 360° spherical photographic panoramas or through the real-time visualization of a virtual 3D model.

2 Virtual Tours with Spherical Panoramas

The development of immersive panoramic photography has led to a revolution in the generation of virtual tours. They now abound on the Internet, one of their main exponents being Google Street View. Spherical panoramas can be produced with special panoramic cameras, or with conventional cameras by joining, or stitching, together different images taken from the same point of view with the help of a panoramic head. In this study, the second option was chosen, since it is a much less expensive and more affordable option.

The elaboration process consisted of three phases. First, administrative permits for the photographic capture were requested from the City Council of Valencia. Once the permits had been obtained, the fieldwork was carried out, consisting of the photographic capture of the panoramas. Finally, these images were processed with computer software.

For the realization of 360° spherical photographic panoramas, it is necessary to have a sufficient number of images to cover the whole scene, guaranteeing a minimum overlap of one third between adjacent photographs so that the stitching program can find enough homologous points to merge them correctly. Since a spherical panorama usually contains high contrasts between lights and shadows, the HDR (High Dynamic Range) capturing technique has been used, which consists of performing exposure bracketing by taking three pictures at −3, +0 and +3 EV for each of the camera positions. A wide-angle lens with an equivalent focal length of 28 mm and a Canon 7D reflex camera, mounted on a tripod with a Manfrotto 303 SPH panoramic head, have been used.

The camera has been positioned at the level of the human eye and a total of 114 photographs per panorama have been taken in 38 camera positions (exposure bracketing of 3 photographs per position). These correspond to three rings of 12 positions per turn, each position separated by horizontal rotation increments of 30°: the first ring with a horizontal optical axis, the second with the optical axis forming an angle of +45° with the horizontal, and the third forming a vertical angle of −45°. Finally, two additional positions were taken with a vertical optical axis: the zenith facing the sky and the nadir facing the ground.
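The capture plan described above can be sketched as a short script that enumerates the positions and confirms the photograph count (angles and counts are taken from the text; the script itself is only illustrative):

```python
# Enumerate the panorama capture plan: three rings of 12 positions
# (30° horizontal steps) at pitch 0°, +45° and -45°, plus the zenith
# and nadir, with a 3-shot exposure bracket at each position.
BRACKET_EV = (-3, 0, +3)

def capture_positions():
    positions = []
    for pitch in (0, 45, -45):          # the three rings
        for i in range(12):             # 12 yaw steps of 30° per ring
            positions.append((i * 30, pitch))
    positions.append((0, 90))           # zenith, facing the sky
    positions.append((0, -90))          # nadir, facing the ground
    return positions

positions = capture_positions()
shots = [(yaw, pitch, ev) for (yaw, pitch) in positions for ev in BRACKET_EV]
print(len(positions), len(shots))       # 38 positions, 114 exposures
```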

In order to obtain the HDR images, the photographs taken using the exposure bracketing technique (Fig. 1) have been imported into the free software Zero Noise, where they are merged into a final high dynamic range image. This process results in thirty-eight HDR images per panorama.
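The general idea behind merging a −3/0/+3 EV bracket can be illustrated with a minimal exposure-fusion sketch: each pixel of the result is a weighted average of the three exposures, favouring well-exposed values. Zero Noise's actual algorithm differs; this is only a didactic simplification:

```python
# Minimal exposure-fusion sketch (illustrative, not Zero Noise's method):
# weight each exposure per pixel by how close it is to mid-grey (0.5).
def fuse(exposures):
    """exposures: list of images, each a list of pixel values in [0, 1]."""
    fused = []
    for pixels in zip(*exposures):
        # Hat-shaped weight: 1 at mid-grey, 0 at pure black or white.
        weights = [1.0 - abs(p - 0.5) * 2.0 for p in pixels]
        total = sum(weights)
        if total == 0:                   # all exposures clipped
            fused.append(sum(pixels) / len(pixels))
        else:
            fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

dark, mid, bright = [0.1, 0.0], [0.4, 0.2], [0.9, 0.6]   # -3, 0, +3 EV
print(fuse([dark, mid, bright]))
```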

Fig. 1.
figure 1

Exposure bracketing technique. Exposures of −3, +0 and +3 EV

The panoramas are created by means of the Hugin stitching program, which is also free software. The process begins with the import of the thirty-eight photographs that compose each panorama. The program then applies a homologous point detection algorithm to create the control points needed for the orientation and spatial arrangement of the photographs, in order to create a single 360° panoramic image. The image is optimized geometrically to correct radial distortions, resulting in a self-calibration of the camera. Photometric parameters are also optimized in order to avoid light differences between photographs caused by the vignetting effect or by variable exposure between shots. After checking and correcting the image in the viewfinder, the photographs are assembled. In our case, we opted for an output resolution of 12,000 × 6,000 pixels in TIF format. This process results in a panoramic image with high dynamic range.
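The stitched output is an equirectangular image with a 2:1 aspect ratio: longitude covers 360° across the width and latitude 180° across the height. The mapping between a pixel of the 12,000 × 6,000 px panorama and viewing angles can be sketched as follows (a standard equirectangular convention, given here only for illustration):

```python
# Pixel <-> angle mapping for an equirectangular panorama of
# 12,000 x 6,000 px, as produced by the stitching step above.
W, H = 12000, 6000

def pixel_to_angles(x, y):
    lon = (x / W) * 360.0 - 180.0     # -180°..+180°, left to right
    lat = 90.0 - (y / H) * 180.0      # +90° (zenith) .. -90° (nadir)
    return lon, lat

def angles_to_pixel(lon, lat):
    x = (lon + 180.0) / 360.0 * W
    y = (90.0 - lat) / 180.0 * H
    return x, y

print(pixel_to_angles(6000, 3000))    # image centre -> (0.0, 0.0)
```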

Since a high dynamic range image cannot be reproduced faithfully due to the limitations of current monitors’ chromatic or gamut range, it is necessary to perform what is called tonal mapping, i.e., the conversion of the image to a low dynamic range image in which the best tones have been chosen for the highlight, mid-tone and shadow ranges. Before tonal mapping, Photoshop has been used to make three images from the HDR panorama with exposures of −3, +0 and +3 EV.

The Photomatix software has been used for tonal mapping. With this procedure we can select the values of light, shadow, contrast, saturation, number of midtones, etc. that are necessary for a credible representation of the image (Fig. 2).
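Photomatix's operator is proprietary, but the essence of tonal mapping can be illustrated with the classic Reinhard global operator, which compresses unbounded HDR luminance values into the displayable range [0, 1):

```python
# Reinhard global tone-mapping operator (illustrative stand-in for the
# proprietary Photomatix operator): L_out = L / (1 + L), which maps
# any non-negative HDR luminance into [0, 1) for display.
def reinhard(luminances):
    return [L / (1.0 + L) for L in luminances]

hdr = [0.05, 0.5, 4.0, 40.0]          # linear HDR luminances
print(reinhard(hdr))
```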

Fig. 2.
figure 2

360° spherical photographic panorama of the access to the Almudín of Valencia

Once all 360° panoramas have been obtained, they are imported into the Krpano software, which allows the creation of a virtual tour from them. Panoramic photographs are transformed into cubic panoramic images, resulting in six rectilinear images per panorama, corresponding to each of the faces of a cube (Figs. 3, 4, 5, 6 and 7). In the image corresponding to the pavement, the tripod used for taking the photographs and the shadow it casts are removed with a digital photo retouching program.
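The cubic reprojection can be sketched in a few lines: each pixel of a cube face defines a 3D view direction, which is converted to the spherical angles used to sample the equirectangular panorama (shown here for the front face only, as an illustration of the geometry rather than Krpano's implementation):

```python
import math

# Equirectangular-to-cube geometry sketch: a cube-face pixel becomes a
# 3D direction, then spherical angles for sampling the panorama.
def front_face_direction(u, v):
    """u, v in [-1, 1] on the front cube face (z = 1), normalized."""
    x, y, z = u, v, 1.0
    n = math.sqrt(x * x + y * y + z * z)
    return x / n, y / n, z / n

def direction_to_angles(x, y, z):
    lon = math.degrees(math.atan2(x, z))   # yaw
    lat = math.degrees(math.asin(y))       # pitch
    return lon, lat

d = front_face_direction(0.0, 0.0)         # centre of the front face
print(direction_to_angles(*d))             # (0.0, 0.0)
```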

Fig. 3.
figure 3

360° spherical photographic panorama projected onto a cube

Fig. 4.
figure 4

360° spherical photographic panorama with an additional layer showing a previous state of the building

Fig. 5.
figure 5

360° cubic photographic panorama of the access of the Almudín of Valencia

Fig. 6.
figure 6

360° cubic photographic panorama of the west gallery of the Almudín of Valencia

Fig. 7.
figure 7

360° cubic photographic panorama of the outside of the Almudín of Valencia

Next, we introduce a series of interactive elements called hotspots, which can be of three types: those that mark the links between panoramas that define the tour, those that offer additional information in textual form about specific representative elements, and those that display an image of the historical evolution of the monument. This is done by programming in a language similar to JavaScript, included in the Krpano environment. The visualization of the 360° virtual panoramic photo tour is done through the Internet, so it is necessary to upload the files created in Krpano to a web server.
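The three hotspot types can be sketched in Krpano's XML configuration. The scene names and the `show_info_text`/`show_history_image` actions below are hypothetical placeholders that would be defined elsewhere in the tour; only `loadscene` and the `ath`/`atv` positioning attributes follow Krpano's documented conventions:

```xml
<!-- Hypothetical sketch of a Krpano scene with the three hotspot types -->
<krpano>
  <scene name="scene_access">
    <!-- link hotspot: jumps to the next panorama of the tour -->
    <hotspot name="to_west_gallery" ath="35" atv="5"
             onclick="loadscene(scene_west_gallery);" />
    <!-- info hotspot: opens a text layer about a representative element -->
    <hotspot name="info_portico" ath="-60" atv="0"
             onclick="show_info_text(portico);" />
    <!-- history hotspot: overlays an image of a previous state -->
    <hotspot name="history_view" ath="120" atv="-10"
             onclick="show_history_image(previous_state);" />
  </scene>
</krpano>
```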

3 Virtual Tours Using Unreal Engine

The procedure has three phases: the compilation of graphic material, the three-dimensional modeling of the monument, and the processing of the materials and lighting necessary for its correct interpretation (see Shah 2014; Bürguer 2013).

The planimetric documentation is acquired through the collection of data from the historical heritage archive of the city of Valencia, and through the delineation of drawings from images obtained with the photogrammetric technique, which uses the 360° panoramas produced in Hugin, exported as rectilinear-projection images for their correct interpretation.

Using the planimetry, the three-dimensional model of the monument is developed with a CAD program. The elements are classified in layers for subsequent editing, paying special attention to the separation of all the elements by material and by the direction of the normal of the face on which each material will be applied. This procedure allows a realistic visualization of the textures. In the present case, the elements have been classified by placing those of the same material in the same layer, ignoring the direction of material projection on the faces of the object. The three-dimensional model is positioned at the coordinates X = 0, Y = 0, Z = 0, with the metric system in centimeters, and exported to the program 3ds Max for the creation of the material mapping.

Mapping is done for each imported layer by modifying its UVW Map. This modifier controls the projection of materials on the surfaces of an object, affecting their coordinates and scale. Once the modification is completed, channels and maps are merged with the geometry. Each element of the model is then exported in .FBX format, marking the options that allow the triangulation of the model and the preservation of the reference system, as well as the smoothing and the joints between elements, taking into account that the metric unit is the centimeter and the Z axis is the vertical one.
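The effect of a UVW Map modifier can be illustrated with the simplest case, a planar projection: a 3D point is projected onto a plane-aligned gizmo and scaled to obtain its texture coordinates. This is only a didactic sketch, not 3ds Max's implementation:

```python
# Illustrative planar UVW projection: project a 3D point onto the XY
# plane of a mapping gizmo to obtain texture coordinates (u, v);
# the W (depth) component is discarded.
def planar_uv(point, origin=(0.0, 0.0, 0.0), scale=(1.0, 1.0)):
    x, y, z = point
    ox, oy, oz = origin
    u = (x - ox) / scale[0]
    v = (y - oy) / scale[1]
    return u, v

# A point 50 cm from the gizmo origin on both axes, with a 100 cm gizmo:
print(planar_uv((150.0, 50.0, 30.0),
                origin=(100.0, 0.0, 0.0),
                scale=(100.0, 100.0)))   # (0.5, 0.5)
```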

Once in Unreal Engine 4, a new project is created in which we will include the necessary actions for the reproduction of the virtual space, which are as follows:

  • Player Start, which determines the start of the tour.

  • Light Source, which refers to sunlight, both its intensity and orientation.

  • Sky Light, which provides the diffuse lighting needed to give clarity to the environment. As with the previous action, it allows us to control parameters such as light intensity, scale, etc.

  • Atmospheric Fog, which is the atmospheric mist created by the environment, and that defines the horizon. It also allows modifying its density, scale, color, etc.

  • BP_Sky_Sphere, with which we create the sky, the amount of clouds, their scale, density, wind speed, etc.

We then import all .FBX files. The option to generate the light mapping is enabled and the automatic creation of a collision for each object is disabled. This computes the incidence of light on each of the elements and avoids the creation of unwanted simple collisions, such as in the openings of the porticoes, which would prevent the tour from passing through them.

Next, each of the components of the model is modified, creating a complex collision that corrects the error produced by the simple collision. Once this is established for all files, they are selected and dragged into the workspace, where their relationship with the first actions can be observed.

Materials are created from photographs of the monument, which are imported into the “Textures” folder. In the “Materials” folder we choose the option to create a new material and, adjusting parameters such as texture, scale, etc., refine it until the desired effect is obtained. For each element, a material is produced and subsequently assigned to it (Fig. 8).

Fig. 8.
figure 8

Material assignment in Unreal Engine 4

Subsequently, the interior lights are introduced, giving realism to the scene. The reflections that these lights produce on the different materials, where applicable, as with aluminum carpentry or glass, are also applied by means of reflection boxes or spheres.

The next step is the insertion of a post-processing volume that allows us to control the tonality and intensity of the colors, the vignetting, the field blur and other parameters that give us a greater sensation of realism on the visualized scene (Figs. 9 and 10).

Fig. 9.
figure 9

View of the access to the Almudín of Valencia using Unreal Engine 4

Fig. 10.
figure 10

View of the central nave of the Almudín of Valencia using Unreal Engine 4

To check if the above steps are optimal, it is possible to create low-resolution simulations of the scene, and use them as a preview of the final result.

Finally, an executable file is generated, which allows the processed space to be visualized on various electronic devices without Unreal Engine 4 itself being installed.

4 Conclusions

Both presented methods make it possible to enhance the value of historical heritage. By means of the visual information provided and its dissemination through the Internet, a virtual scenario is created that promotes the dissemination and knowledge of the historical building. The large-scale creation of this type of virtual tour of historic buildings would allow the creation of a library of historical heritage that would be of great interest for the valorization of architectural heritage.

Virtual reality offers us the possibility of total immersion into the project, through practices so common in the world of architecture such as photography or three-dimensional modeling. This is a great advance with respect to the methods commonly used in architectural representation, such as obtaining static images or photomontages obtained with post-production programs.

In our case, since the monument required the production of photoplanes in order to obtain the inner sections needed to generate the drawings that were subsequently used for three-dimensional modeling, greater effort was required for the method using Unreal Engine 4. However, thanks to the 360° panoramic photographs that were taken, the production process was quicker. Consequently, if a simple building has to be modeled and the necessary planimetry is readily available, the virtual reality procedure using a videogame engine can provide an optimal result.

The visualization of the space is conceived differently in the two methods. The 360° photographic panoramas make it possible to obtain a real, high-quality view from a fixed point. The inclusion of references through hotspots and of links between panoramas makes it possible to create an informative virtual visit. The virtual representation using Unreal Engine offers a continuous visit without restrictions of movement, unlike the previous case, in which it is necessary to jump from panorama to panorama, offering a discontinuous vision of the building. This produces a better understanding of the space, since the visitor can tour the entire monument in real time, just like in a video game. However, it must be borne in mind that the material and light mapping of objects is a complex task, so achieving an image as real as the one obtained with panoramic photography is difficult and laborious.

As for the platforms on which each of the results obtained can be viewed, there is a clear advantage of 360° panoramic images in comparison with the videogame engine, since due to the small size of the final file and its viewing via the Internet, it is possible to obtain a virtual representation in a greater number of electronic devices.