Abstract
With the progress of 2D/3D visualization systems, models and software that effectively integrate graphical content with domain-specific knowledge have become the adopted solution for the interrogation, understanding, interpretation, and manipulation of visualized information. This paper introduces a software module that extends our data-lifting toolbox with automatic export of photogrammetric information from the Agisoft software (photogrammetric processing of digital images and generation of 3D spatial data) into a semantic knowledge base representation.
1 Introduction
With the progress of 3D technologies, photogrammetry has become the adopted solution for representing science-driven data, turning photographs of anything from small finds to entire landscapes into accurate 3D models.
This paper proposes a module that explicitly couples the photogrammetry process to a semantic knowledge base modeled by our photogrammetry-oriented ontology, ArpenteurFootnote 1. This coupling takes the form of an export module for AgisoftFootnote 2 that transforms the spatial 3D data into a knowledge base modeled by the Arpenteur ontology. This export step is particularly useful in the pipeline of our photogrammetry-driven toolbox ArpenteurFootnote 3 for semantic data lifting: from image gathering to 3D/VR modeling coupled with knowledge representation by the Arpenteur ontology.
This module is based on Semantic Web technologies, where ontologies provide the theoretic and axiomatic basis of the underlying knowledge bases. In this context, different approaches have been proposed to permit semantic representation and modeling of synthetic 3D content; a state-of-the-art review is given in [7].
The paper is organized as follows: Sect. 2 presents our solution for mapping the Agisoft Python API to the Arpenteur ontology concepts, detailing the adopted photogrammetric configuration. Section 3 depicts a use case scenario of exporting and exploiting the Xlendi shipwreck data. Finally, Sect. 4 concludes and presents future work.
2 Agisoft to Arpenteur Concepts Mappings
The mapping between the two software packages is limited to the generic concept of a photogrammetric model as defined by Kraus [8]: photographs, camera, internal and external orientation, 3D points, and their observations on the photographs. Feature descriptors and the dense point cloud, for example, are not supported by this mapping.
The two photogrammetry packages manipulate similar concepts, but translating digital data from one to the other naturally requires some adjustments. For example, the concept of Photograph in Arpenteur is similar to Camera in Agisoft.
In Arpenteur, a Photograph is the image produced by a camera (film-based or digital), and the Camera is the object that produces the Photograph. This Camera is translated in Agisoft by the concept of Sensor. Note that the concept of Sensor in Agisoft is more complex and is not fully used; for example, it supports the notion of Plane, which refers to a multi-sensor camera-rig approach not used in Arpenteur. The Arpenteur Camera's radial and decentring distortion is therefore mapped to the Agisoft Sensor Calibration, which manages standard internal orientation and lens distortion. 3D points and their 2D projections on images are, of course, present in both packages, even if they are not modeled in the same way. An ontology mapping thus allows the exchange of information between the two packages. Figure 1 recapitulates the mapping pattern used for linking the Agisoft Python API to the Arpenteur photogrammetry concepts.
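As an illustration, the mapping pattern of Fig. 1 can be sketched as a simple lookup applied while walking the Agisoft Python API. The class names on the left follow the Metashape API; the Arpenteur concept names on the right are indicative of the ontology vocabulary, not its exact IRIs.

```python
# Sketch of the Agisoft -> Arpenteur concept mapping (cf. Fig. 1).
# The arpenteur: names are illustrative stand-ins for the ontology terms.
CONCEPT_MAPPING = {
    "Camera":      "arpenteur:Photograph",    # one Agisoft Camera is one shot, i.e. a Photograph
    "Sensor":      "arpenteur:Camera",        # the physical device that produced the Photograph
    "Calibration": "arpenteur:RadialDecentringDistortion",  # internal orientation + lens distortion
    "Point":       "arpenteur:Point3D",       # tie points of the sparse cloud
    "Projection":  "arpenteur:Observation2D", # 2D measurement of a point on a photograph
}

def map_concept(agisoft_class: str) -> str:
    """Return the Arpenteur concept for an Agisoft API class, if mapped."""
    try:
        return CONCEPT_MAPPING[agisoft_class]
    except KeyError:
        # e.g. "DenseCloud" falls outside the supported photogrammetric model
        raise ValueError(f"{agisoft_class} is outside the supported mapping")

print(map_concept("Sensor"))  # -> arpenteur:Camera
```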
Radial Distortion Configuration
Since both approaches are based on a standard description of close-range photogrammetry, it is relatively easy to establish a direct mapping of concepts between Arpenteur and Agisoft. The only notable difference is how radial distortion is described.
Distortion is a physical phenomenon that can greatly affect an image’s geometry without impairing its quality or reducing the information it contains. Applying the projective pinhole camera model is often not possible without taking into account the distortion caused by the camera lens, so most photogrammetry software models distortion using the well-known polynomial approach proposed by Brown in the 1970s [4]. However, although we have the equations to compensate for the distortion, computing the inverse function in order to apply such a distortion is not obvious. This is a crucial technical point: most photogrammetric software uses the same equations to manage distortion, but some packages use the mathematical model to apply distortion and others to compensate for it, which is exactly the situation between Arpenteur and Agisoft. To solve this inverse radial distortion problem, which is not straightforward in the polynomial setting, some software uses an iterative solution [1]. In our case this is not possible: we need to express the polynomial coefficients for Agisoft from the known coefficients used in Arpenteur.
A solution is proposed in [5] and is implemented in this paper; it gives closed-form formulas for the first four coefficients \(b_n\) used by Agisoft in terms of the four coefficients \(-k_n\) used in Arpenteur.
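The closed-form coefficients of [5] amount to a power-series reversion of the Brown polynomial. The following sketch assumes the standard model \(r_d = r(1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8)\) and our own coefficient naming (not the Arpenteur or Agisoft API, and up to each package's sign convention); it derives the first four inverse coefficients by series reversion and checks them with a round trip:

```python
# Sketch: inverse radial-distortion coefficients by series reversion of the
# Brown model r_d = r * (1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8).
# Coefficient names and the standalone helpers are illustrative, not the
# actual Arpenteur/Agisoft API.

def inverse_distortion_coefficients(k):
    """Given forward coefficients [k1..k4], return [b1..b4] such that
    r ~= r_d * (1 + b1*r_d^2 + b2*r_d^4 + b3*r_d^6 + b4*r_d^8)."""
    k1, k2, k3, k4 = k
    b1 = -k1
    b2 = 3*k1**2 - k2
    b3 = -12*k1**3 + 8*k1*k2 - k3
    b4 = 55*k1**4 - 55*k1**2*k2 + 5*k2**2 + 10*k1*k3 - k4
    return [b1, b2, b3, b4]

def apply_radial(r, c):
    """Apply a radial polynomial r' = r * (1 + c1*r^2 + c2*r^4 + ...)."""
    r2 = r * r
    return r * (1 + c[0]*r2 + c[1]*r2**2 + c[2]*r2**3 + c[3]*r2**4)

# Round trip: distorting then un-distorting should recover r up to the
# truncation order of the series.
k = [-0.12, 0.03, -0.004, 0.0005]   # hypothetical calibration values
b = inverse_distortion_coefficients(k)
r = 0.3
r_back = apply_radial(apply_radial(r, k), b)
print(abs(r_back - r) < 1e-6)       # -> True
```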
3 Xlendi Shipwreck Use Case
We consider the Phoenician shipwreck of Xlendi (Malta) as a use case scenario. The data was gathered by modern photogrammetry techniques presented in previous work [6] in the framework of the GROPLAN projectFootnote 4. Successive dives on the Xlendi wreck have produced several temporal datasets corresponding to seven survey dates.
A dataset giving the typological and morphological description of the Xlendi artifacts was published as Linked Open Data in previous work [3]. While that dataset was entered manually by archaeologists, the photogrammetric description (camera settings, interior and exterior orientation parameters, extracted and matched 2D/3D points) was automatically exported from the Agisoft software to an ontology file containing the TBox + ABox description. This paper introduces the export module used, a Python script that can be called directly from the Agisoft Metashape software. The script is available as open source on GitHubFootnote 5.
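A minimal sketch of the export step follows, assuming the camera parameters have already been read from the Metashape document (in the real script they come from the Metashape Python API, e.g. the chunk's cameras and sensors); here they are hard-coded for illustration, and both the namespace IRI and the arpenteur: property names are hypothetical stand-ins for the actual ontology terms.

```python
# Sketch: serialize one photograph's exterior orientation to a Turtle ABox
# fragment. The namespace and property names are hypothetical; the real
# module uses the Arpenteur ontology vocabulary.

PREFIX = "@prefix arpenteur: <http://example.org/arpenteur#> .\n\n"

def photograph_to_turtle(label, x, y, z, omega, phi, kappa):
    """Emit one arpenteur:Photograph individual with position (x, y, z)
    and orientation angles (omega, phi, kappa)."""
    iri = f"<http://example.org/xlendi/{label}>"
    return (
        f"{iri} a arpenteur:Photograph ;\n"
        f"    arpenteur:x {x} ; arpenteur:y {y} ; arpenteur:z {z} ;\n"
        f"    arpenteur:omega {omega} ; arpenteur:phi {phi} ; arpenteur:kappa {kappa} .\n"
    )

# In Metashape, label/position/angles would be read per camera in the chunk;
# the values below are made up for illustration.
ttl = PREFIX + photograph_to_turtle("John_Stills_CC-309", 12.4, -3.1, 98.7, 0.01, -0.02, 1.57)
print(ttl)
```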
In the context of publishing the Xlendi shipwreck data as LOD, we stored the different datasets in an Apache Jena TDB storeFootnote 6 embedded in a FusekiFootnote 7 server. Seven SPARQL GUI user interfaces are accessible online via our 2D/3D Web tools, allowing users to query the Xlendi datasets corresponding to the seven survey dates. Listing 1.1 shows an example of a SPARQL query retrieving the position and orientation settings of an Xlendi photograph, namely the “John_Stills_CC-309”Footnote 8 photograph from the 2018-09-21 datasetFootnote 9.
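The flavour of such a query can be sketched as follows; this is not the actual Listing 1.1, and the namespace and property names are hypothetical stand-ins for the Arpenteur ontology terms:

```sparql
# Hypothetical sketch in the spirit of Listing 1.1 -- property names
# are illustrative, not the actual Arpenteur ontology vocabulary.
PREFIX arpenteur: <http://example.org/arpenteur#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?x ?y ?z ?omega ?phi ?kappa
WHERE {
  ?photo a arpenteur:Photograph ;
         rdfs:label "John_Stills_CC-309" ;
         arpenteur:x ?x ; arpenteur:y ?y ; arpenteur:z ?z ;
         arpenteur:omega ?omega ; arpenteur:phi ?phi ; arpenteur:kappa ?kappa .
}
```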
The lifted Xlendi datasets are published in a 2D/3D Web representation coupled with the knowledge-base datasets, in user-friendly web tools available onlineFootnote 10 for querying and semantic consumption of the data, as detailed in [2].
4 Conclusions
In this paper, we introduced a module (a Python script) for the automatic export of the photogrammetric description into a knowledge base dataset modeled by the Arpenteur ontology. This module extends the Agisoft software used in our photogrammetry process; the automatic export is handled by mapping the Arpenteur ontology to the tool’s API. A real-data scenario, the Xlendi wreck, was presented, in which the photogrammetric description was automatically exported by this new module.
In parallel with the photogrammetric description, we are currently working on an ontology-based virtual reality representation to provide a panoramic view (2D/3D/VR) of the data coupled with the semantic knowledge.
References
Alvarez, L., Gómez, L., Sendra, J.R.: An algebraic approach to lens distortion by line rectification. J. Math. Imaging Vis. 35(1), 36–50 (2009)
Ben Ellefi, M., et al.: Ontology-based web tools for retrieving photogrammetric cultural heritage models. In: Underwater 3D Recording & Modeling. ISPRS, Limassol, Cyprus (2019)
Ben Ellefi, M., Nawaf, M., Sourisseau, J.C., Gambin, T., Castro, F., Drap, P.: Clustering over the cultural heritage linked open dataset: Xlendi Shipwreck. In: Proceedings of the Third International Workshop on Semantic Web for Cultural Heritage co-located with the 15th Extended Semantic Web Conference, SW4CH@ESWC 2018. LNCS, Heraklion, Crete, Greece, vol. 8, pp. 1–10 (2018)
Brown, D.C.: Close-range camera calibration. Photogramm. Eng. 37(8), 855–866 (1971)
Drap, P., Lefevre, J.: An exact formula for calculating inverse radial lens distortions. Sensors 16(6), 807 (2016)
Drap, P., et al.: Underwater photogrammetry and object modeling: a case study of Xlendi Wreck in Malta. Sensors 15(12), 30351–30384 (2015)
Flotyński, J., Walczak, K.: Ontology-based representation and modelling of synthetic 3D content: a state-of-the-art review. In: Computer Graphics Forum, vol. 36, pp. 329–353 (2017)
Kraus, K., Jansa, J., Kager, H.: Photogrammetry, vol. 1 & 2. Ferd. Dümmler Verlag, Bonn (1997)
© 2019 Springer Nature Switzerland AG
Ben Ellefi, M., Drap, P. (2019). Semantic Export Module for Close Range Photogrammetry. In: Hitzler, P., et al. The Semantic Web: ESWC 2019 Satellite Events. ESWC 2019. Lecture Notes in Computer Science(), vol 11762. Springer, Cham. https://doi.org/10.1007/978-3-030-32327-1_1