1. Introduction
Extended Reality (XR) is a term encompassing all current and future combined real and virtual environments, such as VR (Virtual Reality), AR (Augmented Reality) and MR (Mixed Reality). In the last five years, vast advancements have taken place in the field of immersive media [1], both in terms of production and consumption. In terms of production, computing and especially graphics processing technology has gone through three generations of evolution, now offering features such as real-time ray tracing in both consumer GPUs (NVIDIA RTX) and game consoles (ray-tracing-capable APUs by AMD in the Microsoft Xbox Series X and Sony PlayStation 5). In terms of video capture, RED, Kandao, Insta360, Vuze and others have released cameras that can capture the world in 3D-360. In terms of media consumption, companies such as Oculus, HTC, Valve and HP have introduced affordable VR headsets capable of displaying immersive content. Efforts in AR and MR have also been substantial from leading companies, including Apple's ARKit [2], Google's ARCore [3] and Microsoft's Holo/MR efforts with their HoloLens systems [4]. Apple's more recent announcement of its forthcoming Apple Vision Pro device also promises to push the boundaries, focusing on Spatial Computing and offering industry-leading resolution and fidelity.
XR can be used for immersive 3D visualisation in geoinformation and geological sciences, where the virtual location can be based on geospatial datasets [5]. Such virtual geosites can be used for popularising geoheritage to a general audience as well as for engaging younger demographics, who are usually interested in more cutting-edge forms of communication [6]. Other advantages of XR for geoheritage sites include the ability to visit locations around the clock, regardless of weather conditions, and to observe features that are difficult to access; for example, a fossilised tree trunk might be too tall, necessitating the use of scaffolding to examine it up close, a potential health and safety risk for general public observation. Supporting multiple observers at the same location or artefact is a further advantage, making it possible for a variety of audiences, including conservation professionals, to observe a specific artefact up close at the same time.
UAS refers to Unmanned Aerial Systems. Such systems have been rapidly advancing, with companies such as DJI continually updating their model offerings with drones aimed at both professionals and hobbyists, ranging from portable foldable models such as the Spark, Mavic and Mini series to larger and more versatile drones with interchangeable payloads such as the Matrice series. Drones offer rather versatile options in terms of optical viewpoints, practically allowing cameras or other scanning equipment to be positioned in areas that are not easily reachable by terrestrial means. They are also efficient when it comes to photographing, mapping or otherwise gathering sensor data over wide areas. UAS surveys have also been used in archaeology, combining LiDAR scans and photogrammetry techniques to help observe historical areas, interpret locations and make new discoveries that may not be visible to the naked eye, such as additional possible structures at the same location [7]. Geospatially aware datasets ensure accurate placement, aiding both reconstruction for an XR environment and navigation within it.
When it comes to cameras, there have also been developments in both the software and hardware domains, enabling the capture of higher resolutions with greater fidelity as well as different formats and practically all possible fields of view. One of those developments has been 360º cameras, where fisheye optics are used in conjunction with multiple sensors, resulting in real-time panoramas [8] that previously required extra stitching work to produce and were less suitable for capturing visual material when there are moving elements in the target area. Multiple-sensor fisheye arrangements also make 3D-360 stereoscopic results possible for use in immersive media.
Another major development in digital cameras has been in onboard processing, with newer technologies over the years allowing images to be acquired with less sensor noise while enabling more pixels to be packed into smaller sensors [9]. The advent of the smartphone also helped speed up development in that area, since every smartphone user became, in effect, a digital camera user, and the target audience was no longer focused mainly on photography professionals and enthusiasts. That, combined with fierce competition in the sector, brought vast camera improvements every year, with fidelity rapidly improving to the point where, in certain cases, smartphones rival professional digital cameras. Machine Learning and Artificial Intelligence have also become integral to mobile phone chipsets, further aiding processes such as better detail extraction while taking pictures with the integrated camera, practically resulting in Machine Learning Computational Photography. This exciting development essentially democratises high-fidelity photography, a field once exclusive to high-end equipment. Material utilising such methods will also be collected and investigated, with results compared in the software.
When it comes to the digitisation of subjects in 3D, photogrammetry is a versatile method that relies on material captured with cameras. In order to create believable immersive content for deployment in XR, the physical objects need to be captured with as much detail as current technology allows. To further aid immersion, both the visual and the audio domains are included, the audio being in the form of 360º spatial audio (ambisonics).
XR applications have been implemented at geoheritage sites via a variety of methods. Both Augmented Reality (AR) and Virtual Reality (VR) techniques have been used to digitally represent geoheritage sites and artefacts; however, the process tends to focus more on the transmissibility of information and the preservation of the sites and artefacts in digital form, with less emphasis on factors such as immersion and realistic approximation of the actual artefacts and sites [1]. While the areas and objects can be accurately digitised and represented in terms of size, dimensions and geolocation, they can lack important details such as high-resolution meshes and textures and appropriate photorealistic shading, or can even completely ignore the audio aspect, which is important for fully representing the environment of a location when experienced in XR.
This paper investigates and uses a variety of the aforementioned means as multisensory media techniques in order to create convincing XR representations of fossilised tree trunks. Visual material from a variety of aerial and terrestrial sources is compared, and appropriate processes have been applied to achieve more realistic detail and therefore more immersive results when deployed in XR. The innovation of this study is the use of computational-photography-aided imagery, derived from mobile phones, in the 3D modelling and visualisation of geoheritage sites. Such imagery results in superior fidelity, suitable for representing extra detail in the 3D digitisation of petrified tree trunks. Additionally, the resulting 3D model was fused with the model derived from scale-accurate RTK UAS imagery, as well as 360º panoramic imagery, resulting in a comprehensive model that includes both the surrounding environment and the extra-high-fidelity tree trunk. The extra fidelity derived from our methodology allowed us to produce a more realistic visual result, suitable for a more immersive XR experience.
2. Materials and Methods
2.1. Study Area
The Lesvos Island UNESCO Global Geopark is the case study site. Its petrified forest, formed some 15 to 20 million years ago, features rare and impressive fossilised tree trunks [10]. Some of those trunks can still be seen today in their upright position with intact roots, standing up to seven metres tall, while others are found in a fallen position measuring up to 20 metres. The fossilised trunks have retained fine details of their bark, and their interiors reveal a variety of colours. Such details were accurately digitised in 3D using a combination of UAS, 3D-360 imagery and audiovisual capture devices. Specifically, the Bali Alonia Park was chosen due to the size and positioning of its large fossils (Figure 1).
2.2. Methods
The methodology for this project comprised the following stages: area and tree trunk selection, image/content acquisition, data processing, and 3D visualisation for XR deployment (Figure 2).
2.2.1. Area and Tree Trunk Selection
As mentioned above, the Bali Alonia Park was chosen due to the size and positioning of its large fossils. The chosen subject was Fossil Tree Trunk Nº 69 (Figure 3).
Fossil Tree Trunk 69 is the largest standing fossilised tree trunk in the world, standing at 7.20 m with an 8.58 m perimeter. It is an ancestral form of Sequoia, belonging to Taxodioxylon albertense. For reference, a more modern representative, Sequoia sempervirens, is the type of Sequoia found in national parks in California and Oregon [11]. The conservation work, as well as the cleaning and aesthetic restoration of the trunk, has resulted in a rather impressive monument of nature, hence the selection of that area as the subject to be digitised and 3D-visualised for XR deployment.
2.2.2. Image/Content Acquisition
An aerial survey was conducted using a DJI Matrice 300 RTK UAS equipped with a DJI Zenmuse P1 camera in order to capture the general area, including the main fossilised tree trunk. A total of 459 pictures were taken, covering a wide section of the geopark. The resolution of those images was 8192 x 5460 pixels, which is in line with the advertised 45 Megapixel specification.
Following that, another set of pictures was taken around the tree trunk with a Xiaomi Mi 11 Lite 5G mobile phone, in order to obtain more close-up content as well as for comparison purposes during the data processing and visualisation stages. A total of 214 pictures were taken with that mobile phone at the impressive resolution of 6944 x 9280 pixels, which slightly exceeds its advertised 64 Megapixel specification.
To further test the available image acquisition techniques, an additional set of pictures was taken using an Insta360 Pro multi-sensor 3D-360 camera, equipped with six onboard sensors and lenses. The camera was placed in nine different positions around the tree trunk, and a picture from every sensor was taken at each position, for a total of 54 fisheye images at a resolution of 4000 x 3000 pixels.
Additionally, an extra set of mobile phone pictures was taken using an Apple iPhone 11 Pro. This was done in order to test that specific phone's computational photography capabilities in terms of image capture and fidelity, with a particular focus on fine details. A set of 177 pictures was taken, a task that proved especially challenging when capturing the top parts of the tree trunk. A 3-metre-long monopod was used with a mobile phone adapter, and the phone streamed its preview to, and was remotely controlled by, an Apple Watch Series 4 smartwatch, in order to monitor where the camera was pointing as well as to remotely trigger the shutter button (Figure 4).
Following the acquisition of the different image sets, spatial audio of the area was also recorded, to capture the area's aural ambience in 360º. Hearing is the fastest human sense, with much quicker response times than vision [12], making Virtual Auditory Display (VAD) systems an important part of any XR application. Spatial audio and Ambisonics are used to deploy such a system.
Ambisonics is a spatial audio technique that captures sound in a spherical manner. It was developed in the 1970s by the British academics Michael Gerzon (University of Oxford) and Professor Peter Fellgett (University of Reading), and is designed to reproduce recordings, captured with specially arranged microphone arrays, in an immersive way [13]. A Zoom H2n multi-capsule recorder [14] was used, with a Rycote cover to avoid unwanted distortion due to wind, fastened on a shock mount to prevent vibrations from transferring into the audio.
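As context, a minimal sketch of first-order (B-format) Ambisonics encoding is the following; the exact normalisation varies between implementations, but in the traditional convention a mono source signal $S$ arriving from azimuth $\theta$ and elevation $\varphi$ is encoded into the four B-format channels as

$$ W = \tfrac{1}{\sqrt{2}}\,S, \qquad X = S\cos\theta\cos\varphi, \qquad Y = S\sin\theta\cos\varphi, \qquad Z = S\sin\varphi. $$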
2.2.3. Data Processing
All image specifications were acquired by reading the metadata embedded in the files through the Exchangeable Image File format (EXIF) (Figure 5).
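As an illustration of this step, the following minimal Python sketch (using the Pillow library; the file name is hypothetical) reads the camera model and pixel resolution from an image's EXIF metadata:

```python
from PIL import Image, ExifTags

def read_basic_exif(path):
    """Return camera model and pixel resolution for one image file."""
    with Image.open(path) as img:
        exif = img.getexif()
        # Map numeric EXIF tag IDs to human-readable names.
        named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
        return {
            "model": named.get("Model"),
            "width": img.width,
            "height": img.height,
        }

# Hypothetical example file from the UAS image set.
print(read_basic_exif("DJI_0001.JPG"))
```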
Observing the values in Table 1, it is clear that the Zenmuse P1 camera had the highest resolution of the set, which was to be expected as it uses a full-frame camera sensor, whereas the other cameras use smaller sensors appropriate for mobile phones and compact devices. The Xiaomi phone was the second highest, followed by the iPhone 11 Pro and the Insta360 Pro camera. Due to the nature of how 360º photogrammetry works, the images from the Insta360 Pro camera were reduced from 54 fisheye images to 9 stitched panoramas, one for each position where the camera was placed. The resolution of each panorama was 7680 x 3840 pixels (Figure 6).
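For reference, the 7680 x 3840 stitched panoramas have the 2:1 aspect ratio typical of the equirectangular projection, in which a pixel $(u, v)$ of a $W \times H$ panorama corresponds to a viewing direction with longitude $\lambda$ and latitude $\phi$ (a standard convention; orientation offsets depend on the stitching software):

$$ \lambda = 2\pi\left(\frac{u}{W} - \frac{1}{2}\right), \qquad \phi = \pi\left(\frac{1}{2} - \frac{v}{H}\right). $$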
Images previously acquired during the image/content acquisition stage went through a quality control process before being used for photogrammetry with the Agisoft Metashape Pro software package. The Image Quality Index (IQI) was used in order to identify unusable imagery as well as to compare fidelity between the different cameras.
Observing the IQI values in Table 2, the Insta360 Pro scored the highest, followed by the Zenmuse camera, then the iPhone, with the Xiaomi phone lowest of all. This was a surprising result since, on resolution values alone, the Xiaomi phone excelled, with only the Zenmuse camera offering a higher pixel count. Moreover, the Xiaomi also produced pictures with an IQI score lower than 0.5, which were discarded during the photogrammetry process. Conversely, the iPhone had the lowest resolution of all the cameras, yet its IQI score was rather high and its imagery appeared to be of quite high fidelity.
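As a sketch of how such a threshold can be applied programmatically, the snippet below disables all photos scoring below 0.5 before alignment. It assumes the Agisoft Metashape Pro Python scripting API, where the estimated image quality is exposed through each camera's metadata; method and key names vary slightly between Metashape versions.

```python
# Sketch only: assumes the Metashape Pro scripting environment.
import Metashape

chunk = Metashape.app.document.chunk

# Estimate the Image Quality Index (IQI) reported in Table 2 for all photos.
chunk.analyzePhotos(chunk.cameras)

THRESHOLD = 0.5
for camera in chunk.cameras:
    quality = float(camera.meta["Image/Quality"])
    if quality < THRESHOLD:
        camera.enabled = False  # excluded from subsequent processing
        print(f"Disabled {camera.label}: IQI = {quality:.3f}")
```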
2.2.4. 3D Visualisation for XR Deployment
After quality control concluded, all images were processed with the photogrammetry software Agisoft Metashape Pro [15] (Figure 7) in order to visualise the content as 3D scenes for XR deployment. Concerning volumetric accuracy, the DJI Matrice 300 RTK UAS was used for the first dataset, so scale accuracy was achieved through RTK. Subsequent models of the tree trunk were adapted to and visually checked against the RTK-based version. Following photogrammetry processing, the different resulting models were observed and compared in order to determine the most suitable approach for photorealistic, immersive XR use.
3. Results
3.1. Geometry, Confidence and Shaded Views
Following photogrammetry processing, the resulting geometry was displayed in the following views within Agisoft Metashape: Wireframe, Solid, Confidence and Shaded. The Wireframe view shows the density of the geometry, while the Solid view shows a solid mesh, displaying the surface more accurately. The Confidence view visualises the model in a way that highlights problem areas, for example where there was not enough overlap to achieve an accurate reconstruction. The Shaded view provides a more realistic view of the model, also including texture. Wireframe was the first view to be observed (Figure 8).
Observing the views, it is obvious that the Zenmuse P1 produced the densest geometry, also covering a wider area. This reflects the fact that it had the highest-resolution camera sensor of all the methods, as well as the largest number of pictures, since it was an aerial scan of the area. The iPhone model appeared to be the second densest, while the Xiaomi phone and Insta360 models produced results that were not really usable due to inaccuracies and large gaps. When comparing the solid model views, the observations were rather similar (Figure 9).
The Confidence view uses a colour range from red to blue, red representing low values and blue high values. High values indicate a more accurate model with fewer problematic reconstruction areas, while low values highlight issues. All four models were compared (Figure 10). The model from the Zenmuse camera again appeared to be the least problematic of the four, followed by the iPhone model, then the Xiaomi, and last the one generated using content from the Insta360 Pro camera. At this point it was rather obvious that the Insta360 Pro model was unusable, as it had no blue areas at all, with the Xiaomi model close behind; furthermore, the Insta360 Pro model had quite large gaps.
The Shaded view provided a more realistic image of the models; however, it also made obvious the shortcomings of each device when it comes to capturing detail (Figure 11).
3.2. Detail Fidelity
In order to select the most realistic model for XR use, the detail fidelity of the models had to be examined. At this stage, the Insta360 model had to be omitted, since the large parts missing from the tree trunk area made it unsuitable for such use. Its source 360º panoramas were still of use for the environment of the final XR visualisation, though. The remaining models, which had at least the tree trunk reconstructed, were examined from a closer viewpoint (Figure 12).
It is quite obvious that the model derived from the iPhone image set presents rather superior fidelity in terms of detail, a result that is perhaps surprising considering the resolutions offered by the Zenmuse camera and the Xiaomi phone, as well as the fact that the iPhone is a nearly four-year-old mobile phone (released in September 2019). This happens for a number of reasons. One of them is the Machine Learning process used during picture-taking on iPhones from the 11 Pro onwards, named Deep Fusion.
Deep Fusion is a computational photography approach which uses nine shots in order to produce a picture: four shots before the shutter button is pressed (taken from the device's preview/viewfinder buffer), four shots after pressing the shutter button and one long-exposure shot. Then, within one second, the phone chipset's Neural Engine analyses the short- and long-exposure shots, picks the highest-fidelity ones and examines them at pixel level in order to optimise for detail and low noise. The result is a high-fidelity picture rivalling sensors of four times the pixel count, as demonstrated in the above comparison. Both the Zenmuse and the Xiaomi devices only used standard demosaicing processes to produce their pictures, with no Machine Learning to help bring out details. Following the detail fidelity comparison, the iPhone 11 Pro-derived model was the model of choice for XR use. To achieve accurate scale, the iPhone 11 Pro-derived model was adapted to and visually checked against the model produced by the Zenmuse camera, since that camera was used with the DJI Matrice 300 RTK UAS, the RTK technology ensuring accurate scale.
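To illustrate the general idea of multi-frame, per-pixel detail selection (a deliberately simplified toy sketch in Python/NumPy, not Apple's proprietary Deep Fusion pipeline), a burst can be fused by keeping, for each pixel, the value from the frame with the strongest local gradient:

```python
import numpy as np

def fuse_burst(frames):
    """Toy per-pixel detail selection: for every pixel, keep the value from
    the frame with the strongest local gradient (a crude sharpness measure).
    frames: list of 2-D float arrays (grayscale), all the same shape."""
    stack = np.stack(frames)                  # shape (N, H, W)
    gy, gx = np.gradient(stack, axis=(1, 2))  # per-frame image gradients
    detail = np.abs(gx) + np.abs(gy)          # crude per-pixel sharpness
    best = np.argmax(detail, axis=0)          # sharpest frame index per pixel
    h, w = best.shape
    return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]

# Example: nine noisy exposures of a synthetic scene.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
burst = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(9)]
fused = fuse_burst(burst)
```

A real pipeline additionally aligns the frames and trades noise against detail, but the per-pixel selection idea is the relevant part here.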
3.3. Finalising Material for XR
Following model selection, further processing was needed to improve the model for XR use. One of these processes was de-lighting. While it is good practice to capture images for photogrammetry with no strong shadows present, that is not always possible; therefore, de-lighting is applied as a post-process in order to make the model suitable for any desired lighting conditions in XR (Figure 13).
The de-lighting process alters the texture of the geometry so that any baked-in shadows are softened or even eliminated (Figure 14). That gives the freedom to alter the lighting in the realtime engine, making the model suitable to be viewed at any desired time of day within the virtual world, without the shadows looking unrealistic.
An example of a model that would look wrong without de-lighting is one whose photogrammetry pictures were taken while the sun was hitting the subject from one side, casting hard shadows on the other side, which is then used in an XR scene where the sun shines from a different direction at a different time of day (Figure 15). An unprocessed model would still have its original shadows embedded, looking rather unrealistic and thus affecting immersion.
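As a rough intuition for what de-lighting aims to achieve (a naive single-channel sketch in Python, not the actual algorithm used by Agisoft Metashape's de-lighting tool), the low-frequency component of a texture can be treated as baked-in shading and divided out:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_delight(texture, sigma=25.0):
    """Very rough de-lighting sketch: estimate baked-in shading as a heavy
    blur of the texture and divide it out, approximating flat albedo.
    texture: 2-D float array in [0, 1] (one channel, for simplicity)."""
    shading = gaussian_filter(texture, sigma=sigma)
    albedo = texture / np.clip(shading, 1e-3, None)
    # Rescale back into a displayable [0, 1] range.
    return np.clip(albedo * shading.mean(), 0.0, 1.0)

# Synthetic example: a textured surface with a strong left-to-right shadow.
rng = np.random.default_rng(1)
detail = rng.random((256, 256)) * 0.2 + 0.5
shadow = np.linspace(0.3, 1.0, 256)[None, :]
delit = naive_delight(detail * shadow)
```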
Following the de-lighting process, the geometry was placed within 360º panoramic imagery in order to provide an environment around it, and the previously captured spatial audio was also included, so that the result could be viewed as an XR experience (Figure 16).
4. Discussion and Conclusion
The aim of this research has been to investigate and use a variety of immersive multisensory media techniques in order to create convincing digital models of fossilised tree trunks for use in XR. Immersion and realism have been key focus points from the early stages, so that the digitally reconstructed output approximates the real artefact as closely as possible. To that end, additional factors were included, such as capturing the spatial audio of the area using ambisonic microphones and capturing the surrounding environment using multi-sensor 3D-360º camera equipment.
Throughout this research, both common and experimental methods were used, challenging the familiar with potentially improved new alternatives. The familiar consisted of image sets taken with more commonplace methods, using conventional (flat) photography with normal lenses and sensors [17,18]. A slightly different approach was capturing one additional image set with the camera of a Xiaomi Mi 11 Lite 5G mobile phone, since its specification sheet features an impressive 64 Megapixel main camera. The new alternative was 360º cameras using multiple fisheye, ultra-wide-field-of-view sensors. At times the alternative method produced disappointing outputs, with the content from the 360º camera yielding inferior results, as the resulting geometry lacked precision and had both distortions and rather large gaps. Panorama-based photogrammetry has not been available for as long as conventional flat-imagery photogrammetry, so it is expected to improve further as the technology matures.
Being an avid photographer and cinematographer in my spare time, I have been familiar with the advantages of Machine-Learning-driven Computational Photography for the last several years, and have repeatedly noticed a smaller and cheaper device such as an iPhone challenging my professional equipment in terms of the fidelity of outputs straight from the device. Based on that, I hypothesised how beneficial it would be to use such technology to capture the source image sets for photogrammetry; therefore, I used an iPhone 11 Pro for one extra set, and the results were exceedingly impressive.
While there is always some basic computational processing involved in digital cameras [9] in terms of converting the sensor data to an image file, the advent of smart camera phones made such processes even more commonplace. The rapid evolution of camera phones essentially put a good-quality camera into most people's pockets, and the included 'app stores' made it much easier and more accessible to alter the way the camera module works, compared to altering the software of a dedicated digital camera. Since certain picture qualities, such as shallow depth of field and low-light/low-noise performance, were normally characteristics of cameras with high-quality large sensors and optics, mobile phones had to find software solutions in order to calculate and realistically recreate such characteristics formerly reserved for professional cameras.
The recent development I focused on in relation to this research was Deep Fusion by Apple, since its claimed advantage is added detail obtained by using multiple shots and Machine Learning to determine the areas of interest. Impressively, when put to the test, the additional dataset from the 12 Megapixel iPhone 11 Pro rivalled all my previous photogrammetry results in terms of fidelity. It was rather surprising to see the professional Zenmuse P1 UAS camera, with a full-frame (35 mm+) sensor boasting 45 Megapixels of resolution, end up with less detail fidelity than a small (sub-10 mm) sensor with 12 Megapixels of resolution, not to mention the stark contrast with the results produced by the otherwise colossal 64 Megapixel content shot with the Xiaomi mobile phone. The advantages of the Machine Learning Computational Photography approach were rather obvious in the results, to the point of convincing me to do all future photogrammetry work with it.
Naturally, not all photogrammetry tasks are possible with a mobile phone, depending on area and size requirements. This was partly true here, with the fossilised tree trunk being rather tall and normally out of reach of a handheld device. In this case, the issue was solved by using a rather long monopod; however, when collecting visual material from very large structures, it would be impractical or even impossible to use or construct monopods to match such heights. Seeing what is being photographed can also be an issue, since these devices use their displays as a preview screen; this was resolved during this project by using proprietary solutions to live-preview and control the mobile phone from a smartwatch.
Camera technology is constantly evolving, especially digital cameras that rely on sensors and internal processing for their results. While Computational Photography-assisted devices are not yet widely available outside the smartphone domain, it is likely that the technology will find its way into all digital camera equipment in the near future, including cameras such as the Zenmuse P1 used with the UAS for this research. Until such a development appears in readily available products, I am already building a custom mount for attaching Computational Photography-capable mobile phones to a UAS, along with signal-repeating equipment for remote control and preview purposes.
More modern photogrammetry technologies will also be used in the future for further comparison and experimentation. Some initial tests have already been made with the 3D Capture tool within the recently released beta of Adobe Substance 3D Sampler [19], with surprisingly good results in terms of accuracy (Figure 17), at a fraction of the processing time compared to Agisoft Metashape. Advanced Physically Based Rendering (PBR) materials are also being considered for future use, aiming for even more realism and flexibility.
Author Contributions
Conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing—original draft preparation, writing and visualization were done by Charalampos Psarros. Review and editing, supervision, project administration, and funding acquisition by Nikolaos Zouros and Nikolaos Soulakellis. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Research e-Infrastructure “Interregional Digital Transformation for Culture and Tourism in Aegean Archipelagos” {Code Number MIS 5047046} which is implemented within the framework of the “Regional Excellence” Action of the Operational Program “Competitiveness, Entrepreneurship and Innovation”. The action was co-funded by the European Regional Development Fund (ERDF) and the Greek State [Partnership Agreement 2014–2020].
Data Availability Statement
Not applicable
Acknowledgments
We thank Stavros Proestakis for his help with the Xiaomi mobile phone terrestrial image set and Giorgos Tataris for kindly helping with mapping the location of the geopark. We also wholeheartedly appreciate the staff of the Lesvos Petrified Forest for welcoming us to the geopark and assisting us with all our needs.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Banfi, F.; Previtali, M. Human-Computer Interaction Based on Scan-to-BIM Models, Digital Photogrammetry, Visual Programming Language and eXtended Reality (XR). Appl. Sci. 2021, 11, 6109. [Google Scholar] [CrossRef]
- Apple ARKit. Available online: https://developer.apple.com/augmented-reality/arkit/ (accessed on 1 July 2023).
- Google ARCore. Available online: https://developers.google.com/ar (accessed on 1 July 2023).
- Microsoft Hololens. Available online: https://www.microsoft.com/en-us/hololens (accessed on 1 July 2023).
- Edler, D.; Keil, J.; Wiedenlübbert, T.; Sossna, M.; Kühne, O.; Dickmann, F. Immersive VR Experience of Redeveloped Post-Industrial Sites: The Example of “Zeche Holland” in Bochum-Wattenscheid. J. Cartogr. Geogr. Inf. 2019. [Google Scholar]
- Chang, S.C.; Hsu, T.C.; Jong, M.S.Y. Integration of the peer assessment approach with a virtual reality design system for learning earth science. Comput. Educ. 2020. [Google Scholar] [CrossRef]
- Bates-Domingo, I.; Gates, A.; Hunter, P.; Neal, B.; Snowden, K.; Webster, D. Unmanned Aircraft Systems for Archaeology Using Photogrammetry and LiDAR in Southwestern United States. 2021. [Google Scholar]
- Zhang, F.; Zhao, J.; Zhang, Y.; Zollmann, S. A survey on 360 images and videos in mixed reality: Algorithms and applications. Journal of Computer Science and Technology 2023, 38, 473–491. [Google Scholar] [CrossRef]
- Delbracio, M.; Kelly, D.; Brown, M.S.; Milanfar, P. Mobile computational photography: A tour. Annual Review of Vision Science 2021, 7, 571–604. [Google Scholar] [CrossRef]
- Zouros, N. European Geoparks Network. Episodes 2004, 27, 165–171. [Google Scholar] [CrossRef]
- Zouros, N. Petrified Forest Park, Bali Alonia. Available online: https://www.lesvosmuseum.gr/en/parks/petrified-forest-park-bali-alonia (accessed on 1 July 2023).
- Horowitz, S. The Universal Sense: How Hearing Shapes The Mind; Bloomsbury Publishing: USA, 2012. [Google Scholar]
- Gerzon, M.A. Periphony: With-Height Sound Reproduction. Journal of the Audio Engineering Society 1973, 21, 2–10. [Google Scholar]
- Zoom H2n, Zoom corporation, Tokyo, Japan. Available online: https://zoomcorp.com/en/gb/handheld-recorders/handheld-recorders/h2n-handy-recorder/ (accessed on 1 July 2023).
- Agisoft Metashape, Professional Edition; Agisoft LLC: St. Petersburg, Russia.
- Epic Games Unreal Engine, version 5.2; Epic Games, Inc.: Cary, NC, USA.
- Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. “Structure-from-Motion” photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef]
- Smith, M.W.; Carrivick, J.L.; Quincey, D.J. Structure from motion photogrammetry in physical geography. Prog. Phys. Geogr. 2015, 40, 247–275. [Google Scholar] [CrossRef]
- Adobe Substance 3D Sampler, 3D Capture beta edition; Adobe Inc.: San Jose, CA, USA.
Figure 1. Lesvos Geopark: Bali Alonia Park location map.
Figure 2. Flowchart of the methodology followed.
Figure 3. Petrified Forest and Fossil Tree Trunk 69 study area.
Figure 4. Image acquisition for the large tree trunk: (a) 3-metre-long monopod used with the iPhone 11 Pro; (b) smartwatch camera control.
Figure 5. EXIF data from the different cameras used.
Figure 6. 360º content: (a) individual fisheye image; (b) panorama; (c) EXIF data of stitched panorama.
Figure 7. Photogrammetry processing showing camera positions for each picture used: (a) Zenmuse P1; (b) Xiaomi Mi 11 Lite 5G; (c) Apple iPhone 11 Pro; (d) Insta360 Pro.
Figure 8. Wireframe view of models produced through photogrammetry processing: (a) Zenmuse P1; (b) Xiaomi Mi 11 Lite 5G; (c) Apple iPhone 11 Pro; (d) Insta360 Pro.
Figure 9. Solid view of models produced through photogrammetry processing: (a) Zenmuse P1; (b) Xiaomi Mi 11 Lite 5G; (c) Apple iPhone 11 Pro; (d) Insta360 Pro.
Figure 10. Confidence view of models produced through photogrammetry processing: (a) Zenmuse P1; (b) Xiaomi Mi 11 Lite 5G; (c) Apple iPhone 11 Pro; (d) Insta360 Pro.
Figure 11. Shaded view of models produced through photogrammetry processing: (a) Zenmuse P1; (b) Xiaomi Mi 11 Lite 5G; (c) Apple iPhone 11 Pro; (d) Insta360 Pro.
Figure 12. Close-up views of models: (a) Zenmuse P1; (b) Xiaomi Mi 11 Lite 5G; (c) Apple iPhone 11 Pro.
Figure 13. De-lighting tool in Agisoft Metashape.
Figure 14. Removing shadows: (a) original texture; (b) de-lighted.
Figure 15. De-lighting demonstration: (a) unprocessed model with sun at the side; (b) de-lighted model with sun in front; (c) unprocessed model with sun in front, showing inaccurate embedded shadows.
Figure 16. Processed model and 360º environment compiled and viewed in XR [16].
Figure 17. Highly detailed photogrammetry result using Adobe Substance 3D Sampler.
Table 1. Image/content acquisition by device.
Device | Number of Images | Pixel Resolution
DJI Zenmuse P1 | 459 | 8192 x 5460
Xiaomi Mi 11 Lite 5G | 214 | 6944 x 9280
Apple iPhone 11 Pro | 177 | 3024 x 4032
Insta360 Pro | 54 | 4000 x 3000
Table 2. Image Quality Index (IQI) by device.
Device | Minimum | Maximum | Median Value
DJI Zenmuse P1 | 0.818392 | 0.843858 | 0.83
Xiaomi Mi 11 Lite 5G | 0.498249 | 0.835322 | 0.70
Apple iPhone 11 Pro | 0.777384 | 0.843705 | 0.81
Insta360 Pro | 0.803516 | 0.894998 | 0.84
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).