The aim of this research is to investigate and apply a variety of immersive multisensory media techniques in order to create convincing digital models of fossilised tree trunks for use in XR. The models are built from geospatial data derived from sources such as aerial imaging using UAS and terrestrial photography, and incorporate both visual and audio elements for greater immersion, remaining accessible and explorable in six degrees of freedom (6DoF). Immersiveness is a key factor in producing output that is especially engaging to the user. Both conventional and alternative methods are explored and compared, emphasising the advantages made possible by Machine Learning Computational Photography. Material is collected using both UAS and terrestrial camera devices, including a 3D-360º camera with six sensors, whose stitched panoramas serve as sources for photogrammetry processing. Difficulties such as capturing large free-standing objects by terrestrial means were overcome with practical solutions involving mounts and remote streaming. The conclusions indicate that superior fidelity can be achieved with the help of Machine Learning Computational Photography processes, and that higher resolutions and technical specifications of equipment do not necessarily translate to superior outputs.