1. Introduction
Recent technological advances and the increasing public interest in cultural heritage (CH) have made it possible to experiment with new applications for analysing, monitoring, preserving and managing heritage sites. It is worth emphasising the importance that the Italian National Recovery and Resilience Plan (PNRR) has given to the digitisation of the built cultural heritage and to the projects developed on this issue [1].
The increased interest on the part of public administrations stems from the vulnerability to which the existing heritage has been exposed in recent decades and from a growing understanding of its importance. This awareness of value and culture triggers the need to valorise and protect heritage, also in response to emergency situations, environmental disasters and the vulnerability of sites. Such fragility is often the result of scarce or absent maintenance, which should have limited the progressive degradation that now requires more invasive restoration interventions, whereas a preventive perspective would have limited the damage [2,3].
Preserving ancient evidence, defending it and protecting it from decay are certainly pro-cultural choices. This motivates researchers and experts to focus on the conservation, protection and valorisation of the built heritage. Starting from a careful and punctual analysis of the context, a necessary activity is the planning of coordinated and continuous maintenance for the preservation of the “materia” (art. 29 c. 1, Code of Cultural Heritage and Landscape, Legislative Decree 22 January 2004, Ministry of Cultural Heritage and Activities) and of cultural identity. This requires a specific and complex observation of the phenomena affecting the area, which may lead to transformations that are in some cases irreversible [4].
Maintenance activity becomes a tool to update the scientific knowledge of any ancient artefact, through direct observation of the built work, diagnostics and any research aimed at acquiring the most complete understanding of an asset. All this can be done through an approach that has been favourably influenced over time by technological innovations, so that preventive maintenance activities can now be regarded as Smart Preservation actions [5,6].
In the field of maintenance, the most widely used technique for detecting damage or recognising material or structural variations is inspection. When organised as structured inspection activities, it plays a crucial role, since it allows the early identification of signs of degradation, possible structural hazards or other evident deterioration that could compromise the integrity of the asset, followed by the planning of maintenance interventions [7]. Control activities are mainly based on inspection visits, which are necessary for the periodic assessment of the state of conservation of historical and archaeological artefacts, for the identification of the most evident criticalities, and for the inspection of site accessibility and building structures.
Inspections may be scheduled within a maintenance plan or developed independently but, to ensure their effectiveness, they must always be planned and carried out at predetermined intervals. The people involved in these activities include specialised technicians for surface and structural diagnostics, specialised maintenance workers and inspection managers who coordinate the entire process. Since inspection is a traditional practice that relies on the inspectors’ personal judgement, observation skills and experience, the preparation and competence of those involved is crucial to ensure that inspection activities are conducted with scientific accuracy and that the information gathered is accurately documented [8].
Today, when technological advances and consolidated methodologies applied to cultural heritage are a fundamental part of the process, these considerations show how inspection activities, also because of their intrinsic limitations, could take full advantage of the support of new technologies and digital systems.
Visual inspections, in fact, may be inefficient because of the considerable commitment of time and resources they require, or even impracticable owing to the difficult accessibility of the area of interest or the physical inaccessibility of the site. The adoption of advanced technologies, such as laser scanning and remotely piloted aircraft systems (SAPR), could revolutionise the way in which inspections are carried out, allowing visual inspection to be extended to areas that are not directly accessible and offering the possibility of more frequent and constant monitoring of historic buildings [9,10]. A systematic and regular approach to inspections is crucial for the effective and sustainable management of cultural heritage, and the use of these systems and technologies would minimise costs and significantly improve efficiency and effectiveness. The possibility of obtaining data quickly, accurately, at high quality and with integrated non-invasive techniques opens up new scenarios in the conservation and maintenance of the historical architectural heritage [11].
2. Materials and Methods
2.1. Methodology
The safeguarding and preservation of cultural heritage (CH) represent a fundamental objective in the present era. Technological advancements, particularly in the realms of 3D digitisation and point cloud data, have facilitated the development of non-invasive and contactless evaluation techniques for CH sites. One of the key advantages of utilising point cloud data, acquired by laser scanner and/or photogrammetry, is the ability to capture and analyse surface-level details with high precision [12,13]. 3D point clouds, captured through techniques such as terrestrial laser scanning and Structure from Motion, provide high-density, accurate representations of heritage structures, enabling the detection of damage and alterations over time [14].
One approach to monitoring changes in cultural heritage involves the use of change detection algorithms developed in the remote sensing domain and applied at the monumental scale [
15].
Change detection in 3D analysis is a multifaceted process that involves aligning datasets from different time intervals, quantifying the changes, and interpreting the results. One of the early studies investigating the application of change detection analysis to identify temporal changes was conducted by Girardeau-Montaut et al. [12]. In this study, the point clouds from different time intervals were first organised through an octree data structure, assigning to each point a code calculated from the maximum subdivision level of the octree. The corresponding cells were then compared using three methods: average distance, best-fitting plane orientation, and Hausdorff distance [12]. Other studies present a two-stage change detection approach specifically for 3D point cloud data. The first step of the method uses a feature-based registration technique to precisely align the datasets being compared, a crucial prerequisite for accurate change detection. This alignment process involves identifying and matching prominent features across the datasets to establish correspondences, enabling the accurate superimposition of the point clouds. Following the alignment, the second step employs a voxel-based change detection technique to identify the specific regions within the 3D data that have undergone changes over time. By dividing the point clouds into discrete volumetric elements (voxels) and analysing the differences between corresponding voxels, this method can effectively pinpoint the locations and nature of the changes, providing valuable insights for applications such as monitoring environmental transformations or tracking infrastructure developments [16,17,18,19,20].
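To make the voxel-based comparison concrete, the following minimal Python sketch voxelises two already co-registered clouds on a common grid and flags voxels whose occupancy differs between the two epochs. It is only an illustration of the principle under simple assumptions (NumPy arrays of XYZ points, invented file names, an arbitrary 0.05 m voxel size), not the implementation used in the cited studies.

```python
import numpy as np

def voxel_keys(points, origin, voxel_size):
    """Map each 3D point to the integer index of the voxel that contains it."""
    return np.floor((points - origin) / voxel_size).astype(np.int64)

def occupied_voxels(points, origin, voxel_size):
    """Return the set of voxels occupied by at least one point."""
    keys = voxel_keys(points, origin, voxel_size)
    return set(map(tuple, np.unique(keys, axis=0)))

# Hypothetical inputs: (N, 3) arrays of already co-registered points.
cloud_t0 = np.loadtxt("scan_epoch_0.xyz")   # assumed file name
cloud_t1 = np.loadtxt("scan_epoch_1.xyz")   # assumed file name

origin = np.minimum(cloud_t0.min(axis=0), cloud_t1.min(axis=0))
voxel_size = 0.05  # metres, assumed

vox_t0 = occupied_voxels(cloud_t0, origin, voxel_size)
vox_t1 = occupied_voxels(cloud_t1, origin, voxel_size)

removed = vox_t0 - vox_t1   # occupied at t0 but not at t1 (material lost)
added = vox_t1 - vox_t0     # occupied at t1 but not at t0 (material gained)
print(f"{len(removed)} voxels lost, {len(added)} voxels gained")
```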
The methodological approach to change detection in dense point clouds is based on several fundamental principles that can be schematised as follows:
- i) High-Resolution 3D Data Capture:
- dense point clouds are generated through techniques such as LiDAR (Light Detection and Ranging), laser scanning, and photogrammetry. These methods provide high-resolution spatial data, capturing millions of points that represent the surface geometry of the scanned object or site. The resulting point clouds contain precise information about the position, shape, and texture of the surface, making them ideal for detailed analysis and change detection [21,22];
- alignment and registration of point clouds: the accurate alignment and registration of point clouds from different time periods is a crucial step in the change detection process. Octree data structures and iterative closest point algorithms are effective methods for organising and aligning the point clouds, ensuring a high level of accuracy in the spatial integration of the datasets [23] (a minimal registration sketch in Python is given after this list).
- ii) Temporal Comparisons:
- change detection involves comparing point clouds captured at different times (temporal snapshots) to identify changes. This requires accurate alignment (registration) of the point clouds to ensure that comparisons are made between corresponding points in the datasets [24];
- temporal comparisons help in understanding how the site or object has evolved, providing insights into processes such as erosion, structural deformation, and material loss [25].
- iii) Mathematical and Statistical Analysis:
- the comparison of point clouds involves mathematical and statistical techniques to quantify differences; these methods measure changes in distance, volume, and surface characteristics between the datasets [26];
- commonly used metrics include Euclidean distances, volumetric changes, and surface deviation measures. These metrics provide quantitative assessments of changes, which are critical for objective analysis and decision making [19].
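As a concrete illustration of the registration principle listed above, the sketch below uses the open-source Open3D library as a stand-in for the registration tools discussed in this paper; the file names, the correspondence distance and the convergence values are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

# Load the two epochs (hypothetical file names).
source = o3d.io.read_point_cloud("scan_pre.ply")    # cloud to be aligned (data cloud)
target = o3d.io.read_point_cloud("scan_post.ply")   # reference cloud (model cloud)

# Coarse initial alignment: identity here; in practice a manual or
# feature-based pre-alignment is needed to avoid local minima.
initial_guess = np.eye(4)

# Point-to-point ICP fine registration.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    0.05,            # maximum correspondence distance in metres (assumed)
    initial_guess,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    o3d.pipelines.registration.ICPConvergenceCriteria(relative_rmse=1.0e-5,
                                                      max_iteration=100))

print("fitness:", result.fitness)            # share of inlier correspondences
print("inlier RMSE:", result.inlier_rmse)    # residual error after convergence
print(result.transformation)                 # estimated 4x4 rigid transformation

# Apply the estimated transformation to the source cloud.
source.transform(result.transformation)
```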
This research highlights the potential of these techniques and provides a framework for their effective implementation in the context of monitoring and preserving cultural heritage.
2.2. Case study
Matera represents a significant case of how building activity, over the ages, has directly affected the territory through the typical dual nature of construction, which initially manifested itself as ‘architecture in negative’ (understood as excavation and removal of material in situ) and was later connected to ‘architecture in positive’, with the opening of quarries from which to extract the raw construction material [27,28].
The experiment was conducted on the bell tower of the rupestrian church of San Pietro Barisano located within the historical part of the city of Matera, the Sassi (
Figure 1).
Calcarenite is the typical stone material of the architectural heritage of the city of Matera [
29]. It is a sedimentary rock of biochemical origin with a low degree of alteration, in which fossil shells and skeletons of marine organisms can still be recognised. Calcarenite has different compositional and structural characteristics depending on the grain size and on the type and quantity of the cement and matrix [
30]. Consequently, the properties of calcarenitic materials, such as porosity, mechanical strength and durability, also exhibit a high degree of variability (
Figure 2).
They typically exhibit low mechanical strength, high open porosity and poor durability [
31]. The severity of alteration and degradation of calcarenitic materials varies depending on their microstructural properties.
Calcarenitic stones are affected by various severe degradation phenomena due to their high open porosity, which allows aggressive agents present in the exposure environment to enter the materials. Furthermore, the open porosity also determines a continuous variation of their characteristics and properties so that calcarenitic materials can be considered “living stones” [
29,
30].
The degradation phenomena of calcarenites, like those of any other material, are determined by the complex interaction between the materials and their surroundings, i.e., the exposure environment. In most cases, chemical, physical and biological phenomena occur simultaneously and lead to synergistic negative effects (
Figure 3) [
32].
The type and entity of the various degradation phenomena depend both on the characteristics of the material (chemical and mineralogical composition, physical-mechanical properties, texture, etc.) and on the parameters that characterise environmental exposure, such as relative humidity, wind speed, air temperature variations, surface temperature of the material and presence of pollutants [
33]. It is not uncommon to note contiguous ashlars of the same wall face, one strongly honeycombed, the other only slightly damaged. Moreover, the high variability of the microstructural characteristics of calcarenites also influences the behaviour of the materials used as consolidating and protective agents.
Calcarenites have been intensively used as building materials due to their easy extraction and equally easy workability, even in complex shapes, such as those that characterise important decorative elements.
At present, the use of this stone is no longer a “compulsory” choice, but a “reasoned” one, to give architecture the identity and characteristic value of a place, becoming an accomplished testimony of the “modus costruendi” of the past.
The bell tower of the church of San Pietro Barisano, used as a pilot case for the experimentation of the methodology presented, represents a concrete synthesis of the construction and material aspects just described.
2.4. Tools and Dataset
The potential of open-source software for monitoring historical heritage is significant. The periodic acquisition of data through laser scanning allows for the monitoring of degradation processes and the assessment of the efficacy of intervention strategies. In the case study analysed, the open-source software CloudCompare (CC) (Version 2.6.1) was used to obtain data that was not only qualitative but also quantitative.
CC is a software package mainly used for the management and analysis of 3D point clouds although, thanks to certain functionalities implemented in it, it can also manage and manipulate other three-dimensional formats such as meshes, elevation models, etc.
Used in numerous industrial applications, this software lends itself very well to research needs; its main features are as follows:
- reliability and precision in the alignment, registration, filtering, cleaning, analysis and measurement of three-dimensional elements, thanks to numerous manual, semi-automatic and automatic tools;
- flexibility in handling heterogeneous data (LAS, PLY, E57, OBJ, GeoTIFF, etc.), integrating them with each other and converting their formats;
- ability to handle remarkably large datasets (even billions of points);
- an open-source licence offering source code that can be easily modified and extended, including with customised plug-ins, depending on specific needs;
- an advanced user interface and visualisation tools that facilitate the interpretation of data and results;
- active documentation and an active community that not only provide a range of educational resources and continuous software updates, but also allow problems to be resolved easily.
The features listed above make CC a powerful and flexible tool capable of offering the scientific community a wide range of functionality [34].
The proposed methodology integrates change detection algorithms with point cloud registration techniques, thereby ensuring accurate comparative analysis between different data acquisition campaigns. This approach is particularly effective for continuous monitoring and for planning timely conservation interventions, offering an innovative tool for the protection and enhancement of cultural heritage.
The efficacy of this approach is exemplified by the bell tower of San Pietro Barisano. In the following, the steps applied to data processing are shown, according to the workflow (
Figure 4):
point clouds import and data pre-processing;
alignment of the point clouds;
calculation of distances between the two point clouds;
selection of the region of interest (ROI);
calculation of the volume difference between the two point clouds.
Each step will be subjected to a detailed examination in a dedicated sub-section.
In addition, the acquisition of data from laser scans allows for a detailed evaluation of geometric anomalies, providing critical insights into the structural condition of the bell tower. By processing this high-resolution data, the software enables a numerical assessment of volumetric differences between the pre-intervention and post-intervention states of the bell tower of the Church of San Pietro Barisano. This analysis involves comparing the dense point clouds generated before and after conservation interventions, highlighting any changes in geometry, material loss, or deformation that may have occurred. The quantitative evaluation of these volumetric changes not only identifies specific areas of alteration but also provides a measurable basis for assessing the effectiveness of the preservation efforts. This detailed approach supports informed decision-making in heritage conservation, allowing for targeted and scientifically validated interventions that are essential for maintaining the structural integrity and historical value of the site.
Table 1 presents the principal characteristics of the scans that were performed and compared.
The cloud densities illustrated in the table have been calculated on the basis of a spherical neighbourhood of radius 0.05 m. It is crucial to highlight this geometric aspect, as it will be of pivotal importance in the subsequent processing stages.
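In principle, densities of this kind can be reproduced by counting, for each point, its neighbours within a 0.05 m sphere and dividing by the sphere volume. The following SciPy-based sketch (with a hypothetical file name) shows one way to do this; it is not the routine used by the acquisition software.

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.loadtxt("scan.xyz")[:, :3]   # hypothetical file name (x y z columns)
radius = 0.05                             # spherical neighbourhood radius in metres

tree = cKDTree(points)
# Number of neighbours of each point (itself included) within the sphere.
counts = np.array([len(idx) for idx in tree.query_ball_point(points, r=radius)])

sphere_volume = 4.0 / 3.0 * np.pi * radius ** 3
density = counts / sphere_volume          # points per cubic metre

print("mean volumetric density: %.0f points/m^3" % density.mean())
```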
The calculations were performed on the following machine: Notebook GL65 Leopard 10SER; Microsoft Windows 11 Home 64-bit; CPU Intel(R) Core(TM) i7-10750H @ 2.60 GHz; 16 GB RAM (2 × 8192 MB, DDR4-2933, Samsung M471A1K43DB1-CWE), 1466 MHz; NVIDIA GeForce RTX 2060, 6144 MB; SSD drive, SAMSUNG MZVLB1T0HBLR-00000, 953.87 GB.
3. Results and Discussion
As discussed above, the georeferencing information pertaining to imported cloud data is stored within CC. To accommodate the memory requirements associated with the processing of large datasets, CC adopts a 32-bit representation for the coordinates of points. Storing large georeferenced coordinates at this precision gives rise to a margin of error in the range of several centimetres, and for smaller, architectural-scale objects these discrepancies are particularly significant.
To address this issue, CC proposes a shift of the global coordinates to a local coordinate system. This shift, which is reapplied in subsequent export stages to avoid loss of information, has the dual benefit of reducing memory consumption and speeding up processing times [
35]. The shift vector, established by the program for the two clouds, is delineated in
Table 2.
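The effect of single-precision storage on large projected coordinates, and the benefit of the global shift, can be illustrated with a few lines of NumPy; the coordinate and shift values below are invented for the example and are not those of the case study.

```python
import numpy as np

# A point with large projected coordinates (values invented for the example).
p_global = np.array([635712.437, 4498231.912, 401.358], dtype=np.float64)

# Stored directly as 32-bit floats, centimetre-level detail is lost.
p_float32 = p_global.astype(np.float32)
print("rounding error without shift [m]:", np.abs(p_float32.astype(np.float64) - p_global))

# A global shift close to the cloud origin keeps local coordinates small,
# so single precision is sufficient; the shift is re-applied on export.
shift = np.array([-635700.0, -4498200.0, -400.0])
p_local32 = (p_global + shift).astype(np.float32)
p_restored = p_local32.astype(np.float64) - shift
print("rounding error with shift [m]:", np.abs(p_restored - p_global))
```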
The preliminary step in the alignment of the two scans is the merging of the point clouds contained within the individual scans and originating from the instrument’s disparate acquisition locations. During the merge phase, the original cloud index was stored in the scalar field of each scan, thus ensuring the reversibility of this step (see
Figure 5).
Prior to executing the Iterative Closest Point (ICP) algorithm for automatic alignment, it is essential to perform a preliminary manual alignment. The ICP algorithm is widely used for the accurate registration of 3D point cloud data: it aligns two point clouds by iteratively refining the transformation parameters (translation and rotation) to minimise the distance between corresponding points. However, ICP requires a good initial alignment to avoid local minima and achieve accurate registration [23].
The ‘translate/rotate’ tool may be employed to translate and rotate the cloud about the x, y and z axes, thereby facilitating an initial rough overlap of the two clouds. It is crucial to differentiate between the model cloud and the data cloud at this stage. In the context of the ICP algorithm, the model cloud represents the reference points that serve as the basis for aligning the data cloud. In the present case, the model cloud is the post-intervention point cloud and the data cloud is the pre-intervention point cloud.
A further preparatory operation prior to launching the ICP algorithm is the creation of a subset of points within the model cloud. For the algorithm to function correctly, the model cloud was segmented so that the algorithm was run only on the points present in both scans [36].
The ICP fine registration algorithm aligned the two clouds, i.e. the data cloud and the point subset of the model cloud, with the following parameters:
- Root Mean Square (RMS): the error difference between two iterations. With each iteration the discrepancy between the two clouds is reduced; once the improvement falls below a pre-established threshold, the process is terminated (see the short sketch after this list). The RMS value specified for the process is 1.0 × 10⁻⁵.
- Desired final overlap: determined from an estimate of the homologous points between the two clouds under consideration. Although the model cloud was segmented to compensate for the lack of data in the data cloud, the two acquisitions exhibit disparate point densities (average density of the data cloud = 16,355 points/m³; average density of the model cloud subset = 110,505 points/m³). The final overlap was set to 50%, considering the data cloud points (141,914 points) and the model cloud points (280,440 points).
- Random Sampling Limit (RSL): a parameter that enables the random selection of a subset of points from a large cloud during each iteration of the registration process. A value of RSL = 300,000, which exceeds the number of points in the largest cloud (in this case, the model cloud), was employed to enhance the registration accuracy.
- Enable Farthest Point Removal: best kept active during the alignment phases so that the most distant points are disregarded, thus minimising the probability of errors [36].
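The RMS convergence criterion can be read as follows: at each iteration the RMS of the point-to-point residuals is computed, and the process stops when the improvement with respect to the previous iteration falls below the chosen threshold. A minimal sketch, with invented residual values:

```python
import numpy as np

def rms(residuals):
    """Root mean square of the point-to-point residuals of one ICP iteration."""
    return float(np.sqrt(np.mean(np.square(residuals))))

# Residuals (in metres) from two successive iterations: illustrative values only.
rms_previous = rms(np.array([0.0120, 0.0090, 0.0110, 0.0100]))
rms_current = rms(np.array([0.0119, 0.0089, 0.0109, 0.0099]))

# Stop when the RMS improvement between iterations drops below the threshold.
threshold = 1.0e-5
converged = (rms_previous - rms_current) < threshold
print("converged:", converged)
```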
The final results are presented in
Table 3 below.
The transformation matrix shown above is a 4 × 4 matrix consisting of a 3 × 3 rotation matrix and a translation vector along the x, y and z axes. To make the matrix consistent, the last row is composed of three 0s and one 1:

T = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_{x} \\ r_{21} & r_{22} & r_{23} & t_{y} \\ r_{31} & r_{32} & r_{33} & t_{z} \\ 0 & 0 & 0 & 1 \end{bmatrix} (1)

The elements rij, with i = 1, 2, 3 and j = 1, 2, 3, are the components of the 3 × 3 rotation matrix. Specifically:
- r11, r12, r13: the first row of the rotation matrix, which determines the rotated x-component;
- r21, r22, r23: the second row, which determines the rotated y-component;
- r31, r32, r33: the third row, which determines the rotated z-component.
The elements tx, ty and tz represent the components of the translation along the x-, y- and z-axes, respectively. The final row [0 0 0 1] is included to ensure the mathematical consistency of the affine representation, without affecting the transformation itself [37]. This matrix was automatically applied to the data cloud at the conclusion of the process, in order to align it with the model cloud.
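For illustration, applying such a homogeneous matrix to a point cloud can be sketched in a few lines of NumPy; the matrix below is purely illustrative and is not the transformation computed for this case study.

```python
import numpy as np

def apply_rigid_transform(points, T):
    """Apply a 4x4 homogeneous rigid transformation to an (N, 3) point array."""
    R, t = T[:3, :3], T[:3, 3]
    return points @ R.T + t

# Purely illustrative matrix: a small rotation about z plus a small translation.
theta = np.radians(0.5)
T = np.array([[np.cos(theta), -np.sin(theta), 0.0,  0.020],
              [np.sin(theta),  np.cos(theta), 0.0, -0.010],
              [0.0,            0.0,           1.0,  0.005],
              [0.0,            0.0,           0.0,  1.000]])

data_cloud = np.loadtxt("data_cloud.xyz")[:, :3]   # hypothetical file name
aligned = apply_rigid_transform(data_cloud, T)
```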
The combination of these settings with the preliminary operations performed enabled the programme to process an alignment with a sub-centimetric error and a processing time of a few seconds.
In CC, the calculation of distances can be performed in two distinct ways: between two clouds or between a cloud and a mesh. In the present case, the calculation will be performed between the two point clouds in question using the Cloud-to-Cloud Distance (C2C distance) tool. This tool enables precise comparisons, quantifying the extent of changes, deformations, or shifts that have taken place between the two datasets.
The C2C distance is calculated between two point clouds using the ‘Nearest Neighbour Distance’ approach, a widely used method in point cloud analysis that measures the similarity between data points based on their spatial proximity (the distance metric). It operates by identifying, for each point in one cloud, the closest corresponding point in the other cloud, thus providing a direct comparison of positional differences [38,39] (see Figure 6).
For each point in the compared cloud, the CC tool determines the Euclidean distance to the nearest point in the reference cloud. To ensure the accuracy of the result, it is essential that the reference cloud has a higher point density than the compared cloud [
40]. Accordingly, the post-intervention scan was selected as the reference cloud, and the pre-intervention scan was designated as the comparison cloud.
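A minimal sketch of the nearest-neighbour C2C computation described above, using Open3D as a stand-in for CloudCompare's tool (file names and the reporting threshold are assumptions):

```python
import numpy as np
import open3d as o3d

# Compared (pre-intervention) and reference (post-intervention) clouds.
compared = o3d.io.read_point_cloud("pre_intervention.ply")     # assumed file name
reference = o3d.io.read_point_cloud("post_intervention.ply")   # assumed file name

# For each point of the compared cloud, Euclidean distance to its
# nearest neighbour in the reference cloud (the C2C principle).
distances = np.asarray(compared.compute_point_cloud_distance(reference))

print("min / mean / max distance [m]: %.4f / %.4f / %.4f"
      % (distances.min(), distances.mean(), distances.max()))

# Share of points displaced by more than an (assumed) threshold,
# analogous to inspecting the scalar field produced by CloudCompare.
threshold = 0.05  # metres
print("points beyond threshold: %.1f%%" % (100.0 * (distances > threshold).mean()))
```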
Following the launch of the C2C distance, an approximation of the distance is calculated. This provides an estimate of the minimum, maximum and average distances, the maximum error and the optimal octree level to be employed. When applied to the case study, the aforementioned analysis required 0.32 seconds for completion. Based on this preliminary computation, the following parameters were set:
- maximum distance: left at the default value of 0.426 m;
- octree level: set to 8;
- multi-threaded: left at the default of 10/12.
Following completion of the computation, which took 0.41 seconds, the result was reprojected onto the compared cloud and incorporated in the form of a scalar field (
Figure 7).
The point cloud shows distances varying between 0 m and 0.363 m, highlighting the concentration of this phenomenon at the base of the bell tower.
The definition of the region of interest (ROI) serves as the foundation for the subsequent steps. To mitigate subjectivity in the selection of this region, the elevation z of each point was used as a discriminant and converted into a scalar field. A viewing range of 366.67–374.00 m a.s.l. was then established, focusing on the area below the horizontal plane of the marker frame (
Figure 8).
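A sketch of this elevation-based ROI selection, assuming the aligned cloud is available as an (N, 3) NumPy array expressed in the same vertical datum (the file name is hypothetical):

```python
import numpy as np

points = np.loadtxt("aligned_cloud.xyz")[:, :3]   # hypothetical file name (x y z)

# Viewing range in metres a.s.l., as reported in the text.
z_min, z_max = 366.67, 374.00
mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)

roi = points[mask]
print("ROI contains %d of %d points" % (roi.shape[0], points.shape[0]))
```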
In order to quantify the extent of the intervention performed, a calculation of the volume difference between the two scans was conducted. To this end, the ‘Compute Volume 2.5D’ tool was employed. The aforementioned tool enabled the generation of a depth map, which was then utilized as the basis for the subsequent volume calculation.
In order to achieve this outcome, the following steps were undertaken:
- a) Manual segmentation of the two basement elevations: a necessary step to ensure the accuracy and precision of the volume difference calculations.
- b) Volume calculation for both segmented elevations: the pre-intervention scan was set as the ‘Before’ cloud, leaving the cells with no data empty, and the post-intervention scan as the ‘After’ cloud, for which interpolation of the no-data cells was enabled. Interpolation is unavoidable wherever data is absent, and it was applied to the post-intervention scan because it is the denser of the two, so that the interpolated values are more accurate. The differential height map was produced by setting the following parameters:
- Grid step = 0.01 m
- Cell height = average height
- Projection direction = x for the east elevation and y for the north elevation of the basement.
- c) The depth maps resulting from this analysis were exported as point clouds (see Figure 9). The regularity observable in the xz and yz planes made it possible to calculate the volume without geometric irregularities of the basement adversely affecting the results.
In order to quantify the difference in volume between the two scans, the ‘Compute Volume 2.5D’ algorithm was re-launched, with the reference plane set to ‘before’ and the point clouds of the depth maps obtained from the previous analyses set to ‘after’. The results show an overall volume difference of 1.171 m³, distributed as follows: 0.304 m³ on the east elevation and 0.867 m³ on the north elevation.
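The 2.5D volume differencing can be illustrated with a simple NumPy rasterisation: points are projected along one axis onto a regular grid (step 0.01 m), the mean depth per cell is taken as the cell height, and the volume difference is the sum of the absolute per-cell height differences multiplied by the cell area. The sketch below is only an approximation of the principle under these assumptions (hypothetical file names, no interpolation of empty cells), not CloudCompare's implementation.

```python
import numpy as np

def height_map(points, origin, shape, grid_step, proj_axis):
    """Average depth per grid cell on a fixed grid, projecting along proj_axis (0=x, 1=y, 2=z)."""
    plane_axes = [a for a in (0, 1, 2) if a != proj_axis]
    uv = points[:, plane_axes]
    depth = points[:, proj_axis]
    idx = np.floor((uv - origin) / grid_step).astype(int)
    keep = np.all((idx >= 0) & (idx < shape), axis=1)
    flat = idx[keep, 0] * shape[1] + idx[keep, 1]
    sums = np.bincount(flat, weights=depth[keep], minlength=shape[0] * shape[1])
    counts = np.bincount(flat, minlength=shape[0] * shape[1])
    with np.errstate(invalid="ignore"):
        cells = sums / counts                      # NaN where a cell is empty
    return cells.reshape(shape)

def volume_difference(before, after, grid_step=0.01, proj_axis=0):
    """Unsigned volume between two 2.5D surfaces over cells populated in both epochs."""
    plane_axes = [a for a in (0, 1, 2) if a != proj_axis]
    both = np.vstack([before, after])[:, plane_axes]
    origin = both.min(axis=0)
    shape = np.floor((both.max(axis=0) - origin) / grid_step).astype(int) + 1
    hm_before = height_map(before, origin, shape, grid_step, proj_axis)
    hm_after = height_map(after, origin, shape, grid_step, proj_axis)
    diff = hm_after - hm_before                    # NaN where either epoch is empty
    return float(np.nansum(np.abs(diff)) * grid_step ** 2)

# Hypothetical inputs: segmented basement elevations from the two epochs.
before = np.loadtxt("east_elevation_pre.xyz")[:, :3]
after = np.loadtxt("east_elevation_post.xyz")[:, :3]
print("volume difference [m^3]: %.3f" % volume_difference(before, after, 0.01, 0))
```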
The application of the bi-temporal change detection methodology to the case study examined in this research aims to demonstrate the importance of monitoring, over time, phenomena that could trigger changes to cultural heritage structures and surfaces.
As several authors have discussed, change is currently assessed through visual inspections and through diagrams and reports produced by operators who are not always the same and may therefore produce inconsistent data. Error and the subjective interpretation of the operators involved can be mitigated through the use of technologies for the acquisition of images and videos from laser scanners or drone photogrammetry, together with artificial intelligence algorithms, in order to automate inspection procedures, reducing execution times and ensuring better reproducibility.
For this reason, the research team contributed to the acquisition of data and the subsequent processing of two point clouds acquired at different times, using a procedure that yields quantitative and replicable results. The study focused on the geometric analysis of the application case of the bell tower of the Church of San Pietro Barisano, in Matera. The variation in the distance between the points of the two clouds acquired in the two periods considered made it possible to identify a change in the basement area of the bell tower. This demonstrates the potential of appropriate instrumentation, targeted technologies, algorithms and analysis in the maintenance field, as a valid support to visual inspection and to the monitoring of phenomena over time. At the same time, although the developed methodology relies on non-invasive technologies that can be applied to any cultural heritage asset, whether subject to protection constraints or not, and uses currently consolidated instruments such as laser scanners and drones together with open-source software that ensure high reproducibility, further data integration appears necessary.
According to the observations made, the integration of more data is necessary to allow not only a geometric evaluation but also a qualitative one, for example by extending the observation of the phenomena from a bi-temporal to a multi-temporal time span. Moreover, since the data collected must be used to make assessments in the field of maintenance, and therefore to support the decision-making phase, it must be integrated with sensors and diagnostic tools that allow the extent of the changes already highlighted to be understood, so that the assessments are consistent with the asset and structure examined.