
Mitigating the impact of temperature variations on ultrasonic guided wave-based structural health monitoring through generative artificial intelligence

A peer-reviewed article of this preprint also exists.

Submitted: 03 January 2024; Posted: 05 January 2024

Abstract
Structural health monitoring (SHM) has become paramount for developing cheaper and more reliable maintenance policies. The advantages of adopting such a process have proven particularly evident when dealing with plated structures. In this context, state-of-the-art methods are based on exciting and acquiring ultrasonic guided waves through a permanently installed sensor network. A baseline is registered when the structure is healthy, and newly acquired signals are compared to it to detect, localize and quantify damage. To this purpose, the performance of traditional methods based on tomographic algorithms has been overcome by machine learning approaches, which allow processing a larger amount of data without losing diagnostic information. However, to date, no diagnostic method can deal with varying environmental and operational conditions. This work aims to develop a framework for mitigating the impact of temperature variations on ultrasonic guided wave-based SHM through generative artificial intelligence. A variational autoencoder and singular value decomposition were combined to learn the influence of temperature on guided waves. After training, the generative part of the algorithm was used to reconstruct signals at new, unseen temperatures. Moreover, a refined version of the algorithm, called the forced variational autoencoder, was introduced to further improve the reconstruction capabilities. The accuracy of the proposed framework was demonstrated against real measurements on a composite plate.
Keywords: 
Subject: Engineering - Mechanical Engineering

1. Introduction

In recent years, there has been a notable surge in the exploration of structural health monitoring (SHM) based on ultrasonic guided wave propagation for damage detection, localization, and quantification [1,2,3,4,5,6,7]. This method employs a single piezoelectric transducer as an actuator, transmitting ultrasonic waves into the material, while multiple strategically positioned transducers serve as receivers to capture the transmitted waves. The discernible contrast between a baseline signal recorded when the structure is intact and a signal from an unknown state of the structure may indicate the presence of damage [8,9,10]. It is crucial to acknowledge that these changes are not exclusively indicative of structural alterations within the monitored system, but may also be influenced by various environmental and operational conditions (EOCs), such as moisture, vibration, and especially temperature [11,12,13].
In Abbassi et al. [14], autoencoders demonstrated the capability to detect damage at different positions independently of temperature. The training dataset encompassed data from all tested temperatures. However, some temperatures are difficult to maintain in a laboratory environment while being very common during the operational life of a structure. For example, an aircraft at cruise altitude encounters temperatures as low as -50 °C, a condition that is challenging to replicate and maintain in the laboratory. Variational autoencoders (VAEs) [15] are a possible solution to this issue.
The VAE introduces a probabilistic interpretation of its results by modeling the latent space as a probability distribution. After training, the model can be used to generate new realistic data by sampling from the learned latent space distribution, thereby producing data samples that differ from the input but remain faithful to the underlying behavior of the data. Notably, a VAE was previously employed in Shu et al. [16] to predict the displacement at various points on a dam, demonstrating lower prediction errors compared to traditional models such as the long short-term memory model. The VAE was capable of extracting features from all environmental data, e.g., water level, dam temperature, water temperature, and rainfall, while traditional models only extracted primary features, leaving information out of the analysis.
The novelty of this study lies in applying the VAE to predict ultrasonic guided wave signals traveling through a composite panel at temperatures not present in its training dataset. Furthermore, linearity of the latent space points was enforced by introducing a new loss function based on singular value decomposition (SVD), as the impact of raising the temperature was expected to be antisymmetric to the impact of lowering it. The VAE's prediction error with and without this constraint was compared.
The structure of this article is as follows: Section 2, Materials and Methods, provides a comprehensive description of the dataset used in this study and outlines the implementation of the VAEs. Section 3, Results, showcases the signal prediction outcomes for the diverse scenarios examined in this study. Section 4, Discussion, presents insightful comments regarding the obtained results. Finally, Section 5, Conclusions, offers a summarized overview of the conclusions drawn from this study.

2. Materials and Methods

A generative artificial intelligence algorithm was trained on a dataset of ultrasonic guided waves acquired at different temperatures, with the aim of learning temperature-related features. After training, the model was used to generate signals at new temperatures.

2.1. Dataset

The dataset utilized in this study was sourced from the OpenGuidedWaves [17] platform, a repository offering comprehensive datasets of wave signals acquired on a carbon fiber-reinforced polymer (CFRP) plate. The experimental setup involved 12 piezoelectric sensors. Signals were sampled over a frequency band from 40 kHz to 260 kHz, with intervals of 20 kHz. The sampling process was carried out under varying temperature conditions, encompassing a temperature range from 20 °C to 60 °C. Notably, temperature was varied in a cyclical manner, involving two complete cycles. As a result, a comprehensive dataset of 322 distinct temperatures was acquired.
In this work, the original dataset was split into three distinct sub-datasets, designed to simulate diverse scenarios:
  • Standard Dataset: this dataset encompasses all the 322 samples.
  • Band Dataset: this dataset includes all the signals acquired between 30 °C and 50 °C.
  • Sparse Dataset: this dataset comprises clusters of samples at nearby temperatures, spaced at a fixed interval; specifically, clusters with a radius of 2 °C whose centers are separated by 5 °C were considered.
This approach allowed for the creation of multiple datasets, providing varied case studies to enhance the model’s adaptability and performance across different temperature scenarios.
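To make the three splits concrete, the following minimal sketch (Python/NumPy) derives the band and sparse datasets from the full set of acquisitions. The array names and the placement of the cluster centers are our assumptions, since the text only fixes the 2 °C radius and the 5 °C spacing:

```python
import numpy as np

# Hypothetical arrays (names are ours): one guided-wave signal per acquisition,
# with temperatures of shape (322,) and signals of shape (322, n_samples).

def band_split(temperatures, signals, lo=30.0, hi=50.0):
    """Band dataset: keep only acquisitions between lo and hi (degrees Celsius)."""
    mask = (temperatures >= lo) & (temperatures <= hi)
    return temperatures[mask], signals[mask]

def sparse_split(temperatures, signals, radius=2.0, spacing=5.0,
                 t_min=20.0, t_max=60.0):
    """Sparse dataset: clusters of half-width `radius` centered at temperatures
    spaced `spacing` degrees apart across the acquisition range."""
    centers = np.arange(t_min, t_max + spacing, spacing)
    dist = np.abs(temperatures[:, None] - centers[None, :])
    mask = np.any(dist <= radius, axis=1)
    return temperatures[mask], signals[mask]
```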

2.2. Generative artificial intelligence models

2.2.1. Variational Autoencoder

The model employed in this work was a VAE, an improved version of the traditional autoencoder.
An autoencoder is a neural network architecture designed for unsupervised learning. Comprising an encoder and a decoder, it aims to learn efficient representations of input data. The encoder transforms the input data into a compressed, lower-dimensional representation known as the latent space representation of the data. This encoded information is then decoded by the decoder to reconstruct the original input.
Mathematically, an autoencoder minimizes the reconstruction error, promoting the automatic learning of meaningful features in the data. The reconstruction loss shown in Equation 1, often denoted $\mathcal{L}_{\text{recon}}$, is typically defined as the mean squared error (MSE) between the input $X$ and the reconstructed output $\hat{X}$:

$$ \mathcal{L}_{\text{recon}}(X, \hat{X}) = \frac{1}{N} \sum_{i=1}^{N} \left( X_i - \hat{X}_i \right)^2 \quad (1) $$

where $N$ is the number of data points in the signals $X$ and $\hat{X}$.
The ability of autoencoders to learn compact representations of data makes them valuable for tasks where extracting meaningful features is crucial. For example, autoencoders find applications in various domains, such as dimensionality reduction, feature learning, and anomaly detection. Moreover, the versatility of their architecture allows autoencoders to be tailored to specific learning objectives. Leveraging this characteristic, several variations of the standard fully-connected architecture have been proposed in the literature, including denoising autoencoders and VAEs. VAEs are particularly promising: their latent space is described by probability distributions rather than single deterministic points, as illustrated in Figure 1. This enables the generation of new data points by sampling from the learned latent space distributions. Let z be a latent variable representing a sample from the latent space distributions, and let μ and σ be the mean and standard deviation of those distributions, as parametrized by the encoder network. During training, the model is encouraged to learn distributions close to the standard normal distribution, and sampling is performed according to Equation 2:
$$ z = \mu + \sigma \odot \epsilon \quad (2) $$

where:
  • $z$: sampled latent vector
  • $\mu$: mean vector obtained from the encoder
  • $\sigma$: standard deviation vector obtained from the encoder
  • $\epsilon$: random vector sampled from a standard normal distribution
  • $\odot$: element-wise multiplication operator
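In practice, Equation 2 is the reparameterization trick, which keeps sampling differentiable so that gradients can flow through the encoder. A minimal sketch is given below (PyTorch assumed; having the encoder output the log-variance rather than σ directly is a common convention and our assumption, not necessarily the authors' choice):

```python
import torch

def reparameterize(mu: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """Equation 2: z = mu + sigma * eps, with eps ~ N(0, I).
    Working with log-variance keeps sigma positive and gradients stable."""
    sigma = torch.exp(0.5 * log_var)
    eps = torch.randn_like(sigma)
    return mu + sigma * eps
```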
The reconstruction loss ($\mathcal{L}_{\text{recon}}$) is complemented by the Kullback-Leibler (KL) divergence term ($\mathcal{L}_{\text{KL}}$) shown in Equation 3, which measures the difference between the learned distributions and the standard normal distribution:

$$ \mathcal{L}_{\text{KL}} = -\frac{1}{2} \sum_{i=1}^{M} \left( 1 + \log(\sigma_i^2) - \mu_i^2 - \sigma_i^2 \right) \quad (3) $$
where M is the number of variables, i.e., distributions, considered in the latent space.
Hence, the overall VAE loss presented in Equation 4 is a combination of the reconstruction loss and the KL divergence term:
$$ \mathcal{L}_{\text{VAE}} = \mathcal{L}_{\text{recon}} + \beta \cdot \mathcal{L}_{\text{KL}} \quad (4) $$
where β is a hyperparameter that controls the importance of the KL divergence term in the overall loss.
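A hedged sketch of the resulting objective, combining Equations 1, 3, and 4 (PyTorch assumed; averaging over batch elements while summing over latent dimensions is our reduction choice):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, log_var, beta=1.0):
    """Equation 4: MSE reconstruction term (Equation 1) plus beta-weighted
    KL divergence to the standard normal distribution (Equation 3)."""
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = torch.mean(-0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1))
    return recon + beta * kl
```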
In summary, VAEs leverage probabilistic encoding to enable the generation of continuous and structured latent space representations, from which new data points can be created. The inclusion of the KL divergence term promotes the learning of well-behaved latent space distributions by penalizing latent space points that lie far from the center of the latent space.
The VAE architecture considered in this work is shown in Table 1.
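For concreteness, the sketch below instantiates the layer stack of Table 1 as a PyTorch module. The framework, the log-variance head, and the assumption that signals are scaled to [0, 1] (implied by the Sigmoid output) are ours; the authors' implementation, linked in the Data Availability Statement, may differ:

```python
import torch
import torch.nn as nn

class GuidedWaveVAE(nn.Module):
    """Minimal sketch of the Table 1 architecture. The latent space has
    two dimensions (z1, z2), as used for the SVD analysis later on."""

    def __init__(self, n_samples: int = 13108, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_samples, 128), nn.SiLU(),
            nn.Linear(128, 64), nn.SiLU(),
            nn.Linear(64, 16), nn.SiLU(),
        )
        self.to_mu = nn.Linear(16, latent_dim)
        self.to_log_var = nn.Linear(16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16), nn.SiLU(),
            nn.Linear(16, 64), nn.SiLU(),
            nn.Linear(64, 128), nn.SiLU(),
            nn.Linear(128, n_samples), nn.Sigmoid(),  # signals assumed in [0, 1]
        )

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        mu, log_var = self.to_mu(h), self.to_log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # Equation 2
        return self.decoder(z), mu, log_var
```

During training, the `vae_loss` function from the previous sketch would be applied to the three outputs of `forward`.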

2.2.2. Forced Variational Autoencoder

In this work, an enhanced version of the standard VAE, i.e., the forced VAE (f-VAE), is proposed. The f-VAE is a VAE with enforced linearity within the latent space, achieved through the introduction of an SVD component. The model architecture includes an encoder, a decoder, and a latent space sampling layer, each contributing to the overall VAE structure. A novel addition to the loss function is the SVD loss term: after the entire dataset is encoded into the latent space, the mean is subtracted and SVD is performed. The SVD loss term to be minimized is presented in Equation 5:

$$ \mathcal{L}_{\text{SVD}} = \sum_{i=2}^{n} \frac{\sigma_i}{\sigma_1} \quad (5) $$

where $\sigma_i$ is the i-th singular value, and $n$ is the number of singular values computed.
Hence, the total loss in Equation 6 is a combination of the reconstruction loss, KL divergence, and the introduced SVD loss:
$$ \mathcal{L}_{\text{f-VAE}} = \mathcal{L}_{\text{recon}} + \beta \cdot \mathcal{L}_{\text{KL}} + \gamma \cdot \mathcal{L}_{\text{SVD}} \quad (6) $$
where β and γ are hyperparameters controlling the weight of KL divergence and SVD loss, respectively.
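A minimal sketch of the SVD loss term (PyTorch assumed). The text states that the entire encoded dataset is centered before the decomposition; evaluating the term on mini-batches of latent means instead would be an approximation:

```python
import torch

def svd_loss(z_codes: torch.Tensor) -> torch.Tensor:
    """Equation 5: sum of singular-value ratios sigma_i / sigma_1 for i >= 2.
    Minimizing this term pushes the mean-centered latent codes toward a
    one-dimensional (linear) arrangement.
    z_codes: latent means, shape (n_signals, latent_dim)."""
    z_centered = z_codes - z_codes.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(z_centered)  # singular values, descending order
    return torch.sum(s[1:] / s[0])
```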

2.3. Training and signals generation

The f-VAE and VAE models were trained on all three proposed datasets. The workflow describing how training and signal reconstruction were addressed in this work is shown in Figure 2.
The following hyperparameters were tuned to optimize the training performance:
  • Learning Rate
  • Batch Size
  • Number of Epochs
  • Kullback-Leibler loss weight
  • SVD loss weight
Among the best-performing values, the set of hyperparameters that yielded a latent space with a linear correspondence to the variation of the network inputs was selected.
After training, signals at target temperatures were generated for testing the generation performance of the models. The following steps were followed for generating signals:
  • Temperature Selection: the target temperature for signal generation was chosen. This temperature served as the basis for the desired signal.
  • Model Initialization: the pre-trained model was initialized, including loading the trained weights and preparing the model for signal generation.
  • Latent Space Interpolation: SVD was used to elucidate the connection between the latent space coordinates, i.e., $z_1$ and $z_2$ in this work, and temperature, discerning the direction of maximum variance. The primary direction was considered sufficient to characterize the learned trend in the latent space, providing a single entry point into the latent space for a given signal temperature.
  • Signal Reconstruction: the decoder was used to reconstruct the signal corresponding to the selected temperature.
By employing SVD to map the chosen temperature to latent space coordinates and subsequently utilizing the VAE’s decoder, this approach enabled the generation of signals that reflect the desired temperature. Moreover, employing SVD exclusively on the training data ensured that the interpolation line captured only the known data, simulating a real-world scenario where models are trained with limited data.
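The four steps above can be sketched as follows (Python, with hypothetical variable names such as `z_train` and `t_train`). Note that the mapping from temperature to the coordinate along the SVD line, here a linear fit, is our assumption; the text only states that the primary SVD direction characterizes the learned trend:

```python
import numpy as np
import torch

# Hypothetical inputs: latent means of the training set z_train (n_train, 2),
# their temperatures t_train (n_train,), and a trained model exposing a
# `decoder` attribute as in the architecture sketch above.

def fit_latent_line(z_train, t_train):
    """Steps 1-3: find the primary variance direction via SVD and relate
    the coordinate along that direction to temperature (linear fit assumed)."""
    z_mean = z_train.mean(axis=0)
    _, _, vt = np.linalg.svd(z_train - z_mean)
    direction = vt[0]  # direction of maximum variance in the latent space
    coords = (z_train - z_mean) @ direction
    slope, intercept = np.polyfit(t_train, coords, deg=1)
    return z_mean, direction, slope, intercept

def generate_signal(model, target_temp, z_mean, direction, slope, intercept):
    """Step 4: map the target temperature onto the latent line and decode."""
    z = z_mean + (slope * target_temp + intercept) * direction
    z_t = torch.as_tensor(z, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        return model.decoder(z_t).squeeze(0).numpy()
```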
The performance of the model during the generation phase was evaluated using the following error metrics:
  • Root Mean Square Error (RMSE): measure of the average magnitude of the differences between the reconstructed signals and the original signals. It is calculated according to Equation 7 (a minimal implementation is sketched after this list):

    $$ \text{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( x_i - \hat{x}_i \right)^2} \quad (7) $$

    where $N$ is the number of data points, $x_i$ is the i-th data point of the original signal, and $\hat{x}_i$ is its reconstruction.
  • Signals Comparison: signals generated at different temperatures were qualitatively compared against the acquired ones to assess whether the generated signal matched the expected result.
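A one-function sketch of Equation 7 (NumPy assumed):

```python
import numpy as np

def rmse(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Equation 7: root mean square error between an original signal
    and its generated counterpart."""
    return float(np.sqrt(np.mean((x - x_hat) ** 2)))
```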
These metrics provided insights into how well the f-VAEs and VAEs were able to reconstruct signals starting from latent space representations. The generated signals spanned the entire temperature range of the original dataset described in Ref. [17], even though the training dataset for certain models did not encompass signals from the entire range. This intentional extension beyond the training dataset mirrored a testing scenario, allowing for a comprehensive evaluation of the models' performance and of their ability to interpolate and extrapolate out of the training set.

3. Results

The performance of the two models (VAE and f-VAE) was evaluated against the three different datasets described in Section 2.1. Each pairing of model and dataset was scrutinized considering latent space linearity, RMSE, and the qualitative evaluation of the signals generated at four distinct temperatures: 25°C, 35°C, 45°C, and 55°C. Moreover, the generated signals were also compared to the signal at 40°C, i.e., to the signal acquired at the median temperature value in the training datasets, in order to verify the interpolation and extrapolation capabilities of the proposed methods.
Without loss of generality, all the considerations reported in this section refer to latent space representations characterized by zero variance. That is, in the interest of clarity, only the mean values of the latent space distributions were considered.

3.1. VAE

3.1.1. Standard Dataset

The distribution of the learned latent space representations and the reconstruction error related to the VAE model trained over the standard dataset are shown in Figure 3. Figure 3a reveals distinct points aligned with a clear direction of primary variance. Indeed, the SVD method underscored a discernible gap between the first and the other components, endorsing the reliability of the interpolation line. Notably, the RMSE shown in Figure 3b remained below 4.5% at all temperatures, with the lowest reconstruction error at around 40°C and higher errors at the tails of the distribution. This behavior may indicate that the model failed to accurately learn temperature-related features and always reconstructed the signal at the median temperature of the training dataset.
The qualitative comparison of the generated signals is shown in Figure 4. The results underscored the tendency of the generated signals to closely follow the signal at the median temperature in the training dataset, i.e., 40°C, rather than adhering to the dataset signal at the corresponding temperature. This behavior confirms the intuition that the VAE trained on the standard dataset failed to learn how guided waves are influenced by temperature.

3.1.2. Band Dataset

The distribution of the learned latent space representations and the reconstruction error related to the VAE model trained over the band dataset are shown in Figure 5. Similar observations to those already reported in Section 3.1.1 emerged. The latent space plot shown in Figure 5a demonstrated a pronounced alignment of points along the primary variance direction, reaffirming the efficacy of the interpolation line through SVD. Due to the dataset’s limited range in temperature, the tails of the dataset extended beyond the SVD interpolation line, as the VAE was not explicitly trained on those regions. The RMSE shown in Figure 5b mirrored the trend observed for the standard dataset. That is, errors were consistently below 4.5%, the lowest error was observed at around 40°C, and highest RMSEs characterized the distribution tails. Similarly, Figure 6 shows that the generated signals qualitatively resembled the signal at 40°C, rather than those at the target temperature.
The results described above allow concluding that the VAE trained over the band dataset was not able to learn temperature-related features.

3.1.3. Sparse Dataset

The distribution of the learned latent space representations and the reconstruction error related to the VAE model trained over the sparse dataset are shown in Figure 7. The latent space plot shown in Figure 7a reveals a distribution markedly different from those of the preceding datasets in Figure 3a and Figure 5a, where a more pronounced direction of variance was observed through SVD. In this case, instead, both the first and second components carried significance, a distinctive characteristic of this dataset. As a consequence, the encoded signals revealed a sinusoidal pattern instead of a linear trend.
Despite this potentially disadvantageous behavior, the RMSE plot shown in Figure 7b follows the same trend as for the previous datasets, with errors consistently concentrated at the distribution tails.
The signal reconstruction capabilities shown in Figure 8 are consistent with those of the VAEs trained on the other two datasets. The generated signals tend to closely resemble the signal at 40°C, rather than adhering to the signal at the target temperature.

3.2. f-VAE

3.2.1. Standard Dataset

The distribution of the learned latent space representations and the reconstruction error related to the f-VAE model trained over the standard dataset are shown in Figure 9. Conspicuous differences compared to the VAE model presented in Section 3.1.1 can be appreciated. The latent space distribution shown in Figure 9a exhibited a more linear trend, closely resembling a straight line. Notably, SVD underscores a significantly larger first component compared to secondary ones, implying the negligible contribution of these latter components. Also, the RMSE shown in Figure 9b presented a different behavior than that observed for the VAE model. That is, no pronounced increase in error characterized the tails of the distribution. Except for a few points at higher temperatures approaching a 4% error, the majority of points did not exceed a 2% overall error. Remarkably, 90% of the points remained below the 1% error threshold.
The qualitative comparison of the generated signals is shown in Figure 10. The f-VAE model clearly outperformed the VAE model in terms of accuracy of the generated signals. In fact, the generated signals exhibited a closer resemblance to the expected signals, rather than strictly adhering to the signal at 40°C.
The results showed that forcing VAEs to learn linear representations in the latent space allowed for correctly capturing the influence of temperature on ultrasonic guided waves.

3.2.2. Band Dataset

The distribution of the learned latent space representations and the reconstruction error related to the f-VAE model trained over the band dataset are shown in Figure 11. Distinguishable variations in comparison to the VAE model outlined in Section 3.1.2 can be observed. The latent space distribution depicted in Figure 11a displayed a more linear tendency, similar to that discussed in Section 3.2.1. The RMSE illustrated in Figure 11b exhibited a pattern similar to that noted for the VAE model, but with some discrepancies: the tails exhibited higher errors, up to 3.5%, while the RMSE within the temperature range of 30°C to 50°C showed greater consistency, staying below 1%.
The generated signals presented in Figure 12, in line with the model’s behavior observed in the standard dataset, continued to closely follow the dataset signals. The f-VAE model outperformed the VAE model in terms of accuracy of the generated signals, as already highlighted in Section 3.2.1. Despite the inherent challenges posed by extreme temperature points, the f-VAE was able to generate signals at temperatures out of the training dataset. That is, the f-VAE was also able to extrapolate.

3.2.3. Sparse Dataset

The f-VAE trained over the sparse dataset was characterized by satisfactory performance, as shown in Figure 13. The model was able to capture the primary sources of variance within the sparse dataset, even though the linearity was not as pronounced as observed for the f-VAE trained over the standard dataset (Figure 9a). Major differences are observable by comparing the f-VAE and the VAE trained over the same sparse dataset. In fact, while the latent space plot shown in Figure 13a resembled a linear behavior, the VAE learned a sinusoidal pattern (Figure 7a) characterized by a non-negligible second singular value.
Also, the RMSE plot shown in Figure 13b displayed a satisfactory performance, consistently maintaining errors below 2.5%. Notably, 90% of the points fell below the 1% error threshold, indicating the model’s adaptation to the complexities of the sparse dataset.
In line with the observed trends in Figure 10 and Figure 12, the generated signals shown in Figure 14 faithfully followed the expected signals. Also here, the f-VAE model outperformed the VAE model in terms of accuracy of the generated signals.

4. Discussion

The analysis of the performance of the two models across different datasets revealed the potential and limitations of the employed generative artificial intelligence algorithms.
The VAE trained over the standard dataset seemed to effectively capture the relation between signal temperature and the latent space coordinates, as supported by the SVD analysis. However, although the reconstruction error over the test dataset was satisfactorily low, the model was found to reproduce only the signal at 40°C, rather than the expected signal at the target temperature.
Similar considerations were drawn out from the analysis of the performance of the VAE trained over the band dataset. In fact, the latent space distribution was aligned along a clear primary variance direction. Despite SVD capturing the maximum variance in the dataset, the bandwidth considered in the band dataset introduced challenges, resulting in high reconstruction errors at the tails of the RMSE distribution. Additionally, the generated signals still matched the 40°C signal, regardless of the target temperature.
The same unsatisfactory generation capabilities characterized the VAE trained over the sparse dataset. Here, the performance was even worse, given that the latent space distribution was characterized by two non-negligible singular values.
The introduction of the f-VAE brought about notable improvements. When trained over the standard dataset, the f-VAE introduced a more linear latent space, with a significantly larger first component according to SVD. The RMSEs over the test dataset were considerably lower than those characterizing the VAEs. No clear trend of higher reconstruction errors at the tails of the temperature distribution was identified, indicating enhanced precision in signal generation. Furthermore, the generated signals closely resembled the expected signals at the target temperatures.
Similarly, the f-VAE trained over the band dataset was characterized by a linear latent space representation in the temperature range considered during training. The regression line slightly departed from the linear trend at unseen temperatures. The same trend was shown by the reconstruction error, which was characterized by higher values when extrapolation was performed. This behavior is commonly shown by all machine learning algorithms, which cannot be fully trusted when extrapolating. Still, the generated signals closely followed the expected signals at all temperatures, bringing evidence of the capability of the model to generate realistic signals.
The f-VAE trained over the sparse dataset offered satisfactory performance. The latent space exhibited a prominent linearity and the RMSE was kept low at all temperatures. Accordingly, the generated signals closely matched the expected signals. The reconstruction quality achieved using the sparse dataset was higher than that characterizing the f-VAE trained over the band dataset. This indicates that f-VAEs work better when interpolating, while caution should be taken when extrapolating.
Higher reconstruction errors characterized the signals close to 60 °C generated by the f-VAEs. This behavior came from the dataset composition. In fact, signals were acquired by varying the temperature in a cyclic manner over two cycles. As a result, the dataset included only two acquisitions at 60 °C, against four acquisitions at 20 °C and at every other temperature in the range. This discrepancy implied a less densely populated training distribution in regions close to 60 °C.

5. Conclusions

In this work, variational autoencoders and singular value decomposition have been used to learn the influence of temperature on ultrasonic guided waves. Moreover, a newly developed machine learning algorithm, i.e., the forced variational autoencoder, has been introduced to further improve the reconstruction capabilities of the generative artificial intelligence-based framework. The accuracy of the proposed method has been demonstrated against real measurements on a composite plate. The following conclusions can be drawn:
  • Regardless of the composition of the training dataset, traditional variational autoencoders cannot learn how to generate signals at different temperatures.
  • Satisfactory reconstruction accuracy has been shown by forced variational autoencoders coupled with singular value decomposition.
  • Forced variational autoencoders can work in realistic scenarios, even when the training dataset is sparse.
Future work will focus on implementing forced variational autoencoders and singular value decomposition into unsupervised frameworks for damage detection, localization and quantification. This will allow making a step towards robust structural health monitoring tools that are not influenced by varying environmental and operational conditions.

Author Contributions

Conceptualization, R. Junges, L. Lomazzi, and F. Cadini; methodology, L. Lomazzi and L. Miele; software, L. Lomazzi and L. Miele; validation, R. Junges, L. Lomazzi, and L. Miele; formal analysis, R. Junges, L. Lomazzi, and F. Cadini; resources, F. Cadini and M. Giglio; data curation, L. Miele; writing—original draft preparation, R. Junges, L. Lomazzi, and L. Miele; writing—review and editing, R. Junges and L. Lomazzi; supervision, L. Lomazzi and F. Cadini. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The code presented in the manuscript is available on GitHub: https://github.com/lorenzomie/VAE-generative-temperature-signal-for-CFRP-plate.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gao, F.; Hua, J. Damage characterization using CNN and SAE of broadband Lamb waves. Ultrasonics 2022, 119, 106592.
  2. Gonzalez-Jimenez, A.; Lomazzi, L.; Junges, R.; Giglio, M.; Manes, A.; Cadini, F. Enhancing Lamb wave-based damage diagnosis in composite materials using a pseudo-damage boosted convolutional neural network approach. Structural Health Monitoring 2023, 14759217231189972.
  3. Zhang, S.; Li, C.M.; Ye, W. Damage localization in plate-like structures using time-varying feature and one-dimensional convolutional neural network. Mechanical Systems and Signal Processing 2021, 147, 107107.
  4. Migot, A.; Bhuiyan, Y.; Giurgiutiu, V. Numerical and experimental investigation of damage severity estimation using Lamb wave–based imaging methods. Journal of Intelligent Material Systems and Structures 2019, 30, 618–635.
  5. Lomazzi, L.; Fabiano, S.; Parziale, M.; Giglio, M.; Cadini, F. On the explainability of convolutional neural networks processing ultrasonic guided waves for damage diagnosis. Mechanical Systems and Signal Processing 2023, 183, 109642.
  6. Lomazzi, L.; Junges, R.; Giglio, M.; Cadini, F. Unsupervised data-driven method for damage localization using guided waves. Mechanical Systems and Signal Processing 2024, 208, 111038.
  7. Lomazzi, L.; Giglio, M.; Cadini, F. Towards a deep learning-based unified approach for structural damage detection, localisation and quantification. Engineering Applications of Artificial Intelligence 2023, 121, 106003.
  8. Lee, B.; Staszewski, W. Modelling of Lamb waves for damage detection in metallic structures: Part I. Wave propagation. Smart Materials and Structures 2003, 12, 804.
  9. Staszewski, W.; Tomlinson, G.; Boller, C.; Tomlinson, G. Health Monitoring of Aerospace Structures; Wiley, 2004.
  10. Lee, B.; Staszewski, W. Lamb wave propagation modelling for damage detection: I. Two-dimensional analysis. Smart Materials and Structures 2007, 16, 249.
  11. Gorgin, R.; Luo, Y.; Wu, Z. Environmental and operational conditions effects on Lamb wave based structural health monitoring systems: A review. Ultrasonics 2020, 105, 106114.
  12. Lee, S.J.; Gandhi, N.; Michaels, J.E.; Michaels, T.E. Comparison of the effects of applied loads and temperature variations on guided wave propagation. AIP Conference Proceedings 2011, 1335, 175–182.
  13. Andrews, J.P.; Palazotto, A.N.; DeSimio, M.P.; Olson, S.E. Lamb wave propagation in varying isothermal environments. Structural Health Monitoring 2008, 7, 265–270.
  14. Abbassi, A.; Römgens, N.; Tritschel, F.F.; Penner, N.; Rolfes, R. Evaluation of machine learning techniques for structural health monitoring using ultrasonic guided waves under varying temperature conditions. Structural Health Monitoring 2023, 22, 1308–1325.
  15. Doersch, C. Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908, 2016.
  16. Shu, X.; Bao, T.; Li, Y.; Gong, J.; Zhang, K. VAE-TALSTM: a temporal attention and variational autoencoder-based long short-term memory framework for dam displacement prediction. Engineering with Computers 2021, 1–16.
  17. Moll, J.; Kexel, C.; Pötzsch, S.; Rennoch, M.; Herrmann, A.S. Temperature affected guided wave propagation in a composite plate complementing the Open Guided Waves Platform. Scientific Data 2019, 6, 191.
Figure 1. VAE Architecture
Figure 2. Workflow describing training and signal generation
Figure 3. VAE - standard dataset. Latent space distribution and RMSE
Figure 4. Test signals generated by the VAE trained over the standard dataset
Figure 5. VAE - band dataset. Latent space distribution and RMSE
Figure 6. Test signals generated by the VAE trained over the band dataset
Figure 7. VAE - sparse dataset. Latent space distribution and RMSE
Figure 8. Test signals generated by the VAE trained over the sparse dataset
Figure 9. f-VAE - standard dataset. Latent space distribution and RMSE
Figure 10. Test signals generated by the f-VAE trained over the standard dataset
Figure 11. f-VAE - band dataset. Latent space distribution and RMSE
Figure 12. Test signals generated by the f-VAE trained over the band dataset
Figure 13. f-VAE - sparse dataset. Latent space distribution and RMSE
Figure 14. Test signals generated by the f-VAE trained over the sparse dataset
Table 1. Summary of the Neural Network Architecture

Layer          Number of Neurons   Activation Function
Input          1 × 13108           -
Dense          128                 SiLU
Dense          64                  SiLU
Dense          16                  SiLU
Latent Space   2                   -
Sampling       1                   -
Dense          16                  SiLU
Dense          64                  SiLU
Dense          128                 SiLU
Output         1 × 13108           Sigmoid

Optimizer: Adam
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.