
A Novel Customised Load Adaptive Framework for Induction Motor Fault Classification Utilizing MFPT Bearing Dataset

A peer-reviewed article of this preprint also exists.
Submitted: 04 December 2023 | Posted: 05 December 2023

Abstract
This research presents a novel Customised Load Adaptive Framework (CLAF) for fault classification in Induction Motors (IMs), utilizing the Machinery Failure Prevention Technology (MFPT) Bearing Dataset. CLAF represents a pioneering approach that extends traditional fault classification methodologies by accounting for load variations and dataset customization. Through a meticulous two-phase process, it unveils load-dependent fault subclasses that have not been readily identified in traditional approaches. Additionally, new classes are created to accommodate the dataset's unique characteristics. Phase 1 involves the exploration of load-dependent patterns in time and frequency domain features using one-way Analysis of Variance (ANOVA) ranking and validation via bagged tree classifiers. In Phase 2, CLAF is applied to identify mild, moderate, and severe load-dependent fault subclasses through optimal Continuous Wavelet Transform (CWT) selection via Wavelet Singular Entropy (WSE) and CWT energy analysis. The results are compelling, with a 96.3% classification accuracy achieved when employing a wide neural network to classify the proposed load-dependent fault subclasses. This underscores the practical value of CLAF in enhancing fault diagnosis in IMs and its future potential in advancing IM condition monitoring.
Keywords: 
Subject: Engineering - Industrial and Manufacturing Engineering

1. Introduction

Bearings are fundamental components in diverse industrial applications, such as Induction Motors (IM), turbines, medical devices, and aerospace [1]. IM is prevalent in industrial processes due to its affordability, reliability, and robustness, representing 85% of global energy consumption. However, with the advent of Industry 4.0, there is a growing emphasis on data utilization, focusing on interpreting vast amounts of data for early fault diagnosis to prevent critical downtimes. Current research efforts aim to enhance fault classification accuracy through data-centric machine learning models, encompassing multi-channel fault classification [2], parameter optimization [3], and transfer learning for signal feature extraction [4]. The monitoring and fault diagnosis of IM, which heavily rely on bearings, are subjects of extensive research [1,5]. Various fault detection techniques, including acoustic emission, motor current consumption, temperature, and vibration signal-based methods, have demonstrated reliability and effectiveness [6]. Vibration signal-based diagnostics represent a well-established method for Condition-Based Maintenance (CBM) of bearings, as defects generate vibration impulses on bearing surfaces [7]. However, this diagnostic method requires sensors to be directly attached to machines, making them susceptible to noise [8,9]. Strategically placing sensors on rolling bearings allows for the direct observation of these signals to determine the bearing's condition [10].
In recent years, there has been a growing interest in the detection of faults in induction motors, given their crucial role in various industries such as the electric power sector, manufacturing, and services. The overall vibration-based machine condition monitoring framework involves three key steps: signal collection using sensors, signal analysis through processing techniques, and fault detection and health assessment using a classification algorithm [11]. Consequently, efforts have been focused on developing reliable and cost-effective methods for diagnosing faults in induction motors. Early detection of potential failures is crucial to proactively prevent significant damage to machinery [12,13,14,15,16,17]. Despite the recognized importance of feature extraction and selection in intelligent diagnosis systems, there is a noticeable gap in the literature, particularly concerning evaluating load impact [18,19,20]. This gap invites further exploration: our understanding of how varying loads influence the manifestation of faults remains limited. While previous research has explored areas such as estimating remaining useful life from run-to-failure datasets [18], the impact of loads on faults remains relatively unexplored. Radial load impact was discussed in [21], where the authors used traditional statistical indicators to study the effects of inner and outer race faults in bearings under different loads. They proposed combinations of indicators such as Kurtosis × RMS, Kurtosis × Peak, and RMS × Peak for early fault detection in bearings using the Society for Machinery Failure Prevention Technology (MFPT) bearings dataset, which includes inner and outer race faults. A similar analysis was conducted on the Case Western Reserve University (CWRU) dataset, involving a thorough investigation and comparison of a wide range of traditional and new vibration indicators for detecting bearing defects and monitoring their progression [22].
While extensive research has explored fault classification under varying loads, the subtle repercussions of load variations on the intrinsic nature of faults have persistently evaded attention. This research addresses two limitations of existing technology. It explores how radial load characteristics influence fault behaviours, employing advanced methods like time and frequency domain feature extraction, feature reduction, and Continuous Wavelet Transform (CWT) for time-frequency analyses. The study introduces a paradigm shift in induction motor fault classification through the proposed Customised Load Adaptive Framework (CLAF), integrating time and frequency domain features, CWT, Wavelet Singular Entropy (WSE), CWT energy, and novel load-dependent fault subclasses. CLAF will be customised and tested on the MFPT-bearing dataset to uncover intricate load-dependent patterns, providing a profound understanding of the interplay between load dynamics and bearing fault behaviour. The research yields impactful contributions:
  • Comprehensive Time and Frequency Analysis: The study conducted a comprehensive time and frequency domain analysis under six different load conditions. This analysis has highlighted patterns and variations in fault severity, providing valuable insights into IM behaviour.
  • Optimal Continuous Wavelet Transform (CWT) Approach: The selection of an optimal CWT approach using WSE contributes to improved signal processing for time-frequency feature extraction, denoising, and pattern recognition.
  • Revealing Load-Dependent Fault Subclasses: This represents an innovative extension of traditional fault classification methods. It effectively accommodates load variations and dataset customization, making it adaptable to different IM datasets. The research has identified and classified load-dependent fault subclasses, including mild, moderate, and severe, which enhances the understanding of fault severity in different load scenarios.
  • Proposing a Customised Load Adaptive Framework (CLAF): The research introduces a novel CLAF, which represents a pioneering approach in the field of fault classification for Induction Motors (IMs). CLAF extends traditional fault classification methodologies by considering load variations and dataset customization.
The rest of the paper is organized as follows: Section 2 presents the theoretical background of this study, including the research state-of-the-art and research gaps concerning the aforementioned limitations. Section 3 provides details about the dataset and outlines the proposed research methodology, including procedures, techniques, and assumptions used in the study. The experimental results and evaluation parameters of Phase 1 and Phase 2 are discussed in Section 4. Finally, Section 5 concludes the paper and outlines future directions.

2. Background and Related Work

2.1. Feature Extraction Domains

There are three primary domains for feature extraction: the time domain, frequency domain, and time-frequency domain. These distinct domains are employed to capture unique insights into signal behaviour:

2.1.1. Time Domain Analysis

Traditional Statistical Features (TSFs), shown in Table 1, are essential time domain measures derived from vibration or time series data. These features, including mean, Root Mean Square (RMS), kurtosis, peak-to-peak, Variance (Var), Standard Deviation (Std), shape factor, peak factor, pulse factor, and margin factor, provide insights into the statistical properties and amplitude variations of vibration signals [9]. The Crest Factor (CF), derived from time domain data, is the ratio of the peak value of a signal to its RMS value and serves as a feature for assessing high-energy events or abnormal vibrations [21]. These features collectively capture the temporal characteristics of signals, enabling the examination of behaviour over time. Analyzing vibration signals in the time domain is crucial for understanding signal dynamics and detecting anomalies or faults [9]. Peak and RMS share the same unit as the signal amplitude, while the other factors are dimensionless [20]. RMS is widely used among these features due to its practical significance. Table 1 presents the formulas for the TSFs [21,23,24]:
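As an illustrative sketch (not the authors' MATLAB implementation), the following Python snippet computes several of the TSFs listed in Table 1 from a single vibration segment; the feature definitions follow the standard formulas named above, and the synthetic segment is an assumption for demonstration only.

```python
import numpy as np

def time_domain_features(x):
    """Compute common Traditional Statistical Features (TSFs) for a 1-D vibration segment."""
    x = np.asarray(x, dtype=float)
    abs_x = np.abs(x)
    rms = np.sqrt(np.mean(x**2))
    peak = abs_x.max()
    return {
        "mean": x.mean(),
        "rms": rms,
        "std": x.std(),
        "variance": x.var(),
        "kurtosis": np.mean((x - x.mean())**4) / x.var()**2,       # non-excess kurtosis
        "skewness": np.mean((x - x.mean())**3) / x.std()**3,
        "peak_value": peak,
        "peak_to_peak": x.max() - x.min(),
        "crest_factor": peak / rms,                                 # peak / RMS
        "shape_factor": rms / np.mean(abs_x),                       # RMS / mean absolute value
        "impulse_factor": peak / np.mean(abs_x),                    # peak / mean absolute value
        "clearance_factor": peak / np.mean(np.sqrt(abs_x))**2,      # peak / squared mean of sqrt(|x|)
    }

# Example on a synthetic 2500-sample segment (the subfile length used later in the paper)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    segment = rng.normal(size=2500)
    for name, value in time_domain_features(segment).items():
        print(f"{name:>18s}: {value: .4f}")
```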

2.1.2. Frequency Domain Analysis

The extraction of features from the frequency domain can provide insights into the periodic components and harmonic structures within the data. The frequency domain analysis of vibration signals involves examining how the amplitude changes with respect to different frequencies [11]. These features capture frequency-specific aspects of the signal and contribute to a better understanding of the vibration behaviour [9].
Commonly used frequency domain features, such as Root Mean Square Frequency (RMSF), Center Frequency (CF), Mean Square Frequency (MSF), Frequency Variance (FV), and Root Variance Frequency (RVF), offer valuable information about the signal's characteristics and power distribution within the frequency domain [9]. Standard harmonic features focus on the integral multiples of the fundamental frequency and have been widely applied. For instance, Total Harmonic Distortion (THD) has been utilized to characterize the frequency content of the vibration signal and is often represented as a percentage, as shown in Table 2 [25,26]. The Signal-to-Noise Ratio (SNR) and the Signal-to-Noise and Distortion Ratio (SINAD), both measured in decibels (dB) and also shown in Table 2, are often integrated with time domain features, focusing on the signal's temporal properties alongside its frequency domain characteristics, especially in gearbox fault analysis [27].
On the other hand, computing spectral features from a signal involves transforming the signal from the time domain to the frequency domain and extracting various features that characterize the signal's frequency content, providing a comprehensive overview of the entire frequency content of the signal [28]. The Autoregressive (AR) model is widely utilized for spectral estimation. Several techniques exist to compute the AR coefficients from data samples, including the Yule-Walker method, Burg's method, and the covariance method [29]. In rotating machine fault diagnosis, the AR model, especially with the forward-backward approach, has demonstrated superior classification results, overcoming the limitations of traditional methods [30]. The AR model with linear prediction captures the deterministic signal component, while the residual part contains noise and bearing fault impulses; it can effectively cancel noise for bearing diagnosis when the optimal order is chosen [31]. Each technique has strengths and weaknesses; the periodogram lacks precision, Welch's method struggles with rapid shifts, and the multitaper method is computationally intense [6,32]. The AR model of order $P$, denoted AR($P$), can be expressed as in Equation (1) [29]:
$$x[n] = \sum_{p=1}^{P} a_p\, x[n-p] + e[n] \qquad (1)$$
where $x[n]$ represents the current value of the signal, $a_p$ are the autoregressive coefficients to be estimated, and $e[n]$ is the random noise term. The current value $x[n]$ is influenced by its past values $x[n-p]$, weighted by the corresponding autoregressive coefficients $a_p$; the sum runs over $p = 1$ to the model order $P$. The noise term $e[n]$ accounts for any unpredictability or randomness in the signal that is not captured by the past values. The AR coefficients $a_p$ can be estimated using the Yule-Walker method, Burg's method, the covariance method, or the forward-backward approach, and the estimated coefficients are then used to describe the behaviour of the signal in terms of its past values [6,29]. The resulting spectral features from the AR model, such as Peak Amplitude, Peak Frequency, and Band Power, are shown in Table 2.
Table 2. Frequency domain features.
Harmonic features:
  THD $= \dfrac{\sqrt{\sum_{i=2}^{N} A_i^{2}}}{A_1}$ : a frequency domain measure of the distortion caused by harmonics in the signal.
  SNR $= 10 \log_{10}\!\left(\dfrac{P_{signal}}{P_{noise}}\right)$ : compares the level of the desired signal to the level of background noise.
  SINAD $= 10 \log_{10}\!\left(\dfrac{P_{signal}}{P_{noise} + P_{distortion}}\right)$ : a measure of signal quality that compares the level of the desired signal to the level of background noise and harmonics.
Spectral features:
  Peak amplitude $= \left| x(f_{peak}) \right|$ : the highest point (peak) of the signal's spectrum in the frequency domain.
  Peak frequency $= f_{peak}$ : the frequency component that is most prominent or dominant in the signal.
  Band power $= \sum_{f = f_{start}}^{f_{end}} \left| x(f) \right|^{2}$ : the total energy within a specific frequency range, providing insights into the distribution of signal energy across the spectrum.
Here, $A_1$ denotes the amplitude of the fundamental frequency in the spectrum, while $A_i$ represents the amplitude of the $i$-th harmonic. In the formulas for SNR and SINAD, $P_{signal}$ refers to the power of the signal, $P_{noise}$ to the power of the noise, and $P_{distortion}$ to the power of harmonic distortion. In the peak amplitude, $x(f_{peak})$ represents the complex value of the signal in the frequency domain at bin $f_{peak}$, and $|\cdot|$ denotes the magnitude of a complex number. $x(f)$ is the complex value of the signal in the frequency domain at frequency bin $f$, and $|x(f)|^{2}$ is its squared magnitude. The summation runs over all frequency bins $f$ within the specified frequency range $[f_{start}, f_{end}]$.
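To make Equation (1) and the spectral features of Table 2 concrete, the following Python sketch estimates AR coefficients with the Yule-Walker method and reads peak amplitude, peak frequency, and band power off the resulting AR spectrum. It is a hedged illustration only: the signal is synthetic, the AR order of 15 and the 10,000-25,000 Hz band mirror choices described later in the paper, and the authors' own analysis was performed in MATLAB.

```python
import numpy as np

def yule_walker_ar(x, order):
    """Estimate AR(P) coefficients a_p of Equation (1) with the Yule-Walker method."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])  # Toeplitz matrix
    a = np.linalg.solve(R, r[1:order + 1])           # AR coefficients a_p
    sigma2 = r[0] - np.dot(a, r[1:order + 1])        # driving-noise variance
    return a, sigma2

def ar_psd(a, sigma2, fs, nfft=4096):
    """AR power spectral density: sigma^2 / |1 - sum_p a_p e^{-j w p}|^2."""
    freqs = np.linspace(0, fs / 2, nfft)
    w = 2 * np.pi * freqs / fs
    denom = np.abs(1 - sum(ap * np.exp(-1j * w * (p + 1)) for p, ap in enumerate(a))) ** 2
    return freqs, sigma2 / denom

# Spectral features (peak amplitude, peak frequency, band power) from the AR spectrum
fs = 48828                                           # fault-signal sampling rate of the MFPT dataset
rng = np.random.default_rng(1)
signal = rng.normal(size=2500)                       # stand-in for one vibration subfile
a, sigma2 = yule_walker_ar(signal, order=15)         # AR(15), as in the paper's second model
freqs, psd = ar_psd(a, sigma2, fs)
band = (freqs >= 10_000) & (freqs <= 25_000)         # band of interest assumed from Section 4.1.3
peak_idx = np.argmax(psd[band])
print("peak frequency [Hz]:", freqs[band][peak_idx])
print("peak amplitude     :", psd[band][peak_idx])
print("band power         :", np.trapz(psd[band], freqs[band]))
```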

2.1.3. Time-Frequency Domain Analysis

Time-frequency domain analysis combines time and frequency information to understand the signal's frequency band over a specific time interval [33]. It provides a localised breakdown of the signal by considering smaller time segments. By examining the signal’s frequency content over time, these techniques can reveal hidden aspects and better capture the characteristics of non-stationary data [10]. One of the most famous techniques is wavelet transform, which offers the unique advantage of localising this frequency information in time, exemplified by the term "timescale (frequency) based analysis method." The wavelet transform predominantly employs three mother wavelet functions: Amor, Bump, and Morse, which have become staple choices in signal pre-processing [32]. The Wavelet scalogram based on CWT is a foundational tool in wavelet-based fault diagnostics [6]. Another pivotal technique is WSE, which merges the wavelet transform, singular value decomposition, and Shannon entropy. WSE adeptly extracts fault characteristics even from noise-laden and subtle fault transients, cementing its efficacy and superiority in fault detection [32,33].

2.2. Continuous Wavelet Transform (CWT)

The CWT produces a two-dimensional representation of the signal, with time on one axis and scale (related to frequency) on the other axis, called time-frequency domain analysis [34]. The wavelet coefficients can be used to create wavelet images for analysis and provide visual insights into how the frequency content of the signal changes over time, often referred to as a "scalogram" as in [35] or a "2D time-frequency diagram" as in [32]. On the other hand, it is also possible to quantify the complexity of the singular value distribution resulting from the wavelet transform. This quantification does not directly involve generating images; instead, it yields a numerical measure that characterises the intricate patterns and changes in the signal's frequency behaviour over time, providing insights into the signal's features [32].
The CWT employs a "mother wavelet" to represent a signal. By adjusting the scale and the translation of this wavelet, the transform achieves a nuanced time-frequency analysis. This method enhances the Fourier transform's local analysis capability, enabling the examination of transient features in signals and providing a dynamic lens for signal analysis. The wavelet transform decomposes a signal across various scales using the mother wavelet. Such decomposition facilitates an in-depth study of how signal attributes vary across sections and scales. The technique is invaluable for understanding a signal's time and frequency characteristics in a localised manner [32]. The CWT coefficient formulas for a specific signal, $f(t)$, are presented in Equations (2) and (3):
$$WT_f(a,\tau) = \frac{1}{\sqrt{a}} \int f(t)\, \overline{\varphi}\!\left(\frac{t-\tau}{a}\right) dt \qquad (2)$$
$$WT_f(a,\tau) = \left\langle f(t),\, \varphi_{a,\tau}(t) \right\rangle \qquad (3)$$
Here, $WT_f(a,\tau)$ denotes the wavelet coefficient at a specific scale, $a$, and position, $\tau$. The term $a$ is the scaling factor that stretches or compresses the wavelet, while $\tau$ is the translation factor that shifts the wavelet along the signal's time axis [36]. The function $\varphi$ represents the scaled and translated versions of the mother wavelet. Different mother wavelets yield distinct wavelet coefficients, highlighting varied facets of the signal [20].
Moreover, Equation (4) shows that the original signal, $f(t)$, can be expressed via its wavelet transform and the associated "daughter wavelets", $\varphi_{a,\tau}(t)$ [32]. Mother wavelets, such as Morse, Amor, and Bump, replace $\varphi(t)$ in the equation, acting as foundational waveforms underpinning the wavelet transform. These mother wavelets define the fundamental shape and attributes of the wavelet that undergoes scaling and translation for signal analysis, as depicted in Equation (4), while the CWT is presented in Equation (5) [36]:
$$\varphi_{a,\tau}(t) = \frac{1}{\sqrt{a}}\, \varphi\!\left(\frac{t-\tau}{a}\right) \qquad (4)$$
$$WT(a,\tau) = \frac{1}{\sqrt{a}} \int f(t)\, \varphi\!\left(\frac{t-\tau}{a}\right) dt \qquad (5)$$
Furthermore, when the Fourier Transform is applied to the wavelet function $\varphi(t)$, the CWT can be interpreted in the frequency domain. This relationship underscores the CWT's interpretation as a band-pass filtering operation in the frequency domain, with the filter's shape and central frequency determined by the mother wavelet $\varphi(t)$. The Fourier Transform (FT) of the daughter wavelet $\varphi_{a,\tau}(t)$, scaled by $a$ and translated by $\tau$, is given in Equation (6) [32]:
$$\Psi_{a,\tau}(\omega) = \frac{1}{\sqrt{a}}\, e^{-j\omega\tau}\, \Psi(a\omega) \qquad (6)$$
In this equation, $e^{-j\omega\tau}$ translates the wavelet along the frequency axis, $\Psi(a\omega)$ scales the frequency content by $a$, and $\frac{1}{\sqrt{a}}$ normalises the wavelet. The centre frequency is scaled by $a$, where $\omega_0$ is the central frequency before scaling, as defined in Equation (7) [32]:
$$\omega_{a,\tau} = \frac{\omega_0}{a} \qquad (7)$$
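As a brief, hedged illustration of Equations (2)-(5), the snippet below computes a CWT scalogram in Python. The paper's analysis uses MATLAB's cwt with the Amor, Bump, and Morse wavelets; PyWavelets does not provide those names, so a complex Morlet wavelet is used here purely as an analogous analytic wavelet, and the signal is synthetic.

```python
import numpy as np
import pywt

fs = 48828                                  # sampling rate of the MFPT fault recordings
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

# Continuous wavelet transform over a range of scales (complex Morlet as a stand-in wavelet)
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, 'cmor1.5-1.0', sampling_period=1 / fs)

# Scalogram: local energy density |WT(a, tau)|^2 over time and scale (frequency)
scalogram = np.abs(coeffs) ** 2
print("coefficient matrix shape (scales x samples):", coeffs.shape)
print("frequency range covered [Hz]:", freqs.min(), "-", freqs.max())
```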

2.3. Wavelet Singular Entropy (WSE)

Wavelet Singular Entropy (WSE) is a metric utilised in signal processing and information theory to quantify the intricacy or irregularity of a signal. It is derived from wavelet singular values, the magnitudes of coefficients obtained during the wavelet transform. By calculating the entropy of these values, WSE measures the information content or complexity of the signal. A higher WSE value indicates a more complex or irregular signal, while a lower value suggests a more regular or predictable signal. WSE is commonly used in various fields to characterise signals and compare their complexity [26,33]. The WSE combines the principles of wavelet transform, singular decomposition, and generalised information entropy. It reflects the uncertainty of the energy distribution of the characteristic mode of the analysed signal. A smaller WSE indicates a more straightforward and concentrated energy distribution, while a higher WSE suggests a more complex and dispersed energy distribution. The WSE is calculated based on the singular values obtained from the wavelet transform of the signal. The singular values are non-negative and arranged in descending order. The WSE can be defined as [33]:
$$WSE_k = \sum_{i=1}^{k} \Delta P_i, \qquad \Delta P_i = -\frac{\lambda_i}{\sum_{j=1}^{n} \lambda_j}\, \log\!\left(\frac{\lambda_i}{\sum_{j=1}^{n} \lambda_j}\right) \qquad (8)$$
where $\Delta P_i$ is the $i$-th order incremental WSE, $\lambda_i$ is the $i$-th singular value, and $\sum_{j=1}^{n} \lambda_j$ represents the sum of all singular values [32].
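A minimal Python sketch of this definition follows, assuming the standard reading of Equation (8): CWT of the signal, singular value decomposition of the coefficient matrix, then Shannon entropy of the normalised singular values. The wavelet name and scale range are illustrative choices, not the authors' settings.

```python
import numpy as np
import pywt

def wavelet_singular_entropy(signal, scales, wavelet="cmor1.5-1.0", k=None):
    """WSE sketch: CWT -> SVD of the coefficient matrix -> entropy of normalised singular values."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    s = np.linalg.svd(np.abs(coeffs), compute_uv=False)     # singular values, descending order
    p = s / s.sum()                                          # normalised singular spectrum
    k = len(p) if k is None else k
    increments = -p[:k] * np.log(p[:k] + 1e-12)              # incremental entropies Delta P_i
    return increments.sum()                                  # WSE_k

scales = np.arange(1, 64)
regular = np.sin(2 * np.pi * 50 * np.arange(2500) / 48828)   # concentrated energy, low WSE expected
irregular = np.random.default_rng(0).normal(size=2500)       # dispersed energy, higher WSE expected
print("WSE (regular signal)  :", wavelet_singular_entropy(regular, scales))
print("WSE (irregular signal):", wavelet_singular_entropy(irregular, scales))
```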

2.4. One-way Analysis of Variance (ANOVA) Features Selection.

In real-world scenarios, the features in an extracted set are not equally important. Certain features hold more relevance for the final classification task than others, while some can adversely affect classification accuracy and hamper an algorithm's ability to generalise patterns. One-way Analysis of Variance (ANOVA) can be employed to select the most robust subset of features from the entire set [37]. Feature selection is a significant challenge in structural health monitoring, where data collected through sensor networks must be reduced to the crucial components and valuable features needed for damage detection. Structural dynamic measurements often exhibit complex time-varying behaviour, making them susceptible to dynamic changes in their time-frequency characteristics [6]. Hence, this paper examines how features change when a fault occurs and relates these changes to the time-frequency domain. Features are reduced using one-way ANOVA, a well-established methodology for subset feature selection that identifies the most significant features [37,38,39].
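The following sketch shows one-way ANOVA feature ranking of this kind, implemented with SciPy on synthetic features; the feature names, group sizes, and class labels are illustrative assumptions rather than the paper's actual data.

```python
import numpy as np
from scipy.stats import f_oneway

def anova_rank(features, labels):
    """Rank features by one-way ANOVA F-score across fault classes."""
    labels = np.asarray(labels)
    scores = {}
    for name, values in features.items():
        groups = [np.asarray(values)[labels == c] for c in np.unique(labels)]
        f_stat, _ = f_oneway(*groups)
        scores[name] = f_stat
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with two synthetic features over three classes (Normal / IRF / ORF)
rng = np.random.default_rng(0)
labels = np.repeat(["Normal", "IRF", "ORF"], 50)
features = {
    "rms":      np.concatenate([rng.normal(1.0, 0.1, 50), rng.normal(1.6, 0.1, 50), rng.normal(2.2, 0.1, 50)]),
    "skewness": rng.normal(0.0, 0.5, 150),   # uninformative feature, expected low F-score
}
for name, score in anova_rank(features, labels):
    print(f"{name:>9s}: F = {score:.1f}")
```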

2.5. State-Of-The-Art and Research Gaps

Bearing fault diagnosis is recognized as a pattern recognition challenge, underscoring the importance of dominant eigenvectors for fault features. Accurate feature identification is critical for enhancing diagnostic system reliability. Studies like [12] used wavelet scattering transform-based features, and [13] employed statistical time and frequency domain features to contribute to induction motor fault classification. Other techniques include time-domain features from current signals [14], homogeneity and kurtosis from electrical current during motor startup [15], and the use of CWT for fault diagnosis, as seen in [16]. This method, tested on CWRU and MFPT datasets, demonstrated superior diagnostic accuracy and stability. Other approaches proposed in [17], such as multimodal preprocessing using image fusion, enhance fault classification accuracy for ResNet-18 and SqueezeNet in induction motors. The field of induction motor fault classification remains an active area of research, focusing on optimal feature extraction and selection techniques and leveraging various machine learning methods.
In bearing fault diagnosis, the approach increasingly leans towards treating the problem as a pattern recognition challenge, relying on dominant eigenvectors to represent fault features and enabling more reliable detection and categorization of bearing faults [3].
To determine the exact location and intensity of a bearing defect, various Vibration Signal Analysis (VSA) techniques are available, broadly categorized into the time domain, frequency domain, and time-frequency domain analyses [22]. Feature extraction in machine learning for bearing fault diagnosis is pivotal, particularly in analyzing vibration signals, resulting in a multi-domain feature set. The goal is often to derive features with strong discriminatory capabilities [9]. Time-domain features assume a stationary signal, but signals often exhibit changes in statistical properties over time [40]. However, obtaining suitable features may require a long period of recorded signals, making it expensive, time-consuming, or even impossible for certain fault types or with complex equipment [5]. RMS and kurtosis are commonly used in the time domain, especially kurtosis, which is highly effective in early fault detection [24].
In contrast, frequency domain features demand higher computational effort than their time domain counterparts and operate under the assumption of a wide-sense stationary signal [20]. The Fast Fourier Transform (FFT), while powerful in stationary conditions, has limitations when applied to non-stationary data. Non-stationary data refers to signals that change over time or exhibit variations in their frequency content. In such cases, FFT's assumption of a constant frequency spectrum over the entire signal duration does not hold, and alternative time-frequency signal processing techniques have been developed to address this limitation [5]. Transitioning to time-frequency domain analysis, which combines time and frequency information to understand the signal's frequency band over a specific time interval [33], offers a localized signal analysis by considering smaller time segments. This approach proves valuable for non-stationary signals, where the frequency content changes over time [10]. The CWT is a powerful tool for analyzing non-linear and non-stationary data in the time-frequency domain. It outperforms other techniques, such as the short-time Fourier transform (STFT), Gabor transforms, and Wigner-Ville transforms, effectively addressing the limitations of the FFT in dealing with such data [41,42,43]. The wavelet transform can analyze specific regions within a larger signal without sacrificing spectral details, unveiling concealed facets undetected by alternative methods [34]. This enables distinct analysis of both the frequency and time domains, breaking down signals into various frequency components and analyzing each component in the time window corresponding to its specific scale [44]. It is crucial, however, to carefully consider or construct the most suitable wavelet foundation [43]. A 2022 study explored the effectiveness of three prevalent mother wavelet functions in conjunction with pre-trained CNNs for the automatic classification of an electrocardiogram (ECG) dataset. Specifically, the study used AlexNet and SqueezeNet, revealing that the Amor and Morse wavelet functions enhanced class recognition with AlexNet, whereas the Bump wavelet function demonstrated superior classification accuracy with pre-trained SqueezeNet [24].
The focus extends to techniques like wavelet entropy, wavelet packet energy entropy, and wavelet singular entropy. Wavelet entropy, combining the wavelet transform and Shannon entropy, captures the complexity and information content within signals at different scales or frequencies. In the continuous wavelet transform realm, this approach is valuable for analyzing time-frequency representations and revealing patterns associated with structural damage [43,45]. Applied to IM bearings, optimal continuous wavelet transform selection [43] and quantification of the complexity of the analyzed transient signal in the time-frequency domain [33] allow transients of different complexity to be distinguished intuitively and quantitatively. Wavelet energy, measuring the energy distribution across different scales in the wavelet transform of a signal, has been used to track changes in energy over time for fault localization and categorization [46]. This information is then employed to create a set of features for classification, followed by artificial neural network training to categorize these features.
On the other hand, a notable research gap calls for further exploration: our understanding of how varying loads influence the manifestation of faults remains limited [18]. While previous research has delved into areas such as estimating remaining useful life from run-to-failure datasets [18], the domain of the load's impact on faults remains relatively unexplored. Indeed, radial load impact was discussed in [21], where the authors used traditional statistical indicators to study the effects of inner and outer race faults in bearings under different loads. They proposed combinations of indicators such as Kurtosis × RMS, Kurtosis × Peak, and RMS × Peak for early fault detection using the Society for Machinery Failure Prevention Technology (MFPT) bearings dataset, including inner and outer race faults. A similar analysis was conducted on the Case Western Reserve University (CWRU) dataset. It involved thoroughly investigating and comparing a wide range of traditional and new vibration indicators for detecting bearing defects and monitoring their progression [22].
In the past few years, researchers have shown considerable interest in detecting faults in induction motors, given their vital role in the electric power sector, manufacturing industry, and service domains. Consequently, there has been a focused effort to develop reliable and cost-effective methods for diagnosing faults in IMs. The early detection of possible failures is paramount, as it can proactively prevent substantial damage to the machinery [12,13,14,15,16,17]. Despite the recognised significance of feature extraction and selection within intelligent diagnosis systems, assessing load impact has not received proportional attention in the literature [18,19]. In the realm of intelligent diagnosis systems, where feature extraction and selection are widely acknowledged [20], a noticeable gap emerges in the literature, particularly in evaluating load impact [11]. While extensive research has explored fault classification under varying loads, the nuanced repercussions of load variations on the intrinsic nature of faults have persistently evaded attention. The following Section 3 introduces the proposed novel Customised Load Adaptive Framework (CLAF).

3. Methodology

The Customised Load Adaptive Framework (CLAF) proposed in this research is a two-phase approach designed to enhance our understanding of how radial loads influence system behaviour, especially in the presence of faults and varying load conditions. The term 'Customised' is used because this framework can be tailored to any dataset; in this study, it is specifically customized for the MFPT-bearing dataset. Additionally, it is referred to as 'Load Adaptive' because it emphasizes and deepens our understanding of how load changes impact induction motor (IM) defects, resulting in changes in time and frequency domain patterns and the identification of load-dependent subclasses (mild, moderate, severe) through CWT energy analysis. This approach primarily focuses on a tailored assessment of load effects and is implemented using MATLAB R2023a.

3.1. Phase 1: Time and Frequency Domain Load-Dependent Pattern Analysis
Phase 1 unveils load-dependent patterns in varying load conditions, as depicted in Figure 1, shedding light on the intricate interplay between load dynamics and bearing fault behaviour through the following steps:
1. Data Preprocessing and General Load-Dependent Feature Extraction: the MFPT-bearing dataset is segmented into smaller, manageable portions, involving the division of the continuous signal into smaller segments stored as separate CSV files.
2. Data Segmentation and Load-Dependent Subfiles Creation: time and frequency domain features are extracted from the segmented data, focusing on assessing feature variations during faults and their sensitivity to load changes.
3. Time and Frequency Domain Feature Extraction from Data Segmentation: generate a load-dependent time and frequency feature set, where an initial load-dependent feature set is created for use in the following step.
4. Significant Load-Dependent Feature Selection and Validation: select and validate the most significant load-dependent features using an iterative one-way ANOVA approach. Then, validate this feature set by assessing the accuracy of different classifiers.

3.2. Phase 2: Customised Load Adaptive Framework (CLAF) for IM Fault Classification

In Phase 2, the research explicitly customises the methodology for the MFPT-bearing dataset, focusing on the wavelet transform and load-dependent subclasses (Figure 2). The research explores different continuous wavelet transform approaches to find the optimal CWT approach. The optimal approach is determined using Wavelet Singular Entropy (WSE), followed by preprocessing and load effect assessment, resulting in the proposed CLAF. This framework introduces a new dimension to traditional fault classification by considering load variations and dataset customization, revealing load-dependent fault subclass signatures absent in conventional approaches:
1. CWT Signal Encoding and Optimal Technique Selection: various continuous wavelet transform methods are explored to represent signals concerning fault types, leading to the selection of the most appropriate approach (Amor, Bump, or Morse).
2. CWT Energy Assessment for Each Load Factor: this step involves preprocessing, health condition classification, and categorization into thirteen classes corresponding to specific load levels. The research calculates wavelet singular entropy and mean energy, providing insights into fault severity and energy distribution.
3. Customised Load Adaptive Framework (CLAF): the research proposes load-dependent fault subclasses tailored to assess radial load impact under different conditions, incorporating insights gained from the analysis for a customised evaluation.
4. CLAF Validation: different classifiers are trained on the proposed load-dependent subclasses to examine the classification accuracy of the proposed classes.

3.3. Dataset

This research comprises two phases focused on investigating the radial effects of loads under various operational conditions, including fault and normal states, utilizing the publicly available MFPT-bearing dataset. The initial baseline data (N) was collected under a 270 lbs load, with a sampling rate of 97,656 samples per second (SPS) for a duration of 6 seconds. Concurrently, fault signals originating from Inner Race Defect (IRD) or Inner Race Fault (IRF) and Outer Race Defect (ORD) or Outer Race Fault (ORF) were obtained from the bearing test rig in Figure 3, under six different load conditions: 50, 100, 150, 200, 250, and 300 lbs under the constant speed of 25 Hz [47].
An essential aspect of this study involves categorizing the severity of load-dependent fault subclasses within the MFPT-bearings dataset. This categorization is based on changes in wavelet energy compared to the healthy condition, with an increase of up to 20% classified as mild severity, 20% to 50% as moderate severity, and anything exceeding 50% as severe. While acknowledged as an assumption, this categorization is a fundamental component of the methodology, ensuring a structured and systematic approach to assessing fault severity under varying load scenarios. The results obtained from this novel framework are presented in Section 4, covering Phase 1 and Phase 2.
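Read this way, the severity rule reduces to a simple thresholding of the relative change in mean CWT energy. The sketch below encodes that reading in Python; the example energy values (other than the 5.7012 healthy baseline reported in Section 4.2.2) are made up purely for illustration.

```python
def load_dependent_subclass(energy_fault, energy_healthy):
    """Assign the load-dependent severity subclass from the relative change in mean CWT
    energy with respect to the healthy baseline (thresholds as stated in Section 3.3)."""
    change = (energy_fault - energy_healthy) / energy_healthy * 100.0   # percentage change
    if change <= 20.0:
        return "mild"
    elif change <= 50.0:
        return "moderate"
    return "severe"

# Example using the 270 lbs healthy-condition baseline energy; fault energies are hypothetical.
healthy = 5.7012
for load, energy in [(50, 6.5), (200, 8.3), (300, 18.612)]:
    print(load, "lbs ->", load_dependent_subclass(energy, healthy))
```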

4. Results and Discussion

4.1. Phase 1: Radial Load Features Assessment Framework

This phase involves data pre-processing, general feature extraction, and data segmentation for load-factor subset creation.

4.1.1. Step 1: Data Preprocessing and General Load-Dependent Feature Extraction

The dataset was categorized for separate analysis to assess the Load-dependent impact in fault scenarios, with a specific focus on IRF, as presented in Table 3, and ORF, as shown in Table 4. This study involved a comparison of six different load values (50, 100, 150, 200, 250, and 300 lbs) against the healthy state at 270 lbs. The healthy or normal condition dataset served as a baseline for comparative analysis, aiding in identifying distinctive features that indicate the presence of a fault in both IRF and ORF datasets.

General Load-Dependent Behaviour Analysis

The study conducted general time and frequency domain feature extraction, resulting in 13 features for IRF (presented in Table 5) and ORF (shown in Table 6). Additionally, spectral features were extracted using an Autoregressive (AR) model with an order of 15, focusing on two significant resonant peaks in the frequency spectrum and providing 5 additional load-dependent feature patterns, as detailed in Table 7.
The key findings regarding the impact of changing radial load on these extracted features are as follows from Table 5 and Table 6: Firstly, the Clearance Factor exhibited a noticeable decrease with increasing radial loads for both IRF and ORF. Specifically, IRF decreased by approximately 12.1% (from 40.039 at load 50 to 35.238 at load 300), while ORF experienced a decrease of about 68.0% (from 10.263 at load 50 to 27.176 at load 300). Secondly, the Crest Factor consistently decreased with higher radial loads, showing a decrease of approximately 16.0% for IRF (from 15.462 at load 50 to 12.998 at load 300) and a comparable reduction of roughly 50.6% for ORF (from 6.393 at load 50 to 12.918 at load 300). Lastly, Mean and RMS values significantly increased with higher radial loads for both IRF and ORF. Specifically, IRF exhibited an increase of approximately 10.9% in Mean (from 23.059 at load 150 to 25.585 at load 300), while ORF showed a substantial increase of about 294.9% in Mean (from 4.928 at load 100 to 19.433 at load 300).
In Table 7, variations in peak amplitudes (PeakAmp1 and PeakAmp2), peak frequencies (PeakFreq1 and PeakFreq2), and BandPower for both IRF and ORF across a range of load factors (from 50 to 300 lbs) were observed. Notably, with increasing radial load, Inner Faults exhibit higher peak amplitudes at a load of 300 lbs compared to ORF, while their peak frequencies tend to converge. Furthermore, as load factors increase, BandPower shows a more pronounced rise, especially for IRF, underscoring its sensitivity to load variations. When compared to the reference condition at a load factor of 270, we observe significant differences in peak amplitudes and frequencies, highlighting the discernible impact of varying loads on fault characteristics.
Further exploration is needed to fully understand the nuanced impact of each load factor through detailed feature extraction (as seen in Figure 4(b) for IRF and Figure 4(c) for ORF loads). Analyzing Standard Deviation (Std) and range across various features reveals distinctions between IRF and ORF types. In the frequency domain, PeakFreq1 and PeakFreq2 show notable variability, with IRF having lower variability in PeakFreq1 (510.38 vs. 1788.4) compared to ORF. Regarding impulse characteristics, IRF exhibits higher variability in ImpulseFactor (7.6174 vs. 5.5733), indicating diverse impulse characteristics compared to ORF. ClearanceFactor exhibits greater variability for IRF (11.237 vs. 7.4289), indicating significant changes in mechanical conditions. Vibration amplitudes also vary, with IRF showing higher variability in PeakValue (8.2942 vs. 5.4215). Additionally, IRF features display more pronounced changes in vibration characteristics compared to ORF, as seen in Kurtosis (12.08 vs 5.3444), Skewness (0.41466 vs 0.13983), Std (0.40479 vs 0.23206), RMS (0.40468 vs 0.22898), and ShapeFactor (0.25877 vs. 0.11854). Signal quality parameters (SNR and SINAD) vary more in ORF, indicating alterations in signal-to-noise characteristics. These insights contribute to a comprehensive understanding of vibration signals' dynamic response to IRF and ORF conditions, aiding condition monitoring and load-dependent behaviour analysis for fault detection.

4.1.2. Step 2: Data Segmentation and Load-Dependent Subfiles Creation

First, the dataset was categorised by normal and fault types, each corresponding to load conditions of 50, 100, 150, 200, 250, and 300 lbs. Then, based on different sampling rates, the normal baseline signals were differentiated from fault signals IRF and ORF. Normal baseline signals were captured at 97,656 samples per second (SPS) for 6 seconds, while fault signals were sampled at 48,828 SPS for 3 seconds. Subfiles were created to enhance statistical robustness, each containing 2500 vibration data points. This led to 117 subfiles for the normal baseline and 58 subfiles for each fault category (IRF and ORF), strengthening both the sample size and signal integrity, see Table 8. Such meticulous preparation establishes a solid foundation for the subsequent one-way ANOVA analysis, enabling the identification of significant variations in vibration signals linked to different load levels and fault occurrences.
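A small Python sketch of this segmentation step is shown below; the CSV layout, file naming, and output directory are assumptions for illustration (the authors used MATLAB), but the 2500-point segment length and the 58-subfile count for a 3 s fault record match the description above.

```python
import numpy as np
from pathlib import Path

def segment_to_subfiles(signal, out_dir, segment_len=2500, prefix="subfile"):
    """Split a continuous vibration record into fixed-length subfiles saved as CSV files,
    mirroring the 2500-point segmentation described above (file naming is assumed)."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    n_segments = len(signal) // segment_len
    for k in range(n_segments):
        segment = signal[k * segment_len:(k + 1) * segment_len]
        np.savetxt(out_dir / f"{prefix}_{k:03d}.csv", segment, delimiter=",")
    return n_segments

# Example: a 3 s fault record sampled at 48,828 SPS gives 146,484 points,
# i.e. 58 complete 2500-point subfiles, matching the count reported in Table 8.
rng = np.random.default_rng(0)
fault_record = rng.normal(size=48828 * 3)
print(segment_to_subfiles(fault_record, "ORF_300lbs_subfiles"), "subfiles written")
```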

4.1.3. Step 3: Time and Frequency Domain Feature Extraction from Data Segmentation

Section 4.1.1 discussed the impact of load variations on features. In this stage, we generate load-dependent time and frequency features from Table 8 subfiles for IRF, ORF, and Normal conditions. This allows for detailed analysis and subsequent one-way ANOVA feature ranking.
First, ten time domain features, Shape Factor, Peak Value, Clearance Factor, Impulse Factor, Mean, Crest Factor, Kurtosis, RMS, Std, and Skewness, were extracted. Second, there are three general frequency domain features: SINAD (Signal-to-Noise-and-Distortion Ratio), SNR (Signal-to-Noise Ratio), and THD (Total Harmonic Distortion). Third, Autoregressive (AR) model estimation was applied to transform the time domain signal into the frequency domain to extract specific spectral features: Peak Amplitude, Peak Frequency and Band Power.
This research delves into the exploration of two AR models for spectral feature extraction: one of order two with a single peak (Figure 5(a)) and another of order fifteen with five peaks (Figure 5(b)). This strategic approach aims to unravel how the complexity of modelling influences the representation of frequency components in the signal. The order-two model, being simpler, offers a foundational perspective, capturing fundamental frequency components. Its features are extracted within a smaller frequency band of 600-18,000 Hz, excluding peaks beyond 18,000 Hz. On the other hand, the order-fifteen model, with its higher complexity, aspires to provide a more detailed and nuanced representation of intricate frequency variations. Here, the extraction of features is focused on a band of frequencies between 10,000-25,000 Hz, excluding peaks after 25,000 Hz. For each signal, five spectral peaks are extracted, generating five frequency features for each peak.
The first AR model added 3 extra features to the existing 13 time and frequency domain features. Conversely, the second autoregressive model generated a larger set of 24 features, which included both general time and frequency domain features, as well as 11 features specifically derived from the autoregressive model. The disparity in feature count primarily resulted from variations in the extracted frequency domain features. The decision to calculate Peak Amplitude and Peak Frequency for each peak when testing different AR models aimed to achieve a more detailed and adaptable analysis of the signal's spectral characteristics. This approach acknowledges variations in frequency modes captured by different models, facilitating the identification and individual analysis of each peak.
This exploration is conducted to assess the trade-off between model simplicity and accuracy, a crucial consideration for diagnostic applications like fault detection. Furthermore, the testing of different peak configurations allows for a nuanced understanding of how the chosen models identify and distinguish peaks in the frequency spectrum. In essence, this approach yields valuable insights into the suitability of various model configurations for capturing the diverse characteristics of the signal under investigation.

4.1.4. Step 4: Significant Load-Dependent Feature Selection and Validation

Diverse classifier algorithms were systematically examined, focusing on optimal accuracy and minimal confusion. AR models with different peak counts were explored, with the first model (order two, one peak) and the second model (order fifteen, five peaks) achieving the highest performance. Subsequently, the dataset was split into testing (20%), validation (20%), and training (60%) subsets, with 5-fold cross-validation for testing accuracy comparison. Feature richness varied with peak counts: the first model showcased robust performance with a single peak, emphasizing the power of a strategically selected minimal feature set, while the second model, with five peaks, offered a more detailed representation of spectral characteristics. Features with ANOVA scores below 20 were excluded, refining the selection based on substantial impact. This step aimed to highlight load-changing trends in the extracted features, providing valuable insights into load impact during faults. Feature subset selection, classifier training, and the selection of the highest-performing classifier with the optimal feature set were key steps. One-way ANOVA was employed to determine statistically significant variations in feature values across load conditions, aligning with the project's aim to comprehensively analyze load condition influences. ANOVA ranking was used to systematically rank features based on their significance in distinguishing fault types. The values associated with the ANOVA ranking represent the effectiveness of each feature in differentiating between groups in the vibration signal data.
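As a hedged sketch of this selection-and-validation loop (the original work used MATLAB's Classification Learner), the snippet below approximates the workflow in scikit-learn: ANOVA-based selection of the top-ranked features, a bagged-tree ensemble, a 20% held-out test set, and 5-fold cross-validation on the remainder. The feature matrix and labels are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# X: (n_segments, n_features) load-dependent feature matrix, y: fault labels (placeholders here).
rng = np.random.default_rng(0)
X = rng.normal(size=(233, 24))
y = rng.integers(0, 3, size=233)                   # stand-in labels: Normal / IRF / ORF

# Hold out 20% for testing; the remaining data is used for 5-fold cross-validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

# ANOVA-based selection of the top-k features feeding a bagged-tree ensemble.
model = make_pipeline(
    SelectKBest(f_classif, k=14),                  # keep the 14 highest-ranked features
    BaggingClassifier(DecisionTreeClassifier(), n_estimators=30, random_state=0),
)
cv_acc = cross_val_score(model, X_train, y_train, cv=5).mean()
model.fit(X_train, y_train)
print(f"5-fold CV accuracy: {cv_acc:.3f}, held-out test accuracy: {model.score(X_test, y_test):.3f}")
```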
(a) Autoregressive (AR) Model: Order Two, Peak = 1
For the first AR model, a systematic reduction in features was guided by conditions based on ANOVA scores (see Figure 6): features scoring under 20 were excluded, while thresholds of 350, 370, and 600 were also applied. These thresholds retained 13, 8, 7, and 2 features in the respective experiments. The experiments aimed to explore the effects of different feature combinations on classification accuracy, thus enhancing our understanding of the relationship between feature selection and its impact on classification accuracy.
However, Table 9 presents a comprehensive exploration of classifier performance across various feature selection thresholds, revealing notable insights. With the top 13 features, Boosted Trees exhibited superior adaptability, achieving the highest accuracy at 74.1%, emphasizing the discriminative power of the selected features. The reduction to the top 8 and 7 features demonstrated a trade-off between feature reduction and accuracy, with Boosted Trees maintaining a competitive edge. However, the drastic reduction to only two features significantly impacted accuracy across all classifiers, particularly affecting Fine Gaussian SVM. Importantly, the increase to 629 features did not proportionally enhance performance, suggesting a saturation point beyond which additional features may introduce noise. These findings underscore the nuanced relationship between feature selection and classifier performance, with Boosted Trees showcasing robustness across diverse feature sets.
(b) Autoregressive (AR) Model: Order Fifteen, Peak = 5
In the context of the second AR model, the application of the one-way ANOVA ranking resulted in a feature set of 24, a notable increase from the initial 16 (see Figure 7). These features, which include SINAD and SNR alongside the spectral feature PeakFrequency2, contribute to a comprehensive feature set. The top 19 features chosen for classifier training were exported to the classification learner, reserving 20% of the data for testing purposes.
The second AR model (order 15) with five peak features exhibits compelling insights into classifier performance across distinct feature selection thresholds (see Figure 7). Utilizing the top 19 features, Bagged Trees and Cubic SVM achieved remarkable accuracy scores of 86.4%, underlining the efficacy of these classifiers in leveraging a relatively larger set of features (Table 10). The reduction to the top 14 features maintained high accuracy across all classifiers, emphasizing their robustness. Notably, even with a more stringent selection of 14 features, all classifiers sustained accuracy levels above 80%, indicating resilience to feature reduction. The decrease to the top 13, 11, and 8 features demonstrated a nuanced trade-off between feature reduction and accuracy, with Bagged Trees consistently leading in performance. The findings reinforce the adaptability of the classifiers to varying feature sets, providing valuable insights for future considerations in feature selection strategies for this AR model and peak feature combination.
The impact of feature selection on classifier performance is substantial, with a notable trade-off between the number of features and accuracy. The relationship is intricate, as increased features do not necessarily guarantee improved performance due to the risk of overfitting. Moreover, different classifiers exhibit varied sensitivities to feature selection, with some performing well with a concise set of informative features while others benefit from a more extensive feature set. In the context of the AR model, considering the number of peaks proves crucial. Utilizing multiple peaks enhances sensitivity to changes in spectral composition, accommodates the potential introduction of new peaks, and furnishes a fine-grained feature set that adeptly captures the distinct contribution of each frequency component.

Summary of Selected Features

The superior classification accuracy on the testing dataset was achieved by the top 14 features, encompassing both time-domain characteristics and spectral features extracted from an AR model of order 15 and peak = 5. Consequently, these specific features contributed to an impressive 86.4% classification accuracy. The subsequent histogram visually represents each feature, employing a one-way ANOVA ranking to showcase their distinguishing capabilities. Histograms organized by fault code offer insights into the effectiveness of specific features in distinguishing between fault types. Robust differentiators are characterized by more distinct distributions, showcasing greater separation between them, as shown in Table 11.

4.2. Phase 2: Customised Load Adaptive Framework (CLAF) for IM Fault Classification

This section comprehensively explores time-frequency feature analysis across various fault types. The investigation encompasses CWT applied to vibration signals, wherein diverse mother wavelet functions are examined. The selection of the optimal wavelet function is determined through the utilisation of Wavelet Singular Entropy. This phase aims to develop the Customised Load Adaptive Framework (CLAF) for the MFPT-bearing dataset.

4.2.1. Step 1: CWT Signal Encoding and Optimal Technique Selection

This entails selecting an optimal Continuous Wavelet Transform (CWT) for the MFPT-bearing dataset using Wavelet Singular Entropy (WSE) to create a time-frequency representation of the vibration signal. This representation serves multiple purposes, including feature extraction, denoising, and pattern recognition.

CWT Vibration Signal Time-Frequency Analysis

The analysis is initiated with the original MFPT-bearing dataset, categorized into health conditions: IRF, ORF, and Normal. The objective is to evaluate the capability of Continuous Wavelet Transform (CWT) in fault recognition, given its suitability for time-frequency analysis. CWT generates wavelet scalograms, which are 2D representations illustrating the local energy density across time and frequency, offering insights into system behaviour over time. Scalograms present time on the x-axis and scale on the y-axis, providing a comprehensive view of time-frequency domain characteristics compared to one-dimensional signals. The CWT effectively filters transient and non-smooth signal segments shown in Table 12. In Figure 8(a), 12 impulses in the inner vibration signal, corresponding to the bearing's IRF frequency, are observed. This results in 12 distinct peaks in the 2D time-frequency diagram in Table 12, with clearer patterns produced by Amor and Morse wavelets. Similarly, in Figure 8(b), eight peaks for outer faults are observed, with the most distinct pattern generated by Amor wavelets in Table 12. In contrast, in Figure 8(c), a lack of clear patterns or features is observed in the healthy signal, regardless of the wavelet used, refer to Table 12. The count of distinct peaks proves to be a valuable feature for distinguishing between inner race faults, outer race faults, and normal conditions. To quantitatively validate the selection of the optimal mother wavelet, Wavelet Singular Entropy (WSE) will be employed in the next section.

Wavelet Singular Entropy Analysis For Appropriate CWT Selection

A meticulous comparison of Wavelet Singular Entropy (WSE) scores identifies the most suitable mother wavelet function for fault scenarios. The largest WSE score indicates a more scattered signal with a less noticeable pattern, likely representing the healthy state, see Figure 8(c). WSE is a crucial quantitative measure for CWT, guiding the selection of effective wavelet foundations in wavelet analysis. The chosen mother wavelet significantly influences denoising, signal preservation, and feature extraction, enhancing the frequency spectrum of the denoised signal [6,43]. The average WSE was subsequently calculated in the process of selecting the optimal mother wavelet function by comparing WSE scores across different wavelet types [32]. The selection process involves evaluating $WSE_j$ scores across the various mother wavelet functions:
$$WSE_j = -\sum_{t=1}^{n} \left|C_{f_s}(t,j)\right|^{2} \cdot \log\left|C_{f_s}(t,j)\right|^{2} \qquad (9)$$
Here, $C_{f_s}(t,j)$ is the wavelet transform coefficient obtained with the mother wavelet $W$, and $f_s$ (Hz) is the sampling frequency that determines the number of samples taken per second. The range of the summation depends on the number of wavelet coefficients obtained from the transform and the chosen wavelet scale. Each coefficient corresponds to a specific scale, $j$, and time, $t$, capturing information about the signal's frequency content and time location [32,33].
Afterwards, $\mathrm{MeanWSE}_{W,D}$ is calculated in Equation (10), where $D$ represents the dataset (e.g., healthy, inner fault, outer fault), $W$ represents the wavelet type (e.g., 'bump', 'morse', 'amor'), and $N$ is the total number of datasets. Subsequently, the average mean WSE score $\mathrm{AvgMeanWSE}_{W}$ across all datasets for a specific wavelet is determined in Equation (11):
$$\mathrm{MeanWSE}_{W,D} = \frac{1}{n} \sum_{j=1}^{n} WSE_j \qquad (10)$$
$$\mathrm{AvgMeanWSE}_{W} = \frac{1}{N} \sum_{D} \mathrm{MeanWSE}_{W,D} \qquad (11)$$
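A compact Python sketch of Equations (9)-(11) follows: per-scale WSE scores are averaged over scales for each dataset and then across datasets for each candidate wavelet. The energy normalisation and the PyWavelets wavelet names are assumptions (MATLAB's Amor, Bump, and Morse wavelets are not available under those names in PyWavelets), and the signals are synthetic.

```python
import numpy as np
import pywt

def mean_wse(signal, wavelet, scales):
    """MeanWSE_{W,D} (Eq. 10): average over scales j of a per-scale entropy WSE_j (Eq. 9).
    The global energy normalisation is an assumption so the log terms behave like probabilities."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    energy = np.abs(coeffs) ** 2
    energy = energy / energy.sum()
    wse_per_scale = -(energy * np.log(energy + 1e-12)).sum(axis=1)   # WSE_j for each scale j
    return wse_per_scale.mean()

# AvgMeanWSE_W (Eq. 11): average the per-dataset scores for each candidate wavelet.
rng = np.random.default_rng(0)
scales = np.arange(1, 64)
datasets = {
    "healthy": rng.normal(size=2500),
    "inner_fault": rng.normal(size=2500),
    "outer_fault": rng.normal(size=2500),
}
for wavelet in ("cmor1.5-1.0", "mexh", "gaus2"):   # PyWavelets stand-ins for Amor/Bump/Morse
    avg = np.mean([mean_wse(x, wavelet, scales) for x in datasets.values()])
    print(f"{wavelet:>12s}: AvgMeanWSE = {avg:.3f}")
```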
The scores in Table 13 provide valuable insights into energy distribution patterns in signals under different fault conditions, with two randomly chosen datasets assessed using WSE:
Bump:
For the inner fault, the Wavelet Singular Entropy (WSE) scores are low (0.017424 and 0.039571), indicating a more concentrated energy distribution and simpler signals. In contrast, the outer fault exhibits higher scores (2.0282 and 1.7431), suggesting a more complex energy distribution. In the normal condition, the scores are relatively low (1.4832 and 1.5995), indicating a simpler energy distribution.
Morse:
In the case of the inner fault, low scores (0.011188 and 0.022887) suggest simpler signals. Conversely, the outer fault displays higher scores (2.311 and 2.2253), indicating a more complex energy distribution. In the normal condition, the scores are relatively low (2.2357 and 2.836), suggesting a simpler energy distribution.
Amor:
For the inner fault, low scores (0.0090466 and 0.019031) indicate simpler signals. The outer fault, however, shows a positive score (0.61065), suggesting a more dispersed energy distribution. In the normal condition, higher scores (2.6529, 5.3807, and 15.826) indicate more complex energy distributions.
The mother wavelet analysis can be summarised in Figure 9, which shows the visual comparison; the "Amor" wavelet type shows relatively better discrimination between the healthy and faulty conditions, as it exhibits lower WSE scores for the faulty conditions compared to the healthy condition. Based on the analysis of the WSE scores, three wavelet coefficients were evaluated: Morse, Bump, and Amor. For the healthy dataset, the Morse coefficient had an average WSE score of 2.53585, the Bump coefficient had a score of 1.54135, and the Amor coefficient had the highest score of 10.60335, indicating a more dispersed energy distribution. When considering the inner fault dataset, the Morse, Bump, and Amor coefficients had average WSE scores of 0.0170375, 0.0284975, and 0.0140388, respectively. For the outer fault dataset, the average WSE scores were 2.26815, 1.88565, and 1.631775 for the Morse, Bump, and Amor coefficients, respectively. The results show that the Amor coefficient exhibited the highest average WSE score for the healthy dataset, suggesting a distinct energy distribution. This makes the Amor coefficient a potential candidate for identifying healthy conditions compared to faulty ones.

4.2.2. Step 2: CWT Energy Assessment For Each Load Factor

This section uses the data segmentation subfiles described in Table 8 (Section 4.1.2) for a mean energy analysis per load factor. For the inner and outer race fault types, the wavelet energy values are calculated for each load factor i using the CWT technique. Let $x_i(t)$ represent the vibration signal for load factor i at time t. The CWT coefficients are denoted as $C_{i,j}(t)$, where j is the selected wavelet scale [6,46]. The procedure is as follows:
  • Extract the vibration signal for load factor i: $x_i(t)$.
  • Perform the CWT on the vibration signal to obtain $C_{i,j}(t)$ (Equation 12); the scale used in this study is 5.
  • Calculate the wavelet energy $E_{wavelet,i,j}$ for each scale $j$ (Equation 13).
    $C_{i,j}(t) = \mathrm{CWT}\left( x_i(t), \text{wavelet type}, j \right), \qquad (12)$
    $E_{wavelet,i,j} = \sum_{t} \left| C_{i,j}(t) \right|^{2}. \qquad (13)$
Hence, the concept of "scale" $j$ is crucial in understanding the CWT technique in wavelet analysis. The CWT examines signals at various scales, allowing different frequency components of a signal to be detected with varying levels of detail. Each scale $j$ corresponds to a specific width of the analysing wavelet, the mathematical function used in the transformation. Smaller scales represent narrower wavelets that are sensitive to high-frequency details, capturing rapid signal variations; larger scales correspond to wider wavelets, capturing lower-frequency components with broader coverage but less fine detail. In Equation 13, the squared absolute value of the wavelet coefficients, $\left| C_{i,j}(t) \right|^{2}$, at a particular scale $j$ and load factor $i$ is summed across time $t$, yielding the wavelet energy at that scale. This energy measure provides valuable insight into the contribution of different frequency components to the overall energy content of the signal [46].
Next, the mean energy for each fault type was computed by combining the calculated wavelet energy values into a table for each load factor i. Let $E_{wavelet,i} = \{E_{wavelet,i,1}, E_{wavelet,i,2}, \ldots, E_{wavelet,i,N_{scales}}\}$ be the vector of wavelet energy values for load factor i. The mean wavelet energy $\bar{E}_{wavelet,i}$ for each load factor i is then calculated by averaging the wavelet energy values across all scales (Equation 14):
$$\bar{E}_{wavelet,i} = \frac{1}{N_{scales}} \sum_{j=1}^{N_{scales}} E_{wavelet,i,j}. \qquad (14)$$
Building upon the wavelet energy, the mean wavelet energy $\bar{E}_{wavelet,i}$ is computed by averaging the energy values over all scales. This metric provides a concise yet informative representation of the post-fault energy behaviour for each load factor.
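As an indicative MATLAB sketch of this step (hypothetical variable names), assume loadSegments is a cell array holding one representative segment per load factor; the default scale set of cwt is used here rather than the single scale quoted above.
% Per-load-factor CWT energy assessment (indicative sketch)
loadFactors = [50 100 150 200 250 270 300];
meanEnergy  = zeros(size(loadFactors));
for i = 1:numel(loadFactors)
    x    = loadSegments{i};               % vibration signal x_i(t) for load factor i (hypothetical)
    cfs  = cwt(x, 'amor');                % CWT with the optimal mother wavelet from Step 1
    Ewav = sum(abs(cfs).^2, 2);           % E_wavelet,i,j: energy per scale j (Equation 13)
    meanEnergy(i) = mean(Ewav);           % mean energy across scales (Equation 14)
end
baseline    = meanEnergy(loadFactors == 270);             % 270 lbs normal condition
increasePct = 100 * (meanEnergy - baseline) ./ baseline;  % percentage increase (cf. Table 14)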
CWT Energy Assessment for Each Load Factor Using Optimal CWT Technique
In the assessment of the mean energy values for IRF and ORF, with load factor 270 lbs treated as the normal condition (Table 14), the following observations are made. For the inner race, the normal condition (270 lbs) exhibits a mean energy value of 5.7012, indicating low energy content. Load factors 50, 100, and 150 lbs have mean energy values ranging from 24.915 to 27.547, while load factors 200, 250, and 300 lbs show mean energy values ranging from 32.199 to 36.147, indicating higher energy content and a more pronounced presence of inner faults. Similarly, for the outer race, load factors 50, 100, 150, 200, and 250 lbs have mean energy values ranging from 5.4309 to 7.6992, relatively close to the normal condition, which has a mean energy value of 5.7012. Load factor 300 lbs exhibits a mean energy value of 18.612, a substantial 226.88% increase compared to normal conditions.
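For clarity, the percentage increase reported in Table 14 follows from comparing each mean energy value with the 270 lbs baseline:
$$\text{Increase}_i\,(\%) = 100 \times \frac{\bar{E}_{wavelet,i} - \bar{E}_{wavelet,270}}{\bar{E}_{wavelet,270}}.$$
With the rounded values tabulated, the ORF entry at 300 lbs, for example, evaluates to approximately 226%, consistent with the reported 226.88% up to rounding of the baseline energy.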
In summary, both ORF and IRF show notable increases in mean energy, with distinct patterns. The ORF exhibits its highest increase at load factor 300 lbs (226.88%), while the IRF shows larger increases overall, peaking at load factor 250 lbs (533.49%). The increases range from 2.08% to 226.88% for the ORF and from 337.68% to 533.49% for the IRF. The IRF generally displays higher percentage increases than the ORF, providing useful cues for effective fault detection and system management.
Table 14. IRF and ORF CWT mean energy.
Inner Race Fault Type Outer Race Fault Type
Load Factor (lbs) MeanEnergy Mean Energy Increase % MeanEnergy Mean Energy Increase %
50 25.549 347.70% 7.699 35.16%
100 27.547 383.65% 5.431 4.76%
150 24.915 337.68% 5.573 2.08%
200 33.742 491.88% 7.604 33.35%
250 36.147 533.49% 7.178 25.90%
270 5.7012 0% (baseline) 5.701 0% (baseline)
300 32.199 464.25% 18.612 226.88%

Two-Sample T-test for Significance Testing

In this study, two-sample t-tests were conducted in MATLAB R2023a to assess differences in mean CWT energy between the normal load condition (load factor 270 lbs) and the other load factors (50, 100, 150, 200, 250, and 300 lbs) for IRF (Figure 10) and ORF (Figure 11). An individual t-test for each load factor determined whether its mean energy differed significantly from that of the normal load, at a significance level of 0.05. The results consistently demonstrated a clear and significant distinction in mean CWT energy between the normal condition and the other loads. The null hypothesis (H0), that there is no significant difference in CWT mean energy between load factor 270 lbs and a given load factor, was rejected in favour of the alternative hypothesis (H1). This finding held for both IRF and ORF, with low p-values, large sample sizes, substantial t-values, and confidence intervals that exclude zero, all supporting the robustness and reliability of these results.
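A minimal MATLAB sketch of these tests is given below; energyNormal and energyByLoad{i} are hypothetical vectors of per-segment CWT energy values for the 270 lbs baseline and for load factor i, respectively.
% Two-sample t-tests of CWT energy against the 270 lbs baseline (indicative sketch)
loadFactors = [50 100 150 200 250 300];
for i = 1:numel(loadFactors)
    [h, p, ci, stats] = ttest2(energyNormal, energyByLoad{i}, 'Alpha', 0.05);
    fprintf('Load %d lbs: h = %d, p = %.3g, t = %.2f\n', loadFactors(i), h, p, stats.tstat);
end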

4.2.3. Step 3: Customised Load Adaptive Framework

The Load Index, developed from the optimal CWT energy to capture the influence of load variations during fault occurrences, serves as a qualitative representation of the effect of varying loads on bearing behaviour. Bearing faults were then categorized into Load-Dependent subclasses with distinct severity levels (mild, moderate, and severe) using CLAF. This classification clarifies how varying loads contribute to the manifestation and progression of bearing faults with different degrees of severity. It builds upon the previous steps:
Calculate Normalized Energy Values
For each load factor i, the normalized CWT energy values $E_{normalized,i,j}$ are calculated by min-max scaling the wavelet energy values $E_{wavelet,i,j}$ to the range between 0 and 1 (Equation 15):
$$E_{normalized,i,j} = \frac{E_{wavelet,i,j} - \min\left(E_{wavelet}\right)}{\max\left(E_{wavelet}\right) - \min\left(E_{wavelet}\right)}, \qquad (15)$$
In this normalized range, 0 represents the minimum energy value in the dataset and 1 represents the maximum energy value; all other energy values are linearly scaled within this range. Here, $\min(E_{wavelet})$ represents the minimum wavelet energy value across all load factors and scales, and $\max(E_{wavelet})$ represents the corresponding maximum.
Identify Normal Condition Indices
$I_{normal}$ represents the indices corresponding to the normal condition. In the context of this analysis, it refers to the indices where the load factor is 270 lbs, which is taken as the normal condition or baseline. These indices are used to calculate the deviation from the normal condition for each load factor and wavelet energy value.
In mathematical notation, $I_{normal}$ is the set of indices $i$ for which the load factor equals 270:
$$I_{normal} = \left\{\, i \mid \mathrm{LoadFactor}_i = 270 \,\right\}$$
Quantify Deviation: Calculate Deviation from Normal Condition
$$D_{i,j} = \begin{cases} E_{normalized,i,j}, & \text{if } i \notin I_{normal} \\ 0, & \text{otherwise} \end{cases}$$
where the deviations $D_{i,j}$ from the normal condition highlight differences between the normalized energy values and the baseline. When a load factor is not within $I_{normal}$, the corresponding normalized energy value $E_{normalized,i,j}$ is taken as the deviation; otherwise, the deviation is set to zero.
Severity Of Changing Load: Threshold Setting
Define adjustable severity thresholds
Categorize the severity $S_{i,j}$ based on the deviation magnitude $D_{i,j}$ and the thresholds:
$$S_{i,j} = \begin{cases} \text{'Mild'}, & \text{if } D_{i,j} < \mathrm{mild\_threshold} \\ \text{'Moderate'}, & \text{if } \mathrm{mild\_threshold} \le D_{i,j} < \mathrm{moderate\_threshold} \\ \text{'Severe'}, & \text{otherwise} \end{cases}$$
Hence, the severity of the deviations $D_{i,j}$ is categorized to assess the post-fault impact. Adjustable severity thresholds differentiate between "Mild," "Moderate," and "Severe" conditions, and the severity label is stored as a cell array value. This step is vital in determining the gravity of the machinery's response to various fault scenarios; it is crucial for maintenance teams to allocate resources efficiently, plan timely interventions, and prevent potential escalations. In this paper, the author chose the following thresholds:
% Define severity thresholds
mild_threshold = 0.2; % Adjust according to your application
moderate_threshold = 0.5; % Adjust according to your application
Categorize Severity $S_{i,j}$ Based on the Deviation Magnitude $D_{i,j}$
The normalised energy values allow the energy levels of different load factors to be compared effectively, as they are all scaled within the same range. However, it is essential to note that the normalised energy values are not directly related to the severity categorisation ('Mild', 'Moderate', or 'Severe'); the severity categorisation is based on the 'Deviation' column, which represents the deviation of each load factor's mean energy from that of the normal condition. The resulting Load-Dependent subclasses after this assessment are shown in Table 15 for both IRF and ORF:
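A minimal MATLAB sketch of this categorisation, continuing from the Step 2 sketch and the thresholds defined above (hypothetical variable names), is shown below; for brevity the per-load mean energies are min-max scaled here, whereas Equation 15 scales the energies across all load factors and scales, so the exact values in Table 15 may differ.
% CLAF severity assignment per load factor (indicative sketch)
normEnergy = (meanEnergy - min(meanEnergy)) ./ (max(meanEnergy) - min(meanEnergy));
deviation  = normEnergy;
deviation(loadFactors == 270) = 0;        % zero deviation for the normal (baseline) condition
severity = cell(size(deviation));         % severity stored as a cell array
for i = 1:numel(deviation)
    if loadFactors(i) == 270
        severity{i} = 'Normal';
    elseif deviation(i) < mild_threshold
        severity{i} = 'Mild';
    elseif deviation(i) < moderate_threshold
        severity{i} = 'Moderate';
    else
        severity{i} = 'Severe';
    end
end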
Table 15. IRF and ORF Load-Dependent subclasses through CLAF.
Load Factor (lbs) Mean Energy Normalized Energy Deviation Load-Dependent Subclasses
Fault Type Inner Outer Inner Outer Inner Outer Inner Outer
50 25.549 7.6992 0.14035 0.05758 0.1403 0.05758 {'Mild' } {'Mild' }
100 27.547 5.4309 0.15053 0.023062 0.15053 0.023062 {'Mild' } {'Mild' }
150 24.915 5.5728 0.14063 0.031372 0.14063 0.031372 {'Mild' } {'Mild' }
200 33.742 7.6036 0.28444 0.092816 0.28444 0.092816 {'Moderate'} {'Mild' }
250 36.147 7.1779 0.29911 0.061822 0.29911 0.061822 {'Moderate'} {'Mild' }
270 5.7012 5.7012 0.00930 0.027659 0 0 {'Normal' } {'Normal'}
300 32.199 18.612 0.23412 0.89814 0.23412 0.89814 {'Moderate'} {'Severe'}
IRF Customised Load Factor Assessment:
Min-max scaling was employed to normalize the energy values, transforming the original energy values into the range [0, 1]. In this normalized range, 0 signifies the minimum energy value in the dataset, while 1 represents the maximum energy value; all other energy values are linearly scaled within this range. The 'Normalized Energy' column in Table 15 reflects the energy values after min-max scaling. For instance, the energy value of load factor 250 lbs is the highest among the inner fault load factors, as evidenced by it having the largest normalized value in that column.
Conversely, 'LoadFactor' 50, 'LoadFactor' 100, and 'LoadFactor' 150 have normalized energy values around 0.14, indicating their energy values are closer to the lower end of the normalized range (0). These load factors exhibit lower energy values compared to others in the dataset. Notably, the normalized energy values do not directly correspond to severity categorization ('Mild,' 'Moderate,' or 'Severe'). The severity categorization is based on the 'Deviation' column, which represents the deviation of each load factor's mean energy from the mean energy of the normal condition.
ORF Type Customised Load Factor Assessment:
Long-duration operation at higher load factors significantly influences degradation for the ORF. Across load factors 50, 100, 150, 200, and 250 lbs, the mean energy values range from 5.4309 to 7.6992, comparable to load factor 270 lbs, which represents the normal or healthy condition with a mean energy value of 5.7012. The normal condition exhibits relatively low energy levels, as expected. However, load factor 300 lbs stands out with a higher mean energy value of 18.612, indicating that the associated outer fault condition has notably higher energy content in the vibration signals than the other load factors. This detailed energy analysis provides valuable insight into the variations associated with different load factors and fault conditions, enhancing understanding of the degradation process.

4.2.4. Step 4: CLAF Validation

The proposed CLAF acts as an early warning system that identifies potential issues based on the customised Load-Dependent fault subclasses. To validate the framework, the time domain data are grouped by the four framework classes, and time and frequency domain features are extracted by creating a feature subset, training classifiers, and selecting optimal features based on accuracy. This dataset is detailed in section 5.1.4, "Classification and Features Selection using Second: Autoregressive (AR) Model (Order 15) and Peak Five," where 24 features from both the time and frequency domains are generated within the 2500-25000 Hz frequency band. Each signal contributes five spectral peaks, resulting in five frequency features for each peak.
Following this, a one-way ANOVA test was conducted. Features with an ANOVA score below 26 (see Figure 12) were excluded from further study. The aim of this step was to focus the selection process on features with a more significant impact. Observing the high accuracy of the initial trial, the author systematically reduced the number of features, using accuracy as the metric to guide an efficient, gradual feature reduction.
Subsequently, numerous classifiers were trained, and their performance was meticulously documented for each feature subset selection. A comprehensive evaluation of various classifier algorithms was then conducted, focusing on identifying combinations that optimised accuracy and minimized discrepancies among algorithms. The training dataset, comprising a total of 813 subfolders, was divided into 20% for testing, 20% for validation, and 60% for training. A five-fold cross-validation was executed, and testing accuracy was recorded for performance comparison. The process began with the top 20 features based on one-way ANOVA scores exceeding 26, followed by the top 17 features with scores exceeding 58.6, then the top 10 features with scores exceeding 161, followed by the top seven features with scores exceeding 215, and finally, the top five features with scores exceeding 240. Classifier performance for each subset was recorded, and in cases of equal accuracy, all classifiers were included in Table 16.
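An indicative MATLAB sketch of this validation step is shown below; featureTable is a hypothetical table holding the extracted features together with a Subclass label column, features are ranked by their one-way ANOVA F-statistic, and a wide neural network (a single fully connected layer of 100 neurons, corresponding to the Classification Learner preset) is trained on the top-ranked subset with five-fold cross-validation.
% ANOVA-based feature ranking and wide neural network training (indicative sketch)
X = featureTable{:, 1:end-1};             % numeric feature matrix (assumes numeric predictors)
y = featureTable.Subclass;                % 'Normal' / 'Mild' / 'Moderate' / 'Severe' labels
F = zeros(1, size(X, 2));
for k = 1:size(X, 2)
    [~, tbl] = anova1(X(:, k), y, 'off'); % one-way ANOVA per feature, no display
    F(k) = tbl{2, 5};                     % F-statistic for the group effect
end
[~, rankIdx] = sort(F, 'descend');
topIdx = rankIdx(1:10);                   % e.g., the top 10 features (scores > 161)
mdl = fitcnet(X(:, topIdx), y, 'LayerSizes', 100, 'CrossVal', 'on', 'KFold', 5);
fprintf('Five-fold cross-validated accuracy: %.1f%%\n', 100 * (1 - kfoldLoss(mdl)));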

5. Conclusions

In this research, a new approach known as the Customised Load Adaptive Framework (CLAF) has been proposed for the classification of faults in Induction Motors (IM) in the form of Load-Dependent fault subclasses, namely mild, moderate, and severe fault categories. This framework has provided a comprehensive understanding of fault severity under varying load conditions, offering a profound and insightful method for fault analysis. Specifically tailored to the MFPT-bearing dataset, the research has highlighted patterns in time and frequency domain features under six different loads and has demonstrated how fault severity varies across various load conditions through the utilization of an optimal Continuous Wavelet Transform (CWT) energy approach selected by Wavelet Singular Entropy. Moreover, a two-sample t-test has been applied to assess significance, revealing a notable difference in CWT mean energy between the normal load condition and Outer Race Fault (ORF) and Inner Race Fault (IRF) under six distinct load factors. This discovery has yielded valuable insights into the energy dynamics of the examined system. Additionally, the CLAF framework has undergone validation with compelling results. When employing a wide neural network for the classification of these subclasses, an impressive 96.3% classification accuracy was achieved. This underscores the proper separation of classes and effective pattern recognition within the proposed CLAF.
In this research, the CLAF has undergone two phases: In Phase 1, Load-Dependent patterns in time and frequency domain features have been explored using one-way Analysis of Variance (ANOVA) ranking, and validation has been carried out via bagged tree classifiers. Major findings from Phase 1 have revealed consistent deviations in key features for both fault types, with Inner Race Fault (IRF) displaying more pronounced alterations. The one-way ANOVA test has ranked the shape factor feature as the most significant, followed by peak value, while Total Harmonic Distortion (THD) has shown no significance. Two different autoregressive models have been employed in frequency domain feature extraction. Subclassification based on extracted features per load has revealed distinct patterns, enabling the identification of load-induced patterns and contributing to a better understanding of the relationship between loads and feature expression in bearing health assessment. This approach has achieved an accuracy of up to 86.4% using bagged tree classifiers and 19 extracted features.
In Phase 2, Wavelet Singular Entropy (WSE) has identified "Amor" as the optimal Continuous Wavelet Transform (CWT) approach. Furthermore, the study has unveiled a significant relationship between fault severity and load factor, especially at the highest load factor of 300 lb when compared to the healthy condition. Severe faults, particularly in the outer fault condition, have exhibited a remarkable 226.88% increase in CWT mean energy compared to the healthy state. In contrast, inner faults have displayed varying degrees of sensitivity to different load levels. Notably, at load factors of 200 lb, 250 lb, and 300 lb, inner faults have shown substantial increases in CWT mean energy compared to the corresponding healthy condition, with increments of 491.88%, 533.49%, and 464.25%, respectively. The Load-Dependent fault subclasses were validated through a subclassification task using extracted time and frequency domain features. Results demonstrate a 96.3% classification accuracy using a wide neural network and the top 10 extracted features based on one-way ANOVA ranking on the testing dataset. Severe indices were classified at 100%, moderate at 88.3%, and mild at 97.8%.
Future work will focus on exploring multimodal aspects through decision fusion within the proposed CLAF framework. This approach extends traditional fault classification methods innovatively, accommodating load variations and dataset customization. As a result, CLAF is not limited to the MFPT-bearing dataset but can also be customized for other Induction Motor (IM) datasets. This research has the potential to advance IM condition monitoring in the future.

Author Contributions

Conceptualization, S.Z.M.H. and M.P.; methodology, S.Z.M.H. and M.P.; software, S.Z.M.H.; validation, M.P., Y.L. and M.P.; formal analysis, S.Z.M.H. and M.P.; investigation, M.P.; resources, S.Z.M.H.; data curation, S.Z.M.H.; writing—original draft preparation, S.Z.M.H.; writing—review and editing, S.Z.M.H., M.P. and Y.L.; visualisation, S.Z.M.H.; supervision, M.P.; project administration, S.Z.M.H., M.P. and Y.L.; funding acquisition, S.Z.M.H., M.P. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Saudi Arabian Government.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Condition Based Maintenance Fault Database for Testing of Diagnostic and Prognostics Algorithms. Available online: https://www.mfpt.org/fault-data-sets/ (accessed on 30 October 2023) [48].

Acknowledgements

Special thanks to the Saudi Arabian Ministry of Education for their support and to the Society for Machinery Failure Prevention Technology for providing the publicly available dataset used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alshorman, O.; Irfan, M.; Saad, N.; Zhen, D.; Haider, N.; Glowacz, A.; Alshorman, A. A Review of Artificial Intelligence Methods for Condition Monitoring and Fault Diagnosis of Rolling Element Bearings for Induction Motor. Shock Vib. 2020, 2020. [Google Scholar] [CrossRef]
  2. Cinar, E. A Sensor Fusion Method Using Deep Transfer Learning for Fault Detection in Equipment Condition Monitoring. 16th Int. Conf. Innov. Intell. Syst. Appl. INISTA 2022 2022, 1–6. [Google Scholar] [CrossRef]
  3. Nemani, V.; Bray, A.; Thelen, A.; Hu, C.; Daining, S. Health Index Construction with Feature Fusion Optimization for Predictive Maintenance of Physical Systems. Struct. Multidiscip. Optim. 2022, 65, 1–23. [Google Scholar] [CrossRef]
  4. Ye, L.; Ma, X.; Wen, C. Rotating Machinery Fault Diagnosis Method by Combining Time-Frequency Domain Features and Cnn Knowledge Transfer. Sensors 2021, 21. [Google Scholar] [CrossRef] [PubMed]
  5. Resendiz-Ochoa, E.; Osornio-Rios, R.A.; Benitez-Rangel, J.P.; Romero-Troncoso, R.D.J.; Morales-Hernandez, L.A. Induction Motor Failure Analysis: An Automatic Methodology Based on Infrared Imaging. IEEE Access 2018, 6, 76993–77003. [Google Scholar] [CrossRef]
  6. Silik, A.; Noori, M.; Altabey, W.A.; Ghiasi, R.; Wu, Z. Comparative Analysis of Wavelet Transform for Time-Frequency Analysis and Transient Localization in Structural Health Monitoring. SDHM Struct. Durab. Heal. Monit. 2021, 15, 1–22. [Google Scholar] [CrossRef]
  7. Iunusova, E.; Gonzalez, M.K.; Szipka, K.; Archenti, A. Early Fault Diagnosis in Rolling Element Bearings: Comparative Analysis of a Knowledge-Based and a Data-Driven Approach. J. Intell. Manuf. 2023. [Google Scholar] [CrossRef]
  8. Li, J.; Ying, Y.; Ren, Y.; Xu, S.; Bi, D.; Chen, X.; Xu, Y. Research on Rolling Bearing Fault Diagnosis Based on Multi-Dimensional Feature Extraction and Evidence Fusion Theory. R. Soc. Open Sci. 2019, 6. [Google Scholar] [CrossRef] [PubMed]
  9. Shi, Z.; Li, Y.; Liu, S. A Review of Fault Diagnosis Methods for Rotating Machinery. IEEE Int. Conf. Control Autom. ICCA 2020, 2020, 1618–1623. [Google Scholar] [CrossRef]
  10. Zhang, X.; Zhao, B.; Lin, Y. Machine Learning Based Bearing Fault Diagnosis Using the Case Western Reserve University Data: A Review. IEEE Access 2021, 9, 155598–155608. [Google Scholar] [CrossRef]
  11. Ahmed, H.; Nandi, A.K. Compressive Sampling and Feature Ranking Framework for Bearing Fault Classification With Vibration Signals. IEEE Access 2018, 6, 44731–44746. [Google Scholar] [CrossRef]
  12. Toma, R.N.; Gao, Y.; Piltan, F.; Im, K.; Shon, D.; Yoon, T.H.; Yoo, D.S.; Kim, J.M. Classification Framework of the Bearing Faults of an Induction Motor Using Wavelet Scattering Transform-Based Features. Sensors 2022, 22. [Google Scholar] [CrossRef]
  13. Nayana, B.R.; Geethanjali, P. Improved Identification of Various Conditions of Induction Motor Bearing Faults. IEEE Trans. Instrum. Meas. 2020, 69, 1908–1919. [Google Scholar] [CrossRef]
  14. Toma, R.N.; Prosvirin, A.E.; Kim, J.M. Bearing Fault Diagnosis of Induction Motors Using a Genetic Algorithm and Machine Learning Classifiers. Sensors (Switzerland) 2020, 20. [Google Scholar] [CrossRef]
  15. Martinez-Herrera, A.L.; Ferrucho-Alvarez, E.R.; Ledesma-Carrillo, L.M.; Mata-Chavez, R.I.; Lopez-Ramirez, M.; Cabal-Yepez, E. Multiple Fault Detection in Induction Motors through Homogeneity and Kurtosis Computation. Energies 2022, 15. [Google Scholar] [CrossRef]
  16. Yuan, L.; Lian, D.; Kang, X.; Chen, Y.; Zhai, K. Rolling Bearing Fault Diagnosis Based on Convolutional Neural Network and Support Vector Machine. IEEE Access 2020, 8, 137395–137406. [Google Scholar] [CrossRef]
  17. Hejazi, S.; Packianather, M.; Liu, Y. Novel Preprocessing of Multimodal Condition Monitoring Data for Classifying Induction Motor Faults Using Deep Learning Methods. In Proceedings of the 2022 IEEE 2nd International Symposium on Sustainable Energy, Signal Processing and Cyber Security (iSSSC), December 15 2022; IEEE; pp. 1–6. [Google Scholar]
  18. Zhang, H.; Borghesani, P.; Randall, R.B.; Peng, Z. A Benchmark of Measurement Approaches to Track the Natural Evolution of Spall Severity in Rolling Element Bearings. Mech. Syst. Signal Process. 2022, 166, 108466. [Google Scholar] [CrossRef]
  19. Han, T.; Zhang, L.; Yin, Z.; Tan, A.C.C. Rolling Bearing Fault Diagnosis with Combined Convolutional Neural Networks and Support Vector Machine. Meas. J. Int. Meas. Confed. 2021, 177, 109022. [Google Scholar] [CrossRef]
  20. Narayan, Y. Hb VsEMG Signal Classification with Time Domain and Frequency Domain Features Using LDA and ANN Classifier Materials Today: Proceedings Hb VsEMG Signal Classification with Time Domain and Frequency Domain Features Using LDA and ANN Classifier. Mater. Today Proc. 2021. [Google Scholar] [CrossRef]
  21. Jain, P.H.; Bhosle, S.P. Study of Effects of Radial Load on Vibration of Bearing Using Time-Domain Statistical Parameters. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1070, 012130. [Google Scholar] [CrossRef]
  22. Jain, P.H.; Bhosle, S.P. Analysis of Vibration Signals Caused by Ball Bearing Defects Using Time-Domain Statistical Indicators. Int. J. Adv. Technol. Eng. Explor. 2022, 9, 700–715. [Google Scholar] [CrossRef]
  23. Liu, M.K.; Weng, P.Y. Fault Diagnosis of Ball Bearing Elements: A Generic Procedure Based on Time-Frequency Analysis. Meas. Sci. Rev. 2019, 19, 185–194. [Google Scholar] [CrossRef]
  24. Pinedo-Sánchez, L.A.; Mercado-Ravell, D.A.; Carballo-Monsivais, C.A. Vibration Analysis in Bearings for Failure Prevention Using CNN. J. Brazilian Soc. Mech. Sci. Eng. 2020, 42, 1–16. [Google Scholar] [CrossRef]
  25. Granados-Lieberman, D.; Huerta-Rosales, J.R.; Gonzalez-Cordoba, J.L.; Amezquita-Sanchez, J.P.; Valtierra-Rodriguez, M.; Camarena-Martinez, D. Time-Frequency Analysis and Neural Networks for Detecting Short-Circuited Turns in Transformers in Both Transient and Steady-State Regimes Using Vibration Signals. Appl. Sci. 2023, 13, 12218. [Google Scholar] [CrossRef]
  26. Tian, B.; Fan, X.; Xu, Z.; Wang, Z.; Du, H. Finite Element Simulation on Transformer Vibration Characteristics under Typical Mechanical Faults. Proc. 9th Int. Conf. Power Electron. Syst. Appl. PESA 2022 2022, 1–4. [Google Scholar] [CrossRef]
  27. Kumar, V.; Mukherjee, S.; Verma, A.K.; Sarangi, S. An AI-Based Nonparametric Filter Approach for Gearbox Fault Diagnosis. IEEE Trans. Instrum. Meas. 2022, 71, 1–11. [Google Scholar] [CrossRef]
  28. MathWorks Analyze and Select Features for Pump Diagnostics. Available online: https://www.mathworks.com/help/predmaint/ug/analyze-and-select-features-for-pump-diagnostics.html (accessed on 27 November 2023).
  29. Hu, L.; Zhang, Z. EEG Signal Processing and Feature Extraction; Hu, L., Zhang, Z., Eds.; Springer Singapore: Singapore, 2019; ISBN 978-981-13-9112-5. [Google Scholar]
  30. Metwally, M.; Hassan, M.M.; Hassaan, G. Diagnosis of Rotating Machines Faults Using Artificial Intelligence Based on Preprocessing for Input Data. 2020.
  31. Djemili, I.; Medoued, A.; Soufi, Y. A Wind Turbine Bearing Fault Detection Method Based on Improved CEEMDAN and AR-MEDA. J. Vib. Eng. Technol. 2023. [Google Scholar] [CrossRef]
  32. Zhang, H.; Zhang, S.; Qiu, L.; Zhang, Y.; Wang, Y.; Wang, Z.; Yang, G. A Remaining Useful Life Prediction Method Based on PSR-Former. Sci. Rep. 2022, 12. [Google Scholar] [CrossRef] [PubMed]
  33. He, Z.; Fu, L.; Lin, S.; Bo, Z. Fault Detection and Classification in EHV Transmission Line Based on Wavelet Singular Entropy. IEEE Trans. Power Deliv. 2010, 25, 2156–2163. [Google Scholar] [CrossRef]
  34. Kaji, M.; Parvizian, J.; van de Venn, H.W. Constructing a Reliable Health Indicator for Bearings Using Convolutional Autoencoder and Continuous Wavelet Transform. Appl. Sci. 2020, 10, 8948. [Google Scholar] [CrossRef]
  35. Wang, S.; Wang, D.; Kong, D.; Wang, J.; Li, W.; Zhou, S. Few-Shot Rolling Bearing Fault Diagnosis with Metric-Based Meta Learning. Sensors (Switzerland) 2020, 20, 1–15. [Google Scholar] [CrossRef] [PubMed]
  36. Amanollah, H.; Asghari, A.; Mashayekhi, M. Damage Detection of Structures Based on Wavelet Analysis Using Improved AlexNet. Structures 2023, 56, 105019. [Google Scholar] [CrossRef]
  37. Zhang, Y.; Guo, H.; Zhou, Y.; Xu, C.; Liao, Y. Recognising Drivers’ Mental Fatigue Based on EEG Multi-Dimensional Feature Selection and Fusion. Biomed. Signal Process. Control 2023, 79, 104237. [Google Scholar] [CrossRef]
  38. Suresh, S.; Naidu, V.P.S. Mahalanobis-ANOVA Criterion for Optimum Feature Subset Selection in Multi-Class Planetary Gear Fault Diagnosis. JVC/Journal Vib. Control 2022, 28, 3257–3268. [Google Scholar] [CrossRef]
  39. Alharbi, A.H.; Towfek, S.K.; Abdelhamid, A.A.; Ibrahim, A.; Eid, M.M.; Khafaga, D.S.; Khodadadi, N.; Abualigah, L.; Saber, M. Diagnosis of Monkeypox Disease Using Transfer Learning and Binary Advanced Dipper Throated Optimization Algorithm. Biomimetics 2023, 8, 1–21. [Google Scholar] [CrossRef] [PubMed]
  40. Sayyad, S.; Kumar, S.; Bongale, A.; Kamat, P.; Patil, S.; Kotecha, K. Data-Driven Remaining Useful Life Estimation for Milling Process: Sensors, Algorithms, Datasets, and Future Directions. IEEE Access 2021, 9, 110255–110286. [Google Scholar] [CrossRef]
  41. Toma, R.N.; Toma, F.H.; Kim, J. Comparative Analysis of Continuous Wavelet Transforms on Vibration Signal in Bearing Fault Diagnosis of Induction Motor. In Proceedings of the 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), September 14 2021; IEEE; pp. 1–4. [Google Scholar]
  42. Toma, R.N.; Kim, J. Comparative Analysis of Continuous Wavelet Transforms on Vibration Signal in Bearing Fault Diagnosis of Induction Motor. 2021, 14–16. [CrossRef]
  43. Guo, T.; Zhang, T.; Lim, E.; Lopez-Benitez, M.; Ma, F.; Yu, L. A Review of Wavelet Analysis and Its Applications: Challenges and Opportunities. IEEE Access 2022, 10, 58869–58903. [Google Scholar] [CrossRef]
  44. Ozaltin, O.; Yeniay, O. A Novel Proposed CNN–SVM Architecture for ECG Scalograms Classification. Soft Comput. 2023, 27, 4639–4658. [Google Scholar] [CrossRef] [PubMed]
  45. Li, D.; Cao, M.; Deng, T.; Zhang, S. Wavelet Packet Singular Entropy-Based Method for Damage Identification in Curved Continuous Girder Bridges under Seismic Excitations. Sensors (Switzerland) 2019, 19. [Google Scholar] [CrossRef]
  46. Jayamaha, D.K.J.S.; Lidula, N.W.A.; Rajapakse, A.D. Wavelet-Multi Resolution Analysis Based ANN Architecture for Fault Detection and Localization in DC Microgrids. IEEE Access 2019, 7, 145371–145384. [Google Scholar] [CrossRef]
  47. Bechhoefer, E. A Quick Introduction to Bearing Envelope Analysis. J. Chem. Inf. Model. 2016, 53, 1–10. [Google Scholar]
  48. Bechhoefer, E. Condition Based Maintenance Fault Database for Testing of Diagnostic and Prognostics Algorithms. Available online: https://www.mfpt.org/fault-data-sets/ (accessed on 30 October 2023).
Figure 1. Phase 1: Time and frequency domain Load-Dependent pattern analysis.
Figure 2. Phase 2: Customised Load Adaptive Framework (CLAF) for IM fault classification.
Figure 3. Computer-aided drawings of defects made on: (a) Outer race; (b) Inner race [22].
Figure 4. Std and Range of time and frequency domain extracted features (a) for (b) IRF and (c) ORF.
Figure 5. AR model: (a) order two and peak = 1; (b) order fifteen and peak = 5.
Figure 6. One-way ANOVA ranking including spectral features extracted by AR model (a) order two, peak = 1.
Figure 7. One-way ANOVA ranking including spectral features extracted by AR model (b) order fifteen, peak = 5.
Figure 8. (a) IRF signal trace peaks count for InnerRaceFault_vload_1 dataset, (b) ORF signal trace peaks count for OuterRaceFault_3 dataset, (c) Normal signal trace for baseline_1 dataset.
Figure 9. Mean absolute WSE values for different mother of wavelets.
Figure 10. Two samples' t-test results compare IRF load factors (50, 100, 150, 200, 250, 300) with normal load conditions (LoadFactor 270).
Figure 11. Two samples' t-test results compare ORF load factors (50, 100, 150, 200, 250, 300) with normal load conditions (LoadFactor 270).
Figure 12. One-way ANOVA ranking AR model order fifteen, peaks = 5.
Table 1. Traditional statistical features.
Parameter Formula Description
Peak or Max $X_{max}$ The highest amplitude value observed within a given signal or dataset.
Root Mean Square (RMS) $\sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^{2}}$ Gives a measure of the magnitude of the signal.
Skewness $\frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{3}}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{2}\right)^{3/2}}$ Measures the asymmetry of the distribution about the mean.
Standard deviation (std) $\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{2}}$ The square root of the variance; represents the average deviation from the mean.
Kurtosis $\frac{\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^{4}}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{2}\right)^{2}}$ Indicates the "tailedness" of the distribution. A high kurtosis might indicate the presence of outliers or impulses in the signal.
Crest Factor $\frac{Peak}{RMS}$ The ratio of the peak amplitude to the RMS value, indicating the relative sharpness of peaks.
Peak to Peak $X_{max} - X_{min}$ Difference between the maximum and minimum values of the signal.
Impulse Factor $\frac{\max \lvert X_i \rvert}{\frac{1}{n}\sum_{i=1}^{n} \lvert X_i \rvert}$ Highlights the impulsive behaviour indicative of machinery faults.
Where N is the sample size, $x_i$ represents individual data points, and $\bar{x}$ is the mean value.
Table 3. IRF dataset splitting per load.
Inner Fault Dataset Code Load (lbs) Sampling rate (Hz) Duration (sec)
baseline_2 data_normal 270 97656 6
InnerRaceFault_vload_2 IRF_50 50 48828 3
InnerRaceFault_vload_3 IRF_100 100 48828 3
InnerRaceFault_vload_4 IRF_150 150 48828 3
InnerRaceFault_vload_5 IRF_200 200 48828 3
InnerRaceFault_vload_6 IRF_250 250 48828 3
InnerRaceFault_vload_7 IRF_300 300 48828 3
Table 4. ORF dataset splitting per load.
Outer Fault Dataset Code Load (lbs) Sampling rate (Hz) Duration (sec)
baseline_2 data_normal 270 97656 6
OuterRaceFault_vload_2 ORF_50 50 48828 3
OuterRaceFault_vload_3 ORF_100 100 48828 3
OuterRaceFault_vload_4 ORF_150 150 48828 3
OuterRaceFault_vload_5 ORF_200 200 48828 3
OuterRaceFault_vload_6 ORF_250 250 48828 3
OuterRaceFault_vload_7 ORF_300 300 48828 3
Table 5. General time and frequency domain features (IRF).
LoadFactor (lbs) Clearance Factor Crest Factor Impulse Factor Kurtosis Mean Peak Value RMS Shape Factor Skewness Std SINAD* SNR* THD*
50 40.04 15.462 28.69 27.97 -0.22 27.50 1.78 1.86 0.62 1.76 -21.32 -21.307 -5.36
100 37.30 14.488 26.96 30.53 -0.22 26.59 1.84 1.86 0.87 1.82 -21.05 -21.027 -0.53
150 33.30 13.249 24.31 33.13 -0.22 23.06 1.74 1.84 1.28 1.72 -19.05 -19.046 -10.06
200 38.15 13.537 26.92 37.28 -0.21 27.38 2.02 1.99 1.15 2.01 -18.22 -18.208 -6.31
250 37.52 13.022 26.18 37.49 -0.20 27.14 2.08 2.01 0.72 2.08 -17.70 -17.684 -5.46
300 35.24 12.998 25.17 35.30 -0.19 25.58 1.97 1.94 0.68 1.96 -17.35 -17.341 -8.41
270** 7.75 5.230 6.56 3.02 -0.14 4.65 0.89 1.25 0.00 0.88 -23.60 -23.598 -11.39
*Frequency domain features. **Normal condition.
Table 6. General time and frequency domain features (ORF).
LoadFactor (lbs) Clearance Factor Crest Factor Impulse Factor Kurtosis Mean Peak Value RMS Shape Factor Skewness Std SINAD* SNR* THD*
50 10.26 6.39 8.48 5.09 -0.19 6.35 0.99 1.33 0.04 0.98 -14.41 -14.40 -11.97
100 9.15 5.84 7.62 4.40 -0.18 4.93 0.84 1.31 -0.01 0.82 -13.15 -13.12 -9.06
150 9.54 6.10 7.94 4.04 -0.18 5.21 0.85 1.30 -0.04 0.83 -12.59 -12.56 -9.934
200 21.81 12.46 17.67 11.90 -0.17 12.28 0.99 1.42 0.31 0.97 -17.54 -17.52 -5.54
250 15.03 9.07 12.30 6.59 -0.16 8.66 0.96 1.36 0.12 0.94 -16.09 -16.06 -4.92
300 27.18 12.92 20.80 17.69 -0.16 19.43 1.50 1.61 0.27 1.50 -15.10 -15.10 -14.69
270** 7.75 5.23 6.56 3.02 -0.14 4.65 0.89 1.25 0.01 0.88 -23.60 -23.60 -11.39
*Frequency domain features. **Normal condition.
Table 7. Spectral features by AR model (IRF and ORF).
LoadFactor PeakAmp1 PeakAmp2 PeakFreq1 PeakFreq2 BandPower
Inner Outer Inner Outer Inner Outer Inner Outer Inner Outer
50 0.00034 0.000109 0.00031 0.000093 4363.937 1413.267 13991.090 14179.042 1.474 0.454
100 0.00046 0.000075 0.00012 0.000028 4256.059 1379.739 13968.668 14258.280 1.476 0.322
150 0.00046 0.000080 0.00005 0.000036 4191.394 1377.111 14127.206 14462.995 1.330 0.327
200 0.00031 0.000063 0.00011 0.000058 4025.383 4947.698 10622.786 1391.188 1.663 0.461
250 0.00061 0.000058 0.00009 0.000049 4124.988 1621.552 10365.553 5212.034 1.807 0.430
300 0.00077 0.000302 0.00058 0.000296 4081.332 2915.517 748.668 11675.566 1.618 1.101
Healthy 270 0.00003 0.000028 0.00003 0.000028 5490.855 5490.855 14478.764 14478.764 0.279 0.302
Table 8. Dataset segmentation and subfiles creation demonstration.
Dataset Segmentation CSV files Code Load Factor Subfiles Count
Example on baseline(Normal) with Matlab code. The segment is based on ratio, i.e., each segment in inner and outer fault contains 2500 samples, and each sample in normal condition contains 5000 data points.Preprints 92261 i001Preprints 92261 i002 Preprints 92261 i003 IRF_50 {'IRF_50' } 58
IRF_100 {'IRF_100'} 58
IRF_150 {'IRF_150'} 58
IRF_200 {'IRF_200'} 58
IRF_250 {'IRF_250'} 58
IRF_300 {'IRF_300'} 58
ORF_50 {'ORF_50' } 58
ORF_100 {'ORF_100' } 58
ORF_150 {'ORF_150'} 58
ORF_200 {'ORF_200'} 58
ORF_250 {'ORF_250'} 58
ORF_300 {'ORF_300'} 58
Normal {'Normal' } 117
Table 9. Classifier performance across feature selection thresholds for AR model (a) and peak = 1.
No. of features used in classifier training Top 13 (>20) Top 8 (>345) Top 7 (>373) Top 2 (>629)
Classifier name Boosted Trees Narrow Neural Network Bilayered Neural Network Fine Gaussian SVM
Accuracy score on the testing dataset 74.1% 72.8% 73.5% 59.9%
Table 10. Classifier performance across feature selection thresholds for AR model (b) order fifteen, peak = 5.
Number of selected features from ANOVA ranking Top 19 (>20) Top 14 (>72) Top 13 (>129) Top 11 (>171) Top 8 (>345)
Classifier Bagged trees Cubic SVM Quadratic SVM Quadratic Discriminant Quadratic SVM
Accuracy score on the testing dataset 86.4% 86.4% 83.3% 84.6% 76.5%
Table 11. Selected features (Top 14) distinguishing fault types: histogram visualization.
Features Color Code
Preprints 92261 i004
Features (ANOVA Rank) Features Histogram Features (ANOVA Rank) Features Histogram
1. Shape Factor Preprints 92261 i005 2. Peak Value Preprints 92261 i006
3. ClearanceFactor Preprints 92261 i007 4. Impulse Factor Preprints 92261 i008
5.Mean Preprints 92261 i009 6.CrestFactor Preprints 92261 i010
7. Kurtosis Preprints 92261 i011 8.RMS Preprints 92261 i012
9.Standard deviation Preprints 92261 i013 10.Band Power Preprints 92261 i014
11.Peak Amplitude1 Preprints 92261 i015 12.Peak Frequency4 Preprints 92261 i016
13.Peak Amplitude 2 Preprints 92261 i017 14.PeakFrequency3 Preprints 92261 i018
Table 12. Health conditions signal visualisation versus 2D time-frequency diagrams with three types of mother wavelet functions.
Health State Inner Outer Healthy
Dataset InnerRaceFault_vload_1 'OuterRaceFault_3.mat' 'baseline_1.mat'
2D time-frequency diagrams
Bump Preprints 92261 i019 Preprints 92261 i020 Preprints 92261 i021
Morse Preprints 92261 i022 Preprints 92261 i023 Preprints 92261 i024
Amor Preprints 92261 i025 Preprints 92261 i026 Preprints 92261 i027
Table 13. WSE scores comparison with three types of mother wavelet functions.
Health State Training set Code Morse Bump Amor
Healthy baseline_1 data_normal 2.236 1.483 5.381
baseline_2 data_normal2 2.836 1.600 15.830
WSE Avg. for 0.1 sec 2.536 1.541 10.603
Inner InnerRaceFault_vload_1 datat_inner 0.011 0.017 0.009
InnerRaceFault_vload_2 datat_inner2 0.023 0.040 0.019
WSE Avg. for 0.1 sec 0.017 0.028 0.014
Outer OuterRaceFault_3 data_outer 2.311 2.028 0.611
OuterRaceFault_1 data_outer_2 2.225 1.743 2.653
WSE Avg. for 0.1 sec 2.268 1.886 1.632
Table 16. CLAF Load-Dependent subclasses classifiers training on various feature subsets.
Classifier ANOVA ranking TTime 1 (sec) Validation dataset Testing dataset
VA 2 HA 3 MA 4 MoA 5 SA 6 Overall Accuracy
RUSBoostedTrees Top 20 (>26) 11.539 92.6% 100% 92.4% 91.2% 100% 93.8%
Fine Tree Top 17 (>58.6) 4.393 92.6% 100% 95.7% 82.4% 100% 93.8%
Wide neural network Top 10 (>161) 18.155 91.2% 100% 97.8% 88.2% 100% 96.3%
Cubic SVM Top 7 (a) (>215) 8.1055 93.1% 100% 96.7% 82.4% 100% 94.4%
Medium Gaussian SVM Top 7 (b) 5.8059 91.6% 100% 96.7% 82.4% 100% 94.4%
Fine Gaussian SVM Top 5 (>240) 12.711 92.9% 100% 97.8% 82.4% 100% 95.1%
1 TTime is the training time, 2 VA is the validation accuracy, 3 HA is the healthy state accuracy, 4 MA is the mild state accuracy, 5 MoA is the moderate state accuracy, 6 SA is the severe state accuracy.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.