
A Novel Methodology for Classifying Electrical Disturbances Using Deep Neural Networks


A peer-reviewed article of this preprint also exists.

Submitted: 04 May 2023 | Posted: 05 May 2023

Abstract
Electrical power quality is one of the main elements in power generation systems and, at the same time, one of the most significant challenges regarding stability and reliability. These architectures rely on different switching devices, different kinds of power generators, and non-linear loads serving a variety of industrial processes. As a result, Power Quality Disturbances (PQDs) must be classified and analyzed to prevent degradation of system reliability caused by their non-linear, non-stationary, and oscillatory nature. This paper presents a novel Multitasking Deep Neural Network (MDL) for the classification and analysis of multiple electrical disturbances. The characteristics are extracted with Empirical Mode Decomposition (EMD), a specialized and adaptive methodology for non-stationary signals. The methodology's design, development, and performance tests cover 28 difficulty levels that vary in severity, disturbance duration, and noise in the 20 dB to 60 dB range. The MDL was developed with a data set that is diverse in difficulty and noise, comprising 4500 records of different samples of multiple electrical disturbances. The analysis and classification methodology reaches an average accuracy of 95% with multiple disturbances. In addition, it achieves an average accuracy of 90% in estimating signal quantities important for studying electrical power quality, such as the crest factor, per-unit voltage, Short Term Flicker Perceptibility (Pst), and Total Harmonic Distortion (THD), among others.
Keywords: 
Subject: Engineering - Electrical and Electronic Engineering

1. Introduction

Power generation systems comprise different levels: generation, distribution, transmission, and consumption [1]. Electrical energy monitoring, analysis, and quality are essential at all these levels and are among the main challenges within distribution, transmission, and consumption infrastructures. Monitoring and analyzing power quality in real time is essential in order to apply the corresponding mitigation actions and avoid interrupting industrial or critical processes [2]. According to the IEEE 1159 standard and the European EN 50160 standard, the electromagnetic phenomena that disturb electrical power quality in generation systems are called power quality phenomena or Power Quality Disturbances (PQDs). These two documents define the physical properties and characteristics of PQDs and, in simple terms, define them as deviations of voltage and current from their ideal sinusoidal shape. The intrinsic characteristics of these phenomena are attributed to elements such as intermittent power flow caused by maximum power point tracking control and harmonic current injection caused by the control techniques of various power converters. With the introduction of elements required by the Industry 4.0 paradigm, the Internet of Things, and automatic manufacturing systems, many power converters based on high-frequency switching power electronic devices become necessary [3]. These phenomena severely affect the reliability and interoperability of industrial processes and electronic equipment. The negative impacts are diverse, including economic losses for industry, effects on the distribution architecture, and damage to the devices that consume the generated energy. The power quality problem has become increasingly prominent, so the main task is to guarantee voltage stability in power generation and distribution systems [4]. In this context, guaranteeing a high-quality power supply has become one of the tasks to be resolved urgently [5]. The architectures in which electric power quality is most unstable are microgrids, because these schemes often use renewable distributed generators, which depend on natural resources [6]. Figure 1 shows an example of a microgrid architecture.
Power quality in microgrids is a critical issue, as it directly affects the efficiency and reliability of the electricity supply [7]. Microgrids are designed to operate independently by integrating multiple types of distributed power generators, such as solar and wind, together with battery storage systems, and they are sometimes connected to the main grid [8]. Ensuring high power quality in a smart microgrid requires analyzing, quantifying, and monitoring the voltage signal, followed by the task of controlling and compensating for disturbances. The nature of microgrids, as well as of electrical disturbances, is stochastic [9]. PQDs not only occur individually but also in multiple or triggered forms; that is, it is common for frequency and voltage effects to occur jointly and randomly. The appearance of new combinations of PQDs means that monitoring and analysis schemes must be able to adapt and process new behaviors with respect to already-known information [10]. As a result of this need, artificial intelligence has played an essential role in classifying and analyzing multiple PQDs. The methodologies proposed over the years have been diverse and innovative in feature extraction, classification strategies, difficulty levels, and learning methodologies. However, most of them face the dilemma of the stochastic and random behavior of PQDs. This work covers the study, analysis, design, and development of a methodology for classifying and analyzing the different electrical disturbances. The disturbances are also quantified in order to study the severity of their impact on electric power quality and to facilitate electrical energy studies. The main contributions of this work are:
  • This paper presents a novel multitasking deep learning model for classifying and quantifying multiple electrical disturbances.
  • This study shows that deep multitasking learning is an excellent model for solving the challenge of quantitative analysis and classification of multiple electrical disturbances, without the level of complexity or noise in the signal being a problem.
  • Graphs are shown in which the assessment of electrical power quality under electrical disturbances can be observed. In this way, assessing the quality and the impact of linear and non-linear loads becomes simpler.
  • Feature extraction is proposed using an adaptive oscillatory methodology. Owing to the random nature of electrical disturbances, it is shown that the traditional strategies presented in other articles are less effective.
  • The development and testing were conducted with 29 electrical disturbance levels, from single disturbances to several simultaneous electrical disturbances, with noise levels ranging from 20 dB to 50 dB.
  • Additional tests, described in later sections, were performed on an islanded network with a photovoltaic system and high-frequency switching elements, into which electrical disturbances of all 29 levels with noise levels between 20 dB and 50 dB were constantly injected.
The article is organized as follows: Section 1 is the Introduction; Section 2, Related Works, briefly reviews the main contributions and methodologies proposed by several authors for monitoring and processing electrical disturbances; Section 3 presents the Theoretical Background; Section 4, Materials and Methods; Section 5, Analysis and Results; and Section 6, Conclusions.

2. Related Works

Table 1 presents some relevant contributions in aspects such as the feature extraction methodology and the signal classification methodology. It shows a great diversity of methodologies but, at the same time, a clear trend in the classification of electrical disturbances. Phases such as the extraction of signal characteristics, classification methodologies, and the type of output of the neural network can be identified in these contributions. Within the feature extraction phase, researchers traditionally use strategies such as the Discrete Wavelet Transform, Hilbert Transform, S-Transform, power quality indices, statistical signal characteristics, the initial layers of deep neural networks, the Discrete Fourier Transform, and the Fast Fourier Transform. These are good strategies for working with signals and for analyzing and extracting information; however, PQDs intrinsically behave as non-linear and non-stationary signals. Because of this, the feature extraction methodology is a vital phase of the algorithm, since its output will be the input to the signal analyzer and classifier. The Discrete Wavelet Transform (DWT) is based on wavelets, mathematical functions that efficiently represent the characteristics of a signal. The Wavelet Transform (WT) extracts information in the time and frequency domains and is suitable for dynamic signals, but it struggles when the data is noisy; its precision drops considerably. The Discrete Fourier Transform (DFT) and Short Time Fourier Transform (STFT) have low computational cost, but they have problems analyzing and detecting frequency phenomena. The Stockwell Transform (ST) is a combination of the STFT and the WT; it extracts and characterizes signals better, but it is a redundant representation of the time-frequency domain, and its processing time grows as the sampling time increases. Using statistical strategies or power quality indices as signal characteristics is a good strategy; however, the learning algorithm needs precise characteristics of the signal types, and these quantifiable values can vary considerably between signal types. In addition, feature extraction methodologies often generate more information than is significant, which can cause problems; as a result, multiple authors implement algorithms such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), or metaheuristic algorithms. Because electrical disturbances are non-linear and non-stationary signals, the methodology becomes more complex and, to some extent, suffers from overfitting and precision problems.
Another critical point is the development of the models, where a great diversity of strategies can be observed, most of them based on neural networks, varying between simple networks and deep networks. Simpler strategies such as support vector machines, Logistic Regression (LR), Naïve Bayes, and the J48 decision tree are also presented. The output of each of these models is a variable or label that determines the type of electrical disturbance; that is, the output is traditionally a label such as "Sag" or "Swell". All these methodologies solve the electrical signal classification task by only determining the taxonomy of the multiple PQDs from different classification and feature extraction perspectives. However, the taxonomy is not the only element needed for proper monitoring of electrical power quality. The analysis of the signal and the quantification of essential values to assess the severity of the disturbance are scarce in this type of contribution. Having detected this need within electrical power quality monitoring systems, this paper proposes a novel Multitasking Deep Neural Network methodology for the classification and analysis of multiple electrical disturbances, with the capacity to process, analyze, and classify non-stationary signals for the in-depth study of electrical power quality. In addition, the design, development, and tests were carried out with about 29 levels of difficulty and a data set with high diversity in time parameters and disturbance severity, and with noise levels ranging from 10 dB to 50 dB. The performance quantification shows an accuracy above 98% in the training, validation, and test phases. Performance tests were carried out in different power generation system scenarios with different configurations and levels of complexity.

3. Theoretical Background

The development of systems for monitoring electrical power quality has been a growing area in recent years. This section presents the elements essential to the classification, analysis, and monitoring proposal described in this paper.

3.1. Empirical Mode Decomposition

Empirical Mode Decomposition (EMD) is a data analysis method used in signal processing and time series analysis. It is a non-linear and non-stationary signal processing technique that decomposes a signal into its underlying oscillatory components, called Intrinsic Mode Functions (IMFs). The EMD method is based on the concept of "mode", which refers to a quasi-periodic oscillation that can be extracted from a signal. It uses a sifting process to isolate these IMFs by repeatedly decomposing a signal into high and low-frequency components until the residual is a monotonic function [22].
  • Initialize the signal x(t) to be decomposed into a set of Intrinsic Mode Functions (IMFs).
  • For each IMF component i (i = 1, 2, ..., N), repeat the following sifting process until convergence:
    a. Identify all local maxima and minima of x(t) to obtain the upper and lower envelopes, respectively.
    b. Calculate the average of the upper and lower envelopes to obtain the mean envelope h(t).
    c. Subtract the mean envelope h(t) from the signal x(t) to obtain a "detrended" candidate d(t) = x(t) − h(t).
    d. Check whether d(t) is a valid IMF by verifying the following conditions:
       i. The number of zero-crossings and extrema must be equal or differ at most by one.
       ii. The local mean of d(t) is zero.
    e. If d(t) satisfies the above conditions, it is considered an IMF, it is subtracted from the signal to form the residual, and the sifting process for this IMF is complete.
    f. If d(t) does not satisfy the above conditions, the sifting process is repeated with d(t) as the new input signal.
  • The residual signal obtained after extracting all IMFs is the final trend component of the signal.
The EMD algorithm is an iterative process that extracts the oscillatory components of the signal by sifting out the trend component at each iteration. The resulting IMFs are typically sorted in order of decreasing frequency, with the first IMF representing the highest frequency oscillation in the signal. The EMD algorithm can be computationally intensive and may require careful tuning of parameters to achieve accurate decomposition [23].
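To make the sifting procedure above concrete, the following minimal sketch decomposes a uniformly sampled signal into its first IMFs. It is an illustrative Python implementation (using NumPy and SciPy cubic splines for the envelopes), not the authors' code; the stopping tolerance, the number of sifting iterations, and the test waveform are assumed values chosen only for demonstration.

```python
# Minimal EMD sketch (illustrative only, assuming a uniformly sampled 1-D signal).
# Envelope construction and stopping criteria are simplified compared to
# production implementations such as PyEMD.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper/lower envelopes."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # not enough extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    mean_env = (upper + lower) / 2.0
    return x - mean_env

def emd(x, t, max_imfs=3, max_sift=50, tol=0.05):
    """Return a list of IMFs (the first IMFs carry the highest frequencies)."""
    imfs, residual = [], x.copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(max_sift):
            h_new = sift_once(h, t)
            if h_new is None:
                return imfs  # residual is (nearly) monotonic, stop
            # simple standard-deviation stopping criterion between sifting passes
            if np.sum((h - h_new) ** 2) / np.sum(h ** 2 + 1e-12) < tol:
                h = h_new
                break
            h = h_new
        imfs.append(h)
        residual = residual - h   # remove the extracted IMF before the next one
    return imfs

# Example: a 60 Hz test waveform with a short high-frequency transient, 16 kHz sampling
fs = 16000
t = np.arange(0, 0.2, 1 / fs)
signal = 180 * np.sin(2 * np.pi * 60 * t)
signal[1600:1650] += 40 * np.sin(2 * np.pi * 2000 * t[1600:1650])
imfs = emd(signal, t, max_imfs=3)
```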

3.2. Multitasking Deep Neural Network

Learning paradigms within deep learning and machine learning, such as supervised learning, offer fast and accurate classification and regression solutions in various intelligent systems and real-world applications [24]. The traditional approach of these learning paradigms is to learn a function that maps each given input to a corresponding output [25]. For classification problems, the output is a label or an identifier; for regression problems, it is a single predictive value, such as a temperature or an amount of money. A traditional learning paradigm is an excellent tool for solving various problems; however, it sometimes adapts poorly to the growing needs of today's complex decision-making [26]. From this arises the need to develop new learning paradigms; thus were born multi-head neural networks, or multi-head deep learning models, also known as Multi-output Deep Learning models (MLD). MLD takes advantage of the relationship between tasks to improve the performance of learning models [27]. Figure 2 compares the architecture of a traditional deep neural network and an MLD.
Figure 2 illustrates, with images as an example, how MLD works. This methodology can detect and classify elements in an image that might be undetectable to the human eye. It is worth mentioning that MLD is widely used in medicine with good performance results. In [27], MLD is used for surgical assistance, analyzing surgical gestures and predicting the progress and status of surgical movements. MLD has also been applied to determine ship positions [28], analyzing satellite images to detect multiple ships and estimate their coordinates in case the ship sensors fail, so that ships carrying industrial containers can still be tracked. Deep learning multitasking refers to the ability of a deep learning model to perform multiple tasks or learn multiple functions simultaneously. The model is trained to perform more than one task, such as image classification and object detection, or natural language processing. Multitasking can have several advantages, including improved performance on each task, reduced training time and computational resources, and the ability to learn shared representations that benefit all tasks. It can also be more robust to noisy or incomplete data, as it can leverage information from multiple sources.
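As a concrete illustration of the multi-head idea, the sketch below defines a shared trunk feeding one classification head and one regression head. It is a minimal PyTorch example with arbitrary layer sizes assumed only for illustration; it is not the architecture used in this paper (described in Section 4.3).

```python
# Minimal multi-head (multi-output) network sketch in PyTorch: a shared feature
# extractor with a classification head and a regression head. Layer sizes are
# illustrative, not those of the paper's model.
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    def __init__(self, n_features, n_classes, n_targets):
        super().__init__()
        # shared representation learned jointly for both tasks
        self.trunk = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.class_head = nn.Linear(64, n_classes)    # e.g. disturbance label
        self.regress_head = nn.Linear(64, n_targets)  # e.g. power quality indices

    def forward(self, x):
        z = self.trunk(x)
        return self.class_head(z), self.regress_head(z)

model = MultiHeadNet(n_features=4, n_classes=29, n_targets=6)
logits, indices = model(torch.randn(8, 4))  # batch of 8 feature vectors
```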

3.3. Performance Indices

The performance indices are used to quantify the behavior of the proposal at different levels of PQD complexity and in electrical generation systems with different architectures or scales. Table 2 shows the indices used for the PQD analysis, quantification, and regression tasks.
For the task of classifying PQDs by labels, indices based on the Confusion Matrix (CM), shown in Figure 3, were used. A CM is a table that evaluates the performance of a classification algorithm for a classification problem; it compares the predicted class labels to the actual class labels of a data set.
The rows of the CM represent the actual class labels, while the columns represent the predicted class labels. The CM main diagonal represents the correctly classified samples, while the off-diagonal elements represent the misclassified samples. A CM typically has four entries:
  • True positives (TP): Corresponds to the samples correctly predicted as positive.
  • False positives (FP): Corresponds to the samples incorrectly predicted as positive.
  • True negatives (TN): Corresponds to the samples correctly predicted as negative.
  • False negatives (FN): Corresponds to the samples incorrectly predicted as negative.
The CM provides valuable information about a classification algorithm, such as its accuracy, precision, recall, and F1-score.
Table 3. Performance indices for data classification.
Performance index    Formula
Accuracy    Accuracy = (TP + TN) / (TP + TN + FP + FN)
Recall    Recall = TP / (TP + FN)
Specificity    Specificity = TN / (TN + FP)
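The short example below illustrates how the indices of Table 3 (and the regression indices of Table 2) are computed from confusion-matrix counts and predicted values; the numbers are invented for demonstration and are not results of this work.

```python
# Worked example of the indices in Tables 2 and 3, computed with NumPy for a
# single binary confusion-matrix entry and a small regression sample.
import numpy as np

# Classification indices (Table 3)
TP, TN, FP, FN = 90, 85, 5, 10
accuracy    = (TP + TN) / (TP + TN + FP + FN)
recall      = TP / (TP + FN)
specificity = TN / (TN + FP)

# Regression indices (Table 2)
y_true = np.array([1.00, 0.82, 1.15, 0.95])  # e.g. per-unit voltage targets
y_pred = np.array([0.98, 0.85, 1.10, 0.97])
mae  = np.mean(np.abs(y_true - y_pred))
mape = np.mean(100 * np.abs(y_true - y_pred) / y_true)
print(accuracy, recall, specificity, mae, mape)
```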

4. Materials and Methods

Figure 4 shows the methodology used to analyze and classify the different levels of complexity of electrical disturbances.
First, there is the electrical disturbance generation block; later sections go deeper into the nature and properties of the different electrical disturbances used throughout this proposal. The second block corresponds to the feature extraction phase, in which the different electrical disturbances are decomposed into Intrinsic Mode Functions (IMFs) with the empirical mode decomposition algorithm. Using the first three IMFs is recommended for electrical disturbances, since they carry the most relevant information: the first two components contain the high frequencies, and the third contains information related to disturbances in the fundamental component. Limiting the decomposition in this way also lowers computational cost and yields a fast response. In addition to these values, the voltage of the electrical disturbance is added. Together they form a matrix comprising the three IMF columns and the voltage column of the signal, which are the input values of the Multitasking Deep Neural Network. These values are then processed by the Multitasking Neural Network architecture; later sections discuss its architecture and components in more detail. The output of the neural architecture is a vector composed of two groups of fields. The first is a numerical identifier of the disturbance type, ranging from 1 to 29; later sections explain the assignment of each of these numbers. The other group of values contains the essential elements for analyzing and quantifying electrical power quality, that is, for evaluating the quality in amplitude and frequency. These elements include the per-unit voltage, the crest factor, Total Harmonic Distortion (THD), Short Term Flicker Perceptibility (Pst), and the notch area and depth.
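A minimal sketch of the input-matrix construction just described is given below. It reuses the illustrative emd() helper from Section 3.1 (not the authors' code) and pads with zero columns if fewer than three IMFs are returned; this padding rule is an assumption made only for the sketch.

```python
# Sketch of the input matrix: the first three IMFs plus the raw voltage samples
# stacked as columns, giving the (n_samples, 4) input of the multitask network.
import numpy as np

def build_feature_matrix(voltage, t):
    imfs = emd(voltage, t, max_imfs=3)      # first 3 IMFs: highest frequencies
    while len(imfs) < 3:                    # pad if the decomposition stops early
        imfs.append(np.zeros_like(voltage))
    # columns: IMF1, IMF2, IMF3, raw voltage
    return np.column_stack([imfs[0], imfs[1], imfs[2], voltage])

X = build_feature_matrix(signal, t)  # 'signal' and 't' from the EMD example above
```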

4.1. Synthesis Of Electrical Disturbances And Database

The electromagnetic phenomena used to test and develop the Multitasking Deep Neural Network model were generated synthetically. A synthesizer of multiple electrical disturbances was developed, whose physical characteristics vary; the voltage signal without electrical disturbances, however, has the following characteristics.
Table 4. General characteristics of synthesis of electrical disturbances.
Parameter    Value
Sample rate    16 kHz
Peak voltage    180 V
Frequency    60 Hz
The synthesis of the different levels of electrical disturbances was based on the mathematical models presented in Appendix A, which are the scientific contribution of [29]. Each electrical disturbance has its own characteristics, such as duration and severity parameters (for example, the per-unit voltage rise or voltage loss), among other parameters used in the mathematical models for their correct synthesis. Table 5 shows a fragment of the table in Appendix A with the first levels of electrical disturbance types. It shows essential fields such as the name of the electrical disturbance and its numerical identifier, which is used as one of the output elements of the Multitasking Deep Neural Network model. This identifier is associated with the complexity level of the electrical disturbance. The table in Appendix A shows 29 different levels and, as can be seen, the complexity increases as the value of the identifier increases.
Table 5. Fragment of the table in Appendix A of mathematical models of electrical disturbances.
Power quality disturbance | Identifier | Mathematical model | Synthesis parameters
Normal signal | 1 | v(t) = 180 sin(ωt − φ) | ω = 2π·60 rad/s
Sag | 2 | v(t) = 180 (1 − α(u(t − t1) − u(t − t2))) sin(ωt − φ) | 0.1 ≤ α ≤ 0.9
Swell | 3 | v(t) = 180 (1 + β(u(t − t1) − u(t − t2))) sin(ωt − φ) | 0.1 ≤ β ≤ 0.9
Table 5 shows the mathematical models of the electrical disturbances, which were used to synthesize the multiple disturbances employed in the development of the model and in the different performance tests. The formulas share parameters such as the duration, where t is the total time, t1 marks the start of the electrical disturbance, and t2 its end. They also include synthesis parameters that represent the severity of the electrical disturbance. These values are determined randomly in order to obtain a diverse data set and thus better develop the Multitasking Deep Neural Network model. This synthesis system was used to build the data set for the training, validation, and testing of the models. About 5000 different electrical disturbances were synthesized, varying in disturbance duration, severity, and synthesis parameters; the only fixed value is the sampling frequency of 16 kHz. Of this data set, 70% was used for training, equivalent to 3500 electrical disturbances, and 15% each for the validation and test phases, equivalent to 750 disturbances of the 29 levels with a great variety of syntheses.
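As an example, the sag and swell models of Table 5 can be synthesized as in the following sketch, where the severity (α or β) and the times t1 and t2 are drawn at random. Only the severity range 0.1–0.9 comes from Table 5; the specific t1/t2 ranges and the ten-cycle window length are illustrative assumptions.

```python
# Sketch of the sag/swell synthesis models of Table 5 with randomized parameters.
import numpy as np

fs, f0, Vp = 16000, 60, 180           # sampling rate, fundamental, peak voltage
t = np.arange(0, 10 / f0, 1 / fs)     # ten fundamental cycles
u = lambda x: (x >= 0).astype(float)  # unit step u(t)

def synth(kind="sag", rng=np.random.default_rng()):
    alpha = rng.uniform(0.1, 0.9)                # severity (alpha or beta)
    t1 = rng.uniform(0.02, 0.08)                 # disturbance start (s), assumed range
    t2 = t1 + rng.uniform(0.02, 0.06)            # disturbance end (s), assumed range
    window = u(t - t1) - u(t - t2)               # rectangular disturbance window
    sign = -1.0 if kind == "sag" else +1.0       # sag lowers, swell raises the amplitude
    return Vp * (1 + sign * alpha * window) * np.sin(2 * np.pi * f0 * t)

sag, swell = synth("sag"), synth("swell")
```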

4.2. Description of Power System

The microgrid model used is an islanded grid with a photovoltaic system, developed in the Matlab 2021b environment with the Simulink and App Designer tools, into which different electrical disturbances with different levels of complexity were introduced, as shown in the previous section. The islanded microgrid model, shown in Figure 5, is mainly composed of a photovoltaic system. A user can manipulate the temperature and solar irradiance variables through an interface developed in App Designer.
Figure 5. Photovoltaic power generator system for additional tests with multiple electrical disturbance injector system.
The solar photovoltaic system generates a maximum power of 250 kW at STC (cell temperature of 25 °C with solar irradiance of 1000 W/m²). The system also has an integrated DC-DC boost charge controller and a DC-AC voltage source inverter (VSI). A maximum power point tracking (MPPT) controller works together with the DC-DC boost voltage controller: the MPPT control helps to generate the proper voltage by extracting the maximum power and adjusting the duty cycle, avoiding performance problems caused by the changes in temperature and solar irradiance simulated in the system. The VSI is a full-bridge inverter operating at a 1000 Hz switching frequency.
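The paper specifies an MPPT-controlled DC-DC boost stage but does not name the tracking algorithm; as an illustration of how such a controller adjusts the duty cycle, the sketch below implements a generic perturb-and-observe (P&O) update, which is an assumption and not necessarily the scheme used in the Simulink model.

```python
# Illustrative perturb-and-observe MPPT duty-cycle update (assumed algorithm).
# For a boost stage feeding a regulated DC link, decreasing the duty cycle
# raises the PV-side voltage, Vpv = Vdc * (1 - D).
def mppt_perturb_observe(v, i, v_prev, p_prev, duty, step=0.005):
    """Return the updated duty cycle and the new (v, p) memory."""
    p = v * i
    if p != p_prev:
        # keep perturbing in the same direction while power increases,
        # reverse direction when power decreases
        if (p > p_prev) == (v > v_prev):
            duty -= step   # move toward a higher PV voltage
        else:
            duty += step   # move toward a lower PV voltage
    duty = min(max(duty, 0.0), 0.95)   # clamp to a safe duty-cycle range
    return duty, v, p
```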

4.3 Deep neural network multitasking architecture

Figure 6 shows the architecture used for multitasking learning. The following points describe each layer: how it works, its purpose within the deep neural network, and other relevant aspects. This makes it easier to observe the function and composition of the neural network, how the information flows, how the data are processed, and how they are normalized to reduce computational cost. A minimal sketch of such a multitask architecture is given after Figure 6.
  • Batch Normalization: This layer is used to improve the training speed and stability of the model. The basic idea behind batch normalization is to normalize the input data of each layer [30]. This is done by subtracting the batch mean from each input data point and dividing it by the batch standard deviation; the batch mean and standard deviation are estimated from the input data of a batch rather than the entire dataset [31]. Batch normalization helps to reduce the problem of internal covariate shift, which occurs when there is high variation in the input data and can lead to slower convergence and overfitting [32]. Batch normalization is a powerful technique that improves the performance of deep neural networks [33].
  • Convolutional layer or conv layer: is a crucial building block of convolutional neural networks (CNNs). It is designed to perform feature extraction from input data such as images, video, or audio. The basic idea behind convolutional layers is to apply a set of learnable filters (kernels or weights) to the input data to extract essential features [34]. Each filter performs a convolution operation on the input data, which involves sliding the filter over the input and computing the dot product between the filter weights and the local input values at each position [35]. In a Convolutional Neural Network (CNN) used for regression, the convolutional layers will be designed to extract relevant features from the input data that help predict the target values. The convolutional layers are an essential part of the network architecture for regression problems because they allow the network to capture important local patterns in the input data, which can be highly relevant for predicting the target values. By stacking multiple convolutional layers with increasing filter sizes, the network can learn increasingly complex and abstract features from the input data, making more accurate predictions [36].
  • Pooling: is used for down-sampling. The aim is to scale and map the data after feature extraction, reducing the dimension of the data and extracting the important information, thus performing feature reduction efficiently within the neural architecture, avoiding extra processing phases and reducing computational consumption [37]. There are several types of pooling layers, the most common being max pooling and average pooling. Max pooling layers reduce the spatial dimensions of the output from the convolutional layers by taking the maximum value within each pooling window [38].
  • Dropout: randomly drops out some of the neurons in the previous layer during training, which helps prevent overfitting and improves the network’s generalization ability. The main idea behind dropout is that the network learns to rely on the remaining neurons to make accurate predictions. This forces the network to learn more robust features that are not dependent on any specific set of neurons [39].
  • SoftMax: is a typical activation function used in neural networks, particularly in multi-class classification problems. The SoftMax function takes a vector of real-valued scores as input and normalizes them into a probability distribution over the classes [40].
  • The flatten layer: is a layer that converts multidimensional inputs into a one-dimensional vector. This is often done to connect a convolutional layer to a fully connected layer, which requires one-dimensional inputs [41].
  • Fully connected layer or dense layer: is a layer where each neuron is connected to every neuron in the previous layer. Each neuron performs a weighted sum of the activations from the previous layer and then applies an activation function to the sum to produce an output. The weights and biases are learned during training using backpropagation, where the gradients are propagated backward from the output to the input layer [42].
Figure 6. The architecture of a Novel Multitasking Deep Neural Network for Classification and Analysis of Multiple Electrical Disturbances.
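A minimal sketch of a multitasking network assembled from the layer types listed above is shown below (in PyTorch rather than Matlab). The filter counts, kernel sizes, sequence length, and head widths are illustrative placeholders and do not reproduce the exact architecture of Figure 6; the final softmax of the classification head is applied implicitly by the cross-entropy loss.

```python
# Multitasking 1-D CNN sketch using the layer types described above: batch
# normalization, convolution, pooling, dropout, flatten, and dense heads.
import torch
import torch.nn as nn

class MultitaskPQDNet(nn.Module):
    def __init__(self, in_channels=4, seq_len=3200, n_classes=29, n_indices=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.BatchNorm1d(in_channels),
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Dropout(0.3),
            nn.Flatten(),
        )
        flat = 32 * (seq_len // 16)               # length after two 4x poolings
        self.class_head = nn.Sequential(          # disturbance identifier 1..29
            nn.Linear(flat, 128), nn.ReLU(), nn.Linear(128, n_classes),
        )
        self.regress_head = nn.Sequential(        # THD, Pst, crest factor, ...
            nn.Linear(flat, 128), nn.ReLU(), nn.Linear(128, n_indices),
        )

    def forward(self, x):                         # x: (batch, 4, seq_len)
        z = self.features(x)
        return self.class_head(z), self.regress_head(z)

model = MultitaskPQDNet()
logits, indices = model(torch.randn(2, 4, 3200))
# joint multitask loss: cross-entropy (classification) + MSE (regression)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 4])) \
       + nn.MSELoss()(indices, torch.zeros(2, 6))
```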

5. Analysis and Results

Figure 7 shows the graphical display of feature extraction with the Empirical Mode Decomposition (EMD) algorithm. Various multiple or ultrafast disturbances are shown with and without noise. This is essential so that the difference in complexity is appreciated when a system trained with clean signals must work with signals containing noise between 10 dB and 50 dB. The left side of Figure 7 shows an ultrafast frequency disturbance called a spike, which is difficult to detect with the human eye, and often even with sophisticated analysis and detection systems, because the phenomenon lasts nanoseconds. The right side of Figure 7 shows the same spike with noise added.
Figure 7. Feature extraction with EMD algorithm left) Spike right) Spike with noise.
Each image is divided into three panels: the first is the complete signal with the electrical disturbance, and the others are the Intrinsic Mode Functions (IMFs) extracted as characteristics of the electrical disturbance. The spike image shows how difficult it is to detect the disturbance directly; with the IMFs, however, it is easier to detect, even when the signal is very noisy. The following figure presents the extraction of notch features, which exhibit the same behavior.
Figure 8. Feature extraction with EMD algorithm left) Notch right) Notch with noise.
At first glance, it is difficult to notice where, and in which seconds, the electrical disturbance is located. However, the extracted characteristics help to detect where the ultra-fast disturbance occurs. The same applies to the oscillatory transient: the characteristics of the electrical disturbance can be detected even with high noise levels.
Figure 9. Feature extraction with EMD algorithm left) Oscillatory transient right) Oscillatory transient with noise.

5.1 Signal classification performance indices

Figure 10 (right) shows the learning curve of the classification head in terms of the loss. This plot graphically represents the behavior of the model during training and validation: the value of the loss (error) function is plotted on the y-axis against the number of training iterations or epochs on the x-axis. The loss function measures the difference between the predicted outputs of the model and the actual outputs and is used to optimize the model during training.
Overall, the loss curve is an essential tool for understanding the performance of a machine learning algorithm during training and for diagnosing problems that may arise during the training process. Figure 10 (left), on the other hand, shows the learning curve in terms of precision, where it can be observed that the model converges to a high precision. However, performance is quantified beyond learning curves.
Tests were performed using the test data segment and are displayed graphically in the following confusion matrices. The left side of Figure 11 shows the confusion matrix of the first 16 levels of electrical disturbances, using a data set of about 750 electrical disturbances with different levels of complexity: clean disturbances and noisy disturbances between 10 dB and 50 dB. Some misclassifications can be observed, with an error oscillating between 2% and 5%. The right side of Figure 11 shows the second part, levels 17 to 29, where the error ranges from 1.8% to 8.3%. Because the complexity increases in this part, the model has more trouble resolving the taxonomy of the last levels, since they have a strong affinity with each other due to the characteristics of the signals.
Additionally, tests were carried out on the power generation system described in Section 4.2. The left side of Figure 12 shows the confusion matrix of the first part, that is, levels 1 to 16 of electrical disturbances. The error percentage rises somewhat, because the number of disturbances was around 200 across the 29 levels; even so, a high accuracy value is retained. The right side of Figure 12 shows the second part, levels 17 to 29. Interestingly, the error level increases slightly in the last disturbances, from 23 to 29, because these levels tend to have more affinity between them. However, an accuracy above 80% is maintained.

5.2 Performance Indices in Data Regression

For the regression task, Figure 13 (right) shows the training curve in terms of the loss, and Figure 13 (left) shows the training curve in terms of the precision of the regression task on the training and validation data segments.
The left side of Figure 14 shows the per-unit voltage analysis of an electrical disturbance over 0.12 seconds. A disturbance in amplitude of more than 80% is detected; that is, it is a severe disturbance. The right side of Figure 14 shows the same analysis with 40 dB of noise; the analysis is not affected and still shows a voltage drop of 80%.
Figure 15 shows the analysis of the crest factor of a disturbance; recall that the nominal value for a sinusoidal voltage signal is about 1.4. The left side of Figure 15 shows increases that are important to consider, and the right side shows the same analysis for a disturbance with 40 dB of noise. The noise does not pose a problem for the analysis and quantification model.
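For reference, the per-unit voltage and crest factor discussed above can be computed, for example, over one-cycle windows as in the following sketch. The one-cycle window and the 180 V peak (about 127.3 V RMS) nominal reference are assumptions taken from the synthesis parameters of Section 4.1, not from the authors' analysis code.

```python
# Sliding-window per-unit RMS voltage and crest factor (peak/RMS, ~1.41 for a
# clean sinusoid), computed cycle by cycle over a voltage record.
import numpy as np

def pu_and_crest(v, fs=16000, f0=60, v_nom_rms=180 / np.sqrt(2)):
    n = fs // f0                                  # samples per cycle (approx.)
    pu, crest = [], []
    for k in range(0, len(v) - n, n):
        w = v[k:k + n]
        rms = np.sqrt(np.mean(w ** 2))
        pu.append(rms / v_nom_rms)                # per-unit voltage
        crest.append(np.max(np.abs(w)) / rms)     # crest factor
    return np.array(pu), np.array(crest)

pu, cf = pu_and_crest(sag)   # 'sag' from the synthesis example in Section 4.1
```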

5.3 Analysis of Results

Table 6 compares critical elements of methodologies for classifying multiple electrical disturbances, such as the precision of the models, the feature extraction methodology, the classification methodology, the levels of complexity of the electrical disturbances, the noise levels used, and, finally, whether the proposal offers any methodology for the analysis and quantification of electrical disturbances.
Table 6. Comparison of investigations with the contribution in this paper.
Characteristic | [15] | [43] | [12] | [20] | Current
Accuracy percentage | 79.14% - 83.66% | 90% | 88% - 98% | 91.3% - 99% | 98% - 99%
Feature extraction methodology | Short-Time Fourier Transform | Higher-Order Statistics | 1-D convolutional layers | Wavelet Transform | Empirical Mode Decomposition (EMD)
Classification methodology | Convolutional Neural Networks, Long Short-Term Memory | Multi-layer Perceptron (MLP), Support Vector Machine (SVM) | Deep convolutional neural network | Support Vector Machine (SVM) | Multitasking Deep Neural Network
Electrical disturbance levels | 7 levels | 2 levels (sags and swells) | 15 levels | 5 levels | 28 levels
Noise levels | Without noise | 40 dB to 60 dB | 40 dB to 60 dB | Without noise | 10 dB to 50 dB
Quantitative analysis of electrical disturbances | Does not perform quantification or analysis | Does not perform quantification or analysis | Does not perform quantification or analysis | Does not perform quantification or analysis | Quantitative analysis of the different electrical disturbances
Several interesting points can be inferred from this comparative table. First, the precision percentage does not vary much among the classification proposals, and most remain above 90%. However, a clear concentration is appearing around feature extraction and classification methodologies based on deep learning or machine learning; artificial intelligence has played a significant role in various contexts in the analysis and monitoring of electrical disturbances. Regarding the levels of electrical disturbances, some proposals are reduced to sags and swells, or to five or fifteen disturbance levels; these contributions show a notable bias toward classifying disturbances that arise in the amplitude, because working with frequency disturbances can be complex and challenging, especially when noise is added. Regarding the noise levels against which this proposal is compared, the other works operate between 40 dB and 60 dB, which represents less noise than 10 dB or 20 dB; the level of complexity increases as the signal becomes noisier. Furthermore, none of these proposals quantifies or analyzes the electrical disturbances; they only assign a taxonomy label to the type of disturbance that occurs. It is essential to mention that measuring and analyzing the main power quality factors enables more direct and informative power quality reports. In summary, the following important points of the proposed model can be highlighted.
1. Extraction of adaptive multi-resolution features of the signal.
2. Processing, analysis, and classification of the signal.
3. Noise affects the classification of the signal; however, thanks to the analysis task, it helps to identify the effects on the quality of the electric waveform.
4. Introduction of the analysis and quantification of the primary elements used to quantify electrical power quality. This outlines an opportunity to produce more visually appealing power quality reports with data science tools, in addition to compensating electrical disturbances intelligently.

6. Conclusions

This article proposes a new multitasking deep neural network for classifying and analyzing multiple electrical disturbances. The Empirical Mode Decomposition (EMD) algorithm, specialized for the non-stationary and non-linear behavior intrinsic to electrical disturbances, was used to extract the signal characteristics. The methodology's performance was quantified using traditional performance indices for the regression and classification heads of the network, under different noise levels and different levels of disturbance complexity. The methodology shows an accuracy of over 90% regardless of the noise level and the complexity of the electrical disturbances. The different tests showed that the methodology performs excellently when classifying and analyzing ultra-fast frequency and amplitude disturbances; even with noise in the signal, the precision remained above 90%. The EMD feature extraction algorithm proved to be an excellent tool for processing the data while preserving the accuracy of the algorithm. In addition to presenting a methodology for the classification and quantification of values essential to the study of electrical power quality, with a high degree of precision across different levels of difficulty, a methodology based on diverse synthetic disturbances is presented, whose advantage is the ease of generating a diverse data set and performing tests with different levels and varieties of electrical disturbances and with generators of different power ratings. In this way, the proposed methodology is tested against the scalability challenge of different electricity generation systems.

Author Contributions

Conceptualization, A.E.G.S. and M.G.A.; Methodology, A.E.G.S. and E.A.R.A; Software, A.E.G.S; Validation, M.G.A. and S.T.A.; Formal analysis, J.R-R.; Investigation, A.E.G.S.; Resources, J.R.-R. and M.T.A.; Data curation, A.E.G.S.. and J.R-R; Writing original draft preparation, review, and editing, A.A.-D., J.R.-R., and R.V.C-S.

Funding

This research was funded by the National Council on Science and Technology.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in the manuscript:
PQD Power Quality Disturbance    MDL Multitasking Deep Neural Network
EMD Empirical Mode Decomposition    Pst Short Term Flicker Perceptibility
THD Total Harmonic Distortion    SVM Support Vector Machine
1-D-CNN 1-Dimensional Convolution Neural Network
LSTM Long Short Term Memory    CNN-LSTM Convolutional Neural Network Long Short Term Memory
CA Cluster Analysis    ELM Extreme Learning Machine
GPQIs Global Power Quality Indices    DWT Discrete Wavelet Transform
WT Wavelet Transform    DFT Discrete Fourier Transform
STFT Short Time Fourier Transform    LR Logistic Regression
LDA Linear Discriminant Analysis    IMFs Intrinsic Mode Functions
MLD Multi-output Deep Learning models    TP True Positives
FP False Positives    TN True Negatives
FN False Negatives    VSI Voltage Source Inverter
IMF Intrinsic Mode Function    MPPT Maximum Power Point Tracking
PWM Pulse Width Modulation

Appendix A

Table A1. Mathematical models of electrical disturbances.

References

  1. Souza Junior, M.E.T.; Freitas, L.C.G. Power Electronics for Modern Sustainable Power Systems: Distributed Generation, Microgrids and Smart Grids—A Review. Sustainability 2022, 14.
  2. Prashant; Siddiqui, A.S.; Sarwar, M.; Althobaiti, A.; Ghoneim, S.S.M. Optimal Location and Sizing of Distributed Generators in Power System Network with Power Quality Enhancement Using Fuzzy Logic Controlled D-STATCOM. Sustainability 2022, 14.
  3. Ma, C.T.; Gu, Z.H. Design and Implementation of a GaN-Based Three-Phase Active Power Filter. Micromachines 2020, 11.
  4. Ma, C.T.; Shi, Z.H. A Distributed Control Scheme Using SiC-Based Low Voltage Ride-Through Compensator for Wind Turbine Generators. Micromachines 2022, 13.
  5. Wan, C.; Li, K.; Xu, L.; Xiong, C.; Wang, L.; Tang, H. Investigation of an Output Voltage Harmonic Suppression Strategy of a Power Quality Control Device for the High-End Manufacturing Industry. Micromachines 2022, 13.
  6. Yoldaş, Y.; Önen, A.; Muyeen, S.; Vasilakos, A.V.; İrfan Alan. Enhancing smart grid with microgrids: Challenges and opportunities. Renewable and Sustainable Energy Reviews 2017, 72.
  7. Jumani, T.A.; Mustafa, M.W.; Rasid, M.M.; Mirjat, N.H.; Leghari, Z.H.; Saeed, M.S. Optimal Voltage and Frequency Control of an Islanded Microgrid using Grasshopper Optimization Algorithm. Energies 2018, 11, 3191.
  8. Samanta, H.; Das, A.; Bose, I.; Jana, J.; Bhattacharjee, A.; Bhattacharya, K.D.; Sengupta, S.; Saha, H. Field-Validated Communication Systems for Smart Microgrid Energy Management in a Rural Microgrid Cluster. Energies 2021, 14, 6329.
  9. Banerjee, S.; Bhowmik, P.S. A machine learning approach based on decision tree algorithm for classification of transient events in microgrid. Electrical Engineering 2023.
  10. Mahela, O.P.; Shaik, A.G.; Khan, B.; Mahla, R.; Alhelou, H.H. Recognition of Complex Power Quality Disturbances Using S-Transform Based Ruled Decision Tree. IEEE Access 2020, 8, 173530–173547.
  11. Liu, H.; Hussain, F.; Yue, S.; Yildirim, O.; Yawar, S.J. Classification of multiple power quality events via compressed deep learning. International Transactions on Electrical Energy Systems 2019, 29, e12010.
  12. Wang, S.; Chen, H. A novel deep learning method for the classification of power quality disturbances using deep convolutional neural network. Applied Energy 2019, 235, 1126–1140.
  13. Thirumala, K.; Pal, S.; Jain, T.; Umarikar, A.C. A classification method for multiple power quality disturbances using EWT based adaptive filtering and multiclass SVM. Neurocomputing 2019, 334, 265–274.
  14. Shen, Y.; Abubakar, M.; Liu, H.; Hussain, F. Power Quality Disturbance Monitoring and Classification Based on Improved PCA and Convolution Neural Network for Wind-Grid Distribution Systems. Energies 2019, 12.
  15. Garcia, C.I.; Grasso, F.; Luchetta, A.; Piccirilli, M.C.; Paolucci, L.; Talluri, G. A Comparison of Power Quality Disturbance Detection and Classification Methods Using CNN, LSTM and CNN-LSTM. Applied Sciences 2020, 10.
  16. Jasiński, M.; Sikorski, T.; Kostyła, P.; Leonowicz, Z.; Borkowski, K. Combined Cluster Analysis and Global Power Quality Indices for the Qualitative Assessment of the Time-Varying Condition of Power Quality in an Electrical Power Network with Distributed Generation. Energies 2020, 13.
  17. Subudhi, U.; Dash, S. Detection and classification of power quality disturbances using GWO ELM. Journal of Industrial Information Integration 2021, 22, 100204.
  18. Radhakrishnan, P.; Ramaiyan, K.; Vinayagam, A.; Veerasamy, V. A stacking ensemble classification model for detection and classification of power quality disturbances in PV integrated power network. Measurement 2021, 175, 109025.
  19. Chamchuen, S.; Siritaratiwat, A.; Fuangfoo, P.; Suthisopapan, P.; Khunkitti, P. Adaptive Salp Swarm Algorithm as Optimal Feature Selection for Power Quality Disturbance Classification. Applied Sciences 2021, 11.
  20. Saxena, A.; Alshamrani, A.M.; Alrasheedi, A.F.; Alnowibet, K.A.; Mohamed, A.W. A Hybrid Approach Based on Principal Component Analysis for Power Quality Event Classification Using Support Vector Machines. Mathematics 2022, 10.
  21. Cuculić, A.; Draščić, L.; Panić, I.; Ćelić, J. Classification of Electrical Power Disturbances on Hybrid-Electric Ferries Using Wavelet Transform and Neural Network. Journal of Marine Science and Engineering 2022, 10.
  22. Sun, J.; Chen, W.; Yao, J.; Tian, Z.; Gao, L. Research on the Roundness Approximation Search Algorithm of Si3N4 Ceramic Balls Based on Least Square and EMD Methods. Materials 2023, 16.
  23. Eltouny, K.; Gomaa, M.; Liang, X. Unsupervised Learning Methods for Data-Driven Vibration-Based Structural Health Monitoring: A Review. Sensors 2023, 23.
  24. Xu, D.; Shi, Y.; Tsang, I.W.; Ong, Y.S.; Gong, C.; Shen, X. Survey on Multi-Output Learning. IEEE Transactions on Neural Networks and Learning Systems 2020, 31, 2409–2429.
  25. Tien, C.L.; Chiang, C.Y.; Sun, W.S. Design of a Miniaturized Wide-Angle Fisheye Lens Based on Deep Learning and Optimization Techniques. Micromachines 2022, 13.
  26. Liu, W.; Xu, D.; Tsang, I.W.; Zhang, W. Metric Learning for Multi-Output Tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence 2019, 41, 408–422.
  27. van Amsterdam, B.; Clarkson, M.J.; Stoyanov, D. Multi-Task Recurrent Neural Network for Surgical Gesture Recognition and Progress Prediction. 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 1380–1386.
  28. Yang, X.; Sun, H.; Sun, X.; Yan, M.; Guo, Z.; Fu, K. Position Detection and Direction Prediction for Arbitrary-Oriented Ships via Multitask Rotation Region Convolutional Neural Network. IEEE Access 2018, 6, 50839–50849.
  29. Igual, R.; Medrano, C.; Arcega, F.J.; Mantescu, G. Integral mathematical model of power quality disturbances. 2018 18th International Conference on Harmonics and Quality of Power (ICHQP), 2018, pp. 1–6.
  30. Kaur, R.; GholamHosseini, H.; Sinha, R.; Lindén, M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Medical Imaging 2022, 22, 103.
  31. Zhao, R.; Wang, S.; Du, S.; Pan, J.; Ma, L.; Chen, S.; Liu, H.; Chen, Y. Prediction of Single-Event Effects in FDSOI Devices Based on Deep Learning. Micromachines 2023, 14.
  32. Xu, S.; Zhou, Y.; Huang, Y.; Han, T. YOLOv4-Tiny-Based Coal Gangue Image Recognition and FPGA Implementation. Micromachines 2022, 13.
  33. Halbouni, A.; Gunawan, T.S.; Habaebi, M.H.; Halbouni, M.; Kartiwi, M.; Ahmad, R. CNN-LSTM: Hybrid Deep Neural Network for Network Intrusion Detection System. IEEE Access 2022, 10, 99837–99849.
  34. Naseer, S.; Saleem, Y.; Khalid, S.; Bashir, M.K.; Han, J.; Iqbal, M.M.; Han, K. Enhanced Network Anomaly Detection Based on Deep Neural Networks. IEEE Access 2018, 6, 48231–48246.
  35. Li, C.; Qiu, Z.; Cao, X.; Chen, Z.; Gao, H.; Hua, Z. Hybrid Dilated Convolution with Multi-Scale Residual Fusion Network for Hyperspectral Image Classification. Micromachines 2021, 12.
  36. Sundaram, S.; Zeid, A. Artificial Intelligence-Based Smart Quality Inspection for Manufacturing. Micromachines 2023, 14.
  37. Marey, A.; Marey, M.; Mostafa, H. Novel Deep-Learning Modulation Recognition Algorithm Using 2D Histograms over Wireless Communications Channels. Micromachines 2022, 13.
  38. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Networks 2015, 61, 85–117.
  39. Devaraj, J.; Ganesan, S.; Elavarasan, R.M.; Subramaniam, U. A Novel Deep Learning Based Model for Tropical Intensity Estimation and Post-Disaster Management of Hurricanes. Applied Sciences 2021, 11.
  40. Nguyen, H.D.; Cai, R.; Zhao, H.; Kot, A.C.; Wen, B. Towards More Efficient Security Inspection via Deep Learning: A Task-Driven X-ray Image Cropping Scheme. Micromachines 2022, 13.
  41. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
  42. Huang, S.; Wang, L. MOSFET Physics-Based Compact Model Mass-Produced: An Artificial Neural Network Approach. Micromachines 2023, 14.
  43. Nagata, E.A.; Ferreira, D.D.; Bollen, M.H.; Barbosa, B.H.; Ribeiro, E.G.; Duque, C.A.; Ribeiro, P.F. Real-time voltage sag detection and classification for power quality diagnostics. Measurement 2020, 164, 108097.
Figure 1. Smart microgrid architecture.
Figure 2. Comparison of architectures between deep learning supervised learning and deep learning multitasking learning.
Figure 3. Confusion Matrix Example.
Figure 4. Methodology for the analysis and classification of multiple electrical disturbances.
Figure 10. Model training performance evaluation left) Precision of Multitasking Deep Neural Network-Classification. right) Loss of Multitasking Deep Neural Network-Classification.
Figure 11. Model training performance evaluation: left) test data segment confusion matrix, part 1; right) test data segment confusion matrix, part 2.
Figure 12. Model training performance evaluation left) Test confusion matrix in photovoltaic power generating system-Part 1. right) Test confusion matrix in photovoltaic power generating system-Part 2.
Figure 13. Model training performance evaluation Left) Precision curve. Right) Loss Learning curve.
Figure 14. Analysis of the per-unit voltage of the electrical disturbance: left) voltage p.u. over 0.12 seconds without noise; right) voltage p.u. over 0.12 seconds with noise.
Figure 15. Analysis of the crest factor of the electrical disturbance: left) without noise; right) with noise.
Table 1. Comparison of methodologies in multiple electrical disturbances.
Paper | Year | Feature extraction methodology | Signal classification methodology
[11] | 2018 | Initial layers in the neural network | Deep neural network
[12] | 2019 | One-dimensional convolutional, pooling, and batch normalization layers to capture multi-scale features | Closed-loop deep-learning method
[13] | 2019 | Empirical Wavelet Transform-based adaptive filtering technique | Multiclass Support Vector Machine (SVM)
[14] | 2019 | Root Mean Square, Skewness, Range, Kurtosis | Improved Principal Component Analysis (IPCA) and 1-Dimensional Convolution Neural Network (1-D-CNN)
[15] | 2020 | Initial layers in the neural network | Long Short Term Memory (LSTM), Convolutional Neural Networks (CNN), and Convolutional Neural Networks Long Short Term Memory (CNN-LSTM)
[16] | 2020 | Global power quality indices (GPQIs) | Cluster analysis (CA)
[17] | 2021 | Stockwell Transform | Extreme Learning Machine (ELM)
[18] | 2021 | Discrete Wavelet Transform | Model assembled with Logistic Regression (LR), Naïve Bayes, and J48 decision tree
[19] | 2021 | Discrete Wavelet Transform and adaptive salp swarm algorithm | Probabilistic neural network
[20] | 2022 | Hilbert Transform and Wavelet Transform | Support Vector Machine
[21] | 2022 | Discrete Wavelet Transform | Artificial Neural Network
Table 2. Performance indices for data prediction.
Performance index | Formula | Meaning of symbols
Mean Absolute Error (MAE) | MAE = (1/N) Σ_{i=1}^{N} |y_i − y_p| | y_p is the predicted value, y_i is the real value, N is the total number of data points
Mean Absolute Percentage Error (MAPE) | MAPE = (1/N) Σ_{i=1}^{N} 100 |y_i − y_p| / y_i | y_p is the predicted value, y_i is the real value, N is the total number of data points
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.