Preprint Article

Adapt the Parameters of RBF Networks Using Grammatical Evolution

Submitted: 21 September 2023; Posted: 22 September 2023

Abstract
RBF networks are used in a variety of real-world applications, such as medical data analysis or signal processing problems. The success of these parametric models lies in the successful adaptation of their parameters through efficient computational techniques. In the current work, a method for adjusting the parameters of these networks using Grammatical Evolution is presented. Grammatical Evolution is used to discover the most promising range of parameter values, and the training of the parameter set is then carried out by a Genetic Algorithm. The new method was applied to a wide range of data fitting and classification problems, and the results were more than promising.
Keywords: 
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Many practical problems of the modern world can be considered either as data fitting problems or as classification problems; examples include problems from physics [1,2], chemistry [3,4], economics [5,6], medicine [7,8], etc. A machine learning tool commonly used to handle such problems is the Radial Basis Function (RBF) artificial neural network [9,10]. Usually, an RBF network is expressed by the following equation:
$$y(x) = \sum_{i=1}^{k} w_i \, \phi\left( \left\| x - c_i \right\| \right) \qquad (1)$$
where the symbols in the equation are defined as follows:
  • The vector $x$ is the input pattern of the objective problem. For the rest of this paper, the notation $d$ will be used to represent the number of elements of $x$.
  • The parameter $k$ denotes the number of weights used to train the RBF network, and the associated vector of weights is denoted as $w$.
  • The vectors $c_i,\ i = 1, \dots, k$ stand for the so-called centers.
  • The outcome $y(x)$ of the equation stands for the estimated value of the network for the input pattern $x$.
The function $\phi(x)$ is usually the Gaussian function, given by:
$$\phi(x) = \exp\left( -\frac{\left\| x - c \right\|^{2}}{\sigma^{2}} \right) \qquad (2)$$
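To make equations (1) and (2) concrete, the following C++ fragment is a minimal sketch of the network output computation; it is not the paper's implementation, and the names gaussian and rbfOutput are illustrative assumptions.

#include <cmath>
#include <vector>

// phi of equation (2): Gaussian kernel with center c and width sigma.
double gaussian(const std::vector<double> &x, const std::vector<double> &c, double sigma)
{
    double dist2 = 0.0;
    for (size_t j = 0; j < x.size(); j++)
        dist2 += (x[j] - c[j]) * (x[j] - c[j]);   // squared Euclidean distance ||x - c||^2
    return std::exp(-dist2 / (sigma * sigma));
}

// y(x) of equation (1): weighted sum of the k Gaussian processing units.
double rbfOutput(const std::vector<double> &x,
                 const std::vector<std::vector<double>> &center,
                 const std::vector<double> &sigma,
                 const std::vector<double> &weight)
{
    double y = 0.0;
    for (size_t i = 0; i < weight.size(); i++)
        y += weight[i] * gaussian(x, center[i], sigma[i]);
    return y;
}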
RBF networks have been used in many cases, such as problems from physics [11,12,13,14], the solution of differential equations [15,16,17], robotics [18,19], face recognition [20], digital communications [21,22], chemistry problems [23,24], economic problems [25,26,27], network security problems [28,29], etc. Recently, a variety of papers have appeared proposing novel initialization techniques for the network parameters [30,31,32]. Benoudjit et al. [33] discuss the effect of kernel widths on RBF networks, and Neruda et al. [34] present a comparison of some learning methods for RBF networks. Additionally, a variety of pruning techniques [35,36,37] have been proposed to reduce the number of required parameters of RBF networks. Because RBF networks are widely used and their effective training often requires considerable computing time, a series of techniques exploiting parallel computing units has been proposed in recent years [38,39] to adjust the parameters of neural networks.
The current work proposes a two-phase method for the effective adjustment of the parameters of RBF networks, in order to minimize the so-called training error given by:
$$E\left( y(x,g) \right) = \sum_{i=1}^{m} \left( y\left(x_i, g\right) - t_i \right)^{2} \qquad (3)$$
where the parameter $m$ denotes the number of input patterns, the values $t_i$ represent the expected output for the input pattern $x_i$, and the vector $g$ represents the parameter set of the RBF network; a sketch of the error computation is given below. During the first phase, an attempt is made to bound the parameter values to intervals in which the training error of equation (3) is likely to be significantly reduced. The identification of the most promising intervals for the parameters is performed using a technique that utilizes Grammatical Evolution [40]. First, an estimate of an interval of values for the network parameters is made using the K-means algorithm [41]. Then, with the help of Grammatical Evolution, a series of division rules is applied to the initial interval of values in order to find a range of values that significantly reduces the training error. During the second phase, the parameters of the RBF network are trained within the optimal range found in the first phase using some global optimization method [42,43]. In the proposed approach, the widely used genetic algorithm [44,45,46] was employed for the second phase of the process.
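Reusing the rbfOutput() helper sketched above, the training error of equation (3) can be illustrated as follows; the function name and signature are again assumptions made for this sketch.

#include <vector>

// E(y(x,g)) of equation (3): sum of squared errors over the m training patterns.
double trainError(const std::vector<std::vector<double>> &pattern,
                  const std::vector<double> &target,
                  const std::vector<std::vector<double>> &center,
                  const std::vector<double> &sigma,
                  const std::vector<double> &weight)
{
    double error = 0.0;
    for (size_t i = 0; i < pattern.size(); i++) {
        double diff = rbfOutput(pattern[i], center, sigma, weight) - target[i];
        error += diff * diff;   // (y(x_i, g) - t_i)^2
    }
    return error;
}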
The rest of this paper is divided into the following sections: in Section 2 the proposed method is fully described, in Section 3 the datasets used in the experiments are listed together with the experimental results, and finally in Section 4 some conclusions are provided.

2. Method description

This section begins with a detailed description of the Grammatical Evolution technique and of the grammar used to generate partition rules for the parameter set of the RBF networks. Subsequently, the first phase of the proposed methodology is analyzed extensively, followed by the second phase, in which a Genetic Algorithm is applied to the outcome of the first phase.

2.1. Grammatical Evolution

Grammatical Evolution is a genetic algorithm where the chromosomes encode the production rules of a given BNF (Backus–Naur form) grammar [47]. Grammatical Evolution has been used successfully in a variety of cases, such as function approximation [48,49], the solution of trigonometric equations [50], automatic music composition [51], neural network construction [52,53], the creation of numeric constants [54], video games [55,56], energy demand estimation [57], combinatorial optimization [58], cryptography [59], etc. A BNF grammar can describe the syntax of programming languages and is usually defined as the set $G = (N, T, S, P)$, where:
  • $N$ is the set of the so-called non-terminal symbols. Every non-terminal symbol is associated with a series of production rules used to produce terminal symbols.
  • $T$ is the set of terminal symbols.
  • $S$ is the start symbol of the grammar, with $S \in N$.
  • $P$ is the set of production rules, used to produce terminal symbols from non-terminal symbols. These rules are of the form $A \to a$ or $A \to aB$, where $A, B \in N$ and $a \in T$.
The algorithm starts from the symbol $S$ and gradually creates terminal symbols by replacing non-terminal symbols with the right-hand side of the selected production rule. The rule is selected through the following procedure, sketched in code after the list:
  • Read the next element $V$ from the current chromosome.
  • Select the production rule as: Rule $= V \bmod R$, where $R$ is the total number of production rules for the current non-terminal symbol.
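The fragment below sketches this mapping loop in C++ for a small grammar written in the style of Figure 1. The encoding of the grammar as lists of productions, and the ordering of those productions, are assumptions made here for illustration; the authoritative grammar is the one of Figure 1.

#include <map>
#include <string>
#include <vector>

int main()
{
    // Illustrative productions in the style of Figure 1; the rule ordering is assumed.
    std::map<std::string, std::vector<std::string>> rule = {
        {"<expr>",  {"(<xlist>,<digit>,<digit>)", "<expr>,<expr>"}},
        {"<xlist>", {"x1", "x2", "x3", "x4", "x5", "x6", "x7", "x8"}},
        {"<digit>", {"0", "1"}}
    };
    std::vector<int> chromosome = {9, 8, 6, 4, 15, 9, 16, 23, 8};
    std::string program = "<expr>";                  // start from the symbol S
    size_t gene = 0;
    while (gene < chromosome.size()) {
        size_t start = program.find('<');            // locate the leftmost non-terminal
        if (start == std::string::npos) break;       // only terminal symbols remain
        size_t stop = program.find('>', start);
        std::string nonTerminal = program.substr(start, stop - start + 1);
        const std::vector<std::string> &rhs = rule[nonTerminal];
        int selected = chromosome[gene++] % (int)rhs.size();   // Rule = V mod R
        program.replace(start, nonTerminal.size(), rhs[selected]);
    }
    // program now holds a (possibly partial) partition program such as (x7,0,1),(x1,1,0)
    return 0;
}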
The BNF grammar used in this work is presented in Figure 1. The symbols enclosed in <> denote the non-terminal symbols of the grammar. The numbers in parentheses in the right part of the grammar indicate production rule sequence numbers. The number n is the total number of parameters of the problem. In the case of this paper it is the total number of parameters of the RBF network. For the current work, the number n can be computed using the following formula:
n = ( d + 2 ) × k
The number n is computed as follows:
  • For every center $c_i,\ i = 1, \dots, k$ there are $d$ variables. Hence, the total number of parameters required by the centers is $d \times k$.
  • Every Gaussian unit requires an additional parameter $\sigma_i,\ i = 1, \dots, k$, which means $k$ more parameters.
  • The weight vector $w$ used in the output contributes $k$ parameters.
As an example of the production procedure, consider the chromosome $x = \left[9, 8, 6, 4, 15, 9, 16, 23, 8\right]$ with $d = 2$, $k = 2$, $n = 8$. The steps used to produce the final program $p_{\mathrm{test}} = (x_7, 0, 1), (x_1, 1, 0)$ are outlined in Table 1. Every partition program consists of a series of partition rules, and each partition rule contains three elements:
  • The variable whose original interval will be partitioned, for example $x_7$.
  • An integer flag with value 0 or 1 for the left end of the value interval. If this value is 1, then the left end of the corresponding variable's interval is divided by two; otherwise no change is made.
  • An integer flag with value 0 or 1 for the right end of the value interval. If this value is 1, then the right end of the corresponding variable's interval is divided by two; otherwise no change is made.
Hence, for the example program $p_{\mathrm{test}}$, the two partition rules will divide the right end of the interval of $x_7$ and the left end of the interval of $x_1$; a sketch of the rule application follows.
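A single partition rule can then be applied to the current bounds as in the short sketch below; the zero-based array layout and the function name are assumptions of this illustration.

#include <vector>

// Apply one partition rule (x_v, leftFlag, rightFlag) to the bound vectors L, R:
// a flag equal to 1 divides the corresponding end of the interval of x_v by two.
void applyRule(int v, int leftFlag, int rightFlag,
               std::vector<double> &L, std::vector<double> &R)
{
    if (leftFlag == 1)
        L[v - 1] /= 2.0;   // halve the left end of the interval of x_v
    if (rightFlag == 1)
        R[v - 1] /= 2.0;   // halve the right end of the interval of x_v
}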

2.2. The first phase of the proposed algorithm

The first step of the first phase of the proposed method is to initialize the bounds of the RBF network. For this initialization, the K-means algorithm [41] is used, which is also employed in the traditional RBF network training technique. A description of this algorithm as a series of steps is shown in Algorithm 1.
Algorithm 1: The K-means algorithm.
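A minimal sketch of the standard K-means procedure of Algorithm 1 might look as follows; the initialization and the stopping test are assumptions of this sketch, and the variances $\sigma_i$ are then computed from the spread of each team around its center.

#include <vector>

using Point = std::vector<double>;

static double dist2(const Point &a, const Point &b)
{
    double s = 0.0;
    for (size_t j = 0; j < a.size(); j++)
        s += (a[j] - b[j]) * (a[j] - b[j]);
    return s;
}

// Standard K-means: returns the k centers c_i for the given data points.
std::vector<Point> kmeans(const std::vector<Point> &data, int k, int maxIters = 100)
{
    std::vector<Point> center(data.begin(), data.begin() + k);  // naive initialization
    std::vector<int> team(data.size(), -1);
    for (int it = 0; it < maxIters; it++) {
        bool changed = false;
        for (size_t i = 0; i < data.size(); i++) {              // assignment step
            int best = 0;
            for (int c = 1; c < k; c++)
                if (dist2(data[i], center[c]) < dist2(data[i], center[best]))
                    best = c;
            if (team[i] != best) { team[i] = best; changed = true; }
        }
        for (int c = 0; c < k; c++) {                           // update step: center = mean of its team
            Point mean(data[0].size(), 0.0);
            int count = 0;
            for (size_t i = 0; i < data.size(); i++) {
                if (team[i] != c) continue;
                count++;
                for (size_t j = 0; j < mean.size(); j++)
                    mean[j] += data[i][j];
            }
            if (count > 0) {
                for (size_t j = 0; j < mean.size(); j++)
                    mean[j] /= count;
                center[c] = mean;
            }
        }
        if (!changed) break;                                    // assignments stabilized
    }
    return center;
}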
Having calculated the centers $c_i$ and the corresponding variances $\sigma_i$, the algorithm continues by computing the vectors $L, R$ of dimension $n$ that will be used as the initial bounds of the parameters. These vectors are calculated through the procedure of Algorithm 2.
Algorithm 2: Algorithm to locate the vectors $L, R$.
The bounds for the first $(d+1) \times k$ variables of any given RBF network are taken as a multiple, by the factor $F$, of the values calculated by the K-means algorithm, and the positive constant $B$ is used to initialize the intervals for the weights $w$; a sketch of this bound construction is given below. Afterwards, the following genetic algorithm is executed to locate the most promising vectors $L, R$ for the RBF parameters:
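Since Algorithm 2 defines this construction in detail, the sketch below gives only one plausible reading of it: the first $(d+1)\times k$ bounds are the K-means values scaled by $\pm F$, and the weight bounds are $[-B, B]$. The sign conventions and the parameter layout are assumptions of this sketch.

#include <cmath>
#include <vector>

// Illustrative bound construction; parameters ordered as [centers (d*k) | sigmas (k) | weights (k)].
void makeBounds(const std::vector<double> &kmeansValue,   // the (d+1)*k values c_i, sigma_i
                int d, int k, double F, double B,
                std::vector<double> &L, std::vector<double> &R)
{
    int n = (d + 2) * k;
    L.assign(n, 0.0);
    R.assign(n, 0.0);
    for (int i = 0; i < (d + 1) * k; i++) {               // centers and variances: multiples of F
        L[i] = -F * std::fabs(kmeansValue[i]);
        R[i] =  F * std::fabs(kmeansValue[i]);
    }
    for (int i = (d + 1) * k; i < n; i++) {               // output weights: bounded by the constant B
        L[i] = -B;
        R[i] =  B;
    }
}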
  • Set $N_c$ as the number of chromosomes for the Grammatical Evolution.
  • Set $k$ as the number of weights of the RBF network.
  • Set $N_g$ as the maximum number of allowed generations.
  • Set $p_s$ as the selection rate of the algorithm, with $p_s \le 1$.
  • Set $p_m$ as the mutation rate, with $p_m \le 1$.
  • Set $N_s$ as the number of randomly created RBF networks used in the fitness calculation.
  • Initialize the $N_c$ chromosomes randomly as sets of random numbers.
  • Set $f^* = (\infty, \infty)$ as the fitness of the best chromosome. The fitness $f_g$ of any given chromosome $g$ is considered as an interval $f_g = \left[ f_{g,\mathrm{low}}, f_{g,\mathrm{upper}} \right]$.
  • Set iter = 0.
  • For $i = 1, \dots, N_c$ do:
    (a) Create the partition program $p_i$, using the grammar of Figure 1, for the chromosome $cr_i$.
    (b) Produce the bounds $L_{p_i}, R_{p_i}$ for the partition program $p_i$.
    (c) Set $E_{\min} = \infty$, $E_{\max} = -\infty$.
    (d) For $j = 1, \dots, N_s$ do:
      • Create randomly a set of parameters $g_j \in \left[ L_{p_i}, R_{p_i} \right]$.
      • Calculate the error $E\left(g_j\right) = \sum_{l=1}^{m} \left( y\left(x_l, g_j\right) - t_l \right)^{2}$.
      • If $E\left(g_j\right) \le E_{\min}$ then $E_{\min} = E\left(g_j\right)$.
      • If $E\left(g_j\right) \ge E_{\max}$ then $E_{\max} = E\left(g_j\right)$.
    (e) EndFor
    (f) Set the fitness $f_i = \left[ E_{\min}, E_{\max} \right]$.
  • EndFor
  • Apply the selection procedure. Initially, the chromosomes of the population are sorted according to their fitness values. In order to compare two fitness values $f_a = \left[a_1, a_2\right]$ and $f_b = \left[b_1, b_2\right]$, the $L^*$ operator is used (a code sketch of this comparison is given after the algorithm):
    $$L^*\left(f_a, f_b\right) = \begin{cases} \mathrm{TRUE}, & a_1 < b_1 \ \text{OR} \ \left( a_1 = b_1 \ \text{AND} \ a_2 < b_2 \right) \\ \mathrm{FALSE}, & \text{OTHERWISE} \end{cases}$$
    Hence, the fitness value $f_a$ is considered smaller than $f_b$ if $L^*\left(f_a, f_b\right) = \mathrm{TRUE}$. The first $(1 - p_s) \times N_c$ chromosomes with the smallest fitness values are transferred intact to the next generation; the remaining chromosomes are replaced by offspring created in the crossover procedure.
  • Apply the crossover procedure. The crossover procedure creates $p_s \times N_c$ new chromosomes. For each new offspring, two parents are selected from the population using tournament selection, and for each pair $(z, w)$ of selected parents, two new chromosomes $\tilde{z}$ and $\tilde{w}$ are produced using one-point crossover, shown in Figure 2.
  • Apply the mutation procedure. For each element of every chromosome, a random number $r \in [0, 1]$ is drawn. The corresponding element is altered randomly if $r \le p_m$.
  • Set iter = iter + 1.
  • If $\text{iter} \le N_g$, go to the fitness calculation step; otherwise terminate.
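As referenced above, the interval fitness of a partition program and the $L^*$ comparison might be sketched as follows; the error functional E stands for the training error of equation (3), and all names are illustrative assumptions.

#include <algorithm>
#include <cstdlib>
#include <functional>
#include <vector>

struct IntervalFitness { double low, high; };   // the interval [E_min, E_max]

// L*(fa, fb): TRUE when fa is considered smaller (better) than fb.
bool lessStar(const IntervalFitness &fa, const IntervalFitness &fb)
{
    return (fa.low < fb.low) || (fa.low == fb.low && fa.high < fb.high);
}

// Fitness of a partition program with bounds [L, R]: sample Ns random parameter
// sets uniformly inside the box and record the smallest and largest training error.
IntervalFitness intervalFitness(const std::vector<double> &L, const std::vector<double> &R,
                                int Ns, const std::function<double(const std::vector<double> &)> &E)
{
    IntervalFitness f{1e300, -1e300};
    for (int s = 0; s < Ns; s++) {
        std::vector<double> g(L.size());
        for (size_t i = 0; i < g.size(); i++)
            g[i] = L[i] + (R[i] - L[i]) * drand48();   // g_j sampled in [L_p, R_p]
        double e = E(g);                               // training error of equation (3)
        f.low  = std::min(f.low, e);
        f.high = std::max(f.high, e);
    }
    return f;
}

Sorting the population before selection then reduces to std::sort(fitness.begin(), fitness.end(), lessStar).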

2.3. The second phase of the proposed algorithm

The second phase utilizes a genetic algorithm to optimize the parameters of the RBF network within the best interval returned by the first phase of the method. The layout of each chromosome is shown in Figure 3.
  • Initialization step:
    (a) Set $N_c$ as the number of chromosomes.
    (b) Set $N_g$ as the maximum number of allowed generations.
    (c) Set $k$ as the number of weights of the RBF network.
    (d) Get the best interval $S$ produced by the first phase of Section 2.2.
    (e) Initialize the $N_c$ chromosomes randomly in $S$.
    (f) Set $p_s$ as the selection rate of the algorithm, with $p_s \le 1$.
    (g) Set $p_m$ as the mutation rate, with $p_m \le 1$.
    (h) Set iter = 0.
  • Fitness calculation step:
    (a) For $i = 1, \dots, N_c$ do:
      • Calculate the fitness $f_i$ of chromosome $g_i$ as $f_i = \sum_{j=1}^{m} \left( y\left(x_j, g_i\right) - t_j \right)^{2}$.
    (b) EndFor
  • Genetic operations step:
    (a) Selection procedure: the chromosomes are sorted according to their fitness values. The $(1 - p_s) \times N_c$ chromosomes with the lowest fitness values are transferred intact to the next generation. The remaining chromosomes are substituted by offspring created in the crossover procedure, for which two parents are selected from the population using tournament selection.
    (b) Crossover procedure: for every pair $(z, w)$ of selected parents, two additional chromosomes $\tilde{z}$ and $\tilde{w}$ are produced using the following equations (a code sketch is given after the algorithm):
    $$\tilde{z}_i = a_i z_i + \left(1 - a_i\right) w_i$$
    $$\tilde{w}_i = a_i w_i + \left(1 - a_i\right) z_i$$
    The value $a_i$ is a random number with the property $a_i \in [-0.5, 1.5]$ [60].
    (c) Mutation procedure: for each element of every chromosome, a random number $r \in [0, 1]$ is drawn. The corresponding element is altered randomly if $r \le p_m$.
  • Termination check step:
    (a) Set iter = iter + 1.
    (b) If $\text{iter} \le N_g$, go to the fitness calculation step; otherwise terminate.
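The arithmetic crossover referenced above can be sketched as follows, assuming $a_i$ is drawn uniformly from $[-0.5, 1.5]$ as in the integrated crossover of Kaelo and Ali [60]; drand48() is used here since the experiments rely on it.

#include <cstdlib>
#include <vector>

// Produce two offspring from parents z and w; a_i is redrawn for every position i.
void crossover(const std::vector<double> &z, const std::vector<double> &w,
               std::vector<double> &zTilde, std::vector<double> &wTilde)
{
    zTilde.resize(z.size());
    wTilde.resize(w.size());
    for (size_t i = 0; i < z.size(); i++) {
        double a = -0.5 + 2.0 * drand48();   // assumption: a_i uniform in [-0.5, 1.5]
        zTilde[i] = a * z[i] + (1.0 - a) * w[i];
        wTilde[i] = a * w[i] + (1.0 - a) * z[i];
    }
}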

3. Experiments

The suggested method was tested on a series of classification and regression problems from the relevant literature and was compared against some other well-known machine learning models. The datasets were obtained from publicly available repositories, such as the KEEL dataset repository [61].

3.1. Experimental datasets

The classification datasets were the following:
  • Appendicitis dataset, a medical dataset proposed in [62].
  • Australian dataset [63], a dataset related to economic data.
  • Balance dataset [64], which is used to predict psychological states.
  • Cleveland dataset, which is a medical dataset related to heart diseases [65,66].
  • Dermatology dataset [67], a medical dataset.
  • Hayes-Roth dataset [68].
  • Heart dataset [69], a medical dataset related to heart diseases.
  • HouseVotes dataset [70], related to Congressional voting records.
  • Ionosphere dataset, used for classification of radar returns from the ionosphere [71,72].
  • Liverdisorder dataset [73], a medical dataset.
  • Mammographic dataset [74], used to identify breast tumors.
  • Parkinsons dataset, a medical dataset related to Parkinson's disease [75].
  • Pima dataset, a medical dataset[76].
  • Popfailures dataset [77], a dataset related to climate measurements.
  • Spiral dataset, an artificial dataset with 2 features and two classes. The patterns of the first class are produced according to the equations $x_1 = 0.5 t \cos(0.08 t)$, $x_2 = 0.5 t \cos\left(0.08 t + \frac{\pi}{2}\right)$, and the patterns of the second class using $x_1 = 0.5 t \cos(0.08 t + \pi)$, $x_2 = 0.5 t \cos\left(0.08 t + \frac{3\pi}{2}\right)$.
  • Regions2 dataset [78].
  • Saheart dataset [79], a medical dataset about heart diseases.
  • Segment dataset [80], an image processing dataset.
  • Wdbc dataset [81], used to identify breast tumors.
  • Wine dataset, used to classify wines [82,83].
  • Eeg dataset, a medical dataset of EEG measurements [84]. The derived datasets used here are denoted as Z_F_S, ZONF_S and ZO_NF_S.
  • Zoo dataset [85], used to classify animals.
The following regression datasets were used in the experiments:
  • Abalone dataset [86].
  • Airfoil dataset, a dataset derived from NASA [87].
  • Baseball dataset, a dataset containing data from baseball games.
  • BK dataset [88], used to predict the points in a basketball game.
  • BL dataset, an electrical engineering dataset.
  • Concrete dataset, related to civil engineering[89].
  • Dee dataset, used to predict the energy consumption.
  • Diabetes dataset, a medical dataset.
  • FA dataset, related to fat measurements.
  • Housing dataset, provided in [90].
  • MB dataset [91].
  • MORTGAGE dataset, which contains economic data.
  • NT dataset[92].
  • PY dataset[93].
  • Quake dataset, used to predict earthquakes [94].
  • Treasury dataset, which contains data about the economy.
  • Wankara dataset, a dataset used for climate measurements.

3.2. Experimental results

The RBF network used was coded in ANSI C++ using the freely available Armadillo library [95]. The optimization methods used were also freely available from the OPTIMUS computing environment, downloaded from https://github.com/itsoulos/OPTIMUS/ (accessed on 9 September 2023). To validate the results, the 10-fold cross-validation technique was used on all datasets. The experiments were conducted 30 times for every dataset, using a different seed for the random generator each time; the drand48() random function of the C programming language was employed. The average classification error is reported for the classification datasets and the average mean test error for the regression datasets. The machine used in the experiments was an AMD Ryzen 5950X with 128GB of RAM, running the Debian Linux operating system. The values used for the parameters of the algorithms are shown in Table 2. The results obtained for the classification datasets are shown in Table 3, and those for the regression datasets are listed in Table 4.
The following applies to the results tables:
  • The column NN-RPROP represents an artificial neural network [96,97] with 10 hidden nodes, trained with the Rprop method [98].
  • The column RBF-KMEANS represents the original two-phase training method for RBF networks, where in the first phase the centers and variances are estimated through the K-means algorithm, and in the second phase the output weights are calculated by solving a linear system of equations.
  • The column GENRBF stands for the RBF training method introduced in [99].
  • The column PROPOSED represents the results obtained by the proposed method.
  • An extra line was also added to the experimental tables under the title AVERAGE. This line represents the average classification or regression error for all datasets.
On average, the proposed technique appears to be 30-40% more accurate than the next best method, and in many cases this percentage exceeds 70%. Moreover, in the vast majority of problems, the proposed technique significantly outperforms the next best available method in terms of test error. However, the proposed technique consists of two stages, and a genetic algorithm must be executed in each of them. This means that it is significantly slower in computing time than the remaining techniques and, of course, needs more computing resources. Since genetic algorithms are involved, however, the required training time could be significantly reduced by parallel techniques that take advantage of modern parallel computing structures, such as the MPI interface [100] or the OpenMP library [101]. The superiority of the proposed technique is also reinforced by the statistical tests carried out on the experimental results, presented in Figure 4 and Figure 5.
In addition, a further set of experiments was performed on the classification data, in which the critical parameter F took the values 3, 5 and 10. The aim of this set of experiments was to establish the sensitivity of the proposed technique to changes in its parameters. The experimental results are presented in Table 5, and a statistical test on the results is presented in Figure 6. The results and the statistical test indicate that there is no significant difference in the efficiency of the method for different values of the critical parameter F.

4. Conclusions

A two-phase method to train RBF neural networks was presented in this work. In the first phase, using Grammatical Evolution, the range of values of the neural network parameters is partitioned, so as to find a promising interval that may contain low values of the training error. In the second phase, the neural network parameters are trained within the best range of values found in the first phase, using a Genetic Algorithm. The proposed method was applied to a wide series of well-known datasets from the relevant literature and was tested against a series of machine learning models. The comparison of the results and their statistical processing clearly showed that the proposed technique outperforms the others with which it was compared, on both classification and regression datasets. Future improvements of the proposed method may include:
  • Application of the proposed method to other types of artificial neural networks.
  • Implementation of crossover and mutation techniques that focus more on the existing interval construction technique for the model parameters.
  • Incorporation of parallel programming techniques to speed up the method.

Author Contributions

I.G.T., A.T. and E.K. conceived the idea and methodology and supervised the technical part regarding the software. I.G.T. conducted the experiments, employing several datasets, and provided the comparative experiments. A.T. performed the statistical analysis. E.K. and all other authors prepared the manuscript. E.K. and I.G.T. organized the research team and A.T. supervised the project. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.


Acknowledgments

The experiments of this research work were performed at the high performance computing system established at Knowledge and Intelligent Computing Laboratory, Department of Informatics and Telecommunications, University of Ioannina, acquired with the project “Educational Laboratory equipment of TEI of Epirus” with MIS 5007094 funded by the Operational Programme “Epirus” 2014–2020, by ERDF and national funds.

Conflicts of Interest

The authors declare no conflict of interest.

Sample Availability

Not applicable.

References

  1. Mjahed, M. The use of clustering techniques for the classification of high energy physics data. Nucl. Instruments Methods Phys. Res. Sect. A: Accel. Spectrometers, Detect. Assoc. Equip. 2006, 559, 199–202, . [CrossRef]
  2. M Andrews, M Paulini, S Gleyzer, B Poczos, End-to-End Event Classification of High-Energy Physics Data, Journal of Physics: Conference Series 1085, 2018. [CrossRef]
  3. He, P.; Xu, C.-J.; Liang, Y.-Z.; Fang, K.-T. Improving the classification accuracy in chemistry via boosting technique. Chemom. Intell. Lab. Syst. 2004, 70, 39–46, . [CrossRef]
  4. Aguiar, J.A.; Gong, M.L.; Tasdizen, T. Crystallographic prediction from diffraction and chemistry data for higher throughput classification using machine learning. Comput. Mater. Sci. 2019, 173, 109409, . [CrossRef]
  5. Kaastra, I.; Boyd, M. Designing a neural network for forecasting financial and economic time series. Neurocomputing 1996, 10, 215–236, . [CrossRef]
  6. Hafezi, R.; Shahrabi, J.; Hadavandi, E. A bat-neural network multi-agent system (BNNMAS) for stock price prediction: Case study of DAX stock price. Appl. Soft Comput. 2015, 29, 196–210, . [CrossRef]
  7. Yadav, S.S.; Jadhav, S.M. Deep convolutional neural network based medical image classification for disease diagnosis. J. Big Data 2019, 6, 1–18, . [CrossRef]
  8. Qing, L.; Linhong, W.; Xuehai, D. A Novel Neural Network-Based Method for Medical Text Classification. Futur. Internet 2019, 11, 255, . [CrossRef]
  9. Park, J.; Sandberg, I.W. Universal Approximation Using Radial-Basis-Function Networks. Neural Comput. 1991, 3, 246–257, . [CrossRef]
  10. G.A. Montazer, D. Giveki, M. Karami, H. Rastegar, Radial basis function neural networks: A review. Comput. Rev. J 1, pp. 52-74, 2018.
  11. Teng, P. Machine-learning quantum mechanics: Solving quantum mechanics problems using radial basis function networks. Phys. Rev. E 2018, 98, 033305, . [CrossRef]
  12. R. Jovanovi´c, A. Sretenovic, Ensemble of radial basis neural networks with K-means clustering for heating energy consumption prediction, FME Transactions 45, pp. 51-57, 2017.
  13. Gorbachenko, V.I.; Zhukov, M.V. Solving boundary value problems of mathematical physics using radial basis function networks. Comput. Math. Math. Phys. 2017, 57, 145–155, . [CrossRef]
  14. Määttä, J.; Bazaliy, V.; Kimari, J.; Djurabekova, F.; Nordlund, K.; Roos, T. Gradient-based training and pruning of radial basis function networks with an application in materials physics. Neural Networks 2020, 133, 123–131, . [CrossRef]
  15. Mai-Duy, N.; Tran-Cong, T. Numerical solution of differential equations using multiquadric radial basis function networks. Neural Networks 2001, 14, 185–199, . [CrossRef]
  16. Mai-Duy, N. Solving high order ordinary differential equations with radial basis function networks. Int. J. Numer. Methods Eng. 2005, 62, 824–852, . [CrossRef]
  17. Sarra, S.A. Adaptive radial basis function methods for time dependent partial differential equations. Appl. Numer. Math. 2005, 54, 79–94, . [CrossRef]
  18. R. -J. Lian, Adaptive Self-Organizing Fuzzy Sliding-Mode Radial Basis-Function Neural-Network Controller for Robotic Systems, IEEE Transactions on Industrial Electronics 61, pp. 1493-1503, 2014.
  19. Vijay, M.; Jena, D. Backstepping terminal sliding mode control of robot manipulator using radial basis functional neural networks. Comput. Electr. Eng. 2018, 67, 690–707, . [CrossRef]
  20. M.J. Er, S. Wu, J. Lu, H.L. Toh, Face recognition with radial basis function (RBF) neural networks, IEEE Transactions on Neural Networks 13, pp. 697-710, 2002.
  21. Laoudias, C.; Kemppi, P.; Panayiotou, C.G. Localization Using Radial Basis Function Networks and Signal Strength Fingerprints in WLAN. In Proceedings of GLOBECOM 2009 - 2009 IEEE Global Telecommunications Conference, Honolulu, HI, USA, 2009; pp. 1–6.
  22. M. Azarbad, S. Hakimi, A. Ebrahimzadeh, Automatic recognition of digital communication signal, International journal of energy, information and communications 3, pp. 21-33, 2012.
  23. Yu, D.; Gomm, J.; Williams, D. Sensor fault diagnosis in a chemical process via RBF neural networks. Control. Eng. Pr. 1999, 7, 49–55, . [CrossRef]
  24. Shankar, V.; Wright, G.B.; Fogelson, A.L.; Kirby, R.M. A radial basis function (RBF) finite difference method for the simulation of reaction-diffusion equations on stationary platelets within the augmented forcing method. Int. J. Numer. Methods Fluids 2014, 75, 1–22, . [CrossRef]
  25. Shen, W.; Guo, X.; Wu, C.; Wu, D. Forecasting stock indices using radial basis function neural networks optimized by artificial fish swarm algorithm. Knowledge-Based Syst. 2011, 24, 378–385, . [CrossRef]
  26. J. A. Momoh, S. S. Reddy, Combined Economic and Emission Dispatch using Radial Basis Function, 2014 IEEE PES General Meeting | Conference & Exposition, National Harbor, MD, pp. 1-5, 2014.
  27. Sohrabi, P.; Shokri, B.J.; Dehghani, H. Predicting coal price using time series methods and combination of radial basis function (RBF) neural network with time series. Miner. Econ. 2021, 36, 207–216, . [CrossRef]
  28. Ravale, U.; Marathe, N.; Padiya, P. Feature Selection Based Hybrid Anomaly Intrusion Detection System Using K Means and RBF Kernel Function. Procedia Comput. Sci. 2015, 45, 428–435, . [CrossRef]
  29. Lopez-Martin, M.; Sanchez-Esguevillas, A.; Arribas, J.I.; Carro, B. Network Intrusion Detection Based on Extended RBF Neural Network With Offline Reinforcement Learning. IEEE Access 2021, 9, 153153–153170, . [CrossRef]
  30. Kuncheva, L.I. Initializing of an RBF network by a genetic algorithm. Neurocomputing 1997, 14, 273–288, . [CrossRef]
  31. F. Ros, M. Pintore, A. Deman, J.R. Chrétien, Automatical initialization of RBF neural networks, Chemometrics and Intelligent Laboratory Systems 87, pp. 26-32, 2007.
  32. Wang, D.; Zeng, X.-J.; Keane, J.A. A clustering algorithm for radial basis function neural network initialization. Neurocomputing 2012, 77, 144–155, . [CrossRef]
  33. N. Benoudjit, M. Verleysen, On the KernelWidths in Radial-Basis Function Networks, Neural Processing Letters 18, pp. 139–154, 2003.
  34. Neruda, R.; Kudová, P. Learning methods for radial basis function networks. Futur. Gener. Comput. Syst. 2005, 21, 1131–1142, . [CrossRef]
  35. Ricci, E.; Perfetti, R. Improved pruning strategy for radial basis function networks with dynamic decay adjustment. Neurocomputing 2006, 69, 1728–1732, . [CrossRef]
  36. Guang-Bin Huang, P. Saratchandran and N. Sundararajan, A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation, IEEE Transactions on Neural Networks 16, pp. 57-67, 2005.
  37. Bortman, M.; Aladjem, M. A growing and pruning method for radial basis function networks. IEEE Trans. Neural Networks 2009, 20, 1039–1045, doi:10.1109/TNN.2009.2019270.
  38. Yokota, R.; Barba, L.; Knepley, M.G. PetRBF — A parallel O(N) algorithm for radial basis function interpolation with Gaussians. Comput. Methods Appl. Mech. Eng. 2010, 199, 1793–1804, . [CrossRef]
  39. Lu, C.; Ma, N.; Wang, Z. Fault detection for hydraulic pump based on chaotic parallel RBF network. EURASIP J. Adv. Signal Process. 2011, 2011, 49, . [CrossRef]
  40. M. O’Neill, C. Ryan, Grammatical evolution, IEEE Trans. Evol. Comput. 5,pp. 349–358, 2001.
  41. J. MacQueen, Some methods for classification and analysis of multivariate observations, in: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Vol. 1, No. 14, pp. 281-297, 1967.
  42. H.Q.Wang, D.S. Huang, B.Wang, Optimisation of radial basis function classifiers using simulated annealing algorithm for cancer classification. electronics letters 41, pp. 630-632, 2005.
  43. Fathi, V.; Montazer, G.A. An improvement in RBF learning algorithm based on PSO for real time applications. Neurocomputing 2013, 111, 169–176, . [CrossRef]
  44. D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley Publishing Company, Reading, Massachussets, 1989.
  45. Z. Michaelewicz, Genetic Algorithms + Data Structures = Evolution Programs. Springer - Verlag, Berlin, 1996.
  46. Grady, S.; Hussaini, M.; Abdullah, M. Placement of wind turbines using genetic algorithms. Renew. Energy 2005, 30, 259–270, . [CrossRef]
  47. J. W. Backus. The Syntax and Semantics of the Proposed International Algebraic Language of the Zurich ACM-GAMM Conference. Proceedings of the International Conference on Information Processing, UNESCO, 1959, pp.125-132.
  48. C. Ryan, J. Collins, M. O’Neill, Grammatical evolution: Evolving programs for an arbitrary language. In: Banzhaf, W., Poli, R., Schoenauer, M., Fogarty, T.C. (eds) Genetic Programming. EuroGP 1998. Lecture Notes in Computer Science, vol 1391. Springer, Berlin, Heidelberg, 1998.
  49. M. O’Neill, M., C. Ryan, Evolving Multi-line Compilable C Programs. In: Poli, R., Nordin, P., Langdon, W.B., Fogarty, T.C. (eds) Genetic Programming. EuroGP 1999. Lecture Notes in Computer Science, vol 1598. Springer, Berlin, Heidelberg, 1999.
  50. C. Ryan, M. O’Neill, J.J. Collins, Grammatical evolution: Solving trigonometric identities, proceedings of Mendel. Vol. 98. 1998.
  51. A.O. Puente, R. S. Alfonso, M. A. Moreno, Automatic composition of music by means of grammatical evolution, In: APL ’02: Proceedings of the 2002 conference on APL: array processing languages: lore, problems, and applications July 2002 Pages 148–155.
  52. Lídio Mauro Limade Campo, R. Célio Limã Oliveira,Mauro Roisenberg, Optimization of neural networks through grammatical evolution and a genetic algorithm, Expert Systems with Applications 56, pp. 368-384, 2016.
  53. K. Soltanian, A. Ebnenasir, M. Afsharchi, Modular Grammatical Evolution for the Generation of Artificial Neural Networks, Evolutionary Computation 30, pp 291–327, 2022.
  54. Dempsey, M.O’ Neill, A. Brabazon, Constant creation in grammatical evolution, International Journal of Innovative Computing and Applications 1 , pp 23–38, 2007.
  55. E. Galván-López, J.M. Swafford, M. O’Neill, A. Brabazon, Evolving a Ms. PacMan Controller Using Grammatical Evolution. In: , et al. Applications of Evolutionary Computation. EvoApplications 2010. Lecture Notes in Computer Science, vol 6024. Springer, Berlin, Heidelberg, 2010.
  56. N. Shaker, M. Nicolau, G. N. Yannakakis, J. Togelius, M. O’Neill, Evolving levels for Super Mario Bros using grammatical evolution, 2012 IEEE Conference on Computational Intelligence and Games (CIG), 2012, pp. 304-31.
  57. Martínez-Rodríguez, D.; Colmenar, J.M.; Hidalgo, J.I.; Micó, R.V.; Salcedo-Sanz, S. Particle swarm grammatical evolution for energy demand estimation. Energy Sci. Eng. 2020, 8, 1068–1079, . [CrossRef]
  58. Sabar, N.R.; Ayob, M.; Kendall, G.; Qu, R. Grammatical Evolution Hyper-Heuristic for Combinatorial Optimization Problems. IEEE Transactions on Evolutionary Computation 2013, 17, 840–861, . [CrossRef]
  59. Ryan, C.; Kshirsagar, M.; Vaidya, G.; Cunningham, A.; Sivaraman, R. Design of a cryptographically secure pseudo random number generator with grammatical evolution. Sci. Rep. 2022, 12, 1–10, . [CrossRef]
  60. Kaelo, P.; Ali, M. Integrated crossover rules in real coded genetic algorithms. Eur. J. Oper. Res. 2007, 176, 60–76, . [CrossRef]
  61. J. Alcalá-Fdez, A. Fernandez, J. Luengo, J. Derrac, S. García, L. Sánchez, F. Herrera. KEEL Data-Mining Software Tool: Data Set Repository, Integration of Algorithms and Experimental Analysis Framework. Journal of Multiple-Valued Logic and Soft Computing 17, pp. 255-287, 2011.
  62. Weiss, Sholom M. and Kulikowski, Casimir A., Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems, Morgan Kaufmann Publishers Inc, 1991.
  63. J.R. Quinlan, Simplifying Decision Trees. International Journal of Man-Machine Studies 27, pp. 221-234, 1987.
  64. T. Shultz, D. Mareschal, W. Schmidt, Modeling Cognitive Development on Balance Scale Phenomena, Machine Learning 16, pp. 59-88, 1994.
  65. Z.H. Zhou, Y. Jiang, NeC4.5: neural ensemble based C4.5, IEEE Transactions on Knowledge and Data Engineering 16, pp. 770-773, 2004.
  66. Setiono, R.; Leow, W.K. FERNN: An Algorithm for Fast Extraction of Rules from Neural Networks. Appl. Intell. 2000, 12, 15–25, . [CrossRef]
  67. G. Demiroz, H.A. Govenir, N. Ilter, Learning Differential Diagnosis of Eryhemato-Squamous Diseases using Voting Feature Intervals, Artificial Intelligence in Medicine. 13, pp. 147–165, 1998.
  68. B. Hayes-Roth, B., F. Hayes-Roth. Concept learning and the recognition and classification of exemplars. Journal of Verbal Learning and Verbal Behavior 16, pp. 321-338, 1977.
  69. Kononenko, I.; Šimec, E.; Robnik-Šikonja, M. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Appl. Intell. 1997, 7, 39–55, . [CrossRef]
  70. French, R.M.; Chater, N. Using Noise to Compute Error Surfaces in Connectionist Networks: A Novel Means of Reducing Catastrophic Forgetting. Neural Comput. 2002, 14, 1755–1769, . [CrossRef]
  71. J.G. Dy , C.E. Brodley, Feature Selection for Unsupervised Learning, The Journal of Machine Learning Research 5, pp 845–889, 2004.
  72. Perantonis, S.J.; Virvilis, V. Input Feature Extraction for Multilayered Perceptrons Using Supervised Principal Component Analysis. Neural Process. Lett. 1999, 10, 243–252, . [CrossRef]
  73. Garcke, J.; Griebel, M. Classification with sparse grids using simplicial basis functions. Intell. Data Anal. 2002, 6, 483–502, . [CrossRef]
  74. Elter, M.; Schulz-Wendtland, R.; Wittenberg, T. The prediction of breast cancer biopsy outcomes using two CAD approaches that both emphasize an intelligible decision process. Med Phys. 2007, 34, 4164–4172, . [CrossRef]
  75. Little, M.; McSharry, P.E.; Hunter, E.J.; Spielman, J.; Ramig, L.O. Suitability of Dysphonia Measurements for Telemonitoring of Parkinson's Disease. IEEE Trans. Biomed. Eng. 2009, 56, 1015–1022, . [CrossRef]
  76. J.W. Smith, J.E. Everhart, W.C. Dickson, W.C. Knowler, R.S. Johannes, Using the ADAP learning algorithm to forecast the onset of diabetes mellitus, In: Proceedings of the Symposium on Computer Applications and Medical Care IEEE Computer Society Press, pp.261-265, 1988.
  77. Lucas, D.D.; Klein, R.; Tannahill, J.; Ivanova, D.; Brandon, S.; Domyancic, D.; Zhang, Y. Failure analysis of parameter-induced simulation crashes in climate models. Geosci. Model Dev. 2013, 6, 1157–1171, . [CrossRef]
  78. Giannakeas, N., Tsipouras, M.G., Tzallas, A.T., Kyriakidi, K., Tsianou, Z.E., Manousou, P., Hall, A., Karvounis, E.C., Tsianos, V., Tsianos, E. A clustering based method for collagen proportional area extraction in liver biopsy images (2015) Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, 2015-November, art. no. 7319047, pp. 3097-3100.
  79. Hastie, T.; Tibshirani, R. Non-Parametric Logistic and Proportional Odds Regression. J. R. Stat. Soc. Ser. C (Applied Stat. 1987, 36, 260, . [CrossRef]
  80. Dash, M.; Liu, H.; Scheuermann, P.; Tan, K.L. Fast hierarchical clustering and its validation. Data Knowl. Eng. 2003, 44, 109–138, . [CrossRef]
  81. Wolberg, W.H.; Mangasarian, O.L. Multisurface method of pattern separation for medical diagnosis applied to breast cytology.. Proc. Natl. Acad. Sci. 1990, 87, 9193–9196, . [CrossRef]
  82. Raymer, M.; Doom, T.; Kuhn, L.; Punch, W. Knowledge discovery in medical and biological datasets using a hybrid bayes classifier/evolutionary algorithm. IEEE Trans. Syst. Man Cybern. Part B (Cybernetics) 2003, 33, 802–813, . [CrossRef]
  83. Part B, Cybernetics : a publication of the IEEE Systems, Man, and Cybernetics Society, 33 , pp. 802-813, 2003.
  84. Zhong, P.; Fukushima, M. Regularized nonsmooth Newton method for multi-class support vector machines. Optim. Methods Softw. 2007, 22, 225–236, . [CrossRef]
  85. Andrzejak, R.G.; Lehnertz, K.; Mormann, F.; Rieke, C.; David, P.; Elger, C.E. Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Phys. Rev. E 2001, 64, 061907, . [CrossRef]
  86. M. Koivisto, K. Sood, Exact Bayesian Structure Discovery in Bayesian Networks, The Journal of Machine Learning Research 5, pp. 549–573, 2004.
  87. W.J. Nash, T.L. Sellers, S.R. Talbot, A.J. Cawthor, W.B. Ford, The Population Biology of Abalone (Haliotis species) in Tasmania. I. Blacklip Abalone (H. rubra) from the North Coast and Islands of Bass Strait, Sea Fisheries Division, Technical Report No. 48 (ISSN 1034-3288), 1994.
  88. T.F. Brooks, D.S. Pope, and A.M. Marcolini. Airfoil self-noise and prediction. Technical report, NASA RP-1218, July 1989.
  89. I.-C. Yeh, Modeling of strength of high performance concrete using artificial neural networks, Cement and Concrete Research 28, pp. 1797-1808, 1998.
  90. D. Harrison and D.L. Rubinfeld, Hedonic prices and the demand for clean ai, J. Environ. Economics & Management 5, pp. 81-102, 1978.
  91. J.S. Simonoff, Smoothing Methods in Statistics, Springer-Verlag, 1996.
  92. Mackowiak, P.A.; Wasserman, S.S.; Levine, M.M. A Critical Appraisal of 98.6°F, the Upper Limit of the Normal Body Temperature, and Other Legacies of Carl Reinhold August Wunderlich. JAMA 1992, 268, 1578–1580, . [CrossRef]
  93. R.D. King, S. Muggleton, R. Lewis, M.J.E. Sternberg, Drug design by machine learning: The use of inductive logic programming to model the structure-activity relationships of trimethoprim analogues binding to dihydrofolate reductase, Proc. Nat. Acad. Sci. USA 89, pp. 11322–11326, 1992.
  94. M. Sikora, L. Wrobel, Application of rule induction algorithms for analysis of data collected by seismic hazard monitoring systems in coal mines, Archives of Mining Sciences 55, pp. 91-114, 2010.
  95. Sanderson, C.; Curtin, R. Armadillo: a template-based C++ library for linear algebra. J. Open Source Softw. 2016, 1, . [CrossRef]
  96. C. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
  97. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control. Signals, Syst. 1989, 2, 303–314, . [CrossRef]
  98. Riedmiller, M.; Braun, H. A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; Volume 1, pp. 586–591, doi:10.1109/icnn.1993.298623.
  99. Ding, S.; Xu, L.; Su, C.; Jin, F. An optimizing method of RBF neural network based on genetic algorithm. Neural Comput. Appl. 2012, 21, 333–336, . [CrossRef]
  100. Gropp, W.; Lusk, E.; Doss, N.; Skjellum, A. A high-performance, portable implementation of the MPI message passing interface standard. Parallel Comput. 1996, 22, 789–828, . [CrossRef]
  101. R. Chandra, L. Dagum, D. Kohr, D. Maydan,J. McDonald and R. Menon, Parallel Programming in OpenMP, Morgan Kaufmann Publishers Inc., 2001.
Figure 1. BNF grammar used in the current work.
Figure 2. One point crossover, used in the Grammatical Evolution.
Figure 3. The layout of chromosomes in the second phase of the proposed algorithm.
Figure 4. Scatter plot representation and the two-sample paired (Wilcoxon) signed-rank test results of the comparison of each of the three (3) classification methods (NN-RPROP, RBF-KMEANS, GENRBF) with the PROPOSED method, regarding the classification error on twenty-four (24) different publicly available classification datasets. The stars only intend to flag significance levels for the two compared groups. A p-value of less than 0.001 is flagged with three stars (***). A p-value of less than 0.0001 is flagged with four stars (****).
Figure 5. Scatter plot representation and the Wilcoxon signed-rank test results of the comparison of each of the three (3) regression methods (NN-RPROP, RBF-KMEANS, GENRBF) with the PROPOSED method, regarding the regression error on seventeen (17) different publicly available regression datasets. Star links join significantly different values; one star (*) stands for p<0.05 and four stars (****) stand for p<0.0001.
Figure 6. A Friedman test was conducted to determine whether different values of the critical parameter F made a difference in the classification error of the proposed method on the twenty-four (24) publicly available classification datasets. The analysis results for three different values of the critical parameter F (F=3, F=5, F=10) indicated no significant difference.
Table 1. Steps to produce a valid expression from the BNF grammar.
Expression Chromosome Operation
<expr> 9,8,6,4,15,9,16,23,8 9 mod 2 = 1
<expr>,<expr> 8,6,4,15,9,16,23,8 8 mod 2 = 0
(<xlist>,<digit>,<digit>),<expr> 6,4,15,9,16,23,8 6 mod 8 = 6
(x7,<digit>,<digit>),<expr> 4,15,9,16,23,8 4 mod 2 = 0
(x7,0,<digit>),<expr> 15,9,16,23,8 15 mod 2 = 1
(x7,0,1),<expr> 9,16,23,8 9 mod 2 = 1
(x7,0,1),(<xlist>,<digit>,<digit>) 16,23,8 16 mod 8 = 0
(x7,0,1),(x1,<digit>,<digit>) 23,8 23 mod 2 = 1
(x7,0,1),(x1,1,<digit>) 8 8 mod 2 = 0
(x7,0,1),(x1,1,0)
Table 2. The values used for the experimental parameters.
PARAMETER VALUE
N_c 200
N_g 100
N_s 50
F 10.0
B 100.0
k 10
p_s 0.90
p_m 0.05
Table 3. Experimental results for the classification datasets. The first column is the name of the used dataset.
DATASET NN-RPROP RBF-KMEANS GENRBF PROPOSED
Appendicitis 16.30% 12.23% 16.83% 15.77%
Australian 36.12% 34.89% 41.79% 22.40%
Balance 8.81% 33.42% 38.02% 15.62%
Cleveland 61.41% 67.10% 67.47% 50.37%
Dermatology 15.12% 62.34% 61.46% 35.73%
Hayes Roth 37.46% 64.36% 63.46% 35.33%
Heart 30.51% 31.20% 28.44% 15.91%
HouseVotes 6.04% 6.13% 11.99% 3.33%
Ionosphere 13.65% 16.22% 19.83% 9.30%
Liverdisorder 40.26% 30.84% 36.97% 28.44%
Mammographic 18.46% 21.38% 30.41% 17.72%
Parkinsons 22.28% 17.41% 33.81% 14.53%
Pima 34.27% 25.78% 27.83% 23.33%
Popfailures 4.81% 7.04% 7.08% 4.68%
Regions2 27.53% 38.29% 39.98% 25.18%
Saheart 34.90% 32.19% 33.90% 29.46%
Segment 52.14% 59.68% 54.25% 49.22%
Spiral 46.59% 44.87% 50.02% 23.58%
Wdbc 21.57% 7.27% 8.82% 5.20%
Wine 30.73% 31.41% 31.47% 5.63%
Z_F_S 29.28% 13.16% 23.37% 3.90%
ZO_NF_S 6.43% 9.02% 22.18% 3.99%
ZONF_S 27.27% 4.03% 17.41% 1.67%
ZOO 15.47% 21.93% 33.50% 9.33%
AVERAGE 26.56% 28.84% 33.35% 18.73%
Table 4. Experimental results for the regression datasets. The first column is the name of the used regression dataset.
DATASET NN-RPROP RBF-KMEANS GENRBF PROPOSED
ABALONE 4.55 7.37 9.98 5.16
AIRFOIL 0.002 0.27 0.121 0.004
BASEBALL 92.05 93.02 98.91 81.26
BK 1.60 0.02 0.023 0.025
BL 4.38 0.013 0.005 0.0004
CONCRETE 0.009 0.011 0.015 0.006
DEE 0.608 0.17 0.25 0.16
DIABETES 1.11 0.49 2.92 1.74
HOUSING 74.38 57.68 95.69 21.11
FA 0.14 0.015 0.15 0.033
MB 0.55 2.16 0.41 0.19
MORTGAGE 9.19 1.45 1.92 0.014
NT 0.04 8.14 0.02 0.007
PY 0.039 0.012 0.029 0.019
QUAKE 0.041 0.07 0.79 0.034
TREASURY 10.88 2.02 1.89 0.098
WANKARA 0.0003 0.001 0.002 0.003
AVERAGE 11.71 10.17 12.54 6.46
Table 5. Experimental results with the proposed method and using different values for the parameter F on the classification datasets.
DATASET F = 3 F = 5 F = 10
Appendicitis 15.57% 16.60% 15.77%
Australian 24.29% 23.94% 22.40%
Balance 17.22% 15.39% 15.62%
Cleveland 52.09% 51.65% 50.37%
Dermatology 37.23% 36.81% 35.73%
Hayes Roth 35.72% 32.31% 35.33%
Heart 16.32% 15.54% 15.91%
HouseVotes 4.35% 3.90% 3.33%
Ionosphere 12.50% 11.44% 9.30%
Liverdisorder 28.08% 28.19% 28.44%
Mammographic 17.49% 17.15% 17.72%
Parkinsons 16.25% 15.17% 14.53%
Pima 23.29% 23.97% 23.33%
Popfailures 5.31% 5.86% 4.68%
Regions2 25.97% 26.29% 25.18%
Saheart 28.52% 28.59% 29.46%
Segment 44.95% 48.77% 49.22%
Spiral 15.49% 18.19% 23.58%
Wdbc 5.43% 5.01% 5.20%
Wine 7.59% 8.39% 5.63%
Z_F_S 4.37% 4.26% 3.90%
ZO_NF_S 3.79% 4.21% 3.99%
ZONF_S 2.34% 2.26% 1.67%
ZOO 11.90% 10.50% 9.33%
AVERAGE 19.03% 18.93% 18.73%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.