Papers | Dataset | Algorithm (ML & EL) | Evaluation measures | Result | Discussion |
B. Marapelli [1] | | | | On the COCOMONASA2, COCOMONASA, and COCOMO81 datasets, the LR model is a better estimator than KNN, producing a higher CC and lower RRSE, RAE, RMSE, and MAE (these measures are defined after this table). | LR accuracy depends on data quality and may miss complex, non-linear relationships; KNN can be computationally expensive on large datasets and requires careful selection of the distance measure and the value of k. |
B. Turhan et al. [2] | | | | Across the three datasets, RBF performed best; the experiments also show that the COCOMO model is effective for EE on the NASA and USC datasets. | The metrics derived from these models could be adjusted or extended with additional metrics to improve the model. |
P. Pospieszny et al. [3] | | EL combined: | | MLP, SVM, and GLM were combined for EE, and the combined model outperforms the individual models. EL models built for effort and duration early in a project's lifetime are highly accurate compared with methods used in other studies and can be deployed in real-world settings. | The ISBSG dataset is highly variable because it is sourced from many projects and organizations; combined with a large number of missing values, this heterogeneity makes data preparation and ML model building difficult. |
Singal et al. [4] | | | | The DE technique used less memory and had lower computational complexity; it produced better values for the cost factors, which greatly improved EE. | Only MMRE was used as the fitness function; alternative measures could be considered to increase accuracy, and only a single model (DE) was used for EE. |
P. Rijwani et al. [5] | | | | A multilayer feed-forward ANN trained with back-propagation (MLF-ANN) was used and offered higher prediction accuracy. | Gathering and maintaining high-quality data is essential, since data quality and quantity directly affect performance; training and validating ANNs also require expertise and computational resources, and too few datasets were used. |
Z. Abdelali et al. [6] | | | | Three commonly used accuracy metrics, Pred(0.25), MMRE, and MdMRE, were applied to identify the most accurate approaches; overall, the RF model outperforms the RT model, especially on COCOMO and ISBSG R8. | RF is robust to overfitting and noise in the data, making it suitable for real-world software projects where data quality can vary. |
M. Hammad et al. [7] | | | | SVM offers the highest forecasting accuracy compared with the other methods, with the lowest MAE values; its lowest MAE was 2.6. | Configuring hyperparameters, such as selecting the right ANN architecture, can be time-consuming; too few projects were used (73), and additional evaluation measures should be considered. |
M. Kumar et al. [8] | | | | With 12 features, LR outperformed MLP and RF; performance was measured with RSE, RMSE, and MAE. On the Desharnais dataset with only seven selected features, LR again produced more accurate estimates than MLP and RF. | The dataset used is not sufficient to judge the best model, and no evidence was provided of how the approach performs on additional datasets. |
S. Elyassami et al. [9] | | | | ANN with one HL and SVM with AK produce more accurate results than ANN with two HLs and SVM with LK, respectively. | The choice of kernel function can significantly affect SVM performance, and selecting the right kernel for SEE is challenging; the approach suits small datasets and has a complicated architecture. |
S. Elyassami et al. [10] | | | | Based on Pred(25), the NBC technique was as beneficial as the SWR technique in terms of prediction accuracy. | Data preprocessing is required to address outliers and missing values in the ISBSG dataset; these techniques are computationally expensive and handle only linear problems rather than non-linear ones, and additional evaluation measures should be used. |
I. F. da Silva [11] | | | | On this dataset, the ANN outperformed LR; because the ANN is not restricted to a linear function, it can handle non-linear data more effectively. | The dataset used is not sufficient to judge the best model, no evidence was provided for additional datasets, and other evaluation measures should be used. |
Benala et al. [12] | | | | DBSCAN/UKW FLANN produces more accurate estimates than FLANN, SVR, RBF, and CART. | Clustering algorithms have parameters that need to be set, and their effectiveness can be influenced by the quality of the data. |
Leal et al. [13] | | | | WNNLR outperforms SVR, Bagging, and NNLR based on MMRE and the prediction rate. | The weight assignments in WNNLR, the choice of distance metric, feature scaling, and other hyperparameters must be carefully tuned for optimal performance, and too few projects were used (73). |
F. Gravino et al. [14] | | | | GP outperformed CBR and MSWR based on MMRE, MdMRE, and the prediction rate. | GP can be computationally expensive and may require substantial computational resources. |
Nassif et al. [15] | | | | On every evaluation measure, DTF outperforms DT and MLR, and the improvement is statistically significant at the 95% confidence level (p < 0.05). | DT forests can mitigate overfitting to some extent, but balancing complexity and accuracy remains a challenge; DTF may also produce less interpretable results, making it hard to explain predictions to stakeholders. |
Dave et al. [16] | | | | According to MMRE, FFNN outperforms RBFNN as an estimation model; however, evaluation with the Modified MMRE and RSD shows that RBFNN is more accurate at EE, indicating that MMRE is an unreliable evaluation criterion that does not always identify the best estimation model. | The dataset used is not sufficient to judge the best model, and no evidence was provided of how the approach performs on additional datasets. |
Attarzadeh et al. [17] | | | | Compared with the COCOMO II model, the proposed model improves accuracy by 17.1%. | The approach suits small datasets and has a complicated architecture. |
Hidmi et al. [18] | | EL combined: | | A single method used alone reaches, at best, an acceptable accuracy of 85%; combining the classifiers yields 91.35% accuracy on the Desharnais dataset and 85.48% on the Maxwell dataset, so combining two methods improves estimation accuracy. | KNN requires a careful choice of distance metric and k value; other ML algorithms and datasets should be tried to improve accuracy, and additional evaluation measures should be used. |
Hosni et al. [19] | | EL combined: | | No single optimal EL configuration emerges, since the performance of the proposed ensemble varies across datasets. | KNN requires a careful choice of distance metric and k value; other ML algorithms and datasets should be tried to improve accuracy, and additional evaluation measures should be used. |
Shukla et al. [20] | | | | AdaBoost-MLPNN achieves an R-squared of 82.213%, the highest of all models, whereas MLPNN scores 78.33%. | The dataset used is not sufficient to judge the best model, and no evidence was provided of how the approach performs on additional datasets. |
Elish et al. [21] | | EL combined: | | The results confirm that individual models are unreliable because their performance is inconsistent and unstable across datasets; in contrast, the EL model offers more reliable performance than the individual models. | Ensemble averaging combines the predictive power of different algorithms, resulting in more accurate EE and duration estimates. |
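For reference, the evaluation measures cited throughout the table above are commonly defined as follows, where y_i is the actual effort of project i, ŷ_i the estimated effort, ȳ the mean actual effort, and n the number of projects (these are the standard definitions; individual studies may vary slightly in notation):

\[
\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\lvert y_i-\hat{y}_i\rvert ,\qquad
\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}
\]
\[
\mathrm{RAE}=\frac{\sum_{i=1}^{n}\lvert y_i-\hat{y}_i\rvert}{\sum_{i=1}^{n}\lvert y_i-\bar{y}\rvert},\qquad
\mathrm{RRSE}=\sqrt{\frac{\sum_{i=1}^{n}(y_i-\hat{y}_i)^2}{\sum_{i=1}^{n}(y_i-\bar{y})^2}}
\]
\[
\mathrm{MRE}_i=\frac{\lvert y_i-\hat{y}_i\rvert}{y_i},\qquad
\mathrm{MMRE}=\frac{1}{n}\sum_{i=1}^{n}\mathrm{MRE}_i,\qquad
\mathrm{Pred}(25)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\big[\mathrm{MRE}_i\le 0.25\big]
\]

CC denotes the Pearson correlation coefficient between actual and estimated effort.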
Dataset name | Source repository | Number of features | Number of projects | Output feature (effort) | Reference |
COCOMO81 | PROMISE | 17 | 63 | Person-months | [1,4,6,11,12,14] |
COCOMO NASA 1 | PROMISE | 17 | 63 | Person-months | [1,2,4,16,19] |
COCOMO NASA 2 | PROMISE | 24 | 93 | Person-months | [1,5,17] |
Maxwell | GitHub | 27 | 62 | Person-hours | [20] |
Desharnais | GitHub | 9 | 81 | Person-hours | [8,14,15,18,20,23] |
Desharnais-1-1 | GitHub | 12 | 81 | Person-hours | [8] |
China | GitHub | 16 | 499 | Person-hours | [9] |
Albrecht | GitHub | 8 | 24 | Person-hours | [4,19,21] |
Belady | GitHub | 2 | 32 | Person-hours | [25] |
Boehm | GitHub | 2 | 62 | Person-hours | [25] |
Kitchenham | GitHub | 4 | 145 | Person-hours | [24] |
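As an illustration of how these repository datasets are typically consumed, the sketch below loads one PROMISE dataset (COCOMO81) from a local ARFF file into a feature matrix and an effort target. The file name cocomo81.arff and the name of the effort column are assumptions for illustration only; they are not specified in this paper.

from scipy.io import arff   # SciPy's ARFF reader (PROMISE files are commonly distributed as ARFF)
import pandas as pd

# Hypothetical local copy of the PROMISE COCOMO81 dataset.
data, meta = arff.loadarff("cocomo81.arff")
df = pd.DataFrame(data)

# loadarff returns nominal attributes as byte strings; decode them to text.
for col in df.select_dtypes(include=[object]).columns:
    df[col] = df[col].str.decode("utf-8")

# Assumed effort column name; the actual effort in person-months is the estimation target.
y = df["effort"]
X = df.drop(columns=["effort"])
print(X.shape, y.describe())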
COCOMO81 Features | Description | FS | Data Type |
Rely | Describes the required software reliability. | Numeric |
Data | Describes the database size. | ✓ | Numeric |
IntComplx | Describes the product complexity. | Numeric |
Time | Describes the central processing unit (CPU) time constraint. | ✓ | Numeric |
Stor | Describes the main storage (memory) constraint. | ✓ | Numeric |
Virt | Describes the virtual machine volatility. | Numeric |
Turn | Describes the computer turnaround time. | Numeric |
Acap | Describes the analyst capability. | Numeric |
aexp | Describes the applications experience. | Numeric |
Pcap | Describes the programmer capability. | Numeric |
vepx | Describes the virtual machine experience. | Numeric |
lexp | Describes the programming language experience. | Numeric |
Modp | Describes the use of modern programming practices. | Numeric |
Tool | Describes the use of software tools. | ✓ | Numeric |
Sced | Describes the required development schedule constraint. | Numeric |
loc | Describes the source code lines. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-months". | ✓ | Numeric |
COCOMO Nasa-I Features | Description | FS | Data Type |
Rely | Describes the required software reliability. | Numeric |
Data | Describes the database size. | ✓ | Numeric |
IntComplx | Describes the product complexity. | Numeric |
Time | Describes the central processing unit (CPU) time constraint. | ✓ | Numeric |
Stor | Describes the main storage (memory) constraint. | ✓ | Numeric |
Virt | Describes the virtual machine volatility. | Numeric |
Turn | Describes the computer turnaround time. | Numeric |
Acap | Describes the analyst capability. | Numeric |
aexp | Describes the applications experience. | Numeric |
Pcap | Describes the programmer capability. | Numeric |
vepx | Describes the virtual machine experience. | Numeric |
lexp | Describes the programming language experience. | Numeric |
Modp | Describes the use of modern programming practices. | Numeric |
Tool | Describes the use of software tools. | ✓ | Numeric |
Sced | Describes the required development schedule constraint. | Numeric |
loc | Describes the source code lines. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-months". | ✓ | Numeric |
COCOMO Nasa-II Features | Description | FS | Data Type |
id | Describes the Project ID. | Numeric | |
ProjectNam | Describes the Project name. | Ordinal | |
mode | Describes the development mode. | Ordinal | |
year | Describes the year of development. | Numeric | |
cat2 | Describes the category of application. | Ordinal | |
center | Describes which NASA center. | Ordinal | |
forg | Describes the flight or ground system. | Ordinal | |
Rely | Describes the required software reliability. | Numeric |
Data | Describes the database size. | ✓ | Numeric |
IntComplx | Describes the product complexity. | Numeric |
Time | Describes the central processing unit (CPU) time constraint. | ✓ | Numeric |
Stor | Describes the main storage (memory) constraint. | ✓ | Numeric |
Virt | Describes the virtual machine volatility. | Numeric |
Turn | Describes the computer turnaround time. | Numeric |
Acap | Describes the analyst capability. | Numeric |
aexp | Describes the applications experience. | Numeric |
Pcap | Describes the programmer capability. | Numeric |
vepx | Describes the virtual machine experience. | Numeric |
lexp | Describes the programming language experience. | Numeric |
Modp | Describes the use of modern programming practices. | Numeric |
Tool | Describes the use of software tools. | ✓ | Numeric |
Sced | Describes the required development schedule constraint. | Numeric |
loc | Describes the source code lines. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-months". | ✓ | Numeric |
KITCHENHAM Features | Description | FS | Data Type |
Actual. duration | Describes the project duration. | Numeric | |
Adjusted function points | Describes the adjusted function point count. | Numeric |
First. Estimate | Describes the first estimate for the project. | ✓ | Numeric |
Actual. effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
DESHARNAIS Features | Description | FS | Data Type |
TeamExp | Describes the team's experience, in years. | Numeric |
ManagerExp | Describes the project manager's experience. | Numeric |
YearEnd | Describes the final year of the project. | Numeric |
Transactions | Describes the number of transactions processed. | ✓ | Numeric |
Entities | Describes the number of entities in the system's data model. | Numeric |
PointsAdjust | Describes the adjusted function points. | ✓ | Numeric |
Envergure | Describes the environment for the project. | Numeric | |
Language | Describes the project’s language. | Ordinal | |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
DESHARNAIS_1_1 Features | Description | FS | Data Type |
ID | Describes the object ID. | Numeric | |
TeamExp | Describes the team's experience, in years. | Numeric |
ManagerExp | Describes the project manager's experience. | Numeric |
YearEnd | Describes the final year of the project. | Numeric |
Transactions | Describes the number of transactions processed. | ✓ | Numeric |
Entities | Describes the number of entities in the system's data model. | Numeric |
PointsAdjust | Describes the adjusted function points. | ✓ | Numeric |
PointsnonAdjust | Describes the unadjusted function points (Transactions + Entities). | ✓ | Numeric |
Length | Describes the actual project schedule in months. | Numeric | |
Language | Describes the language used for the project. | Ordinal | |
Adjustment | Describes the function point adjustment factor for complexity. | Numeric | |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
MAXWELL Features | Description | FS | Data Type |
Size | Describes the application size. | ✓ | Numeric |
Duration | Describes the project duration. | ✓ | Numeric |
Time | Describes the Time taken. | Numeric | |
Year | Describes the year of development. | Numeric | |
app | Describes the application type for the project. | Numeric | |
har | Describes the required hardware framework. | Numeric | |
dba | Describes the project’s database. | Numeric | |
ifc | Describes the user interface for the project. | Numeric | |
Source | Describes the conditions under which software is created. | Numeric | |
nlan | Describes the number of languages that were used. | Numeric | |
telonuse | Describes whether Telon was used for the project. | Numeric |
T01 | Describes how the client interacts. | Numeric | |
T02 | Describes the creation of an environment’s capability. | Numeric | |
T03 | Describes the project’s workforce accessibility. | Numeric | |
T04 | Describes the standard used for the project. | Numeric | |
T05 | Describes the method used for the project. | Numeric | |
T06 | Describes the tools used for the project. | Numeric | |
T07 | Describes the logic that underlies the complexity of the software. | Numeric | |
T08 | Describes the range of limitations. | Numeric | |
T09 | Describes the standard of excellence criteria. | Numeric | |
T10 | Describes what is necessary for productivity. | Numeric | |
T11 | Describes the process of installation criteria. | Numeric | |
T12 | Describes the critical thinking skills of the team members. | Numeric | |
T13 | Describes the program and the experience of staff members. | Numeric | |
T14 | Describes the project team's technical capabilities. | Numeric |
T15 | Describes the project team members' capabilities. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
BELADY Features | Description | FS | Data Type |
Size | Describes the application size. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
BOEHM Features | Description | FS | Data Type |
Size | Describes the application size. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
CHINA Features | Description | FS | Data Type |
AFP | Describes the adjusted function points. | Numeric |
Input | Describes the function points of input for the project. | Numeric |
Output | Describes the function points of output for the project. | Numeric |
Inquiry | Describes the function points of external inquiries. | Numeric |
Files | Describes the function points of internal logical files. | Numeric |
Interface | Describes the function points of external interface files. | Numeric |
Added | Describes the function points of added functions. | Numeric |
Changed | Describes the function points of changed functions. | Numeric |
Resource | Describes the team type for the project. | ✓ | Numeric |
Duration | Describes the project duration. | Numeric |
PDR_AFP | Describes the productivity delivery rate (adjusted function points). | Numeric |
PDR_UFP | Describes the productivity delivery rate (unadjusted function points). | Numeric |
NPDR_AFP | Describes the normalized productivity delivery rate (adjusted function points). | Numeric |
NPDU_UFP | Describes the normalized productivity delivery rate (unadjusted function points). | Numeric |
N-Effort | Describes the normalized effort. | ✓ | Numeric |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
ALBRECHT Features | Description | FS | Data Type |
Input | Describes the number of inputs the software must handle. | Numeric |
Output | Describes the number of outputs the program generates. | ✓ | Numeric |
Inquiry | Describes the number of inquiries the application must respond to. | ✓ | Numeric |
Files | Describes the number of files the program must read from or write to. | Numeric |
FPAdj | Describes the Function Point Adjustment Factor. | Numeric |
RawFPcounts | Describes the raw function point count. | ✓ | Numeric |
AdjFP | Describes the adjusted function points (raw count scaled by the adjustment factor). | ✓ | Numeric |
Effort | Describes the actual time spent in "person-hours". | ✓ | Numeric |
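In the preceding feature tables, a check mark in the FS column appears to indicate the features retained after feature selection. The exact selection procedure is not spelled out in this section, so the snippet below only sketches one plausible way such a subset could be produced, using scikit-learn's univariate F-test selector; the helper name, the target column, and the value of k are illustrative assumptions rather than the authors' method.

import pandas as pd
from sklearn.feature_selection import SelectKBest, f_regression

def select_effort_features(df: pd.DataFrame, target: str = "Effort", k: int = 5):
    """Return the k numeric features most associated with effort (univariate F-test)."""
    X = df.drop(columns=[target]).select_dtypes("number")
    y = df[target]
    selector = SelectKBest(score_func=f_regression, k=min(k, X.shape[1]))
    selector.fit(X, y)
    return X.columns[selector.get_support()].tolist()

A different criterion (for example, correlation-based or wrapper selection) would yield a different subset, so the checked columns above should not be read as the output of this particular helper.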
M-COCOMO Features | Description | Data Type |
Data | Describes the Database Size. | Numeric |
Time | Describes the CPU time limitation. | Numeric |
Stor | Describes the main storage (memory) constraint. | Numeric |
Tool | Describes the software tools that are implemented. | Numeric |
loc | Describes the source code lines. | Numeric |
Effort | Describes the actual time spent in "person-months". | Numeric |
HPD Features | Description | Data Type |
Resource | Describe the team type for the project. | Numeric |
Output | Describes the quantity of outputs that a program generates. | Numeric |
Enquiry | Describes the number of queries or questions that an application must respond to. | Numeric |
Team Exp | Describes the team experience in years. | Numeric |
First Estimate | Describes the first estimate for the project. | Numeric |
AFP | Describes the adjusted function points. | Numeric |
Non-AFP | Describes the unadjusted function points. | Numeric |
Transaction | Describes the number of transactions processed. | Numeric |
Size | Describes the application size. | Numeric |
Duration | Describes the project duration. | Numeric |
N-effort | Describes the normalized effort. | Numeric |
Effort | Describes the actual time spent in "person-hours". | Numeric |
M-COCOMO Features | Description | Data Type |
Rely | Describes the required software reliability. | Numeric |
Data | Describes the database size. | Numeric |
IntComplx | Describes the product complexity. | Numeric |
Time | Describes the central processing unit (CPU) time constraint. | Numeric |
Stor | Describes the main storage (memory) constraint. | Numeric |
Virt | Describes the virtual machine volatility. | Numeric |
Turn | Describes the computer turnaround time. | Numeric |
Acap | Describes the analyst capability. | Numeric |
aexp | Describes the applications experience. | Numeric |
Pcap | Describes the programmer capability. | Numeric |
vepx | Describes the virtual machine experience. | Numeric |
lexp | Describes the programming language experience. | Numeric |
Modp | Describes the use of modern programming practices. | Numeric |
Tool | Describes the use of software tools. | Numeric |
Sced | Describes the required development schedule constraint. | Numeric |
loc | Describes the source code lines. | Numeric |
Effort | Describes the actual time spent in "person-months". | Numeric |
HPD Features | Description | Data Type |
Input | Describes the quantity of inputs a software must handle. | Numeric |
Output | Describes the quantity of outputs that a program generates. | Numeric |
File | Describes the number of files required for a program to write to or read from. | Numeric |
First Estimate | Describes the first estimate for the project. | Numeric |
AFP | Describes the adjusted function points. | Numeric |
N-effort | Describes the normalized effort. | Numeric |
Effort | Describes the actual time spent in "person-hours". | Numeric |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0158 | 0.0671 | 53.2419 % | 56.3547 % | 0.9578 |
AdaBoost | 0.0159 | 0.0589 | 46.9494 % | 48.5212 % | 0.9998 |
Voting | 0.0143 | 0.0565 | 44.3188 % | 44.5415 % | 0.9998 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0214 | 0.0819 | 53.6069 % | 58.0754 % | 0.9759 |
AdaBoost | 0.0039 | 0.0356 | 9.7092 % | 25.2217 % | 0.9996 |
Voting | 0.0168 | 0.0640 | 42.0561 % | 45.3716 % | 0.9995 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0143 | 0.0661 | 53.8579 % | 57.3016 % | 0.9578 |
AdaBoost | 0.0013 | 0.0228 | 5.5397 % | 19.7348 % | 0.9998 |
Voting | 0.0109 | 0.0502 | 40.9738 % | 43.5082 % | 0.9998 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0131 | 0.0615 | 53.5125 % | 55.7224 % | 0.9179 |
AdaBoost | 0.009 | 0.0419 | 36.7875 % | 37.9658 % | 0.9999 |
Voting | 0.0115 | 0.0519 | 47.0259 % | 47.0276 % | 0.9999 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0094 | 0.0454 | 36.6104 % | 40.0959 % | 0.9164 |
AdaBoost | 0.0094 | 0.0421 | 36.7417 % | 37.1824 % | 0.9999 |
Voting | 0.0102 | 0.0463 | 41.9686 % | 41.9719 % | 0.9999 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0347 | 0.102 | 55.6538 % | 57.7707 % | 0.8922 |
AdaBoost | 0.0023 | 0.0313 | 3.6618 % | 17.7461 % | 0.9996 |
Voting | 0.0305 | 0.0878 | 48.8484 % | 49.7202 % | 0.9996 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0182 | 0.0771 | 56.4441 % | 60.7297 % | 0.8685 |
AdaBoost | 0.00643 | 0.0488 | 19.7554 % | 38.4640 % | 0.9989 |
Voting | 0.0176 | 0.0727 | 54.4594 % | 57.2534 % | 0.9989 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0076 | 0.0467 | 53.5633 % | 55.4589 % | 0.9936 |
AdaBoost | 0.0003 | 0.0101 | 1.9578 % | 12.0124 % | 0.9999 |
Voting | 0.0071 | 0.0423 | 49.8529 % | 50.2115 % | 0.9999 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0172 | 0.0705 | 54.0836 % | 55.9919 % | 0.9615 |
AdaBoost | 0.0118 | 0.0481 | 37.2787 % | 38.1860 % | 0.9999 |
Voting | 0.0140 | 0.0556 | 44.0882 % | 44.1649 % | 0.9999 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0429 | 0.1116 | 53.7217 % | 55.6905 % | 0.9892 |
AdaBoost | 0.0299 | 0.0764 | 37.3913 % | 38.2528 % | 0.9999 |
Voting | 0.0202 | 0.0505 | 25.2501 % | 25.2845 % | 0.9999 |
Model | MAE | RMSE | RAE | RRSE | CC |
Bagging | 0.0015 | 0.0183 | 36.1662 % | 40.3380 % | 0.9938 |
AdaBoost | 0.0001 | 0.0051 | 1.4717 % | 11.1515 % | 0.9998 |
Voting | 0.0021 | 0.0229 | 49.9593 % | 50.2752 % | 0.9998 |
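The tables above report MAE, RMSE, RAE, RRSE, and CC for Bagging, AdaBoost, and Voting ensembles on each dataset. As a minimal sketch of how such figures could be reproduced, the snippet below builds the three ensemble regressors with scikit-learn and computes the five measures on a held-out split. The base learners, hyperparameters, and train/test protocol are assumptions, since the exact configuration is not shown in this section.

import numpy as np
from sklearn.ensemble import BaggingRegressor, AdaBoostRegressor, VotingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def scores(y_true, y_pred):
    """MAE, RMSE, RAE, RRSE, and CC as defined earlier in this section."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    rae = np.sum(np.abs(err)) / np.sum(np.abs(y_true - y_true.mean()))
    rrse = np.sqrt(np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2))
    cc = np.corrcoef(y_true, y_pred)[0, 1]
    return mae, rmse, rae, rrse, cc

def evaluate_ensembles(X, y, seed=0):
    """Fit Bagging, AdaBoost, and Voting regressors and print their scores."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    models = {
        "Bagging": BaggingRegressor(n_estimators=50, random_state=seed),
        "AdaBoost": AdaBoostRegressor(n_estimators=50, random_state=seed),
        "Voting": VotingRegressor([("lr", LinearRegression()),
                                   ("dt", DecisionTreeRegressor(random_state=seed))]),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, scores(y_te, model.predict(X_te)))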
Method | Model | MAE | RMSE | RAE | RRSE | CC |
ComImp | Bagging | 0.0066 | 0.0443 | 43.5009 % | 49.6498 % | 0.9320 |
ComImp | AdaBoost | 0.0009 | 0.0183 | 6.1038 % | 20.948 % | 0.9999 |
ComImp | Voting | 0.0057 | 0.0357 | 37.3736 % | 40.9089 % | 0.9999 |
PCA-ComImp | Bagging | 0.0069 | 0.0429 | 45.1777 % | 49.1607 % | 0.9359 |
PCA-ComImp | AdaBoost | 0.0002 | 0.0085 | 1.0243 % | 9.7135 % | 0.9999 |
PCA-ComImp | Voting | 0.0038 | 0.0276 | 24.801 % | 31.697 % | 0.9999 |
Method | Model | MAE | RMSE | RAE | RRSE | CC |
ComImp | Bagging | 0.0008 | 0.0135 | 36.2509 % | 39.5167 % | 0.9978 |
ComImp | AdaBoost | 0.0122 | 0.0494 | 37.2849 % | 38.5657 % | 0.9997 |
ComImp | Voting | 0.0004 | 0.0054 | 15.6635 % | 15.6637 % | 0.9999 |
PCA-ComImp | Bagging | 0.0008 | 0.0135 | 36.0581 % | 39.5376 % | 0.9972 |
PCA-ComImp | AdaBoost | 0.0001 | 0.0002 | 1.5617 % | 10.1143 % | 0.9999 |
PCA-ComImp | Voting | 0.0008 | 0.0114 | 32.986 % | 33.1954 % | 0.9999 |
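The last two tables contrast a ComImp pipeline with a PCA-ComImp variant. The ComImp procedure itself is not detailed in this section, so the following is only a rough sketch, under the assumption that missing values are imputed first and that the PCA variant additionally projects the imputed features onto a small number of principal components before the ensemble is trained; the imputation strategy and the component count are illustrative choices, not the authors' settings.

from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingRegressor

# Assumed ComImp-style pipeline: impute missing effort-driver values, then train an ensemble.
comimp = make_pipeline(SimpleImputer(strategy="mean"),
                       BaggingRegressor(n_estimators=50, random_state=0))

# Assumed PCA-ComImp variant: same imputation, followed by projection onto principal components.
pca_comimp = make_pipeline(SimpleImputer(strategy="mean"),
                           PCA(n_components=5),
                           BaggingRegressor(n_estimators=50, random_state=0))

# Both pipelines expose fit/predict, so the scores helper above can be reused, e.g.:
#   comimp.fit(X_tr, y_tr); scores(y_te, comimp.predict(X_te))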