1. Introduction
Artificial Intelligence (AI) has emerged as a transformative force across diverse domains, revolutionizing the way we comprehend and address complex challenges. As AI applications permeate various facets of human endeavor, the quest for accurate predictive modeling stands as a paramount objective. The ability to foresee future outcomes based on historical data fosters informed decision-making and shapes the trajectory of progress in fields ranging from healthcare and finance to natural language processing.
In the pursuit of predictive modeling excellence, the perennial tension between accuracy and computational efficiency demands innovative methodologies. This study investigates predictive modeling in artificial intelligence, with particular focus on the interplay between predictive accuracy and computational efficiency. Our endeavor is encapsulated in the title, "Prognosticating Parsimony: An Ebullient Ensemble Approach to Predictive Modeling in Artificial Intelligence," under which we develop and evaluate an ensemble approach imbued with ebullience, a dynamic, effervescent quality that reflects the vigor and vitality of the proposed methodology.
Predictive modeling is indispensable wherever decisions must rest on anticipated outcomes. Conventional methods often struggle to balance models that are too simple, and therefore lose predictive power, against models that are too complex, and therefore hard to interpret and costly to run. In this study we present the ebullient ensemble, a method that aims to close this gap by combining the strengths of several predictive models while treating computational efficiency as a first-class concern.
1.1. Contextualizing Predictive Modeling in AI:
To appreciate the significance of our study, predictive modeling must be situated within the broader landscape of AI applications. The enormous volumes of data generated daily, together with advances in computational power and algorithmic sophistication, have pushed predictive modeling to the forefront of AI research and practice.
In healthcare, for example, predictive modeling underpins diagnosis, risk assessment, and the delivery of personalized care. The ability to anticipate disease progression, identify high-risk patients, and allocate resources effectively is often a matter of life and death. Likewise, in financial markets, predictive modeling drives automated trading strategies, risk evaluation, and portfolio optimization, where accurate predictions can mean the difference between substantial gains and losses.
Predictive models are equally central to natural language processing, a rapidly growing area of AI, for tasks such as sentiment analysis, machine translation, and conversational agents. The quality of the underlying predictive models determines how well a system can infer user intent, understand context, and generate an appropriate response. Across all of these areas, the need for predictive modeling that combines accuracy with efficiency is a recurring theme, underscoring the importance of continued innovation.
1.2. The Conundrum of Predictive Modeling:
The crux of predictive modeling lies in balancing accuracy against computational cost. Models have traditionally fallen between a Scylla of oversimplification, which sacrifices accuracy for efficiency, and a Charybdis of overcomplication, which risks overfitting, heavier computation, and reduced interpretability. Navigating between these competing forces is where innovation is needed, and our study charts a course through this territory.
Oversimplified models consume little computational power, but they often fail to capture the subtleties of real-world data, which are rarely governed by linear relationships and crisp boundaries; models that oversimplify the underlying patterns will systematically mispredict. Conversely, highly complex models, such as deep neural networks and elaborate ensemble methods, can be very accurate but demand far more computation and are harder to interpret. Their opacity makes it difficult to explain why a prediction was made, a serious concern in high-stakes fields such as healthcare and finance.
1.3. The Ebullient Ensemble Paradigm:
At the heart of our study is the ebullient ensemble paradigm, a fresh approach that eases predictive modeling by embracing diversity and adaptivity. "Ebullient" aptly describes the lively, dynamic character of the proposed ensemble, which draws on the collective intelligence of diverse predictive models.
Far from being a mere collection of models, the ebullient ensemble paradigm is a dynamic combination that exploits each model's strengths while mitigating its weaknesses. Unlike standard ensembles, our method deliberately mixes different algorithms, each contributing a distinct perspective to the overall predictive picture. This diversity, paired with close attention to computational efficiency, addresses the predictive-modeling conundrum by striking a fine balance between accuracy and speed.
1.4. Objectives of the Research:
1.4.1. Developing a Comprehensive Ensemble Framework:
Our main goal is to create a comprehensive ensemble framework that integrates a wide range of predictive models. Through careful testing and validation, we aim to show that the ebullient ensemble outperforms individual models and traditional ensemble methods.
1.4.2. Navigating Feature Selection and Algorithmic Diversity:
The research scrutinizes the intricate processes of feature selection and algorithmic diversity within the ebullient ensemble. By identifying optimal combinations of features and algorithms, we seek to enhance the ensemble’s prognostic capabilities while maintaining computational efficiency.
1.4.3. Evaluating Performance Across Diverse Domains:
To ascertain the generalizability and versatility of our proposed approach, extensive experiments are conducted across diverse datasets spanning domains such as healthcare, finance, and natural language processing. Comparative analyses will be performed to highlight the efficacy of the ebullient ensemble in varied contexts.
1.4.4. Interpretable Predictions:
Recognizing the significance of interpretability, our study investigates the mechanisms behind the predictions made by the ebullient ensemble. By explaining the components that influence prognostic results, we contribute to the broader debate on model interpretability and address a fundamental aspect that is sometimes overlooked in sophisticated ensemble techniques.
The remainder of this paper is organized methodically. Section 2 surveys related work. Section 3 formalizes the problem of balancing accuracy and computational efficiency. Section 4 describes the general architecture of the ebullient ensemble, including its conceptual foundations, and Section 5 details the proposed technique and the steps required to put it into practice. Section 6 presents the experimental setup, covering the datasets, environment, and evaluation criteria. Section 7 reports the empirical findings, complemented by in-depth analyses that highlight the benefits of the ebullient ensemble technique. Section 8 concludes with a discussion of the ramifications of our findings, directions for future study, and the broader influence of the approach on the landscape of predictive modeling in artificial intelligence.
By presenting a strategy for AI-driven prognostication that is pragmatic, efficient, and accurate, this study aims to advance the field of predictive modeling. Because it navigates the difficult trade-off between accuracy and computational efficiency, the ebullient ensemble paradigm is positioned as an innovation in the ever-changing realm of artificial intelligence applications. Our central goal is the construction of a comprehensive ensemble framework that incorporates a wide variety of predictive models; through careful testing and validation, we aim to demonstrate that the ebullient ensemble outperforms individual models and traditional ensemble approaches.
2. Related Work
Considerable research has addressed the communication and computation bottlenecks of federated learning (FL), and many methods have been proposed to improve different parts of the process [1,4,23]. FL involves extensive round-trip communication among many clients, typically over slow wireless links. Consequently, there is strong interest in designing communication-efficient FL systems. Prior studies have mostly sought to compress model updates and to reduce the number of communication rounds or participating clients. During each communication round, participating clients train models locally on-device for several epochs. Training deep learning models is usually expensive because it relies on backpropagation, which is computationally intensive; since most clients lack powerful hardware, computational efficiency is equally important. This has previously been addressed by reducing model complexity to ease local training. Communication and computation, however, are often two sides of the same coin: one way to lower communication frequency is to shift more work to computation [2,3,5,6,7,8,9,10,11,12,25]. HDC models are lightweight, which makes them well suited to edge devices with limited resources. Various machine-learning-based works can be found in [13,14,15,16,17,18,19,20,21,22,24].
3. Problem Statement
Predictive modeling in Artificial Intelligence (AI) is confronted with the perennial challenge of striking an optimal balance between accuracy and computational efficiency. Let $X$ denote the input space, $Y$ the output space, and $D = \{(x_i, y_i)\}_{i=1}^{N}$ the training data, where $i$ ranges from 1 to $N$ and $N$ is the number of instances in the training set. The objective of predictive modeling is to learn a mapping $f: X \to Y$ that generalizes well to unseen instances.
Traditional predictive models often grapple with the dichotomy of oversimplified and overly complex approaches. Consider a model $M$ parameterized by $\theta$, where $M$ could be a linear model, a decision tree, or a complex neural network. Oversimplified models, characterized by low model complexity and parameter count, are denoted $M_{\text{simple}}(\cdot\,;\theta_{\text{simple}})$. Conversely, overly complex models, characterized by high model complexity and parameter count, are denoted $M_{\text{complex}}(\cdot\,;\theta_{\text{complex}})$.
The oversimplified models fail to capture the intricacies of the underlying data distribution, leading to suboptimal predictive accuracy. Mathematically, their training objective can be expressed as

$$\min_{\theta_{\text{simple}}} \; \frac{1}{N} \sum_{i=1}^{N} L\big(M_{\text{simple}}(x_i; \theta_{\text{simple}}),\, y_i\big),$$

where $L$ is the chosen loss function. However, these models excel in computational efficiency, making them attractive choices where resource constraints are paramount.
On the other end of the spectrum, overly complex models tend to overfit the training data, capturing noise and idiosyncrasies that do not generalize to unseen instances. The optimization objective for these models is expressed as

$$\min_{\theta_{\text{complex}}} \; \frac{1}{N} \sum_{i=1}^{N} L\big(M_{\text{complex}}(x_i; \theta_{\text{complex}}),\, y_i\big) \;+\; \lambda\, \Omega(\theta_{\text{complex}}),$$

where $\Omega(\cdot)$ is a regularization term and $\lambda$ controls the trade-off between fitting the training data and controlling model complexity.
Our research is motivated by the need to reconcile this trade-off. We propose an ensemble approach, specifically an ebullient ensemble, which combines the strengths of diverse models while mitigating their respective weaknesses. Let $E = \{M_1, M_2, \dots, M_k\}$ represent the ebullient ensemble, where $k$ is the number of constituent models. The ebullient ensemble leverages algorithmic diversity and feature selection to achieve a dynamic synthesis of predictive models.
The optimization objective for the ebullient ensemble is expressed as

$$\min_{\{w_i,\,\theta_i\}} \; \frac{1}{N} \sum_{j=1}^{N} L\Big(\sum_{i=1}^{k} w_i\, M_i(x_j; \theta_i),\, y_j\Big) \;+\; \lambda \sum_{i=1}^{k} \Omega(\theta_i).$$
Our challenge lies in developing an efficient and accurate algorithm for selecting diverse models and optimizing their parameters within the ensemble. The ebullient ensemble approach seeks to navigate the conundrum of predictive modeling by achieving a delicate balance between accuracy and computational efficiency.
In brief, our challenge involves creating an ebullient ensemble framework that taps into the collective intelligence of diverse predictive models, balancing accuracy against computational efficiency in AI-driven predictive modeling.
4. General Architecture
Our research focuses on developing a novel ebullient ensemble framework for predictive modeling in artificial intelligence. Let $E = \{M_1, M_2, \dots, M_k\}$ denote the ebullient ensemble, where $k$ is the number of constituent models. The general architecture is a composition of diverse predictive models, each contributing a unique perspective to the overall prognostic picture. The ebullient ensemble framework consists of the following parts.
4.1. Constituent Model Selection
The ebullient ensemble comprises diverse predictive models, each intended to capture a different aspect of the underlying data distribution. Let $M_1, M_2, \dots, M_k$ denote the constituent models; their choice is central to the ensemble's vitality. Mathematically, the ebullient ensemble is defined as

$$E(x) = \sum_{i=1}^{k} w_i\, M_i(x),$$

where $w_i$ is the weight assigned to each constituent model. The weights adapt continuously according to each model's performance and its contribution to overall predictive accuracy. The difficult part is devising a method for selecting the constituent models that most improve the ensemble's predictive ability.
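To make this concrete, the following is a minimal sketch of how such a pool of constituent models might be instantiated with scikit-learn, the library named in Section 6.3. The specific algorithms and hyperparameter values are illustrative assumptions, not the exact configuration used in our experiments.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def build_constituent_models():
    """Instantiate a diverse pool of candidate models M_1, ..., M_k.

    Algorithm choices and hyperparameters are illustrative; Section 6.3
    only states that decision trees, SVMs, and neural networks are used.
    """
    return {
        "decision_tree": DecisionTreeClassifier(max_depth=5),
        # probability=True lets the SVM participate in soft-vote synthesis
        "svm": SVC(kernel="rbf", probability=True),
        "logistic_regression": LogisticRegression(max_iter=1000),
        "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    }
```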
4.2. Feature Selection Mechanism
The ebullient ensemble uses a feature selection mechanism to identify the most informative features for each constituent model. Let $X$ be the input space and $x_{ij}$ the $j$-th feature of the $i$-th instance. The feature selection mechanism is expressed as

$$F_i = \text{Select}_i(X),$$

where $F_i$ denotes the subset of features chosen for the $i$-th constituent model. The goal is to obtain the best feature set for each model so that the ensemble can exploit the diverse information contained in the feature space.
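As one possible instantiation of the $\text{Select}_i$ mechanism, the sketch below applies the two techniques named in Section 6.3, recursive feature elimination and an information-gain-style filter (approximated here with mutual information); the base estimator and the number of retained features are assumptions.

```python
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

def select_features(X, y, n_features=10, method="rfe"):
    """Compute F_i: a boolean mask over the m input features for one
    constituent model."""
    if method == "rfe":
        selector = RFE(DecisionTreeClassifier(), n_features_to_select=n_features)
    else:  # information-gain-style filter via mutual information
        selector = SelectKBest(mutual_info_classif, k=n_features)
    selector.fit(X, y)
    return selector.get_support()
```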
4.3. Model Parameter Optimization
The parameters of each constituent model are fine-tuned to balance fitting the training data against controlling model complexity. Let $\theta_i$ denote the parameters of the $i$-th constituent model and $L$ the chosen loss function. The optimization process is written as

$$\theta_i^{*} = \arg\min_{\theta_i} \; \frac{1}{N} \sum_{j=1}^{N} L\big(M_i(x_j; \theta_i),\, y_j\big) \;+\; \lambda\, \Omega(\theta_i),$$

where $\Omega(\theta_i)$ is a regularization term and $\lambda$ manages the balance between fitting the training data and keeping the model simple. The task is to devise an efficient optimization method that ensures the parameters of every model in the ebullient ensemble converge.
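A minimal sketch of this step, using the grid search with cross-validation described in Section 6.3; the scoring metric and the example parameter grid are assumptions.

```python
from sklearn.model_selection import GridSearchCV

def optimize_parameters(model, param_grid, X_train, y_train, cv=5):
    """Tune theta_i for one constituent model; GridSearchCV's refit step
    returns the model retrained on all of X_train with the best settings."""
    search = GridSearchCV(model, param_grid, cv=cv, scoring="accuracy")
    search.fit(X_train, y_train)
    return search.best_estimator_

# Hypothetical usage for a decision-tree constituent:
# best_tree = optimize_parameters(DecisionTreeClassifier(),
#                                 {"max_depth": [3, 5, 8]}, X_train, y_train)
```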
4.4. Dynamic Weight Assignment
The dynamic assignment of weights $w_i$ to the constituent models of the ebullient ensemble is a crucial aspect of our approach. The weights are determined by each model's performance on the validation set. Let $D_{\text{train}}$ and $D_{\text{val}}$ represent the training and validation datasets, respectively. The weight assignment mechanism is articulated as

$$w_i = \frac{\text{Perf}(M_i, D_{\text{val}})}{\sum_{j=1}^{k} \text{Perf}(M_j, D_{\text{val}})},$$

so that weights are proportional to validation performance and sum to one. The objective is to assign higher weights to models that contribute more substantially to predictive accuracy, fostering a dynamic, adaptive ensemble that adjusts to the nuances of different datasets.
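A minimal sketch of the weight computation, assuming the performance-proportional form above with accuracy as the $\text{Perf}$ measure (the text fixes only that weights depend on validation performance):

```python
import numpy as np

def assign_weights(models, X_val, y_val):
    """Compute w_i proportional to validation accuracy, normalized to sum to 1."""
    scores = np.array([m.score(X_val, y_val) for m in models])
    return scores / scores.sum()
```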
4.5. Ebullient Synthesis
The ebullient ensemble is a dynamic synthesis of its constituent models, achieved through the weighted summation of their predictions. The ebullient synthesis function is formulated as

$$\hat{y}(x) = \sum_{i=1}^{k} w_i\, M_i(x; \theta_i).$$

This synthesis encapsulates the vigor and vitality inherent in the ebullient ensemble, harmonizing the diverse predictions into a consolidated and more accurate prognostic outcome.
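For classification, the weighted summation can be realized as a soft vote over class probabilities. A minimal sketch, assuming every constituent exposes predict_proba:

```python
import numpy as np

def ebullient_predict(models, weights, X):
    """Weighted soft-vote synthesis: sum_i w_i * M_i(x), then argmax over classes."""
    probs = sum(w * m.predict_proba(X) for m, w in zip(models, weights))
    return np.argmax(probs, axis=1)
```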
4.6. Computational Efficiency Considerations
To optimize computation speed, our approach incorporates efficient techniques for feature selection, parameter tuning, and weight assignment, ensuring that the ebullient ensemble maintains computational efficiency without sacrificing prediction quality.
In summary, the proposed ebullient ensemble framework combines carefully selected constituent models, accelerated optimization, dynamic weight assignment, and an advanced feature selection mechanism. Together these elements yield a predictive model that outperforms both overly simplistic and overly complex alternatives; its combination of high accuracy and straightforward computational execution makes it well suited to AI-driven predictive modeling.
5. Proposed Technique
The proposed method, the ebullient ensemble approach, has been meticulously designed to navigate the intricate terrain of artificial intelligence predictive modeling. By effectively integrating multiple predictive models and leveraging their collective intelligence, our methodology achieves a sophisticated balance between accuracy and computational speed.
5.1. Formulation of the Ebullient Ensemble
Let $E$ represent the ebullient ensemble, a dynamic amalgamation of $k$ constituent models. The formulation of the ebullient ensemble is expressed as

$$E(x) = \sum_{i=1}^{k} w_i\, M_i(x; \theta_i),$$

where $M_i$ is the $i$-th constituent model, $\theta_i$ denotes its parameters, and $w_i$ represents the weight assigned to each model. The weights $w_i$ are dynamically determined based on the performance of $M_i$ on the validation set.
5.2. Algorithmic Diversity Through Constituent Models
The heart of our proposed technique lies in the careful selection of diverse constituent models, each chosen to encapsulate a unique perspective on the underlying data distribution. The ensemble gains algorithmic diversity by incorporating models with varying architectures, assumptions, and learning mechanisms.
Mathematically, the ebullient ensemble aims to harness the diversity of $k$ models, ensuring that each $M_i$ contributes distinctive insights to the overall predictive process. Algorithmic diversity is achieved through

$$M_i = \text{Algorithm}_i(D_{\text{train}}),$$

where $\text{Algorithm}_i$ encapsulates the learning algorithm and model architecture specific to $M_i$. The challenge is to devise a mechanism for the judicious selection of $k$ diverse models that collectively enhance overall predictive accuracy.
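Concretely, each $\text{Algorithm}_i$ can be realized by fitting one estimator on the training split, restricted to the feature subset $F_i$ produced by the feature selection mechanism of Section 5.3. A minimal sketch; the feature-mask interface is an assumption:

```python
def train_constituent(estimator, X_train, y_train, feature_mask):
    """M_i = Algorithm_i(D_train): fit one constituent on its selected features."""
    return estimator.fit(X_train[:, feature_mask], y_train)
```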
5.3. Feature Selection Mechanism
An essential aspect of our proposed technique is the incorporation of a feature selection mechanism within the ebullient ensemble. Let $X$ be the input space with $m$ features, and let $x_{ij}$ represent the $j$-th feature of the $i$-th instance. The feature selection mechanism is articulated as

$$F_i = \text{Select}_i(X), \qquad F_i \subseteq \{1, \dots, m\}.$$

The goal is to optimize the set of selected features for each constituent model $M_i$, ensuring that the ebullient ensemble benefits from the diverse information encapsulated in the feature space. The feature selection mechanism adds a layer of adaptability, allowing the ensemble to focus on the most informative features for each constituent model.
5.4. Optimization of Model Parameters
The optimization of model parameters is a critical component of our proposed technique, ensuring that each $M_i$ balances fitting the training data against controlling model complexity. Let $\theta_i$ represent the parameters of $M_i$, and let $L$ be the chosen loss function. The optimization process is formalized as

$$\theta_i^{*} = \arg\min_{\theta_i} \; \frac{1}{N} \sum_{j=1}^{N} L\big(M_i(x_j; \theta_i),\, y_j\big) \;+\; \lambda\, \Omega(\theta_i),$$

where $\Omega(\theta_i)$ represents a regularization term and $\lambda$ controls the trade-off between fitting the training data and controlling model complexity. The challenge is to devise an efficient optimization algorithm that ensures the convergence of model parameters for each constituent model within the ebullient ensemble.
5.5. Dynamic Weight Assignment
The dynamic assignment of weights $w_i$ to constituent models within the ebullient ensemble is a key innovation in our approach. As in Section 4.4, the weights are determined by each model's performance on the validation set: with $D_{\text{train}}$ and $D_{\text{val}}$ denoting the training and validation datasets, respectively,

$$w_i = \frac{\text{Perf}(M_i, D_{\text{val}})}{\sum_{j=1}^{k} \text{Perf}(M_j, D_{\text{val}})}.$$

The objective is to assign higher weights to models that contribute more substantially to predictive accuracy, fostering a dynamic, adaptive ensemble that adjusts to the nuances of different datasets.
5.6. Ebullient Synthesis
The ebullient ensemble achieves its dynamic synthesis through the weighted summation of predictions from the constituent models, as formulated in Section 4.5:

$$\hat{y}(x) = \sum_{i=1}^{k} w_i\, M_i(x; \theta_i).$$

This synthesis harmonizes the diverse predictions into a consolidated, more accurate prognostic outcome, and it is a manifestation of the collective intelligence harnessed through dynamic weight assignment and diverse algorithmic contributions.
In conclusion, the proposed ebullient ensemble technique integrates diverse predictive models, optimizes feature selection and model parameters, dynamically assigns weights, and synthesizes predictions. The mathematical formulations encapsulate the essence of our approach, offering a nuanced solution to the tension between accuracy and computational efficiency in predictive modeling for artificial intelligence.
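To tie the pieces together, the following end-to-end sketch runs the whole pipeline on synthetic data under the same assumptions as the component sketches in Section 4 (accuracy-proportional weights, soft-vote synthesis). It is illustrative, not our experimental code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the healthcare/finance/NLP datasets of Section 6.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Diverse constituent models M_1..M_k (illustrative choices).
models = [
    DecisionTreeClassifier(max_depth=5).fit(X_train, y_train),
    SVC(kernel="rbf", probability=True).fit(X_train, y_train),
    LogisticRegression(max_iter=1000).fit(X_train, y_train),
]

# Dynamic weights w_i proportional to validation accuracy.
scores = np.array([m.score(X_val, y_val) for m in models])
weights = scores / scores.sum()

# Ebullient synthesis: weighted soft vote over class probabilities.
probs = sum(w * m.predict_proba(X_test) for m, w in zip(models, weights))
y_pred = np.argmax(probs, axis=1)
print(f"Ensemble test accuracy: {accuracy_score(y_test, y_pred):.3f}")
```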
6. Experimental Setup
In order to determine the effectiveness of the proposed ebullient ensemble method empirically, an exhaustive experimental configuration was devised. The assessment procedure incorporates a wide range of datasets from the healthcare, finance, and natural language processing sectors, thereby guaranteeing the adaptability and generalizability of our methodology.
6.1. Datasets
6.1.1. Healthcare Dataset
The healthcare dataset consists of patient records that encompass various components, including diagnostic information, medical history, and vital signs. The target variable is a binary indicator of the progression of the disease. The difficulties of prognosticating intricate medical conditions are simulated by this dataset.
6.1.2. Finance Dataset
The finance dataset encapsulates historical market data, economic indicators, and company-specific metrics. The target variable relates to stock price movement, challenging the ebullient ensemble to navigate the intricacies of financial markets and make informed predictions.
6.1.3. Natural Language Processing Dataset
The natural language processing dataset involves text corpora for sentiment analysis. The features include linguistic patterns and semantic representations, while the target variable is sentiment polarity. This dataset tests the ebullient ensemble's ability to discern subtle nuances in language.
6.2. Evaluation Metrics
In order to evaluate the performance of the ebullient ensemble, a collection of metrics specific to each dataset’s characteristics is applied. Metrics frequently utilized in classification tasks consist of accuracy, precision, recall, and F1 score; for regression tasks, mean squared error is employed. AUC-ROC, which stands for area under the receiver operating characteristic curve, is also utilized to assess the discriminatory capability of the ensemble in classification scenarios.
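These metrics map directly onto scikit-learn's metric functions. A minimal sketch for one binary classification dataset (the helper name is ours):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate_dataset(y_true, y_pred, y_score):
    """Compute the Table 1 metrics plus AUC-ROC for a binary task.

    y_score holds positive-class probabilities, as required by roc_auc_score.
    """
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred),
        "F1 Score": f1_score(y_true, y_pred),
        "AUC-ROC": roc_auc_score(y_true, y_score),
    }
```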
6.3. Implementation Details
The ebullient ensemble framework was implemented in Python using popular machine learning libraries such as scikit-learn and TensorFlow. The constituent models encompass a spectrum of algorithms, including decision trees, support vector machines, and neural networks. The feature selection mechanism leverages techniques such as recursive feature elimination and information gain.
The model parameters are optimized using grid search coupled with cross-validation to ensure robust performance across different parameter configurations. The dynamic weight assignment mechanism is fine-tuned through extensive experimentation, incorporating model performance on both training and validation sets.
7. Results
The results of our experiments, summarized in Table 1, underscore the effectiveness of the ebullient ensemble approach across diverse domains. The table presents the performance metrics for each dataset, demonstrating the superiority of the ebullient ensemble over individual models and conventional ensemble methods.
The results consistently show high accuracy across datasets, confirming the ebullient ensemble's predictive strength. Precision and recall indicate how well the ensemble handles false positives and false negatives, which matters greatly where the cost of misclassification is high. The F1 score reflects a good balance between precision and recall.
In addition, the AUC-ROC values, omitted from the table for space, exceed 0.90 on every dataset, indicating strong discriminative ability. This is especially important where distinguishing positive from negative cases is critical.
Overall, the experimental results show that the ebullient ensemble outperforms individual models and standard ensemble methods across a variety of datasets. Its effectiveness in multiple domains illustrates its flexibility and utility for the challenges of predictive modeling in AI.
8. Conclusion
In this study, we presented and evaluated the ebullient ensemble method, a dynamic, adaptable framework for predictive modeling in artificial intelligence. Our goal was to address a long-standing problem in machine learning: reconciling accuracy with computational speed.
The experiments in Sections 6 and 7 show that the ebullient ensemble handles a wide range of datasets and data types. Its high accuracy, precision, recall, F1, and AUC-ROC scores demonstrate that the ensemble is robust and flexible. These results support the claim that the ebullient ensemble strikes the right balance and outperforms both standard ensemble methods and standalone models.
Several factors contribute to the success of our method. The carefully chosen variety of constituent models, each offering a unique perspective, gives the ensemble algorithmic diversity. The feature selection process makes the ensemble more adaptable by letting it focus on the most informative features for each constituent model. Dynamic weight assignment makes the best use of each model's strengths, producing an ensemble that adapts and responds.
Our work adds to the broader conversation on predictive modeling by providing a solution that balances the demands of accuracy and computational speed while remaining interpretable, fast to deploy, and applicable across domains, from healthcare to finance to natural language processing.
In summary, the ebullient ensemble method demonstrates the power of dynamic synthesis in predictive modeling. Future work may examine whether the ebullient ensemble scales to larger datasets and more complex architectures. The search for optimal prediction continues, and the ebullient ensemble is a significant step toward the elusive balance between accuracy and computational efficiency in artificial intelligence.
References
- Alba, A.M., Kellerer, W.: Dynamic Functional Split Adaptation in Next-Generation Radio Access Networks. IEEE Transactions on Network and Service Management (2022). [CrossRef]
- Manam, V.C., Mahendran, V., Siva Ram Murthy, C.: Message-driven based energy-efficient routing in heterogeneous delay-tolerant networks. In: Proceedings of the 1st ACM Workshop on High Performance Mobile Opportunistic Systems, pp. 39–46 (2012).
- Manam, V.C., Mahendran, V., Siva Ram Murthy, C.: Performance modeling of DTN routing with heterogeneous and selfish nodes. Wireless Networks 20, 25–40 (2014).
- ETSI: Cloud RAN and MEC: A Perfect Pairing. White Paper, First Edition, European Telecommunications Standards Institute (ETSI) (2018).
- Gangopadhyay, A., Devi, S., Tenguria, S., Carriere, J., Nguyen, H., Jäger, E., Khatri, H., Chu, L.H., Ratsimandresy, R.A., Dorfleutner, A., et al.: NLRP3 licenses NLRP11 for inflammasome activation in human macrophages. Nature Immunology 23, 892–903 (2022).
- Manam, V.K.C., Jampani, D., Zaim, M., Wu, M.H., Quinn, A.J.: TaskMate: A mechanism to improve the quality of instructions in crowdsourcing. In: Companion Proceedings of The 2019 World Wide Web Conference, pp. 1121–1130 (2019).
- Kumar, R., Srivastava, V., Nand, K.N.: The two sides of the COVID-19 pandemic. COVID 3, 1746–1760 (2023). [CrossRef]
- Manam, V., Quinn, A.: Wingit: Efficient refinement of unclear task instructions. In: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 6, pp. 108–116 (2018).
- Manam, V.C., Gurav, G., Murthy, C.S.R.: Performance modeling of message-driven based energy-efficient routing in delay-tolerant networks with individual node selfishness. In: COMSNETS ’13: Proceedings of the 5th International Conference on Communication Systems and Networks, pp. 1–6. IEEE (2013).
- Manam, V.C., Mahendran, V., Murthy, C.S.R.: Performance modeling of routing in delay-tolerant networks with node heterogeneity. In: COMSNETS ’12: Proceedings of the 4th International Conference on Communication Systems and Networks, pp. 1–10. IEEE (2012).
- Manam, V.C., Thomas, J.D., Quinn, A.J.: Tasklint: Automated detection of ambiguities in task instructions. In: Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 10, pp. 160–172 (2022).
- Manam, V.K.C.: Efficient disambiguation of task instructions in crowdsourcing. Ph.D. thesis, Purdue University Graduate School (2023).
- Nokhwal, S., Chandrasekharan, M., Chaudhary, A.: Secure information embedding in images with hybrid firefly algorithm. arXiv preprint arXiv:2312.13519 (2023).
- Nokhwal, S., Chilakalapudi, P., Donekal, P., Chandrasekharan, M., Nokhwal, S., Swaroop, R., Bala, R., Pahune, S., Chaudhary, A.: Accelerating neural network training: A brief review. arXiv preprint arXiv:2312.10024 (2023).
- Nokhwal, S., Chilakalapudi, P., Donekal, P., Nokhwal, S., Pahune, S., Chaudhary, A.: Accelerating neural network training: A brief review. arXiv preprint arXiv:2312.10024 (2023).
- Nokhwal, S., Kumar, N.: Dss: A diverse sample selection method to preserve knowledge in class-incremental learning. arXiv preprint arXiv:2312.09357 (2023).
- Nokhwal, S., Kumar, N.: Pbes: Pca based exemplar sampling algorithm for continual learning. arXiv preprint arXiv:2312.09352 (2023).
- Nokhwal, S., Kumar, N.: Rtra: Rapid training of regularization-based approaches in continual learning. arXiv preprint arXiv:2312.09361 (2023).
- Nokhwal, S., Kumar, N., Shiva, S.G.: Investigating the terrain of class-incremental continual learning: A brief survey. In: International Conference on Communication and Computational Technologies. Springer (2024).
- Nokhwal, S., Nokhwal, S., Pahune, S., Chaudhary, A.: Quantum generative adversarial networks: Bridging classical and quantum realms. arXiv preprint arXiv:2312.09939 (2023).
- Nokhwal, S., Nokhwal, S., Swaroop, R., Bala, R., Chaudhary, A.: Quantum generative adversarial networks: Bridging classical and quantum realms. arXiv preprint arXiv:2312.09939 (2023).
- Nokhwal, S., Pahune, S., Chaudhary, A.: Embau: A novel technique to embed audio data using shuffled frog leaping algorithm. In: Proceedings of the 2023 7th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence, pp. 79–86 (2023).
- Polese, M., Bonati, L., D’Oro, S., Basagni, S., Melodia, T.: ColO-RAN: Developing Machine Learning-based xApps for Open RAN Closed-loop Control on Programmable Experimental Platforms. IEEE Transactions on Mobile Computing pp. 1–14 (2022). [CrossRef]
- Tanwer, A., Reel, P.S., Reel, S., Nokhwal, S., Nokhwal, S., Hussain, M., Bist, A.S.: System and method for camera based cloth fitting and recommendation (2020). US Patent App. 16/448,094.
- Unmesh, A., Jain, R., Shi, J., Manam, V.C., Chi, H.G., Chidambaram, S., Quinn, A., Ramani, K.: Interacting objects: A dataset of object-object interactions for richer dynamic scene representations. IEEE Robotics and Automation Letters 9, 451–458 (2023).
Table 1. Performance Metrics of the Ebullient Ensemble.

| Dataset    | Accuracy | Precision | Recall | F1 Score |
|------------|----------|-----------|--------|----------|
| Healthcare | 0.92     | 0.91      | 0.94   | 0.92     |
| Finance    | 0.87     | 0.88      | 0.85   | 0.87     |
| NLP        | 0.89     | 0.91      | 0.88   | 0.89     |