Computer Science and Mathematics

Short Note
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stefan Haufe

Abstract: The purpose of this work is to collect generic high-level requirements on interpretation tools. These requirements aim to ensure that the deployer of an AI system can indeed use such tools to assess and assure the quality and proper functioning of the system. I argue that the concrete purpose of an interpretation tool needs to be specified, that the information provided through its output needs to be unambiguously defined, that its utility for serving the specified purpose needs to be demonstrated, and that sufficient evidence needs to be provided that the tool's output is accurate and precise enough for the intended purpose to be fulfilled.
Article
Computer Science and Mathematics
Mathematics

Kejia Hu,

Hongyi Li,

Di Zhao,

Yuan Jiang,

Baozhu Li

Abstract: The Kohn-Nirenberg domains are unbounded domains in $\mathbb{C}^{n}$. In this article, we modify the Kohn-Nirenberg domain $\Omega_{K,L} = \{(z_{1},\ldots,z_{n}) \in \mathbb{C}^{n} : \operatorname{Re} z_{n} + g|z_{n}|^{2} + \sum_{j=1}^{n-1} (|z_{j}|^{p} + K_{j}|z_{j}|^{p-q} \operatorname{Re} z_{j}^{q} + L_{j}|z_{j}|^{p-2q} \operatorname{Im} z_{j}^{2q}) < 0\}$ and discuss the existence of a supporting surface and a peak function at the origin.
Article
Computer Science and Mathematics
Computational Mathematics

Maricela Fernanda Ormaza Morejón,

Rolando Ismael Yépez Moreira

Abstract: The identification of influential nodes in complex networks is fundamental for assessing their importance, particularly when simultaneously considering topological structure and nodal attributes. In this paper, we introduce SL-WLEN (Semi-local Centrality with Weighted and Lexicographic Extended Neighborhood), a novel centrality metric designed to identify the most influential nodes in complex networks. SL-WLEN integrates topological structure and nodal attributes by combining local components (degree and nodal values) with semi-local components (Local Relative Average Shortest Path LRASP and lexicographic ordering), thereby overcoming limitations of existing methods that treat these aspects independently. The incorporation of lexicographic ordering preserves the relative importance of nodes at each neighborhood level, ensuring that those with high values maintain their influence in the final metric without distortions from statistical aggregations. The metric was validated on a chip manufacturing quality control network comprising 1,555 nodes, where each node represents a critical process characteristic. The weighted connections between nodes reflect correlations among characteristics, enabling the evaluation of how changes propagate through the system and affect final product quality. Robustness testing demonstrates that SL-WLEN maintains high stability under various perturbations: preserving Top-1 rankings (98%) and correlations (R² > 0.92) even with 50% link removal, while maintaining robustness above 80% under moderate network modifications. These findings evidence its effectiveness for complex network analysis in dynamic environments.
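The full SL-WLEN formula (LRASP, lexicographic ordering) is not reproduced in the abstract, but the core idea of blending topology with nodal attributes over an extended neighborhood can be sketched in a few lines. The 2-hop neighborhood, the 50/50 weighting, and all names below are illustrative assumptions, not the authors' definitions.

```python
# Illustrative semi-local score: combine a node's normalized degree with the
# average attribute value over its 1- and 2-hop neighborhood.

def two_hop_neighbors(adj, node):
    """Return the set of nodes within two hops of `node` (excluding itself)."""
    one_hop = set(adj[node])
    two_hop = set()
    for n in one_hop:
        two_hop |= set(adj[n])
    return (one_hop | two_hop) - {node}

def semi_local_score(adj, values, node, alpha=0.5):
    """Blend normalized degree (topology) with mean neighbor value (attributes)."""
    max_deg = max(len(nbrs) for nbrs in adj.values())
    degree_part = len(adj[node]) / max_deg
    nbrs = two_hop_neighbors(adj, node)
    attr_part = sum(values[n] for n in nbrs) / len(nbrs) if nbrs else 0.0
    return alpha * degree_part + (1 - alpha) * attr_part

# Toy network: node 'a' is a hub connected to high-value nodes.
adj = {'a': ['b', 'c', 'd'], 'b': ['a', 'c'], 'c': ['a', 'b'], 'd': ['a']}
values = {'a': 0.2, 'b': 0.9, 'c': 0.8, 'd': 0.7}
scores = {n: semi_local_score(adj, values, n) for n in adj}
top = max(scores, key=scores.get)
```

In this toy graph the hub 'a' ranks first because both its degree and the values of its neighbors are high, which is the kind of joint topology-and-attribute ranking the metric targets.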
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mehdi Imani

Abstract: In this study, a range of machine learning models, including Artificial Neural Networks, Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, and advanced gradient boosting methods (XGBoost, LightGBM, and CatBoost), were examined for their efficacy in predicting customer churn within the telecommunications industry. The research utilized a publicly accessible dataset for this purpose. The effectiveness of these models was measured using established evaluation metrics such as Precision, Recall, F1-score, and the Receiver Operating Characteristic Area Under Curve (ROC AUC). The findings of the research emphasize the exceptional effectiveness of boosting algorithms in managing the complex aspects of predicting customer churn. In particular, LightGBM was remarkable, securing an outstanding F1-score of 92% and an ROC AUC of 91%. These figures greatly exceed the performance of conventional models such as Decision Trees and Logistic Regression. This highlights the superiority of sophisticated machine learning methods in dealing with challenges posed by imbalanced datasets and complex interrelations among features.
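The evaluation metrics named above are standard; a minimal re-implementation shows how Precision, Recall, and F1 are derived from raw prediction counts. The toy churn labels are invented for illustration and are unrelated to the study's dataset.

```python
# Compute Precision, Recall, and F1 from paired true/predicted labels.

def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy churn labels: 1 = churned, 0 = retained.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

On imbalanced churn data these count-based metrics are more informative than plain accuracy, which is why the study reports them alongside ROC AUC.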
Article
Computer Science and Mathematics
Information Systems

Boris Chigarev

Abstract: Background. Nowadays, bibliometric analyses of data from abstract databases are often used to identify relevant research problems in order to rationalize the use of financial and other resources. The aim of this paper was to demonstrate the importance of pre-processing the text fields of bibliometric records to construct a term co-occurrence network and the feasibility of subsequently using Scimago Graphica to examine different slices of clustering results in detail in order to identify relevant research topics. Materials and Methods. A total of 8051 records exported from Scopus matching a filter (LIMIT-TO (EXACTKEYWORD, ‘Petroleum Reservoir Engineering’)) over the last ten years were used. VOSviewer and Scimago Graphica were applied for bibliometric analysis. The results of the study showed: the relevance of using the ‘LIMIT-TO EXACTKEYWORD’ filter in the Scopus query; the expediency of expanding abbreviations in the text fields of records and of preliminary cleaning of the texts; the effectiveness of using filters in Scimago Graphica to build a term co-occurrence network in order to identify promising research topics; and promising research objectives arising from the analysis, which can be described by the following terms: (1) nanopores, shale oil, pore size, molecular; (2) nanoparticles. It is also observed that in some cases terms occurring in the same cluster are not the best choice for querying in order to expand the collection of publications on a given topic. Therefore, it is proposed to conduct a separate study using Apriori-class algorithms for this purpose.
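The term co-occurrence network at the heart of this workflow starts from simple pair counts over each record's keyword list. The sketch below shows that counting step on fabricated records; real pipelines (VOSviewer and similar) add normalization and thresholding on top.

```python
# Count keyword co-occurrences across records: each unordered pair of
# keywords appearing in the same record increments one edge weight.
from collections import Counter
from itertools import combinations

records = [
    ["shale oil", "nanopores", "pore size"],
    ["nanoparticles", "shale oil"],
    ["nanopores", "shale oil"],
]

pairs = Counter()
for kws in records:
    for a, b in combinations(sorted(set(kws)), 2):  # sort for a canonical key
        pairs[(a, b)] += 1
```

The resulting `pairs` counter is exactly the weighted edge list a tool like Scimago Graphica would filter and visualize.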
Article
Computer Science and Mathematics
Information Systems

Yu Chen,

Jia Li,

Erik Blasch,

Qian Qu

Abstract: The convergence of the Internet of Physical-Virtual Things (IoPVT) and the Metaverse presents a transformative opportunity for safety and health monitoring in outdoor environments. This concept paper explores how integrating human activity recognition (HAR) with IoPVT within the Metaverse can revolutionize public health and safety, particularly in urban settings with challenging climates and architectures. By seamlessly blending physical sensor networks with immersive virtual environments, the paper highlights a future where real-time data collection, digital twin modeling, advanced analytics, and predictive planning proactively enhance safety and well-being. Specifically, three dimensions, humans, technology, and the environment, interact toward measuring safety, bio-health, and climate. The vision covers three cultural scenarios: urban, rural, and coastal locations. Our envisioned system would deploy smart sensors on external staircases alongside bio-health, climate, and infrastructure sensors. Data from cameras, bio-sensors, and IoT sensors would support safe human activity recognition, routing, and planning, feeding real-time streams into the Metaverse to create dynamic virtual representations of physical spaces. Advanced HAR algorithms and predictive analytics would identify potential hazards, enabling timely interventions and reducing accidents. We discuss the technological innovations enabling this vision, including advancements in sensor technologies, ubiquitous connectivity, and AI-driven HAR techniques. The paper also explores the societal benefits, such as proactive health monitoring, enhanced emergency response, and contributions to smart city initiatives. Additionally, we address the challenges and research directions necessary to realize this future, emphasizing technical scalability, ethical considerations, and the importance of interdisciplinary collaboration for designs and policies.
By articulating an AI-driven HAR vision along with required advancements in edge-based sensor data fusion, city responsiveness with fog computing, and social planning through cloud analytics, we aim to inspire the academic community, industry stakeholders, and policymakers to collaborate in shaping a future where technology profoundly improves outdoor health monitoring, enhances public safety, and enriches the quality of urban life.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Vasileios Pitsiavas,

Georgios Spanos,

Sofia Polymeni,

Antonios Lalas,

Konstantinos Votis,

Dimitrios Tzovaras

Abstract: Achieving the Sustainable Development Goals (SDGs) requires a transition from conventional fossil-fuel-powered vehicles to alternative energy sources, such as electricity. However, accurately forecasting energy consumption remains a critical challenge in the widespread adoption of Electric Vehicles (EVs), as it directly impacts operational efficiency, route planning, and charging strategies. To address this, a novel approach is proposed, combining advanced machine learning models, such as XGBoost, Random Forest, and regression-based techniques, with innovative dataset manipulation using statistical methods. The methodology integrates feature engineering to incorporate vehicle-specific metrics, including driving patterns and environmental conditions, ensuring models dynamically adapt to real-world scenarios. The proposed framework demonstrates high accuracy and robustness in predicting energy consumption, providing valuable insights for sustainable transportation and efficient energy management toward SDG achievement.
Article
Computer Science and Mathematics
Software

Raghunath Dey,

Jayashree Piri,

Biswaranjan Acharya,

Pragyan Paramita Das,

Vassilis C. Gerogiannis,

Andreas Kanavos

Abstract: Software defect prediction aims to identify defect-prone modules before testing, reducing costs and duration. Machine learning (ML) techniques are widely used to develop predictive models for classifying defective software components. However, high-dimensional training datasets often degrade classification accuracy and precision due to irrelevant or redundant features. To address this, effective feature selection is crucial, but it poses an NP-hard challenge that can be efficiently tackled using heuristic algorithms. This study introduces a Binary Multi-Objective Starfish Optimizer (BMOSFO) for optimal feature selection, enhancing classification accuracy and precision. The proposed BMOSFO balances two conflicting objectives: minimizing the number of selected features and maximizing classification performance. A Choquet Fuzzy Integral-based Ensemble Classifier is then employed to further enhance prediction reliability by aggregating multiple classifiers. The effectiveness of the proposed approach is validated using five real-world NASA benchmark datasets, demonstrating superior performance compared to traditional classifiers. Experimental results reveal that key software metrics—such as design complexity, operators and operands count, lines of code, and number of branches—significantly influence defect prediction. The findings confirm that BMOSFO not only reduces feature dimensionality but also enhances classification performance, providing a robust and interpretable solution for software defect prediction. This approach shows strong potential for generalization to other high-dimensional classification tasks.
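The binary encoding and the two conflicting objectives described above can be made concrete in a few lines. The scorer below is a stand-in that rewards two specific "useful" features; it is not the paper's classifier, and the dominance test is the generic Pareto rule, not the starfish update itself.

```python
# Wrapper-style feature selection: a candidate is a bit mask over features,
# evaluated on (number of selected features, solution quality).

def evaluate(mask, useful=frozenset({1, 3})):
    """Return (n_selected, quality): minimize the first, maximize the second."""
    selected = {i for i, bit in enumerate(mask) if bit}
    n_selected = len(selected)
    quality = len(selected & useful) / len(useful)  # fraction of useful features kept
    return n_selected, quality

def dominates(a, b):
    """Pareto dominance: no worse on both objectives, strictly better on one."""
    (na, qa), (nb, qb) = a, b
    return (na <= nb and qa >= qb) and (na < nb or qa > qb)

full = evaluate([1, 1, 1, 1, 1])   # all 5 features selected
lean = evaluate([0, 1, 0, 1, 0])   # only the two useful ones
```

A multi-objective optimizer such as BMOSFO keeps the non-dominated masks; here the lean mask dominates the full one because it matches its quality with fewer features.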
Article
Computer Science and Mathematics
Robotics

Zhuo Yao

Abstract: Multi-agent pathfinding (MAPF) holds significant utility within autonomous systems; however, the computation time and memory space required for MAPF grow exponentially as the number of agents increases. This often results in some MAPF instances being unsolvable under limited computational resources and memory space, thereby limiting the application of MAPF in complex scenarios. Hence, we propose a decomposition approach for MAPF instances, which breaks down instances involving a large number of agents into multiple isolated subproblems involving fewer agents. Moreover, we present a framework that enables general MAPF algorithms to solve each subproblem independently and merge their solutions into one conflict-free final solution, while avoiding loss of solvability as much as possible. Unlike existing works that propose isolated methods aimed at reducing the time cost of MAPF, our method is applicable to all MAPF methods. In our experiments, we apply decomposition to multiple state-of-the-art MAPF methods using a classic MAPF benchmark\footnote{https://movingai.com/benchmarks/mapf.html}. The decomposition of MAPF instances completes on average within 1 s, and its application to seven MAPF methods reduces memory usage or time cost significantly, particularly for serial methods. Based on extensive experiments, we estimate that the probability of loss of solvability caused by our method is below 1\%. To facilitate further research within the community, we have made the source code of the proposed algorithm publicly available\footnote{https://github.com/JoeYao-bit/LayeredMAPF/tree/minimize\_dependence}.
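The decomposition idea can be illustrated with a deliberately simplified dependence test: here, agents whose start-goal bounding boxes overlap are grouped into one subproblem via union-find, and disjoint groups can be solved independently. The bounding-box test is an assumption for illustration; the paper's actual dependence analysis is more involved.

```python
# Toy MAPF instance decomposition: agents = list of ((sx, sy), (gx, gy)).

def bbox(agent):
    (sx, sy), (gx, gy) = agent
    return (min(sx, gx), min(sy, gy), max(sx, gx), max(sy, gy))

def overlap(a, b):
    ax1, ay1, ax2, ay2 = bbox(a)
    bx1, by1, bx2, by2 = bbox(b)
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def decompose(agents):
    parent = list(range(len(agents)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(agents)):
        for j in range(i + 1, len(agents)):
            if overlap(agents[i], agents[j]):
                parent[find(i)] = find(j)  # union dependent agents
    groups = {}
    for i in range(len(agents)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

agents = [((0, 0), (2, 2)), ((1, 1), (3, 3)), ((10, 10), (12, 12))]
subproblems = decompose(agents)
```

Agents 0 and 1 share territory and land in one subproblem; agent 2 forms its own, so any off-the-shelf MAPF solver can handle each group separately before the solutions are merged.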
Review
Computer Science and Mathematics
Computer Networks and Communications

Qutaiba Ibrahim,

Zena Ali

Abstract: The Controller Area Network (CAN) bus has been a cornerstone in vehicular communication, facilitating robust and efficient data exchange among electronic control units (ECUs). This paper provides a comprehensive review of the classical CAN bus, CAN FD, and their key attributes, including message prioritization, arbitration mechanisms, and error detection. Additionally, the paper explores the IEEE 802.11b wireless standard, emphasizing its potential for extending CAN-based networks into wireless domains. The study categorizes existing literature into wired and wireless CAN applications, highlighting advancements, challenges, and limitations in both areas. A critical gap identified in current research is the lack of performance assessment of ECUs, particularly in autonomous vehicle (AV) applications. Moreover, most wireless implementations of CAN rely on Bluetooth, Zigbee, or IEEE 802.11b, which are constrained by limited data rates and scalability. This review outlines the necessity for more integrated, high-performance wireless CAN solutions to enhance vehicular network efficiency, particularly in AV environments.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Joan D. Gonzalez-Franco,

Alejandro Galaviz-Mosqueda,

Salvador Villarreal-Reyes,

Jose E. Lozano-Rizk,

Raul Rivera-Rodriguez,

Jose E. Gonzalez-Trejo,

Alexei-Fedorovish Licea-Navarro,

Jorge Lozoya-Arandia,

Edgar A. Ibarra-Flores

Abstract: Cardiovascular diseases stand as the leading cause of mortality worldwide, underscoring the urgent need for effective tools that enable early detection and monitoring of at-risk patients. This study combines Artificial Intelligence (AI) techniques, specifically the K-means clustering algorithm, with dimensionality reduction methods such as Principal Component Analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP) to identify patient groups with varying levels of heart attack risk. We used a publicly available clinical dataset with 1319 patient records, which included variables such as age, gender, blood pressure, glucose levels, KCM, and troponin levels. We normalized and prepared the data, then employed PCA and UMAP to reduce dimensionality and facilitate visualization. Using the K-means algorithm, we segmented the patients into distinct groups based on their clinical features. Our analysis revealed two distinct patient groups. Group 2 exhibited significantly higher levels of troponin (mean 0.4761 ng/mL), KCM (18.65 ng/mL), and glucose (mean 148.19 mg/dL) and was predominantly composed of men (97%). These factors indicate an increased risk of cardiac events compared to Group 1, which had lower levels of these biomarkers and a slightly higher average age. Interestingly, no significant differences in blood pressure were observed between the groups. This study demonstrates the effectiveness of combining Machine Learning (ML) techniques with dimensionality reduction methods to enhance risk stratification accuracy in cardiology. By enabling more targeted interventions for high-risk patients, our approach contributes to improved prevention strategies.
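The clustering step itself is simple enough to sketch. Below is a minimal 1-D K-means run on troponin-like values; the values and k = 2 are fabricated for illustration and do not come from the study's 1319-record dataset.

```python
# Minimal 1-D K-means: assign points to the nearest center, then move each
# center to the mean of its group, repeating for a fixed number of iterations.

def kmeans_1d(xs, k=2, iters=20):
    centers = sorted(xs)[:: max(1, len(xs) // k)][:k]  # crude spread-out init
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Troponin-like toy values (ng/mL): a low-risk and a high-risk cluster.
xs = [0.01, 0.02, 0.03, 0.05, 0.40, 0.45, 0.50, 0.60]
centers, groups = kmeans_1d(xs)
```

The two recovered centers mirror the study's finding of a low-biomarker group and a high-biomarker group; in the paper this is done in the PCA/UMAP-reduced multi-feature space rather than on one variable.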
Article
Computer Science and Mathematics
Signal Processing

Josip Sabic,

Toni Perković,

Dinko Begušić,

Petar Šolić

Abstract: LoRaWAN networks are increasingly recognized for their vulnerability to various jamming attacks, which can significantly disrupt communication between end nodes and gateways. This paper explores the feasibility of implementing reactive jammers that trigger upon detecting a packet transmission, using commercially available equipment based on Software-Defined Radios (SDRs). The proposed approach demonstrates how attackers can exploit packet detection to initiate targeted interference, effectively compromising message integrity. Two distinct experimental setups, one using separate SDRs for reception and transmission and another leveraging a single SDR for both functions, were used to evaluate attack efficiency, reaction times, and packet loss ratios. Our experiments demonstrate that both scenarios effectively jam LoRaWAN packets across a range of spreading factors and payload sizes. This finding underscores a pressing need for enhanced security measures to maintain reliability and counter sophisticated attacks.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Fátima Rodrigues,

Miguel Machado

Abstract: The cryptocurrency market is currently one of the most interesting areas for investment, attracting both experienced and casual investors. Although it can offer high returns, it also poses significant risks due to its high volatility. In this context, artificial intelligence, particularly through deep learning and machine learning algorithms, has played a key role in developing applications that provide investment advice, with the aim of maximizing returns and reducing investment risks. This study proposes a system for forecasting the closing prices of ten of the leading cryptocurrencies currently available in the market, presented in a web application capable of making predictions ranging from one to four hours ahead. To achieve this, different models using various machine learning and deep learning algorithms were analyzed and tested, including Recurrent Neural Networks, time series analysis algorithms such as ARIMA, and some more conventional regression algorithms. For algorithm comparison, minute-step Bitcoin price data over a 30-day period was used to forecast prices 60 minutes ahead. Through extensive experimentation, the GRU neural network demonstrated superior predictive accuracy, achieving MAPE = 0.09%, MSE = 5954.89, RMSE = 77.17, and MAE = 60.20. A web application was also developed, which integrates the best-performing model to provide real-time price predictions for multiple cryptocurrencies.
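The error metrics reported above are straightforward to implement directly, which also clarifies their units: MAPE is scale-free (a percentage), while RMSE and MAE are in price units. The tiny price series is fabricated for illustration.

```python
# MAPE, RMSE, and MAE for a forecast versus actual series.
import math

def mape(actual, pred):
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def mae(actual, pred):
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

actual = [100.0, 102.0, 101.0, 105.0]
pred   = [101.0, 101.0, 102.0, 104.0]
m, r, a = mape(actual, pred), rmse(actual, pred), mae(actual, pred)
```

This scale dependence is why a tiny MAPE (0.09%) can coexist with an RMSE of 77.17 when Bitcoin trades at tens of thousands of dollars.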
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Eunice Oyedokun,

Barnty William

Abstract: The rapid proliferation of fake news on digital platforms has emerged as a significant challenge, undermining trust in information and influencing public opinion. To combat this issue, researchers have increasingly turned to machine learning (ML) techniques for automated fake news detection. This paper explores the application of various ML approaches, including supervised, unsupervised, and deep learning models, to identify and classify fake news. Key techniques such as natural language processing (NLP), sentiment analysis, and feature extraction are discussed, highlighting their role in improving detection accuracy. Additionally, the challenges of dataset quality, model interpretability, and real-time detection are addressed. The study concludes that while ML techniques show promise in fake news detection, ongoing advancements in model robustness and adaptability are essential to keep pace with the evolving nature of misinformation.
Article
Computer Science and Mathematics
Computer Vision and Graphics

Yuang Chen,

Yong Li,

Shaohua Li,

Shuhan Lv,

Fang Lin

Abstract: This paper proposes a lightweight violent behavior recognition model, DualCascadeTSF-MobileNetV2, which builds on the temporal shift module (TSM) and subsequent research. By introducing the Dual Cascade Temporal Shift and Fusion module, the model further enhances feature correlation along the time dimension and alleviates the information sparsity caused by repeated temporal shifts. Meanwhile, the model incorporates the efficient lightweight structure of MobileNetV2, significantly reducing the number of parameters and the computational complexity. Experiments were conducted on three public violent behavior datasets, Crowd Violence, RWF-2000, and Hockey Fights, to verify the performance of the model. The results show that it outperforms other classic models in terms of accuracy, computational speed, and memory footprint, especially among lightweight models. This research continues and expands on previous achievements in TSM and lightweight network design, providing a new solution for real-time violent behavior recognition on edge devices.
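The temporal shift operation underlying TSM-style models is simple to state: a small fraction of the channels is shifted forward or backward along the time axis, letting per-frame 2-D convolutions mix information across frames at zero extra FLOPs. The plain-Python sketch below shows the mechanics on toy lists; real implementations operate on tensors, and `fold=1` is an illustrative choice.

```python
# Shift `fold` channels to the next frame and `fold` channels to the previous
# frame; vacated positions are zero-padded, remaining channels stay in place.

def temporal_shift(frames, fold=1):
    """frames: list over time of per-frame channel lists."""
    T = len(frames)
    out = [list(f) for f in frames]
    for t in range(T):
        for c in range(fold):            # shift left: take from the next frame
            out[t][c] = frames[t + 1][c] if t + 1 < T else 0
        for c in range(fold, 2 * fold):  # shift right: take from the previous frame
            out[t][c] = frames[t - 1][c] if t - 1 >= 0 else 0
    return out

frames = [[1, 10, 100], [2, 20, 200], [3, 30, 300]]  # T=3 frames, C=3 channels
shifted = temporal_shift(frames)
```

Repeating this shift many times thins out each frame's own information, which is the sparsity problem the paper's dual cascade shift-and-fusion design is meant to counter.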
Review
Computer Science and Mathematics
Mathematical and Computational Biology

Felix Sadyrbaev

Abstract: The purpose of the study is to describe possible behaviors of trajectories of a multi-dimensional system of ordinary differential equations that arise in the mathematical modeling of complex networks. This description is based on the combination of analytical and computational tools which allow to understand in general the behavior of trajectories. After the detailed treatment of the second-order case with multiple possible phase portraits the third order systems are considered. The emphasis is laid on the coexistence of several attracting sets. The role of knowing the attracting sets is discussed and explained. Further, the higher order systems are considered, of order four and higher. A way to obtain higher-order systems for a better understanding of them is provided. Due to the lack of results concerning modeling networks by systems of ordinary differential equations, special attention is paid to our previously obtained facts about the behavior of solutions of arbitrary order systems. The problem of control and management of such systems is discussed. Some suggestions are made.
Article
Computer Science and Mathematics
Security Systems

Vahid Babaey,

Arun Ravindran

Abstract: The increasing reliance on web services has led to a rise in cybersecurity threats, particularly Cross-Site Scripting (XSS) attacks, which target client-side layers of web applications by injecting malicious scripts. Traditional Web Application Firewalls (WAFs) struggle to detect highly obfuscated and complex attacks, as their rules require manual updates. This paper presents a novel generative AI framework that leverages Large Language Models (LLMs) to enhance XSS mitigation. The framework achieves two primary objectives: (1) generating sophisticated and syntactically validated XSS payloads using in-context learning, and (2) automating defense mechanisms by testing these attacks against a vulnerable application secured by a WAF, classifying bypassing attacks, and generating effective WAF security rules. Experimental results using GPT-4o demonstrate the framework's effectiveness: it generated 264 XSS payloads, 83% of which were validated, with 80% bypassing a ModSecurity WAF equipped with the industry-standard security rule set developed by the Open Web Application Security Project (OWASP) to protect against web vulnerabilities. Through rule generation, 86% of previously successful attacks were blocked using only 15 new rules. In comparison, Google Gemini Pro achieved a lower bypass rate of 63%, highlighting performance differences across LLMs.
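The testing loop at the core of the framework, checking each generated payload against the rule set and collecting the ones that slip through for new-rule synthesis, can be sketched with regex rules. The two rules and three payloads below are illustrative inventions, not ModSecurity/OWASP CRS rules or the paper's generated payloads.

```python
# Toy WAF: a payload is blocked if any rule regex matches it; payloads that
# bypass every rule are collected as candidates for new-rule generation.
import re

RULES = [
    re.compile(r"<script\b", re.IGNORECASE),
    re.compile(r"on\w+\s*=", re.IGNORECASE),   # inline event handlers
]

def blocked(payload):
    return any(rule.search(payload) for rule in RULES)

payloads = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
    "<svg><animate attributeName=href></svg>",  # slips past both toy rules
]
bypassing = [p for p in payloads if not blocked(p)]
```

In the paper this loop runs against a real ModSecurity deployment, and the `bypassing` set is what the LLM is then asked to write new rules for.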
Article
Computer Science and Mathematics
Robotics

Tianyao Zheng,

Yuhui Jin,

Haopeng Zhao,

Zhichao Ma,

Yongzhou Chen,

Kunpeng Xu

Abstract: The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm offers a robust solution for the coverage path planning problem, where a robot must effectively and efficiently cover a designated area, ensuring minimal redundancy and maximum coverage. Traditional methods for path planning often lack the adaptability required for dynamic and unstructured environments. In contrast, TD3 utilizes twin Q-networks to reduce overestimation bias, delayed policy updates for increased stability, and target policy smoothing to maintain smooth transitions in the robot's path. These features allow the robot to learn an optimal path strategy in real-time, effectively balancing exploration and exploitation. This paper explores the application of TD3 to coverage path planning, demonstrating that it enables a robot to adaptively and efficiently navigate complex coverage tasks, showing significant advantages over conventional methods in terms of coverage rate, total length, and adaptability.
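Two of the TD3 ingredients named above fit in a few lines: target policy smoothing (clipped Gaussian noise added to the target action) and the twin-critic minimum used when forming the bootstrap target. The numbers are illustrative; this is not a training loop, and the noise/clip values are the commonly used defaults, assumed here.

```python
# Miniature versions of TD3's target-side computations.
import random

def smoothed_target_action(mu, noise_std=0.2, clip=0.5, lo=-1.0, hi=1.0):
    """Add clipped Gaussian noise to the target action, then clamp to bounds."""
    noise = max(-clip, min(clip, random.gauss(0.0, noise_std)))
    return max(lo, min(hi, mu + noise))

def td3_target(reward, gamma, q1_next, q2_next):
    """Bootstrap target: the min over twin critics curbs overestimation bias."""
    return reward + gamma * min(q1_next, q2_next)

random.seed(0)
a = smoothed_target_action(0.9)
y = td3_target(reward=1.0, gamma=0.99, q1_next=5.0, q2_next=4.0)
```

Because the target uses `min(q1, q2)`, an optimistic critic (here 5.0) cannot inflate the learning signal, which is the stability property the abstract credits for reliable coverage policies.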

Article
Computer Science and Mathematics
Applied Mathematics

Guillermo Fernández-Anaya,

Francisco A. Godínez,

Rogelio Valdés,

Alberto Quezada-Téllez,

Marco Polo-Labarrios

Abstract: Fractional variable-order systems with complex dynamics in the order are a little-studied topic. In this research, we present three examples of a very simple fractional system with complex dynamics in the order of the derivative. These cases involve different approaches to defining the variable-order dynamics: 1) an integer-order differential equation that includes the state variable, 2) a differential equation that incorporates the state variable and features both integer- and fractional-order derivatives, and 3) fractional variable-order differential equations nested in the orders of the derivatives. We prove a result that shows how the extended recursion of the last case is generalized. These examples illustrate the richness that simple dynamical systems with complex behavior can reveal through the order of their derivatives.
Article
Computer Science and Mathematics
Robotics

Wanli Zheng,

Guanglin Dai,

Miao Hu,

Pengbo Wang

Abstract: Accurate tomato yield estimation and ripeness monitoring are critical for optimizing greenhouse management. While manual counting remains labor-intensive and error-prone, this study introduces a novel vision-based framework for automated tomato counting in standardized greenhouse environments. The proposed method integrates YOLOv8-based detection, depth filtering, and an inter-frame prediction algorithm to address key challenges such as background interference, occlusion, and double-counting. Our approach achieves 97.09% accuracy in tomato cluster detection, with mature and immature single-fruit recognition accuracies of 92.03% and 91.79%, respectively. The multi-target tracking algorithm demonstrates a MOTA (Multiple Object Tracking Accuracy) of 0.954, outperforming conventional methods like YOLOv8+DeepSORT. By fusing odometry data from an inspection robot, this lightweight solution enables real-time yield estimation and maturity classification, offering practical value for precision agriculture.
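The double-counting problem mentioned above is typically handled by matching each new detection against boxes predicted from the previous frame and only counting unmatched ones. The IoU-based sketch below illustrates that guard; the 0.5 threshold and the specific matching rule are assumptions, not the paper's inter-frame prediction algorithm.

```python
# Boxes are (x1, y1, x2, y2). A detection counts as a new fruit only if it
# overlaps no box carried over (predicted) from the previous frame.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def count_new(detections, predicted_prev, thresh=0.5):
    return sum(1 for d in detections
               if all(iou(d, p) < thresh for p in predicted_prev))

prev_predicted = [(10, 10, 50, 50)]                   # tracked fruit, projected into this frame
detections = [(12, 11, 52, 49), (80, 80, 120, 120)]   # first overlaps, second is new
new_fruit = count_new(detections, prev_predicted)
```

In the paper this matching is strengthened by depth filtering and odometry from the inspection robot, which shifts the predicted boxes to where tracked fruit should appear in the current frame.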



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated