Computer Science and Mathematics


Article
Computer Science and Mathematics
Computer Networks and Communications

Qutaiba I. Ali

Abstract: Software-Defined Networking (SDN) introduces a paradigm shift in network management by decoupling the control and data planes, thereby enabling centralized, programmable network control. However, the dynamic and complex nature of modern traffic demands adaptive and intelligent decision-making beyond traditional rule-based systems. This paper explores the integration of Artificial Intelligence (AI) techniques—particularly supervised learning algorithms—into the SDN control architecture to improve performance, efficiency, and automation. The study provides an overview of SDN architecture and the OpenFlow protocol, followed by an empirical evaluation using real traffic scenarios. Multiple AI models including Support Vector Machine (SVM), Naïve Bayes (NB), and Nearest Centroid were tested on a software-defined testbed. Performance metrics such as classification accuracy, throughput, latency, packet loss, and controller decision time were analyzed. Results demonstrate that AI integration leads to significant improvements across all metrics, validating the potential of AI-SDN synergy in creating intelligent and self-optimizing networks.
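The classifier comparison described above can be sketched with scikit-learn; the synthetic data and feature setup below are illustrative assumptions, not the paper's traffic traces:

```python
# Sketch: comparing SVM, Naive Bayes, and Nearest Centroid classifiers,
# as in the evaluation above. The synthetic "flow features" stand in for
# real traffic statistics (packet sizes, inter-arrival times, etc.).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestCentroid
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(), "NB": GaussianNB(), "NearestCentroid": NearestCentroid()}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy={acc:.3f}")
```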
Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Feng Chen

Abstract: Current fine-tuning of large language models typically relies on manually curated datasets to enhance model performance in specialized domains. However, with the rise of prompt engineering, is it possible for models to utilize constrained prompts like “You are an expert in semiconductor materials” or “You are an expert in fluid mechanics” to trigger domain-specific self-training? Could these prompts enable the model to autonomously retrieve and analyze information from open databases, thereby achieving a refined level of self-tuning without human intervention? This Perspective explores the feasibility of this idea and its potential to transform the conventional fine-tuning paradigm. We present conceptual models and experimental comparisons that illustrate the differences in model responses with and without constrained prompts. Finally, we discuss how enabling self-training in large models could greatly enhance their utility in solving targeted, domain-specific challenges.
Article
Computer Science and Mathematics
Robotics

Fatma A.S. Alwafi,

Xu Xu,

Reza Saatchi,

Lyuba Alboul

Abstract: A new multi-robot path planning algorithm (MRPPA) for 2D static environments is developed and evaluated. It combines a roadmap method, utilising the visibility graph (VG), with the algebraic connectivity (the second-smallest eigenvalue, λ2) of the graph’s Laplacian and Dijkstra's algorithm. The paths depend on the planning order, i.e., they are computed in sequence, path by path, based on the measured values of the algebraic connectivity of the graph’s Laplacian and the determined weight functions. Algebraic connectivity maintains robust communication between the robots during their movements while avoiding collision. The algorithm efficiently balanced connectivity maintenance and path length minimisation, thus improving the performance of path finding. It produced solutions with optimal paths, i.e., the shortest and safest route. The devised MRPPA significantly improved path length efficiency across different configurations. The results demonstrated a highly efficient and robust solution for multi-robot systems requiring both optimal path planning and reliable connectivity, making it well-suited to scenarios where communication between robots is necessary. Simulation results demonstrated the performance of the proposed algorithm in balancing path optimality and network connectivity across multiple static environments with varying complexities. The algorithm is suitable for identifying optimal and complete collision-free paths. The results illustrated the algorithm's effectiveness, computational efficiency, and adaptability.
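The algebraic connectivity used above is the second-smallest eigenvalue of the graph Laplacian L = D − A; it is positive exactly when the communication graph is connected. A minimal sketch (the 4-robot topology is an illustrative assumption):

```python
# Sketch: algebraic connectivity (lambda_2) of a robot communication graph.
# L = D - A, with D the degree matrix and A the adjacency matrix.
# lambda_2 > 0 iff the graph is connected; larger values indicate more
# robust connectivity. The example topology is purely illustrative.
import numpy as np

A = np.array([[0, 1, 1, 0],   # edges: 0-1, 0-2, 1-2, 2-3
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A

eigenvalues = np.sort(np.linalg.eigvalsh(L))  # Laplacian is symmetric PSD
lambda_2 = eigenvalues[1]
print(f"algebraic connectivity lambda_2 = {lambda_2:.4f}")
```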
Article
Computer Science and Mathematics
Computer Vision and Graphics

Van-Khang Nguyen,

Chiung-An Chen,

Cheng-Yu Hsu,

Bo-Yi Li

Abstract: We applied image processing technology to detect and diagnose liver tumors in patients. The Cancer Imaging Archive (TCIA) was used as it contains images of patients diagnosed with liver tumors by medical experts. These images were analyzed to detect and segment liver tumors using advanced segmentation techniques. Following segmentation, the images were converted into binary images for the automatic detection of the liver’s shape. The tumors within the liver were then localized and measured. By employing these image segmentation techniques, we accurately determined the size of the tumors. The application of medical image processing techniques significantly aids medical experts in identifying liver tumors more efficiently.
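The binarize-then-measure step described above can be illustrated with a toy thresholding example; the synthetic image, threshold, and pixel spacing are assumptions, and the paper's actual segmentation techniques are more advanced:

```python
# Toy sketch of the binarize-then-measure idea: threshold a synthetic
# grayscale "scan", then count foreground pixels to estimate region size.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0.0, 0.4, size=(64, 64))   # dark background
image[20:30, 25:40] = 0.9                      # bright "tumor" block

binary = image > 0.5                           # global threshold
area_pixels = int(binary.sum())
# With a known pixel spacing (0.7 mm/pixel here, an assumed value),
# the pixel count converts to a physical area.
area_mm2 = area_pixels * 0.7 * 0.7
print(area_pixels, round(area_mm2, 1))
```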
Article
Computer Science and Mathematics
Signal Processing

Jian Sun,

Hongxin Lin,

Wei Shi,

Wei Xu,

Dongming Wang

Abstract: Swarm-based unmanned aerial vehicle (UAV) systems offer enhanced spatial coverage, collaborative intelligence, and mission scalability for various applications, including environmental monitoring and emergency response. However, their onboard computing capabilities are often constrained by stringent size, weight, and power limitations, posing challenges for real-time data processing and autonomous decision-making. This paper proposes a comprehensive communication and computation framework that integrates cloud-edge-end collaboration with cell-free massive multiple-input multiple-output (CF-mMIMO) technology to support scalable and efficient computation offloading in UAV swarm networks. A lightweight task migration mechanism is developed to dynamically allocate processing workloads between UAVs and edge/cloud servers, while a CF-mMIMO communication architecture is designed to ensure robust, low-latency connectivity under mobility and interference. Furthermore, we implement a hardware-in-the-loop experimental testbed with nine UAVs and validate the proposed framework through real-time object detection tasks. Results demonstrate over 30% reduction in onboard computation and significant improvements in communication reliability and latency, highlighting the framework’s potential for enabling intelligent, cooperative aerial systems.
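A task-migration decision of the kind described can be sketched as a latency comparison between onboard execution and offloading; the cost model and all constants below are illustrative assumptions, not the paper's mechanism:

```python
# Sketch: offload a task iff estimated edge latency (transmit + remote
# compute) beats onboard compute latency. Cost model is an assumption.
def should_offload(task_bits, onboard_flops_s, task_flops,
                   uplink_bps, edge_flops_s):
    local_latency = task_flops / onboard_flops_s
    edge_latency = task_bits / uplink_bps + task_flops / edge_flops_s
    return edge_latency < local_latency

# A 5 MB detection frame, 2 GFLOP workload, weak onboard chip, fast edge.
offload = should_offload(task_bits=5e6 * 8, onboard_flops_s=5e9,
                         task_flops=2e9, uplink_bps=200e6, edge_flops_s=100e9)
print(offload)
```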
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yuanhao Wu

Abstract: This study examines the impact of the COVID-19 pandemic on the U.S. aviation industry by analyzing key operational and financial metrics alongside public health data. Drawing from multiple data sources, including the Bureau of Transportation Statistics (BTS) and Worldometers, the analysis integrates trends in passenger traffic, flight operations, airline revenue, and net income with COVID-19 case trends. The BTS data provides detailed insights into the volume and nature of airline activity, while Worldometers contributes real-time and historical COVID-19 statistics that contextualize fluctuations in industry performance. By exploring the correlation between public health developments and aviation metrics, this study offers a comprehensive understanding of how the pandemic disrupted air travel and highlights potential pathways to recovery.
Review
Computer Science and Mathematics
Mathematical and Computational Biology

Yiting Wang,

Jiachen Zhong,

Rohan Kumar

Abstract: Infectious diseases pose a significant global health burden, contributing to millions of deaths annually despite advancements in sanitation and healthcare access. This review systematically examines the role of machine learning in infectious disease prediction, diagnosis, and outbreak forecasting in the United States. We first categorize existing studies according to the type of disease and the ML methodology, highlighting key findings and emerging trends. We then examine the integration of hybrid and deep learning models, the application of natural language processing (NLP) in public health monitoring, and the use of generative models for medical image enhancement. In addition, we discuss the applications of machine learning in five diseases, including coronavirus disease 2019 (COVID-19), influenza (flu), human immunodeficiency virus (HIV), tuberculosis, and hepatitis, focusing on its role in diagnosis, outbreak prediction, and early detection. Our findings suggest that while machine learning has significantly improved disease detection and prediction, challenges remain in model generalizability, data quality, and interpretability.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

David Ornelas,

Daniel Canedo,

António J. R. Neves

Abstract: As global trade expands, container terminals face growing pressure to improve efficiency and capacity. During the process of loading and unloading containers, several inspections are performed with the urgent need to minimize delays. In this paper we explore corrosion, as it poses a persistent threat that compromises container durability and leads to costly repairs. Identifying this threat is no simple task, as it varies in form, progresses unpredictably, and is influenced by diverse environmental conditions and container types. In collaboration with the Port of Sines, Portugal, this work explores a potential solution for a real-time computer vision system, with the aim of improving container inspections using deep learning algorithms. We propose a system based on the semantic segmentation model DeepLabv3+ for precise corrosion detection using images provided by the terminal. Given that the data was entirely raw and unprocessed, several techniques were applied for pre-processing, along with a review of various annotation tools. Once the data and annotations were prepared, we explored two approaches: leveraging a pre-trained model originally designed for bridge corrosion detection and fine-tuning a version specifically for cargo container assessment. With the fine-tuned model achieving 49% corrosion-detection performance, this work showcases the potential of deep learning in automating inspection processes and highlights the importance of generalization and training in real-world scenarios, exploring innovative solutions for smart gates and terminals.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Moses Karema,

Kelvin Tole,

Mgala Mvurya

Abstract: Non-Revenue Water (NRW) refers to the volume of water that is distributed from the water plant but does not get billed to customers, a major challenge for water utilities. It represents the difference between the total volume of water pumped into the water distribution system (WDS) and the volume actually billed to customers. NRW is composed of three components: physical losses, commercial losses, and unbilled authorized consumption. These losses cause financial deficits, increased operational costs, and infrastructure deterioration, making NRW reduction a critical challenge for water utilities globally. This study reviews optimization strategies for minimizing NRW, focusing on advanced metering infrastructure (AMI), remote leak detection (acoustic, pressure, and flow sensors), Geographic Information Systems (GIS), data analytics, machine learning, and digital twin modeling. Findings suggest that integrating emerging technologies, predictive analytics, and data-driven decision-making can significantly enhance water distribution efficiency. Future research should focus on AI-driven optimization, predictive maintenance, and sustainable water management strategies to minimize non-revenue water.
Article
Computer Science and Mathematics
Other

Chin Yu Huang,

Li-Cheng Hsieh

Abstract: This study investigated the impact of AI-driven video analysis on the serve performance of national university elite male tennis players, focusing on speed and accuracy optimization. Using a pre-test/post-test design, 46 participants (23 experimental, 23 control) underwent an 8-week AI-guided training intervention. The experimental group received individualized biomechanical recommendations via 2D motion analysis using OpenPose. Results showed serve speed increased from 160.0 ± 6.0 km/h to 163.0 ± 5.8 km/h (p = 0.032) and accuracy from 65.0 ± 8.0% to 72.0 ± 7.0% (p < 0.001) in the experimental group, with significant improvements in shoulder rotation, elbow velocity, racket speed, and center of mass displacement (p < 0.05). The control group showed no significant changes. Knee flexion, toss height, trunk rotation, and racket angle remained unchanged (p > 0.05). Findings suggest AI video analysis effectively enhances serve performance, particularly accuracy, with low-cost scalability, though speed gains were modest, indicating a need for longer interventions. Future research could explore 3D analysis and broader populations.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mark Harris

Abstract: Document-level event causality identification (DECI) is crucial for deep text understanding, yet traditional methods struggle with error propagation, neglect document structure, and incur high computational costs. This paper introduces Prompt-based Structure-Aware Causal Identification (PSACI), a novel approach leveraging Large Language Models (LLMs) through carefully designed prompts. PSACI implicitly captures document structure and performs causal reasoning by instructing the model to identify causal event pairs and generate rationales, eliminating the need for complex multi-task learning or explicit graph construction. Evaluated on EventStoryLine and Causal-TimeBank datasets, PSACI outperforms state-of-the-art baselines, particularly in cross-sentence causality identification, achieving an F1-score of 53.2% on EventStoryLine and 63.5% on Causal-TimeBank. Human evaluation confirms the high coherence and relevance of generated rationales. Our findings demonstrate the effectiveness of prompt engineering for DECI, offering a streamlined and adaptable framework with enhanced performance and interpretability.
Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Pegah Ahadian,

Qiang Guan

Abstract: Generative text models, particularly large language models (LLMs) and foundation models, have influenced numerous fields, including high-quality text generation, reasoning, and multimodal synthesis. These models have been widely applied in healthcare, legal analysis, and scientific research. However, in settings where accuracy and reliability are critical, generative text models pose a significant risk due to hallucination, where generated outputs include factually incorrect, fabricated, or misleading information. In this survey, we present a review of hallucination in generative AI, covering its taxonomy, detection methods, mitigation strategies, and evaluation benchmarks. We first establish a structured taxonomy, distinguishing between intrinsic vs. extrinsic hallucination and factual vs. semantic hallucination, also discussing task-specific variations in areas such as summarization, machine translation, and dialogue generation. Next, we examine state-of-the-art hallucination detection techniques, including uncertainty estimation, retrieval-augmented generation (RAG), self-consistency validation, and internal state monitoring. We further explore mitigation strategies, such as fine-tuning, reinforcement learning from human feedback (RLHF), knowledge injection, adversarial training, and contrastive learning. Additionally, we review key evaluation metrics and benchmarks, including FEVER, TruthfulQA, HALL-E, and Entity-Relationship-Based Hallucination Benchmarks (ERBench), which serve as standardized measures for assessing hallucination severity. Despite notable efforts, hallucination remains an open challenge, necessitating further improvements in real-time detection, multimodal hallucination evaluation, and trustworthiness frameworks. We identify critical research gaps, including the need for standardized hallucination taxonomies, scalable mitigation techniques, and human-AI hybrid verification methods.
Our survey aims to serve as a foundational resource for researchers and practitioners, providing insights into current methodologies and guiding future advancements in trustworthy and explainable generative AI.
Article
Computer Science and Mathematics
Computer Science

Tasoulas Theofanis,

Alexandros Gazis,

Tsohou Aggeliki

Abstract: Web tracking (WT) systems are advanced technologies used to monitor and analyze online user behavior. Initially focused on HTML and static webpages, these systems have evolved with the proliferation of IoT, edge computing, and Big Data, encompassing a broad array of interconnected devices with APIs, interfaces and computing nodes for interaction. WT systems are pivotal in technological innovation and business development, although trends like GDPR complicate data extraction and mandate transparency. Specifically, this study examines WT systems purely from a technological perspective, excluding organizational and privacy implications. A novel classification scheme based on technological architecture and principles is proposed, compared to two preexisting frameworks. The scheme categorizes WT systems into six classes, emphasizing technological mechanisms such as HTTP protocols, APIs, and user identification techniques. Additionally, a survey of over 1,000 internet users, conducted via Google Forms, explores user awareness of WT systems. Findings indicate that knowledge of WT technologies is largely unrelated to demographic factors such as age or gender but is strongly influenced by a user's background in computer science. Most users demonstrate only a basic understanding of WT tools, and this awareness does not correlate with heightened concerns about data misuse. As such, the research highlights gaps in user education about WT technologies and underscores the need for a deeper examination of their technical underpinnings. This study provides a foundation for further exploration of WT systems from multiple perspectives, contributing to advancements in classification, implementation, and user awareness.
Article
Computer Science and Mathematics
Discrete Mathematics and Combinatorics

Kunle Adegoke

Abstract: Using an elementary approach involving the Euler Beta function and the binomial theorem, we derive two polynomial identities; one of which is a generalization of a known polynomial identity. Two well-known combinatorial identities, namely Frisch's identity and Klamkin's identity, appear as immediate consequences of these polynomial identities. We subsequently establish several combinatorial identities, including a generalization of each of Frisch's identity and Klamkin's identity. Finally, we develop a scheme for deriving combinatorial identities associated with polynomial identities of a certain type.
Article
Computer Science and Mathematics
Computational Mathematics

Mohamed Quafafou

Abstract: Sets play a foundational role in organizing, understanding, and interacting with the world in our daily lives. They also play a critical role in the functioning and behavior of social robots and artificial intelligence systems, which are designed to interact with humans and their environments in meaningful and socially intelligent ways. A multitude of non-classical set theories emerged during the last half-century aspiring to supplement Cantor’s set theory, allowing sets to be true to the reality of life by supporting, for example, human imprecision and uncertainty. The aim of this paper is to continue this effort by introducing oSets, which are sets that depend on the perception of their observers. In this context, an accessible set is a class of objects whose membership is independent of perception; otherwise, it is said to be an oSet, which cannot be known exactly with respect to its observers but can only be approximated by a family of sets representing the diversity of its perception. Thus, the newly introduced membership relation is a three-place predicate denoted ∈i, where the expression "x∈iX" indicates that "observer i perceives the element x as belonging to the set X". The accessibility notion is related to perception and can be best summarized as "to be accessible is to be perceived", a weaker stance than Berkeley’s idealism, which asserts that "to be is to be perceived".
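The three-place membership relation x ∈i X can be modeled directly: each observer contributes one perceived extension, and the oSet is approximated by the family of those extensions. The objects and observers below are illustrative assumptions:

```python
# Sketch: an oSet approximated by a family of observer-dependent sets.
# "x in_i X" becomes: x is in observer i's perceived extension of X.
# The lower approximation collects elements every observer perceives;
# the upper approximation collects elements some observer perceives.
perceptions = {
    "observer_1": {"a", "b", "c"},
    "observer_2": {"b", "c", "d"},
    "observer_3": {"b", "c"},
}

def member(x, observer):
    """The three-place predicate: x in_i X for observer i."""
    return x in perceptions[observer]

lower = set.intersection(*perceptions.values())  # perceived by all
upper = set.union(*perceptions.values())         # perceived by some
print(sorted(lower), sorted(upper))
```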
Article
Computer Science and Mathematics
Applied Mathematics

Drew Remmenga

Abstract: We propose a new class of solutions to classic partial differential equations using the class of indescribable numbers.
Article
Computer Science and Mathematics
Mathematics

K. Manesh Krishna

Abstract: We introduce the notion of noncommutative equiangular lines and derive noncommutative versions of fundamental van Lint-Seidel relative and Gerzon universal bounds.
Article
Computer Science and Mathematics
Algebra and Number Theory

Runbo Li

Abstract: The author sharpens a result of Jia and Liu (2000), showing that for sufficiently large $x$, the interval $[x, x+x^{\frac{1}{2}+\varepsilon}]$ contains an integer with a prime factor larger than $x^{\frac{51}{53}-\varepsilon}$. This gives a solution with $\gamma = \frac{2}{53}$ to Exercise 5.1 in Harman's book.
Article
Computer Science and Mathematics
Algebra and Number Theory

Runbo Li

Abstract: The author sharpens the result of Rivat and Wu (2000), showing that there are infinitely many primes of the form $[n^c]$ for $1 < c < \frac{211}{178}$.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Abdelatif Hafid,

Mohamed Rahouti,

Mohammed Aledhari

Abstract: This paper addresses the critical challenges in network security, particularly in Internet of Medical Things (IoMT), through advanced machine learning approaches. We propose a high-performance cybersecurity framework leveraging a carefully fine-tuned XGBoost classifier to detect malicious attacks with superior predictive accuracy while maintaining interpretability. Our comprehensive evaluation compares the proposed model with a well-regularized logistic regression baseline using key performance metrics. Additionally, we analyze the security-cost trade-off in designing machine learning systems for threat detection and employ SHAP (SHapley Additive exPlanations) to identify key features driving predictions. We further introduce a late fusion approach based on max voting that effectively combines the strengths of both models. Results demonstrate that while XGBoost achieves higher accuracy (0.97) and recall (1.00) compared to logistic regression, our late fusion model provides a more balanced performance with improved precision (0.98) and reduced false negatives, making it particularly suitable for security-sensitive applications. This work contributes to the development of robust, interpretable, and efficient machine learning solutions for addressing evolving cybersecurity challenges in networked environments.
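The max-voting late fusion described above—for each sample, keep the prediction of whichever model is more confident—can be sketched with two scikit-learn classifiers. GradientBoostingClassifier stands in for XGBoost here, and the synthetic data is an assumption:

```python
# Sketch: late fusion by max voting over per-sample confidence.
# GradientBoostingClassifier stands in for XGBoost; data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gb = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

p_gb, p_lr = gb.predict_proba(X_te), lr.predict_proba(X_te)
# Trust whichever model assigns the higher top-class probability per sample.
use_gb = p_gb.max(axis=1) >= p_lr.max(axis=1)
fused = np.where(use_gb, p_gb.argmax(axis=1), p_lr.argmax(axis=1))
print(f"fused accuracy: {accuracy_score(y_te, fused):.3f}")
```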



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated