Computer Science and Mathematics


Article
Computer Science and Mathematics
Computer Science

Vicente Salas

Abstract: The increasing digitalization of photovoltaic (PV) inverters and their integration into distributed energy resource (DER) ecosystems expose these devices to a rapidly expanding cyber‑physical attack surface. Existing security requirements are fragmented across heterogeneous technical standards—including IEC 62443, IEC 62351, UL 2900‑1, UL 1741 SB, IEEE 1547, IEEE 2030.5, and SunSpec profiles—and only partially aligned with emerging regulatory obligations such as the EU Cyber Resilience Act (CRA) and NIS2 Directive. This fragmentation complicates assurance, hinders interoperability, and leaves critical security controls inconsistently implemented across vendors and deployments. This paper introduces a Unified Security Baseline (USB) that harmonizes essential technical and lifecycle security controls for PV inverters, including secure boot, firmware signing, anti‑rollback protection, strong authentication, TLS‑secured communication, SBOM governance, secure over‑the‑air updates, and coordinated vulnerability disclosure. The USB provides a device‑centric, standards‑agnostic framework designed to strengthen the security posture of inverter‑dominated DER environments while supporting regulatory compliance. By consolidating cross‑standard requirements into a coherent baseline, this work establishes a foundation for future conformity assessment, certification efforts, and secure‑by‑design engineering practices in critical IoT/OT infrastructures.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mengzhou Wu

,

Yuzhe Guo

,

Yuan Cao

,

Haochuan Lu

,

Songhe Zhu

,

Pingzhe Qu

,

Xin Chen

,

Kang Qin

,

Zhongpu Wang

,

Xiaode Zhang

+9 authors

Abstract: Scaling generalist GUI agents is hindered by the data scalability bottleneck of expensive human demonstrations and the “distillation ceiling” of synthetic teacher supervision. To transcend these limitations, we propose UI-Oceanus, a framework that shifts the learning focus from mimicking high-level trajectories to mastering interaction physics via ground-truth environmental feedback. Through a systematic investigation of self-supervised objectives, we identify that forward dynamics, defined as the generative prediction of future interface states, acts as the primary driver for scalability and significantly outweighs inverse inference. UI-Oceanus leverages this insight by converting low-cost autonomous exploration, which is verified directly by system execution, into high-density generative supervision to construct a robust internal world model. Experimental evaluations across a series of models demonstrate the decisive superiority of our approach: models utilizing Continual Pre-Training (CPT) on synthetic dynamics outperform non-CPT baselines with an average success rate improvement of 7% on offline benchmarks, which amplifies to a 16.8% gain in real-world online navigation. Furthermore, we observe that navigation performance scales with synthetic data volume. These results confirm that grounding agents in forward predictive modeling offers a superior pathway to scalable GUI automation with robust cross-domain adaptability and compositional generalization.

Article
Computer Science and Mathematics
Computer Vision and Graphics

Dana El-Rushaidat

,

Nour Almohammad

,

Raine Yeh

,

Kinda Fayyad

Abstract: This paper addresses the critical communication barrier experienced by deaf and hearing-impaired individuals in the Arab world through the development of an affordable, video-based Arabic Sign Language (ArSL) recognition system. Designed for broad accessibility, the system eliminates specialized hardware by leveraging standard mobile or laptop cameras. Our methodology employs Mediapipe for real-time extraction of hand, face, and pose landmarks from video streams. These anatomical features are then processed by a hybrid deep learning model integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), specifically Bidirectional Long Short-Term Memory (BiLSTM) layers. The CNN component captures spatial features, such as intricate hand shapes and body movements, within individual frames. Concurrently, BiLSTMs model long-term temporal dependencies and motion trajectories across consecutive frames. This integrated CNN-BiLSTM architecture is critical for generating a comprehensive spatiotemporal representation, enabling accurate differentiation of complex signs where meaning relies on both static gestures and dynamic transitions, thus preventing misclassification that CNN-only or RNN-only models would incur. Rigorously evaluated on the author-created JUST-SL dataset and the publicly available KArSL dataset, the system achieved 96% overall accuracy for JUST-SL and an impressive 99% for KArSL. These results demonstrate the system’s superior accuracy compared to previous research, particularly for recognizing full Arabic words, thereby significantly enhancing communication accessibility for the deaf and hearing-impaired community.
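The pipeline described in this abstract, per-frame landmark extraction feeding a sequence model, can be sketched in miniature. The snippet below (not the authors' code) assembles Mediapipe-style hand and pose landmarks into the fixed-length (frames × features) sequence a CNN-BiLSTM would consume; the sequence length and zero-padding scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): assembling per-frame Mediapipe-style
# landmarks into a fixed-length (frames x features) sequence for a CNN-BiLSTM.
# Landmark counts follow Mediapipe Holistic (21 per hand, 33 for pose); the
# clip length and zero-padding scheme are illustrative assumptions.

SEQ_LEN = 30          # frames per clip (assumed)
HAND = 21 * 3         # 21 hand landmarks, (x, y, z) each
POSE = 33 * 3         # 33 pose landmarks

def frame_vector(left_hand, right_hand, pose):
    """Flatten one frame's landmarks; a missing hand becomes a zero vector."""
    def flat(pts, size):
        if pts is None:
            return [0.0] * size
        return [c for p in pts for c in p]
    return flat(left_hand, HAND) + flat(right_hand, HAND) + flat(pose, POSE)

def clip_tensor(frames):
    """Pad or truncate a list of (left, right, pose) frames to SEQ_LEN rows."""
    feat = HAND + HAND + POSE
    rows = [frame_vector(*f) for f in frames][:SEQ_LEN]
    rows += [[0.0] * feat] * (SEQ_LEN - len(rows))
    return rows

# one synthetic frame in which only the pose is visible
pose = [(0.1, 0.2, 0.0)] * 33
clip = clip_tensor([(None, None, pose)])
print(len(clip), len(clip[0]))  # 30 rows, 63 + 63 + 99 = 225 features
```

Each row is one frame's spatial features (CNN input); the row sequence carries the temporal dependencies the BiLSTM layers model.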

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Taehyeon Kim

,

Hangyeol Lee

,

Chang Wook Ahn

,

Man-Je Kim

Abstract: Recent progress in text-to-music generation has enabled high-quality audio synthesis from natural language prompts. However, such models are at risk of unintended replication, raising concerns regarding originality and intellectual property. While training-time mitigation strategies can address this issue, they typically require retraining or curated datasets, limiting their practicality for large-scale systems. Inference-time methods provide a more lightweight alternative but often involve a trade-off between fidelity and memorization risk. This work introduces Repulsive Guidance (RG), a systematic inference-time mitigation strategy that reduces memorization without disrupting the intended conditional guidance from the text prompt. RG operates by enforcing divergence between dual diffusion trajectories through a repulsive term applied only during early denoising steps, without reversing the conditional guidance from the prompt. Experiments on MusicBench with the TANGO model demonstrate that RG offers a complementary mitigation strategy, providing new insights into balancing fidelity and memorization risk.
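As a rough illustration of the mechanism described above, the sketch below applies a classifier-free-guidance-style update plus a repulsive term between two trajectories, active only during early denoising steps. The function names, the linear repulsion form, and all coefficients are our assumptions, not the paper's implementation.

```python
# Illustrative sketch (assumed, not the paper's code): a conditional-guidance
# denoising update with an extra repulsive term that pushes two parallel
# trajectories apart, applied only during the first `early_steps` steps.
# Vectors are plain Python lists for clarity.

def guided_eps(eps_uncond, eps_cond, w):
    """Standard conditional guidance: e_u + w * (e_c - e_u)."""
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_cond)]

def repulsive_eps(eps_a, eps_b, step, early_steps, gamma):
    """Push trajectory a away from sibling b, but only in early steps."""
    if step >= early_steps:
        return eps_a
    return [a + gamma * (a - b) for a, b in zip(eps_a, eps_b)]

# toy 2-D noise predictions for one trajectory and its sibling
e_u, e_c = [0.0, 0.0], [1.0, 2.0]
base = guided_eps(e_u, e_c, w=2.0)
sibling = [1.0, 1.0]
early = repulsive_eps(base, sibling, step=0, early_steps=10, gamma=0.5)
late = repulsive_eps(base, sibling, step=10, early_steps=10, gamma=0.5)
print(base, early, late)  # repulsion active early, inactive late
```

Note how the conditional guidance direction (e_c − e_u) is never reversed; the repulsion only adds divergence between the two trajectories early on.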

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Amit Kumar

,

Wakar Ahmad

,

Om Pal

,

Sunil

Abstract: Modern user authentication systems increasingly need adaptive mechanisms that are aware of user and device behavior to detect evolving threats beyond the traditional framework of static credential verification. This paper proposes a hybrid multi-model framework for personalized user-level anomaly detection using a data-driven Hybrid Anomaly Score (HAS). Unlike static thresholding approaches, the proposed framework integrates multiple anomaly detection methodologies to compute the HAS through adaptive per-user thresholds (using cohort maturity and percentile-based optimization). The framework is evaluated on a real-world dataset of 72 million authentication events and demonstrates 96% precision, 92% recall, and an F1-score of 0.94, while maintaining inference latency within 2-3 ms per authentication event. An ablation analysis confirms the contribution of dynamic weighting and personalized threshold optimization to improved detection stability and convergence. The proposed framework outperforms existing approaches in both scalability and latency, satisfying real-time operational constraints. The results indicate that data-driven adaptive thresholding combined with hybrid anomaly modeling provides an effective and deployable solution for large-scale authentication environments.
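A minimal sketch of the scoring-and-thresholding idea, assuming illustrative detector weights and a percentile rule; the paper's exact HAS algorithm may differ.

```python
# Hedged sketch of a Hybrid Anomaly Score (detector weights, the percentile
# rule, and the maturity cutoff are illustrative assumptions): combine
# normalized scores from several detectors, then flag events whose score
# exceeds a per-user percentile threshold once the user has enough history.
import statistics

def hybrid_score(scores, weights):
    """Weighted sum of per-detector anomaly scores in [0, 1]."""
    return sum(w * s for w, s in zip(weights, scores))

def user_threshold(history, percentile=95, min_events=20, default=0.9):
    """Percentile threshold for a mature user; a global default otherwise."""
    if len(history) < min_events:
        return default
    qs = statistics.quantiles(history, n=100)  # 99 cut points
    return qs[percentile - 1]

history = [i / 100 for i in range(50)]            # past scores 0.00..0.49
has = hybrid_score([0.8, 0.6, 0.9], [0.5, 0.2, 0.3])
thr = user_threshold(history)
print(round(has, 2), has > thr)                   # flagged as anomalous
```

Per-user thresholds let the same raw score be anomalous for one user and routine for another, which is the core of the personalization argument.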

Technical Note
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Meijing Liang

,

Yang Hu

,

Zhiwu Zhang

Abstract:

Summary: The potential of deep learning (DL) in genomic selection (GS) is constrained by the significant technical expertise required to design and implement neural networks. While DL has revolutionized fields like language processing and structural biology, its application in GS has not yet consistently outperformed traditional models like mixed linear models. The key to unlocking DL's power in GS lies in the exploration of network architectures tailored to genomic data, a process that demands intensive programming and poses a barrier for many researchers. To overcome this challenge, we developed Artificial Intelligence for Efficient and Versatile Evaluation and Representation (AI4EVER), a freely available graphical software platform that enables users to explore and apply machine learning (ML) models without any coding. AI4EVER integrates a graphical user interface (GUI) with a Python-based ML backend. The platform currently supports five models: Ridge Regression, Random Forest, Gradient Boosted Decision Trees, Multi-Layer Perceptron, and a customizable Keras-based neural network that can simultaneously predict multiple traits in a single model. A key feature of AI4EVER is the optional incorporation of genome-wide association study (GWAS) results (p-values) as feature weights during model training, enabling biologically informed DL workflows. The platform further provides real-time visualization of model performance metrics and automated feature-importance outputs to enhance interpretability. AI4EVER also separates model training and prediction workflows, allowing trained models to be reused for independent prediction datasets. Using a representative maize dataset, we demonstrate that AI4EVER enables access to advanced AI and empowers genomic researchers to accelerate data-driven decision-making in breeding programs, ultimately lowering the barrier to artificial intelligence-enabled genetic improvement in crops and animals and to applications in human health management.
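The GWAS-weighting feature can be illustrated with a small sketch. The -log10 transform and max-normalization below are our assumptions, not necessarily AI4EVER's exact scheme.

```python
# Hedged sketch of GWAS-informed feature weighting (the -log10 transform and
# max-normalization are our illustrative choices): scale each SNP's genotype
# column by a weight derived from its GWAS p-value, so markers with stronger
# associations contribute more to model training.
import math

def gwas_weights(p_values):
    """Map p-values to weights via -log10, normalized so the max is 1."""
    raw = [-math.log10(p) for p in p_values]
    top = max(raw)
    return [r / top for r in raw]

def weight_genotypes(X, weights):
    """Scale each SNP column of the genotype matrix by its weight."""
    return [[x * w for x, w in zip(row, weights)] for row in X]

p = [1e-8, 1e-2, 0.5]          # one strong hit, one weak, one null
w = gwas_weights(p)            # strongest association gets weight 1.0
X = [[0, 1, 2], [2, 0, 1]]     # toy genotype matrix (individuals x SNPs)
print(w, weight_genotypes(X, w))
```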

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gonçalo Melo de Magalhães

Abstract:

Why do different swarm algorithms achieve different performance on the same fitness landscape? This paper proposes that navigability—the structural capacity to find improving paths—is observer-dependent: different algorithms perceive different navigability on identical landscapes, and this difference is irreducible to landscape properties alone. We formalise this through the decomposition F = P/D, where Perception (P) measures an algorithm’s differentiation capacity and Distortion (D) measures structural resistance. The ratio form is derived uniquely from three axioms (monotonicity, scale-covariance, separability). Three claims are advanced and tested across five experiments on the Deucalion supercomputer, totalling over 200,000 simulated trials. Claim 1 (Distortion is multiplicative): D compounds geometrically, not additively (R2 = 0.993 vs. 0.856; n = 250 cross-algorithm trials). Claim 2 (Perception is observer-dependent): Six navigation strategies on the same 9,913 graphs yield six different P values; a hidden variable model reconstructing P from graph features and strategy identity achieves only R2 = 0.058 (n = 9,470 strategy–graph pairs). In the CEC optimisation domain, the same hidden variable test yields R2 = 0.403 (n = 50 algorithm–function pairs), indicating a domain-dependent boundary. Claim 3 (Alignment dominates): Step-wise alignment—the fraction of moves that reduce distance to the optimum—predicts navigation efficiency at R2 = 0.82 across 57,518 trials, outperforming all tested graph-theoretic and landscape metrics (maximum alternative R2 = 0.03). Cross-domain validation spans graph navigation (10,000 graphs, 6 strategies), CEC-2017 benchmarks (10 functions, 5 algorithms), 2D continuous landscapes (79,956 trials, mediation analysis), PSO parameter sweeps (5,000 runs), and ACO pheromone dynamics (2,987 runs). Six counterfactual tests and a mediation analysis support the framework. All results are simulation-based. 
What fails is reported with the same rigour as what succeeds: P alone outperforms P/D at the graph level (ρ = 0.343 vs. 0.108), the FLRP multiplicative decomposition is dead (R2 = 0.0002), and the scalar F-field fails in continuous space (R2 = 0.004). Twelve falsification criteria are specified. The framework is a hypothesis under test, not a proven law.
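The step-wise alignment metric of Claim 3 admits a compact sketch. Our reading of the definition: the fraction of consecutive moves that strictly reduce distance to the optimum; the trajectory below is synthetic.

```python
# Sketch of the step-wise alignment metric (our reading of the definition
# above; the trajectory is synthetic, not from the paper's experiments).
import math

def alignment(trajectory, optimum):
    """Fraction of moves that strictly reduce Euclidean distance to optimum."""
    dist = lambda p: math.dist(p, optimum)
    moves = list(zip(trajectory, trajectory[1:]))
    if not moves:
        return 0.0
    return sum(dist(b) < dist(a) for a, b in moves) / len(moves)

path = [(4, 0), (3, 0), (3, 1), (2, 0), (1, 0)]   # one sideways move
print(alignment(path, (0, 0)))  # 3 of 4 moves reduce the distance
```

A high alignment value says the navigator's moves are predominantly optimum-directed, which is the quantity the paper reports as the dominant predictor of efficiency.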

Article
Computer Science and Mathematics
Geometry and Topology

Deep Bhattacharjee

,

Priyanka Samal

,

Riddhima Sadhu

,

Sanjeevan Singha Roy

,

Shounak Bhattacharya

,

Soumendra Nath Thakur

Abstract: We propose a structural framework for organizing the submanifold content of compact Calabi–Yau manifolds through the notion of a Topological Slice Structure (TSS), a coherent collection of calibrated submanifolds compatible with the Ricci-flat metric data. The central result is a decomposition principle asserting that, under mild conditions on the Kähler polarization, such a structure exists, its cohomology classes span the full integer homology, and it is covariant with respect to mirror symmetry. Special cases recover special Lagrangian torus fibrations, divisors, and holomorphic curves as natural constituents of a unified geometric datum. We illustrate the framework through worked examples, introduce a numerical slice complexity invariant, and discuss implications for D-brane wrapping and moduli stabilization in string compactifications.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Marco Bianchi

,

Giulia Rossi

,

Alessandro Conti

Abstract: Not all web tasks are feasible under strict cost and safety requirements, yet standard reinforcement learning implicitly assumes feasibility. This study introduces a feasibility-aware agentic reinforcement learning framework that explicitly reasons about whether a task can be completed within given cost budgets and failure risk limits. A feasibility estimator is trained to predict the probability that any valid action sequence exists under current constraints. The agent uses this signal to adapt its strategy, prioritize feasible subtasks, or terminate early when feasibility is low. Evaluation on 800–1,400 constrained web tasks demonstrates that feasibility-aware decision-making reduces wasted interactions, prevents high-risk attempts, and improves overall system reliability. This study reframes web automation as a constrained decision problem where recognizing infeasibility is as important as optimizing success.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: Understanding how gradient descent shapes neural network representations remains a fundamental challenge in deep learning theory. Recent work has revealed that neural networks behave as “racing” systems: neurons compete to align with task-relevant directions, and those that succeed experience exponential norm growth. However, the geometric principles governing this race—particularly when data lies on low-dimensional manifolds and networks employ adaptive normalization—remain poorly understood. This paper establishes a mathematical framework that unifies and extends these insights. We prove three fundamental theorems: (1) neuron weight vectors converge exponentially to the tangent space of the data manifold, with a rate determined by local curvature and gating dynamics; (2) for rotation-equivariant tasks, an angular momentum tensor is conserved under gradient flow, imposing topological constraints on neuronal rearrangements; (3) the distribution of high-norm “winning” neurons follows a von Mises-Fisher concentration on the manifold, with concentration parameter linked to initial angular variance. As a case study, we integrate Bayesian R-LayerNorm—a provably stable normalization method—into our framework, deriving a modified norm growth law that explains its empirical robustness on corrupted datasets. Together, these results provide a geometric foundation for understanding capacity adaptation, lottery tickets, and uncertainty-aware learning in neural networks.

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Vincenzo De Leo

,

Michelangelo Puliga

,

Martina Erba

,

Cesare Scalia

,

Andrea Filetti

,

Alessandro Chessa

Abstract: In this work, we inspected the friendship network on Twitter (recently rebranded as X), concentrating on individuals and organizations intertwined with the energy field. We particularly focus on seasoned professionals, corporate entities, and domain specialists, all connected through “following” relationships. By meticulously examining these ties, we uncover several distinct groupings within the network, each defined by the unique roles its members occupy. Our analysis demonstrates that the natural emergence of such clusters on social platforms exerts a profound influence on public discourse regarding energy and other critical matters, including climate change. Furthermore, we reveal that the ever-changing interplay of misleading information catalyzes the formation of ideologically divided factions, which often leads to reduced engagement in online conversations. These emergent clusters, characterized by their shared communication styles, form relatively compact communities where the exchange of information is infrequent compared to larger networks and is usually confined to accounts created for specific commercial objectives. Additionally, by leveraging a machine learning approach, we are able to pinpoint pivotal actors within these niche segments and elucidate the mechanisms that sustain their connectivity. This method provides novel insights into how corporate communication unfolds on social media, offering a refreshed perspective on professional networking. Ultimately, our findings highlight the ways in which companies within the energy sector take advantage of Twitter to coordinate their initiatives, with key institutions serving as central nodes in maintaining the organization of these networks.

Article
Computer Science and Mathematics
Mathematical and Computational Biology

Michael Timothy Bennett

Abstract: Functional information measures how rare functional configurations are. Wong and colleagues argue that selection should drive a law of increasing functional information. This is often read as a claim that complexity must increase. Here I give a different interpretation, which is that survivors tend to be the systems that did not overcommit. I model a system as a policy π, meaning a bundle of commitments expressed in a finite embodied vocabulary. New selection pressures arrive as a set of future requirements drawn from the unobserved outcome set U. A currently viable policy leaves an unobserved buffer Bπ ⊆ U of outcomes it still permits. Under a maximally ignorant novelty model, the survival probability of π is exactly 2^(|Bπ| − |U|). Under any exchangeable novelty prior, survival remains monotone in Bπ. So persistence under novelty favours weak policies, where weakness counts the compatible completions left open. I define degree of future function as survival probability and functional information as Hazen and Szostak rarity within the currently viable set. Conditioning on persistence reweights the viable set toward larger buffers and therefore toward higher functional information. This yields a mathematical analogue of the proposed law under explicit assumptions. Supplementary analysis quantifies how much structured novelty is needed before that buffer size ordering can reverse. In fully enumerated toy worlds, weakness maximisation improves mean log survival probability relative to random choice. Weakness and simplicity are not the same thing. Weakness helps a system persist under novelty, because it keeps more futures compatible. Simplicity helps a system persist because there is less to break, which obviates the need for repair. Complexity requires self-repair to persist, increasing weakness. Life is persistent complexity. In between complex life and simple nonlife is the void of the unviable: complexity which is not alive.
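The survival rule can be checked by brute force in a toy world. Assuming the requirement set is a uniformly random subset of U (one reading of the maximally ignorant novelty model), a policy survives exactly when the requirements fall inside its buffer B, giving probability 2^(|B| − |U|).

```python
# Toy enumeration of the survival rule (our illustrative model, matching one
# reading of the abstract): a policy survives iff the randomly drawn set of
# future requirements lies inside its buffer B, so the survival probability
# is 2**len(B) / 2**len(U) = 2**(len(B) - len(U)).
from itertools import chain, combinations
from fractions import Fraction

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def survival_probability(U, B):
    """Exact survival probability by enumerating all requirement sets."""
    ok = sum(1 for req in subsets(U) if set(req) <= set(B))
    return Fraction(ok, 2 ** len(U))

U = {1, 2, 3, 4}
B = {1, 2}
p = survival_probability(U, B)
print(p)  # 4/16 = 2**(2 - 4) = 1/4
```

Enumeration also makes the monotonicity claim concrete: enlarging B can only add surviving requirement sets, so the probability is monotone in |B|.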

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rohan Le Roux

,

Siavash Khaksar

,

Mohammadali Sepehri

,

Iain Murray

Abstract: Open-pit mining relies heavily on visual inspection to identify indicators of slope instability such as surface cracks. Early identification of these geotechnical hazards allows for the implementation of safety interventions to protect both workers and assets in the event of slope failures or landslides. While computer vision (CV) approaches offer a promising avenue for autonomous crack detection, their effectiveness remains constrained by the scarcity of labelled geotechnical datasets. Deep learning (DL) models require large amounts of representative training data to generalize to unseen conditions; however, collecting such data from operational mine sites is limited by safety, cost, and data confidentiality constraints. To address this challenge, we propose a hybrid game engine—generative artificial intelligence (AI) framework for large-scale dataset synthesis. Leveraging a parameterized virtual environment developed in Unreal Engine 5 (UE5), the framework captures realistic images of open-pit surface cracks and enriches their visual diversity using StyleGAN2-ADA. The resulting datasets were used to train the YOLOv11 real-time object detection model and evaluated on a real-world dataset of open-pit slope imagery to assess the effectiveness of the proposed framework in improving CV model generalizability under extreme data scarcity. Experimental results demonstrated that models trained on the proposed framework substantially outperformed the UE5 baseline, with average precision (AP) at intersection over union (IoU) thresholds of 0.5 and [0.5:0.95] increasing from 0.403 to 0.922 and 0.223 to 0.722 respectively, accompanied by a reduction in missed detections from 95 to eight for the best-performing configurations. 
These findings demonstrate the potential of hybrid generative AI frameworks to mitigate data scarcity in CV applications and support the development of scalable automated slope monitoring systems for improved worker safety and operational efficiency in open-pit mining.

Article
Computer Science and Mathematics
Mathematics

Raoul Bianchetti

Abstract: Certain integer transformations exhibit unexpected forms of stability that resemble attractors in dynamical systems. Two classical examples are the Kaprekar transformation leading to the constant 6174 and the arithmetic structure of perfect numbers. Although traditionally studied in separate areas of number theory, both phenomena reveal a common feature: the emergence of stable configurations under discrete informational constraints. In this work, we propose a unified framework based on Viscous Time Theory (VTT) and its informational geometry perspective, in which these two structures are interpreted as complementary forms of arithmetic stabilization. The Kaprekar transformation defines a discrete dynamical system whose iterations rapidly converge to a unique attractor (6174) for almost all four-digit inputs. Perfect numbers, on the other hand, arise as equilibrium points of the divisor-sum operator, where the informational deviation between a number and the sum of its proper divisors vanishes. We formalize both mechanisms using a common representation based on discrete informational tension functions defined over the integers. Within this framework, Kaprekar collapse appears as a dynamic attractor produced by iterative dissipation of digit-configuration tension, while perfect numbers correspond to static coherence wells generated by structural balance in the divisor field. Numerical exploration further suggests the presence of near-equilibrium zones—arithmetic configurations where informational gradients become locally minimal. These structures provide a natural bridge between iterative attractors and divisor-based equilibria, suggesting that stability phenomena in number theory may be understood through a broader lens of informational relaxation processes. The results do not claim new proofs regarding perfect numbers, but instead propose a conceptual and computational framework that unifies dynamic and structural stability in arithmetic systems. 
This perspective may provide new tools for exploring discrete attractors, divisor dynamics, and informational structures within number theory.
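Both attractors are easy to verify computationally. The minimal check below uses the standard constructions (Kaprekar's routine and the proper-divisor sum), not code from the paper.

```python
# Standard constructions for the two attractors discussed above: the Kaprekar
# map on four-digit numbers (which reaches 6174 for almost all inputs with at
# least two distinct digits) and the perfect-number condition, where the sum
# of proper divisors equals the number itself.

def kaprekar_step(n):
    """One Kaprekar iteration on a four-digit number (leading zeros kept)."""
    digits = f"{n:04d}"
    hi = int("".join(sorted(digits, reverse=True)))
    lo = int("".join(sorted(digits)))
    return hi - lo

def iterations_to_6174(n):
    """Count Kaprekar steps until the 6174 attractor is reached."""
    steps = 0
    while n != 6174:
        n = kaprekar_step(n)
        steps += 1
    return steps

def is_perfect(n):
    """True iff n equals the sum of its proper divisors."""
    return n == sum(d for d in range(1, n) if n % d == 0)

print(iterations_to_6174(3524), is_perfect(28))  # 3 steps; 28 is perfect
```

In the paper's language, 6174 is the dynamic attractor (a fixed point of the iteration) and 28 is a static equilibrium (zero deviation between the number and its divisor sum).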

Concept Paper
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Asifullah Khan

,

Hamna Asif

,

Aleesha Zainab

Abstract: Jet anomaly detection in high-energy physics is a non-stationary task driven by domain shifts due to pile-up, predominant background noise, and dynamically changing relationships between jet constituents; in such a scenario, conventional graph neural network architectures are frequently inadequate in terms of robustness and interpretability. The Physics-Self-Adaptive Multi-Agent System (PhySA-MAS) is a physics-directed, self-adaptive multi-agent architecture that frames jet analysis as a decentralized and dynamically reconfigurable reasoning scheme. Rather than using one monolithic model, it integrates specialist agents for meta-learning, relational reasoning, communication, and topology control, which can vary their interactions depending on event-level physics. Energy conservation constraints are incorporated into graph message passing to ensure physical consistency, and a reinforcement-driven topology controller dynamically rewires inter-agent communication in response to anomalous patterns. An additional communication strategy, anchor-peer communication, further stabilizes learning by reducing gradient conflict and amplifying anomaly-related signals, which together offer a powerful and structurally interpretable alternative to fixed deep learning models.

Article
Computer Science and Mathematics
Security Systems

Marko Corn

,

Primož Podržaj

Abstract: Human-centered cryptographic key management is constrained by a persistent tension between security and usability. While modern cryptographic primitives offer strong theoretical guarantees, practical failures often arise from the difficulty users face in generating, memorizing, and securely storing high-entropy secrets. Existing mnemonic approaches suffer from severe entropy collapse due to predictable human choice, while machine-generated mnemonics such as BIP–39 impose significant cognitive burden. This paper introduces GeoVault, a spatially anchored key derivation framework that leverages human spatial memory as a cryptographic input. GeoVault derives keys from user-selected geographic locations, encoded deterministically and hardened using memory-hard key derivation functions. We develop a formal entropy model that captures semantic and clustering biases in human location choice and distinguishes nominal from effective spatial entropy under attacker-prioritized dictionaries. Through information-theoretic analysis and CPU–GPU benchmarking, we show that spatially anchored secrets provide a substantially higher effective entropy floor than human-chosen passwords under realistic attacker models. When combined with Argon2id, spatial mnemonics benefit from a hardware-enforced asymmetry that strongly constrains attacker throughput as memory costs approach GPU VRAM limits. Our results indicate that modest multi-point spatial selection combined with memory-hard derivation can achieve attacker-adjusted work factors comparable to those of 12-word BIP–39 mnemonics, while single-point configurations provide meaningful offline resistance with reduced cognitive burden.
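The derivation pipeline can be sketched with standard-library tools. Note a substitution: the paper hardens keys with Argon2id, which is not in Python's standard library, so the sketch below uses hashlib.scrypt, a different memory-hard KDF; the grid resolution and cost parameters are illustrative assumptions.

```python
# Hedged sketch of spatially anchored key derivation. The paper uses Argon2id;
# we substitute hashlib.scrypt (another memory-hard KDF, available in the
# standard library). The 1e-4-degree grid and scrypt cost parameters are
# illustrative, not GeoVault's actual encoding.
import hashlib

def encode_point(lat, lon, grid=1e-4):
    """Deterministically quantize a coordinate to a grid-cell identifier."""
    return f"{round(lat / grid)}:{round(lon / grid)}"

def derive_key(points, salt, grid=1e-4):
    """Derive a 32-byte key from an ordered list of (lat, lon) selections."""
    secret = "|".join(encode_point(la, lo, grid) for la, lo in points).encode()
    # n=2**14, r=8 requires about 16 MiB per guess, throttling GPU attackers
    return hashlib.scrypt(secret, salt=salt, n=2**14, r=8, p=1, dklen=32)

pts = [(46.0511, 14.5051), (46.0512, 14.5063)]   # two-point spatial mnemonic
k1 = derive_key(pts, salt=b"demo-salt")
k2 = derive_key(pts, salt=b"demo-salt")
print(k1 == k2, len(k1))  # deterministic re-derivation, 32-byte key
```

Quantization is what makes re-derivation reliable: any selection inside the same grid cell yields the same secret, while the memory-hard KDF supplies the attacker-throughput asymmetry the abstract describes.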

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yuliang Wang

Abstract: This study proposes a deep learning framework that combines graph neural networks (GNN) and temporal modeling to enhance the accuracy and stability of supply chain risk prediction and optimization in pharmaceutical enterprises. By modeling the pharmaceutical supply chain as a complex graph structure, this research effectively captures the dependencies between nodes while using temporal networks to capture long-term dynamic changes within the supply chain. We design a model incorporating a multi-head attention mechanism, which provides accurate risk predictions under different demand fluctuation scenarios. The experimental results demonstrate that the proposed model outperforms existing traditional machine learning models and deep learning methods across multiple evaluation metrics, including Precision, Recall, F1-Score, and AUC-ROC. Particularly in complex environments, the model effectively identifies potential supply chain risk events, such as logistics delays, supply disruptions, and inventory fluctuations. Compared to traditional rule-based or statistical supply chain risk prediction methods, the proposed model shows greater robustness and accuracy by deeply exploring the structural and temporal features between supply chain nodes. Sensitivity analysis of model performance under varying demand fluctuation intensities and environmental changes further validates the model's feasibility and stability in real-world applications, providing effective technical support for the pharmaceutical industry in areas such as resource scheduling, inventory management, and risk early warning.

Article
Computer Science and Mathematics
Information Systems

Nelson Herrera-Herrera

,

Estevan Ricardo Gómez-Torres

Abstract: The rapid proliferation of heterogeneous IoT sensor networks in urban public transportation systems generates large volumes of real-time data that are often fragmented across independent platforms, thereby limiting interoperability, scalability, and coordinated intelligence. Existing architectures typically treat sensing, edge processing, and artificial intelligence as loosely coupled components, lacking unified frameworks that support real-time adaptive decision-making in complex transportation environments. To address this gap, this study proposes a sensor-centric extension of the CAMS architecture that integrates semantic sensor interoperability, edge-enabled distributed processing, and embedded AI-driven coordination within a unified framework. The sensor-centric extended CAMS framework introduces a distributed sensor integration layer combined with a native intelligent coordination module that enables real-time multi-sensor fusion and predictive analytics. A functional prototype is evaluated using hybrid real-world and simulated datasets representing vehicle telemetry, infrastructure sensing, and passenger demand across diverse operational scenarios. Experimental results demonstrate significant improvements in interoperability efficiency, predictive accuracy, scalability, and end-to-end latency compared with conventional centralized architectures. The results indicate that tightly integrating distributed sensing with embedded intelligence enhances robustness and scalability in smart transportation ecosystems. The proposed architecture provides a practical and extensible foundation for next-generation intelligent urban mobility systems and advances the integration of IoT sensing and AI-driven decision-making in large-scale cyber–physical environments.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mohsen Mostafa

Abstract: Physics-informed neural networks (PINNs) have emerged as powerful tools for solving partial differential equations, but their training remains challenging due to ill-conditioned loss landscapes. While adaptive methods like Adam dominate deep learning, they exhibit instability on stiff PDEs, and second-order methods are computationally prohibitive. We present EPANG-Gen, an optimizer that combines memory-efficient eigen-decomposition with lightweight Bayesian uncertainty quantification. EPANG-Gen introduces three elements: (1) a randomized eigenspace estimator that approximates Hessian curvature with O(dk) memory (k ≪ d), (2) Bayesian R-LayerNorm for per-activation uncertainty estimation, and (3) adaptive rank selection (PASA) that dynamically adjusts to problem difficulty. We evaluate EPANG-Gen on four benchmark PDEs—Poisson 1D, Burgers’ equation, Darcy flow, and Helmholtz 2D—and on the Taylor-Green vortex at Re = 100,000, a canonical 3D turbulence problem. All experiments were conducted under computational constraints (Kaggle, NVIDIA P100 GPU, limited epochs). Results show that EPANG-Gen achieves performance comparable to Adam on the toughest turbulent regime while eliminating the 25% catastrophic failure rate of ADOPT across 72 runs. Ablation studies confirm that eigen-preconditioning contributes to performance improvements of 11–35%. The built-in uncertainty estimates provide confidence metrics at negligible cost. This work represents an initial exploration of curvature-aware optimization for PINNs; further validation with larger compute resources is needed. Code is available at https://github.com/EPANG-Gen/EPANG-Gen.

Article
Computer Science and Mathematics
Robotics

Israel Kolaïgué Bayaola

,

Jean Louis Ebongué Kedieng Fendji

,

Blaise Omer Yenke

,

Marcellin Atemkeng

,

Ibidun Christiana Obagbuwa

Abstract: The rapid proliferation of unmanned aerial vehicles (UAVs) in energy-intensive applications (such as autonomous logistics, continuous surveillance, and mobile edge computing) has driven a critical need for highly reliable energy consumption models. However, selecting an appropriate modeling strategy remains an ad-hoc process; researchers must frequently navigate complex, undocumented trade-offs among required predictive accuracy, empirical data availability, and access to aerodynamic testing infrastructure. This study proposes a systematic, two-stage decision-making framework designed to standardize UAV energy model selection. In the first stage, a qualitative decision tree is inductively derived from a comprehensive corpus of recent literature (an 80% training split), explicitly mapping infrastructural and informational constraints to five distinct modeling regimes, ranging from novel white-box derivations to deep-learning black-box applications. This structural logic is subsequently validated against an independent 20% literature holdout set, achieving a 100% predictive match. In the second stage, the Analytic Hierarchy Process (AHP) is applied to quantitatively rank the feasible alternatives based on context-specific criteria: accuracy, interpretability, development cost, and customization adaptability. Crucially, this quantitative scoring introduces "fallback flexibility," allowing researchers to seamlessly pivot to mathematically adjacent alternative models when unforeseen experimental roadblocks occur. Embedded within an open-source Python graphical interface, this framework mitigates methodological ambiguity, prevents the over-allocation of research resources, and fosters greater reproducibility within the energy-aware UAV research community.
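The second-stage AHP ranking can be illustrated with the geometric-mean approximation to the principal eigenvector; the criteria and pairwise judgments below are invented for the example, not taken from the paper.

```python
# Illustrative AHP priority computation (the criteria and pairwise judgments
# are invented): derive weights from a reciprocal pairwise comparison matrix
# using the row geometric-mean method, a standard approximation to the
# principal-eigenvector priorities.
import math

def ahp_weights(matrix):
    """Row geometric means, normalized to sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

# accuracy vs. interpretability vs. development cost: accuracy judged 3x as
# important as interpretability and 5x as important as cost (reciprocals fill
# the lower triangle)
M = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # accuracy dominates the ranking
```

In the framework these weights would score each feasible modeling regime against context-specific criteria; the "fallback flexibility" then amounts to re-ranking the remaining alternatives when the top choice becomes infeasible.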



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated