Computer Science and Mathematics

Article
Computer Science and Mathematics
Probability and Statistics

Iman Attia

Abstract: In the present paper, the probability weighted moments (PWM) method for parameter estimation of the median-based unit Weibull (MBUW) distribution is discussed. The most widely used first-order PWM is compared with higher-order PWMs for parameter estimation of the MBUW distribution. The asymptotic distribution of the PWM estimator is derived, and the comparison is illustrated through real data analysis.
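The first-order and higher-order sample PWMs mentioned above can be illustrated with a short sketch. This uses the standard order-statistics estimator of β_r = E[X·F(X)^r], a common PWM convention, and is not necessarily the exact estimator used in the paper; `sample_pwm` is a hypothetical helper name.

```python
from math import comb

def sample_pwm(data, r):
    """Unbiased sample estimate of beta_r = E[X * F(X)^r] from the
    ordered sample: the i-th order statistic gets weight
    C(i-1, r) / C(n-1, r), averaged over the n observations."""
    x = sorted(data)
    n = len(x)
    return sum(comb(i - 1, r) * xi for i, xi in enumerate(x, start=1)) / (n * comb(n - 1, r))
```

For r = 0 this reduces to the sample mean; higher r puts increasing weight on the upper order statistics, which is what the paper's first-order vs. higher-order comparison exploits.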
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Gregor Herbert Wegener

Abstract: As artificial intelligence systems scale in depth, dimensionality, and internal coupling, their behavior becomes increasingly governed by deep compositional transformation chains rather than isolated functional components. Iterative projection, normalization, and aggregation mechanisms induce complex operator dynamics that can generate structural failure modes, including representation drift, non-local amplification, instability across transformation depth, loss of aligned fixed points, and the emergence of deceptive or mesa-optimizing substructures. Existing safety, interpretability, and evaluation approaches predominantly operate at local or empirical levels and therefore provide limited access to the underlying structural geometry that governs these phenomena. This work introduces \emph{SORT-AI}, a projection-based structural safety module that instantiates the Supra-Omega Resonance Theory (SORT) backbone for advanced AI systems. The framework is built on a closed algebra of 22 idempotent operators satisfying Jacobi consistency and invariant preservation, coupled to a non-local projection kernel that formalizes how information and influence propagate across representational scales during iterative updates. Within this geometry, SORT-AI provides diagnostics for drift accumulation, operator collapse, invariant violation, amplification modes, reward-signal divergence, and the destabilization of alignment-relevant fixed points. SORT-AI is intentionally architecture-agnostic and does not model specific neural network designs. Instead, it supplies a domain-independent mathematical substrate for analysing structural risk in systems governed by deep compositional transformations. By mapping AI failure modes to operator geometry and kernel-induced non-locality, the framework enables principled analysis of emergent behavior, hidden coupling structures, mesa-optimization conditions, and misalignment trajectories. 
The result is a unified, formal toolset for assessing structural safety limits and stability properties of advanced AI systems within a coherent operator–projection framework.
Article
Computer Science and Mathematics
Algebra and Number Theory

Felipe Oliveira Souto

Abstract: This article synthesizes and unifies a multifaceted investigation program of the Riemann Hypothesis (RH), transforming scattered numerical and geometric evidence into a rigorous conditional logical structure. We demonstrate that RH is equivalent to the existence of certain self-adjoint operators whose spectra, under specific transformations, coincide with the non-trivial zeros of the zeta function. We present three concrete candidates for such operators--an integral operator constructed from the prime distribution, the Laplacian on the Enneper minimal surface, and a quantum operator emerging from a conformal transformation of the hydrogen atom--and show how all satisfy, numerically with extreme accuracy (10^(-7) to 10^(-12)), the necessary conditions of the conditional theorem. The underlying geometric structure, encapsulated in the symmetry of a real-analytic function F(s) derived from the Gamma function, provides the unifying bridge between the approaches. We conclude by explicitly stating the open mathematical problems whose proofs would complete a proof of RH.
Article
Computer Science and Mathematics
Mathematics

Hai Shen, JiaWei Liu, SiYi Li, JianBo Zhao

Abstract: This paper constructs a decision-making model of a dual-channel supply chain under different carbon trading policies and discusses the impact of the carbon quota allocation methods adopted by the government on the dual-channel supply chain. Under the restriction of a carbon quota trading policy, with the goal of maximizing enterprise profit, the paper compares and analyzes the influence of carbon emission quotas and the carbon trading price on the profits of the dual-channel supply chain and obtains the optimal decision-making model for enterprise channel selection. A numerical example shows that the profit levels of manufacturers and retailers are significantly affected by different carbon quota allocation policies as channels develop. The manufacturer's profit is positively correlated with the amount of carbon allowances, while its relationship with the carbon trading price shows different trends under different allocation policies. The retailer's profit in the dual channel is not affected by the carbon quota amount or the carbon trading price, while in the single channel its relationship with both shows different trends under different carbon quota allocation policies.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Silvie Illésová, Emmanuel Obeng, Tomáš Bezděk, Vojtěch Novák, Martin Beseda

Abstract: This work deals with the design of a hybrid classification model that uses two complementary parallel data processing branches. The aim was to verify whether the connection of different input representations within a common decision mechanism can support the stability and reliability of classification. The outputs of both branches are continuously integrated and together form the final decision of the model. On the validation set, the model achieved accuracy 0.9750, precision 1.0000, recall 0.9500 and F1-score 0.9744 at a threshold value of 0.5. These results suggest that parallel, complementary processing may be a promising direction for further development and optimization of the model, especially in tasks requiring high accuracy while maintaining robust detection of positive cases.
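The reported scores are internally consistent: F1 is the harmonic mean of precision and recall, which can be checked in a couple of lines (the function name is illustrative):

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# the reported precision 1.0000 and recall 0.9500 give F1 ≈ 0.9744,
# matching the stated F1-score of 0.9744
```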
Article
Computer Science and Mathematics
Algebra and Number Theory

Felipe Oliveira Souto

Abstract: We present a geometric pathway to the Riemann Hypothesis through non-orientable Riemann surfaces. The completed zeta function $\xi(s)$ is shown to naturally inhabit a M\"obius strip $M$, where it defines a section of a holomorphic line bundle $L\to M$. The topological invariant $c_1(L)=2$, required by $M$'s non-orientability, leads to Hermiticity conditions that appear to constrain zeros to $\Re(s) = 1/2$. This geometric framework is compatible with all known properties of $\zeta(s)$ and supported by numerical computations with precision $< 10^{-7}$.
Article
Computer Science and Mathematics
Computational Mathematics

Bouchaib Bahbouhi

Abstract: This work develops an analytic framework for Goldbach’s strong conjecture based on symmetry, modular structure, and density constraints of odd integers around the midpoint of an even number. By organizing integers into equidistant pairs about the midpoint, a tripartite structural law emerges in which every even integer admits representations as composite–composite, prime–composite, or prime–prime sums. This triadic balance acts as a stabilizing mechanism that prevents the systematic elimination of prime–prime representations as the even number grows. The analysis introduces overlapping density windows, DNA-inspired mirror symmetry of primes, and modular residue conservation to show that destructive configurations cannot persist indefinitely. As a result, the classical obstruction known as the covariance barrier is reduced to a narrowly defined analytic condition. The paper demonstrates that Goldbach’s conjecture is structurally enforced for all sufficiently large even integers and that the remaining difficulty is confined to a minimal analytic refinement rather than a combinatorial or probabilistic gap. This places the conjecture within reach of a complete unconditional resolution.
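The prime–prime representations discussed above are easy to enumerate directly for small even numbers; the sketch below (a hypothetical helper, not the paper's machinery) counts Goldbach pairs with a simple sieve:

```python
def goldbach_pairs(n):
    """Count unordered prime pairs (p, q) with p <= q and p + q = n."""
    assert n > 2 and n % 2 == 0
    sieve = bytearray([1]) * (n + 1)          # sieve of Eratosthenes
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return sum(1 for p in range(2, n // 2 + 1) if sieve[p] and sieve[n - p])
```

For example, `goldbach_pairs(100)` returns 6, corresponding to 3+97, 11+89, 17+83, 29+71, 41+59, and 47+53.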

Review
Computer Science and Mathematics
Other

Ângela Oliveira, Paulo Serra, Filipe Fidalgo

Abstract: Artificial intelligence has become fundamental to the advancement of digital gastronomy, a domain that integrates computer vision, natural language processing, graph-based modelling, recommender systems, multimodal learning, IoT and robotics to support culinary, nutritional and behavioural processes. Despite this progress, the field remains conceptually fragmented and lacks comprehensive syntheses that combine methodological insights with bibliometric evidence. To the best of our knowledge, this study presents the first systematic review to date dedicated to artificial intelligence in digital gastronomy, complemented by a bibliometric analysis covering publications from 2018 to 2025. A structured search was conducted across five major databases (ACM Digital Library, IEEE Xplore, Scopus, Web of Science and SpringerLink), identifying 233 records. Following deduplication, screening and full-text assessment, 53 studies met the predefined quality criteria and were included in the final analysis. The methodology followed established review protocols in engineering and computer science, incorporating independent screening, systematic quality appraisal and a multidimensional classification framework. The results show that research activity is concentrated in food recognition, recipe generation, personalised recommendation, nutritional assessment, cooking assistance, domestic robotics and smart-kitchen ecosystems. Persistent challenges include limited cultural diversity in datasets, annotation inconsistencies, difficulties in multimodal integration, weak cross-cultural generalisation and restricted real-world validation. The findings indicate that future progress will require more inclusive datasets, culturally robust models, harmonised evaluation protocols and systematic integration of ethical, privacy and sustainability principles to ensure reliable and scalable AI-driven solutions.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Bhavya Rupani, Dmitry Ignatov, Radu Timofte

Abstract: This paper defines a task that uses vision and language models to improve benchmark performance through analysis of the CIFAR-10 and CIFAR-100 datasets. The task is divided into two stages: image classification followed by visual description generation. BEiT and Swin models serve as state-of-the-art components for the two parts of this research. We selected the best publicly available image classification checkpoints, which delivered 99.00% accuracy on CIFAR-10 and 92.01% on CIFAR-100. For dense, contextually rich text output we used BLIP. The expert models performed well on their target responsibilities using minimal noisy data. Used as a text classifier comparing the synthesized descriptions, the BART model achieved new state-of-the-art accuracies of 99.73% on CIFAR-10 and 98.38% on CIFAR-100. This paper demonstrates how our integrated hierarchical decomposition model combining vision and language surpasses all existing state-of-the-art results on these common benchmark classifications. The full framework, along with the classified images and generated datasets, is available at https://github.com/bhavyarupani/LLM-Img-Classification.
Article
Computer Science and Mathematics
Applied Mathematics

Aeshah A. Raezah, Fahad Al Basir, Pankaj Kumar Tiwari, Animesh Sinha, Jahangir Chowdhury

Abstract: This study presents a comprehensive analysis of farming-awareness campaigns aimed at enhancing crop pest management through the strategic deployment of infected pests as a biological control mechanism. Additionally, the role of nutrient supplementation is examined within these campaigns to facilitate crop recovery and improve overall agricultural yield. A mathematical model is developed and rigorously analyzed to assess the efficacy of these integrated pest control strategies. The model is investigated with a focus on equilibrium states, stability analysis, and the conditions leading to Hopf bifurcation. Furthermore, optimal control theory is employed to optimize the release of infected pests, ensuring maximum crop yield while maintaining ecological balance. Our study not only underscores the critical influence of nutrient supplementation in augmenting crop productivity but also highlights the risk of excessive nutrient application, which may destabilize the system. These results emphasize the necessity of maintaining an optimal nutrient threshold. By integrating farming-awareness campaigns with precise biological control measures and nutrient management, our study establishes a robust framework for sustainable pest mitigation and agricultural productivity enhancement. The findings suggest that the synergistic application of infected pests and nutrient enrichment not only suppresses pest populations but also enhances crop resilience and productivity.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Muhammad Nuraddeen Ado, Shafi’i Muhammad Abdulhamid, Idris Ismaila

Abstract: The growing threat of cyber-enabled financial crimes, along with data sovereignty regulations, poses serious challenges for today’s fraud detection systems and for digital sovereignty. Traditional centralized methods struggle to detect complex fraud patterns and often fail to meet national data privacy requirements, leading to many undetected fraud cases and reduced accuracy. This chapter introduces the Intelligent Surveillance Engine (ISE), a sovereign-compliant artificial intelligence (AI) approach developed to enhance financial fraud detection. Unlike existing frameworks, ISE is purposefully designed to enable national digital sovereignty through auditable, privacy-preserving AI, adaptable to diverse legal and geopolitical contexts such as GDPR in Europe and India’s MeghRaj. ISE uses a mix of collaborative filtering, layered anomaly detection, and ensemble learning to improve fraud detection. It creates user behavior profiles, applies unsupervised techniques such as Isolation Forest, Autoencoders, and DBSCAN to find unusual patterns, and then uses supervised classifiers such as Random Forest, SVM, and Decision Trees. The results are combined through methods such as stacking and majority voting to increase accuracy. Tests on real and synthetic financial datasets showed that ISE achieved a False Negative Rate (FNR) of 0.0%, Recall of 99.55%, and an F1-Score of 99.7%. These results significantly outperform conventional fraud detection systems, which had an FNR of 36.11%, Recall of 65.2%, and an F1-Score of 88.21%. The study illustrates that ISE significantly enhances anomaly detection in financial systems by reducing false negatives, aligning with digital sovereignty requirements, and offering a scalable, adaptive, and regulation-compliant fraud mitigation architecture that outperforms conventional models.
This study also highlights how ISE enforces digital sovereignty through privacy-preserving AI models, national data control, and ethical AI governance architectures. Financial crime detection systems often face challenges balancing efficiency, privacy, and compliance with digital sovereignty principles. This study aims to propose the Intelligent Surveillance Engine (ISE), an AI-driven framework for sovereign-compliant financial fraud detection. A hybrid approach integrating systematic anomaly detection, privacy-preserving machine learning models, and sovereign data governance mechanisms was adopted. Results demonstrate that ISE achieves high detection accuracy while ensuring compliance with digital sovereignty and ethical AI governance requirements. These findings suggest that sovereignty-aware AI systems like ISE are vital for national data control, ethical surveillance, and technological independence.
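One of the combination rules mentioned above, majority voting, can be sketched in a few lines of plain Python. This is illustrative only; the paper's ensemble also uses stacking and several model families:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels for one sample by majority vote;
    ties are broken in favor of the first-listed classifier."""
    counts = Counter(predictions)
    top = max(counts.values())
    for label in predictions:        # first classifier listed wins ties
        if counts[label] == top:
            return label
```

For instance, with labels from three hypothetical classifiers such as Random Forest, SVM, and a Decision Tree, `majority_vote(["fraud", "legit", "fraud"])` yields `"fraud"`.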
Article
Computer Science and Mathematics
Mathematical and Computational Biology

Valentin E. Brimkov

Abstract: In this work, we pose and aim to answer the following questions, among others: Which quantitative characteristics, being satisfied, led to the phase transition from "primordial soup" to living organisms? How to measure the negentropy of a certain organic matter that underpinned the appearance of a certain species? To what extent do the biosequences of living organisms differ from random sequences? How do we quantitatively distinguish primitive from higher-level organisms? How can we compare the complexity of two living things? Is there an adequate mathematical structure that naturally and appropriately represents each organism biosequence and all of them as a whole? What are the properties of that structure? How does that structure evolve, and what are the theoretical limits of any further evolution? Is it likely that these bounds will be reached, and what are the "limits of life?" How to estimate the effect on the mechanism of evolution of natural selection vs. the one of chance and mutations? To this end, we introduce relevant mathematical structures and use them for modeling purposes. Finally, we also speculate on possible scenarios of the origin of life, evolution, and related issues.
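A common first step toward questions like "to what extent do biosequences differ from random sequences" is per-symbol Shannon entropy: for a 4-letter alphabet the maximum is 2 bits/symbol, and structured sequences fall below it. This sketch is illustrative and not necessarily the measure developed in the paper:

```python
from collections import Counter
from math import log2

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol of a symbol sequence."""
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in Counter(seq).values())
```

A uniformly random DNA string approaches 2.0 bits/symbol, while a fully repetitive one scores 0; the gap between a real biosequence and the 2-bit ceiling is one crude proxy for its negentropy.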
Article
Computer Science and Mathematics
Applied Mathematics

Dmitriy Tverdyi, Roman Parovik

Abstract: This paper examines the efficiency of a parallel version of an algorithm that uses the central processing unit (CPU) to calculate bifurcation diagrams of the Selkov fractional oscillator as a function of the characteristic time scale. The parallel algorithm is implemented in the ABMSelkovFracSim 2.0 software package, written in Python, which also includes the Adams-Bashforth-Moulton numerical algorithm for finding a numerical solution of the Selkov fractional oscillator that takes heredity (memory) effects into account. The Selkov fractional oscillator is a system of nonlinear ordinary differential equations with Gerasimov-Caputo derivatives of fractional variable order and non-constant coefficients, which include a characteristic time scale parameter to match the dimensions in the model equations. The paper evaluates the efficiency, speedup, and cost of the parallel algorithm and calculates its optimal cost as a function of the number of CPU threads. The optimal number of threads required to achieve maximum efficiency of the algorithm is determined. The TAECO approach was chosen to evaluate the efficiency of the parallel algorithm: T (execution time), A (acceleration), E (efficiency), C (cost), O (cost-optimality index). Graphs of the efficiency characteristics of the parallel algorithm versus the number of CPU threads are provided.
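The TAECO quantities have standard textbook definitions, sketched below. The cost-optimality index O is taken here as serial time over cost (which coincides with efficiency); this is an assumption, since the paper's exact definition of O is not given in the abstract:

```python
def taeco(t_serial, t_parallel, p):
    """Classic parallel-performance metrics for p threads."""
    a = t_serial / t_parallel    # A: acceleration (speedup)
    e = a / p                    # E: efficiency
    c = p * t_parallel           # C: cost (thread-seconds consumed)
    o = t_serial / c             # O: cost-optimality index, 1.0 = cost optimal
    return {"T": t_parallel, "A": a, "E": e, "C": c, "O": o}
```

For example, a run that takes 100 s serially and 25 s on 8 threads has speedup 4.0, efficiency 0.5, and cost 200 thread-seconds.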
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Zulqarnain Ali

Abstract: We present a comprehensive analysis of consciousness in artificial intelligence systems using Integrated Information Theory (IIT) 3.0 and 4.0 frameworks. Our work confirms and formalizes the established IIT result that feedforward neural architectures necessarily generate zero integrated information (Φ = 0) under both IIT 3.0 and 4.0 formalisms. Through mathematical analysis and computational validation on 16 diverse network configurations (8 feedforward, 8 recurrent), we demonstrate that all tested feedforward systems consistently yield Φ = 0 while recurrent systems exhibit Φ > 0 in 75% of cases. Our analysis addresses the architectural distinctions between causal and bidirectional attention mechanisms in transformers, clarifying that standard causal attention maintains feedforward structure while bidirectional attention creates recurrent causal dependencies. We systematically examine the implications for contemporary AI systems, including CNNs, transformers, and reinforcement learning agents, and discuss the relationship between our findings and recent IIT 4.0 developments regarding system irreducibility analysis and directional partitions.
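The feedforward/recurrent distinction driving the Φ = 0 result is a purely structural property: a causal graph is feedforward exactly when it is acyclic. The sketch below checks that structural condition via depth-first search; it does not compute Φ itself, and the function name is illustrative:

```python
def is_feedforward(adj):
    """True iff the directed causal graph (dict: node -> successor list)
    has no cycle, i.e. is purely feedforward. Per the IIT result above,
    such systems have integrated information Phi = 0."""
    state = {u: 0 for u in adj}   # 0 = unvisited, 1 = on stack, 2 = done
    def acyclic(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1:                     # back edge: recurrence
                return False
            if state[v] == 0 and not acyclic(v):
                return False
        state[u] = 2
        return True
    return all(state[u] != 0 or acyclic(u) for u in adj)
```

A chain a→b→c passes; adding any feedback edge (including a self-loop) fails the check, mirroring the causal/bidirectional attention distinction in the abstract.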
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yunzhuo Liu, Zhaowei Ma, Jiankun Guo, Haozhe Sun, Yifeng Niu, Hong Zhang, Mengyun Wang

Abstract: This paper proposes a novel large language model (LLM)-based approach for visual target navigation in unmanned aerial systems (UAS). By leveraging the exceptional language comprehension capabilities and extensive prior knowledge of LLMs, our method significantly enhances the ability of unmanned aerial vehicles (UAVs) to interpret natural language instructions and conduct autonomous exploration in unknown environments. To equip the UAV with planning capabilities, this study designs specialized prompt templates for interacting with the LLM, thereby developing an intelligent planner module for the UAV. First, the intelligent planner derives the optimal location search sequence in unknown environments through probabilistic inference. Second, visual observation results are fused with prior probabilities and scene relevance metrics generated by the LLM to dynamically generate detailed sub-goal waypoints. Finally, the UAV executes a progressive target search via path planning algorithms until the target is successfully localized. Both simulation and physical flight experiments validate that this method performs well on UAV visual navigation challenges and demonstrates significant advantages in terms of search efficiency and success rate.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Vinesh Aluri

Abstract: Quantum computing poses a critical threat to existing cryptographic primitives, rendering current access control mechanisms in cloud-native infrastructures vulnerable to compromise. This paper introduces a comprehensive quantum-resilient access control framework specifically engineered for distributed, containerized, and zero-trust environments. The proposed system integrates post-quantum cryptographic (PQC) primitives—specifically lattice-based key encapsulation (Kyber) and digital signatures (Dilithium)—with a hybrid key exchange protocol to maintain crypto-agility and backward compatibility. We design a secure token issuance and verification process employing PQC-based authentication, ensuring resistance to both classical and quantum adversaries. A prototype implementation demonstrates that our hybrid PQC approach incurs a moderate computational overhead of approximately 10–30% while preserving horizontal scalability and interoperability across Kubernetes clusters. Security analysis under the post-quantum adversary model confirms resistance to key compromise, replay, and forgery attacks. The results highlight that quantum-resilient access control protocols can be efficiently integrated into modern cloud infrastructures without sacrificing scalability, performance, or operational flexibility.
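The hybrid key exchange idea, deriving one session key from both a classical and a PQC shared secret so that breaking either KEM alone is insufficient, can be sketched as an HMAC-based combiner. This is an illustrative construction under assumed inputs, not the paper's protocol or a vetted standard:

```python
import hashlib
import hmac

def hybrid_secret(classical_ss: bytes, pq_ss: bytes,
                  context: bytes = b"hybrid-kem-v1") -> bytes:
    """Derive a single session key from a classical (e.g. ECDH) shared
    secret and a PQC (e.g. Kyber) shared secret. The session stays
    secure if either input secret remains unbroken."""
    # HKDF-extract-style step: context as HMAC key, concatenated secrets as message
    return hmac.new(context, classical_ss + pq_ss, hashlib.sha256).digest()
```

Real deployments would use a full HKDF with per-session transcript binding; the point here is only that both secrets feed one derivation, which is what gives the hybrid scheme its backward compatibility and crypto-agility.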
Article
Computer Science and Mathematics
Mathematical and Computational Biology

Debnarayan Khatua, Bikash Kumar, Manoranjan K. Singh, Somnath Kumar

Abstract: Hepatitis C Virus (HCV) continues to be a significant worldwide health issue, particularly in resource-limited environments with inadequate diagnostic and therapeutic options. This study formulates a deterministic six-compartment model, predicated on the assumptions that the population undergoes natural birth-death dynamics, awareness initiatives transition individuals from $S_1$ to $S_2$, diagnosis advances U to I, recovery is achieved through therapy or immunity, and infection and mortality rates vary among classes. The system is described by coupled nonlinear ODEs that include three time-dependent controls. Analytical examination guarantees the positivity and boundedness of all compartments and calculates the basic reproduction number ($R_0$) using the next-generation matrix. Sensitivity analysis shows that $\beta_1, \beta_2, \tau_1, \tau_2$ are the most important parameters. Using Pontryagin's Maximum Principle, the forward–backward sweep method is employed to determine the optimal controls that minimise both infection and cost. A Mamdani fuzzy logic controller is added to handle parameter uncertainty and generate adaptive responses to infection pressure, awareness level, and hospital load. Simulations reveal that fuzzy control delivers equivalent suppression to the crisp optimum at around two-thirds lower cost, enabling a stable, interpretable, and resource-efficient paradigm for dynamic HCV intervention.
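The next-generation-matrix recipe, $R_0$ as the spectral radius of $FV^{-1}$ where $F$ collects new-infection terms and $V$ transition terms, can be shown for a toy two-compartment case in pure Python. The paper's six-compartment model yields larger matrices; the matrices and names below are illustrative, not taken from the paper:

```python
def spectral_radius_2x2(m):
    """Largest |eigenvalue| of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = (tr * tr - 4 * det) ** 0.5        # complex if discriminant < 0
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def r0(F, V):
    """Basic reproduction number: spectral radius of F @ V^{-1} (2x2 case)."""
    (a, b), (c, d) = V
    det = a * d - b * c
    Vinv = [[d / det, -b / det], [-c / det, a / det]]
    M = [[sum(F[i][k] * Vinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return spectral_radius_2x2(M)
```

With a diagonal example, F = [[2, 0], [0, 3]] and V = [[4, 0], [0, 6]], the product is diag(0.5, 0.5) and $R_0 = 0.5 < 1$, the disease-free regime.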
Article
Computer Science and Mathematics
Robotics

Alexander Krasavin, Gaukhar Nazenova, Adema Dairbekova, Albina Kadyroldina, Tamás Haidegger, Darya Alontseva

Abstract: This article investigates the trajectory-tracking control of a differential-drive two-wheeled mobile robot (DDWMR) using its kinematic model. A nonlinear-to-linear transformation based on differential flatness is employed to convert the original nonlinear system into two fully decoupled linear subsystems, enabling a simple and robust controller design. Unlike conventional flatness-based methods that rely on exact feedforward linearization around a reference trajectory, the proposed approach performs plant linearization, ensuring reliable tracking across a wide range of trajectories. The resulting two-loop architecture consists of an inner nonlinear loop implementing state prolongation and static feedback, and an outer linear controller performing trajectory tracking of the linearized system. Simulation results on a circular reference trajectory demonstrate high tracking accuracy, with a maximum transient deviation of 0.28 m, a settling time of approximately 120 s, and a steady-state mean tracking error below 0.01 m. These results confirm that the plant-linearization-based framework provides superior accuracy, robustness, and practical applicability for DDWMR trajectory tracking.
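The kinematic model underlying the controller is the standard differential-drive (unicycle) model, x' = v cosθ, y' = v sinθ, θ' = ω. A minimal Euler-integration step, illustrative only and not the paper's flatness-based transform, looks like this:

```python
from math import cos, sin

def ddwmr_step(x, y, theta, v, omega, dt):
    """One explicit-Euler step of the differential-drive kinematic model:
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    return (x + v * cos(theta) * dt,
            y + v * sin(theta) * dt,
            theta + omega * dt)
```

Holding v and ω constant and nonzero traces the circular reference trajectory used in the paper's simulations, with radius v/ω.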
Essay
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Stefan Trauth

Abstract: The P = NP problem is one of the most consequential unresolved questions in mathematics and theoretical computer science. It asks whether every problem whose solutions can be verified in polynomial time can also be solved in polynomial time. The implications extend far beyond theory: modern global cryptography, large-scale optimization, secure communication, finance, logistics, and computational complexity all depend on the assumption that NP-hard problems cannot be solved efficiently. Among these, the Spin-Glass ground-state problem represents a canonical NP-hard benchmark with an exponentially large configuration space. A constructive resolution of P = NP would therefore reshape fundamental assumptions across science and industry. While evaluating new methodological configurations, I encountered an unexpected behavior within a specific layer-cluster. Subsequent analysis revealed that this behavior was not an artifact, but an information-geometric collapse mechanism that consistently produced valid Spin-Glass ground states. With the assistance of Frontier LLMs Gemini-3, Opus-4.5, and ChatGPT-5.1, I computed exact ground states up to N = 24 and independently cross-verified them. For selected system sizes between N=30 and N=70, I validated the collapse-generated states using Simulated Annealing, whose approximate minima consistently matched the results. Beyond this range, up to N = 100, the behavior follows not from algorithmic scaling but from the information-geometric capacity of the layer clusters, where each layer contributes exactly one spin dimension. These findings indicate a constructive mechanism that collapses exponential configuration spaces into a polynomially bounded dynamical process. This suggests a pathway by which the P = NP problem may be reconsidered not through algorithmic search, but through information-geometric state collapse.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Piotr Klejment

Abstract: The Discrete Element Method is widely used in applied mechanics, particularly in situations where material continuity breaks down (fracturing, crushing, friction, granular flow) and classical rheological models fail (phase transition between solid and granular). In this study, the Discrete Element Method was employed to simulate stick-slip cycles, i.e., numerical earthquakes. At 2,000 selected, regularly spaced time checkpoints, parameters describing the average state of all particles forming the numerical fault were recorded. These parameters were related to the average velocity of the particles and were treated as the numerical equivalent of (pseudo) acoustic emission. The collected datasets were used to train the Random Forest and Deep Learning models, which successfully predicted the time to failure, also for entire data sequences. Notably, these predictions did not rely on the history of previous stick-slip events. SHapley Additive exPlanations (SHAP) was used to quantify the contribution of individual physical parameters of the particles to the prediction results.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated