Computer Science and Mathematics

Article
Computer Science and Mathematics
Computer Science

Melchor Gómez García, Derlis Cáceres Troche, Moussa Boumadan Hamed, Roberto Soto Varela

Abstract: The rapid expansion of Generative Artificial Intelligence (GAI) is transforming higher education systems, particularly public institutions seeking to advance toward smart governance models and digital transformation. In this context, digital teaching competence emerges as a strategic factor for the effective, ethical, and pedagogically sound adoption of these technologies. This study assesses the level of digital competence among public higher education faculty in Paraguay and examines its predictive capacity regarding the adoption of GAI tools using machine learning models. A nationwide quantitative study was conducted with a sample of 800 faculty members from public universities across Paraguay. Data were collected through a structured questionnaire based on international digital competence frameworks, incorporating additional variables such as attitudes toward GAI, technological experience, institutional infrastructure, and perceived organizational support. Data analysis involved the application of machine learning techniques, including Logistic Regression, Random Forest, and Gradient Boosting, to identify the variables with the strongest predictive power regarding faculty readiness and willingness to integrate GAI into teaching practices. Model performance was evaluated using metrics such as accuracy, F1-score, and AUC-ROC. The findings identify key predictors of technological readiness and structural gaps within Paraguay’s public higher education system. This research provides empirical evidence from Latin America on the factors influencing GAI adoption in public sector educational contexts and contributes to the design of educational policies aimed at fostering smart universities and digitally sustainable academic ecosystems.

Article
Computer Science and Mathematics
Computer Vision and Graphics

Jianhua Zhu, Changjiang Liu, Danling Liang

Abstract: Multi-modal remote sensing image registration is a challenging task due to differences in resolution, viewpoint, and intensity, which often lead existing algorithms to produce inaccurate and time-consuming results. To address these issues, we propose an algorithm based on Curvature Scale Space Contour Point Features (CSSCPF). Our approach combines multi-scale Sobel edge detection, dominant direction determination, an improved curvature scale space corner detector, a new gradient definition, and enhanced SIFT descriptors. Test results on publicly available datasets show that our algorithm outperforms existing methods in overall performance. Our code will be released at https://github.com/JianhuaZhu-IR.

Article
Computer Science and Mathematics
Applied Mathematics

Xianqi Zhang, Zewei Wang, Dan Xue, Zikang Han

Abstract: Servo motors typically utilize Field-Oriented Control (FOC). However, the conventional cascaded PI control framework is inherently constrained by its fixed-parameter design, making it highly susceptible to parameter variations and unmodeled disturbances. While intelligent control strategies—such as model predictive control (MPC)—provide a robust, multi-objective alternative, their intensive stepwise computational demand often degrades transient response. Motivated by the stochastic dynamics of motor operation, we propose a novel physics-informed control paradigm. Specifically, we formulate the FOC-based motor control as an online stochastic optimization problem, wherein the objective function is updated iteratively using stochastic gradient estimates, and the resulting time-varying subproblems are solved efficiently by the MSALM algorithm. Our approach significantly outperforms conventional PI controllers in environmental adaptability and disturbance rejection. Experimental results demonstrate that the proposed method achieves comparable high-precision tracking performance while significantly reducing computational time per iteration, ensuring rapid dynamic response and strict enforcement of physical constraints.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Morshedul Islam, Khondokar Fida Hasan, Wali Mohammad Abdullah, Baidya Nath Saha

Abstract: Behavioral Authentication (BA) systems verify user identity claims based on unique behavioral characteristics using machine learning (ML)-based classifiers trained on user behavioral profiles. Although effective, ML-based BA systems face serious privacy threats, including profile inference and reconstruction attacks. This paper presents RUIP-BA (Renewable, Unlinkable, and Irreversible Privacy-Preserving Behavioral Authentication), a non-cryptographic framework tailored to low-computation devices such as IoT and mobile platforms. Random Projection (RP) maps behavioral profiles into lower-dimensional protected templates while approximately preserving utility-relevant geometry, and local Differential Privacy (DP) injects calibrated stochastic perturbations to provide formal privacy protection. The proposed design jointly targets the ISO/IEC 24745 requirements of renewability, unlinkability, and irreversibility. We provide complete algorithmic realizations for enrollment, verification, template renewal, unlinkability testing, and GAN-based adversarial privacy evaluation. We also introduce rigorous formal privacy derivations and proofs under explicit assumptions, including formal security games, theorem-level guarantees at information-theoretic and statistical levels, Cramér-Rao lower bounds for irreversibility, full Jensen-Shannon divergence derivations for unlinkability, and GAN Nash-equilibrium attack bounds. Experiments on voice, swipe, and drawing datasets show authentication accuracy above 96% while sharply limiting feature recoverability under strong GAN-based attacks. RUIP-BA provides a scalable, mathematically grounded, and deployment-ready privacy-preserving BA solution.
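A minimal sketch of the two generic building blocks named in the abstract, random projection plus calibrated Laplace perturbation, is shown below. The matrix shape, noise scale, and function names are assumptions for illustration, not the RUIP-BA algorithms themselves.

```python
import numpy as np

def make_projection(dim_in, dim_out, seed):
    # Renewable: a fresh seed yields a new projection matrix, so a
    # compromised template can be revoked and reissued.
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(dim_out), size=(dim_out, dim_in))

def protect(profile, proj, epsilon, sensitivity, rng):
    # Irreversible: dim_out < dim_in makes exact inversion underdetermined;
    # Laplace noise adds a local-DP style perturbation on top.
    projected = proj @ profile
    noise = rng.laplace(0.0, sensitivity / epsilon, size=projected.shape)
    return projected + noise

rng = np.random.default_rng(0)
profile = rng.normal(size=64)            # toy 64-d behavioral feature vector
proj = make_projection(64, 16, seed=42)
t1 = protect(profile, proj, epsilon=1.0, sensitivity=0.1, rng=rng)
t2 = protect(profile, proj, epsilon=1.0, sensitivity=0.1, rng=rng)
# Verification compares protected templates directly (no inversion needed).
print(np.linalg.norm(t1 - t2) < np.linalg.norm(t1))
```

Two templates of the same profile stay close (the noise is small relative to the signal), while different seeds produce unlinkable projections.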

Article
Computer Science and Mathematics
Algebra and Number Theory

Kunle Adegoke

Abstract: Using generalized binomial coefficient identities and some results of John Dougall, we derive some families of series involving the cubes of Catalan numbers. We also establish a family of series containing fourth powers of Catalan numbers. Finally, we find a generalization of the Bauer series for \( 1/\pi \) and obtain some Ramanujan-like series for \( 1/\pi^2 \) and \( 1/\pi^3 \).
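For context, the classical Bauer series that the abstract generalizes can be checked numerically. The sketch below sums the well-known identity \( \sum_{n\ge 0} (-1)^n (4n+1) \bigl(\binom{2n}{n}/4^n\bigr)^3 = 2/\pi \), a slowly converging alternating series.

```python
import math

def bauer_partial_sum(n_terms):
    # Bauer (1859): sum_{n>=0} (-1)^n (4n+1) * (C(2n,n)/4^n)^3 = 2/pi.
    s, ratio = 0.0, 1.0          # ratio = C(2n,n)/4^n, starting at n = 0
    for n in range(n_terms):
        s += (-1) ** n * (4 * n + 1) * ratio ** 3
        ratio *= (2 * n + 1) / (2 * n + 2)   # advance ratio to n + 1
    return s

approx = bauer_partial_sum(50000)
print(abs(approx - 2 / math.pi))  # alternating-series error < next term
```

The central-binomial ratio \( \binom{2n}{n}/4^n \) is updated by the recurrence \( \binom{2n+2}{n+1}/4^{n+1} = \frac{2n+1}{2n+2}\,\binom{2n}{n}/4^n \), avoiding any large factorials.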

Article
Computer Science and Mathematics
Computer Science

A. Manoj Prabaharan

Abstract: Sensory-impaired children often experience barriers to motor development and psychosocial growth in recreational programs, where traditional assessments lack real-time precision and scalability. This paper introduces an edge AI phenomics framework for tracking motor proficiency, encompassing kinematics like balance and coordination, and psychosocial benefits such as social engagement and self-efficacy during adaptive play activities. Deployed on low-power edge devices, the system fuses RGB-D cameras, IMUs, and bioacoustic sensors into a lightweight pipeline featuring MobileNetV3 pose estimation and conformer encoders for phenotypic feature extraction. Evaluated on a dataset from 250 children across Chennai programs, it achieves 96% motor accuracy (MPJPE < 10 mm) and 0.85 correlation with clinical psychosocial scales, outperforming cloud baselines by 40% in latency. Results demonstrate 25-35% gains in proficiency and well-being over 8 weeks, with implications for inclusive therapies. The framework addresses deployment challenges through quantization and federated learning, advancing scalable, privacy-preserving phenomics in paediatric recreation.

Article
Computer Science and Mathematics
Robotics

Jack Vice, Gita Sukthankar

Abstract: Traditional social navigation systems often treat perception and motion as decoupled tasks, leading to reactive behaviors and perceptual surprise due to limited field of view. While active vision—the ability to choose where to look—offers a solution, most existing frameworks decouple sensing from execution to simplify the learning process. This article introduces a novel joint reinforcement learning (RL) framework (Active Vision for Social Navigation) that unifies locomotion and discrete gaze control within a single, end-to-end policy. Unlike existing factored approaches, our method leverages a model-based RL architecture with a latent world model to explicitly address the credit assignment problem inherent in active sensing. Experimental results in cluttered, dynamic environments demonstrate that our joint policy outperforms factored sensing-action approaches by prioritizing viewpoints specifically relevant to social safety, such as checking blind spots and tracking human trajectories. Our findings suggest that tight sensorimotor coupling is essential for reducing perceptual surprise and ensuring safe, socially aware navigation in unstructured spaces.

Article
Computer Science and Mathematics
Security Systems

Saulius Grigaitis

Abstract: This work investigates multi-scalar multiplication (MSM) over a fixed base for small input sizes, where classical large-scale optimizations are less effective. We propose a novel variant of the Pippenger-based bucket method that enhances performance by using additional precomputation. In particular, our approach extends the BGMW method by introducing structured precomputations of point combinations, enabling the replacement of multiple point additions with table lookups. We further generalize this idea through chunk-based precomputation, allowing flexible trade-offs between memory usage and runtime performance. Experimental results demonstrate that the proposed variants significantly outperform the Fixed Window method for small MSM instances, achieving up to 3× speedup under practical memory constraints. These results challenge the common assumption that bucket-based methods are inefficient for small MSMs.
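The general idea of trading precomputation for table lookups can be sketched on a toy group: integers under addition mod p stand in for elliptic-curve points (point addition would slot in identically). Window sizes and names below are illustrative assumptions, not the paper's BGMW variant.

```python
def precompute(base, p, window=4, n_windows=8):
    # Fixed-base precomputation: table[w][d] = d * 2^(window*w) * base mod p.
    # Stand-in group: integers under addition mod p; a real MSM library
    # would store precomputed elliptic-curve points instead.
    table = []
    for w in range(n_windows):
        block_base = (base * pow(2, window * w, p)) % p
        table.append([(d * block_base) % p for d in range(2 ** window)])
    return table

def fixed_base_mul(scalar, table, p, window=4):
    # Scalar multiplication becomes one table lookup plus one group
    # addition per window: no doublings at runtime.
    acc = 0
    for w, row in enumerate(table):
        digit = (scalar >> (window * w)) & ((1 << window) - 1)
        acc = (acc + row[digit]) % p
    return acc

p, base = 1_000_003, 12345
table = precompute(base, p)
print(fixed_base_mul(987654, table, p) == (987654 * base) % p)  # True
```

A small MSM over the same fixed base then sums `fixed_base_mul` results for each scalar; enlarging `window` grows the tables exponentially while cutting the number of runtime additions, which is the memory/runtime trade-off the abstract describes.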

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Herlindo Hernandez-Ramirez, Jorge Luis Perez-Ramos, Daniel Canton-Enriquez, Ana Marcela Herrera-Navarro, Hugo Jimenez-Hernandez

Abstract: The integration of automated learning and video analysis enables the development of intelligent systems that can operate effectively in uncertain scenarios. These systems can autonomously identify dominant motion dynamics, depending on the theoretical framework used for representation and the learning process used for pattern identification. Current literature offers a state-based approach to describe the key temporal and spatial relationships required to understand motion dynamics. An important aspect of this approach is determining when the number of positively learned rules from a given information source is sufficient to detect dominant motion in automatic surveillance scenarios. This is crucial, as it affects both the variability of movements that monitored subjects can exhibit within the camera’s field of view and the resources needed for effective implementation. This study addresses these gaps through a grammar-based sufficiency criterion, which posits that learning is complete when production rule growth stabilizes, under the assumption of system stationarity. The stability criterion evaluates whether the most probable rules are learned over time, and whenever a high-growth rule is added, it is used to update the criterion. We outline several benefits of having a formal criterion for determining when a symbolic surveillance system has a robust model that explains the observed motion dynamics. Our hypothesis is that a correct model can consistently account for the majority of motion dynamics over time in an automated learning process. The proposed approach is evaluated by modeling motion dynamics in several scenarios using the SEQUITUR algorithm as input and computing the probability of stability along the learning curve, which indicates when the model reaches a steady state of consistent learning. Experimental validation was conducted in real-world scenarios under varying acquisition conditions. 
The results demonstrate that the proposed method achieves robust modeling performance, with accuracy values ranging from 83.56% to 95.92% in dynamic environments.
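The stability criterion can be illustrated with a toy stand-in for grammar learning: tracking distinct digram counts rather than SEQUITUR's actual production rules. All names, window sizes, and thresholds below are assumptions for illustration.

```python
import random

def rule_growth_curve(stream, window=50):
    # Toy stand-in for grammar learning: count distinct symbol digrams
    # ("rules") seen so far, sampled every `window` symbols. A real
    # system would track SEQUITUR production rules instead.
    seen, curve = set(), []
    for i in range(1, len(stream)):
        seen.add((stream[i - 1], stream[i]))
        if i % window == 0:
            curve.append(len(seen))
    return curve

def is_stable(curve, tol=0):
    # Sufficiency sketch: learning is "complete" once rule growth between
    # consecutive windows stops (assuming a stationary source).
    return len(curve) >= 2 and curve[-1] - curve[-2] <= tol

random.seed(1)
stream = [random.choice("ABC") for _ in range(2000)]  # stationary source
curve = rule_growth_curve(stream)
print(is_stable(curve))  # True: all 9 digrams appear early, growth flattens
```

For a stationary 3-symbol source, all 9 possible digrams appear within the first few windows, after which the curve is flat; a non-stationary source would keep adding rules and fail the criterion.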

Article
Computer Science and Mathematics
Computational Mathematics

Dmytro Topchyi

Abstract: In this paper, we consider the properties of the following objects: the plafal and the geo-space (a general overview). As an application of the theory developed, a proof of the equality of the complexity classes P and NP is given. The geo-plafal serves as the kernel (computational template) of the proof; the constructive theory of serendipity approximations, the Stepanets school, and the Bogolyubov principle of the decay of correlations for infinite systems (dim = 3) serve as the shell.

Article
Computer Science and Mathematics
Logic

Giuseppe Filippone, Mario Galici, Gianmarco La Rosa, Federica Piazza, Marco Elio Tabacchi

Abstract: This paper investigates the structure of fuzzy Lie subalgebras, with particular emphasis on isomorphisms and nilpotency. Building on two prior conference contributions, one of which established foundational results on fuzzy bases of Lie algebras, we develop here a more complete and unified treatment of these themes. We introduce a notion of isomorphism between fuzzy Lie subalgebras based on the transfer principle via t-cut sets, and we prove that isomorphic fuzzy Lie subalgebras necessarily share the same nilpotency measure. The central contribution of the paper is a fuzzy measure of nilpotency N(μ)∈[0,1], defined for any non-constant fuzzy Lie subalgebra μ of a Lie algebra g. This invariant equals 1 precisely when μ is fuzzy nilpotent, and decreases as the subalgebra departs from nilpotency. We show that nilpotency of the underlying Lie algebra implies N(μ)=1, but that the converse fails in general, as witnessed by an explicit counterexample.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jaehwan Kim

Abstract: We propose the Knowledge Landscape hypothesis: the forward pass of a large language model (LLM) encodes whether it knows the answer before producing any output token. Well-learned knowledge corresponds to deep convergence valleys in the activation landscape; unlearned queries traverse flat plains where signals disperse. These geometric properties manifest as measurable signals—token-level entropy and layer-wise hidden-state variance—that precede and causally influence the model’s output uncertainty. On TriviaQA with Qwen2.5-7B and Mistral-7B, token entropy strongly discriminates known from unknown questions (Mann-Whitney p < 10⁻⁷, rank-biserial r > 0.5 across both architectures). Hidden-state variance localises a metacognitive locus at layers 9 and 20–27 (peak p < 10⁻⁴, r = 0.46). Activation patching with monotone interpolation provides causal confirmation: entropy decreases strictly as the known hidden state is progressively substituted, with Spearman rank correlation of negative one (permutation p < 0.001). A single-pass abstention system built on these signals achieves an area under the ROC curve of 0.804 and a 5.6 percentage-point accuracy gain over the unaided baseline, without any fine-tuning.
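The entropy signal underlying such an abstention system can be sketched as follows; the threshold value and array sizes are illustrative assumptions, not the paper's calibrated settings.

```python
import numpy as np

def token_entropy(logits):
    # Shannon entropy (nats) of the next-token distribution from raw logits.
    z = logits - logits.max()                 # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def abstain(logits, threshold=1.0):
    # Single-pass abstention sketch: refuse to answer when the next-token
    # distribution is too flat (high entropy ~ "unknown" question).
    return token_entropy(logits) > threshold

peaked = np.array([8.0, 0.0, 0.0, 0.0])   # model "knows": low entropy
flat = np.zeros(4)                         # model unsure: entropy = ln 4
print(abstain(peaked), abstain(flat))      # False True
```

In practice the logits would come from the model's first decoding step, and the threshold would be tuned on held-out known/unknown questions to hit the desired point on the ROC curve.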

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Laxman M M

Abstract: Shannon's Mathematical Theory of Communication (1948) assumes encoding fidelity — that the encoder preserves the statistical structure of the source. Large Language Models show significant systematic degradation of this assumption for non-English languages, producing outputs that are internally consistent but semantically degraded. We call this failure mode Coherent Misalignment and introduce the Encoding Fidelity Index (EFI), a practical proxy measuring the preservation of semantic content across the encoding boundary. Across 4 languages (English, Kannada, Tamil, Hindi), 2 embedding models (384-dimensional, 768-dimensional), and 2 LLMs (DeepSeek V3.1, Mistral Small 24B), we find: (1) EFI degrades by ~90% for all non-English Indian languages tested (p < 10⁻¹³), independent of language family; a European language control (French, Spanish, German) confirms this is tokenizer-induced encoding loss, not inherent cross-lingual distance (p = 1.6 × 10⁻⁸, Cohen's d = 1.33); (2) variance amplification is Dravidian-specific: Kannada shows 1.72–2.05× amplification (p < 0.05 in both models), Tamil shows partial amplification (1.63×, p = 0.016 in Mistral), while Hindi shows no amplification despite equivalent EFI degradation; (3) complex medical sentences show paradoxical EFI increase from English loanword anchoring; (4) Mistral exhibits scenario-dependent code-switching and orthographic corruption of medical terms. These findings suggest that output-layer consistency metrics are unlikely to detect encoding-level degradation, since they measure response variance structure rather than semantic content. The dissociation between universal encoding degradation and language-specific variance amplification reveals that training data representation, not encoding fidelity alone, determines clinical reliability, with implications for non-English clinical AI deployment.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Nurul Absar Siddiky

Abstract: Large language models (LLMs) are designed to be helpful, polite, and safe. However, users and attackers have discovered that these models can sometimes be pushed into ignoring their safety rules. This is commonly called jailbreaking. A jailbreak attack is a method for making an LLM answer a question or perform a task that it would normally refuse. At the same time, researchers have proposed many defenses to make models more robust against such attacks. This paper presents a beginner-friendly survey of major LLM jailbreak attack and defense methods. We follow a simple taxonomy in which attacks are divided into white-box and black-box methods, while defenses are divided into prompt-level and model-level methods. For each major method family, we explain the main idea in simple language, name representative techniques from the literature, and provide descriptive toy examples to help readers understand the mechanism. We also summarize common evaluation metrics and datasets used in jailbreak research. The purpose of this paper is pedagogical: to give new students and researchers a clear mental map of how jailbreak attacks work, why they succeed, and how current defense methods attempt to stop them.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Karthik Meduri, Ruthvik Yedla, Santosh Reddy Addula, Guna Sekhar Sajja, Shaila Rana, Elyson De La Cruz, Mohan Harish Maturi, Hari Gonaygunta

Abstract: Hybrid quantum-classical neural networks offer a parameter-efficient path for clinical prediction, yet the field lacks reproducible methodologies for architectural design. Most current models rely on ad hoc circuit choices, complicating replication and comparison. This study introduces a generalizable Hybrid Quantum-Classical Neural Network (HQCNN) framework that replaces trial-and-error design with a principled Bayesian-surprise-guided methodology. Evaluated on the Wisconsin Diagnostic Breast Cancer dataset (n = 569), the framework employs a four-component PCA pipeline feeding a 4-qubit parameterized quantum circuit with two variational layers, integrated within a classical neural pipeline. The model was benchmarked against tuned Support Vector Machine, Random Forest, XGBoost, and Multi-Layer Perceptron baselines under identical 5-fold stratified cross-validation with nested GridSearchCV. The HQCNN achieved 96.49% ± 1.24% accuracy and 99.51% ± 0.38% AUC, outperforming a structurally comparable MLP while using 11.27% fewer trainable parameters (441 versus 497). A circuit-depth ablation identified two variational layers as optimal, consistent with barren-plateau dynamics. KL divergence scores of 0.925, 0.804, and 0.653 nats quantified the epistemic informativeness of competitive accuracy, optimal shallow depth, and parameter efficiency, respectively, while the AI2 AutoDiscovery platform independently validated preprocessing choices post hoc. These results indicate that the primary near-term value of hybrid models in healthcare lies in empirical parameter efficiency rather than raw accuracy gains. Fewer parameters reduce overfitting risk on small medical datasets, lower deployment costs, and produce models that are easier to audit for clinical governance. The Bayesian-surprise methodology finally provides the reproducible, principled design framework that the field has long lacked.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Roa Alharbi, Noureddine Abbadeni

Abstract: Machine learning-based systems are increasingly deployed in high-stakes domains such as healthcare, finance, law, and e-commerce, where their predictions directly influence critical decisions. Although these systems offer powerful data-driven support, they also introduce serious concerns related to fairness, bias, and discrimination. As a result, detecting and addressing unfairness in machine learning software has become a central research challenge. This study presents a systematic mapping of research on software unfairness detection in machine learning systems, with the aim of consolidating existing fairness definitions, identifying major problem types, examining testing approaches, reviewing commonly used datasets, and highlighting open research gaps. A structured search was conducted across five major digital libraries and additional sources, covering publications from 2010 to 2025. From 1,805 initially identified records, 67 primary studies met the inclusion and quality assessment criteria. The findings show that research activity has grown significantly since 2019, reaching a peak in 2022. Most studies were published at conferences, followed by journals and workshops. The literature addresses various themes, including analysis of existing fairness methods, bias mitigation strategies, testing techniques, and evaluation frameworks. Fairness testing was performed at unit, integration, and system levels, with integration testing being the most common. Frequently used datasets include COMPAS, Adult Census Income, and German Credit. Widely adopted tools such as IBM AI Fairness 360, Themis, and Aequitas were also identified. Overall, the mapping highlights progress made in fairness research while emphasizing the need for stronger integration of fairness into practical machine learning development.

Article
Computer Science and Mathematics
Algebra and Number Theory

Weicun Zhang

Abstract: The Extended, Generalized, and Grand Riemann Hypotheses are proved under a unified framework, which is based on the general properties of L-functions, i.e., the divisibility of entire functions contained in the symmetric functional equation, where the uniqueness of zero multiplicities (although their specific values remain unknown) of a given non-zero entire function plays a critical role. Consequently, the existence of Landau-Siegel zeros is excluded, thereby confirming the Landau-Siegel zeros conjecture.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Fumin Zou, Lei Zou, Feng Guo, Xunhuang Wang, Jianqing Weng, Tao Fang, Haocai Jiang, Xueming Wu

Abstract: This paper proposes an enhanced quantum-inspired sentiment analysis model incorporating a self-embedding mechanism for sentiment feature extraction and classification tasks. The method integrates phase-pre-trained self-embedding, bidirectional GRUs, a multi-head attention mechanism, and a multi-layer Transformer structure, effectively capturing semantic and emotional features in texts. Simultaneously, the model introduces contrastive learning and an enhanced feature interaction module, further improving feature discriminability. Extensive experiments on the RECCON dataset demonstrate that the proposed model significantly outperforms mainstream baseline methods (KEC, MPEG, Window Transformer) on key metrics such as macro-F1, positive-class F1, and negative-class F1. The experimental results show that the method not only improves overall accuracy and recall but also effectively mitigates challenges arising from class imbalance, achieving a macro-F1 of 0.95, positive-class F1 of 0.93, and negative-class F1 of 0.97 on the test set. The findings suggest that the combination of quantum-inspired structures and self-embedding mechanisms holds broad application prospects for complex sentiment analysis tasks.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mizanu Degu, Midhila Madhusoodanan, Medha Chippa, Abhilash Hareendranathan

Abstract: (1) Background: Ultrasound (US) imaging is widely used in clinical diagnosis but is often degraded by speckle noise, which reduces image quality and can hinder interpretation. Deep learning has emerged as a promising approach for US denoising, yet its clinical applicability remains unclear. (2) Methods: A systematic review of studies published in the last three years on deep learning-based US denoising was conducted following PRISMA-DTA guidelines. Searches were performed in IEEE-Xplore, PubMed, ScienceDirect, Scopus, Web of Science, and Google Scholar. Data were extracted on anatomy, noise type, learning paradigm, network architecture, datasets, evaluation metrics, and performance outcomes. (3) Results: From 951 records screened, 36 studies were included. Most focused on breast, fetal, cardiac, and abdominal US. Convolutional neural networks (CNNs), particularly U-Net, were the most common approach, while GANs, transformers, and variational autoencoders were less explored. Reported PSNR ranged from 30-45 dB and SSIM from 0.85-0.97. Most studies (34 out of 36) relied on synthetic noise and paired datasets, with limited evaluation on real clinical images. (4) Conclusions: CNN-based methods dominate US denoising research, but translation to clinical practice is limited due to reliance on synthetic data and inconsistent evaluation metrics. Future work should focus on large benchmark datasets and standardized metrics to improve generalizability across clinical settings.
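Of the metrics reported, PSNR is straightforward to reproduce; the sketch below computes it against a toy image corrupted with multiplicative speckle-like noise (the noise model, image sizes, and values are illustrative assumptions, not taken from any reviewed study).

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
# Multiplicative speckle-like noise, as commonly simulated for ultrasound.
noisy = np.clip(clean * (1 + 0.1 * rng.standard_normal(clean.shape)), 0, 255)
print(round(psnr(clean, noisy), 1))
```

SSIM, the other metric the review tabulates, additionally compares local luminance, contrast, and structure rather than raw pixel error, which is one reason studies report both.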

Article
Computer Science and Mathematics
Applied Mathematics

Mehmet Erbudak

Abstract: China served as the primary source of novel materials and innovations that significantly contributed to the development of medieval Europe. In this study, I employ an unconventional approach grounded in the mathematics of ornamental arts to trace the trajectory of Chinese goods to the West. Utilizing the concept of the wallpaper group, this research analyzes Chinese ornaments to discern similarities with the artwork of the Arabs and Turkish Seljuks during the 8th to 12th centuries. Furthermore, it elucidates the mechanisms through which Chinese art reached the West, thereby providing insights into the migration of technology.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated