Computer Science and Mathematics

Sort by

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Sergio Villanueva López

,

Emilio Soria-Olivas

,

Manuel Sánchez-Montañés Isla

Abstract: In multi-product industrial inspection, maintaining one memory bank per product yields costs that scale linearly with the number of product types. A shared bank with a fixed memory budget is more practical, but mixing embeddings from different products introduces inter-product interference. We call this memory pollution: nearest-neighbor queries retrieve features from other products, and budget allocations optimized on isolated banks degrade once retrieval is shared. Across 15 MVTec AD products, a per-product oracle allocator underperforms uniform allocation by 1.6 percentage points (pp) at 18 MB, and the wrong-neighbor rate (WNR) reaches 38% at 2.9 MB. We address this with a training-free router based on mean-embedding prototypes that identifies the product before nearest-neighbor search. The router adds 0.06 MB and achieves perfect top-1 accuracy over 30 product types (MVTec AD, VisA, BTAD). With routing, the performance gap across five allocation strategies shrinks to at most 1 pp. Top 1 routing with uniform allocation improves image-level area under the ROC curve (AUROC) from 90.8% to 91.1% at 18 MB. Coreset selection and clustering further provide 16× memory reduction with less than 1 pp AUROC loss. All components are training-free and operate on frozen DINOv3 features.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Lei Jin

,

Runchi Zhang

Abstract: Traditional credit scoring models reduce decisions to static classification, ignoring dynamic risk evolution and long-term profit. This paper integrates the Hamilton-Jacobi-Bellman (HJB) equation with deep reinforcement learning, reformulating credit risk as a discrete-time stochastic optimal control problem. Theoretically, we establish equivalence between discrete Markov decision processes and the HJB equation, prove existence and uniqueness of the optimal value function, derive the closed-form Riccati solution under linear-quadratic assumptions, and show neural network value iteration is an effective numerical scheme with separable errors. Empirically, using LendingClub data (2016–2018), the HJB-based PPO model significantly outperforms all static baseline models considered (e.g., logistic regression, random forest, XGBoost) in average profit (1.5167) and total profit (786,700.4682). Ablation experiments replacing the policy network with linear mapping reduce profit by 34.7%, confirming the necessity of nonlinear approximation. Theoretical validation gives a mean squared error of 0.0006 between the neural value function and Riccati solution. This work provides a rigorous mathematical foundation for reinforcement learning in financial risk control and a path from static classification to dynamic optimization in credit scoring.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Loc Nguyen

Abstract: Currently (2025), deep learning is the most important and popular methodology in artificial intelligence (AI) and artificial neural network (ANN) is the foundation of deep learning. The main drawback of ANN is the boom problem of a huge number of parametric weights when ANN in deep learning establishes a large number of hidden layers. The boom problem can be alleviated by high-performance computer but will be serious in case of high-dimension input data like image. The excellent solution for image processing within context of deep learning is that large parametric weight vector is reduced into much smaller window encoded by a so-called filtering kernel which is often 3x3 matrix or 5x5 matrix which is convoluted over entire image data. ANN with support of such filtering kernel is called convolutional neural network (CNN). Many researches prove that CNN is feasible and effective in image processing. The hidden cause of the effectiveness of CNN is that the visionary structure of an image is aggregated in such a way that filtering kernel is ideal to extract image features. However, it is not asserted that matrix-based filtering kernel is appropriate to other high-dimension data that is not image. Another solution of the boom problem is that large parametric weight vector is organized as matrix that is the same structure of 2-dimension data like image, which leads to a so-called matrix neural network (MNN) whose parameters are weighted matrices. Computation cost of MNN is decreased significantly in comparison with ANN but it is necessary to test the effectiveness of MNN with respect to CNN. This is the main hypothesis “whether MNN is the alternative of CNN” which is tested in this research, hinted by the research title. Moreover, transformer which is the new trend (2025) in AI and deep learning, which aims to improve/replace traditional ANN by self-supervised learning, in which attention is the significant mechanism of self-supervised learning. Anyhow, attention which is the cornerstone of transformer is the representation of internal structure/relationship inside high-dimension data like image. Therefore, the implicit deep meanings of attention and filtering kernel are similar, which represents feature of data, which does not go beyond parametric weights too. In general, the research has two goals: 1) explaining and implementing ANN, CNN, and transformer (attention) and 2) applying analysis of variance (ANOVA) into evaluating the effectiveness of ANN, CNN, and transformer (attention) within context of image classification. The ultimate result is that it is not asserted that MNN is the alternative of CNN but MNN can be an optional choice for implementing ANN in context of image processing instead of focusing on the unique CNN solution. Moreover, the incorporation of MNN and attention in implementing transformer produces a compromising solution of high performance and computational cost.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Shakib Hasan

,

Mst Mosaddeka Naher Jabe

Abstract: As the world embarks on an artificial intelligence revolution, governments and supranational organizations are taking highly divergent approaches to regulation in an effort to regulate the effects of AI. Although there are new educational theories, which propose that AI might precipitate a paradigm shift in how knowledge is produced, which values human-AI co-creation [1], empirical studies on the actual way states will make the transition are in short supply. To fill this gap, this research paper applies a qualitative comparative policy review of 35 representative excerpts extracted from seven authoritative legislative and strategic documents across China, Singapore, and the European Union. We use a six-dimensional framework (inter-coder reliability κ = 1.00) to investigate the extent to which these policies are framed around optimization or restructuring: focusing on infrastructural scale and efficiency versus requiring systemic, pedagogical, and epistemic transformation. As findings indicate, there are radically different policy imaginaries. Relying solely on restructuring-based legal requirements, the EU compensates for high-risk algorithmic harms and implements tight ethical protection. China displays a characterized temporal development, as it alters macroeconomic optimization in 2017 to a hybrid system that requires interactive exploration and multimodal creation in 2025. Singapore, on the other hand, takes the calculated risk of a middle way, with massive reorganization of human-focused pedagogical functions and with optimization safely applied to scale up the infrastructures of the public services. Finally, this research paper proves that there is no single global AI educational governance. We state that negotiating this optimization-restructuring tension is the key to institutions that seek to develop authentic student agency without undermining ethical protection.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Nurul Absar Siddiky

Abstract: Large language models (LLMs) are now used in chatbots, search engines, writing assistants, coding tools, educational systems, and AI agents. At the same time, they are vulnerable to a wide range of attacks. Some attacks attempt to make the model ignore its rules and produce harmful or manipulated outputs, while others aim to extract private or sensitive information from the model or its training data. This paper presents a concept-level survey of major LLM attack methods in language that is simple enough for broad readers while remaining structured like a research paper. We organize the literature into two high-level groups: security attacks and privacy attacks. Under security attacks, we discuss prompt injection, jailbreaking, backdoor attacks, and data poisoning attacks. Under privacy attacks, we discuss gradient leak-age, membership inference, and personally identifiable information (PII) leakage. For each family, we explain the core idea, summarize representative methods from the literature, and provide descriptive toy examples that help readers understand the mechanism without requiring advanced background knowledge. The goal of this paper is pedagogical: to help new researchers, students, and general readers build a clear mental model of the LLM attack landscape.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Anton Svystunov

,

Yaroslav Tereshchenko

Abstract: Rapid advancements in large language models with code generation abilities have enabled new paradigms in automated software development, positioning AI both as a coding assistant and an active actor within complex software ecosystems. Traditional code generation pipelines, mostly relying on tool calling via ReAct approach, require a complete code snippet to be generated and followed by validation and correction, often leading to significant latency and resource overhead due to sequential inference and execution processes. This research introduces a novel asynchronous inference algorithm that integrates context-free grammar parsing with real-time REPL-based execution, enabling early detection of syntax, semantic, and runtime errors without completing entire code snippets. We formally define the suitability criteria for LLMs in a target programming language, establish parse-tree-based identification of top-level statements, and present an incremental buffer-parsing mechanism that triggers execution upon recognition of complete statements. Implemented for Python 3 using the Lark parser and evaluated on a modified MBPP split ($N{=}113$ tasks; dataset and prompts in the Appendix) across six models---CodeAct--Mistral, GPT-OSS~20B, Gemma~3, Llama~3.2, Phi~4, and Qwen3-Coder~30B---our method is compared to a synchronous baseline using paired Wilcoxon tests with Bonferroni correction. Empirical results show significantly faster time-to-first-output for every model, large reductions in total latency where top-level script execution dominates (up to roughly an order of magnitude for CodeAct--Mistral), and no material change in pass or correctness rates, indicating that incremental execution improves responsiveness without altering task outcomes. With special prompting or finetuning, the method shows up to 4x reduction in latency for valid code generation. The benchmark results confirm that synchronous inference constraints can be alleviated through grammar-guided incremental execution, allowing more efficient and responsive agent-driven code execution workflows. Future research will explore predictive parsing techniques, deeper integration with agentic system architectures, security constraints, and formulating runtime requirements for scalable deployment of LLM-generated code execution environments.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Morshedul Islam

,

Khondokar Fida Hasan

,

Wali Mohammad Abdullah

,

Baidya Nath Saha

Abstract: Behavioral Authentication (BA) systems verify user identity claims based on unique behavioral characteristics using machine learning (ML)-based classifiers trained on user behavioral profiles. Although effective, ML-based BA systems face serious privacy threats, including profile inference and reconstruction attacks. This paper presents RUIP-BA (Renewable, Unlinkable, and Irreversible Privacy-Preserving Behavioral Authentication), a non-cryptographic framework tailored to low-computation devices such as IoT and mobile platforms. Random Projection (RP) maps behavioral profiles into lower-dimensional protected templates while approximately preserving utility-relevant geometry, and local Differential Privacy (DP) injects calibrated stochastic perturbations to provide formal privacy protection. The proposed design jointly targets the ISO/IEC 24745 requirements of renewability, unlinkability, and irreversibility. We provide complete algorithmic realizations for enrollment, verification, template renewal, unlinkability testing, and GAN-based adversarial privacy evaluation. We also introduce rigorous formal privacy derivations and proofs under explicit assumptions, including formal security games, theorem-level guarantees at information-theoretic and statistical levels, Cram'er-Rao lower bounds for irreversibility, full Jensen-Shannon divergence derivations for unlinkability, and GAN Nash-equilibrium attack bounds. Experiments on voice, swipe, and drawing datasets show authentication accuracy above 96% while sharply limiting feature recoverability under strong GAN-based attacks. RUIP-BA provides a scalable, mathematically grounded, and deployment-ready privacy-preserving BA solution.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Herlindo Hernandez-Ramirez

,

Jorge Luis Perez-Ramos

,

Daniel Canton-Enriquez

,

Ana Marcela Herrera-Navarro

,

Hugo Jimenez-Hernandez

Abstract: The integration of automated learning and video analysis enables the development of intelligent systems that can operate effectively in uncertain scenarios. These systems can autonomously identify dominant motion dynamics, depending on the theoretical framework used for representation and the learning process used for pattern identification. Current literature offers a state-based approach to describe the key temporal and spatial relationships required to understand motion dynamics. An important aspect of this approach is determining when the number of positively learned rules from a given information source is sufficient to detect dominant motion in automatic surveillance scenarios. This is crucial, as it affects both the variability of movements that monitored subjects can exhibit within the camera’s field of view and the resources needed for effective implementation. This study addresses these gaps through a grammar-based sufficiency criterion, which posits that learning is complete when production rule growth stabilizes, under the assumption of system stationarity. The stability criterion evaluates whether the most probable rules are learned over time, and whenever a high-growth rule is added, it is used to update the criterion. We outline several benefits of having a formal criterion for determining when a symbolic surveillance system has a robust model that explains the observed motion dynamics. Our hypothesis is that a correct model can consistently account for the majority of motion dynamics over time in an automated learning process. The proposed approach is evaluated by modeling motion dynamics in several scenarios using the SEQUITUR algorithm as input and computing the probability of stability along the learning curve, which indicates when the model reaches a steady state of consistent learning. Experimental validation was conducted in real-world scenarios under varying acquisition conditions. The results demonstrate that the proposed method achieves robust modeling performance, with accuracy values ranging from 83.56% to 95.92%in dynamic environments.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Jaehwan Kim

Abstract: We propose the Knowledge Landscape hypothesis: the forward pass of a large language model (LLM) encodes whether it knows the answer before producing any output token. Well-learned knowledge corresponds to deep convergence valleys in the activation landscape; unlearned queries traverse flat plains where signals disperse. These geometric properties manifest as measurable signals—token-level entropy and layer-wise hidden-state variance— that precede and causally influence the model’s output uncertainty. On TriviaQA with Qwen2.5-7B and Mistral-7B, token entropy strongly discriminates known from unknown questions (Mann-Whitney p < 10⁻⁷, rank-biserial r > 0.5 across both architectures). Hidden-state variance localises a metacognitive locus at layers 9 and 20–27 (peak p < 10⁻⁴, r = 0.46). Activation patching with monotone interpolation provides causal confirmation: entropy decreases strictly as the known hidden state is progressively substituted, with Spearman rank correlation of negative one (permutation p < 0.001). A single-pass abstention system built on these signals achieves an area under the ROC curve of 0.804 and a 5.6 percentage-point accuracy gain over the unaided baseline, without any fine-tuning

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Laxman M M

Abstract: Shannon's Mathematical Theory of Communication (1948) assumes encoding fidelity — that the encoder preserves the statistical structure of the source. Large Language Models show significant systematic degradation of this assumption for non-English languages, producing outputs that are internally consistent but semantically degraded. We call this failure mode Coherent Misalignment and introduce the Encoding Fidelity Index (EFI), a practical proxy measuring the preservation of semantic content across the encoding boundary. Across 4 languages (English, Kannada, Tamil, Hindi), 2 embedding models (384-dimensional, 768-dimensional), and 2 LLMs (DeepSeek V3.1, Mistral Small 24B), we find: (1) EFI degrades by ~90% for all non-English Indian languages tested (p < 10⁻¹³), independent of language family; a European language control (French, Spanish, German) confirms this is tokenizer-induced encoding loss, not inherent cross-lingual distance (p = 1.6 × 10⁻⁸, Cohen's d = 1.33); (2) variance amplification is Dravidian-specific: Kannada shows 1.72–2.05× amplification (p < 0.05 in both models), Tamil shows partial amplification (1.63×, p = 0.016 in Mistral), while Hindi shows no amplification despite equivalent EFI degradation; (3) complex medical sentences show paradoxical EFI increase from English loanword anchoring; (4) scenario-dependent code-switching and orthographic corruption of medical terms (Mistral). These findings suggest that output-layer consistency metrics are unlikely to detect encoding-level degradation, since they measure response variance structure rather than semantic content. The dissociation between universal encoding degradation and language-specific variance amplification reveals that training data representation, not encoding fidelity alone, determines clinical reliability, with implications for non-English clinical AI deployment.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Md Nurul Absar Siddiky

Abstract: Large language models (LLMs) are designed to be helpful, polite, and safe. However, users and attackers have discovered that these models can sometimes be pushed into ignoring their safety rules. This is commonly called jailbreaking. A jailbreak attack is a method for making an LLM answer a question or perform a task that it would normally refuse. At the same time, researchers have proposed many defenses to make models more robust against such attacks. This paper presents a beginner-friendly survey of major LLM jailbreak attack and defense methods. We follow a simple taxonomy in which attacks are divided into white-box and black-box methods, while defenses are divided into prompt-level and model-level methods. For each major method family, we explain the main idea in simple language, name representative techniques from the literature, and provide descriptive toy examples to help readers understand the mechanism. We also summarize common evaluation metrics and datasets used in jailbreak research. The purpose of this paper is pedagogical: to give new students and researchers a clear mental map of how jailbreak attacks work, why they succeed, and how current defense methods attempt to stop them.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Karthik Meduri

,

Ruthvik Yedla

,

Santosh Reddy Addula

,

Guna Sekhar Sajja

,

Shaila Rana

,

Elyson De La Cruz

,

Mohan Harish Maturi

,

Hari Gonaygunta

Abstract: Hybrid quantum-classical neural networks offer a parameter-efficient path for clinical prediction, yet the field lacks reproducible methodologies for architectural design. Most current models rely on ad hoc circuit choices, complicating replication and comparison. This study introduces a generalizable Hybrid Quantum-Classical Neural Network (HQCNN) framework that replaces trial-and-error design with a principled Bayesian-surprise-guided methodology. Evaluated on the Wisconsin Diagnostic Breast Cancer dataset (n = 569), the framework employs a four-component PCA pipeline feeding a 4-qubit parameterized quantum circuit with two variational layers, integrated within a classical neural pipeline. The model was benchmarked against tuned Support Vector Machine, Random Forest, XGBoost, and Multi-Layer Perceptron baselines under identical 5-fold stratified cross-validation with nested GridSearchCV. The HQCNN achieved 96.49% ± 1.24% accuracy and 99.51% ± 0.38% AUC, outperforming a structurally comparable MLP while using 11.27% fewer trainable parameters (441 versus 497). A circuit-depth ablation identified two variational layers as optimal, consistent with barren-plateau dynamics. KL divergence scores of 0.925, 0.804, and 0.653 nats quantified the epistemic informativeness of competitive accuracy, optimal shallow depth, and parameter efficiency, respectively, while the AI2 AutoDiscovery platform independently validated preprocessing choices post hoc. These results indicate that the primary near-term value of hybrid models in healthcare lies in empirical parameter efficiency rather than raw accuracy gains. Fewer parameters reduce overfitting risk on small medical datasets, lower deployment costs, and produce models that are easier to audit for clinical governance. The Bayesian-surprise methodology finally provides the reproducible, principled design framework that the field has long lacked.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Roa Alharbi

,

Noureddine Abbadeni

Abstract: Machine learning-based systems are increasingly deployed in high-stakes domains such as healthcare, finance, law, and e-commerce, where their predictions directly influence critical decisions. Although these systems offer powerful data-driven support, they also introduce serious concerns related to fairness, bias, and discrimination. As a result, detecting and addressing unfairness in machine learning software has become a central research challenge. This study presents a systematic mapping of research on software unfairness detection in machine learning systems, with the aim of consolidating existing fairness definitions, identifying major problem types, examining testing approaches, reviewing commonly used datasets, and highlighting open research gaps. A structured search was conducted across five major digital libraries and additional sources, covering publications from 2010 to 2025. From 1,805 initially identified records, 67 primary studies met the inclusion and quality assessment criteria. The findings show that research activity has grown significantly since 2019, reaching a peak in 2022. Most studies were published at conferences, followed by journals and workshops. The literature addresses various themes, including analysis of existing fairness methods, bias mitigation strategies, testing techniques, and evaluation frameworks. Fairness testing was performed at unit, integration, and system levels, with integration testing being the most common. Frequently used datasets include COMPAS, Adult Census Income, and German Credit. Widely adopted tools such as IBM AI Fairness 360, Themis, and Aequitas were also identified. Overall, the mapping highlights progress made in fairness research while emphasizing the need for stronger integration of fairness into practical machine learning development.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Fumin Zou

,

Lei Zou

,

Feng Guo

,

Xunhuang Wang

,

Jianqing Weng

,

Tao Fang

,

Haocai Jiang

,

Xueming Wu

Abstract: This paper proposes an enhanced quantum-inspired sentiment analysis model incorporating a self-embedding mechanism for sentiment feature extraction and classification tasks. The method integrates phase-pre-trained self-embedding, bidirectional GRUs, a multi-head attention mechanism, and a multi-layer Transformer structure, effectively capturing semantic and emotional features in texts. Simultaneously, the model introduces contrastive learning and an enhanced feature interaction module, further improving feature discriminability. Extensive experiments on the RECCON dataset demonstrate that the proposed model significantly outperforms mainstream baseline methods (KEC, MPEG, Window Transformer) on key metrics such as macro-F1, positive-class F1, and negative-class F1. The experimental results show that the method not only improves overall accuracy and recall but also effectively mitigates challenges arising from class imbalance, achieving a macro-F1 of 0.95, positive-class F1 of 0.93, and negative-class F1 of 0.97 on the test set. The findings suggest that the combination of quantum-inspired structures and self-embedding mechanisms holds broad application prospects for complex sentiment analysis tasks.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mizanu Degu

,

Midhila Madhusoodanan

,

Medha Chippa

,

Abhilash Hareendranathan

Abstract: (1) Background: Ultrasound (US) imaging is widely used in clinical diagnosis but is often degraded by speckle noise, which reduces image quality and can hinder interpretation. Deep learning has emerged as a promising approach for US denoising, yet its clinical applicability remains unclear. (2) Methods: A systematic review of studies published in the last three years on deep learning-based US denoising was conducted following PRISMA-DTA guidelines. Searches were performed in IEEE-Xplore, PubMed, ScienceDirect, Scopus, Web of Science, and Google Scholar. Data were extracted on Anatomy, noise type, learning paradigm, network architecture, datasets, evaluation metrics, and performance outcomes. (3) Results: from 951 records scrapped, 36 studies were included. Most focused-on breast, fetal, cardiac, and abdominal US. Convolutional neural networks (CNNs), particularly U-Net, were the most common approach, while GANs, transformers, and variational autoencoders were less explored. Reported PSNR ranged from 30-45 dB and SSIM from 0.85-0.97. Most studies (34 out of 36) relied on synthetic noise and paired datasets, with limited evaluation on real clinical images. (4) Conclusions: CNN-based methods dominate US denoising research, but translation to clinical practice is limited due to reliance on synthetic data and inconsistent evaluation metrics. Future work should focus on large benchmark datasets and standardized metrics to improve generalizability across clinical settings.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rashid Mehmood

,

Eid Rehman

,

Muhammad Habib

Abstract: The rapid advancement of Large Language Models (LLMs) has sparked a debate on whether their performance reflects genuine inferential reasoning or sophisticated rote memorization of internet-scale datasets. While LLMs achieve high scores on standardized benchmarks, these metrics often fail to distinguish between the retrieval of learned patterns and the application of underlying logical principles. This study provides a diagnostic characterization of LLM behavior through a series of targeted probes designed to isolate structural reasoning breaks. Our experiments reveal a persistent "grounding gap" across contemporary models, where surface-level linguistic fluency masks failures in mechanical plausibility, geometric transformation, and multi-entity relational consistency. We identify a computational analog of the Einstellung effect, wherein models default to high-probability training templates even when presented with explicit counterfactual constraints. Furthermore, our analysis of the Abstraction and Reasoning Corpus (ARC-AGI) and proprietary cross-modal probes demonstrates that model performance is often "jagged"—highly sensitive to prompt structure and prone to context misattribution across conversation turns. These findings suggest that current architectures remain tightly coupled to training-time statistical distributions and lack stable mechanisms for internal verification or adaptive restructuring. In light of these findings, we advocate for a shift in AI evaluation from static, outcome-oriented benchmarks toward diagnostic, novelty-persistent frameworks that prioritize cognitive autonomy and introspective self-auditing. By mapping the boundaries where probabilistic pattern matching diverges from functional reasoning, this work underscores a critical requirement for architectural paradigms that move beyond mere parameter scaling. We conclude that achieving grounded, self-regulating intelligence necessitates sys- tems capable of maintaining structural invariants and verifying internal logic independently of training-time statistical frequencies. “Language serves as a medium for expressing intelligence, not as a substrate for its storage”.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Aisha Farooq

Abstract: Database systems have become an essential component of pharmacy and healthcare management, as it effectively manages the growing complexity and volume of medical data and have replaced the conventional paper-based approach. Patients who are on traditional medication management methods have often struggled due to medication errors, poor adherence, and lack of continuous monitoring, which poses a serious threat to treatment outcomes and patient safety. This review examines database management systems (DBMS) in the healthcare system, making it more efficient and accessible. Within the pharmacy database systems, support and enhance many critical functions such as drug information management, prescription processing, and inventory control, collectively reducing medication errors and providing greater patient safety.The findings reveal that database systems enable real-time data access, optimized resource management, and enhanced collaboration among health care teams. They also facilitate clinical decision-making and support overall health care service delivery. Despite the number of advantages, there are still significant challenges and gaps present, which include privacy risk, financial cost of its implementation, and lack of access in developing parts of the world. Both pharmacy-specific databases and broader healthcare systems need more research for better development. This paper evaluates the distinct roles of various database types like relational, NoSQl and cloud-based systems. Key applications within pharmacy and healthcare systems are discussed, such as drug information management, prescription processing, inventory control, Electronic Health Records (EHRs), Clinical Decision Support Systems (CDSS), and Hospital Management Systems (HMS). These systems have the potential to raise the health care quality and operational efficiency.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Hongyin Zhu

,

JinMing Liang

,

Mengjun Hou

,

Ruifan Tang

,

Xianbin Zhu

,

Jingyuan Yang

,

Yuanman Mao

,

Feng Wu

Abstract: Existing LLM-based agent systems share a common architectural failure: they answer from the unrestricted knowledge space without first simulating how active business scenarios reshape that space for the event at hand---producing decisions that are fluent but ungrounded and carrying no audit trail. We present LOM-action, which equips enterprise AI with event-driven ontology simulation: business events trigger scenario conditions encoded in the enterprise ontology (EO), which drive deterministic graph mutations in an isolated sandbox, evolving a working copy of the subgraph into the scenario-valid simulation graph; all decisions are derived exclusively from this evolved graph. The core pipeline is event to simulation to decision, realized through a dual-mode architecture---skill mode and reasoning mode. Every decision produces a fully traceable audit log. LOM-action achieves 93.82% accuracy and 98.74% tool-chain F1 against frontier baselines Doubao-1.8 and DeepSeek-V3.2, which reach only 24--36% F1 despite 80% accuracy---exposing the illusive accuracy phenomenon. The four-fold F1 advantage confirms that ontology-governed, event-driven simulation, not model scale, is the architectural prerequisite for trustworthy enterprise decision intelligence.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

M. E. Montes-Carmona

,

I. A. Burgos-Castro

,

R. de J. Portillo-Vélez

,

P. J. García-Ramírez

,

L. F. Marín-Urias

,

M. A. Hernández-Pérez

Abstract: Biogas production estimation has been one of the most important and challenging objectives for anaerobic digestion processes due to the complexity of its dynamics and the lack of high-quality open-access datasets. This study presents a hybrid modeling framework that combines a mechanistic model, based on ordinary differential equations (ODE), with a machine learning model. Rather than relying exclusively on experimental data, the proposed approach leverages physics-informed synthetic data generation, complemented by a lag-based feature engineering to capture inherent temporal dependencies in the process dynamics available in operational data of a bio-digester. Two configurations were evaluated: a baseline model and an enhanced version incorporating lag features and simplified temperature profile. While the improved model achieved high predictive performance (R2=0.97885, RMSE=131.80[L/d]), additional analyses reveal that this performance is partly driven by temporal memory and remains sensitive to noise and feature composition. Instead of presenting the model as a final solution, this work frames it as a step toward practical digital twin implementations, acknowledging the gap that still exists between simulation-based accuracy and real-world reliability.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rizwan Ayazuddin

Abstract: Digital health technologies have fundamentally transformed healthcare delivery by improving communication, monitoring, and patient-centered care. Patients who are on traditional medication management methods have often struggled due to medication errors, poor adherence, and lack of continuous monitoring, which poses a serious threat to treatment success and patient safety.The widespread growth of smartphones and mobile technologies has led to the development of mobile health (mHealth) applications. These serve as an effective tool for patients and healthcare professionals in managing medications more efficiently. These applications offer a diverse range of functionalities, from automated dosing reminders and prescription tracking to drug information databases and adherence-monitoring systems. They help patients stay on track with their medications, facilitate patient education, enable clinicians to monitor progress, and strengthen communication between patients and healthcare providers. Although these mobile health applications offer many advantages, they also present limitations, including data privacy and security risks, variable accuracy, regulatory complexities, and accessibility issues. Overcoming these limitations is required to unlock the full potential of digital health technologies. This review highlights the growth of digital health and mobile applications in medication management and confirms that continued advancements will further enhance patient care and medication outcomes. In this paper, we discuss conventional approaches to medication management before exploring how digital health applications are transforming the field by improving patient adherence, reducing medication errors, and making remote monitoring a practical and accessible reality.

of 235

Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2026 MDPI (Basel, Switzerland) unless otherwise stated