Computer Science and Mathematics

Sort by

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rogelio Ochoa-Barragán

,

Luis David Saavedra-Sánchez

,

Fabricio Nápoles-Rivera

,

César Ramírez-Márquez

,

Luis Fernando Lira-Barragán

,

José María Ponce-Ortega

Abstract: The integration of artificial intelligence (AI) into solar energy systems has emerged as a transformative pathway to enhance efficiency, reliability, and sustainability in renewable energy. This review provides a comprehensive examination of recent advances in AI-driven optimization and integration strategies across photovoltaic and solar thermal technologies. A particular emphasis is placed on machine learning and deep learning techniques applied to solar irradiance forecasting, maximum power point tracking, fault detection, energy management, and predictive maintenance. Unlike earlier reviews that focused on isolated applications, this work highlights the systemic role of AI in enabling smart grids, hybrid systems, and large-scale energy storage integration. The novelty of this contribution lies in mapping the evolution from traditional control methods to intelligent, self-adaptive frameworks that couple physical modeling with data-driven approaches, offering a structured roadmap for future developments. Furthermore, the review identifies challenges such as data scarcity, computational demand, and interpretability of AI models, while outlining opportunities for process intensification, resilience, and techno-economic optimization. By bridging technical progress with implementation prospects, this article provides an updated reference for researchers, policymakers, and industry stakeholders seeking to accelerate the deployment of AI-enhanced solar energy solutions.

Article
Computer Science and Mathematics
Applied Mathematics

Florentin Șerban

,

Bogdan Vrinceanu

Abstract: Modern financial markets are increasingly shaped by algorithmic trading systems and artificial intelligence techniques that process large volumes of financial data in real time. However, machine learning–based trading systems often suffer from signal instability and excessive sensitivity to market noise, which may lead to overtrading and increased financial risk. In highly volatile environments such as cryptocurrency markets, the re-liability of trading signals becomes a critical issue for both portfolio allocation and risk management. This study proposes an entropy-filtered machine learning framework designed to en-hance the stability and risk-awareness of algorithmic trading strategies. The proposed approach integrates entropy-based filtering techniques with machine learning classifiers in order to reduce noise in market signals improving the risk-adjusted stability of algo-rithmic trading strategies. Entropy measures are employed as a filtering mechanism that evaluates the informational content of market signals and suppresses unreliable predic-tions generated by the learning model. The empirical analysis is conducted using cryp-tocurrency market data, where the entropy-filtered machine learning framework is ap-plied to trading signal generation and portfolio decision making. The results indicate that the proposed approach improves the stability of trading signals and reduces the occur-rence of false signals compared to conventional machine learning trading models. Moreover, the integration of entropy filtering contributes to a more balanced risk–return profile and enhances the overall robustness of algorithmic trading strategies.The findings suggest that combining information-theoretic measures with machine learning tech-niques represents a promising direction for developing more reliable and risk-aware financial decision systems. The results suggest that entropy-based filtering can substan-tially improve the robustness and risk-awareness of machine learning trading systems, providing a promising direction for future AI-driven financial decision frameworks.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Monica Khadgi

Abstract: Artificial Intelligence (AI) has developed over the years from rudimentary systems of symbolic reasoning in the middle of the twentieth century to sophisticated data-driven and generative architectures, which give rise to modern society. The acceleration of machine learning, deep neural networks and large-scale computational infrastructures has turned AI into a basic technology in the economic, social, and societal sectors. This paper investigates the history of the development of AI and critically discusses its influence on society in the 21 st century. Following a narrative review approach, the paper summarises interdisciplinary literature of technological innovation, economic transformation, social change, ethical governance, and sustainability issues.Various findings are found in the analysis. To begin with, AI has greatly increased productivity and operational efficiency in the industry as well as redefining the labor markets and skill requirements. Second, AI-centered systems have enhanced the provision of services in the education, health, transportation, and government sectors, though the issue of bias, privacy, transparency, and accountability continues to be present. Third, the spread of AI to safety-critical systems highlights the value of reliability, regulation, and human-oriented design. Finally, the environmental impact of large-scale AI models represents the necessity of sustainable development practices.The paper concludes that AI is an opportunity for transformation and a governance challenge. The implications to be considered in the future are the emergence of human-focused AI models, the creation of control measures, and the introduction of sustainability indicators into technological change. The fair and responsible implementation of AI will be required in order to maximise the positive impacts on society and reduce the risks in the long term.

Article
Computer Science and Mathematics
Mathematics

Deep Bhattacharjee

Abstract: We prove Convex Seed Universality for the Kreuzer—Skarke classification of four-dimensional reflexive polytopes. Every reflexive polytope in the Kreuzer—Skarke dataset arises from a primitive convex seed through a finite sequence of four toric operations: unimodular transformations, stellar subdivisions, polar duality, and lattice translations. Seed orbits coincide with connected components of the GKZ secondary fan, and the Hodge numbers of the associated Calabi—Yau hypersurfaces remain constant on each orbit. The seed invariant matrix is identified with the GLSM charge matrix, providing a natural toric-geometric interpretation of the construction. Four structural theorems: Seed Completeness, Orbit Connectivity, Hodge Invariance, and Exhaustiveness, together establish seed universality for the entire Kreuzer—Skarke dataset.

Article
Computer Science and Mathematics
Algebra and Number Theory

José Antoine Séqueira

Abstract: In this article, we introduce a hypercomplex algebra based on a binary superposition structure. Each algebraic unit is defined by a pair (ƒ; S) where ƒ ∈ {0; 1} encodes the logical presence of a base component, and S ∈ {−1; 1} encodes a geometric phase or orientation. This framework allows us to define an imaginary product that is both commutative and associative, properties rarely combined in higher-dimensional algebras. We demonstrate the consistency of this product through a binary and superposed formalism. This result provides a solid foundation for representing multi-level logic states, with potential applications in quantum computing processing.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ruobing Yan

,

Yingxin Ou

,

Shihao Sun

,

Nuo Chen

,

Kan Zhou

,

Yingyi Shu

Abstract: Business risk prediction tasks such as fraud detection, credit default prediction, and equipment failure forecasting face two fundamental challenges simultaneously: severe class imbalance where anomalous events are extremely rare, and distribution shift where data patterns evolve over time due to changing business conditions or adversarial behavior. While existing approaches address these challenges in isolation, real-world deployment requires handling both simultaneously. We propose DualShiftNet, a unified framework that jointly addresses class imbalance and distribution shift through a two-stage architecture. The first stage learns imbalance-aware representations using synthetic minority oversampling, focal loss optimization, and class-balanced contrastive learning to create discriminative embeddings. The second stage employs Maximum Mean Discrepancy (MMD) based drift detection coupled with importance reweighting to adapt predictions under distribution shift. Additionally, we introduce an uncertainty-driven threshold calibration mechanism that dynamically adjusts decision boundaries based on detected shift intensity. Experiments on three benchmark datasets demonstrate that DualShiftNet achieves relative improvements of approximately 3–4% in AUC-ROC scores and 10–22% in F1-scores compared to state-of-the-art methods that address only one challenge. Our ablation studies confirm that both stages contribute meaningfully to performance, with the joint approach outperforming sequential or isolated solutions.

Article
Computer Science and Mathematics
Computer Science

Alona Kudriashova

,

Iryna Pikh

,

Vsevolod Senkivskyy

,

Liubomyr Sikora

,

Nataliia Lysa

Abstract: The quality of vector images depends on a significant set of geometric and structural factors, which makes objective assessment a challenging task. This paper proposes a comprehensive approach to identifying and prioritizing these factors. Recursive feature elimination based on a random forest model was applied. A reachability matrix of factors was constructed to analyze direct and indirect relationships. Models describing relationships between the factors were developed. The rank and weight of each factor were calculated using a dependency-weighting system. An information system was developed to automate the process of prioritizing factors based on the proposed methodology. The software architecture was implemented in Python using the Tkinter, NumPy, and NetworkX libraries. Experimental results confirmed that the factor «coordinate accuracy» has the highest level of significance, whereas «file format» has the smallest influence on the quality of vector images. Due to the lack of dependence on specific selected factors, the developed system is universal and suitable for prioritizing factors in any application domain. Future research will focus on integrating the developed information system into a fuzzy-logic-based system for assessing the quality of vector images.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

A. Sasiram

,

Charan Sai Deekonda

,

Gugulothu Geethanjali

,

Bhukya Jithendar Nayak

Abstract: The management of today’s optical networks is highly dependent on the correct estimation of Quality of Transmission (QoT). The current analytical approach requires exact physical values, which are often not available, resulting in inefficient management of the network. This paper proposes an Adaptive Machine Learning Framework that aims to address the analytical approach’s limitations using a new and innovative data-driven approach. The proposed framework combines linklevel embeddings with an Artificial Neural Network (ANN) to process the unique sequence of fiber links in a lightpath, focusing on the fine-grained details of the sequence that are normally overlooked by the current analytical approach. Through dynamic learning from the sequence data, the framework provides highly accurate signal quality estimates. These estimates enable intelligent and automated modulation format choices, greatly enhancing spectral efficiency and minimizing disconnections. This highly scalable solution is developed in Python and TensorFlow and is best suited for dynamic resource allocation and futureoriented network planning.

Article
Computer Science and Mathematics
Logic

Igor Durdanovic

Abstract: Mathematics, as actually practiced, operates as a federated system: practitioners work within autonomous domain-specific axiomatizations (geometry, algebra, analysis) and construct explicit bridges only when cross-domain reasoning is required. This organization is not accidental; it is a structural adaptation that safeguards local decidability and algorithmic efficiency.Yet the dominant foundational narrative still operates on the Compiler Myth—the belief that all mathematics must theoretically compile down into ZFC set theory to achieve rigor. We argue that this monolithic reductionism confuses representational universality with logical priority. Embedding a decidable (tame) domain into an undecidable (wild) one does not clarify foundations; it imposes a crippling epistemic overhead. It buries efficient, domain-specific decision procedures under general proof search and destroys the native structural immunities of the object.We introduce the Decidability Threshold — a litmus test based on Negation, Representability, and Discrete Unboundedness — to explain why mathematicians instinctively isolate tame domains from wild ones. Finally, we distinguish the Mathematician (builder of formal systems) from the Scientist (consumer modeling reality). We argue that federalism, through explicit bridges and domain autonomy, is not a failure of unification, but the primary safeguard preventing the scientist from inadvertently importing wild, undecidable paradoxes into physical theories.

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Chaoyue He

,

Xin Zhou

,

Di Wang

,

Hong Xu

,

Wei Liu

,

Chunyan Miao

Abstract: Public agent ecosystems are emerging as a new object of study in NLP: settings in which language models not only generate text but also act, coordinate, authenticate, exchange reusable capabilities, and leave durable public traces. Using the OpenClaw--Moltbook ecosystem as a strategically revealing case, we survey a curated corpus of 38 ecosystem-specific papers and reports available as of 10-03-2026, together with official platform materials and adjacent survey literature. We provide a case-centered, NLP-centered survey of a public agent ecosystem in the wild. We argue that this case is best understood as language infrastructure: linguistic artifacts are executable, persistent, public, portable, and increasingly governance-bearing. We introduce GATE --- Grounding, Action, Transfer, and Exchange --- to organize what language does in public agent ecosystems, and pair it with AERO --- Authority, Enablement, Reach, and Orchestration --- to track how language acquires delegated operational force. Across the corpus, the main methodological bottleneck is weak triangulation across trajectories, discourse, portable artifacts, and grounding signals. That bottleneck yields four recurring fault lines: instruction is mistaken for authority, visible agent speech is mistaken for autonomous speakerhood, public claims outrun verification, and local control is mistaken for lower risk. We conclude with an NLP agenda centered on executable pragmatics, delegated-agent discourse analysis, provenance-aware evaluation, privacy-preserving agent NLP, multilingual public-agent research, and autonomy-sensitive benchmarks. We will release all artifacts once permitted.

Article
Computer Science and Mathematics
Discrete Mathematics and Combinatorics

Seung Jae Lee

,

Byung Soo Kim

Abstract: We study a pharmaceutical scheduling problem with hybrid batch-continuous manufacturing process in a distributed supply chain. The supply chain consists of heterogeneous plants and one distribution center. Each plant adopts an unrelated permutation flow shop layout consisting of a hybrid batch-continuous production line. Each pharmaceutical order is split and produced in multi-production sites located in various regions. The pharmaceutical medicines manufactured by the production sites are directly shipped to a distribution center To minimize the makespan, we formulate the addressed scheduling problem as a mathematical model. To solve this model, we propose four metaheuristics variants by applying two population-based metaheuristics to two distinct solution structures. We compare the proposed metaheuristics to evaluate their performance in the numerical experiments. Additionally, we present managerial insights through sensitivity analysis.

Article
Computer Science and Mathematics
Probability and Statistics

Bissilimou Rachidatou Orounla

,

Ouanan Nicolas Tuo

,

Kolawolé Valère Salako

,

Justice Moses K. Aheto

,

Romain Glèlè Kakaï

Abstract: The COVID-19 pandemic has spread rapidly across the world and caused several economic, social, and demographic impacts, even though there were strong geographical disparities. This study aims to assess the effect of socio-demographic factors and the use of non conventional medicines on COVID-19 risk perception in West Africa using Structural Equation Modeling (SEM) approach. A quantitative survey was conducted in four countries (Benin, Togo, Ghana and Côte d’Ivoire). Data were collected on demographic characteristics, COVID-19 risk perception (risk feeling and risk analysis), affective attitude, trust predictors and non-conventional medicine. Nominal polychotomous logistic regression, binary logistic regression and partial least squares were used for the data analysis. Among the respondents 59.11% from the in-person survey, 28.08% were from Benin, 32.84% from Côte d’Ivoire, 24.96% from Togo and 14.12% from Ghana. The results showed a very high level of risk perception within the countries. Participants aged between 18 and 40 used less non-conventional medicine. Also, people with a low level of education or no formal education often perceive a higher risk associated with COVID-19 and use more non-conventional medicine than others. The PLS-SEM model’s loadings were higher compared to those of the Consistent PLS (PLSc-SEM), but the Consistent PLS showed robust values in the structural model with lower RMSE than the linear model. Our results also indicated that non-conventional medicine has a positive relationship with COVID-19 risk perception. For decision-makers and health workers, this research underscores the importance of unconventional medicine and the emotional state of local population in managing epidemic.

Article
Computer Science and Mathematics
Security Systems

Jingtang Luo

,

Chenlin Zhang

Abstract: Large Language Model (LLM) agents are increasingly deployed to interact with untrusted external data, exposing them to Indirect Prompt Injection (IPI) attacks. While current black-box defenses (i.e., model-agnostic methods) such as “Sandwich Defense” and “Spotlighting” provide baseline protection, they remain brittle against adaptive attacks like Actor-Critic (where injections evolve to better evade LLM’s internal defense). In this paper, we introduce Real User Instruction (RUI), a lightweight, black-box middleware that enforces strict instruction-data separation without model fine-tuning. RUI operates on three novel mechanisms: (1) a Privileged Channel that encapsulates user instructions within a cryptographic-style schema; (2) Explicit Adversarial Identification, a cognitive forcing strategy that compels the model to detect and list potential injections before response generation; and (3) Dynamic Key Rotation, a moving target defense that re-encrypts the conversation state at every turn, rendering historical injection attempts obsolete. We evaluate RUI against a suite of adaptive attacks, including Context-Aware Injection, Token Obfuscation, and Delimitation Spoofing. Our experiments demonstrate that RUI reduces the Attack Success Rate (ASR) from 100% (undefended baseline) to less than 8.1% against cutting-edge adaptive attacks, while maintaining a Benign Performance Preservation (BPP) rate of over 88.8%. These findings suggest that RUI is an effective and practical solution for securing agentic workflows against sophisticated, context-aware adversaries.

Article
Computer Science and Mathematics
Computer Science

Rizwan Ayazuddin

,

Noor Ul Amin

Abstract: In large scale image retrieval and big data analytics it is a big challenge to search similar images from high dimensional data. Mostly used algorithms are Locality Sensitive Hashing and Random Projection Based Hashing. They are widely used for approximate nearest neighbor searching. These two algorithms treat all input features uniformly while they ignore feature importance and class separability. In this research we aim to propose a lightweight hashing framework named Adaptive Feature Aware Hashing which integrates feature weighting prior to projection-based hashing. The algorithm computes data-driven feature weights using variance, between-class separability, and Fisher-style discriminative criteria to enhance discriminative power during hash code generation. We also incorporated multi table and multi probe hashing which enhances discriminative power during hash code generation. For this research we used MNISH dataset for experimental evaluation. We compared the results against a Baseline Locality-Sensitive Hashing (LSH) method using random projections. Our results indicate that The AFAH methods (v1 and v2 Fisher) significantly improved both precision and recall compared to the Baseline LSH, with AFAH v2 Fisher showing the highest precision (0.7557) and AFAH v1 having the highest recall (0.2285).

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rohan Le Roux

,

Siavash Khaksar

,

Mohammadali Sepehri

,

Iain Murray

Abstract: Open-pit mining relies heavily on visual inspection to identify indicators of slope instability such as surface cracks. Early identification of these geotechnical hazards allows for the implementation of safety interventions to protect both workers and assets in the event of slope failures or landslides. While computer vision (CV) approaches offer a promising avenue for autonomous crack detection, their effectiveness remains constrained by the scarcity of labelled geotechnical datasets. Deep learning (DL) models require large amounts of representative training data to generalize to unseen conditions; however, collecting such data from operational mine sites is limited by safety, cost, and data confidentiality constraints. To address this challenge, we propose a hybrid game engine—generative artificial intelligence (AI) framework for large-scale dataset synthesis. Leveraging a parameterized virtual environment developed in Unreal Engine 5 (UE5), the framework captures realistic images of open-pit surface cracks and enriches their visual diversity using StyleGAN2-ADA. The resulting datasets were used to train the YOLOv11 real-time object detection model and evaluated on a real-world dataset of open-pit slope imagery to assess the effectiveness of the proposed framework in improving CV model generalizability under extreme data scarcity. Experimental results demonstrated that models trained on the proposed framework substantially outperformed the UE5 baseline, with average precision (AP) at intersection over union (IoU) thresholds of 0.5 and [0.5:0.95] increasing from 0.403 to 0.922 and 0.223 to 0.722 respectively, accompanied by a reduction in missed detections from 95 to eight for the best-performing configurations. These findings demonstrate the potential of hybrid generative AI frameworks to mitigate data scarcity in CV applications and support the development of scalable automated slope monitoring systems for improved worker safety and operational efficiency in open-pit mining.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Hongyin Zhu

Abstract: While enterprises amass vast quantities of data, much of it remains chaotic and effectively dormant, preventing decision-making based on comprehensive information. Existing neuro-symbolic approaches rely on disjoint pipelines and struggle with error propagation. We introduce the large ontology model (LOM), a unified framework that seamlessly integrates ontology construction, semantic alignment, and logical reasoning into a single end-to-end architecture. LOM employs a construct-align-reason (CAR) pipeline, leveraging its unified architecture across all three stages: it first autonomously constructs a domain-specific ontological universe from raw data, then aligns neural generation with this structural reality using a graph-aware encoder and reinforcement learning, and finally executes deterministic reasoning over the constructed topology, node attributes and relation types. We evaluate LOM on a comprehensive benchmark constructed from diverse real-world enterprise datasets. Experimental results demonstrate that LOM-4B achieves 88.8% accuracy in ontology completion and 94% in complex graph reasoning tasks, significantly outperforming state-of-the-art LLMs. These findings validate that autonomous logical construction is essential for achieving deterministic, enterprise-grade intelligence.

Article
Computer Science and Mathematics
Mathematics

Mohammad Abu-Ghuwaleh

Abstract: We extend the master-integral-transform theory from entire kernels to finite-principal-part Laurent kernels and show that the resulting transform is a weighted dilation operator acting on the Fourier transform of a weighted signal. This yields a unified operator framework for several exact inversion mechanisms, including Mellin diagonalization, two-sided Mellin-symbol inversion, Dirichlet–Wiener inversion, log-scale Fourier inversion, recursive inversion, and Neumann-series recovery. The main structural result is that finite negative Laurent tails do not destroy the spectral architecture; they enlarge the one-sided dilation orbit to a two-sided one. We establish exact factorization formulas on weighted function spaces, prove branchwise Mellin inversion under explicit integrability assumptions, derive a contour-free Dirichlet–Wiener inverse, obtain a log-scale Fourier multiplier representation suitable for FFT-based recovery, and prove a practical stability bound away from multiplier zeros. A worked symbolic example and a numerical blueprint are also included.

Article
Computer Science and Mathematics
Computer Science

Vicente Salas

Abstract: The increasing digitalization of photovoltaic (PV) inverters and their integration into distributed energy resource (DER) ecosystems expose these devices to a rapidly expanding cyber‑physical attack surface. Existing security requirements are fragmented across heterogeneous technical standards—including IEC 62443, IEC 62351, UL 2900‑1, UL 1741 SB, IEEE 1547, IEEE 2030.5, and SunSpec profiles—and only partially aligned with emerging regulatory obligations such as the EU Cyber Resilience Act (CRA) and NIS2 Directive. This fragmentation complicates assurance, hinders interoperability, and leaves critical security controls inconsistently implemented across vendors and deployments. This paper introduces a Unified Security Baseline (USB) that harmonizes essential technical and lifecycle security controls for PV inverters, including secure boot, firmware signing, anti‑rollback protection, strong authentication, TLS‑secured communication, SBOM governance, secure over‑the‑air updates, and coordinated vulnerability disclosure. The USB provides a device‑centric, standards‑agnostic framework designed to strengthen the security posture of inverter‑dominated DER environments while supporting regulatory compliance. By consolidating cross‑standard requirements into a coherent baseline, this work establishes a foundation for future conformity assessment, certification efforts, and secure‑by‑design engineering practices in critical IoT/OT infrastructures.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Mengzhou Wu

,

Yuzhe Guo

,

Yuan Cao

,

Haochuan Lu

,

Songhe Zhu

,

Pingzhe Qu

,

Xin Chen

,

Kang Qin

,

Zhongpu Wang

,

Xiaode Zhang

+9 authors

Abstract: Scaling generalist GUI agents is hindered by the data scalability bottleneck of expensive human demonstrations and the ``distillation ceiling'' of synthetic teacher supervision. To transcend these limitations, we propose UI-Oceanus, a framework that shifts the learning focus from mimicking high-level trajectories to mastering interaction physics via ground-truth environmental feedback. Through a systematic investigation of self-supervised objectives, we identify that forward dynamics, defined as the generative prediction of future interface states, acts as the primary driver for scalability and significantly outweighs inverse inference. UI-Oceanus leverages this insight by converting low-cost autonomous exploration, which is verified directly by system execution, into high-density generative supervision to construct a robust internal world model. Experimental evaluations across a series of models demonstrate the decisive superiority of our approach: models utilizing Continual Pre-Training (CPT) on synthetic dynamics outperform non-CPT baselines with an average success rate improvement of 7% on offline benchmarks, which amplifies to a 16.8% gain in real-world online navigation. Furthermore, we observe that navigation performance scales with synthetic data volume. These results confirm that grounding agents in forward predictive modeling offers a superior pathway to scalable GUI automation with robust cross-domain adaptability and compositional generalization.

Article
Computer Science and Mathematics
Computer Vision and Graphics

Dana El-Rushaidat

,

Nour Almohammad

,

Raine Yeh

,

Kinda Fayyad

Abstract: This paper addresses the critical communication barrier experienced by deaf and hearing-impaired individuals in the Arab world through the development of an affordable, video-based Arabic Sign Language (ArSL) recognition system. Designed for broad accessibility, the system eliminates specialized hardware by leveraging standard mobile or laptop cameras. Our methodology employs Mediapipe for real-time extraction of hand, face, and pose landmarks from video streams. These anatomical features are then processed by a hybrid deep learning model integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), specifically Bidirectional Long Short-Term Memory (BiLSTM) layers. The CNN component captures spatial features, such as intricate hand shapes and body movements, within individual frames. Concurrently, BiLSTMs model long-term temporal dependencies and motion trajectories across consecutive frames. This integrated CNN-BiLSTM architecture is critical for generating a comprehensive spatiotemporal representation, enabling accurate differentiation of complex signs where meaning relies on both static gestures and dynamic transitions, thus preventing misclassification that CNN-only or RNN-only models would incur. Rigorously evaluated on the author-created JUST-SL dataset and the publicly available KArSL dataset, the system achieved 96% overall accuracy for JUST-SL and an impressive 99% for KArSL. These results demonstrate the system’s superior accuracy compared to previous research, particularly for recognizing full Arabic words, thereby significantly enhancing communication accessibility for the deaf and hearing-impaired community.

of 677

Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2026 MDPI (Basel, Switzerland) unless otherwise stated