Computer Science and Mathematics


Article
Computer Science and Mathematics
Computer Science

Shuriya B

Abstract: Autism spectrum disorder (ASD) frequently manifests with profound language impairments, particularly in verb morphology processing, which hinges on fronto-temporal connectivity for grammatical rule application. This study pioneers the use of graph neural networks (GNNs) to map these deficits, analysing task-based fMRI data from 72 children (36 ASD, 36 controls). Fronto-temporal graphs were constructed with nodes representing key regions (e.g., inferior frontal gyrus, superior temporal gyrus) and edges capturing dynamic Pearson correlations during an auditory verb tense judgment task. A three-layer GraphSAGE model, incorporating message passing and temporal embeddings, achieved 91.7% classification accuracy (AUC=0.95), outperforming traditional classifiers by 14%. Attention maps revealed hypo-connectivity in the arcuate fasciculus pathway (p<0.001), correlating with ADOS language scores (r=-0.62), alongside compensatory frontal hyperconnectivity. Ablation studies confirmed the model’s reliance on task-evoked dynamics. These findings elucidate the neural substrates of morphology impairments, offering interpretable biomarkers for early ASD diagnosis and personalized interventions. By bridging graph theory with cognitive neuroscience, this work advances precision psychiatry, with implications for neurofeedback therapies targeting syntactic networks. Future extensions to multi-modal data promise enhanced generalizability across ASD heterogeneity.
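The graph-construction step described above — nodes for language regions, edges weighted by Pearson correlations between their BOLD time series — can be sketched in plain Python. The region names, signals, and threshold below are illustrative placeholders, not the study's actual data or parameters:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def connectivity_edges(series, threshold=0.5):
    """Weighted edges (region_i, region_j, r) where |r| exceeds threshold."""
    regions = list(series)
    edges = []
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            r = pearson(series[regions[i]], series[regions[j]])
            if abs(r) >= threshold:
                edges.append((regions[i], regions[j], r))
    return edges

# Toy BOLD-like signals for two language regions (hypothetical data)
series = {
    "IFG": [0.1, 0.4, 0.3, 0.8, 0.5],
    "STG": [0.2, 0.5, 0.2, 0.9, 0.4],
}
edges = connectivity_edges(series)
```

A real pipeline would compute these correlations over sliding task windows to obtain the dynamic graphs the GNN consumes; this sketch only shows the static case.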

Article
Computer Science and Mathematics
Computer Science

Karthiga Devi R

Abstract: This work, "Waste-to-Energy Advances Using Domain-Specific AI Models and IoT for Scalable Biofuel Production 2026," introduces an innovative framework that leverages tailored artificial intelligence algorithms and Internet of Things infrastructure to transform heterogeneous organic waste streams into high-yield biofuels at industrial scales. By integrating graph neural networks for predictive modelling of biochemical reaction pathways and reinforcement learning for dynamic process optimization, the approach addresses longstanding inefficiencies in traditional waste-to-energy systems, such as variable feedstock quality and suboptimal reactor conditions. IoT-enabled sensor networks provide real-time data acquisition from distributed bioreactors, enabling edge computing for adaptive control that boosts biogas and bioethanol yields by over 50% compared to conventional methods. Experimental validation in pilot-scale continuous stirred-tank reactors demonstrates enhanced methane production rates of 0.45 m³/kg volatile solids, alongside a 62% reduction in operational failures through predictive maintenance. Scalability mechanisms, including Kubernetes-orchestrated microservices and digital twins, project seamless deployment to megaton facilities by 2026, supporting global circular economy goals. This work not only mitigates landfill burdens but also accelerates net-zero transitions by rendering waste-derived biofuels economically viable against fossil alternatives, with implications for policy-driven biorefinery expansions.

Review
Computer Science and Mathematics
Computer Science

Nael M Radwan

,

Frederick T Sheldon

Abstract: The rapid proliferation of the Internet of Things (IoT) has positioned the Message Queuing Telemetry Transport (MQTT) protocol as a fundamental communication standard for large-scale, resource-constrained systems. Despite its lightweight design and scalability advantages, modern MQTT deployments operate under increasingly complex conditions characterized by intensive security enforcement, dynamic traffic patterns, and widespread use of wildcard subscriptions. These factors introduce tightly coupled challenges related to system performance, congestion, and security, which are often addressed independently in existing literature. This review provides a comprehensive and critical analysis of MQTT-based IoT systems, focusing on the interaction between adaptive flow control, backpressure phenomena, security mechanisms, and wildcard-intensive access control strategies. The study synthesizes recent research on authentication, authorization, and encryption techniques, highlighting their impact on computational overhead, latency, and broker load. In parallel, it examines backpressure formation as a system-level phenomenon arising from the imbalance between message arrivals and processing rates, and evaluates existing flow-control mechanisms, including TCP-based approaches, broker-level controls, and MQTT v5 features such as Receive Maximum. Furthermore, the review investigates the role of wildcard subscriptions in scalable topic management, demonstrating their dual effect as both enablers of efficient data aggregation and amplifiers of routing complexity, traffic load, and security risks. The analysis reveals that wildcard usage significantly increases message fan-out and authorization overhead, thereby accelerating congestion and expanding the attack surface in poorly configured systems.
A key contribution of this work is the identification of a fundamental gap in the literature: the absence of integrated, cross-layer frameworks that jointly consider security, flow control, and wildcard behavior under realistic IoT workloads. Current approaches remain fragmented, leading to inefficiencies, reduced reliability, and potential vulnerabilities in large-scale deployments. Based on this synthesis, the paper outlines a forward-looking research roadmap that emphasizes security-aware adaptive flow control, wildcard-aware traffic optimization, cross-layer system design, and intelligent (AI-driven) management strategies. These directions are essential for enabling next-generation MQTT systems that are secure, scalable, and resilient in dynamic and adversarial environments.
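The fan-out amplification attributed to wildcard subscriptions follows directly from MQTT's topic-matching rules: "+" matches exactly one level, "#" (valid only as the final level) matches the remainder of the topic, including zero levels. A minimal matcher written to the MQTT v5 semantics, independent of any particular broker implementation:

```python
def topic_matches(filter_str, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one level; '#' (only valid as the final level)
    matches the rest of the topic, including zero remaining levels.
    """
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_levels):
            return False
        if f != "+" and f != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)

# A single wildcard subscription matching many concrete topics is
# exactly the fan-out effect the review analyses.
topics = ["plant/a/temp", "plant/a/humidity", "plant/b/temp"]
fanout = [t for t in topics if topic_matches("plant/+/temp", t)]
```

Every published message must be checked against every such filter (and its authorization rule), which is why wildcard-heavy deployments see both routing and access-control overhead grow with subscription count.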

Article
Computer Science and Mathematics
Computer Science

Gonçalo Melo de Magalhães

Abstract: In Brazil, one litre of tap water costs €0.00063. In Japan, €0.00186. In India (Mumbai), €0.000065. In Denmark, €0.00920. Across nineteen countries on five continents, the market price of potable water per litre is between approximately 444,000 and 38,500,000 times lower than the market price of residential space per square metre. Expressed as a percentage premium of space over water: between roughly 44 billion percent and 3.8 trillion percent, depending on the country and its water governance model. The first question of this paper is: are there other goods pairs like this anywhere — in economics, in biology, or in history? We searched systematically and could not find one that simultaneously satisfies four conditions: both goods are survival-relevant; both carry full market prices; the ratio exceeds 44 billion percent; and the more biologically vital good is the cheaper one. We propose this may be unique. Three structural paradoxes compound the strangeness. First, potable freshwater is physically scarcer than habitable land — only 0.007% of Earth's water is accessible freshwater (USGS; Shiklomanov 1993) while approximately 104 million km² of land is habitable — yet water is cheaper. Standard marginal utility theory, which predicts price rises with scarcity, makes the wrong prediction for both goods simultaneously. Second, humans require more water by volume every day (52–152 litres for basic needs; WHO 2017) than the volume of space they strictly need for survival (approximately 4–8 m² floor area), yet water is cheaper. The consumption ordering is also inverted. Third, and most deeply: in the approximately 2,000 years since the Roman Empire built concrete walls, walls have changed almost nothing in their intelligence. A Roman concrete wall and a Tokyo concrete wall in 2026 have identical awareness of their occupants: zero. The wall does not know you are there. It never has. 
Meanwhile, water delivery infrastructure — which incorporated continuous intelligence across four centuries (sensing, treatment, routing, optimisation, prediction) — became approximately 1,000 times cheaper in real terms. The good that learned to think got cheaper. The good that refused to think got more expensive. We propose that the ratio is large not primarily because of governance failures — though these amplify it — but because of a structural asymmetry in what these two goods are: water is a flow system, space is a frozen pattern. The Architecture of Freedom Intelligence (AFI) framework formalises this through five theses concerning path availability as the irreducible first condition of all value. We introduce the distinction between flow recognition (continuous navigation of available paths in real time) and pattern recognition (identification of static configurations from memory), and propose that intelligence is fundamentally a flow recognition capacity — which is why it is built from water. We connect this to the FREE (Freedom-Regulated Emergent Exploration) swarm intelligence algorithm, which makes buildings navigate as water navigates for the first time in human history. We explore how buildings might be designed — using agentic AI, Physical AI, swarm construction, and water-inspired materials — to embody the structural properties of the human body, which is perhaps the most sophisticated water-based optimisation system on Earth. We offer seven falsifiable predictions. All AFI quantitative results are labelled SIMULATED. All price data is sourced from primary references with public access points.

Article
Computer Science and Mathematics
Computer Science

Mohamed Meera Maidheen M

Abstract: This paper proposes an innovative framework integrating blockchain-verified digital twins to enable transparent, real-time carbon offset mechanisms within global ecotourism supply chains. Ecotourism, while promoting environmental stewardship, generates significant greenhouse gas emissions across transnational logistics, from long-haul flights and eco-lodges to guided nature expeditions, necessitating robust verification to counter greenwashing and ensure genuine neutrality. Digital twins, as dynamic virtual replicas of physical assets like transport vehicles and tourism sites, capture IoT sensor data on emissions, waste, and energy use, simulating chain-wide impacts with predictive analytics. Blockchain complements this by providing an immutable ledger for timestamped data validation, smart contract automation of offset tokenization, and decentralized marketplaces for trading verified credits linked to projects such as reforestation or renewable microgrids. A prototyped system demonstrated 32% emission reductions, 40% cost savings in audits, and full auditability in a simulated Galapagos itinerary spanning three continents, outperforming traditional opaque methods. Challenges like oracle reliability and scalability in low-connectivity regions are addressed through edge computing and federated chains. This hybrid model offers ecotourism stakeholders (operators, regulators, and travellers) a scalable blueprint for Paris Agreement-aligned sustainability, fostering trust and equitable low-carbon growth in emerging markets.

Article
Computer Science and Mathematics
Computer Science

Aleksandar Ivanović

,

Miloš Radenković

,

Sergei Prokhorov

,

Aleksandra Labus

,

Božidar Radenković

Abstract: Several fundamental problems in software systems and AI remain without a unified formal solution. Deterministic reproducibility of execution, formal consistency between runtime state and historical record, and equivalence of governance and operational execution are unresolved across contemporary architectural paradigms. In AI systems, traceable decision processes and structurally enforced purpose-constrained autonomy remain open problems for the same reason. The common root is ontological: no formally defined execution substrate exists in which execution, governance, persistence, system evolution, and AI reasoning share a single causally ordered knowledge structure. This paper introduces the Zero Tier Execution Substrate (ZTES), an axiomatic execution model derived through formal synthesis of the Mesarović–Takahara system ontology, Lamport-consistent causal ordering, and the DEVS formalism. The Three-Phase execution kernel acts as semantic closure of this synthesis. The append-only historical knowledge base becomes the canonical computational medium in which governance and operational execution are formally equivalent transition processes over a single causally ordered structure. System execution is formally identified with the causal evolution of knowledge: Execution(Σ) ≡ Evolution(K). The substrate is universal for discrete processes: any discrete process admits execution within ZTES without loss of process identity, event ordering, or executable semantics. The scope of this work is foundational: the formal model establishes a stable foundation from which concrete realizations, empirical validations, and higher-level abstractions may be derived. ZTES does not introduce new computational primitives; it defines the minimal semantic discipline under which existing mechanisms — append-only persistence, causal ordering, and discrete-event transition semantics — are interpreted and composed as a structurally closed execution substrate. 
The formal model establishes deterministic event serialization, projection-defined runtime state, and compensation-based correction without destructive mutation. Sixth Normal Form emerges as a natural ontological consequence of atomic event semantics rather than merely a storage design choice. A closure-based structural maturity model and benchmark for execution architectures are introduced as methodological contributions. These formal properties directly address the open problems identified above. ZTES therefore addresses several previously unresolved structural problems: deterministic reproducibility of distributed execution, structural consistency between runtime state and historical record, and governance–execution equivalence within a single operational model. In AI systems, it establishes a substrate for historically consistent reasoning, traceable decision processes, and purpose-constrained autonomy as structural consequences of substrate closure. Software systems and AI infrastructures are therefore formally interpretable not as layered architectures but as causally evolving knowledge structures governed by formally defined execution semantics.

Article
Computer Science and Mathematics
Computer Science

Melchor Gómez García

,

Derlis Cáceres Troche

,

Moussa Boumadan Hamed

,

Roberto Soto Varela

Abstract: The rapid expansion of Generative Artificial Intelligence (GAI) is transforming higher education systems, particularly public institutions seeking to advance toward smart governance models and digital transformation. In this context, digital teaching competence emerges as a strategic factor for the effective, ethical, and pedagogically sound adoption of these technologies. This study assesses the level of digital competence among public higher education faculty in Paraguay and examines its predictive capacity regarding the adoption of GAI tools using machine learning models. A nationwide quantitative study was conducted with a sample of 800 faculty members from public universities across Paraguay. Data were collected through a structured questionnaire based on international digital competence frameworks, incorporating additional variables such as attitudes toward GAI, technological experience, institutional infrastructure, and perceived organizational support. Data analysis involved the application of machine learning techniques, including Logistic Regression, Random Forest, and Gradient Boosting, to identify the variables with the strongest predictive power regarding faculty readiness and willingness to integrate GAI into teaching practices. Model performance was evaluated using metrics such as accuracy, F1-score, and AUC-ROC. The findings identify key predictors of technological readiness and structural gaps within Paraguay’s public higher education system. This research provides empirical evidence from Latin America on the factors influencing GAI adoption in public sector educational contexts and contributes to the design of educational policies aimed at fostering smart universities and digitally sustainable academic ecosystems.

Article
Computer Science and Mathematics
Computer Science

A. Manoj Prabaharan

Abstract: Sensory-impaired children often experience barriers to motor development and psychosocial growth in recreational programs, where traditional assessments lack real-time precision and scalability. This paper introduces an edge AI phenomics framework for tracking motor proficiency, encompassing kinematics like balance and coordination, and psychosocial benefits, such as social engagement and self-efficacy, during adaptive play activities. Deployed on low-power edge devices, the system fuses RGB-D cameras, IMUs, and bioacoustic sensors into a lightweight pipeline featuring MobileNetV3 pose estimation and conformer encoders for phenotypic feature extraction. Evaluated on a dataset from 250 children across Chennai programs, it achieves 96% motor accuracy (MPJPE <10mm) and 0.85 correlation with clinical psychosocial scales, outperforming cloud baselines by 40% in latency. Results demonstrate 25-35% gains in proficiency and well-being over 8 weeks, with implications for inclusive therapies. The framework addresses deployment challenges through quantization and federated learning, advancing scalable, privacy-preserving phenomics in paediatric recreation.

Article
Computer Science and Mathematics
Computer Science

Nithya Moorthy

Abstract: Inter-specific hybridization between grapevine (Vitis vinifera) and kiwifruit (Actinidia deliciosa) promises elite cultivars combining premium flavour profiles, nutritional density, and environmental resilience, yet faces barriers from chromosomal incompatibilities. This study pioneers FISH/GISH-enabled chromosome engineering to generate novel hybrids with superior fruit quality and adaptability. Optimized FISH probes targeted repetitive sequences for karyotyping, while GISH distinguished parental genomes in F1 hybrids, facilitating selection of 12 stable recombinant lines via irradiation-induced translocations and colchicine doubling. Resultant amphidiploids exhibited 25% larger fuzzy berries, 18° Brix sweetness fused with kiwifruit ascorbic acid, and enhanced tolerance to drought (80% photosynthesis retention), frost (-8°C), and pathogens (60% Botrytis reduction). Whole-genome sequencing and QTL mapping validated 8 key loci underpinning these traits. These findings demonstrate FISH/GISH as a cytogenetic accelerator for wide crosses, enabling scalable breeding of climate-adaptive superfruits to meet global demands for sustainable horticulture.

Article
Computer Science and Mathematics
Computer Science

Junbo Xiang

,

Tiejun Wang

Abstract: Efficiently and accurately converting floating-point numbers to decimal strings is a critical challenge in numerical computation and data exchange. While existing algorithms like Ryu, Dragonbox, and Schubfach satisfy the Steele-White (SW) principle for accuracy, they often suffer from performance bottlenecks due to branch prediction failures and high-precision multiplication overhead. This paper presents a novel floating-point to string conversion algorithm called "xjb", an optimized variant of the Schubfach algorithm designed to deliver superior performance for IEEE 754 single-precision (binary32) and double-precision (binary64) floating-point numbers. By minimizing instruction dependencies, reducing multiplication operations, mitigating branch-prediction penalties, and utilizing SIMD instructions, xjb achieves significant performance gains. The algorithm features a concise core implementation. Extensive benchmarking across diverse platforms, including AMD R7-7840H and Apple M1, demonstrates that xjb outperforms state-of-the-art algorithms in most scenarios while maintaining full compliance with the SW principle.
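The Steele-White principle requires that the printed decimal string parse back to the exact original binary value, using the shortest digit string that does so. CPython's repr has produced shortest round-trip strings for floats since Python 3.1, so the round-trip half of the property that algorithms like Schubfach and xjb must preserve can be checked directly:

```python
import struct

def roundtrips(x):
    """Check that the shortest decimal string for x parses back bit-for-bit."""
    s = repr(x)                      # shortest round-trip decimal string
    original_bits = struct.pack("<d", x)   # IEEE 754 binary64 encoding of x
    return struct.pack("<d", float(s)) == original_bits

# Classic troublemakers for float-to-string conversion: a non-terminating
# binary fraction, a tiny subnormal, the smallest denormal, and the max double.
samples = [0.1, 1e-300, 2 ** -1074, 1.7976931348623157e308, 3.14159]
all_ok = all(roundtrips(x) for x in samples)
```

Comparing bit patterns rather than float values sidesteps any equality subtleties; a conforming converter must reproduce the exact 64-bit encoding, not merely a nearby value.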

Article
Computer Science and Mathematics
Computer Science

R Karthick

Abstract: Children with Autism Spectrum Disorders (ASD) frequently encounter profound challenges in pronoun comprehension, a core deficit impeding social communication and pragmatic language development. Traditional speech therapies often yield limited generalization due to their static nature and overlook underlying neural dysregulation. This paper introduces a novel neurofeedback-enhanced speech therapy system that fuses real-time electroencephalography (EEG) monitoring with adaptive speech recognition to target pronoun errors such as confusions between "I/you" and "he/she." The architecture employs a 16-channel wireless EEG headset for acquiring mu/beta rhythms, coupled with a fine-tuned transformer-based speech model for pronoun detection, delivering personalized auditory-visual feedback via gamified interfaces. In a randomized controlled trial with 24 ASD children aged 5-10 in Chennai, the system achieved a 32% improvement in pronoun accuracy (p < 0.01) and enhanced frontal-temporal coherence over 12 weeks, surpassing standard ABA protocols by 18%. Adaptive reinforcement learning ensures engagement, while edge computing enables scalability for low-resource clinics. Findings underscore neurofeedback's potential to drive neuroplasticity in ASD language circuits, offering a scalable, non-pharmacological pathway for inclusive speech interventions. Future extensions include VR immersion and multilingual support for global deployment.

Article
Computer Science and Mathematics
Computer Science

Daniel Tang

,

Kenneth Walker

Abstract: Point cloud completion is crucial for robotic tasks, especially with occluded and noisy industrial data. While two-dimensional image guidance has been traditional, pure point cloud methods increasingly achieve state-of-the-art results, making amodal completeness—recovering both visible and occluded parts—critical for robust interaction. Inspired by these insights, we propose MAG-Comp, a novel framework maximizing geometric information for amodal point cloud completion. MAG-Comp utilizes a Hierarchical Geometric Feature Encoder, a Class-Agnostic Geometric Memory Bank for shape priors, and a Dynamic Amodal Region Inference Module for explicit occluded geometry reconstruction. Experiments on ShapeNet-Amodal and an Industrial Bin-Picking Dataset confirm MAG-Comp's superior performance, achieving a Chamfer Distance of 1.58×10⁻³ and an Amodal IoU of 54.20% on ShapeNet-Amodal, consistently outperforming state-of-the-art methods. The framework demonstrates robustness to varying occlusion, strong generalization, and competitive inference efficiency, making it suitable for real-time industrial applications requiring precise amodal three-dimensional representations.
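Chamfer Distance, the completion metric reported above, averages nearest-neighbour distances in both directions between the predicted and ground-truth point sets. Conventions vary across papers (squared vs. unsquared distances, sum vs. mean of the two directions); this brute-force reference uses squared distances averaged per set, which is one common choice and not necessarily the paper's exact variant:

```python
def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two 3D point sets (squared L2)."""
    def sq(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # For each point in one set, find its nearest neighbour in the other,
    # then average; sum the two directional terms.
    p_to_q = sum(min(sq(a, b) for b in q) for a in p) / len(p)
    q_to_p = sum(min(sq(b, a) for a in p) for b in q) / len(q)
    return p_to_q + q_to_p

# Tiny hypothetical prediction vs. ground truth
pred = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
gt = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0)]
```

The brute-force search is O(|P||Q|); practical implementations use KD-trees or GPU batching, but the metric itself is exactly this double nearest-neighbour average.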

Article
Computer Science and Mathematics
Computer Science

Sai Praneeth Reddy Dhadi

,

Jithender Reddy M.

,

G.N.R. Prasad

Abstract: The dominant paradigm in cybersecurity continues to privilege infrastructure, protocol integrity, and endpoint resilience, while an increasing fraction of high-impact attacks bypasses these controls by directly manipulating human cognition. This work formalizes such attacks as structured distortions within human digital trust interactions, introducing Cognitive Topological Cybersecurity (CTC) as a rigorous analytical framework. Within this paradigm, interactions are modeled as a high-dimensional manifold whose geometry encodes trust relationships across users, communication channels, and psychological signals. The proposed Cognitive Attack Topology (CAT) framework operationalizes this view through the construction of a Trust Topology Tensor (TTT), enabling the quantification of adversarial influence via Cognitive Distortion Energy (CDE) and its normalized form, the Trust Distortion Index (TDI). A complementary Cognitive Manipulation Score (CMS) captures the composite effect of urgency, fear, authority, and persuasion signals. The framework is instantiated in a multi-layer architecture that integrates transformer-based signal decomposition with dynamic graph modeling and topological anomaly detection. Empirical evaluation is conducted on the GCT-100K dataset, a large-scale benchmark comprising real, public, and synthetically generated cognitive attack interactions. The CAT model achieves a ROC-AUC of 0.9555, exceeding conventional text-based baselines, while maintaining robustness under adversarial cloaking conditions. Notably, performance remains invariant across linguistic boundaries, with a multilingual AUC of 0.984, indicating that topological features of trust manipulation exhibit language-agnostic structure. These results establish that human-targeted cyber-attacks can be detected not as isolated semantic artifacts, but as measurable geometric perturbations in trust space, motivating a shift toward cognition-aware, topology-driven defensive systems.
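The Cognitive Manipulation Score is described only as a composite of urgency, fear, authority, and persuasion signals; the abstract does not give its exact functional form. A normalised weighted aggregate is one plausible shape, sketched here with purely illustrative weights and signal values that are not taken from the paper:

```python
def cognitive_manipulation_score(signals, weights=None):
    """Illustrative composite score over per-signal values in [0, 1].

    The weighting scheme is a hypothetical stand-in, not the paper's CMS.
    """
    default = {"urgency": 0.3, "fear": 0.3, "authority": 0.2, "persuasion": 0.2}
    w = weights or default
    total = sum(w.values())
    # Missing signals default to 0; the result stays in [0, 1].
    return sum(w[k] * signals.get(k, 0.0) for k in w) / total

# Hypothetical signal intensities for a phishing-like vs. benign message
phishing_like = {"urgency": 0.9, "fear": 0.7, "authority": 0.8, "persuasion": 0.6}
benign = {"urgency": 0.1, "fear": 0.0, "authority": 0.2, "persuasion": 0.1}
```

Whatever the paper's actual definition, the qualitative behaviour should match this sketch: messages stacking several manipulation signals score far higher than ordinary traffic.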

Article
Computer Science and Mathematics
Computer Science

Ionuț Petre

,

Ella Magdalena Ciupercă

,

Ion Alexandru Marinescu

,

Dragoș Iordache

,

Alin Zamfiroiu

Abstract: The growing integration of immersive technologies into education is opening new possibilities for teaching and learning, while also raising concerns about the reliability and potential distortion of knowledge in artificial intelligence-mediated environments. Understanding how users perceive and accept AI-generated content in immersive learning systems is therefore essential. This study explores the factors that influence user acceptance of AI-driven virtual reality (VR) educational applications and explains them through a multidimensional framework, termed the ED-SCALE model, that extends the Technology Acceptance Model (TAM), the Theory of Reasoned Action (TRA), and the Theory of Planned Behavior (TPB). We extended these models by adding an ergonomic dimension, often overlooked in VR-based education. To test the model, we developed an AI-driven VR educational escape room designed to simulate adaptive and interactive learning experiences. Data were collected from 213 participants through a questionnaire measuring subjective norms, perceived behavioral control, attitudes toward AI-mediated instruction, perceived informational efficacy, and ergonomic quality. The findings show that ergonomic quality, intuitive interfaces, physical comfort, and social influence play an important role in shaping user trust and long-term adoption intentions. The results suggest that the success of AI-driven immersive learning systems depends not only on technological performance but also on user experience and social context, confirming our first hypothesis regarding new variables that are conditional for virtual technology acceptance.

Article
Computer Science and Mathematics
Computer Science

Jiacheng Wang

,

Liang Fan

,

Baihua Li

,

Luyan Zhang

Abstract: Accurate stock return forecasting remains a central challenge in quantitative finance, as it directly informs the construction of portfolios and the management of risk. Although traditional static factor models are widely used, they are limited by manual factor selection and fixed weight assignments, which makes them vulnerable to evolving market conditions and regime shifts. To overcome these limitations, we introduce Market Regime Aware-Augmented Attention GRU (MRA-AGRU), an automated dynamic factor gating framework that adaptively reweights factors in response to market regime signals. By integrating an attention-enhanced GRU network, MRA-AGRU effectively suppresses obsolete or noisy factors while amplifying those most relevant to the prevailing environment, thereby capturing nuanced temporal and cross-factor dependencies. Extensive experiments on the CSI 300 and NASDAQ 100 demonstrate the superior performance of MRA-AGRU, highlighting machine-driven factor modulation's role in improving robustness to structural breaks and reducing bias in factor engineering.
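The factor-gating idea at the core of the abstract — attention weights, conditioned on a regime signal, that reweight factor exposures so stale factors are suppressed and regime-relevant ones amplified — can be illustrated in miniature. This is a toy sketch, not the MRA-AGRU architecture; all factor names and numbers are hypothetical:

```python
from math import exp

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gate_factors(exposures, relevance_scores):
    """Reweight factor exposures by attention weights over relevance scores."""
    weights = softmax(relevance_scores)
    return [w * f for w, f in zip(weights, exposures)]

# Three factors; in a high-volatility regime the model might score a
# momentum factor as stale and a low-volatility factor as highly relevant.
exposures = [0.8, -0.2, 0.5]   # momentum, value, low-vol (hypothetical)
relevance = [-1.0, 0.5, 2.0]   # regime-conditioned scores (hypothetical)
gated = gate_factors(exposures, relevance)
```

In the full model, the relevance scores would be produced by the attention-enhanced GRU from the regime signal and factor history, so the gating adapts as the market regime shifts.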

Article
Computer Science and Mathematics
Computer Science

Boris N. Chigarev

Abstract: Graph-based approaches dominate bibliometric analysis but are limited to pairwise relationships—such as co-authorship, citation lists, or keywords—which artificially simplifies the structure of these relationships. Hypergraphs allow for the direct modeling of many-way relationships, thereby improving the accuracy of the analysis. This study demonstrates the application of the KaHyPar framework to partition sets of IEEE Xplore terms into blocks, treating them as hyperedges. This is the first in a series of articles detailing hypergraph-based term blocking techniques. The study utilized bibliometric data from 2021 to 2025, exported from the IEEE Xplore database using the terms “Artificial Intelligence,” “Blockchain,” “Data Science,” “Deep Learning,” “Image Processing,” “Internet of Things,” “Anomaly Detection,” and “Machine Learning.” After removing duplicates and excluding 16 records with empty “IEEE Terms” fields, 38,157 records were used for the study. While partitioning IEEE Terms records with KaHyPar is effective, the process requires rigorous data preparation for the .hgr format. This complexity explains why hypergraphs remain underutilized in scientometrics compared to more accessible tools like VOSviewer. The proposed significance criterion—based on a term's occurrence frequency within hyperedges associated with the block—yielded easily interpretable results. Future studies should investigate the impact of KaHyPar parameters and evaluate alternative frameworks such as HYPE and Mt-KaHyPar, alongside other metrics for term significance.
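The data-preparation hurdle mentioned above is largely the serialization step: KaHyPar reads the hMETIS-style .hgr format, whose unweighted form is a header line "num_hyperedges num_vertices" followed by one line per hyperedge listing 1-indexed vertex ids. A sketch of turning per-record term sets into that format (the term lists below are illustrative; weighted .hgr variants add a format flag and weight columns not handled here):

```python
def terms_to_hgr(records):
    """Serialize per-record term sets as an unweighted hMETIS-style .hgr file.

    Each record's term list becomes one hyperedge; each distinct term
    becomes a vertex, numbered from 1 as the format requires.
    """
    vocab = {}
    hyperedges = []
    for terms in records:
        edge = []
        for t in terms:
            if t not in vocab:
                vocab[t] = len(vocab) + 1  # 1-indexed vertex ids
            edge.append(vocab[t])
        hyperedges.append(edge)
    lines = [f"{len(hyperedges)} {len(vocab)}"]
    lines += [" ".join(map(str, e)) for e in hyperedges]
    return "\n".join(lines) + "\n", vocab

# Two toy IEEE Terms records (hypothetical)
records = [
    ["Machine Learning", "Deep Learning"],
    ["Deep Learning", "Image Processing", "Anomaly Detection"],
]
hgr_text, vocab = terms_to_hgr(records)
```

The inverse mapping (vocab) must be kept alongside the file, since KaHyPar's partition output refers to vertices only by these integer ids.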

Article
Computer Science and Mathematics
Computer Science

Sayed Mahbub Hasan Amiri

Abstract: Cyber-physical systems (CPS) in safety-critical domains, including autonomous driving, robotic surgery, high-speed railways, and power grids, increasingly rely on reinforcement learning (RL) for sequential decision-making. Unfortunately, deep RL policies are extremely brittle to adversarial perturbations; small, carefully crafted alterations to a policy’s observations or dynamics can result in catastrophic failure. Existing adversarial training methods mainly address static perception tasks and miss the temporal compounding of perturbations under hard safety constraints unique to CPS. We present RADAR (Robust Adversarial Decision-making with Adaptive Resilience), a novel adversarial training framework for safety-critical sequential decision-making. RADAR casts the problem as a constrained robust Markov decision process and learns adversarial attacks that respect both physical dynamics and safety constraints at training time, propagating perturbations through time via a recurrent latent dynamics model. A Lagrangian-type min-max optimization jointly optimizes the robustness of the policy and the satisfaction of the safety constraint. RADAR achieves as much as 35% higher worst-case reward and over 80% fewer safety violations than strong baselines under the strongest attacks on benchmarks for autonomous vehicle lane-keeping and power grid voltage control, with only minor degradation in nominal performance. RADAR offers an approach to robustify RL-based controllers against adversarial perturbations in a principled, scalable way that reconciles adversarial robustness with safe control.

Article
Computer Science and Mathematics
Computer Science

Ioannis Konstantaras

,

Efstratios Chatzoglou

,

Konstantinos E. Kampourakis

,

Georgios Kambourakis

Abstract: Modern Endpoint Detection and Response (EDR) platforms, such as Microsoft Defender for Endpoint (MDE), provide sophisticated telemetry but often leave Security Operations Centers (SOCs) struggling with a significant detection lag, namely the time required to manually translate emerging threat intelligence into operational logic. This paper presents a systematic empirical study of an LLM-integrated pipeline designed to automate the transformation of structured threat intelligence from OpenCTI into functional Kusto Query Language (KQL) detection rules. By utilizing Large Language Models (LLMs) as a contextual translation layer, we evaluate a framework that maps graph-based STIX metadata directly to proprietary EDR schemas. Our experiments, conducted within a high-fidelity Windows Server 2025 environment, reveal that LLM-augmented rules successfully addressed critical visibility gaps in reconnaissance and early-stage lateral movement where native MDE heuristics remained silent. Importantly, the implementation reduced the intelligence-to-logic latency from an average of 45 minutes of manual engineering to sub-5-minute automated cycles. While the findings identify persistent challenges regarding schema hallucinations, the study concludes that LLM-assisted detection engineering serves as a significant operational force multiplier, enabling defensive postures to evolve at the velocity of the modern threat landscape.

Article
Computer Science and Mathematics
Computer Science

Salma Ali, Noah Fang

Abstract: Optimizing information freshness (Age-of-Information, AoI) in Mobile Edge Computing (MEC) for low-latency Internet of Things (IoT) applications presents a significant challenge due to the need for strict adherence to operational safety and resource constraints. Existing methods often struggle with robust constraint handling or fine-grained dynamic scheduling. This paper proposes Proactive Constrained Scheduling with Adaptive Preemption (PCSAP), a novel hybrid optimization framework. PCSAP integrates proactive constraint handling from Safe Reinforcement Learning with adaptive preemption for dynamic task scheduling in multi-user, heterogeneous MEC environments. It models the problem as a Constrained Markov Decision Process, incorporating a proactive constraint sensing term to guide violation avoidance and an Adaptive Preemption Module that dynamically calculates urgency indices for intelligent resource allocation. A multi-layer decision framework separates high-level strategic policy learning from low-level index-based scheduling. Extensive simulations demonstrate PCSAP's superior performance, achieving significantly lower average AoI and dramatically reduced constraint violation rates compared to state-of-the-art baselines. It also maintains high task completion and efficient energy utilization. An ablation study confirms the critical roles of both core components. Further analyses validate PCSAP's robustness, practical applicability, and ability to deliver a superior user experience, confirming its viability for real-time deployment.
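The abstract does not give the urgency-index formula used by the Adaptive Preemption Module, but the idea of index-based low-level scheduling can be sketched with one plausible index combining staleness, deadline pressure, and remaining constraint headroom. The fields, weights, and functional form below are hypothetical illustrations, not PCSAP's actual design.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    age: float            # current Age-of-Information of the task's source (s)
    deadline: float       # time remaining until its freshness deadline (s)
    safety_margin: float  # remaining headroom on its safety constraint

def urgency(t: Task, w_age=1.0, w_deadline=2.0, w_safety=3.0) -> float:
    """One plausible urgency index: grows with staleness, with shrinking
    deadline, and with shrinking constraint headroom (weights illustrative)."""
    return (w_age * t.age
            + w_deadline / max(t.deadline, 1e-6)
            + w_safety / max(t.safety_margin, 1e-6))

def schedule(tasks, capacity):
    """Greedy index-based scheduling: serve the `capacity` most urgent tasks
    this slot; the rest are preempted until the next decision epoch."""
    ranked = sorted(tasks, key=urgency, reverse=True)
    return [t.name for t in ranked[:capacity]]
```

In the multi-layer framework the abstract describes, a high-level policy learned on the Constrained MDP would set targets or weights, while a cheap index rule like this one makes the per-slot preemption decisions.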

Article
Computer Science and Mathematics
Computer Science

P. Selvaprasanth

Abstract: The digital media landscape faces escalating demands for creativity, scale, and personalization, challenging traditional human-centric workflows. This paper introduces cyborg workflows, a symbiotic paradigm fusing human judgment with agentic AI (autonomous systems capable of goal-directed planning and execution) to unlock next-generation transformation opportunities. We propose a comprehensive framework encompassing modular architectures, hybrid protocols, and real-time collaboration interfaces, drawing from cognitive science, AI engineering, and media studies. Through case studies in content generation, news curation, and immersive production, we demonstrate efficiency gains of up to 3x in throughput, enhanced creative output via iterative human-AI refinement, and robust bias mitigation strategies. Key challenges, including oversight mechanisms and regulatory hurdles, are addressed alongside scalability via edge computing. Opportunities span hyper-personalized narratives, democratized production, and ethical augmentation of underrepresented voices. Empirical evaluations validate 25-60% improvements in key metrics, offering media practitioners a roadmap for adoption. This work pioneers human-AI symbiosis, positioning cyborg workflows as pivotal for sustainable media innovation amid AI proliferation.


Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

© 2026 MDPI (Basel, Switzerland) unless otherwise stated