Computer Science and Mathematics

Sort by

Article
Computer Science and Mathematics
Security Systems

Saulius Grigaitis

Abstract: This work investigates multi-scalar multiplication (MSM) over a fixed base for small input sizes, where classical large-scale optimizations are less effective. We propose a novel variant of the Pippenger-based bucket method that enhance performance by using additional precomputation. In particular, our approach extends the BGMW method by introducing structured precomputations of point combinations, enabling the replacement of multiple point additions with table lookups. We further generalize this idea through chunk-based precomputation, allowing flexible trade-offs between memory usage and runtime performance. Experimental results demonstrate that the proposed variants significantly outperform the Fixed Window method for small MSM instances, achieving up to 3× speedup under practical memory constraints. These results challenge the common assumption that bucket-based methods are inefficient for small MSMs.

Article
Computer Science and Mathematics
Security Systems

Sara Malik

,

N.A. Ahmed

Abstract: Hardware random number generators (HRNGs) underpin the security of cryptographic systems, yet their physical entropy sources are susceptible to degradation, environmental perturbation, and adversarial manipulation. Continuous health testing during operation is therefore mandated by all major certification frameworks, including NIST SP 800-90B and BSI AIS 31. This survey examines the feasibility and efficiency of employing three classical statistical measures—mean, median, and standard deviation—as lightweight online health indicators for HRNG output streams. We ground our analysis in the Hotelling–Solomons inequality |μ − m| ≤ σ, which establishes a distribution-free bound linking these three statistics. We survey efficient streaming algorithms—including Welford’s online variance computation, two-heap sliding-window median structures, and approximate quantile sketches—that enable their computation under the strict throughput and memory constraints of embedded cryptographic modules. We further address numerical stability considerations for long-running deployments processing billions of samples. Our analysis demonstrates that the mean–median–standard deviation triplet, combined with the Hotelling–Solomons bound, provides a complementary health test layer that fills the gap between the minimal repetition count and adaptive proportion tests of SP 800-90B and the comprehensive but offline NIST SP 800-22 statistical test suite.

Article
Computer Science and Mathematics
Security Systems

Jovita T. Nsoh

Abstract: The Fifth Industrial Revolution (Industry 5.0) foregrounds human–machine collaboration, sustainability, and resilience as organizing principles for next-generation cyber-physical systems. Yet the identity and access management (IAM) architectures inherited from Industry 4.0 remain perimeter-centric, policy-static, and blind to the behavioral dynamics of human–AI teaming. This paper introduces the Human-Centric Zero Trust Identity Architecture (HC-ZTIA), a novel framework that repositions identity as the adaptive control plane for Industry 5.0 environments. HC-ZTIA integrates three mutually reinforcing innovations: (1) a Joint Embedding Predictive Architecture (JEPA)-driven Behavioral Identity Assurance Engine (BIAE) that learns abstract world models of operator and machine-agent behavior to perform continuous, context-aware identity verification without relying on raw biometric surveillance; (2) a Privacy-Preserving Adaptive Authorization Protocol (PP-AAP) employing zero-knowledge proofs and federated policy evaluation to enforce least-privilege access across human, non-human, and hybrid identity classes while satisfying data-minimization mandates; and (3) a Resilience-Oriented Trust Degradation Model (RO-TDM) that guarantees fail-safe identity governance under adversarial, degraded, or disconnected operating conditions characteristic of operational technology (OT) and critical infrastructure. The framework is grounded in the Agile-Infused Design Science Research Methodology (A-DSRM) and formally extends NIST SP 800-207 and the CISA Zero Trust Maturity Model by addressing five identified gaps in human-centric identity governance. We present the formal system model, threat model, architectural specification, and a multi-scenario evaluation spanning energy-sector OT, smart manufacturing, and vehicle-to-everything (V2X) environments. Simulation results, validated through Monte Carlo trials with 95% confidence intervals, demonstrate that HC-ZTIA reduces identity-related breach exposure by 73.2% (±4.1%) while maintaining sub-200 ms authorization latency, offering a principled bridge between Zero Trust rigor and Industry 5.0 human-centricity.

Article
Computer Science and Mathematics
Security Systems

Hassan Wasswa

,

Timothy Lynar

Abstract: The rapid proliferation of Internet of Things (IoT) devices has significantly expanded the attack surface of modern networks, leading to a surge in IoT-based botnet attacks. Detecting such attacks remains challenging due to the high dimensionality and heterogeneity of IoT network traffic. This study proposes and evaluates three hybrid deep learning architectures for IoT botnet detection that combine representation learning with supervised classification: VAE-encoder-MLP, VAE-encoder-GAT, and VAE-encoder-MoTE. A variational autoencoder (VAE) is first trained to learn a compact latent representation of high-dimensional traffic features, after which the pretrained encoder projects the data into a low-dimensional embedding space. These embeddings are then used to train three different downstream classifiers: a multilayer perceptron (MLP), a graph attention network (GAT), and a mixture of tiny experts (MoTE) model. To further enhance representation discriminability, supervised contrastive learning is incorporated to encourage intra-class compactness and inter-class separability in the latent space. The proposed architectures are evaluated on two widely used benchmark datasets, CICIoT2022 and N-BaIoT, under both binary and multiclass classification settings. Experimental results demonstrate that all three models achieve near-perfect performance in binary attack detection, with accuracy exceeding 99.8%. In the more challenging multiclass scenario, the VAE-encoder-MLP model achieves the best overall performance, reaching accuracies of 98.55% on CICIoT2022 and 99.75% on N-BaIoT. These findings provide insights into the design of efficient and scalable deep learning architectures for IoT intrusion detection.

Article
Computer Science and Mathematics
Security Systems

David Cropley

,

Paul Whittington

,

Huseyin Dogan

Abstract: This research paper address why it is that disabled people often have extra problems with authentication (i.e. logging in to online services). While the focus is on authenti-cation, we also explore its relevance to electronic identification and consider the post-authentication stage of authorization (allowing continued use of the service once logged in). While ‘normal’ people regularly log into websites and applications without too much thought for the process with an end-goal or task in mind to be achieved with the service that they are accessing. We discover how there is a societal gap in terms of ease-of-use, as previous studies show that disabled people can find this step difficult, frustrating, or virtually impossible. For people who have a disability, complications will arise in this process, and we examine the nature of these problems identified by this group. identifying patterns in the A series of interviews (n=15) are analyzed with Constructivist Grounded Theory methods to discover patterns in the participant’s answers and build a theory about why Accessible Authentication is a problem. By way of inductive theorem building, this paper categorizes common traits that participants have revealed during interviews. The key findings reveal that most disabled users say that their capability to authenticate effectively is effectively reduced by accessibility barriers, in other words, participants felt hindered when logging in because of their disability. This leads us to conclude with some degree of confidence that there is a lack of accessibility for those using traditional authentication techniques. A further area of concern for the participants suggests that maintaining security alongside ease-of-use was found to be important to them, so future work on improving accessibility should find ways to ensure that disabled users’ information is not left vulnerable.

Article
Computer Science and Mathematics
Security Systems

Guy E. Toibin

,

Yotam Lurie

,

Shlomo Mark

Abstract: Telecommunication networks operate as highly distributed, multi-vendor, and mis-sion-critical infrastructures, making them prime targets for sophisticated cyber threats. As networks evolve toward cloud-native, virtualized, and software-defined architec-tures, traditional perimeter-based security models have become insufficient. Zero-Trust Architecture (ZTA) has therefore emerged as a key security paradigm in telecommu-nications, enabling continuous verification, fine-grained access control, and improved protection of network and information assets. While ZTA strengthens technical security and operational resilience, its large-scale deployment introduces significant so-cio-technical and governance challenges that extend beyond network engineering. This study examines the implementation of ZTA in a multinational telecommunications in-frastructure organization using a four-wave longitudinal design (2020 - 2023). Drawing on an extended Technology Acceptance Model incorporating Perceived Trust, we ana-lyze employee perceptions of productivity, ease of use, usefulness, and trust before and after ZTA deployment, and following a structured governance intervention. Results reveal a substantial decline in the composite TAM index following ZTA enforcement (−25%, Cohen’s d = 1.12), with no meaningful spontaneous recovery over time (d = 0.08). A Communication Campaign emphasizing transparency and stakeholder engagement produced a partial but incomplete recovery (d ~ 0.52), indicating that trust erosion under Zero-Trust conditions is measurable and contingent upon governance design rather than technological determinism. The findings demonstrate that ZTA functions not merely as a technical safeguard but as a socio-technical governance mechanism that restructures organizational trust. The study advances a Proactive Trust Management framework tailored to telecommunications environments, integrating security en-forcement with transparency, participatory oversight, and ethical calibration to sustain operational resilience in cloud-native infrastructures.

Review
Computer Science and Mathematics
Security Systems

Kaiyan Zhao

,

Zhe Sun

,

Lihua Yin

,

Tianqing Zhu

Abstract: With the rapid advancement of deep learning, differential privacy has become a key technique for protecting sensitive data with a formal guarantee of privacy. By injecting noise and enforcing privacy budgets, differentially private deep learning (DP-DL) systems are able to protect individual data points yet still maintain a model’s utility. However, recent studies reveal that DP-DL systems can be vulnerable to different types of attacks throughout their lifecycle. Naturally, this has attracted the attention of both academia and industry. Critically, these risks are not the same as those associated with traditional deep learning. This is because the differential privacy mechanism itself introduces new attack surfaces that adversaries can exploit. Our work focuses on the distinct vulnerabilities that can arise at the data, algorithm, and architecture levels. By analyzing representative attacks and corresponding defenses, this survey highlights emerging challenges and outlines promising research directions. Overall, our aim is to make differential privacy more robust and deployable in real-world deep learning systems.

Article
Computer Science and Mathematics
Security Systems

Jingtang Luo

,

Chenlin Zhang

Abstract: Large Language Model (LLM) agents are increasingly deployed to interact with untrusted external data, exposing them to Indirect Prompt Injection (IPI) attacks. While current black-box defenses (i.e., model-agnostic methods) such as “Sandwich Defense” and “Spotlighting” provide baseline protection, they remain brittle against adaptive attacks like Actor-Critic (where injections evolve to better evade LLM’s internal defense). In this paper, we introduce Real User Instruction (RUI), a lightweight, black-box middleware that enforces strict instruction-data separation without model fine-tuning. RUI operates on three novel mechanisms: (1) a Privileged Channel that encapsulates user instructions within a cryptographic-style schema; (2) Explicit Adversarial Identification, a cognitive forcing strategy that compels the model to detect and list potential injections before response generation; and (3) Dynamic Key Rotation, a moving target defense that re-encrypts the conversation state at every turn, rendering historical injection attempts obsolete. We evaluate RUI against a suite of adaptive attacks, including Context-Aware Injection, Token Obfuscation, and Delimitation Spoofing. Our experiments demonstrate that RUI reduces the Attack Success Rate (ASR) from 100% (undefended baseline) to less than 8.1% against cutting-edge adaptive attacks, while maintaining a Benign Performance Preservation (BPP) rate of over 88.8%. These findings suggest that RUI is an effective and practical solution for securing agentic workflows against sophisticated, context-aware adversaries.

Article
Computer Science and Mathematics
Security Systems

Marko Corn

,

Primož Podržaj

Abstract: Human-centered cryptographic key management is constrained by a persistent tension between security and usability. While modern cryptographic primitives offer strong theoretical guarantees, practical failures often arise from the difficulty users face in generating, memorizing, and securely storing high-entropy secrets. Existing mnemonic approaches suffer from severe entropy collapse due to predictable human choice, while machine-generated mnemonics such as BIP–39 impose significant cognitive burden. This paper introduces GeoVault, a spatially anchored key derivation framework that leverages human spatial memory as a cryptographic input. GeoVault derives keys from user-selected geographic locations, encoded deterministically and hardened using memory-hard key derivation functions. We develop a formal entropy model that captures semantic and clustering biases in human location choice and distinguishes nominal from effective spatial entropy under attacker-prioritized dictionaries. Through information-theoretic analysis and CPU–GPU benchmarking, we show that spatially anchored secrets provide a substantially higher effective entropy floor than human-chosen passwords under realistic attacker models. When combined with Argon2id, spatial mnemonics benefit from a hardware-enforced asymmetry that strongly constrains attacker throughput as memory costs approach GPU VRAM limits. Our results indicate that modest multi-point spatial selection combined with memory-hard derivation can achieve attacker-adjusted work factors comparable to those of 12-word BIP–39 mnemonics, while single-point configurations provide meaningful offline resistance with reduced cognitive burden.

Review
Computer Science and Mathematics
Security Systems

Yinggang Sun

,

Haining Yu

,

Wei Jiang

,

Xiangzhan Yu

,

Dongyang Zhan

,

Lixu Wang

,

Siyue Ren

,

Yue Sun

,

Tianqing Zhu

Abstract: The rapid evolution of Large Language Models (LLMs) from static text generators to autonomous agents has revolutionized their ability to perceive, reason, and act within complex environments. However, this transition from single-model inference to System Engineering Security introduces unique structural vulnerabilities—specifically instruction-data conflation, persistent cognitive states, and untrusted coordination—that extend beyond traditional adversarial robustness. To address the fragmented nature of the existing literature, this article presents a comprehensive and systematic survey of the security landscape for LLM-based agents. We propose a novel, structure-aware taxonomy that categorizes threats into three distinct paradigms: (1) External Interaction Attacks, which exploit vulnerabilities in perception interfaces and tool usage; (2) Internal Cognitive Attacks, which compromise the integrity of reasoning chains and memory mechanisms; and (3) Multi-Agent Collaboration Attacks, which manipulate communication protocols and collective decision-making. Adapting to this threat landscape, we systematize existing mitigation strategies into a unified defense framework that includes input sanitization, cognitive fortification, and collaborative consensus. In addition, we provide the first in-depth comparative analysis of agent-specific security evaluation benchmarks. The survey concludes by outlining critical open problems and future research directions, aiming to foster the development of next-generation agents that are not only autonomous but also provably secure and trustworthy.

Article
Computer Science and Mathematics
Security Systems

Pere Vidiella

,

Pere Tuset-Peiró

,

Josep Pegueroles

,

Michael Pilgermann

Abstract: The digitalization of healthcare systems increases their exposure to security incidents. Security analysts use standard CVE (Common Vulnerabilities and Exposures) records to identify and mitigate vulnerabilities. However, CVEs are often incomplete or overly generic, requiring the addition of structured, actionable information to support effective decision-making. Manually performing this augmentation is unfeasible due to the rapidly growing number of published CVEs. In this paper we evaluate the capabilities of LLMs (Large Language Models) to classify and analyze CVEs within the medical IT systems domain. We propose a framework where LLMs parse structured JSON context and answer a set of specific natural language questions, enabling the categorization of vulnerabilities by their position in the medical chain, affected component types, and mapping to the MITRE ATT&CK framework. While recent studies show that general LLMs can achieve high accuracy in objective CVSS elements and learn CNA-oriented patterns, they often struggle with subjective impact metrics. Our results demonstrate that domain-specific classification through natural language prompting provides the necessary granularity for medical risk prioritization. We conclude that this augmentation effectively bridges the gap in standard CVE records, allowing for a better understanding of how vulnerabilities impact critical healthcare infrastructure and patient safety.

Article
Computer Science and Mathematics
Security Systems

Faiz Alam

,

Mohammed Mubeen Mifthak

,

Sahil Purohit

,

Md Shadab

,

Gregory T Byrd

,

Khaled Harfoush

Abstract: Virtualization is the building block of modern cloud computing infrastructure. However, it remains vulnerable to a range of security threats, including malicious co-located tenants, hypervisor vulnerabilities, and side-channel attacks. These threats are generally mitigated by developing and deploying advanced and complex security solutions that incur significant performance overhead. Prior work on Virtual Machines (VMs) and containers has mainly evaluated basic security solutions, such as firewalls, using narrow performance metrics and synthetic models within limited evaluation frameworks. These studies often overlook advanced security modules in both user and kernel space, lack flexibility to incorporate emerging features, and fail to capture detailed system-level impacts. We address these gaps with HyperShield, an open-source framework for unified security evaluation across VMs and containers that mimics a realistic cloud infrastructure. HyperShield supports advanced security modules in both user and kernel space, providing rich system-level performance metrics for comprehensive evaluation. Our performance evaluation shows that containers generally outperform VMs due to their lower virtualization overhead, achieving a throughput of 9.38 Gb/s compared to 1.98 Gb/s for VMs for our benchmarks. However, VM's performance is comparable for kernel space deployments, as Docker uses the shared kernel space of the Docker bridge, which can result in packet congestion. In latency-sensitive workloads, VM access latency of 14.91 ms is comparable to Docker's 12.86 ms. In storage benchmarks, FIO, however, VMs outperform Docker due to the overhead of Docker’s layered, copy-on-write file system, whereas VMs leverage optimized virtual block devices with near-native I/O performance. These results highlight performance dependencies on benchmark choice, trade-offs in deploying security workloads between user and kernel space, and the choice of containers and virtual machines as virtualization environments. Therefore, HyperShield provides a comprehensive evaluation toolkit for exploring an optimal security module deployment strategy.

Article
Computer Science and Mathematics
Security Systems

Zhen Li

,

Kexin Qiang

,

Yiming Yang

,

Zongyue Wang

,

An Wang

Abstract: In side-channel analysis, simple power analysis (SPA) is a widely used technique for recovering secret information by exploiting differences between operations in traces. However, in realistic measurement environments, SPA is often hindered by noise, temporal misalignment, and weak or transient leakage, which obscure secret-dependent features in single or very few power traces. In this paper, we provide a systematic analysis of moving-skewness-based trace preprocessing for enhancing asymmetric leakage characteristics relevant to SPA. The method computes local skewness within a moving window along the trace, transforming the original signal into a skewness trace that emphasizes distributional asymmetry while suppressing noise. Unlike conventional smoothing-based preprocessing techniques, the proposed approach preserves and can even amplify subtle leakage patterns and spike-like transient events that are often attenuated by low-pass filtering or moving-average methods. To further improve applicability under different leakage conditions, we introduce feature-driven window-selection strategies that align preprocessing parameters with various leakage characteristics. Both simulated datasets and real measurement traces collected from multiple cryptographic platforms are used to evaluate the effectiveness of the approach. Experimental results indicate that moving-skewness preprocessing improves leakage visibility and achieves higher SPA success rates compared to commonly used preprocessing methods.

Article
Computer Science and Mathematics
Security Systems

Yeongseop Lee

,

Seungun Park

,

Yunsik Son

Abstract: In multi-turn conversational AI, individually innocuous personally identifiable information (PII) fragments disclosed across successive turns can accumulate into a re-identification risk that no single utterance reveals on its own. Existing PII detectors operate on isolated utterances and therefore cannot track this cross-turn evidence build-up. We propose a stateful middleware guardrail whose core design principle is speaker-attributed entity isolation: every extracted PII fragment is classified by its originating conversational participant (first-person USER vs. incidentally mentioned third parties), and evidence is accumulated in entity-isolated subgraphs that structurally prevent cross-entity contamination. A three-tier extraction pipeline (Tier-0 deterministic regex; Tier-1 Presidio/spaCy NER with zero-shot NER independent verification; Tier-2 independent zero-shot NER; plus rule-based post-processing) refines noisy NER candidates, and an evidence-gated Commit Gate writes only corroborated cues to entity state, firing a re-identification onset signal tpred at the earliest turn where combination-based onset rules grounded in the re-identification uniqueness literature are satisfied. On a 184-record template-synthetic evaluation corpus, the system achieves OW@5= 70.7% with MAE= 2.442 turns, reducing naïve accumulation MAE by 56% (BL2 MAE= 5.522). We confirm structural robustness on a 300-record mutation stress set and sanity-check RULE_B generalization on the ABCD external corpus (OW@0= 97.1%, MAE= 0.011). The pipeline requires no modification to the underlying conversational model and serves as a drop-in runtime guardrail for existing dialogue systems.

Article
Computer Science and Mathematics
Security Systems

Dina Ghanai Miandoab

,

Brit Riggs

,

Nicholas Navas

,

Bertrand Cambou

Abstract: In this paper we study the performance and feasibility of integrating a novel key encapsulation protocol into Quantum Key Distribution (QKD). The key encapsulation protocol includes a challenge-response pair (CRP). In our design, Alice and Bob derive identical cryptographic tables from shared challenges, allowing the ephemeral key to be encoded and recovered without disclosing helper data. Software simulations show error-free key recovery for quantum channel bit error rates up to 40% when using longer response lengths. Additionally, we designed the protocol to detect eavesdropping solely from the statistics of the received quantum stream, without sacrificing key bits for public comparison. We formalize the encoding and decoding model, analyze trade-offs between response length and latency, and report key recovery and error detection performance across different noise levels. The results indicate that this CRP-based multi-wavelength QKD protocol can reduce the reliance on classical reconciliation while preserving security in noisy settings.

Article
Computer Science and Mathematics
Security Systems

Javier Ruiz

,

Laura Fernández

,

María González

Abstract: In this study, we propose an RL-guided fuzzing scheduler that learns optimal mutation ordering and seed prioritization based on kernel coverage reward signals. The agent observes execution depth, subsystem transitions, and historical crash density to adapt exploration strategies. On Linux 5.10, the RL-fuzzer triggers 22% more unique crashes and 31% more deep paths compared with AFL-style schedulers. It identifies 7 previously unknown vulnerabilities, including mismanaged capability checks. Despite additional overhead from RL inference, throughput remains within 85% of baseline fuzzers. This study demonstrates the feasibility of applying RL-based policy learning to kernel fuzzing orchestration.

Article
Computer Science and Mathematics
Security Systems

Vyron Kampourakis

,

Michail Takaronis

,

Vasileios Gkioulos

,

Sokratis Katsikas

Abstract: Cyber Ranges (CRs) are complex socio-technical ecosystems, combining infrastructure resources, software services, learning mechanisms, and human-in-the-loop processes for cybersecurity training, education, and experimentation. However, their design and representation are conventionally described by diverse architectural representations and a lack of standardization, making them difficult to compare, integrate, and reason in an automated manner. This paper proposes a novel framework that uniquely integrates the structural, functional, informational, and decisional aspects of CR platforms, formalizing them into a common semantic framework. It models the architectural and learning characteristics of CRs, allowing the representation of design choices, operational processes, information resources, and capability development. The ontology is implemented using OWL 2 DL, which includes logical constraints and enables consistency checking and automated reasoning. Validation through instantiation and competency question assessment shows that the model allows for structured querying, traceability across abstraction levels, and capability-level reasoning. The findings indicate that ontology-based modeling can serve as a basis for more formalized CR configuration analysis and capability-focused evaluation of diverse CR platforms.

Article
Computer Science and Mathematics
Security Systems

Seema Sirpal

,

Pardeep Singh

,

Om Pal

Abstract: Digital signatures serves as a crucial cryptographic primitive in an e-governance system for the authentication of citizen-government interactions. Traditional methods (DSA, ECDSA) pose computational overheads at resource-limited endpoints and centralized verification servers. While complex-number cryptography provides theoretical efficiency through the Complex Discrete Logarithm Problem (CDLP), prior works often fail to meet the requirements for real-world applications. This paper advances the knowledge in lightweight cryptography by introducing LDSEGoV, a lightweight digital signature scheme for e-governance infrastructure. The proposed method overcomes the shortcomings of previous methods by incorporating sound modular arithmetic for consistent verification, using NIST-approved hash functions. Furthermore, we provide a comprehensive security analysis that provides the formal security proofs of existential unforgeability (EUF-CMA) of the proposed scheme in the Random Oracle Model. Additionally, the experimental results show a 6.5× improvement in signing performance and a 24.76× improvement in verification performance over ECDSA, with a 61% reduction in signature size. These results demonstrate that LDSEGoV is suitable for large-scale digital governance systems for authentication scenarios.

Article
Computer Science and Mathematics
Security Systems

Stefan Ivanov Stoyanov

,

Maria Marinova

,

Nikolay Rumenov Kakanakov

Abstract: Preserving critical data, preventing unauthorized access, securing communication are aspects of information security. To implement them as hardware is more reliable than software. There are various hardware solutions that suggest using a separate computational unit which is capable of providing various security enhancements. This article describes a heterogeneous security architecture with a tightly coupled security core to the CPU. A security interface that allows direct control and monitory of the security core over the CPU is proposed. In the article analysis of how the interface interacts with the controlled and monitored CPU is done. This analysis explains the benefits and why for certain aspects control is implemented seeking performance while for others - using less logic.

Article
Computer Science and Mathematics
Security Systems

Emily C. Rogers

,

Daniel K. Foster

,

Sarah L. Chen

,

Michael J. Turner

Abstract: Memory objects in the kernel often remain accessible long after their safe lifetime, leading to use-after-free exploits. We present a lifetime-aware isolation model that assigns temporal protection windows to kernel objects. PKS permissions are revoked when objects exit valid lifetime states. Applied to six kernel allocators (SLUB, SLOB, SLAB), the technique eliminates 67% of UAF exploitability cases** and shortens exposure windows by 72%. Benchmarking shows ≤4% overhead across memory-intensive workloads. This temporal model adds a new dimension to kernel compartmentalization by aligning memory protection with object lifecycles.

of 20

Prerpints.org logo

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.

Subscribe

Disclaimer

Terms of Use

Privacy Policy

Privacy Settings

© 2026 MDPI (Basel, Switzerland) unless otherwise stated