Computer Science and Mathematics


Article
Computer Science and Mathematics
Computer Networks and Communications

Aiman Moldagulova, Zhuldyz Kalpeyeva, Raissa Uskenbayeva, Nurdaulet Tasmurzayev, Bibars Amangeldy, Yeldos Altay

Abstract: Low-cost air quality sensors enable dense monitoring networks but suffer from significant measurement noise and instability, particularly in dynamic environments. Conventional fixed-window smoothing reduces noise but introduces a trade-off between signal stability and temporal responsiveness, often attenuating short-term pollution events. This paper proposes an adaptive filtering algorithm that dynamically adjusts the averaging window size based on short-term signal variability. The method relies on real-time variance estimation to balance noise suppression and sensitivity to rapid changes without increasing computational complexity. The approach is implemented within an IoT-based monitoring framework and evaluated using parallel measurements with a certified reference device. Comparative analysis against raw data and fixed-window filtering demonstrates improved statistical accuracy and stronger temporal correlation with reference measurements. In addition, the method enhances event detection stability in threshold-based monitoring scenarios. To support automated decision-making, the filtered signal is integrated into an event-driven architecture with Robotic Process Automation (RPA), enabling reliable triggering of predefined workflows. The results show that the proposed adaptive filtering provides an efficient and lightweight solution for real-time signal processing on resource-constrained devices, making it suitable for large-scale deployment in environmental monitoring systems.
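The variance-driven window adaptation described above can be sketched roughly as follows; the window sizes, variance threshold, and probe length are illustrative tuning constants, not values from the paper.

```python
import statistics
from collections import deque

class AdaptiveFilter:
    """Moving-average filter whose window shrinks when short-term
    variance is high (fast events) and grows when the signal is calm.
    All parameters below are hypothetical tuning constants."""
    def __init__(self, min_win=3, max_win=30, var_threshold=4.0, probe=5):
        self.min_win, self.max_win = min_win, max_win
        self.var_threshold = var_threshold   # variance level that triggers a short window
        self.probe = probe                   # number of recent samples for the variance probe
        self.buf = deque(maxlen=max_win)

    def update(self, x):
        self.buf.append(x)
        recent = list(self.buf)[-self.probe:]
        var = statistics.pvariance(recent) if len(recent) > 1 else 0.0
        # High variance -> small window (responsive); low variance -> large window (smooth)
        win = self.min_win if var > self.var_threshold else self.max_win
        window = list(self.buf)[-win:]
        return sum(window) / len(window)
```

Because only a bounded buffer and a short variance probe are maintained, the per-sample cost stays constant, which is what makes this style of filter viable on constrained IoT hardware.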

Article
Computer Science and Mathematics
Computer Networks and Communications

Jisi Chandroth, Jehad Ali

Abstract: The Internet of Things (IoT) comprises diverse devices connected through heterogeneous communication protocols to deliver a wide range of services. However, the complexity and scale of IoT networks make them difficult to secure. Network intrusion detection systems (NIDS) have therefore become essential for identifying malicious activities and protecting IoT environments across many applications. Although recent deep learning (DL)-based IDS approaches achieve strong detection performance, they often require substantial computation and storage, which limits their practicality on resource-constrained IoT devices. To balance detection accuracy with computational efficiency, we propose a lightweight deep learning model for IoT intrusion detection. Specifically, our method learns compact, intrusion-relevant representations from traffic features using a two-layer Multi-Layer Perceptron (MLP) embedding backbone, followed by a linear softmax classification head for multi-class attack detection. We evaluate the proposed approach on two benchmark datasets, CICIDS2017 and NSL-KDD, and the results show strong performance, achieving 99.85% and 99.21% accuracy, respectively, while significantly reducing model size and computational overhead. Experimental results demonstrate that the proposed method achieves excellent classification performance while maintaining a lightweight design, with fewer parameters and lower FLOPs than existing approaches.
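A minimal sketch of a two-layer MLP embedding backbone with a linear softmax head, written in NumPy with random weights; the feature and class dimensions here are hypothetical stand-ins, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 78 flow features, two hidden layers, 15 attack classes
d_in, d_h1, d_h2, n_cls = 78, 64, 32, 15
W1, b1 = rng.normal(0, 0.1, (d_in, d_h1)), np.zeros(d_h1)
W2, b2 = rng.normal(0, 0.1, (d_h1, d_h2)), np.zeros(d_h2)
Wc, bc = rng.normal(0, 0.1, (d_h2, n_cls)), np.zeros(n_cls)

def forward(X):
    """Two-layer MLP embedding followed by a linear softmax head."""
    h = relu(relu(X @ W1 + b1) @ W2 + b2)   # compact, intrusion-relevant embedding
    return softmax(h @ Wc + bc)             # per-class probabilities

probs = forward(rng.normal(size=(4, d_in)))
```

The parameter count of such a backbone is just the three weight matrices plus biases, which is why this family of models stays small in both parameters and FLOPs compared with deeper architectures.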

Article
Computer Science and Mathematics
Computer Networks and Communications

Ryan J. Buchanan, Parker M.D. Emmerson

Abstract: Using modest spectral graph theory, we show that under the assumption of convexity, beliefs will diffuse towards consensus. Our toy model captures opinion dynamics in a manner sensitive to the order of belief updates among agents in a network. To accomplish this, we introduce a first-order deformation of the classical observable algebra and study the resulting non-commutative correction through an explicit graph bracket. We include a concrete computation alongside some code in the appendix.
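The convex baseline of such belief diffusion can be illustrated with a standard synchronous graph-Laplacian update (the paper's order-sensitive, non-commutative correction is not reproduced here):

```python
import numpy as np

# Undirected path graph on 4 agents: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

beliefs = np.array([0.0, 0.2, 0.8, 1.0])
eps = 0.2                               # small step size for stability (eps < 2/lambda_max)

for _ in range(500):
    beliefs = beliefs - eps * (L @ beliefs)   # synchronous diffusion step

# beliefs now converge to the average of the initial beliefs (consensus)
```

Because the update is linear and the Laplacian kernel is the constant vector, all agents converge to the mean of the initial beliefs; sensitivity to the update order only appears once asynchronous or deformed updates are introduced.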

Article
Computer Science and Mathematics
Computer Networks and Communications

Jak Brierley

Abstract: Authoritative server-client network models dominate multiplayer game implementations but incur significant operational costs due to simulation and replication workload. This paper investigates whether client-assisted replication techniques can reduce server-side CPU utilisation without degrading gameplay quality. An experimental prototype was developed in which both simulation and replication of movement state were deferred from the server to all connected clients. The results show a 95–98% reduction in server CPU utilisation. While these findings demonstrate substantial computational savings on the server, shifting responsibility for replication and simulation to clients introduces new technical challenges in maintaining authoritative consistency and resistance to client-side manipulation. The impact of this model on gameplay quality was not objectively evaluated in this study and remains an area for future investigation.

Article
Computer Science and Mathematics
Computer Networks and Communications

Haowen Shi, Yichen Zong

Abstract: Efficient Mobile Edge Computing (MEC) resource management is critical for diverse Quality of Service (QoS) demands, but traditional reactive methods and existing preemptive policies struggle in dynamic environments, causing suboptimal experiences. This paper proposes Proactive Adaptive Preemptive Allocation (PAPA), a novel framework for intelligent, forward-looking MEC resource management. PAPA features a QoS prediction module using lightweight sequence models to forecast short-term trends, assess risk, and trigger pre-warnings. Its core, the Proactive Preemptive Strategy Learning (APPL) module, employs a deep reinforcement learning (DRL) agent with a unique dual-layer reward. This includes a proactive penalty compelling anticipatory preemptive actions when predicted QoS enters a warning zone, differentiating it from reactive approaches. PAPA further enhances adaptability via meta-learning and dynamic priority mechanisms. Extensive simulations show PAPA consistently outperforms baselines, achieving superior throughput, reduced latency, and a significantly lower critical QoS violation rate than reactive DRL. Ablation studies confirm the impact of the proactive penalty and meta-learning. PAPA demonstrates competitive energy efficiency and optimized preemption, affirming its robustness and practical viability in dynamic MEC environments.
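The dual-layer reward idea can be sketched as a plain function; all thresholds and weights below are hypothetical, and the paper's full DRL state/action formulation is not reproduced:

```python
def dual_layer_reward(qos, predicted_qos, action_preempted,
                      warn=0.7, violation=0.5,
                      w_perf=1.0, w_penalty=2.0):
    """Illustrative dual-layer reward: a base performance term, a large
    negative term for hard QoS violations, and a proactive penalty that
    fires when predicted QoS enters the warning zone but the agent took
    no preemptive action. All weights and thresholds are hypothetical."""
    reward = w_perf * qos
    if qos < violation:                         # hard violation: large negative
        reward -= 5.0
    if predicted_qos < warn and not action_preempted:
        reward -= w_penalty                     # proactive penalty
    return reward
```

The penalty term is what separates this shaping from a purely reactive reward: the agent is punished for ignoring a *predicted* degradation, not only for an observed one.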

Article
Computer Science and Mathematics
Computer Networks and Communications

Hsu Ouyang, Shao-Yu Chang, Chao-Chun Chuang, Chia-Hung Yang, Yu-Chieh Lin

Abstract: Federated learning (FL) enables collaborative model training without sharing raw data and has become an important paradigm for privacy-sensitive applications such as healthcare and other regulated domains. However, most existing federated learning frameworks rely on centralized coordination servers, fixed network configurations, and complex infrastructure requirements, which limit their deployment in real-world institutional environments with strict cybersecurity and data governance constraints. In this work, we propose STellar-FL, a decentralized federated learning architecture designed for scalable cross-institution model training under network-constrained environments. The proposed framework adopts a microservice-based design consisting of a federated training orchestration module, a distributed communication layer, and federated execution nodes. STellar-FL enables secure model exchange through relay-assisted peer connectivity, eliminates the need for centralized servers with public IP exposure, and provides a unified workflow for model development, deployment, and validation. Compared with conventional federated learning frameworks, STellar-FL reduces deployment complexity, improves system robustness by removing single points of failure, and supports flexible collaboration across heterogeneous institutional infrastructures. The proposed architecture provides a practical foundation for real-world privacy-preserving AI deployment in healthcare and other data-sensitive domains.

Article
Computer Science and Mathematics
Computer Networks and Communications

Basker Palaniswamy, Paolo Palmieri

Abstract: Modern e-commerce platforms must handle sudden and unpredictable traffic surges caused by flash sales, festive shopping events, and viral online activity. Traditional web architectures typically adopt one of two extremes: a tightly coupled monolithic design that provides low latency but becomes fragile under heavy load, or a loosely coupled microservices architecture that improves scalability and resilience but introduces communication overhead during normal operation. This trade-off forces system designers to choose between performance efficiency and scalability robustness. This paper introduces ATLAS (Adaptive Traffic-aware Loose–tight Architecture System), a next-generation adaptive web architecture that dynamically adjusts its coupling strategy based on real-time system conditions. ATLAS employs machine learning models to analyse operational telemetry, predict traffic surges, detect anomalies, and forecast potential system failures. Using these predictions, the architecture can automatically transform its runtime structure, switching between tightly coupled monolithic execution and loosely coupled microservices deployment as traffic conditions evolve. To improve reliability, ATLAS incorporates a self-healing recovery pipeline that autonomously detects service failures, isolates faulty components, and restores normal operation without human intervention. Through case studies of large-scale platforms such as Google Search, Amazon, and Flipkart, we illustrate how existing systems can evolve toward the ATLAS paradigm, enabling self-adaptive and resilient web infrastructures for the next generation of large-scale online services.

Article
Computer Science and Mathematics
Computer Networks and Communications

Vladislav Vasilev, Georgi Iliev

Abstract: In this paper we introduce the CDF manifold algorithm, which operates on data sets where a single target dimension is strictly increasing given two or more input dimensions, a situation that is very common in telco data. The manifold can then be used to compute the closest upper and lower limit for a given new point, as well as its CDF. Training takes O(n log n) steps in the best case and O(n^{3/2}) in the worst case. Lookup takes O(log n) steps in the best case and O(n^{1/2}) in the worst case. The asymptotic computational cost is proven with a theorem. We compare our manifold method against a standard dense neural network and show its asymptotic advantages both in terms of speed and accuracy. We also comment on potential speed gains through the use of reference points. In summary, the manifold is a non-parametric and explanatory method for finding the tightest data-driven upper and lower limits of the output dimension given a new, unseen input. This makes it ideal for planning new site deployments, where actual measurements are needed as baseline performance.
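For intuition, the one-dimensional special case of such monotone bound lookup reduces to binary search; the multi-dimensional manifold construction from the paper is not reproduced here, and the sample data are invented.

```python
import bisect

# Sorted (input, output) pairs where the output is strictly increasing
# in the input. For a new query point we return the tightest data-driven
# lower/upper bounds on the output, plus an empirical CDF value.
data = sorted([(1.0, 3.0), (2.0, 5.0), (4.0, 9.0), (7.0, 12.0)])
xs = [p[0] for p in data]

def bounds_and_cdf(x):
    """O(log n) lookup of the nearest enclosing outputs and the
    empirical CDF of the input value."""
    i = bisect.bisect_right(xs, x)
    lower = data[i - 1][1] if i > 0 else None       # tightest lower limit
    upper = data[i][1] if i < len(data) else None   # tightest upper limit
    cdf = i / len(data)                             # empirical CDF of inputs
    return lower, upper, cdf
```

With two or more input dimensions the ordering is only partial, which is where the manifold structure (and the O(n^{1/2}) worst-case lookup) comes in.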

Article
Computer Science and Mathematics
Computer Networks and Communications

Xianke Qiang, Zheng Chang, Jianhua Tang, Wei Feng, Chungang Yang, Yan Zhang

Abstract: Large language models (LLMs) are rapidly transforming the design and operation of communication systems, while the advent of sixth-generation (6G) networks provides the infrastructure necessary to sustain their unprecedented scale. This survey investigates the bidirectional relationship between LLMs and 6G networks from two complementary perspectives. From the perspective of LLM for Network, we illustrate how LLMs can enhance network management, strengthen security, optimize resource allocation, and act as intelligent agents. By leveraging their natural language understanding and reasoning capabilities, LLMs offer new opportunities for intent-driven orchestration, anomaly detection, and adaptive optimization beyond the scope of conventional AI models. From the perspective of Network for LLM, we discuss how 6G-native features such as integrated sensing and communication, semantic-aware transmission, and green resource management enable scalable, efficient, and sustainable training and inference of LLMs at the edge and in the cloud. Building on these two perspectives, we identify key challenges related to scalability and efficiency, robustness and security, as well as trustworthiness and sustainability. We further highlight open research directions. We envision that this work serves as a roadmap for cross-disciplinary research, fostering the integration of LLMs and 6G toward trustworthy and intelligent next-generation communication systems.

Concept Paper
Computer Science and Mathematics
Computer Networks and Communications

Edet Ekpenyong, Ubio Obu, Godspower Emmanuel Achi, Clement Umoh, Duke Peter, Udoma Obu

Abstract: In blockchain ecosystems, maintaining both transparency and privacy has become an ethical dilemma. While certain user information is shared to ensure transparency of transactions across networks, such information can be detrimental to the user, as there is a possibility of it being tampered with. For instance, in the Catalyst voting process in Cardano, users can see the amount of ADA tokens held by other users, which can influence their voting choices, especially when large ADA holders vote in support of certain ideas or proposals. To mitigate challenges such as voter manipulation and vote buying, this study proposes the implementation of zero-knowledge proofs (ZKP) in blockchain ecosystems to enhance the transparency of the Catalyst voting process and improve the efficiency and speed of result release. Using a survey questionnaire and a multivocal literature review, this study shows that ZKP can not only be applied in the Catalyst voting process to enhance its transparency, but can also address potential challenges to its application such as scalability, encourage trust in and fairness of the voting system, and improve voter participation through its user-friendliness. Mathematical models emphasize scaled voting as optimal for balancing inclusion and plutocratic control.

Article
Computer Science and Mathematics
Computer Networks and Communications

Andrea Piroddi, Maurizio Torregiani

Abstract: This paper proposes a novel information-theoretic upper bound on the mutual information between the physical position of a user and the observed MIMO channel state information (CSI). Unlike classical Cramér–Rao bounds or I-MMSE relations, our bound explicitly incorporates the spatial variability of the channel via the Jacobian of the channel with respect to position. We provide a derivation for both local linearized models and global nonlinear bounds, highlighting the dependence on array geometry and multipath structure. The results offer new insight into the intrinsic information available for position estimation and semantic localization in wireless networks.
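For orientation, bounds of this family typically follow the Gaussian-channel pattern after local linearization; the expression below is a generic illustrative form under stated assumptions (Gaussian observation noise, Gaussian position prior), not necessarily the authors' exact bound.

```latex
% Local linearization of the channel around a nominal position x_0:
%   H(x) \approx H(x_0) + J(x_0)\,(x - x_0),
% where J(x_0) is the Jacobian of the (vectorized) channel with respect
% to position. With additive Gaussian noise of variance \sigma^2 and a
% Gaussian prior \Sigma_x on position, a Gaussian-channel upper bound reads
I\bigl(x;\,\hat{H}\bigr)
  \;\le\; \tfrac{1}{2}\,
  \log\det\!\Bigl(I \;+\; \tfrac{1}{\sigma^{2}}\,
    J(x_0)\,\Sigma_x\,J(x_0)^{\mathsf{H}}\Bigr).
```

The role of the Jacobian is visible directly: a channel that varies little with position (small J) carries little positional information, regardless of SNR.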

Article
Computer Science and Mathematics
Computer Networks and Communications

Chandramouli Haldar

Abstract: This paper introduces a Morse code transmission system that utilizes an ESP32 microcontroller and displays the decoded message via a Telegram bot in real time. In contrast to traditional Morse code input methods, which are normally based on a single button with timing distinctions between dot and dash, the proposed design uses two dedicated buttons for dot and dash, respectively, and another button for sending the entire message. This approach simplifies user input, minimizes timing errors, and enhances accuracy in message delivery. The decoded text is then transmitted securely to a specified Telegram chat via the Bot API, with high speed and reliability over Wi-Fi. The system is portable, lightweight, and compact, making it ideal for covert or clandestine messaging without attracting unnecessary attention.
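The three-button decoding logic can be sketched independently of the ESP32 firmware; this Python model assumes a separate call marks the end of each letter, a detail the abstract does not specify.

```python
# International Morse code table (letters only, for brevity)
MORSE = {'.-': 'A', '-...': 'B', '-.-.': 'C', '-..': 'D', '.': 'E',
         '..-.': 'F', '--.': 'G', '....': 'H', '..': 'I', '.---': 'J',
         '-.-': 'K', '.-..': 'L', '--': 'M', '-.': 'N', '---': 'O',
         '.--.': 'P', '--.-': 'Q', '.-.': 'R', '...': 'S', '-': 'T',
         '..-': 'U', '...-': 'V', '.--': 'W', '-..-': 'X', '-.--': 'Y',
         '--..': 'Z'}

class MorseBuffer:
    """Mirrors the three-button design: press_dot/press_dash append
    symbols, end_letter closes the current letter, send returns the
    decoded message (the firmware would POST it to the Telegram Bot API)."""
    def __init__(self):
        self.current, self.letters = [], []

    def press_dot(self):
        self.current.append('.')

    def press_dash(self):
        self.current.append('-')

    def end_letter(self):
        if self.current:
            self.letters.append(MORSE.get(''.join(self.current), '?'))
            self.current = []

    def send(self):
        self.end_letter()
        msg, self.letters = ''.join(self.letters), []
        return msg
```

Because the dot/dash distinction comes from which button was pressed rather than from press duration, no timing thresholds are needed anywhere in the decode path.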

Article
Computer Science and Mathematics
Computer Networks and Communications

Aymen I. Zreikat, Julien El Amine

Abstract: Wireless communications face both opportunities and challenges due to the coexistence of 5G New Radio (NR) high-band, 5G mid-band, and 5G low-band technologies. Each technology uses both licensed and unlicensed spectrum to operate in separate frequency bands. For example, 5G NR uses the high-band above 24 GHz, the mid-band of 2–6 GHz (which includes the unlicensed 5 GHz band accessed via Licensed-Assisted Access (LAA)), or the low-band below 2 GHz. With the use of sophisticated coexistence mechanisms and optimization techniques, this 5G coexistence scenario in shared spectrum can be effectively managed. These strategies are essential for boosting network capacity, reducing latency, and ensuring fair spectrum use across different wireless technologies. This work provides a comprehensive system-level evaluation of multi-band coexistence and offloading strategies under realistic deployment assumptions. The simulation results confirm the effectiveness of the proposed model, showing that spectrum sharing and coexistence among these technologies deliver scalable and robust performance in heterogeneous service environments. This approach enables efficient load balancing across the entire network and highlights the need for additional features to achieve further performance gains.

Article
Computer Science and Mathematics
Computer Networks and Communications

Mona Alghamdi, Atm S. Alam, Asma Cherif

Abstract: Mobile edge computing (MEC) enables resource-constrained mobile devices to execute delay-sensitive and compute-intensive applications by offloading tasks to nearby edge servers. However, task orchestration in MEC is challenged by highly dynamic system conditions, unreliable networks, and distributed edge environments. Moreover, as the number of users, tasks, and resources increases, the offloading decision-making problem becomes increasingly complex due to the exponential growth of the search space. To address these challenges, this paper proposes a Multi-Criteria Hierarchical Clustering-based Task Orchestrator (MCHC-TO), a novel framework that integrates multi-criteria decision making with divisive hierarchical clustering for preference-aware and adaptive workload orchestration. Edge servers are first evaluated using multiple decision criteria, and the resulting preference rankings are exploited to form hierarchical preference-based clusters. Incoming tasks are then assigned to the most suitable cluster based on task requirements, enabling efficient resource utilization and dynamic decision making. Extensive simulations conducted using an edge computing simulator demonstrate that the proposed MCHC-TO framework consistently outperforms benchmark approaches, achieving reductions in average service delay and task failure rate of up to 48% and 92%, respectively. These results highlight the effectiveness of combining multi-criteria evaluation with hierarchical clustering for robust and dynamic task orchestration in MEC environments.
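A toy sketch of preference ranking followed by a divisive split into clusters; the weighted sum, the criteria, and the server names are all placeholders standing in for the paper's actual MCDM method and clustering procedure.

```python
# Hypothetical decision criteria per edge server, each normalized to [0, 1]:
# (free CPU, available bandwidth, inverse latency)
servers = {
    'edge1': (0.8, 0.9, 0.5),
    'edge2': (0.3, 0.4, 0.9),
    'edge3': (0.6, 0.7, 0.7),
}

def score(metrics, weights=(0.5, 0.3, 0.2)):
    """Weighted-sum multi-criteria score; a simple stand-in for the
    paper's multi-criteria evaluation."""
    return sum(w * m for w, m in zip(weights, metrics))

# Divisive step: rank servers by preference, then split around the middle
# into a high-preference and a low-preference cluster.
ranked = sorted(servers, key=lambda s: score(servers[s]), reverse=True)
mid = len(ranked) // 2 + 1
high, low = ranked[:mid], ranked[mid:]

def assign(task_demand):
    """Route a task to the cluster matching its requirements, then pick
    the top-ranked server within that cluster."""
    cluster = high if task_demand == 'delay_sensitive' else (low or high)
    return cluster[0]
```

The point of clustering before assignment is that each task only searches within one preference cluster, sidestepping the exponential growth of the full server search space.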

Article
Computer Science and Mathematics
Computer Networks and Communications

Liliana Enciso

Abstract: In the context of communication networks, a mobile ad hoc network (MANET) is a set of mobile nodes that configure themselves dynamically, without physical infrastructure or centralized administration. This article analyzes which of the routing strategies for MANETs—proactive, reactive, or hybrid—offers the best results. For the measurement, twenty-four simulations of emergency situations in an urban area were established. In addition to the defined quality of service (QoS) parameters, this involved calculating node densities and using a mobility model to validate the results. The AODV, DSDV, and AOMDV protocols were used in these simulations, and AOMDV offered the best QoS with the random obstacle mobility model in the NS2 tool.

Article
Computer Science and Mathematics
Computer Networks and Communications

Ponglert Sangkaphet, Supawee Makdee, Chaivichit Kaewklom, Nawara Chansiri, Buppawan Chaleamwong, Pheerasap Wonglamai, Phattaraphol Chinnachot

Abstract: Cage-based tilapia farming is highly affected by rapid variations in water quality, particularly variations in dissolved oxygen (DO), which can lead to mass fish mortality and significant economic losses. To address this challenge, in this study, an internet of things (IoT)- and LoRa-based water quality monitoring and control system is proposed, designed for real-time aquaculture management. The developed prototype enables peer-to-peer communication among distributed control nodes for continuous monitoring of dissolved oxygen, pH, and water temperature. Measured data are transmitted remotely and integrated with automated oxygen pump control through a mobile application, allowing timely intervention without continuous on-site supervision. To mitigate sensor degradation caused by prolonged submersion, an automatic probe lifting mechanism was incorporated into the system, significantly reducing biofouling and sensor drift. The experimental results show that this mechanism improves measurement accuracy, achieving a dissolved oxygen RMSE of 0.186, which is substantially lower than that of a continuously submerged sensor. Evaluation of communication performance confirms reliable LoRa transmission with a 100% packet delivery rate over distances up to 1,600 m, maintaining positive signal-to-noise ratios and RSSI values above receiver sensitivity. Detection latency analysis demonstrates sub-second response times for both single- and multi-hop configurations, sufficient for timely aeration control. Evaluation by five specialists yielded a high average performance score of 4.11, while post-implementation satisfaction assessments involving 20 tilapia farmers indicated an average score of 4.48, confirming the system’s effectiveness, reliability, and suitability for practical deployment in cage-based tilapia farming.

Article
Computer Science and Mathematics
Computer Networks and Communications

Sethu Subramanian N., Prabu P., Kurunandan Jain, Prabhakar Krishnan

Abstract: Smart-city IoT ecosystems depend on a large number of devices with limited resources, which often lack built-in security mechanisms. While traditional cloud-based or gateway-centric intrusion detection systems (IDS) offer essential security, they are still characterized by high detection latency, considerable bandwidth demand, and a lack of precise monitoring of individual device actions. This work presents and experimentally evaluates a novel micro-layer intrusion detection architecture, termed the Edge AI Bridge, a new micro-computing security layer positioned between IoT devices and the gateway to enable early-stage threat interception. The proposed architecture incorporates embedded AI hardware running a hybrid detection pipeline that combines unsupervised anomaly detection for behavioral profiling with a lightweight signature-matching module used to reduce false positives, thereby improving detection reliability. System operations—including localized traffic inspection, protocol parsing, and feature extraction—are performed before data aggregation, which not only preserves device-level privacy but also substantially eases the computational burden on the IoT gateway. The contemporary CIC-IoT-2023 dataset, which captures a wide range of smart-city protocols and attack vectors, is used to evaluate the architecture. The Edge AI Bridge leads to a significant reduction in detection latency, approximately 50 ms on average compared with roughly 500 ms for cloud-based solutions, while the resource footprint is kept low, at about 20% CPU utilization. The Edge AI Bridge thus demonstrates a scalable, modular, and privacy-preserving approach to improving the cyber resilience of smart-city infrastructures that are large, heterogeneous, and difficult to manage.

Article
Computer Science and Mathematics
Computer Networks and Communications

Craig S. Wright

Abstract: This article examines Nash equilibrium stability in digital cash systems, using Bitcoin as a canonical model for protocol-constrained strategic interaction. Building on the formal framework established in Wright (2025), we characterise mining as a repeated non-cooperative game under endogenous constraints: hashpower allocation, latency asymmetries, fee-substitution dynamics, and institutional noise. We show that equilibrium behaviours are sensitive to the structural composition of miner rewards—specifically, the transition from subsidy-dominated to fee-dominated environments—and that volatility in protocol rules leads to equilibrium multiplicity and eventual collapse. Using tools from mainstream game theory and Austrian time preference theory, we demonstrate that rational strategic cooperation is only sustainable under strict protocol immutability. Rule mutation introduces uncertainty that distorts intertemporal valuation and incentivises short-term extractive strategies. These results suggest that digital monetary systems must be governed by non-negotiable constitutional rules to preserve incentive compatibility across time.

Article
Computer Science and Mathematics
Computer Networks and Communications

Zsolt Bringye, Rita Fleiner, Eszter Kail

Abstract: The increasing reliance of Internet of Things (IoT) applications on low-power wide-area network technologies, particularly LoRaWAN, has amplified the need for intrusion detection approaches that go beyond attack-specific signatures and generic traffic anomalies. Existing IoT intrusion detection systems are often tailored to individual threat scenarios or rely on statistical indicators, which limits their ability to capture protocol-level misuse in a systematic and interpretable manner. This paper addresses this gap by proposing a methodology for protocol-aware anomaly detection based on a digital twin abstraction of LoRaWAN communication behavior. The approach models the Over-The-Air Activation (OTAA) procedure as a finite-state machine that serves as a lightweight, protocol-specific digital twin, encoding expected message sequences and specification-driven constraints. Rather than targeting individual attacks, observed network events are continuously validated against the modeled state evolution, enabling the identification of deviations that indicate anomalous or non-conformant behavior. Illustrative examples include replay attempts, integrity violations, and inconsistencies in protocol parameters, although the framework is not limited to predefined attack categories. The results demonstrate that state-machine-based digital twins provide a structured and extensible foundation for intrusion detection and can be integrated into SOC (Security Operation Center) oriented monitoring environments. Overall, the study highlights the methodological advantages of digital-twin-driven, state-aware detection for improving protocol compliance monitoring and interpretability in LoRaWAN-based IoT networks. Unlike prior LoRaWAN IDS approaches, the proposed model enables the detection of protocol-conformant yet semantically invalid behaviors that remain invisible to packet-centric or statistical detectors.
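A minimal illustration of an OTAA state-machine digital twin that flags a replayed DevNonce; the states and checks below are simplified relative to the LoRaWAN specification, and the event names are this sketch's own.

```python
class OTAATwin:
    """Finite-state model of the LoRaWAN OTAA join flow used as a
    lightweight protocol twin: observed events are validated against
    the expected state evolution, and deviations raise alerts."""
    def __init__(self):
        self.state = 'IDLE'
        self.seen_nonces = set()   # specification-driven constraint: DevNonce must not repeat
        self.alerts = []

    def observe(self, event, dev_nonce=None):
        if event == 'JoinRequest':
            if dev_nonce in self.seen_nonces:
                self.alerts.append(f'replayed DevNonce {dev_nonce}')
            else:
                self.seen_nonces.add(dev_nonce)
                self.state = 'JOIN_PENDING'
        elif event == 'JoinAccept':
            if self.state != 'JOIN_PENDING':
                self.alerts.append('JoinAccept without pending JoinRequest')
            else:
                self.state = 'ACTIVATED'
        else:
            self.alerts.append(f'unexpected event {event}')
```

Note that a replayed JoinRequest is perfectly well-formed at the packet level; it only becomes suspicious against the twin's accumulated state, which is exactly the class of protocol-conformant-yet-invalid behavior the abstract describes.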

Article
Computer Science and Mathematics
Computer Networks and Communications

Ahmed Lateef Salih Al-Karawi, Rafet Akdeniz

Abstract: Fifth-generation (5G) networks face critical security challenges in device authentication for massive Internet of Things deployments while preserving privacy. Traditional federated learning approaches depend on computationally expensive homomorphic encryption to protect model gradients, resulting in substantial latency, communication overhead, and energy consumption impractical for resource-constrained 5G devices. This paper proposes zero-knowledge federated learning (ZK-FL), eliminating homomorphic encryption by enabling devices to prove model correctness without revealing gradients. Our approach integrates zero-knowledge proofs with FL updates, where each device generates a proof Proof_i = ZK(Gradient_i, Hash_i) demonstrating computational integrity. Experimental results from 10,000 authentication attempts demonstrate that ZK-FL achieves 78.4 ms average authentication latency versus 342.5 ms for homomorphic encryption-based FL (77% reduction), proof sizes of 0.128 KB versus 512 KB (99.97% reduction), and energy consumption of 284.5 mJ versus 6.525 J (95% reduction), while maintaining a 99.3% authentication success rate with formal privacy guarantees. These results demonstrate that ZK-FL enables practical privacy-preserving authentication for massive-scale 5G deployment.



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2026 MDPI (Basel, Switzerland) unless otherwise stated