
Learning Automata-Based Enhancements to RPL: Pioneering Load-Balancing and Traffic Management in IoT

This version is not peer-reviewed.
Submitted: 16 July 2024; Posted: 17 July 2024

Abstract
The Internet of Things (IoT) signifies a revolutionary technological advancement, enhancing various applications through device interconnectivity while introducing significant challenges due to these devices' limited hardware and communication capabilities. To navigate these complexities, the Internet Engineering Task Force (IETF) has tailored the Routing Protocol for Low-Power and Lossy Networks (RPL) to meet the unique demands of IoT environments. However, RPL struggles with traffic congestion and load distribution issues, negatively impacting network performance and reliability. This paper presents a novel enhancement to RPL by integrating learning automata designed to optimize network traffic distribution. This enhanced protocol, the Learning Automata-based Load-Aware RPL (LALARPL), dynamically adjusts routing decisions based on real-time network conditions, achieving more effective load balancing and significantly reducing network congestion. Extensive simulations reveal that this approach outperforms existing methodologies, leading to notable improvements in packet delivery rates, end-to-end delay, and energy efficiency. The findings highlight the potential of our approach to enhance IoT network operations and extend the lifespan of network components. The effectiveness of learning automata in refining routing processes within RPL offers valuable insights that may drive future advancements in IoT networking, aiming for more robust, efficient, and sustainable network architectures.
Subject: Computer Science and Mathematics - Computer Networks and Communications

1. Introduction

The Internet of Things (IoT) heralds an era of unprecedented connectivity, where the digital and physical realms converge through a vast network of embedded devices [1,2]. This paradigm encompasses a diverse range of applications, from enhancing the livability of smart cities and optimizing industrial processes to advancing healthcare outcomes and creating more efficient energy management systems. The proliferation of IoT devices, projected to exceed 75.44 billion by 2025, underscores the transformative potential of this technology to reshape our interactions with the world around us. Yet, as the fabric of the IoT continues to expand, it brings to the fore complex challenges that necessitate robust, scalable, and efficient networking protocols.
Low-Power and Lossy Networks (LLNs) are at the heart of IoT infrastructures, characterized by resource-constrained devices and inherently unreliable communication links [3,4]. These networks are the backbone of IoT, facilitating the seamless data exchange among many connected devices. To cater to the unique demands of LLNs, the Internet Engineering Task Force (IETF) introduced the Routing Protocol for Low-Power and Lossy Networks (RPL) [5,6,7,8]. As the de facto standard for IoT networking, RPL is engineered to navigate the intricacies of LLNs, offering a flexible framework to support diverse traffic flows, including multipoint-to-point (MP2P), point-to-point (P2P), and point-to-multipoint (P2MP) communications. Despite its comprehensive design, RPL encounters formidable challenges that could stymie the IoT’s growth and operational efficiency [9,10].
Among the myriad challenges RPL faces, congestion emerges as a critical bottleneck, severely impacting network performance [11,12]. Congestion in IoT networks, particularly in RPL-based LLNs, manifests as accumulating data packets at certain nodes, leading to increased packet loss, latency, and energy consumption. This congestion is primarily attributed to IoT devices’ high density and sporadic yet intense data transmission patterns, which overwhelm network resources. Furthermore, the inherent limitations of LLNs, such as restricted bandwidth and variable link quality, exacerbate the situation, making congestion control and load balancing paramount to the sustainability and reliability of IoT networks.
Effective congestion management and load balancing within RPL are not merely technical requisites but are imperative to realizing the full potential of the IoT [13,14,15]. These mechanisms ensure equitable network traffic distribution, preventing overload situations and optimizing the use of scarce resources. By addressing congestion proactively, RPL can enhance the quality of service (QoS), extend network lifetime, and ensure reliable data delivery—factors crucial for the success of IoT applications, from critical healthcare monitoring systems to real-time industrial automation processes.
This paper explores enhancements to the Routing Protocol for Low-Power and Lossy Networks (RPL) by introducing the Learning Automata-based Load-Aware RPL (LALARPL) algorithm, aimed at improving load balancing in IoT networks. Initially, the paper highlights the primary challenges encountered in RPL, focusing on congestion control both in broader IoT environments and specifically within RPL networks, and then examines the vital role and existing methods of load balancing in these networks (Section 2). The subsequent sections detail the Learning Automata framework used to optimize routing decisions (Section 3) and provide a thorough description of the proposed LALARPL algorithm (Section 4). The effectiveness of LALARPL is rigorously tested through simulations that assess various performance metrics, including packet delivery ratio, throughput, fairness in throughput, end-to-end delay, energy consumption fairness, and network lifetime (Section 5). The paper concludes by synthesizing the findings, affirming the advantages of LALARPL in boosting network performance and reliability, and suggesting directions for future research, including the potential integration of mobility features to better accommodate dynamic network environments (Section 6). This structure ensures a systematic exploration of LALARPL’s potential to revolutionize RPL implementations in IoT settings.

2. RPL Challenges

2.1. Congestion Problem in IoT

Congestion occurs in a network when resource demand (e.g., bandwidth, routing capacity, buffer space) exceeds the available supply, leading to performance degradation [16]. In the context of networks, especially in IoT environments, congestion can lead to packet loss, increased latency, lower throughput, and inefficient energy use, which is particularly critical for battery-powered devices. Congestion in networks signifies a state where network resources are overwhelmed due to excessive data packets sent through the network, leading to a bottleneck [17,18]. This situation results in a series of adverse effects, such as:
  • Packet Loss: When a network experiences congestion, the data packets arrive at a rate that exceeds the buffer’s capacity to store them for processing and forwarding. IoT devices, such as wireless sensor network nodes and LLN devices, have finite memory allocated for buffering packets. Once this memory (buffer) is filled due to congestion, any additional incoming packets cannot be accommodated, leading to what’s known as packet loss. This loss necessitates retransmissions if reliability is a requirement of the communication, which further contributes to the congestion, creating a feedback loop that exacerbates the network’s congested state.
  • Increased Latency: Latency refers to the time a packet takes to travel from its source to its destination. Under congestion, packets queue up at network nodes, waiting for their turn to be processed and forwarded. This queuing delay is a significant contributor to overall network latency. The more congested a network is, the longer the queues at network nodes are and, consequently, the higher the latency. For real-time applications and time-sensitive monitoring, increased latency can severely degrade service quality, causing delays or lags that impact user experience and can disrupt cyber-physical systems.
  • Lower Throughput: Throughput in a network is the rate at which data is successfully delivered over a communication channel. Congestion leads to packet loss and increased latency, lowering the network’s effective throughput. As packets are dropped or delayed, the data transmission rate effectively decreases.
  • Energy Waste: Congestion introduces significant energy waste in wireless sensor networks and IoT devices, which are often battery-powered and designed to operate efficiently to prolong battery life. When packets are lost due to congestion, they frequently need to be retransmitted to ensure the information reaches its intended destination. Each retransmission consumes energy for both the retransmitting device and potentially other devices in the network that participate in forwarding the packet. Moreover, devices in a congested network may need to stay in a higher power state longer to deal with the congestion, increasing energy consumption. This unnecessary energy expenditure reduces the overall lifetime of the devices and can be particularly problematic in IoT applications, where devices are expected to operate autonomously for extended periods [1].

2.2. Congestion Control in RPL

Congestion control in networking, especially within the context of the RPL and similar protocols used in IoT applications, is a critical process designed to maintain optimal network performance under varying load conditions. It encompasses a set of mechanisms and strategies to prevent or mitigate the negative effects of congestion—such as packet loss, increased latency, and decreased throughput—that occur when network traffic exceeds its capacity to process and forward data efficiently [16,19]. Effective congestion control ensures that data flows smoothly through the network, optimizing resource utilization and enhancing the reliability and efficiency of data transmission, particularly in environments where network resources are constrained and must be judiciously managed to support the diverse needs of IoT applications [20].
  • Adaptive Retransmission Mechanisms: To further mitigate congestion, RPL can incorporate adaptive retransmission mechanisms that adjust the rate at which data is sent and the criteria under which retransmissions occur. The network can reduce unnecessary data traffic by monitoring the success rates of packet deliveries and dynamically adjusting retransmission strategies. For instance, increasing the retransmission timeout could prevent exacerbating the congestion in conditions where packet loss is due to congestion rather than poor link quality.
  • Proactive Congestion Detection: RPL can be enhanced with proactive congestion detection algorithms that identify potential congestion before it becomes problematic. The network can predict and address congestion risks by analyzing trends in data flow rates, queue lengths, and node capacities. Proactive measures might include rerouting traffic, adjusting transmission rates, or temporarily suspending non-critical data transmissions to allow the network to recover.
  • Multi-Path Routing: To enhance load balancing and traffic distribution, RPL supports multi-path routing [21]. This allows data to be sent over multiple paths, distributing the load evenly across the network and reducing the risk of any single path becoming a bottleneck. Multi-path routing contributes to congestion control and enhances network resilience by providing alternate routes for node or link failures.
  • Energy-Aware Congestion Control: Given the energy constraints of devices in LLNs, RPL’s congestion control mechanisms are designed to be energy-aware. This includes optimizing the trade-off between energy consumption and network performance. For example, while reducing the data transmission rate can alleviate congestion, it can also mean that devices spend more time in active communication states, consuming more energy [22]. RPL aims to balance these factors to maintain network efficiency without unduly depleting node batteries.
  • Integration with Application Layer Protocols: Congestion control in RPL is not limited to the network layer but involves coordination with application layer protocols such as CoAP. By integrating congestion control strategies across layers, RPL ensures that application layer behaviours—such as data generation rates and priority data transmissions—are aligned with the network’s congestion status. This cross-layer approach enables more sophisticated congestion management strategies that can adapt to the specific requirements and behaviours of IoT applications.
  • Community Engagement and Feedback Loops: RPL incorporates mechanisms for community engagement and feedback loops, allowing the network to adapt to its nodes’ collective behaviour. By aggregating feedback on congestion levels, packet loss rates, and throughput across different parts of the network, RPL can adjust its overall congestion control strategies to reflect the current network conditions. This collective intelligence approach ensures that RPL remains responsive to the dynamic nature of LLNs.

2.3. Load Balancing in RPL

In the expansive realm of the IoT, addressing load balancing in the RPL protocol presents a unique and pivotal strategy for orchestrating the distribution of computational and communication tasks across a vast network of heterogeneous devices. This orchestration transcends traditional network traffic management by optimizing the use of every iota of resource available, from bandwidth to processing power, thus ensuring that the network operates at its zenith of efficiency and reliability. The challenges stem from the inherent characteristics of IoT networks, such as resource constraints, dynamic network topologies, and the diverse requirements of IoT applications, necessitating meticulous management to conserve energy while fulfilling their roles [23].
The significance of load balancing in such a diversified ecosystem, facilitated by the RPL protocol within IoT, cannot be overstated. It is the linchpin that maintains equilibrium within the network, ensuring that no single node bears an undue burden that could precipitate its premature energy depletion or failure. By judiciously apportioning tasks, load balancing minimizes latency, optimizes energy use, and, by extension, amplifies the network’s operational longevity and resilience. This harmonization of disparate device capabilities, ranging from robust servers that anchor the network to the myriad energy-constrained sensors populating WSNs, is particularly crucial in applications where data integrity and timeliness are paramount, underscoring the indispensable role of load balancing in the fabric of IoT.
Critical challenges in implementing effective load balancing in the RPL protocol underscore the need for advanced strategies that address IoT devices and networks’ unique demands and capacities. These efforts aim to create a seamless IoT ecosystem where devices can communicate and cooperate regardless of their underlying technology, ensuring a cohesive and efficient network operation [11,24,25,26,27]. Here are some of the critical challenges in implementing effective load balancing in RPL protocol:
  • Resource Constraints: IoT devices typically operate with limited computational power, memory, and energy supply. These constraints make it challenging to implement complex load-balancing algorithms that require significant processing power or memory resources. The challenge lies in designing lightweight load-balancing mechanisms that can operate efficiently within the limited capabilities of IoT devices [28].
  • Dynamic Network Topologies: IoT networks are highly dynamic, with devices frequently joining or leaving the network [28,29]. This dynamic nature results in fluctuating network topologies, complicating a balanced load distribution. Load-balancing solutions must be adaptive and capable of responding promptly to changes in the network structure without imposing excessive overhead.
  • Diverse Traffic Patterns: IoT applications can generate highly diverse traffic patterns, ranging from periodic telemetry data transmissions to event-driven alerts. This diversity challenges load-balancing efforts, as algorithms must accommodate varying data rates, packet sizes, and transmission frequencies to prevent congestion and ensure fair resource allocation [10].
  • Congestion Detection and Response: Accurately detecting congestion in its early stages is critical for preemptive load balancing. However, given the constrained and lossy nature of IoT networks and RPL, traditional congestion detection mechanisms may not be directly applicable. Moreover, once congestion is detected, the protocol must swiftly reroute traffic without exacerbating the network’s energy consumption or disrupting ongoing communications.
  • Energy-Efficiency Considerations: In battery-powered IoT devices, energy conservation is paramount [30]. Load balancing strategies must distribute network traffic evenly and minimize energy consumption. This requirement often leads to trade-offs between achieving optimal load distribution and prolonging the network’s operational lifetime.
  • Compatibility and Standardization: Ensuring that load balancing enhancements are compatible with the existing RPL specifications and interoperable across different implementations is crucial [23]. Any modification to the RPL protocol to support load balancing must adhere to standardization efforts to facilitate widespread adoption and maintain network interoperability.
  • QoS Requirements: IoT applications may have specific QoS requirements, such as low latency or high reliability [23]. Load balancing mechanisms must meet these application-specific demands while managing network resources efficiently. Balancing the QoS requirements with the goal of load balancing adds another layer of complexity to the design and implementation of RPL enhancements.
To provide a solid foundation for understanding the evolution and enhancement of the RPL in the context of IoT environments, we have meticulously selected a collection of pivotal research papers from 2020 to 2024. The aim was to encapsulate the breadth of innovation and strategic advancements in RPL to address critical issues such as energy efficiency, load balancing, and network reliability. Our selection criteria were anchored in identifying works that proposed novel enhancements or modifications to RPL and demonstrated tangible improvements in network performance metrics such as packet delivery ratio, energy consumption, and network lifetime (Table 1). This was predicated on the understanding that the IoT paradigm is rapidly evolving, with escalating demands for more resilient, efficient, and scalable network routing protocols that cater to the burgeoning array of IoT devices and applications.
The ensuing comparison table serves as a panoramic snapshot, showcasing the diversity and ingenuity of approaches undertaken by researchers to fortify RPL against the multifaceted challenges intrinsic to LLNs. Through (Table 2), readers can discern the trajectory of RPL enhancements over the years, where each entry highlights the primary aim, strategy, and strengths of the respective studies. This illuminates the progressive strides made in optimizing RPL for IoT applications and provides a clear vantage point from which to appreciate the nuanced strategies devised to strike a balance between competing objectives such as energy efficiency, network stability, and load distribution. By aggregating these insights, the table acts as a crucial reference point for stakeholders in the IoT ecosystem seeking to navigate the complexities of network routing in LLNs, offering a clear lens through which to gauge the state-of-the-art in RPL enhancements.
Figure 1. Parameter occurrence over the years 2020-2024.

3. Learning Automata

A Learning Automaton (LA) is a model for decision-making in stochastic environments that dynamically adjusts its strategies based on feedback to optimize overall performance. This section outlines Learning Automata’s operational principles and implementation details, focusing on the Linear Reward-Penalty (L_R-P) update scheme [67]. Below are key definitions related to the components of a Learning Automaton (see Figure 2):
  • Environment: The dynamic setting in which the automaton operates. It is a source of stimuli where the automaton performs actions and receives feedback based on the outcomes of these actions.
  • Actions: These are the decisions or moves the automaton can make. The probability of choosing each action is updated continuously based on the feedback received from the environment.
  • Response: The feedback from the environment following an action, which can be positive (reward) or negative (penalty), influencing how the probabilities of actions are updated.
  • Learning Algorithm: The method used to update the probabilities of actions in response to the feedback, to enhance the efficiency and effectiveness of the decision-making process.

3.1. Pseudocode for Learning Automaton

The pseudocode below demonstrates the implementation of a Learning Automaton using the Linear Reward-Penalty (L_R-P) update scheme. This scheme adjusts the probability of selecting actions based on the feedback received, optimizing the automaton’s responses to environmental changes.
Algorithm 1 Learning Automaton Procedure
Input: Environment
Output: Updated probability vector P
Initialize P with equal probabilities for each action
while not terminate_condition do
    Select an action based on the probability distribution P
    Perform the action in the environment
    Receive feedback (reward or penalty) from the environment
    if feedback is reward then
        P[selected_action] ← P[selected_action] + α · (1 − P[selected_action])
        for each action j ≠ selected_action do
            P[j] ← (1 − α) · P[j]
        end for
    else if feedback is penalty then
        P[selected_action] ← (1 − β) · P[selected_action]
        for each action j ≠ selected_action do
            P[j] ← β / (N − 1) + (1 − β) · P[j]
        end for
    end if
    Check the terminate_condition
end while
return P

3.2. Linear Reward-Penalty Scheme

The Linear Reward-Penalty (LRP) scheme described here is a method used in reinforcement learning to adjust the probabilities of selecting certain actions based on the feedback (reward or penalty) received from the environment. This type of scheme is especially relevant in contexts where decisions are probabilistic and learning from the outcomes is necessary to improve performance over time. Here’s a detailed breakdown of the scheme:
  • Updating Probabilities When an Action is Rewarded:
    When action α_i is rewarded, the probability of choosing this action in the next time step, p_i(n+1), is increased. The formula used is (Equation 1):
    p_i(n+1) = p_i(n) + α · (1 − p_i(n))
    This equation means that the new probability p_i(n+1) is the old probability p_i(n) increased by a fraction α of the remaining probability space 1 − p_i(n). The parameter α controls how much the probability increases; a larger α results in a larger increase.
    For all other actions j ≠ i, the probability of selecting each of these actions is reduced proportionally (Equation 2):
    p_j(n+1) = (1 − α) · p_j(n)
    Here, each other action’s probability is scaled down by the factor (1 − α), ensuring that the total probability across all actions remains equal to 1.
  • Updating Probabilities When an Action is Penalized:
    Conversely, if action α_i is penalized, its probability is reduced (Equation 3):
    p_i(n+1) = (1 − β) · p_i(n)
    In this equation, β is the penalty parameter. The new probability p_i(n+1) is the previous probability p_i(n) scaled down by the factor (1 − β), meaning a larger β decreases the probability more significantly.
    For all other actions j ≠ i, their probabilities are updated as follows (Equation 4):
    p_j(n+1) = β / (r − 1) + (1 − β) · p_j(n)
    This adjustment ensures that the total probability still sums to 1. The increase for each other action is partly a fixed amount β / (r − 1), which redistributes the reduced probability of the penalized action equally among them, and partly the old probability scaled down by (1 − β).
  • Parameters and their Roles:
    • α (reward parameter): Determines how strongly an action’s probability is increased upon receiving a reward.
    • β (penalty parameter): Determines how strongly an action’s probability decreases after being penalized.
    • r (total number of actions): Affects the redistribution of probabilities when an action is penalized.
    The LRP scheme is a straightforward yet effective adaptive strategy for balancing exploration (trying out different actions) and exploitation (favoring actions that have previously led to positive outcomes) in environments where actions have probabilistic outcomes. This method ensures that actions that lead to success are more likely to be chosen in the future, while those that lead to negative outcomes are less likely to be repeated.
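To make the L_R-P update concrete, the following minimal Python sketch implements the scheme exactly as defined by Equations 1 through 4. The class name, parameter defaults, and the use of a simple list for the probability vector are illustrative choices, not prescribed by the scheme itself.

# Illustrative sketch of a learning automaton with the linear reward-penalty
# (L_R-P) update scheme; alpha, beta and the action count mirror Equations 1-4.
import random

class LearningAutomaton:
    def __init__(self, num_actions, alpha=0.1, beta=0.1):
        self.alpha = alpha                               # reward parameter
        self.beta = beta                                 # penalty parameter
        self.p = [1.0 / num_actions] * num_actions       # equal initial probabilities

    def select_action(self):
        # Sample an action index according to the current probability vector.
        return random.choices(range(len(self.p)), weights=self.p, k=1)[0]

    def update(self, action, rewarded):
        n = len(self.p)
        if rewarded:
            # Equation 1: increase the chosen action's probability.
            self.p[action] += self.alpha * (1.0 - self.p[action])
            # Equation 2: scale down all other actions.
            for j in range(n):
                if j != action:
                    self.p[j] *= (1.0 - self.alpha)
        else:
            # Equation 3: decrease the chosen action's probability.
            self.p[action] *= (1.0 - self.beta)
            # Equation 4: redistribute probability mass to the other actions.
            for j in range(n):
                if j != action:
                    self.p[j] = self.beta / (n - 1) + (1.0 - self.beta) * self.p[j]

For instance, an automaton created with three actions starts at probabilities [1/3, 1/3, 1/3]; a reward for action 0 with α = 0.1 moves the vector to [0.4, 0.3, 0.3], and the vector continues to sum to 1 after every update.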

4. Proposed LALARPL

This section comprehensively elucidates the Learning Automata-based Load-Aware RPL (LALARPL), a distributed load-balancing algorithm tailored for RPL. Aimed at significantly enhancing network efficiency and reliability, LALARPL dynamically distributes traffic loads across pathways within LLN environments. Central to the LALARPL framework is the Traffic Index (TI) parameter, devised to quantify the traffic load burden accurately. The TI of a designated parent node i is mathematically expressed as (Equation 5):
TI_i = ( Σ_{k ∈ N} θ_{k,i} × T_k ) / CB_i
Herein, TI_i represents the traffic index pertinent to a specific parent node i. The coefficient θ_{k,i} signifies the traffic contribution from a child node k routed via the parent node i. The term T_k denotes the aggregate traffic generated by node k, and CB_i encapsulates the capacity or bandwidth accessible to the parent node i. Notably, TI values are restricted to a range between zero and one, with a value of one indicating full utilization of a parent node’s capacity. LALARPL’s methodology is bifurcated into two key phases: Parent Set Formation and Load Balancing:
  • Parent Set Formation: Initially, a collection of proximate parents is formulated for every child node. This phase solely utilizes DIO-indicator packets, distinguishing it from subsequent procedures. During this phase, parent sets are established for each node through a process where parent nodes broadcast DIO-indicator packets within their communicable range. These packets are characterized by three specific fields: the IP address of the broadcasting node, the minimal number of hops to the root from a parent node, and the parent node’s traffic index. Prior to the transmission of a DIO-indicator packet, the aforementioned fields are initialized by the parent node. Upon receipt of DIO-indicator packets, other nodes in the network undertake the update of their parent tables, adjusting the sending priorities based on the data received within their respective parent sets as delineated below:
    • Should a node receive merely a singular DIO-indicator packet, the information contained within is recorded in its parent set table, and the originating node is incorporated into its parent set with a selection probability assigned as 1.
    • Conversely, in scenarios where multiple DIO-indicator packets are received, nodes are tasked with selecting between a minimum of two and a maximum of five parent nodes. The selection criteria encompass proximity (regarding the number of hops) and the residual energy of the potential parent nodes. These chosen nodes are added to the recipient node’s parent set or routing path. The formulation for the selection probability of each potential parent node is given as follows (Equation  6):
      P_i = ζ · ( (1 / numhop_i) / Σ_{j=1..N} (1 / numhop_j) ) + (1 − ζ) · ( T_i / Σ_{j=1..N} T_j ),  for all i ∈ N
      Herein:
      • i indexes the enumeration of parent senders,
      • T_i quantifies the traffic index associated with parent i,
      • numhop_i specifies the number of hops from node i to the network root,
      • N represents the total count of potential parent nodes under consideration,
      • ζ is a weighting parameter within the interval [0, 1], modulating the relative contributions of hop count and traffic index to the selection probability (an illustrative sketch of this selection rule is provided after Algorithm 2 below).
  • Distributed Load Balancing: Following the parent set formation stage, the procedure for data transmission commences. Child nodes initiate the transmission of data packets toward their selected parent nodes, as indicated within their parent set tables. Concomitantly, parent nodes respond by dispatching Acknowledgment (Ack) packets, which encapsulate the sender’s identification number and the traffic index. To curtail the message volume and consequently diminish the network load, a parameter designated as p is introduced. This parameter dictates that each node transmits a set of p data packets to a predetermined parent node, which issues a single Ack packet in response to the accumulation of p packets. Additionally, the architecture of the routing tables is elucidated, revealing four principal fields: the Parent IP address, the selection probability assigned to that parent, the Traffic Index, and the hop count to the network root. An integral feature of the algorithm is the incorporation of a learning automaton within each child node. This automaton executes operations correlating with the number of parent nodes listed in the node’s routing table or parent set, thereby enabling adaptive decision-making based on dynamic network conditions. The methodology governing the issuance of rewards or penalties after the receipt of an Ack packet is articulated as follows:
    A reward is allocated when the parent’s traffic index (the Ack’s sender) is observed to be less than 50% of the average traffic index pertinent to other parents within the identical set. 
    Should the traffic index exceed 50% yet remain below 80% of the average traffic index of the other parents in the set, and the parent node exhibits the minimal number of hops to the root, a reward is similarly conferred. 
    Conversely, a penalty is imposed when the traffic index surpasses the average traffic index associated with other parents in the set.
Algorithm 2 Load Balancing Algorithm based on Learning Automata
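As an illustration of the parent set formation phase, the short Python sketch below computes the Traffic Index of Equation 5 and the selection probabilities of Equation 6 from the fields carried in DIO-indicator packets. The dictionary-based data structures and helper names are assumptions made for readability, not part of the protocol specification, and the traffic term follows the reconstructed form of Equation 6.

# Hypothetical sketch of Equations 5 and 6; data structures are assumptions.

def traffic_index(routed_traffic, capacity):
    # routed_traffic: traffic theta_{k,i} * T_k routed by each child k via this parent
    # capacity: bandwidth CB_i available to the parent node
    return sum(routed_traffic.values()) / capacity       # Equation 5, value in [0, 1]

def selection_probabilities(parents, zeta=0.5):
    # parents: {parent_ip: {"numhop": hops to root, "ti": advertised traffic index}}
    inv_hop_sum = sum(1.0 / p["numhop"] for p in parents.values())
    ti_sum = sum(p["ti"] for p in parents.values()) or 1.0   # guard against all-zero TI
    probs = {}
    for ip, p in parents.items():
        hop_term = (1.0 / p["numhop"]) / inv_hop_sum      # favours parents closer to the root
        ti_term = p["ti"] / ti_sum                        # traffic-index share
        probs[ip] = zeta * hop_term + (1.0 - zeta) * ti_term   # Equation 6
    return probs

In line with the description above, a child node would retain between two and five of the highest-probability entries as its parent set.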
The formulations for rewards and penalties are articulated as follows (Equations 7 and 8):
α = α_1 + ( δ · TI_i + f(maxhop, numhop_i, γ) ) / ( δ · max(TI) + g(maxhop, ξ) ) + c_1
β = α_2 + ( δ · h(avgTI, TI_i, η) + numhop_i ) / ( δ · avgTI + g(maxhop, ξ) ) + c_2
Where:
  • f(x, y, γ) = x · e^(−γ·y), with γ being a damping factor that adjusts the impact of hop count differences.
  • g(x, ξ) = ξ · ln(x + 1), where ξ helps modulate the influence of the maximum hop count dynamically.
  • h(x, y, η) = η · (x − y)², where η scales the squared difference between the average and individual traffic indices, emphasizing deviations.
  • max(TI) and avgTI are computed as the maximum and average traffic indices among the parent set, potentially including more sophisticated aggregation rules based on network topology or traffic patterns.
In simulations of network traffic, it was observed that increasing the hop count linearly increases the network delay under low traffic conditions. However, as traffic intensifies, the delay increases exponentially. The function f ( max hop , num hop i , γ ) was calibrated with real data to model this behaviour accurately, significantly improving the predictive performance of the model in high-traffic scenarios.
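To clarify how a received Ack triggers the automaton update, the following sketch encodes the three reward/penalty rules listed earlier in this section. It deliberately does not reproduce the full α and β expressions of Equations 7 and 8, and the handling of the case not covered by the three rules (returning None, i.e., no update) is an assumption.

def reward_or_penalty(ack_ti, other_parent_tis, ack_numhop, other_numhops):
    # ack_ti: traffic index reported by the Ack's sender
    # other_parent_tis: traffic indices of the remaining parents in the same set
    # ack_numhop / other_numhops: hop counts to the root
    avg_ti = sum(other_parent_tis) / len(other_parent_tis)
    if ack_ti < 0.5 * avg_ti:
        return True          # rule 1: lightly loaded parent, reward
    if 0.5 * avg_ti <= ack_ti < 0.8 * avg_ti and ack_numhop <= min(other_numhops):
        return True          # rule 2: moderate load but minimal hop count, reward
    if ack_ti > avg_ti:
        return False         # rule 3: parent more loaded than average, penalty
    return None              # uncovered case: leave probabilities unchanged (assumption)

The boolean result would then be passed to the update() routine of the learning automaton sketched in Section 3, which adjusts the selection probabilities stored in the child’s parent table.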

5. Simulation Results

In the conducted study, the performance of the proposed Learning Automata-based Load-Aware Routing Protocol for LLNs (LALARPL) was meticulously evaluated and then compared against seven other protocols specifically designed for IoT/LLNs, namely ECLRPL [25], FAHP-OF [55], NUCRPL [43], WRF-RPL [45], TLR [64], LBS-RPL [54], and CEA-RPL [61]. Using the NS-2 discrete event simulator, a detailed assessment was conducted within a simulated environment designed to emulate potential real-world IoT deployment scenarios closely. This environment, characterized by a 1000 × 1000 meter area populated under three different scenarios with 50, 100, and 150 static nodes, provided a comprehensive backdrop for examining the operational performance of each protocol, with the simulation parameters elaborately detailed in Table 3.
Crucial performance metrics, including Jain’s Fairness Index in Throughput, were utilized to assess the equitable allocation of network resources across all nodes—a fundamental aspect to ensure fair bandwidth distribution within the IoT ecosystem. Additionally, the Packet Delivery Ratio (PDR) and Latency were thoroughly investigated as key indicators of each protocol’s reliability and efficiency. They measured the success rate of packet deliveries and the time required for packets to traverse from their origin to their intended destinations, respectively. The inclusion of control message sizes for DIO, DAO, DIS, and DAO-Ack, as well as traffic rates defined by λ = 0.1 packets per second and λ = 0.2 packets per second, further enriched the simulation’s scope. This methodological approach highlighted the comparative advantages and potential drawbacks of LALARPL relative to the other considered protocols and provided insightful revelations about the adaptability and operational effectiveness of these routing enhancements in anticipated IoT applications. Such discoveries emphasize the significant impact these protocols can have in shaping network environments to be more resilient, efficient, and fair.

5.1. Packet Delivery Ratio Test

Various factors influence the packet delivery rate in the RPL tree structure. Significant challenges such as link failure, congestion rates, and potential collisions within the network are recognized. These challenges can substantially impact the packet delivery rate. In addition, the PDR is defined as follows (Equation 9):
PDR = ( Σ_{i=1..n} φ_received,i ) / ( Σ_{i=1..n} φ_sent,i )
where:
  • φ_received,i denotes the total data packets successfully received,
  • φ_sent,i represents the total data packets sent.
The Packet Delivery Ratio (PDR) is a critical metric in network performance, reflecting the reliability and efficiency of the routing protocol. The LALARPL method showcases significant advancements in this domain by integrating a graph degree restriction for optimized graph formation and multipathing mechanisms for alternate parent selection. These strategies and the implementation of distributed learning automata to weigh the value of parents dynamically have markedly enhanced PDR across various network sizes and packet arrival rates.
In a comparative analysis, the LALARPL method has outperformed its counterparts in all tested scenarios. Specifically, for a network of 50 nodes with a packet arrival rate ( λ ) of 0.1 packets per second, LALARPL recorded a PDR of 0.96, demonstrating an improvement of 1.06 % over the nearest competing protocol, TLR (PDR of 0.95). At the same network size but with a higher packet arrival rate ( λ = 0.2 packets per second), LALARPL’s PDR of 0.91 is 3.41 % better than the next best protocol, LBS-RPL (PDR of 0.9). Expanding the network to 100 nodes, LALARPL’s superiority remains evident. With a λ of 0.1, LALARPL achieved a PDR of 0.93, which is 6.45 % higher than the other methods’ average PDR (0.87). At a λ of 0.2, LALARPL’s PDR of 0.9 is 8.43 % better than the competing protocols’ average PDR (0.83). In denser networks of 150 nodes, LALARPL still maintains its lead. At a packet arrival rate of 0.1, its PDR of 0.9 is 7.06 % higher than the other protocols’ average PDR (0.84). Even at a higher λ of 0.2, LALARPL’s PDR of 0.86 is 10.81 % better than the others’ average PDR (0.77).
These results, depicted in Figure 3, Figure 4, and Figure 5 for 50, 100, and 150 node networks, respectively, underscore the robustness of LALARPL against network scaling and increased traffic. The tabulated and graphically represented data conclusively demonstrate that LALARPL improves PDR in isolation and consistently across varying network conditions, cementing its position as a formidable protocol for load balancing in RPL-based IoT networks.

5.2. Throughput

To derive the throughput for each node, one must consider the node’s data reception and transmission over a specific period. We define the throughput, denoted as θ , for the i-th node in an RPL network. The throughput can be affected by network parameters such as link quality, number of child nodes, and the traffic pattern. Here’s the expression for throughput (Equation 10):
θ_i = ( Σ_j R_{i,j} ) / Δt
where:
  • θ_i is the throughput of the i-th node,
  • R_{i,j} represents the total packets received by node i from its j-th neighbour during the time interval Δt,
  • Δt is the time interval over which the throughput is measured.
Equation 10 captures the packets a node receives from all its neighbours (children) over a given period, a direct measure of throughput. This essential measurement can include other factors, such as packet size, if needed. In the context of load balancing in RPL networks, the aim is to distribute traffic evenly among all available nodes so that no single node becomes a bottleneck, which would degrade overall network performance. Considering factors such as link quality and the number of children of a parent node, we update the throughput formula for node i in an RPL network and present it as follows (Equation 11):
θ_i = ( Σ_j R_{i,j} × LQI_{i,j} ) / ( Δt × log(1 + C_i) )
Where:
  • LQI_{i,j} is the link quality indicator between node i and its j-th neighbour, which affects the reliability and speed of the transmitted data.
  • C_i is the number of child nodes for node i, influencing the load handled by the node.
Equation 11 considers the quantity of data transmitted, the quality of the links, and the structural load on the node, providing a more comprehensive measure of throughput in a load-balanced RPL network. Throughput is a critical metric in assessing the performance of network protocols, particularly in IoT environments where data efficiency is paramount. The LALARPL protocol has demonstrated notable advancements in this regard, as evidenced by its superior throughput rates in simulations with 50, 100, and 150 nodes. By imposing a cap on the number of parent nodes in the children’s list, LALARPL reduces the overhead involved in parent selection and streamlines the decision-making process. This constraint simplifies the network topology and minimizes the chance of creating suboptimal paths, thus enhancing the overall data transmission efficiency.
LALARPL’s dynamic weight assignment for parent nodes, guided by learning automata that adapt based on traffic and congestion, also allows for a responsive and flexible adaptation to changing network conditions. This feature ensures that traffic is distributed more evenly across the network, reducing bottlenecks and enhancing throughput.
In quantitative terms, LALARPL exhibits a remarkable improvement in throughput percentages across different network scales and packet intervals. At 50 nodes with λ = 0.1 , LALARPL’s throughput is 4.74, which is 7.36 % higher than the average throughput of 4.41 from other protocols. With a higher packet interval ( λ = 0.2 ), the improvement is 10.35 % over the average throughput of 8.75. For a network of 100 nodes, the throughput under LALARPL is 10.11 at λ = 0.1 , which is 3.26 % higher than the average of 9.79 for the others. At λ = 0.2 , LALARPL achieves 19.35, presenting a robust 7.38 % improvement over the average throughput of 18.02. At the highest scale tested, with 150 nodes, LALARPL continues to outshine its counterparts. For λ = 0.1 , its throughput of 15.94 surpasses the average of other protocols (13.29) by 19.95 % . At λ = 0.2 , the improvement is a significant 11.61 % over the average throughput of 26.49.
Figure 6, Figure 7, and Figure 8 in the study illustrate these results in detail, showcasing the throughput outcomes for networks with 50, 100, and 150 nodes, respectively. LALARPL’s consistent performance advantage underscores the protocol’s effectiveness in handling network traffic and its capability to sustain higher throughput levels, solidifying its potential for large-scale IoT deployments.

5.3. Jain Fairness Index in Throughput

Integrating advanced throughput calculations into evaluating JFI provides a nuanced understanding of network performance, highlighting efficiency and equity in resource distribution within a load-balanced RPL context. To quantify both the throughput of a child link i to a parent p and the fairness in resource allocation, the throughput and fairness are defined using equation 11. Subsequently, the Jain Fairness Index in Throughput is calculated using the following formula (Equation 12):
F = ( Σ_{i=1..n} θ_i )² / ( n · Σ_{i=1..n} θ_i² )
Where:
  • θ i denotes the throughput for the i-th node, considering the data transmitted, the quality of the links (LQI), and the number of child nodes (C).
  • The numerator, ( Σ_{i=1..n} θ_i )², is the square of the sum of the throughputs of all nodes.
  • The denominator, n · Σ_{i=1..n} θ_i², is the product of the number of nodes and the sum of the squares of the individual throughputs, reflecting the dispersion of throughput across the network and facilitating the assessment of load balancing effectiveness in terms of fairness.
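As a worked illustration of Equations 11 and 12, the sketch below computes the load-aware throughput of a node and the Jain Fairness Index over a set of per-node throughputs. The natural logarithm and the assumption that each node has at least one child (so that log(1 + C_i) > 0) are illustrative choices, since the text does not fix either.

import math

def load_aware_throughput(received, lqi, delta_t, num_children):
    # received[j]: packets received from neighbour j; lqi[j]: link quality indicator LQI_{i,j}
    weighted = sum(received[j] * lqi[j] for j in received)
    return weighted / (delta_t * math.log(1 + num_children))   # Equation 11

def jain_fairness(values):
    # Equation 12: (sum of values)^2 / (n * sum of squared values); 1 means perfect fairness
    n = len(values)
    total = sum(values)
    return total * total / (n * sum(v * v for v in values))

For example, four nodes with throughputs [10, 10, 10, 10] yield an index of 1.0, while [20, 10, 5, 5] yields approximately 0.73, reflecting the less equitable distribution.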
The JFI offers an invaluable metric for assessing the equity of resource allocation among network nodes, particularly regarding throughput. A higher JFI indicates a more equitable distribution of network capacity among nodes, which is crucial for the consistent performance of applications in an IoT environment. The LALARPL protocol makes notable strides in the fairness of throughput distribution. By limiting each child node to a maximum of five parent nodes, the protocol ensures that node throughput remains high without overburdening any single node, thus promoting fair resource allocation and mitigating potential bottlenecks. Incorporating learning automata within LALARPL enhances the protocol’s adaptive capabilities, allowing for dynamic adjustments based on real-time network conditions. This adaptability is especially beneficial for managing network traffic, avoiding congestion, and preventing bottlenecks—all of which contribute to maintaining high JFI values.
LALARPL demonstrates superior fairness in various network configurations and packet intervals by examining the provided simulation data. For a network with 50 nodes at λ = 0.1 packet/s, LALARPL achieves a JFI of 0.89, which is 2.56 % higher than the average JFI of 0.868 of other protocols. The improvement is substantial at λ = 0.2 packet/s, at 5.88 % over the average JFI of 0.822. When the network scales up to 100 nodes, the JFI improvement is still evident. At λ = 0.1 packet/s, LALARPL’s JFI of 0.93 is 4.49 % higher than the average JFI of 0.89 of the other protocols. At the increased packet interval ( λ = 0.2 ), LALARPL’s JFI of 0.91 represents a considerable 9.64 % improvement over the average JFI of 0.83. At 150 nodes, LALARPL’s advantages persist. With λ = 0.1 packet/s, LALARPL’s JFI of 0.88 is 4.76 % better than the average JFI of 0.839. At a packet interval of λ = 0.2 packet/s, LALARPL records a JFI of 0.87, showcasing an impressive 13.33 % improvement over the average JFI of 0.768.
These fairness improvements demonstrate the effectiveness of LALARPL in managing network resources and indicate its potential for reducing the negative impacts of increased traffic load and congestion. By employing a learning-based approach, LALARPL can continuously optimize its behaviour to sustain fairness and efficiency, even as network dynamics evolve. The results of these simulations are visually detailed in Figure 9, Figure 10, and Figure 11 for networks of 50, 100, and 150 nodes, respectively, reflecting the robust and equitable throughput performance of LALARPL across different network scales and traffic conditions.

5.4. Average End to End Delay

The Average End-to-End Delay (AEED) is a vital statistic that reflects the time data packets take to travel from the source to the destination in a network. It encapsulates the propagation and transmission delays, the processing time at each hop along the route, queuing delays, and possible retransmissions due to errors or packet losses. Latency in networking is defined as the time required for a packet to traverse from the source to its destination. Unlike some methodologies that compute this metric in an aggregated end-to-end manner, our proposed method analyzes latency step by step. This approach ensures that the latency of a packet encompasses the cumulative delays incurred at each node, including processing, queuing, transmission, and propagation times. As a result, minimizing these individual components effectively reduces the overall network latency. The delay at each node i is detailed in Equation 13:
NodeDelay_i = Proc_Delay_i + Queue_Delay_i + Trans_Delay_i + Prop_Delay_i
Where:
  • Proc_Delay_i is the processing delay at node i,
  • Queue_Delay_i is the queuing delay at node i,
  • Trans_Delay_i is the transmission delay at node i,
  • Prop_Delay_i is the propagation delay from node i.
Additionally, the latency experienced over each link for parent p is represented through the Link Delay Index, which aggregates the delays across different links to provide a metric for assessing packet transmission efficiency, especially in dynamic networking environments. This index is defined as follows (Equation 14):
LinkDelayIndex_i^p = Σ Delay_i^p
Another critical metric in IoT networks that utilize the RPL protocol is the average end-to-end delay, which assesses the network’s responsiveness and operational efficiency. This delay, denoted as Δ , is computed using the formula (Equation 15):
Δ = (1 / N) · Σ_{i=1..N} τ_i
Where:
  • N is the total number of packets successfully delivered during the observation period,
  • τ i represents the time the i-th packet takes to travel from its source to its destination.
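A minimal sketch of the per-node delay of Equation 13 and the average end-to-end delay of Equation 15 follows; the per-packet delay is accumulated hop by hop, matching the step-by-step view described above, and all quantities are assumed to share the same time unit.

def node_delay(proc, queue, trans, prop):
    # Equation 13: total delay contributed by a single node
    return proc + queue + trans + prop

def packet_delay(per_node_delays):
    # Step-by-step view: a packet's latency is the sum of the delays at every hop
    return sum(per_node_delays)

def average_end_to_end_delay(packet_delays):
    # Equation 15: mean of the per-packet source-to-destination times tau_i
    return sum(packet_delays) / len(packet_delays)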
LALARPL’s commendable performance in reducing AEED can be attributed to several innovative protocol aspects. By limiting the number of potential parent nodes for any child node, LALARPL ensures a less congested and more streamlined path selection process, thereby reducing queuing delays. Furthermore, the protocol’s learning automata dynamically adjust the weights of various routing parameters, such as link quality, node congestion levels, and transmission rates, which results in a more efficient route selection and fewer packet retransmissions.
Analyzing the simulation results, we can deduce that LALARPL consistently outperforms the other protocols in terms of AEED across different network sizes and packet intervals. Specifically, in a network with 50 nodes at λ = 0.1 packet/s, LALARPL achieves an AEED of 11.064 milliseconds, which is approximately 5.14 % lower than the next best protocol, WRF-RPL, with an AEED of 11.089 milliseconds. At a packet interval of λ = 0.2 packet/s, the improvement is more pronounced at 11.88 % over the average AEED of 17.33 milliseconds of the other protocols. In a larger network of 100 nodes, LALARPL continues to demonstrate its efficacy. At λ = 0.1 packet/s, LALARPL’s AEED is 13.564 milliseconds, which is 7.52 % better than the average AEED of 14.662 milliseconds of the competing protocols. For λ = 0.2 packet/s, LALARPL’s performance improvement is 10.06 % over the average AEED of 23.761 milliseconds. Scaling up to 150 nodes, the advantage of LALARPL is still significant. With λ = 0.1 packet/s, LALARPL’s AEED is 16.061 milliseconds, offering a 4.92 % improvement over the average AEED of 16.887 milliseconds of the other protocols. At λ = 0.2 packet/s, LALARPL presents a 7.07 % better AEED, measuring 26.245 milliseconds compared to the average of 28.235 milliseconds.
The cumulative effect of these optimizations is clearly illustrated in Figure 12, Figure 13, and Figure 14 (referencing the figures provided for AEED results in networks of 50, 100, and 150 nodes, respectively). These improvements are not only indicative of LALARPL’s potential in enhancing network timeliness but also highlight its robustness in reducing latency, particularly in IoT environments that demand timely data delivery.

5.5. JFI in Energy Consumption

Given the limited power resources in such environments, energy consumption in RPL networks for IoT applications is critical. This metric is influenced by the energy expended during transmission, reception, idle, and sleep modes. The total energy consumption E_total,i for node i can be calculated by considering the power consumed in each operational state. Equation 16 describes this:
E_total,i = P_T · t_T + P_R · t_R + P_I · t_I + P_S · t_S
Where:
  • P_T, P_R, P_I, P_S represent the power consumption while transmitting, receiving, idling, and sleeping, respectively,
  • t_T, t_R, t_I, t_S are the durations spent in each corresponding state.
This equation provides a comprehensive assessment of the energy utilization for each node. The Jain’s Fairness Index (JFI) can be employed to evaluate the fairness of energy consumption across the network. The JFI for energy consumption J F I E is calculated as follows (Equation 17):
JFI_E = ( Σ_{i=1..n} E_total,i )² / ( n · Σ_{i=1..n} E_total,i² )
where n is the number of nodes in the network, and E_total,i is the total energy consumption of the i-th node.
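The sketch below restates Equations 16 and 17: per-node energy is accumulated over the four operational states, and the same Jain formulation used for throughput is then applied to the per-node energy totals. State powers and durations are assumed to be given in consistent units (e.g., watts and seconds).

def total_energy(p_tx, t_tx, p_rx, t_rx, p_idle, t_idle, p_sleep, t_sleep):
    # Equation 16: energy = sum of power x time over transmit, receive, idle and sleep states
    return p_tx * t_tx + p_rx * t_rx + p_idle * t_idle + p_sleep * t_sleep

def jain_fairness_energy(energies):
    # Equation 17: (sum E_i)^2 / (n * sum E_i^2), applied to per-node energy consumption
    n = len(energies)
    total = sum(energies)
    return total * total / (n * sum(e * e for e in energies))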
The JFI in energy consumption provides a crucial measure of how uniformly energy resources are utilized across a network. A high JFI value signifies equitable energy usage among the nodes, which is particularly advantageous in IoT networks, where devices are often constrained by battery life. Strategic energy allocation prolongs the operational longevity of individual nodes and ensures the overall sustainability of the network infrastructure. The LALARPL protocol exhibits exemplary energy fairness, as reflected in its JFI scores across simulations with 50, 100, and 150 nodes at packet intervals of 0.1 and 0.2 packets per second. The protocol’s algorithm optimizes energy use through a combination of innovative routing decisions and dynamic load balancing:
  • Optimized Path Selection: LALARPL carefully chooses routes that minimize energy expenditure, thereby preserving node battery life and reducing the need for frequent transmissions.
  • Load Distribution: By restricting the number of potential parents for a child node, LALARPL prevents the excessive energy drain of any single node, promoting a more uniform energy consumption across the network.
  • Adaptive Learning: Incorporating learning automata in LALARPL enables the network to adapt to changing conditions and optimize the routing dynamically. This adaptation means the network can avoid energy-intensive routes that might lead to retransmissions or increased processing.
Evaluating the simulation outcomes for a network of 50 nodes, LALARPL presents a JFI of 0.92 at λ = 0.1 packet/s, surpassing the next closest protocol, WRF-RPL, by a marginal but meaningful 0.54%. When the packet interval increases to λ = 0.2 packet/s, LALARPL maintains its lead with a JFI of 0.908, outperforming WRF-RPL’s 0.91. The protocol’s advantage becomes more distinct for networks of 100 nodes. LALARPL achieves a JFI of 0.965 at λ = 0.1 , a noteworthy 5.47% improvement over WRF-RPL’s 0.915. At the higher packet rate, the JFI for LALARPL is 0.942, representing a 5.25% increase over WRF-RPL. The trend of enhanced energy fairness with LALARPL continues in the 150-node network scenario. With a λ of 0.1, LALARPL attains a JFI of 0.938, edging out WRF-RPL by 3.63%. At λ = 0.2 , LALARPL’s JFI of 0.915 showcases a remarkable 4.93% improvement over WRF-RPL.
Figure 15, Figure 16, and Figure 17 visualize these enhancements in energy fairness, corresponding to the 50, 100, and 150-node networks, respectively. The data graphically represents the LALARPL protocol’s superior energy distribution management, affirming its suitability for IoT applications where energy efficiency is paramount. The computed percentage improvements are based on LALARPL’s highest JFI scores compared to the best-performing competing protocols in each scenario, highlighting its effectiveness in achieving energy fairness across diverse network scales and traffic conditions.

5.6. Average Lifetime Network

The Average Lifetime Network (ALTN)is a pivotal metric in evaluating the sustainability and operational viability of the RPL in IoT applications. This measure primarily assesses how energy resources are distributed and utilized within the network, reflecting on its ability to handle energy consumption efficiently and equitably among its nodes. The ALTN is instrumental in highlighting the effectiveness of energy management strategies within the network. By tracking the time until the death of each node, the metric provides insights into how well the network avoids or mitigates energy consumption hotspots—areas within the network where nodes may deplete their energy resources prematurely due to high traffic loads or poor distribution of network tasks. Furthermore, the delay in the time of death of the network’s first node is an essential indicator of the network’s overall health. A longer delay suggests that the network’s design and operational protocols effectively balance energy consumption, thereby prolonging the functional period of critical network nodes.
Additionally, the ALTN metric measures the average operational lifetime across all nodes and highlights energy consumption disparities, which are critical for network optimization. Networks with a high ALTN are generally more robust and capable of sustaining operation under varying conditions, which is essential for IoT environments where continuous operation is crucial. Furthermore, analyzing ALTN helps identify nodes or areas within the network that may require redesigns, such as enhancements in routing algorithms or energy harvesting capabilities, to ensure a more balanced energy distribution and longer network lifespan. The ALTN is computed after a specific operational period, typically 1000 seconds, to evaluate the network under a standardized condition. The formula for calculating ALTN, provided in Equation (18), incorporates both the lifetimes of nodes that cease operation during the simulation and a projection for those that survive [68]:
ALTN = ( Σ_{i=1..N−M} t_i + M × L_max ) / N
where:
  • t_i denotes the time of death of the i-th node, crucial for understanding when each node exhausts its energy reserves.
  • N represents the total number of nodes in the network, providing the denominator for averaging the lifetimes.
  • M is the number of nodes still operational at the end of the simulation period, indicating the network’s resilience.
  • L_max signifies the predefined or estimated maximum lifetime credited to the surviving nodes, offering a way to estimate the potential maximum longevity of the network’s operational capability.
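A short sketch of Equation 18 is given below; nodes that die during the run contribute their recorded death times t_i, while the M survivors are credited with the assumed maximum lifetime (written here as max_lifetime, corresponding to L_max above).

def average_lifetime_network(death_times, total_nodes, max_lifetime):
    # death_times: list of t_i for the N - M nodes that died during the simulation
    # total_nodes: N; max_lifetime: lifetime credited to each of the M surviving nodes
    m_alive = total_nodes - len(death_times)
    return (sum(death_times) + m_alive * max_lifetime) / total_nodes   # Equation 18

As an illustrative calculation, a 1000-second run with 50 nodes, of which 5 die at times [600, 700, 800, 850, 900] and the remaining 45 are credited 1000 seconds each, gives (3850 + 45000) / 50 = 977 seconds.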
Analyzing the simulation results for the LALARPL method reveals that the protocol excels at extending the operational lifetime of IoT devices. Figure 18, Figure 19, and Figure 20 correspond to the simulation results of networks with 50, 100, and 150 nodes, respectively, and visually represent the ALTN across these scenarios.
The LALARPL method’s performance can be attributed to several key factors:
  • Efficient Energy Management: By intelligently limiting the communication and processing tasks per node, LALARPL prevents premature energy depletion, which directly translates to a longer ALTN.
  • Learning Automata: The protocol’s use of learning automata ensures that routing decisions are optimized in real-time. This adaptability prevents nodes from expending unnecessary energy, especially in high-traffic conditions, thereby delaying the time to the first node’s death.
  • Load Balancing: LALARPL’s load balancing mechanisms evenly distribute the energy demands across the network, ensuring no single node bears excessive burden. This prevents the formation of energy hotspots and results in a longer average node lifetime.
For the 50-node network at λ = 0.1 packets/s, LALARPL shows a superior ALTN of 0.975, compared to 0.958 for the next-best protocol, WRF-RPL, a 1.78 % improvement. When the packet rate increases to λ = 0.2 packets/s, LALARPL achieves a 2.52 % increase in ALTN over WRF-RPL’s 0.921. With the 100-node network at λ = 0.1 packets/s, LALARPL’s ALTN stands at 0.948, significantly higher than WRF-RPL’s 0.918, marking a 3.27 % improvement. At the higher rate of λ = 0.2 packets/s, the improvement is 0.88 % over WRF-RPL’s 0.908. For the largest network tested, consisting of 150 nodes, at λ = 0.1 packets/s, LALARPL registers an ALTN of 0.893, which is 2.17 % better than WRF-RPL’s 0.874. At λ = 0.2 packets/s, LALARPL maintains its lead with an ALTN of 0.875, a 19.04 % increase over WRF-RPL’s 0.735.
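For clarity, the percentage gains quoted in this paragraph follow the usual relative-improvement calculation against the WRF-RPL baseline; for instance, the 100-node case at λ = 0.1 packets/s works out as

$$\frac{\mathrm{ALTN}_{\text{LALARPL}} - \mathrm{ALTN}_{\text{WRF-RPL}}}{\mathrm{ALTN}_{\text{WRF-RPL}}} = \frac{0.948 - 0.918}{0.918} \approx 0.0327 \approx 3.27\%.$$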
These outcomes underscore the effectiveness of the LALARPL method in enhancing the longevity of the network by ensuring that the first node and subsequent nodes have a significantly delayed time of death. The delay in energy depletion of the first node is particularly crucial, as it indicates a resilient and well-distributed network that can sustain its operational capabilities for extended periods, which is critical to IoT applications. The enhanced ALTN achieved by LALARPL affirms the protocol’s advanced energy management capabilities, showcasing its potential to improve the sustainability of IoT networks. This is particularly beneficial in scenarios where network reliability and extended operation without maintenance are of the essence.

6. Conclusion

This paper presented an enhanced load-balancing algorithm for the Routing Protocol for Low-Power and Lossy Networks (RPL) that employs learning automata, designated LALARPL. Our approach significantly optimizes routing in Internet of Things (IoT) environments by integrating advanced decision-making capabilities that dynamically adapt to network conditions. The key benefits of LALARPL, as demonstrated through extensive simulation, include improved packet delivery ratios, reduced end-to-end delays, and more efficient energy usage across the network. By adopting a traffic-aware and energy-efficient strategy, LALARPL ensures a fair distribution of network load, extending the lifetime of individual nodes and enhancing the overall performance and reliability of IoT networks. Moreover, the learning automata component allows for the adaptive adjustment of routing decisions based on real-time network feedback, which optimizes both throughput and energy consumption. This feature is crucial in maintaining the sustainability and operational efficiency of IoT devices, which are often limited by battery life and processing power. In conclusion, the introduction of LALARPL represents a significant step forward in the evolution of RPL-based networking solutions, providing a robust framework for achieving optimal load balancing and efficient resource management in IoT networks. The results suggest that LALARPL can serve as a foundational technology for future developments in IoT routing protocols, potentially leading to more resilient and efficient network architectures tailored to the unique demands of the burgeoning IoT landscape.
As part of future work, introducing mobility management into the LALARPL framework presents a promising avenue for research. Mobility in IoT environments introduces additional challenges, including a varying network topology, increased packet loss, and dynamic routing paths that require real-time adaptation. Integrating mobility support would allow LALARPL to adjust to changing network conditions as nodes move, maintaining efficient routing and load balancing. Future iterations could incorporate mobility models that simulate real-world IoT applications, such as vehicular networks or mobile health devices, to assess the performance of LALARPL in managing dynamic and heterogeneous networks. Moreover, developing algorithms that predict node movements and preemptively adjust routing decisions could further enhance the robustness and efficiency of the protocol in mobile scenarios. A mobility-aware version of LALARPL would be instrumental in extending the applicability of RPL to a broader array of IoT systems where node mobility is a key characteristic. This exploration would contribute to theoretical advancements in routing protocols and have practical implications for deploying IoT solutions in urban settings, disaster response scenarios, and other dynamic environments where mobility plays a crucial role.

References

1. Lakshmi, M.S.; Ramana, K.S.; Ramu, G.; Shyam Sunder Reddy, K.; Sasikala, C.; Ramesh, G. Computational intelligence techniques for energy efficient routing protocols in wireless sensor networks: A critique. Transactions on Emerging Telecommunications Technologies 2023, 35.
2. Zainaddin, D.A.; Hanapi, Z.M.; Othman, M.; Ahmad Zukarnain, Z.; Abdullah, M.D.H. Recent trends and future directions of congestion management strategies for routing in IoT-based wireless sensor network: a thematic review. Wireless Networks 2024.
3. Darabkh, K.A.; Al-Akhras, M.; Zomot, J.N.; Atiquzzaman, M. RPL routing protocol over IoT: A comprehensive survey, recent advances, insights, bibliometric analysis, recommendations, and future directions. Journal of Network and Computer Applications 2022, 207, 103476.
4. Muzammal, S.M.; Murugesan, R.K.; Jhanjhi, N.Z. A Comprehensive Review on Secure Routing in Internet of Things: Mitigation Methods and Trust-Based Approaches. IEEE Internet of Things Journal 2021, 8, 4186–4210.
5. Hui, J.; Vasseur, J. The Routing Protocol for Low-Power and Lossy Networks (RPL) Option for Carrying RPL Information in Data-Plane Datagrams. RFC 6553, 2012.
6. Almutairi, H.; Zhang, N. A Survey on Routing Solutions for Low-Power and Lossy Networks: Toward a Reliable Path-Finding Approach. Network 2024, 4, 1–32.
7. Estepa, R.; Estepa, A.; Madinabeitia, G.; Garcia, E. RPL Cross-Layer Scheme for IEEE 802.15.4 IoT Devices With Adjustable Transmit Power. IEEE Access 2021, 9, 120689–120703.
8. Muzammal, S.M.; Murugesan, R.K.; Jhanjhi, N.Z. A Comprehensive Review on Secure Routing in Internet of Things: Mitigation Methods and Trust-Based Approaches. IEEE Internet of Things Journal 2021, 8, 4186–4210.
9. Zormati, M.A.; Lakhlef, H.; Ouni, S. Review and analysis of recent advances in intelligent network softwarization for the Internet of Things. Computer Networks 2024, 241, 110215.
10. Mishra, A.K.; Singh, O.; Kumar, A.; Puthal, D. Hybrid Mode of Operations for RPL in IoT: A Systematic Survey. IEEE Transactions on Network and Service Management 2022, 19, 3574–3586.
11. Tadigotla, S.; Murthy, J.K. A Comprehensive Study on RPL Challenges. 2020 Third International Conference on Advances in Electronics, Computers and Communications (ICAECC). IEEE, 2020.
12. Lamaazi, H.; Benamar, N. A comprehensive survey on enhancements and limitations of the RPL protocol: A focus on the objective function. Ad Hoc Networks 2020, 96, 102001.
13. Maheshwari, A.; Yadav, R.K.; Nath, P. Enhanced RPL to Control Congestion in IoT: A Review. 2023, pp. 1–13.
14. Homaei, M.H.; Soleimani, F.; Shamshirband, S.; Mosavi, A.; Nabipour, N.; Varkonyi-Koczy, A.R. An Enhanced Distributed Congestion Control Method for Classical 6LowPAN Protocols Using Fuzzy Decision System. IEEE Access 2020, 8, 20628–20645.
15. S., P.S.; B., S. RPL Protocol Load balancing Schemes in Low-Power and Lossy Networks. International Journal of Scientific Research in Computer Science and Engineering 2023, 11, 7–13.
16. P., A.; Vimala, H.; J., S. Comprehensive review on congestion detection, alleviation, and control for IoT networks. Journal of Network and Computer Applications 2024, 221, 103749.
17. Shabani Baghani, A.; Khabbazian, M. RPL Point-to-Point Communication Paths: Analysis and Enhancement. IEEE Internet of Things Journal 2023, 10, 166–179.
18. Mahyoub, M.; Hasan Mahmoud, A.S.; Abu-Amara, M.; Sheltami, T.R. An Efficient RPL-Based Mechanism for Node-to-Node Communications in IoT. IEEE Internet of Things Journal 2021, 8, 7152–7169.
19. Jain, V.K.; Mazumdar, A.P.; Faruki, P.; Govil, M.C. Congestion control in Internet of Things: Classification, challenges, and future directions. Sustainable Computing: Informatics and Systems 2022, 35, 100678.
20. Safaei, B.; Monazzah, A.M.H.; Ejlali, A. ELITE: An Elaborated Cross-Layer RPL Objective Function to Achieve Energy Efficiency in Internet-of-Things Devices. IEEE Internet of Things Journal 2021, 8, 1169–1182.
21. Taghizadeh, S.; Elbiaze, H.; Bobarshad, H. EM-RPL: Enhanced RPL for Multigateway Internet-of-Things Environments. IEEE Internet of Things Journal 2021, 8, 8474–8487.
22. Venugopal, K.; Basavaraju, T.G. Congestion and Energy Aware Multipath Load Balancing Routing for LLNs. International Journal of Computer Networks & Communications 2023, 15, 71–92.
23. Rani, S.; Kumar, A.; Bagchi, A.; Yadav, S.; Kumar, S. RPL Based Routing Protocols for Load Balancing in IoT Network. Journal of Physics: Conference Series 2021, 1950, 012073.
24. Darabkh, K.A.; Al-Akhras, M. RPL over Internet of Things: Challenges, Solutions, and Recommendations. 2021 IEEE International Conference on Mobile Networks and Wireless Communications (ICMNWC). IEEE, 2021.
25. Magubane, Z.; Tarwireyi, P.; Abu-Mafouz, A.; Adigun, M. Extended Context-Aware and Load Balancing Routing Protocol for Low power and Lossy Networks in IoT networks (ECLRPL). 2021 3rd International Multidisciplinary Information Technology and Engineering Conference (IMITEC). IEEE, 2021.
26. Pancaroglu, D.; Sen, S. Load balancing for RPL-based Internet of Things: A review. Ad Hoc Networks 2021, 116, 102491.
27. Venugopal, K.; Basavaraju, T. Load balancing routing in RPL for the internet of things networks: a survey. International Journal of Wireless and Mobile Computing 2023, 24, 243–257.
28. Vaezian, A.; Darmani, Y. MSE-RPL: Mobility Support Enhancement in RPL for IoT Mobile Applications. IEEE Access 2022, 10, 80816–80832.
29. Thiagarajan, C.; Samundiswary, P. Enhanced RPL-Based Routing with Mobility Support in IoT Networks. 2023 Second International Conference on Advances in Computational Intelligence and Communication (ICACIC). IEEE, 2023.
30. Grover, D. An Optimized RPL Protocol for Energy Efficient IoT Networks. 2023 International Conference on Data Science and Network Security (ICDSNS). IEEE, 2023.
31. Haque, K.F.; Abdelgawad, A.; Yanambaka, V.P.; Yelamarthi, K. An Energy-Efficient and Reliable RPL for IoT. 2020 IEEE 6th World Forum on Internet of Things (WF-IoT). IEEE, 2020.
32. Kumar, A.; Hariharan, N. DCRL-RPL: Dual context-based routing and load balancing in RPL for IoT networks. IET Communications 2020, 14, 1869–1882.
33. Seyfollahi, A.; Ghaffari, A. A lightweight load balancing and route minimizing solution for routing protocol for low-power and lossy networks. Computer Networks 2020, 179, 107368.
34. Pereira, H.; Moritz, G.L.; Souza, R.D.; Munaretto, A.; Fonseca, M. Increased Network Lifetime and Load Balancing Based on Network Interface Average Power Metric for RPL. IEEE Access 2020, 8, 48686–48696.
35. Wang, F.; Babulak, E.; Tang, Y. SL-RPL: Stability-aware load balancing for RPL. Transactions on Machine Learning and Data Mining 2020, 13, 27–39.
36. Sebastian, A. Child Count Based Load Balancing in Routing Protocol for Low Power and Lossy Networks (Ch-LBRPL). 2019, pp. 141–157.
37. Rana, P.J.; Bhandari, K.S.; Zhang, K.; Cho, G. EBOF: A New Load Balancing Objective Function for Low-power and Lossy Networks. IEIE Transactions on Smart Processing & Computing 2020, 9, 244–251.
38. Stoyanov, S.; Ghaleb, B.; Ghaleb, S.M. A Comparative Performance Evaluation of A load-balancing Algorithm using Contiki: “RPL vs QU-RPL”. International Journal of Advanced Trends in Computer Science and Engineering 2020, 9, 6834–6839.
39. Behrouz Vaziri, B.; Toroghi Haghighat, A. Brad-OF: An Enhanced Energy-Aware Method for Parent Selection and Congestion Avoidance in RPL Protocol. Wireless Personal Communications 2020, 114, 783–812.
40. Mahyoub, M.; Hasan Mahmoud, A.S.; Abu-Amara, M.; Sheltami, T.R. An Efficient RPL-Based Mechanism for Node-to-Node Communications in IoT. IEEE Internet of Things Journal 2021, 8, 7152–7169.
41. Sankar, S.; Ramasubbareddy, S.; Luhach, A.K.; Nayyar, A.; Qureshi, B. CT-RPL: Cluster Tree Based Routing Protocol to Maximize the Lifetime of Internet of Things. Sensors 2020, 20, 5858.
42. Sirwan, R.; Al Ani, M. Adaptive Load Balanced Routing in IOT Networks: A Distributed Learning Approach. Passer Journal of Basic and Applied Sciences 2021, 3, 102–106.
43. Zheng, H.; Zhang, Y.; Huang, D. Load Balancing RPL Routing Protocol Based on Non-uniform Clustering. 2021 4th International Conference on Data Science and Information Technology (DSIT 2021). ACM, 2021.
44. Idrees, A.K.; Witwit, A.J. Energy-efficient load-balanced RPL routing protocol for internet of things networks. International Journal of Internet Technology and Secured Transactions 2021, 11, 286.
45. Acevedo, P.D.; Jabba, D.; Sanmartin, P.; Valle, S.; Nino-Ruiz, E.D. WRF-RPL: Weighted Random Forward RPL for High Traffic and Energy Demanding Scenarios. IEEE Access 2021, 9, 60163–60174.
46. Fatemifar, S.A.; Javidan, R. A new load balancing clustering method for the RPL protocol. Telecommunication Systems 2021, 77, 297–315.
47. Royaee, Z.; Mirvaziri, H.; Khatibi Bardsiri, A. Designing a context-aware model for RPL load balancing of low power and lossy networks in the internet of things. Journal of Ambient Intelligence and Humanized Computing 2020, 12, 2449–2468.
48. Yassien, M.B.; Aljawarneh, S.A.; Eyadat, M.; Eaydat, E. Routing protocol for low power and lossy network–load balancing time-based. International Journal of Machine Learning and Cybernetics 2021, 12, 3101–3114.
49. Arunachalam, V.; Nallamothu, N. Load Balancing in RPL to Avoid Hotspot Problem for Improving Data Aggregation in IoT. International Journal of Intelligent Engineering and Systems 2021, 14, 528–540.
50. Zarzoor, A.R. Optimizing RPL performance based on the selection of best route between child and root node using E-MHOF method. International Journal of Electrical and Computer Engineering (IJECE) 2021, 11, 224.
51. Abdullah, M.; Alsukayti, I.; Alreshoodi, M. On the Need for Efficient Load Balancing in Large-scale RPL Networks with Multi-Sink Topologies. International Journal of Computer Science & Network Security 2021, 21, 212–218.
52. Hadaya, N.N.; Alabady, S.A. New RPL Protocol for IoT Applications. Journal of Communications Software and Systems 2022, 18, 72–79.
53. Anita, C.; Sasikumar, R. Learning automata and lexical composition method for optimal and load balanced RPL routing in IoT. International Journal of Ad Hoc and Ubiquitous Computing 2022, 40, 288.
54. Awiphan, S.; Jathuphornpaserd, S. Load-Balanced Structure for RPL-Based Routing in Wireless Sensor Networks. 2022 4th International Conference on Computer Communication and the Internet (ICCCI). IEEE, 2022.
55. Koosha, M.; Farzaneh, B.; Alizadeh, E.; Farzaneh, S. FAHP-OF: A New Method for Load Balancing in RPL-based Internet of Things (IoT). 2022 12th International Conference on Computer and Knowledge Engineering (ICCKE). IEEE, 2022.
56. Kaviani, F.; Soltanaghaei, M. CQARPL: Congestion and QoS-aware RPL for IoT applications under heavy traffic. The Journal of Supercomputing 2022, 78, 16136–16166.
57. Jagir Hussain, S.; Roopa, M. BE-RPL: Balanced-load and Energy-efficient RPL. Computer Systems Science and Engineering 2023, 45, 785–801.
58. Kalantar, S.; Jafari, M.; Hashemipour, M. Energy and load balancing routing protocol for IoT. International Journal of Communication Systems 2022, 36.
59. Subramani, P.S.; Bojan, S. Weighted Sum Metrics-Based Load Balancing RPL Objective Function for IoT. Annals of Emerging Technologies in Computing 2023, 7, 35–55.
60. Tiwari, J.; Soni, S.; Chandra, P. Improved Load Balancing Protocol for WSN. Journal of Data Acquisition and Processing 2023, 38, 6104.
61. Venugopal, K.; Basavaraju, T.G. Congestion and Energy Aware Multipath Load Balancing Routing for LLNs. International Journal of Computer Networks & Communications 2023, 15, 71–92.
62. Ahmed, A.K.; Farzaneh, B.; Boochanpour, E.; Alizadeh, E.; Farzaneh, S. TFUZZY-OF: a new method for routing protocol for low-power and lossy networks load balancing using multi-criteria decision-making. International Journal of Electrical and Computer Engineering (IJECE) 2023, 13, 3474.
63. Lei, J.; Liu, J. Reinforcement learning-based load balancing for heavy traffic Internet of Things. Pervasive and Mobile Computing 2024, 99, 101891.
64. Tabouche, A.; Djamaa, B.; Senouci, M.R.; Ouakaf, O.E.; Elaziz, A.G. TLR: Traffic-aware load-balanced routing for industrial IoT. Internet of Things 2024, 25, 101093.
65. Alilou, M.; Babazadeh Sangar, A.; Majidzadeh, K.; Masdari, M. QFS-RPL: mobility and energy aware multi path routing protocol for the internet of mobile things data transfer infrastructures. Telecommunication Systems 2023, 85, 289–312.
66. Shashidhar, P.K.; Thanuja, T.C.; Kunabeva, R. Adaptive RPL Routing Optimization Model for Multimedia Data Transmission using IOT. Indian Journal of Science and Technology 2024, 17, 436–450.
67. Kordestani, J.K.; Mirsaleh, M.R.; Rezvanian, A.; Meybodi, M.R. Advances in Learning Automata and Intelligent Optimization; Springer International Publishing, 2021.
68. Homaei, M.H.; Band, S.S.; Pescape, A.; Mosavi, A. DDSLA-RPL: Dynamic Decision System Based on Learning Automata in the RPL Protocol for Achieving QoS. IEEE Access 2021, 9, 63131–63148.

Short Biography of Authors

MOHAMMADHOSSEIN HOMAEI (M’19) was born in Hamedan, Iran. He obtained his B.Sc. in Information Technology (Networking) from the University of Applied Science and Technology, Hamedan, Iran, in 2014 and his M.Sc. from Islamic Azad University, Malayer, Iran, in 2017. He is pursuing his Ph.D. at Universidad de Extremadura, Spain, where his prolific research has amassed over 100 citations.
Since December 2019, Mr. Homaei has been affiliated with Óbuda University, Hungary, as a Visiting Researcher delving into the Internet of Things and Big Data. His tenure at Óbuda University seamlessly extended into a research collaboration with J. Selye University, Slovakia, focusing on Cybersecurity from January 2020. His research voyage led him to the National Yunlin University of Science and Technology, Taiwan, where he was a Scientific Researcher exploring IoT and Open-AI from January to September 2021. His latest role was at the Universidade da Beira Interior, Portugal, in the Assisted Living Computing and Telecommunications Laboratory (ALLab), from June 2023 to January 2024, where he engaged in cutting-edge projects on digital twins and machine learning. He is the author of ten scholarly articles and holds three patents, highlighting his diverse research interests in Digital Twins, Cybersecurity, Wireless Communications, and IoT.
An active IEEE member, Mr. Homaei has carved a niche for himself with notable contributions to Digitalization, the Industrial Internet of Things (IIoT), Information Security Management, and Environmental Monitoring. His substantial work continues to influence the technological and cybersecurity landscape profoundly.
Figure 2. Stochastic Learning Automata [67]
Figure 3. Packet Delivery Ratio in 50 nodes test
Figure 4. Packet Delivery Ratio in 100 nodes test
Figure 5. Packet Delivery Ratio in 150 nodes test
Figure 6. Throughput in 50 nodes test
Figure 7. Throughput in 100 nodes test
Figure 8. Throughput in 150 nodes test
Figure 9. Jain Fairness Index of Throughput in 50 nodes test
Figure 10. Jain Fairness Index of Throughput in 100 nodes test
Figure 11. Jain Fairness Index of Throughput in 150 nodes test
Figure 12. Average End-to-End Delay in 50 nodes test
Figure 13. Average End-to-End Delay in 100 nodes test
Figure 14. Average End-to-End Delay in 150 nodes test
Figure 15. Jain Fairness Index of Energy in 50 nodes test
Figure 16. Jain Fairness Index of Energy in 100 nodes test
Figure 17. Jain Fairness Index of Energy in 150 nodes test
Figure 18. Average Lifetime Network in 50 nodes test
Figure 19. Average Lifetime Network in 100 nodes test
Figure 20. Average Lifetime Network in 150 nodes test
Table 1. RPL Enhancements for Load Balancing (2020-2024)
Year/RFC Aim Strategy Strengths
2020 - [31] Enhance RPL for IoT focusing on energy efficiency and reliability Evaluate performance of ETX and Energy-based OFs; propose a hybrid OF Identifies trade-offs between energy efficiency and reliability; proposes a balanced approach through a hybrid OF
2020 - [32] Address routing overhead, packet losses, and load imbalance in RPL-based IoT networks Introduce DCRL-RPL framework with grid construction, ranking-based grid selection, and dual context-based OF selection Demonstrate improved network lifetime, packet delivery ratio, and reduced routing overhead
2020 - [33] Enhance RPL for IoT, focusing on load balancing Introduce L2RMR with a novel OF and PF to prevent HDP, optimizing path length and load distribution Significantly reduces packet loss, delay, and energy consumption, outperforming traditional RPL
2020 - [34] Improve load balancing and extend network lifetime in IoT Introduce NIAP metric for balancing energy consumption, relying on average power estimation Increases network lifetime by up to 24%, improves packet delivery ratio and reduces delay
2020 - [35] Address instability and inefficiency in RPL’s load balancing for IoT Propose SL-RPL with stability-aware mechanism, utilizing PTR and ETX for parent selection Enhances network stability and performance, reducing parent changes, packet loss, and energy usage
2020 - [36] Address load imbalance in RPL for IoT Introduce Ch-LBRPL to improve load balance using a child count method, reducing parent switching and enhancing energy efficiency More effective at achieving load balance, improving network stability and energy consumption
2020 - [37] Tackle load imbalance in RPL for IoT Introduce EBOF combining ETX and CC for optimal path selection, extending network lifetime Enhances network performance by balancing energy consumption and prolonging operational sustainability
2020 - [38] Evaluate QU-RPL’s load-balancing in RPL for IoT Comparative analysis of RPL and QU-RPL focusing on power consumption, PDR, and latency Finds QU-RPL does not significantly improve over traditional RPL, suggesting a need for further development
2020 - [39] Enhance energy-aware parent selection and congestion avoidance in RPL for IoT Propose Brad-OF using ETX, delay, and residual energy for parent selection and a metric for congestion avoidance Increases network lifetime by up to 65% and reduces packet loss by up to 81%
2020 - [40] Address N2N communication inefficiencies in LLNs for IoT Propose HRPL, integrating link-state routing with RPL for efficient N2N routes and employing adaptive reporting and SSSP mechanisms Significantly improves packet delivery ratio, reduces delay and energy consumption, maintaining RPL compatibility
2020 - [41] Extend network lifespan and reduce data traffic in IoT Introduce CT-RPL with cluster formation, CH selection, and route establishment based on RER, QU, and ETX Enhances network lifetime by 30-40% and packet delivery ratio by 5-10%
2021 - [42] Facilitate load-efficient IoT connectivity with anticipated device number surge Leverage self-coordinating networks and distributed learning for dynamic communication parameter adaptation Demonstrate improvements in reliability and traffic efficiency with lightweight learning
2021 - [25] Improve RPL in IoT by incorporating buffer occupancy for load balancing Introduce ECLRPL, using a buffer occupancy metric in routing decisions to enhance throughput and network lifetime Significantly outperforms standard RPL and CLRPL in packet delivery, power efficiency, and network delay
2021 - [43] Address load imbalance in LLNs with RPL by proposing a clustering-based protocol Use non-uniform clustering and cluster head rotation based on node energy and priority for balanced load Enhances network stability and efficiency by achieving balanced traffic distribution
2021 - [44] Develop an energy-efficient, load-balanced routing protocol for IoT networks Incorporate a novel parent selection algorithm in EL-RPL, considering energy and packet counts Outperforms existing protocols in energy conservation, control packet reduction, and extending network lifetime
2021 - [45] Enhance load balancing in high-traffic sensor networks Introduce WRF-RPL with a routing metric considering remaining energy and parent count Outperforms standard RPL in network lifetime, packet delivery, and energy consumption
2021 - [46] Improve routing in IoT networks Propose C-Balance with a dual-ranking system for cluster formation and routing, using ETX, hop count, and energy metrics Improves network longevity and energy efficiency, though increases end-to-end delay
2021 - [47] Address load imbalance in RPL for IoT Develop AMRRPL with ant colony optimization for rank computation and stochastic automata for parent selection Demonstrate improvements in packet delivery, network lifetime, energy efficiency, and convergence
2021 - [48] Address load balancing challenges in RPL for IoT Introduce LBTB, combining neighbour count and node power with a modified trickle timer for message distribution Reducing convergence time by up to 68%, power consumption by 16%, and delay by 56%
2021 - [49] Mitigate hotspot problem and improve data aggregation in IoT with RPL Propose LoB-RPL with a composite metric for parent selection and adaptive trickle parameters Significantly improves packet delivery, network lifetime, energy efficiency, and control overhead reduction
2021 - [50] Optimize RPL performance in IoT for reducing node congestion and latency Introduce E-MHOF with a three-layer approach for parent and path selection, and child node minimization Demonstrates significant improvements in network lifetime and latency reduction
2021 - [51] Improve routing and address node unreachability in LLNs for IoT Propose MSLBOF with Memory Utilization metrics for sink selection and load balancing Significantly reduces packet loss and improves network stability compared to standard MRHOF
2022 - [52] Address energy consumption and inefficiency in RPL for IoT Propose a novel RPL OF incorporating Load, Residual Energy, and ETX to enhance network lifetime and efficiency Shows a PDR increase of 58.425%, a decrease in packet loss ratio, and a reduction in power consumption
2022 - [53] Optimize RPL for energy efficiency and load balancing in IoT Introduce a methodology using learning automata and lexical composition for critical routing metrics selection Significantly improves packet delivery ratio, energy consumption, and network stability
2022 - [54] Improve load balancing in RPL-based by reducing node overload. Identify neighbours at the same rank and exchange metrics like available connections and ETX to better select network parents. Improved packet delivery and reduced packet loss compared to traditional methods, optimizing network traffic distribution.
2022 - [55] Enhance routing in RPL-based IoT networks Develop FAHP-OF using Fuzzy Logic and AHP for dynamic parent selection optimization Improves E2ED and PDR, enhancing network reliability and efficiency
2022 - [56] Propose CQARPL for IoT applications under heavy traffic conditions Incorporates congestion control and enhanced QoS into RPL; uses multiple metrics for routing decisions Enhances network lifetime, reduces queue loss ratio, improves packet reception, and lowers delay
2023 - [57] Introduce BE-RPL to address mobility issues in IoT LLN Enhances RPL with mobility awareness and energy efficiency; focuses on load balancing and reactive parent selection Demonstrates improvements in energy utilization, network control overhead, and packet delivery ratio
2023 - [58] Tackle energy management and traffic balance in IoT networks Introduces ELBRP with ECAOF for parent node selection based on energy and congestion Shows significant advancements in energy efficiency and packet delivery, with a slight increase in control overheads
2023 - [59] Achieve load balance and efficient routing in IoT networks Propose WSM-OF using a combination of ETX, LQL, RE, and Child Count Improves control overhead, jitter, packet delivery ratio, energy consumption, and network lifetime by up to 7.8%
2023 - [60] Enhance RPL for WSNs with integrated mobility management Focus on micro-mobility to optimize energy consumption and load balancing Reduces energy consumption, enhances packet delivery ratios, and ensures stable network operation
2023 - [61] Address load balancing and congestion in LLNs for IoT Introduce CEA-RPL with CEA-OF leveraging Queue Occupancy, Expected Lifetime, and Child Count Enhances power consumption, packet receiving rate, end-to-end delay, and network lifetime
2023 - [62] Address load balancing in RPL for IoT networks Propose TFUZZY-OF integrating fuzzy logic with TOPSIS Enhances PDR and reduces E2ED compared to traditional methods
2024 - [63] Optimize RPL routing in IoT environments using DRL Develop RARL with a DRL model for intelligent routing decisions Outperforms existing methods in network lifetime, queue loss ratio, packet reception ratio, and delay
2024 - [64] Address load-balancing issues in IIoT networks over 6TiSCH Develop TLR with a traffic-aware proactive path selection strategy Demonstrates superiority in throughput, reliability, latency, and energy efficiency over conventional RPL
2024 - [65] Integrate Q-learning and FSR in RPL for IoT to enhance mobility and energy efficiency Propose QFS-RPL for efficient load balancing and improved PDR Shows superior performance, especially in mobile node environments, enhancing network throughput and lifetime
2024 - [66] Enhance multimedia data transmission efficiency in IoT networks Introduce ARPLO with a grid-based structure and ADNN for data classification Improve energy efficiency, throughput, PDR, and network lifespan while reducing control overhead and delay
Table 2. Key parameters in recent research on RPL’s Load-balancing
Year Ref Key Parameters
2020 [31] ETX, Energy
[32] EC, LB, Overhead, PDR
[33] LB, Path Length, PL, Latency
[34] EC, PDR, Latency
[35] Stability (PTR, ETX), PL, EC
[36] Child Count, EC, Overhead
[37] EC, ETX, Child Count, NL
[38] EC, PDR, Latency
[39] ETX, Latency, EC, TL, PL
[40] Link-state, PDR, MAC state, Latency, EC
[41] EC, Queue, ETX, NL, TL
2021 [42] Reliability, Communication Efficiency
[25] Buffer Occupancy, PDR, EC, Latency
[43] Clustering, Stability, TL
[44] EC, NL
[45] EC, LB, Parent Node Count
[46] ETX, Hop Count, EC, Number of Node Children, Network Longevity
[47] Congestion Mitigation, NL, EC, PDR
[48] Neighbour Count, EC, Trickle Timer, Convergence Time, Latency
[49] Composite Metric, Trickle Timer, PDR, NL
[50] Congestion Mitigation, ETX, RSSI, EC, Latency
[51] Memory Utilization, LB (Multi-Sink), PL
2022 [52] LB, EC, ETX, NL, PDR
[53] EC, LB, Hop Count, ETX, TL, PDR
[54] ETX, PDR, Overhead
[55] Hop-count, ETX, RSSI, PDR, Latency
[56] Congestion, QoS, ETX, Hop Count, NL, QLR, PRR, Latency
2023 [57] Mobility Management, EC, LB, PDR
[58] EC, Congestion, NL, Latency, Overhead
[59] ETX, Link Quality, EC, Child Count, Jitter, Parent Switching, Latency, NL
[60] Mobility Management, EC, PDR, Network Stability
[61] Congestion, EC, Queue, NL, Latency, PDR
[62] Hop Count, ETX, RSSI, PDR, Latency
2024 [63] EC, NL, Queue
[64] TL, Queue, Throughput, Latency, EC
[65] Overhead, PDR, Latency, Throughput
[66] EC, Throughput, PDR, Overhead, Latency
* Energy Consumption: EC | Packet Delivery Ratio: PDR | Latency: Average End-to-End Delay | Load balancing: LB | Expected Transmission Count: ETX | Packet Loss: PL | Network Lifetime: NL | Traffic Load: TL | Other Metrics: OM
Table 3. Simulation Parameters
Parameter Value
Simulator NS-2
Traffic Type Constant Bit Rate (CBR) over UDP
Simulation Area 1000 m × 1000 m
Simulation Time 1000 s
Number of Nodes 50, 100, 150
Sink Placement Centralized
Node Placement Random
Topology RPL tree-based
MAC Layer Protocol IEEE 802.15.4
Data Rate 250 kbps
Bandwidth Up to 250 kbps (consistent with IEEE 802.15.4)
Radio Range 100 m
Packet Size 50 bytes (maximum for IEEE 802.15.4)
Energy Model Enabled
Initial Energy per Node 2 Joules
Mobility Model Static nodes
Routing Protocol LALARPL, and others for comparison
DIO Message Size 80 bytes
DAO Message Size 100 bytes
DIS Message Size 77 bytes
DAO-Ack Message Size 80 bytes
Traffic Rate (λ) 0.1, 0.2 packets/s
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.