1. Introduction
In the era of the Internet of Things (IoT) [1,2,3,4], the surge in the number of mobile devices and in network traffic has created a high demand for data processing capability and fast response. Mobile Edge Computing (MEC) effectively enhances the operational efficiency of smart devices by providing powerful computing resources at the network's edge, particularly for delay-sensitive applications such as autonomous driving and AR/VR [5]. By offloading complex computational tasks to MEC servers [6,7,8,9,10,11], resource-constrained mobile devices (MDs) can significantly alleviate the pressure of running high-demand applications and achieve a notable leap in performance. However, the limited battery capacity of mobile devices has become a bottleneck that restricts their further development [12]. This limitation is especially evident for devices that must operate continuously for long periods and are not easily recharged, such as in remote areas or emergency situations, where insufficient battery life can severely impact device functionality and reliability. Therefore, despite the significant advantages of MEC in enhancing network performance, limited battery endurance remains a key issue that urgently needs to be addressed.
In addition to battery constraints, reducing the energy consumed by IoT devices during data offloading is equally important. The Internet of Things comprises trillions of tiny smart sensors that face severe limitations in computational capability and battery-supplied energy. Advances in wireless energy harvesting, including renewable energy harvesting and wireless power transfer (WPT) [13], can alleviate the challenges posed by limited battery capacity. Renewable sources such as solar, wind, and ocean energy can provide power to some extent, but they are strongly influenced by natural conditions such as weather and climate [14]. To address this issue, green wireless charging technology has emerged. It offers stable energy to devices through radio-frequency signals and stores it in the batteries of IoT nodes for future use, extending battery life [15,16]. To ensure that nodes do not fail due to energy depletion, green wireless charging adheres to the principle of energy-neutral operation [17], which guarantees that the energy consumed in any operation never exceeds the energy collected. Green Wireless Powered Mobile Edge Computing (WPMEC) combines the strengths of WPT and MEC, enhancing devices' computational capability and energy self-sufficiency. In the upcoming 6G networks, green WPMEC will provide IoT devices with quick response and real-time experiences [18], while reducing operational costs and extending device lifespan.
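Energy-neutral operation can be made concrete with a small sketch: per-slot consumption is clipped so that cumulative spending never exceeds cumulative harvested energy. All names and numbers below are illustrative, not taken from the system model.

```python
def energy_neutral_schedule(harvested, demanded):
    """Clip each slot's consumption so stored energy never goes negative.

    harvested, demanded: per-slot energy amounts in joules (hypothetical).
    Returns the energy actually consumed in each slot.
    """
    battery = 0.0
    consumed = []
    for e_in, e_want in zip(harvested, demanded):
        battery += e_in                 # energy arrives via WPT
        e_use = min(e_want, battery)    # never spend more than is stored
        battery -= e_use
        consumed.append(e_use)
    return consumed

# Demand in slot 2 exceeds what has been harvested so far, so it is clipped.
print(energy_neutral_schedule([1.0, 1.0, 1.0], [0.5, 2.0, 0.2]))
# → [0.5, 1.5, 0.2]
```

Whatever policy is layered on top, this clipping guarantees the energy-neutrality invariant at every slot.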
However, WPMEC networks face the challenge of the doubly near-far effect [19] caused by positional differences, which has prompted the development of edge collaborative networks [20,21,22] to optimize offloading performance. A user cooperation (UC) mechanism lets nearby users assist distant users in accelerating the processing of computational tasks while still offloading their own tasks; this collaborative approach leverages the superior channel conditions of nearby users, who gain more energy during the WPT phase. It not only addresses the unfairness caused by geographical location but also improves energy utilization. The dense deployment of smart IoT devices further creates opportunities to exploit the unused computing resources of idle devices together with wireless energy harvesting. By assisting with the computational tasks of distant users, these devices improve the overall computational performance of the WPMEC network.
Recent studies have demonstrated the effectiveness of UC: references [23,24] address the doubly near-far effect in WPMEC networks through UC, while the D2D communication in [25] and the incentive mechanism in [26] are both designed to facilitate resource sharing and collaborative offloading. The authors of [20,24,27,28] study the basic three-node WPMEC model, in which a Far User (FU) is allowed to offload computational input data to a Near User (NU). In [29], researchers designed a Non-Orthogonal Multiple Access (NOMA) based computation offloading scheme to enhance the performance of multi-user MEC systems. Google has also developed federated learning, which enables multiple devices to collaborate on machine learning tasks. Nevertheless, these studies often assume deterministic or predictable future information and fail to fully capture the dynamics of the network environment, which may degrade the efficiency and success rate of task offloading and processing.
This paper investigates the basic three-node green WPMEC network shown in Figure 1, focusing on the use of collaborative communication to accomplish computation-intensive and delay-sensitive tasks powered by the HAP. Our goal is to maximize the network's data processing capability in a real-time dynamic offloading system, taking into account the randomness of data generation and the high dynamics of wireless channel conditions. The challenges include the unpredictability of task arrivals, the dynamics of channel conditions, and the coupling of variables in resource allocation, which together make traditional convex optimization methods inapplicable. To tackle these challenges, we design an efficient dynamic task offloading algorithm, the User-Assisted Dynamic Resource Allocation algorithm (UADRA). It employs Lyapunov optimization to transform the problem into a simplified form that relies only on current information, and performs dynamic resource allocation in each time slot to enhance the network's data processing capability. Our primary contributions are summarized as follows:
• We propose a model for maximizing the long-term weighted computation rate in a green, sustainable WPT-MEC network subject to energy consumption constraints. The model accounts for the randomness of task arrivals and the dynamic variation of wireless channel states. By extending the models in [28,30], we mitigate the doubly near-far effect and foster collaboration among users. This is achieved by assigning different data weights to near and far nodes, which enhances data transmission efficiency, system flexibility, and alignment with real-world applications.
• We design a low-complexity dynamic control algorithm, UADRA, based on Lyapunov optimization theory, and provide a rigorous mathematical analysis of its performance. A virtual queue is introduced to transform the time-average energy constraint into a queue stability requirement. Leveraging the drift-plus-penalty technique, we decouple the original multi-stage stochastic problem into a non-convex deterministic sub-problem for each time slot. Furthermore, using variable substitution and convex optimization theory, we convert these sub-problems into convex problems with a minimal number of decision variables that can be solved efficiently. Our algorithm works efficiently without requiring prior knowledge of system information.
• We conduct extensive simulations to evaluate the effectiveness and practicality of the proposed algorithm, examining in particular how the control parameter V, network bandwidth, energy constraint, task arrival rate, and geographical distance affect the average computation rate and network stability. Experimental results demonstrate that the proposed algorithm outperforms benchmark methods by up to 4% in overall performance while simultaneously ensuring the stability of the system queues. Moreover, the algorithm achieves a trade-off between computation rate and stability that adheres to the theoretical bounds derived in our analysis.
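As a rough illustration of the drift-plus-penalty step described above, the sketch below picks, in one slot, the action minimizing Z·energy − V·rate and then updates the virtual energy queue Z. A tiny discrete action set stands in for the convex sub-problem UADRA actually solves; all names and numbers are hypothetical.

```python
def drift_plus_penalty_step(Z, actions, V, E_avg):
    """One slot of a drift-plus-penalty controller.

    Z: virtual energy queue backlog; E_avg: time-average energy budget.
    Each candidate action has an achievable rate and an energy cost.
    """
    best = min(actions, key=lambda a: Z * a["energy"] - V * a["rate"])
    Z_next = max(Z + best["energy"] - E_avg, 0.0)  # virtual queue update
    return best, Z_next

actions = [
    {"name": "local",   "rate": 1.0, "energy": 0.5},
    {"name": "offload", "rate": 3.0, "energy": 2.0},
]
best, Z = drift_plus_penalty_step(0.0, actions, V=500, E_avg=1.0)
print(best["name"], Z)  # → offload 1.0
```

A larger V weights the rate term more heavily, while a growing Z raises the effective price of energy; this is the trade-off the performance analysis quantifies.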
The remainder of this paper is organized as follows. Section II presents the system model of the user-assisted green WPMEC network and formulates an MSSO problem. In Section III, we apply the Lyapunov optimization approach to tackle the problem, proposing an effective dynamic offloading algorithm together with a theoretical analysis of its performance. Section IV evaluates the efficacy of the proposed algorithm via simulation results. Finally, Section V concludes the paper.
1.1. Related Work
The integration of WPT technology with MEC networks provides an effective solution for IoT devices, enhancing their energy and computing capabilities with controllable power supply and low-latency services. Recent research has extensively explored the potential of wirelessly powered MEC networks. For instance, in [31], researchers optimized charging time and data offloading rates for WPT-MEC IoT sensor networks to improve computation rates. The authors of [32] investigated a NOMA-assisted WPT-MEC network with a nonlinear EH model, enhancing the system's Computational Energy Efficiency (CEE) by fine-tuning key network parameters. To meet the energy consumption requirements of devices, the authors of [33] proposed a Particle Swarm Optimization (PSO)-based algorithm that reduces the latency of processing computational data streams by jointly optimizing charging and offloading strategies. In [34], the authors focused on the computation latency of WPT-MEC networks, finding offloading-ratio strategies that synchronize the latency of all WDs and effectively reduce the duration of the overall computational task.
To tackle the doubly near-far effect, researchers have turned to user-assisted WPMEC networks and have confirmed their effectiveness in enhancing the computing performance of distant users. In [35], the authors analyzed a three-node system composed of a distant user, a nearby user, and the base station in a user-assisted MEC-NOMA network, addressing the joint optimization of transmission time and power allocation. References [36,37] respectively explored joint computing-and-communication collaboration schemes and the application of Device-to-Device (D2D) communication in MEC: the method in [36] maximizes the total computation rate of the network with the assistance of nearby users, while [37] minimizes the overall network response delay and energy consumption through joint multi-user collaborative partial offloading, transmission scheduling, and computing allocation. Reference [38] extended this line of work from a single collaboration pair to multiple pairs, proposing a scheme that minimizes the total energy consumption of the AP.
In user-assisted networks, online collaborative offloading methods, which adapt readily and respond promptly to changes in task arrivals, have garnered significant attention. For instance, in [39], to address the randomness of energy and data arrivals, a Lyapunov optimization-based method was proposed to maximize long-term system throughput. In [40], the authors proposed a Lyapunov-based Profit Maximization (LBPM) task offloading algorithm for the Internet of Vehicles (IoV) that takes the time-averaged profit as the optimization goal. In [41], a Lyapunov-based privacy-aware framework for MEC in the industrial IoT was introduced, which addresses privacy and security concerns while also reducing energy consumption. In [42], focusing on a multi-device, single-MEC system, the energy-saving task offloading problem was formulated as a time-averaged energy minimization problem under queue length and resource constraints.
Different from prior studies, this paper addresses the challenges of dynamic task offloading in a green, sustainable WPMEC network with user assistance. We take into account the system's total energy consumption constraint, dynamically arriving tasks in real-time scenarios, and the high dynamics of wireless channel conditions. Moreover, the temporal coupling between WPT and user collaborative communication, as well as the coupling between data offloading time and transmission power in collaborative communication, pose significant challenges.
4. Simulation Results
In this section, extensive numerical simulations are performed to assess the efficiency of the proposed algorithm. Our experiments were conducted on a platform with an Intel(R) Xeon(R) Gold 6148 CPU at 2.40 GHz (20 cores) and four GeForce RTX 3070 GPUs. We employ a free-space path-loss model to characterize the wireless channel [48]. The averaged channel gain is

$$\bar{h} = A_d \left( \frac{3 \times 10^8}{4 \pi f_c d} \right)^{d_e},$$

where $A_d$ denotes the antenna gain, $f_c$ the carrier frequency, $d_e$ the path-loss exponent, and $d$ the distance in meters between two nodes. The time-varying WPT and task-offloading channel gains follow the Rayleigh fading model: the random channel fading factors are exponentially distributed with unit mean, capturing the variability inherent in wireless channels. For simplicity, we assume the fading factors remain constant within any given time slot, so the channel gains are static within that slot. The intervals between task arrivals at FU and NU follow exponential distributions with constant average rates. All parameters are listed in Table 2.
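For concreteness, the averaged free-space gain and per-slot Rayleigh fading can be sketched as follows. The default parameter values (A_d, f_c, d_e) are common choices in the WPMEC literature and are only placeholders for the actual values in Table 2.

```python
import math
import random

def avg_channel_gain(d, A_d=4.11, f_c=915e6, d_e=2.8):
    """Averaged gain of the free-space path-loss model.

    A_d: antenna gain, f_c: carrier frequency (Hz), d_e: path-loss
    exponent, d: node distance (m). Defaults are illustrative only.
    """
    return A_d * (3e8 / (4 * math.pi * f_c * d)) ** d_e

def slot_channel_gain(h_bar, rng=random):
    """One slot's gain: the averaged gain scaled by an exponential(1)
    fading factor, i.e. Rayleigh fading in the power domain."""
    return h_bar * rng.expovariate(1.0)

# Gain decays quickly with distance.
print(avg_channel_gain(10) > avg_channel_gain(100))  # → True
```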
4.1. Impact of System Parameters on Algorithm Performance
Figure 3 illustrates the trend of the average task computation rate and the average task queue lengths of FU and NU over a period of 5000 time slots. The task arrival rates of FU and NU are set to 1.75 Mbps and 2 Mbps, respectively. The average computation rate is initially low, then rapidly increases and eventually stabilizes as time progresses. The initially low rate stems from the system adjusting to the initial task queue fluctuations, which demands more intensive processing of FU tasks, increasing energy consumption and temporarily depressing the overall computation rate. Meanwhile, the average queue length decreases and stabilizes, reflecting the system's ability to self-regulate and reach a steady state.
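The self-regulation visible in Figure 3 follows from the standard queueing recursion Q(t+1) = max(Q(t) − b(t), 0) + a(t); a minimal sketch with hypothetical service and arrival rates shows a backlog draining whenever service outpaces arrivals.

```python
def queue_update(Q, processed, arrivals):
    """One step of the recursion Q(t+1) = max(Q(t) - b(t), 0) + a(t)."""
    return max(Q - processed, 0.0) + arrivals

# Hypothetical numbers: serving 2.0 units/slot against 1.75 arriving
# drains an initial backlog of 100 by 0.25 per slot.
Q = 100.0
for _ in range(200):
    Q = queue_update(Q, processed=2.0, arrivals=1.75)
print(Q)  # → 50.0
```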
Figure 4 shows the average task computation rate of the proposed algorithm under different values of the control parameter V. The results show that the average task computation rate converges similarly for different V. Notably, as V increases, the average task computation rate increases correspondingly. This is because a larger V compels the algorithm to prioritize the computation rate over queue stability, consistent with the theoretical analysis in Theorem 2. Here, V serves as a balancing factor between task computation rate and queue stability, reflecting the trade-off predicted by our theory.
Figure 5 shows the average task queue lengths of FU and NU under different V. As V increases from 100 to 900, the task queue lengths of FU and NU both decline. In the user-assisted offloading paradigm, processing a task from NU involves only a single offloading step, which is markedly more efficient than the two-step offloading of data from FU. Consequently, as V increases, shifting the algorithm's focus toward computation rate over queue stability, the algorithm naturally prefers to offload tasks from NU, so the NU's task queue length declines more rapidly. By tuning V, an optimal balance in task processing between FU and NU can be achieved, reflecting the algorithm's adaptability to different operational priorities.
Figure 6 evaluates the impact of the average energy constraint on system performance with V fixed at 500. As the energy constraint increases, the average task computation rate rises, while the average task queue lengths of FU and NU decrease. The reduction in queue length and increase in computation rate are attributed to the greater energy available for WPT, enabling FU and NU to offload tasks more effectively. Notably, after the average energy constraint reaches 2.1 J, the variation in task computation rate and task queue length diminishes. This suggests an upper bound on the energy consumption of our algorithm: beyond this threshold, additional energy has minimal impact on performance. Hence the energy constraint, as a critical parameter, significantly influences both the data processing rate and the stability of the task queues, underscoring the importance of energy management in optimizing system performance.
Figure 7 presents the total system energy consumption of the proposed algorithm under different values of the parameter V. Initially, the total energy consumption fluctuates substantially. As time progresses, however, it stabilizes and hovers around the average energy constraint after approximately 2500 time slots. Notably, a larger V is correlated with higher average energy consumption, as the algorithm pays more attention to the system's computation rate and consequently incurs greater energy costs. Figure 7 thus highlights the algorithm's efficacy in managing average energy consumption, a critical feature for the sustainability of IoT networks.
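The hovering behavior in Figure 7 is what a virtual energy queue enforces: overspending in a slot inflates the queue, which in the per-slot objective raises the price of energy until consumption falls back below the budget. A minimal sketch of the standard update (all values hypothetical):

```python
def virtual_energy_queue(Z, e_t, E_avg):
    """Update Z(t+1) = max(Z(t) + e(t) - E_avg, 0). If Z stays bounded,
    the time-average consumption cannot exceed E_avg."""
    return max(Z + e_t - E_avg, 0.0)

# A trace whose consumption averages exactly the budget returns Z to zero.
Z = 0.0
for e in [1.5, 1.5, 0.5, 0.5]:
    Z = virtual_energy_queue(Z, e, E_avg=1.0)
print(Z)  # → 0.0
```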
In Figure 8, we evaluate the offloading power across varying bandwidths W. All offloading powers increase as the bandwidth W grows. Consistent with the analysis in Theorem 1, a larger bandwidth makes the system more inclined to perform task offloading, which is reflected in the increased offloading power.
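This bandwidth trend is consistent with a Shannon-type offloading rate W·log2(1 + p·h/(N0·W)), which is increasing in W with diminishing returns. The sketch below assumes this rate form; all numeric values are hypothetical.

```python
import math

def offload_rate(W, p, h, N0=1e-10):
    """Shannon-type offloading rate W*log2(1 + p*h/(N0*W)).

    W: bandwidth (Hz), p: transmit power (W), h: channel gain,
    N0: noise power spectral density (all values hypothetical).
    """
    return W * math.log2(1.0 + p * h / (N0 * W))

# Doubling the bandwidth raises the rate, but by less than a factor of two.
r1 = offload_rate(1e6, p=1.0, h=1e-4)
r2 = offload_rate(2e6, p=1.0, h=1e-4)
print(r1 < r2 < 2 * r1)  # → True
```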
4.2. Comparison with Baseline Algorithms
To evaluate the performance of the proposed algorithm, we choose the following three representative benchmarks as baseline algorithms.
(1) All-offloading scheme: Both FU and NU perform no local computing and spend all their energy on task offloading.
(2) No-cooperation scheme: FU offloads tasks directly to the HAP without soliciting assistance from NU, similar to the method in [49].
(3) No-Lyapunov scheme: Disregarding the dynamics of the task queues and the energy queue, this scheme focuses solely on maximizing the average task computation rate, similar to the Myopic method in [30]. To ensure a fair comparison, we constrain the energy consumption of each time slot.
Figure 9 shows the average task computation rates of the four schemes over 5000 time slots, with the control parameter V set to 500. All schemes converge after 1000 time slots. Our proposed algorithm achieves the best task computation rate after convergence, outperforming the other three by 0.8%, 3.9%, and 4.1%, respectively. Its key strengths lie not only in achieving the highest data processing rate but also in ensuring the stability of the system queues, preventing excessively long queues that would prolong response times and degrade the user experience. The no-Lyapunov scheme, while achieving the second-highest computation rate, neglects queue stability in its pursuit of speed; this oversight can lead to system instability, prolonged user service times, and potential system failure. The all-offloading scheme, relying solely on edge computing, consumes more energy and thus underperforms in energy-limited scenarios. In the no-cooperation scheme, the system initially enjoys a high computation rate because the NU does not assist the FU; but once the NU's tasks are completed and its resources sit idle, the average computation rate falls sharply. The FU's communication with the HAP is further impeded by the doubly near-far effect, causing a notable decline in the system's long-term computation performance.
Figure 10 shows the impact of varying the network bandwidth W on the performance of the different schemes. As W increases, the task computation rates of all schemes rise, reflecting improved transmission efficiency for both wireless power transfer and task offloading. This allows the HAP to handle more offloaded tasks, highlighting the critical role of bandwidth in system performance. Notably, our proposed scheme consistently outperforms the others across all bandwidth levels, showcasing its adaptability and robustness under varying network conditions.
Figure 11 evaluates how the distance between FU and NU affects the performance of all four schemes, with distances varying from 120 m to 160 m. We observe that as the distance increases, the computation rates for both our proposed scheme and the all-offloading scheme decrease. This suggests that proximity plays a crucial role in task offloading efficiency. In contrast, the no-cooperation scheme shows a stable computation rate, consistent with its design that excludes task offloading between FU and NU. Interestingly, the no-Lyapunov scheme performs best at a distance of about 140 meters. However, its performance drops as the distance decreases, contrary to the expectation that a shorter distance would enhance task offloading from FU to NU. This unexpected trend is likely due to instances where the FU’s task queue depletes faster than new tasks arrive, leading to lower computation rates for the no-Lyapunov scheme. This highlights the importance of balancing task computation rates with queue stability in system design.
In Figure 12, we evaluate the performance of the four schemes as the task arrival rate of NU varies. Our proposed scheme's task computation rate shows a modest increase and remains the highest as tasks arrive more rapidly, underscoring its robustness across diverse scenarios. Correspondingly, the no-cooperation scheme exhibits a more pronounced increase in task computation rate, attributable to the NU's vigorous task processing capacity, which allows it to capitalize effectively on the higher task arrival rates.