Search Results (42)

Search Parameters:
Keywords = multiple access edge computing (MEC)

27 pages, 34410 KiB  
Article
Multi-UAV-Assisted Task Offloading and Trajectory Optimization for Edge Computing via NOMA
by Jiajia Liu, Haoran Hu, Xu Bai, Guohua Li, Xudong Zhang, Haitao Zhou, Huiru Li and Jianhua Liu
Sensors 2025, 25(16), 4965; https://doi.org/10.3390/s25164965 - 11 Aug 2025
Viewed by 510
Abstract
Unmanned Aerial Vehicles (UAVs) exhibit significant potential in enhancing the wireless communication coverage and service quality of Mobile Edge Computing (MEC) systems due to their superior flexibility and ease of deployment. However, the rapid growth of tasks leads to transmission queuing in edge networks, while the uneven distribution of user nodes and services causes network load imbalance, resulting in increased user waiting delays. To address these issues, we propose a multi-UAV collaborative MEC network model based on Non-Orthogonal Multiple Access (NOMA). In this model, UAVs are endowed with the capability to dynamically offload tasks among one another, thereby fostering a more equitable load distribution across the UAV swarm. Furthermore, the integration of NOMA is strategically employed to alleviate the inherent queuing delays in the communication infrastructure. Considering delay and energy consumption constraints, we formulate a task offloading strategy optimization problem with the objective of minimizing the overall system delay. To solve this problem, we design a delay-optimized offloading strategy based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. By jointly optimizing task offloading decisions and UAV flight trajectories, the system delay is significantly reduced. Simulation results show that, compared to traditional approaches, the proposed algorithm achieves delay reductions of 20.2%, 9.8%, 17.0%, 12.7%, 15.0%, and 11.6% across scenarios that vary the task volume, number of IoT devices, UAV flight speed, flight time, IoT device computing capacity, and UAV computing capability, respectively. These results demonstrate the effectiveness of the proposed solution and offloading decisions in reducing the overall system delay. Full article
(This article belongs to the Special Issue Cloud and Edge Computing for IoT Applications)
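The TD3 algorithm named in the abstract above rests on two ingredients: clipped double-Q learning and target-policy smoothing. A minimal sketch of both, with illustrative scalar values and names that are assumptions, not the authors' implementation:

```python
import numpy as np

def td3_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    # Clipped double-Q: bootstrap from the smaller of the two target
    # critics' estimates to curb value overestimation.
    q_min = min(q1_next, q2_next)
    return reward + (0.0 if done else gamma * q_min)

def smoothed_action(mu, noise_std=0.2, clip=0.5, low=-1.0, high=1.0):
    # Target-policy smoothing: perturb the target action with clipped
    # Gaussian noise before the critics evaluate it.
    eps = np.clip(np.random.normal(0.0, noise_std), -clip, clip)
    return float(np.clip(mu + eps, low, high))
```

In the paper's setting the action would encode offloading decisions and UAV trajectory adjustments; here the functions only illustrate the generic TD3 update mechanics.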

16 pages, 1966 KiB  
Article
DRL-Driven Intelligent SFC Deployment in MEC Workload for Dynamic IoT Networks
by Seyha Ros, Intae Ryoo and Seokhoon Kim
Sensors 2025, 25(14), 4257; https://doi.org/10.3390/s25144257 - 8 Jul 2025
Viewed by 391
Abstract
The rapid increase in the deployment of Internet of Things (IoT) sensor networks has led to an exponential growth in data generation and an unprecedented demand for efficient resource management infrastructure. Ensuring end-to-end communication across multiple heterogeneous network domains is crucial to maintaining Quality of Service (QoS) requirements, such as low latency and high computational capacity, for IoT applications. However, limited computing resources at multi-access edge computing (MEC) nodes, coupled with increasing IoT network requests during task offloading, often lead to network congestion, service latency, and inefficient resource utilization, degrading overall system performance. This paper proposes an intelligent task offloading and resource orchestration framework to address these challenges, thereby optimizing energy consumption, computational cost, network congestion, and service latency in dynamic IoT-MEC environments. The framework introduces task offloading and a dynamic resource orchestration strategy, where task offloading to the MEC server ensures an efficient distribution of computation workloads. The dynamic resource orchestration process, comprising Service Function Chaining (SFC) placement of Virtual Network Functions (VNFs) and routing path determination, optimizes service execution across the network. To achieve adaptive and intelligent decision-making, the proposed approach leverages Deep Reinforcement Learning (DRL) to dynamically allocate resources and offload task execution, thereby improving overall system efficiency and approximating the optimal policy in edge computing. A Deep Q-Network (DQN) is leveraged to learn an optimal policy for network resource adjustment and task offloading, ensuring flexible adaptation in SFC deployment. The simulation results demonstrate that the DRL-based scheme significantly outperforms the reference scheme in terms of cumulative reward, reduced service latency, lowered energy consumption, and improved delivery and throughput. Full article
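As a rough illustration of the DQN machinery behind such SFC deployment decisions, the Bellman targets for a batch of transitions could be computed as follows; the batch values and "candidate placement" framing are toy assumptions, not the paper's state or action space:

```python
import numpy as np

def dqn_targets(rewards, q_next, gamma=0.99, dones=None):
    # Bellman targets for a batch: r + gamma * max_a' Q(s', a'),
    # with the bootstrap term zeroed at terminal states.
    q_max = q_next.max(axis=1)
    if dones is not None:
        q_max = np.where(dones, 0.0, q_max)
    return rewards + gamma * q_max

# Toy batch: two transitions, three candidate SFC placements each.
q_next = np.array([[1.0, 2.0, 0.5],
                   [0.0, 0.1, 0.3]])
targets = dqn_targets(np.array([1.0, 0.5]), q_next,
                      dones=np.array([False, True]))
```

The online network would then be regressed toward these targets; everything else (replay buffer, target-network sync) is omitted for brevity.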

24 pages, 5736 KiB  
Article
Joint Task Offloading and Power Allocation for Satellite Edge Computing Networks
by Yuxuan Li, Shibing Zhu, Ting Xiong, Yuwei Li, Qi Su and Jianmei Dai
Sensors 2025, 25(9), 2892; https://doi.org/10.3390/s25092892 - 3 May 2025
Viewed by 637
Abstract
Low Earth orbit (LEO) satellite networks have shown extensive application in the fields of navigation, communication services in remote areas, and disaster early warning. Inspired by multi-access edge computing (MEC) technology, satellite edge computing (SEC) has emerged, which deploys mobile edge computing on satellites to achieve lower service latency by leveraging the advantage of satellites being closer to users. However, due to the limitations in the size and power of LEO satellites, processing computationally intensive tasks with a single satellite may overload it, reducing its lifespan and resulting in high service latency. In this paper, we consider a scenario of multi-satellite collaborative offloading. We mainly focus on computation offloading in the satellite edge computing network (SECN) by jointly considering the transmission power and task assignment ratios. A maximum-delay minimization problem under power and energy constraints is formulated, and a distributed balance increasing penalty dual decomposition (DB-IPDD) algorithm is proposed, utilizing a triple-layer computing structure that leverages the computing resources of multiple LEO satellites. Simulation results demonstrate the advantage of the proposed solution over several baseline schemes. Full article
(This article belongs to the Section Communications)

20 pages, 468 KiB  
Article
Toward 6G: Latency-Optimized MEC Systems with UAV and RIS Integration
by Abdullah Alshahrani
Mathematics 2025, 13(5), 871; https://doi.org/10.3390/math13050871 - 5 Mar 2025
Viewed by 1126
Abstract
Multi-access edge computing (MEC) has emerged as a cornerstone technology for deploying 6G network services, offering efficient computation and ultra-low-latency communication. The integration of unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) further enhances wireless propagation, capacity, and coverage, presenting a transformative paradigm for next-generation networks. This paper addresses the critical challenge of task offloading and resource allocation in an MEC-based system, where a massive MIMO base station, serving multiple macro-cells, hosts the MEC server with support from a UAV-equipped RIS. We propose an optimization framework to minimize task execution latency for user equipment (UE) by jointly optimizing task offloading and communication resource allocation within this UAV-assisted, RIS-aided network. By modeling this problem as a Markov decision process (MDP) with a discrete-continuous hybrid action space, we develop a deep reinforcement learning (DRL) algorithm leveraging a hybrid space representation to solve it effectively. Extensive simulations validate the superiority of the proposed method, demonstrating significant latency reductions compared to state-of-the-art approaches, thereby advancing the feasibility of MEC in 6G networks. Full article
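The hybrid discrete-continuous action space mentioned above is commonly handled by splitting one raw policy output into a discrete choice plus a continuous parameter. A hypothetical sketch of such a split; the "offloading target plus bandwidth fraction" interpretation and all names are assumptions for illustration, not the paper's design:

```python
import numpy as np

def split_hybrid_action(raw, n_servers):
    # Discrete part: argmax over the first n_servers logits picks the
    # offloading target; continuous part: a sigmoid of the last entry
    # gives the fraction of bandwidth to allocate to the task.
    target = int(np.argmax(raw[:n_servers]))
    frac = 1.0 / (1.0 + np.exp(-raw[-1]))
    return target, float(frac)

# One raw policy vector: three server logits plus one continuous knob.
target, frac = split_hybrid_action(np.array([0.2, 1.5, -0.3, 0.0]), 3)
```

A DRL agent trained on such a representation optimizes both parts jointly, which is the essence of the hybrid-action MDP formulation described in the abstract.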

32 pages, 2442 KiB  
Article
Federated Learning System for Dynamic Radio/MEC Resource Allocation and Slicing Control in Open Radio Access Network
by Mario Martínez-Morfa, Carlos Ruiz de Mendoza, Cristina Cervelló-Pastor and Sebastia Sallent-Ribes
Future Internet 2025, 17(3), 106; https://doi.org/10.3390/fi17030106 - 26 Feb 2025
Viewed by 1447
Abstract
The evolution of cellular networks from fifth-generation (5G) architectures to beyond 5G (B5G) and sixth-generation (6G) systems necessitates innovative solutions to overcome the limitations of traditional Radio Access Network (RAN) infrastructures. Existing monolithic and proprietary RAN components restrict adaptability, interoperability, and optimal resource utilization, posing challenges in meeting the stringent requirements of next-generation applications. The Open Radio Access Network (O-RAN) and Multi-Access Edge Computing (MEC) have emerged as transformative paradigms, enabling disaggregation, virtualization, and real-time adaptability, which are key to achieving ultra-low latency, enhanced bandwidth efficiency, and intelligent resource management in future cellular systems. This paper presents a Federated Deep Reinforcement Learning (FDRL) framework for dynamic radio and edge computing resource allocation and slicing management in O-RAN environments. An Integer Linear Programming (ILP) model has also been developed; relative to it, the proposed FDRL solution drastically reduces the system response time. Moreover, unlike centralized Reinforcement Learning (RL) approaches, the proposed FDRL solution leverages Federated Learning (FL) to optimize performance while preserving data privacy and reducing communication overhead. Comparative evaluations against centralized models demonstrate that the federated approach improves learning efficiency and reduces bandwidth consumption. The system has been rigorously tested across multiple scenarios, including multi-client O-RAN environments and loss-of-synchronization conditions, confirming its resilience in distributed deployments. Additionally, a case study simulating realistic traffic profiles validates the proposed framework’s ability to dynamically manage radio and computational resources, ensuring efficient and adaptive O-RAN slicing for diverse and high-mobility scenarios. Full article
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)

22 pages, 1686 KiB  
Article
Optimizing Transmit Power for User-Cooperative Backscatter-Assisted NOMA-MEC: A Green IoT Perspective
by Huaiwen He, Chenghao Zhou, Feng Huang, Hong Shen and Yihong Yang
Electronics 2024, 13(23), 4678; https://doi.org/10.3390/electronics13234678 - 27 Nov 2024
Viewed by 885
Abstract
Non-orthogonal multiple access (NOMA) enables the parallel offloading of multiuser tasks, effectively enhancing throughput and reducing latency. Backscatter communication, which passively reflects radio frequency (RF) signals, improves energy efficiency and extends the operational lifespan of terminal devices. Both technologies are pivotal for the next generation of wireless networks. However, there is little research focusing on optimizing the transmit power in backscatter-assisted NOMA-MEC systems from a green IoT perspective. In this paper, we aim to minimize the transmit energy consumption of a Hybrid Access Point (HAP) while ensuring task deadlines are met. We consider the integration of Backscatter Communication (BackCom) and Active Transmission (AT), and leverage NOMA technology and user cooperation to mitigate the double near–far effect. Specifically, we formulate a transmit energy consumption minimization problem, accounting for task deadline constraints, task offloading decisions, transmit power allocation, and energy constraints. To tackle the non-convex optimization problem, we employ variable substitution and convex optimization theory to transform the original non-convex problem into a convex one, which is then efficiently solved. We deduce a semi-closed-form expression for the optimal solution and propose an energy-efficient algorithm to minimize the transmit power of the entire wireless-powered MEC system. Extensive simulation results demonstrate that our proposed scheme reduces the HAP transmit power by around 8% compared to existing schemes, validating the effectiveness of our approach. This study provides valuable insights for the design of green IoT systems by optimizing the transmit power in NOMA-MEC networks. Full article

26 pages, 3821 KiB  
Article
A Cascaded Multi-Agent Reinforcement Learning-Based Resource Allocation for Cellular-V2X Vehicular Platooning Networks
by Iswarya Narayanasamy and Venkateswari Rajamanickam
Sensors 2024, 24(17), 5658; https://doi.org/10.3390/s24175658 - 30 Aug 2024
Cited by 4 | Viewed by 2098
Abstract
The platooning of cars and trucks is a pertinent approach for autonomous driving due to the effective utilization of roadways. The decreased gas consumption levels are an added merit owing to sustainability. Conventional platooning depended on Dedicated Short-Range Communication (DSRC)-based vehicle-to-vehicle communications, with the computations executed by the platoon members within their constrained capabilities. The advent of 5G has favored Intelligent Transportation Systems (ITS) to adopt Multi-access Edge Computing (MEC) in platooning paradigms by offloading the computational tasks to the edge server. In this research, vital aspects of vehicular platooning systems, viz., latency-sensitive radio resource management schemes and the Age of Information (AoI), are investigated. In addition, the delivery rates of Cooperative Awareness Messages (CAM) that ensure expeditious reception of safety-critical messages at the roadside units (RSU) are also examined. However, for latency-sensitive applications like vehicular networks, it is essential to address multiple and correlated objectives. To solve such objectives effectively and simultaneously, the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) framework necessitates a better and more sophisticated model to enhance its ability. In this paper, a novel Cascaded MADDPG framework, CMADDPG, is proposed to train cascaded target critics, which aims to achieve the expected rewards through the collaborative conduct of agents. The estimation bias phenomenon, which hinders a system’s overall performance, is circumvented in this cascaded algorithm. Experimental analysis demonstrates the potential of the proposed algorithm: the convergence factor stabilizes quickly with minimal distortion, and CAM messages are disseminated reliably with 99% probability. The average AoI is maintained within the 5–10 ms range, guaranteeing better QoS. The technique has proven its robustness in decentralized resource allocation against channel uncertainties caused by higher mobility in the environment. Most importantly, the performance of the proposed algorithm remains unaffected by increasing platoon size and the resulting channel uncertainties. Full article
(This article belongs to the Section Sensor Networks)

19 pages, 1076 KiB  
Article
TRUST-ME: Trust-Based Resource Allocation and Server Selection in Multi-Access Edge Computing
by Sean Tsikteris, Aisha B Rahman, Md. Sadman Siraj and Eirini Eleni Tsiropoulou
Future Internet 2024, 16(8), 278; https://doi.org/10.3390/fi16080278 - 4 Aug 2024
Cited by 3 | Viewed by 2089
Abstract
Multi-access edge computing (MEC) has attracted the interest of the research and industrial community to support Internet of Things (IoT) applications by enabling efficient data processing and minimizing latency. This paper presents significant contributions toward optimizing the resource allocation and enhancing the decision-making process in edge computing environments. Specifically, the TRUST-ME model is introduced, which consists of multiple edge servers and IoT devices, i.e., users, with varied computing tasks offloaded to the MEC servers. A utility function was designed to quantify the benefits in terms of latency and cost for the IoT device while utilizing the MEC servers’ computing capacities. The core innovation of our work is a novel trust model designed to evaluate the IoT devices’ confidence in MEC servers. This model integrates both direct and indirect trust and reflects the trustworthiness of the servers based on direct interactions and social feedback from other devices using the same servers. This dual trust approach helps accurately gauge the reliability of MEC services and ensures more informed decision making. A reinforcement learning framework based on optimistic Q-learning with an upper-confidence-bound action-selection rule enables the IoT devices to autonomously select a MEC server to process their computing tasks. Also, a multilateral bargaining model is proposed for fair allocation of the MEC servers’ computing resources to the users while accounting for their computing demands. Numerical simulations demonstrated the operational effectiveness, convergence, and scalability of the TRUST-ME model, which was validated through real-world scenarios and comprehensive comparative evaluations against existing approaches. Full article
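The upper-confidence-bound action selection described above can be sketched as follows; the server value estimates and trial counts are made-up numbers, and the selection rule is the generic UCB form rather than the paper's exact variant:

```python
import math

def ucb_select(q_values, counts, t, c=2.0):
    # UCB rule: estimated value plus an exploration bonus that shrinks
    # as a server accumulates trials; untried servers get priority.
    best, best_score = 0, float("-inf")
    for i, (q, n) in enumerate(zip(q_values, counts)):
        score = float("inf") if n == 0 else q + c * math.sqrt(math.log(t) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```

In TRUST-ME's setting the value estimates would fold in the trust scores; here they are plain scalars to keep the selection logic visible.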

18 pages, 3297 KiB  
Article
Computation Offloading Strategy for Detection Task in Railway IoT with Integrated Sensing, Storage, and Computing
by Qichang Guo, Zhanyue Xu, Jiabin Yuan and Yifei Wei
Electronics 2024, 13(15), 2982; https://doi.org/10.3390/electronics13152982 - 29 Jul 2024
Cited by 2 | Viewed by 1250
Abstract
Online detection devices, powered by artificial intelligence technologies, enable the comprehensive and continuous detection of high-speed railways (HSRs). However, the computation-intensive and latency-sensitive nature of these detection tasks often exceeds local processing capabilities. Mobile Edge Computing (MEC) emerges as a key solution in the railway Internet of Things (IoT) scenario to address these challenges. Nevertheless, the rapidly varying channel conditions in HSR scenarios pose significant challenges for efficient resource allocation. In this paper, a computation offloading system model for detection tasks in the railway IoT scenario is proposed. This system includes direct and relay transmission models, incorporating Non-Orthogonal Multiple Access (NOMA) technology. This paper focuses on the offloading strategy for subcarrier assignment, mode selection, relay power allocation, and computing resource management within this system to minimize the average delay ratio (the ratio of delay to the maximum tolerable delay). However, this optimization problem is a complex Mixed-Integer Non-Linear Programming (MINLP) problem. To address this, we present a low-complexity subcarrier allocation algorithm to reduce the dimensionality of decision-making actions. Furthermore, we propose an improved Deep Deterministic Policy Gradient (DDPG) algorithm that represents discrete variables using selection probabilities to handle the hybrid action space problem. Our results indicate that the proposed system model adapts well to the offloading issues of detection tasks in HSR scenarios, and the improved DDPG algorithm efficiently identifies optimal computation offloading strategies. Full article
(This article belongs to the Special Issue Control Systems Design for Connected and Autonomous Vehicles)

23 pages, 1229 KiB  
Article
Towards Collaborative Edge Intelligence: Blockchain-Based Data Valuation and Scheduling for Improved Quality of Service
by Yao Du, Zehua Wang, Cyril Leung and Victor C. M. Leung
Future Internet 2024, 16(8), 267; https://doi.org/10.3390/fi16080267 - 28 Jul 2024
Cited by 3 | Viewed by 2003
Abstract
Collaborative edge intelligence, a distributed computing paradigm, refers to a system where multiple edge devices work together to process data and perform distributed machine learning (DML) tasks locally. Decentralized Internet of Things (IoT) devices share knowledge and resources to improve the quality of service (QoS) of the system with reduced reliance on centralized cloud infrastructure. However, the paradigm is vulnerable to free-riding attacks, where some devices benefit from the collective intelligence without contributing their fair share, potentially disincentivizing collaboration and undermining the system’s effectiveness. Moreover, data collected from heterogeneous IoT devices may contain biased information that decreases the prediction accuracy of DML models. To address these challenges, we propose a novel incentive mechanism that relies on time-dependent blockchain records and multi-access edge computing (MEC). We formulate the QoS problem as an unbounded multiple knapsack problem at the network edge. Furthermore, a decentralized valuation protocol is introduced atop blockchain to incentivize contributors and disincentivize free-riders. To improve model prediction accuracy within latency requirements, a data scheduling algorithm is given based on a curriculum learning framework. Based on our computer simulations using heterogeneous datasets, we identify two critical factors for enhancing the QoS in collaborative edge intelligence systems: (1) mitigating the impact of information loss and free-riders via decentralized data valuation and (2) optimizing the marginal utility of individual data samples by adaptive data scheduling. Full article
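The QoS problem above is cast as an unbounded multiple knapsack problem. The single-knapsack unbounded case has a classic dynamic program, sketched here with toy weights and values; the paper's actual multi-knapsack formulation is not reproduced:

```python
def unbounded_knapsack(capacity, weights, values):
    # dp[c] = best total value achievable with total weight <= c,
    # with unlimited copies of each item type allowed.
    dp = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

In the edge-intelligence setting, "items" would correspond to data samples or tasks and "capacity" to an edge node's latency or compute budget; the DP runs in O(capacity x item types).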

17 pages, 15754 KiB  
Article
Computation Offloading with Privacy-Preserving in Multi-Access Edge Computing: A Multi-Agent Deep Reinforcement Learning Approach
by Xiang Dai, Zhongqiang Luo and Wei Zhang
Electronics 2024, 13(13), 2655; https://doi.org/10.3390/electronics13132655 - 6 Jul 2024
Cited by 1 | Viewed by 1619
Abstract
The rapid development of mobile communication technologies and Internet of Things (IoT) devices has introduced new challenges for multi-access edge computing (MEC). A key issue is how to efficiently manage MEC resources and determine the optimal offloading strategy between edge servers and user devices, while also protecting user privacy and thereby improving the Quality of Service (QoS). To address this issue, this paper investigates a privacy-preserving computation offloading scheme, designed to maximize QoS by comprehensively considering privacy protection, delay, energy consumption, and the task discard rate of user devices. We first formalize the privacy issue by introducing the concept of privacy entropy. Then, based on quantified indicators, a multi-objective optimization problem is established. To find an optimal solution to this problem, this paper proposes a computation offloading algorithm, TD3-SN-PER, based on the Twin Delayed Deep Deterministic Policy Gradient (TD3), which integrates clipped double-Q learning, prioritized experience replay, and state normalization techniques. Finally, the proposed method is evaluated through simulation analysis. The experimental results demonstrate that our approach can effectively balance multiple performance metrics to achieve optimal QoS. Full article

17 pages, 1058 KiB  
Article
UAV-Mounted RIS-Aided Mobile Edge Computing System: A DDQN-Based Optimization Approach
by Min Wu, Shibing Zhu, Changqing Li, Jiao Zhu, Yudi Chen, Xiangyu Liu and Rui Liu
Drones 2024, 8(5), 184; https://doi.org/10.3390/drones8050184 - 7 May 2024
Cited by 7 | Viewed by 2772
Abstract
Unmanned aerial vehicles (UAVs) and reconfigurable intelligent surfaces (RISs) are increasingly employed in mobile edge computing (MEC) systems to flexibly modify the signal transmission environment. This is achieved through the active manipulation of the wireless channel facilitated by the mobile deployment of UAVs and the intelligent reflection of signals by RISs. However, these technologies are subject to inherent limitations such as the restricted range of UAVs and limited RIS coverage, which hinder their broader application. The integration of UAVs and RISs into UAV–RIS schemes presents a promising approach to surmounting these limitations by leveraging the strengths of both technologies. Motivated by the above observations, we consider a novel UAV–RIS-aided MEC system, wherein the UAV–RIS plays a pivotal role in facilitating communication between terrestrial vehicle users and MEC servers. To address the resulting challenging non-convex problem, we propose an energy-constrained approach to maximize the system’s energy efficiency based on a double deep Q-network (DDQN), which is employed to realize joint control of the UAVs, passive beamforming, and resource allocation for MEC. Numerical results demonstrate that the proposed optimization scheme significantly enhances the system efficiency of the UAV–RIS-aided time division multiple access (TDMA) network. Full article
(This article belongs to the Special Issue Mobile Fog and Edge Computing in Drone Swarms)
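The DDQN mentioned above follows the double-DQN idea of decoupling action selection from evaluation. A minimal sketch of the target computation, with toy numbers rather than the paper's joint UAV/beamforming/resource action space:

```python
import numpy as np

def ddqn_target(reward, q_online_next, q_target_next, gamma=0.99, done=False):
    # Double DQN: the online network selects the best next action,
    # the target network evaluates it, reducing overestimation bias
    # relative to vanilla DQN's single-network max.
    if done:
        return reward
    a = int(np.argmax(q_online_next))
    return reward + gamma * float(q_target_next[a])
```

Compare with vanilla DQN, which would take `max(q_target_next)` directly; the decoupled form is what "double-deep" refers to.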

36 pages, 1912 KiB  
Review
Empowering the Vehicular Network with RIS Technology: A State-of-the-Art Review
by Farheen Naaz, Ali Nauman, Tahir Khurshaid and Sung-Won Kim
Sensors 2024, 24(2), 337; https://doi.org/10.3390/s24020337 - 5 Jan 2024
Cited by 19 | Viewed by 5048
Abstract
Reconfigurable intelligent surfaces (RIS) are expected to bring about a revolutionary transformation in vehicular networks, thus paving the way for a future characterized by connected and automated vehicles (CAV). An RIS is a planar structure comprising many passive elements that can dynamically manipulate electromagnetic waves to enhance wireless communication by reflecting, refracting, and focusing signals in a programmable manner. RIS exhibits substantial potential for improving vehicle-to-everything (V2X) communication through various means, including coverage enhancement, interference mitigation, improving signal strength, and providing additional layers of privacy and security. This article presents a comprehensive survey that explores the emerging opportunities arising from the integration of RIS into vehicular networks. To examine the convergence of RIS and V2X communications, the survey adopted a holistic approach, thus highlighting the potential benefits and challenges of this combination. In this study, we examined several applications of RIS-aided V2X communication. Subsequently, we delve into the fundamental emerging technologies that are expected to empower vehicular networks, encompassing mobile edge computing (MEC), non-orthogonal multiple access (NOMA), millimeter-wave communication (mmWave), Artificial Intelligence (AI), and visible light communication (VLC). Finally, to stimulate further research in this domain, we emphasize noteworthy research challenges and potential avenues for future exploration. Full article
(This article belongs to the Special Issue Sensing Technology in Internet of Vehicles)

21 pages, 2271 KiB  
Article
EDI-C: Reputation-Model-Based Collaborative Audit Scheme for Edge Data Integrity
by Fan Yang, Yi Sun, Qi Gao and Xingyuan Chen
Electronics 2024, 13(1), 75; https://doi.org/10.3390/electronics13010075 - 23 Dec 2023
Cited by 4 | Viewed by 1340
Abstract
The emergence of mobile edge computing (MEC) has facilitated the development of data caching technology, which enables application vendors to cache frequently used data on edge servers close to the user, thereby providing low-latency data access services. However, in an unstable MEC environment, the multi-replica data cached on different edge servers is prone to corruption, making it crucial to verify the consistency of the replicas across servers. Although existing research achieves data integrity verification through the cooperation of multiple edge servers, the integrity proofs generated for the different replicas are identical, which yields low verification efficiency and leaves the schemes vulnerable to replay and replace attacks. To address these issues, this paper proposes an efficient and lightweight multi-replica integrity verification algorithm based on homomorphic hashing and sampling, which incurs significantly lower storage and computational costs and resists forgery, replay, and replace attacks. Building on this algorithm, the paper further proposes EDI-C, a reputation-model-based collaborative audit scheme for multi-replica edge data integrity. EDI-C achieves efficient collaborative auditing among multiple edge servers in a distributed environment through an incentive mechanism, avoiding the trust problems caused by centralized auditing. It also supports batch auditing of multiple replicas of the original data files through parallel processing and data-block auditing, which not only significantly improves verification efficiency but also enables accurate localization and repair of corrupted data at the block level. Finally, security analyses and a performance evaluation demonstrate the security and practicality of EDI-C. Compared with representative schemes, the results show that EDI-C verifies the integrity of cached data more efficiently in an MEC environment. Full article
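The homomorphic-hash-plus-sampling idea behind such a verification algorithm can be illustrated with a small sketch. This is a hypothetical toy, not the authors' EDI-C implementation: it uses H(m) = g^m mod p for its additive homomorphism, and fresh random challenge coefficients make every proof unique, which is what defeats replay and replace attacks.

```python
# Toy homomorphic-hash spot check (illustrative, not the EDI-C code).
# With H(m) = G^m mod P, we get H(m1 + m2) == H(m1) * H(m2) mod P, so the
# auditor can verify an aggregated proof using only per-block hashes.
import random

P = 2**127 - 1   # Mersenne prime modulus (illustrative parameter choice)
G = 5            # base for the hash (illustrative parameter choice)

def h(block: int) -> int:
    """Homomorphic hash of one data block."""
    return pow(G, block, P)

def aggregate_proof(blocks, challenge):
    """Edge server: combine challenged blocks with the auditor's fresh random
    coefficients, so a replayed or substituted proof will not verify."""
    return sum(c * blocks[i] for i, c in challenge) % (P - 1)

def verify(stored_hashes, challenge, proof) -> bool:
    """Auditor: check the proof against per-block hashes, never seeing the blocks."""
    expected = 1
    for i, c in challenge:
        expected = (expected * pow(stored_hashes[i], c, P)) % P
    return pow(G, proof, P) == expected

blocks = [random.randrange(P - 1) for _ in range(8)]
hashes = [h(b) for b in blocks]
# Sample a few block indices with fresh random coefficients (the challenge).
challenge = [(i, random.randrange(1, P - 1)) for i in random.sample(range(8), 3)]
assert verify(hashes, challenge, aggregate_proof(blocks, challenge))   # intact replica passes
blocks[challenge[0][0]] ^= 1                                           # corrupt one challenged block
assert not verify(hashes, challenge, aggregate_proof(blocks, challenge))
```

Because verification works on an aggregate of sampled blocks, the auditor's cost stays small regardless of file size, matching the lightweight-audit goal described in the abstract.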
24 pages, 812 KiB  
Article
Enhancing Data Freshness in Air-Ground Collaborative Heterogeneous Networks through Contract Theory and Generative Diffusion-Based Mobile Edge Computing
by Zhiyao Sun and Guifen Chen
Sensors 2024, 24(1), 74; https://doi.org/10.3390/s24010074 - 22 Dec 2023
Cited by 4 | Viewed by 1560
Abstract
Mobile edge computing is critical for improving the user experience of latency-sensitive and freshness-based applications. This paper examines the potential of converging non-orthogonal multiple access (NOMA) with heterogeneous air–ground collaborative networks to improve system throughput and spectral efficiency. Coordinated resource allocation between UAVs and MEC servers, especially within the NOMA framework, is addressed as a key challenge. To avoid the unrealistic assumption that edge nodes contribute resources indiscriminately, we introduce a two-stage incentive mechanism. The model is based on contract theory and aims to optimize the utility of the service provider (SP) under the individual rationality (IR) and incentive compatibility (IC) constraints of the mobile users. The block coordinate descent method is used to refine the contract design, complemented by a generative diffusion model that improves the efficiency of searching for contracts. For deployment, the study emphasizes positioning the UAVs to maximize SP utility, and an improved differential evolution algorithm is introduced to optimize their placement. Extensive evaluation shows that our approach is effective and robust in both deterministic and unpredictable scenarios. Full article
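The IR and IC constraints named in the abstract can be made concrete with a small sketch. The user types, quadratic effort cost, and contract menu below are invented for illustration and are not the paper's model: IR requires each user type to earn non-negative utility from its own contract, and IC requires that no type do better by taking a contract meant for another type.

```python
# Toy feasibility check for a discrete contract menu under IR and IC.
# Types, utility form, and menu values are illustrative assumptions.

def utility(theta, reward, effort):
    """User of type theta values the reward linearly and pays a quadratic effort cost."""
    return theta * reward - effort ** 2

# One (effort, reward) pair per user type; higher types contribute more effort.
types = [1.0, 1.5, 2.0]
menu = [(1.0, 1.01), (1.5, 1.85), (2.0, 2.73)]

def satisfies_ir(types, menu):
    # IR: every type gets non-negative utility from its own contract.
    return all(utility(t, r, q) >= 0 for t, (q, r) in zip(types, menu))

def satisfies_ic(types, menu):
    # IC: no type gains by selecting a contract intended for another type.
    return all(
        utility(t, menu[i][1], menu[i][0]) >= utility(t, r, q)
        for i, t in enumerate(types)
        for (q, r) in menu
    )

assert satisfies_ir(types, menu) and satisfies_ic(types, menu)
```

A contract optimizer such as the block coordinate descent step described in the abstract would search over such menus for the SP-optimal one while keeping both predicates true.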
(This article belongs to the Section Sensor Networks)
