Search Results (54)

Search Parameters:
Keywords = delay-power tradeoff

22 pages, 10412 KiB  
Article
Design and Evaluation of Radiation-Tolerant 2:1 CMOS Multiplexers in 32 nm Technology Node: Transistor-Level Mitigation Strategies and Performance Trade-Offs
by Ana Flávia D. Reis, Bernardo B. Sandoval, Cristina Meinhardt and Rafael B. Schvittz
Electronics 2025, 14(15), 3010; https://doi.org/10.3390/electronics14153010 - 28 Jul 2025
Viewed by 222
Abstract
In advanced Complementary Metal-Oxide-Semiconductor (CMOS) technologies, where diminished feature sizes amplify radiation-induced soft errors, the optimization of fault-tolerant circuit designs requires detailed transistor-level analysis of reliability–performance trade-offs. As a fundamental building block in digital systems and critical data paths, the 2:1 multiplexer, widely used in data-path routing, clock networks, and reconfigurable systems, provides a critical benchmark for assessing radiation-hardened design methodologies. In this context, this work aims to analyze the power consumption, area overhead, and delay of 2:1 multiplexer designs under transient fault conditions, employing the CMOS and Differential Cascode Voltage Switch Logic (DCVSL) logic styles and mitigation strategies. Electrical simulations were conducted using 32 nm high-performance predictive technology, evaluating both the original circuit versions and modified variants incorporating three mitigation strategies: transistor sizing, D-Cells, and C-Elements. Key metrics, including power consumption, delay, area, and radiation robustness, were analyzed. The C-Element and transistor sizing techniques ensure satisfactory robustness for all the circuits analyzed, with a significant impact on delay, power consumption, and area. Although the D-Cell technique alone provides significant improvements, it is not enough to achieve adequate levels of robustness. Full article

18 pages, 1040 KiB  
Article
A TDDPG-Based Joint Optimization Method for Hybrid RIS-Assisted Vehicular Integrated Sensing and Communication
by Xinren Wang, Zhuoran Xu, Qin Wang, Yiyang Ni and Haitao Zhao
Electronics 2025, 14(15), 2992; https://doi.org/10.3390/electronics14152992 - 27 Jul 2025
Viewed by 251
Abstract
This paper proposes a novel Twin Delayed Deep Deterministic Policy Gradient (TDDPG)-based joint optimization algorithm for hybrid reconfigurable intelligent surface (RIS)-assisted integrated sensing and communication (ISAC) systems in Internet of Vehicles (IoV) scenarios. The proposed system model achieves deep integration of sensing and communication by superimposing the communication and sensing signals within the same waveform. To decouple the complex joint design problem, a dual-DDPG architecture is introduced, in which one agent optimizes the transmit beamforming vector and the other adjusts the RIS phase shift matrix. Both agents share a unified reward function that comprehensively considers multi-user interference (MUI), total transmit power, RIS noise power, and sensing accuracy via the CRLB constraint. Simulation results demonstrate that the proposed TDDPG algorithm significantly outperforms conventional DDPG in terms of sum rate and interference suppression. Moreover, the adoption of a hybrid RIS enables an effective trade-off between communication performance and system energy efficiency, highlighting its practical deployment potential in dynamic IoV environments. Full article
(This article belongs to the Section Microwave and Wireless Communications)

19 pages, 3044 KiB  
Review
Deep Learning-Based Sound Source Localization: A Review
by Kunbo Xu, Zekai Zong, Dongjun Liu, Ran Wang and Liang Yu
Appl. Sci. 2025, 15(13), 7419; https://doi.org/10.3390/app15137419 - 2 Jul 2025
Viewed by 551
Abstract
As a fundamental technology in environmental perception, sound source localization (SSL) plays a critical role in public safety, marine exploration, and smart home systems. However, traditional methods such as beamforming and time-delay estimation rely on manually designed physical models and idealized assumptions, which struggle to meet practical demands in dynamic and complex scenarios. Recent advancements in deep learning have revolutionized SSL by leveraging its end-to-end feature adaptability, cross-scenario generalization capabilities, and data-driven modeling, significantly enhancing localization robustness and accuracy in challenging environments. This review systematically examines the progress of deep learning-based SSL across three critical domains: marine environments, indoor reverberant spaces, and unmanned aerial vehicle (UAV) monitoring. In marine scenarios, complex-valued convolutional networks combined with adversarial transfer learning mitigate environmental mismatch and multipath interference through phase information fusion and domain adaptation strategies. For indoor high-reverberation conditions, attention mechanisms and multimodal fusion architectures achieve precise localization under low signal-to-noise ratios by adaptively weighting critical acoustic features. In UAV surveillance, lightweight models integrated with spatiotemporal Transformers address dynamic modeling of non-stationary noise spectra and edge computing efficiency constraints. Despite these advancements, current approaches face three core challenges: the insufficient integration of physical principles, prohibitive data annotation costs, and the trade-off between real-time performance and accuracy. Future research should prioritize physics-informed modeling to embed acoustic propagation mechanisms, unsupervised domain adaptation to reduce reliance on labeled data, and sensor-algorithm co-design to optimize hardware-software synergy. 
These directions aim to propel SSL toward intelligent systems characterized by high precision, strong robustness, and low power consumption. This work provides both theoretical foundations and technical references for algorithm selection and practical implementation in complex real-world scenarios.

19 pages, 8803 KiB  
Article
An Accurate and Low-Complexity Offset Calibration Methodology for Dynamic Comparators
by Juan Cuenca, Benjamin Zambrano, Esteban Garzón, Luis Miguel Prócel and Marco Lanuzza
J. Low Power Electron. Appl. 2025, 15(2), 35; https://doi.org/10.3390/jlpea15020035 - 2 Jun 2025
Viewed by 666
Abstract
Dynamic comparators play an important role in electronic systems, requiring high accuracy, low power consumption, and minimal offset voltage. This work proposes an accurate and low-complexity offset calibration design based on a capacitive load approach. It was designed using a 65 nm CMOS technology and comprehensively evaluated under Monte Carlo simulations and PVT variations. The proposed scheme was built using MIM capacitors and transistor-based capacitors, and it includes Verilog-based calibration algorithms. The proposed offset calibration is benchmarked, in terms of precision, calibration time, energy consumption, delay, and area, against prior calibration techniques: current injection via gate biasing by a charge pump circuit and current injection via parallel transistors. The evaluation of the offset calibration schemes relies on Analog/Mixed-Signal (AMS) simulations, ensuring accurate evaluation of digital and analog domains. The charge pump method achieved the best Energy-Delay Product (EDP) at the cost of lower long-term accuracy, mainly because of its capacitor leakage. The proposed scheme demonstrated superior performance in offset reduction, achieving a one-sigma offset of 0.223 mV while maintaining precise calibration. Among the calibration algorithms, the window algorithm performs better than the accelerated calibration. This is mainly because the window algorithm considers noise-induced output oscillations, ensuring consistent calibration across all designs. This work provides insights into the trade-offs between energy, precision, and area in dynamic comparator designs, offering strategies to enhance offset calibration. Full article
(This article belongs to the Special Issue Analog/Mixed-Signal Integrated Circuit Design)
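The noise-aware behavior of a window-style calibration loop can be sketched with a small behavioral model: with the comparator inputs shorted, the trim code is swept until the output toggles due to noise alone, i.e., the residual offset has dropped below the noise floor. This is a minimal sketch of the general idea only; the comparator model, trim step, noise level, and window bounds are illustrative assumptions, not the paper's circuit or its Verilog algorithm.

```python
import random

def comparator(vin_diff_mv, offset_mv, trim_code, step_mv=0.05, noise_mv=0.1):
    """Behavioral comparator: outputs 1 when the input difference exceeds the
    residual offset; the trim code cancels offset in steps of step_mv, and
    Gaussian noise stands in for thermal noise (all values hypothetical)."""
    residual = offset_mv - trim_code * step_mv
    return 1 if vin_diff_mv + random.gauss(0.0, noise_mv) > residual else 0

def window_calibrate(offset_mv, max_code=127, trials=200, window=(0.35, 0.65)):
    """Sweep the trim code with inputs shorted (vin_diff = 0) until the fraction
    of '1' outputs falls inside the target window, meaning the comparator is
    toggling on noise alone and the residual offset is below the noise floor."""
    for code in range(max_code + 1):
        ones = sum(comparator(0.0, offset_mv, code) for _ in range(trials))
        if window[0] <= ones / trials <= window[1]:
            return code
    return max_code

random.seed(0)
code = window_calibrate(offset_mv=3.0)
print(code, abs(3.0 - code * 0.05))  # trim code and residual offset in mV
```

Here a 3 mV initial offset is trimmed down to a residual comparable to the assumed 0.1 mV noise level; because the stop condition is a toggling *window* rather than a single threshold, the loop tolerates noise-induced output oscillations, which is the property the abstract credits for the window algorithm's consistency.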

19 pages, 3808 KiB  
Article
Dual Turbocharger and Synergistic Control Optimization for Low-Speed Marine Diesel Engines: Mitigating Black Smoke and Enhancing Maneuverability
by Cheng Meng, Kaiyuan Chen, Tianyu Chen and Jianfeng Ju
Energies 2025, 18(11), 2910; https://doi.org/10.3390/en18112910 - 2 Jun 2025
Viewed by 521
Abstract
Marine diesel engines face persistent challenges in balancing transient black smoke emissions and maneuverability under low-speed conditions due to inherent limitations of single turbocharger systems, such as high inertia and delayed intake response, compounded by control strategies that prioritize steady-state efficiency. To address this gap, this study proposes a dual-turbocharger dynamic matching framework integrated with a speed–pitch synergistic control strategy: the first mechanical-control co-design solution for transient emission suppression. By establishing a λ-opacity correlation model and a multi-physics ship–engine–propeller simulation platform, we demonstrate that the Type-C dual turbocharger reduces rotational inertia by 80%, shortens intake pressure buildup time to 25.8 s (54.7% faster than single turbochargers), and eliminates high-risk black smoke regions (maintaining λ > 1.5). The optimized system reduces the fuel consumption rate by 12.9 g·(kW·h)−1 under extreme loading conditions and decreases the duration of high-risk zones by 74.4–100%. This study provides theoretical and practical support for resolving the trade-off between transient emissions and maneuverability in marine power systems through synergistic innovations in mechanical design and control strategies.

15 pages, 5053 KiB  
Article
Enhanced Dual Carry Approximate Adder with Error Reduction Unit for High-Performance Multiplier and In-Memory Computing
by Kaeun Lim, Jinhyun Kim, Eunsu Kim and Youngmin Kim
Electronics 2025, 14(9), 1702; https://doi.org/10.3390/electronics14091702 - 22 Apr 2025
Viewed by 522
Abstract
The Dual Carry Approximate Adder (DCAA) is proposed as an advanced 8-bit approximate adder featuring dual carry-out and carry-in full adders (FAs) along with an Error Reduction Unit (ERU) to enhance accuracy. The 8-bit adder is partitioned into upper and lower 4-bit blocks, connected via a dual carry-out full adder and a dual carry-in full adder. To minimize impact on the critical path, an ERU is designed for efficient error correction. Four variants of the DCAA are provided, allowing users to select the most suitable design based on their specific power, area, and accuracy requirements. The DCAA achieves a 78% reduction in Mean Error Distance (MED) while maintaining high computational speed and efficiency. When applied to Wallace Tree multipliers, it reduces delay by 32% compared to ripple carry adders (RCAs), and in in-memory computing (IMC) architectures, it significantly improves accuracy with minimal delay overhead. Experimental results demonstrate that the DCAA offers a well-balanced trade-off between accuracy, speed, and resource efficiency, making it suitable for high-performance, error-tolerant applications. Compared to existing approximate adders, DCAA exhibits superior error correction capabilities while achieving significantly lower delay. Furthermore, its efficient hardware implementation enables seamless integration into various computing paradigms, including AI accelerators and neuromorphic processors. Additionally, the scalability of the design allows for flexible adaptation to different bit-widths, making it a versatile solution for next-generation computing architectures. Full article
(This article belongs to the Special Issue CMOS Integrated Circuits Design)
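The Mean Error Distance (MED) figure of merit can be reproduced exhaustively for 8-bit operands. The sketch below computes the MED of a lower-part-OR adder (LOA), a standard approximate adder used here only as a stand-in, since the abstract does not give the DCAA's internal logic:

```python
def loa_add(a, b, k=4, width=8):
    """Lower-part OR Adder: the k low bits are approximated with a bitwise OR
    (dropping their carry), while the upper bits are added exactly. A generic
    approximate adder for illustration, not the DCAA itself."""
    mask = (1 << k) - 1
    low = (a | b) & mask
    high = ((a >> k) + (b >> k)) << k
    return high | low

def mean_error_distance(width=8, k=4):
    """MED = average |exact sum - approximate sum| over all operand pairs."""
    n = 1 << width
    total = sum(abs((a + b) - loa_add(a, b, k, width))
                for a in range(n) for b in range(n))
    return total / (n * n)

print(mean_error_distance())  # → 3.75
```

For this adder the per-pair error equals the AND of the two low nibbles, so the expectation works out to 15/4 = 3.75; the same exhaustive loop applies unchanged to any 8-bit approximate adder, which is how reductions like the DCAA's reported 78% MED improvement are measured.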

15 pages, 2064 KiB  
Article
Multi-Objective Day-Ahead Scheduling for Air Conditioning Load Considering Dynamic Carbon Emission Factor
by Kun Zhang, Zhengxun Guo, Ji Wang, Jianlin Tang and Xiaoshun Zhang
Electronics 2025, 14(8), 1550; https://doi.org/10.3390/electronics14081550 - 11 Apr 2025
Viewed by 418
Abstract
Optimal scheduling of air conditioning load has traditionally focused on improving user comfort and reducing electricity costs, while research on the carbon emissions generated during air conditioning operation is still at an early stage. The average carbon emission factors currently used in carbon-emission studies suffer from delayed data updates and difficulty in reflecting spatiotemporal variations. These issues lead to inaccurate quantification of carbon emissions, which does not meet the development needs of green power systems under the "dual carbon goals". Therefore, this paper proposes a multi-objective scheduling method for cooling-dominant air conditioning load that considers a dynamic carbon emission factor (CEF), which, in conjunction with real-time spatiotemporal data from the electricity grid model, generates the electric carbon factor for each moment throughout the day. Firstly, while accounting for user comfort, the dynamic CEF-based carbon-emission cost and the electricity cost are fused into the user's comprehensive electricity cost, and a multi-objective optimization model for day-ahead scheduling of air conditioning loads is established. This model is then solved with the NSGA-II algorithm to obtain the Pareto front of non-dominated solutions, and the best compromise solution is selected objectively through gray target decision (GTD) to provide scientific decision support for day-ahead scheduling of cooling-dominant air conditioning loads. Finally, four users with different air conditioning loads and room temperature requirements are designed to verify the effectiveness of the proposed strategy. The simulation results illustrate that, compared with single-objective optimization and simple multi-objective decision-making methods, the proposed strategy possesses a stronger trade-off ability and can greatly reduce the comprehensive electricity cost while ensuring user comfort.
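Fusing the electricity cost and a dynamic-CEF carbon cost into one comprehensive cost amounts to a per-interval sum. The sketch below follows that idea; the tariff, CEF profile, and carbon price are illustrative assumptions, not the paper's data:

```python
def comprehensive_cost(power_kw, price, cef, carbon_price, dt_h=1.0):
    """Per-interval fusion of electricity and carbon costs:
       cost_t = P_t * dt * (price_t + CEF_t * carbon_price)
    power_kw: load schedule [kW]; price: tariff [$/kWh];
    cef: dynamic carbon emission factor [kgCO2/kWh]; carbon_price: [$/kgCO2]."""
    return sum(p * dt_h * (pr + c * carbon_price)
               for p, pr, c in zip(power_kw, price, cef))

# Same 8 kWh of cooling over four intervals: shifting load toward the
# low-CEF interval lowers the comprehensive cost under a flat tariff.
tariff = [0.10, 0.10, 0.10, 0.10]
cef    = [0.9, 0.7, 0.4, 0.8]
flat  = comprehensive_cost([2, 2, 2, 2], tariff, cef, carbon_price=0.05)
shift = comprehensive_cost([1, 1, 4, 2], tariff, cef, carbon_price=0.05)
print(round(flat, 4), round(shift, 4))  # → 1.08 1.04
```

With a static average CEF the two schedules would cost the same; only a time-varying CEF exposes the cheaper schedule, which is the motivation for using the dynamic factor in the objective.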

25 pages, 5804 KiB  
Article
Physical Model for the Simulation of an Air Handling Unit Employed in an Automotive Production Process: Calibration Procedure and Potential Energy Saving
by Luca Viscito, Francesco Pelella, Andrea Rega, Federico Magnea, Gerardo Maria Mauro, Alessandro Zanella, Alfonso William Mauro and Nicola Bianco
Energies 2025, 18(7), 1842; https://doi.org/10.3390/en18071842 - 5 Apr 2025
Cited by 2 | Viewed by 531
Abstract
Meticulous thermo-hygrometric control is essential for various industrial production processes, particularly the painting phases of body-in-white, in which the air temperature and relative humidity in production booths must be kept within strict intervals to ensure the high quality of the final product. However, traditional proportional integral derivative (PID) controllers may result in non-optimal control strategies, leading to energy waste due to response delays and unnecessary superheating. In this regard, predictive models designed for control can significantly aid in achieving the targets set by the European Union. This paper focuses on the development of a predictive model for the energy consumption of an air handling unit (AHU) used in the paint-shop area of an automotive production process. The model, developed in MATLAB 2024b, is based on mass and energy balances within each component and on phenomenological equations for the heat exchangers. It enables the evaluation of the thermal powers and water mass flow rates required to bring an inlet air flow rate to a target temperature and relative humidity. The model was calibrated and validated using experimental data from a real automotive production process, obtaining mean errors of 16% and 31% in predicting the water mass flow rate for the hot and cold heat exchangers, respectively. Additionally, a control logic based on six thermo-hygrometric regulation zones, which depend on the external temperature and relative humidity, was developed. Finally, several examples are provided to demonstrate the applicability of the developed model and its potential for optimizing energy consumption under varying external boundary conditions, achieving energy savings of up to 46% compared to the actual baseline control strategy and identifying an optimal trade-off between energy saving and operational feasibility.
(This article belongs to the Section G: Energy and Buildings)

27 pages, 1376 KiB  
Article
Proof-of-Friendship Consensus Mechanism for Resilient Blockchain Technology
by Jims Marchang, Rengaprasad Srikanth, Solan Keishing and Indranee Kashyap
Electronics 2025, 14(6), 1153; https://doi.org/10.3390/electronics14061153 - 14 Mar 2025
Viewed by 918
Abstract
Traditional blockchain consensus mechanisms, such as Proof of Work (PoW) and Proof of Stake (PoS), face significant challenges related to the centralisation of validators and miners, environmental impact, and trustworthiness. While PoW is highly secure, it is energy-intensive, and PoS tends to favour wealthy stakeholders, leading to validator centralisation. Existing mechanisms lack fairness, do not consider sustainability, and fail to address social trust dynamics within validator selection. To bridge this research gap, this paper proposes Proof of Friendship (PoF): a novel consensus mechanism that leverages social trust to improve decentralisation, fairness, and sustainability among validators. Unlike traditional methods that rely solely on computational power or financial stakes, PoF integrates friendship-based trust scores with geo-location diversity, transaction reliability, and sustainable energy adoption. By incorporating a trust graph, in which validators are selected based on their verified relationships within the network, PoF mitigates the risk of Sybil attacks, promotes community-driven decentralisation, and enhances the resilience of the blockchain against adversarial manipulation. This research introduces the formal model of PoF; evaluates its security, decentralisation, and sustainability trade-offs; and demonstrates its effectiveness compared to existing consensus mechanisms. Our results indicate that PoF achieves higher decentralisation, improved trustworthiness, reduced validator monopolisation, and enhanced sustainability while maintaining strong network security. This study opens new avenues for socially aware blockchain governance, making consensus mechanisms more equitable, efficient, and environmentally responsible, and demonstrates a holistic approach to modern blockchain design that addresses key challenges in trust, performance, and sustainability.
The mechanism is tested theoretically and experimentally to validate its robustness and functionality. The measured processing latency (PL), network latency (NL, computed as transaction size / network speed), and synchronisation delay (SD) are 85 ms, 172 ms, and 1802 ms, respectively, giving a cumulative delay per transaction (PL + NL + SD) of 2059 ms.
(This article belongs to the Special Issue Recent Advances in Information Security and Data Privacy)
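The reported latency breakdown is a simple sum, PL + NL + SD. The sketch below reproduces the cumulative 2059 ms figure; the abstract defines NL as transaction size / network speed but reports only the 172 ms result, so the transaction size and link speed used here are illustrative assumptions:

```python
def cumulative_delay_ms(processing_ms, tx_size_bits, net_speed_bps, sync_ms):
    """Cumulative per-transaction delay as reported in the abstract:
       total = PL + NL + SD, with NL = transaction size / network speed."""
    network_ms = tx_size_bits / net_speed_bps * 1000.0
    return processing_ms + network_ms + sync_ms

# NL = 172 ms corresponds, e.g., to a 172-kbit transaction over a
# 1 Mbit/s link (illustrative values, not given in the abstract).
total = cumulative_delay_ms(85, 172_000, 1_000_000, 1802)
print(total)  # → 2059.0
```

The breakdown makes clear that synchronisation dominates the per-transaction latency, accounting for roughly 87% of the 2059 ms total.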

20 pages, 15189 KiB  
Article
Numerical Analysis of Diesel Engine Combustion and Performance with Single-Component Surrogate Fuel
by Mehedi Hassan Pranta and Haeng Muk Cho
Energies 2025, 18(5), 1082; https://doi.org/10.3390/en18051082 - 23 Feb 2025
Viewed by 850
Abstract
Compression ignition engines are widely recognized for their reliability and efficiency, remaining essential for transportation and power generation despite the transition toward sustainable energy solutions. This study employs ANSYS Forte to analyze the combustion and performance characteristics of a direct-injection, single-cylinder, four-stroke engine fueled with an n-heptane-based diesel surrogate. The investigation considers varying start-of-injection (SOI) timings (−32.5°, −27.5°, −22.5°, and −17.5° BTDC) and exhaust gas recirculation (EGR) rates (0%, 15%, 30%, 45%, and 60%). The simulation incorporates the RNG k-ε turbulence model, the power-law combustion model, and the KH-RT spray breakup model. The results indicate that the optimal peak pressure and temperature occur at an SOI of −22.5° BTDC with 0% EGR. Advancing the SOI enhances oxidation, reducing NOx and CO emissions but increasing unburned hydrocarbons (UHC) due to delayed fuel–air mixing. Higher EGR rates lower the in-cylinder pressure, temperature, heat release rate (HRR), and NOx emissions while elevating CO and UHC levels due to oxygen depletion and incomplete combustion. These findings highlight the trade-offs between combustion efficiency and emissions, emphasizing the need for optimized SOI and EGR strategies to achieve balanced engine performance.

27 pages, 4804 KiB  
Article
A Comparison of Reliability and Resource Utilization of Radiation Fault Tolerance Mechanisms in Spaceborne Electronic Systems
by Changhyeon Kim, Dongmin Lee and Jongwhoa Na
Aerospace 2025, 12(2), 152; https://doi.org/10.3390/aerospace12020152 - 17 Feb 2025
Cited by 2 | Viewed by 1474
Abstract
The advent of the New Space Era has significantly accelerated the development of space equipment systems using commercial off-the-shelf components. Field Programmable Gate Arrays are increasingly favored for their ability to be easily modified, which substantially reduces both development time and costs. However, their high susceptibility to space radiation poses a considerable risk of mission failure, potentially compromising system reliability in harsh space environments. To mitigate this vulnerability, the implementation of fault-tolerant mechanisms is essential. In this study, we applied eight distinct fault-tolerant mechanisms to various circuits and conducted a comparative analysis between two different categories: hardware redundancy and informational redundancy. This comparison was based on consistent criteria, specifically the Architectural Vulnerability Factor and resource consumption. Utilizing statistical fault injection tests and specialized software, we quantitatively measured structural vulnerability, power consumption, delay, and area. The results revealed that while the Hamming Code achieved the lowest structural vulnerability, it resulted in approximately fourfold increases in resource consumption. Conversely, Triple Modular Redundancy provided high reliability with relatively minimal resource usage. This research elucidates the trade-offs between reliability and resource overhead among different fault-tolerant mechanisms, highlighting the critical importance of selecting appropriate mechanisms based on system requirements to optimize the balance between reliability and resource utilization. Our analysis offers new insights essential for optimizing fault-tolerant mechanisms in space applications. Future work should explore more complex circuit architectures and diverse fault models to refine the selection criteria for fault-tolerant mechanisms tailored to real-world space missions. Full article
(This article belongs to the Special Issue On-Board Systems Design for Aerospace Vehicles (2nd Edition))

28 pages, 2083 KiB  
Article
Pipe Routing with Topology Control for Decentralized and Autonomous UAV Networks
by Shreyas Devaraju, Shivam Garg, Alexander Ihler, Elizabeth Serena Bentley and Sunil Kumar
Drones 2025, 9(2), 140; https://doi.org/10.3390/drones9020140 - 13 Feb 2025
Cited by 1 | Viewed by 1079
Abstract
This paper considers a decentralized and autonomous wireless network of low SWaP (size, weight, and power) fixed-wing UAVs (unmanned aerial vehicles) used for remote exploration and monitoring of targets in an inaccessible area lacking communication infrastructure. Here, the UAVs collaborate to find target(s) and use routing protocols to forward the sensed data of target(s) to an aerial base station (BS) in real-time through multihop communication, which can then transmit the data to a control center. However, the unpredictability of target locations and the highly dynamic nature of autonomous, decentralized UAV networks result in frequent route breaks or traffic disruptions. Traditional routing schemes cannot quickly adapt to dynamic UAV networks and can incur large control overhead and delays. In addition, their performance suffers from poor network connectivity in sparse networks with multiple objectives (exploration and monitoring of targets), which results in frequent route unavailability. To address these challenges, we propose two routing schemes: Pipe routing and TC-Pipe routing. Pipe routing is a mobility-, congestion-, and energy-aware scheme that discovers routes to the BS on-demand and proactively switches to alternate high-quality routes within a limited region around the routes (referred to as the “pipe”) when needed. TC-Pipe routing extends this approach by incorporating a decentralized topology control mechanism to help maintain robust connectivity in the pipe region around the routes, resulting in improved route stability and availability. The proposed schemes adopt a novel approach by integrating the topology control with routing protocol and mobility model, and rely only on local information in a distributed manner. 
Comprehensive evaluations under diverse network and traffic conditions—including UAV density and speed, number of targets, and fault tolerance—show that the proposed schemes improve throughput by reducing flow interruptions and packet drops caused by mobility, congestion, and node failures. At the same time, the impact on coverage performance (measured in terms of coverage and coverage fairness) is minimal, even with multiple targets. Additionally, the performance of both schemes degrades gracefully as the percentage of UAV failures in the network increases. Compared to schemes that use dedicated UAVs as relay nodes to establish a route to the BS when the UAV density is low, Pipe and TC-Pipe routing offer better coverage and connectivity trade-offs, with the TC-Pipe providing the best trade-off. Full article

28 pages, 2069 KiB  
Article
Latency Analysis of Drone-Assisted C-V2X Communications for Basic Safety and Co-Operative Perception Messages
by Abhishek Gupta and Xavier N. Fernando
Drones 2024, 8(10), 600; https://doi.org/10.3390/drones8100600 - 18 Oct 2024
Cited by 4 | Viewed by 3057
Abstract
Drone-assisted radio communication is revolutionizing future wireless networks, including sixth-generation (6G) and beyond, by providing unobstructed, line-of-sight links from the air to terrestrial vehicles, enabling robust cellular vehicle-to-everything (C-V2X) communication networks. However, addressing communication latency is imperative, especially when considering autonomous vehicles. In this study, we analyze different types of delay and the factors impacting them in drone-assisted C-V2X networks. We specifically investigate C-V2X Mode 4, where multiple vehicles utilize available transmission windows to communicate frequently collected sensor data with an embedded drone server. Through a discrete-time Markov model, we assess the medium access control (MAC) layer performance, analyzing the trade-off between data rates and communication latency. Furthermore, we compare the delay between cooperative perception messages (CPMs) and periodically transmitted basic safety messages (BSMs). Our simulation results emphasize the significance of optimizing BSM and CPM transmission intervals to achieve lower average delay, as well as of utilizing the drone's battery power to serve the maximum number of vehicles in a transmission time interval (TTI). The results also reveal that the average delay depends heavily on the packet arrival rate, while the processing delay varies with the drone occupancy and state-transition rates for both BSM and CPM packets. Furthermore, the optimal policy approximates a threshold-based policy in which the threshold depends on the drone's utilization and energy availability.
(This article belongs to the Special Issue Wireless Networks and UAV)
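The delay analysis described above can be illustrated with a minimal discrete-time Markov chain: the drone server's queue occupancy moves up or down once per TTI, and the average delay follows from the stationary distribution via Little's law. This is a hedged sketch under simplifying assumptions (single queue, at most one arrival and one departure per TTI, illustrative parameter names), not the paper's exact model.

```python
import numpy as np

def stationary_delay(p_arrival, p_service, capacity=50):
    """Average queueing delay (in TTIs) for a discrete-time birth-death
    chain where at most one packet arrives and one departs per TTI.
    Parameters and structure are illustrative, not the paper's model."""
    n = capacity + 1
    P = np.zeros((n, n))
    for i in range(n):
        up = p_arrival * (1 - p_service) if i < capacity else 0.0
        down = p_service * (1 - p_arrival) if i > 0 else 0.0
        P[i, i] = 1.0 - up - down
        if i < capacity:
            P[i, i + 1] = up
        if i > 0:
            P[i, i - 1] = down
    # Solve pi @ P = pi together with sum(pi) = 1 (least squares).
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    avg_queue = float(pi @ np.arange(n))
    # Little's law: delay = average occupancy / effective arrival rate.
    lam = p_arrival * (1 - pi[-1])
    return avg_queue / lam
```

Sweeping `p_arrival` toward `p_service` reproduces the qualitative finding that average delay depends heavily on the packet arrival rate.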
19 pages, 1201 KiB  
Article
Energy-Efficient Joint Partitioning and Offloading for Delay-Sensitive CNN Inference in Edge Computing
by Zhiyong Zha, Yifei Yang, Yongjun Xia, Zhaoyi Wang, Bin Luo, Kaihong Li, Chenkai Ye, Bo Xu and Kai Peng
Appl. Sci. 2024, 14(19), 8656; https://doi.org/10.3390/app14198656 - 25 Sep 2024
Cited by 2 | Viewed by 1388
Abstract
With the development of deep learning foundation model technology, the types of computing tasks have become more complex, and the computing resources and memory required for these tasks have also become more substantial. Since it has long been revealed that task offloading to cloud servers has many drawbacks, such as high communication delay and low security, task offloading is mostly carried out on the edge servers of the Internet of Things (IoT) network. However, edge servers in IoT networks are characterized by tight resource constraints and often the dynamic nature of data sources. Therefore, the question of how to perform task offloading of deep learning foundation model services on edge servers has become a new research topic. However, the existing task-offloading methods either cannot meet the requirements of massive CNN architectures or require substantial communication overhead, leading to significant delays and energy consumption. In this paper, we propose a parallel partitioning method based on matrix convolution to partition foundation model inference tasks, which partitions large CNN inference tasks into subtasks that can be executed in parallel to meet the constraints of edge devices with limited hardware resources. Then, we model and mathematically express the problem of task offloading. In a multi-edge-server, multi-user, and multi-task edge-end system, we propose a task-offloading method that balances the trade-off between delay and energy consumption. It adopts a greedy algorithm to optimize task-offloading decisions and terminal device transmission power to maximize the benefits of task offloading. Finally, extensive experiments verify the effectiveness of our algorithm. Full article
(This article belongs to the Special Issue Deep Learning and Edge Computing for Internet of Things)
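The greedy delay-energy trade-off described above can be sketched as follows: each subtask is assigned to the edge server minimizing a weighted sum of estimated delay and device-side transmit energy. This is an illustrative sketch; the field names (`cycles`, `rate`, `tx_power`), the largest-first ordering, and the cost weighting `alpha` are assumptions, not the paper's exact formulation.

```python
def greedy_offload(tasks, servers, alpha=0.5):
    """Greedily assign CNN inference subtasks to edge servers, minimizing
    cost = alpha * delay + (1 - alpha) * transmit energy per subtask.
    Task/server fields are illustrative placeholders."""
    assignment = {}
    # Place the heaviest subtasks first so they get the least-loaded servers.
    for t in sorted(tasks, key=lambda t: -t["cycles"]):
        best, best_cost = None, float("inf")
        for s in servers:
            exec_delay = t["cycles"] / s["cpu"]       # compute time (s)
            tx_delay = t["data_bits"] / s["rate"]     # upload time (s)
            delay = s["load"] + tx_delay + exec_delay # queue + tx + exec
            energy = t["tx_power"] * tx_delay         # device-side energy (J)
            cost = alpha * delay + (1 - alpha) * energy
            if cost < best_cost:
                best, best_cost = s, cost
        best["load"] += t["cycles"] / best["cpu"]     # update server backlog
        assignment[t["id"]] = best["id"]
    return assignment
```

With two identical servers, the load term steers the second subtask away from the server that took the first, so the greedy pass naturally spreads work.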
23 pages, 1762 KiB  
Article
Dynamic Framing and Power Allocation for Real-Time Wireless Networks with Variable-Length Coding: A Tandem Queue Approach
by Yuanrui Liu, Xiaoyu Zhao, Wei Chen and Ying-Jun Angela Zhang
Network 2024, 4(3), 367-389; https://doi.org/10.3390/network4030017 - 27 Aug 2024
Viewed by 1106
Abstract
Ensuring high reliability and low latency poses challenges for numerous applications that require rigid performance guarantees, such as industrial automation and autonomous vehicles. Our research primarily concentrates on addressing the real-time requirements of ultra-reliable low-latency communication (URLLC). Specifically, we tackle the challenge of hard delay constraints in real-time transmission systems, overcoming this obstacle through a finite blocklength coding scheme. In the physical layer, we encode randomly arriving packets using a variable-length coding scheme and transmit the encoded symbols by truncated channel inversion over parallel channels. In the network layer, we model the encoding and transmission processes as tandem queues. These queues backlog the data bits waiting to be encoded and the encoded symbols to be transmitted, respectively. This way, we represent the system as a two-dimensional Markov chain. By focusing on instances when the symbol queue is empty, we simplify the Markov chain into a one-dimensional Markov chain, with the packet queue being the system state. This approach allows us to analytically express power consumption and formulate a power minimization problem under hard delay constraints. Finally, we propose a heuristic algorithm to solve the problem and provide an extensive evaluation of the trade-offs between the hard delay constraint and power consumption. Full article
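The truncated channel inversion policy in the physical layer above can be sketched numerically: transmit with power inversely proportional to the channel gain whenever the gain exceeds a cutoff, and stay silent in deep fades. The Monte Carlo estimate below, assuming unit-mean Rayleigh fading (exponential power gain) and illustrative parameter names, shows the delay-power tension: raising the cutoff lowers average power but forfeits more transmission slots.

```python
import random

def avg_inversion_power(p_rx_target, cutoff, n=200_000, seed=1):
    """Monte Carlo estimate of average transmit power under truncated
    channel inversion: transmit P = p_rx_target / g when the power gain
    g >= cutoff, else stay silent. A sketch of the policy family, not
    the paper's exact scheme."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        g = rng.expovariate(1.0)  # Rayleigh fading -> exponential gain
        if g >= cutoff:
            total += p_rx_target / g
    return total / n
```

Skipped slots are exactly the symbols that accumulate in the symbol queue of the tandem-queue model, which is why the hard delay constraint and the power-saving cutoff pull in opposite directions.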