Search Results (161)

Search Parameters:
Keywords = energy-accuracy trade-off

21 pages, 2965 KiB  
Article
Inspection Method Enabled by Lightweight Self-Attention for Multi-Fault Detection in Photovoltaic Modules
by Shufeng Meng and Tianxu Xu
Electronics 2025, 14(15), 3019; https://doi.org/10.3390/electronics14153019 - 29 Jul 2025
Viewed by 270
Abstract
Bird-dropping fouling and hotspot anomalies remain the most prevalent and detrimental defects in utility-scale photovoltaic (PV) plants; their co-occurrence on a single module markedly curbs energy yield and accelerates irreversible cell degradation. However, markedly disparate visual–thermal signatures of the two phenomena impede high-fidelity concurrent detection in existing robotic inspection systems, while stringent onboard compute budgets also preclude the adoption of bulky detectors. To resolve this accuracy–efficiency trade-off for dual-defect detection, we present YOLOv8-SG, a lightweight yet powerful framework engineered for mobile PV inspectors. First, a rigorously curated multi-modal dataset—RGB for stains and long-wave infrared for hotspots—is assembled to enforce robust cross-domain representation learning. Second, the HSV color space is leveraged to disentangle chromatic and luminance cues, thereby stabilizing appearance variations across sensors. Third, a single-head self-attention (SHSA) block is embedded in the backbone to harvest long-range dependencies at negligible parameter cost, while a global context (GC) module is grafted onto the detection head to amplify fine-grained semantic cues. Finally, an auxiliary bounding box refinement term is appended to the loss to hasten convergence and tighten localization. Extensive field experiments demonstrate that YOLOv8-SG attains 86.8% mAP@0.5, surpassing the vanilla YOLOv8 by 2.7 pp while trimming 12.6% of parameters (18.8 MB). Grad-CAM saliency maps corroborate that the model’s attention consistently coincides with defect regions, underscoring its interpretability. The proposed method, therefore, furnishes PV operators with a practical low-latency solution for concurrent bird-dropping and hotspot surveillance. Full article
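The SHSA block is described above only at a high level; as a minimal illustration of the underlying mechanism (single-head scaled dot-product self-attention, with all shapes and weights chosen arbitrarily here, not the paper's actual design), a numpy sketch:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))  # stabilized exponent
    return e / e.sum(axis=axis, keepdims=True)

def single_head_self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over (n, d) features."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # (n, n) pairwise affinities
    return softmax(scores) @ v               # mix values by attention weight

rng = np.random.default_rng(0)
n, d = 6, 8                                  # illustrative token count / width
x = rng.normal(size=(n, d))
wq, wk, wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = single_head_self_attention(x, wq, wk, wv)
print(out.shape)  # (6, 8): same shape out, long-range context mixed in
```

A single head keeps the parameter cost low, which is the point the abstract makes about embedding SHSA in a lightweight backbone.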

18 pages, 1040 KiB  
Article
A TDDPG-Based Joint Optimization Method for Hybrid RIS-Assisted Vehicular Integrated Sensing and Communication
by Xinren Wang, Zhuoran Xu, Qin Wang, Yiyang Ni and Haitao Zhao
Electronics 2025, 14(15), 2992; https://doi.org/10.3390/electronics14152992 - 27 Jul 2025
Viewed by 285
Abstract
This paper proposes a novel Twin Delayed Deep Deterministic Policy Gradient (TDDPG)-based joint optimization algorithm for hybrid reconfigurable intelligent surface (RIS)-assisted integrated sensing and communication (ISAC) systems in Internet of Vehicles (IoV) scenarios. The proposed system model achieves deep integration of sensing and communication by superimposing the communication and sensing signals within the same waveform. To decouple the complex joint design problem, a dual-DDPG architecture is introduced, in which one agent optimizes the transmit beamforming vector and the other adjusts the RIS phase shift matrix. Both agents share a unified reward function that comprehensively considers multi-user interference (MUI), total transmit power, RIS noise power, and sensing accuracy via the CRLB constraint. Simulation results demonstrate that the proposed TDDPG algorithm significantly outperforms conventional DDPG in terms of sum rate and interference suppression. Moreover, the adoption of a hybrid RIS enables an effective trade-off between communication performance and system energy efficiency, highlighting its practical deployment potential in dynamic IoV environments. Full article
(This article belongs to the Section Microwave and Wireless Communications)

20 pages, 2772 KiB  
Article
Cable Force Optimization of Circular Ring Pylon Cable-Stayed Bridges Based on Response Surface Methodology and Multi-Objective Particle Swarm Optimization
by Shengdong Liu, Fei Chen, Qingfu Li and Xiyu Ma
Buildings 2025, 15(15), 2647; https://doi.org/10.3390/buildings15152647 - 27 Jul 2025
Viewed by 177
Abstract
Cable force distribution in cable-stayed bridges critically impacts structural safety and efficiency, yet traditional optimization methods struggle with unconventional designs due to nonlinear mechanics and computational inefficiency. This study proposes a hybrid approach combining Response Surface Methodology (RSM) and Multi-Objective Particle Swarm Optimization (MOPSO) to overcome these challenges. RSM constructs surrogate models for strain energy and mid-span displacement, reducing reliance on finite element analysis, while MOPSO optimizes Pareto solution sets for rapid cable force adjustment. Validated through an engineering case, the method reduces the main girder’s max bending moment by 8.7%, mid-span displacement by 31.2%, and strain energy by 7.1%, improving stiffness and mitigating stress concentrations. The response surface model demonstrates prediction errors of 0.35% for strain energy and 5.1% for maximum vertical mid-span deflection. By synergizing explicit modeling with intelligent algorithms, this methodology effectively resolves the longstanding efficiency–accuracy trade-off in cable force optimization for cable-stayed bridges. It achieves over 80% reduction in computational costs while enhancing critical structural performance metrics. Engineers are thereby equipped with a rapid and reliable optimization framework for geometrically complex cable-stayed bridges, delivering significant improvements in structural safety and construction feasibility. Ultimately, this approach establishes both theoretical substantiation and practical engineering benchmarks for designing non-conventional cable-stayed bridge configurations. Full article
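The RSM surrogate described above is, in essence, a second-order polynomial fitted by least squares to a modest number of finite-element samples. A minimal sketch of that idea, with a toy two-variable response standing in for the strain-energy model (the coefficients and sample count are illustrative assumptions):

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x_i, and x_i * x_j (i <= j) for each sample row."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares fit of a second-order response surface to samples."""
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return beta

def predict_rsm(beta, X):
    return quadratic_design_matrix(X) @ beta

# Toy stand-in for "design variables -> strain energy"; it is itself
# quadratic, so the surrogate recovers it almost exactly.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(40, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + X[:, 1] ** 2
beta = fit_rsm(X, y)
err = np.abs(predict_rsm(beta, X) - y).max()
print(err < 1e-8)  # True: cheap surrogate replaces the expensive model
```

An optimizer such as MOPSO then queries the fitted polynomial instead of the finite-element model, which is where the reported computational savings come from.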
(This article belongs to the Section Building Structures)

32 pages, 5164 KiB  
Article
Decentralized Distributed Sequential Neural Networks Inference on Low-Power Microcontrollers in Wireless Sensor Networks: A Predictive Maintenance Case Study
by Yernazar Bolat, Iain Murray, Yifei Ren and Nasim Ferdosian
Sensors 2025, 25(15), 4595; https://doi.org/10.3390/s25154595 - 24 Jul 2025
Viewed by 380
Abstract
The growing adoption of IoT applications has led to increased use of low-power microcontroller units (MCUs) for energy-efficient, local data processing. However, deploying deep neural networks (DNNs) on these constrained devices is challenging due to limitations in memory, computational power, and energy. Traditional methods like cloud-based inference and model compression often incur bandwidth, privacy, and accuracy trade-offs. This paper introduces a novel Decentralized Distributed Sequential Neural Network (DDSNN) designed for low-power MCUs in Tiny Machine Learning (TinyML) applications. Unlike existing methods that rely on centralized cluster-based approaches, DDSNN partitions a pre-trained LeNet across multiple MCUs, enabling fully decentralized inference in wireless sensor networks (WSNs). We validate DDSNN in a real-world predictive maintenance scenario, where vibration data from an industrial pump is analyzed in real time. The experimental results demonstrate that DDSNN achieves 99.01% accuracy, matching the non-distributed baseline model while reducing inference latency by approximately 50%, a significant improvement over traditional, non-distributed approaches under realistic operating conditions. Full article
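The core idea of partitioning a sequential network across nodes can be sketched in a few lines: because layers compose sequentially, splitting them across devices and forwarding the intermediate activations reproduces the full model's output exactly. A toy numpy illustration (layer sizes and weights are arbitrary; the paper's LeNet partitioning is not reproduced here):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(layers, x):
    """Run a stack of (W, b) dense layers with ReLU activations."""
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(2)
sizes = [16, 32, 24, 10]                      # illustrative layer widths
layers = [(rng.normal(size=(a, b)), rng.normal(size=b))
          for a, b in zip(sizes, sizes[1:])]
x = rng.normal(size=(1, 16))

# Full model on one device vs. the same layers split across two "MCUs":
full = forward(layers, x)
mid = forward(layers[:2], x)          # node 1 sends its activations onward
distributed = forward(layers[2:], mid)  # node 2 finishes the inference
print(np.allclose(full, distributed))  # True: partitioning preserves outputs
```

Latency gains then come from pipelining: while node 2 finishes one sample, node 1 can already start the next.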

39 pages, 1774 KiB  
Review
FACTS Controllers’ Contribution for Load Frequency Control, Voltage Stability and Congestion Management in Deregulated Power Systems over Time: A Comprehensive Review
by Muhammad Asad, Muhammad Faizan, Pericle Zanchetta and José Ángel Sánchez-Fernández
Appl. Sci. 2025, 15(14), 8039; https://doi.org/10.3390/app15148039 - 18 Jul 2025
Viewed by 392
Abstract
Incremental energy demand, environmental constraints, restrictions on the availability of energy resources, economic conditions, and political impacts prompt the power sector toward deregulation. In addition to these impediments, competition over power quality, reliability, availability, and cost forces utilities to maximize utilization of the existing infrastructure by operating transmission lines near their thermal limits. All these factors introduce problems related to power network stability, reliability, quality, congestion management, and security in restructured power systems. To overcome these problems, power-electronics-based FACTS devices are among the most beneficial solutions currently available. In this review paper, the significant role of FACTS devices in restructured power networks and their technical benefits against various power system problems, such as load frequency control, voltage stability, and congestion management, are presented. In addition, an extensive comparison between different FACTS devices (series, shunt, and their combinations) and between the various optimization techniques (classical, analytical, hybrid, and meta-heuristic) that support FACTS devices in achieving their respective benefits is provided. Generally, it is concluded that third-generation FACTS controllers are the most popular for mitigating various power system problems (i.e., load frequency control, voltage stability, and congestion management). Moreover, a combination of multiple FACTS devices, with or without energy storage devices, is more beneficial than their individual usage. However, such combinations are not commonly adopted in small power systems due to high installation and maintenance costs; there is therefore a trade-off between the selection and cost of FACTS devices when minimizing power system problems. Likewise, meta-heuristic and hybrid optimization techniques are commonly adopted to optimize FACTS devices due to their fast convergence, robustness, higher accuracy, and flexibility. Full article
(This article belongs to the Special Issue State-of-the-Art of Power Systems)

50 pages, 9734 KiB  
Article
Efficient Hotspot Detection in Solar Panels via Computer Vision and Machine Learning
by Nayomi Fernando, Lasantha Seneviratne, Nisal Weerasinghe, Namal Rathnayake and Yukinobu Hoshino
Information 2025, 16(7), 608; https://doi.org/10.3390/info16070608 - 15 Jul 2025
Viewed by 575
Abstract
Solar power generation is rapidly emerging within renewable energy due to its cost-effectiveness and ease of deployment. However, improper inspection and maintenance lead to significant damage from unnoticed solar hotspots. Even with inspections, factors like shadows, dust, and shading cause localized heat, mimicking hotspot behavior. This study emphasizes interpretability and efficiency, identifying key predictive features through feature-level and what-if analysis. It evaluates model training and inference times to assess effectiveness in resource-limited environments, aiming to balance accuracy, generalization, and efficiency. Using Unmanned Aerial Vehicle (UAV)-acquired thermal images from five datasets, the study compares five Machine Learning (ML) models and five Deep Learning (DL) models. Explainable AI (XAI) techniques guide the analysis, with a particular focus on MPEG (Moving Picture Experts Group)-7 features for hotspot discrimination, supported by statistical validation. Medium Gaussian SVM achieved the best trade-off, with 99.3% accuracy and 18 s inference time. Feature analysis revealed blue chrominance as a strong early indicator of hotspot development. Statistical validation across datasets confirmed the discriminative strength of MPEG-7 features. This study revisits the assumption that DL models are inherently superior, presenting an interpretable alternative for hotspot detection and highlighting the potential impact of domain mismatch. Model-level insight shows that both absolute and relative temperature variations are important in solar panel inspections. The relative decrease in “blueness” provides a crucial early indication of faults, especially in low-contrast thermal images where distinguishing normal warm areas from actual hotspots is difficult. Feature-level insight highlights how subtle changes in color composition, particularly reductions in blue components, serve as early indicators of developing anomalies. Full article
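The blue-chrominance cue highlighted above corresponds to the Cb channel of the standard BT.601 RGB-to-YCbCr conversion: a hot (reddish) region scores lower on Cb than a neutral one. A small sketch (the pixel values are invented for illustration):

```python
import numpy as np

def blue_chrominance(rgb):
    """Cb channel of the BT.601 RGB -> YCbCr conversion (inputs in [0, 255]).

    Higher values mean 'bluer' pixels; hotspots lose blueness as they warm.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b

neutral = np.array([[[128.0, 128.0, 128.0]]])  # invented neutral-gray pixel
hotspot = np.array([[[220.0, 90.0, 40.0]]])    # invented hot (reddish) pixel
print(bool((blue_chrominance(hotspot) < blue_chrominance(neutral)).all()))  # True
```

Thresholding or learning on this channel is cheap, which fits the study's emphasis on efficiency in resource-limited settings.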

30 pages, 795 KiB  
Article
A Novel Heterogeneous Federated Edge Learning Framework Empowered with SWIPT
by Yinyin Fang, Sheng Shu, Yujun Zhu, Heju Li and Kunkun Rui
Symmetry 2025, 17(7), 1115; https://doi.org/10.3390/sym17071115 - 11 Jul 2025
Viewed by 217
Abstract
Federated edge learning (FEEL) is an innovative approach that facilitates collaborative training among numerous distributed edge devices while eliminating the need to transfer sensitive information. However, the practical deployment of FEEL faces significant constraints, owing to the limited and asymmetric computational and communication resources of these devices, along with their energy availability. To this end, we propose a novel asymmetry-tolerant training approach for FEEL, enabled via simultaneous wireless information and power transfer (SWIPT). This framework leverages SWIPT to offer sustainable energy support for devices while enabling them to train models with varying intensities. Given a limited energy budget, we highlight the critical trade-off between heterogeneous local training intensities and the quality of wireless transmission, suggesting that the design of local training and wireless transmission should be closely integrated, rather than treated as separate entities. To elucidate this perspective, we rigorously derive a new explicit upper bound that captures the combined impact of local training accuracy and the mean square error of wireless aggregation on the convergence performance of FEEL. To maximize overall system performance, we formulate two key optimization problems: the first aims to maximize the energy harvesting capability among all devices, while the second addresses the joint learning–communication optimization under the optimal energy harvesting solution. Comprehensive experiments demonstrate that our proposed framework achieves significant performance improvements compared to existing baselines. Full article
(This article belongs to the Section Computer)

30 pages, 2575 KiB  
Review
The Potential of Utility-Scale Hybrid Wind–Solar PV Power Plant Deployment: From the Data to the Results
by Luis Arribas, Javier Domínguez, Michael Borsato, Ana M. Martín, Jorge Navarro, Elena García Bustamante, Luis F. Zarzalejo and Ignacio Cruz
Wind 2025, 5(3), 16; https://doi.org/10.3390/wind5030016 - 7 Jul 2025
Viewed by 703
Abstract
The deployment of utility-scale hybrid wind–solar PV power plants is gaining global attention due to their enhanced performance in power systems with high renewable energy penetration. To assess their potential, accurate estimations must be derived from the available data, addressing key challenges such as (1) the spatial and temporal resolution requirements, particularly for renewable resource characterization; (2) energy balances aligned with various business models; (3) regulatory constraints (environmental, technical, etc.); and (4) the cost dependencies of the different components and system characteristics. When conducting such analyses at the regional or national scale, a trade-off must be achieved to balance accuracy with computational efficiency. This study reviews existing experiences in hybrid plant deployment, with a focus on Spain, identifying the lack of national-scale product cost models for HPPs as the main gap and establishing a replicable methodology for hybrid plant mapping. A simplified example is shown using this methodology for a country-level analysis. Full article
(This article belongs to the Topic Solar and Wind Power and Energy Forecasting, 2nd Edition)

31 pages, 9063 KiB  
Article
Client Selection in Federated Learning on Resource-Constrained Devices: A Game Theory Approach
by Zohra Dakhia and Massimo Merenda
Appl. Sci. 2025, 15(13), 7556; https://doi.org/10.3390/app15137556 - 5 Jul 2025
Viewed by 445
Abstract
Federated Learning (FL), a key paradigm in privacy-preserving and distributed machine learning (ML), enables collaborative model training across decentralized data sources without requiring raw data exchange. However, selecting appropriate clients remains a major challenge, especially in heterogeneous environments with diverse battery levels, privacy needs, and learning capacities. In this work, a centralized reward-based payoff strategy (RBPS) with cooperative intent is proposed for client selection. In RBPS, each client evaluates participation based on its locally measured battery level, privacy requirement, and the model’s accuracy in the current round, computing a payoff from these factors and electing to participate if the payoff exceeds a predefined threshold. Participating clients then receive the updated global model. By jointly optimizing model accuracy, privacy preservation, and battery-level constraints, RBPS realizes a multi-objective selection mechanism. Under realistic simulations of client heterogeneity, RBPS yields more robust and efficient training than existing methods, confirming its suitability for deployment in resource-constrained FL settings. Experimental analysis demonstrates that RBPS offers significant advantages over state-of-the-art (SOA) client selection methods, particularly those relying on a single selection criterion such as accuracy, battery, or privacy alone. These one-dimensional approaches often lead to trade-offs where improvements in one aspect come at the cost of another. In contrast, RBPS leverages client heterogeneity not as a limitation but as a strategic asset to maintain and balance all critical characteristics simultaneously. Rather than optimizing performance for a single device type or constraint, RBPS benefits from the diversity of heterogeneous clients, enabling improved accuracy, energy preservation, and privacy protection all at once. This is achieved by dynamically adapting the selection strategy to the strengths of different client profiles. Unlike in homogeneous environments, where a single capability tends to dominate, RBPS ensures that no key property is sacrificed. RBPS thus aligns closely with real-world FL deployments, where mixed-device participation is common and balanced optimization is essential. Full article
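The participation rule described above (a payoff computed from battery, privacy, and accuracy, compared against a threshold) can be sketched as follows; the weights, threshold, and payoff form are illustrative guesses, not the paper's actual formulation:

```python
def rbps_payoff(battery, privacy_ok, accuracy, weights=(0.4, 0.3, 0.3)):
    """Hypothetical reward-based payoff: a weighted mix of battery level,
    privacy compatibility, and current-round model accuracy (all in [0, 1])."""
    wb, wp, wa = weights
    return wb * battery + wp * float(privacy_ok) + wa * accuracy

def participates(battery, privacy_ok, accuracy, threshold=0.5):
    """Client joins the round only if its payoff clears the threshold."""
    return rbps_payoff(battery, privacy_ok, accuracy) > threshold

# A healthy client opts in; a depleted, privacy-conflicted one opts out.
print(participates(battery=0.9, privacy_ok=True, accuracy=0.8))   # True
print(participates(battery=0.1, privacy_ok=False, accuracy=0.4))  # False
```

Because each client evaluates the rule on local measurements, the server never needs raw battery or privacy data, which matches the decentralized spirit of FL.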

24 pages, 2389 KiB  
Article
A Multi-Objective Optimization Framework for Robust and Accurate Photovoltaic Model Parameter Identification Using a Novel Parameterless Algorithm
by Mohammed Alruwaili
Processes 2025, 13(7), 2111; https://doi.org/10.3390/pr13072111 - 3 Jul 2025
Viewed by 368
Abstract
Photovoltaic (PV) models are hard to optimize due to their intrinsic complexity and changing operating conditions. Classic single-objective optimization methods typically give precedence to root mean square error (RMSE) alone, leaving them ill-suited to the intricate nature of PV model calibration. To overcome these limitations, this research proposes a novel multi-objective optimization framework that balances accuracy and robustness by considering both the maximum error and the L2 norm as objective functions. In addition, we introduce the Random Search Around Bests (RSAB) algorithm, a parameterless metaheuristic designed to explore the solution space effectively. The primary contributions of this work are as follows: (1) an extensive performance evaluation of the proposed framework; (2) an adaptable function to dynamically adjust the trade-off between robustness and error minimization; and (3) the elimination of manual tuning of RSAB parameters. Rigorous testing across three PV models demonstrates RSAB’s superiority over 17 state-of-the-art algorithms. By overcoming significant issues such as premature convergence and entrapment in local minima, the proposed procedure provides practitioners with a reliable tool for optimizing PV systems. This research thereby supports the overarching goals of sustainable energy technology by offering an organized and flexible solution that enhances the accuracy and efficiency of PV modeling, furthering research in renewable energy. Full article
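The RSAB algorithm itself is not specified in this listing; one plausible reading, random candidates drawn from neighborhoods around the best solution found so far, with the neighborhood shrinking on a fixed schedule so no step size needs tuning, can be sketched as below. The shrink schedule and single-best archive are assumptions for illustration only:

```python
import numpy as np

def rsab(f, bounds, iters=3000, seed=0):
    """Toy 'random search around bests': sample candidates from a
    neighborhood of the best-so-far point that shrinks over time."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_x = rng.uniform(lo, hi)
    best_y = f(best_x)
    for t in range(1, iters + 1):
        radius = (hi - lo) / np.sqrt(t)  # fixed schedule: nothing to tune
        cand = np.clip(best_x + rng.uniform(-radius, radius), lo, hi)
        y = f(cand)
        if y < best_y:
            best_x, best_y = cand, y
    return best_x, best_y

# Minimize a simple least-squares-style objective (optimum at (1, 2)),
# standing in for a PV parameter-fitting error surface.
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
x_best, y_best = rsab(f, bounds=[(-5.0, 5.0), (-5.0, 5.0)])
print(round(y_best, 6))
```

The wide early neighborhoods explore globally (guarding against local-minimum entrapment), while the late narrow ones refine the solution.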

21 pages, 551 KiB  
Article
Enhancing LoRaWAN Performance Using Boosting Machine Learning Algorithms Under Environmental Variations
by Maram A. Alkhayyal and Almetwally M. Mostafa
Sensors 2025, 25(13), 4101; https://doi.org/10.3390/s25134101 - 30 Jun 2025
Viewed by 433
Abstract
Accurate path loss prediction is essential for optimizing Long-Range Wide-Area Network (LoRaWAN) performance. Previous studies have employed various Machine Learning (ML) models for path loss prediction. However, environmental factors such as temperature, humidity, barometric pressure, and particulate matter have been largely neglected. This study bridges this gap by evaluating the performance of five boosting ML models—AdaBoost, XGBoost, LightGBM, GentleBoost, and LogitBoost—under dynamic environmental conditions. The models were compared with theoretical models (Log-Distance and Okumura-Hata) and existing studies that employed the same dataset based on metrics such as RMSE, MAE, and R2. Furthermore, a detailed performance vs. complexity analysis was conducted using metrics such as training time, inference latency, model size, and energy consumption. Notably, barometric pressure emerged as the most influential environmental factor affecting path loss across all models. Bayesian Optimization was applied to fine-tune hyperparameters to improve model accuracy. Results showed that LightGBM outperformed other models with the lowest RMSE of 0.5166 and the highest R2 of 0.7151. LightGBM also offered the best trade-off between accuracy and computational efficiency. The findings show that boosting algorithms, particularly LightGBM, are highly effective for path loss prediction in LoRaWANs. Full article
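One of the theoretical baselines mentioned above, the Log-Distance model, is simple enough to state directly: PL(d) = PL(d0) + 10 n log10(d / d0) in dB. A sketch with an assumed reference loss and path loss exponent (both would be fitted to measurements in practice):

```python
import math

def log_distance_path_loss(d, d0=1.0, pl0=40.0, n=2.7):
    """PL(d) = PL(d0) + 10 * n * log10(d / d0), in dB.

    pl0 and the exponent n are assumed values here; in practice both are
    fitted to measured data for the deployment environment.
    """
    return pl0 + 10.0 * n * math.log10(d / d0)

# Doubling the distance adds 10 * n * log10(2) dB, independent of pl0.
delta = log_distance_path_loss(200.0) - log_distance_path_loss(100.0)
print(round(delta, 2))  # 8.13
```

The boosting models in the study effectively learn corrections to this kind of baseline from environmental inputs such as barometric pressure.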
(This article belongs to the Section Internet of Things)

31 pages, 741 KiB  
Article
Inspiring from Galaxies to Green AI in Earth: Benchmarking Energy-Efficient Models for Galaxy Morphology Classification
by Vasileios Alevizos, Emmanouil V. Gkouvrikos, Ilias Georgousis, Sotiria Karipidou and George A. Papakostas
Algorithms 2025, 18(7), 399; https://doi.org/10.3390/a18070399 - 28 Jun 2025
Viewed by 339
Abstract
Recent advancements in space exploration have significantly increased the volume of astronomical data, heightening the demand for efficient analytical methods. Concurrently, the considerable energy consumption of machine learning (ML) has fostered the emergence of Green AI, emphasizing sustainable, energy-efficient computational practices. We introduce the first large-scale Green AI benchmark for galaxy morphology classification, evaluating over 30 machine learning architectures (classical, ensemble, deep, and hybrid) on CPU and GPU platforms using a balanced subset of the Galaxy Zoo dataset. Beyond traditional metrics (precision, recall, and F1-score), we quantify inference latency, energy consumption, and carbon-equivalent emissions to derive an integrated EcoScore that captures the trade-off between predictive performance and environmental impact. Our results reveal that a GPU-optimized multilayer perceptron achieves state-of-the-art accuracy of 98% while emitting 20× less CO2 than ensemble forests, which—despite comparable accuracy—incur substantially higher energy costs. We demonstrate that hardware–algorithm co-design, model sparsification, and careful hyperparameter tuning can reduce carbon footprints by over 90% with negligible loss in classification quality. These findings provide actionable guidelines for deploying energy-efficient, high-fidelity models in both ground-based data centers and onboard space observatories, paving the way for truly sustainable, large-scale astronomical data analysis. Full article
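The paper's EcoScore is not defined in this listing; a hypothetical score of the same flavor, blending predictive quality with a normalized energy cost, might look like the following (the weights, budget, and numbers are invented for illustration):

```python
def eco_score(f1, energy_j, alpha=0.5, energy_budget_j=100.0):
    """Hypothetical EcoScore-style metric: higher is better.

    f1 in [0, 1]; energy_j is capped at energy_budget_j for normalization.
    The paper's exact formulation is not given here.
    """
    energy_term = 1.0 - min(energy_j, energy_budget_j) / energy_budget_j
    return alpha * f1 + (1.0 - alpha) * energy_term

mlp = eco_score(f1=0.98, energy_j=4.0)      # accurate and frugal
forest = eco_score(f1=0.97, energy_j=80.0)  # similar accuracy, 20x energy
print(mlp > forest)  # True
```

Under any such blend, a model with near-identical accuracy but an order of magnitude higher energy use ranks clearly lower, which is the ordering the benchmark reports.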
(This article belongs to the Special Issue Artificial Intelligence in Space Applications)

54 pages, 2065 KiB  
Review
Edge Intelligence: A Review of Deep Neural Network Inference in Resource-Limited Environments
by Dat Ngo, Hyun-Cheol Park and Bongsoon Kang
Electronics 2025, 14(12), 2495; https://doi.org/10.3390/electronics14122495 - 19 Jun 2025
Viewed by 1437
Abstract
Deploying deep neural networks (DNNs) in resource-limited environments—such as smartwatches, IoT nodes, and intelligent sensors—poses significant challenges due to constraints in memory, computing power, and energy budgets. This paper presents a comprehensive review of recent advances in accelerating DNN inference on edge platforms, with a focus on model compression, compiler optimizations, and hardware–software co-design. We analyze the trade-offs between latency, energy, and accuracy across various techniques, highlighting practical deployment strategies on real-world devices. In particular, we categorize existing frameworks based on their architectural targets and adaptation mechanisms and discuss open challenges such as runtime adaptability and hardware-aware scheduling. This review aims to guide the development of efficient and scalable edge intelligence solutions. Full article
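Model compression, one of the reviewed technique families, is easy to illustrate concretely: symmetric per-tensor int8 quantization stores weights in a quarter of the memory at the cost of a bounded rounding error. A minimal sketch:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(3)
w = rng.normal(scale=0.05, size=(64, 64)).astype(np.float32)  # toy weights
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.nbytes, w.nbytes)   # 4096 16384: 4x smaller
print(bool(err <= s / 2 + 1e-8))  # worst case is half a quantization step
```

This is the latency/energy/accuracy trade-off in miniature: smaller weights mean less memory traffic, at the price of a bounded approximation error.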

19 pages, 8803 KiB  
Article
An Accurate and Low-Complexity Offset Calibration Methodology for Dynamic Comparators
by Juan Cuenca, Benjamin Zambrano, Esteban Garzón, Luis Miguel Prócel and Marco Lanuzza
J. Low Power Electron. Appl. 2025, 15(2), 35; https://doi.org/10.3390/jlpea15020035 - 2 Jun 2025
Viewed by 754
Abstract
Dynamic comparators play an important role in electronic systems, requiring high accuracy, low power consumption, and minimal offset voltage. This work proposes an accurate and low-complexity offset calibration design based on a capacitive load approach. It was designed using a 65 nm CMOS technology and comprehensively evaluated under Monte Carlo simulations and PVT variations. The proposed scheme was built using MIM capacitors and transistor-based capacitors, and it includes Verilog-based calibration algorithms. The proposed offset calibration is benchmarked, in terms of precision, calibration time, energy consumption, delay, and area, against prior calibration techniques: current injection via gate biasing by a charge pump circuit and current injection via parallel transistors. The evaluation of the offset calibration schemes relies on Analog/Mixed-Signal (AMS) simulations, ensuring accurate evaluation of digital and analog domains. The charge pump method achieved the best Energy-Delay Product (EDP) at the cost of lower long-term accuracy, mainly because of its capacitor leakage. The proposed scheme demonstrated superior performance in offset reduction, achieving a one-sigma offset of 0.223 mV while maintaining precise calibration. Among the calibration algorithms, the window algorithm performs better than the accelerated calibration. This is mainly because the window algorithm considers noise-induced output oscillations, ensuring consistent calibration across all designs. This work provides insights into the trade-offs between energy, precision, and area in dynamic comparator designs, offering strategies to enhance offset calibration. Full article
(This article belongs to the Special Issue Analog/Mixed-Signal Integrated Circuit Design)

24 pages, 6185 KiB  
Article
Decentralized Energy Management for Efficient Electric Vehicle Charging in DC Microgrids: A Piece-Wise Droop Control Approach
by Mallareddy Mounica, Bhooshan Avinash Rajpathak, Mohan Lal Kolhe, K. Raghavendra Naik, Janardhan Rao Moparthi, Sravan Kumar Kotha and Devasuth Govind
Processes 2025, 13(6), 1748; https://doi.org/10.3390/pr13061748 - 2 Jun 2025
Viewed by 810
Abstract
This paper addresses the challenges of efficient electric vehicle (EV) charging integration in Direct Current (DC) microgrids (MGs), particularly the impact of intermittent EV loads on power sharing and voltage regulation. Traditional droop control methods suffer from inherent trade-offs between the performance indices of parallel distributed energy resources (DERs), which in turn results in improper source utilization. We propose a novel decentralized piece-wise droop control (PDC) approach with voltage compensation for EV charging to overcome this limitation and to minimize the effect of unequal cable resistance on power sharing. This strategy dynamically optimizes the droop characteristics based on EV charging load profiles, partitioning the droop curve to improve power sharing accuracy and voltage stability under the constraints of maximum allowable voltage deviation and loading. Simulation and experimental results demonstrate significant improvements in power sharing, enhanced DER utilization, and voltage deviations consistently within 2.5% when compared with traditional strategies. PDC offers a robust solution for enabling efficient and reliable EV charging in MGs, as it is not sensitive to EV load prediction errors or measurement noise. Full article
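A piece-wise droop characteristic can be sketched as a voltage set-point that falls with output current at a segment-dependent slope; the breakpoint, slopes, and ratings below are illustrative values only, not the paper's design:

```python
def piecewise_droop_voltage(i_out, v_ref=400.0, i_rated=50.0):
    """Toy piece-wise droop: a shallow slope at light load (tight voltage
    regulation) and a steeper slope near rated current (better sharing).

    All numbers are invented for illustration.
    """
    r_light, r_heavy = 0.05, 0.20   # droop resistances (ohm) per segment
    i_break = 0.6 * i_rated         # where the droop curve is partitioned
    if i_out <= i_break:
        return v_ref - r_light * i_out
    return v_ref - r_light * i_break - r_heavy * (i_out - i_break)

v1 = piecewise_droop_voltage(10.0)  # light load: 400 - 0.5 = 399.5 V
v2 = piecewise_droop_voltage(50.0)  # rated load: 400 - 1.5 - 4.0 = 394.5 V
print(v1, v2)
print((400.0 - v2) / 400.0 < 0.025)  # deviation stays within 2.5%
```

The steeper heavy-load segment strengthens current sharing between parallel DERs exactly when EV chargers draw the most, while the shallow segment keeps the bus voltage tight otherwise.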
