Search Results (76)

Search Parameters:
Keywords = smart grid convergence

37 pages, 1895 KiB  
Review
A Review of Artificial Intelligence and Deep Learning Approaches for Resource Management in Smart Buildings
by Bibars Amangeldy, Timur Imankulov, Nurdaulet Tasmurzayev, Gulmira Dikhanbayeva and Yedil Nurakhov
Buildings 2025, 15(15), 2631; https://doi.org/10.3390/buildings15152631 - 25 Jul 2025
Viewed by 455
Abstract
This comprehensive review maps the fast-evolving landscape in which artificial intelligence (AI) and deep-learning (DL) techniques converge with the Internet of Things (IoT) to manage energy, comfort, and sustainability across smart environments. A PRISMA-guided search of four databases retrieved 1358 records; after applying inclusion criteria, 143 peer-reviewed studies published between January 2019 and April 2025 were analyzed. This review shows that AI-driven controllers—especially deep-reinforcement-learning agents—deliver median energy savings of 18–35% for HVAC and other major loads, consistently outperforming rule-based and model-predictive baselines. The evidence further reveals a rapid diversification of methods: graph-neural-network models now capture spatial interdependencies in dense sensor grids, federated-learning pilots address data-privacy constraints, and early integrations of large language models hint at natural-language analytics and control interfaces for heterogeneous IoT devices. Yet large-scale deployment remains hindered by fragmented and proprietary datasets, unresolved privacy and cybersecurity risks associated with continuous IoT telemetry, the growing carbon and compute footprints of ever-larger models, and poor interoperability among legacy equipment and modern edge nodes. The reviewed literature therefore converges on several priorities: open, high-fidelity benchmarks that marry multivariate IoT sensor data with standardized metadata and occupant feedback; energy-aware, edge-optimized architectures that lower latency and power draw; privacy-centric learning frameworks that satisfy tightening regulations; hybrid physics-informed and explainable models that shorten commissioning time; and digital-twin platforms enriched by language-model reasoning to translate raw telemetry into actionable insights for facility managers and end users. Addressing these gaps will be pivotal to transforming isolated pilots into ubiquitous, trustworthy, and human-centered IoT ecosystems capable of delivering measurable gains in efficiency, resilience, and occupant wellbeing at scale. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
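The review above reports that deep-reinforcement-learning controllers outperform rule-based HVAC baselines. As a purely illustrative sketch, not taken from any of the reviewed studies, the tabular Q-learning loop below on a hypothetical one-zone thermal model shows the basic ingredients such controllers share: a discretized state, an energy/comfort reward, and a learned switching policy. Real deployments use deep networks and far richer building models.

```python
import random

# Hypothetical toy setup: a one-zone thermal model controlled by tabular Q-learning,
# illustrating the style of DRL-based HVAC control surveyed in the review.
TEMPS = list(range(15, 31))          # discretized indoor temperature states, degC
ACTIONS = [0, 1]                     # 0 = heating off, 1 = heating on
COMFORT = (21, 24)                   # comfort band, degC

Q = {(t, a): 0.0 for t in TEMPS for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate

def step(temp, action, outdoor=10):
    """Crude dynamics: the room drifts toward the outdoor temperature and one
    heating step raises it by 2 degC; the reward trades energy use vs. comfort."""
    nxt = temp + (2 if action else 0) - (1 if temp > outdoor else 0)
    nxt = min(max(nxt, TEMPS[0]), TEMPS[-1])
    energy_cost = 1.0 if action else 0.0
    comfort_penalty = 0.0 if COMFORT[0] <= nxt <= COMFORT[1] else 2.0
    return nxt, -(energy_cost + comfort_penalty)

for episode in range(2000):
    temp = random.choice(TEMPS)
    for _ in range(48):              # 48 half-hour control steps per episode
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(temp, x)])
        nxt, r = step(temp, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(temp, a)] += alpha * (r + gamma * best_next - Q[(temp, a)])
        temp = nxt

policy = {t: max(ACTIONS, key=lambda a: Q[(t, a)]) for t in TEMPS}
print(policy)  # the learned policy heats below the comfort band and idles above it
```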

46 pages, 9390 KiB  
Article
Multi-Objective Optimization of Distributed Generation Placement in Electric Bus Transit Systems Integrated with Flash Charging Station Using Enhanced Multi-Objective Grey Wolf Optimization Technique and Consensus-Based Decision Support
by Yuttana Kongjeen, Pongsuk Pilalum, Saksit Deeum, Kittiwong Suthamno, Thongchai Klayklueng, Supapradit Marsong, Ritthichai Ratchapan, Krittidet Buayai, Kaan Kerdchuen, Wutthichai Sa-nga-ngam and Krischonme Bhumkittipich
Energies 2025, 18(14), 3638; https://doi.org/10.3390/en18143638 - 9 Jul 2025
Viewed by 466
Abstract
This study presents a comprehensive multi-objective optimization framework for optimal placement and sizing of distributed generation (DG) units in electric bus (E-bus) transit systems integrated with a high-power flash charging infrastructure. An enhanced Multi-Objective Grey Wolf Optimizer (MOGWO), utilizing Euclidean distance-based Pareto ranking, is developed to minimize power loss, voltage deviation, and voltage violations. The framework incorporates realistic E-bus operation characteristics, including a 31-stop, 62 km route, 600 kW pantograph flash chargers, and dynamic load profiles over a 90 min simulation period. Statistical evaluation on IEEE 33-bus and 69-bus distribution networks demonstrates that MOGWO consistently outperforms MOPSO and NSGA-II across all DG deployment scenarios. In the three-DG configuration, MOGWO achieved minimum power losses of 0.0279 MW and 0.0179 MW, and voltage deviations of 0.1313 and 0.1362 in the 33-bus and 69-bus systems, respectively, while eliminating voltage violations. The proposed method also demonstrated superior solution quality with low variance and faster convergence, requiring under 7 h of computation on average. A five-method compromise solution strategy, including TOPSIS and Lp-metric, enabled transparent and robust decision-making. The findings confirm the proposed framework’s effectiveness and scalability for enhancing distribution system performance under the demands of electric transit electrification and smart grid integration. Full article
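The abstract describes an enhanced MOGWO that ranks Pareto solutions by Euclidean distance. A minimal numpy sketch of that ranking idea follows, assuming all objectives are minimized and normalized to [0, 1]; the objective values are hypothetical, and the authors' full optimizer (wolf update rules, network constraints, consensus-based decision support) is not reproduced.

```python
import numpy as np

def pareto_front(F):
    """Boolean mask of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    nd = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            nd[i] = False
    return nd

def euclidean_rank(F):
    """Rank solutions by Euclidean distance to the ideal (per-objective minimum)
    point after min-max normalization; smaller distance = better compromise."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    Fn = (F - lo) / np.where(hi > lo, hi - lo, 1.0)
    dist = np.linalg.norm(Fn - Fn.min(axis=0), axis=1)
    return np.argsort(dist)

# Hypothetical candidates: columns = power loss (MW), voltage deviation, violations
F = np.array([[0.030, 0.14, 0], [0.028, 0.15, 0], [0.035, 0.12, 1], [0.027, 0.18, 0]])
front = F[pareto_front(F)]
order = euclidean_rank(front)
print(front[order][0])   # best-compromise non-dominated solution
```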

41 pages, 2392 KiB  
Review
How Beyond-5G and 6G Makes IIoT and the Smart Grid Green—A Survey
by Pal Varga, Áron István Jászberényi, Dániel Pásztor, Balazs Nagy, Muhammad Nasar and David Raisz
Sensors 2025, 25(13), 4222; https://doi.org/10.3390/s25134222 - 6 Jul 2025
Viewed by 655
Abstract
The convergence of next-generation wireless communication technologies and modern energy infrastructure presents a promising path toward sustainable and intelligent systems. This survey explores how beyond-5G and 6G communication technologies can support the greening of Industrial Internet of Things (IIoT) systems and smart grids. It highlights the critical challenges in achieving energy efficiency, interoperability, and real-time responsiveness across different domains. The paper reviews key enablers such as LPWAN, wake-up radios, mobile edge computing, and energy harvesting techniques for green IoT, as well as optimization strategies for 5G/6G networks and data center operations. Furthermore, it examines the role of 5G in enabling reliable, ultra-low-latency data communication for advanced smart grid applications, such as distributed generation, precise load control, and intelligent feeder automation. Through a structured analysis of recent advances and open research problems, the paper aims to identify essential directions for future research and development in building energy-efficient, resilient, and scalable smart infrastructures powered by intelligent wireless networks. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2025)

22 pages, 4907 KiB  
Article
Predefined Time Control of State-Constrained Multi-Agent Systems Based on Command Filtering
by Jianhua Zhang, Xuan Yu, Quanmin Zhu and Zhanyang Yu
Mathematics 2025, 13(13), 2151; https://doi.org/10.3390/math13132151 - 30 Jun 2025
Viewed by 280
Abstract
This paper resolves the predefined-time control problem for multi-agent systems under predefined performance metrics and state constraints, addressing critical limitations of traditional methods—notably their inability to enforce strict user-specified deadlines for mission-critical operations, coupled with difficulties in simultaneously guaranteeing transient performance bounds and state constraints while suffering prohibitive stability proof complexity. To overcome these challenges, we propose a predefined performance control methodology that integrates Barrier Lyapunov Functions (BLFs) with command-filtered backstepping. The framework rigorously ensures exact convergence within a user-defined time independent of initial conditions while enforcing strict state constraints through time-varying BLF boundaries, and it further delivers quantifiable performance guarantees such as overshoot below 5% and convergence within 10 s. By eliminating high-order derivative continuity proofs via the command-filter design, stability analysis complexity is reduced by 40% versus conventional backstepping. Stability proofs and dual-case simulations (UAV formation/smart grid) demonstrate over 95% tracking accuracy under disturbances and constraints, validating broad applicability in safety-critical multi-agent systems. Full article
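For readers unfamiliar with the constraint-handling tool named in the abstract, a standard symmetric log-type Barrier Lyapunov Function is sketched below; the paper's exact time-varying construction may differ, but the mechanism for encoding a state constraint is the same.

```latex
% Standard symmetric log-type Barrier Lyapunov Function for a tracking error z_1
% constrained by a (possibly time-varying) bound k_b(t):
V_1 = \frac{1}{2}\,\ln\!\frac{k_b^2(t)}{k_b^2(t) - z_1^2}, \qquad |z_1(0)| < k_b(0).
% V_1 grows without bound as |z_1| \to k_b, so keeping V_1 bounded along the
% closed loop guarantees the state constraint |z_1(t)| < k_b(t) for all t \ge 0.
```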

40 pages, 3694 KiB  
Article
AI-Enhanced MPPT Control for Grid-Connected Photovoltaic Systems Using ANFIS-PSO Optimization
by Mahmood Yaseen Mohammed Aldulaimi and Mesut Çevik
Electronics 2025, 14(13), 2649; https://doi.org/10.3390/electronics14132649 - 30 Jun 2025
Viewed by 502
Abstract
This paper presents an adaptive Maximum Power Point Tracking (MPPT) strategy for grid-connected photovoltaic (PV) systems that uses an Adaptive Neuro-Fuzzy Inference System (ANFIS) optimized by Particle Swarm Optimization (PSO) to enhance energy extraction efficiency under diverse environmental conditions. The proposed ANFIS-PSO-based MPPT controller dynamically adjusts Pulse Width Modulation (PWM) switching to minimize Total Harmonic Distortion (THD) while ensuring rapid convergence to the maximum power point (MPP). Unlike conventional Perturb and Observe (P&O) and Incremental Conductance (INC) methods, which struggle with tracking delays and local maxima in partial shading scenarios, the proposed approach efficiently identifies the Global Maximum Power Point (GMPP), improving energy harvesting capabilities. Simulation results in MATLAB/Simulink R2023a demonstrate that under stable irradiance conditions (1000 W/m², 25 °C), the controller achieved an MPPT efficiency of 99.2%, with THD reduced to 2.1%, ensuring grid compliance with IEEE 519 standards. In dynamic irradiance conditions, where sunlight varies linearly between 200 W/m² and 1000 W/m², the controller maintains an MPPT efficiency of 98.7%, with a response time of less than 200 ms, outperforming traditional MPPT algorithms. In the partial shading case, the proposed method effectively avoids local power maxima and successfully tracks the Global Maximum Power Point (GMPP), resulting in a power output of 138 W. In contrast, conventional techniques such as P&O and INC typically fail to escape local maxima under similar conditions, leading to significantly lower power output, often falling well below the true GMPP. This performance disparity underscores the superior tracking capability of the proposed ANFIS-PSO approach in complex irradiance scenarios, where traditional algorithms exhibit substantial energy loss due to their limited global search behavior. The novelty of this work lies in the integration of ANFIS with PSO optimization, enabling an intelligent self-adaptive MPPT strategy that enhances both tracking speed and accuracy while maintaining low computational complexity. This hybrid approach ensures real-time adaptation to environmental fluctuations, making it an optimal solution for grid-connected PV systems requiring high power quality and stability. The proposed controller significantly improves energy harvesting efficiency, minimizes grid disturbances, and enhances overall system robustness, demonstrating its potential for next-generation smart PV systems. Full article
(This article belongs to the Special Issue AI Applications for Smart Grid)
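The abstract's claim that population-based search escapes local power maxima under partial shading can be illustrated with a bare-bones PSO over the duty cycle of a hypothetical two-peak P-V curve; the ANFIS stage, the converter model, and the PWM generation are all omitted, and the curve below is an assumption, not the paper's test case.

```python
import numpy as np

rng = np.random.default_rng(0)

def pv_power(duty):
    """Hypothetical two-peak P-V curve under partial shading: a local peak near
    duty = 0.25 and the global maximum power point near duty = 0.70."""
    d = np.asarray(duty, dtype=float)
    return 80 * np.exp(-((d - 0.25) / 0.08) ** 2) + 138 * np.exp(-((d - 0.70) / 0.10) ** 2)

# Minimal PSO over the duty cycle in [0, 1]
n, iters, w, c1, c2 = 20, 60, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, n)                 # particle positions (duty cycles)
v = np.zeros(n)                          # particle velocities
pbest, pbest_val = x.copy(), pv_power(x)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=n), rng.uniform(size=n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 1.0)
    val = pv_power(x)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print(f"duty = {gbest:.3f}, P = {float(pv_power(gbest)):.1f} W")
```

A hill-climbing tracker such as P&O started near the 0.25 peak would stall there, whereas the swarm's global best settles near the higher peak, which is the behavior the abstract attributes to the PSO component.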

34 pages, 6313 KiB  
Article
Robust Photovoltaic Power Forecasting Model Under Complex Meteorological Conditions
by Yuxiang Guo, Qiang Han, Tan Li, Huichu Fu, Meng Liang and Siwei Zhang
Mathematics 2025, 13(11), 1783; https://doi.org/10.3390/math13111783 - 27 May 2025
Viewed by 405
Abstract
The rapid expansion of global photovoltaic (PV) capacity has imposed higher demands on forecast accuracy and timeliness in power dispatching. However, traditional PV power forecasting models designed for distributed PV power stations often struggle with accuracy due to unpredictable meteorological variations, data noise, non-stationary signals, and human-induced data collection errors. To effectively mitigate these limitations, this work proposes a dual-stage feature extraction method based on Variational Mode Decomposition (VMD) and Principal Component Analysis (PCA), enhancing multi-scale modeling and noise reduction capabilities. Additionally, the Whale Optimization Algorithm is adopted to efficiently optimize the hyperparameters of the iTransformer within the framework, improving parameter adaptability and convergence efficiency. Based on the VMD-PCA-refined features, the iTransformer is then employed to perform continuous active power prediction across time steps, leveraging its strength in modeling long-range temporal dependencies under complex meteorological conditions. Experimental results demonstrate that the proposed model exhibits superior robustness across multiple evaluation metrics, including coefficient of determination, mean square error, mean absolute error, and root mean square error, with comparatively low latency. This research provides valuable model support for reliable PV system dispatch and its application in smart grids. Full article
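A compressed sketch of the second stage of the dual-stage feature extraction is shown below, assuming the VMD modes have already been computed (synthetic stand-in modes are used here): scikit-learn's PCA keeps the high-variance components of windowed mode features, which would then feed a forecaster such as the iTransformer. Window length and mode count are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Assume a VMD step (not shown) has split an active-power series into K modes;
# the modes here are synthetic stand-ins: trend, two oscillations, and noise.
T, K = 1440, 4
t = np.arange(T)
modes = np.stack([
    0.5 + 0.4 * np.sin(2 * np.pi * t / 1440),   # slow daily-trend-like mode
    0.1 * np.sin(2 * np.pi * t / 180),          # mid-frequency mode
    0.05 * np.sin(2 * np.pi * t / 30),          # high-frequency mode
    0.05 * rng.standard_normal(T),              # noise-dominated mode
], axis=1)                                      # shape (T, K)

# Sliding windows over the mode matrix become feature vectors for the forecaster.
win = 24
X = np.stack([modes[i:i + win].ravel() for i in range(T - win)])  # (T - win, win * K)

# PCA keeps the components explaining 95% of the variance, discarding noise-heavy ones.
pca = PCA(n_components=0.95, svd_solver="full")
Z = pca.fit_transform(X)
print(X.shape, "->", Z.shape)   # compressed, denoised features for the predictor
```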

35 pages, 4428 KiB  
Article
An Evolutionary Deep Reinforcement Learning-Based Framework for Efficient Anomaly Detection in Smart Power Distribution Grids
by Mohammad Mehdi Sharifi Nevisi, Mehrdad Shoeibi, Francisco Hernando-Gallego, Diego Martín and Sarvenaz Sadat Khatami
Energies 2025, 18(10), 2435; https://doi.org/10.3390/en18102435 - 9 May 2025
Viewed by 588
Abstract
The increasing complexity of modern smart power distribution systems (SPDSs) has made anomaly detection a significant challenge, as these systems generate vast amounts of heterogeneous and time-dependent data. Conventional detection methods often struggle with adaptability, generalization, and real-time decision-making, leading to high false alarm rates and inefficient fault detection. To address these challenges, this study proposes a novel deep reinforcement learning (DRL)-based framework, integrating a convolutional neural network (CNN) for hierarchical feature extraction and a recurrent neural network (RNN) for sequential pattern recognition and time-series modeling. To enhance model performance, we introduce a novel non-dominated sorting artificial bee colony (NSABC) algorithm, which fine-tunes the hyper-parameters of the CNN-RNN structure, including weights, biases, the number of layers, and neuron configurations. This optimization ensures improved accuracy, faster convergence, and better generalization to unseen data. The proposed DRL-NSABC model is evaluated using four benchmark datasets: smart grid, advanced metering infrastructure (AMI), smart meter, and Pecan Street, widely recognized in anomaly detection research. A comparative analysis against state-of-the-art deep learning (DL) models, including RL, CNN, RNN, the generative adversarial network (GAN), the time-series transformer (TST), and bidirectional encoder representations from transformers (BERT), demonstrates the superiority of the proposed DRL-NSABC. The proposed DRL-NSABC model achieved high accuracy across all benchmark datasets, including 95.83% on the smart grid dataset, 96.19% on AMI, 96.61% on the smart meter, and 96.45% on Pecan Street. Statistical t-tests confirm the superiority of DRL-NSABC over other algorithms, while achieving a variance of 0.00014. Moreover, DRL-NSABC demonstrates the fastest convergence, reaching near-optimal accuracy within the first 100 epochs. By significantly reducing false positives and ensuring rapid anomaly detection with low computational overhead, the proposed DRL-NSABC framework enables efficient real-world deployment in smart power distribution systems without major infrastructure upgrades and promotes cost-effective, resilient power grid operations. Full article
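A minimal PyTorch module of the CNN-RNN shape described in the abstract is sketched below; the layer sizes, channel count, and window length are assumptions, and the NSABC hyper-parameter search and the DRL training loop are not included.

```python
import torch
import torch.nn as nn

class CnnRnnDetector(nn.Module):
    """Minimal CNN + RNN anomaly classifier: Conv1d layers extract local features
    from a multivariate load window, an LSTM models their temporal order, and a
    linear head scores normal vs. anomalous. Sizes here are illustrative; in the
    paper they are tuned by the NSABC algorithm."""
    def __init__(self, in_channels=8, conv_channels=32, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.conv(x)                  # (batch, conv_channels, time)
        h = h.transpose(1, 2)             # (batch, time, conv_channels) for the LSTM
        _, (h_n, _) = self.rnn(h)
        return self.head(h_n[-1])         # class logits

# Smoke test on a random batch: 16 windows, 8 sensor channels, 96 time steps
logits = CnnRnnDetector()(torch.randn(16, 8, 96))
print(logits.shape)                       # torch.Size([16, 2])
```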

16 pages, 1689 KiB  
Article
Smart Grids and Sustainability: The Impact of Digital Technologies on the Energy Transition
by Paola Campana, Riccardo Censi, Roberto Ruggieri and Carlo Amendola
Energies 2025, 18(9), 2149; https://doi.org/10.3390/en18092149 - 22 Apr 2025
Cited by 2 | Viewed by 1721
Abstract
Smart Grids (SG) represent a key element in the energy transition, facilitating the integration of renewable and conventional energy sources through the use of advanced digital technologies. This study analyzes the main research trends related to SG, energy efficiency, and the role of Artificial Intelligence (AI) and the Internet of Things (IoT) in smart energy management. Following the PRISMA protocol, 179 relevant academic articles indexed in the Scopus database were selected and analyzed using VOSviewer software, version 1.6.20, to identify the main thematic clusters. The results reveal a converging research focus on energy flow optimization, renewable energy integration, and the adoption of digital technologies—including cybersecurity solutions—to ensure grid efficiency, security, and resilience. The study confirms that digitalization acts as a key enabler for building a more sustainable and reliable energy system, aligned with the objectives of the European Union and the United Nations 2030 Agenda. The contribution of this work lies in its integrated approach to the analysis of digital technologies, linking the themes of efficiency, resilience, and infrastructure security, in order to provide valuable insights for future research and sustainable energy policy development. Full article

20 pages, 3173 KiB  
Article
Tuning Parameters of Genetic Algorithms for Wind Farm Optimization Using the Design of Experiments Method
by Wahiba El Mestari, Nawal Cheggaga, Feriel Adli, Abdellah Benallal and Adrian Ilinca
Sustainability 2025, 17(7), 3011; https://doi.org/10.3390/su17073011 - 28 Mar 2025
Cited by 2 | Viewed by 803
Abstract
Wind energy is a vital renewable resource with substantial economic and environmental benefits, yet its spatial variability poses significant optimization challenges. This study advances wind farm layout optimization by employing a systematic genetic algorithm (GA) tuning approach using the design of experiments (DOE) method. Specifically, a full factorial 2² DOE was utilized to optimize crossover and mutation coefficients, enhancing convergence speed and overall algorithm performance. The methodology was applied to a hypothetical wind farm with unidirectional wind flow and spatial constraints, using a fitness function that incorporates wake effects and maximizes energy production. The results demonstrated a 4.50% increase in power generation and a 4.87% improvement in fitness value compared to prior studies. Additionally, the optimized GA parameters enabled the placement of additional turbines, enhancing site utilization while maintaining cost-effectiveness. ANOVA and response surface analysis confirmed the significant interaction effects between GA parameters, highlighting the importance of systematic tuning over conventional trial-and-error approaches. This study establishes a foundation for real-world applications, including smart grid integration and adaptive renewable energy systems, by providing a robust, data-driven framework for wind farm optimization. The findings reinforce the crucial role of systematic parameter tuning in improving wind farm efficiency, energy output, and economic feasibility. Full article
(This article belongs to the Section Resources and Sustainable Utilization)
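The full factorial 2² design named in the abstract simply evaluates every low/high combination of the two GA coefficients. The toy sketch below runs a tiny GA (a bit-counting objective stands in for the wake-effect wind-farm fitness, and the factor levels are assumptions) at each of the four corners with replicated runs, producing the kind of response table that ANOVA and response-surface analysis act on.

```python
import itertools
import random
import statistics

def toy_ga(crossover_rate, mutation_rate, n_bits=30, pop=40, gens=60, seed=0):
    """Tiny GA maximizing the number of 1-bits (stand-in for a wind-farm fitness)."""
    rnd = random.Random(seed)
    fitness = lambda ind: sum(ind)
    popn = [[rnd.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            a, b = (max(rnd.sample(popn, 3), key=fitness) for _ in range(2))  # tournaments
            child = list(a)
            if rnd.random() < crossover_rate:                 # one-point crossover
                cut = rnd.randrange(1, n_bits)
                child = a[:cut] + b[cut:]
            child = [bit ^ 1 if rnd.random() < mutation_rate else bit for bit in child]
            nxt.append(child)
        popn = nxt
    return max(map(fitness, popn))

# Full factorial 2^2 design: two levels per factor, every combination tested.
levels = {"crossover": (0.6, 0.9), "mutation": (0.01, 0.05)}
for cx, mu in itertools.product(levels["crossover"], levels["mutation"]):
    runs = [toy_ga(cx, mu, seed=s) for s in range(5)]         # replicated runs
    print(f"crossover={cx}, mutation={mu}: mean best = {statistics.mean(runs):.1f}")
```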

37 pages, 6149 KiB  
Article
Nested Optimization Algorithms for Accurately Sizing a Clean Energy Smart Grid System, Considering Uncertainties and Demand Response
by Ali M. Eltamaly and Zeyad A. Almutairi
Sustainability 2025, 17(6), 2744; https://doi.org/10.3390/su17062744 - 19 Mar 2025
Cited by 2 | Viewed by 690
Abstract
Driven by environmental concerns and dwindling fossil fuels, a global shift towards renewable energy for electricity generation is underway, with ambitions for complete reliance by 2050. However, the intermittent nature of renewable power creates a supply–demand mismatch. This challenge can be addressed through smart grid concepts that utilize demand-side management, energy storage systems, and weather/load forecasting. This study introduces a sizing technique for a clean energy smart grid (CESG) system that integrates these strategies. To optimize the design and sizing of the CESG, two nested approaches are proposed. The inner approach, “Optimal Operation,” is performed hourly to determine the most efficient operation for current conditions. The outer approach, “Optimal Sizing,” is conducted annually to identify the ideal size of grid components for maximum reliability and lowest cost. The detailed model incorporating component degradation predicted the operating conditions, showing that real-world conditions make the internal loop computationally expensive. A lotus effect optimization algorithm (LEA) that demonstrated superior performance in many applications is utilized in this study to increase the convergence speed. Although there is a considerable reduction in the convergence time when using a nested LEA (NLEA), the convergence time is still long. To address this issue, this study proposes replacing the internal LEA loop with an artificial neural network, trained using data from the NLEA. This significantly reduces computation time while maintaining accuracy. Overall, the use of demand response (DR) reduced the cost by about 28% compared with operation without DR. Moreover, the use of NLEA reduced the convergence time of the sizing problem by 43% compared with the best optimization algorithm used for comparison. Replacing the inner LEA optimization loop with the trained neural network reduced the convergence time of sizing the CESG to 1.08% of that of the NLEA. Full article
(This article belongs to the Section Energy Sustainability)
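The structural idea of the nested approach, replacing an expensive inner "Optimal Operation" loop with a neural-network surrogate that the outer "Optimal Sizing" search then queries, is sketched below with hypothetical cost functions, hypothetical design variables, and a plain grid search standing in for the LEA/NLEA.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def inner_optimal_operation(pv_kw, battery_kwh, hour_load):
    """Stand-in for the expensive hourly inner problem: choose a dispatch x that
    minimizes a hypothetical operating cost for a given design and load."""
    cost = lambda x: (hour_load - 0.3 * pv_kw - x) ** 2 + 0.05 * x ** 2 / (1 + battery_kwh)
    return minimize_scalar(cost, bounds=(0, battery_kwh), method="bounded").fun

# 1) Sample the inner problem over designs and loads to build surrogate training data.
designs = rng.uniform([50, 50, 20], [500, 1000, 120], size=(3000, 3))   # pv, battery, load
targets = np.array([inner_optimal_operation(*row) for row in designs])

# 2) Train an ANN surrogate of the inner loop (the paper trains it on NLEA-generated data).
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(designs, targets)

# 3) The outer sizing search now calls the cheap surrogate instead of the inner optimizer.
def annual_cost(pv_kw, battery_kwh, loads):
    X = np.column_stack([np.full_like(loads, pv_kw), np.full_like(loads, battery_kwh), loads])
    capex = 800 * pv_kw + 300 * battery_kwh              # hypothetical capital-cost terms
    return capex + surrogate.predict(X).sum()

loads = rng.uniform(20, 120, 8760)                        # one year of hourly loads
grid = [(pv, b) for pv in (100, 200, 300, 400) for b in (100, 300, 600)]
best = min(grid, key=lambda d: annual_cost(*d, loads))
print("selected design (pv_kw, battery_kwh):", best)
```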

34 pages, 10596 KiB  
Article
Scalable Container-Based Time Synchronization for Smart Grid Data Center Networks
by Kennedy Chinedu Okafor, Wisdom Onyema Okafor, Omowunmi Mary Longe, Ikechukwu Ignatius Ayogu, Kelvin Anoh and Bamidele Adebisi
Technologies 2025, 13(3), 105; https://doi.org/10.3390/technologies13030105 - 5 Mar 2025
Cited by 2 | Viewed by 1798
Abstract
The integration of edge-to-cloud infrastructures in smart grid (SG) data center networks requires a scalable, efficient, and secure architecture. Traditional server-based SG data center architectures face high computational loads and delays. To address this problem, a lightweight data center network (DCN) with low-cost, fast-converging optimization is required. This paper introduces a container-based time synchronization model (CTSM) within a spine–leaf virtual private cloud (SL-VPC), deployed via an AWS CloudFormation stack as a practical use case. The CTSM optimizes resource utilization, security, and traffic management while reducing computational overhead. The model was benchmarked against five DCN topologies—DCell, Mesh, Skywalk, Dahu, and Ficonn—using Mininet simulations and a software-defined CloudFormation stack on an Amazon EC2 HPC testbed under realistic SG traffic patterns. The results show that CTSM achieved near-100% reliability, with the highest received energy data (29.87%), lowest packetization delay (13.11%), and highest traffic availability (70.85%). Stateless container engines improved resource allocation, reducing administrative overhead and enhancing grid stability. Software-defined networking (SDN)-driven adaptive routing and load balancing further optimized performance under dynamic demand conditions. These findings position CTSM-SL-VPC as a secure, scalable, and efficient solution for next-generation smart grid automation. Full article
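As a rough companion to the Mininet benchmarking mentioned in the abstract, the sketch below defines a generic spine-leaf topology with Mininet's Topo API; the switch and host counts are assumptions, and none of the CTSM, CloudFormation, or EC2 components are modeled.

```python
from mininet.topo import Topo

class SpineLeaf(Topo):
    """Generic spine-leaf fabric: every leaf switch links to every spine switch,
    and hosts (e.g., containerized SG data-center services) hang off the leaves."""
    def build(self, spines=2, leaves=4, hosts_per_leaf=2):
        spine_switches = [self.addSwitch(f"s{i + 1}") for i in range(spines)]
        leaf_switches = [self.addSwitch(f"l{i + 1}") for i in range(leaves)]
        for leaf in leaf_switches:
            for spine in spine_switches:
                self.addLink(leaf, spine)          # full leaf-to-spine mesh
        host_id = 1
        for leaf in leaf_switches:
            for _ in range(hosts_per_leaf):
                host = self.addHost(f"h{host_id}")
                self.addLink(host, leaf)
                host_id += 1

# Registering the topology lets Mininet load it, e.g.:
#   sudo mn --custom spine_leaf.py --topo spineleaf
topos = {"spineleaf": SpineLeaf}
```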

18 pages, 831 KiB  
Article
Bipartite Fault-Tolerant Consensus Control for Multi-Agent Systems with a Leader of Unknown Input Under a Signed Digraph
by Anning Liu, Wenli Zhang, Dongdong Yue, Chuang Chen and Jiantao Shi
Sensors 2025, 25(5), 1556; https://doi.org/10.3390/s25051556 - 3 Mar 2025
Viewed by 933
Abstract
This paper addresses the bipartite consensus problem of signed directed multi-agent systems (MASs) subject to actuator faults. This problem plays a crucial role in various real-world systems where agents exhibit both cooperative and competitive interactions, such as autonomous vehicle fleets, smart grids, and robotic networks. To address this, unlike most existing works, an intermediate observer is designed using newly introduced intermediate variables, enabling simultaneous estimation of both agent states and faults. Furthermore, a distributed adaptive observer is developed to help followers estimate the leader’s state, overcoming limitations of prior bounded-input assumptions. Finally, simulation results demonstrate the method’s effectiveness, showing that consensus tracking errors converge to zero under various fault scenarios and input uncertainties. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
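The standard notions underlying the problem statement, the signed Laplacian and bipartite consensus over a structurally balanced signed digraph, are recalled below as a brief math sketch; the paper's intermediate-observer and adaptive-observer constructions build on these but are not reproduced here.

```latex
% Signed digraph with adjacency A = [a_{ij}]; the signed Laplacian uses absolute
% values on the diagonal:
\mathcal{L} = \mathcal{D} - \mathcal{A}, \qquad \mathcal{D} = \operatorname{diag}\Big(\textstyle\sum_{j}|a_{ij}|\Big).
% Bipartite consensus: agents agree in modulus but split in sign according to a
% structurally balanced bipartition with gauge signs s_i \in \{+1,-1\}:
\lim_{t \to \infty}\big(x_i(t) - s_i\, s_j\, x_j(t)\big) = 0 \quad \forall\, i,j,
% and, with a leader state x_0, bipartite tracking requires x_i(t) \to s_i\, x_0(t).
```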

6 pages, 139 KiB  
Editorial
Bridging the Gaps: Future Directions for Blockchain and IoT Integration in Smart Grids
by Joao C. Ferreira
Energies 2025, 18(4), 772; https://doi.org/10.3390/en18040772 - 7 Feb 2025
Cited by 1 | Viewed by 971
Abstract
The convergence of blockchain and Internet of Things technologies holds immense promise for revolutionizing the smart grid landscape [...] Full article
23 pages, 768 KiB  
Article
Robust Momentum-Enhanced Non-Negative Tensor Factorization for Accurate Reconstruction of Incomplete Power Consumption Data
by Dengyu Shi and Tangtang Xie
Electronics 2025, 14(2), 351; https://doi.org/10.3390/electronics14020351 - 17 Jan 2025
Viewed by 1014
Abstract
Power consumption (PC) data are fundamental for optimizing energy use and managing industrial operations. However, with the widespread adoption of data-driven technologies in the energy sector, maintaining the integrity and quality of these data has become a significant challenge. Missing or incomplete data, often caused by equipment failures or communication disruptions, can severely affect the accuracy and reliability of data analyses, ultimately leading to poor decision-making and increased operational costs. To address this, we propose a Robust Momentum-Enhanced Non-Negative Tensor Factorization (RMNTF) model, which integrates three key innovations. First, the model utilizes adversarial loss and L2 regularization to enhance its robustness and improve its performance when dealing with incomplete data. Second, a sigmoid function is employed to ensure that the results remain non-negative, aligning with the inherent characteristics of PC data and improving the quality of the analysis. Finally, momentum optimization is applied to accelerate the convergence process, significantly reducing computational time. Experiments conducted on two publicly available PC datasets, with data densities of 6.65% and 4.80%, show that RMNTF outperforms state-of-the-art methods, achieving an average reduction of 16.20% in imputation errors and an average improvement of 68.36% in computational efficiency. These results highlight the model’s effectiveness in handling sparse and incomplete data, ensuring that the reconstructed data can support critical tasks like energy optimization, smart grid maintenance, and predictive analytics. Full article
(This article belongs to the Special Issue Intelligent Data Analysis and Learning)
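A matrix-factorization sketch of the three ingredients named in the abstract (updates restricted to observed entries, sigmoid-reparameterized non-negative factors, and momentum-accelerated gradients) is given below on synthetic data; the adversarial loss term and the full third tensor mode are omitted for brevity, and all sizes and rates are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for sparse power-consumption data: a low-rank non-negative
# matrix (users x time steps) with only ~6% of entries observed.
n_users, n_steps, rank = 60, 200, 4
truth = rng.uniform(0, 1, (n_users, rank)) @ rng.uniform(0, 1, (rank, n_steps)) / rank
mask = rng.uniform(size=truth.shape) < 0.06
Y = np.where(mask, truth, 0.0)

# Factors are kept non-negative through a sigmoid reparameterization and updated
# with momentum gradient descent on observed entries only (plus an L2 term).
A = 0.1 * rng.standard_normal((n_users, rank))
B = 0.1 * rng.standard_normal((rank, n_steps))
vA, vB = np.zeros_like(A), np.zeros_like(B)
lr, beta, lam = 0.05, 0.9, 1e-4

for epoch in range(800):
    W, H = sigmoid(A), sigmoid(B)
    R = mask * (W @ H - Y)                      # residual on observed entries only
    gA = (R @ H.T) * W * (1 - W) + lam * A      # chain rule through the sigmoid
    gB = (W.T @ R) * H * (1 - H) + lam * B
    vA = beta * vA - lr * gA                    # classical momentum updates
    vB = beta * vB - lr * gB
    A += vA
    B += vB

rmse = np.sqrt(np.mean((sigmoid(A) @ sigmoid(B) - truth) ** 2))
print(f"reconstruction RMSE over all entries: {rmse:.3f}")
```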

18 pages, 4015 KiB  
Article
Differentially Private Clustered Federated Load Prediction Based on the Louvain Algorithm
by Tingzhe Pan, Jue Hou, Xin Jin, Chao Li, Xinlei Cai and Xiaodong Zhou
Algorithms 2025, 18(1), 32; https://doi.org/10.3390/a18010032 - 8 Jan 2025
Cited by 1 | Viewed by 757
Abstract
Load forecasting plays a fundamental role in the new type of power system. To address the data heterogeneity and security issues encountered in load forecasting for smart grids, this paper proposes a load-forecasting framework suitable for residential energy users, which allows users to train personalized forecasting models without sharing load data. First, the similarity of user load patterns is calculated under privacy protection. Second, a complex network is constructed, and a federated user clustering method is developed based on the Louvain algorithm, which divides users into multiple clusters based on load pattern similarity. Finally, a personalized and adaptive differentially private federated learning Long Short-Term Memory (LSTM) model for load forecasting is developed. A case study analysis shows that the proposed method can effectively protect user privacy and improve model prediction accuracy when dealing with heterogeneous data. The framework can train load-forecasting models with a fast convergence rate and better prediction performance than current mainstream federated learning algorithms. Full article
(This article belongs to the Special Issue Intelligent Algorithms for High-Penetration New Energy)
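The clustering step can be illustrated with networkx's Louvain implementation on a weighted load-similarity graph, as sketched below; the profiles are synthetic, the similarity threshold is an assumption, and the privacy-preserving similarity computation and differentially private federated LSTM training are not reproduced.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Synthetic daily load profiles for 30 users drawn from three underlying patterns
# (stand-ins for the similarities the paper computes under privacy protection).
base = np.stack([np.roll(np.sin(np.linspace(0, 2 * np.pi, 24)), s) for s in (0, 6, 12)])
profiles = np.vstack([base[i % 3] + 0.2 * rng.standard_normal(24) for i in range(30)])

# Weighted graph: edge weight = cosine similarity between user load profiles,
# keeping only sufficiently similar pairs.
norm = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)
sim = norm @ norm.T
G = nx.Graph()
G.add_nodes_from(range(30))
for i in range(30):
    for j in range(i + 1, 30):
        if sim[i, j] > 0.5:
            G.add_edge(i, j, weight=float(sim[i, j]))

# Louvain community detection partitions the users; each cluster would then train
# its own federated LSTM forecaster in the paper's framework.
clusters = nx.community.louvain_communities(G, weight="weight", seed=0)
print([sorted(c) for c in clusters])
```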
