Search Results (1,020)

Search Parameters:
Keywords = IEEE 1547.4 standard

79 pages, 926 KB  
Systematic Review
Autonomous Forklifts for Warehouse Automation: A Comprehensive Review
by Aditya Dilip Patil and Siavash Farzan
Robotics 2026, 15(2), 30; https://doi.org/10.3390/robotics15020030 - 26 Jan 2026
Abstract
Despite decades of research, autonomous forklifts remain deployed at a small scale (2–50 vehicles), while industrial warehouses require coordinating hundreds of vehicles in environments shared with human workers. This systematic review analyzes forklift-specific autonomous technologies published between 2010 and 2025 across major robotics databases (including IEEE Xplore, ACM, Elsevier, and related venues) to identify deployment barriers. Following the PRISMA guidelines, we systematically selected 122 peer-reviewed papers addressing forklift-specific challenges across eight subsystems: vehicle modeling, localization, planning, control, vision-based manipulation, multi-vehicle coordination, and safety. We synthesized 80 methods through 8 standardized comparison tables with quality assessment based on validation rigor. State-of-the-art approaches demonstrate strong laboratory performance: localization achieving ±1.4 mm accuracy, control enabling sub-centimeter manipulation, planning reducing mission times by 2–55%, vision reaching 98%+ recognition, and safety frameworks cutting rollover risk by 53–59%. However, validation predominantly occurs at laboratory scale, revealing a critical deployment gap. These achievements do not scale to industrial environments due to fleet coordination complexity, payload variability, and unpredictable human behavior. Our contributions include the following: (1) performance rankings with technology selection guidance, (2) systematic gap characterization, and (3) research priorities addressing mixed-fleet coordination, learning-enhanced control, and human-aware safety. This review was not prospectively registered. Full article
20 pages, 733 KB  
Systematic Review
Federated Learning in Healthcare Ethics: A Systematic Review of Privacy-Preserving and Equitable Medical AI
by Bilal Ahmad Mir, Syed Raza Abbas and Seung Won Lee
Healthcare 2026, 14(3), 306; https://doi.org/10.3390/healthcare14030306 - 26 Jan 2026
Abstract
Background/Objectives: Federated learning (FL) offers a way for healthcare institutions to collaboratively train machine learning models without sharing sensitive patient data. This systematic review aims to comprehensively synthesize the ethical dimensions of FL in healthcare, integrating privacy preservation, algorithmic fairness, governance, and equitable access into a unified analytical framework. The application of FL in healthcare between January 2020 and December 2024 is examined, with a focus on ethical issues such as algorithmic fairness, privacy preservation, governance, and equitable access. Methods: Following PRISMA guidelines, six databases (PubMed, IEEE Xplore, Web of Science, Scopus, ACM Digital Library, and arXiv) were searched. The PROSPERO registration is CRD420251274110. Studies were selected if they described FL implementations in healthcare settings and explicitly discussed ethical considerations. Key data extracted included FL architectures, privacy-preserving mechanisms, such as differential privacy, secure multiparty computation, and encryption, as well as fairness metrics, governance models, and clinical application domains. Results: Out of 3047 records, 38 met the inclusion criteria. The most popular applications were found in medical imaging and electronic health records, especially in radiology and oncology. Through thematic analysis, four key ethical themes emerged: algorithmic fairness, which addresses differences between clients and attributes; privacy protection through formal guarantees and cryptographic techniques; governance models, which emphasize accountability, transparency, and stakeholder engagement; and equitable distribution of computing resources for institutions with limited resources. Considerable variation was observed in how fairness and privacy trade-offs were evaluated, and only a few studies reported real-world clinical deployment. Conclusions: FL has significant potential to promote ethical AI in healthcare, but advancement will require the development of common fairness standards, workable governance plans, and systems to guarantee fair benefit sharing. Future studies should develop standardized fairness metrics, implement multi-stakeholder governance frameworks, and prioritize real-world clinical validation beyond proof-of-concept implementations. Full article
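The abstract above summarizes federated learning (FL) with privacy-preserving mechanisms such as differential privacy. As a minimal sketch of the general idea only (not code from the review), the following snippet runs one federated averaging round in which each institution trains locally and shares a clipped, Gaussian-noised update; the client data, model, clipping bound, and noise scale are all illustrative assumptions.

```python
# Illustrative sketch only (not from the review): one federated-averaging round
# in which each hospital trains locally and shares only a clipped, noised update.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, clip=1.0):
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    norm = np.linalg.norm(grad)
    if norm > clip:                      # clip so the added noise yields a DP-style guarantee
        grad = grad * (clip / norm)
    return weights - lr * grad

def federated_round(global_w, clients, noise_std=0.01):
    updates = []
    for X, y in clients:                 # each tuple stays on its own "institution"
        w = local_update(global_w.copy(), X, y)
        delta = w - global_w
        delta += rng.normal(0.0, noise_std, size=delta.shape)  # Gaussian noise on the update
        updates.append(delta)
    return global_w + np.mean(updates, axis=0)   # server only sees aggregated, noised deltas

# toy demo: three clients, five features
clients = [(rng.normal(size=(40, 5)), rng.normal(size=40)) for _ in range(3)]
w = np.zeros(5)
for _ in range(20):
    w = federated_round(w, clients)
print(w)
```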

24 pages, 1420 KB  
Article
Distributed Photovoltaic–Storage Hierarchical Aggregation Method Based on Multi-Source Multi-Scale Data Fusion
by Shaobo Yang, Xuekai Hu, Lei Wang, Guanghui Sun, Min Shi, Zhengji Meng, Zifan Li, Zengze Tu and Jiapeng Li
Electronics 2026, 15(2), 464; https://doi.org/10.3390/electronics15020464 - 21 Jan 2026
Viewed by 42
Abstract
Accurate model aggregation is pivotal for the efficient dispatch and control of massive distributed photovoltaic (PV) and energy storage (ES) resources. However, the lack of unified standards across equipment manufacturers results in inconsistent data formats and resolutions. Furthermore, external disturbances like noise and packet loss exacerbate the problem. The resulting data are massive, multi-source, and heterogeneous, which poses severe challenges to building effective aggregation models. To address these issues, this paper proposes a hierarchical aggregation method based on multi-source multi-scale data fusion. First, a Multi-source Multi-scale Decision Table (Ms-MsDT) model is constructed to establish a unified framework for the flexible storage and representation of heterogeneous PV-ES data. Subsequently, a two-stage fusion framework is developed, combining Information Gain (IG) for global coarse screening and Scale-based Trees (SbT) for local fine-grained selection. This approach achieves adaptive scale optimization, effectively balancing data volume reduction with high-fidelity feature preservation. Finally, a hierarchical aggregation mechanism is introduced, employing the Analytic Hierarchy Process (AHP) and a weight-guided improved K-Means algorithm to perform targeted clustering tailored to the specific control requirements of different voltage levels. Validation on an IEEE-33 node system demonstrates that the proposed method significantly improves data approximation precision and clustering compactness compared to conventional approaches. Full article
(This article belongs to the Section Industrial Electronics)
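The hierarchical aggregation step described above pairs the Analytic Hierarchy Process (AHP) with a weight-guided K-Means. A hedged sketch of that general combination follows, assuming an illustrative 3-indicator comparison matrix and synthetic unit features; it is not the authors' implementation.

```python
# Hedged sketch: derive indicator weights from an AHP pairwise-comparison matrix,
# then bias K-Means toward the heavily weighted indicators by rescaling columns.
import numpy as np
from sklearn.cluster import KMeans

def ahp_weights(pairwise):
    """Principal-eigenvector weights of a reciprocal AHP comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# assumed indicators per PV/ES unit: rated power, available capacity, ramp rate
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
w = ahp_weights(pairwise)

rng = np.random.default_rng(1)
features = rng.random((200, 3))               # one row per distributed PV/ES unit
scaled = features * np.sqrt(w)                # weighted Euclidean distance via column scaling

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
print(w.round(3), np.bincount(labels))
```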

41 pages, 5360 KB  
Article
Jellyfish Search Algorithm-Based Optimization Framework for Techno-Economic Energy Management with Demand Side Management in AC Microgrid
by Vijithra Nedunchezhian, Muthukumar Kandasamy, Renugadevi Thangavel, Wook-Won Kim and Zong Woo Geem
Energies 2026, 19(2), 521; https://doi.org/10.3390/en19020521 - 20 Jan 2026
Viewed by 168
Abstract
The optimal allocation of Photovoltaic (PV) and wind-based renewable energy sources and Battery Energy Storage System (BESS) capacity is an important issue for efficient operation of a microgrid network (MGN). The unpredictability of PV and wind generation needs to be smoothed out by coherent allocation of BESS units so that the load demand can be met. To address these issues, this article proposes efficient Energy Management System (EMS) and Demand Side Management (DSM) approaches for the optimal allocation of PV- and wind-based renewable energy sources and BESS capacity in the MGN. The DSM model helps to modify the peak load demand based on PV and wind generation, available BESS storage, and the utility grid. Based on the Real-Time Market Energy Price (RTMEP) of utility power, the charging/discharging pattern of the BESS and power exchange with the utility grid are scheduled adaptively. On this basis, a Jellyfish Search Algorithm (JSA)-based bi-level optimization model is developed that considers the optimal capacity allocation and power scheduling of PV and wind sources and BESS capacity to satisfy the load demand. The top-level planning model solves the optimal allocation of PV and wind sources with the aim of reducing the total power loss of the MGN. The proposed JSA-based optimization achieved a 24.04% power loss reduction (from 202.69 kW to 153.95 kW) at peak load conditions through optimal PV- and wind-based DG placement and sizing. The bottom-level model focuses on achieving the optimal operational configuration of the MGN through optimal power scheduling of PV, wind, BESS, and the utility grid with DSM-based load proportions, with the aim of minimizing the operating cost. Simulation results on the IEEE 33-node MGN demonstrate that the 20% DSM strategy attains the maximum operational cost savings of €ct 3196.18 (a reduction of 2.80%) over 24 h of operation, with a 46.75% peak-hour grid dependency reduction. Statistical analysis over 50 independent runs confirms the robustness of the JSA over Particle Swarm Optimization (PSO) and the Osprey Optimization Algorithm (OOA), with a standard deviation of only 0.00017 in the fitness function, demonstrating its superior convergence characteristics for the proposed optimization problem. Finally, based on the simulation outcomes of the considered bi-level optimization problem, it can be concluded that the proposed JSA-based optimization approach efficiently optimizes the PV- and wind-based resource allocation along with BESS capacity and helps to operate the MGN efficiently with reduced power loss and operating costs. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)
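For readers unfamiliar with the Jellyfish Search Algorithm (JSA), a simplified single-level sketch of its update rules is shown below on a toy objective. The bi-level siting/scheduling formulation, power-flow model, and DSM coupling from the paper are not reproduced; the population size, coefficients, and sphere objective are assumptions.

```python
# Simplified JSA sketch: time-controlled switch between ocean-current motion and
# swarm (passive/active) motions, applied to a toy objective.
import numpy as np

rng = np.random.default_rng(2)

def jsa(obj, dim=5, pop=30, iters=200, lb=-5.0, ub=5.0, beta=3.0, gamma=0.1):
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(obj, 1, X)
    for t in range(1, iters + 1):
        best = X[np.argmin(fit)]
        c = abs((1 - t / iters) * (2 * rng.random() - 1))   # time-control function
        for i in range(pop):
            if c >= 0.5:                                    # follow the ocean current
                trend = best - beta * rng.random() * X.mean(axis=0)
                Xi = X[i] + rng.random(dim) * trend
            elif rng.random() > 1 - c:                      # passive motion inside the swarm
                Xi = X[i] + gamma * rng.random(dim) * (ub - lb)
            else:                                           # active motion toward a fitter jellyfish
                j = rng.integers(pop)
                d = X[j] - X[i] if fit[j] < fit[i] else X[i] - X[j]
                Xi = X[i] + rng.random(dim) * d
            Xi = np.clip(Xi, lb, ub)
            fi = obj(Xi)
            if fi < fit[i]:
                X[i], fit[i] = Xi, fi
    return X[np.argmin(fit)], fit.min()

best_x, best_f = jsa(lambda x: np.sum(x**2))   # toy stand-in for the power-loss objective
print(best_f)
```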

18 pages, 1005 KB  
Systematic Review
Artificial Intelligence for Predicting Treatment Response in Neovascular Age-Related Macular Degeneration with Anti-VEGF: A Systematic Review and Meta-Analysis
by Wei-Ting Luo and Ting-Wei Wang
Mach. Learn. Knowl. Extr. 2026, 8(1), 23; https://doi.org/10.3390/make8010023 - 19 Jan 2026
Viewed by 174
Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible vision loss; anti-vascular endothelial growth factor (anti-VEGF) therapy is standard care for neovascular AMD (nAMD), yet treatment response varies. We systematically reviewed and meta-analyzed artificial intelligence (AI) and machine learning (ML) models using optical coherence tomography (OCT)-derived information to predict anti-VEGF treatment response in nAMD. PubMed, Embase, Web of Science, and IEEE Xplore were searched from inception to 18 December 2025 for eligible studies reporting threshold-based performance. Two reviewers screened studies, extracted data, and assessed risk of bias using PROBAST+AI; pooled sensitivity and specificity were estimated with a bivariate random-effects model. Seven studies met inclusion criteria, and six were synthesized quantitatively. Pooled sensitivity was 0.79 (95% CI 0.68–0.87), and pooled specificity was 0.83 (95% CI 0.62–0.94), with substantial heterogeneity. Specificity tended to be higher for long-term and functional outcomes than for short-term and anatomical outcomes. Most studies had a high risk of bias, mainly due to limited external validation and incomplete reporting. OCT-based AI models may help stratify treatment response in nAMD, but prospective, multicenter validation and standardized outcome definitions are needed before routine use; current evidence shows no consistent advantage of deep learning over engineered radiomic features. Full article
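The pooled estimates above come from a bivariate random-effects model. As a rough orientation to random-effects pooling only, the sketch below applies the simpler univariate DerSimonian–Laird method to logit-transformed sensitivities; the per-study counts are placeholder values, not data from the included studies.

```python
# Simplified illustration (not the review's analysis): DerSimonian–Laird
# random-effects pooling of logit-transformed proportions.
import numpy as np

def pool_logit(events, totals):
    """Random-effects pooled proportion on the logit scale (DerSimonian–Laird)."""
    p = (events + 0.5) / (totals + 1.0)                     # continuity-corrected proportions
    y = np.log(p / (1 - p))                                 # logit transform
    v = 1 / (events + 0.5) + 1 / (totals - events + 0.5)    # within-study variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                      # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (v + tau2)                                   # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    inv = lambda z: 1 / (1 + np.exp(-z))
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se)), tau2

# placeholder per-study counts: responders correctly flagged / total responders
tp = np.array([40, 55, 30, 62, 48, 71])
n_pos = np.array([50, 70, 40, 80, 60, 90])
print(pool_logit(tp, n_pos))
```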

44 pages, 648 KB  
Systematic Review
A Systematic Review and Energy-Centric Taxonomy of Jamming Attacks and Countermeasures in Wireless Sensor Networks
by Carlos Herrera-Loera, Carolina Del-Valle-Soto, Leonardo J. Valdivia, Javier Vázquez-Castillo and Carlos Mex-Perera
Sensors 2026, 26(2), 579; https://doi.org/10.3390/s26020579 - 15 Jan 2026
Viewed by 171
Abstract
Wireless Sensor Networks (WSNs) operate under strict energy constraints and are therefore highly vulnerable to radio interference, particularly jamming attacks that directly affect communication availability and network lifetime. Although jamming and anti-jamming mechanisms have been extensively studied, energy is frequently treated as a secondary metric, and analyses are often conducted in partial isolation from system assumptions, protocol behavior, and deployment context. This fragmentation limits the interpretability and comparability of reported results. This article presents a systematic literature review (SLR) covering the period from 2004 to 2024, with a specific focus on energy-aware jamming and mitigation strategies in IEEE 802.15.4-based WSNs. To ensure transparency and reproducibility, the literature selection and refinement process is formalized through a mathematical search-and-filtering model. From an initial corpus of 482 publications retrieved from Scopus, 62 peer-reviewed studies were selected and analyzed across multiple dimensions, including jamming modality, affected protocol layers, energy consumption patterns, evaluation assumptions, and deployment scenarios. The review reveals consistent energy trends among constant, random, and reactive jamming strategies, as well as significant variability in the energy overhead introduced by defensive mechanisms at the physical (PHY), Medium Access Control (MAC), and network layers. It further identifies persistent methodological challenges, such as heterogeneous energy metrics, incomplete characterization of jamming intensity, and the limited use of real-hardware testbeds. To address these gaps, the paper introduces an energy-centric taxonomy that explicitly accounts for attacker–defender energy asymmetry, cross-layer interactions, and recurring experimental assumptions, and proposes a minimal set of standardized energy-related performance metrics suitable for IEEE 802.15.4 environments. By synthesizing energy behaviors, trade-offs, and application-specific implications, this review provides a structured foundation for the design and evaluation of resilient, energy-proportional WSNs operating under availability-oriented adversarial interference. Full article
(This article belongs to the Special Issue Security and Privacy in Wireless Sensor Networks (WSNs))

43 pages, 32899 KB  
Article
MEPEOA: A Multi-Strategy Enhanced Preschool Education Optimization Algorithm for Real-World Problems
by Shuping Ni, Chaofang Zhong, Yi Zhu and Meng Wang
Symmetry 2026, 18(1), 154; https://doi.org/10.3390/sym18010154 - 14 Jan 2026
Viewed by 103
Abstract
To address the limitations of the original Preschool Education Optimization Algorithm (PEOA) in population diversity preservation and late-stage convergence accuracy, this paper proposes a Multi-strategy Enhanced Preschool Education Optimization Algorithm (MEPEOA). The proposed algorithm integrates an improved population initialization strategy, a multi-strategy collaborative search mechanism, adaptive regulation, and boundary control to achieve a more effective balance between global exploration and local exploitation. The performance of MEPEOA is comprehensively evaluated on IEEE CEC2017 and CEC2022 benchmark suites and compared with several state-of-the-art metaheuristic algorithms, including EWOA, MPSO, L_SHADE, BKA, ALA, BPBO, and the original PEOA. Experimental results demonstrate that MEPEOA achieves superior optimization accuracy and stability on the majority of benchmark functions. For example, on CEC2017 with 30 dimensions, MEPEOA reduces the average fitness value of multimodal function F9 by approximately 73.6% compared with PEOA and by more than 47% compared with EWOA. In terms of stability, the standard deviation of MEPEOA on function F6 is only 4.13 × 10⁻³, which is several orders of magnitude lower than those of EWOA, MPSO, and BKA, indicating highly consistent convergence behavior. Furthermore, MEPEOA exhibits clear advantages in convergence speed and robustness, achieving the best Friedman mean rank across all tested benchmark suites. In addition, MEPEOA is applied to a two-dimensional grid-based path planning problem, where it consistently generates shorter and more stable collision-free paths than competing algorithms. Overall, the proposed MEPEOA demonstrates strong robustness, fast convergence, and superior stability, making it an effective and extensible solution for complex numerical optimization and practical engineering problems. Full article
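The Friedman mean rank mentioned above is computed by ranking the algorithms on each benchmark function and averaging the ranks. A hedged sketch of that calculation with scipy follows, using a random placeholder results matrix rather than the MEPEOA experiments.

```python
# Sketch of Friedman mean ranks for comparing optimizers across benchmark functions.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(3)
algorithms = ["MEPEOA", "PEOA", "EWOA", "MPSO"]
results = rng.random((30, len(algorithms)))        # rows: functions, cols: mean error per algorithm

ranks = np.apply_along_axis(rankdata, 1, results)  # rank 1 = best (lowest error) on each function
mean_ranks = ranks.mean(axis=0)
stat, p = friedmanchisquare(*results.T)            # test whether rank differences are significant

for name, r in zip(algorithms, mean_ranks):
    print(f"{name}: mean rank {r:.2f}")
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3g}")
```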

17 pages, 2791 KB  
Systematic Review
Artificial Intelligence for Fibrosis Diagnosis in Metabolic-Dysfunction-Associated Steatotic Liver Disease: A Systematic Review
by Neilson Silveira de Souza, Théo Cordeiro Veiga Vitório, Raphael Augusto de Souza, Marcos Antônio Dórea Machado and Helma Pinchemel Cotrim
Diagnostics 2026, 16(2), 261; https://doi.org/10.3390/diagnostics16020261 - 14 Jan 2026
Viewed by 235
Abstract
Background/Objectives: Artificial intelligence (AI) is an emerging technology for diagnosing liver fibrosis in Metabolic-Dysfunction-Associated Steatotic Liver Disease (MASLD), but a comprehensive synthesis of its performance is lacking. This systematic review (SR) aimed to evaluate the current evidence of AI models for diagnosing or staging liver fibrosis in patients with MASLD compared to conventional diagnostic tools. Methods: A comprehensive search was conducted in PubMed, Scopus, Web of Science, ScienceDirect, Embase, LILACS, IEEE Series, and Association for Computing Machinery (ACM). Primary studies applying AI to diagnose fibrosis in adults with MASLD were included. Risk of bias was assessed using the QUADAS-2 tool, and methodological reporting was evaluated according to the MINimum Information for Medical AI Reporting (MINIMAR) guideline. A narrative synthesis was performed, grouping studies by data type (clinical/laboratory vs. imaging) and summarizing diagnostic performance and clinical application. A frequency-based analysis was applied to identify the most recurrent predictive features, and an analysis of the AI architecture and application was reported. The review was registered in PROSPERO (CRD420251035919). Results: Twenty-one studies were included, encompassing 19,221 patients and 5237 images. Across studies, AI models consistently outperformed non-invasive scores such as Fibrosis-4 Index (FIB-4) and NAFLD Fibrosis Score (NFS). The most frequent predictive variables were identified. Despite an overall low risk of bias, methodological transparency and external validation were limited. Conclusions: AI is feasible for the non-invasive diagnosis of liver fibrosis in MASLD, demonstrating superior accuracy to standard clinical scores. Broader clinical application is limited by the lack of external validation and high heterogeneity among the studies. Prospective validation in diverse, multicenter cohorts is essential before AI can be integrated into routine clinical practice. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
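For context on the non-invasive comparator scores mentioned in the abstract, the snippet below computes the standard FIB-4 index; the input units are the conventional ones and the example values are illustrative.

```python
# Standard FIB-4 index, used above as a comparator for the reviewed AI models.
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_1e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

# example: 55-year-old, AST 48 U/L, ALT 60 U/L, platelets 210 x 10^9/L
print(round(fib4(55, 48, 60, 210), 2))   # ~1.62; values above ~2.67 are commonly read as advanced fibrosis
```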

19 pages, 6478 KB  
Article
An Intelligent Dynamic Cluster Partitioning and Regulation Strategy for Distribution Networks
by Keyan Liu, Kaiyuan He, Dongli Jia, Huiyu Zhan, Wanxing Sheng, Zukun Li, Yuxuan Huang, Sijia Hu and Yong Li
Energies 2026, 19(2), 384; https://doi.org/10.3390/en19020384 - 13 Jan 2026
Viewed by 174
Abstract
As distributed generators (DGs) and flexible adjustable loads (FALs) further penetrate distribution networks (DNs), DGs and FALs should be grouped into clusters so that their dispatch can become standard practice in the industry while reducing regulation complexity compared with traditional centralized control frameworks. To mitigate the negative influence of DGs’ and FALs’ spatiotemporal distribution and uncertain output characteristics on dispatch, this paper proposes an intelligent dynamic cluster partitioning strategy for DNs, through which the DN’s resources and loads can be intelligently aggregated, organized, and regulated in a dynamic and optimal way with relatively high implementation efficiency. An environmental model based on the Markov decision process (MDP) technique is first developed for DN cluster partitioning, in which a continuous state space, a discrete action space, and a dispatching performance-oriented reward are designed. Then, a novel random forest Q-learning network (RF-QN) is developed to implement dynamic cluster partitioning by interacting with the proposed environmental model, in which the generalization capability and robustness of the Q-function estimation are improved by combining deep learning and decision trees. Finally, a modified IEEE-33-node system is adopted to verify the effectiveness of the proposed intelligent dynamic cluster partitioning and regulation strategy; the results also indicate that the proposed RF-QN is superior to the traditional deep Q-learning (DQN) model in terms of renewable energy accommodation rate, training efficiency, and partitioning and regulation performance. Full article
(This article belongs to the Special Issue Advanced in Modeling, Analysis and Control of Microgrids)
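The RF-QN described above estimates Q-values with tree ensembles rather than a neural network alone. A minimal fitted-Q-iteration sketch with a scikit-learn random forest is given below to illustrate that general idea; the toy transitions, reward, and hyperparameters are assumptions and not the authors' RF-QN design.

```python
# Minimal fitted-Q-iteration sketch with a random-forest Q-function on a toy buffer.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_actions = 3                                    # e.g. candidate partition adjustments
gamma = 0.95

# replay buffer of (state, action, reward, next_state) from a toy environment
states = rng.normal(size=(500, 6))               # continuous state: nodal voltages / power flows
actions = rng.integers(n_actions, size=500)
rewards = -np.abs(states[:, 0]) + 0.1 * actions  # placeholder dispatch-quality reward
next_states = states + rng.normal(scale=0.1, size=states.shape)

def features(s, a):
    """Concatenate state with a one-hot action so one regressor covers all actions."""
    return np.hstack([s, np.eye(n_actions)[a]])

q = RandomForestRegressor(n_estimators=100, random_state=0)
targets = rewards.copy()                         # first sweep: Q ~ immediate reward
for _ in range(10):                              # fitted Q-iteration sweeps
    q.fit(features(states, actions), targets)
    next_q = np.column_stack([q.predict(features(next_states, np.full(len(states), a)))
                              for a in range(n_actions)])
    targets = rewards + gamma * next_q.max(axis=1)

greedy = np.argmax([q.predict(features(states[:1], np.array([a])))[0] for a in range(n_actions)])
print(greedy)
```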

17 pages, 1585 KB  
Review
Second-Opinion Systems for Rare Diseases: A Scoping Review of Digital Workflows and Networks
by Vinícius Lima, Mariana Mozini and Domingos Alves
Informatics 2026, 13(1), 6; https://doi.org/10.3390/informatics13010006 - 10 Jan 2026
Viewed by 283
Abstract
Introduction: Rare diseases disperse expertise across institutions and borders, making structured second-opinion systems a pragmatic way to concentrate subspecialty knowledge and reduce diagnostic delays. This scoping review mapped the design, governance, adoption, and impacts of such services across implementation scales. Objectives: To describe how second-opinion services for rare diseases are organized and governed, to characterize technological and workflow models, to summarize benefits and barriers, and to identify priority evidence gaps for implementation. Methods: Using a population–concept–context approach, we included peer-reviewed studies describing implemented second-opinion systems for rare diseases and excluded isolated case reports, purely conceptual proposals, and work outside this focus. Searches in August 2025 covered PubMed/MEDLINE, Scopus, Web of Science Core Collection, Cochrane Library, IEEE Xplore, ACM Digital Library, and LILACS without date limits and were restricted to English, Portuguese, or Spanish. Two reviewers screened independently, and the data were charted with a standardized, piloted form. No formal critical appraisal was undertaken, and the synthesis was descriptive. Results: Initiatives were clustered by scale (European networks, national programs, regional systems, international collaborations) and favored hybrid models over asynchronous and synchronous ones. Across settings, services shared reproducible workflows and provided faster access to expertise, quicker decision-making, and more frequent clarification of care plans. These improvements were enabled by transparent governance and dedicated support but were constrained by platform complexity, the effort required to assemble panels, uneven incentives, interoperability gaps, and medico-legal uncertainty. Conclusions: Systematized second-opinion services for rare diseases are feasible and clinically relevant. Progress hinges on usability, aligned incentives, and pragmatic interoperability, advancing from registries toward bidirectional electronic health record connections, alongside prospective evaluations of outcomes, equity, experience, effectiveness, and costs. Full article
(This article belongs to the Section Health Informatics)

25 pages, 1110 KB  
Systematic Review
Impact of CT Intensity and Contrast Variability on Deep-Learning-Based Lung-Nodule Detection: A Systematic Review of Preprocessing and Harmonization Strategies (2020–2025)
by Saba Khan, Muhammad Nouman Noor, Imran Ashraf, Muhammad I. Masud and Mohammed Aman
Diagnostics 2026, 16(2), 201; https://doi.org/10.3390/diagnostics16020201 - 8 Jan 2026
Viewed by 385
Abstract
Background/Objectives: Lung cancer is the leading cause of cancer-related mortality worldwide, and early detection using low-dose computed tomography (LDCT) substantially improves survival outcomes. However, variations in CT acquisition and reconstruction parameters, including Hounsfield Unit (HU) calibration, reconstruction kernels, slice thickness, radiation dose, and scanner vendor, introduce significant intensity and contrast variability that undermines the robustness and generalizability of deep-learning (DL) systems. Methods: This systematic review followed PRISMA 2020 guidelines and searched PubMed, Scopus, IEEE Xplore, Web of Science, ACM Digital Library, and Google Scholar for studies published between 2020 and 2025. A total of 100 eligible studies were included. The review evaluated preprocessing and harmonization strategies aimed at mitigating CT intensity variability, including perceptual contrast enhancement, HU-preserving normalization, physics-informed harmonization, and DL-based reconstruction. Results: Perceptual methods such as contrast-limited adaptive histogram equalization (CLAHE) enhanced nodule conspicuity and reported sensitivity improvements ranging from 10 to 15% but frequently distorted HU values and reduced radiomic reproducibility. HU-preserving approaches, including HU clipping, ComBat harmonization, kernel matching, and physics-informed denoising, were the most effective, reducing cross-scanner performance degradation, specifically in terms of AUC or Dice score loss, to below 8% in several studies while maintaining quantitative integrity. Transformer and hybrid CNN–Transformer architectures demonstrated superior robustness to acquisition variability, with observed AUC values ranging from 0.90 to 0.92 compared with 0.85–0.88 for conventional CNN models. Conclusions: The evidence indicates that standardized HU-faithful preprocessing pipelines, harmonization-aware modeling, and multi-center external validation are essential for developing clinically reliable and vendor-agnostic AI systems for lung-cancer screening. However, the synthesis of results is constrained by the heterogeneous reporting of acquisition parameters across primary studies. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
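To make the contrast between perceptual and HU-preserving preprocessing concrete, the sketch below compares a fixed HU clip-and-rescale (invertible, HU-faithful) with CLAHE via scikit-image; the lung-window limits and clip limit are assumed illustrative values.

```python
# Hedged sketch: HU-preserving windowing versus perceptual CLAHE enhancement.
import numpy as np
from skimage import exposure

def hu_window(ct_slice_hu, lo=-1000.0, hi=400.0):
    """Clip to a fixed HU window and scale linearly to [0, 1] (HU-faithful, invertible)."""
    clipped = np.clip(ct_slice_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

def perceptual_clahe(ct_slice_hu, lo=-1000.0, hi=400.0):
    """CLAHE on the windowed image; boosts local contrast but is not HU-preserving."""
    return exposure.equalize_adapthist(hu_window(ct_slice_hu, lo, hi), clip_limit=0.02)

# toy slice in HU: lung background with a small dense "nodule"
ct = np.full((128, 128), -800.0)
ct[60:68, 60:68] = 40.0
print(hu_window(ct).max(), perceptual_clahe(ct).max())
```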

29 pages, 2664 KB  
Article
Optimization of Active Power Supply in an Electrical Distribution System Through the Optimal Integration of Renewable Energy Sources
by Irving J. Guevara and Alexander Aguila Téllez
Energies 2026, 19(2), 293; https://doi.org/10.3390/en19020293 - 6 Jan 2026
Viewed by 167
Abstract
The sustained growth of electricity demand and the global transition toward low-carbon energy systems have intensified the need for efficient, flexible, and reliable operation of electrical distribution networks. In this context, the coordinated integration of distributed renewable energy resources and demand-side flexibility has emerged as a key strategy to improve technical performance and economic efficiency. This work proposes an integrated optimization framework for active power supply in a radial, distribution-like network through the optimal siting and sizing of photovoltaic (PV) units and wind turbines (WTs), combined with a real-time pricing (RTP)-based demand-side response (DSR) program. The problem is formulated using the branch-flow (DistFlow) model, which explicitly represents voltage drops, branch power flows, and thermal limits in radial feeders. A multiobjective function is defined to jointly minimize annual operating costs, active power losses, and voltage deviations, subject to network operating constraints and inverter capability limits. Uncertainty associated with solar irradiance, wind speed, ambient temperature, load demand, and electricity prices is captured through probabilistic modeling and scenario-based analysis. To solve the resulting nonlinear and constrained optimization problem, an Improved Whale Optimization Algorithm (I-WaOA) is employed. The proposed algorithm enhances the classical Whale Optimization Algorithm by incorporating diversification and feasibility-oriented mechanisms, including Cauchy mutation, Fitness–Distance Balance (FDB), quasi-oppositional-based learning (QOBL), and quadratic penalty functions for constraint handling. These features promote robust convergence toward admissible solutions under stochastic operating conditions. The methodology is validated on a large-scale radialized network derived from the IEEE 118-bus benchmark, enabling a DistFlow-consistent assessment of technical and economic performance under realistic operating scenarios. The results demonstrate that the coordinated integration of PV, WT, and RTP-driven demand response leads to a reduction in feeder losses, an improvement in voltage profiles, and an enhanced voltage stability margin, as quantified through standard voltage deviation and fast voltage stability indices. Overall, the proposed framework provides a practical and scalable tool for supporting planning and operational decisions in modern power distribution networks with high renewable penetration and demand flexibility. Full article
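For reference, the branch-flow (DistFlow) recursions that this kind of formulation builds on take the standard Baran–Wu form below; the symbols follow common convention and are not necessarily the paper's exact notation.

```latex
% Standard DistFlow recursions for a radial feeder (Baran–Wu form), with DG injections included.
\begin{align}
P_{i+1} &= P_i - r_i\,\frac{P_i^2 + Q_i^2}{V_i^2} - p_{i+1}^{\mathrm{load}} + p_{i+1}^{\mathrm{DG}},\\
Q_{i+1} &= Q_i - x_i\,\frac{P_i^2 + Q_i^2}{V_i^2} - q_{i+1}^{\mathrm{load}} + q_{i+1}^{\mathrm{DG}},\\
V_{i+1}^2 &= V_i^2 - 2\,(r_i P_i + x_i Q_i) + \left(r_i^2 + x_i^2\right)\frac{P_i^2 + Q_i^2}{V_i^2},
\end{align}
```

where P_i and Q_i are the active and reactive power flowing out of node i on branch i, r_i and x_i its resistance and reactance, and V_i the node voltage magnitude; branch thermal limits and inverter capability constraints are imposed on these quantities.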

17 pages, 22627 KB  
Article
RMS-Based PLL Stability Limit Estimation Using Maximum Phase Error for Power System Planning in Weak Grids
by Beomju Kim, Jeonghoo Park, Seungchan Oh, Hwanhee Cho and Byongjun Lee
Energies 2026, 19(1), 281; https://doi.org/10.3390/en19010281 - 5 Jan 2026
Viewed by 205
Abstract
The increasing interconnection of inverter-based resources (IBRs) with low short-circuit current has weakened grid strength, making phase-locked loops (PLLs) susceptible to instability due to accumulated phase-angle error under current limiting. This study defines such instability as IBR instability induced by reduced grid robustness and proposes a root-mean-square (RMS) model-based screening method. After fault clearance, the residual q-axis voltage observed by the PLL is treated as a disturbance signal and, using the PLL synchronization equations, is analyzed with a standard second-order formulation. The maximum phase angle at which synchronization fails is defined as θpeak, and the corresponding q-axis voltage is defined as Vq,crit. This value is then mapped to a screening metric Ppeak suitable for RMS-domain assessment. The proposed methodology is applied to the IEEE 39-bus test system: the stability boundary and Ppeak are obtained in Power System Simulator for Engineering (PSSE), and the results are validated through electromagnetic transient (EMT) simulations in PSCAD. The findings demonstrate that the RMS-based screening can effectively identify operating conditions that are prone to PLL instability in weak grids, providing a practical tool for planning and operation with high IBR penetration. This screening method supports power system planning for high-penetration inverter-based resources by identifying weak-grid locations that require EMT studies to ensure secure operation after grid faults. Full article
(This article belongs to the Section F1: Electrical Power System)
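For orientation, the textbook small-signal model behind a "standard second-order formulation" of an SRF-PLL is sketched below; it is a generic derivation, not necessarily the exact formulation used in the paper.

```latex
% Generic SRF-PLL small-signal error dynamics (textbook form, shown for orientation only).
\begin{align}
v_q &\approx V \sin\delta \approx V\,\delta,\\
\ddot{\delta} + k_p V\,\dot{\delta} + k_i V\,\delta &\approx 0
\quad\Longrightarrow\quad
\omega_n = \sqrt{k_i V}, \qquad
\zeta = \frac{k_p}{2}\sqrt{\frac{V}{k_i}},
\end{align}
```

where δ is the PLL phase error, v_q the q-axis voltage the loop regulates to zero, V the terminal-voltage magnitude, and k_p, k_i the PI gains; the largest phase error the loop can recover from under a residual v_q disturbance plays the role of θpeak in the screening metric described above.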

31 pages, 5378 KB  
Article
Composite Fractal Index for Assessing Voltage Resilience in RES-Dominated Smart Distribution Networks
by Plamen Stanchev and Nikolay Hinov
Fractal Fract. 2026, 10(1), 32; https://doi.org/10.3390/fractalfract10010032 - 5 Jan 2026
Viewed by 159
Abstract
This work presents a lightweight and interpretable framework for the early warning of voltage stability degradation in distribution networks, based on fractal and spectral features from flow measurements. We propose a Fast Voltage Stability Index (FVSI), which combines four independent indicators: the Detrended Fluctuation Analysis (DFA) exponent α (a proxy for long-term correlation), the width of the multifractal spectrum Δα, the slope of the spectral density β in the low-frequency range, and the c2 curvature of multiscale structure functions. The indicators are calculated in sliding windows on per-node series of voltage in per unit Vpu and reactive power Q, standardized against an adaptive rolling/first-N baseline, and anomalies over time are accumulated using the Exponentially Weighted Moving Average (EWMA) and Cumulative SUM (CUSUM). A full online pipeline is implemented with robust preprocessing, automatic scaling, thresholding, and visualizations at the system level with an overview and heat maps and at the node level and panel graphs. Based on the standard IEEE 13-node scheme, we demonstrate that the Fractal Voltage Stability Index (FVSI_Fr) responds sensitively before reaching limit states by increasing α, widening Δα, a more negative c2, and increasing β, locating the most vulnerable nodes and intervals. The approach is of low computational complexity, robust to noise and gaps, and compatible with real-time Phasor Measurement Unit (PMU)/Supervisory Control and Data Acquisition (SCADA) streams. The results suggest that FVSI_Fr is a useful operational signal for preventive actions (Q-support, load management/Photovoltaic System (PV)). Future work includes the calibration of weights and thresholds based on data and validation based on long field series. Full article
(This article belongs to the Special Issue Fractional-Order Dynamics and Control in Green Energy Systems)
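The temporal accumulation step described above (standardized indicators tracked with EWMA and CUSUM) can be sketched as follows; the baseline window, smoothing factor, drift allowance, and alarm threshold are illustrative assumptions, and the synthetic series stands in for a per-node DFA exponent.

```python
# Hedged sketch: standardize an indicator against a first-N baseline, smooth with
# EWMA, and accumulate drift with a one-sided CUSUM.
import numpy as np

def ewma(x, lam=0.2):
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = lam * x[t] + (1 - lam) * out[t - 1]       # exponentially weighted moving average
    return out

def cusum_pos(z, k=0.5):
    """One-sided CUSUM of standardized scores; rises when z drifts above the allowance k."""
    s = np.zeros_like(z, dtype=float)
    for t in range(1, len(z)):
        s[t] = max(0.0, s[t - 1] + z[t] - k)
    return s

rng = np.random.default_rng(5)
alpha = np.concatenate([rng.normal(0.8, 0.02, 300),        # normal operation
                        rng.normal(0.95, 0.03, 100)])       # drift toward instability
mu, sd = alpha[:100].mean(), alpha[:100].std()              # "first-N" baseline
z = (alpha - mu) / sd

alarm = cusum_pos(ewma(z)) > 5.0                             # assumed decision threshold
print("first alarm at window:", int(np.argmax(alarm)) if alarm.any() else None)
```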

23 pages, 2090 KB  
Article
Fault Section Localization in Distribution Networks Based on the Integration of Node Classification Matrix and an Improved Binary Particle Swarm Algorithm
by Kui Chen, Wen Xu and Yuheng Yang
Electronics 2026, 15(1), 233; https://doi.org/10.3390/electronics15010233 - 4 Jan 2026
Viewed by 165
Abstract
Single-phase-to-ground faults occur frequently in distribution networks, while traditional localization methods have limitations such as insufficient feature extraction and poor topological adaptability. To address these issues, this paper proposes a two-stage localization method that integrates the Node Classification Matrix (NCM) and an Improved Binary Particle Swarm Optimization (IBPSO) algorithm. The NCM achieves rapid initial localization, and the IBPSO performs error correction. This paper employs an IEEE 33-node standard distribution network model to design simulations covering scenarios with varying fault locations, multiple fault resistances, and different numbers of node distortions for validation. The results demonstrate that the proposed method achieves a fault location accuracy of 96%, which is 19% higher than that of the NCM alone and 2% higher than that of the IBPSO alone. Moreover, it maintains an accuracy of over 95% under scenarios of 1–3 node distortions, topological switching, and high-impedance faults, and is compatible with existing Feeder Terminal Unit (FTU) devices. This method effectively balances localization speed and robustness, providing a reliable solution for the rapid fault isolation of distribution networks. Full article
(This article belongs to the Topic Power System Protection)
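The IBPSO stage builds on the standard binary particle swarm formulation commonly used for fault-section localization. A generic sketch of that baseline follows, minimizing the mismatch between FTU overcurrent flags and the flags implied by a hypothesized fault vector on a toy chain feeder; the paper's NCM stage and its specific IBPSO improvements are not shown, and all settings are assumptions.

```python
# Generic binary-PSO sketch (sigmoid transfer) for fault-section localization on a chain feeder.
import numpy as np

rng = np.random.default_rng(6)
n_sections = 12
true_fault = np.zeros(n_sections, dtype=int)
true_fault[7] = 1                                     # assumed fault in section 8

def expected_flags(fault_vec):
    """On a chain feeder, switch j sees overcurrent iff some section >= j is faulted."""
    return (np.cumsum(fault_vec[::-1])[::-1] > 0).astype(int)

reported = expected_flags(true_fault)                 # ideal FTU reports (distortions could be injected here)

def fitness(fault_vec, w=0.3):
    return np.abs(reported - expected_flags(fault_vec)).sum() + w * fault_vec.sum()

def binary_pso(pop=30, iters=100, w_in=0.7, c1=1.5, c2=1.5):
    X = rng.integers(0, 2, (pop, n_sections))
    V = rng.normal(0, 1, (pop, n_sections))
    pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w_in * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = (rng.random(X.shape) < 1 / (1 + np.exp(-V))).astype(int)   # sigmoid transfer function
        f = np.array([fitness(x) for x in X])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest

print(np.flatnonzero(binary_pso()) + 1)               # should recover section 8
```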
