Search Results (1,647)

Search Parameters:
Keywords = generative adversarial network (GAN)

17 pages, 1788 KiB  
Article
Privacy-Aware Table Data Generation by Adversarial Gradient Boosting Decision Tree
by Shuai Jiang, Naoto Iwata, Sayaka Kamei, Kazi Md. Rokibul Alam and Yasuhiko Morimoto
Mathematics 2025, 13(15), 2509; https://doi.org/10.3390/math13152509 (registering DOI) - 4 Aug 2025
Abstract
Privacy preservation poses significant challenges in third-party data sharing, particularly when handling table data containing personal information such as demographic and behavioral records. Synthetic table data generation has emerged as a promising solution to enable data analysis while mitigating privacy risks. While Generative Adversarial Networks (GANs) are widely used for this purpose, they exhibit limitations in modeling table data due to challenges in handling mixed data types (numerical/categorical), non-Gaussian distributions, and imbalanced variables. To address these limitations, this study proposes a novel adversarial learning framework integrating gradient boosting trees for synthesizing table data, called Adversarial Gradient Boosting Decision Tree (AGBDT). Experimental evaluations on several datasets demonstrate that our method outperforms representative baseline models regarding statistical similarity and machine learning utility. Furthermore, we introduce a privacy-aware adaptation of the framework by incorporating k-anonymization constraints, effectively reducing overfitting to source data while maintaining practical usability. The results validate the balance between data utility and privacy preservation achieved by our approach. Full article
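
The abstract above sketches an adversarial loop in which gradient-boosted trees, rather than neural networks, judge whether synthesized rows look real. As a rough, hedged illustration of that idea only (not the authors' AGBDT algorithm; the proposal distribution, selection rule, and column names below are invented), a tree-based discriminator can be used to keep only the candidate rows it cannot tell apart from real data:

```python
# Hypothetical sketch: adversarial refinement of synthetic table rows with a
# gradient-boosted discriminator. Everything here is illustrative, not AGBDT itself.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def sample_candidates(real: pd.DataFrame, n: int, rng) -> pd.DataFrame:
    # Naive proposal: resample each column independently (ignores cross-column structure).
    return pd.DataFrame({c: rng.choice(real[c].to_numpy(), size=n) for c in real.columns})

def adversarial_refine(real: pd.DataFrame, rounds: int = 5, n_candidates: int = 2000,
                       keep: int = 500, seed: int = 0) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    synthetic = sample_candidates(real, keep, rng)
    for _ in range(rounds):
        candidates = sample_candidates(real, n_candidates, rng)
        X = pd.concat([real, candidates], ignore_index=True)
        y = np.r_[np.ones(len(real)), np.zeros(len(candidates))]   # 1 = real, 0 = synthetic
        disc = GradientBoostingClassifier().fit(X, y)
        realism = disc.predict_proba(candidates)[:, 1]
        synthetic = candidates.iloc[np.argsort(realism)[-keep:]]   # hardest to distinguish
    return synthetic

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    real = pd.DataFrame({"age": rng.integers(18, 90, 1000),
                         "income": rng.lognormal(10, 0.5, 1000)})
    print(adversarial_refine(real).describe())
```

One way to add a k-anonymization constraint in the spirit of the abstract would be to discard or generalize any synthetic row whose quasi-identifier combination occurs fewer than k times.
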
22 pages, 6628 KiB  
Article
MCA-GAN: A Multi-Scale Contextual Attention GAN for Satellite Remote-Sensing Image Dehazing
by Sufen Zhang, Yongcheng Zhang, Zhaofeng Yu, Shaohua Yang, Huifeng Kang and Jingman Xu
Electronics 2025, 14(15), 3099; https://doi.org/10.3390/electronics14153099 - 3 Aug 2025
Abstract
With the growing demand for ecological monitoring and geological exploration, high-quality satellite remote-sensing imagery has become indispensable for accurate information extraction and automated analysis. However, haze reduces image contrast and sharpness, significantly impairing quality. Existing dehazing methods, primarily designed for natural images, struggle with remote-sensing images due to their complex imaging conditions and scale diversity. Given this, we propose a novel Multi-Scale Contextual Attention Generative Adversarial Network (MCA-GAN), specifically designed for satellite image dehazing. Our method integrates multi-scale feature extraction with global contextual guidance to enhance the network’s comprehension of complex remote-sensing scenes and its sensitivity to fine details. MCA-GAN incorporates two self-designed key modules: (1) a Multi-Scale Feature Aggregation Block, which employs multi-directional global pooling and multi-scale convolutional branches to bolster the model’s ability to capture land-cover details across varying spatial scales; (2) a Dynamic Contextual Attention Block, which uses a gated mechanism to fuse three-dimensional attention weights with contextual cues, thereby preserving global structural and chromatic consistency while retaining intricate local textures. Extensive qualitative and quantitative experiments on public benchmarks demonstrate that MCA-GAN outperforms other existing methods in both visual fidelity and objective metrics, offering a robust and practical solution for remote-sensing image dehazing. Full article
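
The two modules named above are described only at block level, so the following PyTorch sketch shows the general pattern they gesture at: parallel multi-kernel branches fused under a gate built from directional (height-wise and width-wise) global pooling. The layer widths, kernel sizes, and residual fusion are assumptions for illustration, not the published MCA-GAN implementation:

```python
# Hedged sketch of a multi-scale aggregation block with directional-pooling context gating.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAggregation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Parallel convolutional branches with different receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.gate = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        multi = self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))
        # Directional global pooling: average along height and width separately,
        # then broadcast back to full maps as coarse context descriptors.
        h_ctx = x.mean(dim=3, keepdim=True).expand_as(x)
        w_ctx = x.mean(dim=2, keepdim=True).expand_as(x)
        context = torch.sigmoid(self.gate(torch.cat([h_ctx, w_ctx], dim=1)))
        return x + multi * context   # residual fusion of local detail and global context

if __name__ == "__main__":
    block = MultiScaleAggregation(32)
    print(block(torch.randn(2, 32, 64, 64)).shape)   # torch.Size([2, 32, 64, 64])
```

In a full dehazing generator, several such blocks would typically be stacked between the encoder and decoder stages.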

25 pages, 6934 KiB  
Article
Feature Constraints Map Generation Models Integrating Generative Adversarial and Diffusion Denoising
by Chenxing Sun, Xixi Fan, Xiechun Lu, Laner Zhou, Junli Zhao, Yuxuan Dong and Zhanlong Chen
Remote Sens. 2025, 17(15), 2683; https://doi.org/10.3390/rs17152683 - 3 Aug 2025
Abstract
The accelerated evolution of remote sensing technology has intensified the demand for real-time tile map generation, highlighting the limitations of conventional mapping approaches that rely on manual cartography and field surveys. To address the critical need for rapid cartographic updates, this study presents a novel multi-stage generative framework that synergistically integrates Generative Adversarial Networks (GANs) with Diffusion Denoising Models (DMs) for high-fidelity map generation from remote sensing imagery. Specifically, our proposed architecture first employs GANs for rapid preliminary map generation, followed by a cascaded diffusion process that progressively refines topological details and spatial accuracy through iterative denoising. Furthermore, we propose a hybrid attention mechanism that strategically combines channel-wise feature recalibration with coordinate-aware spatial modulation, enabling the enhanced discrimination of geographic features under challenging conditions involving edge ambiguity and environmental noise. Quantitative evaluations demonstrate that our method significantly surpasses established baselines in both structural consistency and geometric fidelity. This framework establishes an operational paradigm for automated, rapid-response cartography, demonstrating a particular utility in time-sensitive applications including disaster impact assessment, unmapped terrain documentation, and dynamic environmental surveillance. Full article
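
The framework above is a two-stage cascade: a GAN drafts a coarse tile map quickly, and a diffusion model then refines it by iterative denoising. The toy sketch below only illustrates that control flow; both networks are trivial placeholders, and the update rule is a simplified stand-in for a proper reverse-diffusion step rather than the paper's formulation:

```python
# Toy illustration of a GAN-then-diffusion cascade for image-to-map translation.
# TinyGenerator and TinyDenoiser are placeholders; the refinement loop is schematic.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):            # stage 1: fast coarse map from imagery
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, rs_image):
        return torch.tanh(self.net(rs_image))

class TinyDenoiser(nn.Module):             # stage 2: conditional noise predictor
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(6, 3, 3, padding=1)
    def forward(self, x_t, condition, t):
        return self.net(torch.cat([x_t, condition], dim=1))

@torch.no_grad()
def cascade_generate(rs_image, generator, denoiser, steps=50, start_noise=0.3):
    draft = generator(rs_image)                          # coarse GAN draft
    x = draft + start_noise * torch.randn_like(draft)    # partially re-noise the draft
    for t in reversed(range(steps)):                     # truncated reverse diffusion
        x = x - (start_noise / steps) * denoiser(x, rs_image, t)   # schematic update
    return x

if __name__ == "__main__":
    img = torch.randn(1, 3, 64, 64)
    print(cascade_generate(img, TinyGenerator(), TinyDenoiser()).shape)
```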

17 pages, 1027 KiB  
Article
AI-Driven Security for Blockchain-Based Smart Contracts: A GAN-Assisted Deep Learning Approach to Malware Detection
by Imad Bourian, Lahcen Hassine and Khalid Chougdali
J. Cybersecur. Priv. 2025, 5(3), 53; https://doi.org/10.3390/jcp5030053 (registering DOI) - 1 Aug 2025
Viewed by 177
Abstract
In the modern era, the use of blockchain technology has been growing rapidly, where Ethereum smart contracts play an important role in securing decentralized application systems. However, these smart contracts are also susceptible to a large number of vulnerabilities, which pose significant threats to intelligent systems and IoT applications, leading to data breaches and financial losses. Traditional detection techniques, such as manual analysis and static automated tools, suffer from high false positives and undetected security vulnerabilities. To address these problems, this paper proposes an Artificial Intelligence (AI)-based security framework that integrates Generative Adversarial Network (GAN)-based feature selection and deep learning techniques to classify and detect malware attacks on smart contract execution in the blockchain decentralized network. After an exhaustive pre-processing phase yielding a dataset of 40,000 malware and benign samples, the proposed model is evaluated and compared with related studies on the basis of a number of performance metrics including training accuracy, training loss, and classification metrics (accuracy, precision, recall, and F1-score). Our combined approach achieved a remarkable accuracy of 97.6%, demonstrating its effectiveness in detecting malware and protecting blockchain systems. Full article

15 pages, 2158 KiB  
Article
A Data-Driven Approach for Internal Crack Prediction in Continuous Casting of HSLA Steels Using CTGAN and CatBoost
by Mengying Geng, Haonan Ma, Shuangli Liu, Zhuosuo Zhou, Lei Xing, Yibo Ai and Weidong Zhang
Materials 2025, 18(15), 3599; https://doi.org/10.3390/ma18153599 (registering DOI) - 31 Jul 2025
Viewed by 163
Abstract
Internal crack defects in high-strength low-alloy (HSLA) steels during continuous casting pose significant challenges to downstream processing and product reliability. However, due to the inherent class imbalance in industrial defect datasets, conventional machine learning models often suffer from poor sensitivity to minority class instances. This study proposes a predictive framework that integrates conditional tabular generative adversarial network (CTGAN) for synthetic minority sample generation and CatBoost for classification. A dataset of 733 process records was collected from a continuous caster, and 25 informative features were selected using mutual information. CTGAN was employed to augment the minority class (crack) samples, achieving a balanced training set. Feature distribution analysis and principal component visualization indicated that the synthetic data effectively preserved the statistical structure of the original minority class. Compared with the other machine learning methods, including KNN, SVM, and MLP, CatBoost achieved the highest metrics, with an accuracy of 0.9239, precision of 0.9041, recall of 0.9018, and F1-score of 0.9022. Results show that CTGAN-based augmentation improves classification performance across all models. These findings highlight the effectiveness of GAN-based augmentation for imbalanced industrial data and validate the CTGAN–CatBoost model as a robust solution for online defect prediction in steel manufacturing. Full article
(This article belongs to the Special Issue Latest Developments in Advanced Machining Technologies for Materials)
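
As a rough sketch of the recipe described above (synthesize extra minority-class crack rows with CTGAN, rebalance the training set, then fit CatBoost), the snippet below uses the open-source `ctgan` and `catboost` packages. The label name, epoch count, and split are placeholders, the paper's mutual-information feature selection is omitted, and the exact `CTGAN` constructor arguments are assumptions about the current library API:

```python
# Hedged sketch: CTGAN oversampling of the minority (crack) class followed by CatBoost.
# Not the paper's exact pipeline; names, epochs, and split are illustrative.
import pandas as pd
from catboost import CatBoostClassifier
from ctgan import CTGAN
from sklearn.model_selection import train_test_split

def augment_minority(train: pd.DataFrame, label: str, minority, epochs: int = 300) -> pd.DataFrame:
    minority_rows = train[train[label] == minority].drop(columns=[label])
    gan = CTGAN(epochs=epochs)                 # assumes the current ctgan package API
    gan.fit(minority_rows)                     # learn the minority-class feature distribution
    deficit = int((train[label] != minority).sum() - (train[label] == minority).sum())
    synthetic = gan.sample(max(deficit, 0))
    synthetic[label] = minority
    return pd.concat([train, synthetic], ignore_index=True)

def train_crack_model(df: pd.DataFrame, label: str = "crack"):
    train, test = train_test_split(df, test_size=0.2, stratify=df[label], random_state=0)
    balanced = augment_minority(train, label, minority=1)
    model = CatBoostClassifier(verbose=False, random_seed=0)
    model.fit(balanced.drop(columns=[label]), balanced[label])
    return model, model.score(test.drop(columns=[label]), test[label])
```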

24 pages, 1537 KiB  
Article
Privacy-Aware Hierarchical Federated Learning in Healthcare: Integrating Differential Privacy and Secure Multi-Party Computation
by Jatinder Pal Singh, Aqsa Aqsa, Imran Ghani, Raj Sonani and Vijay Govindarajan
Future Internet 2025, 17(8), 345; https://doi.org/10.3390/fi17080345 - 31 Jul 2025
Viewed by 198
Abstract
The development of big data analytics in healthcare has created a demand for privacy-conscious and scalable machine learning algorithms that can allow the use of patient information across different healthcare organizations. In this study, the difficulties that come with traditional federated learning frameworks in healthcare sectors, such as scalability, computational effectiveness, and preserving patient privacy for numerous healthcare systems, are discussed. In this work, a new conceptual model known as Hierarchical Federated Learning (HFL) for large, integrated healthcare organizations that include several institutions is proposed. The first level of aggregation forms regional centers where local updates are first collected and then sent to the second level of aggregation to form the global update, thus reducing the message-passing traffic and improving the scalability of the HFL architecture. Furthermore, the HFL framework leveraged more robust privacy characteristics such as Local Differential Privacy (LDP), Gaussian Differential Privacy (GDP), Secure Multi-Party Computation (SMPC) and Homomorphic Encryption (HE). In addition, a Novel Aggregated Gradient Perturbation Mechanism is presented to alleviate noise in model updates and maintain privacy and utility. The performance of the proposed HFL framework is evaluated on real-life healthcare datasets and an artificial dataset created using Generative Adversarial Networks (GANs), showing that the proposed HFL framework is better than other methods. Our approach provided an accuracy of around 97% and 30% less privacy leakage compared to the existing models of FLBM-IoT and PPFLB. The proposed HFL approach can help to find the optimal balance between privacy and model performance, which is crucial for healthcare applications and scalable and secure solutions. Full article
(This article belongs to the Special Issue Security and Privacy in AI-Powered Systems)
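
The two-level aggregation and the noise-based privacy mechanisms lend themselves to a compact sketch. The NumPy snippet below shows a hierarchical FedAvg skeleton with a clipped, Gaussian-noised client update; the clipping bound, noise scale, and region layout are invented for illustration, and the paper's SMPC, homomorphic encryption, and aggregated gradient perturbation mechanism are not reproduced:

```python
# Minimal sketch of two-level (hierarchical) federated averaging with a Gaussian
# mechanism on client updates. All constants are illustrative assumptions.
import numpy as np

def dp_client_update(weights, local_grad, clip=1.0, sigma=0.5, lr=0.1, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(local_grad)
    clipped = local_grad * min(1.0, clip / (norm + 1e-12))        # bound each client's influence
    noisy = clipped + rng.normal(0.0, sigma * clip, size=clipped.shape)  # Gaussian noise
    return weights - lr * noisy

def regional_aggregate(client_models):           # level 1: hospitals -> regional center
    return np.mean(client_models, axis=0)

def global_aggregate(regional_models):           # level 2: regional centers -> global model
    return np.mean(regional_models, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    global_w = np.zeros(10)
    regions = [[dp_client_update(global_w, rng.normal(size=10), rng=rng) for _ in range(5)]
               for _ in range(3)]                # 3 regions x 5 hospitals
    global_w = global_aggregate([regional_aggregate(r) for r in regions])
    print(global_w.round(3))
```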

31 pages, 2007 KiB  
Review
Artificial Intelligence-Driven Strategies for Targeted Delivery and Enhanced Stability of RNA-Based Lipid Nanoparticle Cancer Vaccines
by Ripesh Bhujel, Viktoria Enkmann, Hannes Burgstaller and Ravi Maharjan
Pharmaceutics 2025, 17(8), 992; https://doi.org/10.3390/pharmaceutics17080992 - 30 Jul 2025
Viewed by 539
Abstract
The convergence of artificial intelligence (AI) and nanomedicine has transformed cancer vaccine development, particularly in optimizing RNA-loaded lipid nanoparticles (LNPs). Stability and targeted delivery are major obstacles to the clinical translation of promising RNA-LNP vaccines for cancer immunotherapy. This systematic review analyzes AI's impact on LNP engineering through machine learning-driven predictive models, generative adversarial networks (GANs) for novel lipid design, and neural network-enhanced biodistribution prediction. AI reduces the therapeutic development timeline through accelerated virtual screening of millions of lipid combinations, compared to conventional high-throughput screening. Furthermore, AI-optimized LNPs demonstrate improved tumor targeting. GAN-generated lipids show structural novelty while maintaining higher encapsulation efficiency; graph neural networks predict RNA-LNP binding affinity with high accuracy vs. experimental data; digital twins reduce lyophilization optimization from years to months; and federated learning models enable multi-institutional data sharing. We propose a framework to address key technical challenges: training data quality (min. 15,000 lipid structures), model interpretability (SHAP > 0.65), and regulatory compliance (21 CFR Part 11). AI integration reduces manufacturing costs and makes personalized cancer vaccines affordable. Future directions should prioritize quantum machine learning for stability prediction and edge computing for real-time formulation modifications. Full article

17 pages, 4324 KiB  
Article
Anomaly Detection on Laminated Composite Plate Using Self-Attention Autoencoder and Gaussian Mixture Model
by Olivier Munyaneza and Jung Woo Sohn
Mathematics 2025, 13(15), 2445; https://doi.org/10.3390/math13152445 - 29 Jul 2025
Viewed by 160
Abstract
Composite laminates are widely used in aerospace, automotive, construction, and luxury industries, owing to their superior mechanical properties and design flexibility. However, detecting manufacturing defects and in-service damage remains a critical challenge for structural safety. While traditional unsupervised machine learning methods have been used in structural health monitoring (SHM), their high false positive rates limit their reliability in real-world applications. This issue largely stems from their limited ability to capture small temporal variations in Lamb wave signals and their dependence on shallow architectures that struggle with complex signal distributions, causing the misclassification of damaged signals as healthy data. To address this, we propose an unsupervised anomaly detection framework that integrates a self-attention autoencoder with a Gaussian mixture model (SAE-GMM). The model is trained solely on healthy Lamb wave signals, including high-quality synthetic data generated via a generative adversarial network (GAN). Damage is detected through reconstruction errors and probabilistic clustering in the latent space. The self-attention mechanism enhances feature representation by capturing subtle temporal dependencies, while the GMM enables robust separation among signals. Experimental results demonstrated that the proposed model (SAE-GMM) achieves high detection accuracy, a low false positive rate, and strong generalization under varying noise conditions, outperforming traditional and deep learning baselines. Full article
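
The detection logic above (train only on healthy signals, then flag anomalies by combining reconstruction error with a Gaussian-mixture density in the latent space) can be sketched briefly. The plain MLP autoencoder below stands in for the paper's self-attention autoencoder, and the scoring rule, sizes, and epochs are assumptions:

```python
# Hedged sketch: autoencoder reconstruction error plus latent-space GMM density as an
# anomaly score, trained on healthy signals only. Not the paper's SAE-GMM architecture.
import numpy as np
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class AE(nn.Module):
    def __init__(self, n: int = 256, z: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, z))
        self.dec = nn.Sequential(nn.Linear(z, 64), nn.ReLU(), nn.Linear(64, n))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def fit_healthy(signals: np.ndarray, epochs: int = 200):
    x = torch.tensor(signals, dtype=torch.float32)
    model = AE(n=signals.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, _ = model(x)
        loss = nn.functional.mse_loss(recon, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        _, z = model(x)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(z.numpy())
    return model, gmm

def anomaly_score(model, gmm, signals: np.ndarray) -> np.ndarray:
    x = torch.tensor(signals, dtype=torch.float32)
    with torch.no_grad():
        recon, z = model(x)
    rec_err = ((recon - x) ** 2).mean(dim=1).numpy()
    return rec_err - gmm.score_samples(z.numpy())   # higher score = more likely damage

if __name__ == "__main__":
    healthy = np.random.default_rng(0).normal(size=(200, 256))
    model, gmm = fit_healthy(healthy, epochs=50)
    print(anomaly_score(model, gmm, healthy)[:5])
```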

52 pages, 3733 KiB  
Article
A Hybrid Deep Reinforcement Learning and Metaheuristic Framework for Heritage Tourism Route Optimization in Warin Chamrap’s Old Town
by Rapeepan Pitakaso, Thanatkij Srichok, Surajet Khonjun, Natthapong Nanthasamroeng, Arunrat Sawettham, Paweena Khampukka, Sairoong Dinkoksung, Kanya Jungvimut, Ganokgarn Jirasirilerd, Chawapot Supasarn, Pornpimol Mongkhonngam and Yong Boonarree
Heritage 2025, 8(8), 301; https://doi.org/10.3390/heritage8080301 - 28 Jul 2025
Viewed by 488
Abstract
Designing optimal heritage tourism routes in secondary cities involves complex trade-offs between cultural richness, travel time, carbon emissions, spatial coherence, and group satisfaction. This study addresses the Personalized Group Trip Design Problem (PGTDP) under real-world constraints by proposing DRL–IMVO–GAN—a hybrid multi-objective optimization framework that integrates Deep Reinforcement Learning (DRL) for policy-guided initialization, an Improved Multiverse Optimizer (IMVO) for global search, and a Generative Adversarial Network (GAN) for local refinement and solution diversity. The model operates within a digital twin of Warin Chamrap’s old town, leveraging 92 POIs, congestion heatmaps, and behaviorally clustered tourist profiles. The proposed method was benchmarked against seven state-of-the-art techniques, including PSO + DRL, Genetic Algorithm with Multi-Neighborhood Search (Genetic + MNS), Dual-ACO, ALNS-ASP, and others. Results demonstrate that DRL–IMVO–GAN consistently dominates across key metrics. Under equal-objective weighting, it attained the highest heritage score (74.2), shortest travel time (21.3 min), and top satisfaction score (17.5 out of 18), along with the highest hypervolume (0.85) and Pareto Coverage Ratio (0.95). Beyond performance, the framework exhibits strong generalization in zero- and few-shot scenarios, adapting to unseen POIs, modified constraints, and new user profiles without retraining. These findings underscore the method’s robustness, behavioral coherence, and interpretability—positioning it as a scalable, intelligent decision-support tool for sustainable and user-centered cultural tourism planning in secondary cities. Full article
(This article belongs to the Special Issue AI and the Future of Cultural Heritage)

19 pages, 1816 KiB  
Article
Rethinking Infrared and Visible Image Fusion from a Heterogeneous Content Synergistic Perception Perspective
by Minxian Shen, Gongrui Huang, Mingye Ju and Kai-Kuang Ma
Sensors 2025, 25(15), 4658; https://doi.org/10.3390/s25154658 - 27 Jul 2025
Viewed by 255
Abstract
Infrared and visible image fusion (IVIF) endeavors to amalgamate the thermal radiation characteristics from infrared images with the fine-grained texture details from visible images, aiming to produce fused outputs that are more robust and information-rich. Among the existing methodologies, those based on generative adversarial networks (GANs) have demonstrated considerable promise. However, such approaches are frequently constrained by their reliance on homogeneous discriminators possessing identical architectures, a limitation that can precipitate the emergence of undesirable artifacts in the resultant fused images. To surmount this challenge, this paper introduces HCSPNet, a novel GAN-based framework. HCSPNet distinctively incorporates heterogeneous dual discriminators, meticulously engineered for the fusion of disparate source images inherent in the IVIF task. This architectural design ensures the steadfast preservation of critical information from the source inputs, even when faced with scenarios of image degradation. Specifically, the two structurally distinct discriminators within HCSPNet are augmented with adaptive salient information distillation (ASID) modules, each uniquely structured to align with the intrinsic properties of infrared and visible images. This mechanism impels the discriminators to concentrate on pivotal components during their assessment of whether the fused image has proficiently inherited significant information from the source modalities—namely, the salient thermal signatures from infrared imagery and the detailed textural content from visible imagery—thereby markedly diminishing the occurrence of unwanted artifacts. Comprehensive experimentation conducted across multiple publicly available datasets substantiates the preeminence and generalization capabilities of HCSPNet, underscoring its significant potential for practical deployment. Additionally, we also prove that our proposed heterogeneous dual discriminators can serve as a plug-and-play structure to improve the performance of existing GAN-based methods. Full article
(This article belongs to the Section Sensing and Imaging)
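
The core architectural claim above (two structurally different discriminators, each matched to one source modality) can be illustrated with a small PyTorch sketch. The depths, kernel sizes, and loss below are assumptions chosen to show the heterogeneity; the paper's ASID modules are omitted entirely:

```python
# Hedged sketch of heterogeneous dual discriminators for infrared/visible fusion:
# two structurally distinct critics each compare the fused image against one modality.
import torch
import torch.nn as nn

def conv_stack(widths, k):
    layers, c_in = [], 1
    for c_out in widths:
        layers += [nn.Conv2d(c_in, c_out, k, stride=2, padding=k // 2), nn.LeakyReLU(0.2)]
        c_in = c_out
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c_in, 1)]
    return nn.Sequential(*layers)

class HeteroDiscriminators(nn.Module):
    def __init__(self):
        super().__init__()
        self.d_ir = conv_stack([16, 32], k=7)            # shallow, wide kernels: salient blobs
        self.d_vis = conv_stack([16, 32, 64, 64], k=3)   # deeper, small kernels: fine texture

    def adversarial_loss(self, fused, ir, vis):
        bce = nn.functional.binary_cross_entropy_with_logits
        def critic_loss(d, real_img, fake_img):
            real_logit, fake_logit = d(real_img), d(fake_img)
            return (bce(real_logit, torch.ones_like(real_logit)) +
                    bce(fake_logit, torch.zeros_like(fake_logit)))
        # Each critic judges whether the fused output preserves "its" modality.
        return critic_loss(self.d_ir, ir, fused) + critic_loss(self.d_vis, vis, fused)

if __name__ == "__main__":
    d = HeteroDiscriminators()
    x = torch.randn(2, 1, 128, 128)
    print(d.adversarial_loss(x, x, x).item())
```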

47 pages, 18189 KiB  
Article
Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures
by Zineb Sordo, Eric Chagnon, Zixi Hu, Jeffrey J. Donatelli, Peter Andeer, Peter S. Nico, Trent Northen and Daniela Ushizima
J. Imaging 2025, 11(8), 252; https://doi.org/10.3390/jimaging11080252 - 26 Jul 2025
Viewed by 522
Abstract
Generative AI (genAI) has emerged as a powerful tool for synthesizing diverse and complex image data, offering new possibilities for scientific imaging applications. This review presents a comprehensive comparative analysis of leading generative architectures, ranging from Variational Autoencoders (VAEs) to Generative Adversarial Networks (GANs) on through to Diffusion Models, in the context of scientific image synthesis. We examine each model’s foundational principles, recent architectural advancements, and practical trade-offs. Our evaluation, conducted on domain-specific datasets including microCT scans of rocks and composite fibers, as well as high-resolution images of plant roots, integrates both quantitative metrics (SSIM, LPIPS, FID, CLIPScore) and expert-driven qualitative assessments. Results show that GANs, particularly StyleGAN, produce images with high perceptual quality and structural coherence. Diffusion-based models for inpainting and image variation, such as DALL-E 2, delivered high realism and semantic alignment but generally struggled in balancing visual fidelity with scientific accuracy. Importantly, our findings reveal limitations of standard quantitative metrics in capturing scientific relevance, underscoring the need for domain-expert validation. We conclude by discussing key challenges such as model interpretability, computational cost, and verification protocols, and discuss future directions where generative AI can drive innovation in data augmentation, simulation, and hypothesis generation in scientific research. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
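
Two of the quantitative metrics named above can be computed with a few lines. The sketch below uses scikit-image's SSIM and implements the Fréchet distance that underlies FID on arbitrary feature embeddings; note that true FID is defined on Inception-v3 pool features, so feeding in other embeddings is only an illustrative approximation, and LPIPS and CLIPScore would come from their respective packages:

```python
# Hedged metric helpers: SSIM via scikit-image and a generic Fréchet distance.
import numpy as np
from scipy import linalg
from skimage.metrics import structural_similarity

def ssim_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    return structural_similarity(img_a, img_b, data_range=img_a.max() - img_a.min())

def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):             # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print("SSIM:", round(ssim_score(a, a), 3), round(ssim_score(a, b), 3))
    print("Frechet:", round(frechet_distance(rng.normal(size=(500, 16)),
                                             rng.normal(1.0, 1.0, size=(500, 16))), 3))
```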

35 pages, 1231 KiB  
Review
Toward Intelligent Underwater Acoustic Systems: Systematic Insights into Channel Estimation and Modulation Methods
by Imran A. Tasadduq and Muhammad Rashid
Electronics 2025, 14(15), 2953; https://doi.org/10.3390/electronics14152953 - 24 Jul 2025
Viewed by 298
Abstract
Underwater acoustic (UWA) communication supports many critical applications but still faces several physical-layer signal processing challenges. In response, recent advances in machine learning (ML) and deep learning (DL) offer promising solutions to improve signal detection, modulation adaptability, and classification accuracy. These developments highlight the need for a systematic evaluation to compare various ML/DL models and assess their performance across diverse underwater conditions. However, most existing reviews on ML/DL-based UWA communication focus on isolated approaches rather than integrated system-level perspectives, which limits cross-domain insights and reduces their relevance to practical underwater deployments. Consequently, this systematic literature review (SLR) synthesizes 43 studies (2020–2025) on ML and DL approaches for UWA communication, covering channel estimation, adaptive modulation, and modulation recognition across both single- and multi-carrier systems. The findings reveal that models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) enhance channel estimation performance, achieving error reductions and bit error rate (BER) gains ranging from 10³ to 10⁶. Adaptive modulation techniques incorporating support vector machines (SVMs), CNNs, and reinforcement learning (RL) attain classification accuracies exceeding 98% and throughput improvements of up to 25%. For modulation recognition, architectures like sequence CNNs, residual networks, and hybrid convolutional–recurrent models achieve up to 99.38% accuracy with latency below 10 ms. These performance metrics underscore the viability of ML/DL-based solutions in optimizing physical-layer tasks for real-world UWA deployments. Finally, the SLR identifies key challenges in UWA communication, including high complexity, limited data, fragmented performance metrics, deployment realities, energy constraints, and poor scalability. It also outlines future directions like lightweight models, physics-informed learning, advanced RL strategies, intelligent resource allocation, and robust feature fusion to build reliable and intelligent underwater systems. Full article
(This article belongs to the Section Artificial Intelligence)

22 pages, 1896 KiB  
Article
Physics-Constrained Diffusion-Based Scenario Expansion Method for Power System Transient Stability Assessment
by Wei Dong, Yue Yu, Lebing Zhao, Wen Hua, Ying Yang, Bowen Wang, Jiawen Cao and Changgang Li
Processes 2025, 13(8), 2344; https://doi.org/10.3390/pr13082344 - 23 Jul 2025
Viewed by 226
Abstract
In transient stability assessment (TSA) of power systems, the extreme scarcity of unstable scenario samples often leads to misjudgments of fault risks by assessment models, and this issue is particularly pronounced in new-type power systems with high penetration of renewable energy sources. To address this, this paper proposes a physics-constrained diffusion-based scenario expansion method. It constructs a hierarchical conditional diffusion framework embedded with transient differential equations, combines a spatiotemporal decoupling analysis mechanism to capture grid topological and temporal features, and introduces a transient energy function as a stability boundary constraint to ensure the physical rationality of generated scenarios. Verification on the modified IEEE-39 bus system with a high proportion of new energy sources shows that the proposed method achieves an unstable scenario recognition rate of 98.77%, which is 3.92 and 2.65 percentage points higher than that of the Synthetic Minority Oversampling Technique (SMOTE, 94.85%) and Generative Adversarial Networks (GANs, 96.12%) respectively. The geometric mean achieves 99.33%, significantly enhancing the accuracy and reliability of TSA, and providing sufficient technical support for identifying the dynamic security boundaries of power systems. Full article
(This article belongs to the Section Energy Systems)
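
The two headline figures above, the unstable-scenario recognition rate (recall on the unstable class) and the geometric mean, are standard imbalanced-classification metrics. A small helper for computing them from predictions is sketched below; the label encoding (1 = unstable) is an assumption:

```python
# Helper for imbalanced TSA evaluation: unstable-class recall and geometric mean of
# per-class recalls. The label convention is illustrative.
import numpy as np
from sklearn.metrics import recall_score

def tsa_metrics(y_true, y_pred, unstable_label=1):
    recalls = recall_score(y_true, y_pred, average=None)        # per-class recall
    g_mean = float(np.prod(recalls) ** (1.0 / len(recalls)))    # geometric mean
    unstable_recall = recall_score(y_true, y_pred, labels=[unstable_label], average=None)[0]
    return unstable_recall, g_mean

if __name__ == "__main__":
    y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0])
    y_pred = np.array([0, 0, 1, 1, 1, 1, 0, 0])
    print(tsa_metrics(y_true, y_pred))   # (0.75, 0.75)
```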

23 pages, 4256 KiB  
Article
A GAN-Based Framework with Dynamic Adaptive Attention for Multi-Class Image Segmentation in Autonomous Driving
by Bashir Sheikh Abdullahi Jama and Mehmet Hacibeyoglu
Appl. Sci. 2025, 15(15), 8162; https://doi.org/10.3390/app15158162 - 22 Jul 2025
Viewed by 225
Abstract
Image segmentation is a foundation for autonomous driving frameworks that empower vehicles to explore and navigate their surrounding environment. It provides essential context for decision-making by dividing the image into meaningful regions such as roads, vehicles, pedestrians, and traffic signs. Precise segmentation ensures safe navigation and the avoidance of collisions, while adherence to traffic rules is critical for seamless operation in self-driving cars. The most recent deep learning-based image segmentation models have demonstrated impressive performance in structured environments, yet they often fall short when applied to the complex and unpredictable conditions encountered in autonomous driving. This study proposes an Adaptive Ensemble Attention (AEA) mechanism within a Generative Adversarial Network architecture to deal with dynamic and complex driving conditions. The AEA adaptively integrates self-, spatial, and channel attention and dynamically adjusts each contribution according to the input and its contextual relevance. It does this by allowing the discriminator network of the GAN to evaluate the segmentation mask created by the generator. The discriminator distinguishes real from fake masks by considering a concatenated pair of the original image and its mask. Adversarial training prompts the generator, via the discriminator, to segment the image so that the output aligns with the expected ground truth while remaining realistic. The exchange of information between the generator and discriminator improves the quality of the segmentation. To assess the accuracy of the proposed method, average IoU was calculated on three widely used datasets, BDD100K, Cityscapes, and KITTI, yielding 89.46%, 89.02%, and 88.13%, respectively. These outcomes emphasize the model's effectiveness and consistency. Overall, it achieved a remarkable accuracy of 98.94% and an AUC of 98.4%, indicating strong improvements over state-of-the-art (SOTA) models. Full article
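
The conditioning scheme described above, where the discriminator judges a mask together with the image it belongs to via channel-wise concatenation, is a common conditional-GAN pattern and can be sketched compactly. Channel counts, depth, and the PatchGAN-style output are illustrative; the Adaptive Ensemble Attention module itself is not reproduced here:

```python
# Hedged sketch of an (image, mask)-pair discriminator for adversarial segmentation.
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    def __init__(self, image_ch: int = 3, num_classes: int = 19):
        super().__init__()
        c_in = image_ch + num_classes          # image concatenated with a one-hot/softmax mask
        self.net = nn.Sequential(
            nn.Conv2d(c_in, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),   # PatchGAN-style real/fake score map
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

if __name__ == "__main__":
    d = PairDiscriminator()
    img = torch.randn(1, 3, 256, 256)
    fake_mask = torch.softmax(torch.randn(1, 19, 256, 256), dim=1)     # generator output
    real_mask = torch.nn.functional.one_hot(torch.randint(0, 19, (1, 256, 256)), 19)
    real_mask = real_mask.permute(0, 3, 1, 2).float()                   # ground-truth mask
    print(d(img, fake_mask).shape, d(img, real_mask).shape)
```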

43 pages, 2108 KiB  
Article
FIGS: A Realistic Intrusion-Detection Framework for Highly Imbalanced IoT Environments
by Zeynab Anbiaee, Sajjad Dadkhah and Ali A. Ghorbani
Electronics 2025, 14(14), 2917; https://doi.org/10.3390/electronics14142917 - 21 Jul 2025
Viewed by 376
Abstract
The rapid growth of Internet of Things (IoT) environments has increased security challenges due to heightened exposure to cyber threats and attacks. A key problem is the class imbalance in attack traffic, where critical yet underrepresented attacks are often overlooked by intrusion-detection systems (IDS), thereby compromising reliability. We propose Feature-Importance GAN SMOTE (FIGS), an innovative, realistic intrusion-detection framework designed for IoT environments to address this challenge. Unlike other works that rely only on traditional oversampling methods, FIGS integrates sensitivity-based feature-importance analysis, Generative Adversarial Network (GAN)-based augmentation, a novel imbalance ratio (GIR), and Synthetic Minority Oversampling Technique (SMOTE) for generating high-quality synthetic data for minority classes. FIGS enhanced minority class detection by focusing on the most important features identified by the sensitivity analysis, while minimizing computational overhead and reducing noise during data generation. Evaluations on the CICIoMT2024 and CICIDS2017 datasets demonstrate that FIGS improves detection accuracy and significantly lowers the false negative rate. FIGS achieved a 17% improvement over the baseline model on the CICIoMT2024 dataset while maintaining performance for the majority groups. The results show that FIGS represents a highly effective solution for real-world IoT networks with high detection accuracy across all classes without introducing unnecessary computational overhead. Full article
(This article belongs to the Special Issue Network Security and Cryptography Applications)
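
As a hedged sketch of the general recipe above (rank features by importance, then oversample underrepresented attack classes before training the detector), the snippet below combines a model-based importance probe with SMOTE from imbalanced-learn. The GAN augmentation stage and the paper's GIR ratio are not reproduced, and the feature counts and class layout are placeholders:

```python
# Illustrative feature-importance-guided SMOTE pipeline for imbalanced intrusion data.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

def importance_guided_smote(X: np.ndarray, y: np.ndarray, top_k: int = 20, seed: int = 0):
    probe = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X, y)
    keep = np.argsort(probe.feature_importances_)[-top_k:]          # most informative columns
    X_sel = X[:, keep]
    X_bal, y_bal = SMOTE(random_state=seed).fit_resample(X_sel, y)  # equalize class counts
    return X_bal, y_bal, keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 40))
    y = np.r_[np.zeros(950, dtype=int), np.ones(50, dtype=int)]     # 5% minority "attack"
    X_bal, y_bal, keep = importance_guided_smote(X, y)
    print(X_bal.shape, np.bincount(y_bal))
```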
