Search Results (300)

Search Parameters:
Keywords = CycleGAN

24 pages, 7320 KB  
Review
Next-Gen Nondestructive Testing for Marine Concrete: AI-Enabled Inspection, Prognostics, and Digital Twins
by Taehwi Lee and Min Ook Kim
J. Mar. Sci. Eng. 2025, 13(11), 2062; https://doi.org/10.3390/jmse13112062 - 29 Oct 2025
Viewed by 220
Abstract
Marine concrete structures are continuously exposed to harsh marine environments—salt, waves, and biological fouling—that accelerate corrosion and cracking, increasing maintenance costs. Traditional Non-Destructive Testing (NDT) techniques often fail to detect early damage due to signal attenuation and noise in underwater conditions. This study critically reviews recent advances in Artificial Intelligence-integrated NDT (AI-NDT) technologies for marine concrete, focusing on their quantitative performance improvements and practical applicability. Specifically, a systematic comparison of vision-based and signal-based AI-NDT techniques was carried out across reported field cases. The integration of AI improved detection accuracy by 17–25% on average compared with traditional methods. Vision-based AI models such as YOLOX-DG, CycleGAN, and MSDA increased mean mAP@0.5 by 4%, while signal-based methods using CNN, LSTM, and Random Forest enhanced prediction accuracy by 15–20% on GPR, AE, and ultrasonic data. These results confirm that AI effectively compensates for environmental distortions, corrects noise, and standardizes data interpretation across variable marine conditions. Lastly, the study highlights that AI-enabled NDT not only automates data interpretation but also establishes the foundation for predictive and preventive maintenance frameworks. By linking data acquisition, digital twin-based prediction, and lifecycle monitoring, AI-NDT can transform current reactive maintenance strategies into sustainable, intelligence-driven management for marine infrastructure.

14 pages, 13455 KB  
Article
Enhancing 3D Monocular Object Detection with Style Transfer for Nighttime Data Augmentation
by Alexandre Evain, Firas Jendoubi, Redouane Khemmar, Sofiane Ahmedali and Mathieu Orzalesi
Appl. Sci. 2025, 15(20), 11288; https://doi.org/10.3390/app152011288 - 21 Oct 2025
Viewed by 315
Abstract
Monocular 3D object detection (Mono3D) is essential for autonomous driving and augmented reality, yet its performance degrades significantly at night due to the scarcity of annotated nighttime data. In this paper, we investigate the use of style transfer for nighttime data augmentation and evaluate its effect on individual components of 3D detection. Using CycleGAN, we generated synthetic night images from daytime scenes in the nuScenes dataset and trained a modular Mono3D detector under different configurations. Our results show that training solely on style-transferred images improves certain metrics, such as AP@0.95 (from 0.0299 to 0.0778, a 160% increase) and depth error (11% reduction), compared to daytime-only baselines. However, performance on orientation and dimension estimation deteriorates. When real nighttime data is included, style transfer provides complementary benefits: for cars, depth error decreases from 0.0414 to 0.021, and AP@0.95 remains stable at 0.66; for pedestrians, AP@0.95 improves by 13% (0.297 to 0.336) with a 35% reduction in depth error. Cyclist detection remains unreliable due to limited samples. We conclude that style transfer cannot replace authentic nighttime data, but when combined with it, it reduces false positives and improves depth estimation, leading to more robust detection under low-light conditions. This study highlights both the potential and the limitations of style transfer for augmenting Mono3D training, and it points to future research on more advanced generative models and broader object categories.
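The cycle-consistency constraint at the heart of CycleGAN's day-to-night translation can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the two generators here are stand-in functions rather than CNNs, and the loss is the standard L1 cycle term.

```python
def l1(a, b):
    """Mean absolute difference between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, G, F):
    """L_cyc = ||F(G(x)) - x||_1: an image mapped day -> night by G and
    back night -> day by F should reconstruct the original image."""
    return l1(F(G(x)), x)

# Toy stand-ins for the two generators (the real ones are CNNs):
G = lambda img: [p * 0.5 for p in img]   # "day -> night" darkens pixels
F = lambda img: [p * 2.0 for p in img]   # "night -> day" undoes it
x = [0.2, 0.4, 0.6]
print(cycle_consistency_loss(x, G, F))   # 0.0: perfect reconstruction
```

In the full objective this term is weighted against the adversarial losses of both discriminators; it is what keeps scene content intact while only the style changes.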

19 pages, 5686 KB  
Article
RipenessGAN: Growth Day Embedding-Enhanced GAN for Stage-Wise Jujube Ripeness Data Generation
by Jeon-Seong Kang, Junwon Yoon, Beom-Joon Park, Junyoung Kim, Sung Chul Jee, Ha-Yoon Song and Hyun-Joon Chung
Agronomy 2025, 15(10), 2409; https://doi.org/10.3390/agronomy15102409 - 17 Oct 2025
Viewed by 256
Abstract
RipenessGAN is a novel Generative Adversarial Network (GAN) designed to generate synthetic images across different ripeness stages of jujubes (green fruit, white ripe fruit, semi-red fruit, and fully red fruit), aiming to provide balanced training data for diverse applications beyond classification accuracy. This study addresses the problem of data imbalance by augmenting each ripeness stage using our proposed Growth Day Embedding mechanism, thereby enhancing the performance of downstream classification models. The core innovation of RipenessGAN lies in its ability to capture continuous temporal transitions among discrete ripeness classes by incorporating fine-grained growth day information (0–56 days) in addition to traditional class labels. The experimental results show that RipenessGAN produces synthetic data with higher visual quality and greater diversity compared to CycleGAN. Furthermore, the classification models trained on the enriched dataset exhibit more consistent and accurate performance. We also conducted comprehensive comparisons of RipenessGAN against CycleGAN and class-conditional diffusion models (DDPM) under strictly controlled and fair experimental settings, carefully matching model architectures, computational resources, training conditions, and evaluation metrics. The results indicate that although diffusion models yield highly realistic images and CycleGAN ensures stable cycle-consistent generation, RipenessGAN provides superior practical benefits in training efficiency, temporal controllability, and adaptability for agricultural applications. This research demonstrates the potential of RipenessGAN to mitigate data imbalance in agriculture and highlights its scalability to other crops.
(This article belongs to the Section Precision and Digital Agriculture)
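The growth-day conditioning described above can be illustrated with a small sketch. The abstract does not give the embedding formula, so the sinusoidal encoding and the four-class one-hot below are assumptions chosen for illustration (learned lookup tables would be an equally common choice):

```python
import math

def growth_day_embedding(day, dim=8, max_day=56):
    """Map a growth day in [0, max_day] to a dense vector using a
    sinusoidal positional encoding (hypothetical choice; the paper's
    actual embedding mechanism may be a learned table)."""
    t = day / max_day
    emb = []
    for i in range(dim // 2):
        freq = 10000 ** (-2 * i / dim)
        emb += [math.sin(t * freq), math.cos(t * freq)]
    return emb

def conditioning_vector(day, stage,
                        stages=("green", "white", "semi_red", "red")):
    """Concatenate the discrete class one-hot with the fine-grained
    day embedding as a conditioning input for the generator."""
    one_hot = [1.0 if s == stage else 0.0 for s in stages]
    return one_hot + growth_day_embedding(day)

v = conditioning_vector(14, "white")
print(len(v))  # 12 = 4 class dims + 8 day dims
```

The point of the extra day channel is that two images with the same class label but different growth days get distinct, smoothly varying conditioning vectors, which is what lets the generator interpolate between discrete ripeness stages.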

31 pages, 3812 KB  
Review
Generative Adversarial Networks in Dermatology: A Narrative Review of Current Applications, Challenges, and Future Perspectives
by Rosa Maria Izu-Belloso, Rafael Ibarrola-Altuna and Alex Rodriguez-Alonso
Bioengineering 2025, 12(10), 1113; https://doi.org/10.3390/bioengineering12101113 - 16 Oct 2025
Viewed by 564
Abstract
Generative Adversarial Networks (GANs) have emerged as powerful tools in artificial intelligence (AI) with growing relevance in medical imaging. In dermatology, GANs are revolutionizing image analysis, enabling synthetic image generation, data augmentation, color standardization, and improved diagnostic model training. This narrative review explores the landscape of GAN applications in dermatology, systematically analyzing 27 key studies and identifying 11 main clinical use cases. These range from the synthesis of under-represented skin phenotypes to segmentation, denoising, and super-resolution imaging. The review also examines the commercial implementations of GAN-based solutions relevant to practicing dermatologists. We present a comparative summary of GAN architectures, including DCGAN, cGAN, StyleGAN, CycleGAN, and advanced hybrids. We analyze technical metrics used to evaluate performance—such as Fréchet Inception Distance (FID), SSIM, Inception Score, and Dice Coefficient—and discuss challenges like data imbalance, overfitting, and the lack of clinical validation. Additionally, we review ethical concerns and regulatory limitations. Our findings highlight the transformative potential of GANs in dermatology while emphasizing the need for standardized protocols and rigorous validation. While early results are promising, few models have yet reached real-world clinical integration. The democratization of AI tools and open-access datasets is pivotal to ensure equitable dermatologic care across diverse populations. This review serves as a comprehensive resource for dermatologists, researchers, and developers interested in applying GANs in dermatological practice and research. Future directions include multimodal integration, clinical trials, and explainable GANs to facilitate adoption in daily clinical workflows.
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)

14 pages, 2127 KB  
Article
CycleGAN with Atrous Spatial Pyramid Pooling and Attention-Enhanced MobileNetV4 for Tomato Disease Recognition Under Limited Training Data
by Yueming Jiang, Taizeng Jiang, Chunyan Song and Jian Wang
Appl. Sci. 2025, 15(19), 10790; https://doi.org/10.3390/app151910790 - 7 Oct 2025
Viewed by 335
Abstract
To address the challenges of poor model generalization and suboptimal recognition accuracy stemming from limited and imbalanced sample sizes in tomato leaf disease identification, this study proposes a novel recognition strategy. This approach synergistically combines an enhanced image augmentation method based on generative adversarial networks with a lightweight deep learning model. Initially, an Atrous Spatial Pyramid Pooling (ASPP) module is integrated into the CycleGAN framework. This integration enhances the generator’s capacity to model multi-scale pathological lesion features, thereby significantly improving the diversity and realism of synthesized images. Subsequently, the Convolutional Block Attention Module (CBAM), incorporating both channel and spatial attention mechanisms, is embedded into the MobileNetV4 architecture. This enhancement boosts the model’s ability to focus on critical disease regions. Experimental results demonstrate that the proposed ASPP-CycleGAN significantly outperforms the original CycleGAN across multiple disease image generation tasks. Furthermore, the developed CBAM-MobileNetV4 model achieves a remarkable average recognition accuracy exceeding 97% for common tomato diseases, including early blight, late blight, and mosaic disease, representing a 1.86% improvement over the baseline MobileNetV4. The findings indicate that the proposed method offers exceptional data augmentation capabilities and classification performance under small-sample learning conditions, providing an effective technical foundation for the intelligent identification and control of tomato leaf diseases.
(This article belongs to the Section Agricultural Science and Technology)
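The benefit of ASPP inside the generator is that parallel dilated convolutions observe lesions at several scales with the same 3×3 kernel cost. A quick sketch of the effective receptive field of a 3×3 kernel under different dilation rates (the rates below are the common DeepLab defaults, used for illustration; the paper's exact rates are not stated in the abstract):

```python
def effective_kernel(k, d):
    """Effective size of a k x k convolution with dilation rate d:
    k_eff = k + (k - 1) * (d - 1)."""
    return k + (k - 1) * (d - 1)

# Parallel ASPP branches at increasing dilation rates:
for rate in (1, 6, 12, 18):
    print(rate, effective_kernel(3, rate))  # 3, 13, 25, 37
```

Concatenating the branch outputs gives the generator simultaneous access to fine lesion texture (rate 1) and large-scale lesion context (rates 12–18) without deepening the network.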

14 pages, 1787 KB  
Article
HE-DMDeception: Adversarial Attack Network for 3D Object Detection Based on Human Eye and Deep Learning Model Deception
by Pin Zhang, Yawen Liu, Heng Liu, Yichao Teng, Jiazheng Ni, Zhuansun Xiaobo and Jiajia Wang
Information 2025, 16(10), 867; https://doi.org/10.3390/info16100867 - 7 Oct 2025
Viewed by 364
Abstract
This paper presents HE-DMDeception, a novel adversarial attack network that integrates human visual deception with deep model deception to enhance the security of 3D object detection. Existing patch-based and camouflage methods can mislead deep learning models but struggle to generate visually imperceptible, high-quality textures. Our framework employs a CycleGAN-based camouflage network to generate highly camouflaged background textures, while a dedicated deception module disrupts non-maximum suppression (NMS) and attention mechanisms through optimized constraints that balance attack efficacy and visual fidelity. To overcome the scarcity of annotated vehicle data, an image segmentation module based on the pre-trained Segment Anything (SAM) model is introduced, leveraging a two-stage training strategy combining semi-supervised self-training and supervised fine-tuning. Experimental results show that HE-DMDeception achieved minimum P@0.5 values of 50%, 55%, 20%, 25%, and 25% across the You Only Look Once version 8 (YOLOv8), Real-Time Detection Transformer (RT-DETR), Faster Region-based Convolutional Neural Network (Faster R-CNN), Single Shot MultiBox Detector (SSD), and Mask Region-based Convolutional Neural Network (Mask R-CNN) detection models, while maintaining high visual consistency with the original camouflage. These findings demonstrate the robustness and practicality of HE-DMDeception, offering new insights into adversarial attacks on 3D object detection.
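The deception module above targets non-maximum suppression (NMS), the greedy post-processing step detectors use to collapse overlapping boxes into one. A minimal reference NMS, useful for seeing what such an attack must disrupt (the boxes and scores are toy values, not from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop every remaining
    box that overlaps it above `thresh`, then repeat on the rest."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 and is suppressed
```

An attack on NMS typically perturbs the input so that scores and overlaps shift, causing either many redundant detections to survive or the true detection to be suppressed.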

12 pages, 2558 KB  
Article
Degradation and Damage Effects in GaN HEMTs Induced by Low-Duty-Cycle High-Power Microwave Pulses
by Dong Xing, Hongxia Liu, Mengwei Su, Xingjun Liu and Chang Liu
Micromachines 2025, 16(10), 1137; https://doi.org/10.3390/mi16101137 - 1 Oct 2025
Viewed by 491
Abstract
This study investigates the effects and mechanisms of high-power microwave (HPM) stress on GaN HEMTs. By injecting HPM pulses from the gate into the device and employing techniques such as DC characterization, gate-lag effect analysis, low-frequency noise measurement, and focused ion beam (FIB) cross-sectional inspection, a systematic investigation was conducted of GaN HEMT degradation and failure behaviors under low-duty-cycle, narrow-pulse-width conditions. Experimental results indicate that under relatively low-power HPM stress, the GaN HEMT exhibits only a slight threshold voltage shift and a modest increase in transconductance, attributed to the passivation of donor-like defects near the gate. However, when the injected power exceeds 43 dBm, the electric field beneath the gate triggers avalanche breakdown, forming a leakage path and causing localized heat accumulation, which ultimately leads to permanent device failure. This study reveals the physical failure mechanisms of GaN HEMTs under low-duty-cycle HPM stress and provides important guidance for the reliability design and hardening protection of RF devices.
(This article belongs to the Section D1: Semiconductor Devices)

20 pages, 3260 KB  
Article
Lifetime Prediction of GaN Power Devices Based on COMSOL Simulations and Long Short-Term Memory (LSTM) Networks
by Yunfeng Qiu, Zenghang Zhang and Zehong Li
Electronics 2025, 14(19), 3883; https://doi.org/10.3390/electronics14193883 - 30 Sep 2025
Viewed by 456
Abstract
Gallium nitride (GaN) power devices have attracted extensive attention due to their superior performance in high-frequency and high-power applications. However, the reliability and lifetime prediction of these devices under various operating conditions remain critical challenges. In this study, a hybrid approach combining finite element simulation and deep learning is proposed to predict the lifetime of GaN power devices. COMSOL Multiphysics (V6.3) is employed to simulate the thermal and mechanical stress behavior of GaN devices under different power and frequency conditions, while capturing key degradation indicators such as temperature cycles and stress concentrations. The variation in temperature over time reflects the degradation of the device and also reveals the fatigue damage caused by the long-term accumulation of thermal stress on the chip. LSTM networks perform exceptionally well at extracting features from time series data, effectively capturing both long-term and short-term dependencies. Using the simulation data to establish a connection between chip temperature and service life, the temperature and lifespan data are combined into a dataset, and an LSTM neural network is trained to model the impact of temperature changes over time on device lifetime. The proposed method can produce preliminary lifetime predictions when sufficient experimental data cannot be obtained in a short period of time, and the predictions show a reasonable degree of reliability.
(This article belongs to the Special Issue Microelectronic Devices and Materials)

38 pages, 14848 KB  
Article
Image Sand–Dust Removal Using Reinforced Multiscale Image Pair Training
by Dong-Min Son, Jun-Ru Huang and Sung-Hak Lee
Sensors 2025, 25(19), 5981; https://doi.org/10.3390/s25195981 - 26 Sep 2025
Viewed by 489
Abstract
This study proposes an image-enhancement method to address the challenges of low visibility and color distortion in images captured during yellow sandstorms for an image-sensor-based outdoor surveillance system. The technique combines traditional image processing with deep learning to improve image quality while preserving color consistency during transformation. Conventional methods can partially improve color representation and reduce blurriness in sand–dust environments; however, they are limited in their ability to restore fine details and sharp object boundaries. In contrast, the proposed method incorporates Retinex-based processing into the training phase, enabling enhanced clarity and sharpness in the restored images. The proposed framework comprises three main steps. First, a cycle-consistent generative adversarial network (CycleGAN) is trained with unpaired images to generate synthetically paired data. Second, CycleGAN is retrained using these generated images along with clear images obtained through multiscale image decomposition, allowing the model to transform dust-interfered images into clear ones. Finally, color preservation is achieved by selecting the A and B chrominance channels from the small-scale model to maintain the original color characteristics. The experimental results confirmed that the proposed method effectively restores image color and removes sand–dust-related interference, thereby providing enhanced visual quality under sandstorm conditions. Specifically, it outperformed algorithm-based dust removal methods such as Sand-Dust Image Enhancement (SDIE), Chromatic Variance Consistency Gamma and Correction-Based Dehazing (CVCGCBD), and Rank-One Prior (ROP+), as well as machine learning-based methods including the Fusion strategy and the Two-in-One Low-Visibility Enhancement Network (TOENet), achieving a Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) score of 17.238, which demonstrates improved perceptual quality, and a Local Phase Coherence-Sharpness Index (LPC-SI) value of 0.973, indicating enhanced sharpness. Both metrics showed superior performance compared to conventional methods. When applied to Closed-Circuit Television (CCTV) systems, the proposed method is expected to mitigate the adverse effects of color distortion and image blurring caused by sand–dust, thereby effectively improving visual clarity in practical surveillance applications.

18 pages, 3733 KB  
Article
Dual-Head Pix2Pix Network for Material Decomposition of Conventional CT Projections with Photon-Counting Guidance
by Yanyun Liu, Zhiqiang Li, Yang Wang, Ruitao Chen, Dinghong Duan, Xiaoyi Liu, Xiangyu Liu, Yu Shi, Songlin Li and Shouping Zhu
Sensors 2025, 25(19), 5960; https://doi.org/10.3390/s25195960 - 25 Sep 2025
Viewed by 552
Abstract
Material decomposition in X-ray imaging is essential for enhancing tissue differentiation and reducing the radiation dose, but the clinical adoption of photon-counting detectors (PCDs) is limited by their high cost and technical complexity. To address this, we propose Dual-head Pix2Pix, a PCD-guided deep learning framework that enables simultaneous iodine and bone decomposition from single-energy X-ray projections acquired with conventional energy-integrating detectors. The model was trained and tested on 1440 groups of energy-integrating detector (EID) projections with their corresponding iodine/bone decomposition images. Experimental results demonstrate that the Dual-head Pix2Pix outperforms baseline models. For iodine decomposition, it achieved a mean absolute error (MAE) of 5.30 ± 1.81, representing an ~10% improvement over Pix2Pix (5.92) and a substantial advantage over CycleGAN (10.39). For bone decomposition, the MAE was reduced to 9.55 ± 2.49, an ~6% improvement over Pix2Pix (10.18). Moreover, Dual-head Pix2Pix consistently achieved the highest MS-SSIM, PSNR, and Pearson correlation coefficients across all benchmarks. In addition, we performed a cross-domain validation using projection images acquired from a conventional EID-CT system. The results show that the model successfully achieved the effective separation of iodine and bone in this new domain, demonstrating a strong generalization capability beyond the training distribution. In summary, Dual-head Pix2Pix provides a cost-effective, scalable, and hardware-friendly solution for accurate dual-material decomposition, paving the way for the broader clinical and industrial adoption of material-specific imaging without requiring PCDs.
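The mean absolute error (MAE) used to compare the decomposition methods above is straightforward to compute; here is a minimal sketch over flattened toy pixel values (not the paper's data):

```python
def mae(pred, target):
    """Mean absolute error between a predicted and a reference
    material-decomposition image, both flattened to 1-D lists."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

pred   = [10.0, 22.0, 31.0, 40.0]   # hypothetical iodine-map pixels
target = [12.0, 20.0, 30.0, 44.0]   # hypothetical ground truth
print(mae(pred, target))  # 2.25
```

In the paper's evaluation, MAE is reported per decomposition channel (iodine and bone) and averaged over the test set, alongside the structural metrics MS-SSIM, PSNR, and Pearson correlation.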

16 pages, 8404 KB  
Article
Edge-Enhanced CrackNet for Underwater Crack Detection in Concrete Dams
by Xiaobian Wu, Weibo Zhang, Guangze Shen and Jinbao Sheng
Appl. Sci. 2025, 15(19), 10326; https://doi.org/10.3390/app151910326 - 23 Sep 2025
Viewed by 424
Abstract
Underwater crack detection in dam structures is of significant importance for ensuring structural safety, assessing operational conditions, and preventing potential disasters. Traditional crack detection methods face various limitations when applied to underwater environments, particularly in high-dam underwater environments where image quality is influenced by factors such as water flow disturbances, light diffraction effects, and low contrast, making it difficult for conventional methods to accurately extract crack features. This study proposes a dual-stage underwater crack detection method based on CycleGAN and YOLOv11, called Edge-Enhanced Underwater CrackNet (E2UCN), to overcome the limitations of existing image enhancement methods in retaining crack details and improving detection accuracy. First, underwater concrete crack images were collected using an underwater remotely operated vehicle (ROV), and various complex underwater environments were simulated to construct a test dataset. Then, an improved CycleGAN image style transfer method was used to enhance the underwater images. Unlike conventional GAN-based underwater image enhancement methods that focus on global visual quality, our model specifically constrains edge preservation and high-frequency crack textures, providing a novel solution tailored for crack detection tasks. Subsequently, the YOLOv11 model was employed to perform object detection on the enhanced underwater crack images, effectively extracting crack features and achieving high-precision crack detection. The experimental results show that the proposed method significantly outperforms traditional methods in terms of crack detection accuracy, edge clarity, and adaptability to complex backgrounds, effectively improving underwater crack detection accuracy (precision = 0.995, F1 = 0.99762, mAP@0.5 = 0.995, and mAP@0.5:0.95 = 0.736) and providing a feasible technological solution for intelligent inspection of underwater cracks in high dams.

33 pages, 14767 KB  
Article
Night-to-Day Image Translation with Road Light Attention Training for Traffic Information Detection
by Ye-Jin Lee, Young-Ho Go, Seung-Hwan Lee, Dong-Min Son and Sung-Hak Lee
Mathematics 2025, 13(18), 2998; https://doi.org/10.3390/math13182998 - 16 Sep 2025
Viewed by 689
Abstract
Generative adversarial network (GAN)-based image deep learning methods are useful for improving object visibility in nighttime driving environments, but they often fail to preserve critical road information such as traffic light colors and vehicle lighting. This paper proposes a method to address this by utilizing both unpaired and four-channel paired training modules. The unpaired module performs the primary night-to-day conversion, while the paired module, enhanced with a fourth channel, focuses on preserving road details. Our key contribution is an inverse road light attention (RLA) map, which acts as this fourth channel to explicitly guide the network’s learning. This map also facilitates a final cross-blending process, synthesizing the results from both modules to maximize their respective advantages. Experimental results demonstrate that our approach more accurately preserves lane markings and traffic light colors. Furthermore, quantitative analysis confirms that our method achieves superior performance across eight no-reference image quality metrics compared to existing techniques.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
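The final cross-blending step guided by the RLA map can be sketched as a pixel-wise convex combination of the two modules' outputs. The weighting below is a plausible reading of the method for illustration, not the paper's exact formula:

```python
def cross_blend(unpaired, paired, attention):
    """Pixel-wise blend of the two modules' outputs, guided by an
    attention map a with values in [0, 1] (hypothetical weighting):
    out = a * paired + (1 - a) * unpaired.
    High-attention regions (road lights, signals) take the paired
    module's detail-preserving output; the rest takes the unpaired
    module's global night-to-day conversion."""
    return [a * p + (1 - a) * u
            for u, p, a in zip(unpaired, paired, attention)]

print(cross_blend([0.0, 1.0], [1.0, 0.0], [0.25, 0.75]))  # [0.25, 0.25]
```

The convex form guarantees the blended pixel always stays between the two source values, so neither module's output can be overshot.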

22 pages, 8021 KB  
Article
Advanced Single-Phase Non-Isolated Microinverter with Time-Sharing Maximum Power Point Tracking Control Strategy
by Anees Alhasi, Patrick Chi-Kwong Luk, Khalifa Aliyu Ibrahim and Zhenhua Luo
Energies 2025, 18(18), 4925; https://doi.org/10.3390/en18184925 - 16 Sep 2025
Viewed by 602
Abstract
Partial shading poses a significant challenge to photovoltaic (PV) systems by degrading power output and overall efficiency, especially under non-uniform irradiance conditions. This paper proposes an advanced time-sharing maximum power point tracking (MPPT) control strategy implemented through a non-isolated single-phase multi-input microinverter architecture. The system enables individual power regulation for multiple PV modules while preserving their voltage–current (V–I) characteristics and eliminating the need for additional active switches. Building on the concept of distributed MPPT (DMPPT), a flexible full power processing (FPP) framework is introduced, wherein a single MPPT controller sequentially optimizes each module’s output. By leveraging the slow-varying nature of PV characteristics, the proposed algorithm updates control parameters every half-cycle of the AC output, significantly enhancing controller utilization and reducing system complexity and cost. The control strategy is validated through detailed simulations and experimental testing under dynamic partial shading scenarios. Results confirm that the proposed system maximizes power extraction, maintains voltage stability, and offers improved thermal performance, particularly through the integration of GaN power devices. Overall, the method presents a robust, cost-effective, and scalable solution for next-generation PV systems operating in variable environmental conditions.
(This article belongs to the Special Issue Advanced Control Strategies for Photovoltaic Energy Systems)

19 pages, 20856 KB  
Article
A Wavelet-Recalibrated Semi-Supervised Network for Infrared Small Target Detection Under Data Scarcity
by Cheng Jiang, Jingwen Ma, Xinpeng Zhang, Chiming Tong, Zhongqi Ma and Yongshi Jie
Sensors 2025, 25(18), 5677; https://doi.org/10.3390/s25185677 - 11 Sep 2025
Viewed by 447
Abstract
Infrared small target detection has long faced significant challenges due to the extremely small size of targets, low contrast, and the scarcity of annotated data. To tackle these issues, we propose a wavelet-recalibrated semi-supervised network (WRSSNet) that integrates synthetic data augmentation, feature reconstruction, and semi-supervised learning, aiming to fully exploit the potential of unlabeled infrared images under limited supervision. We construct a dataset containing 843 visible-light small target images and employ an improved CycleGAN model to convert them into high-quality pseudo-infrared images, effectively expanding the scale of training data for infrared small target detection. In addition, we design a lightweight wavelet-enhanced channel recalibration and fusion (WECRF) module, which integrates wavelet decomposition with both channel and spatial attention mechanisms. This module enables adaptive reweighting and efficient fusion of multi-scale features, highlighting high-frequency details and weak target responses. Extensive experiments on two public infrared small target datasets, NUAA-SIRST and IRSTD-1K, demonstrate that WRSSNet achieves superior detection accuracy and lower false alarm rates compared to several state-of-the-art methods, while maintaining low computational complexity.

21 pages, 18869 KB  
Article
MambaRA-GAN: Underwater Image Enhancement via Mamba and Intra-Domain Reconstruction Autoencoder
by Jiangyan Wu, Guanghui Zhang and Yugang Fan
J. Mar. Sci. Eng. 2025, 13(9), 1745; https://doi.org/10.3390/jmse13091745 - 10 Sep 2025
Viewed by 433
Abstract
Underwater images frequently suffer from severe quality degradation due to light attenuation and scattering effects, manifesting as color distortion, low contrast, and detail blurring. These issues significantly impair the performance of downstream tasks. Underwater image enhancement (UIE) has therefore become a key technology for addressing underwater image degradation. However, existing data-driven UIE methods typically rely on difficult-to-acquire paired data for training, severely limiting their practical applicability. To overcome this limitation, this study proposes MambaRA-GAN, an unpaired UIE framework built upon a CycleGAN architecture that introduces a novel integration of Mamba and intra-domain reconstruction autoencoders. The key innovations of our work are twofold: (1) We design a generator architecture based on a Triple-Gated Mamba (TG-Mamba) block. This design dynamically allocates feature channels to three parallel branches via learnable weights, achieving optimal fusion of CNN’s local feature extraction capabilities and Mamba’s global modeling capabilities. (2) We construct an intra-domain reconstruction autoencoder, isomorphic to the generator, to quantitatively assess the quality of reconstructed images within the cycle consistency loss, introducing more effective structural information constraints during training. The experimental results demonstrate that the proposed method achieves significant improvements across five objective performance metrics. Visually, it effectively restores natural colors, enhances contrast, and preserves rich detail information, robustly validating its efficacy for the UIE task.
(This article belongs to the Section Ocean Engineering)
