Search Results (168)

Search Parameters:
Keywords = cycle generative adversarial networks

23 pages, 4256 KiB  
Article
A GAN-Based Framework with Dynamic Adaptive Attention for Multi-Class Image Segmentation in Autonomous Driving
by Bashir Sheikh Abdullahi Jama and Mehmet Hacibeyoglu
Appl. Sci. 2025, 15(15), 8162; https://doi.org/10.3390/app15158162 - 22 Jul 2025
Viewed by 141
Abstract
Image segmentation is a foundation of autonomous driving frameworks that enable vehicles to perceive and navigate their surroundings. It provides essential context for decision-making by partitioning an image into meaningful regions such as roads, vehicles, pedestrians, and traffic signs. Precise segmentation supports safe navigation, collision avoidance, and adherence to traffic rules, all of which are critical for the seamless operation of self-driving cars. Recent deep learning-based segmentation models have demonstrated impressive performance in structured environments, yet they often fall short under the complex and unpredictable conditions encountered in autonomous driving. This study proposes an Adaptive Ensemble Attention (AEA) mechanism within a Generative Adversarial Network (GAN) architecture to deal with dynamic and complex driving conditions. The AEA adaptively integrates self-, spatial, and channel attention, adjusting the contribution of each branch according to the input and its contextual relevance. The discriminator network evaluates the segmentation mask produced by the generator, distinguishing real from fake masks by examining a concatenated pair of the original image and its mask. Adversarial training thus drives the generator, via the discriminator, to produce masks that both align with the expected ground truth and appear realistic, and the exchange of information between generator and discriminator improves segmentation quality. To assess the accuracy of the proposed method, average IoU was computed on three widely used datasets, BDD100K, Cityscapes, and KITTI, yielding 89.46%, 89.02%, and 88.13%, respectively. These outcomes underscore the model's effectiveness and consistency. Overall, it achieved an accuracy of 98.94% and an AUC of 98.4%, indicating strong improvements over state-of-the-art (SOTA) models. Full article
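
A minimal PyTorch sketch of the conditional setup described above is given below: the discriminator scores a concatenated (image, segmentation-mask) pair. The channel widths, the 19-class mask, and the patch-style output are illustrative assumptions, not the authors' AEA-GAN implementation.

```python
# Sketch of a conditional discriminator that judges a concatenated
# (image, mask) pair, as described in the abstract. Layer sizes are toy values.
import torch
import torch.nn as nn

class MaskDiscriminator(nn.Module):
    def __init__(self, img_channels=3, num_classes=19):
        super().__init__()
        in_ch = img_channels + num_classes  # image concatenated with one-hot mask
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

# Adversarial targets: real (image, ground-truth mask) pairs vs. fake (image, generated mask) pairs.
bce = nn.BCEWithLogitsLoss()
disc = MaskDiscriminator()
image = torch.randn(2, 3, 128, 128)
real_mask = torch.rand(2, 19, 128, 128)
fake_mask = torch.rand(2, 19, 128, 128)   # stand-in for the generator output
d_real = disc(image, real_mask)
d_fake = disc(image, fake_mask)
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
```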

38 pages, 6851 KiB  
Article
FGFNet: Fourier Gated Feature-Fusion Network with Fractal Dimension Estimation for Robust Palm-Vein Spoof Detection
by Seung Gu Kim, Jung Soo Kim and Kang Ryoung Park
Fractal Fract. 2025, 9(8), 478; https://doi.org/10.3390/fractalfract9080478 - 22 Jul 2025
Viewed by 176
Abstract
The palm-vein recognition system has garnered attention as a biometric technology due to its resilience to external environmental factors, protection of personal privacy, and low risk of external exposure. However, with recent advancements in deep learning-based generative models for image synthesis, the quality and sophistication of fake images have improved, leading to an increased security threat from counterfeit images. In particular, palm-vein images acquired through near-infrared illumination exhibit low resolution and blurred characteristics, making it even more challenging to detect fake images. Furthermore, spoof detection specifically targeting palm-vein images has not been studied in detail. To address these challenges, this study proposes the Fourier-gated feature-fusion network (FGFNet) as a novel spoof detector for palm-vein recognition systems. The proposed network integrates masked fast Fourier transform, a map-based gated feature fusion block, and a fast Fourier convolution (FFC) attention block with global contrastive loss to effectively detect distortion patterns caused by generative models. These components enable the efficient extraction of critical information required to determine the authenticity of palm-vein images. In addition, fractal dimension estimation (FDE) was employed for two purposes in this study. In the spoof attack procedure, FDE was used to evaluate how closely the generated fake images approximate the structural complexity of real palm-vein images, confirming that the generative model produced highly realistic spoof samples. In the spoof detection procedure, the FDE results further demonstrated that the proposed FGFNet effectively distinguishes between real and fake images, validating its capability to capture subtle structural differences induced by generative manipulation. To evaluate the spoof detection performance of FGFNet, experiments were conducted using real palm-vein images from two publicly available palm-vein datasets—VERA Spoofing PalmVein (VERA dataset) and PLUSVein-contactless (PLUS dataset)—as well as fake palm-vein images generated based on these datasets using a cycle-consistent generative adversarial network. The results showed that, based on the average classification error rate, FGFNet achieved 0.3% and 0.3% on the VERA and PLUS datasets, respectively, demonstrating superior performance compared to existing state-of-the-art spoof detection methods. Full article
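
The abstract does not state which fractal dimension estimator is used; a common choice is box counting, sketched below under that assumption (NumPy only, applied to a toy binary image).

```python
# Box-counting estimate of fractal dimension for a binarized image.
# This is a generic estimator, not necessarily the paper's FDE procedure.
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count boxes containing any foreground.
        h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Slope of log(count) vs. log(1/size) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

img = np.random.rand(256, 256) > 0.7   # toy binary image
print(box_counting_dimension(img))
```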

19 pages, 3591 KiB  
Article
Physics-Informed Generative Adversarial Networks for Laser Speckle Noise Suppression
by Xiangji Guo, Fei Xie, Tingkai Yang, Ming Ming and Tao Chen
Sensors 2025, 25(13), 3842; https://doi.org/10.3390/s25133842 - 20 Jun 2025
Viewed by 412
Abstract
In high-resolution microscopic imaging, using shorter-wavelength ultraviolet (UV) lasers as illumination sources is a common approach. However, the high spatial coherence of such lasers, combined with the surface roughness of the sample, often introduces disturbances in the received optical field, resulting in strong speckle noise. This paper presents a novel speckle noise suppression method specifically designed for coherent laser-based microscopic imaging. The proposed approach integrates statistical physical modeling and image gradient discrepancy into the training of a Cycle Generative Adversarial Network (CycleGAN), capturing the perturbation mechanism of speckle noise in the optical field. By incorporating these physical constraints, the method effectively enhances the model’s ability to suppress speckle noise without requiring annotated clean data. Experimental results under high-resolution laser microscopy settings demonstrate that the introduced constraints successfully guide network training and significantly outperform traditional filtering methods and unsupervised CNNs in both denoising performance and training efficiency. While this work focuses on microscopic imaging, the underlying framework offers potential extensibility to other laser-based imaging modalities with coherent noise characteristics. Full article
(This article belongs to the Section Sensing and Imaging)
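
As a rough illustration of how an image-gradient discrepancy constraint can be folded into a CycleGAN generator objective, the PyTorch sketch below adds an L1 penalty on finite-difference gradients alongside the usual adversarial and cycle-consistency terms. The loss weights and the penalty form are assumptions, not the authors' formulation.

```python
# Sketch: CycleGAN-style generator loss with an extra gradient-discrepancy term.
import torch
import torch.nn.functional as F

def image_gradients(x):
    dx = x[..., :, 1:] - x[..., :, :-1]   # horizontal finite differences
    dy = x[..., 1:, :] - x[..., :-1, :]   # vertical finite differences
    return dx, dy

def gradient_discrepancy(pred, ref):
    pdx, pdy = image_gradients(pred)
    rdx, rdy = image_gradients(ref)
    return F.l1_loss(pdx, rdx) + F.l1_loss(pdy, rdy)

def generator_loss(d_fake_logits, denoised, noisy, cycled, lam_cyc=10.0, lam_grad=1.0):
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    cyc = F.l1_loss(cycled, noisy)                 # standard cycle-consistency term
    grad = gradient_discrepancy(denoised, noisy)   # keep structural edges while suppressing speckle
    return adv + lam_cyc * cyc + lam_grad * grad

denoised = torch.rand(1, 1, 64, 64)
noisy = torch.rand(1, 1, 64, 64)
cycled = torch.rand(1, 1, 64, 64)
d_fake_logits = torch.randn(1, 1, 6, 6)
loss = generator_loss(d_fake_logits, denoised, noisy, cycled)
```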

18 pages, 2167 KiB  
Article
High-Cycle Fatigue Life Prediction of Additive Manufacturing Inconel 718 Alloy via Machine Learning
by Zongxian Song, Jinling Peng, Lina Zhu, Caiyan Deng, Yangyang Zhao, Qingya Guo and Angran Zhu
Materials 2025, 18(11), 2604; https://doi.org/10.3390/ma18112604 - 3 Jun 2025
Cited by 1 | Viewed by 607
Abstract
This study established a machine learning framework to enhance the accuracy of very-high-cycle fatigue (VHCF) life prediction in selective laser melted Inconel 718 alloy by systematically comparing the use of generative adversarial networks (GANs) and variational auto-encoders (VAEs) for data augmentation. We quantified the influence of critical defect parameters (dimensions and stress amplitudes) extracted from fracture analyses on fatigue life and compared the performance of GANs versus VAEs in generating synthetic training data for three regression models (ANN, Random Forest, and SVR). The experimental fatigue data were augmented using both generative models, followed by hyperparameter optimization and rigorous validation against independent test sets. The results demonstrated that the GAN-generated data significantly improved the prediction metrics, with GAN-enhanced models achieving superior R2 scores (0.91–0.97 vs. 0.86–0.87) and lower MAEs (1.13–1.62% vs. 2.00–2.64%) compared to the VAE-based approaches. This work not only establishes GANs as a breakthrough tool for AM fatigue prediction but also provides a transferable methodology for data-driven modeling of defect-dominated failure mechanisms in advanced materials. Full article
(This article belongs to the Special Issue High Temperature-Resistant Ceramics and Composites)
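
The augmentation-then-regression workflow can be pictured as appending synthetic (defect parameter, life) samples to the measured data before fitting a regressor. In the scikit-learn sketch below the GAN output is stubbed with perturbed copies of the real data, and all values are toy numbers, purely for illustration.

```python
# Sketch: train an SVR on measured fatigue data augmented with synthetic samples.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_real = rng.uniform([10, 400], [80, 700], size=(40, 2))   # toy defect size / stress amplitude
y_real = 9.0 - 0.02 * X_real[:, 0] - 0.004 * X_real[:, 1]  # toy log10(fatigue life)

X_syn = X_real + rng.normal(0.0, 1.0, X_real.shape)        # stand-in for GAN-generated samples
y_syn = y_real + rng.normal(0.0, 0.05, y_real.shape)

X_train, X_test, y_train, y_test = train_test_split(X_real, y_real, random_state=0)
model = SVR(C=10.0, epsilon=0.01)
model.fit(np.vstack([X_train, X_syn]), np.concatenate([y_train, y_syn]))
print("R^2 on held-out measured data:", model.score(X_test, y_test))
```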

16 pages, 3824 KiB  
Article
Style Transfer and Topological Feature Analysis of Text-Based CAPTCHA via Generative Adversarial Networks
by Tao Xue, Zixuan Guo, Zehang Yin and Yu Rong
Mathematics 2025, 13(11), 1861; https://doi.org/10.3390/math13111861 - 2 Jun 2025
Viewed by 419
Abstract
The design and cracking of text-based CAPTCHAs are important topics in computer security. This study proposes a method for the style transfer of text-based CAPTCHAs using Generative Adversarial Networks (GANs). First, a curated dataset was used, combining a text-based CAPTCHA library and image collections from four artistic styles—Van Gogh, Monet, Cézanne, and Ukiyo-e—which were used to generate style-based text CAPTCHA samples. Subsequently, a universal style transfer model, along with trained CycleGAN models for both single- and double-style transfers, was employed to generate style-enhanced text-based CAPTCHAs. Traditional methods for evaluating the anti-recognition capability of text-based CAPTCHAs primarily focus on recognition success rates. This study introduces topological feature analysis as a new method for evaluating text-based CAPTCHAs. Initially, the recognition success rates of the three methods across four styles were evaluated using Muggle-OCR. Subsequently, the graph diameter was employed to quantify the differences between text-based CAPTCHA images before and after style transfer. The experimental results demonstrate that the recognition rates of style-enhanced text-based CAPTCHAs are consistently lower than those of the original CAPTCHA, suggesting that style transfer enhances anti-recognition capability. Topological feature analysis indicates that style transfer results in a more compact topological structure, further validating the effectiveness of the GAN-based twice-transfer method in enhancing CAPTCHA complexity and anti-recognition capability. Full article
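
The abstract does not describe how a CAPTCHA image is turned into a graph. One plausible construction, assumed in the NetworkX sketch below, links 4-connected foreground pixels of a binarized image and reports the diameter of the largest connected component.

```python
# Sketch: graph diameter of a binarized image via a pixel adjacency graph.
# The exact graph construction used in the paper is not specified.
import numpy as np
import networkx as nx

def image_graph_diameter(binary_img):
    g = nx.Graph()
    ys, xs = np.nonzero(binary_img)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    for y, x in pixels:
        for ny, nx_ in ((y + 1, x), (y, x + 1)):       # 4-connectivity
            if (ny, nx_) in pixels:
                g.add_edge((y, x), (ny, nx_))
    largest = max(nx.connected_components(g), key=len)
    return nx.diameter(g.subgraph(largest))

img = np.zeros((20, 60), dtype=bool)
img[8:12, 5:55] = True                                  # toy character stroke
print(image_graph_diameter(img))
```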

21 pages, 4536 KiB  
Article
Feature Attention Cycle Generative Adversarial Network: A Multi-Scene Image Dehazing Method Based on Feature Attention
by Na Li, Na Liu, Yanan Duan and Yuyang Chai
Appl. Sci. 2025, 15(10), 5374; https://doi.org/10.3390/app15105374 - 12 May 2025
Viewed by 339
Abstract
For image dehazing, it is difficult to obtain datasets of paired hazy and clear images. Currently, most algorithms are trained on synthetic datasets with insufficient complexity, which leads to model overfitting. At the same time, most current algorithms ignore the physical characteristics of real-world fog, namely that the degree of haze depends on scene depth and the scattering coefficient. Moreover, most current dehazing algorithms only consider land scenes and ignore maritime scenes. To address these problems, we propose a multi-scene image dehazing algorithm based on an improved cycle generative adversarial network (CycleGAN). The generator structure is improved based on the CycleGAN model, and a feature fusion attention module is proposed. This module obtains relevant contextual information by extracting features at different levels, and the extracted feature information is fused using the idea of residual connections. An attention mechanism is introduced in this module to retain more feature information by assigning different weights. During training, the atmospheric scattering model is used to guide the learning of the neural network with its prior information. The experimental results show that, compared with the baseline model, the peak signal-to-noise ratio (PSNR) increases by 32.10%, the structural similarity index (SSIM) increases by 31.07%, the information entropy (IE) increases by 4.79%, and the NIQE index is reduced by 20.1% in quantitative comparison. Meanwhile, it demonstrates better visual effects than other advanced algorithms in qualitative comparisons on synthetic and real datasets. Full article
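
The atmospheric scattering prior referred to above is usually written as I(x) = J(x) * t(x) + A * (1 - t(x)) with transmission t(x) = exp(-beta * d(x)), where J is the haze-free scene, A the global atmospheric light, d the depth, and beta the scattering coefficient. The sketch below applies that relation to a toy image; the values of A and beta are illustrative.

```python
# Sketch of the atmospheric scattering model used as a physical prior.
import numpy as np

def synthesize_haze(clear_img, depth, atmospheric_light=0.9, beta=1.2):
    t = np.exp(-beta * depth)[..., None]          # per-pixel transmission
    return clear_img * t + atmospheric_light * (1.0 - t)

clear = np.random.rand(64, 64, 3)                                 # toy clear image in [0, 1]
depth = np.linspace(0.1, 2.0, 64)[None, :].repeat(64, axis=0)     # toy depth field
hazy = synthesize_haze(clear, depth)
```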

31 pages, 5220 KiB  
Article
A Generative Adversarial Network-Based Investor Sentiment Indicator: Superior Predictability for the Stock Market
by Shiqing Qiu, Yang Wang, Zong Ke, Qinyan Shen, Zichao Li, Rong Zhang and Kaichen Ouyang
Mathematics 2025, 13(9), 1476; https://doi.org/10.3390/math13091476 - 30 Apr 2025
Cited by 5 | Viewed by 718
Abstract
Investor sentiment has a profound impact on financial market volatility; however, it is difficult to accurately capture the complex nonlinear relationships among sentiment proxies with the existing methods. In this study, we propose a novel investor sentiment indicator, SGAN, which uses generative adversarial networks (GANs) to extract the nonlinear latent structure from eight sentiment proxies from February 2003 to September 2023 in the Chinese A-share market. Unlike traditional linear dimensionality reduction methods, GANs are able to capture complex market dynamics through adversarial training, effectively reducing noise and improving prediction accuracy. The empirical analyses show that SGAN significantly outperforms existing methods in both in-sample and out-of-sample prediction capabilities. The GAN-based investment strategy achieves impressive annualized returns and provides a powerful tool for portfolio construction and risk management. Robustness tests across economic cycles, industries, and U.S. markets further validate the stability of SGAN. These findings highlight the unique advantages of GANs as sentiment-driven financial forecasting tools, providing market participants with new ways to more accurately capture sentiment-shifting trends and develop effective investment strategies. Full article

25 pages, 28786 KiB  
Article
Text-Conditioned Diffusion-Based Synthetic Data Generation for Turbine Engine Sensor Analysis and RUL Estimation
by Luis Pablo Mora-de-León, David Solís-Martín, Juan Galán-Páez and Joaquín Borrego-Díaz
Machines 2025, 13(5), 374; https://doi.org/10.3390/machines13050374 - 30 Apr 2025
Viewed by 808
Abstract
This paper introduces a novel framework for generating synthetic time-series data from turbine engine sensor readings using a text-conditioned diffusion model. The approach begins with dataset preprocessing, including correlation analysis, feature selection, and normalization. Principal Component Analysis (PCA) transforms the normalized signals into three components, mapped to the RGB channels of an image. These components, combined with engine identifiers and cycle information, form compact 19 × 19 × 3 pixel images, later scaled to 512 × 512 × 3 pixels. A variational autoencoder (VAE)-based diffusion model, fine-tuned on these images, leverages text prompts describing engine characteristics to generate high-quality synthetic samples. A reverse transformation pipeline reconstructs synthetic images back into time-series signals, preserving the original engine-specific attributes while removing padding artifacts. The quality of the synthetic data is assessed by training Remaining Useful Life (RUL) estimation models and comparing performance across original, synthetic, and combined datasets. Results demonstrate that synthetic data can be beneficial for model training, particularly in the early epochs when working with limited datasets. Compared to existing approaches, which rely on generative adversarial networks (GANs) or deterministic transformations, the proposed framework offers enhanced data fidelity and adaptability. This study highlights the potential of text-conditioned diffusion models for augmenting time-series datasets in industrial Prognostics and Health Management (PHM) applications. Full article
(This article belongs to the Section Turbomachinery)
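
The PCA-to-RGB encoding step can be sketched as follows; the window length, sensor count, and nearest-neighbour-style upscaling are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch: reduce normalized sensor channels to three principal components and
# pack them into the RGB planes of a small square image, then upscale.
import numpy as np
from sklearn.decomposition import PCA

signals = np.random.rand(361, 14)                          # toy: 361 cycles x 14 normalized sensors
components = PCA(n_components=3).fit_transform(signals)    # (361, 3)

side = 19                                                  # 19 * 19 = 361 time steps per image
img_small = components.reshape(side, side, 3)              # one RGB pixel per time step
img_large = np.kron(img_small, np.ones((512 // side + 1, 512 // side + 1, 1)))[:512, :512]
print(img_small.shape, img_large.shape)                    # (19, 19, 3) (512, 512, 3)
```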

15 pages, 1792 KiB  
Article
An Adaptive Cycle-GAN-Based Augmented LIME-Enabled Multi-Stage Transfer Learning Model for Improving Breast Tumor Detection Using Ultrasound Images
by Neeraja Sappa and Greeshma Lingam
Electronics 2025, 14(8), 1571; https://doi.org/10.3390/electronics14081571 - 13 Apr 2025
Viewed by 483
Abstract
Breast cancer is recognized as an aggressive cancer with the highest rate of mortality. Ultrasound imaging is a non-invasive and cost-effective modality that is frequently used in clinical practice. In ultrasound scans, however, breast tumors may appear with blurred and unclear boundaries, so there is a need to improve the quality of breast ultrasound images. In this work, we introduce a cycle generative adversarial network (GAN) for translating noisy breast ultrasound images into denoised images. Translating the denoised images into reconstructed images further helps preserve breast tumor boundaries for better efficacy. To accurately identify the augmented breast tumor images, we consider an ensemble of pre-trained transfer learning models such as Inception-v3, Densenet121, and XceptionLike. We also present automated boundary extraction using Local Interpretable Model-agnostic Explanations (LIME), providing interpretability for boundary extraction of breast lesions from ultrasound images. Through experimentation, the proposed model achieved 93% accuracy, and LIME provides better interpretability for each pre-trained model. Furthermore, the proposed model outperforms Vision Transformer (ViT) models. Full article
(This article belongs to the Special Issue Advances in Machine Learning for Image Classification)

22 pages, 5152 KiB  
Article
Hyper-CycleGAN: A New Adversarial Neural Network Architecture for Cross-Domain Hyperspectral Data Generation
by Yibo He, Kah Phooi Seng, Li Minn Ang, Bei Peng and Xingyu Zhao
Appl. Sci. 2025, 15(8), 4188; https://doi.org/10.3390/app15084188 - 10 Apr 2025
Cited by 1 | Viewed by 1060
Abstract
The scarcity of labeled training samples poses a significant challenge in hyperspectral image classification. Cross-scene classification has been shown to be an effective approach to tackle the problem of limited sample learning. This paper investigates the usage of generative adversarial networks (GANs) to enable collaborative artificial intelligence learning on hyperspectral datasets. We propose and design a specialized architecture, termed Hyper-CycleGAN, for heterogeneous transfer learning across source and target scenes. This architecture enables the establishment of bidirectional mappings through efficient adversarial training and merges both source-to-target and target-to-source generators. The proposed Hyper-CycleGAN architecture harnesses the strengths of GANs, along with custom modifications like the integration of multi-scale attention mechanisms to enhance feature learning capabilities specifically tailored for hyperspectral data. To address training instability, the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) loss discriminator is utilized. Additionally, a label smoothing technique is introduced to enhance the generalization capability of the generator, particularly in handling unlabeled samples, thus improving model robustness. Experiments are performed to validate and confirm the effectiveness of the cross-domain Hyper-CycleGAN approach by demonstrating its applicability to two real-world cross-scene hyperspectral image datasets. Addressing the challenge of limited labeled samples in hyperspectral image classification, this research makes significant contributions and gives valuable insights for remote sensing, environmental monitoring, and medical imaging applications. Full article
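
The WGAN-GP discriminator loss mentioned above adds a gradient penalty on points interpolated between real and generated samples. A generic PyTorch sketch follows; it is not the authors' Hyper-CycleGAN code, and shapes and the penalty weight are illustrative.

```python
# Sketch of the WGAN-GP gradient penalty used to stabilize discriminator training.
import torch
import torch.nn as nn

def gradient_penalty(discriminator, real, fake, lam=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = discriminator(interp)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=interp, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

# Toy usage with a stand-in discriminator:
D = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
real, fake = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
# Full critic loss would be: fake_scores.mean() - real_scores.mean() + gradient_penalty(D, real, fake)
print(gradient_penalty(D, real, fake))
```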

21 pages, 1231 KiB  
Article
Advanced Load Cycle Generation for Electrical Energy Storage Systems Using Gradient Random Pulse Method and Information Maximising-Recurrent Conditional Generative Adversarial Networks
by Steven Neupert, Jiaqi Yao and Julia Kowal
Batteries 2025, 11(4), 149; https://doi.org/10.3390/batteries11040149 - 9 Apr 2025
Cited by 2 | Viewed by 554
Abstract
The paper presents two approaches to generating load cycles for electrical energy storage systems. A load cycle is described as the operation of an energy storage system. The cycles can include different metrics depending on the storage application. Load cycle analysis using the rainflow counting method is employed to understand and validate the metrics of the load cycles generated. Current load cycle generation can involve clustering methods, random microtrip methods, and machine learning techniques. The study includes a random microtrip method that utilises the Random Pulse Method (RPM) and enhances it to develop an improved version called the Gradient Random Pulse Method (gradRPM), which includes the control of stress factors such as the gradient of the state of charge (SOC). This method is relatively simple but, in many cases, it fulfills its purpose. Another more sophisticated method to control stress factors has been proposed, namely the Information Maximising-Recurrent Conditional Generative Adversarial Network (Info-RCGAN). It uses a deep learning algorithm to follow a machine learning-based, data-driven load cycle generation approach. Both approaches use the measurement dataset of a BMW i3 over multiple years to generate new synthetic load cycles. After generating the load cycles using both approaches, they are applied in a laboratory environment to evaluate the stress factors and validate how similar the synthetic data are to a real measurement. The results provide insights into generating simulation or testing data for electrical energy storage applications. Full article
(This article belongs to the Special Issue Towards a Smarter Battery Management System: 2nd Edition)

18 pages, 4664 KiB  
Article
Local Binary Pattern–Cycle Generative Adversarial Network Transfer: Transforming Image Style from Day to Night
by Abeer Almohamade, Salma Kammoun and Fawaz Alsolami
J. Imaging 2025, 11(4), 108; https://doi.org/10.3390/jimaging11040108 - 31 Mar 2025
Viewed by 649
Abstract
Transforming images from day style to night style is crucial for enhancing perception in autonomous driving and smart surveillance. However, existing CycleGAN-based approaches struggle with texture loss, structural inconsistencies, and high computational costs. In our attempt to overcome these challenges, we produced LBP-CycleGAN, a new modification of CycleGAN that benefits from the advantages of a Local Binary Pattern (LBP) that extracts details of texture, unlike traditional CycleGAN, which relies heavily on color transformations. Our model leverages LBP-based single-channel inputs, ensuring sharper, more consistent night-time textures. We evaluated three model variations: (1) LBP-CycleGAN with a self-attention mechanism in both the generator and discriminator, (2) LBP-CycleGAN with a self-attention mechanism in the discriminator only, and (3) LBP-CycleGAN without a self-attention mechanism. Our results demonstrate that the LBP-CycleGAN model without self-attention outperformed the other models, achieving a superior texture quality while significantly reducing the training time and computational overhead. This work opens up new possibilities for efficient, high-fidelity night-time image translation in real-world applications, including autonomous driving and low-light vision systems. Full article
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)
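
A rough sketch of deriving the single-channel LBP input is given below, using scikit-image's local_binary_pattern; the neighbourhood size, radius, and normalization are assumptions rather than the paper's exact settings.

```python
# Sketch: compute a normalized single-channel LBP texture map from a grayscale image.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_channel(gray_img, points=8, radius=1):
    lbp = local_binary_pattern(gray_img, P=points, R=radius, method="uniform")
    return (lbp / lbp.max()).astype(np.float32)   # normalized texture map fed to the GAN

gray = (np.random.rand(128, 128) * 255).astype(np.uint8)   # toy grayscale day image
texture = lbp_channel(gray)
print(texture.shape, texture.min(), texture.max())
```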

23 pages, 49147 KiB  
Article
The Applicability of Two Generative Adversarial Networks to Generative Plantscape Design: A Comparative Study
by Lu Feng, Yuting Sun, Chenwen Yu, Ran Chen and Jing Zhao
Land 2025, 14(4), 746; https://doi.org/10.3390/land14040746 - 31 Mar 2025
Viewed by 415
Abstract
Plantscape design combines both scientific and technical elements, with flower borders serving as a representative example. Generative Adversarial Networks (GANs), which can automatically generate images through training, offer new technological support for plantscape design, potentially enhancing the efficiency of designers. This study focuses on flower border plans as the research subject and creates a dataset of flower border designs. Subsequently, the research employed two algorithms, Pix2Pix and CycleGAN, for training and testing, enabling the automatic generation of flower border design images, with subsequent optimization of the results. The paper compares the generated results of both algorithms in terms of image quality and design patterns, providing both objective and subjective evaluations of CycleGAN, which performed better. Experimental results show that the algorithm can learn the latent patterns of flower border design to some extent and generate high-quality images with reasonable performance in terms of ornamental character and ecological character. Among the design types, bar-shaped layouts showed the best results. However, the algorithm still faces challenges in handling complex site processing, boundary clarity, and design innovation. Additionally, aspects such as vertical variation, texture harmony, low maintenance, and sustainability remain areas for future improvement. This study demonstrates the potential of GAN in small-scale plantscape design and offers innovative and feasible solutions for flower border design. Full article
(This article belongs to the Section Land Planning and Landscape Architecture)

12 pages, 1300 KiB  
Article
Improving Image Quality of Chest Radiography with Artificial Intelligence-Supported Dual-Energy X-Ray Imaging System: An Observer Preference Study in Healthy Volunteers
by Sung-Hyun Yoon, Jihang Kim, Junghoon Kim, Jong-Hyuk Lee, Ilwoong Choi, Choul-Woo Shin and Chang-Min Park
J. Clin. Med. 2025, 14(6), 2091; https://doi.org/10.3390/jcm14062091 - 19 Mar 2025
Viewed by 1874
Abstract
Background/Objectives: To compare the image quality of chest radiography with a dual-energy X-ray imaging system using AI technology (DE-AI) to that of conventional chest radiography with a standard protocol. Methods: In this prospective study, 52 healthy volunteers underwent dual-energy chest radiography. Images were obtained using two exposures at 60 kVp and 120 kVp, separated by a 150 ms interval. Four images were generated for each participant: a conventional image, an enhanced standard image, a soft-tissue-selective image, and a bone-selective image. A machine learning model optimized the cancellation parameters for generating soft-tissue and bone-selective images. To enhance image quality, motion artifacts were minimized using Laplacian pyramid diffeomorphic registration, while a wavelet directional cycle-consistent adversarial network (WavCycleGAN) reduced image noise. Four radiologists independently evaluated the visibility of thirteen anatomical regions (eight soft-tissue regions and five bone regions) and the overall image with a five-point scale of preference. Pooled mean values were calculated for each anatomic region through meta-analysis using a random-effects model. Results: Radiologists preferred DE-AI images to conventional chest radiographs in various anatomic regions. The enhanced standard image showed superior quality in 9 of 13 anatomic regions. Preference for the soft-tissue-selective image was statistically significant for three of eight anatomic regions. Preference for the bone-selective image was statistically significant for four of five anatomic regions. Conclusions: Images produced by DE-AI provide better visualization of thoracic structures. Full article
(This article belongs to the Special Issue New Insights into Lung Imaging)
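
Dual-energy tissue and bone separation is conventionally a weighted subtraction of the log-transformed low- and high-kVp exposures. In the study the cancellation parameters are optimized by a machine learning model; the sketch below instead uses fixed, illustrative weights to show the generic operation.

```python
# Generic dual-energy subtraction sketch: soft-tissue and bone selective images
# as weighted log-domain combinations of the two exposures. Weights are illustrative.
import numpy as np

def dual_energy_subtraction(low_kvp, high_kvp, w_soft=0.55, w_bone=1.6, eps=1e-6):
    log_low = np.log(low_kvp + eps)
    log_high = np.log(high_kvp + eps)
    soft_tissue = log_high - w_soft * log_low   # weight chosen to cancel bone signal
    bone = w_bone * log_low - log_high          # weight chosen to cancel soft-tissue signal
    return soft_tissue, bone

low = np.random.rand(256, 256) * 0.8 + 0.1      # toy 60 kVp exposure
high = np.random.rand(256, 256) * 0.8 + 0.1     # toy 120 kVp exposure
soft, bone = dual_energy_subtraction(low, high)
```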

18 pages, 3728 KiB  
Article
Generative Adversarial Networks for Climate-Sensitive Urban Morphology: An Integration of Pix2Pix and the Cycle Generative Adversarial Network
by Mo Wang, Ziheng Xiong, Jiayu Zhao, Shiqi Zhou, Yuankai Wang, Rana Muhammad Adnan Ikram, Lie Wang and Soon Keat Tan
Land 2025, 14(3), 578; https://doi.org/10.3390/land14030578 - 10 Mar 2025
Cited by 2 | Viewed by 981
Abstract
Urban heat island (UHI) effects pose significant challenges to sustainable urban development, necessitating innovative modeling techniques to optimize urban morphology for thermal resilience. This study integrates the Pix2Pix and CycleGAN architectures to generate high-fidelity urban morphology models aligned with local climate zones (LCZs), enhancing their applicability to urban climate studies. This research focuses on eight major Chinese coastal cities, leveraging a robust dataset of 4712 samples to train the generative models. Quantitative evaluations demonstrated that the integration of CycleGAN with Pix2Pix substantially improved structural fidelity and realism in urban morphology synthesis, achieving a peak Structural Similarity Index Measure (SSIM) of 0.918 and a coefficient of determination (R2) of 0.987. The total adversarial loss in Pix2Pix training stabilized at 0.19 after 811 iterations, ensuring high convergence in urban structure generation. Additionally, CycleGAN-enhanced outputs exhibited a 35% reduction in relative error compared to Pix2Pix-generated images, significantly improving edge preservation and urban feature accuracy. By incorporating LCZ data, the proposed framework successfully bridges urban morphology modeling with climate-responsive urban planning, enabling adaptive design strategies for mitigating UHI effects. This study integrates Pix2Pix and CycleGAN architectures to enhance the realism and structural fidelity of urban morphology generation, while incorporating the LCZ classification framework to produce urban forms that align with specific climatological conditions. Compared to the model trained by Pix2Pix coupled with LCZ alone, the approach offers urban planners a more precise tool for designing climate-responsive cities, optimizing urban layouts to mitigate heat island effects, improve energy efficiency, and enhance resilience. Full article
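
The SSIM figure quoted above can be computed for any pair of images with scikit-image, as in the generic sketch below; the random arrays stand in for a ground-truth morphology map and a generated layout.

```python
# Generic sketch of the SSIM evaluation between a reference map and a generated output.
import numpy as np
from skimage.metrics import structural_similarity

reference = np.random.rand(256, 256)                         # toy ground-truth morphology map
generated = reference + 0.05 * np.random.randn(256, 256)     # toy generated output
score = structural_similarity(reference, generated,
                              data_range=generated.max() - generated.min())
print(f"SSIM = {score:.3f}")
```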
