Search Results (7)

Search Parameters:
Keywords = complex-valued Pix2pix

26 pages, 17747 KB  
Article
GAN Predictability for Urban Environmental Performance: Learnability Mechanisms, Structural Consistency, and Efficiency Bounds
by Chenglin Wang, Shiliang Wang, Sixuan Ren, Wenjing Luo, Wenxin Yi and Mei Qing
Atmosphere 2025, 16(12), 1403; https://doi.org/10.3390/atmos16121403 - 13 Dec 2025
Viewed by 72
Abstract
Generative adversarial networks (GANs) can rapidly predict urban environmental performance. However, most existing studies focus on a single target and lack cross-performance comparisons under unified conditions. Under unified urban-form inputs and training settings, this study employs the conditional adversarial model pix2pix to predict four targets—the Universal Thermal Climate Index (UTCI), annual global solar radiation (Rad), sunshine duration (SolarH), and near-surface wind speed (Wind)—and establishes a closed-loop evaluation framework spanning pixel-level, structural/region-level, cross-task synergy, complexity, and efficiency metrics. The results show that (1) the overall ranking in accuracy and structural consistency is SolarH ≈ Rad > UTCI > Wind; (2) per-epoch times are similar, whereas convergence epochs differ markedly, indicating that total time is primarily governed by convergence difficulty; (3) structurally, Rad/SolarH perform better on hot-region overlap and edge alignment, whereas Wind exhibits larger errors at corners and canyons; (4) in terms of learnability, texture variation explains errors far better than edge count; and (5) cross-task synergy is higher in low-value regions than in high-value regions, with Wind clearly decoupled from the other targets. The distinctive contribution lies in a unified, reproducible evaluation framework, together with learnability mechanisms and applicability bounds, providing fast and reliable evidence for performance-oriented planning and design.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
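The pix2pix model this abstract relies on trains a generator against a discriminator while also penalizing pixel-wise error. A minimal numpy sketch of the pix2pix generator objective (adversarial term plus lambda-weighted L1 term) is shown below; this is an illustrative reconstruction of the published pix2pix loss, not the authors' code, and the function names are hypothetical:

```python
import numpy as np

def l1_loss(pred, target):
    # Pixel-level reconstruction term used by pix2pix (weighted by lambda)
    return np.mean(np.abs(pred - target))

def bce(p, label):
    # Binary cross-entropy on discriminator outputs in (0, 1)
    eps = 1e-12
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

def pix2pix_generator_loss(d_fake, fake, real, lam=100.0):
    # Generator tries to make the discriminator output 1 on fakes,
    # plus an L1 term pulling the generated image toward ground truth
    return bce(d_fake, 1.0) + lam * l1_loss(fake, real)
```

The lambda = 100 default follows the original pix2pix paper; the study above may weight the terms differently per target.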

31 pages, 2485 KB  
Article
DCBAN: A Dynamic Confidence Bayesian Adaptive Network for Reconstructing Visual Images from fMRI Signals
by Wenju Wang, Yuyang Cai, Renwei Zhang, Jiaqi Li, Zinuo Ye and Zhen Wang
Brain Sci. 2025, 15(11), 1166; https://doi.org/10.3390/brainsci15111166 - 29 Oct 2025
Viewed by 456
Abstract
Background: Current fMRI (functional magnetic resonance imaging)-driven brain information decoding for visual image reconstruction faces issues such as poor structural fidelity, inadequate model generalization, and unnatural visual image reconstruction in complex scenarios. Methods: To address these challenges, this study proposes a Dynamic Confidence Bayesian Adaptive Network (DCBAN). In this network model, deep nested Singular Value Decomposition is introduced to embed low-rank constraints into the deep learning model layers for fine-grained feature extraction, thus improving structural fidelity. The proposed Bayesian Adaptive Fractional Ridge Regression module, based on the singular value space, dynamically adjusts the regularization parameters, significantly enhancing the decoder's generalization ability under complex stimulus conditions. The constructed Dynamic Confidence Adaptive Diffusion Model module incorporates a confidence network and a time-decay strategy, dynamically adjusting the semantic injection strength during the generation phase and further enhancing the detail and naturalness of the generated images. Results: The proposed DCBAN method is applied to the Natural Scenes Dataset (NSD), outperforming state-of-the-art methods by 8.41%, 0.6%, and 4.8% in PixCorr (0.361), Incep (96.0%), and CLIP (97.8%), respectively, achieving the current best performance in both structural and semantic fMRI visual image reconstruction. Conclusions: The proposed DCBAN offers a novel solution for reconstructing visual images from fMRI signals, significantly enhancing the robustness and generative quality of the reconstructed images.
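The fractional ridge regression module described above works in singular-value space, tuning regularization per component instead of inverting the normal equations. A minimal numpy sketch of ridge regression computed via SVD is given below; this illustrates the general SVD-shrinkage idea only, not the paper's Bayesian adaptive variant, and `ridge_svd` is a hypothetical name:

```python
import numpy as np

def ridge_svd(X, y, alpha):
    # Ridge solution in singular-value space: each component of the
    # least-squares solution is shrunk by s_i / (s_i^2 + alpha),
    # avoiding an explicit inverse of X^T X + alpha I.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    shrink = s / (s**2 + alpha)
    return Vt.T @ (shrink * (U.T @ y))
```

With alpha = 0 this reduces to ordinary least squares; larger alpha shrinks the solution norm, which is the knob the paper's module adjusts dynamically.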

27 pages, 10559 KB  
Article
A Comparative Study of Deep Learning Frameworks Applied to Coffee Plant Detection from Close-Range UAS-RGB Imagery in Costa Rica
by Sergio Arriola-Valverde, Renato Rimolo-Donadio, Karolina Villagra-Mendoza, Alfonso Chacón-Rodriguez, Ronny García-Ramirez and Eduardo Somarriba-Chavez
Remote Sens. 2024, 16(24), 4617; https://doi.org/10.3390/rs16244617 - 10 Dec 2024
Cited by 1 | Viewed by 2408
Abstract
Introducing artificial intelligence techniques in agriculture offers new opportunities for improving crop management, for example in coffee plantations, which constitute a complex agroforestry environment. This paper presents a comparative study of three deep learning frameworks: Deep Forest, RT-DETR, and Yolov9, customized for coffee plant detection and trained on images with a high spatial resolution (cm/pix). Each frame had dimensions of 640 × 640 pixels and was acquired from passive RGB sensors onboard an unmanned aerial system (UAS). The image set was structured and consolidated from UAS-RGB imagery acquired at six locations along the Central Valley, Costa Rica, through automated photogrammetric missions. The RT-DETR and Yolov9 frameworks achieved adequate generalization and detection, with mAP50 values higher than 90% and mAP50-95 values higher than 54% in application scenarios with data augmentation techniques. Deep Forest also achieved good metrics, but noticeably lower ones compared with the other frameworks. RT-DETR and Yolov9 were able to generalize and detect coffee plants in unseen scenarios that include complex forest structures within tropical agroforestry systems (AFS).
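The mAP50 and mAP50-95 metrics quoted above threshold detections on intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of the IoU computation underlying those thresholds follows; this is a standard textbook formulation, not code from the study:

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2). A detection counts as a true positive
    # at mAP50 when IoU >= 0.5; mAP50-95 averages over thresholds 0.5-0.95.
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```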

15 pages, 31080 KB  
Article
Exploration of Semantic Label Decomposition and Dataset Size in Semantic Indoor Scenes Synthesis via Optimized Residual Generative Adversarial Networks
by Hatem Ibrahem, Ahmed Salem and Hyun-Soo Kang
Sensors 2022, 22(21), 8306; https://doi.org/10.3390/s22218306 - 29 Oct 2022
Cited by 3 | Viewed by 3089
Abstract
In this paper, we revisit paired image-to-image translation using the conditional generative adversarial network known as "Pix2Pix" and propose efficient optimization techniques for the architecture and training method to maximize the architecture's performance and boost the realism of the generated images. We propose a generative adversarial network-based technique to create new artificial indoor scenes using a user-defined semantic segmentation map as input to define the location, shape, and category of each object in the scene, in the same manner as Pix2Pix. We train different residual-connection-based architectures of the generator and discriminator on the NYU depth-v2 dataset and a selected indoor subset of the ADE20K dataset, showing that the proposed models have fewer parameters and less computational complexity and can generate better-quality images than state-of-the-art methods following the same technique for generating realistic indoor images. We also show that using extra specific labels and more training samples increases the quality of the generated images; moreover, the proposed residual-connection-based models learn better from small datasets (i.e., NYU depth-v2) and improve the realism of the images generated when training on bigger datasets (i.e., the ADE20K indoor subset) compared with Pix2Pix. The proposed method achieves an LPIPS value of 0.505 and an FID value of 81.067, generating better-quality images than those produced by Pix2Pix and other recent paired image-to-image translation methods and outperforming them in terms of LPIPS and FID.
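The architectural change explored above is adding residual connections to the Pix2Pix generator and discriminator. A minimal numpy sketch of a residual block (y = x + F(x)) is shown below; it illustrates the skip-connection idea only, with dense layers standing in for the paper's convolutions, and is not the authors' architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # y = x + F(x): the skip connection lets the block learn only a
    # residual correction, which typically eases optimization and lets
    # the same capacity be reached with fewer parameters.
    h = relu(x @ w1)
    return x + h @ w2
```

When the learned weights of F are near zero, the block defaults to the identity mapping, which is why deep stacks of such blocks remain trainable.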

15 pages, 3870 KB  
Article
Image Recognition of Male Oilseed Rape (Brassica napus) Plants Based on Convolutional Neural Network for UAAS Navigation Applications on Supplementary Pollination and Aerial Spraying
by Zhu Sun, Xiangyu Guo, Yang Xu, Songchao Zhang, Xiaohui Cheng, Qiong Hu, Wenxiang Wang and Xinyu Xue
Agriculture 2022, 12(1), 62; https://doi.org/10.3390/agriculture12010062 - 5 Jan 2022
Cited by 12 | Viewed by 2877
Abstract
Hybrid oilseed rape (OSR, Brassica napus) seed production requires two things: stamen sterility on the female OSR plants and effective pollen spread onto the pistils from the male OSR plants to the female OSR plants. The unmanned agricultural aerial system (UAAS) has developed rapidly in China and has been used for supplementary pollination and aerial spraying during hybrid OSR seed production. This study developed a new method to rapidly recognize male OSR plants and extract the row center line to support UAAS navigation. A male OSR plant recognition model was constructed based on a convolutional neural network (CNN). Sequence images of male OSR plants were extracted; the feature regions and points were obtained from the images through morphological and boundary processing methods and horizontal segmentation, respectively. The male OSR plant image recognition accuracies of different CNN structures and segmentation sizes were compared. The male OSR plant row center lines were fitted using the least-squares method (LSM) and the Hough transform. The results showed that the segmentation algorithm could segment male OSR plants from the complex background. The highest average recognition accuracy was 93.54%, and the minimum loss function value was 0.2059, with three convolutional layers, one fully connected layer, and a segmentation size of 40 × 40 pixels. The LSM performed better for center-line fitting. The average recognition model accuracies on the original input images were 98% and 94%, and the average root mean square errors (RMSE) of the angle were 3.22° and 1.36° under cloudy-day and sunny-day lighting conditions, respectively. The results demonstrate the potential of using digital imaging technology to recognize male OSR plant rows for UAAS visual navigation in hybrid OSR supplementary pollination and aerial spraying applications, which would be a meaningful supplement to precision agriculture.
(This article belongs to the Special Issue Image Analysis Techniques in Agriculture)
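The center-line step above fits a line through detected plant feature points by least squares and evaluates the angle error. A minimal numpy sketch of such a fit is given below; `fit_row_center_line` and its point format are hypothetical stand-ins for the paper's LSM step, not its actual implementation:

```python
import numpy as np

def fit_row_center_line(points):
    # Fit x = a*y + b through feature points (x, y) by least squares;
    # parameterizing x as a function of y handles near-vertical crop rows.
    # Returns the slope a, intercept b, and the line's angle from
    # vertical in degrees (the quantity whose RMSE the study reports).
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.stack([y, np.ones_like(y)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, x, rcond=None)
    return a, b, np.degrees(np.arctan(a))
```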

14 pages, 2045 KB  
Article
Complex-Valued Pix2pix—Deep Neural Network for Nonlinear Electromagnetic Inverse Scattering
by Liang Guo, Guanfeng Song and Hongsheng Wu
Electronics 2021, 10(6), 752; https://doi.org/10.3390/electronics10060752 - 22 Mar 2021
Cited by 29 | Viewed by 4246
Abstract
Nonlinear electromagnetic inverse scattering is an imaging technique with quantitative reconstruction and high resolution. Compared with conventional tomography, it takes into account the more realistic interaction between the internal structure of the scene and the electromagnetic waves. However, open issues and challenges remain due to its inherent strong non-linearity, ill-posedness, and computational cost. To overcome these shortcomings, we apply an image translation network, named Complex-Valued Pix2pix, to the electromagnetic inverse scattering problem. Complex-Valued Pix2pix consists of two parts: a Generator and a Discriminator. The Generator employs a multi-layer complex-valued convolutional neural network, while the Discriminator computes the maximum likelihoods between the original value and the reconstructed value for the real and imaginary parts of the complex signal separately. The results show that Complex-Valued Pix2pix can learn the mapping from the initial contrast to the real contrast in microwave imaging models. Moreover, due to the introduction of the discriminator, Complex-Valued Pix2pix can capture more features of the nonlinearity than a traditional Convolutional Neural Network (CNN) through adversarial training. Therefore, without considering the time cost of training, Complex-Valued Pix2pix may be a more effective way to solve inverse scattering problems than other deep learning methods. The main improvement of this work lies in the realization of a Generative Adversarial Network (GAN) for the electromagnetic inverse scattering problem, adding a discriminator to the traditional Convolutional Neural Network (CNN) method to optimize network training. It has the prospect of outperforming conventional methods in terms of both image quality and computational efficiency.
(This article belongs to the Section Microwave and Wireless Communications)
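A complex-valued layer like the ones in the Generator above is commonly built from two real weight tensors via the identity (Wr + iWi)(xr + ixi) = (Wr xr - Wi xi) + i(Wi xr + Wr xi). A minimal numpy sketch using a dense layer in place of a convolution follows; this shows the standard construction only, not the paper's network:

```python
import numpy as np

def complex_linear(x, Wr, Wi):
    # Complex-valued layer from two real weight matrices:
    # (Wr + i Wi)(xr + i xi) = (Wr xr - Wi xi) + i (Wi xr + Wr xi).
    # The same decomposition applies to complex-valued convolutions,
    # which process magnitude and phase jointly.
    xr, xi = x.real, x.imag
    return (xr @ Wr - xi @ Wi) + 1j * (xi @ Wr + xr @ Wi)
```

The result is exactly equal to multiplying by the complex matrix Wr + iWi, so the layer can be trained with ordinary real-valued autodiff on the four real products.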

19 pages, 4133 KB  
Article
Complex-Valued Convolutional Autoencoder and Spatial Pixel-Squares Refinement for Polarimetric SAR Image Classification
by Ronghua Shang, Guangguang Wang, Michael A. Okoth and Licheng Jiao
Remote Sens. 2019, 11(5), 522; https://doi.org/10.3390/rs11050522 - 4 Mar 2019
Cited by 30 | Viewed by 5161
Abstract
Recently, deep learning models, such as the autoencoder, deep belief network, and convolutional autoencoder (CAE), have been widely applied to the polarimetric synthetic aperture radar (PolSAR) image classification task. These algorithms, however, only consider the amplitude information of the pixels in PolSAR images, failing to obtain adequate discriminative features. In this work, a complex-valued convolutional autoencoder network (CV-CAE) is proposed. CV-CAE extends the encoding and decoding of the CAE to the complex domain so that phase information can be used. Benefiting from the advantages of the CAE, CV-CAE extracts features from a very small amount of training data. To further boost performance, we propose a novel post-processing method for the preliminary classification map called spatial pixel-squares refinement (SPF). Specifically, majority-voting and difference-value methods are used to determine whether a pixel-square (PixS) needs to be refined. Based on the blocky structure of the land cover in PolSAR images, SPF refines each PixS as a whole, making it more efficient than current methods that work at the pixel level. The proposed algorithm is evaluated on three typical PolSAR datasets, and better or comparable accuracy is obtained compared with other state-of-the-art methods.
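The SPF post-processing described above relabels whole pixel-squares by majority vote. A simplified numpy sketch of that voting step is shown below; it omits the paper's difference-value test and any refinement threshold, and `refine_pixel_squares` is a hypothetical name, not the authors' code:

```python
import numpy as np

def refine_pixel_squares(label_map, k):
    # Majority voting inside each k x k pixel-square: relabel the whole
    # square with its most frequent class, exploiting the blocky land-cover
    # structure of PolSAR scenes to clean up a preliminary classification map.
    out = label_map.copy()
    h, w = label_map.shape
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            block = label_map[i:i + k, j:j + k]
            vals, counts = np.unique(block, return_counts=True)
            out[i:i + k, j:j + k] = vals[np.argmax(counts)]
    return out
```

Operating on k x k squares rather than single pixels is what makes this cheaper than per-pixel smoothing such as a sliding-window mode filter.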
