Search Results (4)

Search Parameters:
Keywords = generative adversarial network

47 pages, 814 KB  
Systematic Review
Generative Adversarial Networks in Histological Image Segmentation: A Systematic Literature Review
by Yanna Leidy Ketley Fernandes Cruz, Antonio Fhillipi Maciel Silva, Ewaldo Eder Carvalho Santana and Daniel G. Costa
Appl. Sci. 2025, 15(14), 7802; https://doi.org/10.3390/app15147802 - 11 Jul 2025
Viewed by 2807
Abstract
Histological image analysis plays a crucial role in understanding and diagnosing various diseases, but manually segmenting these images is often complex, time-consuming, and heavily reliant on expert knowledge. Generative adversarial networks (GANs) have emerged as promising tools to assist in this task, enhancing the accuracy and efficiency of segmentation in histological images. This systematic literature review aims to explore how GANs have been utilized for segmentation in this field, highlighting the latest trends, key challenges, and opportunities for future research. The review was conducted across multiple digital libraries, including IEEE, Springer, Scopus, MDPI, and PubMed, with combinations of the keywords “generative adversarial network” or “GAN”, “segmentation” or “image segmentation” or “semantic segmentation”, and “histology” or “histological” or “histopathology” or “histopathological”. We reviewed 41 GAN-based histological image segmentation articles published between December 2014 and February 2025. We summarized and analyzed these papers based on the segmentation regions, datasets, GAN tasks, segmentation tasks, and commonly used metrics. Additionally, we discussed advantages, challenges, and future research directions. The analyzed studies demonstrated the versatility of GANs in handling challenges like stain variability, multi-task segmentation, and data scarcity—all crucial challenges in the analysis of histopathological images. Nevertheless, the field still faces important challenges, such as the need for standardized datasets, robust evaluation metrics, and better generalization across diverse tissues and conditions.
(This article belongs to the Section Computing and Artificial Intelligence)
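For readers reproducing the search strategy above, the keyword combinations can be enumerated programmatically. This is an illustrative sketch, not tooling from the review itself; the function name `build_queries` and the quoted AND-join convention are assumptions:

```python
# Illustrative: enumerate every boolean query implied by the review's
# three keyword groups, one AND-query per cross-group combination.
from itertools import product

GAN_TERMS = ["generative adversarial network", "GAN"]
SEG_TERMS = ["segmentation", "image segmentation", "semantic segmentation"]
HISTO_TERMS = ["histology", "histological", "histopathology", "histopathological"]

def build_queries(*term_groups):
    """Return one quoted, AND-joined query string per combination of terms."""
    return [
        " AND ".join(f'"{term}"' for term in combo)
        for combo in product(*term_groups)
    ]

queries = build_queries(GAN_TERMS, SEG_TERMS, HISTO_TERMS)
# 2 * 3 * 4 = 24 distinct query strings, one per library search
```

In practice each digital library has its own query syntax, so these strings would still need per-library adaptation.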

21 pages, 6639 KB  
Article
Efficient Generative-Adversarial U-Net for Multi-Organ Medical Image Segmentation
by Haoran Wang, Gengshen Wu and Yi Liu
J. Imaging 2025, 11(1), 19; https://doi.org/10.3390/jimaging11010019 - 12 Jan 2025
Cited by 7 | Viewed by 4185
Abstract
Manual labeling of lesions in medical image analysis presents a significant challenge due to its labor-intensive and inefficient nature, which ultimately strains essential medical resources and impedes the advancement of computer-aided diagnosis. This paper introduces a novel medical image-segmentation framework named Efficient Generative-Adversarial U-Net (EGAUNet), designed to facilitate rapid and accurate multi-organ labeling. To enhance the model’s capability to comprehend spatial information, we propose the Global Spatial-Channel Attention Mechanism (GSCA). This mechanism enables the model to concentrate more effectively on regions of interest. Additionally, we have integrated Efficient Mapping Convolutional Blocks (EMCB) into the feature-learning process, allowing for the extraction of multi-scale spatial information and the adjustment of feature map channels through optimized weight values. Moreover, the proposed framework progressively enhances its performance by utilizing a generative-adversarial learning strategy, which contributes to improvements in segmentation accuracy. Consequently, EGAUNet demonstrates exemplary segmentation performance on public multi-organ datasets while maintaining high efficiency. For instance, in evaluations on the CHAOS T2SPIR dataset, EGAUNet achieves approximately 2% higher performance on the Jaccard metric, 1% higher on the Dice metric, and nearly 3% higher on the precision metric in comparison to advanced networks such as Swin-Unet and TransUnet.
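The generative-adversarial learning strategy mentioned in this abstract can be sketched in simplified form: a discriminator scores ground-truth masks against generated ones, and the generator's loss combines a segmentation term with an adversarial term. This is a generic toy, not EGAUNet's actual objective; the function names, the plain BCE terms, and the `adv_weight` value are assumptions:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between predicted probabilities and targets."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_real, d_fake):
    """D should score real masks as 1 and generated masks as 0."""
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(pred_mask, gt_mask, d_fake, adv_weight=0.1):
    """Segmentation BCE plus an adversarial term that rewards fooling D."""
    return bce(pred_mask, gt_mask) + adv_weight * bce(d_fake, np.ones_like(d_fake))
```

Alternating minimization of these two losses is the standard adversarial training loop; the segmentation term keeps the generator anchored to the labels while the adversarial term pushes its masks toward the statistics of real annotations.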

27 pages, 21410 KB  
Article
Point Cloud Scene Completion of Obstructed Building Facades with Generative Adversarial Inpainting
by Jingdao Chen, John Seon Keun Yi, Mark Kahoush, Erin S. Cho and Yong K. Cho
Sensors 2020, 20(18), 5029; https://doi.org/10.3390/s20185029 - 4 Sep 2020
Cited by 22 | Viewed by 7526
Abstract
Collecting 3D point cloud data of buildings is important for many applications such as urban mapping, renovation, preservation, and energy simulation. However, laser-scanned point clouds are often difficult to analyze, visualize, and interpret due to incompletely scanned building facades caused by numerous sources of defects such as noise, occlusions, and moving objects. Several point cloud scene completion algorithms have been proposed in the literature, but they have been mostly applied to individual objects or small-scale indoor environments and not on large-scale scans of building facades. This paper introduces a method of performing point cloud scene completion of building facades using orthographic projection and generative adversarial inpainting methods. The point cloud is first converted into the 2D structured representation of depth and color images using an orthographic projection approach. Then, a data-driven 2D inpainting approach is used to predict the complete version of the scene, given the incomplete scene in the image domain. The 2D inpainting process is fully automated and uses a customized generative-adversarial network based on Pix2Pix that is trainable end-to-end. The inpainted 2D image is finally converted back into a 3D point cloud using depth remapping. The proposed method is compared against several baseline methods, including geometric methods such as Poisson reconstruction and hole-filling, as well as learning-based methods such as the point completion network (PCN) and TopNet. Performance evaluation is carried out based on the task of reconstructing real-world building facades from partial laser-scanned point clouds. Experimental results using the performance metrics of voxel precision, voxel recall, position error, and color error showed that the proposed method has the best performance overall.
(This article belongs to the Section Remote Sensors)
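The two endpoints of the pipeline this abstract describes (point cloud to orthographic depth image, then inpainted depth image back to a point cloud) can be sketched in a few lines. This is a minimal illustration under assumed conventions, not the authors' implementation: the function names, nearest-point z-buffering, square fixed-bounds image, and the omitted inpainting step in the middle are all assumptions:

```python
import numpy as np

def project_orthographic(points, res=64, bounds=((0.0, 1.0), (0.0, 1.0))):
    """Project (x, y, z) points onto an XY depth image, keeping the nearest
    (smallest-z) point per pixel. Empty pixels hold np.inf."""
    depth = np.full((res, res), np.inf)
    (x0, x1), (y0, y1) = bounds
    u = ((points[:, 0] - x0) / (x1 - x0) * (res - 1)).astype(int)
    v = ((points[:, 1] - y0) / (y1 - y0) * (res - 1)).astype(int)
    for ui, vi, z in zip(u, v, points[:, 2]):
        if 0 <= ui < res and 0 <= vi < res:
            depth[vi, ui] = min(depth[vi, ui], z)
    return depth

def unproject(depth, bounds=((0.0, 1.0), (0.0, 1.0))):
    """Map a (possibly inpainted) depth image back to a 3D point cloud."""
    res = depth.shape[0]
    (x0, x1), (y0, y1) = bounds
    vs, us = np.nonzero(np.isfinite(depth))
    xs = x0 + us / (res - 1) * (x1 - x0)
    ys = y0 + vs / (res - 1) * (y1 - y0)
    return np.stack([xs, ys, depth[vs, us]], axis=1)
```

In the paper's method the learned inpainting network would fill the `np.inf` holes in the depth (and color) images before `unproject` recovers the completed facade.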

26 pages, 17818 KB  
Article
Deep-Learning-Based Defective Bean Inspection with GAN-Structured Automated Labeled Data Augmentation in Coffee Industry
by Yung-Chien Chou, Cheng-Ju Kuo, Tzu-Ting Chen, Gwo-Jiun Horng, Mao-Yuan Pai, Mu-En Wu, Yu-Chuan Lin, Min-Hsiung Hung, Wei-Tsung Su, Yi-Chung Chen, Ding-Chau Wang and Chao-Chun Chen
Appl. Sci. 2019, 9(19), 4166; https://doi.org/10.3390/app9194166 - 4 Oct 2019
Cited by 49 | Viewed by 15307
Abstract
In the production process from green beans to coffee bean packages, defective bean removal (or, in short, defect removal) is one of the most labor-consuming stages, and many companies investigate the automation of this stage to minimize human effort. In this paper, we propose a deep-learning-based defective bean inspection scheme (DL-DBIS), together with a GAN (generative adversarial network)-structured automated labeled data augmentation method (GALDAM) for enhancing the proposed scheme, so that the degree of automation of bean removal with robotic arms can be further improved for the coffee industry. The proposed scheme is aimed at providing an effective model to a deep-learning-based object detection module for accurately identifying defects among dense beans. The proposed GALDAM can greatly reduce labor costs, since data labeling is the most labor-intensive work in this sort of solution. Our proposed scheme brings two main impacts to intelligent agriculture. First, it can be easily adopted by industry, as the human effort in labeling coffee beans is minimized. Users can easily customize their own defective bean model without spending a great amount of time labeling small and dense objects. Second, our scheme can inspect all classes of defective beans categorized by the SCAA (Specialty Coffee Association of America) at the same time and can be easily extended if more classes of defective beans are added. These two advantages increase the degree of automation in the coffee industry. A prototype of the proposed scheme was developed for integrated testing. Testing results of a case study reveal that the proposed scheme can efficiently and effectively generate models for identifying defective beans with accuracy and precision values up to 80%.
(This article belongs to the Special Issue Actionable Pattern-Driven Analytics and Prediction)
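The core idea behind automated labeled data augmentation, as described in this abstract, is that when synthetic defect patches are composited into an image, the placement itself yields the label for free. The sketch below illustrates that idea only; it is a hypothetical toy, not GALDAM: the function name, the nested-list image model, the patch source (in the paper, GAN-generated defects), and the bounding-box format are all assumptions:

```python
import random

def augment_with_synthetic_defects(base_image, defect_patches, n_defects=3, rng=None):
    """Paste synthetic defect patches onto a clean-tray image and emit
    bounding-box labels automatically: the paste location IS the annotation,
    so no human labeling of small, dense objects is required."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    h, w = len(base_image), len(base_image[0])
    image = [row[:] for row in base_image]  # copy, leave the original intact
    labels = []
    for _ in range(n_defects):
        patch = rng.choice(defect_patches)
        ph, pw = len(patch), len(patch[0])
        y = rng.randrange(h - ph + 1)
        x = rng.randrange(w - pw + 1)
        for dy in range(ph):
            for dx in range(pw):
                image[y + dy][x + dx] = patch[dy][dx]
        labels.append((x, y, pw, ph))  # bbox as (x, y, width, height)
    return image, labels
```

A real pipeline would draw the patches from a trained GAN and write the boxes out in the object detector's annotation format, but the label-for-free structure is the same.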
