Search Results (111)

Search Parameters:
Keywords = deep convolutional generative adversarial networks (DCGAN)

14 pages, 1563 KiB  
Article
High-Resolution Time-Frequency Feature Selection and EEG Augmented Deep Learning for Motor Imagery Recognition
by Mouna Bouchane, Wei Guo and Shuojin Yang
Electronics 2025, 14(14), 2827; https://doi.org/10.3390/electronics14142827 - 14 Jul 2025
Viewed by 287
Abstract
Motor Imagery (MI)-based Brain-Computer Interfaces (BCIs) have promising applications in neurorehabilitation for individuals who have lost mobility and control over parts of their body due to brain injuries, such as stroke patients. Accurately classifying MI tasks is essential for effective BCI performance, but remains challenging due to the complex and non-stationary nature of EEG signals. This study aims to improve the classification of left- and right-hand MI tasks by utilizing high-resolution time-frequency features extracted from EEG signals, enhanced with deep learning-based data augmentation techniques. We propose a novel deep learning framework named the Generalized Wavelet Transform-based Deep Convolutional Network (GDC-Net), which integrates multiple components. First, EEG signals recorded from the C3, C4, and Cz channels are transformed into detailed time-frequency representations using the Generalized Morse Wavelet Transform (GMWT). The selected features are then expanded using a Deep Convolutional Generative Adversarial Network (DCGAN) to generate additional synthetic data and address data scarcity. Finally, the augmented feature maps are fed into a hybrid CNN-LSTM architecture, enabling both spatial and temporal feature learning for improved classification. The proposed approach is evaluated on the BCI Competition IV dataset 2b. Experimental results showed a mean classification accuracy of 89.24% and a Kappa value of 0.784, the highest among the compared state-of-the-art algorithms. The integration of GMWT and DCGAN significantly enhances feature quality and model generalization, thereby improving classification performance. These findings demonstrate that GDC-Net delivers superior MI classification performance by effectively capturing high-resolution time-frequency dynamics and enhancing data diversity. This approach holds strong potential for advancing MI-based BCI applications, especially in assistive and rehabilitation technologies.
(This article belongs to the Section Computer Science & Engineering)
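Editorial note: the hybrid CNN-LSTM stage described above can be sketched minimally in PyTorch. The layer sizes, the three-channel input (one channel per EEG electrode), and the two-class output below are illustrative assumptions, not the authors' exact GDC-Net.

```python
# Minimal sketch of a hybrid CNN-LSTM classifier for time-frequency maps.
# Shapes and layer sizes are illustrative assumptions, not the GDC-Net spec.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, in_channels=3, n_classes=2, hidden=64):
        super().__init__()
        # CNN extracts spatial features from each time slice of the map
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((1, 1)),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels, freq, width)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)   # (b*t, 32)
        out, _ = self.lstm(feats.view(b, t, -1))       # temporal modeling
        return self.fc(out[:, -1])                     # last step -> class logits

logits = CNNLSTM()(torch.randn(4, 10, 3, 32, 32))      # smoke test
```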

16 pages, 7958 KiB  
Article
Truncation Artifact Reduction in Stationary Inverse-Geometry Digital Tomosynthesis Using Deep Convolutional Generative Adversarial Network
by Burnyoung Kim and Seungwan Lee
Appl. Sci. 2025, 15(14), 7699; https://doi.org/10.3390/app15147699 - 9 Jul 2025
Viewed by 225
Abstract
Stationary inverse-geometry digital tomosynthesis (s-IGDT) causes truncation artifacts in reconstructed images due to its geometric characteristics. This study introduces a deep convolutional generative adversarial network (DCGAN)-based out-painting method for mitigating truncation artifacts in s-IGDT images. The proposed network employed an encoder–decoder architecture for the generator, and a dilated convolution block was added between the encoder and decoder. A dual discriminator was used to distinguish the artificiality of generated images for truncated and non-truncated regions separately. During network training, the generator was able to selectively learn a target task for the truncated regions using binary mask images. The performance of the proposed method was compared to conventional methods in terms of signal-to-noise ratio (SNR), normalized root-mean-square error (NRMSE), peak SNR (PSNR), and structural similarity (SSIM). The results showed that the proposed method led to a substantial reduction in truncation artifacts. On average, the proposed method achieved 62.31%, 16.66%, and 14.94% improvements in SNR, PSNR, and SSIM, respectively, compared to the conventional methods, while NRMSE values were reduced by an average of 37.22%. In conclusion, the proposed out-painting method offers a promising solution for mitigating truncation artifacts in s-IGDT images and improving the clinical availability of s-IGDT.
(This article belongs to the Section Biomedical Engineering)
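Editorial note: the mask-restricted generator objective described above can be sketched roughly as an L1 reconstruction term on truncated pixels plus a small adversarial term; the loss form and weighting are assumptions, not the paper's exact formulation.

```python
# Sketch: restricting generator learning to truncated regions with a binary
# mask (mask == 1 where the image is truncated). Weights are assumptions.
import torch
import torch.nn.functional as F

def outpaint_loss(generated, target, mask, critic_fake, adv_weight=0.01):
    # Composite: keep observed pixels, fill truncated pixels from the generator
    composite = mask * generated + (1 - mask) * target
    recon = F.l1_loss(mask * generated, mask * target)  # truncated region only (zeros elsewhere)
    adv = -critic_fake.mean()                           # generator adversarial term
    return recon + adv_weight * adv, composite
```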

24 pages, 4465 KiB  
Article
A Deep Learning-Based Echo Extrapolation Method by Fusing Radar Mosaic and RMAPS-NOW Data
by Shanhao Wang, Zhiqun Hu, Fuzeng Wang, Ruiting Liu, Lirong Wang and Jiexin Chen
Remote Sens. 2025, 17(14), 2356; https://doi.org/10.3390/rs17142356 - 9 Jul 2025
Viewed by 326
Abstract
Radar echo extrapolation is a critical forecasting tool in the field of meteorology, playing an especially vital role in nowcasting and weather modification operations. In recent years, spatiotemporal sequence prediction models based on deep learning have garnered significant attention and achieved notable progress in radar echo extrapolation. However, most of these extrapolation network architectures are built upon convolutional neural networks, using radar echo images as input. Typically, radar echo intensity values ranging from −5 to 70 dBZ with a resolution of 5 dBZ are converted into 0–255 grayscale images from pseudo-color representations, which inevitably results in the loss of important echo details. Furthermore, as the extrapolation time increases, the smoothing effect inherent to convolution operations leads to increasingly blurred predictions. To address the algorithmic limitations of deep learning-based echo extrapolation models, this study introduces three major improvements: (1) A Deep Convolutional Generative Adversarial Network (DCGAN) is integrated into the ConvLSTM-based extrapolation model to construct a DCGAN-enhanced architecture, significantly improving the quality of radar echo extrapolation; (2) Considering that the evolution of radar echoes is closely related to the surrounding meteorological environment, the study incorporates specific physical variable products from the initial zero-hour field of RMAPS-NOW (the Rapid-update Multiscale Analysis and Prediction System—NOWcasting subsystem), developed by the Institute of Urban Meteorology, China. These variables are encoded jointly with high-resolution (0.5 dB) radar mosaic data to form multiple radar cells as input. A multi-channel radar echo extrapolation network architecture (MR-DCGAN) is then designed based on the DCGAN framework; (3) Since radar echo decay becomes more prominent over longer extrapolation horizons, this study departs from previous approaches that use a single model to extrapolate 120 min. Instead, it customizes time-specific loss functions for spatiotemporal attenuation correction and independently trains 20 separate models to achieve the full 120 min extrapolation. The dataset consists of radar composite reflectivity mosaics over North China within the range of 116.10–117.50°E and 37.77–38.77°N, collected from June to September during 2018–2022. A total of 39,000 data samples were matched with the initial zero-hour fields from RMAPS-NOW, with 80% (31,200 samples) used for training and 20% (7800 samples) for testing. Based on the ConvLSTM and the proposed MR-DCGAN architecture, 20 extrapolation models were trained using four different input encoding strategies. The models were evaluated using the Critical Success Index (CSI), Probability of Detection (POD), and False Alarm Ratio (FAR). Compared to the baseline ConvLSTM-based extrapolation model without physical variables, the models trained with the MR-DCGAN architecture achieved, on average, 18.59%, 8.76%, and 11.28% higher CSI values, 19.46%, 19.21%, and 19.18% higher POD values, and 19.85%, 11.48%, and 9.88% lower FAR values under the 20 dBZ, 30 dBZ, and 35 dBZ reflectivity thresholds, respectively. Among all tested configurations, the model that incorporated three physical variables—relative humidity (rh), u-wind, and v-wind—demonstrated the best overall performance across various thresholds, with CSI and POD values improving by an average of 16.75% and 24.75%, respectively, and FAR reduced by 15.36%. Moreover, the SSIM of the MR-DCGAN models demonstrates a more gradual decline and maintains higher overall values, indicating superior capability in preserving echo structural features. Meanwhile, the comparative experiments demonstrate that the MR-DCGAN (u, v + rh) model outperforms the MR-ConvLSTM (u, v + rh) model in terms of evaluation metrics. In summary, the model trained with the MR-DCGAN architecture effectively enhances the accuracy of radar echo extrapolation.
(This article belongs to the Special Issue Advance of Radar Meteorology and Hydrology II)
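Editorial note: the three verification scores cited above have standard contingency-table definitions, sketched below for a single reflectivity threshold; only the example threshold value is an assumption.

```python
# Standard nowcasting verification scores, computed from a binary
# contingency table at a reflectivity threshold (e.g., 20 dBZ).
import numpy as np

def csi_pod_far(pred, obs, thresh=20.0):
    hits = np.sum((pred >= thresh) & (obs >= thresh))
    misses = np.sum((pred < thresh) & (obs >= thresh))
    false_alarms = np.sum((pred >= thresh) & (obs < thresh))
    csi = hits / (hits + misses + false_alarms)   # Critical Success Index
    pod = hits / (hits + misses)                  # Probability of Detection
    far = false_alarms / (hits + false_alarms)    # False Alarm Ratio
    return csi, pod, far
```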

17 pages, 5418 KiB  
Article
DCCopGAN: Deep Convolutional Copula-GAN for Unsupervised Multi-Sensor Anomaly Detection in Industrial Gearboxes
by Bowei Ge, Ye Li and Guangqiang Yin
Electronics 2025, 14(13), 2631; https://doi.org/10.3390/electronics14132631 - 29 Jun 2025
Viewed by 309
Abstract
The gearbox, a key transmission device in industrial applications, can cause severe vibrations or failures when anomalies occur. With increasing industrial automation complexity, precise anomaly detection is crucial. This paper introduces DCCopGAN, a novel framework that uses a deep convolutional copula-generative adversarial network for unsupervised multi-sensor anomaly detection in industrial gearboxes. Firstly, a Deep Convolutional Generative Adversarial Network (DCGAN) generator is trained on high-dimensional normal operational data from multiple sensors to learn their underlying distribution, enabling the calculation of reconstruction errors for input samples. Then, these reconstruction errors are analyzed by Copula-Based Outlier Detection (CopOD), an efficient non-parametric technique, to identify anomalies. In the testing phase, reconstruction errors for test samples are similarly computed, normalized, and then evaluated by the CopOD mechanism to assign anomaly scores and detect deviations from normal behavior. The proposed DCCopGAN framework has been validated on a real gearbox dataset, where experimental results demonstrate its superior anomaly detection performance over other methods.
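Editorial note: a rough sketch of this two-stage scoring pipeline follows, assuming a callable reconstruction function and using pyod's COPOD implementation as a stand-in for the paper's CopOD step.

```python
# Sketch: per-sample reconstruction errors from a trained generator, then
# copula-based outlier scoring. The generator interface is an assumption.
import numpy as np
from pyod.models.copod import COPOD

def anomaly_scores(reconstruct, X_train, X_test):
    # reconstruct(X) -> generator's reconstruction of X (assumed callable)
    err_train = np.abs(X_train - reconstruct(X_train)).reshape(len(X_train), -1)
    err_test = np.abs(X_test - reconstruct(X_test)).reshape(len(X_test), -1)
    detector = COPOD().fit(err_train)             # model errors of normal data
    return detector.decision_function(err_test)   # higher = more anomalous
```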

17 pages, 1856 KiB  
Article
Exploring Bioimage Synthesis and Detection via Generative Adversarial Networks: A Multi-Faceted Case Study
by Valeria Sorgente, Dante Biagiucci, Mario Cesarelli, Luca Brunese, Antonella Santone, Fabio Martinelli and Francesco Mercaldo
J. Imaging 2025, 11(7), 214; https://doi.org/10.3390/jimaging11070214 - 27 Jun 2025
Viewed by 219
Abstract
Background: Generative Adversarial Networks (GANs), thanks to their great versatility, have a plethora of applications in biomedical imaging, with the goal of simulating complex pathological conditions and creating clinical data used for training advanced machine learning models. The ability to generate high-quality synthetic clinical data not only addresses issues related to the scarcity of annotated bioimages but also supports the continuous improvement of diagnostic tools. Method: We propose a two-step method aimed at detecting whether a bioimage is fake or real. The first step is bioimage generation using a Deep Convolutional GAN, while the second step involves training and testing a set of machine learning models to distinguish between real and generated bioimages. Results: We evaluate our approach on six different datasets. We observe notable results, demonstrating the ability of the Deep Convolutional GAN to generate realistic synthetic images for some specific bioimages. However, for other bioimages, the accuracy does not align with the expected trend, indicating challenges in generating images that closely resemble real ones. Conclusions: This study highlights both the potential and limitations of GANs in generating realistic bioimages. Future work will focus on improving generation quality and detection accuracy across different datasets.
(This article belongs to the Section Medical Imaging)
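Editorial note: a minimal sketch of the second step under stated assumptions, using flattened pixels as features and a random forest as one candidate classifier (the abstract specifies neither).

```python
# Sketch: training a classic ML classifier to separate real bioimages from
# DCGAN-generated ones. Feature choice and model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def real_vs_fake_accuracy(real_imgs, fake_imgs):
    n_real, n_fake = len(real_imgs), len(fake_imgs)
    X = np.concatenate([real_imgs, fake_imgs]).reshape(n_real + n_fake, -1)
    y = np.array([1] * n_real + [0] * n_fake)      # 1 = real, 0 = generated
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    return accuracy_score(yte, clf.predict(Xte))
```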

20 pages, 11512 KiB  
Article
A Generative Urban Form Design Framework Based on Deep Convolutional GANs and Landscape Pattern Metrics for Sustainable Renewal in Highly Urbanized Cities
by Shencheng Xu, Haitao Jiang and Hanyang Wang
Sustainability 2025, 17(10), 4548; https://doi.org/10.3390/su17104548 - 16 May 2025
Viewed by 554
Abstract
The iterative process of urban development often produces fragmented renewal zones that disrupt the continuity of urban morphology, undermining both cultural identity and economic cohesion. Addressing this challenge, this study proposes a generative design framework based on Deep Convolutional Generative Adversarial Networks (DCGANs) to predict and regenerate urban morphology in alignment with existing spatial contexts. A dataset was constructed from highly urbanized city centers and used to train a DCGAN model. To evaluate model performance, seven landscape pattern indices—LPI, PLAND, LSI, MPFD, AI, PLADJ, and NP—were employed to quantify changes in scale, shape, compactness, fragmentation, and spatial adjacency. Results show that the model accurately predicts morphological patterns and captures the underlying spatial logic of developed urban areas, demonstrating strong sensitivity to local form characteristics and enhancing the feasibility of sustainable urban renewal. Nonetheless, the model's generalizability is constrained by inter-city morphological heterogeneity, highlighting the need for region-specific adaptation. This work contributes a data-driven approach to urban morphology research and offers a scalable framework for form-based, sustainability-oriented urban design.
(This article belongs to the Section Sustainable Urban and Rural Development)
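Editorial note: two of the listed indices, NP and PLAND, have simple raster definitions and can be sketched as follows; the binary built-up raster and default 4-connectivity are assumptions.

```python
# Sketch: NP (number of patches, via connected-component labeling) and
# PLAND (percentage of landscape occupied by the class) on a binary raster.
import numpy as np
from scipy import ndimage

def np_and_pland(binary_raster):
    _, num_patches = ndimage.label(binary_raster)   # 4-connectivity by default
    pland = 100.0 * binary_raster.sum() / binary_raster.size
    return num_patches, pland

np_val, pland = np_and_pland(np.random.rand(128, 128) > 0.7)  # toy raster
```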

16 pages, 9080 KiB  
Article
Drainage Network Generation for Urban Pluvial Flooding (UPF) Using Generative Adversarial Networks (GANs) and GIS Data
by Muhammad Nasar Ahmad, Hariklia D. Skilodimou, Fakhrul Islam, Akib Javed and George D. Bathrellos
Sustainability 2025, 17(10), 4380; https://doi.org/10.3390/su17104380 - 12 May 2025
Viewed by 556
Abstract
Mapping urban pluvial flooding (UPF) in data-scarce regions poses significant challenges, particularly when drainage systems are inadequate or outdated. These limitations hinder effective flood mitigation and risk assessment. This study proposes an innovative approach to address these challenges by integrating deep learning (DL) models with traditional methods. First, deep convolutional generative adversarial networks (DCGANs) were employed to enhance drainage network data generation. Second, deep recurrent neural networks (DRNNs) and multi-criteria decision analysis (MCDA) methods were implemented to assess UPF. The study compared the performance of these approaches, highlighting the potential of DL models in providing more accurate and robust flood mapping outcomes. The methodology was applied to Lahore, Pakistan—a rapidly urbanizing and data-scarce region frequently impacted by UPF during monsoons. High-resolution ALOS PALSAR DEM data were utilized to extract natural drainage networks, while synthetic datasets generated by GANs addressed the lack of historical flood data. Results demonstrated the superiority of DL-based approaches over traditional MCDA methods, showcasing their potential for broader applicability in similar regions worldwide. This research emphasizes the role of DL models in advancing urban flood mapping, providing valuable insights for urban planners and policymakers to mitigate flooding risks and improve resilience in vulnerable regions.

27 pages, 6725 KiB  
Article
SIR-DCGAN: An Attention-Guided Robust Watermarking Method for Remote Sensing Image Protection Using Deep Convolutional Generative Adversarial Networks
by Shaoliang Pan, Xiaojun Yin, Mingrui Ding and Pengshuai Liu
Electronics 2025, 14(9), 1853; https://doi.org/10.3390/electronics14091853 - 1 May 2025
Viewed by 716
Abstract
Ensuring the security of remote sensing images is essential to prevent unauthorized access, tampering, and misuse. Deep learning-based digital watermarking offers a promising solution by embedding imperceptible information to protect data integrity. This paper proposes SIR-DCGAN, an attention-guided robust watermarking method for remote sensing image protection. It incorporates an IR-FFM feature fusion module to enhance feature reuse across different layers and an SE-AM attention mechanism to emphasize critical watermark features. Additionally, a noise simulation sub-network is introduced to improve resistance against common and combined attacks. The proposed method achieves high imperceptibility and robustness while maintaining low computational cost. Extensive experiments on both remote sensing and natural image datasets validate its effectiveness, with performance consistently surpassing existing approaches. These results demonstrate the practicality and reliability of SIR-DCGAN for secure image distribution and copyright protection.
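Editorial note: the SE-AM module builds on squeeze-and-excitation channel attention, sketched below; the reduction ratio and placement are assumptions, not the paper's exact SE-AM design.

```python
# Sketch of a squeeze-and-excitation (SE) channel-attention block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pool
        return x * w[:, :, None, None]         # excite: reweight channels

out = SEBlock(64)(torch.randn(2, 64, 32, 32))  # smoke test
```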

25 pages, 15919 KiB  
Article
Automated Detection Method for Bolt Detachment of Wind Turbines in Low-Light Scenarios
by Jiayi Deng, Yong Yao, Mumin Rao, Yi Yang, Chunkun Luo, Zhenyan Li, Xugang Hua and Bei Chen
Energies 2025, 18(9), 2197; https://doi.org/10.3390/en18092197 - 25 Apr 2025
Viewed by 352
Abstract
Tower bolts play a crucial role as connecting components in wind turbines and are of great interest for health monitoring systems. Non-contact monitoring techniques offer superior efficiency, convenience, and intelligence compared to contact-based methods. However, the precision and robustness of non-contact monitoring are significantly impacted by suboptimal lighting conditions inside the wind turbine tower. To address this problem, this article proposes an automated detection method for bolt detachment in wind turbines under low-light scenarios. The approach leverages the deep convolutional generative adversarial network (DCGAN) to expand and augment the small-sample bolt dataset. Transfer learning is then applied to train the Zero-DCE++ low-light enhancement model and the bolt defect detection model, and the effectiveness of the proposed method is verified experimentally. The results reveal that the DCGAN can generate realistic bolt images, thereby improving the quantity and quality of the dataset. The Zero-DCE++ enhancement model significantly increases the mean brightness of low-light images, reducing the error rate of defect detection from 31.08% to 2.36%. The model's detection performance is also affected by shooting angles and distances: maintaining a shooting distance within 1.6 m and a shooting angle within 20° improves the reliability of the detection results.
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)

16 pages, 9321 KiB  
Article
Improved Deep Convolutional Generative Adversarial Network for Data Augmentation of Gas Polyethylene Pipeline Defect Images
by Zihan Zhang, Yang Wang, Nan Lin and Shengtao Ren
Appl. Sci. 2025, 15(8), 4293; https://doi.org/10.3390/app15084293 - 13 Apr 2025
Viewed by 430
Abstract
Gas polyethylene (PE) pipes have become an essential component of the urban gas pipeline network due to their long service life and corrosion resistance. To prevent safety incidents, regular monitoring of gas pipelines is crucial. Traditional inspection methods face significant challenges, including low efficiency, high costs, and limited applicability. Machine vision-based inspection methods have emerged as a key solution to these issues; however, they still encounter scarce defect samples and uneven data distribution in gas pipeline defect detection. For this reason, an improved Deep Convolutional Generative Adversarial Network (DCGAN) is proposed. By integrating Minibatch Discrimination (MD), Spectral Normalization (SN), a Self-Attention Mechanism (SAM), and the Two-Timescale Update Rule (TTUR), the proposed approach overcomes the original DCGAN's limitations, including mode collapse, low resolution of generated images, and unstable training, thereby realizing data augmentation of defective images inside the pipeline. Experimental results demonstrate the superiority of the improved algorithm in terms of image generation quality and diversity, while an ablation study validates the positive impact of each improvement. Additionally, the relationship between the number of augmented images and classification accuracy was examined, showing that classifier performance improved in all scenarios when generated defect images were included. The findings indicate that the images produced by the improved model significantly enhance defect detection accuracy and hold considerable potential for practical application.
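Editorial note: two of the listed stabilizers, SN and TTUR, are straightforward to sketch in PyTorch; the learning rates and toy network shapes below are assumptions, not the paper's configuration.

```python
# Sketch: spectral normalization (SN) on discriminator convolutions, and the
# two-timescale update rule (TTUR), i.e. separate optimizer learning rates.
import torch.nn as nn
from torch.nn.utils import spectral_norm
from torch.optim import Adam

disc = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 1, 4)),               # SN bounds the Lipschitz constant
)
gen = nn.Sequential(nn.ConvTranspose2d(100, 3, 64))   # placeholder generator

opt_d = Adam(disc.parameters(), lr=4e-4, betas=(0.0, 0.9))  # faster critic
opt_g = Adam(gen.parameters(), lr=1e-4, betas=(0.0, 0.9))   # slower generator
```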

13 pages, 2295 KiB  
Article
Seafloor Sediment Classification Using Small-Sample Multi-Beam Data Based on Convolutional Neural Networks
by Haibo Ma, Xianghua Lai, Taojun Hu, Xiaoming Fu, Xingwei Zhang and Sheng Song
J. Mar. Sci. Eng. 2025, 13(4), 671; https://doi.org/10.3390/jmse13040671 - 27 Mar 2025
Viewed by 476
Abstract
Accurate, rapid, and automatic seafloor sediment classification represents a crucial challenge in marine sediment research. To address this, our study proposes a seafloor sediment classification method integrating convolutional neural networks (CNNs) with small-sample multi-beam backscatter data. We implemented four CNN architectures for classification—LeNet, AlexNet, GoogLeNet, and VGG—all achieving an overall accuracy exceeding 92%. To overcome the scarcity of seafloor sediment acoustic image data, we applied a deep convolutional generative adversarial network (DCGAN) for data augmentation, incorporating a de-normalization and anti-normalization module into the original DCGAN framework. Through comparative analysis of the generated and original datasets using visual inspection and gray-level co-occurrence matrix methods, we substantially enhanced the similarity between synthetic and authentic images. Subsequent model training using the augmented dataset demonstrated improved classification performance across all architectures: accuracy increased by 1.88% for LeNet, 1.06% for AlexNet, 2.59% for GoogLeNet, and 2.97% for VGG16.
(This article belongs to the Section Ocean Engineering)
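Editorial note: the gray-level co-occurrence matrix comparison can be sketched with scikit-image; the distance, angle, and property choices below are illustrative assumptions.

```python
# Sketch: comparing generated vs. original acoustic images via GLCM texture
# statistics (similar stats suggest similar texture).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_stats(img_uint8):
    glcm = graycomatrix(img_uint8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return {p: graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")}

real = (np.random.rand(64, 64) * 255).astype(np.uint8)   # toy image
print(glcm_stats(real))
```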

25 pages, 705 KiB  
Systematic Review
Data Augmentation with Generative Methods for Inherited Retinal Diseases: A Systematic Review
by Jorge Machado, Ana Marta, Pedro Mestre, João Melo Beirão and António Cunha
Appl. Sci. 2025, 15(6), 3084; https://doi.org/10.3390/app15063084 - 12 Mar 2025
Viewed by 1275
Abstract
Inherited retinal diseases (IRDs) are rare and genetically diverse disorders that cause progressive vision loss and affect 1 in 3000 individuals worldwide. Their rarity and genetic variability pose a challenge for deep learning models due to the limited amount of data. Generative models offer a promising solution by creating synthetic data to improve training datasets. This study carried out a systematic literature review to investigate the use of generative models to augment data in IRDs and to assess their impact on the performance of classifiers for these diseases. Following PRISMA 2020 guidelines, searches in four databases identified 32 relevant studies, two focused on IRDs and the rest on other retinal diseases. The results indicate that generative models effectively augment small datasets. Among the techniques identified, Deep Convolutional Generative Adversarial Networks (DCGAN) and the Style-Based Generator Architecture of Generative Adversarial Networks 2 (StyleGAN2) were the most widely used. These architectures generated highly realistic and diverse synthetic data, often indistinguishable from real data, even for experts. The results highlight the need for more research into data generation in IRDs to develop robust diagnostic tools and to improve genetic studies by creating more comprehensive genetic repositories.

37 pages, 34201 KiB  
Article
Measuring the Level of Aflatoxin Infection in Pistachio Nuts by Applying Machine Learning Techniques to Hyperspectral Images
by Lizzie Williams, Pancham Shukla, Akbar Sheikh-Akbari, Sina Mahroughi and Iosif Mporas
Sensors 2025, 25(5), 1548; https://doi.org/10.3390/s25051548 - 2 Mar 2025
Cited by 5 | Viewed by 1531
Abstract
This paper investigates the use of machine learning techniques on hyperspectral images of pistachios to detect and classify different levels of aflatoxin contamination. Aflatoxins are toxic compounds produced by moulds, posing health risks to consumers. Current detection methods are invasive and contribute to food waste. This paper explores the feasibility of a non-invasive method using hyperspectral imaging and machine learning to classify aflatoxin levels accurately, potentially reducing waste and enhancing food safety. Hyperspectral imaging with machine learning has shown promise in food quality control. The paper evaluates models including Dimensionality Reduction with K-Means Clustering, Residual Networks (ResNets), Variational Autoencoders (VAEs), and Deep Convolutional Generative Adversarial Networks (DCGANs). Using a dataset from Leeds Beckett University with 300 hyperspectral images covering three aflatoxin levels (<8 ppb, >160 ppb, and >300 ppb), key wavelengths were identified that indicate the presence of contamination. Dimensionality Reduction with K-Means achieved 84.38% accuracy, while a ResNet model using the 866.21 nm wavelength reached 96.67%. The VAE and DCGAN models, though promising, were constrained by the dataset size. The findings highlight the potential of machine learning-based hyperspectral imaging for pistachio quality control; future research should focus on expanding datasets and refining models for industry application.
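Editorial note: a minimal sketch of the dimensionality-reduction-plus-K-Means baseline follows, assuming PCA as the reduction step and one cluster per contamination level (the abstract does not specify the reduction method).

```python
# Sketch: cluster hyperspectral pixels after reducing the band dimension.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

cube = np.random.rand(50, 50, 224)                  # (rows, cols, bands), synthetic
pixels = cube.reshape(-1, cube.shape[-1])           # one sample per pixel
reduced = PCA(n_components=10).fit_transform(pixels)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(reduced)
label_map = labels.reshape(cube.shape[:2])          # cluster assignment per pixel
```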

13 pages, 1650 KiB  
Technical Note
Pano-GAN: A Deep Generative Model for Panoramic Dental Radiographs
by Søren Pedersen, Sanyam Jain, Mikkel Chavez, Viktor Ladehoff, Bruna Neves de Freitas and Ruben Pauwels
J. Imaging 2025, 11(2), 41; https://doi.org/10.3390/jimaging11020041 - 2 Feb 2025
Cited by 1 | Viewed by 1851
Abstract
This paper presents the development of a generative adversarial network (GAN) for the generation of synthetic dental panoramic radiographs. While this is an exploratory study, the ultimate aim is to address the scarcity of data in dental research and education. A deep convolutional GAN (DCGAN) with the Wasserstein loss and a gradient penalty (WGAN-GP) was trained on a dataset of 2322 radiographs of varying quality. The focus of this study was on the dentoalveolar part of the radiographs; other structures were cropped out. Significant data cleaning and preprocessing were conducted to standardize the input formats while maintaining anatomical variability. Four candidate models were identified by varying the number of critic iterations, the number of features, and the use of denoising prior to training. To assess the quality of the generated images, a clinical expert evaluated a set of generated synthetic radiographs using a ranking system based on visibility and realism, with scores ranging from 1 (very poor) to 5 (excellent). It was found that most generated radiographs showed moderate depictions of dentoalveolar anatomical structures, although they were considerably impaired by artifacts. The mean evaluation scores showed a trade-off between the model trained on non-denoised data, which showed the highest subjective quality for finer structures, such as the mandibular canal and trabecular bone, and one of the models trained on denoised data, which offered better overall image quality, especially in terms of clarity, sharpness, and overall realism. These outcomes serve as a foundation for further research into GAN architectures for dental imaging applications.
(This article belongs to the Special Issue Tools and Techniques for Improving Radiological Imaging Applications)
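Editorial note: the WGAN-GP objective referenced above adds a gradient penalty with a standard form, sketched below; the common penalty weight of 10 is an assumption here.

```python
# Sketch of the WGAN-GP gradient penalty: the critic's gradient norm on
# real/fake interpolates is pushed toward 1.
import torch

def gradient_penalty(critic, real, fake, weight=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mix)
    grad, = torch.autograd.grad(score, mix, torch.ones_like(score),
                                create_graph=True)   # keep graph for backprop
    return weight * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```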

17 pages, 7198 KiB  
Article
DCGAN-Based Feature Augmentation: A Novel Approach for Efficient Mineralization Prediction Through Data Generation
by Soran Qaderi, Abbas Maghsoudi, Amin Beiranvand Pour, Abdorrahman Rajabi and Mahyar Yousefi
Minerals 2025, 15(1), 71; https://doi.org/10.3390/min15010071 - 13 Jan 2025
Cited by 10 | Viewed by 1371
Abstract
This study aims to improve the efficiency of mineral exploration by introducing a novel application of Deep Convolutional Generative Adversarial Networks (DCGANs) to augment geological evidence layers. By training a DCGAN model with existing geological, geochemical, and remote sensing data, we have synthesized new, plausible layers of evidence that reveal unrecognized patterns and correlations. This approach deepens the understanding of the controlling factors in the formation of mineral deposits. The implications of this research are significant and could improve the efficiency and success rate of mineral exploration projects by providing more reliable and comprehensive data for decision-making. The predictive map created using the proposed feature augmentation technique covered all known deposits in only 18% of the study area.