Search Results (13)

Search Parameters:
Keywords = GAN-physics fusion

35 pages, 1231 KiB  
Review
Toward Intelligent Underwater Acoustic Systems: Systematic Insights into Channel Estimation and Modulation Methods
by Imran A. Tasadduq and Muhammad Rashid
Electronics 2025, 14(15), 2953; https://doi.org/10.3390/electronics14152953 - 24 Jul 2025
Viewed by 282
Abstract
Underwater acoustic (UWA) communication supports many critical applications but still faces several physical-layer signal processing challenges. In response, recent advances in machine learning (ML) and deep learning (DL) offer promising solutions to improve signal detection, modulation adaptability, and classification accuracy. These developments highlight the need for a systematic evaluation to compare various ML/DL models and assess their performance across diverse underwater conditions. However, most existing reviews on ML/DL-based UWA communication focus on isolated approaches rather than integrated system-level perspectives, which limits cross-domain insights and reduces their relevance to practical underwater deployments. Consequently, this systematic literature review (SLR) synthesizes 43 studies (2020–2025) on ML and DL approaches for UWA communication, covering channel estimation, adaptive modulation, and modulation recognition across both single- and multi-carrier systems. The findings reveal that models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) enhance channel estimation performance, achieving error reductions and bit error rate (BER) gains in the range of 10⁻³ to 10⁻⁶. Adaptive modulation techniques incorporating support vector machines (SVMs), CNNs, and reinforcement learning (RL) attain classification accuracies exceeding 98% and throughput improvements of up to 25%. For modulation recognition, architectures like sequence CNNs, residual networks, and hybrid convolutional–recurrent models achieve up to 99.38% accuracy with latency below 10 ms. These performance metrics underscore the viability of ML/DL-based solutions in optimizing physical-layer tasks for real-world UWA deployments. Finally, the SLR identifies key challenges in UWA communication, including high complexity, limited data, fragmented performance metrics, deployment realities, energy constraints, and poor scalability. It also outlines future directions such as lightweight models, physics-informed learning, advanced RL strategies, intelligent resource allocation, and robust feature fusion to build reliable and intelligent underwater systems.
(This article belongs to the Section Artificial Intelligence)
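As a concrete illustration of the channel estimation approaches this review surveys, the sketch below maps noisy pilot observations to channel estimates with a small LSTM. It is a minimal PyTorch example under assumed shapes and synthetic data, not a model from any of the 43 reviewed studies.

```python
# Minimal sketch of a DL-based UWA channel estimator: pilot observations
# are mapped to channel estimates. Shapes, layer sizes, and the synthetic
# data are illustrative assumptions only; the reviewed papers use a
# variety of CNN/LSTM/GAN architectures.
import torch
import torch.nn as nn

class LSTMChannelEstimator(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        # Real/imaginary parts of each pilot subcarrier as 2 input features.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # estimated real/imag per subcarrier

    def forward(self, pilots: torch.Tensor) -> torch.Tensor:
        # pilots: (batch, n_pilots, 2) -> channel estimate of the same shape
        out, _ = self.lstm(pilots)
        return self.head(out)

model = LSTMChannelEstimator()
noisy_pilots = torch.randn(8, 64, 2)   # stand-in for received pilots
true_channel = torch.randn(8, 64, 2)   # stand-in for ground-truth CSI
loss = nn.functional.mse_loss(model(noisy_pilots), true_channel)
loss.backward()                        # one illustrative training step
print(loss.item())
```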

14 pages, 1438 KiB  
Article
CDBA-GAN: A Conditional Dual-Branch Attention Generative Adversarial Network for Robust Sonar Image Generation
by Wanzeng Kong, Han Yang, Mingyang Jia and Zhe Chen
Appl. Sci. 2025, 15(13), 7212; https://doi.org/10.3390/app15137212 - 26 Jun 2025
Viewed by 296
Abstract
The acquisition of real-world sonar data necessitates substantial investments of manpower, material resources, and financial capital, rendering it challenging to obtain sufficient authentic samples for sonar-related research tasks. Consequently, sonar image simulation technology has become increasingly vital in the field of sonar data analysis. Traditional sonar simulation methods predominantly focus on low-level physical modeling, which often suffers from limited image controllability and diminished fidelity in multi-category and multi-background scenarios. To address these limitations, this paper proposes a Conditional Dual-Branch Attention Generative Adversarial Network (CDBA-GAN). The framework comprises three key innovations: a conditional information fusion module, a dual-branch attention feature fusion mechanism, and cross-layer feature reuse. By integrating encoded conditional information with the original input data of the generative adversarial network, the fusion module enables precise control over the generation of sonar images under specific conditions. A hierarchical attention mechanism is implemented, sequentially performing channel-level and pixel-level attention operations. This establishes distinct weight matrices at both granularities, thereby enhancing the correlation between corresponding elements. The dual-branch attention features are fused via a skip-connection architecture, facilitating efficient feature reuse across network layers. The experimental results demonstrate that the proposed CDBA-GAN generates condition-specific sonar images with a significantly lower Fréchet inception distance (FID) compared to existing methods. Notably, the framework exhibits robust imaging performance under noisy interference and outperforms state-of-the-art models (e.g., DCGAN, WGAN, SAGAN) in fidelity across four categorical conditions, as quantified by FID metrics.
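The sequential channel-level and pixel-level attention the abstract describes follows a common pattern; a minimal PyTorch sketch of that pattern is below. The reduction ratio, 1x1 convolutions, and skip fusion are illustrative assumptions, not the paper's exact modules.

```python
# Illustrative sketch of sequential channel-level and pixel-level
# attention with skip-connection fusion, the general pattern described
# above. Layer choices are assumptions, not CDBA-GAN's implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.mlp(x)  # per-channel weight matrix

class PixelAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.conv(x)  # per-pixel weight matrix

class DualBranchAttention(nn.Module):
    """Channel attention, then pixel attention, fused by a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)

    def forward(self, x):
        return x + self.pa(self.ca(x))  # skip connection reuses input features

feats = torch.randn(2, 64, 32, 32)
print(DualBranchAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```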

21 pages, 4536 KiB  
Article
Feature Attention Cycle Generative Adversarial Network: A Multi-Scene Image Dehazing Method Based on Feature Attention
by Na Li, Na Liu, Yanan Duan and Yuyang Chai
Appl. Sci. 2025, 15(10), 5374; https://doi.org/10.3390/app15105374 - 12 May 2025
Viewed by 358
Abstract
For image dehazing, it is difficult to obtain datasets of paired hazy and haze-free images. Currently, most algorithms are trained on synthetic datasets with insufficient complexity, which leads to model overfitting. At the same time, most current algorithms ignore the physical characteristics of fog in the real world; that is, the degree of fog is related to the depth of field and the scattering coefficient. Moreover, most current dehazing algorithms consider only the dehazing of land scenes and ignore maritime scenes. To address these problems, we propose a multi-scene image dehazing algorithm based on an improved cycle generative adversarial network (CycleGAN). The generator structure is improved based on the CycleGAN model, and a feature fusion attention module is proposed. This module obtains relevant contextual information by extracting different levels of features. The obtained feature information is fused using the idea of residual connections. An attention mechanism is introduced in this module to retain more feature information by assigning different weights. During the training process, the atmospheric scattering model is established to guide the learning of the neural network using its prior information. The experimental results show that, compared with the baseline model, the peak signal-to-noise ratio (PSNR) increases by 32.10%, the structural similarity index (SSIM) increases by 31.07%, the information entropy (IE) increases by 4.79%, and the NIQE index is reduced by 20.1% in quantitative comparison. Meanwhile, the method demonstrates better visual effects than other advanced algorithms in qualitative comparisons on synthetic and real datasets.
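The atmospheric scattering prior mentioned above is usually written as I = J·t + A·(1 − t) with transmission t = exp(−β·d). The numeric sketch below renders and inverts that model; the β and A values are illustrative assumptions.

```python
# Minimal numeric sketch of the atmospheric scattering model used as a
# physical prior: I = J*t + A*(1 - t), with transmission t = exp(-beta*d)
# depending on scattering coefficient beta and depth of field d.
# The constants below are illustrative assumptions.
import numpy as np

def synthesize_haze(J: np.ndarray, depth: np.ndarray,
                    beta: float = 1.2, A: float = 0.8) -> np.ndarray:
    """Render a hazy image I from a clear image J and a depth map."""
    t = np.exp(-beta * depth)[..., None]      # per-pixel transmission
    return J * t + A * (1.0 - t)

def dehaze(I: np.ndarray, depth: np.ndarray,
           beta: float = 1.2, A: float = 0.8) -> np.ndarray:
    """Invert the model to recover J when beta, A, and depth are known."""
    t = np.clip(np.exp(-beta * depth), 1e-3, 1.0)[..., None]
    return (I - A * (1.0 - t)) / t

J = np.random.rand(4, 4, 3)          # stand-in clear image
d = np.random.rand(4, 4) * 2.0       # stand-in depth of field
I = synthesize_haze(J, d)
print(np.allclose(dehaze(I, d), J))  # True: the model inverts exactly
```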

45 pages, 34765 KiB  
Review
Comparative Analysis of Traditional and Deep Learning Approaches for Underwater Remote Sensing Image Enhancement: A Quantitative Study
by Yunsheng Ma, Yanan Cheng and Dapeng Zhang
J. Mar. Sci. Eng. 2025, 13(5), 899; https://doi.org/10.3390/jmse13050899 - 30 Apr 2025
Viewed by 773
Abstract
Underwater remote sensing image enhancement is complicated by low illumination, color bias, and blurriness, affecting deep-sea monitoring and marine resource development. This study compares a multi-scale fusion-enhanced physical model with deep learning algorithms to optimize intelligent processing. The physical model, based on the Jaffe–McGlamery model, integrates multi-scale histogram equalization, wavelength compensation, and Laplacian sharpening, using cluster analysis to target enhancements. It performs well in shallow, stable waters (turbidity < 20 NTU, depth < 10 m, PSNR = 12.2) but struggles in complex environments (turbidity > 30 NTU). Deep learning models, including water-net, UWCNN, UWCycleGAN, and U-shape Transformer, excel in dynamic conditions, achieving UIQM = 0.24, though requiring GPU support for real-time use. Evaluated on the UIEB dataset (890 images), the physical model suits specific scenarios, while deep learning adapts better to variable underwater settings. These findings offer a theoretical and technical basis for underwater image enhancement and support sustainable marine resource use.
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
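Two building blocks of the physical pipeline described above, wavelength compensation and histogram equalization, can be sketched in a few lines of NumPy. The per-channel gains are illustrative assumptions (red is boosted most because it attenuates fastest underwater), not the study's calibrated values.

```python
# Sketch of per-wavelength (per-channel) compensation followed by
# histogram equalization. Gain values are illustrative assumptions.
import numpy as np

def equalize_channel(c: np.ndarray) -> np.ndarray:
    """Plain histogram equalization for one uint8 channel."""
    hist = np.bincount(c.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / max(cdf.max() - cdf.min(), 1)
    return cdf[c].astype(np.uint8)

def enhance(img: np.ndarray, gains=(1.6, 1.1, 1.0)) -> np.ndarray:
    """Wavelength compensation (R, G, B gains), then equalization."""
    out = np.clip(img.astype(np.float32) * np.array(gains), 0, 255)
    out = out.astype(np.uint8)
    return np.stack([equalize_channel(out[..., i]) for i in range(3)], axis=-1)

img = (np.random.rand(64, 64, 3) * 180).astype(np.uint8)  # dim stand-in image
print(enhance(img).shape, enhance(img).dtype)
```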

16 pages, 2486 KiB  
Article
Remaining Useful Life Prediction Based on Wear Monitoring with Multi-Attribute GAN Augmentation
by Xiaojun Zhu, Yan Pan, Bin Lan, He Wang and Huixin Huang
Lubricants 2025, 13(4), 145; https://doi.org/10.3390/lubricants13040145 - 25 Mar 2025
Viewed by 511
Abstract
With the growing imperative for advanced prognostics and health management (PHM) systems, remaining useful life (RUL) prediction through lubricating oil monitoring has become pivotal for intelligent preventive maintenance. However, existing methodologies face dual challenges: the inherent sparsity of wear monitoring data and the complex interdependencies among multiple indicators, leading to compromised prediction accuracy that fails to satisfy reliability requirements. To address these limitations, this study proposes a novel multi-indicator RUL prediction framework with three technical innovations. First, a fuzzy probabilistic characterization method is proposed to quantify the multivariate wear state of the lubricating system, using the weighted fusion of multi-source indicators. Second, a novel CMC-GAN (Centralized Multi-channel Constrained Generative Adversarial Network) architecture is designed, which augments the data using physical knowledge, alleviating data sparsity while preserving the key interdependencies among indicators. Furthermore, we establish a Wiener-process-based degradation model with time-varying coefficients to capture stochastic wear deterioration patterns. The expectation-maximization algorithm with Bayesian updating is employed for real-time parameter calibration, enabling a dynamic derivation of the probability density functions for RUL estimation. Finally, the validity and practicality of the proposed model are verified through actual engineering case studies.
(This article belongs to the Special Issue Wear Mechanism Identification and State Prediction of Tribo-Parts)
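For a Wiener degradation model X(t) = X(0) + λt + σB(t) with constant drift, the RUL to a failure threshold w follows an inverse Gaussian law; the sketch below evaluates that density numerically. All parameter values are illustrative assumptions, and the paper's time-varying coefficients with EM/Bayesian calibration are not reproduced here.

```python
# Numeric sketch of the Wiener-process degradation idea: for constant
# drift lam and diffusion sigma, the first passage time of the wear path
# to threshold w follows an inverse Gaussian law, evaluated below.
# All parameter values are illustrative assumptions, not calibrated ones.
import numpy as np

def rul_pdf(t: np.ndarray, x_now: float, w: float,
            lam: float, sigma: float) -> np.ndarray:
    """Inverse-Gaussian PDF of remaining useful life given current wear."""
    margin = w - x_now                       # distance to the threshold
    return (margin / (sigma * np.sqrt(2.0 * np.pi * t ** 3))
            * np.exp(-(margin - lam * t) ** 2 / (2.0 * sigma ** 2 * t)))

t = np.linspace(1e-3, 500.0, 5000)
pdf = rul_pdf(t, x_now=3.0, w=10.0, lam=0.08, sigma=0.4)
mean_rul = float(np.sum(t * pdf) * (t[1] - t[0]))   # expected RUL
print(round(mean_rul, 1))  # close to (w - x_now) / lam = 87.5 time units
```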

27 pages, 39507 KiB  
Review
Deep Learning Applications in Ionospheric Modeling: Progress, Challenges, and Opportunities
by Renzhong Zhang, Haorui Li, Yunxiao Shen, Jiayi Yang, Wang Li, Dongsheng Zhao and Andong Hu
Remote Sens. 2025, 17(1), 124; https://doi.org/10.3390/rs17010124 - 2 Jan 2025
Cited by 10 | Viewed by 5888
Abstract
With the continuous advancement of deep learning algorithms and the rapid growth of computational resources, deep learning technology has undergone numerous milestone developments, evolving from simple BP neural networks into more complex and powerful network models such as CNNs, LSTMs, RNNs, and GANs. In recent years, the application of deep learning technology in ionospheric modeling has achieved breakthrough advancements, significantly impacting navigation, communication, and space weather forecasting. Nevertheless, due to limitations in observational networks and the dynamic complexity of the ionosphere, deep learning-based ionospheric models still face challenges in terms of accuracy, resolution, and interpretability. This paper systematically reviews the development of deep learning applications in ionospheric modeling, summarizing findings that demonstrate how integrating multi-source data and employing multi-model ensemble strategies have substantially improved the stability of spatiotemporal predictions, especially in handling complex space weather events. Additionally, this study explores the potential of deep learning in ionospheric modeling for the early warning of geological hazards such as earthquakes, volcanic eruptions, and tsunamis, offering new insights for constructing ionospheric-geological activity warning models. Looking ahead, research will focus on developing hybrid models that integrate physical modeling with deep learning, exploring adaptive learning algorithms and multi-modal data fusion techniques to enhance long-term predictive capabilities, particularly in addressing the impact of climate change on the ionosphere. Overall, deep learning provides a powerful tool for ionospheric modeling and shows promising prospects for application in early warning systems and future research.
(This article belongs to the Special Issue Advances in GNSS Remote Sensing for Ionosphere Observation)
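As a toy illustration of the multi-model ensemble strategy the review highlights, the sketch below combines stand-in TEC predictors with inverse-MSE weights estimated on a validation set. Both the predictors and the weighting rule are assumptions for illustration.

```python
# Minimal sketch of a multi-model ensemble: several ionospheric TEC
# predictors are combined with weights set by validation error. The
# stand-in models and the inverse-MSE weighting are assumptions.
import numpy as np

def ensemble_predict(models, weights, x):
    """Weighted average of individual model predictions."""
    preds = np.stack([m(x) for m in models])
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), preds, axes=1)

def inverse_mse_weights(models, x_val, y_val):
    """Weight each model by 1/MSE on a validation set."""
    return [1.0 / (np.mean((m(x_val) - y_val) ** 2) + 1e-9) for m in models]

# Stand-in "models": crude TEC-like predictors over a diurnal phase input
models = [lambda x: 10 + 2.0 * np.sin(x), lambda x: 11 + 1.8 * np.sin(x)]
x_val = np.linspace(0, 6, 50)
y_val = 10.5 + 1.9 * np.sin(x_val)          # synthetic "truth" (TECU)
w = inverse_mse_weights(models, x_val, y_val)
print(ensemble_predict(models, w, x_val)[:3])
```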

30 pages, 17378 KiB  
Article
High-Fidelity Infrared Remote Sensing Image Generation Method Coupled with the Global Radiation Scattering Mechanism and Pix2PixGAN
by Yue Li, Xiaorui Wang, Chao Zhang, Zhonggen Zhang and Fafa Ren
Remote Sens. 2024, 16(23), 4350; https://doi.org/10.3390/rs16234350 - 21 Nov 2024
Cited by 1 | Viewed by 1209
Abstract
To overcome the problems in existing infrared remote sensing image generation methods, which make it difficult to combine high fidelity and high efficiency, we propose a High-Fidelity Infrared Remote Sensing Image Generation Method Coupled with the Global Radiation Scattering Mechanism and Pix2PixGAN (HFIRSIGM_GRSMP) in this paper. Firstly, based on the global radiation scattering mechanism, the HFIRSIGM_GRSMP model is constructed to address the problem of accurately characterizing the factors that affect fidelity, such as the random distribution of the radiation field, multipath scattering, and nonlinear changes, through the innovative fusion of physical models and deep learning. This model accurately characterizes the complex radiation field distribution and the image detail-feature mapping relationship from visible to infrared remote sensing. Then, 8000 pairs of image datasets were constructed based on Landsat 8 and Sentinel-2 satellite data. Finally, the experiments demonstrate that the average SSIM of images generated using HFIRSIGM_GRSMP reaches 89.16%, and all evaluation metrics show significant improvement compared to the contrast models. More importantly, this method demonstrates high accuracy and strong adaptability in generating short-wave, mid-wave, and long-wave infrared remote sensing images. This method provides a more comprehensive solution for generating high-fidelity infrared remote sensing images.
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)
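Pix2Pix-style translators of this kind pair a conditional discriminator with an L1 reconstruction term. The sketch below shows that combined objective on stand-in networks; the λ = 100 weight is the classic pix2pix default, assumed here rather than taken from the paper.

```python
# Illustrative Pix2Pix-style objective for visible-to-infrared
# translation: a conditional discriminator plus an L1 term keeping
# generated infrared imagery close to ground truth. Networks are
# reduced to minimal stand-ins.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))            # visible -> IR
D = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))            # PatchGAN-like

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
vis, ir = torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64)

fake_ir = G(vis)
pair_fake = torch.cat([vis, fake_ir], dim=1)   # condition D on the input
d_out = D(pair_fake)
# Generator loss: fool D and stay close to the real infrared image
# (lambda = 100 is the classic pix2pix weight, an assumption here).
g_loss = bce(d_out, torch.ones_like(d_out)) + 100 * l1(fake_ir, ir)
g_loss.backward()
print(float(g_loss))
```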

19 pages, 10565 KiB  
Article
AMSMC-UGAN: Adaptive Multi-Scale Multi-Color Space Underwater Image Enhancement with GAN-Physics Fusion
by Dong Chao, Zhenming Li, Wenbo Zhu, Haibing Li, Bing Zheng, Zhongbo Zhang and Weijie Fu
Mathematics 2024, 12(10), 1551; https://doi.org/10.3390/math12101551 - 16 May 2024
Cited by 2 | Viewed by 1526
Abstract
Underwater vision technology is crucial for marine exploration, aquaculture, and environmental monitoring. However, the challenging underwater conditions, including light attenuation, color distortion, reduced contrast, and blurring, pose difficulties. Current deep learning models and traditional image enhancement techniques are limited in addressing these challenges, making it difficult to acquire high-quality underwater image signals. To overcome these limitations, this study proposes an approach called adaptive multi-scale multi-color space underwater image enhancement with GAN-physics fusion (AMSMC-UGAN). AMSMC-UGAN leverages multiple color spaces (RGB, HSV, and Lab) for feature extraction, compensating for RGB’s limitations in underwater environments and enhancing the use of image information. By integrating a membership degree function to guide deep learning based on physical models, the model’s performance is improved across different underwater scenes. In addition, the introduction of a multi-scale feature extraction module deepens the granularity of image information, learns the degradation distributions within the same image content more comprehensively, and provides richer guidance for image enhancement. AMSMC-UGAN achieved maximum scores of 26.04 dB, 0.87, and 3.2004 for the PSNR, SSIM, and UIQM metrics, respectively, on real and synthetic underwater image datasets, with gains of at least 6.5%, 6%, and 1% for these metrics. Empirical evaluations on real and artificially distorted underwater image datasets demonstrate that AMSMC-UGAN outperforms existing techniques, showcasing superior performance with enhanced quantitative metrics and strong generalization capabilities.
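The multi-color-space front end described above can be sketched by converting one frame to HSV and Lab with OpenCV and stacking all nine channels as network input. The single conv layer below stands in for the paper's multi-scale extractor.

```python
# Sketch of a multi-color-space input stage: the same underwater frame
# is represented in RGB, HSV, and Lab, and the channels are stacked into
# a 9-channel tensor. The conv layer is a stand-in feature extractor.
import cv2
import numpy as np
import torch
import torch.nn as nn

def multi_color_stack(rgb_uint8: np.ndarray) -> torch.Tensor:
    hsv = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2HSV)
    lab = cv2.cvtColor(rgb_uint8, cv2.COLOR_RGB2LAB)
    stacked = np.concatenate([rgb_uint8, hsv, lab], axis=-1)  # H x W x 9
    return torch.from_numpy(stacked).permute(2, 0, 1).float() / 255.0

frame = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in frame
x = multi_color_stack(frame).unsqueeze(0)                    # 1 x 9 x 64 x 64
features = nn.Conv2d(9, 32, 3, padding=1)(x)                 # stand-in extractor
print(features.shape)
```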

19 pages, 5632 KiB  
Article
AUIE–GAN: Adaptive Underwater Image Enhancement Based on Generative Adversarial Networks
by Fengxu Guan, Siqi Lu, Haitao Lai and Xue Du
J. Mar. Sci. Eng. 2023, 11(7), 1476; https://doi.org/10.3390/jmse11071476 - 24 Jul 2023
Cited by 9 | Viewed by 2383
Abstract
Underwater optical imaging devices are often affected by the complex underwater environment and the characteristics of the water column, which leads to serious degradation and distortion of the images they capture. Deep learning-based underwater image enhancement (UIE) methods reduce the reliance on physical parameters in traditional methods and have powerful fitting capabilities, becoming a new baseline method for UIE tasks. However, the results of these methods often suffer from color distortion and lack of realism because they tend to have poor generalization and self-adaptation capabilities. Generative adversarial networks (GANs) provide a better fit and show powerful capabilities on UIE tasks. Therefore, we designed a new network structure for the UIE task based on GANs. In this work, we changed the learning of the self-attention mechanism by introducing a trainable weight to balance the effect of the mechanism, improving the self-adaptive capability of the model. In addition, we designed a residual feature extractor with multi-level residual connections for better feature recovery. To further improve the performance of the generator, we proposed a dual-path discriminator and a loss function with multiple weighted terms to help the model fit in the frequency domain, improving image quality. We evaluated our method on the UIE task using challenging real underwater image datasets and a synthetic image dataset and compared it to state-of-the-art models. The method ensures increased enhancement quality, and its enhancement effect is also relatively stable across different styles of images.
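Self-attention balanced by a trainable weight, as the abstract describes, typically follows the SAGAN pattern output = x + γ·attention(x) with γ learned from zero. The sketch below assumes that pattern; layer sizes are illustrative.

```python
# Sketch of self-attention with a trainable balance weight gamma,
# initialized at zero so the attention branch fades in during training.
# Layer sizes are illustrative assumptions, not AUIE-GAN's exact design.
import torch
import torch.nn as nn

class WeightedSelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # trainable balance weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # b, hw, c//8
        k = self.k(x).flatten(2)                   # b, c//8, hw
        attn = torch.softmax(q @ k, dim=-1)        # b, hw, hw
        v = self.v(x).flatten(2)                   # b, c, hw
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                # gamma balances the branch

x = torch.randn(1, 32, 16, 16)
print(WeightedSelfAttention(32)(x).shape)
```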

12 pages, 8991 KiB  
Article
An Underwater Image Enhancement Method for a Preprocessing Framework Based on Generative Adversarial Network
by Xiao Jiang, Haibin Yu, Yaxin Zhang, Mian Pan, Zhu Li, Jingbiao Liu and Shuaishuai Lv
Sensors 2023, 23(13), 5774; https://doi.org/10.3390/s23135774 - 21 Jun 2023
Cited by 14 | Viewed by 4322
Abstract
This paper presents an efficient underwater image enhancement method, named ECO-GAN, to address the challenges of color distortion, low contrast, and motion blur in underwater robot photography. The proposed method is built upon a preprocessing framework using a generative adversarial network. ECO-GAN incorporates a convolutional neural network that specifically targets three underwater issues: motion blur, low brightness, and color deviation. To optimize computation and inference speed, an encoder is employed to extract features, whereas different enhancement tasks are handled by dedicated decoders. Moreover, ECO-GAN employs cross-stage fusion modules between the decoders to strengthen the connection and enhance the quality of output images. The model is trained using supervised learning with paired datasets, enabling blind image enhancement without additional physical knowledge or prior information. Experimental results demonstrate that ECO-GAN effectively achieves denoising, deblurring, and color deviation removal simultaneously. Compared with methods relying on individual modules or simple combinations of multiple modules, our proposed method achieves superior underwater image enhancement and offers the flexibility for expansion into multiple underwater image enhancement functions.
(This article belongs to the Special Issue Deep Learning for Computer Vision and Image Processing Sensors)
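A minimal sketch of the shared-encoder, dedicated-decoder layout described above: features are computed once and routed to deblurring, brightness, and color-correction heads. The layers are stand-ins, not ECO-GAN's actual architecture.

```python
# Sketch of a one-encoder / multi-decoder enhancer: a shared encoder
# feeds dedicated decoders for the three underwater issues named above.
# All layers are minimal stand-ins under assumed shapes.
import torch
import torch.nn as nn

class MultiHeadEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())

        def decoder():
            return nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))

        self.deblur = decoder()        # motion blur head
        self.brighten = decoder()      # low brightness head
        self.color_fix = decoder()     # color deviation head

    def forward(self, x):
        z = self.encoder(x)            # features computed once, reused by heads
        return self.deblur(z), self.brighten(z), self.color_fix(z)

img = torch.randn(1, 3, 64, 64)
outs = MultiHeadEnhancer()(img)
print([o.shape for o in outs])
```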

24 pages, 7656 KiB  
Article
Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data
by Baoyu Zhu, Qunbo Lv and Zheng Tan
Drones 2023, 7(2), 96; https://doi.org/10.3390/drones7020096 - 30 Jan 2023
Cited by 12 | Viewed by 3632
Abstract
Drone and aerial remote sensing images are widely used, but their imaging environment is complex and prone to image blurring. Existing CNN deblurring algorithms usually use multi-scale fusion to extract features in order to make full use of the information in blurred aerial remote sensing images, but they apply the same weights to images with different degrees of blurring, so errors accumulate layer by layer during feature fusion. Based on the physical properties of image blurring, this paper proposes an adaptive multi-scale fusion blind deblurred generative adversarial network (AMD-GAN), which innovatively applies the degree of image blurring to guide the adjustment of the weights of multi-scale fusion, effectively suppressing errors in the multi-scale fusion process and enhancing the interpretability of the feature layers. The research work in this paper reveals the necessity and effectiveness of a priori information on image blurring levels in image deblurring tasks. By studying and exploring image blurring levels, the network model focuses more on the basic physical features of image blurring. Meanwhile, this paper proposes an image blurring degree description model, which can effectively represent the blurring degree of aerial remote sensing images. The comparison experiments show that the proposed algorithm can effectively recover images with different degrees of blur, obtain high-quality images with clear texture details, outperform the comparison algorithms in both qualitative and quantitative evaluation, and effectively improve the object detection performance on blurred aerial remote sensing images. Moreover, the average PSNR of the proposed algorithm on the publicly available RealBlur-R dataset reached 41.02 dB, surpassing the latest SOTA algorithms.
(This article belongs to the Topic Artificial Intelligence in Sensors)
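One simple way to realize blur-guided fusion weights is to score blur with the variance-of-Laplacian measure and shift weight toward coarser scales as the score drops. The sketch below assumes that measure and a three-scale scheme; the paper's own blur-degree description model is not reproduced.

```python
# Sketch of blur-guided multi-scale fusion weights. The blur score is
# the classic variance-of-Laplacian measure, an assumption standing in
# for the paper's blur-degree description model; the three-scale
# weighting rule and sharp_ref constant are also illustrative.
import numpy as np
from scipy.signal import convolve2d

LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def blur_score(gray: np.ndarray) -> float:
    """Lower Laplacian variance means a blurrier image."""
    return float(convolve2d(gray, LAPLACIAN, mode="valid").var())

def fusion_weights(score: float, sharp_ref: float = 500.0) -> np.ndarray:
    """Blurrier inputs shift weight toward coarser scales (3 scales here)."""
    blur = 1.0 - min(score / sharp_ref, 1.0)     # 0 = sharp, 1 = very blurred
    w = np.array([1.0 - blur, 1.0, 1.0 + blur])  # fine, mid, coarse scales
    return w / w.sum()

gray = np.random.rand(64, 64) * 255  # stand-in grayscale aerial image
print(fusion_weights(blur_score(gray)))
```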

20 pages, 5280 KiB  
Article
Deep Learning-Based Cyber–Physical Feature Fusion for Anomaly Detection in Industrial Control Systems
by Yan Du, Yuanyuan Huang, Guogen Wan and Peilin He
Mathematics 2022, 10(22), 4373; https://doi.org/10.3390/math10224373 - 20 Nov 2022
Cited by 10 | Viewed by 3335
Abstract
In this paper, we propose an unsupervised anomaly detection method based on the Autoencoder with Long Short-Term Memory (LSTM-Autoencoder) network and the Generative Adversarial Network (GAN) to detect anomalies in industrial control systems (ICSs) using cyber–physical fusion features. This method improves the recall of anomaly detection and overcomes the challenges of unbalanced datasets and insufficient labeled samples in ICSs. As a first step, additional network features are extracted and fused with physical features to create a cyber–physical dataset. Following this, the model is trained using normal data only, ensuring that it can properly reconstruct normal data. In the testing phase, samples with unknown labels are used as inputs to the model. The model outputs an anomaly score for each sample, and a sample is flagged as anomalous when its score exceeds a threshold. Whether using supervised or unsupervised algorithms, experimentation has shown that (1) cyber–physical fusion features can significantly improve the performance of anomaly detection algorithms; (2) the proposed method outperforms several other unsupervised anomaly detection methods in terms of accuracy, recall, and F1 score; (3) the proposed method can detect the majority of anomalous events with a low false negative rate.
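The reconstruct-and-threshold step described above can be sketched with a tiny LSTM autoencoder: score samples by reconstruction error and flag scores above a threshold set from normal data. The dimensions and the 99th-percentile rule are illustrative assumptions, and the GAN branch is omitted.

```python
# Sketch of reconstruction-based anomaly scoring: an LSTM autoencoder
# (untrained here for brevity) reconstructs windows of cyber-physical
# features; high reconstruction error marks an anomaly. Dimensions and
# the 99th-percentile threshold rule are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_feat: int = 8, hidden: int = 32):
        super().__init__()
        self.enc = nn.LSTM(n_feat, hidden, batch_first=True)
        self.dec = nn.LSTM(hidden, n_feat, batch_first=True)

    def forward(self, x):                  # x: (batch, time, n_feat)
        z, _ = self.enc(x)
        out, _ = self.dec(z)
        return out

model = LSTMAutoencoder()
normal = torch.randn(128, 20, 8)           # stand-in normal windows
with torch.no_grad():
    err = ((model(normal) - normal) ** 2).mean(dim=(1, 2))
threshold = torch.quantile(err, 0.99)      # set from normal data only
test = torch.randn(5, 20, 8) * 3           # exaggerated "anomalous" windows
with torch.no_grad():
    scores = ((model(test) - test) ** 2).mean(dim=(1, 2))
print((scores > threshold).tolist())       # True marks detected anomalies
```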

15 pages, 1175 KiB  
Article
Comparison of Neutron Detection Performance of Four Thin-Film Semiconductor Neutron Detectors Based on Geant4
by Zhongming Zhang and Michael D. Aspinall
Sensors 2021, 21(23), 7930; https://doi.org/10.3390/s21237930 - 27 Nov 2021
Cited by 8 | Viewed by 3796
Abstract
Third-generation semiconductor materials have a wide band gap, high thermal conductivity, high chemical stability, and strong radiation resistance. These materials have broad application prospects in optoelectronics, high-temperature and high-power equipment, and radiation detectors. In this work, thin-film solid-state neutron detectors made of four third-generation semiconductor materials are studied. Geant4 10.7 was used to analyze and optimize the detectors, and the optimal thicknesses required to achieve the highest detection efficiency for the four materials are determined. The optimized materials include diamond, silicon carbide (SiC), gallium oxide (Ga2O3), and gallium nitride (GaN), and the converter layer materials are boron carbide (B4C) and lithium fluoride (LiF) with natural isotopic abundances of boron and lithium. At the optimal thickness, the primary knock-on atom (PKA) energy spectrum and displacements per atom (DPA) are studied to provide an indication of the radiation hardness of the four materials. The gamma rejection capabilities and electron collection efficiency (ECE) of these materials have also been studied. This work will contribute to manufacturing radiation-resistant, high-temperature-resistant, and fast-response neutron detectors, and will facilitate reactor monitoring, high-energy physics experiments, and nuclear fusion research.
(This article belongs to the Section Physical Sensors)
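The converter-thickness tradeoff behind such optimizations (a thicker B4C layer captures more neutrons, but reaction products created too deep cannot escape into the semiconductor) can be illustrated with a first-order model. The capture coefficient and the roughly 3 µm product range below are rough literature-style values assumed for illustration, not the paper's Geant4 results.

```python
# First-order toy model of converter-thickness optimization: neutrons
# are captured in B4C with macroscopic cross-section SIGMA, and only
# captures within RANGE of the back face yield reaction products that
# reach the semiconductor. Both constants are rough assumed values.
import numpy as np

SIGMA = 8.4e-3   # thermal-neutron captures per micron in natural B4C (approx.)
RANGE = 3.0      # assumed effective escape depth of reaction products, microns

def efficiency(thickness_um: float, n: int = 4000) -> float:
    """Probability a neutron is captured close enough to the back face."""
    x = np.linspace(0.0, thickness_um, n)        # depth of capture
    capture_pdf = SIGMA * np.exp(-SIGMA * x)     # capture density at depth x
    reaches_back = (thickness_um - x) <= RANGE   # product escapes converter
    return float(np.sum(capture_pdf * reaches_back) * (x[1] - x[0]))

thicknesses = np.arange(0.5, 10.01, 0.5)
effs = [efficiency(t) for t in thicknesses]
print("optimal thickness ~", thicknesses[int(np.argmax(effs))], "um (toy model)")
```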
