Deep Learning in Image Processing and Segmentation

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (15 October 2024) | Viewed by 6924

Special Issue Editor


Dr. Prashan Premaratne
Guest Editor
School of Electrical, Computer and Telecommunications Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
Interests: computer vision; artificial intelligence related to computer vision; image processing; image steganography; radar signal processing in inverse synthetic aperture radar imaging

Special Issue Information

Dear Colleagues,

Image processing has traditionally been approached using a number of techniques that have been vital to the growth of computer vision. The applications that have emerged from these traditional approaches have changed the way we live, from vehicle number plate recognition to medical imaging, and without the growth of image processing, modern life as we know it would not exist. Traditional image processing tasks include:

  • Image enhancement;
  • Image restoration;
  • Wavelets and multi-resolution processing;
  • Image compression;
  • Morphological processing;
  • Representation and description;
  • Object detection and recognition;
  • Knowledge base.

Image segmentation is also one of these image processing tasks; however, this Special Issue treats the two areas separately, as it aims to uncover new trends in both of these traditional fields using deep learning or other artificial intelligence-based approaches. This volume calls for recent research progress that applies deep learning to the image processing tasks listed above, or approaches that achieve image segmentation comparable to traditional watershed algorithms, region-growing algorithms, or any modern techniques.
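For context, the sketch below shows the kind of classical segmentation pipeline that deep-learning submissions are often benchmarked against: Otsu thresholding, a distance transform, and watershed flooding. It uses scikit-image's bundled coins sample image and standard library calls; it is an illustrative baseline only, not a method prescribed by this Special Issue.

from scipy import ndimage as ndi
from skimage import data, filters, measure, segmentation

# Sample grayscale image and a simple foreground/background split.
image = data.coins()
binary = image > filters.threshold_otsu(image)

# Peaks of the distance transform act as seed markers for watershed flooding.
distance = ndi.distance_transform_edt(binary)
markers = measure.label(distance > 0.7 * distance.max())

labels = segmentation.watershed(-distance, markers, mask=binary)
print(f"Found {labels.max()} segmented regions")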

Dr. Prashan Premaratne
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image enhancement
  • image restoration
  • wavelets and multi-resolution processing
  • image compression
  • morphological processing
  • representation and description
  • object detection and recognition
  • knowledge base

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)


Research

18 pages, 6474 KiB  
Article
A Safety Helmet Detection Model Based on YOLOv8-ADSC in Complex Working Environments
by Jingyang Wang, Bokai Sang, Bo Zhang and Wei Liu
Electronics 2024, 13(23), 4589; https://doi.org/10.3390/electronics13234589 - 21 Nov 2024
Cited by 4 | Viewed by 1415
Abstract
A safety helmet is indispensable personal protective equipment in high-risk working environments. Factors such as dense personnel, varying lighting conditions, occlusions, and different head postures can reduce the precision of traditional methods for detecting safety helmets. This paper proposes an improved YOLOv8n safety helmet detection model, YOLOv8-ADSC, to enhance the performance of safety helmet detection in complex working environments. In this model, firstly, Adaptive Spatial Feature Fusion (ASFF) and Deformable Convolutional Network version 2 (DCNv2) are used to enhance the detection head, enabling the network to more effectively capture multi-scale information of the target; secondly, a new detection layer for small targets is incorporated to enhance sensitivity to smaller targets; and finally, the Upsample module is replaced with the lightweight up-sampling module Content-Aware ReAssembly of Features (CARAFE), which increases the perception range, reduces information loss caused by up-sampling, and improves the precision and robustness of target detection. The experimental results on the public Safety-Helmet-Wearing-Dataset (SHWD) demonstrate that, in comparison to the original YOLOv8n model, the mAP@0.5 of YOLOv8-ADSC increased by 2% for all classes, reaching 94.2%, and the mAP@0.5:0.95 increased by 2.3%, reaching 62.4%. YOLOv8-ADSC is therefore better suited to safety helmet detection in complex working environments.
(This article belongs to the Special Issue Deep Learning in Image Processing and Segmentation)
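As a point of reference only, the sketch below fine-tunes an unmodified, pretrained YOLOv8n on a helmet-wearing dataset with the Ultralytics package. The ASFF, DCNv2, extra small-object detection layer, and CARAFE changes that define YOLOv8-ADSC are not part of the stock package and are not reproduced here, and the dataset YAML path is a placeholder.

from ultralytics import YOLO

# Pretrained YOLOv8n baseline (the unmodified model the paper improves upon).
model = YOLO("yolov8n.pt")

# Fine-tune on a YOLO-format dataset described by a YAML file (hypothetical path).
model.train(data="shwd.yaml", epochs=100, imgsz=640)

# Validate; metrics include mAP@0.5 and mAP@0.5:0.95 as reported in the paper.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)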

14 pages, 2930 KiB  
Article
AsymUNet: An Efficient Multi-Layer Perceptron Model Based on Asymmetric U-Net for Medical Image Noise Removal
by Yan Cui, Xiangming Hong, Haidong Yang, Zhili Ge and Jielin Jiang
Electronics 2024, 13(16), 3191; https://doi.org/10.3390/electronics13163191 - 12 Aug 2024
Viewed by 1342
Abstract
With the continuous advancement of deep learning technology, U-Net-based algorithms for image denoising play a crucial role in medical image processing. However, most U-Net-based medical image denoising algorithms have large parameter sizes, which poses significant limitations in practical applications where computational resources are limited or large-scale patient data processing is required. In this paper, we propose a medical image denoising algorithm called AsymUNet, developed using an asymmetric U-Net framework and a spatially rearranged multilayer perceptron (MLP). AsymUNet utilizes an asymmetric U-Net to reduce the computational burden, while a multiscale feature fusion module enhances the feature interaction between the encoder and decoder. To better preserve image details, spatially rearranged MLP blocks serve as the core building blocks of AsymUNet. These blocks effectively extract both the local and global features of the image, reducing the model's reliance on prior knowledge of the image and further accelerating the training and inference processes. Experimental results demonstrate that AsymUNet achieves superior performance metrics and visual results compared with other state-of-the-art methods.
(This article belongs to the Special Issue Deep Learning in Image Processing and Segmentation)
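The abstract does not spell out the block design, so the PyTorch sketch below is one hedged reading of a "spatially rearranged MLP": pixels are rearranged into non-overlapping windows, positions within each window are mixed by a small MLP, and the result is rearranged back. The class name, window size, and layer choices are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class SpatialRearrangeMLP(nn.Module):
    """Hypothetical window-based spatial-mixing MLP block."""

    def __init__(self, channels: int, window: int = 8):
        super().__init__()
        self.window = window
        tokens = window * window
        self.norm = nn.LayerNorm(channels)
        # Mixes spatial positions inside each window.
        self.spatial_mlp = nn.Sequential(
            nn.Linear(tokens, tokens), nn.GELU(), nn.Linear(tokens, tokens)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        ws = self.window
        # (B, C, H, W) -> (B * num_windows, C, ws * ws)
        x_ = x.view(b, c, h // ws, ws, w // ws, ws).permute(0, 2, 4, 1, 3, 5)
        x_ = x_.reshape(-1, c, ws * ws)
        y = self.norm(x_.transpose(1, 2)).transpose(1, 2)  # normalize over channels
        x_ = x_ + self.spatial_mlp(y)                       # residual position mixing
        # Rearrange back to (B, C, H, W).
        x_ = x_.reshape(b, h // ws, w // ws, c, ws, ws).permute(0, 3, 1, 4, 2, 5)
        return x_.reshape(b, c, h, w)


# Usage: spatial size must be divisible by the window size.
feat = torch.randn(1, 32, 64, 64)
print(SpatialRearrangeMLP(32)(feat).shape)  # torch.Size([1, 32, 64, 64])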

15 pages, 1112 KiB  
Article
ALKU-Net: Adaptive Large Kernel Attention Convolution Network for Lung Nodule Segmentation
by Juepu Chen, Shuxian Liu and Yulong Liu
Electronics 2024, 13(16), 3121; https://doi.org/10.3390/electronics13163121 - 7 Aug 2024
Cited by 1 | Viewed by 1814
Abstract
The accurate segmentation of lung nodules in computed tomography (CT) images is crucial for the early screening and diagnosis of lung cancer. However, the heterogeneity of lung nodules and their similarity to other lung tissue features make this task more challenging. By using large receptive fields from large convolutional kernels, convolutional neural networks (CNNs) can achieve higher segmentation accuracies with fewer parameters. However, due to the fixed size of the convolutional kernel, CNNs still struggle to extract multi-scale features for lung nodules of varying sizes. In this study, we propose a novel network to improve the segmentation accuracy of lung nodules. The network integrates adaptive large kernel attention (ALK) blocks, employing multiple convolutional layers with variously sized convolutional kernels and expansion rates to extract multi-scale features. A dynamic selection mechanism is also introduced to aggregate the multi-scale features obtained from variously sized convolutional kernels based on selection weights. Based on this, we propose a lightweight convolutional neural network with large convolutional kernels, called ALKU-Net, which integrates the ALKA module in a hierarchical encoder and adopts a U-shaped decoder to form a novel architecture. ALKU-Net efficiently utilizes the multi-scale large receptive field and enhances the model's perception capability through spatial attention and channel attention. Extensive experiments demonstrate that our method outperforms other state-of-the-art models on the public dataset LUNA-16, exhibiting considerable accuracy in the lung nodule segmentation task.
(This article belongs to the Special Issue Deep Learning in Image Processing and Segmentation)
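The sketch below is a hedged PyTorch illustration of the general idea described in the abstract: several depth-wise convolution branches with different kernel sizes and dilation rates produce multi-scale features, which are aggregated with softmax selection weights (in the style of selective-kernel networks) and used as an attention map. The class name, branch configuration, and gating are assumptions for illustration, not the paper's ALK/ALKA implementation.

import torch
import torch.nn as nn


class AdaptiveLargeKernelAttention(nn.Module):
    """Hypothetical multi-branch large-kernel attention with dynamic selection."""

    def __init__(self, channels: int):
        super().__init__()
        # Depth-wise branches with growing effective receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=(k // 2) * d,
                      dilation=d, groups=channels)
            for k, d in [(5, 1), (7, 2), (9, 3)]
        ])
        # Selection weights predicted from globally pooled features.
        self.select = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(self.branches), 1),
        )
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        weights = torch.softmax(self.select(x), dim=1)             # (B, K, 1, 1)
        weights = weights.unsqueeze(2)                             # (B, K, 1, 1, 1)
        fused = (feats * weights).sum(dim=1)                       # weighted branch sum
        return x * torch.sigmoid(self.proj(fused))                 # attention gating


feat = torch.randn(1, 16, 32, 32)
print(AdaptiveLargeKernelAttention(16)(feat).shape)  # torch.Size([1, 16, 32, 32])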

19 pages, 11581 KiB  
Article
CMP-UNet: A Retinal Vessel Segmentation Network Based on Multi-Scale Feature Fusion
by Yanan Gu, Ruyi Cao, Dong Wang and Bibo Lu
Electronics 2023, 12(23), 4743; https://doi.org/10.3390/electronics12234743 - 22 Nov 2023
Cited by 2 | Viewed by 1489
Abstract
Retinal vessel segmentation plays a critical role in the diagnosis and treatment of various ophthalmic diseases. However, due to poor image contrast, intricate vascular structures, and limited datasets, retinal vessel segmentation remains a long-term challenge. In this paper, based on an encoder–decoder framework, a novel retinal vessel segmentation model called CMP-UNet is proposed. Firstly, the Coarse and Fine Feature Aggregation module decouples and aggregates coarse and fine vessel features using two parallel branches, thus enhancing the model’s ability to extract features for vessels of various sizes. Then, the Multi-Scale Channel Adaptive Fusion module is embedded in the decoder to realize the efficient fusion of cascade features by mining the multi-scale context information from these features. Finally, to obtain more discriminative vascular features and enhance the connectivity of vascular structures, the Pyramid Feature Fusion module is proposed to effectively utilize the complementary information of multi-level features. To validate the effectiveness of the proposed model, it is evaluated on three publicly available retinal vessel segmentation datasets: CHASE_DB1, DRIVE, and STARE. The proposed model, CMP-UNet, reaches F1-scores of 82.84%, 82.55%, and 84.14% on these three datasets, with improvements of 0.76%, 0.31%, and 1.49%, respectively, compared with the baseline. The results show that the proposed model achieves higher segmentation accuracy and more robust generalization capability than state-of-the-art methods.
(This article belongs to the Special Issue Deep Learning in Image Processing and Segmentation)
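For readers unfamiliar with the reported metric, the sketch below computes the pixel-wise F1-score (equivalently the Dice coefficient) used to compare vessel segmentation outputs against ground-truth masks; the toy masks are synthetic and purely illustrative, not the authors' evaluation code.

import numpy as np


def vessel_f1(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-wise F1-score (Dice coefficient) over binary vessel masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return 2 * tp / (2 * tp + fp + fn + 1e-8)


# Toy example: a synthetic "ground truth" mask and a partly corrupted "prediction".
rng = np.random.default_rng(0)
gt = rng.random((584, 565)) > 0.9        # DRIVE-sized image, ~10% vessel pixels
pred = gt.copy()
pred[:50] = ~pred[:50]                   # corrupt the top rows of the prediction
print(f"F1 = {vessel_f1(pred, gt):.4f}")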
