Image Segmentation, 2nd Edition

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 October 2025 | Viewed by 8933

Special Issue Editors

Guest Editor
Division of Culture Contents, Graduate School of Data Science, AI Convergence and Open Sharing System, Chonnam National University, Gwangju 61186, Republic of Korea
Interests: object/image detection; segmentation; recognition; tracking; image understanding; action/behavior/gesture recognition; emotion recognition

Guest Editor
Department of Artificial Intelligence Convergence, Chonnam National University, 77 Yongbong-ro, Gwangju 61186, Republic of Korea
Interests: deep-learning-based emotion recognition; medical image analysis; pattern recognition

Guest Editor
Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Republic of Korea
Interests: bio-mechanics; robotics; data/image analysis; human pose estimation; IoT system

Special Issue Information

Dear Colleagues,

Image segmentation is a core task in image processing with both software and hardware applications, spanning areas such as image understanding, medical image analysis, image classification, emotion recognition, object recognition and tracking, object retrieval, video surveillance, augmented reality and the metaverse, and autonomous vehicles. Image segmentation enables the extraction of tumor boundaries and the measurement of tissue volume, the detection of pedestrians for autonomous vehicle operation, and the detection of traffic signals and navigable surfaces; it also supports the recognition of emotions and actions from facial expressions and body movements.

Image segmentation can be divided into semantic segmentation, instance segmentation, and panoptic segmentation, the last of which unifies the former two. Semantic segmentation labels every pixel by assigning it to a meaningful class, for example, person, cup, or airplane; it partitions the entire image, including the background, according to meaning, but does not distinguish individual objects of the same class. Instance segmentation, in contrast, partitions the image into individual objects, predicting a class label for each detected object and additionally assigning it a distinct instance ID. The main difference between the two is that semantic segmentation can label regions without a fixed shape, such as the sky or a road, but assigns no per-object identity, whereas instance segmentation assigns a separate ID to every detected object, even when objects overlap; when multiple people appear in one image, each person is recognized as a distinct object. Panoptic segmentation combines both tasks, giving every pixel a class label and, for countable objects, an instance ID as well.
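
As a concrete illustration of these distinctions (not part of the original call), the following minimal Python sketch contrasts semantic and instance segmentation using pretrained torchvision models; the input file name and model choices are assumptions made only for illustration.

```python
# A minimal sketch contrasting semantic and instance segmentation with
# pretrained torchvision models. "example.jpg" is a hypothetical input image.
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights
from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights

img = read_image("example.jpg")  # uint8 tensor of shape (C, H, W)

# Semantic segmentation: one class label per pixel, no object identity.
sem_weights = DeepLabV3_ResNet50_Weights.DEFAULT
sem_model = deeplabv3_resnet50(weights=sem_weights).eval()
with torch.no_grad():
    logits = sem_model(sem_weights.transforms()(img).unsqueeze(0))["out"]
class_map = logits.argmax(dim=1)  # (1, H, W): all "person" pixels share one label

# Instance segmentation: a separate mask, label, and score per detected object,
# so two people in the same image receive two distinct masks.
inst_weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
inst_model = maskrcnn_resnet50_fpn(weights=inst_weights).eval()
with torch.no_grad():
    det = inst_model([inst_weights.transforms()(img)])[0]
masks, labels, scores = det["masks"], det["labels"], det["scores"]

# Panoptic segmentation would merge the two views: every pixel gets a class
# label, and pixels belonging to countable objects also get an instance ID.
```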

Topics of interest include, but are not limited to, the following:

  • Image Segmentation: semantic segmentation, instance segmentation, and panoptic segmentation;
  • Image Segmentation Methods: legacy methods (histogram-based bundling, region growing, k-means clustering, watershed methods, active contours, graph cuts, conditional and Markov random fields, and sparsity-based methods) and deep learning methods (encoder–decoder-based models, multiscale and pyramid networks, R-CNN, dilated convolutional models, recurrent neural networks, generative adversarial networks, attention-based models, and graph-based models), illustrated by the minimal encoder–decoder sketch after this list;
  • Image Segmentation Applications: medical image segmentation, autonomous vehicles, emotion recognition, image understanding and captioning, augmented reality and the metaverse, gesture and behavior recognition, etc.;
  • Image segmentation datasets and performance evaluation;
  • 2D and 3D segmentation and devices;
  • Surveys of image segmentation.
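
To make the deep learning category above concrete, here is a minimal, hedged sketch of an encoder–decoder segmentation network in PyTorch; the layer widths and the single skip connection are illustrative assumptions rather than a reference architecture.

```python
# A tiny encoder-decoder segmentation network: downsample, upsample, and a
# U-Net-style skip connection, ending in per-pixel class logits.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)  # per-pixel class scores

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                        # full-resolution features
        e2 = self.enc2(e1)                       # downsampled features
        d = self.up(e2)                          # upsample back to input resolution
        d = self.dec(torch.cat([d, e1], dim=1))  # skip connection from the encoder
        return self.head(d)                      # (N, num_classes, H, W) logits

logits = TinyEncoderDecoder()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```

Real models in this family (U-Net, SegNet, DeepLab variants) stack many more such blocks and are trained with per-pixel losses such as cross-entropy or Dice.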

Dr. Inseop Na
Prof. Dr. Soo-Hyung Kim
Prof. Dr. Hieyong Jeong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • image processing
  • image segmentation
  • artificial intelligence
  • medical image analysis
  • deep learning
  • semantic segmentation
  • instance segmentation
  • panoptic segmentation
  • video surveillance
  • augmented reality
  • metaverse
  • autonomous vehicle

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)

Research

27 pages, 23714 KiB  
Article
Optimized AI Methods for Rapid Crack Detection in Microscopy Images
by Chenxukun Lou, Lawrence Tinsley, Fabian Duarte Martinez, Simon Gray and Barmak Honarvar Shakibaei Asli
Electronics 2024, 13(23), 4824; https://doi.org/10.3390/electronics13234824 - 6 Dec 2024
Cited by 1 | Viewed by 1164
Abstract
Detecting structural cracks is critical for quality control and maintenance of industrial materials, ensuring their safety and extending service life. This study enhances the automation and accuracy of crack detection in microscopic images using advanced image processing and deep learning techniques, particularly the YOLOv8 model. A comprehensive review of relevant literature was carried out to compare traditional image-processing methods with modern machine-learning approaches. The YOLOv8 model was optimized by incorporating the Wise Intersection over Union (WIoU) loss function and the bidirectional feature pyramid network (BiFPN) technique, achieving precise detection results with mean average precision (mAP@0.5) of 0.895 and a precision rate of 0.859, demonstrating its superiority in detecting fine cracks even in complex and noisy backgrounds. Experimental findings confirmed the model’s high accuracy in identifying cracks, even under challenging conditions. Despite these advancements, detecting very small or overlapping cracks in complex backgrounds remains challenging. Our future work will focus on optimizing and extending the model’s generalisation capabilities. The findings of this study provide a solid foundation for automatic and rapid crack detection in industrial applications and indicate potential for broader applications across various fields.
(This article belongs to the Special Issue Image Segmentation, 2nd Edition)
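
A hedged sketch of the kind of workflow the abstract above describes, using the publicly documented ultralytics YOLOv8 API; the dataset configuration file and test image are hypothetical, and the paper's WIoU loss and BiFPN modifications are custom changes that are not reproduced here.

```python
# Fine-tuning a stock YOLOv8 detector on a hypothetical crack dataset.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                               # pretrained nano model as a starting point
model.train(data="cracks.yaml", epochs=100, imgsz=640)   # dataset config file is an assumption
results = model.predict("microscopy_sample.png", conf=0.25)  # hypothetical test image
for r in results:
    print(r.boxes.xyxy, r.boxes.conf)                    # predicted crack boxes and confidences
```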

24 pages, 5663 KiB  
Article
Automated Classification and Segmentation and Feature Extraction from Breast Imaging Data
by Yiran Sun, Zede Zhu and Barmak Honarvar Shakibaei Asli
Electronics 2024, 13(19), 3814; https://doi.org/10.3390/electronics13193814 - 26 Sep 2024
Viewed by 975
Abstract
Breast cancer is the most common type of cancer in women and poses a significant health risk to women globally. Developments in computer-aided diagnosis (CAD) systems are focused on specific tasks of classification and segmentation, but few studies involve a completely integrated system. In this study, a comprehensive CAD system was proposed to screen ultrasound, mammograms and magnetic resonance imaging (MRI) of breast cancer, including image preprocessing, breast cancer classification, and tumour segmentation. First, the total variation filter was used for image denoising. Second, an optimised XGBoost machine learning model using EfficientNetB0 as feature extraction was proposed to classify breast images into normal and tumour. Third, after classifying the tumour images, a hybrid CNN deep learning model integrating the strengths of MobileNet and InceptionV3 was proposed to categorise tumour images into benign and malignant. Finally, Attention U-Net was used to segment tumours in annotated datasets while classical image segmentation methods were used for the others. The proposed models in the designed CAD system achieved an accuracy of 96.14% on the abnormal classification and 94.81% on tumour classification on the BUSI dataset, improving the effectiveness of automatic breast cancer diagnosis.
(This article belongs to the Special Issue Image Segmentation, 2nd Edition)
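
The classification stage described above (a CNN backbone feeding a gradient-boosted classifier) can be sketched as follows; this is not the authors' code, and the array names, shapes, and hyperparameters are assumptions.

```python
# Pretrained EfficientNetB0 as a frozen feature extractor feeding XGBoost.
import numpy as np
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.applications.efficientnet import preprocess_input
from xgboost import XGBClassifier

# Placeholder data: (N, 224, 224, 3) images and binary labels (0 = normal, 1 = tumour).
images = np.random.rand(16, 224, 224, 3).astype("float32") * 255.0
labels = np.random.randint(0, 2, size=16)

backbone = EfficientNetB0(include_top=False, weights="imagenet", pooling="avg")
features = backbone.predict(preprocess_input(images))  # (N, 1280) feature vectors

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(features, labels)
print(clf.predict(features[:4]))  # predicted normal/tumour labels
```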

45 pages, 31956 KiB  
Article
Early Breast Cancer Detection Using Artificial Intelligence Techniques Based on Advanced Image Processing Tools
by Zede Zhu, Yiran Sun and Barmak Honarvar Shakibaei Asli
Electronics 2024, 13(17), 3575; https://doi.org/10.3390/electronics13173575 - 9 Sep 2024
Cited by 3 | Viewed by 6459
Abstract
The early detection of breast cancer is essential for improving treatment outcomes, and recent advancements in artificial intelligence (AI), combined with image processing techniques, have shown great potential in enhancing diagnostic accuracy. This study explores the effects of various image processing methods and AI models on the performance of early breast cancer diagnostic systems. By focusing on techniques such as Wiener filtering and total variation filtering, we aim to improve image quality and diagnostic precision. The novelty of this study lies in the comprehensive evaluation of these techniques across multiple medical imaging datasets, including a DCE-MRI dataset for breast-tumor image segmentation and classification (BreastDM) and the Breast Ultrasound Image (BUSI), Mammographic Image Analysis Society (MIAS), Breast Cancer Histopathological Image (BreakHis), and Digital Database for Screening Mammography (DDSM) datasets. The integration of advanced AI models, such as the vision transformer (ViT) and the U-KAN model—a U-Net structure combined with Kolmogorov–Arnold Networks (KANs)—is another key aspect, offering new insights into the efficacy of these approaches in different imaging contexts. Experiments revealed that Wiener filtering significantly improved image quality, achieving a peak signal-to-noise ratio (PSNR) of 23.06 dB and a structural similarity index measure (SSIM) of 0.79 using the BreastDM dataset and a PSNR of 20.09 dB with an SSIM of 0.35 using the BUSI dataset. When combined filtering techniques were applied, the results varied, with the MIAS dataset showing a decrease in SSIM and an increase in the mean squared error (MSE), while the BUSI dataset exhibited enhanced perceptual quality and structural preservation. The vision transformer (ViT) framework excelled in processing complex image data, particularly with the BreastDM and BUSI datasets. Notably, the Wiener filter using the BreastDM dataset resulted in an accuracy of 96.9% and a recall of 96.7%, while the combined filtering approach further enhanced these metrics to 99.3% accuracy and 98.3% recall. In the BUSI dataset, the Wiener filter achieved an accuracy of 98.0% and a specificity of 98.5%. Additionally, the U-KAN model demonstrated superior performance in breast cancer lesion segmentation, outperforming traditional models like U-Net and U-Net++ across datasets, with an accuracy of 93.3% and a sensitivity of 97.4% in the BUSI dataset. These findings highlight the importance of dataset-specific preprocessing techniques and the potential of advanced AI models like ViT and U-KAN to significantly improve the accuracy of early breast cancer diagnostics.
(This article belongs to the Special Issue Image Segmentation, 2nd Edition)
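
The Wiener-filtering preprocessing and PSNR/SSIM evaluation mentioned in the abstract can be sketched with standard SciPy and scikit-image calls; the image file, noise level, and window size are assumptions, not the paper's settings.

```python
# Apply an adaptive Wiener filter to a noisy grayscale image and score the
# result with PSNR and SSIM against the clean reference.
import numpy as np
from scipy.signal import wiener
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

clean = img_as_float(io.imread("breast_slice.png", as_gray=True))  # hypothetical image
noisy = np.clip(clean + np.random.normal(0, 0.05, clean.shape), 0.0, 1.0)

denoised = wiener(noisy, mysize=5)  # adaptive Wiener filter over a 5x5 window

print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
```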