Deep Learning for Computer Vision Application: Second Edition

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 August 2026 | Viewed by 2445

Special Issue Editor


Dr. Hamed Mozaffari
Guest Editor
Research Officer (AI/ML Expert), Construction Research Centre, National Research Council Canada, Ottawa, ON K1A 0R6 Canada
Interests: computer vision; image processing; artificial intelligence; deep learning; medical imaging; thermal imaging; spectroscopy; virtual reality; data analytics and risk assessment; electronics/embedded systems

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) methods, and more specifically deep neural networks (also called deep learning models), have become the core technique for computer vision tasks across a wide range of applications. These powerful deep learning models enable state-of-the-art levels of automation in pattern recognition from image data. Their impact is already visible in daily life, from the automatic sorting and retrieval of photos in Google Photos to autonomous cars. However, many computer vision tasks have yet to benefit from these techniques. Future studies should seek out further applications of AI, e.g., through data acquisition and cleaning as well as model optimization, innovation, and research. In this Special Issue, we are particularly interested in new applications of deep learning in the computer vision field.

Topics of interest include, but are not limited to, the following:

  • Image classification using deep learning;
  • Object detection using deep learning;
  • Semantic and instance segmentation using deep learning;
  • Deep learning techniques for generating new images (generative adversarial networks);
  • Employing reinforcement learning for computer vision tasks;
  • Application of deep learning in the Internet of Things (IoT);
  • Application of deep learning in embedded systems, sensor development, and electronics;
  • Computer vision tasks using deep learning (medical image processing, remote sensing, hyperspectral imaging, thermal imaging, space and extraterrestrial observations);
  • Image sequence analysis using deep learning;
  • Deep learning and computer vision for smart and green building, smart industry, and smart devices.

Dr. Hamed Mozaffari
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • convolutional neural network
  • deep learning
  • computer vision
  • artificial intelligence
  • image processing
  • medical image processing
  • Internet of Things
  • thermal imaging
  • image technologies
  • application of deep learning
  • autonomous vehicles
  • image classification
  • object detection
  • object segmentation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (3 papers)


Research

15 pages, 2052 KB  
Article
A Dual-Branch Multi-Scale Network for Skin Lesion Classification
by Ying Liu, Xinyu Feng, Yuchai Wan, Huifu Li, Xun Zhang and Abdureyim Raxidin
Electronics 2026, 15(5), 1118; https://doi.org/10.3390/electronics15051118 - 8 Mar 2026
Viewed by 401
Abstract
Dermoscopic images are widely used for diagnosing skin diseases, and automatic classification of lesion types using deep learning can significantly enhance diagnostic efficiency. However, challenges such as variations in imaging conditions, subtle differences between classes, high variability within classes, and severe class imbalance complicate skin lesion analysis. This paper introduces a dual-branch deep learning model where two branches independently process high-frequency and low-frequency image features to generate multi-scale fused representations. To address class imbalance, the model employs cosine similarity to strengthen inter-class discrimination and incorporates a bias term to improve recognition of minority lesion classes. Experiments conducted on the ISIC 2017 and ISIC 2018 datasets demonstrate that the proposed method surpasses state-of-the-art approaches, achieving accuracies of 97.0% and 91.9%, respectively, with sensitivity and specificity both exceeding 90% on the two datasets.
(This article belongs to the Special Issue Deep Learning for Computer Vision Application: Second Edition)
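The cosine-similarity classifier with an additive bias term described in the abstract can be sketched roughly as follows. This is a minimal plain-Python illustration, not the authors' implementation: in the actual model the features are learned deep representations and the class weights and biases are trained parameters; the vectors and bias values here are hypothetical.

```python
import math


def cosine_logits(feature, class_weights, class_bias):
    """Per-class logits = cosine similarity between the feature vector and
    each class weight vector, plus a per-class bias term (the bias can lift
    minority classes under class imbalance)."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v)) or 1.0

    f_norm = norm(feature)
    logits = []
    for w, b in zip(class_weights, class_bias):
        cos = sum(fi * wi for fi, wi in zip(feature, w)) / (f_norm * norm(w))
        logits.append(cos + b)
    return logits


# Hypothetical 2-D feature and two classes; the second class gets a small bias.
logits = cosine_logits([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.2])
predicted = max(range(len(logits)), key=lambda i: logits[i])
```

Because cosine similarity normalizes away feature magnitude, inter-class separation depends only on direction, which is why a learned bias is a natural lever for rebalancing rare classes.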

24 pages, 3288 KB  
Article
Multi-Task Deep Learning for Lung Nodule Detection and Segmentation in CT Scans
by Runhan Li and Barmak Honarvar Shakibaei Asli
Electronics 2026, 15(4), 736; https://doi.org/10.3390/electronics15040736 - 9 Feb 2026
Viewed by 804
Abstract
The early detection of pulmonary nodules in chest CT scans is critical for improving lung cancer outcomes. While existing computer-aided diagnosis (CAD) systems have shown promise, most treat detection and segmentation as separate tasks, leading to fragmented pipelines and limited representation sharing. This study proposes a 2.5D multi-task learning (MTL) framework that integrates both tasks within a unified Mask R-CNN architecture. The framework incorporates a tailored preprocessing pipeline—including Hounsfield Unit (HU) normalisation, CLAHE enhancement, and lung parenchyma masking—to improve input consistency and task-relevant contrast characteristics. To enhance sensitivity for small or ambiguous nodules, an auxiliary RoI classifier is introduced. Additionally, a nodule-level evaluation strategy aggregates slice-wise predictions across the z-axis, supporting a clinically meaningful assessment that approximates 3D diagnostic workflows. Experiments on the LUNA16 dataset demonstrate that the proposed framework achieves a favourable trade-off between detection and segmentation performance under a unified 2.5D multi-task setting. These results highlight the potential of integrated MTL approaches to advance CAD systems for early lung cancer screening.
(This article belongs to the Special Issue Deep Learning for Computer Vision Application: Second Edition)
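The HU normalisation step mentioned in the preprocessing pipeline is conventionally a window-clip-and-rescale operation. A minimal sketch, assuming a typical lung window of [-1000, 400] HU (the paper's exact window bounds are not stated here, so these values are illustrative):

```python
def normalise_hu(hu_values, hu_min=-1000.0, hu_max=400.0):
    """Clip raw Hounsfield Unit values to a lung window and rescale to [0, 1].

    Values below hu_min (air) saturate at 0, values above hu_max (bone/metal)
    saturate at 1, so the network sees a consistent intensity range.
    """
    out = []
    for hu in hu_values:
        clipped = min(max(hu, hu_min), hu_max)
        out.append((clipped - hu_min) / (hu_max - hu_min))
    return out


# Example: air, window edges, and dense bone all map into [0, 1].
scaled = normalise_hu([-2000.0, -1000.0, -300.0, 400.0, 1000.0])
```

In practice this would be applied per voxel over the whole CT volume (e.g. with array operations) before CLAHE enhancement and parenchyma masking.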

23 pages, 1308 KB  
Article
MFA-Net: Multiscale Feature Attention Network for Medical Image Segmentation
by Jia Zhao, Han Tao, Song Liu, Meilin Li and Huilong Jin
Electronics 2026, 15(2), 330; https://doi.org/10.3390/electronics15020330 - 12 Jan 2026
Cited by 2 | Viewed by 890
Abstract
Medical image segmentation acts as a foundational element of medical image analysis. Yet its accuracy is frequently limited by the scale fluctuations of anatomical targets and the intricate contextual traits inherent in medical images—including vaguely defined structural boundaries and irregular shape distributions. To tackle these constraints, we design a multi-scale feature attention network (MFA-Net), customized specifically for thyroid nodule, skin lesion, and breast lesion segmentation tasks. This network framework integrates three core components: a Bidirectional Feature Pyramid Network (Bi-FPN), a Slim-neck structure, and the Convolutional Block Attention Module (CBAM). CBAM steers the model to prioritize boundary regions while filtering out irrelevant information, which in turn enhances segmentation precision. Bi-FPN facilitates more robust fusion of multi-scale features via iterative integration of top-down and bottom-up feature maps, supported by lateral and vertical connection pathways. The Slim-neck design is constructed to simplify the network’s architecture while effectively merging multi-scale representations of both target and background areas, thus enhancing the model’s overall performance. Validation across four public datasets covering thyroid ultrasound (TNUI-2021, TN-SCUI 2020), dermoscopy (ISIC 2016), and breast ultrasound (BUSI) shows that our method outperforms state-of-the-art segmentation approaches, achieving Dice similarity coefficients of 0.955, 0.971, 0.976, and 0.846, respectively. Additionally, the model maintains a compact parameter count of just 3.05 million and delivers an extremely fast inference latency of 1.9 milliseconds—metrics that significantly outperform those of current leading segmentation techniques. In summary, the proposed framework demonstrates strong performance in thyroid, skin, and breast lesion segmentation, delivering an optimal trade-off between high accuracy and computational efficiency.
(This article belongs to the Special Issue Deep Learning for Computer Vision Application: Second Edition)
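The channel half of CBAM gates each feature channel using descriptors from both average and max pooling. The sketch below is a deliberately simplified plain-Python illustration of that idea: it omits CBAM's shared MLP (learned weights) and its spatial attention branch, and simply sums the two pooled descriptors before a sigmoid gate.

```python
import math


def channel_attention(feature_maps):
    """CBAM-style channel attention, simplified: each channel's 2-D map is
    average- and max-pooled, the descriptors are combined, and a sigmoid
    produces a per-channel weight that rescales the channel."""
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    weights = []
    for fm in feature_maps:  # fm is one channel as a H x W list of lists
        flat = [v for row in fm for v in row]
        avg_pool = sum(flat) / len(flat)
        max_pool = max(flat)
        weights.append(sigmoid(avg_pool + max_pool))
    # Rescale every value in each channel by that channel's attention weight.
    return [[[v * w for v in row] for row in fm]
            for fm, w in zip(feature_maps, weights)]


# Two hypothetical 2x2 channels: a quiet one and a strongly activated one.
out = channel_attention([[[0.0, 0.0], [0.0, 0.0]],
                         [[2.0, 2.0], [2.0, 2.0]]])
```

The effect is that channels with stronger responses are passed through nearly unchanged while weak channels are attenuated, which is how the attention module "filters out irrelevant information" as the abstract puts it.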
