Machine Vision Applications and Efficient Deep Learning Models for Resource-Limited Learning

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 10073

Special Issue Editors


Guest Editor
Faculty of Engineering, University of Windsor, Windsor, ON N9B 3P4, Canada
Interests: computer vision systems for active vehicle safety and driver assistance; machine learning and sensor fusion for autonomous driving; sensor technology; big data analytics for medicine; cross-border security; distributed sensing for industrial monitoring and automation

Guest Editor
Department of Software Engineering, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
Interests: deep learning; applied machine learning; computer vision; image processing

Guest Editor
Department of Computer Applications, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India
Interests: cloud computing; predictive analytics; computational intelligence; multi-objective optimization; resource management; GPGPU computing

Guest Editor
School of Computer Science, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: computer vision; deep learning; image processing; AI security; multimedia forensics

Special Issue Information

Dear Colleagues,

Over the past two decades, intelligent learning systems have evolved fundamentally around advances in deep neural networks (DNNs). DNNs have become the go-to model for a wide range of problems, from basic image understanding to complex segmentation and predictive analysis over big data (BD). For example, deep convolutional neural networks (DCNNs) are the backbones of state-of-the-art object classification, object localization, computer-aided diagnosis (CADx), robotics, and autonomous vehicles. Given a large set of labeled data, the representations DNNs learn have repeatedly proven superior to conventional human-engineered features.

Despite their adoption across all fields of natural science and engineering, DNNs do not scale well under resource-limited conditions, such as scarcity of data and limited hardware support. This is especially true in domains where collecting annotated data is costly and time-consuming because it requires domain expertise; in some cases, security and privacy constraints make gathering a large amount of data infeasible altogether. Moreover, beyond successful training and testing of DNNs in a laboratory setting, most real-world deployments do not have the luxury of a high-performance computing platform. To overcome these shortcomings, there is a strong demand for research and development of optimized DNNs with the following considerations: quicker training and convergence (higher training speed); applicability to real-time environments (higher inference speed); the ability to generalize from a small number of data samples (weakly supervised and unsupervised strategies are also considered); and scalability across computational platforms (GPUs, CPUs, and embedded platforms). A proposed solution may focus on one or more of the above criteria.

We invite research articles presenting novel algorithms and experimental studies on machine vision applications, as well as ideas (methods, tools, concepts, or even literature surveys) that will contribute to the advancement of future deep learning models for resource-limited environments.

Prof. Dr. Jonathan Wu
Dr. Thangarajah Akilan
Dr. Jitendra Kumar
Dr. Chengsheng Yuan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • lightweight convolutional neural network
  • neural network compression
  • transfer learning/domain adaptation approaches
  • machine/computer vision
  • medical image processing
  • object classification/segmentation/localization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

14 pages, 2029 KiB  
Article
Image Classification in JPEG Compression Domain for Malaria Infection Detection
by Yuhang Dong and W. David Pan
J. Imaging 2022, 8(5), 129; https://doi.org/10.3390/jimaging8050129 - 3 May 2022
Cited by 2 | Viewed by 2387
Abstract
Digital images are usually stored in compressed format. However, image classification typically takes decompressed images as inputs rather than compressed images. Performing image classification directly in the compression domain therefore eliminates the need for decompression, increasing efficiency and decreasing cost. However, there has been very little work on image classification in the compression domain. In this paper, we studied the feasibility of classifying images in their JPEG compression domain. We analyzed the underlying mechanisms of JPEG as an example and conducted classification on data taken from different stages of the compression pipeline. The images we used were malaria-infected red blood cells and normal cells. The training data include multiple combinations of DCT coefficients, DC values in both decimal and binary forms, the “scan” segment in both binary and decimal form, and the entire variable-length bitstream. The results show that an LSTM can successfully classify images in their compressed form, with accuracies around 80%. Using only the coded DC values, we can achieve accuracies higher than 90%. This indicates that images from different classes can still be well separated in their JPEG-compressed format. Our simulations demonstrate that the proposed compression-domain processing method reduces the input data size and eliminates the image decompression step, thereby achieving significant savings in memory and computation time.
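For readers who want a concrete starting point, the following is a minimal sketch (not the authors' code) of compression-domain classification: a small PyTorch LSTM that consumes a sequence of per-block DC values, assumed to have already been parsed from the JPEG bitstream, and predicts infected vs. normal. All class names, dimensions, and data shapes are illustrative.

```python
# Hedged sketch: classify images from JPEG DC coefficients with an LSTM,
# without decompressing the images. Not the authors' implementation.
import torch
import torch.nn as nn

class DCSequenceClassifier(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_dim) -- e.g., per-block DC values read
        # directly from the compressed bitstream.
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])  # use the final hidden state for classification

if __name__ == "__main__":
    model = DCSequenceClassifier()
    dc_values = torch.randn(8, 1024, 1)   # 8 images, 1024 DC values each (toy data)
    logits = model(dc_values)
    print(logits.shape)                   # torch.Size([8, 2])
```

Because the input is only the coded DC stream rather than decoded pixels, the per-sample input is much smaller, which is where the memory and computation savings reported above come from.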

15 pages, 17786 KiB  
Article
HISFCOS: Half-Inverted Stage Block for Efficient Object Detection Based on Deep Learning
by Beomyeon Hwang, Sanghun Lee and Seunghyun Lee
J. Imaging 2022, 8(4), 117; https://doi.org/10.3390/jimaging8040117 - 17 Apr 2022
Cited by 1 | Viewed by 2311
Abstract
Recent advances in object detection play a key role in various industrial applications. However, the fully convolutional one-stage detector (FCOS), a conventional object detection method, has low detection accuracy relative to its computational cost. In this study, we therefore propose a half-inverted stage FCOS (HISFCOS) with improved detection accuracy at a computational cost comparable to FCOS, based on the proposed half-inverted stage (HIS) block. First, FCOS suffers from low detection accuracy owing to low-level information loss, so an HIS block that minimizes feature loss by extracting spatial and channel information in parallel is proposed. Second, detection accuracy is improved by reconstructing the feature pyramid on the basis of the proposed block and recovering low-level information. Lastly, the improved detection head structure reduces the computational cost compared with the conventional method. Through experiments, the optimal HISFCOS parameters were determined, and several datasets were used for fair comparison. HISFCOS was trained and evaluated on the PASCAL VOC and MSCOCO2017 datasets, with average precision (AP) used as the evaluation index to quantify detection performance. Compared with the conventional method, the number of parameters increased by 0.5 M, but detection accuracy improved by 3.0 AP and 1.5 AP on the PASCAL VOC and MSCOCO datasets, respectively. In addition, an ablation study was conducted, and the results for the proposed block and detection head were analyzed.
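The abstract describes extracting spatial and channel information in parallel to limit feature loss. The sketch below shows one generic way such a block could be assembled in PyTorch: a depthwise branch for per-channel spatial detail and a pointwise branch for channel mixing, fused by a residual sum. This is an illustrative assumption, not the published HIS block.

```python
# Hedged sketch of a block with parallel spatial and channel branches.
# Not the HIS block from the paper; structure and fusion are assumptions.
import torch
import torch.nn as nn

class ParallelSpatialChannelBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Spatial branch: depthwise 3x3 convolution preserves per-channel spatial detail.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Channel branch: pointwise 1x1 convolution mixes information across channels.
        self.channel = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # Run both branches on the same input and fuse with a residual connection.
        return x + self.spatial(x) + self.channel(x)

if __name__ == "__main__":
    block = ParallelSpatialChannelBlock(64)
    feat = torch.randn(1, 64, 56, 56)
    print(block(feat).shape)  # torch.Size([1, 64, 56, 56])
```

Keeping the two branches parallel (rather than stacking them) means neither spatial nor channel information has to pass through the other's bottleneck, which is the general motivation the abstract points to.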

23 pages, 417 KiB  
Article
Rethinking Weight Decay for Efficient Neural Network Pruning
by Hugo Tessier, Vincent Gripon, Mathieu Léonardon, Matthieu Arzel, Thomas Hannagan and David Bertrand
J. Imaging 2022, 8(3), 64; https://doi.org/10.3390/jimaging8030064 - 4 Mar 2022
Cited by 19 | Viewed by 3620
Abstract
Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which carries out efficient, continuous pruning throughout training. Our approach, theoretically grounded on Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
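As a rough illustration of the idea behind selective weight decay (not the authors' implementation), the sketch below adds an extra L2 penalty only to the fraction of smallest-magnitude weights that the current pruning criterion would remove; in the paper the penalty strength is scheduled over training, which is omitted here, and the function name and thresholding rule are assumptions.

```python
# Hedged sketch: extra L2 penalty applied only to the weights currently
# "marked" for pruning (smallest magnitude). Illustrative, not the paper's code.
import torch

def selective_weight_decay_penalty(model, sparsity=0.5, strength=1e-2):
    """Extra L2 penalty on the fraction `sparsity` of smallest-magnitude weights."""
    penalty = 0.0
    for param in model.parameters():
        if param.dim() < 2:                  # skip biases and norm parameters
            continue
        flat = param.abs().flatten()
        k = int(sparsity * flat.numel())
        if k == 0:
            continue
        threshold = torch.kthvalue(flat, k).values
        mask = param.abs() <= threshold      # weights the pruning criterion would remove
        penalty = penalty + strength * (param[mask] ** 2).sum()
    return penalty

# Usage inside a training loop (illustrative):
# loss = criterion(model(x), y) + selective_weight_decay_penalty(model, sparsity=0.5)
# loss.backward(); optimizer.step()
```

Because the penalty only pushes the would-be-pruned weights toward zero, pruning them at the end of training removes weights that are already near zero, which is the continuous-pruning behavior the abstract describes.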
