Traditional and Machine Learning Methods to Solve Imaging Problems

A topical collection in Algorithms (ISSN 1999-4893). This collection belongs to the section "Evolutionary Algorithms and Machine Learning".

Viewed by 22997

Editors


Guest Editor
Institute for High-Performance Computing and Networking, National Research Council of Italy, via P. Castellino, 111, I-80131 Naples, Italy
Interests: computational data science; image processing; omics and imaging data integration

Topical Collection Information

Dear Colleagues,

Imaging problems, such as image restoration and inpainting, are usually modelled as inverse problems, and analytical methods, such as regularization, have long been the classical approach to their solution. Even image segmentation can be formulated as an ill-posed problem, and several models incorporate a priori information as a form of regularization.
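
As a concrete illustration of the classical viewpoint, the sketch below (a minimal example assuming NumPy, a known blur kernel, and a periodic blur model; the regularization weight lam is an illustrative choice) recovers an image from a blurred, noisy observation via Tikhonov-regularized inversion in the Fourier domain.

```python
import numpy as np

def tikhonov_deblur(blurred, kernel, lam=1e-2):
    """Tikhonov-regularized deconvolution under a periodic (circular) blur model.

    Solves min_x ||k * x - y||^2 + lam * ||x||^2 in closed form via the FFT.
    """
    K = np.fft.fft2(kernel, s=blurred.shape)     # transfer function of the blur
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)  # closed-form regularized inverse
    return np.real(np.fft.ifft2(X))

# Toy usage with a synthetic image and a box blur
rng = np.random.default_rng(0)
x_true = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(kernel, s=x_true.shape)))
noisy = blurred + 0.01 * rng.standard_normal(blurred.shape)
x_hat = tikhonov_deblur(noisy, kernel, lam=1e-2)
```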

Recently, machine and deep learning methods have been widely used to solve such imaging problems, often outperforming classical approaches. Classical and learning methods reach the solution by different means: classical methods are carefully designed around domain knowledge to produce a specific solution, whereas learning approaches do not rely on such prior knowledge but exploit large datasets to "learn", i.e., to extract information about the unknown solution of the imaging problem. The two approaches can, however, be combined by integrating prior domain knowledge into the machine (and deep) learning framework.
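
One popular way of combining the two worlds is a plug-and-play iteration, in which the regularization step of a classical splitting scheme is replaced by a learned denoiser acting as an implicit prior. The following sketch assumes a generic forward operator A with adjoint At and a denoiser callable (e.g., a pretrained network); all names are illustrative, not a prescribed implementation.

```python
import numpy as np

def pnp_reconstruct(y, A, At, denoiser, step=0.5, iters=50):
    """Plug-and-play style reconstruction sketch.

    y        : measured data
    A, At    : forward operator and its adjoint (callables)
    denoiser : learned denoiser used as an implicit image prior
    """
    x = At(y)                          # simple back-projection as the initial guess
    for _ in range(iters):
        grad = At(A(x) - y)            # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # gradient step followed by the learned "prior" step
    return x

# Toy usage: identity forward model and a smoothing filter standing in for a trained network
A = At = lambda v: v
smooth = lambda v: 0.5 * v + 0.5 * np.roll(v, 1, axis=0)
y = np.random.default_rng(1).random((32, 32))
x_hat = pnp_reconstruct(y, A, At, smooth, step=0.5, iters=10)
```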

This Topical Collection aims to gather original research articles and reviews on these two approaches to solving imaging problems, including combined methods designed to provide better solutions. We welcome papers presenting results ranging from theory to experimental practice in various application fields, especially papers offering critical comparisons between traditional and learning methods that reveal their strengths and weaknesses.

Submissions may cover different application fields, such as biomedical imaging, microscopy imaging, and remote sensing, with potential topics of interest including, but not limited to:

  • Image deblurring;
  • Image denoising;
  • Image reconstruction from projections;
  • Image inpainting;
  • Image segmentation;
  • Image classification;
  • Object detection;
  • Application in biomedical imaging;
  • Application in super-resolution microscopy;
  • Application in healthcare;
  • Application in (your field of research!);
  • Other related areas.

Dr. Laura Antonelli
Dr. Lucia Maddalena
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image deblurring
  • image denoising
  • image inpainting
  • semantic segmentation
  • instance segmentation

Published Papers (9 papers)

2024


15 pages, 15233 KiB  
Article
A Preprocessing Method for Coronary Artery Stenosis Detection Based on Deep Learning
by Yanjun Li, Takaaki Yoshimura, Yuto Horima and Hiroyuki Sugimori
Algorithms 2024, 17(3), 119; https://doi.org/10.3390/a17030119 - 13 Mar 2024
Viewed by 797
Abstract
The detection of coronary artery stenosis is one of the most important indicators for the diagnosis of coronary artery disease. However, stenosis in branch vessels is often difficult to detect, both for computer-aided systems and for radiologists, because of several factors, such as the imaging angle and contrast agent inhomogeneity. Traditional coronary artery stenosis localization algorithms often detect only aortic stenosis and ignore branch vessels that may also pose major health threats. Therefore, improving the localization of branch-vessel stenosis in coronary angiographic images remains an open problem. In this study, we propose a preprocessing approach that combines vessel enhancement and image fusion as a prerequisite for deep learning. The sensitivity of the neural network to stenosis features is improved by enhancing the blurry features in coronary angiographic images. Validation with five neural networks, including YOLOv4 and R-FCN-Inceptionresnetv2, shows that the proposed method improves the performance of deep learning networks on images from six common imaging angles. The results show that the proposed method is suitable as a deep learning preprocessing step for coronary angiographic images and can improve the recognition of fine-vessel stenosis by deep models. Full article
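
As an illustration of this kind of preprocessing (not the authors' exact pipeline), the sketch below enhances tubular structures in an angiogram with scikit-image's Frangi vesselness filter and fuses the result with the original frame; the mixing weight alpha is a hypothetical parameter.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.filters import frangi

def enhance_and_fuse(angio_path, alpha=0.6):
    """Generic vessel-enhancement + image-fusion preprocessing sketch for angiograms."""
    img = img_as_float(io.imread(angio_path, as_gray=True))
    vesselness = frangi(img)                           # emphasizes thin, tubular structures
    vesselness /= vesselness.max() + 1e-8              # normalize to [0, 1]
    fused = alpha * img + (1.0 - alpha) * vesselness   # keep context, highlight vessels
    return (255 * fused / fused.max()).astype(np.uint8)
```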

2023


24 pages, 22771 KiB  
Article
Using Deep Learning to Detect the Need for Forest Thinning: Application to the Lungau Region, Austria
by Philipp Satlawa and Robert B. Fisher
Algorithms 2023, 16(9), 419; https://doi.org/10.3390/a16090419 - 01 Sep 2023
Viewed by 1191
Abstract
Timely information about the need to thin forests is vital in forest management to maintain a healthy forest while maximizing income. Currently, very-high-spatial-resolution remote sensing data can provide crucial assistance to experts when evaluating the maturity of thinnings. Nevertheless, this task is still predominantly carried out in the field and demands extensive resources. This paper presents a deep convolutional neural network (DCNN) to detect the necessity and urgency of carrying out thinnings using only remote sensing data. The approach uses very-high-spatial-resolution RGB and near-infrared orthophotos; a canopy height model (CHM); a digital terrain model (DTM); the slope; and reference data, which, in this case, originate from spruce-dominated forests in the Austrian Alps. After tuning, the model achieves an F1 score of 82.23% on our test data, which indicates that the model is usable in a practical setting. We conclude that DCNNs are capable of detecting the need to carry out thinnings in forests. In contrast, attempts to assess the urgency of the need for thinnings with DCNNs proved to be unsuccessful. However, additional data, such as age or yield class, have the potential to improve the results. Our investigation into the influence of each individual input feature shows that orthophotos appear to contain the most relevant information for detecting the need for thinning. Moreover, we observe a gain in performance when adding the CHM and slope, whereas adding the DTM harms the model’s performance. Full article
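
A minimal sketch of how such heterogeneous rasters might be combined for a DCNN (assuming co-registered layers and using PyTorch; the tiny network and layer sizes are illustrative, not the authors' architecture) is:

```python
import numpy as np
import torch
import torch.nn as nn

def stack_inputs(rgb, nir, chm, slope):
    """Stack co-registered rasters into a single (6, H, W) input tensor."""
    layers = np.dstack([rgb, nir, chm, slope]).astype(np.float32)  # (H, W, 6)
    return torch.from_numpy(layers).permute(2, 0, 1)               # (6, H, W)

# Any standard backbone whose first convolution accepts six channels would do.
model = nn.Sequential(
    nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),   # two classes: thinning needed / not needed
)
x = stack_inputs(np.zeros((128, 128, 3)), np.zeros((128, 128)),
                 np.zeros((128, 128)), np.zeros((128, 128)))
logits = model(x.unsqueeze(0))   # add a batch dimension
```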

40 pages, 3336 KiB  
Article
Storytelling with Image Data: A Systematic Review and Comparative Analysis of Methods and Tools
by Fariba Lotfi, Amin Beheshti, Helia Farhood, Matineh Pooshideh, Mansour Jamzad and Hamid Beigy
Algorithms 2023, 16(3), 135; https://doi.org/10.3390/a16030135 - 02 Mar 2023
Cited by 2 | Viewed by 3511
Abstract
In our digital age, data are generated constantly from public and private sources, social media platforms, and the Internet of Things. A significant portion of this information comes in the form of unstructured images and videos, such as the 95 million daily photos and videos shared on Instagram and the 136 billion images available on Google Images. Despite advances in image processing and analytics, the current state of the art lacks effective methods for discovering, linking, and comprehending image data. Consider, for instance, the images from a crime scene that hold critical information for a police investigation. Currently, no system can interactively generate a comprehensive narrative of events from the incident to the conclusion of the investigation. To address this gap in research, we have conducted a thorough systematic literature review of existing methods, from labeling and captioning to extraction, enrichment, and transforming image data into contextualized information and knowledge. Our review has led us to propose the vision of storytelling with image data, an innovative framework designed to address fundamental challenges in image data comprehension. In particular, we focus on the research problem of understanding image data in general and, specifically, curating, summarizing, linking, and presenting large amounts of image data in a digestible manner to users. In this context, storytelling serves as an appropriate metaphor, as it can capture and depict the narratives and insights locked within the relationships among data stored across different islands. Additionally, a story can be subjective and told from various perspectives, ranging from a highly abstract narrative to a highly detailed one. Full article

20 pages, 6726 KiB  
Article
Examination of Lemon Bruising Using Different CNN-Based Classifiers and Local Spectral-Spatial Hyperspectral Imaging
by Razieh Pourdarbani, Sajad Sabzi, Mohsen Dehghankar, Mohammad H. Rohban and Juan I. Arribas
Algorithms 2023, 16(2), 113; https://doi.org/10.3390/a16020113 - 14 Feb 2023
Cited by 5 | Viewed by 2156
Abstract
The presence of bruises on fruits often indicates cell damage, which can lead to a decrease in the ability of the peel to keep oxygen away from the fruit; as a result, oxygen breaks down cell walls and membranes, damaging the fruit content. When chemicals in the fruit are oxidized by enzymes such as polyphenol oxidase, the chemical reaction produces an undesirable and apparent brown color, among other effects. Early detection of bruising prevents low-quality fruit from entering the consumer market. The present paper therefore aims at early identification of bruised lemons using 3D convolutional neural networks (3D-CNNs) and a local spectral-spatial hyperspectral imaging technique, which takes into account adjacent image pixel information in both the frequency (wavelength) and spatial domains of a 3D-tensor hyperspectral image of the input lemons. A total of 70 sound lemons were picked from orchards. First, all fruits were labeled and their hyperspectral images (wavelength range 400–1100 nm) were captured as belonging to the healthy (unbruised) class (class label 0). Next, bruising was applied to each lemon by freefall. Then, the hyperspectral images of all bruised samples were captured 8 h (class label 1) and 16 h (class label 2) after bruising was induced, resulting in a three-class classification problem. Four well-known 3D-CNN models, namely ResNet, ShuffleNet, DenseNet, and MobileNet, were used to classify the bruised lemons in Python. The results revealed that the highest classification accuracy (90.47%) was obtained by the ResNet model, followed by DenseNet (85.71%), ShuffleNet (80.95%), and MobileNet (73.80%), all measured on the test set. The ResNet model had more parameters, yet it trained faster than the models with fewer free parameters. ShuffleNet and MobileNet were easier to train and required less storage, but they could not achieve classification errors as low as those of the other two models. Full article
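
The core idea of the compared models is that 3D convolutions mix spectral and spatial neighbourhoods jointly. A minimal PyTorch sketch (not one of the four networks used in the paper; shapes and layer sizes are illustrative) is:

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Minimal 3D-CNN for ternary classification of hyperspectral cubes."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):                  # x: (batch, 1, bands, height, width)
        return self.classifier(self.features(x).flatten(1))

cube = torch.randn(4, 1, 60, 32, 32)       # e.g., 60 bands over a 32x32 spatial patch
logits = Tiny3DCNN()(cube)                 # (4, 3): unbruised / 8 h / 16 h after bruising
```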

14 pages, 890 KiB  
Article
Image Quality Assessment for Gibbs Ringing Reduction
by Yue Wang and John J. Healy
Algorithms 2023, 16(2), 96; https://doi.org/10.3390/a16020096 - 09 Feb 2023
Cited by 3 | Viewed by 1417
Abstract
Gibbs ringing is an artefact that is inevitable in any imaging modality where the measurement is Fourier band-limited. It impacts image quality by creating a ringing appearance around discontinuities. Many ways of suppressing the artefact have been proposed, including machine learning methods, but quantitative comparisons of the results have frequently lacked rigour. In this paper, we examine image quality assessment metrics on three test images of different complexity. We identify six metrics that show promise for simultaneously assessing the severity of Gibbs ringing and of other errors such as blurring. We also examined restricting the metrics to a region of interest around discontinuities in the image and demonstrate that this region-of-interest approach does not improve the performance of the metrics. Finally, we examine the effect of the error threshold parameter in two metrics. Our results will aid the development of best practice in comparing algorithms for the suppression of Gibbs ringing. Full article
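
As a flavour of the evaluation protocol (the specific six metrics of the paper are not reproduced here), the sketch below computes two standard full-reference scores with scikit-image, both on the full image and restricted to a band around discontinuities; the edge threshold and band width are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def roi_around_edges(reference, width=5):
    """Boolean mask covering a band of `width` pixels around strong discontinuities."""
    gy, gx = np.gradient(reference.astype(float))
    edges = np.hypot(gx, gy) > 0.5 * np.hypot(gx, gy).max()
    return maximum_filter(edges, size=2 * width + 1)   # dilate the edge map

def assess(reference, degraded):
    """Full-image and ROI-restricted quality scores for a ringing-corrupted image."""
    mask = roi_around_edges(reference)
    return {
        "psnr_full": peak_signal_noise_ratio(reference, degraded, data_range=1.0),
        "ssim_full": structural_similarity(reference, degraded, data_range=1.0),
        "psnr_roi": peak_signal_noise_ratio(reference[mask], degraded[mask], data_range=1.0),
    }
```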

13 pages, 28183 KiB  
Article
Egyptian Hieroglyphs Segmentation with Convolutional Neural Networks
by Tommaso Guidi, Lorenzo Python, Matteo Forasassi, Costanza Cucci, Massimiliano Franci, Fabrizio Argenti and Andrea Barucci
Algorithms 2023, 16(2), 79; https://doi.org/10.3390/a16020079 - 01 Feb 2023
Cited by 6 | Viewed by 4661
Abstract
The objective of this work is to apply a deep learning algorithm to the segmentation of ancient Egyptian hieroglyphs in an image, with the ambition of being as versatile as possible despite the variability of the image source. The problem is quite complex, the main obstacles being the considerable number of different classes of existing hieroglyphs, the differences related to the hand of the scribe, and the great differences among the supports on which they are written, such as papyrus, stone, or wood. Furthermore, as in all archaeological finds, damage to the supports is frequent, with the consequence that hieroglyphs can be partially corrupted. In order to face this challenging problem, we leverage the well-known Detectron2 platform, developed by the Facebook AI Research group, focusing on the Mask R-CNN architecture to perform instance segmentation. As in many machine learning studies, one of the hardest challenges is the creation of a suitable dataset. In this paper, we describe a hieroglyph dataset created for the purpose of segmentation, highlighting its pros and cons, and the impact of different hyperparameters on the final results. Tests on the segmentation of images taken from public databases are also presented and discussed, along with the limitations of our study. Full article
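
For readers unfamiliar with Detectron2, a typical Mask R-CNN setup for a custom, COCO-format dataset looks roughly like the sketch below; the dataset names, paths, class count, and solver settings are placeholders, not the configuration used in the paper.

```python
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format annotations for a hieroglyph dataset.
register_coco_instances("hieroglyphs_train", {}, "annotations/train.json", "images/train")
register_coco_instances("hieroglyphs_val", {}, "annotations/val.json", "images/val")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")   # start from COCO weights
cfg.DATASETS.TRAIN = ("hieroglyphs_train",)
cfg.DATASETS.TEST = ("hieroglyphs_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1     # e.g., a single "hieroglyph" class; adjust as needed
cfg.SOLVER.BASE_LR = 2.5e-4             # illustrative hyperparameters
cfg.SOLVER.MAX_ITER = 5000

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```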

18 pages, 1408 KiB  
Article
Iterative Image Reconstruction Algorithm with Parameter Estimation by Neural Network for Computed Tomography
by Takeshi Kojima and Tetsuya Yoshinaga
Algorithms 2023, 16(1), 60; https://doi.org/10.3390/a16010060 - 16 Jan 2023
Cited by 1 | Viewed by 1755
Abstract
Recently, an extended family of power-divergence measures with two parameters was proposed, together with an iterative reconstruction algorithm for computed tomography based on minimizing the divergence measure as an objective function of the reconstructed image. Numerical experiments illustrated that, with appropriately chosen parameter values, the algorithm has advantages over conventional iterative methods when reconstructing from noisy measured projections. In this paper, we present a novel neural network architecture for determining the most appropriate parameters depending on the noise level of the projections and the shape of the target image. Through experiments, we show that the architecture, whose optimization sub-network uses multiplicative rather than additive connections, works well. Full article
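
The power-divergence objective and the parameter-estimating network themselves are beyond a short sketch, but the family generalizes classical multiplicative iterations. For orientation, a generic MLEM-style iteration for a linear projection model (an assumed baseline, not the paper's algorithm) looks like:

```python
import numpy as np

def mlem_reconstruct(y, A, iters=20, eps=1e-8):
    """Generic multiplicative (MLEM-style) iterative CT reconstruction sketch.

    y : measured projections, shape (m,)
    A : system matrix mapping image pixels to projections, shape (m, n)
    """
    x = np.ones(A.shape[1])                # flat initial image
    norm = A.T @ np.ones(A.shape[0])       # column sums used for normalization
    for _ in range(iters):
        ratio = y / (A @ x + eps)          # measured data / current forward projection
        x *= (A.T @ ratio) / (norm + eps)  # multiplicative update preserves positivity
    return x

# Toy usage with a random system matrix and a non-negative phantom
rng = np.random.default_rng(0)
A = rng.random((200, 100))
x_true = rng.random(100)
x_hat = mlem_reconstruct(A @ x_true, A, iters=50)
```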

2022


9 pages, 2299 KiB  
Article
Improved Ship Detection Algorithm from Satellite Images Using YOLOv7 and Graph Neural Network
by Krishna Patel, Chintan Bhatt and Pier Luigi Mazzeo
Algorithms 2022, 15(12), 473; https://doi.org/10.3390/a15120473 - 12 Dec 2022
Cited by 16 | Viewed by 3839
Abstract
One of the most critical issues that a marine surveillance system has to address is the accuracy of its ship detection. Since it is responsible for identifying potential pirate threats, it has to perform this task efficiently. In this paper, we present a novel deep learning approach that combines the capabilities of a Graph Neural Network (GNN) and the You Only Look Once (YOLOv7) deep learning framework. The main idea of this method is to provide a better understanding of ship presence in harbor areas. The three hyperparameters used in the development of this system are the learning rate, the batch size, and the choice of optimizer. The experimental results show that, with the Adam optimizer, the method achieves a 93.4% success rate, improving on the baseline YOLOv7 algorithm. The High-Resolution Satellite Image Dataset (HRSID), a dataset of high-resolution synthetic aperture radar (SAR) images, was used for testing. This method can be further improved by taking into account other kinds of neural network architecture commonly used in deep learning. Full article
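
The exact YOLOv7 + GNN coupling is described in the paper; as a rough illustration of the general idea, the sketch below builds a graph over detected bounding boxes (nodes are detections, edges connect spatially close pairs) and applies one mean-aggregation message-passing step to refine per-detection features. The tensors, distance threshold, and feature size are illustrative assumptions.

```python
import torch

def build_edges(centers, radius=50.0):
    """Connect detections whose box centres lie within `radius` pixels (self-loops included)."""
    d = torch.cdist(centers, centers)       # pairwise centre distances
    return (d < radius).float()             # dense adjacency matrix

def message_pass(features, adjacency, weight):
    """One mean-aggregation message-passing step over the detection graph."""
    degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
    aggregated = adjacency @ features / degree   # average neighbour features
    return torch.relu(aggregated @ weight)       # shared linear transform + nonlinearity

# Illustrative use: 5 detections with (x, y) centres and 16-dim appearance features
centers = torch.rand(5, 2) * 200
features = torch.rand(5, 16)
W = torch.rand(16, 16)
refined = message_pass(features, build_edges(centers), W)   # shape (5, 16)
```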

23 pages, 7441 KiB  
Article
Vineyard Gap Detection by Convolutional Neural Networks Fed by Multi-Spectral Images
by Shazia Sulemane, João P. Matos-Carvalho, Dário Pedro, Filipe Moutinho and Sérgio D. Correia
Algorithms 2022, 15(12), 440; https://doi.org/10.3390/a15120440 - 22 Nov 2022
Cited by 4 | Viewed by 2269
Abstract
This paper focuses on the gaps that occur inside plantations; although nothing grows in these gaps, they are still watered. This wastes large amounts of water every year, which translates into financial and environmental losses. To avoid these losses, we suggest early detection of the gaps. To this end, we analyzed the different neural networks available for multispectral images. This entailed training each regional and regression-based network five times with five different datasets. Networks based on two possible solutions were chosen: unmanned aerial vehicle (UAV) depletion or post-processing with external software. The results show that the best network for UAV depletion is the Tiny-YOLO (You Only Look Once) version 4-type network, and the best starting weights for Mask-RCNN were those from the Tiny-YOLO network version. Although no mean average precision (mAP) of over 70% was achieved, the final trained networks managed to detect mostly gaps, including low-vegetation areas and very small gaps, which had a tendency to be overlooked during the labeling stage. Full article
