J. Imaging, Volume 10, Issue 2 (February 2024) – 22 articles

Cover Story: In the context of producing a digital surface model (DSM) and an orthophotomosaic of a study area, a modern Unmanned Aerial System (UAS) allows us to reduce the time required both for primary data collection in the field and for data processing in the office. The first objective is to test and compare the accuracy of the DSMs and orthophotomosaics generated from the UAS RGB sensor images when image processing is performed using only the PPK system measurements (without Ground Control Points (GCPs)) or when processing is performed using only GCPs. The second objective is to perform image fusion using the images of the two UAS sensors (RGB and Multispectral (MS)) and to control the spectral information transferred from the MS orthophotomosaic to the fused image. For this control, the combined study of the correlation matrix and the ERGAS index value is valuable.
15 pages, 5317 KiB  
Article
Constrained Plug-and-Play Priors for Image Restoration
by Alessandro Benfenati and Pasquale Cascarano
J. Imaging 2024, 10(2), 50; https://doi.org/10.3390/jimaging10020050 - 19 Feb 2024
Viewed by 909
Abstract
The Plug-and-Play framework has demonstrated that a denoiser can implicitly serve as the image prior for model-based methods for solving various inverse problems such as image restoration tasks. This characteristic enables the integration of the flexibility of model-based methods with the effectiveness of learning-based denoisers. However, the regularization strength induced by denoisers in the traditional Plug-and-Play framework lacks a physical interpretation, necessitating demanding parameter tuning. This paper addresses this issue by introducing the Constrained Plug-and-Play (CPnP) method, which reformulates the traditional PnP as a constrained optimization problem. In this formulation, the regularization parameter directly corresponds to the amount of noise in the measurements. The solution to the constrained problem is obtained through the design of an efficient method based on the Alternating Direction Method of Multipliers (ADMM). Our experiments demonstrate that CPnP outperforms competing methods in terms of stability and robustness while also achieving competitive performance for image quality. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
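The constrained formulation described in the abstract can be illustrated with a toy ADMM loop: the data step projects onto a noise ball whose radius comes directly from the assumed noise level, and a plain Gaussian smoother stands in for the learned denoiser. This is only a sketch under those assumptions, not the authors' CPnP implementation; the denoiser, iteration count, and test image are all illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def project_noise_ball(x, y, eps):
    # Projection onto {x : ||x - y||_2 <= eps}; the radius eps plays the
    # role of the regularization parameter and comes from the noise level.
    r = x - y
    n = np.linalg.norm(r)
    return y + r * (eps / n) if n > eps else x

def cpnp_admm(y, eps, denoise, iters=50):
    # Toy scaled-ADMM loop: x enforces the data constraint, z applies the
    # plug-and-play denoiser, u is the scaled dual variable.
    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        x = project_noise_ball(z - u, y, eps)
        z = denoise(x + u)
        u = u + x - z
    return z

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
eps = 0.2 * np.sqrt(clean.size)  # expected noise norm sets the radius
restored = cpnp_admm(noisy, eps, lambda v: gaussian_filter(v, 1.0))
print(np.mean((restored - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Note how the constraint radius is set from the known noise level alone, which is the interpretability advantage the abstract describes.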

15 pages, 4083 KiB  
Article
Automatic MTF Conversion between Different Characteristics Caused by Imaging Devices
by Midori Tanaka, Tsubasa Ando and Takahiko Horiuchi
J. Imaging 2024, 10(2), 49; https://doi.org/10.3390/jimaging10020049 - 17 Feb 2024
Viewed by 966
Abstract
Depending on various design conditions, including optics and circuit design, the image-forming characteristics of the modulation transfer function (MTF), which affect the spatial resolution of a digital image, may vary among image channels within or between imaging devices. In this study, we propose a method for automatically converting the MTF to a target MTF, focusing on adjusting the MTF characteristics that affect the signals of different image channels within and between imaging devices. The experimental results of MTF conversion using the proposed method for multiple image channels with different MTF characteristics indicated that the proposed method could produce sharper images by moving the source MTF of each channel closer to a target MTF with a higher MTF value. This study is expected to contribute to technological advancements in various imaging devices as follows: (1) Even if the imaging characteristics of the hardware are unknown, the MTF can be converted to the target MTF using the image after it is captured. (2) As any MTF can be converted into a target, image simulation for conversion to a different MTF is possible. (3) It is possible to generate high-definition images, thereby meeting the requirements of the various industrial and research fields that require them. Full article
(This article belongs to the Special Issue Imaging Technologies for Understanding Material Appearance)
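The core idea of mapping a source MTF onto a target MTF can be sketched in one dimension as a frequency-domain gain MTF_target/MTF_source applied via the FFT. The Gaussian MTF shapes and the small floor guarding against division by near-zero values are illustrative assumptions, not the paper's method.

```python
import numpy as np

def mtf_conversion_filter(mtf_src, mtf_tgt, eps=1e-3):
    # Frequency-domain gain that maps one channel's MTF onto a target MTF.
    # eps (an assumed guard value) avoids blow-up where the source MTF ~ 0.
    return mtf_tgt / np.maximum(mtf_src, eps)

def apply_1d(signal, gain):
    # Apply a frequency-domain gain to a 1-D signal (circular convolution).
    return np.real(np.fft.ifft(np.fft.fft(signal) * gain))

n = 256
f = np.abs(np.fft.fftfreq(n))       # spatial frequency, cycles/sample
mtf_src = np.exp(-(f / 0.10) ** 2)  # assumed Gaussian source MTF (blurry)
mtf_tgt = np.exp(-(f / 0.20) ** 2)  # assumed sharper target MTF

edge = np.repeat([0.0, 1.0], n // 2)          # ideal edge profile
blurred = apply_1d(edge, mtf_src)             # capture with source MTF
sharpened = apply_1d(blurred, mtf_conversion_filter(mtf_src, mtf_tgt))
target = apply_1d(edge, mtf_tgt)              # what the target device sees
print(np.mean((sharpened - target) ** 2) < np.mean((blurred - target) ** 2))
```

The converted edge lands much closer to the target device's rendition than the source rendition does, mirroring the paper's claim that moving each channel's MTF toward a higher-valued target yields sharper images.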

17 pages, 2521 KiB  
Article
Vectorial Image Representation for Image Classification
by Maria-Eugenia Sánchez-Morales, José-Trinidad Guillen-Bonilla, Héctor Guillen-Bonilla, Alex Guillen-Bonilla, Jorge Aguilar-Santiago and Maricela Jiménez-Rodríguez
J. Imaging 2024, 10(2), 48; https://doi.org/10.3390/jimaging10020048 - 13 Feb 2024
Viewed by 1179
Abstract
This paper proposes the transformation S→C, where S is a digital gray-level image and C is a vector expressed through the textural space. The proposed transformation is denominated Vectorial Image Representation on the Texture Space (VIR-TS), given that the digital image S is represented by the textural vector C. This vector C contains all of the local texture characteristics of the image of interest, and the texture unit T has a vectorial character, since it is defined by solving a homogeneous equation system. For the application of this transformation, a new multi-class classifier is proposed in the texture space, where the vector C is employed as a feature vector. To verify its efficiency, it was experimentally deployed for the recognition of digital images of tree bark, achieving effective performance. In these experiments, the parametric value λ employed to solve the homogeneous equation system does not affect the results of the image classification. The VIR-TS transform has potential applications in specific tasks, such as locating missing persons and the analysis and classification of diagnostic and medical images. Full article

12 pages, 1622 KiB  
Article
A Mobile App for Detecting Potato Crop Diseases
by Dunia Pineda Medina, Ileana Miranda Cabrera, Rolisbel Alfonso de la Cruz, Lizandra Guerra Arzuaga, Sandra Cuello Portal and Monica Bianchini
J. Imaging 2024, 10(2), 47; https://doi.org/10.3390/jimaging10020047 - 13 Feb 2024
Viewed by 1164
Abstract
Artificial intelligence techniques are now widely used in various agricultural applications, including the detection of devastating diseases such as late blight (Phytophthora infestans) and early blight (Alternaria solani) affecting potato (Solanum tuberosum L.) crops. In this paper, we present a mobile application for detecting potato crop diseases based on deep neural networks. The images were taken from the PlantVillage dataset, with 1000 images for each of the three identified classes (healthy, early blight-diseased, late blight-diseased). An exploratory analysis of the architectures used for early and late blight diagnosis in potatoes was performed, achieving an accuracy of 98.7% with MobileNetV2. Based on these results, an offline mobile application was developed, supported on devices with Android 4.1 or later, which also features an information section on the 27 diseases affecting potato crops and a gallery of symptoms. In future work, segmentation techniques will be used to highlight the damaged region in the potato leaf by evaluating its extent and possibly identifying different types of diseases affecting the same plant. Full article
(This article belongs to the Special Issue Imaging Applications in Agriculture)

19 pages, 429 KiB  
Article
Media Forensic Considerations of the Usage of Artificial Intelligence Using the Example of DeepFake Detection
by Dennis Siegel, Christian Kraetzer, Stefan Seidlitz and Jana Dittmann
J. Imaging 2024, 10(2), 46; https://doi.org/10.3390/jimaging10020046 - 09 Feb 2024
Cited by 1 | Viewed by 1632
Abstract
In recent discussions in the European Parliament, the need for regulations for so-called high-risk artificial intelligence (AI) systems was identified; these requirements are codified in the upcoming EU Artificial Intelligence Act (AIA), which has been approved by the European Parliament. The AIA is the first such document to be turned into European law. This initiative focuses on turning AI systems into decision support systems (human-in-the-loop and human-in-command), where the human operator remains in control of the system. While this supposedly solves accountability issues, it introduces, on the one hand, the necessary human–computer interaction as a potential new source of errors; on the other hand, it is potentially a very effective approach for decision interpretation and verification. This paper discusses the requirements that high-risk AI systems must meet once the AIA comes into force. Particular attention is paid to the opportunities and limitations that result from the decision support setting and from increasing the explainability of the system. This is illustrated using the example of the media forensic task of DeepFake detection. Full article

22 pages, 87559 KiB  
Article
Exploration of Interpretability Techniques for Deep COVID-19 Classification Using Chest X-ray Images
by Soumick Chatterjee, Fatima Saad, Chompunuch Sarasaen, Suhita Ghosh, Valerie Krug, Rupali Khatun, Rahul Mishra, Nirja Desai, Petia Radeva, Georg Rose, Sebastian Stober, Oliver Speck and Andreas Nürnberger
J. Imaging 2024, 10(2), 45; https://doi.org/10.3390/jimaging10020045 - 08 Feb 2024
Cited by 2 | Viewed by 1224
Abstract
The outbreak of COVID-19 has shocked the entire world with its fairly rapid spread, and has challenged different sectors. One of the most effective ways to limit its spread is the early and accurate diagnosis of infected patients. Medical imaging, such as X-ray and computed tomography (CT), combined with the potential of artificial intelligence (AI), plays an essential role in supporting medical personnel in the diagnosis process. Thus, in this article, five different deep learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and DenseNet161) and their ensemble, using majority voting, have been used to classify COVID-19, pneumonia and healthy subjects using chest X-ray images. Multilabel classification was performed to predict multiple pathologies for each patient, if present. The interpretability of each of the networks was thoroughly studied using local interpretability methods (occlusion, saliency, input X gradient, guided backpropagation, integrated gradients, and DeepLIFT) and a global technique (neuron activation profiles). The mean micro F1 score of the models for COVID-19 classification ranged from 0.66 to 0.875, and was 0.89 for the ensemble of the network models. The qualitative results showed that the ResNets were the most interpretable models. This research demonstrates the importance of using interpretability methods to compare different models before making a decision regarding the best performing model. Full article
(This article belongs to the Section AI in Imaging)
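The ensemble step mentioned in the abstract, majority voting over per-model label predictions, can be sketched as follows. The vote matrix and class labels are hypothetical, not the paper's data.

```python
import numpy as np

def majority_vote(preds):
    # Combine label predictions from several models (rows) by majority
    # vote per sample (column). Ties resolve to the smallest label, an
    # arbitrary but deterministic choice.
    preds = np.asarray(preds)
    n_labels = preds.max() + 1
    counts = np.apply_along_axis(np.bincount, 0, preds, minlength=n_labels)
    return counts.argmax(axis=0)

# Five hypothetical models voting on four chest X-rays
# (0 = healthy, 1 = pneumonia, 2 = COVID-19).
votes = [[0, 1, 2, 2],
         [0, 1, 2, 1],
         [0, 2, 2, 2],
         [1, 1, 2, 2],
         [0, 1, 0, 2]]
print(majority_vote(votes).tolist())  # → [0, 1, 2, 2]
```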

14 pages, 5685 KiB  
Article
Sand Painting Generation Based on Convolutional Neural Networks
by Chin-Chen Chang and Ping-Hao Peng
J. Imaging 2024, 10(2), 44; https://doi.org/10.3390/jimaging10020044 - 07 Feb 2024
Viewed by 1068
Abstract
Neural style transfer is an algorithm that transfers the style of one image onto another while preserving the content of the second image. In this paper, we propose a style transfer approach for sand painting generation based on convolutional neural networks. The proposed approach aims to improve sand painting generation via neural style transfer, addressing the problem of blurred objects and reducing the background noise caused by neural style transfer. First, we segment the main objects from the content image. Subsequently, we perform close–open filtering operations on the content image to obtain smooth images. Then, we perform Sobel edge detection to process the images and obtain edge maps. Based on these edge maps and the input style image, we perform neural style transfer to generate sand painting images. Finally, we integrate the generated images to obtain the final stylized sand painting image. The results show that the proposed approach yields good visual effects for sand paintings and achieves better visual effects than the previous method. Full article
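The smoothing and edge-map steps (close-open morphological filtering followed by Sobel edge detection) can be sketched with scipy.ndimage. The structuring-element size and the synthetic test image are assumptions for illustration; the paper's pipeline additionally segments objects and runs the neural style transfer itself.

```python
import numpy as np
from scipy import ndimage

def smooth_close_open(img, size=3):
    # Morphological close then open: fills small dark gaps, removes small
    # bright specks, yielding the smoothed image used before edge detection.
    closed = ndimage.grey_closing(img, size=(size, size))
    return ndimage.grey_opening(closed, size=(size, size))

def sobel_edge_map(img):
    # Gradient magnitude from horizontal and vertical Sobel responses.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

rng = np.random.default_rng(1)
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
img += 0.3 * (rng.random(img.shape) < 0.05)   # sparse speckle noise
edges = sobel_edge_map(smooth_close_open(img))
print(edges.shape == img.shape and edges.max() > 0)
```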

14 pages, 8556 KiB  
Article
Spherical Aberration and Scattering Compensation in Microscopy Images through a Blind Deconvolution Method
by Francisco J. Ávila and Juan M. Bueno
J. Imaging 2024, 10(2), 43; https://doi.org/10.3390/jimaging10020043 - 07 Feb 2024
Viewed by 1098
Abstract
The optical quality of an image depends on both the optical properties of the imaging system and the physical properties of the medium the light passes through while travelling from the object to the image plane. The computation of the point spread function (PSF) associated with the optical system is often used to assess the image quality. In a non-ideal optical system, the PSF is affected by aberrations that distort the final image. Moreover, in the presence of turbid media, scattering spreads the light over wide angular distributions, which reduces contrast and sharpness. If the mathematical degradation operator affecting the recorded image is known, the image can be restored through deconvolution methods. In some scenarios, no (or only partial) information on the PSF is available. In those cases, blind deconvolution approaches arise as useful solutions for image restoration. In this work, a new blind deconvolution method is proposed to restore images using spherical aberration (SA)- and scatter-based kernel filters. The procedure was evaluated on different microscopy images. The results show the capability of the algorithm to detect both degradation coefficients (i.e., SA and scattering) and to restore images without information on the real PSF. Full article
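When the degradation kernel is known, the non-blind restoration step can be sketched with a classical frequency-domain Wiener filter. The paper's blind method additionally estimates the SA and scattering coefficients, which this sketch does not attempt; the Gaussian PSF (standing in for a scatter kernel) and the noise-to-signal ratio are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    # Centered Gaussian kernel standing in for a scatter-based PSF.
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    # Wiener filter H* / (|H|^2 + NSR); nsr is an assumed noise-to-signal
    # ratio. A blind method would have to estimate the PSF as well.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
psf = gaussian_psf(img.shape, sigma=2.0)
# Simulate the degraded acquisition by circular convolution with the PSF.
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print(np.mean((restored - img) ** 2) < np.mean((blurred - img) ** 2))
```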

42 pages, 24044 KiB  
Review
Image Inpainting Forgery Detection: A Review
by Adrian-Alin Barglazan, Remus Brad and Constantin Constantinescu
J. Imaging 2024, 10(2), 42; https://doi.org/10.3390/jimaging10020042 - 02 Feb 2024
Viewed by 1935
Abstract
In recent years, significant advancements in the field of machine learning have influenced the domain of image restoration. While these technological advancements present prospects for improving the quality of images, they also present difficulties, particularly the proliferation of manipulated or counterfeit multimedia information on the internet. The objective of this paper is to provide a comprehensive review of existing inpainting algorithms and forgery detection methods, with a specific emphasis on techniques designed to remove objects from digital images. We examine various techniques, encompassing conventional texture synthesis methods as well as those based on neural networks. Furthermore, we present the artifacts frequently introduced by the inpainting procedure and assess the state of the art in detecting such modifications. Lastly, we look at the available datasets and how the methods compare with each other. The outcome of this study is a comprehensive perspective on the abilities and constraints of detecting object removal via inpainting in images. Full article

18 pages, 15270 KiB  
Article
Classification of Pepper Seeds by Machine Learning Using Color Filter Array Images
by Kani Djoulde, Boukar Ousman, Abboubakar Hamadjam, Laurent Bitjoka and Clergé Tchiegang
J. Imaging 2024, 10(2), 41; https://doi.org/10.3390/jimaging10020041 - 31 Jan 2024
Viewed by 1234
Abstract
The purpose of this work is to classify pepper seeds using color filter array (CFA) images. This study focused specifically on Penja pepper, a type of Piper nigrum found in the Littoral region of Cameroon. India and Brazil are the largest producers of this species of pepper; the production of Penja pepper is not as significant in quantity, but it is highly sought after and one of the most expensive types of pepper on the market. It can be difficult for humans to distinguish between different types of peppers based solely on the appearance of their seeds. To address this challenge, we collected 5618 samples of white and black Penja pepper and other varieties for classification using image processing and a supervised machine learning method. We extracted 18 attributes from the images and used them to train four different models. The most successful model was the support vector machine (SVM), which achieved an accuracy of 0.87, a precision of 0.874, a recall of 0.873, and an F1-score of 0.874. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
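The metrics reported above (accuracy, precision, recall, F1) can all be computed from a confusion matrix, as sketched below. The labels are hypothetical, and macro averaging is assumed since the abstract does not state the averaging scheme.

```python
import numpy as np

def classification_report(y_true, y_pred, n_classes):
    # Accuracy plus macro-averaged precision, recall, and F1 derived from
    # the confusion matrix cm, where cm[i, j] counts true class i
    # predicted as class j.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    acc = tp.sum() / cm.sum()
    return acc, precision.mean(), recall.mean(), f1.mean()

# Hypothetical predictions over three seed classes.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 1, 1])
acc, p, r, f1 = classification_report(y_true, y_pred, 3)
print(round(acc, 3))  # → 0.75
```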

15 pages, 4112 KiB  
Article
Multi-Particle Tracking in Complex Plasmas Using a Simplified and Compact U-Net
by Niklas Dormagen, Max Klein, Andreas S. Schmitz, Markus H. Thoma and Mike Schwarz
J. Imaging 2024, 10(2), 40; https://doi.org/10.3390/jimaging10020040 - 31 Jan 2024
Viewed by 1149
Abstract
Detecting micron-sized particles is an essential task for the analysis of complex plasmas because a large part of the analysis is based on the initially detected positions of the particles. Accordingly, high accuracy in particle detection is desirable. Previous studies have shown that machine learning algorithms have made great progress and outperformed classical approaches. This work presents an approach for tracking micron-sized particles in a dense cloud of particles in a dusty plasma at Plasmakristall-Experiment 4 (PK-4) using a U-Net, a convolutional network architecture for the fast and precise segmentation of images developed at the Computer Science Department of the University of Freiburg. The U-Net architecture, with its intricate design and skip connections, has been a powerhouse in achieving precise object delineation. However, as experiments are to be conducted in resource-constrained environments, such as parabolic flights, preferably with real-time applications, there is growing interest in exploring less complex U-Net architectures that balance efficiency and effectiveness. We compare the full-size neural network, three optimized neural networks, and the well-known StarDist and trackpy in terms of accuracy on artificial data. Finally, we determine which of the compact U-Net architectures provides the best balance between efficiency and effectiveness. We also apply the full-size network and the most effective compact network to the data of the PK-4 experiment. The experimental data were generated under laboratory conditions. Full article

14 pages, 8815 KiB  
Article
Evaluation of Non-Invasive Methods for (R)-[11C]PK11195 PET Image Quantification in Multiple Sclerosis
by Dimitri B. A. Mantovani, Milena S. Pitombeira, Phelipi N. Schuck, Adriel S. de Araújo, Carlos Alberto Buchpiguel, Daniele de Paula Faria and Ana Maria M. da Silva
J. Imaging 2024, 10(2), 39; https://doi.org/10.3390/jimaging10020039 - 31 Jan 2024
Viewed by 1226
Abstract
This study aims to evaluate non-invasive PET quantification methods for (R)-[11C]PK11195 uptake measurement in multiple sclerosis (MS) patients and healthy controls (HC) in comparison with arterial input function (AIF) using dynamic (R)-[11C]PK11195 PET and magnetic resonance images. The total volume of distribution (VT) and distribution volume ratio (DVR) were measured in the gray matter, white matter, caudate nucleus, putamen, pallidum, thalamus, cerebellum, and brainstem using AIF, the image-derived input function (IDIF) from the carotid arteries, and pseudo-reference regions from supervised clustering analysis (SVCA). Uptake differences between the MS and HC groups were tested using statistical tests adjusted for age and sex, and correlations between the results from the different quantification methods were also analyzed. Significant DVR differences were observed in the gray matter, white matter, putamen, pallidum, thalamus, and brainstem of MS patients when compared to the HC group. Also, strong correlations were found between the DVR values from the non-invasive methods and those from AIF (0.928 for IDIF and 0.975 for SVCA, p < 0.0001). On the other hand, (R)-[11C]PK11195 uptake could not be differentiated between MS patients and HC using VT values, and only a weak correlation (0.356, p < 0.0001) was found between the VT values obtained with AIF and with IDIF. Our study shows that the best alternative to AIF is SVCA-based reference region modeling, combined with a cautious and appropriate methodology. Full article

10 pages, 2294 KiB  
Article
Identifying the Causes of Unexplained Dyspnea at High Altitude Using Normobaric Hypoxia with Echocardiography
by Jan Stepanek, Juan M. Farina, Ahmed K. Mahmoud, Chieh-Ju Chao, Said Alsidawi, Chadi Ayoub, Timothy Barry, Milagros Pereyra, Isabel G. Scalia, Mohammed Tiseer Abbas, Rachel E. Wraith, Lisa S. Brown, Michael S. Radavich, Pamela J. Curtisi, Patricia C. Hartzendorf, Elizabeth M. Lasota, Kyley N. Umetsu, Jill M. Peterson, Kristin E. Karlson, Karen Breznak, David F. Fortuin, Steven J. Lester and Reza Arsanjani
J. Imaging 2024, 10(2), 38; https://doi.org/10.3390/jimaging10020038 - 31 Jan 2024
Viewed by 1295
Abstract
Exposure to high altitude results in hypobaric hypoxia, leading to physiological changes in the cardiovascular system that may result in limiting symptoms, including dyspnea, fatigue, and exercise intolerance. However, it is still unclear why some patients are more susceptible to high-altitude symptoms than others. Hypoxic simulation testing (HST) simulates the changes in physiology that occur at a specific altitude by asking patients to breathe a mixture of gases with decreased oxygen content. This study aimed to determine whether the use of transthoracic echocardiography (TTE) during HST can detect the rise in right-sided pressures and the impact of hypoxia on right ventricle (RV) hemodynamics and right-to-left shunts, thus revealing the underlying causes of high-altitude signs and symptoms. A retrospective study was performed including consecutive patients with unexplained dyspnea at high altitude. HSTs were performed by administering reduced FiO2 to simulate altitude levels specific to each patient's history. Echocardiography images were obtained at baseline and during hypoxia. The study included 27 patients with a mean age of 65 years; 14 (51.9%) were female. RV systolic pressure increased at peak hypoxia, while RV systolic function declined, as shown by a significant decrease in the tricuspid annular plane systolic excursion (TAPSE), the maximum velocity achieved by the lateral tricuspid annulus during systole (S' wave), and the RV free wall longitudinal strain. Additionally, a right-to-left shunt was present in 19 (70.4%) patients, as identified by bubble contrast injections. Among these, the severity of the shunt increased at peak hypoxia in eight cases (42.1%), and the shunt was only evident during hypoxia in seven patients (36.8%).
In conclusion, the use of TTE during HST provides valuable information by revealing the presence of symptomatic, sustained shunts and confirming the decline in RV hemodynamics, thus potentially explaining dyspnea at high altitude. Further studies are needed to establish the optimal clinical role of this physiologic method. Full article

18 pages, 2942 KiB  
Article
Point Projection Mapping System for Tracking, Registering, Labeling, and Validating Optical Tissue Measurements
by Lianne Feenstra, Stefan D. van der Stel, Marcos Da Silva Guimaraes, Behdad Dashtbozorg and Theo J. M. Ruers
J. Imaging 2024, 10(2), 37; https://doi.org/10.3390/jimaging10020037 - 30 Jan 2024
Viewed by 1385
Abstract
The validation of newly developed optical tissue-sensing techniques for tumor detection during cancer surgery requires an accurate correlation with the histological results. Additionally, such an accurate correlation facilitates precise data labeling for developing high-performance machine learning tissue-classification models. In this paper, a newly developed Point Projection Mapping system is introduced, which allows non-destructive tracking of the measurement locations on tissue specimens. Additionally, a framework for accurate registration, validation, and labeling with the histopathology results is proposed and validated on a case study. The proposed framework provides a more robust and accurate method for the tracking and validation of optical tissue-sensing techniques, saving time and resources compared to the available conventional techniques. Full article
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)

12 pages, 1549 KiB  
Article
The Reality of a Head-Mounted Display (HMD) Environment Tested via Lightness Perception
by Ichiro Kuriki, Kazuki Sato and Satoshi Shioiri
J. Imaging 2024, 10(2), 36; https://doi.org/10.3390/jimaging10020036 - 29 Jan 2024
Viewed by 1528
Abstract
Head-mounted displays (HMDs) are becoming more and more popular as devices for displaying a virtual reality space, but how real are they? The present study attempted to quantitatively evaluate the degree of reality achieved with HMDs by using a perceptual phenomenon as a measure. Lightness constancy is an ability present in human visual perception, in which the perceived reflectance (i.e., the lightness) of objects appears to stay constant across illuminant changes. Studies on color/lightness constancy in humans have shown that the degree of constancy is, in general, high when real objects are used as stimuli. We asked participants to make lightness matches between two virtual environments with different illuminant intensities, as presented in an HMD. The participants' matches showed a high degree of lightness constancy in the HMD: the constancy index was no less than 74.2% (84.8% at the maximum), whereas the average score on a computer screen was around 65%. The contribution of head tracking was confirmed by disabling that function, which produced a significant drop in the constancy index; however, constancy was equally high when the virtual environment was generated from replayed head motions. HMDs yield a realistic environment, with the visual scene extending in accordance with head motions. Full article
(This article belongs to the Special Issue Imaging Technologies for Understanding Material Appearance)

1 page, 175 KiB  
Correction
Correction: Bolocan et al. Convolutional Neural Network Model for Segmentation and Classification of Clear Cell Renal Cell Carcinoma Based on Multiphase CT Images. J. Imaging 2023, 9, 280
by Vlad-Octavian Bolocan, Mihaela Secareanu, Elena Sava, Cosmin Medar, Loredana Sabina Cornelia Manolescu, Alexandru-Ștefan Cătălin Rașcu, Maria Glencora Costache, George Daniel Radavoi, Robert-Andrei Dobran and Viorel Jinga
J. Imaging 2024, 10(2), 35; https://doi.org/10.3390/jimaging10020035 - 29 Jan 2024
Viewed by 987
Abstract
In the original publication [...] Full article
25 pages, 10937 KiB  
Article
Measurement Accuracy and Improvement of Thematic Information from Unmanned Aerial System Sensor Products in Cultural Heritage Applications
by Dimitris Kaimaris
J. Imaging 2024, 10(2), 34; https://doi.org/10.3390/jimaging10020034 - 28 Jan 2024
Viewed by 1301
Abstract
In the context of producing a digital surface model (DSM) and an orthophotomosaic of a study area, a modern Unmanned Aerial System (UAS) allows us to reduce the time required both for primary data collection in the field and for data processing in the office. It features sophisticated sensors and systems, is easy to use, and its products come with excellent horizontal and vertical accuracy. In this study, the UAS WingtraOne GEN II with RGB sensor (42 Mpixel), multispectral (MS) sensor (1.2 Mpixel) and built-in multi-frequency PPK GNSS antenna (for the high-accuracy calculation of the coordinates of the centers of the received images) is used. The first objective is to test and compare the accuracy of the DSMs and orthophotomosaics generated from the UAS RGB sensor images when image processing is performed using only the PPK system measurements (without Ground Control Points (GCPs)), or when processing is performed using only GCPs. For this purpose, 20 GCPs and 20 Check Points (CPs) were measured in the field. The results show that the horizontal accuracy of the orthophotomosaics is similar in both processing cases. The vertical accuracy is better when image processing uses only the GCPs, although this finding is provisional, as the survey was conducted at only one location. The second objective is to perform image fusion using the images of the above two UAS sensors and to control the spectral information transferred from the MS to the fused images. The study was carried out at three archaeological sites in Northern Greece. The combined study of the correlation matrix and the ERGAS index value at each location reveals that the process of improving the spatial resolution of MS orthophotomosaics leads to fused images suitable for classification, and therefore image fusion can be performed by utilizing the images from the two sensors. Full article
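For readers unfamiliar with the ERGAS index used above, a minimal sketch of its standard definition (erreur relative globale adimensionnelle de synthèse, a band-wise relative RMSE measure where lower values indicate better spectral fidelity) is given below. The function name and array layout are assumptions, not the paper's code:

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS index between a fused image and a reference MS image.

    fused, reference: arrays of shape (bands, height, width).
    ratio: spatial resolution ratio between the high-resolution image
           and the low-resolution MS image (pixel size of the former
           divided by pixel size of the latter, so typically < 1).
    """
    fused = np.asarray(fused, dtype=float)
    reference = np.asarray(reference, dtype=float)
    n_bands = reference.shape[0]
    acc = 0.0
    for b in range(n_bands):
        rmse = np.sqrt(np.mean((fused[b] - reference[b]) ** 2))
        mu = reference[b].mean()  # mean of the reference band
        acc += (rmse / mu) ** 2
    return 100.0 * ratio * np.sqrt(acc / n_bands)
```

An ERGAS of 0 means the fused image reproduces the reference spectra exactly; values below roughly 3 are conventionally taken as good fusion quality.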
11 pages, 2544 KiB  
Article
A Lightweight Browser-Based Tool for Collaborative and Blinded Image Analysis
by Philipp Schippers, Gundula Rösch, Rebecca Sohn, Matthias Holzapfel, Marius Junker, Anna E. Rapp, Zsuzsa Jenei-Lanzl, Philipp Drees, Frank Zaucke and Andrea Meurer
J. Imaging 2024, 10(2), 33; https://doi.org/10.3390/jimaging10020033 - 27 Jan 2024
Viewed by 1169
Abstract
Collaborative manual image analysis by multiple experts in different locations is an essential workflow in biomedical science. However, sharing the images and writing down results by hand or merging results from separate spreadsheets can be error-prone. Moreover, blinding and anonymization are essential to address subjectivity and bias. Here, we propose a new workflow for collaborative image analysis using a lightweight online tool named Tyche. The new workflow allows experts to access images via temporarily valid URLs and analyze them blindly and in random order inside a web browser, with the means to store the results in the same window. The results are then immediately computed and visible to the project master. The new workflow could be used for multi-center studies, inter- and intraobserver studies, and score validations. Full article
(This article belongs to the Section Image and Video Processing)
14 pages, 5416 KiB  
Article
Attention-Enhanced Unpaired xAI-GANs for Transformation of Histological Stain Images
by Tibor Sloboda, Lukáš Hudec, Matej Halinkovič and Wanda Benesova
J. Imaging 2024, 10(2), 32; https://doi.org/10.3390/jimaging10020032 - 25 Jan 2024
Viewed by 1329
Abstract
Histological staining is the primary method for confirming cancer diagnoses, but certain types, such as p63 staining, can be expensive and potentially damaging to tissues. In our research, we innovate by generating p63-stained images from H&E-stained slides for metaplastic breast cancer. This is a crucial development, considering the high costs and tissue risks associated with direct p63 staining. Our approach employs an advanced CycleGAN architecture, xAI-CycleGAN, enhanced with context-based loss to maintain structural integrity. The inclusion of convolutional attention in our model distinguishes between structural and color details more effectively, thus significantly enhancing the visual quality of the results. This approach shows a marked improvement over the base xAI-CycleGAN and standard CycleGAN models, offering the benefits of a more compact network and faster training even with the inclusion of attention. Full article
16 pages, 1800 KiB  
Review
Source Camera Identification Techniques: A Survey
by Chijioke Emeka Nwokeji, Akbar Sheikh-Akbari, Anatoliy Gorbenko and Iosif Mporas
J. Imaging 2024, 10(2), 31; https://doi.org/10.3390/jimaging10020031 - 25 Jan 2024
Viewed by 1578
Abstract
The successful investigation and prosecution of significant crimes, including child pornography, insurance fraud, movie piracy, traffic monitoring, and scientific fraud, hinge largely on the availability of solid evidence to establish the case beyond any reasonable doubt. When dealing with digital images/videos as evidence in such investigations, there is a critical need to conclusively prove the source camera/device of the questioned image. Extensive research has been conducted in the past decade to address this requirement, resulting in various methods categorized into brand, model, or individual image source camera identification techniques. This paper presents a survey of the existing methods found in the literature. It thoroughly examines the efficacy of these techniques for identifying the source camera of images, utilizing both intrinsic hardware artifacts, such as sensor pattern noise and lens optical distortion, and software artifacts, like color filter array and auto white balancing. The investigation aims to discern the strengths and weaknesses of these techniques. The paper provides publicly available benchmark image datasets and assessment criteria used to measure the performance of the different methods, facilitating a comprehensive comparison of existing approaches. In conclusion, the paper outlines directions for future research in the field of source camera identification. Full article
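As an illustration of the sensor-pattern-noise family of techniques surveyed above, the sketch below estimates a PRNU-style camera fingerprint from several images and matches a query image against it via normalized correlation. It substitutes a crude box-blur denoiser for the wavelet denoisers used in practice, and all names and the threshold are illustrative assumptions:

```python
import numpy as np

def box_denoise(img, k=3):
    """Very crude denoiser: k x k box blur (real PRNU pipelines use
    wavelet-based denoising)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_fingerprint(images):
    """Maximum-likelihood-style PRNU estimate from several images
    of the same camera: K ~ sum(W_i * I_i) / sum(I_i^2)."""
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img in images:
        img = img.astype(float)
        w = img - box_denoise(img)  # noise residual of this image
        num += w * img
        den += img * img
    return num / np.maximum(den, 1e-8)

def matches(query, fingerprint, threshold=0.01):
    """Normalized correlation between the query's noise residual and
    the fingerprint modulated by the query image content."""
    img = query.astype(float)
    w = img - box_denoise(img)
    a = w - w.mean()
    b = fingerprint * img
    b = b - b.mean()
    rho = (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum() + 1e-12)
    return rho, rho > threshold
```

A matching camera yields a correlation close to 1 on this synthetic setup, while images from a different (simulated) sensor correlate near zero.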
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
17 pages, 1236 KiB  
Article
A CNN Hyperparameters Optimization Based on Particle Swarm Optimization for Mammography Breast Cancer Classification
by Khadija Aguerchi, Younes Jabrane, Maryam Habba and Amir Hajjam El Hassani
J. Imaging 2024, 10(2), 30; https://doi.org/10.3390/jimaging10020030 - 24 Jan 2024
Viewed by 1504
Abstract
Breast cancer is considered one of the most common types of cancer among females in the world, with a high mortality rate. Medical imaging is still one of the most reliable tools to detect breast cancer. Unfortunately, manual image detection is time-consuming. This paper proposes a new deep learning method based on Convolutional Neural Networks (CNNs). Convolutional Neural Networks are widely used for image classification; however, determining accurate hyperparameters and architectures remains a challenging task. In this work, a highly accurate CNN model to detect breast cancer by mammography was developed. The proposed method uses the Particle Swarm Optimization (PSO) algorithm to search for suitable hyperparameters and an architecture for the CNN model. The CNN model using PSO achieved success rates of 98.23% and 97.98% on the DDSM and MIAS datasets, respectively. The experimental results showed that the proposed CNN model achieved the best accuracy values in comparison with other studies in the field. As a result, CNN models for mammography classification can now be created automatically. The proposed method can be considered a powerful technique for breast cancer prediction. Full article
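The PSO-based hyperparameter search described above can be sketched generically as follows. This is not the paper's implementation: in the paper's setting, `objective` would train and validate a CNN for a candidate point (learning rate, filter counts, layer sizes, and so on) and return the validation error; here it is a cheap stand-in, and all parameter choices are illustrative:

```python
import random

def pso(objective, bounds, n_particles=20, n_iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-bounded space.

    bounds: list of (lo, hi) per dimension. Returns (best_point, best_value).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For a CNN search, each evaluation is expensive, which is why PSO's relatively small number of objective calls (particles × iterations) matters compared with exhaustive grid search.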
2 pages, 137 KiB  
Editorial
Editorial for the Special Issue on “Geometry Reconstruction from Images”
by Daniel Meneveaux and Gianmarco Cherchi
J. Imaging 2024, 10(2), 29; https://doi.org/10.3390/jimaging10020029 - 23 Jan 2024
Viewed by 1109
Abstract
This special issue on geometry reconstruction from images has received much attention from the community, with 10 published papers [...] Full article
(This article belongs to the Special Issue Geometry Reconstruction from Images)