Computational Imaging and Its Application

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 December 2024) | Viewed by 10090

Special Issue Editors

Guest Editor
1. School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
2. Hangzhou Institute of Technology, Xidian University, Hangzhou 311231, China
Interests: computational imaging; polarization imaging; 3D imaging and machine vision

Guest Editor
1. School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
2. Hangzhou Institute of Technology, Xidian University, Hangzhou 311231, China
Interests: computational imaging; optical instrumentation; optical image processing and pattern recognition

Guest Editor
School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
Interests: imaging through scattering media; computational optical imaging system design; quantitative phase imaging techniques and applications

Guest Editor
School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
Interests: imaging through scattering media; biomedical imaging

Guest Editor
Hangzhou Institute of Technology, Xidian University, Hangzhou 311231, China
Interests: lensless optics; deep learning

Special Issue Information

Dear Colleagues,

After many years of development, computational imaging techniques have had a profound societal and economic impact. As application environments and detection technologies change rapidly, traditional methods can no longer meet the demand for high-quality imaging. Robust and effective methods that ensure the resolution, clarity, efficiency, and reliability of imaging across different application scenarios are therefore becoming increasingly important in both academia and industry. In particular, there is a pressing need to explore and develop new imaging technologies with higher resolution, smaller optical systems, stronger adaptability, longer detection distances, and larger fields of view. Many open problems in this area still require deeper study, and research on advanced computational imaging techniques and their applications therefore holds great potential to improve our world.

The objective of this Special Issue is to attract the latest research on computational imaging and its applications, bringing together leading researchers and developers from academia and industry to present their novel work. Submitted papers will be peer-reviewed and selected based on their quality and relevance to the main themes of this Special Issue.

The scope includes, but is not limited to:

(1) 3D imaging;

(2) Polarization imaging;

(3) Scattering imaging;

(4) Wavefront coding imaging;

(5) Phase imaging;

(6) Biomedical imaging;

(7) Computational imaging with deep learning;

(8) Lensless optics;

(9) Fiber optic sensing;

(10) Optical frequency combs and their applications.

Dr. Xuan Li
Prof. Dr. Xiaopeng Shao
Dr. Teli Xi
Dr. Jinpeng Liu
Dr. Yangyundou Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computational imaging
  • phase
  • polarization
  • 3D
  • coding
  • digital holography
  • wave front sensing
  • deep learning
  • super-resolution

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

20 pages, 4568 KiB  
Article
Frame-Stacking Method for Dark Digital Holographic Microscopy to Acquire 3D Profiles in a Low-Power Laser Environment
by Takahiro Koga, Kosei Nakamura, Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Electronics 2025, 14(5), 879; https://doi.org/10.3390/electronics14050879 - 23 Feb 2025
Viewed by 320
Abstract
Digital Holographic Microscopy (DHM) converts hologram images into three-dimensional (3D) images by image processing, enabling the detailed shapes of observed objects to be obtained. Three-dimensional imaging of microscopic objects by DHM can contribute to early diagnosis and disease detection in the medical field by revealing the shape of cells. DHM requires several experimental components, one of which is the laser; its high power is problematic because it may deform or destroy cells and kill microorganisms. Since the greatest advantage of DHM is the detailed geometrical information obtained by 3D measurement, the loss of such information is a serious problem. To address this, a Neutral Density (ND) filter has conventionally been used to reduce the laser power. However, the image acquired by the image sensor then becomes too dark to provide sufficient information, and the effect of noise increases as the amount of light decreases. In this paper, we therefore propose the Frame-Stacking Method (FSM) for dark DHM, which reproduces 3D profiles that allow the shape of objects to be observed from images taken in low-power environments. The proposed method achieves highly accurate 3D profiles by decomposing low-power videos into frames and then superimposing and rescaling the resulting low-power images. Because continuous laser irradiation over a long period may also destroy the shape of cells and kill microorganisms, we conducted experiments to investigate the relationship between the number of superimposed images (corresponding to the irradiation time) and the 3D profile, as well as between the laser power and the 3D profile.
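As a rough illustration of the frame-superposition idea described in this abstract, the sketch below decomposes a low-power video into frames, accumulates them, and rescales the result; it assumes OpenCV and NumPy, and the function name and `num_frames` parameter are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def stack_frames(video_path, num_frames):
    """Superimpose the first `num_frames` frames of a low-power video and
    rescale the result to the full 8-bit range (illustrative sketch only)."""
    cap = cv2.VideoCapture(video_path)
    acc, count = None, 0
    while count < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
        acc = gray if acc is None else acc + gray
        count += 1
    cap.release()
    if acc is None:
        raise ValueError("no frames could be read from the video")
    # Rescale the accumulated intensity to [0, 255] before hologram reconstruction.
    acc -= acc.min()
    if acc.max() > 0:
        acc *= 255.0 / acc.max()
    return acc.astype(np.uint8)

# Example: stacked = stack_frames("dark_dhm_video.avi", num_frames=30)
```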

16 pages, 5984 KiB  
Article
Automated Scattering Media Estimation in Peplography Using SVD and DCT
by Seungwoo Song, Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Electronics 2025, 14(3), 545; https://doi.org/10.3390/electronics14030545 - 29 Jan 2025
Viewed by 587
Abstract
In this paper, we propose the automated estimation of scattering media information in peplography using singular value decomposition (SVD) and the discrete cosine transform (DCT). Conventional scattering media-removal methods reduce light scattering in images using a variety of image-processing techniques and machine learning algorithms. However, under heavy scattering conditions, they may not clearly visualize the object information. Peplography has been proposed as a solution to this problem: it visualizes the object by estimating the scattering media information and detecting ballistic photons from heavy scattering media, after which 3D information can be obtained by integral imaging. However, it is difficult to apply this method in real-world situations because the scattering media estimation in peplography is not automated. To overcome this problem, we use automatic scattering media-estimation methods based on SVD and DCT, which estimate the scattering media information by truncating the singular value matrix and applying a Gaussian low-pass filter in the frequency domain. To evaluate the proposed method, we conducted experiments under two different conditions and compared the resulting images with the conventional method using metrics such as structural similarity (SSIM), feature similarity (FSIMc), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS).
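The two estimation ideas named in the abstract can be sketched as follows, assuming NumPy and SciPy; the truncation rank, the filter width, and the helper names are illustrative assumptions, not the authors' values or code.

```python
import numpy as np
from scipy.fft import dctn, idctn

def svd_background(img, rank=1):
    """Estimate a low-rank scattering-media (background) image by keeping
    only the largest `rank` singular values (illustrative parameter)."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    s[rank:] = 0.0
    return U @ np.diag(s) @ Vt

def dct_background(img, sigma=5.0):
    """Estimate the background by applying a Gaussian low-pass filter to the
    2D DCT coefficients (`sigma` is an illustrative width)."""
    coeffs = dctn(img.astype(np.float64), norm="ortho")
    u = np.arange(img.shape[0])[:, None]
    v = np.arange(img.shape[1])[None, :]
    lowpass = np.exp(-(u**2 + v**2) / (2.0 * sigma**2))
    return idctn(coeffs * lowpass, norm="ortho")
```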

20 pages, 458 KiB  
Article
Neural Architecture Search via Trainless Pruning Algorithm: A Bayesian Evaluation of a Network with Multiple Indicators
by Yiqi Lin, Yuki Endo, Jinho Lee and Shunsuke Kamijo
Electronics 2024, 13(22), 4547; https://doi.org/10.3390/electronics13224547 - 19 Nov 2024
Viewed by 942
Abstract
Neural Architecture Search (NAS) has found applications in various areas of computer vision, including image recognition and object detection. An increasing number of algorithms, such as ENAS (Efficient Neural Architecture Search via Parameter Sharing) and DARTS (Differentiable Architecture Search), have been applied to NAS. Nevertheless, current training-free NAS methods remain unreliable and inefficient. This paper introduces a training-free prune-based algorithm called TTNAS (True-Skill Training-Free Neural Architecture Search), which utilizes a Bayesian method (true-skill algorithm) to combine multiple indicators for evaluating neural networks across different datasets. The algorithm demonstrates highly competitive accuracy and efficiency compared to state-of-the-art approaches on various datasets. Specifically, it achieves 93.90% accuracy on CIFAR-10, 71.91% accuracy on CIFAR-100, and 44.96% accuracy on ImageNet 16-120, using 1466 GPU seconds in NAS-Bench-201. Additionally, the algorithm exhibits improved adaptation to other datasets and tasks.
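One way to read "a Bayesian method to combine multiple indicators" is as TrueSkill-style rating updates over pairwise comparisons; the sketch below, using the `trueskill` package, is only an illustration of that idea under assumed indicator names, not the TTNAS implementation.

```python
import trueskill

def rank_architectures(scores):
    """Combine several training-free indicators into one ranking with the
    TrueSkill Bayesian rating model. `scores[arch][indicator]` holds one
    score per indicator; each pairwise comparison on each indicator is
    treated as a match. Illustrative sketch only."""
    ratings = {arch: trueskill.Rating() for arch in scores}
    archs = list(scores)
    indicators = sorted({k for v in scores.values() for k in v})
    for ind in indicators:
        for i, a in enumerate(archs):
            for b in archs[i + 1:]:
                if scores[a][ind] == scores[b][ind]:
                    continue
                w, l = (a, b) if scores[a][ind] > scores[b][ind] else (b, a)
                ratings[w], ratings[l] = trueskill.rate_1vs1(ratings[w], ratings[l])
    # Rank by the conservative skill estimate mu - 3*sigma.
    return sorted(archs, key=lambda a: ratings[a].mu - 3 * ratings[a].sigma, reverse=True)

# Example with hypothetical indicator scores:
# order = rank_architectures({
#     "arch_A": {"snip": 0.8, "synflow": 12.1},
#     "arch_B": {"snip": 0.6, "synflow": 15.4},
#     "arch_C": {"snip": 0.9, "synflow": 9.7},
# })
```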

16 pages, 2015 KiB  
Article
A Study on the Simple Encryption of QR Codes Using Random Numbers
by Iori Okubo, Seiya Ono, Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Electronics 2024, 13(15), 3003; https://doi.org/10.3390/electronics13153003 - 30 Jul 2024
Viewed by 1649
Abstract
Recently, with the widespread adoption of quick response (QR) code payments, there have been incidents of unauthorized use of QR codes presented at the time of payment due to theft or duplication. As a countermeasure, conventional QR code payment systems update the QR code periodically. However, a payment can still be made with an illegally obtained QR code until the next update, so the QR code itself must be encrypted to prevent duplication. The objective of this research is to prevent fraudulent use of QR payments by combining image encryption using random numbers with Rivest Cipher 4 (RC4). In this paper, we encrypt the QR code presented at the time of payment with random numbers generated from a uniform distribution and encrypt the seed value, which serves as the decryption key, using RC4. As a result, the proposed method prevents unauthorized use of a payment QR code obtained by stealing its image, while providing sufficient processing speed and encryption strength. Histogram analysis, key sensitivity analysis, and correlation coefficients were used to measure the encryption strength. The proposed method is therefore expected to enable more secure use of QR payments than conventional systems.
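A minimal sketch of the two-layer idea follows, assuming NumPy: XOR with uniform random numbers stands in for the image encryption, and the RC4 routine is a textbook implementation used to protect the seed; the function names and key handling are illustrative, not the authors' scheme.

```python
import numpy as np

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 stream cipher (KSA + PRGA); the same call encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def encrypt_qr(qr_img: np.ndarray, seed: int, rc4_key: bytes):
    """XOR the QR image with uniform random numbers derived from `seed`,
    then protect the seed (the decryption key) with RC4."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, size=qr_img.shape, dtype=np.uint8)
    cipher_img = qr_img ^ noise
    enc_seed = rc4(rc4_key, seed.to_bytes(8, "big"))
    return cipher_img, enc_seed
```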

12 pages, 4378 KiB  
Article
Boundary Segmentation of Vascular Images in Fourier Domain Doppler Optical Coherence Tomography Based on Deep Learning
by Chuanchao Wu, Zhibin Wang, Peng Xue and Wenyan Liu
Electronics 2024, 13(13), 2516; https://doi.org/10.3390/electronics13132516 - 27 Jun 2024
Viewed by 966
Abstract
Microscopic and ultramicroscopic vascular sutures are indispensable in surgical procedures such as arm transplantation and finger reattachment. The state of the blood vessels after suturing, which may feature patency, narrowing, or blockage, determines the success rate of the operation. If the golden window between blood vessel suturing and muscle tissue suturing can be used for an accurate and objective assessment of vessel status, this will not only reduce medical costs but also offer social benefits. Doppler optical coherence tomography (OCT) enables the high-speed, high-resolution imaging of biological tissues, especially microscopic and ultramicroscopic blood vessels. By imaging the sutured blood vessels with Doppler OCT, the three-dimensional structure of the vessels and blood flow information can be obtained; by extracting the contour of the vessel wall and the contour of the blood flow area, the three-dimensional shape of the vessel can be reconstructed, providing parameter support for the assessment of vessel status. In this work, we propose a neural network-based multi-classification deep learning model that automatically and simultaneously extracts vessel boundaries from Doppler OCT intensity images and the contours of blood flow regions from the corresponding Doppler OCT phase images. Compared with the traditional random walk segmentation algorithm and a cascade neural network method, the proposed model produces the vessel boundary from the intensity image and the lumen boundary from the corresponding phase image simultaneously, achieving an average testing segmentation accuracy of 0.967 in an average of 0.63 s. This method simplifies system integration and has great potential for clinical evaluation; it is expected to be applied to the assessment of microscopic and ultramicroscopic vascular status in microvascular anastomosis.

20 pages, 21056 KiB  
Article
Outlier Detection by Energy Minimization in Quantized Residual Preference Space for Geometric Model Fitting
by Yun Zhang, Bin Yang, Xi Zhao, Shiqian Wu, Bin Luo and Liangpei Zhang
Electronics 2024, 13(11), 2101; https://doi.org/10.3390/electronics13112101 - 28 May 2024
Viewed by 1116
Abstract
Outliers significantly impact the accuracy of geometric model fitting. Previous approaches to handling outliers have involved threshold selection and scale estimation; however, many scale estimators assume that the inlier distribution follows a Gaussian model, which often does not hold in geometric model fitting. Outliers, defined as points with large residuals to all true models, exhibit similarly high values in their quantized residual preferences and therefore cluster away from inliers in quantized residual preference space. In this paper, we leverage this consensus among outliers by extending energy minimization to combine model error and spatial smoothness for outlier detection. The outlier detection process follows an alternate sampling and labeling framework, and an ordinary energy minimization method is subsequently employed within the same framework to optimize the inlier labels. Experimental results demonstrate that the energy minimization-based outlier detection effectively identifies most outliers in the data, and the proposed energy minimization-based inlier segmentation accurately assigns inliers to different models. Overall, the performance of the proposed method surpasses that of most state-of-the-art methods.
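To make the preference representation concrete, the sketch below quantizes each point's residuals to a set of sampled hypotheses into discrete levels; the normalization and quantization rule are assumptions for illustration and are not the paper's exact formulation.

```python
import numpy as np

def quantized_residual_preference(residuals, num_levels=8):
    """Quantize each point's residuals to the sampled hypotheses into
    discrete preference levels (`num_levels` is illustrative). Outliers have
    large residuals to every hypothesis, so their preference vectors are
    dominated by the highest level and cluster away from the inliers."""
    res = np.asarray(residuals, dtype=np.float64)   # shape: (num_points, num_hypotheses)
    # Per-hypothesis normalization so each column spans [0, 1].
    span = res.max(axis=0) - res.min(axis=0)
    span[span == 0] = 1.0
    norm = (res - res.min(axis=0)) / span
    return np.floor(norm * (num_levels - 1)).astype(int)
```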

11 pages, 5843 KiB  
Article
Controllable Spatial Filtering Method in Lensless Imaging
by Jae-Young Jang and Myungjin Cho
Electronics 2024, 13(7), 1184; https://doi.org/10.3390/electronics13071184 - 23 Mar 2024
Viewed by 876
Abstract
We propose a method for multiple-depth extraction in diffraction grating imaging. A diffraction grating can optically generate a diffraction image array (DIA) carrying parallax information about a three-dimensional (3D) object. The optically generated DIA forms images periodically, with a period that depends on the depth of the object, the wavelength of the light source, and the grating period of the diffraction grating. A depth image can therefore be extracted through the convolution of the DIA with a periodic delta function array. Among the methods that exploit the convolution characteristics of a parallax image array (PIA) and a delta function array, an advanced spatial filtering method for the controllable extraction of multiple depths (CEMD) has been studied as one of the reconstruction methods, and its feasibility was confirmed through a lens-array-based computational simulation. In this paper, we perform multiple-depth extraction by applying the CEMD method to a DIA obtained optically through a diffraction grating. To demonstrate this application, a theoretical analysis of CEMD in diffraction grating imaging is presented; the DIA is acquired optically, and the spatial filtering is performed computationally and compared with the conventional single-depth extraction method. Applying CEMD to the DIA enables the simultaneous reconstruction of images corresponding to multiple depths through a single spatial filtering process. To the best of our knowledge, this is the first research on the extraction of multiple-depth images in diffraction grating imaging.
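The convolution-with-delta-array idea can be sketched as below, assuming NumPy and SciPy; the helper names and the pixel-period parameters are illustrative, and the multi-depth variant only mimics the single-filtering-pass idea, not the authors' CEMD implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def delta_array(shape, period):
    """Periodic delta function array whose period (in pixels) matches the
    image period of the target depth (illustrative helper)."""
    kernel = np.zeros(shape)
    kernel[::period, ::period] = 1.0
    return kernel

def extract_depth(dia, period):
    """Reconstruct one depth by convolving the diffraction image array (DIA)
    with a delta array of the matching period; copies of that depth add up
    coherently while other depths blur out."""
    kernel = delta_array(dia.shape, period)
    return fftconvolve(dia, kernel, mode="same") / kernel.sum()

def extract_multiple_depths(dia, periods):
    """Multiple depths in a single filtering pass: sum the delta arrays for
    several target periods before the one convolution."""
    kernel = sum(delta_array(dia.shape, p) for p in periods)
    return fftconvolve(dia, kernel, mode="same") / kernel.sum()
```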

12 pages, 4458 KiB  
Article
The Design and Application of a Polarization 3D Imager for Land Object Imaging
by Yue Zhang, Jianchao Jiao, Xuemin Zhang, Yi Liu, Xuan Li and Yun Su
Electronics 2024, 13(1), 168; https://doi.org/10.3390/electronics13010168 - 30 Dec 2023
Cited by 2 | Viewed by 1324
Abstract
Polarization 3D imaging is a passive, monocular, long-distance 3D imaging technology. Compared with traditional 3D imaging methods, it has many advantages, such as requiring no light source, requiring no image matching, and achieving 3D imaging from only a single image. In this study, the principle of polarization 3D imaging is introduced. For the design of a polarization 3D imager, the method of acquiring polarization information, the extinction ratio, the spatial resolution, and the refractive index of objects are described in detail, and the influence of these key factors on the accuracy of polarization 3D imaging is analyzed. Taking the limitations of a small satellite payload into account, specific design targets such as multi-aperture polarized imaging, a 10,000:1 extinction ratio, and a spatial resolution of 30 m were set. The implementation and functions of the polarization 3D imager are elaborated, and the optical systems and polarizing devices were developed. Finally, using the image data obtained by the polarization 3D imager, polarization 3D imaging of real ground objects was achieved, with an inversion accuracy of approximately twice the spatial resolution. These results lay the technical foundation for the development and practical application of polarization 3D imaging technology and instruments.
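For readers unfamiliar with how polarization information is acquired, the following sketch computes the standard linear Stokes parameters, degree of linear polarization, and angle of polarization from four polarizer-angle images; this is textbook polarimetry from which polarization 3D methods estimate surface normals, not the imager's own processing code.

```python
import numpy as np

def polarization_parameters(i0, i45, i90, i135):
    """Linear Stokes parameters, degree of linear polarization (DoLP), and
    angle of polarization (AoP) from images at 0/45/90/135 degree polarizer
    orientations (all arrays of equal shape)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)       # radians
    return s0, s1, s2, dolp, aop
```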

21 pages, 1025 KiB  
Article
Weakly Supervised Cross-Domain Person Re-Identification Algorithm Based on Small Sample Learning
by Huiping Li, Yan Wang, Lingwei Zhu, Wenchao Wang, Kangning Yin, Ye Li and Guangqiang Yin
Electronics 2023, 12(19), 4186; https://doi.org/10.3390/electronics12194186 - 9 Oct 2023
Cited by 1 | Viewed by 1694
Abstract
This paper proposes a weakly supervised cross-domain person re-identification (Re-ID) method based on small sample data. To reduce the cost of data collection and annotation, the model design focuses on extracting and abstracting the information contained in the data under limited conditions. We address the problems of strong data dependence, weak cross-domain capability, and low accuracy in Re-ID under weakly supervised scenarios. Our contributions are as follows. First, we implement a joint training framework for Re-ID that combines small sample learning and cross-domain migration. Second, a residual compensation and fusion attention (RCFA) module is designed, and the model framework is built on this basis to improve the cross-domain ability of the model. Third, to solve the problem of low accuracy caused by the insufficient data coverage of small samples, a fusion of shallow and deep features is designed, enabling the model to perform a weighted fusion of shallow detail information and deep semantic information. Finally, by selecting images from different cameras in the Market1501 and DukeMTMC-reID datasets as small samples and introducing data from another dataset for joint training, we demonstrate the feasibility of this joint training framework, which performs weakly supervised cross-domain Re-ID based on small sample data.
