Article

Honeycomb Artifact Removal Using Convolutional Neural Network for Fiber Bundle Imaging

Eunchan Kim, Seonghoon Kim, Myunghwan Choi, Taewon Seo and Sungwook Yang

1 Center for Intelligent and Interactive Robotics, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea
2 Department of Mechanical Convergence Engineering, Hanyang University, Seoul 04763, Republic of Korea
3 Department of Biological Sciences, Seoul National University, Seoul 03080, Republic of Korea
* Author to whom correspondence should be addressed.
Sensors 2023, 23(1), 333; https://doi.org/10.3390/s23010333
Submission received: 29 November 2022 / Revised: 14 December 2022 / Accepted: 24 December 2022 / Published: 28 December 2022
(This article belongs to the Special Issue Advanced Biomedical Optics and Imaging)

Abstract

We present a new deep learning framework for removing the honeycomb artifacts caused by the optical-path blocking of cladding layers in fiber bundle imaging. The proposed framework, HAR-CNN, provides an end-to-end mapping from a raw fiber bundle image to an artifact-free image via a convolutional neural network (CNN). Synthesizing honeycomb patterns on ordinary images allows the network to be trained and validated conveniently, without the enormous ground-truth collection that would otherwise require extra hardware setups. As a result, HAR-CNN shows significant performance improvement in honeycomb pattern removal and detail preservation for a 1951 USAF chart sample, compared with other conventional methods. Finally, HAR-CNN is GPU-accelerated for real-time processing and enhanced image mosaicking performance.

1. Introduction

Fiber bundle endomicroscopy has great potential for in vivo, in situ diagnosis and interventional procedures in a wide range of clinical applications because of its flexibility and compactness [1]. For example, fiber bundle endomicroscopy has been deployed for optical biopsies, predominantly in the gastrointestinal, urological, and respiratory tracts, such as for cancer diagnosis [2,3,4]. However, images acquired through a fiber bundle suffer from artifacts such as honeycomb patterns, which are induced by void imaging space in the fiber bundle. Since a fiber bundle is composed of thousands of single optical fibers, the cladding layers of adjacent optical fibers inevitably block the light path. The resulting void pixels degrade the quality of fiber bundle images and thus hinder accurate diagnosis.
A common remedy for the honeycomb artifact is to apply spatial or spectral filters to the acquired images, which is simple and fast enough for real-time processing. Among spatial filters, a Gaussian filter is commonly adopted because it smooths the edges of the single fibers efficiently. For example, a Gaussian filter was combined with histogram equalization for honeycomb pattern removal [5]. In addition to Gaussian filtering, Winter et al. proposed an automatically generated adaptive mask in the spectral domain for filtering [6]. Convolution with a specifically designed kernel was also introduced for real-time processing in embedded systems [7]. Furthermore, Regeling et al. proposed spectral filtering for honeycomb pattern removal with minimal loss of information for cancer detection [8]. However, these approaches degrade the intensities of core pixels while smoothing the edges of honeycomb patterns, so the resulting images become blurred.
Interpolation-based approaches can mitigate this blurring because they preserve the intensities of the core pixels. They thus need to identify the locations of the core pixels prior to interpolation, for which local maxima or circular Hough transforms are commonly used [9]. Natural neighbor interpolation can then be conducted via Delaunay triangulation [10]. Wang et al. removed honeycomb patterns and mosaicked images using barycentric interpolation [11]. These approaches can be further improved; for example, Zheng et al. enhanced fiber bundle images by applying local binary patterns and nonlocal means after interpolation [12]. Although the interpolation-based methods maintain the intensities of the core pixels, they involve preprocessing to identify core locations and interpolation of scattered data, which are computationally expensive. Furthermore, these methods are only valid when the core locations are detected accurately; otherwise, the resulting images are prone to distortion, as sketched below.
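For concreteness, the following is a minimal sketch of such an interpolation-based baseline, assuming `raw` and `white` are same-sized 2-D NumPy arrays holding a raw fiber bundle image and a white-reference image; `scipy.interpolate.griddata` with linear (Delaunay-based, barycentric) interpolation stands in for the schemes of [9,10,11], and the `core_spacing` parameter is illustrative.

```python
# Conventional interpolation-based baseline (cf. [9,10,11]): detect fiber
# cores as local maxima of a white-reference image, sample the raw image at
# the core locations, and fill the cladding gaps by Delaunay-based linear
# interpolation with barycentric weights.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.interpolate import griddata

def interpolation_baseline(raw, white, core_spacing=5):
    # Core detection: a pixel is a core center if it is the local maximum
    # within a neighborhood roughly one core spacing wide.
    local_max = maximum_filter(white, size=core_spacing) == white
    cores = local_max & (white > white.mean())   # suppress dark background
    ys, xs = np.nonzero(cores)

    # Interpolate core intensities over the full pixel grid; 'linear' uses
    # Delaunay triangulation with barycentric weights inside each triangle.
    grid_y, grid_x = np.mgrid[0:raw.shape[0], 0:raw.shape[1]]
    restored = griddata(
        np.column_stack([ys, xs]), raw[ys, xs],
        (grid_y, grid_x), method="linear", fill_value=0.0)
    return restored
```

Note that the quality of the result hinges entirely on the core-detection step: any misdetected core feeds a wrong sample into the triangulation, which is exactly the distortion mode discussed above.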
The superimposition of motion-induced images has also been introduced to eliminate honeycomb artifacts and enhance the limited resolution of fiber bundle imaging [13,14,15,16]. This approach recovers images by overlapping shifted images from successive frames, even without interpolation. Nevertheless, applying such methods to real-time applications remains challenging, since multiple frames must be accumulated for acquisition and processing before an image can be restored.
Recently, deep learning-based approaches have also been proposed [17,18]. Shao et al. adopted a generative adversarial network (GAN) for image restoration [17]. To learn a direct mapping from fiber bundle images to ground truth, a dual-sensor fiber bundle imaging system was also developed, which provides ground truth data aligned with the fiber bundle images. Later, a cascaded-network scheme improved restoration accuracy by utilizing multiple fiber bundle images [18]: a motion estimation network estimates the unknown motions, represented by homographies, among successive fiber bundle images, and a 3D convolutional neural network then provides a mapping from the aligned fiber bundle image sequences to their ground truth images. Since these supervised learning approaches demand ground truth data for training, extra hardware setups must be developed to obtain artifact-free images.
In this paper, we propose a new deep learning framework for honeycomb artifact removal in fiber bundle imaging. The proposed framework offers an end-to-end mapping from raw images to artifact-free images via a convolutional neural network, regarded as a composition of nonlinear filters, as shown in Figure 1. Moreover, our reference-free approach synthesizes honeycomb patterns on ordinary images from an image database to train the network without an extra optical hardware setup for ground truth collection. We validate the restoration performance on honeycomb-patterned images in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). In addition, the quality of honeycomb removal is evaluated on real fiber bundle images collected from a 1951 USAF resolution test chart. Finally, the proposed framework is GPU-accelerated for real-time processing, and its honeycomb removal and mosaicking performances are also evaluated in imaging a lens tissue sample.
The main contribution of this paper is a deep learning-based honeycomb artifact removal method that can be readily used for fiber bundle imaging. The honeycomb pattern synthesis eases the training of the convolutional neural network by providing a large synthetic dataset. In addition, the end-to-end mapping restores honeycomb-free images immediately, without any preprocessing specific to a bundle's own honeycomb pattern, e.g., the number and locations of fiber cores. In terms of performance, the method shows significant improvement in honeycomb pattern removal and detail preservation in real time, compared to the conventional methods.
Consequently, our approach tackles the challenges of the conventional honeycomb artifact removal methods. First, the deep learning-based method selectively removes the honeycomb artifact, overcoming the common limitation of spatial and spectral filtering, which blurs both the artifact and the sample patterns. Second, the proposed method is robust to variations in the core locations of fiber bundles, which occasionally arise from the reassembly or replacement of fiber bundles and cause distortion in interpolation-based honeycomb removal. Furthermore, the honeycomb artifact synthesis scheme removes the considerable burden of building extra hardware setups for ground truth collection, as required by other learning-based methods.

2. Materials and Methods

2.1. Deep Learning-Based Honeycomb Pattern Removal

2.1.1. Training Dataset Generation via Honeycomb Pattern Synthesis

To learn the end-to-end mapping from raw images to artifact-free images, we use ordinary images from an image database (ImageNet Large Scale Visual Recognition Challenge 2013, ILSVRC 2013) [19] as the ground truth, instead of using an extra hardware setup to simultaneously capture artifact-free images. A honeycomb-like pattern is then synthesized on the ordinary images for training, where the pattern is taken from the white-reference image of an actual fiber bundle. In the white-reference image, the core locations of the fiber bundle appear bright and the void imaging area (the honeycomb pattern) appears dark. The training dataset is generated as follows. First, the ordinary images are converted to grayscale if they are colored. The images are then resized to match the pixel size of the imaging setup; here, we resize them to 1024 × 1024 pixels. Next, we create a circular binary mask for the region of interest, forcing the region outside the fiber bundle to zero. Finally, pixel-wise multiplication of the ordinary images with the white-reference image is performed. Note that the white-reference image should be normalized between 0 and 1 to maintain the intensities of the core pixels. These preprocessing steps are illustrated in Figure 2a.
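To make the synthesis procedure concrete, the following is a minimal Python/OpenCV sketch of these four steps; `img` and `white` are assumed to be an ordinary image and the white-reference image loaded as NumPy arrays, and the function name is illustrative.

```python
# Training-pair synthesis as described above: grayscale conversion, resizing,
# circular masking, and pixel-wise multiplication with the normalized
# white-reference image of the fiber bundle.
import cv2
import numpy as np

def synthesize_pair(img, white, size=1024):
    # 1. Convert to grayscale if the image is colored.
    if img.ndim == 3:
        img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # 2. Resize to the pixel size of the imaging setup (1024 x 1024 here).
    gt = cv2.resize(img, (size, size)).astype(np.float32)
    # 3. Circular binary mask: zero out the region outside the fiber bundle.
    yy, xx = np.mgrid[0:size, 0:size]
    mask = ((xx - size / 2) ** 2 + (yy - size / 2) ** 2) <= (size / 2) ** 2
    gt *= mask
    # 4. Normalize the white reference to [0, 1] so core intensities are
    #    maintained, then multiply pixel-wise to imprint the honeycomb pattern.
    w = cv2.resize(white, (size, size)).astype(np.float32)
    w = (w - w.min()) / (w.max() - w.min())
    honeycombed = gt * w
    return honeycombed, gt   # network input, ground truth
```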

2.1.2. Deep Neural Network Architecture for Honeycomb Pattern Removal

The proposed deep learning framework adopts a convolutional neural network for honeycomb artifact removal. The proposed HAR-CNN (honeycomb artifact removal convolutional neural network) comprises three convolution layers for image restoration: patch extraction, nonlinear mapping, and reconstruction. Accordingly, its architecture is similar to that of the super-resolution CNN (SRCNN) [20]. The patch extraction layer creates a set of image patches encompassing the region of a single honeycomb cell. The nonlinear mapping layer handles the nonlinear features of honeycomb artifacts by transforming high-dimensional vectors into low-dimensional vectors. Finally, the reconstruction layer outputs honeycomb-free images at high resolution by convolving neighboring pixels.
Since there is a trade-off between speed and performance depending on the hyperparameters of each layer, the spatial size of the kernels and the number of feature maps must be chosen appropriately. For example, the filter size of the first layer is critical for removing the honeycomb artifact. Based on an investigation of the hyperparameters, the first filter size is set to 9 so as to sufficiently enclose a honeycomb cell, because the distance between two adjacent cores is about 5–6 pixels with our imaging setup; this layer yields 64 feature maps. The filter size of the nonlinear mapping layer is set to 1, mapping the 64 feature maps to 32, which provides sufficient nonlinear filtering and is also computationally efficient. Last, the filter size of the reconstruction layer is set to 5. The entire network architecture is shown in Figure 2b.
The computational complexity of HAR-CNN is given by $O\big((f_1^2 n_1 + n_1 f_2^2 n_2 + n_2 f_3^2)\, S_{HR}\big)$, where $f_i$ and $n_i$ are the filter size and the number of filters of the $i$th layer, respectively, and $S_{HR}$ is the image size; here $f_1 = 9$, $n_1 = 64$, $f_2 = 1$, $n_2 = 32$, and $f_3 = 5$.
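As an illustration, a minimal Keras sketch of this three-layer architecture under the stated hyperparameters is given below; it is our reconstruction from the description above, not the authors' released code, and the layer comments are annotations only.

```python
import tensorflow as tf

def build_har_cnn():
    """Minimal HAR-CNN sketch: patch extraction, nonlinear mapping, reconstruction."""
    return tf.keras.Sequential([
        # Patch extraction: 9 x 9 kernels (f1 = 9) wide enough to enclose a
        # honeycomb cell, producing n1 = 64 feature maps, ReLU activation.
        tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu",
                               input_shape=(None, None, 1)),
        # Nonlinear mapping: 1 x 1 kernels (f2 = 1) reducing 64 -> 32 maps.
        tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu"),
        # Reconstruction: 5 x 5 kernels (f3 = 5), single-channel linear output.
        tf.keras.layers.Conv2D(1, 5, padding="same"),
    ])
```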
To train HAR-CNN, a total of 130 images were used: 100 for training and 30 for validation. Each image was decomposed into sub-images of size 33 × 33 with a stride of 14, which creates 5184 sub-images per image; thus, a total of 518,400 sub-images were used for training. The batch size was set to 128, and the mean squared error (MSE) was used as the loss function. Rectified linear units (ReLU) were adopted as the activation function in the first and second layers. The network was trained with the stochastic gradient descent optimizer at a learning rate of 10⁻⁴ for 10,000 epochs using TensorFlow. Image restoration performance versus the number of training epochs is discussed further in the Results section. Finally, the proposed algorithm was implemented on a C/C++ platform with the OpenCV library and accelerated on a graphics processing unit (GPU, GeForce RTX 3090, NVIDIA) for real-time processing. As a result, honeycomb removal is accomplished within 10 ms for a raw fiber bundle image of 1024 × 1024 pixels.
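A matching training sketch under the stated settings might look as follows; `train_inputs`, `train_targets`, `val_inputs`, and `val_targets` are assumed stacks of synthesized input images and their ground truths, and `build_har_cnn` refers to the architecture sketch above.

```python
import numpy as np
import tensorflow as tf

def extract_patches(images, patch=33, stride=14):
    # Decompose each full image into 33 x 33 sub-images with a stride of 14.
    subs = [im[y:y + patch, x:x + patch]
            for im in images
            for y in range(0, im.shape[0] - patch + 1, stride)
            for x in range(0, im.shape[1] - patch + 1, stride)]
    return np.stack(subs)[..., np.newaxis]        # add a channel axis

model = build_har_cnn()                           # sketch from above
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
              loss="mse")                         # MSE loss, SGD at 1e-4
model.fit(extract_patches(train_inputs), extract_patches(train_targets),
          batch_size=128, epochs=10_000,
          validation_data=(extract_patches(val_inputs),
                           extract_patches(val_targets)))
```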

2.2. Experimental Setup

Our fiber bundle imaging setup consists primarily of a fiber bundle (FIGH-30-650S, Fujikura Ltd., Tokyo, Japan), a 10× objective lens (RMS10X, 0.25 NA, 10.6 mm WD, Olympus Corp., Tokyo, Japan), a tube lens (TTL200-A, Thorlabs Inc., Newton, NJ, USA), and an sCMOS camera (Zyla 4.2 PLUS, Oxford Instruments, Abingdon, UK), as shown in Figure 3. The fiber bundle packs 30,000 fiber cores with fused silica claddings within a 650-µm diameter. The distal end of the fiber bundle is coupled with a 3-mm-diameter aspheric lens (#15-271, Edmund Optics Inc., Barrington, NJ, USA) to enlarge the field of view, providing a 2.2-mm field of view at 20 mm from a target. The incident light propagates through the fiber bundle and is imaged by the sCMOS camera at a pixel resolution of 1024 × 1024 and 100 Hz for high-speed imaging.

3. Results

3.1. Validation of HAR-CNN on Synthetic Images

We validated the restoration performance of HAR-CNN on synthetic images generated by superimposing artificial honeycomb patterns on original images, which serve as the ground truth for quantitative evaluation. The evaluation uses two metrics widely adopted for image restoration and enhancement: peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). PSNR quantifies restoration quality by comparing the pixel-wise intensity difference between two images, as in Equation (1):
$$\mathrm{PSNR}\,(\mathrm{dB}) = 20 \log_{10}\!\left(\frac{255}{\sqrt{\mathrm{MSE}}}\right), \qquad (1)$$
where MSE is the mean squared error between the ground truth and the estimated image. SSIM is a perceptual metric that quantifies image quality degradation relative to the ground truth as a combination of three independent components, luminance, contrast, and structure [2]:
$$\mathrm{SSIM} = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}, \qquad (2)$$

$$c_1 = (0.01L)^2 \quad \text{and} \quad c_2 = (0.03L)^2, \qquad (3)$$
where $\mu_x$ and $\mu_y$ are the local means, $\sigma_x$ and $\sigma_y$ the standard deviations, and $\sigma_{xy}$ the cross-covariance of images $x$ and $y$, and $L$ is the dynamic range of the pixel values.
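For reference, both metrics can be computed with scikit-image, whose SSIM defaults ($K_1 = 0.01$, $K_2 = 0.03$) match the constants $c_1$ and $c_2$ above; `gt` and `restored` are assumed 8-bit grayscale arrays ($L = 255$).

```python
# PSNR and SSIM as in Equations (1)-(3), computed with scikit-image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
ssim = structural_similarity(gt, restored, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```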
First, we examined the image restoration performance of HAR-CNN as the number of training epochs increased (2000, 10,000, and 20,000) using PSNR and SSIM. As shown in Figure 4, there was no significant improvement between 10,000 and 20,000 epochs. The restoration performance of HAR-CNN was then compared with conventional methods: a median filter, a Gaussian filter, and the interpolation method of [11]. The filter sizes were set to 7 × 7 and 3 × 3 for the median and Gaussian filters, respectively. Figure 5 shows qualitative results of honeycomb pattern removal on a synthetic fiber bundle image. HAR-CNN best preserves image detail after restoration, as shown in the magnified region of Figure 5e (see the bird's feather). In addition, HAR-CNN achieves the highest PSNR, indicating that the network was appropriately trained to minimize the MSE between output images and the corresponding ground truth. SSIM is also highest with HAR-CNN. Interestingly, the median filter's SSIM score is the highest among the conventional methods, while the interpolation method's PSNR exceeds that of the median filter; this is because the median filter better preserves the sharpness of edges than the other conventional methods. The quantitative results are summarized in Table 1.

3.2. Evaluation of Honeycomb Pattern Removal on 1951 USAF Target

We evaluated the proposed method on real fiber bundle images taken on a 1951 USAF target. Figure 6 shows the resulting images for various honeycomb pattern removal algorithms.
An inset at the top-left corner of each resulting image magnifies the region cropped by the red square to show restoration details. We also investigated the intensity values along a line of interest, marked as a white line in the region. Significant intensity modulations are observed in the raw image because of void imaging pixels, as shown in Figure 6f. These wavy modulations diminish as the filtering methods, interpolation, and the proposed HAR-CNN are applied. As shown in the magnified views, the edges of the lines are clearly distinguishable with the interpolation method and HAR-CNN, whereas they remain fuzzy with the median and Gaussian filtering because of the smoothing effect.
For further quantitative analysis, two quality measures are introduced: variance-based smoothness and a Rayleigh-based line separation criterion. The variance-based smoothness describes how well the honeycomb effect is mitigated and smoothed. The smoothness $s$ is defined by the ratio between the variances of a raw fiber bundle image and a restored image over a flat region of the USAF target, as in Equation (4):
$$s = 1 - \frac{\sigma_s}{\sigma_0} = 1 - \sqrt{\frac{\sum_{i,j \in R_s} \big(s_s^{image}(i,j) - \mu_s\big)^2}{\sum_{i,j \in R_s} \big(s_0^{image}(i,j) - \mu_0\big)^2}}, \qquad (4)$$
where $\sigma_s$ and $\mu_s$ are the standard deviation and mean of the pixel intensities $s_s^{image}(i,j)$ over the region of interest $R_s$ in the restored image, and $\sigma_0$ and $\mu_0$ are calculated from the raw fiber bundle image in the same manner. Accordingly, a smaller variance in the restored image leads to higher smoothness. In contrast, the Rayleigh-based line separation criterion $r$ evaluates how well each algorithm preserves details after removing the honeycomb patterns and is therefore calculated over a line-spaced region $R_r$. An averaged intensity profile $S_l(k)$ along the rows $R_r^{row}$ is obtained by Equation (5):
$$S_l(k) = \frac{1}{M} \sum_{j \in R_r^{col}} s_r^{image}(k,j) \quad \text{for } k \in R_r^{row}, \qquad (5)$$
where $R_r^{row}$ and $R_r^{col}$ contain the row and column pixel coordinates, respectively, $s_r^{image}(k,j)$ is the pixel value at the $k$th row and $j$th column, and $M$ is the number of columns in the region $R_r$. Finally, the line separation quality $r$ is defined as in Equation (6):
$$r = \frac{\max_k S_l(k) - \min_k S_l(k)}{\max_k S_l(k)}. \qquad (6)$$
To take both honeycomb removal performance and detail preservation into account, we also investigated a weighted quality measure $q$ that combines the two metrics $s$ and $r$ with a weighting factor $\gamma$:
$$q = \gamma\, s + (1 - \gamma)\, r. \qquad (7)$$
The weighting factor thus represents the importance of honeycomb pattern removal relative to detail preservation.
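For clarity, Equations (4)–(7) reduce to a few lines of NumPy; here `raw` and `restored` are 2-D image arrays, and `R_s` and `R_r` are assumed index expressions (e.g., slice tuples) selecting the flat region and the line-patterned region, respectively.

```python
import numpy as np

def smoothness(raw, restored, R_s):
    """Eq. (4): s = 1 - sigma_s / sigma_0 over the flat region R_s."""
    return 1.0 - restored[R_s].std() / raw[R_s].std()

def line_separation(restored, R_r):
    """Eqs. (5)-(6): column-averaged profile, then relative modulation depth."""
    S_l = restored[R_r].mean(axis=1)            # average across columns, Eq. (5)
    return (S_l.max() - S_l.min()) / S_l.max()  # Eq. (6)

def weighted_quality(s, r, gamma=0.5):
    """Eq. (7): q = gamma * s + (1 - gamma) * r."""
    return gamma * s + (1.0 - gamma) * r
```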
We compared the performance of HAR-CNN with the conventional honeycomb pattern removal methods, using the same filter sizes for the median and Gaussian filters and the same interpolation method as in the synthetic image evaluation. According to the variance-based smoothness, HAR-CNN shows the best honeycomb removal performance, followed by the Gaussian filter. Image details are also best preserved with HAR-CNN, whose line separation quality exceeds 0.45. For the weighted quality measure $q$, HAR-CNN also obtains the highest scores for both weighting factors, $\gamma = 0.5$ and $\gamma = 0.8$. The quality measures for each algorithm are summarized in Table 2.

3.3. Honeycomb Pattern Removal and Image Mosaicking on Lens Tissue Sample

We also evaluated the performance of HAR-CNN on a lens tissue sample. To visualize the fibrous structure of the lens tissue, the test was conducted without the objective lens at the distal end of the fiber bundle, which yields a field of view (FOV) of 650 µm in diameter. Image mosaicking is then used to obtain an enlarged image from such a small FOV. A common method for image mosaicking is to first extract feature points (e.g., distinct corner points) and then match their correspondences in subsequent frames. It is therefore necessary to eliminate the honeycomb patterns to achieve successful image mosaicking in fiber bundle imaging; otherwise, the stationary, distinct honeycomb patterns are highly likely to be recognized as image features instead of the shifted sample structures, and such spurious feature points hinder accurate image mosaicking. The resulting images for the various honeycomb removal methods on the lens tissue sample are shown in Figure 7. As in the earlier sections, HAR-CNN clearly restored the details of the fibrous structures while efficiently eliminating the honeycomb patterns.
Prior to image mosaicking, we also investigated the number of image features detected in the image restored by each algorithm. As shown in Figure 8, HAR-CNN provides more image features than the other methods. Interestingly, the raw image yields more image features than the spatially filtered images. However, some of these detected features may be spurious, which can lead to inaccurate image mosaicking, as presented in Figure 9b.
Image mosaicking performance was evaluated during a handheld scan of the fiber bundle probe in the upward direction over a length of approximately 1.7 mm. The proposed HAR-CNN yielded a vertically well-aligned mosaic image, since it provided a sufficient number of image features for accurate correspondence matching, as shown in Figure 9f. In contrast, the mosaicking results of the other methods were either limited or distorted. For example, the mosaics produced with median filtering and interpolation were distorted, as was that of the raw fiber bundle images, due to spurious feature points. With Gaussian filtering, a well-aligned mosaic was obtained only at the beginning of the scan, after which the lack of extracted features impeded further mosaicking.
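For reference, the feature matching and warping step of such a mosaicking pipeline can be sketched as follows; the paper does not specify the feature detector, so ORB with brute-force matching and a RANSAC-estimated homography is used here purely as an illustrative stand-in, with `prev` and `curr` assumed to be consecutive restored 8-bit frames.

```python
# Feature-based registration between consecutive restored frames: detect
# keypoints, match descriptors, estimate the inter-frame homography with
# RANSAC, and warp the current frame into the previous frame's coordinates.
import cv2
import numpy as np

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(prev, None)
kp2, des2 = orb.detectAndCompute(curr, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Corresponding points: queryIdx indexes `prev`, trainIdx indexes `curr`.
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the current frame into the mosaic frame of the previous one.
mosaic = cv2.warpPerspective(curr, H, (prev.shape[1] * 2, prev.shape[0] * 2))
```

On raw frames, many keypoints lock onto the stationary honeycomb pattern rather than the moving sample, biasing the estimated homography toward zero motion; removing the artifact first is what makes this registration step reliable.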

4. Discussion

We proposed a deep learning framework that eliminates honeycomb artifacts via a convolutional neural network trained with an artifact synthesis scheme on an ordinary image dataset. The proposed method uses a single white-reference image containing the honeycomb pattern for artifact synthesis. This provides ground truth for training the network efficiently, because the ordinary images themselves serve as the ground truth without extra hardware setups for collecting enormous numbers of ground-truth images. As a result, our HAR-CNN offers an end-to-end mapping from a raw fiber bundle image to a honeycomb-free image.
Although the white-reference image of a single fiber bundle was used for the artifact synthesis, the trained network could be applied to other fiber bundles regardless of their own honeycomb patterns. We also found that HAR-CNN is robust to translational and/or rotational variations of the honeycomb pattern, which can occur during installation of the fiber bundle. For example, no loss in restoration quality was observed under alignment variations caused by reassembling the fiber bundle, whereas the restoration performance of the interpolation-based method degraded significantly. This is because the CNN acts as a set of nonlinear convolutional filters on the raw fiber bundle image. Accordingly, the proposed algorithm can seamlessly handle the reassembly or replacement of fiber bundles without any modification of the trained network.
The experimental results show that HAR-CNN significantly improves honeycomb removal performance compared to the conventional methods while preserving image details. From the perspective of fiber bundle imaging applications, the proposed method is superior to the conventional methods in image feature extraction and matching, which is important for post-processing of fiber bundle images such as image mosaicking. Furthermore, GPU acceleration enables real-time image restoration even for large fiber bundle images: restoration with HAR-CNN is completed within 10 ms for a raw 1024 × 1024-pixel fiber bundle image (100 fps), while the conventional filtering methods take about 3–4 ms for the same image size. Compared to other similar networks, HAR-CNN showed reliable restoration performance and time-efficient processing. For example, honeycomb removal using enhanced deep residual networks (EDSR) [21] with the same dataset is prone to saturation in relatively bright images and takes 0.9–1.0 s per frame, which is unsuitable for real-time applications.
A comparison with the other deep learning-based methods [17,18] in terms of PSNR and SSIM is not yet available, since our hardware setup is designed for generic fiber bundle imaging and does not provide the ground truth data required for such an evaluation. In these measures, however, our results on the synthetic images are comparable to those reported by the other deep learning-based methods. Since HAR-CNN was trained with a single white-reference image from a specific fiber bundle, its applicability to various fiber bundle models could be limited. Further improvement could thus be made by adopting a multi-scale/level feature learning scheme to handle a large variety of fiber bundles with different core distances through a unified network structure. In addition, combining other deep neural networks may further improve the restoration performance and enhance the fundamentally limited resolution of fiber bundle imaging. Future plans include testing the proposed algorithm on tissue samples in vivo.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s23010333/s1, Video S1: Real-time demonstration of HAR-CNN on the lens tissue sample.

Author Contributions

Conceptualization, E.K. and S.Y.; methodology, E.K., S.K., M.C. and S.Y.; software, E.K.; validation, E.K. and S.Y.; formal analysis, E.K. and S.Y.; investigation, E.K. and S.Y.; resources, E.K., S.K., M.C. and S.Y.; data curation, E.K. and S.Y.; writing—original draft preparation, E.K. and S.Y.; writing—review and editing, E.K., T.S. and S.Y.; visualization, E.K.; supervision, T.S. and S.Y.; project administration, S.Y.; funding acquisition, S.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded partially by the Bio & Medical Technology Development Program of the National Research Foundation (NRF) funded by the Korean government (MSIT) (No. 2019M3A9E2061784) and also by the Pioneer Research Center Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (No. 2022M3C1A3090828).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Orth, A.; Ploschner, M.; Wilson, E.; Maksymov, I.; Gibson, B. Optical fiber bundles: Ultra-slim light field imaging probes. Sci. Adv. 2019, 5, eaav1555.
  2. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  3. Yserbyt, J.; Dooms, C.; Janssens, W.; Verleden, G. Endoscopic advanced imaging of the respiratory tract: Exploring probe-based confocal laser endomicroscopy in emphysema. Thorax 2018, 73, 188–190.
  4. Wu, J.; Wang, T.; Uckermann, O.; Galli, R.; Schackert, G.; Cao, L.; Czarske, J.; Kuschmierz, R. Learned end-to-end high-resolution lensless fiber imaging towards real-time cancer diagnosis. Sci. Rep. 2022, 12, 18846.
  5. Han, J.H.; Lee, J.; Kang, J.U. Pixelation effect removal from fiber bundle probe based optical coherence tomography imaging. Opt. Express 2010, 18, 7427–7439.
  6. Winter, C.; Rupp, S.; Elter, M.; Munzenmayer, C.; Gerhauser, H.; Wittenberg, T. Automatic adaptive enhancement for images obtained with fiberscopic endoscopes. IEEE Trans. Biomed. Eng. 2006, 53, 2035–2046.
  7. Dumripatanachod, M.; Piyawattanametha, W. A fast depixelation method of fiber bundle image for an embedded system. In Proceedings of the 2015 8th Biomedical Engineering International Conference (BMEiCON), Pattaya, Thailand, 25–27 November 2015; pp. 1–4.
  8. Regeling, B.; Thies, B.; Gerstner, A.O.; Westermann, S.; Müller, N.A.; Bendix, J.; Laffers, W. Hyperspectral imaging using flexible endoscopy for laryngeal cancer detection. Sensors 2016, 16, 1288.
  9. Perperidis, A.; Dhaliwal, K.; McLaughlin, S.; Vercauteren, T. Image computing for fibre-bundle endomicroscopy: A review. Med. Image Anal. 2020, 62, 101620.
  10. Elter, M.; Rupp, S.; Winter, C. Physically motivated reconstruction of fiberscopic images. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR'06), Hong Kong, China, 20–24 August 2006; Volume 3, pp. 599–602.
  11. Wang, P.; Turcatel, G.; Arnesano, C.; Warburton, D.; Fraser, S.E.; Cutrale, F. Fiber pattern removal and image reconstruction method for snapshot mosaic hyperspectral endoscopic images. Biomed. Opt. Express 2018, 9, 780–790.
  12. Zheng, Z.; Cai, B.; Kou, J.; Liu, W.; Wang, Z. A honeycomb artifacts removal and super resolution method for fiber-optic images. In Proceedings of the International Conference on Intelligent Autonomous Systems, Shanghai, China, 3–7 July 2016; pp. 771–779.
  13. Lee, C.Y.; Han, J.H. Elimination of honeycomb patterns in fiber bundle imaging by a superimposition method. Opt. Lett. 2013, 38, 2023–2025.
  14. Cheon, G.W.; Cha, J.; Kang, J.U. Random transverse motion-induced spatial compounding for fiber bundle imaging. Opt. Lett. 2014, 39, 4368–4371.
  15. Renteria, C.; Suárez, J.; Licudine, A.; Boppart, S.A. Depixelation and enhancement of fiber bundle images by bundle rotation. Appl. Opt. 2020, 59, 536–544.
  16. Jiang, J.; Zhou, X.; Liu, J.; Pan, L.; Pan, Z.; Zou, F.; Li, Z.; Li, F.; Ma, X.; Geng, C.; et al. Optical fiber bundle-based high-speed and precise micro-scanning for image high-resolution reconstruction. Sensors 2021, 22, 127.
  17. Shao, J.; Zhang, J.; Huang, X.; Liang, R.; Barnard, K. Fiber bundle image restoration using deep learning. Opt. Lett. 2019, 44, 1080–1083.
  18. Shao, J.; Zhang, J.; Liang, R.; Barnard, K. Fiber bundle imaging resolution enhancement using deep learning. Opt. Express 2019, 27, 15880–15890.
  19. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  20. Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307.
  21. Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
Figure 1. Workflow of HAR-CNN for honeycomb artifact removal in fiber bundle imaging.
Figure 2. (a) Sequence of the training dataset generation via honeycomb pattern synthesis on ordinary images. (b) Network architecture of HAR-CNN with three convolution layers for patch extraction, nonlinear mapping, and reconstruction.
Figure 3. Hardware setup for the fiber bundle imaging system. (a) The proximal end of the fiber bundle, (b) objective lens (10×), (c) tube lens, (d) sCMOS camera, and (e) the distal end of the fiber bundle.
Figure 4. (a) Restored images for various training epochs; (b) PSNR versus the number of training epochs; (c) SSIM versus the number of training epochs.
Figure 5. Honeycomb artifact removal with various methods. (a) Synthetic fiber bundle image (raw), (b) median filter, (c) Gaussian filter, (d) interpolation, (e) HAR-CNN, and (f) ground truth image.
Figure 6. Resulting images for various honeycomb removal methods on the USAF target sample: (a) raw fiber bundle image, (b) median filter, (c) Gaussian filter, (d) interpolation, (e) HAR-CNN. (f) Pixel intensities along a line of interest shown as a white line within the line-patterned region, where the values are averaged across a column width of two.
Figure 7. Resulting images for various honeycomb removal methods on the lens tissue sample: (a) raw fiber bundle image, (b) median filter, (c) Gaussian filter, (d) interpolation, and (e) HAR-CNN (see Video S1). The red-squared region is magnified at the top-left corner of each image to clearly show image restoration details.
Figure 8. Detected image feature points for various honeycomb removal methods on the lens tissue sample: (a) raw fiber bundle image, (b) median filter, (c) Gaussian filter, (d) interpolation, and (e) HAR-CNN. The red cross indicates a detected image feature point. (f) The number of detected image features for the various honeycomb removal methods.
Figure 9. (a) The number of matched image features between successive frames for various honeycomb removal methods. (b–f) Mosaic images during a handheld scan of the fiber bundle probe on the lens tissue sample: (b) raw images, (c) median filter, (d) Gaussian filter, (e) interpolation, and (f) HAR-CNN for honeycomb pattern removal.
Table 1. Image restoration performance of various algorithms for the honeycomb-pattern-synthesized image.

Method       Raw      Median ¹   Gaussian ²   Interpolation   HAR-CNN
PSNR (dB)    20.58    19.74      18.98        23.66           26.52
SSIM         0.7301   0.7978     0.6878       0.7719          0.9119

¹ Size of the median filter is 7 × 7; ² standard deviation of the Gaussian filter is 3.
Table 2. Comparison of quality measures for various honeycomb removal methods.

Method          s ¹      r ²      q ³ (γ = 0.5)   q (γ = 0.8)
Raw image       0        0.4654   0.2327          0.0931
Median          0.6610   0.2880   0.4745          0.5864
Gaussian        0.7725   0.2821   0.5273          0.6744
Interpolation   0.7509   0.4044   0.5886          0.6992
HAR-CNN         0.7758   0.4569   0.6164          0.7120

¹ Variance-based smoothness, ² Rayleigh-based line separation criterion, and ³ weighted quality measure with weighting factor γ.