Method and Device of All-in-Focus Imaging with Overexposure Suppression in an Irregular Pipe
Abstract
1. Introduction
2. Materials and Methods
2.1. Cross-Modal All-in-Focus Imaging Strategy
2.2. Lighting and Imaging Strategy of Overexposure Suppression
- Four light sources are used to separately provide illumination and pre-imaging.
- The high-reflection region of the pre-image formed under each light source's illumination is calculated to obtain that source's prior information.
- The lighting option whose prior information shows the least high reflection is chosen.
- Condition-Ca: If a single light source a or b (a, b ∈ {Tlight, Blight, Llight, Rlight}) can avoid high-reflection overexposure, it is selected for supplementary lighting.
- Condition-Cb: If high-reflection overexposure cannot be avoided with a single source, a combined scheme with dual light sources a and b (a, b ∈ {Tlight, Blight, Llight, Rlight}) is selected for supplementary illumination according to the decision equation.
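The selection logic above can be sketched as follows. This is a minimal illustration, not the paper's decision equation: the area threshold, the pair score (built from union/intersection areas playing the roles of Sor and Sand), and all function names are assumptions.

```python
from itertools import combinations
import numpy as np

SOURCES = ["Tlight", "Blight", "Llight", "Rlight"]

def choose_lighting(masks, area_threshold=0.0):
    """Pick a lighting scheme from per-source overexposure masks.

    masks: dict mapping source name -> boolean array marking
    overexposed (high-reflection) pixels in that source's pre-image.
    """
    # Normalized spot area of each source's overexposed region
    areas = {s: float(masks[s].mean()) for s in SOURCES}
    best_single = min(areas, key=areas.get)
    if areas[best_single] <= area_threshold:
        return (best_single,)  # Condition-Ca: one source suffices
    # Condition-Cb: choose the pair whose overexposed regions overlap
    # least, so each image can recover the other's blown-out pixels
    # (intersection/union areas stand in for the paper's Sand and Sor).
    def overlap(a, b):
        union = np.logical_or(masks[a], masks[b]).sum()
        inter = np.logical_and(masks[a], masks[b]).sum()
        return inter / union if union else 0.0
    return min(combinations(SOURCES, 2), key=lambda p: overlap(*p))
```

A pair with disjoint overexposed regions is preferred because every pixel is then well exposed in at least one of the two images.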
2.3. Imaging Control and Fusion Scheme
- Depth data acquisition and all-in-focus scheme: Using the depth sensor to obtain the depth distribution of the unknown curved surface in the current field of view, depth data are applied to Equation (7).
- Pre-imaging and lighting plan: Using the depth data, each light source is used in turn for single-exposure pre-imaging at a fixed photosensitive level at the middle depth of the current surface, and image quality is analyzed by judging whether overexposure occurs or by calculating the exposure conditions. The prior information on high reflectivity acquired under each light source is then used to form the lighting plan via Equations (9) and (10).
- Full-focus imaging under the optimal lighting scheme: Four imaging-scenario combinations are preset (i.e., I: single-focus, single-illumination scene with a small DoF; II: single-focus, dual-illumination scene with a small DoF; III: multifocus, single-illumination scene with a large DoF; and IV: multifocus, dual-illumination scene with a large DoF). The all-in-focus scheme of Step 1 and the lighting scheme of Step 2 are matched to one of these four preset combinations; the illumination scheme is then executed once, and each object plane is imaged separately.
- Sequence image fusion: After the sequence images are acquired, a specific fusion scheme (e.g., multifocus or wavelet) is applied to them according to the combination chosen in Step 3. The image fusion process in the four scenarios proceeds as described below.
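The matching step above reduces to a small lookup from the two earlier decisions to one of the four preset scenarios. The function name and argument encoding below are illustrative assumptions, not the paper's interface:

```python
def select_scenario(num_focus_planes: int, num_lights: int) -> str:
    """Match the Step-1 all-in-focus scheme and the Step-2 lighting
    scheme to one of the four preset imaging scenarios (I-IV)."""
    focus = "single" if num_focus_planes == 1 else "multi"
    light = "single" if num_lights == 1 else "dual"
    return {
        ("single", "single"): "I",   # single-focus, single-illumination, small DoF
        ("single", "dual"): "II",    # single-focus, dual-illumination, small DoF
        ("multi", "single"): "III",  # multifocus, single-illumination, large DoF
        ("multi", "dual"): "IV",     # multifocus, dual-illumination, large DoF
    }[(focus, light)]
```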
3. Experimental Verification
3.1. All-in-Focus Imaging Device with Overexposure Suppression
3.2. Imaging Experiments
3.2.1. Single-Focus and Illumination under Small DoF
- As shown in Figure 7b, the depth distribution of the current Viewpoint i is detected to obtain the contour surface-depth data matrix; the maximum and minimum depths are db = 485 mm and df = 319 mm, respectively. Because Dfront−1(df) ≥ Dback−1(db), the single-focus mode applies according to the all-in-focus imaging strategy of Equation (5), and the camera automatically sets the object-plane depth to DFS = (Dfront−1(df) + Dback−1(db))/2 = 386 mm for the final all-in-focus imaging scheme, as shown in Figure 7c.
- After selecting the intermediate depth of the object plane, Dmid = 402 mm, rapid pre-imaging under the Tlight, Blight, Llight, and Rlight light sources is controlled according to the parameters in Table 1.
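The single-focus computation in this experiment can be sketched as below. The DoF inverse mappings Dfront−1 and Dback−1 are camera-specific and not reproduced in this excerpt, so they are passed in as callables; the toy mappings in the test are placeholders only.

```python
import numpy as np

def single_focus_plane(depth_map, d_front_inv, d_back_inv):
    """Return (D_FS, D_mid) for single-focus imaging.

    depth_map: array of detected depths for the current viewpoint (mm).
    d_front_inv, d_back_inv: stand-ins for the DoF inverse mappings
    D_front^-1 and D_back^-1 (not given in this excerpt).
    """
    df, db = float(depth_map.min()), float(depth_map.max())  # nearest / farthest
    if d_front_inv(df) < d_back_inv(db):
        # The Equation (5) condition fails: one focal plane cannot
        # cover the whole surface, so multifocus imaging is needed.
        raise ValueError("use the multifocus mode instead")
    d_fs = (d_front_inv(df) + d_back_inv(db)) / 2  # final object-plane depth
    d_mid = (df + db) / 2                          # depth used for pre-imaging
    return d_fs, d_mid
```

With the detected range df = 319 mm, db = 485 mm and toy mappings d ↦ d ± 100, this returns D_mid = 402 mm as in the experiment; the paper's actual mappings yield D_FS = 386 mm.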
3.2.2. Single-Focus and Dual Illumination under Small DoFs
3.2.3. Multifocus and Dual Illumination under Large DoF
- According to Step 1, the current depth range detected by the depth sensor is df = 247 mm to db = 878 mm. The depth matrix and surface diagram are shown in Figure 11a,b. As Dfront−1(df) < Dback−1(db), the multifocus mode is selected. Then, according to Equation (7), the depth distribution data are used to segment the current surface by depth-matching (Figure 11c). The assigned object-plane depths of the final all-in-focus scheme are listed in Table 5: unlike Viewpoints i and ii, Viewpoint iii requires three depths, DFM1 = 250 mm, DFM2 = 350 mm, and DFM3 = 584 mm.
- Pre-imaging steps such as those in the previous experiments are performed to determine the lighting scheme. The pre-imaging and decision parameters are listed in Table 1 and Table 3, respectively, and the lighting decision-making process and data are shown in Figure 11d–f and Table 6, respectively. By comparing the LC results in Table 6, the combination of Tlight and Blight is selected to provide lighting.
- As with Viewpoint ii, multi-exposure image sequences are acquired under Tlight and Blight at object-plane depths of DFM1 = 250 mm, DFM2 = 350 mm, and DFM3 = 584 mm.
- According to the fusion process shown in Figure 4b, the multi-exposure sequence images are first fused into HDR images for each combination of object-plane depth and light-source condition (Figure 12a,b). The multifocus HDR images obtained under the same light source are then fused with the method in [17] to obtain the all-in-focus HDR image under Viewpoint iii, and a wavelet fusion operation like the one used for Viewpoint ii is applied to the all-in-focus HDR images under Tlight and Blight to suppress the high reflectivity and restore the information in the overexposed areas. The final fusion results are shown in Figure 12c.
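The fusion order just described (per-depth exposure fusion, then multifocus fusion per light source, then cross-source fusion) can be sketched structurally as follows. Only the pipeline structure is from the paper; the three fusion operators are passed in by the caller because the actual methods (HDR exposure fusion, the multifocus method of [17], wavelet fusion) are not implemented here, and the mean/max/min operators used for illustration are toy stand-ins.

```python
import numpy as np

def fuse_viewpoint(stacks, hdr_fuse, multifocus_fuse, cross_source_fuse):
    """Fuse sequence images in the order of Figure 4b (scenario IV).

    stacks: dict light_source -> list over object-plane depths, each
    entry a list of exposure-bracketed images (2-D float arrays).
    """
    per_source = []
    for depth_stacks in stacks.values():
        # Step 1: multi-exposure fusion at each depth -> one HDR image each
        hdr_images = [hdr_fuse(exposures) for exposures in depth_stacks]
        # Step 2: multifocus fusion across depths -> all-in-focus HDR image
        per_source.append(multifocus_fuse(hdr_images))
    # Step 3: fuse the per-source results to suppress high reflection
    # and restore information in the overexposed areas
    return cross_source_fuse(per_source)
```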
3.3. Effect Evaluation of All-in-Focus Imaging with Overexposure Suppression
3.3.1. Evaluation of the All-in-Focus Imaging Effect
Objective Evaluation of the All-in-Focus Effects
- ➀ EOG function
- ➁ Tenengrad function
- ➂ Vollath function
- ➃ Signal-to-noise ratio (SNR)
- ➄ Peak signal-to-noise ratio (PSNR)
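For reference, the textbook forms of these focus and fidelity metrics can be sketched as below; the paper's exact normalizations (and any Tenengrad gradient threshold) may differ.

```python
import numpy as np

def eog(img):
    """Energy of gradient: sum of squared first differences."""
    gx, gy = np.diff(img, axis=1), np.diff(img, axis=0)
    return float((gx ** 2).sum() + (gy ** 2).sum())

def _conv2(img, k):
    """'Valid' 2-D correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:h - 2 + i, j:w - 2 + j]
    return out

def tenengrad(img):
    """Sum of squared Sobel gradient magnitudes."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = _conv2(img, kx), _conv2(img, kx.T)
    return float((gx ** 2 + gy ** 2).sum())

def vollath(img):
    """Vollath autocorrelation: sum I(x,y)I(x+1,y) - sum I(x,y)I(x+2,y)."""
    return float((img[:, :-1] * img[:, 1:]).sum()
                 - (img[:, :-2] * img[:, 2:]).sum())

def snr(ref, img):
    """Signal-to-noise ratio in dB relative to a reference image."""
    r = ref.astype(float)
    noise_pow = float(((r - img.astype(float)) ** 2).mean())
    if noise_pow == 0:
        return float("inf")
    return 10.0 * np.log10(float((r ** 2).mean()) / noise_pow)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = float(((ref.astype(float) - img.astype(float)) ** 2).mean())
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Larger EOG, Tenengrad, and Vollath values indicate a sharper (better-focused) image, which is how the fused result in Table 7 is compared with the individual source images.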
Subjective Evaluation of the Effects of All-in-Focus Fusion
3.3.2. Evaluation of Information Recovery Effect after Suppressing Overexposure
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Saheby, E.B.; Gouping, H.; Hays, A. Design of Top Mounted Supersonic Inlets for a Cylindrical Fuselage. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2019, 233, 2956–2979.
- Madadi, A.; Nili-Ahmadabadi, M.; Kermani, M.J. Quasi-3D Inverse Design of S-Shaped Diffuser with Specified Cross-Section Distribution; Super-Ellipse, Egg-Shaped, and Ellipse. Inverse Probl. Sci. Eng. 2021, 29, 2611–2628.
- Li, X.; Ma, L.; Xu, L. Empirical Modeling for Non-Lambertian Reflectance Based on Full-Waveform Laser Detection. Opt. Eng. 2013, 52, 116110.
- Hawari, A.; Alamin, M.; Alkadour, F.; Elmasry, M.; Zayed, T. Automated Defect Detection Tool for Closed Circuit Television (CCTV) Inspected Sewer Pipelines. Autom. Constr. 2018, 89, 99–109.
- Hansen, P.; Alismail, H.; Rander, P.; Browning, B. Visual Mapping for Natural Gas Pipe Inspection. Int. J. Robot. Res. 2015, 34, 532–558.
- Karkoub, M.; Bouhali, O.; Sheharyar, A. Gas Pipeline Inspection Using Autonomous Robots with Omni-Directional Cameras. IEEE Sens. J. 2021, 21, 15544–15553.
- Zheng, D.; Tan, H.; Zhou, F. A Design of Endoscopic Imaging System for Hyper Long Pipeline Based on Wheeled Pipe Robot. In AIP Conference Proceedings, Penang, Malaysia, 8–9 August 2017; Volume 1820, p. 060001.
- Zhou, Q.; Tian, Y.; Wang, J.; Xu, M. Design and Implementation of a High-Performance Panoramic Annular Lens. Appl. Opt. 2020, 59, 11246–11252.
- Lei, J.; Fu, J.; Guo, Q. Distortion Correction on Gun Bore Panoramic Image. In Proceedings of the 2011 International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (ICQR2MSE), Chengdu, China, 17–19 June 2011; Huang, H.Z., Zuo, M.J., Jia, X., Liu, Y., Eds.; IEEE: New York, NY, USA, 2011; pp. 956–959.
- Wang, Z.; Liang, H.; Wu, X.; Zhao, Y.; Cai, B.; Tao, C.; Zhang, Z.; Wang, Y.; Li, S.; Huang, F.; et al. A Practical Distortion Correcting Method from Fisheye Image to Perspective Projection Image. In Proceedings of the 2015 IEEE International Conference on Information and Automation, Lijiang, China, 26–30 May 2015; IEEE: New York, NY, USA, 2015; pp. 1178–1183.
- Ai, Q.; Yuan, Y. Rapid Acquisition and Identification of Structural Defects of Metro Tunnel. Sensors 2019, 19, 4278.
- Akpinar, U.; Sahin, E.; Meem, M.; Menon, R.; Gotchev, A. Learning Wavefront Coding for Extended Depth of Field Imaging. IEEE Trans. Image Process. 2021, 30, 3307–3320.
- Yao, C.; Shen, Y. Optical Aberration Calibration and Correction of Photographic System Based on Wavefront Coding. Sensors 2021, 21, 4011.
- Banerji, S.; Meem, M.; Majumder, A.; Sensale-Rodriguez, B.; Menon, R. Extreme-Depth-of-Focus Imaging with a Flat Lens. Optica 2020, 7, 214.
- Perwass, C.; Wietzke, L. Single Lens 3D-Camera with Extended Depth-of-Field. In Human Vision and Electronic Imaging XVII; Rogowitz, B.E., Pappas, T.N., DeRidder, H., Eds.; SPIE: Bellingham, WA, USA, 2012; Volume 8291, p. 829108.
- Ihrke, I.; Restrepo, J.; Mignard-Debise, L. Principles of Light Field Imaging: Briefly Revisiting 25 Years of Research. IEEE Signal Process. Mag. 2016, 33, 59–69.
- Liu, S.; Chen, J.; Rahardja, S. A New Multi-Focus Image Fusion Algorithm and Its Efficient Implementation. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 1374–1384.
- Liu, H.; Zhou, X. Multi-Focus Image Region Fusion and Registration Algorithm with Multi-Scale Wavelet. Intell. Autom. Soft Comput. 2020, 26, 1493–1501.
- Guo, C.; Ma, Z.; Guo, X.; Li, W.; Qi, X.; Zhao, Q. Fast Auto-Focusing Search Algorithm for a High-Speed and High-Resolution Camera Based on the Image Histogram Feature Function. Appl. Opt. 2018, 57, F44–F49.
- Wang, C.; Huang, Q.; Cheng, M.; Ma, Z.; Brady, D.J. Deep Learning for Camera Autofocus. IEEE Trans. Comput. Imaging 2021, 7, 258–271.
- Liu, H.; Li, H.; Luo, J.; Xie, S.; Sun, Y. Construction of All-in-Focus Images Assisted by Depth Sensing. Sensors 2019, 19, 1409.
- Wilburn, B.; Joshi, N.; Vaish, V.; Talvala, E.V.; Antunez, E.; Barth, A.; Adams, A.; Horowitz, M.; Levoy, M. High Performance Imaging Using Large Camera Arrays. ACM Trans. Graph. 2005, 24, 765–776.
- Shao, W.; Liu, K.; Shao, Y.; Zhou, A. Smooth Surface Visual Imaging Method for Eliminating High Reflection Disturbance. Sensors 2019, 19, 4953.
- Liu, B.; Yang, Y.; Wang, S.; Bai, Y.; Yang, Y.; Zhang, J. An Automatic System for Bearing Surface Tiny Defect Detection Based on Multi-Angle Illuminations. Optik 2020, 208, 164517.
- Feng, W.; Liu, H.; Zhao, D.; Xu, X. Research on Defect Detection Method for High-Reflective-Metal Surface Based on High Dynamic Range Imaging. Optik 2020, 206, 164349.
- Chen, Y.-J.; Tsai, J.-C.; Hsu, Y.-C. A Real-Time Surface Inspection System for Precision Steel Balls Based on Machine Vision. Meas. Sci. Technol. 2016, 27, 074010.
- Phong, B.T. Illumination for Computer Generated Pictures. Commun. ACM 1975, 18, 7.
- Tian, H.; Wang, D.; Lin, J.; Chen, Q.; Liu, Z. Surface Defects Detection of Stamping and Grinding Flat Parts Based on Machine Vision. Sensors 2020, 20, 4531.
- Ren, Z.; Fang, F.; Yan, N.; Wu, Y. State of the Art in Defect Detection Based on Machine Vision. Int. J. Precis. Eng. Manuf.-Green Tech. 2022, 9, 661–691.
- Tezerjani, A.D.; Mehrandezh, M.; Paranjape, R. Optimal Spatial Resolution of Omnidirectional Imaging Systems for Pipe Inspection Applications. Int. J. Optomechatron. 2015, 9, 261–294.
- Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography. Comput. Graph. Forum 2009, 28, 161–171.
- Abdulrahman, A.A.; Rasheed, M.; Shihab, S. The Analytic of Image Processing Smoothing Spaces Using Wavelet. J. Phys. Conf. Ser. 2021, 1879, 022118.
Parameter | Resolution | Feedback Light Intensity | Aperture Value | ISO | Shutter Time of Pre-Imaging (ms) | Collection of Multiple Exposure Shutter Times (ms) | Saturation | Brightness | Sharpness
---|---|---|---|---|---|---|---|---|---
Value | 2592 × 1944 | 20 lux | 2.8 | 800 | 50 | {35, 50, 80, 120} | 0 | 50 | 20
Light Source | Tlight | Blight | Llight | Rlight |
---|---|---|---|---|
Spot area (Size (IMGa)) | 590 | 3571 | 0 | 4713 |
Normalized spot area | 0.0117 | 0.0711 | 0.0 | 0.0938 |
Weight | kO | kM | kα | kβ |
---|---|---|---|---|
Value | 0.7 | 0.3 | 30 | 5 |
  | Tlight-Llight | Tlight-Blight | Tlight-Rlight | Llight-Blight | Llight-Rlight | Blight-Rlight
---|---|---|---|---|---|---
Sor | 1580 | 4435 | 5770 | 4131 | 5790 | 6264 |
Sand | 0 | 54 | 165 | 206 | 0 | 2450 |
ORI | 1 | 0.984 | 0.950 | 0.939 | 1 | 0.376 |
SOI | 0.974 | 0.966 | 0.962 | 0.967 | 0.988 | 0.960 |
LC | 0.992 | 0.979 | 0.953 | 0.947 | 0.961 | 0.551 |
  | <Dj> (mm) | DFMj (mm)
---|---|---
j = 1 | 219–292 | 250 |
j = 2 | 292–438 | 350 |
j = 3 | 438–878 | 584 |
  | Tlight-Llight | Tlight-Blight | Tlight-Rlight | Llight-Blight | Llight-Rlight | Blight-Rlight
---|---|---|---|---|---|---
Sor | 662 | 681 | 1920 | 491 | 1832 | 1802 |
Sand | 26 | 25 | 39 | 66 | 11 | 27 |
ORI | 0.992 | 1 | 0.988 | 0.980 | 0.997 | 0.992 |
SOI | 0.977 | 0.977 | 0.974 | 0.980 | 0.974 | 0.974 |
LC | 0.988 | 0.993 | 0.984 | 0.979 | 0.990 | 0.987 |
  | IMGFM1 | IMGFM2 | IMGFM3 | IMGFused
---|---|---|---|---
VEOG | 51.537 | 89.531 | 98.709 | 171.402 |
VTenengrad (10³) | 1.294 | 2.342 | 2.879 | 4.377
VVollath (10³) | 0.591 | 0.610 | 0.607 | 0.839
  | SNR (dB) | PSNR (dB)
---|---|---
IMGFM1 | 14.6442 | 33.9384 |
IMGFM2 | 15.8984 | 35.1926 |
IMGFM3 | 17.5386 | 36.8323 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, S.; Xing, Q.; Xu, H.; Lu, G.; Wang, J. Method and Device of All-in-Focus Imaging with Overexposure Suppression in an Irregular Pipe. Sensors 2022, 22, 7634. https://doi.org/10.3390/s22197634