Article

Imaging Through Scattering Tissue Based on NIR Multispectral Image Fusion Technique

1 Department of Electrical and Electronics Engineering, Azrieli College of Engineering, Jerusalem 9103501, Israel
2 Faculty of Electrical and Electronics Engineering, Holon Institute of Technology, Holon 5810201, Israel
* Author to whom correspondence should be addressed.
Sensors 2025, 25(16), 4977; https://doi.org/10.3390/s25164977
Submission received: 9 July 2025 / Revised: 4 August 2025 / Accepted: 7 August 2025 / Published: 12 August 2025
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)

Abstract

Non-invasive diagnostics play a crucial role in medicine, ensuring both contamination safety and patient comfort. The proposed study integrates hyperspectral imaging with advanced image fusion, enabling a non-invasive diagnostic procedure within tissue. It utilizes near-infrared (NIR) imaging, which is well suited to capturing reflections from objects situated within a dispersive layer, enabling the reconstruction of images of internal tissue layers. It can detect objects, including cancerous tumors (presented as phantoms), inside human tissue. This involves processing data from multiple images taken in different NIR bands and merging them through image fusion techniques. Our research reveals information about objects within the diffusive medium that is visible only in the reconstructed images. The experimental results show a strong correlation with the samples employed in the study’s experimental design.

1. Introduction

The identification of clear tumor margins is a critical factor in the complete removal of tumors and in reducing the risk of their recurrence [1]. This is particularly important when the tumor is located near neurological structures, as removing too much tissue can be dangerous and harmful to vital organs or nerve tissue [2]. Several methods aim to improve tumor margin visualization, including MRI, CT, and targeted fluorescence imaging [1,3,4,5,6,7,8,9]. However, these methods have limitations, such as reduced spatial resolution in MRI and CT due to tissue shifting during surgery, dependence on photo-physics and photochemistry in targeted fluorescence imaging, and issues such as phototoxicity and photobleaching [10,11,12,13,14,15,16]. Moreover, MRI and CT devices are costly and less accessible to many patients. CT can also be harmful to the human body because of its radiative detection nature; a major concern with this imaging modality is ionizing radiation, such as X-rays and gamma rays, which can increase the likelihood of DNA damage and cancer.
Utilizing a hyperspectral camera within the near-infrared spectrum introduces a novel approach aimed at identifying reflections originating from objects situated within a dispersive layer. By processing data from images captured in various NIR bands and utilizing image fusion techniques [17], one can produce a non-invasive and non-harmful tumor detection method. This technique holds promise for detecting tumors located inside human tissue layers.
The human body’s tissues, rich in fluids, exhibit dispersion, in contrast to the dense composition of tumor tissue. When a halogen light source with a black-body radiation spectrum is directed toward the body, the heightened density of tumors yields significantly more reflections than typical healthy tissue [18,19]. Conventional silicon-based cameras primarily function within the visible spectrum range of 400–700 nm and are much less sensitive to the NIR spectrum [20]. Moreover, detecting objects obscured by diffusive tissue in the visible range is challenging, because much of the visible light is scattered or absorbed within the tissue, preventing deep penetration. A hyperspectral camera in the NIR range can be used to sense light coming from tissue at a depth of up to 2 cm; thus, cancerous tumors and blood vessels can be detected [21,22].
In the proposed work, the illumination for the hyperspectral imaging is provided by a halogen lamp, an incandescent source that generates light by heating a solid filament to a high temperature [23]. Halogen illumination was chosen because of its wide-spectrum emission, with most of the emitted energy lying in the NIR region of the spectrum. Only 15–20% of the light falls into the visible range (400–700 nm), and less than 1% is ultraviolet light [24,25]. A halogen bulb can be considered a rough approximation of a black-body radiator, a theoretical object that absorbs all the electromagnetic radiation that falls on it and emits radiation according to Planck’s law [26]. The main innovation lies in combining NIR multispectral imaging with advanced image fusion techniques. In detail, different wavelengths penetrate tissue along different paths and to different depths, each providing specific information about tissue structure. By extracting the valuable data from each image, the fused result offers an enhanced, more detailed image.

2. Theoretical Explanation

Exposure fusion is a technique for synthesizing a single image from a series of input images by retaining only the regions corresponding to the highest-quality parameters defined in the model [17,27,28,29,30]. The process involves the computation of weight maps for each input image, where higher weights indicate that a pixel should contribute more prominently to the final composite image. These weights are designed to reflect desired image attributes, such as high contrast and optimal exposure. Unlike many other fusion techniques, exposure fusion does not depend on generating a high-dynamic-range (HDR) image [28]. This approach offers faster computational performance and is particularly advantageous for visualization on displays that do not support HDR format.
The quality parameters used for the weight calculations are as follows:
  • Contrast: Contrast is typically measured using Michelson contrast [31] or RMS contrast [32]. However, in this application, a Laplacian filter was applied to the grayscale version of each image, and the absolute value of the filter’s response was then computed for each pixel. This method tends to assign higher weights to significant image features, such as edges and textures. This measure is denoted as C (the contrast weight) and is computed separately for each pixel in the image.
  • Exposure and illumination power: Examining the raw intensities within a channel allows us to assess the exposure quality of a pixel. Our objective is to retain intensities that are not close to zero (indicating underexposure) or one (indicating overexposure). We assign a weight to each normalized intensity, denoted as i, based on its proximity to the mid-value 0.5, using a Gaussian curve (a code sketch of both weight terms is given below):

E = \exp\left(-\frac{(i - 0.5)^2}{2\sigma^2}\right)

Here, E denotes the per-pixel exposedness of the image, and σ (the standard deviation) controls the width of the Gaussian weighting around the mid-intensity. In our method, σ = 2.
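The following Python sketch, using NumPy and OpenCV and assuming grayscale frames normalized to the range [0, 1], illustrates one possible implementation of the two per-pixel quality weights; the function names contrast_weight and exposure_weight are illustrative and not taken from the authors’ implementation.

import cv2
import numpy as np

def contrast_weight(img):
    # C: absolute response of a Laplacian filter applied to the grayscale image.
    lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F)
    return np.abs(lap)

def exposure_weight(img, sigma=2.0):
    # E: Gaussian weighting of each normalized intensity around the mid-value 0.5
    # (sigma = 2 as stated in the text).
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))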
Now we establish a per-pixel weight map, W_k, for the k-th image, defined as follows:

W_k = C_k \cdot E_k

where k is the index of the image in the sequence. Furthermore, the total weight over the sequence of images, \tilde{W}, is defined as:

\tilde{W} = \sum_{n=1}^{N} W_n

where N is the number of images in the sequence. Then, the normalized weight of the k-th image at each pixel is defined as follows:

\hat{W}_k = \frac{W_k}{\tilde{W}}

The resulting fused image, R, can then be computed pixel-wise as follows:

R = \sum_{n=1}^{N} \hat{W}_n \cdot I_n

where I_n is the n-th image in the sequence.
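The naive pixel-wise fusion defined by the equations above can be sketched as follows, reusing the weight functions from the previous listing; stack denotes a list of N normalized grayscale frames, and the small epsilon added to the total weight is an assumed safeguard against division by zero.

def fuse_naive(stack, sigma=2.0):
    # W_k = C_k * E_k for each frame, then normalize across the stack and blend.
    weights = [contrast_weight(img) * exposure_weight(img, sigma) for img in stack]
    total = np.sum(weights, axis=0) + 1e-12        # total weight (epsilon avoids division by zero)
    fused = np.zeros_like(stack[0], dtype=np.float32)
    for w, img in zip(weights, stack):
        fused += (w / total) * img                 # accumulate normalized-weight * image, pixel-wise
    return fused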
At this stage, the resulting image is of inadequate quality: visible stitch lines appear as a consequence of rapid fluctuations in the weights. To overcome this issue, Laplacian pyramid decomposition was applied [33,34]. The Laplacian pyramid decomposition is a multi-scale image representation technique that decomposes an image into layers, each representing a different level of detail. It is built by iteratively applying a Gaussian filter to the image and down-sampling it by a factor of two, creating a sequence of images at progressively lower resolutions (known as the Gaussian pyramid). At each level, the higher-resolution image is reconstructed from the lower-resolution version using interpolation, and the difference (residual) between the original and the reconstructed image is stored. These residuals form the layers of the Laplacian pyramid, which captures fine details at each scale. Figure 1 shows the concept of the Gaussian and Laplacian pyramid decomposition used in the proposed research.
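A minimal sketch of the Gaussian and Laplacian pyramid construction described above is given below, based on OpenCV’s pyrDown/pyrUp; the number of pyramid levels is an assumed parameter, as the level count used in the original implementation is not specified.

def gaussian_pyramid(img, levels=5):
    pyr = [img.astype(np.float32)]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))           # blur and downsample by a factor of two
    return pyr

def laplacian_pyramid(img, levels=5):
    gauss = gaussian_pyramid(img, levels)
    lap = []
    for l in range(levels - 1):
        up = cv2.pyrUp(gauss[l + 1], dstsize=(gauss[l].shape[1], gauss[l].shape[0]))
        lap.append(gauss[l] - up)                  # residual detail at level l
    lap.append(gauss[-1])                          # coarsest Gaussian level closes the pyramid
    return lap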
Following the Gaussian and Laplacian decompositions, there are N images along with N normalized weight maps functioning as alpha masks. The l-th level of the Laplacian pyramid decomposition of image I is denoted L{I}^l, while the l-th level of the Gaussian pyramid of image I is denoted G{I}^l. The Laplacian pyramid of each image and the Gaussian pyramid of its normalized weight map are multiplied pixel-wise and summed over the sequence, as described in Equation (5):

L\{R\}^l = \sum_{n=1}^{N} G\{\hat{W}\}_n^l \cdot L\{I\}_n^l

where n indexes the N images in the sequence. Each level l of the resulting Laplacian pyramid is therefore a weighted average of the original Laplacian decompositions at that level, with the weights given by the l-th level of the Gaussian pyramid of the corresponding weight map. Finally, the pyramid L{R} is collapsed to obtain R, the final fused image.
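The per-level blending and final collapse can be sketched as follows; this is an illustrative implementation of the equation above under the same assumptions as the previous listings, not the authors’ original code.

def fuse_pyramid(stack, sigma=2.0, levels=5):
    # Normalized weight maps for every frame in the stack.
    weights = [contrast_weight(img) * exposure_weight(img, sigma) for img in stack]
    total = np.sum(weights, axis=0) + 1e-12
    norm_weights = [w / total for w in weights]

    # L{R}^l = sum_n G{W^}_n^l * L{I}_n^l, accumulated level by level.
    blended = None
    for img, w in zip(stack, norm_weights):
        lap = laplacian_pyramid(img, levels)       # Laplacian pyramid of the image
        gw = gaussian_pyramid(w, levels)           # Gaussian pyramid of its weight map
        contrib = [g * l for g, l in zip(gw, lap)]
        blended = contrib if blended is None else [b + c for b, c in zip(blended, contrib)]

    # Collapse the fused Laplacian pyramid back into the final image R.
    fused = blended[-1]
    for l in range(levels - 2, -1, -1):
        fused = cv2.pyrUp(fused, dstsize=(blended[l].shape[1], blended[l].shape[0])) + blended[l]
    return fused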
Our experiment involves two stages for capturing the initial sets of images: First, we capture ten images at a specific wavelength with varying exposures (ranging from underexposed to overexposed, evenly distributed). This procedure is subsequently repeated for ten distinct wavelengths. Consequently, in our experiment, N equals 100 (resulting from 10 wavelengths multiplied by 10 exposures each).
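An illustrative driver for this acquisition scheme is sketched below; the file-naming pattern and 8-bit normalization are assumptions made for the example only.

import glob

def load_stack(pattern="capture_wl*_exp*.png"):
    # Load all frames matching the (hypothetical) naming scheme into a normalized stack.
    files = sorted(glob.glob(pattern))
    stack = []
    for f in files:
        img = cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        stack.append(img)
    return stack   # expected length N = 100 (10 wavelengths x 10 exposures)

# fused = fuse_pyramid(load_stack())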

3. Experimental Setup

The experimental setup was designed to capture images of back-reflection light coming from a sample situated under a halogen lamp using a hyperspectral camera with a wavelength range of 713 nm–920 nm (Monarch II hyperspectral, Unispectral, Ramat Gan, Israel, 1280 × 1024 Pixels, H-FOV (H, V, D) 31.5°, 25.5°, 39.8°, 10 CWL bands (±10 nm) 713, 736, 759, 782, 805, 828, 851, 874, 897, 920 nm). In Figure 2, one can see a schematic sketch of the setup. The camera was positioned perpendicular (90°) to the tissue with the light source angled at 70° towards an object that resembles the human body and with a denser material placed underneath (a piece of chicken breast with a plastic disk of 175 mm dia. underneath it) to simulate a tumor. The disk size in camera pixels was about 125 pixels in diameter. Both the object and denser material were placed on a non-reflective surface. The hyperspectral camera was positioned above the object (40 cm above the sample area), and images were taken in a dark environment. To ensure a uniform distribution of gray levels across the frame, we captured an image of a gray uniform patch before proceeding with the samples. The subsequent step involved acquiring the sequence of images. Then, a weight map was generated after considering the quality measures of each pixel in the images within the sequence. The final image was produced by consolidating the stack of images through weighted blending, with each image making a proportional contribution to the result determined by its weight in the weight map (Equations (1)–(6)).
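The exact normalization applied with the uniform gray patch is not detailed in the text; the sketch below assumes a simple per-pixel flat-field correction in which each frame is scaled by the mean of the reference frame divided by the reference itself, which is one common way to equalize gray levels across the field of view.

def flat_field(img, gray_ref):
    # Divide by the uniform gray reference and rescale so the mean level is preserved.
    ref = gray_ref.astype(np.float32)
    corrected = img.astype(np.float32) * (ref.mean() / np.maximum(ref, 1e-6))
    return np.clip(corrected, 0.0, 1.0)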

4. Experimental Results and Discussion

A standard CMOS camera, which records light at wavelengths below the near-infrared (NIR) spectrum, would not be able to distinguish the plasticine underneath the chicken tissue. Consequently, a hyperspectral camera was used to acquire 10 sets of images at different wavelength bands, with 10 different exposures per wavelength (100 images in total). The different exposures were taken to compensate for the uneven light intensity per wavelength of the light source. In Figure 3, one can see the experimental images: hyperspectral images taken in a sequence. The object appears brighter at lower wavelengths because of the detector sensitivity and the illumination spectrum. The corresponding filter wavelength of each image is indicated below it. The FWHM (Full Width at Half Maximum) of each spectral band is 40 nm, with a spectral band accuracy of ±2 nm and a 100 ms exposure time. All the images presented in Figure 3 were taken at the same exposure time; the different intensities visible across the bands are due to the light source spectrum, as mentioned above.
Subsequently, the image fusion technique was employed to merge all the images, aiming to generate a clearer and more harmonized composite. Following the merging of exposures, we iteratively applied the image fusion procedure to refine the detection of the plasticine area beneath the examined tissue, thereby facilitating a clearer delineation of its position. Furthermore, the illumination penetrating deeper into the region of interest (ROI) primarily comprises longer wavelengths within the near-infrared spectrum, undetectable by conventional silicon-based cameras.
To overcome this challenge, we utilized a camera equipped with near-infrared (NIR) detection capabilities. Leveraging this specialized camera, we captured reflections penetrating deeply into the tissue, thus revealing obscured objects. We acquired ten images from the hyperspectral camera for each exposure, spanning a wavelength range of 705–920 [nm], adequately suited for our objectives.
The purpose of the following step is to prepare all the images for the final stage of the fusion. After demonstrating that the camera can be used to detect objects behind scattering tissue, we wanted to improve its detection capability. In the first step, we captured the object at 10 different wavelengths as in the previous step (Figure 3). However, this time, we took sets of images with different exposures ranging from underexposed to overexposed (10 sets of images). The image fusion method was used to combine all the wavelengths into one image (as described before in Equations (1)–(6)). This method helped us extract the relevant information from each image and ignore any irrelevant information using the different interactions of each wavelength with the tissue. Since each wavelength reflection provides slightly different information about the area where the object is located, the combination of the images yielded a single image with more information, resulting in a clearer identification.
In Figure 4, one can see the experimental results. In Figure 4a, a white-light-illumination image (RGB) is shown, and no signs of the internal phantom can be seen. In Figure 4b, a single-shot NIR image is shown, partially resolving the internal phantom, and the resulting exposure fusion image is shown in Figure 4c, clearly resolving the internal phantom (a black spot in the middle of the image). Note that the gray appearance of the images is due to their monochromatic nature.
In Figure 5, one can see cross-sections of the fusion image that clearly show a black spot (tumor location) indicated on the phantom. Figure 5a,c show the fusion image with dashed lines indicating the X- and Y-axis cross-section locations, respectively. Figure 5b,d show the intensity profile along the X- and Y-axis at the black spot location (marked by the dashed lines). The green line denotes the horizontal axis intensity profile, while the red line denotes the vertical axis.
The resulting cross-sectional graphs presented enable us to clearly distinguish the presence of a high-density body beneath the diffusing tissue with a high level of certainty.
In Figure 6, one can see a zoom-in of the fused image with the X- and Y-axis cross-section intensity profile (Figure 6a). The zoomed-in location is marked with dashed white lines, as can be seen in Figure 5.
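Cross-section plots of this kind can be reproduced with a short script such as the one below, assuming fused is the fused image and (row, col) marks the dark-spot location; the coordinates and plotting details are illustrative only.

import matplotlib.pyplot as plt

def plot_cross_sections(fused, row, col):
    # Plot the horizontal (green) and vertical (red) intensity profiles through (row, col).
    fig, (ax_x, ax_y) = plt.subplots(1, 2, figsize=(8, 3))
    ax_x.plot(fused[row, :], color="green")
    ax_x.set(title="X-axis cross-section", xlabel="Column", ylabel="Intensity")
    ax_y.plot(fused[:, col], color="red")
    ax_y.set(title="Y-axis cross-section", xlabel="Row", ylabel="Intensity")
    plt.tight_layout()
    plt.show()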
The cross-sectional analysis presented in these images demonstrates the effectiveness of multispectral imaging and image fusion in detecting subsurface structures within diffusing biological tissues. The significant variations in intensity profiles along both the X- and Y-axis confirm the presence of a high-density object beneath the diffusing tissue.
Hyperspectral imaging with exposure fusion provides a powerful non-invasive method to visualize subsurface features in tissue. By capturing a reflectance spectrum at each pixel and combining multiple NIR wavelength images, it greatly enhances contrast and reveals hidden objects (e.g., tumors) that are indiscernible with conventional cameras. The proposed approach exploits the rich spectral and spatial information to differentiate materials by their optical signatures, without any prior knowledge of the object, yielding superior delineation of the target compared with the background. However, the method is also characterized by high complexity and cost. It requires specialized NIR hyperspectral sensors, multi-exposure image acquisition, and intensive data processing, resulting in large multi-dimensional datasets. Its performance is inherently limited to shallow tissue depths due to light attenuation, which constrains its practical applicability. Our next step involves using deep convolutional neural networks to achieve the same results with fewer images, thereby reducing the computational and hyperspectral acquisition requirements.

5. Conclusions

In this study, we demonstrated the feasibility of detecting reflections from objects situated within a dispersive layer using the near-infrared (NIR) range. Employing a hyperspectral camera operating within the NIR spectrum coupled with advanced image fusion techniques can enable the detection of cancerous tumors (phantoms) with improved resolution and clarity. By processing data from images captured across various bands of the near-IR range and merging them using image fusion methods, we devised a non-invasive and safe approach to tumor detection. This research lays the groundwork for future endeavors aimed at detecting cancerous tumors and blood vessels buried within tissue depths of the human body.

Author Contributions

Conceptualization, Y.D.; methodology, Y.D.; software, N.A. and Y.D.; validation, N.A. and Y.D.; formal analysis, N.A. and Y.D.; investigation, N.A. and Y.D.; resources, A.S. (Amir Shemer); data curation, Y.D.; writing—original draft preparation, N.A., Y.D., and A.S. (Amir Shemer); writing—review and editing, Y.B., A.S. (Ariel Schwarz), and A.S. (Amir Shemer); visualization, Y.B., A.S. (Ariel Schwarz), A.S. (Amir Shemer), and Y.D.; supervision, Y.D. and A.S. (Amir Shemer); project administration, A.S. (Amir Shemer); funding acquisition, A.S. (Amir Shemer). All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the RAMA 302-2024 grant provided by MOST Science Accelerators and Azrieli internal fund.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NIR	Near-Infrared
MRI	Magnetic Resonance Imaging
CT	Computed Tomography
FWHM	Full Width at Half Maximum
RGB	Red, Green, Blue

References

  1. Danan, Y.; Yariv, I.; Zalevsky, Z.; Sinvani, M. Improved Margins Detection of Regions Enriched with Gold Nanoparticles inside Biological Phantom. Materials 2017, 10, 203. [Google Scholar] [CrossRef] [PubMed]
  2. Unger, J.; Hebisch, C.; Phipps, J.E.; Lagarto, J.L.; Kim, H.; Darrow, M.A.; Bold, R.J.; Marcu, L. Real-time Diagnosis and Visualization of Tumor Margins in Excised Breast Specimens using Fluorescence Lifetime Imaging and Machine Learning. Biomed. Opt. Exp. 2020, 11, 1216–1230. [Google Scholar] [CrossRef] [PubMed]
  3. Baba, A.I.; Câtoi, C. Chapter 14, Nervous System Tumors. In Comparative Oncology; The Publishing House of the Romanian Academy: Bucharest, Romania, 2007. [Google Scholar]
  4. Heidkamp, J.; Scholte, M.; Rosman, C.; Manohar, S.; Fütterer, J.J.; Rovers, M.M. Novel Imaging Techniques for Intraoperative Margin Assessment in Surgical Oncology, A Systematic Review. Int. J. Cancer 2021, 149, 635–645. [Google Scholar] [CrossRef] [PubMed]
  5. Arica, S.; Altuntas, T.S.; Erbay, G. On Visualization and Quantification of Lesion Margin in CT Liver Images. In Proceedings of the Medical Technologies Congress (TIPTEKNO), Antalya, Turkey, 19–20 November 2020; pp. 1–4. [Google Scholar]
  6. Veluponnar, D.; Dashtbozorg, B.; Jong, L.-J.S.; Geldof, F.; Guimaraes, M.D.S.; Peeters, M.-J.T.F.D.V.; van Duijnhoven, F.; Sterenborg, H.J.C.M.; Ruers, T.J.M.; de Boer, L.L. Diffuse Reflectance Spectroscopy for Accurate Margin Assessment in Breast-Conserving Surgeries: Importance of an Optimal Number of Fibers. Biomed. Opt. Express 2023, 14, 4017. [Google Scholar] [CrossRef]
  7. Veluponnar, D.; de Boer, L.L.; Geldof, F.; Jong, L.-J.S.; Guimaraes, M.D.S.; Peeters, M.-J.T.F.D.V.; van Duijnhoven, F.; Ruers, T.; Dashtbozorg, B. Toward Intraoperative Margin Assessment using a Deep Learning-based Approach for Automatic Tumor Segmentation in Breast Lumpectomy Ultrasound Images. Cancers 2023, 15, 1652. [Google Scholar] [CrossRef]
  8. Veluponnar, D.; de Boer, L.L.; Dashtbozorg, B.; Jong, L.-J.S.; Geldof, F.; Guimaraes, M.D.S.; Sterenborg, H.J.C.M.; Vrancken-Peeters, M.-J.T.F.D.; van Duijnhoven, F.; Ruers, T. Margin Assessment During Breast Conserving Surgery using Diffuse Reflectance Spectroscopy. J. Biomed. Opt. 2024, 29, 045006. [Google Scholar] [CrossRef]
  9. Jong, L.J.S.; Veluponnar, D.; Geldof, F.; Sanders, J.; Guimaraes, M.D.S.; Peeters, M.-J.T.F.D.V.; van Duijnhoven, F.; Sterenborg, H.J.C.M.; Dashtbozorg, B.; Ruers, T.J.M. Toward Real-time Margin Assessment in Breast-conserving Surgery with Hyperspectral Imaging. Sci. Rep. 2025, 15, 9556. [Google Scholar] [CrossRef]
  10. Yang, J.; Li, K.; Deng, H.; Feng, J.; Fei, Y.; Jin, Y.; Liao, C.; Li, Q. CT Cinematic Rendering for Pelvic Primary Tumor Photorealistic Visualization. Quant. Imaging Med. Surg. 2018, 8, 804–818. [Google Scholar] [CrossRef]
  11. Kim, K.; Park, H.; Lim, K.M. Phototoxicity: Its Mechanism and Animal Alternative Test Methods. Toxicol. Res. 2015, 31, 97–104. [Google Scholar] [CrossRef]
  12. Bernas, T.; Robinson, J.P.; Asem, E.K.; Rajwa, B. Loss of Image Quality in Photobleaching During Microscopic Imaging of Fluorescent Probes Bound to Chromatin. J. Biomed. Opt. 2005, 10, 064015. [Google Scholar] [CrossRef]
  13. Berglund, A.J. Nonexponential Statistics of Fluorescence Photobleaching. J. Chem. Phys. 2004, 121, 2899–2903. [Google Scholar] [CrossRef] [PubMed]
  14. Pertzborn, D.; Nguyen, H.N.; Hüttmann, K.; Prengel, J.; Ernst, G.; Guntinas-Lichius, O.; von Eggeling, F.; Hoffmann, F. Intraoperative Assessment of Tumor Margins in Tissue Sections with Hyperspectral Imaging and Machine Learning. Cancers 2023, 15, 213. [Google Scholar] [CrossRef]
  15. Zhang, L.; Liao, J.; Wang, H.; Zhang, M.; Liu, Y.; Jiang, C.; Han, D.; Jia, Z.; Qin, C.; Niu, S.; et al. Near-Infrared II Hyperspectral Imaging Improves the Accuracy of Pathological Sampling of Multiple Cancer Types. Lab Investig. 2023, 103, 100212. [Google Scholar] [CrossRef]
  16. Parasca, S.V.; Calin, M.A.; Manea, D.; Radvan, R. Hyperspectral Imaging with Machine Learning for in Vivo Skin Carcinoma Margin Assessment: A Preliminary Study. Phys. Eng. Sci. Med. 2024, 47, 1141. [Google Scholar] [CrossRef]
  17. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion. In Proceedings of the 15th Pacific Conference on Computer Graphics and Applications (PG’07), Maui, HI, USA, 29 October–2 November 2007; pp. 382–390. [Google Scholar] [CrossRef]
  18. Jung, K.Y.; Cho, S.W.; Kim, Y.A.; Kim, D.; Oh, B.-C.; Park, D.J.; Park, Y.J. Cancers with Higher Density of Tumor-Associated Macrophages Were Associated with Poor Survival Rates. J. Pathol. Transl. Med. 2015, 49, 318–324. [Google Scholar] [CrossRef]
  19. Heusmann, H.; Kölzer, J.; Otto, J.; Puls, R.; Friedrich, T.; Heywang-Koebrunner, S.; Zinth, W. Photon Transport in Highly Scattering Tissue; SPIE: Bellingham, WA, USA, 1995; Volume 2326, p. 370. [Google Scholar]
  20. Mangold, K.; Shaw, J.; Vollmer, M. The physics of near-infrared photography. Eur. J. Phys. 2013, 34, 51. [Google Scholar] [CrossRef]
  21. Nilsson, A.M.K.; Heinrich, D.; Olajos, J.; Andersson-Engels, S. Near Infrared Diffuse Reflection and Laser-induced Fluorescence Spectroscopy for Myocardial Tissue Characterisation. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 1997, 53, 1901–1912. [Google Scholar] [CrossRef] [PubMed]
  22. Henderson, T.A.; Morries, L.D. Near-infrared Photonic Energy Penetration: Can Infrared Phototherapy Effectively Reach the Human Brain? Neuropsychiatr. Dis. Treat. 2015, 11, 2191–2208. [Google Scholar] [CrossRef]
  23. Van Bommel, W. Halogen Lamp. In Encyclopedia of Color Science and Technology; Luo, R., Ed.; Springer: New York, NY, USA, 2012. [Google Scholar] [CrossRef]
  24. Ojanen, M.; Kärhä, P.; Ikonen, E. Spectral Irradiance Model for Tungsten Halogen Lamps in 340–850 nm Wavelength Range. Appl. Opt. 2010, 49, 880–886. [Google Scholar] [CrossRef]
  25. Ohmi, M.; Haruna, M. Ultra-High Resolution Optical Coherence Tomography (OCT) Using a Halogen Lamp as the Light Source. Opt. Rev. 2003, 10, 478–481. [Google Scholar] [CrossRef]
  26. Ribbing, C.G. Chapter 8.2. Blackbody radiation. In Optical Thin Films and Coatings; Woodhead Publishing Limited: Cambridge, UK, 2013. [Google Scholar]
  27. Hessel, C. An Implementation of the Exposure Fusion Algorithm. Image Process. Line 2018, 8, 369–387. [Google Scholar] [CrossRef]
  28. Xu, F.; Liu, J.; Song, Y.; Sun, H.; Wang, X. Multi-Exposure Image Fusion Techniques: A Comprehensive Review. Remote Sens. 2022, 14, 771. [Google Scholar] [CrossRef]
  29. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level Image Fusion: A Survey of the State of the Art. Inf. Fusion 2017, 33, 100–112. [Google Scholar] [CrossRef]
  30. Michelson, A. Studies in Optics; University of Chicago Press: Chicago, IL, USA, 1927. [Google Scholar]
  31. Peli, E. Contrast in Complex Images. J. Opt. Soc. Am. A 1990, 7, 2032–2040. [Google Scholar] [CrossRef]
  32. Burt, P.J.; Hanna, K.; Kolczynski, R.J. Enhanced image capture through fusion. In Proceedings of the Workshop on Augmented Visual Display Research, Berlin, Germany, 11–14 May 1993; pp. 207–224. [Google Scholar]
  33. Ogden, J.M.; Adelson, E.H.; Bergen, J.R.; Burt, P.J. Pyramid-based computer graphics. RCA Eng. 1985, 30, 4–15. [Google Scholar]
  34. Toet, A. Hierarchical Image Fusion. Mach. Vis. Appl. 1990, 3, 1–11. [Google Scholar] [CrossRef]
Figure 1. (a) Gaussian pyramid decomposition and (b) Laplacian pyramid decomposition.
Figure 2. The schematic sketch of the setup.
Figure 3. Experimental images: Hyperspectral images taken in a sequence with the same exposure of 100 ms. The corresponding filter wavelength of a specific image is shown below each one.
Figure 4. Experimental results: (a) white-light-illuminated image (RGB), (b) one-shot of NIR image, (c) resulting exposure fusion image.
Figure 5. Images (a,c) are the fusion images, with green and red dashed lines indicating the cross-section’s locations. (b,d) Intensity profile along the X- and Y-axis at the black spot location along the dashed lines.
Figure 6. Experimental results: (a,c) zoomed version of the white dashed line areas in Figure 5 of the fusion image with the intensity profile along the Y-axis at the black spot location, (b,d) cross-section intensity profiles of the zoomed-in area.