Search Results (21)

Search Parameters:
Keywords = Bayer color filter array

25 pages, 15128 KB  
Review
Compression for Bayer CFA Images: Review and Performance Comparison
by Kuo-Liang Chung, Hsuan-Ying Chen, Tsung-Lun Hsieh and Yen-Bo Chen
Sensors 2022, 22(21), 8362; https://doi.org/10.3390/s22218362 - 31 Oct 2022
Cited by 5 | Viewed by 6192
Abstract
Bayer color filter array (CFA) images are captured by a single-chip image sensor covered with a Bayer CFA pattern, which has been widely used in modern digital cameras. In the past two decades, many compression methods have been proposed to compress Bayer CFA images. These compression methods can be roughly divided into the compression-first-based (CF-based) scheme and the demosaicing-first-based (DF-based) scheme. However, no review article covering the two compression schemes and comparing their compression performance has been reported in the literature. In this article, the related CF-based and DF-based compression works are reviewed first. Then, test Bayer CFA images created from the Kodak, IMAX, screen content image, video, and classical image datasets are compressed on the Joint Photographic Experts Group-2000 (JPEG-2000) platform and the newly released Versatile Video Coding (VVC) platform VTM-16.2. In terms of commonly used objective quality and perceptual quality metrics, the perceptual effect, and the quality–bitrate tradeoff metric, the compression performance of the CF-based compression methods, in particular the reversible color transform-based compression methods, and of the DF-based compression methods is compared and discussed. Full article
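
Both the CF-based and the DF-based experiments start from Bayer CFA test images derived from full-color datasets. As a rough, hedged illustration of that common preprocessing step (not the authors' tooling), the Python sketch below subsamples an RGB image into a single-channel Bayer mosaic; the RGGB phase is an assumption, since the abstract does not state which Bayer arrangement was used.

```python
import numpy as np

def rgb_to_bayer_rggb(rgb: np.ndarray) -> np.ndarray:
    """Subsample an H x W x 3 RGB image into a single-channel RGGB Bayer mosaic.

    Assumed RGGB phase (for illustration only):
        R G
        G B
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd columns
    return mosaic
```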

11 pages, 4188 KB  
Article
The Spectrum of Light Emitted by LED Using a CMOS Sensor-Based Digital Camera and Its Application
by Hyeon-Woo Park, Ji-Won Choi, Ji-Young Choi, Kyung-Kwang Joo and Na-Ri Kim
Sensors 2022, 22(17), 6418; https://doi.org/10.3390/s22176418 - 25 Aug 2022
Cited by 5 | Viewed by 4830
Abstract
We introduced a digital photo image analysis in color space to estimate the spectrum of fluor components dissolved in a liquid scintillator sample through the hue–wavelength relationship. Complementary metal oxide semiconductor (CMOS) image sensors with Bayer color filter array (CFA) technology in the digital camera were used to reconstruct and decode color images. Hue and wavelength are closely related, yet to date no literature has reported measurements of the hue–wavelength relationship, especially for blue light or wavelengths close to the UV region. The non-linear hue–wavelength relationship in the blue region was investigated using a light-emitting diode source. We focused on this wavelength region because the maximum quantum efficiency of the bi-alkali photomultiplier tube (PMT) is around 430 nm, so a good understanding of this region is necessary in PMT-based experiments. The CMOS Bayer CFA approach was sufficient to estimate the fluor emission spectrum in the liquid scintillator sample without using an expensive spectrophotometer. Full article
(This article belongs to the Special Issue Feature Papers in Optical Sensors 2022)
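
The core idea of the paper is to map a demosaicked pixel's hue to a wavelength through a measured calibration curve. The sketch below shows the general shape of that mapping with Python's standard colorsys module; the hue-to-wavelength table is a placeholder assumption, not the non-linear calibration the authors actually measured in the blue region.

```python
import colorsys
import numpy as np

# Placeholder hue (degrees) -> wavelength (nm) calibration points, for
# illustration only; the paper measures the real, non-linear relationship.
HUE_DEG = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
WAVELENGTH_NM = np.array([620.0, 580.0, 530.0, 490.0, 460.0])

def pixel_to_wavelength(r: int, g: int, b: int) -> float:
    """Map an 8-bit RGB pixel to an estimated wavelength via its hue."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return float(np.interp(h * 360.0, HUE_DEG, WAVELENGTH_NM))

print(pixel_to_wavelength(0, 0, 255))  # pure blue -> about 460 nm with this table
```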

9 pages, 3414 KB  
Article
Estimation of Fluor Emission Spectrum through Digital Photo Image Analysis with a Water-Based Liquid Scintillator
by Ji-Won Choi, Ji-Young Choi and Kyung-Kwang Joo
Sensors 2021, 21(24), 8483; https://doi.org/10.3390/s21248483 - 20 Dec 2021
Cited by 2 | Viewed by 2751
Abstract
In this paper, we performed a feasibility study of using a water-based liquid scintillator (WbLS) for conducting imaging analysis with a digital camera. A liquid scintillator (LS) dissolves a scintillating fluor in an organic base solvent to emit light; here, we synthesized a liquid scintillator using water as the solvent. In a WbLS, a suitable surfactant is needed to mix water and oil together. As an application of the WbLS, we introduced a digital photo image analysis in color space. A demosaicing process to reconstruct and decode color is briefly described. We were able to estimate the emission spectrum of the fluor dissolved in the WbLS by analyzing the pixel information stored in the digital image. This technique offers the potential to estimate fluor components in the visible region without using an expensive spectrophotometer. In addition, sinogram analysis was performed with the Radon transformation to reconstruct transverse images from longitudinal photo images of the WbLS sample. Full article
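
The sinogram step mentioned at the end of the abstract is a standard Radon-transform workflow. A minimal sketch with scikit-image is given below; the projection angles and the assumption that the longitudinal photos have already been resampled into a 2-D slice are illustrative, not the authors' exact processing chain.

```python
import numpy as np
from skimage.transform import radon, iradon

def sinogram_and_reconstruction(slice_image: np.ndarray, n_angles: int = 180):
    """Forward-project a 2-D slice into a sinogram, then reconstruct it by
    filtered back-projection. `slice_image` stands in for data resampled from
    the longitudinal photos of the WbLS sample (assumed preprocessing)."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(slice_image, theta=theta, circle=False)
    reconstruction = iradon(sinogram, theta=theta, circle=False)
    return sinogram, reconstruction
```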

16 pages, 11694 KB  
Article
Data-Driven Convolutional Model for Digital Color Image Demosaicing
by Francesco de Gioia and Luca Fanucci
Appl. Sci. 2021, 11(21), 9975; https://doi.org/10.3390/app11219975 - 25 Oct 2021
Cited by 4 | Viewed by 4367
Abstract
Modern digital cameras use a specific arrangement of color filters, the Color Filter Array, to sample the light wavelengths corresponding to visible colors. The most common Color Filter Array is the Bayer filter, which samples only one color per pixel. To recover the full-resolution image, an interpolation algorithm can be used. This process is called demosaicing, and it is one of the first processing stages of a digital imaging pipeline. We introduce a novel data-driven model for demosaicing that takes into account the different requirements for reconstruction of the image Luma and Chrominance channels. The final model is a parallel composition of two reconstruction networks with individual architectures, trained with distinct loss functions. In order to mitigate overfitting, we prepared a dataset that contains groups of patches sharing common chromatic and spectral characteristics. We reported the reconstruction error on noise-free images and measured the effect of random noise and quantization noise on the demosaicing reconstruction. To test our model's performance, we implemented the network on an NVIDIA Jetson Nano, obtaining an end-to-end running time of less than one second for a full-frame 12-Mpixel image. Full article
(This article belongs to the Special Issue Advances in Digital Image Processing)
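
The abstract describes a parallel composition of two reconstruction networks, one for Luma and one for Chrominance, each trained with its own loss. The PyTorch sketch below only illustrates that two-branch structure; the layer widths, depths, and losses are placeholder assumptions rather than the architecture published in the paper.

```python
import torch
import torch.nn as nn

class TwoBranchDemosaicNet(nn.Module):
    """Minimal two-branch demosaicing sketch: one branch predicts the luma
    channel, the other the two chroma channels (placeholder layer sizes)."""

    def __init__(self):
        super().__init__()
        def branch(out_channels: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, out_channels, 3, padding=1),
            )
        self.luma_branch = branch(1)    # would be trained with its own loss
        self.chroma_branch = branch(2)  # trained with a distinct loss

    def forward(self, mosaic: torch.Tensor) -> torch.Tensor:
        # mosaic: (N, 1, H, W) Bayer CFA input
        luma = self.luma_branch(mosaic)
        chroma = self.chroma_branch(mosaic)
        return torch.cat([luma, chroma], dim=1)  # (N, 3, H, W) luma + chroma planes
```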

10 pages, 6432 KB  
Article
Color Digital Holography Based on Generalized Phase-Shifting Algorithm with Monitoring Phase-Shift
by Minwoo Jung, Hosung Jeon, Sungjin Lim and Joonku Hahn
Photonics 2021, 8(7), 241; https://doi.org/10.3390/photonics8070241 - 28 Jun 2021
Cited by 4 | Viewed by 3691
Abstract
Color digital holography (DH) has been researched in various fields, such as holographic cameras and holographic microscopes, because it acquires a realistic color object wave by measuring both amplitude and phase. Among the methods for color DH, phase-shifting DH has the advantage of obtaining the signal wave of objects without autocorrelation and conjugate noises. However, this method usually requires many interferograms to obtain signals for all wavelengths. In addition, the phase-shift algorithm is sensitive to the phase-shift error caused by the instability or hysteresis of the phase shifter. In this paper, we propose a new method of color phase-shifting digital holography with monitoring of the phase shift. The color interferograms are recorded by using a focal plane array (FPA) with a Bayer color filter. In order to obtain the color signal wave from interferograms with unexpected phase-shift values, we devise a generalized phase-shifting DH algorithm. The proposed method enables robust measurement from such interferograms. Experimentally, we demonstrate that the proposed algorithm reconstructs the object image with negligibly small conjugate noise. Full article
(This article belongs to the Special Issue Holography)
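
For context, the textbook four-step phase-shifting reconstruction assumes ideal reference shifts of 0, π/2, π, and 3π/2; the paper's generalized algorithm relaxes exactly this assumption by monitoring the actual shifts. The sketch below shows only the ideal-shift case, applied per Bayer color plane after separating the FPA interferograms.

```python
import numpy as np

def four_step_phase_shifting(i0, i1, i2, i3):
    """Recover the complex object wave (up to a constant factor of
    1 / (4 * conj(R)), with R the reference wave) from four interferograms
    taken at ideal phase shifts 0, pi/2, pi, 3*pi/2.

    Textbook algorithm only; the paper's generalized method handles unknown,
    monitored phase-shift values instead of assuming these ideal steps.
    """
    i0, i1, i2, i3 = (np.asarray(x, dtype=float) for x in (i0, i1, i2, i3))
    return (i0 - i2) + 1j * (i1 - i3)
```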

12 pages, 3924 KB  
Communication
Bionic Birdlike Imaging Using a Multi-Hyperuniform LED Array
by Xin-Yu Zhao, Li-Jing Li, Lei Cao and Ming-Jie Sun
Sensors 2021, 21(12), 4084; https://doi.org/10.3390/s21124084 - 14 Jun 2021
Cited by 4 | Viewed by 3642
Abstract
Digital cameras obtain color information of the scene using a chromatic filter, usually a Bayer filter, overlaid on a pixelated detector. However, the periodic arrangement of both the filter array and the detector array introduces frequency aliasing in sampling and color misregistration during the demosaicking process, which degrades image quality. Inspired by the biological structure of avian retinas, we developed a chromatic LED array with a multi-hyperuniform geometric arrangement, which is irregular on small length scales but quasi-uniform on large scales, to suppress frequency aliasing and color misregistration in full color image retrieval. Experiments were performed with a single-pixel imaging system using the multi-hyperuniform chromatic LED array to provide structured illumination, and a frame rate of 208 fps was achieved at 32 × 32 pixel resolution. By comparing the experimental results with images captured with a conventional digital camera, it has been demonstrated that the proposed imaging system forms images with fewer chromatic moiré patterns and color misregistration artifacts. The concept proposed and verified here could provide insights for the design and manufacturing of future bionic imaging sensors. Full article
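
The measurement model behind the experiment is single-pixel imaging under structured illumination. As a generic, hedged illustration (not the authors' reconstruction or their multi-hyperuniform pattern design), the sketch below recovers an image by correlating the single-pixel readings with the displayed patterns.

```python
import numpy as np

def correlation_reconstruction(patterns: np.ndarray, signals: np.ndarray) -> np.ndarray:
    """Basic correlation (ghost-imaging style) estimator for single-pixel imaging.

    patterns: (K, H, W) illumination patterns shown by the LED array
    signals:  (K,) single-pixel detector readings, one per pattern
    """
    centered = signals - signals.mean()          # remove the DC component
    return np.tensordot(centered, patterns, axes=1) / len(signals)
```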

18 pages, 3371 KB  
Article
A Compact High-Quality Image Demosaicking Neural Network for Edge-Computing Devices
by Shuyu Wang, Mingxin Zhao, Runjiang Dou, Shuangming Yu, Liyuan Liu and Nanjian Wu
Sensors 2021, 21(9), 3265; https://doi.org/10.3390/s21093265 - 8 May 2021
Cited by 16 | Viewed by 6642
Abstract
Image demosaicking has been an essential and challenging problem among the most crucial steps of image processing behind image sensors. Due to the rapid development of intelligent processors based on deep learning, several demosaicking methods based on convolutional neural networks (CNNs) have been proposed. However, with their large numbers of model parameters, it is difficult for these networks to run in real time on edge-computing devices. This paper presents a compact demosaicking neural network based on the UNet++ structure. The network inserts densely connected layer blocks and adopts Gaussian smoothing layers instead of down-sampling operations before the backbone network. The densely connected blocks can extract mosaic image features efficiently by utilizing the correlation between feature maps, and they adopt depthwise separable convolutions to reduce the model parameters; the Gaussian smoothing layers can expand the receptive fields without down-sampling the image or discarding image information. The size constraints on the input and output images can also be relaxed, and the quality of the demosaicked images is improved. Experimental results show that the proposed network can improve the running speed by 42% compared with the fastest CNN-based method and achieve reconstruction quality comparable to it on four mainstream datasets. In addition, when we run inference on the demosaicked images with typical deep CNNs, MobileNet v1 and SSD, the accuracy reaches 85.83% (top-5) and 75.44% (mAP), comparable to the existing methods. The proposed network has the highest computing efficiency and the lowest parameter count among all methods, demonstrating that it is well suited to applications on modern edge-computing devices. Full article
(This article belongs to the Collection Artificial Intelligence in Sensors Technology)
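
The parameter savings quoted above come largely from replacing dense convolutions with depthwise separable ones. The block below is the standard construction in PyTorch, shown for reference; it is not the paper's full network.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution (one filter per channel) followed by a 1x1
    pointwise convolution. Parameters drop from 9*Cin*Cout for a dense 3x3
    layer to 9*Cin + Cin*Cout, which is the kind of saving the paper exploits."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```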

12 pages, 2688 KB  
Letter
Effective Three-Stage Demosaicking Method for RGBW CFA Images Using The Iterative Error-Compensation Based Approach
by Kuo-Liang Chung, Tzu-Hsien Chan and Szu-Ni Chen
Sensors 2020, 20(14), 3908; https://doi.org/10.3390/s20143908 - 14 Jul 2020
Cited by 10 | Viewed by 4106
Abstract
As color filter array (CFA) 2.0, the RGBW CFA pattern, in which each CFA pixel contains only one R, G, B, or W color value, provides more luminance information than the Bayer CFA pattern. Demosaicking RGBW CFA images I_RGBW is necessary in order to provide high-quality RGB full-color images as the target images for human perception. In this letter, we propose a three-stage demosaicking method for I_RGBW. In the first stage, a cross-shape-based color difference approach is proposed to interpolate the missing W color pixels in the W color plane of I_RGBW. In the second stage, an iterative error-compensation-based demosaicking process is proposed to improve the quality of the demosaiced RGB full-color image. In the third stage, taking the input image I_RGBW as the ground-truth RGBW CFA image, an I_RGBW-based refinement process is proposed to refine the quality of the demosaiced image obtained in the second stage. Based on test RGBW images collected from the Kodak and IMAX datasets, comprehensive experimental results illustrate that the proposed three-stage demosaicking method achieves substantial quality and perceptual improvement relative to the previous method by Hamilton and Compton and the two state-of-the-art methods, Kwan et al.'s pansharpening-based method and Kwan and Chou's deep learning-based method. Full article
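
The first stage interpolates the missing W samples with a cross-shape-based color difference approach. The sketch below shows the generic color-difference idea only: fill each missing W value from the cross-shaped (4-neighbour) average of W minus a guide plane, then add the guide back. Using a bilinearly interpolated G plane as the guide is an assumption; the paper defines its own color-difference construction.

```python
import numpy as np

def interpolate_w_cross(w_sparse, known_w_mask, guide):
    """Fill missing W samples from the cross-shaped neighbour average of the
    colour difference (W - guide), then add the guide back.

    w_sparse:     H x W array with measured W values, 0 elsewhere
    known_w_mask: boolean H x W array, True where W was measured
    guide:        full-resolution guide plane (assumed here: interpolated G)
    """
    diff = np.where(known_w_mask, w_sparse - guide, 0.0)
    counts = known_w_mask.astype(float)
    pad_d, pad_c = np.pad(diff, 1), np.pad(counts, 1)
    # Sum and count of the four cross-shaped neighbours (up, down, left, right).
    nb_sum = pad_d[:-2, 1:-1] + pad_d[2:, 1:-1] + pad_d[1:-1, :-2] + pad_d[1:-1, 2:]
    nb_cnt = pad_c[:-2, 1:-1] + pad_c[2:, 1:-1] + pad_c[1:-1, :-2] + pad_c[1:-1, 2:]
    est = guide + np.divide(nb_sum, nb_cnt, out=np.zeros_like(nb_sum), where=nb_cnt > 0)
    return np.where(known_w_mask, w_sparse, est)
```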

44 pages, 43692 KB  
Article
Demosaicing of CFA 3.0 with Applications to Low Lighting Images
by Chiman Kwan, Jude Larkin and Bulent Ayhan
Sensors 2020, 20(12), 3423; https://doi.org/10.3390/s20123423 - 17 Jun 2020
Cited by 10 | Viewed by 7064
Abstract
Low lighting images usually contain Poisson noise, which is pixel amplitude-dependent. More panchromatic or white pixels in a color filter array (CFA) are believed to help the demosaicing performance in dark environments. In this paper, we first introduce a CFA pattern known as CFA 3.0 that has 75% white pixels, 12.5% green pixels, and 6.25% each of red and blue pixels. We then present algorithms to demosaic this CFA and demonstrate its performance for normal and low lighting images. In addition, a comparative study was performed to evaluate the demosaicing performance of three CFAs, namely the Bayer pattern (CFA 1.0), the Kodak CFA 2.0, and the proposed CFA 3.0. Using a clean Kodak dataset with 12 images, we emulated low lighting conditions by introducing Poisson noise into the clean images. In our experiments, normal and low lighting images were used. For the low lighting conditions, images with signal-to-noise ratios (SNRs) of 10 dB and 20 dB were studied. We observed that the demosaicing performance in low lighting conditions was improved when there were more white pixels. Moreover, denoising can further enhance the demosaicing performance for all CFAs. The most important finding is that CFA 3.0 performs better than CFA 1.0, but is slightly inferior to CFA 2.0, in low lighting images. Full article
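
The low-light emulation relies on the fact that, for a Poisson-distributed pixel with mean lambda, the per-pixel SNR is sqrt(lambda), i.e. 10*log10(lambda) in dB. One common recipe consistent with that relation is sketched below; it is an assumption for illustration, and the paper's exact emulation procedure may differ.

```python
import numpy as np

def add_poisson_noise(clean: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Emulate a low-light exposure of a [0, 1] image at a target SNR (dB).

    Full scale is mapped to lambda = 10**(snr_db / 10) photons, so bright
    regions reach roughly the requested SNR (20 dB -> 100 photons).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam_full_scale = 10.0 ** (snr_db / 10.0)
    scaled = np.clip(clean, 0.0, 1.0) * lam_full_scale
    noisy = rng.poisson(scaled).astype(np.float64) / lam_full_scale
    return np.clip(noisy, 0.0, 1.0)
```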

14 pages, 69145 KB  
Article
Joint Demosaicing and Denoising Based on a Variational Deep Image Prior Neural Network
by Yunjin Park, Sukho Lee, Byeongseon Jeong and Jungho Yoon
Sensors 2020, 20(10), 2970; https://doi.org/10.3390/s20102970 - 24 May 2020
Cited by 15 | Viewed by 5522
Abstract
A joint demosaicing and denoising task refers to the task of simultaneously reconstructing and denoising a color image from a patterned image obtained by a monochrome image sensor with a color filter array. Recently, inspired by the success of deep learning in many image processing tasks, there has been research to apply convolutional neural networks (CNNs) to the task of joint demosaicing and denoising. However, such CNNs require a large amount of training data and work well only for patterned images with the same level of noise as the images they were trained on. In this paper, we propose a variational deep image prior network for joint demosaicing and denoising which can be trained on a single patterned image and works for patterned images with different levels of noise. We also propose a new RGB color filter array (CFA) which works better with the proposed network than the conventional Bayer CFA. Mathematical justifications of why the variational deep image prior network suits the task of joint demosaicing and denoising are also given, and experimental results verify the performance of the proposed method. Full article
(This article belongs to the Special Issue Digital Imaging with Multispectral Filter Array (MSFA) Sensors)
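
The key property claimed above is that the network can be fitted to a single patterned image, in the spirit of the deep image prior. The loop below is a bare-bones illustration of that idea only; the tiny placeholder network, optimizer settings, and mask handling are assumptions, and the variational and denoising components that form the paper's actual contribution are omitted.

```python
import torch
import torch.nn as nn

def dip_demosaic(mosaic, cfa_mask, steps=2000, lr=1e-3):
    """Deep-image-prior style fit on a single CFA observation.

    mosaic:   (1, 3, H, W) tensor with observed samples in their colour planes
    cfa_mask: (1, 3, H, W) binary tensor marking which samples were observed
    The loss only compares the output at observed CFA positions, so no external
    training data are needed.
    """
    net = nn.Sequential(                      # placeholder network, not the paper's
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )
    z = torch.randn(1, 32, mosaic.shape[2], mosaic.shape[3])  # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - mosaic) ** 2 * cfa_mask).mean()
        loss.backward()
        opt.step()
    return net(z).detach()
```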

21 pages, 19279 KB  
Article
Noise Removal in the Developing Process of Digital Negatives
by Marek Szczepański and Filip Giemza
Sensors 2020, 20(3), 902; https://doi.org/10.3390/s20030902 - 7 Feb 2020
Cited by 2 | Viewed by 4620
Abstract
Most modern color digital cameras are equipped with a single image sensor with a color filter array (CFA). One of the most important stages of preprocessing is noise reduction. Most research related to this topic ignores the problems associated with the actual color image acquisition process and assumes that we are processing the image in the sRGB space. In the presented paper, the real process of developing raw images obtained from a CFA sensor was analyzed. As part of the work, a diverse database of test images in the form of digital negatives and their reference versions was prepared. The main question posed in the work was where to place the denoising and demosaicing algorithms in the raw image processing pipeline. For this purpose, all stages of processing the digital negative are reproduced. The process of noise generation in the image sensor was also simulated, parameterized by ISO sensitivity for a specific CMOS sensor. In this work, we tested commonly used algorithms based on the idea of non-local means, such as NLM or BM3D, in combination with various techniques for interpolating the CFA sensor data. Our experiments have shown that applying noise reduction methods directly to the raw sensor data improves the final result only in the case of highly disturbed images, which corresponds to image acquisition in difficult lighting conditions. Full article
(This article belongs to the Section Optical Sensors)
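
The central question, where to put denoising relative to demosaicing, can be prototyped with OpenCV as below. This is a structural sketch only: the BG Bayer phase, the NLM filter strengths, and the use of plain grayscale NLM on the raw mosaic (a real CFA-domain denoiser would normally treat the colour planes separately) are all assumptions, not the pipeline evaluated in the paper.

```python
import cv2
import numpy as np

def denoise_then_demosaic(raw8: np.ndarray) -> np.ndarray:
    """Pipeline A: NLM denoising on the raw Bayer data, then demosaicing."""
    den = cv2.fastNlMeansDenoising(raw8, None, 10, 7, 21)
    return cv2.cvtColor(den, cv2.COLOR_BayerBG2BGR)

def demosaic_then_denoise(raw8: np.ndarray) -> np.ndarray:
    """Pipeline B: demosaic first, then denoise the full-colour result."""
    rgb = cv2.cvtColor(raw8, cv2.COLOR_BayerBG2BGR)
    return cv2.fastNlMeansDenoisingColored(rgb, None, 10, 10, 7, 21)
```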

58 pages, 51056 KB  
Article
Demosaicing of Bayer and CFA 2.0 Patterns for Low Lighting Images
by Chiman Kwan and Jude Larkin
Electronics 2019, 8(12), 1444; https://doi.org/10.3390/electronics8121444 - 1 Dec 2019
Cited by 15 | Viewed by 8883
Abstract
It is commonly believed that having more white pixels in a color filter array (CFA) will help the demosaicing performance for images collected in low lighting conditions. However, to the best of our knowledge, a systematic study to demonstrate the above statement does not exist. We present a comparative study to systematically and thoroughly evaluate the performance of demosaicing for low lighting images using two CFAs: the standard Bayer pattern (aka CFA 1.0) and the Kodak CFA 2.0 (an RGBW pattern with 50% white pixels). Using the clean Kodak dataset containing 12 images, we first emulated low lighting images by injecting Poisson noise at two signal-to-noise ratio (SNR) levels: 10 dB and 20 dB. We then created CFA 1.0 and CFA 2.0 images for the noisy images. After that, we applied more than 15 conventional and deep learning based demosaicing algorithms to demosaic the CFA patterns. Using both objective evaluation with five performance metrics and subjective visualization, we observe that having more white pixels indeed helps the demosaicing performance in low lighting conditions. This thorough comparative study is our first contribution. With denoising, we observed that the demosaicing performance of both CFAs improved by several dB. This can be considered our second contribution. Moreover, we noticed that denoising before demosaicing is more effective than denoising after demosaicing. Answering the question of where denoising should be applied is our third contribution. We also noticed that denoising plays a slightly more important role at 10 dB SNR than at 20 dB SNR. Some discussions of the following phenomena are also included: (1) why CFA 2.0 performed better than CFA 1.0; (2) why denoising was more effective before demosaicing than after demosaicing; and (3) why denoising helped more at low SNRs than at high SNRs. Full article
(This article belongs to the Section Circuit and Signal Processing)

10 pages, 4352 KB  
Article
High-Sensitivity Pixels with a Quad-WRGB Color Filter and Spatial Deep-Trench Isolation
by Yongnam Kim and Yunkyung Kim
Sensors 2019, 19(21), 4653; https://doi.org/10.3390/s19214653 - 26 Oct 2019
Cited by 15 | Viewed by 7001
Abstract
The demand for high-resolution complementary metal-oxide-semiconductor (CMOS) image sensors has increased in recent years, and pixel size has shrunk below 1.0 μm to allow accumulation of numerous pixels in a limited area. However, shrinking the pixel size lowers the sensitivity and increases crosstalk because the aspect ratio worsens when the pixel height is maintained. This work introduces a high-sensitivity pixel with a quad-WRGB (White, Red, Green, Blue) color filter array (CFA), spatial deep-trench isolation (S-DTI), and a spatial tungsten grid (S-WG). The optical performance of the suggested pixel was analyzed by performing 3D optical simulations at small pixel pitches of 1.0, 0.9, and 0.8 μm. The quad-WRGB CFA is compared with the quad-Bayer CFA, and the S-DTI and S-WG are compared with conventional DTI and WG. We confirmed sensitivity improvements of the suggested pixel using the quad-WRGB CFA with S-DTI and S-WG of up to 58.2%, 67.0%, and 66.3% for the 1.0, 0.9, and 0.8 μm pixels, respectively. Full article
(This article belongs to the Section Optical Sensors)

14 pages, 6394 KB  
Technical Note
Further Improvement of Debayering Performance of RGBW Color Filter Arrays Using Deep Learning and Pansharpening Techniques
by Chiman Kwan and Bryan Chou
J. Imaging 2019, 5(8), 68; https://doi.org/10.3390/jimaging5080068 - 1 Aug 2019
Cited by 20 | Viewed by 8613
Abstract
The RGBW color filter array (CFA), also known as CFA 2.0, contains R, G, B, and white (W) pixels. It is a 4 × 4 pattern that has 8 white pixels, 4 green pixels, 2 red pixels, and 2 blue pixels, and the pattern repeats itself over the whole image. In an earlier conference paper, we cast the demosaicing process for CFA 2.0 as a pansharpening problem. That formulation is modular and allows us to insert different pansharpening algorithms for demosaicing; new algorithms in interpolation and demosaicing can also be used. In this paper, we propose a new enhancement of our earlier approach by integrating a deep learning-based algorithm into the framework. Extensive experiments using IMAX and Kodak images clearly demonstrated that the new approach improved the demosaicing performance even further. Full article
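
For readers unfamiliar with CFA 2.0, the sketch below builds a 4 × 4 RGBW tile with the stated pixel counts (8 W, 4 G, 2 R, 2 B) and samples a four-plane image through it. The spatial arrangement shown is illustrative only; the exact Kodak CFA 2.0 layout is not reproduced here.

```python
import numpy as np

# One illustrative 4x4 RGBW tile with 8 W, 4 G, 2 R and 2 B pixels.
# The counts match the abstract; the arrangement itself is an assumption.
TILE = np.array([
    ["W", "G", "W", "R"],
    ["B", "W", "G", "W"],
    ["W", "G", "W", "R"],
    ["B", "W", "G", "W"],
])

def cfa20_mosaic(rgbw: np.ndarray) -> np.ndarray:
    """Sample an H x W x 4 image (R, G, B, W planes) through the tiled pattern."""
    h, w, _ = rgbw.shape
    plane = {"R": 0, "G": 1, "B": 2, "W": 3}
    tiled = np.tile(TILE, (h // 4 + 1, w // 4 + 1))[:h, :w]
    pattern = np.vectorize(plane.get)(tiled)
    rows, cols = np.indices((h, w))
    return rgbw[rows, cols, pattern]
```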

13 pages, 6424 KB  
Article
Weights-Based Image Demosaicking Using Posteriori Gradients and the Correlation of R–B Channels in High Frequency
by Meidong Xia, Chengyou Wang and Wenhan Ge
Symmetry 2019, 11(5), 600; https://doi.org/10.3390/sym11050600 - 26 Apr 2019
Cited by 2 | Viewed by 4523
Abstract
In this paper, we propose a weights-based image demosaicking algorithm for the Bayer-pattern color filter array (CFA). When reconstructing the missing G components, the proposed algorithm uses weights based on posteriori gradients to mitigate color artifacts and distortions. Furthermore, the proposed algorithm makes full use of the correlation of the R–B channels in high frequency when interpolating R/B values at B/R positions. Experimental results show that the proposed algorithm is superior to previous similar algorithms in composite peak signal-to-noise ratio (CPSNR) and subjective visual effect. The biggest advantage of the proposed algorithm is the use of posteriori gradients and the correlation of the R–B channels in high frequency. Full article
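
CPSNR, the headline metric above, is a single PSNR computed from the mean squared error pooled over the R, G, and B channels. A minimal sketch of the usual definition is given below; papers often exclude a border of pixels from the average, which is omitted here.

```python
import numpy as np

def cpsnr(reference: np.ndarray, result: np.ndarray, peak: float = 255.0) -> float:
    """Composite PSNR over all three colour channels of an 8-bit image pair."""
    mse = np.mean((reference.astype(np.float64) - result.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```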
