Search Results (32)

Search Parameters:
Keywords = red, green, and blue (RGB) visible lights

10 pages, 5255 KiB  
Article
Rapid Quantitative Detection of Dye Concentration in Pt/TiO2 Photocatalytic System Based on RGB Sensing
by Cuiyan Han, Ziao Wang, Jiahong Cui, Shuqi Liu, Liu Yang, Yang Fu, Baolin Zhu and Cheng Guo
Sensors 2025, 25(10), 3195; https://doi.org/10.3390/s25103195 - 19 May 2025
Viewed by 462
Abstract
This article presents an integrated strategy that couples high-efficiency photocatalytic degradation with low-cost, rapid detection to overcome the main drawbacks of conventional TiO2-based photocatalysts, including a weak visible-light response, rapid charge–carrier recombination, and reliance on expensive instrumentation for dye concentration detection. Platinum-decorated TiO2 (Pt/TiO2) was prepared by photoreduction deposition, and systematic characterization confirmed the successful loading of zero-valent Pt nanoparticles onto the TiO2 surface, significantly improving charge separation and extending absorption into the visible region. Methylene blue degradation was quantified under ultraviolet (UV) and simulated sunlight; radical-scavenging tests clarified the reaction pathway. In parallel, smartphone images of the reaction mixture were processed in ImageJ to extract red–green–blue (RGB) values, which were related to dye concentration through a partial least-squares (PLS) model validated against reference UV–Vis data. Pt/TiO2 removed 95.0% of methylene blue within 20 min of UV irradiation and 90.2% within 160 min of simulated sunlight—31.8% and 19.1% faster, respectively, than pristine TiO2. The RGB-based PLS model achieved a coefficient of determination (R2) of 0.961 for the prediction set. By integrating photocatalysis with smartphone-based colorimetry, the proposed method enables rapid monitoring of organic dye concentrations, providing an intelligent and economical platform for industrial wastewater treatment. Full article
(This article belongs to the Section Sensing and Imaging)
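The RGB-to-concentration calibration described above can be sketched with a plain least-squares fit. This is a simplified stand-in for the paper's PLS model, and the calibration data below are synthetic (a hypothetical dye whose red channel fades with concentration), not the authors' measurements:

```python
import numpy as np

def fit_rgb_model(rgb, conc):
    """Least-squares fit of conc ~ [R, G, B, 1] (simplified stand-in for PLS)."""
    X = np.column_stack([rgb, np.ones(len(rgb))])
    beta, *_ = np.linalg.lstsq(X, conc, rcond=None)
    return beta

def predict(rgb, beta):
    rgb = np.atleast_2d(rgb)
    return np.column_stack([rgb, np.ones(len(rgb))]) @ beta

# Synthetic calibration set: higher dye concentration darkens the red channel
rng = np.random.default_rng(0)
conc = rng.uniform(0, 10, 50)
rgb = np.column_stack([200 - 12 * conc, 180 - 3 * conc, np.full(50, 150.0)])
rgb += rng.normal(0, 1.0, rgb.shape)

beta = fit_rgb_model(rgb, conc)
pred = predict(rgb, beta)
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - conc.mean()) ** 2)
```

In practice the PLS step matters when the RGB channels are strongly collinear; for a well-conditioned calibration set, the two approaches give similar predictions.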

21 pages, 5660 KiB  
Article
Exploring Imaging Techniques for Detecting Tomato Spotted Wilt Virus (TSWV) Infection in Pepper (Capsicum spp.) Germplasms
by Eric Opoku Mensah, Hyeonseok Oh, Jiseon Song and Jeongho Baek
Plants 2024, 13(23), 3447; https://doi.org/10.3390/plants13233447 - 9 Dec 2024
Viewed by 1602
Abstract
Due to the vulnerability of pepper (Capsicum spp.) and the virulence of tomato spotted wilt virus (TSWV), seasonal shortages and price surges are a challenge and threaten household income. Traditional bioassays for detecting TSWV, such as observation for symptoms and reverse transcription-PCR, are time-consuming, labor-intensive, and sometimes lack precision, highlighting the need for a faster and more reliable approach to plant disease assessment. Here, two imaging techniques—Red–Green–Blue (RGB) and hyperspectral imaging (using NDVI and wavelength intensities)—were compared with a bioassay method to study the incidence and severity of TSWV in different pepper accessions. The bioassay results showed TSWV incidence ranging from 0 to 100% among the accessions, while severity ranged from 0 to 5.68% based on RGB analysis. The normalized difference vegetation index (NDVI) scored from 0.21 to 0.23 for healthy spots on the leaf but from 0.14 to 0.19 for diseased spots, depending on the severity of the damage. The peak reflectance of the disease spots on the leaves was identified in the visible light spectrum (430–470 nm) when spectral bands were studied in the broad spectrum (400.93–1004.5 nm). For the selected wavelength in the visible light spectrum, a high reflectance intensity of 340 to 430 was identified for diseased areas, but between 270 and 290 for healthy leaves. RGB and hyperspectral imaging techniques can be recommended for precise and accurate detection and quantification of TSWV infection. Full article
(This article belongs to the Special Issue Plant Diseases and Sustainable Agriculture)
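NDVI, used above to separate healthy from diseased leaf spots, is the standard normalized band ratio; the reflectance values below are illustrative, not the paper's measurements:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

healthy = ndvi(0.45, 0.08)   # vigorous tissue: high NIR, low red reflectance
diseased = ndvi(0.30, 0.12)  # damaged tissue: NDVI drops
```

Applied pixel-wise to registered NIR and red bands, the same function yields an NDVI map in which lesions appear as low-value spots.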

25 pages, 3609 KiB  
Article
Detection and Quantification of Arnica montana L. Inflorescences in Grassland Ecosystems Using Convolutional Neural Networks and Drone-Based Remote Sensing
by Dragomir D. Sângeorzan, Florin Păcurar, Albert Reif, Holger Weinacker, Evelyn Rușdea, Ioana Vaida and Ioan Rotar
Remote Sens. 2024, 16(11), 2012; https://doi.org/10.3390/rs16112012 - 3 Jun 2024
Cited by 5 | Viewed by 1527
Abstract
Arnica montana L. is a medicinal plant with significant conservation importance. It is crucial to monitor this species, ensuring its sustainable harvesting and management. The aim of this study is to develop a practical system that can effectively detect A. montana inflorescences utilizing unmanned aerial vehicles (UAVs) with RGB sensors (red–green–blue, visible light) to improve the monitoring of A. montana habitats during the harvest season. From a methodological point of view, a model was developed based on a convolutional neural network (CNN) ResNet101 architecture. The trained model offers quantitative and qualitative assessments of A. montana inflorescences detected in semi-natural grasslands using low-resolution imagery, with a correctable error rate. The developed prototype is applicable in monitoring a larger area in a short time by flying at a higher altitude, implicitly capturing lower-resolution images. Despite the challenges posed by shadow effects, fluctuating ground sampling distance (GSD), and overlapping vegetation, this approach revealed encouraging outcomes, particularly when the GSD value was less than 0.45 cm. This research highlights the importance of low-resolution image clarity, of matching the training data to the phenophase, and of training across different photoperiods to enhance model flexibility. This innovative approach provides guidelines for mission planning in support of reaching sustainable management goals. The robustness of the model can be attributed to the fact that it has been trained with real-world imagery of semi-natural grassland, making it practical for fieldwork with accessible portable devices. This study confirms the potential of ResNet CNN models to transfer learning to new plant communities, contributing to the broader effort of using high-resolution RGB sensors, UAVs, and machine-learning technologies for sustainable management and biodiversity conservation. Full article
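The GSD threshold reported above (0.45 cm) follows from the usual photogrammetric relation GSD = altitude × pixel pitch / focal length; the camera parameters below are hypothetical, not taken from the study:

```python
def ground_sampling_distance_cm(altitude_m, pixel_pitch_um, focal_length_mm):
    """GSD in cm/pixel from flight altitude, sensor pixel pitch, and focal length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3) * 100.0

# Hypothetical camera: 2.4 um pixel pitch, 8.8 mm lens
gsd_15m = ground_sampling_distance_cm(15, 2.4, 8.8)  # ~0.41 cm, under the threshold
gsd_30m = ground_sampling_distance_cm(30, 2.4, 8.8)  # ~0.82 cm, over it
```

Inverting the same relation for a given sensor gives the maximum flight altitude that still satisfies a target GSD during mission planning.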

27 pages, 10421 KiB  
Article
A New Remote Sensing Desert Vegetation Detection Index
by Zhenqi Song, Yuefeng Lu, Ziqi Ding, Dengkuo Sun, Yuanxin Jia and Weiwei Sun
Remote Sens. 2023, 15(24), 5742; https://doi.org/10.3390/rs15245742 - 15 Dec 2023
Cited by 8 | Viewed by 3174
Abstract
Land desertification is a key environmental problem in China, especially in Northwest China, where it seriously affects the sustainable development of natural resources. In this paper, we combine high-resolution satellite remote sensing images and UAV (unmanned aerial vehicle) visible light images to extract desert vegetation data and quickly locate and accurately monitor land desertification in relevant areas according to changes in vegetation coverage. Due to the strong light and dry climate of deserts in Northwest China, which results in deeper vegetation shadow texture and mostly dry shrubs with fewer stems and leaves, the accuracy of the vegetation index commonly used in visible remote sensing image classification is not able to meet the requirements for monitoring and evaluating land desertification. For this reason, in this paper, we took the Hangjin Banner in Bayannur as an example and constructed a new vegetation index, the HSVGVI (hue–saturation–value green enhancement vegetation index), based on the HSV (hue–saturation–value) color space using channel enhancement that can improve the extraction accuracy of desert vegetation and reduce misclassification. In addition, in order to further test the extraction accuracy, samples of densely vegetated and multi-shaded areas were divided in the study area according to the accuracy-influencing factors. At the same time, the HSVGVI was compared with the vegetation indices EXG (excess green index), RGBVI (red–green–blue vegetation index), MGRVI (modified green–red vegetation index), NGBDI (normalized green–blue difference index), and VDVI (visible-band difference vegetation index) constructed based on the RGB (red–green–blue) color space. The experimental results show that the extraction accuracy of the EXG and other vegetation indices constructed in RGB color space can only reach 70%, while the extraction accuracy of the HSVGVI can reach more than 95%. In summary, the HSVGVI proposed in this paper can better realize the extraction of desert vegetation data and can provide a reliable technical tool for monitoring and evaluating land desertification. Full article
(This article belongs to the Special Issue Land Degradation Assessment with Earth Observation (Second Edition))
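The RGB-space baselines named above follow standard formulas (HSVGVI itself is defined in the paper and is not reproduced here); a minimal sketch on channels normalized to [0, 1], with illustrative pixel values:

```python
import numpy as np

def rgb_indices(r, g, b, eps=1e-9):
    """Common visible-band vegetation indices on normalized RGB channels."""
    r, g, b = (np.asarray(c, float) for c in (r, g, b))
    return {
        "EXG":   2 * g - r - b,
        "VDVI":  (2 * g - r - b) / (2 * g + r + b + eps),
        "MGRVI": (g ** 2 - r ** 2) / (g ** 2 + r ** 2 + eps),
        "RGBVI": (g ** 2 - r * b) / (g ** 2 + r * b + eps),
        "NGBDI": (g - b) / (g + b + eps),
    }

veg = rgb_indices(0.20, 0.45, 0.15)   # green vegetation pixel
soil = rgb_indices(0.40, 0.35, 0.25)  # bare-soil pixel
```

Thresholding any of these index maps separates vegetation from background; the paper's point is that shadowed desert shrubs sit close to the threshold in RGB space, which HSV-based enhancement is designed to avoid.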

20 pages, 8189 KiB  
Article
A Novel Desert Vegetation Extraction and Shadow Separation Method Based on Visible Light Images from Unmanned Aerial Vehicles
by Yuefeng Lu, Zhenqi Song, Yuqing Li, Zhichao An, Lan Zhao, Guosheng Zan and Miao Lu
Sustainability 2023, 15(4), 2954; https://doi.org/10.3390/su15042954 - 6 Feb 2023
Cited by 10 | Viewed by 2294
Abstract
Owing to factors such as climate change and human activities, ecological and environmental problems of land desertification have emerged in many regions around the world, among which the problem of land desertification in northwestern China is particularly serious. To grasp the trend of land desertification and the degree of natural vegetation degradation in northwest China is a basic prerequisite for managing the fragile ecological environment there. Visible light remote sensing images taken by a UAV can monitor the vegetation cover in desert areas on a large scale and with high time efficiency. However, as there are many low shrubs in desert areas, the shadows cast by them are darker, and the traditional RGB color-space-based vegetation index is affected by the shadow texture when extracting vegetation, so it is difficult to achieve high accuracy. For this reason, this paper proposes the Lab color-space-based vegetation index L2AVI (L-a-a vegetation index) to solve this problem. The EXG (excess green index), NGRDI (normalized green-red difference index), VDVI (visible band difference vegetation index), MGRVI (modified green-red vegetation index), and RGBVI (red-green-blue vegetation index) constructed based on RGB color space were used as control experiments in the three selected study areas. The results show that, although the extraction accuracies of the vegetation indices constructed based on RGB color space all reach more than 70%, these vegetation indices are all affected by the shadow texture to different degrees, and there are many problems of misdetection and omission. However, the accuracy of the L2AVI index can reach 99.20%, 99.73%, and 99.69%, respectively, avoiding the problem of omission due to vegetation shading and having a high extraction accuracy. Therefore, the L2AVI index can provide technical support and a decision basis for the protection and control of land desertification in northwest China. Full article
(This article belongs to the Special Issue Climate Change and Environmental Disaster)
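The Lab color space underlying L2AVI separates lightness (L) from the green–red (a) and blue–yellow (b) opponent axes, which is what decouples vegetation greenness from shadow darkness. A standard sRGB-to-Lab conversion sketch (D65 white point; the paper's own index formula is not reproduced here):

```python
import numpy as np

def srgb_to_lab(rgb):
    """sRGB in [0, 1] -> CIE L*a*b* (D65), standard formulas."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma, then transform to XYZ tristimulus values
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = (M @ lin) / np.array([0.95047, 1.0, 1.08883])  # D65 white point
    delta = 6 / 29
    f = np.where(xyz > delta ** 3, np.cbrt(xyz), xyz / (3 * delta ** 2) + 4 / 29)
    return (116 * f[1] - 16,        # L*: lightness
            500 * (f[0] - f[1]),    # a*: green (-) to red (+)
            200 * (f[1] - f[2]))    # b*: blue (-) to yellow (+)

L_w, a_w, b_w = srgb_to_lab([1.0, 1.0, 1.0])   # white reference
L_g, a_g, b_g = srgb_to_lab([0.1, 0.6, 0.1])   # vegetation-like green
```

A shadowed vegetation pixel mainly loses L while keeping a strongly negative a, which is why an a-channel index is more shadow-robust than RGB-difference indices.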

13 pages, 1549 KiB  
Article
Application of Imaging and Artificial Intelligence for Quality Monitoring of Stored Black Currant (Ribes nigrum L.)
by Ewa Ropelewska
Foods 2022, 11(22), 3589; https://doi.org/10.3390/foods11223589 - 11 Nov 2022
Cited by 13 | Viewed by 2490
Abstract
The objective of this study was to assess the influence of storage under different storage conditions on black currant quality in a non-destructive and inexpensive manner using image processing and artificial intelligence. Black currants were stored at a room temperature of 20 ± 1 °C and at a temperature of 3 °C (refrigerator). The images of black currants directly after harvest and of fruit stored for one and two weeks were obtained using a digital camera. Then, texture parameters were computed from the images converted to color channels R (red), G (green), B (blue), L (lightness component from black to white), a (green for negative and red for positive values), b (blue for negative and yellow for positive values), X (component with color information), Y (lightness), and Z (component with color information). Models for the classification of black currants were built using various machine learning algorithms based on selected textures for RGB, Lab, and XYZ color spaces. Models built using the IBk, multilayer perceptron, and multiclass classifier for textures from RGB color space, and the IBk algorithm for textures from Lab color space, distinguished unstored black currants and samples stored in the room for one and two weeks with an average accuracy of 100%, and the kappa statistic and weighted averages of precision, recall, Matthews correlation coefficient (MCC), receiver operating characteristic (ROC) area, and precision–recall (PRC) area equal to 1.000. This indicated a very distinct change in the external structure of the fruit after the first week and increasingly visible changes in quality with longer storage time. A classification accuracy reaching 98.67% (multilayer perceptron, Lab color space) for the samples stored in the refrigerator may indicate smaller quality changes caused by storage at a low temperature. The approach combining image textures and artificial intelligence turned out to be promising to monitor the quality changes in black currants during storage. Full article
(This article belongs to the Special Issue Advanced Analytical Strategies in Food Safety and Quality Monitoring)
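The texture parameters referenced above come from gray-level co-occurrence matrices; a from-scratch sketch of a symmetric GLCM with the contrast and homogeneity statistics (image values must already be quantized to `levels` gray levels; the study's exact offsets and feature set are not stated in the abstract):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalized gray-level co-occurrence matrix."""
    img = np.asarray(img)
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1   # count the pair in both directions (symmetric GLCM)
            P[j, i] += 1
    return P / P.sum()

def contrast(P):
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

def homogeneity(P):
    i, j = np.indices(P.shape)
    return float(np.sum(P / (1 + np.abs(i - j))))

flat = np.zeros((8, 8), dtype=int)               # uniform patch
ramp = (np.arange(64).reshape(8, 8) % 8)         # every horizontal pair differs by 1
P_flat, P_ramp = glcm(flat), glcm(ramp)
```

A uniform patch gives contrast 0 and homogeneity 1; as storage degrades the fruit surface, neighboring gray levels diverge and contrast rises, which is the signal the classifiers exploit.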

14 pages, 8842 KiB  
Article
Optical Beam Steerable Visible Light Communication (VLC) System Supporting Multiple Users Using RGB and Orthogonal Frequency Division Multiplexed (OFDM) Non-Orthogonal Multiple Access (NOMA)
by Wahyu Hendra Gunawan, Chi-Wai Chow, Yang Liu, Yun-Han Chang and Chien-Hung Yeh
Sensors 2022, 22(22), 8707; https://doi.org/10.3390/s22228707 - 11 Nov 2022
Cited by 14 | Viewed by 2927
Abstract
In order to achieve high-capacity visible light communication (VLC), five physical dimensions, including frequency, time, quadrature modulation, space, and polarization, can be utilized. Orthogonality should be maintained in order to reduce the crosstalk among different dimensions. In this work, we illustrate a high-capacity 21.01 Gbit/s optical beam steerable VLC system with vibration mitigation based on orthogonal frequency division multiplexed (OFDM) non-orthogonal multiple access (NOMA) signals using red, green, and blue (RGB) laser diodes (LDs). OFDM-NOMA increases the spectral efficiency of the VLC signal by allowing heavy overlap of different data-channel spectra in the power domain to maximize bandwidth utilization. In the NOMA scheme, different data channels are digitally multiplexed using different levels of power with superposition coding at the transmitter (Tx). Successive interference cancellation (SIC) is then utilized at the receiver (Rx) to retrieve the different power-multiplexed data channels. The total data rates (i.e., Data 1 and Data 2) achieved by the R/G/B OFDM-NOMA channels are 8.07, 6.62, and 6.32 Gbit/s, respectively, for an aggregate data rate of 21.01 Gbit/s. The corresponding average signal-to-noise ratios (SNRs) of Data 1 in the R, G, and B channels are 9.05, 9.18 and 8.94 dB, respectively, while those of Data 2 in the R, G, and B channels are 14.92, 14.29, and 13.80 dB, respectively. Full article
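Power-domain NOMA as used above superposes streams at different power levels and strips them off in order at the receiver via SIC; a minimal baseband sketch with two BPSK streams (the power split and noise level are illustrative, not the paper's):

```python
import numpy as np

def superpose(bits1, bits2, p1=0.8, p2=0.2):
    """Superposition coding: add two BPSK streams with unequal power."""
    s1 = np.sqrt(p1) * (2 * bits1 - 1)   # high-power stream
    s2 = np.sqrt(p2) * (2 * bits2 - 1)   # low-power stream
    return s1 + s2

def sic_receive(y, p1=0.8):
    """Successive interference cancellation: decode strong, subtract, decode weak."""
    b1 = (y > 0).astype(int)                  # strong stream decoded first
    residual = y - np.sqrt(p1) * (2 * b1 - 1) # cancel its contribution
    b2 = (residual > 0).astype(int)
    return b1, b2

rng = np.random.default_rng(1)
bits1 = rng.integers(0, 2, 1000)
bits2 = rng.integers(0, 2, 1000)
y = superpose(bits1, bits2) + rng.normal(0, 0.05, 1000)  # mild channel noise
dec1, dec2 = sic_receive(y)
```

The unequal power split is what makes the first decision reliable before cancellation; in the real system each of the R/G/B laser channels carries its own OFDM-NOMA signal of this form.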

12 pages, 2110 KiB  
Article
Quantitative vs. Qualitative Assessment of the Effectiveness of the Removal of Vascular Lesions Using the IPL Method—Preliminary Observations
by Aleksandra Lipka-Trawińska, Sławomir Wilczyński, Anna Deda, Robert Koprowski, Agata Lebiedowska and Dominika Wcisło-Dziadecka
Processes 2022, 10(11), 2225; https://doi.org/10.3390/pr10112225 - 29 Oct 2022
Viewed by 2069
Abstract
The aim of the study was to develop a methodology for the acquisition of skin images in visible light in a repeatable manner, enabling an objective assessment and comparison of the skin condition before and after a series of IPL treatments. Thirteen patients with erythematous lesions, vascular skin and/or rosacea were examined. Treatments aimed at reducing the erythema were carried out using the Lumecca™ device (InMode MD Ltd., Yokneam, Israel). The research used the FOTOMEDICUS image acquisition system (Elfo, Łódź, Poland). The RGB images were recorded and decomposed to individual channels: red, green and blue. Then, the output image (RGB) and its individual channels were transformed into images in shades of gray. The GLCM and QTDECOMP algorithms were used for the quantitative analysis of vascular lesions. Image recording in cross-polarized light enables effective visualization of vascular lesions of the facial skin. A series of three treatments using the IPL light source seems to be sufficient to reduce vascular lesions in the face. GLCM contrast and homogeneity analysis can be an effective method of identifying skin vascular lesions. Quadtree decomposition allows for the quantitative identification of skin vascular lesions to a limited extent. The brightness analysis of the images does not allow quantification of the vascular features of the skin. Mexametric measurements do not allow for a quantitative assessment of the skin's blood vessel response to high-energy light. Full article
(This article belongs to the Special Issue 10th Anniversary of Processes: Women's Special Issue Series)
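QTDECOMP-style quadtree decomposition splits an image block whenever it is not homogeneous, so lesion-rich regions produce many small leaves; a minimal sketch for square power-of-two images, thresholding on the block's max−min gray-level range (the study's exact homogeneity criterion is not stated in the abstract):

```python
import numpy as np

def qtdecomp(img, thresh):
    """Quadtree decomposition: return (y, x, size) of homogeneous leaf blocks."""
    leaves = []
    def split(y, x, size):
        block = img[y:y + size, x:x + size]
        if size == 1 or block.max() - block.min() <= thresh:
            leaves.append((y, x, size))     # homogeneous enough: stop here
            return
        h = size // 2
        for dy in (0, h):                   # otherwise recurse into quadrants
            for dx in (0, h):
                split(y + dy, x + dx, h)
    split(0, 0, img.shape[0])
    return leaves

uniform = np.full((8, 8), 100)
lesion = uniform.copy()
lesion[0, 0] = 200                           # one high-contrast pixel forces splits
n_uniform = len(qtdecomp(uniform, 10))
n_lesion = len(qtdecomp(lesion, 10))
```

The leaf count (or the fraction of small leaves) then serves as a scalar measure of how much high-contrast vascular texture a skin patch contains.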

16 pages, 5473 KiB  
Article
Infusion-Net: Inter- and Intra-Weighted Cross-Fusion Network for Multispectral Object Detection
by Jun-Seok Yun, Seon-Hoo Park and Seok Bong Yoo
Mathematics 2022, 10(21), 3966; https://doi.org/10.3390/math10213966 - 25 Oct 2022
Cited by 20 | Viewed by 3171
Abstract
Object recognition studies typically rely on red, green, and blue (RGB) images. However, RGB images captured in low-light environments, or in environments where other objects occlude the targets, degrade object recognition performance. In contrast, infrared (IR) images provide acceptable object recognition performance in these environments because they detect IR waves rather than visible illumination. In this paper, we propose an inter- and intra-weighted cross-fusion network (Infusion-Net), which improves object recognition performance by combining the strengths of the RGB-IR image pairs. Infusion-Net connects dual object detection models using a high-frequency (HF) assistant (HFA) to combine the advantages of RGB-IR images. To extract HF components, the HFA transforms input images into a discrete cosine transform domain. The extracted HF components are weighted via pretrained inter- and intra-weights for feature-domain cross-fusion. The inter-weighted fused features are transmitted to each other's networks to complement the limitations of each modality. The intra-weighted features are also used to enhance any insufficient HF components of the target objects. The experimental results demonstrate the superiority of the proposed network and its improved performance on the multispectral object recognition task. Full article
(This article belongs to the Special Issue Advances in Pattern Recognition and Image Analysis)
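The HFA described above extracts high-frequency components in a discrete cosine transform domain; a small orthonormal DCT-II sketch that zeroes the low-frequency corner of the coefficient matrix (the cutoff here is illustrative, and this is a stand-in for whatever masking the network learns):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def high_freq(img, cutoff):
    """Keep only high-frequency detail: zero the low-frequency DCT corner."""
    n = img.shape[0]
    C = dct_matrix(n)
    coef = C @ img @ C.T          # forward 2D DCT
    coef[:cutoff, :cutoff] = 0.0  # suppress low frequencies (incl. DC)
    return C.T @ coef @ C         # inverse 2D DCT

img = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth ramp: low-freq energy
hf = high_freq(img, 4)
```

For a smooth image almost all energy sits in the discarded corner, so the residual is small; edges and texture, by contrast, survive the mask, which is exactly the detail the fusion network wants to exchange between modalities.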

9 pages, 2681 KiB  
Article
Rational Distributed Bragg Reflector Design for Improving Performance of Flip-Chip Micro-LEDs
by Yuechang Sun, Lang Shi, Peng Du, Xiaoyu Zhao and Shengjun Zhou
Electronics 2022, 11(19), 3030; https://doi.org/10.3390/electronics11193030 - 23 Sep 2022
Cited by 8 | Viewed by 3736
Abstract
The distributed Bragg reflector (DBR) has been widely used in flip-chip micro light-emitting diodes (micro-LEDs) because of its high reflectivity. However, the conventional double-stack DBR has a strong angular dependence and a narrow reflective bandwidth. Here, we propose a wide reflected angle Ti3O5/SiO2 DBR (WRA-DBR) for AlGaInP-based red and GaN-based green/blue flip-chip micro-LEDs (RGB flip-chip micro-LEDs) to overcome the drawbacks of the double-stack DBR. The WRA-DBR, consisting of six sub-DBRs, has high reflectivity within the visible light wavelength region at incident angles of light ranging from 0° to 60°. Furthermore, the influence of the WRA-DBR and the double-stack DBR on the performance of RGB flip-chip micro-LEDs is numerically investigated based on the finite-difference time-domain method. Owing to the higher reflectivity and lower angular dependence of the WRA-DBR, the RGB flip-chip micro-LEDs with the WRA-DBR have a stronger electric field intensity on the top side in comparison with RGB flip-chip micro-LEDs with the double-stack DBR, which indicates that more photons can be extracted from micro-LEDs with the WRA-DBR. Full article
(This article belongs to the Special Issue Quantum and Optoelectronic Devices, Circuits and Systems)
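Each DBR period uses quarter-wave layers, d = λ0/(4n). The refractive indices below (Ti3O5 ≈ 2.4, SiO2 ≈ 1.46 in the visible) are typical textbook values, not taken from the paper, and the idea of the multi-sub-DBR design is that each sub-stack is centered on a different wavelength so the combined stopband covers the whole visible range:

```python
def quarter_wave_thickness_nm(center_nm, n):
    """Quarter-wave layer thickness: d = lambda0 / (4 n)."""
    return center_nm / (4.0 * n)

N_TI3O5, N_SIO2 = 2.4, 1.46  # assumed visible-range refractive indices

# One sub-DBR per color band, centered at illustrative B/G/R wavelengths
stack = {color: (quarter_wave_thickness_nm(wl, N_TI3O5),
                 quarter_wave_thickness_nm(wl, N_SIO2))
         for color, wl in {"blue": 460, "green": 530, "red": 630}.items()}
```

The large Ti3O5/SiO2 index contrast is what keeps each sub-stack's stopband wide, so only a few periods per center wavelength are needed.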

19 pages, 3831 KiB  
Article
Machine Learning-Based Processing of Multispectral and RGB UAV Imagery for the Multitemporal Monitoring of Vineyard Water Status
by Patricia López-García, Diego Intrigliolo, Miguel A. Moreno, Alejandro Martínez-Moreno, José Fernando Ortega, Eva Pilar Pérez-Álvarez and Rocío Ballesteros
Agronomy 2022, 12(9), 2122; https://doi.org/10.3390/agronomy12092122 - 7 Sep 2022
Cited by 21 | Viewed by 4116
Abstract
The development of unmanned aerial vehicles (UAVs) and light sensors has required new approaches for high-resolution remote sensing applications. High spatial and temporal resolution spectral data acquired by multispectral and conventional cameras (or red, green, blue (RGB) sensors) onboard UAVs can be useful for plant water status determination and, as a consequence, for irrigation management. A study in a vineyard located in south-eastern Spain was carried out during the 2018, 2019, and 2020 seasons to assess the potential uses of these techniques. Different water qualities and irrigation start times throughout the growth cycle were imposed. Flights with RGB and multispectral cameras mounted on a UAV were performed throughout the growth cycle, and orthoimages were generated. These orthoimages were segmented to include only vegetation and to calculate the green canopy cover (GCC). The stem water potential was measured, and the water stress integral (Sψ) was obtained during each irrigation season. Multiple linear regression techniques and artificial neural network (ANN) models with multispectral and RGB bands, as well as GCC, as inputs, were trained and tested to simulate the Sψ. The results showed that the information in the visible domain was highly related to the Sψ in the 2018 season. For all the other years and combinations of years, multispectral ANNs performed slightly better. Differences in the spatial resolution and radiometric quality of the RGB and multispectral geomatic products explain the good model performances with each type of data. Additionally, RGB cameras cost less and are easier to use than multispectral cameras, and RGB images are simpler to process than multispectral images. Therefore, RGB sensors are a good option for predicting whole-vineyard water status. In any case, punctual field measurements are still required to generate a general model to estimate the water status in any season and vineyard. Full article
(This article belongs to the Special Issue Precision Water Management)
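Green canopy cover as used above is the vegetation-pixel fraction after segmentation; a minimal sketch using an excess-green threshold (the paper's exact segmentation rule is not specified in the abstract, so the index and threshold here are assumptions):

```python
import numpy as np

def green_canopy_cover(rgb, exg_thresh=0.1):
    """Fraction of pixels classified as vegetation via the excess-green index."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1) + 1e-9
    r, g, b = (rgb[..., i] / total for i in range(3))  # chromatic coordinates
    exg = 2 * g - r - b
    return float((exg > exg_thresh).mean())

img = np.zeros((4, 4, 3))
img[:2] = [0.2, 0.6, 0.2]   # top half: canopy-like pixels
img[2:] = [0.5, 0.4, 0.3]   # bottom half: soil-like pixels
gcc = green_canopy_cover(img)
```

Computed per vine or per management zone, the GCC time series becomes one of the model inputs alongside the raw band values.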

13 pages, 1511 KiB  
Article
An Indoor Visible Light Positioning System for Multi-Cell Networks
by Roger Alexander Martínez-Ciro, Francisco Eugenio López-Giraldo, José Martín Luna-Rivera and Atziry Magaly Ramírez-Aguilera
Photonics 2022, 9(3), 146; https://doi.org/10.3390/photonics9030146 - 1 Mar 2022
Cited by 19 | Viewed by 3595
Abstract
Indoor positioning systems based on visible light communication (VLC) using white light-emitting diodes (WLEDs) have been widely studied in the literature. In this paper, we present an indoor visible-light positioning (VLP) system based on red–green–blue (RGB) LEDs and a frequency division multiplexing (FDM) scheme. This system combines the functions of an FDM scheme at the transmitters (RGB LEDs) and a received signal strength (RSS) technique to estimate the receiver position. The contribution of this work is two-fold. First, a new VLP system with RGB LEDs is proposed for a multi-cell network. Here, the RGB LEDs allow the exploitation of the chromatic space to transmit the VLP information. In addition, the VLC receiver leverages the responsivity of a single photodiode for estimating the FDM signals in RGB lighting channels. A second contribution is the derivation of an expression to calculate the optical power received by the photodiode for each incident RGB light. To this end, we consider a VLC channel model that includes both line-of-sight (LOS) and non-line-of-sight (NLOS) components. The fast Fourier transform (FFT) estimates the powers and frequencies of the received FDM signal. The receiver uses these optical signal powers in the RSS-based localization application to calculate the Euclidean distances and the frequencies for the RGB LED positions. Subsequently, the receiver's location is estimated using the Euclidean distances and RGB LED positions via a trilateration algorithm. Finally, Monte Carlo simulations are performed to evaluate the error performance of the proposed VLP system in a multi-cell scenario. The results show high positioning accuracy for different color points. The average positioning error for all chromatic points was less than 2.2 cm. These results suggest that the analyzed VLP system could be used in application scenarios where white light balance or luminaire color planning are also the goals. Full article
(This article belongs to the Special Issue Visible Light Communication (VLC))
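The final trilateration step can be written as a linear least-squares problem by subtracting one range equation from the others; a sketch with hypothetical LED anchor positions (the paper's own algorithm details are not given in the abstract):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D position from >= 3 anchor positions and distances.

    Subtracting the first range equation ||x - a_i||^2 = d_i^2 from the rest
    linearizes the system: 2 (a_i - a_0) . x = ||a_i||^2 - ||a_0||^2 - d_i^2 + d_0^2.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

leds = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]      # hypothetical RGB LED cell anchors (m)
true_pos = np.array([1.0, 2.0])
d = [np.linalg.norm(true_pos - np.array(p)) for p in leds]
est = trilaterate(leds, d)
```

In the real system the distances come from the RSS-derived optical powers, so they are noisy; the least-squares form handles more than three anchors and averages that noise out.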

17 pages, 3872 KiB  
Communication
Characterization of the Relationship between the Loess Moisture and Image Grayscale Value
by Qingbing Liu, Jinge Wang, Hongwei Zheng, Tie Hu and Jie Zheng
Sensors 2021, 21(23), 7983; https://doi.org/10.3390/s21237983 - 30 Nov 2021
Cited by 5 | Viewed by 2281
Abstract
This paper presents a model for estimating the moisture of loess from an image grayscale value. A series of well-controlled air-dry tests were performed on saturated Malan loess, and the moisture content of the loess sample during the desiccation process was automatically recorded while the soil images were continually captured using a photogrammetric device equipped with a CMOS image sensor. By converting the red, green, and blue (RGB) image into a grayscale one, the relationship between the water content and grayscale value, referred to as the water content–gray value characteristic curve (WGCC), was obtained; the impacts of dry density, particle size distribution, and illuminance on the WGCC were investigated. It is shown that the grayscale value increases as the water content decreases; based on the rate of increase of the grayscale value, the WGCC can be segmented into three stages: slow-rise, rapid-rise, and asymptotically stable stages. The influences that dry density and particle size distribution have on the WGCC depend on light reflection and transmission, and this dependence is closely related to the soil water types and their relative proportions. Moreover, the WGCC for a given soil sample is unique when normalized by illuminance. The mechanism behind the three stages of the WGCC is discussed in terms of visible light reflection. A mathematical model was proposed to describe the WGCC, and the physical meaning of the model parameters was interpreted. The proposed model is validated independently using another six different types of loess samples and is shown to match the experimental data well. The results of this study can provide a reference for the development of non-contact soil moisture monitoring methods as well as relevant sensors and instruments. Full article
(This article belongs to the Special Issue Smart CMOS Image Sensors and Related Applications)
20 pages, 8375 KiB  
Article
Simulated Annealing-Based Hyperspectral Data Optimization for Fish Species Classification: Can the Number of Measured Wavelengths Be Reduced?
by John Chauvin, Ray Duran, Kouhyar Tavakolian, Alireza Akhbardeh, Nicholas MacKinnon, Jianwei Qin, Diane E. Chan, Chansong Hwang, Insuck Baek, Moon S. Kim, Rachel B. Isaacs, Ayse Gamze Yilmaz, Jiahleen Roungchun, Rosalee S. Hellberg and Fartash Vasefi
Appl. Sci. 2021, 11(22), 10628; https://doi.org/10.3390/app112210628 - 11 Nov 2021
Cited by 8 | Viewed by 2652
Abstract
Relative to standard red/green/blue (RGB) imaging systems, hyperspectral imaging systems offer superior capabilities but tend to be expensive and complex, requiring either a mechanically complex push-broom line scanning method, a tunable filter, or a large set of light emitting diodes (LEDs) to collect images in multiple wavelengths. This paper proposes a new methodology to support the design of a hypothesized system that uses three imaging modes—fluorescence, visible/near-infrared (VNIR) reflectance, and shortwave infrared (SWIR) reflectance—to capture narrow-band spectral data at only three to seven narrow wavelengths. Simulated annealing is applied to identify the optimal wavelengths for sparse spectral measurement with a cost function based on the accuracy provided by a weighted k-nearest neighbors (WKNN) classifier, a common and relatively robust machine learning classifier. Two separate classification approaches are presented, the first using a multi-layer perceptron (MLP) artificial neural network trained on sparse data from the three individual spectra and the second using a fusion of the data from all three spectra. The results are compared with those from four alternative classifiers based on common machine learning algorithms. To validate the proposed methodology, reflectance and fluorescence spectra in these three spectroscopic modes were collected from fish fillets and used to classify the fillets by species. Accuracies determined from the two classification approaches are compared with benchmark values derived by training the classifiers with the full-resolution spectral data. The results of the single-layer classification study show accuracies ranging from ~68% for SWIR reflectance to ~90% for fluorescence with just seven wavelengths. The results of the fusion classification study show accuracies of about 95% with seven wavelengths and more than 90% even with just three wavelengths. Reducing the number of required wavelengths facilitates the creation of rapid and cost-effective spectral imaging systems that can be used for widespread analysis in food monitoring/food fraud, agricultural, and biomedical applications. Full article
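The wavelength-selection loop described in this abstract (simulated annealing with a WKNN-accuracy cost) can be sketched as follows. The move rule, cooling schedule, and leave-one-out WKNN are generic illustrative choices, not the authors' tuned configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def wknn_accuracy(X, y, idx, k=5):
    """Leave-one-out accuracy of a distance-weighted k-nearest-neighbors
    classifier restricted to the selected wavelength columns idx."""
    Xs = X[:, idx]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude each sample from its own vote
    nn = np.argsort(d, axis=1)[:, :k]
    w = 1.0 / (d[np.arange(len(Xs))[:, None], nn] + 1e-12)
    preds = np.empty(len(Xs), dtype=y.dtype)
    for i in range(len(Xs)):
        votes = {}
        for j, wt in zip(nn[i], w[i]):
            votes[y[j]] = votes.get(y[j], 0.0) + wt
        preds[i] = max(votes, key=votes.get)
    return float(np.mean(preds == y))

def anneal_wavelengths(X, y, n_select=7, n_iter=200, t0=0.05, cooling=0.98):
    """Simulated annealing over wavelength subsets: each move swaps one
    selected wavelength for an unselected one; worse subsets are accepted
    with probability exp(delta / T). Schedule parameters are assumptions."""
    n_bands = X.shape[1]
    current = list(rng.choice(n_bands, size=n_select, replace=False))
    cur_acc = wknn_accuracy(X, y, current)
    best, best_acc, temp = current[:], cur_acc, t0
    for _ in range(n_iter):
        cand = current[:]
        pool = [b for b in range(n_bands) if b not in cand]
        cand[int(rng.integers(n_select))] = int(rng.choice(pool))
        acc = wknn_accuracy(X, y, cand)
        if acc > cur_acc or rng.random() < np.exp((acc - cur_acc) / temp):
            current, cur_acc = cand, acc
            if acc > best_acc:
                best, best_acc = cand[:], acc
        temp *= cooling
    return sorted(best), best_acc
```

On real data, `X` would hold the full-resolution spectra (rows: fillet samples, columns: measured wavelengths) and the returned indices are the three to seven wavelengths to build into the sparse imaging system.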
23 pages, 4982 KiB  
Article
Simulating the Leaf Area Index of Rice from Multispectral Images
by Shenzhou Liu, Wenzhi Zeng, Lifeng Wu, Guoqing Lei, Haorui Chen, Thomas Gaiser and Amit Kumar Srivastava
Remote Sens. 2021, 13(18), 3663; https://doi.org/10.3390/rs13183663 - 14 Sep 2021
Cited by 30 | Viewed by 6777
Abstract
Accurate estimation of the leaf area index (LAI) is essential for crop growth simulations and agricultural management. This study conducted a field experiment with rice and measured the LAI in different rice growth periods. The multispectral bands (B) including red edge (RE, 730 nm ± 16 nm), near-infrared (NIR, 840 nm ± 26 nm), green (560 nm ± 16 nm), red (650 nm ± 16 nm), blue (450 nm ± 16 nm), and visible light (RGB) were also obtained by an unmanned aerial vehicle (UAV) with multispectral sensors (DJI-P4M, SZ DJI Technology Co., Ltd.). Based on the bands, five vegetation indexes (VI) including Green Normalized Difference Vegetation Index (GNDVI), Leaf Chlorophyll Index (LCI), Normalized Difference Red Edge Index (NDRE), Normalized Difference Vegetation Index (NDVI), and Optimized Soil-Adjusted Vegetation Index (OSAVI) were calculated. The semi-empirical model (SEM), the random forest model (RF), and the Extreme Gradient Boosting model (XGBoost) were used to estimate rice LAI based on multispectral bands, VIs, and their combinations, respectively. The results indicated that the GNDVI had the highest accuracy in the SEM (R2 = 0.78, RMSE = 0.77). For the single band, NIR had the highest accuracy in both RF (R2 = 0.73, RMSE = 0.98) and XGBoost (R2 = 0.77, RMSE = 0.88). The band combination of NIR + red improved the estimation accuracy in both RF (R2 = 0.87, RMSE = 0.65) and XGBoost (R2 = 0.88, RMSE = 0.63). NDRE and LCI were the two best-performing single VIs for LAI estimation using both RF and XGBoost. However, combining multiple VIs increased the LAI estimation accuracy only slightly. Meanwhile, the bands + VIs combinations improved the accuracy in both RF and XGBoost. Our study recommends estimating rice LAI by a combination of red + NIR + OSAVI + NDVI + GNDVI + LCI + NDRE (2B + 5V) with XGBoost to obtain high accuracy and overcome the potential over-fitting issue (R2 = 0.91, RMSE = 0.54). Full article
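The five vegetation indices named in this abstract can be computed directly from band reflectances. The abstract does not give the formulas, so the definitions below are the standard published ones (including the usual 0.16 soil-adjustment constant in OSAVI), stated here as an assumption:

```python
def vegetation_indices(nir, red, green, red_edge):
    """Compute the study's five vegetation indices from band reflectances
    (floats or NumPy arrays). Formulas follow their standard published
    definitions; a small eps guards against division by zero."""
    eps = 1e-9
    return {
        "NDVI":  (nir - red) / (nir + red + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "NDRE":  (nir - red_edge) / (nir + red_edge + eps),
        "LCI":   (nir - red_edge) / (nir + red + eps),
        "OSAVI": (nir - red) / (nir + red + 0.16 + eps),
    }
```

The recommended 2B + 5V feature set would then be the red and NIR reflectances plus these five index values per pixel or plot, fed to the XGBoost regressor.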