Search Results (110)

Search Parameters:
Keywords = NIR image colorization

26 pages, 6371 KiB  
Article
Growth Stages Discrimination of Multi-Cultivar Navel Oranges Using the Fusion of Near-Infrared Hyperspectral Imaging and Machine Vision with Deep Learning
by Chunyan Zhao, Zhong Ren, Yue Li, Jia Zhang and Weinan Shi
Agriculture 2025, 15(14), 1530; https://doi.org/10.3390/agriculture15141530 - 15 Jul 2025
Viewed by 228
Abstract
To noninvasively and precisely discriminate among the growth stages of multiple cultivars of navel oranges simultaneously, near-infrared (NIR) hyperspectral imaging (HSI) is fused with machine vision (MV) and deep learning. NIR reflectance spectra and hyperspectral and RGB images for 740 Gannan navel oranges of five cultivars are collected. Based on preprocessed spectra, optimally selected hyperspectral images, and registered RGB images, a dual-branch multi-modal feature fusion convolutional neural network (CNN) model is established. In this model, a spectral branch is designed to extract spectral features reflecting internal compositional variations, while the image branch is utilized to extract external color and texture features from the integration of hyperspectral and RGB images. Finally, growth stages are determined via the fusion of features. To validate the effectiveness of the proposed method, various machine-learning and deep-learning models are compared for single-modal and multi-modal data. The results demonstrate that multi-modal feature fusion of HSI and MV combined with the constructed dual-branch CNN deep-learning model yields excellent growth-stage discrimination in navel oranges, achieving an accuracy, recall rate, precision, F1 score, and kappa coefficient on the testing set of 95.95%, 96.66%, 96.76%, 96.69%, and 0.9481, respectively, providing a prominent way to precisely monitor the growth stages of fruits.
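A minimal sketch of the dual-branch idea described above (not the authors' code): a 1-D convolutional branch for NIR reflectance spectra and a 2-D branch for stacked hyperspectral/RGB image channels, concatenated before classification. The band count (256), image channel count (6), layer sizes, and number of growth stages (5) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualBranchFusionCNN(nn.Module):
    def __init__(self, n_bands=256, img_channels=6, n_stages=5):
        super().__init__()
        # Spectral branch: 1-D convolutions over the reflectance spectrum
        self.spectral = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (B, 32)
        )
        # Image branch: 2-D convolutions over fused HSI/RGB channels
        self.image = nn.Sequential(
            nn.Conv2d(img_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        # Growth-stage prediction from the fused feature vector
        self.classifier = nn.Linear(32 + 32, n_stages)

    def forward(self, spectra, images):
        fused = torch.cat([self.spectral(spectra), self.image(images)], dim=1)
        return self.classifier(fused)

model = DualBranchFusionCNN()
logits = model(torch.randn(4, 1, 256), torch.randn(4, 6, 64, 64))
print(logits.shape)  # torch.Size([4, 5])
```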

25 pages, 4184 KiB  
Article
Determination of Optimal Harvest Time in Cannabis sativa L. Based upon Stigma Color Transition
by Jonathan Tran, Adam M. Dimech, Simone Vassiliadis, Aaron C. Elkins, Noel O. I. Cogan, Erez Naim-Feil and Simone J. Rochfort
Plants 2025, 14(10), 1532; https://doi.org/10.3390/plants14101532 - 20 May 2025
Viewed by 1520
Abstract
Cannabis sativa L. is cultivated for therapeutic and recreational use. Delta-9 tetrahydrocannabinol (THC) and cannabidiol (CBD) are primarily responsible for its psychoactive and medicinal effects. As the global cannabis industry continues to expand, constant review and optimization of horticultural practices are needed to ensure a reliable harvest and improved crop quality. There is currently uncertainty about the optimal harvest time of C. sativa, i.e., when cannabinoid concentrations are at their highest during inflorescence maturation. At present, growers observe the color transition of stigmas from white to amber as an indicator of harvest time. This research investigates the relationship between stigma color and cannabinoid concentration using liquid chromatography–mass spectrometry (LCMS) and digital image analysis. Additionally, early screening prediction models have also been developed for six cannabinoids using near-infrared (NIR) spectroscopy and LCMS to assist in early cannabinoid determination. Among the genotypes grown, 22 of 25 showed cannabinoid concentration peaks between the third (mostly amber) and fourth (fully amber) stages; however, some genotypes peaked within the first (no amber) and second (some amber) stages. We have determined that the current ‘rule of thumb’ of harvesting when a cannabis plant is mostly amber is still a useful approximation in most cases; however, studies on individual genotypes should be performed to determine their individual optimal harvest time based on the desired cannabinoid profile or total cannabinoid concentration.
(This article belongs to the Section Plant Modeling)
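The stigma color transition lends itself to simple digital image analysis. A hedged sketch of one way to quantify it (not the study's pipeline): threshold amber-ish hues in HSV space and report the amber pixel fraction. The hue window, saturation/value cut-offs, and file name are assumptions that would need calibration against real images.

```python
import cv2
import numpy as np

def amber_fraction(bgr_image: np.ndarray) -> float:
    """Fraction of stigma pixels that read as amber rather than white."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # OpenCV hue runs 0-179; ~5-25 covers orange/amber tones (assumed range)
    amber = cv2.inRange(hsv, (5, 80, 60), (25, 255, 255))
    # White stigmas: low saturation, high value (assumed range)
    white = cv2.inRange(hsv, (0, 0, 180), (179, 60, 255))
    total = cv2.countNonZero(amber) + cv2.countNonZero(white)
    return cv2.countNonZero(amber) / total if total else 0.0

img = cv2.imread("inflorescence.jpg")  # placeholder path
print(f"amber stigma fraction: {amber_fraction(img):.2%}")
```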

39 pages, 14246 KiB  
Article
Comparison of PlanetScope and Sentinel-2 Spectral Channels and Their Alignment via Linear Regression for Enhanced Index Derivation
by Christian Massimiliano Baldin and Vittorio Marco Casella
Geosciences 2025, 15(5), 184; https://doi.org/10.3390/geosciences15050184 - 20 May 2025
Viewed by 1852
Abstract
Prior research has shown that, for specific periods, vegetation indices from PlanetScope and Sentinel-2 (used as a reference) must be aligned to benefit from the experience of Sentinel-2 and to utilize techniques such as data fusion. Even in the worst-case scenario, histogram matching can calibrate PlanetScope indices to achieve the same values as Sentinel-2 (useful also as a proxy). Based on these findings, the authors examined the effectiveness of linear regression in aligning individual bands prior to computing indices, to determine whether the bands are shifted differently. The research was conducted on five important bands: Red, Green, Blue, NIR, and RedEdge. These bands allow for the computation of well-known vegetation indices like NDVI and NDRE, and soil indices like the Iron Oxide Ratio and Coloration Index. Previous research showed that linear regression is not sufficient by itself to align indices in the worst-case scenario. However, this paper demonstrates its efficiency in achieving accurate band alignment. This finding highlights the importance of considering specific scaling requirements for bands obtained from different satellite sensors, such as PlanetScope and Sentinel-2. Contemporaneous images acquired by the two sensors during May and July demonstrated different behaviors in their bands; however, linear regression can align the datasets even during the problematic month of May.
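The band-alignment step is simple enough to sketch: fit a per-band linear regression mapping co-located PlanetScope reflectance to Sentinel-2 reflectance, then compute indices from the aligned band. The gain-plus-offset form follows the paper's stated approach; the sample data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def align_band(planet_band, sentinel_band):
    """Fit a PlanetScope -> Sentinel-2 mapping for one band (gain + offset)."""
    reg = LinearRegression().fit(planet_band.reshape(-1, 1),
                                 sentinel_band.ravel())
    return reg.coef_[0], reg.intercept_

# Co-registered pixel samples for one band pair (synthetic example)
rng = np.random.default_rng(0)
ps_red = rng.uniform(0.02, 0.3, 1000)
s2_red = 1.05 * ps_red + 0.01 + rng.normal(0, 0.005, 1000)

gain, offset = align_band(ps_red, s2_red)
ps_red_aligned = gain * ps_red + offset  # use this band to derive indices
print(f"gain={gain:.3f}, offset={offset:.4f}")
```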

18 pages, 9301 KiB  
Article
Adapting SAM for Visible-Light Pupil Segmentation Baseline
by Oded Milman, Dovi Yellin and Yehudit Aperstein
Electronics 2025, 14(9), 1850; https://doi.org/10.3390/electronics14091850 - 1 May 2025
Viewed by 639
Abstract
Pupil segmentation in visible-light (RGB) images presents unique challenges due to variable lighting conditions, diverse eye colors, and poor contrast between iris and pupil, particularly in individuals with dark irises. While near-infrared (NIR) imaging has been the traditional solution for eye-tracking systems, the accessibility and practicality of RGB-based solutions make them attractive for widespread adoption in consumer devices. This paper presents a baseline for RGB pupil segmentation by adapting the Segment Anything Model (SAM). We introduce a multi-stage fine-tuning approach that leverages SAM’s exceptional generalization capabilities, further enhancing its inherent capacity for accurate pupil segmentation. The staged approach consists of SAM-BaseIris for enhanced iris detection, SAM-RefinedIris for improving iris segmentation with automated bounding box prompts, and SAM-RefinedPupil for precise pupil segmentation. Our method was evaluated on three standard visible-light datasets: UBIRIS.v2, I-Social DB, and MICHE-I. The results demonstrate robust performance across diverse lighting conditions and eye colors. Our method achieves near-SOTA results for iris segmentation and attains mean mIoU and Dice scores of 79.37 and 87.79, respectively, for pupil segmentation across the evaluated datasets. This work establishes a strong foundation for RGB-based eye-tracking systems and demonstrates the potential of adapting foundation models for specialized medical imaging tasks.
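A hedged sketch of the box-prompted inference step the staged pipeline builds on, using the public Segment Anything predictor. The checkpoint path, image file, and box coordinates are placeholders, and this uses off-the-shelf weights, not the paper's fine-tuned models.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a stock SAM checkpoint (placeholder path, not the fine-tuned weights)
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# SAM expects RGB input
image = cv2.cvtColor(cv2.imread("eye.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Automated iris bounding box would come from the previous stage;
# coordinates here are illustrative (x0, y0, x1, y1)
iris_box = np.array([120, 80, 260, 210])
masks, scores, _ = predictor.predict(box=iris_box, multimask_output=False)
print(masks.shape, scores)  # (1, H, W) boolean mask and its confidence
```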

36 pages, 26652 KiB  
Article
Low-Light Image Enhancement for Driving Condition Recognition Through Multi-Band Images Fusion and Translation
by Dong-Min Son and Sung-Hak Lee
Mathematics 2025, 13(9), 1418; https://doi.org/10.3390/math13091418 - 25 Apr 2025
Viewed by 514
Abstract
When objects are obscured by shadows or dim surroundings, image quality is improved by fusing near-infrared and visible-light images. At night, when visible and NIR light is insufficient, long-wave infrared (LWIR) imaging can be utilized, necessitating the attachment of a visible-light sensor to an LWIR camera to simultaneously capture both LWIR and visible-light images. This camera configuration enables the acquisition of infrared images at various wavelengths depending on the time of day. To effectively fuse clear visible regions from the visible-light spectrum with those from the LWIR spectrum, a multi-band fusion method is proposed. The proposed fusion process combines detailed information from infrared and visible-light images, enhancing object visibility. Additionally, this process compensates for color differences in visible-light images, resulting in a natural and visually consistent output. The fused images are further enhanced using a night-to-day image translation module, which improves overall brightness and reduces noise. This translation module is a trained CycleGAN-based model that adjusts object brightness in nighttime images to levels comparable to daytime images. The effectiveness and superiority of the proposed method are validated using image quality metrics, achieving the best average scores compared to other methods, with a BRISQUE of 30.426 and a PIQE of 22.186. This study improves the accuracy of human and object recognition in CCTV systems and provides a potential image-processing tool for autonomous vehicles.
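A simplified sketch of the multi-band fusion idea, not the paper's exact method: keep the visible image's base layer and color, and inject the stronger local detail from either the visible luminance or the LWIR image. The blur kernel size and the max-detail selection rule are illustrative choices, and co-registered, same-size inputs are assumed.

```python
import cv2
import numpy as np

# Assumes visible.png and lwir.png are co-registered and the same size
vis = cv2.imread("visible.png").astype(np.float32) / 255.0
lwir = cv2.imread("lwir.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Work on luminance so the visible image's color is preserved
ycrcb = cv2.cvtColor(vis, cv2.COLOR_BGR2YCrCb)
y = ycrcb[..., 0]

# Split each source into a smooth base layer and a detail layer
base_y = cv2.GaussianBlur(y, (31, 31), 0)
detail_y = y - base_y
detail_ir = lwir - cv2.GaussianBlur(lwir, (31, 31), 0)

# Take whichever source has the stronger detail at each pixel
fused_detail = np.where(np.abs(detail_ir) > np.abs(detail_y),
                        detail_ir, detail_y)
ycrcb[..., 0] = np.clip(base_y + fused_detail, 0, 1)
fused = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
cv2.imwrite("fused.png", (fused * 255).astype(np.uint8))
```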

16 pages, 3114 KiB  
Article
Exploring Effects of Mental Stress with Data Augmentation and Classification Using fNIRS
by M. N. Afzal Khan, Nada Zahour, Usman Tariq, Ghinwa Masri, Ismat F. Almadani and Hasan Al-Nashah
Sensors 2025, 25(2), 428; https://doi.org/10.3390/s25020428 - 13 Jan 2025
Cited by 1 | Viewed by 1355
Abstract
Accurately identifying and discriminating between different brain states is a major emphasis of functional brain imaging research. Various machine learning techniques play an important role in this regard. However, when working with a small number of study participants, the lack of sufficient data makes achieving meaningful classification results a challenge. In this study, we employ a classification strategy to explore stress and its impact on spatial activation patterns and brain connectivity caused by the Stroop color–word task (SCWT). To improve our results and enlarge our dataset, we use data augmentation with a deep convolutional generative adversarial network (DCGAN). The study is carried out at two separate times of day (morning and evening) and involves 21 healthy participants. Additionally, we introduce binaural beats (BBs) stimulation to investigate its potential for stress reduction. The morning session includes a control phase with 10 SCWT trials, whereas the afternoon session is divided into three phases: stress, mitigation (with 16 Hz BB stimulation), and post-mitigation, each with 10 SCWT trials. For a comprehensive evaluation, the acquired fNIRS data are classified using a variety of machine-learning approaches. Linear discriminant analysis (LDA) showed a maximum accuracy of 60%, whereas non-augmented data classified by a convolutional neural network (CNN) provided the highest classification accuracy of 73%. Notably, after augmenting the data with DCGAN, the classification accuracy increases dramatically to 96%. In the time series data, statistically significant differences were observed before and after BB stimulation, indicating an improvement in brain state, in line with the classification results. These findings illustrate the ability to detect changes in brain states with high accuracy using fNIRS, underline the need for larger datasets, and demonstrate that data augmentation can significantly help when data are scarce in the case of brain signals.
(This article belongs to the Section Biomedical Sensors)
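For readers unfamiliar with DCGAN-style augmentation of 1-D physiological signals, here is a hedged sketch of what a generator for fNIRS-like windows could look like. The study's actual architecture is not reproduced here; channel counts, latent size, and the 128-sample window length are assumptions.

```python
import torch
import torch.nn as nn

class Generator1D(nn.Module):
    """Maps a latent vector to a synthetic 128-sample signal window."""
    def __init__(self, latent_dim=64, n_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector -> 16-sample feature map
            nn.ConvTranspose1d(latent_dim, 128, kernel_size=16), nn.ReLU(),
            nn.ConvTranspose1d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm1d(64), nn.ReLU(),   # 32 samples
            nn.ConvTranspose1d(64, 32, 4, stride=2, padding=1),
            nn.BatchNorm1d(32), nn.ReLU(),   # 64 samples
            nn.ConvTranspose1d(32, n_channels, 4, stride=2, padding=1),
            nn.Tanh(),                       # 128 samples in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

g = Generator1D()
fake = g(torch.randn(8, 64, 1))  # (batch, latent_dim, length=1)
print(fake.shape)                # torch.Size([8, 1, 128])
```

Synthetic windows from a trained generator of this shape would be appended to the real training set before classification, which is the augmentation pattern the abstract describes.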

20 pages, 7144 KiB  
Article
A Study of NOAA-20 VIIRS Band M1 (0.41 µm) Striping over Clear-Sky Ocean
by Wenhui Wang, Changyong Cao, Slawomir Blonski and Xi Shao
Remote Sens. 2025, 17(1), 74; https://doi.org/10.3390/rs17010074 - 28 Dec 2024
Cited by 3 | Viewed by 844
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the National Oceanic and Atmospheric Administration-20 (NOAA-20) satellite was launched on 18 November 2017. The on-orbit calibration of the NOAA-20 VIIRS visible and near-infrared (VisNIR) bands has been very stable over time. However, NOAA-20 operational M1 (a dual-gain band with a center wavelength of 0.41 µm) sensor data records (SDR) have exhibited persistent scene-dependent striping over clear-sky ocean (high gain, low radiance) since the beginning of the mission, unlike the other VisNIR bands. This paper studies the root causes of the striping in the operational NOAA-20 M1 SDRs. Two potential factors were analyzed: (1) polarization effect-induced striping over clear-sky ocean and (2) imperfect on-orbit radiometric calibration-induced striping. NOAA-20 M1 is more sensitive to polarized light than the other NOAA-20 short-wavelength bands and the similar bands on the Suomi NPP and NOAA-21 VIIRS, with detector- and scan-angle-dependent polarization sensitivity up to ~6.4%. The VIIRS M1 top-of-atmosphere radiance is dominated by Rayleigh scattering over clear-sky ocean and can be up to ~70% polarized. In this study, the impact of the polarization effect on M1 striping was investigated using radiative transfer simulation and a polarization correction method similar to that developed by the NOAA ocean color team. Our results indicate that the prelaunch-measured polarization sensitivity and the polarization correction method work well and can effectively reduce striping over clear-sky ocean scenes by up to ~2% in near-nadir zones. Moreover, no significant change in NOAA-20 M1 polarization sensitivity was observed in the data analyzed in this study. After correction of the polarization effect, residual M1 striping over clear-sky ocean suggests that there is half-angle mirror (HAM)-side- and detector-dependent striping, which may be caused by on-orbit radiometric calibration errors. HAM-side- and detector-dependent striping correction factors were analyzed using deep convective cloud (DCC) observations (low gain, high radiance) and verified over the homogeneous Libya-4 desert site (low gain, mid-level radiance); neither is significantly affected by the polarization effect. The imperfect on-orbit radiometric calibration-induced striping in the NOAA-20 operational M1 SDR has been relatively stable over time. After correction of the polarization effect, the DCC-based striping correction factors can further reduce striping over clear-sky ocean scenes by ~0.5%. The polarization correction method used in this study is only effective over clear-sky ocean scenes that are dominated by Rayleigh scattering radiance. The DCC-based striping correction factors work well at all radiance levels; therefore, they can be deployed operationally to improve the quality of NOAA-20 M1 SDRs.
(This article belongs to the Collection The VIIRS Collection: Calibration, Validation, and Application)
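The polarization correction can be sketched as dividing the measured radiance by a factor built from the sensor's reduced Mueller elements and the modeled Stokes components of the Rayleigh-scattered light. This follows the general ocean-color correction form rather than the paper's exact implementation, and every number below is illustrative, not a NOAA coefficient.

```python
import numpy as np

def polarization_corrected(lt_measured, m12, m13, q_over_i, u_over_i):
    """Remove the sensor's polarization response from measured TOA radiance.

    m12, m13: reduced Mueller elements (per detector / scan angle, assumed)
    q_over_i, u_over_i: Stokes ratios from a radiative-transfer model
    """
    factor = 1.0 + m12 * q_over_i + m13 * u_over_i
    return lt_measured / factor

lt = np.array([52.1, 49.8, 50.6])   # measured M1 radiances (example values)
q_i, u_i = 0.55, 0.12               # Rayleigh-dominated polarization state
print(polarization_corrected(lt, m12=0.05, m13=-0.02,
                             q_over_i=q_i, u_over_i=u_i))
```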

16 pages, 9116 KiB  
Article
Cross-Modal Feature Fusion for Field Weed Mapping Using RGB and Near-Infrared Imagery
by Xijian Fan, Chunlei Ge, Xubing Yang and Weice Wang
Agriculture 2024, 14(12), 2331; https://doi.org/10.3390/agriculture14122331 - 19 Dec 2024
Cited by 2 | Viewed by 1082
Abstract
The accurate mapping of weeds in agricultural fields is essential for effective weed control and enhanced crop productivity. Moving beyond the limitations of RGB imagery alone, this study presents a cross-modal feature fusion network (CMFNet) designed for precise weed mapping by integrating RGB and near-infrared (NIR) imagery. CMFNet first applies color space enhancement and adaptive histogram equalization to improve image brightness and contrast in both RGB and NIR images. Building on a Transformer-based segmentation framework, a cross-modal multi-scale feature enhancement module is then introduced, featuring spatial and channel feature interaction to automatically capture complementary information across the two modalities. The enhanced features are further fused and refined by integrating an attention mechanism, which reduces background interference and enhances segmentation accuracy. Extensive experiments conducted on two public datasets, the Sugar Beets 2016 and Sunflower datasets, demonstrate that CMFNet significantly outperforms CNN-based segmentation models in the task of weed and crop segmentation. The model achieved an Intersection over Union (IoU) of 90.86% and 90.77%, along with a Mean Accuracy (mAcc) of 93.8% and 94.35%, on the two datasets, respectively. Ablation studies further validate that the proposed cross-modal fusion method provides substantial improvements over basic feature fusion methods, effectively localizing weed and crop regions across diverse field conditions. These findings underscore its potential as a robust solution for precise and adaptive weed mapping in complex agricultural landscapes.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
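A minimal sketch in the spirit of CMFNet's channel-level cross-modal interaction (not the authors' implementation): each modality's feature map is reweighted by the global channel statistics of the other before the two are fused. The channel count is an assumption.

```python
import torch
import torch.nn as nn

class CrossModalChannelFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate_rgb = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.gate_nir = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, f_rgb, f_nir):
        b, c, _, _ = f_rgb.shape
        # Gate each modality with the global channel statistics of the other,
        # so complementary information steers the fusion
        w_rgb = self.gate_rgb(self.pool(f_nir).view(b, c)).view(b, c, 1, 1)
        w_nir = self.gate_nir(self.pool(f_rgb).view(b, c)).view(b, c, 1, 1)
        return f_rgb * w_rgb + f_nir * w_nir  # fused feature map

fusion = CrossModalChannelFusion()
out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```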

16 pages, 41766 KiB  
Article
Methodology for Removing Striping Artifacts Encountered in Planet SuperDove Ocean-Color Products
by Brittney Slocum, Sherwin Ladner, Adam Lawson, Mark David Lewis and Sean McCarthy
Remote Sens. 2024, 16(24), 4707; https://doi.org/10.3390/rs16244707 - 17 Dec 2024
Viewed by 1035
Abstract
The Planet SuperDove sensors produce eight-band, three-meter resolution images covering the blue, green, red, red-edge, and NIR spectral bands. Variations in spectral response in the data used to perform atmospheric correction combined with low signal-to-noise over ocean waters can lead to visible striping artifacts in the downstream ocean-color products. It was determined that the striping artifacts could be removed from these products by filtering the top of the atmosphere radiance in the red and NIR bands prior to selecting the aerosol models, without sacrificing high-resolution features in the imagery. This paper examines an approach that applies this filtering to the respective bands as a preprocessing step. The outcome and performance of this filtering technique are examined to assess the success of removing the striping effect in atmospherically corrected Planet SuperDove data.
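The preprocessing step can be sketched as smoothing only the red and NIR top-of-atmosphere radiance before aerosol model selection, leaving the other bands untouched. The median filter and its window size are assumptions; the paper's exact filter is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_red_nir(toa: dict, size: int = 5) -> dict:
    """toa maps band name -> 2-D TOA radiance array.

    Only the bands used for aerosol model selection are smoothed.
    """
    out = dict(toa)
    for band in ("red", "nir"):
        out[band] = median_filter(toa[band], size=size)
    return out

# Synthetic stand-in for an eight-band scene (four bands shown)
toa = {b: np.random.rand(512, 512) for b in ("blue", "green", "red", "nir")}
toa_filtered = filter_red_nir(toa)  # feed this to atmospheric correction
```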

17 pages, 3419 KiB  
Article
Breath-Holding as a Stimulus to Assess Peripheral Oxygenation Flow Using Near-Infrared Spectroscopic Imaging
by Kevin Leiva, Isabella Gonzalez, Juan Murillo, Aliette Espinosa, Robert S. Kirsner and Anuradha Godavarty
Bioengineering 2024, 11(12), 1221; https://doi.org/10.3390/bioengineering11121221 - 3 Dec 2024
Viewed by 962
Abstract
The mammalian breath-hold (BH) mechanism can induce vasoconstriction in the limbs, altering blood flow and oxygenation at a wound site. Our objective was to utilize a BH paradigm as a stimulus to induce peripheral tissue oxygenation changes via studies on control and diabetic foot ulcer (DFU) subjects. Subjects were imaged under a breath-hold paradigm (including a 20 s BH) using a non-contact, spatio-temporal NIRS device. Oxygenated flow changes were similar between darker and lighter skin colors but differed between the wound site and normal background tissue. Thus, the ability of the peripheral vasculature to respond to oxygenation demand can be assessed in DFUs.
(This article belongs to the Special Issue Optical Imaging for Biomedical Applications)

26 pages, 23777 KiB  
Article
Performance Assessment of Landsat-9 Atmospheric Correction Methods in Global Aquatic Systems
by Aoxiang Sun, Shuangyan He, Yanzhen Gu, Peiliang Li, Cong Liu, Guanqiong Ye and Feng Zhou
Remote Sens. 2024, 16(23), 4517; https://doi.org/10.3390/rs16234517 - 2 Dec 2024
Cited by 2 | Viewed by 1508
Abstract
The latest satellite in the Landsat series, Landsat-9, was successfully launched on 27 September 2021, equipped with the Operational Land Imager-2 (OLI-2) sensor, continuing the legacy of OLI/Landsat-8. To evaluate the uncertainties in water surface reflectance derived from OLI-2, this study conducts a comprehensive performance assessment of six atmospheric correction (AC) methods—DSF, C2RCC, iCOR, L2gen (NIR-SWIR1), L2gen (NIR-SWIR2), and Polymer—using in-situ measurements from 14 global sites, including 13 AERONET-OC stations and 1 MOBY station, collected between 2021 and 2023. Error analysis shows that L2gen (NIR-SWIR1) (RMSE ≤ 0.0017 sr⁻¹, SA = 6.33°) and L2gen (NIR-SWIR2) (RMSE ≤ 0.0019 sr⁻¹, SA = 6.38°) provide the best results across four visible bands, demonstrating stable performance across different optical water types (OWTs) ranging from clear to turbid water. Following these are C2RCC (RMSE ≤ 0.0030 sr⁻¹, SA = 5.74°) and Polymer (RMSE ≤ 0.0027 sr⁻¹, SA = 7.76°), with DSF (RMSE ≤ 0.0058 sr⁻¹, SA = 11.33°) and iCOR (RMSE ≤ 0.0051 sr⁻¹, SA = 12.96°) showing the poorest results. By comparing the uncertainty and consistency of Landsat-9 (OLI-2) with Sentinel-2A/B (MSI) and S-NPP/NOAA20 (VIIRS), results show that OLI-2 has similar uncertainties to MSI and VIIRS in the blue, blue-green, and green bands, with RMSE differences within 0.0002 sr⁻¹. In the red band, the OLI-2 uncertainties are lower than those of MSI but higher than those of VIIRS, with an RMSE difference of about 0.0004 sr⁻¹. Overall, OLI-2 data processed using L2gen provide reliable surface reflectance and show high consistency with MSI and VIIRS, making it suitable for integrating multi-satellite observations to enhance global coastal water color monitoring.
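The two error measures quoted above have standard definitions, sketched below: per-band RMSE against in-situ reflectance, and the spectral angle (SA, in degrees) between satellite and in-situ spectra. The sample arrays are illustrative values, not data from the study.

```python
import numpy as np

def rmse(sat, insitu):
    """Root-mean-square error between matched reflectance samples."""
    return np.sqrt(np.mean((np.asarray(sat) - np.asarray(insitu)) ** 2))

def spectral_angle_deg(sat, insitu):
    """Angle between two reflectance spectra treated as vectors."""
    sat, insitu = np.asarray(sat), np.asarray(insitu)
    cos = np.dot(sat, insitu) / (np.linalg.norm(sat) * np.linalg.norm(insitu))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

sat    = np.array([0.0051, 0.0063, 0.0042, 0.0009])  # Rrs over 4 bands
insitu = np.array([0.0048, 0.0060, 0.0045, 0.0011])
print(rmse(sat, insitu), spectral_angle_deg(sat, insitu))
```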

27 pages, 4071 KiB  
Review
Advances in Emerging Non-Destructive Technologies for Detecting Raw Egg Freshness: A Comprehensive Review
by Elsayed M. Atwa, Shaomin Xu, Ahmed K. Rashwan, Asem M. Abdelshafy, Gamal ElMasry, Salim Al-Rejaie, Haixiang Xu, Hongjian Lin and Jinming Pan
Foods 2024, 13(22), 3563; https://doi.org/10.3390/foods13223563 - 7 Nov 2024
Cited by 2 | Viewed by 3065
Abstract
Eggs are a rich food source of proteins, fats, vitamins, minerals, and other nutrients. However, the egg industry faces some challenges such as microbial invasion due to environmental factors, leading to damage and reduced usability. Therefore, detecting the freshness of raw eggs using various technologies, including traditional and non-destructive methods, can overcome these challenges. As the traditional methods of assessing egg freshness are often subjective and time-consuming, modern non-destructive technologies, including near-infrared (NIR) spectroscopy, Raman spectroscopy, fluorescence spectroscopy, computer vision (color imaging), hyperspectral imaging, electronic noses, and nuclear magnetic resonance, have offered objective and rapid results to address these limitations. The current review summarizes and discusses the recent advances and developments in applying non-destructive technologies for detecting raw egg freshness. Some of these technologies such as NIR spectroscopy, computer vision, and hyperspectral imaging have achieved an accuracy of more than 96% in detecting egg freshness. Therefore, this review provides an overview of the current trends in the state-of-the-art non-destructive technologies recently utilized in detecting the freshness of raw eggs. This review can contribute significantly to the field of emerging technologies in this research track and pique the interests of both food scientists and industry professionals.
(This article belongs to the Section Food Engineering and Technology)

21 pages, 8602 KiB  
Review
From Outside to Inside: The Subtle Probing of Globular Fruits and Solanaceous Vegetables Using Machine Vision and Near-Infrared Methods
by Junhua Lu, Mei Zhang, Yongsong Hu, Wei Ma, Zhiwei Tian, Hongsen Liao, Jiawei Chen and Yuxin Yang
Agronomy 2024, 14(10), 2395; https://doi.org/10.3390/agronomy14102395 - 16 Oct 2024
Viewed by 1789
Abstract
Machine vision and near-infrared light technology are widely used in fruit and vegetable grading as important means of agricultural non-destructive testing. These two technologies can readily and automatically distinguish characteristics of fruits and vegetables such as appearance, shape, color, and texture. Combined with image processing and pattern recognition, non-destructive testing can support the identification and grading of both single features and fused features in production. A summary and analysis of fruit and vegetable grading technology from the past five years shows that the accuracy of machine vision for size grading is 70–99.8%, the accuracy of external defect grading is 88–95%, and the accuracy of NIR and hyperspectral internal-detection grading is 80.56–100%. Comprehensive research on multi-feature fusion technology can provide guidance for the construction of automatic, integrated grading of fruits and vegetables, which is the main research direction for fruit and vegetable grading in the future.

9 pages, 5763 KiB  
Article
Longitudinal Structural and Functional Evaluation of Dark-without-Pressure Fundus Lesions in Patients with Autoimmune Diseases
by Marco Lombardo, Federico Ricci, Andrea Cusumano, Benedetto Falsini, Carlo Nucci and Massimo Cesareo
Diagnostics 2024, 14(20), 2289; https://doi.org/10.3390/diagnostics14202289 - 15 Oct 2024
Cited by 1 | Viewed by 969
Abstract
Objectives: The main objective of this study was to report and investigate the characteristics and longitudinal changes in dark-without-pressure (DWP) fundus lesions in patients with autoimmune diseases using multimodal imaging techniques. Methods: In this retrospective observational case series, five patients affected by ocular and systemic autoimmune disorders and DWP were examined. DWP was assessed by multimodal imaging, including color fundus photography (CFP), near-infrared reflectance (NIR), blue reflectance (BR), blue autofluorescence (BAF), optical coherence tomography (OCT), OCT-angiography (OCT-A), fluorescein angiography (FA) and indocyanine green angiography (ICGA), and functional testing, including standard automated perimetry (SAP) and electroretinography (ERG). Follow-up examinations were performed for four out of five patients (range: 6 months–7 years). Results: DWP fundus lesions were found in the retinal mid-periphery and were characterized by the hypo-reflectivity of the ellipsoid zone on OCT. DWP appeared hypo-reflective in NIR, BR and BAF, and exhibited hypo-fluorescence in FA in two patients while showing no signs in one patient. ICGA showed hypo-fluorescent margins in one patient. SAP and ERG testing did not show alterations attributable to the DWP lesion. Follow-up examinations documented rapid dimensional changes in DWP even in the short term (1 month). Conclusions: This study suggests a possible association between autoimmune diseases and DWP. New FA and ICGA features were described. The proposed pathogenesis hypotheses may operate as a basis for further investigation of a lesion that is still largely unknown. Large population studies would be necessary to confirm whether there is a higher incidence of DWP in this patient category.
(This article belongs to the Special Issue Vitreo-Retinal Disorders: Pathophysiology and Diagnostic Imaging)

15 pages, 2954 KiB  
Review
Rapid Analysis of Soil Organic Carbon in Agricultural Lands: Potential of Integrated Image Processing and Infrared Spectroscopy
by Nelundeniyage Sumuduni L. Senevirathne and Tofael Ahamed
AgriEngineering 2024, 6(3), 3001-3015; https://doi.org/10.3390/agriengineering6030172 - 20 Aug 2024
Viewed by 1949
Abstract
The significance of soil in the agricultural industry is profound, with healthy soil playing an important role in ensuring food security. In addition, soil is the largest terrestrial carbon sink on earth. The soil carbon pool is composed of both inorganic and organic forms. The equilibrium of the soil carbon pool directly impacts the carbon cycle and, through it, other processes on the planet. With the development of agricultural systems from traditional to conventional ones, and with the current era of precision agriculture, in which decisions are made based on information, the importance of understanding soil is becoming increasingly clear. The control of microenvironment conditions and soil fertility represents a key factor in achieving higher productivity in these systems. Furthermore, agriculture is a significant contributor to carbon emissions, a topic that has become timely given the necessity for carbon neutrality. In addition to these concerns, updating soil-related data, including information on macro- and micronutrient conditions, is important. Carbon is one of the major nutrients for crops and plays a key role in the retention and release of other nutrients and the management of soil physical properties. Despite the importance of carbon, existing analytical methods are complex and expensive. This discourages frequent analyses, which results in a lack of soil carbon-related data for agricultural fields. From this perspective, in situ soil organic carbon (SOC) analysis can provide timely management information for calibrating fertilizer applications based on the soil–carbon relationship to increase soil productivity. In addition, the available data need frequent updates due to rapid changes in ecosystem services and the extensive use of fertilizers and pesticides. Despite the importance of this topic, few studies have investigated the potential of image analysis based on image processing and spectral data recording. The use of spectroscopy and visual color matching to develop SOC predictions has been considered, and the use of spectroscopic instruments has led to increased precision. Our extensive literature review shows that color models, especially Munsell color charts, are better suited for qualitative purposes and that Cartesian-type color models are appropriate for quantification. Even for color models, spectroscopy data can be used and have the potential to improve the precision of measurements. On the other hand, mid-infrared (MIR) and near-infrared (NIR) diffuse reflectance has been reported to have a greater ability to predict SOC. Finally, this article reports the availability of inexpensive portable instruments that can enable in situ SOC analysis from reflection and emission information through the integration of images and spectroscopy. This integration couples machine-learning algorithms with a reflection-oriented spectrophotometer and emission-based thermal images, which together have the potential to predict SOC without expensive instruments and are easy to use in farm applications.
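As a concrete example of the spectroscopy-to-SOC modeling this review surveys, a common baseline (not the review's own method) is partial least squares regression from NIR reflectance spectra to SOC. The data below are synthetic; wavelength count, component count, and the SOC relationship are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 300))   # 200 soil samples x 300 NIR wavelengths
# Synthetic SOC (%) driven by two absorption features plus noise
soc = X[:, 50] * 3.0 + X[:, 120] * 1.5 + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, soc, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
print(f"R^2 on held-out spectra: {pls.score(X_te, y_te):.2f}")
```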
