Search Results (125)

Search Parameters:
Keywords = RGB saturation

21 pages, 1383 KiB  
Article
Enhancing Underwater Images with LITM: A Dual-Domain Lightweight Transformer Framework
by Wang Hu, Zhuojing Rong, Lijun Zhang, Zhixiang Liu, Zhenhua Chu, Lu Zhang, Liping Zhou and Jingxiang Xu
J. Mar. Sci. Eng. 2025, 13(8), 1403; https://doi.org/10.3390/jmse13081403 - 23 Jul 2025
Abstract
Underwater image enhancement (UIE) technology plays a vital role in marine resource exploration, environmental monitoring, and underwater archaeology. However, due to the absorption and scattering of light in underwater environments, images often suffer from blurred details, color distortion, and low contrast, which seriously affect the usability of underwater images. To address the above limitations, a lightweight transformer-based model (LITM) is proposed for improving underwater degraded images. Firstly, our proposed method utilizes a lightweight RGB transformer enhancer (LRTE) that uses efficient channel attention blocks to capture local detail features in the RGB domain. Subsequently, a lightweight HSV transformer encoder (LHTE) is utilized to extract global brightness, color, and saturation from the hue–saturation–value (HSV) domain. Finally, we propose a multi-modal integration block (MMIB) to effectively fuse enhanced information from the RGB and HSV pathways, as well as the input image. Our proposed LITM method significantly outperforms state-of-the-art methods, achieving a peak signal-to-noise ratio (PSNR) of 26.70 and a structural similarity index (SSIM) of 0.9405 on the LSUI dataset. Furthermore, the designed method also exhibits good generality and adaptability on unpaired datasets. Full article
(This article belongs to the Section Ocean Engineering)
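The PSNR of 26.70 reported above follows the standard peak signal-to-noise definition. As a reference point, here is a minimal sketch of that metric (the function name `psnr` and the toy images are ours, not from the paper):

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 200.0)
out = np.full((4, 4), 190.0)     # uniform error of 10 -> MSE = 100
print(round(psnr(ref, out), 2))  # 10*log10(255^2/100) = 28.13
```

Higher is better; the 26.70 figure means the enhanced LSUI images deviate only slightly from their references on average.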

26 pages, 3771 KiB  
Article
BGIR: A Low-Illumination Remote Sensing Image Restoration Algorithm with ZYNQ-Based Implementation
by Zhihao Guo, Liangliang Zheng and Wei Xu
Sensors 2025, 25(14), 4433; https://doi.org/10.3390/s25144433 - 16 Jul 2025
Abstract
When a CMOS (Complementary Metal–Oxide–Semiconductor) imaging system operates at a high frame rate or a high line rate, the exposure time of the imaging system is limited, and the acquired image data will be dark, with a low signal-to-noise ratio and unsatisfactory sharpness. Therefore, in order to improve the visibility and signal-to-noise ratio of remote sensing images based on CMOS imaging systems, this paper proposes a low-light remote sensing image enhancement method and a corresponding ZYNQ (Zynq-7000 All Programmable SoC) design scheme called the BGIR (Bilateral-Guided Image Restoration) algorithm, which uses an improved multi-scale Retinex algorithm in the HSV (hue–saturation–value) color space. First, the RGB image is used to separate the original image’s H, S, and V components. Then, the V component is processed using the improved algorithm based on bilateral filtering. The image is then adjusted using the gamma correction algorithm to make preliminary adjustments to the brightness and contrast of the whole image, and the S component is processed using segmented linear enhancement to obtain the base layer. The algorithm is also deployed to ZYNQ using ARM + FPGA software synergy, reasonably allocating each algorithm module and accelerating the algorithm by using a lookup table and constructing a pipeline. The experimental results show that the proposed method improves processing speed by nearly 30 times while maintaining the recovery effect, which has the advantages of fast processing speed, miniaturization, embeddability, and portability. Following the end-to-end deployment, the processing speeds for resolutions of 640 × 480 and 1280 × 720 are shown to reach 80 fps and 30 fps, respectively, thereby satisfying the performance requirements of the imaging system. Full article
(This article belongs to the Section Remote Sensors)
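The gamma-correction step applied to the V component can be sketched as follows; the gamma value and function name are illustrative assumptions, and the full BGIR pipeline additionally involves bilateral-filtered multi-scale Retinex on V and segmented linear enhancement on S:

```python
import numpy as np

def gamma_correct_v(v_channel, gamma=0.5):
    """Apply gamma correction to the HSV value channel (values in [0, 1]).
    gamma < 1 brightens dark regions, as needed for low-light frames."""
    return np.clip(v_channel, 0.0, 1.0) ** gamma

v = np.array([0.04, 0.25, 1.0])
print(gamma_correct_v(v))  # dark values 0.04 and 0.25 are lifted to 0.2 and 0.5
```

On hardware, the paper replaces this power function with a lookup table, which is why the ZYNQ deployment reaches real-time rates.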

20 pages, 3340 KiB  
Article
Infrared Monocular Depth Estimation Based on Radiation Field Gradient Guidance and Semantic Priors in HSV Space
by Rihua Hao, Chao Xu and Chonghao Zhong
Sensors 2025, 25(13), 4022; https://doi.org/10.3390/s25134022 - 27 Jun 2025
Abstract
Monocular depth estimation (MDE) has emerged as a powerful technique for extracting scene depth from a single image, particularly in the context of computational imaging. Conventional MDE methods based on RGB images often degrade under varying illuminations. To overcome this, an end-to-end framework is developed that leverages the illumination-invariant properties of infrared images for accurate depth estimation. Specifically, a multi-task UNet architecture was designed to perform gradient extraction, semantic segmentation, and texture reconstruction from infrared RAW images. To strengthen structural learning, a Radiation Field Gradient Guidance (RGG) module was incorporated, enabling edge-aware attention mechanisms. The gradients, semantics, and textures were mapped to the Saturation (S), Hue (H), and Value (V) channels in the HSV color space, subsequently converted into an RGB format for input into the depth estimation network. Additionally, a sky mask loss was introduced during training to mitigate the influence of ambiguous sky regions. Experimental validation on a custom infrared dataset demonstrated high accuracy, achieving a δ1 of 0.976. These results confirm that integrating radiation field gradient guidance and semantic priors in HSV space significantly enhances depth estimation performance for infrared imagery. Full article
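The δ1 of 0.976 quoted above is the standard depth-estimation threshold accuracy; a minimal sketch under the usual 1.25 threshold (function name and toy values are ours):

```python
import numpy as np

def delta1(pred, gt, threshold=1.25):
    """Fraction of pixels whose depth ratio max(pred/gt, gt/pred) is below
    the threshold; delta1 conventionally uses threshold = 1.25."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < threshold))

gt = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.1, 2.0, 6.0, 8.5])  # third estimate is off by a factor of 1.5
print(delta1(pred, gt))                # 0.75
```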

21 pages, 2555 KiB  
Article
Semantic-Aware Low-Light Image Enhancement by Learning from Multiple Color Spaces
by Bo Jiang, Xuefei Wang, Naidi Yang, Yuhan Liu, Xi Chen and Qiwen Wu
Appl. Sci. 2025, 15(10), 5556; https://doi.org/10.3390/app15105556 - 15 May 2025
Abstract
Extreme low-light image enhancement presents persistent challenges due to compounded degradations involving underexposure, sensor noise, and structural detail loss. Traditional low-light image enhancement methods predominantly employ global adjustment strategies that disregard semantic context, often resulting in incomplete detail recovery or color distortion. To address these limitations, we propose a semantic-aware knowledge-guided framework (SKF) that systematically integrates semantic priors for improved illumination recovery. Our framework introduces three key modules: A Semantic Feature Enhancement Module for integrating hierarchical semantic features, a Semantic-Guided Color Histogram Loss to enforce color consistency, and a Semantic-Guided Adversarial Loss to enhance perceptual realism. Furthermore, we improve the semantic-guided color histogram loss by leveraging multi-color space constraints. Inspired by human visual perception mechanisms, our enhanced loss function calculates color discrepancies across three color spaces—RGB, LAB, and LCH—through three components: lossrgb, losslab and losslch. These components collaboratively optimize image contrast and saturation, thereby simultaneously enhancing contrast preservation and chromatic naturalness. Full article

21 pages, 5272 KiB  
Article
Selecting High Forage-Yielding Alfalfa Populations in a Mediterranean Drought-Prone Environment Using High-Throughput Phenotyping
by Hamza Armghan Noushahi, Luis Inostroza, Viviana Barahona, Soledad Espinoza, Carlos Ovalle, Katherine Quitral, Gustavo A. Lobos, Fernando P. Guerra, Shawn C. Kefauver and Alejandro del Pozo
Remote Sens. 2025, 17(9), 1517; https://doi.org/10.3390/rs17091517 - 25 Apr 2025
Abstract
Alfalfa is a deep-rooted perennial forage crop with diverse drought-tolerant traits. This study evaluated 250 alfalfa half-sib populations over three growing seasons (2021–2023) under irrigated and rainfed conditions in the Mediterranean drought-prone region of Central Chile (Cauquenes), aiming to identify high-yielding, drought-tolerant populations using remote sensing. Specifically, we assessed RGB-derived indices and canopy temperature difference (CTD; Tc − Ta) as proxies for forage yield (FY). The results showed considerable variation in FY across populations. Under rainfed conditions, winter FY ranged from 1.4 to 6.1 Mg ha−1 and total FY from 3.7 to 14.7 Mg ha−1. Under irrigation, winter FY reached up to 8.2 Mg ha−1 and total FY up to 25.1 Mg ha−1. The AlfaL4-5 (SARDI7), AlfaL57-7 (WL903), and AlfaL62-9 (Baldrich350) populations consistently produced the highest yields across regimes. RGB indices such as hue, saturation, b*, v*, GA, and GGA positively correlated with FY, while intensity, lightness, a*, and u* correlated negatively. CTD showed a significant negative correlation with FY across all seasons and water regimes. These findings highlight the potential of RGB imaging and CTD as effective, high-throughput field phenotyping tools for selecting drought-resilient alfalfa genotypes in Mediterranean environments. Full article
(This article belongs to the Special Issue High-Throughput Phenotyping in Plants Using Remote Sensing)

14 pages, 3259 KiB  
Article
The Color and Magnetic Properties of Urban Dust to Identify Contaminated Samples by Heavy Metals in Mexico City Metropolitan Area
by Alexandra Méndez-Sánchez, Ángeles Gallegos, Rafael García, Rubén Cejudo, Avto Goguitchaichvili and Francisco Bautista
Atmosphere 2025, 16(4), 374; https://doi.org/10.3390/atmos16040374 - 25 Mar 2025
Abstract
Particles from gasoline-powered vehicle combustion often contain dark or black magnetic iron oxides. This work evaluates color variations and heavy metal concentrations in urban dust by separating magnetic particles. We used a high-power magnet to separate the magnetic particles of 30 urban dust samples from the Metropolitan Zone of the Valley of Mexico. In this way, we obtained three types of dust samples: complete particles (CPs), magnetic particles (MPs), and residual particles (RPs). The change in color with the CIE L*a*b* and RGB systems was estimated, while the concentrations of 18 heavy metals with XRF were measured. Results showed significant color differences between magnetic particles (MPs) and complete (CPs) or residual particles (RPs), with MPs exhibiting darker tones and higher concentrations of Cu, Fe, Mn, and V. The redness and saturation indices may help to identify urban dust samples contaminated with heavy metals and magnetic particles. Magnetism is a method that removes magnetic particles and some heavy metals from urban dust, partially reducing its toxicity. Full article
(This article belongs to the Section Air Pollution Control)

22 pages, 3547 KiB  
Article
Classification of Garden Chrysanthemum Flowering Period Using Digital Imagery from Unmanned Aerial Vehicle (UAV)
by Jiuyuan Zhang, Jingshan Lu, Qimo Qi, Mingxiu Sun, Gangjun Zheng, Qiuyan Zhang, Fadi Chen, Sumei Chen, Fei Zhang, Weimin Fang and Zhiyong Guan
Agronomy 2025, 15(2), 421; https://doi.org/10.3390/agronomy15020421 - 7 Feb 2025
Abstract
Monitoring the flowering period is essential for evaluating garden chrysanthemum cultivars and their landscaping use. However, traditional field observation methods are labor-intensive. This study proposes a classification method based on color information from canopy digital images. In this study, an unmanned aerial vehicle (UAV) with a red-green-blue (RGB) sensor was utilized to capture orthophotos of garden chrysanthemums. A mask region-convolutional neural network (Mask R-CNN) was employed to remove field backgrounds and categorize growth stages into vegetative, bud, and flowering periods. Images were then converted to the hue-saturation-value (HSV) color space to calculate eight color indices: R_ratio, Y_ratio, G_ratio, Pink_ratio, Purple_ratio, W_ratio, D_ratio, and Fsum_ratio, representing various color proportions. A color ratio decision tree and random forest model were developed to further subdivide the flowering period into initial, peak, and late periods. The results showed that the random forest model performed better with F1-scores of 0.9040 and 0.8697 on two validation datasets, requiring less manual involvement. This method provides a rapid and detailed assessment of flowering periods, aiding in the evaluation of new chrysanthemum cultivars. Full article
(This article belongs to the Special Issue New Trends in Agricultural UAV Application—2nd Edition)

13 pages, 4090 KiB  
Article
Discoloration Characteristics of Mechanochromic Sensors in RGB and HSV Color Spaces and Displacement Prediction
by Woo-Joo Choi, Myongkyoon Yang, Ilhwan You, Yong-Sik Yoon, Gum-Sung Ryu, Gi-Hong An and Jae Sung Yoon
Appl. Sci. 2025, 15(3), 1066; https://doi.org/10.3390/app15031066 - 22 Jan 2025
Abstract
Mechanochromic sensors are promising for structural health monitoring as they can visually monitor the deformation caused by discoloration. Most studies have focused on the large deformation problems over 100% strain; however, it is necessary to investigate the discoloration characteristics in a small deformation range to apply it to engineering structures, such as reinforced concrete. In this study, a photonic crystal-based discoloration sensor was investigated to determine the discoloration characteristics of the red, green, and blue (RGB) as well as hue, saturation, and value (HSV) color spaces according to displacement levels. B and S showed the highest sensitivity and linear discoloration at displacements < 1 mm, whereas R and H showed significant discoloration characteristics at displacements > 1 mm. The Vision Transformer model based on RGB and HSV channels was linearly predictable up to 4 mm displacement with an accuracy of R2 0.89, but errors were found at the initial displacement within 2 mm. Full article
(This article belongs to the Section Materials Science and Engineering)

20 pages, 8703 KiB  
Article
Depth-Oriented Gray Image for Unseen Pig Detection in Real Time
by Jongwoong Seo, Seungwook Son, Seunghyun Yu, Hwapyeong Baek and Yongwha Chung
Appl. Sci. 2025, 15(2), 988; https://doi.org/10.3390/app15020988 - 20 Jan 2025
Cited by 1
Abstract
With the increasing demand for pork, improving pig health and welfare management productivity has become a priority. However, it is impractical for humans to manually monitor all pigsties in commercial-scale pig farms, highlighting the need for automated health monitoring systems. In such systems, object detection is essential. However, challenges such as insufficient training data, low computational performance, and generalization issues in diverse environments make achieving high accuracy in unseen environments difficult. Conventional RGB-based object detection models face performance limitations due to brightness similarity between objects and backgrounds, new facility installations, and varying lighting conditions. To address these challenges, this study proposes a DOG (Depth-Oriented Gray) image generation method using various foundation models (SAM, LaMa, Depth Anything). Without additional sensors or retraining, the proposed method utilizes depth information from the testing environment to distinguish between foreground and background, generating depth background images and establishing an approach to define the Region of Interest (RoI) and Region of Uninterest (RoU). By converting RGB input images into the HSV color space and combining HSV-Value, inverted HSV-Saturation, and the generated depth background images, DOG images are created to enhance foreground object features while effectively suppressing background information. Experimental results using low-cost CPU and GPU systems demonstrated that DOG images improved detection accuracy (AP50) by up to 6.4% compared to conventional gray images. Moreover, DOG image generation achieved real-time processing speeds, taking 3.6 ms on a CPU, approximately 53.8 times faster than the GPU-based depth image generation time of Depth Anything, which requires 193.7 ms. Full article
(This article belongs to the Special Issue Advances in Machine Vision for Industry and Agriculture)
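The DOG composition described above (HSV-Value, inverted HSV-Saturation, and a depth background map) can be sketched roughly as below; the equal-weight average is our assumption, since the abstract does not give the paper's exact fusion rule:

```python
import numpy as np

def dog_image(hsv_value, hsv_saturation, depth_background):
    """Sketch of a Depth-Oriented Gray image: average the HSV value channel,
    the inverted saturation channel, and a depth-derived foreground map that
    is bright for foreground and dark for background (all uint8 inputs).
    Equal fusion weights are an assumption, not taken from the paper."""
    v = hsv_value.astype(np.float32)
    inv_s = 255.0 - hsv_saturation.astype(np.float32)
    fg = depth_background.astype(np.float32)
    return ((v + inv_s + fg) / 3.0).astype(np.uint8)

v = np.array([[120, 200]], dtype=np.uint8)
s = np.array([[30, 255]], dtype=np.uint8)
d = np.array([[255, 0]], dtype=np.uint8)  # left pixel foreground, right background
print(dog_image(v, s, d))                 # foreground stays bright (200), background drops (66)
```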

20 pages, 7839 KiB  
Article
Normalized Difference Vegetation Index Prediction for Blueberry Plant Health from RGB Images: A Clustering and Deep Learning Approach
by A. G. M. Zaman, Kallol Roy and Jüri Olt
AgriEngineering 2024, 6(4), 4831-4850; https://doi.org/10.3390/agriengineering6040276 - 16 Dec 2024
Abstract
In precision agriculture (PA), monitoring individual plant health is crucial for optimizing yields and minimizing resources. The normalized difference vegetation index (NDVI), a widely used health indicator, typically relies on expensive multispectral cameras. This study introduces a method for predicting the NDVI of blueberry plants using RGB images and deep learning, offering a cost-effective alternative. To identify individual plant bushes, K-means and Gaussian Mixture Model (GMM) clustering were applied. RGB images were transformed into the HSL (hue, saturation, lightness) color space, and the hue channel was constrained using percentiles to exclude extreme values while preserving relevant plant hues. Further refinement was achieved through adaptive pixel-to-pixel distance filtering combined with the Davies–Bouldin Index (DBI) to eliminate pixels deviating from the compact cluster structure. This enhanced clustering accuracy and enabled precise NDVI calculations. A convolutional neural network (CNN) was trained and tested to predict NDVI-based health indices. The model achieved strong performance with mean squared losses of 0.0074, 0.0044, and 0.0021 for training, validation, and test datasets, respectively. The test dataset also yielded a mean absolute error of 0.0369 and a mean percentage error of 4.5851. These results demonstrate the NDVI prediction method’s potential for cost-effective, real-time plant health assessment, particularly in agrobotics. Full article
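The NDVI targets the CNN is trained to predict come from the standard normalized-difference formula over NIR and red reflectance; a minimal sketch (function name and sample reflectances are ours):

```python
import numpy as np

def ndvi(nir, red, eps=1e-8):
    """Normalized difference vegetation index from NIR and red reflectance:
    (NIR - Red) / (NIR + Red), with eps guarding against division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

nir = np.array([0.8, 0.5])
red = np.array([0.1, 0.4])
print(np.round(ndvi(nir, red), 3))  # healthy vegetation ~0.778, stressed ~0.111
```

Computing this ground truth requires a multispectral camera; the paper's contribution is regressing the same index from RGB alone.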

24 pages, 8093 KiB  
Article
Comparison of Deep Learning Models and Feature Schemes for Detecting Pine Wilt Diseased Trees
by Junjun Zhi, Lin Li, Hong Zhu, Zipeng Li, Mian Wu, Rui Dong, Xinyue Cao, Wangbing Liu, Le’an Qu, Xiaoqing Song and Lei Shi
Forests 2024, 15(10), 1706; https://doi.org/10.3390/f15101706 - 26 Sep 2024
Cited by 2
Abstract
Pine wilt disease (PWD) is a severe forest disease caused by the invasion of pine wood nematode (Bursaphelenchus xylophilus), which has caused significant damage to China’s forestry resources due to its short disease cycle and strong infectious ability. Benefiting from the development of unmanned aerial vehicle (UAV)-based remote sensing technology, the use of UAV images for the detection of PWD-infected trees has become one of the mainstream methods. However, current UAV-based detection studies mostly focus on multispectral and hyperspectral images, and few studies have focused on using red–green–blue (RGB) images for detection. This study used UAV-based RGB images to extract feature information using different color space models and then utilized semantic segmentation techniques in deep learning to detect individual PWD-infected trees. The results showed that: (1) The U-Net model realized the optimal image segmentation and achieved the highest classification accuracy with F1-score, recall, and Intersection over Union (IoU) of 0.9586, 0.9553, and 0.9221, followed by the DeepLabv3+ model and the feature pyramid networks (FPN) model. (2) The RGBHSV feature scheme outperformed both the RGB feature scheme and the hue saturation value (HSV) feature scheme, which were unrelated to the choice of the semantic segmentation techniques. (3) The semantic segmentation techniques in deep-learning models achieved superior model performance compared with traditional machine-learning methods, with the U-Net model obtaining 4.81% higher classification accuracy compared with the random forest model. (4) Compared to traditional semantic segmentation models, the newly proposed segment anything model (SAM) performed poorly in identifying pine wood nematode disease. Its success rate is 0.1533 lower than that of the U-Net model when using the RGB feature scheme and 0.2373 lower when using the HSV feature scheme. 
The results showed that the U-Net model using the RGBHSV feature scheme performed best in detecting individual PWD-infected trees, indicating that the proposed method using semantic segmentation technique and UAV-based RGB images to detect individual PWD-infected trees is feasible. The proposed method not only provides a cost-effective solution for timely monitoring forest health but also provides a precise means to conduct remote sensing image classification tasks. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

23 pages, 23664 KiB  
Article
Development of a UAS-Based Multi-Sensor Deep Learning Model for Predicting Napa Cabbage Fresh Weight and Determining Optimal Harvest Time
by Dong-Ho Lee and Jong-Hwa Park
Remote Sens. 2024, 16(18), 3455; https://doi.org/10.3390/rs16183455 - 18 Sep 2024
Cited by 4
Abstract
The accurate and timely prediction of Napa cabbage fresh weight is essential for optimizing harvest timing, crop management, and supply chain logistics, which ultimately contributes to food security and price stabilization. Traditional manual sampling methods are labor-intensive and lack precision. This study introduces an artificial intelligence (AI)-powered model that utilizes unmanned aerial systems (UAS)-based multi-sensor data to predict Napa cabbage fresh weight. The model was developed using high-resolution RGB, multispectral (MSP), and thermal infrared (TIR) imagery collected throughout the 2020 growing season. The imagery was used to extract various vegetation indices, crop features (vegetation fraction, crop height model), and a water stress indicator (CWSI). The deep neural network (DNN) model consistently outperformed support vector machine (SVM) and random forest (RF) models, achieving the highest accuracy (R2 = 0.82, RMSE = 0.47 kg) during the mid-to-late rosette growth stage (35–42 days after planting, DAP). The model’s accuracy improved with cabbage maturity, emphasizing the importance of the heading stage for fresh weight estimation. The model slightly underestimated the weight of Napa cabbages exceeding 5 kg, potentially due to limited samples and saturation effects of vegetation indices. The overall error rate was less than 5%, demonstrating the feasibility of this approach. Spatial analysis further revealed that the model accurately captured variability in Napa cabbage growth across different soil types and irrigation conditions, particularly reflecting the positive impact of drip irrigation. This study highlights the potential of UAS-based multi-sensor data and AI for accurate and non-invasive prediction of Napa cabbage fresh weight, providing a valuable tool for optimizing harvest timing and crop management. 
Future research should focus on refining the model for specific weight ranges and diverse environmental conditions, and extending its application to other crops. Full article

22 pages, 7057 KiB  
Article
Extraction of Crop Row Navigation Lines for Soybean Seedlings Based on Calculation of Average Pixel Point Coordinates
by Bo Zhang, Dehao Zhao, Changhai Chen, Jinyang Li, Wei Zhang, Liqiang Qi and Siru Wang
Agronomy 2024, 14(8), 1749; https://doi.org/10.3390/agronomy14081749 - 9 Aug 2024
Cited by 2
Abstract
The extraction of navigation lines is a crucial aspect in the field autopilot system for intelligent agricultural equipment. Given that soybean seedlings are small, and straw can be found in certain Northeast China soybean fields, accurately obtaining feature points and extracting navigation lines during the soybean seedling stage poses numerous challenges. To solve the above problems, this paper proposes a method of extracting navigation lines based on the average coordinate feature points of pixel points in the bean seedling belt according to the calculation of the average coordinate. In this study, the soybean seedling was chosen as the research subject, and the Hue, Saturation, Value (HSV) colour model was employed in conjunction with the maximum interclass variance (OTSU) method for RGB image segmentation. To extract soybean seedling bands, a novel approach of framing binarised image contours by drawing external rectangles and calculating average coordinates of white pixel points as feature points was proposed. The feature points were normalised, and then the improved adaptive DBSCAN clustering method was used to cluster the feature points. The least squares method was used to fit the centre line of the crops and the navigation line, and the results showed that the average distance deviation and the average angle deviation of the proposed algorithm were 7.38 and 0.32. The fitted navigation line achieved an accuracy of 96.77%, meeting the requirements for extracting navigation lines in intelligent agricultural machinery equipment for soybean inter-row cultivation. This provides a theoretical foundation for realising automatic driving of intelligent agricultural machinery in the field. Full article
(This article belongs to the Section Precision and Digital Agriculture)
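The two core steps of this abstract, averaging the coordinates of white pixels to obtain feature points and least-squares fitting a line through them, can be sketched as below; the per-row feature points and the x = a·y + b parameterization are our simplifying assumptions (the paper frames contours with external rectangles and adds DBSCAN clustering first):

```python
import numpy as np

def fit_navigation_line(binary_mask):
    """Average the x-coordinate of white pixels in each image row to get one
    feature point per row, then least-squares fit a line x = a*y + b."""
    ys, xs = np.nonzero(binary_mask)
    points = []
    for y in np.unique(ys):
        points.append((y, xs[ys == y].mean()))  # mean x of white pixels in row y
    pts = np.array(points, dtype=np.float64)
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)  # x = a*y + b
    return a, b

mask = np.zeros((5, 5), dtype=np.uint8)
for y in range(5):
    mask[y, y] = 1  # white pixels along a diagonal crop row
a, b = fit_navigation_line(mask)
print(round(a, 3), abs(round(b, 3)))  # slope 1.0, intercept ~0
```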

22 pages, 102045 KiB  
Article
Histogram-Based Edge Detection for River Coastline Mapping Using UAV-Acquired RGB Imagery
by Grzegorz Walusiak, Matylda Witek and Tomasz Niedzielski
Remote Sens. 2024, 16(14), 2565; https://doi.org/10.3390/rs16142565 - 12 Jul 2024
Cited by 2
Abstract
This paper presents a new approach for delineating river coastlines in RGB close-range nadir aerial imagery acquired by unmanned aerial vehicles (UAVs), aimed at facilitating waterline detection through the reduction of the dimensions of a colour space and the use of coarse grids rather than pixels. Since water has uniform brightness, expressed as the value (V) component in the hue, saturation, value (HSV) colour model, the reduction in question is attained by extracting V and investigating its histogram to identify areas where V does not vary considerably. A set of 30 nadir UAV-acquired photos, taken at five different locations in Poland, were used to validate the approach. For 67% of all analysed images (both wide and narrow rivers were photographed), the detection rate was above 50% (with the false hit rate ranged between 5.00% and 61.36%, mean 36.62%). When the analysis was limited to wide rivers, the percentage of images in which detection rate exceeded 50% increased to 80%, and the false hit rates remained similar. Apart from the river width, land cover in the vicinity of the river, as well as uniformity of water colour, were found to be factors which influence the waterline detection performance. Our contribution to the existing knowledge is a rough waterline detection approach based on limited information (only the V band, and grids rather than pixels). Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
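The histogram analysis of the V component can be sketched roughly as follows; working per pixel instead of on the paper's coarse grids, and keeping only the single most populated bin, are our simplifications:

```python
import numpy as np

def low_variation_mask(v_channel, bins=32):
    """Find the most populated bin of the V histogram and keep the pixels
    falling in it, as a crude mask of 'uniform brightness' water candidates.
    The bin count and single-bin rule are illustrative assumptions."""
    hist, edges = np.histogram(v_channel, bins=bins, range=(0.0, 1.0))
    k = int(np.argmax(hist))
    lo, hi = edges[k], edges[k + 1]
    return (v_channel >= lo) & (v_channel < hi)

v = np.array([[0.51, 0.52], [0.52, 0.90]])  # three similar pixels, one outlier
print(low_variation_mask(v).sum())          # 3 pixels selected as water candidates
```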

19 pages, 8831 KiB  
Article
Tongue Disease Prediction Based on Machine Learning Algorithms
by Ali Raad Hassoon, Ali Al-Naji, Ghaidaa A. Khalid and Javaan Chahl
Technologies 2024, 12(7), 97; https://doi.org/10.3390/technologies12070097 - 28 Jun 2024
Cited by 8
Abstract
The diagnosis of tongue disease is based on the observation of various tongue characteristics, including color, shape, texture, and moisture, which indicate the patient’s health status. Tongue color is one such characteristic that plays a vital function in identifying diseases and the levels of progression of the ailment. With the development of computer vision systems, especially in the field of artificial intelligence, there has been important progress in acquiring, processing, and classifying tongue images. This study proposes a new imaging system to analyze and extract tongue color features at different color saturations and under different light conditions from five color space models (RGB, YcbCr, HSV, LAB, and YIQ). The proposed imaging system trained 5260 images classified with seven classes (red, yellow, green, blue, gray, white, and pink) using six machine learning algorithms, namely, the naïve Bayes (NB), support vector machine (SVM), k-nearest neighbors (KNN), decision trees (DTs), random forest (RF), and Extreme Gradient Boost (XGBoost) methods, to predict tongue color under any lighting conditions. The obtained results from the machine learning algorithms illustrated that XGBoost had the highest accuracy at 98.71%, while the NB algorithm had the lowest accuracy, with 91.43%. Based on these obtained results, the XGBoost algorithm was chosen as the classifier of the proposed imaging system and linked with a graphical user interface to predict tongue color and its related diseases in real time. Thus, this proposed imaging system opens the door for expanded tongue diagnosis within future point-of-care health systems. Full article
