Search Results (139)

Search Parameters:
Keywords = HSV color space

30 pages, 7223 KiB  
Article
Smart Wildlife Monitoring: Real-Time Hybrid Tracking Using Kalman Filter and Local Binary Similarity Matching on Edge Network
by Md. Auhidur Rahman, Stefano Giordano and Michele Pagano
Computers 2025, 14(8), 307; https://doi.org/10.3390/computers14080307 - 30 Jul 2025
Viewed by 193
Abstract
Real-time wildlife monitoring on edge devices poses significant challenges due to limited power, constrained bandwidth, and unreliable connectivity, especially in remote natural habitats. Conventional object detection systems often transmit redundant data of the same animals detected across multiple consecutive frames as part of a single event, resulting in increased power consumption and inefficient bandwidth usage. Furthermore, maintaining consistent animal identities in the wild is difficult due to occlusions, variable lighting, and complex environments. In this study, we propose a lightweight hybrid tracking framework built on the YOLOv8m deep neural network, combining motion-based Kalman filtering with Local Binary Pattern (LBP) similarity for appearance-based re-identification using texture and color features. To handle ambiguous cases, we further incorporate Hue-Saturation-Value (HSV) color space similarity. This approach enhances identity consistency across frames while reducing redundant transmissions. The framework is optimized for real-time deployment on edge platforms such as NVIDIA Jetson Orin Nano and Raspberry Pi 5. We evaluate our method against state-of-the-art trackers using event-based metrics such as MOTA, HOTA, and IDF1, with a focus on occlusion handling, trajectory analysis, and counting of detected animals during both day and night. Our approach significantly enhances tracking robustness, reduces ID switches, and provides more accurate detection and counting compared to existing methods. When transmitting time-series data and detected frames, it achieves up to 99.87% bandwidth savings and 99.67% power reduction, making it highly suitable for edge-based wildlife monitoring in resource-constrained environments. Full article
(This article belongs to the Special Issue Intelligent Edge: When AI Meets Edge Computing)
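The abstract does not give the exact form of the HSV similarity used to break ties in re-identification; a common choice, sketched below under that assumption, is comparing normalized hue histograms by histogram intersection (all names and values here are illustrative, not taken from the paper):

```python
def hue_histogram(hues, bins=16):
    """Normalized histogram of hue values given in degrees [0, 360)."""
    counts = [0] * bins
    for h in hues:
        counts[min(int(h / 360.0 * bins), bins - 1)] += 1
    total = float(len(hues))
    return [c / total for c in counts]

def histogram_intersection(p, q):
    """Similarity in [0, 1]; 1.0 means identical hue distributions."""
    return sum(min(a, b) for a, b in zip(p, q))

# Two detections of the same warm-toned animal score high against each
# other, and low against a green-background patch.
patch_a = [20, 25, 30, 200, 210]
patch_b = [22, 28, 33, 205, 212]
patch_c = [120, 130, 140, 150, 160]
sim_ab = histogram_intersection(hue_histogram(patch_a), hue_histogram(patch_b))
sim_ac = histogram_intersection(hue_histogram(patch_a), hue_histogram(patch_c))
```

In a tracker, such a score would only be consulted when the Kalman motion gate and LBP texture match leave more than one plausible assignment.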

21 pages, 2965 KiB  
Article
Inspection Method Enabled by Lightweight Self-Attention for Multi-Fault Detection in Photovoltaic Modules
by Shufeng Meng and Tianxu Xu
Electronics 2025, 14(15), 3019; https://doi.org/10.3390/electronics14153019 - 29 Jul 2025
Viewed by 298
Abstract
Bird-dropping fouling and hotspot anomalies remain the most prevalent and detrimental defects in utility-scale photovoltaic (PV) plants; their co-occurrence on a single module markedly curbs energy yield and accelerates irreversible cell degradation. However, markedly disparate visual–thermal signatures of the two phenomena impede high-fidelity concurrent detection in existing robotic inspection systems, while stringent onboard compute budgets also preclude the adoption of bulky detectors. To resolve this accuracy–efficiency trade-off for dual-defect detection, we present YOLOv8-SG, a lightweight yet powerful framework engineered for mobile PV inspectors. First, a rigorously curated multi-modal dataset—RGB for stains and long-wave infrared for hotspots—is assembled to enforce robust cross-domain representation learning. Second, the HSV color space is leveraged to disentangle chromatic and luminance cues, thereby stabilizing appearance variations across sensors. Third, a single-head self-attention (SHSA) block is embedded in the backbone to harvest long-range dependencies at negligible parameter cost, while a global context (GC) module is grafted onto the detection head to amplify fine-grained semantic cues. Finally, an auxiliary bounding box refinement term is appended to the loss to hasten convergence and tighten localization. Extensive field experiments demonstrate that YOLOv8-SG attains 86.8% mAP@0.5, surpassing the vanilla YOLOv8 by 2.7 pp while trimming 12.6% of parameters (18.8 MB). Grad-CAM saliency maps corroborate that the model’s attention consistently coincides with defect regions, underscoring its interpretability. The proposed method, therefore, furnishes PV operators with a practical low-latency solution for concurrent bird-dropping and hotspot surveillance. Full article
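The chromatic/luminance disentanglement the authors attribute to the HSV step can be seen in a few lines with Python's standard colorsys module: dimming a surface changes V while leaving H (and here S) untouched. This is a generic illustration, not code from the paper:

```python
import colorsys

# The same surface under two illumination levels: value tracks brightness,
# hue stays stable -- the cue separation that stabilizes appearance
# variations across RGB and infrared sensors.
bright = colorsys.rgb_to_hsv(0.8, 0.4, 0.2)   # well-lit panel
dim    = colorsys.rgb_to_hsv(0.4, 0.2, 0.1)   # same panel, half the light
h1, s1, v1 = bright
h2, s2, v2 = dim
```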

26 pages, 3771 KiB  
Article
BGIR: A Low-Illumination Remote Sensing Image Restoration Algorithm with ZYNQ-Based Implementation
by Zhihao Guo, Liangliang Zheng and Wei Xu
Sensors 2025, 25(14), 4433; https://doi.org/10.3390/s25144433 - 16 Jul 2025
Viewed by 239
Abstract
When a CMOS (Complementary Metal–Oxide–Semiconductor) imaging system operates at a high frame rate or a high line rate, the exposure time of the imaging system is limited, and the acquired image data will be dark, with a low signal-to-noise ratio and unsatisfactory sharpness. Therefore, in order to improve the visibility and signal-to-noise ratio of remote sensing images based on CMOS imaging systems, this paper proposes a low-light remote sensing image enhancement method and a corresponding ZYNQ (Zynq-7000 All Programmable SoC) design scheme called the BGIR (Bilateral-Guided Image Restoration) algorithm, which uses an improved multi-scale Retinex algorithm in the HSV (hue–saturation–value) color space. First, the RGB image is converted to the HSV color space to separate the original image's H, S, and V components. Then, the V component is processed using the improved algorithm based on bilateral filtering. The image is then adjusted using the gamma correction algorithm to make preliminary adjustments to the brightness and contrast of the whole image, and the S component is processed using segmented linear enhancement to obtain the base layer. The algorithm is also deployed to ZYNQ using ARM + FPGA software synergy, reasonably allocating each algorithm module and accelerating the algorithm by using a lookup table and constructing a pipeline. The experimental results show that the proposed method improves processing speed by nearly 30 times while maintaining the recovery effect, which has the advantages of fast processing speed, miniaturization, embeddability, and portability. Following the end-to-end deployment, the processing speeds for resolutions of 640 × 480 and 1280 × 720 are shown to reach 80 fps and 30 fps, respectively, thereby satisfying the performance requirements of the imaging system. Full article
(This article belongs to the Section Remote Sensors)
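The abstract does not state the gamma value used; the sketch below applies an illustrative gamma of 0.5 to a normalized V channel and shows the lookup-table form the authors mention for the ZYNQ deployment, where the whole 8-bit transfer function collapses to 256 precomputed entries:

```python
def gamma_correct(v, gamma=0.5):
    """Brighten a normalized V channel; gamma < 1 lifts dark pixels."""
    return [x ** gamma for x in v]

dark_v = [0.04, 0.09, 0.16, 0.25]   # under-exposed V samples
lifted = gamma_correct(dark_v)      # lifted toward mid-range

# Lookup-table form for 8-bit data: one power evaluation per level,
# then every pixel becomes a single table read in the pipeline.
LUT = [round(255 * (i / 255.0) ** 0.5) for i in range(256)]
```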

21 pages, 14169 KiB  
Article
High-Precision Complex Orchard Passion Fruit Detection Using the PHD-YOLO Model Improved from YOLOv11n
by Rongxiang Luo, Rongrui Zhao, Xue Ding, Shuangyun Peng and Fapeng Cai
Horticulturae 2025, 11(7), 785; https://doi.org/10.3390/horticulturae11070785 - 3 Jul 2025
Viewed by 347
Abstract
This study proposes the PHD-YOLO model as a means to enhance the precision of passion fruit detection in intricate orchard settings. The model has been meticulously engineered to circumvent salient challenges, including branch and leaf occlusion, variations in illumination, and fruit overlap. This study introduces a pioneering partial convolution module (ParConv), which employs a channel grouping and independent processing strategy to mitigate computational complexity. The module under consideration has been demonstrated to enhance the efficacy of local feature extraction in dense fruit regions by integrating sub-group feature-independent convolution and channel concatenation mechanisms. Secondly, depthwise separable convolution (DWConv) is adopted to replace standard convolution. The proposed method involves decoupling spatial convolution and channel convolution, a strategy that enables the retention of multi-scale feature expression capabilities while achieving a substantial reduction in model computation. The integration of the HSV Attentional Fusion (HSVAF) module within the backbone network facilitates the fusion of HSV color space characteristics with an adaptive attention mechanism, thereby enhancing feature discriminability under dynamic lighting conditions. The experiment was conducted on a dataset of 1212 original images collected from a planting base in Yunnan, China, covering multiple periods and angles. The dataset was constructed using enhancement strategies, including rotation and noise injection, and contains 2910 samples. The experimental results demonstrate that the improved model achieves a detection accuracy of 95.4%, a recall rate of 85.0%, mAP@0.5 of 91.5%, and an F1 score of 90.0% on the test set, which are 0.7%, 3.5%, 1.3%, and 2.4% higher, respectively, than those of the baseline model YOLOv11n, with a single-frame inference time of 0.6 milliseconds.
The model exhibited significant robustness in scenarios with dense fruits, leaf occlusion, and backlighting, validating the synergistic enhancement of staged convolution optimization and hybrid attention mechanisms. This solution offers a means to automate the monitoring of orchards, achieving a balance between accuracy and real-time performance. Full article
(This article belongs to the Section Fruit Production Systems)

20 pages, 3340 KiB  
Article
Infrared Monocular Depth Estimation Based on Radiation Field Gradient Guidance and Semantic Priors in HSV Space
by Rihua Hao, Chao Xu and Chonghao Zhong
Sensors 2025, 25(13), 4022; https://doi.org/10.3390/s25134022 - 27 Jun 2025
Viewed by 405
Abstract
Monocular depth estimation (MDE) has emerged as a powerful technique for extracting scene depth from a single image, particularly in the context of computational imaging. Conventional MDE methods based on RGB images often degrade under varying illuminations. To overcome this, an end-to-end framework is developed that leverages the illumination-invariant properties of infrared images for accurate depth estimation. Specifically, a multi-task UNet architecture was designed to perform gradient extraction, semantic segmentation, and texture reconstruction from infrared RAW images. To strengthen structural learning, a Radiation Field Gradient Guidance (RGG) module was incorporated, enabling edge-aware attention mechanisms. The gradients, semantics, and textures were mapped to the Saturation (S), Hue (H), and Value (V) channels in the HSV color space, subsequently converted into an RGB format for input into the depth estimation network. Additionally, a sky mask loss was introduced during training to mitigate the influence of ambiguous sky regions. Experimental validation on a custom infrared dataset demonstrated high accuracy, achieving a δ1 of 0.976. These results confirm that integrating radiation field gradient guidance and semantic priors in HSV space significantly enhances depth estimation performance for infrared imagery. Full article
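The channel assignment described above (gradients, semantics, and textures mapped to S, H, and V, then converted to RGB for the depth network) can be sketched per pixel with the standard colorsys module; the particular maps and values below are illustrative, not the paper's data:

```python
import colorsys

def hsv_maps_to_rgb(hue_map, sat_map, val_map):
    """Fuse three per-pixel maps as H/S/V channels and convert to RGB.

    Following the paper's assignment: semantics -> H, gradients -> S,
    textures -> V, all normalized to [0, 1]."""
    return [
        colorsys.hsv_to_rgb(h, s, v)
        for h, s, v in zip(hue_map, sat_map, val_map)
    ]

semantics = [0.0, 0.33]   # e.g. class ids scaled to [0, 1]
gradients = [1.0, 0.5]    # edge strength from the RGG module
textures  = [1.0, 0.8]    # reconstructed texture intensity
rgb = hsv_maps_to_rgb(semantics, gradients, textures)
```

The resulting three-channel image is what a conventional RGB depth-estimation backbone can consume without architectural changes.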

29 pages, 21063 KiB  
Article
Perceiving Fifth Facade Colors in China’s Coastal Cities from a Remote Sensing Perspective: A New Understanding of Urban Image
by Yue Liu, Richen Ye, Wenlong Jing, Xiaoling Yin, Jia Sun, Qiquan Yang, Zhiwei Hou, Hongda Hu, Sijing Shu and Ji Yang
Remote Sens. 2025, 17(12), 2075; https://doi.org/10.3390/rs17122075 - 17 Jun 2025
Viewed by 522
Abstract
Urban color represents the visual skin of a city, embodying regional culture, historical memory, and the contemporary spirit. However, while the existing studies focus on pedestrian-level facade colors, the “fifth facade” from a bird’s-eye view has been largely overlooked. Moreover, color distortions in traditional remote sensing imagery hinder precise analysis. This study targeted 56 Chinese coastal cities, decoding the spatiotemporal patterns of their fifth facade color (FFC). Through developing an innovative natural color optimization algorithm, the oversaturation and color bias of Sentinel-2 imageries were addressed. Several color indicators, including dominant colors, hue–saturation–value, color richness, and color harmony, were developed to analyze the spatial variations of FFC. Results revealed that FFC in Chinese coastal cities is dominated by gray, black, and brown, reflecting the commonality of cement jungles. Among them, northern warm grays exude solidity, as in Weifang, while southern cool grays convey modern elegance, as in Shenzhen. Blue PVC rooftops (e.g., Tianjin) and red-brick villages (e.g., Quanzhou) serve as symbols of industrial function and cultural heritage. Economically advanced cities (e.g., Shanghai) lead in color richness, linking vitality to visual diversity, while high-harmony cities (e.g., Lianyungang) foster livability through coordinated colors. The study also warns of color pollution risks. Cities like Qingdao exposed planning imbalances through color clashes. This research pioneers a systematic and large-scale decoding of urban fifth facade color from a remote sensing perspective, quantitatively revealing the dilemma of “identical cities” in modernization development. The findings inject color rationality into urban planning and create readable and warm city images. Full article
(This article belongs to the Section Environmental Remote Sensing)

22 pages, 3331 KiB  
Article
Maize Leaf Area Index Estimation Based on Machine Learning Algorithm and Computer Vision
by Wanna Fu, Zhen Chen, Qian Cheng, Yafeng Li, Weiguang Zhai, Fan Ding, Xiaohui Kuang, Deshan Chen and Fuyi Duan
Agriculture 2025, 15(12), 1272; https://doi.org/10.3390/agriculture15121272 - 12 Jun 2025
Viewed by 712
Abstract
Precise estimation of the leaf area index (LAI) is vital in efficient maize growth monitoring and precision farming. Traditional LAI measurement methods are often destructive and labor-intensive, while techniques relying solely on spectral data suffer from limitations such as spectral saturation. To overcome these difficulties, the study integrated computer vision techniques with UAV-based remote sensing data to establish a rapid and non-invasive method for estimating the LAI in maize. Multispectral imagery of maize was acquired via UAV platforms across various phenological stages, and vegetation features were derived based on the Excess Green (ExG) Index and the Hue–Saturation–Value (HSV) color space. LAI standardization was performed through edge detection and the cumulative distribution function. The proposed LAI estimation model, named VisLAI, based solely on visible light imagery, demonstrated high accuracy, with R2 values of 0.84, 0.75, and 0.50, and RMSE values of 0.24, 0.35, and 0.44 across the big trumpet, tasseling–silking, and grain filling stages, respectively. When HSV-based optimization was applied, VisLAI achieved even better performance, with R2 values of 0.92, 0.90, and 0.85, and RMSE values of 0.19, 0.23, and 0.22 at the respective stages. The estimation results were validated against ground-truth data collected using the LAI-2200C plant canopy analyzer and compared with six machine learning algorithms, including Gradient Boosting (GB), Random Forest (RF), Ridge Regression (RR), Support Vector Regression (SVR), and Linear Regression (LR). Among these, GB achieved the best performance, with R2 values of 0.88, 0.88, and 0.65, and RMSE values of 0.22, 0.25, and 0.34. However, VisLAI consistently outperformed all machine learning models, especially during the grain filling stage, demonstrating superior robustness and accuracy. 
The VisLAI model proposed in this study effectively utilizes UAV-captured visible light imagery and computer vision techniques to achieve accurate, efficient, and non-destructive estimation of maize LAI. It outperforms traditional and machine learning-based approaches and provides a reliable solution for real-world maize growth monitoring and agricultural decision-making. Full article
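The Excess Green index used above for vegetation features is conventionally defined as ExG = 2g - r - b on chromaticity-normalized channels; a minimal per-pixel sketch of that standard definition (not the authors' implementation) is:

```python
def excess_green(r, g, b):
    """ExG = 2g - r - b on chromaticity-normalized channels.

    Vegetation pixels score well above zero; soil and other neutral
    backgrounds land near or below zero."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2 * gn - rn - bn

leaf = excess_green(40, 120, 30)   # green canopy pixel
soil = excess_green(120, 100, 80)  # brownish soil pixel
```

Thresholding ExG yields the canopy mask from which per-stage LAI features are then derived.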

19 pages, 2229 KiB  
Article
Dyeing to Know: Harmonizing Nile Red Staining Protocols for Microplastic Identification
by Derek Ho and Julie Masura
Colorants 2025, 4(2), 20; https://doi.org/10.3390/colorants4020020 - 3 Jun 2025
Cited by 1 | Viewed by 1281
Abstract
The increasing prevalence of microplastic (MP) pollution and the labor-intensive nature of existing identification methods necessitate improved large-scale detection approaches. Nile Red (NR) fluorescence, which varies with polarity, offers a potential classification method, but standardization of carrier solvents and fluorescence differentiation techniques remains lacking. This study evaluated eight NR-carrier solvents (n-hexane, chloroform, acetone, methanol, ethanol, acetone/hexane, acetone/ethanol, and acetone/water) across ten common MP polymers (HDPE, LDPE, PP, EPS, PS, PC, ABS, PVC, PET, and PA). Fluorescence intensity, Stokes shift, and solvent-induced polymer degradation were analyzed. The study also assessed the HSV (Hue/Saturation/Value) color space for Stokes shift representation and MP differentiation. Fenton oxidation effectively quenched fluorescence in natural organic matter (e.g., eggshells, fingernails, wood, cotton) while preserving NR-stained MPs. Acetone/water [25% (v/v)] emerged as the optimal solvent, balancing fluorescence performance and minimal degradation. Full article
(This article belongs to the Special Issue Feature Papers in Colorant Chemistry)

22 pages, 6392 KiB  
Article
Dual-Phase Severity Grading of Strawberry Angular Leaf Spot Based on Improved YOLOv11 and OpenCV
by Yi-Xiao Xu, Xin-Hao Yu, Qing Yi, Qi-Yuan Zhang and Wen-Hao Su
Plants 2025, 14(11), 1656; https://doi.org/10.3390/plants14111656 - 29 May 2025
Viewed by 660
Abstract
Phyllosticta fragaricola-induced angular leaf spot causes substantial economic losses in global strawberry production, necessitating advanced severity assessment methods. This study proposed a dual-phase grading framework integrating deep learning and computer vision. The enhanced You Only Look Once version 11 (YOLOv11) architecture incorporated a Content-Aware ReAssembly of FEatures (CARAFE) module for improved feature upsampling and a squeeze-and-excitation (SE) attention mechanism for channel-wise feature recalibration, resulting in the YOLOv11-CARAFE-SE for the severity assessment of strawberry angular leaf spot. Furthermore, an OpenCV-based threshold segmentation algorithm based on H-channel thresholds in the HSV color space achieved accurate lesion segmentation. A disease severity grading standard for strawberry angular leaf spot was established based on the ratio of lesion area to leaf area. In addition, specialized software for the assessment of disease severity was developed based on the improved YOLOv11-CARAFE-SE model and OpenCV-based algorithms. Experimental results show that compared with the baseline YOLOv11, the performance is significantly improved: the box mAP@0.5 is increased by 1.4% to 93.2%, the mask mAP@0.5 is increased by 0.9% to 93.0%, the inference time is shortened by 0.4 ms to 0.9 ms, and the computational load is reduced by 1.94% to 10.1 GFLOPS. In addition, this two-stage grading framework achieves an average accuracy of 94.2% in detecting selected strawberry angular leaf spot disease samples, providing real-time field diagnostics and a high-throughput phenotypic analysis for resistance breeding programs. This work demonstrates the feasibility of rapidly estimating the severity of strawberry angular leaf spot, which will establish a robust technical framework for strawberry disease management under field conditions. Full article
(This article belongs to the Section Crop Physiology and Crop Production)
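The H-channel thresholding and area-ratio grading described above can be sketched without OpenCV; the hue band and grade boundaries below are illustrative placeholders, since the abstract does not give the paper's actual values:

```python
def lesion_mask(hue_channel, h_lo, h_hi):
    """Binary mask of pixels whose hue (degrees) falls in the lesion band."""
    return [[1 if h_lo <= h <= h_hi else 0 for h in row] for row in hue_channel]

def severity_grade(mask, leaf_pixels):
    """Grade by lesion-area / leaf-area ratio (grade bands are illustrative)."""
    lesion = sum(sum(row) for row in mask)
    ratio = lesion / float(leaf_pixels)
    if ratio < 0.05:
        return 1
    if ratio < 0.20:
        return 2
    return 3

# Tiny hue map in degrees: ~30 deg plays the brownish lesion, ~100 deg the leaf.
hue = [[30, 95, 100], [28, 32, 98], [99, 31, 97]]
mask = lesion_mask(hue, 20, 40)
grade = severity_grade(mask, leaf_pixels=9)
```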

19 pages, 5870 KiB  
Article
Tilt-Induced Error Compensation with Vision-Based Method for Polarization Navigation
by Meng Yuan, Xindong Wu, Chenguang Wang and Xiaochen Liu
Appl. Sci. 2025, 15(9), 5060; https://doi.org/10.3390/app15095060 - 2 May 2025
Viewed by 487
Abstract
To rectify significant heading calculation errors in polarized light navigation for unmanned aerial vehicles (UAVs) under tilted states, this paper proposes a method for compensating horizontal attitude angles based on horizon detection. First, a defogging enhancement algorithm that integrates Retinex theory with dark channel prior is adopted to improve image quality in low-illumination and hazy environments. Second, a dynamic threshold segmentation method in the HSV color space (Hue, Saturation, and Value) is proposed for robust horizon region extraction, combined with an improved adaptive bilateral filtering Canny operator for edge detection, aimed at balancing detail preservation and noise suppression. Then, the progressive probabilistic Hough transform is used to efficiently extract parameters of the horizon line. The calculated horizontal attitude angles are utilized to convert the body frame to the navigation frame, achieving compensation for polarization orientation errors. Onboard experiments demonstrate that the horizontal attitude angle estimation error remains within 0.3°, and the heading accuracy after compensation is improved by approximately 77.4% relative to uncompensated heading accuracy, thereby validating the effectiveness of the proposed algorithm. Full article
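The paper extracts the horizon with a progressive probabilistic Hough transform; as a simplified stand-in (not the authors' method), the same roll estimate can be obtained by a least-squares line fit through the detected sky/ground boundary points:

```python
import math

def roll_from_horizon(points):
    """Least-squares fit y = a*x + b through horizon edge points;
    the roll angle is atan(a) in degrees."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return math.degrees(math.atan(a))

# Boundary pixels of a horizon tilted by roughly 5 degrees
# (tan 5 deg is about 0.0875).
pts = [(x, 100 + 0.0875 * x) for x in range(0, 640, 64)]
roll = roll_from_horizon(pts)
```

The recovered roll (and the analogous pitch from the line's vertical offset) is what rotates the body frame into the navigation frame before the polarization heading is computed.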

17 pages, 3914 KiB  
Article
Multi-Scale Fusion Underwater Image Enhancement Based on HSV Color Space Equalization
by Jialiang Zhang, Haibing Su, Tao Zhang, Hu Tian and Bin Fan
Sensors 2025, 25(9), 2850; https://doi.org/10.3390/s25092850 - 30 Apr 2025
Viewed by 510
Abstract
Meeting the escalating demand for high-quality underwater imagery poses a significant challenge due to light absorption and scattering in water, resulting in color distortion and reduced contrast. This study presents an innovative approach for enhancing underwater images, combining color correction, HSV color space equalization, and multi-scale fusion techniques. Initially, automatic contrast adjustment and improved white balance corrected color bias; this was followed by saturation and value equalization in the HSV space to enhance brightness and saturation. Gaussian and Laplacian pyramid methods extracted multi-scale features that were fused to augment image details and edges. Extensive subjective and objective evaluations compared our method with existing algorithms, demonstrating its superior performance in UCIQE (0.64368) and information entropy (7.8041) metrics. The proposed method effectively improves overall image quality, mitigates color bias, and enhances brightness and saturation. Full article
(This article belongs to the Section Sensing and Imaging)
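The saturation and value equalization step is, at its core, histogram equalization applied per channel in HSV space; a minimal 8-bit sketch (illustrative, not the authors' implementation) is:

```python
def equalize(channel, levels=256):
    """Histogram-equalize an 8-bit channel given as a flat list of ints."""
    hist = [0] * levels
    for x in channel:
        hist[x] += 1
    cdf, run = [], 0
    for c in hist:
        run += c
        cdf.append(run)
    n = len(channel)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # constant channel: nothing to spread
        return list(channel)
    return [round((cdf[x] - cdf_min) / (n - cdf_min) * (levels - 1))
            for x in channel]

# A murky underwater V channel squeezed into [90, 110] spreads to full range.
v = [90, 95, 100, 105, 110]
eq = equalize(v)
```

Applying the same transform to S restores saturation, after which the Gaussian/Laplacian pyramid fusion recombines detail across scales.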

27 pages, 39460 KiB  
Article
An Efficient Method for Counting Large-Scale Plantings of Transplanted Crops in UAV Remote Sensing Images
by Huihua Wang, Yuhang Zhang, Zhengfang Li, Mofei Li, Haiwen Wu, Youdong Jia, Jiankun Yang and Shun Bi
Agriculture 2025, 15(5), 511; https://doi.org/10.3390/agriculture15050511 - 26 Feb 2025
Cited by 1 | Viewed by 620
Abstract
Counting the number of transplanted crops is a crucial link in agricultural production, serving as a key method to promptly obtain information on crop growth conditions and ensure the yield and quality. The existing counting methods primarily rely on manual counting or estimation, which are inefficient, costly, and difficult to evaluate statistically. Additionally, some deep-learning-based algorithms can only crop large-scale remote sensing images obtained by Unmanned Aerial Vehicles (UAVs) into smaller sub-images for counting. However, this fragmentation often leads to incomplete crop contours of some transplanted crops, issues such as over-segmentation, repeated counting, low statistical efficiency, and also requires a significant amount of data annotation and model training work. To address the aforementioned challenges, this paper first proposes an effective framework for farmland segmentation, named MED-Net, based on DeepLabV3+, integrating MobileNetV2 and Efficient Channel Attention Net (ECA-Net), enabling precise plot segmentation. Secondly, color masking for transplanted crops is established in the HSV color space to further remove background information. After filtering and denoising, the contours of transplanted crops are extracted. An efficient contour filtering strategy is then applied to enable accurate counting. This paper conducted experiments on tobacco counting, and the experimental results demonstrated that the proposed MED-Net framework could accurately segment farmland in UAV large-scale remote sensing images with high similarity and complex backgrounds. The contour extraction and filtering strategy can effectively and accurately identify the contours of transplanted crops, meeting the requirements for rapid and accurate survival counting in the early stage of transplantation. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
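After the HSV color mask isolates the transplanted crops, counting reduces to labeling connected foreground regions and filtering implausible ones. A minimal connected-component counter with an area filter (a crude stand-in for the paper's contour extraction and filtering strategy; the mask and threshold are illustrative) is:

```python
from collections import deque

def count_blobs(mask, min_area=1):
    """Count 4-connected components in a binary mask, skipping blobs
    smaller than min_area (a simple noise filter)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                area = 0
                q = deque([(r, c)])
                seen[r][c] = True
                while q:                       # BFS flood fill
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

# Three seedlings plus one single-pixel speck removed by the area filter.
field = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 1],
]
n = count_blobs(field, min_area=2)
```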

18 pages, 13945 KiB  
Article
Quantitative Comparison of Geographical Color of Traditional Village Architectural Heritage Based on K-Means Color Clustering—A Case Study of Southeastern Hubei Province, China
by Li Dong and Meiqi Kang
Buildings 2025, 15(5), 748; https://doi.org/10.3390/buildings15050748 - 25 Feb 2025
Cited by 3 | Viewed by 928
Abstract
The architectural heritage of traditional villages, as an important bearing entity of regional culture, contains strong regional color attributes. However, under the wave of contemporary rapid economic development, the color of traditional village architectural heritage is facing serious challenges. The K-means clustering algorithm has outstanding advantages in image color clustering and is suitable for the large-scale data collection of sample picture primary colors to reduce subjective bias and can be combined with the HSV color space to optimize the results. In this study, the architectural heritage of four traditional villages of the Ming and Qing dynasties in the southeastern region of Hubei Province is taken as the research object, the K-means clustering algorithm is used to quantify the color data of the architectural heritage, and the HSV color space is used to analyze the distribution characteristics of the color data and to excavate the uniqueness of its colors and the regional characteristics. The results of this study show that the color characteristics of the architectural heritage of the four villages are as follows: the main colors are red-yellow and red, together accounting for between 80% and 100% of the overall color, while the auxiliary colors, cyan-blue and blue, account for 0 to 20% and show low-saturation and medium-to-high-value characteristics. Based on the above results, the recommended range of values for the architectural heritage colors in the southeastern part of Hubei Province is clarified: the hue values are between the ranges of 0–40 and 200–230, the saturation is between 0 and 30%, and the values are in the range of 30–70%. At the same time, based on this range of values, a set of recommended chromatograms was generated to provide a visual reference for the adjustment of architectural heritage colors, which is helpful for the conservation and development of architectural heritage colors and landscapes. Full article
(This article belongs to the Special Issue Built Heritage Conservation in the Twenty-First Century: 2nd Edition)
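As a rough illustration (not the authors' pipeline), the core idea of the abstract above — K-means over pixel colors followed by HSV conversion to check hue ranges — can be sketched with the Python standard library alone. The pixel values, cluster count, and farthest-point initialization here are invented for the example:

```python
import colorsys

def kmeans_colors(pixels, k=2, iters=20):
    """Tiny K-means over RGB pixels (0-255 tuples); returns cluster centers.

    Farthest-point initialization keeps this toy run deterministic."""
    centers = [pixels[0]]
    while len(centers) < k:
        centers.append(max(pixels, key=lambda p: min(
            sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

def rgb_to_hsv_deg(rgb):
    """Convert an RGB center to (hue in degrees, saturation %, value %)."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    return h * 360, s * 100, v * 100

# Toy "facade" sample: a warm brick tone and a cool grey-blue tone.
pixels = [(200, 60, 40)] * 40 + [(40, 70, 160)] * 20
hsv_centers = sorted(rgb_to_hsv_deg(c) for c in kmeans_colors(pixels, k=2))
print(hsv_centers)
```

With this sample, the dominant cluster's hue falls in the warm 0–40 band and the secondary cluster's hue in the cool 200–230 band, mirroring the main/auxiliary split the paper reports.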

25 pages, 8870 KiB  
Article
Classification of Tomato Harvest Timing Using an AI Camera and Analysis Based on Experimental Results
by Yasuhiro Okabe, Takefumi Hiraguri, Keita Endo, Tomotaka Kimura and Daisuke Hayashi
AgriEngineering 2025, 7(2), 48; https://doi.org/10.3390/agriengineering7020048 - 19 Feb 2025
Viewed by 1579
Abstract
Smart agriculture has the potential to solve labor shortages and improve production efficiency and prices at the time of shipment. Predicting tomato yields during the cultivation period is crucial for planning shipment volumes and costs in advance. We propose a technology that uses an AI camera to help producers predict yields more accurately, and we verify the effectiveness of the developed system through experimental validation. Specifically, we developed an AI-recognition camera that uses You Only Look Once (YOLO) to detect individual tomatoes; the size of each detected tomato is then estimated from point cloud data. Moreover, the AI-recognition camera classifies ripeness based on hue, achieving accurate classification that does not depend on the brightness of the greenhouse. To evaluate the camera, its predicted yield was compared with the actual harvested yield in the field. The analysis showed an error rate of 6.85%, demonstrating sufficient accuracy for practical implementation. By introducing this system, efficient yield prediction can be achieved, leading to reduced labor costs, a stable tomato supply, improved quality, and optimized market distribution. As a result, it is expected to benefit both shippers and consumers. Full article
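The brightness-independence claim above follows from classifying on hue alone: in HSV, lighting changes mostly move the value channel, while hue tracks the red-green shift of ripening. A minimal sketch of hue-based ripeness binning (the thresholds below are hypothetical, not the paper's calibrated values):

```python
import colorsys

def classify_ripeness(rgb):
    """Map a tomato's mean RGB color to a ripeness stage via its hue.

    Hypothetical hue thresholds in degrees; a deployed system would
    calibrate them per cultivar and camera."""
    h, _, _ = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    hue = h * 360
    if hue < 20 or hue >= 340:
        return "ripe"      # red
    if hue < 45:
        return "turning"   # orange
    return "unripe"        # yellow/green

print(classify_ripeness((200, 30, 30)))   # deep red
print(classify_ripeness((60, 160, 60)))   # green
```

Because the value channel is discarded, a shaded red tomato and a sunlit one land in the same hue bin, which is the property the abstract exploits.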

23 pages, 18399 KiB  
Article
Channel Attention for Fire and Smoke Detection: Impact of Augmentation, Color Spaces, and Adversarial Attacks
by Usama Ejaz, Muhammad Ali Hamza and Hyun-chul Kim
Sensors 2025, 25(4), 1140; https://doi.org/10.3390/s25041140 - 13 Feb 2025
Cited by 1 | Viewed by 1424
Abstract
The prevalence of wildfires presents significant challenges for fire detection systems, particularly in differentiating fire from complex backgrounds and maintaining detection reliability under diverse environmental conditions. Addressing these challenges is crucial for developing sustainable and effective fire detection systems. In this paper: (i) we introduce a channel-wise attention-based architecture, achieving 95% accuracy and demonstrating an improved focus on the flame-specific features critical for distinguishing fire in complex backgrounds; through ablation studies, we show that our channel-wise attention mechanism yields a significant 3–5% accuracy improvement over baseline and state-of-the-art fire detection models; (ii) we evaluate the impact of augmentation on fire detection, demonstrating improved performance across varied environmental conditions; (iii) we comprehensively evaluate detection reliability across color spaces, including RGB, grayscale, HSV, and YCbCr; and (iv) we assess model vulnerabilities, showing that Fast Gradient Sign Method (FGSM) perturbations significantly degrade performance, reducing accuracy to 41%. Using Local Interpretable Model-Agnostic Explanations (LIME) visualization techniques, we provide insights into model decision-making under both standard and adversarial conditions, highlighting important considerations for fire detection applications. Full article
(This article belongs to the Special Issue Object Detection and Recognition Based on Deep Learning)
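The FGSM attack assessed in (iv) is easy to state: perturb the input by a small step eps in the sign of the loss gradient, x_adv = x + eps * sign(∇x L(x, y)). A minimal sketch on a toy logistic "fire" classifier (pure standard library; the weights, bias, and inputs are invented for illustration and unrelated to the paper's model):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic classifier p = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. the input x is
    (p - y) * w, so each feature is nudged by eps in that gradient's sign."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

# The clean input is classified as "fire" (y = 1) with high confidence;
# the perturbed input loses confidence on the same toy model.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 1.0], 1
x_adv = fgsm(x, y, w, b, eps=0.3)
p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(round(p_clean, 3), round(p_adv, 3))
```

Even this two-weight model shows the mechanism behind the accuracy drop the paper measures: a perturbation that is tiny per feature, but aligned with the loss gradient, moves the prediction toward the decision boundary.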
