Search Results (1,897)

Search Parameters:
Keywords = color in image processing

21 pages, 3624 KB  
Article
Multi-Scale Feature Fusion and Attention-Enhanced R2U-Net for Dynamic Weight Monitoring of Chicken Carcasses
by Tian Hua, Pengfei Zou, Ao Zhang, Runhao Chen, Hao Bai, Wenming Zhao, Qian Fan and Guobin Chang
Animals 2026, 16(3), 410; https://doi.org/10.3390/ani16030410 - 28 Jan 2026
Abstract
In recent years, real-time monitoring of broiler chicken weight has become crucial for assessing growth and health status. Currently, obtaining weight data often relies on manual collection. However, this process is cumbersome, labor-intensive, and inefficient. This paper proposes a broiler carcass weight detection model based on deep learning image segmentation and regression to address these issues. The model first segments broiler carcasses and then uses the pixel area of the segmented region as a key feature for a regression model to predict weight. A custom dataset comprising 2709 images from 301 Taihu yellow chickens was established for this study. A novel segmentation network, AR2U-AtNet, derived from R2U-Net, is proposed. To mitigate the interference of background color and texture on target carcasses in slaughterhouse production lines, the Convolutional Block Attention Module (CBAM) is introduced to enable the network to focus on areas containing carcasses. Furthermore, broilers exhibit significant variations in size, morphology, and posture, which impose high demands on the model’s scale adaptability. Selective Kernel Attention (SKAttention) is therefore integrated to flexibly handle broiler images with diverse body conditions. The model achieved a mean Intersection over Union (mIoU) score of 90.45%, and Dice and F1 scores of 95.18%. The regression-based weight prediction achieved an R2 value of 0.9324. The results demonstrate that the proposed method can quickly and accurately determine individual broiler carcass weights, thereby alleviating the burden of traditional weighing methods and ultimately improving the production efficiency of yellow-feather broilers.
(This article belongs to the Section Poultry)
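No code accompanies the listing, but the CBAM block the abstract highlights is standard enough to sketch. Below is a minimal PyTorch version (channel attention followed by spatial attention); the reduction ratio and 7×7 kernel are the module's common defaults, not necessarily the authors' configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))            # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))             # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)             # per-pixel channel mean
        mx = x.amax(dim=1, keepdim=True)              # per-pixel channel max
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention, then spatial attention, as in the original CBAM paper."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```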

27 pages, 2292 KB  
Article
Source Camera Identification via Explicit Content–Fingerprint Decoupling with a Dual-Branch Deep Learning Framework
by Zijuan Han, Yang Yang, Jiaxuan Lu, Jian Sun, Yunxia Liu and Ngai-Fong Bonnie Law
Appl. Sci. 2026, 16(3), 1245; https://doi.org/10.3390/app16031245 - 26 Jan 2026
Viewed by 38
Abstract
In this paper, we propose a source camera identification method based on disentangled feature modeling, aiming to achieve robust extraction of camera fingerprint features under complex imaging and post-processing conditions. To address the severe coupling between image content and camera fingerprint features in existing methods, which makes content interference difficult to suppress, we develop a dual-branch deep learning framework guided by imaging physics. By introducing physical consistency constraints, the proposed framework explicitly separates image content representations from device-related fingerprint features in the feature space, thereby enhancing the stability and robustness of source camera identification. The proposed method adopts two parallel branches: a content modeling branch and a fingerprint feature extraction branch. The content branch is built upon an improved U-Net architecture to reconstruct scene and color information, and further incorporates texture refinement and multi-scale feature fusion to reduce residual content interference in fingerprint modeling. The fingerprint branch employs ResNet-50 as the backbone network to learn discriminative global features associated with the camera imaging pipeline. Based on these branches, fingerprint information dominated by sensor noise is explicitly extracted by computing the residual between the input image and the reconstructed content, and is further encoded through noise analysis and feature fusion for joint camera model classification. Experimental results on multiple public source-camera forensics datasets demonstrate that the proposed method achieves stable and competitive identification performance in same-brand camera discrimination, complex imaging conditions, and post-processing scenarios, validating the effectiveness of the proposed disentangled modeling and physical consistency constraint strategy for source camera identification.
(This article belongs to the Special Issue New Development in Machine Learning in Image and Video Forensics)
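The residual-based fingerprint extraction the abstract describes (reconstruct content, subtract it, classify the residual) can be sketched schematically in PyTorch. The tiny content network below is only a placeholder for the paper's improved U-Net, and the camera-class count is an assumption.

```python
import torch.nn as nn
from torchvision.models import resnet50

class TinyContentNet(nn.Module):
    """Placeholder for the paper's improved U-Net content branch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class DualBranchSCI(nn.Module):
    def __init__(self, num_cameras=10):  # num_cameras is a placeholder value
        super().__init__()
        self.content = TinyContentNet()
        self.fingerprint = resnet50(weights=None, num_classes=num_cameras)

    def forward(self, x):
        content = self.content(x)          # reconstructed scene/color content
        residual = x - content             # sensor-noise-dominated signal
        return self.fingerprint(residual)  # camera-model logits
```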

16 pages, 812 KB  
Review
A Review of Adaptive Mechanisms in Fish Retinal Structure and Opsins Under Light Environment Regulation
by Zheng Zhang, Fan Fei, Liang Wang, Yunsong Rao, Wenyang Li, Xiaoqiang Gao, Ao Li and Baoliang Liu
Fishes 2026, 11(2), 73; https://doi.org/10.3390/fishes11020073 - 23 Jan 2026
Viewed by 57
Abstract
Light, as one of the most crucial environmental factors, plays an essential role in the growth, physiology, and evolutionary survival of fish. To cope with diverse light conditions in aquatic environments, fish adapt through photosensory systems composed of both visual and non-visual pathways. The retina is a key component of the visual system of fish, capable of converting external optical signals into neural electrical signals, making it crucial for visual formation. During the process of visual signal transduction, opsins serve as the molecular foundation for vision formation. They can be divided into two major categories: visual opsins and non-visual opsins. Among these, melanopsin, as a member of the non-visual opsin family, acts as a key upstream factor in the circadian phototransduction pathway of fish. In this review, we examine the adaptability of fish retinal structures to light reception and describe in detail the gene diversity and relative expression levels of fish opsins. We also comprehensively describe the molecular mechanisms by which fish adapt to changes in the underwater light environment, and we conclude that melanopsin, as a non-imaging photoreceptor, possesses not only core light-sensing functions but also non-image-forming roles such as circadian rhythm regulation, body coloration changes, and hormone secretion. This review suggests that future research should not only elucidate the physiological functions of melanopsin in fish but also comprehensively reveal the mechanisms underlying the multi-adaptive nature of fish vision across varying light environments. Through such studies, researchers can gain a deeper understanding of the physiological regulatory mechanisms of fish in complex light environments, and then formulate light environment management strategies, optimize aquaculture practices, improve economic returns, and promote the development of related fields.
(This article belongs to the Special Issue Adaptation and Response of Fish to Environmental Changes)
22 pages, 7096 KB  
Article
An Improved ORB-KNN-Ratio Test Algorithm for Robust Underwater Image Stitching on Low-Cost Robotic Platforms
by Guanhua Yi, Tianxiang Zhang, Yunfei Chen and Dapeng Yu
J. Mar. Sci. Eng. 2026, 14(2), 218; https://doi.org/10.3390/jmse14020218 - 21 Jan 2026
Viewed by 77
Abstract
Underwater optical images often exhibit severe color distortion, weak texture, and uneven illumination due to light absorption and scattering in water. These issues result in unstable feature detection and inaccurate image registration. To address these challenges, this paper proposes an underwater image stitching method that integrates ORB (Oriented FAST and Rotated BRIEF) feature extraction with a fixed-ratio constraint matching strategy. First, lightweight color and contrast enhancement techniques are employed to restore color balance and improve local texture visibility. Then, ORB descriptors are extracted and matched via a KNN (K-Nearest Neighbors) search, and Lowe’s ratio test is applied to eliminate false matches caused by weak texture similarity. Finally, the geometric transformation between image frames is estimated by incorporating robust optimization, ensuring stable homography computation. Experimental results on real underwater datasets show that the proposed method significantly improves stitching continuity and structural consistency, achieving 40–120% improvements in SSIM (Structural Similarity Index) and PSNR (peak signal-to-noise ratio) over conventional Harris–ORB + KNN, SIFT (scale-invariant feature transform) + BF (brute force), SIFT + KNN, and AKAZE (accelerated KAZE) + BF methods while maintaining processing times within one second. These results indicate that the proposed method is well-suited for real-time underwater environment perception and panoramic mapping on low-cost, micro-sized underwater robotic platforms.
(This article belongs to the Section Ocean Engineering)
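The pipeline the abstract outlines (contrast enhancement, ORB features, KNN matching, Lowe's ratio test, robust homography) maps closely onto standard OpenCV calls. Here is a minimal sketch; the CLAHE settings, feature count, and 0.75 ratio are common defaults, not the paper's tuned values.

```python
import cv2
import numpy as np

def stitch_pair_homography(img1, img2, ratio=0.75):
    """ORB + KNN + Lowe's ratio test + RANSAC homography (minimal sketch)."""
    def enhance(img):
        # CLAHE on the luminance channel as a lightweight contrast enhancement
        lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
        return cv2.cvtColor(cv2.merge([l, a, b]), cv2.COLOR_LAB2BGR)

    g1 = cv2.cvtColor(enhance(img1), cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(enhance(img2), cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=4000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)

    # KNN matching on binary descriptors, then Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < ratio * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust estimation
    return H
```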

21 pages, 6960 KB  
Article
First-Stage Algorithm for Photo-Identification and Location of Marine Species
by Rosa Isela Ramos-Arredondo, Francisco Javier Gallegos-Funes, Blanca Esther Carvajal-Gámez, Guillermo Urriolagoitia-Sosa, Beatriz Romero-Ángeles, Alberto Jorge Rosales-Silva and Erick Velázquez-Lozada
Animals 2026, 16(2), 281; https://doi.org/10.3390/ani16020281 - 16 Jan 2026
Viewed by 123
Abstract
Marine species photo-identification and location for tracking are crucial for understanding the characteristics and patterns that distinguish each marine species. However, challenges in camera data acquisition and the unpredictability of animal movements have restricted progress in this field. To address these challenges, we present a novel algorithm for the first stage of marine species photo-identification and location methods. For marine species photo-identification applications, a color index-based thresholding segmentation method is proposed. This method is based on the characteristics of the GMR (Green Minus Red) color index and the proposed empirical BMG (Blue Minus Green) color index. These color indexes are modified to provide better information about the color of regions, such as marine animals, the sky, and land found in the scientific sighting images, allowing an optimal thresholding segmentation method. In the case of marine species location, a SURF (Speeded-Up Robust Features)-based supervised classifier is used to obtain the location of the marine animal in the sighting image; with this, its tracking could be obtained. The tests were performed with the Kaggle happywhale public database; the precision obtained ranges from 0.77 up to 0.98 using the proposed indexes. Finally, the proposed method could be used in real-time marine species tracking, with a processing time of 0.33 s for images of 645 × 376 pixels using a standard PC.
(This article belongs to the Section Aquatic Animals)
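The GMR and BMG indexes are simple per-pixel channel differences, so the thresholding idea can be sketched in a few NumPy/OpenCV lines. The zero thresholds below are placeholders, not the paper's fitted values.

```python
import cv2
import numpy as np

def color_index_masks(img_bgr, gmr_thresh=0, bmg_thresh=0):
    """Threshold the GMR (Green Minus Red) and BMG (Blue Minus Green)
    color indexes; thresholds here are illustrative placeholders."""
    b, g, r = cv2.split(img_bgr.astype(np.int16))  # int16 avoids uint8 underflow
    gmr = g - r   # higher where green dominates red (e.g., land regions)
    bmg = b - g   # higher where blue dominates green (e.g., sky, open water)
    return (gmr > gmr_thresh).astype(np.uint8), (bmg > bmg_thresh).astype(np.uint8)
```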

24 pages, 11080 KB  
Article
Graph-Based and Multi-Stage Constraints for Hand–Object Reconstruction
by Wenrun Wang, Jianwu Dang, Yangping Wang and Hui Yu
Sensors 2026, 26(2), 535; https://doi.org/10.3390/s26020535 - 13 Jan 2026
Viewed by 200
Abstract
Reconstructing hand and object shapes from a single view during interaction remains challenging due to severe mutual occlusion and the need for high physical plausibility. To address this, we propose a novel framework for hand–object interaction reconstruction based on holistic, multi-stage collaborative optimization. Unlike methods that process hands and objects independently or apply constraints as late-stage post-processing, our model progressively enforces physical consistency and geometric accuracy throughout the entire reconstruction pipeline. Our network takes an RGB-D image as input. An adaptive feature fusion module first combines color and depth information to improve robustness against sensing uncertainties. We then introduce structural priors for 2D pose estimation and leverage texture cues to refine depth-based 3D pose initialization. Central to our approach is the iterative application of a dense mutual attention mechanism during sparse-to-dense mesh recovery, which dynamically captures interaction dependencies while refining geometry. Finally, we use a Signed Distance Function (SDF) representation explicitly designed for contact surfaces to prevent interpenetration and ensure physically plausible results. Through comprehensive experiments, our method demonstrates significant improvements on the challenging ObMan and DexYCB benchmarks, outperforming state-of-the-art techniques. Specifically, on the ObMan dataset, our approach achieves hand CDh and object CDo metrics of 0.077 cm² and 0.483 cm², respectively. Similarly, on the DexYCB dataset, it attains hand CDh and object CDo values of 0.251 cm² and 1.127 cm², respectively.
(This article belongs to the Section Sensing and Imaging)
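The role an SDF plays in preventing interpenetration can be illustrated with a tiny penetration penalty. This is a generic sketch under the assumption of a callable SDF that is negative inside the object; it is not the authors' actual loss.

```python
import torch

def penetration_loss(hand_verts, object_sdf):
    """Penalize hand vertices lying inside the object. `object_sdf` is a
    hypothetical callable mapping (N, 3) points to signed distances,
    negative inside the surface."""
    d = object_sdf(hand_verts)       # (N,) signed distances to the object
    return torch.relu(-d).sum()      # only interpenetrating vertices contribute
```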

23 pages, 5736 KB  
Article
A Model for Identifying the Fermentation Degree of Tieguanyin Oolong Tea Based on RGB Image and Hyperspectral Data
by Yuyan Huang, Yongkuai Chen, Chuanhui Li, Tao Wang, Chengxu Zheng and Jian Zhao
Foods 2026, 15(2), 280; https://doi.org/10.3390/foods15020280 - 12 Jan 2026
Viewed by 166
Abstract
The fermentation process of oolong tea is a critical step in shaping its quality and flavor profile. In this study, the fermentation degree of Anxi Tieguanyin oolong tea was assessed using image and hyperspectral features. Machine learning algorithms, including Support Vector Machine (SVM), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU), were employed to develop models based on both single-source features and multi-source fused features. First, color and texture features were extracted from RGB images and then processed through Pearson correlation-based feature selection and Principal Component Analysis (PCA) for dimensionality reduction. For the hyperspectral data, preprocessing was conducted using Normalization (Nor) and Standard Normal Variate (SNV), followed by feature selection and dimensionality reduction with Competitive Adaptive Reweighted Sampling (CARS), Successive Projections Algorithm (SPA), and PCA. We then performed mid-level fusion on the two feature sets and selected the most relevant features using L1 regularization for the final modeling stage. Finally, SHapley Additive exPlanations (SHAP) analysis was conducted on the optimal models to reveal key features from both hyperspectral bands and image data. The results indicated that models based on single features achieved test set accuracies of 68.06% to 87.50%, while models based on data fusion achieved 77.78% to 94.44%. Specifically, the Pearson+Nor-SPA+L1+SVM fusion model achieved the highest accuracy of 94.44%. This demonstrates that data feature fusion enables a more comprehensive characterization of the fermentation process, significantly improving model accuracy. SHAP analysis revealed that the hyperspectral bands at 967, 942, 814, 784, 781, 503, 413, and 416 nm, along with the image features Hσ and H, played the most crucial roles in distinguishing tea fermentation stages. These findings provide a scientific basis for assessing the fermentation degree of Tieguanyin oolong tea and support the development of intelligent detection systems.
(This article belongs to the Section Food Analytical Methods)
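Several of the named steps (SNV preprocessing, PCA reduction, SVM classification) compose naturally in scikit-learn. The sketch below is a simplified single-source spectral pipeline, not the paper's full fusion model; the component count and kernel are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

def snv(X):
    """Standard Normal Variate: center and scale each spectrum individually."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# SNV preprocessing -> PCA -> RBF-kernel SVM (illustrative hyperparameters)
model = make_pipeline(FunctionTransformer(snv), PCA(n_components=10), SVC(kernel="rbf"))
# model.fit(train_spectra, fermentation_stages)
# accuracy = model.score(test_spectra, test_stages)
```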

21 pages, 2797 KB  
Article
Visual Quality Assessment on the Vista Landscape of Beijing Central Axis Using VR Panoramic Technology
by Xiaomin Hu, Yifei Liu, Gang Yu, Mengyao Xu and Xingyan Ge
Buildings 2026, 16(2), 315; https://doi.org/10.3390/buildings16020315 - 12 Jan 2026
Viewed by 175
Abstract
Vista landscapes of historic cities embody unique spatial order and cultural memory, and the scientific quantification of their visual quality presents a common challenge for both heritage conservation and urban renewal. Focusing on the Beijing Central Axis, this study integrates VR panoramic technology with the SBE-SD evaluation method to develop a visual quality assessment framework suitable for vista landscapes of historic cities, systematically evaluating sectional differences in scenic beauty and identifying their key influencing factors. Thirteen typical viewing places and 17 assessment points were selected, and panoramic images were captured at each point. An evaluation framework comprising 3 first-level factors, 11 second-level factors, and 24 third-level factors was established, and a corresponding scoring table was designed, through which students from related disciplines were recruited to conduct the evaluation. After obtaining valid data, scenic beauty values and landscape factor scores were analyzed, followed by correlation tests and backward stepwise regression. The results show the following: (1) The scenic beauty of the vista landscapes along the Central Axis shows sectional differentiation, with the middle section achieving the highest scenic beauty value, followed by the northern section, and the southern section scoring the lowest; specifically, Wanchunting Pavilion South scored the highest, while Tianqiao Bridge scored the lowest. (2) In terms of landscape factor scores, within spatial form, color scored the highest, followed by texture and scale, with volume scoring the lowest; within marginal profile, integrity scored higher than visual dominance; within visual structure, visual organization scored the highest, followed by visual patches, with visual hierarchy scoring the lowest. (3) Regression analysis identified six key influencing factors, ranked in descending order of significance as follows: color coordination degree of traditional buildings, spatial openness, spatial symmetry, hierarchy sense of buildings, texture regularity of traditional buildings, and visual dominance of historical landmark buildings. This study establishes a quantitative assessment pathway that connects subjective perception and objective environment with a replicable process, providing methodological support for the refined conservation and optimization of vista landscapes in historic cities while demonstrating the application potential of VR panoramic technology in urban landscape evaluation.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
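The backward stepwise regression step can be sketched generically with statsmodels: refit OLS and drop the least significant factor until everything remaining clears a threshold. The 0.05 cutoff is a conventional choice, not necessarily the study's.

```python
import statsmodels.api as sm

def backward_stepwise(X, y, alpha=0.05):
    """Backward elimination over landscape factor scores (X: pandas DataFrame)
    against scenic beauty values (y). Generic sketch, not the study's exact setup."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model          # all remaining predictors are significant
        cols.remove(worst)        # drop the least significant factor and refit
    return None                   # no factor survived elimination
```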

21 pages, 58532 KB  
Article
Joint Inference of Image Enhancement and Object Detection via Cross-Domain Fusion Transformer
by Bingxun Zhao and Yuan Chen
Computers 2026, 15(1), 43; https://doi.org/10.3390/computers15010043 - 10 Jan 2026
Viewed by 145
Abstract
Underwater vision is fundamental to ocean exploration, yet it is frequently impaired by underwater degradation, including low contrast, color distortion, and blur, thereby presenting significant challenges for underwater object detection (UOD). Most existing methods employ underwater image enhancement as a preprocessing step to improve visual quality prior to detection. However, image enhancement and object detection are optimized for fundamentally different objectives, and directly cascading them leads to feature distribution mismatch. Moreover, prevailing dual-branch architectures process enhancement and detection independently, overlooking multi-scale interactions across domains and thus constraining the learning of cross-domain feature representations. To overcome these limitations, we propose an underwater cross-domain fusion Transformer detector (UCF-DETR). UCF-DETR jointly leverages image enhancement and object detection by exploiting the complementary information from the enhanced and original image domains. Specifically, an underwater image enhancement module is employed to improve visibility. We then design a cross-domain feature pyramid to integrate fine-grained structural details from the enhanced domain with semantic representations from the original domain. A cross-domain query interaction mechanism is introduced to model inter-domain query relationships, leading to accurate object localization and boundary delineation. Extensive experiments on the challenging DUO and UDD benchmarks demonstrate that UCF-DETR consistently outperforms state-of-the-art methods for UOD.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision (2nd Edition))
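Fusing features across the enhanced and original domains is commonly done with cross-attention; the following is a minimal PyTorch illustration of that general idea at one pyramid level, not UCF-DETR itself. Dimension and head counts are assumptions.

```python
import torch.nn as nn

class CrossDomainFusion(nn.Module):
    """Cross-attention fusion of enhanced-domain and original-domain features
    (illustrative sketch; inputs are (batch, tokens, dim) feature sequences)."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_orig, feat_enh):
        # Original-domain semantics query fine-grained enhanced-domain structure.
        fused, _ = self.attn(query=feat_orig, key=feat_enh, value=feat_enh)
        return self.norm(feat_orig + fused)   # residual connection + norm
```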

40 pages, 16360 KB  
Review
Artificial Intelligence Meets Nail Diagnostics: Emerging Image-Based Sensing Platforms for Non-Invasive Disease Detection
by Tejrao Panjabrao Marode, Vikas K. Bhangdiya, Shon Nemane, Dhiraj Tulaskar, Vaishnavi M. Sarad, K. Sankar, Sonam Chopade, Ankita Avthankar, Manish Bhaiyya and Madhusudan B. Kulkarni
Bioengineering 2026, 13(1), 75; https://doi.org/10.3390/bioengineering13010075 - 8 Jan 2026
Viewed by 716
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming medical diagnostics, but the human nail, an easily accessible and information-rich biological substrate, remains underexploited in digital health. Nail pathologies serve as easily observed, non-invasive biomarkers of disease, including systemic conditions such as anemia, diabetes, psoriasis, melanoma, and fungal infections. This review presents the first broad synthesis of AI/ML-based image analysis of nail lesions for diagnostic purposes. Where dermatological reviews to date have been wider in scope, this review focuses specifically on nail-related diagnosis and screening. The various technological modalities involved (smartphone imaging, dermoscopy, Optical Coherence Tomography) are presented, together with the image processing techniques used (color correction, segmentation, cropping of regions of interest) and models ranging from classical methods to deep learning, with annotated descriptions of each. AI applications for specific diseases are also described, together with analytical discussions of real-world impediments to clinical application, including scarcity of data, variation in skin type, annotation errors, and other barriers to clinical adoption. Emerging solutions are emphasized as well: explainable AI (XAI), federated learning, and smartphone-allied diagnostic platforms. Bridging clinical dermatology, artificial intelligence, and mobile health, this review consolidates existing knowledge and charts a path toward scalable, equitable, and trustworthy nail-based diagnostic techniques. Our findings advocate for interdisciplinary innovation to bring AI-enabled nail analysis from lab prototypes to routine healthcare and global screening initiatives.
(This article belongs to the Special Issue Bioengineering in a Generative AI World)

34 pages, 9553 KB  
Article
Research on Multi-Stage Optimization for High-Precision Digital Surface Model and True Digital Orthophoto Map Generation Methods
by Yingwei Ge, Renke Ji, Bingxuan Guo, Qinsi Wang, Xiao Jiang and Mofei Chen
Remote Sens. 2026, 18(2), 197; https://doi.org/10.3390/rs18020197 - 7 Jan 2026
Viewed by 184
Abstract
To enhance the overall quality and consistency of depth maps, Digital Surface Models (DSM), and True Digital Orthophoto Maps (TDOM) in UAV image reconstruction, this paper proposes a multi-stage adaptive optimization generation method. First, to address the noise and outlier issues in depth maps, an adaptive joint bilateral filtering-based optimization method is introduced. This method repairs anomalous depth values using a four-directional filling strategy and incorporates image-guided joint bilateral filtering to enhance edge structure representation, effectively improving the accuracy and continuity of the depth map. Next, during the DSM generation stage, a method based on a depth-value voting space and elevation anomaly detection is proposed. A joint mechanism of elevation calculation and anomaly point detection is used to suppress noise and errors, while a height-value completion strategy significantly enhances the geometric accuracy and integrity of the DSM. Finally, in the TDOM generation process, occlusion detection and gap-line generation techniques are introduced. Together with uniform lighting, color adjustment, and image gap optimization strategies, this improves texture stitching continuity and brightness consistency, effectively reducing artifacts caused by gaps, blurriness, and lighting differences. Experimental results show that the proposed method significantly improves depth map smoothness, DSM geometric accuracy, and TDOM visual consistency compared to traditional methods, providing a complete and efficient technical pathway for high-quality surface reconstruction.
(This article belongs to the Special Issue Remote Sensing for 2D/3D Mapping)
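Image-guided joint bilateral filtering of a depth map is available out of the box in OpenCV's contrib modules; a minimal sketch follows. It requires opencv-contrib-python, and the filter parameters are illustrative, not the paper's adaptive values.

```python
import cv2
import numpy as np

def refine_depth(depth, guide_bgr, d=9, sigma_color=25.0, sigma_space=9.0):
    """Smooth a depth map while preserving edges, guided by the RGB image.
    Parameter values are illustrative guesses, not the paper's settings."""
    guide = cv2.cvtColor(guide_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    return cv2.ximgproc.jointBilateralFilter(
        guide, depth.astype(np.float32), d, sigma_color, sigma_space)
```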

20 pages, 6958 KB  
Article
Bird Detection in the Field with the IA-Mask-RCNN
by Yassine Sohbi, Lucie Zgainski and Christophe Sausse
Appl. Sci. 2026, 16(2), 584; https://doi.org/10.3390/app16020584 - 6 Jan 2026
Viewed by 196
Abstract
In recent times, field crop damage caused by birds, such as corvids and pigeons, has become a critical problem for many farmers. Damage can be as serious as the loss of a large part of the harvest. Several solutions have been proposed, but none are effective. An example is the use of scarecrows, but birds eventually adapt to them over time, and so they become ineffective. To study bird behavior and to propose a bird deterrent that would adapt to the presence of birds, we set up an experimental image-taking system on several plots of land over a period of 4–5 years. Around fifteen terabytes of images taken in the field were acquired. Our aim was to automatically detect these birds using deep learning methods and then to activate a real-time scarer. This work meets two challenges: the first is agroecological, as bird damage has become a major issue, and the second is computational, as detecting birds in the field is difficult: the individuals appear small because they are far from the camera lens, and field conditions are often less than optimal (darkness, confusion between the pigeons’ colors and the ground, etc.). The Mask-RCNN in its original configuration is not suited to detecting small individuals. We mainly focused on the model’s hyperparameters to better adapt it to our study context. As a result, we improved the detection of small individuals using, among other things, appropriate anchor scale design and image processing techniques. At the same time, we built an original dataset focused on small individuals called BirdyDataset. The model can detect corvids and pigeons with an accuracy of 78% under real field conditions.
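The kind of anchor-scale adjustment the abstract describes for small, distant birds can be sketched with torchvision's Mask R-CNN; the specific sizes and the class count below are illustrative assumptions, not the authors' configuration.

```python
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.anchor_utils import AnchorGenerator

# Smaller anchor sizes so the RPN covers distant (small) birds; one size
# tuple per FPN level, three aspect ratios each.
anchor_gen = AnchorGenerator(
    sizes=((8,), (16,), (32,), (64,), (128,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)
model = maskrcnn_resnet50_fpn(
    weights=None,
    num_classes=3,  # background, corvid, pigeon (assumed class layout)
    rpn_anchor_generator=anchor_gen,
)
```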

17 pages, 7764 KB  
Article
Research on Estimating Backfat Thickness in Jinfen White Pigs Using Deep Learning and Image Processing
by Wenwen Xing, Hong Li, Xuyang Fu, Ziyu Li, Pengzhe Yi and Jianlong Zhang
Agriculture 2026, 16(2), 138; https://doi.org/10.3390/agriculture16020138 - 6 Jan 2026
Viewed by 223
Abstract
To address the time-consuming, labor-intensive, and inefficient nature of existing contact-based measurements of sow backfat thickness (BFT), in this study, a method to estimate BFT from depth images using deep learning and image processing is proposed. Measurements of BFT and hip depth images were collected from 254 Jinfen White sows. Following preprocessing, including depth-value filtering and colorization, a modified YOLOv8n-ShuffleNetV2 detector was trained and deployed to predict regions of interest in the buttock images. Depth values were then extracted from these regions and converted into distance estimates. Then, 11 external morphological pixel-based parameters were extracted, including hip area, hip-circumference length, and the area of the fitted ellipse. A random sample of 203 sows was selected for training and testing; the relationship between BFT and the external morphological parameters was analyzed in 152 of them, with the rest used for testing. The results show significant positive correlations between BFT and several hip morphological parameters, with Pearson correlation coefficients exceeding 0.90 for both hip area and fitted-ellipse area. Principal component analysis was applied to the selected hip features to extract area- and length-related factors as inputs to a machine learning model. An elastic net regression model was employed to estimate BFT. The model’s generalization capability was evaluated using 51 sows not involved in training and testing. The model achieved R2 = 0.8617, MSE = 4.3626 mm², and MAE = 1.6456 mm. Finally, a BFT estimation system for Jinfen White pigs was developed using PyQt5 and Python, which enables automatic preprocessing of sow hip images and real-time estimation of BFT. Together, these results address the cumbersome and inefficient traditional manual collection of sow BFT data and support precision management in sow breeding farms.
(This article belongs to the Section Farm Animal Production)
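The PCA-then-elastic-net estimation step composes directly as a scikit-learn pipeline; a minimal sketch follows. The component count and regularization strength are assumptions, not the study's fitted values.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hip morphology features (hip area, hip-circumference length, fitted-ellipse
# area, ...) -> principal components -> elastic net regression to BFT in mm.
bft_model = make_pipeline(StandardScaler(), PCA(n_components=2), ElasticNet(alpha=0.1))
# bft_model.fit(hip_features_train, bft_train)
# bft_pred = bft_model.predict(hip_features_test)
```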

18 pages, 21035 KB  
Article
Chlorophyll Retrieval in Sun Glint Region Based on VIIRS Rayleigh-Corrected Reflectance
by Dongyang Fu, Yan Wang, Bangyi Tao, Tianjing Luan, Yixian Zhu, Changpeng Li, Bei Liu, Guo Yu and Yongze Li
Remote Sens. 2026, 18(1), 183; https://doi.org/10.3390/rs18010183 - 5 Jan 2026
Viewed by 304
Abstract
Sun glint is commonly observed as interference in the imaging process of ocean color satellite sensors, making the extraction of water color information in sun glint-affected areas challenging and often leading to significant data gaps. The remote sensing baseline indices, calculated based on Rayleigh-corrected reflectance (Rrc), are recognized as effective in reflecting water color variability in sun glint-affected regions. However, the accurate extraction of the Rrc baseline indices requires sun glint correction. The determination of sun glint correction coefficients for different bands lacks a clear methodology, and the currently available correction coefficients are not applicable to different sea regions. Therefore, this study focuses on the South China Sea, where VIIRS imagery is significantly affected by sun glint. Based on paired datasets comprising sun glint-affected and -unaffected images acquired over the same region on adjacent dates, sun glint correction coefficients for each spectral band were derived by maximizing the cosine similarity of histograms constructed from three baseline indices: SS486 (Spectral Shape index at 486 nm), CI551 (Color Index at 551 nm), and SS671 (Spectral Shape index at 671 nm). To further evaluate the effectiveness of the proposed correction, chlorophyll-a concentrations were retrieved using a Random Forest regression model trained with baseline indices derived from sun glint-free Rrc data and subsequently applied to baseline indices after sun glint correction. Comparative analyses of both baseline index extraction and chlorophyll-a retrieval demonstrate that the proposed optimal-value and mean-value correction approaches effectively mitigate sun glint effects. The mean sun glint correction coefficients α(443), α(486), α(551), α(671) and α(745) were determined to be 0.75, 0.83, 0.89, 0.95 and 0.94, respectively. These coefficients can be applied as sun glint correction coefficients for the VIIRS Rrc data in the South China Sea region. Furthermore, the proposed method for determining sun glint correction coefficients offers a transferable framework that can be extended to other sea areas.
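The coefficient-selection criterion (maximize cosine similarity between histograms of glint-affected and glint-free baseline indices) can be sketched in NumPy. For brevity the coefficient is applied to the index values directly here, whereas the paper derives per-band coefficients applied to Rrc before the indices are recomputed; the search grid is also an assumption.

```python
import numpy as np

def best_glint_coefficient(index_glint, index_clean,
                           alphas=np.arange(0.50, 1.01, 0.01)):
    """Pick the coefficient whose corrected-index histogram is most
    cosine-similar to the glint-free histogram (selection criterion only)."""
    bins = np.linspace(min(index_clean.min(), index_glint.min()),
                       max(index_clean.max(), index_glint.max()), 65)
    h_clean, _ = np.histogram(index_clean, bins=bins, density=True)
    best_alpha, best_sim = None, -1.0
    for a in alphas:
        h_corr, _ = np.histogram(index_glint * a, bins=bins, density=True)
        sim = h_corr @ h_clean / (np.linalg.norm(h_corr) *
                                  np.linalg.norm(h_clean) + 1e-12)
        if sim > best_sim:
            best_alpha, best_sim = a, sim
    return best_alpha
```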

21 pages, 2824 KB  
Article
A 3D Microfluidic Paper-Based Analytical Device with Smartphone-Based Colorimetric Readout for Phosphate Sensing
by Jose Manuel Graña-Dosantos, Francisco Pena-Pereira, Carlos Bendicho and Inmaculada de la Calle
Sensors 2026, 26(1), 335; https://doi.org/10.3390/s26010335 - 4 Jan 2026
Viewed by 552
Abstract
In this work, a 3D microfluidic paper-based analytical device (3D-µPAD) was developed for the smartphone-based colorimetric determination of phosphate in environmental samples. The assay relied on the formation of a blue-colored product (molybdenum blue) in the detection area of the 3D-µPAD upon reduction of the heteropolyacid H3PMo12O40 formed in the presence of phosphate. A number of experimental parameters were optimized, including geometric aspects of the 3D-µPADs, digitization and image processing conditions, the amount of chemicals deposited in specific areas of the 3D-µPAD, and the reaction time. In addition, the stability of the device was evaluated at three different storage temperatures. Under optimal conditions, the working range was found to be from 4 to 25 mg P/L (12–77 mg PO43−/L). The limits of detection (LOD) and quantification (LOQ) were 0.015 mg P/L and 0.05 mg P/L, respectively. The repeatability and intermediate precision for a 5 mg P/L standard were 4.8% and 7.1%, respectively. The proposed colorimetric assay was successfully applied to phosphorus determination in various waters, soils, and sediments, obtaining recoveries in the range of 94 to 107%. The ready-to-use 3D-µPAD showed a greener profile than the standard method for phosphate determination, being affordable, easy to use, and suitable for citizen science applications.
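A smartphone colorimetric readout of this kind typically reduces to a linear calibration between an image-derived signal and concentration; a generic sketch follows. Using the inverted red channel as the analytical signal is an assumption (molybdenum blue absorbs strongly in the red), not the paper's stated channel.

```python
import numpy as np
from scipy import stats

def colorimetric_signal(roi_rgb):
    """Inverted mean red-channel intensity of the detection-zone ROI
    (red channel chosen here as an assumption for a blue product)."""
    return 255.0 - roi_rgb[..., 0].astype(float).mean()

# Calibration with standards of known concentration (mg P/L), then prediction:
# slope, intercept, r, p, se = stats.linregress(
#     [colorimetric_signal(roi) for roi in standard_rois], standard_concs)
# conc = slope * colorimetric_signal(sample_roi) + intercept
```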
