Search Results (1,783)

Search Parameters:
Keywords = light-field images

20 pages, 4847 KiB  
Article
FCA-STNet: Spatiotemporal Growth Prediction and Phenotype Extraction from Image Sequences for Cotton Seedlings
by Yiping Wan, Bo Han, Pengyu Chu, Qiang Guo and Jingjing Zhang
Plants 2025, 14(15), 2394; https://doi.org/10.3390/plants14152394 - 2 Aug 2025
Viewed by 127
Abstract
To address the limitations of existing cotton seedling growth prediction methods in field environments, specifically poor representation of spatiotemporal features and low visual fidelity in texture rendering, this paper proposes an algorithm for predicting cotton seedling growth from images based on FCA-STNet. The model leverages historical sequences of cotton seedling RGB images to generate an image of the predicted growth at time t + 1 and extracts 37 phenotypic traits from the predicted image. A novel STNet structure is designed to enhance the representation of spatiotemporal dependencies, while an Adaptive Fine-Grained Channel Attention (FCA) module is integrated to capture both global and local feature information. This attention mechanism focuses on individual cotton plants and their textural characteristics, effectively reducing interference from common field-related challenges such as insufficient lighting, leaf fluttering, and wind disturbances. The experimental results demonstrate that the predicted images achieved an MSE of 0.0086, an MAE of 0.0321, an SSIM of 0.8339, and a PSNR of 20.7011 on the test set, representing improvements of 2.27%, 0.31%, 4.73%, and 11.20%, respectively, over the baseline STNet. The method outperforms several mainstream spatiotemporal prediction models. Furthermore, the majority of the predicted phenotypic traits exhibited correlations with actual measurements with coefficients above 0.8, indicating high prediction accuracy. The proposed FCA-STNet model enables visually realistic prediction of cotton seedling growth in open-field conditions, offering a new perspective for research in growth prediction.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
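
The abstract does not spell out the FCA module's internals, but its role, reweighting feature channels from a global descriptor with local cross-channel interaction, can be illustrated with a minimal ECA-style sketch in PyTorch. The module name, kernel size, and shapes below are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Minimal channel-attention sketch: global average pooling followed by a
    1D convolution across channels (ECA-style local interaction)."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the spatiotemporal backbone
        w = x.mean(dim=(2, 3))            # (B, C) global channel descriptor
        w = self.conv(w.unsqueeze(1))     # (B, 1, C) local cross-channel interaction
        w = torch.sigmoid(w).squeeze(1)   # (B, C) attention weights
        return x * w[:, :, None, None]    # reweight channels

feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention()(feats).shape)    # torch.Size([2, 64, 32, 32])
```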

16 pages, 6356 KiB  
Article
Simulation-Based Verification and Application Research of Spatial Spectrum Modulation Technology for Optical Imaging Systems
by Yucheng Li, Yang Zhang, Houyun Liu, Daokuan Wang and Jiahui Yuan
Photonics 2025, 12(8), 755; https://doi.org/10.3390/photonics12080755 - 27 Jul 2025
Viewed by 486
Abstract
Leveraging Fourier optics theory and Abbe's imaging principle, this study establishes that optical imaging fundamentally involves selective spatial spectrum recombination at the Fourier plane. Three classical experiments quantitatively validate universal spectrum manipulation mechanisms: (1) the Abbe–Porter experiment confirmed spectral filtering, directly demonstrating image synthesis from transmitted spectral components; (2) Zernike phase-contrast microscopy quantified spectral phase modulation, overcoming the weak-phase-object detection limit by significantly enhancing contrast; and (3) optical joint transform correlation (JTC) demonstrated efficient spectral amplitude modulation for high-speed, high-accuracy image recognition. Collectively, these results form a comprehensive framework for active light-field manipulation at the spectral plane, extending modulation capabilities to the phase and amplitude dimensions, and provide a foundational theoretical and technical basis for designing advanced optical systems.
(This article belongs to the Special Issue Advanced Research in Computational Optical Imaging)
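
The Abbe–Porter filtering idea is easy to reproduce numerically: form the object's spatial spectrum with an FFT (standing in for the Fourier plane of a 4f system), mask part of it, and transform back. A minimal sketch; the grating period and slit width are arbitrary choices:

```python
import numpy as np

# Toy Abbe-Porter experiment: image a crossed grating, then pass only a
# horizontal slit of its spatial spectrum at the (simulated) Fourier plane.
x = np.arange(256)
obj = ((np.sin(2 * np.pi * x / 16)[None, :] > 0) &
       (np.sin(2 * np.pi * x / 16)[:, None] > 0)).astype(float)  # crossed grating

spectrum = np.fft.fftshift(np.fft.fft2(obj))   # spectral (Fourier) plane

# Spectral filtering: keep the row of orders near ky = 0; the reconstructed
# image then retains only the structure varying along x (vertical stripes).
mask = np.zeros_like(obj)
mask[126:130, :] = 1.0
image = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
print(obj.shape, image.shape)
```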

23 pages, 9118 KiB  
Article
Scattering Characteristics of a Circularly Polarized Bessel Pincer Light-Sheet Beam Interacting with a Chiral Sphere of Arbitrary Size
by Shu Zhang, Shiguo Chen, Qun Wei, Renxian Li, Bing Wei and Ningning Song
Micromachines 2025, 16(8), 845; https://doi.org/10.3390/mi16080845 - 24 Jul 2025
Viewed by 178
Abstract
The scattering interaction between a circularly polarized Bessel pincer light-sheet beam and a chiral particle is investigated within the framework of generalized Lorenz–Mie theory (GLMT). The incident electric field distribution is rigorously derived via the vector angular spectrum decomposition method (VASDM), with subsequent determination of the beam-shape coefficients (BSCs) p_mn^u and q_mn^u through multipole expansion in the basis of vector spherical wave functions (VSWFs). The expansion coefficients for the scattered field (A_mn^s, B_mn^s) and the interior field (A_mn, B_mn) are derived by imposing boundary conditions. Simulations highlight notable variations in the scattering field, near-surface field distribution, and far-field intensity, strongly influenced by the dimensionless size parameter ka, the chirality κ, and the beam parameters (beam order l and beam scaling parameter α₀). These findings provide insights into the role of chirality in modulating scattering asymmetry and localization effects. The results are particularly relevant for applications in optical manipulation and super-resolution imaging in single-molecule microbiology.
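
For orientation, the scattered field in such GLMT treatments is expanded on outgoing VSWFs, with the coefficients fixed by the boundary conditions at the sphere surface. A standard form (the normalization convention is one common choice and may differ from the paper's):

```latex
% Scattered field on outgoing vector spherical wave functions (standard GLMT form)
\mathbf{E}_s(\mathbf{r}) \;=\; \sum_{n=1}^{\infty} \sum_{m=-n}^{n} E_{mn}
  \left[ A_{mn}^{s}\,\mathbf{N}_{mn}^{(3)}(k\mathbf{r})
       \;+\; B_{mn}^{s}\,\mathbf{M}_{mn}^{(3)}(k\mathbf{r}) \right]
```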

19 pages, 9361 KiB  
Article
A Multi-Domain Enhanced Network for Underwater Image Enhancement
by Tianmeng Sun, Yinghao Zhang, Jiamin Hu, Haiyuan Cui and Teng Yu
Information 2025, 16(8), 627; https://doi.org/10.3390/info16080627 - 23 Jul 2025
Viewed by 168
Abstract
Owing to the intricate variability of underwater environments, images suffer from degradation including light absorption, scattering, and color distortion. U-Net architectures severely limit global context utilization due to fixed-receptive-field convolutions, while traditional attention mechanisms incur quadratic complexity and fail to efficiently fuse spatial–frequency features. Unlike methods focused on local enhancement, the proposed HMENet integrates a transformer sub-network for long-range dependency modeling and dual-domain attention for bidirectional spatial–frequency fusion. This design enlarges the receptive field while maintaining linear complexity. On the UIEB and EUVP datasets, HMENet achieves PSNR/SSIM of 25.96/0.946 and 27.92/0.927, surpassing HCLR-Net by 0.97 dB and 1.88 dB, respectively.
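
The dual-domain attention is not specified in the abstract; the sketch below only illustrates the general idea of fusing a spatial convolution branch with a frequency branch that reweights FFT components. The module structure, the per-channel gain, and the 1x1 fusion are assumptions for illustration, not HMENet's actual design.

```python
import torch
import torch.nn as nn

class FreqSpatialFusion(nn.Module):
    """Hypothetical dual-domain block: a spatial 3x3 conv branch plus a
    frequency branch that modulates the rFFT spectrum, fused by a 1x1 conv."""
    def __init__(self, ch: int):
        super().__init__()
        self.spatial = nn.Conv2d(ch, ch, 3, padding=1)
        self.freq_gate = nn.Parameter(torch.ones(ch, 1, 1))  # learnable per-channel gain
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x):
        s = self.spatial(x)
        F = torch.fft.rfft2(x, norm="ortho")                  # frequency domain
        f = torch.fft.irfft2(F * self.freq_gate, s=x.shape[-2:], norm="ortho")
        return self.fuse(torch.cat([s, f], dim=1))            # spatial-frequency fusion

print(FreqSpatialFusion(16)(torch.randn(1, 16, 64, 64)).shape)
```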

26 pages, 78396 KiB  
Article
SWRD–YOLO: A Lightweight Instance Segmentation Model for Estimating Rice Lodging Degree in UAV Remote Sensing Images with Real-Time Edge Deployment
by Chunyou Guo and Feng Tan
Agriculture 2025, 15(15), 1570; https://doi.org/10.3390/agriculture15151570 - 22 Jul 2025
Viewed by 295
Abstract
Rice lodging severely affects crop growth, yield, and mechanized harvesting efficiency. The accurate detection and quantification of lodging areas are crucial for precision agriculture and timely field management. However, Unmanned Aerial Vehicle (UAV)-based lodging detection faces challenges such as complex backgrounds, variable lighting, and irregular lodging patterns. To address these issues, this study proposes SWRD–YOLO, a lightweight instance segmentation model that enhances feature extraction and fusion using advanced convolution and attention mechanisms. The model employs an optimized loss function to improve localization accuracy, achieving precise lodging area segmentation. Additionally, a grid-based lodging ratio estimation method is introduced, dividing images into fixed-size grids to calculate local lodging proportions and aggregating them for a robust overall severity assessment. Evaluated on a self-built rice lodging dataset, the model achieves 94.8% precision, 88.2% recall, 93.3% mAP@0.5, and a 91.4% F1 score, with real-time inference at 16.15 FPS on an embedded NVIDIA Jetson Orin NX device. Compared to the baseline YOLOv8n-seg, precision, recall, mAP@0.5, and F1 score improved by 8.2%, 16.5%, 12.8%, and 12.8%, respectively. These results confirm the model's effectiveness and potential for deployment in intelligent crop monitoring and sustainable agriculture.
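
The grid-based ratio estimation is straightforward to sketch: given a binary lodging mask from the segmentation model, compute the lodged fraction per fixed-size cell and aggregate. The grid size and the random stand-in mask below are placeholders:

```python
import numpy as np

def lodging_ratio(mask: np.ndarray, grid: int = 64):
    """Grid-based severity estimate: split a binary lodging mask into
    fixed-size cells, compute each cell's lodged proportion, then aggregate."""
    h, w = mask.shape
    cells = []
    for i in range(0, h - grid + 1, grid):
        for j in range(0, w - grid + 1, grid):
            cells.append(mask[i:i + grid, j:j + grid].mean())
    cells = np.array(cells)
    return cells, cells.mean()            # per-cell ratios and overall severity

mask = (np.random.rand(512, 512) > 0.7).astype(np.uint8)  # stand-in predicted mask
local, overall = lodging_ratio(mask)
print(len(local), round(float(overall), 3))
```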

21 pages, 2919 KiB  
Article
A Feasible Domain Segmentation Algorithm for Unmanned Vessels Based on Coordinate-Aware Multi-Scale Features
by Zhengxun Zhou, Weixian Li, Yuhan Wang, Haozheng Liu and Ning Wu
J. Mar. Sci. Eng. 2025, 13(8), 1387; https://doi.org/10.3390/jmse13081387 - 22 Jul 2025
Viewed by 157
Abstract
The accurate extraction of navigational regions from images of navigational waters plays a key role in ensuring on-water safety and the automation of unmanned vessels. Nonetheless, current methods face significant challenges from fluctuations in water-surface illumination, reflective disturbances, and surface undulations, among other disruptions, which make rapid and precise boundary segmentation difficult. To cope with these challenges, this paper proposes a coordinate-aware multi-scale feature network (GASF-ResNet) for water segmentation. The method integrates the Global Grouping Coordinate Attention (GGCA) module into the four downsampling branches of ResNet-50, enhancing the model's ability to capture target features and improving the feature representation. To expand the model's receptive field and boost its capability to extract features of multi-scale targets, Atrous Spatial Pyramid Pooling (ASPP) is used; combined with multi-scale feature fusion, this effectively enhances the expression of semantic information at different scales and improves the segmentation accuracy of the model in complex water environments. The experimental results show that the mean pixel accuracy (mPA) and mean intersection over union (mIoU) of the proposed method are 99.31% and 98.61% on the self-made dataset and 98.55% and 99.27% on the USVInland unmanned ship dataset, respectively, significantly better than the results obtained by existing mainstream models. These results help overcome the background interference caused by water-surface reflection and uneven lighting and enable accurate segmentation of the water area for the safe navigation of unmanned vessels, which is of great value for their stable operation in complex environments.
(This article belongs to the Section Ocean Engineering)
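
ASPP (from the DeepLab family) runs parallel dilated convolutions at several rates and merges them, enlarging the receptive field without downsampling. A minimal sketch; the dilation rates and channel counts are illustrative, not the paper's settings:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convs
    merged by a 1x1 projection (after DeepLab)."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees a different effective receptive field; concatenate and fuse.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

print(ASPP(256, 64)(torch.randn(1, 256, 32, 32)).shape)  # (1, 64, 32, 32)
```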

17 pages, 1927 KiB  
Article
ConvTransNet-S: A CNN-Transformer Hybrid Disease Recognition Model for Complex Field Environments
by Shangyun Jia, Guanping Wang, Hongling Li, Yan Liu, Linrong Shi and Sen Yang
Plants 2025, 14(15), 2252; https://doi.org/10.3390/plants14152252 - 22 Jul 2025
Viewed by 350
Abstract
To address the challenges of low recognition accuracy and substantial model complexity in crop disease identification models operating in complex field environments, this study proposed a novel hybrid model named ConvTransNet-S, which integrates Convolutional Neural Networks (CNNs) and transformers for crop disease identification tasks. Unlike existing hybrid approaches, ConvTransNet-S introduces three key innovations. First, a Local Perception Unit (LPU) and Lightweight Multi-Head Self-Attention (LMHSA) modules were introduced to synergistically enhance the extraction of fine-grained plant disease details and model global dependency relationships, respectively. Second, an Inverted Residual Feed-Forward Network (IRFFN) was employed to optimize the feature propagation path, enhancing the model's robustness against interferences such as lighting variations and leaf occlusions; this combination of an LPU, LMHSA, and an IRFFN achieves a dynamic equilibrium between local texture perception and global context modeling, effectively resolving the trade-offs inherent in standalone CNNs or transformers. Finally, a phased architecture design achieves efficient fusion of multi-scale disease features, which enhances feature discriminability while reducing model complexity. The experimental results indicated that ConvTransNet-S achieved a recognition accuracy of 98.85% on the PlantVillage public dataset, operating with only 25.14 million parameters, a computational load of 3.762 GFLOPs, and an inference time of 7.56 ms. Testing on a self-built in-field complex scene dataset comprising 10,441 images revealed that ConvTransNet-S achieved an accuracy of 88.53%, representing improvements of 14.22%, 2.75%, and 0.34% over EfficientNetV2, Vision Transformer, and Swin Transformer, respectively. Furthermore, ConvTransNet-S achieved up to 14.22% higher disease recognition accuracy under complex background conditions while reducing the parameter count by 46.8%. This confirms that its multi-scale feature mechanism can effectively distinguish disease features from background features, providing a novel technical approach for disease diagnosis in complex agricultural scenarios and demonstrating significant application value for intelligent agricultural management.
(This article belongs to the Section Plant Modeling)
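
Inverted residual feed-forward blocks of the kind the abstract names appear in CMT-style hybrids: expand channels with a 1x1 convolution, apply a depthwise 3x3, project back, and add a skip connection. A minimal sketch under that assumption (the expansion factor is illustrative):

```python
import torch
import torch.nn as nn

class IRFFN(nn.Module):
    """Inverted residual feed-forward sketch (after CMT-style designs):
    1x1 expand -> 3x3 depthwise -> 1x1 project, with a residual skip."""
    def __init__(self, ch: int, expand: int = 4):
        super().__init__()
        hidden = ch * expand
        self.net = nn.Sequential(
            nn.Conv2d(ch, hidden, 1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden), nn.GELU(),
            nn.Conv2d(hidden, ch, 1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual path preserves the feature propagation

print(IRFFN(64)(torch.randn(1, 64, 56, 56)).shape)
```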

11 pages, 21181 KiB  
Article
Parallel Ghost Imaging with Extra Large Field of View and High Pixel Resolution
by Nixi Zhao, Changzhe Zhao, Jie Tang, Jianwen Wu, Danyang Liu, Han Guo, Haipeng Zhang and Tiqiao Xiao
Appl. Sci. 2025, 15(15), 8137; https://doi.org/10.3390/app15158137 - 22 Jul 2025
Viewed by 195
Abstract
Ghost imaging (GI) facilitates image acquisition under low-light conditions through single-pixel measurements and thus holds tremendous potential across fields such as biomedical imaging, remote sensing, defense and military applications, and 3D imaging. However, to reconstruct high-resolution images, GI typically requires a large number of single-pixel measurements, which imposes practical limitations on its application. Parallel ghost imaging addresses this issue by utilizing each pixel of a position-sensitive detector as a bucket detector to simultaneously perform tens of thousands of ghost imaging measurements in parallel. In this work, we explore the non-local characteristics of ghost imaging in depth and, by constructing a large speckle space, achieve a reconstruction in parallel ghost imaging whose field of view surpasses the limitations of the reference-arm detector. Using a computational ghost imaging framework with pre-recorded speckle patterns, we are able to complete X-ray ghost imaging at 6 min per sample, with image dimensions of 14,000 × 10,000 pixels (4.55 mm × 3.25 mm, a millimeter-scale field of view) and a pixel resolution of 0.325 µm (sub-micron). We present this framework to enhance efficiency, extend resolution, and dramatically expand the field of view, with the aim of providing a solution for the practical implementation of ghost imaging.
(This article belongs to the Special Issue Single-Pixel Intelligent Imaging and Recognition)
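
The core computational-ghost-imaging reconstruction is a correlation between bucket signals and the pre-recorded speckle patterns, G = ⟨(B − ⟨B⟩) I⟩. A toy single-bucket simulation (object, pattern count, and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((32, 32)); obj[8:24, 12:20] = 1.0   # hidden object

# Computational GI: pre-recorded speckle patterns I_k and single-pixel
# ("bucket") signals B_k = sum(I_k * object).
K = 20000
speckles = rng.random((K, 32, 32))
bucket = (speckles * obj).sum(axis=(1, 2))

# Correlation reconstruction: G = <(B - <B>) I>
G = np.tensordot(bucket - bucket.mean(), speckles, axes=1) / K
print(np.corrcoef(G.ravel(), obj.ravel())[0, 1])   # rises toward 1 as K grows
```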

24 pages, 9664 KiB  
Article
Frequency-Domain Collaborative Lightweight Super-Resolution for Fine Texture Enhancement in Rice Imagery
by Zexiao Zhang, Jie Zhang, Jinyang Du, Xiangdong Chen, Wenjing Zhang and Changmeng Peng
Agronomy 2025, 15(7), 1729; https://doi.org/10.3390/agronomy15071729 - 18 Jul 2025
Viewed by 316
Abstract
In rice detection tasks, accurate identification of leaf streaks, pest and disease distribution, and spikelet hierarchies relies on high-quality images to distinguish texture and hierarchy. However, available images often suffer from texture blurring and contour shifting due to equipment and environmental limitations, which degrades detection performance. Given that pest and disease symptoms affect the image globally while tiny details are mostly localized, we propose a rice image reconstruction method based on an adaptive two-branch heterogeneous structure. The method consists of a low-frequency branch (LFB) that recovers global structure using orientation-aware extended receptive fields to capture streaky global features, such as those of pests and diseases, and a high-frequency branch (HFB) that enhances detail edges through an adaptive enhancement mechanism to boost the clarity of local detail regions. By introducing a dynamic weight fusion mechanism (CSDW) and a lightweight gating network (LFFN), the unbalanced fusion of frequency information for rice images in traditional methods is resolved. Experiments on the 4× downsampled rice test set demonstrate that the proposed method achieves a 62% reduction in parameters compared to EDSR, a 41% lower computational cost (30 G) than MambaIR-light, and an average PSNR improvement of 0.68% over the other methods in the study, while balancing memory usage (227 M) and inference speed. In downstream task validation, rice panicle maturity detection achieves a 61.5% increase in mAP50 (0.480 → 0.775) compared to interpolation methods, and leaf pest detection shows a 2.7% improvement in average mAP50 (0.949 → 0.975). This research provides an effective solution for lightweight rice image enhancement, with its dual-branch collaborative mechanism and dynamic fusion strategy establishing a new paradigm in agricultural rice image processing.
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
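
At a coarse level, the two-branch split can be mimicked by separating an image into low- and high-frequency components before processing each differently. The hard FFT mask and the fixed boost below are stand-ins for the learned branches and the dynamic fusion described in the abstract:

```python
import numpy as np

def split_frequency(img: np.ndarray, cutoff: int = 8):
    """Split an image into low/high-frequency parts with a hard FFT mask,
    a coarse analogue of the LFB/HFB decomposition."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask = np.zeros_like(img)
    mask[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 1.0
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real   # global structure
    high = img - low                                      # detail edges
    return low, high

img = np.random.rand(128, 128)
low, high = split_frequency(img)
fused = low + 1.2 * high          # toy fusion: re-emphasize the detail band
print(np.allclose(low + high, img))
```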

20 pages, 41202 KiB  
Article
Copper Stress Levels Classification in Oilseed Rape Using Deep Residual Networks and Hyperspectral False-Color Images
by Yifei Peng, Jun Sun, Zhentao Cai, Lei Shi, Xiaohong Wu, Chunxia Dai and Yubin Xie
Horticulturae 2025, 11(7), 840; https://doi.org/10.3390/horticulturae11070840 - 16 Jul 2025
Viewed by 259
Abstract
In recent years, heavy metal contamination in agricultural products has become a growing concern in the field of food safety. Copper (Cu) stress in crops not only leads to significant reductions in both yield and quality but also poses potential health risks to humans. This study proposes an efficient and precise non-destructive detection method for Cu stress in oilseed rape based on hyperspectral false-color image construction using principal component analysis (PCA). By comprehensively capturing the spectral representation of oilseed rape plants, both one-dimensional (1D) spectral sequences and spatial image data were utilized for multi-class classification. The classification performance of models based on 1D spectral sequences was compared from two perspectives: between machine learning and deep learning methods (best accuracy: 93.49% vs. 96.69%), and between shallow and deep convolutional neural networks (CNNs) (best accuracy: 95.15% vs. 96.69%). For spatial image data, deep residual networks were employed to evaluate the effectiveness of visible-light and false-color images. The RegNet architecture was chosen for its flexible parameterization and proven effectiveness in extracting multi-scale features from hyperspectral false-color images; this flexibility enabled RegNetX-6.4GF to achieve optimal performance on the dataset constructed from three types of false-color images, reaching a Macro-Precision, Macro-Recall, Macro-F1, and Accuracy of 98.17%, 98.15%, 98.15%, and 98.15%, respectively. Furthermore, Grad-CAM visualizations revealed that latent physiological changes in plants under heavy metal stress guided feature learning within the CNNs and demonstrated the effectiveness of false-color image construction in extracting discriminative features. Overall, the proposed technique can be integrated into portable hyperspectral imaging devices, enabling real-time and non-destructive detection of heavy metal stress in modern agricultural practices.
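
Building a PCA false-color image from a hyperspectral cube amounts to projecting each pixel's spectrum onto the first three principal components and rescaling them as RGB channels. A minimal sketch; the cube dimensions are arbitrary:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical hyperspectral cube: 64x64 pixels, 120 bands.
cube = np.random.rand(64, 64, 120)
h, w, b = cube.shape

pca = PCA(n_components=3)
scores = pca.fit_transform(cube.reshape(-1, b))        # (h*w, 3) PC scores

# Rescale each principal-component image to 0-255 for an RGB false-color view.
lo, hi = scores.min(axis=0), scores.max(axis=0)
false_color = ((scores - lo) / (hi - lo) * 255).astype(np.uint8).reshape(h, w, 3)
print(false_color.shape, pca.explained_variance_ratio_.round(3))
```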

12 pages, 5633 KiB  
Article
Study on Joint Intensity in Real-Space and k-Space of SFS Super-Resolution Imaging via Multiplex Illumination Modulation
by Xiaoyu Yang, Haonan Zhang, Feihong Lin, Xu Liu and Qing Yang
Photonics 2025, 12(7), 717; https://doi.org/10.3390/photonics12070717 - 16 Jul 2025
Viewed by 222
Abstract
This paper studied the general mechanism of spatial-frequency-shift (SFS) super-resolution imaging based on multiplex illumination modulation, and first proposed the theory of SFS joint intensity. Experiments on parallel slots with a discrete spatial-frequency (SF) distribution and V-shaped slots with a continuous SF distribution were carried out, and their real-space and k-space images were obtained. The influence on SFS super-resolution imaging of single illumination with different SFS values, and of mixed illumination with various combinations, was analyzed, and the observed coverage of the sample's SF spectrum was discussed. The SFS super-resolution imaging characteristics arising from low-coherence illumination and highly localized light fields were identified, along with the image magnification that occurs during the SFS super-resolution imaging process. Finally, the differences and connections between the SF spectrum of objects and the k-space images obtained during SFS super-resolution imaging were explained. These results support the optimization of high-throughput SFS super-resolution imaging.
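
The underlying SFS principle: illumination carrying a large transverse wave vector shifts high object frequencies into the system's band-limited passband. In standard notation (which may differ from the paper's):

```latex
% Illumination with transverse wave vector k_i shifts the object spectrum
% \tilde{O} relative to the band-limited pupil P (|k| <= NA k_0):
\tilde{E}_{\mathrm{det}}(\mathbf{k}) \;\propto\; \tilde{O}(\mathbf{k}-\mathbf{k}_i)\,P(\mathbf{k}),
\qquad k_{\max} = \mathrm{NA}\,k_0 + |\mathbf{k}_i|
```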

12 pages, 3406 KiB  
Article
Singular Value Decomposition-Assisted Holographic Generation of High-Quality Cylindrical Vector Beams Through Few-Mode Fibers
by Angel Cifuentes, Miguel Varga and Gabriel Molina-Terriza
Photonics 2025, 12(7), 716; https://doi.org/10.3390/photonics12070716 - 16 Jul 2025
Viewed by 246
Abstract
Full control of the light field at the tip of a fiber holds the possibility of producing structured illumination patterns such as Laguerre–Gaussian (LG) beams or vector light fields, which have important applications in fields such as imaging and quantum technologies. In this work, we show how, by measuring the transmission matrix (TM) and shaping the input of a few-mode fiber, we are able to produce cylindrical vector beams at the fiber output. We use singular value decomposition (SVD) to analyze the TM and use the singular vectors as the basis for beam shaping. We demonstrate the method in three different commercially available fibers supporting 6, 12, and 16 modes, respectively.
(This article belongs to the Special Issue Vortex Beams: Transmission, Scattering and Application)
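
The SVD step has a property worth making explicit: launching the k-th right-singular vector of the TM produces the k-th left-singular vector at the output, scaled by its singular value, so the singular vectors form orthogonal, noise-ranked channels for beam shaping. A toy numerical check (matrix sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 16, 64                      # e.g., 16 input modes, 64 output samples
TM = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))

U, S, Vh = np.linalg.svd(TM, full_matrices=False)

# Sending the k-th right-singular vector through the fiber yields the k-th
# left-singular vector at the output, scaled by its singular value.
k = 0
out = TM @ Vh.conj().T[:, k]
print(np.allclose(out, S[k] * U[:, k]))   # True: a clean, orthogonal output channel
```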

14 pages, 738 KiB  
Article
Assessment of Pupillometry Across Different Commercial Systems of Laying Hens to Validate Its Potential as an Objective Indicator of Welfare
by Elyse Mosco, David Kilroy and Arun H. S. Kumar
Poultry 2025, 4(3), 31; https://doi.org/10.3390/poultry4030031 - 15 Jul 2025
Viewed by 252
Abstract
Background: Reliable and non-invasive methods for assessing welfare in poultry are essential for improving evidence-based welfare monitoring and advancing management practices in commercial production systems. The iris-to-pupil (IP) ratio, previously validated by our group in primates and cattle, reflects autonomic nervous system balance and may serve as a physiological indicator of stress in laying hens. This study evaluated the utility of the IP ratio under field conditions across diverse commercial layer housing systems. Materials and Methods: In total, 296 laying hens (Lohmann Brown, n = 269; White Leghorn, n = 27) were studied across four locations in Canada housed under different systems: Guelph (indoor; pen), Spring Island (outdoor and scratch; organic), Ottawa (outdoor, indoor, and scratch; free-range), and Toronto (outdoor and hobby; free-range). High-resolution photographs of the eye were taken under ambient lighting, and light intensity was measured using a light-meter app. The IP ratio was calculated using NIH ImageJ software (Version 1.54p). Statistical analysis included one-way ANOVA and linear regression using GraphPad Prism (Version 5). Results: Birds housed outdoors had the highest IP ratios, followed by those in scratch systems, while indoor and pen-housed birds had the lowest IP ratios (p < 0.001). Subgroup analyses of birds on the Ottawa and Spring Island farms confirmed significantly higher IP ratios in outdoor environments compared to indoor and scratch systems (p < 0.001). The IP ratio correlated weakly with ambient light intensity (r² = 0.25) and age (r² = 0.05), indicating minimal influence of these variables. Although White Leghorn hens showed lower IP ratios than Lohmann Browns, this difference was confounded by housing type, as all White Leghorns were housed in pens; thus, the housing system, not breed, was the primary driver of IP variation. Conclusions: The IP ratio is a robust, non-invasive physiological marker for welfare assessment in laying hens, sensitive to the housing environment but minimally influenced by light or age. Its potential for integration with digital imaging technologies supports its use in scalable welfare assessment protocols.
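
The study measured iris and pupil diameters manually in ImageJ. Purely as a hypothetical illustration of how the measurement could be automated for the digital-imaging integration the conclusion mentions, the sketch below detects the pupil and iris as roughly concentric circles with OpenCV and returns their diameter ratio; all thresholds are guesses, not values from the paper.

```python
import cv2
import numpy as np

def ip_ratio(eye_gray: np.ndarray) -> float:
    """Hypothetical automation of the iris-to-pupil ratio on an 8-bit
    grayscale eye photograph: detect two concentric circles and return
    iris_diameter / pupil_diameter. (The study used NIH ImageJ manually.)"""
    blurred = cv2.medianBlur(eye_gray, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=10,
                               param1=100, param2=30, minRadius=5, maxRadius=200)
    if circles is None or circles.shape[1] < 2:
        raise ValueError("could not find two circles (pupil and iris)")
    radii = sorted(circles[0, :, 2])      # smallest = pupil, largest = iris
    return float(radii[-1] / radii[0])
```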

22 pages, 9940 KiB  
Article
Developing a Novel Method for Vegetation Mapping in Temperate Forests Using Airborne LiDAR and Hyperspectral Imaging
by Nam Shin Kim and Chi Hong Lim
Forests 2025, 16(7), 1158; https://doi.org/10.3390/f16071158 - 14 Jul 2025
Viewed by 308
Abstract
This study advances vegetation and forest mapping in temperate mixed forests by integrating airborne hyperspectral imagery (HSI) and light detection and ranging (LiDAR) data, overcoming the limitations of conventional multispectral imaging. Our approach integrates structural metrics from a LiDAR-derived Digital Canopy Height Model (DCHM) with hyperspectral spectral information extracted from the remote sensing data. Through machine-learning-based clustering that combines structural and spectral features, we classified eight tree species, delineated community boundaries, identified dominant species, and quantified their abundance, enabling precise vegetation and forest-type mapping based on predominant species and detailed attributes such as diameter at breast height, age, and canopy density. Field validation indicated high mapping precision, with overall accuracies of approximately 98.0% for individual species identification and 93.1% for community-level mapping. Demonstrating robust performance compared to conventional methods, this novel approach offers a valuable foundation for developing a National Forest Ecology Inventory and enhances ecological research and forest management by providing new insights into forest ecosystems and various forestry applications.
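
A minimal sketch of the structural-plus-spectral clustering idea: stack the DCHM height with PCA-reduced spectra per pixel, standardize, and cluster. The data shapes, component count, and k = 8 (matching the eight species) are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical co-registered rasters: canopy height (DCHM) and hyperspectral cube.
h, w, bands = 100, 100, 120
dchm = np.random.rand(h, w, 1)            # structural metric from LiDAR
hsi = np.random.rand(h, w, bands)         # spectral signature per pixel

spectral = PCA(n_components=10).fit_transform(hsi.reshape(-1, bands))
features = np.hstack([dchm.reshape(-1, 1), spectral])
features = StandardScaler().fit_transform(features)   # height and spectra on one scale

labels = KMeans(n_clusters=8, n_init=10).fit_predict(features)  # 8 species clusters
print(labels.reshape(h, w).shape)
```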

22 pages, 7140 KiB  
Article
Impact of Phenological and Lighting Conditions on Early Detection of Grapevine Inflorescences and Bunches Using Deep Learning
by Rubén Íñiguez, Carlos Poblete-Echeverría, Ignacio Barrio, Inés Hernández, Salvador Gutiérrez, Eduardo Martínez-Cámara and Javier Tardáguila
Agriculture 2025, 15(14), 1495; https://doi.org/10.3390/agriculture15141495 - 11 Jul 2025
Viewed by 235
Abstract
Reliable early-stage yield forecasts are essential in precision viticulture, enabling timely interventions such as harvest planning, canopy management, and crop load regulation. Since grape yield is directly related to the number and size of bunches, the early detection of inflorescences and bunches, carried out even before flowering, provides a valuable foundation for estimating potential yield far in advance of veraison. Traditional yield prediction methods are labor-intensive, subjective, and often restricted to advanced phenological stages. This study presents a deep learning-based approach for detecting grapevine inflorescences and bunches during early development, assessing how phenological stage and illumination conditions influence detection performance using the YOLOv11 architecture under commercial field conditions. A total of 436 RGB images were collected across two phenological stages (pre-bloom and fruit-set), two lighting conditions (daylight and artificial night-time illumination), and six grapevine cultivars. All images were manually annotated following a consistent protocol, and the models were trained using data augmentation to improve generalization. Five models were developed: four specific to each condition and one combining all scenarios. The results show that the fruit-set stage under daylight provided the best performance (F1 = 0.77, R² = 0.97), while for inflorescences, night-time imaging yielded the most accurate results (F1 = 0.71, R² = 0.76), confirming the benefits of artificial lighting at early stages. These findings define optimal scenarios for early-stage organ detection and support the integration of automated detection models into vineyard management systems. Future work will address scalability and robustness under diverse conditions.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
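
Training such a detector with the Ultralytics API is short. The dataset YAML name, epoch budget, and augmentation values below are placeholders, not settings reported by the authors:

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")                # pretrained YOLO11 nano weights
model.train(data="inflorescence.yaml",    # hypothetical dataset config: paths + classes
            epochs=100, imgsz=640,
            fliplr=0.5, hsv_v=0.4)        # augmentation to generalize across lighting
metrics = model.val()                     # precision/recall/mAP on the validation split
print(metrics.box.map50)
```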
