
Search Results (688)

Search Parameters:
Keywords = multi-angle imaging

31 pages, 8383 KiB  
Article
Quantifying Emissivity Uncertainty in Multi-Angle Long-Wave Infrared Hyperspectral Data
by Nikolay Golosov, Guido Cervone and Mark Salvador
Remote Sens. 2025, 17(16), 2823; https://doi.org/10.3390/rs17162823 - 14 Aug 2025
Abstract
This study quantifies emissivity uncertainty using a new, specifically collected multi-angle thermal hyperspectral dataset, Nittany Radiance. Unlike previous research that primarily relied on model-based simulations, multispectral satellite imagery, or laboratory measurements, we use airborne hyperspectral long-wave infrared (LWIR) data captured from multiple viewing angles. The data were collected using the Blue Heron LWIR hyperspectral imaging sensor, flown on a light aircraft in a circular orbit centered on the Penn State University campus. This sensor, with 256 spectral bands (7.56–13.52 μm), captures multiple overlapping images with varying ranges and angles. We analyzed nine different natural and man-made targets across varying viewing geometries. We present an atmospheric correction method, similar to FLAASH-IR, modified for multi-angle scenarios. Our results show that emissivity remains relatively stable at viewing zenith angles between 40 and 50° but decreases as angles exceed 50°. We found that emissivity uncertainty varies across the spectral range, with the 10.14–11.05 μm region showing the greatest stability (standard deviations typically below 0.005), while uncertainty increases significantly in regions with strong atmospheric absorption features, particularly around 12.6 μm. These results show how reliable multi-angle hyperspectral measurements are and why angle-specific atmospheric correction matters for non-nadir imaging applications.
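The per-band uncertainty figure the abstract reports (standard deviation of retrieved emissivity across viewing angles) reduces to a simple per-band statistic. A minimal sketch with made-up toy values, not the Nittany Radiance data:

```python
import statistics

def per_band_uncertainty(spectra):
    """Per-band sample standard deviation of emissivity across viewing
    angles.  `spectra` maps a viewing zenith angle (deg) to a list of
    band emissivities; the band count must match across angles."""
    bands = len(next(iter(spectra.values())))
    return [statistics.stdev(e[b] for e in spectra.values())
            for b in range(bands)]

# Toy example: three angles, two bands (illustrative numbers only).
obs = {
    40: [0.95, 0.90],
    45: [0.95, 0.89],
    50: [0.94, 0.87],
}
sigmas = per_band_uncertainty(obs)
```

A stable band yields a small sigma (on the order of the paper's 0.005 figure in this toy case), while a band perturbed by atmospheric absorption yields a larger one.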

19 pages, 1619 KiB  
Article
Impact of Water Velocity on Litopenaeus vannamei Behavior Using ByteTrack-Based Multi-Object Tracking
by Jiahao Zhang, Lei Wang, Zhengguo Cui, Hao Li, Jianlei Chen, Yong Xu, Haixiang Zhao, Zhenming Huang, Keming Qu and Hongwu Cui
Fishes 2025, 10(8), 406; https://doi.org/10.3390/fishes10080406 - 14 Aug 2025
Viewed by 91
Abstract
In factory-controlled recirculating aquaculture systems, precise regulation of water velocity is crucial for optimizing shrimp feeding behavior and improving aquaculture efficiency. However, quantitative analysis of the impact of water velocity on shrimp behavior remains challenging. This study developed an innovative multi-objective behavioral analysis framework integrating detection, tracking, and behavioral interpretation. Specifically, the YOLOv8 model was employed for precise shrimp detection, ByteTrack with a dual-threshold matching strategy ensured continuous individual trajectory tracking in complex water environments, and Kalman filtering corrected coordinate offsets caused by water refraction. Under typical recirculating aquaculture system conditions, three water circulation rates (2.0, 5.0, and 10.0 cycles/day) were established to simulate varying flow velocities. High-frequency imaging (30 fps) was used to simultaneously record and analyze the movement trajectories of Litopenaeus vannamei during feeding and non-feeding periods, from which two-dimensional behavioral parameters—velocity and turning angle—were extracted. Key experimental results indicated that water circulation rates significantly affected shrimp movement velocity but had no significant effect on turning angle. Importantly, only under the moderate circulation rate (5.0 cycles/day) was the average movement velocity during feeding significantly lower than during non-feeding periods (p < 0.05). This finding reveals that moderate water velocity constitutes a critical hydrodynamic window for eliciting specific feeding behavior in shrimp. These results provide core parameters for an intelligent Litopenaeus vannamei feeding intensity assessment model based on spatiotemporal graph convolutional networks and offer theoretically valuable and practically applicable guidance for optimizing hydrodynamics and formulating precision feeding strategies in recirculating aquaculture systems.
(This article belongs to the Special Issue Application of Artificial Intelligence in Aquaculture)
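The two behavioral parameters extracted above, velocity and turning angle, reduce to simple geometry on consecutive tracked centroids. A minimal sketch assuming the abstract's 30 fps; the pixel-to-millimetre scale `mm_per_px` is a hypothetical calibration constant, not a value from the paper:

```python
import math

def velocity_and_turns(track, fps=30, mm_per_px=1.0):
    """Per-frame speed (mm/s) and turning angle (deg) from a list of
    (x, y) pixel centroids, one per frame."""
    speeds, turns = [], []
    for i in range(1, len(track)):
        dx = track[i][0] - track[i - 1][0]
        dy = track[i][1] - track[i - 1][1]
        speeds.append(math.hypot(dx, dy) * mm_per_px * fps)
        if i >= 2:
            # Turning angle = change in heading between successive steps.
            a1 = math.atan2(track[i - 1][1] - track[i - 2][1],
                            track[i - 1][0] - track[i - 2][0])
            a2 = math.atan2(dy, dx)
            turn = math.degrees(abs(a2 - a1))
            turns.append(min(turn, 360 - turn))
    return speeds, turns
```

Comparing the speed series between feeding and non-feeding windows is then a per-period aggregation over these values.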

19 pages, 3977 KiB  
Article
Accelerating Surgical Skill Acquisition by Using Multi-View Bullet-Time Video Generation
by Yinghao Wang, Chun Xie, Koichiro Kumano, Daichi Kitaguchi, Shinji Hashimoto, Tatsuya Oda and Itaru Kitahara
Appl. Sci. 2025, 15(16), 8830; https://doi.org/10.3390/app15168830 - 10 Aug 2025
Viewed by 359
Abstract
Surgical education and training have seen significant advancements with the integration of innovative technologies. This paper presents a novel approach to surgical education using a multi-view capturing system and bullet-time generation techniques to enhance the learning experience for aspiring surgeons. The proposed system leverages an array of synchronized cameras strategically positioned around a surgical simulation environment, enabling the capture of surgical procedures from multiple angles simultaneously. The captured multi-view data is then processed using advanced computer vision and image processing algorithms to create a “bullet-time” effect, similar to the iconic scenes from The Matrix movie, allowing educators and trainees to manipulate time and view the surgical procedure from any desired perspective. In this paper, we describe the technical aspects of the multi-view capturing system, the bullet-time generation process, and the integration of these technologies into surgical education programs. We also discuss the potential applications in various surgical specialties and the benefits of utilizing this system for both novice and experienced surgeons. Finally, we present preliminary results from pilot studies and user feedback, highlighting the promising potential of this innovative approach to revolutionize surgical education and training.

14 pages, 1971 KiB  
Article
High-Density Arrayed Spectrometer with Microlens Array Grating for Multi-Channel Parallel Spectral Analysis
by Fangyuan Zhao, Zhigang Feng and Shuonan Shan
Sensors 2025, 25(15), 4833; https://doi.org/10.3390/s25154833 - 6 Aug 2025
Viewed by 323
Abstract
To enable multi-channel parallel spectral analysis in array-based devices such as micro-light-emitting diodes (Micro-LEDs) and line-scan spectral confocal systems, the development of compact array spectrometers has become increasingly important. In this work, a novel spectrometer architecture based on a microlens array grating (MLAG) is proposed, which addresses the major limitations of conventional spectrometers, including limited parallel detection capability, bulky structures, and insufficient spatial resolution. By integrating dispersion and focusing within a monolithic device, the system enables simultaneous acquisition across more than 2000 parallel channels within a 10 mm × 10 mm unit consisting of an f = 4 mm microlens and a 600 lines/mm blazed grating. Optimized microlens and aperture alignment allows for flexible control of the divergence angle of the incident light, and the system theoretically achieves nanometer-scale spectral resolution across a 380–780 nm wavelength range, with inter-channel measurement deviation below 1.25%. Experimental results demonstrate that this spectrometer system can theoretically support up to 2070 independently addressable subunits. At a wavelength of 638 nm, the coefficient of variation (CV) of spot spacing among array elements is as low as 1.11%, indicating high uniformity. The spectral repeatability precision is better than 1.0 nm, and after image enhancement, the standard deviation of the diffracted light shift is reduced to just 0.26 nm. The practical spectral resolution achieved is as fine as 3.0 nm. This platform supports wafer-level spectral screening of high-density Micro-LEDs, offering a practical hardware solution for high-precision industrial inline sorting, such as Micro-LED defect inspection.
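The 600 lines/mm blazed grating quoted above fixes the dispersion geometry through the grating equation d·(sin θ_m − sin θ_i) = m·λ. A sketch of the first-order angular spread over the stated 380–780 nm range, assuming normal incidence (an illustration of the relation, not the device's actual optical layout):

```python
import math

def diffraction_angle_deg(wavelength_nm, lines_per_mm=600, order=1,
                          incidence_deg=0.0):
    """First-order diffraction angle from the grating equation.
    600 lines/mm is from the abstract; normal incidence is assumed."""
    d_nm = 1e6 / lines_per_mm  # groove spacing in nm
    s = order * wavelength_nm / d_nm + math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(s))

# Angular spread across the working range quoted in the abstract.
spread = diffraction_angle_deg(780) - diffraction_angle_deg(380)
```

With a 4 mm focal-length microlens behind each grating subunit, this angular spread sets the linear extent of each channel's spectrum on the detector.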

18 pages, 5280 KiB  
Article
A Drilling Debris Tracking and Velocity Measurement Method Based on Fine Target Feature Fusion Optimization
by Jinteng Yang, Yu Bao, Zumao Xie, Haojie Zhang, Zhongnian Li and Yonggang Li
Appl. Sci. 2025, 15(15), 8662; https://doi.org/10.3390/app15158662 - 5 Aug 2025
Viewed by 228
Abstract
During unmanned drilling operations, the velocity of drill cuttings serves as an important indicator of drilling conditions, which necessitates real-time and accurate measurements. To address challenges such as the small size of cuttings, weak feature representations, and complex motion trajectories, we propose a novel velocity measurement method integrating small-object detection and tracking. Specifically, we enhance the multi-scale feature fusion capability of the YOLOv11 detection head by incorporating a lightweight feature extraction module, Ghost Conv, and a feature-aligned fusion module, FA-Concat, resulting in an improved model named YOLOv11-Dd (drilling debris). Furthermore, considering the robustness of the ByteTrack algorithm in retaining low-confidence targets and handling occlusions, we integrate ByteTrack into the tracking phase to enhance tracking stability. A velocity estimation module is introduced to achieve high-precision measurement by mapping the pixel displacement of detection box centers across consecutive frames to physical space. To facilitate model training and performance evaluation, we establish a drill-cutting splash simulation dataset comprising 3787 images, covering a diverse range of ejection angles, velocities, and material types. The experimental results show that the YOLOv11-Dd model achieves a 4.65% improvement in mAP@80 over YOLOv11, reaching 76.04%. For mAP@75–95, it improves by 0.79%, reaching 41.73%. The proposed velocity estimation method achieves an average accuracy of 92.12% in speed measurement tasks, representing a 0.42% improvement compared to the original YOLOv11.
(This article belongs to the Special Issue AI from Industry 4.0 to Industry 5.0: Engineering for Social Change)
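The velocity estimation step described above (pixel displacement of detection-box centers mapped to physical space) is a one-line computation once the scale is known. A minimal sketch; `mm_per_px` and the 30 fps frame rate are placeholder calibration values, not figures from the paper:

```python
import math

def debris_speed(c0, c1, mm_per_px, fps=30):
    """Speed (mm/s) from the detection-box centers c0, c1 (x, y pixels)
    of the same tracked cutting in two consecutive frames."""
    disp_px = math.hypot(c1[0] - c0[0], c1[1] - c0[1])
    return disp_px * mm_per_px * fps
```

ByteTrack's role upstream is simply to guarantee that c0 and c1 belong to the same physical cutting even through low-confidence detections.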

20 pages, 4569 KiB  
Article
Lightweight Vision Transformer for Frame-Level Ergonomic Posture Classification in Industrial Workflows
by Luca Cruciata, Salvatore Contino, Marianna Ciccarelli, Roberto Pirrone, Leonardo Mostarda, Alessandra Papetti and Marco Piangerelli
Sensors 2025, 25(15), 4750; https://doi.org/10.3390/s25154750 - 1 Aug 2025
Viewed by 408
Abstract
Work-related musculoskeletal disorders (WMSDs) are a leading concern in industrial ergonomics, often stemming from sustained non-neutral postures and repetitive tasks. This paper presents a vision-based framework for real-time, frame-level ergonomic risk classification using a lightweight Vision Transformer (ViT). The proposed system operates directly on raw RGB images without requiring skeleton reconstruction, joint angle estimation, or image segmentation. A single ViT model simultaneously classifies eight anatomical regions, enabling efficient multi-label posture assessment. Training is supervised using a multimodal dataset acquired from synchronized RGB video and full-body inertial motion capture, with ergonomic risk labels derived from RULA scores computed on joint kinematics. The system is validated on realistic, simulated industrial tasks that include common challenges such as occlusion and posture variability. Experimental results show that the ViT model achieves state-of-the-art performance, with F1-scores exceeding 0.99 and AUC values above 0.996 across all regions. Compared to a previous CNN-based system, the proposed model improves classification accuracy and generalizability while reducing complexity and enabling real-time inference on edge devices. These findings demonstrate the model’s potential for unobtrusive, scalable ergonomic risk monitoring in real-world manufacturing environments.
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)
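A single forward pass of the model above yields one risk class per anatomical region. A minimal read-out sketch; the region names and the three-level risk scale are illustrative stand-ins (the paper derives its labels from RULA scores):

```python
def classify_regions(logits, labels=("low", "medium", "high")):
    """Multi-label posture read-out: pick the highest-scoring risk class
    independently for each region's score vector."""
    out = {}
    for region, scores in logits.items():
        best = max(range(len(scores)), key=lambda i: scores[i])
        out[region] = labels[best]
    return out
```

Running this per frame gives the frame-level risk stream that a monitoring dashboard would consume.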

28 pages, 4026 KiB  
Article
Multi-Trait Phenotypic Analysis and Biomass Estimation of Lettuce Cultivars Based on SFM-MVS
by Tiezhu Li, Yixue Zhang, Lian Hu, Yiqiu Zhao, Zongyao Cai, Tingting Yu and Xiaodong Zhang
Agriculture 2025, 15(15), 1662; https://doi.org/10.3390/agriculture15151662 - 1 Aug 2025
Viewed by 320
Abstract
To address the problems of traditional methods that rely on destructive sampling, the poor adaptability of fixed equipment, and the susceptibility of single-view angle measurements to occlusions, a non-destructive and portable device for three-dimensional phenotyping and biomass detection in lettuce was developed. Based on the Structure-from-Motion Multi-View Stereo (SFM-MVS) algorithms, a high-precision three-dimensional point cloud model was reconstructed from multi-view RGB image sequences, and 12 phenotypic parameters, such as plant height and crown width, were accurately extracted. Regression analyses of plant height, crown width, and crown height yielded R2 values of 0.98, 0.99, and 0.99, with RMSE values of 2.26 mm, 1.74 mm, and 1.69 mm, respectively. On this basis, four biomass prediction models were developed using Adaptive Boosting (AdaBoost), Support Vector Regression (SVR), Gradient Boosting Decision Tree (GBDT), and Random Forest Regression (RFR). The results indicated that the RFR model based on the projected convex hull area, point cloud convex hull surface area, and projected convex hull perimeter performed the best, with an R2 of 0.90, an RMSE of 2.63 g, and an RMSEn of 9.53%, indicating that the RFR was able to accurately estimate lettuce biomass. This research achieves three-dimensional reconstruction and accurate biomass prediction of facility lettuce, and provides a portable and lightweight solution for facility crop growth detection.
(This article belongs to the Section Crop Production)
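Two of the three predictors feeding the best RFR model above, projected convex hull area and perimeter, are elementary polygon computations on the hull vertices. A minimal sketch (the square below is illustrative geometry, not lettuce data):

```python
import math

def shoelace_area(hull):
    """Area of a simple polygon from its ordered vertices (shoelace
    formula) -- the projected-convex-hull-area style of feature."""
    s = 0.0
    n = len(hull)
    for i in range(n):
        x0, y0 = hull[i]
        x1, y1 = hull[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def hull_perimeter(hull):
    """Perimeter of the same polygon (another RFR model input)."""
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))
```

These scalar features, extracted per plant from the reconstructed point cloud's 2D projection, are what the regression models consume.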

17 pages, 6842 KiB  
Article
Inside the Framework: Structural Exploration of Mesoporous Silicas MCM-41, SBA-15, and SBA-16
by Agnieszka Karczmarska, Wiktoria Laskowska, Danuta Stróż and Katarzyna Pawlik
Materials 2025, 18(15), 3597; https://doi.org/10.3390/ma18153597 - 31 Jul 2025
Viewed by 350
Abstract
In the rapidly evolving fields of materials science, catalysis, electronics, drug delivery, and environmental remediation, the development of effective substrates for molecular deposition has become increasingly crucial. Ordered mesoporous silica materials have garnered significant attention due to their unique structural properties and exceptional potential as substrates for molecular immobilization across these diverse applications. This study compares three mesoporous silica powders: MCM-41, SBA-15, and SBA-16. A multi-technique characterization approach was employed, utilizing low- and wide-angle X-ray diffraction (XRD), nitrogen physisorption, and transmission electron microscopy (TEM) to elucidate the structure–property relationships of these materials. XRD analysis confirmed the amorphous nature of the silica frameworks and revealed distinct pore symmetries: a two-dimensional hexagonal (P6mm) structure for MCM-41 and SBA-15, and a three-dimensional cubic (Im-3m) structure for SBA-16. Nitrogen sorption measurements demonstrated significant variations in textural properties, with MCM-41 exhibiting uniform cylindrical mesopores and the highest surface area, SBA-15 displaying hierarchical meso- and microporosity confirmed by NLDFT analysis, and SBA-16 showing a complex 3D interconnected cage-like structure with broad pore size distribution. TEM imaging provided direct visualization of particle morphology and internal pore architecture, enabling estimation of lattice parameters and identification of structural gradients within individual particles. The integration of these complementary techniques proved essential for comprehensive material characterization, particularly for MCM-41, where its small particle size (45–75 nm) contributed to apparent structural inconsistencies between XRD and sorption data. This integrated analytical approach provides valuable insights into the fundamental structure–property relationships governing ordered mesoporous silica materials and demonstrates the necessity of combined characterization strategies for accurate structural determination.
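The lattice parameters estimated above follow from Bragg's law applied to the low-angle reflections. A sketch for the 2D hexagonal (P6mm) case of MCM-41/SBA-15; the Cu Kα wavelength and the 2θ value are typical assumptions for illustration, since the abstract does not state the radiation used or the peak positions:

```python
import math

def d_spacing_angstrom(two_theta_deg, wavelength=1.5406):
    """Bragg's law, d = lambda / (2 sin(theta)); Cu K-alpha assumed."""
    return wavelength / (2 * math.sin(math.radians(two_theta_deg / 2)))

def hex_lattice_parameter(d100):
    """For a 2D hexagonal (P6mm) pore lattice: a = 2 * d100 / sqrt(3)."""
    return 2 * d100 / math.sqrt(3)

# A (100) reflection near 2-theta = 2.2 deg, typical for MCM-41 (illustrative).
d100 = d_spacing_angstrom(2.2)
a = hex_lattice_parameter(d100)
```

The pore wall thickness then follows as the lattice parameter minus the sorption-derived pore diameter, which is how XRD and physisorption results are cross-checked.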

27 pages, 5740 KiB  
Article
Localization of Multiple GNSS Interference Sources Based on Target Detection in C/N0 Distribution Maps
by Qidong Chen, Rui Liu, Qiuzhen Yan, Yue Xu, Yang Liu, Xiao Huang and Ying Zhang
Remote Sens. 2025, 17(15), 2627; https://doi.org/10.3390/rs17152627 - 29 Jul 2025
Viewed by 321
Abstract
The localization of multiple interference sources in Global Navigation Satellite Systems (GNSS) can be achieved using carrier-to-noise ratio (C/N0) information provided by GNSS receivers, such as those embedded in smartphones. However, in increasingly prevalent complex scenarios—such as the coexistence of multiple directional interferences, increased diversity and density of GNSS interference, and the presence of multiple low-power interference sources—conventional localization methods often fail to provide reliable results, thereby limiting their applicability in real-world environments. This paper presents a multi-source interference localization method using object detection in GNSS C/N0 distribution maps. The proposed method first exploits the similarity between C/N0 data reported by GNSS receivers and image grayscale values to construct C/N0 distribution maps, thereby transforming the problem of multi-source GNSS interference localization into an object detection and localization task based on image processing techniques. Subsequently, an Oriented Squeeze-and-Excitation-based Faster Region-based Convolutional Neural Network (OSF-RCNN) framework is proposed to process the C/N0 distribution maps. Building upon the Faster R-CNN framework, the proposed method integrates an Oriented RPN (Region Proposal Network) to regress the orientation angles of directional antennas, effectively addressing their rotational characteristics. Additionally, the Squeeze-and-Excitation (SE) mechanism and the Feature Pyramid Network (FPN) are integrated at key stages of the network to improve sensitivity to small targets, thereby enhancing detection and localization performance for low-power interference sources. The simulation results verify the effectiveness of the proposed method in accurately localizing multiple interference sources under the increasingly prevalent complex scenarios described above.
(This article belongs to the Special Issue Advanced Multi-GNSS Positioning and Its Applications in Geoscience)
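The first step above, treating C/N0 readings like image intensities, is a straightforward normalization of a spatial grid of dB-Hz values into 8-bit grayscale. A minimal sketch; the 20–50 dB-Hz range is a typical-receiver assumption, not a value from the paper:

```python
def cn0_to_gray(grid, cn0_min=20.0, cn0_max=50.0):
    """Map a 2D grid of C/N0 readings (dB-Hz) to 0-255 grayscale,
    clamping values outside the assumed range."""
    span = cn0_max - cn0_min
    return [[round(255 * min(max((v - cn0_min) / span, 0.0), 1.0))
             for v in row] for row in grid]
```

Interference sources then appear as dark depressions in the map (suppressed C/N0), which is what the downstream OSF-RCNN detector is trained to localize.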

30 pages, 92065 KiB  
Article
A Picking Point Localization Method for Table Grapes Based on PGSS-YOLOv11s and Morphological Strategies
by Jin Lu, Zhongji Cao, Jin Wang, Zhao Wang, Jia Zhao and Minjie Zhang
Agriculture 2025, 15(15), 1622; https://doi.org/10.3390/agriculture15151622 - 26 Jul 2025
Viewed by 333
Abstract
During the automated picking of table grapes, the automatic recognition and segmentation of grape pedicels, along with the positioning of picking points, are vital for all subsequent operations of the harvesting robot. In the actual scene of a grape plantation, however, it is extremely difficult to accurately and efficiently identify and segment grape pedicels and then reliably locate the picking points. This is attributable to the low distinguishability between grape pedicels and the surrounding environment such as branches, as well as the impacts of other conditions like weather, lighting, and occlusion, coupled with the requirement of model deployment on edge devices with limited computing resources. To address these issues, this study proposes a novel picking point localization method for table grapes based on an instance segmentation network called Progressive Global-Local Structure-Sensitive Segmentation (PGSS-YOLOv11s) and a simple combination strategy of morphological operators. More specifically, the network PGSS-YOLOv11s is composed of the original backbone of YOLOv11s-seg, a spatial feature aggregation module (SFAM), an adaptive feature fusion module (AFFM), and a detail-enhanced convolutional shared detection head (DE-SCSH). The PGSS-YOLOv11s has been trained with a new grape segmentation dataset called Grape-⊥, which includes 4455 grape pixel-level instances with the annotation of ⊥-shaped regions. After the PGSS-YOLOv11s segments the ⊥-shaped regions of grapes, morphological operations such as erosion, dilation, and skeletonization are combined to effectively extract grape pedicels and locate picking points. Finally, several experiments have been conducted to confirm the validity, effectiveness, and superiority of the proposed method. Compared with the other state-of-the-art models, the main metrics F1 score and mask mAP@0.5 of the PGSS-YOLOv11s reached 94.6% and 95.2% on the Grape-⊥ dataset, as well as 85.4% and 90.0% on the Winegrape dataset. Multi-scenario tests indicated that the success rate of positioning the picking points reached up to 89.44%. In orchards, real-time tests on the edge device demonstrated the practical performance of our method. Nevertheless, for grapes with short pedicels or occluded pedicels, the designed morphological algorithm sometimes failed to compute picking points. In future work, we will enrich the grape dataset by collecting images under different lighting conditions, from various shooting angles, and including more grape varieties to improve the method’s generalization performance.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

18 pages, 12946 KiB  
Article
High-Resolution 3D Reconstruction of Individual Rice Tillers for Genetic Studies
by Jiexiong Xu, Jiyoung Lee, Gang Jiang and Xiangchao Gan
Agronomy 2025, 15(8), 1803; https://doi.org/10.3390/agronomy15081803 - 25 Jul 2025
Viewed by 271
Abstract
The architecture of rice tillers plays a pivotal role in yield potential, yet conventional phenotyping methods have struggled to capture these intricate three-dimensional (3D) structures with high fidelity. In this study, a 3D model reconstruction method was developed specifically for rice tillers to overcome the challenges posed by their slender, feature-poor morphology in multi-view stereo-based 3D reconstruction. By applying strategically designed colorful reference markers, high-resolution 3D tiller models of 231 rice landraces were reconstructed. Accurate phenotyping was achieved by introducing ScaleCalculator, a software tool that integrated depth images from a depth camera to calibrate the physical sizes of the 3D models. The high efficiency of the 3D model-based phenotyping pipeline was demonstrated by extracting the following seven key agronomic traits: flag leaf length, panicle length, first internode length below the panicle, stem length, flag leaf angle, second leaf angle from the panicle, and third leaf angle. Genome-wide association studies (GWAS) performed with these 3D traits identified numerous candidate genes, nine of which had been previously confirmed in the literature. This work provides a 3D phenomics solution tailored for slender organs and offers novel insights into the genetic regulation of complex morphological traits in rice.

22 pages, 5450 KiB  
Article
Optimization of a Heavy-Duty Hydrogen-Fueled Internal Combustion Engine Injector for Optimum Performance and Emission Level
by Murat Ozkara and Mehmet Zafer Gul
Appl. Sci. 2025, 15(15), 8131; https://doi.org/10.3390/app15158131 - 22 Jul 2025
Viewed by 439
Abstract
Hydrogen is a promising zero-carbon fuel for internal combustion engines; however, the geometric optimization of injectors for low-pressure direct-injection (LPDI) systems under lean-burn conditions remains underexplored. This study presents a high-fidelity optimization framework that couples a validated computational fluid dynamics (CFD) combustion model with a surrogate-assisted multi-objective genetic algorithm (MOGA). The CFD model was validated using particle image velocimetry (PIV) data from non-reacting flow experiments conducted in an optically accessible research engine developed by Sandia National Laboratories, ensuring accurate prediction of in-cylinder flow structures. The optimization focused on two critical geometric parameters: injector hole count and injection angle. Partial indicated mean effective pressure (pIMEP) and in-cylinder NOx emissions were selected as conflicting objectives to balance performance and emissions. Adaptive mesh refinement (AMR) was employed to resolve transient in-cylinder flow and combustion dynamics with high spatial accuracy. Among 22 evaluated configurations including both capped and uncapped designs, the injector featuring three holes at a 15.24° injection angle outperformed the baseline, delivering improved mixture uniformity, reduced knock tendency, and lower NOx emissions. These results demonstrate the potential of geometry-based optimization for advancing hydrogen-fueled LPDI engines toward cleaner and more efficient combustion strategies.
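With two conflicting objectives (maximize pIMEP, minimize NOx), the MOGA's selection pressure rests on Pareto dominance. A minimal sketch of extracting the non-dominated set; the design names and numbers below are made up for illustration, not the paper's 22 evaluated configurations:

```python
def pareto_front(designs):
    """Return names of non-dominated designs.  Each design is a tuple
    (name, pimep, nox); higher pimep is better, lower nox is better."""
    front = []
    for name, p, n in designs:
        dominated = any(p2 >= p and n2 <= n and (p2 > p or n2 < n)
                        for _, p2, n2 in designs)
        if not dominated:
            front.append(name)
    return front

candidates = [("baseline", 9.0, 1.0),
              ("3-hole-15deg", 9.4, 0.8),
              ("5-hole-30deg", 9.2, 0.9)]
```

A surrogate model stands in for the expensive CFD evaluation of each candidate's (pIMEP, NOx) pair; dominance filtering like this then picks which geometries survive each generation.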

24 pages, 824 KiB  
Article
MMF-Gait: A Multi-Model Fusion-Enhanced Gait Recognition Framework Integrating Convolutional and Attention Networks
by Kamrul Hasan, Khandokar Alisha Tuhin, Md Rasul Islam Bapary, Md Shafi Ud Doula, Md Ashraful Alam, Md Atiqur Rahman Ahad and Md. Zasim Uddin
Symmetry 2025, 17(7), 1155; https://doi.org/10.3390/sym17071155 - 19 Jul 2025
Viewed by 457
Abstract
Gait recognition is a reliable biometric approach that uniquely identifies individuals based on their natural walking patterns. It is widely used to recognize individuals who are challenging to camouflage and do not require a person’s cooperation. The general face-based person recognition system often [...] Read more.
Gait recognition is a reliable biometric approach that uniquely identifies individuals from their natural walking patterns. It is widely used because gait is difficult to camouflage and requires no cooperation from the subject. Conventional face-based person recognition systems often fail to determine an offender's identity when the face is concealed with a helmet or mask to evade identification. In such cases, gait-based recognition is well suited to identifying offenders, and most existing work leverages a deep learning (DL) model. However, a single model often fails to capture the full range of refined patterns in the input data when external factors are present, such as variations in viewing angle, clothing, and carrying conditions. In response, this paper introduces a fusion-based multi-model gait recognition framework that leverages convolutional neural networks (CNNs) and a vision transformer (ViT) in an ensemble to enhance recognition performance. Here, the CNNs capture spatiotemporal features, while the ViT's multiple attention layers focus on particular regions of the gait image. The first step in the framework is to obtain the Gait Energy Image (GEI) by averaging a height-normalized gait silhouette sequence over one gait cycle, which also preserves the left–right symmetry of the gait. The GEI is then fed through multiple pre-trained models that are fine-tuned to extract deep spatiotemporal features. Finally, three fusion strategies are evaluated: decision-level fusion (DLF), which applies majority voting to each model's decision; feature-level fusion (FLF), which combines the individual models' features through pointwise addition before recognition; and a hybrid fusion that combines DLF and FLF. The performance of the multi-model fusion-based framework was evaluated on three publicly available gait databases: CASIA-B, OU-ISIR D, and the OU-ISIR Large Population dataset. The experimental results demonstrate that the fusion-enhanced framework achieves superior performance. Full article
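As a rough sketch of the core operations the abstract describes (not the authors' implementation; the input format and function names are assumptions), the GEI averaging and the two fusion strategies can be written as:

```python
from collections import Counter

def gait_energy_image(silhouettes):
    """Average a sequence of equally sized, height-normalized binary
    silhouette frames pixel-wise over one gait cycle (GEI)."""
    n = len(silhouettes)
    rows, cols = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(f[r][c] for f in silhouettes) / n for c in range(cols)]
            for r in range(rows)]

def decision_level_fusion(predictions):
    """Decision-level fusion (DLF): majority vote over the class
    predicted by each model."""
    return Counter(predictions).most_common(1)[0][0]

def feature_level_fusion(feature_vectors):
    """Feature-level fusion (FLF): pointwise addition of the feature
    vectors produced by the individual models."""
    return [sum(vals) for vals in zip(*feature_vectors)]
```

The hybrid strategy in the paper combines both: the FLF result feeds a classifier whose decision is then voted alongside the individual models' decisions.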
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)

16 pages, 3840 KiB  
Article
Automated Body Condition Scoring in Dairy Cows Using 2D Imaging and Deep Learning
by Reagan Lewis, Teun Kostermans, Jan Wilhelm Brovold, Talha Laique and Marko Ocepek
AgriEngineering 2025, 7(7), 241; https://doi.org/10.3390/agriengineering7070241 - 18 Jul 2025
Abstract
Accurate body condition score (BCS) monitoring in dairy cows is essential for optimizing health, productivity, and welfare. Traditional manual scoring methods are labor-intensive and subjective, driving interest in automated imaging-based systems. This study evaluated the effectiveness of 2D imaging and deep learning for BCS classification using three camera perspectives (front, back, and top-down) to identify the most reliable viewpoint. The research involved 56 Norwegian Red milking cows at the Center for Livestock Experiments (SHF) of the Norwegian University of Life Sciences (NMBU). Images were classified into BCS categories of 2.5, 3.0, and 3.5 using a YOLOv8 model. The back view achieved the highest classification precision (mAP@0.5 = 0.439), confirming that the key morphological features for BCS assessment are best captured from this angle. Challenges included misclassification due to overlapping features, particularly between class 2.5 and the background. The study recommends improving algorithmic feature extraction, expanding the dataset, and integrating multiple views to enhance accuracy. Integration with precision farming tools would enable continuous monitoring and early detection of health issues. This research highlights the potential of 2D imaging as a cost-effective alternative to 3D systems, particularly for small and medium-sized farms, supporting more effective herd management and improved animal welfare. Full article
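For illustration only (not the study's code; all names are hypothetical), the three-category BCS labeling and the per-class precision underlying the reported mAP can be sketched as:

```python
# The three BCS categories used in the study.
BCS_CLASSES = [2.5, 3.0, 3.5]

def index_to_bcs(class_index):
    """Map a classifier's output index to its BCS category."""
    return BCS_CLASSES[class_index]

def per_class_precision(y_true, y_pred, cls):
    """Precision for one class: TP / (TP + FP).
    Averaging this over classes (at an IoU threshold of 0.5 for the
    detections) is the basis of the mAP@0.5 figure reported above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == cls and t != cls)
    return tp / (tp + fp) if (tp + fp) else 0.0
```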
(This article belongs to the Special Issue Precision Farming Technologies for Monitoring Livestock and Poultry)

24 pages, 2021 KiB  
Article
A Framework for Constructing Large-Scale Dynamic Datasets for Water Conservancy Image Recognition Using Multi-Role Collaboration and Intelligent Annotation
by Xueying Song, Xiaofeng Wang, Ganggang Zuo and Jiancang Xie
Appl. Sci. 2025, 15(14), 8002; https://doi.org/10.3390/app15148002 - 18 Jul 2025
Abstract
The construction of large-scale, dynamic datasets for specialized domain models often suffers from low efficiency and poor consistency. This paper proposes a method that integrates multi-role collaboration with automated annotation to address these issues. The framework introduces two new roles, data augmentation specialists and automatic annotation operators, to establish a closed-loop process comprising dynamic classification adjustment, data augmentation, and intelligent annotation. Two supporting tools were developed: an image classification modification tool that automatically adapts to changes in categories, and a rotation-angle-aware automatic annotation tool based on the rotation matrix algorithm. Experimental results show that the method increases annotation efficiency by 40% compared with traditional approaches while achieving 100% annotation consistency after classification modifications. The method's effectiveness was validated on the WATER-DET dataset, a collection of 1500 annotated images from the water conservancy engineering field. A model trained on this dataset achieved an F1-score of 0.9 for identifying water environment problems in rivers and lakes. This research offers an efficient framework for dynamic dataset construction, and the developed methods and tools are expected to promote the application of artificial intelligence in specialized domains. Full article
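The rotation-angle-aware annotation tool rests on the standard 2D rotation matrix [[cos θ, −sin θ], [sin θ, cos θ]]. A minimal sketch (not the authors' tool; function and parameter names are assumptions) of rotating an annotation box's corners about a pivot:

```python
import math

def rotate_box_corners(corners, angle_deg, center=(0.0, 0.0)):
    """Rotate annotation-box corner points by angle_deg (counter-
    clockwise) about `center` using the 2D rotation matrix."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    cx, cy = center
    rotated = []
    for x, y in corners:
        dx, dy = x - cx, y - cy  # translate pivot to origin
        rotated.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return rotated
```

Tracking the rotated corners (rather than an axis-aligned box) is what lets an annotation tool preserve tight boxes when images or objects are rotated during augmentation.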
