Search Results (64)

Search Parameters:
Keywords = Lucas–Kanade

23 pages, 6966 KiB  
Article
Structural Vibration Detection Using the Optimized Optical Flow Technique and UAV After Removing UAV’s Motions
by Xin Bai, Rongliang Xie, Ning Liu and Zi Zhang
Appl. Sci. 2025, 15(11), 5821; https://doi.org/10.3390/app15115821 - 22 May 2025
Abstract
Traditional structural damage detection relies on multi-sensor arrays (e.g., total stations, accelerometers, and GNSS). However, these sensors have inherent limitations such as high cost, limited accuracy, and environmental sensitivity. Advances in computer vision have driven research on vision-based structural vibration analysis and damage identification. In this study, an optimized Lucas–Kanade optical flow algorithm is proposed that integrates feature point trajectory analysis with an adaptive thresholding mechanism and improves measurement accuracy through an error vector filtering strategy. Comprehensive experimental validation demonstrates the performance of the algorithm in a variety of test scenarios. The method tracked MTS vibrations with 97% accuracy in a laboratory environment, and its robustness to camera-induced interference was confirmed by successful noise reduction using a dedicated noise-suppression algorithm. UAV field tests show that it effectively compensates for UAV-induced motion artifacts and maintains over 90% measurement accuracy in both indoor and outdoor environments. Comparative analyses show that the proposed UAV-based method is significantly more accurate than the traditional optical flow method, providing a highly robust visual monitoring solution for structural durability assessment in complex environments.
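
The error-filtering idea in this abstract can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation: it tracks sparse points with OpenCV's pyramidal Lucas–Kanade routine and discards tracks whose forward-backward re-projection error is large, one common realization of an error vector filtering strategy; the threshold value is an assumption.

```python
import cv2
import numpy as np

def track_with_error_filter(prev_gray, curr_gray, points, fb_thresh=1.0):
    """Track points (N, 1, 2) float32; keep tracks whose forward-backward
    re-projection error stays below fb_thresh pixels."""
    lk = dict(winSize=(21, 21), maxLevel=3,
              criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, points, None, **lk)
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None, **lk)
    fb_err = np.linalg.norm(points - bwd, axis=2).ravel()   # per-point error
    keep = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
    return fwd[keep], keep
```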

17 pages, 3239 KiB  
Article
MSF-SLAM: Enhancing Dynamic Visual SLAM with Multi-Scale Feature Integration and Dynamic Object Filtering
by Yongjia Duan, Jing Luo and Xiong Zhou
Appl. Sci. 2025, 15(9), 4735; https://doi.org/10.3390/app15094735 - 24 Apr 2025
Abstract
Conventional visual SLAM systems often suffer degraded pose estimation accuracy in dynamic environments due to interference from moving objects and unstable feature tracking. To address this challenge, we present an enhanced visual SLAM architecture that integrates advanced feature extraction with dynamic object filtering. At its core lies a novel Multi-Scale Feature Consolidation (MSFConv) module, developed to boost the feature extraction capabilities of the YOLOv8 network; it enables superior multi-scale feature representation, leading to significant improvements in object detection accuracy and robustness. We further develop a Dynamic Object Filtering Framework (DOFF) that integrates with the ORB-SLAM3 architecture. By leveraging the Lucas–Kanade (LK) optical flow method, DOFF distinguishes and removes dynamic feature points while preserving static ones, ensuring high-precision pose estimation in highly dynamic environments. Comprehensive experiments on the TUM RGB-D dataset validate the proposed method, demonstrating 93.34% and 94.43% improvements in pose estimation accuracy over the baseline ORB-SLAM3 in challenging dynamic sequences. These improvements stem from the synergistic combination of enhanced feature extraction and precise dynamic object filtering, offering a robust solution to the long-standing problem of dynamic environment handling and paving the way for more reliable real-world applications in robotics and autonomous systems.
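
As a rough illustration of LK-based dynamic-point filtering of the kind DOFF performs, the sketch below keeps only feature points whose displacement agrees with the dominant (presumed camera-induced) motion. The real system additionally uses YOLOv8 detections, which are omitted here, and the threshold is an assumption.

```python
import cv2
import numpy as np

def reject_dynamic_points(prev_gray, curr_gray, pts, resid_thresh=2.0):
    """pts: (N, 1, 2) float32 keypoint coordinates in prev_gray."""
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = st.ravel() == 1
    disp = (nxt - pts).reshape(-1, 2)
    dominant = np.median(disp[ok], axis=0)            # presumed camera motion
    resid = np.linalg.norm(disp - dominant, axis=1)   # deviation per point
    static = ok & (resid < resid_thresh)              # keep static-scene points
    return pts[static], nxt[static]
```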

15 pages, 4161 KiB  
Article
A Monocular Camera as an Operation Logger for Motorized Mobility Scooters: Visual Odometry Method to Estimate Steering and Throttle Angles
by Naoto Haraguchi, Yi Liu, Haruki Sugiyama, Kazunori Hase and Jun Suzurikawa
Sensors 2025, 25(9), 2701; https://doi.org/10.3390/s25092701 - 24 Apr 2025
Abstract
Motorized mobility scooters (MMSs) are vital assistive technology devices that facilitate independent living for older adults. In many cases, older adults with physical impairments operate MMSs without special licenses, increasing the risk of accidents caused by operational errors. Although sensing systems have been developed to record MMS operations and evaluate driving skills, they face challenges in clinical applications because of the complexity of installing inertial measurement units (IMUs). This study proposes a novel recording system for MMS operation that uses a compact single-lens camera and image processing. The system estimates steering and throttle angles during MMS operation using optical flow and template matching approaches. Estimation relies on road surface images captured by a single monocular camera, significantly reducing the complexity of the sensor setup. The proposed system estimated the steering angle with accuracy comparable to existing IMU-based approaches. Estimation of the throttle angle was negatively affected by the inertia of the MMS body during acceleration and deceleration but demonstrated high accuracy during stable driving conditions. This method provides a fundamental computational technique for measuring MMS operations using camera images. With its simple setup, the proposed system enhances the usability of recording systems for evaluating MMS driving skills.
(This article belongs to the Special Issue Sensors and Wearables for Rehabilitation)
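
One way to realize the template matching component described above is to score rotated copies of a road-surface template against the current frame and take the best-scoring rotation as the angle estimate. The sketch below is a hypothetical illustration of that idea, not the authors' method; the angle range, step size, and scoring metric are all assumptions.

```python
import cv2
import numpy as np

def estimate_angle(frame_gray, template_gray, angles=np.arange(-30.0, 30.5, 0.5)):
    """Return the candidate angle (degrees) whose rotated template best
    matches the frame, plus the matching score."""
    h, w = template_gray.shape
    best_angle, best_score = 0.0, -1.0
    for a in angles:
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), a, 1.0)
        rotated = cv2.warpAffine(template_gray, M, (w, h))
        score = float(cv2.matchTemplate(frame_gray, rotated, cv2.TM_CCOEFF_NORMED).max())
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle, best_score
```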

9 pages, 1408 KiB  
Article
Real-Time Integration of Optical Coherence Tomography Thickness Map Overlays for Enhanced Visualization in Epiretinal Membrane Surgery: A Pilot Study
by Ferhat Turgut, Keisuke Ueda, Amr Saad, Tahm Spitznagel, Luca von Felten, Takashi Matsumoto, Rui Santos, Marc D. de Smet, Zoltán Zsolt Nagy, Matthias D. Becker and Gábor Márk Somfai
Bioengineering 2025, 12(3), 271; https://doi.org/10.3390/bioengineering12030271 - 10 Mar 2025
Abstract
(1) Background: Epiretinal membrane peeling (MP) requires precise intraoperative visualization to achieve optimal surgical outcomes. This study investigates the integration of preoperative Optical Coherence Tomography (OCT) images into real-time surgical video feeds, providing a dynamic overlay that enhances decision-making during surgery. (2) Methods: Five MP surgeries were analyzed. Preoperative OCT images were first manually aligned with the initial frame of the surgical video by selecting five pairs of corresponding points, and a homography transformation was computed to overlay the OCT onto that first frame. For consecutive frames, feature point extraction (the Shi–Tomasi method) and optical flow computation (the Lucas–Kanade algorithm) were used to calculate frame-by-frame transformations, which were applied to the OCT image to maintain alignment in near real time. (3) Results: The method achieved a 92.7% success rate in optical flow detection and maintained an average processing speed of 7.56 frames per second (FPS), demonstrating the feasibility of near real-time application. (4) Conclusions: The developed approach enhances intraoperative visualization, helping surgeons identify retinal structures more easily and supporting more comprehensive, data-driven decisions. By improving surgical precision while potentially reducing complications, this technique benefits both surgeons and patients. Furthermore, the integration of OCT overlays holds promise for advancing robot-assisted surgery and surgical training protocols. This pilot study establishes the feasibility of real-time OCT integration in MP and opens avenues for broader applications in vitreoretinal procedures.
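
The frame-to-frame propagation described in the Methods can be sketched compactly: Shi–Tomasi corners are tracked with Lucas–Kanade flow, a homography is fitted to the surviving correspondences, and the OCT overlay is re-warped. This is an illustrative reconstruction under stated assumptions, not the study's code; the initial manual five-point homography is assumed to be given.

```python
import cv2
import numpy as np

def propagate_overlay(prev_gray, curr_gray, oct_img, H_prev):
    """H_prev maps the OCT image into the previous frame; returns the
    overlay warped into the current frame and the updated homography."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)  # Shi-Tomasi
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = st.ravel() == 1
    H_step, _ = cv2.findHomography(pts[ok], nxt[ok], cv2.RANSAC, 3.0)
    H_curr = H_step @ H_prev                 # chain per-frame motion onto H0
    h, w = curr_gray.shape
    return cv2.warpPerspective(oct_img, H_curr, (w, h)), H_curr
```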

19 pages, 8171 KiB  
Article
Research on Error Point Deletion Technique in Three-Dimensional Reconstruction of ISAR Sequence Images
by Mingyu Ma and Yingni Hou
Sensors 2025, 25(6), 1689; https://doi.org/10.3390/s25061689 - 8 Mar 2025
Abstract
Three-dimensional reconstruction using two-dimensional inverse synthetic aperture radar (ISAR) faces dual challenges: geometric distortion in initial point clouds caused by accumulated feature-matching errors, and degraded reconstruction accuracy due to point cloud outlier interference. This paper proposes an optimized method for deleting error points based on motion vector features and local spatial point cloud density. Before reconstruction, feature point extraction and matching for ISAR sequence images are performed using Harris corner detection and the improved Kanade–Lucas–Tomasi (KLT) algorithm. To address mismatched points, a method based on motion vector features is proposed; it applies dual constraints of motion distance and direction thresholds and deletes mismatched points based on local motion consistency. After point cloud reconstruction, a clustering method based on local spatial point cloud density is employed to effectively remove outliers. To validate the effectiveness of the proposed method, simulation experiments comparing the performance of different approaches are conducted. The experimental results demonstrate the effectiveness and robustness of the proposed method in the 3D reconstruction of moving targets.
(This article belongs to the Section Sensing and Imaging)
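
The dual-constraint mismatch deletion described above can be sketched as follows: a matched pair survives only if its motion vector agrees with the consensus in both length and direction. In this sketch a global median stands in for the paper's local consistency check, and the threshold values are illustrative assumptions.

```python
import numpy as np

def filter_matches(p0, p1, dist_tol=0.5, ang_tol_deg=15.0):
    """p0, p1: (N, 2) matched feature coordinates; returns a boolean keep mask."""
    vec = p1 - p0
    dist = np.linalg.norm(vec, axis=1)
    ang = np.degrees(np.arctan2(vec[:, 1], vec[:, 0]))
    med_dist, med_ang = np.median(dist), np.median(ang)
    ang_diff = np.abs((ang - med_ang + 180.0) % 360.0 - 180.0)  # wrapped to [0, 180]
    keep = (np.abs(dist - med_dist) <= dist_tol * med_dist) & (ang_diff <= ang_tol_deg)
    return keep
```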

15 pages, 28683 KiB  
Article
Neural Radiance Field Dynamic Scene SLAM Based on Ray Segmentation and Bundle Adjustment
by Yuquan Zhang and Guosheng Feng
Sensors 2025, 25(6), 1679; https://doi.org/10.3390/s25061679 - 8 Mar 2025
Abstract
Current neural implicit SLAM methods have demonstrated excellent performance in reconstructing ideal static 3D scenes. However, handling real scenes with drastic lighting changes and dynamic environments remains a significant challenge for these methods. This paper proposes a neural implicit SLAM method that effectively deals with dynamic scenes. We employ a keyframe selection and tracking switching approach based on Lucas–Kanade (LK) optical flow, which serves to construct priors for the Conditional Random Field potential function. This yields a semantics-based joint estimation of dynamic and static pixels, with corresponding loss functions that impose constraints on dynamic scenes. We conduct experiments on various dynamic and challenging scene datasets, including TUM RGB-D, Openloris, and Bonn. The results demonstrate that our method significantly outperforms existing neural implicit SLAM systems in terms of reconstruction quality and tracking accuracy.
(This article belongs to the Special Issue 3D Reconstruction with RGB-D Cameras and Multi-sensors)

21 pages, 4789 KiB  
Article
Machine-Learning-Based Activity Tracking for Individual Pig Monitoring in Experimental Facilities for Improved Animal Welfare in Research
by Frederik Deutch, Marc Gjern Weiss, Stefan Rahr Wagner, Lars Schmidt Hansen, Frederik Larsen, Constanca Figueiredo, Cyril Moers and Anna Krarup Keller
Sensors 2025, 25(3), 785; https://doi.org/10.3390/s25030785 - 28 Jan 2025
Abstract
In experimental research, animal welfare should always be of the highest priority. Currently, physical in-person observations are the standard; this is time-consuming, and the results are subjective. Video-based machine learning models for monitoring experimental pigs provide a continuous and objective observation method for detecting animals that are failing to thrive. The aim of this study was to develop and validate a pig tracking technology that uses video-based data in a machine learning model to analyze the posture and activity level of experimental pigs living in single-pig pens. A research prototype was built from a microcomputer and a ceiling-mounted camera for live recording in the experimental facility, and an object detection model based on Ultralytics YOLOv8n was trained on the captured images. As a second step, the Lucas–Kanade sparse optical flow technique was applied for movement detection. The resulting model successfully classified whether individual pigs were lying, standing, or walking. The validation test showed an accuracy of 90.66%, precision of 90.91%, recall of 90.66%, and correlation coefficient of 84.53% compared with observed ground truth. In conclusion, the model demonstrates how machine learning can be used to monitor experimental animals and potentially improve animal welfare.
(This article belongs to the Special Issue Feature Papers in Sensing and Imaging 2024)
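
A minimal sketch of the second (optical flow) stage might look like the following: the mean sparse-flow magnitude inside a detected pig's bounding box is thresholded to decide whether the animal is moving, while posture (lying vs. standing) comes from the detector's class. The threshold and helper name are assumptions, not the authors' published values.

```python
import cv2
import numpy as np

def is_walking(prev_gray, curr_gray, box, walk_thresh_px=2.0):
    """True if mean sparse LK flow inside the detection box exceeds the
    threshold; posture classification is left to the object detector."""
    x1, y1, x2, y2 = box
    roi0, roi1 = prev_gray[y1:y2, x1:x2], curr_gray[y1:y2, x1:x2]
    pts = cv2.goodFeaturesToTrack(roi0, maxCorners=100,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return False                         # no texture to track: stationary
    nxt, st, _ = cv2.calcOpticalFlowPyrLK(roi0, roi1, pts, None)
    mag = np.linalg.norm((nxt - pts).reshape(-1, 2), axis=1)[st.ravel() == 1]
    return bool(mag.size) and float(mag.mean()) > walk_thresh_px
```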

21 pages, 6103 KiB  
Article
UAVs-Based Visual Localization via Attention-Driven Image Registration Across Varying Texture Levels
by Yan Ren, Guohai Dong, Tianbo Zhang, Meng Zhang, Xinyu Chen and Mingliang Xue
Drones 2024, 8(12), 739; https://doi.org/10.3390/drones8120739 - 9 Dec 2024
Abstract
This study investigates the difficulties in image registration caused by variations in perspective, lighting, and ground object detail between drone-captured images and satellite imagery, and proposes an image registration and drone visual localization algorithm based on an attention mechanism. First, an improved Oriented FAST and Rotated BRIEF (ORB) algorithm incorporating a quadtree-based feature point homogenization method is designed to extract image feature points, supporting the initial motion estimation of UAVs. Next, we combine a convolutional neural network and an attention mechanism with the inverse compositional Lucas–Kanade method to further extract image features, enabling efficient registration of drone images with satellite tiles. Finally, we use the registration results to correct the drone's initial motion estimate and accurately determine its location. Our experimental findings indicate that the proposed algorithm achieves an average absolute positioning error of less than 40 m for low-texture flight paths and under 10 m for high-texture paths, significantly mitigating the positioning challenges that arise from inconsistencies between drone images and satellite maps. Moreover, our method demonstrates a notable improvement in computational speed compared to existing algorithms.
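
Quadtree-based feature homogenization can be approximated with a simpler grid bucketing, sketched below: ORB keypoints are detected, and only the strongest keypoint per cell is kept so features spread evenly across the image. This is an illustrative stand-in for the paper's quadtree scheme; the grid size and feature budget are assumptions.

```python
import cv2

def homogenized_orb(gray, grid=(8, 8), total=400):
    """Detect ORB keypoints, keep the strongest per grid cell, then compute
    descriptors for the surviving, evenly spread keypoints."""
    orb = cv2.ORB_create(nfeatures=total * 4)
    kps = orb.detect(gray, None)
    h, w = gray.shape
    best = {}
    for kp in kps:
        cx = min(int(kp.pt[0] * grid[0] / w), grid[0] - 1)
        cy = min(int(kp.pt[1] * grid[1] / h), grid[1] - 1)
        if (cx, cy) not in best or kp.response > best[(cx, cy)].response:
            best[(cx, cy)] = kp
    strongest = sorted(best.values(), key=lambda k: -k.response)[:total]
    return orb.compute(gray, strongest)   # (keypoints, descriptors)
```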

28 pages, 20242 KiB  
Article
PLM-SLAM: Enhanced Visual SLAM for Mobile Robots in Indoor Dynamic Scenes Leveraging Point-Line Features and Manhattan World Model
by Jiale Liu and Jingwen Luo
Electronics 2024, 13(23), 4592; https://doi.org/10.3390/electronics13234592 - 21 Nov 2024
Abstract
This paper proposes an enhanced visual simultaneous localization and mapping (vSLAM) algorithm tailored for mobile robots operating in indoor dynamic scenes. By incorporating point-line features and leveraging the Manhattan world model, the proposed PLM-SLAM framework significantly improves localization accuracy and map consistency. This algorithm optimizes the line features detected by the Line Segment Detector (LSD) through merging and pruning strategies, ensuring real-time performance. Subsequently, dynamic point-line features are rejected based on Lucas–Kanade (LK) optical flow, geometric constraints, and depth information, minimizing the impact of dynamic objects. The Manhattan world model is then utilized to reduce rotational estimation errors and optimize pose estimation. High-precision line feature matching and loop closure detection mechanisms further enhance the robustness and accuracy of the system. Experimental results demonstrate the superior performance of PLM-SLAM, particularly in high-dynamic indoor environments, outperforming existing state-of-the-art methods.

22 pages, 5456 KiB  
Article
Computer-Vision-Aided Deflection Influence Line Identification of Concrete Bridge Enhanced by Edge Detection and Time-Domain Forward Inference
by Jianfeng Chen, Long Zhao, Yuliang Feng and Zhiwei Chen
Buildings 2024, 14(11), 3537; https://doi.org/10.3390/buildings14113537 - 5 Nov 2024
Abstract
To enhance the accuracy and efficiency of non-contact deflection response measurement of concrete bridges and address the ill-conditioned nature of the inverse problem in influence line (IL) identification, this study introduces a computer-vision-aided deflection IL identification method that integrates edge detection and time-domain forward inference (TDFI). The methodology leverages computer vision with edge detection to surpass traditional contact-based measurement methods, greatly enhancing the operational efficiency and applicability of IL identification and, in particular, addressing the challenge of accurately measuring small deflections in concrete bridges. To mitigate the limitations of the Lucas–Kanade (LK) optical flow method, such as unclear feature points within the camera’s field of view and occasional point loss in certain video frames, an edge detection technique is employed to identify maximum values in the first-order derivatives of the image, creating virtual tracking points at the bridge edges through image processing. By precisely defining the bridge boundaries, only the essential structural attributes are preserved, improving the reliability of measuring the minimal deflection deformations that occur under vehicular loads. To tackle the ill-posed nature of the inverse problem, a TDFI model is introduced to identify the IL, recursively capturing the static response generated by the bridge under the successive axles of a multi-axle vehicle; the IL is then computed by dividing the response by the weight of the preceding axle. Furthermore, an axle weight ratio reduction coefficient is proposed to mitigate noise amplification, ensuring that the weight of the preceding axle surpasses that of any other axle. To validate the accuracy and robustness of the proposed method, it is applied to numerical examples of a simply supported concrete beam, indoor experiments on a similar beam, and field tests on a three-span continuous concrete beam bridge.
(This article belongs to the Special Issue Study on Concrete Structures)
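
The virtual tracking point construction described above can be sketched briefly: per image column, take the row where the vertical intensity gradient peaks as the bridge edge, yielding trackable points even where no distinct corners exist. This is a minimal illustration assuming a roughly horizontal bridge edge; the pipeline's other pre-processing is omitted.

```python
import cv2
import numpy as np

def virtual_edge_points(gray, x_positions):
    """For each requested column, return the (x, y) of the strongest
    vertical intensity gradient, i.e., the sharpest horizontal edge."""
    grad = np.abs(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3))  # d(image)/dy
    return np.float32([(x, int(np.argmax(grad[:, x]))) for x in x_positions])
```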

15 pages, 17155 KiB  
Article
River Surface Velocity Measurement for Rapid Levee Breach Emergency Response Based on DFP-P-LK Algorithm
by Zhao-Dong Xu, Zhi-Wei Zhang, Ying-Qing Guo, Yan Zhang and Yang Zhan
Sensors 2024, 24(16), 5249; https://doi.org/10.3390/s24165249 - 14 Aug 2024
Abstract
In recent years, the increasing frequency of extreme weather events driven by climate change has significantly elevated the risk of levee breaches, potentially triggering large-scale floods that threaten surrounding environments and public safety. Rapid and accurate measurement of river surface velocities is crucial for developing effective emergency response plans. Video image velocimetry has emerged as a powerful approach due to its non-invasive nature, ease of operation, and low cost. This paper introduces the Dynamic Feature Point Pyramid Lucas–Kanade (DFP-P-LK) optical flow algorithm, which employs a dynamic feature point update and fusion strategy. The algorithm ensures accurate feature point extraction and reliable tracking through feature point fusion detection and dynamic update mechanisms, enhancing the robustness of optical flow estimation. Building on DFP-P-LK, we propose a river surface velocity measurement model for rapid levee breach emergency response that converts the acquired optical flow motion into actual flow velocities via an optical flow-to-velocity conversion, providing critical data support for emergency response. Experimental results show that the method achieves an average measurement error below 15% within the velocity range of 0.43 m/s to 2.06 m/s, demonstrating high practical value and reliability.
(This article belongs to the Section Intelligent Sensors)
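
The optical flow-to-velocity conversion amounts to scaling pixel displacements per frame into metres per second via the ground sampling distance and the frame rate. The sketch below illustrates that arithmetic under the assumption that the ground sampling distance is known from calibration; it is not the paper's full conversion model.

```python
import numpy as np

def surface_velocity(flow_px, fps, gsd_m_per_px):
    """flow_px: (N, 2) per-frame pixel displacements of tracked surface
    features; returns a robust surface speed estimate in m/s."""
    speeds = np.linalg.norm(flow_px, axis=1) * gsd_m_per_px * fps
    return float(np.median(speeds))   # median suppresses outlier tracks
```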

17 pages, 4413 KiB  
Article
Super-Resolution Reconstruction of an Array Lidar Range Profile
by Xuelian Liu, Xulang Zhou, Guan Xi, Rui Zhuang, Chunhao Shi and Chunyang Wang
Appl. Sci. 2024, 14(12), 5335; https://doi.org/10.3390/app14125335 - 20 Jun 2024
Abstract
To address the low resolution of current array lidar range profiles, which retain few target details and little edge information, a super-resolution reconstruction method based on projection onto convex sets (POCS) that combines the Lucas–Kanade (LK) optical flow method with a Gaussian pyramid is proposed. First, a reference high-resolution range profile is obtained by nearest-neighbor interpolation of a single low-resolution range profile. Second, the LK optical flow method is introduced to estimate the motion of low-resolution image sequences, and the Gaussian pyramid is used to apply multi-scale correction to the estimated vectors, effectively improving motion estimation accuracy. On top of the data consistency constraints, gradient constraints based on the distance difference between the target edge and the background are introduced to enhance reconstruction of the target edges. Finally, the residual between the estimated and actual distances is calculated, and the high-resolution reference range profile is iteratively corrected with the point spread function according to the residual. Bilinear interpolation, bicubic interpolation, POCS, POCS with an adaptive correction threshold, and the proposed method were used to reconstruct range profiles from the redwood-3dscan dataset and from real measured data, and the effectiveness of the proposed method was verified by the reconstruction results and objective evaluation indices. The experimental results show that the proposed method improves on the interpolation and POCS methods: in the redwood-3dscan experiments, its average gradient (AG) increases by at least 8.04% and its edge strength (ES) by at least 4.84% over traditional POCS; in the real data experiments, AG increases by at least 5.85% and ES by at least 7.01%. This demonstrates that the proposed method effectively improves the resolution of reconstructed range maps and the quality of detail edges.

29 pages, 1651 KiB  
Article
Quaternion-Based Attitude Estimation of an Aircraft Model Using Computer Vision
by Pavithra Kasula, James F. Whidborne and Zeeshan A. Rana
Sensors 2024, 24(12), 3795; https://doi.org/10.3390/s24123795 - 12 Jun 2024
Abstract
Investigating aircraft flight dynamics often requires dynamic wind tunnel testing. This paper proposes a non-contact, off-board instrumentation method using vision-based techniques. The method utilises a sequential process of Harris corner detection, Kanade–Lucas–Tomasi tracking, and quaternions to identify the Euler angles from a pair of cameras, one with a side view and the other with a top view. The method validation involves simulating a 3D CAD model for rotational motion with a single degree-of-freedom. The numerical analysis quantifies the results, while the proposed approach is analysed analytically. This approach results in a 45.41% enhancement in accuracy over an earlier direction cosine matrix method. Specifically, the quaternion-based method achieves root mean square errors of 0.0101 rad/s, 0.0361 rad/s, and 0.0036 rad/s for the dynamic measurements of roll rate, pitch rate, and yaw rate, respectively. Notably, the method exhibits a 98.08% accuracy for the pitch rate. These results highlight the performance of quaternion-based attitude estimation in dynamic wind tunnel testing. Furthermore, an extended Kalman filter is applied to integrate the generated on-board instrumentation data (inertial measurement unit, potentiometer gimbal) and the results of the proposed vision-based method. The extended Kalman filter state estimation achieves root mean square errors of 0.0090 rad/s, 0.0262 rad/s, and 0.0034 rad/s for the dynamic measurements of roll rate, pitch rate, and yaw rate, respectively. This method exhibits an improved accuracy of 98.61% for the estimation of pitch rate, indicating its higher efficiency over the standalone implementation of the direction cosine method for dynamic wind tunnel testing.
(This article belongs to the Special Issue Sensors in Aircraft (Volume II))
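
The quaternion-based attitude step ultimately requires converting a unit quaternion into Euler angles. A minimal conversion in the common ZYX (yaw-pitch-roll) convention is sketched below; the convention choice is an assumption, since the listing does not restate it.

```python
import numpy as np

def quat_to_euler(q):
    """q = (w, x, y, z), unit quaternion -> (roll, pitch, yaw) in radians,
    ZYX convention."""
    w, x, y, z = q
    roll = np.arctan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2.0 * (w * y - z * x), -1.0, 1.0))  # clamp for safety
    yaw = np.arctan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```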

18 pages, 11485 KiB  
Article
Gas–Liquid Two-Phase Flow Measurement Based on Optical Flow Method with Machine Learning Optimization Model
by Junxian Wang, Zhenwei Huang, Ya Xu and Dailiang Xie
Appl. Sci. 2024, 14(9), 3717; https://doi.org/10.3390/app14093717 - 26 Apr 2024
Abstract
Gas–liquid two-phase flows are common in industrial production processes. Since these flows inherently consist of discrete phases, accurately measuring their flow parameters is challenging. In this context, a novel approach is proposed that combines the pyramidal Lucas–Kanade (LK) optical flow method with the Split Comparison (SC) model measurement method. Videos of gas–liquid two-phase flows are captured with a camera, and optical flow data are extracted from them using the pyramidal LK optical flow detection method. To address the clutter in the extracted optical flow data, a dynamic median screening method is introduced to optimize the corner points used in the optical flow calculations. Machine learning algorithms are employed for the prediction model, yielding high flow prediction accuracy in experimental tests. Results demonstrate that the gradient boosted regression (GBR) model is the most effective of the five preset models, and the optimized SC model significantly improves measurement accuracy over the GBR model alone, achieving an R² of 0.97, an RMSE of 0.74 m³/h, an MAE of 0.52 m³/h, and a MAPE of 8.0%. This method offers a new approach for monitoring flows in industrial production processes such as oil and gas.
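
The prediction stage pairs optical-flow statistics with a gradient boosted regressor. The sketch below shows that pattern with scikit-learn on synthetic placeholder features; the feature set, targets, and hyperparameters are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 4))    # placeholder flow features (e.g., mean/std flow magnitude)
y = X @ np.array([3.0, 1.5, 0.5, 2.0]) + 0.1 * rng.standard_normal(200)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```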

25 pages, 9328 KiB  
Article
A Hybrid PIV/Optical Flow Method for Incompressible Turbulent Flows
by Luís P. N. Mendes, Ana M. C. Ricardo, Alexandre J. M. Bernardino and Rui M. L. Ferreira
Water 2024, 16(7), 1021; https://doi.org/10.3390/w16071021 - 1 Apr 2024
Abstract
We present novel velocimetry algorithms based on the hybridization of correlation-based Particle Image Velocimetry (PIV) with a combination of Lucas–Kanade and Liu–Shen optical flow (OpF) methods. An efficient Aparapi/OpenCL implementation of these methods is provided in the accompanying open-source QuickLabPIV-ng tool, which includes a graphical user interface (GUI). Two hybridization options were developed and tested: OpF as a final step after correlation-based PIV, and OpF as a substitute for sub-pixel interpolation. Hybridization increases the spatial resolution of PIV, enabling the characterization of small turbulent scales and the computation of key turbulence parameters such as the rate of dissipation of turbulent kinetic energy. The method was evaluated on both synthetic and real databases representing flows with a variety of locally isotropic homogeneous turbulent scales. The proposed hybrid PIV-OpF yields a 3-fold increase in PIV vector density for synthetic images. Analysis of power spectral density functions and auto-correlations demonstrated the impact of PIV image quality on the accuracy of the method and its ability to extend the resolved turbulence range. We discuss the challenges that optical noise and tracer density pose to the quality of the resulting vector maps.
(This article belongs to the Section Hydraulics and Hydrodynamics)