Search Results (35)

Search Parameters:
Keywords = trajectory color image

13 pages, 729 KB  
Article
A Single-Neuron-per-Class Readout for Image-Encoded Sensor Time Series
by David Bernal-Casas and Jaime Gallego
Mathematics 2025, 13(24), 3893; https://doi.org/10.3390/math13243893 - 5 Dec 2025
Viewed by 220
Abstract
We introduce an ultra-compact, single-neuron-per-class end-to-end readout for binary classification of noisy, image-encoded sensor time series. The approach compares a linear single-unit perceptron (E2E-MLP-1) with a resonate-and-fire (RAF) neuron (E2E-RAF-1), which merges feature selection and decision-making in a single block. Beyond empirical evaluation, we provide a mathematical analysis of the RAF readout: starting from its subthreshold ordinary differential equation, we derive the transfer function H(jω), characterize the frequency response, and relate the output signal-to-noise ratio (SNR) to |H(jω)|² and the noise power spectral density Sₙ(ω) ∝ ω^α (brown, pink, and blue noise). We present a stable discrete-time implementation compatible with surrogate gradient training and discuss the associated stability constraints. As a case study, we classify walk-in-place (WIP) locomotion in a virtual reality (VR) environment using a vision-based motion encoding (72 × 56 grayscale) derived from 3D trajectories, comprising 44,084 samples from 15 participants. On clean data, both single-neuron-per-class models approach ceiling accuracy. Under colored noise, however, the RAF readout yields consistent gains (typically +5–8% absolute accuracy at medium/high perturbations), indicative of intrinsic band-selective filtering induced by resonance. With ∼8 k parameters and sub-2 ms inference on commodity graphics processing units (GPUs), the RAF readout provides a mathematically grounded, robust, and efficient alternative for stochastic signal processing across domains, with virtual reality locomotion used here as an illustrative validation. Full article
(This article belongs to the Special Issue Computer Vision, Image Processing Technologies and Machine Learning)
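The frequency-domain SNR argument in the abstract can be illustrated with a short sketch. The subthreshold ODE form, the resonance frequency `w0`, the damping `b`, and the integration range below are illustrative assumptions, not values or derivations taken from the paper:

```python
import numpy as np

# Assumed second-order resonate-and-fire subthreshold dynamics:
#   u'' + 2*b*u' + w0^2 * u = w0^2 * I(t)
# which gives the transfer function H(jw) = w0^2 / (w0^2 - w^2 + 2j*b*w).
# w0 and b are placeholder parameters for illustration only.

def raf_gain_sq(w, w0=2 * np.pi * 5.0, b=2.0):
    """Squared magnitude |H(jw)|^2 of the assumed second-order resonator."""
    H = w0**2 / (w0**2 - w**2 + 2j * b * w)
    return np.abs(H) ** 2

def output_snr(alpha, w0=2 * np.pi * 5.0, b=2.0, n=4096):
    """Output SNR for a tone at the resonance peak under colored noise with
    PSD ~ w^alpha (alpha = -2 brown, -1 pink, +1 blue), via a Riemann sum."""
    w = np.linspace(0.5, 4 * w0, n)
    g = raf_gain_sq(w, w0, b)
    noise_power = np.sum(g * w**alpha) * (w[1] - w[0])
    return g.max() / noise_power

for alpha, name in [(-2, "brown"), (-1, "pink"), (1, "blue")]:
    print(f"{name}: output SNR = {output_snr(alpha):.4f}")
```

The band-selective behavior the abstract attributes to resonance shows up as a gain peak near the resonance frequency and attenuation away from it.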

20 pages, 8413 KB  
Article
An Analytical and Numerical Study of Wear Distribution on the Combine Harvester Header Platform: Model Development, Comparison, and Experimental Validation
by Honglei Zhang, Zhong Tang, Liquan Tian, Tiantian Jing and Biao Zhang
Lubricants 2025, 13(11), 482; https://doi.org/10.3390/lubricants13110482 - 30 Oct 2025
Viewed by 605
Abstract
The header platform of a combine harvester is subjected to severe abrasive and corrosive wear from rice stalks and environmental factors, which significantly limits its service life and operational efficiency. Accurately predicting the complex distribution of this wear over time and across the platform’s surface, however, remains a significant challenge. This paper, for the first time, systematically establishes a quantitative mapping relationship from “material motion trajectory” to “component wear profile” and introduces a novel method for time-sequence wear validation based on corrosion color gradients, providing a complete research paradigm to address this challenge. To this end, an analytical model based on rigid-body dynamics was first developed to predict the motion trajectory of a single rice stalk. Subsequently, a full-scale Discrete Element Method (DEM) model of the header platform–flexible rice stalk system was constructed. This model simulated the complex flow process of the rice population with high fidelity and was used to analyze the influence of key operating parameters (spiral auger rotational speed, cutting width) on wear distribution. Finally, real-world wear data were obtained through in situ mapping of a header platform after long-term service (1300 h) and multi-period (0–1600 h) image analysis. Through a three-way quantitative comparison among the theoretical trajectory, the simulated trajectory, and the actual wear profile, the results indicate that the simulated and theoretical trajectories are in good agreement in terms of their macroscopic trends (Mean Squared Error, MSE, ranging from 0.4 to 6.2); the simulated and actual wear profiles exhibit an extremely high degree of geometric similarity, with the simulated wear area showing a 95.1% match to the actual measured area (Edit Distance: 0.14; Hamming Distance: 1). This research not only confirms that the flow trajectory of rice is the determining factor for the wear distribution on the header platform but, more importantly, shows that the developed analytical and numerical methods offer a robust theoretical basis and effective predictive tools for optimizing the wear resistance and predicting the service life of the header platform, thereby demonstrating significant engineering value. Full article
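The MSE quoted for the trajectory comparison can be sketched as below; the point-wise pairing of samples is an assumption, since the abstract does not state how the trajectories are aligned before comparison:

```python
import numpy as np

def trajectory_mse(traj_a, traj_b):
    """Mean squared error between two trajectories sampled at matching
    instants, given as (N, 2) arrays of planar coordinates. Point-wise
    correspondence is assumed; the paper may align trajectories differently."""
    a = np.asarray(traj_a, dtype=float)
    b = np.asarray(traj_b, dtype=float)
    return float(np.mean(np.sum((a - b) ** 2, axis=1)))

# Identical trajectories give zero error:
theory = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 2.0]])
print(trajectory_mse(theory, theory))  # 0.0
```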

24 pages, 9636 KB  
Article
Finite-Time Modified Function Projective Synchronization Between Different Fractional-Order Chaotic Systems Based on RBF Neural Network and Its Application to Image Encryption
by Ruihong Li, Huan Wang and Dongmei Huang
Fractal Fract. 2025, 9(10), 659; https://doi.org/10.3390/fractalfract9100659 - 13 Oct 2025
Viewed by 479
Abstract
This paper innovatively achieves finite-time modified function projection synchronization (MFPS) for different fractional-order chaotic systems. By leveraging the advantages of radial basis function (RBF) neural networks in nonlinear approximation, this paper proposes a novel fractional-order sliding-mode controller. It is designed to address the issues of system model uncertainty and external disturbances. Based on Lyapunov stability theory, it has been demonstrated that the error trajectory can converge to the equilibrium point along the sliding surface within a finite time. Subsequently, the finite-time MFPS of the fractional-order hyperchaotic Chen system and fractional-order chaotic entanglement system are realized under conditions of periodic and noise disturbances, respectively. The effects of the neural network parameters on the performance of the MFPS are then analyzed in depth. Finally, a color image encryption scheme is presented integrating the above MFPS method and exclusive-or operation, and its effectiveness and security are illustrated through numerical simulation and statistical analysis. In the future, we will further explore the application of fractional-order chaotic system MFPS in other fields, providing new theoretical support for interdisciplinary research. Full article
(This article belongs to the Special Issue Advances in Dynamics and Control of Fractional-Order Systems)

14 pages, 6767 KB  
Article
Reduction of Visual Artifacts in Laser Beam Scanning Displays
by Peng Zhou, Huijun Yu, Xiaoguang Li, Wenjiang Shen and Dongmin Wu
Micromachines 2025, 16(8), 949; https://doi.org/10.3390/mi16080949 - 19 Aug 2025
Viewed by 3731
Abstract
Laser beam scanning (LBS) projection systems based on MEMS micromirrors offer advantages such as compact size, low power consumption, and vivid color performance, making them well suited for applications like AR glasses and portable projectors. Among various scanning methods, raster scanning is widely adopted; however, it suffers from artifacts such as dark bands between adjacent scanning lines and non-uniform distribution of the scanning trajectory relative to the original image. These issues degrade the overall viewing experience. In this study, we address these problems by introducing random variations to the slow-axis driving signal to alter the vertical offset of the scanning trajectories between different scan cycles. The variation is defined as an integer multiple of 1/8 of the fast-axis scanning period (1/fₕ). Due to the temporal integration effect of human vision, trajectories from different cycles overlap, thereby enhancing the scanning fill factor relative to the target image area. The simulation and experimental results demonstrate that the maximum ratio of non-uniform line spacing is reduced from 7:1 to 1:1, and the modulation of the scanned display image is reduced to 0.0006, below the human eye’s contrast threshold of 0.0039 under the given experimental conditions. This method effectively addresses scanning display artifacts without requiring additional hardware modifications. Full article
(This article belongs to the Special Issue Recent Advances in MEMS Mirrors)

24 pages, 90648 KB  
Article
An Image Encryption Method Based on a Two-Dimensional Cross-Coupled Chaotic System
by Caiwen Chen, Tianxiu Lu and Boxu Yan
Symmetry 2025, 17(8), 1221; https://doi.org/10.3390/sym17081221 - 2 Aug 2025
Cited by 1 | Viewed by 885
Abstract
Chaotic systems have demonstrated significant potential in the field of image encryption due to their extreme sensitivity to initial conditions, inherent unpredictability, and pseudo-random behavior. However, existing chaos-based encryption schemes still face several limitations, including narrow chaotic regions, discontinuous chaotic ranges, uneven trajectory distributions, and fixed pixel processing sequences. These issues substantially hinder the security and efficiency of such algorithms. To address these challenges, this paper proposes a novel hyperchaotic map, termed the two-dimensional cross-coupled chaotic map (2D-CFCM), derived from a newly designed 2D cross-coupled chaotic system. The proposed 2D-CFCM exhibits enhanced randomness, greater sensitivity to initial values, a broader chaotic region, and a more uniform trajectory distribution, thereby offering stronger security guarantees for image encryption applications. Based on the 2D-CFCM, an innovative image encryption method was further developed, incorporating efficient scrambling and forward and reverse random multidirectional diffusion operations with symmetrical properties. Through simulation tests on images of varying sizes and resolutions, including color images, the results demonstrate the strong security performance of the proposed method. This method has several remarkable features, including an extremely large key space (greater than 2⁹¹²), extremely high key sensitivity, nearly ideal entropy value (greater than 7.997), extremely low pixel correlation (less than 0.04), and excellent resistance to differential attacks (with the average values of NPCR and UACI being 99.6050% and 33.4643%, respectively). Compared to existing encryption algorithms, the proposed method provides significantly enhanced security. Full article
(This article belongs to the Special Issue Symmetry in Chaos Theory and Applications)
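The differential-attack metrics reported above, NPCR (number of pixels change rate) and UACI (unified average changing intensity), have standard definitions for 8-bit images, sketched here on a synthetic image pair:

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR and UACI (both in %) between two cipher images of equal shape,
    using the standard definitions for 8-bit images."""
    c1 = np.asarray(c1, dtype=np.int64)
    c2 = np.asarray(c2, dtype=np.int64)
    npcr = 100.0 * np.mean(c1 != c2)                 # fraction of differing pixels
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)  # mean relative intensity change
    return npcr, uaci

# Synthetic example: two unrelated random "cipher" images. For a strong
# cipher, NPCR approaches 99.6% and UACI approaches 33.46%.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64))
b = rng.integers(0, 256, size=(64, 64))
print(npcr_uaci(a, b))
```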

27 pages, 9977 KB  
Article
Mergeable Probabilistic Voxel Mapping for LiDAR–Inertial–Visual Odometry
by Balong Wang, Nassim Bessaad, Huiying Xu, Xinzhong Zhu and Hongbo Li
Electronics 2025, 14(11), 2142; https://doi.org/10.3390/electronics14112142 - 24 May 2025
Cited by 1 | Viewed by 2241
Abstract
To address the limitations of existing LiDAR–visual fusion methods in adequately accounting for map uncertainties induced by LiDAR measurement noise, this paper introduces a LiDAR–inertial–visual odometry framework leveraging mergeable probabilistic voxel mapping. The method innovatively employs probabilistic voxel models to characterize uncertainties in environmental geometric plane features and optimizes computational efficiency through a voxel merging strategy. Additionally, it integrates color information from cameras to further enhance localization accuracy. Specifically, in the LiDAR–inertial odometry (LIO) subsystem, a probabilistic voxel plane model is constructed for LiDAR point clouds to explicitly represent measurement noise uncertainty, thereby improving the accuracy and robustness of point cloud registration. A voxel merging strategy based on the union-find algorithm is introduced to merge coplanar voxel planes, reducing computational load. In the visual–inertial odometry (VIO) subsystem, image tracking points are generated through a global map projection, and outlier points are eliminated using a random sample consensus algorithm based on a dynamic Bayesian network. Finally, state estimation accuracy is enhanced by jointly optimizing frame-to-frame reprojection errors and frame-to-map RGB color errors. Experimental results demonstrate that the proposed method achieves root mean square errors (RMSEs) of absolute trajectory error at 0.478 m and 0.185 m on the M2DGR and NTU-VIRAL datasets, respectively, while attaining real-time performance with an average processing time of 39.19 ms per-frame on the NTU-VIRAL datasets. Compared to state-of-the-art approaches, our method exhibits significant improvements in both accuracy and computational efficiency. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
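The accuracy figures above are RMSEs of absolute trajectory error (ATE). A minimal sketch, assuming the estimated and ground-truth trajectories are already time-associated and aligned (a full evaluation would also include an SE(3) alignment step such as Umeyama's method):

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of absolute trajectory error between (N, 3) position arrays.
    Assumes the trajectories are already associated and aligned."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)  # per-pose Euclidean error
    return float(np.sqrt(np.mean(errors ** 2)))

# A constant 0.1 m offset along x yields an RMSE of ~0.1 m:
gt = np.zeros((100, 3))
est = gt + np.array([0.1, 0.0, 0.0])
print(ate_rmse(est, gt))  # ≈ 0.1
```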

21 pages, 3791 KB  
Article
Research on a Single-Load Identification Method Based on Color Coding and Harmonic Feature Fusion
by Xin Lu, Dan Chen, Likai Geng, Yao Wang, Dejie Sheng and Ruodan Chen
Electronics 2025, 14(8), 1574; https://doi.org/10.3390/electronics14081574 - 13 Apr 2025
Viewed by 489
Abstract
With the growing global focus on sustainable development and climate change mitigation, promoting the low carbonization of energy systems has become an inevitable trend. Power load monitoring is crucial to achieving efficient power management, and load identification is the key link. Traditional load identification methods suffer from low accuracy. It is hypothesized that fusing harmonic features through color coding can improve the accuracy of load identification. In this paper, the load’s instantaneous reactive power, power factor, and current sequence distribution characteristics are used, via color coding, as the mapping characteristics of the R, G, and B channels of the load’s two-dimensional V–I trajectory color image. The harmonic amplitude characteristics are then integrated to construct a mixed-color image of the load. A dilated residual shrinkage network is selected as the classification training model. The advantages and disadvantages of two residual shrinkage construction units, RSBU-CS and RSBU-CW, are analyzed, and a single-load identification model with three RSBU-CWs is built. The model’s performance is verified on different datasets. Compared with the test results on the ordinary color image dataset, the accuracy on the mixed-color image dataset is above 98%, and the accuracy of load identification is improved. Full article

20 pages, 7893 KB  
Article
Simulation of Control Process of Fluid Boundary Layer on Deposition of Travertine Particles in Huanglong Landscape Water Based on Computational Fluid Dynamics Software (CFD)
by Xinze Liu, Wenhao Gao, Yang Zuo, Dong Sun, Weizhen Zhang, Zhipeng Zhang, Shupu Liu, Jianxing Dong, Shikuan Wang, Hao Xu, Hongwei Chen and Mengyu Xu
Water 2025, 17(5), 638; https://doi.org/10.3390/w17050638 - 22 Feb 2025
Cited by 2 | Viewed by 1014
Abstract
This research explores the distribution, transport, and deposition of calcium carbonate particles in the colorful pools of the Huanglong area under varying hydrodynamic conditions. The study employs Particle Image Velocimetry (PIV) for real-time measurements of flow field velocity and computational fluid dynamics (CFD) simulations to analyze particle behavior. The findings reveal that under horizontal flow conditions, the peak concentration of calcium carbonate escalated to 1.06%, representing a 6% surge compared to the inlet concentration. Significantly, particle aggregation and settling were predominantly noted at the bottom right of the flow channel, where the flow boundary layer is most pronounced. In the context of inclined surfaces equipped with a baffle, a substantial rise in calcium carbonate concentrations was detected at the channel’s bottom right and behind the baffle, particularly in regions characterized by reduced flow velocities. These low-velocity areas, along with the interaction of the boundary layer and low-speed vortices, led to a decrease in particle velocities, thereby enhancing deposition. The highest concentrations of calcium carbonate particles were found in regions with thicker boundary layers, particularly in locations before and after the baffle. Using the Discrete Phase Model (DPM), the study tracked the trajectories of 2424 particles, of which 2415 exited the computational channel and nine underwent deposition. The overall deposition rate was measured at 0.371%, with calcium carbonate deposition rates ranging from 4.06 mm/a to 81.7 mm/a, closely matching field observations. These findings provide valuable insights into the dynamics of particle transport in aquatic environments and elucidate the factors influencing sedimentation processes. Full article
(This article belongs to the Special Issue Hydrodynamic Science Experiments and Simulations)

22 pages, 7001 KB  
Article
Green Flashes Observed in Optical and Infrared during an Extreme Electric Storm
by Gilbert Green and Naomi Watanabe
Appl. Sci. 2024, 14(16), 6938; https://doi.org/10.3390/app14166938 - 8 Aug 2024
Cited by 1 | Viewed by 1780
Abstract
A strong and fast-moving electrical storm occurred in the Southwest Florida region overnight, from 01:00 UTC to 07:00 UTC on 17 April 2023. Video recordings were conducted in the region at Latitude N 26.34° and Longitude W 81.79° for 5 h and 15 min, from 01:45 UTC to 07:00 UTC. The camera twice captured flashes transforming from pinkish through violet and blue to emerald green in the sky: the first colored flash lasted 2.0 s, and the second one lasted 0.5 s. The characteristics of the flashes were analyzed using video images integrated with lightning flash data from the Geostationary Lightning Mapper (GLM). To gain deeper insights into the associated atmospheric conditions, the Advanced Baseline Imager (ABI) was also used to help understand the spectral anomalies. Both events had similarities: the same pattern of changing luminous colors in the optical images and the trajectory of the lightning discharges, showing clusters and horizontal distributions. Event 1 occurred mainly over the ocean and featured more intense storms, heavier rain, and denser, higher cloud tops compared to Event 2, which occurred inland and involved dissipating storms. Moreover, the group energy detected in Event 1 was an order of magnitude higher than in Event 2. We attribute the wavelength of the recorded colored luminosity to varying atmospheric molecular concentrations, which ultimately contributed to the unique spectral line. In this study, we explore the correlation between colored flashes and specific atmospheric concentrations. Full article
(This article belongs to the Special Issue Lightning Electromagnetic Fields Research)

22 pages, 11407 KB  
Article
Research on a Matching Method for Vehicle-Borne Laser Point Cloud and Panoramic Images Based on Occlusion Removal
by Jiashu Ji, Weiwei Wang, Yipeng Ning, Hanwen Bo and Yufei Ren
Remote Sens. 2024, 16(14), 2531; https://doi.org/10.3390/rs16142531 - 10 Jul 2024
Cited by 4 | Viewed by 1841
Abstract
Vehicle-borne mobile mapping systems (MMSs) have been proven as an efficient means of photogrammetry and remote sensing, as they simultaneously acquire panoramic images, point clouds, and positional information along the collection route from a ground-based perspective. Obtaining accurate matching results between point clouds and images is a key issue in data application from vehicle-borne MMSs. Traditional matching methods, such as point cloud projection, depth map generation, and point cloud coloring, are significantly affected by the processing methods of point clouds and matching logic. In this study, we propose a method for generating matching relationships based on panoramic images, utilizing the raw point cloud map, a series of trajectory points, and the corresponding panoramic images acquired using a vehicle-borne MMS as input data. Through a point-cloud-processing workflow, irrelevant points in the point cloud map are removed, and the point cloud scenes corresponding to the trajectory points are extracted. A collinear model based on spherical projection is employed during the matching process to project the point cloud scenes to the panoramic images. An algorithm for vectorial angle selection is also designed to address filtering out the occluded point cloud projections during the matching process, generating a series of matching results between point clouds and panoramic images corresponding to the trajectory points. Experimental verification indicates that the method generates matching results with an average pixel error of approximately 2.82 pixels, and an average positional error of approximately 4 cm, thus demonstrating efficient processing. This method is suitable for the data fusion of panoramic images and point clouds acquired using vehicle-borne MMSs in road scenes, provides support for various algorithms based on visual features, and has promising applications in fields such as navigation, positioning, surveying, and mapping. Full article
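The collinear model based on spherical projection mentioned above amounts to mapping a 3D point in the panoramic camera frame to equirectangular pixel coordinates. A sketch under assumed axis conventions (z forward, y down; the paper's exact frame definitions may differ):

```python
import numpy as np

def project_to_panorama(p_cam, width, height):
    """Project a 3D point in the camera frame onto an equirectangular
    panorama of size width x height. The axis convention (z forward,
    y down) is an assumption for illustration."""
    x, y, z = p_cam
    r = np.sqrt(x * x + y * y + z * z)
    lon = np.arctan2(x, z)          # azimuth in [-pi, pi]
    lat = np.arcsin(y / r)          # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v

# A point straight ahead lands at the image center:
print(project_to_panorama((0.0, 0.0, 5.0), 4096, 2048))  # (2048.0, 1024.0)
```

Occlusion filtering, as described in the abstract, then keeps only the nearest point among those projecting within a small angular neighborhood.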

12 pages, 3781 KB  
Article
Validation of a White Light and Fluorescence Augmented Panoramic Endoscopic Imaging System on a Bimodal Bladder Wall Experimental Model
by Arkadii Moskalev, Nina Kalyagina, Elizaveta Kozlikina, Daniil Kustov, Maxim Loshchenov, Marine Amouroux, Christian Daul and Walter Blondel
Photonics 2024, 11(6), 514; https://doi.org/10.3390/photonics11060514 - 28 May 2024
Cited by 2 | Viewed by 1988
Abstract
Background: Fluorescence visualization of pathologies, primarily neoplasms in human internal cavities, is one of the most popular forms of diagnostics during endoscopic examination in medical practice. Currently, visualization can be performed in the augmented reality mode, which allows areas of increased fluorescence to be observed directly on top of a usual color image. In the future, another, no less informative, form of endoscopic visualization could be mapping (creating a mosaic) of the acquired image sequence into a single map covering the area under study. The originality of the present contribution lies in the development of a new 3D bimodal experimental bladder model and its validation as an appropriate phantom for testing the combination of bimodal cystoscopy and image mosaicking. Methods: An original 3D real-bladder-based phantom (physical model) including cancer-like fluorescent foci was developed and used to validate the combination of (i) a simultaneous white light and fluorescence cystoscopy imager with augmented reality mode and (ii) an image mosaicking algorithm superimposing both kinds of information. Results: Simultaneous registration and real-time visualization of a color image as a reference and a black-and-white fluorescence image, with an overlay of the two images, were made possible. The panoramic image built allowed the relative locations of the five fluorescent foci along the trajectory of the endoscope tip to be precisely visualized. Conclusions: The method has broad prospects and opportunities for further developments in bimodal endoscopy instrumentation and automatic image mosaicking. Full article
(This article belongs to the Special Issue Phototheranostics: Science and Applications)

19 pages, 8006 KB  
Article
An Underwater Localization Method Based on Visual SLAM for the Near-Bottom Environment
by Zonglin Liu, Meng Wang, Hanwen Hu, Tong Ge and Rui Miao
J. Mar. Sci. Eng. 2024, 12(5), 716; https://doi.org/10.3390/jmse12050716 - 26 Apr 2024
Cited by 2 | Viewed by 2492
Abstract
Feature matching in near-bottom visual SLAM is disturbed by underwater raised sediments, resulting in tracking loss. In this paper, a novel visual SLAM system is proposed for environments with underwater raised sediments. The underwater images are first classified based on a color recognition method that weights pixel locations to reduce the interference of similar colors on the seabed. An improved adaptive median filter is then proposed to filter the classified images, using the mean value of the filter window border as the discriminant condition so as to retain the original features of the image. The filtered images are finally processed by the tracking module to obtain the trajectory of underwater vehicles and seafloor maps. Datasets of seamount areas captured in the western Pacific Ocean were processed by the improved visual SLAM system. The keyframes, mapping points, and feature point matching pairs extracted by the improved visual SLAM system are improved by 5.2%, 11.2%, and 4.5%, respectively, compared with those of the ORB-SLAM3 system. The improved visual SLAM system is robust to dynamic disturbances, making it practical for underwater vehicles operating in near-bottom areas such as seamounts and nodule fields. Full article
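The window-border-mean discriminant described in the abstract can be sketched as follows; the window size, the threshold, and the exact trigger logic are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def border_mean_median_filter(img, k=3, thresh=30.0):
    """Median-filter a grayscale image only at pixels that deviate strongly
    from the mean of their window's border, preserving original image
    features elsewhere. k (window size) and thresh are illustrative."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    r = k // 2
    h, w = img.shape
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            border = np.concatenate(
                [win[0, :], win[-1, :], win[1:-1, 0], win[1:-1, -1]])
            if abs(img[i, j] - border.mean()) > thresh:
                out[i, j] = np.median(win)  # replace only suspect pixels
    return out

# An isolated bright impulse is removed; the flat background is untouched.
img = np.zeros((5, 5))
img[2, 2] = 255.0
print(border_mean_median_filter(img)[2, 2])  # 0.0
```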

21 pages, 27120 KB  
Article
Visual Navigation and Obstacle Avoidance Control for Agricultural Robots via LiDAR and Camera
by Chongyang Han, Weibin Wu, Xiwen Luo and Jiehao Li
Remote Sens. 2023, 15(22), 5402; https://doi.org/10.3390/rs15225402 - 17 Nov 2023
Cited by 25 | Viewed by 6931
Abstract
Obstacle avoidance control and navigation in unstructured agricultural environments are key to the safe operation of autonomous robots, especially for agricultural machinery, where cost and stability should be taken into account. In this paper, we designed a navigation and obstacle avoidance system for agricultural robots based on LiDAR and a vision camera. An improved clustering algorithm is used to quickly and accurately analyze the obstacle information collected by LiDAR in real time, and the convex hull algorithm is combined with the rotating calipers algorithm to obtain the maximum diameter of the convex polygon of the clustered data. Obstacle avoidance paths and course control methods are developed based on the danger zones of obstacles. Moreover, by performing color space analysis and feature analysis on complex orchard environment images, the optimal H-component of the HSV color space is selected to obtain ideal vision-guided trajectory images based on mean filtering and erosion processing. Finally, the proposed algorithm is integrated into the Three-Wheeled Mobile Differential Robot (TWMDR) platform to carry out obstacle avoidance experiments, and the results show the effectiveness and robustness of the proposed algorithm. The proposed methods achieve satisfactory results in precise obstacle avoidance and intelligent navigation control of agricultural robots. Full article
(This article belongs to the Special Issue Advanced Sensing and Image Processing in Agricultural Applications)

14 pages, 3473 KB  
Article
An FPGA-Based Hardware Low-Cost, Low-Consumption Target-Recognition and Sorting System
by Yulu Wang, Yi Han, Jun Chen, Zhou Wang and Yi Zhong
World Electr. Veh. J. 2023, 14(9), 245; https://doi.org/10.3390/wevj14090245 - 4 Sep 2023
Cited by 4 | Viewed by 3261
Abstract
In autonomous driving systems, high-speed, real-time image processing and object recognition are crucial technologies. This paper builds upon research achievements in industrial item-sorting systems and proposes an object-recognition and sorting system for autonomous driving. In industrial sorting lines, goods-sorting robots often need to work at high speeds to sort large volumes of items efficiently. This places demands on the robot's real-time vision and sorting capabilities, making a real-time, low-cost sorting system both practical and economically attractive for real-world industrial sorting lines. Existing sorting systems have limitations such as high cost, high computing-resource consumption, and high power consumption, which restrict them largely to large industrial plants. In this paper, we design a high-speed, low-cost, low-resource-consumption item-sorting system based on an FPGA (Field-Programmable Gate Array) that achieves performance similar to current mainstream sorting systems at lower cost and power consumption. The recognition component employs a morphological recognition method, which segments the image using a frame-difference algorithm and then extracts the color and shape features of the items. For sorting, a six-degrees-of-freedom robotic arm is introduced, and an improved cubic B-spline interpolation algorithm is employed to plan the motion trajectory and control the arm's corresponding actions. Through a series of experiments, this system achieves an average recognition delay of 25.26 ms, ensures a smooth gripping motion trajectory, minimizes resource consumption, and reduces implementation costs. Full article
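The frame-difference segmentation and shape-feature step the abstract describes can be sketched as follows. This is our own minimal, dependency-free illustration under the assumption of grayscale frames; the threshold value and helper names are illustrative, not the paper's design.

```python
def frame_difference_mask(prev, curr, thresh=30):
    """Binary motion mask: 1 where |curr - prev| exceeds thresh (grayscale 2D lists)."""
    return [[1 if abs(int(c) - int(p)) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def shape_features(mask):
    """Simple shape features of the segmented item: bounding box and aspect ratio."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None  # nothing moved between the two frames
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    height, width = r1 - r0 + 1, c1 - c0 + 1
    return {"bbox": (r0, c0, r1, c1), "aspect": width / height}
```

In a hardware pipeline, the per-pixel subtract-threshold operation maps naturally onto an FPGA, which is one reason frame differencing suits low-cost, low-latency designs.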

27 pages, 10672 KB  
Article
A Novel Autonomous Landing Method for Flying–Walking Power Line Inspection Robots Based on Prior Structure Data
by Yujie Zeng, Xinyan Qin, Bo Li, Jin Lei, Jie Zhang, Yanqi Wang and Tianming Feng
Appl. Sci. 2023, 13(17), 9544; https://doi.org/10.3390/app13179544 - 23 Aug 2023
Cited by 2 | Viewed by 3005
Abstract
Hybrid inspection robots have been attracting increasing interest in recent years, and are suitable for inspecting long-distance overhead power transmission lines (OPTLs), combining the advantages of flying robots (e.g., UAVs) and climbing robots (e.g., multiple-arm robots). Due to the complex work conditions (e.g., [...] Read more.
Hybrid inspection robots, which combine the advantages of flying robots (e.g., UAVs) and climbing robots (e.g., multiple-arm robots), have been attracting increasing interest in recent years and are suitable for inspecting long-distance overhead power transmission lines (OPTLs). Due to complex working conditions (e.g., power line slopes, complex backgrounds, and wind interference), landing on an OPTL is one of the most difficult challenges faced by hybrid inspection robots. To address this problem, this study proposes a novel autonomous landing method for a developed flying–walking power line inspection robot (FPLIR) based on prior structure data. The proposed method includes three main steps: (1) a color image of the target power line is segmented using a real-time semantic segmentation network, fusing the depth image to estimate the position of the power line; (2) the safe landing area (SLA) is determined using prior structure data, and a trajectory planning method with geometric constraints generates the dynamic landing trajectory; (3) the landing trajectory is tracked using real-time model predictive control (MPC), steering the FPLIR onto the OPTL. The feasibility of the proposed method was verified in the ROS Gazebo environment. The RMSE values of the position along the three axes were 0.1205, 0.0976, and 0.0953 m, respectively, while the RMSE values of the velocity along these axes were 0.0426, 0.0345, and 0.0781 m/s. Additionally, experiments in a real environment using the FPLIR verified the validity of the proposed method: the position and velocity errors for the FPLIR landing on the line were 6.18 × 10⁻² m and 2.16 × 10⁻² m/s. Both the simulation and experimental results satisfy practical requirements. The proposed method provides a foundation for the intelligent inspection of OPTLs in the future. Full article
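One simple way to realize a boundary-constrained landing trajectory of the kind step (2) describes is a per-axis cubic time polynomial with zero velocity at both endpoints. The sketch below is our own illustration of that idea, not the paper's planner; the function and its parameters are assumptions.

```python
def cubic_landing_profile(p0, pT, T):
    """Per-axis cubic p(t) with p(0)=p0, p(T)=pT, v(0)=v(T)=0.
    The coefficients follow directly from these four boundary conditions:
    p(t) = p0 + a2*t^2 + a3*t^3, with a2 = 3d/T^2 and a3 = -2d/T^3, d = pT - p0."""
    d = pT - p0
    a2 = 3.0 * d / T**2
    a3 = -2.0 * d / T**3
    def pos(t):
        return p0 + a2 * t**2 + a3 * t**3
    def vel(t):
        return 2.0 * a2 * t + 3.0 * a3 * t**2
    return pos, vel
```

Sampling `pos` and `vel` at the controller rate yields a smooth reference that a tracking controller such as MPC can follow; the zero terminal velocity is what makes the touchdown on the line gentle.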
(This article belongs to the Special Issue AI Technology and Application in Various Industries)
