Search Results (11)

Search Parameters:
Keywords = obstacle point cloud classification

27 pages, 4875 KB  
Article
A Comprehensive Radar-Based Berthing-Aid Dataset (R-BAD) and Onboard System for Safe Vessel Docking
by Fotios G. Papadopoulos, Antonios-Periklis Michalopoulos, Efstratios N. Paliodimos, Ioannis K. Christopoulos, Charalampos Z. Patrikakis, Alexandros Simopoulos and Stylianos A. Mytilinaios
Electronics 2025, 14(20), 4065; https://doi.org/10.3390/electronics14204065 - 16 Oct 2025
Viewed by 267
Abstract
Ship berthing operations are inherently challenging for maritime vessels, particularly within restricted port areas and under unfavorable weather conditions. In contrast to autonomous open-sea navigation, autonomous ship berthing remains a significant technological challenge for the maritime industry. Lidar and optical camera systems have been deployed as auxiliary tools to support informed berthing decisions; however, these sensing modalities are severely affected by weather and light conditions, respectively, and cameras in particular are inherently incapable of providing direct range measurements. In this paper, we introduce a comprehensive Radar-Based Berthing-Aid Dataset (R-BAD), aimed at supporting the development of safe berthing systems onboard ships. The R-BAD dataset includes a large collection of Frequency-Modulated Continuous Wave (FMCW) radar data in point cloud format alongside timestamped, synchronized video footage. It comprises more than 69 h of recorded ship operations and is freely accessible. We also propose an onboard support system for radar-aided vessel docking, which enables obstacle detection, clustering, tracking and classification during ferry berthing maneuvers. The dataset covers all docking/undocking scenarios (arrivals, departures, port idle, and cruising operations) and was used to train various machine/deep learning models with strong performance, demonstrating its suitability for further development of autonomous navigation systems. The berthing-aid system was tested in real-world conditions onboard an operational Ro-Ro/Passenger Ship and demonstrated weather-resilient, repeatable and robust performance in detection, tracking and classification tasks, confirming its technology readiness for integration into future autonomous berthing-aid systems.

23 pages, 3209 KB  
Article
Research on Power Laser Inspection Technology Based on High-Precision Servo Control System
by Zhe An and Yuesheng Pei
Photonics 2025, 12(9), 944; https://doi.org/10.3390/photonics12090944 - 22 Sep 2025
Viewed by 531
Abstract
With the expansion of ultra-high-voltage transmission lines and the growing complexity of corridor environments, traditional manual inspection faces serious challenges in efficiency, cost, and safety. In this study, based on power laser inspection technology with a high-precision servo control system, a complete laser point cloud processing pipeline is proposed, covering three core tasks: transmission line extraction, scene recovery, and operating-status monitoring. For transmission line extraction, combining a traditional clustering algorithm with an improved PointNet++ deep learning model achieves a classification accuracy of 92.3% in complex scenes. For scene recovery, RANSAC linear fitting and density filtering algorithms achieve internal point retention rates of 95.9% and 94.4% for transmission lines and towers, respectively, with a vegetation denoising rate of 7.27%. For condition monitoring, KD-Tree-accelerated tree-obstacle risk detection and sag calculation based on the catenary (hanging chain line) model achieve centimetre-level localisation of hidden dangers and keep the sag error within 5%. Experiments show that this technology significantly improves the automation level and decision-making accuracy of transmission line inspection and provides effective support for intelligent operation and maintenance of the power grid.
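The sag monitoring described above relies on the catenary ("hanging chain line") model. As a minimal, illustrative sketch (not the paper's implementation), the mid-span sag of a symmetric catenary can be computed from the span and the catenary parameter (horizontal tension divided by weight per unit length — values below are made up):

```python
import math

def catenary_sag(span_m: float, a_m: float) -> float:
    """Mid-span sag of a catenary y = a*cosh(x/a) hung between two
    supports at equal height separated by span_m metres; a_m is the
    catenary parameter (horizontal tension / weight per metre)."""
    return a_m * (math.cosh(span_m / (2.0 * a_m)) - 1.0)

# A slacker line (smaller catenary parameter) sags more.
sag_tight = catenary_sag(100.0, 1000.0)  # ~1.25 m
sag_slack = catenary_sag(100.0, 500.0)   # ~2.50 m
```

Comparing the measured conductor position against this model curve is one common way to flag excessive sag.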

18 pages, 14896 KB  
Article
Deep Learning-Based Point Cloud Classification of Obstacles for Intelligent Vehicles
by Yiqi Xu, Dengke Wu, Mengfei Zhou and Jiafu Yang
World Electr. Veh. J. 2025, 16(2), 80; https://doi.org/10.3390/wevj16020080 - 5 Feb 2025
Cited by 1 | Viewed by 1300
Abstract
Intelligent driving research has focused much attention on point cloud obstacles, since point clouds are a class of high-dimensional data that, unlike image data, can adequately depict the shape and position of obstacles. Deep learning is currently the primary technology for point cloud obstacle classification in autonomous vehicles, but existing techniques typically struggle with classification accuracy, processing efficiency, and model stability. To tackle these issues, this paper proposes a novel random forest algorithm that integrates out-of-bag error theory and can consistently and accurately evaluate the influence of point cloud features. Building on this algorithm, the paper then proposes a modified PointNet network that incorporates the effects of both global and local features on the classification task, thereby increasing the conventional network's classification accuracy. To assess the effectiveness of the approach, we set up an evaluation system based on average accuracy, overall accuracy, and a confusion matrix. According to the simulation results, the proposed network achieves an overall accuracy of 94.4% and an average accuracy of 84.9%, compared against the prototype PointNet and its variants. The classification accuracies for the four types of obstacles are 97.6%, 63.6%, 92.5%, and 86.1%. In addition, the proposed method improves both the computational complexity and the stability of the network.
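The paper's exact random forest variant is not detailed in this listing; as a hedged sketch of the underlying idea — using out-of-bag (OOB) error to assess feature influence — scikit-learn's `oob_score` option yields an unbiased accuracy estimate alongside per-feature importances (the synthetic data and all parameter choices below are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-in for per-cluster point cloud descriptors: feature 0 is
# informative (class-dependent), feature 1 is pure noise.
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)

print(rf.oob_score_)             # accuracy estimated on out-of-bag samples
print(rf.feature_importances_)   # the informative feature should dominate
```

Because each tree is evaluated on the samples it never saw during bagging, the OOB score gives a stability check without a separate validation split.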
(This article belongs to the Special Issue Deep Learning Applications for Electric Vehicles)

21 pages, 20775 KB  
Article
Sensor Fusion Method for Object Detection and Distance Estimation in Assisted Driving Applications
by Stefano Favelli, Meng Xie and Andrea Tonoli
Sensors 2024, 24(24), 7895; https://doi.org/10.3390/s24247895 - 10 Dec 2024
Cited by 7 | Viewed by 4310
Abstract
The fusion of multiple sensors' data in real time is a crucial process for autonomous and assisted driving, where high-level controllers need classification of surrounding objects and estimation of their relative positions. This paper presents an open-source framework to estimate the distance between a sensor-equipped vehicle and different road objects on its path using the fusion of data from cameras, radars, and LiDARs. The target application is an Advanced Driving Assistance System (ADAS) that benefits from the integration of the sensors' attributes to plan the vehicle's speed according to real-time road occupation and distance from obstacles. Based on geometrical projection, a low-level sensor fusion approach is proposed to map 3D point clouds into 2D camera images. The fused information is used to estimate the distance of objects detected and labeled by a YOLOv7 detector. The open-source pipeline, implemented in ROS, consists of a sensor calibration method, a YOLOv7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. Accuracy and performance are evaluated in real-world urban scenarios with commercial hardware. The pipeline, running at 5 Hz on an embedded Nvidia Jetson AGX, achieves good accuracy in object identification and distance estimation. The proposed framework introduces a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception in assisted driving.
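The core 3D-to-2D step can be sketched with a standard pinhole projection. This is a generic illustration, not the authors' calibration pipeline, and the intrinsic matrix values are made up:

```python
import numpy as np

def project_points(pts_3d, K, R=np.eye(3), t=np.zeros(3)):
    """Pinhole projection of Nx3 LiDAR points into pixel coordinates,
    given camera intrinsics K and an extrinsic rotation R and
    translation t from the LiDAR frame to the camera frame."""
    cam = pts_3d @ R.T + t            # LiDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]          # keep points in front of the camera
    uv = cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

K = np.array([[800.0,   0.0, 640.0],   # fx, cx (illustrative values)
              [  0.0, 800.0, 360.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 5.0],       # on the optical axis -> principal point
                [1.0, 0.0, 5.0]])
print(project_points(pts, K))
```

Once points land in the image plane, they can be associated with detector bounding boxes and their range read back from the original 3D coordinates.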
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)

20 pages, 12200 KB  
Article
A Novel High-Precision Railway Obstacle Detection Algorithm Based on 3D LiDAR
by Zongliang Nan, Guoan Zhu, Xu Zhang, Xuechun Lin and Yingying Yang
Sensors 2024, 24(10), 3148; https://doi.org/10.3390/s24103148 - 15 May 2024
Cited by 7 | Viewed by 3461
Abstract
This article presents a high-precision obstacle detection algorithm using 3D mechanical LiDAR to meet railway safety requirements. To address potential errors in the point cloud, we propose a calibration method based on projection and a novel rail extraction algorithm that effectively handles terrain variations and preserves the point cloud characteristics of the track area. We address the limitations of the traditional process of fixed Euclidean clustering thresholds by proposing a modulation function, based on directional density variations, that adjusts the threshold dynamically. Finally, using PCA and local ICP, we conduct feature analysis and classification of the clustered data to obtain the obstacle clusters. We conducted continuous experiments at the testing site, and the results showed that our system and algorithm achieved a stable detection rate (STDR) of over 95% for obstacles of 15 cm × 15 cm × 15 cm within a range of ±25 m; for obstacles of 10 cm × 10 cm × 10 cm, an STDR of over 80% was achieved within a range of ±20 m. This research provides a possible solution and approach for railway safety via obstacle detection.
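PCA-based feature analysis of point clusters is commonly implemented via eigenvalue shape descriptors; the following is a generic sketch under that assumption, not the paper's exact feature set:

```python
import numpy as np

def pca_shape_features(points):
    """Eigenvalue-based shape descriptors of an Nx3 point cluster,
    using lambda1 >= lambda2 >= lambda3 of the 3x3 covariance."""
    cov = np.cov(points.T)
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]
    lam = np.maximum(lam, 1e-12)          # guard against degenerate clusters
    linearity  = (lam[0] - lam[1]) / lam[0]
    planarity  = (lam[1] - lam[2]) / lam[0]
    sphericity = lam[2] / lam[0]
    return linearity, planarity, sphericity

# A thin, elongated cluster (rail-like) scores high on linearity.
t = np.linspace(0, 10, 200)
rng = np.random.default_rng(0)
rail = np.c_[t, 0.01 * rng.normal(size=200), 0.01 * rng.normal(size=200)]
lin, pla, sph = pca_shape_features(rail)
```

Descriptors like these let a classifier separate compact obstacle clusters from linear track structure.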
(This article belongs to the Section Radar Sensors)

22 pages, 2360 KB  
Article
Advancing Cycling Safety: On-Bike Alert System Utilizing Multi-Layer Radar Point Cloud Clustering for Coarse Object Classification
by Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Ferdaous Chaabane and Hichem Besbes
Sensors 2024, 24(10), 3094; https://doi.org/10.3390/s24103094 - 13 May 2024
Cited by 3 | Viewed by 2720
Abstract
Cyclists are considered vulnerable road users (VRUs) and need protection from potential collisions with cars and other vehicles caused by unsafe driving, dangerous road conditions, or weak cycling infrastructure. Integrating mmWave radars into cycling safety measures presents an efficient solution to this problem given their compact size, low power consumption, and low cost compared to other sensors. This paper introduces an mmWave radar-based bike safety system designed to offer real-time alerts to cyclists. The system consists of a low-power radar sensor affixed to the bicycle and connected to a microcontroller, delivering a preliminary classification of detected obstacles. An efficient two-level clustering scheme is proposed, based on the accumulation of radar point clouds from multiple frames, with a temporal projection of previous frames into the current frame. The clustering is followed by a coarse classification algorithm that uses relevant features extracted from the resulting clusters. An annotated RadBike dataset composed of radar point cloud data synchronized with RGB camera images was developed to evaluate the system. The two-level clustering outperforms classical DBSCAN, achieving a v-measure score of 0.91 compared to 0.88. Different classifiers, including decision trees, random forests, support vector machines (SVMs), and AdaBoost, were assessed, reaching an overall accuracy of 87% for the three main object classes: four-wheeled, two-wheeled, and others. The system can improve rider safety on the road and substantially reduce the frequency of incidents involving cyclists.
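The multi-frame accumulation idea — projecting points from previous frames into the current frame before clustering — can be sketched as follows. The ego-motion compensation and all numbers are illustrative, and plain DBSCAN (the paper's baseline, evaluated with the same v-measure metric) stands in for the proposed two-level scheme:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import v_measure_score

def accumulate_frames(frames, ego_displacements):
    """Project sparse radar point clouds from previous frames into the
    current frame by compensating the bike's own motion, then stack them."""
    shifted = [f - d for f, d in zip(frames, ego_displacements)]
    return np.vstack(shifted)

rng = np.random.default_rng(0)

def frame(d):
    """Two obstacles as seen d metres of ego travel ago (x-axis motion)."""
    a = rng.normal([5.0 + d, 0.0], 0.1, size=(5, 2))   # obstacle A
    b = rng.normal([8.0 + d, 3.0], 0.1, size=(5, 2))   # obstacle B
    return np.vstack([a, b])

frames = [frame(2.0), frame(1.0), frame(0.0)]
disp = [np.array([2.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 0.0])]
cloud = accumulate_frames(frames, disp)

labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(cloud)
truth = np.tile([0] * 5 + [1] * 5, 3)
print(v_measure_score(truth, labels))  # 1.0 indicates a perfect clustering
```

Accumulation densifies the otherwise sparse single-frame radar returns, which is what makes the subsequent clustering reliable.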
(This article belongs to the Section Radar Sensors)

22 pages, 8333 KB  
Article
Automated Detection of Atypical Aviation Obstacles from UAV Images Using a YOLO Algorithm
by Marta Lalak and Damian Wierzbicki
Sensors 2022, 22(17), 6611; https://doi.org/10.3390/s22176611 - 1 Sep 2022
Cited by 12 | Viewed by 3155
Abstract
Unmanned Aerial Vehicles (UAVs) can guarantee very high spatial and temporal resolution and up-to-date information, helping to ensure safety in the direct vicinity of an airport. The current dynamic growth of investment areas in large agglomerations, especially in the neighbourhood of airports, leads to the emergence of objects that may constitute a threat to air traffic. To ensure that the obtained spatial data are accurate, atypical aviation obstacles must be detected, identified, and classified. Quite often, a common feature of atypical aviation obstacles is an elongated shape and irregular cross-section. These factors pose a challenge for modern object detection techniques when the processes used to determine obstacle height are automated. This paper analyses the possibilities for automated detection of atypical aviation obstacles based on the YOLO algorithm and presents an analysis of the accuracy with which their height can be determined from UAV data.
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)

25 pages, 8471 KB  
Article
Unknown Object Detection Using a One-Class Support Vector Machine for a Cloud–Robot System
by Raihan Kabir, Yutaka Watanobe, Md Rashedul Islam, Keitaro Naruse and Md Mostafizer Rahman
Sensors 2022, 22(4), 1352; https://doi.org/10.3390/s22041352 - 10 Feb 2022
Cited by 23 | Viewed by 5051
Abstract
Inter-robot communication and high computational power are challenging issues for deploying indoor mobile robot applications with sensor data processing. This paper therefore presents an efficient cloud-based multirobot framework with inter-robot communication and high computational power to deploy autonomous mobile robots for indoor applications. Deploying usable indoor service robots requires uninterrupted movement and enhanced robot vision with robust classification of objects and obstacles using vision sensor data in the indoor environment. However, state-of-the-art methods suffer from degraded indoor object and obstacle recognition for multiobject vision frames and unknown objects in complex and dynamic environments. To address this, the paper proposes a new object segmentation model to separate objects from a multiobject robotic view-frame, together with a support vector data description (SVDD)-based one-class support vector machine that detects unknown objects in an outlier-detection fashion for the classification model. A cloud-based convolutional neural network (CNN) model with a SoftMax classifier is used to train and identify objects in the environment, and an incremental learning method is introduced for adding unknown objects to the robot's knowledge. A cloud–robot architecture is implemented in a Node-RED environment to validate the proposed model. A benchmark object image dataset from an open-resource repository and images captured in the lab environment were used to train the models. The proposed model showed good object detection and identification results, and its performance was found to outperform three state-of-the-art models. Moreover, the usability of the proposed system is enhanced by unknown object detection, incremental learning, and the cloud-based framework.
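A minimal sketch of the outlier-detection use of a one-class SVM follows, with scikit-learn's `OneClassSVM` standing in for the paper's SVDD-based variant; the data and parameters are illustrative:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Feature vectors of known objects cluster around the origin.
known = rng.normal(0.0, 1.0, size=(200, 2))

# Train only on known-object features; nu bounds the fraction of
# training points allowed outside the learned boundary.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(known)

# +1 = inside the known-object region, -1 = outlier (unknown object)
print(ocsvm.predict([[0.0, 0.0]]))   # near the training distribution
print(ocsvm.predict([[8.0, 8.0]]))   # far outside -> flagged as unknown
```

Anything flagged as unknown can then be routed to the incremental-learning path rather than misclassified into an existing class.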
(This article belongs to the Section Sensors and Robotics)

43 pages, 7149 KB  
Article
Mapping Crop Types in Southeast India with Smartphone Crowdsourcing and Deep Learning
by Sherrie Wang, Stefania Di Tommaso, Joey Faulkner, Thomas Friedel, Alexander Kennepohl, Rob Strey and David B. Lobell
Remote Sens. 2020, 12(18), 2957; https://doi.org/10.3390/rs12182957 - 11 Sep 2020
Cited by 78 | Viewed by 16027
Abstract
High resolution satellite imagery and modern machine learning methods hold the potential to fill existing data gaps in where crops are grown around the world at a sub-field level. However, high resolution crop type maps have remained challenging to create in developing regions due to a lack of ground truth labels for model development. In this work, we explore the use of crowdsourced data, Sentinel-2 and DigitalGlobe imagery, and convolutional neural networks (CNNs) for crop type mapping in India. Plantix, a free app that uses image recognition to help farmers diagnose crop diseases, logged 9 million geolocated photos from 2017–2019 in India, 2 million of which are in the states of Andhra Pradesh and Telangana. Crop type labels based on farmer-submitted images were added by domain experts and deep CNNs. The resulting dataset of crop types at coordinates is high in volume, but also high in noise due to location inaccuracies, out-of-field submissions, and labeling errors. We employed a number of steps to clean the dataset, including training a CNN on very high resolution DigitalGlobe imagery to filter for points that lie within a crop field. With this cleaned dataset, we extracted Sentinel time series at each point and trained another CNN to predict the crop type at each pixel. When evaluated on the highest quality subset of the crowdsourced data, the CNN distinguishes rice, cotton, and “other” crops with 74% accuracy in a 3-way classification and outperforms a random forest trained on harmonic regression features. Furthermore, model performance remains stable when low quality points are introduced into the training set. Our results illustrate the potential of non-traditional, high-volume/high-noise datasets for crop type mapping, some improvements that neural networks can achieve over random forests, and the robustness of such methods against moderate levels of training set noise. Lastly, we caution that obstacles such as the lack of a good Sentinel-2 cloud mask, imperfect mobile device location accuracy, and the need to preserve privacy while improving data access must be addressed before crowdsourcing can be used widely and reliably to map crops in smallholder systems.
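The harmonic regression features used by the baseline random forest can be sketched generically: fit a mean plus annual sin/cos harmonics to a pixel's vegetation-index time series by least squares and use the coefficients as features. The synthetic series and harmonic count below are illustrative, not the authors' exact configuration:

```python
import numpy as np

def harmonic_features(t, y, n_harmonics=2, period=365.0):
    """Least-squares fit of y(t) to a mean plus sin/cos harmonics of
    the annual cycle; the coefficients serve as per-pixel features."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * t / period
        cols += [np.sin(w), np.cos(w)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # shape: (1 + 2*n_harmonics,)

t = np.arange(0, 365, 10, dtype=float)
y = 0.5 + 0.3 * np.sin(2 * np.pi * t / 365.0)  # synthetic vegetation index
coeffs = harmonic_features(t, y)
```

The fitted mean and seasonal amplitudes summarize each pixel's phenology in a handful of numbers, which is what makes them convenient random forest inputs.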
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

26 pages, 40783 KB  
Article
Point Cloud vs. Mesh Features for Building Interior Classification
by Maarten Bassier, Maarten Vergauwen and Florent Poux
Remote Sens. 2020, 12(14), 2224; https://doi.org/10.3390/rs12142224 - 11 Jul 2020
Cited by 42 | Viewed by 12641
Abstract
Interpreting 3D point cloud data of building interiors and exteriors is essential for automated navigation, interaction and 3D reconstruction. However, directly exploiting the geometry is challenging due to inherent obstacles such as noise, occlusions, sparsity and variance in density. Alternatively, 3D mesh geometries derived from point clouds benefit from preprocessing routines that can surmount these obstacles and potentially yield more refined geometry and topology descriptions. In this article, we provide a rigorous comparison of both geometries for scene interpretation. We present an empirical study of the suitability of both geometries for feature extraction and classification. More specifically, we study the impact on the retrieval of structural building components in a realistic environment, a major endeavor in Building Information Modeling (BIM) reconstruction. The study runs on a segment-based structuring of both geometries and shows that both achieve recognition rates of over 75% F1 score when suitable features are used.
(This article belongs to the Special Issue Point Cloud Processing and Analysis in Remote Sensing)

23 pages, 5260 KB  
Technical Note
Large Scale Automatic Analysis and Classification of Roof Surfaces for the Installation of Solar Panels Using a Multi-Sensor Aerial Platform
by Luis López-Fernández, Susana Lagüela, Inmaculada Picón and Diego González-Aguilera
Remote Sens. 2015, 7(9), 11226-11248; https://doi.org/10.3390/rs70911226 - 1 Sep 2015
Cited by 16 | Viewed by 9045
Abstract
A low-cost multi-sensor aerial platform (an aerial trike) equipped with visible and thermographic sensors is used to acquire all the data needed for the automatic analysis and classification of roof surfaces regarding their suitability to harbor solar panels. The geometry of a georeferenced 3D point cloud, generated from visible images using photogrammetric and computer vision algorithms, and the temperatures measured in thermographic images are decisive for evaluating the areas, tilts, orientations and the existence of obstacles in order to locate the optimal zones on each roof surface for the installation of solar panels. This information is complemented by an estimate of the solar irradiation received by each surface. In this way, large areas can be analyzed efficiently, yielding the optimal locations for the placement of solar panels as well as the information (location, orientation, tilt, area and solar irradiation) needed to estimate the productivity of a solar panel from its technical characteristics.
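The tilt and orientation of a roof surface are typically derived from the normal of a plane fitted to its points in the cloud. The following is a minimal sketch under an assumed local east-north-up frame and a north-referenced azimuth convention (not the authors' implementation):

```python
import math

def tilt_and_azimuth(nx, ny, nz):
    """Tilt (degrees from horizontal) and azimuth (degrees clockwise
    from north, assuming an east-north-up frame) of a roof plane,
    given its upward-pointing unit normal (nx, ny, nz)."""
    tilt = math.degrees(math.acos(nz))
    azimuth = math.degrees(math.atan2(nx, ny)) % 360.0
    return tilt, azimuth

print(tilt_and_azimuth(0.0, 0.0, 1.0))   # flat roof: tilt 0
# Normal leaning toward -y (south in ENU) by 30 degrees:
south = tilt_and_azimuth(0.0, -math.sin(math.radians(30)),
                         math.cos(math.radians(30)))
print(south)                             # tilt 30, azimuth 180 (south-facing)
```

These two angles, together with area and irradiation, are exactly the per-surface quantities the listing says feed the productivity estimate.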
