
Advances in Point Clouds for Sensing Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 20 May 2026

Special Issue Editor


Dr. Luis G. Jaimes
Guest Editor
Department of Computer Science, Florida Polytechnic University, 4700 Research Way, Lakeland, FL 33805, USA
Interests: LiDAR; vehicular; intelligent mobility; crowd-sensing; pervasive and mobile computing; cyber–physical systems

Special Issue Information

Dear Colleagues,

Point clouds have become a universal representation for 3D sensing across LiDAR, radar, RGB-D cameras, event sensors, and multi-view photogrammetry. Rapid progress in sensor design, geometric learning, and efficient computing is reshaping how we acquire, fuse, and interpret sparse, noisy, and large-scale 3D data. However, fundamental challenges remain in handling occlusions and dynamic scenes; aligning heterogeneous viewpoints; quantifying uncertainty; compressing and streaming data for edge devices; and ensuring interoperability, reproducibility, and real-time performance in safety-critical settings.

This Special Issue, Advances in Point Clouds for Sensing Applications, welcomes contributions that push the state of the art in 3D perception and mapping for domains such as autonomous driving, robotics, AR/VR, digital twins, smart cities, infrastructure inspection, and geosciences. We invite novel methods for point-cloud acquisition and reconstruction; registration, SLAM, and tracking; denoising, completion, and super-resolution; semantic/instance segmentation and object detection; multimodal sensor fusion (e.g., LiDAR, camera, radar); compression, indexing, and retrieval; uncertainty modeling and calibration; and energy-/latency-aware processing on edge platforms. Benchmark datasets, standardized evaluation protocols, and reports from real-world deployments are particularly encouraged. Our goal is to catalyze robust, scalable, and trustworthy 3D sensing pipelines that translate advances in algorithms and hardware into measurable impacts.

Dr. Luis G. Jaimes
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR
  • radar
  • RGB-D
  • 3D registration
  • semantic segmentation
  • sensor fusion
  • SLAM
  • scene generation
  • digital twins
  • edge computing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

17 pages, 3872 KB  
Article
Fusion-Based Semantic Segmentation and 3D Reconstruction Using Radar–LiDAR Point Clouds: A Comparative Evaluation of DeepLabv3 and FCN-ResNet Against Traditional Architectures
by John Paipa, Cristian Suancha and Eduardo A. Fernández
Sensors 2026, 26(9), 2900; https://doi.org/10.3390/s26092900 - 6 May 2026
Abstract
Reliable person segmentation with sparse 3D sensors degrades significantly under adverse atmospheric conditions. This work presents a controlled comparative evaluation of four segmentation architectures—U-Net, Mask R-CNN, DeepLabV3+, and FCN-ResNet—on a fused Radar–LiDAR dataset for binary person–background segmentation and applies a dual-domain evaluation procedure that formally links 2D pixel-level overlap (IoU, Dice) to 3D geometric fidelity (Chamfer distance, Completeness) through mask back-projection onto fused point clouds. Raw point clouds are rasterized into range–intensity grids enriched with Radar reflectivity; the predicted masks are then reprojected into 3D space and evaluated using Chamfer distance and Completeness under three controlled visibility conditions. U-Net achieves the highest 2D overlap (IoU = 0.82, Dice = 0.89), while DeepLabV3+ delivers the best 3D reconstruction fidelity (Chamfer = 0.021 m, Completeness = 93.4%) and the highest overall accuracy (97.9%). This dissociation between 2D overlap and 3D fidelity is explained by DeepLabV3+’s multi-scale Atrous Spatial Pyramid Pooling (ASPP), which reduces boundary fragmentation during back-projection; more than 70% of the Chamfer deviation across competing architectures originates at object contours. Mask R-CNN performs well when instances are clearly separated, and FCN-ResNet offers the lowest computational cost at reduced boundary precision. Radar–LiDAR fusion sustains an IoU within 3% of clear-weather performance under dense fog, whereas LiDAR-only inputs degrade by more than 12%. Due to the 12:1 background-to-person class imbalance, overlap-based metrics (IoU, Dice) are prioritized over raw accuracy in all reported comparisons. These results provide actionable deployment guidance and constitute a reproducible evaluation procedure for future sparse-sensor fusion studies, independently of the architectures evaluated.
(This article belongs to the Special Issue Advances in Point Clouds for Sensing Applications)
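
The dual-domain evaluation described above links 2D mask overlap to 3D geometric fidelity after back-projection. As a rough illustration of the metrics involved (a minimal NumPy sketch, not the authors' implementation; the brute-force nearest-neighbour search and the tau threshold are assumptions for demonstration):

    import numpy as np

    def iou_dice(pred_mask, gt_mask):
        """2D overlap metrics between two boolean segmentation masks."""
        inter = np.logical_and(pred_mask, gt_mask).sum()
        union = np.logical_or(pred_mask, gt_mask).sum()
        return float(inter / union), float(2.0 * inter / (pred_mask.sum() + gt_mask.sum()))

    def chamfer_distance(pred, gt):
        """Symmetric Chamfer distance between (N, 3) and (M, 3) point clouds.
        Brute-force nearest neighbours; substitute a KD-tree for large clouds."""
        d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M) pairwise distances
        return float(d.min(axis=1).mean() + d.min(axis=0).mean()) / 2.0

    def completeness(pred, gt, tau=0.05):
        """Fraction of ground-truth points with a predicted point within tau metres
        (tau here is illustrative, not a value taken from the paper)."""
        d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
        return float((d.min(axis=1) <= tau).mean())

In this framing, a model can score well on iou_dice yet poorly on chamfer_distance if its mask boundaries fragment during back-projection, which is exactly the dissociation the abstract reports between U-Net and DeepLabV3+.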

22 pages, 8466 KB  
Article
LiDAR Point Cloud Colourisation Using Multi-Camera Fusion and Low-Light Image Enhancement
by Pasindu Ranasinghe, Dibyayan Patra, Bikram Banerjee and Simit Raval
Sensors 2025, 25(21), 6582; https://doi.org/10.3390/s25216582 - 25 Oct 2025
Abstract
In recent years, the fusion of camera data with LiDAR measurements has emerged as a powerful approach to enhance spatial understanding. This study introduces a novel, hardware-agnostic methodology that generates colourised point clouds from mechanical LiDAR using multiple camera inputs, providing complete 360-degree coverage. The primary innovation lies in its robustness under low-light conditions, achieved through the integration of a low-light image enhancement module within the fusion pipeline. The system requires initial calibration to determine intrinsic camera parameters, followed by automatic computation of the geometric transformation between the LiDAR and cameras—removing the need for specialised calibration targets and streamlining the setup. The data processing framework uses colour correction to ensure uniformity across camera feeds before fusion. The algorithm was tested using a Velodyne Puck Hi-Res LiDAR and a four-camera configuration. The optimised software achieved real-time performance and reliable colourisation even under very low illumination, successfully recovering scene details that would otherwise remain undetectable.
(This article belongs to the Special Issue Advances in Point Clouds for Sensing Applications)
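
The core of such a colourisation pipeline is projecting each LiDAR point into a calibrated camera and sampling the pixel colour. A minimal single-camera sketch follows (assuming undistorted, colour-corrected images and known intrinsics K and LiDAR-to-camera extrinsics R, t; all names are illustrative, not the authors' code):

    import numpy as np

    def colourise_points(points, image, K, R, t):
        """Project (N, 3) LiDAR points into one camera and sample per-point RGB.
        Returns (N, 3) uint8 colours and a boolean mask of coloured points."""
        cam = points @ R.T + t                   # LiDAR frame -> camera frame
        z = cam[:, 2]
        in_front = z > 1e-6                      # drop points behind the camera
        uv = (cam @ K.T)[:, :2] / np.where(in_front, z, 1.0)[:, None]  # pinhole projection
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = image.shape[:2]
        visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        colours = np.zeros((len(points), 3), dtype=np.uint8)
        colours[visible] = image[v[visible], u[visible]]
        return colours, visible

For 360-degree coverage, the same projection runs once per camera; points landing in more than one frustum can take the colour from the nearest camera, and a low-light enhancement module would operate on each image before this sampling step.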
