3D Image Processing: Progress and Challenges
A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Image and Video Processing".
Deadline for manuscript submissions: 28 February 2026
Special Issue Information
Dear Colleagues,
3D image processing is having an increasingly transformative impact across a wide range of application domains. Although initially developed for computer graphics, 3D imaging is now indispensable in fields such as autonomous driving, medical diagnostics, virtual and augmented reality, robotics, geospatial analysis, and industrial inspection. The widespread availability of affordable 3D sensors—such as LiDAR, depth cameras, and photogrammetry-based systems—has created a growing demand for scalable and robust 3D data-processing pipelines.
However, the unique characteristics of 3D data—such as irregular sampling, high dimensionality, sparsity, and sensitivity to noise—pose significant challenges in acquisition, sampling, restoration, segmentation, compression, and semantic understanding. Two complementary paradigms are proving particularly effective in addressing these issues: the model-based framework of Graph Signal Processing (GSP) and the data-driven approach of Graph Neural Networks (GNNs). Both are well-suited to modeling non-Euclidean structures and capturing the underlying geometric relationships present in 3D data.
To enhance interpretability while reducing reliance on large annotated datasets, researchers are increasingly exploring hybrid methods that integrate model-based priors with data-driven learning. These approaches offer efficient, interpretable, and generalizable solutions with fewer parameters. In parallel, advances in Large Language Models (LLMs) and multimodal foundation models are opening new avenues for cross-modal 3D understanding, including scene captioning, spatial reasoning, and task planning in 3D environments.
We are also witnessing rapid progress in novel 3D rendering techniques, such as 3D Gaussian Splatting, which enable photorealistic and efficient visualization of neural radiance fields and point clouds. Moreover, as autonomous agents and embodied AI systems become more prevalent, there is a pressing need for machine-optimized 3D coding schemes—beyond traditional human-centric visualization—to support real-time analytics, compression, and semantic understanding of 3D data by machines.
Notably, 3D imaging in the medical domain has seen rapid development, driven by advancements in modalities such as CT, MRI, and ultrasound. Recent research focuses on leveraging deep learning and graph-based methods for 3D tumor segmentation, organ reconstruction, surgical navigation, and cross-modal fusion of volumetric and surface data. These applications highlight the growing importance of accurate, efficient, and interpretable 3D models to support clinical workflows and decision-making.
This Special Issue welcomes high-quality contributions on topics including, but not limited to, 3D point cloud sampling and restoration, GSP- and GNN-based models, model-based deep learning, LLM-guided 3D analytics, advanced rendering methods such as 3D Gaussian Splatting, and machine-optimized 3D coding. Interdisciplinary applications in healthcare, smart cities, robotics, and digital twins are particularly encouraged.
We invite submissions that address current challenges while proposing visionary concepts to shape the future roadmap of 3D image processing research.
Dr. Chinthaka Dinesh
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Keywords
- 3D image processing
- point cloud restoration
- graph signal processing
- graph neural networks
- model-based deep learning
- 3D Gaussian splatting
- large language models for 3D vision
- 3D scene understanding
- machine-optimized 3D coding
- multimodal AI in 3D vision
- 3D medical image segmentation
- volumetric medical imaging
Benefits of Publishing in a Special Issue
- Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
- Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
- Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
- External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
- Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.
Further information on MDPI's Special Issue policies can be found here.