
Advances in Computer Graphics and 3D Technologies

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 January 2026) | Viewed by 7919

Special Issue Editors


Prof. Dr. Lihua You
Guest Editor
National Centre for Computer Animation, Bournemouth University, Poole BH12 5BB, UK
Interests: geometric modeling; computer animation; computer graphics; image and point cloud-based shape reconstruction; machine learning; applications of ODEs and PDEs in geometric modeling and computer animation

Dr. Zhidong Xiao
Guest Editor
National Centre for Computer Animation, Bournemouth University, Poole BH12 5BB, UK
Interests: computer graphics; motion capture; machine learning; motion synthesis; physics-based simulation; 3D reconstruction; virtual reality and robotics

Special Issue Information

Dear Colleagues,

Computer graphics and 3D technologies have made a significant impact on our daily lives, with applications ranging from the entertainment industry (film, gaming, AR/VR) and medical visualisation in healthcare to industrial digital twins. Advances in AI and machine learning are shortening production pipelines and making ever more realistic 3D computer graphics available for these applications. This Special Issue therefore presents new ideas and experimental results in 3D computer graphics, real-time graphics, computer vision and machine learning, ranging from design, algorithm development and theory to practical use.

Relevant areas span computer graphics, geometric modelling, computational geometry, computational photography, 3D reconstruction, and shape and surface modelling. Topics include, but are not limited to, real-time rendering techniques, volume rendering, computer animation and simulation, physically based modelling, computer vision for computer graphics, machine learning for graphics, data compression for graphics, the metaverse (VR/MR/XR), computational fabrication, and scientific visualisation.

Prof. Dr. Lihua You
Dr. Zhidong Xiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • geometric computing
  • computer graphics
  • computational geometry
  • physically based modelling
  • computational photography
  • 3D reconstruction
  • shape matching
  • shape and surface modelling

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (6 papers)


Research

24 pages, 4833 KB  
Article
Optimizing Head-Up Display Information Presentation for Older Drivers: Visual Attention Patterns and Design Implications
by Ke Zhang, Chen Xu and Jinho Yim
Appl. Sci. 2026, 16(6), 2682; https://doi.org/10.3390/app16062682 - 11 Mar 2026
Viewed by 351
Abstract
As population aging accelerates, age-related declines in visual sensitivity and attentional control make older drivers more vulnerable to suboptimal in-vehicle interface designs. Head-up displays (HUDs) are intended to reduce gaze shifts by overlaying information within the forward field of view, yet empirical evidence remains limited on how specific HUD presentation strategies reshape older drivers’ visual attention allocation. Grounded in theories of visual attention and cognitive load, this study systematically investigates three design variables that are increasingly common in contemporary HUDs (including AR-HUDs): (1) dynamic versus static navigation cues, (2) pedestrian warning strategies under different lighting conditions, and (3) the spatial placement of high-priority information. We first conducted a formative user study to define variables and operationalizations, and then carried out three within-subject driving-simulator experiments using controlled HUD stimuli and eye tracking. Objective gaze measures (e.g., fixation count, total fixation duration, and time to first fixation) were combined with subjective preference ratings to characterize attentional capture, search efficiency, and potential attentional costs. Findings reveal a robust trade-off: continuously changing navigation cues enhance attentional capture but can also increase attentional “stickiness,” unnecessarily consuming older drivers’ limited attentional resources. In pedestrian hazard tasks, real-time overlay warnings that were spatially aligned with the hazard significantly improved visual localization under low-light conditions, outperforming early warnings and multi-stage strategies. Across tasks and layout conditions, the central HUD region showed a stable attentional advantage—placing critical information centrally elicited greater visual attention and stronger subjective preference. These results provide mechanistic evidence for how HUD parameters modulate older drivers’ attention and yield actionable implications for prioritization, temporal pacing of dynamic navigation cues, and a “center-first” layout strategy to guide age-friendly HUD design. Full article
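The objective gaze measures named in this abstract (fixation count, total fixation duration, time to first fixation) are standard eye-tracking metrics and are straightforward to compute from fixation events. Below is a minimal Python sketch, assuming a hypothetical fixation-event layout and area-of-interest (AOI) bounds; it is not the authors' analysis code.

```python
# Minimal sketch of per-AOI gaze metrics: fixation count, total fixation
# duration, and time to first fixation. The data layout and the AOI
# coordinates are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Fixation:
    x: float           # screen x (px)
    y: float           # screen y (px)
    onset_ms: float    # start time relative to stimulus onset
    duration_ms: float

def aoi_metrics(fixations, aoi):
    """aoi = (x_min, y_min, x_max, y_max) in screen pixels."""
    x0, y0, x1, y1 = aoi
    hits = [f for f in fixations if x0 <= f.x <= x1 and y0 <= f.y <= y1]
    return {
        "fixation_count": len(hits),
        "total_fixation_duration_ms": sum(f.duration_ms for f in hits),
        "time_to_first_fixation_ms": min((f.onset_ms for f in hits), default=None),
    }

# Example: a central HUD region of a 1920x1080 scene (invented bounds)
central_hud = (760, 640, 1160, 840)
demo = [Fixation(900, 700, 420, 180), Fixation(300, 200, 950, 240)]
print(aoi_metrics(demo, central_hud))
```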

22 pages, 66701 KB  
Article
AVIF as an Alternative to JPEG and GPU Texture Compression Schemes for Texture Storage in 3D Computer Graphics
by Maria Grazia Corino, Tiziano Leidi and Achille Peternier
Appl. Sci. 2026, 16(5), 2541; https://doi.org/10.3390/app16052541 - 6 Mar 2026
Viewed by 527
Abstract
This article explores the potential of the emerging image compression standard AV1 Image File Format (AVIF) as a format for storing 2D texture data in 3D computer graphics, aiming to assess its suitability for graphics applications. It presents a comparative performance evaluation, focusing on image quality, compression efficiency, and processing times, by comparing AVIF with the traditional format JPEG and the texture compression schemes BPTC and S3TC. To conduct the evaluation, a selected set of test images is compressed into the specified formats, loaded as textures, and assessed in a mockup 3D application to evaluate their visual performance in a realistic rendering context. The results show that AVIF delivers better fidelity to the original image compared to JPEG, BPTC, and S3TC, while also yielding a smaller file size. It outperforms JPEG by 9.2 dB in visual quality and by 174.4% in compression ratio, on average. However, this comes at the cost of longer processing times, with AVIF taking 126 times longer than JPEG and 185 times longer than S3TC to encode an image. AVIF also showed a 536% increase in decoding time compared to JPEG. BPTC produced high-fidelity images, second only to AVIF, but it required longer encoding times, depending on the quality settings. However, unlike AVIF, it offers GPU optimization benefits. Full article
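For readers who want to run a comparison of this kind themselves, the sketch below measures encoded file size and PSNR for JPEG versus AVIF using Pillow. It assumes a Pillow build with AVIF support (Pillow 11+ with libavif, or the pillow-avif-plugin package) and a hypothetical test texture texture.png; it is not the authors' evaluation pipeline, which also covers the GPU schemes BPTC and S3TC.

```python
# Illustrative size/quality comparison of AVIF vs. JPEG texture storage.
# Assumes an AVIF-capable Pillow build; quality settings are arbitrary.

import io
import numpy as np
from PIL import Image

def encode_decode(img, fmt, **params):
    """Encode to an in-memory file, record its size, then decode it back."""
    buf = io.BytesIO()
    img.save(buf, format=fmt, **params)
    size = buf.tell()
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"), dtype=np.float64), size

def psnr(ref, test):
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

img = Image.open("texture.png").convert("RGB")   # hypothetical test texture
ref = np.asarray(img, dtype=np.float64)

for fmt, params in [("JPEG", {"quality": 85}), ("AVIF", {"quality": 60})]:
    decoded, size = encode_decode(img, fmt, **params)
    print(f"{fmt}: {size} bytes, PSNR {psnr(ref, decoded):.2f} dB")
```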

21 pages, 4886 KB  
Article
GaPMeS: Gaussian Patch-Level Mixture-of-Experts Splatting for Computation-Limited Sparse-View Feed-Forward 3D Reconstruction
by Jinwen Liu, Wenchao Liu and Rui Guo
Appl. Sci. 2026, 16(2), 1108; https://doi.org/10.3390/app16021108 - 21 Jan 2026
Viewed by 422
Abstract
To address the issues of parameter coupling and high computational demands in existing feed-forward Gaussian splatting methods, we propose Gaussian Patch-level Mixture-of-Experts Splatting (GaPMeS), a lightweight feed-forward 3D Gaussian reconstruction model based on a mixture-of-experts (MoE) multi-task decoupling framework. GaPMeS employs a dual-routing gating mechanism to replace heavy refinement networks, enabling task-adaptive feature selection at the image patch level and alleviating the gradient conflicts commonly observed in shared-backbone architectures. By decoupling Gaussian parameter prediction into four independent sub-tasks and incorporating a hybrid soft–hard expert selection strategy, the model maintains high efficiency with only 14.6 M parameters while achieving competitive performance across multiple datasets—including a Structural Similarity Index (SSIM) of 0.709 on RealEstate10K, a Peak Signal-to-Noise Ratio (PSNR) of 19.57 on DL3DV, and a 26.0% SSIM improvement on real industrial scenes. These results demonstrate the model’s superior efficiency and reconstruction quality, offering a new and effective solution for high-quality sparse-view 3D reconstruction. Full article
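As a rough illustration of patch-level expert routing with a hybrid soft-hard selection, a PyTorch sketch follows. This is not the GaPMeS architecture itself: the expert layers, gate, and top-k routing details here are invented for the example.

```python
# Illustrative patch-level mixture-of-experts layer: a soft gate scores all
# experts, a hard top-k step keeps only a few, and the kept scores are
# renormalised to weight the expert outputs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchMoE(nn.Module):
    def __init__(self, dim, out_dim, num_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, out_dim))
            for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                                   # x: (B, N_patches, dim)
        weights = F.softmax(self.gate(x), dim=-1)           # soft scores (B, N, E)
        topv, topi = weights.topk(self.top_k, dim=-1)       # hard: keep top-k experts
        topv = topv / topv.sum(dim=-1, keepdim=True)        # renormalise kept scores
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)  # (B, N, E, out)
        out = 0.0
        for k in range(self.top_k):
            idx = topi[..., k:k + 1].unsqueeze(-1).expand(-1, -1, 1, expert_out.size(-1))
            out = out + topv[..., k:k + 1] * expert_out.gather(-2, idx).squeeze(-2)
        return out                                          # (B, N, out)

patches = torch.randn(2, 256, 64)
print(PatchMoE(64, 7)(patches).shape)                       # torch.Size([2, 256, 7])
```

Computing every expert and then masking, as above, keeps the sketch short; an efficient MoE layer would dispatch each patch only to its selected experts.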

20 pages, 4726 KB  
Article
Enhancing SeeGround with Relational Depth Text for 3D Visual Grounding
by Hyun-Sik Jeon, Seong-Hui Kang and Jong-Eun Ha
Appl. Sci. 2026, 16(2), 652; https://doi.org/10.3390/app16020652 - 8 Jan 2026
Viewed by 546
Abstract
Three-dimensional visual grounding is a core technology that identifies specific objects within complex 3D scenes based on natural language instructions, enhancing human–machine interactions in robotics and augmented reality domains. Traditional approaches have focused on supervised learning, which relies on annotated data; however, zero-shot methodologies are emerging due to the high costs of data construction and limitations in generalization. SeeGround achieves state-of-the-art performance by integrating 2D rendered images and spatial text descriptions. Nevertheless, SeeGround exhibits vulnerabilities in clearly discerning relative depth relationships owing to its implicit depth representations in 2D views. This study proposes the relational depth text (RDT) technique to overcome these limitations, utilizing a Monocular Depth Estimation model to extract depth maps from rendered 2D images and applying the K-Nearest Neighbors algorithm to convert inter-object relative depth relations into natural language descriptions, thereby incorporating them into Vision–Language Model (VLM) prompts. This method distinguishes itself by augmenting spatial reasoning capabilities while preserving SeeGround’s existing pipeline, demonstrating a 3.54% improvement in the Acc@0.25 metric on the Nr3D dataset in a 7B VLM environment that is approximately 10.3 times lighter than the original model, along with a 6.74% increase in Unique cases on the ScanRefer dataset, albeit with a 1.70% decline in Multiple cases. The proposed technique enhances the robustness of grounding through viewpoint anchoring and candidate discrimination in complex query scenarios, and is expected to improve efficiency in practical applications through future multi-view fusion and conditional execution optimizations. Full article
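The core RDT idea, converting inter-object relative depth relations into natural-language descriptions via K-Nearest Neighbors, can be shown with a toy sketch. The object labels, image-plane coordinates, and depth values below are invented, and this is not the authors' implementation, which extracts depths from a monocular depth estimation model.

```python
# Hypothetical sketch of relational depth text: for each object, find its
# K nearest neighbours in the image plane and state which is closer to
# the camera.

import numpy as np
from sklearn.neighbors import NearestNeighbors

objects = [                      # (label, image-plane x, y, depth in metres)
    ("chair", 120, 340, 1.8),
    ("table", 200, 360, 2.4),
    ("lamp",  420, 100, 3.1),
]

xy = np.array([[o[1], o[2]] for o in objects], dtype=float)
k = min(2, len(objects) - 1)                 # neighbours per object
nn = NearestNeighbors(n_neighbors=k + 1).fit(xy)
_, idx = nn.kneighbors(xy)                   # first neighbour is the point itself

for i, (label, _, _, d) in enumerate(objects):
    for j in idx[i][1:]:
        other, _, _, d2 = objects[j]
        rel = "closer to the camera than" if d < d2 else "farther from the camera than"
        print(f"The {label} is {rel} the {other}.")
```

Sentences of this form would then be appended to the VLM prompt alongside the rendered 2D view.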

16 pages, 3077 KB  
Article
SS3DNet-AF: A Single-Stage, Single-View 3D Reconstruction Network with Attention-Based Fusion
by Muhammad Awais Shoukat, Allah Bux Sargano, Alexander Malyshev, Lihua You and Zulfiqar Habib
Appl. Sci. 2024, 14(23), 11424; https://doi.org/10.3390/app142311424 - 8 Dec 2024
Cited by 2 | Viewed by 1965
Abstract
Learning object shapes from a single image is challenging due to variations in scene content, geometric structures, and environmental factors, which create significant disparities between 2D image features and their corresponding 3D representations, hindering the effective training of deep learning models. Existing learning-based approaches can be divided into two-stage and single-stage methods, each with limitations. Two-stage methods often rely on generating intermediate proposals by searching for similar structures across the entire dataset, a process that is computationally expensive due to the large search space and high-dimensional feature-matching requirements, and one that further restricts them to predefined object categories. In contrast, single-stage methods directly reconstruct 3D shapes from images without intermediate steps, but they struggle to capture complex object geometries due to high feature loss between image features and 3D shapes, which limits their ability to represent intricate details. To address these challenges, this paper introduces SS3DNet-AF, a single-stage, single-view 3D reconstruction network with an attention-based fusion (AF) mechanism to enhance focus on relevant image features, effectively capturing geometric details and generalizing across diverse object categories. The proposed method is quantitatively evaluated using the ShapeNet dataset, demonstrating its effectiveness in achieving accurate 3D reconstructions while overcoming the computational challenges associated with traditional approaches. Full article
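As a generic illustration of what an attention-based fusion block looks like (not the authors' AF module; the token counts and dimensions are invented), the PyTorch sketch below uses cross-attention so that learned shape tokens attend to flattened 2D image features.

```python
# Generic attention-based fusion: shape queries attend to image features,
# with a residual connection and layer norm.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, queries, image_feats):
        # queries:     (B, Q, dim)  learned shape tokens
        # image_feats: (B, HW, dim) flattened CNN/ViT features
        fused, _ = self.attn(queries, image_feats, image_feats)
        return self.norm(queries + fused)

tokens = torch.randn(2, 64, 128)
feats = torch.randn(2, 196, 128)
print(AttentionFusion(128)(tokens, feats).shape)   # torch.Size([2, 64, 128])
```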

24 pages, 8793 KB  
Article
A Novel Computational Paradigm for Reconstructing Solid CAD Features from a Segmented Manifold Triangular Mesh
by Feiyu Zhao
Appl. Sci. 2024, 14(14), 6183; https://doi.org/10.3390/app14146183 - 16 Jul 2024
Viewed by 2751
Abstract
We introduce a novel computational paradigm for reconstructing solid computer-aided design (CAD) features from the surface of a segmented manifold triangular mesh. This paradigm addresses the challenge of capturing high-level design semantics for manifold triangular meshes and facilitates parametric and variational design capabilities. We categorize four prevalent features, namely extrusion, rotation, sweep, and loft, as generalized swept bodies driven by cross-sectional sketches and feature paths, providing a unified mathematical representation for various feature types. The numerical optimization-based approach conducts geometric processing on the segmented manifold triangular mesh patch, extracting cross-sectional sketch curves and feature paths from its surface, and then reconstructing appropriate features using the Open CASCADE kernel. We employ the personalized three-dimensional (3D) printed model as a case study. Parametric and variant designs of the 3D-printed models are achieved through feature reconstruction of the manifold triangular mesh obtained via 3D scanning. Full article
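The unified representation described, in which every feature is a cross-sectional sketch driven along a feature path, can be captured by a small data structure. The Python sketch below is hypothetical (class layout and field names are invented); the paper's actual reconstruction is performed with the Open CASCADE kernel.

```python
# Sketch of a "generalised sweep" record: extrusion, rotation, sweep, and
# loft all reduce to a cross-section plus a driving path.

from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class SweptFeature:
    kind: str                      # "extrusion", "rotation", "sweep", or "loft"
    cross_section: List[Point3D]   # closed sketch curve extracted from the mesh patch
    path: List[Point3D]            # feature path driving the sweep

def extrusion(profile: List[Point3D], direction: Point3D, length: float) -> SweptFeature:
    """An extrusion is the degenerate sweep whose path is a straight segment."""
    dx, dy, dz = direction
    return SweptFeature("extrusion", profile,
                        [(0.0, 0.0, 0.0), (dx * length, dy * length, dz * length)])

# Example: a unit square extruded 5 units along +z
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(extrusion(square, (0.0, 0.0, 1.0), 5.0))
```

Storing features this way is what makes the parametric and variant designs mentioned above possible: editing the cross-section or path and re-running the kernel regenerates the solid.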
