Search Results (223)

Search Parameters:
Keywords = 3D body reconstruction

23 pages, 5003 KB  
Article
Affordable 3D Technologies for Contactless Cattle Morphometry: A Comparative Pilot Trial of Smartphone-Based LiDAR, Photogrammetry and Neural Surface Reconstruction Models
by Sara Marchegiani, Stefano Chiappini, Md Abdul Mueed Choudhury, Guangxin E, Maria Federica Trombetta, Marina Pasquini, Ernesto Marcheggiani and Simone Ceccobelli
Agriculture 2025, 15(24), 2567; https://doi.org/10.3390/agriculture15242567 - 11 Dec 2025
Abstract
Morphometric traits are closely linked to body condition, health, welfare, and productivity in livestock. In recent years, contactless 3D reconstruction technologies have been increasingly adopted to improve the accuracy and efficiency of morphometric evaluations. Conventional approaches for 3D reconstruction mainly employ Light Detection and Ranging (LiDAR) or photogrammetry. In contrast, emerging Artificial Intelligence (AI)-based methods, such as Neural Surface Reconstruction, 3D Gaussian Splatting, and Neural Radiance Fields, offer new opportunities for high-fidelity digital modeling. Smartphones, owing to their affordability, offer a cost-effective and portable platform for deploying these advanced tools, potentially supporting enhanced agricultural performance, accelerating sector digitalization, and thus reducing the urban–rural digital gap. This preliminary study assessed the viability of using smartphone-based LiDAR, photogrammetry, and AI models to obtain body measurements of Marchigiana cattle. Five morphometric traits collected manually on the animals were compared with those extracted from smartphone-based 3D reconstructions. LiDAR measurements offered the most consistent estimates, with relative errors ranging from −1.55% to 4.28%, while photogrammetry showed relative errors ranging from 0.75% to −14.56%. AI-based models (NSR, 3DGS, NeRF) showed greater variability in accuracy, pointing to the need for further refinement. Overall, the results highlight the preliminary potential of portable 3D scanning technologies, particularly LiDAR-equipped smartphones, for non-invasive morphometric data collection in cattle.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
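The signed relative-error metric reported above can be sketched in a few lines (the cattle measurements here are hypothetical, for illustration only):

```python
def relative_error(estimated, reference):
    """Signed relative error (%) of a 3D-derived measurement vs. a manual one."""
    return (estimated - reference) / reference * 100.0

# Hypothetical example: height at withers measured manually (135 cm)
# and extracted from a smartphone LiDAR point cloud (137 cm).
err = relative_error(137.0, 135.0)  # ≈ +1.48 %
```

A negative value means the 3D reconstruction underestimates the manual measurement, matching the signed ranges quoted in the abstract.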

16 pages, 2273 KB  
Article
Joint Function and Movement Variability During Daily Living Activities Performed Throughout the Home Setting: A Digital Twin Modeling Study
by Zhou Fang, Mohammad Yavari, Yiqun Chen, Davood Shojaei, Peter Vee Sin Lee, Abbas Rajabifard and David Ackland
Sensors 2025, 25(24), 7409; https://doi.org/10.3390/s25247409 - 5 Dec 2025
Abstract
Human mobility is commonly assessed in the laboratory environment, but accurate and robust joint motion measurement and task classification in the home setting are rarely undertaken. This study aimed to develop a digital twin model of a home to measure, visualize, and classify joint motion during activities of daily living. A fully furnished single-bedroom apartment was digitally reconstructed using 3D photogrammetry. Ten healthy adults performed 19 activities of daily living over a 2 h period throughout the apartment. Each participant’s upper and lower limb joint motion was measured using inertial measurement units, and body spatial location was measured using an ultra-wideband sensor, registered to the digital home model. Supervised machine learning classified tasks with a mean 82.3% accuracy. Hair combing involved the highest range of shoulder elevation (124.2 ± 21.2°), while sit-to-stand exhibited both the largest hip flexion (75.7 ± 10.3°) and knee flexion (91.8 ± 8.6°). Joint motion varied from room to room, even for a given task. For example, subjects walked fastest in the living room (1.0 ± 0.2 m/s) and slowest in the bathroom (0.78 ± 0.10 m/s), while the mean maximum ankle dorsiflexion in the living room was significantly higher than that in the bathroom (mean difference: 4.9°, p = 0.002, Cohen’s d = 1.25). This study highlights the dependency of both upper and lower limb joint motion during activities of daily living on the internal home environment. The digital twin modeling framework reported here may be useful in planning home-based rehabilitation, remote monitoring, and for interior design and ergonomics.
(This article belongs to the Special Issue Wearable Sensors in Biomechanics and Human Motion)
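The effect size quoted above (Cohen’s d = 1.25) follows the standard pooled-standard-deviation formula; a minimal sketch with made-up group statistics:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d using the pooled standard deviation of two groups."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical example: two groups of 10 with means 10 vs. 8, both SD = 2.
d = cohens_d(10.0, 2.0, 10, 8.0, 2.0, 10)  # d = 1.0, a "large" effect
```

Values around 0.8 or above are conventionally read as large effects, which is consistent with the room-to-room dorsiflexion difference reported in the abstract.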

15 pages, 2020 KB  
Article
3D Human Reconstruction from Monocular Vision Based on Neural Fields and Explicit Mesh Optimization
by Kaipeng Wang, Xiaolong Xie, Wei Li, Jie Liu and Zhuo Wang
Electronics 2025, 14(22), 4512; https://doi.org/10.3390/electronics14224512 - 18 Nov 2025
Abstract
Three-dimensional Human Reconstruction from Monocular Vision is a key technology in Virtual Reality and digital humans. It aims to recover the 3D structure and pose of the human body from 2D images or video. Current monocular methods for dynamic 3D human reconstruction have low accuracy, and the task remains challenging. This paper proposes a fast reconstruction method based on Instant Human Model (IHM) generation, which achieves highly realistic 3D reconstruction of the human body in arbitrary poses. First, the efficient dynamic human body reconstruction method, InstantAvatar, is utilized to learn the shape and appearance of the human body in different poses. However, because it directly uses low-resolution voxels as the canonical-space human representation, it cannot achieve satisfactory reconstruction results across a wide range of datasets. Next, a voxel occupancy grid is initialized in the A-pose, and a voxel attention mechanism module is constructed to enhance the reconstruction effect. Finally, the Instant Human Model (IHM) method is employed to define continuous fields on the surface, enabling highly realistic dynamic 3D human reconstruction. Experimental results show that, compared to the representative InstantAvatar method, IHM achieves a 0.1% improvement in SSIM and a 2% improvement in PSNR on the PeopleSnapshot benchmark dataset, demonstrating improvements in both reconstruction quality and detail. Specifically, through its voxel attention mechanism and adaptive iterative mesh optimization, IHM achieves highly realistic 3D mesh models of human bodies in various poses while ensuring efficiency.
(This article belongs to the Special Issue 3D Computer Vision and 3D Reconstruction)
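PSNR, one of the two image-quality metrics quoted above, is a direct function of mean-squared error; a minimal sketch:

```python
import math

def psnr(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB for a given mean-squared error,
    with pixel values normalized to [0, max_val]."""
    return 10.0 * math.log10(max_val**2 / mse)

# Halving the MSE raises PSNR by ~3 dB, so the ~2% PSNR gain reported
# above corresponds to a small but consistent reduction in pixel error.
quality = psnr(0.01)  # MSE of 0.01 on [0, 1] images → 20 dB
```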

23 pages, 3931 KB  
Article
Enhanced 3D Gaussian Splatting for Real-Scene Reconstruction via Depth Priors, Adaptive Densification, and Denoising
by Haixing Shang, Mengyu Chen, Kenan Feng, Shiyuan Li, Zhiyuan Zhang, Songhua Xu, Chaofeng Ren and Jiangbo Xi
Sensors 2025, 25(22), 6999; https://doi.org/10.3390/s25226999 - 16 Nov 2025
Abstract
Photorealistic 3D reconstruction has broad application prospects in smart cities, cultural heritage preservation, and related domains. However, existing methods face persistent challenges in balancing reconstruction accuracy, computational efficiency, and robustness, particularly in complex scenes characterized by reflective surfaces, vegetation, sparse viewpoints, or large-scale structures. In this study, an enhanced 3D Gaussian Splatting (3DGS) framework that integrates three key innovations is proposed: (i) a depth-aware regularization module that leverages metric depth priors from the pre-trained Depth-Anything V2 model, enabling geometrically informed optimization through a dynamically weighted hybrid loss; (ii) a gradient-driven adaptive densification mechanism that triggers Gaussian adjustments based on local gradient saliency, reducing redundant computation; and (iii) a neighborhood density-based floating artifact detection method that filters outliers using spatial distribution and opacity thresholds. Extensive evaluations are conducted across four diverse datasets, spanning architecture, urban scenes, natural landscapes with water bodies, and long-range linear infrastructure. Our method achieves state-of-the-art performance in both reconstruction quality and efficiency, attaining a PSNR of 34.15 dB and SSIM of 0.9382 on medium-sized scenes, with real-time rendering speeds exceeding 170 FPS at a resolution of 1600 × 900. It demonstrates superior generalization on challenging materials such as water and foliage, while exhibiting reduced overfitting compared to baseline approaches. Ablation studies confirm the critical contributions of depth regularization and gradient-sensitive adaptation, with the latter improving training efficiency by 38% over depth supervision alone. Furthermore, we analyze the impact of input resolution and depth model selection, revealing non-trivial trade-offs between quantitative metrics and visual fidelity. While aggressive downsampling inflates PSNR and SSIM, it leads to loss of high-frequency detail; we identify 1/4–1/2 resolution scaling as an optimal balance for practical deployment. Among depth models, Vitb achieves the best reconstruction stability. Despite these advances, memory consumption remains a challenge in large-scale scenarios. Future work will focus on lightweight model design, efficient point cloud preprocessing, and dynamic memory management to enhance scalability for industrial applications.
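The dynamically weighted hybrid loss in (i) is not spelled out in the abstract; one plausible sketch, assuming a photometric term plus a depth-prior term whose weight decays linearly over training (the schedule and `w_max` value are assumptions, not the paper’s):

```python
def hybrid_loss(photometric_loss, depth_loss, step, total_steps, w_max=0.5):
    """Dynamically weighted hybrid loss: photometric term plus a depth-prior
    regularizer whose weight decays linearly from w_max to 0 over training.
    The linear schedule is an illustrative assumption."""
    w = w_max * (1.0 - step / total_steps)
    return photometric_loss + w * depth_loss

# Early in training the depth prior steers geometry; late in training the
# photometric term dominates so rendering quality is not over-constrained.
early = hybrid_loss(1.0, 2.0, step=0, total_steps=100)    # 1.0 + 0.5*2.0 = 2.0
late = hybrid_loss(1.0, 2.0, step=100, total_steps=100)   # 1.0 + 0.0*2.0 = 1.0
```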

12 pages, 1860 KB  
Article
Three-Dimensional, Image-Based Evaluation of the L5 Vertebral Body and Its Ossification Center in Human Fetuses
by Magdalena Grzonkowska, Michał Kułakowski, Karol Elster, Zofia Dzięcioł-Anikiej, Beata Zwierko, Sara Kierońska-Siwak, Magdalena Konieczna-Brazis, Michał Banasiak, Stanisław Orkisz and Mariusz Baumgart
Brain Sci. 2025, 15(11), 1229; https://doi.org/10.3390/brainsci15111229 - 15 Nov 2025
Abstract
Objectives: The aim of this study was to characterize the developmental trajectories of the fifth lumbar vertebra in human fetuses by assessing the growth of its vertebral body and ossification center using linear, planar, and volumetric measurements. Methods: A total of 54 human fetuses (26 male and 28 female) aged 17–30 weeks of gestation were examined. Computed tomography, digital image analysis, 3D reconstruction, and statistical modeling were used to quantify morphometric parameters of the L5 vertebral body and its ossification center. Results: All measured parameters demonstrated consistent age-related growth following a linear pattern. No statistically significant differences between sexes were observed in any measured diameter of the L5 vertebra or its ossification center within the examined gestational age range. Conclusions: The normative morphometric data and growth curves obtained for the L5 vertebra and its ossification center provide age-specific reference values that may aid in prenatal diagnostics. These findings can support clinicians in estimating gestational age, assessing vertebral development on ultrasound, and detecting congenital spinal anomalies and skeletal dysplasias at an early stage. Further multicenter studies including a broader gestational age range are warranted to strengthen the generalizability and clinical applicability of these results.
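The linear age-related growth pattern reported above can be captured with an ordinary least-squares fit of diameter against gestational age; a minimal sketch (the measurement values below are hypothetical, not from the study):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, e.g. an L5 vertebral body
    diameter (mm) as a function of gestational age (weeks)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical diameters at 17, 20, 25, and 30 weeks:
slope, intercept = fit_line([17, 20, 25, 30], [3.4, 4.0, 5.0, 6.0])
```

Inverting such a fitted line (age = (diameter − b) / a) is one way normative growth curves can support gestational-age estimation.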

24 pages, 4973 KB  
Article
An Enhanced Method for Optical Imaging Computation of Space Objects Integrating an Improved Phong Model and Higher-Order Spherical Harmonics
by Qinyu Zhu, Can Xu, Yasheng Zhang, Yao Lu, Xia Wang and Peng Li
Remote Sens. 2025, 17(21), 3543; https://doi.org/10.3390/rs17213543 - 26 Oct 2025
Abstract
Space-based optical imaging detection serves as a crucial means for acquiring characteristic information of space objects, with the quality and resolution of images directly influencing the accuracy of subsequent missions. Addressing the scarcity of datasets in space-based optical imaging, this study introduces a method that combines an improved Phong model and higher-order spherical harmonics (HOSH) for the optical imaging computation of space objects. Utilizing HOSH to fit the light field distribution, this approach comprehensively considers direct sunlight, earthshine, reflected light from other extremely distant celestial bodies, and multiple scattering from object surfaces. Through spectral reflectance experiments, an improved Phong model is developed to calculate the optical scattering characteristics of space objects and to retrieve common material properties such as metallicity, roughness, index of refraction (IOR), and Alpha for four types of satellite surfaces. Additionally, this study designs two sampling methods: random sampling based on the spherical Fibonacci function (RSSF) and sequential frame sampling based on predefined trajectories (SSPT). Through numerical analysis of the geometric and radiative rendering pipeline, this method simulates multiple scenarios under both high-resolution and wide-field-of-view operational modes across a range of relative distances. Simulation results validate the effectiveness of the proposed approach, with average rendering speeds of 2.86 s per frame and 1.67 s per frame for the two methods, respectively, demonstrating the capability for real-time rapid imaging while maintaining low computational resource consumption. The data simulation process spans six distinct relative distance intervals, ensuring that multi-scale images retain substantial textural features and are accompanied by attitude labels, thereby providing robust support for algorithms aimed at space object attitude estimation and 3D reconstruction.
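The classic Phong reflection model that the improved model above builds on combines a diffuse and a specular term; a minimal sketch for unit vectors, with assumed coefficient values (the study’s improved model adds spectral-reflectance-derived material parameters on top of this):

```python
def phong_intensity(n, l, v, kd=0.7, ks=0.3, shininess=16):
    """Classic Phong reflection (diffuse + specular) for unit 3-vectors:
    n = surface normal, l = direction to light, v = direction to viewer.
    kd, ks, and shininess are illustrative material coefficients."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    diffuse = max(dot(n, l), 0.0)
    # Mirror reflection of the light direction about the normal.
    r = tuple(2.0 * dot(n, l) * ni - li for ni, li in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    return kd * diffuse + ks * specular

# Light and viewer both along the normal → maximal diffuse and specular.
peak = phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1))
```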

15 pages, 1523 KB  
Article
Dynamic Whole-Body FDG PET/CT for Predicting Malignancy in Head and Neck Tumors and Cervical Lymphadenopathy
by Gregor Horňák, André H. Dias, Ole L. Munk, Lars C. Gormsen, Jaroslav Ptáček and Pavel Karhan
Diagnostics 2025, 15(20), 2651; https://doi.org/10.3390/diagnostics15202651 - 21 Oct 2025
Abstract
Background: Dynamic whole-body (D-WB) FDG PET/CT is a novel technique that enables the direct reconstruction of multiparametric images representing the FDG metabolic uptake rate (MRFDG) and “free” FDG (DVFDG). Applying complementary parameters with distinct characteristics compared to static SUV images, the aims of this study are as follows: (1) to determine the threshold values of SUV, MRFDG, and DVFDG for malignant and benign lesions; (2) to compare the specificity of MRFDG and DVFDG images with static SUVbw images; and (3) to assess whether any of the dynamic imaging parameters correlate more significantly with malignancy or non-malignancy in the examined lesions based on the measured values obtained from D-WB FDG PET/CT. Methods: The study was a retrospective analysis of D-WB PET/CT data from 43 patients (23 males and 20 females), included both for primary staging and for imaging performed due to suspected post-therapeutic relapse or recurrence. Standard scanning was performed using a multiparametric PET acquisition protocol on a Siemens Biograph Vision 600 PET/CT scanner. Pathological findings were manually delineated, and values for SUVbw, MRFDG, and DVFDG were extracted. The findings were classified and statistically evaluated based on histological verification of a malignant or benign lesion. Multinomial and binomial logistic regression analyses were used to find parameters for data classification in different models, employing various combinations of the input data (SUVbw, MRFDG, DVFDG). ROC curves were generated by changing the threshold p-value in the regression models to compare the models and determine the optimal thresholds. Results: Patlak PET parameters (MRFDG and DVFDG) combined with mean SUVbw achieved the highest diagnostic accuracy of 0.82 (95% CI 0.75–0.89) for malignancy detection (F1-score = 0.90). Sensitivity reached 0.85 (95% CI 0.77–0.91) and specificity 0.93 (95% CI 0.87–0.98). Classification accuracy in tumors was 0.86 (95% CI 0.78–0.92) and in lymph nodes 0.81 (95% CI 0.73–0.88). Relative contribution analysis showed that DVFDG accounted for up to 65% of the classification weight. ROC analysis demonstrated AUC values above 0.8 for all models, with optimal thresholds achieving sensitivities of around 0.85 and specificities up to 0.93. Thresholds for malignancy detection were, for mean values, SUVbw > 5.8 g/mL, MRFDG > 0.05 µmol/mL/min, DVFDG > 68%, and, for maximal values, SUVbw > 8.7 g/mL, MRFDG > 0.11 µmol/mL/min, DVFDG > 202%. Conclusions: The D-WB [18F]FDG PET/CT images in this study highlight the potential for improved differentiation between malignant and benign lesions compared to conventional SUVbw imaging in patients with locally advanced head and neck cancers presenting with cervical lymphadenopathy and carcinoma of unknown primary origin (CUP). This observation may be particularly relevant in common diagnostic dilemmas, especially in distinguishing residual or recurrent tumors from post-radiotherapy changes. Further validation in larger cohorts with histopathological confirmation is warranted, as the small sample size in this study may limit the generalizability of the findings.
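For illustration only, the reported mean-value thresholds can be combined into a simple majority-vote rule (the study itself used logistic regression over SUVbw, MRFDG, and DVFDG, not this rule):

```python
def flag_malignant(suv_mean, mr_fdg_mean, dv_fdg_mean):
    """Illustrative majority vote over the study's reported mean-value
    cut-offs: SUVbw > 5.8 g/mL, MRFDG > 0.05 umol/mL/min, DVFDG > 68 %.
    A toy stand-in for the paper's logistic-regression models."""
    votes = sum([suv_mean > 5.8,
                 mr_fdg_mean > 0.05,
                 dv_fdg_mean > 68.0])
    return votes >= 2

suspicious = flag_malignant(6.2, 0.07, 75.0)   # all three exceeded → True
benign = flag_malignant(3.0, 0.02, 40.0)       # none exceeded → False
```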

14 pages, 2921 KB  
Article
Design and Validation of an Augmented Reality Training Platform for Patient Setup in Radiation Therapy Using Multimodal 3D Modeling
by Jinyue Wu, Donghee Han and Toshioh Fujibuchi
Appl. Sci. 2025, 15(19), 10488; https://doi.org/10.3390/app151910488 - 28 Sep 2025
Abstract
This study presents the development and evaluation of an Augmented Reality (AR)-based training system aimed at improving patient setup accuracy in radiation therapy. Leveraging Microsoft HoloLens 2, the system provides an immersive environment for medical staff to enhance their understanding of patient setup procedures. High-resolution 3D anatomical models were reconstructed from CT scans using 3D Slicer, while Luma AI was employed to rapidly capture complete body surface models. Due to limitations in each method—such as missing extremities or back surfaces—Blender was used to merge the models, improving completeness and anatomical fidelity. The AR application was developed in Unity, employing spatial anchors and 125 × 125 mm² QR code markers to stabilize and align virtual models in real space. System accuracy testing demonstrated that QR code tracking achieved millimeter-level variation, with an expanded uncertainty of ±2.74 mm. Setup training trials showed larger, centimeter-scale deviations along the X (left–right), Y (up–down), and Z (front–back) axes, making it possible to quantify a user's patient setup skill. While QR code positioning was relatively stable, manual placement of markers and the absence of real-time verification contributed to these errors. The system offers a radiation-free and interactive platform for training, enhancing spatial awareness and procedural skills. Future work will focus on improving tracking stability, optimizing the workflow, and integrating real-time feedback to move toward clinical applicability.
(This article belongs to the Special Issue Novel Technologies in Radiology: Diagnosis, Prediction and Treatment)
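The expanded uncertainty quoted above (±2.74 mm) follows the usual U = k·u convention with coverage factor k = 2 (≈95% coverage); a simplified sketch that takes the sample standard deviation of repeated position deviations as the standard uncertainty (the study’s full uncertainty budget may include further components):

```python
import math

def expanded_uncertainty(deviations_mm, k=2.0):
    """Expanded uncertainty U = k * s, where s is the sample standard
    deviation of repeated tracking deviations and k = 2 gives roughly
    95 % coverage. A simplified single-component budget."""
    n = len(deviations_mm)
    mean = sum(deviations_mm) / n
    s = math.sqrt(sum((d - mean) ** 2 for d in deviations_mm) / (n - 1))
    return k * s

# Hypothetical repeated QR-tracking deviations (mm):
u = expanded_uncertainty([1.0, 2.0, 3.0])  # s = 1.0 mm → U = 2.0 mm
```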

18 pages, 2691 KB  
Article
YOLOv8-DMC: Enabling Non-Contact 3D Cattle Body Measurement via Enhanced Keypoint Detection
by Zhi Weng, Wenwen Hao, Caili Gong and Zhiqiang Zheng
Animals 2025, 15(18), 2738; https://doi.org/10.3390/ani15182738 - 19 Sep 2025
Abstract
Accurate and non-contact measurement of cattle body dimensions is essential for precision livestock management. This study presents YOLOv8-DMC, a lightweight deep learning model optimized for anatomical keypoint detection in side-view images of cattle. The model integrates three attention modules—DRAMiTransformer, MHSA-C2f, and CASimAM—to improve robustness under occlusion and lighting variability. Following keypoint prediction, a 16-neighborhood depth completion step and pass-through filtering are applied to generate clean, colored point clouds. This enables precise 3D localization of keypoints by matching them to valid depth values. The model achieves AP@0.5 of 0.931 and AP@[0.50:0.95] of 0.868 on a dataset of over 7000 images, improving baseline accuracy by 2.14% and 3.09%, respectively, with only 0.35 M additional parameters and 0.9 GFLOPs in complexity. For real-world validation, strictly lateral-view RGB-D images from 137 cattle were collected, with ground-truth manual measurements. Compared with manual measurements, the average relative errors are 2.43% for body height, 2.26% for hip height, 3.65% for body length, and 4.48% for cannon circumference. The system supports deployment on edge devices, providing an efficient and accurate solution for 3D cattle measurement in real-world farming conditions.
(This article belongs to the Section Cattle)

21 pages, 4674 KB  
Article
CLCFM3: A 3D Reconstruction Algorithm Based on Photogrammetry for High-Precision Whole Plant Sensing Using All-Around Images
by Atsushi Hayashi, Nobuo Kochi, Kunihiro Kodama, Sachiko Isobe and Takanari Tanabata
Sensors 2025, 25(18), 5829; https://doi.org/10.3390/s25185829 - 18 Sep 2025
Abstract
This research aims to develop a novel technique to acquire a large amount of high-density, high-precision 3D point cloud data for plant phenotyping using photogrammetry technology. The complexity of plant structures, characterized by overlapping thin parts such as leaves and stems, makes it difficult to reconstruct accurate 3D point clouds. One challenge is occlusion, where overlapping parts prevent points from being captured in the 3D point cloud. Another is the generation of erroneous points in non-existent locations due to image-matching errors along object outlines. To overcome these challenges, we propose a 3D point cloud reconstruction method named closed-loop coarse-to-fine method with multi-masked matching (CLCFM3). This method repeatedly executes a process that generates point clouds locally to suppress occlusion (multi-matching) and a process that removes noise points using a mask image (masked matching). Furthermore, we propose the closed-loop coarse-to-fine method (CLCFM) to improve the accuracy of structure from motion, which is essential for implementing the proposed point cloud reconstruction method. CLCFM solves loop closure by performing coarse-to-fine camera position estimation. By facilitating the acquisition of high-density, high-precision 3D data for the large numbers of plants required in research, this approach is expected to enable comparative analysis of visible phenotypes across the growth process of a wide range of plant species based on 3D information.
(This article belongs to the Section Remote Sensors)

20 pages, 6720 KB  
Article
UBSP-Net: Underclothing Body Shape Perception Network for Parametric 3D Human Reconstruction
by Xihang Li, Xianguo Cheng, Fang Chen, Furui Shi and Ming Li
Electronics 2025, 14(17), 3522; https://doi.org/10.3390/electronics14173522 - 3 Sep 2025
Abstract
This paper introduces a novel Underclothing Body Shape Perception Network (UBSP-Net) for reconstructing parametric 3D human models from clothed full-body 3D scans, addressing the challenge of estimating body shape and pose beneath clothing. Our approach simultaneously predicts both the internal body point cloud and a reference point cloud for the SMPL model, with point-to-point correspondence, leveraging the external scan as an initial approximation to enhance the model’s stability and computational efficiency. By learning point offsets and incorporating body part label probabilities, the network achieves accurate internal body shape inference, enabling reliable Skinned Multi-Person Linear (SMPL) human body model registration. Furthermore, we optimize the SMPL+D human model parameters to reconstruct the clothed human model, accommodating common clothing types, such as T-shirts, shirts, and pants. Evaluated on the CAPE dataset, our method outperforms mainstream approaches, achieving significantly lower Chamfer distance errors and faster inference times. The proposed automated pipeline ensures accurate and efficient reconstruction, even with sparse or incomplete scans, and demonstrates robustness on real-world scans from the Thuman2.0 dataset. This work advances parametric human modeling by providing a scalable and privacy-preserving solution for applications in 3D shape analysis, virtual try-ons, and animation.

18 pages, 518 KB  
Article
Fakhr al-Dīn al-Rāzī on the Existence and Nature of the Jinn
by Shoaib Ahmed Malik
Religions 2025, 16(9), 1141; https://doi.org/10.3390/rel16091141 - 31 Aug 2025
Abstract
This article reconstructs Fakhr al-Dīn al-Rāzī’s (d. 1210) systematic treatment of the jinn in his Great Exegesis (al-Tafsīr al-Kabīr) and his summa The Sublime Objectives in Metaphysics (al-Maṭālib al-ʿĀliya min al-ʿIlm al-Ilāhī). In these works, al-Rāzī treats the jinn not as a marginal curiosity but as a test case for probing core metaphysical categories such as substance, embodiment, and divine action. His analysis unfolds through a sequence of guiding questions. Do the jinn exist at all? If not, we arrive at (1) the Denialist View. If they do exist, they must be either immaterial or material. The first yields (2) the Immaterialist View. The second raises the further question of whether bodies differ in essence or share a single essence. If they differ, we arrive at (3) the Non-Essentialist Corporealist View. Notably, these first three views are associated, in different ways, with various figures in the falsafa tradition. If they share a single essence, this produces the Essentialist Corporealist position, which then divides according to whether bodily structure is metaphysically necessary for life and agency. If not necessary, this produces (4) the Essentialist Corporealist—Structural Independence View, associated with the Ashʿarīs. If necessary, it leads to (5) the Essentialist Corporealist—Structural Dependence View, associated with the Muʿtazilīs. Al-Rāzī rejects (1) and (5), but he leaves (2), (3), and (4) as live possibilities. While he shows greater sympathy for (4), his broader purpose is not to settle the matter but to map the full range of theological and philosophical options. Al-Rāzī’s comprehensive exposition reflects the wider dialectic between falsafa, Ashʿarī theology, and Muʿtazilī theology, showcasing a sophisticated willingness to engage and entertain multiple metaphysical possibilities side by side. The result is an exercise in systematic metaphysics, where the question of the jinn, as liminal beings, becomes a means for interrogating broader ontological commitments in Islamic theology and philosophy.
(This article belongs to the Special Issue Between Philosophy and Theology: Liminal and Contested Issues)

25 pages, 7878 KB  
Article
Three-Dimensional Attribute Modeling and Deep Mineralization Prediction of Vein 171 in Linglong Gold Field, Jiaodong Peninsula, Eastern China
by Hongda Li, Zhichun Wu, Shouxu Wang, Yongfeng Wang, Chong Dong, Xiao Li, Zhiqiang Zhang, Hualiang Li, Weijiang Liu and Bin Li
Minerals 2025, 15(9), 909; https://doi.org/10.3390/min15090909 - 27 Aug 2025
Viewed by 747
Abstract
As shallow mineral resources become increasingly depleted, the search for deep-seated orebodies has emerged as a crucial focus in modern gold exploration. This study investigates Vein 171 in the Linglong gold field, Jiaodong Peninsula, using 3D attribute modeling for deep mineralization prediction and precise orebody delineation. The research integrates surface and block models through Vulcan 2021.5 3D mining software to reconstruct the spatial morphology and internal attribute distribution of the orebody. Geostatistical methods were applied to identify and process high-grade anomalies, with grade interpolation conducted using the inverse distance weighting (IDW) method. The results reveal that Vein 171 is predominantly controlled by NE-trending extensional structures, and grade enrichment occurs in zones where fault dips transition from steep to gentle. The grade distribution of the 1711 and 171sub-1 orebodies demonstrates heterogeneity, with high-grade clusters exhibiting periodic and discrete distributions along the dip and plunge directions. Key enrichment zones were identified at elevations of –1800 m to –800 m near the bifurcation of the Zhaoping Fault, where stress concentration and rock fracturing have created complex fracture networks conducive to hydrothermal fluid migration and gold precipitation. Nine verification drillholes in key target areas revealed 21 new mineralized bodies, resulting in an estimated additional 2.308 t of gold resources and validating the predictive accuracy of the 3D model. This study not only provides a reliable framework for deep prospecting and mineral resource expansion in the Linglong gold field but also serves as a reference for exploration in similar structurally controlled gold deposits globally. Full article
(This article belongs to the Special Issue 3D Mineral Prospectivity Modeling Applied to Mineral Deposits)
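The grade interpolation named in this abstract, inverse distance weighting (IDW), estimates a block's grade as a distance-weighted average of nearby samples. A minimal NumPy sketch of the estimator (a generic version for illustration, not Vulcan's implementation; the power exponent of 2 is an assumed typical choice):

```python
import numpy as np

def idw_interpolate(sample_xyz, sample_grades, query_xyz, power=2.0, eps=1e-12):
    """Estimate grades at query points as inverse-distance-weighted averages.

    sample_xyz: (n, 3) coordinates of assayed samples
    sample_grades: (n,) grades at those samples
    query_xyz: (m, 3) block-centroid coordinates to estimate
    """
    # Pairwise distances between each query point and each sample
    d = np.linalg.norm(query_xyz[:, None, :] - sample_xyz[None, :, :], axis=2)
    # Inverse-distance weights; eps guards against division by zero
    # when a query point coincides with a sample
    w = 1.0 / np.maximum(d, eps) ** power
    w /= w.sum(axis=1, keepdims=True)  # normalize weights per query point
    return w @ sample_grades           # weighted average grade per query point
```

In practice the high-grade anomaly handling the abstract mentions (capping or declustering outlier assays) would be applied to `sample_grades` before interpolation, and a search radius would limit which samples contribute to each block.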
16 pages, 12855 KB  
Article
The Influence of Seafloor Gradient on Turbidity Current Flow Dynamics and Depositional Response: A Case Study from the Lower Gas-Bearing Interval of Huangliu Formation II, Yinggehai Basin
by Yong Xu, Lei Li, Guohua Zhang, Wei Zhou, Zhongpo Zhang, Jiaying Wei and Xing Zhao
J. Mar. Sci. Eng. 2025, 13(9), 1616; https://doi.org/10.3390/jmse13091616 - 24 Aug 2025
Viewed by 839
Abstract
The Huangliu Formation, Section I, Gas Group II, at the eastern X gas field of the Yinggehai Basin, hosts thick, irregularly deposited sandstone bodies. The genesis of these sedimentary sand bodies has remained unclear. Utilizing drilling logs, core samples, and 3D seismic data from this field, this study integrates seismic geomorphology analysis, paleo-hydrodynamic reconstruction, and sedimentary numerical simulation to investigate the spatiotemporal evolution of the depositional system under micro-paleotopographic conditions during Gas Zone II sedimentation. Key conclusions include the development of seven morphologically diverse isolated sand bodies in the Lower II Gas Zone, covering areas of 1.4–13.4 km² with thicknesses ranging from 8.0 to 42.0 m. These sand bodies consist predominantly of massive fine-grained sandstone, characterized by box-shaped gamma-ray (GR) log responses and U- or V-shaped seismic reflection configurations. Reconstruction of paleo-turbidity current hydrodynamics for the Lower II depositional period was achieved through analysis of topographic slope gradients and the dimensional constraints (width/depth) of confined channels. Critically, slope gradients within the intraslope basin prompted a transition from supercritical to subcritical flow states within turbidity currents. This hydraulic transformation drove alternating erosion and deposition along the seafloor topography, ultimately generating the observed irregular, isolated turbidite sand bodies. Full article
(This article belongs to the Section Geological Oceanography)
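The supercritical-to-subcritical transition described in this abstract is conventionally diagnosed with the densimetric Froude number, Fr′ = U / √(g′h), where U is current velocity, h is current thickness, and g′ is the reduced gravity set by the density contrast between the current and ambient seawater. A small sketch of that criterion (the function names and input values below are illustrative, not the paper's reconstruction workflow):

```python
import math

def reduced_gravity(g, rho_ambient, rho_current):
    """Reduced gravity g' = g * (rho_current - rho_ambient) / rho_ambient
    felt by a dense sediment-laden current in lighter ambient water."""
    return g * (rho_current - rho_ambient) / rho_ambient

def densimetric_froude(velocity, g_prime, thickness):
    """Densimetric Froude number Fr' = U / sqrt(g' * h)."""
    return velocity / math.sqrt(g_prime * thickness)

def flow_state(froude):
    """Fr' > 1: supercritical (fast, thin, erosion-prone);
    Fr' < 1: subcritical (slow, thick, deposition-prone)."""
    return "supercritical" if froude > 1.0 else "subcritical"
```

A current decelerating and thickening across a slope-gradient break drops below Fr′ = 1 (a hydraulic jump), which is why the abstract links gentler intraslope gradients to deposition of the isolated sand bodies.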
28 pages, 9030 KB  
Article
UAV Path Planning via Semantic Segmentation of 3D Reality Mesh Models
by Xiaoxinxi Zhang, Zheng Ji, Lingfeng Chen and Yang Lyu
Drones 2025, 9(8), 578; https://doi.org/10.3390/drones9080578 - 14 Aug 2025
Cited by 2 | Viewed by 2255
Abstract
Traditional unmanned aerial vehicle (UAV) path planning methods for image-based 3D reconstruction often rely solely on geometric information from initial models, resulting in redundant data acquisition in non-architectural areas. This paper proposes a UAV path planning method via semantic segmentation of 3D reality mesh models to enhance efficiency and accuracy in complex scenarios. The scene is segmented into buildings, vegetation, ground, and water bodies. Lightweight polygonal surfaces are extracted for buildings, while planar segments in non-building regions are fitted and projected into simplified polygonal patches. These photography targets are further decomposed into point, line, and surface primitives. A multi-resolution image acquisition strategy is adopted, featuring high-resolution coverage for buildings and rapid scanning for non-building areas. To ensure flight safety, a Digital Surface Model (DSM)-based shell model is utilized for obstacle avoidance, and sky-view-based Real-Time Kinematic (RTK) signal evaluation is applied to guide viewpoint optimization. Finally, a complete weighted graph is constructed, and ant colony optimization is employed to generate a low-energy-cost flight path. Experimental results demonstrate that, compared with traditional oblique photogrammetry, the proposed method achieves higher reconstruction quality. Compared with the commercial software Metashape, it reduces the number of images by 30.5% and energy consumption by 37.7%, while significantly improving reconstruction results in both architectural and non-architectural areas. Full article
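The final planning step in this abstract builds a complete weighted graph over viewpoints and applies ant colony optimization (ACO) to find a low-energy-cost tour. A generic ACO sketch for that tour-finding problem (parameter names and values are illustrative defaults, not the paper's configuration, and the edge weights stand in for the paper's energy-cost model):

```python
import random

def aco_tour(dist, n_ants=20, n_iters=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0, seed=0):
    """Ant colony optimization for a low-cost closed tour on a complete weighted graph.

    dist: symmetric n x n matrix of edge costs. Returns (best_tour, best_cost)."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone level on each edge
    # Heuristic desirability: cheaper edges are more attractive
    eta = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    best_tour, best_cost = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in visited]
                # Probabilistic next-node choice biased by pheromone and heuristic
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            cost = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, cost))
            if cost < best_cost:
                best_tour, best_cost = tour, cost
        # Evaporate pheromone, then deposit on traversed edges (more on cheaper tours)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for tour, cost in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / cost
                tau[j][i] += q / cost
    return best_tour, best_cost
```

In the paper's setting, `dist[i][j]` would be the UAV energy cost of flying between viewpoints i and j along an obstacle-free route, so the pheromone reinforcement steers ants toward energy-efficient tours.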