Search Results (205)

Search Parameters:
Keywords = surface geometrical texture

19 pages, 4758 KB  
Article
SCSANet: Split Convolution Selective Attention Network of Drivable Area Detection for Mobile Robots
by Maozhang Ye, Xiaoli Li, Jidong Dai, Hongyi Li, Zhouyi Xu and Chentao Zhang
Eng 2026, 7(4), 176; https://doi.org/10.3390/eng7040176 (registering DOI) - 11 Apr 2026
Abstract
Detecting drivable areas is a fundamental task in autonomous driving systems. Although semantic segmentation networks have demonstrated strong performance in segmenting drivable regions, two key challenges persist. First, acquiring sufficient contextual information in complex road scenarios remains difficult, often leading to segmentation errors. Second, the coarseness of extracted features may degrade accuracy even when texture information is available in RGB images. To address these issues, we propose an enhanced DeepLabv3+ algorithm called Split Convolution Selective Attention Network (SCSANet), which incorporates the Adaptive Kernel (AK) and Split Convolution Attention (SCA) modules. AK adaptively adjusts the receptive field to accommodate varying road scenarios, while SCA improves boundary clarity by enhancing channel interaction. In addition, we employ surface normals to provide complementary geometric information, thereby strengthening the ability of the network to recognize drivable areas. To compensate for the lack of publicly available datasets for closed or semi-closed scenarios, we introduce XMUROAD, a new dataset of binocular disparity images. Experiments on the XMUROAD dataset demonstrate that the proposed architectural improvements yield an mIoU gain of 1.63% under the same RGB input, and the full pipeline with surface normal input achieves improvements of 1.55% to 2.59% in mF1 and 2.94% to 4.83% in mIoU over state-of-the-art methods. Experiments on the KITTI dataset further verify the generalization capability of SCSANet, with improvements of 1.58% in mF1 and 2.88% in mIoU over state-of-the-art methods. The proposed method provides a practical approach for accurate drivable area detection in closed and semi-closed mobile-robot scenarios. Full article
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)
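The abstract above reports gains in mIoU (mean intersection over union), the standard segmentation metric. As background, here is a minimal sketch of how mean IoU is typically computed from a confusion matrix; the two-class setup and the tiny flattened label maps are invented for illustration, and this is generic practice, not the paper's code:

```python
# Standard mean-IoU computation for semantic segmentation (illustrative).
def confusion(pred, gt, n_classes):
    m = [[0] * n_classes for _ in range(n_classes)]
    for p, g in zip(pred, gt):
        m[g][p] += 1  # rows: ground truth, columns: prediction
    return m

def mean_iou(pred, gt, n_classes):
    m = confusion(pred, gt, n_classes)
    ious = []
    for c in range(n_classes):
        tp = m[c][c]
        fp = sum(m[r][c] for r in range(n_classes)) - tp  # predicted c, wrong
        fn = sum(m[c]) - tp                               # true c, missed
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# Flattened label maps: 0 = background, 1 = drivable area (invented).
pred = [1, 1, 0, 0, 1, 0]
gt   = [1, 0, 0, 0, 1, 1]
print(mean_iou(pred, gt, 2))
```

A reported "mIoU gain of 1.63%" is simply the difference of this score between two models on the same test set.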
19 pages, 12031 KB  
Technical Note
Efficient Mesh Reconstruction and Texturing of Oracle Bones
by Shiming De
Sensors 2026, 26(7), 2270; https://doi.org/10.3390/s26072270 - 7 Apr 2026
Viewed by 189
Abstract
The high-fidelity 3D digitization of small, detailed cultural heritage objects, such as Oracle Bones, presents significant challenges for which existing reconstruction workflows are often inadequate. Methods based on Structure-from-Motion (SfM) often lack the geometric density required to capture fine inscription details, while Light Detection and Ranging and RGB-Depth approaches may introduce high data overhead and unstable color mapping. Recent specialized studies have utilized multi-shading-based techniques to extract such hidden surface textures, yet integrating these results into a cohesive mesh remains difficult. To address these limitations, we propose a digitization framework specifically designed for object-level archaeological artifacts. Our method combines semi-automatic alignment with ICP-based refinement for robust camera pose estimation, reducing misalignment issues associated with feature-only registration. Furthermore, we employ an efficient mesh-based representation with vertex-level coloring, enabling detailed geometry and consistent texturing while maintaining compact storage requirements. Our contributions include: (1) a high-quality mesh reconstruction framework that preserves fine inscription geometry; (2) a hybrid camera pose estimation strategy that improves alignment robustness; and (3) an integrated hardware-assisted workflow tailored for digitizing small archaeological artifacts under controlled acquisition conditions. Experimental results on physical Oracle Bone artifacts demonstrate that the proposed method achieves a mean geometric reconstruction error of approximately 0.075 mm with a Hausdorff distance of 1 mm. These results demonstrate the effectiveness of the proposed workflow for digitization of oracle bone artifacts. Full article
(This article belongs to the Section Sensor Networks)
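The abstract quantifies reconstruction quality via a mean geometric error and a Hausdorff distance. As a reference point, a minimal sketch of the symmetric Hausdorff distance between two point sets follows; the 2D points are invented, and real pipelines use mesh-aware, accelerated implementations rather than this brute-force version:

```python
# Symmetric Hausdorff distance between two point sets (brute force, illustrative).
import math

def hausdorff(a, b):
    def directed(x, y):
        # For each point in x, distance to its nearest neighbour in y; take the worst case.
        return max(min(math.dist(p, q) for q in y) for p in x)
    return max(directed(a, b), directed(b, a))

scan = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # reconstructed samples (invented)
ref  = [(0.0, 0.1), (1.0, 0.0), (0.0, 0.9)]  # reference samples (invented)
print(hausdorff(scan, ref))
```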
33 pages, 10259 KB  
Article
Multimodal Remote Sensing Image Classification Based on Dynamic Group Convolution and Bidirectional Guided Cross-Attention Fusion
by Lu Zhang, Yaoguang Yang, Zhaoshuang He, Guolong Li, Feng Zhao, Wenqiang Hua, Gongwei Xiao and Jingyan Zhang
Remote Sens. 2026, 18(7), 1066; https://doi.org/10.3390/rs18071066 - 2 Apr 2026
Viewed by 234
Abstract
The synergistic integration of Hyperspectral Imaging (HSI) and Light Detection and Ranging (LiDAR) data has become a pivotal strategy in remote sensing for precise land-cover classification. However, existing multimodal deep learning frameworks frequently suffer from intrinsic limitations, including rigid feature extraction protocols, underutilization of LiDAR-derived textural information, and asymmetric fusion mechanisms that fail to balance the contribution of spectral and elevation features effectively. To address these challenges, this paper proposes a novel framework named DGC-BCAF, which integrates Dynamic Group Convolution and Bidirectional Guided Cross-Attention Fusion to achieve adaptive feature representation and robust cross-modal interaction. First, a Dynamic Group Convolution (DGConv) module embedded within a ResNet18 backbone is designed to function as the central spatial context extractor. Unlike traditional group convolution, this module learns a dynamic relationship matrix to automatically group input channels, thereby facilitating flexible and context-aware feature representation that adapts to complex spatial distributions. Second, to overcome the insufficient exploitation of elevation data, we introduce a dedicated LiDAR texture encoding branch. This branch innovatively fuses Gray-Level Co-occurrence Matrix (GLCM) statistical features with multi-scale convolutional representations, capturing both geometric height information and fine-grained surface textural details that are critical for distinguishing objects with similar elevations. Finally, central to our architecture is the Bidirectional Cross-Attention Fusion (BCAF) module. Unlike standard unidirectional fusion approaches, BCAF employs LiDAR geometry to guide the selection of salient spectral bands, while simultaneously utilizing spectral signatures to emphasize informative LiDAR channels. This mutual guidance ensures a balanced contribution from both modalities.
Extensive experiments conducted on three benchmark datasets—Houston 2013, Trento, and MUUFL—demonstrate that DGC-BCAF consistently outperforms state-of-the-art methods in terms of overall accuracy, average accuracy, and Kappa coefficient. The results confirm that the proposed adaptive grouping and bidirectional guidance strategies significantly improve classification performance, particularly in distinguishing spectrally similar materials and delineating complex urban structures. Full article
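The LiDAR texture branch above relies on Gray-Level Co-occurrence Matrix (GLCM) statistics. As background, a minimal sketch of a GLCM and its contrast statistic follows; the 4×4 quantized image and the horizontal (0, 1) offset are invented, and this shows the textbook definition rather than the paper's implementation:

```python
# Gray-level co-occurrence matrix and the contrast statistic (illustrative).
def glcm(img, levels, dy, dx):
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1  # count co-occurring gray levels
    return m

def contrast(m):
    # Weighted by squared gray-level difference, so uniform regions score low.
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]  # invented 4-level image
g = glcm(img, levels=4, dy=0, dx=1)
print(contrast(g))
```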
20 pages, 4887 KB  
Article
Geo-RVF: A Multi-Task Lightweight Perception System Based on Radar–Vision Fusion for USVs
by Jianhong Zhou, Zhen Huang, Yifan Liu, Gang Zhang, Yilan Yu and Zhen Tian
J. Mar. Sci. Eng. 2026, 14(7), 664; https://doi.org/10.3390/jmse14070664 - 31 Mar 2026
Viewed by 263
Abstract
Visual perception in Unmanned Surface Vehicles (USVs) suffers from drastic lighting changes and missing texture features. These factors lead to depth scale drift and motion estimation bias. Moreover, existing multi-modal fusion models are computationally complex and unfit for resource-limited edge devices. To address these problems, a lightweight Radar–Vision Fusion (Geo-RVF) algorithm is proposed. To supplement spatial information, point clouds are projected to build sparse depth maps. A probability consistency-based depth correction module is designed to suppress water noise. This helps extract accurate geometric anchors to guide visual depth propagation. Subsequently, a Recurrent Autoregressive Network (RAN) fuses radar and image features in the temporal dimension. This resolves dynamic positional deviations caused by texture degradation and distant small targets. After real-time optimization, Geo-RVF achieves 23 FPS on the Jetson Orin NX. On a collected dataset, the method attains a mean average precision (mAP50–90) of 44.2% and a mean intersection over union (mIoU) of 99%, outperforming HybridNets and Achelous. Full article
(This article belongs to the Section Ocean Engineering)
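The abstract describes projecting point clouds into sparse depth maps. A minimal sketch of that step with a pinhole camera model follows; the intrinsics (fx, fy, cx, cy), image size, and points are invented for illustration, and the paper's actual radar-to-camera calibration is more involved:

```python
# Project 3D points (camera frame) into a sparse depth image (illustrative).
def project_to_sparse_depth(points, fx, fy, cx, cy, w, h):
    depth = [[0.0] * w for _ in range(h)]  # 0.0 marks "no measurement"
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        u = int(round(fx * x / z + cx))
        v = int(round(fy * y / z + cy))
        if 0 <= u < w and 0 <= v < h:
            # Keep the nearest return when several points hit one pixel.
            if depth[v][u] == 0.0 or z < depth[v][u]:
                depth[v][u] = z
    return depth

pts = [(0.0, 0.0, 5.0), (1.0, 0.5, 10.0)]  # invented radar points
d = project_to_sparse_depth(pts, fx=100, fy=100, cx=4, cy=4, w=8, h=8)
print(d[4][4])  # depth recorded at the principal point
```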
24 pages, 11322 KB  
Article
Hydrodynamic Influence of Circular Piles with a Surface Patterned with Hexagonal Dimples
by Angelica Lizbeth Álvarez-Mejia, Humberto Salinas-Tapia, Carlos Díaz-Delgado, Juan Manuel Becerril-Lara, Jesús Ramiro Félix-Félix, Boris Miguel López-Rebollar and Juan Antonio García-Aragón
Water 2026, 18(7), 807; https://doi.org/10.3390/w18070807 - 28 Mar 2026
Viewed by 385
Abstract
The interaction between circular piers and turbulent open-channel flow generates complex three-dimensional structures, including horseshoe vortices at the pier base and wake vortices downstream. These structures increase vertical velocities, pressure fluctuations, and shear stresses, contributing to erosion and structural instability. Although these phenomena have been widely studied, limited attention has been given to surface geometric modifications as a flow-control strategy. This study employs Large Eddy Simulation (LES) to evaluate the influence of a hexagonal dimple pattern on circular piles in a free-surface channel. The dimples were defined by varying diameter, depth, and spacing to reduce vertical velocity and alter vortex formation. The computational domain represents a 0.40 m wide, 12 m long, and 1.2 m high rectangular channel, with an inlet mass flow of 9.4 kg/s and 0.10 m water depth. Model validation against particle image velocimetry (PIV) data showed 99% correlation, confirming numerical accuracy. Results demonstrate that textured surfaces modify flow dynamics by enhancing kinetic energy dissipation and generating micro-vortices that weaken dominant structures. The optimal configuration (6 mm diameter, 2 mm depth, 1 mm spacing) reduced downward vertical velocity by 42% and wake vortex shedding frequency by 24%, indicating improved hydraulic stability and erosion mitigation potential. Full article
(This article belongs to the Topic Advances in Environmental Hydraulics, 2nd Edition)
14 pages, 2326 KB  
Article
Steel Surface Defect Detection Based on Improved YOLOv8 with Multi-Scale Feature Fusion and Attention Mechanism
by Yalei Jia, Xian Zhang, Jianhui Meng and Jisong Zang
Electronics 2026, 15(7), 1408; https://doi.org/10.3390/electronics15071408 - 27 Mar 2026
Viewed by 384
Abstract
Identifying microscopic textural anomalies and filtering out complicated industrial background noise remain significant hurdles in inspecting metallic surfaces. To tackle these operational bottlenecks, our research introduces a refined multi-scale detection framework built upon the YOLOv8l architecture. Specifically, we engineer a fine-grained detection pathway utilizing the P2 layer, which aims to preserve critical details of miniature flaws that are otherwise discarded during feature extraction. Furthermore, a Bi-directional Feature Pyramid Network model is embedded to reconstruct the feature fusion path, balancing the preservation of shallow geometric textures with enhanced multi-scale representation capabilities. To bolster anti-interference performance, a Convolutional Block Attention Module (CBAM) is integrated prior to the detection head, employing adaptive channel and spatial weighting to suppress unstructured background noise. Experimental results utilizing test-time augmentation (TTA) demonstrate that the mAP@0.5 reached 76.3%, with detection accuracies for patches and inclusions of 93.1% and 85.3%, respectively. Full article
16 pages, 2640 KB  
Article
The Effect of Normal Load on the Change in Geometrical Texture of Surfaces Forming a Multi-Bolted Connection
by Rafał Grzejda and Daniel Grochała
Appl. Sci. 2026, 16(7), 3248; https://doi.org/10.3390/app16073248 - 27 Mar 2026
Viewed by 321
Abstract
The stiffness of connections between machine elements depends on the geometry of the product and the condition of the material from which the joined elements are made. This stiffness is also influenced by the state of the surface geometrical texture and the technological parameters during the assembly process. This study examined whether, under normal load, there is a significant change in the geometrical state of the surfaces joined by a multi-bolted connection. It was shown that by properly performing the preloading process for such a connection, loss of the elastic properties of the joined surfaces can be avoided. The 3D images of the surfaces of the joined elements obtained as a result of the measurements can be used to model multi-bolted connections in a systemic approach. Full article
20 pages, 37476 KB  
Article
In-Orbit MapAnything: An Enhanced Feed-Forward Metric Framework for 3D Reconstruction of Non-Cooperative Space Targets Under Complex Lighting
by Yinxi Lu, Hongyuan Wang, Qianhao Ning, Ziyang Liu, Yunzhao Zang, Zhen Liao and Zhiqiang Yan
Sensors 2026, 26(7), 2026; https://doi.org/10.3390/s26072026 - 24 Mar 2026
Viewed by 364
Abstract
Precise 3D reconstruction of non-cooperative space targets is a prerequisite for active debris removal and on-orbit servicing. However, this task is impeded by severe environmental challenges. Specifically, the limited dynamic range of visible light cameras leads to frequent overexposure or underexposure under extreme space lighting. Compounded by sparse textures and strong specular reflections, these factors significantly constrain reconstruction accuracy. While existing general-purpose feed-forward models such as MapAnything offer efficient inference, their geometric recovery capabilities degrade sharply when facing significant domain shifts. To address these issues, this paper proposes an enhanced 3D reconstruction framework tailored for the space environment named In-Orbit MapAnything. First, to mitigate data scarcity, we construct a high-quality space target dataset incorporating extreme illumination characteristics, which provides comprehensive auxiliary modalities including accurate camera poses and dense point clouds. Second, we propose the SatMap-Adapter module to mitigate feature degradation caused by severe specular reflections. This architecture employs a hierarchical cascade sampling strategy to align multi-level backbone features and utilizes a lightweight adaptive fusion module to dynamically integrate shallow photometric cues, intermediate structural information, and deep semantic features. Finally, we employ a weight-decomposed low-rank adaptation strategy to achieve parameter-efficient fine-tuning while strictly freezing the pre-trained backbone. Experimental results demonstrate that the proposed method decreases the absolute relative error and Chamfer distance by 15.23% and 20.02% respectively compared to the baseline MapAnything model, while maintaining a rapid inference speed. 
The proposed approach effectively suppresses reconstruction noise on metallic surfaces and recovers fine geometric structures, validating the effectiveness of our feature-enhanced framework in extreme space environments. Full article
18 pages, 7894 KB  
Article
Laser Surface Microtexturing for Enhanced Adhesive Bonding in Steel–Polymer and Steel–Ceramic Joints
by Szymon Tofil, Leonardo Orazi, Vincenzina Siciliani, Cyril Mauclair, António B. Pereira, Sascha Stribick, Felix Hartmann, Jianhua Yao, Qunli Zhang, Liang Wang and Shuyang Lin
Appl. Sci. 2026, 16(6), 3010; https://doi.org/10.3390/app16063010 - 20 Mar 2026
Viewed by 227
Abstract
Laser surface microtexturing has emerged as an effective approach for improving the performance of adhesive joints between dissimilar materials. In this study, the influence of laser-generated micrometric surface features on the mechanical behavior of hybrid adhesive joints was investigated for two material systems: structural steel bonded to polyamide (PA66) and structural steel bonded to technical ceramic (Al2O3). Single-lap joints were manufactured using a two-component epoxy adhesive with two nominal bond-line thicknesses (0.1 mm and 1.0 mm). Prior to bonding, selected surfaces were modified by ultrashort-pulse laser microtexturing, producing well-defined circular features with characteristic depths on the order of tens of micrometers. The resulting microstructures were characterized using optical and scanning electron microscopy, and their geometric parameters were quantified through profilometric measurements. Mechanical performance was evaluated under shear and bending loading conditions. The results demonstrate a substantial increase in joint strength for laser-microtextured surfaces compared with non-textured references for both material combinations. The effect of surface microtexturing was more pronounced than the influence of adhesive layer thickness within the investigated range. These findings confirm that laser-induced surface microtexturing is a versatile and application-oriented surface preparation method capable of enhancing the reliability of adhesive bonding in hybrid metal–polymer and metal–ceramic assemblies. Full article
(This article belongs to the Special Issue The Applications of Laser-Based Manufacturing for Material Science)
19 pages, 1232 KB  
Article
Network-Level Modeling of Pavement Surface Macrotexture Degradation Using Linear Mixed-Effects Models
by Raul Almeida, Adriana Santos, Susana Faria and Elisabete Freitas
Infrastructures 2026, 11(3), 101; https://doi.org/10.3390/infrastructures11030101 - 18 Mar 2026
Viewed by 265
Abstract
Surface texture plays a key role in pavement safety and performance, yet its degradation is influenced by multiple interacting factors that vary across road networks. This study developed statistical models to characterize and predict surface texture evolution on Portuguese highways using linear mixed-effects modeling. Texture measurements collected on 7204 pavement sections, each 100 m in length, over three monitoring cycles were analyzed alongside traffic, climatic, pavement structural, geometric, and spatial variables. The hierarchical structure of the data, with repeated measurements nested within pavement sections, was explicitly accounted for via random intercepts and random slopes. At the same time, temporal correlation was modeled via an autoregressive error structure. Two model specifications were evaluated: a model including only traffic and climatic variables and an extended model incorporating pavement and geometric characteristics. Results indicate that texture evolution is statistically associated with cumulative traffic loading, temperature-related indicators, precipitation, surface course type, lane position, vertical alignment, and altitude. The extended model showed a significantly better fit and superior predictive performance, as confirmed by information criteria and cross-validation metrics. The findings highlight the importance of accounting for section-level heterogeneity and roadway characteristics when modeling texture degradation. The proposed modeling framework provides a statistically scalable and robust tool for texture prediction, accounting for regional specificities and supporting long-term pavement management decisions. Full article
(This article belongs to the Special Issue Sustainable Road Design and Traffic Management)
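The section-level heterogeneity mentioned above is what random intercepts capture. As a toy illustration only, the sketch below estimates per-section intercept offsets as deviations of section means from the grand mean; this is a crude moment-based stand-in for the full REML estimation a mixed-effects model performs, and the section IDs and readings are invented:

```python
# Simplified random-intercept view: per-section offsets from the grand mean.
def section_intercepts(measurements):
    # measurements: {section_id: [texture readings over monitoring cycles]}
    n = sum(len(vals) for vals in measurements.values())
    grand = sum(v for vals in measurements.values() for v in vals) / n
    return {s: sum(vals) / len(vals) - grand
            for s, vals in measurements.items()}

data = {"S1": [1.0, 0.9, 0.8],   # invented MPD-like readings, section 1
        "S2": [1.4, 1.3, 1.2]}   # section 2 sits systematically higher
off = section_intercepts(data)
print(off["S1"], off["S2"])
```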
17 pages, 1628 KB  
Article
Interplay of Aspect Ratio and Emission Dipole Orientation for Light Extraction in Corrugated Red, Green and Blue OLEDs
by Milan Kovačič, Janez Krč and Marko Topič
Photonics 2026, 13(3), 287; https://doi.org/10.3390/photonics13030287 - 17 Mar 2026
Viewed by 385
Abstract
Using advanced optical modelling, we quantify how sinusoidal corrugation and emitter dipole orientation jointly govern light extraction from OLED thin-film stacks into a glass substrate for red, green, and blue emission. Irrespective of emission colour, the corrugation aspect ratio (AR = height/period) is the dominant geometric parameter controlling extraction, with absolute period and height playing secondary roles, as periods of 600–1000 nm deliver similar gains across all colours. Extraction peaks at AR ≈ 0.2 for predominantly horizontal dipoles, AR ≈ 0.5 for vertical dipoles, and AR ≈ 0.3 for isotropic orientations. For the isotropic case, extraction improves by up to 40%, 34%, and 20% relative to flat red, green, and blue devices, respectively. Absorption analysis attributes the principal gains to suppression of surface-plasmon-polariton losses of vertical dipoles, supported by local dipole reorientation, waveguide disruption, and scattering. Because practical texturing can alter dipole orientation, optimum conditions must be re-evaluated; if orientations follow the sinusoidal profile, an AR of approximately 0.2–0.3 is favoured for isotropic to moderately horizontal orientations, whereas higher ARs benefit strongly vertical orientations. The results provide guidelines for co-optimising corrugation geometry and dipole orientation for high-efficiency OLEDs. Full article
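The key quantity in this abstract is the corrugation aspect ratio AR = height/period, with reported extraction optima depending on dipole orientation. A trivial sketch makes the relationship concrete; the optima are taken from the abstract, while the helper function and the 160 nm / 800 nm example are invented:

```python
# Aspect ratio of a sinusoidal corrugation and the optima reported above.
OPTIMAL_AR = {"horizontal": 0.2, "vertical": 0.5, "isotropic": 0.3}

def aspect_ratio(height_nm, period_nm):
    return height_nm / period_nm

# A 160 nm corrugation height on an 800 nm period hits the horizontal-dipole optimum.
ar = aspect_ratio(160, 800)
print(ar, ar == OPTIMAL_AR["horizontal"])
```

Since periods of 600–1000 nm reportedly give similar gains, designers can pick the period for fabrication convenience and set the height to land on the AR optimum for the expected dipole orientation.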
28 pages, 5420 KB  
Article
HEMS-RTDETR: A Lightweight Edge-Enhanced and Deformation-Aware Detector for Floating Debris in Complex Water Environments
by Yiwei Cui, Xinyi Jiang, Haiting Yu, Meizhen Lei and Jia Ren
Electronics 2026, 15(6), 1226; https://doi.org/10.3390/electronics15061226 - 15 Mar 2026
Viewed by 348
Abstract
Floating debris detection in complex aquatic environments holds significant importance for water resource protection and maritime safety monitoring. However, this task faces three core challenges: severe background interference leading to blurred target textures, significant non-rigid deformations, and the frequent loss of small targets at long distances. To address these issues, we propose a high-performance lightweight detection algorithm, termed High-Efficiency Edge-Aware Multi-Scale Real-Time Detection Transformer (HEMS-RTDETR), built upon the Real-Time Detection Transformer (RT-DETR) architecture. First, to suppress disturbances induced by water surface ripples and specular reflections, a Cross-Stage Partial Multi-Scale Edge Information Enhancement (CSP-MSEIE) module is introduced to reconstruct the backbone network. By removing computational redundancy while incorporating explicit edge enhancement, feature extraction capability and noise robustness for weak-texture targets are significantly improved. Second, to handle irregular debris morphology, a Deformable Attention Transformer (DAT) module is integrated, enabling adaptive attention focusing on geometrically deformed regions. Finally, an Efficient Multi-Scale Bidirectional Feature Pyramid Network (EMBSFPN) is constructed to enhance cross-scale semantic interaction and alleviate small-target signal loss. Experimental results demonstrate that, compared with RTDETR-r18, HEMS-RTDETR reduces parameters to 12.57 M, improves mAP@0.5 and mAP@0.5:0.95 by 2.44% and 3.05%, respectively, and maintains real-time inference at 93 FPS, indicating strong robustness and application potential in dynamic aquatic environments. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 3rd Edition)
20 pages, 20209 KB  
Article
Planar-Guided Gaussian Splatting with Texture-Complexity-Based Initialization
by Anhong Zheng and Zhuoyuan Yu
Electronics 2026, 15(5), 1137; https://doi.org/10.3390/electronics15051137 - 9 Mar 2026
Viewed by 468
Abstract
Indoor scene reconstruction remains challenging due to the prevalence of low-texture regions such as walls, floors, and ceilings, where weak photometric signals hinder accurate geometric recovery. While 3D Gaussian Splatting (3DGS) achieves impressive novel view synthesis, existing methods struggle with geometric accuracy in textureless areas due to uniform treatment of scene regions. We propose a texture-complexity-based 3D Gaussian Splatting strategy that leverages geometric priors for high-fidelity indoor reconstruction. Our method extracts planar priors through Manhattan frame alignment and refines them with Segment Anything Model (SAM) masks, enabling texture-aware initialization: planar priors guide Gaussian placement in low-texture regions, while dense feature matching ensures accurate initialization in high-detail areas. During optimization, geometric regularization through depth-plane loss, normal-surface loss, and normal-consistency loss maintains structural integrity. Evaluations on ScanNet++, MuSHRoom, and Replica datasets demonstrate state-of-the-art performance, with training completed in under 1 h. Our approach balances geometric accuracy with photometric fidelity, providing a practical solution for high-fidelity indoor mesh extraction from Gaussian representations. Full article
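The geometric regularization described above includes a depth-plane loss. A minimal sketch of a point-to-plane penalty of that general kind follows: the mean absolute distance of points to a plane n·p + d = 0. The plane, the points, and the exact form of the penalty are invented for illustration; the paper's loss may differ in detail:

```python
# Mean absolute point-to-plane distance as a planar regularization term (illustrative).
import math

def point_plane_loss(points, n, d):
    norm = math.sqrt(sum(c * c for c in n))  # normalize in case n is not unit length
    return sum(abs(sum(c * p for c, p in zip(n, pt)) + d)
               for pt in points) / (len(points) * norm)

# Points near the floor plane z = 0 (normal (0, 0, 1), offset 0), invented.
pts = [(0.0, 0.0, 0.02), (1.0, 0.5, -0.02), (0.3, 0.2, 0.0)]
print(point_plane_loss(pts, n=(0.0, 0.0, 1.0), d=0.0))
```

Minimizing such a term pulls Gaussians in a detected planar region onto the plane, which is exactly where photometric supervision is weakest.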
27 pages, 15861 KB  
Article
Explorable 3D Hyperspectral Models from Multi-Angle Gimballed LWIR Pushbroom Imagery
by Nikolay Golosov, Guido Cervone and Mark Salvador
Remote Sens. 2026, 18(5), 781; https://doi.org/10.3390/rs18050781 - 4 Mar 2026
Viewed by 316
Abstract
Hyperspectral imaging in the long-wave infrared (LWIR) range enables identification of chemical compositions and material properties, but reconstructing 3D models from gimballed pushbroom sensors remains challenging because their unique acquisition geometry is incompatible with conventional photogrammetric software designed for frame cameras. This study presents a workflow for creating explorable 3D models from multi-angle LWIR hyperspectral imagery by co-registering hyperspectral line-scan data with simultaneously acquired RGB frame camera imagery using deep learning-based image matching. The co-registered images are processed in commercial photogrammetric software (Agisoft Metashape), and a texture-to-image mapping algorithm preserves correspondences between 3D model coordinates and original hyperspectral pixels across multiple viewing angles. Quantitative evaluation against reference data demonstrates that co-registration reduces geometric error, approaching the accuracy of models built from high-resolution RGB imagery. The resulting models enable the retrieval of 8–50 spectral signatures per surface point, captured from different viewing geometries. This approach facilitates interactive exploration of angular variations in thermal infrared spectra, supporting material identification for non-Lambertian surfaces where single-angle observations may be insufficient for reliable classification. Full article
23 pages, 27373 KB  
Article
When Reality Meets Practice: Challenges and Pitfalls in 3D Digitization Using Structured Light Scanning and Photogrammetry in Cultural Heritage
by Eleftheria Iakovaki, Markos Konstantakis, Ioannis Giaourtsakis, Evangelia Rentoumi, Dimitrios Protopapas, Christos Psarras and Efterpi Koskeridou
Information 2026, 17(3), 237; https://doi.org/10.3390/info17030237 - 1 Mar 2026
Viewed by 627
Abstract
Three-dimensional (3D) digitization has become a central methodological pillar in cultural heritage documentation, conservation support, and dissemination. Despite the maturity of image-based photogrammetry and active sensing technologies, real-world digitization campaigns frequently diverge from idealized workflows due to constraints related to object accessibility, surface properties, lighting conditions, and operational feasibility. As a result, practitioners are often required to adapt acquisition and processing strategies dynamically, balancing geometric fidelity, visual quality, and practical limitations. This study presents a practice-oriented analysis of applied digitization workflows conducted in controlled indoor and museum environments, focusing on fragile and optically challenging cultural and paleontological objects. Structured light scanning, DSLR-based photogrammetry, and hybrid approaches were systematically explored. While structured light scanning offered high nominal resolution, its performance proved sensitive to material properties and surface behavior, leading to incomplete or unstable reconstructions in several cases. Photogrammetric workflows, when supported by controlled acquisition setups, yielded robust and visually coherent results for the majority of objects. For cases where conventional photogrammetry underperformed, alternative AI-assisted image-based reconstruction pipelines were evaluated as complementary solutions. Rather than emphasizing only successful outcomes, the paper documents recurring failure modes, decision-making trade-offs, and breakdown points across acquisition, alignment, meshing, and texturing stages. Empirical observations are synthesized into qualitative comparisons and decision-support tables, highlighting the conditions under which specific digitization strategies succeed or fail. 
The findings underscore that hybrid workflows, while theoretically advantageous, can amplify integration complexity and error propagation if not carefully constrained. By foregrounding practical constraints and adaptive methodological choices, this work contributes a transparent, experience-driven perspective on cultural heritage digitization, supporting more resilient planning and informed decision-making in future documentation and conservation projects. Full article
(This article belongs to the Special Issue Techniques and Data Analysis in Cultural Heritage, 2nd Edition)