Search Results (2,480)

Search Parameters:
Keywords = scanned maps

18 pages, 6228 KB  
Article
All-Weather Flood Mapping Using a Synergistic Multi-Sensor Downscaling Framework: Case Study for Brisbane, Australia
by Chloe Campo, Paolo Tamagnone, Suelynn Choy, Trinh Duc Tran, Guy J.-P. Schumann and Yuriy Kuleshov
Remote Sens. 2026, 18(2), 303; https://doi.org/10.3390/rs18020303 - 16 Jan 2026
Abstract
Despite a growing number of Earth Observation satellites, a critical observational gap persists for timely, high-resolution flood mapping, primarily due to infrequent satellite revisits and persistent cloud cover. To address this issue, we propose a novel framework that synergistically fuses complementary data from three public sensor types. Our methodology harmonizes these disparate data sources by using surface water fraction as a common variable and downscaling them with flood susceptibility and topography information. This allows for the integration of sub-daily observations from the Visible Infrared Imaging Radiometer Suite and the Advanced Himawari Imager with the cloud-penetrating capabilities of the Advanced Microwave Scanning Radiometer 2. We evaluated this approach on the February 2022 flood in Brisbane, Australia, using an independent ground truth dataset. The framework successfully compensates for the limitations of individual sensors, enabling the consistent generation of detailed, high-resolution flood maps. The proposed method outperformed the flood extent derived from commercial high-resolution optical imagery, scoring 77% higher than the Copernicus Emergency Management Service (CEMS) map in the Critical Success Index. Furthermore, the True Positive Rate was twice as high as the CEMS map, confirming that the proposed method successfully overcame the cloud cover issue. This approach provides valuable, actionable insights into inundation dynamics, particularly when other public data sources are unavailable.
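The two skill scores quoted above come straight from a pixel-wise confusion matrix. A minimal sketch of how they are computed — the pixel counts below are invented for illustration, not taken from the paper:

```python
def flood_map_scores(tp, fp, fn):
    """Critical Success Index and True Positive Rate from pixel counts.

    tp: flooded pixels correctly mapped, fp: false alarms, fn: missed pixels.
    """
    csi = tp / (tp + fp + fn)  # penalizes both misses and false alarms
    tpr = tp / (tp + fn)       # fraction of truly flooded pixels detected
    return csi, tpr

# Hypothetical counts: a cloud-robust fused map versus a cloud-limited optical map.
csi_fused, tpr_fused = flood_map_scores(tp=800, fp=150, fn=200)
csi_optic, tpr_optic = flood_map_scores(tp=400, fp=150, fn=600)
```

CSI is also known as the threat score or Jaccard index; unlike accuracy, it ignores the (typically huge) count of correctly mapped dry pixels, which is why it is preferred for flood-extent validation.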
16 pages, 5511 KB  
Article
Enhancing Lithium Extraction: Effect of Mechanical Activation on the Sulfuric Acid Leaching Behavior of Lepidolite
by Yuik Eom, Laurence Dyer, Aleksandar N. Nikoloski and Richard Diaz Alorro
Minerals 2026, 16(1), 87; https://doi.org/10.3390/min16010087 - 16 Jan 2026
Abstract
This study investigated the effect of mechanical activation on the physicochemical properties of lepidolite and the leaching behavior of mechanically activated samples in sulfuric acid (H₂SO₄). Lepidolite was mechanically activated using a high-energy planetary ball mill (PBM) at 400 RPM with a 20:1 ball-to-feed weight ratio (BFR, g:g), and the samples activated for different durations were characterized for amorphous phase content, particle size, and morphology using various solid analyses. X-ray diffraction (XRD) revealed the progressive amorphization of lepidolite, with the amorphous fraction increasing from 34.1% (unactivated) to 81.4% after 60 min of mechanical activation. Scanning electron microscopy (SEM) showed that mechanically activated particles became fluffy and rounded, whereas unactivated particles retained lamellar and angular shapes. The reactivity of minerals after mechanical activation was evaluated through a 2 M H₂SO₄ leaching test at different leaching temperatures (25–80 °C) and time periods (30–180 min). Although the leaching efficiencies of Li and Al slightly improved at higher leaching temperatures and longer leaching times, the leaching of these metals was primarily governed by the mechanical activation time. The highest Li and Al leaching efficiencies—87.0% for Li and 79.4% for Al—were obtained from lepidolite that was mechanically activated for 60 min under leaching conditions of 80 °C and a 10% (w/v) solid/liquid (S/L) ratio for 30 min. The elemental mapping images of leaching feed and residue produced via energy dispersive spectroscopy (EDS) indicated that unactivated particles in the leaching residue had much higher metal content than mechanically activated particles. Kinetic analysis further suggested that leaching predominantly occurs at mechanically activated sites, and the apparent activation energies calculated in this study (<3.1 kJ·mol⁻¹) indicate diffusion-controlled behavior with weak temperature dependence. This result confirmed that mechanical activation significantly improves reactivity and that the residual unleached fraction can be attributed to unactivated particles.
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)
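The diffusion-control argument rests on the Arrhenius relation k = A·exp(−Ea/(R·T)): given rate constants at two temperatures, the apparent activation energy follows directly. A sketch with illustrative rate constants (hypothetical values, not the paper's measurements):

```python
import math

R = 8.314  # molar gas constant, J·mol⁻¹·K⁻¹

def apparent_activation_energy(k1, t1, k2, t2):
    """Ea (J·mol⁻¹) from k = A·exp(-Ea/(R·T)) at two absolute temperatures."""
    return R * math.log(k2 / k1) / (1.0 / t1 - 1.0 / t2)

# Illustrative rate constants at 25 °C (298.15 K) and 80 °C (353.15 K):
ea = apparent_activation_energy(k1=0.010, t1=298.15, k2=0.012, t2=353.15)
ea_kj = ea / 1000.0  # a few kJ·mol⁻¹ at most: weak temperature dependence,
                     # the signature of diffusion-limited leaching
```

A chemically controlled reaction would typically show Ea of tens of kJ·mol⁻¹; values below roughly 20 kJ·mol⁻¹ are conventionally read as diffusion control, which is the interpretation the abstract draws from its <3.1 kJ·mol⁻¹ result.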
20 pages, 4628 KB  
Article
Particle-Filter-Based LiDAR Localization for Indoor Parking Lot
by Injun Hong and Manbok Park
Appl. Sci. 2026, 16(2), 908; https://doi.org/10.3390/app16020908 - 15 Jan 2026
Abstract
Accurate localization of autonomous vehicles in indoor environments is challenging due to the absence of GPS signals, so various studies have explored the use of environmental sensors to address this limitation. In this paper, we propose an indoor localization algorithm that utilizes a 3D LiDAR sensor and a 2D map, supported by improved motion and sensor modeling tailored for indoor parking lots. These environments contain complex conditions in which static noise from parked vehicles and dynamic noise from moving vehicles coexist, requiring a localization method capable of maintaining robustness under high-noise conditions. In this study, vehicle odometry was obtained using LOAM-style scan-to-scan LiDAR odometry, and a particle filter was implemented based on this information. The proposed algorithm was validated using a test vehicle in two indoor parking lots under three different conditions: when the lot was empty, when parked vehicles were present, and when other moving vehicles were present. Experimental results demonstrated that the algorithm achieved an average localization error of approximately 0.09 m across all scenarios, confirming its effectiveness for indoor parking environments.
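The particle-filter loop behind this abstract — predict each pose hypothesis from odometry, weight it by agreement with the map, resample — can be sketched in a few lines. Everything below (noise levels, a 2D point measurement standing in for scan-to-map matching) is an illustrative assumption, not the authors' implementation:

```python
import math
import random

random.seed(0)

def pf_step(particles, motion, meas, sigma):
    """One predict-update-resample cycle of a planar particle filter.

    particles: list of (x, y) pose hypotheses
    motion:    (dx, dy) odometry increment (e.g. scan-to-scan LiDAR odometry)
    meas:      (x, y) position implied by matching the scan to the 2D map
    sigma:     assumed measurement noise standard deviation (m)
    """
    # Predict: propagate each particle by the odometry, with process noise.
    moved = [(x + motion[0] + random.gauss(0, 0.05),
              y + motion[1] + random.gauss(0, 0.05)) for x, y in particles]
    # Update: Gaussian likelihood of the map-matching measurement.
    weights = [math.exp(-((x - meas[0]) ** 2 + (y - meas[1]) ** 2)
                        / (2 * sigma ** 2)) for x, y in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy run: the vehicle drives 0.1 m per step along x; the filter tracks it.
particles = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
true_x = 0.0
for _ in range(20):
    true_x += 0.1
    particles = pf_step(particles, motion=(0.1, 0.0),
                        meas=(true_x, 0.0), sigma=0.3)
x_est = sum(x for x, _ in particles) / len(particles)  # close to true_x = 2.0
```

The noise injected into the predict step is what lets the filter absorb both the static clutter of parked cars and transient disturbances from moving vehicles: poor hypotheses get low weights and die out at resampling.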
30 pages, 2392 KB  
Article
Functional Connectivity Between Human Motor and Somatosensory Areas During a Multifinger Tapping Task: A Proof-of-Concept Study
by Roberto García-Leal, Julio Prieto-Montalvo, Juan Guzman de Villoria, Massimiliano Zanin and Estrella Rausell
NeuroSci 2026, 7(1), 12; https://doi.org/10.3390/neurosci7010012 - 14 Jan 2026
Abstract
Hand representation maps of the primate primary motor (M1) and somatosensory (SI) cortices exhibit plasticity, with their spatial extent modifiable through training. While activation and map enlargement during tapping tasks are well documented, the directionality of information flow between these regions remains unclear. We applied Information Imbalance Gain Causality (IIG) to examine the propagation and temporal dynamics of BOLD activity among Area 4 (precentral gyrus), Area 3a (fundus of the central sulcus), and SI areas (postcentral gyrus). Data were collected from both hemispheres of nine participants performing alternating right–left hand finger tapping inside a 1.5T fMRI scanner. The results revealed strong information flow from both the precentral and postcentral gyri toward the sulcus during the tapping task, with weaker bidirectional exchange between the gyri. When not engaged in tapping, both gyri communicated with each other and the sulcus. During active tapping, flow bypassed the sulcus, favoring a more direct postcentral-to-precentral pathway. Over time, postcentral-to-sulcus influence strengthened during non-task periods but diminished during tapping. These findings suggest that M1, Area 3a, and SI areas form a dynamic network that supports rapid learning processes, where Area 3a of the sulcus may contribute to maintaining representational plasticity during complex tapping tasks.
26 pages, 8620 KB  
Article
Two-Step Localization Method for Electromagnetic Follow-Up of LIGO-Virgo-KAGRA Gravitational-Wave Triggers
by Daniel Skorohod and Ofek Birnholtz
Universe 2026, 12(1), 21; https://doi.org/10.3390/universe12010021 - 14 Jan 2026
Abstract
Rapid detection and follow-up of electromagnetic (EM) counterparts to gravitational wave (GW) signals from binary neutron star (BNS) mergers are essential for constraining source properties and probing the physics of relativistic transients. Observational strategies for these early EM searches are therefore critical, yet current practice remains suboptimal, motivating improved, coordination-aware approaches. We propose and evaluate the Two-Step Localization strategy, a coordinated observational protocol in which one wide-field auxiliary telescope and one narrow-field main telescope monitor the evolving GW sky localization in real time. The auxiliary telescope, by virtue of its large field of view, has a higher probability of detecting early EM emission. Upon registering a candidate signal, it triggers the main telescope to slew to the inferred location for prompt, high-resolution follow-up. We assess the performance of Two-Step Localization using large-scale simulations that incorporate dynamic sky-map updates, realistic telescope parameters, and signal-to-noise ratio (SNR)-weighted localization contours. For context, we compare Two-Step Localization to two benchmark strategies lacking coordination. Our results demonstrate that Two-Step Localization significantly reduces the median detection latency, highlighting the effectiveness of targeted cooperation in the early-time discovery of EM counterparts. Our results point to the most impactful next step: next-generation faster telescopes that deliver drastically higher slew rates and shorter scan times, reducing the number of required tiles; a deeper, truly wide-field auxiliary improves coverage more than simply adding more telescopes.
(This article belongs to the Section Compact Objects)
28 pages, 3256 KB  
Article
Comparative Analysis of Sonication, Microfluidics, and High-Turbulence Microreactors for the Fabrication and Scaling-Up of Diclofenac-Loaded Liposomes
by Iria Naveira-Souto, Roger Fabrega Alsina, Elisabet Rosell-Vives, Eloy Pena-Rodríguez, Francisco Fernandez-Campos, Jessica Malavia, Xavier Julia Camprodon, Maximilian Schelden, Nazende Günday-Türeli, Andrés Cruz-Conesa and Maria Lajarin-Reinares
Pharmaceutics 2026, 18(1), 105; https://doi.org/10.3390/pharmaceutics18010105 - 13 Jan 2026
Abstract
Background: Liposomes are attractive topical carriers, yet translating laboratory fabrication to scalable, well-controlled processes remains challenging. Objectives: We compared three manufacturing methods for diclofenac-loaded liposomes: probe sonication, microfluidic mixing, and a high-turbulence microreactor, under a Quality-by-Design framework. Methods: Differential scanning calorimetry (DSC) was used to define a processing-relevant liquid-crystalline temperature window for the lipid excipients. For sonication scale-up, a Plackett–Burman screening design identified key process factors and supported an energy-density (W·s·L⁻¹) control approach. For microfluidics, the effects of flow-rate ratio (FRR) and total flow rate (TFR) were mapped and optimized using a desirability function. Microreactor trials were performed at elevated throughput. Residual ethanol during post-processing was monitored at-line by Raman spectroscopy calibrated against gas chromatography (GC). Particle size and dispersity were measured by DLS and morphology assessed by cryo-TEM. Results: DSC supported a 70–85 °C processing window. Sonication scale-up using an energy-density target (~11,000 W·s·L⁻¹) reproduced lab-scale quality at 8 L (Z-average ~87–92 nm; PDI 0.16–0.23; %EE 86–94%). Microfluidics optimization selected FRR 3:1/TFR 4 mL·min⁻¹, yielding ~64 nm liposomes with PDI ~0.13 and %EE ~93%. The microreactor achieved ~50 nm liposomes with %EE ~95% at 50 mL·min⁻¹. Cryo-TEM corroborated size trends and showed no evident aggregates. Conclusions: All three routes met topical CQAs (~50–100 nm; PDI ≤ 0.30; high %EE). Method selection should be guided by target size/dispersity and operational constraints: sonication enables energy-based scale-up, microfluidics offers precise size control, and microreactors provide higher throughput.
(This article belongs to the Section Pharmaceutical Technology, Manufacturing and Devices)
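The energy-density control approach makes the scale-up arithmetic explicit: holding E = P·t/V constant, the required sonication time grows linearly with batch volume. A sketch using the abstract's ~11,000 W·s·L⁻¹ target; the probe power is a hypothetical figure, not a value reported in the paper:

```python
TARGET_E = 11_000  # energy-density target, W·s·L⁻¹ (from the abstract)

def sonication_time_s(power_w, volume_l, target=TARGET_E):
    """Time (s) needed to deliver the target energy density E = P·t/V."""
    return target * volume_l / power_w

# Assumed 400 W of net acoustic power delivered to the batch:
t_lab   = sonication_time_s(power_w=400, volume_l=0.5)  # lab scale
t_scale = sonication_time_s(power_w=400, volume_l=8.0)  # 8 L batch
```

Matching energy density rather than time is what makes the 8 L batch reproduce lab-scale quality: the same joules per litre reach the lipids regardless of vessel size, provided the power actually delivered is measured rather than the nominal setting.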
19 pages, 6871 KB  
Article
A BIM-Derived Synthetic Point Cloud (SPC) Dataset for Construction Scene Component Segmentation
by Yiquan Zou, Tianxiang Liang, Wenxuan Chen, Zhixiang Ren and Yuhan Wen
Data 2026, 11(1), 16; https://doi.org/10.3390/data11010016 - 12 Jan 2026
Abstract
In intelligent construction and BIM–Reality integration applications, high-quality, large-scale construction scene point cloud data with component-level semantic annotations constitute a fundamental basis for three-dimensional semantic understanding and automated analysis. However, point clouds acquired from real construction sites commonly suffer from high labeling costs, severe occlusion, and unstable data distributions. Existing public datasets remain insufficient in terms of scale, component coverage, and annotation consistency, limiting their suitability for data-driven approaches. To address these challenges, this paper constructs and releases a BIM-derived synthetic construction scene point cloud dataset, termed the Synthetic Point Cloud (SPC), targeting component-level point cloud semantic segmentation and related research tasks. The dataset is generated from publicly available BIM models through physics-based virtual LiDAR scanning, producing multi-view and multi-density three-dimensional point clouds while automatically inheriting component-level semantic labels from BIM without any manual intervention. The SPC dataset comprises 132 virtual scanning scenes, with an overall scale of approximately 8.75 × 10⁹ points, covering typical construction components such as walls, columns, beams, and slabs. By systematically configuring scanning viewpoints, sampling densities, and occlusion conditions, the dataset introduces rich geometric and spatial distribution diversity. This paper presents a comprehensive description of the SPC data generation pipeline, semantic mapping strategy, virtual scanning configurations, and data organization scheme, followed by statistical analysis and technical validation in terms of point cloud scale evolution, spatial coverage characteristics, and component-wise semantic distributions. Furthermore, baseline experiments on component-level point cloud semantic segmentation are provided. The results demonstrate that models trained solely on the SPC dataset can achieve stable and engineering-meaningful component-level predictions on real construction point clouds, validating the dataset’s usability in virtual-to-real research scenarios. As a scalable and reproducible BIM-derived point cloud resource, the SPC dataset offers a unified data foundation and experimental support for research on construction scene point cloud semantic segmentation, virtual-to-real transfer learning, scan-to-BIM updating, and intelligent construction monitoring.
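The label-inheritance step described above amounts to a lookup from each scanned point's source component to a semantic class id. A toy sketch — the class list and ids are assumptions for illustration, not the SPC dataset's actual schema:

```python
# Hypothetical class ids for the component categories named in the abstract.
SEMANTIC_MAP = {"Wall": 0, "Column": 1, "Beam": 2, "Slab": 3}

def label_points(points, component_category):
    """Attach an inherited class id to each virtually scanned point.

    points:             list of (x, y, z, component_id) from the virtual scanner
    component_category: mapping component_id -> BIM category string
    """
    return [(x, y, z, SEMANTIC_MAP.get(component_category[cid], -1))
            for x, y, z, cid in points]

# Two points, one hit on a wall and one on a column (made-up ids):
scan = [(0.0, 0.0, 1.2, "w-01"), (3.5, 0.0, 1.2, "c-07")]
labeled = label_points(scan, {"w-01": "Wall", "c-07": "Column"})
```

Because every simulated ray already knows which BIM component it hit, the labels are exact by construction — which is precisely why synthetic scanning sidesteps the manual annotation cost the abstract highlights.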
20 pages, 5061 KB  
Article
Research on Orchard Navigation Technology Based on Improved LIO-SAM Algorithm
by Jinxing Niu, Jinpeng Guan, Tao Zhang, Le Zhang, Shuheng Shi and Qingyuan Yu
Agriculture 2026, 16(2), 192; https://doi.org/10.3390/agriculture16020192 - 12 Jan 2026
Abstract
To address the challenges in unstructured orchard environments, including high geometric similarity between fruit trees (with the measured average Euclidean distance difference between point cloud descriptors of adjacent trees being less than 0.5 m), significant dynamic interference (e.g., interference from pedestrians or moving equipment can occur every 5 min), and uneven terrain, this paper proposes an improved mapping algorithm named OSC-LIO (Orchard Scan Context Lidar Inertial Odometry via Smoothing and Mapping). The algorithm designs a dynamic point filtering strategy based on Euclidean clustering and spatiotemporal consistency within a 5-frame sliding window to reduce the interference of dynamic objects in point cloud registration. By integrating local semantic features such as fruit tree trunk diameter and canopy height difference, a two-tier verification mechanism combining “global and local information” is constructed to enhance the distinctiveness and robustness of loop closure detection. Motion compensation is achieved by fusing data from an Inertial Measurement Unit (IMU) and a wheel odometer to correct point cloud distortion. A three-level hierarchical indexing structure—“path partitioning, time window, KD-Tree (K-Dimension Tree)”—is built to reduce the time required for loop closure retrieval and improve the system’s real-time performance. Experimental results show that the improved OSC-LIO system reduces the Absolute Trajectory Error (ATE) by approximately 23.5% compared to the original LIO-SAM (Tightly coupled Lidar Inertial Odometry via Smoothing and Mapping) in a simulated orchard environment, while enabling stable and reliable path planning and autonomous navigation. This study provides a high-precision, lightweight technical solution for autonomous navigation in orchard scenarios.
53 pages, 3354 KB  
Review
Mamba for Remote Sensing: Architectures, Hybrid Paradigms, and Future Directions
by Zefeng Li, Long Zhao, Yihang Lu, Yue Ma and Guoqing Li
Remote Sens. 2026, 18(2), 243; https://doi.org/10.3390/rs18020243 - 12 Jan 2026
Abstract
Modern Earth observation combines high spatial resolution, wide swath, and dense temporal sampling, producing image grids and sequences far beyond the regime of standard vision benchmarks. Convolutional networks remain strong baselines but struggle to aggregate kilometre-scale context and long temporal dependencies without heavy tiling and downsampling, while Transformers incur quadratic costs in token count and often rely on aggressive patching or windowing. Recently proposed visual state-space models, typified by Mamba, offer linear-time sequence processing with selective recurrence and have therefore attracted rapid interest in remote sensing. This survey analyses how far that promise is realised in practice. We first review the theoretical substrates of state-space models and the role of scanning and serialization when mapping two- and three-dimensional EO data onto one-dimensional sequences. A taxonomy of scan paths and architectural hybrids is then developed, covering centre-focused and geometry-aware trajectories, CNN– and Transformer–Mamba backbones, and multimodal designs for hyperspectral, multisource fusion, segmentation, detection, restoration, and domain-specific scientific applications. Building on this evidence, we delineate the task regimes in which Mamba is empirically warranted—very long sequences, large tiles, or complex degradations—and those in which simpler operators or conventional attention remain competitive. Finally, we discuss green computing, numerical stability, and reproducibility, and outline directions for physics-informed state-space models and remote-sensing-specific foundation architectures. Overall, the survey argues that Mamba should be used as a targeted, scan-aware component in EO pipelines rather than a drop-in replacement for existing backbones, and aims to provide concrete design principles for future remote sensing research and operational practice.
(This article belongs to the Section AI Remote Sensing)
12 pages, 1660 KB  
Article
Temporal Degradation of Skeletal Muscle Quality on CT as a Prognostic Marker in Septic Shock
by June-sung Kim, Jiyeon Ha, Youn-Jung Kim, Yousun Ko, Kyung Won Kim and Won Young Kim
Diagnostics 2026, 16(2), 247; https://doi.org/10.3390/diagnostics16020247 - 12 Jan 2026
Abstract
Background/Objectives: Although cross-sectional muscle quality has shown prognostic relevance, the impact of temporal changes in muscle composition in septic shock has not been fully explored. This study aimed to investigate whether deterioration in muscle quality on serial computed tomography (CT) scans is associated with mortality in patients with septic shock. Methods: We conducted a retrospective single-center study using a prospectively collected registry of adult patients with septic shock between May 2016 and May 2022. Patients who underwent CT on the day of emergency department (ED) presentation and had a CT performed more than 180 days earlier were included. Muscle quality maps were generated and segmented based on CT attenuation values into normal-attenuation muscle area (NAMA), low-attenuation muscle area (LAMA), and intramuscular adipose tissue area. Differences between the ED and prior CT scans were also calculated. The primary outcome was the 28-day mortality. Results: Among the 768 enrolled patients, the 28-day mortality was 18.0%. Compared with survivors, non-survivors showed a significantly greater increase in LAMA (20.8 vs. 9.8 cm²) and a greater decrease in NAMA (−26.0 vs. −18.8 cm²). Multivariate analysis identified increased LAMA as an independent risk factor for 28-day mortality (adjusted OR 1.03; 95% CI: 1.01–1.04; p < 0.01). Conclusions: An increase in LAMA on serial CT scans was associated with higher short-term mortality in patients with septic shock, suggesting that temporal degradation of skeletal muscle quality may serve as a potential prognostic marker.
(This article belongs to the Special Issue Diagnostics in the Emergency and Critical Care Medicine)
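Segmentation by CT attenuation reduces to thresholding Hounsfield-unit (HU) values inside the muscle mask. The cutoffs below are ranges commonly used in the body-composition literature, assumed here for illustration; the paper's exact values may differ:

```python
# Assumed HU ranges per tissue class (not taken from the paper):
RANGES = {
    "IMAT": (-190, -30),  # intramuscular adipose tissue
    "LAMA": (-29, 29),    # low-attenuation (myosteatotic) muscle
    "NAMA": (30, 150),    # normal-attenuation muscle
}

def area_cm2(hu_values, tissue, pixel_area_cm2=0.01):
    """Area of one tissue class from per-pixel HU values inside the muscle mask."""
    lo, hi = RANGES[tissue]
    return sum(pixel_area_cm2 for hu in hu_values if lo <= hu <= hi)

# Toy slice at two time points: the change in LAMA between prior and ED scans.
prior_hu = [45, 50, 10, -100, 60, 35]
ed_hu    = [45, 12,  5, -100, 20, 35]
delta_lama = area_cm2(ed_hu, "LAMA") - area_cm2(prior_hu, "LAMA")
```

A positive `delta_lama` means muscle pixels have drifted down into the low-attenuation band between the two scans, which is exactly the temporal degradation signal the study associates with 28-day mortality.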
22 pages, 92351 KB  
Article
Robust Self-Supervised Monocular Depth Estimation via Intrinsic Albedo-Guided Multi-Task Learning
by Genki Higashiuchi, Tomoyasu Shimada, Xiangbo Kong and Hiroyuki Tomiyama
Appl. Sci. 2026, 16(2), 714; https://doi.org/10.3390/app16020714 - 9 Jan 2026
Abstract
Self-supervised monocular depth estimation has demonstrated high practical utility, as it can be trained using a photometric image reconstruction loss between the original image and a reprojected image generated from the estimated depth and relative pose, thereby alleviating the burden of large-scale label creation. However, this photometric image reconstruction loss relies on the Lambertian reflectance assumption. Under non-Lambertian conditions such as specular reflections or strong illumination gradients, pixel values fluctuate depending on the lighting and viewpoint, which often misguides training and leads to large depth errors. To address this issue, we propose a multitask learning framework that integrates albedo estimation as a supervised auxiliary task. The proposed framework is implemented on top of representative self-supervised monocular depth estimation backbones, including Monodepth2 and Lite-Mono, by adopting a multi-head architecture in which the shared encoder–decoder branches at each upsampling block into a Depth Head and an Albedo Head. Furthermore, we apply Intrinsic Image Decomposition to generate albedo images and design an albedo supervision loss that uses these albedo maps as training targets for the Albedo Head. We then integrate this loss term into the overall training objective, explicitly exploiting illumination-invariant albedo components to suppress erroneous learning in reflective regions and areas with strong illumination gradients. Experiments on the ScanNetV2 dataset demonstrate that, for the lightweight backbone Lite-Mono, our method achieves an average reduction of 18.5% over the four standard depth error metrics and consistently improves accuracy metrics, without increasing the number of parameters and FLOPs at inference time.
(This article belongs to the Special Issue Convolutional Neural Networks and Computer Vision)
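The training objective described above combines the usual self-supervised photometric term with a supervised albedo term. A minimal sketch of that combination — plain L1 stands in for the full photometric loss (Monodepth2-style training also uses SSIM and per-pixel minimum reprojection), and the weight `lam` is an assumed value, not the paper's:

```python
def l1(a, b):
    """Mean absolute difference between two equally sized pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def total_loss(target, reprojected, albedo_pred, albedo_target, lam=0.1):
    """Self-supervised photometric term plus supervised albedo auxiliary term."""
    photometric = l1(target, reprojected)        # drives depth + pose
    albedo_sup = l1(albedo_pred, albedo_target)  # targets from intrinsic
                                                 # image decomposition
    return photometric + lam * albedo_sup

# Toy 4-pixel example with made-up intensities:
loss = total_loss(target=[0.2, 0.4, 0.6, 0.8],
                  reprojected=[0.2, 0.5, 0.6, 0.8],
                  albedo_pred=[0.3, 0.3, 0.3, 0.3],
                  albedo_target=[0.3, 0.3, 0.5, 0.3])
```

Because the albedo targets are illumination-invariant, the auxiliary term keeps gradients sane in specular or strongly lit regions where the photometric term alone would push the depth the wrong way; at inference only the Depth Head runs, so the extra head costs nothing.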
26 pages, 11357 KB  
Article
An Advanced Multi-Analytical Approach to Study Baroque Painted Wood Sculptures from Apulia (Southern Italy)
by Daniela Fico, Giorgia Di Fusco, Maurizio Masieri, Raffaele Casciaro, Daniela Rizzo and Angela Calia
Materials 2026, 19(2), 284; https://doi.org/10.3390/ma19020284 - 9 Jan 2026
Abstract
Three valuable painted wood sculptures from conventual collections in Apulia (Southern Italy), made between the beginning of the 17th century and the first half of the 18th century, were studied to shed light on the pictorial materials and techniques of Neapolitan Baroque sculpture in Southern Italy. A multi-analytical approach was implemented using integrated micro-invasive techniques, including polarized light microscopy (PLM) in ultraviolet (UV) and visible (VIS) light, scanning electron microscopy coupled with energy dispersive spectroscopy (SEM-EDS), Fourier-Transform Infrared (FTIR) spectroscopy, and pyrolysis–gas chromatography/high-resolution mass spectrometry (Py-GC/HRMS). The stratigraphic sequences were microscopically identified, and the pictorial layers were discriminated on the basis of optical features, elemental compositions, and mapping. Organic components were detected by FTIR as lipids and proteinaceous compounds for binders, while terpenic resins were detected as varnishes. Accordingly, Py-GC/HRMS identified siccative oils, animal glue, egg, and colophony. The results allowed the identification of the painting techniques used for the pictorial films and the ground preparation layers and supported the distinction between original and repainting layers. The results of this multi-analytical approach provide insights into Baroque wooden sculpture in Southern Italy and offer information to support restorers in conservation works.
39 pages, 10403 KB  
Article
High-Temperature Degradation of Hastelloy C276 in Methane and 99% Cracked Ammonia Combustion: Surface Analysis and Mechanical Property Evolution at 4 Bar
by Mustafa Alnaeli, Burak Goktepe, Steven Morris and Agustin Valera-Medina
Processes 2026, 14(2), 235; https://doi.org/10.3390/pr14020235 - 9 Jan 2026
Abstract
This study examines the high-temperature degradation of Hastelloy C276, a corrosion-resistant nickel-based alloy, during exposure to combustion products generated by methane and 99% cracked ammonia. Using a high-pressure optical combustor (HPOC) at 4 bar and exhaust temperatures of 815–860 °C, standard tensile specimens were exposed for five hours to fully developed post-flame exhaust gases, simulating real industrial turbine or burner conditions. The surfaces and subsurface regions of the samples were analysed using scanning electron microscopy (SEM; Zeiss Sigma HD FEG-SEM, Carl Zeiss, Oberkochen, Germany) and energy-dispersive X-ray spectroscopy (EDX; Oxford Instruments X-MaxN detectors, Oxford Instruments, Abingdon, United Kingdom), while mechanical properties were evaluated by tensile testing, and the gas-phase compositions were tracked in detail for each fuel blend. Results show that exposure to methane causes moderate oxidation and some grain boundary carburisation, with localised carbon enrichment detected by high-resolution EDX mapping. In contrast, 99% cracked ammonia resulted in much more aggressive selective oxidation, as evidenced by extensive surface roughening, significant chromium depletion, and higher oxygen incorporation, correlating with increased NOx in the exhaust gas. Tensile testing reveals that methane exposure causes severe embrittlement (yield strength +41%, elongation −53%) through grain boundary carbide precipitation, while cracked ammonia exposure results in moderate degradation (yield strength +4%, elongation −24%) with fully preserved ultimate tensile strength (870 MPa), despite more aggressive surface oxidation. These counterintuitive findings demonstrate that grain boundary integrity is more critical than surface condition for mechanical reliability. These findings underscore the importance of evaluating material compatibility in low-carbon and hydrogen/ammonia-fuelled combustion systems and establish critical microstructural benchmarks for the anticipated mechanical testing in future work.
(This article belongs to the Special Issue Experiments and Diagnostics in Reacting Flows)
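The tensile results above are stated as percent changes relative to unexposed material (e.g., yield strength +41%, elongation −53% after methane exposure). A minimal sketch of that calculation, using hypothetical baseline and post-exposure values chosen only to reproduce the reported trend (the actual measurements are in the full article):

```python
def percent_change(exposed: float, baseline: float) -> float:
    """Relative change of a mechanical property after exposure, in percent."""
    return (exposed - baseline) / baseline * 100.0

# Illustrative values only: a 400 MPa baseline yield strength rising to
# 564 MPa after methane exposure corresponds to the reported +41%.
baseline_ys = 400.0   # MPa (hypothetical)
methane_ys = 564.0    # MPa (hypothetical)
print(round(percent_change(methane_ys, baseline_ys)))  # 41
```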

54 pages, 8516 KB  
Review
Interdisciplinary Applications of LiDAR in Forest Studies: Advances in Sensors, Methods, and Cross-Domain Metrics
by Nadeem Fareed, Carlos Alberto Silva, Izaya Numata and Joao Paulo Flores
Remote Sens. 2026, 18(2), 219; https://doi.org/10.3390/rs18020219 - 9 Jan 2026
Viewed by 362
Abstract
Over the past two decades, Light Detection and Ranging (LiDAR) technology has evolved from early National Aeronautics and Space Administration (NASA)-led airborne laser altimetry into commercially mature systems that now underpin vegetation remote sensing across scales. Continuous advancements in laser engineering, signal processing, and complementary technologies—such as Inertial Measurement Units (IMU) and Global Navigation Satellite Systems (GNSS)—have yielded compact, cost-effective, and highly sophisticated LiDAR sensors. Concurrently, innovations in carrier platforms, including uncrewed aerial systems (UAS), mobile laser scanning (MLS), and Simultaneous Localization and Mapping (SLAM) frameworks, have expanded LiDAR’s observational capacity from plot- to global-scale applications in forestry, precision agriculture, ecological monitoring, Above Ground Biomass (AGB) modeling, and wildfire science. This review synthesizes LiDAR’s cross-domain capabilities for the following: (a) quantifying vegetation structure, function, and compositional dynamics; (b) recent sensor developments encompassing discrete-return airborne laser scanning (ALSD), full-waveform ALS (ALSFW), photon-counting LiDAR (PCL), emerging multispectral LiDAR (MSL), and hyperspectral LiDAR (HSL) systems; and (c) state-of-the-art data processing and fusion workflows integrating optical and radar datasets. The synthesis demonstrates that many LiDAR-derived vegetation metrics are inherently transferable across domains when interpreted within a unified structural framework. The review further highlights the growing role of artificial-intelligence (AI)-driven approaches for segmentation, classification, and multitemporal analysis, enabling scalable assessments of vegetation dynamics at unprecedented spatial and temporal extents. 
By consolidating historical developments, current methodological advances, and emerging research directions, this review establishes a comprehensive state-of-the-art perspective on LiDAR’s transformative role and future potential in monitoring and modeling Earth’s vegetated ecosystems. Full article
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
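The cross-domain transferability the review describes rests on a small set of structural summaries computed from normalized point-cloud return heights. A minimal sketch of common examples (height percentiles and canopy cover); the 2 m cover threshold and the toy heights are illustrative defaults, not values from the review:

```python
import numpy as np

def canopy_metrics(heights: np.ndarray, cover_threshold: float = 2.0) -> dict:
    """Common LiDAR vegetation metrics from normalized return heights (m).

    Percentile and cover summaries like these are among the structural
    metrics used across forestry, agriculture, and wildfire applications.
    """
    return {
        "p95_height": float(np.percentile(heights, 95)),  # upper-canopy height
        "mean_height": float(heights.mean()),
        # Fraction of returns above the threshold, a simple cover proxy.
        "canopy_cover": float((heights > cover_threshold).mean()),
    }

# Toy normalized return heights for a single plot (m).
h = np.array([0.1, 0.3, 1.8, 4.2, 9.5, 12.0, 15.3, 16.1])
m = canopy_metrics(h)
```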

19 pages, 690 KB  
Review
Methodologies for Assessing the Dimensional Accuracy of Computer-Guided Static Implant Surgery in Clinical Settings: A Scoping Review
by Sorana Nicoleta Rosu, Monica Silvia Tatarciuc, Anca Mihaela Vitalariu, Roxana-Ionela Vasluianu, Irina Gradinaru, Nicoleta Ioanid, Catalina Cioloca Holban, Livia Bobu, Adina Oana Armencia, Alice Murariu, Elena-Odette Luca and Ana Maria Dima
Dent. J. 2026, 14(1), 43; https://doi.org/10.3390/dj14010043 - 8 Jan 2026
Viewed by 223
Abstract
Background: Computer-guided static implant surgery (CGSIS) is widely adopted to enhance the precision of dental implant placement. However, significant heterogeneity in reported accuracy values complicates evidence-based clinical decision-making. This variance is likely attributable to a fundamental lack of standardization in the methodologies used to assess dimensional accuracy. Objective: This scoping review aimed to systematically map, synthesize, and analyze the clinical methodologies used to quantify the dimensional accuracy of CGSIS. Methods: The review was conducted in accordance with the PRISMA-ScR guidelines. A systematic search of PubMed/MEDLINE, Scopus, and Embase was performed from inception to October 2025. Clinical studies quantitatively comparing planned versus achieved implant positions in human patients were included. Data were charted on study design, guide support type, data acquisition methods, reference systems for superimposition, measurement software, and accuracy metrics. Results: The analysis of 21 included studies revealed extensive methodological heterogeneity. Key findings included the predominant use of two distinct reference systems: post-operative CBCT (n = 12) and intraoral scanning with scan bodies (n = 6). A variety of proprietary and third-party software packages (e.g., coDiagnostiX, Geomagic, Mimics) were employed for superimposition, utilizing different alignment algorithms. Critically, this heterogeneity in measurement approach manifests directly in widely varying reported values: definitions and reporting of the core accuracy metrics—global coronal deviation (range of reported means: 0.55–1.70 mm), global apical deviation (0.76–2.50 mm), and angular deviation (2.11–7.14°)—were inconsistent, and the metrics were summarized with different statistics (e.g., means with standard deviations or medians with interquartile ranges). 
Conclusions: The comparability and synthesis of evidence on CGSIS accuracy are significantly limited by non-standardized measurement approaches. The reported ranges of deviation values are a direct consequence of this methodological heterogeneity, not a comparison of implant system performance. Our findings highlight an urgent need for a consensus-based minimum reporting standard for future clinical research in this field to ensure reliable and translatable evidence. Full article
(This article belongs to the Special Issue New Trends in Digital Dentistry)
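The deviation metrics tabulated by the review have simple geometric definitions: global coronal and apical deviations are 3D Euclidean distances between the planned and achieved platform and apex points, and angular deviation is the angle between the planned and achieved implant long axes. A generic sketch of those definitions (not any one study's code; point coordinates are assumed to be in a common reference frame after superimposition):

```python
import numpy as np

def implant_deviations(planned_coronal, achieved_coronal,
                       planned_apex, achieved_apex):
    """Coronal deviation (mm), apical deviation (mm), angular deviation (deg)."""
    pc, ac = np.asarray(planned_coronal, float), np.asarray(achieved_coronal, float)
    pa, aa = np.asarray(planned_apex, float), np.asarray(achieved_apex, float)
    coronal = np.linalg.norm(ac - pc)          # 3D distance at the platform
    apical = np.linalg.norm(aa - pa)           # 3D distance at the apex
    v1, v2 = pa - pc, aa - ac                  # implant long-axis vectors
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return coronal, apical, angle
```

For example, an implant placed 1 mm laterally of plan but with the same axis yields 1 mm coronal and apical deviations and 0° angular deviation.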
