Search Results (1,829)

Search Parameters:
Keywords = noise index

21 pages, 4023 KB  
Article
High-Speed Image Restoration Based on a Dynamic Vision Sensor
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 781; https://doi.org/10.3390/s26030781 - 23 Jan 2026
Abstract
We report a post-capture, on-demand deblurring technique based on a Dynamic Vision Sensor (DVS). Motion blur is an inherent photographic defect in most mobile-camera use cases. To compensate for motion blur in mobile photography, we use a fast event-based vision sensor. However, we found severe image-quality artifacts caused by color ghosting, event noise, and discrepancies between conventional image sensors and event-based sensors. To overcome these inevitable artifacts, we propose and demonstrate event-based compensation techniques such as cross-correlation optimization, contrast maximization, resolution mismatch compensation (event upsampling for alignment), and disparity matching. The results show that deblurring performance can be improved dramatically in terms of metrics such as the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Spatial Frequency Response (SFR). Thus, we expect that the proposed event-based image restoration technique can be widely deployed in mobile cameras.
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
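For reference, the PSNR and SSIM figures used to score the deblurring can be computed with scikit-image. A minimal sketch, assuming 8-bit grayscale frames (the function names are scikit-image's, not the authors'):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def deblur_quality(reference: np.ndarray, restored: np.ndarray) -> dict:
    """Score a restored frame against a sharp reference (8-bit grayscale assumed)."""
    return {
        "psnr_db": peak_signal_noise_ratio(reference, restored, data_range=255),
        "ssim": structural_similarity(reference, restored, data_range=255),
    }
```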
22 pages, 1980 KB  
Article
Multi-Temporal Point Cloud Alignment for Accurate Height Estimation of Field-Grown Leafy Vegetables
by Qian Wang, Kai Yuan, Zuoxi Zhao, Yangfan Luo and Yuanqing Shui
Agriculture 2026, 16(2), 280; https://doi.org/10.3390/agriculture16020280 - 22 Jan 2026
Abstract
Accurate measurement of plant height in leafy vegetables is challenging due to their short stature, high planting density, and severe canopy occlusion during later growth stages. These factors often limit the reliability of single-plant monitoring across the full growth cycle in open-field environments. To address this, we propose a multi-temporal point cloud alignment method for accurate plant height measurement, focusing on Choy Sum (Brassica rapa var. parachinensis). The method estimates plant height by calculating the vertical distance between the canopy and the ground. Multi-temporal point cloud maps are reconstructed using an enhanced Oriented FAST and Rotated BRIEF–Simultaneous Localization and Mapping (ORB-SLAM3) algorithm. A fixed checkerboard calibration board, leveled using a spirit level, ensures proper vertical alignment of the Z-axis and unifies coordinate systems across growth stages. Ground and plant points are separated using the Excess Green (ExG) index. During early growth stages, when the soil is minimally occluded, ground point clouds are extracted and used to construct a high-precision reference ground model through Cloth Simulation Filtering (CSF) and Kriging interpolation, compensating for canopy occlusion and noise. In later growth stages, plant point cloud data are spatially aligned with this reconstructed ground surface. Individual plants are identified using an improved Euclidean clustering algorithm, and consistent measurement regions are defined. Within each region, a ground plane is fitted using the Random Sample Consensus (RANSAC) algorithm to ensure alignment with the X–Y plane. Plant height is then determined by the elevation difference between the canopy and the interpolated ground surface. Experimental results show mean absolute errors (MAEs) of 7.19 mm and 18.45 mm for early and late growth stages, respectively, with coefficients of determination (R²) exceeding 0.85. These findings demonstrate that the proposed method provides reliable and continuous plant height monitoring across the full growth cycle, offering a robust solution for high-throughput phenotyping of leafy vegetables in field environments.
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)
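The ExG-based plant/ground split is simple to reproduce. A minimal sketch, assuming per-point RGB colors in [0, 1]; the threshold value is hypothetical, as the abstract does not report one:

```python
import numpy as np

def split_plant_ground(rgb, threshold=0.05):
    """Split point-cloud colors into plant/ground masks with Excess Green:
    ExG = 2g - r - b on chromaticity-normalized channels.
    rgb: (N, 3) array of per-point colors in [0, 1]."""
    chroma = rgb / (rgb.sum(axis=1, keepdims=True) + 1e-9)
    r, g, b = chroma.T
    exg = 2.0 * g - r - b
    plant = exg > threshold        # hypothetical threshold; tune per scene
    return plant, ~plant
```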
21 pages, 4886 KB  
Article
GaPMeS: Gaussian Patch-Level Mixture-of-Experts Splatting for Computation-Limited Sparse-View Feed-Forward 3D Reconstruction
by Jinwen Liu, Wenchao Liu and Rui Guo
Appl. Sci. 2026, 16(2), 1108; https://doi.org/10.3390/app16021108 - 21 Jan 2026
Abstract
To address the issues of parameter coupling and high computational demands in existing feed-forward Gaussian splatting methods, we propose Gaussian Patch-level Mixture-of-Experts Splatting (GaPMeS), a lightweight feed-forward 3D Gaussian reconstruction model based on a mixture-of-experts (MoE) multi-task decoupling framework. GaPMeS employs a dual-routing gating mechanism to replace heavy refinement networks, enabling task-adaptive feature selection at the image patch level and alleviating the gradient conflicts commonly observed in shared-backbone architectures. By decoupling Gaussian parameter prediction into four independent sub-tasks and incorporating a hybrid soft–hard expert selection strategy, the model maintains high efficiency with only 14.6 M parameters while achieving competitive performance across multiple datasets—including a Structural Similarity Index (SSIM) of 0.709 on RealEstate10K, a Peak Signal-to-Noise Ratio (PSNR) of 19.57 on DL3DV, and a 26.0% SSIM improvement on real industrial scenes. These results demonstrate the model’s superior efficiency and reconstruction quality, offering a new and effective solution for high-quality sparse-view 3D reconstruction.
(This article belongs to the Special Issue Advances in Computer Graphics and 3D Technologies)
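The patch-level expert-routing idea can be illustrated with a toy soft-routing head in PyTorch. This shows only the basic mechanism; GaPMeS's dual-routing gate, hybrid soft–hard selection, and four decoupled sub-task decoders are not reproduced, and all dimensions are illustrative:

```python
import torch
import torch.nn as nn

class PatchMoEHead(nn.Module):
    """Toy patch-level mixture-of-experts head: every patch token is routed
    to a weighted blend of small expert MLPs via a learned soft gate."""

    def __init__(self, dim=64, n_experts=4, out_dim=3):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, out_dim))
            for _ in range(n_experts)
        )

    def forward(self, tokens):                       # tokens: (B, patches, dim)
        weights = self.gate(tokens).softmax(dim=-1)  # (B, P, E) routing weights
        outs = torch.stack([e(tokens) for e in self.experts], dim=-1)  # (B, P, out, E)
        return (outs * weights.unsqueeze(2)).sum(dim=-1)               # (B, P, out)
```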
22 pages, 7096 KB  
Article
An Improved ORB-KNN-Ratio Test Algorithm for Robust Underwater Image Stitching on Low-Cost Robotic Platforms
by Guanhua Yi, Tianxiang Zhang, Yunfei Chen and Dapeng Yu
J. Mar. Sci. Eng. 2026, 14(2), 218; https://doi.org/10.3390/jmse14020218 - 21 Jan 2026
Abstract
Underwater optical images often exhibit severe color distortion, weak texture, and uneven illumination due to light absorption and scattering in water. These issues result in unstable feature detection and inaccurate image registration. To address these challenges, this paper proposes an underwater image stitching method that integrates ORB (Oriented FAST and Rotated BRIEF) feature extraction with a fixed-ratio constraint matching strategy. First, lightweight color and contrast enhancement techniques are employed to restore color balance and improve local texture visibility. Then, ORB descriptors are extracted and matched via a K-Nearest Neighbors (KNN) search, and Lowe’s ratio test is applied to eliminate false matches caused by weak texture similarity. Finally, the geometric transformation between image frames is estimated by incorporating robust optimization, ensuring stable homography computation. Experimental results on real underwater datasets show that the proposed method significantly improves stitching continuity and structural consistency, achieving 40–120% improvements in SSIM (Structural Similarity Index) and PSNR (peak signal-to-noise ratio) over conventional Harris–ORB + KNN, SIFT (scale-invariant feature transform) + BF (brute force), SIFT + KNN, and AKAZE (accelerated KAZE) + BF methods while maintaining processing times within one second. These results indicate that the proposed method is well suited for real-time underwater environment perception and panoramic mapping on low-cost, micro-sized underwater robotic platforms.
(This article belongs to the Section Ocean Engineering)
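The ORB–KNN–ratio-test pipeline maps almost directly onto OpenCV primitives. A minimal sketch (the 0.75 ratio is Lowe's conventional value rather than necessarily the paper's fixed ratio, and the enhancement preprocessing is omitted):

```python
import cv2
import numpy as np

def match_and_register(img1, img2, ratio=0.75):
    """ORB features + KNN matching + Lowe's ratio test, then a robust
    RANSAC homography. Inputs: grayscale uint8 images."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)        # crossCheck off for knnMatch
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]      # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # needs >= 4 matches
    return H
```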
15 pages, 2430 KB  
Article
Improved Detection of Small (<2 cm) Hepatocellular Carcinoma via Deep Learning-Based Synthetic CT Hepatic Arteriography: A Multi-Center External Validation Study
by Jung Won Kwak, Sung Bum Cho, Ki Choon Sim, Jeong Woo Kim, In Young Choi and Yongwon Cho
Diagnostics 2026, 16(2), 343; https://doi.org/10.3390/diagnostics16020343 - 21 Jan 2026
Abstract
Background/Objectives: Early detection of hepatocellular carcinoma (HCC), particularly small lesions (<2 cm), which is crucial for curative treatment, remains challenging with conventional liver dynamic computed tomography (LDCT). We aimed to develop a deep learning algorithm to generate synthetic CT during hepatic arteriography (CTHA) from non-invasive LDCT and evaluate its lesion detection performance. Methods: A cycle-consistent generative adversarial network with an attention module [Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization (U-GAT-IT)] was trained using paired LDCT and CTHA images from 277 patients. The model was validated using internal (68 patients, 139 lesions) and external sets from two independent centers (87 patients, 117 lesions). Two radiologists assessed detection performance using a 5-point scale and the detection rate. Results: Synthetic CTHA significantly improved the detection of sub-centimeter (<1 cm) HCCs compared with LDCT in the internal set (69.6% vs. 47.8%, p < 0.05). This improvement was robust in the external set; synthetic CTHA detected a greater number of small lesions than LDCT. Quantitative metrics (structural similarity index measure and peak signal-to-noise ratio) indicated high structural fidelity. Conclusions: Deep-learning–based synthetic CTHA significantly enhanced the detection of small HCCs compared with standard LDCT, offering a non-invasive alternative with high detection sensitivity, which was validated across multicentric data.
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
22 pages, 2341 KB  
Article
Acquisition Performance Analysis of Communication and Ranging Signals in Space-Based Gravitational Wave Detection
by Hongling Ling, Zhaoxiang Yi, Haoran Wu and Kai Luo
Technologies 2026, 14(1), 73; https://doi.org/10.3390/technologies14010073 - 21 Jan 2026
Abstract
Space-based gravitational wave detection relies on laser interferometry to measure picometer-level displacements over 10⁵–10⁶ km baselines. To integrate ranging and communication within the same optical link without degrading the primary scientific measurement, a low modulation index of 0.1 rad is required, resulting in extremely weak signals and challenging acquisition conditions. This study developed mathematical models for signal acquisition, identifying and analyzing key performance-limiting factors for both Binary Phase Shift Keying (BPSK) and Binary Offset Carrier (BOC) schemes. These factors include the spreading factor, acquisition step, modulation index, and carrier-to-noise ratio (CNR). In particular, the acquisition threshold can be directly calculated from these parameters and applied to the acquisition process of communication and ranging signals. Numerical simulations and evaluations, conducted with TianQin mission parameters, demonstrate that, for a data rate of 62.5 kbps and modulation indices of 0.081 rad (BPSK) or 0.036 rad (BOC), respectively, acquisition (probability ≈ 1) is achieved when the CNR is ≥104 dB·Hz under a false alarm rate of 10⁻⁶. These results provide critical theoretical support and practical guidance for optimizing the inter-satellite communication and ranging system design for space-based gravitational wave detection missions.
(This article belongs to the Section Information and Communication Technologies)
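The cost of a low modulation index is easy to quantify: for phase modulation with index m, the two first-order sidebands carry roughly m²/2 of the optical power, which is why the data and ranging signals are so weak. A back-of-envelope check via the Bessel-function expansion (this calculation is mine, not the paper's):

```python
from scipy.special import j0, j1

def sideband_power_fraction(m_rad):
    """Fraction of carrier power moved into the two first-order phase-
    modulation sidebands (Bessel expansion; ~m**2 / 2 for small m)."""
    total = j0(m_rad) ** 2 + 2 * j1(m_rad) ** 2
    return 2 * j1(m_rad) ** 2 / total

for m in (0.1, 0.081, 0.036):
    print(f"m = {m:.3f} rad -> sidebands carry {sideband_power_fraction(m):.3%}")
```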
21 pages, 6017 KB  
Article
A New Ship Trajectory Clustering Method Based on PSO-DBSCAN
by Zhengchuan Qin and Tian Chai
J. Mar. Sci. Eng. 2026, 14(2), 214; https://doi.org/10.3390/jmse14020214 - 20 Jan 2026
Abstract
With the rapid growth of vessel traffic and the widespread adoption of the Automatic Identification System (AIS) in recent years, analyzing maritime traffic flow characteristics has become an essential component of modern maritime supervision. Clustering analysis is one of the primary data-mining approaches used to extract traffic patterns from AIS data. To address the challenge of assigning appropriate weights to the multidimensional features in AIS trajectories, namely latitude and longitude, speed over ground (SOG), and course over ground (COG), this study introduces an adaptive parameter optimization mechanism based on evolutionary algorithms. Specifically, Particle Swarm Optimization (PSO), a representative swarm intelligence algorithm, is employed to automatically search for the optimal feature-distance weights and the core parameters of Density-Based Spatial Clustering of Applications with Noise (DBSCAN), enabling dynamic adjustment of clustering thresholds and global optimization of model performance. By designing a comprehensive clustering evaluation index as the objective function, the proposed method achieves optimal parameter allocation in a multidimensional similarity space, thereby uncovering maritime traffic clusters that may be overlooked when relying on single-dimensional features. The method is validated using AIS trajectory data from the Xiamen Port area, where 15 traffic clusters were successfully identified. Comparative experiments with two other clustering algorithms demonstrate the superior performance of the proposed approach in trajectory pattern analysis, providing a valuable reference for maritime regulatory and traffic management applications.
(This article belongs to the Section Ocean Engineering)
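The PSO-over-DBSCAN idea can be sketched compactly: each particle encodes eps, min_samples, and the four feature weights, and its fitness is a clustering quality score. In the sketch below the silhouette score stands in for the paper's composite evaluation index, all bounds and PSO coefficients are illustrative, and X is assumed to be an (N, 4) array of standardized latitude, longitude, SOG, and COG:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def fitness(params, X):
    """Quality of one particle: weight the standardized features, run DBSCAN,
    score non-noise points with the silhouette index (a stand-in for the
    paper's composite evaluation index)."""
    eps, min_samples, *w = params
    Xw = X * np.asarray(w)                     # feature-distance weights
    labels = DBSCAN(eps=eps, min_samples=int(round(min_samples))).fit_predict(Xw)
    mask = labels != -1                        # drop noise points
    if mask.sum() < 10 or len(set(labels[mask])) < 2:
        return -1.0                            # degenerate clustering
    return silhouette_score(Xw[mask], labels[mask])

def pso_dbscan(X, n_particles=20, iters=30,
               lo=(0.05, 3, 0.1, 0.1, 0.1, 0.1), hi=(2.0, 30, 1.0, 1.0, 1.0, 1.0)):
    """Particle swarm over (eps, min_samples, w_lat, w_lon, w_sog, w_cog)."""
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pos = rng.uniform(lo, hi, (n_particles, lo.size))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pscore = np.array([fitness(p, X) for p in pos])
    for _ in range(iters):
        g = pbest[pscore.argmax()]             # global best particle
        vel = (0.7 * vel
               + 1.5 * rng.random(pos.shape) * (pbest - pos)
               + 1.5 * rng.random(pos.shape) * (g - pos))
        pos = np.clip(pos + vel, lo, hi)
        score = np.array([fitness(p, X) for p in pos])
        better = score > pscore
        pbest[better], pscore[better] = pos[better], score[better]
    return pbest[pscore.argmax()], pscore.max()
```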
18 pages, 10969 KB  
Article
Simulation Data-Based Dual Domain Network (Sim-DDNet) for Motion Artifact Reduction in MR Images
by Seong-Hyeon Kang, Jun-Young Chung, Youngjin Lee and for The Alzheimer’s Disease Neuroimaging Initiative
Magnetochemistry 2026, 12(1), 14; https://doi.org/10.3390/magnetochemistry12010014 - 20 Jan 2026
Abstract
Brain magnetic resonance imaging (MRI) is highly susceptible to motion artifacts that degrade fine structural details and undermine quantitative analysis. Conventional U-Net-based deep learning approaches for motion artifact reduction typically operate only in the image domain and are often trained on data with simplified motion patterns, thereby limiting physical plausibility and generalization. We propose Sim-DDNet, a simulation-data-based dual-domain network that combines k-space-based motion simulation with a joint image–k-space reconstruction architecture. Motion-corrupted data were generated from T2-weighted Alzheimer’s Disease Neuroimaging Initiative brain MR scans using a k-space replacement scheme with three to five random rotational and translational events per volume, yielding 69,283 paired samples (49,852/6,969/12,462 for training/validation/testing). Sim-DDNet integrates a real-valued U-Net-like image branch and a complex-valued k-space branch using cross attention, FiLM-based feature modulation, soft data consistency, and a composite loss comprising L1, structural similarity index measure (SSIM), perceptual, and k-space-weighted terms. On the independent test set, Sim-DDNet achieved a peak signal-to-noise ratio of 31.05 dB, an SSIM of 0.85, and a gradient magnitude similarity deviation of 0.077, consistently outperforming U-Net and U-Net++ across all three metrics while producing less blurring, fewer residual ghost/streak artifacts, and reduced hallucination of non-existent structures. These results indicate that dual-domain, data-consistency-aware learning, which explicitly exploits k-space information, is a promising approach for physically plausible motion artifact correction in brain MRI.
(This article belongs to the Special Issue Magnetic Resonances: Current Applications and Future Perspectives)
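The k-space replacement scheme for synthesizing motion-corrupted training data can be illustrated with plain NumPy FFTs. A toy, translation-only sketch (the paper also injects rotations, and its event counts and band placement differ):

```python
import numpy as np

def simulate_motion(img, n_events=3, max_shift=4, band=8, seed=0):
    """Toy k-space replacement motion simulation: for each motion event,
    re-acquire a random band of phase-encode lines from a translated copy
    of the image (rotational events omitted for brevity)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(img))
    for _ in range(n_events):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        moved = np.roll(img, (dy, dx), axis=(0, 1))          # rigid translation
        k_moved = np.fft.fftshift(np.fft.fft2(moved))
        start = rng.integers(0, img.shape[0] - band)
        k[start:start + band, :] = k_moved[start:start + band, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```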
21 pages, 3501 KB  
Article
Subsurface Fracture Mapping in Adhesive Interfaces Using Terahertz Spectroscopy
by Mahavir Singh, Sushrut Karmarkar, Marco Herbsommer, Seongmin Yoon and Vikas Tomar
Materials 2026, 19(2), 388; https://doi.org/10.3390/ma19020388 - 18 Jan 2026
Abstract
Adhesive fracture in layered structures is governed by subsurface crack evolution that cannot be accessed using surface-based diagnostics. Methods such as digital image correlation and optical spectroscopy measure surface deformation but implicitly assume a straight and uniform crack front, an assumption that becomes invalid for interfacial fracture with wide crack openings and asymmetric propagation. In this work, terahertz time-domain spectroscopy (THz-TDS) is combined with double-cantilever beam testing to directly map subsurface crack-front geometry in opaque adhesive joints. A strontium titanate-doped epoxy is used to enhance dielectric contrast. Multilayer refractive index extraction, pulse deconvolution, and diffusion-based image enhancement are employed to separate overlapping terahertz echoes and reconstruct two-dimensional delay maps of interfacial separation. The measured crack geometry is coupled with load–displacement data and augmented beam theory to compute spatially averaged stresses and energy release rates. The measurements resolve crack openings down to approximately 100 μm and reveal pronounced width-wise non-uniform crack advance and crack-front curvature during stable growth. These observations demonstrate that surface-based crack-length measurements can either underpredict or overpredict fracture toughness depending on the measurement location. Fracture toughness values derived from width-averaged subsurface crack fronts agree with J-integral estimates obtained from surface digital image correlation. Signal-to-noise limitations near the crack tip define the primary resolution limit. The results establish THz-TDS as a quantitative tool for subsurface fracture mechanics and provide a framework for physically representative toughness measurements in layered and bonded structures.
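Separating closely spaced echoes by deconvolving against a reference pulse is a standard THz-TDS step. A minimal frequency-domain Wiener sketch (the regularization constant is illustrative; the paper's multilayer extraction and diffusion-based enhancement are not reproduced):

```python
import numpy as np

def wiener_deconvolve(measured, reference, snr=100.0):
    """Wiener deconvolution of a measured THz time trace by a reference
    pulse; the regularized inverse filter bounds noise amplification."""
    M = np.fft.rfft(measured)
    R = np.fft.rfft(reference, n=len(measured))
    H = np.conj(R) / (np.abs(R) ** 2 + 1.0 / snr)   # regularized inverse
    return np.fft.irfft(M * H, n=len(measured))
```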
13 pages, 4563 KB  
Article
Balancing Radiation Dose and Image Quality: Protocol Optimization for Mobile Head CT in Neurointensive Care Unit Patients
by Damian Mialkowskyj, Robert Stahl, Suzette Heck, Konstantinos Dimitriadis, Thomas David Fischer, Thomas Liebig, Christoph G. Trumm, Tim Wesemann and Robert Forbrig
Diagnostics 2026, 16(2), 256; https://doi.org/10.3390/diagnostics16020256 - 13 Jan 2026
Abstract
Objective: Mobile head CT enables bedside neuroimaging in critically ill patients, reducing risks associated with intrahospital transport. Despite increasing clinical use, evidence on dose optimization for mobile CT systems remains limited. This study evaluated whether an optimized CT protocol can reduce radiation exposure without compromising diagnostic image quality in neurointensive care unit patients. Methods: In this retrospective single-center study, twenty-two non-contrast head CT examinations were acquired with a second-generation mobile CT scanner between March and May 2023. Patients underwent either a default (group A, n = 14; volumetric computed tomography dose index (CTDIvol) 44.1 mGy) or low-dose CT protocol (group B, n = 8; CTDIvol 32.1 mGy). Regarding dosimetry analysis, we recorded dose length product (DLP) and effective dose (ED). Quantitative image quality was assessed by manually placing ROIs at the basal ganglia and cerebellar levels to determine signal, noise, signal-to-noise ratio, and contrast-to-noise ratio. Two neuroradiologists independently rated qualitative image quality using a four-point Likert scale. Statistical comparisons were performed using a significance threshold of 0.05. Results: Median DLP and ED were significantly lower for group B (592 mGy·cm, 1.12 mSv) than for group A (826 mGy·cm, 1.57 mSv; each p < 0.0001). Quantitative image quality parameters did not differ significantly between groups (p > 0.05). Qualitative image quality was rated excellent (median score 4). Conclusions: The optimized mobile head CT protocol achieved a 28.7% reduction in radiation exposure while maintaining high diagnostic image quality. These findings support the adoption of low-dose strategies in mobile CT imaging in line with established radiation protection standards.
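For orientation, effective dose in CT reporting is conventionally obtained as ED = DLP × k with a region-specific conversion coefficient. Back-solving the abstract's numbers gives k ≈ 0.0019 mSv/(mGy·cm); this coefficient is inferred here, not stated in the paper:

```python
def effective_dose(dlp_mgy_cm: float, k: float = 0.0019) -> float:
    """ED = DLP * k. The k value is back-solved from the reported figures
    (1.57 mSv / 826 mGy*cm), so treat it as illustrative."""
    return dlp_mgy_cm * k

print(f"{effective_dose(826):.2f} mSv")  # group A default protocol -> ~1.57
print(f"{effective_dose(592):.2f} mSv")  # group B low-dose protocol -> ~1.12
```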
14 pages, 1825 KB  
Article
CycleGAN-Based Translation of Digital Camera Images into Confocal-like Representations for Paper Fiber Imaging: Quantitative and Grad-CAM Analysis
by Naoki Kamiya, Kosuke Ashino, Yuto Hosokawa and Koji Shibazaki
Appl. Sci. 2026, 16(2), 814; https://doi.org/10.3390/app16020814 - 13 Jan 2026
Abstract
The structural analysis of paper fibers is vital for the noninvasive classification and conservation of traditional handmade paper in cultural heritage. Although digital still cameras (DSCs) offer a low-cost and noninvasive imaging solution, their inferior image quality compared to white-light confocal microscopy (WCM) limits their effectiveness in fiber classification. To address this modality gap, we propose an unpaired image-to-image translation approach using cycle-consistent adversarial networks (CycleGANs). Our study targets a multifiber setting involving kozo, mitsumata, and gampi, using publicly available domain-specific datasets. Generated WCM-style images were quantitatively evaluated using peak signal-to-noise ratio, structural similarity index measure, mean absolute error, and Fréchet inception distance, achieving 8.24 dB, 0.28, 172.50, and 197.39, respectively. Classification performance was tested using EfficientNet-B0 and Inception-ResNet-v2, with F1-scores reaching 94.66% and 98.61%, respectively, approaching the performance of real WCM images (99.50% and 98.86%) and surpassing previous results obtained directly from DSC inputs (80.76% and 84.19%). Furthermore, Grad-CAM visualization confirmed that the translated images retained class-discriminative features aligned with those of the actual WCM inputs. Thus, the proposed CycleGAN-based image conversion effectively bridges the modality gap, enabling DSC images to approximate WCM characteristics and support high-accuracy paper fiber classification, offering a practical alternative for noninvasive material analysis.
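The core of the DSC-to-WCM translation is CycleGAN's cycle-consistency penalty. A minimal PyTorch sketch of that term alone, with placeholder generator modules G (DSC to WCM-like) and F_net (WCM to DSC-like); λ = 10 is the original CycleGAN default, and the adversarial and identity losses are omitted:

```python
import torch.nn.functional as F

def cycle_consistency_loss(G, F_net, real_dsc, real_wcm, lam=10.0):
    """CycleGAN round-trip term: translate, translate back, and penalize
    the L1 reconstruction error in both directions."""
    fake_wcm = G(real_dsc)          # DSC -> confocal-like
    fake_dsc = F_net(real_wcm)      # WCM -> camera-like
    cyc = F.l1_loss(F_net(fake_wcm), real_dsc) + F.l1_loss(G(fake_dsc), real_wcm)
    return lam * cyc
```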
22 pages, 12767 KB  
Article
Data-Driven Trail Management Through Climate Refuge-Based Comfort Index for a More Sustainable Mobility in Protected Natural Areas
by Carmen García-Barceló, Adriana Morejón, Francisco J. Martínez, David Tomás and Jose-Norberto Mazón
Information 2026, 17(1), 79; https://doi.org/10.3390/info17010079 - 13 Jan 2026
Abstract
In this paper, we propose a data-driven decision-support approach for conceptual trail planning and management in protected natural areas, where environmental awareness (particularly climatic comfort and noise levels) is critical to ensuring sustainable and enjoyable visitor mobility. Our case study is the Natural Park of La Mata and Torrevieja in Spain. The paper begins by identifying climate refuges in this park (areas offering shelter from heat and other adverse conditions based on meteorological data). We extend this with a novel comfort indicator that incorporates ambient noise levels, using acoustic data from sensors. A key challenge is the integration of heterogeneous data sources (climatic data and noise data from the park’s digital twin infrastructure). To demonstrate the potential of this approach for trail planning, we implement an A* pathfinding algorithm to explore comfort-oriented routing alternatives, guided by our combined climate-noise comfort index. The algorithm is applied to trail management in the Natural Park of La Mata and Torrevieja, enabling the identification of indicative high-comfort routes that can inform future trail design and management decisions, while accounting for ecological constraints and visitor well-being. Results show that the proposed comfort-aware routing improves average environmental comfort by 66.3% with only an additional 344 m of walking distance. Finally, this work constitutes a first step toward a data space use case, showcasing interoperable, AI-ready environmental data usage and aligning with the European Green Deal.
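Comfort-aware routing of this kind reduces to A* with an edge cost that blends distance and discomfort. A self-contained grid sketch (the uniform grid and the cost blend are illustrative; the paper routes over actual trail segments using its climate-noise index):

```python
import heapq
import itertools

def astar_comfort(grid_cost, start, goal):
    """A* over a grid; grid_cost[y][x] >= 0 is a per-cell discomfort penalty
    (low = comfortable). Manhattan distance stays admissible because every
    step costs at least 1."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    rows, cols = len(grid_cost), len(grid_cost[0])
    tie = itertools.count()                    # safe heap tie-breaker
    frontier = [(h(start), next(tie), start)]
    parent, g = {start: None}, {start: 0.0}
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if node == goal:                       # reconstruct the path
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        y, x = node
        for nb in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols:
                ng = g[node] + 1.0 + grid_cost[nb[0]][nb[1]]  # step + discomfort
                if ng < g.get(nb, float("inf")):
                    g[nb], parent[nb] = ng, node
                    heapq.heappush(frontier, (ng + h(nb), next(tie), nb))
    return None
```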
30 pages, 10813 KB  
Article
A Filter Method for Vehicle-Based Moving LiDAR Point Cloud Data for Removing IRI-Insensitive Components of Longitudinal Profile
by Guoqing Zhou, Hanwen Gao, Yufu Cai, Jiahao Guo and Xuesong Zhao
Remote Sens. 2026, 18(2), 240; https://doi.org/10.3390/rs18020240 - 12 Jan 2026
Abstract
The International Roughness Index (IRI) is calculated from elevation profiles acquired by high-speed profilers or laser scanners, but these raw data often contain measurement noise and extraneous wavelength components that can degrade the accuracy of IRI calculations. Existing filtering methods are limited in their ability to remove IRI-insensitive wavelength components. Thus, this paper proposes a Gaussian filtering algorithm based on the Nyquist sampling theorem to remove IRI-insensitive components of the longitudinal profile. The proposed approach first adaptively determines Gaussian template lengths according to sampling intervals, and then incorporates a boundary padding strategy to ensure processing stability. The proposed method enables precise wavelength selection within the IRI-sensitive band of 1.3–29.4 m while maintaining computational efficiency. The method was validated using the Paris–Lille dataset and the U.S. Long-Term Pavement Performance (LTPP) program dataset. The filtered profiles were evaluated by Power Spectral Density (PSD), and IRI values were calculated and compared with those obtained by conventional profile filtering methods. The results show that the proposed method is effective in removing the IRI-insensitive components and obtaining highly accurate IRI values. Compared with the standard IRI provided by the LTPP dataset, the mean absolute error of the IRI values from the proposed method is 0.051 m/km, and the mean relative error is less than 4%. These findings indicate that the proposed method improves the reliability of IRI calculation.
(This article belongs to the Section Urban Remote Sensing)
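The wavelength selection can be prototyped as a difference of Gaussian smooths over the profile. A sketch assuming a uniformly sampled profile and a simple σ = λ/(2π) cutoff convention (not the paper's adaptive template-length rule; SciPy's default 'reflect' mode stands in for the boundary padding strategy):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def iri_bandpass(profile, dx, lam_short=1.3, lam_long=29.4):
    """Keep the IRI-sensitive 1.3-29.4 m wavelength band of a longitudinal
    profile sampled every dx meters, as a difference of Gaussian smooths.
    sigma = lambda / (2*pi) gives ~61% transmission at the nominal cutoff."""
    keep_above_short = gaussian_filter1d(profile, sigma=lam_short / (2 * np.pi) / dx)
    keep_above_long = gaussian_filter1d(profile, sigma=lam_long / (2 * np.pi) / dx)
    return keep_above_short - keep_above_long
```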
35 pages, 7433 KB  
Article
Post-Fire Forest Pulse Recovery: Superiority of Generalized Additive Models (GAM) in Long-Term Landsat Time-Series Analysis
by Nima Arij, Shirin Malihi and Abbas Kiani
Sensors 2026, 26(2), 493; https://doi.org/10.3390/s26020493 - 12 Jan 2026
Abstract
Wildfires are increasing globally and pose major challenges for assessing post-fire vegetation recovery and ecosystem resilience. We analyzed long-term Landsat time series in two contrasting fire-prone ecosystems in the United States and Australia. Vegetation area was extracted using the Enhanced Vegetation Index (EVI) with Otsu thresholding. Recovery to pre-fire baseline levels was modeled using linear, logistic, locally estimated scatterplot smoothing (LOESS), and generalized additive models (GAM), and their performance was compared using multiple metrics. The results indicated rapid recovery of Australian forests to baseline levels, whereas this was not the case for forests in the United States. Among climatic factors, temperature was the dominant parameter in Australia (Spearman ρ = 0.513, p < 10⁻⁸), while no climatic variable significantly influenced recovery in California. Methodologically, GAM consistently performed best in both regions due to its success in capturing multiphase and heterogeneous recovery patterns, yielding the lowest values of AIC (United States: 142.89; Australia: 46.70) and RMSE_cv (United States: 112.86; Australia: 2.26). Linear and logistic models failed to capture complex recovery dynamics, whereas LOESS was highly sensitive to noise and unstable for long-term prediction. These findings indicate that post-fire recovery is inherently nonlinear and ecosystem-specific and that simple models are insufficient for accurate estimation, with GAM emerging as an appropriate method for assessing vegetation recovery using remote sensing data. This study provides a transferable approach using remote sensing and GAM to monitor forest resilience under accelerating global fire regimes.
(This article belongs to the Section Environmental Sensing)
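Fitting a GAM to a recovery series takes a few lines with the pygam package. A sketch on synthetic data (the series, spline count, and package choice are assumptions, not the authors'):

```python
import numpy as np
from pygam import LinearGAM, s

# Hypothetical recovery series: years since fire vs. vegetated area (km^2).
rng = np.random.default_rng(1)
t = np.arange(0, 20, 0.25).reshape(-1, 1)
area = 50 * (1 - np.exp(-0.4 * t.ravel())) + rng.normal(0, 2, t.shape[0])

# One smooth term over time; gridsearch tunes the smoothing penalty.
gam = LinearGAM(s(0, n_splines=15)).gridsearch(t, area)
trend = gam.predict(t)
```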
30 pages, 1341 KB  
Article
A Novel MBPSO–BDGWO Ensemble Feature Selection Method for High-Dimensional Classification Data
by Nuriye Sancar
Informatics 2026, 13(1), 7; https://doi.org/10.3390/informatics13010007 - 12 Jan 2026
Abstract
In a high-dimensional classification dataset, feature selection is crucial for improving classification performance and computational efficiency by identifying an informative subset of features while reducing noise, redundancy, and overfitting. This study proposes a novel metaheuristic-based ensemble feature selection approach by combining the complementary strengths of Modified Binary Particle Swarm Optimization (MBPSO) and Binary Dynamic Grey Wolf Optimization (BDGWO). The proposed MBPSO–BDGWO ensemble method is specifically designed for high-dimensional classification problems. The performance of the proposed MBPSO–BDGWO ensemble method was rigorously evaluated through an extensive simulation study under multiple high-dimensional scenarios with varying correlation structures. The ensemble method was further validated on several real datasets. Comparative analyses were conducted against single-stage feature selection methods, including BPSO, BGWO, MBPSO, and BDGWO, using evaluation metrics such as accuracy, the F1-score, the true positive rate (TPR), the false positive rate (FPR), the AUC, precision, and the Jaccard stability index. Simulation studies conducted under various dimensionality and correlation scenarios show that the proposed ensemble method achieves a low FPR, a high TPR/Precision/F1/AUC, and strong selection stability, clearly outperforming both classical and advanced single-stage methods, even as dimensionality and collinearity increase. In contrast, single-stage methods typically experience substantial performance degradation in high-correlation and high-dimensional settings, particularly BPSO and BGWO. Moreover, on the real datasets, the ensemble method outperformed all compared single-stage methods and produced consistently low MAD values across repetitions, indicating robustness and stability even in ultra-high-dimensional genomic datasets. Overall, the findings indicate that the proposed ensemble method demonstrates consistent performance across the evaluated scenarios and achieves higher selection stability compared with the single-stage methods.
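The binary-PSO half of the ensemble can be sketched with the classic sigmoid transfer function that maps velocities to bit probabilities. This is the plain BPSO baseline that MBPSO modifies, not the MBPSO–BDGWO ensemble itself; score_fn would be, for example, cross-validated classifier accuracy:

```python
import numpy as np

def binary_pso_fs(X, y, score_fn, n_particles=20, iters=40, seed=0):
    """Baseline binary PSO feature selection: each particle is a bit mask
    over features; a sigmoid of the velocity gives per-bit probabilities."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = rng.integers(0, 2, (n_particles, d)).astype(bool)
    vel = np.zeros((n_particles, d))
    pbest = pos.copy()
    pfit = np.array([score_fn(X[:, p], y) if p.any() else -np.inf for p in pos])
    for _ in range(iters):
        gbest = pbest[pfit.argmax()]
        vel = (0.7 * vel
               + 1.5 * rng.random((n_particles, d)) * (pbest.astype(float) - pos)
               + 1.5 * rng.random((n_particles, d)) * (gbest.astype(float) - pos))
        pos = rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))  # sigmoid
        fit = np.array([score_fn(X[:, p], y) if p.any() else -np.inf for p in pos])
        upd = fit > pfit
        pbest[upd], pfit[upd] = pos[upd], fit[upd]
    return pbest[pfit.argmax()], pfit.max()
```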