Search Results (5,010)

Search Parameters:
Keywords = image matching

25 pages, 3443 KB  
Article
Improved Parameter-Driven Automated Three-Class Segmentation for Concrete CT: A Reproducible Pipeline for Large-Scale Dataset Production
by Youxi Wang, Tianqi Zhang and Xinxiao Chen
Buildings 2026, 16(8), 1620; https://doi.org/10.3390/buildings16081620 - 20 Apr 2026
Abstract
The automated production of large-scale labeled datasets from concrete X-ray computed tomography (CT) images is a fundamental prerequisite for training and validating deep learning-based segmentation models. However, existing methods either require extensive manual annotation or rely on domain-specific deep learning models that themselves demand labeled data—a circular dependency. This paper presents a parameter-driven three-class segmentation framework that automatically classifies each pixel in a concrete CT slice into one of three material phases: void (air pores and cracks), coarse aggregate, and mortar matrix, generating annotation masks suitable for large-scale dataset production without manual labeling. The proposed method combines: (1) fixed-threshold void detection calibrated to concrete CT grayscale characteristics; (2) adaptive percentile-based initial segmentation responsive to image-specific statistics; (3) multi-criteria connected component scoring based on area, shape descriptors (circularity, solidity, compactness, extent, aspect ratio), intensity distribution, and boundary gradient; (4) material science-informed size constraints aligned with concrete phase volume fractions; and (5) a material continuity enforcement module that applies topological hole-filling and conditional convex-hull consolidation to eliminate internal contamination within accepted aggregate regions, reducing boundary roughness by 7.6% and recovering misclassified boundary pixels. All parameters are centralized in a configuration file, enabling reproducible batch processing of 224 × 224 pixel CT slices at 0.07–1.12 s per image. 
Evaluated on 1007 224 × 224 concrete CT patches cropped from 200 representative scan frames, the framework produces three-class segmentation masks with physically consistent void fractions (mean 3.2%), aggregate fractions (mean 32.4%), and mortar fractions (mean 64.4%), all within ranges reported in the concrete CT literature (used as a dataset-scale QC screen, not a validation metric). Primary outputs and the archived image–mask pairs for this work are provided as an 8-bit patch archive. For pixel-wise validation, we report IoU, Dice, and pixel accuracy on an independently labeled subset that can be unambiguously paired with the released predictions: averaged over 57 matched patches, mean pixel accuracy is 88.6%, macro-mean IoU is 74.7%, and macro-mean Dice is 84.9%. The framework provides a fully automated annotation pipeline for dataset production, eliminating manual labeling costs for concrete CT image collections. The generated datasets are suitable for training semantic segmentation networks such as U-Net and its variants. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
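To make the shape descriptors listed in this abstract concrete, here is a minimal sketch for one binary connected component. The descriptor names follow the abstract (circularity, extent, aspect ratio), but the formulas are common image-analysis conventions, not necessarily the paper's exact scoring; the perimeter estimate in particular is a crude boundary-pixel count.

```python
import numpy as np

def shape_scores(mask):
    """Simple shape descriptors for one binary connected component.

    Descriptor names follow the abstract; exact formulas here are
    common conventions, not necessarily the paper's scoring.
    """
    ys, xs = np.nonzero(mask)
    area = len(ys)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    extent = area / (h * w)                  # bounding-box fill ratio
    aspect_ratio = max(h, w) / min(h, w)
    # crude perimeter: pixels with at least one 4-neighbour outside
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    circularity = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return {"area": area, "extent": extent,
            "aspect_ratio": aspect_ratio, "circularity": circularity}

# A filled disc scores near 1 on circularity (digitization shifts it
# slightly); a thin crack would score far lower.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2
scores = shape_scores(disc)
```

Components scoring high on circularity/solidity and within the size constraints would be accepted as aggregate; elongated low-circularity components are crack candidates.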
15 pages, 3994 KB  
Article
Three-Dimensional Shape Measurement Using Speckle-Assisted Phase-Order Lines Without Phase Unwrapping
by Ziyou Zhang and Weipeng Yang
Sensors 2026, 26(8), 2534; https://doi.org/10.3390/s26082534 - 20 Apr 2026
Abstract
Achieving high-accuracy and high-speed 3D shape measurement remains a significant challenge. This paper presents a novel technique using phase-order lines (POLs), which eliminates the need for phase unwrapping in a binocular system. By combining phase-shifting for high resolution and speckle projection for robust features, our method extracts POLs directly from the wrapped phase. The speckle patterns are then used to establish robust POL correspondences between stereo images. These matched POLs serve as reliable seeds to guide dense, sub-pixel matching directly on the wrapped phase, thus bypassing the complex phase unwrapping process. This approach significantly reduces the number of required patterns. The experimental results demonstrate that our method achieves a root-mean-square (RMS) error of 0.058 mm using only five patterns, delivering accuracy comparable to a 12-pattern temporal phase unwrapping (TPU) method while being significantly faster. Full article
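The "wrapped phase" this method operates on comes from standard N-step phase-shifting. A textbook sketch (shown only to make the quantity concrete; the paper's contribution is extracting phase-order lines from it instead of unwrapping it):

```python
import numpy as np

def wrapped_phase(frames):
    """N-step phase-shifting: recover the wrapped phase in (-pi, pi].

    Standard least-squares formula for fringe patterns
    I_k = A + B*cos(phi + 2*pi*k/N).
    """
    n = len(frames)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return -np.arctan2(num, den)

# Synthetic 4-step fringe patterns over one spatial period.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
frames = [128 + 100 * np.cos(x + 2 * np.pi * k / 4) for k in range(4)]
phi = wrapped_phase(frames)
```

The returned phase is only known modulo 2π, which is exactly the ambiguity that phase unwrapping (or, here, speckle-guided POL matching) must resolve.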
23 pages, 868 KB  
Article
Radiomic Features of MRI Subcompartments Associate with Angiogenic and Inflammatory Transcriptomic Programs in Glioblastoma: An IvyGAP Exploratory Analysis
by Daniele Piccolo and Marco Vindigni
Cancers 2026, 18(8), 1293; https://doi.org/10.3390/cancers18081293 - 19 Apr 2026
Abstract
Background: Glioblastoma exhibits profound intratumoral heterogeneity, with anatomically distinct tumor zones characterized by divergent molecular programs that drive therapy resistance. Whether magnetic resonance imaging (MRI)-derived radiomic features can capture these regional transcriptomic differences remains unknown. We aimed to determine whether subcompartment-level radiomic features associate with transcriptomic pathway enrichment scores derived from biologically approximate tumor zones. Methods: We matched 28 patients (mean age 58.5 years; 13/28 MGMT methylated) across the IvyGAP RNA-seq atlas and the IVYGAP-RADIOMICS datasets. Single-sample GSEA (ssGSEA) pathway scores were computed for 24 gene sets. Radiomic features (3920 per subcompartment) were reduced to 597. Nested leave-one-patient-out cross-validation (LOPO-CV) with Elastic Net served as the primary predictive analysis; linear mixed-effects models (LMM) provided exploratory associational analysis. Analyses used a biologically motivated but spatially non-co-registered zone-to-subcompartment mapping; all reported associations are zone-approximate. Results: Twenty-one of 24 pathways showed no predictive signal (R2cv ≤ 0). Inflammatory Response (R2cv = 0.185, 95% CI [0.071, 0.355], p = 0.008) was the only pathway supported by both the nested CV (FDR = 0.096) and the exploratory LMM (FDR = 0.024, ΔR2 = 0.214 beyond subcompartment effects) analyses; the LMM association was robust to clinical covariate adjustment (likelihood ratio test p = 0.004). Angiogenesis (R2cv = 0.209, 95% CI [0.028, 0.353], p = 0.006) reached nested CV significance (FDR = 0.096) but was not corroborated by the LMM (FDR = 0.445); it is therefore reported as a tentative single-framework signal requiring independent validation. T2-derived texture features were selected in 100% of folds for both pathways. 
Conclusions: Inflammatory Response is the only pathway supported by both analytical frameworks; Angiogenesis is a tentative nested-CV-only signal pending independent validation. The absence of signal for 21 of 24 pathways should not be interpreted as evidence of biological inaccessibility: at N = 28 (vs. N ≈ 240 required by Riley criteria), severe underpowering, attenuation from the non-spatial zone-to-subcompartment mapping, and methodological constraints each independently suffice to suppress real associations. Five of the 24 gene sets (the IvyGAP zone modules) are non-independent from the outcome data and cannot be interpreted as discovery. All reported associations are zone-approximate and may partly reflect macro-compartment (between-subcompartment) effects; validation in larger cohorts with spatially precise co-registration is essential. Full article
(This article belongs to the Section Molecular Cancer Biology)
23 pages, 5622 KB  
Article
Principal Component-Based Spectral Standardization for Optical Spectrometers
by Qiguang Yang, Xu Liu, Wan Wu, Rajendra Bhatt, Yolanda Shea, Xiaozhen Xiong, Ming Zhao, Paul Smith, Greg Kopp and Peter Pilewskie
Remote Sens. 2026, 18(8), 1209; https://doi.org/10.3390/rs18081209 - 17 Apr 2026
Abstract
A Principal Component-Based Spectral Standardization (PCSS) method was developed to standardize hyperspectral radiance spectra onto a fixed wavelength grid. This enables the direct comparison of radiance or reflectance spectra across different spatial pixels of an imaging spectrometer or between different instruments. The method was validated using simulated Climate Absolute Radiance and Refractivity Observatory (CLARREO) Pathfinder (CPF) spectra. The PCSS approach demonstrated high accuracy: the average root-mean-square uncertainty across all CPF channels remained below 0.07%, with maximum individual-channel uncertainties under 1%. Compared to methods based on spectral interpolation, PCSS produced significantly lower biases with tighter error distributions, particularly in spectrally rich regions. Measured Hyper Spectral Imager for Climate Science (HySICS) balloon data provided further validation. PCSS successfully estimated wavelength shifts that closely matched measured data, even when utilizing approximated Jacobians, demonstrating the method’s robustness. Because it relies on a pre-computed lookup table for model parameters, PCSS bypasses the need for intensive radiative transfer calculations, making it highly computationally efficient. Beyond CPF, this method can easily be adapted for other hyperspectral sensors by substituting their respective wavelength grids and instrument line shape functions, offering a powerful tool to improve cross-calibration between different satellite sensors. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
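As a rough illustration of the principal-component standardization idea with assumed toy spectra (the actual PCSS method also folds in instrument line shapes, Jacobians, and a precomputed lookup table): fit principal components on the fixed wavelength grid, estimate the coefficients from a spectrum sampled on a shifted instrument grid, then reconstruct on the fixed grid.

```python
import numpy as np

# Toy spectral library on the fixed grid (illustrative sine spectra).
grid = np.linspace(400.0, 700.0, 301)                 # fixed grid (nm)
train = np.array([np.sin(grid / (20.0 + 5.0 * i)) for i in range(8)])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
pcs = Vt[:7]                                          # leading components

inst_grid = grid[1:-1] + 0.6                          # shifted instrument grid
measured = np.sin(inst_grid / 25.0)                   # what the sensor sees

# Evaluate the basis on the instrument grid, solve for coefficients,
# and reconstruct the spectrum on the fixed grid.
mean_i = np.interp(inst_grid, grid, mean)
pcs_i = np.array([np.interp(inst_grid, grid, p) for p in pcs])
coef, *_ = np.linalg.lstsq(pcs_i.T, measured - mean_i, rcond=None)
standardized = mean + coef @ pcs
```

Because the coefficients are estimated in the PC basis rather than by direct interpolation of the measured spectrum, spectrally sharp structure is preserved as long as the training library spans it.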
23 pages, 4380 KB  
Article
Vision-Based Measurement of Breathing Deformation in Wind Turbine Blade Fatigue Test
by Xianlong Wei, Cailin Li, Zhiyong Wang, Zhao Hai, Jinghua Wang and Leian Zhang
J. Imaging 2026, 12(4), 174; https://doi.org/10.3390/jimaging12040174 - 17 Apr 2026
Abstract
Wind turbine blades are subjected to complex environmental conditions during long-term operation, which may lead to structural degradation and performance loss. To ensure structural integrity, fatigue testing prior to deployment is essential. This paper proposes a vision-based method for measuring the full-cycle breathing deformation of wind turbine blades during fatigue testing. The method captures dynamic image sequences of the blade’s hotspot cross-section using industrial cameras and employs a feature-based template matching approach to reconstruct the three-dimensional coordinates of target points. Through coordinate transformation, the deformation trajectories are obtained, enabling quantitative analysis of the blade’s dynamic responses in both flapwise and edgewise directions. A dedicated hardware–software system was developed and validated through full-scale fatigue experiments. Quantitative comparison with strain gage measurements shows that the proposed method achieves mean absolute deviations of 0.84 mm and 0.93 mm in two independent experiments, respectively, with closely matched deformation trends under typical loading conditions. These results demonstrate that the proposed method can reliably capture the global deformation behavior of the blade with millimeter-level accuracy, while significantly reducing instrumentation complexity compared to conventional contact-based approaches. The proposed method provides an effective and practical solution for full-field dynamic deformation measurement in blade fatigue testing, offering strong potential for structural health monitoring and early damage detection in wind turbine systems. Full article
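A generic baseline for the template-matching step described above is brute-force normalized cross-correlation; this sketch is illustrative only (the paper's pipeline uses feature-based matching plus stereo triangulation on top of the matched targets):

```python
import numpy as np

def ncc_match(image, template):
    """Locate a template in an image by normalized cross-correlation.

    Brute-force scan; returns the top-left position of the best match
    and its NCC score in [-1, 1].
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    for yy in range(image.shape[0] - th + 1):
        for xx in range(image.shape[1] - tw + 1):
            win = image[yy:yy + th, xx:xx + tw]
            w = win - win.mean()
            score = (w * t).sum() / (np.linalg.norm(w) * tnorm + 1e-12)
            if score > best:
                best, best_pos = score, (yy, xx)
    return best_pos, best

# Cut a patch out of a random image and find it again.
rng = np.random.default_rng(4)
img = rng.normal(size=(60, 60))
tpl = img[20:30, 35:45].copy()
pos, score = ncc_match(img, tpl)
```

Tracking the matched position frame by frame, then triangulating between two calibrated cameras, yields the 3D deformation trajectories the abstract describes.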
12 pages, 2085 KB  
Article
A Pilot Feasibility Study of Neurodevelopmental Surveillance After the Fontan Operation Using a Sedation-Free Brain MRI Approach
by Kwang Ho Choi, Hye Jin Baek, Hyungtae Kim, Si-Chan Sung, Joung-Hee Byun, Hoon Ko, Hyoung-Doo Lee, Ra Yu Yun, Jun-Ho Kim and Stefan Skare
J. Clin. Med. 2026, 15(8), 3069; https://doi.org/10.3390/jcm15083069 - 17 Apr 2026
Abstract
Background and Objectives: After undergoing a Fontan operation, children with single-ventricle physiology are at risk of neurodevelopmental impairment; data from the Korean population are scarce. We characterized the neurocognitive profiles of early school-aged Fontan patients and evaluated the feasibility of a sedation-free ultrafast brain magnetic resonance imaging (MRI) protocol for volumetric analysis. Methods: This prospective study screened 25 children who had undergone Fontan surgery and were in grades 1–3 (8–11 years of age) in 2023. After excluding children with a history of seizure, epilepsy, or brain infarction, 11 participants underwent standardized neurocognitive evaluation. Among them, four with extreme full-scale intelligence quotient (FSIQ) underwent 3T sedation-free ultrafast brain MRI (total scan time, 3 min 22 s), including volumetry-capable three-dimensional T1-weighted imaging. Six age-matched children served as controls. MRI volumetric analysis was exploratory and limited to a small subset of Fontan participants (n = 4), restricting statistical power and generalizability. Between-group comparisons were performed using Welch’s t-test, with Hedges’ g calculated as the effect size. Results: Mean FSIQ was 85.2 ± 24.3, with 36% of patients having an FSIQ below 85. Working memory (64%) and processing speed (55%) were most frequently impaired. Cerebellar volumes were lower in Fontan patients than in controls, although these differences were not statistically significant (left: 59.74 ± 8.86 vs. 72.26 ± 6.92 mL; right: 60.63 ± 7.70 vs. 71.54 ± 7.01 mL; very large effect sizes). Hippocampal volumes tended to be lower, and cerebellar volume showed a positive but non-significant correlation with processing speed. White matter hyperintensities and microbleeds were observed in two patients, both with impaired processing speed. 
Conclusions: School-aged Fontan patients exhibited selective deficits in working memory and processing speed, while exploratory MRI analysis suggested lower cerebellar volumes in the Fontan group. The ultrafast sedation-free MRI protocol proved feasible for volumetric assessment and, when combined with neurocognitive assessments, may support future milestone-based surveillance and early intervention for at-risk children. Full article
(This article belongs to the Special Issue Clinical Management of Pediatric Heart Diseases)
20 pages, 2475 KB  
Article
Data-Centric LoRA Adaptation and Trustworthy Edge Deployment of a Text-to-Image Diffusion Model for a Rights-Constrained Heritage Domain
by Youngho Kim and Hyungwoong Park
Electronics 2026, 15(8), 1685; https://doi.org/10.3390/electronics15081685 - 16 Apr 2026
Abstract
Public deployment of generative AI in cultural institutions is constrained by small, rights-restricted datasets, strict latency and runtime-stability requirements, and limits on visitor-data collection. This study presents a deployment-oriented framework for adapting a pre-trained text-to-image diffusion foundation model to a heritage-specific visual domain using Low-Rank Adaptation (LoRA). A Stable Diffusion v1.5 backbone is specialized through data-centric curation and LoRA fine-tuning, then served through an asynchronous edge architecture that links a Unity client and a local Python (version 3.10) inference server for public-facing operation on a native 400 × 1080 vertical canvas. To support deployment decisions without collecting personally identifiable information, the system records only anonymous operational logs and evaluates sustained-load behavior under repeated inference. In a 1000-iteration profiling test, the proposed configuration maintained stable runtime behavior without observable upward memory drift, with a peak allocated VRAM of 3.04 GB and an average end-to-end latency of 3.12 s. An 8 h field deployment further indicated service continuity under public interaction, while a CLIP-based proxy analysis under matched prompts and seeds suggested improved relative style controllability after adaptation (0.848 vs. 0.799). Rather than claiming cultural authenticity or visitor-level effects, this study offers a data-centric, deployment-oriented methodology for operating public-facing generative AI under small-data, latency, and privacy constraints. Full article
19 pages, 11100 KB  
Article
Semantic Communication Based on Slot Attention for MIMO Transmission in 6G Smart Factories
by Na Chen, Guijie Lin, Rubing Jian, Yusheng Wang, Meixia Fu, Jianquan Wang, Lei Sun, Wei Li, Taisei Urakami, Minoru Okada, Bin Shen, Qu Wang, Changyuan Yu, Fangping Chen and Xuekui Shangguan
Sensors 2026, 26(8), 2456; https://doi.org/10.3390/s26082456 - 16 Apr 2026
Abstract
In the Industrial Internet of Things (IIoT), vision-based industrial detection technology is crucial in the production process and can be used in many smart manufacturing applications, such as automated production control and Non-Destructive Evaluation (NDE). To enable timely and accurate decision-making, the network must transmit product status information to the server under stringent requirements of ultra-reliability and low latency. However, traditional pixel-centric industrial image transmission consumes additional bandwidth, and existing deep learning-based semantic communication systems rely on costly manual annotations. To overcome these limitations, this paper proposes a novel object-centric semantic communication framework based on improved slot attention for Multiple-Input Multiple-Output (MIMO) transmission in a 6G smart manufacturing scenario. First, we propose an improved slot attention method based on unsupervised learning for real-world manufacturing image datasets. The proposed method decouples complex industrial images into different object instances, each corresponding to an independent semantic component slot, effectively isolating task-related visual targets from redundant backgrounds. Furthermore, we propose a priority-based semantic transmission strategy. By quantifying the task-relevant importance of each semantic slot and jointly matching MIMO sub-channels, our method optimizes industrial image transmission streams, ensuring the reliable transmission of the important semantic information. Extensive simulation results demonstrate that the proposed framework significantly enhances communication transmission efficiency. Even under constrained bandwidth ratios and a low Signal-to-Noise Ratio (SNR), our framework achieves superior visual reconstruction quality and improves the Peak Signal-to-Noise Ratio (PSNR) by 4.25 dB compared to existing benchmarks. Full article
(This article belongs to the Special Issue Integrated AI and Communication for 6G)
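The key mechanism behind the slot attention the abstract builds on is a softmax over the slot axis, so slots compete for input features. A stripped-down, unlearned sketch (no learned projections, GRU, or the paper's improvements; toy 2D features):

```python
import numpy as np

def slot_attention_step(slots, inputs, eps=1e-8):
    """One simplified slot-attention iteration (no learned projections).

    The softmax runs over the *slot* axis, so slots compete for input
    features; each slot then moves to the weighted mean of the features
    it wins.
    """
    logits = slots @ inputs.T                    # (n_slots, n_inputs)
    attn = np.exp(logits - logits.max(axis=0, keepdims=True))
    attn /= attn.sum(axis=0, keepdims=True)      # competition across slots
    weights = attn / (attn.sum(axis=1, keepdims=True) + eps)
    return weights @ inputs                      # weighted-mean update

# Two well-separated feature clusters; each slot should lock onto one.
rng = np.random.default_rng(3)
a = rng.normal(loc=(5.0, 0.0), scale=0.1, size=(20, 2))
b = rng.normal(loc=(0.0, 5.0), scale=0.1, size=(20, 2))
inputs = np.vstack([a, b])
slots = np.array([[1.0, 0.0], [0.0, 1.0]])
for _ in range(10):
    slots = slot_attention_step(slots, inputs)
```

In the paper's framework, each converged slot corresponds to an object instance whose task relevance then determines its MIMO sub-channel priority.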
31 pages, 7470 KB  
Article
Improved Quantification of Methane Point-Source Emissions from Hyperspectral Imagery Using a Spectrally Corrected Levenberg–Marquardt Matched Filter
by Zhuo He, Yan Ma, Zhengqiang Li, Ying Zhang, Cheng Fan, Lili Qie, Zihan Zhang, Zheng Shi, Tong Lu, Yuanyuan Gao, Xingyu Yao, Xiaofan Li, Chenwei Lan and Qian Yao
Remote Sens. 2026, 18(8), 1195; https://doi.org/10.3390/rs18081195 - 16 Apr 2026
Abstract
Spaceborne hyperspectral imaging spectrometers enable refined retrieval and quantification of methane point-source emissions. However, the conventional matched filter (MF) systematically underestimates methane enhancements under high-concentration conditions and remains sensitive to spectral inconsistencies across varying observation scenarios. To address these limitations, we improve MF-based retrieval from two aspects: the observation model and the unit absorption spectrum (UAS) representation. First, a Levenberg–Marquardt matched filter (LMMF) is developed by extending the MF framework to a nonlinear retrieval formulation while retaining its data-driven and background-statistics-based characteristics. Specifically, the exponential absorption term is preserved, and methane enhancement is iteratively solved in the nonlinear domain, enabling a more physically consistent retrieval without requiring precise external prior knowledge. Building upon this framework, a spectrally corrected LMMF (SC-LMMF) is further proposed by introducing a lookup-table-based dynamic UAS correction to account for variations in observation geometry, surface elevation, and atmospheric state. Comprehensive validation using idealized and noise-perturbed simulations, end-to-end simulations, and controlled-release experiments demonstrates that the LMMF mitigates high-concentration underestimation relative to the MF. The SC-LMMF further reduces cross-scene systematic biases, shifting retrievals toward a near 1:1 relationship. In controlled-release experiments, the SC-LMMF increased the coefficient of determination (R2) by approximately 50% while reducing the root mean square error (RMSE) and mean absolute error (MAE) by approximately 70% relative to the MF. Overall, the proposed framework enhances the robustness and quantitative consistency of methane point-source retrievals across multisource hyperspectral satellite observations. Full article
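For context, the classical linear matched filter that LMMF extends estimates a per-pixel enhancement from the background mean and covariance. A minimal sketch with assumed toy data (the target signature and noise model are illustrative, not the paper's):

```python
import numpy as np

def matched_filter(X, t):
    """Classical linear matched filter: per-pixel enhancement estimate.

    X: (npixels, nbands) radiances; t: target signature (background
    mean scaled by the unit absorption spectrum). This is the linear MF
    baseline that the paper extends into a nonlinear (LMMF) retrieval.
    """
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-8 * np.eye(X.shape[1])
    Cinv = np.linalg.inv(C)
    return ((X - mu) @ Cinv @ t) / (t @ Cinv @ t)

rng = np.random.default_rng(1)
nbands = 20
uas = -np.abs(rng.normal(size=nbands))          # toy unit absorption spectrum
bg = 100.0 + rng.normal(scale=0.5, size=(500, nbands))
t = bg.mean(axis=0) * uas                        # linearized signature
alpha_true = 0.03
X = np.vstack([bg[:50] + alpha_true * t, bg[50:]])  # weak plume in 50 pixels
alpha = matched_filter(X, t)
```

The linearization `t = mu * uas` is exactly what breaks down at high concentrations, where the exponential absorption term matters; that is the regime the paper's nonlinear iteration targets.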
33 pages, 30701 KB  
Article
Polynomial Perceptrons for Compact, Robust, and Interpretable Machine Learning Models
by Edwin Aldana-Bobadilla, Alejandro Molina-Villegas, Juan Cesar-Hernandez and Mario Garza-Fabre
Entropy 2026, 28(4), 453; https://doi.org/10.3390/e28040453 - 15 Apr 2026
Abstract
This paper introduces the Polynomial Perceptron (PP), a structured extension of the classical perceptron that incorporates explicit polynomial feature expansions to model nonlinear interactions while preserving analytical transparency. By expressing feature interactions in closed functional form, PP captures higher-order dependencies through a compact set of learned coefficients, establishing a principled trade-off between expressivity and parameter efficiency. The proposed architecture is evaluated across heterogeneous domains, including text, image, and structured data tasks, under controlled experimental settings with parameter-matched baselines. Performance is assessed using standard metrics such as classification accuracy and model complexity (parameter count). Empirical results demonstrate that low-degree PP models achieve competitive accuracy compared to multilayer perceptrons and convolutional neural networks, while requiring significantly fewer parameters. An ablation study further analyzes the impact of polynomial degree on predictive performance, revealing diminishing returns beyond moderate degrees and highlighting favorable efficiency–accuracy trade-offs. A key advantage of PP lies in its intrinsic interpretability. Unlike conventional deep learning models that rely on post hoc explanation methods, PP provides direct analytical insight through its explicit polynomial structure, enabling decomposition of predictions into feature-, token-, or patch-level contributions without surrogate approximations. Overall, the results indicate that PP offers a lightweight, interpretable, and computationally efficient alternative to standard neural architectures, particularly well-suited for resource-constrained environments and applications where transparency is critical. Full article
(This article belongs to the Special Issue Advances in Data Mining and Coding Theory for Data Compression)
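The core idea of an explicit polynomial expansion feeding a single linear unit can be sketched as follows (illustrative only, not the authors' code; least squares stands in for the perceptron's training rule). The classic XOR example shows why the degree-2 monomials matter:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(X, degree=2):
    """Explicit polynomial feature expansion up to `degree`, bias included.

    Nonlinear interactions appear as closed-form monomials, so every
    learned coefficient maps back to an interpretable feature product.
    """
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.stack(cols, axis=1)

# XOR is not linearly separable, but a degree-2 expansion makes it so:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
Phi = poly_expand(X, degree=2)        # [1, x1, x2, x1^2, x1*x2, x2^2]
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = (Phi @ w > 0.5).astype(int)
```

The fitted weights recover the closed form y = x1 + x2 - 2·x1·x2, illustrating the decomposition of a prediction into named feature interactions that the abstract highlights.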
33 pages, 28814 KB  
Article
2D Orthogonal Matching Pursuit for Fully Polarimetric SAR Image Formation
by Daniele Bonicoli, Marco Martorella and Elisa Giusti
Remote Sens. 2026, 18(8), 1182; https://doi.org/10.3390/rs18081182 - 15 Apr 2026
Abstract
Fully polarimetric SAR provides richer scattering information than single-polarisation imaging, but multichannel sparse image formation can be computationally and memory demanding, especially when channels are processed jointly. In our previous work, we introduced Orthogonal Matching Pursuit 2D Fully Polarimetric (OMP2D-FP), a greedy reconstruction algorithm that enforces a shared spatial support across polarimetric channels while exploiting a separable 2D formulation to avoid vectorisation and reduce computational burden and memory footprint relative to vectorised OMP-based formulations. In this paper, we extend its validation to real measurements and further develop its theoretical foundations by recasting the atom-selection step as a detection–estimation problem, thereby defining a cumulative objective function (COF) design space that enables the incorporation of disturbance statistics and prior knowledge into sparse recovery. Experiments on fully polarimetric SAR data of a T-72 tank over a wide range of aspect angles, SNR levels, and measurement percentages show that joint support selection improves reconstruction fidelity and polarimetric information preservation over independent per-channel processing, with particularly clear gains under challenging conditions. Preliminary applications of the COF framework (a whitening COF incorporating polarimetric clutter statistics and a mask-based COF incorporating spatial prior knowledge) yield encouraging results, motivating further systematic investigation of adaptive COF designs. Full article
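For readers unfamiliar with the greedy baseline, plain single-channel OMP alternates a correlation-based atom pick with a joint refit over the selected support; OMP2D-FP additionally uses a separable 2D dictionary and a support shared across the four polarimetric channels. A minimal sketch with a toy dictionary:

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit (single channel, vectorised).

    Greedy baseline only: at each step, pick the atom most correlated
    with the residual, then refit all chosen atoms jointly.
    """
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # greedy step: atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # orthogonal step: least-squares refit over the whole support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(64, 128))
A /= np.linalg.norm(A, axis=0)                   # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

In the joint-support variant, the atom-selection score aggregates correlations across channels (the paper's cumulative objective function generalizes exactly this step), so all channels share one spatial support.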
20 pages, 23906 KB  
Article
Improved Depth Imaging of the Chicxulub Impact Crater by GPU-Accelerated Adjoint Reverse Time Migration
by Jesús Antonio Herrera-Pérez, Jose Carlos Ortíz-Alemán, Sebastián López-Juárez, Jhonatan Fernando Eulopa-Hernandez, Carlos Couder-Castañeda, Isaac Medina-Sanchez, Jairo Olguin-Roque and Diego Alfredo Padilla-Pérez
Symmetry 2026, 18(4), 658; https://doi.org/10.3390/sym18040658 - 15 Apr 2026
Abstract
Reverse time migration (RTM) exploits time-reversal symmetry and adjoint duality to focus wavefields and reconstruct subsurface reflectivity, but large surveys remain limited by the cost of forward and backward propagation. We present a Graphics Processing Unit (GPU)-accelerated adjoint RTM workflow for depth imaging of the Chicxulub impact structure using the marine A0/A1 composite profile (1996). The processed stacked section contains 14,172 traces with 6.25 m Common Depth Point (CDP) spacing, 1 ms sampling, and 18 s record length. Forward and adjoint wavefields are computed with a staggered-grid finite-difference scheme (fourth order in space, second in time) and Convolutional Perfectly Matched Layers (CPMLs), which provide stable finite-domain simulations while introducing controlled symmetry breaking through absorption. The solver is verified with the Lamb half-space analytical benchmark and applied through five interpretation-guided velocity/density updates. The final depth image improves reflector continuity and interpretability of crater-scale elements, including post-impact sedimentary fill, melt and breccia units, terrace fault blocks, and deep uplift-related structure. Compute Unified Device Architecture (CUDA) acceleration reduces runtime from ∼32.36 h on a CPU baseline to ∼34.10 min on an RTX 3070 (≈56.9×), enabling practical, reproducible iterative RTM on accessible hardware. Full article
(This article belongs to the Special Issue Symmetry/Asymmetry in Numerical Analysis and Scientific Computing)
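The time-stepping scheme described in this abstract (second order in time, fourth order in space) can be illustrated in one dimension. The sketch below is a minimal, hypothetical 1D version with simple Dirichlet edges, not the authors' GPU solver: it omits the staggered grid, CPML absorbing boundaries, and CUDA parallelism, and all names are illustrative.

```python
import numpy as np

def fd_wave_1d(v, nt, dt, dx, src, src_idx):
    """March the 1D scalar wave equation with a 2nd-order-in-time,
    4th-order-in-space finite-difference stencil (Dirichlet edges).
    v: velocity per grid point, src: source time function, src_idx: injection index."""
    n = len(v)
    p_prev = np.zeros(n)
    p = np.zeros(n)
    # Standard 4th-order central weights for the second spatial derivative
    c0, c1, c2 = -5.0 / 2.0, 4.0 / 3.0, -1.0 / 12.0
    for it in range(nt):
        lap = np.zeros(n)
        lap[2:-2] = (c0 * p[2:-2]
                     + c1 * (p[1:-3] + p[3:-1])
                     + c2 * (p[:-4] + p[4:])) / dx**2
        # Leapfrog update: p(t+dt) = 2p(t) - p(t-dt) + (v*dt)^2 * laplacian
        p_next = 2.0 * p - p_prev + (v * dt) ** 2 * lap
        p_next[src_idx] += src(it * dt) * dt**2  # point-source injection
        p_prev, p = p, p_next
    return p
```

With a Ricker wavelet source in a homogeneous medium, the wavefield spreads symmetrically from the injection point; in an RTM workflow the same kernel would be run forward for the source wavefield and in reversed time for the receiver (adjoint) wavefield, with the image formed by cross-correlation.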

20 pages, 10357 KB  
Article
A Comparative Benchmark of Face Detection Models for Noisy and Dynamic Online Class Environments
by Cesar Isaza, Pamela Rocío Ibarra Tapia, Cristian Felipe Ramirez-Gutierrez, Jonny Paul Zavala de Paz, Jose Amilcar Rizzo Sierra and Karina Anaya
Future Internet 2026, 18(4), 208; https://doi.org/10.3390/fi18040208 - 15 Apr 2026
Abstract
Monitoring students’ on-screen availability is increasingly critical for analyzing participation patterns in synchronous online learning, especially under videoconferencing conditions characterized by compressed video streams, low-resolution face regions, fluctuating bandwidth, and dynamically reconfigured grid layouts. This study introduces a practical computer vision pipeline that integrates deep learning-based face detection, lightweight embedding-based identity matching, and frame-level temporal aggregation to estimate students’ visual presence (VP) during live online classes. A real-world dataset comprising 27 participants and 16,200 frames was collected under authentic conditions, including codec compression, variable image quality, and dynamic layout changes. Four widely used face detection models (Haar Cascade, DSFD, MTCNN, and YuNet) were benchmarked on noisy and low-quality images. Quantitative evaluation on a manually annotated subset of 270 frames demonstrates that MTCNN and YuNet yield lower average VP estimation errors (27.63% and 22.20%, respectively) compared to Haar Cascade (75.34%) and DSFD (47.14%), with YuNet also achieving the shortest average processing time of 0.069 s per frame. While the pipeline is intentionally streamlined to facilitate practical use by instructors, the study provides clearly defined steps and parameter settings, establishing a reproducible procedure for benchmarking face detection performance in synchronous online class environments.
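The frame-level temporal aggregation step in this abstract, turning per-frame identity matches into a visual-presence (VP) percentage per student, can be sketched as follows. This is a hypothetical reconstruction of the aggregation logic only (function names and data shapes are assumptions); the detection and embedding-matching stages are taken as given.

```python
from collections import Counter

def visual_presence(frame_detections, roster, n_frames):
    """Aggregate per-frame identity matches into a VP percentage per student.
    frame_detections: one set of matched student IDs per processed frame."""
    counts = Counter()
    for ids in frame_detections:
        counts.update(set(ids))  # count a student at most once per frame
    return {sid: 100.0 * counts[sid] / n_frames for sid in roster}

def vp_error(estimated, ground_truth):
    """Mean absolute VP estimation error (percentage points) across students,
    the kind of metric used to compare detectors in the study."""
    return sum(abs(estimated[s] - ground_truth[s]) for s in ground_truth) / len(ground_truth)
```

For example, a student matched in 3 of 4 frames gets VP = 75%; averaging the absolute gap to annotated ground truth over the roster yields a per-detector error comparable across models.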

17 pages, 6145 KB  
Article
Novel, Contrast Echocardiography-Based Trabeculation Quantification Method in the Diagnosis of Left Ventricular Excessive Trabeculation
by Kristóf Attila Farkas-Sütő, Balázs Mester, Flóra Klára Gyulánczi, Krisztina Filipkó, Hajnalka Vágó, Béla Merkely and Andrea Szűcs
J. Imaging 2026, 12(4), 169; https://doi.org/10.3390/jimaging12040169 - 14 Apr 2026
Abstract
Cardiac MRI (CMR) is the gold standard for diagnosing left ventricular excessive trabeculation (LVET), whereas echocardiography (Echo) often does not yield a definitive diagnosis. The use of ultrasound contrast material offers the potential for more accurate imaging of the trabecular system; however, we do not yet have diagnostic criteria developed specifically for contrast Echo (CE-Echo). We aimed to determine the role of CE-Echo in the diagnosis of LVET and to propose a novel method for quantifying trabeculation. We included 55 LVET subjects and 54 age- and sex-matched healthy Control subjects. All subjects underwent non-contrast Echo, CE-Echo, and CMR examinations. In addition to volumetric parameters and ejection fraction (EF), we measured the area of the trabeculated layer and its ratio to the LV area (Trab/LV_area) on apical CE-Echo views. Based on the CMR-derived diagnosis, the Trab/LV_area ratio identified individuals with LVET with high specificity (98%) and sensitivity (95%) when the average of the apical views reached 17% (AUC = 0.98), or when it exceeded 20% in at least one view (AUC = 0.96). The use of CE-Echo may assist in the quantitative diagnosis of LVET in addition to its morphological assessment, and the Trab/LV_area ratio may be a good additional criterion in the diagnosis of LVET.
(This article belongs to the Section Medical Imaging)
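The decision rule in this abstract reduces to two cutoffs on the per-view Trab/LV_area percentages: mean of the apical views at or above 17%, or any single view above 20%. A minimal sketch of that rule, with cutoffs taken from the abstract but the function name and input format assumed for illustration:

```python
def classify_lvet(trab_lv_area_by_view, mean_cutoff=17.0, single_view_cutoff=20.0):
    """Flag LVET from per-apical-view Trab/LV_area percentages using the two
    cutoffs reported in the abstract: mean >= 17% or any view > 20%."""
    views = list(trab_lv_area_by_view.values())
    mean_ratio = sum(views) / len(views)
    return mean_ratio >= mean_cutoff or any(v > single_view_cutoff for v in views)
```

Keeping the two criteria separate mirrors the abstract, which reports a distinct AUC for each (0.98 for the mean criterion, 0.96 for the single-view criterion).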

20 pages, 3700 KB  
Article
Infrared Small Target Detection Method Fusing Accurate Registration and Weighted Difference
by Quan Liang, Teng Wang, Kefang Wang, Lixing Zhao, Xiaoyan Li and Fansheng Chen
Sensors 2026, 26(8), 2406; https://doi.org/10.3390/s26082406 - 14 Apr 2026
Abstract
Low-orbit thermal infrared bidirectional whisk-broom imaging offers wide-swath coverage and high spatial resolution for monitoring moving targets such as aircraft, but large scan angles and terrain undulation cause non-rigid geometric distortion and radiometric inconsistency between forward and backward scans. These effects generate strong clutter in difference images and degrade small and weak target detection. To address this problem, we propose an infrared small target detection method that fuses accurate registration and weighted difference. First, we propose a hybrid multi-scale registration algorithm that achieves coarse affine registration through sparse feature-point matching and then iteratively corrects nonlinear deformations by integrating a global grayscale-driven force with a local sparse-feature-guided force, yielding a registration error of 0.3281 pixels. On this basis, a multi-scale weighted convolutional morphological difference algorithm is proposed. A novel dual-structure hollow top-hat transform is constructed to accurately estimate the background, and a multi-directional convolution mechanism is introduced to effectively suppress anisotropic edge clutter and enhance target saliency. Experiments on SDGSAT-1 thermal infrared bidirectional whisk-broom data show an SCRG of 18.27, and a detection rate of 91.2% when the false alarm rate is below 0.15%. The method outperforms representative competing algorithms and provides a useful reference for space-based aerial moving target detection.
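The paper's dual-structure hollow top-hat builds on the classical white top-hat transform for background estimation. The sketch below shows only that standard baseline, image minus its grey opening, followed by a common mean-plus-k-sigma threshold; it is not the authors' dual-structure or multi-directional method, and the window size, threshold factor, and function names are assumptions.

```python
import numpy as np

def _grey_filter(img, k, op):
    """Min (erosion) or max (dilation) over a k-by-k window, edge-padded."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = padded[pad:pad + img.shape[0], pad:pad + img.shape[1]].copy()
    for di in range(k):
        for dj in range(k):
            win = padded[di:di + img.shape[0], dj:dj + img.shape[1]]
            out = op(out, win)
    return out

def white_tophat(img, k=7):
    """Classical white top-hat: subtract the grey opening (erosion then
    dilation), which estimates the background and keeps only bright
    structures smaller than the k-by-k window."""
    opened = _grey_filter(_grey_filter(img, k, np.minimum), k, np.maximum)
    return img - opened

def detect_small_targets(img, k=7, nsigma=4.0):
    """Threshold the top-hat residual at mean + nsigma*std (a common rule)."""
    res = white_tophat(img.astype(float), k)
    return res > res.mean() + nsigma * res.std()
```

On a smooth background, the opening reproduces the scene almost exactly, so the residual isolates compact bright anomalies; large-scale gradients and extended edges mostly cancel, which is the clutter-suppression property the paper's weighted variant refines.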