Search Results (1,449)

Search Parameters:
Keywords = image super-resolution

29 pages, 7304 KB  
Review
Enhanced Lateral Resolution in Acoustic Imaging: From High- to Super-Resolution
by Zheng Xia, Huizi He, Zixing Zhou, Shanshan Pan and Sai Zhang
Sensors 2026, 26(6), 1992; https://doi.org/10.3390/s26061992 - 23 Mar 2026
Viewed by 103
Abstract
Acoustic imaging, especially ultrasound, underpins a wide range of applications from non-destructive evaluation to medical and materials analysis, yet its performance is ultimately constrained by lateral resolution. This review systematically summarizes recent advances in overcoming diffraction-limited resolution, encompassing traditional focusing techniques, transducer optimization, physical metamaterial lenses, and methods based on algorithmic optimization and deep learning technologies. It comprehensively covers approaches for enhancing acoustic lateral resolution, compares the differences and respective advantages and disadvantages of various methods, and proposes clear directions and recommendations for future research. This work provides robust guidance for subsequent research trends and development opportunities in higher-resolution acoustic imaging.
(This article belongs to the Section Sensing and Imaging)
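
As context for the diffraction limit the review starts from, here is a back-of-the-envelope sketch of the lateral resolution of a focused ultrasound beam, commonly approximated as the wavelength times the focusing F-number. The speed of sound, frequency, and geometry below are illustrative values, not figures taken from the article.

```python
# Rough diffraction-limited lateral resolution of a focused ultrasound
# beam: wavelength times F-number. All numbers are illustrative.
def lateral_resolution_m(speed_of_sound=1540.0, frequency_hz=5e6,
                         focal_depth_m=0.03, aperture_m=0.01):
    wavelength = speed_of_sound / frequency_hz   # ~0.31 mm at 5 MHz in soft tissue
    f_number = focal_depth_m / aperture_m        # focusing "tightness"
    return wavelength * f_number

print(f"approx. lateral resolution: {lateral_resolution_m() * 1e3:.2f} mm")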

25 pages, 6275 KB  
Article
EGDM-IRSR: Edge-Guided Diffusion Model with State-Space UNet for Super-Resolution Infrared Images
by Hao Liu, Liang Huang, Xiaofeng Wang, Ting Nie and Mingxuan Li
Remote Sens. 2026, 18(6), 910; https://doi.org/10.3390/rs18060910 - 17 Mar 2026
Viewed by 255
Abstract
Super-resolution of infrared images is crucial for enhancing the visual perception of thermal imaging systems, yet existing methods struggle to recover sharp edges and texture details. Therefore, in this study, we aimed to address the following issues: over-smoothed edges, distorted radiometric contrast in diffusion-based approaches, and scanning artifacts introduced by efficient state-space models like Mamba. We propose a novel edge-guided diffusion framework named EGDM-IRSR. Its core methodology integrates a multi-modal scanning mechanism employing complementary scan paths with content-aware modulation to mitigate directional artifacts, along with an edge guidance branch with learnable direction-aware convolutions, complemented by an edge-frequency composite loss. Extensive experiments conducted on public benchmarks demonstrate that our method significantly outperforms state-of-the-art alternatives in quantitative metrics and exhibits superior visual fidelity by effectively preserving edges and fine structures. Ablation studies validate the effectiveness of each proposed component. We conclude that EGDM-IRSR provides a more robust and detail-enriched solution for acquiring super-resolution infrared images by synergistically integrating edge guidance with enhanced sequential modeling.
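
The abstract mentions an edge-frequency composite loss. The sketch below shows one common way such a loss is assembled in PyTorch (a Sobel-gradient term plus a log-FFT-magnitude term); the weights and exact formulation are illustrative assumptions, not the paper's definition.

```python
# Illustrative edge-frequency composite loss: pixel L1 + Sobel edge L1
# + log-FFT-magnitude L1. Not the authors' exact formulation.
import torch
import torch.nn.functional as F

def sobel_edges(x):
    """Gradient magnitude of a (N, 1, H, W) batch."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_frequency_loss(sr, hr, w_edge=0.1, w_freq=0.05):
    pixel = F.l1_loss(sr, hr)
    edge = F.l1_loss(sobel_edges(sr), sobel_edges(hr))
    freq = F.l1_loss(torch.log1p(torch.abs(torch.fft.rfft2(sr))),
                     torch.log1p(torch.abs(torch.fft.rfft2(hr))))
    return pixel + w_edge * edge + w_freq * freq

sr = torch.rand(2, 1, 64, 64)   # super-resolved IR patch (illustrative)
hr = torch.rand(2, 1, 64, 64)   # ground-truth patch
print(edge_frequency_loss(sr, hr).item())
```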

22 pages, 6635 KB  
Article
EdgeGeoDiff: A Novel Two-Stage Diffusion Approach for Precipitation Downscaling with Edge Details and Geographical Priors
by Shiji Zhang, Chenghong Zhang, Tao Wu, Tao Zou and Yuanchang Dong
Sensors 2026, 26(6), 1857; https://doi.org/10.3390/s26061857 - 15 Mar 2026
Viewed by 204
Abstract
Precipitation downscaling aims to enhance coarse-resolution data to higher resolutions. Due to the similarity between downscaling and super-resolution (SR), deep learning-based SR approaches have been increasingly adopted in this domain. However, single-image super-resolution (SISR) methods applied to precipitation data face two main challenges: weak high-frequency signals and highly skewed distributions in precipitation datasets, which often lead to overly smooth reconstructions, failure to capture precipitation extremes, and loss of fine-scale variability with predictions biased toward mean values. To address these issues, we propose EdgeGeoDiff, a two-stage diffusion model for precipitation downscaling that leverages both edge information and geographical priors (e.g., terrain-related factors such as elevation). In the first stage, a residual network reconstructs an initial high-resolution precipitation field with preliminary structural details. In the second stage, edge features extracted using the Laplacian operator, together with geographical priors, guide a diffusion model to generate residuals that enhance fine-scale precipitation structures. Experimental results on real-world precipitation datasets show that EdgeGeoDiff effectively reconstructs fine-scale details while preserving large-scale patterns and outperforms conventional SISR methods in terms of RMSE, PSNR, SSIM, and CSI, particularly demonstrating superior performance in the high-frequency region of the spectrum.
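
Of the metrics listed, CSI is the one most specific to precipitation verification. A minimal sketch of its computation at a rain/no-rain threshold follows; the 1 mm/h threshold and the synthetic fields are assumptions for illustration only.

```python
# Critical Success Index (CSI) for a downscaled precipitation field
# against a reference, at a rain/no-rain threshold.
import numpy as np

def critical_success_index(pred, obs, threshold=1.0):
    """CSI = hits / (hits + misses + false alarms)."""
    pred_event = pred >= threshold
    obs_event = obs >= threshold
    hits = np.logical_and(pred_event, obs_event).sum()
    misses = np.logical_and(~pred_event, obs_event).sum()
    false_alarms = np.logical_and(pred_event, ~obs_event).sum()
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

rng = np.random.default_rng(0)
obs = rng.gamma(shape=0.5, scale=2.0, size=(128, 128))   # synthetic reference field
pred = obs + rng.normal(0.0, 0.5, size=obs.shape)        # noisy "downscaled" field
print(f"CSI @ 1 mm/h: {critical_success_index(pred, obs):.3f}")
```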

16 pages, 7270 KB  
Article
Multi-Domain Fusion for UAV Image Super-Resolution Based on Tiny-Transformer
by Qiaoyue Man, Seok-Jeong Gee and Young-Im Cho
Drones 2026, 10(3), 204; https://doi.org/10.3390/drones10030204 - 14 Mar 2026
Viewed by 200
Abstract
Unmanned Aerial Vehicle imagery often suffers from severe spatial detail degradation due to sensor limitations and motion blur, hindering downstream vision tasks. To address this, we propose a lightweight super-resolution framework leveraging a Tiny-Transformer backbone enhanced by a multi-domain feature fusion strategy. Specifically, we jointly model spatial structural semantics and frequency domain texture priors via a cross-domain fusion attention mechanism, enabling coordinated restoration of global consistency and local details. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches on standard benchmarks, achieving significant gains in Peak Signal-to-Noise Ratio and structural similarity index while maintaining low computational cost. Notably, the model exhibits superior robustness in reconstructing high-frequency textures common in aerial scenes. This work provides an efficient, deployable solution for enhancing visual fidelity in resource-constrained applications such as urban planning and precision agriculture.
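
For reference, the two reported metrics (PSNR and SSIM) are typically computed as below with scikit-image; the random arrays merely stand in for a super-resolved frame and its ground truth.

```python
# PSNR and SSIM for a super-resolved image against its ground truth.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_sr(sr: np.ndarray, hr: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for uint8 RGB images of equal shape."""
    psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
    ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)
    return psnr, ssim

rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
sr = np.clip(hr.astype(np.int16) + rng.integers(-5, 6, hr.shape), 0, 255).astype(np.uint8)
print("PSNR = %.2f dB, SSIM = %.4f" % evaluate_sr(sr, hr))
```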

21 pages, 7166 KB  
Article
Geometric Reliability of AI-Enhanced Super-Resolution in Video-Based 3D Spatial Modeling
by Marwa Mohammed Bori, Zahraa Ezzulddin Hussein, Zainab N. Jasim and Bashar Alsadik
ISPRS Int. J. Geo-Inf. 2026, 15(3), 125; https://doi.org/10.3390/ijgi15030125 - 13 Mar 2026
Viewed by 260
Abstract
Video-based photogrammetric reconstruction is increasingly used when high-resolution still images are unavailable. However, limited spatial resolution, compression artifacts, and motion blur often reduce geometric accuracy. Recent advances in learning-based image super-resolution (SR) offer a promising preprocessing method, but their geometric reliability within photogrammetric workflows remains poorly understood. This study provides a controlled quantitative evaluation of learning-based super-resolution for video-based 3D reconstruction. Low-resolution video frames are enhanced using two representative methods: an open-source real-world SR model (Real-ESRGAN ×4) and a commercial solution (Topaz Video AI ×4). All datasets are processed with the same Structure-from-Motion and Multi-View Stereo pipelines and tested against terrestrial laser scanning (TLS) reference data. Results show that super-resolution significantly increases reconstruction density and improves the recovery of fine-scale surface details, while also leading to greater local surface variability compared with reconstructions from the original video; photogrammetric stability remains consistent despite these changes. The findings highlight a fundamental trade-off between reconstruction completeness and local geometric accuracy and clarify when enhanced video imagery via super-resolution can be a reliable source for 3D reconstruction. These results are especially important for spatial data science workflows and AI-powered 3D modeling and digital twin applications.
(This article belongs to the Special Issue Urban Digital Twins Empowered by AI and Dataspaces)
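
A minimal sketch of the kind of geometric check implied above: nearest-neighbour cloud-to-cloud distances between a photogrammetric reconstruction and a TLS reference, here with synthetic point clouds and SciPy's KD-tree. The statistics and data are illustrative, not the study's actual pipeline.

```python
# Cloud-to-cloud distance statistics against a reference point cloud.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(recon: np.ndarray, reference: np.ndarray):
    """Nearest-neighbour distance from each reconstructed point to the reference cloud."""
    tree = cKDTree(reference)
    dists, _ = tree.query(recon, k=1)
    return {"mean": dists.mean(),
            "rmse": np.sqrt((dists ** 2).mean()),
            "p95": np.percentile(dists, 95)}

rng = np.random.default_rng(1)
reference = rng.uniform(0, 10, size=(50_000, 3))               # stand-in for a TLS scan
recon = reference[:20_000] + rng.normal(0, 0.01, (20_000, 3))   # noisy "reconstruction"
print(cloud_to_cloud_stats(recon, reference))
```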

13 pages, 6402 KB  
Article
Random-Induced High-Contrast Subwavelength Nondiffracting Structured Light
by Guangsen Guo, Junhui Jia, Xiaoshan Zhang, Junjie Chen, Shikuan Mai, Wenjia Wang, Haolin Lin, Yanwen Hu, Zhen Li and Shenhe Fu
Photonics 2026, 13(3), 274; https://doi.org/10.3390/photonics13030274 - 13 Mar 2026
Viewed by 271
Abstract
Nondiffracting structured light has attracted considerable attention owing to its broad applications in both classical and quantum optics. Despite extensive research, existing generation approaches suffer from a contradiction between the subwavelength focal spot size and the strong side lobes, leading to a low-contrast localized light field in the far field. Here, we theoretically report a distinct technique for the generation of high-contrast nondiffracting structured light with its feature size reaching a subwavelength scale. The presented technique relies on a randomly perturbed sharp-edge aperture, which comprises a basic circular obstacle for exciting the in-phase high-spatial-frequency diffractive waves and randomized slit motifs for realizing destructive interference among the zero-order diffractive components emerging from the sharp-edge diffraction. With this framework, we obtain a continuous high-contrast light needle, both for the zero-order light mode and the higher-order light with topological structure. In both cases, the resultant light fields preserve their subwavelength intensity profiles along the propagation distance. This operating strategy provides an effective means of generating structured light at the subwavelength scale, offering opportunities for advanced applications such as super-resolution imaging and nano-scale light–matter interaction.
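
To make the far-field picture concrete, the sketch below uses the textbook FFT approximation of Fraunhofer diffraction for a plain sharp-edged circular obstacle. The grid size, obstacle radius, and the absence of the random slit motifs are all simplifying assumptions, not the authors' aperture design.

```python
# Far-field (Fraunhofer) intensity pattern of a circular obstacle via FFT.
import numpy as np

n = 1024
coords = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(coords, coords)
r = np.hypot(xx, yy)

field_at_aperture = (r > 0.1).astype(float)   # unit plane wave blocked by an obstacle of radius 0.1
far_field = np.fft.fftshift(np.fft.fft2(field_at_aperture))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()                  # normalized far-field intensity pattern

print("intensity grid:", intensity.shape, "on-axis value:", intensity[n // 2, n // 2])
```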

19 pages, 7642 KB  
Article
A Graph-Regularized Double-Path Interactive Spectral Super-Resolution Network for Hyperspectral Image Reconstruction
by Shuo Wang, Ting Hu, Siyuan Cheng, Zhe Li, Zhonghua Sun, Kebin Jia and Jinchao Feng
Remote Sens. 2026, 18(6), 875; https://doi.org/10.3390/rs18060875 - 12 Mar 2026
Viewed by 194
Abstract
Deep learning has demonstrated outstanding potential for the spectral super-resolution (S2R) reconstruction of multispectral images (MSIs). However, it is still a challenge to alleviate spectral distortion during S2R reconstruction. Exploiting the regularization capability of graphs, a graph-regularized double-path interactive S2R network (GDIS2Net) consisting of two parallel branches is proposed to reconstruct hyperspectral images (HSIs) from MSIs. An interactive residual module is carefully designed as the backbone of the S2R network to facilitate the feature interaction between the two branches, while an enhanced residual module is constructed for further feature fusion. In addition, a new loss function considering spectral continuity is proposed to optimize the proposed GDIS2Net. Experimental analyses show that the proposed GDIS2Net outperforms state-of-the-art methods on both simulated and real datasets.
(This article belongs to the Section Remote Sensing Image Processing)
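
The abstract credits part of the gain to a loss that encodes spectral continuity. Below is a minimal sketch of one such formulation (an L1 penalty on adjacent-band differences added to a pixel loss); the weighting and band count are assumptions for illustration and not GDIS2Net's actual loss.

```python
# Pixel L1 plus a penalty on adjacent-band spectral gradients.
import torch
import torch.nn.functional as F

def spectral_continuity_loss(pred, target, w_spec=0.1):
    """pred, target: (N, C_bands, H, W) reconstructed vs. reference HSIs."""
    pixel = F.l1_loss(pred, target)
    d_pred = pred[:, 1:] - pred[:, :-1]        # first-order difference along bands
    d_target = target[:, 1:] - target[:, :-1]
    spectral = F.l1_loss(d_pred, d_target)
    return pixel + w_spec * spectral

pred = torch.rand(2, 31, 64, 64)    # e.g., a 31-band HSI reconstructed from an MSI
target = torch.rand(2, 31, 64, 64)
print(spectral_continuity_loss(pred, target).item())
```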

16 pages, 22406 KB  
Article
Isotropic Reconstruction of Anisotropic vEM Volumes with ViT-Guided Diffusion
by Junchao Qiu, Guojia Wan, Zhengyun Zhou, Minghui Liao, Xiangdong Liu, Xinyuan Li and Bo Du
Electronics 2026, 15(6), 1181; https://doi.org/10.3390/electronics15061181 - 12 Mar 2026
Viewed by 233
Abstract
Volume electron microscopy (vEM) provides nanometer-scale 3D imaging, yet its axial (z) resolution is often much lower than the in-plane (xy) resolution, yielding anisotropic volumes that hinder segmentation and connectomic reconstruction. We present a two-stage cross-axial super-resolution framework for isotropic reconstruction that combines a conditional diffusion model and domain-specific self-supervised pretraining of a vision transformer (ViT). First, the student–teacher self-distillation paradigm of DINOv3 is adopted to learn representations from large sets of high-resolution xy sections, capturing vEM-specific texture statistics and ultrastructural patterns. Second, a conditional diffusion denoiser is trained with supervised anisotropic degradation simulated by z-downsampling, while a perceptual loss based on frozen ViT feature distances constrains generated slices to match real-section distributions. These constraints recover axial high-frequency details and reduce hallucinated textures and inter-slice drift, improving cross-slice consistency. Experiments on two public vEM datasets show improved fidelity, perceptual quality, and membrane-boundary continuity over interpolation and learning-based baselines.
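
On the training-data side of this pipeline, the sketch below simulates the kind of axial degradation described, by down-sampling an isotropic stack along z and interpolating it back; the 8x factor and the random volume are illustrative assumptions.

```python
# Simulate anisotropic vEM degradation by z-downsampling an isotropic volume.
import numpy as np
from scipy.ndimage import zoom

def simulate_anisotropy(volume: np.ndarray, axial_factor: int = 8):
    """volume: (Z, Y, X) isotropic stack -> (low-z volume, re-upsampled volume)."""
    low_z = zoom(volume, (1.0 / axial_factor, 1.0, 1.0), order=1)
    re_up = zoom(low_z, (axial_factor, 1.0, 1.0), order=1)
    return low_z, re_up

vol = np.random.rand(64, 256, 256).astype(np.float32)   # stand-in for an isotropic vEM crop
low_z, blurred_z = simulate_anisotropy(vol)
print(vol.shape, "->", low_z.shape, "->", blurred_z.shape)
```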

21 pages, 5982 KB  
Article
Evaluating Geostationary Satellite-Based Approaches for NDVI Gap Filling in Polar-Orbiting Satellite Observations
by Han-Sol Ryu, Sung-Joo Yoon, Jinyeong Kim and Tae-Ho Kim
Sensors 2026, 26(5), 1731; https://doi.org/10.3390/s26051731 - 9 Mar 2026
Viewed by 296
Abstract
The Normalized Difference Vegetation Index (NDVI) derived from polar-orbiting satellites is widely used for vegetation monitoring; however, its temporal continuity is often limited by cloud contamination and fixed revisit cycles. To address this limitation, this study investigates the feasibility of using geostationary satellite observations to enhance the spatial completeness of Sentinel-2 NDVI at its standard revisit intervals through cloud gap-filling applications. Geostationary Ocean Color Imager II (GOCI-II) data (250 m) was used as input, while Sentinel-2 Multispectral Instrument (MSI) NDVI (10 m) served as the reference dataset. To enable cross-sensor integration, a data-driven transformation framework was developed to convert GOCI-II NDVI into MSI-like NDVI while preserving dominant spatial variation patterns rather than pursuing strict pixel-level super-resolution. The transformed NDVI was assessed through spatial comparisons and statistical metrics, including correlation coefficient, mean absolute error, root mean square error (RMSE), normalized RMSE, and structural similarity index measure. Results show that geostationary-derived NDVI captures broad spatial organization and field-scale variability observed in MSI NDVI. Building on this cross-scale consistency, cloud gap-filling experiments demonstrate that temporally adjacent transformed NDVI scenes maintain consistent variation patterns, supporting their complementary use for compensating cloud-induced gaps. Although reduced contrast and magnitude-dependent biases remain, primarily due to the large spatial resolution difference and sub-pixel heterogeneity, an intermediate-resolution (80 m) sensitivity analysis indicates improved stability when the resolution gap is reduced. Overall, these findings highlight the practical potential of integrating geostationary and polar-orbiting observations to improve NDVI spatial continuity in cloud-prone regions.
(This article belongs to the Special Issue Remote Sensing Technology for Agricultural and Land Management)
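
For completeness, the index at the centre of this study is computed as below from near-infrared and red reflectance, NDVI = (NIR - Red) / (NIR + Red); the synthetic arrays are placeholders, not Sentinel-2 or GOCI-II data.

```python
# Standard NDVI from NIR and red reflectance bands.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, safe against zero division."""
    denom = nir + red
    out = np.where(denom > 0, (nir - red) / np.where(denom == 0, 1, denom), np.nan)
    return np.clip(out, -1.0, 1.0)

rng = np.random.default_rng(0)
nir = rng.uniform(0.2, 0.6, size=(512, 512))   # stand-in for a near-infrared tile
red = rng.uniform(0.05, 0.3, size=(512, 512))  # stand-in for a red tile
print("mean NDVI:", float(np.nanmean(ndvi(nir, red))))
```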

18 pages, 2343 KB  
Article
VMESR: Variable Mamba-Enhanced Super-Resolution for Real-Time Road Scene Understanding with Automotive Vision Sensors
by Hongjun Zhu, Wanjun Wang, Chunyan Ma and Rongtao Hou
Sensors 2026, 26(5), 1683; https://doi.org/10.3390/s26051683 - 6 Mar 2026
Viewed by 310
Abstract
Automotive vision systems depend critically on front-view cameras, whose image quality frequently degrades under adverse conditions such as rain, fog, low illumination, and rapid motion. To address this challenge, we propose VMESR, a variable Mamba-enhanced super-resolution network that integrates a selective state-space model into a lightweight super-resolution architecture. By serializing 2D feature maps and applying variable-depth Mamba blocks, VMESR captures long-range dependencies with linear complexity. A multi-scale feature extractor, enhanced residual modules equipped with a convolutional block attention module, and dense fusion connections work together to improve the recovery of high-frequency details. Extensive experiments demonstrate that VMESR achieves competitive performance in both objective metrics and perceptual quality compared to state-of-the-art methods, while significantly reducing parameter counts and computational cost. VMESR provides a practical balance between efficiency and reconstruction accuracy, offering a deployable super-resolution solution for embedded automotive sensors and enhancing the robustness of autonomous driving perception pipelines.
(This article belongs to the Special Issue AI for Emerging Image-Based Sensor Applications)
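
The serialization step mentioned above, turning a 2D feature map into a token sequence for a state-space block and back, can be sketched as follows; raster-scan ordering is an assumption here, since VMESR's actual scan paths are not specified in the abstract.

```python
# Flatten a 2D feature map into a 1D token sequence and restore it.
import torch

def to_sequence(feat: torch.Tensor) -> torch.Tensor:
    """(N, C, H, W) feature map -> (N, H*W, C) token sequence (raster order)."""
    return feat.flatten(2).transpose(1, 2)

def to_feature_map(seq: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """(N, H*W, C) token sequence -> (N, C, H, W) feature map."""
    n, l, c = seq.shape
    assert l == h * w
    return seq.transpose(1, 2).reshape(n, c, h, w)

feat = torch.randn(1, 64, 32, 32)
seq = to_sequence(feat)                       # (1, 1024, 64), ready for a sequence model
assert torch.equal(to_feature_map(seq, 32, 32), feat)
print(seq.shape)
```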

39 pages, 6596 KB  
Article
Unsupervised Super-Resolution for UAV Thermal Imagery via Diffusion Models with Emissivity-Guided Texture Transfer
by Dong Liu, Min Sun, Xinyi Wang and Kelly Chen Ke
Remote Sens. 2026, 18(5), 815; https://doi.org/10.3390/rs18050815 - 6 Mar 2026
Viewed by 300
Abstract
Due to hardware limitations of Thermal InfraRed (TIR) cameras, TIR images captured by Unmanned Aerial Vehicles (UAVs) suffer from Low Resolutions (LRs) and blurred textures. Improving the spatial resolution of TIR images is of great significance for subsequent applications. Existing image Super-Resolution (SR) methods rely on High-Resolution (HR) ground truth for supervised training, resulting in limited generalization and a lack of mechanisms to preserve the physical consistency of thermal radiation. To address these two issues, this paper proposes an unsupervised super-resolution framework for UAV TIR imagery that integrates diffusion modeling with cross-modal texture transfer. The diffusion model enables stable reconstruction of the fundamental TIR structure without requiring high-resolution supervision, while multi-scale textures extracted from visible (VIS) imagery via Multi-Stage Decomposition based on Latent Low-Rank Representation (MS-DLatLRR) compensate for missing details. To suppress temperature distortions introduced by cross-modal texture transfer, a physics-guided constraint termed Prior-Informed Emissivity-Guided Coefficient Mapping (PI-EGCM) is incorporated. Emissivity-aware guidance maps constructed via semantic classification regulate texture transfer and preserve thermal radiation consistency. Experimental results demonstrate that the proposed method improves spatial resolution and perceptual quality while effectively maintaining temperature fidelity, achieving a balanced enhancement of structural detail and physical consistency.
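
As a highly simplified illustration of guidance-weighted texture transfer, the sketch below blends high-frequency detail from a registered visible image into an upscaled thermal image under a per-pixel guidance map. The uniform map, Gaussian detail extraction, and blend strength are all assumptions standing in for the paper's MS-DLatLRR textures and emissivity-aware PI-EGCM maps.

```python
# Toy guidance-weighted texture transfer from a visible image to a thermal image.
import numpy as np
from scipy.ndimage import gaussian_filter

def transfer_texture(tir_up, vis, guidance, strength=0.3):
    """All inputs are (H, W) float arrays on comparable scales."""
    vis_detail = vis - gaussian_filter(vis, sigma=2.0)   # high-frequency VIS texture
    return tir_up + strength * guidance * vis_detail

rng = np.random.default_rng(0)
tir_up = gaussian_filter(rng.random((256, 256)), 4.0)    # blurry upscaled TIR stand-in
vis = rng.random((256, 256))                             # registered visible image stand-in
guidance = np.full_like(tir_up, 0.5)                     # placeholder guidance map
print(transfer_texture(tir_up, vis, guidance).shape)
```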

25 pages, 8082 KB  
Article
A Novel Improved Whale Optimization Algorithm-Based Multi-Scale Fusion Attention Enhanced SwinIR Model for Super-Resolution and Recognition of Text Images on Electrophoretic Displays
by Xin Xiong, Zikang Feng, Peng Li, Xi Hu, Jiyan Liu and Xueqing Liu
Biomimetics 2026, 11(3), 195; https://doi.org/10.3390/biomimetics11030195 - 6 Mar 2026
Viewed by 345
Abstract
Electrophoretic Displays (EPDs) are widely adopted in e-readers and portable devices due to their ultra-low power consumption and eye-friendly reflective characteristics. However, inherent hardware limitations, such as low resolution, slow response speed, and display degradation, frequently result in blurred strokes and degraded text readability. While traditional driving waveform optimizations can mitigate these issues, they are device-dependent and require extensive manual calibration. To address these challenges, this paper proposes an Improved Whale Optimization Algorithm-based Multi-scale Fusion Attention-enhanced SwinIR (IWOA-MFA-SwinIR) model for super-resolution and recognition of text images on EPDs. Structurally, the model incorporates a multi-scale fused attention (MFA) module that synergistically integrates channel, spatial, and gated attention mechanisms to precisely capture high-frequency text details while suppressing background noise within the SwinIR architecture. Furthermore, to enhance model robustness and eliminate manual tuning, an Improved Whale Optimization Algorithm (IWOA) is employed to adaptively optimize critical hyperparameters, including embedding dimension (d), attention head count (h), learning rate (lr), and dimensionality reduction coefficient (r). Experiments conducted on the TextZoom and EPD datasets demonstrate that the proposed model achieves state-of-the-art performance. In the ablation study, it attains a Peak Signal-to-Noise Ratio (PSNR) of 24.406, a Structural Similarity Index (SSIM) of 0.8837, and a Character Recognition Accuracy (CRA) of 89.81%. In the comparative evaluation, the proposed model consistently outperforms the second-best comparison model across three difficulty levels, yielding approximately a 1% improvement in PSNR, a 0.8% improvement in SSIM, and an 8% improvement in CRA. This confirms the proposed model’s superiority over mainstream comparative models in restoring text fidelity and improving recognition rates.
(This article belongs to the Special Issue Bionics in Engineering Practice: Innovations and Applications)
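
For readers unfamiliar with the search procedure behind this hyperparameter tuning, here is a compact sketch of the standard Whale Optimization Algorithm (Mirjalili and Lewis, 2016) minimizing a toy sphere function. It is the vanilla algorithm on an illustrative objective, not the paper's improved variant or its SwinIR search space.

```python
# Standard Whale Optimization Algorithm minimizing a toy objective.
import numpy as np

def woa_minimize(objective, dim, bounds, n_whales=20, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_whales, dim))
    best = min(pop, key=objective).copy()
    best_val = objective(best)

    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter                 # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a         # exploration/exploitation coefficient
            C = 2.0 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1.0:                   # exploit: encircle the current best
                    pop[i] = best - A * np.abs(C * best - pop[i])
                else:                              # explore: move relative to a random whale
                    rand = pop[rng.integers(n_whales)]
                    pop[i] = rand - A * np.abs(C * rand - pop[i])
            else:                                  # bubble-net spiral around the best
                l = rng.uniform(-1.0, 1.0)
                pop[i] = np.abs(best - pop[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            pop[i] = np.clip(pop[i], lo, hi)
            val = objective(pop[i])
            if val < best_val:
                best, best_val = pop[i].copy(), val
    return best, best_val

best, val = woa_minimize(lambda x: float(np.sum(x ** 2)), dim=4, bounds=(-5.0, 5.0))
print("best point:", np.round(best, 4), "objective:", round(val, 8))
```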

19 pages, 4890 KB  
Article
MTA-Dataset: Multiple-Tilt-Angle Dataset for UAV–Satellite Image Matching
by Qifei Liu, Liang Jiang, Guoqiang Wu, Kun Huang, Haohui Sun and Gengchen Liu
Appl. Sci. 2026, 16(5), 2488; https://doi.org/10.3390/app16052488 - 4 Mar 2026
Viewed by 381
Abstract
Accurate target localization via matching real-time UAV images with reference satellite imagery is essential for autonomous environmental perception. Nonetheless, operational constraints and weather conditions often necessitate oblique photography. This large-tilt mode causes significant perspective and radiometric distortions, resulting in a substantial domain gap between UAV and vertical satellite imagery. The scarcity of datasets featuring extreme viewpoint shifts and fine-grained ground-truth labels hinders the validation of image matching algorithms in multi-tilt-angle environments. To address this issue, we introduce the multiple-tilt-angle dataset (MTA-Dataset), containing 1892 UAV images with tilt angles spanning [0°, 90°] and flight altitudes up to 300 m, supported by high-precision five-point manual annotations. Based on this benchmark, we evaluate state-of-the-art matching algorithms and propose a spatial-resolution-based cropping strategy. Experimental results demonstrate that, as the UAV tilt angle increases within the range of [0°, 90°], although the expanding field of view provides richer contextual information, the localization errors of all methods increase significantly and matching precision drops sharply due to severe geometric distortions in far-field regions and interference from redundant background information, with performance deteriorating most drastically in the [50°, 90°] range. With the integration of our strategy, the average matching localization errors of the SuperPoint + SuperGlue baseline for UAV images within the tilt-angle ranges of [50°, 60°], [60°, 70°], [70°, 80°], and [80°, 90°] are reduced by 33.49 m, 37.86 m, 98.3 m, and 109.95 m, respectively. Our study provides a more comprehensive evaluation framework for robust UAV–satellite image matching algorithms in multi-tilt-angle scenarios.
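
A spatial-resolution-based crop is usually reasoned about through the ground sampling distance (GSD), the metres of ground covered by one pixel. The sketch below applies the standard relation GSD = pixel pitch × range / focal length and sizes a satellite crop to match a UAV frame's footprint; the camera parameters and the 60° example tilt are illustrative assumptions, not the dataset's calibration.

```python
# Ground sampling distance and a footprint-matched satellite crop size.
import math

def ground_sampling_distance(pixel_pitch_m, range_m, focal_length_m):
    """Metres on the ground covered by one image pixel (nadir approximation)."""
    return pixel_pitch_m * range_m / focal_length_m

def matching_crop_pixels(uav_gsd_m, image_width_px, sat_gsd_m):
    """Satellite-crop width (px) whose footprint equals the UAV frame footprint."""
    footprint_m = uav_gsd_m * image_width_px
    return int(round(footprint_m / sat_gsd_m))

altitude = 300.0                         # m, the dataset's maximum flight altitude
tilt = math.radians(60.0)                # an illustrative oblique angle
slant_range = altitude / math.cos(tilt)  # range to the scene grows with tilt
gsd = ground_sampling_distance(3.45e-6, slant_range, 8.8e-3)
print(f"approx. GSD at 60° tilt: {gsd * 100:.1f} cm/px,"
      f" matching crop: {matching_crop_pixels(gsd, 4000, 0.5)} px")
```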

24 pages, 30812 KB  
Article
A Lightweight Model for Hot-Rolled Steel Strip Surface Defect Recognition
by Naixuan Guo, Haonan Fan, Qin Dong, Rongchen Gu and Sen Xu
Sensors 2026, 26(5), 1618; https://doi.org/10.3390/s26051618 - 4 Mar 2026
Viewed by 313
Abstract
With the rapid development of intelligent manufacturing and industrial automation, defect recognition and detection of hot-rolled strip steel have become crucial to ensuring both production efficiency and product quality. However, existing hot-rolled strip steel detection systems often rely on expensive, energy-intensive, stationary equipment, making them unsuitable for mobile applications, such as outdoor use. To address this challenge, this paper proposes and designs a lightweight dual-surface defect recognition model for hot-rolled steel strips that can be implemented on mobile low-power devices (e.g., Raspberry Pi). First, to train the lightweight model, the NEU-CLS dataset is augmented through image generation via StyleGAN3, denoising with a water-wave-like noise removal algorithm, and super-resolution with Real-ESRGAN. Then, MMAM-EfficientNet-B0 is pruned during training, and the Network Slimming algorithm is applied to optimize it on the expanded NEU-CLS dataset, removing 70% of the network structure. Finally, the pruned recognition model is deployed on a Raspberry Pi, achieving an accuracy of 96.333%, with a classification time of 1.527 s per image, a reduction of 155.010% compared to the original model. Our experiments confirm the real-time effectiveness and practical application value of the model.
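
Since the abstract leans on Network Slimming for pruning, the sketch below shows the core of that published technique (Liu et al., 2017): an L1 penalty on BatchNorm scale factors during training and a global-percentile threshold for channel selection. The toy model, penalty weight, and the reuse of the 70% ratio are illustrative; this is not the paper's MMAM-EfficientNet-B0 setup.

```python
# Core of Network Slimming: BN-gamma sparsity penalty + channel thresholding.
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, weight: float = 1e-4) -> torch.Tensor:
    """Sparsity term added to the task loss so BN scale factors shrink toward zero."""
    penalty = sum(bn.weight.abs().sum()
                  for bn in model.modules() if isinstance(bn, nn.BatchNorm2d))
    return weight * penalty

def channel_keep_masks(model: nn.Module, prune_ratio: float = 0.7):
    """Per-BN boolean masks of channels that survive a global-percentile threshold."""
    gammas = torch.cat([bn.weight.detach().abs().flatten()
                        for bn in model.modules() if isinstance(bn, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: bn.weight.detach().abs() > threshold
            for name, bn in model.named_modules() if isinstance(bn, nn.BatchNorm2d)}

model = nn.Sequential(nn.Conv2d(3, 32, 3), nn.BatchNorm2d(32), nn.ReLU(),
                      nn.Conv2d(32, 64, 3), nn.BatchNorm2d(64), nn.ReLU())

# One illustrative training step: task loss (here just the mean activation)
# plus the sparsity penalty.
loss = model(torch.randn(4, 3, 32, 32)).mean() + bn_l1_penalty(model)
loss.backward()

# Pretend training has finished and the gammas have spread out, then threshold.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        nn.init.uniform_(m.weight, 0.0, 1.0)   # stand-in for post-training sparse gammas
print({name: int(mask.sum()) for name, mask in channel_keep_masks(model).items()})
```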

37 pages, 6252 KB  
Review
From Microscopy to Nanoscopy: Contemporary Physical Methods in Mitochondrial Structural Biology
by Semen V. Nesterov, Anton G. Rogov and Raif G. Vasilov
Int. J. Mol. Sci. 2026, 27(5), 2361; https://doi.org/10.3390/ijms27052361 - 3 Mar 2026
Viewed by 340
Abstract
Mitochondria play a crucial role in cellular bioenergetics, signaling, and metabolism; yet, many fundamental mechanisms such as the proton transfer along the membranes, the link between membrane curvature and oxidative phosphorylation, and the nanoscale organization of enzyme supercomplexes remain poorly understood due to the limitations of classical biochemical approaches. This review addresses this gap by systematically analyzing the contemporary physical methods used to investigate the mitochondrial structure and function from the micro to nano scale. It covers advanced fluorescence and super-resolution microscopy, electron and volume electron microscopy, and scanning probe techniques, as well as cryo-electron tomography for resolving supramolecular assemblies in near-native conditions. The review highlights the applications of the modern fluorescent probes, expansion and phase microscopy, and machine-learning-based image analysis for a quantitative assessment of the mitochondrial morphology, membrane potential, and dynamics in living cells and tissues. Complementary spectroscopic and scattering methods, including Raman spectroscopy, NMR, and X-ray and neutron scattering, are discussed as tools for probing the redox state, metabolite composition, and membrane organization. Emphasis is placed on integrating high-resolution experimental data with advanced computational frameworks to test competing models of mitochondrial function and pathology, and to guide the development of biomimetic and biomedical technologies.
