Search Results (12,626)

Search Parameters:
Keywords = high resolution images

30 pages, 6462 KB  
Article
High Frame Rate ViSAR Based on OAM Beams: Imaging Model and Imaging Algorithm
by Xiaopeng Li, Liying Xu, Yongfei Mao, Weisong Li, Yinwei Li, Hongqiang Wang and Yiming Zhu
Remote Sens. 2026, 18(2), 294; https://doi.org/10.3390/rs18020294 - 15 Jan 2026
Abstract
High frame rate imaging of synthetic aperture radar (SAR), also known as video SAR (ViSAR), has attracted extensive research in recent years. When ViSAR system parameters are fixed, there is a technical trade-off between high frame rate and high resolution. In traditional ViSAR, the frame rate is usually increased by raising the carrier frequency, which increases the azimuth modulation frequency and shortens the synthetic aperture time. This paper proposes a strip non-overlapping mode ViSAR based on Orbital Angular Momentum (OAM) beams, which exploits the topological charge of vortex electromagnetic waves (VEWs) to increase the azimuth modulation frequency and thereby the frame rate. By introducing the concept of VEW frame splitting, a corresponding time-varying topological charge mode is designed for ViSAR imaging. This design introduces an additional azimuth modulation frequency while maintaining the original imaging resolution, thus significantly improving the frame rate of the ViSAR system. However, the Bessel function term in the VEW causes amplitude modulation of the echo signal, and the additional frequency modulation causes the traditional azimuth matched filter to fail. To address these problems, an improved Range-Doppler algorithm (RDA) is proposed. By employing a range cell center approximation, the negative effect of the Bessel function on imaging is effectively reduced. Furthermore, the azimuth matched filter is modified to account for the introduced tuning frequency, which prevents the defocusing caused by the tuning-frequency mismatch. Finally, computer simulation results show that the VEW-based ViSAR system and imaging algorithm effectively improve the frame rate while maintaining the imaging resolution, providing a promising research direction for the development of ViSAR technology. Full article
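
To make the azimuth-compression step concrete, here is a minimal NumPy sketch (not the paper's algorithm) of matched filtering where the reference chirp rate adds an extra modulation term on top of the conventional Doppler rate; using only the base rate leaves a residual chirp and a lower, defocused peak. All numerical values are illustrative assumptions.

```python
# Minimal sketch: azimuth matched filtering where the reference chirp rate
# includes an extra modulation term on top of the conventional Doppler rate.
# All numerical values (PRF, aperture time, chirp rates) are illustrative
# assumptions, not parameters taken from the paper.
import numpy as np

prf = 2000.0             # pulse repetition frequency [Hz] (assumed)
t_ap = 0.5               # synthetic aperture time [s] (assumed)
ka_base = -1800.0        # conventional azimuth Doppler rate [Hz/s] (assumed)
ka_extra = -600.0        # additional rate from the time-varying mode (assumed)

t = np.arange(-t_ap / 2, t_ap / 2, 1.0 / prf)              # slow-time axis
echo = np.exp(1j * np.pi * (ka_base + ka_extra) * t**2)    # point-target azimuth chirp

def compress(signal, ka_ref):
    """Frequency-domain matched filtering against a reference chirp rate."""
    ref = np.exp(1j * np.pi * ka_ref * t**2)
    return np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(ref)))

matched = compress(echo, ka_base + ka_extra)   # filter aware of the extra term
mismatched = compress(echo, ka_base)           # conventional filter only

print("matched peak   :", np.abs(matched).max())
print("mismatched peak:", np.abs(mismatched).max())   # lower peak -> defocusing
```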

29 pages, 9724 KB  
Article
YOLOv11n-CGSD: Lightweight Detection of Dairy Cow Body Temperature from Infrared Thermography Images in Complex Barn Environments
by Zhongwei Kang, Hang Song, Hang Xue, Miao Wu, Derui Bao, Chuang Yan, Hang Shi, Jun Hu and Tomas Norton
Agriculture 2026, 16(2), 229; https://doi.org/10.3390/agriculture16020229 - 15 Jan 2026
Abstract
Dairy cow body temperature is a key physiological indicator that reflects metabolic level, immune status, and environmental stress responses, and it has been widely used for early disease recognition. Infrared thermography (IRT), as a non-contact imaging technique capable of remotely acquiring the surface radiation temperature distribution of animals, is regarded as a powerful alternative to traditional temperature measurement methods. Under practical cowshed conditions, IRT images of dairy cows are easily affected by complex background interference and generally suffer from low resolution, poor contrast, indistinct boundaries, weak structural perception, and insufficient texture information, which lead to significant degradation in target detection and temperature extraction performance. To address these issues, a lightweight detection model named YOLOv11n-CGSD is proposed for dairy cow IRT images, aiming to improve the accuracy and robustness of region of interest (ROI) detection and body temperature extraction under complex background conditions. At the architectural level, a C3Ghost lightweight module based on the Ghost concept is first constructed to reduce redundant feature extraction while lowering computational cost and enhancing the network capability for preserving fine-grained features during feature propagation. Subsequently, a space-to-depth convolution module is introduced to perform spatial rearrangement of feature maps and achieve channel compression via non-strided convolution, thereby improving the sensitivity of the model to local temperature variations and structural details. Finally, a dynamic sampling mechanism is embedded in the neck of the network, where the upsampling and scale alignment processes are adaptively driven by feature content, enhancing the model response to boundary temperature changes and weak-texture regions. Experimental results indicate that the YOLOv11n-CGSD model can effectively shift attention from irrelevant background regions to ROI contour boundaries and increase attention coverage within the ROI. Under complex IRT conditions, the model achieves P, R, and mAP50 values of 89.11%, 86.80%, and 91.94%, which represent improvements of 3.11%, 5.14%, and 4.08%, respectively, compared with the baseline model. Using Tmax as the temperature extraction parameter, the maximum error (Max. Error) and mean error (MAE. Error) in the lower udder region are reduced by 33.3% and 25.7%, respectively, while in the around the anus region, the Max. Error and MAE. Error are reduced by 87.5% and 95.0%, respectively. These findings demonstrate that, under complex backgrounds and low-quality IRT imaging conditions, the proposed model achieves lightweight and high-performance detection for both lower udder (LU) and around the anus (AA) regions and provides a methodological reference and technical support for non-contact body temperature measurement of dairy cows in practical cowshed production environments. Full article
(This article belongs to the Section Farm Animal Production)
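
For readers unfamiliar with the space-to-depth convolution the abstract mentions, the following is a minimal PyTorch sketch of one such block: spatial blocks are folded into channels (so no information is discarded by striding) before a non-strided convolution. The channel sizes, kernel size, normalization, and activation are illustrative assumptions, not the YOLOv11n-CGSD implementation.

```python
# Minimal sketch of a space-to-depth block followed by a non-strided
# convolution; layer sizes and the 3x3 kernel are illustrative assumptions.
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        self.scale = scale
        # After rearrangement the channel count grows by scale**2.
        self.conv = nn.Conv2d(in_ch * scale**2, out_ch, kernel_size=3,
                              stride=1, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # PixelUnshuffle moves each scale x scale spatial block into channels,
        # halving H and W without discarding fine-grained information.
        x = nn.functional.pixel_unshuffle(x, self.scale)
        return self.act(self.bn(self.conv(x)))

if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)      # dummy IRT feature map
    block = SpaceToDepthConv(64, 128)
    print(block(feat).shape)               # -> torch.Size([1, 128, 40, 40])
```
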
45 pages, 5848 KB  
Review
Future Perspectives on Black Hole Jet Mechanisms: Insights from Next-Generation Observatories and Theoretical Developments
by Andre L. B. Ribeiro and Nathalia M. N. da Rocha
Universe 2026, 12(1), 24; https://doi.org/10.3390/universe12010024 - 15 Jan 2026
Abstract
Black hole jets represent one of the most extreme manifestations of astrophysical processes, linking accretion physics, relativistic magnetohydrodynamics, and large-scale feedback in galaxies and clusters. Despite decades of observational and theoretical work, the mechanisms governing jet launching, collimation, and energy dissipation remain open questions. In this article, we discuss how upcoming facilities such as the Event Horizon Telescope (EHT), the Cherenkov Telescope Array (CTA), the Vera C. Rubin Observatory (LSST), and the Whole Earth Blazar Telescope (WEBT) will provide unprecedented constraints on jet dynamics, variability, and multi-wavelength signatures. Furthermore, we highlight theoretical challenges, including the role of magnetically arrested disks (MADs), plasma microphysics, and general relativistic magnetohydrodynamic (GRMHD) simulations in shaping our understanding of jet formation. By combining high-resolution imaging, time-domain surveys, and advanced simulations, the next decade promises transformative progress in unveiling the physics of black hole jets. Full article
(This article belongs to the Special Issue Mechanisms Behind Black Holes and Relativistic Jets)
21 pages, 10154 KB  
Article
Sea Ice Concentration Retrieval in the Arctic and Antarctic Using FY-3E GNSS-R Data
by Tingyu Xie, Cong Yin, Weihua Bai, Dongmei Song, Feixiong Huang, Junming Xia, Xiaochun Zhai, Yueqiang Sun, Qifei Du and Bin Wang
Remote Sens. 2026, 18(2), 285; https://doi.org/10.3390/rs18020285 - 15 Jan 2026
Abstract
Recognizing the critical role of polar Sea Ice Concentration (SIC) in climate feedback mechanisms, this study presents the first comprehensive investigation of China’s Fengyun-3E (FY-3E) GNOS-II Global Navigation Satellite System Reflectometry (GNSS-R) for bipolar SIC retrieval. Specifically, reflected signals from multiple Global Navigation Satellite Systems (GNSS) are utilized to extract characteristic parameters from Delay Doppler Maps (DDMs). By integrating regional partitioning and dynamic thresholding for sea ice detection, a Random Forest Regression (RFR) model incorporating a rolling-window training strategy is developed to estimate SIC. The retrieved SIC products are generated at the native GNSS-R observation resolution of approximately 1 × 6 km, with each SIC estimate corresponding to an individual GNSS-R observation time. Owing to the limited daily spatial coverage of GNSS-R measurements, the retrieved SIC results are further aggregated into monthly composites for spatial distribution analysis. The model is trained and validated across both polar regions, including targeted ice–water boundary zones. Retrieved SIC estimates are compared with reference data from the OSI SAF Special Sensor Microwave Imager Sounder (SSMIS), demonstrating strong agreement. Based on an extensive dataset, the average correlation coefficient (R) reaches 0.9450 in the Arctic and 0.9602 in the Antarctic for the testing set, with corresponding Root Mean Squared Errors (RMSE) of 0.1262 and 0.0818, respectively. Even in the more challenging ice–water transition zones, RMSE values remain within acceptable ranges, reaching 0.1486 in the Arctic and 0.1404 in the Antarctic. This study demonstrates the feasibility and accuracy of GNSS-R-based SIC retrieval, offering a robust and effective approach for cryospheric monitoring at high latitudes in both polar regions. Full article
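
The rolling-window training strategy described above can be illustrated with a small scikit-learn sketch: a Random Forest regressor is refit each day on the preceding days' DDM-derived features and then applied to the new day's observations. The synthetic data, window length, and forest size are assumptions for illustration only, not the paper's configuration.

```python
# Minimal sketch of rolling-window Random Forest regression for SIC retrieval;
# the synthetic features/labels, 14-day window, and forest size are assumed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_days, obs_per_day, n_feat = 60, 200, 5              # synthetic stand-in data
X = rng.normal(size=(n_days, obs_per_day, n_feat))    # DDM features per observation
y = rng.uniform(0, 1, size=(n_days, obs_per_day))     # reference SIC in [0, 1]

window = 14                                           # train on the previous 14 days
preds = []
for day in range(window, n_days):
    X_train = X[day - window:day].reshape(-1, n_feat)
    y_train = y[day - window:day].ravel()
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    preds.append(model.predict(X[day]))               # retrieve SIC for the new day

rmse = np.sqrt(np.mean((np.concatenate(preds) - y[window:].ravel()) ** 2))
print(f"rolling-window RMSE on synthetic data: {rmse:.3f}")
```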

20 pages, 2787 KB  
Article
FWISD: Flood and Waterfront Infrastructure Segmentation Dataset with Model Evaluations
by Kaiwen Xue and Cheng-Jie Jin
Remote Sens. 2026, 18(2), 281; https://doi.org/10.3390/rs18020281 - 15 Jan 2026
Abstract
The increasing severity of extreme weather events necessitates rapid methods for post-disaster damage assessment. Current remote sensing datasets often lack the spatial resolution required for a detailed evaluation of critical waterfront infrastructure, which is vulnerable during hurricanes. To address this limitation, we introduce the Flood and Waterfront Infrastructure Segmentation Dataset (FWISD), a new dataset constructed from high-resolution unmanned aerial vehicle imagery captured after a major hurricane, comprising 3750 annotated 1024 × 1024 pixel image patches. The dataset provides semantic labels for 11 classes, specifically designed to distinguish between intact and damaged structures. We conducted comprehensive experiments to evaluate the performance of both convolution-based and Transformer-based models. Our results indicate that hybrid models integrating Transformer encoders with convolutional decoders achieve a superior balance of contextual understanding and spatial precision. Regression analysis indicates that the distance to water has the strongest influence on the detection success rate, while comparative experiments emphasize the unique complexity of waterfront infrastructure compared to homogeneous datasets. In summary, FWISD provides a valuable resource for developing and evaluating advanced models, establishing a foundation for automated systems that can improve the timeliness and precision of post-disaster response. Full article
(This article belongs to the Section AI Remote Sensing)

18 pages, 6673 KB  
Article
An Adaptive Clear High-Dynamic Range Fusion Algorithm Based on Field-Programmable Gate Array for Real-Time Video Stream
by Hongchuan Huang, Yang Xu and Tingyu Zhao
Sensors 2026, 26(2), 577; https://doi.org/10.3390/s26020577 - 15 Jan 2026
Abstract
Conventional High Dynamic Range (HDR) image fusion algorithms generally require two or more original images with different exposure times for synthesis, making them unsuitable for real-time processing scenarios such as video streams. Additionally, the synthesized HDR images have the same bit depth as the original images, which may lead to banding artifacts and limits their applicability in professional fields requiring high fidelity. This paper utilizes a Field Programmable Gate Array (FPGA) to support an image sensor operating in Clear HDR mode, which simultaneously outputs High Conversion Gain (HCG) and Low Conversion Gain (LCG) images. These two images share the same exposure duration and are captured at the same moment, making them well-suited for real-time HDR fusion. This approach provides a feasible solution for real-time processing of video streams. An adaptive adjustment algorithm is employed to address the requirement for high fidelity. First, the initial HCG and LCG images are fused under the initial fusion parameters to generate a preliminary HDR image. Subsequently, the gain of the high-gain images in the video stream is adaptively adjusted according to the brightness of the fused HDR image, enabling stable brightness under dynamic illumination conditions. Finally, by evaluating the read noise of the HCG and LCG images, the fusion parameters are adaptively optimized to synthesize an HDR image with higher bit depth. Experimental results demonstrate that the proposed method achieves a processing rate of 46 frames per second for 2688 × 1520 resolution video streams, enabling real-time processing. The bit depth of the image is enhanced from 12 bits to 16 bits, preserving more scene information and effectively addressing banding artifacts in HDR images. This improvement provides greater flexibility for subsequent image processing tasks. Consequently, the adaptive algorithm is particularly suitable for dynamically changing scenarios such as real-time surveillance and professional applications including industrial inspection. Full article
(This article belongs to the Section Sensing and Imaging)
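
The core fusion idea, combining a same-exposure high-conversion-gain (HCG) and low-conversion-gain (LCG) frame into a wider-bit-depth image, can be sketched in a few lines of NumPy. The gain ratio, blend thresholds, and 12-to-16-bit mapping below are illustrative assumptions, not the paper's adaptive FPGA pipeline.

```python
# Minimal NumPy sketch of fusing same-exposure HCG/LCG frames into a wider
# bit-depth image; the conversion-gain ratio and blend thresholds are assumed.
import numpy as np

def fuse_hcg_lcg(hcg12, lcg12, gain_ratio=16.0, lo=3000, hi=3800):
    """Blend 12-bit HCG/LCG frames into a 16-bit HDR frame.

    Dark/midtone pixels come from the low-noise HCG frame; near-saturated
    pixels fall back to the LCG frame scaled by the conversion-gain ratio.
    """
    hcg = hcg12.astype(np.float32)
    lcg = lcg12.astype(np.float32) * gain_ratio          # bring LCG to HCG scale
    # Weight ramps from 1 (use HCG) to 0 (use LCG) as HCG approaches saturation.
    w = np.clip((hi - hcg) / float(hi - lo), 0.0, 1.0)
    fused = w * hcg + (1.0 - w) * lcg
    # Map the extended range into 16 bits.
    return np.clip(fused, 0, 65535).astype(np.uint16)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    hcg = rng.integers(0, 4096, size=(1520, 2688), dtype=np.uint16)
    lcg = (hcg // 16).astype(np.uint16)                  # toy LCG frame
    print(fuse_hcg_lcg(hcg, lcg).dtype, fuse_hcg_lcg(hcg, lcg).shape)
```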

12 pages, 1777 KB  
Article
Enhanced Fracture Energy and Toughness of UV-Curable Resin Using Flax Fiber Composite Laminates
by Mingwen Ou, Huan Li, Dequan Tan, Yizhen Peng, Hao Zhong, Linmei Wu and Wubin Shan
Biomimetics 2026, 11(1), 71; https://doi.org/10.3390/biomimetics11010071 - 15 Jan 2026
Abstract
Ultraviolet (UV) curable resins are widely used in photopolymerization-based 3D printing due to their rapid curing and compatibility with high-resolution processes. However, their brittleness and limited mechanical performance restrict their applicability, particularly in impact-resistant high-performance 3D-printed structures. Inspired by the mantis shrimp’s exceptional energy absorption and impact resistance, attributed to its helicoidal fiber architecture, we developed a Bouligand flax fiber-reinforced composite laminate. By constructing biomimetic helicoidal composites based on Bouligand arrangements, the mechanical performance of flax fiber-reinforced UV-curable resin was systematically investigated. The influence of flax fiber orientation was assessed using mechanical testing combined with the digital image correlation (DIC) method. The results demonstrate that a 45° interlayer angle of flax fiber significantly enhanced the fracture energy of the resin from 1.67 kJ/m² to 15.41 kJ/m², an increase of ~823%. Moreover, the flax fiber-reinforced helicoidal structure markedly improved the ultimate tensile strength of the resin, with the 90° interlayer angle of flax fiber exhibiting the greatest enhancement, increasing from 5.32 MPa to 19.45 MPa. Full article

26 pages, 38465 KB  
Article
High-Resolution Snapshot Multispectral Imaging System for Hazardous Gas Classification and Dispersion Quantification
by Zhi Li, Hanyuan Zhang, Qiang Li, Yuxin Song, Mengyuan Chen, Shijie Liu, Dongjing Li, Chunlai Li, Jianyu Wang and Renbiao Xie
Micromachines 2026, 17(1), 112; https://doi.org/10.3390/mi17010112 - 14 Jan 2026
Abstract
Real-time monitoring of hazardous gas emissions in open environments remains a critical challenge. Conventional spectrometers and filter wheel systems acquire spectral and spatial information sequentially, which limits their ability to capture multiple gas species and dynamic dispersion patterns rapidly. A High-Resolution Snapshot Multispectral Imaging System (HRSMIS) is proposed to integrate high spatial fidelity with multispectral capability for near real-time plume visualization, gas species identification, and concentration retrieval. Operating across the 7–14 μm spectral range, the system employs a dual-path optical configuration in which a high-resolution imaging path and a multispectral snapshot path share a common telescope, allowing for the simultaneous acquisition of fine two-dimensional spatial morphology and comprehensive spectral fingerprint information. Within the multispectral path, two 5×5 microlens arrays (MLAs) combined with a corresponding narrowband filter array generate 25 distinct spectral channels, allowing concurrent detection of up to 25 gas species in a single snapshot. The high-resolution imaging path provides detailed spatial information, facilitating spatio-spectral super-resolution fusion for multispectral data without complex image registration. The HRSMIS demonstrates modulation transfer function (MTF) values of at least 0.40 in the high-resolution channel and 0.29 in the multispectral channel. Monte Carlo tolerance analysis confirms imaging stability, enabling the real-time visualization of gas plumes and the accurate quantification of dispersion dynamics and temporal concentration variations. Full article
(This article belongs to the Special Issue Gas Sensors: From Fundamental Research to Applications, 2nd Edition)

21 pages, 99704 KB  
Article
A Multi-Modal Approach for Robust Oriented Ship Detection: Dataset and Methodology
by Jianing You, Yixuan Lv, Shengyang Li, Silei Liu, Kailun Zhang and Yuxuan Liu
Remote Sens. 2026, 18(2), 274; https://doi.org/10.3390/rs18020274 - 14 Jan 2026
Abstract
Maritime ship detection is a critical task for security and traffic management. To advance research in this area, we constructed a new high-resolution, spatially aligned optical-SAR dataset, named MOS-Ship. Building on this, we propose MOS-DETR, a novel query-based framework. This model incorporates an innovative multi-modal Swin Transformer backbone to extract unified feature pyramids from both RGB and SAR images. This design allows the model to jointly exploit optical textures and SAR scattering signatures for precise, oriented bounding box prediction. We also introduce an adaptive probabilistic fusion mechanism. This post-processing module dynamically integrates the detection results generated by our model from the optical and SAR inputs, synergistically combining their complementary strengths. Experiments validate that MOS-DETR achieves highly competitive accuracy and significantly outperforms unimodal baselines, demonstrating superior robustness across diverse conditions. This work provides a robust framework and methodology for advancing multimodal maritime surveillance. Full article
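
One simple way to read the adaptive probabilistic fusion idea is score-level fusion of per-modality detections. The sketch below matches optical and SAR detections by IoU of axis-aligned boxes (the paper uses oriented boxes) and combines matched confidences with a noisy-OR rule; the matching threshold, box averaging, and fusion rule are assumptions for illustration, not the MOS-DETR post-processing module.

```python
# Hypothetical score-level fusion of optical and SAR detections, each given as
# [x1, y1, x2, y2, score]; IoU threshold and noisy-OR combination are assumed.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse(opt_dets, sar_dets, iou_thr=0.5):
    fused, used = [], set()
    for o in opt_dets:
        best_j, best_iou = -1, iou_thr
        for j, s in enumerate(sar_dets):
            if j not in used and iou(o, s) >= best_iou:
                best_j, best_iou = j, iou(o, s)
        if best_j >= 0:
            used.add(best_j)
            s = sar_dets[best_j]
            score = 1.0 - (1.0 - o[4]) * (1.0 - s[4])      # noisy-OR of confidences
            box = (np.array(o[:4]) + np.array(s[:4])) / 2  # average the boxes
            fused.append([*box, score])
        else:
            fused.append(list(o))                          # optical-only detection
    fused += [list(s) for j, s in enumerate(sar_dets) if j not in used]
    return fused

if __name__ == "__main__":
    opt = [[10, 10, 50, 40, 0.8]]
    sar = [[12, 11, 52, 42, 0.6], [100, 100, 140, 130, 0.7]]
    for det in fuse(opt, sar):
        print([round(float(v), 2) for v in det])
```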

25 pages, 6075 KB  
Article
High-Frequency Monitoring of Explosion Parameters and Vent Morphology During Stromboli’s May 2021 Crater-Collapse Activity Using UAS and Thermal Imagery
by Elisabetta Del Bello, Gaia Zanella, Riccardo Civico, Tullio Ricci, Jacopo Taddeucci, Daniele Andronico, Antonio Cristaldi and Piergiorgio Scarlato
Remote Sens. 2026, 18(2), 264; https://doi.org/10.3390/rs18020264 - 14 Jan 2026
Abstract
Stromboli’s volcanic activity fluctuates in intensity and style, and periods of heightened activity can trigger hazardous events such as crater collapses and lava overflows. This study investigates the volcano’s explosive behavior surrounding the 19 May 2021 crater-rim failure, which primarily affected the N2 crater and partially involved N1, by integrating high-frequency thermal imaging and high-resolution unmanned aerial system (UAS) surveys to quantify eruption parameters and vent morphology. Typically, eruptive periods preceding vent instability are characterized by evident changes in geophysical parameters and by intensified explosive activity. This is quantitatively monitored mainly through explosion frequency, while other eruption parameters are assessed qualitatively and sporadically. Our results show that, in addition to explosion rate, the spattering rate, the predominance of bomb- and gas-rich explosions, and the number of active vents increased prior to the collapse, reflecting near-surface magma pressurization. UAS surveys revealed that the pre-collapse configuration of the northern craters contributed to structural vulnerability, while post-collapse vent realignment reflected magma’s adaptation to evolving stress conditions. The May 2021 events were likely influenced by morphological changes induced by the 2019 paroxysms, which increased collapse frequency and amplified the 2021 failure. These findings highlight the importance of integrating quantitative time series of multiple eruption parameters and high-frequency morphological surveys into monitoring frameworks to improve early detection of system disequilibrium and enhance hazard assessment at Stromboli and similar volcanic systems. Full article

24 pages, 5801 KB  
Article
MEANet: A Novel Multiscale Edge-Aware Network for Building Change Detection in High-Resolution Remote Sensing Images
by Tao Chen, Linjin Huang, Wenyi Zhao, Shengjie Yu, Yue Yang and Antonio Plaza
Remote Sens. 2026, 18(2), 261; https://doi.org/10.3390/rs18020261 - 14 Jan 2026
Abstract
Remote sensing building change detection (RSBCD) is critical for land surface monitoring and understanding interactions between human activities and the ecological environment. However, existing deep learning-based RSBCD methods often result in mis-detected pixels concentrated around object boundaries, mainly due to ambiguous object shapes and complex spatial distributions. To address this problem, we propose a new Multiscale Edge-Aware change detection Network (MEANet) that accurately locates edge pixels of changed objects and enhances the separability between changed and unchanged pixels. Specifically, a high-resolution feature fusion network is adopted to preserve spatial details while integrating deep semantic information, and a multi-scale supervised contrastive loss (MSCL) is designed to jointly optimize pixel-level discrimination and embedding space separability. To further improve the handling of difficult samples, hard negative sampling is adopted in the contrastive learning process. We conduct comparative experiments on three benchmark datasets. Both visual and quantitative results demonstrate that MEANet significantly reduces misclassified pixels at object boundaries and achieves superior detection accuracy compared to existing methods. In particular, on the GZ-CD dataset, MEANet improves F1-Score and mIoU by more than 2% compared with ChangeFormer, demonstrating strong robustness in complex scenarios. It is worth noting that the performance of MEANet may still be affected by extremely complex edge textures or highly blurred boundaries. Future work will focus on further improving robustness under such challenges and extending the method to broader RSBCD scenarios. Full article
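
A pixel-embedding supervised contrastive loss of the general kind referenced above can be sketched in PyTorch as follows. The temperature, binary changed/unchanged labels, and the omission of the paper's multi-scale weighting and hard-negative mining are simplifications made for illustration; this is not the MSCL implementation.

```python
# Minimal sketch of a supervised contrastive loss over sampled pixel embeddings
# with binary changed/unchanged labels; multi-scale weighting and hard-negative
# mining from the paper are deliberately omitted.
import torch
import torch.nn.functional as F

def sup_contrastive_loss(emb: torch.Tensor, labels: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """emb: (N, D) pixel embeddings; labels: (N,) 0 = unchanged, 1 = changed."""
    emb = F.normalize(emb, dim=1)
    sim = emb @ emb.t() / temperature                    # pairwise similarities
    self_mask = torch.eye(len(emb), dtype=torch.bool, device=emb.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))      # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(1).clamp(min=1)
    # Pull same-label pixels together relative to all other pixels.
    return -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_count).mean()

if __name__ == "__main__":
    emb = torch.randn(256, 32)                           # sampled pixel embeddings
    labels = torch.randint(0, 2, (256,))                 # changed / unchanged
    print(sup_contrastive_loss(emb, labels).item())
```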

13 pages, 4845 KB  
Article
Efficient Solid-State Far-Field Macroscopic Fourier Ptychographic Imaging via Programmable Illumination and Camera Array
by Di You, Ge Ren and Haotong Ma
Photonics 2026, 13(1), 73; https://doi.org/10.3390/photonics13010073 - 14 Jan 2026
Abstract
Macroscopic Fourier ptychography (FP) is regarded as a highly promising approach for creating a synthetic aperture for macro visible imaging to achieve sub-diffraction-limited resolution. However, most existing macro FP techniques rely on a high-precision translation stage to drive laser or camera scanning, thereby increasing system complexity and bulk. Meanwhile, the scanning process is slow and time-consuming, hindering the ability to achieve rapid imaging. In this paper, we introduce an innovative illumination scheme that employs a spatial light modulator to achieve precise programmable variable-angle illumination at a relatively long distance, and can also freely adjust the illumination spot size through phase coding to avoid the issues of limited field of view and excessive dispersion of illumination energy. Coupled with a camera array, this could significantly reduce the number of shots taken by the imaging system and enable a lightweight and highly efficient solid-state macro FP imaging system with a large equivalent aperture. The effectiveness of the method is experimentally validated using various optically rough diffuse objects and a USAF target at laboratory-scale distances. Full article

18 pages, 14907 KB  
Article
Renal-AI: A Deep Learning Platform for Multi-Scale Detection of Renal Ultrastructural Features in Electron Microscopy Images
by Leena Nezamuldeen, Walaa Mal, Reem A. Al Zahrani, Sahar Jambi and M. Saleet Jafri
Diagnostics 2026, 16(2), 264; https://doi.org/10.3390/diagnostics16020264 - 14 Jan 2026
Abstract
Background/Objectives: Transmission electron microscopy (TEM) is an essential tool for diagnosing renal diseases. It produces high-resolution visualization of glomerular and mesangial ultrastructural features. However, manual interpretation of TEM images is labor-intensive and prone to interobserver variability. In this study, we introduced and evaluated deep learning architectures based on YOLOv8-OBB for automated detection of six ultrastructural features in kidney biopsy TEM images: glomerular basement membrane, mesangial folds, mesangial deposits, normal podocytes, podocytopathy, and subepithelial deposits. Methods: Building on our previous work, we propose a modified YOLOv8-OBB architecture that incorporates three major refinements: a grayscale input channel, a high-resolution P2 feature pyramid with refinement blocks (FPRbl), and a four-branch oriented detection head designed to detect small-to-large structures at multiple image scales (feature-map strides of 4, 8, 16, and 32 pixels). We compared two pretrained variants: our previous YOLOv8-OBB model developed with a grayscale input channel (GSch) and four additional feature-extraction layers (4FExL) (Pretrained + GSch + 4FExL) and the newly developed (Pretrained + FPRbl). Results: Quantitative assessment showed that our previously developed model (Pretrained + GSch + 4FExL) achieved an F1-score of 0.93 and mAP@0.5 of 0.953, while the (Pretrained + FPRbl) model developed in this study achieved an F1-score of 0.92 and mAP@0.5 of 0.941, demonstrating strong and clinically meaningful performance for both approaches. Qualitative assessment based on expert visual inspection of predicted bounding boxes revealed complementary strengths: (Pretrained + GSch + 4FExL) exhibited higher recall for subtle or infrequent findings, whereas (Pretrained + FPRbl) produced cleaner bounding boxes with higher-confidence predictions. Conclusions: This study presents how targeted architectural refinements in YOLOv8-OBB can enhance the detection of small, low-contrast, and variably oriented ultrastructural features in renal TEM images. Evaluating these refinements and translating them into a web-based platform (Renal-AI) showed the clinical applicability of deep learning-based tools for improving diagnostic efficiency and reducing interpretive variability in kidney pathology. Full article

22 pages, 6609 KB  
Article
CAMS-AI: A Coarse-to-Fine Framework for Efficient Small Object Detection in High-Resolution Images
by Zhanqi Chen, Zhao Chen, Baohui Yang, Qian Guo, Haoran Wang and Xiangquan Zeng
Remote Sens. 2026, 18(2), 259; https://doi.org/10.3390/rs18020259 - 14 Jan 2026
Abstract
Automated livestock monitoring in wide-area grasslands is a critical component of smart agriculture development. Devices such as Unmanned Aerial Vehicles (UAVs), remote sensing, and high-mounted cameras provide unique monitoring perspectives for this purpose. The high-resolution images they capture cover vast grassland backgrounds, where targets often appear as small, distant objects and are extremely unevenly distributed. Applying standard detectors directly to such images yields poor results and extremely high miss rates. To improve the detection accuracy of small targets in high-resolution images, methods represented by Slicing Aided Hyper Inference (SAHI) have been widely adopted. However, in specific scenarios, SAHI’s drawbacks are dramatically amplified. Its strategy of uniform global slicing divides each original image into a fixed number of sub-images, many of which may be pure background (negative samples) containing no targets. This results in a significant waste of computational resources and a precipitous drop in inference speed, falling far short of practical application requirements. To resolve this conflict between accuracy and efficiency, this paper proposes an efficient detection framework named CAMS-AI (Clustering and Adaptive Multi-level Slicing for Aided Inference). CAMS-AI adopts a “coarse-to-fine” intelligent focusing strategy: First, a Region Proposal Network (RPN) is used to rapidly locate all potential target areas. Next, a clustering algorithm is employed to generate precise Regions of Interest (ROIs), effectively focusing computational resources on target-dense areas. Finally, an innovative multi-level slicing strategy and a high-precision model are applied only to these high-quality ROIs for fine-grained detection. Experimental results demonstrate that the CAMS-AI framework achieves a mean Average Precision (mAP) comparable to SAHI while significantly increasing inference speed. Taking the RT-DETR detector as an example, while achieving 96% of the mAP50–95 accuracy level of the SAHI method, CAMS-AI’s end-to-end frames per second (FPS) is 10.3 times that of SAHI, showcasing its immense application potential in real-world, high-resolution monitoring scenarios. Full article
(This article belongs to the Section Remote Sensing Image Processing)
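
A toy version of the coarse-to-fine focusing idea described above: cluster coarse detection centres to form padded regions of interest, then run fine-grained detection only inside those ROIs instead of slicing the whole image. The DBSCAN parameters, padding margin, and synthetic inputs are illustrative assumptions, not the CAMS-AI implementation.

```python
# Toy coarse-to-fine focusing: cluster coarse detection centres into ROIs and
# restrict fine-grained detection to them; parameters and data are assumed.
import numpy as np
from sklearn.cluster import DBSCAN

def propose_rois(coarse_centres, image_shape, eps=200.0, pad=128):
    """Group coarse detection centres (x, y) into padded bounding-box ROIs."""
    h, w = image_shape
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(coarse_centres)
    rois = []
    for lab in np.unique(labels):
        pts = coarse_centres[labels == lab]
        x1, y1 = pts.min(axis=0) - pad
        x2, y2 = pts.max(axis=0) + pad
        rois.append((max(0, int(x1)), max(0, int(y1)),
                     min(w, int(x2)), min(h, int(y2))))
    return rois

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic herds in an 8000 x 6000 image, plus one isolated animal.
    centres = np.vstack([rng.normal((1500, 1200), 80, size=(30, 2)),
                         rng.normal((6200, 4500), 80, size=(40, 2)),
                         [[4000, 3000]]])
    for roi in propose_rois(centres, image_shape=(6000, 8000)):
        print("fine-detection ROI:", roi)
```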

20 pages, 6578 KB  
Article
High-Resolution Spatiotemporal-Coded Differential Eddy-Current Array Probe for Defect Detection in Metal Substrates
by Qi OuYang, Yuke Meng, Lun Huang and Yun Li
Sensors 2026, 26(2), 537; https://doi.org/10.3390/s26020537 - 13 Jan 2026
Abstract
To address the problems of weak geometric features, low signal response amplitude, and insufficient spatial resolvability of near-surface defects in metal substrates, a high-resolution spatiotemporal-coded eddy-current array probe is proposed. The probe adopts an array topology with time-multiplexed excitation and adjacent differential reception, achieving a balance between high common-mode rejection ratio and high-density spatial sampling. First, a theoretical electromagnetic coupling model between the probe and the metal substrate is established, and finite-element simulations are conducted to investigate the evolution of the skin effect, eddy-current density distribution, and differential impedance response over an excitation frequency range of 1–10 MHz. Subsequently, a 64-channel M-DECA probe and an experimental testing platform are developed, and frequency-sweeping experiments are carried out under different excitation conditions. Experimental results indicate that, under a 50 kHz excitation frequency, the array eddy-current response achieves an optimal trade-off between signal amplitude and spatial geometric consistency. Furthermore, based on the pixel-to-physical coordinate mapping relationship, the lateral equivalent diameters of near-surface defects with different characteristic scales are quantitatively characterized, with relative errors of 6.35%, 4.29%, 3.98%, 3.50%, and 5.80%, respectively. Regression-based quantitative analysis reveals a power-law relationship between defect area and the amplitude of the differential eddy-current array response, with a coefficient of determination R² = 0.9034 for the bipolar peak-to-peak feature. The proposed M-DECA probe enables high-resolution imaging and quantitative characterization of near-surface defects in metal substrates, providing an effective solution for electromagnetic detection of near-surface, low-contrast defects. Full article
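
The reported power-law relationship between defect area and differential response amplitude corresponds to an ordinary linear fit in log-log space, as the short NumPy sketch below shows. The synthetic defect areas, amplitudes, and coefficients are placeholders, not the paper's measurements or its reported values.

```python
# Fit a power law amplitude = a * area**b via linear regression in log-log
# space; the synthetic areas/amplitudes and coefficients are placeholders.
import numpy as np

rng = np.random.default_rng(0)
area = np.array([0.8, 1.5, 3.0, 5.5, 9.0])          # defect areas [mm^2], assumed
true_a, true_b = 0.12, 0.65                         # assumed ground-truth law
amplitude = true_a * area**true_b * rng.lognormal(0.0, 0.05, size=area.size)

# Linear least squares on log(amplitude) = log(a) + b * log(area).
b, log_a = np.polyfit(np.log(area), np.log(amplitude), 1)
pred = np.exp(log_a) * area**b
ss_res = np.sum((np.log(amplitude) - np.log(pred)) ** 2)
ss_tot = np.sum((np.log(amplitude) - np.log(amplitude).mean()) ** 2)
print(f"fitted a = {np.exp(log_a):.3f}, b = {b:.3f}, R^2 = {1 - ss_res / ss_tot:.4f}")
```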
