Search Results (491)

Search Parameters:
Keywords = local texture enhancement

19 pages, 4609 KB  
Article
Geospatial Analysis of Soil Quality Parameters and Soil Health in the Lower Mahanadi Basin, India
by Sagar Kumar Swain, Bikash Ranjan Parida, Ananya Mallick, Chandra Shekhar Dwivedi, Manish Kumar, Arvind Chandra Pandey and Navneet Kumar
GeoHazards 2025, 6(4), 71; https://doi.org/10.3390/geohazards6040071 (registering DOI) - 1 Nov 2025
Abstract
The lower Mahanadi basin in eastern India is experiencing significant land and soil transformations that directly influence agricultural sustainability and ecosystem resilience. In this study, we used geospatial techniques to analyze the spatio-temporal variability of soil quality and land cover between 2011 and 2020 in the lower Mahanadi basin. The results revealed that cropland decreased from 39,493.2 to 37,495.9 km2, while forest cover increased from 12,401.2 to 13,822.2 km2, enhancing soil organic carbon (>290 g/kg) and improving fertility. Grassland recovered from 4826.3 to 5432.1 km2, wastelands declined from 133.3 to 93.2 km2, and water bodies expanded from 184.3 to 191.4 km2, reflecting positive land–soil interactions. Soil quality was evaluated using the Simple Additive Soil Quality Index (SQI), with bulk density, organic carbon, and nitrogen as core indicators, chosen because they represent the essential physical, chemical, and biological components influencing soil functionality and fertility. The SQI revealed spatial variability in texture, organic carbon, nitrogen, and bulk density at different depths. SQI values indicated high soil quality (SQI > 0.65) in northern and northwestern zones, supported by neutral to slightly alkaline pH (6.2–7.4), nitrogen exceeding 5.29 g/kg, and higher organic carbon stocks (>48.8 t/ha). In contrast, central and southwestern regions recorded low SQI (0.15–0.35) due to compaction (bulk density up to 1.79 g/cm3) and fertility loss. Clay-rich soils (>490 g/kg) enhanced nutrient retention, whereas sandy soils (>320 g/kg) in the south increased leaching risks. Integration of LULC with soil quality confirms forest expansion as a driver of resilience, while agricultural intensification contributed to localized degradation. These findings emphasize the need for depth-specific soil management and integrated land-use planning to ensure food security and ecological sustainability. Full article
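A simple additive SQI of this kind is typically computed by normalizing each indicator to a 0–1 score and averaging; the Python sketch below uses hypothetical example values and assumed linear "more is better" / "less is better" scoring ranges, so it illustrates the idea rather than the authors' exact scoring functions.

import numpy as np

# Hypothetical indicator values for one soil sample (units as in the abstract).
bulk_density = 1.45      # g/cm3 -- "less is better" (compaction indicator)
organic_carbon = 210.0   # g/kg  -- "more is better"
nitrogen = 4.1           # g/kg  -- "more is better"

# Assumed plausible min/max ranges, used only for linear normalization.
def score_more_is_better(x, lo, hi):
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def score_less_is_better(x, lo, hi):
    return 1.0 - score_more_is_better(x, lo, hi)

scores = np.array([
    score_less_is_better(bulk_density, 1.0, 1.8),
    score_more_is_better(organic_carbon, 50.0, 300.0),
    score_more_is_better(nitrogen, 1.0, 6.0),
])

# Simple additive index: unweighted mean of the indicator scores.
sqi = scores.mean()
print(f"SQI = {sqi:.2f}")  # values > 0.65 would fall in the abstract's 'high quality' class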

19 pages, 1895 KB  
Article
Cross-Context Aggregation for Multi-View Urban Scene and Building Facade Matching
by Yaping Yan and Yuhang Zhou
ISPRS Int. J. Geo-Inf. 2025, 14(11), 425; https://doi.org/10.3390/ijgi14110425 - 31 Oct 2025
Abstract
Accurate and robust feature matching across multi-view urban imagery is fundamental for urban mapping, 3D reconstruction, and large-scale spatial alignment. Real-world urban scenes involve significant variations in viewpoint, illumination, and occlusion, as well as repetitive architectural patterns that make correspondence estimation challenging. To address these issues, we propose the Cross-Context Aggregation Matcher (CCAM), a detector-free framework that jointly leverages multi-scale local features, long-range contextual information, and geometric priors to produce spatially consistent matches. Specifically, CCAM integrates a multi-scale local enhancement branch with a parallel self- and cross-attention Transformer, enabling the model to preserve detailed local structures while maintaining a coherent global context. In addition, an independent positional encoding scheme is introduced to strengthen geometric reasoning in repetitive or low-texture regions. Extensive experiments demonstrate that CCAM outperforms state-of-the-art methods, achieving up to +31.8%, +19.1%, and +11.5% improvements in AUC@{5°, 10°, 20°} over detector-based approaches and up to 1.72% higher precision compared with detector-free counterparts. These results confirm that CCAM delivers reliable and spatially coherent matches, thereby facilitating downstream geospatial applications. Full article
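The abstract does not specify CCAM's parallel self- and cross-attention beyond the Transformer design; the minimal PyTorch sketch below shows one generic way to run self- and cross-attention over two views' feature tokens in parallel and combine the results. Module names, dimensions, and the residual fusion are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class ParallelSelfCrossAttention(nn.Module):
    """Generic parallel self-/cross-attention block over two feature-token sets."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) descriptors from the two views.
        sa, _ = self.self_attn(feat_a, feat_a, feat_a)   # intra-view context
        ca, _ = self.cross_attn(feat_a, feat_b, feat_b)  # inter-view context
        return self.norm(feat_a + sa + ca)               # residual fusion of both contexts

tokens_a = torch.randn(1, 1024, 256)   # e.g., a 32x32 feature map flattened into tokens
tokens_b = torch.randn(1, 1024, 256)
out = ParallelSelfCrossAttention()(tokens_a, tokens_b)
print(out.shape)  # torch.Size([1, 1024, 256])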

23 pages, 5381 KB  
Article
Multi-Scale Multi-Branch Convolutional Neural Network on Google Earth Engine for Root-Zone Soil Salinity Retrieval in Arid Agricultural Areas
by Wenli Dong, Xinjun Wang, Songrui Ning, Wanzhi Zhou, Shenghan Gao, Chenyu Li, Yu Huang, Luan Dong and Jiandong Sheng
Agronomy 2025, 15(11), 2534; https://doi.org/10.3390/agronomy15112534 - 30 Oct 2025
Abstract
Soil salinization has become a critical constraint on agricultural productivity and ecological sustainability in arid regions. The accurate mapping of its spatial distribution is essential for sustainable land management. Although many studies have used satellite remote sensing combined with machine learning or convolutional neural networks (CNN) for soil salinity monitoring, most CNN approaches rely on single-scale convolution kernels. This limits their ability to simultaneously capture fine local detail and broader spatial patterns. In this study, we developed a multi-scale deep learning framework to enhance salinity prediction accuracy, targeting root-zone soil salinity in the Wei-Ku Oasis. Sentinel-2 multispectral imagery and Sentinel-1 radar backscatter data, together with topographic, climatic, soil texture, and groundwater covariates, were integrated into a unified dataset. We implemented the workflow using the Google Earth Engine (GEE; earthengine-api 0.1.419) and Python (version 3.8.18) platforms, applying the Sequential Forward Selection (SFS) algorithm to identify the optimal feature subset for each model. A multi-branch convolutional neural network (MB-CNN) with parallel 1 × 1 and 3 × 3 convolutional branches was constructed and compared against random forest (RF), 1 × 1-CNN, and 3 × 3-CNN models. On the validation set, MB-CNN achieved the best performance (R2 = 0.752, MAE = 0.789, RMSE = 1.051 dS∙m−1, nRMSE = 0.104), showing stronger accuracy, lower error, and better stability than the other models. The soil salinity inversion map based on MB-CNN revealed distinct spatial patterns consistent with known hydrogeological and topographic controls. This study innovatively introduces a parallel multi-scale convolutional kernel architecture to construct the multi-branch CNN model. This approach captures environmental characteristics of soil salinity across multiple spatial scales, effectively enhancing the accuracy and stability of soil salinity inversion and providing new insights for remote sensing modeling of soil properties. Full article
(This article belongs to the Section Farming Sustainability)
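The central architectural idea above, parallel 1 × 1 and 3 × 3 convolutional branches whose outputs are merged, can be sketched in a few lines of PyTorch; the channel counts and fusion-by-concatenation choice below are assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    """Parallel 1x1 and 3x3 branches over the same input, fused by concatenation."""
    def __init__(self, in_ch=16, branch_ch=32):
        super().__init__()
        self.branch1 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, kernel_size=1),
                                     nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1),
                                     nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * branch_ch, branch_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.branch1(x), self.branch3(x)], dim=1))

# A patch of stacked covariates (spectral, radar, terrain, ...) around a sample point.
patch = torch.randn(8, 16, 9, 9)       # (batch, covariate channels, height, width)
features = MultiBranchBlock()(patch)   # multi-scale features feeding the salinity regressor
print(features.shape)                  # torch.Size([8, 32, 9, 9])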

22 pages, 6998 KB  
Article
LDW-DETR: An Efficient Tomato Leaf Disease Detection Algorithm Based on Enhanced RT-DETR
by Hua Yang, Hao Xue, Yanjie Lyu, Mingzhi Mu, Tianwei Tang and Zhongke Huang
Appl. Sci. 2025, 15(21), 11620; https://doi.org/10.3390/app152111620 - 30 Oct 2025
Abstract
Tomato is one of the most important economic crops in the world, but it is prone to diseases during growth, so the detection of tomato diseases is very important. However, when detecting tomato diseases in natural environments, existing models are easily affected by environmental factors such as occlusion and illumination, as well as by the small size of lesions. In response to these challenges, this paper proposes LDW-DETR, a tomato leaf disease detection framework based on multi-scale fusion. First, a local-global feature fusion (LGFF) module, inspired by the PPA module, is designed to effectively capture local and global features, thereby enhancing the detection of small lesions in complex backgrounds. Second, the CSPDarknet architecture is introduced as the backbone network of LDW-DETR to improve the efficiency of feature extraction. In addition, the bottleneck layer of the C2f component is improved by integrating Strip Block and the Contextualized Gated Linear Unit (CGLU) to enhance the perception of lesion edges and textures. Finally, the WIoU v3 loss function is used to optimize the bounding box regression process. The experimental results show that compared with RT-DETR, the LDW-DETR model improves mAP@0.5 and mAP@0.5–0.95 by 2.6% and 3.7%, respectively, while the number of parameters is reduced by 17.9%. In addition, it maintains high robustness and generalization ability in cross-dataset experiments. These results show that LDW-DETR delivers strong detection performance and generalization in the tomato leaf disease detection task. Full article
(This article belongs to the Section Agricultural Science and Technology)

25 pages, 12749 KB  
Article
ADFE-DET: An Adaptive Dynamic Feature Enhancement Algorithm for Weld Defect Detection
by Xiaocui Wu, Changjun Liu, Hao Zhang and Pengyu Xu
Appl. Sci. 2025, 15(21), 11595; https://doi.org/10.3390/app152111595 - 30 Oct 2025
Abstract
Welding is a critical joining process in modern manufacturing, with defects contributing to 50–80% of structural failures. Traditional inspection methods are often inefficient, subjective, and inconsistent. To address challenges in weld defect detection—including scale variation, morphological complexity, low contrast, and sample imbalance—this paper proposes ADFE-DET, an adaptive dynamic feature enhancement algorithm. The approach introduces three core innovations: the Dynamic Selection Cross-stage Cascade Feature Block (DSCFBlock) captures fine texture features via edge-preserving dynamic selection attention; the Adaptive Hierarchical Spatial Feature Pyramid Network (AHSFPN) achieves adaptive multi-scale feature integration through directional channel attention and hierarchical fusion; and the Multi-Directional Differential Lightweight Head (MDDLH) enables precise defect localization via multi-directional differential convolution while maintaining a lightweight architecture. Experiments on three public datasets (Weld-DET, NEU-DET, PKU-Market-PCB) show that ADFE-DET improves mAP50 by 2.16%, 2.73%, and 1.81%, respectively, over baseline YOLOv11n, while reducing parameters by 34.1%, computational complexity by 4.6%, and achieving 105 FPS inference speed. The results demonstrate that ADFE-DET provides an effective and practical solution for intelligent industrial weld quality inspection. Full article

17 pages, 3889 KB  
Article
STGAN: A Fusion of Infrared and Visible Images
by Liuhui Gong, Yueping Han and Ruihong Li
Electronics 2025, 14(21), 4219; https://doi.org/10.3390/electronics14214219 - 29 Oct 2025
Viewed by 84
Abstract
The fusion of infrared and visible images provides critical value in computer vision by integrating their complementary information, especially in industrial detection, where it provides a more reliable data basis for subsequent defect recognition. This paper presents STGAN, a novel Generative Adversarial Network framework based on a Swin Transformer for high-quality infrared and visible image fusion. Firstly, the generator adopts a U-Net architecture with a Swin Transformer backbone for feature extraction, and an improved W-MSA is introduced into the bottleneck layer to enhance local attention and improve the expression of cross-modal features. Secondly, a Markovian discriminator performs patch-level discrimination. Then, the core GAN framework is leveraged to guarantee the retention of both infrared thermal radiation and visible-light texture details in the generated image, so as to improve the clarity and contrast of the fused image. Finally, simulation verification showed that six of the seven evaluation metrics ranked in the top two, with key indicators such as PSNR, VIF, MI, and EN achieving optimal or suboptimal values. The experimental results on the general dataset show that this method is superior to advanced methods in terms of subjective vision and objective indicators, and it can effectively enhance fine structure and thermal anomaly information in the image, giving it great potential for industrial surface defect detection. Full article
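A Markovian discriminator is the patch-based (PatchGAN-style) design in which each output unit judges only a local receptive field rather than the whole image; the PyTorch sketch below shows such a discriminator in generic form, with layer widths that are illustrative rather than STGAN's exact configuration.

import torch
import torch.nn as nn

def patch_discriminator(in_ch=1):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores, one per local patch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(256, 1, 4, stride=1, padding=1),   # per-patch logits, no global pooling
    )

fused = torch.randn(1, 1, 256, 256)   # a fused (or real) single-channel image
scores = patch_discriminator()(fused)
print(scores.shape)                   # torch.Size([1, 1, 31, 31]) -- one score per local patch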

23 pages, 5273 KB  
Article
Assessing an Optical Tool for Identifying Tidal and Associated Mangrove Swamp Rice Fields in Guinea-Bissau, West Africa
by Jesus Céspedes, Jaime Garbanzo-León, Marina Temudo and Gabriel Garbanzo
Land 2025, 14(11), 2144; https://doi.org/10.3390/land14112144 - 28 Oct 2025
Viewed by 249
Abstract
An optical remote sensing approach was developed to identify areas with high and low salinity within the mangrove swamp rice system in West Africa. Conducted between 2019 and 2024 in Guinea-Bissau, this study examined two contrasting rice-growing environments, tidal mangrove (TM) and associated mangrove (AM), to assess changes in vegetation dynamics, soil salinity concentration, and soil chemical properties. Field sampling was conducted during the dry season to avoid waterlogging, and soil analyses included texture, cation exchange capacity, micronutrients, and electrical conductivity (ECe). Meteorological stations recorded rainfall and environmental conditions over the period. Moreover, orthorectified and atmospherically corrected surface reflectance satellite imagery from PlanetScope and Sentinel-2 was selected due to their high spatial resolution and revisit frequency. From these data, vegetation dynamics were monitored using the Normalized Difference Vegetation Index (NDVI), with change detection calculated as the difference in NDVI between sequential images (ΔNDVI). Thresholds of 0.15 ≤ NDVI ≤ 0.5 and ΔNDVI > 0.1 were tested to identify significant vegetation growth, with smaller polygons (<1000 m2) removed to reduce noise. In this process, at least three temporal images per season were analyzed, and multi-year intersections were performed to enhance accuracy. Our parameter optimization tests found that a locally calibrated NDVI threshold of 0.26 improved site classification. Thus, this integrated field–remote sensing approach proved to be a reproducible and cost-effective tool for detecting AM and TM environments and assessing vegetation responses to seasonal changes, contributing to improved land and water management in the salinity-affected mangrove swamp rice system. Full article
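The detection rule quoted in the abstract (0.15 ≤ NDVI ≤ 0.5, ΔNDVI > 0.1, small polygons discarded) maps directly to array operations; the NumPy sketch below uses synthetic reflectance arrays as stand-ins for the loaded bands and omits the vector-based polygon-area filtering step.

import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index from surface-reflectance bands."""
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance arrays for two acquisition dates on the same grid.
rng = np.random.default_rng(0)
red_t1, nir_t1 = rng.uniform(0.02, 0.3, (500, 500)), rng.uniform(0.1, 0.6, (500, 500))
red_t2, nir_t2 = rng.uniform(0.02, 0.3, (500, 500)), rng.uniform(0.1, 0.6, (500, 500))

ndvi_t1, ndvi_t2 = ndvi(nir_t1, red_t1), ndvi(nir_t2, red_t2)
delta_ndvi = ndvi_t2 - ndvi_t1

# Thresholds from the abstract: moderate vegetation with a clear positive change.
growth_mask = (ndvi_t2 >= 0.15) & (ndvi_t2 <= 0.5) & (delta_ndvi > 0.1)
print("flagged pixels:", int(growth_mask.sum()))
# In the published workflow, a locally calibrated NDVI threshold of 0.26 and removal of
# polygons smaller than 1000 m2 are applied before intersecting masks across years.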

21 pages, 3381 KB  
Article
Aero-Engine Ablation Defect Detection with Improved CLR-YOLOv11 Algorithm
by Yi Liu, Jiatian Liu, Yaxi Xu, Qiang Fu, Jide Qian and Xin Wang
Sensors 2025, 25(21), 6574; https://doi.org/10.3390/s25216574 - 25 Oct 2025
Viewed by 430
Abstract
Aero-engine ablation detection is a critical task in aircraft health management, yet existing rotation-based object detection methods often face challenges of high computational complexity and insufficient local feature extraction. This paper proposes an improved YOLOv11 algorithm incorporating Context-guided Large-kernel attention and a Rotated detection head, called CLR-YOLOv11. The model achieves synergistic improvement in both detection efficiency and accuracy through dual structural optimization, with its innovations primarily embodied in the following three tightly coupled strategies: (1) Targeted Data Preprocessing Pipeline Design: To address challenges such as limited sample size, low overall image brightness, and noise interference, we designed an ordered data augmentation and normalization pipeline. This pipeline is not a mere stacking of techniques but strategically enhances sample diversity through geometric transformations (random flipping, rotation), hybrid augmentations (Mixup, Mosaic), and pixel-value transformations (histogram equalization, Gaussian filtering). All processed images subsequently undergo Z-Score normalization. This order-aware pipeline design effectively improves the quality, diversity, and consistency of the input data. (2) Context-Guided Feature Fusion Mechanism: To overcome the limitations of traditional Convolutional Neural Networks in modeling long-range contextual dependencies between ablation areas and surrounding structures, we replaced the original C3k2 layer with the C3K2CG module. This module adaptively fuses local textural details with global semantic information through a context-guided mechanism, enabling the model to more accurately understand the gradual boundaries and spatial context of ablation regions. (3) Efficiency-Oriented Large-Kernel Attention Optimization: To expand the receptive field while strictly controlling the additional computational overhead introduced by rotated detection, we replaced the C2PSA module with the C2PSLA module. By employing large-kernel decomposition and a spatial selective focusing strategy, this module significantly reduces computational load while maintaining multi-scale feature perception capability, ensuring the model meets the demands of real-time applications. Experiments on a self-built aero-engine ablation dataset demonstrate that the improved model achieves 78.5% mAP@0.5:0.95, representing a 4.2% improvement over the YOLOv11-obb model trained without the specialized data augmentation. This study provides an effective solution for high-precision real-time aviation inspection tasks. Full article
(This article belongs to the Special Issue Advanced Neural Architectures for Anomaly Detection in Sensory Data)
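As a rough illustration of the order-aware preprocessing described above, the Python sketch below applies a geometric flip, histogram equalization, Gaussian filtering, and Z-score normalization to a grayscale image with OpenCV and NumPy; it is a simplified stand-in for the full pipeline (Mixup, Mosaic, and rotation are omitted), and the kernel sizes and other parameters are assumptions.

import cv2
import numpy as np

def preprocess(gray_uint8, flip=True):
    """Ordered sketch: geometric aug -> pixel-value aug -> Z-score normalization."""
    img = cv2.flip(gray_uint8, 1) if flip else gray_uint8   # horizontal flip (geometric)
    img = cv2.equalizeHist(img)                             # histogram equalization (brightness)
    img = cv2.GaussianBlur(img, (3, 3), 0)                  # light Gaussian filtering (noise)
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-6)          # Z-score normalization

sample = np.random.randint(0, 60, (640, 640), dtype=np.uint8)  # dark, noisy stand-in image
out = preprocess(sample)
print(out.shape, round(float(out.mean()), 3), round(float(out.std()), 3))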

16 pages, 14135 KB  
Article
Underwater Image Enhancement with a Hybrid U-Net-Transformer and Recurrent Multi-Scale Modulation
by Zaiming Geng, Jiabin Huang, Xiaotian Wang, Yu Zhang, Xinnan Fan and Pengfei Shi
Mathematics 2025, 13(21), 3398; https://doi.org/10.3390/math13213398 - 25 Oct 2025
Viewed by 305
Abstract
The quality of underwater imagery is inherently degraded by light absorption and scattering, a challenge that severely limits its application in critical domains such as marine robotics and archeology. While existing enhancement methods, including recent hybrid models, attempt to address this, they often struggle to restore fine-grained details without introducing visual artifacts. To overcome this limitation, this work introduces a novel hybrid U-Net-Transformer (UTR) architecture that synergizes local feature extraction with global context modeling. The core innovation is a Recurrent Multi-Scale Feature Modulation (R-MSFM) mechanism, which, unlike prior recurrent refinement techniques, employs a gated modulation strategy across multiple feature scales within the decoder to iteratively refine textural and structural details with high fidelity. This approach effectively preserves spatial information during upsampling. Extensive experiments demonstrate the superiority of the proposed method. On the EUVP dataset, UTR achieves a PSNR of 28.347 dB, a significant gain of +3.947 dB over the state-of-the-art UWFormer. Moreover, it attains a top-ranking UIQM score of 3.059 on the UIEB dataset, underscoring its robustness. The results confirm that UTR provides a computationally efficient and highly effective solution for underwater image enhancement. Full article
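The recurrent multi-scale feature modulation is the paper's core novelty and its exact formulation is not given in the abstract; the PyTorch fragment below shows one generic gated modulation of a decoder feature map by an upsampled coarser-scale map, purely to illustrate the kind of operation involved, under assumed channel counts and gating choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedModulation(nn.Module):
    """Modulate a fine-scale feature map with gated information from a coarser scale."""
    def __init__(self, ch=64):
        super().__init__()
        self.gate = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)    # per-pixel gates
        self.update = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)  # candidate refinement

    def forward(self, fine, coarse):
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear", align_corners=False)
        pair = torch.cat([fine, coarse_up], dim=1)
        g = torch.sigmoid(self.gate(pair))               # how much of the update to admit
        return fine + g * torch.tanh(self.update(pair))  # gated residual refinement

fine = torch.randn(1, 64, 128, 128)    # decoder feature at the current scale
coarse = torch.randn(1, 64, 64, 64)    # feature from the previous (coarser) scale
print(GatedModulation()(fine, coarse).shape)  # torch.Size([1, 64, 128, 128])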

23 pages, 4617 KB  
Article
IAASNet: Ill-Posed-Aware Aggregated Stereo Matching Network for Cross-Orbit Optical Satellite Images
by Jiaxuan Huang, Haoxuan Sun and Taoyang Wang
Remote Sens. 2025, 17(21), 3528; https://doi.org/10.3390/rs17213528 - 24 Oct 2025
Viewed by 223
Abstract
Stereo matching estimates disparity by finding correspondences between stereo image pairs. Under ill-posed conditions such as geometric differences, radiometric differences, and temporal changes, accurate estimation becomes difficult due to insufficient matching information. In remote sensing imagery, such ill-posed regions are more common because of complex imaging conditions. This problem is particularly pronounced in cross-track satellite stereo images, where existing methods often fail to effectively handle noise due to insufficient features or excessive reliance on prior assumptions. In this work, we propose an ill-posed-aware aggregated satellite stereo matching network, which integrates monocular depth estimation with an ill-posed-guided adaptive aware geometry fusion module to balance local and global features while reducing noise interference. In addition, we design an enhanced mask augmentation strategy during training to simulate occlusions and texture loss in complex scenarios, thereby improving robustness. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches on the US3D dataset, achieving a 5.38% D1-error and 0.958 pixels endpoint error (EPE). In particular, our method shows significant advantages in ill-posed regions. Overall, the proposed network not only exhibits strong feature learning ability but also demonstrates robust generalization in real-world remote sensing applications. Full article
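The two reported metrics have standard definitions in stereo matching: endpoint error (EPE) is the mean absolute disparity error, and D1 is the fraction of pixels whose error exceeds both 3 px and 5% of the ground-truth disparity. A small NumPy helper under those standard definitions, with hypothetical disparity maps:

import numpy as np

def stereo_metrics(pred, gt, valid=None):
    """EPE and D1-error under the usual KITTI-style definition (3 px and 5% of GT)."""
    if valid is None:
        valid = np.isfinite(gt)
    err = np.abs(pred[valid] - gt[valid])
    epe = err.mean()
    d1 = np.mean((err > 3.0) & (err > 0.05 * np.abs(gt[valid])))
    return epe, d1

gt = np.random.uniform(1, 64, (512, 512))         # hypothetical ground-truth disparity
pred = gt + np.random.normal(0, 1.0, gt.shape)    # hypothetical network prediction
epe, d1 = stereo_metrics(pred, gt)
print(f"EPE = {epe:.3f} px, D1 = {100 * d1:.2f} %")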

14 pages, 4834 KB  
Article
Crowd Gathering Detection Method Based on Multi-Scale Feature Fusion and Convolutional Attention
by Kamil Yasen, Juting Zhou, Nan Zhou, Ke Qin, Zhiguo Wang and Ye Li
Sensors 2025, 25(21), 6550; https://doi.org/10.3390/s25216550 - 24 Oct 2025
Viewed by 185
Abstract
With rapid urbanization and growing population inflows into metropolitan areas, crowd gatherings have become increasingly frequent and dense, posing significant challenges to public safety management. Although existing crowd gathering detection methods have achieved notable progress, they still face major limitations: most rely heavily on local texture or density features and lack the capacity to model contextual information, making them ineffective under severe occlusions and complex backgrounds. Additionally, fixed-scale feature extraction strategies struggle to adapt to crowd regions with varying densities and scales, and insufficient attention to densely populated areas hinders the capture of critical local features. To overcome these challenges, we propose a point-supervised framework named Multi-Scale Convolutional Attention Network (MSCANet). MSCANet adopts a context-aware architecture and integrates multi-scale feature extraction modules and convolutional attention mechanisms, enabling it to dynamically adapt to varying crowd densities while focusing on key regions. This enhances feature representation in complex scenes and improves detection performance. Extensive experiments on public datasets demonstrate that MSCANet achieves high counting accuracy and robustness, particularly in dense and occluded environments, showing strong potential for real-world deployment. Full article
(This article belongs to the Section Intelligent Sensors)

16 pages, 4636 KB  
Article
Radiomics for Dynamic Lung Cancer Risk Prediction in USPSTF-Ineligible Patients
by Morteza Salehjahromi, Hui Li, Eman Showkatian, Maliazurina B. Saad, Mohamed Qayati, Sherif M. Ismail, Sheeba J. Sujit, Amgad Muneer, Muhammad Aminu, Lingzhi Hong, Xiaoyu Han, Simon Heeke, Tina Cascone, Xiuning Le, Natalie Vokes, Don L. Gibbons, Iakovos Toumazis, Edwin J. Ostrin, Mara B. Antonoff, Ara A. Vaporciyan, David Jaffray, Fernando U. Kay, Brett W. Carter, Carol C. Wu, Myrna C. B. Godoy, J. Jack Lee, David E. Gerber, John V. Heymach, Jianjun Zhang and Jia Wu
Cancers 2025, 17(21), 3406; https://doi.org/10.3390/cancers17213406 - 23 Oct 2025
Viewed by 341
Abstract
Background: Non-smokers and individuals with minimal smoking history represent a significant proportion of lung cancer cases but are often overlooked in current risk assessment models. Pulmonary nodules are commonly detected incidentally—appearing in approximately 24–31% of all chest CT scans regardless of smoking status. However, most established risk models, such as the Brock model, were developed using cohorts heavily enriched with individuals who have substantial smoking histories. This limits their generalizability to non-smoking and light-smoking populations, highlighting the need for more inclusive and tailored risk prediction strategies. Purpose: We aimed to develop a longitudinal radiomics-based approach for lung cancer risk prediction, integrating time-varying radiomic modeling to enhance early detection in USPSTF-ineligible patients. Methods: Unlike conventional models that rely on a single scan, we conducted a longitudinal analysis of 122 patients who were later diagnosed with lung cancer, with a total of 622 CT scans analyzed. Of these patients, 69% were former smokers, while 30% had never smoked. Quantitative radiomic features were extracted from serial chest CT scans to capture temporal changes in nodule evolution. A time-varying survival model was implemented to dynamically assess lung cancer risk. Additionally, we evaluated the integration of handcrafted radiomic features and the deep learning-based Sybil model to determine the added value of combining local nodule characteristics with global lung assessments. Results: Our radiomic analysis identified specific CT patterns associated with malignant transformation, including increased nodule size, voxel intensity, and textural entropy, which serve as indicators of tumor heterogeneity and progression. Integrating radiomics, delta-radiomics, and longitudinal imaging features resulted in the optimal predictive performance during cross-validation (concordance index [C-index]: 0.69), surpassing that of models using demographics alone (C-index: 0.50) and Sybil alone (C-index: 0.54). Compared to the Brock model (67% accuracy, 100% sensitivity, 33% specificity), our composite risk model achieved 78% accuracy, 89% sensitivity, and 67% specificity, demonstrating improved early cancer risk stratification. Kaplan–Meier curves and individualized cancer development probability functions further validated the model's ability to track dynamic risk progression for individual patients. Visual analysis of longitudinal CT scans confirmed alignment between predicted risk and evolving nodule characteristics. Conclusions: Our study demonstrates that integrating radiomics, Sybil, and clinical factors enhances future lung cancer risk prediction in USPSTF-ineligible patients, outperforming existing models and supporting personalized screening and early intervention strategies. Full article
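Delta-radiomics in this longitudinal setting contrasts the same handcrafted features across consecutive scans; the NumPy illustration below uses hypothetical feature vectors, and the absolute-change and per-month-rate definitions are common conventions rather than the study's stated formulas.

import numpy as np

# Hypothetical radiomic feature vectors for one nodule at three time points
# (e.g., volume, mean attenuation, entropy), plus scan times in months from baseline.
features = np.array([
    [410.0, -35.2, 4.81],
    [455.0, -30.8, 4.97],
    [602.0, -21.4, 5.32],
])
months = np.array([0.0, 6.0, 13.0])

delta = np.diff(features, axis=0)           # change between consecutive scans
rate = delta / np.diff(months)[:, None]     # time-normalized change (per month)
print("delta features:\n", delta)
print("monthly rate of change:\n", rate)
# Longitudinal features of this kind feed the time-varying survival model,
# alongside the static radiomics and the Sybil global-lung score.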

24 pages, 11432 KB  
Article
MRDAM: Satellite Cloud Image Super-Resolution via Multi-Scale Residual Deformable Attention Mechanism
by Liling Zhao, Zichen Liao and Quansen Sun
Remote Sens. 2025, 17(21), 3509; https://doi.org/10.3390/rs17213509 - 22 Oct 2025
Viewed by 329
Abstract
High-resolution meteorological satellite cloud imagery plays a crucial role in diagnosing and forecasting severe convective weather phenomena characterized by suddenness and locality, such as tropical cyclones. However, constrained by imaging principles and various internal/external interferences during satellite data acquisition, current satellite imagery often fails to meet the spatiotemporal resolution requirements for fine-scale monitoring of these weather systems. Particularly for real-time tracking of tropical cyclone genesis-evolution dynamics and capturing detailed cloud structure variations within cyclone cores, existing spatial resolutions remain insufficient. Therefore, developing super-resolution techniques for meteorological satellite cloud imagery through software-based approaches holds significant application potential. This paper proposes a Multi-scale Residual Deformable Attention Model (MRDAM) based on Generative Adversarial Networks (GANs), specifically designed for satellite cloud image super-resolution tasks considering their morphological diversity and non-rigid deformation characteristics. The generator architecture incorporates two key components: a Multi-scale Feature Progressive Fusion Module (MFPFM), which enhances texture detail preservation and spectral consistency in reconstructed images, and a Deformable Attention Additive Fusion Module (DAAFM), which captures irregular cloud pattern features through adaptive spatial-attention mechanisms. Comparative experiments against multiple GAN-based super-resolution baselines demonstrate that MRDAM achieves superior performance in both objective evaluation metrics (PSNR/SSIM) and subjective visual quality, confirming its effectiveness for satellite cloud image super-resolution tasks. Full article
(This article belongs to the Special Issue Neural Networks and Deep Learning for Satellite Image Processing)

22 pages, 1678 KB  
Article
Image Completion Network Considering Global and Local Information
by Yubo Liu, Ke Chen and Alan Penn
Buildings 2025, 15(20), 3746; https://doi.org/10.3390/buildings15203746 - 17 Oct 2025
Viewed by 260
Abstract
Accurate depth image inpainting in complex urban environments remains a critical challenge due to occlusions, reflections, and sensor limitations, which often result in significant data loss. We propose a hybrid deep learning framework that explicitly combines local and global modelling through Convolutional Neural Networks (CNNs) and Transformer modules. The model employs a multi-branch parallel architecture, where the CNN branch captures fine-grained local textures and edges, while the Transformer branch models global semantic structures and long-range dependencies. We introduce an optimized attention mechanism, Agent Attention, which differs from existing efficient/linear attention methods by using learnable proxy tokens tailored for urban scene categories (e.g., façades, sky, ground). A content-guided dynamic fusion module adaptively combines multi-scale features to enhance structural alignment and texture recovery. The framework is trained with a composite loss function incorporating pixel accuracy, perceptual similarity, adversarial realism, and structural consistency. Extensive experiments on the Paris StreetView dataset demonstrate that the proposed method achieves state-of-the-art performance, outperforming existing approaches in PSNR, SSIM, and LPIPS metrics. The study highlights the potential of multi-scale modeling for urban depth inpainting and discusses challenges in real-world deployment, ethical considerations, and future directions for multimodal integration. Full article

22 pages, 6497 KB  
Article
Semantic Segmentation of High-Resolution Remote Sensing Images Based on RS3Mamba: An Investigation of the Extraction Algorithm for Rural Compound Utilization Status
by Xinyu Fang, Zhenbo Liu, Su’an Xie and Yunjian Ge
Remote Sens. 2025, 17(20), 3443; https://doi.org/10.3390/rs17203443 - 15 Oct 2025
Viewed by 290
Abstract
In this study, we utilize Gaofen-2 satellite remote sensing images to optimize and enhance the extraction of feature information from rural compounds, addressing a key challenge in high-resolution remote sensing analysis: traditional methods struggle to effectively capture long-distance spatial dependencies for scattered rural compounds. To this end, we implement the RS3Mamba+ deep learning model, which introduces the Mamba state space model (SSM) into its auxiliary branch, leveraging Mamba's sequence modeling advantage to efficiently capture long-range spatial correlations of rural compounds, a critical capability for analyzing sparse rural buildings. This Mamba-assisted branch, combined with multi-directional selective scanning (SS2D) and the enhanced STEM network framework (replacing a single 7 × 7 convolution with two-stage 3 × 3 convolutions to reduce information loss), works synergistically with a ResNet-based main branch for local feature extraction. We further introduce a multiscale attention feature fusion mechanism that optimizes feature extraction and fusion, enhances edge contour extraction accuracy in courtyards, and improves the recognition and differentiation of courtyards from regions with complex textures. The feature information of courtyard utilization status is finally extracted using empirical methods. A typical rural area in Weifang City, Shandong Province, is selected as the experimental sample area. Results show that the extraction accuracy reaches a mean intersection over union (mIoU) of 79.64% and a Kappa coefficient of 0.7889, improving the F1 score by at least 8.12% and mIoU by 4.83% compared with models such as DeepLabv3+ and Transformer. The algorithm is particularly effective at suppressing false alarms triggered by shadows and intricate textures, underscoring its potential as a practical tool for estimating rural vacancy rates. Full article
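Both reported accuracy figures follow standard definitions; given a confusion matrix over the segmentation classes, mIoU and Cohen's Kappa can be computed as in the NumPy sketch below, with a made-up 3-class matrix purely for illustration.

import numpy as np

def miou_and_kappa(cm):
    """Mean IoU and Cohen's Kappa from a (classes x classes) confusion matrix."""
    cm = cm.astype(float)
    tp = np.diag(cm)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)   # per-class intersection over union
    po = tp.sum() / cm.sum()                            # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return iou.mean(), kappa

cm = np.array([[950, 30, 20],     # rows: reference class, columns: predicted class
               [40, 880, 80],
               [25, 60, 915]])
miou, kappa = miou_and_kappa(cm)
print(f"mIoU = {miou:.4f}, Kappa = {kappa:.4f}")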
