Search Results (14,202)

Search Parameters:
Keywords = imaging and sensing

27 pages, 1382 KiB  
Review
Application of Non-Destructive Technology in Plant Disease Detection: Review
by Yanping Wang, Jun Sun, Zhaoqi Wu, Yilin Jia and Chunxia Dai
Agriculture 2025, 15(15), 1670; https://doi.org/10.3390/agriculture15151670 - 1 Aug 2025
Abstract
In recent years, research on plant disease detection has combined artificial intelligence, hyperspectral imaging, unmanned aerial vehicle remote sensing, and other technologies, promoting the transformation of pest and disease control in smart agriculture towards digitalization and artificial intelligence. This review systematically elaborates on the research status of non-destructive detection techniques used for plant disease identification and detection, mainly introducing the following two types of methods: spectral technology and imaging technology. It also elaborates, in detail, on the principles and application examples of each technology and summarizes the advantages and disadvantages of these technologies. This review clearly indicates that non-destructive detection techniques can achieve plant disease and pest detection quickly, accurately, and without damage. In the future, integrating multiple non-destructive detection technologies, developing portable detection devices, and combining more efficient data processing methods will become the core development directions of this field. Full article
(This article belongs to the Section Crop Protection, Diseases, Pests and Weeds)

22 pages, 6482 KiB  
Article
Surface Damage Detection in Hydraulic Structures from UAV Images Using Lightweight Neural Networks
by Feng Han and Chongshi Gu
Remote Sens. 2025, 17(15), 2668; https://doi.org/10.3390/rs17152668 - 1 Aug 2025
Abstract
Timely and accurate identification of surface damage in hydraulic structures is essential for maintaining structural integrity and ensuring operational safety. Traditional manual inspections are time-consuming, labor-intensive, and prone to subjectivity, especially for large-scale or inaccessible infrastructure. Leveraging advancements in aerial imaging, unmanned aerial vehicles (UAVs) enable efficient acquisition of high-resolution visual data across expansive hydraulic environments. However, existing deep learning (DL) models often lack architectural adaptations for the visual complexities of UAV imagery, including low-texture contrast, noise interference, and irregular crack patterns. To address these challenges, this study proposes a lightweight, robust, and high-precision segmentation framework, called LFPA-EAM-Fast-SCNN, specifically designed for pixel-level damage detection in UAV-captured images of hydraulic concrete surfaces. The developed DL-based model integrates an enhanced Fast-SCNN backbone for efficient feature extraction, a Lightweight Feature Pyramid Attention (LFPA) module for multi-scale context enhancement, and an Edge Attention Module (EAM) for refined boundary localization. The experimental results on a custom UAV-based dataset show that the proposed damage detection method achieves superior performance, with a precision of 0.949, a recall of 0.892, an F1 score of 0.906, and an IoU of 87.92%, outperforming U-Net, Attention U-Net, SegNet, DeepLab v3+, I-ST-UNet, and SegFormer. Additionally, it reaches a real-time inference speed of 56.31 FPS, significantly surpassing other models. The experimental results demonstrate the proposed framework’s strong generalization capability and robustness under varying noise levels and damage scenarios, underscoring its suitability for scalable, automated surface damage assessment in UAV-based remote sensing of civil infrastructure. Full article

18 pages, 10780 KiB  
Article
Improving the Universal Performance of Land Cover Semantic Segmentation Through Training Data Refinement and Multi-Dataset Fusion via Redundant Models
by Jae Young Chang, Kwan-Young Oh and Kwang-Jae Lee
Remote Sens. 2025, 17(15), 2669; https://doi.org/10.3390/rs17152669 - 1 Aug 2025
Abstract
Artificial intelligence (AI) has become the mainstream of analysis tools in remote sensing. Various semantic segmentation models have been introduced to segment land cover from aerial or satellite images, and remarkable results have been achieved. However, they often lack universal performance on unseen images, making them challenging to provide as a service. One of the primary reasons for the lack of robustness is overfitting, resulting from errors and inconsistencies in the ground truth (GT). In this study, we propose a method to mitigate these inconsistencies by utilizing redundant models and verify the improvement using a public dataset based on Google Earth images. Redundant models share the same network architecture and hyperparameters but are trained with different combinations of training and validation data on the same dataset. Because of the variations in sample exposure during training, these models yield slightly different inference results. This variability allows for the estimation of pixel-level confidence levels for the GT. The confidence level is incorporated into the GT to influence the loss calculation during the training of the enhanced model. Furthermore, we implemented a consensus model that employs modified masks, where classes with low confidence are substituted by the dominant classes identified through a majority vote from the redundant models. To further improve robustness, we extended the same approach to fuse the dataset with different class compositions based on imagery from the Korea Multipurpose Satellite 3A (KOMPSAT-3A). Performance evaluations were conducted on three network architectures: a simple network, U-Net, and DeepLabV3. In the single-dataset case, the performance of the enhanced and consensus models improved by an average of 2.49% and 2.59% across the network architectures. 
In the multi-dataset scenario, the enhanced models and consensus models showed an average performance improvement of 3.37% and 3.02% across the network architectures, respectively, compared to an average increase of 1.55% without the proposed method. Full article
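The redundant-model confidence estimate described in this abstract reduces to a per-pixel majority vote over the models' predictions. As a hedged illustration only (not the authors' code; the array layout, function name, and tie-breaking behavior are assumptions), a minimal NumPy sketch:

```python
import numpy as np

def pixel_confidence(predictions):
    """Per-pixel majority class and agreement from redundant models.

    predictions: (n_models, H, W) array of integer class labels, one slice
    per redundant model trained on a different train/validation split.
    Returns (majority, confidence): the majority-vote class map and the
    fraction of models agreeing with it (a proxy for GT confidence).
    Ties resolve to the lowest class index via argmax.
    """
    n_models = predictions.shape[0]
    n_classes = int(predictions.max()) + 1
    # Vote count per class at each pixel: shape (n_classes, H, W).
    votes = np.stack([(predictions == c).sum(axis=0) for c in range(n_classes)])
    majority = votes.argmax(axis=0)
    confidence = votes.max(axis=0) / n_models
    return majority, confidence

# Three redundant models agree everywhere except one pixel.
preds = np.array([[[0, 1], [2, 2]],
                  [[0, 1], [2, 0]],
                  [[0, 1], [2, 2]]])
maj, conf = pixel_confidence(preds)
```

In the paper, such a confidence map weights the loss of the enhanced model; pixels where the redundant models disagree contribute less.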
22 pages, 8105 KiB  
Article
Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing
by Jie Han, Jinlei Zhu, Xiaoming Cao, Lei Xi, Zhao Qi, Yongxin Li, Xingyu Wang and Jiaxiu Zou
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665 - 1 Aug 2025
Abstract
The unique characteristics of desert vegetation, such as different leaf morphology, discrete canopy structures, sparse and uneven distribution, etc., pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). The Unmanned Aerial Vehicle (UAV) system can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate through complex terrain, making it suitable for applications in small-scale FVC extraction. In this study, we selected the floodplain fan with Caragana korshinskii Kom as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated the remote sensing extraction method of desert sparse vegetation cover by placing samples across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using methods such as supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) We discovered that the multispectral vegetation index is superior to the visible vegetation index and more suitable for FVC extraction in vegetation-sparse desert regions. (2) By comparing five machine learning regression models, it was found that the XGBoost and KNN models exhibited relatively lower estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating fractional vegetation cover (FVC). In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation. 
This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species. Full article
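The best-performing FVC inversion above is random forest regression on multispectral vegetation indices. A minimal scikit-learn sketch of that setup, using synthetic stand-in data (the real features and FVC labels come from the UAV plots; the toy relation below is purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-plot vegetation indices (4 features, 300 plots).
X = rng.uniform(0.0, 1.0, size=(300, 4))
# Toy FVC target: a smooth mix of two indices plus small noise.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(0.0, 0.02, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
r2 = r2_score(y_te, pred)
mae = mean_absolute_error(y_te, pred)
```

The same R2/RMSE/MAE metrics reported in the abstract can be computed this way on a held-out test split.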

22 pages, 24173 KiB  
Article
ScaleViM-PDD: Multi-Scale EfficientViM with Physical Decoupling and Dual-Domain Fusion for Remote Sensing Image Dehazing
by Hao Zhou, Yalun Wang, Wanting Peng, Xin Guan and Tao Tao
Remote Sens. 2025, 17(15), 2664; https://doi.org/10.3390/rs17152664 - 1 Aug 2025
Abstract
Remote sensing images are often degraded by atmospheric haze, which not only reduces image quality but also complicates information extraction, particularly in high-level visual analysis tasks such as object detection and scene classification. State-space models (SSMs) have recently emerged as a powerful paradigm for vision tasks, showing great promise due to their computational efficiency and robust capacity to model global dependencies. However, most existing learning-based dehazing methods lack physical interpretability, leading to weak generalization. Furthermore, they typically rely on spatial features while neglecting crucial frequency domain information, resulting in incomplete feature representation. To address these challenges, we propose ScaleViM-PDD, a novel network that enhances an SSM backbone with two key innovations: a Multi-scale EfficientViM with Physical Decoupling (ScaleViM-P) module and a Dual-Domain Fusion (DD Fusion) module. The ScaleViM-P module synergistically integrates a Physical Decoupling block within a Multi-scale EfficientViM architecture. This design enables the network to mitigate haze interference in a physically grounded manner at each representational scale while simultaneously capturing global contextual information to adaptively handle complex haze distributions. To further address detail loss, the DD Fusion module replaces conventional skip connections by incorporating a novel Frequency Domain Module (FDM) alongside channel and position attention. This allows for a more effective fusion of spatial and frequency features, significantly improving the recovery of fine-grained details, including color and texture information. Extensive experiments on nine publicly available remote sensing datasets demonstrate that ScaleViM-PDD consistently surpasses state-of-the-art baselines in both qualitative and quantitative evaluations, highlighting its strong generalization ability. Full article

24 pages, 23817 KiB  
Article
Dual-Path Adversarial Denoising Network Based on UNet
by Jinchi Yu, Yu Zhou, Mingchen Sun and Dadong Wang
Sensors 2025, 25(15), 4751; https://doi.org/10.3390/s25154751 - 1 Aug 2025
Abstract
Digital image quality is crucial for reliable analysis in applications such as medical imaging, satellite remote sensing, and video surveillance. However, traditional denoising methods struggle to balance noise removal with detail preservation and lack adaptability to various types of noise. We propose a novel three-module architecture for image denoising, comprising a generator, a dual-path-UNet-based denoiser, and a discriminator. The generator creates synthetic noise patterns to augment training data, while the dual-path-UNet denoiser uses multiple receptive field modules to preserve fine details and dense feature fusion to maintain global structural integrity. The discriminator provides adversarial feedback to enhance denoising performance. This dual-path adversarial training mechanism addresses the limitations of traditional methods by simultaneously capturing both local details and global structures. Experiments on the SIDD, DND, and PolyU datasets demonstrate superior performance. We compare our architecture with the latest state-of-the-art GAN variants through comprehensive qualitative and quantitative evaluations. These results confirm the effectiveness of noise removal with minimal loss of critical image details. The proposed architecture enhances image denoising capabilities in complex noise scenarios, providing a robust solution for applications that require high image fidelity. By enhancing adaptability to various types of noise while maintaining structural integrity, this method provides a versatile tool for image processing tasks that require preserving detail. Full article
(This article belongs to the Section Sensing and Imaging)

15 pages, 1767 KiB  
Article
A Contrastive Representation Learning Method for Event Classification in Φ-OTDR Systems
by Tong Zhang, Xinjie Peng, Yifan Liu, Kaiyang Yin and Pengfei Li
Sensors 2025, 25(15), 4744; https://doi.org/10.3390/s25154744 - 1 Aug 2025
Abstract
The phase-sensitive optical time-domain reflectometry (Φ-OTDR) system has shown substantial potential in distributed acoustic sensing applications. Accurate event classification is crucial for effective deployment of Φ-OTDR systems, and various methods have been proposed for event classification in Φ-OTDR systems. However, most existing methods typically rely on sufficient labeled signal data for model training, which poses a major bottleneck in applying these methods due to the expensive and laborious process of labeling extensive data. To address this limitation, we propose CLWTNet, a novel contrastive representation learning method enhanced with wavelet transform convolution for event classification in Φ-OTDR systems. CLWTNet learns robust and discriminative representations directly from unlabeled signal data by transforming time-domain signals into STFT images and employing contrastive learning to maximize inter-class separation while preserving intra-class similarity. Furthermore, CLWTNet incorporates wavelet transform convolution to enhance its capacity to capture intricate features of event signals. The experimental results demonstrate that CLWTNet achieves competitive performance with the supervised representation learning methods and superior performance to unsupervised representation learning methods, even when training with unlabeled signal data. These findings highlight the effectiveness of CLWTNet in extracting discriminative representations without relying on labeled data, thereby enhancing data efficiency and reducing the costs and effort involved in extensive data labeling in practical Φ-OTDR system applications. Full article
(This article belongs to the Topic Distributed Optical Fiber Sensors)
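CLWTNet's exact objective is not given in the abstract; a standard loss for "maximize inter-class separation while preserving intra-class similarity" from unlabeled pairs is the NT-Xent contrastive loss. A minimal NumPy sketch under that assumption (the function name, shapes, and temperature are illustrative, not from the paper):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two views of the same N signals.

    z1, z2: (N, D) embeddings (e.g. of STFT images under two augmentations).
    Matching rows are positives; all other pairs in the 2N batch are negatives.
    """
    z = np.concatenate([z1, z2])                      # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit norm -> cosine sim
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z1)
    pos = np.r_[np.arange(n, 2 * n), np.arange(n)]    # index of each positive
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
# Loss is low when the two views agree, high when they are unrelated.
loss_aligned = nt_xent_loss(anchor, anchor + 0.05 * rng.normal(size=(8, 16)))
loss_random = nt_xent_loss(anchor, rng.normal(size=(8, 16)))
```

Training the encoder to minimize this loss on unlabeled signal pairs is what removes the labeling bottleneck the abstract describes.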

23 pages, 4379 KiB  
Article
Large Vision Language Model: Enhanced-RSCLIP with Exemplar-Image Prompting for Uncommon Object Detection in Satellite Imagery
by Taiwo Efunogbon, Abimbola Efunogbon, Enjie Liu, Dayou Li and Renxi Qiu
Electronics 2025, 14(15), 3071; https://doi.org/10.3390/electronics14153071 - 31 Jul 2025
Abstract
Large Vision Language Models (LVLMs) have shown promise in remote sensing applications, yet struggle with “uncommon” objects that lack sufficient public labeled data. This paper presents Enhanced-RSCLIP, a novel dual-prompt architecture that combines text prompting with exemplar-image processing for cattle herd detection in satellite imagery. Our approach introduces a key innovation where an exemplar-image preprocessing module using crop-based or attention-based algorithms extracts focused object features which are fed as a dual stream to a contrastive learning framework that fuses textual descriptions with visual exemplar embeddings. We evaluated our method on a custom dataset of 260 satellite images across UK and Nigerian regions. Enhanced-RSCLIP with crop-based exemplar processing achieved 72% accuracy in cattle detection and 56.2% overall accuracy on cross-domain transfer tasks, significantly outperforming text-only CLIP (31% overall accuracy). The dual-prompt architecture enables effective few-shot learning and cross-regional transfer from data-rich (UK) to data-sparse (Nigeria) environments, demonstrating a 41% improvement over baseline approaches for uncommon object detection in satellite imagery. Full article

21 pages, 12997 KiB  
Article
Aerial-Ground Cross-View Vehicle Re-Identification: A Benchmark Dataset and Baseline
by Linzhi Shang, Chen Min, Juan Wang, Liang Xiao, Dawei Zhao and Yiming Nie
Remote Sens. 2025, 17(15), 2653; https://doi.org/10.3390/rs17152653 - 31 Jul 2025
Abstract
Vehicle re-identification (Re-ID) is a critical computer vision task that aims to match the same vehicle across spatially distributed cameras, especially in the context of remote sensing imagery. While prior research has primarily focused on Re-ID using remote sensing images captured from similar, typically elevated viewpoints, these settings do not fully reflect complex aerial-ground collaborative remote sensing scenarios. In this work, we introduce a novel and challenging task: aerial-ground cross-view vehicle Re-ID, which involves retrieving vehicles in ground-view image galleries using query images captured from aerial (top-down) perspectives. This task is increasingly relevant due to the integration of drone-based surveillance and ground-level monitoring in multi-source remote sensing systems, yet it poses substantial challenges due to significant appearance variations between aerial and ground views. To support this task, we present AGID (Aerial-Ground Vehicle Re-Identification), the first benchmark dataset specifically designed for aerial-ground cross-view vehicle Re-ID. AGID comprises 20,785 remote sensing images of 834 vehicle identities, collected using drones and fixed ground cameras. We further propose a novel method, Enhanced Self-Correlation Feature Computation (ESFC), which enhances spatial relationships between semantically similar regions and incorporates shape information to improve feature discrimination. Extensive experiments on the AGID dataset and three widely used vehicle Re-ID benchmarks validate the effectiveness of our method, which achieves a Rank-1 accuracy of 69.0% on AGID, surpassing state-of-the-art approaches by 2.1%. Full article

29 pages, 15488 KiB  
Article
GOFENet: A Hybrid Transformer–CNN Network Integrating GEOBIA-Based Object Priors for Semantic Segmentation of Remote Sensing Images
by Tao He, Jianyu Chen and Delu Pan
Remote Sens. 2025, 17(15), 2652; https://doi.org/10.3390/rs17152652 - 31 Jul 2025
Abstract
Geographic object-based image analysis (GEOBIA) has demonstrated substantial utility in remote sensing tasks. However, its integration with deep learning remains largely confined to image-level classification. This is primarily due to the irregular shapes and fragmented boundaries of segmented objects, which limit its applicability in semantic segmentation. While convolutional neural networks (CNNs) excel at local feature extraction, they inherently struggle to capture long-range dependencies. In contrast, Transformer-based models are well suited for global context modeling but often lack fine-grained local detail. To overcome these limitations, we propose GOFENet (Geo-Object Feature Enhanced Network)—a hybrid semantic segmentation architecture that effectively fuses object-level priors into deep feature representations. GOFENet employs a dual-encoder design combining CNN and Swin Transformer architectures, enabling multi-scale feature fusion through skip connections to preserve both local and global semantics. An auxiliary branch incorporating cascaded atrous convolutions is introduced to inject information of segmented objects into the learning process. Furthermore, we develop a cross-channel selection module (CSM) for refined channel-wise attention, a feature enhancement module (FEM) to merge global and local representations, and a shallow–deep feature fusion module (SDFM) to integrate pixel- and object-level cues across scales. Experimental results on the GID and LoveDA datasets demonstrate that GOFENet achieves superior segmentation performance, with 66.02% mIoU and 51.92% mIoU, respectively. The model exhibits strong capability in delineating large-scale land cover features, producing sharper object boundaries and reducing classification noise, while preserving the integrity and discriminability of land cover categories. Full article

20 pages, 1059 KiB  
Article
The Knowledge Sovereignty Paradigm: Mapping Employee-Driven Information Governance Following Organisational Data Breaches
by Jeferson Martínez Lozano, Kevin Restrepo Bedoya and Juan Velez-Ocampo
J. Cybersecur. Priv. 2025, 5(3), 51; https://doi.org/10.3390/jcp5030051 - 31 Jul 2025
Abstract
This study explores the emergent dynamics of knowledge sovereignty within organisations following data breach incidents. Using qualitative analysis based on Benoit’s image restoration theory, this study shows that employees do more than relay official messages—they actively shape information governance after a cyberattack. Employees adapt Benoit’s response strategies (denial, evasion of responsibility, reducing offensiveness, corrective action, and mortification) based on how authentic they perceive the organisation’s response, their identification with the company, and their sense of fairness in crisis management. This investigation substantively extends extant crisis communication theory by showing how knowledge sovereignty is shaped through negotiation, as employees manage their dual role as breach victims and organisational representatives. The findings suggest that employees are key actors in post-breach information governance, and that their authentic engagement is critical to organisational recovery after cybersecurity incidents. Full article
22 pages, 6682 KiB  
Article
An FR4-Based Oscillator Loading an Additional High-Q Cavity for Phase Noise Reduction Using SISL Technology
by Jingwen Han, Ningning Yan and Kaixue Ma
Electronics 2025, 14(15), 3041; https://doi.org/10.3390/electronics14153041 - 30 Jul 2025
Abstract
An FR4-based X-band low phase noise oscillator loading an additional high-Q cavity resonator was designed in this study using substrate-integrated suspended line (SISL) technology. The additional resonator was coupled to the oscillator by a transmission line (coupling TL). The impact of the additional resonator on startup conditions, Q factor enhancement, and phase noise reduction was thoroughly investigated. Three oscillators were presented for comparison: one loading an additional high-Q cavity resonator, one loading the additional resonator with partial dielectric extraction, and an original parallel feedback oscillator. The experimental results showed that the proposed oscillator had a low phase noise of −131.79 dBc/Hz at 1 MHz offset from the carrier frequency of 10.088 GHz, and the FOM was −197.79 dBc/Hz. The phase noise was reduced by 1.66 dB by loading the additional resonator and further reduced by 1.87 dB by partially excising the substrate. To the best of our knowledge, the proposed oscillator showed the lowest phase noise and FOM compared with other all-FR4-based oscillators, while the cost of fabrication was markedly reduced. The proposed oscillator also has the advantages of compact size and self-packaging properties. Full article

19 pages, 7161 KiB  
Article
Dynamic Snake Convolution Neural Network for Enhanced Image Super-Resolution
by Weiqiang Xin, Ziang Wu, Qi Zhu, Tingting Bi, Bing Li and Chunwei Tian
Mathematics 2025, 13(15), 2457; https://doi.org/10.3390/math13152457 - 30 Jul 2025
Abstract
Image super-resolution (SR) is essential for enhancing image quality in critical applications, such as medical imaging and satellite remote sensing. However, existing methods are often limited in their ability to effectively process and integrate multi-scale information from fine textures to global structures. To address these limitations, this paper proposes DSCNN, a dynamic snake convolution neural network for enhanced image super-resolution. DSCNN optimizes feature extraction and network architecture to enhance both performance and efficiency. To improve feature extraction, the core innovation is a feature extraction and enhancement module with dynamic snake convolution that dynamically adjusts the convolution kernel’s shape and position to better fit the image’s geometric structures. To optimize the network’s structure, DSCNN employs an enhanced residual network framework. This framework utilizes parallel convolutional layers and a global feature fusion mechanism to further strengthen feature extraction capability and gradient flow efficiency. Additionally, the network incorporates a SwishReLU-based activation function and a multi-scale convolutional concatenation structure. This multi-scale design effectively captures both local details and global image structure, enhancing SR reconstruction. In summary, the proposed DSCNN outperforms existing methods in both objective metrics and visual perception (e.g., our method achieved optimal PSNR and SSIM results on the Set5 ×4 dataset). Full article
(This article belongs to the Special Issue Structural Networks for Image Application)

26 pages, 4899 KiB  
Article
SDDGRNets: Level–Level Semantically Decomposed Dynamic Graph Reasoning Network for Remote Sensing Semantic Change Detection
by Zhuli Xie, Gang Wan, Yunxia Yin, Guangde Sun and Dongdong Bu
Remote Sens. 2025, 17(15), 2641; https://doi.org/10.3390/rs17152641 - 30 Jul 2025
Abstract
Semantic change detection based on remote sensing data is important for urban and rural planning decisions and for monitoring ground objects. However, simple convolutional networks are limited by their receptive field: they cannot fully capture detailed semantic information, perceive subtle changes, or constrain edge information. In response to these limitations, a dynamic graph reasoning network with layer-by-layer semantic decomposition is developed for semantic change detection in remote sensing data. The network aims to understand and perceive subtle changes in the semantic content of remote sensing data at the image pixel level. On the one hand, low-level semantic information and cross-scale local spatial feature details are obtained by dividing subspaces and decomposing convolutional layers with large kernel expansion; semantic selection aggregation is used to enhance the characterization of global and contextual semantics, while the initial multi-scale local spatial semantics are screened and re-aggregated to improve the characterization of salient features. On the other hand, at the encoding stage, weight sharing is employed to align the positions of ground objects in the change area and generate more comprehensive encoding information, and the dynamic graph reasoning module decodes the encoded semantics layer by layer to investigate hidden associations between neighboring pixels. In addition, an edge constraint module constrains boundary pixels and reduces semantic ambiguity, and a weighted loss function supervises and optimizes each module separately so that the network acquires the optimal feature representation. Finally, experimental results on three open-source datasets, namely SECOND, HIUSD, and Landsat-SCD, show that the proposed method performs well, with SCD scores of 35.65%, 98.33%, and 67.29%, respectively. Full article
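The dynamic graph reasoning idea described above, recomputing a neighborhood graph from the current features and aggregating over it, can be illustrated with a small pure-Python toy. The paper's actual module operates on deep feature maps with learned edge weights; here, node features are short vectors, edges go to the `k` most cosine-similar nodes, and aggregation is a plain mean, all of which are simplifying assumptions.

```python
def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def graph_reasoning_step(feats, k=2):
    # One reasoning step: each node links to its k most similar
    # neighbours (edges recomputed from the current features, hence
    # "dynamic") and takes the mean of itself and those neighbours.
    out = []
    for i, f in enumerate(feats):
        neighbours = sorted(
            (j for j in range(len(feats)) if j != i),
            key=lambda j: cosine(f, feats[j]),
            reverse=True,
        )[:k]
        group = [f] + [feats[j] for j in neighbours]
        out.append([sum(coord) / len(group) for coord in zip(*group)])
    return out

# Two similar nodes smooth toward each other; the outlier stays apart.
print(graph_reasoning_step([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]], k=1))
```

Stacking such steps, with the graph rebuilt each time, lets information propagate beyond a fixed convolutional receptive field, which is the limitation the abstract attributes to plain convolutions.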

50 pages, 937 KiB  
Review
Precision Neuro-Oncology in Glioblastoma: AI-Guided CRISPR Editing and Real-Time Multi-Omics for Genomic Brain Surgery
by Matei Șerban, Corneliu Toader and Răzvan-Adrian Covache-Busuioc
Int. J. Mol. Sci. 2025, 26(15), 7364; https://doi.org/10.3390/ijms26157364 - 30 Jul 2025
Abstract
Precision neurosurgery is rapidly evolving as a medical specialty by merging genomic medicine, multi-omics technologies, and artificial intelligence (AI), while care shifts away from the traditional anatomic model toward a more precise, molecular model. The purpose of this review is to reflect on how these advances will impact neurosurgical care by providing more precise diagnostic and treatment pathways. We aim to review recent advances in genomics and multi-omics in the context of clinical practice and to highlight their transformational opportunities within existing models of care, where improved molecular insights can support improvements in clinical care. More specifically, we highlight how genomic profiling, CRISPR-Cas9, and multi-omics platforms (genomics, transcriptomics, proteomics, and metabolomics) are increasing our understanding of central nervous system (CNS) disorders. Achievements with transformational technologies such as single-cell RNA sequencing and intraoperative mass spectrometry exemplify real-time molecular diagnostics that enable a more directed approach to surgical options. We also explore how the identification of specific biomarkers (e.g., IDH mutations and MGMT promoter methylation) became a tipping point in the care of glioblastoma, establishing a new tumor taxonomy applicable to surgeons, in which a change in practice required a different surgical resection approach and subsequently stratified the adjuvant therapies undertaken after surgery. Furthermore, we reflect on how the genomic characterization of mutations such as DEPDC5 and SCN1A transformed the pre-surgical selection of candidates for refractory epilepsy surgery when conventional imaging did not define an epileptogenic zone, thus reducing unnecessary resective surgery in clinical practice. While we are atop the crest of an exciting wave of advances, we recognize that we must also be diligent about the challenges of implementing genomic medicine in neurosurgery, including the ethical and technical challenges that arise when genomic mutation-based therapies require concurrent multi-omics data collection to benefit patients, as well as the constraints imposed by the blood–brain barrier. The primary challenges also relate to the privacy implications of genomic medicine and to equitable access to these practice-disrupting, technology-based interventions. We hope this review will serve not only as a consolidation and integration of knowledge but also as a stimulus for new lines of research and clinical practice, and that it will encourage mindful discussion of conscientious and sustainable progress toward a genomic model of precision neurosurgery. In providing a critical perspective, we also hope to contribute to the larger effort to embed molecular precision into neuroscience care, promoting better practice and better outcomes for patients globally. Full article
(This article belongs to the Special Issue Molecular Insights into Glioblastoma Pathogenesis and Therapeutics)
