Search Results (1,843)

Search Parameters:
Keywords = limited imagery

20 pages, 2789 KiB  
Article
LSTMConvSR: Joint Long–Short-Range Modeling via LSTM-First–CNN-Next Architecture for Remote Sensing Image Super-Resolution
by Qiwei Zhu, Guojing Zhang, Xiaoying Wang and Jianqiang Huang
Remote Sens. 2025, 17(15), 2745; https://doi.org/10.3390/rs17152745 - 7 Aug 2025
Abstract
The inability of existing super-resolution methods to jointly model short-range and long-range spatial dependencies in remote sensing imagery limits reconstruction efficacy. To address this, we propose LSTMConvSR, a novel framework inspired by top-down neural attention mechanisms. Our approach pioneers an LSTM-first–CNN-next architecture. First, an LSTM-based global modeling stage efficiently captures long-range dependencies via downsampling and spatial attention, achieving 80.3% lower FLOPs and an 11× speedup. Second, a CNN-based local refinement stage, guided by the LSTM’s attention maps, enhances details in critical regions. Third, a top-down fusion stage dynamically integrates global context and local features to generate the output. Extensive experiments on the Potsdam, UAVid, and RSSCN7 benchmarks demonstrate state-of-the-art performance, achieving 33.94 dB PSNR on Potsdam with 2.4× faster inference than MambaIRv2. Full article
(This article belongs to the Special Issue Neural Networks and Deep Learning for Satellite Image Processing)
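To make the three-stage LSTM-first–CNN-next idea above concrete, here is a minimal PyTorch sketch of that kind of pipeline: a cheap LSTM pass over a downsampled grid for long-range context, an attention-guided CNN refinement, and a top-down fusion before upsampling. The module name LSTMFirstCNNNextSR, the layer sizes, and the pixel-shuffle head are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an LSTM-first / CNN-next super-resolution block, loosely
# following the three stages described in the abstract (global LSTM modeling on a
# downsampled grid, attention-guided CNN refinement, top-down fusion). Sizes and
# the upsampling head are assumptions, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMFirstCNNNextSR(nn.Module):
    def __init__(self, channels=64, scale=4, down=4):
        super().__init__()
        self.down = down
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        # Stage 1: global modeling with an LSTM over the downsampled spatial grid.
        self.lstm = nn.LSTM(channels, channels, batch_first=True, bidirectional=True)
        self.to_attn = nn.Conv2d(2 * channels, 1, 1)          # spatial attention map
        # Stage 2: local CNN refinement, guided by the upsampled attention map.
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Stage 3: top-down fusion and pixel-shuffle upsampling.
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, x):
        feat = self.embed(x)                                   # (B, C, H, W)
        B, C, H, W = feat.shape
        coarse = F.avg_pool2d(feat, self.down)                 # cheap global stage
        h, w = coarse.shape[-2:]
        seq = coarse.flatten(2).transpose(1, 2)                # (B, h*w, C)
        global_feat, _ = self.lstm(seq)                        # long-range context
        global_feat = global_feat.transpose(1, 2).reshape(B, 2 * C, h, w)
        attn = torch.sigmoid(self.to_attn(global_feat))        # (B, 1, h, w)
        attn = F.interpolate(attn, size=(H, W), mode="bilinear", align_corners=False)
        local = self.refine(feat * attn)                       # attention-guided CNN
        ctx = F.interpolate(global_feat[:, :C], size=(H, W), mode="bilinear",
                            align_corners=False)
        fused = self.fuse(torch.cat([local, ctx], dim=1)) + feat
        return self.upsample(fused)

sr = LSTMFirstCNNNextSR()
print(sr(torch.rand(1, 3, 64, 64)).shape)   # torch.Size([1, 3, 256, 256])
```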
32 pages, 3696 KiB  
Article
Deep Learning Small Water Body Mapping by Transfer Learning from Sentinel-2 to PlanetScope
by Yuyang Li, Pu Zhou, Yalan Wang, Xiang Li, Yihang Zhang and Xiaodong Li
Remote Sens. 2025, 17(15), 2738; https://doi.org/10.3390/rs17152738 - 7 Aug 2025
Abstract
Small water bodies are widespread and play crucial roles in supporting regional agricultural and aquaculture activities. PlanetScope imagery offers high spatial resolution (3 m) with daily global coverage and has substantially enhanced small water body mapping. Recent studies have demonstrated the effectiveness of deep learning for mapping small water bodies using PlanetScope; however, a persistent challenge remains in the scarcity of high-quality, manually annotated water masks for model training, which limits the generalization capability of data-driven deep learning models. In this study, we propose a transfer learning framework that leverages Sentinel-2 data to improve PlanetScope-based small water body mapping, capitalizing on the spectral interoperability between PlanetScope and Sentinel-2 bands and the abundance of open-source Sentinel-2 water masks. Eight state-of-the-art segmentation models were explored. Additionally, this paper presents the first assessment of the VMamba model for small water body mapping, building on its demonstrated success in segmentation tasks. The models were pre-trained using Sentinel-2-derived water masks and subsequently fine-tuned with a limited set (1292 image patches of 256 × 256 pixels each) of manually annotated PlanetScope labels. Experiments were conducted using 5648 image patches and two areas of 9636 km² and 2745 km², respectively. Among the evaluated methods, VMamba achieved higher accuracy than both CNN- and Transformer-based models. This study highlights the efficacy of combining global Sentinel-2 datasets for pre-training with localized fine-tuning, which not only enhances mapping accuracy but also reduces reliance on labor-intensive manual annotation in regional small water body mapping. Full article
(This article belongs to the Section Remote Sensing Image Processing)
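The pre-train-then-fine-tune recipe described above can be sketched as two training phases sharing one network: abundant Sentinel-2-derived masks first, a small PlanetScope-labelled set second, typically at a lower learning rate. The tiny_segmenter network, the dummy tensors, and the learning rates below are placeholders for illustration only, not the paper's configuration.

```python
# Minimal two-stage transfer-learning skeleton in the spirit of the paper's setup:
# pre-train a water-segmentation network on Sentinel-2-derived masks, then fine-tune
# on a small set of manually labelled PlanetScope patches. Network, data, and
# hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def tiny_segmenter(in_bands=4):
    return nn.Sequential(
        nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 1, 1))                       # water / non-water logits

def run_epochs(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

# Dummy stand-ins: Sentinel-2 patches are plentiful, PlanetScope labels are scarce.
s2 = TensorDataset(torch.rand(64, 4, 64, 64), (torch.rand(64, 1, 64, 64) > 0.5).float())
ps = TensorDataset(torch.rand(8, 4, 64, 64), (torch.rand(8, 1, 64, 64) > 0.5).float())

model = tiny_segmenter()
run_epochs(model, DataLoader(s2, batch_size=8), lr=1e-3, epochs=1)   # pre-training
run_epochs(model, DataLoader(ps, batch_size=4), lr=1e-4, epochs=1)   # fine-tuning
```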
26 pages, 10480 KiB  
Article
Monitoring Chlorophyll Content of Brassica napus L. Based on UAV Multispectral and RGB Feature Fusion
by Yongqi Sun, Jiali Ma, Mengting Lyu, Jianxun Shen, Jianping Ying, Skhawat Ali, Basharat Ali, Wenqiang Lan, Yiwa Hu, Fei Liu, Weijun Zhou and Wenjian Song
Agronomy 2025, 15(8), 1900; https://doi.org/10.3390/agronomy15081900 - 7 Aug 2025
Abstract
Accurate prediction of chlorophyll content in Brassica napus L. (rapeseed) is essential for monitoring plant nutritional status and precision agricultural management. Existing research, however, has focused on single cultivars, limiting general applicability. This study used unmanned aerial vehicle (UAV)-based RGB and multispectral imagery to evaluate the chlorophyll content of six rapeseed cultivars across mixed growth stages, including the seedling, bolting, and initial flowering stages. ExG-ExR threshold segmentation was applied to remove background interference. Subsequently, color and spectral indices were extracted from the segmented images and ranked according to their correlations with measured chlorophyll content. Partial Least Squares Regression (PLSR), Multiple Linear Regression (MLR), and Support Vector Regression (SVR) models were independently established using subsets of the top-ranked features. Model performance was assessed by comparing prediction accuracy (R² and RMSE). Results demonstrated significant accuracy improvements following background removal, especially for the SVR model. Compared to data without background removal, accuracy increased notably by 8.0% (R²p improved from 0.683 to 0.763) for color indices and 3.1% (R²p from 0.835 to 0.866) for spectral indices. Additionally, stepwise fusion of spectral and color indices further improved prediction accuracy. Optimal results were obtained by fusing the top seven color features ranked by correlation with chlorophyll content, achieving an R²p of 0.878 and an RMSE of 52.187 μg/g. These findings highlight the effectiveness of background removal and feature fusion in enhancing chlorophyll prediction accuracy. Full article
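The ExG-ExR background-removal step mentioned above has a standard closed form (ExG = 2g − r − b, ExR = 1.4r − g on chromatic coordinates, keep pixels where ExG − ExR > 0); the sketch below pairs it with an SVR fit on index features. The threshold form follows the commonly cited Meyer & Neto definition, and the toy image and feature matrix are assumptions, not the study's data.

```python
# Small sketch of ExG-ExR background removal plus an SVR fit on index features.
# Toy data only; the threshold follows the common Meyer & Neto formulation.
import numpy as np
from sklearn.svm import SVR

def exg_exr_mask(rgb):
    """rgb: float array (H, W, 3) in [0, 1]; returns a boolean vegetation mask."""
    s = rgb.sum(axis=2, keepdims=True) + 1e-6
    r, g, b = np.moveaxis(rgb / s, 2, 0)        # chromatic coordinates
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0                      # keep vegetation, drop background

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))
mask = exg_exr_mask(img)

# Per-plot features (e.g., mean color/spectral indices over masked pixels) vs. chlorophyll
X = rng.random((60, 7))                         # top-7 correlated indices per plot
y = 300 + 50 * X[:, 0] + rng.normal(0, 5, 60)   # synthetic chlorophyll values
model = SVR(kernel="rbf", C=10.0).fit(X, y)
print(mask.mean(), model.score(X, y))           # vegetation fraction, training R^2
```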
27 pages, 5688 KiB  
Review
Tree Biomass Estimation in Agroforestry for Carbon Farming: A Comparative Analysis of Timing, Costs, and Methods
by Niccolò Conti, Gianni Della Rocca, Federico Franciamore, Elena Marra, Francesco Nigro, Emanuele Nigrone, Ramadhan Ramadhan, Pierluigi Paris, Gema Tárraga-Martínez, José Belenguer-Ballester, Lorenzo Scatena, Eleonora Lombardi and Cesare Garosi
Forests 2025, 16(8), 1287; https://doi.org/10.3390/f16081287 - 7 Aug 2025
Abstract
Agroforestry systems (AFSs) enhance long-term carbon sequestration through tree biomass accumulation. As the European Union’s Carbon Farming Certification (CRCF) Regulation now recognizes AFSs in carbon farming (CF) schemes, accurate tree biomass estimation becomes essential for certification. This review examines field-destructive and remote sensing methods for estimating tree aboveground biomass (AGB) in AFSs, with a specific focus on their advantages, limitations, timing, and associated costs. Destructive methods, although accurate and necessary for developing and validating allometric equations, are time-consuming, costly, and labour-intensive. Conversely, satellite- and drone-based remote sensing offer scalable and non-invasive alternatives, increasingly supported by advances in machine learning and high-resolution imagery. Using data from the INNO4CFIs project, which conducted parallel destructive and remote measurements in an AFS in Tuscany (Italy), this study provides a novel quantitative comparison of the resources each method requires. The findings highlight that while destructive measurements remain indispensable for model calibration and new species assessment, their feasibility is limited by practical constraints. Meanwhile, remote sensing approaches, despite some accuracy challenges in heterogeneous AFSs, offer a promising path forward for cost-effective, repeatable biomass monitoring but in turn require reliable field data. The integration of both approaches might represent a valid strategy to optimize precision and resource efficiency in carbon farming applications. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
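For readers unfamiliar with the allometric models the review refers to, the usual workflow is to fit a power law AGB = a·DBH^b to destructively sampled trees and then apply it to inventory diameters. The coefficients and the five sample trees below are invented for illustration; they are not values from the INNO4CFIs project.

```python
# Illustrative allometric biomass workflow: fit AGB = a * DBH**b on destructive
# data (log-log linear fit), then apply it to inventory diameters. Made-up numbers.
import numpy as np

def fit_allometry(dbh_cm, agb_kg):
    """Fit ln(AGB) = ln(a) + b * ln(DBH) to destructively sampled trees."""
    b, ln_a = np.polyfit(np.log(dbh_cm), np.log(agb_kg), 1)
    return np.exp(ln_a), b

def predict_agb(dbh_cm, a, b):
    return a * dbh_cm ** b                        # kg of aboveground biomass

dbh = np.array([8.0, 12.0, 18.0, 25.0, 33.0])     # destructively sampled trees
agb = np.array([14.0, 40.0, 110.0, 260.0, 520.0])
a, b = fit_allometry(dbh, agb)
print(round(a, 3), round(b, 2), predict_agb(np.array([20.0]), a, b).round(1))
```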
21 pages, 7718 KiB  
Article
Monitoring the Early Growth of Pinus and Eucalyptus Plantations Using a Planet NICFI-Based Canopy Height Model: A Case Study in Riqueza, Brazil
by Fabien H. Wagner, Fábio Marcelo Breunig, Rafaelo Balbinot, Emanuel Araújo Silva, Messias Carneiro Soares, Marco Antonio Kramm, Mayumi C. M. Hirye, Griffin Carter, Ricardo Dalagnol, Stephen C. Hagen and Sassan Saatchi
Remote Sens. 2025, 17(15), 2718; https://doi.org/10.3390/rs17152718 - 6 Aug 2025
Abstract
Monitoring the height of secondary forest regrowth is essential for assessing ecosystem recovery, but current methods rely on field surveys, airborne or UAV LiDAR, and 3D reconstruction from high-resolution UAV imagery, which are often costly or limited by logistical constraints. Here, we address the challenge of scaling up canopy height monitoring by evaluating a recent deep learning model, trained on data from the Amazon and Atlantic Forests, developed to extract canopy height from RGB-NIR Planet NICFI imagery. The research questions are as follows: (i) How are canopy height estimates from the model affected by slope and orientation in natural forests, based on a large and well-balanced experimental design? (ii) How effectively does the model capture the growth trajectories of Pinus and Eucalyptus plantations over an eight-year period following planting? We find that the model closely tracks Pinus growth at the parcel scale, with predictions generally within one standard deviation of UAV-derived heights. For Eucalyptus, while growth is detected, the model consistently underestimates height, by more than 10 m in some cases, until late in the cycle when the canopy becomes less dense. In stable natural forests, the model reveals seasonal artifacts driven by topographic variables (slope × aspect × day of year), for which we propose strategies to reduce their influence. These results highlight the model’s potential as a cost-effective and scalable alternative to field-based and LiDAR methods, enabling broad-scale monitoring of forest regrowth and contributing to innovation in remote sensing for forest dynamics assessment. Full article
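A parcel-scale comparison like the one described above reduces to a small table operation: compute the bias between model and UAV-derived heights per parcel, flag whether it falls within one standard deviation of the UAV values, and summarise with an RMSE. The parcel table below is synthetic and only shows the form of the computation.

```python
# Synthetic parcel-scale check: is the model's canopy height within one standard
# deviation of the UAV-derived reference? Numbers are placeholders.
import numpy as np
import pandas as pd

parcels = pd.DataFrame({
    "parcel": ["pinus_1", "pinus_2", "euc_1", "euc_2"],
    "uav_mean_h": [6.1, 9.4, 18.2, 24.5],     # UAV-derived mean height (m)
    "uav_std_h": [0.9, 1.2, 2.1, 2.6],
    "model_h": [5.6, 9.0, 7.9, 15.8],         # NICFI-based model estimate (m)
})
parcels["bias"] = parcels["model_h"] - parcels["uav_mean_h"]
parcels["within_1sd"] = parcels["bias"].abs() <= parcels["uav_std_h"]
rmse = float(np.sqrt((parcels["bias"] ** 2).mean()))
print(parcels[["parcel", "bias", "within_1sd"]], f"RMSE = {rmse:.2f} m", sep="\n")
```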
22 pages, 6201 KiB  
Article
SOAM Block: A Scale–Orientation-Aware Module for Efficient Object Detection in Remote Sensing Imagery
by Yi Chen, Zhidong Wang, Zhipeng Xiong, Yufeng Zhang and Xinqi Xu
Symmetry 2025, 17(8), 1251; https://doi.org/10.3390/sym17081251 - 6 Aug 2025
Abstract
Object detection in remote sensing imagery is critical in environmental monitoring, urban planning, and land resource management. However, the task remains challenging due to significant scale variations, arbitrary object orientations, and complex background clutter. To address these issues, we propose a novel orientation module (SOAM Block) that jointly models object scale and directional features while exploiting geometric symmetry inherent in many remote sensing targets. The SOAM Block is constructed upon a lightweight and efficient Adaptive Multi-Scale (AMS) Module, which utilizes a symmetric arrangement of parallel depth-wise convolutional branches with varied kernel sizes to extract fine-grained multi-scale features without dilation, thereby preserving local context and enhancing scale adaptability. In addition, a Strip-based Context Attention (SCA) mechanism is introduced to model long-range spatial dependencies, leveraging horizontal and vertical 1D strip convolutions in a directionally symmetric fashion. This design captures spatial correlations between distant regions and reinforces semantic consistency in cluttered scenes. Importantly, this work is the first to explicitly analyze the coupling between object scale and orientation in remote sensing imagery. The proposed method addresses the limitations of fixed receptive fields in capturing symmetric directional cues of large-scale objects. Extensive experiments are conducted on two widely used benchmarks—DOTA and HRSC2016—both of which exhibit significant scale variations and orientation diversity. Results demonstrate that our approach achieves superior detection accuracy with fewer parameters and lower computational overhead compared to state-of-the-art methods. The proposed SOAM Block thus offers a robust, scalable, and symmetry-aware solution for high-precision object detection in complex aerial scenes. Full article
(This article belongs to the Section Computer)
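The two ingredients named in the abstract, parallel depth-wise convolutions with different kernel sizes and 1D strip convolutions along rows and columns, can be sketched as follows; channel counts, kernel sizes, and the way the two blocks are chained (AMSBlock then StripContextAttention) are assumptions rather than the published SOAM design.

```python
# Hedged sketch of an adaptive multi-scale stage (parallel depth-wise convolutions)
# followed by strip-based context attention (1xk and kx1 convolutions).
import torch
import torch.nn as nn

class AMSBlock(nn.Module):
    def __init__(self, c, kernels=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c, c, k, padding=k // 2, groups=c) for k in kernels])  # depth-wise
        self.mix = nn.Conv2d(c * len(kernels), c, 1)      # point-wise fusion

    def forward(self, x):
        return self.mix(torch.cat([b(x) for b in self.branches], dim=1)) + x

class StripContextAttention(nn.Module):
    def __init__(self, c, k=11):
        super().__init__()
        self.h_strip = nn.Conv2d(c, c, (1, k), padding=(0, k // 2), groups=c)
        self.v_strip = nn.Conv2d(c, c, (k, 1), padding=(k // 2, 0), groups=c)
        self.gate = nn.Conv2d(c, c, 1)

    def forward(self, x):
        ctx = self.h_strip(x) + self.v_strip(x)           # long-range row/column context
        return x * torch.sigmoid(self.gate(ctx))          # attention-style reweighting

x = torch.rand(1, 64, 128, 128)
out = StripContextAttention(64)(AMSBlock(64)(x))
print(out.shape)                                          # torch.Size([1, 64, 128, 128])
```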
25 pages, 4069 KiB  
Article
Forest Volume Estimation in Secondary Forests of the Southern Daxing’anling Mountains Using Multi-Source Remote Sensing and Machine Learning
by Penghao Ji, Wanlong Pang, Rong Su, Runhong Gao, Pengwu Zhao, Lidong Pang and Huaxia Yao
Forests 2025, 16(8), 1280; https://doi.org/10.3390/f16081280 - 5 Aug 2025
Abstract
Forest volume is an important indicator for assessing the economic value and carbon sequestration capacity of forest resources and serves as a key indicator for energy flow and biodiversity. Although remote sensing technology is widely applied to estimate volume, optical remote sensing data have limitations in capturing forest vertical height information and may suffer from reflectance saturation. While LiDAR data can provide more detailed vertical structural information, they come with high processing costs and a limited observation range. Therefore, improving the accuracy of volume estimation through multi-source data fusion has become a crucial challenge and research focus in the field of forest remote sensing. In this study, we integrated Sentinel-2 multispectral data, Resource-3 stereoscopic imagery, UAV-based LiDAR data, and field survey data to quantitatively estimate forest volume in the Saihanwula Nature Reserve, located in Inner Mongolia, China, on the southern part of the Daxing’anling Mountains. The study evaluated the performance of multi-source remote sensing features by using recursive feature elimination (RFE) to select the most relevant factors and applied four machine learning models—multiple linear regression (MLR), k-nearest neighbors (kNN), random forest (RF), and gradient boosting regression tree (GBRT)—to develop volume estimation models. The evaluation metrics include the coefficient of determination (R²), root mean square error (RMSE), and relative root mean square error (rRMSE). The results show that (1) forest Canopy Height Model (CHM) data were strongly correlated with forest volume, helping to alleviate the reflectance saturation issues inherent in spectral texture data. The fusion of CHM and spectral data resulted in an improved volume estimation model with R² = 0.75 and RMSE = 8.16 m³/hm², highlighting the importance of integrating multi-source canopy height information for more accurate volume estimation. (2) Volume estimation accuracy varied across tree species. For Betula platyphylla, we obtained R² = 0.71 and RMSE = 6.96 m³/hm²; for Quercus mongolica, R² = 0.74 and RMSE = 6.90 m³/hm²; and for Populus davidiana, R² = 0.51 and RMSE = 9.29 m³/hm². The total forest volume in the Saihanwula Reserve ranges from 50 to 110 m³/hm². (3) Among the four machine learning models, GBRT consistently outperformed the others in all evaluation metrics, achieving the highest R² of 0.86, the lowest RMSE of 9.69 m³/hm², and the lowest rRMSE of 24.57%, suggesting its potential for forest biomass estimation. In conclusion, accurate estimation of forest volume is critical for evaluating forest management practices and timber resources. While this integrated approach shows promise, its operational application requires further external validation and uncertainty analysis to support policy-relevant decisions. The integration of multi-source remote sensing data provides valuable support for forest resource accounting, economic value assessment, and monitoring dynamic changes in forest ecosystems. Full article
(This article belongs to the Special Issue Mapping and Modeling Forests Using Geospatial Technologies)
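The modelling chain in this abstract (RFE feature selection followed by MLR, kNN, RF, and GBRT compared via R², RMSE, and rRMSE) maps directly onto scikit-learn; the sketch below uses synthetic features in place of the Sentinel-2/stereo/CHM predictors and field-measured volumes.

```python
# Sketch of the RFE + four-regressor comparison with R2, RMSE, and rRMSE metrics.
# Synthetic data stand in for the multi-source features and field volumes.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=200, n_features=30, n_informative=8, noise=10.0,
                       random_state=0)
y = y - y.min() + 60.0                                    # pseudo volumes, m3/hm2

# Recursive feature elimination to keep the most relevant predictors.
X_sel = RFE(RandomForestRegressor(random_state=0),
            n_features_to_select=10).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)

models = {"MLR": LinearRegression(), "kNN": KNeighborsRegressor(5),
          "RF": RandomForestRegressor(random_state=0),
          "GBRT": GradientBoostingRegressor(random_state=0)}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: R2={r2_score(y_te, pred):.2f} RMSE={rmse:.1f} "
          f"rRMSE={100 * rmse / y_te.mean():.1f}%")
```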
27 pages, 14923 KiB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Abstract
Floods are among the most harmful natural disasters and have become more dangerous because of the effects of climate change on urban structures and agricultural fields. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with a machine learning method to evaluate the July 2021 flood in the Netherlands. Twenty-five different feature scenarios were developed by combining Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with optical Normalized Difference Water Index (NDWI) and Hue, Saturation, and Value (HSV) images and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized before being applied to two different flood-prone regions: Zutphen’s urban area and Heijen’s agricultural land. Results demonstrated that the multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. F1 scores for the flood class varied from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen across all scenarios. Moreover, the addition of SAR texture metrics enhanced flood boundary identification in both urban and agricultural settings. Radarsat-2 provided limited benefit to the overall results, since the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that combining SAR and optical features with texture information creates a powerful and scalable flood mapping system, and that RF classification performs well in diverse landscape settings. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
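At its core, the classification step above is a per-pixel Random Forest over stacked SAR and optical features; a minimal sketch, with a McFeeters-style NDWI and synthetic backscatter standing in for the real scenes (the paper additionally adds HSV and GLCM texture features), is shown below.

```python
# Minimal per-pixel flood classification: stack SAR backscatter with an optical
# NDWI band and classify flooded vs. non-flooded pixels with a Random Forest.
# Arrays and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
h, w = 64, 64
vv = rng.normal(-12, 3, (h, w))                  # Sentinel-1 VV backscatter (dB)
green = rng.random((h, w))
nir = rng.random((h, w))
ndwi = (green - nir) / (green + nir + 1e-6)      # McFeeters NDWI from optical bands

features = np.stack([vv.ravel(), ndwi.ravel()], axis=1)   # add GLCM textures here
labels = (ndwi.ravel() > 0.2).astype(int)        # toy "flooded" reference mask

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
flood_map = rf.predict(features).reshape(h, w)
print(flood_map.sum(), "pixels classified as flooded")
```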
26 pages, 2459 KiB  
Article
Urban Agriculture for Post-Disaster Food Security: Quantifying the Contributions of Community Gardens
by Yanxin Liu, Victoria Chanse and Fabricio Chicca
Urban Sci. 2025, 9(8), 305; https://doi.org/10.3390/urbansci9080305 - 5 Aug 2025
Abstract
Wellington, New Zealand, is highly vulnerable to disaster-induced food security crises due to its geography and geological characteristics, which can disrupt transportation and isolate the city following disasters. Urban agriculture (UA) has been proposed as a potential alternative food source for post-disaster scenarios. This study examined the potential of urban agriculture to enhance post-disaster food security by calculating vegetable self-sufficiency rates. Specifically, it evaluated the capacity of Wellington’s current community gardens to meet post-disaster vegetable demand in terms of both weight and nutrient content. Data collection employed mixed methods, including questionnaires, on-site observations and mapping, and the collection of high-resolution aerial imagery. Garden yields were estimated using self-reported data supported by literature benchmarks, while cultivated areas were quantified through on-site mapping and aerial imagery analysis. Six post-disaster food demand scenarios, based on different target populations, were used to develop an understanding of the range of potential produce yields. Weight-based results show that community gardens currently supply only 0.42% of the vegetable demand for residents living within a five-minute walk. This rate increased to 2.07% when targeting only vulnerable populations, and up to 10.41% when focusing on gardeners’ own households. However, at the city-wide level, the current capacity of community gardens to provide enough produce to feed people remained limited. Nutrient-based self-sufficiency was lower than the weight-based results; however, nutrient intake is particularly critical for vulnerable populations after disasters, underscoring the greater challenge of ensuring adequate nutrition through current urban food production. Beyond self-sufficiency, this study also addressed the role of UA in promoting food diversity and acceptability, as well as its social and psychological benefits, based on the questionnaires and on-site observations. The findings indicate that community gardens contribute meaningfully to post-disaster food security for gardeners and nearby residents, particularly for vulnerable groups with elevated nutritional needs. Despite the currently limited capacity of community gardens to feed residents, the findings suggest that Wellington could enhance post-disaster food self-reliance by diversifying UA types and optimizing land use to increase food production during and after a disaster. Realizing this potential will require strategic interventions, including supportive policies, a conducive social environment, and diversification—such as including private yards—all aimed at improving food access, availability, and nutritional quality during crises. The primary limitation of this study is the lack of comprehensive data on urban agriculture in Wellington and the wider New Zealand context. Addressing this data gap should be a key focus for future research to enable more robust assessments and evidence-based planning. Full article
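The self-sufficiency rates reported above are, in form, simple supply-to-demand ratios computed per scenario. The toy function below shows that calculation; the yields, areas, populations, and per-capita demand are placeholders, not the study's figures.

```python
# Toy self-sufficiency calculation: garden supply divided by scenario-specific
# vegetable demand, expressed as a percentage. All numbers are placeholders.
def self_sufficiency(yield_kg_per_m2, cultivated_m2, population, demand_kg_per_person):
    supply = yield_kg_per_m2 * cultivated_m2
    demand = population * demand_kg_per_person
    return 100.0 * supply / demand                # percent of demand met

# Example: one scenario per target population (all nearby residents vs. gardeners only).
print(round(self_sufficiency(3.0, 5_000, 40_000, 90.0), 2), "% of demand (residents)")
print(round(self_sufficiency(3.0, 5_000, 1_500, 90.0), 2), "% of demand (gardeners)")
```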
25 pages, 29559 KiB  
Article
CFRANet: Cross-Modal Frequency-Responsive Attention Network for Thermal Power Plant Detection in Multispectral High-Resolution Remote Sensing Images
by Qinxue He, Bo Cheng, Xiaoping Zhang and Yaocan Gan
Remote Sens. 2025, 17(15), 2706; https://doi.org/10.3390/rs17152706 - 5 Aug 2025
Abstract
Thermal Power Plants (TPPs) are widely used industrial facilities for electricity generation, and their detection is a key task in remote sensing image interpretation. However, detecting TPPs remains challenging due to their complex and irregular composition. Many traditional approaches focus on detecting compact, small-scale objects, while existing composite object detection methods are mostly part-based, limiting their ability to capture the structural and textural characteristics of composite targets such as TPPs. Moreover, most of them rely on single-modality data, failing to fully exploit the rich information available in remote sensing imagery. To address these limitations, we propose a novel Cross-Modal Frequency-Responsive Attention Network (CFRANet). Specifically, the Modality-Aware Fusion Block (MAFB) facilitates the integration of multi-modal features, enhancing inter-modal interactions. Additionally, the Frequency-Responsive Attention (FRA) module leverages both spatial and localized dual-channel information and utilizes Fourier-based frequency decomposition to separately capture high- and low-frequency components, thereby improving the recognition of TPPs by learning both detailed textures and structural layouts. Experiments conducted on our newly proposed AIR-MTPP dataset demonstrate that CFRANet achieves state-of-the-art performance, with a mAP50 of 82.41%. Full article
(This article belongs to the Section Remote Sensing Image Processing)
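The Fourier-based decomposition attributed to the FRA module can be illustrated with a low-pass/high-pass split of a feature map in the 2D FFT domain; the cutoff radius and what is done with the two components afterwards are assumptions in this sketch.

```python
# Sketch of a Fourier-based frequency split: a centred radial mask in the 2D FFT
# domain separates low-frequency structure from high-frequency texture.
import torch

def frequency_split(feat, radius=0.15):
    """feat: (B, C, H, W). Returns (low_freq, high_freq) with the same shape."""
    _, _, H, W = feat.shape
    spec = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
    yy = torch.linspace(-0.5, 0.5, H).view(H, 1).expand(H, W)
    xx = torch.linspace(-0.5, 0.5, W).view(1, W).expand(H, W)
    low_mask = ((xx ** 2 + yy ** 2).sqrt() <= radius).to(feat.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(spec * low_mask, dim=(-2, -1))).real
    high = feat - low                              # residual keeps fine textures/edges
    return low, high

low, high = frequency_split(torch.rand(2, 16, 64, 64))
print(low.shape, high.shape)
```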
24 pages, 4294 KiB  
Article
Post Hoc Event-Related Potential Analysis of Kinesthetic Motor Imagery-Based Brain-Computer Interface Control of Anthropomorphic Robotic Arms
by Miltiadis Spanos, Theodora Gazea, Vasileios Triantafyllidis, Konstantinos Mitsopoulos, Aristidis Vrahatis, Maria Hadjinicolaou, Panagiotis D. Bamidis and Alkinoos Athanasiou
Electronics 2025, 14(15), 3106; https://doi.org/10.3390/electronics14153106 - 4 Aug 2025
Abstract
Kinesthetic motor imagery (KMI), the mental rehearsal of a motor task without its actual performance, constitutes one of the most common techniques used for brain–computer interface (BCI) control for movement-related tasks. The effect of neural injury on motor cortical activity during execution and imagery remains under investigation in terms of activations, processing of motor onset, and BCI control. The current work aims to conduct a post hoc investigation of the event-related potential (ERP)-based processing of KMI during BCI control of anthropomorphic robotic arms by spinal cord injury (SCI) patients and healthy control participants in a completed clinical trial. For this purpose, we analyzed 14-channel electroencephalography (EEG) data from 10 patients with cervical SCI and 8 healthy individuals, recorded through Emotiv EPOC BCI, as the participants attempted to move anthropomorphic robotic arms using KMI. EEG data were pre-processed by band-pass filtering (8–30 Hz) and independent component analysis (ICA). ERPs were calculated at the sensor space, and analysis of variance (ANOVA) was used to determine potential differences between groups. Our results showed no statistically significant differences between SCI patients and healthy control groups regarding mean amplitude and latency (p < 0.05) across the recorded channels at various time points during stimulus presentation. Notably, no significant differences were observed in ERP components, except for the P200 component at the T8 channel. These findings suggest that brain circuits associated with motor planning and sensorimotor processes are not disrupted due to anatomical damage following SCI. The temporal dynamics of motor-related areas—particularly in channels like F3, FC5, and F7—indicate that essential motor imagery (MI) circuits remain functional. Limitations include the relatively small sample size that may hamper the generalization of our findings, the sensor-space analysis that restricts anatomical specificity and neurophysiological interpretations, and the use of a low-density EEG headset, lacking coverage over key motor regions. Non-invasive EEG-based BCI systems for motor rehabilitation in SCI patients could effectively leverage intact neural circuits to promote neuroplasticity and facilitate motor recovery. Future work should include validation against larger, longitudinal, high-density, source-space EEG datasets. Full article
(This article belongs to the Special Issue EEG Analysis and Brain–Computer Interface (BCI) Technology)
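The sensor-space pipeline listed above (8–30 Hz band-pass, epoching, ERP averaging, ANOVA on component amplitudes) can be sketched with SciPy alone; synthetic signals stand in for the Emotiv EPOC recordings, and the 128 Hz sampling rate, epoch window, and P200 window used here are assumptions.

```python
# Sketch of sensor-space ERP steps: band-pass filter (8-30 Hz), epoch averaging,
# and a one-way ANOVA on a component amplitude. Synthetic data, assumed parameters.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import f_oneway

fs = 128                                            # Hz (placeholder)
b, a = butter(4, [8 / (fs / 2), 30 / (fs / 2)], btype="band")

rng = np.random.default_rng(0)
def erp(n_trials):
    """Average band-passed epochs (0-600 ms post-stimulus) for one participant."""
    epochs = rng.normal(0, 1, (n_trials, int(0.6 * fs)))
    return filtfilt(b, a, epochs, axis=1).mean(axis=0)

# P200-like amplitude per participant (mean in a 150-250 ms window), per group.
win = slice(int(0.15 * fs), int(0.25 * fs))
sci = [erp(40)[win].mean() for _ in range(10)]      # 10 SCI patients
ctl = [erp(40)[win].mean() for _ in range(8)]       # 8 healthy controls
f_val, p_val = f_oneway(sci, ctl)
print(f"F={f_val:.2f}, p={p_val:.3f}")
```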
19 pages, 7432 KiB  
Article
Image-Level Anti-Personnel Landmine Detection Using Deep Learning in Long-Wave Infrared Images
by Jun-Hyung Kim and Goo-Rak Kwon
Appl. Sci. 2025, 15(15), 8613; https://doi.org/10.3390/app15158613 - 4 Aug 2025
Abstract
This study proposes a simple deep learning-based framework for image-level anti-personnel landmine detection in long-wave infrared imagery. To address challenges posed by the limited size of the available dataset and the small spatial size of anti-personnel landmines within images, we integrate two key techniques: transfer learning using pre-trained vision foundation models, and attention-based multiple instance learning to derive discriminative image features. We evaluate five pre-trained models, including ResNet, ConvNeXt, ViT, OpenCLIP, and InfMAE, in combination with attention-based multiple instance learning. Furthermore, to mitigate the reliance of trained models on irrelevant features such as artificial or natural structures in the background, we introduce an inpainting-based image augmentation method. Experimental results, conducted on a publicly available “legbreaker” anti-personnel landmine infrared dataset, demonstrate that the proposed framework achieves high precision and recall, validating its effectiveness for landmine detection in infrared imagery. Additional experiments are also performed on an aerial image dataset designed for detecting small-sized ship targets to further validate the effectiveness of the proposed approach. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Image Processing)
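Attention-based multiple instance learning, as used above for image-level labels, pools patch embeddings with learned attention weights before a single image-level classifier; the head below assumes 768-dimensional patch features (e.g., from a ViT-style backbone) and is a generic formulation, not the paper's exact module.

```python
# Generic attention-MIL head: patch features are pooled with learned attention
# weights into one bag feature, which is then classified at the image level.
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    def __init__(self, feat_dim=768, attn_dim=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh(),
                                  nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                 # (B, n_patches, feat_dim)
        weights = torch.softmax(self.attn(patch_feats), dim=1)   # (B, n_patches, 1)
        bag_feat = (weights * patch_feats).sum(dim=1)            # weighted pooling
        return self.classifier(bag_feat), weights.squeeze(-1)

# e.g. 196 patch tokens from a frozen pre-trained backbone for each infrared image
logits, attn = AttentionMILHead()(torch.rand(4, 196, 768))
print(logits.shape, attn.shape)                     # (4, 2) and (4, 196)
```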
23 pages, 4382 KiB  
Article
MTL-PlotCounter: Multitask Driven Soybean Seedling Counting at the Plot Scale Based on UAV Imagery
by Xiaoqin Xue, Chenfei Li, Zonglin Liu, Yile Sun, Xuru Li and Haiyan Song
Remote Sens. 2025, 17(15), 2688; https://doi.org/10.3390/rs17152688 - 3 Aug 2025
Abstract
Accurate and timely estimation of soybean emergence at the plot scale using unmanned aerial vehicle (UAV) remote sensing imagery is essential for germplasm evaluation in breeding programs, where breeders prioritize overall plot-scale emergence rates over subimage-based counts. This study proposes PlotCounter, a deep learning regression model based on the TasselNetV2++ architecture, designed for plot-scale soybean seedling counting. It employs a patch-based training strategy combined with full-plot validation to achieve reliable performance with limited breeding plot data. To incorporate additional agronomic information, PlotCounter is extended into a multitask learning framework (MTL-PlotCounter) that integrates sowing metadata such as variety, number of seeds per hole, and sowing density as auxiliary classification tasks. RGB images of 54 breeding plots were captured in 2023 using a DJI Mavic 2 Pro UAV and processed into an orthomosaic for model development and evaluation, showing effective performance. PlotCounter achieves a root mean square error (RMSE) of 6.98 and a relative RMSE (rRMSE) of 6.93%. The variety-integrated MTL-PlotCounter, V-MTL-PlotCounter, performs the best, with relative reductions of 8.74% in RMSE and 3.03% in rRMSE compared to PlotCounter, and outperforms representative YOLO-based models. Additionally, both PlotCounter and V-MTL-PlotCounter are deployed on a web-based platform, enabling users to upload images via an interactive interface, automatically count seedlings, and analyze plot-scale emergence, powered by a multimodal large language model. This study highlights the potential of integrating UAV remote sensing, agronomic metadata, specialized deep learning models, and multimodal large language models for advanced crop monitoring. Full article
(This article belongs to the Special Issue Recent Advances in Multimodal Hyperspectral Remote Sensing)
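The multitask extension described above amounts to a shared feature extractor with a count-regression head plus auxiliary classification heads for sowing metadata, trained with a weighted sum of losses; the backbone, head sizes, and loss weight in this sketch are assumptions (only a variety head is shown).

```python
# Sketch of a multitask counter: shared backbone, a count-regression head, and an
# auxiliary variety-classification head, trained with a weighted multitask loss.
import torch
import torch.nn as nn

class MultitaskCounter(nn.Module):
    def __init__(self, n_varieties=6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.count_head = nn.Linear(64, 1)            # plot-scale seedling count
        self.variety_head = nn.Linear(64, n_varieties)

    def forward(self, x):
        f = self.backbone(x)
        return self.count_head(f).squeeze(-1), self.variety_head(f)

model = MultitaskCounter()
imgs = torch.rand(4, 3, 256, 256)
counts = torch.tensor([120., 98., 143., 110.])
varieties = torch.tensor([0, 2, 1, 5])
pred_count, pred_var = model(imgs)
loss = nn.functional.mse_loss(pred_count, counts) \
       + 0.3 * nn.functional.cross_entropy(pred_var, varieties)  # weighted multitask loss
loss.backward()
print(float(loss))
```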
25 pages, 6934 KiB  
Article
Feature Constraints Map Generation Models Integrating Generative Adversarial and Diffusion Denoising
by Chenxing Sun, Xixi Fan, Xiechun Lu, Laner Zhou, Junli Zhao, Yuxuan Dong and Zhanlong Chen
Remote Sens. 2025, 17(15), 2683; https://doi.org/10.3390/rs17152683 - 3 Aug 2025
Abstract
The accelerated evolution of remote sensing technology has intensified the demand for real-time tile map generation, highlighting the limitations of conventional mapping approaches that rely on manual cartography and field surveys. To address the critical need for rapid cartographic updates, this study presents a novel multi-stage generative framework that synergistically integrates Generative Adversarial Networks (GANs) with Diffusion Denoising Models (DMs) for high-fidelity map generation from remote sensing imagery. Specifically, our proposed architecture first employs GANs for rapid preliminary map generation, followed by a cascaded diffusion process that progressively refines topological details and spatial accuracy through iterative denoising. Furthermore, we propose a hybrid attention mechanism that strategically combines channel-wise feature recalibration with coordinate-aware spatial modulation, enabling the enhanced discrimination of geographic features under challenging conditions involving edge ambiguity and environmental noise. Quantitative evaluations demonstrate that our method significantly surpasses established baselines in both structural consistency and geometric fidelity. This framework establishes an operational paradigm for automated, rapid-response cartography, demonstrating a particular utility in time-sensitive applications including disaster impact assessment, unmapped terrain documentation, and dynamic environmental surveillance. Full article
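The hybrid attention mechanism described above combines channel-wise recalibration with coordinate-aware spatial modulation; a compact sketch of that combination is given below, with the reduction ratio and the exact way row/column descriptors gate the features treated as assumptions.

```python
# Hedged sketch of a hybrid attention block: squeeze-and-excitation-style channel
# recalibration followed by a coordinate-aware gate built from pooled row/column
# descriptors. Not the paper's exact module.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, c, r=8):
        super().__init__()
        self.channel = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c // r, 1),
                                     nn.ReLU(inplace=True), nn.Conv2d(c // r, c, 1),
                                     nn.Sigmoid())
        self.coord = nn.Conv2d(c, c, 1)

    def forward(self, x):
        x = x * self.channel(x)                       # channel-wise recalibration
        col = x.mean(dim=2, keepdim=True)             # (B, C, 1, W) column descriptor
        row = x.mean(dim=3, keepdim=True)             # (B, C, H, 1) row descriptor
        gate = torch.sigmoid(self.coord(col + row))   # broadcasts to (B, C, H, W)
        return x * gate

print(HybridAttention(64)(torch.rand(1, 64, 32, 32)).shape)
```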
29 pages, 30467 KiB  
Article
Clay-Hosted Lithium Exploration in the Wenshan Region of Southeastern Yunnan Province, China, Using Multi-Source Remote Sensing and Structural Interpretation
by Lunxin Feng, Zhifang Zhao, Haiying Yang, Qi Chen, Changbi Yang, Xiao Zhao, Geng Zhang, Xinle Zhang and Xin Dong
Minerals 2025, 15(8), 826; https://doi.org/10.3390/min15080826 - 2 Aug 2025
Abstract
With the rapid increase in global lithium demand, exploration of the newly discovered lithium in the bauxite of the Wenshan area in southeastern Yunnan has become increasingly important. However, current research on clay-type lithium in the Wenshan area has primarily focused on local exploration, and large-scale predictive metallogenic studies remain limited. To address this, this study utilized multi-source remote sensing data from ZY1-02D and ASTER, combined with an ALOS 12.5 m DEM and Sentinel-2 imagery, to carry out remote sensing mineral identification, structural interpretation, and prospectivity mapping for clay-type lithium in the Wenshan area. The study indicates that clay-type lithium in the Wenshan area is controlled by NW-, EW-, and NE-trending linear structures and is mainly distributed in the region from north of the Wenshan–Malipo fault to south of the Guangnan–Funing fault. High-value areas of iron-rich silicates and iron–magnesium minerals revealed by ASTER data indicate lithium enrichment, while the identification of montmorillonite and cookeite from ZY1-02D data is strongly indicative of lithium. Field verification samples show Li₂O contents of up to 11,150 μg/g, with six samples meeting the comprehensive utilization criterion for lithium in bauxite (Li₂O ≥ 500 μg/g) and also showing enrichment in rare earth elements (REEs) and gallium (Ga). By integrating stratigraphic, structural, mineral identification, geochemical, and field verification data, ten mineral exploration target areas were delineated. This study validates the effectiveness of remote sensing technology in the exploration of clay-type lithium and provides an applicable workflow for similar environments worldwide. Full article