Search Results (1,204)

Search Parameters:
Keywords = forest network extraction

41 pages, 14138 KB  
Article
Hierarchical Extraction and Multi-Feature Optimization of Complex Crop Planting Structures in the Hetao Irrigation District Based on Multi-Source Remote Sensing Data
by Shan Yu, Rong Li, Wala Du, Lide Su, Buqi Na and Liangliang Yu
Remote Sens. 2026, 18(6), 937; https://doi.org/10.3390/rs18060937 - 19 Mar 2026
Abstract
Accurate extraction of crop planting structures is important for crop area and yield estimation, but complex and fragmented cropping patterns with overlapping phenology in the Hetao Irrigation District hinder reliable crop discrimination. This study proposes a hierarchical workflow that integrates vegetation masking with multi-source feature optimization for crop mapping. First, dual-temporal Sentinel-2 imagery (May and August) is used to generate a vegetation region-of-interest (ROI) mask via Otsu thresholding applied to the Normalized Difference Vegetation Index (NDVI), combined with pixel-wise maximum-value fusion to reduce phenology-driven omissions and background interference. Second, within the vegetation mask, Sentinel-2 spectral, vegetation-index, and texture features are combined with Sentinel-1 synthetic aperture radar (SAR) backscatter and SAR texture features to construct a multi-source feature set. Random Forest (RF) feature-importance ranking is used to select an effective feature subset, and four classifiers (RF, support vector machine (SVM), eXtreme Gradient Boosting (XGBoost), and convolutional neural network (CNN)) are compared under the same training/validation setting. The vegetation extraction achieves an overall accuracy of 91% (Kappa = 0.80). Using Sentinel-2 features only, the optimized subset with CNN attains the best performance (overall accuracy = 95%, Kappa = 0.93). Adding Sentinel-1 SAR texture features provides an additional improvement (overall accuracy = 96%, Kappa = 0.94), particularly for classes prone to confusion in fragmented plots. Area proportions derived from the final map are consistent with statistical yearbook data (percentage errors: maize 3.45%, sunflower 2.66%, wheat 0.11%, tomato 0.92%) under the study conditions. This workflow supports practical crop-structure monitoring in complex irrigation districts.
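The masking step this abstract describes (dual-date NDVI, pixel-wise maximum fusion, then a single Otsu threshold) can be sketched with NumPy alone; the synthetic NDVI values below are assumptions for illustration, not the authors' data or code.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # class-0 (background) probability
    mu = np.cumsum(p * centers)        # cumulative first moment
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

# Synthetic NDVI for two dates: vegetation pixels around 0.7, background around 0.1
rng = np.random.default_rng(0)
ndvi_may = np.concatenate([rng.normal(0.7, 0.05, 500), rng.normal(0.1, 0.05, 500)])
ndvi_aug = ndvi_may + rng.normal(0.0, 0.02, 1000)
fused = np.maximum(ndvi_may, ndvi_aug)   # pixel-wise maximum-value fusion
mask = fused > otsu_threshold(fused)     # vegetation ROI mask
```

Taking the per-pixel maximum before thresholding is what guards against phenology-driven omissions: a pixel only needs to look vegetated on one of the two dates to enter the mask.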

29 pages, 29190 KB  
Article
Metallogenic Prediction for Copper–Nickel Sulfide Deposits in the Eastern and Central Tianshan Based on Multi-Modal Feature Fusion
by Haonan Wang, Bimin Zhang, Miao Xie, Yue Sun, Wei Ye, Chunfang Dong, Zimu Yang and Xueqiu Wang
Minerals 2026, 16(3), 318; https://doi.org/10.3390/min16030318 - 18 Mar 2026
Abstract
The deep integration of machine learning technology with geological prospecting has brought to the forefront a key challenge: how to construct geological-mineralization models by fusing multi-source data, select model features with guidance from metallogenic factors, build multi-source metallogenic prediction models with geological constraints, and ultimately achieve a thorough integration of domain knowledge and machine intelligence. The Eastern-Central Tianshan region is one of China’s most important copper–nickel mineral resource bases, predominantly hosting magmatic copper–nickel sulfide deposits with significant resource potential. In this context, this paper proposes a metallogenic prediction model based on multi-modal feature fusion technology. The model employs a Residual Neural Network (ResNet) incorporating a Squeeze-and-Excitation (SE) attention mechanism and a Multi-Layer Perceptron (MLP) to extract features from different modalities. It integrates multi-source data, including geochemical information, geological metallogenic factors, and aeromagnetic data. A cross-modal feature interaction module, constructed using attention weighting and a gating mechanism, enables deep fusion of the features. After training, the model achieved a prediction accuracy of 97% on the test set. Compared to a unimodal model constructed using Random Forest, the confidence and discriminative capability of the training results were significantly enhanced, validating the effectiveness of multi-modal feature fusion. Applying the trained model to the study area, a total of 11 prospective metallogenic zones were delineated. These include 4 zones in the peripheries of known deposits and 7 zones in previously unexplored (blank) areas. Notably, some known mineral occurrences fall within the predicted blank-area targets, validating the feasibility and significant value of multi-modal feature fusion in mineral prediction. This work provides a novel methodology for the subsequent integrated processing of multi-source data.
(This article belongs to the Special Issue Geochemical Exploration for Critical Mineral Resources, 2nd Edition)
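The "attention weighting and a gating mechanism" for cross-modal fusion mentioned above admits a minimal sketch: score each modality's feature vector, softmax the scores into attention weights, fuse by weighted sum, then scale the result with a learned sigmoid gate. The weight vectors and modality names below are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fusion(modalities, w_attn, w_gate):
    """Attention-weight the modality features, then gate the fused vector."""
    feats = np.stack(modalities)                     # (n_modalities, d)
    alpha = softmax(feats @ w_attn)                  # one attention weight per modality
    fused = alpha @ feats                            # weighted sum, shape (d,)
    gate = 1.0 / (1.0 + np.exp(-(fused @ w_gate)))   # scalar gate in (0, 1)
    return gate * fused

rng = np.random.default_rng(1)
d = 8  # feature dimension (assumed)
geochem, geology, aeromag = (rng.normal(size=d) for _ in range(3))
out = gated_fusion([geochem, geology, aeromag],
                   rng.normal(size=d), rng.normal(size=d))
```

In the paper's deep-learning setting the same idea operates on learned ResNet/MLP embeddings with trainable weights; this sketch only shows the fusion arithmetic.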

30 pages, 7250 KB  
Article
Differentiable Physical Modeling for Forest Above-Ground Biomass Retrieval by Unifying a Water Cloud Model and Deep Learning
by Cui Zhao, Rui Shi, Yongjie Ji, Wei Zhang, Wangfei Zhang, Xiahong He and Han Zhao
Remote Sens. 2026, 18(6), 912; https://doi.org/10.3390/rs18060912 - 17 Mar 2026
Abstract
To address the limitations of traditional forest above-ground biomass (AGB) retrieval methods—namely, the restricted accuracy of physical models and the limited generalization ability of purely data-driven models—this study proposes a differentiable physical modeling (DPM) approach for forest AGB estimation. The method adopts the water cloud model (WCM) as a physics-based framework, grounded in radiative transfer theory, and integrates C-band synthetic aperture radar (SAR) data with multispectral imagery. Within the PyTorch tensor computation framework, automatic differentiation (AD) is employed to seamlessly couple the WCM with a deep fully connected neural network (DFCNN), enabling a differentiable implementation of the WCM. Using mean squared error (MSE) as the loss function, the neural network parameters are optimized through backpropagation and gradient descent, thereby constructing an end-to-end trainable DPM model that effectively retrieves forest AGB while preserving physical interpretability and generalization capability. To validate the proposed method, two representative test sites were selected: Simao in Pu’er, Yunnan Province, and Genhe in Inner Mongolia. GF-3 PolSAR and RADARSAT-2 data were used to extract backscattering coefficients and compute the radar vegetation index (RVI), while Landsat 8 OLI imagery was employed to calculate the normalized difference vegetation index (NDVI), difference vegetation index (DVI), and soil-adjusted vegetation index (SAVI). These datasets, together with ASTER GDEM, field-measured biomass, and other relevant datasets, were integrated to construct a multisource dataset combining remote sensing and ground observations. The performance of the DPM model was then compared with the traditional WCM and several data-driven models, including the fully connected neural network (FNN), generalized regression neural network (GRNN), random forest (RF), and Adaptive Boosting (AdaBoost). The results indicate that the DPM model achieved R2 = 0.60, RMSE = 24.23 Mg/ha, Bias = 0.4 Mg/ha, and ubRMSE = 22.43 Mg/ha in Simao, and R2 = 0.48, RMSE = 33.29 Mg/ha, Bias = 0.87 Mg/ha, and ubRMSE = 33.28 Mg/ha in Genhe, demonstrating consistently better performance than both the WCM and all tested data-driven models. The DPM model demonstrated consistent performance across ecologically contrasting forest regions. It alleviated the systematic overestimation bias of purely data-driven models and overcame the limitations in predictive accuracy resulting from the simplified structure of the WCM. The differentiability of the WCM enables the loss function errors to be backpropagated through the neural network, thereby allowing the optimization of the physical model parameters. Overall, the DPM framework integrates the advantages of both physical models and data-driven approaches, providing an estimation method with acceptable accuracy for forest AGB retrieval. It also offers theoretical and practical insights for the integration of deep learning and physical knowledge in other research fields.
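For context, the water cloud model that serves as the physics backbone here is commonly written in its Attema–Ulaby form (symbols per the standard formulation, not taken from this paper):

```latex
\sigma^{0} \;=\; A\,V\cos\theta\,\bigl(1 - e^{-2BV/\cos\theta}\bigr)
           \;+\; \sigma^{0}_{\mathrm{soil}}\,e^{-2BV/\cos\theta}
```

where V is a vegetation descriptor (e.g., NDVI or RVI), \theta is the local incidence angle, and A, B are canopy parameters. Because every term is smooth, implementing the model in a tensor framework lets MSE gradients flow through the exponentials back to both the network weights and the physical parameters A and B, which is what makes the WCM "differentiable" in the DPM sense.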

23 pages, 153696 KB  
Article
Fine Mapping of Sparse Populus euphratica Forests Based on GF-2 Satellite Imagery and Deep Learning Models
by Hao Li, Jiawei Zou, Qinyu Zhao, Suhong Liu and Qingdong Shi
Remote Sens. 2026, 18(6), 902; https://doi.org/10.3390/rs18060902 - 15 Mar 2026
Abstract
Populus euphratica is a critical constructive species in arid desert regions, serving as a “natural barrier” for oasis protection. The sustainable management of Populus euphratica forests is directly related to regional ecological security, and the fine identification of sparse Populus euphratica forests is essential for the conservation of natural Populus euphratica forests. Currently, most mapping studies on Populus euphratica distribution focus on the extraction of dense, contiguous Populus euphratica forests, with insufficient attention paid to the identification of sparse Populus euphratica forests. This study utilizes Gaofen-2 (GF-2) satellite imagery as the data source and takes a typical distribution area of sparse Populus euphratica forests in the Tarim River Basin as the study site. It systematically evaluates the performance of nine mainstream deep learning models, including U-Net, DeepLabV3+, and SegFormer, in the task of sparse Populus euphratica forest identification. The results indicate that: (1) The false-color sample set, synthesized from near-infrared, red, and green bands, contributes to improved model accuracy. Compared to the true-color (red, green, blue bands) dataset, the average Intersection over Union (IoU) of the nine models shows a relative improvement of approximately 20%. (2) For the sparse Populus euphratica forest identification task based on the false-color dataset, four models—U-Net, U-Net++, MA-Net, and DeepLabV3+—exhibited excellent performance, with IoU exceeding 75%. (3) Using U-Net as the baseline model, this study integrated the max-pooling indices mechanism, atrous spatial pyramid pooling, and residual connection modules to construct a semantic segmentation network tailored for sparse Populus euphratica forests, named the Sparse Populus euphratica Segmentation Network (SPS-Net). This model achieved an IoU of 80%, a relative improvement of approximately 6.3% over the baseline model, and demonstrated good stability in large-scale classification tests. The identification scheme for sparse Populus euphratica forests constructed using GF-2 imagery and deep learning models proposed in this study can provide effective technical support for the refined monitoring and protection of natural Populus euphratica forests.
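Intersection over Union, the accuracy metric quoted throughout this abstract, is simple to compute for binary masks; the two tiny masks below are invented for illustration.

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
score = iou(pred, target)  # intersection = 2 pixels, union = 4 pixels
```

A "relative improvement of approximately 20%" in the text means the ratio of IoU scores, not an absolute 20-point gain.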

26 pages, 4974 KB  
Article
Soil Suborder Discrimination Using Machine Learning Is Improved by SWIR Imaging Compared with Full VIS–NIR–SWIR Spectra
by Daiane de Fatima da Silva Haubert, Nicole Ghinzelli Vedana, Weslei Augusto Mendonça, Karym Mayara de Oliveira, Caio Almeida de Oliveira, João Vitor Ferreira Gonçalves, José Alexandre M. Demattê, Roney Berti de Oliveira, Amanda Silveira Reis, Renan Falcioni and Marcos Rafael Nanni
Remote Sens. 2026, 18(6), 898; https://doi.org/10.3390/rs18060898 - 15 Mar 2026
Abstract
Rapid, standardised discrimination of soil taxonomic units remains challenging when relying solely on conventional field descriptions and laboratory analyses, particularly at high sampling densities. This study evaluated whether proximal spectroscopy and hyperspectral imaging can support the classification of Brazilian Soil Classification System (SiBCS) suborders and pedogenetic horizons when surface and subsurface spectra are treated separately. Six intact soil monoliths (0.12 × 1.60 m) were collected in Paraná State, southern Brazil, representing one Organossolo (Ooy), three Latossolos (LVd, LVd1, and LVd2) and two Argissolos (PVAd and PVd). For each monolith, 800 spectra were acquired per sensor with a non-imaging VIS–NIR–SWIR spectroradiometer (350–2500 nm), and 800 spectra per sensor per monolith were extracted from the SWIR hyperspectral images (1200–2450 nm). Principal component analysis (PCA) was used to summarise spectral variability, and supervised classification was performed via k-nearest neighbours, random forest, decision tree, and gradient boosting for suborders (10-fold cross-validation), and a neural network was used for within-profile horizon classification. PCA indicated that most of the spectral variance was captured by a dominant axis, with clearer separation among suborders in the SWIR space than in the full VIS–NIR–SWIR range. With respect to suborder classification, subsurface spectra outperformed surface spectra, and SWIR outperformed VIS–NIR–SWIR: the best accuracies were 0.96 for subsurface SWIR (gradient boosting; AUC = 0.99; MCC = 0.95) and 0.89 for surface SWIR (k-nearest neighbours; AUC = 0.98; MCC = 0.87). Within-profile horizon classification via VIS–NIR–SWIR achieved accuracies of 0.84–0.97 with the neural network, with most misclassifications occurring between adjacent horizons. Overall, subsurface SWIR information provided the most reliable basis for taxonomic discrimination, whereas horizon classification was feasible but reflected gradual spectral transitions along the profile.
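PCA of spectra, as used above to summarise variability, reduces to an SVD of the mean-centred data matrix. The synthetic "spectra" below (one dominant loading axis plus small noise, mimicking the paper's finding of a single dominant component) are assumed values for illustration.

```python
import numpy as np

def pca(X, k):
    """PCA via SVD of mean-centred data: component scores and explained-variance ratios."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T               # projections onto the top-k components
    ratio = (S**2 / (S**2).sum())[:k]    # fraction of variance per component
    return scores, ratio

rng = np.random.default_rng(2)
loading = rng.normal(size=50)                       # one dominant spectral axis
X = rng.normal(size=(100, 1)) * loading + 0.05 * rng.normal(size=(100, 50))
scores, ratio = pca(X, 2)
```

When one axis carries almost all the variance, as here, class separation must be judged in the low-dimensional score space rather than on raw spectra.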

20 pages, 5794 KB  
Article
Cotton Boll Extraction and Boll Number Estimation from UAV RGB Imagery Before and After Defoliation
by Na Su, Maoguang Chen, Caixia Yin, Ke Wang, Siyuan Chen, Zhenyang Wang, Liyang Liu, Yue Zhao and Qiuxiang Tang
Agronomy 2026, 16(6), 617; https://doi.org/10.3390/agronomy16060617 - 14 Mar 2026
Abstract
Accurate cotton boll identification and boll number estimation from UAV imagery are essential for large-scale yield prediction and precision management, yet severe leaf occlusion and complex canopy backgrounds often hinder robust performance. Here, UAV RGB images were acquired 3 days before defoliant application and at 3, 6, 9, 12, 15, and 18 days after defoliation. Cotton bolls were extracted using Mahalanobis distance, a support vector machine, and a neural network. Boll number was then estimated using an improved random forest model with multi-feature fusion. Across all defoliation stages, the neural network (NN) produced the most accurate and stable boll extraction, achieving a maximum Kappa of 0.914, an overall accuracy of 95.77%, and an F1 score of 0.96. Extraction accuracy increased rapidly from 3 to 9 days after application and stabilized from 12 to 18 days. For boll number estimation, fusing the boll pixel ratio with color indices and texture features improved accuracy and consistency over time; the best performance was obtained at 18 days after application (R2 = 0.7264; rRMSE = 4.9%). Overall, imagery acquired 15–18 days after defoliation provided the most reliable estimation window, supporting operational pre-harvest assessment and harvest-timing decisions.
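Mahalanobis-distance extraction, the simplest of the three classifiers above, assigns each pixel to the class whose mean is nearest under the (inverse-)covariance metric. The RGB class statistics below (bright bolls vs. green canopy) are assumed values for illustration, not the study's data.

```python
import numpy as np

def mahalanobis_classify(X, means, cov):
    """Assign each pixel to the class whose mean is nearest in Mahalanobis distance."""
    inv = np.linalg.inv(cov)
    d2 = [np.einsum('ij,jk,ik->i', X - mu, inv, X - mu) for mu in means]
    return np.argmin(np.stack(d2), axis=0)

rng = np.random.default_rng(3)
# Hypothetical RGB samples: bright open bolls vs. darker green canopy
boll   = rng.normal([0.85, 0.85, 0.80], 0.05, size=(200, 3))
canopy = rng.normal([0.20, 0.45, 0.20], 0.05, size=(200, 3))
X = np.vstack([boll, canopy])
means = [boll.mean(axis=0), canopy.mean(axis=0)]
cov = np.cov(np.vstack([boll - means[0], canopy - means[1]]).T)  # pooled covariance
labels = mahalanobis_classify(X, means, cov)  # 0 = boll, 1 = canopy
```

The `einsum` evaluates the quadratic form (x − μ)ᵀ Σ⁻¹ (x − μ) for every pixel at once, which is why this method scales to full UAV mosaics.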

17 pages, 3905 KB  
Article
UAV Multispectral Imagery Combined with Canopy Vertical Layering Information for Leaf Nitrogen Content Inversion in Cotton
by Kaixuan Li, Chunqi Yin, Yangbo Ye, Xueya Han and Sanmin Sun
Agronomy 2026, 16(6), 607; https://doi.org/10.3390/agronomy16060607 - 12 Mar 2026
Abstract
Leaf nitrogen concentration (LNC) exhibits pronounced vertical heterogeneity across canopy layers, which affects the accuracy of nitrogen diagnosis derived from UAV-based remote sensing imagery. To address the differential contributions of leaf nitrogen from distinct canopy strata and the limitations associated with single-source features, this study proposes an integrated framework that combines cumulative LNC indicators across canopy layers with multi-source feature sets (vegetation indices and texture features). Centered on three core technical innovations—(1) incorporating canopy-layer aggregation logic into LNC modeling, (2) integrating spectral and structural information through CNN-based feature fusion, and (3) combining deep feature extraction with gradient boosting regression to improve robustness under multi-stage conditions—the framework systematically evaluates three machine learning algorithms: Random Forest (RF), a Convolutional Neural Network–Extreme Gradient Boosting hybrid model (CNN_XGBoost), and K-Nearest Neighbor (KNN) for cotton LNC estimation across multiple growth stages. The results demonstrate that cumulative canopy-layer nitrogen indicators more effectively represent overall plant nitrogen status than single-layer measurements. The integration of multi-source features further enhances model performance. Under both single-variable inputs and combined vegetation-index and texture-feature (VI–TF) sets, the CNN_XGBoost model consistently outperforms the other models in calibration accuracy and stability across all growth stages. Its optimal performance occurs during the cotton flowering and boll stage, achieving a calibration R2 of 0.921. Overall, the proposed framework substantially improves the estimation accuracy of cotton LNC and provides both a theoretical foundation and technical support for precision nitrogen management and sustainable agricultural development.
(This article belongs to the Section Precision and Digital Agriculture)
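One way to picture a "cumulative canopy-layer nitrogen indicator" is a top-down running mean of per-layer LNC weighted by leaf area. This is a hypothetical sketch: the layer values, the leaf-area weighting, and the exact aggregation rule are all assumptions, not taken from the paper.

```python
import numpy as np

# Per-layer leaf N concentration (%), top to bottom, and per-layer leaf area
# index; both sets of values are invented for illustration.
lnc = np.array([2.8, 2.4, 1.9])   # upper, middle, lower layers
lai = np.array([1.2, 1.5, 0.9])

# Cumulative indicator: leaf-area-weighted mean LNC of the top k layers.
cumulative_lnc = np.cumsum(lnc * lai) / np.cumsum(lai)
# cumulative_lnc[0] is the upper layer alone; cumulative_lnc[-1] is the whole canopy
```

The point of such an indicator is that the upper layers, which dominate what the UAV sensor actually sees, enter every cumulative value, while deeper layers are progressively blended in.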

20 pages, 15297 KB  
Article
UAV-Based Stand Density Estimation for Aboveground Biomass Mapping in Moso Bamboo Forests
by Mengyi Hu, Nan Li, Dexuan Zhao, Xiaojun Xu, Tianzhen Wu, Jing Ma, Shijun Zhang, Yong Liang, Cancan Yang, Wei Zhang, Yali Zhang and Longwei Li
Remote Sens. 2026, 18(6), 872; https://doi.org/10.3390/rs18060872 - 12 Mar 2026
Abstract
The accurate estimation of aboveground biomass (AGB) in Moso bamboo forests is critical for assessing their carbon sequestration potential and supporting sustainable management. Satellite-based approaches are often constrained by signal saturation and mixed-pixel effects, whereas Unmanned Aerial Vehicle (UAV) imagery enables precise individual tree detection, overcoming these limitations. In this study, we propose a stand density (SD)-driven AGB estimation framework using high-resolution UAV RGB imagery. Individual bamboo positions were extracted using the Revised Local Maximum (RLM) algorithm, which achieved an optimal accuracy at a 2.5 m sampling interval (OA = 82.20%). Using 85 ground-truth plots, we developed six SD-AGB models and evaluated them via 10-fold cross-validation and independent UAV validation (10 plots). The Artificial Neural Network (ANN) model outperformed the others, with strong calibration (R2 = 0.94, RMSE = 3.78 Mg/ha), robust cross-validation (R2 = 0.84 ± 0.06, RMSE = 5.24 ± 0.67 Mg/ha), and reliable independent validation (R2 = 0.87, RMSE = 4.56 Mg/ha). Spatial mapping revealed a total of 14,190 bamboo plants with an average AGB of 32.80 Mg/ha. This UAV-based SD-AGB framework provides a robust, scalable, and cost-effective tool for precise biomass estimation, supporting sustainable bamboo forest management and carbon sequestration strategies and progress towards SDG 15. Full article
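The core of any local-maximum stem detector is a sliding-window test: a cell counts as a detection if it is the strict maximum of its neighbourhood. The sketch below shows only that baseline step on a synthetic two-crown surface (the paper's Revised Local Maximum algorithm adds refinements not reproduced here).

```python
import numpy as np

def local_maxima(chm, radius):
    """Flag cells that are the strict, unique maximum within a square window."""
    h, w = chm.shape
    peaks = np.zeros_like(chm, dtype=bool)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            win = chm[i0:i1, j0:j1]
            if chm[i, j] == win.max() and (win == chm[i, j]).sum() == 1:
                peaks[i, j] = True
    return peaks

# Synthetic height surface with two Gaussian "crowns" on a 20 x 20 grid
y, x = np.mgrid[0:20, 0:20]
chm = (np.exp(-((x - 5)**2 + (y - 5)**2) / 8)
       + np.exp(-((x - 14)**2 + (y - 14)**2) / 8))
peaks = local_maxima(chm, radius=3)
```

The window radius plays the role of the paper's sampling interval: too small and one crown yields several peaks, too large and neighbouring culms merge into one detection.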

29 pages, 11795 KB  
Article
Empirical Evaluation of a CNN-ResNet-RF Hybrid Model for Occupancy Rate Prediction in Passive Ultra-Low-Energy Buildings
by Yiwen Liu, Yibing Xue, Chunlu Liu and Runyu Wang
Urban Sci. 2026, 10(3), 150; https://doi.org/10.3390/urbansci10030150 - 11 Mar 2026
Abstract
Accurate occupancy information is critical for optimizing energy efficiency in buildings. Hybrid machine learning models have demonstrated great potential in previous studies; however, their application in passive ultra-low-energy buildings remains underexplored. This study conducts an empirical evaluation of real-time occupancy rate prediction using a CNN-ResNet-RF hybrid model based on multi-source environmental and behavioral data from a passive ultra-low-energy educational building. The model integrates Convolutional Neural Networks (CNN) for local feature extraction, Residual Networks (ResNet) to enhance deep feature representation, and Random Forests (RF) for ensemble-based generalization. Indoor CO2 concentration exhibits the strongest linear correlation with occupancy rate (r = 0.54), indicating a meaningful association with occupancy dynamics. The model demonstrates strong predictive performance on the test set, with a coefficient of determination (R2) of 0.964, a root mean square error (RMSE) of 0.054, and a residual prediction deviation (RPD) exceeding 5. Compared with baseline models such as CNN, RF, and CNN-RF, the proposed framework exhibits generally lower prediction errors and improved stability. Further lightweight compression experiments reveal that the structured compact CNN-ResNet-RF-25 variant achieves even better accuracy (R2 = 0.9748, RMSE = 0.0449, RPD = 6.327) while substantially reducing model complexity, demonstrating strong deployment potential in resource-constrained environments.
(This article belongs to the Topic Geospatial AI: Systems, Model, Methods, and Applications)
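The three metrics this abstract reports (R2, RMSE, and RPD, where RPD is the standard deviation of the observations divided by the RMSE) are quick to compute; the toy occupancy values below are invented for illustration.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return R^2, RMSE, and residual prediction deviation (RPD = SD / RMSE)."""
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid**2))
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((y_true - y_true.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    rpd = y_true.std(ddof=1) / rmse
    return r2, rmse, rpd

# Hypothetical occupancy rates (fractions of capacity) and model predictions
y_true = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
y_pred = np.array([0.12, 0.28, 0.52, 0.69, 0.88])
r2, rmse, rpd = regression_metrics(y_true, y_pred)
```

An RPD above roughly 3 is conventionally read as a model useful for quantitative prediction, which is why the paper highlights RPD values exceeding 5.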

17 pages, 1362 KB  
Article
Unlocking Tumor Aggressiveness in Endometrial Cancer: AI-Driven PET/CT Radiomics and Machine Learning for Prediction of High-Risk Tumor Histology
by Samet Yagci, Evrim Erdemoglu, Mehmet Erdogan, Mustafa Avci, Ahmet Tunc, Ismail Ozkoc and Sevim Sureyya Sengul
Cancers 2026, 18(6), 905; https://doi.org/10.3390/cancers18060905 - 11 Mar 2026
Abstract
Purpose: Accurate preoperative risk stratification in endometrial cancer (EC) is essential for guiding surgical and therapeutic decisions. This study aimed to evaluate the discriminative performance of [18F]-FDG PET/CT-derived radiomic features combined with machine learning models for differentiating low-risk (LRH-EC) and high-risk histology (HRH-EC) subtypes. Methods: A total of 159 patients with histopathologically confirmed EC who underwent preoperative [18F]-FDG PET/CT were retrospectively analyzed. Radiomic features were extracted using LIFEx version 7.4.0 software following IBSI guidelines. After FDR correction and Pearson correlation–based redundancy reduction (|r| > 0.80), 16 radiomic features were retained for modeling. Three feature configurations (Conventional PET parameters, Radiomics16, and Combined) were evaluated. Machine learning models were developed using stratified 5-fold cross-validation. Model performance was assessed using AUC, accuracy, sensitivity, specificity, F1-score, Wilson confidence intervals, DeLong’s test, and McNemar’s test. Results: Artificial Neural Network (ANN; AUC = 0.709) and Random Forest (RF; AUC = 0.686) achieved the highest discriminative performance within the Radiomics16 feature set. No statistically significant superiority between algorithms or feature configurations was observed by DeLong analysis. However, McNemar’s test demonstrated significant patient-level classification differences for the Combined ANN model (p < 0.001). NGTDM_Coarseness and SUVmin emerged as the most influential features, reflecting tumor heterogeneity and metabolic activity. Conclusions: [18F]-FDG PET/CT-based radiomics combined with machine learning provides moderate yet consistent discrimination between LRH-EC and HRH-EC. While external validation is required, this approach may support noninvasive preoperative risk stratification in endometrial cancer.
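The correlation-based redundancy reduction described in the Methods (drop a feature when |r| with an already-kept feature exceeds 0.80) can be sketched as a greedy filter; the three synthetic features and their names below are illustrative assumptions.

```python
import numpy as np

def drop_redundant(X, names, r_max=0.80):
    """Greedily keep features whose |Pearson r| with every kept feature is <= r_max."""
    corr = np.corrcoef(X, rowvar=False)
    kept = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) <= r_max for k in kept):
            kept.append(j)
    return [names[j] for j in kept]

rng = np.random.default_rng(4)
a = rng.normal(size=200)
b = a + 0.05 * rng.normal(size=200)   # near-duplicate of a (|r| close to 1)
c = rng.normal(size=200)              # independent feature
X = np.column_stack([a, b, c])
kept = drop_redundant(X, ['a', 'b', 'c'])
```

Note that a greedy pass is order-dependent: the feature encountered first in each correlated cluster survives, which is one reason such filters are usually applied after a univariate ranking step (here, the FDR correction).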

19 pages, 1106 KB  
Article
Clinical Prediction of Functional Decline in Multiple Sclerosis Using Volumetry-Based Synthetic Brain Networks
by Alin Ciubotaru, Alexandra Maștaleru, Thomas Gabriel Schreiner, Cristiana Filip, Roxana Covali, Laura Riscanu, Robert-Valentin Bilcu, Laura-Elena Cucu, Sofia Alexandra Socolov-Mihaita, Diana Lăcătușu, Florina Crivoi, Albert Vamanu, Ioana Martu, Lucia Corina Dima-Cozma, Romica Sebastian Cozma and Oana-Roxana Bitere-Popa
Life 2026, 16(3), 459; https://doi.org/10.3390/life16030459 - 11 Mar 2026
Abstract
Background: Disability progression in multiple sclerosis (MS) is increasingly recognized as a consequence of large-scale brain network disruption rather than isolated regional damage. Although diffusion tensor imaging (DTI) is the reference method for assessing structural connectivity, its limited availability restricts widespread clinical application. There is therefore a critical need for alternative approaches capable of capturing network-level alterations using routinely acquired MRI data. Objective: This study aimed to determine whether synthetic structural connectivity matrices derived from standard regional volumetric MRI can capture clinically meaningful network alterations in MS and predict subsequent functional progression, particularly upper limb decline. Methods: Regional brain volumetry was obtained from routine T1-weighted MRI using an automated, clinically approved volumetric pipeline. Synthetic structural connectivity matrices were generated by integrating principles of structural covariance, distance-dependent connectivity, and disease-specific vulnerability patterns. Graph-theoretical network metrics were extracted to characterize global and regional topology. Machine learning models, including logistic regression, support vector machines, random forests, and gradient boosting, were trained to predict clinical progression defined by worsening on the 9-Hole Peg Test. Dimensionality reduction was performed using principal component analysis, and model performance was evaluated using balanced accuracy, AUC-ROC, and resampling-based validation. Feature importance analyses were conducted to identify network vulnerability patterns. Results: Synthetic connectivity networks exhibited biologically plausible properties, including preserved but attenuated small-world organization. Global efficiency showed a strong inverse correlation with disability severity (EDSS). Patients with clinical progression demonstrated marked reductions in network integration and segregation, alongside increased characteristic path length. Machine learning models achieved robust prediction of upper limb functional decline, with ensemble-based methods performing best (balanced accuracy > 80%, AUC-ROC up to 0.85). A limited subset of connections accounted for a disproportionate share of predictive power, predominantly involving frontoparietal associative networks, thalamocortical pathways, and inter-hemispheric connections. In a longitudinal subset, network-level alterations preceded measurable clinical deterioration by several months. Conclusions: Synthetic structural connectivity derived from routine volumetric MRI captures clinically relevant network-level disruption in multiple sclerosis and enables accurate prediction of functional progression. By bridging network neuroscience with widely accessible imaging data, this framework provides a pragmatic alternative for connectomic analysis when diffusion imaging is unavailable and supports a network-based understanding of disease evolution in MS. Full article
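Global efficiency, the graph metric this abstract correlates with EDSS, is the mean inverse shortest-path length over all node pairs. A minimal binary-graph version using Floyd–Warshall distances (a sketch on a toy 4-node path graph, not the study's weighted pipeline):

```python
import numpy as np

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs (binary graph)."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):  # Floyd-Warshall relaxation
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    with np.errstate(divide='ignore'):
        inv = 1.0 / dist
    np.fill_diagonal(inv, 0.0)
    return inv.sum() / (n * (n - 1))

# A 4-node path graph: 0-1-2-3
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
e = global_efficiency(adj)
```

Because unreachable pairs contribute 1/∞ = 0, the metric degrades gracefully on fragmented networks, which is why it is favoured over mean path length in clinical connectomics.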

30 pages, 30836 KB  
Article
CrownViM: Context Clustering Meets Vision Mamba for Precise Tree Crown Segmentation in Aerial RGB Imagery
by Erkang Shi, Ziyang Shi, Fulin Su, Lin Li, Ruifeng Liu, Fangying Wan and Kai Zhou
Remote Sens. 2026, 18(6), 860; https://doi.org/10.3390/rs18060860 - 11 Mar 2026
Viewed by 100
Abstract
The proliferation of high-spatial-resolution remote sensing data is transforming forest attribute estimation, replacing traditional manual approaches with deep learning-based Individual Tree Crown Delineation (ITCD). Nevertheless, accurate ITCD boundary extraction from aerial RGB imagery faces persistent challenges: boundary ambiguity from complex crown occlusion in [...] Read more.
The proliferation of high-spatial-resolution remote sensing data is transforming forest attribute estimation, replacing traditional manual approaches with deep learning-based Individual Tree Crown Delineation (ITCD). Nevertheless, accurate ITCD boundary extraction from aerial RGB imagery faces persistent challenges: boundary ambiguity from complex crown occlusion in mixed forests, scarcity of high-quality annotations, and computational limitations of existing methods in dense forests. The latter manifests particularly in overlapping crown scenarios through constrained receptive fields, leading to substantial parameter requirements, computational inefficiency, and compromised accuracy. To overcome these limitations, we propose CrownViM, a novel architecture based on a bidirectional State Space Model (SSM). The framework integrates a linear-complexity Context Clustering Vision Mamba (CCViM) encoder for efficient global context modeling and employs a MaskFormer decoder for precise boundary prediction. We further introduce a partial-supervision loss function to reduce dependence on exhaustively annotated crown masks. Evaluations on OAM-TCD and the single-tree segmentation dataset (SSD) show CrownViM achieves significant segmentation accuracy improvements while maintaining a lightweight profile (39.6 M parameters). It substantially outperforms Convolutional Neural Network (CNN), Vision Transformer (ViT), and hybrid-based baselines when processing overlapping crowns and structurally complex scenes. As the first implementation of state space models in ITCD, CrownViM effectively addresses core limitations in global context capture, computational efficiency, and boundary definition. Our efficient architecture and sparse-annotation loss strategy enable high-accuracy, robust individual tree mapping, advancing tools for large-scale forest monitoring and accurate carbon stock quantification. Full article
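The linear-complexity scan underlying Mamba-style encoders reduces to a simple recurrence. The sketch below is a conceptual toy with fixed matrices, not CrownViM's actual CCViM block (which uses learned, input-dependent parameters and runs over image-token sequences); it shows the plain bidirectional state-space recurrence and why it costs O(T) in sequence length rather than attention's O(T^2):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear SSM recurrence: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.
    One O(1) state update per step -> O(T) overall, unlike attention's O(T^2).
    Shapes: x (T, d_in), A (d, d), B (d, d_in), C (d_out, d)."""
    h = np.zeros(A.shape[0])
    ys = []
    for xt in x:
        h = A @ h + B @ xt  # state update
        ys.append(C @ h)    # readout
    return np.stack(ys)

def bidirectional_ssm(x, A, B, C):
    """Forward scan plus a scan over the reversed sequence, as in
    bidirectional SSM encoders: each position sees both directions."""
    forward = ssm_scan(x, A, B, C)
    backward = ssm_scan(x[::-1], A, B, C)[::-1]
    return forward + backward
```

With A = 0 the recurrence degenerates to a per-step linear map (y_t = C B x_t), which makes the scan easy to sanity-check; the memory of earlier tokens enters only through nonzero A.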
32 pages, 3089 KB  
Article
Systematic Evaluation of Machine Learning and Deep Learning Models for IoT Malware Detection Across Ransomware, Rootkit, Spyware, Trojan, Botnet, Worm, Virus, and Keylogger
by Mazdak Maghanaki, Soraya Keramati, F. Frank Chen and Mohammad Shahin
Sensors 2026, 26(6), 1750; https://doi.org/10.3390/s26061750 - 10 Mar 2026
Viewed by 269
Abstract
The rapid growth of Internet-of-Things (IoT) deployments has substantially expanded the attack surface of modern cyber–physical systems, making accurate and computationally feasible malware detection essential for enterprise and industrial environments. This study presents a large-scale, systematic comparison of 27 machine learning (ML) and [...] Read more.
The rapid growth of Internet-of-Things (IoT) deployments has substantially expanded the attack surface of modern cyber–physical systems, making accurate and computationally feasible malware detection essential for enterprise and industrial environments. This study presents a large-scale, systematic comparison of 27 machine learning (ML) and 18 deep learning (DL) models for IoT malware detection across eight major malware categories: Trojan, Botnet, Ransomware, Rootkit, Worm, Spyware, Keylogger, and Virus. A realistic dataset was constructed using 50,000 executable samples collected from the Any.Run platform, including 8000 malware instances (1000 per class) and 42,000 benign samples. Each sample was executed in a sandbox to extract detailed static and behavioral telemetry. A targeted feature-selection pipeline reduced the feature space to 47 diagnostic features spanning static properties, behavioral indicators, process/file/registry activity, debug signals, and network telemetry, yielding a compact representation suitable for malware detection in IoT settings. Experimental results demonstrate that ensemble tree-based ML models consistently dominate performance on the engineered tabular feature set, with 7 of the top 10 models being ML; CatBoost and LightGBM achieve near-ceiling accuracy and low false-positive rates. Per-malware analysis further shows that optimal model choice depends on malware behavior. CatBoost is best for Trojan/Spyware, LightGBM for Botnet, XGBoost for Worm, Extra Trees for Rootkit, and Random Forest for Keylogger, while DL models are competitive only for specific categories, with TabNet performing best for Ransomware and FT-Transformer for Virus. In addition, an end-to-end computational time analysis across all 45 models reveals a clear efficiency advantage for boosted tree ensembles relative to most DL architectures, supporting deployment feasibility on commodity CPU hardware. Overall, the study provides actionable guidance for designing adaptive IoT malware detection frameworks, recommending gradient-boosted ensemble ML models as the primary deployment choice, with selective use of DL models only when category-specific gains justify the additional computational cost. Full article
(This article belongs to the Special Issue Intelligent Sensors for Security and Attack Detection)
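The select-then-compare workflow described here can be mimicked on synthetic data. The sketch below uses scikit-learn stand-ins (RandomForestClassifier, GradientBoostingClassifier) rather than the CatBoost/LightGBM implementations the paper evaluates, and an invented imbalanced dataset in place of the Any.Run telemetry; it ranks features by impurity-based importance, keeps a top-47 subset, and scores models by balanced accuracy as the study does:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Invented imbalanced tabular data standing in for sandbox telemetry
# (~16% malware, as in 8000/50,000; wide feature space before selection).
X, y = make_classification(n_samples=2000, n_features=200, n_informative=20,
                           weights=[0.84, 0.16], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: rank features by impurity-based importance, keep a top-47 subset.
ranker = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
top = np.argsort(ranker.feature_importances_)[::-1][:47]

# Step 2: compare models on the reduced feature set by balanced accuracy,
# the imbalance-aware metric used in the study.
scores = {}
for name, model in [
    ("random_forest", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gradient_boosting", GradientBoostingClassifier(random_state=0)),
]:
    model.fit(X_tr[:, top], y_tr)
    scores[name] = balanced_accuracy_score(y_te, model.predict(X_te[:, top]))
    print(f"{name}: balanced accuracy = {scores[name]:.3f}")
```

Balanced accuracy (the mean of per-class recalls) is the right headline metric here because plain accuracy is inflated by the 84% benign majority.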
25 pages, 11205 KB  
Article
Remote Sensing Image Captioning via Self-Supervised DINOv3 and Transformer Fusion
by Maryam Mehmood, Ahsan Shahzad, Farhan Hussain, Lismer Andres Caceres-Najarro and Muhammad Usman
Remote Sens. 2026, 18(6), 846; https://doi.org/10.3390/rs18060846 - 10 Mar 2026
Viewed by 217
Abstract
Effective interpretation of coherent and usable information from aerial images (e.g., satellite imagery or high-altitude drone photography) can greatly reduce human effort in many situations, both natural (e.g., earthquakes, forest fires, tsunamis) and man-made (e.g., highway pile-ups, traffic congestion), particularly in disaster management. [...] Read more.
Effective interpretation of coherent and usable information from aerial images (e.g., satellite imagery or high-altitude drone photography) can greatly reduce human effort in many situations, both natural (e.g., earthquakes, forest fires, tsunamis) and man-made (e.g., highway pile-ups, traffic congestion), particularly in disaster management. This research proposes a novel encoder–decoder framework for captioning of remote sensing images that integrates self-supervised DINOv3 visual features with a hybrid Transformer–LSTM decoder. Unlike existing approaches that rely on supervised CNN-based encoders (e.g., ResNet, VGG), the proposed method leverages DINOv3’s self-supervised learning capabilities to extract dense, semantically rich features from aerial images without requiring domain-specific labeled pretraining. The proposed hybrid decoder combines Transformer layers for global context modeling with LSTM layers for sequential caption generation, producing coherent and context-aware descriptions. Feature extraction is performed using the DINOv3 model, which employs the gram-anchoring technique to stabilize dense feature maps. Captions are generated through a hybrid of Transformer and Long Short-Term Memory (LSTM) layers, which adds contextual meaning to captions through sequential hidden-layer modeling with gated memory. The model is first evaluated on two traditional remote sensing image captioning datasets: RSICD and UCM-Captions. Multiple evaluation metrics, such as Bilingual Evaluation Understudy (BLEU), Consensus-based Image Description Evaluation (CIDEr), Recall-Oriented Understudy for Gisting Evaluation (ROUGE-L), and Metric for Evaluation of Translation with Explicit Ordering (METEOR), are used to quantify the performance and robustness of the proposed DINOv3 hybrid model. The proposed model outperforms conventional Convolutional Neural Network (CNN) and Vision Transformer (ViT)-based models by approximately 9–12% across most evaluation metrics. Attention heatmaps are also employed to qualitatively validate the proposed model when identifying and describing key spatial elements. In addition, the proposed model is evaluated on advanced remote sensing datasets, including RSITMD, DisasterM3, and GeoChat. The results demonstrate that self-supervised vision transformers are robust encoders for multi-modal understanding in remote sensing image analysis and captioning. Full article
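Of the metrics listed, BLEU is simple enough to sketch from scratch. The following is a minimal single-reference, unsmoothed sentence-level BLEU in plain Python; real captioning evaluations typically use corpus-level BLEU with smoothing (e.g., via NLTK or sacreBLEU), so treat this as a definition-by-example rather than a drop-in evaluator:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Single-reference, unsmoothed sentence-level BLEU: the geometric mean
    of clipped 1..max_n n-gram precisions times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clipped overlap: each n-gram counts at most as often as in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        if overlap == 0:
            return 0.0  # unsmoothed: any empty n-gram overlap zeroes the score
        log_precisions.append(math.log(overlap / sum(cand_ngrams.values())))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

# Invented example captions for an aerial scene.
print(bleu("a dirt road runs through the forest",
           "a dirt road runs through the dense forest"))
```

A perfect match scores 1.0; a single substituted word near the middle already removes several higher-order n-gram matches, which is why BLEU-4 is far stricter than BLEU-1.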
23 pages, 4832 KB  
Article
Investigation of Printed Slot Antenna for Non-Invasive Glucose Sensing Using FR4 Substrate Material
by Yaqeen S. Mezaal
Micromachines 2026, 17(3), 335; https://doi.org/10.3390/mi17030335 - 10 Mar 2026
Viewed by 134
Abstract
This paper provides a feasibility study of a non-invasive microwave glucose-sensing system based on a compact printed slot antenna with step-impedance resonators (SIRs) etched into the ground plane of an FR4 substrate, operating at approximately 5.7 GHz. The proposed sensor exploits the [...] Read more.
This paper provides a feasibility study of a non-invasive microwave glucose-sensing system based on a compact printed slot antenna with step-impedance resonators (SIRs) etched into the ground plane of an FR4 substrate, operating at approximately 5.7 GHz. The proposed sensor exploits the perturbation of the antenna's resonant frequency and reflection coefficient (S11) caused by the dielectric loading of a human finger placed in the antenna's near field. Rather than claiming direct glucose specificity, this paper investigates whether these RF measurements can be mapped to invasively obtained glucose values under controlled positioning. A vector network analyzer was used to record the resonant frequency and S11 magnitude at the point of peak sensitivity, with the finger held in a fixed position. These RF properties were related to invasively measured glucose values using three modeling methods: a simple analytical linear formula, a second-degree polynomial Ridge regression model, and a Random Forest machine learning model. The comparative analysis established that nonlinear data-driven models significantly outperform the analytical formulation, with the Random Forest model achieving the highest predictive accuracy (R2 = 0.72, RMSE = 10.57 mg/dL, MAE = 5.16 mg/dL). The findings confirm that antenna-loading effects dominate the raw measurements, but a glucose-related trend can be extracted through machine learning calibration under controlled conditions. The research provides a methodological framework for RF-based non-invasive glucose sensing and motivates phantom-based validation, per-subject modeling, and clinical evaluation in future studies. Full article
(This article belongs to the Special Issue Metasurface-Based Devices and Systems)
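The three-way model comparison in this abstract can be reproduced in outline with scikit-learn. The data below are synthetic stand-ins with invented frequency-shift and S11 trends, not the authors' measurements; the point is only the calibration comparison itself: plain linear regression versus a degree-2 polynomial Ridge pipeline versus a Random Forest, scored by cross-validated R^2:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
n = 300
glucose = rng.uniform(70, 250, n)  # mg/dL (synthetic targets)
# Hypothetical dielectric-loading trends near 5.7 GHz: resonance drifts
# down and |S11| rises slightly with glucose, plus measurement noise.
freq = 5.7e9 - 2e5 * glucose + rng.normal(0, 5e6, n)  # Hz
s11 = -20.0 + 0.01 * glucose + rng.normal(0, 0.8, n)  # dB
X = np.column_stack([freq, s11])

models = {
    "linear": LinearRegression(),
    "poly2_ridge": make_pipeline(StandardScaler(), PolynomialFeatures(2),
                                 Ridge(alpha=1.0)),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}
cv_r2 = {}
for name, model in models.items():
    cv_r2[name] = cross_val_score(model, X, glucose, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {cv_r2[name]:.2f}")
```

Standardizing before the polynomial expansion matters here: the raw features differ by nine orders of magnitude (GHz vs. dB), which would otherwise make the squared terms numerically dominant and the Ridge penalty meaningless.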