Search Results (2,791)

Search Parameters:
Keywords = multisensors

31 pages, 4593 KB  
Systematic Review
Vegetation Carbon Stock Estimation Using Remote Sensing: A Bibliometric and Critical Review
by Xiaoxiao Min, Mohd Johari Mohd Yusof, Luxin Fan and Sreetheran Maruthaveeran
Forests 2026, 17(4), 503; https://doi.org/10.3390/f17040503 (registering DOI) - 18 Apr 2026
Abstract
Vegetation carbon stock is a key component of the terrestrial carbon cycle and supports climate-change mitigation and carbon-neutrality strategies. While field inventories provide accurate references, they are constrained by cost and limited scalability, motivating the rapid adoption of remote sensing for large-scale spatial estimation and mapping. However, the literature lacks a consolidated bibliometric and critical synthesis focused on above-ground vegetation carbon stock estimation. Therefore, this review aims to provide a quantitative overview of publication trends, synthesise methodological developments, and identify key research gaps in remote-sensing-based above-ground vegetation carbon stock estimation. A total of 1825 Web of Science records (2015–2024) were retrieved, of which 763 were included for bibliometric mapping using VOSviewer version 1.6.20 and CiteSpace version 6.3.R2, complemented by a critical review of 32 high-quality studies. Results indicate a shift from passive optical and single-index approaches toward active sensing and multi-sensor, multi-platform integration, alongside broad uptake of machine learning and an emerging dominance of deep learning for nonlinear modelling and feature learning. Research attention is expanding beyond forests to non-forest ecosystems, yet challenges persist in spatial resolution, validation data availability, and cross-biome generalizability. This review summarizes methodological trajectories and identifies priorities for robust, transferable above-ground carbon estimation. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
39 pages, 1460 KB  
Review
Modernizing Livestock Operations: Smart Feedlot Technologies and Their Impact
by Son D. Dao, Amirali Khodadadian Gostar, Ruwan Tennakoon, Wei Qin Chuah and Alireza Bab-Hadiashar
Animals 2026, 16(8), 1244; https://doi.org/10.3390/ani16081244 (registering DOI) - 18 Apr 2026
Abstract
Smart feedlots are increasingly adopting Precision Livestock Farming technologies to enable continuous, individual-animal monitoring and more proactive management in intensive beef production systems. This narrative review synthesises evidence from approximately 350 academic publications, of which 117 are formally cited, complemented by industry deployments and the authors’ experience in smart feedlot system development. We cover enabling digital infrastructure (power, sensing networks, wireless connectivity, and gateways), animal identification and sensing (RFID, automated weighing, wearables, and pen-side sensors), machine vision (RGB, thermal, and multispectral imaging from fixed and mobile platforms), and AI-based analytics and decision support for health, welfare, performance, and environmental management. Across the literature, key components have progressed beyond proof-of-concept toward operation under commercial constraints. Reported outcomes include reduced reliance on routine pen-rider observation and yard handling, earlier triage of emerging morbidity risk and behavioural change, and more standardised welfare auditing. Vision-based methods are repeatedly validated against trained human scorers in both on-farm and abattoir contexts, while automated weighing and image-based liveweight estimation support higher-frequency growth monitoring with low single-digit percentage error in representative studies. Precision feeding and targeted supplementation are associated with improved feed utilisation and reduced resource wastage, although effectiveness and adoption vary across animal classes and production stages. We identify priorities for robust, scalable deployment: resilient communications in harsh environments, appropriate edge–cloud partitioning under intermittent connectivity, and interoperable multi-sensor data fusion to deliver trustworthy alerts and actionable insights. Persistent barriers remain cost, durability, maintenance burden, integration and interoperability, data governance, and workforce capability. Full article
(This article belongs to the Section Animal System and Management)
19 pages, 4121 KB  
Technical Note
drone2report: A Configuration-Driven Multi-Sensor Batch-Processing Engine for UAV-Based Plot Analysis in Precision Agriculture
by Nelson Nazzicari, Giulia Moscatelli, Agostino Fricano, Elisabetta Frascaroli, Roshan Paudel, Eder Groli, Paolo De Franceschi, Giorgia Carletti, Nicolò Franguelli and Filippo Biscarini
Drones 2026, 10(4), 301; https://doi.org/10.3390/drones10040301 (registering DOI) - 18 Apr 2026
Abstract
Unmanned aerial vehicles (UAVs) have become indispensable tools in precision agriculture and plant phenotyping, enabling the rapid, non-destructive assessment of crop traits across space and time. Equipped with RGB, multispectral, thermal, and other sensors, UAVs provide detailed information on canopy structure, physiology, and stress responses that can guide management decisions and accelerate breeding programs. Despite these advances, the downstream processing of UAV imagery remains technically demanding. Converting orthomosaics into standardized, biologically meaningful data often requires a combination of photogrammetry, geospatial analysis, and custom scripting, which can limit reproducibility and accessibility across research groups. We present drone2report, an open-source, Python-based software tool that processes orthomosaics from UAV flights to generate vegetation indices, summary statistics, derived subimages, and text (HTML) reports, supporting both research and applied crop breeding needs. Alongside the basic structure and functioning of drone2report, we also present five case studies that illustrate practical applications common in UAV-/drone-phenotyping of plants: (i) thresholding to remove background noise and highlight regions of interest; (ii) monitoring plant phenotypes over time; (iii) extracting information on plant height to detect events like lodging or the falling over of spikes; (iv) integrating multiple sensors (cameras) to construct and optimize new synthetic indices; (v) integrating a trained deep learning network to implement a classification task. These examples demonstrate the tool’s ability to automate analysis, integrate heterogeneous data and models, and support reproducible computation of agronomically relevant traits. drone2report streamlines orthorectified UAV-image processing for precision agriculture by linking orthomosaics to standardized, plot-level outputs. Its modular, configuration-driven design allows transparent workflows, easy customization, and integration of multiple sensors within a unified analytical framework. By facilitating reproducible, multi-modal image analysis, drone2report lowers technical barriers to UAV-based phenotyping and opens the way to robust, data-driven crop monitoring and breeding applications. Full article
(This article belongs to the Special Issue Advances in UAV-Based Remote Sensing for Climate-Smart Agriculture)
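To make the orthomosaic-to-plot-level-index step described above concrete, here is a minimal Python sketch of NDVI computation, background thresholding, and plot-level summarising. The band arrays and the 0.3 threshold are synthetic stand-ins; this is not the drone2report API, which would read the bands from the orthomosaic itself (e.g. with rasterio).

```python
# Minimal sketch of orthomosaic bands -> vegetation index -> plot-level statistics.
# NOT the drone2report API; band values and the NDVI threshold are assumptions.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.15, size=(512, 512)).astype("float32")   # stand-in red band
nir = rng.uniform(0.20, 0.60, size=(512, 512)).astype("float32")   # stand-in NIR band

ndvi = (nir - red) / (nir + red + 1e-9)              # vegetation index, division-safe

# Case study (i) above: threshold to drop background and keep vegetated pixels only.
veg = np.where(ndvi > 0.3, ndvi, np.nan)             # assumed NDVI threshold

# Plot-level summary statistics of the kind reported per plot.
print("mean NDVI:", float(np.nanmean(veg)), "p90:", float(np.nanpercentile(veg, 90)))
```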
23 pages, 1462 KB  
Article
From Above: Drone-Driven Computer Vision for Reliable Elephant Body Condition Assessment
by Dede Aulia Rahman, Toto Haryanto and Riki Herliansyah
Conservation 2026, 6(2), 49; https://doi.org/10.3390/conservation6020049 - 17 Apr 2026
Abstract
Assessing individual animal health is essential for detecting early ecological stress that may scale to population-level impacts. Yet, conventional capture-based methods are invasive and logistically challenging, particularly for large mammals. This study evaluates the accuracy of drone-based morphometric measurements as a non-invasive approach for estimating elephants’ Body Condition Index (BCI). Research was conducted in Way Kambas National Park, Sumatra, using a DJI Matrice 300 RTK equipped with a multisensor camera to acquire aerial imagery, primarily from a top-down perspective. Morphometric parameters were extracted through image preprocessing, segmentation, and edge detection using an OpenCV-based Canny algorithm, followed by coordinate and Euclidean distance analyses. Drone-derived measurements were validated against field-based morphometry in captive Sumatran elephants. Linear regression revealed strong agreement between methods, with R2 values ranging from 0.91 to 0.97. Mid-body width showed the highest accuracy (R2 = 0.97, MAPE = 2.66%, RMSE = 2.36), while other body dimensions also performed consistently well. BCI-related morphometric ratios exhibited minimal differences between drone and field measurements, confirming methodological reliability. As an exploratory extension, a preliminary allometric scaling framework was applied to estimate body condition proxies in free-ranging wild elephants except for mid-body width; however, these estimates are model-derived from total body length and should be interpreted as indicative rather than as direct morphometric assessments of body condition. These findings demonstrate that drone-based photogrammetry provides a validated, practical, and non-invasive method for morphometric measurement in captive elephants, with promising but as yet incompletely validated potential for application to wild populations. Full article
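As a rough illustration of the Canny-plus-Euclidean-distance measurement step the abstract describes, the following OpenCV sketch extracts a mid-body width from a synthetic top-down silhouette; the ellipse, thresholds, and ground sampling distance are invented assumptions, not the authors' pipeline.

```python
# Sketch of the edge-detection and distance-measurement step: Canny edges on a
# top-down body silhouette, then a pixel-space Euclidean distance scaled by an
# assumed ground sampling distance. All values here are illustrative assumptions.
import cv2
import numpy as np

# Synthetic stand-in for a segmented top-down elephant body (an ellipse).
img = np.zeros((720, 1280), dtype=np.uint8)
cv2.ellipse(img, (640, 360), (320, 130), 0, 0, 360, 255, -1)

blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)      # Canny edge map

# Mid-body width: distance between top and bottom edge pixels in the centre column.
ys = np.flatnonzero(edges[:, 640])
width_px = float(ys.max() - ys.min())                          # Euclidean distance in pixels

gsd_cm_per_px = 0.8                                            # assumed ground sampling distance
print(f"mid-body width ≈ {width_px * gsd_cm_per_px:.1f} cm")
```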
21 pages, 1194 KB  
Article
Environment-Aware Proactive Beam Prediction in mmWave V2I via Multi-Modal Prior Mask Map
by Changpeng Zhou and Youyun Xu
Sensors 2026, 26(8), 2488; https://doi.org/10.3390/s26082488 - 17 Apr 2026
Abstract
In millimeter wave V2I communication systems, accurate beam prediction is crucial for optimizing network performance and improving signal transmission efficiency. Traditional beam prediction methods mainly rely on single-modal data, which often fails to capture the comprehensive environmental information required for high-accuracy prediction. In contrast, multi-modal approaches leverage complementary information from different data sources and offer a more promising solution. However, many existing fusion methods primarily depend on real-time sensory inputs and do not fully exploit stable environmental features in V2I scenarios, limiting the effective use of each modality. To address these limitations, this paper proposes an environment-aware proactive beam prediction method based on a multi-modal prior mask map (MMPMM), which integrates offline mapping with an online beam prediction network. Specifically, the method fuses information from images, point clouds, positions, and the MMPMM to predict the optimal beam index. The MMPMM provides channel-related prior information by extracting static V2I scene features offline without incurring any additional online measurement overhead. Experimental results on real-world datasets demonstrate that the proposed method achieves a Top-3 beam prediction accuracy of up to 71.23% while maintaining stable performance under the evaluated dynamic and degraded conditions, demonstrating its effectiveness in the considered scenarios. Full article
(This article belongs to the Special Issue 6G Communication and Edge Intelligence in Wireless Sensor Networks)
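The headline metric above is Top-3 beam prediction accuracy; as a minimal illustration, the sketch below computes Top-k accuracy from per-beam scores, with a 64-beam codebook and random logits standing in for the network's outputs.

```python
# Small sketch of the Top-k beam accuracy metric (Top-3 in the abstract).
# `scores` stands in for per-beam logits and `labels` for optimal beam indices.
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 3) -> float:
    """Fraction of samples whose true beam index is among the k highest-scored beams."""
    topk = np.argsort(-scores, axis=1)[:, :k]          # indices of the k best beams
    hits = (topk == labels[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.normal(size=(1000, 64))                   # dummy logits, 64-beam codebook
labels = rng.integers(0, 64, size=1000)                # dummy ground-truth beams
print(top_k_accuracy(scores, labels, k=3))
```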
38 pages, 6162 KB  
Article
Leakage-Resistant Multi-Sensor Bearing Fault Diagnosis via Adaptive Time-Frequency Graph Learning and Sensor Reliability-Aware Fusion
by Yu Sun, Yihang Qin, Wenhao Chen, Wenhui Zhao and Haoran Sun
Sensors 2026, 26(8), 2484; https://doi.org/10.3390/s26082484 - 17 Apr 2026
Abstract
Reliable multi-sensor bearing fault diagnosis is challenged by temporal leakage caused by window-level random splitting, limited modeling of cross-sensor dependencies, and inadequate integration of raw temporal dynamics with time-frequency representations. To address these issues, this study proposes a leakage-resistant multi-sensor diagnosis framework that combines a partition-before-windowing evaluation protocol with adaptive time-frequency graph learning and reliability-aware fusion. Continuous vibration records are first divided into disjoint temporal regions with guard intervals and overlap auditing to suppress time-neighbor leakage. The model then extracts complementary features from a raw-signal branch and a dual-resolution log-STFT branch, while adaptive graph learning captures sample-dependent inter-sensor couplings and sensor reliability weighting highlights informative channels. A cross-gated fusion module further integrates temporal and graph-domain representations in a sample-adaptive manner for final classification. Experiments on a reconstructed nine-class benchmark derived from the HUSTbearing dataset show that the proposed method achieves a Macro-Accuracy of 0.973, a Macro-Recall of 0.964, and a Macro-F1 of 0.954, outperforming representative raw-signal and STFT-based baselines under the same leakage-resistant protocol. These results demonstrate that jointly modeling multi-scale time-frequency structure, dynamic sensor relationships, and reliable evaluation yields an effective and interpretable solution for intelligent bearing fault diagnosis under complex operating conditions. Full article
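The abstract's leakage-resistant protocol hinges on partitioning the continuous record before windowing; the sketch below illustrates that idea with an invented signal, window size, and guard interval, so that no test window overlaps the training region.

```python
# Sketch of partition-before-windowing: split the continuous vibration record into
# disjoint train/test regions with a guard interval FIRST, then window each region,
# so test windows never overlap training data. Lengths are illustrative assumptions.
import numpy as np

signal = np.random.randn(200_000)          # stand-in for one continuous vibration record
win, hop, guard = 2048, 1024, 4096         # assumed window, hop, and guard sizes

split = int(0.7 * len(signal))
train_region = signal[:split]
test_region = signal[split + guard:]       # guard interval suppresses time-neighbour leakage

def windows(x: np.ndarray, win: int, hop: int) -> np.ndarray:
    idx = np.arange(0, len(x) - win + 1, hop)
    return np.stack([x[i:i + win] for i in idx])

train_windows = windows(train_region, win, hop)
test_windows = windows(test_region, win, hop)
print(train_windows.shape, test_windows.shape)   # no window straddles the split
```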
35 pages, 4669 KB  
Article
A Hybrid Physics-Informed ML Framework for Emission and Energy Flow Prediction in a Retrofitted Heavy-Duty Vehicle
by Talha Mujahid, Teresa Donateo and Pietropaolo Morrone
Algorithms 2026, 19(4), 317; https://doi.org/10.3390/a19040317 - 17 Apr 2026
Abstract
This study introduces a physics-informed machine learning framework for predicting transient emissions and energy variables in a retrofitted heavy-duty diesel vehicle. It merges data-driven modeling with physically derived features for reliable real-world analysis. A Random Forest regressor is trained on a public dataset (26 trips from one instrumented vehicle) to predict CO2 and NOx mass rates, exhaust temperature, exhaust mass flow rate, and fuel flow rate from synchronized multi-sensor inputs using past-only, time-lagged features. On held-out trips, exhaust temperature prediction achieves R2 = 0.9997 and RMSE = 0.53 g/s; for CO2, with R2 = 0.9985 and RMSE= 0.38 g/s, comparable performance is reported for NOx, exhaust flow, and fuel rate. The trained model is integrated into a simulation framework to enable the evaluation of alternative operating conditions and powertrain configurations. First, the impact of cold-start versus hot-start operation is assessed, showing cumulative emission penalties of up to +28% for CO2 and +30% for NOx. Second, the effect of hybridization is investigated by comparing the baseline thermal configuration with a hybrid electric architecture, resulting in estimated reductions of −12.2% in CO2 and −10.5% in NOx emissions. This tool excels in high-fidelity emission prediction and system-level energy analysis, aiding advanced powertrain assessments under realistic driving conditions. Full article
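As an illustration of the past-only, time-lagged feature construction with a Random Forest regressor that the abstract describes, the sketch below uses invented channel names, lags, and data rather than the study's feature set.

```python
# Sketch of past-only, time-lagged features feeding a Random Forest regressor.
# Channel names, lag count, and data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "engine_speed": rng.normal(1500, 200, 5000),   # hypothetical sensor channels
    "fuel_rate": rng.normal(5, 1, 5000),
    "co2_rate": rng.normal(10, 2, 5000),           # target variable
})

for col in ["engine_speed", "fuel_rate"]:
    for k in (1, 2, 3):                            # past-only lags: no future samples leak in
        df[f"{col}_lag{k}"] = df[col].shift(k)
df = df.dropna().reset_index(drop=True)

X, y = df.drop(columns="co2_rate"), df["co2_rate"]
split = int(0.8 * len(df))                         # held-out tail, mimicking held-out trips
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X.iloc[:split], y.iloc[:split])
print("R^2 on held-out tail:", model.score(X.iloc[split:], y.iloc[split:]))
```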
35 pages, 6272 KB  
Article
AI-Enhanced Thermal–Visual–Inertial Odometry and Autonomous Planning for GPS-Denied Search-and-Rescue Robotics
by Islam T. Almalkawi, Sabya Shtaiwi, Alaa Alhowaide and Manel Guerrero Zapata
Sensors 2026, 26(8), 2462; https://doi.org/10.3390/s26082462 - 16 Apr 2026
Abstract
Search and rescue (SAR) missions in collapsed or underground environments remain challenging due to GPS unavailability, which hinders localization and autonomous navigation. Systems that rely on single-sensor inputs or structured settings often degrade under smoke, dust, or dynamic clutter. This paper presents an autonomous ground robot for GPS-denied SAR that integrates low-cost thermal, visual, inertial, and acoustic cues within a unified, computation-efficient architecture. The stack combines Thermal–Visual Odometry (TV–VO) with Zero-Velocity Updates (ZUPT) for drift-resistant localization, RescueGraph for multimodal survivor detection, and a Proximal Policy Optimization (PPO) planner for adaptive navigation under uncertainty. Across simulated disaster scenarios and benchmark corridor runs, the system shows embedded-feasible runtime behavior and supports return to base without external beacons under the evaluated conditions. Quantitatively, TV–VO+ZUPT reduces drift in short internal evaluations, while RescueGraph attains an F1-score of 0.6923 and an area under the ROC curve (AUC) of 0.976 for survivor detection. At the system level, the integrated navigation stack achieves full mission completion in the reported SAR-style trials, while the separate A*/PPO comparison highlights a trade-off between completion rate, traversal time, and collisions. Overall, the results support the practical promise of a low-cost sensor-fusion and learning-assisted navigation framework for GPS-denied SAR robotics. Full article
(This article belongs to the Section Sensors and Robotics)
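One drift-suppression ingredient named above is the Zero-Velocity Update (ZUPT); the sketch below shows a generic stationarity trigger based on windowed accelerometer and gyro statistics, with invented data and thresholds rather than the paper's detector.

```python
# Generic ZUPT trigger sketch: declare the platform stationary when accelerometer and
# gyro activity in a short window fall below thresholds, then clamp integrated velocity
# to zero to cut IMU drift. Data, window length, and thresholds are assumptions.
import numpy as np

def zupt_mask(accel: np.ndarray, gyro: np.ndarray,
              win: int = 50, acc_thr: float = 0.05, gyro_thr: float = 0.02) -> np.ndarray:
    """Per-sample boolean mask, True where the platform is judged stationary."""
    acc_mag = np.abs(np.linalg.norm(accel, axis=1) - 9.81)   # gravity-compensated magnitude
    gyro_mag = np.linalg.norm(gyro, axis=1)
    mask = np.zeros(len(accel), dtype=bool)
    for i in range(win, len(accel)):
        mask[i] = acc_mag[i - win:i].std() < acc_thr and gyro_mag[i - win:i].mean() < gyro_thr
    return mask

rng = np.random.default_rng(0)
accel = np.tile([0.0, 0.0, 9.81], (500, 1)) + rng.normal(0, 0.01, (500, 3))  # mostly at rest
gyro = rng.normal(0, 0.005, (500, 3))
mask = zupt_mask(accel, gyro)
print("stationary fraction:", mask.mean())   # velocity would be zeroed where mask is True
```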
58 pages, 4676 KB  
Review
Vision-Based Artificial Intelligence for Adaptive Peen Forming: Sensing Architectures, Learning Models, and Closed-Loop Smart Manufacturing
by Sehar Shahzad Farooq, Abdul Rehman, Fuad Ali Mohammed Al-Yarimi, Sejoon Park, Jaehyun Baik and Hosu Lee
Sensors 2026, 26(8), 2460; https://doi.org/10.3390/s26082460 - 16 Apr 2026
Abstract
Peen forming is a dieless manufacturing process used to shape large, thin aerospace panels through controlled shot impacts that induce residual stresses and curvature. Despite long-standing industrial use, process monitoring still depends largely on indirect proxies such as Almen intensity and coverage, limiting spatially resolved deformation assessment and hindering closed-loop control. In parallel, vision-based artificial intelligence (AI) has enabled adaptive monitoring and feedback in smart-manufacturing domains such as welding, additive manufacturing, and sheet forming. This review examines how such sensing and learning strategies can be transferred to adaptive peen forming. We compare six vision sensing modalities and assess major AI model families for surface mapping, temporal prediction, robustness, and deployment maturity. The synthesis shows that progress is primarily constrained by limited validated datasets, harsh in-cabinet sensing conditions, scarce closed-loop demonstrations, and weak validation on curved aerospace geometries. We conclude that the sensing and AI foundations for adaptive peen forming are already emerging, but industrial translation now depends on stronger experimental validation, standardized benchmarking, robust multi-sensor integration, and edge-capable feedback pipelines. Full article
(This article belongs to the Special Issue Artificial Intelligence and Sensing Technology in Smart Manufacturing)
38 pages, 4493 KB  
Article
Direct Structural Response Monitoring Versus Weight-Based Damage Detection in Bridge Weigh-in-Motion
by Kun Feng, Arturo González and Miguel Casero
Appl. Sci. 2026, 16(8), 3866; https://doi.org/10.3390/app16083866 - 16 Apr 2026
Abstract
Bridge weigh-in-motion (BWIM) systems estimate axle and gross vehicle weights from measured bridge responses, typically strains but also displacements and rotations, via algorithms based on influence lines. Changes in inferred weights have been proposed as damage indicators, allowing existing BWIM installations to contribute to structural health monitoring without additional sensors. However, BWIM accuracy is sensitive to discrepancies between idealised models and actual bridge–traffic conditions, including variability in vehicle configurations, road profiles, measurement noise, multiple-vehicle presence, and uncertainty in vehicle positioning. This paper uses a numerical vehicle–bridge interaction framework to compare the sensitivity of direct structural responses and BWIM-derived gross vehicle weights to global, local, and combined stiffness reductions in a short-span, simply supported bridge. The analysis considers different signal-to-noise ratios and field-representative BWIM error distributions corresponding to COST 323 accuracy classes. Direct monitoring of strain, displacement, and especially rotation provides slightly higher sensitivity to global stiffness changes than BWIM-inferred weights, but BWIM-inferred weights derived from rotations can be more robust than direct responses for detecting local damage under low signal-to-noise ratios. When BWIM calibration and modelling errors are included, detection performance degrades rapidly with decreasing accuracy class; meaningful local-damage detection is achieved only for the highest class. Multi-sensor configurations combining strain and rotation help distinguish quasi-uniform global changes from localised damage by exploiting their differential sensitivity to global and local stiffness variations. Full article
(This article belongs to the Special Issue Structural Health Monitoring in Bridges and Infrastructure)
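The influence-line-based weight estimation underlying BWIM can be illustrated with a small least-squares sketch: the measured response is modelled as a sum of axle weights multiplied by a shifted influence line, and the weights are recovered by regression. The influence-line shape, axle spacing, speeds, and noise level below are invented.

```python
# Sketch of influence-line-based axle-weight estimation (the core of BWIM): model the
# measured strain as the sum of each axle weight times the influence line shifted to
# that axle's position, then solve for the weights by least squares. All values assumed.
import numpy as np

n = 500                                        # samples while the vehicle crosses the span
x = np.linspace(0, 1, n)
influence = np.sin(np.pi * x)                  # idealised mid-span influence line (assumed)

axle_offsets = [0, 60, 90]                     # axle positions in samples (assumed spacing)
true_weights = np.array([60.0, 95.0, 80.0])    # kN, used only to synthesise a record

def shifted(line: np.ndarray, k: int) -> np.ndarray:
    out = np.zeros_like(line)
    out[k:] = line[: len(line) - k]
    return out

A = np.column_stack([shifted(influence, k) for k in axle_offsets])
strain = A @ true_weights + np.random.default_rng(0).normal(0, 0.5, n)   # measured response

est_weights, *_ = np.linalg.lstsq(A, strain, rcond=None)
print("estimated axle weights [kN]:", est_weights.round(1),
      "GVW [kN]:", round(float(est_weights.sum()), 1))
```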
34 pages, 1052 KB  
Review
Artificial Intelligence and Machine Learning in Remote Sensing for Tropical Forest Monitoring: Applications, Challenges, and Emerging Solutions
by Belachew Gizachew
Remote Sens. 2026, 18(8), 1193; https://doi.org/10.3390/rs18081193 - 16 Apr 2026
Abstract
Tropical forests, despite their critical environmental and socio-economic roles, remain highly vulnerable to deforestation, forest degradation, and climate-related disturbances. There is a growing demand for robust and transparent forest monitoring systems, particularly under REDD+, the Paris Agreement’s Enhanced Transparency Framework (ETF), and emerging climate-finance mechanisms. Conventional approaches based on field inventories and traditional remote sensing are often constrained by limited or uneven field data, persistent cloud cover, complex forest conditions, and limited institutional and technical capacity. This review examines how artificial intelligence (AI) and machine learning (ML) are being integrated into remote sensing–based tropical forest monitoring to address these structural constraints. Using a semi-systematic synthesis of peer-reviewed studies, complemented by operational platforms and grey literature, the review assesses AI/ML approaches, remote sensing datasets, and applications relevant to national and large-scale monitoring. Evidence is synthesized across five analytical dimensions: AI/ML model families and workflows, multi-sensor datasets and training resources, operational monitoring platforms, application domains (including deforestation, degradation, and biomass/carbon estimation), and cross-cutting technical, institutional, and governance barriers. The review finds that AI/ML-enabled remote sensing, particularly those combining optical, radar, and LiDAR time series within cloud-based platforms, has substantially improved the automation, scalability, and speed of tropical forest monitoring. However, effective and equitable adoption remains constrained by limitations in training and validation data, dependence on proprietary platforms and data, uneven technical capacity, and unresolved governance and ethical challenges. Emerging solutions, including open and representative training datasets, platform-agnostic processing infrastructures, long-term capacity building, and inclusive data-governance frameworks, are identified as critical enablers of credible and nationally owned AI/ML-enabled forest-monitoring systems. The review highlights that AI/ML can play a transformative role in supporting climate mitigation, biodiversity conservation, and informed decision-making. This potential, however, depends on transparent data governance arrangements, long-term capacity building, and platform-agnostic infrastructures that support national ownership. Full article
32 pages, 3743 KB  
Article
Machine Learning-Based Mapping of Dominant Tree Species in Dryland Forests Using Multi-Temporal and Multi-Source Data
by Emad H. E. Yasin, Milan Koreň and Kornel Czimber
Remote Sens. 2026, 18(8), 1185; https://doi.org/10.3390/rs18081185 - 15 Apr 2026
Abstract
Timely and accurate mapping of tree species is essential for forest resource inventory, biodiversity conservation, and sustainable ecosystem management, particularly in dryland environments where structural heterogeneity, spectral similarity, and data scarcity complicate classification. This study develops a machine learning-based framework implemented in Google Earth Engine to map dominant tree species in the Elnour Natural Forest Reserve (ENFR), Blue Nile, Sudan, using multi-temporal and multi-sensor remote sensing data. Multi-temporal Landsat 5 TM, Landsat 8 OLI, and Sentinel-2 MSI imagery were integrated with vegetation index (NDVI), topographic variables derived from a digital elevation model (DEM), and field observations. The performance of Random Forest (RF), Support Vector Machine (SVM), Classification and Regression Trees (CART), and an unweighted ensemble approach was evaluated across four reference years (2008, 2013, 2018, and 2021). Results show that RF and SVM consistently achieved high classification performance, with overall accuracy (OA) ranging from 85.0% to 92.0% and Kappa coefficients (κ) from 0.81 to 0.89, while maintaining stable and ecologically realistic species-area estimates. CART showed greater sensitivity to class imbalance and overestimated minor species (OA = 72.0–80.0%, κ = 0.65–0.74), whereas the ensemble approach amplified misclassification of rare classes (OA = 78.0–84.0%, κ = 0.70–0.78). The integration of Sentinel-2 data improved species discrimination due to enhanced spatial and spectral resolution, particularly in the red-edge region; however, algorithm selection remained the dominant factor controlling performance. Feature importance analysis identified near-infrared (NIR), shortwave infrared (SWIR), and NDVI variables as the most influential predictors. Multi-temporal analysis revealed declining class separability, reflected by decreasing MCC values, and a shift in species composition, including a decline in Acacia seyal (Delile) and an increase in Sterculia setigera Delile. These patterns indicate increasing ecological complexity driven primarily by anthropogenic pressures, with climatic variability acting as an additional stressor. Full article
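The accuracy figures above are overall accuracy (OA) and Cohen's kappa; for reference, the sketch below computes both from a confusion matrix using dummy labels rather than the study's reference data.

```python
# Sketch of the reported accuracy metrics: overall accuracy (OA) and Cohen's kappa
# computed from a confusion matrix. Labels are dummy values, not the study's data.
import numpy as np

def oa_and_kappa(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    cm = np.zeros((n_classes, n_classes), dtype=float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    total = cm.sum()
    oa = np.trace(cm) / total                          # observed agreement
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2   # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, 300)                       # four hypothetical species classes
y_pred = np.where(rng.random(300) < 0.85, y_true,      # ~85% agreement with the reference
                  rng.integers(0, 4, 300))
print(oa_and_kappa(y_true, y_pred, n_classes=4))
```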
13 pages, 1599 KB  
Article
VCMA-MRAM In-Memory Stochastic Sampling for Edge Boltzmann Machine Inference
by Xuesheng Deng, Yuesheng Li, Bin Fang and Lin Wang
Electronics 2026, 15(8), 1622; https://doi.org/10.3390/electronics15081622 - 13 Apr 2026
Abstract
Edge intelligence is often limited by the computation–energy trade-off in resource-constrained devices. Boltzmann machines (BMs) provide strong unsupervised learning capability, yet their reliance on Gibbs sampling makes digital implementations costly in both computation and energy. In this paper, we present a voltage-controlled magnetic anisotropy magnetic tunnel junction (VCMA-MTJ)-based MRAM system that performs in-memory stochastic sampling for state generation and updates in restricted/deep Boltzmann machines (RBMs/DBMs). By exploiting the intrinsic stochastic switching of VCMA-MTJs, the proposed system achieves probabilistic sampling with an energy as low as ∼10 fJ per sample. Implemented on a microcontroller-based edge platform, it enables real-time multi-sensor anomaly detection with an F1-score of 0.9854 and stable operation. The proposed hardware–algorithm co-design achieves in situ stochastic computing and storage within a single MRAM cell, providing an ultra-low-power substrate for probabilistic inference at the edge. Full article
(This article belongs to the Section Electronic Materials, Devices and Applications)
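The Gibbs sampling that the MRAM cells accelerate can be illustrated with a small numpy RBM sketch in which each binary unit switches on with a sigmoid probability; the sizes and weights are dummy values, and the stochastic draw is the operation the VCMA-MTJ performs in hardware.

```python
# Minimal RBM Gibbs-sampling sketch: each hidden/visible unit flips on with a sigmoid
# probability of its input; the per-unit stochastic bit is what a VCMA-MTJ cell would
# provide in hardware. Sizes and weights are dummy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 16, 8
W = rng.normal(0, 0.1, (n_vis, n_hid))      # RBM weights (dummy)
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v: np.ndarray) -> np.ndarray:
    """One visible -> hidden -> visible Gibbs sweep with stochastic binary units."""
    p_h = sigmoid(v @ W + b_h)
    h = (rng.random(n_hid) < p_h).astype(float)    # the stochastic sample the MTJ supplies
    p_v = sigmoid(h @ W.T + b_v)
    return (rng.random(n_vis) < p_v).astype(float)

v = (rng.random(n_vis) < 0.5).astype(float)
for _ in range(100):
    v = gibbs_step(v)
print(v)
```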
19 pages, 49217 KB  
Article
Deep Reinforcement Learning for Navigation via Multi-Modal Belief State Representation from LiDAR and Depth Sensors
by Degang Xu, Haiou Wang and Yizhi Wang
Appl. Sci. 2026, 16(8), 3787; https://doi.org/10.3390/app16083787 - 13 Apr 2026
Abstract
This paper presents a deep reinforcement learning framework for autonomous navigation based on multi-modal belief state representation learned from LiDAR and depth sensors. To address the challenges posed by partial observability and sensor-specific uncertainty, we propose a probabilistic representation module that models belief states as Gaussian distributions over latent environmental features. Sensor-specific encoders extract structured features from raw LiDAR and depth inputs, which are fused using a Q-value-guided weighting scheme derived from the policy critic. A motion-prediction pretraining strategy and a cross-modal coherence loss are introduced to enhance the alignment and reliability of the learned belief states. The resulting representation is integrated into a Soft Actor–Critic (SAC) framework to enable policy-driven decision-making under uncertainty. Extensive experiments in simulated environments demonstrate that the proposed method improves success rate, navigation efficiency, and generalization. Real-world experiments further validate these findings, with the proposed method outperforming a classical navigation baseline by reducing average travel time by 16% and path length by 4%. These results support the use of probabilistic multi-modal belief modeling for autonomous navigation under partial observability. Full article
(This article belongs to the Special Issue AI Applications in Modern Industrial Systems)
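As a loose illustration of the Q-value-guided weighting described above, the sketch below fuses two per-sensor Gaussian belief means with softmax weights derived from a toy critic; the encoders, critic, and dimensions are invented assumptions, not the paper's architecture.

```python
# Toy sketch of Q-value-guided fusion: each sensor branch yields a Gaussian belief over
# the latent, a stand-in critic scores each branch, and the fused belief is the
# softmax-weighted combination. Everything here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

def encode(obs: np.ndarray, dim: int = 32):
    """Stand-in encoder: returns a Gaussian belief (mean, log-variance) over the latent."""
    mean = rng.normal(size=dim) * obs.mean()
    logvar = np.full(dim, -1.0)
    return mean, logvar

lidar_mu, lidar_logvar = encode(rng.random(360))        # dummy LiDAR scan
depth_mu, depth_logvar = encode(rng.random((64, 64)))   # dummy depth image

def critic_score(mu: np.ndarray) -> float:
    """Toy stand-in for the critic's Q-value of acting on this branch alone."""
    return float(-np.linalg.norm(mu))

q = np.array([critic_score(lidar_mu), critic_score(depth_mu)])
w = np.exp(q - q.max()); w /= w.sum()                   # softmax over critic scores
fused_mu = w[0] * lidar_mu + w[1] * depth_mu            # Q-value-guided weighted fusion
print("fusion weights:", w.round(3))
```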
44 pages, 15261 KB  
Review
Cloud-Native Earth Observation for Quantitative Vegetation Science: Architectures, Workflows, and Scientific Implications
by Jochem Verrelst, Emma De Clerck, Bhagyashree Verma, Kavach Mishra and Gabriel Caballero
Remote Sens. 2026, 18(8), 1154; https://doi.org/10.3390/rs18081154 - 13 Apr 2026
Abstract
The increasing volume, temporal density, and diversity of satellite Earth observation (EO) data have fundamentally transformed quantitative vegetation remote sensing. Dense multi-sensor time series and computationally intensive modelling have rendered traditional download-and-process workflows increasingly impractical. Cloud-native computing—where data access, storage, and computation are co-located and analyses are executed in data-proximate environments—has therefore emerged as a key paradigm for scalable and reproducible vegetation EO analysis. This review provides a science-oriented synthesis of cloud-native EO for quantitative vegetation research. We examine architectural principles, data models, and compute patterns that shape how vegetation analyses are implemented, scaled, and scientifically interpreted. Particular attention is given to machine learning as a system component, including model lifecycle management, domain shift, and evaluation integrity in distributed environments. We analyse how cloud-native data abstractions influence algorithmic assumptions, validation design, and long-term product consistency, highlighting trade-offs between analytical complexity, computational cost, latency, and scientific robustness. We provide a forward-looking perspective on emerging imaging spectroscopy missions and the growing system-level requirements for reproducible, scalable, and uncertainty-aware vegetation analytics at continental-to-global scales. We also outline how cloud-native EO infrastructures are driving new scientific paradigms based on continuous monitoring, systematic reprocessing, and AI-driven modelling. Full article