Search Results (2,808)

Search Parameters:
Keywords = information visibility

26 pages, 322 KB  
Article
Economic Sustainability Through Disclosure: Knowledge Management, Reporting Quality, and Corporate Performance in the Arab Gulf Region
by Alessandra Theuma and Ahmad Faisal Hayek
Sustainability 2026, 18(3), 1394; https://doi.org/10.3390/su18031394 - 30 Jan 2026
Abstract
This study examines whether sustainability information disclosure (SID) in the Arab Gulf acts as a substantive strategic tool that enhances corporate outcomes or merely serves as a symbolic gesture to maintain legitimacy. Using data from 92 listed firms across the Gulf Cooperation Council (GCC) from 2020 to 2023, the study distinguishes between the level (volume) and quality (credibility) of disclosure. It examines their respective impacts on return on assets (ROA), return on equity (ROE), and financial reporting quality. The results reveal a consistent positive association between disclosure levels and financial performance, suggesting that volume-based corporate environmental, social, and governance (ESG) reporting may support short-term legitimacy and market confidence. In contrast, disclosure quality shows weaker and less consistent effects, highlighting a potential disconnect between visibility and substance. This pattern reflects the strategic use of disclosure for symbolic compliance in the GCC, where ESG reporting is often adopted to satisfy external expectations rather than to support internal transformation or long-term value creation. The findings position sustainability disclosure as an underleveraged tool for strategic knowledge management. While current practices enhance legitimacy, they fall short of driving performance gains through internal learning or reporting integrity. Policy implications include the need for harmonised disclosure frameworks, mandatory assurance standards, and improved alignment with international ESG guidelines to strengthen the credibility and impact of corporate sustainability communication in emerging markets. Full article
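
The abstract does not spell out the econometric specification, but the kind of firm-level panel analysis it describes can be sketched as below. The column names, controls, and fixed-effects choices are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch of a disclosure-performance panel regression of the kind the
# abstract describes. The file name, columns, and specification are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gcc_firms.csv")  # hypothetical panel: one row per firm-year

# Pooled OLS with year fixed effects and firm-clustered standard errors.
model = smf.ols(
    "roa ~ disclosure_level + disclosure_quality + firm_size + leverage + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.summary())
```
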
22 pages, 2306 KB  
Article
Learning Framework for Underwater Optical Localization Using Airborne Light Beams
by Jaeed Bin Saif, Mohamed Younis and Talal M. Alkharobi
Photonics 2026, 13(2), 133; https://doi.org/10.3390/photonics13020133 - 30 Jan 2026
Abstract
Underwater localization using airborne visible light beams offers a promising alternative to acoustic and radio-frequency methods, yet accurate modeling of light propagation through a dynamic air–water interface remains a major challenge. This paper introduces a physics-informed machine learning framework that combines geometric optics with neural network inference to localize submerged optical nodes under both flat and wavy surface conditions. The approach integrates ray-based light transmission modeling with a third-order Stokes wave formulation, enabling a realistic representation of nonlinear surface slopes and their effect on refraction. A multilayer perceptron (MLP) is trained on synthetic intensity–position datasets generated from this model, learning the complex mapping between received optical power (light intensity) and coordinates of the submerged receiver. The proposed method demonstrates high precision, stability, and adaptability across varying geometries and surface dynamics, offering a computationally efficient solution for optical localization in dynamic underwater environments. Full article
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence for Optical Networks)
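
A toy version of the learning step described above: train an MLP on synthetic (received power, receiver position) pairs. The inverse-square law with exponential attenuation is a deliberately simplified stand-in for the paper's ray-traced air-water propagation model, and all geometry and constants are invented.

```python
# Train an MLP to map received optical power to submerged receiver coordinates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xy = rng.uniform(-5.0, 5.0, size=(20000, 2))   # receiver positions (m), assumed
depth = 10.0                                    # assumed receiver depth (m)
beams = np.array([[-4.0, -4.0], [4.0, -4.0], [-4.0, 4.0], [4.0, 4.0]])

# Received power from each airborne beam: spreading loss + water attenuation.
d = np.linalg.norm(xy[:, None, :] - beams[None, :, :], axis=2)
path = np.sqrt(d**2 + depth**2)
power = 1e3 * np.exp(-0.15 * path) / path**2    # simplified propagation model
power += rng.normal(0.0, 1e-3, power.shape)     # measurement noise

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(power[:15000], xy[:15000])
err = np.linalg.norm(mlp.predict(power[15000:]) - xy[15000:], axis=1)
print(f"mean localization error: {err.mean():.3f} m")
```
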
22 pages, 45752 KB  
Article
Chrominance-Aware Multi-Resolution Network for Aerial Remote Sensing Image Fusion
by Shuying Li, Jiaxin Cheng, San Zhang and Wuwei Wang
Remote Sens. 2026, 18(3), 431; https://doi.org/10.3390/rs18030431 - 29 Jan 2026
Abstract
Spectral data obtained from upstream remote sensing tasks contain abundant complementary information. Infrared images are rich in radiative information, and visible images provide spatial details. Effective fusion of these two modalities improves the utilization of remote sensing data and provides a more comprehensive representation of target characteristics and texture details. The majority of current fusion methods focus primarily on intensity fusion between infrared and visible images. These methods ignore the chrominance information present in visible images and the interference introduced by infrared images on the color of fusion results. Consequently, the fused images exhibit inadequate color representation. To address these challenges, an infrared and visible image fusion method named Chrominance-Aware Multi-Resolution Network (CMNet) is proposed. CMNet integrates the Mamba module, which offers linear complexity and global awareness, into a U-Net framework to form the Multi-scale Spatial State Attention (MSSA) framework. Furthermore, the enhancement of the Mamba module through the design of the Chrominance-Enhanced Fusion (CEF) module leads to better color and detail representation in the fused image. Extensive experimental results show that the CMNet method delivers better performance compared to existing fusion methods across various evaluation metrics. Full article
(This article belongs to the Section Remote Sensing Image Processing)
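
The chrominance-preserving idea behind CMNet can be illustrated without the network itself: fuse only the luminance of the visible image with the infrared image, then reattach the original chrominance channels. The max-based fusion rule below is a placeholder for the learned fusion, and synthetic arrays stand in for real imagery.

```python
# Fuse visible luminance with IR while keeping visible chrominance intact.
import numpy as np
import cv2

visible = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
infrared = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

ycrcb = cv2.cvtColor(visible, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

fused_y = np.maximum(y, infrared)   # placeholder intensity fusion rule
fused = cv2.cvtColor(cv2.merge([fused_y, cr, cb]), cv2.COLOR_YCrCb2BGR)
print(fused.shape)  # (256, 256, 3); color is carried by the visible chrominance
```
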
15 pages, 1097 KB  
Perspective
Point-of-Care Veterinary Diagnostics Using Vis–NIR Spectroscopy: Current Opportunities and Future Directions
by Sofia Rosa, Ana C. Silvestre-Ferreira, Rui Martins and Felisbina Luísa Queiroga
Animals 2026, 16(3), 401; https://doi.org/10.3390/ani16030401 - 28 Jan 2026
Viewed by 56
Abstract
Visible-Near-Infrared (Vis-NIR) spectroscopy, spanning approximately 400 to 2500 nm, is an innovative technology with growing relevance for diagnostics performed at the point of care (POC). This review explores the potential of Vis-NIR in veterinary medicine, highlighting its advantages over complex techniques like Raman and Fourier transform infrared spectroscopy (FTIR) by being rapid, non-invasive, reagent-free, and compatible with miniaturized, portable devices. The methodology involves directing a broadband light source, often using LEDs, toward the sample (e.g., blood, urine, faeces), collecting spectral information related to molecular vibrations, which is then analyzed using chemometric methods. Successful veterinary applications include hemogram analysis in dogs, cats, and Atlantic salmon, and quantifying blood in ovine faeces for parasite detection. Key limitations include spectral interference from strong absorbers like water and hemoglobin, and the limited penetration depth of light. However, combining Vis-NIR with Self-Learning Artificial Intelligence (SLAI) is shown to isolate and mitigate these multi-scale interferences. Vis-NIR spectroscopy serves as an important complement to centralized laboratory testing, holding significant potential to accelerate clinical decisions, minimize stress on animals during assessment, and improve diagnostic capabilities for both human and animal health, aligning with the One Health concept. Full article
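
The chemometric step the review refers to is commonly a latent-variable regression on the spectra. A minimal sketch using partial least squares follows; the spectra, reference values, and component count are synthetic placeholders, and PLS is one common choice rather than the review's prescribed method.

```python
# Regress an analyte concentration on Vis-NIR spectra with PLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 2500, 300)      # nm, the Vis-NIR range
spectra = rng.normal(size=(200, wavelengths.size))
conc = spectra[:, 50] * 2.0 + spectra[:, 120] + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=8).fit(X_tr, y_tr)
print(f"R^2 on held-out spectra: {pls.score(X_te, y_te):.2f}")
```
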
25 pages, 6583 KB  
Article
Robust Traffic Sign Detection for Obstruction Scenarios in Autonomous Driving
by Xinhao Wang, Limin Zheng, Yuze Song and Jie Li
Symmetry 2026, 18(2), 226; https://doi.org/10.3390/sym18020226 - 27 Jan 2026
Viewed by 72
Abstract
With the rapid advancement of autonomous driving technology, Traffic Sign Detection and Recognition (TSDR) has become a critical component for ensuring vehicle safety. However, existing TSDR systems still face significant challenges in accurately detecting partially occluded traffic signs, which poses a substantial risk in real-world applications. To address this issue, this study proposes a comprehensive solution from three perspectives: data augmentation, model architecture enhancement, and dataset construction. We propose an innovative network framework tailored for occluded traffic sign detection. The framework enhances feature representation through a dual-path convolutional mechanism (DualConv) that preserves information flow even when parts of the sign are blocked, and employs a spatial attention module (SEAM) that helps the model focus on visible sign regions while ignoring occluded areas. Finally, we construct the Jinzhou Traffic Sign (JZTS) occlusion dataset to provide targeted training and evaluation samples. Extensive experiments on the public Tsinghua-Tencent 100K (TT-100K) dataset and our JZTS dataset demonstrate the superior performance and strong generalisation capability of our model under occlusion conditions. This work not only advances the robustness of TSDR systems for autonomous driving but also provides a valuable benchmark for future research. Full article
(This article belongs to the Section Computer)
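
The DualConv and SEAM modules are only named in the abstract. As one plausible reading of the dual-path idea, the PyTorch sketch below runs a 3x3 spatial path and a 1x1 pointwise path in parallel and sums them, so some information flows even when part of the receptive field is occluded. This is a generic reconstruction, not the paper's exact module.

```python
import torch
import torch.nn as nn

class DualPathConv(nn.Module):
    """Parallel 3x3 and 1x1 convolutions, summed (illustrative DualConv reading)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.spatial = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.pointwise = nn.Conv2d(c_in, c_out, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.spatial(x) + self.pointwise(x))

x = torch.randn(1, 64, 80, 80)            # a backbone feature map
print(DualPathConv(64, 128)(x).shape)      # torch.Size([1, 128, 80, 80])
```
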
20 pages, 49658 KB  
Article
Dead Chicken Identification Method Based on a Spatial-Temporal Graph Convolution Network
by Jikang Yang, Chuang Ma, Haikun Zheng, Zhenlong Wu, Xiaohuan Chao, Cheng Fang and Boyi Xiao
Animals 2026, 16(3), 368; https://doi.org/10.3390/ani16030368 - 23 Jan 2026
Viewed by 132
Abstract
In intensive cage rearing systems, accurate dead hen detection remains difficult due to complex environments, severe occlusion, and the high visual similarity between dead hens and live hens in a prone posture. To address these issues, this study proposes a dead hen identification method based on a Spatial-Temporal Graph Convolutional Network (STGCN). Unlike conventional static image-based approaches, the proposed method introduces temporal information to enable dynamic spatial-temporal modeling of hen health states. First, a multimodal fusion algorithm is applied to visible light and thermal infrared images to strengthen multimodal feature representation. Then, an improved YOLOv7-Pose algorithm is used to extract the skeletal keypoints of individual hens, and the ByteTrack algorithm is employed for multi-object tracking. Based on these results, spatial-temporal graph-structured data of hens are constructed by integrating spatial and temporal dimensions. Finally, a spatial-temporal graph convolution model is used to identify dead hens by learning spatial-temporal dependency features from skeleton sequences. Experimental results show that the improved YOLOv7-Pose model achieves an average precision (AP) of 92.8% in keypoint detection. Based on the constructed spatial-temporal graph data, the dead hen identification model reaches an overall classification accuracy of 99.0%, with an accuracy of 98.9% for the dead hen category. These results demonstrate that the proposed method effectively reduces interference caused by feeder occlusion and ambiguous visual features. By using dynamic spatial-temporal information, the method substantially improves robustness and accuracy of dead hen detection in complex cage rearing environments, providing a new technical route for intelligent monitoring of poultry health status. Full article
(This article belongs to the Special Issue Welfare and Behavior of Laying Hens)
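
The core operation the identification model relies on is a spatial-temporal graph convolution over skeleton sequences: a graph convolution across keypoints followed by a temporal convolution across frames. The sketch below shows that operation in minimal form; the keypoint count, adjacency, and layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class STGCNBlock(nn.Module):
    """One spatial graph conv + temporal conv over (N, C, T frames, V keypoints)."""
    def __init__(self, c_in, c_out, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)   # (V, V) keypoint graph
        self.spatial = nn.Conv2d(c_in, c_out, kernel_size=1)
        self.temporal = nn.Conv2d(c_out, c_out, kernel_size=(9, 1), padding=(4, 0))
        self.act = nn.ReLU()

    def forward(self, x):
        x = torch.einsum("nctv,vw->nctw", self.spatial(x), self.A)
        return self.act(self.temporal(x))

V = 8                                  # hypothetical keypoints per hen
A = torch.eye(V)                       # placeholder adjacency (self-loops only)
x = torch.randn(2, 3, 60, V)           # 2 clips, 3 coordinate channels, 60 frames
print(STGCNBlock(3, 64, A)(x).shape)   # torch.Size([2, 64, 60, 8])
```
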
22 pages, 9269 KB  
Article
Efficient Layer-Wise Cross-View Calibration and Aggregation for Multispectral Object Detection
by Xiao He, Tong Yang, Tingzhou Yan, Hongtao Li, Yang Ge, Zhijun Ren, Zhe Liu, Jiahe Jiang and Chang Tang
Electronics 2026, 15(3), 498; https://doi.org/10.3390/electronics15030498 - 23 Jan 2026
Viewed by 216
Abstract
Multispectral object detection is a fundamental task with an extensive range of practical implications. In particular, combining visible (RGB) and infrared (IR) images can offer complementary information that enhances detection performance in different weather scenarios. However, the existing methods generally involve aligning features across modalities and require proposals for the two-stage detectors, which are often slow and unsuitable for large-scale applications. To overcome this challenge, we introduce a novel one-stage oriented detector for RGB-infrared object detection called the Layer-wise Cross-Modality calibration and Aggregation (LCMA) detector. LCMA employs a layer-wise strategy to achieve cross-modality alignment by using the proposed inter-modality spatial-reduction attention. Moreover, we design a Gated Coupled Filter in each layer to capture semantically meaningful features while ensuring that well-aligned, foreground object information is obtained before it is forwarded to the detection head. This removes the need for a region proposal step for alignment, enabling direct category and bounding box predictions in a unified one-stage oriented detector. Extensive experiments on two challenging datasets demonstrate that the proposed LCMA outperforms state-of-the-art methods in terms of both accuracy and computational efficiency, demonstrating the efficacy of our approach in exploiting multi-modality information for robust and efficient multispectral object detection. Full article
(This article belongs to the Special Issue Multi-View Learning and Applications)
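
One way to read "inter-modality spatial-reduction attention" is cross-attention in which RGB queries attend to IR keys and values whose spatial resolution is first reduced to keep the attention cost low, in the spirit of spatial-reduction attention from pyramid vision transformers. The sketch below is that generic idea, not the published LCMA module.

```python
import torch
import torch.nn as nn

class CrossModalSRAttention(nn.Module):
    """RGB features attend to spatially reduced IR features (illustrative)."""
    def __init__(self, dim, reduction=4, heads=4):
        super().__init__()
        self.reduce = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb, ir):                  # both (N, C, H, W)
        n, c, h, w = rgb.shape
        q = rgb.flatten(2).transpose(1, 2)       # (N, H*W, C) queries
        kv = self.reduce(ir).flatten(2).transpose(1, 2)  # fewer IR tokens
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(n, c, h, w)

rgb, ir = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(CrossModalSRAttention(64)(rgb, ir).shape)  # torch.Size([1, 64, 32, 32])
```
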
23 pages, 53610 KB  
Article
Multispectral Sparse Cross-Attention Guided Mamba Network for Small Object Detection in Remote Sensing
by Wen Xiang, Yamin Li, Liu Duan, Qifeng Wu, Jiaqi Ruan, Yucheng Wan and Sihan Wu
Remote Sens. 2026, 18(3), 381; https://doi.org/10.3390/rs18030381 - 23 Jan 2026
Viewed by 187
Abstract
Remote sensing small object detection remains a challenging task due to limited feature representation and interference from complex backgrounds. Existing methods that rely exclusively on either visible or infrared modalities often fail to achieve both accuracy and robustness in detection. Effectively integrating cross-modal information to enhance detection performance remains a critical challenge. To address this issue, we propose a novel Multispectral Sparse Cross-Attention Guided Mamba Network (MSCGMN) for small object detection in remote sensing. The proposed MSCGMN architecture comprises three key components: Multispectral Sparse Cross-Attention Guidance Module (MSCAG), Dynamic Grouped Mamba Block (DGMB), and Gated Enhanced Attention Module (GEAM). Specifically, the MSCAG module selectively fuses RGB and infrared (IR) features using sparse cross-modal attention, effectively capturing complementary information across modalities while suppressing redundancy. The DGMB introduces a dynamic grouping strategy to improve the computational efficiency of Mamba, enabling effective global context modeling. In remote sensing images, small objects occupy limited areas, making it difficult to capture their critical features. We design the GEAM module to enhance both global and local feature representations for small object detection. Experiments on the VEDAI and DroneVehicle datasets show that MSCGMN achieves mAP50 scores of 83.9% and 84.4%, outperforming existing state-of-the-art methods and demonstrating strong competitiveness in small object detection tasks. Full article
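
A toy version of sparse cross-modal attention in the spirit of the MSCAG description: each RGB query attends only to its top-k highest-scoring IR positions, suppressing redundant background. Shapes and k are illustrative, and this is not the published module.

```python
import torch

def sparse_cross_attention(q, k, v, topk=16):
    """q: (N, Lq, C) RGB queries; k, v: (N, Lk, C) IR keys/values."""
    scores = q @ k.transpose(1, 2) / q.shape[-1] ** 0.5   # (N, Lq, Lk)
    vals, idx = scores.topk(topk, dim=-1)                 # keep top-k per query
    sparse = torch.full_like(scores, float("-inf")).scatter(-1, idx, vals)
    return sparse.softmax(dim=-1) @ v                     # masked positions get 0

q = torch.randn(1, 256, 64)
kv = torch.randn(1, 256, 64)
print(sparse_cross_attention(q, kv, kv).shape)  # torch.Size([1, 256, 64])
```
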
22 pages, 11123 KB  
Article
Compilation of a Nationwide River Image Dataset for Identifying River Channels and River Rapids via Deep Learning
by Nicholas Brimhall, Kelvyn K. Bladen, Thomas Kerby, Carl J. Legleiter, Cameron Swapp, Hannah Fluckiger, Julie Bahr, Makenna Roberts, Kaden Hart, Christina L. Stegman, Brennan L. Bean and Kevin R. Moon
Remote Sens. 2026, 18(2), 375; https://doi.org/10.3390/rs18020375 - 22 Jan 2026
Viewed by 135
Abstract
Remote sensing enables large-scale, image-based assessments of river dynamics, offering new opportunities for hydrological monitoring. We present a publicly available dataset consisting of 281,024 satellite and aerial images of U.S. rivers, constructed using an Application Programming Interface (API) and the U.S. Geological Survey’s National Hydrography Dataset. The dataset includes images, primary keys, and ancillary geospatial information. We use a manually labeled subset of the images to train models for detecting rapids, defined as areas where high velocity and turbulence lead to a wavy, rough, or even broken water surface visible in the imagery. To demonstrate the utility of this dataset, we develop an image segmentation model to identify rivers within images. This model achieved a mean test intersection-over-union (IoU) of 0.57, with performance rising to an actual IoU of 0.89 on the subset of predictions with high confidence (predicted IoU > 0.9). Following this initial segmentation of river channels within the images, we trained several convolutional neural network (CNN) architectures to classify the presence or absence of rapids. Our selected model reached an accuracy and F1 score of 0.93, indicating strong performance for the classification of rapids that could support consistent, efficient inventory and monitoring of rapids. These data provide new resources for recreation planning, habitat assessment, and discharge estimation. Overall, the dataset and tools offer a foundation for scalable, automated identification of geomorphic features to support riverine science and resource management. Full article
(This article belongs to the Section Environmental Remote Sensing)
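
The reported evaluation hinges on intersection-over-union between predicted and labeled river masks, and on keeping only predictions whose predicted IoU exceeds 0.9. Both steps are small to express; the masks and predicted-IoU scores below are placeholders.

```python
import numpy as np

def mask_iou(pred, truth):
    """IoU between two boolean segmentation masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 0.0

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), bool); truth[15:45, 10:40] = True
print(f"IoU: {mask_iou(pred, truth):.2f}")

predicted_ious = np.array([0.95, 0.62, 0.91])  # the model's own IoU estimates
keep = predicted_ious > 0.9                    # high-confidence subset
print(f"kept {keep.sum()} of {keep.size} predictions")
```
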
30 pages, 746 KB  
Article
From the Visible to the Invisible: On the Phenomenal Gradient of Appearance
by Baingio Pinna, Daniele Porcheddu and Jurģis Šķilters
Brain Sci. 2026, 16(1), 114; https://doi.org/10.3390/brainsci16010114 - 21 Jan 2026
Viewed by 152
Abstract
Background: By exploring the principles of Gestalt psychology, the neural mechanisms of perception, and computational models, scientists aim to unravel the complex processes that enable us to perceive a coherent and organized world. This multidisciplinary approach continues to advance our understanding of how the brain constructs a perceptual world from sensory inputs. Objectives and Methods: This study investigates the nature of visual perception through an experimental paradigm and method based on a comparative analysis of human and artificial intelligence (AI) responses to a series of modified square images. We introduce the concept of a “phenomenal gradient” in human visual perception, where different attributes of an object are organized syntactically and hierarchically in terms of their perceptual salience. Results: Our findings reveal that human visual processing involves complex mechanisms including shape prioritization, causal inference, amodal completion, and the perception of visible invisibles. In contrast, AI responses, while geometrically precise, lack these sophisticated interpretative capabilities. These differences highlight the richness of human visual cognition and the current limitations of model-generated descriptions in capturing causal, completion-based, and context-dependent inferences. The present work introduces the notion of a ‘phenomenal gradient’ as a descriptive framework and provides an initial comparative analysis that motivates testable hypotheses for future behavioral and computational studies, rather than direct claims about improving AI systems. Conclusions: By bridging phenomenology, information theory, and cognitive science, this research challenges existing paradigms and suggests a more integrated approach to studying visual consciousness. Full article
18 pages, 4205 KB  
Article
Research on Field Weed Target Detection Algorithm Based on Deep Learning
by Ziyang Chen, Le Wu, Zhenhong Jia, Jiajia Wang, Gang Zhou and Zhensen Zhang
Sensors 2026, 26(2), 677; https://doi.org/10.3390/s26020677 - 20 Jan 2026
Viewed by 160
Abstract
Weed detection algorithms based on deep learning are considered crucial for smart agriculture, with the YOLO series algorithms being widely adopted due to their efficiency. However, existing YOLO algorithms struggle to maintain high accuracy while keeping parameter counts and computational costs low when detecting occluded or overlapping weeds. To address this challenge, a target detection algorithm called SSS-YOLO based on YOLOv9t is proposed in this paper. First, the SCB (Spatial Channel Conv Block) module is introduced, in which large kernel convolution is employed to capture long-range dependencies, occluded weed regions are bypassed by being associated with unobstructed areas, and features of unobstructed regions are enhanced through inter-channel relationships. Second, the SPPF EGAS (Spatial Pyramid Pooling Fast Edge Gaussian Aggregation Super) module is proposed, where multi-scale max pooling is utilized to extract hierarchical contextual features, large receptive fields are leveraged to acquire background information around occluded objects, and features of weed regions obscured by crops are inferred. Finally, the EMSN (Efficient Multi-Scale Spatial-Feedforward Network) module is developed, through which semantic information of occluded regions is reconstructed by contextual reasoning and background vegetation interference is effectively suppressed while visible regional details are preserved. To validate the performance of this method, experiments are conducted on both our self-built dataset and the publicly available CottonWeedDet12 dataset. The results demonstrate that the proposed method achieves significant performance improvements over existing algorithms. Full article
(This article belongs to the Section Smart Agriculture)
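
The multi-scale max-pooling idea behind the SPPF-style module can be shown with the standard SPPF structure: repeated 5x5 max pools whose outputs are concatenated, widening the receptive field to gather context around occluded objects. This is the generic SPPF block, not the paper's SPPF EGAS module.

```python
import torch
import torch.nn as nn

class SPPF(nn.Module):
    """Generic SPPF: chained 5x5 max pools, concatenated, then fused by 1x1 conv."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_mid = c_in // 2
        self.cv1 = nn.Conv2d(c_in, c_mid, kernel_size=1)
        self.pool = nn.MaxPool2d(kernel_size=5, stride=1, padding=2)
        self.cv2 = nn.Conv2d(c_mid * 4, c_out, kernel_size=1)

    def forward(self, x):
        x = self.cv1(x)
        y1 = self.pool(x)
        y2 = self.pool(y1)
        return self.cv2(torch.cat([x, y1, y2, self.pool(y2)], dim=1))

print(SPPF(256, 256)(torch.randn(1, 256, 20, 20)).shape)  # [1, 256, 20, 20]
```
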
34 pages, 692 KB  
Systematic Review
The Experiences of LGBTQ+ Pre-Service and Qualified Teachers and Their Mental Health: A Systematic Review of International Research
by Jonathan Glazzard and Scott Thomas
Int. J. Environ. Res. Public Health 2026, 23(1), 115; https://doi.org/10.3390/ijerph23010115 - 17 Jan 2026
Viewed by 172
Abstract
Existing research highlights that Lesbian, Gay, Bisexual, Transgender and Queer (LGBTQ+) teachers are often exposed to additional stressors in schools which adversely affect their mental health. Some mitigate the effects of these stressors by separating their personal and professional identities, while others choose to integrate their identities so that they can be authentic, advance social justice in school contexts or be visible and vocal role models. Less is known about the experiences of pre-service teachers who are undertaking teacher preparation programmes. This systematic literature review presents the results of 20 published papers which represent the global experiences of both pre-service teachers and serving teachers. The findings highlight identity management, experiences of discrimination, agency, and a lack of confidence among teacher educators. Two new frameworks are presented that lay the foundations for embedding LGBTQ+ inclusion and proposed mandatory elements of curricula for initial teacher training. This systematic literature review has been informed by the following research questions: RQ1. What are the experiences of LGBTQ+ pre-service teachers? RQ2. How do LGBTQ+ pre-service teachers negotiate their identities? RQ3. How do LGBTQ+ pre-service teachers disrupt hetero/cis-normative cultures in schools? RQ4. How well does the teacher education programme prepare pre-service teachers for teaching LGBTQ+ inclusive education? Full article
(This article belongs to the Special Issue Mental Health Challenges Affecting LGBTQ+ Individuals and Communities)
26 pages, 48109 KB  
Article
Quantifying VIIRS and ABI Contributions to Hourly Dead Fuel Moisture Content Estimation Using Machine Learning
by John S. Schreck, William Petzke, Pedro A. Jiménez y Muñoz and Thomas Brummet
Remote Sens. 2026, 18(2), 318; https://doi.org/10.3390/rs18020318 - 17 Jan 2026
Viewed by 163
Abstract
Fuel moisture content (FMC) estimation is essential for wildfire danger assessment and fire behavior modeling. This study quantifies the value of integrating satellite observations from the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi-NPP and the Advanced Baseline Imager (ABI) aboard GOES-16 with High-Resolution Rapid Refresh (HRRR) numerical weather prediction data for hourly 10 h dead FMC estimation across the continental United States. We leverage the complementary characteristics of each system: VIIRS provides enhanced spatial resolution (375–750 m), while ABI contributes high temporal frequency observations (hourly). Using XGBoost machine learning models trained on 2020–2021 data, we systematically evaluate performance improvements stemming from incorporating satellite retrievals individually and in combination with HRRR meteorological variables through eight experimental configurations. Results demonstrate that while both satellite systems individually enhance prediction accuracy beyond HRRR-only models, their combination provides substantially greater improvements: 27% RMSE and MAE reduction and 46.7% increase in explained variance (R2) relative to HRRR baseline performance. Comprehensive seasonal analysis reveals consistent satellite data contributions across all seasons, with stable median performance throughout the year. Diurnal analysis across the complete 24 h cycle shows sustained improvements during all hours, not only during satellite overpass times, indicating effective integration of temporal information. Spatial analysis reveals improvements in western fire-prone regions where afternoon overpass timing aligns with peak fire danger conditions. Feature importance analysis using multiple explainable AI methods demonstrates that HRRR meteorological variables provide the fundamental physical framework for prediction, while satellite observations contribute fine-scale refinements that improve moisture estimates. The VIIRS lag-hour predictor successfully maintains observational value up to 72 h after acquisition, enabling flexible operational implementation. This research demonstrates the first systematic comparison of VIIRS versus ABI contributions to dead FMC estimation and establishes a framework for hourly, satellite-enhanced FMC products that support more accurate fire danger assessment and enhanced situational awareness for wildfire management operations. Full article
(This article belongs to the Section AI Remote Sensing)
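
The experimental comparison the abstract describes reduces to training one XGBoost model on HRRR predictors alone and one with satellite predictors added, then comparing error. The sketch below mirrors that setup with synthetic data; feature names and counts are placeholders for the study's inputs.

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hrrr = rng.normal(size=(5000, 6))   # stand-ins for HRRR meteorology (T, RH, ...)
sat = rng.normal(size=(5000, 4))    # stand-ins for VIIRS/ABI retrievals
fmc = hrrr @ rng.normal(size=6) + 0.5 * sat[:, 0] + rng.normal(0, 0.3, 5000)

for name, X in [("HRRR only", hrrr), ("HRRR + satellite", np.hstack([hrrr, sat]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, fmc, random_state=0)
    model = xgb.XGBRegressor(n_estimators=300, max_depth=6).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```
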
18 pages, 5694 KB  
Article
All-Weather Flood Mapping Using a Synergistic Multi-Sensor Downscaling Framework: Case Study for Brisbane, Australia
by Chloe Campo, Paolo Tamagnone, Suelynn Choy, Trinh Duc Tran, Guy J.-P. Schumann and Yuriy Kuleshov
Remote Sens. 2026, 18(2), 303; https://doi.org/10.3390/rs18020303 - 16 Jan 2026
Viewed by 169
Abstract
Despite a growing number of Earth Observation satellites, a critical observational gap persists for timely, high-resolution flood mapping, primarily due to infrequent satellite revisits and persistent cloud cover. To address this issue, we propose a novel framework that synergistically fuses complementary data from three public sensor types. Our methodology harmonizes these disparate data sources by using surface water fraction as a common variable and downscaling them with flood susceptibility and topography information. This allows for the integration of sub-daily observations from the Visible Infrared Imaging Radiometer Suite and the Advanced Himawari Imager with the cloud-penetrating capabilities of the Advanced Microwave Scanning Radiometer 2. We evaluated this approach on the February 2022 flood in Brisbane, Australia using an independent ground truth dataset. The framework successfully compensates for the limitations of individual sensors, enabling the consistent generation of detailed, high-resolution flood maps. The proposed method outperformed the flood extent derived from commercial high-resolution optical imagery, scoring 77% higher than the Copernicus Emergency Management Service (CEMS) map in the Critical Success Index. Furthermore, the True Positive Rate was twice as high as the CEMS map, confirming that the proposed method successfully overcame the cloud cover issue. This approach provides valuable, actionable insights into inundation dynamics, particularly when other public data sources are unavailable. Full article
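
A toy version of the downscaling step: a coarse cell's surface-water fraction is redistributed to its fine-resolution pixels in order of a flood-susceptibility raster, so the most susceptible pixels are flagged as flooded first. The rasters and the fraction are synthetic placeholders, and this rank-based rule is a simplification of the framework.

```python
import numpy as np

rng = np.random.default_rng(0)
water_fraction = 0.35                    # coarse-cell fraction (e.g., from AMSR2)
susceptibility = rng.random((10, 10))    # fine-grid susceptibility within the cell

n_wet = round(water_fraction * susceptibility.size)
threshold = np.sort(susceptibility.ravel())[-n_wet]
flood_mask = susceptibility >= threshold  # most susceptible pixels flood first
print(f"flooded pixels: {flood_mask.sum()} / {flood_mask.size}")
```
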
24 pages, 43005 KB  
Article
Accurate Estimation of Spring Maize Aboveground Biomass in Arid Regions Based on Integrated UAV Remote Sensing Feature Selection
by Fengxiu Li, Yanzhao Guo, Yingjie Ma, Ning Lv, Zhijian Gao, Guodong Wang, Zhitao Zhang, Lei Shi and Chongqi Zhao
Agronomy 2026, 16(2), 219; https://doi.org/10.3390/agronomy16020219 - 16 Jan 2026
Viewed by 247
Abstract
Maize is one of the top three crops globally, ranking only behind rice and wheat, making it an important crop of interest. Aboveground biomass is a key indicator for assessing maize growth and its yield potential. This study developed an efficient and stable biomass prediction model to estimate the aboveground biomass (AGB) of spring maize (Zea mays L.) under subsurface drip irrigation in arid regions, based on UAV multispectral remote sensing and machine learning techniques. Focusing on typical subsurface drip-irrigated spring maize in arid Xinjiang, multispectral images and field-measured AGB data were collected from 96 sample points (selected via stratified random sampling across 24 plots) over four key phenological stages in 2024 and 2025. Sixteen vegetation indices were calculated and 40 texture features were extracted using the gray-level co-occurrence matrix method, while an integrated feature-selection strategy combining Elastic Net and Random Forest was employed to effectively screen key predictor variables. Based on the selected features, six machine learning models were constructed, including Elastic Net Regression (ENR), Gradient Boosting Decision Trees (GBDT), Gaussian Process Regression (GPR), Partial Least Squares Regression (PLSR), Random Forest (RF), and Extreme Gradient Boosting (XGB). Results showed that the fused feature set comprised four vegetation indices (GRDVI, RERVI, GRVI, NDVI) and five texture features (R_Corr, NIR_Mean, NIR_Vari, B_Mean, B_Corr), thereby retaining red-edge and visible-light texture information highly sensitive to AGB. The GPR model based on the fused features exhibited the best performance (test set R2 = 0.852, RMSE = 2890.74 kg ha−1, MAE = 1676.70 kg ha−1), demonstrating high fitting accuracy and stable predictive ability across both the training and test sets. Spatial inversions over the two growing seasons of 2024 and 2025, derived from the fused-feature GPR optimal model at four key phenological stages, revealed pronounced spatiotemporal heterogeneity and stage-dependent dynamics of spring maize AGB: the biomass accumulates rapidly from jointing to grain filling, slows thereafter, and peaks at maturity. At a constant planting density, AGB increased markedly with nitrogen inputs from N0 to N3 (420 kg N ha−1), with the high-nitrogen N3 treatment producing the greatest biomass; this successfully captured the regulatory effect of the nitrogen gradient on maize growth, provided reliable data for variable-rate fertilization, and is highly relevant for optimizing water–fertilizer coordination in subsurface drip irrigation systems. Future research may extend this integrated feature selection and modeling framework to monitor the growth and estimate the yield of other crops, such as rice and cotton, thereby validating its generalizability and robustness in diverse agricultural scenarios. Full article
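
The modeling pipeline the abstract outlines — screen predictors with Elastic Net and Random Forest, then fit a Gaussian process on the retained features — can be sketched directly with scikit-learn. The data, importance thresholds, and kernel below are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.normal(size=(96, 56))     # stand-in for 16 indices + 40 texture features
agb = X[:, 0] * 3 + X[:, 5] - X[:, 20] + rng.normal(0, 0.2, 96)

# Integrated screening: keep features both models consider informative.
enet = ElasticNetCV(cv=5).fit(X, agb)
rf = RandomForestRegressor(random_state=0).fit(X, agb)
keep = (np.abs(enet.coef_) > 1e-6) & (rf.feature_importances_ > 0.01)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X[:, keep], agb)
print(f"features kept: {keep.sum()}, train R^2: {gpr.score(X[:, keep], agb):.2f}")
```
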