Search Results (1,966)

Search Parameters:
Keywords = meta-learning

20 pages, 1050 KB  
Systematic Review
Performance and Clinical Utility of Deep Learning for Detecting Referable Age-Related Macular Degeneration on Fundus Photographs: A Systematic Review and Meta-Analysis
by Wei-Ting Luo and Ting-Wei Wang
Diagnostics 2026, 16(4), 633; https://doi.org/10.3390/diagnostics16040633 - 22 Feb 2026
Abstract
Background/Objectives: Age-related macular degeneration (AMD) is a leading cause of irreversible central vision loss in older adults. Detection of referable AMD—typically intermediate or advanced disease requiring specialist evaluation—is critical for timely intervention. Deep learning (DL) applied to color fundus photographs has emerged as a potential tool to support large-scale AMD screening. This systematic review and meta-analysis evaluated the diagnostic accuracy of DL algorithms for detecting referable AMD and compared their performance with human graders. Methods: We systematically searched PubMed, Embase, Web of Science, and IEEE Xplore through December 18, 2025. Diagnostic accuracy studies assessing DL algorithms on color fundus photographs for referable AMD in adults were included. Two reviewers independently screened studies, extracted data, and assessed risk of bias using an AI-adapted PROBAST framework. Pooled sensitivity and specificity were estimated using a bivariate random-effects model. Clinical utility was evaluated using likelihood ratios, and paired head-to-head comparisons were synthesized using a contrast-based meta-analysis. Results: Fourteen studies were included. DL algorithms achieved a pooled sensitivity of 0.91 (95% CI: 0.86–0.94) and specificity of 0.93 (95% CI: 0.86–0.96), with substantial heterogeneity. The pooled positive and negative likelihood ratios were 12.22 and 0.10, respectively, indicating strong diagnostic utility. In direct comparisons, DL systems showed slightly lower sensitivity but higher specificity than human graders. Conclusions: Deep learning demonstrates high diagnostic accuracy for detecting referable AMD from fundus photographs and may support screening and referral workflows. Further prospective validation and standardized evaluation are needed before widespread clinical implementation. Full article
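
As a quick aid to interpreting the likelihood ratios quoted above, here is a minimal sketch of how LR+ and LR- follow from pooled sensitivity and specificity. The pooled values are taken from the abstract; the bivariate random-effects pooling itself is not reproduced, which is why the naive LR+ of about 13 differs slightly from the reported 12.22 (the paper derives its ratios within the bivariate model).

```python
# Likelihood ratios from pooled sensitivity and specificity.
# Values are the pooled estimates quoted in the abstract; the bivariate
# random-effects model that produced them is not reproduced here.
sensitivity = 0.91
specificity = 0.93

lr_positive = sensitivity / (1 - specificity)   # ~13.0 (paper reports 12.22)
lr_negative = (1 - sensitivity) / specificity   # ~0.10 (matches the paper)

print(f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}")
```
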
28 pages, 16427 KB  
Article
A Multidimensional Assessment Framework for Urban Green Perception Using Large Vision Models and Mixed Reality
by Jingchao Wang, Yuehao Cao, Ximing Yue and Lulu Wang
Buildings 2026, 16(4), 877; https://doi.org/10.3390/buildings16040877 - 22 Feb 2026
Abstract
Accurately assessing urban green perception is crucial for sustainable urban development and human well-being, yet conventional approaches often depend on simplistic objective metrics and non-immersive, screen-based subjective surveys, undermining ecological validity. This study develops and validates a multidimensional assessment framework that integrates Large Vision Models (LVMs) and Mixed Reality (MR) to couple objective environmental features with immersive human perception. The framework comprises 30 objective and 6 subjective indicators; state-of-the-art LVMs including DINOv2 and Depth Anything were applied to accurately extract objective features from Street View Imagery (SVI); and the MR device, Meta Quest 3, was utilized for the immersive collection of high-quality subjective data. In an empirical study with 74 volunteers in Shenzhen, China, machine learning models trained on MR-based data achieved 20–50% higher R2 for subjective perception than models trained on traditional screen-based data. The validated framework was then applied to 61,131 SVIs citywide to map the spatial distribution of multidimensional green perception and to quantify relationships between objective and subjective indicators. Going beyond technical validation, this study demonstrates how the framework serves as a critical tool for urban planning and landscape upgrading. By diagnosing perceptual deficits where greening quantity does not translate into quality experiences, the framework supports a paradigm shift from quantity-oriented greening to perception-oriented spatial optimization. These findings offer actionable insights for policymakers to prioritize interventions that effectively enhance public health and environmental equity in high-density cities. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
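
The abstract does not list its 30 objective indicators, so purely as an illustration of the kind of per-image feature such a pipeline extracts from street view imagery, here is a hypothetical Green View Index computed from a semantic segmentation mask. The class IDs, the mask, and the indicator choice are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

# Hypothetical example: Green View Index (GVI) for one street-view image from
# a per-pixel semantic segmentation mask. The class IDs below are placeholders;
# the paper's actual indicator set and segmentation models are not specified
# in the abstract.
VEGETATION_CLASSES = {8, 9}  # e.g., "tree" and "low vegetation" in some label map

def green_view_index(seg_mask: np.ndarray) -> float:
    """Fraction of image pixels labelled as vegetation."""
    is_green = np.isin(seg_mask, list(VEGETATION_CLASSES))
    return float(is_green.mean())

# Toy mask: 4x4 image, 5 of 16 pixels are vegetation.
mask = np.array([[8, 0, 0, 9],
                 [0, 8, 0, 0],
                 [0, 0, 9, 0],
                 [0, 0, 0, 8]])
print(green_view_index(mask))  # 0.3125
```
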
20 pages, 4349 KB  
Article
Agricultural Carbon Flux Estimation Using Multi-Source Remote Sensing and Ensemble Models
by Jiang Qiu, Qinrong Li, Weiyu Yu and Jinping Chen
Appl. Sci. 2026, 16(4), 2118; https://doi.org/10.3390/app16042118 - 22 Feb 2026
Abstract
To accurately understand and investigate carbon fluxes in cropland ecosystems, this study adopted a machine learning ensemble model for estimation. Focusing on the Jinzhou station of the ChinaFLUX network, we integrated eddy covariance carbon flux observations with multi-source satellite remote sensing data to construct a machine learning-based cropland carbon flux estimation model. For environmental driver selection, a strategy combining correlation analysis with ecological mechanism understanding was employed to screen LST, NDVI, and NDMI as model input variables, effectively avoiding multicollinearity issues. Using footprint-weighted integrated data from 2005 to 2014 for model training and validation, a Stacking ensemble model was constructed with the RF model serving as the meta-learner to stack the predictions of RF, CART, and GBM. The ensemble model further reduced the prediction error (RMSE = 39.82), maintaining an R2 > 0.9 in most years and effectively improving predictive performance in anomalous years in which single models underperformed. Based on these findings, the model was applied to analyze the spatiotemporal evolution of NEE in Jinzhou croplands from 2005 to 2014. The analysis revealed that while the region functioned overall as a carbon sink, it exhibited significant spatiotemporal heterogeneity. Spatially, the distribution followed a pattern of “strong intensity in the northeast and center, and weak intensity in the northwest and southwest.” Temporally, the sink intensity underwent significant interannual oscillations characterized by a “strengthening–weakening–re-strengthening–declining” trajectory. The high-precision prediction method proposed in this study is of great significance for revealing spatiotemporal variations in carbon sources/sinks, guiding green agricultural development, and supporting relevant policy formulation. Full article
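
A minimal scikit-learn sketch of the stacking arrangement described above (RF, CART, and GBM base learners with a random-forest meta-learner). The synthetic arrays stand in for the footprint-weighted LST/NDVI/NDMI inputs and the NEE target, so this illustrates the technique rather than the study's actual model.

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor, StackingRegressor)
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-ins for the footprint-weighted predictors (LST, NDVI, NDMI)
# and the NEE target; the real inputs come from flux-tower and satellite data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                      # columns: LST, NDVI, NDMI
y = 2.0 * X[:, 1] - 1.5 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("cart", DecisionTreeRegressor(random_state=0)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RandomForestRegressor(n_estimators=200, random_state=0),  # RF meta-learner
)
stack.fit(X_train, y_train)
pred = stack.predict(X_test)
print("R2:", r2_score(y_test, pred),
      "RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```

StackingRegressor fits the meta-learner on cross-validated base predictions, which is the standard guard against leaking training labels into the second stage.
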
34 pages, 3113 KB  
Systematic Review
A Systematic Review of Available Multispectral UAV Image Datasets for Precision Agriculture Applications
by Andrea Caroppo, Giovanni Diraco and Alessandro Leone
Remote Sens. 2026, 18(4), 659; https://doi.org/10.3390/rs18040659 - 21 Feb 2026
Abstract
The proliferation of Unmanned Aerial Vehicles (UAVs) equipped with multispectral imaging sensors has revolutionized data collection in precision agriculture. These platforms provide high-resolution, temporally dense data crucial for monitoring crop health, optimizing resource management, and predicting yield. However, the development and validation of robust data-driven algorithms, from vegetation index analysis to complex deep learning models, are contingent upon the availability of high-quality, standardized, and publicly accessible datasets. This review systematically surveys and characterizes the current landscape of available datasets containing multispectral imagery acquired by UAVs in agricultural contexts. Following guidelines for reporting systematic reviews and meta-analyses (PRISMA methodology), 39 studies were selected and analyzed, categorizing them based on key attributes including spectral bands (e.g., RGB, Red Edge, Near-Infrared), spatial and temporal resolution, types of crops studied, presence of complementary ground-truth data (e.g., biomass, nitrogen content, yield maps), and the specific agricultural tasks they support (e.g., disease detection, weed mapping, water stress assessment). However, the review underscores a critical gap in standardization, with significant variability in data formats, annotation quality, and metadata completeness, which hampers reproducibility and comparative analysis. Furthermore, we identify a need for more datasets targeting specific challenges like early-stage disease identification and anomaly detection in complex crop canopies. Finally, we discuss future directions for the creation of more comprehensive, benchmark-ready open datasets that will be instrumental in accelerating research, fostering collaboration, and bridging the gap between algorithmic innovation and practical agricultural deployment. This work serves as a foundational guide for researchers and practitioners seeking suitable data for their work and contributes to the ongoing effort of standardizing open data practices in agricultural remote sensing. Full article
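
Many of the agricultural tasks these datasets support start from simple band arithmetic. As a reminder of what the listed spectral bands enable, here is a minimal NDVI computation from the Red and Near-Infrared bands; the array shapes and reflectance values are illustrative only.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index from NIR and Red reflectance."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters standing in for UAV multispectral bands.
nir = np.array([[0.60, 0.55], [0.20, 0.80]])
red = np.array([[0.10, 0.30], [0.15, 0.05]])
print(ndvi(nir, red))   # values near +1 indicate dense, healthy vegetation
```
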
43 pages, 1927 KB  
Article
A Large-Scale Empirical Study of LLM Orchestration and Ensemble Strategies for Sentiment Analysis in Recommender Systems
by Konstantinos I. Roumeliotis, Dionisis Margaris, Dimitris Spiliotopoulos and Costas Vassilakis
Future Internet 2026, 18(2), 112; https://doi.org/10.3390/fi18020112 - 20 Feb 2026
Abstract
This paper presents a comprehensive empirical evaluation comparing meta-model aggregation strategies with traditional ensemble methods and standalone models for sentiment analysis in recommender systems beyond standalone large language model (LLM) performance. We investigate whether aggregating multiple LLMs through a reasoning-based meta-model provides measurable performance advantages over individual models and standard statistical aggregation approaches in zero-shot sentiment classification. Using a balanced dataset of 5000 verified Amazon purchase reviews (1000 reviews per rating category from 1 to 5 stars, sampled via two-stage stratified sampling across five product categories), we evaluate 12 different leading pre-trained LLMs from four major providers (OpenAI, Anthropic, Google, and DeepSeek) in both standalone and meta-model configurations. Our experimental design systematically compares individual model performance against GPT-based meta-model aggregation and traditional ensemble baselines (majority voting, mean aggregation). Results show statistically significant improvements (McNemar’s test, p < 0.001): the GPT-5 meta-model achieves 71.40% accuracy (10.15 percentage point improvement over the 61.25% individual model average), while the GPT-5 mini meta-model reaches 70.32% (9.07 percentage point improvement). These observed improvements surpass traditional ensemble methods (majority voting: 62.64%; mean aggregation: 62.96%), suggesting potential value in meta-model aggregation for sentiment analysis tasks. Our analysis reveals empirical patterns including neutral sentiment classification challenges (3-star ratings show 64.83% failure rates across models), model influence hierarchies, and cost-accuracy trade-offs ($130.45 aggregation cost vs. $0.24–$43.97 for individual models per 5000 predictions). This work provides evidence-based insights into the comparative effectiveness of LLM aggregation strategies in recommender systems, demonstrating that meta-model aggregation with natural language reasoning capabilities achieves measurable performance gains beyond statistical aggregation alone. Full article
(This article belongs to the Special Issue Intelligent Agents and Their Application)
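
A minimal sketch of the two traditional ensemble baselines named above, majority voting and mean aggregation over per-model star ratings. The reasoning-based meta-model itself (an LLM prompted with the individual predictions) is not reproduced, and the example ratings are invented.

```python
from collections import Counter

# Star-rating predictions (1-5) from several standalone LLMs for one review.
# The values are invented; in the study, 12 models contribute per review.
model_predictions = {"model_a": 4, "model_b": 5, "model_c": 4, "model_d": 3}

def majority_vote(preds: dict[str, int]) -> int:
    """Most common predicted rating (ties broken by first occurrence)."""
    return Counter(preds.values()).most_common(1)[0][0]

def mean_aggregation(preds: dict[str, int]) -> int:
    """Average rating rounded to the nearest whole star."""
    return round(sum(preds.values()) / len(preds))

print(majority_vote(model_predictions))     # 4
print(mean_aggregation(model_predictions))  # 4
```
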
31 pages, 3941 KB  
Article
Integrating Machine Learning and Simulation for Integrated Mine-to-Mill Flowsheet Modelling: A Meta-Modelling Framework
by Pouya Nobahar, Chaoshui Xu and Peter Dowd
Minerals 2026, 16(2), 216; https://doi.org/10.3390/min16020216 - 20 Feb 2026
Abstract
The growing global demand for mineral resources is challenging mining operations to maintain productivity while processing lower-grade ores and increasingly complex deposits. This study presents an integrated framework that leverages machine learning (ML) and high-fidelity simulation to model and support scenario-based decision-making for the blasting–crushing–SAG (Semi-Autogenous Grinding) milling chain using a calibrated flowsheet. Using publicly available data from the Barrick Cortez Mine (Nevada, USA), more than three million operational scenarios were generated using the Integrated Extraction Simulator (IES) to capture system variability and sensitivity. Machine learning meta-models, built using Random Forest and XGBoost methods, were trained on the simulated data and achieved coefficients of determination (R2) exceeding 0.90 across all key outputs, including P20, P50, P80, and mass flow rates at different operational stages. The meta-models accurately reproduced plant-scale behaviour while reducing computational requirements by several orders of magnitude compared with full-scale simulations. SHapley Additive exPlanations (SHAP) analysis revealed that blast-hole diameter, explosive energy parameters, screen cut-size, crusher feed characteristics, and SAG mill operating conditions are the dominant factors affecting downstream particle size distributions. The proposed framework enables near-real-time evaluation of “what-if” operational scenarios and provides transparent, quantitative decision-support for integrated mine-to-mill optimisation. Full article
(This article belongs to the Section Mineral Processing and Extractive Metallurgy)
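
A minimal sketch of the meta-modelling idea: fit a fast surrogate to simulator outputs, then rank input influence. Permutation importance is used here as a lightweight stand-in for the SHAP analysis reported in the paper, and the feature names and data are placeholders for the blast and mill parameters, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for simulator scenarios: inputs are placeholder blast and
# mill parameters, the target stands in for a product size such as P80.
rng = np.random.default_rng(1)
n = 5000
X = rng.uniform(size=(n, 4))          # hole_diameter, explosive_energy, screen_cut, sag_speed
y = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + 0.2 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
meta_model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("surrogate R2:", meta_model.score(X_te, y_te))

# Permutation importance as a lightweight stand-in for the SHAP analysis
# reported in the paper: which inputs the surrogate relies on most.
imp = permutation_importance(meta_model, X_te, y_te, n_repeats=10, random_state=1)
for name, score in zip(["hole_diameter", "explosive_energy", "screen_cut", "sag_speed"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```
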
20 pages, 868 KB  
Systematic Review
Artificial Intelligence in Aquaculture Risk Management: A Systematic Review by PRISMA
by Marios C. Gkikas, Michele Thornton, Dimitris C. Gkikas, Spyros Sioutas and John A. Theodorou
Appl. Sci. 2026, 16(4), 2032; https://doi.org/10.3390/app16042032 - 18 Feb 2026
Abstract
The aquaculture industry is growing rapidly. It is the fastest-growing food industry in the world, with production expanding 16-fold between 1985 and 2018, according to the Food and Agriculture Organization (FAO). The industry operates in an environment of high uncertainty, as the management of biological and environmental risks is critical. The aim of this research is to identify machine learning (ML) algorithms applied to quantify risks, categorize applications by sector, and evaluate the extent to which their data and predictions feed into formal risk management protocols. A systematic review was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. The search was conducted in Scopus and ScienceDirect for publications up to January 2026. Initially, 134 records were identified, of which 38 studies were ultimately included in the analysis. The results showed that artificial intelligence (AI) and ML offer new predictive capabilities. Integrating Internet of Things (IoT) sensors, AI methods, and ML algorithms improves risk mitigation. However, there is a significant disconnection between algorithmic predictions and operational action. Only 3 of 38 studies demonstrated integration with standardized risk management frameworks (e.g., ISO 31000). The study concludes that while AI tools provide predictive efficiency, interdisciplinary frameworks are required to filter predictions through economic and ethical criteria. Strengthening this connection will establish AI as a tool for proactive and standardized risk mitigation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
36 pages, 4846 KB  
Systematic Review
Industrial Robotics and Adaptive Control Systems in STEM Education: Systematic Review of Technology Transfer from Industry to Classroom and Competency Development Framework
by Claudio Urrea
Appl. Sci. 2026, 16(4), 2026; https://doi.org/10.3390/app16042026 - 18 Feb 2026
Abstract
The Fourth Industrial Revolution reshapes manufacturing and workforce demands, yet a persistent gap remains between industry needs and engineering education. While proficiency in industrial robotics, adaptive control, and automation becomes critical, traditional education struggles to bridge the theory–practice divide. This systematic review examines technology transfer from factory to classroom to develop authentic Industry 4.0 competencies. Following PRISMA 2020 guidelines, we synthesized 52 empirical studies (2019–2025) focusing on technology complexity, pedagogical approaches, and learning outcomes. Random-effects meta-analysis of 12 representative studies reveals large positive effects: Hedges’ g of 0.786 (95% CI: 0.726–0.846, p < 0.001) with homogeneous effects (I2 = 0.00%, p = 0.464), indicating robust generalizability. However, critical gaps emerged: only 7.7% employ actual industrial manipulators versus educational kits, adaptive control pedagogy remains limited, and fault-tolerant systems teaching receives minimal attention. Technology complexity analysis reveals clear progression from educational kits through semi-industrial platforms to industrial systems, with significant differential effects on transferable skills (r = 0.68, p < 0.001). This study proposes the ARC Framework integrating technology taxonomy, competency progression, pedagogical strategies, and assessment rubrics. Cost–effectiveness analysis demonstrates remote labs optimize impact-per-investment ratios ($45 vs. $280 per student), providing an evidence-based framework for technology transfer in engineering education. Full article
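
For readers who want the meta-analytic step made concrete, here is a minimal DerSimonian-Laird random-effects pooling sketch. The per-study effect sizes and variances below are invented for illustration, not the 12 studies analysed in the review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird estimator."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)           # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = 1.0 / (variances + tau2)                  # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Invented per-study Hedges' g values and variances for illustration only.
g = [0.70, 0.85, 0.78, 0.92, 0.74]
v = [0.02, 0.03, 0.025, 0.04, 0.02]
pooled, ci, i2 = dersimonian_laird(g, v)
print(f"pooled g = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), I2 = {i2:.1f}%")
```
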
29 pages, 14455 KB  
Review
Few-Shot Semantic Segmentation in Remote Sensing: A Review on Definitions, Methods, Datasets, Advances and Future Trends
by Marko Petrov, Ema Pandilova, Ivica Dimitrovski, Dimitar Trajanov, Vlatko Spasev and Ivan Kitanovski
Remote Sens. 2026, 18(4), 637; https://doi.org/10.3390/rs18040637 - 18 Feb 2026
Abstract
Semantic segmentation in remote sensing images, which is the task of classifying each pixel of the image in a specific category, is widely used in areas such as disaster management, environmental monitoring, precision agriculture, and many others. However, traditional semantic segmentation methods face a major challenge: they require large amounts of annotated data to train effectively. To tackle this challenge, few-shot semantic segmentation has been introduced, where the models can learn and adapt quickly to new classes from just a few annotated samples. This paper presents a comprehensive review of recent advances in few-shot semantic segmentation (FSSS) for remote sensing, covering datasets, methods, and emerging research directions. We first outline the fundamental principles of few-shot learning and summarize commonly used remote-sensing benchmarks, emphasizing their scale, geographic diversity, and relevance to episodic evaluation. Next, we categorize FSSS methods into major families (meta-learning, conditioning-based, and foundation-assisted approaches) and analyze how architectural choices, pretraining strategies, and inference protocols influence performance. The discussion highlights empirical trends across datasets, the behavior of different conditioning mechanisms, the impact of self-supervised and multimodal pretraining, and the role of reproducibility and evaluation design. Finally, we identify key challenges and future trends, including benchmark standardization, integration with foundation and multimodal models, efficiency at scale, and uncertainty-aware adaptation. Collectively, they signal a shift toward unified, adaptive models capable of segmenting novel classes across sensors, regions, and temporal domains with minimal supervision. Full article
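
As an illustration of the conditioning-based family mentioned above, here is a minimal one-shot segmentation sketch: masked average pooling builds a class prototype from the support image, and cosine similarity scores query pixels against it. The random tensors stand in for backbone features; this is a generic sketch, not any specific reviewed method.

```python
import torch
import torch.nn.functional as F

def masked_average_prototype(support_feat, support_mask):
    """support_feat: (C, H, W) features; support_mask: (H, W) binary mask."""
    mask = support_mask.unsqueeze(0)                       # (1, H, W)
    proto = (support_feat * mask).sum(dim=(1, 2)) / mask.sum().clamp(min=1)
    return proto                                           # (C,)

def predict_query(query_feat, prototype, threshold=0.5):
    """Cosine similarity between each query pixel's feature and the prototype."""
    c, h, w = query_feat.shape
    q = F.normalize(query_feat.reshape(c, -1), dim=0)      # (C, H*W)
    p = F.normalize(prototype, dim=0).unsqueeze(1)         # (C, 1)
    sim = (q * p).sum(dim=0).reshape(h, w)                 # (H, W) in [-1, 1]
    return (sim > threshold).float(), sim

# Toy episode: random "features" stand in for a backbone's output.
torch.manual_seed(0)
support_feat = torch.randn(16, 32, 32)
support_mask = (torch.rand(32, 32) > 0.7).float()
query_feat = torch.randn(16, 32, 32)

proto = masked_average_prototype(support_feat, support_mask)
pred_mask, similarity = predict_query(query_feat, proto)
print(pred_mask.shape, similarity.mean().item())
```
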
40 pages, 8354 KB  
Article
System-Level Optimization of AUV Swarm Control and Perception: An Energy-Aware Federated Meta-Transfer Learning Framework with Digital Twin Validation
by Zinan Nie, Hongjun Tian, Yijie Yin, Yuhan Zhou, Wei Li, Yang Xiong, Yichen Wang, Zitong Zhang, Yang Yang, Dongxiao Xie, Manlin Wang and Shijie Huang
J. Mar. Sci. Eng. 2026, 14(4), 384; https://doi.org/10.3390/jmse14040384 - 18 Feb 2026
Abstract
Deep-sea exploration increasingly relies on Autonomous Underwater Vehicles (AUVs) to enable persistent, wide-area surveying in harsh and uncertain environments. In practice, however, deployments are constrained by tight energy budgets and bandwidth-limited, intermittent acoustic links, which complicate mission-level coordination. Moreover, many existing systems treat perception and control as loosely coupled modules, often resulting in redundant sensing, inefficient communication, and degraded overall performance—particularly under heterogeneous sensing modalities and shifting geological conditions. To address these challenges, we propose a hierarchical Federated Meta-Transfer Learning (FMTL) framework that tightly integrates collaborative perception with adaptive control for swarm optimization. The framework operates at three levels: (1) Representation Learning aligns heterogeneous sensors in a shared latent space via a physics-informed contrastive objective, substantially reducing communication overhead; (2) Meta-Learning Adaptation enables rapid transfer and convergence in new environments with minimal data exchange; and (3) Energy-Aware Control realizes closed-loop exploration by coupling Federated Explainable AI (FXAI) with decentralized multi-agent reinforcement learning (MARL) for path planning under energy constraints. Validated in high-fidelity hardware-in-the-loop simulations and a digital-twin environment, FMTL outperforms state-of-the-art baselines, achieving an AUC of 0.94 for target identification. Furthermore, an energy–intelligence Pareto analysis demonstrates a 4.5× improvement in information gain per Joule. Overall, this work provides a physically consistent and communication-efficient blueprint for the optimization and control of next-generation intelligent marine swarms. Full article
(This article belongs to the Special Issue System Optimization and Control of Unmanned Marine Vehicles)
27 pages, 5156 KB  
Article
Mapping Forest Canopy Height via Self-Attention Multisource Feature Fusion and a Blending-Based Heterogeneous Ensemble Model
by Jing Tian, Pinghao Zhang, Pinliang Dong, Wei Shan, Ying Guo, Dan Li, Qiang Wang and Xiaodan Mei
Remote Sens. 2026, 18(4), 633; https://doi.org/10.3390/rs18040633 - 18 Feb 2026
Abstract
The accuracy of forest canopy height estimation is crucial for forest resource management and ecosystem carbon sequestration. However, existing approaches often face limitations in effectively integrating multisource remote sensing data, feature representation, and model learning strategies. To enhance the prediction performance of the model in complex terrain and multisource data environments, this study comprehensively used ICESat-2/ATLAS photon point clouds, Sentinel-2/MSI multispectral imagery, and SRTM-DEM to construct a remote sensing-driven multisource feature system, which eliminated redundant interference using permutation feature importance analysis. Additionally, a self-attention (SA) mechanism was introduced to strengthen high-dimensional feature representation. Three heterogeneous models, incorporating deep neural network (DNN), extreme gradient boosting (XGBoost), and residual network (ResNet), were independently applied for forest canopy height estimation and were further used as base learners, with a random forest as the meta-learner, and an SA-Blending heterogeneous ensemble model that combines a blending technique with an SA mechanism was proposed to enhance the accuracy of forest canopy height estimation. To evaluate the SA optimization strategy and the role of multisource fusion, this study used the original features, SA-optimized features, and multisource fusion features (i.e., the concatenation and fusion of original features and self-attention mechanism features) as inputs to comprehensively compare the performance of each single model and the integrated model. The results show that: (1) The self-attention mechanism significantly improves the prediction performance of heterogeneous models. Compared with original features inputs, the R2 of DNN (SA-Only) and XGBoost (SA-Only) increased to 0.706 and 0.708, respectively, and the RMSE decreased to 1.691 m and 1.613 m. Although the R2 for ResNet (SA-Only) decreased slightly to 0.699 and the RMSE increased to 1.712 m, the overall impact was not significant. (2) Under the condition of multisource fusion feature input, DNN+SA, XGBoost+SA, and ResNet+SA all demonstrated higher fitting accuracy and stability, verifying the enhancing effect of the SA mechanism on the association expression of multisource information. (3) The SA-Blending model achieved the best overall performance, with R2 of 0.766 and RMSE of 1.510 m. It outperformed individual models and the SA-optimized model in terms of overall accuracy, stability, and robustness. The results can provide technical support for high-precision forest canopy height mapping and are of great significance for ecological monitoring applications. Full article
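
A minimal sketch of the blending scheme described above: base learners are fit on a training split, their predictions on a held-out blend split become meta-features, and a random-forest meta-learner is fit on those predictions. Scikit-learn models stand in for the paper's DNN, XGBoost, and ResNet base learners, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the fused feature set; the target stands in for
# canopy height. Base learners below are sklearn substitutes for the paper's
# DNN, XGBoost, and ResNet models.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 10))
y = X[:, 0] * 1.5 + np.sin(X[:, 1]) + 0.3 * rng.normal(size=2000)

X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.5, random_state=2)
X_blend, X_test, y_blend, y_test = train_test_split(X_hold, y_hold, test_size=0.5, random_state=2)

base_models = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=2),
    GradientBoostingRegressor(random_state=2),
    RandomForestRegressor(n_estimators=100, random_state=2),
]
for m in base_models:
    m.fit(X_train, y_train)

# Meta-features: each base learner's predictions on the held-out blend split.
meta_train = np.column_stack([m.predict(X_blend) for m in base_models])
meta_test = np.column_stack([m.predict(X_test) for m in base_models])

meta_learner = RandomForestRegressor(n_estimators=200, random_state=2)
meta_learner.fit(meta_train, y_blend)
print("blended R2:", r2_score(y_test, meta_learner.predict(meta_test)))
```

Unlike stacking, blending uses a single held-out split rather than cross-validated predictions for the meta-features, trading some data efficiency for simplicity.
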
25 pages, 347 KB  
Review
Decarbonizing Freight Through Intermodal Transport: An Operations Research Perspective—Part I: Methodological Foundations and Model-Driven Insights
by Madelaine Martinez-Ferguson, Aliza Sharmin, Mustafa Can Camur and Xueping Li
Future Transp. 2026, 6(1), 49; https://doi.org/10.3390/futuretransp6010049 - 16 Feb 2026
Abstract
Intermodal transportation (IMT) has long been recognized as a key strategy for decarbonizing freight transportation (FT), which is one of the most polluting sectors worldwide. While IMT has been extensively examined using operations research (OR) methods, the integration of decarbonization objectives has only recently gained momentum. Despite this growing interest, to the best of our knowledge, no prior comprehensive review has systematically synthesized OR methodologies specifically addressing IMT decarbonization. To address this gap, we conduct a systematic literature review of OR studies on IMT decarbonization and organize the survey into two complementary parts. Part I focuses on the methodological foundations of OR applications in IMT decarbonization. We classify studies by problem type and OR technique, analyzing modeling characteristics, solution approaches, and uncertainty treatment. Our analysis reveals that exact methods dominate the literature (41% of studies), while meta-heuristics are growing rapidly, with 50% of such studies published in recent years. Approximately 20% of studies incorporate uncertainty, and they are predominantly demand-focused. We identify critical research gaps including limited multistage stochastic frameworks to capture cascading uncertainties, insufficient attention to terminal operations and network reliability, and the underutilization of emerging technologies such as reinforcement learning and digital twins. This systematic synthesis establishes the current state of OR methodologies in IMT decarbonization and provides a foundation for future innovations in sustainable freight systems. Full article
25 pages, 2523 KB  
Article
Link Prediction in Heterogeneous Information Networks: Improved Hypergraph Convolution with Adaptive Soft Voting
by Sheng Zhang, Yuyuan Huang, Ziqiang Luo, Jiangnan Zhou, Bing Wu, Ka Sun and Hongmei Mao
Entropy 2026, 28(2), 230; https://doi.org/10.3390/e28020230 - 16 Feb 2026
Abstract
Complex real-world systems are often modeled as heterogeneous information networks (HINs) with diverse node and relation types, bringing new opportunities and challenges to link prediction. Traditional methods based on similarity or meta-paths fail to fully capture high-order structures and semantics, while existing hypergraph-based models homogenize all high-order information without considering differences in its importance, diluting core associations with redundant noise and limiting prediction accuracy. Given these issues, we propose VE-HGCN, a link prediction model for HINs that fuses hypergraph convolution with a soft-voting ensemble strategy. The model first constructs multiple heterogeneous hypergraphs from HINs via frequent subgraph pattern extraction, then leverages hypergraph convolution for node representation learning, and finally employs a soft-voting ensemble strategy to fuse multi-model prediction results. Extensive experiments on four public HIN datasets show that VE-HGCN outperforms seven mainstream baseline models, thereby validating the effectiveness of the proposed method. This study offers a new perspective for link prediction in HINs and exhibits good generality and practicality, providing a feasible reference for addressing high-order information utilization issues in complex heterogeneous network analysis. Full article
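
A minimal sketch of the soft-voting step named above: per-model link probabilities are averaged, optionally with weights, and thresholded. The probabilities and weights are invented rather than outputs of VE-HGCN, where the weights would be adapted from validation performance.

```python
import numpy as np

# Predicted probabilities that each candidate link exists, from three models
# (e.g., hypergraph convolution runs over different hypergraphs). Invented values.
probs = np.array([
    [0.90, 0.20, 0.55],   # model 1
    [0.80, 0.35, 0.40],   # model 2
    [0.85, 0.10, 0.70],   # model 3
])

def soft_vote(probs: np.ndarray, weights=None, threshold: float = 0.5):
    """Weighted average of per-model probabilities, then threshold."""
    avg = np.average(probs, axis=0, weights=weights)
    return avg, (avg >= threshold).astype(int)

# Adaptive weights would normally be learned from validation performance;
# here they are fixed for illustration.
avg, links = soft_vote(probs, weights=[0.5, 0.2, 0.3])
print(avg, links)   # [0.865 0.2 0.565] -> links [1 0 1]
```
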
26 pages, 2078 KB  
Article
Adversarial Distributed Multi-Task Meta-Inverse Reinforcement Learning with Theory of Mind and Mean-Field Method
by Li Song, Kun Yang and Chao Chen
Mathematics 2026, 14(4), 691; https://doi.org/10.3390/math14040691 - 15 Feb 2026
Abstract
Maximum entropy adversarial inverse reinforcement learning (ME-AIRL) has garnered widespread attention for its ability to learn rewards and optimize policies from expert demonstrations. In complex multi-task environments, applying meta-learning ME-AIRL to acquire rewards requires a substantial volume of homogeneous expert demonstrations across all tasks, which is often impractical in real-world scenarios. Moreover, interference between tasks further escalates computational complexity. To address these challenges, this paper proposes a distributed multi-task meta ME-AIRL framework based on theory of mind and mean field, referred to as TMMF-MTAIRL. In TMMF-MTAIRL, the theory of mind is used to capture the relationships and representational information among multiple tasks. Furthermore, TMMF-MTAIRL integrates mean-field theory to transform interactions between complex tasks into interactions between the main task and the average of the remaining tasks. In addition, extra latent variables are introduced to enhance adaptation to novel tasks. We evaluate the proposed TMMF-MTAIRL on point-maze benchmarks and a real-world rolling bearing fault diagnosis dataset using metrics such as classification accuracy and mean or cumulative rewards. TMMF-MTAIRL achieves the best performance across all tasks, with an average improvement of 0.16 in fault-classification accuracy over the strongest baseline. Full article
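
A minimal sketch of the mean-field simplification described above: rather than modelling pairwise interactions among all tasks, each task interacts only with the average representation of the remaining tasks. The embeddings are toy values, not the TMMF-MTAIRL implementation.

```python
import numpy as np

# Toy task embeddings: one row per task (in the paper these would come from a
# theory-of-mind encoder; here they are random vectors for illustration).
rng = np.random.default_rng(3)
task_embeddings = rng.normal(size=(5, 8))   # 5 tasks, 8-dim representations

def mean_field_context(embeddings: np.ndarray) -> np.ndarray:
    """For each task i, the mean embedding of all tasks j != i."""
    n = embeddings.shape[0]
    total = embeddings.sum(axis=0, keepdims=True)
    return (total - embeddings) / (n - 1)

context = mean_field_context(task_embeddings)
# Each task's interaction input is its own embedding paired with the mean of
# the others, reducing O(n^2) pairwise terms to O(n).
pairs = np.concatenate([task_embeddings, context], axis=1)
print(pairs.shape)   # (5, 16)
```
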
25 pages, 919 KB  
Article
Exploring How Holistic Teaching and Institutional Support Relate to Community College STEM Students’ Momentum and Self-Efficacy in Career-Relevant Competencies
by Xiwei Zhu, Xueli Wang and Aikebaier Nadila
Educ. Sci. 2026, 16(2), 317; https://doi.org/10.3390/educsci16020317 - 15 Feb 2026
Abstract
This study investigates how holistic teaching practices and institutional support at community colleges shape science, technology, engineering, and mathematics (STEM) students’ momentum and self-efficacy in career-relevant competencies. Using survey data from three community colleges, we apply structural equation modeling (SEM) to assess these relationships while accounting for institutional variation using multi-group analysis. Our findings demonstrate that holistic teaching practices are positively associated with students’ curricular, cognitive, and meta-cognitive momentum, indicating that integrated, supportive classroom instruction contributes to sustained engagement and self-regulated learning in STEM pathways. Holistic teaching practices also show a marginal positive relationship with career readiness self-efficacy and professional and interpersonal self-efficacy, with cognitive and meta-cognitive momentum mediating these associations. In contrast, institutional support is not related to students’ momentum but is positively associated with professional and interpersonal self-efficacy, which may point to its role in shaping broader skill development independent of short-term academic engagement. These findings suggest that holistic teaching practices and institutional support differentially contribute to students’ academic momentum and career-related self-efficacy, which highlights the importance of coordinated efforts across classroom and institutional levels within the broader STEM ecosystem in fostering both short-term engagement and long-term professional competencies among diverse community college STEM learners. Full article
(This article belongs to the Special Issue Creating Cultures and Structures of Opportunity in STEMM Ecosystems)