Search Results (695)

Search Parameters:
Keywords = multi-perspective learning

18 pages, 990 KB  
Perspective
From Network Governance to Real-World-Time Learning: A High-Reliability Operating Model for Rare Cancers
by Bruno Fuchs, Anna L. Falkowski, Ruben Jaeger, Barbara Kopf, Christian Rothermundt, Kim van Oudenaarde, Ralph Zacchariah, Philip Heesen, Georg Schelling and Gabriela Studer
Cancers 2026, 18(4), 643; https://doi.org/10.3390/cancers18040643 - 16 Feb 2026
Abstract
Background: Rare cancers combine low incidence with high biological heterogeneity and multi-institutional care trajectories. These features make single-center learning structurally incomplete and render pathway fragmentation a dominant driver of preventable harm, variability, and waste. In this context, care quality is best understood as a property of pathway integrity across routing, diagnostics (imaging/biopsy planning), multidisciplinary intent-setting, definitive treatment, and surveillance—rather than as a department-level attribute. Objective: To define a pragmatic, transferable operating blueprint for a rare-cancer Learning Health System (LHS) that turns routine care into continuous, auditable learning under explicit governance, while maintaining claims discipline and protecting measurement validity. Approach: We synthesize an implementation-oriented operating model using the Swiss Sarcoma Network (SSN) as an exemplar. The blueprint couples clinical governance (Integrated Practice Unit logic, hub-and-spoke routing, auditable multidisciplinary team decision systems) with an interoperable real-world-time data backbone designed for benchmarking, pathway mapping, and feedback. The operating logic is expressed as a closed-loop control cycle: capture → harmonize → benchmark → learn → implement → re-measure, with explicit owners, minimum requirements, and failure modes. Results/Blueprint: (i) The model specifies a minimal set of data primitives—time-stamped and traceable decision points covering baseline and tumor characteristics, pathway timing, treatment exposure, outcomes and complications, and feasible longitudinal PROMs and PREMs; (ii) a VBHC-ready, multi-domain measurement backbone spanning outcomes, harms, timeliness, function, process fidelity, and resource stewardship; and (iii) two non-negotiable validity guardrails: explicit applicability (“N/A”) rules and mandatory case-mix/complexity stratification. Implementation is treated as a governed step with defined workflow levers, fidelity criteria, balancing measures, and escalation thresholds to prevent “dashboard medicine” and surrogate-driven optimization. Conclusions: This perspective contributes an operating model—not a platform or single intervention—that enables credible improvement science and establishes prerequisites for downstream causal learning and minimum viable digital twins. By distinguishing enabling infrastructure from the governed clinical system as the primary intervention, the blueprint supports scalable, learnable excellence in rare-cancer care while protecting against gaming, inequity, and inference drift. Distinct from generic LHS or VBHC frameworks, this blueprint specifies validity gates required for rare-cancer benchmarking—explicit applicability (“N/A”) rules, denominator integrity/capture completeness disclosure, anti-gaming safeguards, and escalation governance. These elements are critical in rare cancers because small denominators, high heterogeneity, and multi-institutional pathways otherwise make benchmarking prone to artifacts and unsafe inferences. Full article

25 pages, 2523 KB  
Article
Link Prediction in Heterogeneous Information Networks: Improved Hypergraph Convolution with Adaptive Soft Voting
by Sheng Zhang, Yuyuan Huang, Ziqiang Luo, Jiangnan Zhou, Bing Wu, Ka Sun and Hongmei Mao
Entropy 2026, 28(2), 230; https://doi.org/10.3390/e28020230 - 16 Feb 2026
Abstract
Complex real-world systems are often modeled as heterogeneous information networks with diverse node and relation types, bringing new opportunities and challenges to link prediction. Traditional methods based on similarity or meta-paths fail to fully capture high-order structures and semantics, while existing hypergraph-based models homogenize all high-order information without considering their importance differences, diluting core associations with redundant noise and limiting prediction accuracy. Given these issues, we propose the VE-HGCN, a link prediction model for HINs that fuses hypergraph convolution with soft-voting ensemble strategy. The model first constructs multiple heterogeneous hypergraphs from HINs via network frequent subgraph pattern extraction, then leverages hypergraph convolution for node representation learning, and finally employs a soft-voting ensemble strategy to fuse multi-model prediction results. Extensive experiments on four public HIN datasets show that the VE-HGCN outperforms seven mainstream baseline models, thereby validating the effectiveness of the proposed method. This study offers a new perspective for link prediction in HINs and exhibits good generality and practicality, providing a feasible reference for addressing high-order information utilization issues in complex heterogeneous network analysis. Full article
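
The soft-voting step described above can be illustrated with a short sketch: several link predictors, each trained on a different hypergraph view, emit probabilities for the same candidate links, and the fused score is a weighted average. This is a generic weighted soft-voting sketch, not the authors' VE-HGCN code; the per-view probabilities and weights below are hypothetical.

```python
import numpy as np

def soft_vote(prob_matrix, weights=None):
    """Fuse link-existence probabilities from several models.

    prob_matrix: array of shape (n_models, n_candidate_links) with each
    model's predicted probability that a candidate link exists.
    weights: optional per-model weights (e.g., validation AUC); uniform if None.
    """
    probs = np.asarray(prob_matrix, dtype=float)
    if weights is None:
        weights = np.ones(probs.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the fused score stays in [0, 1]
    return w @ probs                     # weighted average over models

# Hypothetical outputs of three predictors trained on different hypergraph views.
p_view1 = [0.91, 0.12, 0.55]
p_view2 = [0.84, 0.20, 0.47]
p_view3 = [0.88, 0.05, 0.61]
fused = soft_vote([p_view1, p_view2, p_view3], weights=[0.9, 0.7, 0.8])
predicted_links = fused >= 0.5           # threshold the fused score
print(fused, predicted_links)
```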

16 pages, 440 KB  
Article
Signal Processing and Machine Learning for the Sustainability of the Italian Social Security System: Evidence from ISTAT Pension Data
by Gianfranco Piscopo, Chiara Marciano, Maria Longobardi and Massimiliano Giacalone
Mathematics 2026, 14(4), 690; https://doi.org/10.3390/math14040690 - 15 Feb 2026
Abstract
The long-run sustainability of pay-as-you-go pension systems crucially depends on the dynamic balance between social-security contributions paid by the working population and benefits paid to retirees. In Italy, the National Social Security Institute (INPS) manages the core of the public system, whose financial equilibrium is increasingly challenged by demographic aging, labor market fragility, and macroeconomic shocks. In this paper, in line with the aims of the Special Issue “Signal Processing and Machine Learning in Real-Life Processes”, we reinterpret the Italian pension system as a complex stochastic signal-processing problem. Using the most recent data published by ISTAT in the Annuario Statistico Italiano 2024—with a focus on Protection and Social Security—we construct a set of time series describing contributions, benefits, coverage ratios and pension amounts, both at the national and territorial level. On this basis, we compare classical time-series models and a recurrent neural network with Long Short-Term Memory (LSTM) architecture for multi-step forecasting of the main aggregates. The signal-processing perspective allows us to disentangle trend, cyclical and shock components, while machine learning provides flexible nonlinear forecasting tools capable of capturing structural breaks such as the COVID-19 crisis. Our empirical results suggest that (i) pension expenditure remains high and persistent as a share of GDP; (ii) the contribution coverage ratio improved in 2022 but remains below the pre-pandemic level; and (iii) regional heterogeneity in the per-capita pension deficit is substantial and stable over time, with persistent imbalances in Southern regions and Islands. Finally, we perform a scenario analysis combining LSTM-based forecasts with demographic and labor market hypotheses, and we quantify the impact of alternative policy measures on the future pension deficit signal. The proposed framework, which integrates permutation-based inference, signal decomposition and deep learning, provides a reproducible template for the real-time monitoring of pension sustainability using official open data. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning in Real-Life Processes)
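
As a rough illustration of the LSTM component described in the abstract, the sketch below trains a direct multi-step LSTM forecaster on a toy univariate series standing in for one pension aggregate. Window length, horizon, layer sizes, and the synthetic data are assumptions for illustration, not the authors' configuration or the ISTAT data.

```python
import torch
import torch.nn as nn

class MultiStepLSTM(nn.Module):
    """Map a window of past observations to a vector of future steps."""
    def __init__(self, n_features=1, hidden=32, horizon=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=1, batch_first=True)
        self.head = nn.Linear(hidden, horizon)    # direct multi-step output

    def forward(self, x):                          # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])            # use the last hidden state

# Toy univariate series standing in for an annual pension aggregate.
series = torch.linspace(0, 1, 60) + 0.05 * torch.randn(60)
window, horizon = 12, 5
X = torch.stack([series[i:i + window] for i in range(len(series) - window - horizon)])
Y = torch.stack([series[i + window:i + window + horizon]
                 for i in range(len(series) - window - horizon)])
model = MultiStepLSTM(horizon=horizon)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                               # short training loop for illustration
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X.unsqueeze(-1)), Y)
    loss.backward()
    opt.step()
forecast = model(series[-window:].reshape(1, window, 1))   # next `horizon` steps
```
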
22 pages, 7987 KB  
Article
RioCC: Efficient and Accurate Class-Level Code Recommendation Based on Deep Code Clone Detection
by Hongcan Gao, Chenkai Guo and Hui Yang
Entropy 2026, 28(2), 223; https://doi.org/10.3390/e28020223 - 14 Feb 2026
Abstract
Context: Code recommendation plays an important role in improving programming efficiency and software quality. Existing approaches mainly focus on method- or API-level recommendations, which limits their effectiveness to local code contexts. From a multi-stage recommendation perspective, class-level code recommendation aims to efficiently narrow a large candidate code space while preserving essential structural information. Objective: This paper proposes RioCC, a class-level code recommendation framework that leverages deep forest-based code clone detection to progressively reduce the candidate space and improve recommendation efficiency in large-scale code spaces. Method: RioCC models the recommendation process as a coarse-to-fine candidate reduction procedure. In the coarse-grained stage, a quick search-based filtering module performs rapid candidate screening and initial similarity estimation, effectively pruning irrelevant candidates and narrowing the search space. In the fine-grained stage, a deep forest-based analysis with cascade learning and multi-grained scanning captures context- and structure-aware representations of class-level code fragments, enabling accurate similarity assessment and recommendation. This two-stage design explicitly separates coarse candidate filtering from detailed semantic matching to balance efficiency and accuracy. Results: Experiments on a large-scale dataset containing 192,000 clone pairs from BigCloneBench and a collected code pool show that RioCC consistently outperforms state-of-the-art methods, including CCLearner, Oreo, and RSharer, across four types of code clones, while significantly accelerating the recommendation process with comparable detection accuracy. Conclusions: By explicitly formulating class-level code recommendation as a staged retrieval and refinement problem, RioCC provides an efficient and scalable solution for large-scale code recommendation and demonstrates the practical value of integrating lightweight filtering with deep forest-based learning. Full article
(This article belongs to the Section Multidisciplinary Applications)
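
The coarse-to-fine retrieval idea can be sketched generically: a cheap lexical filter prunes the candidate pool, and only the survivors are scored by a heavier learned model. In the sketch below, token Jaccard similarity plays the coarse role and a scikit-learn RandomForest stands in for the deep-forest stage; the pairwise features, training pairs, and code snippets are hypothetical, and this is not the RioCC implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def jaccard(a, b):
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def recommend(query_code, pool, reranker, coarse_k=50, top_n=5):
    """Coarse filter by token overlap, then rerank survivors with a learned model."""
    scored = sorted(pool, key=lambda c: jaccard(query_code, c), reverse=True)[:coarse_k]
    feats = np.array([[jaccard(query_code, c), abs(len(query_code) - len(c))]
                      for c in scored])                      # toy pairwise features
    probs = reranker.predict_proba(feats)[:, 1]              # P(relevant clone)
    order = np.argsort(-probs)[:top_n]
    return [scored[i] for i in order]

# Hypothetical training data: pairwise features with clone / non-clone labels.
X_train = np.array([[0.9, 3], [0.8, 10], [0.2, 40], [0.1, 80], [0.7, 5], [0.05, 60]])
y_train = np.array([1, 1, 0, 0, 1, 0])
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

pool = ["class Stack { void push(int x) {} int pop() {} }",
        "class Queue { void enqueue(int x) {} int dequeue() {} }",
        "class MyStack { void push(int v) {} int pop() {} }"]
print(recommend("class Stack { void push(int item) {} int pop() {} }", pool, rf))
```
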
53 pages, 3028 KB  
Review
Optimization and Machine Learning for Electric Vehicles Management in Distribution Networks: A Review
by Stefania Conti, Giovanni Aiello, Salvatore Coco, Antonino Laudani, Santi Agatino Rizzo, Nunzio Salerno, Giuseppe Marco Tina and Cristina Ventura
Energies 2026, 19(4), 986; https://doi.org/10.3390/en19040986 - 13 Feb 2026
Abstract
The growing penetration of Electric Vehicles (EVs) in power distribution networks presents both challenges and opportunities for grid operators and planners. This paper provides a comprehensive review of recent advances in optimization techniques and machine learning (ML) approaches for the efficient management of EV charging and integration in low- and medium-voltage distribution systems. Optimization methods are analyzed with reference to their objectives—such as load flattening, voltage regulation, loss minimization, and infrastructure cost reduction—and their capability to handle multi-objective, stochastic, and real-time constraints. Concurrently, the role of ML is explored in load forecasting, user behavior modeling, anomaly detection, and adaptive control strategies. Particular attention is given to hybrid approaches that combine optimization algorithms (e.g., MILP, heuristic methods) with data-driven models (e.g., neural networks, reinforcement learning), highlighting their effectiveness in enhancing grid flexibility and resilience. This review adopts a unified system-level perspective that links EV management objectives, optimization techniques, and machine learning-based solutions within distribution networks. In addition, particular attention is devoted to data availability, reproducibility, and practical deployment aspects, with the aim of identifying current limitations and providing actionable insights for future research and real-world applications. This study aims to support the development of intelligent energy management strategies for EVs, fostering a sustainable and reliable evolution of distribution networks. Full article

22 pages, 645 KB  
Article
The Responsive Teacher Formation Framework (RTFF): Towards Teacher Belonging, Wellbeing, Autonomy and Agency in Primary Education
by Eliza Cachia, Ann Marie Cassar, Melanie Darmanin, Shirley Ann Gauci and Heathcliff Schembri
Educ. Sci. 2026, 16(2), 304; https://doi.org/10.3390/educsci16020304 - 13 Feb 2026
Abstract
Teacher education systems globally experience a gap in implementation between policy aspirations and everyday enactment, with implications for initial teacher education (ITE), the quality of practicums, professional identity, and teacher recruitment and retention. Situated in Malta’s superdiverse context and informed by international debates on professional capital, care ethics, inclusion, and ecological conceptions of agency, this article introduces the Responsive Teacher Formation Framework (RTFF). This original, theoretically integrated, and empirically grounded framework foregrounds four interdependent pillars of professional formation: belonging, wellbeing, autonomy and agency. Drawing on a two-year, multi-strand national inquiry synthesising perspectives from children, families, newly qualified teachers, learning support educators, and school leaders, we integrated artefact-elicitation, focus groups, interviews, and questionnaires using reflexive thematic analysis and cross-strand configurational synthesis. Through a meta-synthesis convergence of the different strands of the study, recurrent tensions surface, including procedural versus lived belonging; attention versus neglect of wellbeing; nominal autonomy versus fragile system supports and policy endorsement versus constrained agency. The findings demonstrate how these complexities are experienced across the ITE–school interface. We argue that the RTFF offers a coherent and tractable syntax for ITE programme (re)design that is both theoretically robust and practically adaptable, diagnostically sensitive to local context, and implementable at scale. The model contributes to international discourse by linking fragmented debates on these four pillars into a responsive framework of, and for, teacher formation. Beyond the Maltese case, the RTFF offers an adaptable orientation for superdiverse settings seeking to transition from compliance-driven quality assurance to formation-centred professional excellence. The article concludes by outlining how the RTFF can anchor more integrated and sustainable policy, as well as nurture professional learning communities, thereby advancing the transformation of teacher education for academic excellence. Full article
(This article belongs to the Special Issue Transforming Teacher Education for Academic Excellence)

26 pages, 6581 KB  
Article
FWinFormer: A Frequency-Domain Deep Learning Framework for 3D Ocean Subsurface Temperature Prediction
by Juntong Wu, Miao Hu, Xiulin Geng and Xun Zhang
Remote Sens. 2026, 18(4), 575; https://doi.org/10.3390/rs18040575 - 12 Feb 2026
Abstract
Subsurface temperature is an important parameter for characterizing oceanic physical processes, and accurate prediction of subsurface temperature is essential for understanding oceanic changes. Existing methods primarily focus on spatial modeling but offer limited characterization of the spatiotemporal structure and frequency features of sea temperature. They also suffer from restricted receptive fields and limited ability to model long-term dependencies. In this study, we propose a deep learning model named Fourier Window Transformer (FWinFormer), which integrates frequency-domain modeling to predict the three-dimensional subsurface temperature over the next 24 days. The model incorporates both temporal and frequency characteristics to enhance prediction accuracy. It consists of three modules: a Spatial Block Encoder, a Translator, and a Spatial Block Decoder. The spatial encoding and decoding modules are designed to extract spatial features, while the Translator models multi-scale temporal features based on the features extracted by the encoding and decoding modules. The input consists of 24 days of historical satellite observations, including sea-surface temperature (SST), salinity (SSS), eastward velocity (SSU), northward velocity (SSV) and height (SSH). We compared the model predictions with reanalysis data and evaluated performance from the perspectives of temporal evolution, spatial distribution, and vertical structure. Additionally, we validated the predicted temperatures against in situ observations. The results show that the model achieves strong and consistent performance across various temporal scales and spatial regions, with MAE, RMSE, and R2 values of 0.529, 0.785, and 0.994, respectively, for the 24-day average prediction. Full article
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing (Second Edition))
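
One ingredient the abstract highlights is the fusion of temporal and frequency-domain information. A minimal way to illustrate that idea is to append the FFT amplitude spectrum of each input variable to the raw temporal window before an encoder, as sketched below; the tiny Transformer encoder, tensor shapes, and 24-day horizon are assumptions for illustration and do not reproduce the FWinFormer architecture.

```python
import torch
import torch.nn as nn

def add_frequency_features(x):
    """x: (batch, time, vars) daily surface fields such as SST/SSS/SSH at one grid cell.
    Returns the window concatenated with its per-variable FFT amplitude spectrum."""
    amp = torch.fft.rfft(x, dim=1).abs()           # (batch, time//2 + 1, vars)
    return torch.cat([x, amp], dim=1)              # stack along the time axis

class TinyEncoder(nn.Module):
    """Minimal stand-in for an encoder operating on the fused features."""
    def __init__(self, n_vars=5, hidden=64, horizon=24):
        super().__init__()
        self.proj = nn.Linear(n_vars, hidden)
        self.mix = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True),
            num_layers=1)
        self.head = nn.Linear(hidden, horizon)      # predict `horizon` future days

    def forward(self, x):
        h = self.mix(self.proj(add_frequency_features(x)))
        return self.head(h.mean(dim=1))             # (batch, horizon)

x = torch.randn(8, 24, 5)                           # 24 days of 5 surface variables
print(TinyEncoder()(x).shape)                       # torch.Size([8, 24])
```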

17 pages, 3204 KB  
Article
A Transferable Digital Twin-Driven Process Design Framework for High-Performance Multi-Jet Polishing
by Honglei Mo, Xie Chen, Lingxi Guo, Zili Zhang, Xiao Chen, Jianning Chu and Ruoxin Wang
Micromachines 2026, 17(2), 226; https://doi.org/10.3390/mi17020226 - 10 Feb 2026
Abstract
The multi-jet polishing process (MJP) demonstrates high shape accuracy and surface quality in the machining of nonlinear and complex surfaces, and it achieves precise and adjustable material removal rates through computer control. However, there are still challenges in terms of machining efficiency, system complexity, and stability. In particular, maintaining the polishing quality presents a greater challenge when working conditions change. To overcome these issues, this paper conceptually proposes a digital twin (DT)-driven, human-centric design framework that integrates key factors of MJP, such as jet kinetic energy, nozzle structure, abrasive type, and machining path. Within this framework, a feature-encoded transfer learning-based model is introduced to enhance surface roughness prediction accuracy and robustness under varying working conditions. The effectiveness of the proposed model was verified by conducting experiments on 3D printed workpieces under two different MJP working conditions. The results show that our proposed method yields better predictive performance and cross-condition adaptability. Overall, this work provides a predictive modeling component that supports DT-driven process design, offering a practical and extensible perspective for optimizing complex ultra-precision manufacturing processes under data-scarce and uncertainty-dominated conditions. Full article
(This article belongs to the Special Issue Future Trends in Ultra-Precision Machining)
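
The cross-condition transfer idea can be sketched as pretraining a small regressor on abundant data from one polishing condition, then freezing the shared feature layers and fine-tuning only the head on the few samples available from a new condition. The sketch below uses synthetic data and an assumed four-parameter input; it is a generic freeze-and-fine-tune illustration, not the authors' feature-encoded transfer model.

```python
import torch
import torch.nn as nn

class RoughnessRegressor(nn.Module):
    """Shared feature extractor + small head predicting surface roughness (Ra)."""
    def __init__(self, n_inputs=4):   # e.g. pressure, feed rate, abrasive size, stand-off
        super().__init__()
        self.features = nn.Sequential(nn.Linear(n_inputs, 32), nn.ReLU(),
                                      nn.Linear(32, 16), nn.ReLU())
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        return self.head(self.features(x))

def fit(model, X, y, params, epochs=300, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()

# Source condition: plenty of (synthetic) process-parameter / Ra pairs.
Xs, ys = torch.randn(200, 4), torch.randn(200, 1)
model = RoughnessRegressor()
fit(model, Xs, ys, model.parameters())

# Target condition: only a handful of samples; adapt the head only.
Xt, yt = torch.randn(12, 4), torch.randn(12, 1)
for p in model.features.parameters():
    p.requires_grad = False                          # freeze shared layers
fit(model, Xt, yt, model.head.parameters(), epochs=100)
```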

34 pages, 7022 KB  
Article
Quantitative Perceptual Analysis of Feature-Space Scenarios in Network Media Evaluation Using Transformer-Based Deep Learning: A Case Study of Fuwen Township Primary School in China
by Yixin Liu, Zhimin Li, Lin Luo, Simin Wang, Ruqin Wang, Ruonan Wu, Dingchang Xia, Sirui Cheng, Zejing Zou, Xuanlin Li, Yujia Liu and Yingtao Qi
Buildings 2026, 16(4), 714; https://doi.org/10.3390/buildings16040714 - 9 Feb 2026
Abstract
Against the dual backdrop of the rural revitalization strategy and the pursuit of high-quality, balanced urban–rural education, optimizing rural campus spaces has emerged as an important lever for addressing educational resource disparities and improving pedagogical quality. However, conventional evaluation of campus space optimization faces two systemic dilemmas. First, top-down decision-making often neglects the authentic needs of diverse stakeholders and place-based knowledge, resulting in spatial interventions that lose regional distinctiveness. Second, routine public participation is constrained by geographical barriers, time costs, and sample-size limitations, which can amplify professional cognitive bias and impede comprehensive feedback formation. The compounded effect of these challenges contributes to a disconnect between spatial optimization outcomes and perceived needs, thereby constraining the distinctive development of rural educational spaces. To address these constraints, this study proposes a novel method that integrates regional spatial feature recognition with digital media-based public perception assessment. At the data collection and ethical governance level, the study strictly adheres to platform compliance and academic ethics. A total of 12,800 preliminary comments were scraped from major social media platforms (e.g., Douyin, Dianping, and Xiaohongshu) and processed through a three-stage screening workflow—keyword screening–rule-based filtering–manual verification—to yield 8616 valid records covering diverse public groups across China. All user-identifying information was fully anonymized to ensure lawful use and privacy protection. At the analytical modeling level, we develop a Transformer-based deep learning system that leverages multi-head attention mechanisms to capture implicit spatial-sentiment features and metaphorical expressions embedded in review texts. Evaluation on an independent test set indicates a classification accuracy of 89.2%, aligning with balanced and stable scoring performance. Robustness is further strengthened by introducing an equal-weight alternative strategy and conducting stability checks to indicate the consistency of model outputs across weighting assumptions. At the scenario interpretation level, we combine grounded-theory coding with semantic network analysis to establish a three-tier spatial analysis framework—macro (landscape pattern/hydro-topological patterns), meso (architectural interface), and micro (teaching scenes/pedagogical scenarios)—and incorporate an interpretive stakeholder typology (tourists, residents, parents, and professional groups) to systematically identify and quantify key features shaping public spatial perception. Findings show that, at the macro level, naturally integrated scenarios—such as “campus–farmland integration” and “mountain–water embeddedness”—exhibit high affective association, aligning with the “mountain-water-field-village” spatial sequence logic and suggesting broad public endorsement of ecological campus concepts, whereas vernacular settlement-pattern scenarios receive relatively low attention due to cognitive discontinuities. At the meso level, innovative corridor strategies (e.g., framed vistas and expanded corridor spaces) strengthen the building–nature interaction and suggest latent value in stimulating exploratory spatial experience. 
At the micro level, place-based practice-oriented teaching scenes (e.g., intangible cultural heritage handcraft and creative workshops) achieve higher scores, aligning with the compatibility of vernacular education’s “differential esthetics,” while urban convergence-oriented interdisciplinary curriculum scenes suggest an interpretive gap relative to public expectations. These results indicate an embedded relationship between public perception and regional spatial features, which is further shaped by a multi-actor governance process—characterized by “Government + Influencers + Field Study”—that mediates how rural educational spaces are produced, communicated, and interpreted in digital environments. The study’s innovative value lies in integrating sociological theories (e.g., embeddedness) with deep learning techniques to fill the regional and multi-actor perspective gap in rural campus POE and to promote a methodological shift from “experience-based induction” toward a “data-theory” dual-drive model. The findings provide inferential evidence for rural campus renewal and optimization; the methodological pipeline is transferable to small-scale rural primary schools with media exposure and salient regional ecological characteristics, and it offers a new pathway for incorporating digital media-driven public perception feedback into planning and design practice. The research methodology of this study consists of four sequential stages, which are implemented in a systematic and progressive manner: First, data collection was conducted: Python and the Octopus Collector were used to crawl online comment data related to Fuwen Township Central Primary School, strictly complying with the user agreements of the Douyin, Dianping, and Xiaohongshu platforms. Second, semantic preprocessing was performed: The evaluation content was segmented to generate word frequency statistics and semantic networks; qualitative analysis was conducted using Origin software, and quantitative translation was realized via Sankey diagrams. Third, spatial scene coding was carried out: Combined with a spatial characteristic identification system, a macro–meso–micro three-tier classification system for spatial scene characteristics was constructed to encode and quantitatively express the textual content. Finally, sentiment quantification and correlation analysis was implemented: A deep learning model based on the Transformer framework was employed to perform sentiment quantification scoring for each comment; Sankey diagrams were used to quantitatively correlate spatial scenes with sentiment tendencies, thereby exploring the public’s perceptual associations with the architectural spatial environment of rural campuses. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
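
The sentiment-quantification step can be illustrated with a small Transformer-encoder text classifier: tokenized comments are embedded, passed through multi-head self-attention layers, mean-pooled over non-padding tokens, and mapped to sentiment classes. The vocabulary size, dimensions, and token ids below are toy assumptions; this is not the authors' trained model or data pipeline.

```python
import torch
import torch.nn as nn

class CommentSentiment(nn.Module):
    """Embedding -> Transformer encoder (multi-head self-attention) -> sentiment logits."""
    def __init__(self, vocab_size=5000, d_model=64, n_heads=4, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(d_model, n_classes)   # negative / neutral / positive

    def forward(self, token_ids):                          # (batch, seq_len)
        mask = token_ids.eq(0)                             # True where padding
        h = self.encoder(self.embed(token_ids), src_key_padding_mask=mask)
        h = h.masked_fill(mask.unsqueeze(-1), 0.0)
        pooled = h.sum(dim=1) / (~mask).sum(dim=1, keepdim=True).clamp(min=1)
        return self.classifier(pooled)

# Two toy comments encoded with a hypothetical tokenizer (0 = padding).
batch = torch.tensor([[12, 87, 409, 3, 0, 0],
                      [55,  9, 120, 77, 31, 6]])
logits = CommentSentiment()(batch)
scores = logits.softmax(dim=-1)                            # per-class sentiment probabilities
```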

25 pages, 6669 KB  
Article
G-CMTF Net: Spectro-Temporal Disentanglement and Reliability-Aware Gated Cross-Modal Temporal Fusion for Robust PSG Sleep Staging
by Jiongyao Ye and Pengfei Li
Symmetry 2026, 18(2), 316; https://doi.org/10.3390/sym18020316 - 9 Feb 2026
Abstract
Automatic sleep staging from polysomnography is challenged by marked spectro-temporal heterogeneity and non-stationary cross-channel artifacts, which often undermine naïve multimodal fusion. To address this, a Gated Cross-Modal and Temporal Fusion Network (G-CMTF Net) is proposed as an end-to-end model operating on 30 s EEG epochs and auxiliary EOG and EMG signals, in which cross-modal contributions are regulated through reliability-aware gating. A spectro-temporal disentanglement frontend learns multi-scale temporal features while incorporating FFT-derived band-power embeddings to preserve physiologically meaningful oscillatory cues. At the epoch level, gated fusion suppresses artifact-prone auxiliary inputs, thereby limiting noise transfer into a shared latent space. Long-range sleep dynamics are modeled via a convolution-augmented self-attention encoder that captures both local morphology and transition structure. On Sleep-EDF-20 and Sleep-EDF-78, G-CMTF Net achieves Macro-F1/ACC of 81.3%/85.5% and 78.2%/83.4%, respectively, while maintaining high sensitivity and geometric-mean performance on transitional epochs, consistent with the function of reliability-aware gated fusion under non-stationary auxiliary artifacts. From a symmetry perspective, the proposed framework enforces a structured balance between heterogeneous modalities by promoting representational consistency while adaptively suppressing asymmetric noise contributions. Full article
(This article belongs to the Section Computer)
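
The reliability-aware gating idea, in which artifact-prone auxiliary channels are down-weighted before fusion, can be sketched as a small gate network that maps each modality embedding to a scalar weight in [0, 1] and forms a weighted average. Embedding sizes and the gate architecture below are assumptions; this is a generic gated-fusion sketch, not the G-CMTF Net implementation.

```python
import torch
import torch.nn as nn

class GatedModalityFusion(nn.Module):
    """Fuse per-epoch EEG/EOG/EMG embeddings with learned reliability gates."""
    def __init__(self, dim=128, n_modalities=3):
        super().__init__()
        # One tiny gate head per modality: embedding -> scalar reliability in [0, 1].
        self.gates = nn.ModuleList([nn.Sequential(nn.Linear(dim, 32), nn.ReLU(),
                                                  nn.Linear(32, 1), nn.Sigmoid())
                                    for _ in range(n_modalities)])

    def forward(self, embeddings):                  # list of (batch, dim) tensors
        fused, total = 0.0, 0.0
        for emb, gate in zip(embeddings, self.gates):
            g = gate(emb)                           # (batch, 1): low for artifact-prone epochs
            fused = fused + g * emb
            total = total + g
        return fused / total.clamp(min=1e-6)        # reliability-weighted average

eeg, eog, emg = (torch.randn(16, 128) for _ in range(3))
fusion = GatedModalityFusion()
shared = fusion([eeg, eog, emg])                    # (16, 128) fused representation
```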

24 pages, 4667 KB  
Article
A Unified Complementary Regularization Framework for Long-Tailed Image Classification
by Xingyu Shen, Lei Zhang, Lituan Wang and Yan Wang
Appl. Sci. 2026, 16(3), 1656; https://doi.org/10.3390/app16031656 - 6 Feb 2026
Abstract
Class imbalance is a formidable and ongoing challenge in image classification tasks. Existing methods address this issue by emphasizing minority classes through class redistribution in the feature space or adjusting decision boundaries. Although such approaches improve the accuracy of minority classes, they often lead to unstable training and performance degradation on majority classes. To alleviate these challenges, we propose a unified redistribution framework termed as ComReg, which explicitly enforces complementary regularization on feature learning and decision boundary optimization in long-tailed image classification. Specifically, ComReg employs a multi-expert learning framework combined with prior-knowledge-guided online distillation to construct distribution-aware decision boundaries. From the feature space learning perspective, we enhance intra-class compactness and inter-class separability through decoupled-balanced contrastive learning. To further align the distributions in both spaces, we introduce a delay-weighted prototype learning strategy, which incorporates the decision boundary constructed by the head-class expert into the decoupled-balanced contrastive learning process. Extensive experiments on widely used long-tailed benchmarks, including CIFAR10-LT and CIFAR100-LT, as well as the real-world long-tailed datasets such as subsets of MedMNIST v2, demonstrate that our method achieves state-of-the-art performance. Full article
(This article belongs to the Special Issue AI-Driven Image and Signal Processing)

27 pages, 20135 KB  
Article
Seeing Like Argus: Multi-Perspective Global–Local Context Learning for Remote Sensing Semantic Segmentation
by Hongbing Chen, Yizhe Feng, Kun Wang, Mingrui Liao, Haoting Zhai, Tian Xia, Yubo Zhang, Jianhua Jiao and Changji Wen
Remote Sens. 2026, 18(3), 521; https://doi.org/10.3390/rs18030521 - 5 Feb 2026
Abstract
Accurate semantic segmentation of high-resolution remote sensing imagery is crucial for applications such as land cover mapping, urban development monitoring, and disaster response. However, remote sensing data still present inherent challenges, including complex spatial structures, significant intra-class variability, and diverse object scales, which demand models capable of capturing rich contextual information from both local and global regions. To address these issues, we propose ArgusNet, a novel segmentation framework that enhances multi-scale representations through a series of carefully designed fusion mechanisms. At the core of ArgusNet lies the synergistic integration of Adaptive Windowed Additive Attention (AWAA) and 2D Selective Scan (SS2D). Specifically, our AWAA extends additive attention into a window-based structure with a dynamic routing mechanism, enabling multi-perspective local feature interaction via multiple global query vectors. Furthermore, we introduce a decoder optimization strategy incorporating three-stage feature fusion and a Macro Guidance Module (MGM) to improve spatial detail preservation and semantic consistency. Experiments on benchmark remote sensing datasets demonstrate that ArgusNet achieves competitive and improved segmentation performance compared to state-of-the-art methods, particularly in scenarios requiring fine-grained object delineation and robust multi-scale contextual understanding. Full article

26 pages, 6232 KB  
Article
MFE-YOLO: A Multi-Scale Feature Enhanced Network for PCB Defect Detection with Cross-Group Attention and FIoU Loss
by Ruohai Di, Hao Fan, Hanxiao Feng, Zhigang Lv, Lei Shu, Rui Xie and Ruoyu Qian
Entropy 2026, 28(2), 174; https://doi.org/10.3390/e28020174 - 2 Feb 2026
Abstract
The detection of defects in Printed Circuit Boards (PCBs) is a critical yet challenging task in industrial quality control, characterized by the prevalence of small targets and complex backgrounds. While deep learning models like YOLOv5 have shown promise, they often lack the ability to quantify predictive uncertainty, leading to overconfident errors in challenging scenarios—a major source of false alarms and reduced reliability in automated manufacturing inspection lines. From a Bayesian perspective, this overconfidence signifies a failure in probabilistic calibration, which is crucial for trustworthy automated inspection. To address this, we propose MFE-YOLO, a Bayesian-enhanced detection framework built upon YOLOv5 that systematically integrates uncertainty-aware mechanisms to improve both accuracy and operational reliability in real-world settings. First, we construct a multi-background PCB defect dataset with diverse substrate colors and shapes, enhancing the model’s ability to generalize beyond the single-background bias of existing data. Second, we integrate the Convolutional Block Attention Module (CBAM), reinterpreted through a Bayesian lens as a feature-wise uncertainty weighting mechanism, to suppress background interference and amplify salient defect features. Third, we propose a novel FIoU loss function, redesigned within a probabilistic framework to improve bounding box regression accuracy and implicitly capture localization uncertainty, particularly for small defects. Extensive experiments demonstrate that MFE-YOLO achieves state-of-the-art performance, with mAP@0.5 and mAP@0.5:0.95 values of 93.9% and 59.6%, respectively, outperforming existing detectors, including YOLOv8 and EfficientDet. More importantly, the proposed framework yields better-calibrated confidence scores, significantly reducing false alarms and enabling more reliable human-in-the-loop verification. This work provides a deployable, uncertainty-aware solution for high-throughput PCB inspection, advancing toward trustworthy and efficient quality control in modern manufacturing environments. Full article
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
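
The attention component reused here, CBAM (Woo et al., 2018), applies channel attention followed by spatial attention to a feature map. The sketch below is a standard CBAM block in PyTorch with illustrative layer sizes; it is only the attention module, not the full MFE-YOLO detector or its FIoU loss.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                  # channel descriptor from avg-pool
        mx = self.mlp(x.amax(dim=(2, 3)))                   # channel descriptor from max-pool
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)    # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1) # (B, 2, H, W)
        return x * torch.sigmoid(self.spatial(s))           # spatial attention

feat = torch.randn(2, 64, 40, 40)                           # a backbone feature map
out = CBAM(64)(feat)                                        # same shape, attention-refined
```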

30 pages, 3451 KB  
Article
A Novel Investment Risk Assessment Model for Complex Construction Projects Based on the IFA-LSSVM
by Rupeng Ren, Shengmin Wang and Jun Fang
Buildings 2026, 16(3), 624; https://doi.org/10.3390/buildings16030624 - 2 Feb 2026
Abstract
The project cycle of complex construction projects covers the whole process from project decision-making, design, bidding, construction, and completion acceptance through the initial stage of operation. Within this cycle, the investment risk assessment of complex construction projects focuses on the early decision-making stage of the project, aiming to provide a basis for investment feasibility analysis. The investment risk of complex construction projects is highly nonlinear and uncertain, and the traditional risk assessment methods have limitations in model generalization ability and prediction accuracy. To improve the accuracy and reliability of quantitative risk assessment, this study proposed a novel investment risk assessment model based on the perspective of investors. Firstly, through literature research, a multi-dimensional comprehensive risk assessment index system covering policies and regulations, economic environment, technical management, construction safety, and financial cost was systematically identified and constructed. Subsequently, the Least Squares Support Vector Machine (LSSVM) was used to establish a nonlinear mapping relationship between risk indicators and final risk levels. Aiming at the problem that the parameter selection of the standard LSSVM model has a significant impact on the performance, this paper proposed an improved Firefly Algorithm (IFA) to automatically optimize the penalty factor and kernel function parameters of LSSVM, so as to overcome the arbitrariness of manual parameter selection and improve the convergence speed and generalization ability of the model. Compared with the classical Firefly Algorithm, IFA strengthens its learning and adaptive strategies by adding depth. The conclusions are as follows. (1) Compared with the Backpropagation Neural Network (BPNN), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost), this model showed higher prediction accuracy on the test set, with an improvement of about 3%. (2) Compared with FA, Genetic Algorithm (GA), and Particle Swarm Optimization (PSO), IFA had a stronger global search ability. (3) The model could effectively fit the complex risk nonlinear relationship, and the risk assessment results were highly consistent with the actual situation. Therefore, the risk assessment model based on the improved LSSVM constructed in this study not only provides a more scientific and accurate quantitative tool for investment decision-making of construction projects, but also has important theoretical and practical significance for preventing and resolving significant investment risks. Full article
(This article belongs to the Special Issue Advances in Life Cycle Management of Buildings)
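
The optimization loop described, in which a firefly-style search selects the SVM penalty and kernel parameters by cross-validated performance, can be sketched with a basic (non-improved) Firefly Algorithm over log-scaled (C, gamma) of an RBF SVM. scikit-learn has no LSSVM, so a standard SVC stands in, and the search ranges, population size, and FA coefficients below are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

def brightness(pos):
    """Cross-validated accuracy of an RBF SVM at log-scaled (C, gamma)."""
    C, gamma = 10.0 ** pos
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Basic Firefly Algorithm over log10(C) in [-2, 3] and log10(gamma) in [-4, 1].
low, high = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
n_fireflies, n_iter, beta0, gamma_fa, alpha = 6, 10, 1.0, 1.0, 0.2
pos = rng.uniform(low, high, size=(n_fireflies, 2))
light = np.array([brightness(p) for p in pos])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] > light[i]:                      # move firefly i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma_fa * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.uniform(-0.5, 0.5, 2)
                pos[i] = np.clip(pos[i], low, high)
                light[i] = brightness(pos[i])

best = pos[np.argmax(light)]
print("best log10(C), log10(gamma):", best, "CV accuracy:", light.max())
```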

27 pages, 4367 KB  
Article
MTFE-Net: A Deep Learning Vision Model for Surface Roughness Extraction Based on the Combination of Texture Features and Deep Learning Features
by Qiancheng Jin, Wangzhe Du, Huaxin Liu, Xuwei Li, Xiaomiao Niu, Yaxing Liu, Jiang Ji, Mingjun Qiu and Yuanming Liu
Metals 2026, 16(2), 179; https://doi.org/10.3390/met16020179 - 2 Feb 2026
Abstract
Surface roughness, critically measured by the Arithmetical Mean Roughness (Ra), is a vital determinant of workpiece functional performance. Traditional contact-based measurement methods are inefficient and unsuitable for online inspection. While machine vision offers a promising alternative, existing approaches lack robustness, and pure deep learning models suffer from poor interpretability. Therefore, MTFE-Net is proposed, which is a novel deep learning framework for surface roughness classification. The key innovation of MTFE-Net lies in its effective integration of traditional texture feature analysis with deep learning within a dual-branch architecture. The MTFE (Multi-dimensional Texture Feature Extraction) branch innovatively combines a comprehensive suite of texture descriptors including Gray-Level Co-occurrence Matrix (GLCM), gray-level difference statistic, first-order statistic, Tamura texture features, wavelet transform, and Local Binary Pattern (LBP). This multi-scale, multi-perspective feature extraction strategy overcomes the limitations of methods that focus on only specific texture aspects. These texture features are then refined using Multi-Head Self-Attention (MHA) mechanism and Mamba model. Experiments on a dataset of Q235 steel surfaces show that MTFE-Net achieves state-of-the-art performance with 95.23% accuracy, 94.89% precision, 94.67% recall and 94.74% F1-score, significantly outperforming comparable models. The results validate that the fusion strategy effectively enhances accuracy and robustness, providing a powerful solution for industrial non-contact roughness inspection. Full article
(This article belongs to the Section Computation and Simulation on Metals)
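
Several of the hand-crafted descriptors listed (GLCM statistics and LBP) are available in scikit-image, and the sketch below extracts a small GLCM-plus-LBP feature vector from one grayscale surface image, roughly the kind of input the texture branch would consume. Function names assume scikit-image 0.19 or later (graycomatrix/graycoprops); the quantization level, distances, angles, and the synthetic image are illustrative choices, not the authors' settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_features(gray_img, levels=64):
    """GLCM statistics + LBP histogram for one 8-bit grayscale surface image."""
    img = (gray_img.astype(float) / 256 * levels).astype(np.uint8)   # quantize gray levels
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    glcm_feats = np.concatenate([graycoprops(glcm, prop).ravel()
                                 for prop in ("contrast", "homogeneity",
                                              "energy", "correlation")])
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)  # 10 uniform bins
    return np.concatenate([glcm_feats, lbp_hist])

# Synthetic stand-in for a machined-surface patch (values in 0..255).
surface = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
feats = texture_features(surface)
print(feats.shape)      # 16 GLCM values + 10 LBP bins = (26,)
```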
