Search Results (1,275)

Search Parameters:
Keywords = multi-task prediction model

28 pages, 3876 KB  
Article
A Study on the Multi-Source Remote Sensing Visibility Classification Method Based on the LF-Transformer
by Chuhan Lu, Zhiyuan Han and Xiaoni Liang
Remote Sens. 2026, 18(4), 618; https://doi.org/10.3390/rs18040618 - 15 Feb 2026
Viewed by 109
Abstract
Visibility is a critical meteorological factor for ensuring the safety of maritime and bridge transportation, and accurate identification of low-visibility levels is essential for early warning and operational scheduling. Traditional methods such as Random Forest often exhibit insufficient feature-modeling capability when dealing with high-dimensional, multi-source remote sensing data. Meanwhile, satellite observations used for visibility recognition are characterized by strong inter-channel correlations, complex nonlinear interactions, significant observational noise and outliers, and the scarcity of low-visibility samples that are easily confused with low clouds and haze. As a result, existing general deep learning methods (e.g., the Saint model) may still exhibit unstable attention weights and limited generalization under complex meteorological conditions. To address these limitations, this study constructs a visibility classification task for the Jiaxing–Shaoxing Cross-Sea Bridge region in China based on multi-channel visible and infrared spectral observations from the Fengyun-4A (FY-4A) and Fengyun-4B (FY-4B) satellites. We propose a visibility classification method for this region using the LF-Transformer and systematically compare it with the Random Forest and Saint models. Experimental results show that the Precision of the LF-Transformer increases significantly from 0.47 (Random Forest) to 0.59, achieving a 13% improvement and demonstrating stronger discriminative ability and stability under complex meteorological conditions. Furthermore, a combined FY4A+FY4B input outperforms FY4A alone, yielding a 25.5% increase in Macro F1-score. With an additional ensemble strategy, the LF-Transformer further improves its precision on the FY4A+FY4B fused dataset to 0.61, a 3% gain over the original LF-Transformer, indicating enhanced prediction stability. Overall, the proposed method substantially strengthens visibility classification performance and highlights the strong application potential of the LF-Transformer in remote-sensing-based meteorological tasks, particularly for low-visibility monitoring, early warning, and transportation safety assurance. Full article
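For readers unfamiliar with the Macro F1-score cited in this abstract: it is the unweighted mean of per-class F1 scores, so rare low-visibility classes count as much as common ones. A minimal pure-Python sketch (an editorial illustration, not the authors' code):

```python
def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        # harmonic mean of precision and recall for this class
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because each class contributes equally, a model that ignores the scarce low-visibility class is penalized even when overall accuracy looks high.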
23 pages, 1032 KB  
Article
Research on Hourly Solar Radiation Prediction Methodology Based on DSWTC-Transformer
by Cong Li, Pengping Lv, Tao Huang and Xupeng Ren
Appl. Sci. 2026, 16(4), 1945; https://doi.org/10.3390/app16041945 - 15 Feb 2026
Viewed by 226
Abstract
Accurate estimation of solar radiation is of great significance for solar energy development and climate research. However, in China, the scarcity and uneven distribution of observation stations often cause deep learning models to overfit and suffer from accuracy degradation under small-sample conditions. To address this issue, this paper proposes a deep learning framework that integrates transfer learning and multi-scale time series modeling for predicting hourly global solar radiation at target meteorological sites. The method employs representation learning and clustering to select source domain sites with similar climatic characteristics. It integrates wavelet transform convolution, depthwise separable convolution, and a Transformer encoder–decoder to achieve multi-scale feature extraction and long-term dependency modeling. Experimental results demonstrate that the model achieved a coefficient of determination (R²) of 0.9710 in tests conducted in the Ningxia region. It maintained good predictive performance even in a cold-start scenario with only one month of training data and exhibited stable accuracy across all four seasons, effectively mitigating seasonal bias. This provides a reliable solution for solar radiation estimation in data-scarce regions, and its modeling approach can also be extended to other climate-related time series prediction tasks. Full article
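The coefficient of determination (R²) reported above measures the fraction of variance in observed radiation explained by the predictions. A quick illustrative definition (not the authors' evaluation script):

```python
def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot, relative to the mean predictor."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An R² of 0.9710 therefore means the residual error is about 3% of the variance a constant-mean baseline would leave unexplained.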

23 pages, 13265 KB  
Article
A Land Cover Recognition-Based Method for Wildfire Early Warning in Transmission Corridor Areas
by Changzheng Deng, Weiyi Li, Bo Chen and Zechuan Fan
Fire 2026, 9(2), 85; https://doi.org/10.3390/fire9020085 - 14 Feb 2026
Viewed by 241
Abstract
To improve the accuracy of wildfire risk identification in areas adjacent to power transmission corridors, this study proposes a wildfire early warning method that integrates refined land cover segmentation and multimodal feature deep learning. First, an improved bi-branch semantic segmentation network (BuildFormer++) is used to perform refined classification of high-resolution remote sensing images, extracting six types of land cover information, including forest and cultivated land. Second, a multi-dimensional feature set integrating land cover, topography, climate, and human activities is constructed and input into a multimodal wildfire point prediction network for deep feature fusion and probabilistic modeling. Experimental results show that the proposed segmentation network achieves a mean intersection-over-union (mIoU) of 40.68% in the semantic segmentation task; the early warning model achieves an accuracy of 85.37%, an F1 score of 93.15%, and an ROC-AUC of 85.42% in risk prediction, significantly outperforming comparative methods. The “refined segmentation–feature fusion–risk prediction” framework constructed by this method can provide reliable technical support for the operation and maintenance safety and fire prevention of power transmission corridors. Full article
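The mIoU figure quoted above averages, over classes, the overlap between predicted and true pixel sets divided by their union. A small illustrative sketch over flattened label arrays (an editorial example, not the paper's code; classes absent from both prediction and ground truth are skipped):

```python
def mean_iou(y_true, y_pred, num_classes):
    """Mean intersection-over-union across classes present in the data."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:  # skip classes with no pixels in either map
            ious.append(inter / union)
    return sum(ious) / len(ious)
```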

20 pages, 2405 KB  
Article
Confidence-Guided Adaptive Diffusion Network for Medical Image Classification
by Yang Yan, Zhuo Xie and Wenbo Huang
J. Imaging 2026, 12(2), 80; https://doi.org/10.3390/jimaging12020080 - 14 Feb 2026
Viewed by 129
Abstract
Medical image classification is a fundamental task in medical image analysis and underpins a wide range of clinical applications, including dermatological screening, retinal disease assessment, and malignant tissue detection. In recent years, diffusion models have demonstrated promising potential for medical image classification owing to their strong representation learning capability. However, existing diffusion-based classification methods often rely on oversimplified prior modeling strategies, which fail to adequately capture the intrinsic multi-scale semantic information and contextual dependencies inherent in medical images. As a result, the discriminative power and stability of feature representations are constrained in complex scenarios. In addition, fixed noise injection strategies neglect variations in sample-level prediction confidence, leading to uniform perturbations being imposed on samples with different levels of semantic reliability during the diffusion process, which in turn limits the model’s discriminative performance and generalization ability. To address these challenges, this paper proposes a Confidence-Guided Adaptive Diffusion Network (CGAD-Net) for medical image classification. Specifically, a hybrid prior modeling framework is introduced, consisting of a Hierarchical Pyramid Context Modeling (HPCM) module and an Intra-Scale Dilated Convolution Refinement (IDCR) module. These two components jointly enable the diffusion-based feature modeling process to effectively capture fine-grained structural details and global contextual semantic information. Furthermore, a Confidence-Guided Adaptive Noise Injection (CG-ANI) strategy is designed to dynamically regulate noise intensity during the diffusion process according to sample-level prediction confidence. 
Without altering the underlying discriminative objective, CG-ANI stabilizes model training and enhances robust representation learning for semantically ambiguous samples. Experimental results on multiple public medical image classification benchmarks, including HAM10000, APTOS2019, and Chaoyang, demonstrate that CGAD-Net achieves competitive performance in terms of classification accuracy, robustness, and training stability. These results validate the effectiveness and application potential of confidence-guided diffusion modeling for two-dimensional medical image classification tasks, and provide valuable insights for further research on diffusion models in the field of medical image analysis. Full article
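The core idea of confidence-guided noise injection can be sketched in a few lines: scale the perturbation inversely with the model's confidence in a sample, so reliable samples are barely disturbed. This is a hypothetical simplification of CG-ANI, not the authors' implementation; `base_sigma` is an assumed parameter name:

```python
import random

def adaptive_noise(features, confidence, base_sigma=0.1):
    """Inject Gaussian noise whose scale shrinks as confidence grows:
    sigma = base_sigma * (1 - confidence)."""
    sigma = base_sigma * (1.0 - confidence)
    return [x + random.gauss(0.0, sigma) for x in features]
```

At confidence 1.0 the sample passes through unchanged; at confidence 0.0 it receives the full base perturbation.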
(This article belongs to the Section Medical Imaging)

29 pages, 2521 KB  
Article
Time-Series Modeling for Corporate Financial Crisis Prediction: Evidence from Recurrent Neural Networks
by Yanqiong Duan and Aizhen Ren
Mathematics 2026, 14(4), 657; https://doi.org/10.3390/math14040657 - 12 Feb 2026
Viewed by 212
Abstract
Corporate financial distress typically emerges through a gradual accumulation process, rendering crisis prediction inherently dynamic and path-dependent. However, many existing studies continue to rely on static cross-sectional data or short-term observations, which limits their ability to capture the temporal evolution of financial risk. To address this issue, this study develops a time-series financial crisis early warning framework based on Recurrent Neural Networks (RNNs) and systematically evaluates the incremental value of temporal information in corporate distress prediction. Using annual data of Chinese A-share listed companies from 2019 to 2023, we construct both single-year cross-sectional datasets and a five-year multi-period time-series dataset under a unified experimental protocol. Within this dual-framework setting, RNNs are compared with Random Forest (RF), Support Vector Machine (SVM), and Backpropagation Neural Network (BPNN) using identical feature sets, training–testing splits, and evaluation criteria. Model performance is assessed through multiple metrics, including Accuracy, Precision, Recall, F1 score, and AUC, complemented by statistical validation using McNemar tests, loss-based comparisons, and bootstrap confidence intervals. The empirical results show that while RF and BPNN exhibit strong robustness in static, single-period prediction tasks, RNNs achieve consistently superior performance when multi-period temporal information is explicitly modeled. Statistical tests indicate that the observed performance advantages of RNNs are systematic and stable, though moderate under the current sample size. This study provides empirical evidence that incorporating temporal structures into financial crisis prediction can substantially enhance predictive effectiveness under constrained labeled data. 
The findings highlight the importance of time-series modeling for early warning applications and offer practical guidance for selecting appropriate predictive frameworks across different data structures. Full article
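The multi-period dataset described above amounts to turning per-year feature vectors into overlapping fixed-length sequences that a recurrent model can consume. A minimal sliding-window sketch (illustrative only; the study's actual preprocessing is not public here):

```python
def make_sequences(annual_records, window=5):
    """Build overlapping multi-period samples from chronologically
    ordered per-year feature vectors (e.g., 2019-2023 -> one window)."""
    return [annual_records[i:i + window]
            for i in range(len(annual_records) - window + 1)]
```

With exactly five years of data and `window=5`, each firm contributes a single five-step sequence, which is the setting the abstract describes.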
(This article belongs to the Section E1: Mathematics and Computer Science)

22 pages, 2714 KB  
Article
DeepChance-OPT: A Robust Decision-Making Framework for Dynamic Grasping in Precision Assembly
by Tong Wei and Haibo Jin
Information 2026, 17(2), 187; https://doi.org/10.3390/info17020187 - 12 Feb 2026
Viewed by 105
Abstract
Achieving safe and efficient sequential decision-making in dynamic and uncertain environments is a core challenge in intelligent manufacturing and robotic systems. During operation, systems are often subject to coupled multi-source uncertainties—such as stochastic disturbances, model mismatch, and environmental shifts—rendering traditional approaches based on deterministic models or post hoc safety verification incapable of simultaneously ensuring performance and safety. In particular, the non-differentiability of constraint satisfaction probabilities in chance-constrained decision-making severely impedes its integration with data-driven learning paradigms. To address these challenges, this paper proposes DeepChance-OPT (Deep Chance Optimization), an end-to-end differentiable disturbance-rejection decision framework tailored for dynamic grasping tasks in precision assembly. The framework first encodes historical observations and control sequences into a low-dimensional latent representation to extract key dynamic features relevant to decision-making. Subsequently, it models the temporal propagation of uncertainty in this latent space to predict the probability distribution of future states. Furthermore, via a differentiable chance-constrained mechanism, the risk of constraint violation is transformed into a continuous and differentiable penalty term, which is jointly optimized with the task performance objective to achieve synergistic improvement in both safety and efficiency. The entire framework is trained and executed under a unified end-to-end architecture, enabling closed-loop online sequential decision-making. Experiments on a precision silicon carbide wafer grasping task demonstrate that DeepChance-OPT achieves real-time performance (average decision latency < 4 ms) while reducing the constraint violation rate to 2.3%, significantly outperforming both traditional optimization and purely data-driven baselines. 
Under composite uncertainty scenarios—including parameter perturbations, measurement noise, and external disturbances—the success rate remains stably above 87.5%, fully validating the effectiveness of the proposed framework for robust, safe, and efficient decision-making in complex dynamic environments. This work provides a new paradigm for intelligent disturbance-rejection decision-making in high-precision manufacturing, offering both theoretical rigor and engineering practicality. Full article
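The key trick the abstract describes is replacing a non-differentiable constraint-violation indicator with a smooth penalty. A common choice for such surrogates is a softplus of the violation margin; the sketch below is one plausible instantiation, not DeepChance-OPT's actual loss:

```python
import math

def softplus_penalty(margin, beta=10.0):
    """Smooth surrogate for 1[margin > 0]: (1/beta) * log(1 + exp(beta*margin)).
    margin > 0 means the constraint is violated by that amount.
    Written in a numerically stable form to avoid overflow."""
    x = beta * margin
    return (max(x, 0.0) + math.log1p(math.exp(-abs(x)))) / beta
```

For margins well inside the feasible region the penalty is near zero, and for large violations it grows linearly in the margin, so its gradient is informative everywhere, which is what makes joint optimization with a task objective possible.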
(This article belongs to the Special Issue Data-Driven Decision-Making in Intelligent Systems)

23 pages, 2557 KB  
Article
MECFN: A Multi-Modal Temporal Fusion Network for Valve Opening Prediction in Fluororubber Material Level Control
by Weicheng Yan, Kaiping Yuan, Han Hu, Minghui Liu, Haigang Gong, Xiaomin Wang and Guantao Zhang
Electronics 2026, 15(4), 783; https://doi.org/10.3390/electronics15040783 - 12 Feb 2026
Viewed by 111
Abstract
During fluororubber production, strong material agitation and agglomeration induce severe dynamic fluctuations, irregular surface morphology, and pronounced variations in apparent material level. Under such operating conditions, conventional single-modality monitoring approaches—such as point-based height sensors or manual visual inspection—often fail to reliably capture the true process state. This information deficiency leads to inaccurate valve opening adjustment and degrades material level control performance. To address this issue, valve opening prediction is formulated as a data-driven, control-oriented regression task for material level regulation, and an end-to-end multimodal temporal regression framework, termed MECFN (Multi-Modal Enhanced Cross-Fusion Network), is proposed. The model performs deep fusion of visual image sequences and height sensor signals. A customized Multi-Feature Extraction (MFE) module is designed to enhance visual feature representation under complex surface conditions, while two independent Transformer encoders are employed to capture long-range temporal dependencies within each modality. Furthermore, a context-aware cross-attention mechanism is introduced to enable effective interaction and adaptive fusion between heterogeneous modalities. Experimental validation on a real-world industrial fluororubber production dataset demonstrates that MECFN consistently outperforms traditional machine learning approaches and single-modality deep learning models in valve opening prediction. Quantitative results show that MECFN achieves a mean absolute error of 2.36, a root mean squared error of 3.73, and an R² of 0.92. These results indicate that the proposed framework provides a robust and practical data-driven solution for supporting valve control and achieving stable material level regulation in industrial production environments. Full article
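Cross-attention between modalities, as described above, means tokens from one stream (queries) attend over tokens from the other (keys/values). A dependency-free scaled dot-product sketch for intuition (single head, no learned projections, so it is far simpler than MECFN's mechanism):

```python
import math

def cross_attention(queries, keys, values):
    """Each query row (modality A) attends over key/value rows (modality B)
    via softmax(Q K^T / sqrt(d)) V."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                       # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```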
(This article belongs to the Special Issue AI for Industry)

20 pages, 1780 KB  
Article
Mining Managerial Insights from User Reviews: A Mix Contrastive Method to Aspect–Opinion Mining
by Tianshu Zhang, Kunze Xia and Xiaoliang Chen
Symmetry 2026, 18(2), 335; https://doi.org/10.3390/sym18020335 - 12 Feb 2026
Viewed by 158
Abstract
For businesses to optimize management decisions in the digital transformation, a process inherently characterized by symmetry between feedback collection and strategic adjustment, it is essential to automatically extract fine-grained opinions from large volumes of unstructured evaluations. However, traditional evaluation management techniques often fail to reflect this symmetrical balance between user perception and organizational response, primarily due to their inefficiency in processing unstructured textual data. Moreover, existing aspect–opinion mining algorithms exhibit limited practical generalization performance due to poor robustness against noise and semantic variations in real-world reviews. To address these gaps, this paper proposes MixContrast, an aspect–opinion mining method based on mix contrastive learning, which integrates mixed sample construction with data augmentation to generate continuous semantic transition samples. By symmetrically aligning positive and negative samples through a contrastive learning mechanism, MixContrast enhances representation learning and improves model generalization. Experiments conducted on cosmetics and multi-domain e-commerce review datasets demonstrate that MixContrast significantly outperforms several strong baseline models in both aspect and opinion extraction tasks. Theoretical analysis shows that MixContrast enhances robustness by ensuring Lipschitz continuity and enabling gradient decomposition in the representation space. Based on MixContrast predictions, we conduct a correlation analysis among aspects, opinions, and sentiment tendencies, delivering real-time quantitative support for marketing strategy formulation, product optimization, and service enhancement. Beyond advancing aspect–opinion mining technology, this work enables data-driven, symmetrical integration of technical insights with managerial decision-making, holding significant theoretical and practical value for digitally transforming enterprises. 
Full article
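The "continuous semantic transition samples" MixContrast constructs are in the spirit of mixup-style convex interpolation between two inputs. A hypothetical sketch of that construction step (the paper's actual augmentation operates on text representations and is more involved):

```python
import random

def mixup(x_i, x_j, alpha=0.2):
    """Interpolate two feature vectors with a Beta(alpha, alpha)-sampled
    coefficient; labels would be mixed with the same coefficient."""
    lam = random.betavariate(alpha, alpha)
    mixed = [lam * a + (1.0 - lam) * b for a, b in zip(x_i, x_j)]
    return mixed, lam
```

A small `alpha` pushes `lam` toward 0 or 1, so most mixed samples stay close to one of the originals while still filling in the semantic space between them.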

29 pages, 2919 KB  
Article
A Model-Driven Engineering Approach to AI-Powered Healthcare Platforms
by Mira Raheem, Neamat Eltazi, Michael Papazoglou, Bernd Krämer and Amal Elgammal
Informatics 2026, 13(2), 32; https://doi.org/10.3390/informatics13020032 - 11 Feb 2026
Viewed by 178
Abstract
Artificial intelligence (AI) has the potential to transform healthcare by supporting more accurate diagnoses and personalized treatments. However, its adoption in practice remains constrained by fragmented data sources, strict privacy rules, and the technical complexity of building reliable clinical systems. To address these challenges, we introduce a model-driven engineering (MDE) framework designed specifically for healthcare AI. The framework relies on formal metamodels, domain-specific languages (DSLs), and automated transformations to move from high-level specifications to running software. At its core is the Medical Interoperability Language (MILA), a graphical DSL that enables clinicians and data scientists to define queries and machine learning pipelines using shared ontologies. When combined with a federated learning architecture, MILA allows institutions to collaborate without exchanging raw patient data, ensuring semantic consistency across sites while preserving privacy. We evaluate this approach in a multi-center cancer immunotherapy study. The generated pipelines delivered strong predictive performance, with best-performing models achieving up to 98.5% accuracy on selected prediction tasks, while substantially reducing manual coding effort. These findings suggest that MDE principles—metamodeling, semantic integration, and automated code generation—can provide a practical path toward interoperable, reproducible, and reliable digital health platforms. Full article
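The federated learning architecture mentioned above rests on a simple aggregation idea: each site trains locally and only parameter updates, never raw patient data, are combined. The classic FedAvg aggregation step, sketched here for illustration (not the framework's actual code):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client parameter vectors,
    weighting each by its local sample count."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]
```

A client with three times the data pulls the global model three times as hard, which keeps the aggregate consistent with pooled training while the data stays on site.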
(This article belongs to the Section Health Informatics)

14 pages, 725 KB  
Article
PLTA-FinBERT: Pseudo-Label Generation-Based Test-Time Adaptation for Financial Sentiment Analysis
by Hai Yang, Hainan Chen, Chang Jiang, Juntao He and Pengyang Li
Big Data Cogn. Comput. 2026, 10(2), 59; https://doi.org/10.3390/bdcc10020059 - 11 Feb 2026
Viewed by 266
Abstract
Financial sentiment analysis leverages natural language processing techniques to quantitatively assess sentiment polarity and emotional tendencies in financial texts. Its practical application in investment decision-making and risk management faces two major challenges: the scarcity of high-quality labeled data due to expert annotation costs, and semantic drift caused by the continuous evolution of market language. To address these issues, this study proposes PLTA-FinBERT, a pseudo-label generation-based test-time adaptation framework that enables dynamic self-learning without requiring additional labeled data. The framework consists of two modules: a multi-perturbation pseudo-label generation mechanism that enhances label reliability through consistency voting and confidence-based filtering, and a test-time dynamic adaptation strategy that iteratively updates model parameters based on high-confidence pseudo-labels, allowing the model to continuously adapt to new linguistic patterns. PLTA-FinBERT achieves an accuracy of 0.8288 on the financial sentiment classification dataset, an absolute improvement of 2.37 percentage points over the benchmark. On the FiQA sentiment intensity prediction task, it obtains an R² of 0.58, surpassing the previous state-of-the-art by 3 percentage points. Full article
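The consistency-voting-plus-filtering step described above can be sketched compactly: run the model on several perturbed views of a sample, keep the majority label only if agreement is high enough. This is an illustrative simplification; `min_agreement` is an assumed parameter, not the paper's threshold:

```python
from collections import Counter

def pseudo_label(predictions, min_agreement=0.8):
    """Majority vote over predictions for perturbed views of one sample;
    return the label only if the vote share clears the threshold."""
    votes = Counter(predictions)
    label, count = votes.most_common(1)[0]
    return label if count / len(predictions) >= min_agreement else None
```

Samples that return `None` are simply excluded from the test-time update, which is how low-confidence pseudo-labels are kept out of adaptation.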

20 pages, 2620 KB  
Article
Data-Driven Linear Representations of Forced Nonlinear MIMO Systems via Hankel Dynamic Mode Decomposition with Lifting
by Marcos Villarreal-Esquivel, Juan Francisco Durán-Siguenza and Luis Ismael Minchala
Mathematics 2026, 14(4), 625; https://doi.org/10.3390/math14040625 - 11 Feb 2026
Viewed by 387
Abstract
Modeling forced nonlinear multivariable dynamical systems remains challenging, particularly when first-principles models are unavailable or strong nonlinear couplings are present. In recent years, data-driven approaches grounded in the Koopman operator theory have gained attention for their ability to represent nonlinear dynamics via linear evolution in appropriately lifted spaces. This work presents a data-driven modeling framework for forced nonlinear multiple-input multiple-output (MIMO) systems based on Hankel Dynamic Mode Decomposition with control and lifting functions (HDMDc+Lift). The proposed methodology exploits Hankel matrices to encode temporal correlations and employs lifting functions to approximate the Koopman operator’s action on observable functions. As a result, an augmented-order linear state-space model is identified exclusively from input–output data, without relying on explicit knowledge of the system’s governing equations. The effectiveness of the proposed approach is demonstrated using operational data from a real multivariable tank system that was not used during the identification stage. The identified model achieves a coefficient of determination exceeding 0.87 in multi-step prediction tasks. Furthermore, spectral analysis of the resulting linear operator reveals that the dominant dynamical modes of the physical system are accurately captured. At the same time, additional modes associated with nonlinear interactions are also identified. These results highlight the HDMDc+Lift framework’s ability to provide accurate and interpretable linear representations of forced nonlinear MIMO dynamics. Full article
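The Hankel matrices at the heart of HDMDc stack time-delayed copies of a measured signal so that temporal correlations become columns of a single matrix. A minimal construction for a scalar signal (illustrative; the paper applies this per channel with control inputs as well):

```python
def hankel(signal, rows):
    """Delay-embed a scalar signal: column k holds signal[k : k + rows]."""
    cols = len(signal) - rows + 1
    return [[signal[r + c] for c in range(cols)] for r in range(rows)]
```

Pairs of such matrices shifted by one step are what DMD-style regression fits a linear operator between, which is how the augmented-order linear model is identified from input-output data alone.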
(This article belongs to the Special Issue Trends in Nonlinear Dynamic System Modeling)

43 pages, 22770 KB  
Article
Multi-Strategy Enhanced Connected Banking System Optimizer for Global Optimization and Corporate Bankruptcy Forecasting
by Yaozhong Zhang and Xiao Yang
Mathematics 2026, 14(4), 618; https://doi.org/10.3390/math14040618 - 10 Feb 2026
Viewed by 151
Abstract
Metaheuristic optimization algorithms are widely employed to address complex nonlinear and multimodal optimization problems due to their flexibility and strong global search capability. However, the original Connected Banking System Optimizer (CBSO) still exhibits several inherent limitations when handling high-dimensional and highly complex search spaces, including excessive dependence on single global-best guidance, rapid loss of population diversity, weak exploitation ability in later iterations, and inefficient boundary handling. These deficiencies often lead to premature convergence and unstable optimization performance. To overcome these drawbacks, this paper proposes a Multi-Strategy Enhanced Connected Banking System Optimizer (MSECBSO) by systematically enhancing the CBSO framework through multiple complementary mechanisms. First, a multi-elite cooperative guidance strategy is introduced to aggregate information from several high-quality individuals, thereby mitigating search-direction bias and improving population diversity. Second, an embedded differential evolution search strategy is incorporated to strengthen local exploitation accuracy and enhance the ability to escape from local optima. Third, a soft boundary rebound mechanism is designed to replace rigid boundary truncation, improving search stability and preventing boundary aggregation. The proposed MSECBSO is extensively evaluated on the CEC2017 and CEC2022 benchmark suites under different dimensional settings and is statistically compared with nine state-of-the-art metaheuristic algorithms. Experimental results demonstrate that MSECBSO achieves superior convergence accuracy, robustness, and stability across unimodal, multimodal, hybrid, and composition functions. In terms of computational complexity, MSECBSO retains the same order of time complexity as the original CBSO, namely O(N×D×T), while introducing only a marginal increase in constant computational overhead. 
The space complexity remains O(N×D), indicating good scalability for high-dimensional optimization problems. Furthermore, MSECBSO is applied to corporate bankruptcy forecasting by optimizing the hyperparameters of a K-nearest neighbors (KNN) classifier. The resulting MSECBSO-KNN model achieves higher prediction accuracy and stronger stability than competing optimization-based KNN models, confirming the effectiveness and practical applicability of the proposed algorithm in real-world classification tasks. Full article
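The soft boundary rebound idea, reflecting an out-of-bounds coordinate back inside the search box with damped overshoot instead of clamping it to the edge, can be shown in a few lines. A hypothetical sketch (the `damping` factor is an assumed parameter, not taken from the paper):

```python
def soft_rebound(x, low, high, damping=0.5):
    """Reflect an out-of-bounds coordinate back inside [low, high],
    damping the overshoot rather than truncating to the boundary."""
    if x < low:
        return min(low + damping * (low - x), high)
    if x > high:
        return max(high - damping * (x - high), low)
    return x
```

Unlike hard truncation, which piles candidates up exactly on the boundary, the rebound spreads them into the interior, which is the boundary-aggregation problem the abstract says MSECBSO avoids.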
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)
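The soft boundary rebound mechanism described in the abstract replaces hard truncation at the search-space limits with a reflected, randomly damped step back into the feasible region. The sketch below illustrates that general idea; the function name and the exact damping rule are illustrative assumptions, not the paper's published update equations.

```python
import numpy as np

def soft_rebound(position, lower, upper, rng=None):
    """Reflect out-of-bounds coordinates back into the search space
    with a random damping factor, instead of clamping them to the
    boundary. Illustrative sketch only; the MSECBSO paper's exact
    rebound rule may differ."""
    rng = np.random.default_rng() if rng is None else rng
    pos = np.asarray(position, dtype=float).copy()
    low_mask = pos < lower
    high_mask = pos > upper
    # Rebound by a random fraction of the overshoot, landing strictly
    # inside the bounds rather than piling up on the boundary itself.
    pos[low_mask] = lower[low_mask] + rng.random(low_mask.sum()) * (lower[low_mask] - pos[low_mask])
    pos[high_mask] = upper[high_mask] - rng.random(high_mask.sum()) * (pos[high_mask] - upper[high_mask])
    # Guard against extreme overshoots that rebound past the far bound.
    return np.clip(pos, lower, upper)
```

Compared with hard clamping, this keeps perturbed agents distributed near (but not on) the boundary, which is what prevents the "boundary aggregation" the abstract mentions.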
47 pages, 3665 KB  
Article
Enhanced Rotating Machinery Fault Diagnosis Using Hybrid RBSO–MRFO Adaptive Transformer-LSTM for Binary and Multi-Class Classification
by Amir R. Ali and Hossam Kamal
Machines 2026, 14(2), 208; https://doi.org/10.3390/machines14020208 - 10 Feb 2026
Viewed by 181
Abstract
Accurate fault diagnosis in rotating machinery is critical for predictive maintenance and operational reliability in industrial applications. Despite the effectiveness of deep learning, many models underperform due to manually selected hyperparameters, which can lead to premature convergence, overfitting, weak generalization, and inconsistent performance across binary and multi-class classification. To address these limitations, the study proposes a novel hybrid hyperparameter optimization framework that combines Robotic Brain Storm Optimization (RBSO) with Manta Ray Foraging Optimization (MRFO) to optimally fine-tune deep learning architectures, including MLP, LSTM, GRU-TCN, CNN-BiLSTM, and Transformer-LSTM models. The framework leverages RBSO for global search to promote diversity and prevent premature convergence, and MRFO for local search to enhance convergence toward optimal solutions, with their combined effect improving predictive model performance and methodological generalization. The approach was validated on three benchmark datasets, including Case Western Reserve University (CWRU), industrial machine fault detection (TMFD), and the Machinery Fault Dataset (MaFaulDa). Before optimization, the Transformer-LSTM model achieved 98.35% and 97.21% accuracy on CWRU binary and multi-class classification, 99.52% and 98.57% on TMFD, and 98.18% and 92.82% on MaFaulDa. Following hybrid optimization, the Transformer-LSTM exhibited superior performance, with accuracies increasing to 99.72% for both CWRU tasks, 99.97% for TMFD, and 99.98% and 98.60% for MaFaulDa, substantially reducing misclassification. These results demonstrate that the proposed RBSO–MRFO framework provides a scalable, robust, and high-accuracy solution for intelligent fault diagnosis in rotating machinery. Full article
(This article belongs to the Section Machines Testing and Maintenance)
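The abstract's division of labor, a diversity-promoting global search combined with a best-guided local refinement, can be sketched as a minimal hybrid loop. This is a toy illustration under assumed operators (Gaussian perturbations with two scales); the actual RBSO brain-storm clustering and MRFO foraging updates are far more elaborate.

```python
import numpy as np

def hybrid_search(objective, bounds, n_agents=10, iters=50, seed=0):
    """Toy global+local hybrid in the spirit of RBSO–MRFO: a wide
    random step maintains diversity; a narrow step refines around
    the current best. Illustrative only, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    pop = rng.uniform(low, high, size=(n_agents, len(low)))
    fit = np.array([objective(p) for p in pop])
    for _ in range(iters):
        best = pop[fit.argmin()]
        # Global step: large perturbation scaled to the full range.
        cand_g = np.clip(pop + rng.normal(0, 0.1, pop.shape) * (high - low), low, high)
        # Local step: small perturbation contracted around the best agent.
        cand_l = np.clip(best + rng.normal(0, 0.01, pop.shape) * (high - low), low, high)
        for cand in (cand_g, cand_l):
            f = np.array([objective(c) for c in cand])
            better = f < fit           # greedy selection: keep improvements
            pop[better], fit[better] = cand[better], f[better]
    return pop[fit.argmin()], fit.min()
```

Applied to hyperparameter tuning, `objective` would wrap model training and return a validation-error score, with each coordinate of an agent encoding one hyperparameter.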
35 pages, 11090 KB  
Article
Design in the Age of Predictive Architecture: From Digital Models to Parametric Code to Latent Space
by José Carlos López Cervantes and Cintya Eva Sánchez Morales
Architecture 2026, 6(1), 25; https://doi.org/10.3390/architecture6010025 - 10 Feb 2026
Viewed by 234
Abstract
Over the last three decades, architecture has undergone a sustained digital transformation that has progressively displaced the ontology of the geometric generator, understood here as the primary artefact through which form is produced, controlled, and legitimized. This paper argues that, within one extended digital epoch, three successive regimes have reconfigured architectural agency. First, a digital model regime, in which computer-generated 3D models become the main generators of geometry. Second, a parametric code regime, in which scripted relations and numerical parameters supersede the individual model as the core design object, defining a space of possibilities rather than a single instance. Third, an emerging latent regime, in which diffusion and transformer systems produce high-plausibility synthetic images as image-first generators and subsequently impose a post hoc image-to-geometry translation requirement. To make this shifting paradigm comparable across time, the paper uses the blob as a stable morphological reference and develops a comparative reading of four blobs, Kiesler’s Endless House, Greg Lynn’s Embryological House, Marc Fornes’ Vaulted Willow, and an author-generated GenAI blob curated from a traceable AI image archive, to show how the geometric generator migrates from object, to model, to code, to latent image-space. As a pre-digital hinge case, Kiesler is selected not only for anticipating blob-like continuity, but for clarifying a recurrent disciplinary tension: “form-first generators” that precede tectonic and programmatic rationalization. The central hypothesis is that GenAI introduces an ontological shift not primarily at the level of style, but at the level of architectural judgement and evidentiary legitimacy. The project can begin with a predictive image that is visually convincing yet tectonically underdetermined.
To name this condition, the paper proposes the plausibility gap, the mismatch between visual plausibility and tectonic intelligibility, as an operational criterion for evaluating image-first workflows, and for specifying the verification tasks required to stabilize them as architecture. Selection establishes evidentiary legitimacy, while a friction map and Gap Index externalize the translation pressure required to turn predictive imagery into accountable geometry, making the plausibility gap operational rather than merely asserted. The paper concludes by outlining implications for authorship, pedagogy, and disciplinary judgement in emerging multi-agent design ecologies. Full article
(This article belongs to the Special Issue Architecture in the Digital Age)
16 pages, 1641 KB  
Article
Edge-Based GNN for Network Delay Prediction Enhanced by Flight Connectivity
by Zhixing Tang, Zhaolun Niu, Xuanting Chen, Shan Huang and Xinping Zhu
Aerospace 2026, 13(2), 161; https://doi.org/10.3390/aerospace13020161 - 10 Feb 2026
Viewed by 167
Abstract
Accurate prediction of network-wide delay is crucial for air traffic management and passenger service. However, the inherent complexity of large-scale air traffic networks, with their dense interconnectivity and multi-dimensional operational dynamics such as flight connectivity, makes this task highly challenging. While Graph Neural Networks (GNNs) offer a promising framework, prevailing models are constrained by a “node → edge → node” representation paradigm, which fails to preserve the high-fidelity, edge-centric operational data that encodes delay propagation paths. To overcome this limitation, we propose a novel edge-based GNN. Our approach begins with a flight-connectivity-informed delay characterization, introducing delay width and delay strength as core metrics. The model implements an “edge → node” message-passing mechanism that explicitly encodes inbound and outbound flights, enabling direct learning of delay diffusion dynamics along air routes. Extensive experiments on real-world datasets demonstrate that our method outperforms state-of-the-art benchmarks, achieving the lowest RMSE, MAE, and MSE. A layered performance analysis reveals a key strength: the model delivers superior accuracy at major hub airports—which are critical to network performance—while maintaining robust precision at small-to-medium-sized airports. This balanced capability underscores the model’s practical utility and its enhanced capacity to capture the essential spatial–temporal dependencies governing delay propagation across diverse airport tiers. Full article
(This article belongs to the Special Issue AI, Machine Learning and Automation for Air Traffic Control (ATC))
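The "edge → node" message-passing idea described in the abstract, where each airport node directly aggregates the features of its inbound and outbound flight legs, can be sketched as a single NumPy aggregation step. The function name and the sum-then-concatenate scheme are assumptions for illustration; the paper's learned update adds trainable weights and nonlinearities on top of such an aggregation.

```python
import numpy as np

def edge_to_node(edge_feats, src, dst, n_nodes):
    """Aggregate edge (flight-leg) features onto airport nodes.
    Each node sums the features of its inbound and outbound edges
    separately, then concatenates the two views, so delay signals
    arriving at and departing from an airport stay distinguishable.
    Minimal sketch of one 'edge -> node' message-passing step."""
    d = edge_feats.shape[1]
    inbound = np.zeros((n_nodes, d))
    outbound = np.zeros((n_nodes, d))
    np.add.at(inbound, dst, edge_feats)   # edges arriving at each node
    np.add.at(outbound, src, edge_feats)  # edges departing each node
    return np.concatenate([inbound, outbound], axis=1)
```

In contrast, a conventional "node → edge → node" layer would first compress edge information into node states before passing messages, which is exactly the fidelity loss the abstract argues against.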