Search Results (396)

Search Parameters:
Keywords = shot prediction

22 pages, 1149 KB  
Article
CGAP-HBSA: A Source Camera Identification Framework Under Few-Shot Conditions
by Yifan Hu, Zhiqiang Wen, Aofei Chen and Lini Wu
Symmetry 2026, 18(1), 71; https://doi.org/10.3390/sym18010071 - 31 Dec 2025
Abstract
Source camera identification relies on sensor noise features to distinguish between different devices, but large-scale sample labeling is time-consuming and labor-intensive, making it difficult to implement in real-world applications. The noise residuals generated by different camera sensors exhibit statistical asymmetry, and the structured patterns within these residuals also show local symmetric relationships. Together, these properties form the theoretical foundation for camera source identification. To address the problem of limited labeled data under few-shot conditions, this paper proposes a Cross-correlation Guided Augmentation and Prediction with Hybrid Bidirectional State-Space Model Attention (CGAP-HBSA) framework built on this symmetry-related foundation. The method extracts symmetric correlation structures from unlabeled samples and converts them into reliable pseudo-labeled samples. The HBSA network then jointly models symmetric structures and asymmetric variations in camera fingerprints using a bidirectional SSM module and a hybrid attention mechanism, thereby enhancing long-range spatial modeling capability and recognition robustness. On the Dresden dataset, the proposed method achieves 5-shot identification accuracy only 0.02% below MDM-CPS, the current best-performing few-shot method, while outperforming other classical few-shot camera source identification methods; in the 10-shot task it improves on MDM-CPS by at least 0.3%. On the Vision dataset, it improves 5-shot identification accuracy by at least 6% over MDM-CPS and 10-shot accuracy by at least 3%. Experimental results demonstrate that the proposed method achieves competitive or superior performance in both 5-shot and 10-shot settings, and additional robustness experiments confirm that the HBSA network maintains strong performance under image compression and noise contamination.
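As a rough illustration of the pseudo-labeling step described in the abstract, the sketch below shows the generic pattern of keeping only high-confidence predictions on unlabeled samples as extra training data; the function names, loader, and threshold are assumptions, and the actual cross-correlation-guided selection in CGAP-HBSA is more involved.

```python
# Generic confidence-thresholded pseudo-labeling (illustrative sketch only).
import torch

def pseudo_label(model, unlabeled_loader, threshold=0.95):
    """Return (samples, labels) for unlabeled patches the model is confident about."""
    model.eval()
    kept_x, kept_y = [], []
    with torch.no_grad():
        for x in unlabeled_loader:                  # x: batch of noise-residual patches
            probs = torch.softmax(model(x), dim=1)  # per-camera class probabilities
            conf, pred = probs.max(dim=1)
            keep = conf >= threshold                # only trust confident predictions
            kept_x.append(x[keep])
            kept_y.append(pred[keep])
    return torch.cat(kept_x), torch.cat(kept_y)     # fed back as extra labeled data
```
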
16 pages, 2972 KB  
Review
AI-Driven Digital Pathology: Deep Learning and Multimodal Integration for Precision Oncology
by Hyun-Jong Jang and Sung Hak Lee
Int. J. Mol. Sci. 2026, 27(1), 379; https://doi.org/10.3390/ijms27010379 - 29 Dec 2025
Abstract
Pathology is fundamental to precision oncology, offering molecular and morphologic insights that enable personalized diagnosis and treatment. Recently, deep learning has demonstrated substantial potential in digital pathology, effectively addressing a wide range of diagnostic, prognostic, and biomarker-prediction tasks. Although early approaches based on convolutional neural networks had limited capacity to generalize across tasks and datasets, transformer-based foundation models have substantially advanced the field by enabling scalable representation learning, enhancing cross-cohort robustness, and supporting few- and even zero-shot inference across a wide range of pathology applications. Furthermore, the ability of foundation models to integrate heterogeneous data within a unified processing framework broadens the possibility of developing more generalizable models for medicine. These multimodal foundation models can accelerate the advancement of pathology-based precision oncology by enabling coherent interpretation of histopathology together with radiology, clinical text, and molecular data, thereby supporting more accurate diagnosis, prognostication, and therapeutic decision-making. In this review, we provide a concise overview of these advances and examine how foundation models are driving the ongoing evolution of pathology-based precision oncology.
30 pages, 3006 KB  
Article
MiRA: A Zero-Shot Mixture-of-Reasoning Agents Framework for Multimodal Answering of Science Questions
by Fawaz Alsolami, Asmaa Alrayzah and Rayyan Najam
Appl. Sci. 2026, 16(1), 372; https://doi.org/10.3390/app16010372 - 29 Dec 2025
Abstract
Multimodal question answering (QA) involves integrating information from both visual and textual inputs and requires models that can reason compositionally and accurately across modalities. Existing approaches, including fine-tuned vision–language models and prompting-based methods, often struggle with generalization and interpretability and rely on task-specific data. In this work, we propose a Mixture-of-Reasoning Agents (MiRA) framework for zero-shot multimodal reasoning. MiRA decomposes the reasoning process across three specialized agents—Visual Analyzing, Text Comprehending, and Judge—which consolidate multimodal evidence. Each agent operates independently using pretrained language models, enabling structured, interpretable reasoning without supervised training or task-specific adaptation. Evaluated on the ScienceQA benchmark, MiRA achieves 96.0% accuracy, surpassing all zero-shot methods, outperforming few-shot GPT-4o models by more than 18% on image-based questions, and performing on par with the best fine-tuned systems. The analysis further shows that the Judge agent consistently improves the reliability of individual agent outputs, and that strong linear correlations (r > 0.95) exist between image-specific accuracy and overall performance across models. We identify a previously unreported and robust pattern in which performance on image-specific tasks strongly predicts overall task success. We also conduct detailed error analyses for each agent, highlighting complementary strengths and failure modes. These results demonstrate that modular agent collaboration with zero-shot reasoning provides highly accurate multimodal QA, establishing a new paradigm for zero-shot multimodal AI and offering a principled framework for future research in generalizable AI.
(This article belongs to the Special Issue Deep Learning and Its Applications in Natural Language Processing)
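The agent decomposition described above can be pictured with the following sketch, in which three independent zero-shot calls are consolidated by a judge; `ask_model`, the prompts, and the image-caption shortcut are hypothetical placeholders, not the MiRA implementation.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a pretrained (vision-)language model."""
    raise NotImplementedError("plug in the model API of your choice here")

def answer_science_question(question: str, choices: list[str], image_caption: str | None = None) -> str:
    # Agent 1: visual analysis (here reduced to reasoning over an image caption).
    visual = ask_model(
        f"Describe the visual evidence relevant to the question.\n"
        f"Image description: {image_caption or 'no image provided'}\n"
        f"Question: {question}")
    # Agent 2: textual comprehension of the question and its context.
    textual = ask_model(
        f"Analyze the question step by step using only the text.\n"
        f"Question: {question}\nChoices: {choices}")
    # Agent 3: a judge consolidates the two independent analyses into one answer.
    return ask_model(
        f"You are a judge. Given two independent analyses, return the single best choice.\n"
        f"Visual analysis: {visual}\nTextual analysis: {textual}\nChoices: {choices}")
```
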
40 pages, 5707 KB  
Review
Graph Representation Learning for Battery Energy Systems in Few-Shot Scenarios: Methods, Challenges and Outlook
by Xinyue Zhang and Shunli Wang
Batteries 2026, 12(1), 11; https://doi.org/10.3390/batteries12010011 - 26 Dec 2025
Abstract
Graph representation learning (GRL) has emerged as a unifying paradigm for modeling the relational and heterogeneous nature of battery energy storage systems (BESS), yet a systematic synthesis focused on data-scarce (few-shot) battery scenarios is still lacking. GRL offers a natural way to describe the structure and interaction of battery cells, modules and packs. At the same time, battery applications often suffer from very limited labeled data, especially for new chemistries, extreme operating conditions and second-life use. This review analyzes how GRL can be combined with few-shot learning to support key battery management tasks under such data-scarce conditions. We first introduce the basic ideas of graph representation learning, including models based on neighborhood aggregation, contrastive learning, autoencoders and transfer learning, and discuss typical data, model and algorithm challenges in few-shot scenarios. We then connect these methods to battery state estimation problems, covering state of charge, state of health, remaining useful life and capacity. Particular attention is given to approaches that use graph neural models, meta-learning, semi-supervised and self-supervised learning, Bayesian deep networks, and federated learning to extract transferable features from early-cycle data, partial charge–discharge curves and large unlabeled field datasets. Reported studies show that, with only a small fraction of labeled samples or a few initial cycles, these methods can achieve state and life prediction errors that are comparable to or better than conventional models trained on full datasets, while also improving robustness and, in some cases, providing uncertainty estimates. Based on this evidence, we summarize the main technical routes for few-shot battery scenarios and identify open problems in data preparation, cross-domain generalization, uncertainty quantification and deployment on real battery management systems. The review concludes with a research outlook, highlighting the need for pack-level graph models, physics-guided and probabilistic learning, and unified benchmarks to advance reliable graph-based few-shot methods for next-generation intelligent battery management.
(This article belongs to the Section Battery Modelling, Simulation, Management and Application)
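For readers new to graph representation learning, the sketch below illustrates the neighborhood-aggregation idea mentioned in the abstract on a toy pack graph; the layer design, feature dimensions, and adjacency are assumptions for illustration only, not a method from the review.

```python
# A minimal mean-aggregation graph layer: each cell node averages its neighbors'
# features (e.g., voltage, temperature, capacity estimate) and mixes them with
# its own features through learned linear maps.
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n_cells, in_dim) node features; adj: (n_cells, n_cells) 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh_mean = adj @ x / deg                      # average over connected cells
        return torch.relu(self.lin_self(x) + self.lin_neigh(neigh_mean))

# usage: four series-connected cells in a chain
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float32)
layer = MeanAggregationLayer(in_dim=3, out_dim=8)
h = layer(torch.randn(4, 3), adj)                       # (4, 8) cell embeddings
```
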
21 pages, 15857 KB  
Article
LogPPO: A Log-Based Anomaly Detector Aided with Proximal Policy Optimization Algorithms
by Zhihao Wang, Jiachen Dong and Chuanchuan Yang
Smart Cities 2026, 9(1), 5; https://doi.org/10.3390/smartcities9010005 - 26 Dec 2025
Abstract
Cloud-based platforms form the backbone of smart city ecosystems, powering essential services such as transportation, energy management, and public safety. However, their operational complexity generates vast volumes of system logs, making manual anomaly detection infeasible and raising reliability concerns. This study addresses the challenge of data scarcity in log anomaly detection by leveraging Large Language Models (LLMs) to enhance domain-specific classification tasks. We empirically validate that domain-adapted classifiers preserve strong natural language understanding, and introduce a Proximal Policy Optimization (PPO)-based approach to align semantic patterns between LLM outputs and classifier preferences. Experiments were conducted using three Transformer-based baselines under few-shot conditions across four public datasets. Results indicate that integrating natural language analyses improves anomaly detection F1-scores by 5–86% over the baselines, while iterative PPO refinement boosts the classifier's “confidence” in label prediction. This research presents a novel framework for few-shot log anomaly detection, establishing a new paradigm for resource-constrained diagnostic systems in smart city infrastructures.
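The PPO refinement mentioned above optimizes a clipped surrogate objective; a minimal sketch of that standard objective is given below, with tensor names assumed rather than taken from LogPPO.

```python
# Clipped PPO surrogate: maximize E[min(r * A, clip(r, 1-eps, 1+eps) * A)].
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = torch.exp(logp_new - logp_old)           # pi_new / pi_old per sample
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()     # negated for gradient descent
```
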
18 pages, 6246 KB  
Article
Cross-Modality Alignment Perception and Multi-Head Self-Attention Mechanism for Vision-Language-Action of Humanoid Robot
by Bin Ren and Diwei Shi
Sensors 2026, 26(1), 165; https://doi.org/10.3390/s26010165 - 26 Dec 2025
Abstract
For a humanoid robot, it is difficult to predict a motion trajectory through end-to-end imitation learning when performing complex operations and multi-step processes, leading to jittering in the robot arm. To alleviate this problem and reduce the computational complexity of the self-attention module in Vision-Language-Action (VLA) operations, we proposed a memory-gated filtering attention model that improves the multi-head self-attention mechanism. We then designed a cross-modal alignment perception mechanism for training, combined with a few-shot data-collection strategy for key steps. The experimental results showed that the proposed scheme significantly improved the task success rate and alleviated the robot arm jitter problem, while reducing video memory usage by 72% and cutting per-batch training time from 1.35 s to 0.129 s. Throughout, the scheme maintained high action accuracy and robustness on the humanoid robot.
21 pages, 2322 KB  
Article
A Unified AI Architecture for Self-Regulated Learning: Cognitive Modeling, Meta-Learning, and Continual Adaptation
by Ridouane Oubagine, Loubna Laaouina, Adil Jeghal and Hamid Tairi
Algorithms 2026, 19(1), 26; https://doi.org/10.3390/a19010026 - 26 Dec 2025
Abstract
The growing need for intelligent educational systems calls for architectures that support adaptive instruction while enabling durable, long-term personalization and cognitive alignment. Although adaptive learning technologies have progressed at the intersection of Self-Regulated Learning (SRL), Continual Learning (CL), and Meta-Learning, these techniques are generally employed in isolation and provide piecemeal solutions. In this paper, we propose CAMEL, a unified architecture combining (1) cognitive modelling based on SRL, (2) continual learning functionalities, and (3) meta-learning to provide adaptive, personalized, and cognitively consistent learning environments. CAMEL includes the following components: (1) a Cognitive State Estimator that infers learner motivation, attention, and persistence from behavioral traces; (2) a Meta-Learning Engine that enables rapid adaptation through Model-Agnostic Meta-Learning (MAML); (3) a Continual Learning Memory that preserves knowledge across sessions using Elastic Weight Consolidation (EWC) and replay; (4) a Pedagogical Decision Engine that makes efficient real-time adjustments of instructional strategies; and (5) a closed loop that continuously reconciles misalignments between pedagogical actions and predicted cognitive states. Experiments on the xAPI-Edu-Data dataset evaluate the system's few-shot adaptation capability, knowledge retention, cognitive-state prediction accuracy, and responsiveness to incoming questions. CAMEL offers competitive learner-state prediction and long-term performance, with consistent improvements across the different baselines. This paper lays the groundwork for next-generation adaptive, cognition-driven AI-based learning systems.
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
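As a concrete reference for the Continual Learning Memory component, the sketch below shows the standard Elastic Weight Consolidation penalty, a quadratic cost on parameters that were important (high Fisher information) for earlier sessions; it is a textbook formulation with assumed names, not the CAMEL code.

```python
import torch

def fisher_diagonal(model, loss_fn, data_loader):
    """Approximate the diagonal Fisher information from squared gradients."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(1, len(data_loader)) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """lam/2 * sum_i F_i * (theta_i - theta_i_old)^2, added to the new task loss."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * penalty
```
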
17 pages, 5114 KB  
Article
Neural Network-Enabled Process Flowsheet for Industrial Shot Peening
by Langdon Feltner and Paul Mort
Materials 2026, 19(1), 9; https://doi.org/10.3390/ma19010009 - 19 Dec 2025
Abstract
This work presents a dynamic flowsheet model that predicts residual stress from shot peening. The peening medium is characterized by size and shape, and evolves dynamically with abrasion, fracture, classification, and replenishment. Because particle size and impact location vary stochastically, the resulting residual stress field is spatially heterogeneous. Residual stress fields are predicted in real time through a convolutional long short-term memory (ConvLSTM) neural network trained on finite element simulations, enabling fast, mechanistically grounded prediction of surface stress evolution under industrial shot peening conditions. We deploy the model in a pair of 10,000-cycle production peening case studies, demonstrating that media recharge strategy has a measurable effect on residual stress outcomes.
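Since the predictor is a ConvLSTM, a minimal cell implementation is sketched below to make the recurrence concrete; channel counts, map sizes, and the single-step usage are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """One convolutional LSTM cell: all four gates come from a single convolution
    over the concatenated input frame and previous hidden state."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=k, padding=k // 2)

    def forward(self, x, state):
        h, c = state                                       # hidden/cell maps, (B, hid_ch, H, W)
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)                      # update cell state
        h = o * torch.tanh(c)                              # new hidden state
        return h, c

# usage: roll the cell over successive peening exposure steps (dummy data)
cell = ConvLSTMCell(in_ch=1, hid_ch=16)
h = c = torch.zeros(2, 16, 32, 32)                         # batch of 2, 32x32 stress maps
for step in range(5):
    frame = torch.randn(2, 1, 32, 32)                      # per-step impact/coverage map
    h, c = cell(frame, (h, c))
residual_stress_features = h                               # would feed a decoder head in practice
```
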
15 pages, 5477 KB  
Article
Few-Shot Transfer Learning for Diabetes Risk Prediction Across Global Populations
by Shrinit Babel, Sunit Babel, John Hodgson and Enrico Camporesi
Medicina 2026, 62(1), 7; https://doi.org/10.3390/medicina62010007 - 19 Dec 2025
Abstract
Background and Objectives: Type 2 diabetes mellitus (T2DM) affects over 537 million adults worldwide and disproportionately burdens low- and middle-income countries, where diagnostic resources are limited. Predictive models trained in one population often fail to generalize across regions due to shifts in feature distributions and measurement practices, hindering scalable screening efforts. Materials and Methods: We evaluated a few-shot domain adaptation framework using a simple multilayer perceptron with four shared clinical features (age, body mass index, mean arterial pressure, and plasma glucose) across three adult cohorts: Bangladesh (n = 5288), Iraq (n = 662), and the Pima Indian dataset (n = 768). For each of the six source-target pairs, we pre-trained on the source cohort and then fine-tuned on 1, 5, 10, and 20% of the labeled target examples, reserving the remainder for testing; a final 20% few-shot variant was additionally combined with decision-threshold tuning. Discrimination and calibration metrics were evaluated before and after adaptation, and SHAP explainability analyses quantified shifts in feature importance and decision thresholds. Results: Several source → target transfers produced zero true positives under the strict source-only baseline at a fixed 0.5 decision threshold (e.g., Bangladesh → Pima F1 = 0.00, 0/268 diabetics detected). Few-shot fine-tuning restored non-zero recall in all such cases, with F1 improvements up to +0.63 and precision–recall gains in every zero-baseline transfer. In directions with moderate baseline performance (e.g., Bangladesh → Iraq, Iraq → Pima, Pima → Iraq), 20% few-shot adaptation with threshold tuning improved AUROC by +0.01 to +0.14 and accuracy by +4 to +17 percentage points while reducing Brier scores by up to 0.14 and ECE by approximately 30–80%, suggesting improved calibration. All but one transfer (Iraq → Bangladesh) demonstrated statistically significant improvement by McNemar's test (p < 0.001). SHAP analyses revealed population-specific threshold shifts: glucose inflection points ranged from ~120 mg/dL in Pima to ~150 mg/dL in Iraq, and the importance of BMI rose in Pima-targeted adaptations. Conclusions: Leveraging as few as 5–20% of local labels, few-shot domain adaptation enhances cross-population T2DM risk prediction using only routinely available features. This scalable, interpretable approach can democratize preventive screening in diverse, resource-constrained settings.
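The pre-train-then-fine-tune protocol described in the Methods can be sketched as follows, assuming NumPy feature matrices over the four shared clinical features; the network width, epochs, and learning rates are illustrative guesses, not the authors' settings.

```python
# Source pre-training + few-shot target fine-tuning for a small tabular MLP (sketch).
import numpy as np
import torch
import torch.nn as nn

def make_mlp(n_features=4):
    return nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                         nn.Linear(32, 16), nn.ReLU(),
                         nn.Linear(16, 1))                 # single logit output

def train(model, X, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    X = torch.as_tensor(X, dtype=torch.float32)
    y = torch.as_tensor(y, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    return model

def few_shot_transfer(X_src, y_src, X_tgt, y_tgt, frac=0.05, seed=0):
    """Pre-train on the source cohort, fine-tune on a small labeled target fraction,
    and score the held-out remainder of the target cohort."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X_tgt))
    k = max(1, int(frac * len(X_tgt)))
    shot, test = idx[:k], idx[k:]
    model = train(make_mlp(), X_src, y_src)                          # source pre-training
    model = train(model, X_tgt[shot], y_tgt[shot], epochs=50, lr=1e-4)  # few-shot fine-tune
    with torch.no_grad():
        logits = model(torch.as_tensor(X_tgt[test], dtype=torch.float32)).squeeze(-1)
    return torch.sigmoid(logits).numpy(), test                        # risk scores + held-out indices
```
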
26 pages, 8438 KB  
Article
LLM-WPFNet: A Dual-Modality Fusion Network for Large Language Model-Empowered Wind Power Forecasting
by Xuwen Zheng, Yongliang Luo and Yahui Shan
Symmetry 2025, 17(12), 2171; https://doi.org/10.3390/sym17122171 - 17 Dec 2025
Abstract
Wind power forecasting is critical to grid stability and renewable energy integration. However, existing deep learning methods struggle to incorporate semantic domain knowledge from textual information, exhibit limited generalization with scarce training data, and incur high computational costs for extensive fine-tuning. Large language models (LLMs) offer a promising solution through their semantic representations, few-shot learning capabilities, and multimodal processing abilities. This paper proposes LLM-WPFNet, a dual-modality fusion framework that integrates frozen pre-trained LLMs with time-series analysis for wind power forecasting. The key insight is encoding temporal patterns as structured textual prompts to enable semantic guidance from frozen LLMs without fine-tuning. LLM-WPFNet employs two parallel encoding branches to extract complementary features from time series and textual prompts, unified through asymmetric multi-head attention fusion that enables selective semantic knowledge transfer from frozen LLM embeddings to enhance temporal representations. By keeping the LLM frozen, our method achieves computational efficiency while leveraging robust semantic representations. Extensive experiments on four wind farm datasets (36–200 MW) across five prediction horizons (1–24 h) demonstrate that LLM-WPFNet consistently outperforms state-of-the-art baselines by 11% in MAE and RMSE. Notably, with only 10% of the training data, it achieves a 17.6% improvement over the best baseline, validating its effectiveness in both standard and data-scarce scenarios. These results highlight the effectiveness and robustness of the dual-modality fusion design for wind power prediction under complex real-world conditions.
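A minimal sketch of the fusion idea, in which temporal tokens attend to frozen LLM prompt embeddings through cross-attention, is shown below; the dimensions, projection, and residual/normalization choices are assumptions rather than the LLM-WPFNet architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Time-series tokens (queries) attend to frozen LLM prompt embeddings
    (keys/values), injecting semantic context into the temporal branch."""
    def __init__(self, d_model=128, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, ts_tokens, llm_tokens):
        # ts_tokens: (batch, T, d_model) temporal features
        # llm_tokens: (batch, L, d_model) projected, frozen LLM embeddings
        fused, _ = self.attn(query=ts_tokens, key=llm_tokens, value=llm_tokens)
        return self.norm(ts_tokens + fused)          # residual connection

# usage with dummy shapes
fusion = CrossModalFusion()
out = fusion(torch.randn(8, 96, 128), torch.randn(8, 32, 128))   # (8, 96, 128)
```
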
22 pages, 1236 KB  
Article
An Industrial Framework for Cold-Start Recommendation in Few-Shot and Zero-Shot Scenarios
by Xulei Cao, Wenyu Zhang, Feiyang Jiang and Xinming Zhang
Information 2025, 16(12), 1105; https://doi.org/10.3390/info16121105 - 15 Dec 2025
Abstract
With the rise of online advertising, e-commerce industries, and new media platforms, recommendation systems have become an essential product form that connects users with vast numbers of candidate items. A major challenge in recommendation systems is the cold-start problem, where the absence of historical interaction data for new users and items leads to poor recommendation performance. We first analyze the causes of the cold-start problem, highlighting the limitations of existing embedding models when faced with a lack of interaction data. To address this, we classify model features into three categories and leverage a Trans Block mapping to project available features into the semantic space of the missing ones. We then propose a model-agnostic industrial framework (MAIF) with an Auto-Selection serving mechanism to address the cold-start recommendation problem in few-shot and zero-shot scenarios without requiring training from scratch. This framework can be applied to various online models without altering predictions for warm entities, effectively avoiding the “seesaw phenomenon” between cold and warm entities, and it improves prediction accuracy and calibration in three cold-start scenarios of recommendation systems. Finally, both offline experiments on real-world industrial datasets and online deployment in the advertising system of the Dazhong Dianping app validate the effectiveness of our approach, showing significant improvements in recommendation performance for cold-start scenarios.
26 pages, 4817 KB  
Article
ProcessGFM: A Domain-Specific Graph Pretraining Prototype for Predictive Process Monitoring
by Yikai Hu, Jian Lu, Xuhai Zhao, Yimeng Li, Zhen Tian and Zhiping Li
Mathematics 2025, 13(24), 3991; https://doi.org/10.3390/math13243991 - 15 Dec 2025
Abstract
Predictive process monitoring estimates the future behaviour of running process instances based on historical event logs, with typical tasks including next-activity prediction, remaining-time estimation, and risk assessment. Existing recurrent and Transformer-based models achieve strong accuracy on individual logs but transfer poorly across processes and underuse the rich graph structure of event data. This paper introduces ProcessGFM, a domain-specific graph pretraining prototype for predictive process monitoring on event graphs. ProcessGFM employs a hierarchical graph neural architecture that jointly encodes event-level, case-level, and resource-level structure and is pretrained in a self-supervised manner on multiple benchmark logs using masked activity reconstruction, temporal order consistency, and pseudo-labelled outcome prediction. A multi-task prediction head and an adversarial domain alignment module adapt the pretrained backbone to downstream tasks and stabilise cross-log generalisation. On the BPI 2012, 2017, and 2019 logs, ProcessGFM improves next-activity accuracy by 2.7 to 4.5 percentage points over the best graph baseline, reaching up to 89.6% accuracy and 87.1% macro-F1. For remaining-time prediction, it attains mean absolute errors between 0.84 and 2.11 days, reducing error by 11.7% to 18.2% relative to the strongest graph baseline. For case-level risk prediction, it achieves area-under-the-curve scores between 0.907 and 0.934 and raises precision at 10% recall by 6.7 to 8.1 percentage points. Cross-log transfer experiments show that ProcessGFM retains between about 90% and 96% of its in-domain next-activity accuracy when applied zero-shot to a different log. Attention-based analysis highlights critical subgraphs that can be projected back to Petri net fragments, providing interpretable links between structural patterns, resource handovers, and late cases.
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)
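One of the pretraining objectives, masked activity reconstruction, follows the familiar masked-token pattern; the sketch below uses a generic Transformer encoder as a stand-in for the hierarchical graph backbone, and every name and hyperparameter in it is an assumption rather than part of ProcessGFM.

```python
import torch
import torch.nn as nn

class MaskedActivityPretrainer(nn.Module):
    """Replace a fraction of activity ids with a [MASK] token and train the
    encoder to recover them, learning process structure without outcome labels."""
    def __init__(self, n_activities, d_model=64, mask_ratio=0.15):
        super().__init__()
        self.mask_id = n_activities                 # extra token id reserved for [MASK]
        self.mask_ratio = mask_ratio
        self.embed = nn.Embedding(n_activities + 1, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_activities)

    def forward(self, traces):
        # traces: (batch, seq_len) integer activity ids of historical cases
        mask = torch.rand_like(traces, dtype=torch.float) < self.mask_ratio
        corrupted = traces.masked_fill(mask, self.mask_id)
        hidden = self.encoder(self.embed(corrupted))
        logits = self.head(hidden)                  # (batch, seq_len, n_activities)
        # loss over masked positions only (assumes at least one position is masked)
        return nn.functional.cross_entropy(logits[mask], traces[mask])
```
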
25 pages, 1343 KB  
Review
A Critical Review of Diffusion—Thermomechanical and Composite Reinforcement Approaches for Surface Hardening of Aluminum Alloys and Matrix Composites
by Narayana Swamy Rangaiah, Ananda Hegde, Sathyashankara Sharma, Gowrishankar Mandya Channegowda, Umanath R. Poojary and Niranjana Rai
J. Compos. Sci. 2025, 9(12), 689; https://doi.org/10.3390/jcs9120689 - 12 Dec 2025
Abstract
Aluminum alloys require improved surface performance to satisfy the demands of today's aerospace, automotive, marine, and structural applications. This paper compares three key surface hardening methods: diffusion-assisted microalloying, thermomechanical deformation-based treatments, and composite/hybrid reinforcing procedures. Diffusion-assisted Zn/Mg enrichment allows for localized precipitation hardening but is limited by the native Al2O3 barrier, slow solute mobility, alloy-dependent solubility, and shallow hardened depths. In contrast, thermomechanical techniques such as shot peening, surface mechanical attrition treatment (SMAT), and laser shock peening produce ultrafine/nanocrystalline layers, high dislocation densities, and deep compressive residual stresses, allowing for predictable increases in hardness, fatigue resistance, and corrosion performance. Composite and hybrid reinforcement systems, such as SiC, B4C, graphene, and graphite-based aluminum matrix composites (AMCs), use load transfer, Orowan looping, interfacial strengthening, and solid lubrication effects to enhance wear resistance and through-thickness strengthening. Comparative evaluations show that, while diffusion-assisted procedures are still labor-intensive and solute-sensitive, thermomechanical treatments are more industrially established and scalable. Composite and hybrid systems provide the best tribological and load-bearing performance but necessitate more sophisticated processing approaches. Recent corrosion studies show that interfacial chemistry, precipitate distribution, and galvanic coupling all have a significant impact on pitting and stress corrosion cracking (SCC). These findings highlight the importance of treating corrosion as a fundamental design variable in all surface hardening techniques. This work uses unified tables and drawings to provide a thorough examination of strengthening mechanisms, corrosion and fatigue behavior, hardening depth, alloy suitability, and industrial feasibility. Future research should focus on overcoming diffusion barriers, establishing next-generation gradient topologies and hybrid processing approaches, improving strength–ductility–corrosion trade-offs, and utilizing machine-learning-guided alloy design. This research presents the first comprehensive framework for selecting multifunctional aluminum surfaces in demanding aerospace, automotive, and marine applications by treating composite reinforcements as complements rather than strict alternatives to diffusion-assisted and thermomechanical approaches.
(This article belongs to the Section Metal Composites)
8 pages, 2266 KB  
Proceeding Paper
A Fatigue Life Calculation Procedure Implementing Surface and Depth-Graded Mechanical Properties
by Paschalis Adamidis, Christos Gakias, Efstratios Giannakis and Georgios Savaidis
Eng. Proc. 2025, 119(1), 25; https://doi.org/10.3390/engproc2025119025 - 11 Dec 2025
Abstract
This study presents a fatigue life prediction procedure for high-strength steel suspension components that exhibit surface and depth-graded mechanical properties due to manufacturing processes such as shot peening and heat treatment. A layer-by-layer approach based on local stress and material properties at the examined depth from the surface is implemented, allowing the generation of S-N curves that reflect the local fatigue response at different depths. The methodology is applied to a parabolic monoleaf spring for the axle suspension of commercial vehicles, made of 51CrV4 steel, and validated against experimental fatigue data. Results show strong agreement, demonstrating the effectiveness of incorporating local mechanical characteristics in terms of stress and material properties into fatigue design workflows.
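The layer-by-layer idea can be made concrete with a small numerical sketch: each depth gets local properties and a local stress amplitude, a life is computed per layer, and the shortest-lived layer governs the component. The Basquin-type S-N relation and all numbers below are illustrative assumptions, not values or the procedure from the paper.

```python
# Layer-by-layer fatigue sketch with an assumed Basquin-type S-N relation.
def basquin_life(stress_amp, sigma_f, b):
    """Cycles to failure N from sigma_a = sigma_f * (2N)^b."""
    return 0.5 * (stress_amp / sigma_f) ** (1.0 / b)

layers = [  # (depth [mm], local stress amplitude [MPa], local sigma_f' [MPa], local exponent b)
    (0.0, 620.0, 1900.0, -0.09),   # peened surface layer: higher local strength
    (0.2, 660.0, 1750.0, -0.10),
    (0.5, 640.0, 1600.0, -0.11),
]
lives = {depth: basquin_life(s_a, s_f, b) for depth, s_a, s_f, b in layers}
critical_depth = min(lives, key=lives.get)          # shortest-lived layer governs
print(f"critical layer at {critical_depth} mm, predicted life {lives[critical_depth]:.3g} cycles")
```
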
23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Abstract
A special class of complex adaptive systems—biological and social—thrives not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today's AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep learning neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). We validate this approach on the classic Credit Default Prediction problem by comparing a traditional, static Logistic Regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the baseline on the predictive task, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”.
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)