Search Results (1,601)

Search Parameters:
Keywords = adaptive rules

19 pages, 444 KB  
Article
Enhancing Cascade Object Detection Accuracy Using Correctors Based on High-Dimensional Feature Separation
by Andrey V. Kovalchuk, Andrey A. Lebedev, Olga V. Shemagina, Irina V. Nuidel, Vladimir G. Yakhno and Sergey V. Stasenko
Technologies 2025, 13(12), 593; https://doi.org/10.3390/technologies13120593 - 16 Dec 2025
Abstract
This study addresses the problem of correcting systematic errors in classical cascade object detectors under severe data scarcity and distribution shift. We focus on the widely used Viola–Jones framework enhanced with a modified Census transform and propose a modular “corrector” architecture that can be attached to an existing detector without retraining it. The key idea is to exploit the blessing of dimensionality: high-dimensional feature vectors constructed from multiple cascade stages are transformed by PCA and whitening into a space where simple linear Fisher discriminants can reliably separate rare error patterns from normal operation using only a few labeled examples. This study presents a novel algorithm designed to correct the outputs of object detectors constructed using the Viola–Jones framework enhanced with a modified census transform. The proposed method introduces several improvements addressing error correction and robustness in data-limited conditions. The approach involves image partitioning through a sliding window of fixed aspect ratio and a modified census transform in which pixel intensity is compared to the mean value within a rectangular neighborhood. Training samples for false negative and false positive correctors are selected using dual Intersection-over-Union (IoU) thresholds and probabilistic sampling of true positive and true negative fragments. Corrector models are trained based on the principles of high-dimensional separability within the paradigm of one- and few-shot learning, utilizing features derived from cascade stages of the detector. Decision boundaries are optimized using Fisher’s rule, with adaptive thresholding to guarantee zero false acceptance. Experimental results indicate that the proposed correction scheme enhances object detection accuracy by effectively compensating for classifier errors, particularly under conditions of scarce training data. 
On two railway image datasets with only about one thousand images each, the proposed correctors increase Precision from 0.36 to 0.65 on identifier detection while maintaining high Recall (0.98 → 0.94), and improve digit detection Recall from 0.94 to 0.98 with negligible loss in Precision (0.92 → 0.91). These results demonstrate that even under scarce training data, high-dimensional feature separation enables effective one-/few-shot error correction for cascade detectors with minimal computational overhead. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
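The correction scheme summarized above — PCA and whitening of multi-stage cascade features, a linear Fisher discriminant, and an adaptive threshold set for zero false acceptance — can be sketched roughly as follows. The data, dimensions, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_corrector(X_err, X_ok, n_components=10):
    """Fit a Fisher-style linear corrector in a PCA-whitened feature space.

    X_err: feature vectors of a few labeled detector errors (one/few-shot).
    X_ok:  feature vectors of normal detector outputs.
    The threshold is placed just above the highest-scoring normal sample,
    so no reference sample is falsely accepted as an error.
    """
    X = np.vstack([X_err, X_ok])
    mu = X.mean(axis=0)
    # PCA + whitening via SVD of the centered data.
    _, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:n_components].T / (S[:n_components] / np.sqrt(len(X) - 1))
    Z_err, Z_ok = (X_err - mu) @ W, (X_ok - mu) @ W
    # In whitened coordinates the Fisher direction reduces (approximately)
    # to the difference of the class means.
    w = Z_err.mean(axis=0) - Z_ok.mean(axis=0)
    thr = (Z_ok @ w).max() + 1e-9  # adaptive zero-false-acceptance threshold
    return mu, W, w, thr

def is_error(x, mu, W, w, thr):
    """Flag a feature vector as a (rare) detector error."""
    return float(((x - mu) @ W) @ w) > thr
```

Even a handful of labeled errors can suffice here, because linear separability of rare patterns improves with dimension — the "blessing of dimensionality" the abstract invokes.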

14 pages, 1284 KB  
Article
A Comparative Study of Machine and Deep Learning Approaches for Smart Contract Vulnerability Detection
by Mohammed Yaseen Alhayani, Wisam Hazim Gwad, Shahab Wahhab Kareem and Moustafa Fayad
Technologies 2025, 13(12), 592; https://doi.org/10.3390/technologies13120592 - 16 Dec 2025
Abstract
The increasing use of blockchain smart contracts has introduced new security challenges, as small coding errors can lead to major financial losses. While rule-based static analyzers remain the most common detection tools, their limited adaptability often results in false positives and outdated vulnerability patterns. This study presents a comprehensive comparative analysis of machine learning (ML) and deep learning (DL) methods for smart contract vulnerability detection using the BCCC-SCsVuls-2024 benchmark dataset. Six models (Random Forest, k-Nearest Neighbors, Simple and Deep Multilayer Perceptron, and Simple and Deep one-dimensional Convolutional Neural Networks) were evaluated under a unified experimental framework combining RobustScaler normalization and Principal Component Analysis (PCA) for dimensionality reduction. Our experimental results from a five-fold cross-validation show that the Random Forest classifier achieved the best overall performance with an accuracy of 89.44% and an F1-score of 93.20%, outperforming both traditional and neural models in stability and generalization. PCA-based feature analysis revealed that opcode-level features, particularly stack and memory manipulation instructions (PUSH, DUP, SWAP, and RETURNDATASIZE), were the most influential in defining contract behavior. Full article
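The unified experimental framework described above — RobustScaler normalization, PCA for dimensionality reduction, and five-fold cross-validation of a Random Forest — maps onto a standard scikit-learn pipeline. The synthetic data below is only a stand-in for the BCCC-SCsVuls-2024 opcode features, and the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for opcode-frequency features (the paper uses the
# BCCC-SCsVuls-2024 benchmark, not this generated data).
X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           weights=[0.7, 0.3], random_state=0)

pipe = Pipeline([
    ("scale", RobustScaler()),        # robust to heavy-tailed opcode counts
    ("pca", PCA(n_components=0.95)),  # keep 95% of the variance
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Five-fold cross-validation, scored on the positive (vulnerable) class.
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1")
print(f"5-fold F1: {scores.mean():.3f} ± {scores.std():.3f}")
```

Fitting the scaler and PCA inside the pipeline ensures they are refit on each training fold, avoiding leakage into the validation folds.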

32 pages, 1453 KB  
Article
Robust Explosion Point Location Detection via Multi–UAV Data Fusion: An Improved D–S Evidence Theory Framework
by Xuebin Liu and Hanshan Li
Mathematics 2025, 13(24), 3997; https://doi.org/10.3390/math13243997 - 15 Dec 2025
Abstract
The Dempster–Shafer (D–S) evidence theory, while powerful for uncertainty reasoning, suffers from mathematical limitations in high–conflict scenarios where its combination rule produces counterintuitive results. This paper introduces a reformulated D–S framework grounded in optimization theory and information geometry. We rigorously construct a dynamic weight allocation mechanism derived from minimizing systemic Jensen–Shannon divergence and propose a conflict–adaptive fusion rule with theoretical guarantees. We formally prove that our framework possesses the Conflict Attenuation Property and Robustness to Outlier Evidence. Extensive Monte Carlo simulations in multi–UAV explosion point localization demonstrate the framework’s superiority, reducing localization error by 75.6% in high–conflict scenarios compared to classical D–S. This work provides not only a robust application solution but also a theoretically sound and generalizable mathematical framework for multi–source data fusion under uncertainty. Full article
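The general recipe the abstract describes — credibility weights derived from Jensen–Shannon divergence, followed by a conflict-aware combination — can be sketched on singleton hypotheses. This is an illustration of the idea, not the authors' exact dynamic weight allocation or fusion rule:

```python
import numpy as np

def js_div(p, q):
    """Jensen–Shannon divergence (bits) between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fuse(bpas):
    """Weight each body of evidence by its average JS divergence to the
    others (low divergence -> high credibility), average, then apply a
    Dempster-style self-combination over singleton hypotheses."""
    n = len(bpas)
    div = np.array([np.mean([js_div(bpas[i], bpas[j]) for j in range(n) if j != i])
                    for i in range(n)])
    w = np.exp(-div)
    w /= w.sum()
    m = sum(wi * np.asarray(b, float) for wi, b in zip(w, bpas))
    fused = m.copy()
    for _ in range(n - 1):          # combine the averaged evidence n-1 times
        fused = fused * m
        fused /= fused.sum()        # renormalize: conflict mass redistributed
    return fused

# Three sensors agree on hypothesis 0; one conflicting outlier votes for 1.
bpas = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1], [0.75, 0.15, 0.1], [0.05, 0.9, 0.05]]
print(fuse(bpas))
```

Because the outlier diverges strongly from the other three sources, its weight is attenuated and the fused mass concentrates on hypothesis 0 — the behavior classical Dempster combination famously fails to deliver in high-conflict cases.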
10 pages, 2882 KB  
Article
AI-Assisted Composite Etch Model for MPT
by Yanbin Gong, Fengsheng Zhao, Devin Sima, Wenzhang Li, Yingxiong Guo, Cheming Hu and Shengrui Zhang
Micromachines 2025, 16(12), 1410; https://doi.org/10.3390/mi16121410 - 15 Dec 2025
Abstract
For advanced semiconductor nodes, the demand for high-precision patterning of complex foundry circuits drives the widespread use of Lithography-Etch-Lithography-Etch (LELE)—a key Multiple Patterning Technology (MPT)—in Deep Ultraviolet (DUV) processes. However, the interaction between LELE’s two Lithography-Etch (LE) cycles makes it very challenging to build a model for etching contour simulation and hotspot detection. This study presents an Artificial Intelligence (AI)-assisted composite etch model to capture inter-LE interactions, which directly outputs the final post-LELE etch contour, enabling Etch Rule Check (ERC)-based simulation detection of After Etch Inspection (AEI) hotspots. In addition, the etch model proposed in this study can also predict the etch bias of different types of pattern (especially complex two-dimensional (2D) patterns), thereby enabling auto retargeting for After Develop Inspection (ADI) target generation. In the future, the framework of this composite model can be adapted to the Self-Aligned Reverse Patterning (SARP) + Cut process to address more complex MPT challenges. Full article
(This article belongs to the Special Issue Recent Advances in Lithography)

51 pages, 3324 KB  
Review
Application of Artificial Intelligence in Control Systems: Trends, Challenges, and Opportunities
by Enrique Ramón Fernández Mareco and Diego Pinto-Roa
AI 2025, 6(12), 326; https://doi.org/10.3390/ai6120326 - 14 Dec 2025
Abstract
The integration of artificial intelligence (AI) into intelligent control systems has advanced significantly, enabling improved adaptability, robustness, and performance in nonlinear and uncertain environments. This study conducts a PRISMA-2020-compliant systematic mapping of 188 peer-reviewed articles published between 2000 and 15 January 2025, identified through fully documented Boolean queries across IEEE Xplore, ScienceDirect, SpringerLink, Wiley, and Google Scholar. The screening process applied predefined inclusion–exclusion criteria, deduplication rules, and dual independent review, yielding an inter-rater agreement of κ = 0.87. The resulting synthesis reveals three dominant research directions: (i) control model strategies (36.2%), (ii) parameter optimization methods (45.2%), and (iii) adaptability mechanisms (18.6%). The most frequently adopted approaches include fuzzy logic structures, hybrid neuro-fuzzy controllers, artificial neural networks, evolutionary and swarm-based metaheuristics, model predictive control, and emerging deep reinforcement learning frameworks. Although many studies report enhanced accuracy, disturbance rejection, and energy efficiency, the analysis identifies persistent limitations, including overreliance on simulations, inconsistent reporting of hyperparameters, limited real-world validation, and heterogeneous evaluation criteria. This review consolidates current AI-enabled control technologies, compares methodological trade-offs, and highlights application-specific outcomes across renewable energy, robotics, agriculture, and industrial processes. It also delineates key research gaps related to reproducibility, scalability, computational constraints, and the need for standardized experimental benchmarks. The results aim to provide a rigorous and reproducible foundation for guiding future research and the development of next-generation intelligent control systems. Full article

9 pages, 216 KB  
Article
Mental Health Status of North Korean Refugee Adolescents Living in South Korea: A Comparative Study with South Korean Adolescents
by Susie Kim, Hyo-Seong Han, You-Shin Yi, Eun-Ju Bae, Youngil Lee, Chang-Min Lee, Ji-Yeon Shim, Dong-Sun Chung, Min-Sun Kim and Myung-Ho Lim
Children 2025, 12(12), 1689; https://doi.org/10.3390/children12121689 - 12 Dec 2025
Abstract
The refugee population is increasing worldwide, and in South Korea, the refugee population, including children and adolescents, is also rapidly increasing. This study aimed to compare the psychological problems of North Korean refugee adolescents with those of South Korean adolescents and to evaluate their mental health characteristics. Methods: This cross-sectional comparative study assessed psychological problems using the Korean version of the Youth Self-Report Scale (K-YSR) among 206 South Korean adolescents and 130 North Korean refugee adolescents enrolled in middle and high schools in Gyeonggi Province. The inclusion criteria included adolescents aged 13–18 years at middle or high school and residing in South Korea for at least 6 months (for North Korean refugees). Data were collected in October 2025. Results: North Korean refugee adolescents showed significantly higher scores of anxiety/depression (F = 11.304, p < 0.001, η2 = 0.033), somatic symptoms (F = 20.997, p < 0.001, η2 = 0.060), social immaturity (F = 11.083, p < 0.001, η2 = 0.032), rule-breaking behavior (F = 12.851, p < 0.001, η2 = 0.037), and aggressive behavior (F = 50.386, p < 0.001, η2 = 0.132). Notably, the largest effect size (η2 = 0.132) was observed in the aggressive behavior domain, while the somatic symptoms also showed a moderate effect size (η2 = 0.060). In the ANCOVA analysis, controlling for gender and age as covariates, female students scored higher in the anxiety/depression and somatic symptoms domains, while male students scored higher in the rule-breaking behavior and aggressive behavior domains. Conclusions: North Korean refugee adolescents experience various psychological difficulties during their adaptation to South Korean society. These results can be used as basic data to detect mental health problems in North Korean adolescent refugees early and develop customized support plans. Full article
13 pages, 2756 KB  
Article
Acid Versus Amide—Facts and Fallacies: A Case Study in Glycomimetic Ligand Design
by Martin Smieško, Roman P. Jakob, Tobias Mühlethaler, Roland C. Preston, Timm Maier and Beat Ernst
Molecules 2025, 30(24), 4751; https://doi.org/10.3390/molecules30244751 - 12 Dec 2025
Abstract
The replacement of ionizable functional groups that are predominantly charged at physiological pH with neutral bioisosteres is a common strategy in medicinal chemistry; however, its impact on binding affinity is often context-dependent. Here, we investigated a series of amide derivatives of a glycomimetic E-selectin ligand, in which the carboxylate group of the lead compound is substituted with a range of amide and isosteric analogs. Despite the expected loss of the salt-bridge interaction with Arg97, several amides retained or even improved the binding affinity. Co-crystal structures revealed conserved binding poses across the series, with consistent interactions involving the carbonyl oxygen of the amide and the key residues Tyr48 and Arg97. High-level quantum chemical calculations ruled out a direct correlation between carbonyl partial charges and affinity. Instead, a moderate correlation was observed between ligand binding and the out-of-plane pyramidality of the amide nitrogen, suggesting a favorable steric adaptation within the binding site. Molecular dynamics (MD) simulations revealed that high-affinity ligands exhibit enhanced solution-phase pre-organization toward the bioactive conformation, likely reducing the entropic penalty upon binding. Further analysis of protein–ligand complexes using Molecular mechanics/Generalized born surface area (MM-GB/SA) decomposition suggested minor lipophilic contributions from amide substituents. Taken together, this work underscores the importance of geometric and conformational descriptors, beyond classical electrostatics, in driving affinity in glycomimetic ligand design and provides new insights into the nuanced role of amides as carboxylate isosteres in protein–ligand recognition. Full article

29 pages, 1944 KB  
Article
Towards Governance of Socio-Technical System of Systems: Leveraging Lessons from Proven Engineering Principles
by Mohamed Mogahed and Mo Mansouri
Systems 2025, 13(12), 1113; https://doi.org/10.3390/systems13121113 - 10 Dec 2025
Abstract
Healthcare delivery systems operate as complex socio-technical Systems-of-Systems (SoS), where autonomous entities—hospitals, insurers, laboratories, and technology vendors—must coordinate to achieve collective outcomes that exceed individual capabilities. Despite substantial investment in interoperability standards and regulatory frameworks, persistent fragmentation undermines care quality, operational efficiency, and systemic adaptability. This fragmentation stems from a fundamental governance paradox: how can independent systems retain operational autonomy while adhering to shared rules that ensure systemic resilience? This paper addresses this challenge by advancing a governance-oriented architecture grounded in Object-Oriented Programming (OOP) principles. We reinterpret core OOP constructs—encapsulation, modularity, inheritance, polymorphism, and interface definition—as governance mechanisms that enable autonomy through principled constraints while fostering structured coordination across heterogeneous systems. Central to this framework is the Confluence Interoperability Covenant (CIC), a socio-technical governance artifact that functions as an adaptive interface mechanism, codifying integrated legal, procedural, and technical standards without dictating internal system architectures. To validate this approach, we develop a functional proof-of-concept simulation using Petri Nets, modeling constituent healthcare systems as autonomous entities interacting through CIC-governed transitions. Comparative simulation results demonstrate that CIC-based governance significantly reduces fragmentation (from 0.8077 to 0.1538) while increasing successful interactions fivefold (from 68 to 339 over 400 steps). This work contributes foundational principles for SoS Engineering and offers practical guidance for designing scalable, interoperable governance architectures in mission-critical socio-technical domains. Full article
(This article belongs to the Special Issue Governance of System of Systems (SoS))

27 pages, 9001 KB  
Article
The Research on a Collaborative Management Model for Multi-Source Heterogeneous Data Based on OPC Communication
by Jiashen Tian, Cheng Shang, Tianfei Ren, Zhan Li, Eming Zhang, Jing Yang and Mingjun He
Sensors 2025, 25(24), 7517; https://doi.org/10.3390/s25247517 - 10 Dec 2025
Abstract
Effectively managing multi-source heterogeneous data remains a critical challenge in distributed cyber-physical systems (CPS). To address this, we present a novel and edge-centric computing framework integrating four key technological innovations. Firstly, a hybrid OPC communication stack seamlessly combines Client/Server, Publish/Subscribe, and P2P paradigms, enabling scalable interoperability across devices, edge nodes, and the cloud. Secondly, an event-triggered adaptive Kalman filter is introduced; it incorporates online noise-covariance estimation and multi-threshold triggering mechanisms. This approach significantly reduces state-estimation error by 46.7% and computational load by 41% compared to conventional fixed-rate sampling. Thirdly, temporal asynchrony among edge sensors is resolved by a Dynamic Time Warping (DTW)-based data-fusion module, which employs optimization constrained by Mahalanobis distance. Ultimately, a content-aware deterministic message queue data distribution mechanism is designed to ensure an end-to-end latency of less than 10 ms for critical control commands. This mechanism, which utilizes a “rules first” scheduling strategy and a dynamic resource allocation mechanism, guarantees low latency for key instructions even under the response loads of multiple data messages. The core contribution of this study is the proposal and empirical validation of an architecture co-design methodology aimed at ultra-high-performance industrial systems. This approach moves beyond the conventional paradigm of independently optimizing individual components, and instead prioritizes system-level synergy as the foundation for performance enhancement. Experimental evaluations were conducted under industrial-grade workloads, which involve over 100 heterogeneous data sources. 
These evaluations reveal that systems designed with this methodology can simultaneously achieve millimeter-level accuracy in field data acquisition and millisecond-level latency in the execution of critical control commands. These results highlight a promising pathway toward the development of real-time intelligent systems capable of meeting the stringent demands of next-generation industrial applications, and demonstrate immediate applicability in smart manufacturing domains. Full article
(This article belongs to the Section Communications)
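The event-triggered Kalman filtering idea in the abstract can be illustrated with a deliberately simplified scalar filter that skips measurement updates when the normalized innovation is small. The paper's filter additionally estimates noise covariances online and uses multiple thresholds; all constants and the random-walk model here are assumptions:

```python
import numpy as np

class EventTriggeredKF:
    """1-D Kalman filter that performs a measurement update only when the
    normalized squared innovation exceeds a trigger threshold, saving
    communication and compute on uninformative samples (illustrative)."""

    def __init__(self, q=1e-4, r=0.04, threshold=1.0):
        self.x, self.p = 0.0, 1.0     # state estimate and its variance
        self.q, self.r = q, r         # process / measurement noise variances
        self.threshold = threshold
        self.updates = 0              # number of triggered updates

    def step(self, z):
        self.p += self.q              # predict (random-walk state model)
        innov = z - self.x
        s = self.p + self.r           # innovation variance
        if innov * innov / s > self.threshold:   # event trigger
            k = self.p / s            # Kalman gain
            self.x += k * innov
            self.p *= (1 - k)
            self.updates += 1
        return self.x

rng = np.random.default_rng(0)
kf = EventTriggeredKF()
true = 2.0
for z in true + 0.2 * rng.standard_normal(500):
    est = kf.step(z)
print(f"estimate={est:.2f}, updates used={kf.updates}/500")
```

The estimate converges to the true value while a large share of samples never triggers an update — the mechanism behind the computational-load reduction the abstract reports.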

21 pages, 2920 KB  
Article
Impediments to, and Opportunities for, the Incorporation of Science into Policy and Practice into the Sustainable Management of Groundwater in Pakistan
by Faizan ul Hasan
Water 2025, 17(24), 3496; https://doi.org/10.3390/w17243496 - 10 Dec 2025
Abstract
Groundwater sustains more than 60% of irrigation in Pakistan’s Indus Basin, yet accelerating depletion, rising salinity and fragmented governance threaten agricultural productivity and rural livelihoods. Although new monitoring technologies and provincial water laws have emerged, a persistent gap remains between scientific evidence, policy frameworks and farmer practices. This study applies the Science–Policy–Practice Interface (SPPI) to examine these disconnects, drawing on qualitative data from multi-stakeholder focus groups and interviews with farmers, scientists and policymakers in Punjab, Sindh and federal agencies. The analysis identifies five governance challenges: weak knowledge integration, fragmented institutions, political resistance to regulation, limited adaptive capacity and under-recognition of farmer-led innovations. While depletion is well documented, it rarely informs enforceable rules and informal practices often outweigh formal regulation. At the same time, farmers contribute adaptive strategies, such as recharge initiatives and water-sharing arrangements, that remain invisible to policy. The findings highlight both the potential and the limits of SPPI. It provides a valuable lens for aligning science, policy and practice but cannot overcome entrenched political economy barriers such as subsidies and elite capture. The study contributes theoretically by extending SPPI to irrigation-dependent aquifers and practically by identifying opportunities for hybrid knowledge systems to support adaptive and equitable groundwater governance in Pakistan and other LMICs. Full article
(This article belongs to the Section Water Resources Management, Policy and Governance)

22 pages, 371 KB  
Review
Artificial Intelligence as the Next Frontier in Cyber Defense: Opportunities and Risks
by Oladele Afolalu and Mohohlo Samuel Tsoeu
Electronics 2025, 14(24), 4853; https://doi.org/10.3390/electronics14244853 - 10 Dec 2025
Abstract
The limitations of conventional rule-based security systems have been exposed by the rapid evolution of cyber threats, necessitating more proactive, intelligent, and flexible solutions. In cybersecurity, Artificial Intelligence (AI) has emerged as a transformative factor, offering improved threat detection, prediction, and automated response capabilities. This paper explores the advantages of using AI in strengthening cybersecurity, focusing on its applications in machine learning, Deep Learning, Natural Language Processing, and reinforcement learning. We highlight the improvement brought by AI in terms of real-time incident response, detection accuracy, scalability, and false positive reduction while processing massive datasets. Furthermore, we examine the challenges that accompany the integration of AI into cybersecurity, including adversarial attacks, data quality constraints, interpretability, and ethical implications. The study concludes by identifying potential future directions, such as integration with blockchain and IoT, Explainable AI, and the implementation of autonomous security systems. By presenting a comprehensive analysis, this paper underscores the exceptional potential of AI to transform cybersecurity into a field that is more robust, adaptive, and predictive. Full article

23 pages, 3559 KB  
Article
From Static Prediction to Mindful Machines: A Paradigm Shift in Distributed AI Systems
by Rao Mikkilineni and W. Patrick Kelly
Computers 2025, 14(12), 541; https://doi.org/10.3390/computers14120541 - 10 Dec 2025
Abstract
A special class of complex adaptive systems—biological and social—thrive not by passively accumulating patterns, but by engineering coherence, i.e., the deliberate alignment of prior knowledge, real-time updates, and teleonomic purposes. By contrast, today’s AI stacks—Large Language Models (LLMs) wrapped in agentic toolchains—remain rooted in a Turing-paradigm architecture: statistical world models (opaque weights) bolted onto brittle, imperative workflows. They excel at pattern completion, but they externalize governance, memory, and purpose, thereby accumulating coherence debt—a structural fragility manifested as hallucinations, shallow and siloed memory, ad hoc guardrails, and costly human oversight. The shortcoming of current AI relative to human-like intelligence is therefore less about raw performance or scaling, and more about an architectural limitation: knowledge is treated as an after-the-fact annotation on computation, rather than as an organizing substrate that shapes computation. This paper introduces Mindful Machines, a computational paradigm that operationalizes coherence as an architectural property rather than an emergent afterthought. A Mindful Machine is specified by a Digital Genome (encoding purposes, constraints, and knowledge structures) and orchestrated by an Autopoietic and Meta-Cognitive Operating System (AMOS) that runs a continuous Discover–Reflect–Apply–Share (D-R-A-S) loop. Instead of a static model embedded in a one-shot ML pipeline or deep learning neural network, the architecture separates (1) a structural knowledge layer (Digital Genome and knowledge graphs), (2) an autopoietic control plane (health checks, rollback, and self-repair), and (3) meta-cognitive governance (critique-then-commit gates, audit trails, and policy enforcement). 
We validate this approach on the classic Credit Default Prediction problem by comparing a traditional, static Logistic Regression pipeline (monolithic training, fixed features, external scripting for deployment) with a distributed Mindful Machine implementation whose components can reconfigure logic, update rules, and migrate workloads at runtime. The Mindful Machine not only matches the predictive task, but also achieves autopoiesis (self-healing services and live schema evolution), explainability (causal, event-driven audit trails), and dynamic adaptation (real-time logic and threshold switching driven by knowledge constraints), thereby reducing the coherence debt that characterizes contemporary ML- and LLM-centric AI architectures. The case study demonstrates “a hybrid, runtime-switchable combination of machine learning and rule-based simulation, orchestrated by AMOS under knowledge and policy constraints”. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

27 pages, 6838 KB  
Article
Voronoi-Induced Artifacts from Grid-to-Mesh Coupling and Bathymetry-Aware Meshes in Graph Neural Networks for Sea Surface Temperature Forecasting
by Giovanny A. Cuervo-Londoño, José G. Reyes, Ángel Rodríguez-Santana and Javier Sánchez
Electronics 2025, 14(24), 4841; https://doi.org/10.3390/electronics14244841 - 9 Dec 2025
Abstract
Accurate sea surface temperature (SST) forecasting in coastal upwelling systems requires predictive models capable of representing complex oceanic geometries. This work revisits grid-to-mesh coupling strategies in Graph Neural Networks (GNNs) and analyzes how mesh topology and connectivity influence prediction accuracy and artifact formation. This standard coupling process is a significant source of discretization errors and spurious numerical artifacts that compromise the final forecast’s accuracy. Using daily Copernicus SST and 10 m wind reanalysis data from 2000 to 2020 over the Canary Islands and the Northwest African region, we evaluate four mesh configurations under varying grid-to-mesh connection densities. We analyze two structured meshes and propose two new unstructured meshes for which their nodes are distributed according to the bathymetry of the ocean region. The results show that forecast errors exhibit geometric patterns equivalent to order-k Voronoi tessellations generated by the k-nearest neighbor association rule. Bathymetry-aware meshes with k=3 and k=4 grid-to-mesh connections significantly reduce polygonal artifacts and improve long-term coherence, achieving up to 30% lower RMSE relative to structured baselines. These findings reveal that the underlying geometry, rather than node count alone, governs error propagation in autoregressive GNNs. The proposed analysis framework provides a clear understanding of the implications of grid-to-mesh connections and establishes a foundation for artifact-aware, geometry-adaptive learning in operational oceanography. Full article
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)
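The order-k Voronoi structure reported in the abstract follows directly from the k-nearest-neighbor association rule: grid points that share the same unordered set of k mesh nodes fall into the same order-k Voronoi cell. A minimal NumPy sketch of such a grid-to-mesh assignment (function names and the toy coordinates are illustrative, not taken from the paper):

```python
import numpy as np

def grid_to_mesh_cells(grid_pts, mesh_nodes, k=3):
    """Associate each grid point with its k nearest mesh nodes.

    Grid points sharing the same unordered k-set of nodes lie in the
    same order-k Voronoi cell of the mesh node set -- the geometric
    pattern the forecast errors are reported to follow.
    """
    # Pairwise Euclidean distances, shape (n_grid, n_mesh)
    d = np.linalg.norm(grid_pts[:, None, :] - mesh_nodes[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest node indices
    cells = [frozenset(row) for row in idx]     # order-k Voronoi cell labels
    return idx, cells

# Toy example: 4 mesh nodes at the corners of the unit square
mesh = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
grid = np.array([[0.2, 0.1], [0.9, 0.2]])
idx, cells = grid_to_mesh_cells(grid, mesh, k=2)
# cells[0] -> frozenset({0, 1}); cells[1] -> frozenset({1, 3})
```

Changing k here changes which grid points get grouped together, which is the mechanism the paper identifies behind the polygonal error patterns.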
17 pages, 715 KB  
Article
Objective over Architecture: Fraud Detection Under Extreme Imbalance in Bank Account Opening
by Wenxi Sun, Qiannan Shen, Yijun Gao, Qinkai Mao, Tongsong Qi and Shuo Xu
Computation 2025, 13(12), 290; https://doi.org/10.3390/computation13120290 - 9 Dec 2025
Abstract
Fraud in financial services—especially account opening fraud—poses major operational and reputational risks. Static rules struggle to adapt to evolving tactics, missing novel patterns and generating excessive false positives. Machine learning promises adaptive detection, but deployment faces severe class imbalance: in the NeurIPS 2022 BAF Base benchmark used here, fraud prevalence is 1.10%. Standard metrics (accuracy, f1_weighted) can look strong while doing little for the minority class. We compare Logistic Regression, SVM (RBF), Random Forest, LightGBM, and a GRU model on N = 1,000,000 accounts under a unified preprocessing pipeline. All models are trained to minimize their loss function, while configurations are selected on a stratified development set using the weighted F1-score (f1_weighted) on validation data. For the four classical models, class weighting in the loss (class_weight ∈ {None, balanced}) is treated as a hyperparameter and tuned. The GRU is instead trained with a fixed class-weighted CrossEntropy loss that up-weights fraud cases. This ensures that both model families leverage weighted training objectives, while their final hyperparameters are consistently selected by the f1_weighted metric. Despite similar AUCs and aligned feature importance across families, the classical models converge to high-precision, low-recall solutions (1–6% fraud recall), whereas the GRU recovers 78% recall at 5% precision (AUC = 0.8800). Under extreme imbalance, objective choice and operating point matter at least as much as architecture.
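The class-weighting idea above can be made concrete: with scikit-learn's `'balanced'` convention, each class receives the weight n / (n_classes * n_c), so at roughly 1% prevalence the fraud class is up-weighted by about 50x. A small NumPy sketch of a class-weighted cross-entropy objective (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def weighted_cross_entropy(p, y, class_weight):
    """Mean binary cross-entropy with per-class weights (fraud up-weighted)."""
    eps = 1e-12
    w = np.where(y == 1, class_weight[1], class_weight[0])
    ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return float(np.mean(w * ce))

# ~1% fraud prevalence, echoing the 1.10% rate in the BAF Base benchmark
y = np.array([0] * 99 + [1])
n = len(y)
# 'balanced' convention: weight for class c is n / (n_classes * n_c)
w_bal = {0: n / (2 * (y == 0).sum()), 1: n / (2 * (y == 1).sum())}
# With balanced weights the average sample weight is exactly 1, so an
# uninformative predictor (p = 0.5 everywhere) scores log(2):
loss = weighted_cross_entropy(np.full(n, 0.5), y, w_bal)
```

The same weighting scheme can be expressed either via `class_weight` in classical models or as per-sample weights in a neural loss, which is the symmetry the paper exploits to compare the two families fairly.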
15 pages, 3067 KB  
Article
Domain Adaptation of ECG Signals Using a Fuzzy Energy–Frequency Spectrogram Network
by Tae-Wan Kim and Keun-Chang Kwak
Appl. Sci. 2025, 15(24), 12909; https://doi.org/10.3390/app152412909 - 7 Dec 2025
Abstract
Deep learning has shown strong performance in ECG domain adaptation; however, its decision-making process remains opaque, particularly when operating on input spectrograms. Traditional fuzzy inference offers interpretability but is structurally limited to tabular or multi-channel data, making it difficult to apply directly to single-channel two-dimensional spectrograms. To address this limitation, we propose the Fuzzy Energy–Frequency Spectrogram Network (FEFSN), a new fuzzy–deep learning hybrid framework that enables direct fuzzy rule generation in the spectrogram domain. In FEFSN, the Fuzzy Rule Image Generation Module (FRIGM) decomposes an STFT-transformed ECG spectrogram into multiple energy-based channels using an Energy-Density Membership Function (EMF), and then applies a Frequency Membership Function (FMF) to produce AND and OR fuzzy rule images for each energy–frequency combination. The generated rule images are subsequently normalized, activated, and combined through learned weights to form a rule-based domain-adapted spectrogram, which is then processed by a CNN. To evaluate the proposed approach, we used the PhysioNet ECG-ID dataset and compared the performance of a standard CNN with and without the FRIGM under identical training conditions. The results show that FEFSN maintains or slightly improves adaptation performance compared to the baseline CNN, despite introducing only a small number of additional parameters. More importantly, FEFSN provides ante hoc interpretability, allowing direct visualization of which energy–frequency regions were emphasized or suppressed during adaptation, an ability that conventional post hoc methods such as Grad-CAM cannot offer. Overall, FEFSN demonstrates that fuzzy logic can be effectively integrated with deep learning to achieve both reliable performance and transparent, rule-based interpretability in ECG spectrogram domain adaptation.
(This article belongs to the Special Issue Evolutionary Computation in Biomedical Signal Processing)
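The rule-image construction can be illustrated with simple Gaussian membership functions: an energy membership (EMF) decomposes the spectrogram into channels, a frequency membership (FMF) weights each row, and elementwise min/max act as the AND/OR connectives. A hedged NumPy sketch (the Gaussian membership shapes and min/max t-norms are assumptions for illustration; the paper's exact FRIGM formulation may differ):

```python
import numpy as np

def gaussian_mf(x, c, s):
    """Gaussian membership function centered at c with spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def fuzzy_rule_images(spec, energy_centers, freq_centers, s_e=0.2, s_f=0.2):
    """Build AND/OR fuzzy rule images for each energy-frequency pair.

    spec is a (freq_bins, time_frames) magnitude spectrogram with
    values normalized to [0, 1].
    """
    freqs = np.linspace(0.0, 1.0, spec.shape[0])[:, None]  # normalized freq axis
    rules_and, rules_or = [], []
    for ce in energy_centers:
        emf = gaussian_mf(spec, ce, s_e)                   # energy membership
        for cf in freq_centers:
            fmf = np.broadcast_to(gaussian_mf(freqs, cf, s_f), spec.shape)
            rules_and.append(np.minimum(emf, fmf))         # AND rule image (min)
            rules_or.append(np.maximum(emf, fmf))          # OR rule image (max)
    return np.stack(rules_and), np.stack(rules_or)

# Toy "spectrogram": 4 frequency bins x 3 time frames
spec = np.linspace(0.0, 1.0, 12).reshape(4, 3)
ra, ro = fuzzy_rule_images(spec, energy_centers=[0.0, 1.0],
                           freq_centers=[0.0, 1.0])
```

Each of the resulting rule images is itself a spectrogram-shaped map, which is what makes direct visualization of emphasized energy–frequency regions possible before any CNN processing.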