Search Results (302)

Search Parameters:
Keywords = logical verification

23 pages, 1208 KB  
Article
NeSySwarm-IDS: End-to-End Differentiable Neuro-Symbolic Logic for Privacy-Preserving Intrusion Detection in UAV Swarms
by Gang Yang, Lin Ni, Tao Xia, Qinfang Shi and Jiajian Li
Appl. Sci. 2026, 16(7), 3204; https://doi.org/10.3390/app16073204 - 26 Mar 2026
Abstract
Unmanned Aerial Vehicle (UAV) swarms operating in contested environments face a critical “semantic gap” between raw, high-velocity network traffic and high-level mission security constraints, compounded by the risk of privacy leakage during collaborative learning. Existing deep learning (DL)-based Network Intrusion Detection Systems (NIDSs) suffer from opacity, prohibitive resource consumption, and vulnerability to gradient leakage attacks in federated settings, while traditional rule-based systems fail to handle encrypted payloads and evolving attack patterns. To bridge this gap, we present NeSySwarm-IDS (Neuro-Symbolic Swarm Intrusion Detection System), an end-to-end differentiable neuro-symbolic framework that simultaneously achieves high accuracy, strong privacy guarantees, and built-in interpretability under resource constraints. NeSySwarm-IDS integrates an extremely lightweight 1D convolutional neural network with a differentiable Łukasiewicz fuzzy logic reasoner incorporating attack-specific rules. By aggregating only low-dimensional logic rule weights with calibrated differential privacy noise, we drastically reduce communication overhead while providing (ϵ,δ)-DP guarantees with negligible utility loss. Extensive experiments on the UAV-NIDD dataset and our self-collected dataset demonstrate that NeSySwarm-IDS achieves near-perfect detection accuracy, significantly outperforming traditional machine learning baselines despite using limited training data. A detailed case study on GPS spoofing confirms the interpretability of our approach, providing axiomatic explanations suitable for autonomous mission verification. These results establish that end-to-end neuro-symbolic learning can effectively bridge the semantic gap in UAV swarm security while ensuring privacy and interpretability, offering a practical pathway for deploying trustworthy AI in contested environments. Full article
(This article belongs to the Special Issue Cyberspace Security Technology in Computer Science)
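The Łukasiewicz fuzzy-logic connectives at the core of a reasoner like the one described above have simple closed forms. A minimal sketch in plain Python (the spoofing rule and its input values are hypothetical illustrations, not taken from the paper):

```python
def luk_and(a, b):
    """Łukasiewicz t-norm (fuzzy AND): max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def luk_or(a, b):
    """Łukasiewicz t-conorm (fuzzy OR): min(1, a + b)."""
    return min(1.0, a + b)

def luk_implies(a, b):
    """Łukasiewicz implication: min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

# Hypothetical rule: "high packet rate AND anomalous heading -> spoofing alert".
high_rate, anomalous_heading = 0.9, 0.8
alert = luk_and(high_rate, anomalous_heading)  # ≈ 0.7
```

Because these operators are piecewise-linear, they admit (sub)gradients when expressed in an autodiff framework, which is what makes rule weights trainable end-to-end.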

50 pages, 7244 KB  
Article
Anomaly Detection and Correction for High-Spatiotemporal-Resolution Land Surface Temperature Data: Integrating Spatiotemporal Physical Constraints and Consistency Verification
by Yun Wang, Mengyang Chai, Xiao Zhang, Huairong Kang, Xuanbin Liu, Siwei Zhao, Cancan Cui and Yinnian Liu
Remote Sens. 2026, 18(7), 972; https://doi.org/10.3390/rs18070972 - 24 Mar 2026
Abstract
High-spatiotemporal-resolution land surface temperature (LST) data are crucial for analyzing surface energy balance, modeling temperature-related processes, and monitoring thermal environments. However, despite advancements in multi-source fusion and reconstruction techniques, high-frequency LST data remain susceptible to anomalies such as abrupt changes and outliers due to retrieval uncertainties and varying observation conditions. Conventional statistical outlier detection methods risk misidentifying physically plausible rapid weather changes as data errors, introducing systematic biases. To address this, we propose a two-stage anomaly detection framework that follows a “temporal physical pre-screening first, spatial statistical verification later” logic. First, a piecewise empirical model, based on typical diurnal LST variation characteristics, is constructed to identify points violating physical patterns. Subsequently, a spatial consistency test using median absolute deviation (MAD) is introduced to distinguish real weather-driven fluctuations from genuine data anomalies from a spatial synergy perspective. This sequential design effectively reduces the risk of mis-correcting physically reasonable temperature variations. Validated using hourly seamless LST data (2016–2021) and ground observations in the Heihe River Basin, our method outperformed Seasonal-Trend decomposition using Loess (STL), double standardization methods, and robust Holt–Winters. For over 87% of the detected anomalies, the proposed method demonstrated positive improvement rates in RMSE, MAE, R, and R2. The overall average improvement rates reached 23.61%, 18.79%, 16.46%, and 61.33%, respectively, indicating robust performance. The results underscore that explicitly incorporating physical constraints enhances the reliability and interpretability of quality control for high-temporal-resolution remote sensing LST data. Full article
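The spatial consistency step can be illustrated with a plain median-absolute-deviation (MAD) test: flag a pixel as anomalous when its robust z-score against its neighborhood exceeds a threshold. A minimal sketch using the conventional 1.4826 normal-consistency constant (the neighborhood temperatures are invented, not the paper's data):

```python
import statistics

def mad_outliers(values, k=3.0):
    """Flag values whose robust z-score |x - median| / (1.4826 * MAD)
    exceeds k, where MAD is the median absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values])
    scale = 1.4826 * mad
    if scale == 0.0:
        return [False] * len(values)  # degenerate neighborhood: no spread
    return [abs(v - med) / scale > k for v in values]

# Hypothetical neighborhood of LST values in kelvin; the last pixel jumps.
flags = mad_outliers([301.2, 301.5, 300.9, 301.1, 315.0])
# flags -> [False, False, False, False, True]
```

The advantage over a mean/standard-deviation rule is that the median and MAD are barely moved by the anomaly itself, so one bad pixel does not mask its own detection.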

15 pages, 671 KB  
Article
Model Checking in Federated Learning-Based Smart Advertising
by Rasool Seyghaly, Jordi Garcia and Xavi Masip-Bruin
J. Sens. Actuator Netw. 2026, 15(2), 29; https://doi.org/10.3390/jsan15020029 - 20 Mar 2026
Abstract
As social networks continue to expand, smart advertising increasingly depends on machine learning to deliver personalized and effective advertisements. Federated Learning (FL) is a distributed learning paradigm that supports privacy-preserving advertising by training models locally while avoiding direct sharing of raw user data. However, ensuring the correctness, reliability, and operational robustness of FL-driven smart advertising systems remains a significant challenge, particularly in distributed and user-facing environments. In this study, we investigate the use of model checking as a formal verification technique for validating key properties of an FL-based smart advertising workflow in social networks. We combine a structured finite-state modeling approach with Linear Temporal Logic (LTL) specifications and model-checking tools to assess correctness, availability, and baseline privacy requirements. Using controlled simulation-based configurations, we show that, for a setup with 100 users and 20 edge servers, the system delivers advertisements to all users and the global model successfully processes 200 out of 200 requests. We further analyze verification overhead through detection-time measurements, observing an increase in average detection time from 10.05 s to 11.98 s as the number of users rises from 20 to 100. These results indicate that the proposed framework can provide practical assurance for FL-enabled smart advertising workflows, support more reliable deployment in distributed intelligent systems, and improve trustworthiness in real advertising applications. Full article
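The core of checking an LTL safety property such as G p over a finite-state model is exhaustive reachability: visit every reachable state and verify the invariant in each. A toy sketch of that idea (the request/served model, its bound of 3 requests, and the invariant are invented for illustration and are not the authors' specification):

```python
from collections import deque

def check_invariant(init, transitions, invariant):
    """Explicit-state check that `invariant` holds in every reachable state
    (the LTL safety property 'G invariant' over a finite model)."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False, s  # counterexample state
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None

# Toy model of one FL round: state = (requests_served, requests_issued).
def transitions(state):
    served, issued = state
    nxt = []
    if issued < 3:
        nxt.append((served, issued + 1))  # a user issues a request
    if served < issued:
        nxt.append((served + 1, issued))  # the global model serves one
    return nxt

ok, bad = check_invariant((0, 0), transitions, lambda s: s[0] <= s[1])
# ok is True: served requests never exceed issued requests in this model
```

Full LTL model checkers handle liveness as well (via Büchi automata), but safety properties reduce to exactly this kind of reachability search.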

24 pages, 320 KB  
Article
Language Without Propositions: Why Large Language Models Hallucinate
by Jakub Mácha
Philosophies 2026, 11(2), 42; https://doi.org/10.3390/philosophies11020042 - 19 Mar 2026
Abstract
This paper defends the thesis that LLM hallucinations are best explained as a truth representation problem: Current models lack an internal representation of propositions as truth-bearers, so truth and falsity cannot constrain generation in the way factual discourse requires. It begins by surveying leading explanations—computational limits on self-verification, deficiencies in training data as truth sources, and architectural factors—and argues that they converge on the same underlying representational deficit. Next, it reconstructs the philosophical background of current LLM design, showing how optimization for fluent continuation aligns with coherence-style evaluation and with broadly structuralist, relational semantics, before turning to David Chalmers’s recent attempt to secure propositional interpretability by drawing on Davidson/Lewis-style radical interpretation and by locating propositional content in “middle-layer” structures; it argues that this approach downplays the ubiquity of hallucination and inherits instability from post-training edits. Finally, the paper offers a positive proposal: Atomic propositions should be represented in the basic vector layer, reviving a logical atomist program as a principled route to reducing hallucination. Full article
(This article belongs to the Special Issue Foundations of Artificial Intelligence)
27 pages, 1237 KB  
Article
Constraint, Asymmetry, and Meaning: A Cybernetic Reinterpretation of Probabilistic Emergence Across Complex Systems
by Ezra N. S. Lockhart
Symmetry 2026, 18(3), 518; https://doi.org/10.3390/sym18030518 - 18 Mar 2026
Abstract
This study develops a Constraint-Driven Model of Intelligence to explain the emergence of structured meaning in complex systems, reconciling probability and cybernetics. It applies a conceptual–analytic procedure, conducted entirely through logical reasoning and theoretical analysis, without empirical measurement, data acquisition, experimental manipulation, or statistical testing, and is therefore methodologically separate from empirical artificial intelligence research. Phenomena such as model collapse are cited as theoretical instances for epistemic argumentation, without asserting empirical verification. Building on Émile Borel’s Infinite Monkey Theorem, which demonstrates the theoretical inevitability of order in unbounded stochastic processes, and Gregory Bateson’s principle of negative explanation, which defines structure as the result of systematically eliminated alternatives, the analysis formalizes how constraints break ergodicity and generate asymmetry. Shannon’s entropy quantifies the informational effects of constraints, while Simon’s bounded rationality and Turing’s algorithmic limits show how cognitive and computational boundaries produce tractable outcomes. Applied to modern AI, the model accounts for model collapse in recursive training, showing that the loss of asymmetric constraints produces low-entropy, repetitive outputs, demonstrating the epistemic necessity of constraint regulation. Comparing probabilistic and cybernetic accounts of emergence, the study shows that structured intelligence arises not from stochastic exploration alone, but from bounded, recursive, selective processes. This model is transdisciplinary, formalizing how constraints from socioeconomic pressures to subcultural circulation shape diversity, innovation, and functional asymmetry, establishing a generalizable cybernetic epistemology for the generation of structured intelligence and meaning across domains. 
By formalizing these concepts through set-theoretic derivations and integrative synthesis, this non-empirical model advances a cybernetic epistemology, separate from quantitative AI evaluations or experimental designs. Full article
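Shannon's role in this argument can be made concrete: a constraint that eliminates or suppresses alternatives lowers the entropy of the resulting distribution. A minimal numerical sketch (the distributions are illustrative, not drawn from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Unconstrained source: four equally likely symbols -> maximal entropy.
h_uniform = entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits
# A constraint favoring one symbol eliminates alternatives -> lower entropy.
h_constrained = entropy([0.7, 0.1, 0.1, 0.1])   # ≈ 1.36 bits
```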

38 pages, 4516 KB  
Article
A Formal Modeling Framework for Time-Aware Cyber–Physical Systems of Systems
by Riad Helal, Faiza Belala, Nabil Hameurlain and Akram Seghiri
Systems 2026, 14(3), 312; https://doi.org/10.3390/systems14030312 - 16 Mar 2026
Abstract
Cyber–Physical Systems of Systems (CPSoS) integrate autonomous constituent systems to accomplish complex missions. Nonetheless, decentralized coordination and continuous evolution create intricate dependencies that make behavior difficult to analyze. Current semi-formal modeling approaches, despite being easy to understand and widely accessible, lack semantic precision and are not computationally checkable to guarantee time-critical properties. Furthermore, current formal methods are often fragmented: they analyze behavior either at the individual CPS level or the collective CPSoS level, failing to provide a multi-level specification. To address these limitations, we propose an integrated framework combining SysML and Maude rewriting logic. SysML provides structural and behavioral specification capabilities, while Maude enables rigorous semantics, executable models, and formal verification. First, our approach proposes MM-CPSoS, a meta-model that unifies CPS and CPSoS entities with explicit temporal constraints. Dynamic behavior is captured through evolution patterns governing mission progression across both levels. Then, we encode SysML models into Maude as object-oriented configurations and conditional rewrite rules, enabling linear temporal logic (LTL) model checking of temporal properties. Finally, we demonstrate our approach through a Time-Aware Road Crisis Management System (TaRCiMaS2). Full article
(This article belongs to the Section Systems Engineering)

16 pages, 1063 KB  
Article
Integrating Inverse Prompting and Chain-of-Thought Reasoning for Automated Flood Control Text Generation: A Case Study of the Lixiahe Region
by Hui Min, Feng Ye, Dong Xu, Jin Xu and Xiaoping Liao
Water 2026, 18(6), 686; https://doi.org/10.3390/w18060686 - 15 Mar 2026
Abstract
Flood control briefings are critical emergency response documents that provide timely decision support for urban safety and regional development under climate change challenges. However, existing large language models (LLMs) face significant difficulties in domain-specific adaptation, content controllability, and logical consistency when processing complex water conservancy data. This study aims to develop a robust automated text generation method that ensures high accuracy and logical rigor for flood prevention in the Lixiahe region. We propose an IP-CoT method that integrates Chain-of-Thought (CoT) reasoning for structured information extraction and an Inverse Prompting (IP) mechanism with beam search to optimize content relevance using the DeepSeek-R1 model. Validated on a constructed dataset comprising flood control records from the Lixia River network from 2010 to 2024, the proposed method achieved an accuracy rate of 95.32% in the verification of emotional attributes, which is 2% to 15% higher than most traditional models. Additionally, in the verification of thematic attributes, fluency and diversity were improved, showing significant enhancements compared to the baseline model. This approach significantly enhances the quality and efficiency of domain-specific text generation, providing a reliable intelligent solution for modernizing regional flood control decision-making systems. Full article
(This article belongs to the Section Hydrology)

26 pages, 1640 KB  
Article
Algorithmic Optimization for Accelerated UDS Fuzzing in Cyber–Physical Automotive Networks: The BB-FAST Approach on LIN-Bus
by Sungsik Im, Yijoon Jung and Junyoung Park
Electronics 2026, 15(6), 1223; https://doi.org/10.3390/electronics15061223 - 14 Mar 2026
Abstract
In modern cyber–physical vehicle networks, the security of component-level Electronic Control Units (ECUs) is essential for overall system reliability. While Controller Area Network (CAN) security is well-studied, the Local Interconnect Network (LIN) has received less attention despite its growing role in critical functions and diagnostic services (UDS). The inherent constraints of the LIN protocol, specifically its low bandwidth and master–slave architecture, make traditional fuzz testing impractical due to extremely long execution times. This paper proposes Batch-based Binary-search Fuzzing and Accelerated Security Testing (BB-FAST), an optimized framework for faster vulnerability detection in LIN-based systems. By integrating batch processing and binary search techniques, BB-FAST overcomes communication bottlenecks and enables efficient error localization. Empirical evaluations on a physical automotive ECU demonstrate that BB-FAST achieves a significant reduction in testing time of up to 97.7% compared to traditional sequential methods. Notably, in scenarios involving critical controller failures, BB-FAST outperformed optimized batch-based approaches by 64.2% through its logarithmic error localization logic. By mitigating these physical limitations through algorithmic optimization, this work enables thorough security verification for LIN-based diagnostic interfaces that was previously constrained by protocol latency, thereby enhancing the integrity of cyber–physical automotive networks. Full article
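The logarithmic error-localization idea can be sketched as group testing with binary search: execute a batch of payloads, and if any payload in the batch triggers a failure, bisect that batch until a single culprit remains. This sketch assumes a single failing payload and a hypothetical `fails` oracle; it is not the BB-FAST implementation itself:

```python
def locate_faulty(payloads, fails):
    """Binary-search localization of a single failing payload.
    `fails(batch)` reports whether any payload in the batch triggers
    an error; the culprit is found in O(log n) batch executions
    instead of O(n) single-payload runs."""
    lo, hi = 0, len(payloads)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(payloads[lo:mid]):
            hi = mid  # culprit is in the first half
        else:
            lo = mid  # culprit is in the second half
    return payloads[lo]

# Hypothetical UDS-style payload IDs; payload 0x31 (= 49) is the crasher.
payloads = list(range(100))
culprit = locate_faulty(payloads, lambda batch: 0x31 in batch)  # -> 49
```

On a slow bus like LIN, each batch execution amortizes the per-frame latency, which is where the reported speedups over sequential fuzzing come from.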

36 pages, 16506 KB  
Article
A Scenario-Based Visual Modeling Method for the Complex Products Lifecycle
by Shuanglong Chang, Chuangye Chang, Xiyu Liu and Xinghai Gao
Electronics 2026, 15(6), 1198; https://doi.org/10.3390/electronics15061198 - 13 Mar 2026
Abstract
The development of complex products is challenged by diverse requirements, interdisciplinary coupling, intricate behaviors, and prolonged lifecycles. Traditional document-based systems engineering methods exhibit deficiencies in requirement validation, architectural verification, and cross-disciplinary integration, struggling to support early-stage verification and validation as well as interdisciplinary collaboration. To address these limitations, this paper proposes a scenario-based visual modeling method for the entire lifecycle of complex products, aiming to realize a closed-loop process epitomized by “construction as verification.” This method integrates model-based systems engineering, scenario-driven design, and multi-level visualization techniques to construct a multi-paradigm visual modeling and simulation framework driven by operational scenarios, use-case scenarios, and working-condition scenarios, each serving as the blueprint for constructing the corresponding Operational Concept, Functional/Logical, and Physical Specification Models. Concurrently, a semantic integration mechanism based on hybrid ontologies is introduced, which resolves semantic heterogeneity and facilitates model interoperability among multi-source heterogeneous models through formalized mapping. Furthermore, a simulation engine scheme based on Discrete Event System Specification is proposed to enable continuous verification from conceptual design to solution development. A case study on the braking mechanism of a high-speed train demonstrates that the proposed method can effectively support precise requirement validation, logical architectural verification, and multi-solution trade-off analysis, thereby significantly enhancing early verification capabilities and R&D efficiency. Full article

16 pages, 640 KB  
Article
Radiomics in Advancing and Explainable Liposarcoma Classification with MR Imaging
by Raffaele Natella, Giulia Varriano, Maria Chiara Brunese, Giulia Pacella, Luca Brunese, Marcello Zappia and Antonella Santone
Appl. Sci. 2026, 16(6), 2719; https://doi.org/10.3390/app16062719 - 12 Mar 2026
Abstract
Background: Soft tissue sarcomas are rare and highly heterogeneous malignant tumors, often asymptomatic in the early stages. Accurate diagnosis and reliable assessment of the risk of metastasis, classified as low, intermediate, or high, are therefore essential for effective clinical decision-making. However, the application of Artificial Intelligence (AI) approaches to these diseases is often limited by the small size and quality of available datasets, which can compromise model robustness and reliability. Methods: The use of formal methods, based on mathematical modeling and logical verification, can be an alternative to AI techniques. When integrated with radiomics, formal methods provide a structured and interpretable approach to support disease diagnosis. Results: The proposed methodology yielded encouraging results, in line with those reported in the literature. A process was conducted to extract several first- and second-order radiomic classes, which were then selected based on their significance. The resulting models were evaluated using standard performance metrics and obtained 80% accuracy, 83% precision, and 83% recall. Conclusion: The transparency of formal methods improves the interpretability of models and radiomic features, allowing new links with clinical practice to be discovered. The proposed approach is presented as a feasibility and proof-of-concept framework aimed at improving interpretability. Given the very small cohort size, performance metrics should be considered preliminary and descriptive, as they require validation on larger external datasets before any clinical applicability can be claimed. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
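The reported metrics follow from a confusion matrix in the usual way. A minimal sketch with hypothetical counts that happen to reproduce the paper's rounded figures (80% accuracy, 83% precision, 83% recall); the counts are illustrative, not the study's actual confusion matrix:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Hypothetical counts: 5 TP, 1 FP, 1 FN, 3 TN over a 10-case cohort.
acc, prec, rec = metrics(tp=5, fp=1, fn=1, tn=3)
# acc = 0.80, prec ≈ 0.833, rec ≈ 0.833
```

With a cohort this small, a single reclassified case shifts each metric by roughly ten percentage points, which is why the authors label the results preliminary.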

18 pages, 28063 KB  
Article
Towards Hyper-Personalized Travel Planning: A Multimodal AI Agent with Integrated Neural Rendering for Immersive Itineraries
by José Márquez-Algaba, Pablo Vicente-Martínez, Emilio Soria-Olivas, Manuel Sánchez-Montañés, María Ángeles García-Escrivà and Edu William-Secin
Electronics 2026, 15(6), 1142; https://doi.org/10.3390/electronics15061142 - 10 Mar 2026
Abstract
The digital transformation of the tourism industry faces a dual challenge: the fragmentation of data across platforms and the lack of immersive “try-before-you-buy” experiences. While Large Language Models (LLMs) have revolutionized information synthesis, they typically lack real-time visual verification capabilities. This paper proposes a novel, multimodal AI Agent architecture that integrates advanced natural language planning with photorealistic 3D visualization. We present a system where a conversational agent, powered by Gemini 2.5 Flash, orchestrates a suite of dynamic tools to build structured travel itineraries (flights, hotels, activities) while simultaneously deploying a neural rendering engine. This engine utilizes a modular Structure-from-Motion (SfM) pipeline feeding into 3D Gaussian Splatting (3DGS) to render navigable, high-fidelity digital twins of hotel facilities directly within the chat interface. Positioned as a Technology Readiness Level 4 (TRL 4) proof of concept (PoC), this work demonstrates the technical feasibility of the multimodal integration between conversational logic and automated visual synthesis. The results demonstrate the technical feasibility of a pipeline that dynamically binds LLM inference to 3D spatial data, providing a foundation for high-fidelity, interactive travel consultancy. Full article

20 pages, 3017 KB  
Article
Deep-Research Eval: An Automated Framework for Assessing Quality and Reliability in Long-Form Reports
by Yeerpan Tuohetiyaer, Yuye Zhu, Yan Hu, Siyuan Lu and Zhongfeng Wang
Appl. Sci. 2026, 16(5), 2546; https://doi.org/10.3390/app16052546 - 6 Mar 2026
Abstract
Deep Research Agents (DRAs) generate detailed literature surveys but often suffer from hallucinations and inconsistent structures. Existing evaluation methods face significant limitations. Human evaluation is time-consuming and requires domain expertise. Meanwhile, current LLM judges struggle with long reports due to context limits and the inability to verify source reliability. To address this, we propose Deep-Research Eval. This framework standardizes the page as the basic unit for evaluation. It features an adaptive scoring system that assesses the logical quality of each page. Furthermore, it employs Paged-RAG with a constructible reference database to verify facts against specific evidence. Experiments on five agents show that our method effectively identifies errors. It achieves a strong correlation with human judgment, reaching a Composite Consistency Index (CCI) of 0.7585, an absolute increase of 0.4588 over baselines. Additionally, the Paged-RAG module improves factual verification accuracy, increasing the QA-F1 score by up to 6.9 times compared to standard retrieval methods. This work offers a scalable and practical approach for assessing AI-generated academic content. Full article
(This article belongs to the Special Issue Construction of Knowledge System Based on Natural Language Processing)
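The QA-F1 score mentioned above is conventionally a token-overlap F1 between a predicted and a gold answer. A minimal sketch of that standard metric (not necessarily the paper's exact variant; the example strings are invented):

```python
def qa_f1(pred, gold):
    """Token-overlap F1 between predicted and gold answer strings."""
    pred_toks, gold_toks = pred.split(), gold.split()
    gold_counts = {}
    for t in gold_toks:
        gold_counts[t] = gold_counts.get(t, 0) + 1
    common = 0
    for t in pred_toks:
        if gold_counts.get(t, 0) > 0:  # count each gold token at most once
            gold_counts[t] -= 1
            common += 1
    if common == 0:
        return 0.0
    p = common / len(pred_toks)   # precision over predicted tokens
    r = common / len(gold_toks)   # recall over gold tokens
    return 2 * p * r / (p + r)

score = qa_f1("the model checker", "model checker")  # -> 0.8
```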

23 pages, 68544 KB  
Article
Two-Stage Fine-Grained Ship Recognition with a Detector Guided by Key Regions and a Multi-Patch Joint Classifier
by Qiantong Wang, Peifeng Li, Yuan Li, Lei Zhang, Ben Niu, Feng Wang, Xiurui Geng and Guangyao Zhou
Remote Sens. 2026, 18(5), 772; https://doi.org/10.3390/rs18050772 - 4 Mar 2026
Abstract
For human beings, fine-grained object recognition is a progressive process that proceeds from global outlines to local details. They can determine how to further focus on the distinctive regions based on the overall context, followed by recognition. To enhance the algorithm’s capability to capture critical features, a multi-stage recognition framework, integrated with human-attended key regions for fine-grained ship recognition, is proposed in this manuscript. First, a set of distinctive templates is constructed following human identification logic. On this basis, a supervised attention method, Key Regions Guided Yolo11 (KRGY), with part-to-whole regulation is proposed to help the model focus on critical components, leading to better recognition and localization performance. Furthermore, a multi-head joint recognition classification module is proposed, with key regions of the ship cropped using the distinctive templates. With the hypothesis-and-verification framework Key Regions Guided Yolo11-Multi Head Classifier (KRGY-MHC), the accuracy of ship recognition is significantly improved on DCL-11, a challenging dataset with high inter-class similarity. Full article
(This article belongs to the Section AI Remote Sensing)

19 pages, 4073 KB  
Article
Reinforcement Learning-Based Adaptive Motion Control of Humanoid Robots on Multi-Terrain
by Xin Wen, Luxuan Wang, Yongting Tao, Huige Lai and Hao Liu
Appl. Sci. 2026, 16(5), 2371; https://doi.org/10.3390/app16052371 - 28 Feb 2026
Viewed by 573
Abstract
In recent years, many countries have increased their investment in the field of humanoid robots, promoting significant technological development. This study aims to enable humanoid robots to better adapt to various complex environments, enhancing the robustness of their motion systems and the generalization [...] Read more.
In recent years, many countries have increased their investment in the field of humanoid robots, promoting significant technological development. This study aims to enable humanoid robots to better adapt to various complex environments, enhancing the robustness of their motion systems and the generalization ability of their motion strategies. When training with reinforcement learning algorithms, exposure to varied terrain is a critical factor in developing adaptable humanoid robots. This paper takes the humanoid robot G1 as the research platform. First, it completes the training, transfer verification, and real-machine deployment of a flat-ground walking model. Then, using fuzzy logic control and a phased training strategy, walking models for ascending/descending stairs and traversing slopes are trained. By systematically varying the stair height and slope gradient, the convergence of the reward function and the task completion success rate are analyzed. Furthermore, the dynamic stability of the robot on complex terrains is validated through qualitative kinematic analysis. The research concludes that as the single-step height and slope gradient increase, the reward value initially rises with more iterations but converges more slowly and at a lower final value. Statistical analysis shows that the success rates of phased training for stair and slope terrains are higher than 86% and 92%, respectively. Full article

20 pages, 1894 KB  
Article
A Whale Optimization-Based Dynamic Compression ATPG Algorithm for Computer Interlocking Equipment Testing
by Zhiyang Yu, Lanxuan Jiang, Tianze Wu and Xiaoming Chen
Appl. Sci. 2026, 16(5), 2361; https://doi.org/10.3390/app16052361 - 28 Feb 2026
Viewed by 273
Abstract
High-speed railway signaling equipment constitutes safety-critical infrastructure, wherein hardware failures may directly compromise operational safety. During the hardware prototyping and verification stage, structural testing is essential to detect latent faults in digital logic circuits and to ensure compliance with stringent safety integrity requirements. [...] Read more.
High-speed railway signaling equipment constitutes safety-critical infrastructure, wherein hardware failures may directly compromise operational safety. During the hardware prototyping and verification stage, structural testing is essential to detect latent faults in digital logic circuits and to ensure compliance with stringent safety integrity requirements. However, conventional test generation methods often suffer from long generation times and excessive test vector volume. To address these challenges, this study proposes a whale optimization-based dynamic compression Automatic Test-Pattern Generation (ATPG) algorithm. The proposed method integrates a discrete whale optimization algorithm (WOA) with a deterministic PODEM framework to dynamically compress generated test vectors. Additionally, a multi-path-sensitized PODEM enhanced with desensitization techniques is introduced to reduce backtracking and improve search efficiency. The proposed algorithm has been applied to the computer interlocking golden model netlist for testing purposes, achieving an impressive fault coverage rate of 100%. Test results from the ISCAS-85 standard circuit indicate that our approach significantly reduces both the length of the vector set and the time required for test generation when compared to traditional PODEMs without vector compression and pseudo-random combined PODEM vector generation methods. This advancement effectively enhances overall vector generation efficiency while maintaining comprehensive fault coverage. Full article
