Search Results (9)

Search Parameters:
Keywords = neuro-symbolic planning

24 pages, 1353 KB  
Article
SLTP: A Symbolic Travel-Planning Agent Framework with Decoupled Translation and Heuristic Tree Search
by Debin Tang, Qian Jiang, Jingpu Yang, Jingyu Zhao, Xiaofei Du, Miao Fang and Xiaofei Zhang
Electronics 2026, 15(2), 422; https://doi.org/10.3390/electronics15020422 (registering DOI) - 18 Jan 2026
Abstract
Large language models (LLMs) demonstrate outstanding capability in understanding natural language and show great potential in open-domain travel planning. However, when confronted with multi-constraint itineraries, personalized recommendations, and scenarios requiring rigorous external information validation, pure LLM-based approaches lack rigorous planning ability and fine-grained personalization. To address these gaps, we propose the Symbolic LoRA Travel Planner (SLTP) framework—an agent architecture that combines a two-stage symbol-rule LoRA fine-tuning pipeline with a user multi-option heuristic tree search (MHTS) planner. SLTP decomposes the entire process of transforming natural language into executable code into two specialized, sequential LoRA experts: the first maps natural-language queries to symbolic constraints with high fidelity; the second compiles symbolic constraints into executable Python planning code. After reflective verification, the generated code serves as constraints and heuristic rules for an MHTS planner that preserves diversified top-K candidate itineraries and uses pruning plus heuristic strategies to maintain search-time performance. To overcome the scarcity of high-quality intermediate symbolic data, we adopt a teacher–student distillation approach: a strong teacher model generates high-fidelity symbolic constraints and executable code, which we use as hard targets to distill knowledge into an 8B-parameter Qwen3-8B student model via two-stage LoRA. On the ChinaTravel benchmark, SLTP using an 8B student achieves performance comparable to or surpassing that of other methods built on DeepSeek-V3 or GPT-4o as a backbone. Full article
(This article belongs to the Special Issue AI-Powered Natural Language Processing Applications)
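The abstract above describes a multi-option heuristic tree search (MHTS) that preserves diversified top-K candidate itineraries and prunes the rest. The paper's own code is not shown here; the following is a minimal generic sketch of that top-K search pattern, with all names (`mhts`, the toy `expand`/`heuristic` functions) assumed for illustration rather than taken from SLTP:

```python
import heapq

def mhts(initial_state, expand, heuristic, is_goal, k=5, max_depth=20):
    """Generic top-K heuristic tree search: at each depth, expand every
    frontier state and keep only the k best-scoring candidates (pruning)."""
    frontier = [initial_state]
    completed = []
    for _ in range(max_depth):
        children = [c for s in frontier for c in expand(s)]
        if not children:
            break
        # Pruning step: retain only the k most promising partial plans.
        frontier = heapq.nlargest(k, children, key=heuristic)
        completed.extend(s for s in frontier if is_goal(s))
        if completed:
            break
    return completed or frontier

# Toy example: "itineraries" are tuples of stop indices, scored by their sum.
expand = lambda s: [s + (i,) for i in range(3)]
heuristic = lambda s: sum(s)
result = mhts((), expand, heuristic, lambda s: len(s) == 3, k=2, max_depth=3)
```

Keeping k candidates rather than a single best path is what preserves diversified itineraries: ties and near-ties survive pruning instead of collapsing to one greedy plan.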
44 pages, 9272 KB  
Systematic Review
Toward a Unified Smart Point Cloud Framework: A Systematic Review of Definitions, Methods, and a Modular Knowledge-Integrated Pipeline
by Mohamed H. Salaheldin, Ahmed Shaker and Songnian Li
Buildings 2026, 16(2), 293; https://doi.org/10.3390/buildings16020293 - 10 Jan 2026
Viewed by 277
Abstract
Reality-capture has made point clouds a primary spatial data source, yet processing and integration limits hinder their potential. Prior reviews focus on isolated phases; by contrast, Smart Point Clouds (SPCs)—augmenting points with semantics, relations, and query interfaces to enable reasoning—received limited attention. This systematic review synthesizes the state-of-the-art SPC terminology and methods to propose a modular pipeline. Following PRISMA, we searched Scopus, Web of Science, and Google Scholar up to June 2025. We included English-language studies in geomatics and engineering presenting novel SPC methods. Fifty-eight publications met eligibility criteria: Direct (n = 22), Indirect (n = 22), and New Use (n = 14). We formalize an operative SPC definition—queryable, ontology-linked, provenance-aware—and map contributions across traditional point cloud processing stages (from acquisition to modeling). Evidence shows practical value in cultural heritage, urban planning, and AEC/FM via semantic queries, rule checks, and auditable updates. Comparative qualitative analysis reveals cross-study trends: higher and more uniform density stabilizes features but increases computation, and hybrid neuro-symbolic classification improves long-tail consistency; however, methodological heterogeneity precluded quantitative synthesis. We distill a configurable eight-module pipeline and identify open challenges in data at scale, domain transfer, temporal (4D) updates, surface exports, query usability, and sensor fusion. Finally, we recommend lightweight reporting standards to improve discoverability and reuse. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)

42 pages, 571 KB  
Review
Integrating Cognitive, Symbolic, and Neural Approaches to Story Generation: A Review on the METATRON Framework
by Hiram Calvo, Brian Herrera-González and Mayte H. Laureano
Mathematics 2025, 13(23), 3885; https://doi.org/10.3390/math13233885 - 4 Dec 2025
Cited by 1 | Viewed by 1090
Abstract
The human ability to imagine alternative realities has long supported reasoning, communication, and creativity through storytelling. By constructing hypothetical scenarios, people can anticipate outcomes, solve problems, and generate new knowledge. This link between imagination and reasoning has made storytelling an enduring topic in artificial intelligence, leading to the field of automatic story generation. Over the decades, different paradigms—symbolic, neural, and hybrid—have been proposed to address this task. This paper reviews key developments in story generation and identifies elements that can be integrated into a unified framework. Building on this analysis, we introduce the METATRON framework for neuro-symbolic generation of fiction stories. The framework combines a classical taxonomy of dramatic situations, used for symbolic narrative planning, with fine-tuned language models for text generation and coherence filtering. It also incorporates cognitive mechanisms such as episodic memory, emotional modeling, and narrative controllability, and explores multimodal extensions for text–image–audio storytelling. Finally, the paper discusses cognitively grounded evaluation methods, including theory-of-mind and creativity assessments, and outlines directions for future research. Full article
(This article belongs to the Special Issue Mathematical Foundations in NLP: Applications and Challenges)

35 pages, 2963 KB  
Article
Explainable Artificial Intelligence Framework for Predicting Treatment Outcomes in Age-Related Macular Degeneration
by Mini Han Wang
Sensors 2025, 25(22), 6879; https://doi.org/10.3390/s25226879 - 11 Nov 2025
Viewed by 1350
Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible blindness, yet current tools for forecasting treatment outcomes remain limited by either the opacity of deep learning or the rigidity of rule-based systems. To address this gap, we propose a hybrid neuro-symbolic and large language model (LLM) framework that combines mechanistic disease knowledge with multimodal ophthalmic data for explainable AMD treatment prognosis. In a pilot cohort of ten surgically managed AMD patients (six men, four women; mean age 67.8 ± 6.3 years), we collected 30 structured clinical documents and 100 paired imaging series (optical coherence tomography, fundus fluorescein angiography, scanning laser ophthalmoscopy, and ocular/superficial B-scan ultrasonography). Texts were semantically annotated and mapped to standardized ontologies, while images underwent rigorous DICOM-based quality control, lesion segmentation, and quantitative biomarker extraction. A domain-specific ophthalmic knowledge graph encoded causal disease and treatment relationships, enabling neuro-symbolic reasoning to constrain and guide neural feature learning. An LLM fine-tuned on ophthalmology literature and electronic health records ingested structured biomarkers and longitudinal clinical narratives through multimodal clinical-profile prompts, producing natural-language risk explanations with explicit evidence citations. On an independent test set, the hybrid model achieved AUROC 0.94 ± 0.03, AUPRC 0.92 ± 0.04, and a Brier score of 0.07, significantly outperforming purely neural and classical Cox regression baselines (p ≤ 0.01). Explainability metrics showed that >85% of predictions were supported by high-confidence knowledge-graph rules, and >90% of generated narratives accurately cited key biomarkers. 
A detailed case study demonstrated real-time, individualized risk stratification—for example, predicting an >70% probability of requiring three or more anti-VEGF injections within 12 months and a ~45% risk of chronic macular edema if therapy lapsed—with predictions matching the observed clinical course. These results highlight the framework’s ability to integrate multimodal evidence, provide transparent causal reasoning, and support personalized treatment planning. While limited by single-center scope and short-term follow-up, this work establishes a scalable, privacy-aware, and regulator-ready template for explainable, next-generation decision support in AMD management, with potential for expansion to larger, device-diverse cohorts and other complex retinal diseases. Full article
(This article belongs to the Special Issue Sensing Functional Imaging Biomarkers and Artificial Intelligence)

24 pages, 2761 KB  
Article
An Explainable AI Framework for Corneal Imaging Interpretation and Refractive Surgery Decision Support
by Mini Han Wang
Bioengineering 2025, 12(11), 1174; https://doi.org/10.3390/bioengineering12111174 - 28 Oct 2025
Cited by 1 | Viewed by 1385
Abstract
This study introduces an explainable neuro-symbolic and large language model (LLM)-driven framework for intelligent interpretation of corneal topography and precision surgical decision support. In a prospective cohort of 20 eyes, comprehensive IOLMaster 700 reports were analyzed through a four-stage pipeline: (1) automated extraction of key parameters—including corneal curvature, pachymetry, and axial biometry; (2) mapping of these quantitative features onto a curated corneal disease and refractive-surgery knowledge graph; (3) Bayesian probabilistic inference to evaluate early keratoconus and surgical eligibility; and (4) explainable multi-model LLM reporting, employing DeepSeek and GPT-4.0, to generate bilingual physician- and patient-facing narratives. By transforming complex imaging data into transparent reasoning chains, the pipeline delivered case-level outputs within ~95 ± 12 s. When benchmarked against independent evaluations by two senior corneal specialists, the framework achieved 92 ± 4% sensitivity, 94 ± 5% specificity, 93 ± 4% accuracy, and an AUC of 0.95 ± 0.03 for early keratoconus detection, alongside an F1 score of 0.90 ± 0.04 for refractive surgery eligibility. The generated bilingual reports were rated ≥4.8/5 for logical clarity, clinical usefulness, and comprehensibility, with representative cases fully concordant with expert judgment. Comparative benchmarking against baseline CNN and ViT models demonstrated superior diagnostic accuracy (AUC = 0.95 ± 0.03 vs. 0.88 and 0.90, p < 0.05), confirming the added value of the neuro-symbolic reasoning layer. All analyses were executed on a workstation equipped with an NVIDIA RTX 4090 GPU and implemented in Python 3.10/PyTorch 2.2.1 for full reproducibility. 
By explicitly coupling symbolic medical knowledge with advanced language models and embedding explainable artificial intelligence (XAI) principles throughout data processing, reasoning, and reporting, this framework provides a transparent, rapid, and clinically actionable AI solution. The approach holds significant promise for improving early ectatic disease detection and supporting individualized refractive surgery planning in routine ophthalmic practice. Full article
(This article belongs to the Special Issue Bioengineering and the Eye—3rd Edition)

24 pages, 3721 KB  
Article
Interactive Environment-Aware Planning System and Dialogue for Social Robots in Early Childhood Education
by Jiyoun Moon and Seung Min Song
Appl. Sci. 2025, 15(20), 11107; https://doi.org/10.3390/app152011107 - 16 Oct 2025
Viewed by 572
Abstract
In this study, we propose an interactive environment-aware dialog and planning system for social robots in early childhood education, aimed at supporting the learning and social interaction of young children. The proposed architecture consists of three core modules. First, semantic simultaneous localization and mapping (SLAM) accurately perceives the environment by constructing a semantic scene representation that includes attributes such as position, size, color, purpose, and material of objects, as well as their positional relationships. Second, the automated planning system enables stable task execution even in changing environments through planning domain definition language (PDDL)-based planning and replanning capabilities. Third, the visual question answering module leverages scene graphs and SPARQL conversion of natural language queries to answer children’s questions and engage in context-based conversations. The experiment conducted in a real kindergarten classroom with children aged 6 to 7 years validated the accuracy of object recognition and attribute extraction for semantic SLAM, the task success rate of the automated planning system, and the natural language question answering performance of the visual question answering (VQA) module. The experimental results confirmed the proposed system’s potential to support natural social interaction with children and its applicability as an educational tool. Full article
(This article belongs to the Special Issue Robotics and Intelligent Systems: Technologies and Applications)

36 pages, 1495 KB  
Review
Decision-Making for Path Planning of Mobile Robots Under Uncertainty: A Review of Belief-Space Planning Simplifications
by Vineetha Malathi, Pramod Sreedharan, Rthuraj P R, Vyshnavi Anil Kumar, Anil Lal Sadasivan, Ganesha Udupa, Liam Pastorelli and Andrea Troppina
Robotics 2025, 14(9), 127; https://doi.org/10.3390/robotics14090127 - 15 Sep 2025
Viewed by 4788
Abstract
Uncertainty remains a central challenge in robotic navigation, exploration, and coordination. This paper examines how Partially Observable Markov Decision Processes (POMDPs) and their decentralized variants (Dec-POMDPs) provide a rigorous foundation for decision-making under partial observability across tasks such as Active Simultaneous Localization and Mapping (A-SLAM), adaptive informative path planning, and multi-robot coordination. We review recent advances that integrate deep reinforcement learning (DRL) with POMDP formulations, highlighting improvements in scalability and adaptability as well as unresolved challenges of robustness, interpretability, and sim-to-real transfer. To complement learning-driven methods, we discuss emerging strategies that embed probabilistic reasoning directly into navigation, including belief-space planning, distributionally robust control formulations, and probabilistic graph models such as enhanced probabilistic roadmaps (PRMs) and Canadian Traveler Problem-based roadmaps. These approaches collectively demonstrate that uncertainty can be managed more effectively by coupling structured inference with data-driven adaptation. The survey concludes by outlining future research directions, emphasizing hybrid learning–planning architectures, neuro-symbolic reasoning, and socially aware navigation frameworks as critical steps toward resilient, transparent, and human-centered autonomy. Full article
(This article belongs to the Section Sensors and Control in Robotics)
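At the core of the belief-space planning this review surveys is the POMDP belief update, a discrete Bayes filter over hidden states. A minimal sketch follows; the two-state "free/blocked" world and all probability values are made up for illustration and do not come from the review:

```python
def belief_update(belief, action, observation, T, O):
    """Discrete POMDP belief update:
    b'(s') ∝ O[s'][observation] * sum_s T[s][action][s'] * b(s)."""
    n = len(belief)
    # Prediction step: push the belief through the transition model.
    predicted = [sum(T[s][action][s2] * belief[s] for s in range(n))
                 for s2 in range(n)]
    # Correction step: weight by the observation likelihood, then normalize.
    unnorm = [O[s2][observation] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Toy 2-state world: states {0: free, 1: blocked}, one action, binary sensor.
T = [  # T[s][a][s'] — transition probabilities (illustrative)
    [[0.9, 0.1]],
    [[0.2, 0.8]],
]
O = [  # O[s'][o] — observation likelihoods (illustrative)
    [0.8, 0.2],
    [0.3, 0.7],
]
b = belief_update([0.5, 0.5], action=0, observation=0, T=T, O=O)
```

Belief-space planners choose actions over distributions like `b` rather than over single states, which is exactly what makes the exact problem intractable and motivates the simplifications the review catalogs.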

17 pages, 1621 KB  
Article
Symmetric Graph-Based Visual Question Answering Using Neuro-Symbolic Approach
by Jiyoun Moon
Symmetry 2023, 15(9), 1713; https://doi.org/10.3390/sym15091713 - 7 Sep 2023
Cited by 1 | Viewed by 2307
Abstract
As the applications of robots expand across a wide variety of areas, high-level task planning considering human–robot interactions is emerging as a critical issue. Various elements that facilitate flexible responses to humans in an ever-changing environment, such as scene understanding, natural language processing, and task planning, are thus being researched extensively. In this study, a visual question answering (VQA) task was examined in detail from among an array of technologies. By further developing conventional neuro-symbolic approaches, environmental information is stored and utilized in a symmetric graph format, which enables more flexible and complex high-level task planning. We construct a symmetric graph composed of information such as color, size, and position for the objects constituting the environmental scene. VQA, using graphs, largely consists of a part expressing a scene as a graph, a part converting a question into SPARQL, and a part reasoning the answer. The proposed method was verified using a public dataset, CLEVR, with which it successfully performed VQA. We were able to directly confirm the process of inferring answers using SPARQL queries converted from the original queries and environmental symmetric graph information, which is distinct from existing methods that make it difficult to trace the path to finding answers. Full article
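The query path this abstract describes (scene graph → SPARQL-style conjunctive query → traceable answer) can be illustrated with a toy scene graph. The triples, predicate names, and question below are invented for illustration, not the paper's schema; in the actual pipeline the graph would live in an RDF store and be queried with SPARQL proper:

```python
# Toy scene graph as (subject, predicate, object) triples.
scene = {
    ("obj1", "color", "red"),
    ("obj1", "size", "large"),
    ("obj2", "color", "blue"),
    ("obj2", "leftOf", "obj1"),
}

def match(pattern, triples, binding=None):
    """Match a conjunctive triple pattern (SPARQL-style, '?x' = variable),
    yielding every consistent variable binding."""
    binding = binding or {}
    if not pattern:
        yield binding
        return
    head, rest = pattern[0], pattern[1:]
    for triple in triples:
        b = dict(binding)
        ok = True
        for p, t in zip(head, triple):
            if p.startswith("?"):
                if b.setdefault(p, t) != t:  # conflicting rebinding
                    ok = False
                    break
            elif p != t:
                ok = False
                break
        if ok:
            yield from match(rest, triples, b)

# "What color is the object to the left of the large object?"
query = [("?big", "size", "large"),
         ("?obj", "leftOf", "?big"),
         ("?obj", "color", "?color")]
answers = {b["?color"] for b in match(query, scene)}
```

Because the answer is derived by explicit pattern matching over named triples, every intermediate binding (`?big → obj1`, `?obj → obj2`) is inspectable, which is the traceability property the paper contrasts with opaque end-to-end VQA models.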

18 pages, 1438 KB  
Article
Plugin Framework-Based Neuro-Symbolic Grounded Task Planning for Multi-Agent System
by Jiyoun Moon
Sensors 2021, 21(23), 7896; https://doi.org/10.3390/s21237896 - 26 Nov 2021
Cited by 6 | Viewed by 4492
Abstract
As the roles of robots continue to expand in general, there is an increasing demand for research on automated task planning for a multi-agent system that can independently execute tasks in a wide and dynamic environment. This study introduces a plugin framework in which multiple robots can be involved in task planning in a broad range of areas by combining symbolic and connectionist approaches. The symbolic approach for understanding and learning human knowledge is useful for task planning in a wide and static environment. The network-based connectionist approach has the advantage of being able to respond to an ever-changing dynamic environment. A planning domain definition language-based planning algorithm, which is a symbolic approach, and the cooperative–competitive reinforcement learning algorithm, which is a connectionist approach, were utilized in this study. The proposed architecture is verified through a simulation. It is also verified through an experiment using 10 unmanned surface vehicles that the given tasks were successfully executed in a wide and dynamic environment. Full article
(This article belongs to the Special Issue Efficient Planning and Mapping for Multi-Robot Systems)
