Search Results (52)

Search Parameters:
Keywords = description logic reasoning

28 pages, 928 KB  
Review
Spatial and Temporal Knowledge Representation: Ontological Foundations, Semantic Web Standards
by Thomas Nipurakis, Stavroula Chatzinikolaou, Giannis Vassiliou and Nikolaos Papadakis
Electronics 2026, 15(8), 1590; https://doi.org/10.3390/electronics15081590 - 10 Apr 2026
Viewed by 461
Abstract
Spatial and temporal ontologies play a foundational role in modeling dynamic real-world phenomena across domains such as geographic information systems, artificial intelligence, and the Semantic Web. Although decades of research have advanced spatial reasoning, temporal logic, and ontology engineering, fully integrated spatio-temporal frameworks remain fragmented across disciplinary traditions. This paper presents a comprehensive review of spatial, temporal, and spatio-temporal ontologies, examining their conceptual foundations, formal logical models, and Semantic Web standards. The literature is analyzed to classify major modeling paradigms and to evaluate their theoretical assumptions, representational capabilities, and computational trade-offs. The review proposes a taxonomy distinguishing foundational ontologies, spatial-centric models, temporal-centric frameworks, and integrated spatio-temporal systems. Comparative discussion highlights tensions between logical expressiveness and scalability, as well as challenges related to interoperability and dynamic reasoning. The analysis identifies persistent gaps, including limited native temporal support in description logics, complexity in modeling evolving spatial relations, the absence of unified spatio-temporal standards, and the lack of standardized evaluation benchmarks. The paper concludes by outlining research directions focused on hybrid ontology–knowledge graph architectures, multi-scale modeling, event-driven semantics, and neuro-symbolic integration. By synthesizing theoretical and applied perspectives, this review provides a structured foundation for advancing interoperable and scalable spatio-temporal knowledge systems capable of supporting next-generation intelligent applications. Full article
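Temporal reasoning of the kind this review surveys frequently builds on Allen's interval algebra. As a minimal illustration (not drawn from the paper itself), the 13 basic Allen relations between two intervals can be classified directly from endpoint comparisons:

```python
def allen_relation(a, b):
    """Classify the Allen interval relation between intervals
    a = (a_start, a_end) and b = (b_start, b_end), each with start < end.
    Returns one of the 13 basic relations as a string."""
    (as_, ae), (bs, be) = a, b
    if ae < bs:
        return "before"
    if be < as_:
        return "after"
    if ae == bs:
        return "meets"
    if be == as_:
        return "met-by"
    if (as_, ae) == (bs, be):
        return "equals"
    if as_ == bs:                       # shared start, different ends
        return "starts" if ae < be else "started-by"
    if ae == be:                        # shared end, different starts
        return "finishes" if as_ > bs else "finished-by"
    if bs < as_ and ae < be:            # a strictly inside b
        return "during"
    if as_ < bs and be < ae:            # b strictly inside a
        return "contains"
    return "overlaps" if as_ < bs else "overlapped-by"
```

A qualitative calculus like this underlies many of the temporal-centric frameworks the taxonomy covers; the spatial analogue is the RCC family of region relations.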

28 pages, 2025 KB  
Article
DL-ReasonSuite: A Benchmark for Evaluating Description Logic Reasoning in Large Language Models
by Müge Oluçoğlu and Okan Bursa
Appl. Sci. 2026, 16(4), 1821; https://doi.org/10.3390/app16041821 - 12 Feb 2026
Viewed by 994
Abstract
Large language models (LLMs) have shown remarkable progress in general reasoning and understanding, but their ability to perform formal logical reasoning remains under-explored. In this paper, we introduce DL-ReasonSuite, a novel benchmark designed to rigorously evaluate LLMs on reasoning tasks grounded in Description Logic (DL). DL-ReasonSuite comprises 4740 tasks spanning seven distinct task types and organized into three reasoning tracks: (1) DLCore, covering fundamental ontology reasoning tasks (consistency checking, subsumption, and instance checking); (2) DLQuery, focusing on answering entailment-aware SPARQL queries; and (3) DLBridge, bridging natural language and formal logic (bidirectional NL ↔ OWL translation and tool-augmented entailment resolution). We detail the methodology for designing and implementing this benchmark, including task construction, automatic evaluation metrics, and validation using reliable OWL reasoners. We then present an empirical evaluation of five state-of-the-art reasoning LLMs: Kimi k1.5, Llama-Nemotron Ultra, DeepSeek-R1, Phi-4 Reasoning Plus, and Phi-4 Reasoning on the full suite of tasks. Our results reveal significant variability in LLM performance on formal reasoning. While the best model, Phi-4 Reasoning Plus, achieves an overall accuracy of 85% and excels especially in tool-augmented tasks, other models struggle notably with complex DL query reasoning and precise OWL translation. We analyze the strengths and weaknesses of each model across different DL metrics and task categories, providing insights into current limitations of LLM reasoning, such as handling SPARQL queries and maintaining logical consistency, and into the benefits of neuro-symbolic techniques. DL-ReasonSuite is a comprehensive framework for assessing and advancing LLMs’ Description Logic reasoning capabilities, aiming to bridge the gap between natural language understanding and formal knowledge representation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
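The subsumption and instance-checking tasks in the DLCore track can be sketched in miniature. The toy closure computation below is illustrative only; it is not the benchmark's evaluation harness, which validates answers with full OWL reasoners:

```python
def subsumption_closure(axioms):
    """Transitive closure of atomic subclass axioms, given as pairs
    (C, D) meaning C is subsumed by D -- a toy stand-in for the
    subsumption checks a DL reasoner performs."""
    closure = set(axioms)
    changed = True
    while changed:
        changed = False
        for (c, d) in list(closure):
            for (d2, e) in list(closure):
                if d == d2 and (c, e) not in closure:
                    closure.add((c, e))
                    changed = True
    return closure

def instance_check(individual, concept, abox, closure):
    """Instance checking: is concept(individual) entailed by the ABox
    type assertions plus the subsumption closure?"""
    return any(c == concept or (c, concept) in closure
               for (i, c) in abox if i == individual)

tbox = {("Student", "Person"), ("Person", "Agent")}
closure = subsumption_closure(tbox)
abox = {("alice", "Student")}
entailed = instance_check("alice", "Agent", abox, closure)  # True
```

Real DL reasoning additionally handles complex class expressions, roles, and consistency checking, which is exactly the gap between this sketch and the reasoner-validated ground truth the benchmark relies on.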

26 pages, 8396 KB  
Article
Temporal Knowledge Graph Reasoning: Completion with Semantic–Structural Fusion and Forecasting with an Interpretable Dual Decoder
by Wenchao Gao, Haoyang Wang and Hengyu Yang
Symmetry 2026, 18(2), 328; https://doi.org/10.3390/sym18020328 - 11 Feb 2026
Viewed by 664
Abstract
Temporal knowledge graphs (TKGs) effectively represent dynamic facts by incorporating a temporal dimension, yet they frequently encounter data incompleteness issues that constrain downstream applications. Concurrently, TKG prediction tasks, which enable reasoning about future events, have garnered significant attention. Existing TKG completion methods often neglect semantic information, underexploit event information from subsequent timestamps, and fail to leverage the structural symmetries inherent in temporal data. To address these limitations, this paper proposes a synergistic approach comprising two models: SiSe for completion and DL-CompGCN for prediction. SiSe integrates semantic and structural embeddings by employing entity text descriptions as semantic signals, utilizing symmetric cross-attention for bidirectional feature fusion and leveraging bidirectional gated recurrent units to capture symmetric temporal influences from both past and future events. On ICEWS14, ICEWS05-15, and GDELT completion datasets, the MRR improves by 1.2, 1.4, and 0.8 percentage points, respectively. DL-CompGCN addresses the accuracy–interpretability trade-off in prediction tasks through a time-aware graph convolutional encoder and a dual-decoder framework that combines bilinear scoring with first-order logical rules to generate interpretable paths while preserving the symmetric properties of temporal relations. It achieves state-of-the-art performance on ICEWS14, ICEWS05-15, and ICEWS18 prediction datasets. The proposed models explicitly incorporate symmetric principles in their architectural design; SiSe employs symmetric bidirectional temporal modeling, while DL-CompGCN maintains symmetry in its graph propagation and rule inference mechanisms. The experimental results demonstrate that both models significantly outperform baseline methods, offering a comprehensive solution for temporal knowledge graph reasoning that respects and exploits the symmetric structures inherent in temporal data. Full article
(This article belongs to the Section Computer)
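The MRR figures quoted in the abstract follow the standard definition of mean reciprocal rank; a minimal sketch (the dataset-specific filtered-ranking protocol is omitted):

```python
def mean_reciprocal_rank(ranks):
    """Mean reciprocal rank over the 1-based ranks assigned to the
    true entities across test queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Three queries whose true entities were ranked 1st, 2nd, and 4th:
mrr = mean_reciprocal_rank([1, 2, 4])  # (1 + 1/2 + 1/4) / 3
```

A 1.2-percentage-point MRR gain, as reported for ICEWS14, corresponds to a 0.012 increase in this quantity averaged over the test set.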

26 pages, 1707 KB  
Article
Axiom Generation for Automated Ontology Construction from Texts Through Schema Mapping
by Tsitsi Zengeya, Jean Vincent Fonou-Dombeu and Mandlenkosi Gwetu
Mach. Learn. Knowl. Extr. 2026, 8(2), 29; https://doi.org/10.3390/make8020029 - 26 Jan 2026
Viewed by 1133
Abstract
Ontology learning from unstructured text has become a critical task for knowledge-driven applications in Big Data and Artificial Intelligence. While significant advances have been made in the automatic extraction of concepts and relations using neural and Transformer-based models, the generation of formal Description Logic axioms required for constructing logically consistent and computationally tractable ontologies remains largely underexplored. This paper puts forward a novel pipeline for automated axiom generation through schema mapping. Our paper introduces three key innovations: a deterministic mapping framework that guarantees logical consistency (unlike stochastic Large Language Models); guaranteed formal consistency verified by OWL reasoners (unaddressed by prior statistical methods); and a transparent, scalable bridge from neural extractions to symbolic logic, eliminating manual post-processing. Technically, the pipeline builds upon the outputs of a Transformer-based fusion model for joint concept and relation extraction. We then map lexical relational phrases to formal ontological properties through a lemmatization-based schema alignment step. Entity typing and hierarchical induction are then employed to infer class structures, as well as domain and range constraints. Using RDFLib and structured data processing, we transform the extracted triples into both assertional (ABox) and terminological (TBox) axioms expressed in Description Logic. Experimental evaluation on benchmark datasets (Conll04 and NYT) demonstrates the efficacy of the approach, with expert validation showing high acceptance rates (>95%) and reasoners confirming zero inconsistencies. The pipeline thus establishes a reliable, scalable foundation for automated ontology learning, advancing the field from extraction to formally verifiable knowledge base construction. Full article
(This article belongs to the Section Data)
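The schema-mapping step can be pictured with a deliberately simplified stand-in: plain tuples rather than the paper's RDFLib pipeline, with property-naming and typing rules that are hypothetical, not the authors':

```python
RDF_TYPE = "rdf:type"

def to_axioms(extracted):
    """Map (subject, relation_phrase, object, subj_type, obj_type)
    tuples -- the shape a joint concept/relation extractor might
    emit -- to TBox and ABox triples (illustrative schema only)."""
    tbox, abox = set(), set()
    for subj, rel, obj, s_type, o_type in extracted:
        prop = rel.lower().replace(" ", "_")      # stand-in for lemmatized alignment
        abox.add((subj, prop, obj))               # assertional (ABox) axiom
        abox.add((subj, RDF_TYPE, s_type))        # entity typing
        abox.add((obj, RDF_TYPE, o_type))
        tbox.add((prop, "rdfs:domain", s_type))   # inferred domain constraint
        tbox.add((prop, "rdfs:range", o_type))    # inferred range constraint
    return tbox, abox

tbox, abox = to_axioms(
    [("Curie", "worked at", "Sorbonne", "Person", "Organization")])
```

In the actual pipeline the resulting axioms are serialized with RDFLib and handed to an OWL reasoner, which is how the reported zero-inconsistency result is verified.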

18 pages, 1947 KB  
Article
Traffic Accident Severity Prediction via Large Language Model-Driven Semantic Feature Enhancement
by Jianuo Hao, Fengze Fan and Xin Fu
Vehicles 2026, 8(1), 20; https://doi.org/10.3390/vehicles8010020 - 15 Jan 2026
Viewed by 1340
Abstract
Predicting the severity of traffic accidents remains challenging due to the limited ability of existing methods to extract deep semantic information from unstructured accident narratives, as traditional approaches typically depend on structured data alone. This study proposes a severity prediction approach enhanced by semantic risk reasoning derived from large language models (LLMs). A prompt-engineering template is designed to guide LLMs in extracting proxy semantic features from accident descriptions, forming an enriched feature set that incorporates causal logic. These semantic features are fused with traditional structured features through three integration strategies—direct feature concatenation, optimized feature selection, and model-level fusion. Experiments based on 4013 accident records from expressways in Yunnan Province, China, demonstrate that models using LLM-derived semantic features significantly outperform those relying solely on structured features. Notably, the LightGBM model utilizing semantic features within a balanced learning framework achieves a severe accident recall of 77.8%. While model-level fusion proves optimal for XGBoost (improving Macro-F1 to 0.6356), we identify a “feature dilution” effect in other classifiers, where high-quality semantic reasoning is compromised by low-quality structured noise. These findings indicate that the proposed approach effectively enhances the identification of high-risk accidents and offers a novel semantic-aware solution for traffic safety management. Furthermore, the obtained results provide actionable insights for traffic management agencies to optimize emergency response resource allocation and formulate targeted accident prevention strategies. Full article
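A prompt template of the kind described might look like the sketch below. The template wording, the three risk features, and the concatenation order are all invented here for illustration; they are not reproduced from the paper:

```python
# Hypothetical prompt template for extracting proxy semantic features
# from an accident narrative (feature names are illustrative).
PROMPT = """You are a traffic-safety analyst. From the accident narrative
below, rate each risk factor from 0 (absent) to 2 (severe) and answer
as JSON with keys: speeding_risk, visibility_risk, vehicle_defect_risk.

Narrative: {narrative}"""

def build_prompt(narrative):
    return PROMPT.format(narrative=narrative)

def fuse_features(structured, semantic):
    """Direct feature concatenation -- one of the three integration
    strategies the study compares (the other two being optimized
    feature selection and model-level fusion)."""
    return structured + [semantic[k] for k in sorted(semantic)]

row = fuse_features([1, 0, 3], {"speeding_risk": 2,
                                "visibility_risk": 1,
                                "vehicle_defect_risk": 0})
```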

31 pages, 1440 KB  
Article
From Reliability Modelling to Cognitive Orchestration: A Paradigm Shift in Aircraft Predictive Maintenance
by Igor Kabashkin and Timur Tyncherov
Mathematics 2026, 14(1), 76; https://doi.org/10.3390/math14010076 - 25 Dec 2025
Viewed by 579
Abstract
This study formulates predictive maintenance of complex technical systems as a constrained multi-layer probabilistic optimization problem that unifies four interdependent analytical paradigms. The mathematical framework integrates: (i) Weibull reliability modelling with parametric lifetime estimation; (ii) Bayesian posterior updating for dynamic adaptation under uncertainty; (iii) nonlinear machine-learning inference for data-driven pattern recognition; and (iv) ontology-based semantic reasoning governed by logical axioms and domain-specific constraints. The four layers are synthesized through a formal orchestration operator, defined as a sequential composition, where each sub-operator is governed by explicit mathematical constraints: Weibull cumulative distribution functions, Bayesian likelihood-posterior relationships, gradient-based loss minimization, and description logic entailment. The system operates within a cognitive digital twin architecture, with orchestration convergence formalized through iterative parameter refinement until consistency between numerical predictions and semantic validation is achieved. The framework is validated through a case study on aircraft wheel-hub crack prediction. The mathematical formulation establishes a rigorous analytical foundation for cognitive predictive maintenance systems applicable to safety-critical technical systems including aerospace, energy infrastructure, transportation networks, and industrial machinery. Full article
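Layers (i) and (ii) rest on standard formulas; a minimal numeric sketch, with parameter values that are illustrative rather than taken from the wheel-hub case study:

```python
import math

def weibull_cdf(t, k, lam):
    """Weibull failure probability F(t) = 1 - exp(-(t/lam)^k),
    the parametric lifetime model of layer (i)."""
    return 1.0 - math.exp(-((t / lam) ** k))

def bayes_update(prior, likelihoods):
    """Discrete Bayesian posterior updating (layer ii): renormalize
    prior * likelihood over candidate health states."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

# Failure probability at t = 500 h with shape k = 2 and scale 1000 h:
p_fail = weibull_cdf(500, k=2.0, lam=1000.0)        # 1 - exp(-0.25)
# Prior over {healthy, cracked} revised by an inspection likelihood:
posterior = bayes_update([0.7, 0.3], [0.2, 0.9])
```

In the framework these two sub-operators feed the machine-learning and ontology layers, with the orchestration operator iterating until the numerical prediction and the semantic validation agree.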

21 pages, 1410 KB  
Article
Measure Student Aptitude in Learning Programming in Higher Education—A Data Analysis
by João Pires, Ana Rosa Borges, Jorge Bernardino, Fernanda Brito Correia and Anabela Gomes
Computers 2025, 14(10), 428; https://doi.org/10.3390/computers14100428 - 9 Oct 2025
Viewed by 1194
Abstract
Analyzing student performance in Introductory Programming courses in Higher Education is critical for early intervention and improved learning outcomes. This study explores the potential of a cognitive test for student success in an Introductory Programming course by analyzing data from 180 students, including Freshmen and Repeating Students, using descriptive statistics, correlation analysis, Categorical Principal Component Analysis and Item Response Theory models analysis. Analysis of the cognitive test revealed that some reasoning questions presented a statistically significant correlation, albeit of weak magnitude, with the course grades, particularly for freshman students. The development of models for predicting student performance in Introductory Programming using cognitive tests is also being explored. This study found that reasoning skills, namely logical reasoning and sequence completion, were more predictive of success in programming than general ability. The study also showed that a Programming Cognitive Test can be a useful tool for identifying students at risk of failure, particularly for freshmen students. Full article

36 pages, 7369 KB  
Article
Ontology-Driven Digital Twin Framework for Aviation Maintenance and Operations
by Igor Kabashkin
Mathematics 2025, 13(17), 2817; https://doi.org/10.3390/math13172817 - 2 Sep 2025
Cited by 4 | Viewed by 3438
Abstract
This paper presents a novel ontology-driven digital twin framework specifically designed for aviation maintenance and operations that addresses these challenges through semantic reasoning and explainable decision support. The proposed framework integrates seven interconnected ontologies—structural, functional, behavioral, monitoring, maintenance, lifecycle, and environmental. It collectively provides a comprehensive semantic representation of aircraft systems and their operational context. Each ontology is mathematically formalized using description logics and graph theory, creating a unified knowledge graph that enables transparent, traceable reasoning from sensor observations to maintenance decisions. The digital twin is formally defined as a 6-tuple that incorporates semantic transformation engines, cross-ontology mappings, and dynamic reasoning mechanisms. Unlike traditional data-driven approaches that operate as black boxes, the ontology-driven framework provides explainable inference capabilities essential for regulatory compliance and safety certification in aviation. The semantic foundation enables causal reasoning, rule-based validation, and context-aware maintenance recommendations while supporting standardization and interoperability across manufacturers, airlines, and regulatory bodies. The research contributes a mathematically grounded, semantically transparent framework that bridges the gap between domain knowledge and operational data in aviation maintenance. This work establishes the foundation for next-generation cognitive maintenance systems that can support intelligent, adaptive, and trustworthy operations in modern aviation ecosystems. Full article

23 pages, 2095 KB  
Article
A Unified Theoretical Analysis of Geometric Representation Forms in Descriptive Geometry and Sparse Representation Theory
by Shuli Mei
Mathematics 2025, 13(17), 2737; https://doi.org/10.3390/math13172737 - 26 Aug 2025
Viewed by 1882
Abstract
The primary distinction between technical design and engineering design lies in the role of analysis and optimization. From its inception, descriptive geometry has supported military and engineering applications, and its graphical rules inherently reflect principles of optimization—similar to the core ideas of sparse representation and compressed sensing. This paper explores the geometric and mathematical significance of the center line in symmetrical objects and the axis of rotation in solids of revolution, framing these elements within the theory of sparse representation. It further establishes rigorous correspondences between geometric primitives—points, lines, planes, and symmetric solids—and their sparse representations in descriptive geometry. By re-examining traditional engineering drawing techniques from the perspective of optimization analysis, this study reveals the hidden mathematical logic embedded in geometric constructions. The findings not only support the deeper integration of mathematical reasoning in engineering education but also provide an intuitive framework for teaching abstract concepts such as sparsity and signal reconstruction. This work contributes to interdisciplinary understanding between descriptive geometry, mathematical modeling, and engineering pedagogy. Full article

20 pages, 653 KB  
Article
Intensional Conceptualization Model and Its Language for Open Distributed Environments
by Khaled Badawy, Aleksander Essex and AbdulMutalib Wahaishi
AppliedMath 2025, 5(3), 109; https://doi.org/10.3390/appliedmath5030109 - 25 Aug 2025
Viewed by 854
Abstract
This paper introduces the Intensional Conceptualization Model for Open Environments (ICMOE), a formal framework designed to enable semantic integration in dynamic and distributed systems. Grounded in intensional logic and formalized via a domain-specific language (ICMOE-L) built on Description Logic (DL), the model distinguishes between intensional and extensional semantics, allowing structured representation and evolution of concepts, relations, and domain rules under the open world assumption. ICMOE supports advanced semantic reasoning through an interpretation function that bridges relational data and ontological structures. A formal complexity analysis shows that reasoning with ICMOE-L has a worst-case complexity of O(n), where n is the total number of TBox and ABox axioms. To validate its effectiveness, ICMOE is evaluated using both qualitative and quantitative metrics. The model achieves a Concept Coverage score of 0.94, Semantic Depth of 0.89, Dynamic Adaptability Index of 0.91, Semantic Rule Density of 0.85, and Ontology Alignment Efficiency of 0.88. These results demonstrate ICMOE’s superior scalability, semantic richness, and adaptability when compared to foundational models such as those by Guarino and Bealer—making it a robust solution for open distributed environments. Full article

25 pages, 15383 KB  
Article
SplitGround: Long-Chain Reasoning Split via Modular Multi-Expert Collaboration for Training-Free Scene Knowledge-Guided Visual Grounding
by Xilong Qin, Yue Hu, Wansen Wu, Xinmeng Li and Quanjun Yin
Big Data Cogn. Comput. 2025, 9(8), 209; https://doi.org/10.3390/bdcc9080209 - 14 Aug 2025
Viewed by 1331
Abstract
Scene Knowledge-guided Visual Grounding (SK-VG) is a multi-modal detection task built upon conventional visual grounding (VG) for human–computer interaction scenarios. It utilizes an additional passage of scene knowledge apart from the image and context-dependent textual query for referred object localization. Due to the inherent difficulty in directly establishing correlations between the given query and the image without leveraging scene knowledge, this task imposes significant demands on a multi-step knowledge reasoning process to achieve accurate grounding. Off-the-shelf VG models underperform under such a setting due to the requirement of detailed description in the query and a lack of knowledge inference based on implicit narratives of the visual scene. Recent Vision–Language Models (VLMs) exhibit improved cross-modal reasoning capabilities. However, their monolithic architectures, particularly in lightweight implementations, struggle to maintain coherent reasoning chains across sequential logical deductions, leading to error accumulation in knowledge integration and object localization. To address the above-mentioned challenges, we propose SplitGround—a collaborative framework that strategically decomposes complex reasoning processes by fusing the input query and image with knowledge through two auxiliary modules. Specifically, it implements an Agentic Annotation Workflow (AAW) for explicit image annotation and a Synonymous Conversion Mechanism (SCM) for semantic query transformation. This hierarchical decomposition enables VLMs to focus on essential reasoning steps while offloading auxiliary cognitive tasks to specialized modules, effectively splitting long reasoning chains into manageable subtasks with reduced complexity. Comprehensive evaluations on the SK-VG benchmark demonstrate the significant advancements of our method. Remarkably, SplitGround attains an accuracy improvement of 15.71% on the hard split of the test set over the previous training-required SOTA, using only a compact VLM backbone without fine-tuning, which provides new insights for knowledge-intensive visual grounding tasks. Full article

18 pages, 232 KB  
Article
Reason and Revelation in Ibn Taymiyyah’s Critique of Philosophical Theology: A Contribution to Contemporary Islamic Philosophy of Religion
by Adeeb Obaid Alsuhaymi and Fouad Ahmed Atallah
Religions 2025, 16(7), 809; https://doi.org/10.3390/rel16070809 - 20 Jun 2025
Cited by 1 | Viewed by 9255
Abstract
This paper addresses the longstanding tension between reason and revelation in Islamic religious epistemology, with a focus on the thought of Ibn Taymiyyah (d. 728/1328). It aims to reassess his critique of philosophical theology (falsafa and kalām) and explore his constructive alternative to rationalist metaphysics. The study adopts a descriptive–analytical methodology, combining close textual reading of Darʾ Taʿāruḍ al-ʿAql wa al-Naql and Naqd al-Manṭiq with conceptual analysis informed by contemporary religious epistemology and philosophy of religion. The findings reveal that Ibn Taymiyyah advances a triadic epistemological model centered on revelation (naql), reason (ʿaql), and innate disposition (fiṭrah). He refutes the autonomy of reason, redefines logic as a tool rather than a judge, and repositions fiṭrah as an intuitive foundation for belief. His approach emphasizes the harmony of sound reason with authentic revelation and challenges the epistemic assumptions of speculative theology. By presenting a comparative table of rationalist and Taymiyyan epistemologies, the study demonstrates how Ibn Taymiyyah’s framework anticipates key themes in Reformed Epistemology and the cognitive science of religion. The conclusions suggest that his vision offers a coherent, theocentric paradigm for religious knowledge that is highly relevant to the contemporary philosophy of religion and Islamic theology. Full article
(This article belongs to the Special Issue Problems in Contemporary Islamic Philosophy of Religion)

26 pages, 2192 KB  
Article
Exploring the Joint Influence of Built Environment Factors on Urban Rail Transit Peak-Hour Ridership Using DeepSeek
by Zhuorui Wang, Xiaoyu Zheng, Fanyun Meng, Kang Wang, Xincheng Wu and Dexin Yu
Buildings 2025, 15(10), 1744; https://doi.org/10.3390/buildings15101744 - 21 May 2025
Cited by 5 | Viewed by 2413
Abstract
Modern cities are facing increasing challenges such as traffic congestion, high energy consumption, and poor air quality, making rail transit systems, known for their high capacity and low emissions, essential components of sustainable urban infrastructure. While numerous studies have examined how the built environment impacts transit ridership, the complex interactions among these factors warrant further investigation. Recent advancements in the reasoning capabilities of large language models (LLMs) offer a robust methodological foundation for analyzing the complex joint influence of multiple built environment factors. LLMs not only can comprehend the physical meaning of variables but also exhibit strong non-linear modeling and logical reasoning capabilities. This study introduces an LLM-based framework to examine how built environment factors and station characteristics shape the transit ridership dynamics by utilizing DeepSeek-R1. We develop a 4D + N variable system for a more nuanced description of the built environment of the station area which includes density, diversity, design, destination accessibility, and station characteristics, leveraging multi-source data such as points of interest (POIs), road network data, housing prices, and population data. Then, the proposed approach is validated using data from Qingdao, China, examining both single-factor and multi-factor effects on transit peak-hour ridership at the macro level (across all stations) and the meso level (specific station types). First, the variables that have a substantial effect on peak-hour transit ridership at both the macro and meso levels are discussed. Second, key and latent factor combinations are identified. Notably, some factors may appear to have limited importance at the macro level, yet they can substantially influence the peak-hour ridership when interacting with other factors. Our findings enable policymakers to formulate a balanced mix of soft and hard policies, such as integrating a flexitime policy with enhancements in active travel infrastructure to increase the attractiveness of public transit. The proposed analytical framework is adaptable across regions and applicable to various transportation modes. These insights can guide transportation managers and policymakers while optimizing Transit-Oriented Development (TOD) strategies to enhance the sustainability of the entire transportation system. Full article
(This article belongs to the Special Issue Advanced Studies in Urban and Regional Planning—2nd Edition)

23 pages, 3133 KB  
Article
Integrating Textual Queries with AI-Based Object Detection: A Compositional Prompt-Guided Approach
by Silvan Ferreira, Allan Martins, Daniel G. Costa and Ivanovitch Silva
Sensors 2025, 25(7), 2258; https://doi.org/10.3390/s25072258 - 3 Apr 2025
Cited by 1 | Viewed by 1499
Abstract
While object detection and recognition have been extensively adopted by many applications in decision-making, new algorithms and methodologies have emerged to enhance the automatic identification of target objects. In particular, the rise of deep learning and language models has opened many possibilities in [...] Read more.
While object detection and recognition have been extensively adopted in decision-making applications, new algorithms and methodologies have emerged to enhance the automatic identification of target objects. In particular, the rise of deep learning and language models has opened many possibilities in this area, although challenges in contextual query analysis and human interaction persist. This article presents a novel neuro-symbolic object detection framework that aligns object proposals with textual prompts using a deep learning module while enabling logical reasoning through a symbolic module. By integrating deep learning with symbolic reasoning, object detection and scene understanding are considerably enhanced, enabling complex, query-driven interactions. Experiments on a synthetic 3D image dataset demonstrate that the framework generalizes effectively to complex queries, combining simple attribute-based descriptions without explicit training on compound prompts. We present numerical results and a comprehensive discussion, highlighting the potential of our approach for emerging smart applications. Full article
(This article belongs to the Special Issue Digital Imaging Processing, Sensing, and Object Recognition)
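The compositional idea in the abstract above, combining simple attribute-based descriptions into compound queries, can be sketched on the symbolic side alone. This is a generic illustration, not the paper's implementation: the neural alignment step is abstracted away, and each object proposal is represented as a plain attribute dictionary assumed to come from an upstream detector.

```python
def make_predicate(attribute, value):
    """Simple predicate over one attribute of a proposal dict."""
    return lambda proposal: proposal.get(attribute) == value

def conjoin(*predicates):
    """Logical AND of simple predicates: a compound query such as
    'small red cube' is composed from parts, with no training on
    the compound prompt itself."""
    return lambda proposal: all(p(proposal) for p in predicates)

def select(proposals, predicate):
    """Return the proposals that satisfy the (possibly compound) query."""
    return [p for p in proposals if predicate(p)]
```

The symbolic module's contribution is exactly this compositionality: any conjunction of attribute predicates is a valid query, so coverage grows combinatorially from a small set of learned attributes.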
18 pages, 9282 KB  
Article
Parametric Analysis as a Tool for Hypothesis Generation: A Case Study of the Federal Archive Building in New York City
by Mike Christenson
Infrastructures 2025, 10(4), 71; https://doi.org/10.3390/infrastructures10040071 - 24 Mar 2025
Cited by 3 | Viewed by 1409
Abstract
This study investigates the epistemological potentials of parametric analysis for digitally modeling ordinary, existing buildings, addressing a gap in architectural research. While traditional digital modeling prioritizes geometric accuracy, it often limits the ability to generate new architectural insights, treating models as static representations [...] Read more.
This study investigates the epistemological potential of parametric analysis for digitally modeling ordinary, existing buildings, addressing a gap in architectural research. While traditional digital modeling prioritizes geometric accuracy, it often limits the ability to generate new architectural insights, treating models as static representations rather than as tools for knowledge production. This research challenges the assumption that geometric accuracy is necessary for epistemological validity, proposing parametric analysis as a hypothesis-generating tool capable of uncovering latent spatial and morphological properties that conventional methods overlook. Using Suárez's inferential conception of scientific representation as a theoretical framework, this research employs a comparative case study methodology, contrasting direct and parametric digital models of the Federal Archive Building in New York City and analyzing their respective contributions to architectural knowledge. Existing documentation of the Federal Archive Building provides the primary data. The findings reveal that parametric modeling can enable the discovery of latent design properties by facilitating the systematic exploration of geometric variations while maintaining other logics, specifically by demonstrating how certain architectural features accommodate site irregularities while preserving visual coherence. This research advances theoretical discourse by repositioning parametric models from descriptive artifacts to instruments of architectural reasoning, challenging conventional associations between representational accuracy and epistemological validity. Practical applications are suggested in heritage documentation, comparative architectural analysis, and educational contexts where the interpretive exploration of buildings can generate new insights beyond what geometrically accurate models alone can provide. Full article
(This article belongs to the Special Issue Modern Digital Technologies for the Built Environment of the Future)