
Search Results (468)

Search Parameters:
Keywords = semantic variability

27 pages, 7975 KB  
Article
Identification and Prediction of the Invasion Pattern of the Mikania micrantha with WaveEdgeNet Model Using UAV-Based Images in Shenzhen
by Hui Lin, Yang Yin, Xiaofen He, Jiangping Long, Tingchen Zhang, Zilin Ye and Xiaojia Deng
Remote Sens. 2026, 18(3), 437; https://doi.org/10.3390/rs18030437 - 30 Jan 2026
Abstract
Mikania micrantha is one of the most detrimental invasive plant species in the southeastern coastal region of China. To accurately predict its invasion pattern and offer guidance for production practices, it is essential to determine its precise location and driving factors. Therefore, wavelet convolution and a dynamic feature fusion module were designed, and WaveEdgeNet was proposed. This model can deeply extract image semantic features, retain them, and perform multi-scale segmentation and fusion. Moreover, to quantify the impact of human and natural factors, we developed a novel proximity factor based on land use data. Additionally, a new feature selection framework was applied to identify driving factors by analyzing the relationships between environmental variables and Mikania micrantha. Finally, the MaxEnt model was utilized to forecast its potential future habitats. The results demonstrate that WaveEdgeNet effectively extracts image features and improves model performance, attaining an MIoU of 85% and an overall accuracy of 98.62%, outperforming existing models. Spatial analysis shows that the invaded area in 2024 was smaller than that in 2023, indicating that human intervention measures have achieved some success. Furthermore, the feature selection framework not only enhances MaxEnt’s accuracy but also cuts computational time by 82.61%. According to MaxEnt modeling, human disturbance, proximity to forests, distance from roads, and elevation are recognized as the primary driving factors. In future work, we will concentrate on overcoming seasonal limitations and predicting the growth and reproduction of Mikania micrantha in advance, which can offer a foundation for manual intervention.
This study lays a solid technical foundation and offers comprehensive data support for comprehending the species’ dispersal patterns and driving factors and for guiding environmental conservation. Full article
(This article belongs to the Section Forest Remote Sensing)
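
The two headline metrics in this abstract, mean IoU and overall accuracy, are easy to state precisely. A minimal sketch (not the authors' code; the 2x2 confusion matrix is purely illustrative):

```python
def miou_and_accuracy(cm):
    """cm[i][j] = number of pixels of true class i predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    ious = []
    for c in range(n):
        tp = cm[c][c]                                # true positives for class c
        fp = sum(cm[r][c] for r in range(n)) - tp    # predicted c, but other class
        fn = sum(cm[c]) - tp                         # class c, predicted otherwise
        if tp + fp + fn:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious), correct / total

miou, acc = miou_and_accuracy([[90, 10], [5, 95]])
```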

32 pages, 491 KB  
Article
Complexity Assessments for Decidable Fragments of Set Theory. IV: A Quadratic Reduction from Constraints over Nested Sets to Boolean Formulae
by Domenico Cantone, Andrea De Domenico, Pietro Maugeri and Eugenio G. Omodeo
Foundations 2026, 6(1), 3; https://doi.org/10.3390/foundations6010003 - 30 Jan 2026
Abstract
As a contribution to automated set-theoretic inferencing, a translation is proposed of conjunctions of literals of the forms x=y∖z, x≠y∖z, and z={x}, where x,y,z stand for variables ranging over the von Neumann universe of sets, into quantifier-free Boolean formulae of a rather simple conjunctive normal form. The formulae in the target language involve variables ranging over a Boolean ring of sets, along with a difference operator and relators designating equality, non-disjointness, and inclusion. Moreover, the result of each translation is a conjunction of literals of the forms x=y∖z and x≠y∖z and of implications whose antecedents are isolated literals and whose consequents are either inclusions (strict or non-strict) between variables, or equalities between variables. Besides reflecting a simple and natural semantics, which ensures satisfiability preservation, the proposed translation has quadratic algorithmic time complexity and bridges two languages, both of which are known to have an NP-complete satisfiability problem. Full article
(This article belongs to the Section Mathematical Sciences)
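
To make the target language concrete, here is a hypothetical toy model of its ingredients, not taken from the paper: sets over a small finite universe encoded as bitmasks, with the difference operator and the equality, non-disjointness, and inclusion relators the abstract names. The example constraint is ours.

```python
# Sets over a 3-element universe {0, 1, 2} as 3-bit masks.
def diff(x, y):      return x & ~y          # set difference y-removed-from-x
def equal(x, y):     return x == y          # equality relator
def overlaps(x, y):  return (x & y) != 0    # non-disjointness relator
def included(x, y):  return (x & ~y) == 0   # inclusion relator

# Check x = y \ z with y = {0, 1} and z = {1}: x must be {0},
# a subset of y that is disjoint from z.
y, z = 0b011, 0b010
x = diff(y, z)
checks = (equal(x, 0b001), included(x, y), not overlaps(x, z))
```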

17 pages, 1874 KB  
Article
A Large-Kernel and Scale-Aware 2D CNN with Boundary Refinement for Multimodal Ischemic Stroke Lesion Segmentation
by Omar Ibrahim Alirr
Eng 2026, 7(2), 59; https://doi.org/10.3390/eng7020059 - 29 Jan 2026
Abstract
Accurate segmentation of ischemic stroke lesions from multimodal magnetic resonance imaging (MRI) is fundamental for quantitative assessment, treatment planning, and outcome prediction; yet, it remains challenging due to highly heterogeneous lesion morphology, low lesion–background contrast, and substantial variability across scanners and protocols. This work introduces Tri-UNetX-2D, a large-kernel and scale-aware 2D convolutional network with explicit boundary refinement for automated ischemic stroke lesion segmentation from DWI, ADC, and FLAIR MRI. The architecture is built on a compact U-shaped encoder–decoder backbone and integrates three key components: first, a Large-Kernel Inception (LKI) module that employs factorized depthwise separable convolutions and dilation to emulate very large receptive fields, enabling efficient long-range context modeling; second, a Scale-Aware Fusion (SAF) unit that learns adaptive weights to fuse encoder and decoder features, dynamically balancing coarse semantic context and fine structural detail; and third, a Boundary Refinement Head (BRH) that provides explicit contour supervision to sharpen lesion borders and reduce boundary error. Squeeze-and-Excitation (SE) attention is embedded within LKI and decoder stages to recalibrate channel responses and emphasize modality-relevant cues, such as DWI-dominant acute core and FLAIR-dominant subacute changes. On the ISLES 2022 multi-center benchmark, Tri-UNetX-2D improves Dice Similarity Coefficient from 0.78 to 0.86, reduces the 95th-percentile Hausdorff distance from 12.4 mm to 8.3 mm, and increases the lesion-wise F1-score from 0.71 to 0.81 compared with a plain 2D U-Net trained under identical conditions. These results demonstrate that the proposed framework achieves competitive performance with substantially lower complexity than typical 3D or ensemble-based models, highlighting its potential for scalable, clinically deployable stroke lesion segmentation. Full article
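
The Dice Similarity Coefficient reported above has a short closed form: twice the overlap divided by the total mask sizes. A minimal sketch on flat 0/1 lists standing in for segmentation masks (illustrative data, not the paper's):

```python
def dice(pred, truth):
    """Dice coefficient for two binary masks given as flat 0/1 sequences."""
    tp = sum(p and t for p, t in zip(pred, truth))   # overlapping positives
    denom = sum(pred) + sum(truth)
    return 2 * tp / denom if denom else 1.0          # empty masks agree perfectly

score = dice([1, 1, 0, 0], [1, 0, 1, 0])  # one overlap, two positives each side
```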

27 pages, 4721 KB  
Article
A Template-Based Approach for Industrial Title Block Compliance Check
by Olivier Laurendin, Khwansiri Ninpan, Quentin Robcis, Richard Lehaut, Hélène Danlos, Nicolas Bureau and Robert Plana
Algorithms 2026, 19(2), 105; https://doi.org/10.3390/a19020105 - 29 Jan 2026
Abstract
Title block compliance checking requires interpreting irregular tabular layouts and reporting structural inconsistencies, not only extracting metadata. This paper introduces a user-in-the-loop, template-based method that leverages a graphical annotation workflow to encode title block structure as a hierarchical annotation graph combining detected primitives (cells/text) with user-defined semantic entities (key–value pairs, tables, headers). The resulting template is matched onto target title blocks using relative positional constraints and category-specific rules that distinguish acceptable variability from non-compliance (e.g., variable-size tables versus missing fields). The system outputs extracted key–value information and localized warning logs for end-user correction. On a real industrial example from the nuclear domain, the approach achieves 98–99% compliant annotation matching and 84% accuracy in flagging structural/content deviations, while remaining tolerant to moderate layout changes. Limitations and extensions are discussed, including support for additional fields, improved key similarity metrics, operational deployment with integrated feedback and broader benchmarking. Full article
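
The compliance idea, distinguishing missing mandatory fields from acceptable variability, can be sketched in a few lines. The field names and the mandatory/optional split below are hypothetical, not taken from the paper:

```python
# Template: key -> whether the field is mandatory in the title block.
TEMPLATE = {"DRAWING_NO": True, "REVISION": True, "SCALE": False}

def check_title_block(extracted):
    """Return warning logs for mandatory template keys absent from the target."""
    warnings = []
    for key, mandatory in TEMPLATE.items():
        if mandatory and key not in extracted:
            warnings.append(f"missing mandatory field: {key}")
    return warnings  # optional fields like SCALE may vary without a warning

w = check_title_block({"DRAWING_NO": "A-001"})
```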

48 pages, 2099 KB  
Review
Generative Models for Medical Image Creation and Translation: A Scoping Review
by Haowen Pang, Tiande Zhang, Yanan Wu, Shannan Chen, Wei Qian, Yudong Yao, Chuyang Ye, Patrice Monkam and Shouliang Qi
Sensors 2026, 26(3), 862; https://doi.org/10.3390/s26030862 - 28 Jan 2026
Abstract
Generative models play a pivotal role in the field of medical imaging. This paper provides an extensive and scholarly review of the application of generative models in medical image creation and translation. In the creation aspect, the goal is to generate new images based on potential conditional variables, while in translation, the aim is to map images from one or more modalities to another, preserving semantic and informational content. The review begins with a thorough exploration of a diverse spectrum of generative models, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), Diffusion Models (DMs), and their respective variants. The paper then delves into an insightful analysis of the merits and demerits inherent to each model type. Subsequently, a comprehensive examination of tasks related to medical image creation and translation is undertaken. For the creation aspect, papers are classified based on downstream tasks such as image classification, segmentation, and others. In the translation facet, papers are classified according to the target modality. A chord diagram depicting medical image translation across modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Cone Beam CT (CBCT), X-ray radiography, Positron Emission Tomography (PET), and ultrasound imaging, is presented to illustrate the direction and relative quantity of previous studies. Additionally, the chord diagram of MRI image translation across contrast mechanisms is also provided. The final section offers a forward-looking perspective, outlining prospective avenues and implementation guidelines for future research endeavors. Full article
24 pages, 9586 KB  
Article
EEG–fNIRS Cross-Subject Emotion Recognition Based on Attention Graph Isomorphism Network and Contrastive Learning
by Bingzhen Yu, Xueying Zhang and Guijun Chen
Brain Sci. 2026, 16(2), 145; https://doi.org/10.3390/brainsci16020145 - 28 Jan 2026
Abstract
Background/Objectives: Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) can objectively capture the spatiotemporal dynamics of brain activity during affective cognition, and their combination is promising for improving emotion recognition. However, multi-modal cross-subject emotion recognition remains challenging due to heterogeneous signal characteristics that hinder effective fusion and substantial inter-subject variability that degrades generalization to unseen subjects. Methods: To address these issues, this paper proposes DC-AGIN, a dual-contrastive learning attention graph isomorphism network for EEG–fNIRS emotion recognition. DC-AGIN employs an attention-weighted AGIN encoder to adaptively emphasize informative brain-region topology while suppressing redundant connectivity noise. For cross-modal fusion, a cross-modal contrastive learning module projects EEG and fNIRS representations into a shared latent semantic space, promoting semantic alignment and complementarity across modalities. Results: To further enhance cross-subject generalization, a supervised contrastive learning mechanism is introduced to explicitly mitigate subject-specific identity information and encourage subject-invariant affective representations. Experiments on a self-collected dataset are conducted under both subject-dependent five-fold cross-validation and subject-independent leave-one-subject-out (LOSO) protocols. The proposed method achieves 96.98% accuracy in four-class classification in the subject-dependent setting and 62.56% under LOSO. Compared with existing models, DC-AGIN achieves SOTA performance. Conclusions: These results demonstrate that the work on attention aggregation, cross-modal and cross-subject contrastive learning enables more robust EEG-fNIRS emotion recognition, thus supporting the effectiveness of DC-AGIN in generalizable emotion representation learning. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
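
Cross-modal contrastive alignment of the kind described above is typically driven by an InfoNCE-style loss: pull an anchor toward its paired positive, push it from negatives. A toy pure-Python sketch on 2-D embeddings (values and temperature are illustrative, not the paper's):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(anchor, positive, negatives, tau=0.1):
    """-log softmax score of the positive among positive + negatives."""
    logits = [dot(anchor, positive) / tau] + [dot(anchor, n) / tau for n in negatives]
    m = max(logits)                                   # stabilize the softmax
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# EEG anchor close to its fNIRS positive, far from two negatives -> small loss.
loss = info_nce([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0], [-1.0, 0.0]])
```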

21 pages, 1757 KB  
Article
A Deep Learning Approach for Boat Detection in the Venice Lagoon
by Akbar Hossain Kanan, Michele Vittorio and Carlo Giupponi
Remote Sens. 2026, 18(3), 421; https://doi.org/10.3390/rs18030421 - 28 Jan 2026
Abstract
The Venice lagoon is the largest in the Mediterranean Sea. The historic city of Venice, located on a cluster of islands in the centre of this lagoon, is an enchanting and iconic destination for national and international tourists. The historical centre of Venice and the other islands of the lagoon, such as Burano, Murano and Torcello, attract crowds of tourists every year. Transportation is provided by boats navigating the lagoon along a network of canals. The lagoon itself attracts visitors who enjoy various outdoor recreational activities, such as fishing and sunbathing. While statistics are available for the activities targeting the islands, no information is currently available on the spatio-temporal distribution of recreational activities across the lagoon waters. This study explores the feasibility of using Sentinel-2 satellite images to assess and map the spatio-temporal distribution of boats in the Venice Lagoon. Cloud-free Level-2A images were selected to study seasonal (summer vs. winter) and weekly (weekends vs. weekdays) variabilities in 2023, 2024, and 2025. RGB threshold filtering and U-Net semantic segmentation were applied to the Sentinel-2 images to ensure reliable results. Two spatial indices were produced: (i) a Water Recreation Index (WRI), identifying standing boats in areas attractive for recreation; and (ii) a Water Transportation Index (WTI), mapping moving boats along the canals. Multi-temporal WRI maps allow identification of areas with recurring recreational activities, which are significantly higher in summer than in winter and on weekends than on other weekdays. The WTI identifies canal paths with higher traffic intensity, with seasonal and weekly variations. The WTI paths should be targeted by traffic-control measures to limit wave-induced erosion, while the WRI areas could be subject to protection or development strategies. Full article
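
The RGB threshold step can be illustrated in miniature: bright pixels in all three bands are flagged as candidate boats against dark lagoon water. The threshold and pixel values below are hypothetical, not the paper's calibration:

```python
def boat_mask(pixels, threshold=120):
    """pixels: list of (R, G, B); a pixel is 'boat' if every band exceeds threshold."""
    return [all(band > threshold for band in p) for p in pixels]

# Bright hull, dark water, mid-bright deck -> [True, False, True].
mask = boat_mask([(200, 210, 205), (30, 60, 70), (130, 140, 125)])
```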

23 pages, 1729 KB  
Article
Integrating Textual Features with Survival Analysis for Predicting Employee Turnover
by Qian Ke and Yongze Xu
Behav. Sci. 2026, 16(2), 174; https://doi.org/10.3390/bs16020174 - 26 Jan 2026
Abstract
This study presents a novel methodology that integrates Transformer-based textual analysis from professional networking platforms with traditional demographic variables within a survival analysis framework to predict turnover. Using a dataset comprising 4087 work events from Maimai (a leading professional networking platform in China) spanning 2020 to 2022, our approach combines sentiment analysis and deep learning semantic representations to enhance predictive accuracy and interpretability for HR decision-making. Methodologically, we adopt a hybrid feature-extraction strategy combining theory-driven methods (sentiment analysis and TF-IDF) with a data-driven Transformer-based technique. Survival analysis is then applied to model time-dependent turnover risks, and we compare multiple models to identify the most predictive feature sets. Results demonstrate that integrating textual and demographic features improves prediction performance, specifically increasing the C-index by 3.38% and the cumulative/dynamic AUC by 3.43%. The Transformer-based method outperformed traditional approaches in capturing nuanced employee sentiments. Survival analysis further boosts model adaptability by incorporating temporal dynamics and also provides interpretable risk factors for turnover, supporting data-driven HR strategy formulation. This research advances turnover prediction methodology by combining text analysis with survival modeling, offering small and medium-sized enterprises a practical, data-informed approach to workforce planning. The findings contribute to broader labor market insights and can inform both organizational talent retention strategies and related policy-making. Full article
(This article belongs to the Section Organizational Behaviors)
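
The C-index improvement cited above refers to the concordance index: among comparable pairs, the fraction where the model assigns higher risk to the employee who actually left earlier. A minimal sketch on toy data (times, events, and risks are illustrative):

```python
import itertools

def c_index(times, events, risks):
    """Concordance index; events[i] = 1 if turnover observed, 0 if censored."""
    concordant = comparable = 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        a, b = (i, j) if times[i] < times[j] else (j, i)
        if not events[a]:       # earlier subject censored: pair not comparable
            continue
        comparable += 1
        if risks[a] > risks[b]:
            concordant += 1
        elif risks[a] == risks[b]:
            concordant += 0.5   # ties in predicted risk count half
    return concordant / comparable

ci = c_index([2, 5, 8], [1, 1, 0], [0.9, 0.4, 0.1])
```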

16 pages, 5308 KB  
Article
Patient-Level Classification of Rotator Cuff Tears on Shoulder MRI Using an Explainable Vision Transformer Framework
by Murat Aşçı, Sergen Aşık, Ahmet Yazıcı and İrfan Okumuşer
J. Clin. Med. 2026, 15(3), 928; https://doi.org/10.3390/jcm15030928 - 23 Jan 2026
Abstract
Background/Objectives: Diagnosing Rotator Cuff Tears (RCTs) via Magnetic Resonance Imaging (MRI) is clinically challenging due to complex 3D anatomy and significant interobserver variability. Traditional slice-centric Convolutional Neural Networks (CNNs) often fail to capture the necessary volumetric context for accurate grading. This study aims to develop and validate the Patient-Aware Vision Transformer (Pa-ViT), an explainable deep-learning framework designed for the automated, patient-level classification of RCTs (Normal, Partial-Thickness, and Full-Thickness). Methods: A large-scale retrospective dataset comprising 2447 T2-weighted coronal shoulder MRI examinations was utilized. The proposed Pa-ViT framework employs a Vision Transformer (ViT-Base) backbone within a Weakly-Supervised Multiple Instance Learning (MIL) paradigm to aggregate slice-level semantic features into a unified patient diagnosis. The model was trained using a weighted cross-entropy loss to address class imbalance and was benchmarked against widely used CNN architectures and traditional machine-learning classifiers. Results: The Pa-ViT model achieved a high overall accuracy of 91% and a macro-averaged F1-score of 0.91, significantly outperforming the standard VGG-16 baseline (87%). Notably, the model demonstrated superior discriminative power for the challenging Partial-Thickness Tear class (ROC AUC: 0.903). Furthermore, Attention Rollout visualizations confirmed the model’s reliance on genuine anatomical features, such as the supraspinatus footprint, rather than artifacts. Conclusions: By effectively modeling long-range dependencies, the Pa-ViT framework provides a robust alternative to traditional CNNs. It offers a clinically viable, explainable decision support tool that enhances diagnostic sensitivity, particularly for subtle partial-thickness tears. Full article
(This article belongs to the Section Orthopedics)
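
The Multiple Instance Learning aggregation named above is commonly realized as attention pooling: slice-level feature vectors are combined into one patient-level vector with softmax attention weights. A pure-Python sketch with illustrative features and scores (not the Pa-ViT implementation):

```python
import math

def attention_pool(features, scores):
    """Weighted sum of instance feature vectors using softmax(scores)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]

# Two slices with equal attention scores contribute equally.
patient_vec = attention_pool([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```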

21 pages, 2194 KB  
Article
Convolutional Autoencoder-Based Method for Predicting Faults of Cyber-Physical Systems Based on the Extraction of a Semantic State Vector
by Konstantin Zadiran and Maxim Shcherbakov
Machines 2026, 14(1), 126; https://doi.org/10.3390/machines14010126 - 22 Jan 2026
Abstract
Modern industrial equipment is a cyber-physical system (CPS) consisting of physical production components and digital controls. Lowering maintenance costs and increasing availability are important for improving its efficiency. Modern methods based on solving the event prediction problem, in particular prediction of remaining useful life (RUL), are used as a crucial step in reliability-centered maintenance frameworks to increase efficiency. However, existing RUL forecasting methods fall short in real-world scenarios, where CPSs are described by multidimensional, continuous, high-frequency data with working cycles of variable duration. To overcome this problem, we propose a new fault prediction method based on the extraction of semantic state vectors (SSVs) from the working cycles of equipment. To implement SSV extraction, a new method based on a convolutional autoencoder and extraction of its hidden state is proposed. In this method, working cycles are detected in the input data stream and converted to images, on which an autoencoder is trained. The output of an intermediate layer of the autoencoder is extracted and processed into SSVs, which are then combined into a time series on which RUL is forecasted. After hyperparameter optimization, the proposed method achieves RMSE = 1.799 and MAE = 1.374, significantly more accurate than existing methods (RMSE = 14.02 and MAE = 10.71). Therefore, SSV extraction is a viable technique for forecasting RUL. Full article
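
The first step of the pipeline, detecting working cycles in a continuous stream, can be sketched with a simple amplitude threshold (the threshold and signal values are illustrative, not the paper's detector):

```python
def detect_cycles(stream, start=0.5):
    """Split a 1-D sensor stream into working cycles of above-threshold samples."""
    cycles, current = [], []
    for v in stream:
        if v > start:
            current.append(v)        # still inside a working cycle
        elif current:
            cycles.append(current)   # signal dropped: close the cycle
            current = []
    if current:
        cycles.append(current)
    return cycles

# Two cycles of variable content separated by idle samples.
cycles = detect_cycles([0.1, 0.9, 1.2, 0.2, 0.8, 1.1, 0.3])
```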

26 pages, 55590 KB  
Article
Adaptive Edge-Aware Detection with Lightweight Multi-Scale Fusion
by Xiyu Pan, Kai Xiong and Jianjun Li
Electronics 2026, 15(2), 449; https://doi.org/10.3390/electronics15020449 - 20 Jan 2026
Abstract
In object detection, boundary blurring caused by occlusion and background interference often hinders effective feature extraction. To address this challenge, we propose Edge Aware-YOLO, a novel framework designed to enhance edge awareness and efficient feature fusion. Our method integrates three key contributions. First, the Variable Sobel Compact Inverted Block (VSCIB) employs convolution kernels with adjustable orientation and size, enabling robust multi-scale edge adaptation. Second, the Spatial Pyramid Shared Convolution (SPSC) replaces standard pooling with shared dilated convolutions, minimizing detail loss during feature reconstruction. Finally, the Efficient Downsampling Convolution (EDC) utilizes a dual-branch architecture to balance channel compression with semantic preservation. Extensive evaluations on public datasets demonstrate that Edge Aware-YOLO significantly outperforms state-of-the-art models. On MS COCO, it achieves 56.3% mAP50 and 40.5% mAP50–95 (gains of 1.5% and 1.0%) with only 2.4M parameters and 5.8 GFLOPs, surpassing advanced models like YOLOv11. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
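
The Variable Sobel block above generalizes the classical Sobel operator with learnable orientation and size. For reference, the fixed horizontal-gradient version on a tiny grayscale grid (a textbook sketch, not the paper's module):

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal-gradient kernel

def sobel_x(img):
    """Valid-region 3x3 convolution with the Sobel-x kernel; borders left at 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A vertical edge (0 -> 10) produces a strong horizontal gradient.
edges = sobel_x([[0, 0, 10], [0, 0, 10], [0, 0, 10]])
```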

33 pages, 4885 KB  
Article
Two-Stage Fine-Tuning of Large Vision-Language Models with Hierarchical Prompting for Few-Shot Object Detection in Remote Sensing Images
by Yongqi Shi, Ruopeng Yang, Changsheng Yin, Yiwei Lu, Bo Huang, Yu Tao and Yihao Zhong
Remote Sens. 2026, 18(2), 266; https://doi.org/10.3390/rs18020266 - 14 Jan 2026
Abstract
Few-shot object detection (FSOD) in high-resolution remote sensing (RS) imagery remains challenging due to scarce annotations, large intra-class variability, and high visual similarity between categories, which together limit the generalization ability of convolutional neural network (CNN)-based detectors. To address this issue, we explore leveraging large vision-language models (LVLMs) for FSOD in RS. We propose a two-stage, parameter-efficient fine-tuning framework with hierarchical prompting that adapts Qwen3-VL for object detection. In the first stage, low-rank adaptation (LoRA) modules are inserted into the vision and text encoders and trained jointly with a Detection Transformer (DETR)-style detection head on fully annotated base classes under three-level hierarchical prompts. In the second stage, the vision LoRA parameters are frozen, the text encoder is updated using K-shot novel-class samples, and the detection head is partially frozen, with selected components refined using the same three-level hierarchical prompting scheme. To preserve base-class performance and reduce class confusion, we further introduce knowledge distillation and semantic consistency losses. Experiments on the DIOR and NWPU VHR-10.v2 datasets show that the proposed method consistently improves novel-class performance while maintaining competitive base-class accuracy and surpasses existing baselines, demonstrating the effectiveness of integrating hierarchical semantic reasoning into LVLM-based FSOD for RS imagery. Full article
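
The LoRA modules mentioned above add a trainable low-rank update to a frozen weight matrix, so the adapted layer computes y = (W + BA)x. A rank-1, 2x2 pure-Python toy (all values illustrative, not the paper's configuration):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    """y = W x + alpha * B (A x); W frozen, A and B are the low-rank adapters."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))   # rank-r detour: project down, then back up
    return [b + alpha * d for b, d in zip(base, delta)]

y = lora_forward(W=[[1.0, 0.0], [0.0, 1.0]],   # frozen identity weight
                 A=[[1.0, 1.0]],               # 1 x 2 down-projection
                 B=[[0.5], [0.5]],             # 2 x 1 up-projection
                 x=[1.0, 1.0])
```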

37 pages, 453 KB  
Article
Evaluation Trees and Normalisation for Proposition Algebra
by Jan A. Bergstra, Alban Ponse and Daan J. C. Staudt
Mathematics 2026, 14(2), 280; https://doi.org/10.3390/math14020280 - 12 Jan 2026
Abstract
Proposition algebra is based on Hoare’s conditional, a ternary connective comparable to if–then–else and used in the context of propositional logic. Conditional statements are composed from atomic propositions (propositional variables), constants for the Boolean truth values, and the conditional. In previous work, various equational axiomatisations have been defined, each of which leads to a so-called valuation congruence. The weakest of these is “free valuation congruence” and the strongest is “static valuation congruence”, which is equivalent to propositional logic. Free valuation congruence is axiomatised by four simple equational axioms, and we use evaluation trees to give a simple semantics: two conditional statements are free valuation congruent if, and only if, they have equal evaluation trees. Increasingly stronger valuation congruences arise by adding axioms to the four that define free valuation congruence: repetition-proof, contractive, memorising, and static valuation congruence. We prove that each such valuation congruence C can be characterised using a transformation on evaluation trees: two conditional statements are C-valuation congruent if, and only if, their C-transformed evaluation trees are equal. In order to prove that these transformations preserve the congruence property, we use normalisation functions: two conditional statements are C-valuation congruent if, and only if, the C-normalisation function returns equal images. Our framework provides the first comprehensive tree-based semantics that unifies all major valuation congruences in proposition algebra, offering both conceptual clarity and practical decision procedures. Full article
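
The evaluation-tree semantics described above can be sketched in a few lines: the tree of the conditional "x if y else z" is y's tree with its T-leaves replaced by x's tree and its F-leaves by z's tree. This is a toy reconstruction under that reading, not the authors' formal definition:

```python
def atom(a):
    """Evaluation tree of an atomic proposition: node (label, T-branch, F-branch)."""
    return (a, "T", "F")

def cond(x, y, z):
    """Evaluation tree of the conditional statement x <| y |> z."""
    if y == "T":
        return x
    if y == "F":
        return z
    label, yt, yf = y
    return (label, cond(x, yt, z), cond(x, yf, z))

# a <| b |> F: evaluate b first; if true continue with a, else yield F.
tree = cond(atom("a"), atom("b"), "F")
```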
27 pages, 1843 KB  
Article
AI-Driven Modeling of Near-Mid-Air Collisions Using Machine Learning and Natural Language Processing Techniques
by Dothang Truong
Aerospace 2026, 13(1), 80; https://doi.org/10.3390/aerospace13010080 - 12 Jan 2026
Abstract
As global airspace operations grow increasingly complex, the risk of near-mid-air collisions (NMACs) poses a persistent and critical challenge to aviation safety. Traditional collision-avoidance systems, while effective in many scenarios, are limited by rule-based logic and reliance on transponder data, particularly in environments featuring diverse aircraft types, unmanned aerial systems (UAS), and evolving urban air mobility platforms. This paper introduces a novel, integrative machine learning framework designed to analyze NMAC incidents using the rich, contextual information contained within the NASA Aviation Safety Reporting System (ASRS) database. The methodology is structured around three pillars: (1) natural language processing (NLP) techniques are applied to extract latent topics and semantic features from pilot and crew incident narratives; (2) cluster analysis is conducted on both textual and structured incident features to empirically define distinct typologies of NMAC events; and (3) supervised machine learning models are developed to predict pilot decision outcomes (evasive action vs. no action) based on integrated data sources. The analysis reveals seven operationally coherent topics that reflect communication demands, pattern geometry, visibility challenges, airspace transitions, and advisory-driven interactions. A four-cluster solution further distinguishes incident contexts ranging from tower-directed approaches to general aviation pattern and cruise operations. The Random Forest model produces the strongest predictive performance, with topic-based indicators, miss distance, altitude, and operating rule emerging as influential features. The results show that narrative semantics provide measurable signals of coordination load and acquisition difficulty, and that integrating text with structured variables enhances the prediction of maneuvering decisions in NMAC situations. 
These findings highlight opportunities to strengthen radio practice, manage pattern spacing, improve mixed-equipage awareness, and refine alerting in short-range airport-area encounters. Full article
(This article belongs to the Section Air Traffic and Transportation)
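The three-pillar pipeline the abstract describes can be sketched in miniature: topic weights extracted from narratives are fused with structured fields (miss distance, altitude) to predict the evasive-action outcome. This is an illustrative sketch only, not the paper's implementation; the narratives, labels, and feature values below are invented, and the real study uses the ASRS database with far richer preprocessing.

```python
# Sketch: (1) topic extraction from hypothetical incident narratives,
# (3) fusion with structured features to predict pilot decision outcomes.
# All data is invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

narratives = [
    "tower advised traffic on final, turned base early to avoid conflict",
    "no radio call heard, aircraft crossed the pattern at our altitude",
    "received resolution advisory during climb, complied with descent",
    "sun glare obscured traffic until very close, took no evasive action",
]
evasive = np.array([1, 0, 1, 0])            # 1 = evasive action taken
miss_distance_ft = np.array([500, 200, 800, 100])
altitude_ft = np.array([1500, 1000, 9000, 2500])

# Pillar 1: latent topics from the free-text narratives
X_text = CountVectorizer(stop_words="english").fit_transform(narratives)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(X_text)    # shape: (n_reports, n_topics)

# Pillar 3: fuse topic weights with structured fields, train a classifier
X = np.column_stack([topic_weights, miss_distance_ft, altitude_ft])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, evasive)
print(clf.predict(X))
```

The same column-stacking step is where the study's finding lives: topic-based indicators sit alongside miss distance, altitude, and operating rule as candidate predictors, so feature importances from the forest can compare narrative and structured signals directly.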
22 pages, 416 KB  
Review
A Roadmap of Mathematical Optimization for Visual SLAM in Dynamic Environments
by Hui Zhang, Xuerong Zhao, Ruixue Luo, Ziyu Wang, Gang Wang and Kang An
Mathematics 2026, 14(2), 264; https://doi.org/10.3390/math14020264 - 9 Jan 2026
Abstract
The widespread application of robots in complex and dynamic environments demands that Visual SLAM (V-SLAM) be both robust and accurate. However, dynamic objects, varying illumination, and environmental complexity fundamentally challenge the static-world assumptions underlying traditional SLAM methods. This review provides a comprehensive investigation into the mathematical foundations of V-SLAM and systematically analyzes the key optimization techniques developed for dynamic environments, with particular emphasis on advances since 2020. We begin by rigorously deriving the probabilistic formulation of V-SLAM and its basis in nonlinear optimization, unifying it under a Maximum a Posteriori (MAP) estimation framework. We then propose a taxonomy based on how dynamic elements are handled mathematically, which reflects the historical evolution from robust estimation to semantic modeling and then to deep learning. This framework provides detailed analysis of three main categories: (1) robust estimation theory-based methods for outlier rejection, elaborating on the mathematical models of M-estimators and switch variables; (2) semantic information and factor graph-based methods for explicit dynamic object modeling, deriving the joint optimization formulation for multi-object tracking and SLAM; and (3) deep learning-based end-to-end optimization methods, discussing their mathematical foundations and interpretability challenges. This paper delves into the mathematical principles, performance boundaries, and theoretical controversies underlying these approaches, concluding with a summary of future research directions informed by the latest developments in the field. The review aims to provide both a solid mathematical foundation for understanding current dynamic V-SLAM techniques and inspiration for future algorithmic innovations. 
By adopting a math-first perspective and organizing the field through its core optimization paradigms, this work offers a clarifying framework for both understanding and advancing dynamic V-SLAM. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
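The first category in the abstract's taxonomy, M-estimator-based outlier rejection, can be illustrated with a toy robust regression: observations corrupted by gross outliers (standing in for measurements on dynamic objects) are downweighted via iteratively reweighted least squares with a Huber loss. This is a minimal sketch of the general technique under invented data, not a V-SLAM back-end; the threshold and iteration count are arbitrary choices.

```python
# Sketch of Huber M-estimation via iteratively reweighted least squares
# (IRLS), the robust-optimization idea used to suppress dynamic-object
# outliers in SLAM back-ends. Here: fit y = a*x + b with gross outliers.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 50)   # inliers: a=2, b=1
y[::10] += 20.0                                 # inject gross outliers

A = np.column_stack([x, np.ones_like(x)])
theta = np.linalg.lstsq(A, y, rcond=None)[0]    # ordinary LS initialization

delta = 1.0                                     # Huber threshold
for _ in range(20):                             # IRLS iterations
    r = y - A @ theta                           # residuals under current fit
    # Huber weights: 1 inside the threshold, delta/|r| outside,
    # which caps each residual's influence on the normal equations
    w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
    sw = np.sqrt(w)
    theta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]

print(theta)   # should land near the inlier parameters [2.0, 1.0]
```

The same structure scales up in a factor-graph back-end: each reprojection residual gets a robust kernel, so features on moving objects receive vanishing weight instead of dragging the pose estimate, which is exactly the failure mode of the static-world assumption the review discusses.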