Search Results (386)

Search Parameters:
Keywords = graph reconstruction

14 pages, 543 KB  
Article
Numerical Methods for Partial Inverse Spectral Problems with Frozen Arguments on Star-Shaped Graphs
by Chung-Tsun Shieh, Tzong-Mo Tsai and Jyh-Shyang Wu
Mathematics 2026, 14(1), 156; https://doi.org/10.3390/math14010156 - 31 Dec 2025
Viewed by 168
Abstract
In this paper, the authors investigate a partial inverse spectral problem for Sturm–Liouville operators with frozen arguments on star-shaped graphs. The problem is to reconstruct the potential on one edge from the known potentials on the other edges together with two sequences of eigenvalues from a prescribed spectral set. The proposed approach is constructive. First, the characteristic function associated with the given spectral data is constructed, allowing the unknown potential contribution to be isolated. The potential is then recovered by expanding the resulting expressions in an appropriate Riesz basis and solving a corresponding system of linear equations. Based on established uniqueness results, this procedure yields a constructive numerical algorithm. Numerical examples demonstrate reliable reconstruction for both smooth and piecewise continuous potentials, providing a practical scheme for frozen-argument problems on star graphs. Full article
(This article belongs to the Special Issue Advances in Nonlinear Differential Equations with Applications)
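
For readers unfamiliar with this kind of constructive procedure, the sketch below illustrates only its final step in heavily simplified form: once the unknown potential's coefficients in a Riesz basis have been related to the spectral data through a linear system, the potential is recovered by solving that system and summing the series. The matrix, right-hand side, and cosine basis here are synthetic placeholders, not the paper's actual construction.

```python
import numpy as np

# Schematic final step only: solve A c = b for the basis coefficients of the
# unknown potential, then evaluate q(x) = sum_k c_k * phi_k(x) on a grid.
# A, b, and the cosine basis are synthetic stand-ins, not the paper's system.
N = 16                                   # truncation order (illustrative)
x = np.linspace(0.0, 1.0, 201)           # the edge parameterized on [0, 1]

rng = np.random.default_rng(0)
A = np.eye(N) + 0.05 * rng.standard_normal((N, N))   # stand-in for the derived linear system
b = rng.standard_normal(N)                            # stand-in for data built from the two spectra

c = np.linalg.solve(A, b)                # coefficients of q in the chosen basis

# {1, sqrt(2) cos(k*pi*x)} used here as a stand-in Riesz basis on the edge
phi = np.vstack([np.ones_like(x)] + [np.sqrt(2) * np.cos(k * np.pi * x) for k in range(1, N)])
q = c @ phi                              # reconstructed potential sampled on the grid
```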

18 pages, 2688 KB  
Article
Rolling Bearing Fault Diagnosis Based on Multi-Source Domain Joint Structure Preservation Transfer with Autoencoder
by Qinglei Jiang, Tielin Shi, Xiuqun Hou, Biqi Miao, Zhaoguang Zhang, Yukun Jin, Zhiwen Wang and Hongdi Zhou
Sensors 2026, 26(1), 222; https://doi.org/10.3390/s26010222 - 29 Dec 2025
Viewed by 236
Abstract
Domain adaptation methods have been extensively studied for rolling bearing fault diagnosis under various conditions. However, some existing methods only consider the one-way embedding of the original space into a low-dimensional subspace without backward validation, which leads to inaccurate embeddings of data and poor diagnostic performance. In this paper, a rolling bearing fault diagnosis method based on multi-source domain joint structure preservation transfer with autoencoder (MJSPTA) is proposed. Firstly, similar source domains are screened by inter-domain metrics; then, the high-dimensional data of both the source and target domains are projected into a shared subspace using separate projection matrices during the encoding stage. Finally, the decoding stage reconstructs the low-dimensional data back to the original high-dimensional space to minimize the reconstruction error. In the shared subspace, the difference between source and target domains is reduced through distribution matching and sample weighting. Meanwhile, graph embedding theory is introduced to maximally preserve the local manifold structure of the samples during domain adaptation. Next, label propagation is used to obtain the predicted labels, and a voting mechanism ultimately determines the fault type. The effectiveness and robustness of the method are verified through a series of diagnostic tests. Full article
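
As a rough illustration of the encode/decode idea described above (not the MJSPTA algorithm itself), the following sketch projects source and target data into a shared subspace with separate projection matrices, decodes back to the original space, and reports a reconstruction error plus a simple first-moment domain gap; the PCA-style projections and synthetic data are assumptions made for brevity.

```python
import numpy as np

# Minimal sketch: separate projections into a shared k-dimensional subspace,
# decoding back to the original space, and two diagnostics (reconstruction
# error, mean-difference domain gap). Data and dimensions are synthetic.
rng = np.random.default_rng(1)
d, k = 64, 8
Xs = rng.standard_normal((200, d))        # source-domain features (placeholder)
Xt = rng.standard_normal((150, d)) + 0.3  # shifted target-domain features (placeholder)

def pca_projection(X, k):
    """Orthonormal projection from a truncated SVD (stand-in for a learned encoder)."""
    _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
    return Vt[:k].T                       # d x k, columns orthonormal

Ps, Pt = pca_projection(Xs, k), pca_projection(Xt, k)
Zs, Zt = Xs @ Ps, Xt @ Pt                 # encoding stage: shared subspace
Xs_hat, Xt_hat = Zs @ Ps.T, Zt @ Pt.T     # decoding stage: back to the original space

recon_err = np.mean((Xs - Xs_hat) ** 2) + np.mean((Xt - Xt_hat) ** 2)
domain_gap = np.linalg.norm(Zs.mean(0) - Zt.mean(0)) ** 2   # first-moment mismatch in the subspace
print(f"reconstruction error: {recon_err:.4f}, subspace domain gap: {domain_gap:.4f}")
```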

29 pages, 2044 KB  
Article
A Dual-Branch Transformer Framework for Trace-Level Anomaly Detection via Phase-Space Embedding and Causal Message Propagation
by Siyuan Liu, Yiting Chen, Sen Li, Jining Chen and Qian He
Big Data Cogn. Comput. 2026, 10(1), 10; https://doi.org/10.3390/bdcc10010010 - 28 Dec 2025
Viewed by 340
Abstract
In cloud-based distributed systems, trace anomaly detection plays a vital role in maintaining system reliability by identifying early signs of performance degradation or faults. However, existing methods often fail to capture the complex temporal and structural dependencies inherent in trace data. To address this, we propose a novel dual-branch Transformer-based framework that integrates both temporal modeling and causal reasoning. The first branch encodes the original trace data to capture direct service-level dynamics, while the second employs phase-space reconstruction to reveal nonlinear temporal interactions by embedding time-delayed representations. To better capture how anomalies propagate across services, we introduce a causal propagation module that leverages directed service call graphs to enforce temporal order and directionality during feature aggregation, ensuring anomaly signals propagate along realistic causal paths. Additionally, we propose a hybrid loss function combining the reconstruction error with symmetric Kullback–Leibler divergence between attention maps from the two branches, enabling the model to distinguish normal and anomalous patterns more effectively. Extensive experiments conducted on multiple real-world trace datasets demonstrate that our method consistently outperforms state-of-the-art baselines in terms of precision, recall, and F1 score. The proposed framework proves robust across diverse scenarios, offering improved detection accuracy and robustness to noisy or complex service dependencies. Full article
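
For intuition, a minimal sketch of time-delay (phase-space) embedding of a per-service latency series, of the kind the second branch builds on; the delay, embedding dimension, and synthetic signal are illustrative choices, not values from the paper.

```python
import numpy as np

# Time-delay (phase-space) embedding: turn a scalar series into
# m-dimensional delay-coordinate vectors [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}].
def delay_embed(series: np.ndarray, m: int = 3, tau: int = 2) -> np.ndarray:
    n = len(series) - (m - 1) * tau
    return np.stack([series[i * tau : i * tau + n] for i in range(m)], axis=1)

# Synthetic per-span latency series standing in for trace data
latencies = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * np.random.default_rng(2).standard_normal(200)
embedded = delay_embed(latencies, m=3, tau=2)   # shape (196, 3): input to the second branch
print(embedded.shape)
```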

22 pages, 4301 KB  
Article
Intelligent Wind Power Forecasting for Sustainable Smart Cities
by Zhihao Xu, Youyong Kong and Aodong Shen
Appl. Sci. 2026, 16(1), 305; https://doi.org/10.3390/app16010305 - 28 Dec 2025
Viewed by 161
Abstract
Wind power forecasting is critical to renewable energy generation, as accurate predictions are essential for the efficient and reliable operation of power systems. However, wind power output is inherently unstable and is strongly affected by meteorological factors such as wind speed, wind direction, and atmospheric pressure. Weather conditions and wind power data are recorded by sensors installed in wind turbines, which may be damaged or malfunction during extreme or sudden weather events. Such failures can lead to inaccurate, incomplete, or missing data, thereby degrading data quality and, consequently, forecasting performance. To address these challenges, we propose a method that integrates a pre-trained large-scale language model (LLM) with the spatiotemporal characteristics of wind power networks, aiming to capture both meteorological variability and the complexity of wind farm terrain. Specifically, we design a spatiotemporal graph neural network based on multi-view maps as an encoder. The resulting embedded spatiotemporal map sequences are aligned with textual representations, concatenated with prompt embeddings, and then fed into a frozen LLM to predict future wind turbine power generation sequences. In addition, to mitigate anomalies and missing values caused by sensor malfunctions, we introduce a novel frequency-domain learning-based interpolation method that enhances data correlations and effectively reconstructs missing observations. Experiments conducted on real-world wind power datasets demonstrate that the proposed approach outperforms state-of-the-art methods, achieving root mean square errors of 17.776 kW and 50.029 kW for 24-h and 48-h forecasts, respectively. These results indicate substantial improvements in both accuracy and robustness, highlighting the strong practical potential of the proposed method for wind power forecasting in the renewable energy industry. Full article
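
The frequency-domain interpolation idea can be pictured with the hedged sketch below, which is not the paper's method: missing samples are iteratively filled from a reconstruction built on the dominant frequency components of the observed signal. The number of retained frequencies, the iteration count, and the synthetic turbine series are arbitrary choices for illustration.

```python
import numpy as np

# Hedged sketch of frequency-domain imputation (illustrative, not the paper's
# learning-based method): iteratively fit a low-frequency reconstruction to the
# observed samples and use it to fill the missing ones.
def fft_impute(x: np.ndarray, mask: np.ndarray, n_freq: int = 10, n_iter: int = 20) -> np.ndarray:
    """x: series with NaNs at missing points; mask: True where observed."""
    filled = np.where(mask, x, np.nanmean(x))
    for _ in range(n_iter):
        spec = np.fft.rfft(filled)
        keep = np.argsort(np.abs(spec))[::-1][:n_freq]   # dominant frequency components
        low = np.zeros_like(spec)
        low[keep] = spec[keep]
        recon = np.fft.irfft(low, n=len(x))
        filled = np.where(mask, x, recon)                # only overwrite missing samples
    return filled

rng = np.random.default_rng(3)
power = np.sin(np.linspace(0, 6 * np.pi, 288)) * 50 + 60   # synthetic turbine output (kW)
mask = rng.random(288) > 0.15                              # ~15% of readings missing
observed = np.where(mask, power, np.nan)
print(np.abs(fft_impute(observed, mask) - power)[~mask].mean())   # error on missing points
```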

22 pages, 8610 KB  
Article
A Unified GNN-CV Framework for Intelligent Aerial Situational Awareness
by Leyan Li, Rennong Yang, Anxin Guo and Zhenxing Zhang
Sensors 2026, 26(1), 119; https://doi.org/10.3390/s26010119 - 24 Dec 2025
Viewed by 243
Abstract
Aerial situational awareness (SA) faces significant challenges due to inherent complexity involving large-scale dynamic entities and intricate spatio-temporal relationships. While deep learning advances SA for specific data modalities (static or time-series), existing approaches often lack the holistic, vision-centric perspective essential for human decision-making. To bridge this gap, we propose a unified GNN-CV framework for operational-level SA. This framework leverages mature computer vision (CV) architectures to intelligently process radar-map-like representations, addressing diverse SA tasks within a unified paradigm. Key innovations include methods for sparse entity attribute transformation graph neural networks (SET-GNNs), large-scale radar map reconstruction, integrated feature extraction, specialized two-stage pre-training, and adaptable downstream task networks. We rigorously evaluate the framework on critical operational-level tasks: aerial swarm partitioning and configuration recognition. The framework achieves an impressive end-to-end recognition accuracy exceeding 90.1%. Notably, in specialized tactical scenarios featuring small, large, and irregular flight intervals within formations, configuration recognition accuracy surpasses 85.0%. Even in the presence of significant position and heading disturbances, accuracy remains above 80.4%, with millisecond response cycles. Experimental results highlight the benefits of leveraging mature CV techniques such as image classification, object detection, and image generation, which enhance the efficacy, resilience, and coherence of intelligent situational awareness. Full article
(This article belongs to the Section Intelligent Sensors)
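
To make the "radar-map-like representation" concrete, here is a toy rasterization of entity positions and headings onto an image grid that a CV backbone could consume; the grid size, spatial extent, and two-channel layout are assumptions for illustration, not the paper's SET-GNN pipeline.

```python
import numpy as np

# Toy rasterization of aerial entities into a radar-map-like image.
# Grid size, extent, and channel layout are illustrative assumptions.
def rasterize(positions: np.ndarray, headings: np.ndarray, size: int = 128, extent: float = 100.0) -> np.ndarray:
    """positions: (N, 2) in km relative to the map center; headings: (N,) in radians."""
    img = np.zeros((2, size, size), dtype=np.float32)
    idx = np.clip(((positions / extent + 0.5) * size).astype(int), 0, size - 1)
    for (cx, cy), h in zip(idx, headings):
        img[0, cy, cx] = 1.0             # occupancy channel
        img[1, cy, cx] = np.cos(h)       # simple heading encoding
    return img

rng = np.random.default_rng(4)
swarm = rasterize(rng.uniform(-40, 40, (30, 2)), rng.uniform(0, 2 * np.pi, 30))
print(swarm.shape)   # (2, 128, 128), ready for an image backbone
```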

23 pages, 6967 KB  
Article
Semantics- and Physics-Guided Generative Network for Radar HRRP Generalized Zero-Shot Recognition
by Jiaqi Zhou, Tao Zhang, Siyuan Mu, Yuze Gao, Feiming Wei and Wenxian Yu
Remote Sens. 2026, 18(1), 4; https://doi.org/10.3390/rs18010004 - 19 Dec 2025
Viewed by 285
Abstract
High-resolution range profile (HRRP) target recognition has garnered significant attention in radar automatic target recognition (RATR) research for its rich structural information and low computational costs. With the rapid advancements in deep learning, methods for HRRP target recognition that leverage deep neural networks have emerged as the dominant approaches. Nevertheless, these traditional closed-set recognition methods require labeled data for every class in training, while in reality, seen classes and unseen classes coexist. Therefore, it is necessary to explore methods that can identify both seen and unseen classes simultaneously. To this end, a semantics- and physics-guided generative network (SPGGN) is proposed for HRRP generalized zero-shot recognition; it combines a constructed knowledge graph with attribute vectors to comprehensively represent semantics and reconstructs strong scattering points to introduce physical constraints. Specifically, to boost robustness, we reconstruct the strong scattering points from deep features of HRRPs, where class-aware contrastive learning in the middle layer effectively mitigates the influence of target-aspect variations. In the classification stage, discriminative features are produced through attention-based feature fusion to capture multi-faceted information, while a balancing loss reduces the bias towards seen classes. Experiments on two measured aircraft HRRP datasets validated the superior recognition performance of our method. Full article

26 pages, 4817 KB  
Article
ProcessGFM: A Domain-Specific Graph Pretraining Prototype for Predictive Process Monitoring
by Yikai Hu, Jian Lu, Xuhai Zhao, Yimeng Li, Zhen Tian and Zhiping Li
Mathematics 2025, 13(24), 3991; https://doi.org/10.3390/math13243991 - 15 Dec 2025
Viewed by 374
Abstract
Predictive process monitoring estimates the future behaviour of running process instances based on historical event logs, with typical tasks including next-activity prediction, remaining-time estimation, and risk assessment. Existing recurrent and Transformer-based models achieve strong accuracy on individual logs but transfer poorly across processes and underuse the rich graph structure of event data. This paper introduces ProcessGFM, a domain-specific graph pretraining prototype for predictive process monitoring on event graphs. ProcessGFM employs a hierarchical graph neural architecture that jointly encodes event-level, case-level, and resource-level structure and is pretrained in a self-supervised manner on multiple benchmark logs using masked activity reconstruction, temporal order consistency, and pseudo-labelled outcome prediction. A multi-task prediction head and an adversarial domain alignment module adapt the pretrained backbone to downstream tasks and stabilise cross-log generalisation. On the BPI 2012, 2017, and 2019 logs, ProcessGFM improves next-activity accuracy by 2.7 to 4.5 percentage points over the best graph baseline, reaching up to 89.6% accuracy and 87.1% macro-F1. For remaining-time prediction, it attains mean absolute errors between 0.84 and 2.11 days, reducing error by 11.7% to 18.2% relative to the strongest graph baseline. For case-level risk prediction, it achieves area-under-the-curve scores between 0.907 and 0.934 and raises precision at 10% recall by 6.7 to 8.1 percentage points. Cross-log transfer experiments show that ProcessGFM retains between about 90% and 96% of its in-domain next-activity accuracy when applied zero-shot to a different log. Attention-based analysis highlights critical subgraphs that can be projected back to Petri net fragments, providing interpretable links between structural patterns, resource handovers, and late cases. Full article
(This article belongs to the Special Issue New Advances in Graph Neural Networks (GNNs) and Applications)
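
As one concrete piece of the pretraining recipe, the snippet below sketches only the masked-activity data preparation (not the model): activities in a trace are randomly replaced by a mask token and kept as reconstruction targets. The 15% rate, the [MASK] token, and the BPI-2012-style activity names are illustrative assumptions.

```python
import random

# Sketch of the masked-activity pretraining signal (data preparation only).
def mask_trace(trace, mask_rate=0.15, mask_token="[MASK]", seed=0):
    rng = random.Random(seed)
    masked, targets = [], []
    for act in trace:
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets.append(act)          # the model must reconstruct this activity
        else:
            masked.append(act)
            targets.append(None)         # no reconstruction loss on unmasked positions
    return masked, targets

# Illustrative BPI-2012-style activity labels
trace = ["A_SUBMITTED", "A_PARTLYSUBMITTED", "W_Completeren aanvraag", "A_ACCEPTED", "O_SELECTED"]
print(mask_trace(trace))
```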

13 pages, 1873 KB  
Article
A Clinical Workflow for Evaluating Dose to Organs at Risk After Biology-Guided Radiation Therapy Delivery
by Thomas I. Banks, Chenyang Shen, Andrew R. Godley, Yang Kyun Park, Rameshwar Prasad, Madhav Ravikiran, Shahed N. Badiyan, Tu Dan, Aurelie Garant, Robert Timmerman, Steve Jiang and Bin Cai
Cancers 2025, 17(24), 3979; https://doi.org/10.3390/cancers17243979 - 13 Dec 2025
Viewed by 296
Abstract
Purpose: To develop a workflow for systematically reviewing the doses received by organs at risk (OARs) in a biology-guided radiation therapy (BgRT) treatment session, as a means of monitoring delivery constancy and possibly identifying changes of clinical concern in the patient. Methods: We implemented a workflow consisting of a qualitative review of the reconstructed delivered-dose information provided by the RefleXion system, followed by its quantitative evaluation using in-house software. For the latter we developed a framework for calculating custom OAR dose–volume metrics from the delivered-dose distribution and graphing them. We retrospectively applied our workflow to three selected BgRT patient cases to appraise its clinical utility. Results: Our workflow efficiently incorporates existing RefleXion TPS features and in-house software into a process for thoroughly evaluating doses to OARs after BgRT delivery. The spreadsheet we created for graphing the trends of normalized OAR dose–volume metrics readily shows if OAR doses have significantly changed or tolerance limits have been violated, thereby potentially revealing if changes of concern occurred in the targeted region. Our workflow also yields a cumulative delivered-dose record at the end of treatment. Conclusions: We established a post-BgRT dose evaluation workflow which supplements the information provided by RefleXion with calculation and graphing of custom OAR dose–volume metrics. This workflow is now routinely used in our clinic following all BgRT treatments. Future improvements could include increased automation, updating the dose calculation to reflect changes in patient anatomy, incorporation of PET metrics, and consideration of target dose data. Full article
(This article belongs to the Section Cancer Therapy)
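
The kind of custom OAR dose–volume metric computed by such in-house software can be sketched as below; the voxel grid, the synthetic OAR mask, and the particular metrics (Dmean, Dmax, VxGy, D2%) are illustrative assumptions rather than the clinic's actual configuration.

```python
import numpy as np

# Simple dose-volume metrics from a delivered-dose grid and an OAR mask.
# Grid, mask, and thresholds are synthetic and purely illustrative.
def dvh_metrics(dose: np.ndarray, mask: np.ndarray, v_thresh_gy: float = 20.0):
    oar = dose[mask]                                   # dose values inside the OAR
    return {
        "Dmean_Gy": float(oar.mean()),
        "Dmax_Gy": float(oar.max()),
        f"V{v_thresh_gy:g}Gy_%": float(100.0 * (oar >= v_thresh_gy).mean()),
        "D2%_Gy": float(np.percentile(oar, 98)),       # near-maximum dose
    }

rng = np.random.default_rng(5)
dose = rng.gamma(shape=2.0, scale=6.0, size=(40, 40, 40))   # synthetic dose grid (Gy)
mask = np.zeros_like(dose, dtype=bool)
mask[10:20, 10:20, 10:20] = True                            # synthetic OAR volume
print(dvh_metrics(dose, mask))
```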

33 pages, 2145 KB  
Article
Deep Learning Fractal Superconductivity: A Comparative Study of Physics-Informed and Graph Neural Networks Applied to the Fractal TDGL Equation
by Călin Gheorghe Buzea, Florin Nedeff, Diana Mirilă, Maricel Agop and Decebal Vasincu
Fractal Fract. 2025, 9(12), 810; https://doi.org/10.3390/fractalfract9120810 - 11 Dec 2025
Viewed by 356
Abstract
The fractal extension of the time-dependent Ginzburg–Landau (TDGL) equation, formulated within the framework of Scale Relativity, generalizes superconducting dynamics to non-differentiable space–time. Although analytically well established, its numerical solution remains difficult because of the strong coupling between amplitude and phase curvature. Here we develop two complementary deep learning solvers for the fractal TDGL (FTDGL) system. The Fractal Physics-Informed Neural Network (F-PINN) embeds the Scale-Relativity covariant derivative through automatic differentiation on continuous fields, whereas the Fractal Graph Neural Network (F-GNN) represents the same dynamics on a sparse spatial graph and learns local gauge-covariant interactions via message passing. Both models are trained against finite-difference reference data, and a parametric study over the dimensionless fractality parameter D quantifies its influence on the coherence length, penetration depth, and peak magnetic field. Across multivortex benchmarks, the F-GNN reduces the relative L2 error on |ψ|² from 0.190 to 0.046 and on B_z from approximately 0.62 to 0.36 (averaged over three seeds). This ≈4× improvement in condensate-density accuracy corresponds to a substantial enhancement in vortex-core localization—from tens of pixels of uncertainty to sub-pixel precision—and yields a cleaner reconstruction of the 2π phase winding around each vortex, improving the extraction of experimentally relevant observables such as ξ_eff, λ_eff, and local B_z peaks. The model also preserves flux quantization and remains robust under 2–5% Gaussian noise, demonstrating stable learning under experimentally realistic perturbations. The D-scan reveals broader vortex cores, a non-monotonic variation in the penetration depth, and moderate modulation of the peak magnetic field, while preserving topological structure. These results show that graph-based learning provides a superior inductive bias for modeling non-differentiable, gauge-coupled systems. The proposed F-PINN and F-GNN architectures therefore offer accurate, data-efficient solvers for fractal superconductivity and open pathways toward data-driven inference of fractal parameters from magneto-optical or Hall-probe imaging experiments. Full article
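
For reference, the relative L2 error quoted above is simply ‖prediction − reference‖₂ / ‖reference‖₂ computed over the field; a minimal sketch with synthetic arrays:

```python
import numpy as np

# Relative L2 error between a predicted field and a finite-difference reference.
# Field shapes and values here are synthetic, not the paper's benchmark data.
def rel_l2(pred: np.ndarray, ref: np.ndarray) -> float:
    return float(np.linalg.norm(pred - ref) / np.linalg.norm(ref))

rng = np.random.default_rng(6)
psi2_ref = rng.random((128, 128))                        # reference condensate density |psi|^2
psi2_pred = psi2_ref + 0.05 * rng.standard_normal((128, 128))
print(f"relative L2 error on |psi|^2: {rel_l2(psi2_pred, psi2_ref):.3f}")
```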

29 pages, 10236 KB  
Article
A Graph Data Model for CityGML Utility Network ADE: A Case Study on Water Utilities
by Ensiyeh Javaherian Pour, Behnam Atazadeh, Abbas Rajabifard, Soheil Sabri and David Norris
ISPRS Int. J. Geo-Inf. 2025, 14(12), 493; https://doi.org/10.3390/ijgi14120493 - 11 Dec 2025
Viewed by 546
Abstract
Modelling connectivity in utility networks is essential for operational management, maintenance planning, and resilience analysis. The CityGML Utility Network Application Domain Extension (UNADE) provides a detailed conceptual framework for representing utility networks; however, most existing implementations rely on relational databases, where connectivity must be reconstructed through joins rather than represented as explicit relationships. This creates challenges when managing densely connected network structures. This study introduces the UNADE–Labelled Property Graph (UNADE-LPG) model, a graph-based representation that maps the classes, relationships, and constraints defined in the UNADE Unified Modelling Language (UML) schema into nodes, edges, and properties. A conversion pipeline is developed to generate UNADE-LPG instances directly from CityGML UNADE datasets encoded in GML, enabling the population of graph databases while maintaining semantic alignment with the original schema. The approach is demonstrated through two case studies: a schematic network and a real-world water system from Frankston, Melbourne. Validation procedures, covering structural checks, topological continuity, classification behaviour, and descriptive graph statistics, confirm that the resulting graph preserves the semantic structure of the UNADE schema and accurately represents the physical connectivity of the network. An analytical path-finding query is also implemented to illustrate how the UNADE-LPG structure supports practical network-analysis tasks, such as identifying connected pipeline sequences. Overall, the findings show that the UNADE-LPG model provides a clear, standards-aligned, and operationally practical foundation for representing utility networks within graph environments, supporting future integration into digital-twin and network-analytics applications. Full article
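
A path-finding query of the kind mentioned might look like the sketch below, using the official neo4j Python driver; the node label, relationship type, property names, and connection details are assumptions for illustration and do not reproduce the UNADE-LPG schema.

```python
from neo4j import GraphDatabase

# Illustrative shortest-path query over a UNADE-LPG-style graph.
# Label `NetworkFeature`, relationship `CONNECTS_TO`, and property `gmlId`
# are placeholders, not the schema defined in the paper.
CYPHER = """
MATCH p = shortestPath(
  (a:NetworkFeature {gmlId: $src})-[:CONNECTS_TO*]-(b:NetworkFeature {gmlId: $dst})
)
RETURN [n IN nodes(p) | n.gmlId] AS pipeline_sequence
"""

def connected_pipeline(uri: str, user: str, password: str, src: str, dst: str):
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            record = session.run(CYPHER, src=src, dst=dst).single()
            return record["pipeline_sequence"] if record else None
    finally:
        driver.close()
```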

21 pages, 6216 KB  
Article
Extraction, Segmentation, and 3D Reconstruction of Wire Harnesses from Point Clouds for Robot Motion Planning
by Saki Komoriya and Hiroshi Masuda
Sensors 2025, 25(24), 7542; https://doi.org/10.3390/s25247542 - 11 Dec 2025
Viewed by 462
Abstract
Accurate collision detection in off-line robot simulation is essential for ensuring safety in modern manufacturing. However, current simulation environments often neglect flexible components such as wire harnesses, which are attached to articulated robots with irregular slack to accommodate motion. Because these components are rarely modeled in CAD, the absence of accurate 3D harness models leads to discrepancies between simulated and actual robot behavior, which sometimes result in physical interference or damage. This paper addresses this limitation by introducing a fully automated framework for extracting, segmenting, and reconstructing 3D wire-harness models directly from dense, partially occluded point clouds captured by terrestrial laser scanners. The key contribution lies in a motion-aware segmentation strategy that classifies harnesses into static and dynamic parts based on their physical attachment to robot links, enabling realistic motion simulation. To reconstruct complex geometries from incomplete data, we further propose a dual reconstruction scheme: an OBB-tree-based method for robust centerline recovery of unbranched cables and a Reeb-graph-based method for preserving topological consistency in branched structures. The experimental results on multiple industrial robots demonstrate that the proposed approach can generate high-fidelity 3D harness models suitable for collision detection and digital-twin simulation, even under severe data occlusions. These findings close a long-standing gap between geometric sensing and physics-based robot simulation in real factory environments. Full article
(This article belongs to the Section Sensors and Robotics)

21 pages, 335 KB  
Review
AI-Driven Motion Capture Data Recovery: A Comprehensive Review and Future Outlook
by Ahood Almaleh, Gary Ushaw and Rich Davison
Sensors 2025, 25(24), 7525; https://doi.org/10.3390/s25247525 - 11 Dec 2025
Viewed by 549
Abstract
This paper presents a comprehensive review of motion capture (MoCap) data recovery techniques, with a particular focus on the suitability of artificial intelligence (AI) for addressing missing or corrupted motion data. Existing approaches are classified into three categories: non-data-driven, data-driven (AI-based), and hybrid methods. Within the AI domain, frameworks such as generative adversarial networks (GANs), transformers, and graph neural networks (GNNs) demonstrate strong capabilities in modeling complex spatial–temporal dependencies and achieving accurate motion reconstruction. Compared with traditional methods, AI techniques offer greater adaptability and precision, though they remain limited by high computational costs and dependence on large, high-quality datasets. Hybrid approaches that combine AI models with physics-based or statistical algorithms provide a balance between efficiency, interpretability, and robustness. The review also examines benchmark datasets, including CMU MoCap and Human3.6M, while highlighting the growing role of synthetic and augmented data in improving AI model generalization. Despite notable progress, the absence of standardized evaluation protocols and diverse real-world datasets continues to hinder generalization. Emerging trends point toward real-time AI-driven recovery, multimodal data fusion, and unified performance benchmarks. By integrating traditional, AI-based, and hybrid approaches into a coherent taxonomy, this review provides a unique contribution to the literature. Unlike prior surveys focused on prediction, denoising, pose estimation, or generative modeling, it treats MoCap recovery as a standalone problem. It further synthesizes comparative insights across datasets, evaluation metrics, movement representations, and common failure cases, offering a comprehensive foundation for advancing MoCap recovery research. Full article

13 pages, 1899 KB  
Article
Pattern Recognition with Artificial Intelligence in Space Experiments
by Federica Cuna, Maria Bossa, Fabio Gargano and Mario Nicola Mazziotta
Particles 2025, 8(4), 99; https://doi.org/10.3390/particles8040099 - 10 Dec 2025
Viewed by 336
Abstract
The application of advanced Artificial Intelligence (AI) techniques in astroparticle experiments represents a major advancement in both data analysis and experimental design. As space missions become increasingly complex, integrating AI tools is essential for optimizing system performance and maximizing scientific return. This study explores the use of Graph Neural Networks (GNNs) within the tracking systems of space-based experiments. A key challenge in track reconstruction is the high level of noise, primarily due to backscattering tracks, which can obscure the identification of primary particle trajectories. We propose a novel GNN-based approach for node-level classification tasks, specifically designed to distinguish primary tracks from backscattered ones within the tracker. In this framework, AI is employed as a powerful tool for pattern recognition, enabling the system to identify meaningful structures within complex tracking data and to discriminate signal from backscattering with higher precision. By addressing these challenges, our work aims to enhance the accuracy and reliability of data interpretation in astroparticle physics through advanced deep learning techniques. Full article
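
A minimal node-classification GNN of the general kind described, written with PyTorch Geometric; the layer types (GCN), sizes, and hit features are illustrative assumptions rather than the architecture used in the study.

```python
import torch
from torch_geometric.nn import GCNConv

# Minimal node-level classifier: each node is a tracker hit, and the two
# classes distinguish primary-track hits from backscattering. Layer choices
# and feature dimensions are illustrative, not the paper's model.
class HitClassifier(torch.nn.Module):
    def __init__(self, in_dim: int = 4, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, 2)            # two classes: primary vs. backscatter

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)           # per-node logits

# Tiny synthetic event: 6 hits with (x, y, z, energy) features, edges between nearby hits.
x = torch.randn(6, 4)
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4], [1, 0, 2, 1, 4, 3]])
logits = HitClassifier()(x, edge_index)
print(logits.shape)   # torch.Size([6, 2])
```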

26 pages, 12304 KB  
Article
Semantic Collaborative Environment for Extended Digital Natural Heritage: Integrating Data, Metadata, and Paradata
by Yeeun Lee, Songie Seol, Jisung Oh and Jongwook Lee
Heritage 2025, 8(12), 507; https://doi.org/10.3390/heritage8120507 - 4 Dec 2025
Viewed by 660
Abstract
Natural heritage digitization has evolved beyond simple 3D representation. Contemporary approaches require transparent documentation integrating biological, heritage, and digitization standards, yet existing frameworks operate in isolated domains without semantic interoperability. Current digitization frameworks fail to integrate biological standards (Darwin Core, ABCD), heritage standards (CIDOC-CRM), and digitization standards (CRMdig, PROV-O) into a unified semantic architecture, limiting transparent documentation of natural heritage data across its entire lifecycle—from physical observation through digital reconstruction to knowledge reasoning. This study proposes an integrated semantic framework comprising three components: (1) the E-DNH ontology, which adopts a triple-layer architecture (data–metadata–paradata) and a triple-module structure (nature–heritage–digital), bridging Darwin Core, CIDOC-CRM, CRMdig, and PROV-O; (2) the HR3D workflow, which establishes a standardized high-precision 3D data acquisition protocol that systematically documents paradata; and (3) the C-EDNH platform, which implements a Neo4j-based knowledge graph with semantic search capabilities, AI-driven quality assessment, and persistent identifiers (NSId/DOI). The framework was validated through digitization of 197 natural heritage specimens (68.5% avian, 24.9% insects, 5.1% mammals, 1.5% reptiles), demonstrating high geometric accuracy (RMS 0.18 ± 0.09 mm), visual fidelity (SSIM 0.92 ± 0.03), and color accuracy (ΔE00 2.1 ± 0.7). The resulting knowledge graph comprises 15,000+ nodes and 45,000+ semantic relationships, enabling cross-domain federated queries and reasoning. Unlike conventional approaches that treat digitization as mere data preservation, this framework positions digitization as an interpretive reconstruction process. By systematically documenting paradata, it establishes a foundation for knowledge discovery, reproducibility, and critical reassessment of digital natural heritage. Full article

28 pages, 1098 KB  
Article
Graph Neural Networks in Medical Imaging: Methods, Applications and Future Directions
by Ibomoiye Domor Mienye and Serestina Viriri
Information 2025, 16(12), 1051; https://doi.org/10.3390/info16121051 - 1 Dec 2025
Cited by 1 | Viewed by 1704
Abstract
Graph neural networks (GNNs) extend deep learning to non-Euclidean domains, offering a robust framework for modeling the spatial, structural, and functional relationships inherent in medical imaging. This paper reviews recent progress in GNN architectures, including recurrent, convolutional, attention-based, autoencoding, and spatiotemporal designs, and examines how these models have been applied to core medical imaging tasks, such as segmentation, classification, registration, reconstruction, and multimodal fusion. The review further identifies current challenges and limitations in applying GNNs to medical imaging and discusses emerging trends, including graph–transformer integration, self-supervised graph learning, and federated GNNs. This paper provides a concise and comprehensive reference for advancing reliable and generalizable GNN-based medical imaging systems. Full article