Search Results (1,920)

Search Parameters:
Keywords = computer-generated art

35 pages, 1227 KB  
Article
A Physics-Constrained Surrogate Model for Multi-Hazard Collapse Assessment of Buildings Under Post-Fire Concurrent Wind-Earthquake Loading
by Ahmed Elgammal, Yasmin Ali, Amir Shirkhani and Pedro Martinez-Vazquez
Buildings 2026, 16(10), 1921; https://doi.org/10.3390/buildings16101921 - 12 May 2026
Abstract
Conventional structural design frameworks assess natural hazards as statistically independent phenomena, a practice that can lead to significant underestimation of risk for structures subjected to sequential or concurrent hazards. The generation of probabilistic fragility functions under such cascading loads, particularly for post-fire seismic events, presents a computational barrier for standard non-linear dynamic analysis. To address this barrier, this study introduces a comprehensive computational framework centered on a physics-constrained neural network (PCNN) that serves as a high-fidelity surrogate model. The framework first uses a non-linear 12-degree-of-freedom structural model to generate a baseline dataset of collapse times under post-fire, concurrent wind-earthquake loading via the computationally efficient endurance time (ET) method, confirming that fire and seismic parameters dominate while wind effects are negligible under ambient conditions, and that the framework identifies this hazard hierarchy without prior labeling. This dataset is subsequently used to train the PCNN, which is validated to achieve exceptional predictive accuracy (R² = 0.991), performing on par with a state-of-the-art Random Forest model while enforcing physical constraints. A feature importance analysis confirmed that structural collapse is dominated by fire intensity (≈55%) and initial structural period (≈45%). The validated PCNN is then applied to demonstrate the framework’s capability, rapidly generating fragility curves that quantify the catastrophic effect of fire on seismic resilience. This analysis reveals that a severe 800 °C localized fire reduces the structure’s median collapse capacity by 94.7%, thereby establishing the proposed framework as a successful template for tackling complex, non-linear problems in multi-hazard engineering. Full article
(This article belongs to the Special Issue Reliability and Risk Assessment of Building Structures)
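Collapse fragility curves of the kind this abstract describes are conventionally expressed as a lognormal CDF of an intensity measure, parameterized by a median capacity and a dispersion. A minimal sketch of that standard form (this is the generic convention, not the paper's PCNN surrogate; the numeric values of `theta_ambient`, `beta`, and the intensity `0.3` are illustrative assumptions):

```python
from math import erf, log, sqrt

def fragility(im, theta, beta):
    """Lognormal collapse fragility: P(collapse | intensity measure im),
    with median collapse capacity theta and lognormal dispersion beta."""
    return 0.5 * (1.0 + erf(log(im / theta) / (beta * sqrt(2.0))))

# Illustrative only: a fire that cuts the median collapse capacity by
# 94.7% shifts the entire fragility curve sharply to the left.
theta_ambient = 1.0                        # median capacity, ambient (assumed units)
theta_postfire = theta_ambient * (1 - 0.947)
beta = 0.4                                 # assumed dispersion

p_ambient = fragility(0.3, theta_ambient, beta)
p_postfire = fragility(0.3, theta_postfire, beta)
```

At the same shaking intensity, the post-fire collapse probability is dramatically higher, which is how a single scalar (the median capacity reduction) summarizes the curve shift.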
16 pages, 1077 KB  
Article
Characterization of Plan Complexity and Its Role in Quality Assurance for AI-Assisted CBCT-Based Online Adaptive Radiotherapy of Prostate Cancer
by Antonio Giuseppe Amico, Sonia Sapignoli, Samuele Cavinato, Badr El Khouzai, Marco Andrea Rossato, Marta Paiusco, Chiara Paronetto, Alessandro Scaggion, Matteo Sepulcri and Andrea Bettinelli
Cancers 2026, 18(10), 1557; https://doi.org/10.3390/cancers18101557 - 11 May 2026
Abstract
Background/Objectives: Online adaptive radiotherapy (oART) generates plans at each fraction by exploiting AI-assisted optimization engines without explicit user control over modulation. This process challenges quality assurance since measurement-based Patient-Specific Quality Assurance (PSQA) cannot be performed daily. This study aimed (i) to characterize plan complexity in IOE-generated plans for prostate cancer using a reproducible set of plan complexity metrics (PCMs), including the decomposition of inter-patient and intra-patient variability sources; (ii) to evaluate the association between PCMs and delivery accuracy within a cohort-informed statistical process control (SPC) framework validated through leave-one-patient-out cross-validation; and (iii) to investigate whether inter-fraction anatomical variations explain the observed plan complexity patterns, or whether complexity is predominantly an intrinsic signature of the AI-assisted optimizer. Methods: Twenty-one prostate cancer patients treated on a CBCT-based oART platform were retrospectively analyzed across three anatomical targets: prostatic bed (PrB), prostate (Pr), and prostate with seminal vesicles (PrSV). Six PCMs, namely MU/cGy, Modulation Complexity Score (MCS), Aperture Area Variability (AAV), Leaf Sequence Variability (LSV), Average Leaf Gap (ALG), and Plan Irregularity, were extracted. Additionally, five anatomical metrics (AMs) were computed from daily contours. Linear mixed-effects models (LMEMs) compared reference/online plans, decomposed variance via intraclass correlation coefficients (ICCs), and assessed PCM–gamma passing rate (GPR) associations. Leave-one-patient-out cross-validation (LOPO-CV) evaluated SPC threshold stability. The relationships between PCMs and AMs were investigated using LMEMs. Results: The AI-assisted optimization engine generated plans characterized by elevated monitor unit demand (average MU/cGy ≥ 6.8 ± 0.9) and narrow MLC apertures (ALG ≤ 17.7 ± 1.9 mm). No complexity differences emerged between offline and online-adapted plans, nor between anatomical targets. All PCMs showed significant associations with global GPR (p ≤ 0.027), though marginal R² remained low (≤ 0.122). Notably, GPR dispersion increased systematically at higher complexity values, indicating that highly modulated plans exhibit reduced delivery predictability. LOPO-CV demonstrated stable tolerance/action limits. Anatomical variations explained less than 35% of the total variance in PCMs. Conclusions: Plan complexity in oART reflects the optimization paradigm and patient-specific anatomy rather than daily adaptation. PCMs can serve as surveillance indicators flagging high-risk fractions to support SPC-based monitoring. Full article
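Tolerance/action limits of the kind this abstract validates are typically derived from the cohort's gamma passing rate distribution, then stress-tested by recomputing them with each patient held out. A minimal sketch of that generic SPC recipe (the k multipliers and the one-sided lower-limit convention are common assumptions, not the paper's stated values):

```python
import statistics

def spc_limits(gprs, k_tol=2.0, k_act=3.0):
    """Cohort-based SPC limits for gamma passing rates (GPRs):
    tolerance at mean - k_tol*sd, action at mean - k_act*sd
    (one-sided, since only low GPR indicates a delivery problem)."""
    mu = statistics.fmean(gprs)
    sd = statistics.stdev(gprs)
    return mu - k_tol * sd, mu - k_act * sd

def leave_one_out(gprs_by_patient):
    """Recompute limits with each patient's fractions held out,
    to probe how stable the thresholds are (LOPO-style check)."""
    out = []
    for i in range(len(gprs_by_patient)):
        pooled = [g for j, pat in enumerate(gprs_by_patient) if j != i for g in pat]
        out.append(spc_limits(pooled))
    return out
```

A fraction whose GPR falls below the tolerance limit would be flagged for review; below the action limit, it would trigger measurement-based PSQA.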
23 pages, 2524 KB  
Article
MSPaDet: A Multi-Scale Phase-Aware Denoising Method for Target Detection in SAR Images
by Naxiong Chen, Xuyu Xiang and Yuanjing Luo
Remote Sens. 2026, 18(10), 1513; https://doi.org/10.3390/rs18101513 - 11 May 2026
Abstract
Synthetic Aperture Radar (SAR) target detection remains challenging due to coherent speckle corruption, weak-scattering targets with degraded structural cues, and cross-scale inconsistencies under anisotropic scattering. To tackle these challenges, this paper presents MSPaDet, a novel multi-scale phase-aware denoising detection framework that advances SAR target detection by deeply integrating phase coherence with multi-scale representation learning. The proposed method introduces explicit dual-tree complex wavelet transform decomposition to generate direction-selective complex sub-bands, enabling fine-grained sub-band modulation. Within the framework, an SCFRDeno module suppresses speckle-dominant responses while preserving high-frequency structures via phase-coherence-guided reweighting, and a PaSCA block further refines features through input-adaptive spatial focusing and region reweighting. Extensive experiments on public SAR detection benchmarks—including MSAR, SAR-Aircraft-1.0, and SARDet-100K—demonstrate that our approach consistently outperforms state-of-the-art methods in detection accuracy, robustness, and cross-scenario generalization, with moderate computational cost, showing promising potential for practical deployment in Earth observation and safety monitoring systems. Full article
(This article belongs to the Section AI Remote Sensing)
23 pages, 1863 KB  
Article
SAT-MAK: Digital Surface Model Generation from Satellite Imagery Using Multi-Type Aggregated Keypoints and Weighted Clustering
by Zening Wang, Xu Huang, Xiaohu Yan, Jianhong Fu and Yongxiang Yao
Remote Sens. 2026, 18(10), 1492; https://doi.org/10.3390/rs18101492 - 9 May 2026
Abstract
The generation of Digital Surface Models (DSMs) from large-format, high-resolution satellite imagery constitutes a critical component of photogrammetry and computer vision. Achieving efficient, robust, and high-quality DSM reconstruction has therefore become a prominent research focus. However, with the continuous improvement in satellite image resolution and the increasing diversity of image sources, satellite image matching—serving as the fundamental step in DSM generation—still faces significant challenges, including the uneven distribution of feature points and insufficient registration stability in large-scale imagery. To address these issues, this paper presents a refined DSM generation method for high-resolution satellite imagery, termed SAT-MAK. The framework consists of three main stages: (1) sparse matching based on MAK (Multi-type Aggregated Keypoints) extraction; (2) a density-weighted clustering matching optimization strategy; and (3) DSM generation following a conventional photogrammetric pipeline. Experiments were conducted on multiple sets of high-resolution satellite imagery, and the proposed method was compared with four commonly used satellite image 3D reconstruction algorithms. The results demonstrate that, compared with state-of-the-art methods, the proposed SAT-MAK approach improves DSM completeness by 5.29% while maintaining competitive RMSE performance, highlighting its strong potential for practical applications. Full article
(This article belongs to the Special Issue AI-Enhanced Remote Sensing for Image Matching and 3D Reconstruction)
37 pages, 967 KB  
Review
Temporal Evolution of Drug Resistance to HIV Integrase Inhibitors
by Indrani Choudhuri, Jocelyn G. Olvera, Avik Biswas, Allan Haldane, Ronald M. Levy and Dmitry Lyumkis
Viruses 2026, 18(5), 540; https://doi.org/10.3390/v18050540 - 8 May 2026
Abstract
HIV-1 integrase (IN) strand transfer inhibitors (INSTIs) are central to modern antiretroviral therapy (ART) because of their high potency and durable effect on viral suppression. However, drug resistance mutations (DRMs) within HIV-1 IN emerge, which can compromise long-term treatment efficacy. Many distinct DRMs that arise under INSTI therapy have been extensively tabulated in public repositories and literature. However, the timelines over which they emerge, accumulate, and consolidate in patients have not been systematically integrated across clinical and experimental studies. In this review, we synthesize current evidence on the temporal evolution of DRMs within HIV-1 IN by examining mutational kinetic data from viruses derived from people living with HIV/AIDS (PLWH) and from in vitro selection experiments. We compare experimental timelines to recent computational predictions derived from Potts-based fitness landscapes coupled with kinetic Monte Carlo simulations and identify reproducible kinetic classes that distinguish fast-, intermediate-, and slow-emerging DRMs. Rapidly emerging DRMs such as E92Q and N155H typically appear early under drug pressure and often represent low-barrier adaptive responses, whereas the most clinically consequential mutations, such as Q148H/K/R, G140A/S, and E138K, arise only after extended therapy and generally require compensatory mutational backgrounds to persist. Although absolute emergence times vary substantially between in vivo and in vitro systems, consistent temporal trends across datasets support the existence of underlying epistatic constraints that shape drug resistance evolution. Understanding DRM timelines is clinically relevant because it provides a framework for interpreting resistance detected at virological failure, informs optimal timing of resistance testing, and may enable earlier identification of high-risk evolutionary trajectories before durable resistance is established. Full article
(This article belongs to the Special Issue 15-Year Anniversary of Viruses)
16 pages, 35426 KB  
Article
JefiFast: Accelerating Jefimenko’s Equations with Memory-Centric Optimizations and Multi-GPU Parallelism
by Bing He, Shengyu Peng, Nan Sun, Guoliang Li, Xiaofei Zhu, Peng Xu and Xiaowei Shen
Physics 2026, 8(2), 43; https://doi.org/10.3390/physics8020043 - 7 May 2026
Abstract
As a foundation for numerical solvers in computational electromagnetics, particularly for multiphysics and electromagnetic compatibility applications, Jefimenko’s equations offer a generalized solution to Maxwell’s equations, enabling the direct computation of electromagnetic fields from time-dependent source distributions without the boundary-condition artifacts inherent to grid-based methods. However, the numerical integration of these equations is computationally intensive, typically scaling as O(Ns × No) for Ns source points and No observation points. In this paper, we present JefiFast, a highly optimized graphics processing unit (GPU) implementation that significantly outperforms the state-of-the-art JefiGPU algorithm. We identify that previous implementations are strictly memory-bound due to inefficient global memory transactions and a lack of data reuse. JefiFast addresses these bottlenecks through four key optimizations: (i) a packed memory layout (PML) using an array-of-structures approach to ensure coalesced memory access for source densities and their derivatives; (ii) geometry-aware shared memory tiling strategies that maximize L2 (level-2) cache hit rates and on-chip data reuse; (iii) pre-computation of time derivatives to minimize redundant arithmetic operations; and (iv) a robust observation domain decomposition strategy that enables linear scaling across multiple GPUs. Benchmarks demonstrate that JefiFast achieves speedups ranging from 4.08 times (for 30³ grids on a single NVIDIA V100 graphics processor) to 84.51 times (for 50³ grids on 4 NVIDIA V100 processors) compared to the baseline. Notably, for a 50³ grid on a single GPU, JefiFast reduces execution time from about 51 min to just about 2.6 min (19.54 times speedup). These performance advances make high-resolution relativistic heavy-ion collision simulations feasible in near real-time. Full article
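The O(Ns × No) cost this abstract targets comes from the brute-force structure of the integration: every observation point sums over every source point. A toy sketch of that pairwise loop, using a static 1/r² contribution as a stand-in for the actual retarded-time integrands of Jefimenko's equations (which also involve time derivatives of the sources):

```python
def direct_field_sum(sources, observers):
    """Naive O(Ns*No) pairwise accumulation: for each observer,
    sum a 1/r^2 contribution from every source point. This is the
    scaling JefiFast attacks with memory-layout and tiling tricks."""
    fields = []
    for ox, oy, oz in observers:
        e = 0.0
        for (sx, sy, sz), q in sources:
            r2 = (ox - sx) ** 2 + (oy - sy) ** 2 + (oz - sz) ** 2
            e += q / r2
        fields.append(e)
    return fields
```

Doubling both the source grid and the observation grid quadruples the work, which is why a 50³ grid is so much more expensive than a 30³ one and why GPU memory-access patterns dominate the runtime.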
28 pages, 3153 KB  
Article
LiteScan-Net: A Lightweight Scanning Network and a Large-Scale Dataset for Cropland Change Detection
by Zhengfang Lou, Xiaoping Lu, Yao Lu, Siyi Li, Guosheng Cai and Ling Song
Remote Sens. 2026, 18(9), 1447; https://doi.org/10.3390/rs18091447 - 6 May 2026
Abstract
To address the dual dilemma in high-resolution cropland change detection, where CNNs are constrained by limited local receptive fields and Transformers suffer from heavy computational costs, we propose LiteScan-Net, a lightweight and robust network architecture incorporating scanning principles from state-space modeling. The network innovatively introduces the Multi-Directional Global Scanning (MDGS) mechanism as an efficient engineering surrogate, which simulates the selective scanning process using large-kernel 1D convolutions. This achieves global context modeling with linear complexity while avoiding the hardware limitations imposed by recurrent computations. Based on this mechanism, a three-stage collaborative architecture is constructed: the Coordinate-Aware Feature Purification (CAFP) module is designed to mitigate shallow phenological noise via coordinate sensitivity; the Context Difference Verification (CDV) module aims to alleviate pseudo-changes caused by registration errors through global alignment; and the State-Space Guided Refinement (SSGR) module promotes the generation of change masks with precise boundaries and compact interiors. To verify the model’s generalization, we construct a Massive Specialized Cropland Change Detection dataset named MSCC, which exhibits significant cross-scale characteristics. Experimental results demonstrate that LiteScan-Net achieves state-of-the-art (SOTA) performance across the CLCD, Hi-CNA, and MSCC datasets, with F1-scores of 79.43%, 84.82%, and 89.62%, respectively. With a low computational cost of only 1.78 GFLOPs and a real-time inference speed of 37.9 FPS, LiteScan-Net demonstrates high potential for future deployment on resource-constrained edge devices. Full article
21 pages, 1348 KB  
Article
AI-Driven Generation of Old English: A Framework for Low-Resource Languages
by Rodrigo Gabriel Salazar Alva, Matías Núñez, Cristian López Del Alamo and Javier Martín Arista
Big Data Cogn. Comput. 2026, 10(5), 145; https://doi.org/10.3390/bdcc10050145 - 6 May 2026
Abstract
Preserving ancient languages is essential for understanding the cultural and linguistic heritage of humanity. Old English, however, remains critically under-resourced, which limits its accessibility to modern natural language processing (NLP) techniques. We present a scalable framework that uses advanced large language models (LLMs) to generate high-quality Old English texts to address this gap. In this study, we specifically employ state-of-the-art models, including Llama-3.1-8B and Mistral-7B, as our foundation models, which are then adapted to the unique characteristics of Old English. Our approach combines parameter-efficient fine-tuning (Low-Rank Adaptation (LoRA)), data augmentation via back-translation, and a dual-agent pipeline that separates content generation (in English) and translation (into Old English). Evaluation with automated metrics (BLEU, METEOR, and CHRF) shows improvements over baseline models, with BLEU scores increasing from 26 to over 65 for English-to-Old English translation. Expert human assessment confirms high grammatical accuracy and stylistic fidelity in the generated texts, with average scores of 9.0/10 for inflection and word order, 9.1/10 for lexical authenticity, and 7.8/10 for semantic coherence. These results demonstrate that the framework can reliably expand limited historical corpora while maintaining linguistic integrity, with immediate practical applications in digital humanities research, computational philology, and the development of educational resources for Old English study. Beyond expanding the Old English corpus, our method offers a practical blueprint for revitalizing other endangered languages, thus linking AI innovation with the goals of cultural preservation. Full article
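The LoRA technique this abstract relies on freezes the pretrained weight matrix and learns only a low-rank additive update. A minimal numpy sketch of that core idea (toy dimensions; real adapters sit inside attention layers of a 7B/8B model, and the scaling factor alpha/r follows the original LoRA convention):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16    # toy sizes; real layers are ~4096-dim

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-init => adapter starts as a no-op

def lora_forward(x):
    """y = W x + (alpha/r) * B (A x): only A and B receive gradients,
    so the trainable parameter count is r*(d_in + d_out) per layer."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
y = lora_forward(x)
```

Because B starts at zero, fine-tuning begins exactly at the base model's behavior, which is what makes the adaptation stable on a corpus as small as the Old English one.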
31 pages, 2618 KB  
Article
Fractional Variational Graph Autoencoders for Enhancing Non-Local Representation Learning on Graphs
by Mohamed Ilyas El Harrak, Omar Bahou, Karim El Moutaouakil, Ahmed Nuino, Eddakir Abdellatif and Alina-Mihaela Patriciu
Information 2026, 17(5), 446; https://doi.org/10.3390/info17050446 - 6 May 2026
Abstract
While Graph Autoencoders (GAEs) have become a standard for unsupervised representation learning, their reliance on integer-order convolutions inherently restricts information propagation to immediate local neighborhoods. This paper introduces the Fractional Graph Autoencoder (FGAE) and its variational extension (FVGAE) to move beyond these local constraints. By integrating fractional Laplace operators, our framework generalizes conventional GAEs and enables tunable non-local propagation. We show that the fractional order α acts as a structural regularizer, utilizing the Green’s function of anomalous diffusion to induce a form of structural memory within the latent space. This allows the model to recover long-range dependencies that are typically lost in standard architectures. Systematic benchmarking across eight datasets—ranging from homophilic citation networks to heterophilic and dense product graphs—shows that these fractional variants consistently outperform both foundational and state-of-the-art baselines (ARGA, SIG-VAE, and GraphMAE). Notably, on the Amazon Computers and Citeseer datasets, our methods achieve relative increases in Normalized Mutual Information (NMI) of 77.55% and 67.28%, respectively. Statistical analysis confirms these gains are robust, with large effect sizes (Cohen’s d > 0.80) and significance at p < 0.05. These findings suggest that fractional graph autoencoding offers a mathematically grounded inductive bias for capturing the complex, multi-scale dynamics of real-world networked systems. Full article
(This article belongs to the Section Artificial Intelligence)
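A fractional Laplace operator on a graph is most directly defined spectrally: eigendecompose the (symmetric, positive semi-definite) Laplacian and raise the eigenvalues to the fractional power α. A minimal sketch of that standard construction (the paper's exact normalization and architecture are not reproduced here):

```python
import numpy as np

def fractional_laplacian(adj, alpha):
    """L^alpha for an undirected graph: eigendecompose the combinatorial
    Laplacian L = D - A and raise its eigenvalues to alpha. Non-integer
    alpha yields a dense operator, i.e. non-local propagation."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    w, v = np.linalg.eigh(lap)          # symmetric => real spectrum
    w = np.clip(w, 0.0, None)           # guard tiny negative round-off
    return v @ np.diag(w ** alpha) @ v.T
```

For 0 < α < 1 the resulting operator has nonzero entries between non-adjacent nodes, which is the mechanism behind the "tunable non-local propagation" claimed above; α = 1 recovers the ordinary Laplacian.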
18 pages, 3270 KB  
Article
SLEVA-AV: An Edge-Centric IoT Security Architecture Using Multi-Stage Lightweight Encryption for Autonomous Vehicle Applications
by Lordwin Cecil Prabhaker Micheal, Xavier Fernando, Mathan Kumar Arumugasamy, Neelamegam Devarasu and Daisy Merina Rathinarajan
Future Internet 2026, 18(5), 245; https://doi.org/10.3390/fi18050245 - 5 May 2026
Abstract
Autonomous vehicle (AV) networks require secure and efficient data processing under strict latency and resource constraints. This paper proposes a secure, lightweight edge-centric framework, SLEVA-AV, for Internet of Things (IoT)-enabled autonomous vehicle communication. The framework integrates multi-modal sensor data processing, lightweight key management, multi-stage encryption, and integrity verification within a unified pipeline. A key derivation function (KDF) is employed to generate session keys using contextual parameters, enabling efficient re-keying during vehicular mobility without repeated handshake overhead. The encryption process combines PRESENT, SPECK, and lightweight encryption algorithm (LEA) ciphers to enhance cryptographic strength, while SHA-256 ensures data integrity. The proposed system is implemented using a CARLA-based simulation environment and validated through CrypTool 2-based cryptographic analysis. Performance evaluation over 10,000 samples demonstrates low latency (0.039–0.794 s), reduced energy consumption (0.0196–0.0589 J), and negligible key management overhead. Comparative analysis with recent state-of-the-art approaches shows improved scalability and efficiency. Security validation through attack simulations demonstrates resistance against brute-force (2³³⁶ key space), differential (2¹⁸⁵), replay, and tampering attacks, achieving 100% detection accuracy. The results indicate that the proposed framework strikes a balanced trade-off among security strength, computational efficiency, and real-time performance, and it is suitable for deployment in IoT environments with high mobility and dynamic edge connectivity. Full article
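Context-bound session-key derivation of the kind described above is commonly built on an HKDF-style extract-and-expand over SHA-256. A minimal sketch of that generic pattern (the actual KDF inputs and salt used by SLEVA-AV are not specified here; the context fields and salt string are illustrative assumptions):

```python
import hashlib
import hmac

def derive_session_key(master_key: bytes, context: bytes, length: int = 16) -> bytes:
    """HKDF-style derivation: mix contextual parameters (e.g. vehicle id,
    edge zone, timestamp) into the key so that moving to a new edge node
    yields a fresh session key without a full handshake."""
    # Extract: compress the master key into a pseudorandom key (PRK).
    prk = hmac.new(b"sleva-av-demo-salt", master_key, hashlib.sha256).digest()
    # Expand: one HMAC block of output keyed by PRK over the context.
    okm = hmac.new(prk, context + b"\x01", hashlib.sha256).digest()
    return okm[:length]
```

Re-keying then amounts to changing the context bytes: the same master key deterministically yields a different session key for each zone or epoch, which is what keeps the key-management overhead negligible.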
23 pages, 3743 KB  
Article
CT-to-PET Synthesis in the Head–Neck and Thoracic Region via Conditional 3D Latent Diffusion Modeling
by Mohammed A. Mahdi, Mohammed Al-Shalabi, Reda Elbarougy, Ehab T. Alnfrawy, Muhammad Usman Hadi and Rao Faizan Ali
Bioengineering 2026, 13(5), 534; https://doi.org/10.3390/bioengineering13050534 - 3 May 2026
Abstract
Background: Positron emission tomography (PET) provides physiologic information central to oncologic staging and treatment assessment, but its availability is limited by cost, radiation exposure, and scanner access. Synthesizing PET from computed tomography (CT) is attractive but challenging, as tracer uptake is only partially constrained by anatomy, making the mapping inherently one-to-many. Methods: We propose a conditional 3D latent diffusion framework (3D-LDM) for CT-to-PET synthesis in the head–neck and thoracic region. The pipeline localizes anatomy by segmenting lungs in CT and restricting the volume to reduce irrelevant variability. PET volumes are encoded into a compact latent space using a KL-regularized 3D autoencoder, and a conditional 3D diffusion U-Net learns to generate PET latents conditioned on CT via a denoising diffusion process. The model was trained and evaluated on 900 paired PET/CT studies. Performance was assessed in SUV space using MAE, PSNR, and SSIM, and compared against transformer-, CNN-, and GAN-based baselines. Results: On the held-out test cohort, 3D-LDM achieved the best overall quantitative fidelity (MAE = 303.05 ± 22.16 SUV units, PSNR = 32.64 ± 1.79, SSIM = 0.86 ± 0.03), outperforming all baselines with statistically significant differences (p < 0.001). At the lesion level, the model achieved a precision of 0.76 (95% CI: 0.71, 0.81) and recall of 0.76 (95% CI: 0.72, 0.80), detecting an average of 3.19 lesions per scan with a false-positive rate of 0.72/scan. Lesion-wise NMSE was 11.37%, significantly outperforming GAN and transformer baselines. Conclusions: 3D-LDM enables efficient, high-fidelity PET synthesis in the head–neck and thoracic regions, substantially improving lesion-level accuracy over state-of-the-art baselines. While it is not a replacement for diagnostic PET, these results support the model’s potential as a clinical decision support tool. Full article
(This article belongs to the Special Issue Machine Learning Applications in Cancer Diagnosis and Prognosis)
22 pages, 5557 KB  
Article
Exhaust Gas Temperature Prediction of a Marine Gas Turbine Engine Using a Thermodynamic Knowledge-Driven Graph Attention Network Model
by Jinwei Chen, Jinxian Wei, Weiqiang Gao, Yifan Chen and Huisheng Zhang
J. Mar. Sci. Eng. 2026, 14(9), 857; https://doi.org/10.3390/jmse14090857 - 3 May 2026
Abstract
The exhaust gas temperature (EGT) of the gas generator is a critical indicator for the health management system of a marine gas turbine engine. Therefore, EGT prediction can not only support predictive maintenance decision-making but also serve as a reliable virtual sensor for EGT measurement. However, the engine EGT exhibits strongly nonlinear coupling relationships with other gas path variables, which causes challenges for data-driven prediction. Graph neural networks (GNNs) are particularly effective in capturing the coupling relationships among gas path sensor variables. However, conventional static graph structures fail to characterize the varying coupling strengths under different operating conditions. In this study, a thermodynamic knowledge-driven graph attention network (TKD-GAT) method is proposed for accurate and robust EGT prediction. First, a physics-guided graph topology is constructed based on the gas turbine thermodynamic equations. Subsequently, a multi-head attention mechanism is introduced to generate edge weights that capture the varying thermodynamic coupling strengths under different operating conditions. The proposed model is evaluated on a real-world LM2500 gas turbine, which is widely used in modern propulsion systems of commercial and military ships. The ablation study confirms that the thermodynamic knowledge-driven graph topology and the attention mechanism-based edge weights are both necessary to enhance the EGT prediction performance. The TKD-GAT model shows the best performance with an RMSE of 0.446% and an R² of 0.971 compared with state-of-the-art models. The paired t-test and effect size measurement (Cohen’s d) statistically confirm the significance of performance improvements. The statistical results from multiple independent experiments prove the stability of the TKD-GAT model. Additionally, the model achieves a competitive computational cost despite the integration of a physics-guided graph topology and attention mechanisms. Crucially, an interpretability analysis confirms that the learned attention weights adhere to thermodynamic principles under different operating conditions. The proposed TKD-GAT model provides an effective solution for EGT prediction in health management systems. Full article
(This article belongs to the Section Ocean Engineering)
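The attention-generated edge weights described in this abstract can be illustrated with a minimal, dependency-free sketch of a single attention head over a physics-guided topology. The sensor names, scalar feature values, and attention coefficients below are hypothetical placeholders; a real TKD-GAT layer would operate on learned multi-dimensional feature vectors.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_edge_weights(features, neighbors, a_src, a_dst):
    """For each node, turn raw attention scores over its thermodynamically
    connected neighbors into normalized edge weights (one head).

    features  : node id -> feature value (a scalar here, for simplicity)
    neighbors : node id -> list of neighbor ids (the physics-guided topology)
    a_src, a_dst : attention coefficients (scalars in this sketch; learned
                   parameter vectors in a real graph attention network)
    """
    weights = {}
    for node, nbrs in neighbors.items():
        scores = [a_src * features[node] + a_dst * features[j] for j in nbrs]
        # LeakyReLU activation, as commonly used in graph attention networks
        scores = [s if s > 0 else 0.2 * s for s in scores]
        weights[node] = dict(zip(nbrs, softmax(scores)))
    return weights

# Toy gas-path graph: compressor-discharge temperature (T2), fuel flow (Wf),
# and EGT, with edges drawn from a hypothetical thermodynamic topology.
feats = {"T2": 1.0, "Wf": 0.5, "EGT": 0.8}
topo = {"EGT": ["T2", "Wf"]}
w = attention_edge_weights(feats, topo, a_src=0.3, a_dst=0.7)
```

Because the softmax normalizes scores only over thermodynamically connected neighbors, the physics-guided topology constrains which couplings the attention mechanism can express, while the learned coefficients let edge strengths vary with operating conditions.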
19 pages, 35689 KB  
Article
Computational Fluid Dynamics-Based Blood Pressure Prediction for Coronary Artery Disease Diagnosis Using Coronary Computed Tomography Angiography
by Rene Lisasi, Huan Huang, William Pei, Michele Esposito and Chen Zhao
J. Imaging 2026, 12(5), 196; https://doi.org/10.3390/jimaging12050196 - 2 May 2026
Abstract
Computational fluid dynamics (CFD)-based simulation of coronary blood flow provides valuable hemodynamic markers, such as pressure gradients, for diagnosing coronary artery disease (CAD). However, CFD is computationally expensive, time-consuming, and difficult to integrate into large-scale clinical workflows. These limitations restrict the availability of labeled hemodynamic data for training AI models and hinder the broad adoption of non-invasive, physiology-based CAD assessment. To address these challenges, we develop an end-to-end pipeline that automates coronary geometry extraction from coronary computed tomography angiography (CCTA), streamlines simulation data generation, and enables efficient learning of coronary blood pressure distributions. The pipeline reduces the manual burden associated with traditional CFD workflows while producing consistent training data. Furthermore, we introduce a diffusion-based regression model. Specifically, the inverted conditional diffusion (ICD) model is designed to predict coronary blood pressure directly from CCTA-derived features, thereby bypassing the need for computationally intensive CFD during inference. The proposed model is trained and validated on two CCTA datasets using the Adam optimizer with a weight decay of 1×10⁻³, a learning rate of 1×10⁻⁵, a batch size of 100, and Huber loss. It is then evaluated on a test set of ten simulated coronary hemodynamic cases. Experimental results demonstrate state-of-the-art performance. Compared with Long Short-Term Memory (LSTM), the proposed model improves the R2 score by 19.78%, reduces the root mean squared error (RMSE) by 19.44%, and lowers the normalized root mean squared error (NRMSE) by 18%. Compared with a multilayer perceptron (MLP), it improves the R2 score by 8.38%, reduces RMSE by 4.3%, and reduces NRMSE by 5.4%. 
This work represents a first step toward a scalable and accessible framework for rapid, non-invasive, CFD-based blood pressure prediction, with the potential to support CAD diagnosis. Full article
(This article belongs to the Special Issue AI-Driven Medical Image Processing and Analysis)
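The evaluation quantities quoted in this abstract (RMSE, NRMSE, the R2 score) and the Huber training loss are standard definitions, sketched below in plain Python. This illustrates only the metrics, not the authors' diffusion model; the NRMSE normalization by the ground-truth range is one common convention among several.

```python
import math

def huber(y_true, y_pred, delta=1.0):
    """Mean Huber loss: quadratic for small residuals, linear beyond delta."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        r = abs(t - p)
        total += 0.5 * r * r if r <= delta else delta * (r - 0.5 * delta)
    return total / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def nrmse(y_true, y_pred):
    """RMSE normalized by the range of the ground truth (one common convention)."""
    return rmse(y_true, y_pred) / (max(y_true) - min(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

The Huber loss's linear tail makes training less sensitive to outlier pressure values than a pure squared-error loss, which is a plausible motivation for its use with simulated hemodynamic targets.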
28 pages, 357 KB  
Review
Review on Clustering and Aggregation Modeling Methods for Distribution Networks with Large-Scale DER Integration
by Ye Yang, Yetong Luo and Jingrui Zhang
Energies 2026, 19(9), 2205; https://doi.org/10.3390/en19092205 - 2 May 2026
Abstract
As the global response to climate change and energy crises accelerates, the large-scale integration of heterogeneous distributed energy resources (DERs) is rapidly transforming traditional passive distribution networks into active distribution networks. However, the massive quantity and high stochasticity of these underlying devices trigger a severe "curse of dimensionality," creating significant computational and communication bottlenecks for coordinated system dispatch. To overcome these challenges, the "clustering followed by equivalence" aggregation modeling paradigm has emerged as a critical technical pathway. This paper reviews state-of-the-art clustering and aggregation methodologies for distribution networks with high DER penetration. The review begins by synthesizing multi-dimensional feature extraction techniques and cutting-edge clustering algorithms that establish the foundation for dimensionality reduction. It then delves into refined aggregation models tailored to heterogeneous resources, including dynamic data-driven equivalence for renewable generation, Minkowski sum-based boundary approximations for energy storage, and thermodynamic and Markov-chain mapping methods for flexible loads. Building upon these models, the paper comprehensively discusses the practical applications of generalized aggregators, such as microgrids and virtual power plants, in feasible region error evaluation, coordinated network control, multi-agent market games, and privacy-preserving architectures. Finally, the review outlines future research trajectories, emphasizing hybrid data-model-driven architectures for real-time dispatch, distributionally robust optimization (DRO) for enhancing grid resilience and self-healing, and decentralized trading ecosystems to ensure equitable system-level surplus allocation. 
This review aims to provide a systematic theoretical reference for the coordinated management and aggregated trading of flexibility resources in modern power systems. Full article
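The Minkowski sum-based boundary approximation mentioned for energy storage can be sketched in its simplest form, where each device's feasible region is approximated by an axis-aligned box over power and energy bounds; the Minkowski sum of boxes is then just the coordinate-wise sum of the bounds. The field names and fleet values below are illustrative, not drawn from the review.

```python
def minkowski_sum_boxes(devices):
    """Aggregate per-device feasible regions given as axis-aligned boxes
    with power bounds (p_min, p_max) and energy bounds (e_min, e_max).
    The Minkowski sum of boxes is the coordinate-wise sum of their
    bounds, so the aggregator's feasible region is itself a box."""
    return {
        "p_min": sum(d["p_min"] for d in devices),
        "p_max": sum(d["p_max"] for d in devices),
        "e_min": sum(d["e_min"] for d in devices),
        "e_max": sum(d["e_max"] for d in devices),
    }

# Hypothetical two-battery fleet (power in MW, energy in MWh).
fleet = [
    {"p_min": -2.0, "p_max": 2.0, "e_min": 0.0, "e_max": 4.0},
    {"p_min": -1.0, "p_max": 3.0, "e_min": 1.0, "e_max": 5.0},
]
agg = minkowski_sum_boxes(fleet)
```

Real storage feasible regions couple power and energy across time steps and are not exact boxes, which is why the literature resorts to boundary approximations of the Minkowski sum rather than computing it exactly.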
33 pages, 647 KB  
Article
New Mathematics for Computer Performance: Array Algebra and Cost Functions
by Gaétan Hains and Lenore Mullin
Mathematics 2026, 14(9), 1479; https://doi.org/10.3390/math14091479 - 28 Apr 2026
Abstract
MoA (mathematics of arrays) is a theory of parallel operations on arrays that can describe all known algorithms in linear algebra, signal processing, and HPC, because its operations are based on primitive recursion and array shapes. Mapping parallel algorithms to computer architectures remains more of an art than a science, and specific mathematical techniques are needed to provide a basis for performance evaluation at a level of abstraction high enough to constitute an experimental science. In this paper, we present a methodology for parallel code generation from MoA expressions. We then relate the MoA operators to the linear space of memory elements in computer architectures. Finally, we define a theory of execution costs that is based on classical operations research and is formally related to MoA-based parallel code generation. This constitutes a formalized and mechanizable approach to performance prediction, portability, and optimization. Full article
(This article belongs to the Special Issue Advances in High-Performance Computing, Optimization and Simulation)
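The relation between array shapes and the linear space of memory elements can be illustrated with the standard row-major index-to-offset map, the kind of mapping MoA's shape-based indexing machinery formalizes. This sketch is plain Python and is not taken from the paper itself.

```python
def strides(shape):
    """Row-major strides: stride[i] is the product of all dimensions to
    the right of axis i, i.e. how far one step along axis i moves in
    the flat memory space."""
    s = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        s[i] = s[i + 1] * shape[i + 1]
    return s

def offset(index, shape):
    """Map a multidimensional index into a contiguous row-major array
    to its flat memory offset: the dot product of index and strides."""
    return sum(i * st for i, st in zip(index, strides(shape)))

# A 2x3x4 array stored contiguously: 24 elements at offsets 0..23.
```

Because the map is determined entirely by the shape, shape-level reasoning of the kind MoA performs can predict memory access patterns, and hence costs, before any code is generated.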
