Search Results (2,829)

Search Parameters:
Keywords = image alignment

12 pages, 20472 KB  
Article
Perceiving Through the Painted Surface: Viewer-Dependent Depth Illusion in a Renaissance Work
by Siamak Khatibi, Yuan Zhou and Linus de Petris
Arts 2026, 15(1), 16; https://doi.org/10.3390/arts15010016 (registering DOI) - 12 Jan 2026
Abstract
This study explores how classical painting techniques, particularly those rooted in the Renaissance tradition, can produce illusions of depth that vary with the viewer’s position. Focusing on a work rich in soft shading and subtle tonal transitions, we investigate how movement across the frontal plane influences the perception of spatial structure. A sequence of high-resolution photographs was taken from slightly offset viewpoints, simulating natural viewer motion. Using image alignment and pixel-wise difference mapping, we reveal perceptual shifts that suggest the presence of latent three-dimensional cues embedded within the painted surface. The findings offer visual and empirical support for concepts such as dynamic engagement, where depth is constructed not solely by the image, but by the interaction between the artwork and the observer. Our approach demonstrates how digital analysis can enrich art historical interpretation, offering new insight into how still images can evoke the illusion of spatial presence. Full article
(This article belongs to the Section Visual Arts)
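The alignment-then-difference procedure described in the abstract can be sketched generically. The snippet below is a hypothetical illustration (the paper's actual photographs and alignment pipeline are not reproduced here), assuming two grayscale views already registered to a common frame, so only the pixel-wise difference-mapping step is shown:

```python
def difference_map(img_a, img_b):
    """Pixel-wise absolute difference of two aligned grayscale images.

    Images are equal-sized 2D lists of intensities; large values in the
    returned map mark regions that change between viewpoints.
    """
    assert len(img_a) == len(img_b) and len(img_a[0]) == len(img_b[0])
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two tiny 2x3 "photographs" from slightly offset viewpoints
view1 = [[10, 10, 40], [10, 10, 40]]
view2 = [[10, 40, 40], [10, 10, 40]]
dmap = difference_map(view1, view2)   # → [[0, 30, 0], [0, 0, 0]]
```

In practice the registration itself would be done first with an image-alignment routine; the nonzero entries of the map then highlight where the perceived structure shifts with the viewer.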
30 pages, 10471 KB  
Article
A Flexible Wheel Alignment Measurement Method via APCS-SwinUnet and Point Cloud Registration
by Bo Shi, Hongli Liu and Emanuele Zappa
Metrology 2026, 6(1), 4; https://doi.org/10.3390/metrology6010004 (registering DOI) - 12 Jan 2026
Abstract
To achieve low-cost and flexible measurement of wheel angles, we propose a novel strategy that integrates a wheel segmentation network with 3D vision. In this framework, a semantic segmentation network is first employed to extract the wheel rim, followed by angle estimation through ICP-based point cloud registration. Since wheel rim extraction is closely tied to angle computation accuracy, we introduce APCS-SwinUnet, a segmentation network built on the SwinUnet architecture and enhanced with ASPP, CBAM, and a hybrid loss function. Compared with traditional image processing methods in wheel alignment, APCS-SwinUnet delivers more accurate and refined segmentation, especially at wheel boundaries. Moreover, it demonstrates strong adaptability across diverse tire types and lighting conditions. Based on the segmented mask, the wheel rim point cloud is extracted, and an iterative closest point algorithm is then employed to register the target point cloud with a reference one. Taking the zero-angle condition as the reference, the rotation and translation matrices are obtained through point cloud registration. These matrices are subsequently converted into toe and camber angles via matrix-to-angle transformation. Experimental results verify that the proposed solution enables accurate angle measurement in a cost-effective, simple, and flexible manner. Repeated experiments further validate its robustness and stability. Full article
(This article belongs to the Special Issue Applied Industrial Metrology: Methods, Uncertainties, and Challenges)
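The matrix-to-angle transformation mentioned in the abstract can be illustrated with a small sketch. The axis conventions here (toe as rotation about the vertical z-axis, camber about the longitudinal x-axis) and the factorization R = Rz(toe)·Rx(camber) are assumptions made for this example, not the paper's exact formulation:

```python
import math

def compose(toe, camber):
    """Build R = Rz(toe) @ Rx(camber); angles in radians."""
    ca, sa = math.cos(toe), math.sin(toe)
    cb, sb = math.cos(camber), math.sin(camber)
    return [[ca, -sa * cb,  sa * sb],
            [sa,  ca * cb, -ca * sb],
            [0.0,      sb,      cb]]

def toe_camber(R):
    """Recover (toe, camber) from a Rz·Rx rotation matrix."""
    toe = math.atan2(R[1][0], R[0][0])
    camber = math.atan2(R[2][1], R[2][2])
    return toe, camber

# Round trip: a wheel rotated 2° in toe and −1° in camber
R = compose(math.radians(2.0), math.radians(-1.0))
toe, camber = toe_camber(R)
```

Given the rotation matrix returned by ICP registration against the zero-angle reference, the same two `atan2` reads extract the alignment angles directly.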
17 pages, 1538 KB  
Article
A Mobile Augmented Reality Integrating KCHDM-Based Ontologies with LLMs for Adaptive Q&A and Knowledge Testing in Urban Heritage
by Yongjoo Cho and Kyoung Shin Park
Electronics 2026, 15(2), 336; https://doi.org/10.3390/electronics15020336 - 12 Jan 2026
Abstract
A cultural heritage augmented reality system overlays virtual information onto real-world heritage sites, enabling intuitive exploration and interpretation with spatial and temporal contexts. This study presents the design and implementation of a cognitive Mobile Augmented Reality (MAR) system that integrates KCHDM-based ontologies with large language models (LLMs) to facilitate intelligent exploration of urban heritage. While conventional AR guides often rely on static data, our system introduces a Semantic Retrieval-Augmented Generation (RAG) pipeline anchored in a structured knowledge base modeled after the Korean Cultural Heritage Data Model (KCHDM). This architecture enables the LLM to perform dynamic contextual reasoning, transforming heritage data into adaptive question-answering (Q&A) and interactive knowledge-testing quizzes that are precisely grounded in both historical and spatial contexts. The system supports on-site AR exploration and map-based remote exploration to ensure robust usability and precise spatial alignment of virtual content. To deliver a rich, multisensory experience, the system provides multimodal outputs, integrating text, images, models, and audio narration. Furthermore, the integration of a knowledge-sharing repository allows users to review and learn from others’ inquiries. This ontology-driven LLM-integrated MAR design enhances semantic accuracy and contextual relevance, demonstrating the potential of MAR for socially enriched urban heritage experiences. Full article
15 pages, 1649 KB  
Review
Subacute and Chronic Low-Back Pain: From MRI Phenotype to Imaging-Guided Interventions
by Giulia Pacella, Raffaele Natella, Federico Bruno, Michele Fischetti, Michela Bruno, Maria Chiara Brunese, Mario Brunese, Alfonso Forte, Francesco Forte, Biagio Apollonio, Daniele Giuseppe Romano and Marcello Zappia
Diagnostics 2026, 16(2), 240; https://doi.org/10.3390/diagnostics16020240 - 12 Jan 2026
Abstract
Low-back pain (LBP) is a leading cause of disability worldwide. When symptoms persist beyond 4–6 weeks, when red flags are suspected, or when precise patient selection for procedures is needed, imaging—primarily MRI (Magnetic Resonance Imaging)—becomes pivotal. The purpose is to provide a pragmatic, radiology-first roadmap that aligns an imaging phenotype with anatomical targets and appropriate image-guided interventions, integrating MRI-based phenotyping with image-guided interventions for subacute and chronic LBP. In this narrative review, we define operational MRI criteria to distinguish radicular from non-radicular phenotypes and to contextualize endplate/Modic and facet/sacroiliac degenerative changes. We then summarize selection and technique for major procedures: epidural and periradicular injections (including selective nerve root blocks), facet interventions with medial branch radiofrequency ablation (RFA), sacroiliac joint injections and lateral branch RFA, basivertebral nerve ablation (BVNA) for vertebrogenic pain, percutaneous disc decompression, minimally invasive lumbar decompression (MILD), and vertebral augmentation for painful fractures. For each target, we outline preferred and alternative guidance modalities (fluoroscopy, CT, or ultrasound), key safety checks, and realistic effect sizes and durability, emphasizing when to avoid low-value or poorly indicated procedures. This review proposes a phenotype-driven reporting template and a care-pathway table linking MRI patterns to diagnostic blocks and definitive image-guided treatments, with the aim of reducing cascade testing and therapeutic ambiguity. A standardized phenotype → target → tool approach can make MRI reports more actionable and help clinicians choose the right image-guided intervention for the right patient, improving outcomes while prioritizing safety and value. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
15 pages, 1363 KB  
Article
Hierarchical Knowledge Distillation for Efficient Model Compression and Transfer: A Multi-Level Aggregation Approach
by Titinunt Kitrungrotsakul and Preeyanuch Srichola
Information 2026, 17(1), 70; https://doi.org/10.3390/info17010070 - 12 Jan 2026
Abstract
The success of large-scale deep learning models in remote sensing tasks has been transformative, enabling significant advances in image classification, object detection, and image–text retrieval. However, their computational and memory demands pose challenges for deployment in resource-constrained environments. Knowledge distillation (KD) alleviates these issues by transferring knowledge from a strong teacher to a student model, which can be compact for efficient deployment or architecturally matched to improve accuracy under the same inference budget. In this paper, we introduce Hierarchical Multi-Segment Knowledge Distillation (HIMS_KD), a multi-stage framework that sequentially distills knowledge from a teacher into multiple assistant models specialized in low-, mid-, and high-level representations, and then aggregates their knowledge into the final student. We integrate feature-level alignment, auxiliary similarity-logit alignment, and supervised loss during distillation. Experiments on benchmark remote sensing datasets (RSITMD and RSICD) show that HIMS_KD improves retrieval performance and enhances zero-shot classification; and when a compact student is used, it reduces deployment cost while retaining strong accuracy. Full article
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
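Feature-level alignment and softened-logit alignment, as used in distillation frameworks of this kind, are commonly implemented as an MSE term on intermediate features plus a temperature-scaled KL term on logits. The sketch below shows that standard recipe in plain Python; the function and its weighting are illustrative stand-ins, not the actual HIMS_KD losses:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(t_logits, s_logits, t_feat, s_feat, T=4.0, alpha=0.5):
    """alpha * T^2 * KL(teacher || student) on softened logits
    plus (1 - alpha) * MSE between intermediate feature vectors."""
    p = softmax(t_logits, T)
    q = softmax(s_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T
    mse = sum((a - b) ** 2 for a, b in zip(t_feat, s_feat)) / len(t_feat)
    return alpha * kl + (1 - alpha) * mse

# A student that exactly matches its teacher incurs zero loss
loss = kd_loss([1.0, 2.0], [1.0, 2.0], [0.5, 0.5], [0.5, 0.5])   # → 0.0
```

In a hierarchical setup, one such loss would be applied per assistant (low-, mid-, high-level) before aggregating into the final student.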
19 pages, 2336 KB  
Article
A Lightweight Upsampling and Cross-Modal Feature Fusion-Based Algorithm for Small-Object Detection in UAV Imagery
by Jianglei Gong, Zhe Yuan, Wenxing Li, Weiwei Li, Yanjie Guo and Baolong Guo
Electronics 2026, 15(2), 298; https://doi.org/10.3390/electronics15020298 - 9 Jan 2026
Viewed by 72
Abstract
Small-object detection in UAV remote sensing faces common challenges such as tiny target size, blurred features, and severe background interference. Furthermore, single imaging modalities exhibit limited representation capability in complex environments. To address these issues, this paper proposes CTU-YOLO, a UAV-based small-object detection algorithm built upon cross-modal feature fusion and lightweight upsampling. The algorithm incorporates a dynamic and adaptive cross-modal feature fusion (DCFF) module, which achieves efficient feature alignment and fusion by combining frequency-domain analysis with convolutional operations. Additionally, a lightweight upsampling module (LUS) is introduced, integrating dynamic sampling and depthwise separable convolution to enhance the recovery of fine details for small objects. Experiments on the DroneVehicle and LLVIP datasets demonstrate that CTU-YOLO achieves 73.9% mAP on DroneVehicle and 96.9% AP on LLVIP, outperforming existing mainstream methods. Meanwhile, the model has only 4.2 MB of parameters and a computational cost of 13.8 GFLOPs, with inference speeds reaching 129.9 FPS on DroneVehicle and 135.1 FPS on LLVIP, combining a lightweight design and real-time performance with high accuracy. Ablation studies confirm that both the DCFF and LUS modules contribute significantly to performance gains. Visualization analysis further indicates that the proposed method can accurately preserve the structure of small objects even under nighttime, low-light, and multi-scale background conditions, demonstrating strong robustness. Full article
(This article belongs to the Special Issue AI-Driven Image Processing: Theory, Methods, and Applications)
30 pages, 79545 KB  
Article
A2Former: An Airborne Hyperspectral Crop Classification Framework Based on a Fully Attention-Based Mechanism
by Anqi Kang, Hua Li, Guanghao Luo, Jingyu Li and Zhangcai Yin
Remote Sens. 2026, 18(2), 220; https://doi.org/10.3390/rs18020220 - 9 Jan 2026
Viewed by 73
Abstract
Crop classification of farmland is of great significance for crop monitoring and yield estimation. Airborne hyperspectral systems can provide large-format hyperspectral farmland images. However, traditional machine learning-based classification methods rely heavily on handcrafted feature design, resulting in limited representation capability and poor computational efficiency when processing large-format data. Meanwhile, mainstream deep-learning-based hyperspectral image (HSI) classification methods primarily rely on patch-based input, where a label is assigned to each patch, limiting the full utilization of hyperspectral datasets in agricultural applications. In contrast, this paper focuses on the semantic segmentation task in the field of computer vision and proposes a novel HSI crop classification framework named All-Attention Transformer (A2Former), which combines CNN and Transformer based on a fully attention-based mechanism. First, a CNN-based encoder consisting of two blocks, an overlap-downsample block and a spectral–spatial attention weights block (SSWB), is constructed to extract multi-scale spectral–spatial features effectively. Second, we propose a lightweight C-VIT block to enhance high-dimensional features while reducing parameter count and computational cost. Third, a Transformer-based decoder block with gated-style weighted fusion and interaction attention (WIAB), along with a fused segmentation head (FH), is developed to precisely model global and local features and align semantic information across multi-scale features, thereby enabling accurate segmentation. Finally, a checkerboard-style sampling strategy is proposed to avoid information leakage and ensure the objectivity and accuracy of model performance evaluation. Experimental results on two public HSI datasets demonstrate the accuracy and efficiency of the proposed A2Former framework, which outperforms several well-known patch-free and patch-based methods. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
31 pages, 17740 KB  
Article
HR-UMamba++: A High-Resolution Multi-Directional Mamba Framework for Coronary Artery Segmentation in X-Ray Coronary Angiography
by Xiuhan Zhang, Peng Lu, Zongsheng Zheng and Wenhui Li
Fractal Fract. 2026, 10(1), 43; https://doi.org/10.3390/fractalfract10010043 - 9 Jan 2026
Viewed by 179
Abstract
Coronary artery disease (CAD) remains a leading cause of mortality worldwide, and accurate coronary artery segmentation in X-ray coronary angiography (XCA) is challenged by low contrast, structural ambiguity, and anisotropic vessel trajectories, which hinder quantitative coronary angiography. We propose HR-UMamba++, a U-Mamba-based framework centered on a rotation-aligned multi-directional state-space scan for modeling long-range vessel continuity across multiple orientations. To preserve thin distal branches, the framework is equipped with (i) a persistent high-resolution bypass that injects undownsampled structural details and (ii) a UNet++-style dense decoder topology for cross-scale topological fusion. On an in-house dataset of 739 XCA images from 374 patients, HR-UMamba++ is evaluated using eight segmentation metrics, fractal-geometry descriptors, and multi-view expert scoring. Compared with U-Net, Attention U-Net, HRNet, U-Mamba, DeepLabv3+, and YOLO11-seg, HR-UMamba++ achieves the best performance (Dice 0.8706, IoU 0.7794, HD95 16.99), yielding a relative Dice improvement of 6.0% over U-Mamba and reducing the deviation in fractal dimension by up to 57% relative to U-Net. Expert evaluation across eight angiographic views yields a mean score of 4.24 ± 0.49/5 with high inter-rater agreement. These results indicate that HR-UMamba++ produces anatomically faithful coronary trees and clinically useful segmentations that can serve as robust structural priors for downstream quantitative coronary analysis. Full article
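Dice and IoU, two of the segmentation metrics reported above, are computed from binary masks; a generic implementation sketch (not the paper's evaluation code) looks like this:

```python
def dice_iou(pred, gt):
    """Dice coefficient and IoU for flat binary masks (lists of 0/1)."""
    inter = sum(p & g for p, g in zip(pred, gt))
    ps, gs = sum(pred), sum(gt)
    union = ps + gs - inter
    dice = 2 * inter / (ps + gs) if ps + gs else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Toy 4-pixel masks: one true positive, one false positive, one false negative
pred = [1, 1, 0, 0]
gt   = [1, 0, 1, 0]
dice, iou = dice_iou(pred, gt)   # dice = 0.5, iou ≈ 0.333
```

The two scores are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers typically report both alongside boundary metrics such as HD95.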
23 pages, 11860 KB  
Article
HG-RSOVSSeg: Hierarchical Guidance Open-Vocabulary Semantic Segmentation Framework of High-Resolution Remote Sensing Images
by Wubiao Huang, Fei Deng, Huchen Li and Jing Yang
Remote Sens. 2026, 18(2), 213; https://doi.org/10.3390/rs18020213 - 9 Jan 2026
Viewed by 127
Abstract
Remote sensing image semantic segmentation (RSISS) aims to assign a correct class label to each pixel in remote sensing images and has wide applications. With the development of artificial intelligence, RSISS based on deep learning has made significant progress. However, existing methods remain more focused on predefined semantic classes and require costly retraining when confronted with new classes. To address this limitation, we propose the hierarchical guidance open-vocabulary semantic segmentation framework for remote sensing images (named HG-RSOVSSeg), enabling flexible segmentation of arbitrary semantic classes without model retraining. Our framework leverages pretrained text-embedding models to provide class common knowledge and aligns multimodal features through a dual-stream architecture. Specifically, we propose a multimodal feature aggregation module for pixel-level alignment and a hierarchical visual feature decoder guided by text feature alignment, which progressively refines visual features using language priors, preserving semantic coherence during high-resolution decoding. Extensive experiments were conducted on six representative public datasets, and the results showed that our method has the highest mean mIoU value, establishing state-of-the-art performance in the field of open-vocabulary semantic segmentation of remote sensing images. Full article
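At inference time, open-vocabulary segmentation of this kind assigns each pixel the class whose text embedding best matches the pixel's visual feature. A minimal cosine-similarity sketch, using made-up two-dimensional embeddings as stand-ins for real text-encoder vectors (not the HG-RSOVSSeg architecture):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify_pixel(pixel_feat, text_embeds):
    """Return the class name whose text embedding is most similar."""
    return max(text_embeds, key=lambda c: cosine(pixel_feat, text_embeds[c]))

# Hypothetical class embeddings; new classes are added without retraining,
# simply by appending another text embedding to the dictionary.
text_embeds = {"water": [1.0, 0.0], "building": [0.0, 1.0]}
label = classify_pixel([0.9, 0.1], text_embeds)   # → "water"
```

This is the sense in which text embeddings provide "class common knowledge": the vocabulary lives in the embedding dictionary, not in the segmentation head's weights.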
33 pages, 24811 KB  
Article
Demystifying Deep Learning Decisions in Leukemia Diagnostics Using Explainable AI
by Shahd H. Altalhi and Salha M. Alzahrani
Diagnostics 2026, 16(2), 212; https://doi.org/10.3390/diagnostics16020212 - 9 Jan 2026
Viewed by 164
Abstract
Background/Objectives: Conventional workflows, peripheral blood smears, and bone marrow assessment supplemented by LDI-PCR, molecular cytogenetics, and array-CGH, are expert-driven in the face of biological and imaging variability. Methods: We propose an AI pipeline that integrates convolutional neural networks (CNNs) and transfer learning-based models with two explainable AI (XAI) approaches, LIME and Grad-CAM, to deliver both high diagnostic accuracy and transparent rationale. Seven public sources were curated into a unified benchmark (66,550 images) covering ALL, AML, CLL, CML, and healthy controls; images were standardized, ROI-cropped, and split with stratification (80/10/10). We fine-tuned multiple backbones (DenseNet-121, MobileNetV2, VGG16, InceptionV3, ResNet50, Xception, and a custom CNN) and evaluated the accuracy and F1-score, benchmarking against the recent literature. Results: On the five-class task (ALL/AML/CLL/CML/Healthy), MobileNetV2 achieved 97.9% accuracy/F1, with DenseNet-121 reaching 97.66% F1. On ALL subtypes (Benign, Early, Pre, Pro) and across tasks, DenseNet-121 and MobileNetV2 were the most reliable, achieving state-of-the-art accuracy with the strongest, nucleus-centric explanations. Conclusions: XAI analyses (LIME, Grad-CAM) consistently localized leukemic nuclei and other cell-intrinsic morphology, aligning saliency with clinical cues and model performance. Compared with baselines, our approach matched or exceeded accuracy while providing stronger, corroborated interpretability on a substantially larger and more diverse dataset. Full article
16 pages, 1441 KB  
Article
DCRDF-Net: A Dual-Channel Reverse-Distillation Fusion Network for 3D Industrial Anomaly Detection
by Chunshui Wang, Jianbo Chen and Heng Zhang
Sensors 2026, 26(2), 412; https://doi.org/10.3390/s26020412 - 8 Jan 2026
Viewed by 85
Abstract
Industrial surface defect detection is essential for ensuring product quality, but real-world production lines often provide only a limited number of defective samples, making supervised training difficult. Multimodal anomaly detection with aligned RGB and depth data is a promising solution, yet existing fusion schemes tend to overlook modality-specific characteristics and cross-modal inconsistencies, so that defects visible in only one modality may be suppressed or diluted. In this work, we propose DCRDF-Net, a dual-channel reverse-distillation fusion network for unsupervised RGB–depth industrial anomaly detection. The framework learns modality-specific normal manifolds from nominal RGB and depth data and detects defects as deviations from these learned manifolds. It consists of three collaborative components: a Perlin-guided pseudo-anomaly generator that injects appearance–geometry-consistent perturbations into both modalities to enrich training signals; a dual-channel reverse-distillation architecture with guided feature refinement that denoises teacher features and constrains RGB and depth students towards clean, defect-free representations; and a cross-modal squeeze–excitation gated fusion module that adaptively combines RGB and depth anomaly evidence based on their reliability and agreement. Extensive experiments on the MVTec 3D-AD dataset show that DCRDF-Net achieves 97.1% image-level I-AUROC and 98.8% pixel-level PRO, surpassing current state-of-the-art multimodal methods on this benchmark. Full article
(This article belongs to the Section Sensor Networks)
27 pages, 13798 KB  
Article
A Hierarchical Deep Learning Architecture for Diagnosing Retinal Diseases Using Cross-Modal OCT to Fundus Translation in the Lack of Paired Data
by Ekaterina A. Lopukhova, Gulnaz M. Idrisova, Timur R. Mukhamadeev, Grigory S. Voronkov, Ruslan V. Kutluyarov and Elizaveta P. Topolskaya
J. Imaging 2026, 12(1), 36; https://doi.org/10.3390/jimaging12010036 - 8 Jan 2026
Viewed by 102
Abstract
The paper focuses on automated diagnosis of retinal diseases, particularly Age-related Macular Degeneration (AMD) and diabetic retinopathy (DR), using optical coherence tomography (OCT), while addressing three key challenges: disease comorbidity, severe class imbalance, and the lack of strictly paired OCT and fundus data. We propose a hierarchical modular deep learning system designed for multi-label OCT screening with conditional routing to specialized staging modules. To enable DR staging when fundus images are unavailable, we use cross-modal alignment between OCT and fundus representations. This approach involves training a latent bridge that projects OCT embeddings into the fundus feature space. We enhance clinical reliability through per-class threshold calibration and implement quality control checks for OCT-only DR staging. Experiments demonstrate robust multi-label performance (macro-F1 = 0.989 ± 0.006 after per-class threshold calibration) and reliable calibration (ECE = 2.1 ± 0.4%), and OCT-only DR staging is feasible in 96.1% of cases that meet the quality control criterion. Full article
(This article belongs to the Section Medical Imaging)
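The expected calibration error (ECE) cited above bins predictions by confidence and averages the per-bin gap between mean confidence and accuracy, weighted by bin size. A standard implementation sketch (not the authors' code):

```python
def ece(confidences, correct, n_bins=10):
    """Expected calibration error for binary correctness labels (0/1)."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)   # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    err = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        err += len(b) / total * abs(avg_conf - acc)
    return err

# Perfectly calibrated toy case: 80% confidence, 4 of 5 correct
perfect = ece([0.8] * 5, [1, 1, 1, 1, 0])   # → 0.0 (up to float noise)
```

Per-class threshold calibration as described in the abstract would be applied upstream of this measurement, choosing a decision threshold per label before correctness is scored.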
15 pages, 2695 KB  
Article
Opportunistic Osteoporosis Screening in Breast Cancer Using AI-Derived Vertebral BMD from Routine CT: Validation Against QCT and Multivariable Diagnostic Modeling
by Jiayi Pu, Wenqin Zhou, Miao Wei, Wen Li, Yan Xiao, Jia Xie and Fajin Lv
J. Clin. Med. 2026, 15(2), 512; https://doi.org/10.3390/jcm15020512 - 8 Jan 2026
Viewed by 115
Abstract
Background/Objectives: Breast cancer survivors face elevated risk of treatment-related bone loss, yet routine bone health assessment remains underutilized. Opportunistic bone density extraction from routine CT may address this gap. This study validated AI-derived vertebral bone mineral density (AI-vBMD) from non-contrast thoracoabdominal CT for osteoporosis screening and assessed its diagnostic value beyond clinical variables. Methods: This retrospective study included 332 breast cancer patients; AI-vBMD was successfully extracted in 325 (98%). Quantitative CT (QCT) served as reference standard. Agreement between AI-vBMD and QCT-vBMD was assessed using Pearson correlation, Bland–Altman analysis, and weighted kappa for QCT-defined osteoporosis (<80 mg/cm3). Nested logistic regression models compared a clinical model with and without AI-vBMD. Discrimination [area under the curve (AUC)], calibration, and clinical utility [decision-curve analysis (DCA)] were evaluated. Results: AI-vBMD showed strong correlation with QCT-vBMD (r = 0.98, p < 0.001), minimal bias (mean difference +1.82 mg/cm3), and excellent agreement for osteoporosis classification (weighted κ = 0.90). AI-vBMD alone achieved excellent discrimination for osteoporosis (AUC = 0.986). Integrating AI-vBMD into the clinical model yielded significantly higher diagnostic performance (AUC 0.988 vs. 0.879; p < 0.001) and demonstrated superior net benefit across relevant decision thresholds. Conclusions: AI-derived vertebral BMD from routine CT serves as a reliable QCT-aligned imaging biomarker for opportunistic osteoporosis assessment in breast cancer patients and adds significant incremental diagnostic value beyond clinical information alone. Full article
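The agreement analysis described here, Pearson correlation plus Bland–Altman bias and 95% limits of agreement, can be sketched generically; the data values below are made up for illustration and are not the study's measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement (bias ± 1.96 SD)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

ai  = [82.0, 95.0, 110.0, 70.0]   # hypothetical AI-vBMD values, mg/cm^3
qct = [80.0, 94.0, 108.0, 70.0]   # hypothetical QCT reference values
r = pearson_r(ai, qct)            # near 1 for near-identical series
bias, (lo, hi) = bland_altman(ai, qct)
```

A small positive bias with narrow limits of agreement, as reported in the abstract (+1.82 mg/cm3), is the pattern this analysis is designed to surface.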
19 pages, 4909 KB  
Article
The Invention of a Patriotic Sage: State Ritual, Public Memory, and the Remaking of Yulgok Yi I
by Codruța Sîntionean
Religions 2026, 17(1), 70; https://doi.org/10.3390/rel17010070 - 8 Jan 2026
Viewed by 80
Abstract
This article examines how the Park Chung Hee regime reshaped the public memory of the Neo-Confucian philosopher Yi I (penname Yulgok, 1536–1584) by recasting him as a model of patriotic nationalism. Beginning with the inauguration of the Yulgok Festival in 1962, Yi I was no longer commemorated solely as a scholar of the Chosŏn dynasty; instead, the regime portrayed him as a patriotic sage who advocated for military preparedness. Drawing on archival materials (presidential speeches, heritage management reports, newspaper articles), this study reconstructs the policy discourse surrounding Yulgok and traces the state-driven mechanisms that reframed his public image. The analysis shows that Yulgok’s image became embedded in political rituals, monumentalized in public spaces, circulated in everyday life through currency iconography, and materialized in physical heritage sites transformed to embody a purified, idealized vision of the past. Together, these initiatives positioned the state as the custodian of Yulgok’s memory, aligning his image with the ideological priorities of the militarist state. Full article
(This article belongs to the Special Issue Re-Thinking Religious Traditions and Practices of Korea)
17 pages, 2010 KB  
Review
Deep Brain Stimulation as a Rehabilitation Amplifier: A Precision-Oriented, Network-Guided Framework for Functional Restoration in Movement Disorders
by Olga Mateo-Sierra, Beatriz De la Casa-Fages, Esther Martín-Ramírez, Marta Barreiro-Gómez and Francisco Grandas
J. Clin. Med. 2026, 15(2), 492; https://doi.org/10.3390/jcm15020492 - 8 Jan 2026
Viewed by 150
Abstract
Background: Deep brain stimulation (DBS) is increasingly understood as a precision-oriented neuromodulation therapy capable of influencing distributed basal ganglia–thalamo–cortical and cerebellothalamic networks. Although its symptomatic benefits in Parkinson’s disease, essential tremor, and dystonia are well established, the extent to which DBS supports motor learning, adaptive plasticity, and participation in rehabilitation remains insufficiently defined. Traditional interpretations of DBS as a focal or lesion-like intervention are being challenged by electrophysiological and imaging evidence demonstrating multiscale modulation of circuit dynamics. Objectives and methods: DBS may enhance rehabilitation outcomes by stabilizing pathological oscillations and reducing moment-to-moment variability in motor performance, thereby enabling more consistent task execution and more effective physiotherapy, occupational therapy, and speech–language interventions. However, direct comparative evidence demonstrating additive or synergistic effects of DBS combined with rehabilitation remains limited. As a result, this potential is not fully realized in clinical practice due to interindividual variability, limited insight into how individual circuit architecture shapes therapeutic response, and the limited specificity of current connectomic biomarkers for predicting functional gains. Results: Technological advances such as tractography-guided targeting, directional leads, sensing-enabled devices, and adaptive stimulation are expanding opportunities to align neuromodulation with individualized circuit dysfunction. Despite these developments, major conceptual and empirical gaps persist. Few controlled studies directly compare outcomes with versus without structured rehabilitation following DBS. Heterogeneity in therapeutic response and rehabilitation access further complicates the interpretation of outcomes. Clarifying these relationships is essential for developing precision-informed frameworks that integrate DBS with rehabilitative strategies, recognizing that current connectomic and physiological biomarkers remain incompletely validated for predicting functional outcomes. Conclusions: This review synthesizes mechanistic, imaging, and technological evidence to outline a network-informed perspective of DBS as a potential facilitator of rehabilitation-driven functional improvement and identifies priorities for future research aimed at optimizing durable functional restoration. Full article