Search Results (73)

Search Parameters:
Keywords = modal shape visualization

22 pages, 13642 KB  
Article
A Visual Target Navigation Method for Quadrotor Based on Large Language Model in Unknown Environment
by Yunzhuo Liu, Zhaowei Ma, Jiankun Guo, Haozhe Sun, Yifeng Niu, Hong Zhang and Mengyun Wang
Drones 2026, 10(2), 134; https://doi.org/10.3390/drones10020134 - 14 Feb 2026
Viewed by 157
Abstract
This paper proposes a novel Large Language Model (LLM)-based visual target navigation framework for quadrotors in unknown environments. Leveraging the semantic knowledge of LLMs, our method enables autonomous exploration based on natural language instructions. We design an intelligent planner using specialized prompt templates that operates in two phases: first, deriving global search sequences via probabilistic inference; second, dynamically generating sub-goal waypoints by fusing visual observations with statistical priors and LLM-derived scene relevance metrics. The quadrotor then executes a progressive search via path planning algorithms. Simulation results indicate that our fused method outperforms single-modality baselines by approximately 20%. Furthermore, physical flight experiments demonstrate success rates of 56% in Cross-layout and 48% in T-shaped layout scenarios. These results, while reflecting the inherent challenges of perceptual occlusion and planning uncertainty, validate the feasibility and potential of the proposed framework in real-world applications. Full article
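The planner described above scores candidate sub-goals by fusing statistical priors with LLM-derived scene relevance. The paper's exact rule is not reproduced in the abstract; the sketch below shows one plausible convex-combination fusion, where the `Waypoint` fields, the weight `alpha`, and the example scores are all illustrative assumptions.

```python
# Hypothetical sketch of prior / LLM-relevance fusion for sub-goal selection.
# The weighting scheme and score sources are assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class Waypoint:
    name: str
    prior: float      # statistical prior, e.g. room-type co-occurrence with the target
    relevance: float  # LLM-derived scene relevance in [0, 1]

def select_subgoal(candidates: list[Waypoint], alpha: float = 0.5) -> Waypoint:
    """Pick the waypoint maximizing a convex combination of prior and LLM relevance."""
    return max(candidates, key=lambda w: alpha * w.prior + (1 - alpha) * w.relevance)

if __name__ == "__main__":
    frontier = [
        Waypoint("kitchen_doorway", prior=0.7, relevance=0.4),
        Waypoint("living_room_corner", prior=0.3, relevance=0.9),
    ]
    print(select_subgoal(frontier).name)  # "living_room_corner" with alpha = 0.5
```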

13 pages, 1185 KB  
Article
A Dual-Mode Near-Infrared Optical Probe and Monte Carlo Framework for Functional Monitoring of Rheumatoid Arthritis: Addressing Diagnostic Ambiguity and Skin Tone Robustness
by Parmveer Atwal, Ryley McWilliams, Ramani Ramaseshen and Farid Golnaraghi
Sensors 2026, 26(4), 1179; https://doi.org/10.3390/s26041179 - 11 Feb 2026
Viewed by 209
Abstract
Current diagnostic modalities for rheumatoid arthritis (RA), such as Magnetic Resonance Imaging (MRI) and ultrasound (US), excel at visualizing structural pathology but are either resource-intensive or often limited to morphological assessment. In this work, we present the design and technical validation of a low-cost continuous-wave near-infrared (NIR) dual-mode optical probe for functional monitoring of joint inflammation. Unlike superficial imaging, NIR light penetrates approximately 3–5 cm, depending on tissue type and wavelength, enabling trans-illumination of the synovial volume. The system combines reflectance and transmission geometries to resolve the ambiguity between disease presence and disease severity. To validate the diagnostic logic, we employed mcxyzn Monte Carlo (MC) simulations to model the optical signature of RA progression from early onset to EULAR-OMERACT grade 2 pannus hypertrophy on a simplified finger model, based on several tissue models in the literature and supported by physical measurements on a multilayer silicone phantom and in vivo signal verification on human volunteers. Our results demonstrate a distinct functional dichotomy: reflectance geometry serves as a binary discriminator of synovial turbidity onset, while transmission flux serves as a monotonic proxy for pannus volume, exhibiting a quantifiable signal decay consistent with the Beer–Lambert law. Signal verification on a subject with confirmed RA pathology demonstrated a significant increase in the effective attenuation coefficient (µeff ~ 0.59 mm⁻¹) compared to the healthy baseline (µeff ~ 0.47 mm⁻¹). Furthermore, simulation analysis revealed a critical “metric inversion” in darker skin phenotypes (Fitzpatrick V–VI), where the standard beam-broadening signature of inflammation is artificially suppressed by epidermal absorption. We conclude that while transmission flux remains a robust grading metric across diverse skin tones, morphological beam-shape metrics are not robust, particularly in high-absorption populations. By targeting the hemodynamic precursors of structural damage, this dual-mode probe design offers a potential pathway for longitudinal, quantitative monitoring of disease activity at the point of care, while the systematic use of the Monte Carlo framework provides insight into the measurement geometry most suitable for a given clinical endpoint, whether that be detecting the presence or severity of rheumatoid arthritis. Full article
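The transmission results above are interpreted through the Beer–Lambert law, with effective attenuation coefficients of roughly 0.59 mm⁻¹ (RA) versus 0.47 mm⁻¹ (healthy). A minimal sketch of that relationship follows; the 15 mm path length and unit incident flux are illustrative assumptions, not values from the paper.

```python
import math

def transmitted_flux(i0: float, mu_eff: float, path_mm: float) -> float:
    """Beer–Lambert attenuation: I = I0 * exp(-mu_eff * d)."""
    return i0 * math.exp(-mu_eff * path_mm)

def mu_eff_from_ratio(i0: float, i: float, path_mm: float) -> float:
    """Invert the same law to recover an effective attenuation coefficient."""
    return -math.log(i / i0) / path_mm

# Illustrative 15 mm trans-illumination path (assumed, not from the paper):
d = 15.0
healthy = transmitted_flux(1.0, 0.47, d)  # ~8.7e-4 of incident flux
ra = transmitted_flux(1.0, 0.59, d)       # ~1.4e-4 of incident flux
print(ra / healthy)                       # ~0.17: monotonic signal decay with pathology
```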

54 pages, 2381 KB  
Review
From the Optic Neuritis Treatment Trial to Antibody-Mediated Optic Neuritis: Four Decades of Progress and Unanswered Questions
by Marco A. Lana-Peixoto, Natália C. Talim and Paulo P. Christo
Biomedicines 2026, 14(2), 334; https://doi.org/10.3390/biomedicines14020334 - 31 Jan 2026
Viewed by 479
Abstract
Optic neuritis (ON) has been recognized since antiquity, but its modern clinical identity emerged only in the late 19th century and was definitively shaped by the Optic Neuritis Treatment Trial (ONTT). The ONTT established the natural history, visual prognosis, association with multiple sclerosis (MS), and therapeutic response to corticosteroids, building the foundation for contemporary ON management. Subsequent discoveries—most notably aquaporin-4 IgG-associated ON (AQP4-ON), myelin oligodendrocyte glycoprotein antibody-associated ON (MOG-ON), and double-negative ON—have fundamentally transformed this paradigm, shifting ON from a seemingly uniform demyelinating syndrome to a group of biologically distinct disorders. These subtypes differ in immunopathology, clinical course, MRI features, retinal injury patterns, CSF profiles, and long-term outcomes, making early and accurate differentiation essential. MRI provides key distinctions in lesion length, orbital tissue inflammation, bilateral involvement, and chiasmal or optic tract extension. Optical coherence tomography (OCT) offers complementary structural biomarkers, including severe early ganglion cell loss in AQP4-ON, relative preservation in MOG-ON, and variable patterns in double-negative ON. CSF analysis further refines diagnosis, with oligoclonal bands strongly supporting MS-ON. Together, these modalities enable precise early stratification and timely initiation of targeted immunotherapy, which is critical for preventing irreversible visual disability. Despite major advances, significant unmet needs persist. Access to high-resolution MRI, OCT, cell-based antibody assays, and evidence-based treatments remains limited in many regions, contributing to global disparities in outcomes. The understanding of the pathogenesis of double-negative optic neuritis, the identification of reliable biomarkers of relapse and visual recovery, and the determination of standardized cut-off values for multimodal diagnostic tools—including MRI, OCT, CSF analysis, and serological assays—remain unresolved challenges. Future research must expand biomarker discovery, refine imaging criteria, and ensure equitable global access to cutting-edge diagnostic platforms and therapeutic innovations. Four decades after the ONTT, ON remains a dynamic field of investigation, with ongoing advances holding the potential to transform care for patients worldwide. Together, these advances expose a fundamental tension between historically MS-centered diagnostic frameworks and the emerging biological heterogeneity of ON, a tension that underpins the structure and critical perspective of the present review. Full article
(This article belongs to the Special Issue Multiple Sclerosis: Diagnosis and Treatment—3rd Edition)

28 pages, 2206 KB  
Article
Cross-Modal Temporal Graph Transformers for Explainable NFT Valuation and Information-Centric Risk Forecasting in Web3 Markets
by Fang Lin, Yitong Yang and Jianjun He
Information 2026, 17(2), 112; https://doi.org/10.3390/info17020112 - 23 Jan 2026
Viewed by 245
Abstract
NFT prices are shaped by heterogeneous signals including visual appearance, textual narratives, transaction trajectories, and on-chain interactions, yet existing studies often model these factors in isolation and rarely unify multimodal alignment, temporal non-stationarity, and heterogeneous relational dependencies in a leakage-safe forecasting setting. We propose MM-Temporal-Graph, a cross-modal temporal graph transformer framework for explainable NFT valuation and information-centric risk forecasting. The model encodes image, text, transaction time series, and blockchain behavioral features, constructs a heterogeneous NFT interaction graph (co-transaction, shared creator, wallet relation, and price co-movement), and jointly performs relation-aware graph attention and global temporal–structural transformer reasoning with an adaptive fusion gate. A contrastive multimodal alignment objective improves robustness under market drift, while a risk-aware regularizer and a multi-source risk index enable early warning and interpretable attribution across modalities, time segments, and relational neighborhoods. On MultiNFT-T, MM-Temporal-Graph improves MAE from 0.162 to 0.153 and R² from 0.823 to 0.841 over the strongest multimodal graph baseline, and achieves 87.4% early risk detection accuracy. These results support accurate, robust, and explainable NFT valuation and proactive risk monitoring in Web3 markets. Full article
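The abstract mentions an adaptive fusion gate over image, text, time-series, and on-chain embeddings. The paper's exact layer is not specified here; below is a generic softmax-gated fusion sketch in PyTorch, with the embedding dimension, modality count, and gating form all assumed.

```python
import torch
import torch.nn as nn

class AdaptiveFusionGate(nn.Module):
    """Softmax-gated sum of per-modality embeddings (a generic sketch, not the paper's layer)."""
    def __init__(self, dim: int, n_modalities: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim * n_modalities, n_modalities)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of [batch, dim] embeddings (image, text, transaction series, on-chain)
        stacked = torch.stack(feats, dim=1)                                    # [batch, M, dim]
        weights = torch.softmax(self.gate(torch.cat(feats, dim=-1)), dim=-1)   # [batch, M]
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)                    # [batch, dim]

fuse = AdaptiveFusionGate(dim=128)
fused = fuse([torch.randn(8, 128) for _ in range(4)])
print(fused.shape)  # torch.Size([8, 128])
```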

18 pages, 4244 KB  
Article
Dual-Modal Contrastive Learning for Continual Generalized Category Discovery
by Wei Jin, Nannan Li, Chengcheng Yang, Huanqiang Hu and Kuo Li
Mathematics 2026, 14(2), 365; https://doi.org/10.3390/math14020365 - 21 Jan 2026
Viewed by 232
Abstract
Continual Generalized Category Discovery (C-GCD) is an emerging research direction in Open-World Learning. The model aims to incrementally discover novel classes from unlabeled data while maintaining recognition of previously learned classes, without accessing historical samples. The absence of supervision signal in incremental sessions makes catastrophic forgetting more severe than in traditional incremental learning. Existing methods primarily enhance generalization through single-modality contrastive learning, overlooking the natural advantages of textual information. Visual features capture perceptual details such as shapes and textures, while textual information helps distinguish visually similar but semantically distinct categories, offering complementary benefits. However, directly obtaining category descriptions for unlabeled data in C-GCD is challenging. To address this, we introduce a conditional prompt learning mechanism to generate pseudo-prompts as textual information for unlabeled samples. Additionally, we propose a dual-modal contrastive learning strategy to enhance vision-text alignment and exploit CLIP’s multimodal potential. Extensive experiments on four benchmark datasets demonstrate that our method achieves competitive performance. We hope this work provides new insights for future research. Full article
(This article belongs to the Special Issue Computational Intelligence, Computer Vision and Pattern Recognition)
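The dual-modal strategy above aligns visual features with pseudo-prompt text features in a CLIP-style embedding space. The authors' exact objective is not given in the abstract; a standard symmetric image–text contrastive (InfoNCE) loss, which such methods commonly build on, is sketched below with an assumed temperature and in-batch pairing.

```python
import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """CLIP-style loss: matched image / pseudo-prompt pairs are positives, all others negatives."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature                 # [batch, batch] scaled cosine similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = symmetric_contrastive_loss(torch.randn(16, 512), torch.randn(16, 512))
print(loss.item())
```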

27 pages, 4064 KB  
Article
RDINet: A Deep Learning Model Integrating RGB-D and Ingredient Features for Food Nutrition Estimation
by Zhejun Kuang, Haobo Gao, Jiaxuan Yu, Dawen Sun, Jian Zhao and Lei Sun
Appl. Sci. 2026, 16(1), 454; https://doi.org/10.3390/app16010454 - 1 Jan 2026
Viewed by 370
Abstract
With growing public health awareness, accurate food nutrition estimation plays an increasingly important role in dietary management and disease prevention. The main bottleneck lies in how to effectively integrate multi-source heterogeneous information. We propose RDINet, a multimodal network that fuses RGB appearance, depth geometry, and ingredient semantics for food nutrition estimation. It comprises two core modules: The RGB-D fusion module integrates the textural appearance of RGB images and the 3D shape information conveyed by depth images through a channel–spatial attention mechanism, achieving a joint understanding of food appearance and geometric morphology without explicit 3D reconstruction; the ingredient fusion module embeds ingredient information into visual features via attention mechanisms, enabling the model to fully leverage components that are visually difficult to discern or prone to confusion, thereby activating corresponding nutritional reasoning pathways and achieving cross-modal inference from explicit observations to latent attributes. Experimental results on the Nutrition5k dataset show that RDINet achieves percentage mean absolute errors (PMAE) of 14.9%, 11.2%, 19.7%, 18.9%, and 19.5% for estimating calories, mass, fat, carbohydrates, and protein, respectively, with a mean PMAE of 16.8% across all metrics, outperforming existing mainstream methods. The results demonstrate that the appearance–geometry–semantics fusion framework is effective. Full article
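Results above are quoted as percentage mean absolute error (PMAE). The abstract does not state the formula; one common definition normalizes the MAE by the mean ground-truth value per nutrient, as in the sketch below (the normalization choice and the example calorie values are assumptions).

```python
import numpy as np

def pmae(pred: np.ndarray, truth: np.ndarray) -> float:
    """Percentage MAE: mean absolute error relative to the mean ground-truth value."""
    return float(np.mean(np.abs(pred - truth)) / np.mean(truth) * 100.0)

# Illustrative calorie estimates (kcal) for four dishes:
truth = np.array([250.0, 480.0, 120.0, 640.0])
pred = np.array([230.0, 510.0, 150.0, 600.0])
print(f"{pmae(pred, truth):.1f}%")  # ~8.1%
```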

22 pages, 1413 KB  
Systematic Review
Motion Capture as an Immersive Learning Technology: A Systematic Review of Its Applications in Computer Animation Training
by Xinyi Jiang, Zainuddin Ibrahim, Jing Jiang and Gang Liu
Multimodal Technol. Interact. 2026, 10(1), 1; https://doi.org/10.3390/mti10010001 - 23 Dec 2025
Viewed by 905
Abstract
Motion capture (MoCap) is increasingly recognized as a powerful multimodal immersive learning technology, providing embodied interaction and real-time motion visualization that enrich educational experiences. Although MoCap is gaining prominence within educational research, its pedagogical value and integration into computer animation training environments have received relatively limited systematic investigation. This review synthesizes findings from 17 studies to analyze how MoCap supports instructional design, creative development, and workflow efficiency in animation education. Results show that MoCap enables a multimodal learning process by combining visual, kinesthetic, and performative modalities, strengthening learners’ sense of presence, agency, and perceptual–motor understanding. Furthermore, we identified five key technical affordances of MoCap, including precision and fidelity, multi-actor and creative control, interactivity and immersion, perceptual–motor learning, and emotional expressiveness, which together shape both cognitive and creative learning outcomes. Emerging trends highlight MoCap’s growing convergence with VR/AR, XR, real-time rendering engines, and AI-augmented motion analysis, expanding its role in the design of immersive and interactive educational systems. This review offers insights into the use of MoCap in animation education research and provides a springboard for future work on more immersive and industry-relevant training. Full article
(This article belongs to the Special Issue Educational Virtual/Augmented Reality)

43 pages, 1311 KB  
Article
Wayfinding with Impaired Vision: Preferences for Cues, Strategies, and Aids (Part I—Perspectives from Visually Impaired Individuals)
by Dominique P. H. Blokland, Maartje J. E. van Loef, Nathan van der Stoep, Albert Postma and Krista E. Overvliet
Brain Sci. 2026, 16(1), 13; https://doi.org/10.3390/brainsci16010013 - 22 Dec 2025
Cited by 1 | Viewed by 719
Abstract
People with visual impairments (VIPs) can participate in orientation and mobility (O&M) training to learn how to navigate to their desired goal locations. During O&M training, personal wayfinding preferences with regard to cue use and wayfinding strategy choice are taken into account. However, there is still a lack of clarity about which factors shape VIPs’ wayfinding experiences and how. Background/Objectives: In this study, we mapped individual differences in preferred sensory modality (both orientation- and mobility-related), and classified which personal and environmental factors are relevant for these preferences. Methods: To this end, interviews were conducted with eleven Dutch VIPs whose impairment varied in onset, etiology, and severity. Results: We concluded from our thematic analysis that hearing is the most important sensory modality to VIPs for orientation purposes, although it varies per person how and how often other resources are relied upon (i.e., other sensory modalities, existing knowledge of an environment, help from others, or navigational aids). Additionally, environmental factors such as weather conditions, crowdedness, and familiarity of the environment influence if, how, and which sensory modalities are employed. These preferences and strategies might be mediated by individual differences in priorities and needs pertaining to energy management. Conclusions: We discuss how the current findings could be of interest to orientation and mobility instructors when choosing a training strategy for individual clients. Full article
(This article belongs to the Special Issue Neuropsychological Exploration of Spatial Cognition and Navigation)

34 pages, 1353 KB  
Article
Wayfinding with Impaired Vision: Preferences for Cues, Strategies, and Aids (Part II—Perspectives from Orientation and Mobility Instructors)
by Dominique P. H. Blokland, Maartje J. E. van Loef, Nathan van der Stoep, Albert Postma and Krista E. Overvliet
Brain Sci. 2026, 16(1), 6; https://doi.org/10.3390/brainsci16010006 - 20 Dec 2025
Viewed by 651
Abstract
Background/Objectives: People with visual impairments can participate in orientation and mobility (O&M) training to learn how to navigate to their desired destinations. Instructors adapt their approach to each individual client. However, assessments of client characteristics and resulting instructional adaptations are not standardised and may therefore vary. This study aimed to identify which individual differences instructors consider during O&M training and why. Methods: We conducted semi-structured qualitative interviews with 10 O&M instructors. Participants were asked to describe how they prepare for a training trajectory, and to describe a route they taught a specific client. Thematic analysis was used to determine instructional choices and the relevant client-specific factors. Results: We observed a common four-step instructional process in which clients are taught to notice, interpret, act upon, and anticipate relevant sensory cues until a destination is reached. Four main themes captured the individual differences impacting this process: Sensory modalities, Capacities and limits, Personal contextual characteristics, and Training approach. Conclusions: Instructors perceive route learning to be shaped by clients’ sensory abilities (even fluctuating within sensory modalities), mental and physical capacities (especially concentration and energy), and personal characteristics (especially age and anxiety). The dynamic social context in which training takes place (e.g., the instructor–client relationship) is shaped by individual differences between both clients and instructors. We speculate that trust-related themes (e.g., building confidence) may explain why certain client characteristics are emphasised by instructors, as they are associated with training outcomes. Full article
(This article belongs to the Special Issue Neuropsychological Exploration of Spatial Cognition and Navigation)

38 pages, 25113 KB  
Article
A Two-Stage End-to-End Framework for Robust Scene Text Spotting with Self-Calibrated Detection and Contextual Recognition
by Yuning Cheng, Jinhong Huang, Io San Tai, Subrota Kumar Mondal, Tianqi Wang and Hussain Mohammed Dipu Kabir
Electronics 2025, 14(23), 4594; https://doi.org/10.3390/electronics14234594 - 23 Nov 2025
Viewed by 1167
Abstract
End-to-end scene text detection and recognition, which involves detecting and recognizing text in natural images, still faces significant challenges, particularly in handling text of arbitrary shapes, complex backgrounds, and computational efficiency requirements. This paper proposes a novel and viable end-to-end OCR framework that synergistically combines a powerful detection network with advanced recognition models. For text detection, we develop a method called Text Contrast Self-Calibrated Network (TextCSCN), which employs pixel-wise supervised contrastive learning to extract more discriminative features. TextCSCN addresses long-range dependency modeling and limited receptive field issues through self-calibrated convolutions and Global Convolutional Networks (GCNs). We further introduce an efficient Mamba-based bidirectional module for boundary refinement, enhancing both accuracy and speed. For text recognition, our framework employs a Swin Transformer backbone with Bidirectional Feature Pyramid Networks (BiFPNs) for optimized multi-scale feature extraction. We propose a Pre-Gated Contextual Attention Gate (PCAG) mechanism to effectively fuse visual and linguistic information while minimizing noise and uncertainty in multi-modal integration. Experiments on challenging benchmarks including TotalText and CTW1500 demonstrate the effectiveness of our approach. Our detection module achieves state-of-the-art performance with an F-score of 88.21% on TotalText, and the complete end-to-end system shows comparable improvements in recognition accuracy, establishing new benchmarks for scene text spotting. Full article
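TextCSCN is said to use pixel-wise supervised contrastive learning to sharpen text/background features. The paper's loss is not reproduced in the abstract; the snippet below is a generic supervised-contrastive sketch over sampled pixel embeddings, with the temperature, sampling, and binary text/background labels assumed.

```python
import torch
import torch.nn.functional as F

def pixel_supcon_loss(pix_emb: torch.Tensor, pix_lbl: torch.Tensor,
                      temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss over N sampled pixel embeddings.
    pix_emb: [N, dim]; pix_lbl: [N] with 1 = text pixel, 0 = background."""
    z = F.normalize(pix_emb, dim=-1)
    sim = z @ z.t() / temperature                                   # [N, N] similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (pix_lbl[:, None] == pix_lbl[None, :]) & ~self_mask  # same-class pairs
    log_prob = sim - torch.logsumexp(sim.masked_fill(self_mask, float('-inf')),
                                     dim=1, keepdim=True)
    # Average log-probability of positive pairs per anchor:
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    return loss.mean()

loss = pixel_supcon_loss(torch.randn(64, 32), torch.randint(0, 2, (64,)))
print(loss.item())
```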

23 pages, 2988 KB  
Article
Exploratory Investigation of Motor and Psychophysiological Outcomes Following VR-Based Motor Training with Augmented Sensory Feedback for a Pilot Cohort with Spinal Cord Injury
by Raviraj Nataraj, Mingxiao Liu, Yu Shi, Sophie Dewil and Noam Y. Harel
Bioengineering 2025, 12(11), 1266; https://doi.org/10.3390/bioengineering12111266 - 18 Nov 2025
Viewed by 689
Abstract
Spinal cord injury (SCI) impairs motor function and requires rigorous rehabilitative therapy, motivating the development of approaches that are engaging and customizable. Virtual reality (VR) motor training with augmented sensory feedback (ASF) offers a promising pathway to enhance functional outcomes, yet it remains unclear how ASF modalities affect performance and underlying psychophysiological states in persons with SCI. Five participants with chronic incomplete cervical-level SCI controlled a virtual robotic arm with semi-isometric upper-body contractions while undergoing ASF training with either visual feedback (VF) or combined visual plus haptic feedback (VHF). Motor performance (pathlength, completion time), psychophysiological measures (EEG, EMG, EDA, HR), and perceptual ratings (agency, motivation, utility) were assessed before and after ASF training. VF significantly reduced pathlength (−12.5%, p = 0.0011) and lowered EMG amplitude (−32.5%, p = 0.0063), suggesting the potential for improved motor performance and neuromuscular efficiency. VHF did not significantly improve performance, but trended toward higher cortical engagement. EEG analyses showed VF significantly decreased alpha and beta activity after training, whereas VHF trended toward mild increases. Regression revealed improved performance was significantly (p < 0.05) associated with changes in alpha power, EMG, EDA, and self-reported motivation. ASF type may differentially shape performance and psychophysiological responses in SCI participants. These preliminary findings suggest VR-based ASF as a potent multidimensional tool for personalizing rehabilitation. Full article
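The EEG findings above are expressed as changes in alpha- and beta-band activity. As a hedged illustration only, band power is commonly estimated by integrating a Welch power spectral density over the band of interest; the sampling rate, epoch length, and band edges below are assumptions rather than the study's pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Approximate band power: integrate the Welch PSD of one channel over [lo, hi] Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].sum() * (freqs[1] - freqs[0]))  # rectangle-rule integration

fs = 250.0                           # assumed sampling rate, not from the study
eeg = np.random.randn(int(60 * fs))  # synthetic 60 s single-channel signal
alpha = band_power(eeg, fs, 8.0, 13.0)
beta = band_power(eeg, fs, 13.0, 30.0)
print(alpha, beta)
```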

9 pages, 2957 KB  
Case Report
Flexible Bronchoscopic En Bloc Cryoextraction of Endobronchial Leiomyoma Using a 1.7-mm Cryoprobe: A Case Report with One-Year Follow-Up
by Chaeuk Chung and Dongil Park
Diagnostics 2025, 15(22), 2850; https://doi.org/10.3390/diagnostics15222850 - 11 Nov 2025
Viewed by 788
Abstract
Background and Clinical Significance: Endobronchial leiomyoma is a rare benign tumor of the respiratory tract, accounting for less than 2% of all benign pulmonary neoplasms. Most cases have been treated surgically or with endoscopic modalities such as laser or rigid bronchoscopy-assisted cryotherapy. Flexible bronchoscopic cryoextraction has been rarely reported, typically with 2.2-mm probes. Small-caliber cryoprobes (1.1- and 1.7-mm) have been validated for diagnostic transbronchial cryobiopsy but not for therapeutic removal of leiomyoma. We report a case of complete removal of endobronchial leiomyoma using a 1.7-mm cryoprobe via flexible bronchoscopy, demonstrating full airway and physiologic recovery. Case Presentation: A 25-year-old never-smoking man was referred after an abnormal health-screening chest radiograph demonstrated right middle and lower lobe atelectasis. Chest CT revealed a mass obstructing the proximal bronchus intermedius. Spirometry showed reduced FEV1 and FVC with preserved FEV1/FVC ratio, consistent with central airway obstruction. Therapeutic flexible bronchoscopy (Olympus BF-1TQ290) was performed under endotracheal intubation. Initial forceps biopsies were followed by transbronchial cryobiopsy with a 1.7-mm cryoprobe, applied for five freeze–adhesion cycles. The mass detached en bloc and was retrieved without complications, resulting in complete airway recanalization and visualization of the right middle and lower lobe bronchi. Histopathology showed interlacing fascicles of bland spindle cells with cigar-shaped nuclei, positive for SMA and desmin and negative for S-100 and CD34, confirming leiomyoma. The patient was discharged the next day. At one-year follow-up, bronchoscopy and CT demonstrated no recurrence, and spirometry normalized. Conclusions: Reports combining flexible bronchoscopy with a 1.7-mm small-caliber cryoprobe for en bloc removal of endobronchial leiomyoma are rare. This technique may represent a minimally invasive option for selected cases, provided careful hemostatic planning and appropriate case selection. Full article

30 pages, 7784 KB  
Review
Muscle Mechanics in Metabolic Health and Longevity: The Biochemistry of Training Adaptations
by Mike Tabone
BioChem 2025, 5(4), 37; https://doi.org/10.3390/biochem5040037 - 30 Oct 2025
Viewed by 2782
Abstract
Skeletal muscle is increasingly recognized as a dynamic endocrine organ whose secretome—particularly myokines—serves as a central hub for the coordination of systemic metabolic health, inflammation, and tissue adaptation. This review integrates molecular, cellular, and physiological evidence to elucidate how myokine signaling translates mechanical and metabolic stimuli from exercise into biochemical pathways that regulate glucose homeostasis, lipid oxidation, mitochondrial function, and immune modulation. We detail the duality and context-dependence of cytokine and myokine actions, emphasizing the roles of key mediators such as IL-6, irisin, SPARC, FGF21, and BAIBA in orchestrating cross-talk between muscle, adipose tissue, pancreas, liver, bone, and brain. Distinctions between resistance and endurance training are explored, highlighting how each modality shapes the myokine milieu and downstream metabolic outcomes through differential activation of AMPK, mTOR, and PGC-1α axes. The review further addresses the hormetic role of reactive oxygen species, the importance of satellite cell dynamics, and the interplay between anabolic and catabolic signaling in muscle quality control and longevity. We discuss the clinical implications of these findings for metabolic syndrome, sarcopenia, and age-related disease, and propose that the remarkable plasticity of skeletal muscle and its secretome offers a powerful, multifaceted target for lifestyle interventions and future therapeutic strategies. An original infographic is presented to visually synthesize the complex network of myokine-mediated muscle–organ interactions underpinning exercise-induced metabolic health. Full article

21 pages, 2770 KB  
Article
Sensory Modality-Dependent Interplay Between Updating and Inhibition Under Increased Working Memory Load: An ERP Study
by Yuxi Luo, Ao Guo, Jinglong Wu and Jiajia Yang
Brain Sci. 2025, 15(11), 1178; https://doi.org/10.3390/brainsci15111178 - 30 Oct 2025
Viewed by 801
Abstract
Background/Objectives: Working memory (WM) performance relies on the coordination of updating and inhibition functions within the central executive system. However, their interaction under varying cognitive loads, particularly across sensory modalities, remains unclear. Methods: This study examined how sensory modality modulates flanker interference under increasing WM loads. Twenty-two participants performed a visual n-back task at three load levels (1-, 2-, and 3-back) while ignoring visual (within-modality) or auditory (cross-modality) flankers. Results: Behaviorally, increased WM load (2- and 3-back) led to reduced accuracy (AC) and prolonged reaction times (RTs) in both conditions. In addition, flanker interference was observed under the 2-back condition in both the visual within-modality (VM) and audiovisual cross-modality (AVM) tasks. However, performance impairment emerged at a lower load (2-back) in the VM condition, whereas in the AVM condition, it only emerged at the highest load (3-back). Significant performance impairment in the AVM condition occurred at higher WM loads, suggesting that greater WM load is required to trigger interference. Event-related potential (ERP) results showed that N200 amplitudes increased significantly for incongruent flankers under the highest WM load (3-back) in the visual within-modality condition, reflecting greater inhibitory demands. In the cross-modality condition, enhanced N200 was not observed across all loads and even reversed at low load (1-back). Moreover, the results also showed that P300 amplitude increased with load in the within-modality condition but decreased in the cross-modality condition. Conclusions: These results demonstrated that the interaction between updating and inhibition is shaped by both WM load and sensory modality, further supporting a sensory modality-specific resource allocation mechanism. The cross-modality configurations may enable more efficient distribution of cognitive resources under high load, reducing interference between concurrent executive demands. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
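The N200 and P300 effects above are reported as amplitude changes. As an illustration of how such component amplitudes are typically quantified, the sketch below averages epoched data over a post-stimulus window; the sampling rate, window bounds, and array layout are assumptions, not the study's analysis settings.

```python
import numpy as np

def mean_amplitude(epochs: np.ndarray, fs: float, t_start_s: float,
                   win_s: tuple[float, float]) -> np.ndarray:
    """Mean voltage per trial in a post-stimulus window.
    epochs: [n_trials, n_samples]; t_start_s: epoch start relative to stimulus (e.g. -0.2 s)."""
    times = t_start_s + np.arange(epochs.shape[1]) / fs
    idx = (times >= win_s[0]) & (times <= win_s[1])
    return epochs[:, idx].mean(axis=1)

fs = 500.0
epochs = np.random.randn(40, int(1.0 * fs))             # 40 synthetic trials, -0.2 to 0.8 s
n200 = mean_amplitude(epochs, fs, -0.2, (0.20, 0.30))   # assumed N200 window
p300 = mean_amplitude(epochs, fs, -0.2, (0.30, 0.50))   # assumed P300 window
print(n200.mean(), p300.mean())
```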

15 pages, 2694 KB  
Article
Seismic Facies Recognition Based on Multimodal Network with Knowledge Graph
by Binpeng Yan, Mutian Li, Rui Pan and Jiaqi Zhao
Appl. Sci. 2025, 15(20), 11087; https://doi.org/10.3390/app152011087 - 16 Oct 2025
Viewed by 861
Abstract
Seismic facies recognition constitutes a fundamental task in seismic data interpretation, playing an essential role in characterizing subsurface geological structures, sedimentary environments, and hydrocarbon reservoir distributions. Conventional approaches primarily depend on expert interpretation, which often introduces substantial subjectivity and operational inefficiency. Although deep learning-based methods have been introduced, most rely solely on unimodal data—namely, seismic images—and encounter challenges such as limited annotated samples and inadequate generalization capability. To overcome these limitations, this study proposes a multimodal seismic facies recognition framework named GAT-UKAN, which integrates a U-shaped Kolmogorov–Arnold Network (U-KAN) with a Graph Attention Network (GAT). This model is designed to accept dual-modality inputs. By fusing visual features with knowledge embeddings at intermediate network layers, the model achieves knowledge-guided feature refinement. This approach effectively mitigates issues related to limited samples and poor generalization inherent in single-modality frameworks. Experiments were conducted on the F3 block dataset from the North Sea. A knowledge graph comprising 47 entities and 12 relation types was constructed to incorporate expert knowledge. The results indicate that GAT-UKAN achieved a Pixel Accuracy of 89.7% and a Mean Intersection over Union of 70.6%, surpassing the performance of both U-Net and U-KAN. Furthermore, the model was transferred to the Parihaka field in New Zealand via transfer learning. After fine-tuning, the predictions exhibited strong alignment with seismic profiles, demonstrating the model’s robustness under complex geological conditions. Although the proposed model demonstrates excellent performance in accuracy and robustness, it has so far been validated only on 2D seismic profiles. Its capability to characterize continuous 3D geological features therefore remains limited. Full article
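Performance above is reported as Pixel Accuracy and Mean Intersection over Union. These are standard segmentation metrics; a compact reference implementation over predicted and ground-truth label maps is shown below, with the six-class facies setup and random arrays used purely for shape.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == truth).mean())

def mean_iou(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> float:
    """Mean per-class intersection-over-union, skipping classes absent from both maps."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Illustrative 6-class facies label maps (random, for shape only):
truth = np.random.randint(0, 6, size=(128, 128))
pred = np.random.randint(0, 6, size=(128, 128))
print(pixel_accuracy(pred, truth), mean_iou(pred, truth, n_classes=6))
```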
