Search Results (95)

Search Parameters:
Keywords = dual-slice

24 pages, 5019 KB  
Article
A Dual Stream Deep Learning Framework for Alzheimer’s Disease Detection Using MRI Sonification
by Nadia A. Mohsin and Mohammed H. Abdul Ameer
J. Imaging 2026, 12(1), 46; https://doi.org/10.3390/jimaging12010046 - 15 Jan 2026
Viewed by 181
Abstract
Alzheimer’s Disease (AD) is a progressive brain disease that affects millions of individuals worldwide. It causes gradual damage to brain cells, leading to memory loss and cognitive dysfunction. Although Magnetic Resonance Imaging (MRI) is widely used in AD diagnosis, existing studies rely solely on visual representations, leaving alternative features unexplored. The objective of this study is to explore whether MRI sonification can provide complementary diagnostic information when combined with conventional image-based methods. We propose a novel dual-stream multimodal framework that integrates 2D MRI slices with their corresponding audio representations. MRI images are transformed into audio signals using multi-scale, multi-orientation Gabor filtering, followed by a Hilbert space-filling curve to preserve spatial locality. The image and sound modalities are processed by a lightweight CNN and YAMNet, respectively, and then fused via logistic regression. The multimodal framework achieved its highest accuracy, 98.2%, in distinguishing AD from Cognitively Normal (CN) subjects, with 94% for AD vs. Mild Cognitive Impairment (MCI) and 93.2% for MCI vs. CN. This work provides a new perspective and highlights the potential of audio transformation of imaging data for feature extraction and classification. Full article
(This article belongs to the Section AI in Imaging)
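The linearization step described above, flattening a 2D slice into a 1D signal while preserving spatial locality, is the part most easily illustrated in code. Below is a minimal sketch of a Hilbert-curve traversal of a square grayscale slice; the `hilbert_d2xy` helper and the power-of-two grid size are assumptions for illustration, and the paper's Gabor filtering and audio rendering steps are omitted.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map a 1D Hilbert-curve index d to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate/flip the quadrant so locality is preserved
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def slice_to_signal(img, order=8):
    """Read a (2**order, 2**order) slice along the Hilbert curve into a 1D signal,
    so that neighboring samples in the signal come from neighboring pixels."""
    n = 1 << order
    assert img.shape == (n, n)
    signal = np.empty(n * n, dtype=np.float32)
    for d in range(n * n):
        x, y = hilbert_d2xy(order, d)
        signal[d] = img[y, x]
    return signal
```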

22 pages, 30575 KB  
Article
Dual-Domain Seismic Data Reconstruction Based on U-Net++
by Enkai Li, Wei Fu, Feng Zhu, Bonan Li, Xiaoping Fan, Tuo Zheng, Peng Zhang, Tiantian Hu, Ziming Zhou, Chongchong Wang and Pengcheng Jiang
Processes 2026, 14(2), 263; https://doi.org/10.3390/pr14020263 - 12 Jan 2026
Viewed by 208
Abstract
Missing seismic data in reflection seismology, which frequently arises from a variety of operational and natural limitations, directly impairs the quality of subsequent imaging and undermines the validity of geological interpretation. Traditional techniques for reconstructing seismic data rely heavily on parameter choices and prior assumptions. Although these methods work well for partially missing traces, reconstructing whole shot gathers remains a difficult and insufficiently studied task. In recent years, data-driven approaches that summarize and generalize patterns from large volumes of data have become increasingly common in seismic data reconstruction research. This work builds on earlier research by proposing an enhanced technique that can reconstruct whole shot gathers as well as partially missing traces. During model training, we first implement a moveout-window selective slicing method for reconstructing missing traces; by building training datasets within a high signal-to-noise ratio (SNR) window, this method improves the model’s capacity for learning. Additionally, a technique is presented for receiver-domain reconstruction of missing shot data, and a dual-domain reconstruction method is used to recover seismic data when data are simultaneously missing in both domains. Full article
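For context on how such reconstruction networks are typically trained, input/target pairs are often built by corrupting complete shot gathers, for example by zeroing a random subset of traces. The sketch below shows only that generic corruption step; the function name and the 30% missing ratio are illustrative and are not the paper's moveout-window selective slicing.

```python
import numpy as np

def decimate_traces(shot_gather, missing_ratio=0.3, rng=None):
    """Build a training pair (corrupted, clean) by zeroing a random subset of
    traces (columns) in a shot gather of shape (n_time_samples, n_traces)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_traces = shot_gather.shape[1]
    n_missing = int(missing_ratio * n_traces)
    missing = rng.choice(n_traces, size=n_missing, replace=False)
    corrupted = shot_gather.copy()
    corrupted[:, missing] = 0.0
    return corrupted, shot_gather
```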

28 pages, 4255 KB  
Article
Segmentation-Guided Hybrid Deep Learning for Pulmonary Nodule Detection and Risk Prediction from Multi-Cohort CT Images
by Gomavarapu Krishna Subramanyam, Kundojjala Srinivas, Veera Venkata Raghunath Indugu, Dedeepya Sai Gondi and Sai Krishna Gaduputi Subbammagari
Diseases 2026, 14(1), 21; https://doi.org/10.3390/diseases14010021 - 6 Jan 2026
Viewed by 320
Abstract
Background: Lung cancer screening using low-dose computed tomography (LDCT) demands not only early pulmonary nodule detection but also accurate estimation of malignancy risk. This remains challenging due to subtle nodule appearances, the large number of CT slices per scan, and variability in radiological interpretation. The objective of this study is to develop a unified computer-aided detection and diagnosis framework that improves both nodule localization and malignancy assessment while maintaining clinical reliability. Methods: We propose Seg-CADe-CADx, a dual-stage deep learning framework that integrates segmentation-guided detection and malignancy classification. In the first stage, a segmentation-guided detector with a lightweight 2.5D refinement head is employed to enhance nodule localization accuracy, particularly for small nodules with diameters of 6 mm or less. In the second stage, a hybrid 3D DenseNet–Swin Transformer classifier is used for malignancy prediction, incorporating probability calibration to improve the reliability of risk estimates. Results: The proposed framework was evaluated on established public benchmarks. On the LUNA16 dataset, the system achieved a competitive performance metric (CPM) of 0.944 for nodule detection. On the LIDC-IDRI dataset, the malignancy classification module achieved a ROC-AUC of 0.988, a PR-AUC of 0.947, and a specificity of 97.8% at 95% sensitivity. Calibration analysis further demonstrated strong agreement between predicted probabilities and true malignancy likelihoods, with an expected calibration error of 0.209 and a Brier score of 0.083. Conclusions: The results demonstrate that hybrid segmentation-guided CNN–Transformer architectures can effectively improve both diagnostic accuracy and clinical reliability in lung cancer screening. By combining precise nodule localization with calibrated malignancy risk estimation, the proposed framework offers a promising tool for supporting radiologists in LDCT-based lung cancer assessment. Full article
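Because the abstract quotes an expected calibration error and a Brier score for the malignancy head, a brief sketch of how those two calibration metrics are commonly computed for binary probabilities may be useful; the equal-width binning and the choice of 10 bins are assumptions, not details from the paper.

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared difference between predicted probability and the 0/1 outcome."""
    return float(np.mean((probs - labels) ** 2))

def expected_calibration_error(probs, labels, n_bins=10):
    """Bin predictions by confidence and average |accuracy - confidence| per bin,
    weighted by the fraction of samples falling in each bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(labels[mask].mean() - probs[mask].mean())
    return float(ece)
```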

33 pages, 3256 KB  
Article
DMF-Net: A Dynamic Fusion Attention Mechanism-Based Model for Coronary Artery Segmentation
by GuangKun Ma, Linghui Kong, Mo Guan, Yanhong Meng and Deyan Chen
Symmetry 2025, 17(12), 2111; https://doi.org/10.3390/sym17122111 - 8 Dec 2025
Viewed by 396
Abstract
Coronary artery segmentation in CTA images remains challenging due to blurred vessel boundaries, unclear structural details, and sparse vascular distributions. To address these limitations, we propose DMF-Net (Dual-path Multi-scale Fusion Network), a novel multi-scale feature fusion architecture based on UNet++. The network incorporates three key innovations: First, a Dynamic Buffer–Bottleneck–Buffer Layer (DBBLayer) in shallow encoding stages enhances the extraction and preservation of fine vascular structures. Second, an Axial Local–global Hybrid Attention Module (ALHA) in deep encoding stages employs a dual-path mechanism to simultaneously capture vessel trajectories and small branches through integrated global and local pathways. Third, a 2.5D slice strategy improves trajectory capture by leveraging contextual information from adjacent slices. Additionally, a composite loss function combining Dice loss and binary cross-entropy jointly optimizes vascular connectivity and boundary precision. Validated on the ImageCAS dataset, DMF-Net achieves superior performance compared to state-of-the-art methods: 89.45% Dice Similarity Coefficient (DSC) (+3.67% vs. UNet++), 3.85 mm Hausdorff Distance (HD, 49.1% reduction), and 0.95 mm Average Surface Distance (ASD, 42.4% improvement). Subgroup analysis reveals particularly strong performance in clinically challenging scenarios. For small vessels (<2 mm diameter), DMF-Net achieves 85.23 ± 1.34% DSC versus 78.67 ± 1.89% for UNet++ (+6.56%, p < 0.001). At complex bifurcations, HD improves from 9.34 ± 2.15 mm to 4.67 ± 1.28 mm (50.0% reduction, p < 0.001). In low-contrast regions (HU difference < 100), boundary precision (ASD) improves from 2.15 ± 0.54 mm to 1.08 ± 0.32 mm (49.8% improvement, p < 0.001). All improvements are statistically significant (p < 0.001). Full article
(This article belongs to the Section Computer)
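The composite loss mentioned above, Dice plus binary cross-entropy, is a standard construction; the following PyTorch sketch shows one common form, with the equal 0.5 weighting and the smoothing constant chosen arbitrarily rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, target, smooth=1.0, bce_weight=0.5):
    """Soft Dice (encourages region overlap and connectivity) combined with
    binary cross-entropy (encourages per-voxel boundary precision).
    `target` is a float tensor of 0/1 labels with the same shape as `logits`."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    dice = (2.0 * intersection + smooth) / (probs.sum() + target.sum() + smooth)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce_weight * bce + (1.0 - bce_weight) * (1.0 - dice)
```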

25 pages, 7241 KB  
Article
Ship Target Feature Detection of Airborne Scanning Radar Based on Trajectory Prediction Integration
by Fan Zhang, Zhenghuan Xia, Shichao Jin, Xin Liu, Zhilong Zhao, Chuang Zhang, Han Fu, Kang Xing, Zongqiang Liu, Changhu Xue, Tao Zhang and Zhiying Cui
Remote Sens. 2025, 17(23), 3858; https://doi.org/10.3390/rs17233858 - 28 Nov 2025
Viewed by 376
Abstract
In order to address the challenges faced by airborne scanning radars in detecting maritime ship targets, such as low signal-to-clutter ratios and the strong spatio-temporal non-stationarity of sea clutter, this paper proposes a multi-feature detection method based on trajectory prediction integration. First, the Margenau–Hill Spectrogram (MHS) is employed for time–frequency analysis and uniformization processing. The extraction of features is conducted across three dimensions: energy intensity, spatial clustering, and distributional disorder. The metrics employed in this study include ridge integral (RI), maximum size of connected regions (MS), and scanning slice time–frequency entropy (SSTFE). Feature normalization is achieved via reference units to eliminate dynamic range variations. Secondly, a trajectory prediction matrix is constructed to correlate target cross-scan distance variations. When combined with a scan weight matrix that dynamically adjusts multi-frame contributions, this approach enables effective accumulation of target features across multiple scans. Finally, the greedy convex hull algorithm is used to complete target detection with a controllable false alarm rate. The validation process employs real-world data from a C-band dual-polarization airborne scanning radar. The findings indicate a 36.11% enhancement in the number of successful detections in comparison to the conventional single-frame three-feature detection method. Among the extant scanning algorithms, this approach evinces optimal feature space separability and detection performance, thus offering a novel pathway for maritime target detection using airborne scanning radars. Full article
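Of the three features, the scanning slice time–frequency entropy is the most self-contained to illustrate: it measures how disordered the energy distribution of a time–frequency slice is, with diffuse clutter tending toward high entropy and a concentrated target ridge toward low entropy. The sketch below is a generic Shannon-entropy measure of that kind; the paper's exact SSTFE definition and normalization are not given in the abstract.

```python
import numpy as np

def tf_entropy(tf_slice, eps=1e-12):
    """Shannon entropy of a time-frequency magnitude slice, treated as a
    probability distribution after normalization. Concentrated energy
    (e.g., a target ridge) gives low entropy; diffuse clutter gives high entropy."""
    p = np.abs(tf_slice).ravel()
    p = p / (p.sum() + eps)
    return float(-np.sum(p * np.log(p + eps)))
```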

14 pages, 5161 KB  
Article
The Synaptic and Intrinsic Cellular Mechanisms of Persistent Firing in Neurogliaform Cells
by Shiyuan Chen, Xiaoshan Chen, Jianwen Zhou, Jinzhao Wang, Kaiyuan Li, Wenyuan Xie, Cheng Long and Gangyi Wu
Biomolecules 2025, 15(11), 1603; https://doi.org/10.3390/biom15111603 - 15 Nov 2025
Cited by 1 | Viewed by 715
Abstract
While persistent firing in glutamatergic neurons has been well-characterized, the intrinsic and synaptic mechanisms driving this phenomenon in neurogliaform cells (NGFCs), a subtype of GABAergic interneurons, remain unclear. This study investigates the mechanisms underlying persistent firing in hippocampal NGFCs. Whole-cell current-clamp recordings were performed on acute brain slices from C57BL/6J mice to examine the electrophysiological properties of NGFCs in the hippocampal stratum lacunosum-moleculare (SLM). Pharmacological interventions, including T-type calcium channel blocker ML218 and 5-hydroxytryptamine (5-HT) receptor antagonist olanzapine, were used to dissect the mechanisms of persistent firing. Biocytin labeling and confocal microscopy were employed to confirm neuronal morphology and location. The study revealed that persistent firing in NGFCs is induced by a long-lasting delayed afterdepolarization (L-ADP), which depends on T-type calcium channels (intrinsic mechanism) and is modulated by 5-HT receptors (synaptic mechanism). Persistent firing was observed in 62.96% of SLM neurons and was abolished by ML218 or olanzapine. The findings bridge a gap in understanding how inhibitory interneurons contribute to memory processes. The dual-mechanism framework (T-type channels and 5-HT receptors) aligns with prior work on glutamatergic systems but highlights unique features of GABAergic persistent firing. These insights advance the understanding of inhibitory circuit dynamics and their potential role in cognitive functions, paving the way for further research into interneuron-specific memory encoding. Full article

9 pages, 1278 KB  
Article
Coronary Calcium Scoring as Prediction of Coronary Artery Diseases with Low-Dose Dual-Source CT
by Enrico Schwarz, Valentina Tambè, Silvia De Simoni, Roberto Moltrasi, Matteo Magazzeni, Elena Ciortan, Stefano Bentivegna, Anastasia Esseridou and Francesco Secchi
J. Cardiovasc. Dev. Dis. 2025, 12(11), 425; https://doi.org/10.3390/jcdd12110425 - 27 Oct 2025
Viewed by 1397
Abstract
The aim of this paper is to evaluate the correlation between the coronary calcium score (CCS) and coronary artery disease (CAD) in patients who underwent coronary CT angiography (CTA). Four hundred and five patients who underwent a coronary CT with CCS analysis were considered for this retrospective study. Coronary CTA was performed using a dual-source (256-slice) CT scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany). Before injecting the contrast medium, non-contrasted cardiac CT was performed in a longitudinal scan field from the tracheal carina down to the diaphragm. The corresponding images for calcium scoring were reconstructed with a slice width of 1.5 mm and a slice interval of 1 mm, and the tube voltage was 120 kVp. The total calcium score was calculated using dedicated software. Calcifications, defined by the Agatston method as lesions with an area greater than 1 mm² and peak intensity greater than 130 Hounsfield Units, were automatically identified and marked with color by the software. From the radiological report, the degree of coronary stenosis was retrieved. A score of 1 corresponds to the absence of stenosis, a score of 2 to mild stenosis (<50%), and a score of 3 to moderate/severe stenosis (>50%). The total coronary gravity score (CGS) for each patient was calculated by summing the score of each coronary artery. The Spearman test was used for correlation. Out of the 405 patients, 217 were male. The mean ± standard deviation age was 72 ± 11 years. The overall calcium burden corresponded to an Agatston score of 393 ± 709. A positive correlation between CCS and CGS was found (r = 0.835, p < 0.001). A ROC curve with AUC 0.917 (p ≤ 0.001) was obtained. The optimal cutoff point of the calcium score for discriminating CGS < 2 was 112, yielding sensitivity of 90% and specificity of 81%. This study confirms the important relationship between the coronary artery calcium score and the presence and extent of coronary artery disease. Full article
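The Agatston rule quoted above (lesions larger than 1 mm² with peak attenuation above 130 HU, weighted by peak density) can be sketched per slice as follows. The scipy-based connected-component labeling and the per-slice summation are a simplification of the dedicated vendor software actually used; the standard Agatston density weights (1–4) are assumed.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(slice_hu, pixel_area_mm2, min_area_mm2=1.0, threshold_hu=130):
    """Approximate Agatston score for one non-contrast CT slice: each connected
    region above 130 HU and larger than 1 mm^2 contributes area x density weight,
    where the weight is 1/2/3/4 for peak HU in [130,200)/[200,300)/[300,400)/>=400."""
    labels, n_lesions = ndimage.label(slice_hu >= threshold_hu)
    score = 0.0
    for i in range(1, n_lesions + 1):
        region = labels == i
        area = region.sum() * pixel_area_mm2
        if area <= min_area_mm2:
            continue
        peak = slice_hu[region].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += area * weight
    return score   # the total score is the sum over all scored slices
```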

20 pages, 7466 KB  
Article
Feasibility Study of CLIP-Based Key Slice Selection in CT Images and Performance Enhancement via Lesion- and Organ-Aware Fine-Tuning
by Kohei Yamamoto and Tomohiro Kikuchi
Bioengineering 2025, 12(10), 1093; https://doi.org/10.3390/bioengineering12101093 - 10 Oct 2025
Viewed by 1222
Abstract
Large-scale medical visual question answering (MedVQA) datasets are critical for training and deploying vision–language models (VLMs) in radiology. Ideally, such datasets should be automatically constructed from routine radiology reports and their corresponding images. However, no existing method directly links free-text findings to the most relevant 2D slices in volumetric computed tomography (CT) scans. To address this gap, a contrastive language–image pre-training (CLIP)-based key slice selection framework is proposed, which matches each sentence to its most informative CT slice via text–image similarity. This experiment demonstrates that models pre-trained in the medical domain already achieve competitive slice retrieval accuracy and that fine-tuning them on a small dual-supervised dataset that imparts both lesion- and organ-level awareness yields further gains. In particular, the best-performing model (fine-tuned BiomedCLIP) achieved a Top-1 accuracy of 51.7% for lesion-aware slice retrieval, representing a 20-point improvement over baseline CLIP, and was accepted by radiologists in 56.3% of cases. By automating the report-to-slice alignment, the proposed method facilitates scalable, clinically realistic construction of MedVQA resources. Full article
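The core retrieval step, matching a report sentence to its most informative CT slice by text–image similarity, reduces to a cosine-similarity argmax over precomputed embeddings, as in the minimal sketch below; extracting the embeddings with a CLIP-style encoder such as BiomedCLIP is assumed to have happened upstream.

```python
import numpy as np

def select_key_slice(slice_embeddings, sentence_embedding):
    """Return the index of the CT slice whose image embedding is most similar
    (cosine similarity) to the sentence embedding, plus all similarity scores.
    slice_embeddings: (n_slices, dim); sentence_embedding: (dim,)."""
    img = slice_embeddings / np.linalg.norm(slice_embeddings, axis=1, keepdims=True)
    txt = sentence_embedding / np.linalg.norm(sentence_embedding)
    sims = img @ txt
    return int(np.argmax(sims)), sims
```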

26 pages, 1799 KB  
Review
Mechanotransduction-Epigenetic Coupling in Pulmonary Regeneration: Multifunctional Bioscaffolds as Emerging Tools
by Jing Wang and Anmin Xu
Pharmaceuticals 2025, 18(10), 1487; https://doi.org/10.3390/ph18101487 - 2 Oct 2025
Viewed by 1261
Abstract
Pulmonary fibrosis (PF) is a progressive and fatal lung disease characterized by irreversible alveolar destruction and pathological extracellular matrix (ECM) deposition. Currently approved agents (pirfenidone and nintedanib) slow functional decline but do not reverse established fibrosis or restore functional alveoli. Multifunctional bioscaffolds present a promising therapeutic strategy through targeted modulation of critical cellular processes, including proliferation, migration, and differentiation. This review synthesizes recent advances in scaffold-based interventions for PF, with a focus on their dual mechano-epigenetic regulatory functions. We delineate how scaffold properties (elastic modulus, stiffness gradients, dynamic mechanical cues) direct cell fate decisions via mechanotransduction pathways, exemplified by focal adhesion–cytoskeleton coupling. Critically, we highlight how pathological mechanical inputs establish and perpetuate self-reinforcing epigenetic barriers to regeneration through aberrant chromatin states. Furthermore, we examine scaffolds as platforms for precision epigenetic drug delivery, particularly controlled release of inhibitors targeting DNA methyltransferases (DNMTi) and histone deacetylases (HDACi) to disrupt this mechano-reinforced barrier. Evidence from PF murine models and ex vivo lung slice cultures demonstrates scaffold-mediated remodeling of the fibrotic niche, with key studies reporting substantial reductions in collagen deposition and significant increases in alveolar epithelial cell markers following intervention. These quantitative outcomes highlight enhanced alveolar epithelial plasticity and upregulation of antifibrotic gene networks. Emerging integration of stimuli-responsive biomaterials, CRISPR/dCas9-based epigenetic editors, and AI-driven design to enhance scaffold functionality is discussed. Collectively, multifunctional bioscaffolds hold significant potential for clinical translation by uniquely co-targeting mechanotransduction and epigenetic reprogramming. Future work will need to resolve persistent challenges, including the erasure of pathological mechanical memory and precise spatiotemporal control of epigenetic modifiers in vivo, to unlock their full therapeutic potential. Full article
(This article belongs to the Section Pharmacology)

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Cited by 1 | Viewed by 1503
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment. 
This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions. Full article
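The abstract mixes absolute accuracies with relative losses, which is easy to misread; the small sketch below reproduces the quoted loss figures as relative drops from the laboratory (PRDA) accuracies, using only numbers stated above.

```python
# Relative accuracy loss from idealized (PRDA) to realistic (LVRT) evaluation,
# computed only from values quoted in the abstract.
prda_vs_lvrt = {
    "CNN": (100.0, 81.2),
    "FNN": (100.0, 81.1),
    "Naive Bayes": (89.0, 58.0),
}
for name, (lab, real) in prda_vs_lvrt.items():
    loss = 100.0 * (lab - real) / lab
    print(f"{name}: {loss:.1f}% relative loss")   # 18.8%, 18.9%, 34.8%
```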

18 pages, 3029 KB  
Article
Polarization and Depolarization Current Characteristics of Cables at Different Water Immersion Stages
by Yuyang Jiao, Jingjiang Qu, Yingqiang Shang, Jingyue Ma, Jiren Chen, Jun Xiong and Zepeng Lv
Energies 2025, 18(19), 5094; https://doi.org/10.3390/en18195094 - 25 Sep 2025
Viewed by 823
Abstract
To address the insulation degradation caused by moisture intrusion due to damage to the outer sheath of power cables, this study systematically analyzed the charge transport characteristics of XLPE cables at different water immersion stages using polarization/depolarization current (PDC) measurements. An evaluation method for assessing water immersion levels was proposed based on conductivity, charge density, and charge mobility. Experiments were conducted on commercial 10 kV XLPE cable samples subjected to accelerated water immersion for durations ranging from 0 to 30 days. PDC data were collected via a custom-built three-electrode measurement platform. The results indicated that with increasing immersion time, the decay rate of polarization/depolarization currents slowed, the steady-state current amplitude rose significantly, and the DC conductivity increased from 1.86 × 10−17 S/m to 2.70 × 10−15 S/m—a nearly two-order-of-magnitude increase. The Pearson correlation coefficient between charge mobility and immersion time reached 0.96, indicating a strong positive correlation. Additional tests on XLPE insulation slices showed a rapid rise in conductivity during early immersion, a decrease in breakdown voltage from 93.64 kV to 66.70 kV, and enhanced space charge accumulation under prolonged immersion and higher electric fields. The proposed dual-parameter criterion (conductivity and charge mobility) effectively distinguishes between early and advanced stages of cable water immersion, offering a practical approach for non-destructive assessment of insulation conditions and early detection of moisture intrusion, with significant potential for application in predictive maintenance and insulation diagnostics. Full article

19 pages, 2838 KB  
Article
Cascaded Spatial and Depth Attention UNet for Hippocampus Segmentation
by Zi-Zheng Wei, Bich-Thuy Vu, Maisam Abbas and Ran-Zan Wang
J. Imaging 2025, 11(9), 311; https://doi.org/10.3390/jimaging11090311 - 11 Sep 2025
Viewed by 958
Abstract
This study introduces a novel enhancement to the UNet architecture, termed Cascaded Spatial and Depth Attention U-Net (CSDA-UNet), tailored specifically for precise hippocampus segmentation in T1-weighted brain MRI scans. The proposed architecture integrates two key attention mechanisms: a Spatial Attention (SA) module, which refines spatial feature representations by producing attention maps from the deepest convolutional layer and modulating the matching object features; and an Inter-Slice Attention (ISA) module, which enhances volumetric uniformity by integrating related information from adjacent slices, thereby reinforcing the model’s capacity to capture inter-slice dependencies. The CSDA-UNet is assessed using hippocampal segmentation data derived from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and Decathlon, two benchmarks widely employed in neuroimaging research. The proposed model outperforms state-of-the-art methods across multiple quantitative metrics, achieving a Dice coefficient of 0.9512 and an IoU score of 0.9345 on ADNI, and Dice scores of 0.9907/0.8963 (train/validation) and IoU scores of 0.9816/0.8132 (train/validation) on the Decathlon dataset. These improvements underscore the efficacy of the proposed dual-attention framework in accurately delineating small, asymmetrical structures such as the hippocampus, while maintaining computational efficiency suitable for clinical deployment. Full article
(This article belongs to the Section Medical Imaging)
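As a rough illustration of a spatial-attention gate of the kind described above (not the authors' exact SA or ISA modules, whose internals are not spelled out in the abstract), a feature map can be reweighted by a single-channel attention map as follows.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial-attention gate: squeeze the channels to one attention
    map with a convolution, apply a sigmoid, and reweight the input features."""
    def __init__(self, in_channels, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                      # x: (B, C, H, W)
        attn = torch.sigmoid(self.conv(x))     # (B, 1, H, W), values in (0, 1)
        return x * attn                        # broadcast over channels
```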

20 pages, 914 KB  
Article
LR-SQL: A Supervised Fine-Tuning Method for Text2SQL Tasks Under Low-Resource Scenarios
by Wuzhenghong Wen, Yongpan Zhang, Su Pan, Yuwei Sun, Pengwei Lu and Cheng Ding
Electronics 2025, 14(17), 3489; https://doi.org/10.3390/electronics14173489 - 31 Aug 2025
Viewed by 1792
Abstract
In supervised fine-tuning (SFT) for Text2SQL tasks, particularly for databases with numerous tables, encoding schema features requires excessive tokens, escalating GPU resource requirements during fine-tuning. To address this, we propose LR-SQL, a general dual-model SFT framework comprising a schema linking model and an SQL generation model. At the core of our framework lies the schema linking model, which is trained on a novel downstream task termed slice-based related table filtering. This task dynamically partitions a database into adjustable slices of tables and sequentially evaluates the relevance of each slice to the input query, thereby reducing token consumption per iteration. However, slicing fragments the database, impairing the model’s ability to comprehend the complete schema. We therefore integrate Chain of Thought (CoT) into training, enabling the model to reconstruct the full database context from discrete slices and thereby enhancing inference fidelity. Ultimately, the SQL generation model uses the result from the schema linking model to generate the final SQL. Extensive experiments demonstrate that LR-SQL reduces total GPU memory usage by 40% compared to baseline SFT methods, with only a 2% drop in table prediction accuracy for the schema linking task and a negligible 0.6% decrease in overall Text2SQL Execution Accuracy. Full article
(This article belongs to the Special Issue Advances in Data Security: Challenges, Technologies, and Applications)
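The slice-based related table filtering task can be pictured as a loop over fixed-size groups of tables, as in the schematic below; `is_relevant` stands in for the fine-tuned schema linking model's per-slice judgement and is a hypothetical callable, not part of LR-SQL's published interface.

```python
def partition_schema(tables, slice_size):
    """Split a large schema into fixed-size slices of tables so that each
    prompt to the schema linking model stays within its token budget."""
    return [tables[i:i + slice_size] for i in range(0, len(tables), slice_size)]

def link_schema(question, tables, slice_size, is_relevant):
    """Score each slice sequentially and collect the tables judged relevant;
    only these are passed on to the SQL generation model."""
    related = []
    for table_slice in partition_schema(tables, slice_size):
        related.extend(is_relevant(question, table_slice))  # subset of table_slice
    return related
```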

18 pages, 16540 KB  
Article
E-CMCA and LSTM-Enhanced Framework for Cross-Modal MRI-TRUS Registration in Prostate Cancer
by Ciliang Shao, Ruijin Xue and Lixu Gu
J. Imaging 2025, 11(9), 292; https://doi.org/10.3390/jimaging11090292 - 27 Aug 2025
Cited by 1 | Viewed by 931
Abstract
Accurate registration of MRI and TRUS images is crucial for effective prostate cancer diagnosis and biopsy guidance, yet modality differences and non-rigid deformations pose significant challenges, especially in dynamic imaging. This study presents a novel cross-modal MRI-TRUS registration framework, leveraging a dual-encoder architecture with an Enhanced Cross-Modal Channel Attention (E-CMCA) module and an LSTM-Based Spatial Deformation Modeling Module. The E-CMCA module efficiently extracts and integrates multi-scale cross-modal features, while the LSTM-Based Spatial Deformation Modeling Module models temporal dynamics by processing depth-sliced 3D deformation fields as sequential data. A VecInt operation ensures smooth, diffeomorphic transformations, and a FuseConv layer enhances feature integration for precise alignment. Experiments on the μ-RegPro dataset from the MICCAI 2023 Challenge demonstrate that our model achieves a DSC of 0.865, an RDSC of 0.898, a TRE of 2.278 mm, and an RTRE of 1.293, surpassing state-of-the-art methods and performing robustly in both static 3D and dynamic 4D registration tasks. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
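For reference, the two headline metrics above, DSC and TRE, can be computed from a predicted mask and matched landmark pairs as in this small sketch; isotropic voxel spacing is assumed for the landmark distances.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def target_registration_error(moved_pts, fixed_pts, spacing_mm=1.0):
    """Mean Euclidean distance (mm) between corresponding landmarks after
    registration; points are (N, 3) arrays in voxel coordinates."""
    diffs = (np.asarray(moved_pts) - np.asarray(fixed_pts)) * spacing_mm
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```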

26 pages, 19263 KB  
Article
An Adaptive Dual-Channel Underwater Target Detection Method Based on a Vector Cross-Trispectrum Diagonal Slice
by Weixuan Zhang, Yu Chen, Qiang Bian, Yuyao Liu, Yan Liang and Zhou Meng
J. Mar. Sci. Eng. 2025, 13(9), 1628; https://doi.org/10.3390/jmse13091628 - 26 Aug 2025
Cited by 1 | Viewed by 744
Abstract
This paper introduces a method for detecting weak line spectrum signals in dynamic, non-Gaussian marine noise using a single vector hydrophone. The trispectrum diagonal slice is employed to extract coupled line spectrum features, enabling the detection of line spectra with independent frequencies and phases while effectively suppressing Gaussian noise. By constructing a cross-trispectrum diagonal slice spectrum from the hydrophone’s sound pressure and composite particle velocity, the method leverages coherence gain to enhance the signal-to-noise ratio (SNR). Furthermore, a discriminator based on the cross-coherence function of pressure and velocity is proposed, which utilizes a dynamic threshold to adaptively and in real-time select either the vector cross-trispectrum diagonal slice (V-TriD) or the conventional energy detection (ED) as the optimal detection channel for incoming signal. The feasibility and effectiveness of this method were validated through simulations and sea trial data from the South China Sea. Experimental results demonstrate that the proposed algorithm can effectively detect the target signal, achieving an SNR improvement of 3 dB at the target frequency and an average reduction in broadband noise energy of 1–2 dB compared to traditional energy spectrum detection. The proposed algorithm exhibits computational efficiency, adaptability, and robustness, making it well suited for real-time underwater target detection in critical applications, including harbor security, waterway monitoring, and marine bioacoustic studies. Full article
(This article belongs to the Section Ocean Engineering)
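For intuition, a diagonal slice of the trispectrum is often estimated segment-wise from FFTs; the sketch below uses the simple moment form T(f) ≈ E[X(f)^3 X*(3f)] for a single scalar channel, a heavily simplified stand-in for the paper's vector cross-trispectrum built from sound pressure and composite particle velocity.

```python
import numpy as np

def trispectrum_diagonal_slice(x, nfft=1024, noverlap=512):
    """Segment-averaged moment estimate of the trispectrum diagonal slice,
    T(f) ~ E[X(f)**3 * conj(X(3f))], via the direct FFT method with a Hann window.
    Only frequency bins k < nfft//3 are kept, since 3k must stay in range."""
    step = nfft - noverlap
    k = np.arange(nfft // 3)
    acc = np.zeros(nfft // 3, dtype=complex)
    n_seg = 0
    for start in range(0, len(x) - nfft + 1, step):
        X = np.fft.fft(x[start:start + nfft] * np.hanning(nfft))
        acc += X[k] ** 3 * np.conj(X[3 * k])
        n_seg += 1
    return acc / max(n_seg, 1)
```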
