Search Results (150)

Search Parameters:
Keywords = slice attention

18 pages, 1597 KB  
Article
A Comparative Analysis of SegFormer, FabE-Net and VGG-UNet Models for the Segmentation of Neural Structures on Histological Sections
by Igor Makarov, Elena Koshevaya, Alina Pechenina, Galina Boyko, Anna Starshinova, Dmitry Kudlay, Taiana Makarova and Lubov Mitrofanova
Diagnostics 2025, 15(18), 2408; https://doi.org/10.3390/diagnostics15182408 - 22 Sep 2025
Viewed by 185
Abstract
Background: Segmenting nerve fibres in histological images is a challenging task because of the high variability in tissue appearance. Modern neural network architectures, including U-Net and transformers, demonstrate varying degrees of effectiveness in this area. The aim of this study is to conduct a comparative analysis of the SegFormer, VGG-UNet, and FabE-Net models in terms of segmentation quality and speed. Methods: The training sample consisted of more than 75,000 pairs of images of different tissues (original slice and corresponding mask), scaled from 1024 × 1024 to 224 × 224 pixels to optimise computations. Three neural network architectures were used: the classic VGG-UNet, FabE-Net with attention and global context perception blocks, and the SegFormer transformer model. For an objective assessment of the quality of the models, expert validation was carried out with the participation of four independent pathologists, who evaluated the quality of segmentation according to specified criteria. Quality metrics (precision, recall, F1-score, accuracy) were calculated as averages based on the assessments of all experts, which made it possible to take into account variability in interpretation and increase the reliability of the results. Results: SegFormer reached stable convergence of the loss function faster than the other models—by the 20–30th epoch, compared to 45–60 epochs for VGG-UNet and FabE-Net. Despite taking longer to train per epoch, SegFormer produced the best segmentation quality, with the following metrics: precision 0.84, recall 0.99, F1-score 0.91 and accuracy 0.89. It also annotated a complete histological section in the fastest time. Visual analysis revealed that, compared to the other models, which tended to produce incomplete or excessive segmentation, SegFormer highlighted nerve structures more accurately and completely. Conclusions: Using attention mechanisms in SegFormer compensates for morphological variability in tissues, resulting in faster and higher-quality segmentation. Image scaling does not impair training quality while significantly accelerating computational processes. These results confirm the potential of SegFormer for practical use in digital pathology, while also highlighting the need for high-precision, immunohistochemistry-informed labelling to improve segmentation accuracy. Full article
(This article belongs to the Special Issue Pathology and Diagnosis of Neurological Disorders, 2nd Edition)
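
The expert-averaged evaluation described in this abstract is straightforward to reproduce in principle. Below is a minimal Python sketch that computes precision, recall, F1-score and accuracy for a binary segmentation and averages them over several reference annotations; it assumes each expert provides a binary mask, and all function names are illustrative rather than the authors' code.

```python
# Hypothetical sketch: averaging segmentation metrics over several expert
# annotations (names and data layout are illustrative assumptions).
import numpy as np

def binary_metrics(pred: np.ndarray, mask: np.ndarray) -> dict:
    """Precision, recall, F1 and accuracy for one binary prediction/mask pair."""
    tp = np.sum((pred == 1) & (mask == 1))
    fp = np.sum((pred == 1) & (mask == 0))
    fn = np.sum((pred == 0) & (mask == 1))
    tn = np.sum((pred == 0) & (mask == 0))
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

def expert_averaged_metrics(pred: np.ndarray, expert_masks: list) -> dict:
    """Average each metric over the reference masks of independent experts."""
    per_expert = [binary_metrics(pred, m) for m in expert_masks]
    return {k: float(np.mean([d[k] for d in per_expert])) for k in per_expert[0]}
```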

19 pages, 2838 KB  
Article
Cascaded Spatial and Depth Attention UNet for Hippocampus Segmentation
by Zi-Zheng Wei, Bich-Thuy Vu, Maisam Abbas and Ran-Zan Wang
J. Imaging 2025, 11(9), 311; https://doi.org/10.3390/jimaging11090311 - 11 Sep 2025
Viewed by 343
Abstract
This study introduces a novel enhancement to the UNet architecture, termed Cascaded Spatial and Depth Attention U-Net (CSDA-UNet), tailored specifically for precise hippocampus segmentation in T1-weighted brain MRI scans. The proposed architecture integrates two key attention mechanisms: a Spatial Attention (SA) module, which refines spatial feature representations by producing attention maps from the deepest convolutional layer and modulating the matching object features; and an Inter-Slice Attention (ISA) module, which enhances volumetric uniformity by integrating related information from adjacent slices, thereby reinforcing the model’s capacity to capture inter-slice dependencies. The CSDA-UNet is assessed using hippocampal segmentation data derived from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and Decathlon, two benchmark datasets widely employed in neuroimaging research. Across multiple quantitative metrics, the proposed model outperforms state-of-the-art methods, achieving a Dice coefficient of 0.9512 and an IoU score of 0.9345 on ADNI, and Dice scores of 0.9907/0.8963 (train/validation) and IoU scores of 0.9816/0.8132 (train/validation) on the Decathlon dataset. These improvements underscore the efficacy of the proposed dual-attention framework in accurately delineating small, asymmetrical structures such as the hippocampus, while maintaining computational efficiency suitable for clinical deployment. Full article
(This article belongs to the Section Medical Imaging)
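
As a rough illustration of the inter-slice idea behind the ISA module, the PyTorch sketch below re-weights a centre slice's features using its neighbouring slices; the module name, shapes and internals are assumptions for illustration, not the CSDA-UNet implementation.

```python
# Minimal sketch of an inter-slice attention idea (illustrative, not the paper's code).
import torch
import torch.nn as nn

class InterSliceAttention(nn.Module):
    """Re-weight a centre slice's features using its neighbouring slices."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, centre: torch.Tensor, neighbours: torch.Tensor) -> torch.Tensor:
        # centre: (B, C, H, W); neighbours: (B, S, C, H, W) for S adjacent slices
        b, s, c, h, w = neighbours.shape
        q = self.query(centre).flatten(2)                      # (B, C, HW)
        k = self.key(neighbours.reshape(b * s, c, h, w))       # (B*S, C, H, W)
        k = k.reshape(b, s, c, -1)                             # (B, S, C, HW)
        # similarity between centre and each neighbour, per spatial location
        sim = (q.unsqueeze(1) * k).sum(dim=2) / c ** 0.5       # (B, S, HW)
        weights = torch.softmax(sim, dim=1).unsqueeze(2)       # (B, S, 1, HW)
        v = neighbours.reshape(b, s, c, -1)                    # (B, S, C, HW)
        fused = (weights * v).sum(dim=1).reshape(b, c, h, w)   # (B, C, H, W)
        return centre + fused                                  # residual fusion
```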

38 pages, 13994 KB  
Article
Post-Heuristic Cancer Segmentation Refinement over MRI Images and Deep Learning Models
by Panagiotis Christakakis and Eftychios Protopapadakis
AI 2025, 6(9), 212; https://doi.org/10.3390/ai6090212 - 2 Sep 2025
Viewed by 770
Abstract
Lately, deep learning methods have greatly improved the accuracy of brain-tumor segmentation, yet slice-wise inconsistencies still limit reliable use in clinical practice. While volume-aware 3D convolutional networks achieve high accuracy, their memory footprint and inference time may limit clinical adoption. This study proposes a resource-conscious pipeline for lower-grade-glioma delineation in axial FLAIR MRI that combines a 2D Attention U-Net with a guided post-processing refinement step. Two segmentation backbones, a vanilla U-Net and an Attention U-Net, are trained on 110 TCGA-LGG axial FLAIR patient volumes under various loss functions and activation functions. The Attention U-Net, optimized with Dice loss, delivers the strongest baseline, achieving a mean Intersection-over-Union (mIoU) of 0.857. To mitigate slice-wise inconsistencies inherent to 2D models, a White-Area Overlap (WAO) voting mechanism quantifies the tumor footprint shared by neighboring slices. The WAO curve is smoothed with a Gaussian filter to locate its peak, after which a percentile-based heuristic selectively relabels the most ambiguous softmax pixels. Cohort-level analysis shows that removing merely 0.1–0.3% of ambiguous low-confidence pixels lifts the post-processing mIoU above the baseline while improving segmentation for two-thirds of patients. The proposed refinement strategy holds great potential for further improvement, offering a practical route for integrating deep learning segmentation into routine clinical workflows with minimal computational overhead. Full article
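
The post-processing step lends itself to a compact illustration. The Python sketch below computes a White-Area-Overlap-style curve across slice predictions, smooths it with a Gaussian filter, and relabels the least confident foreground pixels away from the peak; the thresholds, the neighbourhood of trusted slices and the relabelling rule are simplified assumptions, not the paper's exact heuristic.

```python
# Simplified sketch of a WAO-style slice-voting refinement (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def wao_curve(masks: np.ndarray) -> np.ndarray:
    """masks: (S, H, W) binary (0/1 integer) predictions; overlap with neighbours."""
    overlap = np.zeros(len(masks))
    for i in range(1, len(masks) - 1):
        overlap[i] = np.sum(masks[i] & masks[i - 1]) + np.sum(masks[i] & masks[i + 1])
    return overlap

def refine(softmax_fg: np.ndarray, masks: np.ndarray, drop_percent: float = 0.2):
    """Relabel the least confident foreground pixels on slices far from the WAO peak."""
    smoothed = gaussian_filter1d(wao_curve(masks), sigma=2.0)
    peak = int(np.argmax(smoothed))                 # slice with the strongest support
    refined = masks.copy()
    for i, (probs, m) in enumerate(zip(softmax_fg, masks)):
        if abs(i - peak) <= 2 or not m.any():
            continue                                # trust slices near the peak
        fg_probs = probs[m.astype(bool)]
        cutoff = np.percentile(fg_probs, drop_percent)   # e.g. bottom 0.2% of pixels
        refined[i] = m & (probs > cutoff)           # drop the most ambiguous pixels
    return refined
```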

18 pages, 16540 KB  
Article
E-CMCA and LSTM-Enhanced Framework for Cross-Modal MRI-TRUS Registration in Prostate Cancer
by Ciliang Shao, Ruijin Xue and Lixu Gu
J. Imaging 2025, 11(9), 292; https://doi.org/10.3390/jimaging11090292 - 27 Aug 2025
Viewed by 448
Abstract
Accurate registration of MRI and TRUS images is crucial for effective prostate cancer diagnosis and biopsy guidance, yet modality differences and non-rigid deformations pose significant challenges, especially in dynamic imaging. This study presents a novel cross-modal MRI-TRUS registration framework, leveraging a dual-encoder architecture with an Enhanced Cross-Modal Channel Attention (E-CMCA) module and an LSTM-Based Spatial Deformation Modeling Module. The E-CMCA module efficiently extracts and integrates multi-scale cross-modal features, while the LSTM-Based Spatial Deformation Modeling Module models temporal dynamics by processing depth-sliced 3D deformation fields as sequential data. A VecInt operation ensures smooth, diffeomorphic transformations, and a FuseConv layer enhances feature integration for precise alignment. Experiments on the μ-RegPro dataset from the MICCAI 2023 Challenge demonstrate that our model achieves a DSC of 0.865, RDSC of 0.898, TRE of 2.278 mm, and RTRE of 1.293, surpassing state-of-the-art methods and performing robustly in both static 3D and dynamic 4D registration tasks. Full article
(This article belongs to the Special Issue Celebrating the 10th Anniversary of the Journal of Imaging)
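
To make the depth-sliced sequence idea concrete, the sketch below runs an LSTM over the depth slices of a dense displacement field; the layer sizes and the flattening scheme are illustrative assumptions (practical only for small grids) and do not reproduce the E-CMCA or VecInt components.

```python
# Illustrative sketch: treating depth slices of a 3D deformation field as a sequence.
import torch
import torch.nn as nn

class DepthSliceLSTM(nn.Module):
    """Refine a displacement field along depth by running an LSTM over its slices."""
    def __init__(self, h: int, w: int, hidden: int = 256):
        super().__init__()
        self.in_dim = 3 * h * w                  # 3-channel displacement per voxel
        self.lstm = nn.LSTM(self.in_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, self.in_dim)

    def forward(self, field: torch.Tensor) -> torch.Tensor:
        # field: (B, 3, D, H, W) dense displacement field (small H, W assumed)
        b, c, d, h, w = field.shape
        seq = field.permute(0, 2, 1, 3, 4).reshape(b, d, -1)   # (B, D, 3*H*W)
        hidden, _ = self.lstm(seq)
        refined = self.out(hidden).reshape(b, d, c, h, w)
        return refined.permute(0, 2, 1, 3, 4)                  # back to (B, 3, D, H, W)
```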

19 pages, 7531 KB  
Article
Evaluating the Impact of 2D MRI Slice Orientation and Location on Alzheimer’s Disease Diagnosis Using a Lightweight Convolutional Neural Network
by Nadia A. Mohsin and Mohammed H. Abdulameer
J. Imaging 2025, 11(8), 260; https://doi.org/10.3390/jimaging11080260 - 5 Aug 2025
Viewed by 822
Abstract
Accurate detection of Alzheimer’s disease (AD) is critical yet challenging for early medical intervention. Deep learning methods, especially convolutional neural networks (CNNs), have shown promising potential for improving diagnostic accuracy using magnetic resonance imaging (MRI). This study aims to identify the most informative combination of MRI slice orientation and anatomical location for AD classification. We propose an automated framework that first selects the most relevant slices using a feature entropy-based method applied to activation maps from a pretrained CNN model. For classification, we employ a lightweight CNN architecture based on depthwise separable convolutions to efficiently analyze the selected 2D MRI slices extracted from preprocessed 3D brain scans. To further interpret model behavior, an attention mechanism is integrated to analyze which feature level contributes the most to the classification process. The model is evaluated on three binary tasks: AD vs. mild cognitive impairment (MCI), AD vs. cognitively normal (CN), and MCI vs. CN. The experimental results show the highest accuracy (97.4%) in distinguishing AD from CN when utilizing the selected slices from the ninth axial segment, followed by the tenth segment of coronal and sagittal orientations. These findings demonstrate the significance of slice location and orientation in MRI-based AD diagnosis and highlight the potential of lightweight CNNs for clinical use. Full article
(This article belongs to the Section AI in Imaging)
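
A simple way to picture the slice-selection step is an entropy score per candidate slice. The sketch below ranks activation maps by Shannon entropy and keeps the top-k slices; the histogram-based estimator and the ranking rule are assumptions for illustration, not the paper's exact feature-entropy method.

```python
# Illustrative entropy-based slice ranking (assumed estimator, not the paper's code).
import numpy as np

def activation_entropy(act: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy of a slice's activation map, used as an information score."""
    hist, _ = np.histogram(act, bins=bins)
    p = hist.astype(float) / (hist.sum() + 1e-12)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_informative_slices(activation_maps: np.ndarray, k: int = 8) -> np.ndarray:
    """activation_maps: (S, H, W), one map per candidate 2D slice; keep the top-k."""
    scores = np.array([activation_entropy(a) for a in activation_maps])
    return np.argsort(scores)[::-1][:k]      # indices of the k highest-entropy slices
```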

31 pages, 3754 KB  
Review
Artificial Gametogenesis and In Vitro Spermatogenesis: Emerging Strategies for the Treatment of Male Infertility
by Aris Kaltsas, Maria-Anna Kyrgiafini, Eleftheria Markou, Andreas Koumenis, Zissis Mamuris, Fotios Dimitriadis, Athanasios Zachariou, Michael Chrisofos and Nikolaos Sofikitis
Int. J. Mol. Sci. 2025, 26(15), 7383; https://doi.org/10.3390/ijms26157383 - 30 Jul 2025
Cited by 2 | Viewed by 2869
Abstract
Male-factor infertility accounts for approximately half of all infertility cases globally, yet therapeutic options remain limited for individuals with no retrievable spermatozoa, such as those with non-obstructive azoospermia (NOA). In recent years, artificial gametogenesis has emerged as a promising avenue for fertility restoration, driven by advances in two complementary strategies: organotypic in vitro spermatogenesis (IVS), which aims to complete spermatogenesis ex vivo using native testicular tissue, and in vitro gametogenesis (IVG), which seeks to generate male gametes de novo from pluripotent or reprogrammed somatic stem cells. To evaluate the current landscape and future potential of these approaches, a narrative, semi-systematic literature search was conducted in PubMed and Scopus for the period January 2010 to February 2025. Additionally, landmark studies published prior to 2010 that contributed foundational knowledge in spermatogenesis and testicular tissue modeling were reviewed to provide historical context. This narrative review synthesizes multidisciplinary evidence from cell biology, tissue engineering, and translational medicine to benchmark IVS and IVG technologies against species-specific developmental milestones, ranging from rodent models to non-human primates and emerging human systems. Key challenges—such as the reconstitution of the blood–testis barrier, stage-specific endocrine signaling, and epigenetic reprogramming—are discussed alongside critical performance metrics of various platforms, including air–liquid interface slice cultures, three-dimensional organoids, microfluidic “testis-on-chip” devices, and stem cell-derived gametogenic protocols. Particular attention is given to clinical applicability in contexts such as NOA, oncofertility preservation in prepubertal patients, genetic syndromes, and reproductive scenarios involving same-sex or unpartnered individuals. Safety, regulatory, and ethical considerations are critically appraised, and a translational framework is outlined that emphasizes biomimetic scaffold design, multi-omics-guided media optimization, and rigorous genomic and epigenomic quality control. While the generation of functionally mature sperm in vitro remains unachieved, converging progress in animal models and early human systems suggests that clinically relevant IVS and IVG applications are approaching feasibility, offering a paradigm shift in reproductive medicine. Full article

16 pages, 5262 KB  
Article
A Hybrid Framework for Metal Artifact Suppression in CT Imaging of Metal Lattice Structures via Radon Transform and Attention-Based Super-Resolution Reconstruction
by Bingyang Wang, Zhiwei Zhang, Heng Li and Ronghai Wu
Appl. Sci. 2025, 15(14), 7819; https://doi.org/10.3390/app15147819 - 11 Jul 2025
Viewed by 461
Abstract
High-density component-induced metal artifacts in industrial computed tomography (CT) severely impair image quality and complicate further analysis. To suppress artifacts and improve image quality, this research proposes a practical approach that combines lightweight attention-enhanced super-resolution networks with Radon-domain artifact elimination. First, the original CT slices are subjected to bicubic interpolation, which enhances resolution and reduces sampling errors during transformation. The interpolated images are then converted into sinograms via the Radon transform, and metal artifacts are detected and suppressed in the Radon domain. The artifact-suppressed sinograms are then reconstructed at higher resolution using a lightweight Enhanced Deep Super-Resolution (EDSR) network with a channel attention mechanism, which consists of only one residual block. Finally, the inverse Radon transform is used to reconstruct the final CT images. In experiments, the proposed approach achieved an average peak signal-to-noise ratio (PSNR) of 40.39 dB and an average signal-to-noise ratio (SNR) of 29.75 dB, an SNR improvement of 15.48 dB over the original artifact-laden images. This method offers a workable and effective way to improve image quality in industrial CT applications involving intricate structures that incorporate metal. Full article
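
The overall pipeline (upsample, Radon transform, suppress metal traces, reconstruct) can be sketched with scikit-image as below; the percentile clamp used here is only a placeholder for the paper's artifact-detection step, and the EDSR super-resolution stage is omitted.

```python
# Rough sketch of a Radon-domain suppression pipeline (placeholder suppression rule).
import numpy as np
from skimage.transform import radon, iradon, resize

def suppress_metal(ct_slice: np.ndarray, scale: int = 2, trace_percentile: float = 99.5):
    # bicubic upsampling before the transform (order=3 corresponds to bicubic)
    up = resize(ct_slice, (ct_slice.shape[0] * scale, ct_slice.shape[1] * scale),
                order=3, anti_aliasing=True)
    theta = np.linspace(0.0, 180.0, max(up.shape), endpoint=False)
    sino = radon(up, theta=theta, circle=False)
    # placeholder metal-trace suppression: clamp unusually bright sinogram bins
    limit = np.percentile(sino, trace_percentile)
    sino_clean = np.minimum(sino, limit)
    # reconstruct; a super-resolution network would be applied to sino_clean here
    return iradon(sino_clean, theta=theta, filter_name="ramp", circle=False)
```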

18 pages, 70320 KB  
Article
RIS-UNet: A Multi-Level Hierarchical Framework for Liver Tumor Segmentation in CT Images
by Yuchai Wan, Lili Zhang and Murong Wang
Entropy 2025, 27(7), 735; https://doi.org/10.3390/e27070735 - 9 Jul 2025
Viewed by 670
Abstract
The deep learning-based analysis of liver CT images is expected to provide assistance for clinicians in the diagnostic decision-making process. However, the accuracy of existing methods still falls short of clinical requirements and needs to be further improved. Therefore, in this work, we propose a novel multi-level hierarchical framework for liver tumor segmentation. In the first level, we integrate inter-slice spatial information by a 2.5D network to resolve the accuracy–efficiency trade-off inherent in conventional 2D/3D segmentation strategies for liver tumor segmentation. Then, the second level extracts the inner-slice global and local features for enhancing feature representation. We propose the Res-Inception-SE Block, which combines residual connections, multi-scale Inception modules, and squeeze-excitation attention to capture comprehensive global and local features. Furthermore, we design a hybrid loss function combining Binary Cross Entropy (BCE) and Dice loss to solve the category imbalance problem and accelerate convergence. Extensive experiments on the LiTS17 dataset demonstrate the effectiveness of our method on accuracy, efficiency, and visual results for liver tumor segmentation. Full article
(This article belongs to the Special Issue Cutting-Edge AI in Computational Bioinformatics)
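
The hybrid loss is the most self-contained piece of the framework, so a short PyTorch sketch is given below; the 50/50 weighting between the BCE and Dice terms and the smoothing constant are assumptions, not the authors' settings.

```python
# Minimal hybrid BCE + Dice loss sketch (weighting and smoothing are assumptions).
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, bce_weight: float = 0.5, smooth: float = 1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (B, 1, H, W); target is a binary mask
        bce = self.bce(logits, target)
        probs = torch.sigmoid(logits)
        inter = (probs * target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1.0 - ((2.0 * inter + self.smooth) / (union + self.smooth)).mean()
        return self.bce_weight * bce + (1.0 - self.bce_weight) * dice
```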

12 pages, 902 KB  
Article
BSCNNLaneNet: A Novel Bidirectional Spatial Convolution Neural Network for Lane Detection
by Youming Ge, Zhihang Ji, Moli Zhang, Xiang Li, Guoyong Wang and Lin Wang
Electronics 2025, 14(13), 2604; https://doi.org/10.3390/electronics14132604 - 27 Jun 2025
Viewed by 466
Abstract
Accurately detecting lane lines is a hot topic in computer vision. How to effectively utilize the relationships between lane features for detection is still an open question. In this paper, we propose a novel lane detection model based on a convolutional neural network (CNN), namely, the BSCNNLaneNet (Bidirectional Spatial CNN Lane Detection Network). The proposed model is based on the spatial CNN method and incorporates a bidirectional recurrent neural network (BRNN) block to learn the spatial relationships between slice features. Additionally, a convolutional block attention mechanism is introduced to capture global features, enhancing the global connections between slice features in different directions. We conduct extensive experiments on the TuSimple dataset. The results demonstrate that the proposed method surpasses the original spatial CNN method, achieving an increase in accuracy from 96.53% to 96.86%. Full article
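
To illustrate the slice-wise recurrent idea, the sketch below passes the rows of a CNN feature map through a bidirectional GRU; the scan direction, hidden size and residual fusion are illustrative assumptions rather than the BSCNNLaneNet design.

```python
# Illustrative sketch: a bidirectional RNN over row-wise slices of a feature map.
import torch
import torch.nn as nn

class RowBRNN(nn.Module):
    """Propagate information across horizontal slices of a CNN feature map."""
    def __init__(self, channels: int, width: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(channels * width, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, channels * width)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); treat each of the H rows as one sequence element
        b, c, h, w = feat.shape
        seq = feat.permute(0, 2, 1, 3).reshape(b, h, c * w)
        out, _ = self.rnn(seq)
        out = self.proj(out).reshape(b, h, c, w).permute(0, 2, 1, 3)
        return feat + out                       # residual connection
```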

31 pages, 4585 KB  
Article
CAAF-ResUNet: Adaptive Attention Fusion with Boundary-Aware Loss for Lung Nodule Segmentation
by Thang Quoc Pham, Thai Hoang Le, Khai Dinh Lai, Dat Quoc Ngo, Tan Van Pham, Quang Hong Hua, Khang Quang Le, Huyen Duy Mai Le and Tuyen Ngoc Lam Nguyen
Medicina 2025, 61(7), 1126; https://doi.org/10.3390/medicina61071126 - 22 Jun 2025
Cited by 1 | Viewed by 663
Abstract
Background and Objectives: The accurate segmentation of pulmonary nodules in computed tomography (CT) remains a critical yet challenging task due to variations in nodule size, shape, and boundary ambiguity. This study proposes CAAF-ResUNet (Context-Aware Adaptive Attention Fusion ResUNet), a novel deep learning model designed to address these challenges through adaptive feature fusion and edge-sensitive learning. Materials and Methods: Central to our approach is the Adaptive Attention Controller (AAC), which dynamically adjusts the contribution of channel and position attention based on contextual features in each input. To further enhance boundary localization, we incorporate three complementary boundary-aware loss functions: Sobel, Laplacian, and Hausdorff. Results: An extensive evaluation of two benchmark datasets demonstrates the superiority of the proposed model, achieving Dice scores of 90.88% on LUNA16 and 85.92% on LIDC-IDRI, both exceeding prior state-of-the-art methods. A clinical validation of a dataset comprising 804 CT slices from 35 patients at the University Medical Center of Ho Chi Minh City confirmed the model’s practical reliability, yielding a Dice score of 95.34% and a notably low Miss Rate of 4.60% under the Hausdorff loss configuration. Conclusions: These results establish CAAF-ResUNet as a robust and clinically viable solution for pulmonary nodule segmentation, offering enhanced boundary precision and minimized false negatives, two critical properties in early-stage lung cancer diagnosis and radiological decision support. Full article
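
Of the three boundary-aware terms, the Sobel loss is the simplest to sketch. The PyTorch snippet below compares Sobel edge magnitudes of the predicted probability map and the ground-truth mask with an L1 penalty; the kernels are the standard Sobel filters, while the L1 comparison is an illustrative choice, not necessarily the paper's formulation.

```python
# Minimal Sobel-based boundary loss sketch (comparison term is an assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelBoundaryLoss(nn.Module):
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", gx.view(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().contiguous().view(1, 1, 3, 3))

    def edges(self, x: torch.Tensor) -> torch.Tensor:
        ex = F.conv2d(x, self.kx, padding=1)
        ey = F.conv2d(x, self.ky, padding=1)
        return torch.sqrt(ex ** 2 + ey ** 2 + 1e-8)

    def forward(self, probs: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # probs, target: (B, 1, H, W) predicted probabilities and binary mask
        return F.l1_loss(self.edges(probs), self.edges(target.float()))
```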

28 pages, 12155 KB  
Article
Domes and Earthquakes of Naples: A Comparative Study of Baroque and Neoclassical Structure
by Claudia Cennamo, Luciana Di Gennaro, Luigi Massaro and Giorgio Frunzio
Buildings 2025, 15(12), 2008; https://doi.org/10.3390/buildings15122008 - 11 Jun 2025
Viewed by 695
Abstract
The aim of this research is a structural study of the dome of the Tesoro di San Gennaro in Naples, compared with more recent studies of San Francesco di Paola, as examples of the Baroque and Neoclassical styles, respectively, and emblems of different stylistic periods of the Neapolitan architectural school of domes and churches. The studies are carried out with particular attention to evaluating their seismic safety without considering the role of the vertical supporting structures. The analysis adopts graphical approaches to assess the safety of the two domes under vertical and horizontal loads, with a special focus on the effects of earthquakes. In the case of San Gennaro, the approach combines rigid-kinematic theory and the theory of elasticity due to the presence of a wooden structure, while in the case of San Francesco di Paola, only the thrust-line method was used, applying it to the three-dimensional structures through the slicing technique. In conclusion, the methods to assess the safety of the domes under both vertical and horizontal seismic loads allow for a comparison of the two structures and provide a comprehensive evaluation of their structural integrity. The study demonstrates, through a predominantly graphical methodology, the effectiveness of traditional equilibrium-based approaches in assessing dome stability, highlighting the active contribution of the timber structure in San Gennaro and quantifying its role under seismic loading scenarios. Full article
(This article belongs to the Special Issue Recent Scientific Developments on the Mechanics of Masonry Structures)

29 pages, 5553 KB  
Article
Data-Driven Multi-Scale Channel-Aligned Transformer for Low-Carbon Autonomous Vessel Operations: Enhancing CO2 Emission Prediction and Green Autonomous Shipping Efficiency
by Jiahao Ni, Hongjun Tian, Kaijie Zhang, Yihong Xue and Yang Xiong
J. Mar. Sci. Eng. 2025, 13(6), 1143; https://doi.org/10.3390/jmse13061143 - 9 Jun 2025
Viewed by 697
Abstract
The accurate prediction of autonomous vessel CO2 emissions is critical for achieving IMO 2050 carbon neutrality and optimizing low-carbon maritime operations. Traditional models face limitations in real-time multi-source data analysis and dynamic cross-variable dependency modeling, hindering data-driven decision-making for sustainable autonomous shipping. This study proposes a Multi-scale Channel-aligned Transformer (MCAT) model, integrated with a 5G–satellite–IoT communication architecture, to address these challenges. The MCAT model employs multi-scale token reconstruction and a dual-level attention mechanism, effectively capturing spatiotemporal dependencies in heterogeneous data streams (AIS, sensors, weather) while suppressing high-frequency noise. To enable seamless data collaboration, a hybrid transmission framework combining satellite (Inmarsat/Iridium), 5G URLLC slicing, and industrial Ethernet is designed, achieving ultra-low latency (10 ms) and nanosecond-level synchronization via IEEE 1588v2. Validated on a 22-dimensional real autonomous vessel dataset, MCAT reduces prediction errors by 12.5% MAE and 24% MSE compared to state-of-the-art methods, demonstrating superior robustness under noisy scenarios. Furthermore, the proposed architecture supports smart autonomous shipping solutions by providing demonstrably interpretable emission insights through its dual-level attention mechanism (visualized via attention maps) for route optimization, fuel efficiency enhancement, and compliance with CII regulations. This research bridges AI-driven predictive analytics with green autonomous shipping technologies, offering a scalable framework for digitalized and sustainable maritime operations. Full article
(This article belongs to the Special Issue Sustainable Maritime Transport and Port Intelligence)

21 pages, 17434 KB  
Article
Large Vessel Segmentation and Microvasculature Quantification Based on Dual-Stream Learning in Optic Disc OCTA Images
by Jingmin Luan, Zehao Wei, Qiyang Li, Jian Liu, Yao Yu, Dongni Yang, Jia Sun, Nan Lu, Xin Zhu and Zhenhe Ma
Photonics 2025, 12(6), 588; https://doi.org/10.3390/photonics12060588 - 9 Jun 2025
Viewed by 528
Abstract
Quantification of optic disc microvasculature is crucial for diagnosing various ocular diseases. However, accurate quantification of the microvasculature requires the exclusion of large vessels, such as the central artery and vein, when present. To address the challenge of ineffective learning of edge information, which arises from the adhesion and transposition of large vessels in the optic disc, we developed a segmentation model that generates high-quality edge information in optic disc slices. By integrating dual-stream learning with channel-spatial attention and multi-level attention mechanisms, our model effectively learns both the target’s primary structure and fine details. Compared to state-of-the-art methods, our proposed approach demonstrates superior performance in segmentation accuracy. Superior results were obtained when the model was tested on OCTA images of the optic disc from 10 clinical patients. This underscores the significant contribution of our method in achieving clearly defined multi-task learning while substantially enhancing inference speed. Full article
(This article belongs to the Section Biophotonics and Biomedical Optics)

31 pages, 3849 KB  
Article
SAFEL-IoT: Secure Adaptive Federated Learning with Explainability for Anomaly Detection in 6G-Enabled Smart Industry 5.0
by Mohammed Naif Alatawi
Electronics 2025, 14(11), 2153; https://doi.org/10.3390/electronics14112153 - 26 May 2025
Cited by 1 | Viewed by 1551
Abstract
The rise of 6G-enabled smart industries necessitates secure, adaptive, and interpretable anomaly detection frameworks capable of operating under dynamic, adversarial, and resource-constrained environments. This study presents SAFEL-IoT, a novel Secure Adaptive Federated Learning framework with integrated explainability, specifically designed for anomaly detection in Industrial Internet-of-Things (IIoT) systems under Industry 5.0 paradigms. SAFEL-IoT introduces a dynamic aggregation mechanism based on temporal model divergence, a hybrid encryption scheme combining partial homomorphic encryption with differential privacy, and an interpretable anomaly scoring pipeline leveraging SHapley Additive exPlanations (SHAP) values and temporal attention mechanisms. Extensive experimentation on the SKAB industrial dataset demonstrates that SAFEL-IoT achieves a superior F1 score of 0.93, reduces training time to 63.7 s, and maintains explanation fidelity with only a 0.15 explanation error. Communication efficiency is improved by 70.3% through 6G network slicing, while detection latency remains below 12 ms across 100 distributed edge clients. Further analysis shows a 41.7% improvement in drift robustness and a 68.9% reduction in false positives compared to traditional federated learning baselines. Theoretical convergence guarantees, scalability under large node deployments, and resilience against adversarial attacks validate SAFEL-IoT as a comprehensive and practical solution for secure, explainable, and scalable anomaly detection in next-generation industrial ecosystems. Full article
(This article belongs to the Special Issue Security and Privacy in IoT-Based Systems)
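
The dynamic, divergence-based aggregation can be pictured with a very small NumPy sketch that weights each client update by the inverse of its distance from the current global model; this is a conceptual stand-in only, and the encryption, differential privacy and SHAP components of SAFEL-IoT are not represented.

```python
# Conceptual sketch of divergence-aware federated averaging (weighting rule assumed).
import numpy as np

def aggregate(global_weights: np.ndarray, client_weights: list,
              eps: float = 1e-6) -> np.ndarray:
    """Weight each client update by how little it diverges from the global model."""
    divergences = np.array([np.linalg.norm(w - global_weights) for w in client_weights])
    alphas = 1.0 / (divergences + eps)
    alphas = alphas / alphas.sum()               # normalised aggregation weights
    return np.sum([a * w for a, w in zip(alphas, client_weights)], axis=0)
```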

15 pages, 1616 KB  
Article
DiffBTS: A Lightweight Diffusion Model for 3D Multimodal Brain Tumor Segmentation
by Zuxin Nie, Jiahong Yang, Chengxuan Li, Yaqin Wang and Jun Tang
Sensors 2025, 25(10), 2985; https://doi.org/10.3390/s25102985 - 9 May 2025
Viewed by 1415
Abstract
Denoising diffusion probabilistic models (DDPMs) have achieved remarkable success across various research domains. However, their high complexity when processing 3D images remains a limitation. To mitigate this, researchers typically preprocess data into 2D slices, enabling the model to perform segmentation in a reduced 2D space. This paper introduces DiffBTS, an end-to-end, lightweight diffusion model specifically designed for 3D brain tumor segmentation. DiffBTS replaces the conventional self-attention module in traditional diffusion models by introducing an efficient 3D self-attention mechanism. The mechanism is applied between down-sampling and skip connections in the model, allowing it to capture long-range dependencies and global semantic information more effectively. This design prevents the computational complexity from growing quadratically. Prediction accuracy and model stability are crucial in brain tumor segmentation; we propose the Edge-Blurring Guided (EBG) algorithm, which directs the diffusion model to focus more on the accuracy of segmentation boundaries during the iterative sampling process. This approach enhances prediction accuracy and stability. To assess the performance of DiffBTS, we compared it with seven state-of-the-art models on the BraTS 2020 and BraTS 2021 datasets. DiffBTS achieved an average Dice score of 89.99 and an average HD95 value of 1.928 mm on BraTS2021 and 86.44 and 2.466 mm on BraTS2020, respectively. Extensive experimental results demonstrate that DiffBTS achieves state-of-the-art performance in brain tumor segmentation, outperforming all competing models. Full article
(This article belongs to the Section Biomedical Sensors)
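
A common way to keep 3D self-attention tractable is to apply it only at the low-resolution bottleneck, where the number of voxels is small. The PyTorch sketch below does exactly that as a stand-in for the efficient 3D attention described here; it is not the DiffBTS mechanism, and the channel count must be divisible by the number of heads.

```python
# Illustrative sketch: self-attention over a 3D bottleneck feature volume.
import torch
import torch.nn as nn

class Bottleneck3DAttention(nn.Module):
    """Self-attention over the (small) set of voxels at a 3D U-Net bottleneck."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) at the lowest resolution, so D*H*W tokens stay small
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # (B, D*H*W, C)
        normed = self.norm(tokens)
        attended, _ = self.attn(normed, normed, normed)
        return (tokens + attended).transpose(1, 2).reshape(b, c, d, h, w)
```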