Search Results (944)

Search Parameters: Keywords = guided fusion

33 pages, 12598 KiB  
Article
OKG-ConvGRU: A Domain Knowledge-Guided Remote Sensing Prediction Framework for Ocean Elements
by Renhao Xiao, Yixiang Chen, Lizhi Miao, Jie Jiang, Donglin Zhang and Zhou Su
Remote Sens. 2025, 17(15), 2679; https://doi.org/10.3390/rs17152679 (registering DOI) - 2 Aug 2025
Abstract
Accurate prediction of key ocean elements (e.g., chlorophyll-a concentration, sea surface temperature, etc.) is imperative for maintaining marine ecological balance, responding to marine disaster pollution, and promoting the sustainable use of marine resources. Existing spatio-temporal prediction models primarily rely on either physical or data-driven approaches. Physical models are constrained by modeling complexity and parameterization errors, while data-driven models lack interpretability and depend on high-quality data. To address these challenges, this study proposes OKG-ConvGRU, a domain knowledge-guided remote sensing prediction framework for ocean elements. This framework integrates knowledge graphs with the ConvGRU network, leveraging prior knowledge from marine science to enhance the prediction performance of ocean elements in remotely sensed images. Firstly, we construct a spatio-temporal knowledge graph for ocean elements (OKG), followed by semantic embedding representation for its spatial and temporal dimensions. Subsequently, a cross-attention-based feature fusion module (CAFM) is designed to efficiently integrate spatio-temporal multimodal features. Finally, these fused features are incorporated into an enhanced ConvGRU network. For multi-step prediction, we adopt a Seq2Seq architecture combined with a multi-step rolling strategy. Prediction experiments for chlorophyll-a concentration in the eastern seas of China validate the effectiveness of the proposed framework. The results show that, compared to baseline models, OKG-ConvGRU exhibits significant advantages in prediction accuracy, long-term stability, data utilization efficiency, and robustness. This study provides a scientific foundation and technical support for the precise monitoring and sustainable development of marine ecological environments. Full article
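
As an illustration of the cross-attention fusion step described above, here is a minimal sketch in which image features attend to knowledge-graph embeddings; the module name, tensor shapes, and single-head design are assumptions for illustration, not the published CAFM implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Minimal single-head cross-attention: remote-sensing features (queries)
    attend to knowledge-graph embeddings (keys/values). Hypothetical sketch."""
    def __init__(self, img_dim: int, kg_dim: int, hidden: int = 64):
        super().__init__()
        self.q = nn.Linear(img_dim, hidden)    # queries from image features
        self.k = nn.Linear(kg_dim, hidden)     # keys from knowledge-graph embeddings
        self.v = nn.Linear(kg_dim, hidden)     # values from knowledge-graph embeddings
        self.out = nn.Linear(hidden, img_dim)  # project back to the image feature size

    def forward(self, img_feats, kg_embeds):
        # img_feats: (B, N_pixels, img_dim); kg_embeds: (B, N_entities, kg_dim)
        q, k, v = self.q(img_feats), self.k(kg_embeds), self.v(kg_embeds)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return img_feats + self.out(attn @ v)  # residual injection of prior knowledge

fusion = CrossAttentionFusion(img_dim=32, kg_dim=16)
print(fusion(torch.randn(2, 100, 32), torch.randn(2, 8, 16)).shape)  # (2, 100, 32)
```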

17 pages, 1651 KiB  
Article
A Comprehensive User Acceptance Evaluation Framework of Intelligent Driving Based on Subjective and Objective Integration—From the Perspective of Value Engineering
by Wang Zhang, Fuquan Zhao, Zongwei Liu, Haokun Song and Guangyu Zhu
Systems 2025, 13(8), 653; https://doi.org/10.3390/systems13080653 (registering DOI) - 2 Aug 2025
Abstract
Intelligent driving technology is expected to reshape urban transportation, but its promotion is hindered by user acceptance challenges and diverse technical routes. This study proposes a comprehensive user acceptance evaluation framework for intelligent driving from the perspective of value engineering (VE). The novelty of this framework lies in three aspects: (1) It unifies behavioral theory and utility theory under the value engineering framework, and it extracts key indicators such as safety, travel efficiency, trust, comfort, and cost, thus addressing the issue of the lack of integration between subjective and objective factors in previous studies. (2) It establishes a systematic mapping mechanism from technical solutions to evaluation indicators, filling the gap of insufficient targeting at different technical routes in the existing literature. (3) It quantifies acceptance differences via VE’s core formula of V = F/C, overcoming the ambiguity of non-technical evaluation in prior research. A case study comparing single-vehicle intelligence vs. collaborative intelligence and different sensor combinations (vision-only, map fusion, and lidar fusion) shows that collaborative intelligence and vision-based solutions offer higher comprehensive acceptance due to balanced functionality and cost. This framework guides enterprises in technical strategy planning and assists governments in formulating industrial policies by quantifying acceptance differences across technical routes. Full article
(This article belongs to the Special Issue Modeling, Planning and Management of Sustainable Transport Systems)
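
The value-engineering ratio V = F/C used in this abstract can be illustrated with a toy calculation; the function scores and costs below are invented numbers, not figures from the study.

```python
# Hypothetical value-engineering comparison: V = F / C, where F aggregates
# normalized function scores (safety, efficiency, trust, comfort) and C is a
# normalized cost. All numbers are made up for illustration.
routes = {
    "vision-only":  {"F": 0.78, "C": 0.60},
    "map fusion":   {"F": 0.82, "C": 0.75},
    "lidar fusion": {"F": 0.88, "C": 1.00},
}

for name, r in routes.items():
    value = r["F"] / r["C"]  # higher V = more delivered function per unit cost
    print(f"{name:12s} V = {value:.2f}")
```
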
41 pages, 86958 KiB  
Article
An Efficient Aerial Image Detection with Variable Receptive Fields
by Wenbin Liu, Liangren Shi and Guocheng An
Remote Sens. 2025, 17(15), 2672; https://doi.org/10.3390/rs17152672 (registering DOI) - 2 Aug 2025
Abstract
This article presents VRF-DETR, a lightweight real-time object detection framework for aerial remote sensing images, aimed at addressing the challenge of insufficient receptive fields for easily confused categories due to differences in height and perspective. Based on the RT-DETR architecture, our approach introduces three key innovations: the multi-scale receptive field adaptive fusion (MSRF2) module replaces the Transformer encoder with parallel dilated convolutions and spatial-channel attention to adjust receptive fields for confusing objects dynamically; the gated multi-scale context (GMSC) block reconstructs the backbone using Gated Multi-Scale Context units with attention-gated convolution (AGConv), reducing parameters while enhancing multi-scale feature extraction; and the context-guided fusion (CGF) module optimizes feature fusion via context-guided weighting to resolve multi-scale semantic conflicts. Evaluations were conducted on both the VisDrone2019 and UAVDT datasets, where VRF-DETR achieved the mAP50 of 52.1% and the mAP50-95 of 32.2% on the VisDrone2019 validation set, surpassing RT-DETR by 4.9% and 3.5%, respectively, while reducing parameters by 32% and FLOPs by 22%. It maintains real-time performance (62.1 FPS) and generalizes effectively, outperforming state-of-the-art methods in accuracy-efficiency trade-offs for aerial object detection. Full article
(This article belongs to the Special Issue Deep Learning Innovations in Remote Sensing)
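
The idea of adapting receptive fields with parallel dilated convolutions can be sketched as follows; the channel counts, dilation rates, and fusion by a 1x1 convolution are assumptions for illustration and do not reproduce the published MSRF2 module.

```python
import torch
import torch.nn as nn

class ParallelDilatedBranches(nn.Module):
    """Illustrative variable-receptive-field block: parallel 3x3 convolutions
    with different dilation rates, fused by a learned 1x1 convolution."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        # each branch sees a different effective receptive field
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

block = ParallelDilatedBranches(channels=16)
print(block(torch.randn(1, 16, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```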

22 pages, 4480 KiB  
Article
MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities
by Chengcheng Yang, Zhiyao Liang, Tonglai Liu, Zeng Hu and Dashun Yan
Electronics 2025, 14(15), 3088; https://doi.org/10.3390/electronics14153088 (registering DOI) - 1 Aug 2025
Abstract
Multimodal sentiment analysis (MSA) faces key challenges such as incomplete modality inputs, long-range temporal dependencies, and suboptimal fusion strategies. To address these, we propose MGMR-Net, a Mamba-guided multimodal reconstruction and fusion network that integrates modality-aware reconstruction with text-centric fusion within an efficient state-space modeling framework. MGMR-Net consists of two core components: the Mamba-collaborative fusion module, which utilizes a two-stage selective state-space mechanism for fine-grained cross-modal alignment and hierarchical temporal integration, and the Mamba-enhanced reconstruction module, which employs continuous-time recurrence and dynamic gating to accurately recover corrupted or missing modality features. The entire network is jointly optimized via a unified multi-task loss, enabling simultaneous learning of discriminative features for sentiment prediction and reconstructive features for modality recovery. Extensive experiments on CMU-MOSI, CMU-MOSEI, and CH-SIMS datasets demonstrate that MGMR-Net consistently outperforms several baseline methods under both complete and missing modality settings, achieving superior accuracy, robustness, and generalization. Full article
(This article belongs to the Special Issue Application of Data Mining in Decision Support Systems (DSSs))
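
The joint optimization of sentiment prediction and modality reconstruction described above can be pictured as a weighted multi-task loss; the loss functions and weighting factor below are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def joint_loss(sent_pred, sent_target, recon, recon_target, alpha=0.5):
    """Hypothetical multi-task objective: L1 loss on sentiment regression plus a
    weighted MSE reconstruction loss on recovered modality features."""
    task_loss = F.l1_loss(sent_pred, sent_target)   # sentiment prediction term
    recon_loss = F.mse_loss(recon, recon_target)    # modality recovery term
    return task_loss + alpha * recon_loss

loss = joint_loss(torch.randn(8, 1), torch.randn(8, 1),
                  torch.randn(8, 50, 32), torch.randn(8, 50, 32))
print(loss.item())
```
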
16 pages, 4587 KiB  
Article
FAMNet: A Lightweight Stereo Matching Network for Real-Time Depth Estimation in Autonomous Driving
by Jingyuan Zhang, Qiang Tong, Na Yan and Xiulei Liu
Symmetry 2025, 17(8), 1214; https://doi.org/10.3390/sym17081214 - 1 Aug 2025
Abstract
Accurate and efficient stereo matching is fundamental to real-time depth estimation from symmetric stereo cameras in autonomous driving systems. However, existing high-accuracy stereo matching networks typically rely on computationally expensive 3D convolutions, which limit their practicality in real-world environments. In contrast, real-time methods often sacrifice accuracy or generalization capability. To address these challenges, we propose FAMNet (Fusion Attention Multi-Scale Network), a lightweight and generalizable stereo matching framework tailored for real-time depth estimation in autonomous driving applications. FAMNet consists of two novel modules: Fusion Attention-based Cost Volume (FACV) and Multi-scale Attention Aggregation (MAA). FACV constructs a compact yet expressive cost volume by integrating multi-scale correlation, attention-guided feature fusion, and channel reweighting, thereby reducing reliance on heavy 3D convolutions. MAA further enhances disparity estimation by fusing multi-scale contextual cues through pyramid-based aggregation and dual-path attention mechanisms. Extensive experiments on the KITTI 2012 and KITTI 2015 benchmarks demonstrate that FAMNet achieves a favorable trade-off between accuracy, efficiency, and generalization. On KITTI 2015, with the incorporation of FACV and MAA, the prediction accuracy of the baseline model is improved by 37% and 38%, respectively, and a total improvement of 42% is achieved by our final model. These results highlight FAMNet’s potential for practical deployment in resource-constrained autonomous driving systems requiring real-time and reliable depth perception. Full article
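
To make the notion of a compact correlation-based cost volume concrete, here is a toy construction; the feature shapes, disparity range, and plain dot-product correlation are assumptions and do not reflect the attention-guided FACV design.

```python
import torch

def correlation_cost_volume(left, right, max_disp: int):
    """Toy stereo cost volume: for each candidate disparity d, correlate left
    features with right features shifted by d pixels."""
    b, c, h, w = left.shape
    volume = left.new_zeros(b, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, d] = (left * right).mean(dim=1)
        else:
            volume[:, d, :, d:] = (left[:, :, :, d:] * right[:, :, :, :-d]).mean(dim=1)
    return volume  # (B, max_disp, H, W), later aggregated into a disparity map

vol = correlation_cost_volume(torch.randn(1, 32, 48, 96),
                              torch.randn(1, 32, 48, 96), max_disp=24)
print(vol.shape)  # torch.Size([1, 24, 48, 96])
```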

34 pages, 1156 KiB  
Systematic Review
Mathematical Modelling and Optimization Methods in Geomechanically Informed Blast Design: A Systematic Literature Review
by Fabian Leon, Luis Rojas, Alvaro Peña, Paola Moraga, Pedro Robles, Blanca Gana and Jose García
Mathematics 2025, 13(15), 2456; https://doi.org/10.3390/math13152456 - 30 Jul 2025
Abstract
Background: Rock–blast design is a canonical inverse problem that joins elastodynamic partial differential equations (PDEs), fracture mechanics, and stochastic heterogeneity. Objective: Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol, a systematic review of mathematical methods for geomechanically informed blast modelling and optimisation is provided. Methods: A Scopus–Web of Science search (2000–2025) retrieved 2415 records; semantic filtering and expert screening reduced the corpus to 97 studies. Topic modelling with Bidirectional Encoder Representations from Transformers Topic (BERTOPIC) and bibliometrics organised them into (i) finite-element and finite–discrete element simulations, including arbitrary Lagrangian–Eulerian (ALE) formulations; (ii) geomechanics-enhanced empirical laws; and (iii) machine-learning surrogates and multi-objective optimisers. Results: High-fidelity simulations delimit blast-induced damage with ≤0.2 m mean absolute error; extensions of the Kuznetsov–Ram equation cut median-size mean absolute percentage error (MAPE) from 27% to 15%; Gaussian-process and ensemble learners reach a coefficient of determination (R2>0.95) while providing closed-form uncertainty; Pareto optimisers lower peak particle velocity (PPV) by up to 48% without productivity loss. Synthesis: Four themes emerge—surrogate-assisted PDE-constrained optimisation, probabilistic domain adaptation, Bayesian model fusion for digital-twin updating, and entropy-based energy metrics. Conclusions: Persisting challenges in scalable uncertainty quantification, coupled discrete–continuous fracture solvers, and rigorous fusion of physics-informed and data-driven models position blast design as a fertile test bed for advances in applied mathematics, numerical analysis, and machine-learning theory. Full article
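
For readers unfamiliar with the accuracy metrics quoted above, the following sketch computes MAPE and the coefficient of determination on synthetic fragmentation data; the numbers are invented and unrelated to the reviewed studies.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (%)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

observed = [0.31, 0.27, 0.42, 0.38]   # synthetic median fragment sizes (m)
predicted = [0.29, 0.30, 0.40, 0.35]
print(f"MAPE = {mape(observed, predicted):.1f}%  R^2 = {r_squared(observed, predicted):.3f}")
```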

12 pages, 294 KiB  
Review
Targeting Advanced Pancreatic Ductal Adenocarcinoma: A Practical Overview
by Chiara Citterio, Stefano Vecchia, Patrizia Mordenti, Elisa Anselmi, Margherita Ratti, Massimo Guasconi and Elena Orlandi
Gastroenterol. Insights 2025, 16(3), 26; https://doi.org/10.3390/gastroent16030026 - 30 Jul 2025
Abstract
Background/Objectives: Pancreatic ductal adenocarcinoma (PDAC) remains one of the deadliest solid tumors, with a five-year overall survival rate below 10%. While the introduction of multi-agent chemotherapy regimens has improved outcomes marginally, most patients with advanced disease continue to have limited therapeutic options. Molecular profiling has uncovered actionable genomic alterations in select subgroups of PDAC, yet the clinical impact of targeted therapies remains modest. This review aims to provide a clinically oriented synthesis of emerging molecular targets in PDAC, their therapeutic relevance, and practical considerations for biomarker testing, including current FDA and EMA indications. Methods: A narrative review was conducted using data from PubMed, Embase, Scopus, and international guidelines (NCCN, ESMO, ASCO). The selection focused on evidence published between 2020 and 2025, highlighting molecularly defined PDAC subsets and the current status of targeted therapies. Results: Actionable genomic alterations in PDAC include KRAS G12C mutations, BRCA1/2 and PALB2-associated homologous recombination deficiency, MSI-H/dMMR status, and rare gene fusions involving NTRK, RET, and NRG1. While only a minority of patients are eligible for targeted treatments, early-phase trials and real-world data have shown promising results in these subgroups. Molecular profiling is increasingly standard in advanced PDAC. Conclusions: Despite the rarity of targetable mutations, systematic molecular profiling is critical in advanced PDAC to guide off-label therapy or clinical trial enrollment. A practical framework for identifying and acting on molecular targets is essential to bridge the gap between precision oncology and clinical management. Full article
(This article belongs to the Special Issue Advances in the Management of Gastrointestinal and Liver Diseases)
4 pages, 976 KiB  
Proceeding Paper
Developing a Risk Recognition System Based on a Large Language Model for Autonomous Driving
by Donggyu Min and Dong-Kyu Kim
Eng. Proc. 2025, 102(1), 7; https://doi.org/10.3390/engproc2025102007 - 29 Jul 2025
Abstract
Autonomous driving systems have the potential to reduce traffic accidents dramatically; however, conventional modules often struggle to accurately detect risks in complex environments. This study presents a novel risk recognition system that integrates the reasoning capabilities of a large language model (LLM), specifically GPT-4, with traffic engineering domain knowledge. By incorporating surrogate safety measures such as time-to-collision (TTC) alongside traditional sensor and image data, our approach enhances the vehicle’s ability to interpret and react to potentially dangerous situations. Utilizing the realistic 3D simulation environment of CARLA, the proposed framework extracts comprehensive data—including object identification, distance, TTC, and vehicle dynamics—and reformulates this information into natural language inputs for GPT-4. The LLM then provides risk assessments with detailed justifications, guiding the autonomous vehicle to execute appropriate control commands. The experimental results demonstrate that the LLM-based module outperforms conventional systems by maintaining safer distances, achieving more stable TTC values, and delivering smoother acceleration control during dangerous scenarios. This fusion of LLM reasoning with traffic engineering principles not only improves the reliability of risk recognition but also lays a robust foundation for future real-time applications and dataset development in autonomous driving safety. Full article
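
The time-to-collision measure fed to the language model can be illustrated with a short sketch; the prompt wording and numeric values are invented, and the formula is the standard constant-speed TTC rather than the paper's exact formulation.

```python
def time_to_collision(gap_m: float, ego_speed_ms: float, lead_speed_ms: float) -> float:
    """Constant-speed TTC: gap divided by closing speed; infinite if not closing."""
    closing = ego_speed_ms - lead_speed_ms
    return float("inf") if closing <= 0 else gap_m / closing

ttc = time_to_collision(gap_m=18.0, ego_speed_ms=15.0, lead_speed_ms=9.0)
prompt = (f"Lead vehicle 18.0 m ahead, ego speed 15.0 m/s, lead speed 9.0 m/s, "
          f"TTC {ttc:.1f} s. Assess the collision risk and recommend a control action.")
print(prompt)  # TTC 3.0 s -> this text would be handed to the language model
```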

18 pages, 6570 KiB  
Article
Deposition Process and Interface Performance of Aluminum–Steel Joints Prepared Using CMT Technology
by Jie Zhang, Hao Du, Xinyue Wang, Yinglong Zhang, Jipeng Zhao, Penglin Zhang, Jiankang Huang and Ding Fan
Metals 2025, 15(8), 844; https://doi.org/10.3390/met15080844 - 29 Jul 2025
Abstract
The anode assembly, as a key component in the electrolytic aluminum process, is composed of steel claws and aluminum guide rods. The connection quality between the steel claws and guide rods directly affects the current conduction efficiency, energy consumption, and operational stability of equipment. Achieving high-quality joining between the aluminum alloy and steel has become a key process in the preparation of the anode assembly. To join the guide rods and steel claws, this work uses Cold Metal Transfer (CMT) technology to clad aluminum on the steel surface and employs machine vision to detect surface forming defects in the cladding layer. The influence of different currents on the interfacial microstructure and mechanical properties of aluminum alloy cladding on the steel surface was investigated. The results show that increasing the cladding current leads to an increase in the width of the fusion line and grain size and the formation of layered Fe2Al5 intermetallic compounds (IMCs) at the interface. As the current increases from 90 A to 110 A, the thickness of the Al-Fe IMC layer increases from 1.46 μm to 2.06 μm. When the current reaches 110 A, the thickness of the interfacial brittle phase is the largest, at 2 ± 0.5 μm. The interfacial region where aluminum and steel are fused has the highest hardness, and the tensile strength first increases and then decreases with the current. The highest tensile strength is 120.45 MPa at 100 A. All the fracture surfaces exhibit a brittle fracture. Full article

16 pages, 2370 KiB  
Article
SemABC: Semantic-Guided Adaptive Bias Calibration for Generative Zero-Shot Point Cloud Segmentation
by Yuyun Wei and Meng Qi
Appl. Sci. 2025, 15(15), 8359; https://doi.org/10.3390/app15158359 - 27 Jul 2025
Abstract
Due to the limited quantity and high cost of high-quality three-dimensional annotations, generalized zero-shot point cloud segmentation aims to transfer the knowledge of seen to unseen classes by leveraging semantic correlations to achieve generalization purposes. Existing generative point cloud semantic segmentation approaches rely on generators trained on seen classes to synthesize visual features for unseen classes in order to help the segmentation model gain the ability of generalization, but this often leads to a bias toward seen classes. To address this issue, we propose a semantic-guided adaptive bias calibration approach with a dual-branch network architecture. This network consists of a novel visual–semantic fusion branch alongside the primary segmentation branch to suppress the bias toward seen classes. Specifically, the visual–semantic branch exploits the visual–semantic relevance of the synthetic features of unseen classes to provide auxiliary predictions. Furthermore, we introduce an adaptive bias calibration module that dynamically integrates the predictions from both the main and auxiliary branches to achieve unbiased segmentation results. Extensive experiments conducted on standard benchmarks demonstrate that our approach significantly outperforms state-of-the-art methods on both seen and unseen classes, thereby validating the effectiveness of our approach. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Industrial Engineering)
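
The bias-calibration idea of blending auxiliary visual-semantic predictions into the main branch for unseen classes can be sketched as below; the fixed blend weight stands in for the adaptive module and is purely an assumption.

```python
import torch

def calibrated_prediction(main_logits, aux_logits, unseen_mask, weight=0.5):
    """Blend auxiliary scores into the main branch only for unseen classes
    (illustrative stand-in for adaptive bias calibration)."""
    blended = main_logits.clone()
    blended[..., unseen_mask] = ((1 - weight) * main_logits[..., unseen_mask]
                                 + weight * aux_logits[..., unseen_mask])
    return blended.softmax(dim=-1)

main = torch.randn(4, 10)                  # per-point scores over 10 classes
aux = torch.randn(4, 10)                   # auxiliary visual-semantic scores
unseen = torch.zeros(10, dtype=torch.bool)
unseen[7:] = True                          # classes 7-9 treated as unseen
print(calibrated_prediction(main, aux, unseen).shape)  # torch.Size([4, 10])
```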

29 pages, 3125 KiB  
Article
Tomato Leaf Disease Identification Framework FCMNet Based on Multimodal Fusion
by Siming Deng, Jiale Zhu, Yang Hu, Mingfang He and Yonglin Xia
Plants 2025, 14(15), 2329; https://doi.org/10.3390/plants14152329 - 27 Jul 2025
Abstract
Precisely recognizing diseases in tomato leaves plays a crucial role in enhancing the health, productivity, and quality of tomato crops. However, disease identification methods that rely on single-mode information often face the problems of insufficient accuracy and weak generalization ability. Therefore, this paper proposes a tomato leaf disease recognition framework FCMNet based on multimodal fusion, which combines tomato leaf disease image and text description to enhance the ability to capture disease characteristics. In this paper, the Fourier-guided Attention Mechanism (FGAM) is designed, which systematically embeds the Fourier frequency-domain information into the spatial-channel attention structure for the first time, enhances the stability and noise resistance of feature expression through spectral transform, and realizes more accurate lesion location by means of multi-scale fusion of local and global features. In order to realize the deep semantic interaction between image and text modality, a Cross Vision–Language Alignment module (CVLA) is further proposed. This module generates visual representations compatible with Bert embeddings by utilizing block segmentation and feature mapping techniques. Additionally, it incorporates a probability-based weighting mechanism to achieve enhanced multimodal fusion, significantly strengthening the model’s comprehension of semantic relationships across different modalities. Furthermore, to enhance both training efficiency and parameter optimization capabilities of the model, we introduce a Multi-strategy Improved Coati Optimization Algorithm (MSCOA). This algorithm integrates Good Point Set initialization with a Golden Sine search strategy, thereby boosting global exploration, accelerating convergence, and effectively preventing entrapment in local optima. Consequently, it exhibits robust adaptability and stable performance within high-dimensional search spaces. The experimental results show that the FCMNet model has increased the accuracy and precision by 2.61% and 2.85%, respectively, compared with the baseline model on the self-built dataset of tomato leaf diseases, and the recall and F1 score have increased by 3.03% and 3.06%, respectively, which is significantly superior to the existing methods. This research provides a new solution for the identification of tomato leaf diseases and has broad potential for agricultural applications. Full article
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
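
To hint at how spectral information can steer channel attention, here is a toy frequency-domain gate; it only gestures at the general idea of Fourier-guided attention, is not the FGAM design, and uses arbitrary sizes.

```python
import torch
import torch.nn as nn

class FourierChannelGate(nn.Module):
    """Toy spectral gating: summarize each channel by its mean FFT magnitude
    and reweight channels with a learned sigmoid gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        spectrum = torch.fft.fft2(x).abs()      # per-channel magnitude spectrum
        descriptor = spectrum.mean(dim=(2, 3))  # (B, C) spectral summary
        gate = self.mlp(descriptor)[:, :, None, None]
        return x * gate                         # channel-wise reweighting

gate = FourierChannelGate(channels=8)
print(gate(torch.randn(2, 8, 32, 32)).shape)   # torch.Size([2, 8, 32, 32])
```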

17 pages, 2864 KiB  
Article
A Deep-Learning-Based Diffusion Tensor Imaging Pathological Auto-Analysis Method for Cervical Spondylotic Myelopathy
by Shuoheng Yang, Junpeng Li, Ningbo Fei, Guangsheng Li and Yong Hu
Bioengineering 2025, 12(8), 806; https://doi.org/10.3390/bioengineering12080806 - 27 Jul 2025
Abstract
Pathological conditions of the spinal cord have been found to be associated with cervical spondylotic myelopathy (CSM). This study aims to explore the feasibility of automatic deep-learning-based classification of the pathological condition of the spinal cord to quantify its severity. A Diffusion Tensor Imaging (DTI)-based spinal cord pathological assessment method was proposed. A multi-dimensional feature fusion model, referred to as DCSANet-MD (DTI-Based CSM Severity Assessment Network-Multi-Dimensional), was developed to extract both 2D and 3D features from DTI slices, incorporating a feature integration mechanism to enhance the representation of spatial information. To evaluate this method, 176 CSM patients with cervical DTI slices and clinical records were collected. The proposed assessment model demonstrated an accuracy of 82% in predicting two categories of severity levels (mild and severe). Furthermore, in a more refined three-category severity classification (mild, moderate, and severe), using a hierarchical classification strategy, the model achieved an accuracy of approximately 68%, which significantly exceeded the baseline performance. In conclusion, these findings highlight the potential of the deep-learning-based method as a decision-making support tool for DTI-based pathological assessments of CSM, offering great value in monitoring disease progression and guiding the intervention strategies. Full article
(This article belongs to the Special Issue Artificial Intelligence-Based Medical Imaging Processing)
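
The hierarchical strategy for three-level severity grading can be pictured as two stacked binary decisions; the thresholds and the use of two independent classifiers are assumptions for illustration, not the published pipeline.

```python
def hierarchical_severity(p_severe: float, p_moderate_given_not_severe: float) -> str:
    """Toy two-stage hierarchy: first decide severe vs. not severe, then split
    the remainder into moderate vs. mild. Probabilities come from two
    hypothetical classifiers."""
    if p_severe >= 0.5:
        return "severe"
    return "moderate" if p_moderate_given_not_severe >= 0.5 else "mild"

print(hierarchical_severity(0.12, 0.71))  # -> "moderate"
print(hierarchical_severity(0.64, 0.30))  # -> "severe"
```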

21 pages, 5527 KiB  
Article
SGNet: A Structure-Guided Network with Dual-Domain Boundary Enhancement and Semantic Fusion for Skin Lesion Segmentation
by Haijiao Yun, Qingyu Du, Ziqing Han, Mingjing Li, Le Yang, Xinyang Liu, Chao Wang and Weitian Ma
Sensors 2025, 25(15), 4652; https://doi.org/10.3390/s25154652 - 27 Jul 2025
Abstract
Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts, such as hair interference. Conventional deep learning methods, typically based on UNet or Transformer architectures, often face limitations in regard to fully exploiting lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network, integrating a hybrid CNN–Mamba framework for robust skin lesion segmentation. The SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps to provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments based on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet’s superior performance, with average improvements of 3.30% in terms of the mean Intersection over Union (mIoU) value and 1.77% in regard to the Dice Similarity Coefficient (DSC) compared to state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet’s exceptional accuracy and robust generalization for computer-aided dermatological diagnosis. Full article
(This article belongs to the Section Biomedical Sensors)
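
The dual-domain (spatial plus frequency) boundary idea can be hinted at with a simple high-pass step in the Fourier domain; the cutoff and shapes are arbitrary assumptions and this is not the DDBE module itself.

```python
import torch

def highpass_boundary_map(x, keep_low: int = 4):
    """Toy frequency-domain boundary cue: zero out a small low-frequency block
    of the 2D FFT and invert, leaving mostly edge/boundary content."""
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    h, w = x.shape[-2:]
    cy, cx = h // 2, w // 2
    freq[..., cy - keep_low:cy + keep_low, cx - keep_low:cx + keep_low] = 0
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real

img = torch.randn(1, 1, 64, 64)
print(highpass_boundary_map(img).shape)  # torch.Size([1, 1, 64, 64])
```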

25 pages, 4344 KiB  
Article
YOLO-DFAM-Based Onboard Intelligent Sorting System for Portunus trituberculatus
by Penglong Li, Shengmao Zhang, Hanfeng Zheng, Xiumei Fan, Yonchuang Shi, Zuli Wu and Heng Zhang
Fishes 2025, 10(8), 364; https://doi.org/10.3390/fishes10080364 - 25 Jul 2025
Abstract
This study addresses the challenges of manual measurement bias and low robustness in detecting small, occluded targets in complex marine environments during real-time onboard sorting of Portunus trituberculatus. We propose YOLO-DFAM, an enhanced YOLOv11n-based model that replaces the global average pooling in the Focal Modulation module with a spatial–channel dual-attention mechanism and incorporates the ASF-YOLO cross-scale fusion strategy to improve feature representation across varying target sizes. These enhancements significantly boost detection, achieving an mAP@50 of 98.0% and precision of 94.6%, outperforming RetinaNet-CSL and Rotated Faster R-CNN by up to 6.3% while maintaining real-time inference at 180.3 FPS with only 7.2 GFLOPs. Unlike prior static-scene approaches, our unified framework integrates attention-guided detection, scale-adaptive tracking, and lightweight weight estimation for dynamic marine conditions. A ByteTrack-based tracking module with dynamic scale calibration, EMA filtering, and optical flow compensation ensures stable multi-frame tracking. Additionally, a region-specific allometric weight estimation model (R2 = 0.9856) reduces dimensional errors by 85.7% and maintains prediction errors below 4.7% using only 12 spline-interpolated calibration sets. YOLO-DFAM provides an accurate, efficient solution for intelligent onboard fishery monitoring. Full article
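
The region-specific allometric weight estimator mentioned above follows the general form W = a * L^b; the sketch below fits such a curve on invented carapace-width/weight pairs and is not the calibration used in the paper.

```python
import numpy as np

# Synthetic carapace-width (cm) / weight (g) pairs; fit W = a * L**b on
# log-transformed data. All numbers are invented for illustration.
length = np.array([8.0, 9.5, 11.0, 12.5, 14.0, 15.5])
weight = np.array([55.0, 92.0, 140.0, 210.0, 300.0, 410.0])

b, log_a = np.polyfit(np.log(length), np.log(weight), 1)   # slope, intercept
a = np.exp(log_a)
predicted = a * length ** b
r2 = 1 - np.sum((weight - predicted) ** 2) / np.sum((weight - weight.mean()) ** 2)
print(f"W ~ {a:.3f} * L^{b:.2f}, R^2 = {r2:.4f}")
```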

25 pages, 19515 KiB  
Article
Towards Efficient SAR Ship Detection: Multi-Level Feature Fusion and Lightweight Network Design
by Wei Xu, Zengyuan Guo, Pingping Huang, Weixian Tan and Zhiqi Gao
Remote Sens. 2025, 17(15), 2588; https://doi.org/10.3390/rs17152588 - 24 Jul 2025
Abstract
Synthetic Aperture Radar (SAR) provides all-weather, all-time imaging capabilities, enabling reliable maritime ship detection under challenging weather and lighting conditions. However, most high-precision detection models rely on complex architectures and large-scale parameters, limiting their applicability to resource-constrained platforms such as satellite-based systems, where model size, computational load, and power consumption are tightly restricted. Thus, guided by the principles of lightweight design, robustness, and energy efficiency optimization, this study proposes a three-stage collaborative multi-level feature fusion framework to reduce model complexity without compromising detection performance. Firstly, the backbone network integrates depthwise separable convolutions and a Convolutional Block Attention Module (CBAM) to suppress background clutter and extract effective features. Building upon this, a cross-layer feature interaction mechanism is introduced via the Multi-Scale Coordinated Fusion (MSCF) and Bi-EMA Enhanced Fusion (Bi-EF) modules to strengthen joint spatial-channel perception. To further enhance the detection capability, Efficient Feature Learning (EFL) modules are embedded in the neck to improve feature representation. Experiments on the Synthetic Aperture Radar (SAR) Ship Detection Dataset (SSDD) show that this method, with only 1.6 M parameters, achieves a mean average precision (mAP) of 98.35% in complex scenarios, including inshore and offshore environments. It resolves the long-standing trade-off between detection accuracy and hardware resource requirements that constrains traditional methods, providing a new technical path for real-time SAR ship detection on satellite platforms. Full article
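
The lightweight backbone ingredient named above, the depthwise separable convolution, can be compared against a dense convolution in a few lines; the channel sizes are arbitrary examples and this is not the paper's backbone.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 followed by pointwise 1x1: the standard lightweight
    replacement for a dense 3x3 convolution."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

sep = DepthwiseSeparableConv(64, 128)
dense = nn.Conv2d(64, 128, 3, padding=1)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(sep), "vs", count(dense))  # ~8.9k vs ~73.9k parameters
```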
