Search Results (965)

Search Parameters:
Keywords = metric fusion

31 pages, 1067 KiB  
Article
Green Supplier Evaluation in E-Commerce Systems: An Integrated Rough-Dombi BWM-TOPSIS Approach
by Qigan Shao, Simin Liu, Jiaxin Lin, James J. H. Liou and Dan Zhu
Systems 2025, 13(9), 731; https://doi.org/10.3390/systems13090731 (registering DOI) - 23 Aug 2025
Abstract
The rapid growth of e-commerce has created substantial environmental impacts, driving the need for advanced optimization models to enhance supply chain sustainability. As consumer preferences shift toward environmental responsibility, organizations must adopt robust quantitative methods to reduce ecological footprints while ensuring operational efficiency. This study develops a novel hybrid multi-criteria decision-making (MCDM) model to evaluate and prioritize green suppliers under uncertainty, integrating the rough-Dombi best–worst method (BWM) and an improved Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The proposed model addresses two key challenges: (1) inconsistency in expert judgments through rough set theory and Dombi aggregation operators and (2) ranking instability via an enhanced TOPSIS formulation that mitigates rank reversal. Mathematically, the rough-Dombi BWM leverages interval-valued rough numbers to model subjective expert preferences, while the Dombi operator ensures flexible and precise weight aggregation. The modified TOPSIS incorporates a dynamic distance metric to strengthen ranking robustness. A case study of five e-commerce suppliers validates the model’s effectiveness, with results identifying cost, green competitiveness, and external environmental management as the dominant evaluation dimensions. Key indicators—such as product price, pollution control, and green design—are rigorously prioritized using the proposed framework. Theoretical contributions include (1) a new rough-Dombi fusion for criteria weighting under uncertainty and (2) a stabilized TOPSIS variant with reduced sensitivity to data perturbations. Practically, the model provides e-commerce enterprises with a computationally efficient tool for sustainable supplier selection, enhancing resource allocation and green innovation. 
This study advances the intersection of uncertainty modeling, operational research, and sustainability analytics, offering scalable methodologies for mathematical decision-making in supply chain contexts. Full article
(This article belongs to the Section Supply Chain Management)
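For readers unfamiliar with the baseline the paper modifies: the classical TOPSIS closeness coefficient ranks alternatives by their distances to the ideal and anti-ideal points. The sketch below is the standard formulation only, not the authors' rough-Dombi BWM weighting or their rank-reversal-mitigating variant; function and parameter names are illustrative.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """Classic TOPSIS: matrix is alternatives x criteria, weights sum to any
    positive total, benefit[j] is True for benefit criteria, False for cost."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal points; direction flips for cost criteria.
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient in [0, 1]
    return scores
```

An alternative dominating every criterion gets closeness 1; the paper's contribution replaces the fixed Euclidean distance with a dynamic metric to stabilize this ranking.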
26 pages, 2363 KiB  
Article
An Analysis and Simulation of Security Risks in Radar Networks from the Perspective of Cybersecurity
by Runyang Chen, Yi Zhang, Xiuhe Li and Jinhe Ran
Sensors 2025, 25(17), 5239; https://doi.org/10.3390/s25175239 (registering DOI) - 23 Aug 2025
Abstract
Radar networks, composed of multiple radar stations and a fusion center interconnected via communication technologies, are widely used in civil aviation and maritime operations. Ensuring the security of radar networks is crucial. While their strong anti-jamming capabilities make traditional electronic countermeasures less effective, the openness and vulnerability of their network architecture expose them to cybersecurity risks. Current research on radar network security risk analysis from a cybersecurity perspective remains insufficient, necessitating further study to provide theoretical support for defense strategies. Taking centralized radar networks as an example, this paper first analyzes their architecture and potential cybersecurity risks, identifying a threat where attackers could potentially execute false data injection attacks (FDIAs) against the fusion center via man-in-the-middle attacks (MITMAs). A threat model is then established, outlining possible attack procedures and methods, along with defensive recommendations and evaluation metrics. Furthermore, for scenarios involving single-link control without traffic increase, the impact of different false data construction methods is examined. Simulation experiments validate the findings, showing that the average position offset increases from 8.38 m to 78.35 m after false data injection. This result confirms significant security risks under such threats, providing a reference for future countermeasure research. Full article
(This article belongs to the Section Sensors Development)
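The "average position offset" the abstract reports (8.38 m rising to 78.35 m after injection) reads, in the usual sense, as the mean Euclidean distance between paired fused track positions with and without the attack. A minimal sketch of that evaluation metric, with the function name and 2-D point format being my own assumptions:

```python
import math

def average_position_offset(clean_tracks, attacked_tracks):
    """Mean Euclidean distance (metres) between paired fused positions,
    e.g. the same track points before and after false data injection."""
    assert len(clean_tracks) == len(attacked_tracks)
    total = sum(math.dist(p, q) for p, q in zip(clean_tracks, attacked_tracks))
    return total / len(clean_tracks)
```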
12 pages, 1703 KiB  
Article
Transperineal MRI-US Fusion-Guided Biopsy with Systematic Sampling for Prostate Cancer: Diagnostic Accuracy and Clinical Implications Across PI-RADS
by Valèria Richart, Meritxell Costa, María Muní, Ignacio Asiain, Rafael Salvador, Josep Puig, Leonardo Rodriguez-Carunchio, Belinda Salinas, Marc Comas-Cufí and Carlos Nicolau
Cancers 2025, 17(17), 2735; https://doi.org/10.3390/cancers17172735 - 22 Aug 2025
Abstract
Background/Objectives: Magnetic resonance imaging (MRI) and MRI–ultrasound (US) fusion-targeted biopsy have improved prostate cancer diagnosis, particularly for clinically significant disease. However, the added value of combining systematic biopsy with targeted biopsy remains debated. This study aimed to evaluate the diagnostic accuracy of MRI–US fusion-targeted and systematic transperineal biopsies in detecting prostate cancer and explore the correlation between PI-RADS score and histology. Methods: We retrospectively analyzed 356 patients with 452 MRI-detected lesions who underwent both MRI–US fusion-targeted and transperineal systematic biopsies between 2020 and 2023. Clinically significant prostate cancer (csPCa) was defined as International Society of Urological Pathology (ISUP) grade ≥ 2. Diagnostic performance metrics (sensitivity, specificity, and accuracy) were calculated for each technique using the combined result as a reference. Subgroup analysis was performed for patients under active surveillance. Results: Prostate cancer was diagnosed in 323 of 452 lesions (71%) and csPCa in 223 lesions (49%). Targeted biopsy demonstrated higher sensitivity (93.7%) and accuracy (79.9%) than systematic biopsy (85.7% sensitivity and 77.6% accuracy), although systematic biopsy provided slightly higher specificity. Systematic biopsy alone identified 8.2% of PCa cases missed by targeted biopsy and upgraded 9.9% of lesions to csPCa. csPCa detection increased with PI-RADS score (23% in PI-RADS 3 and 73% in PI-RADS 5). In active surveillance patients, csPCa was found in 65% of lesions. Conclusions: MRI–US fusion-targeted biopsy improves csPCa detection, but systematic biopsy remains valuable, especially for identifying additional or higher-grade disease. The combined approach provides an optimal diagnostic yield, supporting its continued use in both initial and repeat biopsy settings. Full article
22 pages, 5943 KiB  
Article
LiteCOD: Lightweight Camouflaged Object Detection via Holistic Understanding of Local-Global Features and Multi-Scale Fusion
by Abbas Khan, Hayat Ullah and Arslan Munir
AI 2025, 6(9), 197; https://doi.org/10.3390/ai6090197 - 22 Aug 2025
Abstract
Camouflaged object detection (COD) represents one of the most challenging tasks in computer vision, requiring sophisticated approaches to accurately extract objects that seamlessly blend within visually similar backgrounds. While contemporary techniques demonstrate promising detection performance, they predominantly suffer from computational complexity and resource requirements that severely limit their deployment in real-time applications, particularly on mobile devices and edge computing platforms. To address these limitations, we propose LiteCOD, an efficient lightweight framework that integrates local and global perceptions through holistic feature fusion and specially designed efficient attention mechanisms. Our approach achieves superior detection accuracy while maintaining computational efficiency essential for practical deployment, with enhanced feature propagation and minimal computational overhead. Extensive experiments validate LiteCOD’s effectiveness, demonstrating that it surpasses existing lightweight methods with average improvements of 7.55% in the F-measure and 8.08% overall performance gain across three benchmark datasets. Our results indicate that our framework consistently outperforms 20 state-of-the-art methods across quantitative metrics, computational efficiency, and overall performance while achieving real-time inference capabilities with a significantly reduced parameter count of 5.15M parameters. LiteCOD establishes a practical solution bridging the gap between detection accuracy and deployment feasibility in resource-constrained environments. Full article
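The F-measure the abstract reports gains on is the weighted harmonic mean of precision and recall. A sketch of the general F-beta form (COD papers often use beta squared = 0.3 to emphasize precision, but the exact weighting used here is not stated in the abstract, so treat the default as an assumption):

```python
def f_measure(precision, recall, beta=1.0):
    """F-beta: weighted harmonic mean of precision and recall.
    beta < 1 emphasizes precision, beta > 1 emphasizes recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```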
14 pages, 7081 KiB  
Article
SupGAN: A General Super-Resolution GAN-Promoting Training Method
by Tao Wu, Shuo Xiong, Qiuhang Chen, Huaizheng Liu, Weijun Cao and Haoran Tuo
Appl. Sci. 2025, 15(17), 9231; https://doi.org/10.3390/app15179231 - 22 Aug 2025
Abstract
An image super-resolution (SR) method based on Generative Adversarial Networks (GANs) has achieved impressive results in terms of visual performance. However, the weights of loss functions in these methods are usually set to fixed values manually, which cannot fully adapt to different datasets and tasks, and may result in a decrease in the perceptual effect of the SR images. To address this issue and further improve visual quality, we propose a perception-driven SupGAN, which improves the generator and loss function of GAN-based image super-resolution models. The generator adopts multi-scale feature extraction and fusion to restore SR images with diverse and fine textures. We design a network-training method based on the proportion of high-frequency information in images (BHFTM), which utilizes the proportion of high-frequency information in images obtained through the Canny operator to set the weights of the loss function. In addition, we employ the four-patch method to better simulate the degradation of complex real-world scenarios. We extensively test our method and compare it with recent SR methods (BSRGAN, Real-ESRGAN, RealSR, SwinIR, LDL, etc.) on different types of datasets (OST300, 2020track1, RealWorld38, BSDS100 etc.) with a scaling factor of ×4. The results show that the NIQE metric improves, and also demonstrate that SupGAN can generate more natural and fine textures while suppressing unpleasant artifacts. Full article
(This article belongs to the Special Issue Collaborative Learning and Optimization Theory and Its Applications)
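The BHFTM idea — setting loss weights from the proportion of high-frequency content — can be sketched once an edge mask is available (in the paper this comes from the Canny operator). The mask format and the linear weight mapping below are illustrative assumptions, not the authors' exact scheme:

```python
def high_freq_proportion(edge_mask):
    """Fraction of pixels flagged as edges in a binary mask (rows of 0/1),
    used as a proxy for the image's high-frequency content."""
    total = sum(len(row) for row in edge_mask)
    edges = sum(sum(row) for row in edge_mask)
    return edges / total

def perceptual_loss_weight(proportion, w_min=0.1, w_max=1.0):
    # Illustrative linear mapping: richer texture -> larger perceptual weight.
    return w_min + (w_max - w_min) * proportion
```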
31 pages, 6069 KiB  
Article
Multi-View Clustering-Based Outlier Detection for Converter Transformer Multivariate Time-Series Data
by Yongjie Shi, Jiang Guo, Jiale Tian, Tongqiang Yi, Yang Meng and Zhong Tian
Sensors 2025, 25(17), 5216; https://doi.org/10.3390/s25175216 - 22 Aug 2025
Abstract
Online monitoring systems continuously collect massive multivariate time-series data from converter transformers. Accurate outlier detection in these data is essential for identifying sensor faults, communication errors, and incipient equipment failures, thereby ensuring reliable condition assessment and maintenance decisions. However, the complex characteristics of transformer monitoring data—including non-Gaussian distributions from diverse operational modes, high dimensionality, and multi-scale temporal dependencies—render traditional outlier detection methods ineffective. This paper proposes a Multi-View Clustering-based Outlier Detection (MVCOD) framework that addresses these challenges through complementary data representations. The framework constructs four complementary data views—raw-differential, multi-scale temporal, density-enhanced, and manifold representations—and applies four detection algorithms (K-means, HDBSCAN, OPTICS, and Isolation Forest) to each view. An adaptive fusion mechanism dynamically weights the 16 detection results based on quality and complementarity metrics. Extensive experiments on 800 kV converter transformer operational data demonstrate that MVCOD achieves a Silhouette Coefficient of 0.68 and an Outlier Separation Score of 0.81, representing 30.8% and 35.0% improvements over the best baseline method, respectively. The framework successfully identifies 10.08% of data points as outliers with feature-level localization capabilities. This work provides an effective and interpretable solution for ensuring data quality in converter transformer monitoring systems, with potential applications to other complex industrial time-series data. Full article
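The fusion step — combining the 16 per-view detector outputs into one score — can be sketched in its simplest form as a weighted average of min-max-normalized outlier scores. MVCOD's adaptive quality/complementarity weighting is more involved; the fixed-weight version below is only a baseline illustration:

```python
def fuse_outlier_scores(score_lists, weights):
    """Weighted average of per-detector outlier scores after per-detector
    min-max normalization, so detectors on different scales are comparable."""
    def minmax(scores):
        lo, hi = min(scores), max(scores)
        return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]
    normed = [minmax(s) for s in score_lists]
    wsum = sum(weights)
    n = len(score_lists[0])
    return [sum(w * ns[i] for w, ns in zip(weights, normed)) / wsum
            for i in range(n)]
```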
22 pages, 9182 KiB  
Article
Sensor Synergy in Bathymetric Mapping: Integrating Optical, LiDAR, and Echosounder Data Using Machine Learning
by Emre Gülher and Ugur Alganci
Remote Sens. 2025, 17(16), 2912; https://doi.org/10.3390/rs17162912 - 21 Aug 2025
Abstract
Bathymetry, the measurement of water depth and underwater terrain, is vital for scientific, commercial, and environmental applications. Traditional methods like shipborne echosounders are costly and inefficient in shallow waters due to limited spatial coverage and accessibility. Emerging technologies such as satellite imagery, drones, and spaceborne LiDAR offer cost-effective and efficient alternatives. This research explores integrating multi-sensor datasets to enhance bathymetric mapping in coastal and inland waters by leveraging each sensor’s strengths. The goal is to improve spatial coverage, resolution, and accuracy over traditional methods using data fusion and machine learning. Gülbahçe Bay in İzmir, Turkey, serves as the study area. Bathymetric modeling uses Sentinel-2, Göktürk-1, and aerial imagery with varying resolutions and sensor characteristics. Model calibration evaluates independent and integrated use of single-beam echosounder (SBE) and satellite-based LiDAR (ICESat-2) during training. After preprocessing, Random Forest and Extreme Gradient Boosting algorithms are applied for bathymetric inference. Results are assessed using accuracy metrics and IHO CATZOC standards, achieving A1 level for 0–10 m, A2/B for 0–15 m, and C level for 0–20 m depth intervals. Full article
(This article belongs to the Section Environmental Remote Sensing)
30 pages, 3080 KiB  
Article
A High-Acceptance-Rate VxWorks Fuzzing Framework Based on Protocol Feature Fusion and Memory Extraction
by Yichuan Wang, Jiazhao Han, Xi Deng and Xinhong Hei
Future Internet 2025, 17(8), 377; https://doi.org/10.3390/fi17080377 - 21 Aug 2025
Abstract
With the widespread application of Internet of Things (IoT) devices, the security of embedded systems faces severe challenges. As an embedded operating system widely used in critical mission scenarios, the security of the TCP stack in VxWorks directly affects system reliability. However, existing protocol fuzzing methods based on network communication struggle to adapt to the complex state machine and grammatical rules of the TCP. Additionally, the lack of a runtime feedback mechanism for closed-source VxWorks systems leads to low testing efficiency. This paper proposes the vxTcpFuzzer framework, which generates structured test cases by integrating the field features of the TCP. Innovatively, it uses the memory data changes of VxWorks network protocol processing tasks as a coverage metric and combines a dual anomaly detection mechanism (WDB detection and heartbeat detection) to achieve precise anomaly capture. We conducted experimental evaluations on three VxWorks system devices, where vxTcpFuzzer successfully triggered multiple potential vulnerabilities, verifying the framework’s effectiveness. Compared with three existing classic fuzzing schemes, vxTcpFuzzer demonstrates significant advantages in test case acceptance rates (44.94–54.92%) and test system abnormal rates (23.79–34.70%) across the three VxWorks devices. The study confirms that protocol feature fusion and memory feedback mechanisms can effectively enhance the depth and efficiency of protocol fuzzing for VxWorks systems. Furthermore, this approach offers a practical and effective solution for uncovering TCP vulnerabilities in black-box environments. Full article
(This article belongs to the Special Issue Secure Integration of IoT and Cloud Computing)
22 pages, 311 KiB  
Article
A Dempster–Shafer, Fusion-Based Approach for Malware Detection
by Patricio Galdames, Simon Yusuf Enoch, Claudio Gutiérrez-Soto and Marco A. Palomino
Mathematics 2025, 13(16), 2677; https://doi.org/10.3390/math13162677 - 20 Aug 2025
Abstract
Dempster–Shafer theory (DST), a generalization of probability theory, is well suited for managing uncertainty and integrating information from diverse sources. In recent years, DST has gained attention in cybersecurity research. However, despite the growing interest, there is still a lack of systematic comparisons of DST implementation strategies for malware detection. In this paper, we present a comprehensive evaluation of DST-based ensemble mechanisms for malware detection, addressing critical methodological questions regarding optimal mass function construction and combination rules. Our systematic analysis was tested with 630,504 benign and malicious samples collected from four public datasets (BODMAS, DREBIN, AndroZoo, and BMPD) to train malware detection models. We explored three approaches for converting classifier outputs into probability mass functions: global confidence using fixed values derived from performance metrics, class-specific confidence with separate values for each class, and computationally optimized confidence values. The results establish that all approaches yield comparable performance, although fixed values offer significant computational and interpretability advantages. Additionally, we introduced a novel linear combination rule for evidence fusion, which delivers results on par with conventional DST rules while enhancing interpretability. Our experiments show consistently low false positive rates—ranging from 0.16% to 3.19%. This comprehensive study provides the first systematic methodology comparison for DST-based malware detection, establishing evidence-based guidelines for practitioners on optimal implementation strategies. Full article
(This article belongs to the Special Issue Analytical Frameworks and Methods for Cybersecurity, 2nd Edition)
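The evidence-fusion core the paper evaluates against is Dempster's rule of combination: pairwise products of masses are kept where focal sets intersect and renormalized by the discounted conflict. A minimal sketch of the standard rule (the malware/benign frame and mass values are illustrative; the paper's novel linear combination rule is not shown):

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions mapping frozenset focal
    elements to masses; conflicting (disjoint) mass is renormalized away."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

With two classifiers each placing most mass on "malware" and the rest on the full frame, combination concentrates belief on "malware" while retaining residual uncertainty.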
19 pages, 2717 KiB  
Article
EASD: Exposure Aware Single-Step Diffusion Framework for Monocular Depth Estimation in Autonomous Vehicles
by Chenyuan Zhang and Deokwoo Lee
Appl. Sci. 2025, 15(16), 9130; https://doi.org/10.3390/app15169130 - 19 Aug 2025
Abstract
Monocular depth estimation (MDE) is a cornerstone of computer vision and is applied to diverse practical areas such as autonomous vehicles, robotics, etc., yet even the latest methods suffer substantial errors in high-dynamic-range (HDR) scenes where over- or under-exposure erases critical texture. To address this challenge in real-world autonomous driving scenarios, we propose the Exposure-Aware Single-Step Diffusion Framework for Monocular Depth Estimation (EASD). EASD leverages a pre-trained Stable Diffusion variational auto-encoder, freezing its encoder to extract exposure-robust latent RGB and depth representations. A single-step diffusion process then predicts the clean depth latent vector, eliminating iterative error accumulation and enabling real-time inference suitable for autonomous vehicle perception pipelines. To further enhance robustness under extreme lighting conditions, EASD introduces an Exposure-Aware Feature Fusion (EAF) module—an attention-based pyramid that dynamically modulates multi-scale features according to global brightness statistics. This mechanism suppresses bias in saturated regions while restoring detail in under-exposed areas. Furthermore, an Exposure-Balanced Loss (EBL) jointly optimises global depth accuracy, local gradient coherence and reliability in exposure-extreme regions—key metrics for safety-critical perception tasks such as obstacle detection and path planning. Experimental results on NYU-v2, KITTI, and related benchmarks demonstrate that EASD reduces absolute relative error by an average of 20% under extreme illumination, using only 60,000 labelled images. The framework achieves real-time performance (<50 ms per frame) and strikes a superior balance between accuracy, computational efficiency, and data efficiency, offering a promising solution for robust monocular depth estimation in challenging automotive lighting conditions such as tunnel transitions, night driving and sun glare. Full article
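The "absolute relative error" by which EASD's 20% improvement is measured is the standard MDE metric: the mean of |d - d*| / d* over valid pixels. A minimal sketch over flattened depth values (function name is my own):

```python
def abs_rel_error(pred_depths, gt_depths):
    """Standard AbsRel metric for depth estimation: mean(|d - d*| / d*)."""
    assert len(pred_depths) == len(gt_depths)
    return sum(abs(p - g) / g
               for p, g in zip(pred_depths, gt_depths)) / len(gt_depths)
```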
18 pages, 1460 KiB  
Article
Sustainable Optimization Design of Architectural Space Based on Visual Perception and Multi-Objective Decision Making
by Qunjing Ji, Yu Cai and Osama Sohaib
Buildings 2025, 15(16), 2940; https://doi.org/10.3390/buildings15162940 - 19 Aug 2025
Abstract
This study proposes an integrated computational framework that combines deep learning-based visual perception analysis with multi-criteria decision making to optimize indoor architectural layouts in terms of both visual coherence and sustainability. The framework initially employs a deep learning method leveraging edge pixel feature recombination to extract critical spatial layout features and determine key visual focal points. A fusion model is then constructed to preprocess visual representations of interior layouts. Subsequently, an evolutionary deep learning algorithm is adopted to optimize parameter convergence and enhance feature extraction accuracy. To support comprehensive evaluation and decision making, an improved Analytic Hierarchy Process (AHP) is integrated with the entropy weight method, enabling the fusion of objective, data-driven weights with subjective expert judgments. This dual-focus framework addresses two pressing challenges in architectural optimization: sensitivity to building-specific spatial features and the traditional disconnect between perceptual analysis and sustainability metrics. Experimental results on a dataset of 25,400 building images demonstrate that the proposed method achieves a feature detection accuracy of 92.3%, surpassing CNN (73.6%), RNN (68.2%), and LSTM (75.1%) baselines, while reducing the processing time to under 0.95 s and lowering the carbon footprint to 17.8% of conventional methods. These findings underscore the effectiveness and practicality of the proposed model in facilitating intelligent, sustainable architectural design. Full article
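The entropy weight method the framework fuses with AHP derives objective criterion weights from data dispersion: criteria whose values differ more across alternatives carry more information and get larger weights. A sketch of the standard method (the fusion with AHP's subjective weights is the paper's own step and is not shown):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: objective criterion weights from dispersion.
    matrix: alternatives x criteria, non-negative values."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)  # normalizes entropy into [0, 1]
    degrees = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [c / total for c in col]
        entropy = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        degrees.append(1.0 - entropy)  # divergence degree of criterion j
    total_d = sum(degrees)
    return [d / total_d for d in degrees]
```

A criterion with identical values for every alternative has maximum entropy and weight 0; all weight flows to the discriminating criteria.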
21 pages, 25577 KiB  
Article
DFFNet: A Dual-Domain Feature Fusion Network for Single Remote Sensing Image Dehazing
by Huazhong Jin, Zhang Chen, Zhina Song and Kaimin Sun
Sensors 2025, 25(16), 5125; https://doi.org/10.3390/s25165125 - 18 Aug 2025
Abstract
Single remote sensing image dehazing aims to eliminate atmospheric scattering effects without auxiliary information. It serves as a crucial preprocessing step for enhancing the performance of downstream tasks in remote sensing images. Conventional approaches often struggle to balance haze removal and detail restoration under non-uniform haze distributions. To address this issue, we propose a Dual-domain Feature Fusion Network (DFFNet) for remote sensing image dehazing. DFFNet consists of two specialized units: the Frequency Restore Unit (FRU) and the Context Extract Unit (CEU). As haze primarily manifests as low-frequency energy in the frequency domain, the FRU effectively suppresses haze across the entire image by adaptively modulating low-frequency amplitudes. Meanwhile, to reconstruct details attenuated due to dense haze occlusion, we introduce the CEU. This unit extracts multi-scale spatial features to capture contextual information, providing structural guidance for detail reconstruction. Furthermore, we introduce the Dual-Domain Feature Fusion Module (DDFFM) to establish dependencies between features from FRU and CEU via a designed attention mechanism. This leverages spatial contextual information to guide detail reconstruction during frequency domain haze removal. Experiments on the StateHaze1k, RICE and RRSHID datasets demonstrate that DFFNet achieves competitive performance in both visual quality and quantitative metrics. Full article
27 pages, 6697 KiB  
Article
DCEDet: Tiny Object Detection in Remote Sensing Images Based on Dual-Contrast Feature Enhancement and Dynamic Distance Measurement
by Xinkai Hu, Zhida Ren, Uzair Aslam Bhatti, Mengxing Huang and Yirong Wu
Remote Sens. 2025, 17(16), 2876; https://doi.org/10.3390/rs17162876 - 18 Aug 2025
Abstract
Recent advances in deep learning have significantly improved remote sensing object detection (RSOD). However, tiny object detection (TOD) remains challenging due to two main issues: (1) limited appearance cues and (2) the traditional Intersection over Union (IoU)-based label assignment strategy, which struggles to identify enough positive samples. To address these, we propose DCEDet, a new tiny object detector for remote sensing images that enhances feature representation and optimizes label assignment. Specifically, we first design a dual-contrast feature enhancement structure, i.e., the Group-Single Context Enhancement Module (GSCEM) and Global-Local Feature Fusion Module (GLFFM). Among them, the GSCEM is designed to extract contextual enhancement features as supplementary information for TOD. The GLFFM is a feature fusion module devised to integrate both global object distribution and local detail information, aiming to prevent information loss and enhance the localization of tiny objects. In addition, the Normalized Distance and Difference Metric (NDDM) is designed as a dynamic distance measurement that enhances class representation and localization performance in TOD, thereby optimizing the training process. Finally, we conduct extensive experiments on two typical tiny object datasets, i.e., AI-TODv2 and LEVIR-SHIP, achieving optimal results of 27.8% APt and 81.2% AP50. The experimental results demonstrate the effectiveness and superiority of our method. Full article
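The IoU-based assignment the abstract criticizes is the standard Intersection over Union between axis-aligned boxes; for tiny objects even a few pixels of offset drives IoU toward zero, starving the assigner of positive samples, which is the gap the NDDM targets. The sketch below is plain IoU, not the NDDM itself:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

Note that non-overlapping boxes score exactly 0 regardless of how close they are, which is why distance-based measures behave better for tiny objects.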
16 pages, 829 KiB  
Article
Evaluating the Efficacy of a Novel Titanium Cage System in ALIF and LLIF: A Retrospective Clinical and Radiographic Analysis
by Ryan W. Turlip, Mert Marcel Dagli, Richard J. Chung, Daksh Chauhan, Richelle J. Kim, Julia Kincaid, Hasan S. Ahmad, Yohannes Ghenbot and Jang Won Yoon
J. Clin. Med. 2025, 14(16), 5814; https://doi.org/10.3390/jcm14165814 - 17 Aug 2025
Abstract
Background/Objectives: The success of lumbar interbody fusion depends on the implant design and the surgical approach used. This study evaluated the clinical and radiographic outcomes of lateral lumbar interbody fusion (LLIF) and anterior lumbar interbody fusion (ALIF) using a 3D-printed porous titanium interbody cage system. Methods: A retrospective, single-center review of 48 patients treated for degenerative lumbar spine disease was conducted. Patients underwent LLIF, ALIF, or a combination of both using a 3D-printed titanium cage system (J&J MedTech, Raynham, MA, USA). The Oswestry Disability Index (ODI) and Patient-Reported Outcomes Measurement Information System (PROMIS) metrics were assessed at 6 weeks, 3 months, 6 months, and 12 months. Linear mixed-effects models evaluated the pre- and post-operative differences. Fusion performance and complications were assessed using the Bridwell grading system over 24 months. Results: A total of 78 levels (62 LLIF and 16 ALIF) were analyzed. Fusion rates at 12 months were 90.3% (56/62) for LLIF levels and 81.3% (13/16) for ALIF levels. ODI scores improved significantly at 3 months (MD −13.0, p < 0.001), 6 months (MD −12.3, p < 0.001), and 12 months (MD −14.9, p < 0.001). PROMIS Pain Interference scores improved at 3 months (MD −6.1, p < 0.001), 6 months (MD −3.4, p < 0.001), and 12 months (MD −5.8, p < 0.001). PROMIS Physical Function scores improved at 3 months (MD +3.4, p = 0.032) and 12 months (MD +4.9, p < 0.001). Conclusions: This novel interbody cage demonstrated high fusion rates, significant improvements in pain and function, and a favorable safety profile, warranting further comparative studies.
(This article belongs to the Special Issue Clinical Advances in Spinal Neurosurgery)
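The repeated-measures comparison reported above (e.g. MD −13.0 in ODI at 3 months) can be sketched with a simplified paired analysis on synthetic data. The study itself fits linear mixed-effects models, which additionally handle missing visits and within-patient correlation across four follow-ups; with balanced, complete data a random-intercept model for a single pre/post contrast reduces to the paired comparison below. All numbers here are invented for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48  # cohort size matching the study's patient count

# Hypothetical pre-op ODI scores and 3-month follow-up with a ~13-point
# improvement plus measurement noise (all values synthetic):
baseline = rng.normal(45, 8, size=n)
followup = baseline - 13 + rng.normal(0, 4, size=n)

# Within-patient change scores remove the between-patient variation,
# which is what the random intercept absorbs in the mixed model.
diff = followup - baseline
md = diff.mean()                          # mean difference (cf. MD -13.0)
se = diff.std(ddof=1) / np.sqrt(n)        # standard error of the mean change
t_stat = md / se                          # paired t-statistic
```

A full analysis of all four visits would use a mixed model (e.g. a random intercept per patient with visit as a fixed effect), but the paired contrast conveys why within-patient differencing gives the reported mean differences their precision.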

23 pages, 4505 KiB  
Article
Enhanced ResNet-50 with Multi-Feature Fusion for Robust Detection of Pneumonia in Chest X-Ray Images
by Neenu Sebastian and B. Ankayarkanni
Diagnostics 2025, 15(16), 2041; https://doi.org/10.3390/diagnostics15162041 - 14 Aug 2025
Abstract
Background/Objectives: Pneumonia is a critical lung infection that demands timely and precise diagnosis, particularly during the evaluation of chest X-ray images. Deep learning is widely used for pneumonia detection but faces challenges such as poor denoising, limited feature diversity, low interpretability, and class imbalance issues. This study aims to develop an optimized ResNet-50 based framework for accurate pneumonia detection. Methods: The proposed approach integrates Multiscale Curvelet Filtering with Directional Denoising (MCF-DD) as a preprocessing step to suppress noise while preserving diagnostic details. Multi-feature fusion is performed by combining deep features extracted from ResNet-50 with handcrafted texture descriptors such as Local Binary Patterns (LBPs), leveraging both semantic and structural information. Precision attention mechanisms are incorporated to enhance interpretability by highlighting diagnostically relevant regions. Results: Validation on the Kaggle chest radiograph dataset demonstrates that the proposed model achieves higher accuracy, sensitivity, specificity, and other performance metrics compared to existing methods. The inclusion of MCF-DD preprocessing, multi-feature fusion, and precision attention contributes to improved robustness and diagnostic reliability. Conclusions: The optimized ResNet-50 framework, enhanced by noise suppression, multi-feature fusion, and attention mechanisms, offers a more accurate and interpretable solution for pneumonia detection from chest X-ray images, addressing key challenges in existing deep learning approaches.
(This article belongs to the Special Issue Machine Learning in Precise and Personalized Diagnosis)
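The deep-plus-handcrafted fusion described above can be sketched as concatenating a pooled CNN feature vector with an LBP texture histogram. This is a minimal illustration, not the paper's pipeline: the LBP below is a basic 8-neighbor variant implemented by hand, and a random vector stands in for ResNet-50's 2048-dimensional pooled features.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbor Local Binary Pattern histogram (256 bins, L1-normalized)."""
    # Neighbor offsets, clockwise from top-left; each contributes one bit.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def fuse_features(deep_vec, img):
    """Concatenate deep (semantic) features with the LBP (structural) histogram."""
    return np.concatenate([deep_vec, lbp_histogram(img)])

rng = np.random.default_rng(1)
xray = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)  # toy grayscale image
deep = rng.normal(size=2048)          # stand-in for ResNet-50 pooled features
fused = fuse_features(deep, xray)     # length 2048 + 256
```

The fused vector would then feed a classifier head; in practice the two feature families are usually normalized to comparable scales before concatenation so neither dominates.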
