Search Results (89)

Search Parameters:
Keywords = artifact handling

30 pages, 1909 KB  
Article
Spectrophotometric Analysis of Divalent Mercury (Hg(II)) Using Dithizone: The Effect of Humic Acids and Ligands
by Stephen K. Okine, Lesta S. Fletcher, Zachary Andreasen and Hong Zhang
Water 2026, 18(1), 53; https://doi.org/10.3390/w18010053 - 24 Dec 2025
Viewed by 345
Abstract
Spectrophotometric analysis of divalent Hg(II) using dithizone has been widely used. Yet, a number of analytical issues and concerns associated with this method remain to be addressed. We studied the effect of humic acids (Aldrich and Acros humic acids) and pH on Hg(II) analysis and clarified several analytical and operational issues. Our study shows that the humic acids lower the slopes of the Hg(II) calibrations and thus the sensitivity of the method. Nevertheless, the calibrations retain good linearity and thus still remain valid and useful in the presence of the humic acids at the tested levels of up to 100 ppm. The effect of the humic acids appears to be similar under both acidic and basic conditions. Our tests using cysteine (model agent for thiol group) and oxalate (carboxylic group) reveal the cause for the effect of the humic acids. The study shows that cysteine has the strongest effect on the Hg(II) analysis (largest calibration slope decreases), followed by humic acids and then oxalate. As for the pH effect, in the absence of the humic acids, basic conditions lead to lower sensitivity but still with good linearity at pH up to 9. Yet, the method fails to perform satisfactorily at pH ≥ 10. Our further extended study on the effect of ligands (chloride, hydroxyl, citrate, oxalate, and cysteine) confirms the effect and role of the thiol and carboxylic groups of humic acids in affecting the Hg(II) analysis. These ligands widely present in environmental samples can interfere with the Hg(II) analysis by lowering its sensitivity while still leaving its calibration linearity unaltered. Our operational study shows that the concentration of dithizone solution (dithizone in chloroform) should always be kept excessive and adjusted based on the level of Hg(II) analyzed to ensure complete complexation of Hg(II) with dithizone. Adoption of the dithizone solution used for the Hg(II) extraction, instead of chloroform, to zero the spectrophotometer proves to be useful and effective in minimizing analytical errors. The improved, refined method of spectrophotometric analysis of Hg(II) using dithizone can still serve as a useful analytical tool. Yet, a lack of due attention to and appropriate measures for handling the effect of humic acids and other ligands can result in analytical errors and research artifacts. This can consequently compromise the analytical validity of this method. Appropriate analytical calibrations should be conducted with the effect of humic acids or ligands in consideration, and only the specific calibration in the presence of the humic acid or ligand of concern at the relevant level(s) should be employed appropriately to calculate the results of the analytical unknowns. Full article
(This article belongs to the Section Water Quality and Contamination)
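For illustration, a minimal numpy sketch of the matrix-matched calibration practice the authors recommend, using hypothetical absorbance readings (not data from the paper): separate calibration lines are fitted with and without humic acid, and an unknown is quantified only against the calibration that matches its matrix.

```python
import numpy as np

# Hypothetical calibration data (not from the paper): absorbance vs. Hg(II) in ppb.
hg_std = np.array([0.0, 5.0, 10.0, 20.0, 40.0])           # Hg(II) standards, ppb
abs_clean = np.array([0.002, 0.051, 0.100, 0.201, 0.399])  # no humic acid
abs_humic = np.array([0.003, 0.038, 0.074, 0.150, 0.298])  # with 100 ppm humic acid

# Both calibrations stay linear, but the humic-acid matrix lowers the slope (sensitivity).
slope_c, icpt_c = np.polyfit(hg_std, abs_clean, 1)
slope_h, icpt_h = np.polyfit(hg_std, abs_humic, 1)
print(f"slope without humic acid: {slope_c:.4f} abs/ppb")
print(f"slope with humic acid:    {slope_h:.4f} abs/ppb")

# Quantify an unknown measured in a humic-rich sample using the matrix-matched line only;
# using the clean-matrix calibration here would misestimate Hg(II).
abs_unknown = 0.120
hg_unknown = (abs_unknown - icpt_h) / slope_h
print(f"estimated Hg(II): {hg_unknown:.1f} ppb")
```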

62 pages, 10208 KB  
Review
Extracting Value from Marine and Microbial Natural Product Artifacts and Chemical Reactivity
by Mark S. Butler and Robert J. Capon
Mar. Drugs 2026, 24(1), 5; https://doi.org/10.3390/md24010005 - 20 Dec 2025
Viewed by 741
Abstract
Natural products are and continue to be a remarkable resource, rich in structural diversity, and endowed with valuable chemical and biological properties that have advanced both science and society. Some natural products, especially those from marine organisms, are chemically reactive, and during extraction and handling can partially or totally transform into artifacts. All too often overlooked or mischaracterised as natural products, artifacts can be invaluable indicators of a uniquely evolved and primed chemical space, with enhanced chemical and biological properties highly prized for drug discovery. To demonstrate this potential, we review a wide selection of marine and microbial case studies, revealing the factors that initiate artifact formation (e.g., solvents, heat, pH, light and air oxidation) and commenting on the mechanisms behind artifact formation. We conclude with reflections on how to recognise and control artifact formation, and how to exploit knowledge of artifacts as a window into unique regions of natural product chemical space—to better inform the development of future marine bioproducts. Full article
(This article belongs to the Special Issue From Marine Natural Products to Marine Bioproducts)

26 pages, 7430 KB  
Article
PMSAF-Net: A Progressive Multi-Scale Asymmetric Fusion Network for Lightweight and Multi-Platform Thin Cloud Removal
by Li Wang and Feng Liang
Remote Sens. 2025, 17(24), 4001; https://doi.org/10.3390/rs17244001 - 11 Dec 2025
Viewed by 242
Abstract
With the rapid improvement of deep learning, significant progress has been made in cloud removal for remote sensing images (RSIs). However, the practical deployment of existing methods on multi-platform devices faces several limitations, including high computational complexity preventing real-time processing, substantial hardware resource demands that are unsuitable for edge devices, and inadequate performance in complex cloud scenarios. To address these challenges, we propose PMSAF-Net, a lightweight Progressive Multi-Scale Asymmetric Fusion Network designed for efficient thin cloud removal. The proposed network employs a Dual-Branch Asymmetric Attention (DBAA) module to optimize spatial details and channel dependencies, reducing computational cost while improving feature extraction. A Multi-Scale Context Aggregation (MSCA) mechanism captures multi-level contextual information through hierarchical dilated convolutions, effectively handling clouds of varying scales and complexities. A Refined Residual Block (RRB) minimizes boundary artifacts through reflection padding and residual calibration. Additionally, an Iterative Feature Refinement (IFR) module progressively enhances feature representations via dense cross-stage connections. Extensive experimental results on multi-platform datasets show that the proposed method achieves favorable performance against state-of-the-art algorithms. With only 0.32 M parameters, PMSAF-Net maintains low computational costs, demonstrating its strong potential for multi-platform deployment on resource-constrained edge devices. Full article
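As a rough illustration of the dual-branch spatial/channel attention idea (a generic PyTorch sketch, not the published DBAA module; all layer sizes are placeholders):

```python
import torch
import torch.nn as nn

class DualBranchAttention(nn.Module):
    """Illustrative spatial + channel attention block (not the published DBAA)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel branch: squeeze spatially, excite per channel.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial branch: cheap 1-channel map from pooled statistics.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel(x)                        # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),   # avg- and max-pooled maps
                            x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)                # reweight spatial locations

feats = torch.randn(1, 32, 64, 64)
print(DualBranchAttention(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```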

20 pages, 2950 KB  
Article
The Role of MER Processing Pipelines for STN Functional Identification During DBS Surgery: A Feature-Based Machine Learning Approach
by Vincenzo Levi, Stefania Coelli, Chiara Gorlini, Federica Forzanini, Sara Rinaldo, Nico Golfrè Andreasi, Luigi Romito, Roberto Eleopra and Anna Maria Bianchi
Bioengineering 2025, 12(12), 1300; https://doi.org/10.3390/bioengineering12121300 - 26 Nov 2025
Cited by 1 | Viewed by 447
Abstract
Microelectrode recording (MER) is commonly used to validate preoperative targeting during subthalamic nucleus (STN) deep brain stimulation (DBS) surgery for Parkinson’s Disease (PD). Although machine learning (ML) has been used to improve STN localization using MER data, the impact of preprocessing steps on the accuracy of classifiers has received little attention. We evaluated 24 distinct preprocessing pipelines combining four artifact removal strategies, three outlier handling methods, and optional feature normalization. The effect of each component of the data processing procedure was evaluated as a function of the performance obtained with three ML models. Artifact rejection methods (i.e., an unsupervised variance-based algorithm (COV) and background noise estimation (BCK)), combined with optimized outlier management (i.e., statistical outlier identification per hemisphere (ORH)), consistently improved classification performance. In contrast, applying hemisphere-specific feature normalization prior to classification led to performance degradation across all metrics. SHAP (SHapley Additive exPlanations) analysis, performed to determine feature importance across pipelines, revealed stable agreement with regard to influential features across diverse preprocessing configurations. In conclusion, optimal artifact rejection and outlier treatment are essential in preprocessing MER for STN identification in DBS, whereas preliminary feature normalization strategies may impair model performance. Overall, the best classification performance was obtained by applying the Random Forest model to the dataset treated using COV artifact rejection and ORH outlier management (accuracy = 0.945). SHAP-based interpretability offers valuable guidance for refining ML pipelines. These insights can inform robust protocol development for MER-guided DBS targeting. Full article
(This article belongs to the Special Issue AI and Data Analysis in Neurological Disease Management)
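A schematic of how such a preprocessing-pipeline comparison can be wired together with scikit-learn, using simplified stand-ins for the COV/BCK artifact rejection and the ORH outlier rule, and synthetic features in place of real MER data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def reject_artifacts(features, variances, k=3.0):
    """Simplified variance-based rejection: drop segments whose raw-signal
    variance exceeds k times the median (stand-in for COV/BCK)."""
    keep = variances < k * np.median(variances)
    return features[keep], keep

def clip_outliers(features, z=4.0):
    """Simplified per-feature outlier handling: clip values beyond z SDs
    (stand-in for the per-hemisphere ORH rule)."""
    mu, sd = features.mean(axis=0), features.std(axis=0) + 1e-12
    return np.clip(features, mu - z * sd, mu + z * sd)

# Hypothetical MER-derived features: n segments x m features, binary STN label.
X = rng.normal(size=(400, 12))
var_raw = rng.gamma(2.0, 1.0, size=400)   # per-segment raw-signal variance
y = rng.integers(0, 2, size=400)

X_clean, keep = reject_artifacts(X, var_raw)
X_clean = clip_outliers(X_clean)
scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X_clean, y[keep], cv=5)
print("mean CV accuracy:", scores.mean())
```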

17 pages, 2779 KB  
Article
Image Restoration Based on Semantic Prior Aware Hierarchical Network and Multi-Scale Fusion Generator
by Yapei Feng, Yuxiang Tang and Hua Zhong
Technologies 2025, 13(11), 521; https://doi.org/10.3390/technologies13110521 - 13 Nov 2025
Viewed by 532
Abstract
As a fundamental low-level vision task, image restoration plays a pivotal role in reconstructing authentic visual information from corrupted inputs, directly impacting the performance of downstream high-level vision systems. Current approaches frequently exhibit two critical limitations: (1) progressive texture degradation and blurring during iterative refinement, particularly in irregular damage patterns, and (2) structural incoherence when handling cross-domain artifacts. To address these challenges, we present a semantic-aware hierarchical network (SAHN) that synergistically integrates multi-scale semantic guidance with structural consistency constraints. Firstly, we construct a Dual-Stream Feature Extractor: based on a modified U-Net backbone with dilated residual blocks, this skip-connected encoder–decoder module simultaneously captures hierarchical semantic contexts and fine-grained texture details. Secondly, we propose a semantic prior mapper that establishes spatial–semantic correspondences between damaged areas and multi-scale features through predefined semantic prototypes and adaptive attention pooling. Additionally, we construct a multi-scale fusion generator by employing cascaded association blocks with structural similarity constraints. This unit progressively aggregates features from different semantic levels using deformable convolution kernels, effectively bridging the gap between global structure and local texture reconstruction. Compared to existing methods, our algorithm attains the highest overall PSNR of 34.99 and the best visual authenticity (lowest FID of 11.56). Comprehensive evaluations on three datasets demonstrate its leading performance in restoring visual realism. Full article
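The semantic prior mapper is described only at a high level; as a loose illustration of attention pooling against predefined prototypes (a toy module, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeAttentionPool(nn.Module):
    """Toy attention pooling over learned semantic prototypes: each spatial
    feature is softly assigned to the prototypes it resembles."""
    def __init__(self, channels: int, num_prototypes: int = 8):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)                # (B, HW, C)
        attn = F.softmax(tokens @ self.prototypes.t(), dim=-1)   # (B, HW, K)
        pooled = attn.transpose(1, 2) @ tokens                   # (B, K, C) per-prototype summary
        return pooled

x = torch.randn(2, 64, 32, 32)
print(PrototypeAttentionPool(64)(x).shape)  # torch.Size([2, 8, 64])
```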

23 pages, 59318 KB  
Article
BAT-Net: Bidirectional Attention Transformer Network for Joint Single-Image Desnowing and Snow Mask Prediction
by Yongheng Zhang
Information 2025, 16(11), 966; https://doi.org/10.3390/info16110966 - 7 Nov 2025
Viewed by 404
Abstract
In the wild, snow is not merely additive noise; it is a non-stationary, semi-transparent veil whose spatial statistics vary with depth, illumination, and wind. Because conventional two-stage pipelines first detect a binary mask and then inpaint the occluded regions, any early mis-classification is irreversibly baked into the final result, leading to over-smoothed textures or ghosting artifacts. We propose BAT-Net, a Bidirectional Attention Transformer Network that frames desnowing as a coupled representation learning problem, jointly disentangling snow appearance and scene radiance in a single forward pass. Our core contributions are as follows: (1) A novel dual-decoder architecture where a background decoder and a snow decoder are coupled via a Bidirectional Attention Module (BAM). The BAM implements a continuous predict–verify–correct mechanism, allowing the background branch to dynamically accept, reject, or refine the snow branch’s occlusion hypotheses, dramatically reducing error accumulation. (2) A lightweight yet effective multi-scale feature fusion scheme comprising a Scale Conversion Module (SCM) and a Feature Aggregation Module (FAM), enabling the model to handle the large scale variance among snowflakes without a prohibitive computational cost. (3) The introduction of the FallingSnow dataset, curated to eliminate the label noise caused by irremovable ground snow in existing benchmarks, providing a cleaner benchmark for evaluating dynamic snow removal. Extensive experiments on synthetic and real-world datasets demonstrate that BAT-Net sets a new state of the art. It achieves a PSNR of 35.78 dB on the CSD dataset, outperforming the best prior model by 1.37 dB, and also achieves top results on SRRS (32.13 dB) and Snow100K (34.62 dB) datasets. The proposed method has significant practical applications in autonomous driving and surveillance systems, where accurate snow removal is crucial for maintaining visual clarity. Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)
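The reported gains are expressed in PSNR; for reference, the standard PSNR computation behind such numbers (the generic definition, not code from the paper) is:

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.random.randint(0, 256, (256, 256, 3))
noisy = np.clip(clean + np.random.normal(0, 5, clean.shape), 0, 255)
print(f"{psnr(clean, noisy):.2f} dB")
```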

14 pages, 2035 KB  
Review
Multidisciplinary Perspective of Spread Through Air Spaces in Lung Cancer: A Narrative Review
by Riccardo Orlandi, Lorenzo Bramati, Maria C. Andrisani, Giorgio A. Croci, Claudia Bareggi, Simona Castiglioni, Francesca Romboni, Sara Franzi and Davide Tosi
Cancers 2025, 17(20), 3374; https://doi.org/10.3390/cancers17203374 - 19 Oct 2025
Viewed by 1586
Abstract
Spread Through Air Spaces (STAS) is an emerging pattern of tumor invasion in lung cancer, first recognized by the World Health Organization in 2015. This narrative review examines STAS from a multidisciplinary perspective, integrating pathologic, radiologic, oncologic, and surgical points of view, together with molecular biology to assess its clinical significance, diagnostic challenges, and therapeutic implications. Pathologically, STAS is characterized by tumor cells floating beyond the main tumor, contributing to recurrence and poor prognosis. Radiologic advancements suggest potential imaging markers for STAS, such as spiculation, the absence of an air bronchogram, solid tumor components, as well as high fluorodeoxyglucose uptake, though definitive preoperative identification remains challenging. Oncologic studies link STAS to aggressive tumor behavior and lympho-vascular invasion, suggesting a role for adjuvant chemotherapy even in the earliest stages of disease; furthermore, specific molecular alterations have been discovered, including EGFR wild-type status and ALK/ROS1 rearrangements together with high Ki-67 expression, tumor necrosis, and alterations in cell adhesion proteins like E-cadherin. Surgical aspects highlight the increased risk of recurrence following limited resection, raising concerns about optimal surgical strategies. The debate over STAS as a true invasion mechanism versus an artifact from surgical handling underscores the need for standardized pathological evaluation. This review aims to refine STAS detection, integrate it into multidisciplinary treatment decision-making, and assess its potential as a staging criterion in lung cancer management. Full article
(This article belongs to the Special Issue Surgical Management of Non-Small Cell Lung Cancer)

18 pages, 973 KB  
Article
Machine Learning-Based Vulnerability Detection in Rust Code Using LLVM IR and Transformer Model
by Young Lee, Syeda Jannatul Boshra, Jeong Yang, Zechun Cao and Gongbo Liang
Mach. Learn. Knowl. Extr. 2025, 7(3), 79; https://doi.org/10.3390/make7030079 - 6 Aug 2025
Viewed by 3459
Abstract
Rust’s growing popularity in high-integrity systems requires automated vulnerability detection in order to maintain its strong safety guarantees. Although Rust’s ownership model and compile-time checks prevent many errors, unexpected bugs may occasionally pass analysis, underlining the necessity for automated detection of safe and unsafe code. This paper presents Rust-IR-BERT, a machine learning approach to detect security vulnerabilities in Rust code by analyzing its compiled LLVM intermediate representation (IR) instead of the raw source code. The novelty of this approach lies in employing LLVM IR’s language-neutral, semantically rich representation of the program, facilitating robust detection by capturing core data- and control-flow semantics and reducing language-specific syntactic noise. Our method leverages a graph-based transformer model, GraphCodeBERT, pretrained to encode structural code semantics via data-flow information, followed by a gradient boosting classifier, CatBoost, which handles complex feature interactions, to classify code as vulnerable or safe. The model was evaluated using a carefully curated dataset of over 2300 real-world Rust code samples (vulnerable and non-vulnerable Rust code snippets) from the RustSec and OSV advisory databases, compiled to LLVM IR and labeled with corresponding Common Vulnerabilities and Exposures (CVE) identifiers to ensure comprehensive and realistic coverage. Rust-IR-BERT achieved an overall accuracy of 98.11%, with a recall of 99.31% for safe code and 93.67% for vulnerable code. Despite these promising results, this study acknowledges potential limitations, such as focusing primarily on known CVEs. Built on a representative dataset spanning over 2300 real-world Rust samples from diverse crates, Rust-IR-BERT delivers consistently strong performance. Looking ahead, practical deployment could take the form of a Cargo plugin or pre-commit hook that automatically generates and scans LLVM IR artifacts during the development cycle, enabling developers to catch vulnerabilities at an early stage. Full article
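A compressed sketch of the two-stage idea, encoding LLVM IR text with GraphCodeBERT and classifying the embeddings with CatBoost; the pooling choice, IR snippet placeholders, and labels below are illustrative assumptions rather than the paper's exact pipeline:

```python
import torch
from transformers import AutoTokenizer, AutoModel
from catboost import CatBoostClassifier

tokenizer = AutoTokenizer.from_pretrained("microsoft/graphcodebert-base")
encoder = AutoModel.from_pretrained("microsoft/graphcodebert-base")

def embed(ir_snippets):
    """Mean-pooled GraphCodeBERT embeddings for a list of LLVM IR strings."""
    batch = tokenizer(ir_snippets, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Toy labeled corpus: LLVM IR text with 1 = vulnerable, 0 = safe (placeholders).
ir_train = ["define i32 @f(i32 %x) { ... }", "define void @g(i8* %p) { ... }"]
labels = [0, 1]

clf = CatBoostClassifier(iterations=200, verbose=0)
clf.fit(embed(ir_train), labels)
print(clf.predict(embed(["define i32 @h() { ... }"])))
```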

25 pages, 7859 KB  
Article
Methodology for the Early Detection of Damage Using CEEMDAN-Hilbert Spectral Analysis of Ultrasonic Wave Attenuation
by Ammar M. Shakir, Giovanni Cascante and Taher H. Ameen
Materials 2025, 18(14), 3294; https://doi.org/10.3390/ma18143294 - 12 Jul 2025
Cited by 1 | Viewed by 976
Abstract
Current non-destructive testing (NDT) methods, such as those based on wave velocity measurements, lack the sensitivity necessary to detect early-stage damage in concrete structures. Similarly, common signal processing techniques often assume linearity and stationarity of the signal data. By analyzing wave attenuation measurements using advanced signal processing techniques, mainly the Hilbert–Huang transform (HHT), this work aims to enhance the early detection of damage in concrete. This study presents a novel energy-based technique that integrates complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and Hilbert spectrum analysis (HSA) to accurately capture nonlinear and nonstationary signal behaviors. Ultrasonic non-destructive testing was performed on manufactured concrete specimens subjected to micro-damage characterized by internal microcracks smaller than 0.5 mm, induced through controlled freeze–thaw cycles. The recorded time-domain signals were decomposed using CEEMDAN into frequency-ordered intrinsic mode functions (IMFs). A multi-criteria selection strategy, including damage index evaluation, was employed to identify the most effective IMFs while distinguishing true damage-induced energy loss from spurious nonlinear artifacts or noise. Localized damage was then analyzed in the frequency domain using HSA, achieving up to an 88% reduction in wave energy via marginal Hilbert spectrum analysis, compared to 68% using Fourier-based techniques, a 20-percentage-point improvement in sensitivity. The results indicate that the proposed technique enhances early damage detection through wave attenuation analysis and offers a superior ability to handle nonlinear, nonstationary signals. The Hilbert spectrum provided a higher time-frequency resolution, enabling clearer identification of damage-related features. These findings highlight the potential of CEEMDAN-HSA as a practical, sensitive tool for early-stage microcrack detection in concrete. Full article
(This article belongs to the Section Construction and Building Materials)
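A rough outline of the CEEMDAN-plus-Hilbert energy measurement using the PyEMD and SciPy packages (assumed available); the toy signals and single-IMF selection simplify the paper's multi-criteria strategy:

```python
import numpy as np
from PyEMD import CEEMDAN          # pip install EMD-signal
from scipy.signal import hilbert

fs = 1_000_000                      # hypothetical 1 MHz sampling rate
t = np.arange(0, 200e-6, 1 / fs)
intact = np.exp(-4e4 * t) * np.sin(2 * np.pi * 150e3 * t)          # toy ultrasonic pulse
damaged = 0.6 * np.exp(-6e4 * t) * np.sin(2 * np.pi * 150e3 * t)   # attenuated by micro-damage

def marginal_energy(signal):
    """Decompose with CEEMDAN, keep the most energetic IMF, and integrate
    the squared Hilbert amplitude envelope as a simple energy measure."""
    imfs = CEEMDAN()(signal)
    dominant = imfs[np.argmax([np.sum(i ** 2) for i in imfs])]
    envelope = np.abs(hilbert(dominant))
    return np.sum(envelope ** 2)

e0, e1 = marginal_energy(intact), marginal_energy(damaged)
print(f"energy reduction: {100 * (1 - e1 / e0):.1f} %")
```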

19 pages, 6323 KB  
Article
A UNet++-Based Approach for Delamination Imaging in CFRP Laminates Using Full Wavefield
by Yitian Yan, Kang Yang, Yaxun Gou, Zhifeng Tang, Fuzai Lv, Zhoumo Zeng, Jian Li and Yang Liu
Sensors 2025, 25(14), 4292; https://doi.org/10.3390/s25144292 - 9 Jul 2025
Cited by 2 | Viewed by 888
Abstract
The timely detection of delamination is essential for preventing catastrophic failures and extending the service life of carbon fiber-reinforced polymers (CFRP). Full wavefields in CFRP encapsulate extensive information on the interaction between guided waves and structural damage, making them a widely utilized tool for damage mapping. However, due to the multimodal and dispersive nature of guided waves, interpreting full wavefields remains a significant challenge. This study proposes an end-to-end delamination imaging approach based on UNet++ using 2D frequency domain spectra (FDS) derived from full wavefield data. The proposed method is validated through a self-constructed simulation dataset, experimental data collected using Scanning Laser Doppler Vibrometry, and a publicly available dataset created by Kudela and Ijjeh. The results on the simulated data show that UNet++, trained with multi-frequency FDS, can accurately predict the location, shape, and size of delamination while effectively handling frequency offsets and noise interference in the input FDS. Experimental results further indicate that the model, trained exclusively on simulated data, can be directly applied to real-world scenarios, delivering artifact-free delamination imaging. Full article
(This article belongs to the Section Sensing and Imaging)
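A minimal numpy sketch of turning a full wavefield (time × grid) into 2D frequency-domain spectra that could feed a segmentation network; the array shapes, sampling rate, and selected frequencies are hypothetical:

```python
import numpy as np

# Hypothetical full wavefield: (time samples, grid rows, grid cols) out-of-plane velocity.
fs = 512_000                          # sampling frequency, Hz (assumed)
wavefield = np.random.randn(1024, 128, 128)

# FFT along time gives one complex 2D map per frequency bin.
spectra = np.fft.rfft(wavefield, axis=0)
freqs = np.fft.rfftfreq(wavefield.shape[0], d=1 / fs)

# Keep magnitude maps at a few excitation-related frequencies as network input channels.
selected = [np.argmin(np.abs(freqs - f)) for f in (50e3, 75e3, 100e3)]
fds_input = np.abs(spectra[selected])                      # shape (3, 128, 128)
fds_input /= fds_input.max(axis=(1, 2), keepdims=True)     # per-channel normalization
print(fds_input.shape)
```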

30 pages, 41418 KB  
Article
Atmospheric Scattering Model and Non-Uniform Illumination Compensation for Low-Light Remote Sensing Image Enhancement
by Xiaohang Zhao, Liang Huang, Mingxuan Li, Chengshan Han and Ting Nie
Remote Sens. 2025, 17(12), 2069; https://doi.org/10.3390/rs17122069 - 16 Jun 2025
Cited by 2 | Viewed by 1131
Abstract
Enhancing low-light remote sensing images is crucial for preserving the accuracy and reliability of downstream analyses in a wide range of applications. Although numerous enhancement algorithms have been developed, many fail to effectively address the challenges posed by non-uniform illumination in low-light scenes. These images often exhibit significant brightness inconsistencies, leading to two primary problems: insufficient enhancement in darker regions and over-enhancement in brighter areas, frequently accompanied by color distortion and visual artifacts. These issues largely stem from the limitations of existing methods, which insufficiently account for non-uniform atmospheric attenuation and local brightness variations in reflectance estimation. To overcome these challenges, we propose a robust enhancement method based on non-uniform illumination compensation and the Atmospheric Scattering Model (ASM). Unlike conventional approaches, our method utilizes ASM to initialize reflectance estimation by adaptively adjusting atmospheric light and transmittance. A weighted graph is then employed to effectively handle local brightness variation. Additionally, a regularization term is introduced to suppress noise, refine reflectance estimation, and maintain balanced brightness enhancement. Extensive experiments on multiple benchmark remote sensing datasets demonstrate that our approach outperforms state-of-the-art methods, delivering superior enhancement performance and visual quality, even under complex non-uniform low-light conditions. Full article
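For orientation, the atmospheric scattering model underlying the method is I(x) = J(x)·t(x) + A·(1 − t(x)); a simplified per-pixel inversion with a uniform airlight (unlike the paper's adaptive estimation) can be sketched as:

```python
import numpy as np

def invert_asm(observed: np.ndarray, airlight: float, transmission: np.ndarray,
               t_floor: float = 0.1) -> np.ndarray:
    """Recover scene radiance J from I = J*t + A*(1 - t), clamping t to avoid
    amplifying noise in nearly opaque (very dark) regions."""
    t = np.maximum(transmission, t_floor)
    return (observed - airlight * (1.0 - t)) / t

# Hypothetical low-light tile and a smooth transmission estimate.
I = np.random.rand(64, 64) * 0.3
t_est = 0.4 + 0.2 * np.random.rand(64, 64)
J = np.clip(invert_asm(I, airlight=0.05, transmission=t_est), 0.0, 1.0)
print(J.mean())
```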

22 pages, 7958 KB  
Article
Depth Upsampling with Local and Nonlocal Models Using Adaptive Bandwidth
by Niloufar Salehi Dastjerdi and M. Omair Ahmad
Electronics 2025, 14(8), 1671; https://doi.org/10.3390/electronics14081671 - 20 Apr 2025
Viewed by 2708
Abstract
The rapid advancement of 3D imaging technology and depth cameras has made depth data more accessible for applications such as virtual reality and autonomous driving. However, depth maps typically suffer from lower resolution and quality compared to color images due to sensor limitations. This paper introduces an improved approach to guided depth map super-resolution (GDSR) that effectively addresses key challenges, including the suppression of texture copying artifacts and the preservation of depth discontinuities. The proposed method integrates both local and nonlocal models within a structured framework, incorporating an adaptive bandwidth mechanism that dynamically adjusts guidance weights. Instead of relying on fixed parameters, this mechanism utilizes a distance map to evaluate patch similarity, leading to enhanced depth recovery. The local model ensures spatial smoothness by leveraging neighboring depth information, preserving fine details within small regions. On the other hand, the nonlocal model identifies similarities across distant areas, improving the handling of repetitive patterns and maintaining depth discontinuities. By combining these models, the proposed approach achieves more accurate depth upsampling with high-quality depth reconstruction. Experimental results, conducted on several datasets and evaluated using various objective metrics, demonstrate the effectiveness of the proposed method through both quantitative and qualitative assessments. The approach consistently delivers improved performance over existing techniques, particularly in preserving structural details and visual clarity. An ablation study further confirms the individual contributions of key components within the framework. These results collectively support the conclusion that the method is not only robust and accurate but also adaptable to a range of real-world scenarios, offering a practical advancement over current state-of-the-art solutions. Full article
(This article belongs to the Special Issue Image and Video Processing for Emerging Multimedia Technology)
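A naive, loop-based sketch of guided (joint bilateral) depth upsampling with an adaptive range bandwidth derived from local guide contrast; this illustrates the general mechanism only, not the paper's local/nonlocal formulation:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=2.0):
    """Each high-res depth value is a weighted average of nearby low-res samples,
    weighted by spatial distance and guide-image similarity. The range bandwidth
    adapts to local guide contrast (wider in flat areas, tighter near edges)."""
    H, W = guide_hr.shape
    out = np.zeros((H, W))
    r = int(2 * sigma_s)
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale
            y0, x0 = int(round(yl)), int(round(xl))
            ys = np.arange(max(y0 - r, 0), min(y0 + r + 1, depth_lr.shape[0]))
            xs = np.arange(max(x0 - r, 0), min(x0 + r + 1, depth_lr.shape[1]))
            yy, xx = np.meshgrid(ys, xs, indexing="ij")
            g_nb = guide_hr[np.clip((yy * scale).astype(int), 0, H - 1),
                            np.clip((xx * scale).astype(int), 0, W - 1)]
            sigma_r = 0.05 + g_nb.std()                     # adaptive range bandwidth
            w_s = np.exp(-((yy - yl) ** 2 + (xx - xl) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-(g_nb - guide_hr[y, x]) ** 2 / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[y, x] = np.sum(w * depth_lr[yy, xx]) / np.sum(w)
    return out

depth_lr = np.random.rand(16, 16)
guide_hr = np.random.rand(64, 64)
print(joint_bilateral_upsample(depth_lr, guide_hr, scale=4).shape)
```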

20 pages, 41816 KB  
Article
The 3D Gaussian Splatting SLAM System for Dynamic Scenes Based on LiDAR Point Clouds and Vision Fusion
by Yuquan Zhang, Guangan Jiang, Mingrui Li and Guosheng Feng
Appl. Sci. 2025, 15(8), 4190; https://doi.org/10.3390/app15084190 - 10 Apr 2025
Cited by 1 | Viewed by 7354
Abstract
This paper presents a novel 3D Gaussian Splatting (3DGS)-based Simultaneous Localization and Mapping (SLAM) system that integrates Light Detection and Ranging (LiDAR) and vision data to enhance dynamic scene tracking and reconstruction. Existing 3DGS systems face challenges in sensor fusion and handling dynamic objects. To address these, we introduce a hybrid uncertainty-based 3D segmentation method that leverages uncertainty estimation and 3D object detection, effectively removing dynamic points and improving static map reconstruction. Our system also employs a sliding window-based keyframe fusion strategy that reduces computational load while maintaining accuracy. By incorporating a novel dynamic rendering loss function and pruning techniques, we suppress artifacts such as ghosting and ensure real-time operation in complex environments. Extensive experiments show that our system outperforms existing methods in dynamic object removal and overall reconstruction quality. The key innovations of our work lie in its integration of hybrid uncertainty-based segmentation, dynamic rendering loss functions, and an optimized sliding window strategy, which collectively enhance robustness and efficiency in dynamic scene reconstruction. This approach offers a promising solution for real-time robotic applications, including autonomous navigation and augmented reality. Full article
(This article belongs to the Special Issue Trends and Prospects for Wireless Sensor Networks and IoT)

27 pages, 744 KB  
Article
Microhooks: A Novel Framework to Streamline the Development of Microservices
by Omar Iraqi, Mohamed El Kadiri El Hassani and Anass Zouine
Computers 2025, 14(4), 139; https://doi.org/10.3390/computers14040139 - 7 Apr 2025
Viewed by 2742
Abstract
The microservices architectural style has gained widespread adoption in recent years thanks to its ability to deliver high scalability and maintainability. However, the development process for microservices-based applications can be complex and challenging. Indeed, it often requires developers to manage a large number of distributed components with the burden of handling low-level, recurring needs, such as inter-service communication, brokering, event management, and data replication. In this article, we present Microhooks: a novel framework designed to streamline the development of microservices by allowing developers to focus on their business logic while declaratively expressing the so-called low-level needs. Based on the inversion of control and the materialized view patterns, among others, our framework automatically generates and injects the corresponding artifacts, leveraging 100% build time code introspection and instrumentation, as well as context building, for optimized runtime performance. We provide the first implementation for the Java world, supporting the most popular containers and brokers, and adhering to the standard Java/Jakarta Persistence API. From the user perspective, Microhooks exposes an intuitive, container-agnostic, broker-neutral, and ORM framework-independent API. Microhooks evaluation against state-of-the-art practices has demonstrated its effectiveness in drastically reducing code size and complexity, without incurring any considerable cost on performance. Based on such promising results, we believe that Microhooks has the potential to become an essential component of the microservices development ecosystem. Full article

30 pages, 1749 KB  
Article
Deepfake Image Forensics for Privacy Protection and Authenticity Using Deep Learning
by Saud Sohail, Syed Muhammad Sajjad, Adeel Zafar, Zafar Iqbal, Zia Muhammad and Muhammad Kazim
Information 2025, 16(4), 270; https://doi.org/10.3390/info16040270 - 27 Mar 2025
Cited by 2 | Viewed by 8669
Abstract
This research focuses on the detection of deepfake images and videos for forensic analysis using deep learning techniques. It highlights the importance of preserving privacy and authenticity in digital media. The background of the study emphasizes the growing threat of deepfakes, which pose significant challenges in various domains, including social media, politics, and entertainment. Current methodologies primarily rely on visual features that are specific to the dataset and fail to generalize well across varying manipulation techniques. Moreover, these techniques focus on either spatial or temporal features individually and lack robustness in handling complex deepfake artifacts that involve fused facial regions such as eyes, nose, and mouth. Key approaches include the use of CNNs, RNNs, and hybrid models like CNN-LSTM, CNN-GRU, and temporal convolutional networks (TCNs) to capture both spatial and temporal features during the detection of deepfake videos and images. The research incorporates data augmentation with GANs to enhance model performance and proposes an innovative fusion of artifact inspection and facial landmark detection for improved accuracy. The experimental results show near-perfect detection accuracy across diverse datasets, demonstrating the effectiveness of these models. However, challenges remain, such as the difficulty of detecting deepfakes in compressed video formats and the need to handle noise and address dataset imbalances. The research presents an enhanced hybrid model that improves detection accuracy while maintaining performance across various datasets. Future work includes improving model generalization to better detect emerging deepfake techniques. The experimental results reveal a near-perfect accuracy of over 99% across different architectures, highlighting their effectiveness in forensic investigations. Full article
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
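The hybrid CNN-LSTM family evaluated here can be sketched as a per-frame CNN feeding an LSTM over the frame sequence; the architecture below is a minimal placeholder, not the authors' model:

```python
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Small per-frame feature extractor (stand-in for a pretrained backbone)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class CNNLSTMDetector(nn.Module):
    """Spatial features per frame, temporal modeling across frames, binary logit."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = FrameCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                  # real-vs-fake logit per clip

logits = CNNLSTMDetector()(torch.randn(2, 8, 3, 112, 112))
print(logits.shape)   # torch.Size([2, 1])
```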
