Search Results (801)

Search Parameters:
Keywords = colored noise

27 pages, 2292 KB  
Article
Source Camera Identification via Explicit Content–Fingerprint Decoupling with a Dual-Branch Deep Learning Framework
by Zijuan Han, Yang Yang, Jiaxuan Lu, Jian Sun, Yunxia Liu and Ngai-Fong Bonnie Law
Appl. Sci. 2026, 16(3), 1245; https://doi.org/10.3390/app16031245 - 26 Jan 2026
Viewed by 38
Abstract
In this paper, we propose a source camera identification method based on disentangled feature modeling, aiming to achieve robust extraction of camera fingerprint features under complex imaging and post-processing conditions. To address the severe coupling between image content and camera fingerprint features in existing methods, which makes content interference difficult to suppress, we develop a dual-branch deep learning framework guided by imaging physics. By introducing physical consistency constraints, the proposed framework explicitly separates image content representations from device-related fingerprint features in the feature space, thereby enhancing the stability and robustness of source camera identification. The proposed method adopts two parallel branches: a content modeling branch and a fingerprint feature extraction branch. The content branch is built upon an improved U-Net architecture to reconstruct scene and color information, and further incorporates texture refinement and multi-scale feature fusion to reduce residual content interference in fingerprint modeling. The fingerprint branch employs ResNet-50 as the backbone network to learn discriminative global features associated with the camera imaging pipeline. Based on these branches, fingerprint information dominated by sensor noise is explicitly extracted by computing the residual between the input image and the reconstructed content, and is further encoded through noise analysis and feature fusion for joint camera model classification. Experimental results on multiple public-source camera forensics datasets demonstrate that the proposed method achieves stable and competitive identification performance in same-brand camera discrimination, complex imaging conditions, and post-processing scenarios, validating the effectiveness of the proposed disentangled modeling and physical consistency constraint strategy for source camera identification. Full article
(This article belongs to the Special Issue New Development in Machine Learning in Image and Video Forensics)
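The residual step this abstract describes (fingerprint = input minus reconstructed content) can be illustrated without the deep branches. In the sketch below a simple box blur stands in for the learned content branch, and the scenes, the additive "fingerprint" pattern, and all sizes are invented for illustration:

```python
import numpy as np

def content_estimate(img, k=5):
    # Stand-in for a content-reconstruction branch: a box blur that removes
    # high-frequency sensor noise, leaving a rough scene estimate.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    # Fingerprint proxy: input minus reconstructed content.
    img = img.astype(float)
    return img - content_estimate(img)

def ncc(a, b):
    # Normalized cross-correlation between two residual patterns.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
y, x = np.mgrid[0:64, 0:64]
scene1 = 100 + 50 * np.sin(x / 10.0) * np.cos(y / 12.0)   # smooth content
scene2 = 120 + 40 * np.cos(x / 8.0) + 30 * np.sin(y / 9.0)
fingerprint = rng.normal(0, 2.0, (64, 64))    # fixed per-device noise pattern

img_a = scene1 + fingerprint                   # two shots from the same device
img_b = scene2 + fingerprint
img_c = scene1 + rng.normal(0, 2.0, (64, 64))  # same scene, different device

same_device = ncc(noise_residual(img_a), noise_residual(img_b))
other_device = ncc(noise_residual(img_a), noise_residual(img_c))
```

With a shared noise pattern, the residual correlation for the same "device" comes out far higher than across devices even when the scenes differ, which is the kind of signal a fingerprint classifier can build on.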

21 pages, 4023 KB  
Article
High-Speed Image Restoration Based on a Dynamic Vision Sensor
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 781; https://doi.org/10.3390/s26030781 - 23 Jan 2026
Viewed by 146
Abstract
We report on a post-capture, on-demand deblurring technique based on a Dynamic Vision Sensor (DVS). Motion blur causes photographic defects inherently in most use cases of mobile cameras. To compensate for motion blur in mobile photography, we use a fast event-based vision sensor. However, we found severe artifacts that degrade image quality, caused by color ghosts, event noise, and discrepancies between conventional image sensors and event-based sensors. To overcome these inevitable artifacts, we propose and demonstrate event-based compensation techniques such as cross-correlation optimization, contrast maximization, resolution mismatch compensation (event upsampling for alignment), and disparity matching. The results show that the deblur performance can be improved dramatically in terms of metrics such as the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Spatial Frequency Response (SFR). Thus, we expect that the proposed event-based image restoration technique can be widely deployed in mobile cameras. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)
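The cross-correlation optimization step named above, finding the temporal offset between frame-derived and event-derived signals, can be sketched in one dimension. The signals and latency below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical 1D brightness-change signals: one derived from frame
# differences, one integrated from DVS events, offset by sensor latency.
rng = np.random.default_rng(1)
base = rng.normal(size=300)
true_lag = 7
frame_sig = base[:-true_lag]      # frame-derived signal
event_sig = base[true_lag:]       # event-derived signal leads by true_lag

def best_lag(a, b, max_lag=20):
    # Slide b along a and return the shift with maximal normalized
    # correlation (the cross-correlation optimization step).
    scores = []
    for lag in range(max_lag + 1):
        n = min(len(a) - lag, len(b))
        x = a[lag:lag + n] - a[lag:lag + n].mean()
        y = b[:n] - b[:n].mean()
        scores.append(float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y))))
    return int(np.argmax(scores))

lag = best_lag(frame_sig, event_sig)
```

Once the lag is known, the event stream can be shifted into the exposure window of the blurred frame before deblurring.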
26 pages, 2403 KB  
Article
Assessment of Psychological Effects of the Built Environment Based on TFN–Prospect–Regret Theory–VIKOR: A Case Study of Open-Plan Offices
by Xiaoting Cheng, Guiling Zhao and Meng Xie
Sustainability 2026, 18(2), 1104; https://doi.org/10.3390/su18021104 - 21 Jan 2026
Viewed by 116
Abstract
As people spend more time indoors, the impact of the built environment on psychological health has attracted growing attention. Yet existing studies often have difficulty capturing decision-makers’ reference dependence and loss aversion under uncertainty. To bridge this gap, we propose an evaluation framework comprising three first-level criteria—Outdoor Environment, Physical Comfort (including thermal, lighting, and color environments), and Acoustic Comfort—and determine combined weights by integrating subjective analytic hierarchy process (AHP) judgments with objective entropy weighting based on triangular fuzzy numbers (TFNs). We further incorporate prospect–regret theory to represent loss aversion, expectation-based reference points, and counterfactual regret/rejoicing, and couple it with the VIKOR compromise ranking method, forming an integrated “TFN + Prospect–Regret + VIKOR” approach. The proposed method is applied to four retrofit alternatives for an open-plan office floor (approximately 1200 m²), each emphasizing outdoor environment, physical comfort, acoustic comfort, or no single priority. Experts assessed the schemes using fuzzy linguistic variables. The results show that lighting conditions, thermal comfort, color scheme, and internal noise control receive the highest comprehensive weights. Extensive sensitivity analyses across value/weighting functions and regret-aversion parameters indicate that the ranking of alternatives remains stable while exhibiting clearer separation. Comparative analyses further suggest that, although the overall ordering is consistent with baseline methods, the proposed model increases score dispersion and improves discriminative power. Overall, by explicitly accounting for decision-makers’ psychological behavior and information uncertainty, the framework enables robust and interpretable selection of retrofit schemes for existing office spaces. Full article

22 pages, 7096 KB  
Article
An Improved ORB-KNN-Ratio Test Algorithm for Robust Underwater Image Stitching on Low-Cost Robotic Platforms
by Guanhua Yi, Tianxiang Zhang, Yunfei Chen and Dapeng Yu
J. Mar. Sci. Eng. 2026, 14(2), 218; https://doi.org/10.3390/jmse14020218 - 21 Jan 2026
Viewed by 77
Abstract
Underwater optical images often exhibit severe color distortion, weak texture, and uneven illumination due to light absorption and scattering in water. These issues result in unstable feature detection and inaccurate image registration. To address these challenges, this paper proposes an underwater image stitching method that integrates ORB (Oriented FAST and Rotated BRIEF) feature extraction with a fixed-ratio constraint matching strategy. First, lightweight color and contrast enhancement techniques are employed to restore color balance and improve local texture visibility. Then, ORB descriptors are extracted and matched via a K-Nearest Neighbors (KNN) search, and Lowe’s ratio test is applied to eliminate false matches caused by weak texture similarity. Finally, the geometric transformation between image frames is estimated by incorporating robust optimization, ensuring stable homography computation. Experimental results on real underwater datasets show that the proposed method significantly improves stitching continuity and structural consistency, achieving 40–120% improvements in SSIM (Structural Similarity Index) and PSNR (peak signal-to-noise ratio) over conventional Harris–ORB + KNN, SIFT (scale-invariant feature transform) + BF (brute force), SIFT + KNN, and AKAZE (accelerated KAZE) + BF methods while maintaining processing times within one second. These results indicate that the proposed method is well-suited for real-time underwater environment perception and panoramic mapping on low-cost, micro-sized underwater robotic platforms. Full article
(This article belongs to the Section Ocean Engineering)
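Lowe’s ratio test over a 2-nearest-neighbor search is the matching filter named above: a match is kept only when the nearest descriptor is clearly closer than the runner-up. The sketch uses float descriptors with L2 distance as a stand-in for ORB’s binary descriptors and Hamming distance:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    # Brute-force KNN with k=2, then Lowe's ratio test: accept (i, nn) only
    # if dist(i, nn) < ratio * dist(i, second-nearest).
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    matches = []
    for i, row in enumerate(order):
        nn, second = int(row[0]), int(row[1])
        if np.sqrt(d2[i, nn]) < ratio * np.sqrt(d2[i, second]):
            matches.append((i, nn))
    return matches

# Synthetic descriptors: desc_a[i] and desc_b[i] are noisy views of the
# same keypoint, so the correct match for index i is index i.
rng = np.random.default_rng(2)
true_desc = rng.normal(size=(30, 32))
desc_a = true_desc + 0.05 * rng.normal(size=(30, 32))
desc_b = true_desc + 0.05 * rng.normal(size=(30, 32))
matches = ratio_test_matches(desc_a, desc_b)
```

In weak-texture underwater scenes many descriptors look alike, so the second-nearest distance shrinks and the ratio test rejects those ambiguous pairs, which is exactly why it helps here.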

22 pages, 3453 KB  
Review
Diamond Sensor Technologies: From Multi Stimulus to Quantum
by Pak San Yip, Tiqing Zhao, Kefan Guo, Wenjun Liang, Ruihan Xu, Yi Zhang and Yang Lu
Micromachines 2026, 17(1), 118; https://doi.org/10.3390/mi17010118 - 16 Jan 2026
Viewed by 399
Abstract
This review explores the variety of diamond-based sensing applications, emphasizing their material properties, such as high Young’s modulus, thermal conductivity, wide bandgap, chemical stability, and radiation hardness. These diamond properties give excellent performance in mechanical, pressure, thermal, magnetic, optoelectronic, radiation, biosensing, quantum, and other applications. In vibration sensing, nano/poly/single-crystal diamond resonators operate from MHz to GHz frequencies, with high quality factor via CVD growth, diamond-on-insulator techniques, and ICP etching. Pressure sensing uses boron-doped piezoresistive, as well as capacitive and Fabry–Pérot readouts. Thermal sensing merges NV nanothermometry, single-crystal resonant thermometers, and resistive/diode sensors. Magnetic detection offers FeGa/Ti/diamond heterostructures, complementing NV. Optoelectronic applications utilize DUV photodiodes and color centers. Radiation detectors benefit from diamond’s neutron conversion capability. Biosensing leverages boron-doped diamond and hydrogen-terminated SGFETs, as well as gas targets such as NO2/NH3/H2 via surface transfer doping and Pd Schottky/MIS. Imaging uses AFM/NV probes and boron-doped diamond tips. Persistent challenges, such as grain boundary losses in nanocrystalline diamond, limited diamond-on-insulator bonding yield, high temperature interface degradation, humidity-dependent gas transduction, stabilization of hydrogen termination, near-surface nitrogen-vacancy noise, and the cost of high-quality single-crystal diamond, are being addressed through interface and surface chemistry control, catalytic/dielectric stack engineering, photonic integration, and scalable chemical vapor deposition routes. These advances are enabling integrated, high-reliability diamond sensors for extreme and quantum-enhanced applications. Full article

25 pages, 8224 KB  
Article
QWR-Dec-Net: A Quaternion-Wavelet Retinex Framework for Low-Light Image Enhancement with Applications to Remote Sensing
by Vladimir Frants, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2026, 17(1), 89; https://doi.org/10.3390/info17010089 - 14 Jan 2026
Viewed by 230
Abstract
Computer vision and deep learning are essential in diverse fields such as autonomous driving, medical imaging, face recognition, and object detection. However, enhancing low-light remote sensing images remains challenging for both research and real-world applications. Low illumination degrades image quality due to sensor limitations and environmental factors, weakening visual fidelity and reducing performance in vision tasks. Common issues such as insufficient lighting, backlighting, and limited exposure create low contrast, heavy shadows, and poor visibility, particularly at night. We propose QWR-Dec-Net, a quaternion-based Retinex decomposition network tailored for low-light image enhancement. QWR-Dec-Net consists of two key modules: a decomposition module that separates illumination and reflectance, and a denoising module that fuses a quaternion holistic color representation with wavelet multi-frequency information. This structure jointly improves color constancy and noise suppression. Experiments on low-light remote sensing datasets (LSCIDMR and UCMerced) show that QWR-Dec-Net outperforms current methods in PSNR, SSIM, LPIPS, and classification accuracy. The model’s accurate illumination estimation and stable reflectance make it well-suited for remote sensing tasks such as object detection, video surveillance, precision agriculture, and autonomous navigation. Full article
(This article belongs to the Section Artificial Intelligence)
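QWR-Dec-Net’s decomposition module is a learned network, but the underlying Retinex split it refines can be sketched classically: illumination as a large-scale local mean, reflectance as the pointwise quotient. The blur-based illumination estimate and the gamma value below are illustrative assumptions, not the paper’s method:

```python
import numpy as np

def retinex_decompose(img, k=15):
    # Classical Retinex split: illumination ~ local mean (large box blur),
    # reflectance = image / illumination.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    illum = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            illum += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    illum /= k * k
    reflect = img / np.maximum(illum, 1e-6)
    return illum, reflect

def enhance(img, gamma=0.4):
    # Brighten by gamma-correcting the illumination only, then recombine,
    # so reflectance detail is preserved while exposure is lifted.
    illum, reflect = retinex_decompose(img)
    return np.clip((illum ** gamma) * reflect, 0.0, 1.0)

y, x = np.mgrid[0:64, 0:64]
scene = 0.5 + 0.3 * np.sin(x / 5.0) * np.cos(y / 7.0)   # reflectance detail
low_light = np.clip(scene * 0.1, 0.0, 1.0)              # globally dim capture
bright = enhance(low_light)
```

The deep version replaces both the blur and the quotient with learned operators and adds quaternion/wavelet denoising, but the decomposition contract is the same: illumination carries exposure, reflectance carries detail.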

21 pages, 1073 KB  
Article
Near-Optimal Decoding Algorithm for Color Codes Using Population Annealing
by Fernando Martínez-García, Francisco Revson F. Pereira and Pedro Parrado-Rodríguez
Entropy 2026, 28(1), 91; https://doi.org/10.3390/e28010091 - 12 Jan 2026
Viewed by 252
Abstract
The development and use of large-scale quantum computers relies on integrating quantum error-correcting (QEC) schemes into the quantum computing pipeline. A fundamental part of the QEC protocol is the decoding of the syndrome to identify a recovery operation with a high success rate. In this work, we implement a decoder that finds the recovery operation with the highest success probability by mapping the decoding problem to a spin system and using Population Annealing to estimate the free energy of the different error classes. We study the decoder performance on a 4.8.8 color code lattice under different noise models, including code capacity with bit-flip and depolarizing noise, and phenomenological noise, which considers noisy measurements, with performance reaching near-optimal thresholds for bit-flip and depolarizing noise, and the highest reported threshold for phenomenological noise. This decoding algorithm can be applied to a wide variety of stabilizer codes, including surface codes and quantum Low-Density Parity Check (qLDPC) codes. Full article
(This article belongs to the Special Issue Coding Theory and Its Applications)
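Population Annealing itself can be shown on a toy spin system. The sketch below estimates ln Z for a small 1D Ising chain (not a color-code error class; the lattice, schedule, and population size are arbitrary), using the weight–resample–equilibrate loop the decoder relies on to estimate free energies:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 8, 4000                        # chain length, population size
betas = np.linspace(0.0, 0.5, 11)     # annealing schedule

def energy(S):
    # 1D ferromagnetic Ising chain, free boundaries: E = -sum_i s_i s_{i+1}
    return -(S[:, :-1] * S[:, 1:]).sum(axis=1)

S = rng.choice([-1, 1], size=(K, N))  # uniform population at beta = 0
lnZ = N * np.log(2.0)                 # exact ln Z at beta = 0

for b_prev, b in zip(betas[:-1], betas[1:]):
    w = np.exp(-(b - b_prev) * energy(S))
    lnZ += np.log(w.mean())           # free-energy increment estimate
    idx = rng.choice(K, size=K, p=w / w.sum())   # resample by weight
    S = S[idx]
    for _ in range(3):                # Metropolis sweeps at the new beta
        for i in range(N):
            nb = (S[:, i - 1] if i > 0 else 0) + (S[:, i + 1] if i < N - 1 else 0)
            dE = 2 * S[:, i] * nb
            flip = rng.random(K) < np.exp(-b * dE)
            S[flip, i] *= -1
```

For this chain the partition function is known in closed form (Z = 2(2 cosh β)^(N-1)), so the estimate can be checked exactly; in the decoder, the same machinery compares free energies of the four error classes instead.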

24 pages, 6081 KB  
Article
Color Image Encryption Based on Phase-Only Hologram Encoding Under Dynamic Constraint and Phase Retrieval Under Structured Light Illumination
by Wenqi Zhong, Yanfeng Su, Yiwen Wang, Xinyu Peng, Chenxia Li, Shanjun Nie, Zhijian Cai and Wenqiang Wan
Photonics 2026, 13(1), 66; https://doi.org/10.3390/photonics13010066 - 11 Jan 2026
Viewed by 184
Abstract
This paper introduces a color image encryption technique based on phase-only hologram (POH) encoding with dynamic constraint and phase retrieval under structured light illumination (SLI). During encryption, the color plaintext is first encoded into a POH. This hologram is then transformed into an amplitude distribution through phase-amplitude conversion. Subsequently, using an iterative phase retrieval algorithm under structured light, the amplitude is encrypted into a visible ciphertext image, while a phase-only mask (POM) set is produced. The resulting ciphertext exhibits a visible image pattern, rather than noise-like appearance, providing ultrahigh imperceptibility. Moreover, the dynamic constraint in hologram encoding ensures balanced quality across color channels, leading to high-quality decrypted images with correct keys. The incorporation of a structured phase mask and the POM set expands the key space and boosts security. In decryption, the decryption structured light (DSL) illuminates the ciphertext and the neural network sequentially to generate a reconstructed amplitude. This amplitude is converted into a phase distribution via amplitude-phase conversion, which then acts as the POH for color holographic reconstruction, yielding the decrypted image. Numerical simulations demonstrate the method’s feasibility, high security, and strong robustness. Full article
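The iterative phase retrieval such schemes build on follows the classic Gerchberg–Saxton alternation between amplitude constraints in the two planes. The target pattern and plain FFT propagation below are simplifications (the paper adds structured light illumination and dynamic constraints):

```python
import numpy as np

rng = np.random.default_rng(4)
target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0                 # desired far-field amplitude
source_amp = np.ones((32, 32))           # uniform illumination amplitude

field = source_amp * np.exp(1j * 2 * np.pi * rng.random((32, 32)))
errors = []
for _ in range(50):
    far = np.fft.fft2(field)
    # amplitude mismatch between achieved and target far-field patterns
    a = np.abs(far) / np.linalg.norm(far)
    b = target / np.linalg.norm(target)
    errors.append(float(np.abs(a - b).sum()))
    far = target * np.exp(1j * np.angle(far))         # impose target amplitude
    near = np.fft.ifft2(far)
    field = source_amp * np.exp(1j * np.angle(near))  # impose source constraint
phase_mask = np.angle(field)             # retrieved phase-only mask
```

Each iteration keeps the retrieved phase while clamping the amplitude in alternate planes, so the far-field error shrinks as the loop converges toward a phase-only mask that reproduces the target.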

39 pages, 14025 KB  
Article
Degradation-Aware Multi-Stage Fusion for Underwater Image Enhancement
by Lian Xie, Hao Chen and Jin Shu
J. Imaging 2026, 12(1), 37; https://doi.org/10.3390/jimaging12010037 - 8 Jan 2026
Viewed by 276
Abstract
Underwater images frequently suffer from color casts, low illumination, and blur due to wavelength-dependent absorption and scattering. We present a practical two-stage, modular, and degradation-aware framework designed for real-time enhancement, prioritizing deployability on edge devices. Stage I employs a lightweight CNN to classify inputs into three dominant degradation classes (color cast, low light, blur) with 91.85% accuracy on an EUVP subset. Stage II applies three scene-specific lightweight enhancement pipelines and fuses their outputs using two alternative learnable modules: a global Linear Fusion and a LiteUNetFusion (spatially adaptive weighting with optional residual correction). Compared to the three single-scene optimizers (average PSNR = 19.0 dB; mean UCIQE ≈ 0.597; mean UIQM ≈ 2.07), the Linear Fusion improves PSNR by +2.6 dB on average and yields roughly +20.7% in UCIQE and +21.0% in UIQM, while maintaining low latency (~90 ms per 640 × 480 frame on an Intel i5-13400F (Intel Corporation, Santa Clara, CA, USA)). The LiteUNetFusion further refines results: it raises PSNR by +1.5 dB over the Linear model (23.1 vs. 21.6 dB), brings modest perceptual gains (UCIQE from 0.72 to 0.74, UIQM from 2.5 to 2.8) at a runtime of ≈125 ms per 640 × 480 frame, and better preserves local texture and color consistency in mixed-degradation scenes. We release implementation details for reproducibility and discuss limitations (e.g., occasional blur/noise amplification and domain generalization) together with future directions. Full article
(This article belongs to the Section Image and Video Processing)
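A global linear fusion module of the kind described can be illustrated by solving for the fusion weights directly: least squares against a reference stands in for gradient training here, and the three candidate "enhancements" are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
ref = rng.uniform(0.0, 1.0, (48, 48))            # stand-in ground-truth frame
# Three imperfect single-scene enhancement outputs: noisy, low-contrast,
# and gamma-shifted versions of the reference.
cands = np.stack([ref + 0.10 * rng.normal(size=ref.shape),
                  0.8 * ref + 0.1,
                  ref ** 1.2])

def fit_linear_fusion(cands, ref):
    # Global linear fusion: one scalar weight per candidate, fit by
    # least squares over all pixels.
    A = cands.reshape(len(cands), -1).T
    w, *_ = np.linalg.lstsq(A, ref.ravel(), rcond=None)
    return w

def mse(a, b):
    return float(((a - b) ** 2).mean())

w = fit_linear_fusion(cands, ref)
fused = np.tensordot(w, cands, axes=1)
best_single = min(mse(c, ref) for c in cands)
fused_mse = mse(fused, ref)
```

Because each candidate alone is a special case of the linear combination, the fitted fusion can never be worse than the best single pipeline on the fitting data, which is the basic argument for fusing scene-specific enhancers.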

34 pages, 9553 KB  
Article
Research on Multi-Stage Optimization for High-Precision Digital Surface Model and True Digital Orthophoto Map Generation Methods
by Yingwei Ge, Renke Ji, Bingxuan Guo, Qinsi Wang, Xiao Jiang and Mofei Chen
Remote Sens. 2026, 18(2), 197; https://doi.org/10.3390/rs18020197 - 7 Jan 2026
Viewed by 184
Abstract
To enhance the overall quality and consistency of depth maps, Digital Surface Models (DSMs), and True Digital Orthophoto Maps (TDOMs) in UAV image reconstruction, this paper proposes a multi-stage adaptive optimization generation method. First, to address the noise and outlier issues in depth maps, an adaptive joint bilateral filtering-based optimization method is introduced. This method repairs anomalous depth values using a four-directional filling strategy and incorporates image-guided joint bilateral filtering to enhance edge structure representation, effectively improving the accuracy and continuity of the depth map. Next, during the DSM generation stage, a method based on depth value voting space and elevation anomaly detection is proposed. A joint mechanism of elevation calculation and anomaly point detection is used to suppress noise and errors, while a height value completion strategy significantly enhances the geometric accuracy and integrity of the DSM. Finally, in the TDOM generation process, occlusion detection and gap-line generation techniques are introduced. Together with uniform lighting, color adjustment, and image gap optimization strategies, this improves texture stitching continuity and brightness consistency, effectively reducing artifacts caused by gaps, blurriness, and lighting differences. Experimental results show that the proposed method significantly improves depth map smoothness, DSM geometric accuracy, and TDOM visual consistency compared to traditional methods, providing a complete and efficient technical pathway for high-quality surface reconstruction. Full article
(This article belongs to the Special Issue Remote Sensing for 2D/3D Mapping)
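The depth-vote and elevation-anomaly idea for a single DSM cell can be sketched with a robust z-score: collect the elevation votes from overlapping views, reject outliers by median absolute deviation, and average the inliers. The MAD threshold and the sample votes are illustrative assumptions, not the paper’s exact mechanism:

```python
import numpy as np

def cell_elevation(samples, thresh=3.0):
    # Vote on a DSM cell height: reject elevation outliers by robust
    # z-score (median absolute deviation), then average the inliers.
    samples = np.asarray(samples, dtype=float)
    med = np.median(samples)
    mad = np.median(np.abs(samples - med)) + 1e-9
    z = np.abs(samples - med) / (1.4826 * mad)   # 1.4826 ~ Gaussian consistency
    inliers = samples[z < thresh]
    return float(inliers.mean())

# Elevation votes for one cell from overlapping views, with two gross
# outliers (a mismatched roof edge and a failed match).
votes = [10.1, 10.0, 9.9, 10.2, 10.05, 42.0, -3.0]
h = cell_elevation(votes)
```

The two gross outliers are discarded and the cell height settles on the consensus of the five consistent votes.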

14 pages, 2218 KB  
Article
Singular Value Decomposition Wavelength-Multiplexing Ghost Imaging
by Yingtao Zhang, Xueqian Zhang, Zongguo Li and Hongguo Li
Photonics 2026, 13(1), 49; https://doi.org/10.3390/photonics13010049 - 5 Jan 2026
Viewed by 322
Abstract
To enhance imaging quality, singular value decomposition (SVD) has been applied to single-wavelength ghost imaging (GI) or color GI. In this paper, we extend the application of SVD to wavelength-multiplexing ghost imaging (WMGI) for reducing the redundant information in the random measurement matrix corresponding to multi-wavelength modulated speckle fields. The feasibility of this method is demonstrated through numerical simulations and optical experiments. Based on the intensity statistical properties of multi-wavelength speckle fields, we derived an expression for the contrast-to-noise ratio (CNR) to characterize imaging quality and conducted a corresponding analysis. The theoretical results indicate that in SVDWMGI, for the m-wavelength case, the CNR of the reconstructed image is m times that of single-wavelength GI. Moreover, we carried out an optical experiment with a three-wavelength speckle-modulated light source to verify the method. This approach integrates the advantages of both SVD and wavelength division multiplexing, potentially facilitating the application of GI in long-distance imaging fields such as remote sensing. Full article
(This article belongs to the Special Issue Ghost Imaging and Quantum-Inspired Classical Optics)
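The core SVD step, removing redundancy from the random measurement matrix, can be demonstrated in a few lines: flatten the singular values to one and use the result in a correlation-type GI reconstruction. The sizes and the square, noise-free, single-wavelength setup are simplifications of the multi-wavelength experiment:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 16 * 16                          # flattened image size
obj = rng.uniform(0.0, 1.0, n)       # unknown object (ground truth)

A = rng.normal(size=(n, n))          # random speckle measurement matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_svd = U @ Vt                       # redundancy removed: all singular values -> 1

def gi_reconstruct(M, x):
    # Correlation-type GI reconstruction: weight each pattern by its
    # (mean-centered) bucket measurement.
    y = M @ x
    return M.T @ (y - y.mean())

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rec_raw = gi_reconstruct(A, obj)
rec_svd = gi_reconstruct(A_svd, obj)
```

With unit singular values the measurement matrix becomes orthogonal, so the correlation reconstruction is nearly exact, while the raw random matrix leaves substantial cross-talk between patterns.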

18 pages, 2680 KB  
Article
Temporally Aware Objective Quality Metric for Immersive Video
by Jakub Stankowski, Bartosz Sojka, Tomasz Grajek and Adrian Dziembowski
Appl. Sci. 2026, 16(1), 274; https://doi.org/10.3390/app16010274 - 26 Dec 2025
Viewed by 264
Abstract
State-of-the-art objective quality metrics designed for immersive content typically prioritize spatial distortions; therefore, they can omit temporal artifacts introduced by view synthesis and dynamic scene rendering. Consequently, metrics such as the commonly used peak signal-to-noise ratio for immersive video (IV-PSNR) are “temporally blind”, creating a conceptual gap where temporally stable distortions cannot be distinguished from disruptive temporal flickering. To address this limitation, we propose a temporal extension of the IV-PSNR metric that incorporates motion information into the quality assessment process. The method augments the traditional Y, U, and V color components with a fourth channel representing motion vectors (M), enabling the proposed four-component IV-PSNRYUVM metric to account for dynamic distortions introduced by view rendering. To evaluate the effectiveness of the proposed approach, multiple configurations of motion integration were tested, including metrics based solely on motion consistency, metrics combining motion with texture, and several dense optical flow algorithms with different parameter settings. Extensive experiments performed on immersive video sequences demonstrate that the proposed four-component IV-PSNRYUVM achieves the highest correlation with subjectively perceived video quality. These results confirm that combining texture with motion information provides a benefit, making the proposal a valuable addition for real-world immersive video systems. Full article
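The effect of adding a motion channel to a weighted multi-channel PSNR can be sketched directly. The 4:1:1:2 weights and the synthetic planes below are illustrative assumptions, not the published IV-PSNR-YUVM definition:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = ((ref - test) ** 2).mean()
    return 10.0 * np.log10(peak ** 2 / mse)

def weighted_psnr(refs, tests, weights):
    # Weighted mean of per-channel PSNRs over (Y, U, V[, M]) planes.
    vals = np.array([psnr(r, t) for r, t in zip(refs, tests)])
    w = np.array(weights, dtype=float)
    return float((w * vals).sum() / w.sum())

rng = np.random.default_rng(7)
ref = [rng.uniform(0, 255, (36, 36)) for _ in range(4)]   # Y, U, V, M planes
flicker = [c + rng.normal(0, 1, c.shape) for c in ref]    # mild spatial error
flicker[3] = ref[3] + rng.normal(0, 25, ref[3].shape)     # strong motion error

spatial_only = weighted_psnr(ref[:3], flicker[:3], (4, 1, 1))
with_motion = weighted_psnr(ref, flicker, (4, 1, 1, 2))
```

A sequence whose textures match but whose motion field flickers scores high on the YUV-only metric and is penalized only once the M plane enters the average, which is the "temporal blindness" the abstract targets.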

23 pages, 12620 KB  
Article
The Color Image Watermarking Algorithm Based on Quantum Discrete Wavelet Transform and Chaotic Mapping
by Yikang Yuan, Wenbo Zhao, Zhongyan Li and Wanquan Liu
Symmetry 2026, 18(1), 33; https://doi.org/10.3390/sym18010033 - 24 Dec 2025
Viewed by 352
Abstract
Quantum watermarking is a technique that embeds specific information into a quantum carrier for the purpose of digital copyright protection. In this paper, we propose a novel color image watermarking algorithm that integrates quantum discrete wavelet transform with Sinusoidal–Tent mapping and baker mapping. Initially, chaotic sequences are generated using Sinusoidal–Tent mapping to determine the channels suitable for watermark embedding. Subsequently, a one-level quantum Haar wavelet transform is applied to the selected channel to decompose the image. The watermarked image is then scrambled via discrete baker mapping, and the scrambled image is embedded into the High-High subbands. The invisibility of the watermark is evaluated by calculating the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS), with comparisons made against the color histogram. The robustness of the proposed algorithm is assessed through the calculation of Normalized Cross-Correlation (NCC). In the simulation results, PSNR is close to 63, SSIM is close to 1, LPIPS is close to 0.001, and NCC is close to 0.97. This indicates that the proposed watermarking algorithm exhibits excellent visual quality and a robust capability to withstand various attacks. Additionally, through an ablation study, the contribution of each technique to overall performance was systematically evaluated. Full article
(This article belongs to the Section Computer)
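The classical (non-quantum) analogue of the embedding pipeline, a one-level Haar transform, embedding in the HH subband, and an inverse transform, looks like this; the strength alpha and the random host/watermark are illustrative, and the chaotic channel selection and baker scrambling are omitted:

```python
import numpy as np

def haar2(img):
    # One-level 2D Haar transform -> (LL, LH, HL, HH) subbands.
    a = (img[0::2] + img[1::2]) / 2.0
    d = (img[0::2] - img[1::2]) / 2.0
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    # Exact inverse of haar2.
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.zeros((LL.shape[0] * 2, LL.shape[1] * 2))
    out[0::2], out[1::2] = a + d, a - d
    return out

rng = np.random.default_rng(8)
host = rng.uniform(0, 255, (64, 64))            # stand-in host channel
wm = rng.choice([-1.0, 1.0], size=(32, 32))     # binary watermark (+/-1)
alpha = 2.0                                      # embedding strength

LL, LH, HL, HH = haar2(host)
marked = ihaar2(LL, LH, HL, HH + alpha * wm)    # embed in the HH subband
recovered = np.sign(haar2(marked)[3] - HH)      # non-blind extraction
p = 10.0 * np.log10(255.0 ** 2 / ((host - marked) ** 2).mean())
```

Embedding in HH keeps the perturbation in the least perceptible high-frequency band, which is why the PSNR of the marked image stays high while the watermark remains exactly recoverable.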

22 pages, 8263 KB  
Article
Research on Propeller Defect Diagnosis of Rotor UAVs Based on MDI-STFFNet
by Beining Cui, Dezhi Jiang, Xinyu Wang, Lv Xiao, Peisen Tan, Yanxia Li and Zhaobin Tan
Symmetry 2026, 18(1), 3; https://doi.org/10.3390/sym18010003 - 19 Dec 2025
Viewed by 268
Abstract
To address flight safety risks from rotor defects in rotorcraft drones operating in complex low-altitude environments, this study proposes a high-precision diagnostic model based on the Multimodal Data Input and Spatio-Temporal Feature Fusion Network (MDI-STFFNet). The model uses a dual-modality coupling mechanism that integrates vibration and air pressure signals, forming a “single-path temporal, dual-path representational” framework. The one-dimensional vibration signal and the five-channel pressure array are mapped into a texture space via phase space reconstruction and color-coded recurrence plots, followed by extraction of transient spatial features using a pre-trained ResNet-18 model. Parallel LSTM networks capture long-term temporal dependencies, while a parameter-free 1D max-pooling layer compresses redundant pressure data, reducing LSTM parameter growth. The CSW-FM module enables adaptive fusion across modal scales via shared-weight mapping and learnable query vectors that dynamically assign spatiotemporal weights. Experiments on a self-built dataset with seven defect types show that the model achieves 99.01% accuracy, improving by 4.46% and 1.98% over single-modality vibration and pressure inputs, respectively. Ablation studies confirm the benefits of spatiotemporal fusion and soft weighting in accuracy and robustness. The model provides a scalable, lightweight solution for UAV power system fault diagnosis under high-noise and varying conditions. Full article
(This article belongs to the Section Engineering and Materials)
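The phase-space-reconstruction plus recurrence-plot encoding used to turn the vibration signal into a texture can be sketched as a thresholded pairwise-distance matrix. The embedding dimension, delay, and 20%-of-max threshold are arbitrary illustrative choices:

```python
import numpy as np

def recurrence_plot(x, dim=3, tau=2, eps=None):
    # Time-delay embedding into phase space, then a thresholded
    # pairwise-distance matrix: rp[i, j] = 1 when states i and j recur.
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    if eps is None:
        eps = 0.2 * d.max()
    return (d < eps).astype(np.uint8)

t = np.linspace(0, 8 * np.pi, 200)
rp_periodic = recurrence_plot(np.sin(t))       # healthy-rotor-like tone
rng = np.random.default_rng(9)
rp_noise = recurrence_plot(rng.normal(size=200))  # noise-like signal
```

A periodic signal produces long diagonal recurrence structures (states one period apart recur), while noise yields scattered points; those texture differences are what a pre-trained CNN can then classify.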

16 pages, 3612 KB  
Article
Two-Stage Denoising Diffusion Model for Low-Light Image Enhancement
by Danchen Wang, Hao Zhang, Rongsan Chen and Xiang Li
Appl. Sci. 2026, 16(1), 18; https://doi.org/10.3390/app16010018 - 19 Dec 2025
Viewed by 442
Abstract
Images captured under weak illumination typically suffer from low brightness and contrast, severe color distortion, and significant noise contamination, which not only degrade human visual perception but also hinder the performance of high-level vision tasks. Low-light image enhancement aims to improve visual quality and provide favorable conditions for subsequent image processing. To address the challenges of non-uniform illumination and loss of details in dark regions, we propose a two-stage denoising diffusion model (two-stage DDM). Specifically, we design a convolution-based Retinex decomposition module to achieve fast and robust image decomposition, followed by a two-stage diffusion-based denoising process that further enhances global image details, brightness, and contrast. In addition, we introduce a feature enhancement module to strengthen the representational capacity of the reflectance component. To evaluate the robustness and generalization ability of the proposed model, extensive experiments are conducted on the LOLv1, LOLv2-real, and LSRW datasets. Experimental results demonstrate that the proposed two-stage DDM achieves competitive performance with state-of-the-art methods, producing enhanced images with more natural and spatially uniform brightness, along with noticeably improved visual quality. Full article
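The closed-form relation a denoising diffusion model inverts can be checked numerically: the forward process mixes the clean image with Gaussian noise, and plugging the true noise in for the network’s prediction recovers the clean image exactly. The schedule and sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(10)
T = 100
betas = np.linspace(1e-4, 0.02, T)   # standard linear variance schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)            # \bar{alpha}_t

x0 = rng.uniform(-1, 1, (16, 16))    # stand-in clean (well-lit) image
t = 60
eps = rng.normal(size=x0.shape)
# Forward noising: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps
xt = np.sqrt(abar[t]) * x0 + np.sqrt(1 - abar[t]) * eps

# A trained network predicts eps from (x_t, t); substituting the true eps
# shows the closed-form x_0 estimate the reverse (denoising) step relies on.
x0_hat = (xt - np.sqrt(1 - abar[t]) * eps) / np.sqrt(abar[t])
```

In the actual model the predicted noise is imperfect, so the reverse process iterates this estimate over many steps; the identity above is the anchor each step is built around.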
