Search Results (265)

Search Parameters:
Keywords = visual perturbations

33 pages, 15024 KB  
Article
HFA-Net: Explainable Multi-Scale Deep Learning Framework for Illumination-Invariant Plant Disease Diagnosis in Precision Agriculture
by Muhammad Hassaan Ashraf, Farhana Jabeen, Muhammad Waqar and Ajung Kim
Sensors 2026, 26(7), 2067; https://doi.org/10.3390/s26072067 - 26 Mar 2026
Abstract
Robust plant disease detection in real-world agricultural environments remains challenging due to dynamic environmental conditions. Accurate and reliable disease identification is essential for precision agriculture and effective crop management. Although computer vision and Artificial Intelligence (AI) have shown promising results in controlled settings, their performance often drops under lesion scale variability, inter- and intra-class similarity among diseases, class imbalance, and illumination fluctuations. To overcome these challenges, we propose a Heterogeneous Feature Aggregation Network (HFA-Net) that brings together architectural improvements, illumination-aware preprocessing, and training-level enhancements into a single cohesive framework. To extract richer and more discriminative features from the early layers of the network, HFA-Net introduces a multi-scale, multi-level feature aggregation stem. The Reduction-Expansion (RE) mechanism helps preserve important lesion details while adapting to variations in scale. An Illumination-Adaptive Contrast Enhancement (IACE) preprocessing pipeline is designed to address the illumination variability of real agricultural environments. Experimental results show that HFA-Net achieves 96.03% accuracy under normal conditions and maintains strong performance under challenging lighting scenarios, achieving 92.95% and 93.07% accuracy in extremely dark and bright environments, respectively. Furthermore, quantitative explainability analysis using perturbation-based metrics demonstrates that the model’s predictions are not only accurate but also faithful to disease-relevant regions. Finally, Grad-CAM-based visual explanations confirm that the model’s predictions are driven by disease-specific regions, enhancing interpretability and practical reliability.
(This article belongs to the Section Smart Agriculture)

18 pages, 12071 KB  
Article
A Novel Reversible Image Camouflaging Method Based on Lossless Matrix Transformation
by Gizem Dursun Demir and Ufuk Özkaya
Mathematics 2026, 14(7), 1111; https://doi.org/10.3390/math14071111 - 26 Mar 2026
Abstract
Image encryption methods aim to transform a secret image into a noise-like, texture-like image. Because this noise-like appearance signals that the image is encrypted, it attracts a large number of attacks. One of the most effective ways to counter this threat is to protect the information by transforming the original image into a new, meaningful image. The bottleneck of this approach is that the new image in which the information is embedded must have a visual quality high enough to be indistinguishable from a real image. Another critical requirement is recovering the original image without loss. In this paper, we propose a reversible image camouflage method based on lossless matrix transformation and two-dimensional wavelet transformation. Random matrix perturbation is introduced and applied as an effective method for the lossless transformation of low-frequency or flat regions. The proposed method was applied to different datasets for performance analysis. The PSNR values of the plain/camouflage image pairs are above 55 dB, and the SSIM values obtained by our method are very close to 0.9999 on these datasets. The experimental results demonstrate that the method’s performance is independent of the content of the plain/target image and of the fragment size. Furthermore, in cases where the target image is specifically chosen, PSNR values exceed 58 dB. Additionally, the efficacy of the method in generating camouflage images has been demonstrated through histogram analysis and performance analysis in the low- and high-frequency regions.
(This article belongs to the Section E1: Mathematics and Computer Science)
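The entry above reports PSNR values above 55 dB for plain/camouflage pairs. For reference, PSNR is a standard function of the mean squared error; a minimal sketch of its usual definition (with toy 8×8 arrays, not the paper's data):

```python
import numpy as np

def psnr(original: np.ndarray, camouflage: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((original.astype(np.float64) - camouflage.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 1 grey level gives MSE = 1, i.e. about 48.13 dB.
a = np.zeros((8, 8), dtype=np.uint8)
b = a + 1
print(round(psnr(a, b), 2))  # → 48.13
```

Values near 55 dB therefore correspond to a mean squared error well below one grey level, which is why the camouflage image is visually indistinguishable from the cover.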

23 pages, 4705 KB  
Article
CSFPR-RTDETR-CR: A Causal Intervention Enhanced Framework for Infrared UAV Small Target Detection with Feature Debiasing
by Honglong Wang and Lihui Sun
Sensors 2026, 26(6), 1941; https://doi.org/10.3390/s26061941 - 19 Mar 2026
Abstract
Infrared UAV small target detection is critical in areas such as military reconnaissance, disaster monitoring, and border patrol. However, it faces challenges due to the small size of targets, weak texture, and complex backgrounds in infrared images. Existing deep learning-based object detection models often learn spurious correlations between targets and their backgrounds. This leads to poor generalization and higher rates of false positives and missed detections in complex scenes. To overcome feature bias and improve performance, this paper proposes an enhanced detection framework based on causal reasoning. The framework builds on the advanced CSFPR-RTDETR detector. Guided by the principles of structural causal models, it explicitly separates causal and non-causal features in the feature space. Feature debiasing is achieved through a three-path approach. First, a causal data augmentation module is introduced. It applies frequency perturbations drawn from a Gaussian distribution to non-causal features. This strengthens the model’s robustness against mixed disturbances. Second, a counterfactual reasoning module is integrated into the backbone network. This module generates counterfactual samples to intervene in the feature distribution, helping the model identify and utilize causal features more effectively. Third, a causal attention mechanism module is added to the encoder. By distinguishing and weighting causal and non-causal features, it guides the model to focus on features that are essential for detecting targets. Experiments on the HIT-UAV public dataset show that the proposed framework improves mAP@50 by 5.6% and mAP@50:95 by 1.8%. Visualization analysis further confirms that the framework enhances feature discrimination and overall detection performance. Full article

30 pages, 26587 KB  
Article
Research on Synthetic Data Methods and Detection Models for Micro-Cracks
by Yaotong Jiang, Tianmiao Wang, Xuanhe Chen and Jianhong Liang
Sensors 2026, 26(6), 1883; https://doi.org/10.3390/s26061883 - 17 Mar 2026
Abstract
Micro-crack detection on concrete surfaces is challenging because labeled micro-crack data are scarce, crack cues are extremely weak (often only a few pixels wide), and complex backgrounds (e.g., non-uniform illumination, shadows, and stains) degrade feature extraction; this study aims to improve both data availability and detection robustness for practical inspection. A Poisson image editing-based synthesis strategy is developed to generate visually coherent micro-crack samples via gradient-domain blending, and a Complex-Scene-Tolerant YOLO (CST-YOLO) detector is proposed on top of YOLOv10, following a “lighting decoupling–global perception–micro-feature enhancement” design. CST-YOLO integrates a Lighting-Adaptive Preprocessing Module (LAPM) to suppress illumination/shadow perturbations, a Spatial–Channel Sparse Transformer (SCS-Former) to model long-range crack topology efficiently, and a Small Object Focus Block (SOFB) to enhance micro-scale cues under cluttered backgrounds. Experiments are conducted on a 650-image dataset (200 real and 450 synthesized), in which synthesized samples are used only for training, and the validation/test sets contain only real images, with a 7:2:1 split. CST-YOLO achieves 0.990 mAP@0.5 and 0.926 mAP@0.5:0.95 at 139 FPS, and ablation results indicate complementary contributions from LAPM, SCS-Former, and SOFB. These results support the effectiveness of combining realistic synthesis and architecture-level robustness for real-time micro-crack detection in complex scenes.
(This article belongs to the Section Fault Diagnosis & Sensors)

18 pages, 10950 KB  
Article
A Predictable-Image Solution for Copyright Protection Based on Layer-Wise Relevance Propagation
by Yougyung Park, Sieun Kim and Inwhee Joe
Appl. Sci. 2026, 16(6), 2864; https://doi.org/10.3390/app16062864 - 16 Mar 2026
Abstract
As artificial intelligence (AI) systems are increasingly deployed in real-world applications, concerns regarding the unauthorized use of copyrighted images during model training have become more pronounced. In particular, both generative and discriminative models may implicitly internalize distinctive visual patterns from copyrighted data, leading to potential ethical and legal risks even after data removal. In this study, we propose a practical copyright protection framework, termed the Predictable-Image Solution (PIS), which aims to disrupt the learning of copyrighted visual features during the training process. PIS leverages Layer-wise Relevance Propagation (LRP) to identify image regions that contribute positively to a model’s prediction and selectively modifies these regions using non-copyrighted visual substitutes, such as textures or benign image patterns. By targeting semantically influential regions rather than applying global perturbations, the proposed approach effectively interferes with feature extraction while preserving the perceptual quality and overall visual structure of the original image. Extensive experiments conducted on multiple pre-trained image classification models demonstrate that PIS consistently degrades classification performance on protected images, while maintaining high visual similarity as measured by perceptual metrics. These results indicate that PIS offers an effective, model-agnostic, and visually unobtrusive solution for mitigating unauthorized exploitation of copyrighted images in practical AI training scenarios. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
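The framework above builds on Layer-wise Relevance Propagation to locate regions that contribute positively to a prediction. The paper's exact variant is not reproduced here; as a reference point, the widely used LRP-ε rule redistributes the relevance $R_k$ of a neuron $k$ to its inputs in proportion to their contributions:

```latex
R_j \;=\; \sum_k \frac{a_j\, w_{jk}}{\epsilon + \sum_{j'} a_{j'}\, w_{j'k}}\; R_k ,
```

where $a_j$ are activations, $w_{jk}$ are weights, and $\epsilon$ stabilizes small denominators; relevance is approximately conserved from layer to layer, which is what makes the resulting maps usable for selecting regions to modify.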

16 pages, 310 KB  
Article
A Regularized Backbone-Level Cross-Modal Interaction Framework for Stable Temporal Reasoning in Video-Language Models
by Geon-Woo Kim and Ho-Young Jung
Mathematics 2026, 14(6), 996; https://doi.org/10.3390/math14060996 - 15 Mar 2026
Abstract
Deep learning approaches for egocentric video understanding often lack a principled theoretical treatment of stability, particularly when dealing with the sparse, noisy, and temporally ambiguous observations characteristic of first-person imaging. In this work, we frame egocentric video question answering not merely as a classification task, but as an ill-posed inverse problem aimed at reconstructing latent semantic intent from stochastically perturbed visual signals. To address the instability inherent in standard dual-encoder architectures, we present a framework with a mathematical interpretation that incorporates gated cross-modal interaction within the transformer backbone. Formally, the video-side update analyzed in this work is defined as a learnable convex combination of unimodal feature representations and cross-modal attention residuals; the full implementation applies analogous gated cross-modal updates bidirectionally. From a regularization perspective, the gating mechanism can be interpreted as an adaptive parameter that balances data fidelity against language-conditioned structural constraints during feature reconstruction. We provide the Bounded Update Property (Lemma 1) and an analytical layer-wise sensitivity bound and empirically demonstrate that the proposed framework achieves measurable improvements in both accuracy and stability on the EgoTaskQA and MSR-VTT benchmarks. On EgoTaskQA, our model improves accuracy from 27.0% to 31.7% (+4.7 pp) and reduces the accuracy drop under 50% frame drop from 3.93 pp to 0.94 pp. On MSR-VTT, our model improves accuracy by 13.0 pp over the dual-encoder baseline. Under severe perturbation (50% frame drop) on MSR-VTT, our model retains 97.7% of its clean performance, whereas the baseline exhibits near-zero drop accompanied by majority-class behavior. 
These results provide empirical evidence that the proposed interaction induces stable behavior under perturbations in an ill-posed multimodal inference setting, mitigating sensitivity to sampling variability while preserving query-relevant temporal structure. Furthermore, an entropy-based analysis indicates that the gating mechanism prevents excessive diffusion of attention, promoting coherent temporal reasoning. Overall, this work offers a mathematically informed perspective on designing interaction mechanisms for stable multimodal systems, with a focus on robust reasoning under temporal ambiguity. Full article
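The abstract above defines the video-side update as a learnable convex combination of unimodal features and cross-modal attention residuals. A schematic NumPy sketch of that update (toy vectors and a fixed gate for illustration; the learned, bidirectional implementation is the paper's, not shown here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_update(h_video, attn_residual, gate_logit):
    """Convex combination of the unimodal feature and the cross-modal
    attention residual; g in (0, 1) is learned in the paper, fixed here."""
    g = sigmoid(gate_logit)
    return (1.0 - g) * h_video + g * attn_residual

h = np.array([1.0, 0.0])   # unimodal video feature (toy)
r = np.array([0.0, 1.0])   # language-conditioned attention residual (toy)
out = gated_update(h, r, gate_logit=0.0)   # g = 0.5, so the midpoint
print(out)  # → [0.5 0.5]
```

Because the output always lies in the convex hull of the two inputs, the update is bounded whenever the inputs are, which is the intuition behind the Bounded Update Property stated in the abstract.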

19 pages, 2755 KB  
Article
CA-Adv: Curvature-Adaptive Weighted Adversarial 3D Point Cloud Generation Method for Remote Sensing Scenarios
by Yanwen Sun, Shijia Xiao, Weiquan Liu, Min Huang, Chaozhi Cheng, Shiwei Lin, Jinhe Su, Zongyue Wang and Guorong Cai
Remote Sens. 2026, 18(6), 882; https://doi.org/10.3390/rs18060882 - 13 Mar 2026
Abstract
Adversarial robustness in 3D point cloud recognition models is a critical concern in remote sensing applications, such as autonomous driving and infrastructure monitoring. Existing adversarial attack methods can compromise model performance; moreover, they often neglect the intrinsic geometric properties of point clouds, leading to perceptually unnatural perturbations that limit their practicality for robustness evaluation in real-world scenarios. To address this, we propose CA-Adv, a novel curvature-adaptive weighted adversarial generation method for 3D point clouds. Our approach first employs Shapley values to assess regional sensitivity and identify salient regions. It then adaptively partitions these regions based on local curvature and assigns perturbation weights accordingly, concentrating the attack on geometrically sensitive areas while preserving overall structural consistency through explicit geometric constraints. Extensive experiments on real-world remote sensing data (KITTI) and synthetic benchmarks (ModelNet40, ShapeNet) demonstrate that CA-Adv achieves a high attack success rate with a minimal perturbation budget. The generated adversarial examples maintain superior visual naturalness and geometric fidelity. The method provides a practical tool for evaluating the robustness of 3D recognition models in applications such as autonomous driving, urban-scale LiDAR perception, and remote sensing point cloud analysis. Full article

26 pages, 2382 KB  
Article
Evaluating the Effectiveness of Explainable AI for Adversarial Attack Detection in Traffic Sign Recognition Systems
by Bill Deng Pan, Yupeng Yang, Richard Guo, Yongxin Liu, Hongyun Chen and Dahai Liu
Mathematics 2026, 14(6), 971; https://doi.org/10.3390/math14060971 - 12 Mar 2026
Abstract
Connected autonomous vehicles (CAVs) rely on deep neural network-based perception systems to operate safely in complex driving environments. However, these systems remain vulnerable to adversarial perturbations that can induce misclassification without perceptible changes to human observers. Explainable artificial intelligence (XAI) has been proposed as a potential adversarial detection mechanism by exposing inconsistencies in model attention. This study evaluated the effectiveness of NoiseCAM-based explanation-space detection on the German Traffic Sign Recognition Benchmark (GTSRB) using a single 32 × 32 CNN architecture. Adversarial examples were generated using FGSM under perturbation budgets ϵ = 0.01–0.10, and detection performance was evaluated using accuracy, precision, recall, F1-score, and ROC–AUC. Results show that NoiseCAM achieves detection accuracies between 51.8% and 52.9% with ROC–AUC values of 0.52–0.53, only marginally above random discrimination (0.5). Class-wise analysis further reveals substantial variability in detection reliability across traffic sign categories, with visually structured regulatory signs exhibiting higher separability than complex warning signs. These findings suggest that explanation-space inconsistencies alone provide limited adversarial detection capability in low-resolution, safety-critical perception pipelines. The study contributes to the understanding of the operational limits of explanation-based adversarial detection and highlights the need to integrate XAI signals with complementary robustness or uncertainty-aware mechanisms for reliable deployment in autonomous driving systems. Full article
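The study above generates its adversarial examples with FGSM at budgets ε = 0.01–0.10. FGSM is the standard one-step attack x_adv = x + ε·sign(∇ₓL); a self-contained sketch using a toy logistic-regression "model" with an analytic input gradient (random weights, purely illustrative, not the paper's CNN):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx), with L the binary
    cross-entropy of a logistic model; the input gradient is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def bce(x, w, b, y):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0
x = rng.uniform(0.2, 0.8, size=16)   # leave room so the clip never saturates
y = 1.0
x_adv = fgsm(x, w, b, y, eps=0.1)
assert bce(x_adv, w, b, y) > bce(x, w, b, y)   # loss strictly increases
```

Each pixel moves by at most ε, which is exactly the perturbation-budget notion swept in the evaluation above.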

19 pages, 764 KB  
Article
FeOCR: Domain-Adaptive Chinese OCR with Visual Character Disambiguation and LLM-Based Correction for Metallurgical Documents
by Qiang Zheng, Yaxuan Sun, Lin Wang, Haoning Zhang, Fanjie Meng and Minghui Li
Electronics 2026, 15(6), 1144; https://doi.org/10.3390/electronics15061144 - 10 Mar 2026
Abstract
High-quality text corpora are essential for knowledge graph construction and domain-specific large model pre-training in technology-intensive industries, with the steel metallurgy sector serving as a representative case. However, many industrial documents remain in scanned or PDF formats, where general-purpose Optical Character Recognition (OCR) systems exhibit systematic errors when recognizing Chinese metallurgical documents. In particular, visually similar Chinese characters that differ by only minor strokes are frequently confused, leading to severe degradation of text reliability and cascading errors in downstream knowledge extraction. This paper proposes FeOCR, a general-purpose domain-adaptive framework for machine-printed Chinese characters, which is specifically evaluated within the context of the steel metallurgy industry. The framework integrates visual character disambiguation with context-aware semantic correction. We first construct a metallurgy-specific OCR dataset emphasizing high-frequency confusable Chinese word pairs and enhance data diversity through font perturbation and noise synthesis. Parameter-efficient fine-tuning (LoRA) is then applied to adapt a general OCR model to domain-specific visual patterns. Furthermore, a Large Language Model-based correction module performs semantic refinement of residual errors under domain lexical constraints. Experiments demonstrate significant reductions in character and word error rates, especially for confusable technical terms, providing a reliable foundation for industrial Chinese document digitization. Full article
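The adaptation step above uses parameter-efficient fine-tuning with LoRA. In the standard LoRA formulation, a frozen weight W is augmented with a scaled low-rank product (alpha / r) · B A; a minimal sketch with toy shapes (not the paper's OCR model):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha):
    """LoRA forward pass: frozen weight W plus a low-rank update
    (alpha / r) * B @ A, where r is the adapter rank (rows of A)."""
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)
    return x @ (W + delta).T

rng = np.random.default_rng(1)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # small random init
B = np.zeros((d_out, r))                # B starts at zero, so delta = 0
x = rng.normal(size=d_in)
# With B = 0 the adapted model exactly matches the frozen model.
assert np.allclose(lora_forward(x, W, A, B, alpha=4), x @ W.T)
```

Only A and B (2 × (d_in + d_out) parameters per layer here) are trained, which is what makes the domain adaptation cheap relative to full fine-tuning.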

12 pages, 551 KB  
Article
Optic Flow Simulating Self-Motion Does Not Modulate the Hoffmann Reflex in the Soleus During Upright Standing in Healthy Young Adults
by Christophe Barbanchon and Stéphane Baudry
Brain Sci. 2026, 16(3), 297; https://doi.org/10.3390/brainsci16030297 - 6 Mar 2026
Abstract
Background/Objectives: Visual motion is a powerful contributor to postural control, yet its influence on modulation of the Ia afferent pathway remains to be confirmed. This study investigated whether optic flow simulating self-motion modulates the Hoffmann (H) reflex recorded in the soleus during upright stance in immersive virtual reality. Methods: Fourteen healthy adults completed two experimental sessions, each comprising four visual conditions of increasing optic-flow complexity. In one session, participants stood freely on a force platform (free standing), whereas in the other, postural sways were restricted (supported standing). Surface EMG, posterior tibial nerve stimulation, and force-platform recordings were collected. Results: During free standing, optic flow substantially increased postural sway [F(3,13) = 15.7, p < 0.001, η² = 0.55], with higher sway in all optic-flow conditions (~13 mm/s) compared with static viewing (~10 mm/s). In contrast, soleus H-reflex amplitude was not modulated by optic flow [F(3,13) = 0.2, p = 0.57], remaining stable across conditions (~44% Mmax). Background EMG and CoP position preceding stimulation were similar across conditions. In supported standing, used to isolate the effect of optic flow from postural control, H-reflex amplitude again showed no condition effect [F(3,13) = 0.2, p = 0.86]. Conclusions: These findings indicate that the postural perturbation induced by optic flow was not accompanied by a modulation of the Ia afferent-motoneuron transmission of the soleus under the experimental conditions used. The results suggest that postural control under virtual optic flow is mediated predominantly by supraspinal sensory-integration mechanisms, rather than by modulation of the Ia-monosynaptic reflex pathway.
(This article belongs to the Special Issue Neural and Muscular Plasticity in Motor and Postural Control)

14 pages, 518 KB  
Article
Consistency and Quantitative Backward Stability Analysis of the Two-Step Jarratt Method for Nonlinear Systems
by Vahideh Rasouli, Alicia Cordero, Taher Lotfi and Juan R. Torregrosa
Axioms 2026, 15(3), 186; https://doi.org/10.3390/axioms15030186 - 4 Mar 2026
Abstract
In this work, we revisit the two-step Jarratt method from the perspective of numerical stability. While high-order iterative schemes are often examined in terms of convergence rate and computational efficiency, their backward stability properties have received comparatively less attention. We begin by establishing the method’s strong consistency. Next, we provide a quantitative backward stability assessment within the standard floating-point arithmetic framework, deriving explicit perturbation bounds that show that the iteration errors remain proportional to machine precision. To support the theoretical findings, we present numerical experiments—including tests under finite-precision perturbations—as well as Python implementations and visualizations of the numerical examples. The results illustrate that the two-step Jarratt method not only achieves a high convergence order but also remains numerically robust for well-conditioned nonlinear systems. Full article
(This article belongs to the Section Mathematical Analysis)
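For orientation, the classical fourth-order two-step Jarratt iteration studied above has a compact scalar form (the paper analyzes the method for nonlinear systems; this sketch shows the same two-step structure on a single equation):

```python
def jarratt(f, fprime, x0, tol=1e-14, max_iter=25):
    """Two-step fourth-order Jarratt iteration for a scalar equation f(x) = 0:
    y = x - (2/3) f/f'; x+ = x - (1/2) (3f'(y)+f'(x))/(3f'(y)-f'(x)) * f/f'."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx                     # predictor step
        dfy = fprime(y)
        x_new = x - 0.5 * ((3.0 * dfy + dfx) / (3.0 * dfy - dfx)) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Cube root of 2: the quartic convergence reaches machine precision quickly.
root = jarratt(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.5)
assert abs(root**3 - 2.0) < 1e-12
```

Note that when f'(y) = f'(x) the correction factor reduces to 1 and the step collapses to a plain Newton step, consistent with the method being a Newton-type scheme with a single extra derivative evaluation per iteration.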

24 pages, 11178 KB  
Article
FLAMA: Frame-Level Alignment Margin Attack for Scene Text and Automatic Speech Recognition
by Yikun Xu, Zhiheng Xu and Pengwen Dai
Electronics 2026, 15(5), 1064; https://doi.org/10.3390/electronics15051064 - 4 Mar 2026
Abstract
Scene text recognition (STR) and automatic speech recognition (ASR) translate visual or acoustic signals into linguistic sequences and underpin many modern perception systems. Although their front-ends and decoders differ (e.g., CTC-based, attention-based, or variants), both tasks ultimately rely on aligning input frames to output tokens by deep learning techniques, which exposes a shared vulnerability to adversarial perturbations. Existing attacks commonly optimize global sequence-level objectives. As a result, decisive frames are treated implicitly, and optimization can become unnecessarily diffuse over long input sequences, hindering convergence and perceptual quality. To address these issues, we propose FLAMA, a unified Frame-Level Alignment Margin Attack, which can be applied to both STR and ASR models. FLAMA explicitly targets alignment by maximizing per-frame (or per-step) recognition margins. The design is decoder-agnostic and applies to both CTC-based and attention-based pipelines. It employs a recognition-score-aware Step/Halt gate that concentrates updates on the most critical frames, and a stabilization stage that suppresses late-iteration oscillations to improve optimization stability and perceptual control. Ablation analyses show that stabilization consistently enhances attack success and reduces distortion. We evaluate FLAMA on STR benchmarks (SVT, CUTE80, and IC13) with CRNN, STAR, and TRBA, and on the ASR benchmark (LibriSpeech) with a Wav2Vec 2.0 model. Across modalities and architectures, FLAMA achieves near-100% attack success while substantially reducing ℓ2 distortion and improving perceptual metrics compared with FGSM/PGD baselines. These results highlight frame-level alignment as a shared weak point across visual and audio sequence recognizers and suggest localized margin objectives as a principled route to effective sequence attacks.

26 pages, 951 KB  
Article
Advances in Semantic-Preserving Text Watermarking
by Jiale Meng and Zheming Lu
Sensors 2026, 26(5), 1528; https://doi.org/10.3390/s26051528 - 28 Feb 2026
Abstract
Textual content faces escalating security threats regarding copyright infringement, tampering, and unauthorized distribution. Text watermarking offers a vital defense mechanism by embedding imperceptible identifiers for source tracking and anti-counterfeiting. However, unlike general image watermarking, protecting text is uniquely challenging due to its highly discrete structure and low pixel redundancy, where even minute perturbations can compromise legibility. Over the past three decades, a wide range of text watermarking techniques have been proposed to address these challenges. While recent research has heavily favored semantic-based watermarking driven by Large Language Models (LLMs), these approaches are often inapplicable to high-stakes scenarios requiring strict content integrity and visual fidelity, such as legal documentation and artistic font protection. Addressing this gap, this paper presents a comprehensive survey of semantic-preserving text watermarking methods developed in recent years, with a particular focus on image-based, font-based, and format-based techniques. We propose a unified classification framework to systematically analyze these approaches, examining their methodological principles, robustness, embedding capacity, and imperceptibility. By clarifying the core characteristics and limitations of existing techniques, this survey aims to provide a structured technical reference for researchers and practitioners, facilitating the advancement of secure, robust, and scalable text protection technologies. Full article

21 pages, 2810 KB  
Article
Stability of Circular Orbits Around Kerr Black Holes Immersed in a Dehnen-Type Dark Matter Halo
by Yu Wang, Meilin Liu and Haiguang Xu
Universe 2026, 12(3), 68; https://doi.org/10.3390/universe12030068 - 28 Feb 2026
Abstract
We investigate the dynamical stability of circular orbits around a Kerr black hole embedded in a Dehnen-type dark matter halo. The effective spacetime metric of the combined system is constructed using the Newman–Janis algorithm, and the effective potential for test-particle motion in the equatorial plane is derived. The stability of circular orbits is analyzed through the Hessian matrix of the effective potential, while the stability strength and restoring-force distribution are employed to quantify the orbital response to small perturbations. Our results show that the presence of the dark matter halo significantly alters the spatial structure of stable circular orbits, leading to non-continuous stable regions whose location and extent depend sensitively on the halo’s characteristic density, scale radius, and the black hole spin. The innermost stable circular orbit (ISCO) is shifted relative to the vacuum Kerr case, with its position determined by the combined effects of the spin and halo parameters. Two-dimensional heatmaps, parameter scans, and three-dimensional visualizations systematically illustrate how the black hole spin and dark matter halo properties influence the ISCO and the distribution of stable orbits. Finally, we analyze the influence of the dark matter halo on the structure of the black hole event horizon. These results provide a detailed theoretical investigation of orbital dynamics around rotating black holes in dark-matter-rich environments. Full article
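The stability criterion described above can be sketched in its simplest limit: for zero spin and no halo (vacuum Schwarzschild), the Hessian analysis reduces to the sign of the radial second derivative of the effective potential at a circular orbit, which flips at the ISCO, r = 6M. A minimal numerical sketch in geometrized units with M = 1 (function names and the finite-difference check are illustrative assumptions, not the paper's code):

```python
# Vacuum Schwarzschild limit (zero spin, no halo), geometrized units, M = 1
def v_eff_sq(r, L2, M=1.0):
    """Squared effective potential for equatorial test-particle motion."""
    return (1.0 - 2.0 * M / r) * (1.0 + L2 / r**2)

def circular_L2(r, M=1.0):
    """Specific angular momentum squared for a circular orbit at radius r."""
    return M * r**2 / (r - 3.0 * M)

def d2v_dr2(r, h=1e-3):
    """Central finite difference of the potential at a circular orbit."""
    L2 = circular_L2(r)
    return (v_eff_sq(r + h, L2) - 2.0 * v_eff_sq(r, L2)
            + v_eff_sq(r - h, L2)) / h**2

# Stability changes sign at the ISCO, r = 6M
stable_outside_isco = d2v_dr2(7.0) > 0
stable_inside_isco = d2v_dr2(5.0) > 0
```

In the paper's setting the same test is applied to the halo-modified metric, where the extra parameters (characteristic density, scale radius, spin) shift the ISCO away from the vacuum value probed here.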

19 pages, 4831 KB  
Article
Lipid Droplets as Cellular Sensors of Lipid Metabolic Reprogramming in Colon Cancer: Insights from Essential Amino Acid Supplementation Using Raman Spectroscopy and Imaging
by Monika Kopeć, Karolina Beton-Mysur and Beata Brożek-Płuska
Molecules 2026, 31(5), 762; https://doi.org/10.3390/molecules31050762 - 25 Feb 2026
Viewed by 353
Abstract
Herein, we present a comprehensive single-cell investigation of the biochemical and metabolic responses of normal human colon fibroblasts (CCD-18Co) and colorectal adenocarcinoma cells (Caco-2) to supplementation with the amino acids leucine, threonine, and arginine, employing state-of-the-art Raman spectroscopy and Raman imaging. This fully label-free and noninvasive methodology enabled high-spatial-resolution mapping of intracellular components, providing unprecedented insight into subcellular biochemical organization and metabolic remodeling associated with colorectal carcinogenesis. By synergistically integrating Raman spectroscopic data with advanced chemometric methods, we demonstrate robust, reproducible discrimination between normal and malignant colon cells, both in their native state and after amino acid treatment, based solely on their intrinsic vibrational fingerprints. Partial Least Squares Discriminant Analysis (PLS-DA) and one-way ANOVA revealed that perturbations in lipid metabolism and protein composition constitute key molecular determinants underlying the observed phenotypic divergence between control and amino acid–supplemented cells. Notably, detailed analysis of diagnostic Raman band intensity ratios (2845/3015, 2845/2930, 3015/2888, and 1444/1256) uncovered pronounced amino acid–driven alterations in metabolic pathways at the single-cell level. Raman imaging further enabled spatially resolved visualization of these biochemical shifts and changes in Raman band intensities, highlighting distinct lipid- and protein-rich subcellular domains that respond differentially to amino acid exposure in normal versus cancerous cells. Collectively, our findings establish Raman spectroscopy combined with chemometric analysis as a powerful and sensitive platform for decoding amino acid–induced metabolic reprogramming in colorectal cells. This approach deepens the mechanistic understanding of nutrient–cancer cell interactions and opens new avenues for the development of Raman-based strategies in cancer diagnostics and therapeutic response assessment. Full article
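A band intensity ratio of the kind listed above (e.g. 2845/3015 cm⁻¹) is simply the ratio of measured intensities at the two Raman shifts. A minimal sketch on a synthetic spectrum; the Gaussian band shapes, amplitudes, and baseline are illustrative assumptions, not data from the study:

```python
import numpy as np

def band_ratio(wavenumbers, intensities, num_cm, den_cm):
    """Ratio of intensities at the measured shifts nearest the two bands."""
    i_num = np.argmin(np.abs(wavenumbers - num_cm))
    i_den = np.argmin(np.abs(wavenumbers - den_cm))
    return intensities[i_num] / intensities[i_den]

# Synthetic spectrum: Gaussian bands at 2845 and 3015 cm^-1 on a flat baseline
wn = np.linspace(2700, 3100, 801)
spec = (2.0 * np.exp(-((wn - 2845) / 15) ** 2)
        + 1.0 * np.exp(-((wn - 3015) / 15) ** 2)
        + 0.05)
ratio = band_ratio(wn, spec, 2845, 3015)
```

In practice such ratios are computed per pixel of a Raman image after baseline correction, so that shifts in the ratio map onto the lipid- and protein-rich subcellular domains described above.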
(This article belongs to the Special Issue Vibrational Spectroscopy and Imaging for Chemical Application)
