Search Results (172)

Search Parameters:
Keywords = maximizing mutual information

23 pages, 13473 KB  
Article
Automatic Threshold Selection Guided by Maximizing Homologous Isomeric Similarity Under Unified Transformation Toward Unimodal Distribution
by Yaobin Zou, Wenli Yu and Qingqing Huang
Electronics 2026, 15(2), 451; https://doi.org/10.3390/electronics15020451 - 20 Jan 2026
Viewed by 498
Abstract
Traditional thresholding methods are often tailored to specific histogram patterns, making it difficult to achieve robust segmentation across diverse images exhibiting non-modal, unimodal, bimodal, or multimodal distributions. To address this limitation, this paper proposes an automatic thresholding method guided by maximizing homologous isomeric similarity under a unified transformation toward unimodal distribution. The primary objective is to establish a generalized selection criterion that functions independently of the input histogram’s pattern. The methodology employs bilateral filtering, non-maximum suppression, and Sobel operators to transform diverse histogram patterns into a unified, right-skewed unimodal distribution. Subsequently, the optimal threshold is determined by maximizing the normalized Rényi mutual information between the transformed edge image and binary contour images extracted at varying levels. Experimental validation on both synthetic and real-world images demonstrates that the proposed method offers greater adaptability and higher accuracy compared to representative thresholding and non-thresholding techniques. The results show a significant reduction in misclassification errors and improved correlation metrics, confirming the method’s effectiveness as a unified thresholding solution for images with non-modal, unimodal, bimodal, or multimodal histogram patterns.
(This article belongs to the Special Issue Image Processing and Pattern Recognition)
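As a rough illustration of the selection criterion, the sketch below scans gray levels and keeps the threshold whose binary map shares the most normalized mutual information with the quantized edge image. Standard Shannon NMI stands in for the paper's normalized Rényi MI, and the 8-bit gray range is an assumption.

```python
# Hypothetical sketch: exhaustive threshold search that maximizes normalized
# mutual information (Shannon NMI stands in for the paper's Renyi variant).
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def select_threshold(edge_image: np.ndarray, n_levels: int = 64) -> int:
    """Return the gray level whose binarization is most informative about the edge map."""
    # Quantize the continuous edge-strength image so MI is computed between
    # two discrete variables.
    bins = np.linspace(edge_image.min(), edge_image.max(), n_levels)
    quantized = np.digitize(edge_image, bins).ravel()
    best_t, best_nmi = 0, -1.0
    for t in range(1, 255):                       # candidate 8-bit thresholds (assumed range)
        binary = (edge_image.ravel() > t).astype(int)
        if binary.min() == binary.max():          # degenerate split, skip
            continue
        nmi = normalized_mutual_info_score(quantized, binary)
        if nmi > best_nmi:
            best_t, best_nmi = t, nmi
    return best_t
```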

25 pages, 1075 KB  
Article
Prompt-Based Few-Shot Text Classification with Multi-Granularity Label Augmentation and Adaptive Verbalizer
by Deling Huang, Zanxiong Li, Jian Yu and Yulong Zhou
Information 2026, 17(1), 58; https://doi.org/10.3390/info17010058 - 8 Jan 2026
Viewed by 243
Abstract
Few-Shot Text Classification (FSTC) aims to classify text accurately into predefined categories using minimal training samples. Recently, prompt-tuning-based methods have achieved promising results by constructing verbalizers that map input data to the label space, thereby maximizing the utilization of pre-trained model features. However, existing verbalizer construction methods often rely on external knowledge bases, which require complex noise filtering and manual refinement, making the process time-consuming and labor-intensive, while approaches based on pre-trained language models (PLMs) frequently overlook inherent prediction biases. Furthermore, conventional data augmentation methods focus on modifying input instances while overlooking the integral role of label semantics in prompt tuning. This disconnection often leads to a trade-off where increased sample diversity comes at the cost of semantic consistency, resulting in marginal improvements. To address these limitations, this paper first proposes a novel Bayesian Mutual Information-based method that optimizes label mapping to retain general PLM features while reducing reliance on irrelevant or unfair attributes to mitigate latent biases. Based on this method, we propose two synergistic generators that synthesize semantically consistent samples by integrating label word information from the verbalizer to effectively enrich data distribution and alleviate sparsity. To guarantee the reliability of the augmented set, we propose a Low-Entropy Selector that serves as a semantic filter, retaining only high-confidence samples to safeguard the model against ambiguous supervision signals. Furthermore, we propose a Difficulty-Aware Adversarial Training framework that fosters generalized feature learning, enabling the model to withstand subtle input perturbations. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on most few-shot and full-data splits, with F1 score improvements of up to +2.8% on the standard AG’s News benchmark and +1.0% on the challenging DBPedia benchmark.
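As a loose illustration of MI-guided verbalizer construction (not the paper's exact Bayesian formulation), one could rank candidate label words by how informative their [MASK]-position probabilities are about the gold class:

```python
# Hypothetical sketch of MI-based verbalizer scoring: keep the candidate label
# words whose predicted probability at the [MASK] position carries the most
# mutual information about the true class.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_label_words(mask_probs: np.ndarray, labels: np.ndarray, top_k: int = 5):
    """mask_probs: (n_examples, n_candidate_words) PLM probabilities at [MASK];
    labels: (n_examples,) gold classes. Returns indices of the top_k words."""
    mi = mutual_info_classif(mask_probs, labels, random_state=0)
    return np.argsort(mi)[::-1][:top_k]
```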

32 pages, 593 KB  
Article
From Access to Impact: How Digital Financial Inclusion Drives Sustainable Development
by Gerardo Enrique Kattan-Rodríguez and Alicia Fernanda Galindo-Manrique
Sustainability 2025, 17(23), 10799; https://doi.org/10.3390/su172310799 - 2 Dec 2025
Viewed by 1727
Abstract
This study examines the combined impact of fintech and financial inclusion on achieving the United Nations’ Sustainable Development Goals (SDGs). Previous research has emphasized the role of financial inclusion in reducing poverty, strengthening resilience, and promoting economic stability; however, its interaction with fintech in advancing sustainability remains less examined. Using four composite indices incorporating updated variables, expanded country coverage, and a broader temporal scope, this analysis evaluates digital financial channels, including formal access, mobile money, digital credit, transfers, and rural finance, across SDGs 3, 4, 8, and 9. The findings indicate that formal access is associated with lower maternal mortality (SDG 3) and contributes positively to decent work and economic growth (SDG 8), as well as industry, innovation, and infrastructure (SDG 9). Digital credit and transfers help ease liquidity constraints in high-inequality regions, while mobile money enhances education outcomes (SDG 4) under robust governance, supporting informal labor markets. Rural finance strengthens innovation and infrastructure development in underserved areas, reinforcing SDG 9. A simultaneous equation model provides evidence of bidirectional relationships among financial inclusion, fintech adoption, and sustainable development, underscoring their mutual reinforcement rather than strict causality. Overall, the study highlights the systemic interconnection between finance and sustainability and emphasizes the importance of governance, infrastructure, and regulation in maximizing developmental benefits.
(This article belongs to the Special Issue Digitalization and Circular Sustainability Development)

27 pages, 8265 KB  
Article
ICIRD: Information-Principled Deep Clustering for Invariant, Redundancy-Reduced and Discriminative Cluster Distributions
by Aiyu Zheng, Robert M. X. Wu, Yupeng Wang and Yanting He
Entropy 2025, 27(12), 1200; https://doi.org/10.3390/e27121200 - 26 Nov 2025
Viewed by 411
Abstract
Deep clustering aims to discover meaningful data groups by jointly learning representations and cluster probability distributions. Yet existing methods rarely consider the underlying information characteristics of these distributions, causing ambiguity and redundancy in cluster assignments, particularly when different augmented views are used. To address this issue, this paper proposes a novel information-principled deep clustering framework for learning invariant, redundancy-reduced, and discriminative cluster probability distributions, termed ICIRD. Specifically, ICIRD is built upon three complementary modules for cluster probability distributions: (i) conditional entropy minimization, which increases assignment certainty and discriminability; (ii) inter-cluster mutual information minimization, which reduces redundancy among cluster distributions and sharpens separability; and (iii) cross-view mutual information maximization, which enforces semantic consistency across augmented views. Additionally, a contrastive representation mechanism is incorporated to provide stable and reliable feature inputs for the cluster probability distributions. Together, these components enable ICIRD to jointly optimize both representations and cluster probability distributions in an information-regularized manner. Extensive experiments on five image benchmark datasets demonstrate that ICIRD outperforms most existing deep clustering methods, particularly on fine-grained datasets such as CIFAR-100 and ImageNet-Dogs.
(This article belongs to the Section Information Theory, Probability and Statistics)
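A loose PyTorch sketch of the three information terms, assuming p1 and p2 are (N, K) soft cluster assignments from two augmented views; the cross-view term follows the classic IIC-style joint-distribution estimator and the redundancy term is a decorrelation proxy, so the exact ICIRD estimators may differ.

```python
import torch

def icird_style_loss(p1, p2, eps=1e-8, lam_red=1.0, lam_mi=1.0):
    # (i) conditional entropy minimization: sharpen per-sample assignments
    cond_ent = -(p1 * (p1 + eps).log()).sum(1).mean()
    # (iii) cross-view MI maximization: IIC-style joint over cluster pairs
    joint = (p1.t() @ p2) / p1.size(0)
    joint = (joint + joint.t()) / 2                     # symmetrize the views
    pi, pj = joint.sum(1, keepdim=True), joint.sum(0, keepdim=True)
    mi = (joint * ((joint + eps).log() - (pi + eps).log() - (pj + eps).log())).sum()
    # (ii) inter-cluster redundancy: penalize correlation between cluster columns
    z = (p1 - p1.mean(0)) / (p1.std(0) + eps)
    corr = (z.t() @ z) / p1.size(0)
    eye = torch.eye(p1.size(1), device=p1.device)
    redundancy = (corr - eye).pow(2).triu(1).sum()
    return cond_ent + lam_red * redundancy - lam_mi * mi
```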

15 pages, 949 KB  
Article
Utility–Leakage Trade-Off for Federated Representation Learning
by Yuchen Liu, Onur Günlü, Yuanming Shi and Youlong Wu
Entropy 2025, 27(11), 1163; https://doi.org/10.3390/e27111163 - 15 Nov 2025
Viewed by 525
Abstract
Federated representation learning (FRL) is a promising technique for learning shared data representations that capture general features across decentralized clients without sharing raw data. However, there is a risk of sensitive information leakage from learned representations. The conventional differential privacy (DP) mechanism protects the privacy of the data as a whole through randomization (adding noise or randomized response), at the cost of deteriorating learning performance. Inspired by the fact that some data information may be public or non-private and only sensitive information (e.g., race) should be protected, we investigate information-theoretic protection of specific sensitive information for FRL. To characterize the trade-off between utility and sensitive information leakage, we adopt mutual information-based metrics to measure both, and propose a method that maximizes utility performance while restricting sensitive information leakage to below any positive value ϵ via the local DP mechanism. Simulations demonstrate that our scheme achieves the best utility–leakage trade-off among baseline schemes and, more importantly, can adjust the trade-off between leakage and utility by controlling the noise level in local DP.
(This article belongs to the Special Issue Information-Theoretic Approaches for Machine Learning and AI)
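A hedged sketch of the local-DP side of such a scheme: clip and noise each client's representation with a Gaussian mechanism, then estimate residual leakage about the sensitive attribute empirically. The calibration and names are illustrative, not the paper's exact construction.

```python
# Illustrative Gaussian-mechanism local DP on learned representations, with an
# empirical (kNN-based) estimate of leakage about a sensitive attribute.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def privatize(z: np.ndarray, clip: float = 1.0, sigma: float = 1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    norms = np.linalg.norm(z, axis=1, keepdims=True)
    z_clipped = z * np.minimum(1.0, clip / (norms + 1e-12))    # bound sensitivity
    return z_clipped + rng.normal(0.0, sigma * clip, z.shape)  # add calibrated noise

# leakage_per_dim = mutual_info_classif(privatize(z), sensitive_attr)
# Raising sigma drives the estimated leakage toward zero at some utility cost.
```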

15 pages, 3459 KB  
Article
Multi-Granularity Invariant Structure Learning for Text Classification in Entrepreneurship Policy
by Xinyu Sun and Meifang Yao
Mathematics 2025, 13(22), 3648; https://doi.org/10.3390/math13223648 - 14 Nov 2025
Viewed by 500
Abstract
Data-driven text classification technology is crucial for understanding and managing a large number of entrepreneurial policy-related texts, yet it is hindered by two primary challenges. First, the intricate, multi-faceted nature of policy documents often leads to insufficient information extraction, as existing models struggle to synergistically leverage diverse information types, such as statistical regularities, linguistic structures, and external factual knowledge, resulting in semantic sparsity. Second, the performance of state-of-the-art deep learning models is heavily reliant on large-scale annotated data, a resource that is scarce and costly to acquire in entrepreneurial policy domains, rendering models susceptible to overfitting and poor generalization. To address these challenges, this paper proposes a Multi-granularity Invariant Structure Learning (MISL) model. Specifically, MISL first employs a multi-view feature engineering module that constructs and fuses distinct statistical, linguistic, and knowledge graphs to generate a comprehensive and rich semantic representation, thereby alleviating semantic sparsity. Furthermore, to enhance robustness and generalization from limited data, we introduce a dual invariant structure learning framework. This framework operates at two levels: (1) sample-invariant representation learning uses data augmentation and mutual information maximization to learn the essential semantic core of a text, invariant to superficial perturbations; (2) neighborhood-invariant semantic learning applies a contrastive objective on a nearest-neighbor graph to enforce intra-class compactness and inter-class separability in the feature space. Extensive experiments demonstrate that our proposed MISL model significantly outperforms state-of-the-art baselines, proving its effectiveness and robustness for classifying complex texts in entrepreneurial policy domains.
(This article belongs to the Special Issue Artificial Intelligence and Data Science, 2nd Edition)
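For the sample-invariant step, a minimal InfoNCE sketch (a standard lower bound on the mutual information between two augmented views) could look like this:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """z1, z2: (N, d) embeddings of two augmentations; row i of each is a positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                  # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)    # -E[log p(positive | candidates)]
```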

19 pages, 3195 KB  
Article
Waveform Design of a Cognitive MIMO Radar via an Improved Adaptive Gradient Descent Genetic Algorithm
by Tingli Shen, Jianbin Lu, Yunlei Zhang, Peng Wu and Ke Li
Appl. Sci. 2025, 15(20), 10893; https://doi.org/10.3390/app152010893 - 10 Oct 2025
Viewed by 707
Abstract
This study addresses the challenge of cognitive waveform design for multiple-input–multiple-output (MIMO) radar systems operating in cluttered environments. It focuses on the key practical requirements for transmitting time-domain waveforms and proposes a novel approach. This method first determines the optimal frequency-domain waveform and then designs a time-domain waveform that closely approximates the frequency-domain solution. The primary objective is to enable MIMO radar systems to transmit orthogonal waveforms while accommodating various constraints. A frequency-domain waveform optimization model was initially developed using the principle of maximizing dual mutual information (DMI), and the energy spectral density (ESD) of the optimal waveform was derived using the water-filling method. Next, a time-domain waveform approximation model is constructed based on the minimum mean square error (MMSE) criterion, which incorporates constant modulus and peak-to-average power ratio (PAPR) constraints. To minimize the performance degradation of the waveform, an improved adaptive gradient descent genetic algorithm (GD-AGA) was proposed to synthesize multichannel orthogonal time-domain waveforms for MIMO radars. The simulation results demonstrate the effectiveness of the proposed model for enhancing the performance of MIMO radar. Compared with traditional genetic algorithms (GA) and two enhanced GA alternatives, the proposed algorithm achieves a lower ESD loss and better orthogonal performance.
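The water-filling step admits a compact sketch: given a target-response magnitude spectrum and a noise-plus-clutter spectrum (illustrative names; the paper's DMI-optimal ESD is derived from its specific signal model), allocate energy by bisecting on the water level.

```python
import numpy as np

def water_fill(h2: np.ndarray, noise: np.ndarray, e_total: float) -> np.ndarray:
    """h2: |H_k|^2 per frequency bin; noise: noise-plus-clutter PSD per bin.
    Maximizes sum(log(1 + h2*E/noise)) subject to sum(E) <= e_total."""
    inv_gain = noise / h2                         # "floor" height of each bin
    lo, hi = inv_gain.min(), inv_gain.max() + e_total
    for _ in range(100):                          # bisect on the water level mu
        mu = (lo + hi) / 2
        e = np.maximum(0.0, mu - inv_gain)
        lo, hi = (mu, hi) if e.sum() < e_total else (lo, mu)
    return np.maximum(0.0, (lo + hi) / 2 - inv_gain)
```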

32 pages, 1049 KB  
Article
An Approximate Bayesian Approach to Optimal Input Signal Design for System Identification
by Piotr Bania and Anna Wójcik
Entropy 2025, 27(10), 1041; https://doi.org/10.3390/e27101041 - 7 Oct 2025
Cited by 1 | Viewed by 947
Abstract
The design of informatively rich input signals is essential for accurate system identification, yet classical Fisher-information-based methods are inherently local and often inadequate in the presence of significant model uncertainty and non-linearity. This paper develops a Bayesian approach that uses the mutual information (MI) between observations and parameters as the utility function. To address the computational intractability of the MI, we maximize a tractable MI lower bound. The method is then applied to the design of an input signal for the identification of quasi-linear stochastic dynamical systems. Evaluating the MI lower bound requires the inversion of large covariance matrices whose dimensions scale with the number of data points N. To overcome this problem, an algorithm that reduces the dimension of the matrices to be inverted by a factor of N is developed, making the approach feasible for long experiments. The proposed Bayesian method is compared with the average D-optimal design method, a semi-Bayesian approach, and its advantages are demonstrated. The effectiveness of the proposed method is further illustrated through four examples, including atomic sensor models, where input signals that generate a large amount of MI are especially important for reducing the estimation error.
(This article belongs to the Section Signal and Data Analysis)
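For intuition, in a linear-in-parameters Gaussian model the MI between observations and parameters has a closed form, so candidate input signals can simply be ranked by the MI they generate. The sketch below assumes a regressor matrix Phi(u) and a Gaussian parameter prior, a simplification of the paper's quasi-linear setting.

```python
# For y = Phi(u) @ theta + noise, theta ~ N(0, Sigma_theta), noise ~ N(0, v*I):
# I(y; theta) = 0.5 * logdet(I + Phi Sigma_theta Phi^T / v).
import numpy as np

def gaussian_mi(phi: np.ndarray, sigma_theta: np.ndarray, noise_var: float) -> float:
    n = phi.shape[0]
    m = np.eye(n) + phi @ sigma_theta @ phi.T / noise_var
    return 0.5 * np.linalg.slogdet(m)[1]          # numerically stable log-determinant

# best_u = max(candidates, key=lambda u: gaussian_mi(regressors(u), prior_cov, 0.1))
# (candidates / regressors are hypothetical hooks for the experiment at hand.)
```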

26 pages, 1333 KB  
Article
Category Name Expansion and an Enhanced Multimodal Fusion Framework for Few-Shot Learning
by Tianlei Gao, Lei Lyu, Xiaoyun Xie, Nuo Wei, Yushui Geng and Minglei Shu
Entropy 2025, 27(9), 991; https://doi.org/10.3390/e27090991 - 22 Sep 2025
Viewed by 859
Abstract
With the advancement of image processing techniques, few-shot learning (FSL) has gradually become a key approach to addressing the problem of data scarcity. However, existing FSL methods often rely on unimodal information under limited sample conditions, making it difficult to capture fine-grained differences between categories. To address this issue, we propose a multimodal few-shot learning method based on category name expansion and image feature enhancement. By integrating the expanded category text with image features, the proposed method enriches the semantic representation of categories and enhances the model’s sensitivity to detailed features. To further improve the quality of cross-modal information transfer, we introduce a cross-modal residual connection strategy that aligns features across layers through progressive fusion. This approach enables the fused representations to maximize mutual information while reducing redundancy, effectively alleviating the information bottleneck caused by uneven entropy distribution between modalities and enhancing the model’s generalization ability. Experimental results demonstrate that our method achieves superior performance on both natural image datasets (CIFAR-FS and FC100) and a medical image dataset.

20 pages, 4449 KB  
Article
Source-Free Domain Adaptation for Medical Image Segmentation via Mutual Information Maximization and Prediction Bank
by Hongzhen Wu, Yue Zhou and Xiaoqiang Li
Electronics 2025, 14(18), 3656; https://doi.org/10.3390/electronics14183656 - 15 Sep 2025
Viewed by 2317
Abstract
Medical image segmentation faces significant challenges due to domain shift between different clinical centers and data privacy restrictions. Current source-free domain adaptation methods for medical images suffer from three critical limitations: unstable training caused by noisy pseudo-labels; poor handling of foreground-background imbalance, where critical structures such as the optic cup occupy extremely small regions; and strict privacy regulations that often prevent access to source domain data during adaptation. To address these limitations, this paper proposes a source-free domain adaptation approach based on mutual information optimization for fundus image segmentation. The method incorporates a teacher–student network to ensure training stability and a mutual information maximization algorithm that naturally reduces pseudo-label noise. Furthermore, a prediction bank is constructed to handle class imbalance by leveraging complete statistics. Experimental results on fundus segmentation datasets demonstrate superior performance, achieving an average Dice coefficient of 91.74% on the Drishti-GS dataset and 87.80% on the RIM-ONE-r dataset, outperforming current methods. This work provides a practical solution for cross-institutional medical image analysis while preserving data privacy, with significant potential for eye disease diagnosis and other medical applications requiring robust domain adaptation.
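A minimal sketch of prediction-space mutual information maximization, a criterion widely used in source-free adaptation (the paper's estimator and prediction bank add more machinery on top):

```python
# I(x; y_hat) = H(mean prediction) - mean per-sample entropy. Maximizing it
# yields confident yet class-balanced pseudo-labels.
import torch

def mi_loss(probs, eps=1e-8):
    """probs: (N, C) softmax outputs (pixels can be flattened into N)."""
    marginal = probs.mean(0)
    h_marginal = -(marginal * (marginal + eps).log()).sum()
    h_cond = -(probs * (probs + eps).log()).sum(1).mean()
    return h_cond - h_marginal          # minimizing this maximizes MI
```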

25 pages, 2747 KB  
Article
A Dynamic Information-Theoretic Network Model for Systemic Risk Assessment with an Application to China’s Maritime Sector
by Lin Xiao, Arash Sioofy Khoojine, Hao Chen and Congyin Wang
Mathematics 2025, 13(18), 2959; https://doi.org/10.3390/math13182959 - 12 Sep 2025
Viewed by 863
Abstract
This paper develops a dynamic information-theoretic network framework to quantify systemic risk in China’s maritime–commodity nexus, with a focus on the Yangtze River Basin, using eight monthly indicators: CCFI, CBCFI, BDI, YRCFI, GAUP, MPCT, CPUS, and ASMC. We resample, impute, standardize, and difference the series to achieve stationarity. Nonlinear interdependencies are estimated via KSG mutual information (MI) within sliding windows; networks are filtered using the Planar Maximally Filtered Graph (PMFG) with bootstrap edge validation (95th percentile) and benchmarked against the MST. Average MI indicates moderate yet heterogeneous dependence (about 0.13–0.17), revealing a container/port core (CCFI–YRCFI–MPCT), a bulk/energy spine (BDI–CPUS), and commodity bridges via GAUP. Dynamic PMFG metrics show a generally resilient but episodically vulnerable structure: density and compactness decline in turbulence. Stress tests demonstrate high redundancy to diffuse link failures (connectivity largely intact until ∼70–80% edge removal) but pronounced sensitivity of diffusion capacity to targeted multi-node outages. Early-warning indicators based on entropy rate and percolation threshold Z-scores flag recurring windows of elevated fragility; change-point detection applied to both metrics isolates clustered regime shifts (2015–2016, 2018–2019, 2021–2022, and late 2023–2024). A Systemic Importance Index (SII) combining average centrality and removal impact ranks MPCT and CCFI as most critical, followed by BDI, with GAUP/CPUS mid-peripheral and ASMC peripheral. The findings imply that safeguarding port throughput and stabilizing container freight conditions deliver the greatest resilience gains, while monitoring bulk/energy linkages is essential when macro shocks synchronize across markets.
(This article belongs to the Section E: Applied Mathematics)
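The network-construction step can be approximated with off-the-shelf tools: sklearn's kNN MI estimator follows Kraskov et al. (KSG), so a pairwise MI matrix per sliding window might be sketched as below, with PMFG filtering and bootstrap validation applied afterwards.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mi_matrix(window: np.ndarray, k: int = 3) -> np.ndarray:
    """window: (T, n_series) differenced, standardized data for one time window."""
    n = window.shape[1]
    m = np.zeros((n, n))
    for j in range(n):
        # KSG-style kNN estimate of MI between every series and series j
        m[:, j] = mutual_info_regression(window, window[:, j], n_neighbors=k)
    np.fill_diagonal(m, 0.0)
    return (m + m.T) / 2          # symmetrize the (noisy) estimator
```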

18 pages, 6003 KB  
Article
A Graph Contrastive Learning Method for Enhancing Genome Recovery in Complex Microbial Communities
by Guo Wei and Yan Liu
Entropy 2025, 27(9), 921; https://doi.org/10.3390/e27090921 - 31 Aug 2025
Cited by 2 | Viewed by 1180
Abstract
Accurate genome binning is essential for resolving microbial community structure and functional potential from metagenomic data. However, existing approaches—primarily reliant on tetranucleotide frequency (TNF) and abundance profiles—often perform sub-optimally in the face of complex community compositions, low-abundance taxa, and long-read sequencing datasets. To address these limitations, we present MBGCCA, a novel metagenomic binning framework that synergistically integrates graph neural networks (GNNs), contrastive learning, and information-theoretic regularization to enhance binning accuracy, robustness, and biological coherence. MBGCCA operates in two stages: (1) multimodal information integration, where TNF and abundance profiles are fused via a deep neural network trained using a multi-view contrastive loss, and (2) self-supervised graph representation learning, which leverages assembly graph topology to refine contig embeddings. The contrastive learning objective follows the InfoMax principle by maximizing mutual information across augmented views and modalities, encouraging the model to extract globally consistent and high-information representations. By aligning perturbed graph views while preserving topological structure, MBGCCA effectively captures both global genomic characteristics and local contig relationships. Comprehensive evaluations using both synthetic and real-world datasets—including wastewater and soil microbiomes—demonstrate that MBGCCA consistently outperforms state-of-the-art binning methods, particularly in challenging scenarios marked by sparse data and high community complexity. These results highlight the value of entropy-aware, topology-preserving learning for advancing metagenomic genome reconstruction.
(This article belongs to the Special Issue Network-Based Machine Learning Approaches in Bioinformatics)
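A loose sketch of the cross-modal part of the contrastive objective, treating the TNF and abundance embeddings of the same contig as positive pairs in the InfoMax spirit; encoder details are omitted and names are placeholders.

```python
import torch
import torch.nn.functional as F

def cross_modal_loss(z_tnf, z_abund, tau=0.2):
    """z_tnf, z_abund: (N, d) per-contig embeddings from the two modalities."""
    z1, z2 = F.normalize(z_tnf, dim=1), F.normalize(z_abund, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    # symmetrized: each modality must identify its partner view of the contig
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
```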

25 pages, 549 KB  
Article
CurveMark: Detecting AI-Generated Text via Probabilistic Curvature and Dynamic Semantic Watermarking
by Yuhan Zhang, Xingxiang Jiang, Hua Sun, Yao Zhang and Deyu Tong
Entropy 2025, 27(8), 784; https://doi.org/10.3390/e27080784 - 24 Jul 2025
Viewed by 2701
Abstract
Large language models (LLMs) pose significant challenges to content authentication, as their sophisticated generation capabilities make distinguishing AI-produced text from human writing increasingly difficult. Current detection methods suffer from limited information capture, poor rate–distortion trade-offs, and vulnerability to adversarial perturbations. We present CurveMark, a novel dual-channel detection framework that combines probability curvature analysis with dynamic semantic watermarking, grounded in information-theoretic principles to maximize mutual information between text sources and observable features. To address the limitation of requiring prior knowledge of source models, we incorporate a Bayesian multi-hypothesis detection framework for statistical inference without prior assumptions. Our approach embeds imperceptible watermarks during generation via entropy-aware, semantically informed token selection and extracts complementary features from probability curvature patterns and watermark-specific metrics. Evaluation across multiple datasets and LLM architectures demonstrates 95.4% detection accuracy with minimal quality degradation (perplexity increase < 1.3), achieving 85–89% channel capacity utilization and robust performance under adversarial perturbations (72–94% information retention).
(This article belongs to the Section Information Theory, Probability and Statistics)
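Probability-curvature detection is closely related to perturbation-based scores in the DetectGPT line of work; a schematic sketch (with log_prob and perturb as placeholder hooks for a scoring LM and a mask-and-refill model, not CurveMark's actual components) is:

```python
from statistics import mean, stdev

def curvature_score(text, log_prob, perturb, n=20):
    """Human text tends to sit off a local log-probability maximum; LLM text on one."""
    base = log_prob(text)                      # log-likelihood under the scoring LM
    samples = [log_prob(perturb(text)) for _ in range(n)]
    mu = mean(samples)
    sd = stdev(samples) or 1.0                 # guard against zero spread
    return (base - mu) / sd                    # large positive => likely AI-generated
```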

21 pages, 7084 KB  
Article
Chinese Paper-Cutting Style Transfer via Vision Transformer
by Chao Wu, Yao Ren, Yuying Zhou, Ming Lou and Qing Zhang
Entropy 2025, 27(7), 754; https://doi.org/10.3390/e27070754 - 15 Jul 2025
Viewed by 1155
Abstract
Style transfer technology has seen substantial attention in image synthesis, notably in applications like oil painting, digital printing, and Chinese landscape painting. However, it is often difficult to generate migrated images that retain the essence of paper-cutting art and have strong visual appeal [...] Read more.
Style transfer technology has seen substantial attention in image synthesis, notably in applications like oil painting, digital printing, and Chinese landscape painting. However, it is often difficult to generate migrated images that retain the essence of paper-cutting art and have strong visual appeal when trying to apply the unique style of Chinese paper-cutting art to style transfer. Therefore, this paper proposes a new method for Chinese paper-cutting style transformation based on the Transformer, aiming at realizing the efficient transformation of Chinese paper-cutting art styles. Specifically, the network consists of a frequency-domain mixture block and a multi-level feature contrastive learning module. The frequency-domain mixture block explores spatial and frequency-domain interaction information, integrates multiple attention windows along with frequency-domain features, preserves critical details, and enhances the effectiveness of style conversion. To further embody the symmetrical structures and hollowed hierarchical patterns intrinsic to Chinese paper-cutting, the multi-level feature contrastive learning module is designed based on a contrastive learning strategy. This module maximizes mutual information between multi-level transferred features and content features, improves the consistency of representations across different layers, and thus accentuates the unique symmetrical aesthetics and artistic expression of paper-cutting. Extensive experimental results demonstrate that the proposed method outperforms existing state-of-the-art approaches in both qualitative and quantitative evaluations. Additionally, we created a Chinese paper-cutting dataset that, although modest in size, represents an important initial step towards enriching existing resources. This dataset provides valuable training data and a reference benchmark for future research in this field. Full article
(This article belongs to the Section Multidisciplinary Applications)
Show Figures

Figure 1

18 pages, 1198 KB  
Article
Information-Theoretic Sequential Framework to Elicit Dynamic High-Order Interactions in High-Dimensional Network Processes
by Helder Pinto, Yuri Antonacci, Gorana Mijatovic, Laura Sparacino, Sebastiano Stramaglia, Luca Faes and Ana Paula Rocha
Mathematics 2025, 13(13), 2081; https://doi.org/10.3390/math13132081 - 24 Jun 2025
Cited by 1 | Viewed by 844
Abstract
Complex networks of stochastic processes are crucial for modeling the dynamics of interacting systems, particularly those involving high-order interactions (HOIs) among three or more components. Traditional measures—such as mutual information (MI), interaction information (II), the redundancy-synergy index (RSI), and O-information (OI)—are typically limited to static analyses not accounting for temporal correlations and become computationally unfeasible in large networks due to the exponential growth of the number of interactions to be analyzed. To address these challenges, first a framework is introduced to extend these information-theoretic measures to dynamic processes. This includes the II rate (IIR), RSI rate (RSIR), and the OI rate gradient (ΔOIR), enabling the dynamic analysis of HOIs. Moreover, a stepwise strategy identifying groups of nodes (multiplets) that maximize either redundant or synergistic HOIs is devised, offering deeper insights into complex interdependencies. The framework is validated through simulations of networks composed of cascade, common drive, and common target mechanisms, modelled using vector autoregressive (VAR) processes. The feasibility of the proposed approach is demonstrated through its application in climatology, specifically by analyzing the relationships between climate variables that govern El Niño and the Southern Oscillation (ENSO) using historical climate data.
(This article belongs to the Special Issue Recent Advances in Time Series Analysis)
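For orientation, the static O-information that the paper's rate measures generalize has a simple closed form under a Gaussian assumption, since every entropy reduces to a log-determinant; a sketch:

```python
# Static O-information (Rosas et al.) under a Gaussian assumption:
# Omega(X) = (n-2) H(X) + sum_j [ H(X_j) - H(X_{-j}) ],
# with H = 0.5 * (k*ln(2*pi*e) + ln det Sigma) for a k-variate Gaussian.
import numpy as np

def gaussian_entropy(cov: np.ndarray) -> float:
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(data: np.ndarray) -> float:
    """data: (T, n) samples; positive OI flags redundancy, negative synergy."""
    cov = np.cov(data, rowvar=False)
    n = cov.shape[0]
    total = (n - 2) * gaussian_entropy(cov)
    for j in range(n):
        rest = np.delete(np.delete(cov, j, 0), j, 1)   # covariance without node j
        total += gaussian_entropy(cov[j:j+1, j:j+1]) - gaussian_entropy(rest)
    return total
```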
