Search Results (496)

Search Parameters:
Keywords = information-theoretic interpretation

31 pages, 542 KB  
Perspective
Untangling the Osteopathic Gordian Knot: Reconceptualized Principles for Sustainable and Contemporary Clinical Practice—A Conceptual Perspective
by Christian Lunghi, Francesca Baroni, Giandomenico D’Alessandro, Mauro Longobardi, Giacomo Consorti, Nicola Vanacore and Marco Tramontano
Healthcare 2026, 14(9), 1221; https://doi.org/10.3390/healthcare14091221 - 1 May 2026
Abstract
Background: Osteopathy’s integration into contemporary healthcare requires clear articulation of its theoretical and practical foundations and active engagement in interprofessional practice. Despite growing institutional recognition, conceptual ambiguity remains regarding foundational principles and their operationalization. Osteopathy is broadly described as a person-centered, evidence-informed discipline promoting health through manual and educational strategies within systemic and biopsychosocial contexts. Objectives: This Perspective critically examines osteopathic principles, proposes a shared conceptual model for interdisciplinary care, and outlines a structured research agenda for empirical validation, aiming to enhance person-centered, preventive, and sustainable practice. Methods: A narrative review synthesized historical, theoretical, and contemporary evidence. Records were thematically analyzed through expert collaborative brainstorming to achieve consensus, ensuring both conceptual and empirical rigor. Results: Twenty-two studies were included, forming two thematic areas: (1) historical evolution of osteopathic principles, encompassing foundational definitions, early interpretive divergences, codifications, and adaptations; and (2) contemporary reconceptualization for interdisciplinary care, integrating systems-oriented and biopsychosocial frameworks. Emphasis was placed on self-regulation, structure–function relationships, and holistic care. This synthesis bridges historical and modern insights, highlighting osteopathy’s relevance in integrative, pediatric, and preventive healthcare. Conclusions: Reconceptualizing osteopathic principles strengthens professional identity and supports sustainable, evidence-informed, person-centered practice. The proposed framework informs interprofessional collaboration and guides a research roadmap to validate and integrate osteopathy globally within contemporary healthcare systems.

23 pages, 1624 KB  
Article
Measurement of China’s “External Market Provider” Role: Trade-Margin Decomposition and Gravity Determinants
by Manru Zhao and Yujia Lu
Entropy 2026, 28(5), 504; https://doi.org/10.3390/e28050504 - 30 Apr 2026
Abstract
This paper measures China’s role as an “external market provider” by quantifying, for 168 source countries during 2001–2022, the share of each country’s exports absorbed by China and decomposing that share into extensive (product coverage), quantity, and price margins using the Hummels–Klenow framework. To characterize destination-market concentration, we construct an HHI-based network diversification indicator from export-destination shares and interpret it from a complementary information-theoretic perspective, where higher concentration corresponds to lower diversification and stronger dependence. We document the dynamics of China’s market-provision role and estimate an extended gravity-type model with country- and year-fixed effects. The results show that China’s external market-provider role expanded markedly after WTO accession, with growth driven mainly by the quantity margin and, after 2018, increasingly supported by the price margin. Economic proximity and similarity in global value-chain position are associated with stronger China-absorption shares, while greater destination concentration relative to China is associated with lower China-absorption shares. Free trade agreements are linked to stronger, more extensive, and larger margins. Robustness checks based on lagged covariates, additional controls, higher-dimensional fixed effects, Tobit estimation, and winsorization support the main findings. Overall, the paper provides a replicable framework for measuring destination-market pull and shows how China’s import-side role varies across products, regions, and development groups, while using the information-theoretic perspective as a supplementary interpretation of diversification patterns rather than as a separate empirical tool.
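The destination-concentration indicator described above is easy to reproduce. A minimal sketch (function and variable names are ours, for illustration): HHI is the sum of squared export-destination shares, and the Shannon entropy of the same shares gives the complementary information-theoretic reading, with higher entropy corresponding to greater diversification.

```python
import numpy as np

def concentration_measures(export_values):
    """HHI and Shannon entropy of export-destination shares.

    Higher HHI = more concentrated (less diversified) destinations;
    higher entropy = more diversified, weaker single-market dependence.
    """
    shares = np.asarray(export_values, dtype=float)
    shares = shares / shares.sum()
    shares = shares[shares > 0]          # drop empty destinations
    hhi = float(np.sum(shares ** 2))
    entropy = float(-np.sum(shares * np.log(shares)))
    return hhi, entropy

# e.g., exports of 50, 30, 20 (any units) to three destinations:
# HHI = 0.38, entropy ≈ 1.03 nats.
print(concentration_measures([50, 30, 20]))
```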

17 pages, 2086 KB  
Review
Research Progress on Intelligent Fault Recognition Technology in Seismic Exploration
by Ke Ren, Cheng Song, Na Li, Xiaodong Wang, Zeming Wang and Yanhai Liu
GeoHazards 2026, 7(2), 48; https://doi.org/10.3390/geohazards7020048 - 29 Apr 2026
Abstract
With the expansion of seismic exploration targets to deeper and more complex geological structures, traditional fault interpretation methods face significant challenges in terms of efficiency and accuracy. The extensive application of artificial intelligence (AI) technologies is driving the evolution of fault recognition techniques toward automation and intelligence. This paper systematically reviews the development of AI technologies in fault recognition, from traditional machine learning-based seismic attribute fusion analysis to deep learning-based end-to-end recognition and semantic segmentation. It provides a detailed discussion of key technological advancements, such as sample set construction, weak signal enhancement, and noise suppression. To address the current challenges, including the insufficient authenticity of synthetic data, poor model interpretability, and weak quantitative representation capabilities, this study proposes three future research directions: the development of benchmark datasets based on real geological evolution, the construction of interpretable model architectures that incorporate geological prior information, and the realization of multi-parameter collaborative intelligent fault system analysis. These directions aim to provide theoretical support for advancing the practical and industrial applications of intelligent fault recognition technology.

12 pages, 983 KB  
Article
Possible Entropic Limits of Iterative Computation in Generative AI: Model Collapse Explained by the Data Processing Inequality and the AI Theorem
by Pavel Straňák
Symmetry 2026, 18(5), 764; https://doi.org/10.3390/sym18050764 - 29 Apr 2026
Abstract
Generative AI systems trained on synthetic data exhibit progressive degradation known as model collapse. This paper provides a theoretical explanation of this phenomenon using Shannon’s Data Processing Inequality (DPI), modeling iterative synthetic-data training as a Markov chain of lossy transformations. We show that mutual information with respect to the original data distribution must decrease monotonically, yielding qualitative predictions for exponential decay tendencies and indicating that information loss arises from general finite-precision and capacity constraints rather than from any specific architectural mechanism. Building on this analysis, we introduce the AI Theorem, a generalized stability limit for computable systems. The theorem states that any purely computational system that generates outputs iteratively under finite precision, bounded capacity, and without external low-entropy input must experience cumulative information degradation after a finite number of steps. DPI-based collapse emerges as a special case of this broader principle. The framework is intended as a conceptual information-theoretic perspective rather than a fully formalized theory, with several assumptions intentionally simplified to highlight the underlying entropic mechanism. The results should therefore be interpreted as principled limits that motivate further empirical and mathematical investigation rather than as definitive closed-form predictions. Together, DPI and the AI Theorem provide a unified information-theoretic framework for understanding degradation in synthetic training, long-horizon inference, and other iterative computational processes. The resulting predictions are quantitatively falsifiable and offer guidance for designing more stable and information-preserving AI systems.
(This article belongs to the Special Issue Applications of Symmetry/Asymmetry and Machine Learning)
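The collapse mechanism can be illustrated with a toy experiment (our sketch, not the paper's model): fit a Gaussian to a finite synthetic sample, resample from the fit, and repeat. With finite samples and no external low-entropy input, the fitted spread tends to decay across generations, a minimal analogue of the monotone information loss the DPI predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                       # samples per generation (finite capacity)
mu, sigma = 0.0, 1.0         # generation-0 "real" distribution
for gen in range(201):
    synthetic = rng.normal(mu, sigma, n)           # sample current model
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data
    if gen % 40 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.4f}")
# On most seeds sigma shrinks roughly exponentially: each fit-resample
# step is a lossy transformation and can only lose information about
# the original distribution.
```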

28 pages, 31083 KB  
Article
Mechanistic Interpretation of Field-Measured Pavement Response Under Heavy-Vehicle Loading
by Suphawut Malaikrisanachalee, Auckpath Sawangsuriya, Phansak Sattayhatewa, Ponlathep Lertworawanich, Apiniti Jotisankasa, Susit Chaiprakaikeow and Narongrit Wongwai
Infrastructures 2026, 11(5), 154; https://doi.org/10.3390/infrastructures11050154 - 29 Apr 2026
Abstract
This study presents a data-driven framework for the mechanistic interpretation of asphalt pavement responses using an integrated smart sensing and monitoring system deployed on a national highway in Thailand. A fully instrumented pavement test section was developed, incorporating a multi-sensor embedded network and a field data acquisition platform integrated with weigh-in-motion (WIM) technology. The system consists of 54 sensors, including strain gauges, pressure cells, moisture sensors, and thermocouples, installed at multiple depths to capture high-resolution stress–strain responses under controlled heavy-vehicle loading. Field measurements were analyzed and compared with classical mechanistic models, including Boussinesq’s theory, Odemark’s equivalent thickness method, and Burmister’s multilayer elastic theory. The results demonstrate good agreement for vertical stress predictions in deeper layers, while significant discrepancies were observed in strain responses, particularly in the asphalt layer, where measured tensile strains were up to 2.5 times higher than theoretical estimates. The findings indicate that conventional elastic models provide useful first-order approximations; however, they fall short in representing the viscoelastic behavior of asphalt materials under real loading conditions. Furthermore, the integration of sensor data with traffic loading information confirms that axle load magnitude is the dominant factor governing pavement responses, whereas vehicle speed primarily influences load duration. The proposed framework demonstrates the potential of smart sensing systems for enabling automated, data-driven pavement analysis and supporting digital twin-based infrastructure management.
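For orientation, the simplest of the three classical benchmarks, Boussinesq's point-load solution, gives the vertical stress at depth z and radial offset r beneath a surface point load P (standard form; the study's layered comparisons build on such solutions):

$$\sigma_z = \frac{3P}{2\pi}\,\frac{z^{3}}{\left(r^{2}+z^{2}\right)^{5/2}}$$

Directly beneath the load (r = 0) this reduces to $\sigma_z = 3P/(2\pi z^{2})$, decaying smoothly with depth, which is consistent with the better agreement the study reports for vertical stresses in the deeper layers.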

10 pages, 3223 KB  
Article
Artificial Intelligence Training Data and Holistic Health Conceptualization: An Interpretive Exposome Framework
by Emre Umucu
Information 2026, 17(5), 425; https://doi.org/10.3390/info17050425 - 28 Apr 2026
Abstract
Health is increasingly understood as a multidimensional phenomenon shaped by complex interactions among biological, psychosocial, environmental, and informational factors. Building on the human exposome and its extensions, this paper introduces the interpretive exposome, a conceptual framework that captures cumulative exposure to how health-related information is framed, recorded, interpreted, and communicated by clinicians, artificial intelligence (AI) mechanisms, and institutions across the life course. We argue that interpretive processes, including biased clinical health records, algorithmic decision-support outputs, and inequitable communication, operate as exposures that can accumulate and influence downstream health outcomes. We further describe how AI systems function as interpretive filters that may reproduce, alleviate, or amplify bias through training data and recursive deployment. While remaining conceptual in nature, this proposed framework outlines methodological pathways for operationalization using natural language processing (NLP), bias auditing, and multi-modal data integration. The interpretive exposome complements existing exposome models and offers a theoretical foundation for future empirical validation aimed at promoting equitable, transparent, and context-aware healthcare.
(This article belongs to the Special Issue Modeling in the Era of Generative AI)

24 pages, 1531 KB  
Article
SS-RIME: A Scale-Stabilized Approach to EEG Cognitive Workload Classification
by Kais Khaldi, Afrah Alanazi, Inam Alanazi, Sahar Almenwer and Anis Mohamed
Sensors 2026, 26(9), 2679; https://doi.org/10.3390/s26092679 - 25 Apr 2026
Abstract
Accurate and interpretable assessment of cognitive workload from EEG remains a central challenge in neuroergonomics and real-time human–machine interaction. To address the limitations of existing Empirical Mode Decomposition (EMD) and Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) approaches, particularly their instability, limited neuroscientific grounding, and sensitivity to amplitude fluctuations, this paper introduces Scale-Stabilized Relative Intrinsic Mode Energy (SS-RIME), a theoretically motivated and physiologically informed feature extraction framework. SS-RIME integrates instantaneous frequency stabilization to enforce a consistent oscillatory hierarchy across subjects, delta (1–4 Hz) and theta (4–7.5 Hz) spectral weighting based on established frontal-midline activity, and cross-IMF energy normalization to reduce amplitude-driven variability. Applied to 64-channel EEG recorded during N-back tasks, the proposed framework achieved high performance, outperforming both classical machine-learning baselines and deep learning models such as EEGNet, DeepConvNet, and ShallowConvNet. SS-RIME yielded accuracies of 99.12±0.41% (0 vs. 2-back), 97.84±0.63% (0 vs. 3-back), and 92.31±1.12% (2 vs. 3-back), demonstrating strong cross-subject generalization. Theta-dominant IMFs over frontal midline regions emerged as the most discriminative components, supporting the neuroscientific validity of the stabilized and spectrally weighted Hilbert–Huang representation. With an inference time below 20 ms per epoch, SS-RIME is computationally efficient and suitable for real-time neuroergonomics applications, providing a robust, explainable, and physiologically grounded solution for EEG-based cognitive workload decoding while addressing key methodological gaps in prior EMD/CEEMDAN and deep learning approaches.
(This article belongs to the Section Intelligent Sensors)
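The cross-IMF energy normalization at the core of SS-RIME can be sketched as follows. This is our reconstruction of the idea from the abstract only; the optional band weights emphasizing delta/theta modes are an assumption, not the authors' exact pipeline.

```python
import numpy as np

def relative_imf_energies(imfs, band_weights=None):
    """Relative intrinsic mode energies: each IMF's energy as a share of
    the total, making features invariant to overall signal amplitude.
    Optional weights can emphasize delta/theta-dominant modes (an
    assumption here, sketching the abstract's spectral weighting)."""
    energies = np.array([np.sum(np.square(imf)) for imf in imfs])
    rel = energies / energies.sum()              # cross-IMF normalization
    if band_weights is not None:
        rel = rel * np.asarray(band_weights, dtype=float)
        rel = rel / rel.sum()                    # re-normalize after weighting
    return rel
```
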
29 pages, 1102 KB  
Article
A Weighted Relational Graph Model for Emergent Superconducting-like Regimes: Gibbs Structure, Percolation, and Phase Coherence
by Bianca Brumă, Călin Gheorghe Buzea, Diana Mirilă, Valentin Nedeff, Florin Nedeff, Maricel Agop, Ioan Gabriel Sandu and Decebal Vasincu
Axioms 2026, 15(5), 309; https://doi.org/10.3390/axioms15050309 - 25 Apr 2026
Abstract
We introduce a minimal relational network model in which superconducting-like behavior emerges as a collective phase of constrained connectivity and phase coherence, without assuming microscopic electrons, phonons, or material-specific interactions. The model is formulated as a concrete instantiation of a previously introduced axiomatic relational–informational framework for emergent geometry and effective spacetime, in which geometry and effective forces arise from constrained information flow rather than from a background manifold. Mathematically, this construction is realized on a finite weighted graph with binary edge-activation variables and compact vertex phase variables, sampled through a Gibbs ensemble generated by an additive informational action. The system is represented as a finite weighted graph with weighted edges encoding transport or informational costs, augmented by dynamically activated low-cost channels and compact phase degrees of freedom defined at vertices. The effective edge costs induce a weighted shortest-path metric, providing an operational notion of emergent relational geometry. Using Monte Carlo simulations on two-dimensional periodic lattices, we show that the same informational action supports three distinct emergent regimes: a normal resistive phase, a fragile low-temperature–like superconducting phase characterized by noise-sensitive coherence, and a noise-robust high-temperature–like superconducting phase in which global phase coherence persists under substantial fluctuations. These regimes are identified using purely relational observables with direct graph-theoretic and statistical-mechanical interpretation, including percolation of low-cost channels, phase correlation functions, an operational phase stiffness (helicity modulus), and a geometric diagnostic based on relational ball growth. In particular, we extract an effective geometric dimension from the scaling of low-cost accessibility balls, using a ball-growth relation of the form B(r) ~ r^d_eff, revealing a clear monotonic hierarchy between normal, fragile superconducting, and noise-robust superconducting-like regimes. This demonstrates that superconducting-like behavior in the present framework corresponds not only to percolation and phase alignment, but also to a qualitative reorganization of relational geometry. Robustness is tested via finite-size comparison between 8 × 8, 12 × 12 and 16 × 16 lattice realizations. Within this framework, normal and superconducting-like behavior arise from the same underlying relational mechanism and differ only in the structural stability of connectivity, coherence, and geometric accessibility under fluctuations. The aim of this work is structural rather than material-specific: we do not reproduce detailed experimental phase diagrams or microscopic pairing mechanisms, but identify minimal relational conditions under which low-dissipation, phase-coherent transport can emerge as a generic organizational regime of constrained relational systems.
(This article belongs to the Section Mathematical Physics)
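The ball-growth diagnostic is a standard way to read an effective dimension off a weighted graph: count the nodes within shortest-path distance r and fit the log–log slope of B(r) against r. A minimal illustrative sketch (not the authors' code), using networkx:

```python
import numpy as np
import networkx as nx

def effective_dimension(G, source, radii):
    """Fit d_eff in B(r) ~ r**d_eff, where B(r) counts nodes within
    weighted shortest-path distance r of `source`. Choose radii so
    every ball is non-empty."""
    dist = nx.single_source_dijkstra_path_length(G, source, weight="weight")
    d = np.fromiter(dist.values(), dtype=float)
    ball_sizes = np.array([np.count_nonzero(d <= r) for r in radii])
    slope, _intercept = np.polyfit(np.log(radii), np.log(ball_sizes), 1)
    return slope

# On a uniform-cost 2D periodic lattice the slope is near the Euclidean
# dimension 2 (approaching it as radii grow); reorganized edge costs
# shift d_eff accordingly.
G = nx.grid_2d_graph(16, 16, periodic=True)
nx.set_edge_attributes(G, 1.0, "weight")
print(effective_dimension(G, (0, 0), radii=[2, 3, 4, 5, 6]))
```
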
17 pages, 1463 KB  
Article
Physics-Informed Neural Networks for Process Optimization in Laser Powder Bed Fusion of Inconel 718 Superalloy: A Data-Efficient, Physics-Constrained Machine Learning Framework
by Saurabh Tiwari, Seong Jun Heo and Nokeun Park
Metals 2026, 16(5), 465; https://doi.org/10.3390/met16050465 - 24 Apr 2026
Abstract
This study aimed to develop and validate a physics-informed neural network (PINN) framework for data-efficient and physically consistent process optimization in the laser powder bed fusion (LPBF) of Inconel 718 (IN718) superalloy. LPBF is widely adopted for fabricating IN718 components in aerospace and energy applications; however, navigating its high-dimensional, nonlinear process parameter space remains a central challenge. High-fidelity finite element simulations are computationally prohibitive for extensive parameter sweeps, whereas purely data-driven machine learning (ML) models are limited by data scarcity and unphysical extrapolation behavior. The proposed PINN framework embeds the transient heat conduction equation and the Goldak double-ellipsoidal heat source model directly into the neural network training loss, enforcing thermophysical consistency simultaneously with data fidelity. The model was trained on a curated, multi-source dataset of LPBF IN718 parameter combinations drawn from peer-reviewed experimental studies and validated finite element simulation outputs, spanning laser power (70–400 W), scan speed (200–2000 mm/s), hatch spacing (50–140 µm), and layer thickness (20–50 µm). The PINN predicted the melt pool width, depth, peak temperature, and relative density with mean absolute percentage errors (MAPE) of 3.8%, 4.7%, 3.1%, and 1.9%, respectively, outperforming a baseline artificial neural network (ANN) with an identical architecture. The framework correctly identified the optimal volumetric energy density (VED) window of 55–105 J/mm³, yielding relative densities ≥99.5%, consistent with published experimental thresholds for IN718. A data efficiency analysis demonstrated that the PINN trained on 25% of the data matches the performance of the ANN trained on 100%, confirming an approximately four-fold data efficiency improvement attributable to physics-informed regularization, consistent with theoretical predictions. Sensitivity analysis via automatic differentiation confirmed that laser power and scan speed are the dominant parameters (~85% combined variance), in agreement with previous studies. This study provides a computationally efficient, interpretable, and physically consistent ML pathway for the accelerated process qualification of IN718 components for aerospace and energy applications.
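The VED window reported above follows from the standard definition VED = P/(v·h·t); a quick check of one candidate parameter set (values chosen for illustration, not taken from the paper):

```python
def volumetric_energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density in J/mm³: laser power divided by
    scan speed × hatch spacing × layer thickness."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# 200 W, 1000 mm/s, 0.10 mm hatch, 0.03 mm layer -> ~66.7 J/mm³,
# inside the 55–105 J/mm³ window identified as optimal for IN718.
print(volumetric_energy_density(200, 1000, 0.10, 0.03))
```
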
31 pages, 834 KB  
Article
Verification of the Methods of Digital Monitoring of Information Space Based on Coding Theory Tools
by Dina Shaltykova, Akhat Bakirov, Anastasiya Grishina, Mariya Kostsova, Yelizaveta Vitulyova and Ibragim Suleimenov
Computers 2026, 15(4), 260; https://doi.org/10.3390/computers15040260 - 21 Apr 2026
Abstract
This study examines the applicability of coding-theoretic tools to the digital monitoring of information space. The proposed approach treats response patterns to socially significant stimuli as binary sequences and interprets their analysis as a classification problem analogous to error correction in coding theory. To verify the feasibility of this framework, a model psychological test consisting of seven binary questions was analyzed using a procedure derived from the Hamming code (7,4). The method makes it possible to map the full space of observed answer combinations onto a smaller set of reference codewords and thereby identify stable response configurations. The obtained results show that the distributions produced after coding-based transformation are markedly non-uniform and contain recurrent maxima, indicating the presence of structured patterns in collective responses. It is also shown that permutations of question order substantially affect the resulting distributions and correlation indicators, which highlights both the sensitivity and the analytical potential of the proposed encoding scheme. The main contribution of the study is methodological: it demonstrates that error-correcting coding can be operationalized as a formal tool for detecting latent regularities in simplified monitoring data. At the same time, the present results should be regarded as proof of concept, since further work is required to validate the approach on larger datasets, compare it with baseline classification methods, and extend it to longer and multivalued response sequences.
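The mapping from observed 7-bit answer patterns onto reference codewords is ordinary syndrome decoding for the Hamming (7,4) code, which collapses all 128 possible patterns onto 16 codewords; a compact sketch (the question-to-bit assignment is the analyst's choice):

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j (j = 1..7), so a nonzero syndrome directly indexes
# the single position to flip.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def nearest_codeword(answers):
    """Map a 7-bit response vector to the nearest Hamming codeword by
    flipping at most one bit."""
    r = np.asarray(answers) % 2
    s = H @ r % 2
    pos = 4 * s[0] + 2 * s[1] + s[2]   # 0 means already a codeword
    if pos:
        r[pos - 1] ^= 1
    return r

# Corrects position 5, returning the codeword [1, 0, 1, 1, 0, 1, 0]:
print(nearest_codeword([1, 0, 1, 1, 1, 1, 0]))
```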

33 pages, 503 KB  
Review
Kolmogorov–Arnold Networks for Sensor Data Processing: A Comprehensive Survey of Architectures, Applications, and Open Challenges
by Antonio M. Martínez-Heredia and Andrés Ortiz
Sensors 2026, 26(8), 2515; https://doi.org/10.3390/s26082515 - 19 Apr 2026
Abstract
Kolmogorov–Arnold Networks (KANs) have recently gained increasing attention as an alternative to conventional neural architectures, mainly because they replace fixed activation functions with learnable univariate mappings defined along network edges. This design not only increases modeling flexibility but also makes it easier to interpret how inputs are transformed within the network while maintaining parameter efficiency. KANs are particularly well suited for sensor-driven systems where transparency, robustness, and computational constraints are critical. This study provides a survey of KAN-based approaches for processing sensor data. A literature review covering 2024 to 2026, conducted with a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based methodology, examined the deployment of KAN models in industrial and mechanical sensing, medical and biomedical sensing, and remote sensing and environmental monitoring. We first revisit the theoretical foundations of KANs and their main architectural variants, including spline-based, polynomial-based, monotonic, and hybrid formulations, to structure the discussion. From a practical standpoint, we then examine how KAN modules are integrated into modern deep learning pipelines, such as convolutional, recurrent, transformer-based, graph-based, and physics-informed architectures. KAN-based models demonstrate predictive performance comparable to conventional machine learning models while using fewer parameters and producing more interpretable representations. Several limitations persist, including computational overhead, sensitivity to noisy signals, and resource-constrained device deployment challenges. Real-world sensor systems encounter significant challenges in adopting KAN-based models, including scalability in large-scale sensor networks, integration with hardware architectures, automated model development, resilience to out-of-distribution conditions, and the need for standardized evaluation metrics. Collectively, these observations provide a clearer understanding of the current and potential limitations of KAN-based models, offering practical guidance on the development of interpretable and efficient learning systems for future sensor equipment applications.
(This article belongs to the Section Intelligent Sensors)
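For context, the construction behind KANs is the Kolmogorov–Arnold representation theorem: any continuous multivariate function on a bounded domain can be written using only univariate functions and addition,

$$f(x_1,\ldots,x_n) \;=\; \sum_{q=1}^{2n+1} \Phi_q\!\Big(\sum_{p=1}^{n} \phi_{q,p}(x_p)\Big),$$

and a KAN layer generalizes this by making the univariate edge functions φ (typically splines) learnable and stackable, which is what yields the per-edge interpretability the survey highlights.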

7 pages, 191 KB  
Proceeding Paper
Psychological Dimensions Involved in Image Communication: A Multidisciplinary Research Proposal for Analyzing Cognitive and Perceptual Processes in Visual Education
by Giusi Antonia Toto and Pierpaolo Limone
Proceedings 2026, 139(1), 7; https://doi.org/10.3390/proceedings2026139007 - 17 Apr 2026
Abstract
Image communication represents a fundamental domain of human experience that intersects cognitive neuroscience, educational psychology, and visual communication theory. The increasing digitalization of contemporary society has amplified the importance of visual literacy, defined as the ability to interpret, use, and create visual media. While neuroscientific research highlights the brain’s proficiency in processing visual information, significant gaps remain in understanding the underlying psychological mechanisms and their practical applications in educational contexts. This study proposes a multidisciplinary research design to systematically analyze these psychological dimensions. The research will integrate cognitive, perceptual, and pedagogical perspectives to understand how visual representations influence learning. The methodological design includes a multi-method approach combining experimental analysis, ethnographic observation, and psychometric evaluation on a stratified sample of 240 participants (aged 16–25) divided into three groups: high school students (n = 80), university students (n = 80), and young professionals (n = 80). The proposed methodology will utilize eye-tracking to analyze visual perception patterns, integrated with semantic differential methods to evaluate cognitive and affective associations with visual imagery. The expected results should clarify how the effectiveness of image communication depends on the coherence between technical and semantic aspects of visual imagery. The research aims to contribute to the theoretical framework of educational neuroscience, offering empirical evidence for optimizing teaching strategies based on multimodal visual communication.
38 pages, 585 KB  
Review
A Unified Information Bottleneck Framework for Multimodal Biomedical Machine Learning
by Liang Dong
Entropy 2026, 28(4), 445; https://doi.org/10.3390/e28040445 - 14 Apr 2026
Abstract
Multimodal biomedical machine learning increasingly integrates heterogeneous data sources (including medical imaging, multi-omics profiles, electronic health records, and wearable sensor signals) to support clinical diagnosis, prognosis, and treatment response prediction. Despite strong empirical performance, most existing multimodal systems lack a principled theoretical foundation for understanding why fusion improves prediction, how information is distributed across modalities, and when models can be trusted under incomplete or shifting data. This paper develops a unified information-theoretic framework that formalizes multimodal biomedical learning as an information optimization problem. We formulate multimodal representation learning through the information bottleneck principle, deriving a variational objective that balances predictive sufficiency against informational compression in an architecture-agnostic manner. Building on this foundation, we introduce information-theoretic tools for decomposing modality contributions via conditional mutual information, quantifying redundancy and synergy, and diagnosing fusion collapse. We further show that robustness to missing modalities can be cast as an information consistency problem and extend the framework to longitudinal disease modeling through transfer entropy and sequential information bottleneck objectives. Applications to multimodal foundation models, uncertainty quantification, calibration, and out-of-distribution detection are developed. Empirical case studies across three biomedical datasets (TCGA breast cancer multi-omics, TCGA glioma clinical-plus-molecular data, and OASIS-2 longitudinal Alzheimer’s data) show that the framework’s key quantities are computable and interpretable on real data: MI decomposition identifies modality dominance and redundancy; the VMIB traces a compression–prediction tradeoff in the information plane; entropy-based selective prediction raises accuracy from 0.787 to 0.939 at 50% coverage; transfer entropy reveals stage-dependent modality influence in disease progression; and pretraining/adaptation diagnostics distinguish efficient from wasteful fine-tuning strategies. Together, these results develop entropy and mutual information as organizing principles for the design, analysis, and evaluation of multimodal biomedical AI systems.
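The core objective referenced throughout is the classical information bottleneck Lagrangian: learn a stochastic representation Z of the modalities X that stays predictive of the clinical target Y while compressing away the rest,

$$\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Z;Y),$$

where β sets the compression–prediction tradeoff traced in the information plane, and variational bounds on the two mutual-information terms make the objective trainable with neural encoders.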

34 pages, 2037 KB  
Article
Stock Forecasting Based on Informational Complexity Representation: A Framework of Wavelet Entropy, Multiscale Entropy, and Dual-Branch Network
by Guisheng Tian, Chengjun Xu and Yiwen Yang
Entropy 2026, 28(4), 424; https://doi.org/10.3390/e28040424 - 10 Apr 2026
Abstract
Stock price sequences are characterized by pronounced nonlinearity, non-stationarity, and multi-scale volatility. They are further influenced by complex, multi-source factors, such as macroeconomic conditions and market behavior, making high-precision forecasting highly challenging. Existing approaches are limited by noise and multi-dimensional market features, as well as difficulties in balancing prediction accuracy with model complexity. To address these challenges, we propose Wavelet Entropy and Cross-Attention Network (WECA-Net), which combines wavelet decomposition with a multimodal cross-attention mechanism. From an information-theoretic perspective, stock price dynamics reflect the time-varying uncertainty and informational complexity of the market. We employ wavelet entropy to quantify the dispersion and uncertainty of energy distribution across frequency bands, and multiscale entropy to measure the scale-dependent complexity and regularity of the time series. These entropy-derived descriptors provide an interpretable prior of “information content” for cross-modal attention fusion, thereby improving robustness and generalization under non-stationary market conditions. Experiments on Chinese stock indices, A-Share, and CSI 300 component stock datasets demonstrate that WECA-Net consistently outperforms mainstream models in Mean Absolute Error (MAE) and R² across all datasets. Notably, on the CSI 300 dataset, WECA-Net achieves an R² of 0.9895, underscoring its strong predictive accuracy and practical applicability. This framework is also well aligned with sensor data fusion and intelligent perception paradigms, offering a robust solution for financial signal processing and real-time market state awareness.
(This article belongs to the Section Complexity)
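Wavelet entropy as used here, the Shannon entropy of relative sub-band energies from a wavelet decomposition, takes only a few lines to compute (a sketch using PyWavelets; the wavelet family and decomposition level are our assumptions, not the paper's settings):

```python
import numpy as np
import pywt

def wavelet_entropy(x, wavelet="db4", level=5):
    """Shannon entropy of the relative energy distribution across wavelet
    sub-bands: higher values mean energy dispersed over many bands, i.e.,
    greater informational complexity/uncertainty in the series."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    p = p[p > 0]                      # guard against empty bands
    return float(-np.sum(p * np.log(p)))
```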

28 pages, 1509 KB  
Article
Quantifying Structural Divergence Between Human and Diffusion-Based Generative Visual Compositions
by Necati Vardar and Çağrı Gümüş
Appl. Sci. 2026, 16(8), 3669; https://doi.org/10.3390/app16083669 - 9 Apr 2026
Abstract
The rapid proliferation of text-to-image generative systems has transformed visual content production, yet the structural characteristics embedded in their compositional outputs remain insufficiently understood. Rather than approaching human–AI differentiation as a purely classification problem, this study investigates whether a controlled set of AI-generated and human-designed posters exhibits measurable structural divergence under thematically matched conditions. A dataset of jazz festival posters was analyzed using interpretable geometric and information-theoretic descriptors, including spatial density (padding ratio), edge density, chromatic dispersion, and entropy-based measures. Instead of relying on deep neural detection architectures, we employed a transparent machine-learning framework to examine intrinsic structural separability within feature space. Results demonstrated highly stable group separation (ROC-AUC = 0.99; 95% CI: 0.978–0.999) under cross-validated evaluation. Distributional analysis further revealed a pronounced divergence in spatial density allocation (Kolmogorov–Smirnov statistic = 0.76, p < 10⁻²⁸), accompanied by a very large effect size (Cohen’s d = 1.365). While padding ratio emerged as the dominant discriminative factor, additional entropy- and chromatic-based descriptors contributed to group separation even when spatial density was excluded (AUC = 0.903). These findings indicate that AI-generated and human-designed posters can diverge in negative space allocation and chromatic organization under controlled thematic and platform-specific conditions. The study contributes to the explainable analysis of generative visual systems by reframing human–AI differentiation as a structural divergence problem grounded in interpretable image statistics rather than as a model-specific artifact detection task.
(This article belongs to the Section Computing and Artificial Intelligence)
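Two of the interpretable descriptors, grayscale histogram entropy and edge density, can be reproduced with numpy alone (a sketch; the gradient threshold is illustrative, and the padding-ratio definition follows the paper's own notion of negative-space share rather than any standard library routine):

```python
import numpy as np

def histogram_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale intensity histogram."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def edge_density(gray, threshold=30.0):
    """Fraction of pixels whose gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy) > threshold))
```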
