Search Results (2,695)

Search Parameters:
Keywords = mutual information

23 pages, 3022 KB  
Article
Pedestrian Physiological Response Map Prediction Model for Street Audiovisual Environments Using LSTM Networks
by Jingwen Xing, Xuyuan He, Xinxin Li, Tianci Wang, Siqing Mao and Luyao Li
Buildings 2026, 16(9), 1648; https://doi.org/10.3390/buildings16091648 - 22 Apr 2026
Abstract
Existing studies of street-related emotional perception mainly rely on static scene evaluations, which cannot capture the cumulative effects of environmental exposure during continuous walking. To address this limitation, this study proposes a method for predicting pedestrian physiological responses in sequential audiovisual street environments. Four real-world walking routes were selected, with outbound and return directions treated as independent paths, yielding eight paths and 32 valid samples. EEG, ECG, sound pressure level, first-person video, and GPS data were synchronously collected to construct a 1 s multimodal time-series dataset. Pearson correlation, Kendall correlation, and mutual information analyses were used to examine linear, monotonic, and nonlinear relationships between environmental variables and physiological indicators, and the resulting weights were incorporated into a Long Short-Term Memory (LSTM) model for multi-step prediction. Visual elements and noise exposure were the main factors influencing physiological responses. Among the models, the mutual-information-weighted LSTM performed best, achieving an R2 of 0.77 for heart rate variability (RMSSD), whereas prediction of the EEG ratio (β/α and θ/β) remained limited. An additional independent street sample outside the training set was then used to generate a dual-dimensional EEG-ECG physiological response map, demonstrating the model’s potential for identifying emotional risk segments and supporting street-level micro-renewal. Full article
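The mutual-information weighting step described in this abstract can be sketched with scikit-learn's `mutual_info_regression`; the series, variable names, and weighting scheme below are invented for illustration and are not taken from the paper's data.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 500  # hypothetical 1 s samples along a walking route

# Invented environmental series (not the paper's data).
noise_db = rng.normal(70, 5, n)   # sound pressure level, dB
greenery = rng.uniform(0, 1, n)   # visual greenery ratio
# A physiological target that, by construction, depends on noise only.
rmssd = 0.5 * noise_db + rng.normal(0, 2, n)

X = np.column_stack([noise_db, greenery])
mi = mutual_info_regression(X, rmssd, random_state=0)

# Normalize MI scores into per-feature weights for the model inputs.
weights = mi / mi.sum()
print(weights)
```

Unlike Pearson or Kendall correlation, this estimator also captures nonlinear dependence, which is why the abstract pairs all three.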
96 pages, 2106 KB  
Article
A Random Field Theory of Electromagnetic Information
by Said Mikki
Entropy 2026, 28(5), 481; https://doi.org/10.3390/e28050481 - 22 Apr 2026
Abstract
As a rigorous and comprehensive foundation for electromagnetic information theory (EIT), we develop a general theory that elucidates the universal stochastic structure of radiated electromagnetic (EM) fields and induced currents in generic EM information transmission systems. The framework encompasses arbitrary random scatterers, input information fields, and EM mutual coupling. The system is modeled as a multiply connected, arbitrary Riemannian manifold within the language of differential geometry. Our approach exploits exact Green’s functions (GFs) on manifolds to construct a novel electromagnetic random field theory (EM-RFT). Interpreted as response functions localized on the surfaces of transceivers and scatterers, the GFs allow us to treat the internal physical details of the EM system as a black box, redirecting analytical attention toward external input–output relations in line with signal processing and communication theory. This integration of random fields (RFs), electromagnetics, and GFs yields a unified framework for deriving and characterizing the stochastic structure of arbitrary EM information transmission systems. We rigorously establish that EM random fields satisfying Maxwell’s equations can always be constructed using system GFs driven by external information fields. The theory further decouples stochastic input RFs from random fluctuations associated with the communication medium (e.g., scatterers), and introduces general correlation propagators valid for arbitrary EM links. Using the Karhunen–Loève expansion, all EM random fields are represented as sums of random variables, providing both a simulation framework for arbitrary EM RFs and a basis for evaluating mutual information between input and output spatial domains at arbitrary locations in the system. Full article

27 pages, 498 KB  
Article
An Information Theory of Persistent Homology: Entropy, the Data Processing Inequality, and Rate–Distortion Bounds for Topological Features
by Deepalakshmi Perumalsamy, Caleb Gunalan and Rajermani Thinakaran
Mathematics 2026, 14(8), 1385; https://doi.org/10.3390/math14081385 - 20 Apr 2026
Abstract
Background: Topological Data Analysis (TDA) captures multi-scale geometric features of data as persistence diagrams, yet no principled information-theoretic framework quantifies how much information those features carry, how efficiently they compress, or when they are informationally irreducible. Methods: We construct a measure-theoretic probability space over persistence diagram space using a Poisson-process reference measure, and define topological entropy (H-T), topological mutual information (I-T), and a topological rate–distortion function as the core objects of a new theory. Results: Four theorems with full proofs establish finite stability, axiomatic uniqueness, a Topological Data Processing Inequality, and a Rate–Distortion Theorem with explicit Poisson-model closed-form formula. A Renyi generalization of topological entropy is also established. Computational and practical implementation aspects—including finite-sample estimation, multi-parameter extension, and algorithmic realization—are addressed inline throughout the paper. Conclusions: This framework provides a rigorous measure-theoretic information-theoretic foundation for persistent homology, demonstrated on simulated brain connectivity and point cloud data, with applications to threshold selection, genomic classification bounds, and compressed sensing. Full article
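Entropy of a persistence diagram, of the kind this paper axiomatizes, is commonly computed as the Shannon entropy of normalized bar lifetimes; this is the standard persistence-entropy construction, and the paper's H-T may differ in its details.

```python
import numpy as np

def persistence_entropy(diagram):
    """Shannon entropy of normalized bar lifetimes, in nats.

    `diagram` is a sequence of (birth, death) pairs with finite deaths.
    """
    d = np.asarray(diagram, dtype=float)
    lifetimes = d[:, 1] - d[:, 0]
    p = lifetimes / lifetimes.sum()
    return float(-(p * np.log(p)).sum())

# Four equal bars: maximal entropy log(4) for a 4-point diagram.
print(persistence_entropy([(0, 1), (0, 1), (0, 1), (0, 1)]))
# One dominant bar: lower entropy than the uniform two-bar case.
print(persistence_entropy([(0, 1), (0, 3)]))
```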
50 pages, 1540 KB  
Article
Causally Informative Entropic Inequalities within Families of Distributions with Shared Marginals
by Daniel Chicharro
Entropy 2026, 28(4), 472; https://doi.org/10.3390/e28040472 - 20 Apr 2026
Abstract
The joint probability distribution of observable variables from a system is constrained by the underlying causal structure. In the presence of hidden variables, untestable independencies that involve hidden variables lead to testable causally-imposed inequality constraints for observable variables, whose violation can reject the compatibility of a causal structure with data. One type of causally informative inequalities is entropic inequalities, which appear in the space of entropic terms associated with the distribution of observable variables. We derive a new type of minimum information (minInf) entropic inequalities that substantially increases causal inference power. These new entropic inequalities appear when considering the constraints that the causal structure imposes on entropic terms determined by information minimization within families of distributions that preserve sets of marginals shared with the original distribution. We introduce a new family of minInf data processing inequalities and a procedure to recursively combine different types of data processing inequalities to create tighter testable entropic inequalities. We extensively illustrate the applicability of this procedure in the instrumental causal scenario, integrating the new inequalities with standard instrumental entropic inequalities constructed with multivariate instrumental sets. We also provide additional examples with other types of entropic inequalities, such as the Information Causality and Groups-Decomposition inequalities. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
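The data processing inequality underlying these entropic constraints can be checked numerically on a discrete Markov chain X → Y → Z; the chain below is a made-up example, not one from the paper.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 10_000

# Markov chain X -> Y -> Z: Y is a noisy copy of X, Z post-processes Y.
x = rng.integers(0, 4, n)
flip = rng.random(n) < 0.2
y = np.where(flip, rng.integers(0, 4, n), x)  # noisy channel X -> Y
z = y % 2                                     # deterministic map Y -> Z

mi_xy = mutual_info_score(x, y)  # in nats
mi_xz = mutual_info_score(x, z)
print(mi_xy, mi_xz)  # data processing inequality: I(X;Z) <= I(X;Y)
```

Because Z is a function of Y alone, the inequality holds exactly even for the plug-in estimates on the empirical distribution.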
24 pages, 1441 KB  
Article
Unsupervised Detection of Pathological Gait Patterns via Instantaneous Center of Rotation Analysis
by Ludwin Molina Arias and Magdalena Smoleń
Appl. Sci. 2026, 16(8), 3976; https://doi.org/10.3390/app16083976 - 19 Apr 2026
Viewed by 172
Abstract
This study introduces a novel unsupervised framework, ICR-LLS, for detecting pathological gait patterns using instantaneous center of rotation (ICR) trajectories of the shank in the sagittal plane. ICR trajectories were computed from two-dimensional kinematic data captured at the lateral femoral epicondyle and lateral malleolus for both shanks, producing four-dimensional multivariate time series for each gait trial. Pairwise trajectory dissimilarities were quantified using circularly aligned Dynamic Time Warping (DTW), preserving temporal and spatial structure. The resulting dissimilarity matrix was embedded into a three-dimensional space using a force-directed network layout, enabling intuitive visualization of inter-subject gait relationships. Density-based clustering (DBSCAN), enhanced with a consensus-based ensemble approach, was employed to automatically identify clusters representing typical (healthy) gait patterns and outliers corresponding to pathological deviations. The framework is evaluated on a public dataset comprising individuals with Parkinson’s disease (PD) and healthy controls, achieving a normalized mutual information (NMI) of 0.449 and a Separation-to-Compactness Ratio (SCR) of 6.754, indicating a meaningful cluster structure. In addition, classification-oriented metrics yield an accuracy of 90%, sensitivity of 70%, and specificity of 96.7%, supporting the method’s effectiveness in distinguishing pathological gait. By combining minimal 2D kinematic inputs with unsupervised learning, ICR-LLS provides an interpretable framework for the exploratory analysis of gait variability, and although further validation is required, the findings suggest that ICR trajectories may serve as a meaningful biomechanical descriptor for characterizing pathological locomotion. Full article
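The clustering-evaluation step (DBSCAN scored with normalized mutual information) can be reproduced in miniature with scikit-learn; the synthetic 3-D embedding below merely stands in for the paper's force-directed layout coordinates.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(1)

# Synthetic 3-D embedding: 30 "healthy" trials near the origin and
# 10 "pathological" trials in a separate region (invented coordinates).
healthy = rng.normal(0.0, 0.3, size=(30, 3))
pathological = rng.normal(3.0, 0.3, size=(10, 3))
X = np.vstack([healthy, pathological])
truth = np.array([0] * 30 + [1] * 10)

labels = DBSCAN(eps=1.2, min_samples=3).fit_predict(X)  # -1 marks outliers
nmi = normalized_mutual_info_score(truth, labels)
print(nmi)
```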

21 pages, 1220 KB  
Article
ML-FSID-FIS: A Multi-Level Feature Selection and Fuzzy Inference System for Intrusion Detection in IoMT
by Ghaida Balhareth, Mohammad Ilyas and Basmh Alkanjr
Sensors 2026, 26(8), 2501; https://doi.org/10.3390/s26082501 - 18 Apr 2026
Viewed by 180
Abstract
The Internet of Medical Things (IoMT) is becoming a vital part of modern healthcare, enabling ongoing patient monitoring and remote diagnosis. However, as more devices connect to the internet, healthcare systems become more vulnerable to serious security issues such as unauthorized access, patient data manipulation, and Man-in-the-Middle attacks. Conventional Intrusion Detection Systems (IDSs) often struggle with the unclear and uncertain characteristics of IoMT traffic, which leads to reduced detection accuracy and increased false alarms. To address these challenges, this paper proposes ML-FSID-FIS, a multi-level feature selection-based Intrusion Detection System that employs a fuzzy inference system (FIS) for classification in IoMT networks. The model combines multiple feature selection techniques into a three-stage multi-level feature selection strategy to improve detection efficiency and strengthen the security of IoMT networks. In the first stage, four feature selection techniques—Random Forest, XGBoost, ReliefF, and Mutual Information—are applied to identify the most relevant features. In the second stage, a frequency-based consensus strategy is utilized to extract consistently selected features from the four top-ranked sets. In the third stage, an ensemble refinement using bagging-based ranking is employed to rank the remaining features, resulting in the selection of the top five features. From these, three candidate 3-feature groups are formed and evaluated, and the best-performing group is selected as the final input set for the fuzzy logic classifier. The FIS produces a continuous risk score that is mapped to a binary decision using a validation-selected threshold. When the proposed method was tested on the WUSTL-EHMS-2020 dataset and compared with other recent work using the same dataset, it showed strong detection performance while maintaining a very low false positive rate of 0.3%. 
This study is distinguished by its integrated design, which combines a three-stage multi-level feature selection strategy with fuzzy logic-based intrusion classification to improve feature efficiency and support interpretable intrusion detection in IoMT. Full article
(This article belongs to the Special Issue Semantic Communication for the Internet of Things)
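A minimal sketch of the frequency-based consensus idea, using two of the four rankers (Random Forest importance and mutual information) on synthetic data; the dataset, the top-k value, and the two-ranker reduction are assumptions, not the paper's setup.

```python
from collections import Counter

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for IoMT traffic features: with shuffle=False the
# three informative features are columns 0-2.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
k = 5  # keep the top-k features from each ranker (an assumed value)

# Ranker 1: Random Forest impurity-based importance.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_rf = {int(i) for i in np.argsort(rf.feature_importances_)[-k:]}

# Ranker 2: mutual information between each feature and the label.
mi = mutual_info_classif(X, y, random_state=0)
top_mi = {int(i) for i in np.argsort(mi)[-k:]}

# Frequency-based consensus: keep features selected by both rankers.
votes = Counter(list(top_rf) + list(top_mi))
consensus = sorted(f for f, v in votes.items() if v == 2)
print(consensus)
```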

30 pages, 4591 KB  
Article
Reproducible System Innovation in DICOM Mammography Processing with Pixel-Monotonic Dynamic Range Control
by Gulzira Abdikerimova, Moldir Yessenova, Ainur Shekerbek, Ainur Orynbayeva, Balkiya Zhylanbaeva, Gulbarshin Rakhimbayeva, Aisulu Ismailova, Kuanysh Kadirkulov and Zhanat Manbetova
Technologies 2026, 14(4), 236; https://doi.org/10.3390/technologies14040236 - 17 Apr 2026
Viewed by 173
Abstract
This paper presents a reproducible system innovation for processing Digital Imaging and Communications in Medicine (DICOM) mammography images based on pixel-monotonic dynamic range management and engineering-verifiable intensity transformations. Standard DICOM conversion schemes to 8-bit representation often result in irreversible luminance-range compression, locality-dependent contrast distortions, and reduced robustness of deep learning models. The proposed framework preserves the physical consistency of the Modality LUT and photometric polarity, performs breast-aware robust Winsor normalization, and applies strictly monotonic global tone mapping while preserving the 16-bit depth of the training data. System validation was performed using architecture-independent metrics. Compared to standard processing, the median value of normalized mutual information increased from 0.878 to 0.892, the effective number of bits increased from 7.88 to 10.11 (+2.25), the representation entropy increased by 1.42 bits, and the clipping rate was reduced to almost zero. Experiments with the Faster R-CNN detector showed stable or improved calcification localization at Intersection over Union (IoU) ≥ 0.5 under controlled augmentation conditions. The results confirm that pixel-monotonic dynamic range control provides a reproducible, engineering-verifiable basis for AI-based mammography analysis within the evaluated dataset and experimental setting. Full article
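The normalized-mutual-information metric reported here is typically computed from a joint intensity histogram as (H(A)+H(B))/H(A,B); a hedged sketch on a synthetic 12-bit-like image (the bin count and image are illustrative, not the paper's pipeline):

```python
import numpy as np

def normalized_mi(a, b, bins=64):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), estimated from a joint
    intensity histogram; 1.0 means independent, 2.0 means identical."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    return (entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0))) / entropy(pxy)

rng = np.random.default_rng(0)
img = rng.integers(0, 4096, size=(128, 128)).astype(float)  # 12-bit-like
shuffled = rng.permutation(img.ravel()).reshape(128, 128)

nmi_same = normalized_mi(img, img)       # identical images -> 2.0
nmi_rand = normalized_mi(img, shuffled)  # unrelated images -> near 1.0
print(nmi_same, nmi_rand)
```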

38 pages, 3645 KB  
Article
A Calibrated Multi-Task Ensemble Architecture for Biomedical Risk Prediction
by Zhainagul Khamitova, Gulmira Omarova, Madi Akhmetzhanov, Roza Burganova, Maksym Orynbassar, Umida Sabirova, Almagul Bukatayeva, Aliya Barakova, Gulnoz Jiyanmuratova and Dilchekhra Yuldasheva
Computers 2026, 15(4), 244; https://doi.org/10.3390/computers15040244 - 15 Apr 2026
Viewed by 149
Abstract
Risk stratification of impaired glycemic control remains a major challenge in biomedical data analysis due to heterogeneous metabolic, behavioral, and therapeutic factors observed in large-scale populations. This study proposes a calibrated and interpretable decision–support framework, termed Calibrated Multi-Task Stacking Ensemble (CMSE), for joint modeling of clinically related glycemic outcomes. The framework integrates demographic variables, lipid profiles, renal and inflammatory biomarkers, dietary and smoking indicators, and therapy-related features within a unified predictive architecture. Robust modeling is ensured through leakage-aware preprocessing, quantile-based Winsorization, out-of-fold stacking, and isotonic calibration of probabilistic outputs. The physiological coherence between short-term and long-term glycemic markers is investigated using an explicit intertask coupling mechanism based on the estimated average glucose (eAG) ratio. Model interpretability is supported using SHAP analysis, mutual information, distance correlation, and feature importance metrics. In the primary medication-free screening configuration, the framework is evaluated on the NHANES 2017–March 2020 dataset, achieving ROC-AUC of 0.865 for diabetes classification and R2 values of 0.385 and 0.366 for plasma glucose and HbA1c prediction, respectively. These results indicate that CMSE provides a reliable and explainable approach for calibrated glycemic risk assessment and clinical decision support. Full article
24 pages, 5829 KB  
Article
Analysis of Influencing Factors on the Severity of Ship Collision Accidents Based on an Improved TAN-BN
by Chenyu Wan and Xiongguan Bao
Appl. Sci. 2026, 16(8), 3818; https://doi.org/10.3390/app16083818 - 14 Apr 2026
Viewed by 251
Abstract
This study proposes an improved tree-augmented Bayesian network (TAN-BN) method for analyzing the severity of ship collision accidents by introducing the information contribution rate (ICR) for edge orientation and flexible filtering constraints for structure optimization. Based on 634 ship collision accident reports, a Bayesian network covering accident attributes and causal factors was constructed. The results show that the improved model achieved an overall AUC of 0.864, higher than that of the traditional TAN model (0.827). Mutual information analysis identified ship length as the factor most strongly associated with accident severity, with a mutual information value of 0.0868. Sensitivity analysis based on true risk impact (TRI) further showed that ship length, time, and ship type were the most influential factors, with average TRI values of 19.4%, 8.8%, and 7.2%, respectively. The proposed model effectively captures the dependency relationships between accident severity and multiple influencing factors and can provide quantitative support for risk warning and accident prevention in maritime traffic safety. Full article
(This article belongs to the Special Issue Risk and Safety of Maritime Transportation: 2nd Edition)
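Mutual information between a categorical accident factor and severity, of the kind reported above, can be estimated with scikit-learn's `mutual_info_score`; the records below are fabricated for illustration.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 1000

# Fabricated categorical accident records (labels are illustrative only).
ship_length = rng.choice(["small", "medium", "large"], size=n)
severity = np.where((ship_length == "large") & (rng.random(n) < 0.7),
                    "major", "minor")               # depends on length
time_of_day = rng.choice(["day", "night"], size=n)  # independent of severity

mi_length = mutual_info_score(ship_length, severity)  # in nats
mi_time = mutual_info_score(time_of_day, severity)
print(mi_length, mi_time)
```

The dependent factor scores far higher than the independent one, mirroring how the paper ranks ship length above other attributes.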

38 pages, 585 KB  
Review
A Unified Information Bottleneck Framework for Multimodal Biomedical Machine Learning
by Liang Dong
Entropy 2026, 28(4), 445; https://doi.org/10.3390/e28040445 - 14 Apr 2026
Viewed by 208
Abstract
Multimodal biomedical machine learning increasingly integrates heterogeneous data sources (including medical imaging, multi-omics profiles, electronic health records, and wearable sensor signals) to support clinical diagnosis, prognosis, and treatment response prediction. Despite strong empirical performance, most existing multimodal systems lack a principled theoretical foundation for understanding why fusion improves prediction, how information is distributed across modalities, and when models can be trusted under incomplete or shifting data. This paper develops a unified information-theoretic framework that formalizes multimodal biomedical learning as an information optimization problem. We formulate multimodal representation learning through the information bottleneck principle, deriving a variational objective that balances predictive sufficiency against informational compression in an architecture-agnostic manner. Building on this foundation, we introduce information-theoretic tools for decomposing modality contributions via conditional mutual information, quantifying redundancy and synergy, and diagnosing fusion collapse. We further show that robustness to missing modalities can be cast as an information consistency problem and extend the framework to longitudinal disease modeling through transfer entropy and sequential information bottleneck objectives. Applications to multimodal foundation models, uncertainty quantification, calibration, and out-of-distribution detection are developed. 
Empirical case studies across three biomedical datasets (TCGA breast cancer multi-omics, TCGA glioma clinical-plus-molecular data, and OASIS-2 longitudinal Alzheimer’s data) show that the framework’s key quantities are computable and interpretable on real data: MI decomposition identifies modality dominance and redundancy; the VMIB traces a compression–prediction tradeoff in the information plane; entropy-based selective prediction raises accuracy from 0.787 to 0.939 at 50% coverage; transfer entropy reveals stage-dependent modality influence in disease progression; and pretraining/adaptation diagnostics distinguish efficient from wasteful fine-tuning strategies. Together, these results develop entropy and mutual information as organizing principles for the design, analysis, and evaluation of multimodal biomedical AI systems. Full article
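Conditional mutual information, the tool this framework uses to decompose modality contributions into redundancy and synergy, can be estimated by plug-in counting for discrete variables; the XOR example is a textbook illustration of synergy, not a result from the paper.

```python
from collections import Counter

import numpy as np

def cmi(x, y, z):
    """Plug-in conditional mutual information I(X;Y|Z), in nats,
    for discrete sequences of equal length."""
    n = len(x)
    cxyz = Counter(zip(x, y, z))
    cxz = Counter(zip(x, z))
    cyz = Counter(zip(y, z))
    cz = Counter(z)
    return float(sum(
        (c / n) * np.log(c * cz[zi] / (cxz[(xi, zi)] * cyz[(yi, zi)]))
        for (xi, yi, zi), c in cxyz.items()))

rng = np.random.default_rng(0)

# Redundancy: X and Y are both exact copies of Z, so I(X;Y|Z) = 0.
z = rng.integers(0, 2, 5000)
redundant = cmi(z, z, z)

# Synergy (XOR): X and Y are independent, yet together determine Z,
# so conditioning on Z exposes about ln 2 nats of interaction.
x = rng.integers(0, 2, 5000)
y = rng.integers(0, 2, 5000)
synergistic = cmi(x, y, x ^ y)
print(redundant, synergistic)
```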

18 pages, 3157 KB  
Article
Deep Learning-Based Distributed Photovoltaic Power Generation Forecasting and Installation Potential Assessment
by Jun Chen, Jiawen You and Huafeng Cai
Sustainability 2026, 18(8), 3859; https://doi.org/10.3390/su18083859 - 14 Apr 2026
Viewed by 309
Abstract
Against the backdrop of the global energy structure accelerating its transition towards a clean and low-carbon model, rooftop-distributed photovoltaic (PV) systems are playing an increasingly prominent strategic role in urban energy supply systems, owing to their notable advantages such as environmental friendliness and high spatial utilization efficiency. Consequently, they are becoming a critical pillar in advancing urban energy transformation and enhancing sustainable development. This paper aims to explore deep learning-based techniques for assessing the potential of large-scale distributed PV installations. To accurately evaluate their dynamic power generation capability, a hybrid prediction model integrating variational mode decomposition (VMD), the mutual information (MI) method, and a cascaded xLSTM-Informer network is proposed. Firstly, the model preprocesses key meteorological sequences using VMD, decomposing them into modal components of different frequencies. Subsequently, the MI method is employed to extract critical sequences. Then, the xLSTM module is utilized to learn the long-term complex dependencies between meteorological conditions and PV power output, while the Informer network captures key global temporal patterns, achieving high-precision time-series forecasting of PV generation. Finally, employing the forecasted time-series power curve as the core input, a comprehensive analytical framework for PV installation potential is constructed, integrating assessments of technical feasibility, economic viability, and environmental performance. This framework aims to scientifically estimate the admissible installed capacity and system value of distributed PV systems, thereby providing a dynamic basis for decision-making in urban planning. Full article
(This article belongs to the Section Energy Sustainability)

23 pages, 4408 KB  
Article
Edge-Attentive Dual-Branch Frame Field Network for High-Precision Building Polygon Extraction
by Ruijie Han, Xiangtao Fan, Jian Liu, Weijia Bei, Qifeng Ge, Jianhao Xu and Ruijie Yao
Remote Sens. 2026, 18(8), 1159; https://doi.org/10.3390/rs18081159 - 13 Apr 2026
Viewed by 292
Abstract
Efficient extraction of building footprints from aerial and satellite imagery is essential for urban planning, infrastructure management, and large-scale geospatial analysis. Traditional raster-based approaches provide limited geometric precision, while existing polygon-generation methods often rely on detecting and ordering small-scale building vertices, which can lead to incomplete structures, distorted shapes, and high computational cost. To address these limitations, this study proposes an Edge-Attentive Dual-Branch Frame Field Network (EA-DBFFN) for automated and high-precision building polygon extraction. The method is built upon frame field learning and introduces a dual-branch architecture that separately predicts building masks and edges. A Dual-Task Decoder enlarges and adapts receptive fields while applying spatial attention to enhance the representation of structural details. Fixed Sobel and Laplacian filters are incorporated to strengthen boundary detection. In addition, a Dual-Task Mutual Guidance Module promotes the exchange of complementary information between the mask and edge branches, improving geometric consistency and reducing boundary errors. Experiments conducted on the Inria Aerial dataset and the CrowdAI dataset demonstrate that EA-DBFFN achieves superior performance in region-based metrics, with an AP75 of 72.9% on CrowdAI, representing a 2.3% improvement over competing methods. Furthermore, EA-DBFFN produces geometrically higher-quality polygons, with the Max Tangent Angle error reduced by 6.4%, the Invalid Polygon Ratio reduced by 66.3%, and Edge Smoothness improved by 72.7% compared to the best competing method. The results show that EA-DBFFN provides an effective and computationally efficient framework for generating high-quality vectorized building footprints suitable for large-scale urban analysis. Full article

21 pages, 545 KB  
Article
Updatable Private Set Intersection with Low Communication Overhead
by Chao Qi, Mingmei Zheng, Aoxiang Xu, Jinhan Zhong, Xiaowei Yuan and Qinyun Cai
Symmetry 2026, 18(4), 646; https://doi.org/10.3390/sym18040646 - 12 Apr 2026
Viewed by 181
Abstract
Private set intersection (PSI) is a fundamental cryptographic task that allows two mutually distrusting parties, each holding a private set of elements, to jointly compute the intersection of their sets. It ensures a symmetric information structure where neither party gains any knowledge about the other’s elements beyond those in the shared intersection. Traditional PSI protocols are primarily designed for static settings, which limits their applicability and efficiency in dynamic scenarios where input sets continuously evolve. To address this challenge, the notion of updatable PSI (UPSI) was introduced, enabling repeated PSI computations over changing inputs while preserving the symmetric privacy guarantees between participants. Despite the numerous recent advancements in UPSI research, it still suffers from significant communication overhead. In this paper, we address this challenge by introducing LcUPSI (low-communication UPSI), a new updatable PSI protocol that achieves remarkably low communication overhead. We formally prove that LcUPSI is secure in the semi-honest model. Furthermore, we compare the LcUPSI protocol with the state-of-the-art UPSI protocol BMSTZ24 (ASIACRYPT). The results demonstrate that LcUPSI significantly reduces communication overhead, highlighting its advantages in low-bandwidth conditions. Full article

24 pages, 15558 KB  
Article
A Mutual-Structure Weighted Sub-Pixel Multimodal Optical Remote Sensing Image Matching Method
by Tao Huang, Hongbo Pan, Nanxi Zhou, Siyuan Zou and Shun Zhou
Remote Sens. 2026, 18(8), 1137; https://doi.org/10.3390/rs18081137 - 12 Apr 2026
Cited by 1 | Viewed by 203
Abstract
Sub-pixel matching of multimodal optical images is a critical step in the combined application of multiple sensors. However, structural noise and inconsistencies arising from variations in multimodal image responses usually limit matching accuracy. To address this, phase congruency mutual-structure weighted least absolute deviation (PCWLAD) is developed as a coarse-to-fine matching framework. In the coarse matching stage, we preserve the complete structure and use an enhanced cross-modal similarity criterion to mitigate the structural information loss caused by phase congruency (PC) noise filtering. In the fine matching stage, a method based on mutual-structure filtering and weighted least absolute deviation is introduced to enhance inter-modal structural consistency and to adaptively estimate sub-pixel displacements. Experiments on three multimodal datasets—Landsat visible-infrared, short-range visible-near-infrared, and unmanned aerial vehicle (UAV) optical image pairs—show that PCWLAD achieves superior average performance compared with eight state-of-the-art methods, attaining an average matching accuracy of approximately 0.4 pixels. Full article
(This article belongs to the Special Issue Advances in Multi-Source Remote Sensing Data Fusion and Analysis)
32 pages, 7656 KB  
Article
Unveiling Systemic Risks in Sustainable Safety Management: Integrating BERTopic, LLM, and SNA for Accident Text Mining
by Lanjing Wang, Rui Huang, Yige Chen, Yunxiang Yang, Jing Zhan and Haiyuan Gong
Sustainability 2026, 18(8), 3787; https://doi.org/10.3390/su18083787 - 10 Apr 2026
Viewed by 307
Abstract
To unveil the underlying risk structures in complex industrial systems, this paper proposes a hybrid analytical framework that integrates BERTopic modeling, a large language model (LLM), and social network analysis (SNA). This framework aims to extract systemic safety intelligence from unstructured accident reports. It first employs BERTopic to identify latent causal topics based on 745 Chinese accident investigation reports and utilizes DeepSeek-V3.1 (LLM) for semantic refinement and causal mapping of these topics. Subsequently, a semantic network of causal keywords based on positive pointwise mutual information (PPMI) is constructed, and its topological structure is analyzed using SNA methods. The study identifies and analyzes five major risk communities: confined spaces, fire, mining, construction, and road traffic. It reveals that accident causation exhibits the small-world characteristics of multi-factor coupling and non-linearity, with core risk nodes concentrated in systemic inducements such as organizational management and compliance deficiencies. The results demonstrate that this framework effectively identifies the latent systemic risk patterns embedded within the texts, providing methodological support for developing sustainable safety management mechanisms based on design for safety. Full article
(This article belongs to the Special Issue Achieving Sustainability in Safety Management and Design for Safety)
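The PPMI weighting used to build the semantic network is a standard transform of a co-occurrence count matrix; a small sketch with invented keyword counts (not the paper's corpus):

```python
import numpy as np

def ppmi(counts):
    """Positive pointwise mutual information from a co-occurrence
    count matrix: PPMI_ij = max(0, log[p(i,j) / (p(i) p(j))])."""
    counts = np.asarray(counts, dtype=float)
    p_ij = counts / counts.sum()
    p_i = p_ij.sum(axis=1, keepdims=True)
    p_j = p_ij.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore"):
        pmi = np.log(p_ij / (p_i * p_j))
    return np.maximum(pmi, 0.0)

# Invented co-occurrence counts for three causal keywords: the first
# two co-occur often, the third only rarely.
counts = [[0, 8, 1],
          [8, 0, 1],
          [1, 1, 0]]
M = ppmi(counts)
print(M)
```

The resulting matrix serves directly as a weighted adjacency matrix for the SNA step: strong co-occurrence yields a heavy edge, never-co-occurring pairs get weight zero.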
