Search Results (2,801)

Search Parameters:
Keywords = information theoretic methods

20 pages, 396 KB  
Article
Preliminary and Shrinkage-Type Estimation for the Parameters of the Birnbaum–Saunders Distribution Based on Modified Moments
by Syed Ejaz Ahmed, Muhammad Kashif Ali Shah, Waqas Makhdoom and Nighat Zahra
Stats 2026, 9(1), 8; https://doi.org/10.3390/stats9010008 (registering DOI) - 16 Jan 2026
Abstract
The two-parameter Birnbaum–Saunders (B-S) distribution is widely applied across various fields due to its favorable statistical properties. This study aims to enhance the efficiency of modified moment estimators for the B-S distribution by systematically incorporating auxiliary non-sample information. To this end, we developed and analyzed a suite of estimation strategies, including restricted estimators, preliminary test estimators, and Stein-type shrinkage estimators. A pretest procedure was formulated to guide the decision on whether to integrate the non-sample information. The relative performance of these estimators was rigorously evaluated through an asymptotic distributional analysis, comparing their asymptotic distributional bias and risk under a sequence of local alternatives. The finite-sample properties were assessed via Monte Carlo simulation studies. The practical utility of the proposed methods is demonstrated through applications to two real-world datasets: failure times for mechanical valves and bone mineral density measurements. Both numerical results and theoretical analysis confirm that the proposed shrinkage-based techniques deliver substantial efficiency gains over conventional estimators. Full article
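The pretest-and-shrink logic described in this abstract can be sketched generically. The snippet below is not the authors' modified-moment construction for the Birnbaum–Saunders parameters; it is a textbook preliminary-test rule and a positive-part Stein-type rule with hypothetical scalar inputs:

```python
def pretest_estimate(theta_u, theta_r, test_stat, critical):
    """Preliminary-test rule: keep the restricted estimate theta_r unless
    the pretest statistic rejects the auxiliary non-sample information."""
    return theta_u if test_stat > critical else theta_r

def stein_shrinkage(theta_u, theta_r, test_stat, c):
    """Positive-part Stein-type rule: shrink the unrestricted estimate
    theta_u toward theta_r by a data-driven factor in [0, 1)."""
    factor = max(0.0, 1.0 - c / test_stat)
    return theta_r + factor * (theta_u - theta_r)
```

A large test statistic (strong evidence against the prior information) leaves theta_u nearly untouched; a small one pulls the estimate toward theta_r, which is where the efficiency gains come from when the non-sample information is nearly correct.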
28 pages, 2027 KB  
Article
Dynamic Resource Games in the Wood Flooring Industry: A Bayesian Learning and Lyapunov Control Framework
by Yuli Wang and Athanasios V. Vasilakos
Algorithms 2026, 19(1), 78; https://doi.org/10.3390/a19010078 - 16 Jan 2026
Abstract
Wood flooring manufacturers face complex challenges in dynamically allocating resources across multi-channel markets, characterized by channel conflicts, demand uncertainty, and long-term cumulative effects of decisions. Traditional static optimization or myopic approaches struggle to address these intertwined factors, particularly when critical market states like brand reputation and customer base cannot be precisely observed. This paper establishes a systematic and theoretically grounded online decision framework to tackle this problem. We first model the problem as a Partially Observable Stochastic Dynamic Game. The core innovation lies in introducing an unobservable market position vector as the central system state, whose evolution is jointly influenced by firm investments, inter-channel competition, and macroeconomic randomness. The model further captures production lead times, physical inventory dynamics, and saturation/cross-channel effects of marketing investments, constructing a high-fidelity dynamic system. To solve this complex model, we propose a hierarchical online learning and control algorithm named L-BAP (Lyapunov-based Bayesian Approximate Planning), which innovatively integrates three core modules. It employs particle filters for Bayesian inference to nonparametrically estimate latent market states online. Simultaneously, the algorithm constructs a Lyapunov optimization framework that transforms long-term discounted reward objectives into tractable single-period optimization problems through virtual debt queues, while ensuring stability of physical systems like inventory. Finally, the algorithm embeds a game-theoretic module to predict and respond to rational strategic reactions from each channel. We provide theoretical performance analysis, rigorously proving the mean-square boundedness of system queues and deriving the performance gap between long-term rewards and optimal policies under complete information. 
This bound clearly quantifies the trade-off between estimation accuracy (determined by particle count) and optimization parameters. Extensive simulations demonstrate that our L-BAP algorithm significantly outperforms several strong baselines—including myopic learning and decentralized reinforcement learning methods—across multiple dimensions: long-term profitability, inventory risk control, and customer service levels. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
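The particle-filtering ingredient of L-BAP can be illustrated with a minimal bootstrap filter. The state model here (a 1-D Gaussian random walk observed in Gaussian noise) and all parameters are illustrative stand-ins, not the paper's latent market-position dynamics:

```python
import numpy as np

def bootstrap_pf(observations, n_particles=500, q=0.5, r=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random walk
    x_t = x_{t-1} + N(0, q), observed as y_t = x_t + N(0, r).
    Returns the posterior-mean state estimate at each step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles = particles + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (y - particles) ** 2 / r                            # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))                    # posterior mean
        idx = rng.choice(n_particles, n_particles, p=w)                   # multinomial resample
        particles = particles[idx]
    return estimates

est = bootstrap_pf([1.0, 1.2, 0.9, 1.1])
```

The paper's trade-off between particle count and optimization performance shows up directly here: more particles tighten the posterior-mean estimate at linear cost per step.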
27 pages, 1134 KB  
Article
A Cryptocurrency Dual-Offline Payment Method for Payment Capacity Privacy Protection
by Huayou Si, Yaqian Huang, Guozheng Li, Yun Zhao, Yuanyuan Qi, Wei Chen and Zhigang Gao
Electronics 2026, 15(2), 400; https://doi.org/10.3390/electronics15020400 - 16 Jan 2026
Abstract
Current research on cryptocurrency dual-offline payment systems has garnered significant attention from both academia and industry, owing to its potential payment feasibility and application scalability in extreme environments and network-constrained scenarios. However, existing dual-offline payment schemes exhibit technical limitations in privacy preservation, failing to adequately safeguard sensitive data such as payment amounts and participant identities. To address this, this paper proposes a privacy-preserving dual-offline payment method utilizing a cryptographic challenge-response mechanism. The method employs zero-knowledge proof technology to cryptographically protect sensitive information, such as the payer’s wallet balance, during identity verification and payment authorization. This provides a technical solution that balances verification reliability with privacy protection in dual-offline transactions. The method adopts the payment credential generation and credential verification mechanism, combined with elliptic curve cryptography (ECC), to construct the verification protocol. These components enable dual-offline functionality while concealing sensitive information, including counterparty identities and wallet balances. Theoretical analysis and experimental verification on 100 simulated transactions show that this method achieves an average payment generation latency of 29.13 ms and verification latency of 25.09 ms, significantly outperforming existing technology in privacy protection, computational efficiency, and security robustness. The research provides an innovative technical solution for cryptocurrency dual-offline payment, advancing both theoretical foundations and practical applications in the field. Full article
(This article belongs to the Special Issue Data Privacy Protection in Blockchain Systems)
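The challenge-response, zero-knowledge flavor of such a scheme can be sketched with a Schnorr-style proof of knowledge. For brevity this toy uses a small multiplicative group modulo a prime as a stand-in for the elliptic-curve group the paper uses, and a Fiat–Shamir hash for the challenge; the group parameters are illustrative only and far too small for real security:

```python
import hashlib
import secrets

# Toy stand-in for the paper's ECC group: p = 2q + 1 with q prime,
# and G = 4 generating the order-q subgroup of quadratic residues.
P = 2039
Q = 1019
G = 4

def prove(secret_x):
    """Schnorr proof of knowledge of x such that y = G^x mod P."""
    y = pow(G, secret_x, P)
    k = secrets.randbelow(Q)
    t = pow(G, k, P)                                                  # commitment
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % Q  # challenge
    s = (k + c * secret_x) % Q                                        # response
    return y, t, s

def verify(y, t, s):
    """Check G^s == t * y^c without learning anything about x."""
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, t, s = prove(123)
```

The verifier learns that the prover knows the secret (e.g. a balance commitment opening) without seeing it, which is the property the paper relies on to conceal wallet balances during offline verification.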
47 pages, 1424 KB  
Article
Integrating the Contrasting Perspectives Between the Constrained Disorder Principle and Deterministic Optical Nanoscopy: Enhancing Information Extraction from Imaging of Complex Systems
by Yaron Ilan
Bioengineering 2026, 13(1), 103; https://doi.org/10.3390/bioengineering13010103 - 15 Jan 2026
Abstract
This paper examines the contrasting yet complementary approaches of the Constrained Disorder Principle (CDP) and Stefan Hell’s deterministic optical nanoscopy for managing noise in complex systems. The CDP suggests that controlled disorder within dynamic boundaries is crucial for optimal system function, particularly in biological contexts, where variability acts as an adaptive mechanism rather than being merely a measurement error. In contrast, Hell’s recent breakthrough in nanoscopy demonstrates that engineered diffraction minima can achieve sub-nanometer resolution without relying on stochastic (random) molecular switching, thereby replacing randomness with deterministic measurement precision. Philosophically, these two approaches are distinct: the CDP views noise as functionally necessary, while Hell’s method seeks to overcome noise limitations. However, both frameworks address complementary aspects of information extraction. The primary goal of microscopy is to provide information about structures, thereby facilitating a better understanding of their functionality. Noise is inherent to biological structures and functions and is part of the information in complex systems. This manuscript achieves integration through three specific contributions: (1) a mathematical framework combining CDP variability bounds with Hell’s precision measurements, validated through Monte Carlo simulations showing 15–30% precision improvements; (2) computational demonstrations with N = 10,000 trials quantifying performance under varying biological noise regimes; and (3) practical protocols for experimental implementation, including calibration procedures and real-time parameter optimization. The CDP provides a theoretical understanding of variability patterns at the system level, while Hell’s technique offers precision tools at the molecular level for validation. 
Integrating these approaches enables multi-scale analysis, allowing for deterministic measurements to accurately quantify the functional variability that the CDP theory predicts is vital for system health. This synthesis opens up new possibilities for adaptive imaging systems that maintain biologically meaningful noise while achieving unprecedented measurement precision. Specific applications include cancer diagnostics through chromosomal organization variability, neurodegenerative disease monitoring via protein aggregation disorder patterns, and drug screening by assessing cellular response heterogeneity. The framework comprises machine learning integration pathways for automated recognition of variability patterns and adaptive acquisition strategies. Full article
(This article belongs to the Section Biosignal Processing)
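The trial-based precision quantification mentioned in the abstract (N = 10,000 Monte Carlo trials) can be illustrated with a toy experiment: repeat a noisy position measurement many times and report the spread. The Gaussian noise model below is an assumption for illustration, not the paper's imaging physics:

```python
import numpy as np

def mc_precision(true_pos, noise_sd, n_trials=10_000, seed=0):
    """Monte Carlo precision estimate: repeat a noisy measurement
    n_trials times and return the sample spread of the results."""
    rng = np.random.default_rng(seed)
    measurements = true_pos + rng.normal(0.0, noise_sd, n_trials)
    return float(measurements.std(ddof=1))
```

Comparing this spread across noise regimes is the kind of quantification the abstract's simulations perform when assessing precision under varying biological variability.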
19 pages, 9505 KB  
Article
A Fractal Topology-Based Method for Joint Roughness Coefficient Calculation and Its Application to Coal Rock Surfaces
by Rui Wang, Jiabin Dong and Wenhao Dong
Modelling 2026, 7(1), 19; https://doi.org/10.3390/modelling7010019 - 15 Jan 2026
Abstract
The accurate evaluation of the Joint Roughness Coefficient (JRC) is crucial for rock mechanics engineering. Existing JRC prediction models based on a single fractal parameter often face limitations in physical consistency and predictive accuracy. This study proposes a novel two-parameter JRC prediction method based on fractal topology theory. The core innovation of this method lies in extracting two distinct types of information from a roughness profile: the scale-invariant characteristics of its frequency distribution, quantified by the Hurst exponent (H), and the amplitude-dependent scale effects, quantified by the coefficient (C). By integrating these two complementary aspects of roughness, a comprehensive predictive model is established: JRC = 100.014 · H^1.5491 · C^1.2681. The application of this model to Atomic Force Microscopy (AFM)-scanned coal rock surfaces indicates that JRC is primarily controlled macroscopically by amplitude-related information (reflected by C), while the scale-invariant frequency characteristics (reflected by H) significantly influence local prediction accuracy. By elucidating the distinct roles of scale-invariance and amplitude attributes in controlling JRC, this research provides a new theoretical framework and a practical analytical tool for the quantitative evaluation of JRC in engineering applications. Full article
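Reading the model as JRC = 100.014 · H^1.5491 · C^1.2681 (the coefficient/exponent split is inferred from the flattened text of the abstract and should be checked against the paper), the prediction is a one-line power law:

```python
def jrc(hurst, coeff):
    """Two-parameter JRC model. The constant 100.014 and the two
    exponents are taken from the abstract's (possibly garbled)
    formula -- verify against the published paper before use."""
    return 100.014 * hurst ** 1.5491 * coeff ** 1.2681
```

The form makes the abstract's conclusion concrete: JRC grows monotonically in both the frequency parameter H and the amplitude parameter C.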
20 pages, 5073 KB  
Article
SAWGAN-BDCMA: A Self-Attention Wasserstein GAN and Bidirectional Cross-Modal Attention Framework for Multimodal Emotion Recognition
by Ning Zhang, Shiwei Su, Haozhe Zhang, Hantong Yang, Runfang Hao and Kun Yang
Sensors 2026, 26(2), 582; https://doi.org/10.3390/s26020582 - 15 Jan 2026
Abstract
Emotion recognition from physiological signals is pivotal for advancing human–computer interaction, yet unimodal pipelines frequently underperform due to limited information, constrained data diversity, and suboptimal cross-modal fusion. Addressing these limitations, the Self-Attention Wasserstein Generative Adversarial Network with Bidirectional Cross-Modal Attention (SAWGAN-BDCMA) framework is proposed. This framework reorganizes the learning process around three complementary components: (1) a Self-Attention Wasserstein GAN (SAWGAN) that synthesizes high-quality Electroencephalography (EEG) and Photoplethysmography (PPG) to expand diversity and alleviate distributional imbalance; (2) a dual-branch architecture that distills discriminative spatiotemporal representations within each modality; and (3) a Bidirectional Cross-Modal Attention (BDCMA) mechanism that enables deep two-way interaction and adaptive weighting for robust fusion. Evaluated on the DEAP and ECSMP datasets, SAWGAN-BDCMA significantly outperforms multiple contemporary methods, achieving 94.25% accuracy for binary and 87.93% for quaternary classification on DEAP. Furthermore, it attains 97.49% accuracy for six-class emotion recognition on the ECSMP dataset. Compared with state-of-the-art multimodal approaches, the proposed framework achieves an accuracy improvement ranging from 0.57% to 14.01% across various tasks. These findings offer a robust solution to the long-standing challenges of data scarcity and modal imbalance, providing a profound theoretical and technical foundation for fine-grained emotion recognition and intelligent human–computer collaboration. Full article
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)
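One direction of the bidirectional cross-modal attention can be sketched as plain scaled dot-product attention in NumPy. The feature shapes and the EEG/PPG pairing below are illustrative assumptions; the paper's BDCMA additionally uses learned projections and runs the mechanism in both directions with adaptive weighting:

```python
import numpy as np

def cross_attention(queries, keys):
    """Queries from one modality (e.g. EEG features) attend over the
    other modality's features (e.g. PPG); returns attended features
    as convex combinations of the key rows."""
    scores = queries @ keys.T / np.sqrt(queries.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # softmax stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ keys

rng = np.random.default_rng(0)
eeg, ppg = rng.normal(size=(6, 8)), rng.normal(size=(4, 8))
fused_eeg = cross_attention(eeg, ppg)   # EEG attends to PPG
fused_ppg = cross_attention(ppg, eeg)   # and vice versa
```

Running the block in both directions, as above, is what makes the interaction "bidirectional": each modality's representation is re-expressed in terms of the other's.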
36 pages, 6828 KB  
Article
Discriminating Music Sequences Method for Music Therapy—DiMuSe
by Emil A. Canciu, Florin Munteanu, Valentin Muntean and Dorin-Mircea Popovici
Appl. Sci. 2026, 16(2), 851; https://doi.org/10.3390/app16020851 - 14 Jan 2026
Abstract
The purpose of this research was to investigate whether music empirically associated with therapeutic effects contains intrinsic informational structures that differentiate it from other sound sequences. Drawing on ontology, phenomenology, nonlinear dynamics, and complex systems theory, we hypothesize that therapeutic relevance may be linked to persistent structural patterns embedded in musical signals rather than to stylistic or genre-related attributes. This paper introduces the Discriminating Music Sequences (DiMuSe) method, an unsupervised, structure-oriented analytical framework designed to detect such patterns. The method applies 24 scalar evaluators derived from statistics, fractal geometry, nonlinear physics, and complex systems, transforming sound sequences into multidimensional vectors that characterize their global temporal organization. Principal Component Analysis (PCA) reduces this feature space to three dominant components (PC1–PC3), enabling visualization and comparison in a reduced informational space. Unsupervised k-Means clustering is subsequently applied in the PCA space to identify groups of structurally similar sound sequences, with cluster quality evaluated using Silhouette and Davies–Bouldin indices. Beyond clustering, DiMuSe implements ranking procedures based on relative positions in the PCA space, including distance to cluster centroids, inter-item proximity, and stability across clustering configurations, allowing melodies to be ordered according to their structural proximity to the therapeutic cluster. The method was first validated using synthetically generated nonlinear signals with known properties, confirming its capacity to discriminate structured time series. It was then applied to a dataset of 39 music and sound sequences spanning therapeutic, classical, folk, religious, vocal, natural, and noise categories.
The results show that therapeutic music consistently forms a compact and well-separated cluster and ranks highly in structural proximity measures, suggesting shared informational characteristics. Notably, pink noise and ocean sounds also cluster near therapeutic music, aligning with independent evidence of their regulatory and relaxation effects. DiMuSe-derived rankings were consistent with two independent studies that identified the same musical pieces as highly therapeutic. The present research remains at a theoretical stage. Our method has not yet been tested in clinical or experimental therapeutic settings and does not account for individual preference, cultural background, or personal music history, all of which strongly influence therapeutic outcomes. Consequently, DiMuSe does not claim to predict individual efficacy but rather to identify structural potential at the signal level. Future work will focus on clinical validation, integration of biometric feedback, and the development of personalized extensions that combine intrinsic informational structure with listener-specific response data. Full article
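The reduce-then-cluster core of the pipeline (evaluator vectors → PCA → k-means) can be sketched in NumPy. The synthetic two-group data, the 5-D feature size (the paper uses 24 evaluators), and the deterministic farthest-point initialization are illustrative assumptions:

```python
import numpy as np

def pca_reduce(X, n_components=3):
    """Project feature vectors onto their top principal components
    via SVD, mirroring the PCA step (PC1-PC3) in the abstract."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means with deterministic farthest-point init."""
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.sum((X - c) ** 2, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (10, 5)), rng.normal(5, 0.1, (10, 5))])
labels = kmeans(pca_reduce(X, 3), k=2)
```

Distances to the resulting centroids in the PCA space are then the basis for the ranking procedures the abstract describes.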
14 pages, 1359 KB  
Proceeding Paper
Non-Parametric Model for Curvature Classification of Departure Flight Trajectory Segments
by Lucija Žužić, Ivan Štajduhar, Jonatan Lerga and Renato Filjar
Eng. Proc. 2026, 122(1), 1; https://doi.org/10.3390/engproc2026122001 - 13 Jan 2026
Abstract
This study introduces a novel approach for classifying flight trajectory curvature, focusing on early-stage flight characteristics to detect anomalies and deviations. The method intentionally avoids direct coordinate data and instead leverages a combination of trajectory-derived and meteorological features. This research analysed 9849 departure flight trajectories originating from 14 different airports. Two distinct trajectory classes were established through manual visual inspection, differentiated by curvature patterns. This categorisation formed the ground truth for evaluating trained machine learning (ML) classifiers from different families. The comparative analysis demonstrates that the Random Forest (RF) algorithm provides the most effective classification model. RF excels at summarising complex trajectory information and identifying non-linear relationships within the early-flight data. A key contribution of this work is the validation of specific predictors. The theoretical definitions of direction change (using vector values to capture dynamic movement) and diffusion distance (using scalar values to represent static displacement) proved highly effective. Their selection as primary predictors is supported by their ability to represent the essential static and dynamic properties of the trajectory, enabling the model to accurately classify flight paths and potential deviations before the flight is complete. This approach offers significant potential for enhancing real-time air traffic monitoring and safety systems. Full article
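The two validated predictors have natural geometric readings. The sketch below assumes "direction change" means cumulative heading change along the 2-D track and "diffusion distance" means net start-to-end displacement; the paper's exact definitions may differ:

```python
import math

def direction_change(points):
    """Total absolute heading change (radians) along a 2-D trajectory,
    a dynamic-movement predictor in the spirit of the abstract."""
    headings = [math.atan2(y2 - y1, x2 - x1)
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
    total = 0.0
    for h1, h2 in zip(headings, headings[1:]):
        d = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        total += abs(d)
    return total

def diffusion_distance(points):
    """Straight-line displacement from start to end (static predictor)."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    return math.hypot(xn - x0, yn - y0)
```

A straight departure yields zero direction change and a large diffusion distance, while a curved departure does the opposite, which is exactly the separation the classifier exploits.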
28 pages, 604 KB  
Review
Remote Sensing Effects and Invariants in Land Surface Studies
by Hongliang Fang
Remote Sens. 2026, 18(2), 248; https://doi.org/10.3390/rs18020248 - 13 Jan 2026
Abstract
Remote sensing is a technique to acquire information from a distance. Remote sensing effects refer to any factors that need to be considered in remote sensing processes, while remote sensing invariants represent features that remain stable throughout these processes. Both remote sensing effects and invariants are fundamental to the study of remote sensing systems, methods, algorithms, products, and applications. Many studies have explored different effects and invariants independently, yet these studies are scattered across the literature and a comprehensive synthesis is lacking within the community. This paper intends to synthesize various remote sensing effects and invariants under a unified framework. The characterization, underlying principles, and potential applications of a selected group of remote sensing effects were first examined. Subsequently, a suite of nine key effects was addressed: atmospheric, background, clumping, directional, heterogeneity, saturation, scaling, temporal, and topographic effects. Furthermore, a list of remote sensing invariants, including spectral, spatial, temporal, directional, and thematic invariants, was analyzed. Potential directions for future studies were further discussed. This synthesis represents a concerted effort to advance the theoretical understanding of fundamental principles in remote sensing science. Full article
(This article belongs to the Section AI Remote Sensing)
31 pages, 9196 KB  
Article
Balancing Ecological Restoration and Industrial Landscape Heritage Values Through a Digital Narrative Approach: A Case Study of the Dagushan Iron Mine, China
by Xin Bian, Andre Brown and Bruno Marques
Land 2026, 15(1), 155; https://doi.org/10.3390/land15010155 - 13 Jan 2026
Abstract
Under rapid urbanization and ecological transformation, balancing authenticity preservation with adaptive reuse presents a major challenge for industrial heritage landscapes. This study investigates the Dagushan Iron Mine in Anshan, China’s first large-scale open-pit iron mine and once the deepest in Asia, which is currently undergoing ecological backfilling that threatens its core landscape morphology and spatial integrity. Using a mixed-method approach combining archival research, spatial documentation, qualitative interviews, and expert evaluation through the Analytic Hierarchy Process (AHP), we construct a cross-validated evidence chain to examine how evidence-based industrial landscape heritage values can inform low-intervention digital narrative strategies for off-site learning. This study contributes theoretically by reframing authenticity and integrity under ecological transition as the traceability and interpretability of landscape evidence, rather than material survival alone. Evaluation involving key stakeholders reveals a value hierarchy in which historical value ranks highest, followed by social and cultural values, while scientific–technological and ecological–environmental values occupy the mid-tier. Guided by these weights, we develop a four-layer value-to-narrative translation framework and an animation design pathway that supports curriculum-aligned learning for off-site students. This study establishes an operational link between evidence chain construction, value weighting, and digital storytelling translation, offering a transferable workflow for industrial heritage landscapes undergoing ecological restoration, including sites with World Heritage potential or status. Full article
(This article belongs to the Special Issue Urban Landscape Transformation vs. Heritage and Memory)
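The AHP step behind the reported value hierarchy reduces to extracting the principal eigenvector of a pairwise-comparison matrix. The sketch uses power iteration, and the 3×3 example matrix is hypothetical, not the study's actual expert judgments:

```python
import numpy as np

def ahp_weights(M, iters=100):
    """Priority weights = normalized principal eigenvector of the
    pairwise-comparison matrix M, found by power iteration."""
    w = np.ones(len(M)) / len(M)
    for _ in range(iters):
        w = M @ w
        w /= w.sum()
    return w

true = np.array([0.6, 0.3, 0.1])
M = np.outer(true, 1.0 / true)   # perfectly consistent judgments: M[i,j] = w_i / w_j
weights = ahp_weights(M)
```

In practice an AHP evaluation also checks a consistency ratio on M before trusting the weights; that step is omitted here for brevity.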
31 pages, 425 KB  
Article
Research on the Influence of Green Innovation Climate on Employees’ Green Value Co-Creation: Moderating Role of Inclusive Leadership
by Jianbo Tu, Mengchen Lu and Jiaojiao Liu
Sustainability 2026, 18(2), 769; https://doi.org/10.3390/su18020769 - 12 Jan 2026
Abstract
Cultivating a green innovation-oriented work climate exerts a positive effect on employees’ participation in green knowledge sharing and other co-creation behaviors. Previous studies analyzed the influential factors of green value co-creation from the perspective of green motivation and green dynamic capabilities, but there is a lack of research on the antecedents of green value co-creation from the perspective of green innovation climate. Therefore, based on social information processing theory, this paper conducts in-depth research into the impact mechanism of green innovation climate on employee green value co-creation, through perception of corporate social responsibility and employees’ sense of belonging. A questionnaire survey was conducted on Chinese enterprises implementing green innovation, and 337 valid questionnaires were collected. The effect mechanism of green innovation climate on employees’ green value co-creation was analyzed by the hierarchical regression analysis method. Process regression analysis was used to explore the moderating effect of inclusive leadership. The results show that green innovation climate has a significant relation to perception of corporate social responsibility and employees’ sense of belonging. Perception of corporate social responsibility and employees’ sense of belonging have mediating effects on the relations between green innovation climate and employees’ green value co-creation. Inclusive leadership can moderate the relationship between perception of corporate social responsibility and employees’ green value co-creation. In theory, from the perspectives of green innovation climate and inclusive leadership, this study further enriches the research on the driving factors of green value co-creation. In practice, it provides a theoretical reference for enterprises to utilize green innovation climate and inclusive leadership strategies to promote green value co-creation effectively. Full article
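A moderation test of the kind described (inclusive leadership as moderator) boils down to an OLS regression with an interaction term. The NumPy sketch below uses synthetic, noise-free data with a hypothetical true interaction coefficient of 1.5; the variable names and coefficients are illustrative, not the study's estimates:

```python
import numpy as np

def moderation_fit(x, m, y):
    """Fit y ~ 1 + x + m + x*m by OLS; a non-zero coefficient on x*m
    is the usual evidence that m moderates the effect of x on y."""
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, b_x, b_m, b_interaction]

rng = np.random.default_rng(0)
x, m = rng.normal(size=200), rng.normal(size=200)
y = 1.0 + 2.0 * x + 0.5 * m + 1.5 * x * m   # noise-free for illustration
beta = moderation_fit(x, m, y)
```

With real survey data the interaction coefficient would be tested for significance rather than recovered exactly.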
28 pages, 60648 KB  
Article
Physical–MAC Layer Integration: A Cross-Layer Sensing Method for Mobile UHF RFID Robot Reading States Based on MLR-OLS and Random Forest
by Ruoyu Pan, Bo Qin, Jiaqi Liu, Huawei Gou, Xinyi Liu, Honggang Wang and Yurun Zhou
Sensors 2026, 26(2), 491; https://doi.org/10.3390/s26020491 - 12 Jan 2026
Abstract
In automated warehousing scenarios, mobile UHF RFID robots typically operate along preset fixed paths to collect basic information from goods tags. They lack the ability to perceive shelf layouts and goods distribution, leading to problems such as missing reads and low inventory efficiency. To address this issue, this paper proposes a cross-layer sensing method for mobile UHF RFID robot reading states based on multiple linear regression–orthogonal least squares (MLR-OLS) and random forest. For shelf state sensing, a position sensing model is constructed based on the physical layer, and MLR-OLS is used to estimate shelf positions and interaction time. For goods state sensing, combining physical layer and MAC layer features, a K-means-based tag density classification method and a missing tag count estimation algorithm based on frame states and random forest are proposed to realize the estimation of goods distribution and the number of missing goods. On this basis, according to the reading state sensing results, this paper further proposes an adaptive reading strategy for RFID robots to perform targeted reading on missing goods. Experimental results show that when the robot is moving at medium and low speeds, the proposed method can achieve centimeter-level shelf positioning accuracy and exhibit high reliability in goods distribution sensing and missing goods count estimation, and the adaptive reading strategy can significantly improve the goods read rate. This paper realizes cross-layer sensing and reading optimization of the RFID robot system, providing a theoretical basis and technical route for the application of mobile UHF RFID robot systems. Full article
(This article belongs to the Section Sensors and Robotics)
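The K-means-based tag density classification step in the abstract above can be sketched as follows. This is a minimal, hypothetical illustration only: the choice of per-segment read counts as the clustering feature, and the two-cluster (low/high density) setup, are assumptions, not details from the article.

```python
# Hypothetical sketch: classify shelf segments into low/high tag density
# via 1-D k-means on per-segment read counts (feature choice assumed).

def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means; returns (centroids, labels)."""
    # Initialize centroids spread evenly across the observed range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: centroid becomes the mean of its cluster
        # (an empty cluster keeps its previous centroid).
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Synthetic example: read counts for six shelf segments.
counts = [3, 4, 2, 18, 21, 19]
centroids, labels = kmeans_1d(counts, k=2)
```

In this synthetic run the three sparse segments and the three dense segments end up in separate clusters, which is the kind of density partition the missing-tag estimation would then operate on.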
15 pages, 299 KB  
Article
Development and Psychometric Validation of the Hospital Medication System Safety Assessment Questionnaire
by Leila Sales, Ana Filipa Cardoso, Beatriz Araújo and Élvio Jesus
Nurs. Rep. 2026, 16(1), 22; https://doi.org/10.3390/nursrep16010022 - 12 Jan 2026
Abstract
Background/Objectives: Medication incidents remain a significant concern in hospital settings. Integrated medication systems, encompassing organized processes, policies, technologies, and professional practices, are designed to enhance patient safety; however, their safety performance is still suboptimal. Valid and reliable instruments for assessing hospital medication system safety can be a valuable resource for health care management. The aim of this study was to describe the development and psychometric validation of the Hospital Medication System Safety Assessment Questionnaire (HMSSA-Q) for assessing the safety of hospital medication systems and their processes in Portugal. Methods: The HMSSA-Q was developed through a literature review and two rounds of expert panel consultation. Following consensus, a pilot methodological study was conducted in 95 Portuguese hospitals. Construct validity was assessed using principal component factor analysis, and reliability was evaluated through internal consistency (Cronbach’s alpha). Results: The instrument is theoretically structured into five predefined domains/subscales: Organizational Environment, Safe Medication Prescribing, Safe Medication in Hospital Pharmacy, Safe Medication Preparation and Administration, and Information and Patient Education. Principal component analyses performed separately for each domain supported their internal structure. The overall scale showed excellent internal consistency (Cronbach’s α = 0.97), with domain values ranging from 0.86 to 0.94. Conclusions: The HMSSA-Q is a valid and reliable instrument for assessing the safety of hospital medication systems and has the potential to serve as an innovative management tool for improving patient safety. Full article
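The reliability statistic reported above, Cronbach's alpha, can be computed as α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₜ the variance of the respondents' total scores. A minimal sketch on synthetic data follows; the HMSSA-Q item data itself is not reproduced here.

```python
# Minimal sketch of Cronbach's alpha (internal consistency) on synthetic
# item scores; the HMSSA-Q dataset is not public and is not used here.

def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length
    (one entry per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Population variance; the ddof choice cancels in the ratio.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / var(totals))

# Three perfectly correlated items give alpha = 1 by construction.
perfect = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(perfect)
```

Real questionnaire data yields values below 1; the 0.86–0.94 domain range reported above indicates items within each domain covary strongly relative to their individual variances.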
20 pages, 2503 KB  
Article
On Invertibility of Large Binary Matrices
by Ibrahim Mammadov, Pavel Loskot and Thomas Honold
Mathematics 2026, 14(2), 270; https://doi.org/10.3390/math14020270 - 10 Jan 2026
Abstract
Many data processing applications use binary matrices to store digital information, yet the literature offers few algorithms for inverting large binary matrices. This paper contributes three results. First, divide-and-conquer methods for efficiently inverting large matrices over finite fields, such as Strassen’s matrix inversion, often fail on singular sub-blocks even when the original matrix is non-singular. It is proposed to combine Strassen’s method with PLU factorization at each recursive step to obtain robust pivoting, which correctly inverts every non-singular matrix over any finite field; the resulting algorithm is shown to retain sub-cubic time complexity. Second, although theoretical studies describe how to systematically enumerate all invertible matrices over finite fields without redundancy, no practical algorithm has been reported that is both easy to understand and suitable for enumerating large matrices. Bruhat decomposition is proposed for this enumeration: it leverages the group-theoretic structure of the general linear group and defines an ordered sequence of invertible matrices in which each matrix is generated exactly once. Third, a large random binary matrix is invertible with probability of only about 29%. In some applications it may be desirable to repair singular matrices by performing a small number of bit-flips. It is shown that the minimum number of bit-flips equals the matrix rank deficiency, i.e., the minimum Hamming distance from the general linear group, and the required bit-flips are identified by pivoting during the matrix inversion so that full rank can be restored. The correctness and time complexity of the proposed algorithms were verified both theoretically and empirically; a reference C++ implementation is available on GitHub. Full article
(This article belongs to the Special Issue Computational Methods for Numerical Linear Algebra)
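The rank-deficiency claim in the abstract above can be illustrated with a small GF(2) rank computation. This sketch is not the paper's reference C++ implementation; it uses the standard XOR linear-basis technique, with each matrix row encoded as an integer bitmask.

```python
# Illustrative sketch (not the article's reference implementation):
# rank of a binary matrix over GF(2), rows encoded as integer bitmasks.
# The abstract states that the rank deficiency n - rank equals the
# minimum number of bit-flips needed to make the matrix invertible.

def gf2_rank(rows):
    """Return the GF(2) rank via a greedy XOR linear basis."""
    basis = []  # one retained row per distinct leading bit
    for r in rows:
        for b in basis:
            # XOR with b if that strictly lowers r's leading bit.
            r = min(r, r ^ b)
        if r:
            basis.append(r)
    return len(basis)

# A singular 3x3 example: the three rows XOR to zero, so rank is 2
# and the rank deficiency is 1.
singular = [0b110, 0b011, 0b101]
deficiency = 3 - gf2_rank(singular)

# Flipping a single bit in the last row restores full rank,
# matching the claimed minimum number of bit-flips.
repaired = [0b110, 0b011, 0b100]
```

Here one bit-flip suffices exactly because the deficiency is 1; a matrix with deficiency d would need d such flips, one per missing pivot.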
33 pages, 4488 KB  
Article
New Fuzzy Aggregators for Ordered Fuzzy Numbers for Trend and Uncertainty Analysis
by Miroslaw Kozielski, Piotr Prokopowicz and Dariusz Mikolajewski
Electronics 2026, 15(2), 309; https://doi.org/10.3390/electronics15020309 - 10 Jan 2026
Abstract
Decision-making under uncertainty, especially with incomplete or linguistically described data, remains a significant challenge across science and industry. The increasing complexity of real-world problems calls for mathematical models and data processing techniques that effectively handle uncertainty and incompleteness. Aggregators play a key role here, particularly in fuzzy systems, where they are fundamental tools for decision-making, data analysis, and information fusion. Aggregation functions have been studied and applied extensively in many fields of science and engineering, and recent research has explored their usefulness in fuzzy control systems, highlighting both their advantages and limitations. One promising approach is the use of ordered fuzzy numbers (OFNs), which can represent directional tendencies in data. Previous studies introduced the property of direction sensitivity and the corresponding determinant parameter, which enables the analysis of correspondence between OFNs and facilitates inference operations. The aim of this paper is to examine existing aggregation functions for fuzzy numbers and assess their suitability for OFNs. By analyzing the properties, theoretical foundations, and practical applications of these functions, we identify an aggregation operator that complies with OFN principles while ensuring consistency and efficiency in decision-making over fuzzy structures. The paper introduces a novel aggregation approach that preserves the expected mathematical properties while incorporating the directional components inherent to OFNs. The proposed method aims to improve the robustness and interpretability of fuzzy reasoning systems under uncertainty. Full article
(This article belongs to the Special Issue Advances in Intelligent Systems and Networks, 2nd Edition)
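The idea of a direction-preserving aggregator for OFNs can be sketched in miniature. Everything in this snippet is a hypothetical illustration, not the operator proposed in the article: the trapezoidal 4-tuple encoding of the branch endpoints, the orientation read from the endpoint order, and the choice of a componentwise arithmetic mean are all assumptions made for the example.

```python
# Hypothetical sketch only: a trapezoidal OFN is taken here as a 4-tuple
# (f0, f1, g1, g0) of branch endpoints, with orientation read from the
# sign of g0 - f0. The componentwise arithmetic mean then acts as a
# direction-preserving aggregator when all inputs share one orientation.
# This illustrates the idea, not the operator proposed in the article.

def ofn_mean(ofns):
    """Componentwise arithmetic mean of a list of 4-tuple OFNs."""
    k = len(ofns)
    return tuple(sum(o[i] for o in ofns) / k for i in range(4))

def orientation(ofn):
    """+1 for positive orientation, -1 for negative, 0 if degenerate."""
    f0, _, _, g0 = ofn
    return (g0 > f0) - (g0 < f0)

a = (1.0, 2.0, 3.0, 4.0)   # positively oriented input
b = (2.0, 3.0, 4.0, 5.0)   # positively oriented input
m = ofn_mean([a, b])
```

Because the mean is monotone in each component, aggregating inputs that all share one orientation yields an output with that same orientation, which is the minimal direction-sensitivity property the abstract asks of an OFN aggregator.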