Search Results (17)

Search Parameters:
Keywords = minimum description length principle

25 pages, 4732 KB  
Article
Analysis of Core–Periphery Structure Based on Clustering Aggregation in the NFT Transfer Network
by Zijuan Chen, Jianyong Yu, Yulong Wang and Jinfang Xie
Entropy 2025, 27(4), 342; https://doi.org/10.3390/e27040342 - 26 Mar 2025
Cited by 2 | Viewed by 1185
Abstract
With the rise of blockchain technology and the Ethereum platform, non-fungible tokens (NFTs) have emerged as a new class of digital assets. The NFT transfer network exhibits core–periphery structures derived from different partitioning methods, leading to local discrepancies and global diversity. We propose a core–periphery structure characterization method based on Bayesian and stochastic block models (SBMs). This method incorporates prior knowledge to improve the fit of core–periphery structures obtained from various partitioning methods. Additionally, we introduce a locally weighted core–periphery structure aggregation (LWCSA) scheme, which determines local aggregation weights using the minimum description length (MDL) principle. This approach results in a more accurate and representative core–periphery structure. The experimental results indicate that core nodes in the NFT transfer network constitute approximately 2.3–5% of all nodes. Compared to baseline methods, our approach improves the normalized mutual information (NMI) index by 6–10%, demonstrating enhanced structural representation. This study provides a theoretical foundation for further analysis of the NFT market.
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
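
A way to see the role of the MDL principle here: a candidate core/periphery labelling can be scored by the total code length it implies, and competing partitions can then be compared or weighted by that score. The Python sketch below is a minimal illustration under a two-block Bernoulli SBM with a crude parameter cost; it is not the paper's LWCSA scheme.

```python
import numpy as np

def bernoulli_code_length(ones, total):
    # Code length (nats) of `total` binary entries containing `ones` ones,
    # under the block's maximum-likelihood Bernoulli rate.
    if total == 0 or ones in (0, total):
        return 0.0
    p = ones / total
    return -(ones * np.log(p) + (total - ones) * np.log(1 - p))

def partition_description_length(A, is_core):
    # Two-part MDL score of a core/periphery labelling of adjacency A:
    # data cost under a two-block SBM plus a crude model cost for the
    # four block densities and the n node labels.
    n = A.shape[0]
    core = np.where(is_core)[0]
    peri = np.where(~is_core)[0]
    data_cost = 0.0
    for gi in (core, peri):
        for gj in (core, peri):
            block = A[np.ix_(gi, gj)]
            data_cost += bernoulli_code_length(block.sum(), block.size)
    model_cost = 0.5 * 4 * np.log(n * n) + n * np.log(2)
    return data_cost + model_cost

# Lower is better: compare labellings produced by different methods.
rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.05).astype(int)
labels = rng.random(50) < 0.2
print(partition_description_length(A, labels))
```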

18 pages, 477 KB  
Article
Can Effects of a Generalized Uncertainty Principle Appear in Compact Stars?
by João Gabriel Galli Gimenez, Dimiter Hadjimichef, Peter Otto Hess, Marcelo Netz-Marzola and César A. Zen Vasconcellos
Universe 2025, 11(1), 5; https://doi.org/10.3390/universe11010005 - 26 Dec 2024
Cited by 3 | Viewed by 1658
Abstract
In this contribution, a preliminary analysis of the effects of the Generalized Uncertainty Principle (GUP) with a minimum length is performed in the context of compact stars. On the basis of a deformed Poisson canonical algebra with a parametrized minimum length scale that induces deviations from conventional Quantum Mechanics, fundamental questions involving the consistency, evidence, and proofs of this approach as a possible cure for unbounded energy divergence are outlined. GUP effects are incorporated into semiclassical 2N-dimensional systems by means of a time-invariant distortion transformation applied to their non-deformed counterparts. Adopting the quantum hadrodynamics σω approach as a toy model, due to its simplicity and structured description of neutron stars, we perform a preliminary analysis of GUP effects with a minimum spacetime length on these compact objects. The corresponding results for the equation of state and the mass–radius relation for neutron stars are in tune with recent observations, with a maximum mass around 2.5 M☉ and a radius close to 12 km. Our results also indicate the smallness of the noncommutative scale.
(This article belongs to the Special Issue Studies in Neutron Stars)
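
For orientation, a frequently quoted realization of a minimum-length GUP in the quantum gravity literature takes the form below; this is a generic illustration, not necessarily the specific deformed Poisson algebra used by the authors.

```latex
% Generic minimum-length GUP (illustrative):
[\hat{x}, \hat{p}] = i\hbar \left( 1 + \beta \hat{p}^{2} \right)
\quad \Longrightarrow \quad
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}\left(1 + \beta (\Delta p)^{2}\right)
\quad \Longrightarrow \quad
\Delta x_{\min} = \hbar \sqrt{\beta},
```

so that Δx can no longer be made arbitrarily small by increasing Δp, which is the sense in which a minimum length appears.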

16 pages, 381 KB  
Article
Quantum-Spacetime Symmetries: A Principle of Minimum Group Representation
by Diego J. Cirilo-Lombardo and Norma G. Sanchez
Universe 2024, 10(1), 22; https://doi.org/10.3390/universe10010022 - 4 Jan 2024
Cited by 5 | Viewed by 2230
Abstract
We show that, as with the principle of minimum action in classical and quantum mechanics, there exists an even more general principle in the very fundamental structure of quantum spacetime: the principle of minimal group representation, which allows us to consistently and simultaneously obtain a natural description of spacetime's dynamics and the physical states admissible in it. The theoretical construction is based on the physical states that are average values of the generators of the metaplectic group Mp(n), the double covering of SL(2,C) in a vector representation, with respect to the coherent states carrying the spin weight. Our main results here are: (i) There exists a connection between the dynamics given by the metaplectic-group symmetry generators and the physical states (the mappings of the generators through bilinear combinations of the basic states). (ii) The ground states are coherent states of the Perelomov–Klauder type defined by the action of the metaplectic group, which divides the Hilbert space into even and odd states that are mutually orthogonal. They carry spin weights of 1/4 and 3/4, respectively, from which two other basic states can be formed. (iii) The physical states, mapped bilinearly with the basic 1/4- and 3/4-spin-weight states, plus their symmetric and antisymmetric combinations, have spin contents s = 0, 1/2, 1, 3/2 and 2. (iv) The generators realized with the bosonic variables of the harmonic oscillator introduce a natural supersymmetry and a superspace whose line element is the geometrical Lagrangian of our model. (v) From the line element at the operator level, a coherent physical state of spin 2 can be obtained and naturally related to the metric tensor. (vi) The metric tensor is naturally discretized by taking the discrete series given by the basic states (coherent states) in the n number representation, reaching the classical (continuous) spacetime as n → ∞. (vii) There emerges a relation between the eigenvalue α of our coherent-state metric solution and the black-hole area (entropy), A_bh/(4 l_p²) = α, relating the phase space of the metric found, g_ab, and the black-hole area, A_bh, through the squared Planck length l_p² and the eigenvalue α of the coherent states. As a consequence of the lowest level of the quantum discrete spacetime spectrum (the ground state associated with n = 0 and its characteristic length), there exists a minimum entropy related to the black-hole history.
(This article belongs to the Special Issue Quantum Physics including Gravity: Highlights and Novelties)

21 pages, 1191 KB  
Article
Restoration for Intensity Nonuniformities with Discontinuities in Whole-Body MRI
by Stathis Hadjidemetriou, Ansgar Malich, Lorenz Damian Rossknecht, Luca Ferrarini and Ismini E. Papageorgiou
Signals 2023, 4(4), 725-745; https://doi.org/10.3390/signals4040040 - 18 Oct 2023
Cited by 1 | Viewed by 2795
Abstract
Reconstruction in MRI assumes a uniform radio-frequency field. However, this assumption is violated by coil field nonuniformity and sensitivity variations. In whole-body MRI, the nonuniformities are more complex due to imaging with multiple coils that typically have different overall sensitivities, resulting in sharp sensitivity changes at the junctions between adjacent coils. These lead to images with anatomically inconsequential intensity nonuniformities, including jump discontinuities at the junctions between adjacent coils. The body is also imaged with multiple contrasts, producing images with different nonuniformities. A method is presented for the joint intensity uniformity restoration of two such images to achieve intensity homogenization. The effect of the spatial intensity distortion on the auto-co-occurrence statistics of each image, as well as on the joint-co-occurrence statistics of the two images, is modeled in terms of a Point Spread Function (PSF). The PSFs and the non-stationary deconvolution of these PSFs from the statistics offer posterior Bayesian expectation estimates of the nonuniformity with Bayesian coring. Subsequently, a piecewise smoothness constraint is imposed on the nonuniformity. This uses non-isotropic smoothing of the restoration field to allow the modeling of junction discontinuities. The implementation of the restoration method is iterative and imposes stability and validity constraints on the nonuniformity estimates. The effectiveness and accuracy of the method are demonstrated extensively with whole-body MRI image pairs of thirty-one cancer patients.

18 pages, 9274 KB  
Article
Joint Direction of Arrival-Polarization Parameter Tracking Algorithm Based on Multi-Target Multi-Bernoulli Filter
by Zhikun Chen, Bin’an Wang, Ruiheng Yang and Yuchao Lou
Remote Sens. 2023, 15(16), 3929; https://doi.org/10.3390/rs15163929 - 8 Aug 2023
Cited by 4 | Viewed by 1982
Abstract
This paper presents a tracking algorithm for the joint estimation of direction of arrival (DOA) and polarization parameters, which exhibit dynamic behavior due to the movement of signal source carriers. The proposed algorithm addresses the challenge of real-time estimation in multi-target scenarios with an unknown number of targets. It is built upon the Multi-target Multi-Bernoulli (MeMBer) filter and makes use of a sensor array called the Circular Orthogonal Double-Dipole (CODD) array. The algorithm begins by constructing a Minimum Description Length (MDL) criterion that exploits the characteristics of the polarization-sensitive array. This allows for adaptive estimation of the number of signal sources and facilitates the separation of the noise subspace. Subsequently, the joint-parameter Multiple Signal Classification (MUSIC) spatial spectrum function is employed as the pseudo-likelihood function, overcoming the limitations imposed by unknown prior information. To approximate the posterior distribution of the MeMBer filter, the Sequential Monte Carlo (SMC) method is utilized. The simulation results demonstrate that the proposed algorithm achieves excellent tracking accuracy in joint DOA-polarization parameter estimation, whether the number of signal sources is known or unknown. Moreover, the algorithm demonstrates robust tracking convergence even under low Signal-to-Noise Ratio (SNR) conditions.
(This article belongs to the Special Issue Advances in Radar Systems for Target Detection and Tracking)
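
The source-enumeration step has a classic prototype: the Wax–Kailath MDL criterion, which reads the number of sources off the eigenvalues of the sample covariance matrix. The sketch below shows that textbook form (assuming Gaussian signals and white noise); the paper's criterion is adapted to the CODD polarization-sensitive array, so treat this only as the underlying idea.

```python
import numpy as np

def mdl_num_sources(eigvals, n_snapshots):
    # Wax-Kailath MDL: eigvals are the sample-covariance eigenvalues
    # sorted in descending order; returns the estimated source count.
    p = len(eigvals)
    scores = []
    for k in range(p):
        tail = eigvals[k:]                   # presumed noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))  # geometric mean
        ari = np.mean(tail)                  # arithmetic mean
        loglik = (p - k) * n_snapshots * np.log(geo / ari)
        penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
        scores.append(-loglik + penalty)
    return int(np.argmin(scores))

# Example: 6-element array, 200 snapshots, 2 strong sources.
eigs = np.array([9.1, 6.3, 1.05, 1.02, 0.99, 0.97])
print(mdl_num_sources(eigs, 200))  # expected: 2
```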

15 pages, 3889 KB  
Article
Dielectric Spectroscopy as a Condition Monitoring Technique for Low-Voltage Cables: Onsite Aging Assessment and Sensitivity Analyses
by Simone Vincenzo Suraci, Chuanyang Li and Davide Fabiani
Energies 2022, 15(4), 1509; https://doi.org/10.3390/en15041509 - 17 Feb 2022
Cited by 12 | Viewed by 3153
Abstract
This work presents the development, validation, and sensitivity analyses of a portable device capable of performing high-frequency dielectric spectroscopy tests on site. After a brief introduction to the operating principle and a description of the impact of frequency on dielectric spectroscopy, the article presents the results of tests on reference samples, confirming good agreement with expected values. The frequency region in which the device operates, 1–200 kHz, was chosen because of its correlation with oxidative species of the polymeric compound. The sensitivity analyses were performed by measuring the dielectric response of low-voltage cables with different aged lengths. The outcome of these tests is twofold. On the one hand, they confirm the suitability of the technique for aging evaluation; on the other hand, they allow the assessment of the minimum aged length (damage ratio) that causes appreciable variations in the obtained dielectric spectrum. This quantity was found to be ~35% of the total cable length.

27 pages, 12303 KB  
Article
Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers
by Baihan Lin
Entropy 2022, 24(1), 59; https://doi.org/10.3390/e24010059 - 28 Dec 2021
Cited by 3 | Viewed by 3569
Abstract
Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) that computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced, and non-stationary input distributions in image classification, classic control, procedurally generated reinforcement learning, generative modeling, handwriting generation, and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
(This article belongs to the Special Issue Information Theory and Deep Neural Networks)

14 pages, 718 KB  
Article
Evaluation of Transmission Properties of Networks Described with Reference Graphs Using Unevenness Coefficients
by Sławomir Bujnowski, Beata Marciniak, Zbigniew Lutowski, Adam Flizikowski and Olutayo Oyeyemi Oyerinde
Electronics 2021, 10(14), 1684; https://doi.org/10.3390/electronics10141684 - 14 Jul 2021
Cited by 3 | Viewed by 2133
Abstract
This paper discusses a method for evaluating the transmission properties of networks described with regular graphs (Reference Graphs) using unevenness coefficients. The first part of the paper offers general information about describing network topology via graphs. The terms ‘chord graph’ and ‘Reference Graph’, the latter being a special form of regular graph, are defined. The operating principle of a basic tool used for testing a network’s transmission properties is discussed. The next part describes the procedure for finding the shortest paths connecting any two nodes of a graph and the method for determining the number of uses of individual graph edges. The analysis shows that the use of particular edges of a graph depends on two factors: their total number in minimum-length paths and their total number in parallel paths connecting the graph nodes. The latter makes it possible to define an unevenness coefficient. The calculated values of the unevenness coefficients can be used to evaluate the transmission properties of networks and to control the distribution of transmission resources.
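
A minimal sketch of the edge-usage counting idea: run a BFS from every node and count how often each edge lies on the recovered shortest path to every other node. This counts one shortest path per ordered pair via a single BFS tree, whereas the paper also accounts for parallel minimum-length paths; the spread of such counts is what an unevenness coefficient summarizes.

```python
from collections import deque, Counter

def edge_usage(adj):
    # adj: dict mapping node -> iterable of neighbours (undirected graph).
    # Counts how often each edge appears on one BFS shortest path per
    # ordered (source, target) pair, so every pair is counted twice.
    usage = Counter()
    for s in adj:
        parent = {s: None}
        queue = deque([s])
        while queue:                 # BFS shortest-path tree from s
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        for t in parent:             # walk each path back to s
            v = t
            while parent[v] is not None:
                usage[frozenset((v, parent[v]))] += 1
                v = parent[v]
    return usage

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(edge_usage(ring))  # a symmetric ring uses every edge equally
```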

21 pages, 15057 KB  
Article
Low-Rank Matrix Recovery from Noise via an MDL Framework-Based Atomic Norm
by Anyong Qin, Lina Xian, Yongliang Yang, Taiping Zhang and Yuan Yan Tang
Sensors 2020, 20(21), 6111; https://doi.org/10.3390/s20216111 - 27 Oct 2020
Cited by 3 | Viewed by 3043
Abstract
The recovery of the underlying low-rank structure of clean data corrupted with sparse noise/outliers is attracting increasing interest. However, in many low-level vision problems, the exact target rank of the underlying structure and the particular locations and values of the sparse outliers are not known. Thus, conventional methods cannot separate the low-rank and sparse components completely, especially in the case of gross outliers or deficient observations. Therefore, in this study, we employ the minimum description length (MDL) principle and the atomic norm for low-rank matrix recovery to overcome these limitations. First, we employ the atomic norm to find all the candidate atoms of the low-rank and sparse terms, and then we minimize the description length of the model in order to select the appropriate atoms of the low-rank and sparse matrices, respectively. Our experimental analyses show that the proposed approach obtains a higher success rate than state-of-the-art methods, even when the number of observations is limited or the corruption ratio is high. Experimental results on synthetic data and real sensing applications (high dynamic range imaging, background modeling, and removal of noise and shadows) demonstrate the effectiveness, robustness, and efficiency of the proposed method.
(This article belongs to the Special Issue Signal Processing and Machine Learning for Smart Sensing Applications)
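
To convey the flavor of description-length-based atom selection without the atomic-norm machinery, the sketch below chooses how many SVD components ("low-rank atoms") to keep by a crude two-part code: a Gaussian code length for the residual plus half a log per free parameter. Both modelling choices are our simplifications, not the paper's formulation.

```python
import numpy as np

def mdl_select_rank(Y, eps=1e-6):
    # Crude two-part MDL over candidate ranks r:
    #   data cost  = Gaussian code length of the residual energy
    #   model cost = 0.5*log(#entries) per free parameter of rank r
    # eps is a precision floor standing in for data quantization; it
    # keeps the code length from diverging as the residual vanishes.
    m, n = Y.shape
    s = np.linalg.svd(Y, compute_uv=False)
    total = float(s @ s)
    resid = total
    best_r, best_cost = 0, np.inf
    for r in range(len(s) + 1):
        if r > 0:
            resid -= s[r - 1] ** 2
        sigma2 = max(resid / (m * n), eps)
        data_cost = 0.5 * m * n * (np.log(2 * np.pi * sigma2) + 1.0)
        model_cost = 0.5 * r * (m + n + 1) * np.log(m * n)
        if data_cost + model_cost < best_cost:
            best_r, best_cost = r, data_cost + model_cost
    return best_r

rng = np.random.default_rng(1)
L = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 60))
print(mdl_select_rank(L + 0.05 * rng.standard_normal((80, 60))))  # ~3
```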

32 pages, 5506 KB  
Article
Time for Change: Implementation of Aksentijevic-Gibson Complexity in Psychology
by Aleksandar Aksentijevic, Anja Mihailovic and Dragutin T. Mihailovic
Symmetry 2020, 12(6), 948; https://doi.org/10.3390/sym12060948 - 4 Jun 2020
Cited by 5 | Viewed by 3712
Abstract
Given that complexity is critical for psychological processing, it is somewhat surprising that the field was dominated for a long time by probabilistic methods that focus on the quantitative aspects of the source/output. Although the more recent approaches based on the Minimum Description Length principle have produced interesting and useful models of psychological complexity, they have not directly defined the meaning and quantitative unit of complexity measurement. In contrast to these mathematical approaches are various ad hoc measures based on different aspects of structure, which can work well but suffer from the same problem. The present manuscript is composed of two self-sufficient yet related sections. In Section 1, we describe a complexity measure for binary strings that satisfies both of these conditions (Aksentijevic–Gibson complexity; AG). We test the measure on a number of classic studies employing both short and long strings and draw attention to an important feature, the complexity profile, which could be of interest in modelling the psychological processing of structure as well as in the analysis of strings of any length. In Section 2, we discuss different factors affecting the complexity of visual form and showcase a 2D generalization of AG complexity. In addition, we provide algorithms in R that compute AG complexity for binary strings and matrices and demonstrate their effectiveness on examples involving complexity judgments, symmetry perception, perceptual grouping, entropy, and elementary cellular automata. Finally, we enclose a repository of code, data, and stimuli for our examples in order to facilitate experimentation and application of the measure in sciences outside psychology.
(This article belongs to the Special Issue Symmetry of Perception and Behaviour)

16 pages, 424 KB  
Article
Change-Point Detection in Autoregressive Processes via the Cross-Entropy Method
by Lijing Ma and Georgy Sofronov
Algorithms 2020, 13(5), 128; https://doi.org/10.3390/a13050128 - 20 May 2020
Cited by 5 | Viewed by 5317
Abstract
A time series process often changes its underlying structure abruptly at some moment, so it is important to detect such change-points accurately. In this problem, called a change-point (or break-point) detection problem, we need a method that divides the original nonstationary time series into piecewise stationary segments. In this paper, we develop a flexible method to estimate the unknown number and locations of change-points in autoregressive time series. To find the optimal value of a performance function based on the Minimum Description Length principle, we develop a Cross-Entropy algorithm for the resulting combinatorial optimization problem. Our numerical experiments show that the proposed approach is very efficient in detecting multiple change-points when the underlying process has moderate to substantial variations in the mean and the autocorrelation coefficient. We also apply the proposed method to real data: the daily AUD/CNY exchange rate series from 2 January 2018 to 24 March 2020.
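
A minimal sketch of such an MDL performance function for a candidate segmentation is given below; the per-segment bookkeeping (Gaussian residual code plus a penalty per AR coefficient, in the spirit of Auto-PARM-style criteria) is our assumption rather than the authors' exact formula. The Cross-Entropy method would then search over `breaks` to minimize this score.

```python
import numpy as np

def ar_rss(x, p):
    # Least-squares AR(p) fit; returns the residual sum of squares.
    if len(x) <= 2 * p + 1:
        return np.inf
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2))

def segmentation_mdl(x, breaks, p=1):
    # Two-part code length (nats) of a piecewise-AR(p) model:
    # change-point locations + per-segment parameters + residuals.
    n = len(x)
    bounds = [0, *sorted(breaks), n]
    cost = len(breaks) * np.log(n)           # encode each break position
    for a, b in zip(bounds, bounds[1:]):
        rss = ar_rss(x[a:b], p)
        nj = (b - a) - p                     # effective sample size
        cost += 0.5 * nj * np.log(rss / nj)  # Gaussian residual code
        cost += 0.5 * (p + 1) * np.log(nj)   # p AR coefficients + variance
    return cost

rng = np.random.default_rng(2)
x = np.concatenate([rng.standard_normal(200),       # white noise
                    5 + rng.standard_normal(200)])  # mean shift
print(segmentation_mdl(x, [200]), "<", segmentation_mdl(x, []))
```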

41 pages, 534 KB  
Article
Generalised Uncertainty Relations for Angular Momentum and Spin in Quantum Geometry
by Matthew J. Lake, Marek Miller and Shi-Dong Liang
Universe 2020, 6(4), 56; https://doi.org/10.3390/universe6040056 - 19 Apr 2020
Cited by 34 | Viewed by 4054
Abstract
We derive generalised uncertainty relations (GURs) for orbital angular momentum and spin in the recently proposed smeared-space model of quantum geometry. The model implements a minimum length and a minimum linear momentum and recovers both the generalised uncertainty principle (GUP) and the extended uncertainty principle (EUP), previously proposed in the quantum gravity literature, within a single formalism. In this paper, we investigate the consequences of these results for particles with extrinsic and intrinsic angular momentum and obtain generalisations of the canonical so(3) and su(2) algebras. We find that, although SO(3) symmetry is preserved on three-dimensional slices of an enlarged phase space, corresponding to a superposition of background geometries, individual subcomponents of the generalised generators obey nontrivial subalgebras. These give rise to GURs for orbital angular momentum while leaving the canonical commutation relations intact except for a simple rescaling, ħ → ħ + β. The value of the new parameter, β ≈ ħ × 10⁻⁶¹, is determined by the ratio of the dark energy density to the Planck density, and its existence is required by the presence of both minimum length and momentum uncertainties. Here, we assume the former to be of the order of the Planck length and the latter to be of the order of the de Sitter momentum ~ħ√Λ, where Λ is the cosmological constant, which is consistent with the existence of a finite cosmological horizon. In the smeared-space model, ħ and β are interpreted as the quantisation scales for matter and geometry, respectively, and a quantum state vector is associated with the spatial background. We show that this also gives rise to a rescaled Lie algebra for generalised spin operators, together with associated subalgebras that are analogous to those for orbital angular momentum. Remarkably, consistency of the algebraic structure requires the quantum state associated with a flat background to be fermionic, with spin eigenvalues ±β/2. Finally, the modified spin algebra leads to GURs for spin measurements. The potential implications of these results for cosmology and high-energy physics, and for the description of spin and angular momentum in relativistic theories of quantum gravity, including dark energy, are briefly discussed.
(This article belongs to the Special Issue Rotation Effects in Relativity)
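
Schematically, the rescaling stated in the abstract acts on the angular-momentum algebra as follows; this is our notational paraphrase, and the nontrivial subalgebras of the individual subcomponents are not captured by it.

```latex
% Notational paraphrase of the rescaling stated in the abstract:
[\hat{J}_i, \hat{J}_j] = i\,(\hbar + \beta)\,\epsilon_{ijk}\,\hat{J}_k,
\qquad
\beta \simeq \hbar \times 10^{-61},
\qquad
\Delta p_{\min} \sim \hbar\sqrt{\Lambda}.
```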
24 pages, 1942 KB  
Article
Detecting Metachanges in Data Streams from the Viewpoint of the MDL Principle
by Shintaro Fukushima and Kenji Yamanishi
Entropy 2019, 21(12), 1134; https://doi.org/10.3390/e21121134 - 20 Nov 2019
Cited by 4 | Viewed by 4830
Abstract
This paper addresses the issue of how we can detect changes of changes, which we call metachanges, in data streams. A metachange refers to a change in the patterns of when and how changes occur, referred to as "metachanges along time" and "metachanges along state", respectively. Metachanges along time mean that the intervals between change points vary significantly, whereas metachanges along state mean that the magnitude of the changes varies. It is practically important to detect metachanges because they may be early warning signals of important events. This paper introduces a novel notion of metachange statistics as a measure of the degree of a metachange. The key idea is to integrate metachanges along both time and state in terms of "code length" according to the minimum description length (MDL) principle. We develop an online metachange detection algorithm (MCD) based on these statistics and apply it to data streams. With synthetic datasets, we demonstrate that MCD detects metachanges earlier and more accurately than existing methods. With real datasets, we demonstrate that MCD can lead to the discovery of important events that might be overlooked by conventional change detection methods.
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
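
As a toy illustration of the "metachanges along time" idea, one can compare the code length of the inter-change intervals under a single model against two models split at a candidate point; the geometric interval model and the single-split comparison below are our simplifications, not the MCD statistics.

```python
import numpy as np

def geometric_code_length(intervals):
    # Code length (nats) of positive integer inter-change intervals
    # under the maximum-likelihood geometric distribution.
    t = np.asarray(intervals, dtype=float)
    p = min(1.0 / t.mean(), 1.0 - 1e-9)
    return float(-np.sum(np.log(p) + (t - 1.0) * np.log(1.0 - p)))

def metachange_along_time_score(intervals, split):
    # Positive when encoding the intervals with two geometric models
    # (before/after `split`) is cheaper than with one, even after
    # paying log(n) nats to encode the split point.
    one = geometric_code_length(intervals)
    two = (geometric_code_length(intervals[:split])
           + geometric_code_length(intervals[split:]))
    return one - two - np.log(len(intervals))

intervals = [20, 22, 19, 21, 20, 2, 3, 2, 3, 2]   # changes speed up halfway
print(metachange_along_time_score(intervals, 5))  # positive: intervals shorten
```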

12 pages, 453 KB  
Article
Model Selection for Non-Negative Tensor Factorization with Minimum Description Length
by Yunhui Fu, Shin Matsushima and Kenji Yamanishi
Entropy 2019, 21(7), 632; https://doi.org/10.3390/e21070632 - 27 Jun 2019
Cited by 3 | Viewed by 5023
Abstract
Non-negative tensor factorization (NTF) is a widely used multi-way analysis approach that factorizes a high-order non-negative data tensor into several non-negative factor matrices. In NTF, the non-negative rank has to be predetermined to specify the model, and it greatly influences the factorized matrices. However, its value is conventionally determined by specialists' insights or by trial and error. This paper proposes a novel rank selection criterion for NTF on the basis of the minimum description length (MDL) principle. Our methodology is unique in that (1) we apply the MDL principle to tensor slices to overcome a problem caused by the imbalance between the number of elements in a data tensor and that in the factor matrices, and (2) we employ the normalized maximum likelihood (NML) code length for histogram densities. We use synthetic and real data to empirically demonstrate that our method outperforms other criteria in terms of accuracy in estimating true ranks and in completing missing values. We further show that our method can produce ranks suitable for knowledge discovery.
(This article belongs to the Special Issue Information-Theoretical Methods in Data Mining)
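
The rank-selection loop can be caricatured in a few lines. The sketch below uses scikit-learn's NMF as a matrix stand-in for NTF and a crude Gaussian two-part code in place of the paper's NML code for histogram densities; both substitutions are ours.

```python
import numpy as np
from sklearn.decomposition import NMF

def crude_mdl_rank(V, ranks, eps=1e-9):
    # For each candidate rank r: fit NMF, then score with a two-part code,
    # Gaussian residual cost plus 0.5*log(#entries) per free parameter.
    n, m = V.shape
    best_rank, best_score = None, np.inf
    for r in ranks:
        model = NMF(n_components=r, init="nndsvda", max_iter=500)
        W = model.fit_transform(V)
        resid = V - W @ model.components_
        sigma2 = max(float(resid.var()), eps)
        data_cost = 0.5 * V.size * (np.log(2 * np.pi * sigma2) + 1.0)
        model_cost = 0.5 * r * (n + m) * np.log(V.size)
        if data_cost + model_cost < best_score:
            best_rank, best_score = r, data_cost + model_cost
    return best_rank

rng = np.random.default_rng(3)
V = rng.random((100, 3)) @ rng.random((3, 80)) + 0.01 * rng.random((100, 80))
print(crude_mdl_rank(V, range(1, 8)))  # expected: about 3
```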

14 pages, 4467 KB  
Article
Real-Time Recursive Fingerprint Radio Map Creation Algorithm Combining Wi-Fi and Geomagnetism
by Ju-Hyeon Seong and Dong-Hoan Seo
Sensors 2018, 18(10), 3390; https://doi.org/10.3390/s18103390 - 10 Oct 2018
Cited by 3 | Viewed by 4039
Abstract
Fingerprinting is a typical indoor-positioning approach: it measures the strength of wireless signals and creates a radio map; using this radio map, the position is estimated through comparisons with the received signal strength measured in real time. The radio map has a direct effect on positioning performance; therefore, it should be designed accurately and managed efficiently according to the type of wireless signal, the amount of space, and the wireless-signal density. This paper proposes a real-time recursive radio map creation algorithm that combines Wi-Fi and geomagnetism. The proposed method automatically recreates the radio map using geomagnetic radio-map dual processing (GRDP), which reduces the time required to create it. It also reduces the size of the radio map by actively optimizing its dimensions using an entropy-based minimum description length principle (MDLP) method. Experimental results in an actual building show that the proposed system achieves a map creation time similar to that of a system using a Wi-Fi-based radio map. Geomagnetic radio maps exhibiting over 80% positioning accuracy were created, and the dimensions of the radio map combining the two signals were reduced by 23.81% compared to the initially prepared radio map. The dimensions vary according to the wireless signal state and are automatically reduced in different environments.
(This article belongs to the Special Issue Sensor Fusion and Novel Technologies in Positioning and Navigation)
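
The entropy-based MDLP idea referenced here is commonly traced to the Fayyad–Irani stopping rule: a candidate split of the data is kept only if its information gain pays for the cost of describing the split. Below is a generic sketch of that standard rule, not the GRDP-specific variant.

```python
import numpy as np

def entropy_bits(labels):
    # Shannon entropy (bits) of a label vector.
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mdlp_accepts_split(y, y_left, y_right):
    # Fayyad-Irani MDLP: accept a binary split iff the information gain
    # exceeds the code length needed to describe the split itself.
    n = len(y)
    k = len(np.unique(y))
    k1, k2 = len(np.unique(y_left)), len(np.unique(y_right))
    ent, e1, e2 = entropy_bits(y), entropy_bits(y_left), entropy_bits(y_right)
    gain = ent - (len(y_left) / n) * e1 - (len(y_right) / n) * e2
    delta = np.log2(3.0 ** k - 2.0) - (k * ent - k1 * e1 - k2 * e2)
    return gain > (np.log2(n - 1.0) + delta) / n

y = np.array([0] * 50 + [1] * 50)             # labels, e.g., map grid cells
print(mdlp_accepts_split(y, y[:50], y[50:]))  # perfect split: True
```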
