Entropy, Volume 23, Issue 9 (September 2021) – 139 articles

Cover Story: The Schrödinger bridge problem (SBP) finds the most likely stochastic evolution between two probability distributions given a prior stochastic evolution. Beyond its applications in the natural sciences, this problem is important in machine learning, for tasks such as dataset alignment and hypothesis testing. While the theory behind this problem is relatively mature, scalable numerical recipes for estimating the Schrödinger bridge remain an active area of research. We prove an equivalence between the SBP and maximum likelihood estimation, enabling the direct application of successful machine learning techniques. We propose a numerical procedure to estimate SBPs using Gaussian process regression and demonstrate the practical use of our approach in numerical simulations and experiments.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 4532 KiB  
Article
Analysis of Korean Peninsula Earthquake Network Based on Event Shuffling and Network Shuffling
by Seungsik Min and Gyuchang Lim
Entropy 2021, 23(9), 1236; https://doi.org/10.3390/e23091236 - 21 Sep 2021
Cited by 1 | Viewed by 2264
Abstract
In this work, a Korean peninsula earthquake network, constructed via event-sequential linking known as the Abe–Suzuki method, was investigated in terms of network properties. A significance test for these network properties was performed via comparisons with those of two random networks, constructed from two approaches, namely event (sequence) shuffling and network (matrix) shuffling. The Abe–Suzuki earthquake network differs clearly from the two random networks. However, the two shuffled networks exhibited completely different functional forms, and for some network properties, one shuffled dataset yields values significantly higher than the actual data while the other yields lower values. For most cases, the event-shuffled network showed a functional similarity to the real network, but with different exponents/parameters. This result strongly suggests that the Korean peninsula earthquake network has a spatiotemporal causal relation. Additionally, the Korean peninsula network properties are mostly similar to those found in previous studies on the US and Japan. Further, the Korean earthquake network showed strong linearity in a specific range of spatial resolution, namely 0.20°–0.80°, implying that macroscopic properties of the Korean earthquake network are highly regular in this range of resolution. Full article
(This article belongs to the Special Issue Complex Systems Time Series Analysis and Modeling for Geoscience)
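As a rough illustration of the event-shuffling comparison described above, here is a minimal Python sketch, assuming earthquake events have been reduced to a sequence of spatial grid-cell IDs (the toy catalogue and the networkx representation are illustrative choices, not the authors' code):

    import random
    import networkx as nx

    def build_abe_suzuki_network(cells):
        """Link each event's grid cell to the next event's cell (Abe-Suzuki linking)."""
        g = nx.DiGraph()
        for a, b in zip(cells, cells[1:]):
            if a != b:  # self-transitions are commonly discarded
                w = g.get_edge_data(a, b, {"weight": 0})["weight"]
                g.add_edge(a, b, weight=w + 1)
        return g

    def event_shuffled_network(cells, seed=0):
        """Null model: permute the event order, destroying temporal causality
        while preserving each cell's total event count."""
        shuffled = list(cells)
        random.Random(seed).shuffle(shuffled)
        return build_abe_suzuki_network(shuffled)

    events = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]  # toy catalogue of cell IDs
    real = build_abe_suzuki_network(events)
    null = event_shuffled_network(events)
    print(real.number_of_edges(), null.number_of_edges())

The network (matrix) shuffling null model would instead permute the entries of the adjacency matrix itself, preserving the weight distribution but destroying the link structure.
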
21 pages, 4579 KiB  
Article
Dynamic Robustness of Open-Source Project Knowledge Collaborative Network Based on Opinion Leader Identification
by Shaojuan Lei, Xiaodong Zhang and Suhui Liu
Entropy 2021, 23(9), 1235; https://doi.org/10.3390/e23091235 - 21 Sep 2021
Cited by 5 | Viewed by 2297
Abstract
A large amount of semantic content is generated during designer collaboration in open-source projects (OSPs). Based on the characteristics of knowledge collaboration behavior in OSPs, we constructed a directed, weighted, semantic-based knowledge collaborative network. Four social network analysis indexes were created to identify the key opinion leader nodes in the network using the entropy weight and TOPSIS methods. Further, three degradation modes were designed for (1) the collaborative behavior of opinion leaders, (2) main knowledge dissemination behavior, and (3) main knowledge contribution behavior. For the degradation mode of the collaborative behavior of opinion leaders, we considered the propagation characteristics of opinion leaders to other nodes and created a susceptible–infected–removed (SIR) propagation model of the influence of opinion leaders' behaviors. Finally, based on empirical data from the Local Motors open-source vehicle design community, a dynamic robustness analysis experiment was carried out. The results showed that the robustness of the constructed network varied across degradation modes: the network was least robust to degradation of the opinion leaders' collaborative behavior, followed by degradation of the main knowledge dissemination behavior and the main knowledge contribution behavior, and most robust to degradation of random behavior. Our method reveals the influence of the degradation of the collaborative behavior of different types of nodes on the robustness of the network. This could be used to formulate management strategies for the open-source design community, thus promoting the stable development of OSPs. Full article
(This article belongs to the Special Issue Analysis and Applications of Complex Social Networks)
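A minimal sketch of the entropy-weight TOPSIS ranking step used to score candidate opinion leaders, assuming four benefit-type centrality indexes per node (the index matrix here is random; the paper's exact indexes are not reproduced):

    import numpy as np

    def entropy_weight_topsis(X):
        """Rank nodes (rows) on several benefit-type indexes (columns)
        via entropy weights and TOPSIS closeness."""
        P = X / X.sum(axis=0)                       # column-normalised proportions
        P = np.where(P <= 0, 1e-12, P)              # guard against log(0)
        k = 1.0 / np.log(X.shape[0])
        e = -k * (P * np.log(P)).sum(axis=0)        # Shannon entropy per index
        w = (1 - e) / (1 - e).sum()                 # entropy weights
        V = w * X / np.linalg.norm(X, axis=0)       # weighted, vector-normalised
        best, worst = V.max(axis=0), V.min(axis=0)  # ideal / anti-ideal solutions
        d_best = np.linalg.norm(V - best, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        return d_worst / (d_best + d_worst)         # higher = stronger leader

    scores = entropy_weight_topsis(np.random.rand(10, 4))  # 10 nodes, 4 indexes
    print(np.argsort(scores)[::-1])                 # candidate opinion leaders first
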
16 pages, 902 KiB  
Article
The Impact of the COVID-19 Pandemic on the Unpredictable Dynamics of the Cryptocurrency Market
by Kyungwon Kim and Minhyuk Lee
Entropy 2021, 23(9), 1234; https://doi.org/10.3390/e23091234 - 20 Sep 2021
Cited by 15 | Viewed by 4241
Abstract
The global economy is under great shock again in 2020 due to the COVID-19 pandemic, not long after the global financial crisis of 2008. We therefore investigate the evolution of the complexity of the cryptocurrency market and analyze its characteristics from the past bull market in 2017 to the present COVID-19 pandemic. To confirm the evolving complexity of the cryptocurrency market, three general complexity analyses based on nonlinear measures were used: approximate entropy (ApEn), sample entropy (SampEn), and Lempel–Ziv complexity (LZ). We analyzed the market complexity/unpredictability of 43 cryptocurrency prices that have been trading until recently. In addition, three non-parametric tests suitable for comparing non-normal distributions were used to cross-check the results quantitatively. Finally, using sliding time window analysis, we observed changes in the complexity of the cryptocurrency market in response to events such as the COVID-19 outbreak and vaccination. This study is the first to confirm the complexity/unpredictability of the cryptocurrency market from the bull market to the COVID-19 pandemic outbreak. We find that the ApEn, SampEn, and LZ metrics of the individual markets show different patterns and therefore do not support a generalized COVID-19 effect on complexity. However, market unpredictability has been increasing through the ongoing health crisis. Full article
(This article belongs to the Special Issue Modeling and Forecasting of Rare and Extreme Events)
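For readers unfamiliar with the entropy measures named above, here is a self-contained sketch of sample entropy on a synthetic log-return series (the parameters m = 2 and r = 0.2 times the standard deviation are conventional defaults, not necessarily the paper's settings):

    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """SampEn(m, r): -ln of the conditional probability that sequences
        similar for m points remain similar at m + 1 (self-matches excluded)."""
        x = np.asarray(x, dtype=float)
        r = r_factor * x.std()
        n_templates = len(x) - m                 # same template count for both lengths

        def count_pairs(dim):
            t = np.array([x[i:i + dim] for i in range(n_templates)])
            c = 0
            for i in range(n_templates - 1):
                c += (np.abs(t[i + 1:] - t[i]).max(axis=1) <= r).sum()  # Chebyshev distance
            return c

        B, A = count_pairs(m), count_pairs(m + 1)
        return -np.log(A / B) if A and B else np.inf

    prices = np.random.lognormal(0.0, 0.05, 1000).cumprod()  # synthetic price path
    print(sample_entropy(np.diff(np.log(prices))))
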
19 pages, 21461 KiB  
Article
Significance Support Vector Regression for Image Denoising
by Bing Sun and Xiaofeng Liu
Entropy 2021, 23(9), 1233; https://doi.org/10.3390/e23091233 - 20 Sep 2021
Cited by 5 | Viewed by 2196
Abstract
As an extension of the support vector machine, support vector regression (SVR) plays a significant role in image denoising. However, because it ignores the spatial distribution information of noisy pixels, the conventional SVR denoising model faces the bottleneck of overfitting under serious noise interference, which degrades the denoising effect. To address this problem, this paper proposes a significance measurement framework for evaluating sample significance with sample spatial density information. Based on an analysis of the penalty factor in SVR, significance SVR (SSVR) is presented by assigning a sample significance factor to each sample. The refined penalty factor makes SSVR less susceptible to outliers in the solution process. This overcomes the drawback of SVR imposing the same penalty factor on all samples, which leads the objective function to pay too much attention to outliers and thus produces poorer regression results. As an example of the proposed framework applied to image denoising, a cutoff distance-based significance factor is instantiated to estimate the samples' importance in SSVR. Experiments conducted on three image datasets showed that SSVR performs excellently compared to best-in-class image denoising techniques in terms of a commonly used denoising evaluation index and observed visual quality. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)
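A hedged sketch of the core idea, using scikit-learn's per-sample weights as a stand-in for the paper's refined per-sample penalty factor (the cutoff-distance density below is an illustrative significance factor; the paper's exact formulation may differ):

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.svm import SVR

    def cutoff_density_significance(Z, dc=None):
        """Per-sample significance from local density within a cutoff distance dc
        (illustrative stand-in for the paper's cutoff distance-based factor)."""
        D = cdist(Z, Z)
        if dc is None:
            dc = np.percentile(D[D > 0], 2)      # a common cutoff heuristic
        density = (D < dc).sum(axis=1) - 1       # neighbours within dc, minus self
        return density / max(density.max(), 1)   # scale to [0, 1]

    X = np.random.rand(200, 9)                   # e.g. flattened 3x3 pixel patches
    y = X.mean(axis=1) + 0.05 * np.random.randn(200)
    y[::25] += 2.0                               # inject outliers
    w = cutoff_density_significance(np.column_stack([X, y]))
    model = SVR(C=10.0).fit(X, y, sample_weight=w)  # sample_weight rescales C per sample

Low-density (isolated) samples receive small weights, so outliers contribute less to the effective penalty, which is the mechanism the abstract describes.
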
18 pages, 2716 KiB  
Article
Enhanced Directed Random Walk for the Identification of Breast Cancer Prognostic Markers from Multiclass Expression Data
by Hui Wen Nies, Mohd Saberi Mohamad, Zalmiyah Zakaria, Weng Howe Chan, Muhammad Akmal Remli and Yong Hui Nies
Entropy 2021, 23(9), 1232; https://doi.org/10.3390/e23091232 - 20 Sep 2021
Cited by 4 | Viewed by 3041
Abstract
Artificial intelligence in healthcare can potentially identify the probability of contracting a particular disease more accurately. There are five common molecular subtypes of breast cancer: luminal A, luminal B, basal, ERBB2, and normal-like. Previous investigations showed that pathway-based microarray analysis can help identify prognostic markers from gene expressions. For example, directed random walk (DRW) can infer greater reproducibility power of pathway activity between two classes of samples with higher classification accuracy. However, most existing methods (including DRW) ignore the characteristics of different cancer subtypes and consider all pathways to contribute equally to the analysis. Therefore, an enhanced DRW (eDRW+) is proposed to identify breast cancer prognostic markers from multiclass expression data. An improved weight strategy using one-way ANOVA (F-test) and pathway selection based on the greatest reproducibility power are proposed in eDRW+. The experimental results show that eDRW+ outperforms other methods in terms of AUC. In addition, eDRW+ identifies 294 gene markers and 45 pathway markers from the breast cancer datasets with better AUC. These prognostic markers (pathway markers and gene markers) can therefore be used to identify drug targets and to distinguish cancer subtypes with clinically distinct outcomes. Full article
(This article belongs to the Special Issue Networks and Systems in Bioinformatics)
19 pages, 12900 KiB  
Article
Real Sample Consistency Regularization for GANs
by Xiangde Zhang and Jian Zhang
Entropy 2021, 23(9), 1231; https://doi.org/10.3390/e23091231 - 19 Sep 2021
Viewed by 2310
Abstract
Mode collapse has always been a fundamental problem in generative adversarial networks. The recently proposed Zero Gradient Penalty (0GP) regularization can alleviate mode collapse, but it exacerbates the discriminator's misjudgment problem, that is, the discriminator judging some generated samples to be more real than real samples. In actual training, the discriminator directs the generated samples toward samples with higher discriminator outputs. Serious misjudgment by the discriminator causes the generator to produce unnatural images and reduces generation quality. This paper proposes Real Sample Consistency (RSC) regularization. In the training process, we randomly divide the real samples into two parts and minimize the loss between the discriminator's outputs on these two parts, forcing the discriminator to output the same value for all real samples. We analyze the effectiveness of our method. The experimental results show that our method alleviates the discriminator's misjudgment and performs better, with a more stable training process, than 0GP regularization. Our real sample consistency regularization improved the FID score for the conditional generation of Fake-As-Real GAN (FARGAN) from 14.28 to 9.8 on CIFAR-10. Our RSC regularization improved the FID score from 23.42 to 17.14 on CIFAR-100 and from 53.79 to 46.92 on ImageNet2012. Our RSC regularization improved the average distance between the generated and real samples from 0.028 to 0.025 on synthetic data. The losses of the generator and discriminator in a standard GAN with our regularization were close to the theoretical losses and remained stable during the training process. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
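A minimal PyTorch sketch of the consistency penalty as described in the abstract (the squared gap between the mean outputs on the two random halves is one plausible reading of the loss; the paper's exact form and weighting are assumptions here):

    import torch

    def rsc_loss(disc, real_batch, lam=1.0):
        """Real Sample Consistency penalty: split the real batch in two at
        random and penalise the gap between the discriminator's outputs."""
        perm = torch.randperm(real_batch.size(0))
        half = real_batch.size(0) // 2
        d1 = disc(real_batch[perm[:half]])
        d2 = disc(real_batch[perm[half:2 * half]])
        return lam * (d1.mean() - d2.mean()).pow(2)

    # inside the usual GAN loop, with d_loss the standard adversarial loss:
    #   d_total = d_loss + rsc_loss(D, real_images)
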
18 pages, 3840 KiB  
Article
Model for Risk Calculation and Reliability Comparison of Level Crossings
by Pamela Ercegovac, Gordan Stojić, Miloš Kopić, Željko Stević, Feta Sinani and Ilija Tanackov
Entropy 2021, 23(9), 1230; https://doi.org/10.3390/e23091230 - 19 Sep 2021
Cited by 1 | Viewed by 2757
Abstract
No country in the world is rich enough to remove all level crossings or replace them with grade-separated crossings so as to completely eliminate the possibility of accidents at the intersections of railways and roads. In the Republic of Serbia alone, the largest number of accidents occur at passive crossings, which make up three-quarters of the total number of crossings. It is therefore necessary to continually prioritize the level crossings at which the level of safety should be raised, primarily by analyzing the risk and reliability at all level crossings. This paper presents a model that enables this. The maximal risk of a level crossing is calculated under the conditions of generating maximum entropy in a virtual operating mode. The basis of the model is a heterogeneous queuing system. Maximum entropy is based on the mandatory application of an exponential distribution. The system is Markovian and is solved by a standard analytical approach. The basic input parameters for the calculation of the maximal risk are the geometric characteristics of the level crossing and the intensities and structure of the flows of road and railway vehicles. The real risk is based on statistical records of accidents and flow intensities. The exact reliability of the level crossing is calculated from the ratio of real to maximal risk, which enables further comparison among crossings in order to raise the level of safety; this is the basic idea of this paper. Full article
15 pages, 681 KiB  
Article
Study of Concentrated Short Fiber Suspensions in Flows, Using Topological Data Analysis
by Rabih Mezher, Jack Arayro, Nicolas Hascoet and Francisco Chinesta
Entropy 2021, 23(9), 1229; https://doi.org/10.3390/e23091229 - 18 Sep 2021
Cited by 2 | Viewed by 1869
Abstract
The present study addresses the discrete simulation of the flow of concentrated suspensions encountered in forming processes involving reinforced polymers, and more particularly the statistical characterization of the effects of intense fiber interaction, occurring during the development of flow-induced orientation, on the trajectory of the fibers' geometrical centers. The number and intensity of interactions depend on the fiber volume fraction and the applied shear, which should affect the stochastic trajectory. Topological data analysis (TDA) is applied to the geometrical center trajectories of the simulated fibers to show that a characteristic pattern can be extracted depending on the flow conditions (concentration and shear rate). This work proves that TDA allows capturing and extracting, from the so-called persistence image, a pattern that characterizes the dependence of the fiber trajectory on the flow kinematics and the suspension concentration. Such a pattern could be used for classification and modeling purposes, in rheology or in process monitoring. Full article
(This article belongs to the Special Issue Statistical Fluid Dynamics)
19 pages, 11460 KiB  
Article
Energy Loss and Radial Force Variation Caused by Impeller Trimming in a Double-Suction Centrifugal Pump
by Qifan Deng, Ji Pei, Wenjie Wang, Bin Lin, Chenying Zhang and Jiantao Zhao
Entropy 2021, 23(9), 1228; https://doi.org/10.3390/e23091228 - 18 Sep 2021
Cited by 21 | Viewed by 2808
Abstract
Impeller trimming is an economical method for broadening the range of application of a given pump, but it can degrade operational stability and efficiency. In this study, entropy production theory was utilized to analyze the variation of energy loss caused by impeller trimming, based on computational fluid dynamics. Experiments and numerical simulations were conducted to investigate the energy loss and fluid-induced radial forces. The pump's performance seriously deteriorated after impeller trimming, especially under overload conditions. Energy loss in the volute decreased after trimming under part-load conditions but increased under overload conditions, a phenomenon that prevents empirical equations from accurately predicting the pump head. According to entropy production theory, high-energy dissipation regions were mainly located in the volute discharge diffuser under overload conditions because of flow separation and the mixing of the main flow with the stalled fluid. The increased incidence angle at the volute's tongue after impeller trimming resulted in more serious flow separation and higher energy loss. Furthermore, the radial forces and their fluctuation amplitudes decreased under all the investigated conditions. The horizontal components of the radial forces were in all cases much larger than the vertical components. Full article
(This article belongs to the Special Issue Entropy in Computational Fluid Dynamics III)
19 pages, 10363 KiB  
Article
A High-Efficiency Spectral Method for Two-Dimensional Ocean Acoustic Propagation Calculations
by Xian Ma, Yongxian Wang, Xiaoqian Zhu, Wei Liu, Wenbin Xiao and Qiang Lan
Entropy 2021, 23(9), 1227; https://doi.org/10.3390/e23091227 - 18 Sep 2021
Cited by 7 | Viewed by 2157
Abstract
The accuracy and efficiency of sound field calculations are central concerns in hydroacoustics. Recently, one-dimensional spectral methods have shown high precision when solving the sound field, but they can handle only simplified models of underwater acoustic propagation, so their range of application is limited. It is therefore necessary to solve the two-dimensional Helmholtz equation of ocean acoustic propagation directly. Here, we use the Chebyshev–Galerkin and Chebyshev collocation methods to solve the two-dimensional Helmholtz model equation. The Chebyshev collocation method is then used to model ocean acoustic propagation because, unlike the Galerkin method, the collocation method does not require stringent boundary conditions. Compared with the mature Kraken program, the Chebyshev collocation method exhibits higher numerical accuracy. However, its shortcoming is that its computational efficiency cannot satisfy the requirements of real-time applications due to the large number of calculations involved. We therefore implemented a parallel version of the collocation method, which effectively improves computational efficiency. Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics II)
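To make the collocation idea concrete, here is a one-dimensional sketch with a standard Chebyshev differentiation matrix (the 1D Helmholtz problem, wavenumber, and forcing below are illustrative; the paper solves the 2D equation, which follows the same pattern via Kronecker products of these matrices):

    import numpy as np

    def cheb(n):
        """Chebyshev points and differentiation matrix (Trefethen, Spectral Methods)."""
        x = np.cos(np.pi * np.arange(n + 1) / n)
        c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
        dX = x[:, None] - x[None, :]
        D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
        return D - np.diag(D.sum(axis=1)), x

    n, k = 32, 9.0                                # illustrative resolution and wavenumber
    D, x = cheb(n)
    A = D @ D + k**2 * np.eye(n + 1)              # u'' + k^2 u = f on [-1, 1]
    rhs = np.exp(-10 * x**2)
    A[0, :] = 0; A[0, 0] = 1; rhs[0] = 0          # Dirichlet boundary rows
    A[-1, :] = 0; A[-1, -1] = 1; rhs[-1] = 0
    u = np.linalg.solve(A, rhs)                   # collocation solution at the nodes
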
19 pages, 296 KiB  
Article
Not All Structure and Dynamics Are Equal
by Garrett Mindt
Entropy 2021, 23(9), 1226; https://doi.org/10.3390/e23091226 - 18 Sep 2021
Cited by 3 | Viewed by 3072
Abstract
The hard problem of consciousness has been a perennially vexing issue for the study of consciousness, particularly in giving a scientific and naturalized account of phenomenal experience. At the heart of the hard problem is an often-overlooked argument: the structure and dynamics (S&D) argument. In this essay, I will argue that we have good reason to suspect that the S&D argument given by David Chalmers rests on a limited conception of S&D properties, what I here call extrinsic structure and dynamics. I argue that if we take recent insights from the complexity sciences and from recent developments in the Integrated Information Theory (IIT) of consciousness, we get a more nuanced picture of S&D, specifically, a class of properties I call intrinsic structure and dynamics. This, I think, opens the door to a broader class of properties with which we might naturally and scientifically explain phenomenal experience, as well as the relationship between syntactic, semantic, and intrinsic notions of information. I argue that Chalmers' characterization of structure and dynamics in his S&D argument paints them with too broad a brush and fails to account for important nuances, especially when considering a system's intrinsic properties. Ultimately, my hope is to vindicate a certain species of explanation from the S&D argument, and by extension dissolve the hard problem of consciousness at its core, by showing that not all structure and dynamics are equal. Full article
(This article belongs to the Special Issue Integrated Information Theory and Consciousness)
15 pages, 6592 KiB  
Article
A Novel Dehumidification Strategy to Reduce Liquid Fraction and Condensation Loss in Steam Turbines
by Yan Yang, Haoping Peng and Chuang Wen
Entropy 2021, 23(9), 1225; https://doi.org/10.3390/e23091225 - 18 Sep 2021
Cited by 9 | Viewed by 2741
Abstract
Massive numbers of droplets can be generated to form two-phase flow in steam turbines, leading to blade erosion and reducing the reliability of the components. A condensing two-phase flow model was developed to assess the flow structure and losses, considering the nonequilibrium condensation phenomenon due to the high expansion rate of the transonic flow in linear blade cascades. A novel dehumidification strategy was proposed by introducing turbulent disturbances on the suction side. The results show that the Wilson point of the nonequilibrium condensation process was delayed by increasing the inlet superheat at the entrance of the blade cascade. With an increase in the inlet superheat of 25 K, the liquid fraction and condensation loss were significantly reduced, by 79% and 73%, respectively. The newly designed turbine blades not only kept the liquid-phase region remarkably far from the blade walls but also reduced the averaged liquid fraction by 28.1% and the condensation loss by 47.5% compared to the original geometry. The results provide insight into the formation and evaporation of the condensed droplets inside steam turbines. Full article
(This article belongs to the Special Issue Non-equilibrium Phase Transitions)
8 pages, 1228 KiB  
Article
Influence of Source Parameters on the Polarization Properties of Beams for Practical Free-Space Quantum Key Distribution
by Tianyi Wu, Qing Pan, Chushan Lin, Lei Shi, Shanghong Zhao, Yijun Zhang, Xingyu Wang and Chen Dong
Entropy 2021, 23(9), 1224; https://doi.org/10.3390/e23091224 - 17 Sep 2021
Viewed by 2211
Abstract
Polarization encoding has been extensively used in quantum key distribution (QKD) implementations along free-space links. However, the calculation model used to characterize channel transmittance and the quantum bit error rate (QBER) for free-space QKD has not been systematically studied. As a result, the misalignment error is often assumed to equal a fixed value, which is not theoretically rigorous. In this paper, we investigate the depolarization and rotation of the signal beams resulting from the spatially dependent polarization effects of curved optics used in an off-axis configuration, where decoherence can be characterized by the Huygens–Fresnel principle and the cross-spectral density matrix (CSDM). The transmittance and misalignment error in practical free-space QKD can thus be estimated using this method. Furthermore, numerical simulations clearly show that the polarization effects caused by turbulence can be effectively mitigated when good beam coherence is maintained. Full article
(This article belongs to the Special Issue Practical Quantum Communication)
14 pages, 2078 KiB  
Article
Continuous-Variable Quantum Secret Sharing Based on Thermal Terahertz Sources in Inter-Satellite Wireless Links
by Chengji Liu, Changhua Zhu, Zhihui Li, Min Nie, Hong Yang and Changxing Pei
Entropy 2021, 23(9), 1223; https://doi.org/10.3390/e23091223 - 17 Sep 2021
Cited by 9 | Viewed by 2520
Abstract
We propose a continuous-variable quantum secret sharing (CVQSS) scheme based on thermal terahertz (THz) sources in inter-satellite wireless links (THz-CVQSS). In this scheme, each player first performs local Gaussian modulation to prepare a thermal THz state and then couples it into a circulating spatiotemporal mode using a highly asymmetric beam splitter. Finally, the dealer measures the quadrature components of the received spatiotemporal mode by performing heterodyne detection, in order to share secure keys with all the players of a group. This design ensures that the key can be recovered only through the cooperative knowledge of the whole group: neither a single player nor any subset of the players in the group can recover the key correctly. We analyze both the security and the performance of THz-CVQSS in inter-satellite links. The results show that a long-distance inter-satellite THz-CVQSS scheme with multiple players is feasible. This work provides an effective way to build an inter-satellite quantum communication network. Full article
(This article belongs to the Special Issue Practical Quantum Communication)
23 pages, 1728 KiB  
Article
A Novel Conflict Management Method Based on Uncertainty of Evidence and Reinforcement Learning for Multi-Sensor Information Fusion
by Fanghui Huang, Yu Zhang, Ziqing Wang and Xinyang Deng
Entropy 2021, 23(9), 1222; https://doi.org/10.3390/e23091222 - 17 Sep 2021
Cited by 6 | Viewed by 2684
Abstract
Dempster–Shafer theory (DST), which is widely used in information fusion, can process uncertain information without prior information; however, when the evidence to be combined is highly conflicting, it may lead to counter-intuitive results. Moreover, existing methods are not strong enough to process real-time and online conflicting evidence. To solve these problems, a novel information fusion method is proposed in this paper that combines the uncertainty of the evidence with reinforcement learning (RL). Specifically, we consider two uncertainty degrees: the uncertainty of the original basic probability assignment (BPA) and the uncertainty of its negation. Deng entropy is used to measure the uncertainty of the BPAs, and the two uncertainty degrees serve as the criterion for measuring information quality. Adaptive conflict processing is then performed by RL in combination with the two uncertainty degrees, after which Dempster's combination rule (DCR) is applied to achieve multi-sensor information fusion. Finally, a decision scheme based on the correlation coefficient is used to make the decision. The proposed method not only realizes adaptive management of conflicting evidence but also improves the accuracy of multi-sensor information fusion and reduces information loss. Numerical examples verify the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Recent Progress of Deng Entropy)
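Deng entropy, the uncertainty measure used above, has a closed form: E_d(m) = -Σ_A m(A) log2[ m(A) / (2^|A| - 1) ], summing over the focal elements A of the BPA. A minimal sketch:

    import math

    def deng_entropy(bpa):
        """Deng entropy of a basic probability assignment; focal elements
        are frozensets mapped to masses summing to 1."""
        return -sum(m * math.log2(m / (2 ** len(A) - 1))
                    for A, m in bpa.items() if m > 0)

    bpa = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
    print(deng_entropy(bpa))  # multi-element focal sets contribute extra nonspecificity
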
16 pages, 7861 KiB  
Article
A Three-Dimensional Infinite Collapse Map with Image Encryption
by Wenhao Yan, Zijing Jiang, Xin Huang and Qun Ding
Entropy 2021, 23(9), 1221; https://doi.org/10.3390/e23091221 - 17 Sep 2021
Cited by 6 | Viewed by 2270
Abstract
Chaos is considered a natural candidate for encryption systems owing to its sensitivity to initial values and the unpredictability of its orbit. However, some encryption schemes based on low-dimensional chaotic systems exhibit various security defects due to their relatively simple dynamic characteristics. In order to enhance the dynamic behavior of chaotic maps, a novel 3D infinite collapse map (3D-ICM) is proposed, and the performance of the chaotic system is analyzed from three aspects: the phase diagram, the Lyapunov exponent, and sample entropy. The results show that the chaotic system has complex chaotic behavior and high complexity. Furthermore, an image encryption scheme based on 3D-ICM is presented, whose security analysis indicates that it can resist brute-force attacks, correlation analysis, and differential attacks, so it has a high security level. Full article
33 pages, 4557 KiB  
Article
Stochastic Chaos and Markov Blankets
by Karl Friston, Conor Heins, Kai Ueltzhöffer, Lancelot Da Costa and Thomas Parr
Entropy 2021, 23(9), 1220; https://doi.org/10.3390/e23091220 - 17 Sep 2021
Cited by 69 | Viewed by 7924
Abstract
In this treatment of random dynamical systems, we consider the existence—and identification—of conditional independencies at nonequilibrium steady-state. These independencies underwrite a particular partition of states, in which internal states are statistically secluded from external states by blanket states. The existence of such partitions has interesting implications for the information geometry of internal states. In brief, this geometry can be read as a physics of sentience, where internal states look as if they are inferring external states. However, the existence of such partitions—and the functional form of the underlying densities—have yet to be established. Here, using the Lorenz system as the basis of stochastic chaos, we leverage the Helmholtz decomposition—and polynomial expansions—to parameterise the steady-state density in terms of surprisal or self-information. We then show how Markov blankets can be identified—using the accompanying Hessian—to characterise the coupling between internal and external states in terms of a generalised synchrony or synchronisation of chaos. We conclude by suggesting that this kind of synchronisation may provide a mathematical basis for an elemental form of (autonomous or active) sentience in biology. Full article
17 pages, 315 KiB  
Article
Hyperbolically Symmetric Versions of Lemaitre–Tolman–Bondi Spacetimes
by Luis Herrera, Alicia Di Prisco and Justo Ospino
Entropy 2021, 23(9), 1219; https://doi.org/10.3390/e23091219 - 16 Sep 2021
Cited by 16 | Viewed by 1786
Abstract
We study fluid distributions endowed with hyperbolic symmetry, which share many common features with Lemaitre–Tolman–Bondi (LTB) solutions (e.g., they are geodesic, shearing, and nonconformally flat, and the energy density is inhomogeneous). As such, they may be considered as hyperbolic symmetric versions of LTB, with spherical symmetry replaced by hyperbolic symmetry. We start by considering pure dust models, and afterwards, we extend our analysis to dissipative models with anisotropic pressure. In the former case, the complexity factor is necessarily nonvanishing, whereas in the latter cases, models with a vanishing complexity factor are found. The remarkable fact is that all solutions satisfying the vanishing complexity factor condition are necessarily nondissipative and satisfy the stiff equation of state. Full article
(This article belongs to the Special Issue Complexity of Self-Gravitating Systems)
16 pages, 1032 KiB  
Article
Learning in Convolutional Neural Networks Accelerated by Transfer Entropy
by Adrian Moldovan, Angel Caţaron and Răzvan Andonie
Entropy 2021, 23(9), 1218; https://doi.org/10.3390/e23091218 - 16 Sep 2021
Cited by 2 | Viewed by 3320
Abstract
Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, the TE can be used to quantify the relationships between pairs of neuron outputs located in different layers. Our focus is on how to include the TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures which integrates TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed; on the flip side, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, to achieve a reasonable computational overhead–accuracy trade-off, it is efficient to consider only the inter-neural information transfer of the neuron pairs between the last two fully connected layers. The TE acts as a smoothing factor, generating stability and becoming active only periodically, not after processing each input sample. Therefore, the TE in our model can be considered a slowly changing meta-parameter. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
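For orientation, here is a plug-in estimate of transfer entropy on binned series (a generic estimator sketch; the paper's estimator for neuron-output pairs and its binning are not specified here):

    import numpy as np
    from collections import Counter

    def transfer_entropy(x, y, bins=8):
        """Plug-in estimate of TE(X -> Y) = I(Y_{t+1}; X_t | Y_t) on binned series."""
        xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
        yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
        n = len(x) - 1
        p3 = {k: v / n for k, v in Counter(zip(yd[1:], yd[:-1], xd[:-1])).items()}
        def marg(keep):
            out = Counter()
            for k, p in p3.items():
                out[tuple(k[i] for i in keep)] += p
            return out
        p_yy, p_yx, p_y = marg([0, 1]), marg([1, 2]), marg([1])
        return sum(p * np.log2(p * p_y[(k[1],)] / (p_yy[k[:2]] * p_yx[k[1:]]))
                   for k, p in p3.items())

    x = np.random.randn(5000)
    y = np.roll(x, 1) + 0.5 * np.random.randn(5000)       # y is driven by past x
    print(transfer_entropy(x, y), transfer_entropy(y, x))  # first value should dominate
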
22 pages, 4663 KiB  
Article
Fault Feature Extraction for Reciprocating Compressors Based on Underdetermined Blind Source Separation
by Jindong Wang, Xin Chen, Haiyang Zhao, Yanyang Li and Zujian Liu
Entropy 2021, 23(9), 1217; https://doi.org/10.3390/e23091217 - 15 Sep 2021
Cited by 7 | Viewed by 2568
Abstract
In practical engineering applications, the vibration signals collected by sensors often contain outliers, which seriously affect the accuracy with which source signals can be separated from the observed signals. Mixing matrix estimation is crucial to underdetermined blind source separation (UBSS), as it determines the accuracy of source signal recovery. Therefore, a two-stage clustering method combining hierarchical clustering and K-means is proposed in this paper to improve the reliability of the estimated mixing matrix. The proposed method addresses the two major problems of the K-means algorithm: the random selection of initial cluster centers and the algorithm's sensitivity to outliers. Firstly, the observed signals are clustered by hierarchical clustering to obtain the cluster centers. Secondly, the cosine distance is used to eliminate the outliers deviating from the cluster centers. Then, the initial cluster centers are obtained by calculating the mean value of each remaining cluster. Finally, the mixing matrix is estimated with the improved K-means, and the sources are recovered using the least squares method. Simulation and reciprocating compressor fault experiments demonstrate the effectiveness of the proposed method. Full article
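A condensed sketch of the two-stage idea on unit-normalized observation vectors (the Ward linkage, cosine threshold, and cluster-mean seeding are illustrative choices consistent with, but not copied from, the paper):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.cluster import KMeans

    def two_stage_mixing_estimate(obs, n_sources, cos_thresh=0.9):
        """Hierarchical clustering seeds K-means after cosine-distance outlier
        removal; cluster centres estimate the mixing-matrix columns."""
        labels = fcluster(linkage(obs, method="ward"), t=n_sources, criterion="maxclust")
        centers = []
        for c in range(1, n_sources + 1):
            pts = obs[labels == c]
            ctr = pts.mean(axis=0)
            cos = pts @ ctr / (np.linalg.norm(pts, axis=1) * np.linalg.norm(ctr))
            kept = pts[cos > cos_thresh]          # drop outliers far from the centre
            centers.append(kept.mean(axis=0) if len(kept) else ctr)
        km = KMeans(n_clusters=n_sources, init=np.array(centers), n_init=1).fit(obs)
        A = km.cluster_centers_.T                 # columns approximate mixing vectors
        return A / np.linalg.norm(A, axis=0)

    obs = np.random.randn(500, 2)                 # stand-in for normalised mixtures
    obs /= np.linalg.norm(obs, axis=1, keepdims=True)
    print(two_stage_mixing_estimate(obs, n_sources=3))
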
18 pages, 906 KiB  
Article
An Efficient Partition-Based Approach to Identify and Scatter Multiple Relevant Spreaders in Complex Networks
by Jedidiah Yanez-Sierra, Arturo Diaz-Perez and Victor Sosa-Sosa
Entropy 2021, 23(9), 1216; https://doi.org/10.3390/e23091216 - 15 Sep 2021
Cited by 3 | Viewed by 2203
Abstract
One of the main problems in graph analysis is the correct identification of relevant nodes for spreading processes. Spreaders are crucial for accelerating/hindering information diffusion, increasing product exposure, controlling diseases and rumors, and more. Correct identification of spreaders in graph analysis is a relevant task for optimally using the network structure and ensuring a more efficient flow of information. Additionally, network topology has proven to play a relevant role in spreading processes. In this sense, most of the existing methods based on local, global, or hybrid centrality measures select relevant nodes based only on their ranking values, without intentionally considering their distribution over the graph. In this paper, we propose a simple yet effective method that takes advantage of the underlying graph topology to guarantee that the selected nodes are not only relevant but also well scattered. Our proposal also suggests how to define the number of spreaders to select. The approach is composed of two phases: first, graph partitioning; and second, identification and distribution of relevant nodes. We tested our approach by applying the SIR spreading model over nine real complex networks. The experimental results showed higher influence and better scattering for the set of relevant nodes identified by our approach than for several reference algorithms, including degree, closeness, betweenness, VoteRank, HybridRank, and IKS. The results further showed an improvement in the propagation influence value when combining our distribution strategy with classical metrics, such as degree, outperforming computationally more complex strategies. Moreover, our proposal has good computational complexity and can be applied to large-scale networks. Full article
(This article belongs to the Special Issue Analysis and Applications of Complex Social Networks)
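A minimal sketch of the SIR influence evaluation used to score seed sets (the infection and recovery rates are illustrative; the paper's experimental settings are not reproduced):

    import random
    import networkx as nx

    def sir_spread(g, seeds, beta=0.1, gamma=1.0, seed=0):
        """Single SIR run; the final number of recovered nodes is the
        usual influence score for a seed set."""
        rng = random.Random(seed)
        infected, recovered = set(seeds), set()
        while infected:
            new_infected = set()
            for u in infected:
                for v in g.neighbors(u):
                    if v not in infected and v not in recovered and rng.random() < beta:
                        new_infected.add(v)
            for u in list(infected):
                if rng.random() < gamma:       # recovery step
                    recovered.add(u)
                    infected.discard(u)
            infected |= new_infected - recovered
        return len(recovered)

    g = nx.barabasi_albert_graph(1000, 3)
    top_degree = sorted(g, key=g.degree, reverse=True)[:5]  # a classical baseline seed set
    print(sir_spread(g, top_degree))

Averaging many runs with different random seeds gives the influence estimate against which seed-selection strategies are compared.
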
12 pages, 1359 KiB  
Article
The Information Conveyed in a SPAC's Offering
by Gil Cohen and Mahmoud Qadan
Entropy 2021, 23(9), 1215; https://doi.org/10.3390/e23091215 - 15 Sep 2021
Cited by 1 | Viewed by 3795
Abstract
The popularity of SPACs (Special Purpose Acquisition Companies) has grown dramatically in recent years as a substitute for the traditional IPO (Initial Public Offer). We modeled the average annual return for SPAC investors and found that this financial tool produced an annual return of 17.3%. We then constructed an information model that examined a SPAC's excess returns during the 60 days after a potential merger or acquisition had been announced. We found that the announcement had a major impact on the SPAC's share price over the 60 days, delivering on average 0.69% daily excess returns over the IPO portfolio and 31.6% cumulative excess returns for the entire period. Relative to IPOs, the cumulative excess returns of SPACs rose dramatically in the next few days after the potential merger or acquisition announcement until the 26th day. They then declined but rose again until the 48th day after the announcement. Finally, the SPAC's structure reduced the investors' risk. Thus, if investors buy a SPAC stock immediately after a potential merger or acquisition has been announced and hold it for 48 days, they can reap substantial short-term returns. Full article
(This article belongs to the Special Issue Information Theory on Financial Markets and Financial Innovations)
27 pages, 6123 KiB  
Article
Geometric Characteristics of the Wasserstein Metric on SPD(n) and Its Applications on Data Processing
by Yihao Luo, Shiqiang Zhang, Yueqi Cao and Huafei Sun
Entropy 2021, 23(9), 1214; https://doi.org/10.3390/e23091214 - 14 Sep 2021
Cited by 1 | Viewed by 2723
Abstract
The Wasserstein distance, especially among symmetric positive-definite matrices, has broad and deep influences on the development of artificial intelligence (AI) and other branches of computer science. In this paper, by employing the Wasserstein metric on SPD(n), we obtain computationally feasible expressions for some geometric quantities, including geodesics, exponential maps, the Riemannian connection, Jacobi fields and curvatures, particularly the scalar curvature. Furthermore, we discuss the behavior of geodesics and prove that the manifold is globally geodesically convex. Finally, we design algorithms for point cloud denoising and edge detection in a noise-polluted image based on the Wasserstein curvature on SPD(n). The experimental results show the efficiency and robustness of our curvature-based methods. Full article
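The metric in question has a closed form for SPD matrices, d(A,B)² = tr A + tr B − 2 tr (A^{1/2} B A^{1/2})^{1/2}. A small numerical sketch:

    import numpy as np
    from scipy.linalg import sqrtm

    def bures_wasserstein(A, B):
        """Wasserstein (Bures) distance between SPD matrices A and B."""
        rA = sqrtm(A)
        d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(sqrtm(rA @ B @ rA))
        return float(np.sqrt(max(d2.real, 0.0)))  # clip tiny negative round-off

    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    B = np.array([[1.0, -0.2], [-0.2, 1.5]])
    print(bures_wasserstein(A, B))
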
18 pages, 9260 KiB  
Article
Coherence and Entropy of Credit Cycles across the Euro Area Candidate Countries
by Adina Criste, Iulia Lupu and Radu Lupu
Entropy 2021, 23(9), 1213; https://doi.org/10.3390/e23091213 - 14 Sep 2021
Cited by 2 | Viewed by 2054
Abstract
The pattern of financial cycles in the European Union has direct impacts on financial stability and economic sustainability in view of the adoption of the euro. The purpose of this article is to identify the degree of coherence of the credit cycles in the countries potentially seeking to adopt the euro with the credit cycle inside the Eurozone. We first estimate the credit cycles in the selected countries and in the euro area (at the aggregate level) and filter the series with the Hodrick–Prescott filter for the period 1999Q1–2020Q4. Based on these values, we compute indicators of credit cycle similarity and synchronicity in the selected countries, together with a set of entropy measures (block entropy, entropy rate, Bayesian entropy), to show the high degree of heterogeneity, noting that the global financial crisis changed the credit cycle patterns in some countries. Our novel approach provides analytical tools to support euro adoption decisions, showing how the coherence of credit cycles can be increased among European countries and how national macroprudential policies can be better coordinated, especially in light of the changes caused by the pandemic crisis. Full article
(This article belongs to the Special Issue Entropy-Based Applications in Economics, Finance, and Management)
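The cycle-extraction step can be reproduced generically with statsmodels (the series below is synthetic; the smoothing parameter is a modeling choice, with lamb=1600 the textbook quarterly value, while credit-cycle studies often use much larger values):

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter

    idx = pd.period_range("1999Q1", "2020Q4", freq="Q")
    credit = pd.Series(np.cumsum(np.random.randn(len(idx))) + 100.0, index=idx)
    cycle, trend = hpfilter(credit, lamb=1600)  # cycle feeds the similarity/entropy steps
    print(cycle.std(), trend.iloc[-1])
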
9 pages, 241 KiB  
Article
Causality in Discrete Time Physics Derived from Maupertuis Reduced Action Principle
by Roland Riek and Atanu Chatterjee
Entropy 2021, 23(9), 1212; https://doi.org/10.3390/e23091212 - 14 Sep 2021
Cited by 4 | Viewed by 2264
Abstract
Causality describes the process and consequences of an action: a cause has an effect. Causality is preserved in classical physics as well as in the special and general theories of relativity. Surprisingly, in none of these theories is causality, as a relationship between a cause and its effect, considered a law or a principle. Its existence in physics has even been challenged by prominent opponents, in part due to the time-symmetric nature of the physical laws. Using the reduced action and the least action principle of Maupertuis, along with a discrete dynamical time physics yielding an arrow of time, causality is defined as the partial spatial derivative of the reduced action; as such, it is position- and momentum-dependent and requires the presence of space. With this definition, the system evolves from one step to the next without the need of time, while (discrete) time can be reconstructed. Full article
(This article belongs to the Special Issue Time, Causality, and Entropy)
48 pages, 5580 KiB  
Article
Understanding Changes in the Topology and Geometry of Financial Market Correlations during a Market Crash
by Peter Tsung-Wen Yen, Kelin Xia and Siew Ann Cheong
Entropy 2021, 23(9), 1211; https://doi.org/10.3390/e23091211 - 14 Sep 2021
Cited by 12 | Viewed by 5544
Abstract
In econophysics, the achievements of information filtering methods over the past 20 years, such as the minimal spanning tree (MST) by Mantegna and the planar maximally filtered graph (PMFG) by Tumminello et al., should be celebrated. Here, we show how one can systematically improve upon this paradigm along two separate directions. First, we used topological data analysis (TDA) to extend the notions of nodes and links in networks to faces, tetrahedrons, or k-simplices in simplicial complexes. Second, we used the Ollivier-Ricci curvature (ORC) to acquire geometric information that cannot be provided by simple information filtering. In this sense, MSTs and PMFGs are but first steps to revealing the topological backbones of financial networks. This is something that TDA can elucidate more fully, following which the ORC can help us flesh out the geometry of financial networks. We applied these two approaches to a recent stock market crash in Taiwan and found that, beyond fusions and fissions, other non-fusion/fission processes such as cavitation, annihilation, rupture, healing, and puncture might also be important. We also successfully identified neck regions that emerged during the crash, based on their negative ORCs, and performed a case study on one such neck region. Full article
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
18 pages, 538 KiB  
Article
Mood Disorder Detection in Adolescents by Classification Trees, Random Forests and XGBoost in Presence of Missing Data
by Elzbieta Turska, Szymon Jurga and Jaroslaw Piskorski
Entropy 2021, 23(9), 1210; https://doi.org/10.3390/e23091210 - 14 Sep 2021
Cited by 8 | Viewed by 2656
Abstract
We apply tree-based classification algorithms, namely classification trees (with the rpart algorithm), random forests, and XGBoost, to detect mood disorder in a group of 2508 lower secondary school students. The dataset presents many challenges, the most important of which are the large amount of missing data and the heavy class imbalance (there are few severe mood disorder cases). We find that all the algorithms are specific, but only the rpart algorithm is sensitive, i.e., able to detect real cases of mood disorder. We conclude that this is because the rpart algorithm uses surrogate variables to handle missing data. The most important social-studies-related result is that adolescents' relationships with their parents are the single most important factor in developing mood disorders, far more important than other factors such as socio-economic status or school success. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing)
18 pages, 2712 KiB  
Article
Research on Driving Fatigue Alleviation Using Interesting Auditory Stimulation Based on VMD-MMSE
by Fuwang Wang, Bin Lu, Xiaogang Kang and Rongrong Fu
Entropy 2021, 23(9), 1209; https://doi.org/10.3390/e23091209 - 14 Sep 2021
Cited by 15 | Viewed by 3077
Abstract
The accurate detection and alleviation of driving fatigue are of great significance to traffic safety. In this study, we applied the modified multi-scale entropy (MMSE) approach, based on variational mode decomposition (VMD), to driving fatigue detection. Firstly, the VMD was used to decompose the EEG into multiple intrinsic mode functions (IMFs); then, the best IMFs and scale factors were selected using the least squares method (LSM). Finally, the MMSE features were extracted. Compared with the traditional sample entropy (SampEn), the VMD-MMSE method can identify the characteristics of driving fatigue more effectively. The VMD-MMSE characteristics, combined with a subjective questionnaire (SQ), were used to analyze the trends of driving fatigue under two driving modes: a normal driving mode and an interesting auditory stimulation mode. The results show that the interesting auditory stimulation method adopted in this paper, which simply involves playing interesting auditory content on the vehicle-mounted player, can effectively relieve driving fatigue. Unlike traditional fatigue-relieving methods, such as sleeping and drinking coffee, it can relieve fatigue in real time while the driver continues to drive normally. Full article
14 pages, 3441 KiB  
Article
Stochastic Analysis of Predator–Prey Models under Combined Gaussian and Poisson White Noise via Stochastic Averaging Method
by Wantao Jia, Yong Xu, Dongxi Li and Rongchun Hu
Entropy 2021, 23(9), 1208; https://doi.org/10.3390/e23091208 - 13 Sep 2021
Cited by 7 | Viewed by 2577
Abstract
In the present paper, the statistical responses of two special prey–predator-type ecosystem models excited by combined Gaussian and Poisson white noise are investigated by generalizing the stochastic averaging method. First, we unify the deterministic models for the two cases where prey are abundant and where the predator population is large, respectively. Then, under some natural assumptions of small perturbations and system parameters, the stochastic models are introduced. The stochastic averaging method is generalized to compute the statistical responses, described by stationary probability density functions (PDFs) and moments, for the population densities in the ecosystems using a perturbation technique. Based on these statistical responses, the effects of the ecosystem parameters and the noise parameters on the stationary PDFs and moments are discussed. Additionally, we calculate the Gaussian approximate solution to illustrate the effectiveness of the perturbation results. The results show that the larger the mean arrival rate, the smaller the difference between the perturbation solution and the Gaussian approximation. In addition, direct Monte Carlo simulation is performed to validate the above results. Full article
(This article belongs to the Topic Stochastic Models and Experiments in Ecology and Biology)
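A hedged sketch of the kind of direct Monte Carlo check mentioned at the end of the abstract: Euler–Maruyama integration of a Lotka–Volterra-style model driven by Gaussian and compound-Poisson noise (the drift, the noise entry points, and all parameter values are illustrative, not the paper's):

    import numpy as np

    def simulate(T=200.0, dt=0.01, lam=2.0, sigma=0.05, jump=0.02, seed=0):
        """Euler-Maruyama path of a Lotka-Volterra-style model with Gaussian
        white noise and compound-Poisson jumps on the prey equation."""
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        x, y = np.empty(n), np.empty(n)        # prey and predator densities
        x[0], y[0] = 1.0, 0.5
        for i in range(n - 1):
            dw = rng.normal(0.0, np.sqrt(dt))  # Gaussian white noise increment
            m = rng.poisson(lam * dt)          # number of Poisson arrivals
            dj = jump * rng.normal(size=m).sum() if m else 0.0
            x[i + 1] = max(x[i] + x[i] * (1 - y[i]) * dt + sigma * x[i] * dw + x[i] * dj, 0.0)
            y[i + 1] = max(y[i] + y[i] * (x[i] - 1) * dt, 0.0)
        return x, y

    x, y = simulate()
    print(x.mean(), y.mean())                  # compare against stationary moments
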
21 pages, 25871 KiB  
Article
Trajectory Planning of Robot Manipulator Based on RBF Neural Network
by Qisong Song, Shaobo Li, Qiang Bai, Jing Yang, Ansi Zhang, Xingxing Zhang and Longxuan Zhe
Entropy 2021, 23(9), 1207; https://doi.org/10.3390/e23091207 - 13 Sep 2021
Cited by 22 | Viewed by 4330
Abstract
Robot manipulator trajectory planning is one of the core robot technologies, and the design of controllers can improve the trajectory accuracy of manipulators. However, most controllers designed at this stage have not been able to effectively solve the nonlinearity and uncertainty problems of high-degree-of-freedom manipulators. In order to overcome these problems and improve the trajectory performance of high-degree-of-freedom manipulators, a manipulator trajectory planning method based on a radial basis function (RBF) neural network is proposed in this work. Firstly, a 6-DOF robot experimental platform was designed and built. Secondly, the overall manipulator trajectory planning framework was designed, which included manipulator kinematics and dynamics and a quintic polynomial interpolation algorithm. Then, an adaptive robust controller based on an RBF neural network was designed to deal with the nonlinearity and uncertainty problems, and Lyapunov theory was used to ensure the stability of the manipulator control system and the convergence of the tracking error. Finally, a simulation and an experiment were carried out to test the method. The simulation results showed that the proposed method improved the response and tracking performance to a certain extent, reduced the adjustment time and chattering, and ensured the smooth operation of the manipulator during trajectory planning. The experimental results verified the effectiveness and feasibility of the method proposed in this paper. Full article
(This article belongs to the Section Multidisciplinary Applications)
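The quintic interpolation step is standard and easy to sketch: six boundary conditions (position, velocity, and acceleration at both endpoints) fix the six polynomial coefficients per joint. A minimal sketch (the joint angle, duration, and sampling below are illustrative):

    import numpy as np

    def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
        """Coefficients of q(t) = sum_k c_k t^k satisfying position, velocity,
        and acceleration constraints at t = 0 and t = T."""
        M = np.array([
            [1, 0, 0,    0,       0,        0],
            [0, 1, 0,    0,       0,        0],
            [0, 0, 2,    0,       0,        0],
            [1, T, T**2, T**3,    T**4,     T**5],
            [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],
            [0, 0, 2,    6*T,     12*T**2,  20*T**3],
        ], dtype=float)
        return np.linalg.solve(M, np.array([q0, v0, a0, qf, vf, af], dtype=float))

    c = quintic_coeffs(q0=0.0, qf=np.pi / 2, T=2.0)  # one joint, 90 degrees in 2 s
    t = np.linspace(0.0, 2.0, 5)
    q = sum(c[k] * t**k for k in range(6))           # evaluate the polynomial
    print(np.round(q, 3))
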