Entropy, Volume 26, Issue 10 (October 2024) – 77 articles

Cover Story (view full-size image): O-information is an information-theoretic measure based on composite Shannon entropy measures that quantifies the balance between redundancy and synergy in systems of variables. However, estimations of O-information for discrete variables suffer from bias, which has not yet been fully addressed. This study investigates how sample size and the number of bins influence bias in O-information estimation. The results reveal that a small ratio of sample size to the number of bins causes a strong bias toward synergy in independent systems. Independent systems may thus potentially be falsely classified as synergistic. A bias correction method is proposed, offering partial improvement. Nonetheless, simulations of independent systems are needed to better understand and benchmark this bias. View this paper
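For readers who want to reproduce the effect described in the cover story, below is a minimal plug-in O-information estimator over binned discrete data, using the standard identity Ω(X) = (n−2)H(X₁,…,Xₙ) + Σⱼ[H(Xⱼ) − H(X₋ⱼ)]. The variable count, bin count, and sample sizes are illustrative assumptions, not the paper's experimental setup; for independent inputs the true Ω is 0, so a systematically negative output at small samples is exactly the synergy bias discussed above.

```python
import numpy as np

def plugin_entropy(X):
    # plug-in Shannon entropy (bits) of the rows of X (samples x variables)
    _, counts = np.unique(X, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def o_information(X):
    # Omega = (n - 2) * H(X) + sum_j [ H(X_j) - H(X_{-j}) ]
    n = X.shape[1]
    omega = (n - 2) * plugin_entropy(X)
    for j in range(n):
        omega += plugin_entropy(X[:, [j]]) - plugin_entropy(np.delete(X, j, axis=1))
    return omega

rng = np.random.default_rng(0)
n_vars, n_bins = 3, 8
for n_samples in (50, 500, 50000):
    X = rng.integers(0, n_bins, size=(n_samples, n_vars))
    # true Omega = 0 for independent variables; small samples bias it negative
    print(f"N = {n_samples:6d}  Omega_hat = {o_information(X):+.3f}")
```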
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 1727 KiB  
Article
Quantum-Like Approaches Unveil the Intrinsic Limits of Predictability in Compartmental Models
by José Alejandro Rojas-Venegas, Pablo Gallarta-Sáenz, Rafael G. Hurtado, Jesús Gómez-Gardeñes and David Soriano-Paños
Entropy 2024, 26(10), 888; https://doi.org/10.3390/e26100888 - 21 Oct 2024
Cited by 1 | Viewed by 1151
Abstract
Obtaining accurate forecasts for the evolution of epidemic outbreaks from deterministic compartmental models represents a major theoretical challenge. Recently, it has been shown that these models typically exhibit trajectory degeneracy, as different sets of epidemiological parameters yield comparable predictions at early stages of the outbreak but disparate future epidemic scenarios. In this study, we use the Doi–Peliti approach and extend the classical deterministic compartmental models to a quantum-like formalism to explore whether the uncertainty of epidemic forecasts is also shaped by the stochastic nature of epidemic processes. This approach allows us to obtain a probabilistic ensemble of trajectories, revealing that epidemic uncertainty is not uniform across time, being maximal around the epidemic peak and vanishing at both early and very late stages of the outbreak. Therefore, our results show that, independently of the models’ complexity, the stochasticity of contagion and recovery processes poses a natural constraint for the uncertainty of epidemic forecasts. Full article
13 pages, 2721 KiB  
Article
The Relationship Between Astronomical and Developmental Times Emerging in Modeling the Evolution of Agents
by Alexander O. Gusev and Leonid M. Martyushev
Entropy 2024, 26(10), 887; https://doi.org/10.3390/e26100887 - 21 Oct 2024
Cited by 1 | Viewed by 848
Abstract
The simplest evolutionary model for catching prey by an agent (predator) is considered. The simulation is performed on the basis of a software-emulated Intel i8080 processor. Maximizing the number of catches is chosen as the objective function. This function is associated with energy dissipation and developmental time. It is shown that during Darwinian evolution, agents with an initially random set of processor commands subsequently acquire a successful catching skill. It is found that in the process of evolution, a logarithmic relationship between astronomical and developmental times arises in agents. This result is important for the ideas available in the literature about the close connection of such concepts as time, Darwinian selection, and the maximization of entropy production. Full article
10 pages, 2101 KiB  
Article
Advanced Exergy-Based Optimization of a Polygeneration System with CO2 as Working Fluid
by Jing Luo, Qianxin Zhu and Tatiana Morosuk
Entropy 2024, 26(10), 886; https://doi.org/10.3390/e26100886 - 21 Oct 2024
Cited by 1 | Viewed by 877
Abstract
Using polygeneration systems is one of the most cost-effective ways to improve energy efficiency, which secures sustainable energy development and reduces environmental impacts. This paper investigates a polygeneration system powered by low- to medium-grade waste heat and using CO2 as a working fluid to simultaneously produce electric power, refrigeration, and heating capacities. The system is simulated in Aspen HYSYS® and evaluated by applying advanced exergy-based methods. With the split of exergy destruction and investment cost into avoidable and unavoidable parts, the avoidable part reveals the real improvement potential and priority of each component. Subsequently, an exergoeconomic graphical optimization is implemented at the component level to improve the system performance further. Optimization results and an engineering solution considering technical limitations are proposed. Compared to the base case, the optimized system's exergetic efficiency was improved by 15.4% and the average product cost was reduced by 7.1%, while the engineering solution shows an increase of 11.3% in system exergetic efficiency and a decrease of 8.5% in the average product cost. Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Industrial Energy Systems)
21 pages, 34834 KiB  
Article
A Multilayer Nonlinear Permutation Framework and Its Demonstration in Lightweight Image Encryption
by Cemile İnce, Kenan İnce and Davut Hanbay
Entropy 2024, 26(10), 885; https://doi.org/10.3390/e26100885 - 21 Oct 2024
Cited by 1 | Viewed by 1010
Abstract
As information systems become more widespread, data security becomes increasingly important. While traditional encryption methods provide effective protection against unauthorized access, they often struggle with multimedia data like images and videos. This necessitates specialized image encryption approaches. With the rise of mobile and Internet of Things (IoT) devices, lightweight image encryption algorithms are crucial for resource-constrained environments. These algorithms have applications in various domains, including medical imaging and surveillance systems. However, the biggest challenge for lightweight algorithms is balancing strong security with limited hardware resources. This work introduces a novel nonlinear matrix permutation approach applicable to both the confusion and diffusion phases of lightweight image encryption. The proposed method utilizes three different chaotic maps in harmony, namely a 2D Zaslavsky map, a 1D Chebyshev map, and a 1D logistic map, to generate number sequences for permutation and diffusion. Evaluation using various metrics confirms the method's efficiency and its potential as a robust encryption framework. The proposed scheme was tested with 14 color images from the SIPI dataset. This approach achieves high performance by processing each image in just one iteration. The developed scheme offers a significant advantage over its alternatives, with an average NPCR of 99.6122%, a UACI of 33.4690%, and an information entropy of 7.9993 for the 14 test images, with an average correlation value as low as 0.0006 and a vast key space of 2^800. The evaluation results demonstrated that the proposed approach is a viable and effective alternative for lightweight image encryption. Full article
(This article belongs to the Section Complexity)
17 pages, 1607 KiB  
Article
Assessment of Nuclear Fusion Reaction Spontaneity via Engineering Thermodynamics
by Silvano Tosti
Entropy 2024, 26(10), 884; https://doi.org/10.3390/e26100884 - 21 Oct 2024
Viewed by 1037
Abstract
This work recalls the basic thermodynamics of chemical processes to introduce the evaluation of the spontaneity of nuclear reactions. The application and definition of the thermodynamic state functions of nuclear processes are described by focusing on their contribution to the chemical potential. The variation of the nuclear binding potentials involved in a nuclear reaction affects the chemical potential through a modification of the internal energy and of the other state functions. These energy changes are related to the mass defect between the reactants and products of the nuclear reaction and are of the order of magnitude of 1 MeV per particle, about six orders of magnitude larger than those of chemical reactions. In particular, this work assesses the Gibbs free energy change of fusion reactions by assuming the Q-value as the nuclear contribution to the chemical potential and by calculating the entropy through the Sackur–Tetrode expression. Then, the role of entropy in fusion processes is re-examined by demonstrating that the previous spontaneity analyses, which assume a perfect gas of DT atoms in the initial state of the fusion reactions, are conservative and lead to assessing a more negative ΔG than in the real case (ionized gas). As a final point, this paper examines the thermodynamic spontaneity of exothermic processes with a negative change of entropy and discusses the different thermodynamic spontaneity exhibited by DT fusion processes when conducted in a controlled or uncontrolled way. Full article
(This article belongs to the Special Issue Trends in the Second Law of Thermodynamics)
14 pages, 578 KiB  
Article
A Synergistic Perspective on Multivariate Computation and Causality in Complex Systems
by Thomas F. Varley
Entropy 2024, 26(10), 883; https://doi.org/10.3390/e26100883 - 21 Oct 2024
Cited by 1 | Viewed by 1748
Abstract
What does it mean for a complex system to “compute” or perform “computations”? Intuitively, we can understand complex “computation” as occurring when a system’s state is a function of multiple inputs (potentially including its own past state). Here, we discuss how computational processes in complex systems can be generally studied using the concept of statistical synergy, which is information about an output that can only be learned when the joint state of all inputs is known. Building on prior work, we show that this approach naturally leads to a link between multivariate information theory and topics in causal inference, specifically, the phenomenon of causal colliders. We begin by showing how Berkson’s paradox implies a higher-order, synergistic interaction between multidimensional inputs and outputs. We then discuss how causal structure learning can refine and orient analyses of synergies in empirical data, and when empirical synergies meaningfully reflect computation versus when they may be spurious. We end by proposing that this conceptual link between synergy, causal colliders, and computation can serve as a foundation on which to build a mathematically rich general theory of computation in complex systems. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
18 pages, 41079 KiB  
Article
Research on Target Image Classification in Low-Light Night Vision
by Yanfeng Li, Yongbiao Luo, Yingjian Zheng, Guiqian Liu and Jiekai Gong
Entropy 2024, 26(10), 882; https://doi.org/10.3390/e26100882 - 21 Oct 2024
Cited by 2 | Viewed by 1638
Abstract
In extremely dark conditions, low-light imaging may offer spectators a rich visual experience, which is important for both military and civic applications. However, the images taken in ultra-micro light environments usually have inherent defects such as extremely low brightness and contrast, a high noise level, and serious loss of scene details and colors, which leads to great challenges in the research of low-light image and object detection and classification. The low-light night vision images studied in this work have an excessively dim overall picture and carry very little feature information. Three algorithms, histogram equalization (HE), adaptive histogram equalization (AHE), and contrast-limited adaptive histogram equalization (CLAHE), were used to enhance and highlight the images. The effectiveness of these image enhancement methods was evaluated using metrics such as the peak signal-to-noise ratio and mean square error, and CLAHE was selected after comparison. The target images include vehicles, people, license plates, and other objects. The gray-level co-occurrence matrix (GLCM) was used to extract the texture features of the enhanced images, and the extracted image texture features were used as input to construct a backpropagation (BP) neural network classification model. Then, low-light image classification models were developed based on VGG16 and ResNet50 convolutional neural networks combined with low-light image enhancement algorithms. The experimental results show that the overall classification accuracy of the VGG16 convolutional neural network model is 92.1%. Compared with the BP and ResNet50 neural network models, the classification accuracy was increased by 4.5% and 2.3%, respectively, demonstrating its effectiveness in classifying low-light night vision targets. Full article
26 pages, 728 KiB  
Article
Adaptive Privacy-Preserving Coded Computing with Hierarchical Task Partitioning
by Qicheng Zeng, Zhaojun Nan and Sheng Zhou
Entropy 2024, 26(10), 881; https://doi.org/10.3390/e26100881 - 21 Oct 2024
Viewed by 952
Abstract
Coded computing is recognized as a promising solution to address the privacy leakage problem and the straggling effect in distributed computing. This technique leverages coding theory to recover computation tasks using results from a subset of workers. In this paper, we propose the adaptive privacy-preserving coded computing (APCC) strategy, designed to be applicable to various types of computation tasks, including polynomial and non-polynomial functions, and to adaptively provide accurate or approximated results. We prove the optimality of APCC in terms of encoding rate, defined as the ratio between the computation loads of tasks before and after encoding, based on the optimal recovery threshold of Lagrange Coded Computing. We demonstrate that APCC guarantees information-theoretical data privacy preservation. Mitigation of the straggling effect in APCC is achieved through hierarchical task partitioning and task cancellation, which further reduces computation delays by enabling straggling workers to return partial results of assigned tasks, compared to conventional coded computing strategies. The hierarchical task partitioning problems are formulated as mixed-integer nonlinear programming (MINLP) problems with the objective of minimizing task completion delay. We propose a low-complexity maximum value descent (MVD) algorithm to optimally solve these problems. The simulation results show that APCC can reduce the task completion delay by a range of 20.3% to 47.5% when compared to other state-of-the-art benchmarks. Full article
(This article belongs to the Special Issue Intelligent Information Processing and Coding for B5G Communications)
14 pages, 1238 KiB  
Article
Rate Optimization of Intelligent Reflecting Surface-Assisted Coal Mine Wireless Communication Systems
by Yang Liu, Zhao Yang, Bin Wang and Yanhong Xu
Entropy 2024, 26(10), 880; https://doi.org/10.3390/e26100880 - 20 Oct 2024
Cited by 1 | Viewed by 1057
Abstract
This paper proposes a three-step joint rate optimization method for intelligent reflecting surface (IRS)-assisted coal mine wireless communication systems. Different from terrestrial IRS-assisted communication scenarios, in coal mines, IRSs can be installed flexibly on the tops of rectangular tunnels to address the issues of signals being blocked and interfered with by mining equipment. Therefore, it is necessary to optimize the IRS deployment position, the transmit power and IRS phase shifts to achieve the maximum effective achievable rate at user stations equipped with the proposed system. However, due to the complex channel models of coal mines, the optimization problem of IRS deployment position is non-convex. To solve this problem, two auxiliary variables along with logarithmic operations and Taylor approximation are introduced. On this basis, a three-step joint rate optimization involving the transmit power, IRS phase shifts and IRS deployment position is proposed to maximize the effective achievable rates at the user station. The simulation results show that compared with other rate optimization schemes, the effective achievable rates at the user station using the proposed joint rate optimization scheme can be improved by approximately 12.32% to 54.17% for different parameter configurations. It is also pointed out that the deployment position of the IRS can converge to the same optimal position independent of the initial deployment position. Moreover, we investigate the effects of the roughness of the tunnel walls in a coal mine on the effective achievable rates at the user station, and the simulation results indicate that the proposed three-step joint rate optimization scheme performs better in the coal mine scenario regardless of the roughness. Full article
11 pages, 418 KiB  
Article
Fast and Accurate Numerical Integration of the Langevin Equation with Multiplicative Gaussian White Noise
by Mykhaylo Evstigneev and Deniz Kacmazer
Entropy 2024, 26(10), 879; https://doi.org/10.3390/e26100879 - 20 Oct 2024
Cited by 2 | Viewed by 1249
Abstract
A univariate stochastic system driven by multiplicative Gaussian white noise is considered. The standard method for simulating its Langevin equation of motion involves incrementing the system’s state variable by a biased Gaussian random number at each time step. It is shown that the efficiency of such simulations can be significantly enhanced by incorporating the skewness of the distribution of the updated state variable. A new algorithm based on this principle is introduced, and its superior performance is demonstrated using a model of free diffusion of a Brownian particle with a friction coefficient that decreases exponentially with the kinetic energy. The proposed simulation technique proves to be accurate over time steps that are an order of magnitude longer than those required by standard algorithms. The model used to test the new numerical technique is known to exhibit a transition from normal diffusion to superdiffusion as the environmental temperature rises above a certain critical value. A simple empirical formula for the time-dependent diffusion coefficient, which covers both diffusion regimes, is introduced, and its accuracy is confirmed through comparison with the simulation results. Full article
(This article belongs to the Section Statistical Physics)
17 pages, 8264 KiB  
Article
RTINet: A Lightweight and High-Performance Railway Turnout Identification Network Based on Semantic Segmentation
by Dehua Wei, Wenjun Zhang, Haijun Li, Yuxing Jiang, Yong Xian and Jiangli Deng
Entropy 2024, 26(10), 878; https://doi.org/10.3390/e26100878 - 19 Oct 2024
Viewed by 1398
Abstract
To lighten the workload of train drivers and enhance railway transportation safety, a novel and intelligent method for railway turnout identification is investigated based on semantic segmentation. More specifically, a railway turnout scene perception (RTSP) dataset is constructed and annotated manually in this paper, wherein the innovative concept of side rails is introduced as part of the labeling process. After that, based on the work of Deeplabv3+, combined with a lightweight design and an attention mechanism, a railway turnout identification network (RTINet) is proposed. Firstly, in consideration of the need for rapid response in the deployment of the identification model on high-speed trains, this paper selects the MobileNetV2 network, renowned for its suitability for lightweight deployment, as the backbone of the RTINet model. Secondly, to reduce the computational load of the model while ensuring accuracy, depth-separable convolutions are employed to replace the standard convolutions within the network architecture. Thirdly, the bottleneck attention module (BAM) is integrated into the model to enhance position and feature information perception, bolster the robustness and quality of the segmentation masks generated, and ensure that the outcomes are characterized by precision and reliability. Finally, to address the issue of foreground and background imbalance in turnout recognition, the Dice loss function is incorporated into the network training procedure. Both the quantitative and qualitative experimental results demonstrate that the proposed method is feasible for railway turnout identification, and it outperformed the compared baseline models. In particular, the RTINet was able to achieve a remarkable mIoU of 85.94%, coupled with an inference speed of 78 fps on the customized dataset. Furthermore, the effectiveness of each optimized component of the proposed RTINet is verified by an additional ablation study. Full article
(This article belongs to the Section Multidisciplinary Applications)
12 pages, 4146 KiB  
Article
Infidelity Analysis of Digital Counter-Diabatic Driving in Simple Two-Qubit System
by Ouyang Lei
Entropy 2024, 26(10), 877; https://doi.org/10.3390/e26100877 - 19 Oct 2024
Viewed by 994
Abstract
Digitized counter-diabatic (CD) optimization algorithms have been proposed and extensively studied to enhance performance in quantum computing by accelerating adiabatic processes while minimizing energy transitions. While adding approximate counter-diabatic terms can initially introduce adiabatic errors that decrease over time, Trotter errors from decomposition approximation persist. On the other hand, increasing the high-order nested commutators for CD terms may improve adiabatic errors but could also introduce additional Trotter errors. In this article, we examine the two-qubit model to explore the interplay between approximate CD, adiabatic errors, Trotter errors, coefficients, and commutators. Through these analyses, we aim to gain insights into optimizing these factors for better fidelity, a shallower circuit depth, and a reduced gate number in near-term gate-based quantum computing. Full article
(This article belongs to the Special Issue Quantum Computing for Complex Dynamics, 2nd Edition)
20 pages, 9500 KiB  
Article
Image Captioning Based on Semantic Scenes
by Fengzhi Zhao, Zhezhou Yu, Tao Wang and Yi Lv
Entropy 2024, 26(10), 876; https://doi.org/10.3390/e26100876 - 18 Oct 2024
Cited by 1 | Viewed by 2125
Abstract
With the development of artificial intelligence and deep learning technologies, image captioning has become an important research direction at the intersection of computer vision and natural language processing. The purpose of image captioning is to generate corresponding natural language descriptions by understanding the content of images. This technology has broad application prospects in fields such as image retrieval, autonomous driving, and visual question answering. Currently, many researchers have proposed region-based image captioning methods. These methods generate captions by extracting features from different regions of an image. However, they often rely on local features of the image and overlook the understanding of the overall scene, leading to captions that lack coherence and accuracy when dealing with complex scenes. Additionally, image captioning methods are unable to extract complete semantic information from visual data, which may lead to captions with biases and deficiencies. Due to these reasons, existing methods struggle to generate comprehensive and accurate captions. To fill this gap, we propose the Semantic Scenes Encoder (SSE) for image captioning. It first extracts a scene graph from the image and integrates it into the encoding of the image information. Then, it extracts a semantic graph from the captions and preserves semantic information through a learnable attention mechanism, which we refer to as the dictionary. During the generation of captions, it combines the encoded information of the image and the learned semantic information to generate complete and accurate captions. To verify the effectiveness of the SSE, we tested the model on the MSCOCO dataset. The experimental results show that the SSE improves the overall quality of the captions. The improvement in scores across multiple evaluation metrics further demonstrates that the SSE possesses significant advantages when processing identical images. Full article
(This article belongs to the Collection Entropy in Image Analysis)
21 pages, 3593 KiB  
Article
Solving the B-SAT Problem Using Quantum Computing: Smaller Is Sometimes Better
by Ahmad Bennakhi, Gregory T. Byrd and Paul Franzon
Entropy 2024, 26(10), 875; https://doi.org/10.3390/e26100875 - 18 Oct 2024
Cited by 2 | Viewed by 1392
Abstract
This paper aims to outline the effectiveness of modern universal gate quantum computers when utilizing different configurations to solve the B-SAT (Boolean satisfiability) problem. The quantum computing experiments were performed using Grover’s search algorithm to find a valid solution. The experiments were performed under different variations to demonstrate their effects on the results. Changing the number of shots, qubit mapping, and using a different quantum processor were all among the experimental variables. The study also branched into a dedicated experiment highlighting a peculiar behavior that IBM quantum processors exhibit when running circuits with a certain number of shots. Full article
(This article belongs to the Section Quantum Information)
14 pages, 316 KiB  
Article
Noise Transfer Approach to GKP Quantum Circuits
by Timothy C. Ralph, Matthew S. Winnel, S. Nibedita Swain and Ryan J. Marshman
Entropy 2024, 26(10), 874; https://doi.org/10.3390/e26100874 - 18 Oct 2024
Viewed by 1016
Abstract
The choice between the Schrödinger and Heisenberg pictures can significantly impact the computational resources needed to solve a problem, even though they are equivalent formulations of quantum mechanics. Here, we present a method for analysing Bosonic quantum circuits based on the Heisenberg picture which allows, under certain conditions, a useful factoring of the evolution into signal and noise contributions, in a way similar to what can be achieved with classical communication systems. We provide examples which suggest that this approach may be particularly useful in analysing quantum computing systems based on Gottesman–Kitaev–Preskill (GKP) qubits. Full article
(This article belongs to the Special Issue Quantum Optics: Trends and Challenges)
1 page, 135 KiB  
Correction
Correction: Zhang et al. A New Semi-Quantum Two-Way Authentication Protocol between Control Centers and Neighborhood Gateways in Smart Grids. Entropy 2024, 26, 644
by Qiandong Zhang, Kejia Zhang, Kunchi Hou and Long Zhang
Entropy 2024, 26(10), 873; https://doi.org/10.3390/e26100873 - 18 Oct 2024
Viewed by 611
Abstract
In the published publication [...] Full article
(This article belongs to the Section Quantum Information)
20 pages, 1519 KiB  
Article
Transported Entropy of Ions and Peltier Coefficients in 8YSZ and 10Sc1CeSZ Electrolytes for Solid Oxide Cells
by Aydan Gedik and Stephan Kabelac
Entropy 2024, 26(10), 872; https://doi.org/10.3390/e26100872 - 17 Oct 2024
Viewed by 891
Abstract
In this study, the transported entropy of ions for 8YSZ and 10Sc1CeSZ electrolytes was experimentally determined to enable precise modeling of heat transport in solid oxide cells (SOCs). The Peltier coefficient, crucial for thermal management, was directly calculated, highlighting reversible heat transport effects in the cell. While data for 8YSZ are available in the literature, providing a basis for comparison, the results for 10Sc1CeSZ show slightly smaller Seebeck coefficients but higher transported ion entropies. Specifically, at 700 °C and an oxygen partial pressure of p(O2) = 0.21 bar, values of S*(O2−) = 52 ± 10 J/(K·F) for 10Sc1CeSZ and S*(O2−) = 48 ± 9 J/(K·F) for 8YSZ were obtained. The transported entropy was also validated through theoretical calculations and showed minimal deviations when comparing different cell operation modes (O2 || O2− || O2 and H2, H2O || O2− || O2). The influence of the transported entropy of the ions on the total heat generation and the partial heat generation at the electrodes is shown. The temperature has the greatest influence on heat generation, whereby the ion entropy also plays a role. Finally, the Peltier coefficients of 8YSZ for all homogeneous phases agree with the literature values. Full article
39 pages, 7800 KiB  
Article
FLCMC: Federated Learning Approach for Chinese Medicinal Text Classification
by Guang Hu and Xin Fang
Entropy 2024, 26(10), 871; https://doi.org/10.3390/e26100871 - 17 Oct 2024
Viewed by 987
Abstract
Addressing the privacy protection and data sharing issues in Chinese medical texts, this paper introduces a federated learning approach named FLCMC for Chinese medical text classification. The paper first discusses the data heterogeneity issue in federated language modeling. Then, it proposes two perturbed federated learning algorithms, FedPA and FedPAP, based on the self-attention mechanism. In these algorithms, the self-attention mechanism is incorporated within the model aggregation module, while a perturbation term, which measures the differences between the client and the server, is added to the local update module along with a customized PAdam optimizer. In addition, to enable a fair comparison of algorithm performance, existing federated algorithms are improved by integrating a customized Adam optimizer. The paper first conducts experimental analyses of hyperparameters, data heterogeneity, and validity on synthetic datasets, which show that the proposed federated learning algorithms have significant advantages in classification performance and convergence stability when dealing with heterogeneous data. The algorithms are then applied to Chinese medical text datasets to verify their effectiveness on real data. The comparative analysis of algorithm performance and communication efficiency shows that they exhibit strong generalization ability on deep learning models for Chinese medical texts. On the synthetic dataset, compared with the baseline algorithms FedAvg, FedProx, and FedAtt and their improved versions, both FedPA and FedPAP show significantly more accurate and stable convergence behavior for data with general heterogeneity. On the real Chinese medical dataset of doctor–patient conversations, IMCS-V2, with logistic regression and a long short-term memory network as training models, the experimental results show that, in comparison to the above three baseline algorithms and their improved versions, FedPA and FedPAP both achieve the best accuracy and display significantly more stable and accurate convergence behavior, demonstrating that the proposed method classifies Chinese medical texts more effectively. Full article
(This article belongs to the Section Multidisciplinary Applications)
7 pages, 258 KiB  
Article
Remarks on Limit Theorems for the Free Quadratic Forms
by Wiktor Ejsmont, Marek Biernacki and Patrycja Hęćka
Entropy 2024, 26(10), 870; https://doi.org/10.3390/e26100870 - 17 Oct 2024
Viewed by 702
Abstract
In 2021, Ejsmont and Biernacki showed that the free tangent distribution can be used to measure household satisfaction with durable consumer goods. This distribution arises as the limit of free random variables. This new article serves as the theoretical introduction to the continuation of the research presented in the 2021 paper. We continue the study of the limit of specific quadratic forms in free probability, which is the first step towards constructing a new distribution for the evaluation of satisfaction with material affluence among households. We formulate a non-central limit theorem for weighted sums of commutators and squares of the sums of free random variables. In addition, we give random matrix models for these limits. Full article
(This article belongs to the Special Issue Random Matrix Theory and Its Innovative Applications)
28 pages, 9040 KiB  
Article
First Hitting Times on a Quantum Computer: Tracking vs. Local Monitoring, Topological Effects, and Dark States
by Qingyuan Wang, Silin Ren, Ruoyu Yin, Klaus Ziegler, Eli Barkai and Sabine Tornow
Entropy 2024, 26(10), 869; https://doi.org/10.3390/e26100869 - 16 Oct 2024
Cited by 4 | Viewed by 1743
Abstract
We investigate a quantum walk on a ring represented by a directed triangle graph with complex edge weights and monitored at a constant rate until the quantum walker is detected. To this end, the first hitting time statistics are recorded using unitary dynamics interspersed stroboscopically by measurements, which are implemented on IBM quantum computers with a midcircuit readout option. Unlike classical hitting times, the statistical aspect of the problem depends on the way we construct the measured path, an effect that we quantify experimentally. First, we experimentally verify the theoretical prediction that the mean return time to a target state is quantized, with abrupt discontinuities found for specific sampling times and other control parameters, which has a well-known topological interpretation. Second, depending on the initial state, system parameters, and measurement protocol, the detection probability can be less than one or even zero, which is related to dark-state physics. Both return-time quantization and the appearance of the dark states are related to degeneracies in the eigenvalues of the unitary time evolution operator. We conclude that, for the IBM quantum computer under study, the first hitting times of monitored quantum walks are resilient to noise. However, a finite number of measurements leads to broadening effects, which modify the topological quantization and chiral effects of the asymptotic theory with an infinite number of measurements. Full article
(This article belongs to the Special Issue Quantum Walks for Quantum Technologies)
17 pages, 899 KiB  
Article
Corrected Thermodynamics of Black Holes in f(R) Gravity with Electrodynamic Field and Cosmological Constant
by Mou Xu, Yuying Zhang, Liu Yang, Shining Yang and Jianbo Lu
Entropy 2024, 26(10), 868; https://doi.org/10.3390/e26100868 - 15 Oct 2024
Viewed by 1255
Abstract
The thermodynamics of black holes (BHs) and their corrections have become a hot topic in the study of gravitational physics, with significant progress made in recent decades. In this paper, we study the thermodynamics and corrections of spherically symmetric BHs in the models f(R) = R + αR² and f(R) = R + 2γ√R + 8Λ under the f(R) theory, which includes the electrodynamic field and the cosmological constant. Considering thermal fluctuations around equilibrium states, we find that, for both f(R) models, the corrected entropy is meaningful in the case of a negative cosmological constant (anti-de Sitter–RN spacetime) with Λ = −1. It is shown that when the BHs' horizon radius is small, thermal fluctuations have a more significant effect on the corrected entropy. Using the corrected entropy, we derive expressions for the relevant corrected thermodynamic quantities (such as the Helmholtz free energy, internal energy, Gibbs free energy, and specific heat) and calculate the effects of the correction terms. The results indicate that the corrections to the Helmholtz free energy and Gibbs free energy caused by thermal fluctuations are remarkable for small BHs. In addition, we explore the stability of BHs using the specific heat. The study reveals that the corrected BH thermodynamics are locally stable for both models, and the corrected systems undergo a Hawking–Page phase transition. Considering the requirement of a non-negative volume of BHs, we also investigate the constraint on the event horizon radius of BHs. Full article
(This article belongs to the Special Issue The Black Hole Information Problem)
18 pages, 592 KiB  
Article
Causal Learning: Monitoring Business Processes Based on Causal Structures
by Fernando Montoya, Hernán Astudillo, Daniela Díaz and Esteban Berríos
Entropy 2024, 26(10), 867; https://doi.org/10.3390/e26100867 - 15 Oct 2024
Viewed by 1430
Abstract
Conventional methods for process monitoring often fail to capture the causal relationships that drive outcomes, making it hard to distinguish causal anomalies from mere correlations in activity flows. Hence, there is a need for approaches that allow causal interpretation of atypical scenarios (anomalies) and identification of the influence of operational variables on these anomalies. This article introduces CaProM, an innovative technique based on causality techniques, applied during the planning phase in business process environments. The technique combines two causal perspectives: anomaly attribution and distribution change attribution. It has three stages: (1) process events are collected and recorded, identifying flow instances; (2) causal learning of process activities builds directed acyclic graphs (DAGs) that represent dependencies among variables; and (3) the DAGs are used to monitor the process, detecting anomalies and critical nodes. The technique was validated with an industry dataset from the banking sector, comprising 562 activity flow plans. The study monitored causal structures during the planning and execution stages and allowed us to identify the main factor behind a major deviation from planned values. This work contributes to business process monitoring by introducing a causal approach that enhances both the interpretability and explainability of anomalies. The technique makes it possible to understand which specific variables have caused an atypical scenario, providing a clear view of the causal relationships within processes and ensuring greater accuracy in decision-making. This causal analysis employs cross-sectional data, avoiding the need to average multiple time instances and reducing potential biases, and unlike time series methods, it preserves the relationships among variables. Full article
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
22 pages, 3311 KiB  
Article
Meshed Context-Aware Beam Search for Image Captioning
by Fengzhi Zhao, Zhezhou Yu, Tao Wang and He Zhao
Entropy 2024, 26(10), 866; https://doi.org/10.3390/e26100866 - 15 Oct 2024
Cited by 1 | Viewed by 1411
Abstract
Beam search is a commonly used algorithm in image captioning to improve the accuracy and robustness of generated captions by finding the optimal word sequence. However, it mainly focuses on the highest-scoring sequence at each step, often overlooking the broader image context, which can lead to suboptimal results. Additionally, beam search tends to select similar words across sequences, causing repetitive and less diverse output. These limitations suggest that, while effective, beam search can be further improved to better capture the richness and variety needed for high-quality captions. To address these issues, this paper presents meshed context-aware beam search (MCBS). In MCBS for image captioning, the generated caption context is dynamically used to influence the image attention mechanism at each decoding step, ensuring that the model focuses on different regions of the image to produce more coherent and contextually appropriate captions. Furthermore, a penalty coefficient is introduced to discourage the generation of repeated words. Through extensive testing and ablation studies across various models, our results show that MCBS significantly enhances overall model performance. Full article
(This article belongs to the Collection Entropy in Image Analysis)
15 pages, 5286 KiB  
Article
Mapping Guaranteed Positive Secret Key Rates for Continuous Variable Quantum Key Distribution
by Mikhael T. Sayat, Oliver Thearle, Biveen Shajilal, Sebastian P. Kish, Ping Koy Lam, Nicholas J. Rattenbury and John E. Cater
Entropy 2024, 26(10), 865; https://doi.org/10.3390/e26100865 - 15 Oct 2024
Viewed by 1258
Abstract
The standard way to measure the performance of existing continuous variable quantum key distribution (CVQKD) protocols is by using the achievable secret key rate (SKR) with respect to one parameter while keeping all other parameters constant. However, this atomistic method requires many individual parameter analyses while overlooking the co-dependence of other parameters. In this work, a numerical tool is developed for comparing different CVQKD protocols while taking into account the simultaneous effects of multiple CVQKD parameters on the capability of protocols to produce positive SKRs. Using the transmittance, excess noise, and modulation amplitude parameter space, regions of positive SKR are identified to compare three discrete modulated (DM) CVQKD protocols. The results show that the M-QAM protocol outperforms the M-APSK and M-PSK protocols and that there is a non-linear increase in the capability to produce positive SKRs as the number of coherent states used for a protocol increases. The tool developed is beneficial for choosing the optimum protocol in unstable channels, such as free space, where the transmittance and excess noise fluctuate, providing a more holistic assessment of a protocol’s capability to produce positive SKRs. Full article
(This article belongs to the Special Issue Quantum Optics: Trends and Challenges)
13 pages, 9395 KiB  
Article
Sex Differences in Hierarchical and Modular Organization of Functional Brain Networks: Insights from Hierarchical Entropy and Modularity Analysis
by Wenyu Chen, Ling Zhan and Tao Jia
Entropy 2024, 26(10), 864; https://doi.org/10.3390/e26100864 - 14 Oct 2024
Viewed by 1646
Abstract
Existing studies have demonstrated significant sex differences in the neural mechanisms of daily life and neuropsychiatric disorders. The hierarchical organization of the functional brain network is a critical feature for assessing these neural mechanisms. But the sex differences in hierarchical organization have not been fully investigated. Here, we explore whether the hierarchical structure of the brain network differs between females and males using resting-state fMRI data. We measure the hierarchical entropy and the maximum modularity of each individual, and identify a significant negative correlation between the complexity of hierarchy and modularity in brain networks. At the mean level, females show higher modularity, whereas males exhibit a more complex hierarchy. At the consensus level, we use a co-classification matrix to perform a detailed investigation of the differences in the hierarchical organization between sexes and observe that the female group and the male group exhibit different interaction patterns of brain regions in the dorsal attention network (DAN) and visual network (VIN). Our findings suggest that the brains of females and males employ different network topologies to carry out brain functions. In addition, the negative correlation between hierarchy and modularity implies a need to balance the complexity in the hierarchical organization of the brain network, which sheds light on future studies of brain functions. Full article
17 pages, 1320 KiB  
Article
Finite-Blocklength Analysis of Coded Modulation with Retransmission
by Ming Jiang, Yi Wang, Fan Ding and Qiushi Xu
Entropy 2024, 26(10), 863; https://doi.org/10.3390/e26100863 - 14 Oct 2024
Viewed by 860
Abstract
The rapid development of 5G and B5G networks has posed higher demands on retransmission in certain scenarios. This article reviews classical finite-length coding performance prediction formulas and proposes rate prediction formulas for coded modulation retransmission scenarios. Specifically, we demonstrate that a recently proposed model for correcting these prediction formulas also exhibits high accuracy in coded modulation retransmissions. To enhance the generality of this model, we introduce a range variable, P_final, to unify the predictions across different SNRs. Finally, based on simulation results, the article puts forth recommendations specific to retransmission with a high spectral efficiency. Full article
(This article belongs to the Special Issue Information Theory and Network Coding II)
20 pages, 1036 KiB  
Article
Quantum Approach for Contextual Search, Retrieval, and Ranking of Classical Information
by Alexander P. Alodjants, Anna E. Avdyushina, Dmitriy V. Tsarev, Igor A. Bessmertny and Andrey Yu. Khrennikov
Entropy 2024, 26(10), 862; https://doi.org/10.3390/e26100862 - 13 Oct 2024
Viewed by 1566
Abstract
Quantum-inspired algorithms represent an important direction in modern software information technologies that use heuristic methods and approaches of quantum science. This work presents a quantum approach for document search, retrieval, and ranking based on the Bell-like test, which is well-known in quantum physics. We propose quantum probability theory in the hyperspace analog to language (HAL) framework exploiting a Hilbert space for word and document vector specification. The quantum approach allows for accounting for specific user preferences in different contexts. To verify the algorithm proposed, we use a dataset of synthetic advertising text documents from travel agencies generated by the OpenAI GPT-4 model. We show that the “entanglement” in two-word document search and retrieval can be recognized as the frequent occurrence of two words in incompatible query contexts. We have found that the user preferences and word ordering in the query play a significant role in relatively small sizes of the HAL window. The comparison with the cosine similarity metrics demonstrates the key advantages of our approach based on the user-enforced contextual and semantic relationships between words and not just their superficial occurrence in texts. Our approach to retrieving and ranking documents allows for the creation of new information search engines that require no resource-intensive deep machine learning algorithms. Full article
21 pages, 2958 KiB  
Article
Research on Credit Default Prediction Model Based on TabNet-Stacking
by Shijie Wang and Xueyong Zhang
Entropy 2024, 26(10), 861; https://doi.org/10.3390/e26100861 - 13 Oct 2024
Cited by 3 | Viewed by 2003
Abstract
With the development of financial technology, the traditional experience-based and single-network credit default prediction model can no longer meet current needs. This manuscript proposes a credit default prediction model based on TabNet-Stacking. First, the PyTorch deep learning framework is used to construct an improved TabNet structure. A multi-population genetic algorithm is used to optimize the Attention Transformer automatic feature selection module, and the particle swarm algorithm is used to optimize hyperparameter selection and achieve automatic parameter search. Finally, Stacking ensemble learning is used, with the improved TabNet employed to extract features. XGBoost (eXtreme Gradient Boosting), LightGBM (Light Gradient Boosting Machine), CatBoost (Category Boosting), KNN (K-Nearest Neighbor), and SVM (Support Vector Machine) are selected as the first-layer base learners, and XGBoost is used as the second-layer meta-learner. The experimental results show that, compared with the original models, the credit default prediction model proposed in this manuscript outperforms the comparison models in terms of the accuracy, precision, recall, F1 score, and AUC (Area Under the Curve) of credit default prediction results. Full article
22 pages, 1739 KiB  
Article
Approach Based on the Ordered Fuzzy Decision Making System Dedicated to Supplier Evaluation in Supply Chain Management
by Katarzyna Rudnik, Anna Chwastyk and Iwona Pisz
Entropy 2024, 26(10), 860; https://doi.org/10.3390/e26100860 - 12 Oct 2024
Cited by 1 | Viewed by 1305
Abstract
The selection of suppliers represents a pivotal aspect of supply chain management and has a considerable impact on the success and competitiveness of the organization in question. The selection of a suitable supplier is a multi-criteria decision making (MCDM) problem based on a number of qualitative, quantitative, and even conflicting criteria. The aim of this paper is to propose a novel MCDM approach dedicated to the supplier evaluation problem using an ordered fuzzy decision making system. This study uses a fuzzy inference system based on IF–THEN rules with ordered fuzzy numbers (OFNs). The approach employs the concept of OFNs to account for potential uncertainty and subjectivity in the decision making process, and it also takes into account the trends of changes in assessment values and entropy in the final supplier evaluation. This paper’s principal contribution is the development of a knowledge base and the demonstration of its application in an ordered fuzzy expert system for multi-criteria supplier evaluation in a dynamic and uncertain environment. The proposed system takes into account the dynamic changes in the value of assessment parameters in the overall supplier assessment, allowing for the differentiation of suppliers based on current and historical data. The utilization of OFNs in a fuzzy model then allows for a reduction in the complexity of the knowledge base in comparison to a classical fuzzy system and makes it more accessible to users, as it requires only basic arithmetic operations in the inference process. This paper presents a comprehensive framework for the assessment of suppliers against a range of criteria, including local hiring, completeness, and defect factors. Furthermore, the potential to integrate sustainability and ESG (environmental, social, and corporate governance) criteria in the assessment process adds value to the decision making framework by adapting to current trends in supply chain management. Full article
20 pages, 1179 KiB  
Article
Empirical Bayes Methods, Evidentialism, and the Inferential Roles They Play
by Samidha Shetty, Gordon Brittan, Jr. and Prasanta S. Bandyopadhyay
Entropy 2024, 26(10), 859; https://doi.org/10.3390/e26100859 - 12 Oct 2024
Viewed by 1188
Abstract
Empirical Bayes-based Methods (EBM) is an increasingly popular form of Objective Bayesianism (OB). It is identified in particular with the statistician Bradley Efron. The main aims of this paper are, first, to describe and illustrate its main features and, second, to locate its role by comparing it with two other statistical paradigms, Subjective Bayesianism (SB) and Evidentialism. EBM’s main formal features are illustrated in some detail by schematic examples. The comparison between what Efron calls their underlying “philosophies” is by way of a distinction made between confirmation and evidence. Although this distinction is sometimes made in the statistical literature, it is relatively rare and never to the same point as here. That is, the distinction is invariably spelled out intra- and not inter-paradigmatically solely in terms of one or the other accounts. The distinction made in this paper between confirmation and evidence is illustrated by two well-known statistical paradoxes: the base-rate fallacy and Popper’s paradox of ideal evidence. The general conclusion reached is that each of the paradigms has a basic role to play and all are required by an adequate account of statistical inference from a technically informed and fine-grained philosophical perspective. Full article