Search Results (414)

Search Parameters:
Keywords = Turing

66 pages, 819 KB  
Article
Tossing Coins with an 𝒩𝒫-Machine
by Edgar Graham Daylight
Symmetry 2025, 17(10), 1745; https://doi.org/10.3390/sym17101745 - 16 Oct 2025
Abstract
In computational complexity, a tableau represents a hypothetical accepting computation path p of a nondeterministic polynomial time Turing machine N on an input w. The tableau is encoded by the formula ψ, defined as ψ = ψ_cell ∧ ψ_rest. The component ψ_cell enforces the constraint that each cell in the tableau contains exactly one symbol, while ψ_rest incorporates constraints governing the step-by-step behavior of N on w. In recent work, we reformulated a critical part of ψ_rest as a compact Horn formula. In another paper, we evaluated the cost of this reformulation, though our estimates were intentionally conservative. Here, we provide a more rigorous analysis and derive a polynomial bound for two enhanced variants of our original Filling Holes with Backtracking algorithm: the refined (rFHB) and streamlined (sFHB) versions, each tasked with solving 3-SAT. The improvements stem from exploiting inter-cell dependencies spanning large regions of the tableau in the case of rFHB, and from incorporating correlated coin-tossing constraints in the case of sFHB. These improvements are purely conceptual; no empirical validation—commonly expected by complexity specialists—is provided. Accordingly, any claim regarding P vs. NP remains beyond the scope of this work.
(This article belongs to the Special Issue Symmetry in Solving NP-Hard Problems)
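
As context for ψ_cell: the standard Cook–Levin encoding of "each cell holds exactly one symbol" emits, per cell, one at-least-one clause plus pairwise at-most-one clauses. A minimal sketch of that textbook construction, assuming a generic alphabet; it is not the paper's Horn reformulation:

```python
from itertools import combinations

def psi_cell_clauses(cells, symbols):
    """CNF clauses forcing each tableau cell to hold exactly one symbol.

    A variable is a (cell, symbol) pair; a negated literal is
    ('not', cell, symbol). This mirrors the textbook Cook-Levin
    encoding, not the compact Horn variant discussed in the paper.
    """
    clauses = []
    for c in cells:
        # At least one symbol per cell.
        clauses.append([(c, s) for s in symbols])
        # At most one symbol per cell: forbid every pair of symbols.
        for s1, s2 in combinations(symbols, 2):
            clauses.append([('not', c, s1), ('not', c, s2)])
    return clauses

# Example: a 2-cell tableau row over a 3-symbol alphabet.
print(psi_cell_clauses(cells=[0, 1], symbols=['a', 'b', '#']))
```

The pairwise at-most-one part grows quadratically in the alphabet size per cell, which is one reason compact reformulations of tableau constraints are of interest.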

57 pages, 1386 KB  
Article
Bidirectional Endothelial Feedback Drives Turing-Vascular Patterning and Drug-Resistance Niches: A Hybrid PDE-Agent-Based Study
by Zonghao Liu, Louis Shuo Wang, Jiguang Yu, Jilin Zhang, Erica Martel and Shijia Li
Bioengineering 2025, 12(10), 1097; https://doi.org/10.3390/bioengineering12101097 - 12 Oct 2025
Abstract
We present a hybrid partial differential equation-agent-based model (PDE-ABM). In our framework, tumor cells secrete tumor angiogenic factor (TAF), while endothelial cells chemotactically migrate and branch in response. Reaction–diffusion PDEs for TAF, oxygen, and cytotoxic drug are coupled to discrete stochastic dynamics of tumor cells and endothelial tip cells, ensuring multiscale integration. Motivated by observed perfusion heterogeneity in tumors and its pharmacokinetic consequences, we conduct a linear stability analysis for a reduced endothelial–TAF reaction–diffusion subsystem and derive an explicit finite-domain threshold for Turing instability. We demonstrate that bidirectional coupling, where endothelial cells both chemotactically migrate along TAF gradients and secrete TAF, is necessary and sufficient to generate spatially periodic vascular clusters and inter-cluster hypoxic regions. These emergent patterns produce heterogeneous drug penetration and resistant niches. Our results identify TAF clearance, chemotactic sensitivity, and endothelial motility as effective levers to homogenize perfusion. The model is two-dimensional and employs simplified kinetics, and we outline the extensions to three dimensions and saturable kinetics required for quantitative calibration. The study links reaction–diffusion mechanisms with clinical principles and suggests actionable strategies to mitigate resistance by targeting endothelial–TAF feedback.
(This article belongs to the Special Issue Applications of Partial Differential Equations in Bioengineering)
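
As a point of reference for the finite-domain threshold mentioned in the abstract: for a two-species reaction–diffusion system, Turing instability requires a steady state that is stable without diffusion (negative trace, positive determinant of the reaction Jacobian J) yet has det(J − k²·diag(D_u, D_v)) < 0 for some wavenumber k. A numeric check under an invented Jacobian and diffusivities, not the paper's endothelial–TAF kinetics:

```python
import numpy as np

def turing_unstable(J, Du, Dv, k_values):
    """Diffusion-driven instability test for a 2x2 reaction Jacobian J.

    Without diffusion the steady state must be stable (trace < 0, det > 0);
    with diffusion, mode k grows iff det(J - diag(Du, Dv) * k^2) < 0.
    """
    fu, fv = J[0]
    gu, gv = J[1]
    if fu + gv >= 0 or fu * gv - fv * gu <= 0:
        return False  # already unstable (or non-hyperbolic) without diffusion
    for k in k_values:
        if (fu - Du * k**2) * (gv - Dv * k**2) - fv * gu < 0:
            return True  # spatial mode with wavenumber k grows
    return False

# Illustrative activator-inhibitor Jacobian with a fast-diffusing inhibitor.
J = [[0.5, -1.0], [1.0, -1.5]]
print(turing_unstable(J, Du=0.01, Dv=1.0, k_values=np.linspace(0.1, 20, 200)))
```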

12 pages, 3323 KB  
Article
Effects of Laser Shock Processing on the Mechanical Properties of 6061-T6 Aluminium Alloy Using Nanosecond and Picosecond Laser Pulses
by Martha Guadalupe Arredondo Bravo, Gilberto Gomez-Rosas, Miguel Morales, David Munoz-Martin, Juan Jose Moreno-Labella, Jose Manuel Lopez Lopez, Jose Guadalupe Quiñones Galvan, Carlos Rubio-Gonzalez, Francisco Javier Casillas Rodriguez and Carlos Molpeceres
Materials 2025, 18(20), 4649; https://doi.org/10.3390/ma18204649 - 10 Oct 2025
Abstract
Laser shock processing (LSP) is a surface treatment technique used to enhance mechanical properties such as hardness, corrosion resistance, and wear resistance. This study investigates the effects of LSP on a 6061-T6 aluminium alloy under four treatment conditions: nanosecond (ns-LSP), picosecond (ps-LSP), and the two combined sequences nanosecond–picosecond (nsps-LSP) and picosecond–nanosecond (psns-LSP). Two laser systems were employed: a Q-switched Nd:YAG laser (850 mJ/pulse, 6 ns, 1064 nm, 10 Hz) and an Ekspla Atlantic 355-60 laser (0.110 mJ/pulse, 13 ps, 1064 nm, 1 kHz). All treatments induced compressive residual stresses up to 1 mm in depth. Additionally, improvements in microhardness were observed, particularly at deeper layers with the combined nsps-LSP treatment. Surface roughness was measured and compared. Among all configurations, the nsps-LSP treatment produced the highest compressive residual stresses (−428 MPa) and the greatest microhardness at depth. These results suggest that the combined nsps-LSP treatment represents a promising approach for enhancing the mechanical performance of metallic components.
(This article belongs to the Special Issue Advances in Laser Processing Technology of Materials—Second Edition)

58 pages, 4299 KB  
Article
Optimisation of Cryptocurrency Trading Using the Fractal Market Hypothesis with Symbolic Regression
by Jonathan Blackledge and Anton Blackledge
Commodities 2025, 4(4), 22; https://doi.org/10.3390/commodities4040022 - 3 Oct 2025
Abstract
Cryptocurrencies such as Bitcoin can be classified as commodities under the Commodity Exchange Act (CEA), giving the Commodity Futures Trading Commission (CFTC) jurisdiction over those cryptocurrencies deemed commodities, particularly in the context of futures trading. This paper presents a method for predicting both long- and short-term trends in selected cryptocurrencies based on the Fractal Market Hypothesis (FMH). The FMH applies the self-affine properties of fractal stochastic fields to model financial time series. After introducing the underlying theory and mathematical framework, a fundamental analysis of Bitcoin and Ethereum exchange rates against the U.S. dollar is conducted. The analysis focuses on changes in the polarity of the ‘Beta-to-Volatility’ and ‘Lyapunov-to-Volatility’ ratios as indicators of impending shifts in Bitcoin/Ethereum price trends. These signals are used to recommend long, short, or hold trading positions, with corresponding algorithms (implemented in Matlab R2023b) developed and back-tested. An optimisation of these algorithms identifies ideal parameter ranges that maximise both accuracy and profitability, thereby ensuring high confidence in the predictions. The resulting trading strategy provides actionable guidance for cryptocurrency investment and quantifies the likelihood of bull or bear market dominance. Under stable market conditions, machine learning (using the ‘TuringBot’ platform) is shown to produce reliable short-horizon estimates of future price movements and fluctuations. This reduces trading delays caused by data filtering and increases returns by identifying optimal positions within rapid ‘micro-trends’ that would otherwise remain undetected—yielding gains of up to approximately 10%. Empirical results confirm that Bitcoin and Ethereum exchanges behave as self-affine (fractal) stochastic fields with Lévy distributions, exhibiting a Hurst exponent of roughly 0.32, a fractal dimension of about 1.68, and a Lévy index near 1.22. These findings demonstrate that the Fractal Market Hypothesis and its associated indices provide a robust market model capable of generating investment returns that consistently outperform standard Buy-and-Hold strategies.
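
The quoted indices are tied together by D = 2 − H for a self-affine field, matching the reported H ≈ 0.32 and D ≈ 1.68. A rough rescaled-range (R/S) estimator of the Hurst exponent, run here on synthetic noise rather than the paper's Bitcoin/Ethereum exchange-rate series:

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.

    For each window size n, average R/S over non-overlapping windows,
    then fit the slope of log(R/S) against log(n).
    """
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())   # cumulative deviation from the mean
            r, s = dev.max() - dev.min(), w.std()
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(logs_n, logs_rs, 1)[0]

rng = np.random.default_rng(0)
increments = rng.standard_normal(4096)   # white noise: H should come out near 0.5
print(round(hurst_rs(increments, [16, 32, 64, 128, 256]), 2))
```

A persistent series (H > 0.5) trends, an anti-persistent one (H < 0.5, as reported here) mean-reverts, which is what the paper's indicators exploit.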

19 pages, 4717 KB  
Article
Benchmarking Psychological Lexicons and Large Language Models for Emotion Detection in Brazilian Portuguese
by Thales David Domingues Aparecido, Alexis Carrillo, Chico Q. Camargo and Massimo Stella
AI 2025, 6(10), 249; https://doi.org/10.3390/ai6100249 - 1 Oct 2025
Abstract
Emotion detection in Brazilian Portuguese is less studied than in English. We benchmarked a large language model (Mistral 24B), a language-specific transformer model (BERTimbau), and the lexicon-based EmoAtlas for classifying emotions in Brazilian Portuguese text, with a focus on eight emotions derived from Plutchik’s model. Evaluation covered four corpora: 4000 stock-market tweets, 1000 news headlines, 5000 GoEmotions Reddit comments translated by LLMs, and 2000 DeepSeek-generated headlines. While BERTimbau achieved the highest average scores (accuracy 0.876, precision 0.529, and recall 0.423), an overlap with Mistral (accuracy 0.831, precision 0.522, and recall 0.539) and notable performance variability suggest there is no single top performer; however, both transformer-based models outperformed the lexicon-based EmoAtlas (accuracy 0.797) but required up to 40 times more computational resources. We also introduce a novel “emotional fingerprinting” methodology using a synthetically generated dataset to probe emotional alignment, which revealed an imperfect overlap in the emotional representations of the models. While LLMs deliver higher overall scores, EmoAtlas offers superior interpretability and efficiency, making it a cost-effective alternative. This work delivers the first quantitative benchmark for interpretable emotion detection in Brazilian Portuguese, with open datasets and code to foster research in multilingual natural language processing.
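
The headline figures read as macro-averages over the eight Plutchik emotions, which explains how precision and recall can sit far below accuracy on imbalanced classes. A small sketch of that aggregation with toy labels; this is illustrative, not the paper's evaluation code:

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Macro-averaged precision and recall over emotion classes."""
    precisions, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    return float(np.mean(precisions)), float(np.mean(recalls))

# Toy run with four emotion classes; a missed rare class drags the macro scores down.
y_true = np.array([0, 1, 2, 3, 0, 1])
y_pred = np.array([0, 1, 2, 0, 0, 2])
print(macro_metrics(y_true, y_pred, n_classes=4))
```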

18 pages, 2638 KB  
Article
RNA Polymerase I Dysfunction Underlying Craniofacial Syndromes: Integrated Genetic Analysis Reveals Parallels to 22q11.2 Deletion Syndrome
by Spencer Silvey, Scott Lovell and Merlin G. Butler
Genes 2025, 16(9), 1063; https://doi.org/10.3390/genes16091063 - 10 Sep 2025
Abstract
Background/Objective: POLR1A and related gene variants cause craniofacial and developmental syndromes, including Acrofacial Dysostosis-Cincinnati, Treacher-Collins types 2–4, and TWIST1-associated disorders. Using a patient case integrated with molecular analyses, we aimed to clarify shared pathogenic mechanisms and propose these conditions as part of a spectrum of RNA polymerase I (Pol I)–related ribosomopathies. Methods: A patient with a heterozygous POLR1A variant underwent clinical evaluation. Findings were integrated with a literature review of craniofacial syndromes to identify overlapping features. Protein-protein and gene-gene interactions were analyzed with STRING and Pathway Commons, and structural modeling of POLR1A assessed the mutation’s impact. Results: The patient exhibited features overlapping with Sweeney-Cox, Saethre-Chotzen, Robinow-Sorauf, and Treacher-Collins types 2–4, supporting a shared spectrum. Computational analyses identified POLR1A-associated partners and pathways converging on Pol I function, ribosomal biogenesis, and nucleolar processes. Structural modeling of the Met496Ile variant suggested disruption of DNA binding and polymerase activity, linking molecular dysfunction to the clinical phenotype. Conclusion: Significant clinical and genetic overlap exists among Saethre-Chotzen, Sweeney-Cox, Treacher-Collins types 2–4, and Acrofacial Dysostosis-Cincinnati. POLR1A and related Pol I subunits provide a mechanistic basis through impaired nucleolar organization and rRNA transcription, contributing to abnormal craniofacial development. Integrative protein, gene, and structural analyses support classifying these syndromes as Pol I–related ribosomopathies, with implications for diagnosis, counseling, and future mechanistic or therapeutic studies.

24 pages, 4260 KB  
Article
Distinct Inflammatory Responses of hiPSC-Derived Endothelial Cells and Cardiomyocytes to Cytokines Involved in Immune Checkpoint Inhibitor-Associated Myocarditis
by Samantha Conte, Isaure Firoaguer, Simon Lledo, Thi Thom Tran, Claire El Yazidi, Stéphanie Simoncini, Zohra Rebaoui, Claire Guiol, Christophe Chevillard, Régis Guieu, Denis Puthier, Franck Thuny, Jennifer Cautela and Nathalie Lalevée
Cells 2025, 14(17), 1397; https://doi.org/10.3390/cells14171397 - 7 Sep 2025
Abstract
Inflammatory cytokines, particularly interferon-γ (IFN-γ), are markedly elevated in the peripheral blood of patients with immune checkpoint inhibitor-induced myocarditis (ICI-M). Endomyocardial biopsies from these patients also show GBP-associated inflammasome overexpression. While both factors are implicated in ICI-M pathophysiology, their interplay and cellular targets remain poorly characterized. Our aim was to elucidate how ICI-M-associated cytokines affect the viability and inflammatory responses of endothelial cells (ECs) and cardiomyocytes (CMs) using human induced pluripotent stem cell (hiPSC)-derived models. ECs and CMs were differentiated from the same hiPSC line derived from a healthy donor. Cells were exposed either to IFN-γ alone or to an inflammatory cytokine cocktail (CCL5, GZMB, IL-1β, IL-2, IL-6, IFN-γ, TNF-α). We assessed large-scale transcriptomic changes via microarray and evaluated inflammatory, apoptotic, and cell death pathways at the cellular and molecular levels. hiPSC-ECs were highly sensitive to cytokine exposure, displaying significant mortality and marked transcriptomic changes in immunity- and inflammation-related pathways. In contrast, hiPSC-CMs showed limited transcriptional changes and reduced susceptibility to cytokine-induced death. In both cell types, cytokine treatment upregulated key components of the inflammasome pathway, including regulators (GBP5, GBP6, P2X7, NLRC5), a core component (AIM2), and the effector GSDMD. Increased GBP5 expression and CASP-1 cleavage mirrored findings previously reported in endomyocardial biopsies from ICI-M patients. This hiPSC-based model reveals distinct cellular sensitivities to ICI-M-related inflammation, with endothelial cells showing heightened vulnerability. These results reposition endothelial dysfunction, rather than cardiomyocyte injury alone, as a central mechanism in ICI-induced myocarditis. Modulating endothelial inflammasome activation, particularly via AIM2 inhibition, could offer a novel strategy to mitigate cardiac toxicity while preserving antitumor efficacy.
(This article belongs to the Special Issue New Research on Immunity and Inflammation in Cardiovascular Disease)

25 pages, 489 KB  
Article
A Review on Models and Applications of Quantum Computing
by Eduard Grigoryan, Sachin Kumar and Placido Rogério Pinheiro
Quantum Rep. 2025, 7(3), 39; https://doi.org/10.3390/quantum7030039 - 4 Sep 2025
Abstract
This manuscript is intended for readers with a general interest in quantum computation and provides an overview of the most significant developments in the field. It begins by introducing foundational concepts from quantum mechanics—such as superposition, entanglement, and the no-cloning theorem—that underpin quantum computation. The primary computational models are discussed, including gate-based (circuit) quantum computing, adiabatic quantum computing, measurement-based quantum computing, and the quantum Turing machine. A selection of significant quantum algorithms is reviewed, notably Grover’s search algorithm, Shor’s factoring algorithm, and the Quantum Singular Value Transformation (QSVT), which enables efficient solutions to linear algebra problems on quantum devices. To assess practical performance, we compare quantum and classical implementations of support vector machines (SVMs) on several synthetic datasets. These experiments offer insight into the capabilities and limitations of near-term quantum classifiers relative to their classical counterparts. Finally, we review leading quantum programming platforms—including Qiskit, PennyLane, and Cirq—and discuss their roles in bridging theoretical models with real-world quantum hardware. The paper aims to provide a concise yet comprehensive guide for those looking to understand both the theoretical foundations and applied aspects of quantum computing.
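
Of the algorithms surveyed, Grover's search is the simplest to reproduce numerically: starting from a uniform superposition, repeat an oracle sign-flip and an inversion about the mean roughly (π/4)√N times. A plain statevector toy for a 3-qubit search space with one marked item, assuming no quantum SDK is available:

```python
import numpy as np

def grover(n_qubits, marked):
    """Statevector simulation of Grover's search for one marked index."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    for _ in range(int(np.pi / 4 * np.sqrt(N))):  # ~ (pi/4) * sqrt(N) rounds
        state[marked] *= -1                       # oracle: flip marked amplitude
        state = 2 * state.mean() - state          # diffusion: invert about mean
    return np.abs(state) ** 2                     # measurement probabilities

probs = grover(n_qubits=3, marked=5)
print(probs.argmax(), round(probs[5], 3))  # index 5 found with probability ~0.94
```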

22 pages, 1021 KB  
Systematic Review
Scientific Evidence in Public Health Decision-Making: A Systematic Literature Review of the Past 50 Years
by Emmanuel Kabengele Mpinga, Sara Chebbaa, Anne-Laure Pittet and Gabin Kayumbi
Int. J. Environ. Res. Public Health 2025, 22(9), 1343; https://doi.org/10.3390/ijerph22091343 - 28 Aug 2025
Abstract
Background: Scientific evidence plays a critical role in informing public health decision-making processes. However, the extent, nature, and effectiveness of its use remain uneven across contexts. Despite the increasing volume of literature on the subject, previous syntheses have often suffered from narrow thematic, temporal, or geographic scopes. Objectives: This study undertook a comprehensive systematic literature review spanning 50 years to (i) synthesise current knowledge on the use of scientific evidence in public health decisions, (ii) identify key determinants, barriers, and enablers, (iii) evaluate implementation patterns, and (iv) propose future directions for research and practice. Methods: We followed the PRISMA guidelines (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and searched three large databases (Web of Science, Embase, and PubMed), focusing on articles published in English and French between January 1974 and December 2024. Studies were analysed thematically and descriptively to identify trends, patterns, and knowledge gaps. Results: This review reveals a growing corpus of scholarship with a predominance of qualitative studies, mainly published in public health journals. Evidence use is most frequently analysed at the national policy level. Analyses of the evolution of scientific production over time revealed significant shifts beginning as early as 2005. Critical impediments included limited access to reliable and timely data, a lack of institutional capacity, and insufficient training among policy-makers. In contrast, enablers encompassed cross-sector collaboration, data transparency, and alignment between researchers and decision-makers. Conclusions: Addressing persistent gaps necessitates a more nuanced appreciation of interdisciplinary and contextual factors. Our findings call for proactive policies aimed at promoting the use of scientific evidence by improving the accessibility of health data (addressing the absence or lack of data, as well as its reliability, timeliness, and accessibility), and by training decision-makers in the use of scientific evidence for decision making. Furthermore, our findings advocate for better alignment between the agendas of healthcare professionals (e.g., data collection), researchers (e.g., the selection of research topics), and decision-makers (e.g., expectations and needs) in order to develop and implement public health policies that are grounded in and informed by scientific evidence.

9 pages, 1005 KB  
Proceeding Paper
General Theory of Information and Mindful Machines
by Rao Mikkilineni
Proceedings 2025, 126(1), 3; https://doi.org/10.3390/proceedings2025126003 - 26 Aug 2025
Abstract
As artificial intelligence advances toward unprecedented capabilities, society faces a choice between two trajectories. One continues scaling transformer-based architectures, such as state-of-the-art large language models (LLMs) like GPT-4, Claude, and Gemini, aiming for broad generalization and emergent capabilities. This approach has produced powerful tools but remains largely statistical, with unclear potential to achieve hypothetical “superintelligence”—a term used here as a conceptual reference to systems that might outperform humans across most cognitive domains, though no consensus on its definition or framework currently exists. The alternative explored here is the Mindful Machines paradigm—AI systems that could, in future, integrate intelligence with semantic grounding, embedded ethical constraints, and goal-directed self-regulation. This paper outlines the Mindful Machine architecture, grounded in Mark Burgin’s General Theory of Information (GTI), and proposes a post-Turing model of cognition that directly encodes memory, meaning, and teleological goals into the computational substrate. Two implementations are cited as proofs of concept.

25 pages, 14199 KB  
Article
A Nonlinear Cross-Diffusion Model for Disease Spread: Turing Instability and Pattern Formation
by Ravi P. Gupta, Arun Kumar and Shristi Tiwari
Mathematics 2025, 13(15), 2404; https://doi.org/10.3390/math13152404 - 25 Jul 2025
Abstract
In this article, we propose a novel nonlinear cross-diffusion framework to model the distribution of susceptible and infected individuals within their habitat, using a reduced SIR model that incorporates saturated incidence and treatment rates. The study establishes the boundedness of solutions through the theory of parabolic partial differential equations, thereby validating the proposed spatio-temporal model. Through the suggested cross-diffusion mechanism, the model admits at least one non-constant positive equilibrium state within the susceptible–infected (SI) system. This work demonstrates the potential coexistence of susceptible and infected populations through cross-diffusion and unveils Turing instability within the system. By analyzing the codimension-2 Turing–Hopf bifurcation, the study identifies the Turing space within the spatial context. In addition, we explore the results for the Turing–Bogdanov–Takens bifurcation. To account for seasonal disease variations, novel perturbations are introduced. Comprehensive numerical simulations illustrate diverse emerging patterns in the Turing space, including holes, stripes, and their mixtures. Additionally, the study identifies non-Turing and Turing–Bogdanov–Takens patterns for specific parameter selections. Spatial series and surfaces are graphed to enhance the clarity of the pattern results. This research provides theoretical insights into the implications of cross-diffusion in epidemic modeling, particularly in contexts characterized by localized mobility, clinically evident infections, and community-driven isolation behaviors.
(This article belongs to the Special Issue Models in Population Dynamics, Ecology and Evolution)
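
Cross-diffusion means each population's flux also responds to the other population's gradient, so the S equation carries a Laplacian of a mix of S and I rather than of S alone. A bare-bones 1D explicit-Euler step with invented mass-action kinetics and coefficients on a periodic domain; the paper's saturated incidence and treatment terms are not reproduced:

```python
import numpy as np

def step(S, I, dt, dx, d1, d2, d12, d21, beta, gamma):
    """One explicit Euler step of an SI system with linear cross-diffusion.

    S diffuses via the flux potential d1*S + d12*I, and I via d21*S + d2*I;
    the kinetics are plain mass action, chosen only for illustration.
    """
    lap = lambda u: (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2
    dS = -beta * S * I + lap(d1 * S + d12 * I)
    dI = beta * S * I - gamma * I + lap(d21 * S + d2 * I)
    return S + dt * dS, I + dt * dI

rng = np.random.default_rng(1)
S = 1.0 + 0.01 * rng.standard_normal(200)   # small noise around a uniform state
I = 0.1 + 0.01 * rng.standard_normal(200)
for _ in range(5000):
    S, I = step(S, I, dt=0.001, dx=0.5, d1=0.1, d2=0.1,
                d12=0.5, d21=0.0, beta=1.0, gamma=0.5)
print(round(float(S.mean()), 3), round(float(I.std()), 4))
```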

14 pages, 2182 KB  
Article
Stability Analysis of a Master–Slave Cournot Triopoly Model: The Effects of Cross-Diffusion
by Maria Francesca Carfora and Isabella Torcicollo
Axioms 2025, 14(7), 540; https://doi.org/10.3390/axioms14070540 - 17 Jul 2025
Abstract
A Cournot triopoly is a type of oligopoly market involving three firms that produce and sell homogeneous or similar products without cooperating with one another. In Cournot models, firms’ decisions about production levels play a crucial role in determining overall market output. Compared to duopoly models, oligopolies with more than two firms have received relatively less attention in the literature. Nevertheless, triopoly models are more reflective of real-world market conditions, even though analyzing their dynamics remains a complex challenge. A reaction–diffusion system of PDEs generalizing a nonlinear triopoly model describing a master–slave Cournot game is introduced, and the effect of diffusion on the stability of the Nash equilibrium is investigated. Self-diffusion alone cannot induce Turing pattern formation; linear stability analysis shows that cross-diffusion is the key mechanism for the formation of spatial patterns. The conditions for the onset of cross-diffusion-driven instability are obtained via linear stability analysis, and the formation of several Turing patterns is investigated through numerical simulations.
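
For orientation, the non-spatial Nash equilibrium of a symmetric linear Cournot triopoly (inverse demand p = a − b·Q, common marginal cost c) follows from the three first-order conditions and equals q* = (a − c)/(4b) per firm. A quick check under assumed demand parameters; the paper's master–slave structure and diffusion terms are not reproduced here:

```python
import numpy as np

# Firm i maximizes (a - b*(qi + Q_others) - c) * qi, giving the first-order
# condition a - c - b*Q_others - 2*b*qi = 0, i.e. b*(J + I) q = (a - c) * 1
# with J the all-ones matrix.
a, b, c = 10.0, 1.0, 1.0
A = b * (np.ones((3, 3)) + np.eye(3))
q_star = np.linalg.solve(A, np.full(3, a - c))
print(q_star, (a - c) / (4 * b))   # all three outputs equal 2.25
```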

23 pages, 372 KB  
Article
Computability of the Zero-Error Capacity of Noisy Channels
by Holger Boche and Christian Deppe
Information 2025, 16(7), 571; https://doi.org/10.3390/info16070571 - 3 Jul 2025
Abstract
The zero-error capacity of discrete memoryless channels (DMCs), introduced by Shannon, is a fundamental concept in information theory with significant operational relevance, particularly in settings where even a single transmission error is unacceptable. Despite its importance, no general closed-form expression or algorithm is known for computing this capacity. In this work, we investigate the computability-theoretic boundaries of the zero-error capacity and establish several fundamental limitations. Our main result shows that the zero-error capacity of noisy channels is not Banach–Mazur-computable and therefore is also not Borel–Turing-computable. This provides a strong form of non-computability that goes beyond classical undecidability, capturing the inherent discontinuity of the capacity function. As a further contribution, we analyze the deep connections between (i) the zero-error capacity of DMCs, (ii) the Shannon capacity of graphs, and (iii) Ahlswede’s operational characterization via the maximum-error capacity of 0–1 arbitrarily varying channels (AVCs). We prove that key semi-decidability questions are equivalent for all three capacities, thus unifying these problems into a common algorithmic framework. While the computability status of the Shannon capacity of graphs remains unresolved, our equivalence result clarifies what makes this problem so challenging and identifies the logical barriers that must be overcome to resolve it. Together, these results chart the computational landscape of zero-error information theory and provide a foundation for further investigations into the algorithmic intractability of exact capacity computations.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
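
Concretely, the zero-error capacity is C₀ = sup_n (1/n) log₂ α(Gⁿ), where α is the independence number of the n-fold strong product of the confusability graph G; the pentagon channel is the classic case where two channel uses beat one. A brute-force sketch for that toy channel (exponential search, toy sizes only):

```python
from itertools import combinations

def alpha(vertices, edges):
    """Independence number by brute force (exponential; toy graphs only)."""
    e = set(map(frozenset, edges))
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if all(frozenset(p) not in e for p in combinations(cand, 2)):
                return size
    return 0

# Pentagon confusability graph C5: input i is confusable with i +/- 1 (mod 5).
edges = [(i, (i + 1) % 5) for i in range(5)]
print(alpha(range(5), edges))   # 2: one channel use carries log2(2) = 1 bit

# Two uses (strong product): the classic code {(i, 2i mod 5)} gives five
# pairwise non-confusable words, so C0 >= 0.5 * log2(5) > 1 bit per use.
adj = lambda i, j: i == j or frozenset((i, j)) in set(map(frozenset, edges))
code = [(i, 2 * i % 5) for i in range(5)]
print(all(not (adj(u[0], w[0]) and adj(u[1], w[1]))
          for u, w in combinations(code, 2)))   # True
```
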
24 pages, 769 KB  
Article
Injecting Observers into Computational Complexity
by Edgar Graham Daylight
Philosophies 2025, 10(4), 76; https://doi.org/10.3390/philosophies10040076 - 26 Jun 2025
Cited by 1
Abstract
We characterize computer science as an interplay between two modes of reasoning: the Aristotelian (procedural) method and the Platonic (declarative) approach. We contend that Aristotelian, step-by-step thinking dominates in computer programming, while Platonic, static reasoning plays a more prominent role in computational complexity. Various frameworks elegantly blend both Aristotelian and Platonic reasoning. A key example explored in this paper concerns nondeterministic polynomial time Turing machines. Beyond this interplay, we emphasize the growing importance of the ‘computing by observing’ paradigm, which posits that a single derivation tree—generated with a string-rewriting system—can yield multiple interpretations depending on the choice of the observer. Advocates of this paradigm formalize the Aristotelian activities of rewriting and observing within automata theory through a Platonic lens. This approach raises a fundamental question: How do these Aristotelian activities re-emerge when the paradigm is formulated in propositional logic? By addressing this issue, we develop a novel simulation method for nondeterministic Turing machines, particularly those bounded by polynomial time, improving upon the standard textbook approach.
(This article belongs to the Special Issue Semantics and Computation)
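
For contrast with the simulation method developed in the paper, the standard textbook approach runs a nondeterministic Turing machine deterministically by breadth-first search over configurations. A compact sketch of that baseline with an invented toy transition relation (not the paper's propositional-logic construction):

```python
from collections import deque

def ntm_accepts(delta, start, accept, tape, max_steps=10_000):
    """Breadth-first deterministic simulation of a nondeterministic TM.

    delta maps (state, symbol) to a list of (new_state, write, move)
    choices; configurations are explored level by level until one accepts.
    """
    initial = (start, 0, tuple(tape))
    frontier, seen = deque([initial]), {initial}
    for _ in range(max_steps):
        if not frontier:
            return False
        state, head, t = frontier.popleft()
        if state == accept:
            return True
        sym = t[head] if head < len(t) else '_'
        for nstate, write, move in delta.get((state, sym), []):
            nt = list(t) + ['_'] * (head + 1 - len(t))
            nt[head] = write
            cfg = (nstate, max(0, head + move), tuple(nt))
            if cfg not in seen:
                seen.add(cfg)
                frontier.append(cfg)
    return False

# Toy machine: in state q0 on '0', nondeterministically overwrite with '1'
# or move right; it accepts iff some branch reads a '1' in state q0.
delta = {('q0', '0'): [('q0', '1', 0), ('q1', '0', 1)],
         ('q0', '1'): [('qacc', '1', 0)]}
print(ntm_accepts(delta, 'q0', 'qacc', ['0']))   # True: one branch accepts
```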

29 pages, 351 KB  
Article
The Computability of the Channel Reliability Function and Related Bounds
by Holger Boche and Christian Deppe
Algorithms 2025, 18(6), 361; https://doi.org/10.3390/a18060361 - 11 Jun 2025
Abstract
The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds on this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. The same holds for the functions describing the sphere-packing bound and the expurgation bound. Additionally, we examine the R function and the zero-error feedback capacity, as both are vital in the context of the reliability function. Neither the R function nor the zero-error feedback capacity is Banach–Mazur computable.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
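
For context on the bounds involved: the random-coding lower bound on the reliability function is E_r(R) = max over ρ in [0, 1] of E_0(ρ) − ρR, with Gallager's E_0 function, and the sphere-packing upper bound takes the same form with ρ unrestricted. A numeric sketch for a binary symmetric channel with uniform inputs, using the standard textbook formulas rather than anything specific to this paper:

```python
import numpy as np

def E0(rho, P, Q):
    """Gallager's E0(rho, Q) in bits for channel matrix P[x][y], input dist Q."""
    s = 1.0 / (1.0 + rho)
    inner = (Q[:, None] * P ** s).sum(axis=0)        # sum over inputs, per output
    return -np.log2((inner ** (1.0 + rho)).sum())

def random_coding_exponent(R, P, Q, grid=1001):
    """E_r(R) = max over rho in [0, 1] of E0(rho) - rho * R."""
    return max(E0(rho, P, Q) - rho * R for rho in np.linspace(0.0, 1.0, grid))

p = 0.1                                   # BSC crossover probability
P = np.array([[1 - p, p], [p, 1 - p]])    # rows: inputs, columns: outputs
Q = np.array([0.5, 0.5])                  # uniform input distribution
print(round(random_coding_exponent(R=0.2, P=P, Q=Q), 4))
```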