Search Results (47)

Search Parameters:
Keywords = turing algorithm

41 pages, 1874 KB  
Article
Is Every Cognitive Phenomenon Computable?
by Fernando Rodriguez-Vergara and Phil Husbands
Mathematics 2026, 14(3), 535; https://doi.org/10.3390/math14030535 - 2 Feb 2026
Viewed by 120
Abstract
According to the Church–Turing thesis, the limit of what is computable is bounded by Turing machines. Following from this, given that general computable functions formally describe the notion of recursive mechanisms, it is sometimes argued that every organismic process that specifies consistent cognitive responses should be both limited to Turing machine capabilities and amenable to formalization. There is, however, a deep intuitive conviction permeating contemporary cognitive science, according to which mental phenomena, such as consciousness and agency, cannot be explained by resorting to this kind of framework. In spite of some exceptions, the overall tacit assumption is that whatever the mind is, it exceeds the reach of what is described by notions of computability. This issue, namely the nature of the relation between cognition and computation, is particularly pertinent and increasingly relevant as a possible route to better understanding the inner workings of the mind, as well as the limits of artificial implementations thereof. Moreover, although it is often overlooked or omitted so as to simplify our models, it will probably define, or so we argue, the direction of future research on artificial life, cognitive science, artificial intelligence, and related fields. Full article
(This article belongs to the Special Issue Non-algorithmic Mathematical Models of Biological Organization)
6 pages, 210 KB  
Article
Why Turing’s Computable Numbers Are Only Non-Constructively Closed Under Addition
by Jeff Edmonds
Entropy 2026, 28(1), 71; https://doi.org/10.3390/e28010071 - 7 Jan 2026
Viewed by 276
Abstract
Kolmogorov complexity asks whether a string can be output by a Turing Machine (TM) whose description is shorter than the string itself. Analogously, a real number is considered computable if a Turing machine can generate its decimal expansion. The modern ϵ-approximation definition of computability, widely used in practical computation, ensures that computable reals are constructively closed under addition. However, Turing’s original 1936 digit-by-digit notion, which demands the direct output of the n-th digit, presents a stark divergence. Though the set of Turing-computable reals is not constructively closed under addition, we prove that a Turing machine capable of computing x+y non-constructively exists. The core constructive computational barrier arises from determining the ones digit of a sum like 0.333… + 0.666… = 0.999… = 1.000…. This particular example is ambiguous because both 0.999… and 1.000… are legitimate decimal representations of the same number. However, if any of the infinitely many 3s in the first term is changed to a 2 (e.g., 0.3332333… + 0.666…), the sum’s leading digit is definitely zero. Conversely, if it is changed to a 4 (e.g., 0.3334333… + 0.666…), the leading digit is definitely one. This implies an inherent undecidability in determining these digits. Recent papers and our work address this issue: Hamkins provides an informal argument, Berthelette et al. present a more complicated formal proof, and our contribution offers a simple reduction to the Halting Problem. We demonstrate that determining when carry propagation stops can be resolved with a single query to an oracle that tells if and when a given TM halts. Because a concrete answer to this query exists, so does a TM computing the digits of x+y, though the proof is non-constructive. As far as we know, the analogous question for multiplication remains open. This, we feel, is an interesting addition to the story. It reveals a subtle but significant difference between the modern ϵ-approximation definition and Turing’s original 1936 digit-by-digit notion of a computable number, as well as between constructive and non-constructive proof. This issue of computability and numerical precision ties into algorithmic information and Kolmogorov complexity. Full article
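The carry-propagation barrier described above can be made concrete with a short sketch. The following Python fragment (our illustration, not the paper's construction; all names are ours) tries to decide the ones digit of x + y by scanning digit pairs left to right, and stalls exactly when every position sums to 9:

```python
# Given digit generators for x and y in [0, 1), scan digit pairs left to
# right. The scan resolves as soon as some position has digit sum != 9;
# if every position sums to 9 (as with 0.333... + 0.666...), the loop
# never terminates, mirroring the undecidability discussed above.

from itertools import count

def ones_digit(x_digits, y_digits, max_steps=None):
    """x_digits(n), y_digits(n) return the n-th decimal digit (n >= 1)."""
    positions = count(1) if max_steps is None else range(1, max_steps + 1)
    for n in positions:
        s = x_digits(n) + y_digits(n)
        if s <= 8:
            return 0        # no carry can reach the ones place
        if s >= 10:
            return 1        # a carry definitely reaches the ones place
        # s == 9: still ambiguous, keep scanning
    return None             # gave up: ambiguity not resolved in max_steps

thirds = lambda n: 3                       # 0.333...
two_thirds = lambda n: 6                   # 0.666...
tweaked = lambda n: 2 if n == 4 else 3     # 0.3332333...

print(ones_digit(tweaked, two_thirds))                  # 0, resolved at n = 4
print(ones_digit(thirds, two_thirds, max_steps=10**6))  # None: never resolves
```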
66 pages, 819 KB  
Article
Tossing Coins with an 𝒩𝒫-Machine
by Edgar Graham Daylight
Symmetry 2025, 17(10), 1745; https://doi.org/10.3390/sym17101745 - 16 Oct 2025
Viewed by 550
Abstract
In computational complexity, a tableau represents a hypothetical accepting computation path p of a nondeterministic polynomial time Turing machine N on an input w. The tableau is encoded by the formula ψ, defined as ψ = ψ_cell ∧ ψ_rest. The component ψ_cell enforces the constraint that each cell in the tableau contains exactly one symbol, while ψ_rest incorporates constraints governing the step-by-step behavior of N on w. In recent work, we reformulated a critical part of ψ_rest as a compact Horn formula. In another paper, we evaluated the cost of this reformulation, though our estimates were intentionally conservative. Here, we provide a more rigorous analysis and derive a polynomial bound for two enhanced variants of our original Filling Holes with Backtracking algorithm: the refined (rFHB) and streamlined (sFHB) versions, each tasked with solving 3-SAT. The improvements stem from exploiting inter-cell dependencies spanning large regions of the tableau in the case of rFHB, and from incorporating correlated coin-tossing constraints in the case of sFHB. These improvements are purely conceptual; no empirical validation—commonly expected by complexity specialists—is provided. Accordingly, any claim regarding P vs. NP remains beyond the scope of this work. Full article
(This article belongs to the Special Issue Symmetry in Solving NP-Hard Problems)
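For readers unfamiliar with tableau encodings, the ψ_cell component can be illustrated with a standard Cook–Levin-style clause generator (a sketch under our own naming; the paper's exact formulation, including its Horn reformulation, differs):

```python
# For every tableau cell (i, j), emit CNF clauses forcing exactly one
# symbol from the alphabet to occupy that cell. Each variable (i, j, s)
# is numbered consecutively, DIMACS-style.

from itertools import combinations

def exactly_one_per_cell(rows, cols, alphabet):
    var = {}
    def v(i, j, s):
        return var.setdefault((i, j, s), len(var) + 1)

    clauses = []
    for i in range(rows):
        for j in range(cols):
            # at least one symbol in cell (i, j)
            clauses.append([v(i, j, s) for s in alphabet])
            # at most one symbol: pairwise exclusion
            for s, t in combinations(alphabet, 2):
                clauses.append([-v(i, j, s), -v(i, j, t)])
    return clauses, var

clauses, var = exactly_one_per_cell(2, 3, ["0", "1", "q0", "#"])
print(len(clauses))   # 2*3 cells * (1 + C(4,2)) = 42 clauses
```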

58 pages, 4299 KB  
Article
Optimisation of Cryptocurrency Trading Using the Fractal Market Hypothesis with Symbolic Regression
by Jonathan Blackledge and Anton Blackledge
Commodities 2025, 4(4), 22; https://doi.org/10.3390/commodities4040022 - 3 Oct 2025
Viewed by 5007
Abstract
Cryptocurrencies such as Bitcoin can be classified as commodities under the Commodity Exchange Act (CEA), giving the Commodity Futures Trading Commission (CFTC) jurisdiction over those cryptocurrencies deemed commodities, particularly in the context of futures trading. This paper presents a method for predicting both long- and short-term trends in selected cryptocurrencies based on the Fractal Market Hypothesis (FMH). The FMH applies the self-affine properties of fractal stochastic fields to model financial time series. After introducing the underlying theory and mathematical framework, a fundamental analysis of Bitcoin and Ethereum exchange rates against the U.S. dollar is conducted. The analysis focuses on changes in the polarity of the ‘Beta-to-Volatility’ and ‘Lyapunov-to-Volatility’ ratios as indicators of impending shifts in Bitcoin/Ethereum price trends. These signals are used to recommend long, short, or hold trading positions, with corresponding algorithms (implemented in Matlab R2023b) developed and back-tested. An optimisation of these algorithms identifies ideal parameter ranges that maximise both accuracy and profitability, thereby ensuring high confidence in the predictions. The resulting trading strategy provides actionable guidance for cryptocurrency investment and quantifies the likelihood of bull or bear market dominance. Under stable market conditions, machine learning (using the ‘TuringBot’ platform) is shown to produce reliable short-horizon estimates of future price movements and fluctuations. This reduces trading delays caused by data filtering and increases returns by identifying optimal positions within rapid ‘micro-trends’ that would otherwise remain undetected—yielding gains of up to approximately 10%. Empirical results confirm that Bitcoin and Ethereum exchanges behave as self-affine (fractal) stochastic fields with Lévy distributions, exhibiting a Hurst exponent of roughly 0.32, a fractal dimension of about 1.68, and a Lévy index near 1.22. These findings demonstrate that the Fractal Market Hypothesis and its associated indices provide a robust market model capable of generating investment returns that consistently outperform standard Buy-and-Hold strategies. Full article
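The Hurst exponent cited above is commonly estimated by rescaled-range (R/S) analysis. The sketch below (ours, in Python; the authors work in Matlab R2023b, and their pipeline is not reproduced here) shows the basic estimator, with the fractal dimension following as D = 2 - H:

```python
# Estimate the Hurst exponent of a return series by rescaled-range (R/S)
# analysis: within windows of size n, compute range/std of cumulative
# deviations, then fit the slope of log(R/S) against log(n).

import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())        # cumulative deviation
            r = dev.max() - dev.min()            # range
            s = w.std()                          # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]      # slope estimates H

rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)             # uncorrelated noise
h = hurst_rs(returns)
print(round(h, 2))        # roughly 0.5 expected for white noise (bias aside)
print(round(2 - h, 2))    # corresponding fractal dimension D = 2 - H
```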

25 pages, 489 KB  
Article
A Review on Models and Applications of Quantum Computing
by Eduard Grigoryan, Sachin Kumar and Placido Rogério Pinheiro
Quantum Rep. 2025, 7(3), 39; https://doi.org/10.3390/quantum7030039 - 4 Sep 2025
Cited by 1 | Viewed by 6699
Abstract
This manuscript is intended for readers who have a general interest in the subject of quantum computation and provides an overview of the most significant developments in the field. It begins by introducing foundational concepts from quantum mechanics—such as superposition, entanglement, and the no-cloning theorem—that underpin quantum computation. The primary computational models are discussed, including gate-based (circuit) quantum computing, adiabatic quantum computing, measurement-based quantum computing, and the quantum Turing machine. A selection of significant quantum algorithms is reviewed, notably Grover’s search algorithm, Shor’s factoring algorithm, and Quantum Singular Value Transformation (QSVT), which enables efficient solutions to linear algebra problems on quantum devices. To assess practical performance, we compare quantum and classical implementations of support vector machines (SVMs) using several synthetic datasets. These experiments offer insight into the capabilities and limitations of near-term quantum classifiers relative to classical counterparts. Finally, we review leading quantum programming platforms—including Qiskit, PennyLane, and Cirq—and discuss their roles in bridging theoretical models with real-world quantum hardware. The paper aims to provide a concise yet comprehensive guide for those looking to understand both the theoretical foundations and applied aspects of quantum computing. Full article
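As a taste of the algorithms such a review covers, here is a minimal numpy-only rendering of one Grover iteration on a 4-item search space (an illustration of ours, not code from the paper or from any of the surveyed platforms):

```python
# One Grover iteration on 2 qubits: phase-flip the marked item, then
# invert all amplitudes about their mean. For N = 4 and one marked item,
# a single iteration already concentrates all probability on it.

import numpy as np

N = 4
marked = 2                                   # index of the marked item
state = np.full(N, 1 / np.sqrt(N))           # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1                  # phase-flip the marked item

mean = np.full((N, N), 1 / N)
diffusion = 2 * mean - np.eye(N)             # inversion about the mean

state = diffusion @ (oracle @ state)
probs = state**2
print(probs)                                 # probability ~1.0 on index 2
print(int(np.argmax(probs)))                 # 2
```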

23 pages, 372 KB  
Article
Computability of the Zero-Error Capacity of Noisy Channels
by Holger Boche and Christian Deppe
Information 2025, 16(7), 571; https://doi.org/10.3390/info16070571 - 3 Jul 2025
Viewed by 1189
Abstract
The zero-error capacity of discrete memoryless channels (DMCs), introduced by Shannon, is a fundamental concept in information theory with significant operational relevance, particularly in settings where even a single transmission error is unacceptable. Despite its importance, no general closed-form expression or algorithm is known for computing this capacity. In this work, we investigate the computability-theoretic boundaries of the zero-error capacity and establish several fundamental limitations. Our main result shows that the zero-error capacity of noisy channels is not Banach–Mazur-computable and therefore is also not Borel–Turing-computable. This provides a strong form of non-computability that goes beyond classical undecidability, capturing the inherent discontinuity of the capacity function. As a further contribution, we analyze the deep connections between (i) the zero-error capacity of DMCs, (ii) the Shannon capacity of graphs, and (iii) Ahlswede’s operational characterization via the maximum-error capacity of 0–1 arbitrarily varying channels (AVCs). We prove that key semi-decidability questions are equivalent for all three capacities, thus unifying these problems into a common algorithmic framework. While the computability status of the Shannon capacity of graphs remains unresolved, our equivalence result clarifies what makes this problem so challenging and identifies the logical barriers that must be overcome to resolve it. Together, these results chart the computational landscape of zero-error information theory and provide a foundation for further investigations into the algorithmic intractability of exact capacity computations. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
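For background (standard material, not the paper's non-computability argument): the zero-error capacity of a DMC is the regularization of the independence number α over strong powers of its confusability graph G, and the one-shot term log2 α(G) already gives a lower bound. A brute-force α for Shannon's pentagon example:

```python
# Brute-force independence number alpha(G) for tiny graphs. For the
# pentagon C5, Shannon's classic example, alpha = 2, so the one-shot
# zero-error rate is log2(2) = 1 bit per symbol (the true capacity,
# by Lovasz, is log2 sqrt(5) ~ 1.16).

from itertools import combinations
from math import log2

def alpha(n_vertices, edges):
    edge_set = {frozenset(e) for e in edges}
    for size in range(n_vertices, 0, -1):
        for subset in combinations(range(n_vertices), size):
            if all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2)):
                return size
    return 0

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
a = alpha(5, c5)
print(a, log2(a))     # 2 1.0
```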
38 pages, 16379 KB  
Article
Hyperbolic Sine Function Control-Based Finite-Time Bipartite Synchronization of Fractional-Order Spatiotemporal Networks and Its Application in Image Encryption
by Lvming Liu, Haijun Jiang, Cheng Hu, Haizheng Yu, Siyu Chen, Yue Ren, Shenglong Chen and Tingting Shi
Fractal Fract. 2025, 9(1), 36; https://doi.org/10.3390/fractalfract9010036 - 13 Jan 2025
Viewed by 1368
Abstract
This work is devoted to the hyperbolic sine function (HSF) control-based finite-time bipartite synchronization of fractional-order spatiotemporal networks and its application in image encryption. Initially, the addressed networks adequately take into account the nature of anisotropic diffusion, i.e., the diffusion matrix can be not only non-diagonal but also non-square, without the conservative requirements in plenty of the existing literature. Next, an equation transformation and an inequality estimate for the anisotropic diffusion term are established, which are fundamental for analyzing the diffusion phenomenon in network dynamics. Subsequently, three control laws are devised to offer a detailed discussion of the HSF control law’s outstanding performances, including the swifter convergence rate, the tighter bound of the settling time, and the suppression of chattering. Following this, by means of a designed chaotic system with multi-scroll chaotic attractors, tested with bifurcation diagrams, Poincaré maps, and Turing patterns, several simulations are provided to attest to the correctness of our developed findings. Finally, a formulated image encryption algorithm, which is evaluated through imperative security tests, reveals the effectiveness and superiority of the obtained results. Full article

15 pages, 4408 KB  
Article
An Efficient Linearized Difference Algorithm for a Diffusive Selkov–Schnakenberg System
by Yange Wang and Xixian Bai
Mathematics 2024, 12(6), 894; https://doi.org/10.3390/math12060894 - 18 Mar 2024
Cited by 1 | Viewed by 1134
Abstract
This study provides an efficient linearized difference algorithm for a diffusive Selkov–Schnakenberg system. The algorithm is developed by using a finite difference method that relies on a three-level linearization approach. The boundedness, existence and uniqueness of the solution of our proposed algorithm are proved. The numerical experiments not only validate the accuracy of the algorithm but also preserve the Turing patterns. Full article
(This article belongs to the Special Issue Numerical and Computational Methods in Engineering)
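To see the kind of Turing patterns such a scheme is designed to preserve, a naive explicit finite-difference sketch of the closely related Schnakenberg system in one dimension follows (an illustration with parameters chosen by us, not the paper's three-level linearized algorithm):

```python
# Explicit Euler for the 1-D Schnakenberg reaction-diffusion system
#   u_t = Du*u_xx + a - u + u^2 v,   v_t = Dv*v_xx + b - u^2 v,
# with periodic boundaries. The diffusion ratio Dv/Du = 40 puts the
# system in the Turing-unstable regime, so a spatial pattern emerges
# from small random perturbations of the steady state.

import numpy as np

a, b, Du, Dv = 0.1, 0.9, 1.0, 40.0
nx, dx, dt, steps = 200, 1.0, 0.01, 20000    # dt*Dv/dx^2 = 0.4 < 0.5: stable

rng = np.random.default_rng(1)
u = (a + b) * np.ones(nx) + 0.01 * rng.standard_normal(nx)
v = b / (a + b) ** 2 * np.ones(nx) + 0.01 * rng.standard_normal(nx)

def lap(w):                                  # periodic 1-D Laplacian
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

for _ in range(steps):
    uv2 = u * u * v
    u = u + dt * (Du * lap(u) + a - u + uv2)
    v = v + dt * (Dv * lap(v) + b - uv2)

print(u.min(), u.max())   # a wide spread indicates a spatial (Turing) pattern
```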

12 pages, 8558 KB  
Article
Mark Burgin’s Legacy: The General Theory of Information, the Digital Genome, and the Future of Machine Intelligence
by Rao Mikkilineni
Philosophies 2023, 8(6), 107; https://doi.org/10.3390/philosophies8060107 - 12 Nov 2023
Cited by 3 | Viewed by 5161
Abstract
With 500+ papers and 20+ books spanning many scientific disciplines, Mark Burgin has left an indelible mark and legacy for future explorers of human thought and information technology professionals. In this paper, I discuss his contribution to the evolution of machine intelligence using his general theory of information (GTI) based on my discussions with him and various papers I co-authored during the past eight years. His construction of a new class of digital automata to overcome the barrier posed by the Church–Turing Thesis, and his contribution to super-symbolic computing with knowledge structures, cognizing oracles, and structural machines are leading to practical applications changing the future landscape of information systems. GTI provides a model for the operational knowledge of biological systems to build, operate, and manage life processes using 30+ trillion cells capable of replication and metabolism. The schema and associated operations derived from GTI are also used to model a digital genome specifying the operational knowledge of algorithms executing the software life processes with specific purposes using replication and metabolism. The result is a digital software system with a super-symbolic computing structure exhibiting autopoietic and cognitive behaviors that biological systems also exhibit. We discuss here one of these applications. Full article
(This article belongs to the Special Issue Special Issue in Memory of Professor Mark Burgin)

28 pages, 1233 KB  
Article
Trustworthy Digital Representations of Analog Information—An Application-Guided Analysis of a Fundamental Theoretical Problem in Digital Twinning
by Holger Boche, Yannik N. Böck, Ullrich J. Mönich and Frank H. P. Fitzek
Algorithms 2023, 16(11), 514; https://doi.org/10.3390/a16110514 - 9 Nov 2023
Cited by 4 | Viewed by 2277
Abstract
This article compares two methods of algorithmically processing bandlimited time-continuous signals in light of the general problem of finding “suitable” representations of analog information on digital hardware. Albeit abstract, we argue that this problem is fundamental in digital twinning, a signal-processing paradigm the upcoming 6G communication-technology standard relies on heavily. Using computable analysis, we formalize a general framework of machine-readable descriptions for representing analytic objects on Turing machines. Subsequently, we apply this framework to sampling and interpolation theory, providing a thoroughly formalized method for digitally processing the information carried by bandlimited analog signals. We investigate discrete-time descriptions, which form the implicit quasi-standard in digital signal processing, and establish continuous-time descriptions that take the signal’s continuous-time behavior into account. Motivated by an exemplary application of digital twinning, we analyze a textbook model of digital communication systems accordingly. We show that technologically fundamental properties, such as a signal’s (Banach-space) norm, can be computed from continuous-time, but not from discrete-time descriptions of the signal. Given the high trustworthiness requirements within 6G, e.g., employed software must satisfy assessment criteria in a provable manner, we conclude that the problem of “trustworthy” digital representations of analog information is indeed essential to near-future information technology. Full article
(This article belongs to the Topic Modeling and Practice for Trustworthy and Secure Systems)
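The discrete-time descriptions discussed above rest on classical sampling theory. As textbook background (not the paper's computable-analysis framework), Whittaker–Shannon interpolation rebuilds a bandlimited signal from its samples, turning discrete-time data into a usable continuous-time description:

```python
# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - nT)/T)
# for a signal bandlimited below the Nyquist rate 1/(2T). Here the series
# is truncated to 101 samples, so the reconstruction is approximate.

import numpy as np

T = 0.5                                          # sampling period
n = np.arange(-50, 51)
signal = lambda t: np.cos(2 * np.pi * 0.3 * t)   # bandlimited test signal (0.3 Hz)
samples = signal(n * T)

def reconstruct(t):
    return np.sum(samples * np.sinc((t - n * T) / T))

for t in (0.13, 0.77, 1.9):
    # approximately equal pairs (truncation error only)
    print(round(float(reconstruct(t)), 4), round(float(signal(t)), 4))
```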

18 pages, 2639 KB  
Article
Secure CAPTCHA by Genetic Algorithm (GA) and Multi-Layer Perceptron (MLP)
by Saman Shojae Chaeikar, Fatemeh Mirzaei Asl, Saeid Yazdanpanah, Mazdak Zamani, Azizah Abdul Manaf and Touraj Khodadadi
Electronics 2023, 12(19), 4084; https://doi.org/10.3390/electronics12194084 - 29 Sep 2023
Cited by 11 | Viewed by 2372
Abstract
To achieve an acceptable level of security on the web, the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) was introduced as a tool to prevent bots from performing destructive actions such as downloading or signing up. Smartphones have small screens, and, therefore, using the common CAPTCHA methods (e.g., text CAPTCHAs) on these devices raises usability issues. To introduce a reliable, secure, and usable CAPTCHA suitable for smartphones, this paper introduces a hand gesture recognition CAPTCHA based on applying genetic algorithm (GA) principles to a Multi-Layer Perceptron (MLP). The proposed method improves the performance of MLP-based hand gesture recognition. It has been trained and evaluated on 2201 videos of the IPN Hand dataset, and the MSE and RMSE benchmarks report index values of 0.0018 and 0.0424, respectively. A comparison with related works shows a minimum of 1.79% fewer errors, and experiments produced a sensitivity of 93.42% and an accuracy of 92.27%, improvements of 10.25% and 6.65%, respectively, compared to the MLP implementation. The range of supported hand gestures can be a limit for the application of this research, as a limited range may result in a vulnerable CAPTCHA. Also, the processes of training and testing require significant computational resources. In the future, we will optimize the method to run with high reliability under various illumination conditions and skin colors and tones. The next development plan is to use augmented reality and create unpredictable random patterns to enhance the security of the method. Full article
(This article belongs to the Special Issue State-of-the-Art Electronics in the USA)
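The general idea of applying GA principles to an MLP can be sketched generically (the toy task, operators, and parameters below are ours; the paper's gesture-recognition setup differs):

```python
# Evolve a population of MLP weight vectors by selection, crossover, and
# mutation, scored by classification error on a toy task (XOR). The MLP
# is a fixed 2-4-1 architecture; only its weights are evolved.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
DIM = 17                                       # 2*4 + 4 + 4 + 1 parameters

def forward(w, x):
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # negative MSE

pop = rng.standard_normal((60, DIM))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]    # truncation selection
    kids = []
    for _ in range(60):
        p1, p2 = parents[rng.integers(20, size=2)]
        mask = rng.random(DIM) < 0.5           # uniform crossover
        kids.append(np.where(mask, p1, p2) + 0.1 * rng.standard_normal(DIM))
    pop = np.array(kids)

best = max(pop, key=fitness)
print(np.round(forward(best, X)))              # ideally [0. 1. 1. 0.]
```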

35 pages, 19835 KB  
Article
Color Image Encryption Algorithm Based on a Chaotic Model Using the Modular Discrete Derivative and Langton’s Ant
by Ernesto Moya-Albor, Andrés Romero-Arellano, Jorge Brieva and Sandra L. Gomez-Coronel
Mathematics 2023, 11(10), 2396; https://doi.org/10.3390/math11102396 - 22 May 2023
Cited by 29 | Viewed by 4000
Abstract
In this work, a color image encryption and decryption algorithm for digital images is presented. It is based on the modular discrete derivative (MDD), a novel technique to encrypt images and efficiently hide visual information. In addition, Langton’s ant, which is a two-dimensional universal Turing machine with a high key space, is used. Moreover, a deterministic noise technique that adds security to the MDD is utilized. The proposed hybrid scheme exploits the advantages of MDD and Langton’s ant, generating a very secure and reliable encryption algorithm. In this proposal, if the key is known, the original image is recovered without loss. The method has demonstrated high performance through various tests, including statistical analysis (histograms and correlation distributions), entropy, texture analysis, encryption quality, key space assessment, key sensitivity analysis, and robustness to differential attack. The proposed method obtains chi-square values between 233.951 and 281.687, entropy values between 7.9999225223 and 7.9999355791, PSNR values (in the original and encrypted images) between 8.134 and 9.957, number of pixel change rate (NPCR) values between 99.60851796% and 99.61054611%, unified average changing intensity (UACI) values between 33.44672377% and 33.47430379%, and a vast range of possible keys, >5.8459×10⁷². On the other hand, an analysis of the sensitivity of the key shows that slight changes to the key do not generate any additional information to decrypt the image. In addition, the proposed method shows competitive performance against recent works found in the literature. Full article
(This article belongs to the Special Issue Chaos-Based Secure Communication and Cryptography)
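Langton's ant itself, the Turing-universal ingredient above, is easy to state; a minimal implementation follows (how the paper couples the ant's trajectory with the MDD for encryption is more involved and is not reproduced here):

```python
# Langton's ant on a toroidal grid: on a white cell turn right, on a
# black cell turn left; flip the cell and step forward. After ~10,000
# steps of apparent chaos the ant famously builds a recurring "highway".

def langtons_ant(steps, size=64):
    grid = [[0] * size for _ in range(size)]
    r = c = size // 2
    dr, dc = -1, 0                              # facing up
    for _ in range(steps):
        if grid[r][c] == 0:
            dr, dc = dc, -dr                    # white: turn right
        else:
            dr, dc = -dc, dr                    # black: turn left
        grid[r][c] ^= 1                         # flip the cell
        r, c = (r + dr) % size, (c + dc) % size # move forward (wraps around)
    return grid

grid = langtons_ant(11000)
print(sum(map(sum, grid)))                      # number of black cells
```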

8 pages, 239 KB  
Article
Effective Procedures
by Nathan Salmon
Philosophies 2023, 8(2), 27; https://doi.org/10.3390/philosophies8020027 - 16 Mar 2023
Viewed by 2136
Abstract
The “somewhat vague, intuitive” notion from computability theory of an effective procedure (method) or algorithm can be fairly precisely defined, even if it does not have a purely mathematical definition—and even if (as many have asserted) for that reason, the Church–Turing thesis (that the effectively calculable functions on natural numbers are exactly the general recursive functions), cannot be proved. However, it is logically provable from the notion of an effective procedure, without reliance on any (partially) mathematical thesis or conjecture concerning effective procedures, such as the Church–Turing thesis, that the class of effective procedures is undecidable, i.e., that there is no effective procedure for ascertaining whether a given procedure is effective. The proof does not even appeal to a precise definition of ‘effective procedure’. Instead, it relies solely and entirely on a basic grasp of the intuitive notion of such a procedure. Though the result itself is not surprising, it is also not without significance. It has the consequence, for example, that the solution to a decision problem, if it is to be complete, must be accompanied by a separate argument that the proposed ascertainment procedure is, in fact, a decision procedure, i.e., effective—for example, that it invariably terminates with the correct verdict. Full article
(This article belongs to the Special Issue Turing the Philosopher: Established Debates and New Developments)
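The diagonal flavor of the result can be rendered schematically (our framing, not Salmon's own argument): if a total decider for effectiveness existed, the contrarian procedure below would be effective exactly when it is not.

```python
# Suppose a total decider `is_effective` existed. Feed it a contrarian
# procedure that consults the decider about itself and then does the
# opposite: whichever verdict the decider returns is wrong, so no such
# decider exists. (Here "effective" for a no-input procedure amounts to
# invariably terminating with the correct verdict, as in the abstract.)

def contrarian_factory(is_effective):
    def contrarian():
        if is_effective(contrarian):
            while True:       # verdict "effective" -> never terminate
                pass          # (hence not effective: contradiction)
        return "halted"       # verdict "not effective" -> terminate at once
    return contrarian         # (hence effective: contradiction)

# Try the only two verdicts a hypothetical decider could give:
says_no = contrarian_factory(lambda p: False)
print(says_no())              # halts, so the "not effective" verdict was wrong

says_yes = contrarian_factory(lambda p: True)
# says_yes() would loop forever, so the "effective" verdict would be wrong
```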
27 pages, 4989 KB  
Article
Quality of Service Generalization using Parallel Turing Integration Paradigm to Support Machine Learning
by Abdul Razaque, Mohamed Ben Haj Frej, Gulnara Bektemyssova, Muder Almi’ani, Fathi Amsaad, Aziz Alotaibi, Noor Z. Jhanjhi, Mohsin Ali, Saule Amanzholova and Majid Alshammari
Electronics 2023, 12(5), 1129; https://doi.org/10.3390/electronics12051129 - 25 Feb 2023
Cited by 4 | Viewed by 2145
Abstract
The Quality-of-Service (QoS) provision in machine learning (ML) is affected by lower accuracy, noise, random error, and weak generalization. The Parallel Turing Integration Paradigm (PTIP) is introduced as a solution to lower accuracy and weak generalization. A logical table (LT) is part of the PTIP and is used to store datasets. The PTIP has elements that enhance classifier learning, enhance 3-D cube logic for security provision, and balance the engineering process of paradigms. The probability weightage function for adding and removing algorithms during the training phase is included in the PTIP. Additionally, it uses local and global error functions to limit overconfidence and underconfidence in learning processes. By utilizing the local gain (LG) and global gain (GG), the optimization of the model’s constituent parts is validated. By blending the sub-algorithms with a new dataset in a foretelling and realistic setting, the PTIP validation is further ensured. A mathematical modeling technique is used to ascertain the efficacy of the proposed PTIP. The results of the testing show that the proposed PTIP obtains a lower relative accuracy of 38.76% with error bounds reflection. The lower relative accuracy with low GG is considered good. The PTIP also obtains 70.5% relative accuracy with high GG, which is considered an acceptable accuracy. Moreover, the PTIP achieves a better accuracy of 99.91% with a 100% fitness factor. Finally, the proposed PTIP is compared with cutting-edge, well-established models and algorithms based on different state-of-the-art parameters (e.g., relative accuracy, accuracy with fitness factor, fitness process, error reduction, and generalization measurement). The results confirm that the proposed PTIP demonstrates better results as compared to contending models and algorithms. Full article
(This article belongs to the Special Issue Application of Machine Learning in Big Data)

16 pages, 1280 KB  
Article
Simulation of Closed Timelike Curves in a Darwinian Approach to Quantum Mechanics
by Carlos Baladrón and Andrei Khrennikov
Universe 2023, 9(2), 64; https://doi.org/10.3390/universe9020064 - 22 Jan 2023
Cited by 2 | Viewed by 7374
Abstract
Closed timelike curves (CTCs) are non-intuitive theoretical solutions of general relativity field equations. The main paradox associated with the physical existence of CTCs, the so-called grandfather paradox, can be satisfactorily solved by a quantum model named Deutsch-CTC. An outstanding theoretical result that has been demonstrated in the Deutsch-CTC model is the computational equivalence of a classical and a quantum computer in the presence of a CTC. In this article, in order to explore the possible implications for the foundations of quantum mechanics of that equivalence, a fundamental particle is modelled as a classical-like system supplemented with an information space in which a randomizer and a classical Turing machine are stored. The particle could then generate quantum behavior in real time in case it was controlled by a classical algorithm coding the rules of quantum mechanics and, in addition, a logical circuit simulating a CTC was present on its information space. We analyze the conditions under which, through the action of evolution under natural selection, a population of such particles with both elements on their information spaces might emerge from initial sheer random behavior. Full article
(This article belongs to the Section Foundations of Quantum Mechanics and Quantum Gravity)
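The Deutsch-CTC consistency condition the model builds on has a compact textbook form: the CTC state ρ must satisfy ρ = Tr_in[U(ρ_in ⊗ ρ)U†]. The numpy sketch below (our illustration; the choice of U = CNOT and the |+⟩ input are arbitrary) finds such a fixed point by simple iteration:

```python
# Deutsch's consistency condition: the density matrix rho on the CTC
# qubit must equal the reduced state that U sends back around the curve.
# A fixed point always exists; here iteration converges to the maximally
# mixed state for U = CNOT with |+> on the chronology-respecting input.

import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)   # control: the input qubit

plus = np.full((2, 2), 0.5, dtype=complex)       # |+><+| fed into U
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # initial guess |0><0|

def trace_out_first(rho4):
    """Trace out the first (chronology-respecting) qubit."""
    return np.einsum('ijik->jk', rho4.reshape(2, 2, 2, 2))

for _ in range(50):
    joint = CNOT @ np.kron(plus, rho) @ CNOT.conj().T
    rho = trace_out_first(joint)

print(np.round(rho.real, 3))   # [[0.5 0. ] [0.  0.5]], a consistent CTC state
```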
