Application of Information Theory Entropy as a Cost Measure in the Automatic Problem Solving †
Abstract
1. Introduction
- The never-ending dream of universal problem-solving methods has resurfaced throughout the history of computer science:
- Negative results: a universal algorithm does not exist; the halting/decision problem of the UTM (Turing, 1936) can be represented as a special instance of the universal algorithm, and the No Free Lunch Theorem (Wolpert and Macready, 1997) shows that no search algorithm can outperform all others averaged over all problems;
- Positive result: a universal algorithm can be approximated asymptotically, to an arbitrary degree (Eberbach, CEC'2003).
2. Automatic Problem Solving: $-Calculus SuperTuring Model of Computation
- $-Calculus (read: Cost Calculus) [2,3,4] is a process algebra using anytime algorithms [5] for interactive problem solving, targeting intractable and undecidable problems. It is a formalization of resource-bounded computation (anytime algorithms) [5], guaranteed to produce better results as more resources (e.g., time, memory) become available. Its unique feature is support for problem solving by incrementally searching for solutions and using a cost performance measure to direct the search.
- In short: $-Calculus = Process Algebra + Anytime Algorithms.
- Historically, $-Calculus has been inspired by both λ-Calculus and π-Calculus:
  - Church's λ-Calculus (1936): sequential algorithms, equivalent to the TM model, built around the function as a basic primitive.
  - Milner's π-Calculus (1992): parallel algorithms, a superTuring model, but with no support for automatic problem solving; built around interaction.
  - Eberbach's $-Calculus (1997): parallel algorithms, a superTuring model, with built-in support for automatic problem solving (kΩ-optimization); built around cost.
- The kΩ-optimization meta-search represents this "impossible to construct" but "possible to approximate indefinitely" universal algorithm, i.e., it approximates the universal algorithm. It is a very general search method, able to simulate many other search algorithms, including A*, minimax, dynamic programming, tabu search, and evolutionary algorithms.
- Simplicity: everything is a $-expression. $-Calculus is an open system, uses prefix notation with a potentially infinite number of arguments, and consists of simple and complex $-expressions.
- Simple (atomic) $-expressions:
  - send (→ a P1 P2 …): send Pi through channel a;
  - receive (← a X1 X2 …): receive Xi from channel a;
  - cost ($ P1 P2 …): compute the cost of Pi;
  - suppression (‘ P1 P2 …): suppress the evaluation of Pi;
  - atomic function call (a P1 P2 …) and its definition (:= (a X1 X2 …) ^P^);
  - negation (¬a P1 P2 …): negation of an atomic function call.
- Complex $-expressions:
  - general choice (+ P1 P2 …): pick one of the Pi;
  - cost choice (? P1 P2 …): pick the Pi with the smallest cost;
  - adversary choice (# P1 P2 …): pick the Pi with the highest cost;
  - sequential composition (. P1 P2 …): execute P1 P2 … sequentially;
  - parallel composition (|| P1 P2 …): execute P1 P2 … in parallel;
  - function call (f P1 P2 …) and its definition (:= (f X1 X2 …) P).
  A toy encoding of some of these operators is sketched below.
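To make the prefix syntax concrete, the following toy Python encoding (not part of the $-calculus literature; all names and costs are invented) evaluates the cost of a $-expression for three of the operators above: sequential composition, cost choice, and adversary choice.

```python
# Toy encoding (invented): a $-expression is either an atomic action name
# or a tuple (operator, arg1, arg2, ...) in prefix notation.
ATOMIC_COST = {"a1": 2.0, "a2": 5.0, "a3": 1.0}  # invented action costs

def cost(expr):
    """Cost of a $-expression under three of the operators above."""
    if isinstance(expr, str):               # atomic action, e.g. "a3"
        return ATOMIC_COST[expr]
    op, *args = expr
    if op == ".":                           # sequential composition: costs add
        return sum(cost(a) for a in args)
    if op == "?":                           # cost choice: cheapest branch
        return min(cost(a) for a in args)
    if op == "#":                           # adversary choice: costliest branch
        return max(cost(a) for a in args)
    raise ValueError(f"unknown operator {op!r}")

# ( ? ( . a1 a2 ) a3 ): cost choice between a two-step sequence and a3
print(cost(("?", (".", "a1", "a2"), "a3")))  # -> 1.0 (a3 wins)
```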
- Search methods (and kΩ-optimization in particular) can be:
  - complete, if no solutions are omitted;
  - optimal, if the highest-quality solution is found;
  - totally optimal, if the highest-quality solution is found with minimal search cost.
- Search can involve single or multiple agents; for multiple agents it can be:
  - cooperative ($-calculus cost choice used);
  - competitive ($-calculus adversary choice used);
  - random ($-calculus general choice used).
- Search can be:
  - offline (n = 0: the complete solution is computed first and executed afterwards, without perception);
  - online (n ≠ 0: action execution and computation are interleaved).
- Total optimization combines the search cost $2 and the solution cost $3 through an aggregating function $1, i.e., $ = $1($2, $3); a numeric sketch follows this list:
  - If $1 is the identity function, we obtain Pareto optimality, keeping the objectives separate.
  - For optimization (best-quality solutions), $2 is fixed and only $3 is used.
  - For search optimization (minimal search cost), $3 is fixed and only $2 is used.
  - For total optimization (best-quality solutions with minimal search cost), $1, $2, and $3 are all used.
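To make the three regimes concrete, here is a minimal numeric sketch; all candidate names, weights, and numbers are invented, and the weighted sum standing in for $1 is only one possible choice of aggregating function.

```python
# Invented candidates, each with a search cost ($2) and a solution cost ($3).
candidates = {
    "plan_A": {"search": 10.0, "solution": 3.0},
    "plan_B": {"search": 2.0,  "solution": 7.0},
    "plan_C": {"search": 4.0,  "solution": 4.0},
}

def dollar1(d2, d3, w=0.5):
    """Hypothetical aggregating function $1: a weighted sum of $2 and $3."""
    return w * d2 + (1 - w) * d3

# Optimization: $2 fixed (ignored), minimize the solution cost $3 only.
best_quality = min(candidates, key=lambda k: candidates[k]["solution"])

# Search optimization: $3 fixed (ignored), minimize the search cost $2 only.
cheapest_search = min(candidates, key=lambda k: candidates[k]["search"])

# Total optimization: minimize the combined cost $ = $1($2, $3).
total = min(candidates,
            key=lambda k: dollar1(candidates[k]["search"],
                                  candidates[k]["solution"]))

print(best_quality, cheapest_search, total)  # -> plan_A plan_B plan_C
```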
- kΩ-optimization (meta-search): a very general search method that dynamically builds optimal or "satisficing" plans of actions from atomic and complex $-expressions.
- kΩ-meta-search is controlled by the parameters:
  - k, the depth of search;
  - b, the width of search;
  - n, the depth of execution;
  - Ω, the alphabet for optimization.
- kΩ-optimization works by iterating through three phases: select, examine, and execute (a schematic sketch of this loop follows).
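The following is a highly simplified, self-contained sketch of the select/examine/execute loop. The target-string task, the cost function, and all helper names are invented for illustration and should not be read as the formal kΩ-optimization of [3,4]; the sketch only shows how k, b, n, and Ω drive the control structure.

```python
import itertools

# Schematic, illustrative-only kOmega-style loop (NOT the formal
# kOmega-optimization of [3,4]): build a target string over alphabet Omega.
TARGET = "abba"  # invented toy goal

def cost(plan, state):
    """Invented cost: mismatches against TARGET plus remaining length gap."""
    attempt = state + "".join(plan)
    mismatches = sum(x != y for x, y in zip(attempt, TARGET))
    return mismatches + abs(len(attempt) - len(TARGET))

def k_omega_search(k, b, n, Omega):
    state = ""  # initialization phase: start from the empty solution
    while state != TARGET:
        # select: candidate plans over Omega, at most k actions deep,
        # with the candidate set truncated to width b
        candidates = [p for depth in range(1, k + 1)
                      for p in itertools.product(Omega, repeat=depth)][:b]
        # examine: cost choice (?) keeps the cheapest candidate plan
        best = min(candidates, key=lambda p: cost(p, state))
        # execute: perform only the first n actions, then re-plan
        # (online search, n != 0; execution and search are interleaved)
        state += "".join(best[:n])
    return state

print(k_omega_search(k=2, b=50, n=1, Omega="ab"))  # -> abba
```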
- It is a very flexible and powerful method that combines the best of both worlds: deliberative agents for flexibility, and reactive agents for robustness.
- The "best" programs are the programs with minimal cost: each statement in the language has its associated cost $ (this leads to a new paradigm: cost languages).
- $-calculus is built around the central notion of cost. Cost functions provide a uniform criterion for the search and for the quality of solutions in problem solving. They have their roots in von Neumann/Morgenstern utility theory and satisfy the axioms for utilities [5]. In decision theory, they allow one to choose the states with optimal utilities on average (the maximum expected utility principle).
- In $-calculus, they allow one to choose the states with minimal costs subject to uncertainty (expressed by probabilities, or by fuzzy-set or rough-set membership functions), as sketched below.
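A minimal sketch of the minimal-cost-under-uncertainty rule, the mirror image of the maximum expected utility principle; the actions, probabilities, and costs are all made up.

```python
# Choosing the action with minimal expected cost under uncertainty
# (invented actions, probabilities, and costs; the dual of the
# maximum expected utility principle).
actions = {
    # action: list of (probability, cost) pairs over possible outcomes
    "left":  [(0.8, 2.0), (0.2, 10.0)],   # E[$] = 3.6
    "right": [(0.5, 1.0), (0.5, 6.0)],    # E[$] = 3.5
}

def expected_cost(outcomes):
    return sum(p * c for p, c in outcomes)

best = min(actions, key=lambda a: expected_cost(actions[a]))
print(best, expected_cost(actions[best]))  # -> right 3.5
```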
- It is not clear whether it is possible to define a minimal and complete set of cost functions, i.e., one usable "for anything". $-calculus approximates this desire for universality by defining a standard cost function usable for many things (but not for everything; thus, a user may define their own cost functions).
3. Application of Information Theory Entropy as an Instance of $-Calculus Cost Measure
The worked example traces the kΩ-optimization loop step by step:
0. t = 0, initialization phase init: S0 = ε0;
1. t = 1, first loop iteration;
2. t = 2, second loop iteration.
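Since the iterations above are only outlined, here is a minimal sketch of the paper's central idea: using Shannon entropy [1] as the cost measure $, so that the cost choice (?) selects the alternative with the least uncertainty. The alternatives and their probability distributions are made up.

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), used as the cost measure $."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Invented alternatives, each a probability distribution over outcomes;
# the cost choice (?) selects the branch with minimal entropy,
# i.e., the least uncertain alternative.
alternatives = {
    "P1": [0.5, 0.5],   # maximal uncertainty: H = 1.0 bit
    "P2": [0.9, 0.1],   # H ~= 0.47 bits
    "P3": [1.0],        # deterministic: H = 0.0 bits
}

best = min(alternatives, key=lambda name: entropy(alternatives[name]))
print(best, entropy(alternatives[best]))  # -> P3 0.0
```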
4. Conclusions and Future Work
Conflicts of Interest
References
- Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656.
- Eberbach, E. Approximate reasoning in the algebra of bounded rational agents. Int. J. Approx. Reason. 2008, 49, 316–330.
- Eberbach, E. The $-Calculus process algebra for problem solving: A paradigmatic shift in handling hard computational problems. Theor. Comput. Sci. 2007, 383, 200–243.
- Eberbach, E. $-Calculus of bounded rational agents: Flexible optimization as search under bounded resources in interactive systems. Fundam. Inform. 2005, 68, 47–102.
- Russell, S.; Norvig, P. Artificial Intelligence: A Modern Approach, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010.
- Burgin, M. Super-Recursive Algorithms; Springer: New York, NY, USA, 2005.
- Kolmogorov, A.N. On tables of random numbers. Sankhyā Ser. A 1963, 25, 369–375.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).