Proceeding Paper

Equivalence, (Crypto) Morphism and Other Theoretical Tools for the Study of Information †

by
Marcin J. Schroeder
Global Learning Center, Tohoku University, Sendai 980-8576, Japan
† Presented at the Conference on Theoretical Information Studies (TIS), Berkeley, CA, USA, 2–6 June 2019.
Proceedings 2020, 47(1), 12; https://doi.org/10.3390/proceedings2020047012
Published: 8 May 2020
(This article belongs to the Proceedings of IS4SI 2019 Summit)

Abstract

The meaning of information can be understood as a relationship between information systems. This study presents a brief outline of theoretical tools for the analysis of this relationship. Considering the informational character of reality, it is natural to extend the relationships between signs to include the concept of meaning as another instance of a relation between the informational entities of a sign and its denotation. However, this approach to the semantics of information does not require any specific ontological commitment, as the intention is always directed towards the object presented to us as a structural manifestation of information. Whether there is something that differs from this informational structure and is beyond our capacity to comprehend directly, or whether there are objects that are the result of our own active engagement in their formation, is a matter of ontological position, with respect to which our approach is neutral. The experience of logic tells us about the dangers of self-reference and the problem of the non-definability of truth, demonstrated by Tarski. To avoid similar problems, we need precise theoretical tools to analyze relationships between information systems and between instances of information involved in semantics. These tools are also necessary for the definition and analysis of levels of informational abstraction that extend beyond the traditional linguistic and logical context.

1. Introduction

The main obstacle in the development of semantics, in its linguistic and logical forms and, more recently, in the more general semantics of information, was the perceived difference between the ontological status of a sign and the status of its denotation. Sometimes this division was associated with the mind–body duality, when a sign involved in a symbolic relationship was considered a private, mental entity (belonging to res cogitans) and its denotation was understood as a component or element of objective reality (res extensa). In the 19th century, Brentano wanted to make this relationship of intention between symbol and denotation a subject of systematic study, but at the price of giving the intention an exclusively mental status. The discussion of diverse positions on the subject of intention and on the choice of the definition of meaning continued without ever reaching consensus, and recently it was transformed into a discussion of the computational character of cognition and the possibility (or impossibility) of designing artefacts capable of understanding meaning in symbolic communication.
The present author proposed an alternative solution to the issues created by the difference in ontological status between a sign and its denotation by considering both to be instances of information [1]. In this approach, intention does not have to cross the border between essentially different entities. After all, when we use symbols—informational entities per se (for instance, linguistic expressions)—we direct our intention not to independent entities outside of our mental reality with a different ontological status, but to other informational entities, whether linguistic or not. The word “cow” has as its denotation information integrated into an object identified by its properties and relations to other objects within the perceptual field. Some of these perceived properties may be private (qualia) and some may be subject to objectivization in the process of enculturation. Without the reference to information identifying the specific object, the word “cow” is meaningless.
Following the idea introduced in the author’s earlier work considering the meaning of information as a relationship between information systems [1], this study presents a brief overview of theoretical tools for the analysis of this relationship. Common-sense discussions of the meaning of information usually assume that the same information can be encoded or symbolized in many different ways without alteration of the meaning (for instance, with the use of different languages). We can explore this relationship (equivalence) between signs pointing to the same denotation as one side of the symbolic function. The other side lies in the process of categorization and abstraction, where one sign can point at many (equivalent) objects.
This simple observation may bring an immediate association with the work of Rudolf Wille on Formal Concept Analysis [2]. However, Wille developed his analysis exclusively for the linguistic context. Since we want to consider the much broader context of general information, it is necessary to engage the theoretical study of information, in particular the structural study of information, with an essentially different formalism. Moreover, knowing the formidable obstacles in logical studies of semantics, for instance the problem of the non-definability of truth demonstrated by Alfred Tarski, we have to be very cautious about the traps inherent in attempts to build semantics for information based on the concept of truth. For these reasons, it will be necessary in this study to use a relatively high level of formal exposition.

2. Information and Its Structural Manifestation

The idea of the meaning of information as a relationship between the two informational entities of a sign and a denotation can be applied to any conceptualization of information. However, in such a vague formulation, it does not have much value for work on specific methods of informational semantics. For this reason, it will be necessary to make a decision on the choice of the definition of information and to make reference to the theory of information built upon this definition [3,4].
The present author’s definition of information is based on only one categorial (non-definable) concept of the one-many opposition. This minimalistic conceptual framework for information has its advantage in the development of its formal theory, consisting in a direct connection to mathematical concepts. Information is defined as an identification of a variety, understood as that which makes one out of many. This can be achieved by a selection of one component out of the many (selective manifestation of information) or by equipping the many with a structure that unites it (structural manifestation of information). These are two coexisting manifestations of information, not two types of information, as one always requires the presence of the other, although possibly for a different variety. While the reference to selection is quite straightforward and does not require elaborate explanation, the reference to a structure may call for explanation. Of course, the formalism of a theoretical model of information has to address both manifestations.
Thus, the concept of information requires a variety (many), which can be understood as an arbitrary set S (called a carrier of information). The information system is this set S equipped with the family of subsets M satisfying two conditions: (1) the entire set S belongs to the family M, and (2) together with every subfamily of M, its intersection belongs to M, i.e., M is a Moore family of subsets. Of course, this means that we have a closure operator defined on S (i.e., a function f on the power set 2^S of a set S) such that [5,6]:
(1) For every subset A of S, A ⊆ f(A);
(2) For all subsets A, B of S, A ⊆ B ⇒ f(A) ⊆ f(B);
(3) For every subset A of S, f(f(A)) = f(A).
The set S with a closure operator f defined on it is usually called a closure space and is represented by the symbol <S, f>. Alternatively, this closure space can be defined as a distinction of the Moore family M of subsets of S.
The Moore family M of subsets is simply the family f-Cl of all closed subsets, i.e., subsets A of S such that A = f(A). The family of closed subsets M = f-Cl, ordered by set-theoretical inclusion, is equipped with the structure of a complete lattice Lf. Lf can play a role in the generalization of logic for (not necessarily linguistic) information systems; although it does not have to be a Boolean algebra, in many cases it maintains all the fundamental characteristics of a logical system [6].
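To make the formalism tangible, here is a minimal sketch in Python (the carrier S = {1, 2, 3} and the particular Moore family M are illustrative assumptions of this exposition, not examples from the cited works): it computes the closure operator induced by a Moore family and verifies conditions (1)–(3) above, recovering M as the family of closed subsets.

```python
# Illustrative sketch: a finite Moore family on S = {1, 2, 3} and the
# closure operator f it induces. Subsets are represented as frozensets.
from itertools import combinations

S = frozenset({1, 2, 3})
# A Moore family: contains S and is closed under arbitrary intersections.
M = {frozenset(), frozenset({1}), frozenset({1, 2}), S}

def powerset(X):
    xs = list(X)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def f(A):
    """f(A) = intersection of all members of M that contain A."""
    result = S
    for C in M:
        if A <= C:
            result = result & C
    return result

for A in powerset(S):
    assert A <= f(A)                     # (1) A is contained in f(A)
    assert f(f(A)) == f(A)               # (3) f is idempotent
    for B in powerset(S):
        if A <= B:
            assert f(A) <= f(B)          # (2) f is monotone

# The family of f-closed subsets coincides with the Moore family M.
assert {A for A in powerset(S) if f(A) == A} == M
```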
Information itself is a distinction of a subfamily ℑ of M, such that it is closed with respect to (pair-wise) intersection and is dually hereditary, i.e., with each subset belonging to ℑ, all subsets of S including it belong to ℑ (i.e., ℑ is a filter in the lattice Lf).
The Moore family M can represent a variety of structures of a particular type (e.g., geometric, topological, algebraic, logical, etc.) defined on the subsets of S. This corresponds to the structural manifestation of information. The filter ℑ, in turn, associated in many mathematical theories with localization, can be used as a tool for identification, i.e., for the selection of an element within the family M and, under some conditions, within the set S. For instance, in the context of Shannon’s selective information based on a probability distribution of the choice of an element in S, ℑ consists of the subsets of S with probability measure 1, while M is simply the set of all subsets of S.
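The filter condition can be checked mechanically on a finite example. In the sketch below (again with a hypothetical Moore family of my choosing), the closed sets containing a fixed closed set form a principal filter in the lattice Lf: closed under pairwise intersection and dually hereditary.

```python
# Illustrative sketch: a principal filter in the lattice of closed sets.
S = frozenset({1, 2, 3})
M = {frozenset(), frozenset({1}), frozenset({1, 2}), S}  # closed sets of <S, f>

# All closed sets containing {1}: the principal filter generated by {1}.
F = {C for C in M if frozenset({1}) <= C}

def is_filter(fam, lattice):
    """Closed under pairwise intersection and upward closed within the lattice."""
    meets = all((A & B) in fam for A in fam for B in fam)
    upward = all(C in fam for A in fam for C in lattice if A <= C)
    return meets and upward

assert is_filter(F, M)
print(sorted((sorted(C) for C in F), key=len))  # [[1], [1, 2], [1, 2, 3]]
```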
Now that we have the basic mathematical formalism for information, we can proceed to the formalization of the theory of morphisms: functions that preserve informational structure or, in the mathematical language of lattice theory, homomorphisms of closure spaces. This type of mapping is of crucial importance for our understanding of symbolic representation, as it is defined as a mapping of the information of the sign to the information of the denotation.
If we have two closure spaces <S, f> and <T, g>, then a function φ: S → T is called a homomorphism of closure spaces if it satisfies the condition: ∀A ⊆ S: φ(f(A)) ⊆ g(φ(A)).
It can be easily recognized that this is exactly the same condition that defines continuous functions in the case of topological spaces (topological information), and as in topology, for general transitive closure spaces it is equivalent to the requirement that the inverse image of every g-closed subset is f-closed. It is important to notice that homomorphisms of closure spaces can be defined between closure spaces of very different types in diverse disciplines of mathematics.
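On finite carriers, the homomorphism condition can be tested exhaustively. The sketch below (the two closure spaces and the map φ are hypothetical illustrations, not taken from the paper) checks φ(f(A)) ⊆ g(φ(A)) for every subset A of S.

```python
# Illustrative sketch: a homomorphism between two finite closure spaces.
from itertools import combinations

def powerset(X):
    xs = list(X)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def closure(A, moore_family, universe):
    """Closure of A: intersection of all members of the family containing A."""
    result = universe
    for C in moore_family:
        if A <= C:
            result = result & C
    return result

S = frozenset({1, 2, 3})
M = {frozenset(), frozenset({1}), frozenset({1, 2}), S}   # defines <S, f>
T = frozenset({'a', 'b'})
N = {frozenset(), frozenset({'a'}), T}                    # defines <T, g>
phi = {1: 'a', 2: 'a', 3: 'b'}

def is_homomorphism(phi):
    img = lambda X: frozenset(phi[x] for x in X)
    return all(img(closure(A, M, S)) <= closure(img(A), N, T)
               for A in powerset(S))

assert is_homomorphism(phi)   # phi(f(A)) ⊆ g(phi(A)) holds for all A ⊆ S
```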
Now, when we add the condition that the function φ is bijective and satisfies the stronger condition ∀A ⊆ S: φ(f(A)) = g(φ(A)), we get an isomorphism of closure spaces. From the point of view of mathematical theory, isomorphic closure spaces can be considered identical. Finally, isomorphisms of <S, f> onto itself (i.e., when S = T and f = g) are called automorphisms or transformations of closure spaces. It can be easily shown that the class of all automorphisms of a closure space <S, f> forms a group with respect to the composition of functions.
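For a finite closure space, the automorphism group can be found by brute force. The sketch below (the deliberately symmetric Moore family is an illustrative assumption) enumerates all automorphisms and confirms that they are closed under composition.

```python
# Illustrative sketch: the automorphism group of a small closure space.
from itertools import combinations, permutations

def powerset(X):
    xs = list(X)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

S = frozenset({1, 2, 3})
# A symmetric Moore family: every singleton is closed.
M = {frozenset(), frozenset({1}), frozenset({2}), frozenset({3}), S}

def f(A):
    result = S
    for C in M:
        if A <= C:
            result = result & C
    return result

def is_automorphism(perm):
    phi = dict(zip(sorted(S), perm))
    img = lambda A: frozenset(phi[x] for x in A)
    return all(img(f(A)) == f(img(A)) for A in powerset(S))

autos = [p for p in permutations(sorted(S)) if is_automorphism(p)]
print(len(autos))   # 6: here every permutation preserves the structure

# Group property: the composition of two automorphisms is an automorphism.
compose = lambda p, q: tuple(p[q[i] - 1] for i in range(len(q)))
assert all(compose(p, q) in autos for p in autos for q in autos)
```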
The original choice of the closure space formalism made by the author to develop a general theory of information was guided by purely pragmatic considerations. Nothing in the author’s definition of information as an identification of a variety, nor in the long line of continuing attempts to formulate a theory of structural information, from René Thom’s famous 1972 book “Structural Stability and Morphogenesis” [7] to recent contributions [8], compels its use, other than the fact that the formalism is very general and allows us to consider practically all existing formally formulated theories of information as special cases. Closure space morphisms give us transitions between very different types of information. For instance, we can consider a symbolic relationship between linguistic systems governed by classical logic, described in terms of the consequence closure operation, and its denotation described in terms of geometry, topology, or some other form of morphology.
Certainly, this is a common practice of mathematical science to look for a formalism in which we have representation of all theoretical terms and that describes the subject of the study adequately. Nothing more is expected in mathematical practice. Yet, the choice of formalism calls for a more careful philosophical reflection going beyond pragmatic justification.
Another issue motivating this study is the question about the meaning of the concept of a structure frequently used but rarely defined outside of particular contexts. We have well-defined structures in many mathematical theories (e.g., relational structures, algebraic structures, topological structures, etc.), but under more careful inspection, the question “What does structure mean in general?” is rarely asked and, if asked, is far from being answered. The problem is that we often refer to “equivalent” definitions of structures without any formal justification or explanation for this equivalence outside of the particular cases. Our own two definitions of a closure space presented above, one involving the concept of a closure operator and the other involving the concept of a Moore family of subsets, are considered equivalent. We can switch between the two definitions freely, but how do we describe this equivalence outside of this specific context?
In standard situations, the equivalence of structures is defined by isomorphisms. Two isomorphic structures are identical for everyday mathematical practice. Therefore, the question is how to formalize the relationship of equivalence of structures when, due to differences in the concepts used, we cannot consider isomorphisms. The name “cryptomorphism” appeared in this context and became a standard expression. Garrett Birkhoff introduced the concept of cryptomorphism in 1967 (actually he used two terms, apparently understood in the same way; the other was “crypto-isomorphism”) in the context of abstract algebras defined in alternative ways, excluding the use of isomorphism as a criterion for their equivalence [6].
Birkhoff illustrated the need to go beyond the usual association of algebraic structures through isomorphisms by the example of the concept, omnipresent in mathematics, of a group. A group can be considered as an abstract algebra with four different signatures (types of operations according to the number of operands entering algebraic operations): (1, 2), (0, 1, 2), (2, 2), and (2). Algebras cannot be isomorphic if they have different signatures, and the idea of “polymorphism” presented by Bourbaki in their early attempts to formalize the general concept of a structure does not work here. Birkhoff proposed a solution for the association of algebraic structures, but it is heavily dependent on the context of abstract algebras and can be applied only to so-called varieties (classes of algebras defined by polynomial equations). His approach cannot be extended to other types of structures, for instance topological or geometric. Thus far, the concept of cryptomorphism has not acquired any formal general definition, and the term is used as a generic description of an ad hoc translation between different conceptual frameworks for the study of particular mathematical objects.
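Birkhoff’s point can be made concrete with a small worked example (the choice of the cyclic group Z4 is mine and purely illustrative): the same group presented in the signature (0, 1, 2), i.e., constant, inverse, and product, and in the signature (2), where a single binary “division” operation x/y = x·y⁻¹ suffices to recover identity, inverse, and product. The two presentations cannot be isomorphic as abstract algebras, yet they describe the same group.

```python
# Illustrative sketch: one group, two signatures (Z4, written additively).
n = 4
mul = lambda x, y: (x + y) % n      # signature (0, 1, 2): product ...
inv = lambda x: (-x) % n            # ... inverse ...
e = 0                               # ... and identity element

div = lambda x, y: mul(x, inv(y))   # signature (2): division alone

# Recover the (0, 1, 2) presentation from division alone:
e_rec = div(3, 3)                          # x / x = e for any x
inv_rec = lambda y: div(e_rec, y)          # e / y = inverse of y
mul_rec = lambda x, y: div(x, inv_rec(y))  # x / (e / y) = x * y

assert e_rec == e
assert all(inv_rec(y) == inv(y) for y in range(n))
assert all(mul_rec(x, y) == mul(x, y) for x in range(n) for y in range(n))
```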
The present author started his attempt to develop a general theory of structures from the formulation of the concept of symmetry based on closure space theory [9,10]. Following the general idea of Felix Klein’s Erlangen Program, structures have been associated with the invariants of symmetries. This study is intended as a link between the symmetry-based general concept of structure and the more traditional methodology of the study of abstract concepts based on equivalence relations.
The limited space and scope of this paper do not allow for more than a very brief and general description of the triangular relationship between the three concepts of an equivalence relation, a group action on a set, and the lattice of substructures. The latter two concepts appear in the formulation of the theory of general symmetry, but without any association with the concept of equivalence relations, which is fundamental for the process of abstraction [9,10]. The mutual cryptomorphic interdependence of all three concepts sets the foundation for structural abstraction, i.e., for considering classes of structures as one structural object.
The cryptomorphic character of the relationship does not lead to an error of circularity, because we consider here only three specific structures and their cryptomorphisms are defined for a very specific context. We do not attempt to engage arbitrary structures as tools, but rather to engage the three specific structures as tools for the inquiry of arbitrary structures. Therefore, the three particular instances of cryptomorphisms are used for the purpose of raising the level of abstraction.
It is appropriate to explain a little more why these three specific structures are so important for our purposes. Equivalence relations are fundamental not only for mathematics, but for any form of abstract thinking. Abstraction is a process of transition from the lower level of individual objects to the higher level of abstract concepts, reducing complexity by eliminating individual differences irrelevant for the purpose of our consideration. Instead of dealing with the huge variety of individual properties of the objects that we study, in reference to only a few properties or relations of interest we consider entire classes of mutually equivalent individuals sharing the relevant properties as individuals of a higher level of abstraction.
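As a minimal illustration of this passage to a higher level (the objects and the “relevant property” are hypothetical stand-ins), abstraction can be modeled as the quotient of a set by the equivalence “same value of the property of interest”, discarding all other individual differences.

```python
# Illustrative sketch: abstraction as a quotient by an equivalence relation.
from collections import defaultdict

def quotient(objects, relevant):
    """Classes of the equivalence: x ~ y iff relevant(x) == relevant(y)."""
    classes = defaultdict(list)
    for x in objects:
        classes[relevant(x)].append(x)
    return list(classes.values())

# Individual words, abstracted by the single property "word length".
words = ["cow", "ox", "bull", "cat", "dog"]
print(quotient(words, len))   # [['cow', 'cat', 'dog'], ['ox'], ['bull']]
```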
Group actions on sets are fundamental tools of the natural sciences, in particular of physics. Group actions are used for the conceptualization of symmetry, or the local invariance of selected (symmetric) collectives under transformations that globally may change all individuals. An example of such a symmetry could be a symmetric configuration of points forming some structure; but also a human being, or any biological organism, is an invariant structure whose material components are constantly being replaced by new ones.
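A finite sketch of this idea (the generating permutation and the point set are illustrative assumptions): the orbits of a group action, i.e., the invariant collectives described above, can be computed by closing each point under the generating transformations.

```python
# Illustrative sketch: orbits of the group generated by one permutation.
def orbits(generators, points):
    """Orbits of the group generated by `generators` (dicts point -> point)."""
    remaining = set(points)
    result = []
    while remaining:
        seed = remaining.pop()
        orbit, frontier = {seed}, [seed]
        while frontier:
            x = frontier.pop()
            for g in generators:
                y = g[x]
                if y not in orbit:
                    orbit.add(y)
                    frontier.append(y)
        remaining -= orbit
        result.append(orbit)
    return result

# The permutation (1 2 3)(4 5) moves every point, yet the orbits {1, 2, 3}
# and {4, 5} are invariant as sets: local invariance under global change.
rot = {1: 2, 2: 3, 3: 1, 4: 5, 5: 4}
print(orbits([rot], {1, 2, 3, 4, 5}))   # e.g., [{1, 2, 3}, {4, 5}]
```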
The least obvious is the third structure, the lattice of substructures. Probably the most disappointing aspect of Birkhoff’s extensive study of lattices was the fact, discovered early on, that the lattice of substructures does not identify the structure uniquely. Otherwise, lattice theory would have already given the ultimate answer to the general question “What is a structure?”. For instance, non-isomorphic groups can have isomorphic lattices of subgroups [6]; the cyclic groups of any two distinct prime orders, for example, both have a two-element chain as their lattice of subgroups. This, however, does not disqualify lattices as very convenient tools. For our purposes, it turns out that instead of any specific lattice, such as the lattice of substructures, we can study several lattices of substructures invariantly with respect to the action of subgroups of structural automorphisms.
Thus, the tool for the general concept of a structure is not one of the three structures described above, but their mutual interdependence. An early description of the method briefly outlined here can be found in another paper by the author [11], while a more elaborate description is in preparation.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Schroeder, M.J. Semantics of Information: Meaning and Truth as Relationships between Information Carriers. In The Computational Turn: Past, Presents, Futures? Proceedings of IACAP 2011, Aarhus University, 4–6 July 2011; Ess, C., Hagengruber, R., Eds.; Monsenstein und Vannerdat: Münster, Germany, 2011; pp. 120–123. Available online: https://coeckelbergh.files.wordpress.com/2015/03/55.pdf (accessed on 7 May 2020).
2. Wille, R. Restructuring lattice theory: An approach based on hierarchies of concepts. In Ordered Sets: Proceedings of the NATO Advanced Study Institute, Banff, AB, Canada, 28 August–12 September 1981; Rival, I., Ed.; NATO Science Series C, Vol. 83; Springer: Dordrecht, The Netherlands, 1982; pp. 445–470. Reprinted in Formal Concept Analysis: Proceedings of the 7th International Conference, ICFCA 2009, Darmstadt, Germany, 21–24 May 2009; Ferré, S., Rudolph, S., Eds.; Springer Science and Business Media: Berlin, Germany, 2009.
3. Schroeder, M.J. Philosophical Foundations for the Concept of Information: Selective and Structural Information. In Proceedings of the Third International Conference on the Foundations of Information Science, Paris, France, 4–7 July 2005. Available online: http://www.mdpi.org/fis2005/F.58.paper.pdf/ (accessed on 7 May 2020).
4. Schroeder, M.J. From Philosophy to Theory of Information. Int. J. Inf. Theor. Appl. 2011, 18, 56–68.
5. Schroeder, M.J. Algebraic Model for the Dualism of Selective and Structural Manifestations of Information. In RIMS Kokyuroku; Kondo, M., Ed.; No. 1915; Research Institute for Mathematical Sciences, Kyoto University: Kyoto, Japan, 2014; pp. 44–52.
6. Birkhoff, G. Lattice Theory, 3rd ed.; American Mathematical Society Colloquium Publications: Providence, RI, USA, 1967; Volume XXV.
7. Thom, R. Structural Stability and Morphogenesis (Advanced Books Classics); Benjamin-Cummings, Longman: San Francisco, CA, USA, 1975.
8. Burgin, M.; Feistel, R. Structural and Symbolic Information in the Context of the General Theory of Information. Information 2017, 8, 139.
9. Schroeder, M.J. Concept of Symmetry in Closure Spaces as a Tool for Naturalization of Information. In Algebraic System, Logic, Language and Computer Science, RIMS Kokyuroku; Horiuchi, K., Ed.; No. 2008; Research Institute for Mathematical Sciences, Kyoto University: Kyoto, Japan, 2016; pp. 29–36. Available online: http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/2008-04.pdf (accessed on 8 May 2020).
10. Schroeder, M.J. Exploring Meta-Symmetry for Configurations in Closure Spaces. In Developments of Language, Logic, Algebraic System and Computer Science, RIMS Kokyuroku; Horiuchi, K., Ed.; No. 2051; Research Institute for Mathematical Sciences, Kyoto University: Kyoto, Japan, 2017; pp. 35–42. Available online: http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/2051-07.pdf (accessed on 7 May 2020).
11. Schroeder, M.J. Structures and Their Cryptomorphic Manifestations: Searching for Inquiry Tools. In Algebraic Systems, Logic, Language and Related Areas in Computer Science, RIMS Kokyuroku; Adachi, T., Ed.; No. 2130; Research Institute for Mathematical Sciences, Kyoto University: Kyoto, Japan, 2019. Available online: http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/2130-08.pdf (accessed on 7 May 2020).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
