## 1. Vagueness: A Brief Overview

An object is vague when its properties and capacities are not sharply determined. Alternatively, we can say that a concept is vague when it is characterized by fuzzy boundaries. The Sorites Paradox, which was introduced by Eubulides of Miletus, is a typical example of an argument that demonstrates what fuzzy boundaries are. The term ‘sorites’ derives from the Greek word soros, which means “heap”. However, “given the male preoccupation with cranial hairiness, it has been more common to pose the argument in terms of baldness rather than heaps” [1]. Let us now explain the paradox in some detail.

Imagine that there is a group of men that forms a line in front of us. The leftmost man, Jayson, has a completely hairless head, and the rightmost man, Dennis, has a full head of hair. In between, each man has a non-perceptibly greater amount of cranial hair than the man to his left. Clearly, Jayson is bald, and Dennis is not bald. However, there is no observable difference between Jayson and the man next to him, so that man must also be bald. Similarly, the second man does not observably differ from the third, and by continuing this train of thought, we conclude that Dennis is also bald!

There are three main views of vagueness:

- The ‘ontic’ view: the world itself is vague and, consequently, language is vague too.
- The ‘semantic’ view: vagueness exists only in our language and our thoughts.
- The ‘epistemic’ view: vagueness exists because we do not know where the boundaries of a “vague” concept lie.

This author supports the ontic view, and therefore, the ideas presented below are affected by this “prejudice”.

## 2. On Information

‘Information’ can be viewed as facts provided or learned about something or someone. In general, information flow is crucial for the dissemination of information and knowledge, where knowledge can be considered as information plus analysis and/or experience. The mathematical representation of information is quite a challenging task. The representation introduced by Barwise and Seligman [2] is quite interesting and useful. It employs the notion of a type. Although many readers will be familiar with this word, we suspect that few of them can really explain the difference between a set and a type.

Herman Geuvers [3] elegantly explained the difference, and we borrow his explanation. First of all, a type is not a set. Types may look like sets, but they give syntactic information. For example, in the expression

$$4/x+2:\mathsf{nat},$$

the word $\mathsf{nat}$ says that the expression is an integer, meaning that all its constituents are integer numbers, operations, and variables. Now, consider the expression

$$8\in \{n\in \mathbb{N}\mid \exists m\,(n=2\cdot m)\}.$$

This states that 8 is an even number. More generally, to say that 8 is an even number we need a proof establishing that there is a natural number that, when multiplied by 2, equals 8. To show that $4/x+2:\mathsf{nat}$, we need a computation that makes sure that all terms of the expression are $\mathsf{nat}$.
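The contrast can be illustrated in code. In the sketch below (a hypothetical illustration, not taken from [3]), checking the type of an expression is a mechanical traversal of its syntax, whereas showing that 8 is even requires exhibiting a witness m with 2·m = 8; the expression format and function names are my own.

```python
# Sketch (invented for illustration): typing vs. set membership.
# Typing is a syntactic computation over the expression; evenness
# requires producing a witness m such that 2 * m == n.

def has_type_nat(expr):
    """Syntactic check: every leaf of the expression is a natural number."""
    if isinstance(expr, int):
        return expr >= 0
    op, left, right = expr  # expressions are (operator, left, right) triples
    return op in {"+", "*", "/"} and has_type_nat(left) and has_type_nat(right)

def witness_even(n):
    """Proof of evenness: a natural number m with 2 * m == n, or None."""
    return n // 2 if n % 2 == 0 else None

print(has_type_nat(("+", ("/", 4, 2), 2)))  # the expression 4/2 + 2
print(witness_even(8))                      # the witness establishing 8 = 2*4
```

The two checks have a genuinely different character: the first never inspects values beyond their syntactic class, while the second must construct evidence.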

## 3. Classifications

The basic ingredient of the representation of Barwise and Seligman is the classification:

**Definition** **1.** A classification $\mathbf{A}=(A,{\mathsf{\Sigma}}_{A},{\vDash}_{A})$ consists of a set A of objects to be classified, called the tokens of $\mathbf{A}$, a set ${\mathsf{\Sigma}}_{A}$ of objects used to classify the tokens, called the types of $\mathbf{A}$, and a binary relation ${\vDash}_{A}$ between A and ${\mathsf{\Sigma}}_{A}$, which tells one which tokens are classified as being of which types.

In simple words, we have a set of types, a set of objects (the tokens), and a relation that tells us the type of each object. Thus, $a{\vDash}_{A}\alpha $ means that the token a is of type $\alpha $.

**Example** **1.** Suppose that we have a set B of batteries and ${\mathsf{\Sigma}}_{B}$ is a set that includes the types empty, full, etc. The relation ${\vDash}_{B}$ can be used to classify the batteries. Thus, ${b}_{1}{\vDash}_{B}$ empty means that battery ${b}_{1}$ is empty.
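Example 1 can be rendered as a small data structure. The sketch below is illustrative only (the token names and the particular pairs in the relation are invented): a classification is just a triple of tokens, types, and a set of (token, type) pairs.

```python
# A classification as a triple (tokens, types, classifies) -- illustrative sketch.
batteries = {
    "tokens": {"b1", "b2", "b3"},
    "types": {"empty", "full"},
    # the binary relation |=_B as a set of (token, type) pairs
    "classifies": {("b1", "empty"), ("b2", "full"), ("b3", "full")},
}

def holds(classification, token, type_):
    """a |=_A alpha: is the token classified as being of this type?"""
    return (token, type_) in classification["classifies"]

print(holds(batteries, "b1", "empty"))  # b1 |=_B empty
```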

Functions are used to map numbers to numbers or, more generally, objects to objects. For example, we can map a complex object to a simpler one by discarding some properties, attributes, and characteristics of the former that the latter does not have. An infomorphism is a “function” that maps one classification to another. Thus, if one classification is about flashlight bulbs and their status (e.g., lit, unlit, and live) and another is about batteries and their status, an infomorphism maps flashlight bulbs and their status to batteries and their status. Formally:
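The Barwise–Seligman condition on an infomorphism is that ${f}^{\to}(b){\vDash}_{A}\alpha$ holds exactly when $b{\vDash}_{B}{f}^{\leftarrow}(\alpha)$, and it can be checked mechanically. The sketch below verifies it for a toy bulb/battery pair; all tokens, types, and mappings are invented for illustration.

```python
# Sketch: checking the infomorphism condition
#   f->(b) |=_A alpha   iff   b |=_B f<-(alpha)
# for a toy pair of classifications (bulbs A, batteries B). Names invented.

bulbs = {("bulb1", "lit"), ("bulb2", "unlit")}   # the relation |=_A
batts = {("batt1", "full"), ("batt2", "empty")}  # the relation |=_B

f_tok = {"batt1": "bulb1", "batt2": "bulb2"}     # f->: B -> A on tokens
f_typ = {"lit": "full", "unlit": "empty"}        # f<-: Sigma_A -> Sigma_B on types

def is_infomorphism():
    """Check the biconditional for every token b of B and every type alpha of A."""
    return all(
        ((f_tok[b], a) in bulbs) == ((b, f_typ[a]) in batts)
        for b in f_tok
        for a in f_typ
    )

print(is_infomorphism())  # True: the condition holds for every pair (b, alpha)
```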

**Definition** **2.** If $\mathbf{A}=(A,{\mathsf{\Sigma}}_{A},{\vDash}_{A})$ and $\mathbf{B}=(B,{\mathsf{\Sigma}}_{B},{\vDash}_{B})$ are classifications, then an infomorphism is a pair $f=({f}^{\to},{f}^{\leftarrow})$ of functions, where ${f}^{\to}:B\to A$ and ${f}^{\leftarrow}:{\mathsf{\Sigma}}_{A}\to {\mathsf{\Sigma}}_{B}$, satisfying the following condition:

$${f}^{\to}(b){\vDash}_{A}\alpha \quad\text{if and only if}\quad b{\vDash}_{B}{f}^{\leftarrow}(\alpha ),$$

for all tokens b of $\mathbf{B}$ and all types α of $\mathbf{A}$.

## 4. Fuzzy Classifications

Fuzzy classifications are objects of the Dialectica category ${\mathbf{Dial}}_{[0,1]}\mathbf{Set}$ [4]. This means that the objects are identical to classifications; however, the morphisms are a bit different. For simplicity, we will discuss the objects and morphisms of the category ${\mathbf{Dial}}_{[0,1]}\mathbf{Set}$. In what follows, these will be called fuzzy classifications.
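Concretely, a fuzzy classification replaces the crisp relation with one that assigns each (token, type) pair a degree in [0, 1]. A minimal sketch, reusing the battery example (all degrees and names invented for illustration):

```python
# Sketch: a fuzzy classification stores a degree in [0,1] for each (token, type).
fuzzy_batts = {
    ("b1", "empty"): 0.9,  # b1 is classified as empty with degree 0.9
    ("b1", "full"): 0.1,
    ("b2", "empty"): 0.3,
    ("b2", "full"): 0.7,
}

def degree(rel, token, type_):
    """The value |=_A(a, alpha); pairs not listed get degree 0."""
    return rel.get((token, type_), 0.0)

print(degree(fuzzy_batts, "b1", "empty"))  # 0.9
```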

**Definition** **3.** A fuzzy classification $\mathbf{A}$ is a triple $(A,{\mathsf{\Sigma}}_{A},{\vDash}_{A})$, where A is a set of objects to be classified, called the tokens of $\mathbf{A}$, ${\mathsf{\Sigma}}_{A}$ is a set of objects used to classify the tokens, called the types of $\mathbf{A}$, and ${\vDash}_{A}$ is a fuzzy binary relation over A and ${\mathsf{\Sigma}}_{A}$, such that

$${\vDash}_{A}(a,\alpha )=i,\quad i\in [0,1],$$

which means that the object a is classified as being of type α with a degree equal to i. Instead of ${\vDash}_{A}(a,\alpha )=i$, one can write $a{\vDash}_{A}^{i}\alpha $.

## 5. Discussion

The notion of classification is based on the assumption that nothing is lost during information flow. Of course, this is not true: in reality, information is lost, and this loss is a source of errors. However, what causes this loss of information? One possible answer is noise. Noise is a manifestation of vagueness, as it renders information not crisp. There are various forms of noise: thermal, intermodulation, cross-talk, impulse, and shot noise.

Consider again the relation

$${\vDash}_{A}({f}^{\to}(b),\alpha )\le {\vDash}_{B}(b,{f}^{\leftarrow}(\alpha )).$$

Once an error happens in a classification (remember that a classification describes a system), it becomes more noticeable during propagation. This explains why I have opted to use the ≤ operator. Now, it is straightforward to define the tensor multiplication and the addition of two fuzzy classifications. The hard work is to see what these new structures can tell us about information that propagates in a vague environment. In addition, I need to see whether fuzzy classifications can be used to interpret things that happen during information flow that is not lossless.
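A map of fuzzy classifications can be checked in the same mechanical way as a crisp infomorphism, with the biconditional replaced by ≤ between degrees. The sketch below (all degrees, tokens, and mappings invented for illustration) verifies the inequality for a toy bulb/battery pair:

```python
# Sketch: checking the <= condition for a map of fuzzy classifications,
#   |=_A(f->(b), alpha)  <=  |=_B(b, f<-(alpha)).
# All degrees and names below are invented for illustration.

fuzzy_bulbs = {("bulb1", "lit"): 0.8}   # the fuzzy relation |=_A
fuzzy_batts = {("batt1", "full"): 0.9}  # the fuzzy relation |=_B

f_tok = {"batt1": "bulb1"}  # f->: B -> A on tokens
f_typ = {"lit": "full"}     # f<-: Sigma_A -> Sigma_B on types

def deg(rel, token, type_):
    """Degree of membership; unlisted pairs get 0."""
    return rel.get((token, type_), 0.0)

def is_fuzzy_infomorphism():
    """Check the <= condition for every token b of B and every type alpha of A."""
    return all(
        deg(fuzzy_bulbs, f_tok[b], a) <= deg(fuzzy_batts, b, f_typ[a])
        for b in f_tok
        for a in f_typ
    )

print(is_fuzzy_infomorphism())  # True: 0.8 <= 0.9
```

The degree may only grow (or stay equal) across the map, which is one way to read the claim that an error becomes more noticeable as information propagates.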