1. Introduction
Informational processes have often been taken as a common way to explain mental representation. However, whether this approach can go further and capture the special features of mental representation, beyond describing the underlying processes, remains an open issue. I defend the negative view: the informational approach is not a successful and complete explanation of mental representation, which may in turn shed light on the non-intelligence of AI.
2. Background
Mental representation has been one of the central issues in the philosophy of mind. It refers to mental phenomena that have mental content. There are many examples of mental representation in our daily life, such as mental pictures, perception, belief, desire, imagination, and memory. Usually, we characterize a mental representation as having a representational state and representational content. For instance, when you see that there is a yellow car in the parking lot, the representational state is “seeing”, while the representational content is “that yellow car in the parking lot”.
There are many different views on mental representation. One of them can be called the informational approach. With the development of psychology and cognitive science, information has become a common way to explain mental representation. According to this approach, informational processes are basic to the occurrence of all kinds of mental representation. Thus, “information” is not just a core concept in computational theory but should also be a common concept when we explain mental representation [1,2].
3. Informational Approach and Its Problems
There are reasons why informational processes have been considered a common way to explain mental representation. One reason is that mechanistic explanations are preferred in theorizing, even among philosophers. With the progress of sciences such as psychology and cognitive science, and with the evolution of the concept of information, such mechanisms can be cast in terms of information. Many scientists and philosophers therefore believe that mental representations, such as perceptions or memories, can be reduced to processes of information transmission. Another reason is that proponents would like to identify representational content with informational content, so that the hard part of mental representation, namely how to explain representational content, would automatically go away.
However, there are two questions for the informational approach:
- (1) Can information cover the mentality of mental representation?
- (2) Can mental representation be reduced to informational processes?
Let us consider the first question: can information cover the mentality of mental representation? There are many things in the world that carry information but are not mental representations. For instance, smoke might indicate fire, a thermometer signifies the temperature of its surroundings, and computers store and process enormous amounts of information. However, none of these are mental representations. For a state to be a mental representation, there must be something distinctive that makes it mental rather than non-mental. I think there are two features of mental representation that make it special. First, mental representation is subjective in a certain way. We often hear that consciousness is subjective in such a way that it cannot be explained objectively. Mental representation, although it also has an objective side, has subjective features of the same kind that may not be easily explained in an objective way, which means that physical and informational processes cannot tell the whole story. It does not even matter whether the subjectivity of mental representation is understood as the first-person perspective, indexicality, or something else. As long as one admits that there is some subjective aspect of mental representation, the informational approach will have trouble giving a satisfactory account of it.
Second, mental representation has been considered to have semantic properties: it can represent correctly or incorrectly, which means misrepresentation is possible. Take perception as an example; there are veridical or accurate perceptions, but also non-veridical perceptions such as illusions and hallucinations. It is precisely these semantic properties of mental representation that cannot be explained very well by information, even by so-called informational semantics. Since such accounts typically rely on causal relations or mechanisms, it is not obvious that they can explain misrepresentation, because there is no such thing as “wrong” causation.
Now let us consider the second question: can mental representation ultimately be reduced to informational processes? The answers “yes” and “no” lead to quite different consequences. If mental representation can be reduced to informational processes, this would not only remove the distinctive features of the mind but also open up more possibilities for artificial intelligence. To some people this might be welcome, but I doubt that it is true. In my opinion, mental representation cannot be reduced to informational processes. First, while “information” is a useful tool and an interesting way for us to understand mental representation, mental representation has unique features, such as the subjectivity and semantic properties discussed above, that cannot be reduced to information. Second, although there is such a thing as informational content, it is not identical to representational content.
4. Conclusions and Further Thoughts about AI
Based on the arguments above, I conclude that mental representation cannot be reduced to informational processes, since the informational approach has trouble explaining mental representation with respect to both its subjective aspect and its semantic properties.
Furthermore, I think this conclusion may inspire some interesting ideas about AI, which has recently been regarded as a very promising enterprise, even though these ideas entail a seemingly negative standpoint.
Here is a short argument, based on the above conclusion, for doubting the intelligence of AI:
(1) The informational approach has trouble explaining mental representation.
(2) Intelligence is a kind of mental state, or at least a capability that involves mental representation.
(3) Therefore, if AI is based on the informational approach and on a confident belief in the explanatory power of information, it is not as intelligent as it claims to be.
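To make the structure of this argument more explicit, here is a minimal schematization in standard logical notation. The shorthand (E, R, I, a) and the bridging premise P3 are my own reconstruction, only one possible way of spelling the argument out, and are not part of its original formulation:

```latex
% Rough schematization of the argument above (the shorthand is mine):
%   E    : the informational approach fully explains mental representation
%   R(x) : x has mental representation
%   I(x) : x is genuinely intelligent
%   a    : an AI system built solely on informational processes
\begin{align*}
  \textbf{P1:}\quad & \neg E
      && \text{(Section 3: information cannot cover subjectivity or misrepresentation)} \\
  \textbf{P2:}\quad & \forall x\,\bigl(I(x) \rightarrow R(x)\bigr)
      && \text{(intelligence involves mental representation)} \\
  \textbf{P3:}\quad & R(a) \rightarrow E
      && \text{(if such an AI had mental representation, informational processes would explain it)} \\
  \textbf{C:}\quad  & \neg I(a)
      && \text{(P1 and P3 give } \neg R(a) \text{ by modus tollens; with P2, } \neg I(a) \text{ follows)}
\end{align*}
```

The schema itself is valid, so the weight of the argument rests on the premises, in particular on P3, which is my way of cashing out what it means for an AI to be “based on” the informational approach.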
If this argument is sound, then, since the informational approach has trouble explaining mental representation, AI built on the confident belief in the explanatory power of information and data is not really as intelligent as it claims to be.