This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

In this paper we discuss Floridi’s views concerning semantic information in the light of a recent contribution (in collaboration with the present author) [

“Philosophical work on the concept of [...] information is still at that lamentable stage when disagreement affects even the way in which the problems themselves are provisionally phrased and framed” [

The word “information” has been given many different meanings by various writers in the field of information theory. It is likely that at least a number of these will prove sufficiently useful in certain applications to deserve further study and permanent recognition. It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field [

In this paper we discuss Floridi’s views concerning

We start, in

According to the received view, logical deduction never increases (semantic) information. This tenet clashes with the intuitive idea that deductive arguments are useful just because, by their means, we obtain information that we did not possess before. However, it allowed philosophers and mathematicians to justify their view of logic and mathematics as infallible activities that are not subject to the tribunal of experience.

One of the trademarks of modern, or “logical”, empiricism was the rejection of the possibility of “synthetic a priori” judgements. (We briefly recall some philosophical terminology. A judgment is

In such a way logical analysis overcomes not only metaphysics in the proper, classical sense of the word, especially scholastic metaphysics and that of the systems of German idealism, but also the hidden metaphysics of Kantian and modern apriorism. The scientific world-conception knows no unconditionally valid knowledge derived from pure reason, no “synthetic judgments a priori” of the kind that lie at the basis of Kantian epistemology and even more of all pre- and post-Kantian ontology and metaphysics. [...] It is precisely in the rejection of the possibility of synthetic knowledge a priori that the basic thesis of modern empiricism lies. The scientific world-conception knows only empirical statements about things of all kinds, and analytic statements of logic and mathematics [

The conception of mathematics as tautological in character, which is based on the investigations of Russell and Wittgenstein, is also held by the Vienna Circle. It is to be noted that this conception is opposed not only to apriorism and intuitionism, but also to the older empiricism (for instance of J.S. Mill), which tried to derive mathematics and logic in an experimental-inductive manner as it were [

It is typical of any purely logical deduction that the conclusion to which it leads simply re-asserts (a proper or improper) part of what has already been stated in the premises. Thus, to illustrate this point by a very elementary example, from the premise “This figure is a right triangle”, we can deduce the conclusion, “This figure is a triangle”; but this conclusion clearly reiterates part of the information already contained in the premise. [...] The same situation prevails in all other cases of logical deduction; and we may, therefore, say that logical deduction—which is the one and only method of mathematical proof—is a technique of conceptual analysis: it discloses what assertions are concealed in a given set of premises, and it makes us realize to what we committed ourselves in accepting those premises; but none of the results obtained by this technique ever goes by one iota beyond the information already contained in the initial assumptions [

Once the justification of deductive inference is perceived as philosophically problematic at all, the temptation to which most philosophers succumb is to offer too strong a justification: to say, for instance, that when we recognize the premises of a valid inference as true, we have thereby already recognized the truth of the conclusion [

The conception of logical deduction as “analytic”, and therefore “tautological”, is a persistent dogma of (logical) empiricism that seems to be somewhat independent of Quine’s two dogmas [

In fact, in

In his latest work Quine appears to leave aside this idea that some logical laws may be synthetic. For example, in his

Yes so, on this score I think of the truths of logic as analytic in the traditional sense of the word, that is to say true by virtue of the meaning of the words. Or as I would prefer to put it: they are learned or can be learned in the process of learning to use the words themselves, and involve nothing more [

In the middle of the 20th century, Bar-Hillel and Carnap’s theory of “semantic information” provided what is, to date, the strongest theoretical justification for the thesis that deductive reasoning is “tautological”. Although their effort was clearly inspired by the rising enthusiasm for Shannon and Weaver’s new Theory of Information [

The Mathematical Theory of Communication, often referred to also as “Theory of (Transmission of) Information”, as practised nowadays, is not interested in the content of the symbols whose information it measures. The measures, as defined, for instance, by Shannon, have nothing to do with what these symbols symbolise, but only with the frequency of their occurrence. [...] This deliberate restriction of the scope of the Statistical Communication Theory was of great heuristic value and enabled this theory to reach important results in a short time. Unfortunately, however, it often turned out that impatient scientists in various fields applied the terminology and the theorems of Communication Theory to fields in which the term “information” was used, presystematically, in a semantic sense, that is, one involving contents or designata of symbols, or even in a pragmatic sense, that is, one involving the users of these symbols [

Suppose we are interested in the weather forecast for tomorrow and that we focus only on the possible truth values of the two sentences “it will rain tomorrow” (
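On this picture, the semantic information carried by a sentence is the set of possible states it excludes. A minimal sketch in Python (the atom names `r` and `w` and the helper `cont` are our own illustrative choices, not notation from the paper):

```python
from itertools import product

# The four possible states for the two atoms of the weather example:
# r = "it will rain tomorrow", w = "it will be windy tomorrow".
STATES = list(product([True, False], repeat=2))   # all (r, w) pairs

def cont(sentence):
    """Bar-Hillel/Carnap content: the set of states a sentence excludes."""
    return {s for s in STATES if not sentence(*s)}

# The stronger a sentence, the more states it excludes:
assert cont(lambda r, w: r and w) > cont(lambda r, w: r)   # strict superset
assert cont(lambda r, w: r or w) < cont(lambda r, w: r)    # strict subset
# A tautology excludes nothing, so it carries no information at all:
assert cont(lambda r, w: r or not r) == set()
```

On this measure, the conjunction is more informative than either conjunct, which in turn is more informative than the disjunction, in line with Popper’s remark that the more a statement prohibits, the more it says.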

The same basic idea, identifying the information carried by a sentence with the set of the possible states that it excludes, had already made its appearance in Popper’s

The amount of positive information about the world which is conveyed by a scientific statement is the greater the more likely it is to clash, because of its logical character, with possible singular statements. (Not for nothing do we call the laws of nature “laws”: the more they prohibit the more they say.) [

[...]

It might then be said, further, that if the class of potential falsifiers of one theory is “larger” than that of another, there will be more opportunities for the first theory to be refuted by experience; thus compared with the second theory, the first theory may be said to be “falsifiable in a higher degree”. This also means that the first theory says more about the world of experience than the second theory, for it rules out a larger class of basic statements. [...] Thus it can be said that the amount of empirical information conveyed by a theory, or its empirical content, increases with its degree of falsifiability [

A straightforward consequence of Bar-Hillel and Carnap’s notion of “semantic information” is that contradictions, like “tomorrow it will rain and it will not rain”, carry the maximum amount of information, since they exclude all possible states. Another inevitable consequence of the theory is that the semantic information carried by a conclusion ψ is included in that carried by premises φ_{1},...,φ_{n} if and only if the conditional (φ_{1} ∧ ... ∧ φ_{n}) → ψ is a tautology.
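Both consequences can be checked mechanically on the toy two-atom language. The sketch below is our own illustration (the helper names `cont` and `is_tautology` are assumptions): it verifies that a contradiction excludes every state, and that content inclusion coincides with the tautologyhood of the associated conditional:

```python
from itertools import product

STATES = list(product([True, False], repeat=2))   # valuations of (r, w)

def cont(s):
    # Bar-Hillel/Carnap content: the set of states a sentence excludes.
    return {v for v in STATES if not s(*v)}

def is_tautology(s):
    return all(s(*v) for v in STATES)

# A contradiction excludes every possible state: maximal content.
assert cont(lambda r, w: r and not r) == set(STATES)

# Content inclusion coincides with tautologyhood of the conditional:
# CONT(conclusion) is included in CONT(premise) iff premise -> conclusion
# is a tautology.
cases = [
    (lambda r, w: r and w, lambda r, w: r),        # valid inference
    (lambda r, w: r or w,  lambda r, w: r and w),  # invalid inference
]
for prem, concl in cases:
    assert (cont(concl) <= cont(prem)) == \
           is_tautology(lambda r, w: (not prem(r, w)) or concl(r, w))
```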

Bar-Hillel and Carnap were well aware that their theory of semantic information sounded counterintuitive in connection with contradictory (sets of) sentences, as shown by the near-apologetic remark they included in their [

It might perhaps, at first, seem strange that a self-contradictory sentence, hence one which no ideal receiver would accept, is regarded as carrying with it the most inclusive information. It should, however, be emphasized that semantic information is here not meant as implying truth. A false sentence which happens to say much is thereby highly informative in our sense. Whether the information it carries is true or false, scientifically valuable or not, and so forth, does not concern us. A self-contradictory sentence asserts too much; it is too informative to be true [

But the importance of the requirement of consistency will be appreciated if one realizes that a self-contradictory system is uninformative. It is so because any conclusion we please can be derived from it. Thus no statement is singled out, either as incompatible or as derivable, since all are derivable. A consistent system, on the other hand, divides the set of all possible statements into two: those which it contradicts and those with which it is compatible. (Among the latter are the conclusions which can be derived from it.) This is why consistency is the most general requirement for a system, whether empirical or non-empirical, if it is to be of any use at all [

Cohen and Nagel were among the first to point out that the traditional tenet that logical deduction is devoid of any informational content sounds paradoxical:

If in an inference the conclusion is not contained in the premises, it cannot be valid; and if the conclusion is not different from the premises, it is useless; but the conclusion cannot be contained in the premises and also possess novelty; hence inferences cannot be both valid and useful [

A few decades later Jaakko Hintikka described this paradox as a true “scandal of deduction”:

C.D. Broad has called the unsolved problems concerning induction a scandal of philosophy. It seems to me that in addition to this scandal of induction there is an equally disquieting scandal of deduction. Its urgency can be brought home to each of us by any clever freshman who asks, upon being told that deductive reasoning is “tautological” or “analytical” and that logical truths have no “empirical content” and cannot be used to make “factual assertions”: in what other sense, then, does deductive reasoning give us new information? Is it not perfectly obvious there is some such sense, for what point would there otherwise be to logic and mathematics? [

If no objective, non-psychological increase of information takes place in deduction, all that is involved is merely psychological conditioning, some sort of intellectual psychoanalysis, calculated to bring us to see better and without inhibitions what objectively speaking is already before our eyes. Now most philosophers have not taken to the idea that philosophical activity is a species of brainwashing. They are scarcely any more favourably disposed towards the much more far-fetched idea that all the multifarious activities of a contemporary logician or mathematician that hinge on deductive inference are as many therapeutic exercises calculated to ease the psychological blocks and mental cramps that initially prevented us from being, in the words of one of these candid positivists, “aware of all that we implicitly asserted” already in the premises of the deductive inference in question [

A non-psychologistic attempt to avoid the paradox consists in blaming it on the imperfection of our logical language. In his

In a logically perfect language the recognition of tautologies should be immediate. Since the deducibility of a certain conclusion from a given set of premises is equivalent to the tautologyhood of the conditional whose antecedent is the conjunction of the premises and whose consequent is the conclusion of the inference, the correctness of any inference would prove, in a symbolism of the kind, to be immediately visible. So, given a “suitable notation”, logical deduction could actually be reduced to the mere

When the truth of one proposition follows from the truth of others, we can see this from the structure of the propositions. (

In a suitable notation we can in fact recognize the formal properties of propositions by mere inspection of the propositions themselves. (6.122).

Every tautology itself shows that it is a tautology. (6.127(b))

In accordance with Wittgenstein’s idea, one could specify a procedure that translates sentences into a “perfect notation” that fully brings out the information they convey, for instance by computing the whole truth-table for the conditional that represents the inference. Such a table displays all the relevant possible worlds and allows one to distinguish immediately those that make a sentence true from those that make it false, the latter representing (collectively) the “semantic information” carried by the sentence. Once the translation has been performed, logical consequence can be recognized by “mere inspection”. Thus, if information could be fully unfolded by means of some mechanical translation into a “perfect logical language”, the scandal of deduction could be avoided without appealing to psychologism. Sometimes we fail to immediately “see” that a conclusion is implicit in the premises because we express both in a concise notation, a sort of stenography that prevents us from fully recognizing the formal properties of propositions until we decode it into an adequate notation. From this point of view, semantic information would be a perfectly good way of specifying the information carried by a sentence with reference to an algorithmic procedure of translation. (On the theme of a “logically perfect language” see also [
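A sketch of this “translation” (the helper name `rows` is our own assumption): computing the full truth-table of the conditional associated with an inference makes its tautologyhood visible by inspection of the rows, at the cost of a table that grows as 2^n:

```python
from itertools import product

# "Translating" an inference into a perfect notation: compute the full
# truth-table of the associated conditional; tautologyhood is then
# visible by mere inspection of the rows.
def rows(sentence, n):
    return [(v, sentence(*v)) for v in product([True, False], repeat=n)]

# Example: from p AND q infer p, i.e. the conditional (p AND q) -> p.
table = rows(lambda p, q: (not (p and q)) or p, 2)
assert all(value for _, value in table)   # every row true: a tautology

# The price of "mere inspection": the table has 2^n rows, which is
# exactly why the method does not scale to many atoms.
assert len(rows(lambda *v: True, 10)) == 2 ** 10
```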

Although this idea may seem to work well for propositional logic, one can easily see how the Church-Turing undecidability theorem excludes the possibility of a perfect language, in Wittgenstein’s sense, for first-order logic: since first-order logical truth is undecidable, we can never find an algorithm to translate every sentence into a perfect language in which its tautologyhood could be immediately decided by mere inspection. This negative result is also the main motivation for Hintikka’s criticism of Bar-Hillel and Carnap’s notion of semantic information.

[. . . ] measures of information which are not effectively calculable are well-nigh absurd. What realistic use can there be for measures of information which are such that we in principle cannot always know (and cannot have a method of finding out) how much information we possess? One of the purposes the concept of information is calculated to serve is surely to enable us to review what we know (have information about) and what we do not know. Such a review is in principle impossible, however, if our measures of information are non-recursive [

The truths of propositional logic are [...] tautologies, they do not carry any new information. Similarly, it is easily seen that in the logically valid inferences of propositional logic the information carried by the conclusion is smaller or at most equal to the information carried by the premises. The term “tautology” thus characterizes very aptly the truths and inferences of propositional logic. One reason for its one-time appeal to philosophers was undoubtedly its success in this limited area” ([

Thus, some degree of uncertainty about whether or not a certain conclusion follows from given premises cannot be, in general, completely eliminated even in the restricted and “simple” domain of propositional logic. So, if we take seriously the time-honoured and common-sense concept of information, according to which information consists in reducing uncertainty, we should conclude that in some cases deductive reasoning does provide us with new information.

The scandal of deduction has recently received renewed attention leading to a number of original contributions (e.g., [

Another widely debated paradox connected with the received view on logical deduction arises in the context of modal characterizations of propositional attitudes and is nothing but a variant of the “scandal of deduction” described in the previous section. According to the standard logic of knowledge (epistemic logic) and belief (doxastic logic), as well as to the more recent attempts to axiomatize the “logic of being informed” (information logic), if an agent a knows (believes, is informed) that φ → ψ and knows (believes, is informed) that φ, then a also knows (believes, is informed) that ψ. Reading □_{a}φ as “a knows (believes, is informed) that φ”, this follows from the “distribution axiom”:

(1) □_{a}(φ → ψ) → (□_{a}φ → □_{a}ψ)

and the “necessitation rule”:

(2) if ├ φ, then ├ □_{a}φ

On the other hand, despite its paradoxical flavour, (2) seems an inescapable consequence of the standard Kripke-style semantical characterization of the logics under consideration. The latter is carried out in terms of structures of the form (W, R_{1},...,R_{n}), where W is a set of “possible worlds”, each R_{a} ⊆ W × W is the accessibility relation of agent a, and □_{a}φ is true at a world w if and only if φ is true at every world w′ such that w R_{a} w′.
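This Kripke-style clause can be sketched in a small toy model (the model, its worlds and its valuation are our own illustrative assumptions, not taken from the paper); the sketch also checks that the distribution axiom holds at every world, so the box modality is closed under modus ponens whether or not such closure is feasible for a real agent:

```python
# A toy single-agent Kripke model: worlds W, accessibility relation R,
# and a valuation for the atoms p and q.
W = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2")}
val = {"w1": {"p"}, "w2": {"p", "q"}, "w3": {"p", "q"}}

def box(phi, w):
    """Box phi holds at w iff phi holds at every R-successor of w."""
    return all(phi(v) for (u, v) in R if u == w)

p = lambda w: "p" in val[w]
q = lambda w: "q" in val[w]
impl = lambda a, b: (lambda w: (not a(w)) or b(w))

# The distribution axiom box(phi -> psi) -> (box phi -> box psi)
# holds at every world, by the very shape of the truth clause:
for w in W:
    assert (not box(impl(p, q), w)) or (not box(p, w)) or box(q, w)

# At w1 the agent "knows" p and p -> q, hence also q -- regardless of
# whether deriving q would be feasible for the agent in practice.
assert box(p, "w1") and box(impl(p, q), "w1") and box(q, "w1")
```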

Now, under this reading of the consequence relation ├, which is based on classical propositional logic, (2) may perhaps be satisfied by an “idealized reasoner”, in some sense to be made more precise, but is not satisfied, and is not likely to ever be satisfiable, in practice. (It should be noted that the appeal to an “idealized reasoner” usually has the effect of sweeping under the rug a good deal of interesting questions, including how idealized such a reasoner should be. Idealization may well be a matter of degree.) As mentioned above, even restricting ourselves to the domain of propositional logic, the theory of computational complexity tells us that the decision problem for Boolean logic is co-NP-complete, and this means that any real agent, even if equipped with an up-to-date computer running a decision procedure for Boolean logic, will never be able to feasibly recognize that certain Boolean sentences logically follow from sentences that she regards as true. So, the clash between (2) and the classical notion of logical consequence, which arises in any real application context, may only be solved either by waiving the assumption stated in (2), or by waiving the consequence relation of classical logic in favour of a weaker one with respect to which it may be safely assumed that the modality □_{a} is closed under logical consequence.

In this section we discuss Floridi’s ideas on the anomalies of the received view. The reader is warned that our exposition shows a strong bias for the ideas put forward in a joint paper by Floridi and the present author [

Floridi’s key idea for dissolving the Bar-Hillel-Carnap paradox (BCP) is startlingly simple. Observe that the problem with the standard theory of semantic information is confined to inconsistent (sets of) sentences. So, if we endorse the view that “information encapsulates truth” [

We shall not discuss TSSI in detail, since we chose to concentrate on Floridi’s approach to the scandal of deduction. We just remark, in passing, that despite all its merits, this theory may perhaps be criticized, very much like the neopositivistic tenet that logic is informationally trivial, as a philosophical overkill. If the problem lies with “inconsistent information”, why not simply say that “information encapsulates consistency” and try to define a notion of semantic information where inconsistent (sets of) sentences are qualified as uninformative, in line with Popper’s informal view (

It must be observed however that even requiring that “information encapsulates consistency” appears too demanding. In many interesting contexts agents are not able to tell whether their data are (classically) consistent. So how can we practically distinguish a situation in which we hold genuine information (

Floridi’s Theory of Strongly Semantic Information implies that the degree of informativeness of any tautology is 0 (see [

[...] indeed, according to MTI [Mathematical Theory of Information], TWSI [Theory of Weakly Semantic Information] and TSSI [Theory of Strongly Semantic Information], tautologies are not informative. This seems both reasonable and unquestionable. If you wish to know what the time is, and you are told that “it is either 5pm or it is not” then the message you have received provides you with no information [

_{a}

_{a}

_{a}φ

_{a}

In § 3.1 of [

A logic is an idealization of certain sorts of real-life phenomena. By their very nature, idealizations misdescribe the behaviour of actual agents. This is to be tolerated when two conditions are met. One is that the actual behaviour of actual agents can defensibly be made out to approximate to the behaviour of the ideal agents of the logician’s idealization. The other is the idealization’s facilitation of the logician’s discovery and demonstration of deep laws [

Floridi’s second strategy consists simply in observing that the problem is shared by all epistemic logics. While this is admittedly no solution, one can agree with him that “a problem shared is a problem halved” [

Interestingly enough, Floridi’s third and last strategy seems to consist in removing the scandal of deduction altogether, by uncritically assuming that all tautologies are equally uninformative. Floridi’s argument can be summarized as follows:

By the Inverse Relationship Principle, “information goes hand in hand with unpredictability” [

Hence, an agent’s information cannot be increased by receiving the (empty) information that

This situation is indistinguishable from the one in which the agent actually holds the (empty) information that

If you ask me when the train leaves and I tell you that either it does or it does not leave at 10:30 am, you have not been informed, although one may indifferently express this by saying that what I said was uninformative in itself or that (it was so because) you already were informed that the train did or did not leave at 10:30 am anyway [

Hence, we can assume that, for every tautology

It turns out that the apparent difficulty of information overload can be defused by interpreting ├

Of course one might retort that, under these circumstances,

Perhaps the time is ripe to ask the fundamental question: is this kind of metaphysical and ultimately unattainable “objective information”—as opposed to the information a (real) agent

This is the sense in which, for example, an agent

What we have in mind is a notion of information that satisfies the following commonsense requirement:

In our view, the scandal of deduction and the related problem of information overload are nothing but symptoms of a fundamental difficulty. This can be described as

The problem raised at the end of the previous section is addressed in [

The informational semantics of the logical operators is based on the following principle:

The meaning of a logical operator ∗ is specified by stating the conditions under which an agent a holds the information that a complex sentence ∗(φ_{1},...,φ_{n}) is true, respectively false, in terms of the information that a actually holds about the truth or falsity of φ_{1},...,φ_{n}.

Clearly, the classical truth-table semantics for the Boolean operators does not qualify as an informational semantics. Even if we interpret the truth values 1 and 0 that a sentence

3-valued truth-tables.

The problem becomes apparent when one considers the necessary and sufficient condition for the truth of a disjunction or the necessary and sufficient condition for the falsity of a conjunction. Under the informational interpretation of 1 and 0, these conditions would read, respectively, as follows:

a disjunction φ ∨ ψ is true if and only if we actually hold the information that φ is true or we actually hold the information that ψ is true;

a conjunction φ ∧ ψ is false if and only if we actually hold the information that φ is false or we actually hold the information that ψ is false.
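To make the difficulty concrete, consider the strong Kleene three-valued tables, with the value 1/2 read as “no information”. These tables are only an illustration of our own and are not the informational semantics discussed in the text (which is given by intelim rules, not by deterministic tables); they exhibit exactly the defect at issue: a disjunction comes out “informationally true” only when one of its disjuncts does, so an agent who is informed that φ ∨ ψ is true without being informed which disjunct is true cannot be represented:

```python
# Strong Kleene three-valued connectives, with 1 = "informed true",
# 0 = "informed false" and H = 0.5 = "no information". This is only an
# illustration: the informational semantics discussed in the text is
# given by intelim rules, not by deterministic truth-tables.
H = 0.5

def neg(a):
    return 1 - a

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

# On these tables a disjunction is "informationally true" only if one
# of its disjuncts is, so an agent told "r or w" and nothing more
# cannot be represented:
assert disj(H, H) == H

# And excluded middle is not "analytically" true on this reading:
assert disj(H, neg(H)) == H   # p or not-p gets value 1/2 when p does
```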

It can be verified that the only meaning-conditions that can be justified in accordance with the founding principle of informational semantics (p. 50 above) are those expressed by the inference rules shown in the two tables below, where the signed sentences T φ and F φ express, respectively, the information that φ is true and the information that φ is false.

Sufficient conditions (introduction rules) for the standard Boolean operators.

Necessary conditions (elimination rules) for the four standard Boolean operators.

As required by the informational semantics, these rules specify the necessary (elimination rules) and sufficient (introduction rules) conditions for an agent

Since the introduction and elimination (intelim) rules are intended as valid for any agent, reference to the agent can be omitted, unless otherwise required, by stripping the label a.

Let us call

for no sentence

if

[This principle] is a universal but purely negative criterion of all truth. But it belongs to logic alone, because it is valid of all cognitions, merely as cognitions and without respect to their content, and declares that the contradiction entirely nullifies them [

The set S of all information states, partially ordered by ⊆, is not a complete lattice. For any Γ ⊆ S, let Γ^{u} be the set of all upper bounds of Γ in S; when the states in Γ are jointly inconsistent, Γ^{u} is empty. Let S^{∗} = S ∪ {⊤}, where ⊤ is a new top element. Then (S^{∗}, ⊆) is a complete lattice, where the meet ⨅ Γ of any Γ ⊆ S^{∗} is given by ∩ Γ (with the convention that ∩ ∅ = ⊤), and the join ⨆ Γ by ∩ Γ^{u} when Γ^{u} ≠ ∅ and by ⊤ otherwise.

Now, the surface semantic information carried by a single sentence φ can be defined as the meet, in S^{∗}, of all the information states that contain the signed sentence T φ:

(3) INF(φ) = ⨅ {X ∈ S : T φ ∈ X}.

More generally, the surface semantic information carried by a set Γ of sentences is defined as:

(4) INF(Γ) = ⨅ {X ∈ S : T φ ∈ X for every φ ∈ Γ}.

Observe that, since ⨅ ∅ = ⊤, (4) yields INF(Γ) = ⊤ whenever Γ is analytically inconsistent, for there is no information state containing T φ for every φ ∈ Γ.
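A toy rendering of this construction (entirely our own illustrative model: information states as consistent sets of signed atoms, with a distinguished top element standing in for ⊤):

```python
# Toy model (entirely illustrative): an information state is a
# consistent set of signed atoms such as {("T", "p"), ("F", "q")};
# TOP is a distinguished top element playing the role of the ⊤ above.
ATOMS = ("p", "q")
TOP = "TOP"

def consistent(s):
    return not any(("T", a) in s and ("F", a) in s for a in ATOMS)

def meet(states):
    """Greatest lower bound; by convention, the empty meet is TOP."""
    real = [frozenset(s) for s in states if s != TOP]
    if not real:
        return TOP
    return set(frozenset.intersection(*real))

def join(states):
    """Least upper bound: the union if consistent, TOP otherwise."""
    if TOP in states:
        return TOP
    union = set().union(*map(set, states)) if states else set()
    return union if consistent(union) else TOP

# Compatible states merge; jointly inconsistent ones are bounded
# above only by TOP, and the empty meet is TOP as well:
assert join([{("T", "p")}, {("T", "q")}]) == {("T", "p"), ("T", "q")}
assert join([{("T", "p")}, {("F", "p")}]) == TOP
assert meet([]) == TOP
```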

This requirement of

The _{a}_{a}_{a}_{a}_{a}_{a}

Finally, we define the following deducibility relation:

Γ ├

Observe that Γ├

Γ ├

Hence, ├ is informationally trivial, in that every agent that actually holds the information that the premises are true must thereby hold the information that the conclusion is true, or equivalently, the surface semantic information carried by the conclusion is included in the surface semantic information carried by the premises. The latter wording covers the limiting case in which the surface information carried by the premises is ⊤, which does not qualify as genuine information (⊤ is not an information state).

Do this meaning-theory for the logical operators and its associated notion of surface semantic information satisfy SMR? The answer is yes. The details can be found in [

It may be objected that the deducibility relation ├ is still “explosive” when Γ is analytically inconsistent, for there is no information state for a that contains the information that φ is true for every φ ∈ Γ

Obviously, once a contradiction has been discovered, no one is going to go through it: to exploit it to show that the train leaves at 11:52 or that the next Pope will be a woman [

We stress again that our definitions of information state and surface semantic information do not require that information “encapsulates truth”, nor do they even require that it “encapsulates consistency”, but only that information “encapsulates

According to this characterization, ├ is informationally trivial by definition, and this is in accordance with the tenet that analytic inferences are utterly uninformative. However, now both the meaning theory and the residual notion of semantic information do satisfy SMR and so the scandal of deduction is dissolved. The inferences that can be justified by ├ are only a subclass of the classically valid inferences and their validity can be recognized in feasible time. Moreover, as we have already remarked, the Tarskian logic ├ contains no

In [

An argument to show the validity of (6) based on these analytic rules will necessarily have to introduce temporary assumptions that are “discharged” when the conclusion is drawn, as in the following schematic argument:

Here, the information expressed by the temporarily assumed signed sentences is virtual information: it is not information that the agent actually holds, but information that is provisionally introduced, and subsequently discharged, for the sake of the argument.

Analytical judgements (affirmative) are therefore those in which the connection of the predicate with the subject is cogitated through identity; those in which this connection is cogitated without identity, are called synthetical judgements. The former may be called

[...]

In an analytical judgement I do not go beyond the given conception, in order to arrive at some decision respecting it. [...] But in synthetical judgements, I must go beyond the given conception, in order to cogitate, in relation with it, something quite different from what was cogitated in it [...] [

On the other hand, the synthetic inferences of classical propositional logic are precisely those that essentially require the introduction (and subsequent discharge) of virtual information. The manipulation of virtual information can be governed, as in the example given above, by a single branching rule allowing the introduction of the virtual information that a given sentence φ is true or the virtual information that φ is false. Bounding the number k of nested applications of this rule yields a hierarchy of increasingly powerful deducibility relations ├_{k}, and an inference can be called synthetic at degree k when it is valid in ├_{k} but not in ├_{k−1}. All classical tautologies are synthetic at some degree greater than 0. For example, the law of excluded middle is synthetic at degree 1, as shown by the following argument from the empty set of premises:
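This stratification can be given a mechanical counterpart. The following sketch is entirely our own toy implementation (restricted to ¬ and ∨, with rule and function names of our choosing, not the paper’s): it saturates a set of signed sentences under analytic intelim rules, and shows that the law of excluded middle is not derivable at depth 0 but becomes derivable after a single branching on the virtual information that p is true or that p is false:

```python
# A toy implementation (our own, restricted to "not" and "or") of
# 0-depth "intelim" deduction over signed sentences, plus one branch.
# Formulas: atoms are strings; ("not", A) and ("or", A, B) are tuples.
def subf(f):
    yield f
    if isinstance(f, tuple):
        for g in f[1:]:
            yield from subf(g)

def step(s, cands):
    new = set()
    for f in cands:
        if isinstance(f, tuple) and f[0] == "not":
            a = f[1]
            if ("T", f) in s: new.add(("F", a))   # elimination
            if ("F", f) in s: new.add(("T", a))
            if ("T", a) in s: new.add(("F", f))   # introduction
            if ("F", a) in s: new.add(("T", f))
        if isinstance(f, tuple) and f[0] == "or":
            a, b = f[1], f[2]
            if ("F", f) in s: new |= {("F", a), ("F", b)}          # elim
            if ("T", f) in s and ("F", a) in s: new.add(("T", b))
            if ("T", f) in s and ("F", b) in s: new.add(("T", a))
            if ("T", a) in s or ("T", b) in s: new.add(("T", f))   # intro
            if ("F", a) in s and ("F", b) in s: new.add(("F", f))
    return new - s

def depth0(premises, goal):
    """Saturate under the analytic rules only; no branching allowed."""
    cands = set(subf(goal[1]))
    for _, f in premises:
        cands |= set(subf(f))
    state = set(premises)
    while True:
        n = step(state, cands)
        if not n:
            return goal in state
        state |= n

p = "p"
lem = ("or", p, ("not", p))   # the law of excluded middle, p or not-p

# Not derivable from no premises by the analytic (0-depth) rules...
assert not depth0(set(), ("T", lem))
# ...but derivable at depth 1: branch on the virtual information that
# p is true / p is false, and the conclusion holds in both branches.
assert all(depth0({h}, ("T", lem)) for h in [("T", p), ("F", p)])
```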

We have discussed some of Floridi’s views on semantic information, paying special attention to the solutions they offer to the anomalies of the received view, such as the Bar-Hillel-Carnap Paradox, the Scandal of Deduction and the Problem of Information Overload (a.k.a. “Logical Omniscience”). Most of the discussion has been oriented by the approach to the scandal of deduction put forward in a joint paper by Floridi and the present author, so it is inevitably biased. However, as Popper repeatedly stressed, there is no problem with holding a biased view provided that one is willing to honestly and severely test it. Since our theory is a non-empirical one, it can be tested mainly for consistency, intuitive plausibility and, above all, for its heuristic value, that is, its capacity to raise new and interesting problems. Here, the proof of the pudding consists in trying to develop a quantitative Theory of Bounded Semantic Information to complement the qualitative theory that has been outlined in [

I wish to thank Stefania Bandini and Marcelo Finger for valuable comments on a previous draft.