Proceeding Paper

Information and Disinformation Boundaries and Interfaces †

Department of Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, 40482 Gothenburg, Sweden
Division of Computer Science and Software Engineering, School of Innovation, Design and Engineering, Mälardalen University, 72220 Västerås, Sweden
Presented at the 5th International Conference of Philosophy of Information, IS4SI Summit 2021, Online, 18 September 2021.
Proceedings 2022, 81(1), 49;
Published: 15 March 2022


This paper presents highlights from the Boundaries of Disinformation workshop held at Chalmers University of Technology. It addresses the phenomenon of disinformation in its historical and current forms. Digitalization and hyperconnectivity are identified as leading contemporary sources of disinformation. Diverse strategies have been proposed to counteract disinformation globally. However, it is important not to forget the need for a balance between individual freedom of expression and the institutionalized societal mechanisms used to prevent the spread of disinformation. An important aspect of any solution is that the debate about adequate and truthful information, as opposed to disinformation, involves all stakeholders.

1. Introduction

Last year, a workshop was held at Chalmers University of Technology on the Boundaries of Disinformation [1]. It gathered Swedish and international thinkers representing different approaches to the diverse topics of Artificial Intelligence (Max Tegmark, physicist and AI researcher, MIT), Democracy (Daniel Lindvall, sociologist and independent researcher), Epistemology (Åsa Wikforss, philosopher, Stockholm University), Ethics (Gordana Dodig-Crnkovic, Chalmers University of Technology), Human–Computer Interaction (Wolfgang Hofkirchner, Vienna University of Technology), and Law (Chris Marsden, University of Sussex). The workshop was organized and moderated by Joshua Bronson (philosopher) and Susanne Stenberg (legal expert within R&D) from RISE Research Institutes of Sweden.
The workshop topic was introduced by Bronson and Stenberg. Disinformation was presented as a significant problem of contemporary societies, bringing “the challenge of dealing with disinformation magnified by digitalization and increasing use of AI” into more and more aspects of our society. Disinformation is typically defined as purposefully spreading false information to deceive, cause harm to, or disrupt an opponent. Disinformation can be generated and spread by individuals, groups, organizations, companies, or governments, and equally, disinformation can target any of these. According to Bronson and Stenberg, today, we have effectively lowered the barrier for content creation and dissemination to such an extent that traditional gatekeepers, such as governments, universities, publishers, and media, are unable to steer information and content creation.
The aim of the Boundaries of Disinformation workshop was to map the edges of this problem.
In what follows, I present my take on the problem of disinformation, its relation to information, and its role in society, informed by my participation in the workshop and by what I learned from my colleagues' discussions of the various manifestations of disinformation and the possibilities for its control.

2. Phenomenon of Disinformation, Old and Omnipresent

Historical examples of disinformation are many, as illustrated by the following three historical examples of “fake news”: the Donation of Constantine from the eighth century, a sanctioned surrender of the Hospitallers of the Knights Templar in the 1140s, and the story from 1782, when Benjamin Franklin created a fake issue of a Boston newspaper, as reported in [2]. War and political propaganda and counterpropaganda are classic cases of disinformation and misinformation.
We meet information and disinformation daily on micro (individual), meso (group), and macro (global) scales.

3. What Is New?

Content production today has become simple and affordable to all to an unprecedented degree and, consequently, has slipped out of societal control.
“The idea that different people can get a piece of paper that states the same thing is powerful. It’s equalizing. It’s easy to trust the information in this case because accepting that a huge group of people are being misled is, well, unbelievable. There isn’t a way to prevent fake news entirely, but it starts with critical reading and conversations” [2]. Not only do the general public, “ordinary people”, have a voice that can reach around the globe, but politicians can also tweet directly to their followers, circumventing democratic gatekeepers.
The proposed automated means of fighting disinformation, including Artificial Intelligence, bring their own challenges, as presented in the overview of self-regulation, co-regulation, and classic regulatory responses currently adopted by social platforms and the EU [3], which connects the technological, legal, and social dimensions.

4. Digitalization and Hyperconnectivity as Sources of Disinformation

New forms of online content production and communication have given rise to the phenomenon of “informational bubbles”: isolated groups that share information and values independently of the rest of the world. Such groups can easily adopt extreme positions, such as anti-vaxxers or groups claiming that the Earth is flat.
Social networks, web-based electronic media, digital platforms, and web bots provide ways for disinformation to develop uncontrollably, in dangerous ways and to dangerous proportions.
New technologies make content creation and dissemination easy, avoiding the traditional gatekeeping mechanisms of publishers, (predefined) media formats, (existing) institutions, universities, and governments.
Joshua Bronson and Susanne Stenberg asked the following questions:
Can we establish new gatekeepers who would:
Tell the difference between managing disinformation and censoring;
Establish the relationship between facts and disinformation;
Find out if and when information can be traced;
Establish the possibilities and limits of AI solutions to disinformation;
Increase media literacy in our radically changing digital landscape;
Help frame laws to protect the freedom of expression while guarding against disinformation?

5. Boundaries of Disinformation

There are a number of questions that must be answered to understand disinformation, its role, and production means, such as:
Who decides what is “the case”/“the fact”/“the truth”?
What is “authoritative information”/“trustworthy information”?
Who are the authorities, and for what?
It is important, in the effort to identify and counteract disinformation, not to forget the need for a boundary/balance between individual freedom and institutionalized societal thinking. We need to better understand and formulate the relationship between authority, freedom, and responsibility in this context.
Moving towards a more truth-based society is about elucidating and explicating:
“How?” (AI, computers, media literacy, etc.);
“Why?” (philosophy, ethics, law, critical thinking, etc.), which is a question for democracies to decide.

6. Interfaces between Information and Disinformation

As Wu argues [4], there is an interaction and convergence of the philosophy and science of information within the sciences. Parallel to the distinction between Information and Disinformation are the related questions of demarcation between Science and Pseudoscience posed by Popper [5], which have generated much discussion among philosophers of science.
There are cases in the history of science in which false information/knowledge (false for us, here and now) has led to the production of true information/knowledge (true for us, here and now). The entire development of science can be seen as the refinement and replacement of inadequate knowledge by more adequate knowledge. A classic example of a mechanism leading to new discoveries and new insights is serendipity: making unexpected discoveries by accident.
The precondition for the discovery of new scientific “truths” (where the term “true” is used in its limited sense to mean “true to our best knowledge”) is not that we start with a critical mass of absolutely true information, but that, in continuous interaction (a feedback loop) with the world, we refine our set of (partial) truths. For good reason, truth is not an operative term for working scientists; instead, they rely on the notion of correctness, which refers to a given reference frame. Each change in the frame of reference (as in scientific revolutions, such as the change from a geocentric to a heliocentric view) leads to a different understanding of what is “true” and “correct”.
Interestingly, Christopher Columbus had, for the most part, incorrect information about his proposed journey to India. He never saw India, but he made a great discovery. The “discovery” of America was not incidental; it was a result of a combination of many favorable historical preconditions combined with both true and false information about the state of affairs. Similar discoveries are constant occurrences in science.
“Yet libraries are full of ‘false knowledge’”, as Floridi points out in his Afterword [6]. However, we still find them useful.
How worried should we be? Current debates about COVID-19 vaccines show how harmful disinformation (in this case, about the danger of vaccines) can be for society. What can be done to ensure and maintain the correctness and trustworthiness of information? Whose responsibility is it to keep media free from misinformation, disinformation, and malinformation? It is very important that we discuss these questions here and now, broadly involving diverse stakeholders, while the rapid development of AI makes content production increasingly simple, available, and vastly abundant.


Funding

This research was funded by the Swedish Research Council grant MORCOM@COGS.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

I would like to thank Joshua Bronson and Susanne Stenberg from RISE Research Institutes of Sweden for organizing the Boundaries of Disinformation workshop.

Conflicts of Interest

The author declares no conflict of interest.


References

  1. Dodig-Crnkovic, G. Ethics of Disinformation—Workshop Presentation. Available online: (accessed on 18 September 2021).
  2. D’Costa, K. Three Historical Examples of “Fake News”. Available online: (accessed on 18 September 2021).
  3. European Parliament. Automated Tackling of Disinformation. 2019. Available online: (accessed on 18 September 2021).
  4. Wu, K. The Interaction and Convergence of the Philosophy and Science of Information. Philosophies 2016, 3, 228–244.
  5. Popper, K. The Logic of Scientific Discovery; Routledge: London, UK, 1959.
  6. Floridi, L. LIS as Applied Philosophy of Information: A Reappraisal. Library Trends 2004, 52, 658–665.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Dodig-Crnkovic, G. Information and Disinformation Boundaries and Interfaces. Proceedings 2022, 81, 49.