Article

The Ethics of Data and Its Governance: A Discourse Theoretical Approach

by
Bernd Carsten Stahl
School of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
Information 2025, 16(6), 497; https://doi.org/10.3390/info16060497
Submission received: 29 April 2025 / Revised: 9 June 2025 / Accepted: 12 June 2025 / Published: 15 June 2025
(This article belongs to the Special Issue Advances in Information Studies)

Abstract
The rapidly growing amount and importance of data across all aspects of organisations and society have led to urgent calls for better, more comprehensive and applicable approaches to data governance. One key driver of this is the use of data in machine learning systems, which hold the promise of producing much social and economic good, but which simultaneously raise significant concerns. Calls for data governance thus typically have an ethical component. This can refer to specific ethical values that data governance is meant to preserve, most obviously in the area of privacy and data protection. More broadly, responsible data governance is seen as a condition of the development and use of ethical and trustworthy digital technologies. This conceptual paper takes the already existing ethical aspect of the data governance discourse as a point of departure and argues that ethics should play a more central role in data governance. Drawing on Habermas’s Theory of Communicative Action and using the example of neuro data, this paper argues that data shapes and is shaped by discourses. Data is at the core of our shared ontological positions and influences what we believe to be real and thus also what it means to be ethical. These insights can be used to develop guidance for the further development of responsible data governance.

1. Introduction

Technical progress continues to lead to rapidly growing amounts of digital data. Data sources including social media, sensor networks, the Internet of Things (IoT), wearable devices, and many others expand the amount and types of data that are collected. Data governance has an important role to play in supporting the intended and beneficial use of these data and in preventing unintended and malicious use. Data governance structures can be used to strengthen the security of data and data protection; they can foster data sharing, reuse, and collaboration. In short, they have an important impact on how data are used and thus on the ethical implications of data use.
Data is almost universally recognised as being of high importance in today’s organisations and society, which is reflected in the growing attention that data and its governance receive in the field of information systems. Enormous resources are being invested into means of collecting, storing, processing, and retrieving data, both in private enterprises and in publicly funded projects and institutions. However, the very nature of data and the implications of that nature receive less attention than they deserve. This article proposes that a critical reading of the literature can offer additional perspectives on data and data governance, with a focus on the ethical implications of data and its governance. Ethics is an important component of the data governance discussion. While data governance has significant technical and organisational aspects, these are closely linked to underlying ethical concerns. This ethical aspect of data governance is not always prominent in academic and other contributions to the discussion. However, it has recently become more prominent in the context of the discourse around artificial intelligence (AI). There is very little doubt that AI is of ethical relevance. Among the numerous attempts to define, characterise, enumerate, and address ethical aspects of AI, one can find consistent references to data governance as part of the overall package of solutions that are proposed to address ethical concerns.
Such references to the ethics of data governance tend to take a functional view of data and its governance, which often seems to assume that they are ethically neutral and can be used to promote ethical aims if wielded correctly. Without denying that this view of the ethics of data governance has its applications, I suggest in this paper that a deeper and more critical interpretation of the ethics of data governance is possible. Drawing on Habermas’s work, notably his Theory of Communicative Action (TCA) [1] and discourse ethics [2,3], I explore the role of data and its governance in discourses, thereby highlighting the ethical dimensions of data governance. I argue that data are a core component of discourses and that they are required for the creation and maintenance of validity claims. Validity claims are at the heart of the discourses that not only constitute communicative action but also constitute our social reality, including our ethical perceptions of what is good or bad, right or wrong, ethical or unethical. Data governance, as the set of processes that regulate how we create, share, and make use of data, thus has an intrinsic influence on the very concept of ethics and on how ethical claims can be supported or contested. From this perspective, the ethics of data governance is not just one aspect of data governance worth considering; rather, data governance has a potentially significant influence on the very nature of our shared social reality and therefore on what we perceive to be ethical. This raises fundamental questions about data governance that the current discourse has barely touched. Furthermore, the TCA, with its constituent validity claims and its idea of a conflict between the lifeworld and systems, can strengthen our awareness of the political side of data.
In order to develop the argument, this paper proceeds as follows. It starts with an introduction of the key concepts of data governance, its relevance to information systems, and its ethical aspects. This provides the basis of a review of the current discussion of responsible data governance, using the AI ethics discourse as the most prominent field of application of this debate. It then introduces Habermas’s theoretical work and applies it to data and data governance. The implications of Habermas’s work for data governance are then explored using the example of neuro data. This allows for a more detailed focus on the different ways in which the TCA can contribute to the data governance discourse, looking at the social and political ways in which data is shaped, the influence it has on the constitution of our shared social reality, and the question of who has a voice in these discussions. These points lead to a discussion of the practical implications for data governance that can be drawn from this theoretical framework and the limitations of the approach.

2. Ethical Aspects of Data Governance

This section outlines the way in which ethics is currently considered in the theory and practice of data governance. It starts with a brief introduction of the concepts of data governance and ethics and then proceeds to a discussion of established ethical concerns of data governance, making use of the discussion of AI ethics to exemplify the argument.

2.1. Data Governance

The constituent parts of the term “data governance” are both somewhat problematic. Data are often seen as the brute facts that describe reality; they can be contrasted with information, which contextualises the brute facts of data, and with knowledge, which uses information for specific purposes [4] or supports non-trivial claims about a phenomenon [5]. This hierarchy from less refined data to more reflected information to applicable knowledge has some plausibility. However, it is also problematic in its assumption that data are simply “given” (the literal translation of the Latin word “data”) and are available to simply be collected. In practice, data are never given and are necessarily subject to an active process of selection and interpretation before they can even be perceived as data. This paper will use the term “data” to stand for digital representations of external phenomena. While such digital and computer-readable data do not cover all possible forms of data, they are at the heart of current data governance discussions.
The other half of the term “data governance” is similarly complex. “Governance” is often used as a concept in opposition to “government”, which denotes activities of the state, in particular of the executive branch of the state [6]. The term governance arises from perceived limitations of the hierarchical organisation of government and refers to “dynamic interrelations of (mostly organised) actors, their resources, interests and power, fora for debate and arenas for negotiation between actors, rules of the game, and policy instruments applied to help achieve legitimate agreements” [7].
Data governance, bringing together the concepts of data and governance, then describes how digital representations of external phenomena are organised in a decentralised (non-governmental) way to achieve legitimate aims. As Lis and Otto [8] put it, “Data governance provides a framework of decision-rights and accountabilities for the management and use of data. It encourages desirable behavior concerning the conduct of data within an organization.” Data and governance are separate terms and one can talk about one without talking about the other. However, in practice, and for reasons that will become clearer throughout the article, the governance of data is not just a technical activity, but it shapes the content, meaning, and reality of data.
Data governance taken in this broad sense of the term is not new. It necessarily forms part of computing as data processing. There are other terms that aim to address the same or similar challenges, such as “data management” [9] or “digital governance” [10]. For the purposes of this paper, they will be regarded as aspects of data governance. The growing importance of data has led to increasing calls for data governance from a policy perspective [11,12,13].
Work on data governance spans a significant number of disciplines and fields, such as computer science, data science, and AI research as well as philosophy, law, research and innovation policy, and specialised sub-fields such as data protection. It involves large amounts of research but also organisational and societal practices that currently give rise to significant legislative and regulatory activities.

2.2. Data and Its Governance in the Information Systems Field

Data and its governance form a key component of the field of information systems (IS), understood as the academic and practice-oriented discipline that explores the role of information technologies in organisations and society. This is true on an obvious level, where information technologies produce, process, and draw inferences from digital data and therefore need to be based on an understanding of what those data represent and how they can be read, manipulated, and stored. IS scholars have investigated data governance practices and shed light on some of their less obvious aspects. For example, there are accounts of data curation that portray curation activities as constituent practices of data governance [14]. IS scholars have also observed some of the underlying assumptions of data governance in other disciplines such as data science. Drawing on examples from the oil and gas sector, Parmiggiani et al. [15] demonstrate that data governance is not a simple mechanical exercise involving collecting and processing data but that it requires a deep understanding of the domain from which the data originates and critical but often invisible labour to make the data usable for scientific investigation and decision making. These examples indicate that the work on data governance that IS scholars have undertaken often touches on intricate understandings of data and data governance and their role that prefigure the argument developed in this article.
Data and data governance touch on most topics and areas of interest of the IS field. A key current topic is that of digital transformation [16,17,18]. Digital transformation, understood as “a change in how a firm employs digital technologies, to develop a new digital business model that helps to create and appropriate more value for the firm” [19], can take many forms and draw on numerous types of data and business models. Zuboff [20] prominently observed that data-driven changes to organisational practices are not simply an extension of prior activities based on more and better data but that the use of data changes the very nature of the activity, something she described using the term “informate”. Since Zuboff’s original observation, this trend of companies informating their work has accelerated rapidly. Using the example of a music platform, Alaimo and Kallinikos [21] have shown how digital platforms transform social and cultural behaviours into data points, thereby changing how music is perceived, provided, and enjoyed. Digital transformation in this broad sense can change all aspects of organisational life. Alaimo and Kallinikos [22], for example, argue that digital data objects, aggregations of data structured according to specific schemas, are fundamentally altering how organisations process information, make decisions, and understand their environment. One particular aspect of the impact of digitisation and its reliance on data-driven processes is the way we structure and implement work, for example by introducing virtual work [23,24] or algorithmic management practices, notably in large online platforms [25].
In addition to these general references to the importance of data in underpinning digital transformation, there are further contributions to the IS discourse that focus on epistemological, ontological, and critical aspects of data that are relevant to this article. Epistemology, as the discipline concerned with knowledge, how we can know, what we can know, and how such questions can be answered in a well-justified manner, lies at the basis of all research activities. In the IS field, questions of epistemology have been explicitly discussed in the debates around paradigms [26,27,28,29]. It is probably not controversial to claim that data and its governance have crucial implications for all epistemological positions and their underlying paradigms. Data contributes to the process of sensemaking, of extracting meaning from the natural and social worlds we find ourselves in [30]. Data shapes how we approach and perceive knowledge and how it can be represented, which is a crucial precondition for communicating insights into current realities and the anticipation of future developments [31]. The use of digital data to engage with the world and the reliance on artefacts that produce or process novel forms of data, such as the Internet of Things, can lead to new ways of knowing, what Monteiro and Parmiggiani [32] call “synthetic knowing”.
Epistemological positions are linked to ontological views. The question of what exists has implications for what we can know, a point that is well covered in the IS paradigm discussions. The IS field has started to explore the ontological side of data and its governance. One important question is that of the ontological status of digital data. If data is not a straightforward reflection of reality [21], then the question arises of what its ontological status is. Using the case of the classification of data-driven objects that underpin the categorisation of music, Alaimo and Kallinikos suggest that recent data-driven approaches and algorithms redefine organisational processes as well as the nature of categories themselves, thereby arguably changing the nature of social reality. A similar ontology-based argument can be made for health data, and in particular the role of personal health records [33]. These records are often advertised as offering an authoritative perspective on reality but in practice can be shown to exhibit both “drift” and “shift” due to competing interests and tensions within the community of stakeholders.

This short look at some aspects of how data and its governance are treated in the IS literature cannot remotely claim to do justice to this rich topic area. Its purpose here is to set the scene for the following discussion of ethics and data and to provide a motivation for applying a Habermasian angle to it. A superficial view might suggest that the three topics of digital transformation, epistemology, and ontology are fundamentally separate from ethical questions. However, a more detailed reflection demonstrates that this is far from the case. Changes and transformations triggered or facilitated by digital technologies have social consequences that are rarely morally neutral. For example, the practical differences in terms of day-to-day activities between driving a traditional taxi and driving for Uber may be limited. However, being managed through algorithms rather than humans can raise concerns that differ from established concerns [34] and that have different moral connotations. Hence, it is not surprising that there are suggestions to explore the shape of responsible digital transformation [35]. Perhaps less obviously, but all the more importantly, epistemology and ontology also influence ethics. What is perceived to exist or be real has a strong influence on what is perceived to be right and wrong. Questions of the nature of knowledge and what we can know are similarly crucial to answering questions of what is perceived to be right and wrong. Data and its governance are thus implicated not just in our scientific understanding of the world but also in our views of good and bad, right and wrong. This article focuses on these questions, which are explored in more detail in the next section, and proposes the use of a Habermasian perspective, which directly links questions of ethics with epistemology and ontology as well as broader questions of politics and regulation. Before we can explore these questions further, however, it is worth looking at the current discussion of ethics and data.

2.3. Ethics and Data

This section introduces the concept of ethics and provides a brief overview of the link between ethics and data.

2.3.1. Ethics

The term “ethics” in its current use in the English language refers to several groups of distinct phenomena. On the basic level, it refers to the distinction between what an agent perceives to be morally good or bad. It also refers to the systems of beliefs that such an agent holds to support this distinction, and it refers to higher-level reflections of this system of beliefs, its justifications, and its limitations [36]. Philosophical ethics is located at the latter theoretical level. The systematic theoretical reflection of moral phenomena has led to the development of numerous ethical theories. The three groups of ethical theories that are currently most prominent in discussions of the ethics of technology, including in IS research [37], focus on the consequences of actions to evaluate their ethical qualities (consequentialism, notably utilitarian positions such as those developed by Bentham [38] or Mill [39]), the duties of the agent (deontology, most prominently represented by Kant [40,41]), or the character of the agent (virtue ethics, typically linked to Aristotle [42,43]).
While these three sets of theories are pervasively cited, it is important to note that they are not the only way to think about ethical questions. There are ethical positions that are based on religious views, ethics based on the immediacy of individual interactions [44], theories of moral development [45], positions proposing different ways of reasoning over ethics [46], radical ethical scepticism [47], and many others. The unifying feature of philosophical ethics is that it considers moral phenomena. Not all moral philosophy aims to be practical and applicable, and some moral philosophies would deny the possibility of arriving at prescriptive statements. However, much recent moral philosophy, in line with classical ethical theory [48], aims to provide practical guidance on how moral questions or dilemmas can be approached. The complexity of modern societies and the institutionalised division of labour in which 21st-century citizens mostly find themselves have led to the development of specialised fields of ethics that focus on the application of ethical theory in fields like biomedicine [49] or business [50,51]. Some of the discourses in these fields of applied ethics are particularly relevant to data governance, notably those that refer to technology, in particular those technologies that are used for processing and manipulating data. As these have a strong influence on how ethical aspects of data governance are currently framed, it is worth exploring them in some more detail.

2.3.2. Ethics of Technology, Computers, Information, and Data

While ethical questions have been raised about computing from the outset of the development of digital computers [52], the increasing use of computers has led to the development of the field of computer ethics since the 1980s [53]. At the beginning of the computer ethics discussion, there was a focus on computers as technical artefacts [54]. Due to the increasing pervasiveness and ubiquity of computing devices, this focus has gradually been broadened to include the ethical implications of information in general [55,56].
Ethical questions have long been recognised as a legitimate concern in the field of IS [57]. The discussion has covered various issues and applications, such as privacy [58,59], professionalism [60], gender [61], and ethical decision-making in computer use [62]. There have also been numerous publications in the IS field concerning the development and use of ethical theory [63,64,65].
The arrival of “big data” [66] raised numerous concerns, leading to the development of a body of literature on the ethics of big data [67,68,69,70]. Core to this debate is the recognition that big data uses have manifest social consequences. In particular, this is based on the recognition that big data are analysed and used to predict the future, which can have ethically problematic consequences [71], some of which are discussed in the subsequent section. These concerns have led to the coining of the term “data ethics”, which was briefly popular and has inspired, for example, the naming of the UK’s Centre for Data Ethics and Innovation (renamed as the “Responsible Technology Adoption Unit” in February 2024). The term data ethics was quickly superseded by the more popular term “ethics of AI”. One could argue, however, that most of the relevant ethical issues that are discussed under the heading of AI, certainly those arising from machine learning, have their root in data and their uses [72,73].
The AI ethics debate gained prominence with the growing success of machine learning during the 2010s [74,75]. It covers a range of ethical concerns, including the immediate consequences of the application of machine learning such as privacy and data protection [13,76,77,78], bias and unfair discrimination [79,80,81,82], and safety, security, and reliability [83,84,85].
There are numerous other ethical concerns related to AI, including its economic consequences [86], which encompass implications for employment [87,88,89] and the possibility of worker surveillance [90] as well as larger macroeconomic phenomena, notably questions of the justice of the distribution of the benefits of AI [12], perhaps most pointedly captured by Zuboff’s [91] concept of surveillance capitalism. Other society-level worries include the impact that AI has on democratic processes, driven by the increasing dominance of large tech companies that have the means to wield AI in ways that benefit them, which can lead to a concentration of political and economic power [92].
These and other ethical concerns about AI are grounded in the use of data and the consequences this use can have. Appropriate governance structures for data are often suggested as a mechanism to mitigate these concerns, leading to the concept of responsible data governance [93].

2.4. Responsible Data Governance

Policy-level calls for data governance tend to underline the potential benefits that the suitable use of data can bring, such as an increase in productivity, improvements in public services and healthcare, a reduction in crime, and many others [11]. Appropriate governance structures for data are seen as important conditions for meeting these aims. By putting in place such structures, data users can be held accountable for their use of data [8], which is meant to support trust and collaboration.
Proper data governance is said to produce numerous benefits, for example in the biomedical sciences [94,95,96]. By instituting suitable data governance structures, science can reduce the cost of producing redundant data [5], improve collaboration, and, ideally, promote scientific progress. Benefits such as improving collaboration, reducing costs, and contributing to knowledge clearly have a moral side to them, but they are typically not expressed in ethical terms. In addition to these general benefits, there are a number of expected advantages of data governance that can be expressed in more traditional ethical language, such as the promotion of human flourishing [75] or the avoidance of harm [97]. This position typically implies that there are objective data that describe the world accurately. The requirement for data governance is then to preserve the integrity of these data and to render them accessible for appropriate uses. While this is an appropriate position, the paper will argue below that it remains too narrow.
On the societal level, there are furthermore hopes that the large-scale use of data can promote some of humanity’s key ethical objectives. This is a type of argument that can best be identified in current AI discussions, for example in the hopes that AI can address some of the key issues mentioned earlier. On the one hand, there are concerns that AI will lead to biases, injustices, or environmental pollution. On the other hand, machine learning technologies may offer solutions to identify biases and avoid unfair discrimination [88,98], strengthen security [99], create economic benefits [100], improve healthcare [101,102], including during a pandemic [103], and support sustainability [104]. More broadly, there is a movement towards “AI for good” [105]. AI for good can be realised by the impact that AI can have on promoting the UN’s Sustainable Development Goals [106].
Despite these envisaged benefits of data governance, one has to admit various challenges, e.g., with regard to data protection [107], which remains a fast-moving field [108]. In addition to such specific technical problems, there are large questions about the location of and responsibility for data governance. Much of it takes place on the level of the organisation. It has been argued, however, that this focus is problematic, as many uses of data are determined on the level of an innovation ecosystem [8]. Other problems include the question of how possible trade-offs can be addressed, e.g., between the openness of access and the protection of personal data [109] or the prevention of misuse. It has been suggested that successful data governance not only requires action on the organisational level but must be accompanied by appropriate legislation to ensure consistency [11] and overseen by a regulatory body [75].
We are thus far away from having comprehensive answers to the question of what responsible data governance would look like in detail [93]. This section should nevertheless have shown that current calls for data governance are largely calls for responsible data governance, defined as data governance that is informed by ethical concerns and explicitly seeks to promote ethical aims. This means that data governance practices need to ensure that well-defined ethical concerns such as the safeguarding of personal data are included. Moreover, data governance approaches and practices should be informed by the insight that they potentially touch on larger ethical issues at the societal level, ranging from environmental sustainability to the justice of economic distributions. There is thus a close link between ethics and data governance, which is already established or at least implied in much of the data governance literature discussed above. However, this paper goes one step further and argues that there are further connections between data governance and ethics that go to the heart of the way we constitute our shared social reality, as can be discussed using the lens of the TCA.

3. The Theory of Communicative Action and Its Role in Critical IS Research

This article argues that using a Habermasian perspective offers a theoretically sound way to capture the various ethics-related aspects of data and its governance. The value of Habermas’s work here is that it covers ethical theory but is broader, offering unique insights related to epistemological, ontological, and also political aspects of data. This section therefore starts by introducing his Theory of Communicative Action, a central pillar of his theoretical work. This is used as a starting point for the subsequent overview of Habermas’s reception in the IS literature as a key critical scholar.

3.1. The Theory of Communicative Action in Information Systems

This section will outline the work of the German philosopher and social theorist Jürgen Habermas, in particular his TCA [1] and discourse ethics. This brief introduction allows for straightforward references to data and data governance, which are located at the core of IS.
Habermas distinguishes communicative action, which aims for mutual understanding, conflict resolution, and compromise, from strategic action, which is characterised by purposive-rational thinking that aims to achieve pre-defined objectives [110]. For communicative action to work, speakers need to engage in discourses, which human beings always do in the context of their individual lifeworld. The concept of the lifeworld stems from the phenomenological tradition [111] and refers to the fact that all humans can only interact with the world from their own specific position, shaped by their history, experience, socialisation, environment, etc. Discursive validity claims have their origin in the lifeworld. This is why Habermas is critical of systems-based decisions that ignore lifeworlds, and the distinction between lifeworld and system is a key aspect of the TCA. By systems, he means large and anonymous social structures, such as the economy or the legal system, but the idea can arguably be applied to information systems. Habermas uses the term “colonisation of the lifeworld” to denote situations where traditional forms of life are dismantled and interactions that used to be based on mutual interaction and recognition are organised through non-personal means such as bureaucracy or markets.
In terms of data and data governance, this concept of the colonisation of the lifeworld is important, as it can be used to describe the development of data from a shared and agreed-upon resource held by a community to an abstract entity and tradeable commodity. This colonisation of the lifeworld is problematic because it can lead to loss of control, alienation, and exploitation. This is a key theme of Zuboff’s [91] concept of surveillance capitalism, which describes the extraction of immense financial value from data by large tech companies to the detriment of the data subject. Many aspects of the data economy are based on this phenomenon. Talk of “data as the new oil” of the information economy exemplifies this way of thinking [112].
The TCA builds on the concept of speech acts [113], which it sees as the basis of discourses. Each speech act carries validity claims: truth, (normative) rightness, and truthfulness. This means that each speech act includes the (fallible) claim to be true, to be ethically acceptable, and to be believed by the speaker. An additional implied validity claim is that of comprehensibility. These validity claims can be queried and then need to be defended by the speaker [114]. For such communication to work, it relies on the assumption of the ideal speech situation, in which all contributors to a discourse have equal chances to contribute. Wilson [115], drawing on White [116], lists the following constitutive rules of an ideal speech situation:
1. Each subject is allowed to participate in the discussion;
2a. Each is allowed to call into question any proposal;
2b. Each is allowed to introduce any proposal into the discussion;
2c. Each is allowed to express their attitudes, wishes, and needs;
3. No speaker ought to be hindered by compulsion—whether arising from inside the discussion or outside it—from making use of the rights secured under (1) and (2).
Habermas is fully aware that real discourses never reflect this ideal speech situation. His claim is that these conditions are transcendental, i.e., they need to be recognised as conditions of the possibility of successful discourses. Without striving for them, communicative action cannot lead to valid outcomes. This has important implications for dealing with data that we will return to in the discussion section.
The TCA gave rise to the development of discourse ethics, which Habermas [2,3] undertook in close collaboration with Karl-Otto Apel [117,118]. Discourse ethics builds on the TCA, in particular on the recognition that all speech acts have a normative component. The claim to normative rightness is thus comparable to that of truth and subject to discursive agreement or disagreement. This points to the close link between different types of validity claims, notably claims to truth and rightness, to which we return in the discussion. In terms of current views on responsible data governance, this means that the normative consensus on what is right and proper to do with data (e.g., keep it secure and uphold the fair information principles [119]) should be understood as the outcome of a discourse; it cannot be seen as independently given and perpetual but may become subject to new discourse, for example in the light of new empirical insights on the properties of data or data processing technologies.
Habermas is not a philosopher of technology and has no particular interest in specific technical developments. It should nevertheless be clear that he is fully aware of the importance of technology in modern societies. For example, Habermas was a signatory of the 2016 Charter of Fundamental Digital Rights of the European Union (https://digitalcharta.eu/sprachen/, accessed on 11 June 2025), indicating his continued awareness of the crucial role of digital technologies. His work is thus of high potential relevance for our understanding of technology and its social and organisational uses, which explains its strong reflection in the IS field.

3.2. Habermas and Critical Research in Information Systems

There are examples of IS scholars using Habermas’s work and, in particular, his approach to discourse ethics to explain phenomena of ethical relevance [64,120]. However, his work is probably best known as a key theoretical root of critical research. The relevance of his work for ethical aspects of data and data governance extends beyond the traditional boundaries of philosophical ethics and includes aspects of critical theory. It is therefore worth briefly reviewing how Habermas’s work has been received and made use of in the IS field more broadly.
As indicated earlier, critical research within the IS field is typically described as a paradigm, a concept that builds on Kuhn’s [121] ideas about paradigms and has been developed for application to the social sciences [26]. A paradigm in this sense of the word represents a worldview of a researcher that consists of and informs their understanding of the nature of reality, ways to observe it, concepts of truth, types of data, and methods to collect and analyse them, as well as the role of research in society and the role of the researcher in the scientific system. It is important, however, to realise that the tradition of critical research predates the paradigm discussion and has a long tradition in philosophy. A key example of critical philosophy that provides roots for critical work today is Kant’s critiques of pure reason, practical reason, and judgment [122,123].
While this flavour of general philosophical critique is still relevant to critical research in IS, it is worth looking at the discourse in some more detail to understand Habermas’s role in it. Current critical research tends to focus explicitly on social conditions, and it challenges institutions and organisations that represent oppressive forms of control [124]. It is based on the recognition that social reality is not optimal and that many people are disempowered and subjugated. Information systems can play a crucial role in the scientific and social systems that perpetuate and reproduce these social conditions [125]. A key feature of critical research is thus its critical intention to effect change [126], to overcome control, domination, and oppression [127]. One crucial concept in this context is that of emancipation [28]. Critical research is centrally concerned with emancipation as “the enactment of new, less oppressive worlds through critical awareness and problem alleviation” [128]. Emancipation, as the ability to achieve one’s potential to a greater degree [129], is difficult to systematically capture and understand and even more difficult to achieve. Critical scholars see a role for research in exposing and challenging domination and ideology [130] and then offering perspectives, reformulating views of social reality, and thereby supporting the transformation of oppressive and alienating situations [131].
While the critical intentions to change social reality and promote change are core to critical research, there are a number of further features typically ascribed to it. These include a non-realist or non-objectivist ontology [132], a non-positivist epistemology [133], and corresponding choices in terms of methodology, data collection, and analysis methods. Critical research tends to aim to understand the totality of the phenomenon under study [29], including its historical context. Critical scholarship values reflexivity as the explicit consideration of the role of research and researchers in their social context [134]. It is sometimes seen as non-performative in the sense that it does not see research as a means to achieve predefined organisational or societal aims [135,136].
The emancipatory intention as well as the ontological and epistemological positions that Habermas discusses are reflected in different types of “knowledge interests” [28]. Habermas distinguishes between three types of knowledge interests: technical, practical, and emancipatory. These correspond to different concepts of rationality and different intentions and likely outcomes. He does not reject the technical and practical knowledge interests, which correspond to positivist, objectivist, and instrumental inquiries, but he suggests that the emancipatory knowledge interest is the preferred one to establish a link between theoretical knowledge and practice [137].
These characteristics of critical IS research lead to a set of frequent topics that it covers. These topics typically link to the causes and mechanisms of suppression and alienation and to the reasons why people can benefit from emancipation. One broad topic area has to do with power and with the way information systems can convey, reproduce, and perpetuate often problematic power relationships [138,139]. Questions include how technology can be used to control people [140], which does not have to be negative and problematic but often is. The same is true for oppression, which can be a side-effect of technology use and may not be based on malicious intent [131]. Power relationships can be part of traditional political relationships [141], but they can also be enacted in other uses of technology, such as technology-based surveillance [142,143].
Power relationships arise in specific socio-economic contexts and a key one that critical research is interested in is that of capitalism. Much critical research is informed by Marx’s [144] critique of capitalism. Topics related to capitalism include the class society that capitalism tends to engender, the role of workers including their exploitation [141], the role of trade unions [145,146], and the role of managers [147]. Capitalism tends to commodify aspects of social life that previously were not part of market exchanges [148,149]. Furthermore, capitalism is based on a certain type of means–ends rationality, which is also frequently targeted by critical research, as it can be a key obstacle to emancipation [150,151]. This type of rationality can turn into or hide ideologies [152] that can become reified in information systems and thus difficult to challenge. Topics of interest to critical researchers are of course not limited to capitalism but can cover all possible causes of alienation and oppression, notably questions of gender and race [153,154].
Critical scholarship in IS can draw on a rich array of theoretical positions. These include postcolonialism, postmodernism, queer theory, and others. It can draw on the work of authors like Foucault [155,156,157], Bourdieu [158], and Freire [128], to name just a few. However, in practice, there has been a strong focus on Habermas’s work, to the point where it has been observed [159] that Habermasian perspectives have “largely become synonymous with the critical in IS”. This has led to calls for a broadening of critical approaches and an analysis of the limitations of Habermas’s work.
The application of critical theory is not limited to IS and rich critical traditions can be found in other fields that have a bearing on data and data governance, like critical management studies [160], critical legal studies [161,162], critical neuroscience [163], or critical research in technology [164,165,166]. There have even been initial steps towards the creation of a discourse on critical data studies [167].
In this paper, I draw explicitly on Habermas’s theories and the reception of his work in the IS field for several reasons. My initial interest in questions of data and its governance lies in their ethical implications. It has been argued that all critical theory must be based on ethics [168] or a value position [133]. Habermas offers a theoretical position that not only sees normative concerns as crucial in all communication but has developed an explicit ethical theory in discourse ethics that is relevant to information systems [169]. Furthermore, Habermasian work offers an overall perspective that includes ontological and epistemological aspects that are of crucial importance for data. At the same time, his work covers societal and political aspects that can have crucial implications for ethical concerns but that are often not explicitly conceptualised in current ethical theories. Critical research based on Habermas’s work therefore offers a unique perspective for the following investigation of data and its governance and in particular the ethical aspects it covers.

4. Habermasian Perspectives on Data and Data Governance

In order to understand how the TCA can guide our knowledge and practice of data governance, it is helpful to make use of an example, in this case, the example of the governance of neuro data.

4.1. The Example of Neuro Data

The term “neuro data” in this paper stands for data that relates to the brain in the broadest sense. This includes human research data as well as clinical data, including brain imaging data such as functional magnetic resonance imaging (fMRI) data, other technical measures such as electroencephalograms (EEGs), and other data that allow us to understand brain functions, such as behavioural or other diagnostic data. In addition to human data, neuro data can include animal brain data, typically collected from laboratory animals such as mice, rats, or monkeys, as well as technical data, such as brain simulation data. Neuro data is thus not so much a well-defined type of data as a family of different types of data that can be used for a variety of purposes, from fundamental research to the diagnosis and treatment of brain-related illnesses.
Neuro data has always been a basis of neuroscience and can thus be traced back to the early steps of academic neuroscience in the 1960s, beyond that to the work of Cajal, and possibly even earlier. However, in line with much other scientific research, data in neuroscience has been taking a more prominent role due to new methods of data collection, analysis, and representation. Increasingly large parts of neuroscience research rely on big data applications. The data supporting such research can come from different scales, ranging from molecular and genetic data to microscopic and brain-level data, all the way to population neuroscience [170].
There are several reasons why thinking about neuro data can help clarify questions about responsible data governance. Firstly, neuro data is an example of the type of large-volume heterogeneous data that is increasingly used for research in biomedicine and beyond, which is a key driver for data governance policy [96]. It raises numerous ethical questions about the appropriate and ethical collection and use of data [68]. The ethical concerns linked to the combination of neuroscience and information technology are visible but far from addressed [171]. This includes specific questions on well-defined issues such as the possibility of anonymisation [172] and also a growing awareness of the need for larger international data governance structures if the promise of big data research is to be realised [173]. In addition to these rather specific questions that pertain directly to data governance, neuro data raises many broader ethical and social questions. These include questions of ownership and benefits from data and also fundamental questions about how we define diseases, how societies structure their healthcare systems, and even what it means to be human, to be healthy or normal and to be part of society. This broad array of questions and concerns makes neuro data in the broad sense outlined here an interesting example of data that can help us understand the relevance of Habermas’s work for responsible data governance.
The use of neuro data as an example allows me to draw on long personal experience of dealing with data governance. This paper is of a conceptual nature and its main argument is theoretical, proposing that a Habermasian reading of data governance can enrich our understanding of its ethical and related aspects. However, it is informed by ten years of work leading data governance in one of the largest publicly funded research initiatives in this area. I served as the Ethics Director of the Human Brain Project from 2015 to 2023. The Human Brain Project was an EU project funded under the Future and Emerging Technologies (FET) Flagship funding stream. It brought together computer science and neuroscience with a view to establishing an IT-driven infrastructure for neuroscientific research [174]. The project involved more than 500 researchers from more than 100 partner organisations over a period of 10 years (2013 to 2023), covering numerous topics including traditional human and animal neuroscience as well as technical research and development in areas such as simulation, neuromorphic computing, and neurorobotics. I do not want to spend much space going into detail on the controversy surrounding the project [175,176]. Suffice it to say that the project has now concluded and produced some impressive results [177,178]. Of interest to this article is that the project included a workstream on ethical, social, and legal issues and developed approaches to integrate responsible innovation principles and practices throughout its lifetime [179,180,181,182,183]. I was involved in these activities through my role as the Ethics Director, which included the task of co-chairing the project’s Data Governance Working Group. While this article does not include formal empirical research and conventional descriptions of data collection and analysis, it is nevertheless informed by extensive empirical experience of data governance. I have included this paragraph in keeping with the critical research tradition’s focus on reflexivity and the explicit positioning of the role of the researcher. From a Habermasian perspective, it provides backing for my claims to the authenticity and legitimacy of the arguments I develop in the coming sections.

4.2. The Reality of Data

I start the exploration of the relevance of Habermas’s work for data governance by looking at the nature of data, which drives the types of truth claims it can support and the consequences that arise from them. The argument is that one should not only do the right thing with data but that data influences what counts as shared social reality and thus what counts as doing the right thing in principle.
Data has an ontological quality in the philosophical meaning of the term, i.e., it determines what counts as being or as real. This statement touches on the philosophical discussion of ontology along the lines of a debate between idealism and realism [184,185]. The question of the ontological basis of statements about the nature of the phenomena that can be observed constitutes a key component of the paradigm debate in the social sciences [26,186], which has strong echoes in the IS field [28,187,188]. Habermas’s position as a representative of critical research seems to suggest an anti-realist position in this debate [134]. However, one can argue that the influence of data on what counts as real and thus on what can count as a true statement is similarly pertinent for realist positions.
The key point for this paper arising from this discussion is the recognition that data cannot simply be accepted as data but that the way it is generated, collected, processed, presented, and perceived must be scrutinised. This scrutiny of data is an activity that can be supported by appropriate data governance structures.
The focus on the social processes of data generation furthermore calls into question some of the assumptions of the current data governance debate. One key problem arising in big data research, notably AI-related work, is that of bias and the resulting unfair discrimination. It is worth reflecting on the origins of such claims and how appropriate data governance is meant to address them. The logic of the argument found in many contributions to the discussion of bias is that social biases are reflected in the data that is collected and that certain ways of processing such data, notably through machine learning, can reproduce and perpetuate the bias, thus leading to the unfair discrimination of individuals on the basis of their gender, race, age, etc. One response that is gaining prominence and has found its way into proposed legislation [189] is to require data to be without bias.
In light of the socially constructed nature of data, as implied in the TCA, such a requirement for objectivity and an absence of bias is deeply problematic. It would require a God’s eye “view from nowhere” [190], which is not available. Instead, a more appropriate response would be to expose the socially constructed and discursive nature of data and to open the processes that produced the data to scrutiny. This would still allow for highlighting existing biases with regard to protected characteristics such as gender, race, or age. It would serve as a warning, however, that even where such biases are recognised and removed or compensated for in a dataset, there may still be unobserved biases with consequences that at present are simply not yet recognised.
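To see why this warning matters in practice, consider the well-known problem of proxy variables in machine learning. The following minimal Python sketch (a hypothetical illustration, not drawn from this paper; the synthetic data and all variable names are invented for the purpose) shows how a model that is never shown a protected attribute can still reproduce a historical bias through a correlated feature:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10000
group = rng.integers(0, 2, n)            # hypothetical protected attribute (0/1)
skill = rng.normal(0.0, 1.0, n)          # legitimate predictor
proxy = group + rng.normal(0.0, 0.5, n)  # innocuous-looking feature correlated with group

# Historical labels encode a social bias against group 1.
label = (skill - 0.8 * group + rng.normal(0.0, 0.5, n) > 0).astype(int)

# "Fairness through unawareness": the protected attribute is excluded from training.
X = np.column_stack([skill, proxy])
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: positive prediction rate {pred[group == g].mean():.2f}")
# The rates differ markedly: the proxy lets the bias in the historical
# labels re-enter the model's decisions despite the attribute's removal.

Auditing for known protected characteristics can catch a case like this one, but the same mechanism can operate through proxies for characteristics that nobody has yet thought to audit, which is precisely the point made above.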
Neuro data can serve as a strong example that demonstrates these issues. The popular narrative around neuroscience argues that brain-related diseases have their cause in the brain, typically where some brain functions do not work as expected. Neuroscience uses data to help understand how the brain functions; this allows the identification of anomalies, which in turn supports diagnoses and, where possible, treatments. This narrative is plausible, as there are brain-related diseases where fairly straightforward causal chains can be established, e.g., in epilepsy or Alzheimer’s disease, where malfunctions of the brain lead to recognisable symptoms. However, this is less obvious for other brain-related diseases and mental health issues such as depression or anxiety, where the causal factors are more complex and environmental and social factors play a role. There are furthermore attempts to understand the genetic components of brain diseases, which are typically complex and multifactorial.
The ontological status of data and the relationship between data and the underlying phenomenon are important to highlight. The example of neuro data shows that there can be uncertainty or disagreement about the underlying phenomenon (e.g., the nature of a mental health condition) as well as about the link between data and the phenomenon. Neuro data can represent important aspects of the phenomenon, e.g., through imaging data, EEG data, or data on symptoms, but it is fundamentally different from the phenomenon in question. Neuro data and mental health questions highlight the importance of keeping the phenomenon and its representation apart and of remaining aware of this distinction. The first-person perception of a phenomenon is very different from its representation through (digital) data, and this ontological gap cannot be bridged.
The TCA offers a way of dealing with this distinction between phenomenon and data. A Habermasian discourse not only opens the possibility of querying conclusions drawn from neuro data but also allows for questioning the very nature of the data and the assumptions associated with it. This includes the question of whether particular symptoms should be accepted as diseases, which is a sub-question of what should be considered a disease in the first instance. Such macro-level discussions then immediately link back to the current state of the biomedical art in terms of the definitions and mechanisms of diseases and the way these can be described and measured, which is done using data. The social or natural reality of data is clearly part of the broader question of how we understand and approach diseases and cannot be divorced from broader questions around the nature of reality, the nature of disease, and the nature of data. All of these can be addressed through discourses, which determine their (always provisional) nature through temporary consensus.

4.3. The Truth of Data

A key reason for collecting and processing data is to derive true statements. Scientific research aims to contribute to knowledge, and one of the hallmarks of knowledge is that it is true. This raises the question of what counts as truth. We have seen earlier that for Habermas, each statement contains a validity claim to truth, but how can we know whether the statement is actually true? Habermas’s TCA implies a consensus theory of truth. A statement is true if it has been accepted by the relevant stakeholders in a practical discourse. Such a discourse should aim to approximate the ideal speech situation, thus allowing all who are affected to contribute, question the content, bring in new positions, and be heard.
One implication of this is that truth is provisional. Even in the rare cases where the participants in a discourse agree on the truth value of a statement, this agreement can change when new evidence emerges or new arguments are developed. However, Habermas does not think that truth is entirely elusive or arbitrary. He suggests that discourses asymptotically approach truth.
To some extent, one can argue that the scientific discourse is modelled on these principles of the ideal speech situation. There are no formal constraints on contributing to the discourse, e.g., by writing scientific papers. Ideas are assessed on the basis of their merit through the transparent mechanism of peer review, which allows contributors to understand the logic of assessments and respond accordingly. In practice, however, the scientific publication process is clearly far removed from the ideal speech situation. There are dominant institutions and voices that can dictate what gets heard. Financial interests drive scientific research and can take over, and the majority of researchers have neither the level of training nor the financial or academic standing to contribute to the discourse in the elite publication outlets that determine scientific orthodoxy.
Data play a crucial role in these discourses that determine truth, as they are typically the basis of scientific truth claims. One can see the state of the art in research as the current consensus view, and thus as the truth. This consensus is represented in standard outputs. A good example of such a consensus in the neuroscience domain is the Diagnostic and Statistical Manual of Mental Disorders (DSM). This publication by the American Psychiatric Association is used for the classification of mental disorders, providing standard criteria and guidance for diagnosis and treatment. The DSM is based on research, supported by data, and represents the shared position of the psychiatric community. At the same time, it is frequently and in many respects contested. The current version of the DSM is the fifth, indicating that four previous iterations required major updates and new versions. Specific diagnoses and their links to symptoms can change over time as new conditions are identified and treatments and interventions change. The scientific consensus represented in the DSM is based on research, and thus on data and its interpretation. It is open to challenge and modification, again on the basis of data. Recent developments in data storage, processing, and analysis techniques furthermore provide opportunities for novel insights, for example by linking genetic data to traditional neuroscientific data or by linking neuro data to environmental data, promising better understanding and new knowledge.
The interpretation of the DSM as the outcome of a truth-oriented discourse supports the argument that the scientific system can be seen as a discourse that aims to approximate the ideal speech situation. However, it is equally easy to see the limitations of this interpretation: the scientific discourse is largely confined to experts and uses an often inaccessible language, and for it to live up to Habermasian requirements, it would need to be developed further, for example by broadening inclusion and offering support for publications from authors who represent underserved communities. The example of neuro data suggests, however, that this is unlikely to be enough. Looking at the underlying ideas of the TCA and discourse ethics, it is clear that those who are affected should have an opportunity to contribute to the discourse. In the case of neuro data, the set of affected people and groups goes far beyond neuroscientists. It includes people who suffer from brain-related diseases, their carers, and healthcare professionals, as well as a raft of organisations that play a role in mental health care, including insurers, pharmaceutical companies, hospitals, regulators, and many others.
This list shows one of the fundamental difficulties of Habermasian discourses, namely the varying ability to contribute to them. For all the stakeholders just listed, neuro data is important because it is the basis upon which evidence-based decisions about definitions, diagnosis, and treatment of brain-related diseases are made. Their ability to understand and interpret the data differs vastly, as does their ability to make practical contributions to the discourse.

4.4. The Authenticity of Data

The one validity claim that Habermas sees implied in each speech act but that is probably the least widely discussed in applications of his work is that of authenticity. For Habermas, this validity claim (Wahrhaftigkeit in the original, which could also be translated as "truthfulness") refers to a speaker's subjective world [2]. By producing a speech act, the speaker claims to be authentic, i.e., they claim not only that the statement is objectively true and ethically appropriate but also that they subjectively believe it to be so.
Authenticity is a difficult validity claim to deal with because it refers to an internal state of the speaker, which is impossible to know for anybody but the speaker. Even for the speaker themselves, it may be difficult to assess. However, authenticity is of high importance in critical research, which deals with issues such as intentional deception as well as concepts such as ideology [191] or false consciousness, which is invoked to explain why people undertake actions that are against their own best interests [192,193].
Data can play an important role with regard to claims of authenticity. Data can support truth claims and at the same time strengthen an individual’s belief in the truth of their statements, thus supporting their claim to being authentic and truthful. This is a situation that one can typically observe in scientific research, including neuroscience, where scientific data is the basis of theories and empirical insights that are broadly accepted as true. Strong foundations in data are likely to be one cause of the general trust in science, which is closely linked to an acceptance of authenticity claims.
The flip side of this link between data and authenticity is that data can be used to intentionally or unintentionally hide a lack of authenticity. This could be the case where research suggests a correlation between certain biomarkers identifiable in neuro data and mental health issues, and the speaker has doubts about the causal relationship but fails to voice those doubts. Rendering data inaccessible through data governance mechanisms can have the effect of masking the basis of inauthentic statements. Data can thus play a role in supporting or limiting authenticity and might be used in discourses where claims of authenticity are queried.
The preceding subsections looked at the role of data and its governance from the perspective of validity claims, exploring how data can shape claims to truth, rightness, and authenticity. These claims are made by individual speakers, but they play out in, and are affected by, the broader socio-economic and political environment. One of the benefits of a Habermasian perspective is that it covers this broader context as well.

4.5. The Politics of Data

Habermas’s work explicitly considers the political function of discourses and can be regarded as a justification of liberal social democratic political regimes. This has been picked up by IS researchers who have made use of his ideas in areas that touch on basic human needs, such as healthcare [194,195,196] or ICT for development [197,198], or where IS research touches on political processes, e.g., in e-government [199,200] or ICT regulation [201].
From the perspective of the TCA (and numerous other contributions by Habermas that are not covered in detail in this article), the current political and legal landscape can be interpreted as the outcome of discourses held in societies that led to the creation and acceptance of norms, rules, and regulations. For example, concerns about data protection and privacy [202], largely triggered by technical developments, have led to a legal regime including the General Data Protection Regulation in Europe [107] that drives much of data governance practice. The relevance of this perspective comes from the fact that it establishes the link between different normative regimes, notably ethics, politics, and the law. This link matters because it highlights that ethics can lead to political agreements (as well as disagreements) that can find their expression in law, thus demonstrating that ethics in data governance may find its expression in legal compliance. Perhaps more importantly, this link between normative regimes highlights the fluid nature of normative agreements. Interpreting ethical as well as political positions as outcomes of practical discourses shows that they have a temporary nature and are typically open to revision. This explains why ways to implement responsible data governance are not set in stone but develop alongside technical developments and according to social preferences. It also explains why there are justifiable local differences in data governance practices. Despite broad agreement on many of the fundamental values that data governance safeguards, these are expressed differently in different contexts, jurisdictions, and cultures. There is little doubt, for example, that privacy is seen as an ethical value across most current societies, but the level of protection of privacy and the mechanisms to protect it differ significantly.
The example of neuro data can shed some additional light on this political nature of data governance. Neuro data can have its origins in different types of environments. Much of it is generated by publicly funded research, often undertaken in universities. Another large source of neuro data is the healthcare setting, which is funded in vastly different ways in different countries. Private companies, foundations, and many other organisations also contribute to the generation of neuro data. This means that questions of the ownership of neuro data and the legitimacy of being able to benefit from it are contested. A typical question would be whether companies should be given access to publicly or collectively funded data from research and healthcare and, if so, how the (financial) benefit arising from this access should be allocated or distributed. Again, there is no global agreement on this, and different attitudes to the functioning and legitimacy of markets may offer different answers to such questions. It is notable, however, that the broader scepticism towards the dominance of big tech companies [91,92] is driving specific data governance regimes. In Europe, where market-oriented activities are often regarded with a large amount of scepticism, one can observe attempts to structure data governance in ways that can help address such problems of ownership and market dominance. In the UK, for example, there are discussions of data trusts and data cooperatives as means to mediate between individual data subjects and data users for the benefit of the data subjects [203]. Similarly, the EU's Data Governance Act [204] established a framework for the creation of specifically designed and recognised "Data Altruism Organisations" that would serve a similar role. The EU is currently developing "Common European Data Spaces" (https://digital-strategy.ec.europa.eu/en/policies/data-spaces, accessed on 27 April 2024) that are meant to allow for data sharing and collaboration with a view to harnessing the value of data while avoiding some of the pitfalls related to data protection or intellectual property. Using a Habermasian figure of thought, these approaches to data governance that aim to reduce the influence of market mechanisms can be interpreted as a means to avoid the colonisation of the lifeworld. Whether they can and will achieve that is at least partly an empirical question that goes beyond the confines of this paper.
A further set of political implications related to neuro data arises in the area of neurorights [205]. The neurorights discourse is a relatively recent phenomenon that revolves around the question of whether the increasing ability to understand and manipulate the human brain calls for new legal rights and protections going beyond existing legal mechanisms. Neurorights can be defined as "the ethical, legal, social, or natural principles of freedom or entitlement related to a person's cerebral and mental domain; that is, the fundamental normative rules for the protection and preservation of the human brain and mind" [206]. At this point, there is a lack of agreement on whether and to what degree such neurorights are required. One key concern is that of privacy, where there are questions of whether existing data protection regulations are sufficient to deal with new types of data and data concerns linked to neurotechnology [207]. In 2021, Chile became the first country to recognise neurorights in its constitution [208]. It is impossible to do justice to the discussion of neurorights in this paper. Suffice it to say that it is partly triggered by data protection concerns but is also strongly influenced by our rapidly expanding knowledge of the brain, which is based on ever-growing data on the brain. Neurorights can therefore be understood as one further political implication of neuro data.

5. Implications

5.1. Ethics, Data Governance, and Critique

This article explores the role of data and its governance and the ethical implications these can have. There is little doubt that data governance is of ethical relevance, starting with the observation that it provides part of the response to well-established ethical concerns related to data protection and intellectual property. By drawing on Habermas's TCA and the tradition of critical research in IS and other fields, I have argued that the ethical relevance of data and its governance goes beyond these immediate and well-recognised concerns.
The TCA provides a conceptual basis for understanding and expressing that data is not an objective representation of an immutable reality. Instead, data is deeply embedded in discourses, which both constitute what we mean by data and how we govern it and are in turn driven by this conceptualisation of data and its governance. Data can constitute our social reality; it can support scientific as well as broader social positions across domains. Data governance, the way it is used to facilitate access to data, and the way in which it determines what counts as data thus have a potentially profound, albeit often hidden, impact on the very nature of our social reality.
Drawing on the example of neuro data, I have argued that data and its governance play an important role not just in measuring and understanding neurological and mental health and illness but also in creating these concepts in the first instance. Data inform and support the always provisional consensuses on what counts as a brain-related or mental illness, which can have significant ethical implications for individuals who are diagnosed as well as for the broader environment of mental health and illness, including family, carers, and local and national health services. Where data governance renders access to and understanding of data difficult, this can lead to the reification of diagnoses and diseases. This means that the link between symptoms, data, diagnoses, and treatment is closed off and no longer subject to questioning. This can support ideological positions which, for example, ascribe certain mental states to individuals on the basis of characteristics such as age, gender, or race, even though alternative interpretations of the data are possible.
The epistemological and ontological ramifications of data and its governance thus play an important role with regard to ethical questions. Using figures of thought from critical theory allows for drawing a line from the role of data and its governance in validity claims to broader social, political, and economic concerns. One such concern is that it is in the interest of some powerful actors to promote the popular narrative, as it allows for the development of medicines and the creation of markets where these medicines can be sold at enormous profit. This is a topic that critical neuroscience has explored [163,209] and where critical approaches to neuroscience, information systems, and data meet. It is an example of a topic area where critical neuroscience and critical data studies [167] could work together to better understand the phenomena in question, and where a Habermasian perspective can help to frame the problems.
However, critical theory is not just about understanding phenomena but aims to provide practically relevant insights that can contribute to an improvement of social reality. It is thus important to look at the implications that the perspective developed in this article has for data governance practice.

5.2. Implications of a Habermasian Perspective for Data Governance

Based on the account so far, one can define responsible data governance from a TCA perspective as the collection of ways of and approaches to dealing with data that allow and promote open discourses concerning the data, its uses, and its social consequences. Responsible data governance should strengthen the visibility, questioning, and deliberation of the validity claims made on the basis of, or about, data. It needs to be sensitive to the possibility of the colonisation of the lifeworld. The resulting discourses should be open to the full array of data-related questions, ranging from the ontological question of how data shapes and constitutes the reality we live in all the way to global political issues around how data and data ownership shape the socio-economic world we live in. To put this in Habermasian terms, the use of data and its governance should not be confined to technical and practical knowledge interests but should promote emancipatory forms of inquiry.
This is a tall order, and it would be unrealistic to assume that all data governance activities can fully cater to these demands. However, a Habermasian reading does provide some indications of possible practices and social realities of data governance.
A substantive conclusion for data governance arising from the TCA perspective is that data should be made accessible for scrutiny. This is already well established, in particular in the governance of scientific data, where there is a strong drive towards open data and towards making data FAIR (findable, accessible, interoperable, reusable) [210], an agenda that has been adopted by many research funders. The discursive nature of data means, however, that the FAIR agenda is not enough. It may suffice for scientific scrutiny, but in order to be relevant in broader societal discourses, the social construction of data must be accessible to non-experts. Data owners, controllers, and stewards should think about how they can justify their data, and the conclusions to be drawn from it, in ways that are appropriate for the audiences affected by it. There is probably no easy answer to how this can be achieved. The problem can be turned into a discourse in its own right in which such justifications can be sought and responded to. The important insight is that such justifications of data need to be recognised as legitimate activities of data governance. The problem of accessibility and openness to scrutiny is clearly much more difficult to address in the private sector. Data is seen as a financially valuable asset by most companies and protected by legal, technical, and organisational means. The current economic system strongly discourages the openness of commercial data. Again, there will be no simple solution to this issue. However, in cases where there is a transition from open to protected proprietary data, such as where a company uses publicly funded research data, it would be conceivable to leverage the openness of the research data to require subsequent commercial data to be shared equally, using mechanisms such as data licences that require users of open data to make their own data openly available.
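To make the FAIR agenda slightly more concrete, the following minimal sketch (in Python) checks a dataset record against FAIR-inspired criteria. It is an illustration only: the field names, format labels, and criteria are assumptions chosen for this example, not a reference to any specific standard, repository, or tool.

```python
# Illustrative only: a toy check of a dataset record against FAIR-inspired
# criteria. All field names and format labels are hypothetical examples.

def fair_gaps(record: dict) -> list[str]:
    """Return the FAIR-inspired criteria that the record does not meet."""
    gaps = []
    if not record.get("persistent_identifier"):  # findable
        gaps.append("no persistent identifier (e.g., a DOI)")
    if not record.get("access_url"):  # accessible
        gaps.append("no standard access route")
    if record.get("format") not in {"BIDS", "EDF", "NIfTI"}:  # interoperable
        gaps.append("data not in a community standard format")
    if not (record.get("licence") and record.get("provenance")):  # reusable
        gaps.append("missing licence or provenance information")
    return gaps

record = {
    "persistent_identifier": "doi:10.0000/example-neuro-dataset",
    "access_url": "https://repository.example.org/datasets/42",
    "format": "BIDS",
    "licence": "CC-BY-4.0",
    "provenance": "EEG study, collected 2024, informed consent obtained",
}
print(fair_gaps(record) or "record passes the illustrative FAIR checks")
```

As the argument above suggests, passing such technical checks addresses scientific scrutiny, but it says nothing about whether the data and its social construction are open to questioning by non-experts.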
As can be seen from these points, the Habermasian reading of data and its consequences for data governance are not completely separate from the more established topics of responsible data governance. They point in similar directions, promoting the openness, accessibility, and sharing of data while remaining aware of the need to protect data to safeguard the legitimate interests of data subjects and data owners. The discourse theoretical reading goes beyond established responsible data governance in that it more radically queries the very nature of data and calls for reflection on why data should be recognised as such and how it can be interpreted. In this view, data is not just data: all data is socially constructed, and its use in discourses must be justified. This is primarily an ontological position that has consequences for epistemological questions of how we arrive at shared truth claims. However, these ontological and epistemological questions are directly linked with ethical ones. They touch on the link between truth and power [211,212], which has consequences for ethics as well as politics.
One consequence for practical data governance could be the introduction of stronger requirements for the accessibility of metadata. Data governance routinely deals with the need to provide metadata, i.e., data that describes the underlying data. In a neuro data research setting, such metadata could include data about the time and method of data collection, a description of what the data represents, or how it can be analysed. Such metadata is of crucial importance in the scientific system. However, it does not normally translate well beyond science. More accessible metadata could include, for example, narratives that are understandable for lay observers about why data was collected, what the limitations of a particular type of data or dataset are, or which alternative data could have been collected. To some degree, such questions are covered in the scientific publication process. However, they are typically conveyed in scientific jargon and are thus not accessible to broader audiences. Science education is an important mechanism to overcome this gap, and on the basis of this paper, one can argue that science communication should explicitly include questions of data and data governance. However, it is important to realise that the problem is not just one of better communication to lay audiences but also of allowing other views to be taken seriously in scientific discourses. In the case of neuro data and biomedicine more broadly, movements in this direction can be observed, for example, in the inclusion of patients in research project oversight boards. Such representation of affected stakeholders is currently much less prominent in fields beyond biomedicine, but Habermas's work offers strong arguments as to why it should be common practice in any data governance regime.
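As a way of illustrating the point about lay-accessible metadata, the following minimal sketch pairs conventional technical metadata with plain-language narrative fields of the kind discussed above. It assumes no particular metadata standard; all field names and example values are hypothetical and serve only to show the idea.

```python
# A minimal sketch, assuming no particular metadata standard: conventional
# technical metadata is paired with lay-accessible narrative fields.
from dataclasses import dataclass

@dataclass
class AccessibleMetadata:
    # Conventional technical metadata
    collection_method: str        # how the data was collected
    collected: str                # when the data was collected
    represents: str               # what the data stands for
    # Lay-accessible narrative metadata
    why_collected: str            # purpose in plain language
    known_limitations: str        # what the data cannot show
    alternatives_considered: str  # other data that could have been collected

record = AccessibleMetadata(
    collection_method="resting-state EEG, 64 scalp electrodes",
    collected="March 2024",
    represents="electrical activity at the scalp, not thoughts or feelings",
    why_collected="to look for group-level patterns associated with low mood",
    known_limitations="correlational only; it cannot establish causes",
    alternatives_considered="interviews, diaries, or clinician assessments",
)
```

The design choice in this sketch is simply that the narrative fields are mandatory parts of the record rather than optional extras, reflecting the argument that such justifications should be recognised as legitimate activities of data governance.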
A Habermasian perspective may also help us better understand aspects of data governance by viewing data as the expression of speech acts. The creation and publication of a data set can reasonably be seen as a contribution to a public discourse and thus as a speech act. If data providers accept this view, then they realise that the data itself implies validity claims and that it is legitimate to question these claims. This can then become part of data governance, which takes on the role of allowing claims about the truth, normative rightness, authenticity, and comprehensibility of data to be queried. This view also removes the tendency to see data as objective and links the speech act inescapably with the speaker. To use a term from critical theory, such a position can help avoid the reification of data [213,214], i.e., the turning of socially constructed data into a thing that is beyond questioning. Such reification is a strong contributing factor to the promotion of ideological views of the world, i.e., a position in which apparently self-evident truths lead to advantages for some and disadvantages for others [215,216].
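To show how this view might be operationalised, the following conceptual sketch models the publication of a dataset as a speech act that carries the four validity claims, each of which can be publicly challenged. It is not an implementation of any existing system; all names are hypothetical illustrations of the idea in the text.

```python
# Conceptual sketch: a published dataset as a speech act whose validity
# claims remain tied to a speaker and open to challenge. Hypothetical names.

VALIDITY_CLAIMS = ("truth", "normative_rightness",
                   "authenticity", "comprehensibility")

class DataSpeechAct:
    def __init__(self, dataset_id: str, speaker: str):
        self.dataset_id = dataset_id
        self.speaker = speaker            # the claims stay linked to a speaker
        self.challenges: list[dict] = []  # a public record of contestation

    def challenge(self, claim: str, by: str, grounds: str) -> None:
        """Record that a participant questions one of the validity claims."""
        if claim not in VALIDITY_CLAIMS:
            raise ValueError(f"unknown validity claim: {claim}")
        self.challenges.append({"claim": claim, "by": by, "grounds": grounds})

act = DataSpeechAct("neuro-dataset-42", speaker="research consortium")
act.challenge(
    "truth",
    by="patient representative",
    grounds="the sample excludes older adults, so the claimed link may not generalise",
)
```

The point of the sketch is not the code itself but the governance commitment it encodes: the speaker remains visible, and contesting a dataset's claims is a recorded, legitimate part of the discourse rather than an exception.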
Habermas-informed approaches to responsible data governance can thus derive practical guidance from the core ideas of the TCA. This guidance should reflect that data is both an input into and a resource for discourses as well as an outcome of discourses. Responsible data governance should render visible how data is used to define our shared reality and how it shapes and is shaped by social, economic, and political interests and processes.

6. Conclusions

In this paper, I seek to offer a novel perspective on responsible data governance. I draw on Habermas's work to argue that data has properties that call for a re-interpretation of the nature of data and its social and economic uses. The TCA, with its references to the ideal speech situation and the long history of the discussion of the colonisation of the lifeworld, can sharpen attention to specific data governance practices and highlight broader social, economic, and political influences on data that data governance practice should consider. I use the example of neuro data to illustrate these questions and discuss possible responses.
While this paper gives some indications of what responsible data governance might look like in practice, it is clear that spelling out the implications of a discourse theoretical position for data governance will be a long-term programme of work. I argue that embarking on this programme would be worthwhile. Specific insights and recommendations will need to be developed for different data types, different application scenarios, different types of users, and different types of data processing technologies. Guidance will be needed at the technical level but also for companies, research institutions and research funders, policymakers, and others. The data governance landscape is currently developing rapidly, and much experimentation with different models of data governance is underway. The discourse-oriented perspective can help to develop these models and to broaden the understanding of what responsible data governance should entail.
The approach taken here has limitations that call for further work. The example of neuro data raises many more questions than it can answer. Further and more detailed examples would be helpful, drawing on empirical observations to show how data and data governance are discursively constructed and enacted. I have relied heavily on Habermas's work because it offers a comprehensive perspective that connects ethical questions with ontological, epistemological, and also broader political concerns. It will come as no surprise that an ambitious theoretical position such as Habermas's is subject to numerous critical concerns. His work has been accused of being overly complex, inaccessible, and difficult to apply in practice, and it has been described as relying on rationality to an implausible degree. The ideal speech situation, which makes demands on speakers that are rarely realised in practice, has been widely criticised [217]. The dominance of Habermas in the critical IS discourse has been highlighted, leading to calls for the integration of other discourse theories, such as the views put forward by Foucault [155,218,219,220]. In this paper, I have only touched very cursorily on discourse ethics and discourse-oriented political theory, which warrant a more detailed analysis in the context of data and data governance. The focus on Habermas and the TCA has furthermore kept me from exploring other philosophical positions on data and data governance. A key example would be Floridi's philosophy of information [221] and logic of information [222], in which he develops a taxonomy of data that includes the concepts of "dedomena", which stand for phenomena that exist prior to any interpretation, and "diaphora", which are closer to the concept of data developed here. A more detailed analysis of the relationship between the Habermasian interpretation of data developed here, Floridi's concepts of data, and other philosophical approaches to data would be enlightening but goes beyond the confines of this article.
Despite these limitations and the need for further work, this paper should be of interest to a broad range of audiences and stakeholders involved in data governance. For those who are predominantly involved in the technical side of data governance, the paper shows that ethical aspects are important as a motivator of data governance but go beyond questions of compliance and data protection, thus calling for a broader reflection on technical data governance measures. For policy and decision makers, this paper should have shown that a simple reliance on established data governance mechanisms may be insufficient to address some of the challenges related to data. Overall, the paper has established that ethical questions should be, and already are, at the core of data governance. However, while much thought has been invested and valuable practices have been established to ensure that ethical concerns are reflected in data governance, the use of theoretical frameworks such as Habermas's discourse theory can broaden our understanding of the ethics of data governance and should guide the further development of the theory and practice of data governance.
It may be appropriate to end this paper by returning to metaphors for data. Earlier, I introduced the metaphor of data as the new oil and highlighted its shortcomings. It might be better to think of data as the new air. Data, like air, is now ubiquitous; it is a necessary condition of successful interaction and, indeed, of survival. At the same time, one does not notice it until it becomes scarce or polluted. Just like air, data should not be treated as a commodity. It should be seen as a public good. Responsible data governance then has the task of maintaining this public good and making sure that people can benefit from it equitably. A discourse-informed reading can help us develop data governance in this direction.

Funding

This work was supported by the Engineering and Physical Sciences Research Council [Horizon Digital Economy Research Trusted Data Driven Products: EP/T022493/1] and [RAI UK: Creating an International Ecosystem for Responsible AI Research and Innovation: EP/Y009800/1]. Further funding was received from the European Union for the project STRATEGIC. Complementary funding was received from UK Research and Innovation (UKRI) under the UK government's Horizon Europe Guarantee funding scheme (10123282).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data is contained in the article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Habermas, J. Theorie des Kommunikativen Handelns; Suhrkamp: Frankfurt, Germany, 1981; ISBN 3-518-28775-3. [Google Scholar]
  2. Habermas, J. Moralbewußtsein und Kommunikatives Handeln; Suhrkamp: Frankfurt, Germany, 1983. [Google Scholar]
  3. Habermas, J. Erläuterungen zur Diskursethik; Suhrkamp: Frankfurt, Germany, 1991. [Google Scholar]
  4. Davenport, T.H.; Prusak, L. Working Knowledge: How Organizations Manage What They Know; Harvard Business Review Press: Boston, MA, USA, 1998; ISBN 978-0-87584-655-2. [Google Scholar]
  5. Boulton, G.; Campbell, P.; Collins, B.; Elias, P.; Hall, W.; Laurie, G.; O’Neill, O.; Rawlins, M.; Thornton, J.; Vallance, P. Science as an Open Enterprise; Royal Society: London, UK, 2012. [Google Scholar]
  6. Borrás, S.; Edler, J. The Roles of the State in the Governance of Socio-Technical Systems’ Transformation. Res. Policy 2020, 49, 103971. [Google Scholar] [CrossRef]
  7. Kuhlmann, S.; Stegmaier, P.; Konrad, K. The Tentative Governance of Emerging Science and Technology—A Conceptual Introduction. Res. Policy 2019, 48, 1091–1097. [Google Scholar] [CrossRef]
  8. Lis, D.; Otto, B. Towards a Taxonomy of Ecosystem Data Governance. In Proceedings of the 54th Hawaii International Conference on System Sciences, Wailea, HI, USA, 5–8 January 2021; pp. 6067–6076. [Google Scholar]
  9. British Academy; Royal Society. Data Management and Use: Governance in the 21st Century a Joint Report by the British Academy and the Royal Society; Royal Society: London, UK, 2017. [Google Scholar]
  10. Floridi, L. Soft Ethics, the Governance of the Digital and the General Data Protection Regulation. Philos. Trans. R. Soc. A 2018, 376, 20180081. [Google Scholar] [CrossRef]
  11. DCMS. UK National Data Strategy; Department for Digital, Culture, Media & Sport: London, UK; Department for Business, Energy & Industrial Strategy: London, UK, 2020.
  12. UK AI Council. AI Roadmap; UK AI Council: London, UK, 2021.
  13. UNESCO. First Draft of the Recommendation on the Ethics of Artificial Intelligence; UNESCO: Paris, France, 2020. [Google Scholar]
  14. Parmiggiani, E.; Grisot, M. Data Curation as Governance Practice. Scand. J. Inf. Syst. 2020, 32, 1. [Google Scholar]
  15. Parmiggiani, E.; Østerlie, T.; Almklov, P.G. In the Backrooms of Data Science. J. Assoc. Inf. Syst. 2022, 23, 139–164. [Google Scholar] [CrossRef]
  16. Kraus, S.; Jones, P.; Kailer, N.; Weinmann, A.; Chaparro-Banegas, N.; Roig-Tierno, N. Digital Transformation: An Overview of the Current State of the Art of Research. Sage Open 2021, 11, 21582440211047576. [Google Scholar] [CrossRef]
  17. Majchrzak, A.; Markus, M.L.; Wareham, J. Designing for Digital Transformation: Lessons for Information Systems Research from the Study of ICT and Societal Challenges. MIS Q. 2016, 40, 267–277. [Google Scholar] [CrossRef]
  18. Wessel, L.; Baiyere, A.; Ologeanu-Taddei, R.; Cha, J.; Blegind-Jensen, T. Unpacking the Difference between Digital Transformation and IT-Enabled Organizational Transformation. J. Assoc. Inf. Syst. 2021, 22, 102–129. [Google Scholar] [CrossRef]
  19. Verhoef, P.C.; Broekhuizen, T.; Bart, Y.; Bhattacharya, A.; Qi Dong, J.; Fabian, N.; Haenlein, M. Digital Transformation: A Multidisciplinary Reflection and Research Agenda. J. Bus. Res. 2021, 122, 889–901. [Google Scholar] [CrossRef]
  20. Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power; Basic Books: New York, NY, USA, 1988; ISBN 0-465-03212-5. [Google Scholar]
  21. Alaimo, C.; Kallinikos, J. Managing by Data: Algorithmic Categories and Organizing. Organ. Stud. 2021, 42, 1385–1407. [Google Scholar] [CrossRef]
  22. Alaimo, C.; Kallinikos, J. Organizations Decentered: Data Objects, Technology and Knowledge. Organ. Sci. 2022, 33, 19–37. [Google Scholar] [CrossRef]
  23. Bailey, C.; Madden, A. What Makes Work Meaningful–Or Meaningless. MIT Sloan Manag. Rev. 2016, 57, 53. [Google Scholar]
  24. Bailey, D.E.; Leonardi, P.M.; Barley, S.R. The Lure of the Virtual. Organ. Sci. 2012, 23, 1485–1504. [Google Scholar] [CrossRef]
  25. Möhlmann, M.; Zalmanson, L.; Henfridsson, O.; Gregory, R.W. Algorithmic Management of Work on Online Labor Platforms: When Matching Meets Control: MIS Quarterly. MIS Q. 2021, 45, 1999–2022. [Google Scholar] [CrossRef]
  26. Burrell, G.; Morgan, G. Sociological Paradigms and Organisational Analysis: Elements of the Sociology of Corporate Life; Heinemann Educational: London, UK, 1979; ISBN 0-435-82131-8. [Google Scholar]
  27. Chua, W.F. Radical Developments in Accounting Thought. Account. Rev. 1986, 61, 601–632. [Google Scholar]
  28. Hirschheim, R.; Klein, H.K. Four Paradigms of Information Systems Development. Commun. ACM 1989, 32, 1199–1216. [Google Scholar] [CrossRef]
  29. Orlikowski, W.J.; Baroudi, J.J. Studying Information Technology in Organizations: Research Approaches and Assumptions. Inf. Syst. Res. 1991, 2, 1–28. [Google Scholar] [CrossRef]
  30. Cornelissen, J.P.; Mantere, S.; Vaara, E. The Contraction of Meaning: The Combined Effect of Communication, Emotions, and Materiality on Sensemaking in the Stockwell Shooting. J. Manag. Stud. 2014, 51, 699–736. [Google Scholar] [CrossRef]
  31. Barley, W.C. Anticipatory Work: How the Need to Represent Knowledge Across Boundaries Shapes Work Practices Within Them. Organ. Sci. 2015, 26, 1612–1628. [Google Scholar] [CrossRef]
  32. Monteiro, E.; Parmiggiani, E. Synthetic Knowing: The Politics of the Internet of Things. MIS Q. 2019, 43, 167–184. [Google Scholar] [CrossRef]
  33. Davidson, E.J.; Østerlund, C.S.; Flaherty, M.G. Drift and Shift in the Organizing Vision Career for Personal Health Records: An Investigation of Innovation Discourse Dynamics. Inf. Organ. 2015, 25, 191–221. [Google Scholar] [CrossRef]
  34. Möhlmann, M.; Henfridsson, O. What People Hate about Being Managed by Algorithms, According to a Study of Uber Drivers. Harv. Bus. Rev. 2019, 30, 1–7. [Google Scholar]
  35. Zimmer, M.P.; Järveläinen, J.; Stahl, B.C.; Mueller, B. Responsibility of/in Digital Transformation. J. Responsible Technol. 2023, 16, 100068. [Google Scholar] [CrossRef]
  36. Stahl, B.C. Morality, Ethics, and Reflection: A Categorization of Normative IS Research. J. Assoc. Inf. Syst. 2012, 13, 636–656. [Google Scholar] [CrossRef]
  37. Gal, U.; Hansen, S.; Lee, A. Research Perspectives: Toward Theoretical Rigor in Ethical Analysis: The Case of Algorithmic Decision-Making Systems. J. Assoc. Inf. Syst. 2022, 23, 1634–1661. [Google Scholar] [CrossRef]
  38. Bentham, J. An Introduction to the Principles of Morals and Legislation; Dover Publications Inc.: Mineola, NY, USA, 1789; ISBN 0-486-45452-5. [Google Scholar]
  39. Mill, J.S. Utilitarianism, 2nd ed.; Hackett Publishing Co., Inc.: Indianapolis, IN, USA, 1861; ISBN 0-87220-605-X. [Google Scholar]
  40. Kant, I. Kritik der Praktischen Vernunft; Reclam: Ditzingen, Germany, 1788; ISBN 3-15-001111-6. [Google Scholar]
  41. Kant, I. Grundlegung zur Metaphysik der Sitten; Reclam: Ditzingen, Germany, 1797; ISBN 3-15-004507-X. [Google Scholar]
  42. Aristotle. The Nicomachean Ethics; Filiquarian Publishing, LLC: Minneapolis, MN, USA, 2007; ISBN 978-1-59986-822-6. [Google Scholar]
  43. MacIntyre, A.C. After Virtue: A Study in Moral Theory; University of Notre Dame Press: Notre Dame, IN, USA, 2007; ISBN 978-0-268-03504-4. [Google Scholar]
  44. Levinas, E. Ethique et Infini; Le Livre de Poche: Paris, France, 1984; ISBN 2-253-03426-6. [Google Scholar]
  45. Kohlberg, L. The Philosophy of Moral Development: Moral Stages and the Idea of Justice: 1; Harpercollins: New York, NY, USA, 1981; ISBN 0-06-064760-4. [Google Scholar]
  46. Gilligan, C. In a Different Voice: Psychological Theory and Women’s Development; Reissue; Harvard University Press: Cambridge, MA, USA, 1990; ISBN 0-674-44544-9. [Google Scholar]
  47. Nietzsche, F.W. Zur Genealogie der Moral; Kindle edition; Macmillan: New York, NY, USA, 1887. [Google Scholar]
  48. Annas, J. The Morality of Happiness; New Ed. edition; Oxford University Press, U.S.A.: New York, NY, USA, 1993; ISBN 978-0-19-509652-1. [Google Scholar]
  49. Childress, J.F.; Beauchamp, T.L. Principles of Biomedical Ethics; Oxford University Press: Oxford, UK, 1979. [Google Scholar]
  50. Bowie, N.E. Business Ethics: A Kantian Perspective; Blackwell Publishers: Hoboken, NJ, USA, 1999. [Google Scholar]
  51. De George, R.T. Business Ethics, 5th ed.; Prentice Hall College Div: Englewood Cliffs, NJ, USA, 1999. [Google Scholar]
  52. Wiener, N. The Human Use of Human Beings; Doubleday: New York, NY, USA, 1954. [Google Scholar]
  53. Bynum, T.W. Computer Ethics: Its Birth and Its Future. Ethics Inf. Technol. 2001, 3, 109–112. [Google Scholar] [CrossRef]
  54. Moor, J.H. What Is Computer Ethics. Metaphilosophy 1985, 16, 266–275. [Google Scholar] [CrossRef]
  55. Bynum, T.W. The Historical Roots of Information and Computer Ethics. In The Cambridge Handbook of Information and Computer Ethics; Floridi, L., Ed.; Cambridge University Press: Cambridge, UK, 2010; pp. 20–38. ISBN 0-521-71772-8. [Google Scholar]
  56. Bynum, T.W. Computer and Information Ethics; Zalta, E.N., Ed.; The Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2018. [Google Scholar]
  57. Mason, R.O. Four Ethical Issues of the Information Age. MIS Q. 1986, 10, 5–12. [Google Scholar] [CrossRef]
  58. Culnan, M. “How Did They Get My Name?” An Exploratory Investigation of Consumer Attitudes Toward Secondary Information Use. Manag. Inf. Syst. Q. 1993, 17, 341–363. [Google Scholar] [CrossRef]
  59. Smith, H.; Milberg, S.; Burke, S. Information Privacy: Measuring Individuals’ Concerns about Organizational Practices. Manag. Inf. Syst. Q. 1996, 20, 167–196. [Google Scholar] [CrossRef]
  60. Martinsons, M.; Ou, C.; Murata, K.; Drummond, D.; Li, Y.; Lo, H.; Davison, R. The Ethics of IT Professionals in Japan and China. J. Assoc. Inf. Syst. 2009, 10, 834–859. [Google Scholar]
  61. Adam, A. Computer Ethics in a Different Voice. Inf. Organ. 2001, 11, 235–261. [Google Scholar] [CrossRef]
  62. Loch, K.D.; Conger, S. Evaluating Ethical Decision Making and Computer Use. Commun. ACM 1996, 39, 74–83. [Google Scholar] [CrossRef]
  63. Bryant, A.; Land, F.; King, J.L. Editor’s Introduction to the Special Issue on Ethical Issues in IS Research. J. Assoc. Inf. Syst. 2009, 10, 782–786. [Google Scholar]
  64. Mingers, J.; Walsham, G. Towards Ethical Information Systems: The Contribution of Discourse Ethics. MIS Q. 2010, 34, 833–854. [Google Scholar] [CrossRef]
  65. Sarker, S.; Fuller, M.; Chatterjee, S. Ethical Information Systems Development: A Baumanian Postmodernist Perspective. J. Assoc. Inf. Syst. 2009, 10, 787–815. [Google Scholar]
  66. Berger, K.M.; Roderick, J. National and Transnational Security Implications of Big Data in the Life Sciences; AAAS: Washington, DC, USA, 2014. [Google Scholar]
  67. Metcalf, J.; Keller, E.F.; Boyd, d. Perspectives on Big Data, Ethics, and Society; Council for Big Data, Ethics, and Society: New York, NY, USA, 2016. [Google Scholar]
  68. Mittelstadt, B.; Floridi, L. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Sci. Eng. Ethics 2016, 22, 303–341. [Google Scholar] [CrossRef] [PubMed]
  69. Nerurkar, M.; Wadephul, C.; Wiegerling, K. Ethics of Big Data: Introduction. Int. Rev. Inf. Ethics 2016. [Google Scholar] [CrossRef]
  70. Richards, N.M.; King, J.H. Big Data Ethics. Wake For. Law Rev. 2014, 49, 393. [Google Scholar]
  71. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Penguin: London, UK, 2016; ISBN 978-0-14-198542-8. [Google Scholar]
  72. Coeckelbergh, M. AI Ethics; The MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  73. Dignum, V. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, 1st ed.; 2019 edition; Springer: Berlin/Heidelberg, Germany, 2019; ISBN 978-3-030-30370-9. [Google Scholar]
  74. Bengio, Y.; Lecun, Y.; Hinton, G. Deep Learning for AI. Commun. ACM 2021, 64, 58–65. [Google Scholar] [CrossRef]
  75. Hall, W.; Pesenti, J. Growing the Artificial Intelligence Industry in the UK; Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy: London, UK, 2017.
  76. EDPS. EDPS Opinion on the European Commission’s White Paper on Artificial Intelligence—A European Approach to Excellence and Trust (Opinion 4/2020); EDPS: Brussels, Belgium, 2020. [Google Scholar]
  77. Haenlein, M.; Kaplan, A. A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence. Calif. Manag. Rev. 2019, 61, 5–14. [Google Scholar] [CrossRef]
  78. Veale, M.; Binns, R.; Edwards, L. Algorithms That Remember: Model Inversion Attacks and Data Protection Law. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20180083. [Google Scholar] [CrossRef] [PubMed]
  79. Access Now. Human Rights in the Age of Artificial Intelligence; Access Now: New York, NY, USA, 2018. [Google Scholar]
  80. CDEI. Interim Report: Review into Bias in Algorithmic Decision-Making; Centre for Data Ethics and Innovation: London, UK, 2019.
  81. de Reuver, M.; van Wynsberghe, A.; Janssen, M.; van de Poel, I. Digital Platforms and Responsible Innovation: Expanding Value Sensitive Design to Overcome Ontological Uncertainty. Ethics Inf. Technol. 2020, 22, 257–267. [Google Scholar] [CrossRef]
  82. Rahman, A.U.; Saqia, B.; Alsenani, Y.S.; Ullah, I. Data Quality, Bias, and Strategic Challenges in Reinforcement Learning for Healthcare: A Survey. Int. J. Data Inform. Intell. Comput. 2024, 3, 24–42. [Google Scholar]
  83. AI HLEG. Sectorial Considerations for Trustworthy AI—Taking AI’s Context Specificity into Account; European Commission: Brussels, Belgium, 2020. [Google Scholar]
  84. AIEI Group. From Principles to Practice—An Interdisciplinary Framework to Operationalise AI Ethics; VDE/Bertelsmann Stiftung: Gütersloh, Germany, 2020; p. 56. [Google Scholar]
  85. Babuta, A.; Oswald, M.; Janjeva, A. Artificial Intelligence and UK National Security—Policy Considerations; Royal United Services Institute for Defence and Security Studies: London, UK, 2020. [Google Scholar]
  86. Müller, V.C. Ethics of Artificial Intelligence and Robotics. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Metaphysics Research Lab, Stanford University: Stanford, CA, USA, 2020. [Google Scholar]
  87. Boden, M.A. Artificial Intelligence: A Very Short Introduction; OUP Oxford: Oxford, UK, 2018; ISBN 978-0-19-960291-9. [Google Scholar]
  88. Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; Kraus, S. Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. arXiv 2016, arXiv:2211.06318. [Google Scholar]
  89. Willcocks, L. Robo-Apocalypse Cancelled? Reframing the Automation and Future of Work Debate. J. Inf. Technol. 2020, 35, 286–302. [Google Scholar] [CrossRef]
  90. Muller, C. The Impact of Artificial Intelligence on Human Rights, Democracy and the Rule of Law; Council of Europe, Ad Hoc Committee on Artificial Intelligence (CAHAI): Strasbourg, France, 2020. [Google Scholar]
  91. Zuboff, S. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, 1st ed.; Profile Books: London, UK, 2019; ISBN 978-1-78125-685-5. [Google Scholar]
  92. Nemitz, P. Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philos. Trans. R. Soc. A 2018, 376, 20180089. [Google Scholar] [CrossRef]
  93. Fothergill, B.T.; Knight, W.; Stahl, B.C.; Ulnicane, I. Responsible Data Governance of Neuroscience Big Data. Front. Neuroinform. 2019, 13, 28. [Google Scholar] [CrossRef]
  94. ALLEA, (European Federation of Academies of Sciences and Humanities); FEAM, (Federation of European Academies of Medicine); EASAC, (European Academies’ Science Advisory Council). International Sharing of Personal Health Data for Research; ALLEA: Berlin, Germany, 2021. [Google Scholar]
  95. Fernando, B.; King, M.; Sumathipala, A. Advancing Good Governance in Data Sharing and Biobanking—International Aspects. Wellcome Open Res. 2019, 4, 184. [Google Scholar] [CrossRef]
  96. Nuffield Council on Bioethics. The Collection, Linking and Use of Data in Biomedical Research and Health Care: Ethical Issues—Executive Summary; Nuffield Council on Bioethics: London, UK, 2015. [Google Scholar]
  97. Zook, M.; Barocas, S.; Boyd, D.; Crawford, K.; Keller, E.; Gangadharan, S.P.; Goodman, A.; Hollander, R.; Koenig, B.A.; Metcalf, J.; et al. Ten Simple Rules for Responsible Big Data Research. PLOS Comput. Biol. 2017, 13, e1005399. [Google Scholar] [CrossRef]
  98. European Parliament. REPORT with Recommendations to the Commission on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies; European Parliament, Committee on Legal Affairs: Strasbourg, France, 2020. [Google Scholar]
  99. Richards, L.; Brockmann, K.; Boulanini, V. Responsible Artificial Intelligence Research and Innovation for International Peace and Security; Stockholm International Peace Research Institute: Stockholm, Sweden, 2020. [Google Scholar]
  100. FRA. Getting the Future Right—Artificial Intelligence and Fundamental Rights; European Union Agency for Fundamental Rights: Luxembourg, 2020. [Google Scholar]
  101. Haque, A.; Milstein, A.; Fei-Fei, L. Illuminating the Dark Spaces of Healthcare with Ambient Intelligence. Nature 2020, 585, 193–202. [Google Scholar] [CrossRef] [PubMed]
  102. Topol, E.J. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat. Med. 2019, 25, 44–56. [Google Scholar] [CrossRef]
  103. Sipior, J.C. Considerations for Development and Use of AI in Response to COVID-19. Int. J. Inf. Manag. 2020, 55, 102170. [Google Scholar] [CrossRef]
  104. Nishant, R.; Kennedy, M.; Corbett, J. Artificial Intelligence for Sustainability: Challenges, Opportunities, and a Research Agenda. Int. J. Inf. Manag. 2020, 53, 102104. [Google Scholar] [CrossRef]
  105. Berendt, B. AI for the Common Good?! Pitfalls, Challenges, and Ethics Pen-Testing. Paladyn J. Behav. Robot. 2019, 10, 44–65. [Google Scholar] [CrossRef]
  106. Findlay, M.; Seah, J. An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections. In Proceedings of the 2020 IEEE/ITU International Conference on Artificial Intelligence for Good (AI4G), Geneva, Switzerland, 21–25 September 2020; pp. 192–197. [Google Scholar]
  107. GDPR. REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation). Off. J. Eur. Union 2016, L119/1. [Google Scholar]
  108. Eke, D.; Aasebø, I.E.J.; Akintoye, S.; Knight, W.; Karakasidis, A.; Mikulan, E.; Ochang, P.; Ogoh, G.; Oostenveld, R.; Pigorini, A.; et al. Pseudonymization of Neuroimages and Data Protection: Increasing Access to Data While Retaining Scientific Utility. Neuroimage Rep. 2021, 1, 100053. [Google Scholar] [CrossRef]
  109. Choudhury, S.; Fishman, J.R.; McGowan, M.L.; Juengst, E.T. Big Data, Open Science and the Brain: Lessons Learned from Genomics. Front. Hum. Neurosci. 2014, 8, 239. [Google Scholar] [CrossRef]
  110. Schaefer, M.; Heinze, H.-J.; Rotte, M.; Denke, C. Communicative versus Strategic Rationality: Habermas Theory of Communicative Action and the Social Brain. PLoS ONE 2013, 8, e65111. [Google Scholar] [CrossRef]
  111. Ihde, D. Technology and the Lifeworld: From Garden to Earth; Indiana University Press: Bloomington, Indiana, 1990; ISBN 978-0-253-20560-5. [Google Scholar]
  112. Watts, M. Why Data Is the New Oil. FutureScot. 2021. Available online: https://futurescot.com/why-data-is-the-new-oil/ (accessed on 28 April 2025).
  113. Searle, J.R. Speech Acts: An Essay in the Philosophy of Language; Cambridge University Press: Cambridge, UK, 1969; ISBN 978-0-521-09626-3. [Google Scholar]
  114. Ross, A.; Chiasson, M. Habermas and Information Systems Research: New Directions. Inf. Organ. 2011, 21, 123–141. [Google Scholar] [CrossRef]
  115. Wilson, F.A. The Truth is out There: The Search for Emancipatory Principles in Information Systems Design. Inf. Technol. People 1997, 10, 187–204. [Google Scholar] [CrossRef]
  116. White, S.K. The Recent Work of Jürgen Habermas: Reason, Justice and Modernity; Cambridge University Press: Cambridge, UK; New York, NY, USA, 1988; ISBN 978-0-521-34360-2. [Google Scholar]
  117. Apel, K.-O. The Response of Discourse Ethics to the Moral Challenge of the Human Situation as Such and Especially Today; Peeters Publishers: Leuven, Belgium, 2002; ISBN 90-429-0978-1. [Google Scholar]
  118. Apel, K.-O. Diskurs Und Verantwortung.: Das Problem Des Übergangs Zur Postkonventionellen Moral; Suhrkamp Verlag KG: Frankfurt a. M., Germany, 1990; ISBN 3-518-28493-2. [Google Scholar]
  119. Bonner, W.; Chiasson, M. If Fair Information Principles Are the Answer, What Was the Question? An Actor-Network Theory Investigation of the Modern Constitution of Privacy. Inf. Organ. 2005, 15, 267–293. [Google Scholar] [CrossRef]
  120. Cecez-Kecmanovic, D.; Marjanovic, O. IS Serving the Community: The Pragmatic, the Ethical and the Moral Questions. In Proceedings of the International Conference on Information Systems (ICIS), Fort Worth, TX, USA, 13–16 December 2015. [Google Scholar]
  121. Kuhn, T.S. The Structure of Scientific Revolutions; New ed. of 3 Revised ed.; Chicago University Press: Chicago, IL, USA, 1996; ISBN 0-226-45808-3. [Google Scholar]
  122. Kant, I. Kritik der Praktischen Vernunft; Reclam: Ditzingen, Germany, 1986; ISBN 3-15-001111-6. [Google Scholar]
  123. Kant, I. Kritik der Reinen Vernunft; Neuauflage Studienausgabe; Suhrkamp: Frankfurt, Germany, 1995; ISBN 3-518-09327-4. [Google Scholar]
  124. Cecez-Kecmanovic, D. Critical Information Systems Research: A Habermasian Approach. In Proceedings of the 9th European Conference on Information Systems, Global Co-operation in the New Millennium, ECIS 2001, Bled, Slovenia, 27–29 June 2001. [Google Scholar]
  125. Frey, P.; Schaupp, S.; Wenten, K.-A. Towards Emancipatory Technology Studies. Nanoethics 2021, 15, 19–27. [Google Scholar] [CrossRef]
  126. Ngwenyama, O.K. The Critical Social Theory Approach to Information Systems: Problems and Challenges. In Information Systems Research: Contemporary Approaches & Emergent Traditions; Nissen, H.-E., Klein, H.K., Hirschheim, R., Eds.; North Holland: Amsterdam, The Netherlands, 1991; pp. 267–280. ISBN 0-444-89029-7. [Google Scholar]
  127. Cecez-Kecmanovic, D. Basic Assumptions of the Critical Research Perspectives in Information Systems. In Handbook of Critical Information Systems Research: Theory and Application; Howcroft, D., Trauth, E., Eds.; Edward Elgar Publishing Ltd.: Cheltenham, UK, 2005; pp. 19–46. ISBN 1-84376-478-4. [Google Scholar]
  128. Young, A.G. Using ICT for Social Good: Cultural Identity Restoration through Emancipatory Pedagogy. Inf. Syst. J. 2018, 28, 340–358. [Google Scholar] [CrossRef]
  129. Klein, H.K.; Huynh, M.Q. The Critical Social Theory of Jürgen Habermas and Its Implications for IS Research. In Social Theory and Philosophy for Information Systems; Mingers, J., Willcocks, L.P., Eds.; Wiley: Chichester, UK, 2004; pp. 157–237. ISBN 0-470-85117-1. [Google Scholar]
  130. Howcroft, D.; Trauth, E.M. The Implications of a Critical Agenda in Gender and IS Research. Inf. Syst. J. 2008, 18, 185–202. [Google Scholar] [CrossRef]
  131. Kane, G.C.; Young, A.G.; Majchrzak, A.; Ransbotham, S. Avoiding an Oppressive Future of Machine Learning: A Design Theory for Emancipatory Assistants. MIS Q. 2021, 45, 371–396. [Google Scholar] [CrossRef]
  132. Alvesson, M.; Willmott, H. Studying Management Critically; Sage Publications Ltd.: London, UK, 2003; ISBN 0-7619-6737-0. [Google Scholar]
  133. Myers, M.D.; Klein, H.K. A Set of Principles for Conducting Critical Research in Information Systems. MIS Q. 2011, 35, 17–36. [Google Scholar] [CrossRef]
  134. Richardson, H.; Robinson, B. The Mysterious Case of the Missing Paradigm: A Review of Critical Information Systems Research 1991–2001. Inf. Syst. J. 2007, 17, 251–270. [Google Scholar] [CrossRef]
  135. Alcadipani, R.; Hassard, J. Actor-Network Theory, Organizations and Critique: Towards a Politics of Organizing. Organization 2010, 17, 419–435. [Google Scholar] [CrossRef]
  136. Spicer, A.; Alvesson, M.; Kärreman, D. Critical Performativity: The Unfinished Business of Critical Management Studies. Hum. Relat. 2009, 62, 537–560. [Google Scholar] [CrossRef]
  137. Lyytinen, K.; Klein, H.K. The Critical Theory of Jürgen Habermas as a Basis for a Theory of Information Systems. In Research Methods in Information Systems: I.F.I.P. Colloquium Proceedings; Mumford, E., Hirschheim, R., Fitzgerald, G., Wood-Harper, T., Eds.; North-Holland Publishing Co: Amsterdam, The Netherlands, 1985; pp. 219–236. ISBN 0-444-87807-6. [Google Scholar]
  138. Doolin, B. Information Technology as Disciplinary Technology: Being Critical in Interpretive Research on Information Systems. J. Inf. Technol. 1998, 13, 301–311. [Google Scholar] [CrossRef]
  139. Doolin, B. Power and Resistance in the Implementation of a Medical Management Information System. Inf. Syst. J. 2004, 14, 343–362. [Google Scholar] [CrossRef]
  140. Kohli, R.; Kettinger, W.J. Informating the Clan: Controlling Physicians’ Costs and Outcome. MIS Q. 2004, 28, 363–394. [Google Scholar] [CrossRef]
  141. Saravanamuthu, K. Information Technology and Ideology. J. Inf. Technol. 2002, 17, 79–87. [Google Scholar] [CrossRef]
  142. Jackson, P.; Gharavi, H.; Klobas, J. Technologies of the Self: Virtual Work and the Inner Panopticon. Inf. Technol. People 2006, 19, 219–243. [Google Scholar] [CrossRef]
  143. Zirkle, B.; Staples, W.G. Negotiating Workplace Surveillance. In Electronic Monitoring in the Workplace: Controversies and Solutions; Weckert, J., Ed.; Idea Group Publishing: Hershey, PA, USA, 2005; pp. 79–100. ISBN 1-59140-457-6. [Google Scholar]
  144. Marx, K. Das Kapital. Kritik der politischen Ökonomie. Erster Band, Buch I: Der Produktionsprozeß des Kapitals; 8. Auflage, unveränderter Neudruck der 1. Auflage 1962; Dietz Verlag: Berlin, Germany, 1972. [Google Scholar]
  145. Cecez-Kecmanovic, D.; Klein, H.K.; Brooke, C. Exploring the Critical Agenda in Information Systems Research. Inf. Syst. J. 2008, 18, 123–135. [Google Scholar] [CrossRef]
146. Robinson, B. Cybersolidarity: Internet-Based Campaigning and Trade Union Internationalism. In Social Inclusion, Societal and Organizational Implications for Information Systems, Proceedings of the IFIP TC8 WG 8.2 International Working Conference, Limerick, Ireland, 12–15 July 2006; IFIP International Federation for Information Processing; Trauth, E., Howcroft, D., Butler, T., Fitzgerald, B., DeGross, J., Eds.; Springer: New York, NY, USA, 2006; pp. 123–135. ISBN 0-387-34587-6. [Google Scholar]
  147. Elbanna, A.; Newman, M. The Bright Side and the Dark Side of Top Management Support in Digital Transformation—A Hermeneutical Reading. Technol. Forecast. Soc. Chang. 2022, 175, 121411. [Google Scholar] [CrossRef]
  148. Attias, B.A. Technology and the Great Refusal: The Information Age and Critical Social Theory. In Globalization, Technology, and Philosophy; Tabachnick, D., Koivukoski, T., Eds.; State University of New York Press: Albany, NY, USA, 2004; pp. 43–56. ISBN 0-7914-6059-2. [Google Scholar]
  149. Greenhill, A.; Wilson, M. Haven or Hell? Telework, Flexibility and Family in the e-Society: A Marxist Analysis. Eur. J. Inf. Syst. 2006, 15, 379–388. [Google Scholar] [CrossRef]
  150. Paterson, B. We Cannot Eat Data: The Need for Computer Ethics to Address the Cultural and Ecological Impacts of Computing. In Information Technology Ethics: Cultural Perspectives; Premier reference source; Hongladarom, S., Ess, C., Eds.; Idea Group Reference: Hershey, PA, USA, 2007; pp. 153–168. ISBN 978-1-59904-310-4. [Google Scholar]
  151. Probert, S.K. Adorno: A Critical Theory for IS Research. In Social Theory and Philosophy for Information Systems; Mingers, J., Willcocks, L.P., Eds.; Wiley: Chichester, UK, 2004; pp. 129–156. ISBN 0-470-85117-1. [Google Scholar]
  152. Avgerou, C.; McGrath, K. Power, Rationality, and the Art of Living Through Socio-Technical Change. MIS Q. 2007, 31, 295–315. [Google Scholar] [CrossRef]
  153. Kvasny, L.; Richardson, H. Critical Research in Information Systems: Looking Forward, Looking Back. Inf. Technol. People 2006, 19, 196. [Google Scholar] [CrossRef]
  154. Trauth, E.M.; Howcroft, D. Critical Empirical Research in IS: An Example of Gender and the IT Workforce. Inf. Technol. People 2006, 19, 272. [Google Scholar] [CrossRef]
  155. Doolin, B. Information Systems and Power: A Foucauldian Perspective. In Critical Management Perspectives on Information Systems; Brooke, C., Ed.; Butterworth Heinemann: Amsterdam, The Netherlands, 2009; pp. 211–230. ISBN 0-7506-8197-7. [Google Scholar]
  156. Willcocks, L. Foucault, Power/Knowledge and Information Systems: Reconstructing the Present. In Social Theory and Philosophy for Information Systems; Mingers, J., Willcocks, L., Eds.; Wiley: Chichester, UK, 2004; pp. 238–296. [Google Scholar]
  157. Young, M.-L.; Kuo, F.-Y.; Myers, M.D. To Share or Not to Share: A Critical Research Perspective on Knowledge Management Systems. Eur. J. Inf. Syst. 2012, 21, 496–511. [Google Scholar] [CrossRef]
  158. Kvasny, L.; Truex, D. Information Technology and the Cultural Reproduction of Social Order: A Research Paradigm. In Organizational and Social Perspectives on Information Technology, Proceedings of the IFIP TC8 WG8.2 International Working Conference on the Social and Organizational Perspective on Research and Practice in Information Technology, Aalborg, Denmark, 9–11 June 2000; IFIP—The International Federation for Information Processing; Baskerville, R., Stage, J., DeGross, J.I., Eds.; Springer US: Boston, MA, USA, 2000; pp. 277–293. ISBN 978-0-387-35505-4. [Google Scholar]
  159. Marcon, T.; Chiasson, M.; Gopal, A. The Crisis of Relevance and the Relevance of Crisis: Renegotiating Critique in Information Systems Scholarship. In Information Systems Research: Relevant Theory and Informed Practice; Kaplan, B., Truex, D., Wastell, D., Wood-Harper, A., DeGross, J., Eds.; Springer: Boston, MA, USA, 2004; pp. 143–159. [Google Scholar]
  160. Grey, C. Critical Management Studies: Towards a More Mature Politics. In Handbook of Critical Information Systems Research: Theory and Application; Howcroft, D., Trauth, E., Eds.; Edward Elgar Publishing Ltd.: Cheltenham, UK, 2005; pp. 174–194. ISBN 1-84376-478-4. [Google Scholar]
  161. Critical Legal Studies; Blackwell: Oxford, UK, 1987; ISBN 0-631-15718-2.
  162. Unger, R.M. The Critical Legal Studies Movement; Harvard University Press: Cambridge, MA, USA, 1986; ISBN 0-674-17736-3. [Google Scholar]
  163. Choudhury, S.; Slaby, J. Critical Neuroscience: A Handbook of the Social and Cultural Contexts of Neuroscience, 1st ed.; Wiley-Blackwell: Hoboken, NJ, USA, 2011. [Google Scholar]
  164. Brey, P. The Technological Construction of Social Power. Soc. Epistemol. 2008, 22, 71–95. [Google Scholar] [CrossRef]
  165. Delanty, G.; Harris, N. Critical Theory and the Question of Technology: The Frankfurt School Revisited. Thesis Elev. 2021, 166, 88–108. [Google Scholar] [CrossRef]
  166. Feenberg, A. From Critical Theory of Technology to the Rational Critique of Rationality. Soc. Epistemol. 2008, 22, 5–28. [Google Scholar] [CrossRef]
  167. Iliadis, A.; Russo, F. Critical Data Studies: An Introduction. Big Data Soc. 2016, 3, 2053951716674238. [Google Scholar] [CrossRef]
  168. Stahl, B.C. The Ethical Nature of Critical Research in Information Systems. Inf. Syst. J. 2008, 18, 137–163. [Google Scholar] [CrossRef]
  169. Schlagwein, D.; Cecez-Kecmanovic, D.; Hanckel, B. Ethical Norms and Issues in Crowdsourcing Practices: A Habermasian Analysis. Inf. Syst. J. 2019, 29, 811–837. [Google Scholar] [CrossRef]
  170. Schumann, G. Challenges and Future Directions for Investigating the Effects of Urbanicity on Mental Health. Nat. Ment. Health 2023, 1, 817–819. [Google Scholar] [CrossRef]
171. Stahl, B.C.; Rainey, S.; Harris, E.; Fothergill, B.T. The Role of Ethics in Data Governance of Large Neuro-ICT Projects. J. Am. Med. Inform. Assoc. 2018, 25, 1099–1107. [Google Scholar] [CrossRef] [PubMed]
  172. Ravindra, V.; Grama, A. De-Anonymization Attacks on Neuroimaging Datasets. arXiv 2019, arXiv:1908.03260. [Google Scholar]
173. Eke, D.O.; Bernard, A.; Bjaalie, J.G.; Chavarriaga, R.; Hanakawa, T.; Hannan, A.J.; Hill, S.L.; Martone, M.E.; McMahon, A.; Ruebel, O.; et al. International Data Governance for Neuroscience. Neuron 2022, 110, 600–612. [Google Scholar] [CrossRef]
  174. Amunts, K.; Ebell, C.; Muller, J.; Telefont, M.; Knoll, A.; Lippert, T. The Human Brain Project: Creating a European Research Infrastructure to Decode the Human Brain. Neuron 2016, 92, 574–581. [Google Scholar] [CrossRef]
  175. Abbott, A. Human Brain Project Votes for Leadership Change. Nature 2015. [Google Scholar] [CrossRef]
  176. Frégnac, Y.; Laurent, G. Neuroscience: Where Is the Brain in the Human Brain Project? Nature 2014, 513, 27–29. [Google Scholar] [CrossRef]
  177. Frégnac, Y. Flagship Afterthoughts: Could the Human Brain Project (HBP) Have Done Better? eNeuro 2023, 10. [Google Scholar] [CrossRef]
  178. Naddaf, M. Europe Spent €600 Million to Recreate the Human Brain in a Computer. How Did It Go? Nature 2023, 620, 718–720. [Google Scholar] [CrossRef]
  179. Aicardi, C.; Mahfoud, T. Formal and Informal Infrastructures of Collaboration in the Human Brain Project. Sci. Technol. Hum. Values 2022, 49, 403–430. [Google Scholar] [CrossRef]
  180. Evers, K. The Contribution of Neuroethics to International Brain Research Initiatives. Nat. Rev. Neurosci. 2016, 18, 1–2. [Google Scholar] [CrossRef]
  181. Rose, N. The Human Brain Project: Social and Ethical Challenges. Neuron 2014, 82, 1212–1215. [Google Scholar] [CrossRef] [PubMed]
  182. Salles, A.; Bjaalie, J.G.; Evers, K.; Farisco, M.; Fothergill, B.T.; Guerrero, M.; Maslen, H.; Muller, J.; Prescott, T.; Stahl, B.C.; et al. The Human Brain Project: Responsible Brain Research for the Benefit of Society. Neuron 2019, 101, 380–384. [Google Scholar] [CrossRef] [PubMed]
  183. Stahl, B.C.; Akintoye, S.; Bitsch, L.; Bringedal, B.; Eke, D.; Farisco, M.; Grasenick, K.; Guerrero, M.; Knight, W.; Leach, T.; et al. From Responsible Research and Innovation to Responsibility by Design. J. Responsible Innov. 2021, 8, 175–198. [Google Scholar] [CrossRef]
  184. Borgmann, A. Reality and Technology. Camb. J. Econ. 2010, 34, 27–35. [Google Scholar] [CrossRef]
185. Khlentzos, D. Naturalistic Realism and the Antirealist Challenge; MIT Press: Cambridge, MA, USA, 2004; ISBN 978-0-262-11285-7. [Google Scholar]
  186. Berger, P.L.; Luckmann, T. The Social Construction of Reality: A Treatise in the Sociology of Knowledge; Penguin: London, UK, 1966; ISBN 978-0-14-013548-0. [Google Scholar]
  187. Chen, W.; Hirschheim, R. A Paradigmatic and Methodological Examination of Information Systems Research from 1991 to 2001. Inf. Syst. J. 2004, 14, 197–235. [Google Scholar] [CrossRef]
  188. Wernick, P.; Hall, T. Can Thomas Kuhn’s Paradigms Help Us Understand Software Engineering? Eur. J. Inf. Syst. 2004, 13, 235–243. [Google Scholar] [CrossRef]
  189. European Commission. Proposal for a Regulation on a European Approach for Artificial Intelligence; European Commission: Brussels, Belgium, 2021. [Google Scholar]
  190. Nagel, T. The View From Nowhere; Revised edition; Oxford University Press: New York, NY, USA; London, UK, 1989; ISBN 978-0-19-505644-0. [Google Scholar]
  191. Hawkes, D. Ideology, 2nd ed.; The new critical idiom; Routledge: London, UK, 2003; ISBN 0-415-29011-2. [Google Scholar]
192. Gramsci, A. Selections from the Prison Notebooks of Antonio Gramsci; Hoare, Q., Nowell Smith, G., Eds. & Trans.; Lawrence and Wishart: London, UK, 1971; ISBN 0-85315-280-2. [Google Scholar]
  193. Ngwenyama, O.; Rowe, F.; Klein, S.; Henriksen, H.Z. The Open Prison of the Big Data Revolution: False Consciousness, Faustian Bargains, and Digital Entrapment. Inf. Syst. Res. 2023, 35, 1507–2085. [Google Scholar] [CrossRef]
  194. Byrne, E.; Gregory, J. Co-Constructing Local Meanings for Child Health Indicators in Community-Based Information Systems: The UThukela District Child Survival Project in KwaZulu-Natal. Int. J. Med. Inform. 2007, 76, 78–88. [Google Scholar] [CrossRef]
  195. Ross, A.; Marcolin, B.; Chiasson, M. Representation in Systems Development and Implementation: A Healthcare Enterprise System Implementation. AIS Trans. Hum. Comput. Interact. 2012, 4, 51–71. [Google Scholar] [CrossRef]
  196. Shaw, M.C.; Stahl, B.C. On Quality and Communication: The Relevance of Critical Theory to Health Informatics. J. Assoc. Inf. Syst. 2011, 12, 255–273. [Google Scholar] [CrossRef]
  197. Kanungo, S. On the Emancipatory Role of Rural Information Systems. Inf. Technol. People 2004, 17, 407–422. [Google Scholar] [CrossRef]
  198. Puri, S.K.; Sahay, S. The Politics of Knowledge in Using GIS for Land Management in India. IFIP Adv. Inf. Commun. Technol. 2004, 143, 597–614. [Google Scholar]
  199. Arayankalam, J.; Khan, A.; Krishnan, S. How to Deal with Corruption? Examining the Roles of e-Government Maturity, Government Administrative Effectiveness, and Virtual Social Networks Diffusion. Int. J. Inf. Manag. 2021, 58, 102203. [Google Scholar] [CrossRef]
  200. Arayankalam, J.; Krishnan, S. B2C E-Business Use in a Country: The Roles of E-Government Maturity, Corruption, and Virtual Social Networks Diffusion. In Proceedings of the 2020 Pacific Asia Conference on Information Systems, Dubai, United Arab Emirates, 22–24 June 2020. [Google Scholar]
  201. Galhardo, J.A.; de Souza, C.A. ICT Regulation Process: Habermas Meets The Multiple Streams Framework. In Proceedings of the AMCIS 2020, Virtual, 10–14 August 2020. [Google Scholar]
  202. Finn, R.L.; Wright, D.; Friedewald, M. Seven Types of Privacy. In European Data Protection: Coming of Age; Gutwirth, S., Leenes, R., de Hert, P., Poullet, Y., Eds.; Springer: Dordrecht, The Netherlands, 2013; pp. 3–32. ISBN 978-94-007-5170-5. [Google Scholar]
  203. Ada Lovelace Institute; UK AI Council. Exploring Legal Mechanisms for Data Stewardship; Ada Lovelace Institute: London, UK, 2021.
  204. European Commission. Proposal for a Regulation of The European Parliament and of The Council on European Data Governance (Data Governance Act); European Commission: Brussels, Belgium, 2020. [Google Scholar]
  205. Bublitz, J.C. Novel Neurorights: From Nonsense to Substance. Neuroethics 2022, 15, 7. [Google Scholar] [CrossRef] [PubMed]
  206. Ienca, M. On Neurorights. Front. Hum. Neurosci. 2021, 15, 701258. [Google Scholar] [CrossRef] [PubMed]
207. Rainey, S.; McGillivray, K.; Akintoye, S.; Fothergill, T.; Bublitz, C.; Stahl, B.C. Is European Data Protection Regulation Sufficient to Deal with Emerging Data Concerns Relating to Neurotechnology? J. Law Biosci. 2020, 7, lsaa051. [Google Scholar] [CrossRef]
  208. Ruiz, S.; Valera, L.; Ramos, P.; Sitaram, R. Neurorights in the Constitution: From Neurotechnology to Ethics and Politics. Philos. Trans. R. Soc. B Biol. Sci. 2024, 379, 20230098. [Google Scholar] [CrossRef]
  209. Schleim, S. Critical Neuroscience—Or Critical Science? A Perspective on the Perceived Normative Significance of Neuroscience. Front. Hum. Neurosci. 2014, 8, 336. [Google Scholar] [CrossRef]
  210. Wilkinson, M.D.; Dumontier, M.; Aalbersberg, I.J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.-W.; da Silva Santos, L.B.; Bourne, P.E.; et al. The FAIR Guiding Principles for Scientific Data Management and Stewardship. Sci. Data 2016, 3, 160018. [Google Scholar] [CrossRef]
  211. Foucault, M. L’ordre du Discours; Editions Flammarion: Paris, France, 1971. [Google Scholar]
212. Foucault, M. Power/Knowledge: Selected Interviews and Other Writings 1972–1977; Gordon, C., Ed.; Pantheon Books: New York, NY, USA, 1980; ISBN 0-394-51357-6. [Google Scholar]
213. Feenberg, A. Critical Theory of Technology, New ed.; Oxford University Press: New York, NY, USA, 1993; ISBN 0-19-506855-6. [Google Scholar]
  214. How, A. Critical Theory; Traditions in social theory; Palgrave Macmillan: Basingstoke, UK, 2003; ISBN 0-333-75151-5. [Google Scholar]
  215. Freeden, M. Ideology: A Very Short Introduction; Very short introductions; Oxford University Press: Oxford, UK, 2003; ISBN 0-19-280281-X. [Google Scholar]
  216. Gouldner, A.W. The Dialectic of Ideology and Technology: The Origins, Grammar and Future of Ideology; Critical social studies; Macmillan: London, UK, 1976; ISBN 0-333-19757-7. [Google Scholar]
  217. Ashenden, S.; Owen, D. Foucault Contra Habermas: Recasting the Dialogue between Genealogy and Critical Theory; Sage Publications Ltd.: Thousand Oaks, CA, USA, 1999; ISBN 0-8039-7771-9. [Google Scholar]
  218. Brooke, C. What Does It Mean to Be “critical” in IS Research? J. Inf. Technol. 2002, 17, 49–57. [Google Scholar] [CrossRef]
  219. Feenberg, A. Questioning Technology, 1st ed.; Routledge: London, UK, 1999; ISBN 0-415-19755-4. [Google Scholar]
  220. Klecun, E.; Cornford, T. A Critical Approach to Evaluation. Eur. J. Inf. Syst. 2005, 14, 229–243. [Google Scholar] [CrossRef]
221. Floridi, L. The Philosophy of Information; Oxford University Press: Oxford, UK, 2011; ISBN 978-0-19-923239-0. [Google Scholar]
  222. Floridi, L. The Logic of Information: A Theory of Philosophy as Conceptual Design; Oxford University Press: Oxford, UK, 2019; ISBN 978-0-19-883363-5. [Google Scholar]