Article

Fighting Disinformation Online: The Digital Services Act in the American Context

by
Daniela Peterka-Benton
College of Humanities and Social Sciences, Montclair State University, Montclair, NJ 07043, USA
Soc. Sci. 2025, 14(1), 28; https://doi.org/10.3390/socsci14010028
Submission received: 22 November 2024 / Revised: 6 January 2025 / Accepted: 7 January 2025 / Published: 10 January 2025

Abstract

The internet is one of humanity's most significant creations of the modern era. What began roughly 30 years ago has developed into a rich, diverse, but largely unregulated environment we can no longer live without. With the spread of mis- and disinformation worldwide, calls for a safer internet have grown louder. This article discusses the threats disinformation poses to online users, provides a case study of how the European Union's Digital Services Act attempts to protect users' fundamental rights in the online space, and considers whether the Digital Services Act could or should serve as a model for similar legislation in the United States.

1. Introduction—The Digital Frontier

The introduction of the internet has transformed global societies in many ways, including communication, information sharing, commerce, education, and social interactions. On 30 April 1993, CERN released the World Wide Web software into the public domain, and the web has since morphed into a highly complex digital world foundational to many social and economic activities. Many positive changes have come with this, but like almost any newly discovered space in human history, this new frontier also remains largely unconstrained. Due to limited legal oversight and enforcement, it should not come as a surprise that crimes are easy to commit, hateful or even radical content can be shared widely from any place in the world, and online mis- and disinformation have become so pervasive that public opinion can be swayed easily.
The rise of mis- and disinformation has become especially concerning over the last few years, fueled by the extensive pandemic lockdowns, which forced many people to rely on the internet as their sole way of communicating with the outside world. Misinformation can broadly be described as inaccurate or false information unintentionally presented as fact and spread primarily through social media, while disinformation is the deliberate spreading of falsehoods with the intent to mislead (Cambridge Dictionary n.d.). While misinformation conveys information that is broadly incorrect or a false statement of fact, disinformation may convey legitimate facts within fabricated contexts to influence consumers of that information. Disinformation is always introduced with intent and a concealed motive. In that sense, disinformation and propaganda share many features, as both seek to change public opinion (Guess and Lyons 2020; Normandin 2022; Hussain and Soomro 2023).
This paper focuses primarily on the issue of disinformation, which, according to the High-Level Expert Group (HLEG) advising the European Commission on fake news and disinformation online, “includes all forms of false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit” (European Commission, Directorate-General for Communications Networks, Content and Technology 2018). Disinformation has become a global problem, which can be identified and addressed primarily through three means: governmental regulation, regulation by platforms, and identification by users. While all three are equally important to investigate, this paper focuses on the role of governmental regulation by discussing one of the most extensive pieces of legislation introduced yet, the European Union’s Digital Services Act (DSA). The DSA regulates online intermediaries and platforms to protect consumers by preventing illegal and harmful activities online and the spreading of disinformation. This approach stands in stark contrast to the regulation-averse United States, which, as the world’s largest economy (International Monetary Fund n.d.) and home to the world’s third largest population of internet users (CIA World Factbook n.d.), still grapples with major impacts of online disinformation. The United States, as a federation of states, faces difficulties similar to those the European Union faced in introducing a standardized set of rules for its member states. Therefore, this paper will analyze the Digital Services Act as a case study and potential guide toward federal regulation in the United States through a conceptual discussion along three guiding questions:
  • What threat does disinformation pose to users and the worlds they live in?
    • This section will discuss global data on online disinformation and will provide a review of the more significant disinformation threats in the EU and the United States.
  • How does the European Union’s Digital Services Act intend to protect users’ fundamental rights in the online space?
    • Here, the paper will describe how the DSA came into existence, its goals and their realization in a regulatory instrument, how the DSA enforces its regulations, and its implementation challenges and successes.
  • Can and should the United States look to the Digital Services Act as a potential legislative model?
    • This part of the paper will focus on the United States as a key global economy with a significant internet user population. It will discuss regulation practices, existing online privacy laws, and current legal attempts to regulate the online space.

2. Disinformation Threats

The World Economic Forum’s Global Risks Report 2024 describes mis- and disinformation as a severe global threat in the immediate future, a period in which billions of people will participate in political elections. Mis- and disinformation campaigns might undermine the legitimacy of election results, which can lead to democratic instability or even more severe forms of unrest. Similarly, the 2023 Reuters Institute Digital News Report found, in a global study across 46 markets accounting for roughly 50% of the world’s population, that 56% of respondents worry about what is real and what is fake in online news. This number increases to 64% among people who rely primarily on social media as a news source (Newman et al. 2023). Concerning data about organized disinformation campaigns was shared in the Industrialized Disinformation 2020 Global Inventory of Organized Social Media Manipulation report, which investigated trends in computational propaganda across 81 countries. The report identifies three major concerns: cyber troop activity (military or political entities or mercenary individuals committed to manipulating public opinion over social media), account takedowns by online platforms, and increased private cyber troop activity. Activity of these groups continued to rise globally, from 70 countries identified in a previous report to 81 countries in the 2020 report. While cyber troop activities used to be the purview of political and military organizations, the report noted an increase in private companies working on behalf of political actors. Private cyber troop activities have been located in 48 countries, and the research group estimates that close to USD 60 million has been spent on these services since 2009 (Bradshaw et al. 2021).
The global threat of disinformation is undeniable at this point, as it undermines basic human rights through “the deliberate creation and dissemination of false or manipulated information intended to deceive and mislead audiences, either to cause harm or for personal, political or financial gain” (Human Rights Council 2022). Irene Khan (United Nations General Assembly 2022), United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, noted that disinformation poses a significant risk to marginalized and vulnerable civilian populations experiencing armed conflict, as it contributes to limitations on their freedom of opinion.
These global concerns have been mirrored in national theaters, including countries of the European Union. Incidents such as the refugee crisis in 2015, Brexit in 2016, COVID-19, and several elections across the Union over the past few years have increased concerns about the dissemination of disinformation and its threat to democratic values (Saurwein and Spencer-Smith 2020). In a 2023 European Commission report on Russian disinformation campaigns, for example, such content was identified as highly toxic or illegal and was found to reach large audiences. The report also identified attempts to manipulate civic discourse and suppress individual rights. The analysis that led to these findings was based on “close to 7 million pieces of content posted between December 2021 and December 2022 across 11 official EU languages as well as Russian” (European Commission 2023b). The NATO Strategic Communications Centre of Excellence conducted a social media manipulation experiment in 2022 to assess social media companies’ ability to detect and block inauthentic or manipulative content. Between September and October 2022, the research team bought inauthentic engagement for only EUR 168 (approximately USD 182) from social media manipulation service providers in three different countries, which resulted in “1225 comments, 6560 likes, 15,785 views, and 3739 shares on Facebook, Instagram, YouTube, Twitter, TikTok, and VKontakte, enabling us to identify 6564 accounts used for social media manipulation (and) of the 27,309 fake engagements purchased, more than 93 per cent remained online and active after four weeks” (Fredheim et al. 2023). This demonstrates that while social media platforms claim to make continuous improvements to prevent manipulation, the vast majority of this type of content appears to remain active.
These examples of disinformation campaigns and their impact most likely only scratch the surface of the full extent of disinformation manipulation and are of grave concern to online users, as the 2023 European Commission Report on the State of the Digital Decade points out. According to the Special Eurobarometer, protecting users on the internet from disinformation, alongside protection from cyber-attacks and improved availability of high-speed internet, was ranked among the top three priorities for Europeans through 2030 (European Commission 2023a).
Just like many European countries, the United States has had to grapple with the threat of disinformation for several years now. Large-scale mis- and disinformation campaigns circulated during the COVID-19 pandemic and played a role in the Capitol riot in 2021, and continued attacks on elected officials and ongoing threats to the integrity of the 2024 election persist as well. The intelligence assessment on ‘Foreign Threats to the 2020 US Federal Elections’ (National Intelligence Council 2021) found that Russia targeted Biden’s candidacy during the 2020 election cycle and supported former President Trump, seeking to erode public faith in the integrity of democratic elections worldwide and to sow sociopolitical divisions through targeted disinformation campaigns. Iran and several other countries and non-state actors were also found to have taken steps to influence the election. A comparative, cross-national study of nations’ resilience to online disinformation found that the United States scored quite low, owing to the country’s large advertising market and its comparatively fragmented news consumption, which relies primarily on non-public service media (Humprecht et al. 2020). Similarly, Shahbaz and Funk (2021) report that the United States’ internet freedom score declined for the fifth consecutive year based on the continued presence of false and misleading information online.
As technology develops, artificial intelligence has introduced new tools for distributing intentional falsehoods online. Of particular concern is the ability of AI to create “deep fakes”, visual and audio content that can be highly deceptive. NewsGuard, which provides independent, apolitical trust ratings for online news sources, highlighted in its December 2023 Misinformation Monitor that AI tools are being used to push foreign propaganda, including mis- and disinformation regarding the conflicts in Ukraine and Gaza as well as misleading information in the healthcare sector. NewsGuard also tested how language models such as OpenAI’s ChatGPT can be used to spread mis- and disinformation and found that, after being fed false prompts, the system reproduced 98 out of 100 myths (NewsGuard 2023). Similar concerns about the use of AI in disinformation campaigns were shared in the 2023 Freedom on the Net report, which identified 47 governments that used AI tools to steer online disinformation in their favor (Funk et al. 2023).
It has been widely established that disinformation exists and that it has real-life implications for health, safety, political interference, and, most importantly, trust in the democratic elements of society. As new technologies and tools to spread disinformation become available, the question of how to stop it remains largely unanswered. Is it the responsibility of the government to protect internet users from false or even illegal information, or should platforms be held responsible and liable for the content their users post? There may also be a need for internet users to educate themselves on how to curb the spread of harmful disinformation. While many countries have developed legislation to regulate the internet (Yadav et al. 2021), very few legal instruments exist to target disinformation on a regionwide scale, superseding national laws. One such new instrument, the European Union’s Digital Services Act (DSA), is the first regulation in the world that aims to regulate social media and online marketplaces, to protect users’ fundamental rights in the digital space, and to place some measure of legal oversight on the providers of these online services by introducing a standardized set of rules across the entire European Union.

3. Materials and Methods

The purpose of this case study is to relate a particular regulatory intervention to its applicability in a different national theater. Many social science disciplines frequently employ case studies because the recurrence of events across several situations might offer valuable answers, even though this methodology’s emphasis on particular circumstances may restrict generalizability (Cooke 2023). According to Noor (2008), “when dealing with a process or complex real-life activities” (p. 1602), case studies have value in exploring a specific problem, characteristic, or analytical unit in ways that macro or systemic analysis can miss. Yin (1994), an early proponent of case study research, distinguishes three categories of case study designs: exploratory, descriptive, and explanatory. For this project, the descriptive case study approach is deemed most useful, as it allows for a detailed description of one particular regulatory response aimed at increasing online safety and a related discussion about the feasibility of this approach in the United States. The description of the case includes background on the history of the DSA, enforcement strategies under the DSA, and implementation challenges and successes. The findings of this case study lay the foundation for a critical discussion of whether the DSA can serve as a model of regulation for the United States, conducted through a review of existing internet laws and other legal initiatives to counter online disinformation in the U.S.

3.1. The European Response to Disinformation—A Case Study of Governmental Regulation

The DSA came into existence as a modernization of the EU’s 2000 Electronic Commerce Directive (2000/31/EC), which established basic rules on how to provide online services in a transparent manner that protects consumers and online businesses alike (Shaping Europe’s Digital Future n.d.a). The Directive was created in response to the expansion of internet use during the 1990s, when cross-border online services still faced legal obstacles in the internal market. The Directive introduced fundamental standards for required consumer information, guidelines for commercial communications, and procedures to be followed when entering into an online contract.
The DSA addresses a number of issues at the EU-wide level and limits divergent national legislation that negatively impacts the internal market of the European Union (Wilman 2022). On 23 April 2022, the European Parliament and Council agreed on these new rules, which entered into force on 16 November 2022 as the DSA. The DSA is part of the EU’s Digital Services Act package, which also includes the Digital Markets Act, which regulates the practices of digital companies to create a fair digital market (Shaping Europe’s Digital Future n.d.d).
The key goal of the DSA is the regulation of online intermediaries and platforms, including social media, commerce, and content-sharing services, to protect online users from illegal and harmful activities as well as disinformation. As far as disinformation is concerned, the DSA builds on the 2018 Code of Practice on Disinformation, “a voluntary set of guidelines developed by the European Commission and online platforms to address the spreading of false information and fake news online” (Leiser 2023, p. 8), which was updated, signed, and presented by the European Commission on 16 June 2022. The 2022 Code provides measures in the following areas: reducing the financial rewards available to identified providers of disinformation; transparency in political advertising; strengthening measures against the spread of disinformation (bots, fake accounts, etc.); empowering users through tools to identify and flag disinformation and through media literacy programs; improving researchers’ access to data for studying disinformation; expanding fact-checking across all EU member states; establishing a transparency centre and a task force to keep the Code up to date; and providing a strong monitoring framework (European Commission 2022a).
The DSA framework is organized into five chapters and 107 recitals, which include provisions on subject matter, scope, and definitions; descriptions of liabilities for intermediary service providers; descriptions of provider responsibilities to create a safe online environment; and references to other relevant directives (The Articles of the Digital Services Act (DSA) n.d.; Chiarella 2023). The DSA targets only specific online service providers, which are subject to varying obligations based on their size and reach. They include “very large online platforms (VLOPs) and search engines (VLOSEs)”, which reach more than 45 million consumers in Europe, online marketplaces, app stores, collaborative economy platforms and social media platforms, cloud and web hosting services, and intermediary services offering network infrastructure (European Commission n.d.).
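The tiered scope just described can be pictured as a simple reach-based classification. The following is a minimal, purely illustrative sketch; only the 45 million user threshold comes from the text above, and the function name, category labels, and example figures are hypothetical rather than taken from the DSA itself.

```python
# Illustrative sketch of the DSA's reach-based tiering described above.
# Only the 45 million user threshold comes from the text; all names are hypothetical.

VLOP_THRESHOLD = 45_000_000  # average monthly active users in the EU

def dsa_tier(service_type: str, monthly_active_eu_users: int) -> str:
    """Return a rough obligation tier for an online service."""
    if service_type in ("online platform", "search engine"):
        if monthly_active_eu_users >= VLOP_THRESHOLD:
            # Strictest tier: risk assessments, independent audits, researcher data access, etc.
            return "VLOP/VLOSE obligations"
        return "online platform obligations"
    if service_type == "hosting service":
        return "hosting obligations (e.g., notice-and-action)"
    return "baseline intermediary obligations"

# Hypothetical example: a platform reaching 50 million monthly EU users
print(dsa_tier("online platform", 50_000_000))  # -> VLOP/VLOSE obligations
```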
VLOPs and VLOSEs are of particular concern under the DSA because potential mis- and disinformation or other harmful content is readily available to a significant number of individuals (Laux et al. 2021). On 25 April 2023, the European Commission designated 17 VLOPs and 2 VLOSEs under the DSA that reach at least 45 million monthly active users (European Commission 2023c); these are listed in Table 1:
Because of the significant user numbers of VLOPs and VLOSEs, rules under the DSA have applied to these providers since August 2023, including the following (an illustrative sketch of the notice-and-action flow appears after this list):
  • Platforms must create a system that utilizes ‘trusted flaggers’ or gives users the option to flag content, services, or goods that are illegal;
  • Intermediary services must share information about why certain content is being removed or why account access has been restricted; such decisions can be challenged through an out-of-court dispute settlement mechanism; content moderation decisions are made available to the public through the EU Commission’s DSA Transparency Database;
  • Greater transparency and control over what users see on their feeds to allow them to opt out of personalized recommendations (including the labeling of ads);
  • Bans on advertising targeted at minors based on their personal data;
  • The protection of privacy and security of minors who can access certain platforms. This can be achieved by the introduction of age verification measures, parental controls, and easy-to-use tools to signal abuse;
  • Platforms are required to flag mis- and disinformation and must provide fact-checking opportunities for users to mitigate the impact on elections and civic discourse;
  • New requirements for online marketplaces to counter the trade in illegal goods: sellers must provide proof of identity before they can sell items online and must also inform buyers of whom they are buying from; providers that detect illegal sales must inform the users involved and identify the seller of the illegal goods, including options for redress (Shaping Europe’s Digital Future n.d.f).
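To make the notice-and-action obligations in the list above more concrete, here is a minimal, purely illustrative sketch of how a platform might route notices from users and trusted flaggers, issue a statement of reasons, and record decisions for a public transparency log. All class, field, and function names are hypothetical and are not drawn from the DSA text or any platform’s actual implementation; the sketch only mirrors the flow described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch of a DSA-style notice-and-action flow: users or trusted
# flaggers submit notices, the platform decides, issues a statement of reasons,
# and logs the decision for a public transparency record. Names are illustrative.

@dataclass
class Notice:
    content_id: str
    reporter_id: str
    reason: str                    # e.g., "suspected illegal good", "disinformation"
    trusted_flagger: bool = False  # trusted-flagger notices get priority handling
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ModerationDecision:
    content_id: str
    action: str                    # "remove", "restrict", or "no_action"
    statement_of_reasons: str      # must be communicated to the affected user
    appealable: bool = True        # out-of-court dispute settlement remains available

transparency_log: List[ModerationDecision] = []

def review_content(content_id: str, priority: int) -> bool:
    # Stub for the actual review pipeline (automated checks plus human moderators).
    return False

def handle_notice(notice: Notice) -> ModerationDecision:
    """Review a notice and record a reasoned, appealable decision."""
    # Trusted-flagger notices are reviewed first; ordinary notices are queued.
    priority = 0 if notice.trusted_flagger else 1
    is_illegal = review_content(notice.content_id, priority)

    decision = ModerationDecision(
        content_id=notice.content_id,
        action="remove" if is_illegal else "no_action",
        statement_of_reasons=(
            f"Notice '{notice.reason}' reviewed; "
            + ("content removed as illegal." if is_illegal else "no violation found.")
        ),
    )
    transparency_log.append(decision)  # published, e.g., to a transparency database
    return decision
```

A real implementation would combine automated detection with human review inside the stubbed review_content step and would feed each decision into the Commission’s transparency database; the sketch only shows where those obligations sit in the flow.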

3.2. Enforcement Under the Digital Services Act

Enforcement of these rules, as outlined in the DSA, requires all EU member states to identify one or more competent entities, called Digital Services Coordinators, to lead these efforts locally. These entities are obligated to collaborate with other national organizations, with the European Board for Digital Services, an independent entity established in Article 61 of the DSA, and with the EU Commission (Article 49). Digital Services Coordinators can open an investigation based on suspicion of infringement only if the provider’s European headquarters is located in their country (Article 40). The European Commission can also supervise and enforce the DSA if the providers are designated VLOPs or VLOSEs. Even if the Commission decides against proceedings for suspected infringements by VLOPs or VLOSEs, Digital Services Coordinators can still enforce obligations under the DSA locally (The Articles of the Digital Services Act (DSA) n.d.).
Investigatory tools for such proceedings include formal requests for information, access to the VLOPs’ data and algorithms, the taking of interviews, and inspections of premises. If the Commission finds enough evidence of a possible infringement, it can open proceedings, which may lead to fines or other periodic penalty payments. VLOPs or VLOSEs under investigation have the right to be heard on all preliminary findings. The monetary penalty for a violation of the DSA is imposed by the Commission and can reach up to 6% of the annual worldwide turnover of the VLOP or VLOSE in question. This decision can also include further supervision requirements, periodic penalty payments of up to 5% of average daily worldwide turnover for delays in compliance, and even temporary suspension of services in cases of serious harm to users or detection of criminal offenses that involve threats to the life or safety of a person (Shaping Europe’s Digital Future n.d.e).
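As a rough illustration of how these ceilings scale, the short calculation below applies the percentage caps mentioned above to a hypothetical provider. The turnover figures are invented placeholders, not data about any actual company; only the 6% and 5% caps come from the text.

```python
# Illustrative calculation of the DSA penalty ceilings described above.
# Turnover figures are hypothetical; only the percentage caps come from the text.

annual_worldwide_turnover_eur = 10_000_000_000                  # hypothetical: EUR 10 billion per year
average_daily_turnover_eur = annual_worldwide_turnover_eur / 365

max_fine = 0.06 * annual_worldwide_turnover_eur                 # fine of up to 6% of worldwide turnover
max_daily_periodic_penalty = 0.05 * average_daily_turnover_eur  # periodic penalty of up to 5% of average daily turnover

print(f"Maximum fine:                       EUR {max_fine:,.0f}")
print(f"Maximum periodic penalty (per day): EUR {max_daily_periodic_penalty:,.0f}")
```

For the hypothetical EUR 10 billion annual turnover, the ceiling on a fine would be EUR 600 million, and a periodic penalty could reach roughly EUR 1.4 million per day of continued non-compliance.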
The first proceedings against an online platform under the DSA targeted X (provided by Twitter International Unlimited Company (TIUC)), which was designated a VLOP on 25 April 2023. The Commission based the decision to open proceedings on a risk assessment report, a transparency report, and replies to a formal request for information to ascertain whether a violation of the DSA had, in fact, occurred (Press Corner 2023). Since then, additional proceedings have been opened against AliExpress (March 2024), Meta Platforms (April 2024), TikTok (April 2024), and Temu (October 2024). Areas of investigation of these VLOPs include the following:
  • Twitter International Unlimited Company (TIUC): risk and mitigation strategies countering the dissemination of illegal content, the effectiveness of measures taken to combat information manipulation on the platform, measures taken by X to increase the transparency of its platform, and suspected deceptive design of the user interface (blue checkmarks);
  • AliExpress International (Netherlands) B.V.: management and mitigation of risks, content moderation and internal complaint handling mechanism, transparency of advertising and recommender systems, traceability of traders, and access for researchers;
  • Meta Platforms Ireland Limited (MPIL): deceptive advertisements and disinformation, political content policy, election-monitoring and data access, notice and action mechanism, and internal complaint-handling systems;
  • TikTok Technology Limited: risk assessment compliance and risk mitigation measures;
  • Whaleco Technology Limited: systems to limit the sale of non-compliant products in the European Union, risks linked to the addictive design of the service such as game-like rewards, content and products recommendation mechanism, access for researchers (Shaping Europe’s Digital Future n.d.b).
While the EU Commission has taken the lead in supervising, enforcing, and monitoring the compliance of VLOPs and VLOSEs, national authorities can also supervise and enforce compliance under the DSA. To ensure the establishment of national governance structures under the DSA, all EU member states had to appoint their Digital Services Coordinators (DSCs) and other national authorities to enforce the DSA by 17 February 2024. The Commission also established the European Centre for Algorithmic Transparency (ECAT) to support monitoring of and compliance with the DSA on a national and EU-wide scale through research and technical expertise on algorithmic systems deployed by online platforms and search engines (Shaping Europe’s Digital Future n.d.c).
As of October 2024, the Commission has taken investigatory steps involving the following VLOPs and VLOSEs in Table 2, with the understanding that these steps do not imply infringement of the DSA (Shaping Europe’s Digital Future n.d.b):

3.3. Implementation Challenges and Successes

The DSA is certainly an ambitious attempt to regulate the digital space, and it has not been without its fair share of ‘birthing pains’, which include operational, technical, and enforcement challenges.
One major obstacle the DSA faces rests on the premise that there are clear processes and operations to identify hate speech and false information. Currently, however, no technology (algorithms, for example) produces flawless identification of such cases; hence, the flagging process is aided by human review. Typically, journalists, researchers, and other consumers are empowered to distinguish truth from untruths, but humans, as we know, are not infallible and are often driven by their own subjectivity. Aside from whether proper detection of hate speech and disinformation is possible, we must also consider one of the most important human rights: free speech. Freedom to express thoughts (regardless of challenge or controversy) is a fundamental right in every democratic society, but it remains difficult to clearly delineate between speech that we may not agree with and content that can actually harm individuals, a difficulty further complicated by the question of whether a government should even be in the position to judge such content (Tourkochoriti 2023). Turillazzi et al. (2023) note that “the DSA is unclear whether the focus must be on illegal—and/or harmful—content. There is no clear definition of what is harmful and what is illegal” (p. 94). Dot Europe, an association of leading internet companies in Europe previously known as the European Digital Media Association (EDiMA), urged the European Commission to limit legal action to illegal content rather than also including harmful content. The main concern was that service providers would otherwise have to decide between content that should be protected as free speech and content that is potentially harmful (Stolton 2020). The DSA tries to address some of these concerns through the use of “trusted flaggers” to identify illegal content only. These “trusted flaggers” had to be selected by the Member States’ Digital Services Coordinators by 17 February 2024 (The Articles of the Digital Services Act (DSA) n.d.). Such designated flaggers must show competence and expertise in detecting and identifying illegal content and must be able to perform their duties without bias (European Commission 2022b).
The DSA, unfortunately, remains vague on how ‘harmful’ content should be identified and responded to, but Article 12 does at least require providers of intermediary services to share “information on any policies, procedures, measures, and tools used for the purpose of content moderation, including algorithmic decision-making and human review” (The Articles of the Digital Services Act (DSA) n.d.).
A study by the German Verbraucherzentrale Bundesverband, conducted between 12 October and 17 November 2023, examined 12 VLOPs and VLOSEs on their implementation of guidelines outlined in Articles 12 (terms and conditions), 14 (notice and action mechanisms), 25 (dark patterns), 26 (risk assessment), 27 (mitigation of risks), and 38 (competent authorities and digital services coordinators). The results highlighted several areas of concern for consumers, including the continued use of dark patterns and manipulation tactics that influence user choices, which are banned under the DSA. Furthermore, providers are required to display information on each advertisement clarifying that it is an advertisement, who is responsible for it, and, if different, who paid for it. None of the studied providers (only four fell into this category) made that kind of information available to their users. Surprisingly, it was also noted that the reviewed providers made it difficult for consumers to find terms and conditions or contact information for the respective providers. Overall, it was found that consumer protections under the DSA are still deficient (Verbraucherzentrale Bundesverband 2023, n.d.).
Nonetheless, the DSA constitutes the most extensive piece of digital regulation since the beginning of the internet. The most important aspect of the DSA is that it requires more accountability from big technology companies, which gives some decision-making power back to individual users. Martin Husovec, co-founder of the European Information Society Institute, stated in an interview with Global Finance that “going forward, Big Tech can innovate but must test things instead of moving fast and breaking them” (Ventura 2023). The ‘Wild West’ era of the internet will need to come to an end if we want this environment to be free from completely unchecked information and threats that have severe impacts on the offline world.

4. The DSA as a Model for the United States?

Global digital governance has evolved greatly over the past 25 years, a period during which the United States has generally favored innovation and market monetization over regulation and the protection of individuals on the internet. The Framework for Global Electronic Commerce, a strategy introduced by the Clinton administration in 1997, aimed to support a simple legal environment with maximum freedom for entrepreneurs to try out new business models. This trend continued during the Bush, Obama, and Trump Administrations, which all favored limited government oversight to allow innovation to flourish (Thierer and Haaland 2019). The Biden Administration, however, pursued a more cautious agenda regarding digital policies, including the 9 March 2022 Executive Order 14067 on Ensuring Responsible Development of Digital Assets (Miller and Rosen 2022) and Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released on 30 October 2023 (Harris and Jaikaran 2024). To date, however, there is no centralized federal internet law regulating the digital space in the U.S.
The federal government must also be careful in how it addresses online disinformation while still protecting the fundamental right to free speech, a cornerstone of American society, even though many countries, and the U.S. itself, recognize exceptions to what is allowed as free speech. Such exceptions created the space necessary for the DSA to be developed and become a Europe-wide law. Another frequently discussed response to disinformation relies on civic education to equip internet users to be more aware of potential disinformation. This is an obstacle for federal government-led initiatives, as education is primarily the purview of individual U.S. states rather than the federal government. Even with these significant limitations on creating a comprehensive and centralized federal response to disinformation, many laws and legal bills have been introduced over the years to regulate the internet, at least to some degree (U.S. Cyberspace Solarium Commission 2021).

4.1. Existing Internet Laws in the United States

As far as online privacy is concerned, the European Union introduced the General Data Protection Regulation (GDPR) in 2018, which strengthens individuals’ rights by giving them more control over their personal data, including the following:
  • The need for an individual’s clear consent to the processing of his or her personal data;
  • Easier access for the data subject to his or her personal data;
  • The right to rectification, to erasure, and ‘to be forgotten’;
  • The right to object, including to the use of personal data for the purposes of ‘profiling’;
  • The right to data portability from one service provider to another (The General Data Protection Regulation n.d.).
While the U.S. has no central federal-level internet privacy law, several federal and state laws do protect online privacy. Some key federal laws include the following:
  • The Federal Trade Commission Act (FTC) (1914)—regulates unfair or deceptive commercial practices;
  • Electronic Communications Privacy Act (ECPA) (1986)—extends the ban on unlawful communication interception to cover some categories of electronic communications;
  • Computer Fraud & Abuse Act (CFAA) (1986)—prohibits the intentional unauthorized access to computers;
  • Children’s Online Privacy Protection Act (COPPA) (1998)—introduces special online protections for children under 13;
  • Controlling the Assault of Non-Solicited Pornography and Marketing Act (CAN-SPAM Act) (2003)—created the first national guidelines in the US for sending commercial emails;
  • Financial Services Modernization Act (GLBA) (1999)—reduced limitations on banks and securities underwriters to form alliances and requires financial institutions to safeguard consumers’ personal financial information;
  • Fair and Accurate Credit Transactions Act (FACTA) (2003)—aimed to enhance the accuracy of documents pertaining to customers’ credit (Thomson Reuters n.d.).
The year 2023, however, was very active for consumer privacy: ten new state statutes were enacted and at least 350 consumer privacy bills were introduced. The ten new state laws will go into force between March 2024 and January 2026, “bringing the number of comprehensive state laws to thirteen and the number of new healthcare data privacy laws to three” (Spirion LLC n.d.).
While change is certainly on the horizon for some aspects of our digital lives, regulation of illegal content online, let alone of disinformation, is still in its infancy in the United States. With all the above-mentioned challenges in mind, the DSA is still regarded as a milestone in digital regulation that might provide a framework for global application. As such, could the DSA serve as a model for modernizing digital regulation in the United States?

4.2. Legal Initiatives to Counter Online Disinformation

While online disinformation within the United States remains largely unregulated, protections against foreign disinformation campaigns targeting the United States are addressed to some degree by the Countering Foreign Propaganda and Disinformation Act (CFPDA), which was signed into law as part of the National Defense Authorization Act (NDAA) for Fiscal Year 2017. Most notably, the Act provided funding for the establishment of the Global Engagement Center under the State Department, which coordinates efforts to identify and counter propaganda from foreign governments such as Russia and China (Hall 2017). In December 2023, however, conservative media groups and the State of Texas filed a complaint against Secretary of State Antony Blinken and leaders at the Center, accusing them of First Amendment violations against American media outlets (Myers 2023).
One piece of legislation that finds renewed interest in countering the issue of disinformation internally is Section 230 in the Communications Decency Act of 1996, which states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”, and releases companies that provide interactive computer services from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers being obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”.
The statute has often been viewed as a concession to big online companies, but it does provide important free speech protections to individual users. Platforms are generally exempt from liability under Section 230 for content submitted by users unless the content breaches specified legal obligations, federal criminal law, or intellectual property rights.
While many would agree to limits on hate speech online, expression that explores difficult and controversial topics, such as abortion, discrimination, or gender justice, could be severely limited by a more restrictive version of Section 230 (Granick 2023). Recently, Section 230 was challenged in two cases heard by the U.S. Supreme Court (Twitter, Inc. v. Taamneh et al. and Gonzalez v. Google LLC), both of which argued that social media platforms should be held liable for deaths caused by ISIS attacks. The relatives of the victims argued that the platforms were responsible for hosting content that radicalized users and encouraged individuals to carry out attacks. The Supreme Court rejected the arguments, finding that “knowing about and failing to stop users from posting offending content also did not add up to aiding and abetting terrorism” (Heath 2023).
While changes to Section 230 appear to be difficult to accomplish, new legislation has been introduced to create a safer space for online consumers, most notably the Platform Accountability and Transparency Act (PATA). This Act, first introduced in 2022, would mandate that social media companies provide more information to the public and to researchers. Examples of such data include a complete library of advertisements, figures on content moderation, real-time data on viral content, and details on ranking and recommendation algorithms (Coons 2022). Many other bills targeting various aspects of mis- and disinformation have been introduced in Congress more recently, such as the following:
  • The Stop Antiabortion Disinformation Act or the SAD Act (introduced in House: 20 April 2023): the Act would prohibit advertisements that use deceptive or misleading statements related to the provision of abortion services (Bonamici 2023);
  • The Educating Against Misinformation and Disinformation Act (introduced in House: 3 August 2022): the Act would establish a commission to fight misinformation by promoting media literacy (Beyer 2022);
  • The Health Misinformation Act of 2021 (introduced in Senate: 22 July 2021): the Act would hold online providers accountable for the proliferation of health misinformation during a public health emergency (Klobuchar 2021);
  • The Honest Ads Act (introduced in Senate: 7 May 2019): the Act would improve disclosure requirements for online political advertisements (Klobuchar 2019).
A legal sidebar by the Congressional Research Service (Brannon 2024) outlines that many members of the 118th Congress (2023–2024) have shown interest in expanding regulation of “Big Tech” companies to address competition, antitrust and privacy concerns, data protection, content moderation practices, and the removal of harmful content. These are clear signals of a desire to introduce more oversight and accountability for large online platforms; however, it is unlikely that the United States will introduce comprehensive federal legislation such as the DSA any time soon.
And it may not even be necessary to begin the long and arduous task of introducing such legislation if the ‘Brussels Effect’, whereby European Union regulations become the de facto governance norms beyond the borders of the EU, does its work. Nunziato (2023) foresees global platforms adjusting their content moderation processes and other practices governed under the DSA to comply with EU regulations. This, as Nunziato points out, is not a new phenomenon; in 2016, platforms willingly adopted the EU Code of Conduct on Countering Illegal Hate Speech Online out of fear that the EU would otherwise introduce stricter regulations on the removal of hate or extremist speech. With the DSA now formally threatening VLOPs and VLOSEs with substantial financial penalties for non-compliance, changes will not be implemented for European users alone but will benefit global users of these platforms as well.
Nonetheless, there are some innovative approaches in the DSA that could prove useful to regulation outside the EU. The DSA relies on stakeholders to assess industry-wide risks, propose mitigation methods, and report frameworks and metrics. The industry itself, alongside VLOPs and VLOSEs, will have to take the lead in the regulatory process, which even includes the creation of standards for the annual audits mandated by the DSA. Co-regulation might be the most attractive feature of the DSA and could serve as an example for countries outside the EU of how to create regulation without full governmental control (Morar 2022). The first steps in this direction have led to the creation of the Digital Trust & Safety Partnership (DTSP) organization in 2021. The DTSP brings together leading internet companies to identify and adopt overarching commitments for greater transparency and safety online. The DTSP Best Practices Framework identified five commitments as crucial steps to identify content or conduct jeopardizing user trust and safety. These include the commitment of online companies to weigh content- and conduct-related risks, build remedies to these into their products, and ensure that clear guidelines for user content and conduct are provided. Companies also commit to developing mechanisms to enforce these guidelines and to react to changes in content- and conduct-related risks by ensuring that their objectives and actions are transparent to customers and stakeholders (Digital Trust & Safety Partnership 2021).
The realization of these commitments will vary among digital companies, and currently, there is no plan to introduce any sort of regulatory body to oversee the development of these processes.

5. Conclusions

So, what will the digital future hold? As technology advances rapidly, we will find ourselves vulnerable to continued internal and external disinformation threats that can shake the very foundations of free and democratic societies. Ways to minimize these threats may include centralized government action, education of users, and co-regulation mechanisms, all of which require extensive partnerships among stakeholders. While expansive and centralized legislation such as the DSA seems unattainable or even undesirable for the United States at this point, it, nonetheless, provides a meaningful model of how internet regulation could look in the future. In the meantime, every little improvement toward a more secure digital environment can potentially be viewed as a step in the right direction.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Beyer, S. Donald, Jr. 2022. Educating Against Misinformation and Disinformation Act. Available online: https://www.congress.gov/bill/117th-congress/house-bill/6971 (accessed on 5 January 2024).
  2. Bonamici, Suzanne. 2023. Stop Antiabortion Disinformation Act. Available online: https://www.congress.gov/bill/118th-congress/house-bill/2736 (accessed on 10 February 2024).
  3. Bradshaw, Samantha, Hannah Bailey, and Philip N. Howard. 2021. Industrialized Disinformation: 2020 Global Inventory of Organized Social Media Manipulation. Working Paper 2021. Oxford: Oxford Internet Institute. Available online: https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2021/02/CyberTroop-Report20-Draft9.pdf (accessed on 10 February 2024).
  4. Brannon, Valerie C. 2024. Regulating Big Tech: CRS Legal Products for the 118th Congress. LSB10889. Congressional Research Service. Available online: https://crsreports.congress.gov/product/pdf/LSB/LSB10889 (accessed on 10 February 2024).
  5. Cambridge Dictionary. n.d. Disinformation. Available online: https://dictionary.cambridge.org/us/dictionary/english/disinformation (accessed on 24 July 2024).
  6. Chiarella, Maria Luisa. 2023. Digital Markets Act (DMA) and Digital Services Act (DSA): New Rules for the EU Digital Environment. Athens Journal of Law (AJL) 9: 33–58. [Google Scholar] [CrossRef]
  7. CIA World Factbook. n.d. Country Comparisons—Internet Users. Available online: https://www.cia.gov/the-world-factbook/field/internet-users/country-comparison/ (accessed on 23 July 2024).
  8. Cooke, Mary. 2023. Commentary: Promoting Generalisation in Qualitative Nursing Research Using the Multiple Case Narrative Approach: A Methodological Overview. Journal of Research in Nursing: JRN 28: 382–83. [Google Scholar] [CrossRef]
  9. Coons, Christopher A. 2022. Platform Accountability and Transparency Act. Available online: https://www.congress.gov/bill/117th-congress/senate-bill/5339 (accessed on 10 February 2024).
  10. Digital Trust & Safety Partnership. 2021. Best Practices Framework—Digital Trust & Safety Partnership. January 13. Available online: https://dtspartnership.org/best-practices/ (accessed on 10 February 2024).
  11. European Commission. 2022a. 2022 Strengthened Code of Practice on Disinformation. Shaping Europe’s Digital Future. Digital-Strategy.ec.europa.eu. June 16. Available online: https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation (accessed on 10 October 2024).
  12. European Commission. 2022b. “Questions and Answers on the Digital Services Act”. European Commission. November 14. Available online: https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348 (accessed on 10 October 2024).
  13. European Commission. 2023a. 2030 Digital Decade. Report on the State of the Digital Decade 2023. Brussels: European Commission. [Google Scholar]
  14. European Commission. 2023b. Digital Services Act: Application of the Risk Management Framework to Russian Disinformation Campaigns. Brussels: European Commission. [Google Scholar]
  15. European Commission. 2023c. Digital Services Act: Commission Designates First Set of Very Large Online Platforms and Search Engines. Brussels: European Commission. Available online: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_2413 (accessed on 25 April 2023).
  16. European Commission. n.d. The EU’s Digital Services Act. Available online: https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en (accessed on 5 January 2024).
  17. European Commission, Directorate-General for Communications Networks, Content and Technology. 2018. A Multi-Dimensional Approach to Disinformation: Report of the Independent High Level Group on Fake News and Online Disinformation. Available online: https://op.europa.eu/en/publication-detail/-/publication/6ef4df8b-4cea-11e8-be1d-01aa75ed71a1/language-en (accessed on 10 February 2024).
  18. Fredheim, Rolf, Sebastian Bay, Anton Dek, Martha Stolze, and Tetiana Haiduchyk. 2023. Social Media Manipulation 2022/2023: Assessing the Ability of Social Media Companies to Combat Platform Manipulation. Nato Strategic Communications Centre of Excellence. Available online: https://stratcomcoe.org/publications/social-media-manipulation-20222023-assessing-the-ability-of-social-media-companies-to-combat-platform-manipulation/272 (accessed on 20 June 2024).
  19. Funk, Allie, Adrian Shahbaz, and Kian Vesteinsson. 2023. Freedom on the Net 2023. The Repressive Power of Artificial Intelligence. Freedom House. Available online: https://freedomhouse.org/sites/default/files/2023-10/Freedom-on-the-net-2023-DigitalBooklet.pdf (accessed on 20 June 2024).
  20. Granick, Jennifer Stisa. 2023. Is This the End of the Internet As We Know It? ACLU. February 22. Available online: https://www.aclu.org/news/free-speech/section-230-is-this-the-end-of-the-internet-as-we-know-it (accessed on 20 June 2024).
  21. Guess, Andrew M., and Benjamin A. Lyons. 2020. Misinformation, Disinformation, and Online Propaganda. In Social Media and Democracy. Edited by Nathaniel Persily and Joshua A. Tucker. SSRC Anxieties of Democracy. Cambridge: Cambridge University Press, pp. 10–33. [Google Scholar]
  22. Hall, Holly Kathleen. 2017. The New Voice of America: Countering Foreign Propaganda and Disinformation Act. First Amendment Studies 51: 49–61. [Google Scholar] [CrossRef]
  23. Harris, Laurie, and Chris Jaikaran. 2024. Highlights of the 2023 Executive Order on Artificial Intelligence for Congress. Congressional Research Service R47843. Available online: https://crsreports.congress.gov/product/pdf/R/R47843/8 (accessed on 21 December 2024).
  24. Heath, Jacob M. 2023. U.S. Supreme Court’s Take on Section 230: What It Means for Online Platforms—And What’s Next. Orrick.com. June 13. Available online: https://www.orrick.com/en/Insights/2023/06/US-Supreme-Courts-Take-on-Section-230 (accessed on 20 June 2024).
  25. Human Rights Council. 2022. Resolution Adopted by the Human Rights Council on 1 April 2022. United Nations. Available online: https://documents-dds-ny.un.org/doc/UNDOC/GEN/G22/304/10/PDF/G2230410.pdf?OpenElement (accessed on 5 August 2024).
  26. Humprecht, Edda, Frank Esser, and Peter Van Aelst. 2020. Resilience to Online Disinformation: A Framework for Cross-National Comparative Research. The International Journal of Press/Politics 25: 493–516. [Google Scholar] [CrossRef]
  27. Hussain, Mumtaz, and Tariq Rahim Soomro. 2023. Social Media: An Exploratory Study of Information, Misinformation, Disinformation, and Malinformation. Applied Computer Systems 28: 13–20. [Google Scholar] [CrossRef]
  28. International Monetary Fund. n.d. GDP, Current Prices. Available online: https://www.imf.org/external/datamapper/NGDPD@WEO/OEMDC/ADVEC/WEOWORLD (accessed on 23 July 2024).
  29. Klobuchar, Amy. 2019. Honest Ads Act. Available online: https://www.congress.gov/bill/116th-congress/senate-bill/1356 (accessed on 5 August 2024).
  30. Klobuchar, Amy. 2021. Health Misinformation Act of 2021. Available online: https://www.congress.gov/bill/117th-congress/senate-bill/2448 (accessed on 5 August 2024).
  31. Laux, Johann, Sandra Wachter, and Brent Mittelstadt. 2021. Taming the Few: Platform Regulation, Independent Audits, and the Risks of Capture Created by the DMA and DSA. Computer Law & Security Review 43: 105613. [Google Scholar]
  32. Leiser, Mark. 2023. Analysing the European Union’s Digital Services Act Provisions for the Curtailment of Fake News, Disinformation, & Online Manipulation. SocArXiv. Available online: https://osf.io/preprints/socarxiv/rkhx4 (accessed on 9 January 2025).
  33. Miller, Rena S., and Liana W. Rosen. 2022. Digital Assets and Illicit Finance: E.O. 14067 and Recent Anti-Money Laundering Developments. Congressional Research Service IN12039. Available online: https://crsreports.congress.gov/product/pdf/IN/IN12039#:~:text=14067%2C%20Ensuring%20Responsible%20Development%20of,financing%20of%20terrorism%20(CFT) (accessed on 21 December 2024).
  34. Morar, David. 2022. The Digital Services Act’s Lesson for U.S. Policymakers: Co-Regulatory Mechanisms. Brookings. August 23. Available online: https://www.brookings.edu/articles/the-digital-services-acts-lesson-for-u-s-policymakers-co-regulatory-mechanisms/ (accessed on 5 August 2024).
  35. Myers, Steven Lee. 2023. State Department’s Fight Against Disinformation Comes Under Attack. The New York Times. December 14. Available online: https://www.nytimes.com/2023/12/14/technology/state-department-disinformation-criticism.html (accessed on 20 June 2024).
  36. National Intelligence Council. 2021. Foreign Threats to the 2020 US Federal Elections. Available online: https://www.dni.gov/files/ODNI/documents/assessments/ICA-declass-16MAR21.pdf (accessed on 9 January 2025).
  37. Newman, Nic, Richard Fletcher, Kirsten Eddy, Craig T. Robertson, and Rasmus Kleis Nielsen. 2023. Reuters Institute Digital News Report 2023. Reuters Institute for the Study of Journalism. Available online: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf (accessed on 20 June 2024).
  38. NewsGuard. 2023. The Year AI Supercharged Misinformation: NewsGuard’s 2023 in Review—Misinformation Monitor: December 2023. December 27. Available online: https://www.newsguardtech.com/misinformation-monitor/december-2023/ (accessed on 20 June 2024).
  39. Noor, Khairul Baharein Mohd. 2008. Case Study: A Strategic Research Methodology. American Journal of Applied Sciences 5: 1602–4. [Google Scholar] [CrossRef]
  40. Normandin, Audrey C. 2022. Redefining ‘Misinformation’, ‘Disinformation’, and ‘Fake News’: Using Social Science Research to Form an Interdisciplinary Model of Online Limited Forums on Social Media Platforms. Campbell Law Review 44: 289–334. [Google Scholar]
  41. Nunziato, Dawn Carla. 2023. The Digital Services Act and the Brussels Effect on Platform Content Moderation Essay. Chicago Journal of International Law 24: 115–28. [Google Scholar]
  42. Press Corner. 2023. Commission Opens Formal Proceedings Against X Under the Digital Services Act. European Commission. Available online: https://ec.europa.eu/commission/presscorner/detail/en/IP_23_6709 (accessed on 10 October 2024).
  43. Saurwein, Florian, and Charlotte Spencer-Smith. 2020. Combating Disinformation on Social Media: Multilevel Governance and Distributed Accountability in Europe. Digital Journalism 8: 820–41. [Google Scholar] [CrossRef]
  44. Shahbaz, Adrian, and Allie Funk. 2021. Freedom on the Net 2021: The Global Drive to Control Big Tech. Freedom House. Available online: https://freedomhouse.org/report/freedom-net/2021/global-drive-control-big-tech (accessed on 10 October 2024).
  45. Shaping Europe’s Digital Future. n.d.a. E-Commerce Directive. Available online: https://digital-strategy.ec.europa.eu/en/policies/e-commerce-directive (accessed on 5 January 2024).
  46. Shaping Europe’s Digital Future. n.d.b. Supervision of the Designated Very Large Online Platforms and Search Engines Under DSA. Available online: https://digital-strategy.ec.europa.eu/en/policies/list-designated-vlops-and-vloses (accessed on 16 January 2024).
  47. Shaping Europe’s Digital Future. n.d.c. The Cooperation Framework Under the Digital Services Act. Available online: https://digital-strategy.ec.europa.eu/en/policies/dsa-cooperation (accessed on 27 November 2023).
  48. Shaping Europe’s Digital Future. n.d.d. The Digital Services Act Package. Available online: https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package (accessed on 28 September 2023).
  49. Shaping Europe’s Digital Future. n.d.e. The Enforcement Framework Under the Digital Services Act. Available online: https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement (accessed on 27 November 2023).
  50. Shaping Europe’s Digital Future. n.d.f. The Impact of the Digital Services Act on Digital Platforms. Available online: https://digital-strategy.ec.europa.eu/en/policies/dsa-impact-platforms (accessed on 27 November 2023).
  51. Spirion LLC. n.d. New State Privacy Laws Enforceable in 2024–2026 Part 1. Available online: https://explore.spirion.com/2024-2026-data-privacy-laws?utm_source=google&utm_medium=cpc&utm_campaign=Non-Brand_Compliance_Data_Breach_Search&gad_source=1&gclid=Cj0KCQiAoKeuBhCoARIsAB4WxteQblDiqsLeS6VSqzmy2XTpAadeDMJbzrs_ADA7XOc2AubjiRT5XzUaAqcAEALw_wcB (accessed on 22 February 2024).
  52. Stolton, Samuel. 2020. Digital Services Act Should Avoid Rules on ‘Harmful’ Content, Big Tech Tells EU. EURACTIV. October 12. Available online: https://www.euractiv.com/section/digital/news/digital-services-act-should-avoid-rules-on-harmful-content-big-tech-tells-eu/ (accessed on 20 June 2024).
  53. The Articles of the Digital Services Act (DSA). n.d. Available online: https://www.eu-digital-services-act.com/Digital_Services_Act_Articles.html (accessed on 19 January 2024).
  54. The General Data Protection Regulation. n.d. Available online: https://www.consilium.europa.eu/en/policies/data-protection/data-protection-regulation/ (accessed on 12 February 2024).
  55. Thierer, Adam, and Connor Haaland. 2019. The Clinton-Bush-Obama-Trump Innovation Vision. Mercatus Center. Available online: https://www.mercatus.org/economic-insights/expert-commentary/clinton-bush-obama-trump-innovation-vision (accessed on 18 June 2024).
  56. Thomson Reuters. n.d. Internet Privacy Laws Revealed—How Your Personal Information Is Protected Online. Available online: https://legal.thomsonreuters.com/en/insights/articles/how-your-personal-information-is-protected-online (accessed on 12 February 2024).
  57. Tourkochoriti, Ioanna. 2023. The Digital Services Act and the EU as the Global Regulator of the Internet. Chicago Journal of International Law 24: 129–47. Available online: https://resolver.ebscohost.com/openurl?sid=EBSCO%3alft&genre=article&issn=15290816&ISBN=&volume=24&issue=1&date=20230701&spage=129&pages=129-147&title=Chicago+Journal+of+International+Law&atitle=The+Digital+Services+Act+and+the+EU+as+the+Global+Regulator+of+the+Internet.&aulast=Tourkochoriti%2c+Ioanna&id=DOI%3a&site=ftf-live (accessed on 20 June 2024).
  58. Turillazzi, Aina, Mariarosaria Taddeo, Luciano Floridi, and Federico Casolari. 2023. The Digital Services Act: An Analysis of Its Ethical, Legal, and Social Implications. Law, Innovation & Technology 15: 83–106. [Google Scholar]
  59. United Nations General Assembly. 2022. Disinformation and Freedom of Opinion and Expression During Armed Conflicts. Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. August 12. Available online: https://documents-dds-ny.un.org/doc/UNDOC/GEN/N22/459/30/PDF/N2245930.pdf?OpenElement (accessed on 20 June 2024).
  60. U.S. Cyberspace Solarium Commission. 2021. Countering Disinformation in the United States. Falls Church: U.S. Cyberspace Solarium Commission. [Google Scholar]
  61. Ventura, Luca. 2023. EU Passes Digital Services Act. Global Finance Magazine. September 21. Available online: https://gfmag.com/technology/eu-passes-digital-services-act/ (accessed on 10 October 2024).
  62. Verbraucherzentrale Bundesverband. 2023. Auswertung: 100 Tage DSA [Evaluation: 100 Days of the DSA]. Available online: https://www.vzbv.de/sites/default/files/2023-12/vzbv-Untersuchung_DSA_bf.pdf (accessed on 10 October 2024).
  63. Verbraucherzentrale Bundesverband. n.d. 100 Tage Digital Services Act: Verbraucherschutz Auf Online-Plattformen Weiter Mangelhaft [100 Days of the Digital Services Act: Consumer Protection on Online Platforms Still Inadequate]. Available online: https://www.vzbv.de/pressemitteilungen/100-tage-digital-services-act-verbraucherschutz-auf-online-plattformen-weiter (accessed on 18 January 2024).
  64. Wilman, Folkert. 2022. The Digital Services Act (DSA)—An Overview. Social Science Research Network. December 27. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4304586 (accessed on 9 January 2025).
  65. Yadav, Kamya, Ulaş Erdoğdu, Samikshya Siwakoti, Jacob N. Shapiro, and Alicia Wanless. 2021. Countries Have More than 100 Laws on the Books to Combat Misinformation. How Well Do They Work? Bulletin of the Atomic Scientists 77: 124–28. [Google Scholar] [CrossRef]
  66. Yin, Robert K. 1994. Case Study Research: Design and Methods. Applied Social Research Methods. Thousand Oaks: SAGE Publications. [Google Scholar]
Table 1. Providers covered by DSA (as of 25 April 2023).
Very Large Online Platforms: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, Zalando
Very Large Online Search Engines: Bing, Google Search
Table 2. DSA enforcement actions (as of 31 October 2024).
Provider | Designated Service | Type of Service Under DSA | DSA Enforcement Actions
AliExpress International (Netherlands) B.V. | AliExpress | VLOP | 25.04.2023: designation; 06.11.2023: request for information; 18.01.2024: request for information; 14.03.2024: opening of proceedings
Amazon Services Europe S.à.r.l. | Amazon Store | VLOP | 25.04.2023: designation; 15.11.2023: request for information; 18.01.2024: request for information; 05.07.2024: request for information
Apple Distribution International Limited | App Store | VLOP | 25.04.2023: designation; 14.12.2023: request for information; 18.01.2024: request for information
Aylo Freesites Ltd. | Pornhub | VLOP | 20.12.2023: designation; 13.06.2024: request for information; 18.10.2024: request for information
Booking.com B.V. | Booking.com | VLOP | 25.04.2023: designation; 18.01.2024: request for information
Google Ireland Ltd. | Google Search | VLOSE | 25.04.2023: designation; 18.01.2024: request for information; 14.03.2024: request for information
Google Ireland Ltd. | Google Play | VLOP | 25.04.2023: designation; 14.12.2023: request for information; 18.01.2024: request for information
Google Ireland Ltd. | Google Maps | VLOP | 25.04.2023: designation; 18.01.2024: request for information
Google Ireland Ltd. | Google Shopping | VLOP | 25.04.2023: designation; 18.01.2024: request for information
Google Ireland Ltd. | YouTube | VLOP | 25.04.2023: designation; 09.11.2023: request for information; 18.01.2024: request for information; 14.03.2024: request for information; 02.10.2024: request for information
Infinite Styles Services Co., Ltd. | Shein | VLOP | 26.04.2024: designation; 28.06.2024: request for information
LinkedIn Ireland Unlimited Company | LinkedIn | VLOP | 25.04.2023: designation; 18.01.2024: request for information; 14.03.2024: request for information; 07.06.2024: statement of Commissioner Breton on LinkedIn's announcement on targeted advertisement
Meta Platforms Ireland Limited (MPIL) | Facebook/Instagram | VLOPs | 25.04.2023: designation; 19.10.2023: request for information; 10.11.2023: request for information; 01.12.2023: request for information; 18.01.2024: request for information; 01.03.2024: request for information; 14.03.2024: request for information; 30.04.2024: opening of proceedings + request for information; 16.05.2024: opening of proceedings; 16.08.2024: request for information
Microsoft Ireland Operations Limited | Bing | VLOSE | 25.04.2023: designation; 18.01.2024: request for information; 14.03.2024: request for information; 17.05.2024: request for information
NKL Associates s.r.o. | XNXX | VLOP | 10.07.2024: designation
Pinterest Europe Ltd. | Pinterest | VLOP | 25.04.2023: designation; 18.01.2024: request for information
Snap B.V. | Snapchat | VLOP | 25.04.2023: designation; 10.11.2023: request for information; 18.01.2024: request for information; 14.03.2024: request for information; 02.10.2024: request for information
Technius Ltd. | Stripchat | VLOP | 20.12.2023: designation; 13.06.2024: request for information; 18.10.2024: request for information
TikTok Technology Limited | TikTok | VLOP | 25.04.2023: designation; 19.10.2023: request for information; 09.11.2023: request for information; 18.01.2024: request for information; 19.02.2024: opening of proceedings; 14.03.2024: request for information; 17.04.2024: request for information; 22.04.2024: opening of proceedings + communication of intention to impose interim measures + request for information by decision; 24.04.2024: statement of Commissioner Breton on TikTok's suspension of TikTok Lite functionalities; 05.08.2024: decision accepting binding commitments; 02.10.2024: request for information
Twitter International Unlimited Company (TIUC) | X | VLOP | 25.04.2023: designation; 12.10.2023: request for information; 18.12.2023: opening of proceedings; 14.03.2024: request for information; 08.05.2024: request for information; 12.07.2024: preliminary findings
Whaleco Technology Limited | Temu | VLOP | 31.05.2024: designation; 28.06.2024: request for information; 11.10.2024: request for information; 31.10.2024: opening of proceedings
WebGroup Czech Republic | XVideos | VLOP | 20.12.2023: designation (designation decision not yet available); 13.06.2024: request for information; 18.10.2024: request for information
Wikimedia Foundation Inc. | Wikipedia | VLOP | 25.04.2023: designation
Zalando SE | Zalando | VLOP | 25.04.2023: designation; 18.01.2024: request for information
