Article

Organisational Challenges in US Law Enforcement’s Response to AI-Driven Cybercrime and Deepfake Fraud

School of Policing Studies, Faculty of Business, Justice and Behavioural Science, Charles Sturt University, Goulburn, NSW 2580, Australia
Laws 2025, 14(4), 46; https://doi.org/10.3390/laws14040046
Submission received: 14 April 2025 / Revised: 14 June 2025 / Accepted: 30 June 2025 / Published: 4 July 2025

Abstract

The rapid rise of AI-driven cybercrime and deepfake fraud poses complex organisational challenges for US law enforcement, particularly the Federal Bureau of Investigation (FBI). Applying Maguire’s (2003) police organisation theory, this qualitative single-case study analyses the FBI’s structure, culture, technological integration, and inter-agency collaboration. Findings underscore the organisational strengths of the FBI, including a specialised Cyber Division, advanced detection tools, and partnerships with agencies such as the Cybersecurity and Infrastructure Security Agency (CISA). However, constraints, such as resource limitations, detection inaccuracies, inter-agency rivalries, and ethical concerns, including privacy risks associated with AI surveillance, hinder operational effectiveness. Fragmented global legal frameworks, diverse national capacities, and inconsistent detection of advanced deepfakes further complicate responses to this issue. This study proposes the establishment of agile task forces, public–private partnerships, international cooperation protocols, and ethical AI frameworks to counter evolving threats, offering scalable policy and technological solutions for global law enforcement.

1. Introduction

The information age has transformed society and the economy through rapid advances in information and communication technologies (ICTs) and artificial intelligence (AI). However, these innovations have also enabled sophisticated cybercrime. AI-driven cybercrime, especially fraud involving deepfakes, has become a notable risk. Deepfakes are hyper-realistic synthetic media generated using AI technologies, which are increasingly used by cybercriminals to impersonate people, commit fraud, and undermine trust in digital systems (Arora et al. 2024; Farouk and Fahmi 2024; George and George 2023; Mahlasela et al. 2024). In 2023, the FBI’s Internet Crime Complaint Center (IC3) received 880,418 complaints, with losses exceeding USD 12.5 billion—a 22% increase from 2022.1 In 2024, the FBI highlighted a surge in AI-enabled cyberattacks targeting individuals and organisations, including phishing schemes, identity theft, and fraudulent business communications, reflecting the rising sophistication and scale of deception powered by AI.2 Globally, deepfake-related incidents have escalated, particularly in the cryptocurrency and fintech sectors, though comprehensive data on the scale of this rise remain limited.3
AI-enabled cybercrime is transnational in nature, as it takes advantage of discrepancies in international legal frameworks and law enforcement capacities. In the European Union, Europol works with member states to combat the rapid rise of deepfake-enabled fraud schemes, while regulatory approaches to deepfakes vary from nation to nation, as a comparison of China and India illustrates.4 This global patchwork makes international cooperation difficult because cybercriminals can run operations from jurisdictions with lax enforcement. The FBI leads countermeasures in the United States; however, its response must operate within an international framework to be effective. By linking national and global perspectives, this paper explores the potential and challenges for the FBI’s organisational framework to contribute to transnational, interdisciplinary interventions against AI and deepfake-related crimes.
Therefore, this paper addresses two research questions: (1) What organisational factors, as framed by Maguire’s (2003) police organisation theory, shape the FBI’s response to AI-driven cybercrime and deepfake fraud? (2) What is the FBI’s current organisational structure and technological capabilities for investigating AI-driven cybercrime and deepfake fraud?
Informed by Maguire’s police organisation theory (Maguire 2003), which suggests that factors such as organisational context, structural complexity, and structural control act as determinants of operational efficacy, this study employs a qualitative, case-oriented, single-case design. Data were processed using MAXQDA 24.
A timely analysis of law enforcement’s response to evolving technological threats is one of this study’s primary contributions. Deepfake fraud exploits human trust to circumvent traditional security paradigms while threatening economic stability, democratic processes, and judicial integrity. Recent incidents have shown how cybercriminals are using artificial intelligence (AI) to create realistic deepfake videos and voice clones as tools for extortion, disinformation, and financial fraud. For example, the FBI cautioned smartphone users about AI-generated robocalls impersonating trusted individuals to obtain personal information, recommending protective measures, such as call-blocking codes.5 Through a case study of the FBI’s structure, culture, technological adoption practices, and mechanisms for inter-agency collaboration, this paper identifies the FBI’s strengths, challenges, and opportunities in combating AI-driven cybercrime. By applying Maguire’s (2003) police organisation theory, this study offers a replicable model for analysing how law enforcement agencies worldwide address similar threats, informing global strategies to counter deepfake fraud and other AI-enabled crimes.

2. Literature Review

2.1. Cybercrime and the Online Harm of Deepfakes

Cybercrime encompasses illegal activities using computers and networks, including hacking, identity theft, phishing, ransomware, and cyberstalking (Hawdon 2021). The advent of artificial intelligence (AI) has intensified these threats, particularly through deepfakes, which are AI-generated synthetic media in video, audio, or images, used to commit fraud, extortion, and misinformation (Maras and Alexandrou 2019). For example, voice-cloning technology enables fraudsters to impersonate a person with high accuracy (Blancaflor et al. 2024; Genelza 2024; Lin et al. 2024). These activities undermine social trust in digital platforms and pose risks to national security, as organised crime groups can take advantage of vulnerabilities. Addressing these threats requires advanced detection technologies, international collaboration, and strong regulatory frameworks (Buçaj and Idrizaj 2025; Mwangi 2024; Qudus 2025).
Deepfakes are synthetic media created using AI models, such as generative adversarial networks (GANs), which pit two neural networks against each other: a generator that produces fake content and a discriminator that assesses its authenticity, a contest that culminates in media that appears strikingly real (Chaudhary et al. 2025; Jheelan and Pudaruth 2025). Deepfakes blend, refashion, duplicate, and overlay visual or audio content to convincingly impersonate real individuals (Maras and Alexandrou 2019). These technologies rely on machine learning algorithms that replicate human-like patterns of cognition and behaviour. Artificial intelligence encompasses computational models of human behaviour and thought processes that are constructed to behave rationally and intelligently through simulations of human behaviour (Suomala and Kauttonen 2022). Machine learning, a subfield of AI, enables computer systems to learn complex tasks directly from data, whether a handful of examples or huge datasets, rather than from pre-programmed instructions (Kanade 2022). These systems learn from experience and adjust their behaviour as they encounter new data.
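To make the GAN mechanism concrete, the following is a minimal training sketch, assuming Python with PyTorch; the toy dimensions (64-dimensional noise vectors, flattened 28 × 28 outputs) and the random stand-in data are illustrative assumptions, not the workings of any particular deepfake tool.

```python
# Minimal GAN sketch: a generator learns to produce fake samples while a
# discriminator learns to tell them apart (illustrative toy example).
import torch
import torch.nn as nn

# Generator: maps random noise to a fake "image" (flattened 28x28 pixels).
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 784), nn.Tanh(),
)

# Discriminator: outputs the probability that an input is real, not generated.
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, 784)  # stand-in for a batch of real images

for step in range(200):
    # 1. Train the discriminator to separate real samples from generated ones.
    fake_batch = generator(torch.randn(32, 64)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
              + loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring fakes as real.
    fake_batch = generator(torch.randn(32, 64))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The adversarial loop is the point: every improvement in the discriminator pushes the generator towards more realistic output, which is one reason mature deepfakes are so difficult to distinguish from authentic media.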
The availability of deepfake tools and technology has reached the level of open-source software and mobile apps that enable anyone with a smartphone or a computer to create lifelike deepfakes with taps and swipes. This accessibility has raised concerns about its potential for misuse, given its dual-use nature for legitimate and malicious purposes (Brandqvist 2024; Macron 2025; Oza et al. 2024).
This technology has found legitimate applications in creative industries, such as film and entertainment, where it has been used to recreate deceased actors or enhance visual effects. However, it also facilitates social engineering attacks, deceiving people to divulge sensitive information, which is another ethical layer of concern (Broklyn et al. 2024; Dsouza et al. 2024; Kaur et al. 2024).
Deepfake technology has also been abused to produce pornographic videos and images, including explicit material featuring celebrities, politicians, friends, or enemies, all without their consent. Many celebrities have been victims of such videos (Birrer and Just 2024; Korshunov and Marcel 2018). A deepfake video of former US First Lady Michelle Obama was shared on Reddit, digitally overlaying her face onto that of a pornographic actress with a similar facial structure.6 Voice cloning, an advanced form of deepfake technology, is also increasingly used in business fraud. Scammers can create highly convincing fraudulent telephone calls to persuade victims to send money through wire transfers, gift cards, or cryptocurrency. The criminals harvest audio from well-known public platforms, such as YouTube or TikTok, to create voice clones of celebrities, officials, or even private citizens (Lin et al. 2024). Beyond deepfake generators, an array of digital media manipulation tools exists, and more are being developed, to alter user video, voice, and images.
Crucially, the consequences of deepfake technology are not confined to isolated individual use cases; they pose risks to society on a larger scale. Emerging technologies such as deepfakes have empowered organised crime groups to become security threats to nations in a globalised digital world (Lin 2022; Matey 2024). Globally, legal frameworks are struggling to keep up, making cross-border prosecution and enforcement increasingly difficult.7 As deepfakes proliferate, differentiating the real from the fake, fact from fiction, is becoming increasingly complicated, an effect known as “truth decay” (Chesney and Citron 2019a; Helmus 2022). The declining trust in (social) media can have serious drawbacks, especially in the political domain, where deepfakes can be created to delegitimise political actors or to interfere in elections (Diakopoulos and Johnson 2021). The challenge, then, is to weigh the utility of deepfakes against their potential for harm and to develop regulatory and governance standards adequate to that task.

2.2. Organisational Theory and Cybercrime Investigation in Law Enforcement

While organisational theory may shed light on law enforcement’s response to cybercrime, interdisciplinary approaches, bringing together criminology, computer science, and policy studies, are essential to addressing it holistically. At its core, an organisation is a group of people working together to accomplish a common goal (Bittner 1965). It is a consciously coordinated social entity with identifiable boundaries that functions on a relatively continuous basis to achieve a common goal or set of goals (Robbins 1990). Organisational theory focuses on organisational design, organisational structure, and the behaviour of managers and administrators (Daft 2015).
In the last few decades, empirical studies have attempted to understand the relationship between organisations and performance. Early studies argued that performance improvement must come from changing organisational properties, such as leadership, teams, culture, policies, and structures (organisations are often treated as independent variables). Starting in the 1950s, scholars began to treat organisations as rational systems and dynamic entities that respond to the external environment, meaning they can also be treated as dependent variables. Before the 1960s, research focused on comparing organisations and their differences, while the period between the 1960s and early 1980s was dominated by theory and empirical work concerning formal organisational structures. This focus diminished after 1985 but was revived in the 1990s (Donaldson 1995; Kalleberg 1996). Further studies have focused on organisational structure, internal units, and structural variables (Maguire 2003).
Police organisations have been a central focus of reform and professionalisation efforts since the beginning of modern policing in the US in the 20th century (Zhao et al. 2010). Nevertheless, empirical investigations of associations between law enforcement organisational attributes and cybercrime investigations are few. Several studies have relied on the work of Robert Langworthy, who examined three dimensions, size, technology, and environment, as a foundation for later studies (Langworthy 1986). This body of research on law enforcement operations and cybercrime helps organisations understand how trends in cybercrime evolve over time. Building on Langworthy, Edward R. Maguire (2003) developed a theory of police organisation in Organizational Structure in American Police Agencies: Context, Complexity, and Control. His framework explains differences in police department characteristics as a function of context, structural complexity, and structural control. Despite their similar primary law enforcement functions, Maguire posits that the structures and traits of these agencies can vary widely.
However, the rise of artificial intelligence, particularly deepfake technologies, challenges Maguire’s model. The emergence of AI-driven threats exposes limitations in centralised, hierarchical models of structural control, which may lack the agility required for real-time detection and response. AI necessitates new dimensions of organisational adaptability, capabilities not fully captured in traditional definitions of structural control. Moreover, the ethical implications of using AI in policing suggest a need for ethical oversight structures not explicitly captured in Maguire’s model.
Studies of high-tech crime investigations have identified emerging trends. According to Willits and Nowacki (2016), the growing size of sub-national jurisdictions, coupled with the rising significance of cybercrime, means that certain US law enforcement agencies are more likely to form cybercrime units: larger agencies, agencies with non-routine and expansive responsibilities, and those that utilise more advanced technology and specialised divisions to address potential areas of attack (Willits and Nowacki 2016).
Nowacki and Willits (2020) posit that law enforcement agencies with more complexity and specialisation (including establishing cybercrime units) are more likely to allocate resources for cybercrime investigation. They specifically connected organisational context, complexity, and control to cybercrime policies. For context, larger agencies, those performing nonroutine tasks, and those subject to collective bargaining agreements tend to devote more resources to cybercrime investigations. For complexity, the more hierarchical layers and specialisation an agency has, the more likely it is to devote staff to cybercrime, and the more civilian employees it has, the more likely this becomes. In terms of control, agencies employing civilian personnel for administrative positions spend a larger share of their budgets on cybercrime initiatives (Nowacki and Willits 2020).

3. Methods

3.1. Research Design

This study adopted a qualitative single-case study design, focusing on the Federal Bureau of Investigation (FBI) as the unit of analysis due to its central role in cybercrime investigation and enforcement efforts (Yin 2017). The design enabled an in-depth exploration of the FBI’s organisational response to AI-driven cybercrime and deepfake fraud. This study provides potential implications for other law enforcement agencies facing similar challenges, such as the European Union Agency for Law Enforcement Cooperation (Europol) and the International Criminal Police Organization (Interpol), which are also navigating similar issues of resource constraints and technological adaptation.
The analytical framework of this study was guided by Maguire’s (2003) police organisation theory, which contends that organisational context (external pressures and environment), structural complexity (a division of labour and specialisation), and structural control (hierarchy and decision-making processes) shape operational efficacy. These dimensions are particularly important when analysing how the FBI structures itself to combat AI-driven cybercrime, a type of crime that requires technological expertise, inter-agency coordination, and adaptive governance.
The emphasis on understanding organisational characteristics also supports the qualitative case study approach, as it facilitates an in-depth exploration of internal processes, cultural dynamics, and structural adaptations that quantitative methods may miss. Like other newer, non-traditional crimes, deepfake fraud exploits AI tools and human trust, requiring law enforcement to establish new units, train new skills, and build new partnerships to combat it. By focusing on the FBI, this study captures how one major agency negotiates these challenges, shedding light on broader law enforcement trends while acknowledging the specificity of the case.

3.2. Data Collection

The novelty and sensitivity of AI-driven cybercrime, particularly deepfake fraud, precluded primary data collection, such as interviews with FBI officials, due to classified or ongoing investigations. Additionally, the rapid evolution of AI technologies means organisational responses are still developing, limiting the feasibility of structured interviews. To address this data collection limitation, the study employed a multi-source secondary data collection strategy, integrating FBI reports, Internet Crime Complaint Center (IC3) data, government publications (e.g., Federal Trade Commission reports), peer-reviewed literature, and policy reports. Nonetheless, relying solely on publicly available sources introduces several data collection limitations. First, there is a potential for selection bias, as high-profile cases are more likely to be reported and analysed, and this may neglect less publicised yet important incidents. Second, the absence of insider perspectives, such as direct law enforcement input, limits the ability to capture operational challenges, decision-making processes, and internal assessment of emerging trends. Additionally, publicly available data may lag behind real-time developments of this fast-evolving cybercrime.
To mitigate these biases, this study applied triangulation, cross-referencing crime statistics, policy analyses, and academic sources from diverse stakeholders, such as federal agencies and industry experts. This approach enhanced the robustness of findings by validating information across different sources. It also reduced the risk of over-reliance on any single narrative. However, the absence of direct law enforcement input remains a limitation, and some nuances may be inherently inaccessible without primary data.
Additionally, this author has 20 years of policing experience and worked as a police liaison officer in Washington, D.C., from 2014 to 2019, focusing on cybercrime and homeland security issues. This author has gained in-depth knowledge through various formal and informal conversations with the FBI and other federal, state, and local law enforcement agencies. The author’s law enforcement experience informed the study’s contextual understanding; however, it was not used as primary data and did not substitute for direct insider testimony.

3.3. Data Analysis

Data were analysed using MAXQDA 24 through thematic coding, guided by Maguire’s (2003) framework (organisational context, structural complexity, and structural control) and inductive themes specific to deepfake fraud. Codes were developed iteratively, with initial deductive codes derived from Maguire’s dimensions and emergent codes identified through repeated data review. Inter-coder reliability was ensured through cross-referencing to support evidence-based conclusions.
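As a concrete illustration of this coding logic, the sketch below shows how deductive codes drawn from Maguire’s dimensions and inductively derived codes might be organised and applied to text segments. The code names mirror the framework, but the keyword lists and the matching function are hypothetical simplifications of what is, in practice, interpretive work performed in MAXQDA.

```python
# Illustrative codebook for thematic coding (hypothetical keyword lists).
codebook = {
    # Deductive codes derived from Maguire's (2003) dimensions
    "organisational_context": ["privacy law", "budget", "public pressure"],
    "structural_complexity": ["field office", "task force", "specialisation"],
    "structural_control": ["centralised", "protocol", "warrant"],
    # Inductive codes that emerged during repeated review of the data
    "deepfake_detection": ["neural network", "voice clone", "gan"],
    "inter_agency_friction": ["turf war", "jurisdiction", "rivalry"],
}

def code_segment(segment: str) -> list[str]:
    """Return every code whose keywords appear in a text segment."""
    text = segment.lower()
    return [code for code, terms in codebook.items()
            if any(term in text for term in terms)]

print(code_segment("The Cyber Division task force tracked voice clone scams."))
# -> ['structural_complexity', 'deepfake_detection']
```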

3.4. Ethical Considerations

Ethical considerations were paramount, given the sensitive nature of cybercrime data. This study used only publicly available or non-sensitive sources, in line with ethical research practices, ensuring compliance with institutional review board protocols and minimising risks to individuals or organisations (Tisdell et al. 2025). No subjects or organisations were identified in ways that could cause harm, and all data were handled confidentially.

3.5. Limitations

Although providing depth, the single-case design limits generalisability to law enforcement agencies whose resources and mandates differ from the FBI’s, such as state or local entities. As stated, the reliance on secondary data exposes the study to biases, including under-reporting, excessive attention to headline-dominating cases, and the absence of direct insider perspectives from current practitioners. Triangulation limited these threats. While the lack of primary data precluded direct insights into officials’ perspectives, the study’s organisational focus is well captured in official reports and records. Despite these mitigation strategies, some limitations are inherent and should be considered when interpreting the findings.

4. Findings

4.1. The Landscape of AI-Driven Cybercrime and Deepfake Fraud in the US

4.1.1. Defining AI-Driven Cybercrime

AI-driven cybercrime leverages machine learning and generative models to scale illicit activities, from automated phishing campaigns to intricate synthetic media schemes (IOCTA 2024). Advanced AI technologies, particularly deepfakes, enable cybercriminals to execute attacks with high accuracy and adaptability, exploiting AI’s ability to mimic human behaviour and analyse large datasets, which makes such attacks more challenging to detect and prevent (FBI 2024). Some key typologies of AI-related cybercrime include:
Business Email Compromise (BEC): Scammers breach corporate email accounts and impersonate company executives or trusted business partners to authorise fake fund transfers. BEC scams took USD 2.7 billion from businesses worldwide in 2022.8 Natural language processing (NLP), a branch of AI, has raised the quality of phishing emails to the point where they read as though written by a real person.9
Data Breaches: Data breaches have affected thousands of companies in the US, making unauthorised access to sensitive data a continuing threat. The Identity Theft Resource Center (ITRC) 2023 Annual Data Breach Report tracked 3205 data compromises across the US in 2023, a record high and a 78% increase from 2022.10 Such tools drive these compromises faster by finding vulnerabilities and automatically exploiting them to steal sensitive personal or corporate data, which can be used later for fraud.
Phishing/Spoofing: Phishing, in which attackers trick targets into revealing personal information, is the top reported internet crime type and a common vehicle for deepfake-enabled scams; AI makes it easier to construct believable scenarios and fraudulent emails or text messages that trick targets into handing over credentials. The FBI’s IC3 received 298,878 phishing and spoofing complaints in the US in 2023 (see footnote 1).
Ransomware: AI can be harnessed to create smarter, more efficient malware that encrypts data using unique algorithms and demands payment in cryptocurrency. In 2023, total cybercrime losses reported to the IC3 exceeded USD 12.5 billion, marking a 22% increase from the previous year, while AI-driven attacks fostered the rapid adaptation of strategies to escape detection (see footnote 1).
Identity Theft: Identity theft refers to the use of another person’s data to obtain credit or conduct transactions fraudulently. The FTC and affiliated agencies fielded 1,036,961 complaints of identity theft in 2023.11 Industry reports across multiple sectors have warned about the role of deepfake technology and how it expands the threat through hyper-realistic impersonations, allowing criminals to defeat authentication processes.12
Elder Fraud: In 2023, total losses reported to the FBI’s IC3 by those over the age of 60 topped USD 3.4 billion, an almost 11% increase in reported losses from 2022.13 According to the National Council on Aging (NCOA), with AI-based deepfakes exploiting emotional needs via false emergencies or celebrity testimonials, elderly people have become a primary target of the so-called “grandparent scam”.14

4.1.2. Deepfake Technology and Cybercrime

Deepfake technology creates hyper-realistic audio, video, or images, weaponising trust and misleading victims, representing a unique risk in organised fraud activity (Chesney and Citron 2019b). Criminal organisations misuse this technology to mimic trusted individuals, including executives, relatives, or celebrities, to gain funds, steal data, or spread disinformation.15 Deepfake tools, which previously necessitated considerable expertise to use, have become overwhelmingly accessible via open-source software and mobile apps, facilitating widespread use.
Major international deepfake cases have already occurred. In the Arup case (2024), scammers impersonated a UK consulting firm’s Chief Financial Officer and senior staff in a series of deepfake video calls to trick one of the firm’s Hong Kong employees into wiring HKD 200 million (USD 25.6 million) to the scammers.16 Several deepfake cases have also occurred in the United States:
Deepfake Elon Musk Investment Scams: An 82-year-old retiree in the US lost over USD 690,000 in 2024 to an apparent deepfake video in which a fake Elon Musk appeared to endorse a fraudulent investment, highlighting how these schemes can ruin victims’ lives.17
Deepfake Robocall Impersonating Joe Biden: In January 2024, deepfake technology was deployed in a political campaign when an AI-generated robocall impersonating President Joe Biden was sent to New Hampshire voters, asking them not to take part in the Democratic primary. The case underscores deepfakes’ potential for electoral disinformation.18
Deepfake Scam Calls and Voice Cloning: The volume of deepfake scam calls increased significantly in 2024, with companies reporting a growing number of such fraud calls. Scammers harness AI to mimic the voices of trusted individuals, such as bank officials, corporate executives, or relatives, to deceive victims into transferring money or disclosing sensitive information.19
School Principal Defamation via Deepfake Audio: In Baltimore in 2024, a school principal’s reputation was damaged after a staff member created and circulated a deepfake audio recording of him in order to undermine him. The fabricated recording was widely believed and led to abusive messages and threats against the principal before police announced it was fake. The case marked deepfakes as a new threat to US public schools.20
US Celebrity Impersonation Fraud: Scammers are leveraging deepfake technology to create fake videos and images of celebrities to promote fraudulent investment schemes. Scammers have also deepfaked celebrities such as Taylor Swift to produce pornographic content circulated on various social media platforms.21 Such schemes lure victims into fraudulent transactions by exploiting emotional manipulation and confidence in celebrities.22
The above schemes prey on psychological vulnerabilities, taking advantage of urgency or familiarity to deceive people. They illustrate deepfakes as enablers of different types of fraud activities, increasing the political and social threat of deepfakes.23

4.1.3. Societal and Economic Implications

The deepfake fraud cases above illustrate the technology’s sophistication. Deepfake techniques are also used to target political actors by spreading disinformation, as witnessed during the Russian invasion of Ukraine (Majchrzak 2023). Such manipulation can poison public discourse and disrupt the integrity of elections, potentially skewing opinions or undermining leaders. Sextortion attacks, particularly those using deepfake pornographic imagery to extort and blackmail victims, cause significant personal and psychological harm (Laffier and Rehman 2023).
On the economic side, deepfake fraud puts pressure on corporate and public resources. Social media firms have acted in response to the proliferation of celebrity scams, with companies such as Meta investing in detection tools and partnerships, such as the Fraud Intelligence Reciprocal Exchange (FIRE), which also targets scams affecting users in the US.24 However, the absence of global standards and legal frameworks means that these efforts remain fragmented, leaving societies and economies vulnerable to growing threats.

4.2. FBI’s Organisational Characteristics

The FBI is one of the key agencies responsible for investigating cybercrime in the United States and for fighting advanced threats as deepfakes come into focus. Grounded in Maguire’s (2003) organisational theory, this section analyses the Bureau’s organisational features across three primary categories: (a) organisational context, (b) structural complexity, and (c) structural control (Maguire 2003); see Table 1. These dimensions provide the analytical frame through which this study examines how the FBI adjusts to the shifting terrain of technological crime.

4.2.1. Organisational Context

Founded in 1908 as the Bureau of Investigation and renamed in 1935, the FBI is an established agency comprising around 38,000 personnel, including 13,000 special agents.25 The FBI’s Cyber Division, with 93 task forces, exemplifies functional differentiation. However, its hierarchical structure highlights a universal challenge: balancing centralisation with agility in global cybercrime responses. Its massive size requires complex coordination devices to handle multiple activities, which is consistent with Maguire’s (2003) finding that size helps define complexity. The agency’s age indicates a long-established bureaucracy, which tends to be stable but resists change and reacts more slowly to new threats, such as deepfakes. Finally, the FBI has invested significantly in digital forensics, cybercrime investigation, and advanced AI technologies to improve criminal intelligence collection, analysis, and sharing. The use of these technologies has deeply shaped its operating environment.26
The FBI works in a strategic environment that is influenced by the following:
Legal Framework: The Computer Fraud and Abuse Act (CFAA, 18 U.S.C. §1030) is the prime mechanism for prosecuting cybercrimes, prohibiting unauthorised access to computers and systems, fraud, and damage to data.27 However, privacy laws, namely the Electronic Communications Privacy Act of 1986 (ECPA), heavily restrict data collection, hindering investigations of encrypted communications and the spread of deepfakes.28 For instance, the ECPA protects electronic communications by requiring a warrant for interception, which can slow down the real-time tracking of deepfakes.
Social Pressures: There has been increasing public concern over privacy, civil liberties, and the ethical risks of deploying AI-driven surveillance and facial recognition, as seen in the increasing debates since 2024 in the US, with scrutiny of the misappropriation of facial recognition in deepfake investigations.29 These pressures require transparent and accountable practices, putting the FBI to the test to balance enforcement with civil liberties.30
Political Dynamics: The FBI works through legal attaché offices and sub-offices covering more than 180 countries on cross-border investigations.31 However, tensions with countries frequently linked to state-sponsored cybercrime, such as China and Russia, whose actors have been associated with deepfake campaigns, impede collaboration. The cross-jurisdictional, geographically transcendent nature of cybercrime underscores the necessity of international efforts to confront dishonest actors who exploit the US judicial system.32

4.2.2. Structural Complexity

The FBI’s organisational approach is hierarchical yet decentralised to meet the complexity of cybercrime investigations. According to Maguire (2003), structural complexity is evaluated through levels of vertical, functional, and spatial differentiation, which helps explain how the FBI responds to technological crimes, such as deepfakes.
Vertical Differentiation: The FBI operates through several layers of hierarchy, from its headquarters in Washington, D.C., through 56 field offices, over 400 resident agencies, and 23 overseas posts.33 Field offices execute local operations, while headquarters provides resources, guidelines, and strategic decisions. This verticality guarantees clarity of command but creates lag in responsiveness to fast-moving deepfake threats, where action must occur in real time (Lin 2024; Maguire 2003). The layering makes the organisation more complex, which can slow decision-making, particularly in cross-jurisdictional cases.
Functional Differentiation: The FBI’s Cyber Division, established in 2002, serves as the primary unit combating technological crimes, including deepfakes, and organises 93 Computer Crimes Task Forces across the country.34 The National Cyber Investigative Joint Task Force (NCIJTF) further combines personnel from more than 30 federal agencies, providing specialisation through agencies like the National Security Agency (NSA) and the Department of Homeland Security (DHS). The NCIJTF is led by a director from the FBI and a mission council comprising senior leaders from these agencies, ensuring a collaborative, multi-agency approach to countering complex cyber threats.35 Agencies with greater degrees of specialisation, such as the FBI, tend to create separate cybercrime units, which in turn enhances their ability to manage more complicated cases (Willits and Nowacki 2016). At the same time, the FBI’s Intelligence Branch and Science and Technology Branch are two of the functional units that provide analytical and forensic support for cyber investigations.36
Spatial Differentiation: With field offices in major US cities and international posts covering over 180 countries, the FBI’s geographic breadth allows for both localised responses and global reach. Geographic dispersal allows investigations to be tailored to local conditions but creates coordination challenges, especially for deepfake crimes with transnational offenders.37 Cybercrime’s cross-domain nature further complicates jurisdictional alignment in US cybersecurity policy, requiring strong inter-agency and international partnerships.
The structural complexity positions the FBI to address various cyber threats but also carries inefficiencies in the form of delayed information sharing between individual units, which can inhibit the detection of hyper-realistic deepfakes.

4.2.3. Structural Control

Structural control mechanisms ensure the integrity and consistency of the FBI’s legal and operational objectives. As Maguire (2003) described, law enforcement agencies like the FBI manage their cybercrime efforts through administration, formalisation, and centralisation.
Administration: Resources are allocated, and strategic priorities are set, around a unified framework, enabling consistent responses to cyber threats. Agencies that are well-managed have more opportunities to invest in cybercrime units (Nowacki and Willits 2020). For example, the US federal budget for FY 2025 expands the Department of Justice and FBI’s cyber investigative capabilities, aligning investments with the National Cybersecurity Strategy to ensure robust, coordinated responses to emerging threats (see footnote 26). However, this top-down approach is likely to stifle field-level innovation, especially in adjusting to emerging technologies like AI and deepfakes.
Formalisation: The FBI follows rigid policies, which include observing the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA) and coordinating investigative procedures with other agencies, such as the Secret Service.38 Formalised protocols may foster legal accountability, but they can also inhibit flexibility (Maguire 2003). For instance, the ECPA requires the government to obtain a search warrant or subpoena before accessing electronic records. These strict warrant requirements can delay access to encrypted deepfake content, undermining timely action (see footnote 27).
Centralisation: Decisions are developed by upper management of the FBI as operational standards for the rest of the organisation (Boyko 2024). The NCIJTF aims to mitigate fragmentation by coordinating multi-agency efforts. However, inter-agency rivalries remain and can interfere with coordination.39 While centralisation guarantees strategic coherence, it might decrease the flexibility of field agents in addressing deepfake incidents.
While these controls serve to strengthen the FBI’s management of multi-faceted investigations, they can stifle flexibility when gauging the implications of the rapidly changing deepfake technology.
The characteristics discussed in Section 4.2 are summarised in Table 1.
Table 1. The FBI’s organisational characteristics informed by Maguire’s (2003) theory.
Dimension | Sub-Components | FBI Characteristics | Cybercrime and Deepfake Fraud Context
Organisational Context | Size, Age, Tasks and Technology, Environment | ~38,000 personnel; founded in 1908; specialised cybercrime units; constrained by privacy laws and global legal frameworks | Bureaucratic delays and legislative restrictions may impede the prompt detection of deepfake fraud
Structural Complexity | Vertical, Functional, Spatial Differentiation | Multi-layered hierarchy; Cyber Division and NCIJTF; 56 field offices, ~60 legal attaché offices (Legats) covering over 180 countries | Hierarchical structures may delay responses to deepfake scams, and field offices struggle to keep pace with technological advancements
Structural Control | Administration, Formalisation, Centralisation | Centralised decision-making; adheres to CFAA and ECPA; coordinated via NCIJTF | Legislation like the ECPA and centralised decision-making may delay real-time responses to deepfake scam calls, despite NCIJTF coordination
(Source: Author’s analysis).

4.3. Technological Integration and Inter-Agency Collaboration

4.3.1. Technological Integration

Currently, the FBI utilises advanced technologies for cybercrime investigation, such as deepfake detection and prevention. The FBI actively engages in efforts to establish collaborative relationships with ethical hackers, who aid in the detection and prevention of cybercrime. Its technology features the following:
AI-Powered Threat Intake Processing System (TIPS): The FBI employs an AI-powered threat intake processing system, or TIPS, with a tool called “Complaint Lead Value Probability” to prioritise incoming cybercrime reports. Scoring tips by urgency and relevance allows resources to be channelled to high-impact cases, such as deepfake scams. Such tools are essential given that high-tech crimes tend to remain hidden (a hypothetical scoring sketch follows this list).40
Neural Networks for Deepfake Detection: The FBI uses neural networks that can identify deepfake content. Detection accuracy can be high in controlled environments, though performance drops against advanced deepfakes in real-world scenarios (a minimal classifier sketch follows this list).41 The NSA, FBI, and CISA jointly advise organisations to implement a variety of detection technologies to deal with emerging deepfake threats.42
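To illustrate the triage idea behind such a system, the following is a hypothetical sketch of how incoming complaints might be scored and ranked. The field names, weights, and caps are illustrative assumptions, not the FBI’s actual “Complaint Lead Value Probability” model, which is not publicly documented.

```python
# Hypothetical complaint-triage scoring sketch (illustrative weights only).
from dataclasses import dataclass

@dataclass
class Complaint:
    reported_loss_usd: float
    victim_count: int
    is_ongoing: bool           # an active, still-running fraud campaign
    has_actionable_lead: bool  # e.g., wallet address, phone number, IP

def lead_value_score(c: Complaint) -> float:
    """Return a 0-1 priority score; higher means investigate sooner."""
    score = min(c.reported_loss_usd / 1_000_000, 1.0) * 0.4  # financial impact
    score += min(c.victim_count / 100, 1.0) * 0.2            # scale of harm
    score += 0.2 if c.is_ongoing else 0.0                    # urgency
    score += 0.2 if c.has_actionable_lead else 0.0           # investigability
    return score

# Rank a small intake queue so high-impact, actionable cases surface first.
queue = [Complaint(690_000, 1, False, True), Complaint(25_600_000, 40, True, True)]
for c in sorted(queue, key=lead_value_score, reverse=True):
    print(round(lead_value_score(c), 2), c)
```

The design point is simply that a transparent, weighted score lets analysts work a large intake queue from the most urgent, most investigable cases downward.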
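In the same spirit, the following is a minimal sketch of a convolutional deepfake classifier, assuming Python with PyTorch; the architecture, the 64 × 64 input size, and the random stand-in data are illustrative assumptions rather than the FBI’s tooling. It also hints at why laboratory accuracy may not transfer: a classifier trained on one distribution of fakes can fail against fakes produced by newer generators.

```python
# Minimal deepfake image classifier sketch (illustrative architecture).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # assumes 64x64 RGB input frames
)

frames = torch.rand(8, 3, 64, 64)             # stand-in batch of face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = deepfake, 0 = authentic

# One illustrative training step (optimizer omitted for brevity).
loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()
```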

4.3.2. Inter-Agency Collaboration

The FBI’s cybercrime strategy includes a collaborative approach, such as the National Cyber Investigative Joint Task Force (NCIJTF), which brings together more than 30 agencies, such as the Office of the Director of National Intelligence (ODNI), NSA, DHS, and CISA, and others (see footnote 35). Key initiatives include the following:
Election Security: The FBI collaborated with CISA and NSA to combat deepfake misinformation campaigns targeting the 2024 US general election. NCIJTF’s task force team tracked social media channels and cyber-enabled threats and supported intelligence analysis to counter foreign disinformation initiatives.43 The threats included deepfake content and disinformation.
FBI InfraGard Program: FBI Director Christopher Wray called for collaboration with state and local authorities to protect against nation-state cyber threats and critical infrastructure attacks. The InfraGard Program aims to improve partnerships with states to share classified intelligence and strengthen local cyber readiness. These efforts aim to level the playing field for state-level agencies facing sophisticated threats.44
However, jurisdictional overlaps and data-sharing restrictions through privacy laws, such as the Electronic Communications Privacy Act (ECPA), make all of this a challenge. Mutual legal assistance treaties (MLATs) and other mechanisms exist, but requests can be slow, and some countries are reluctant to assist. An extensive lack of cooperation with countries that shelter cybercriminals further complicates investigations (see footnote 35). Disputes between agencies may slow down coordinated responses.45

4.4. Organisational Culture and Training

The FBI has cultivated a culture of intellectual rigour and high ethical standards yet struggles to keep pace with technical innovation.
Inter-Agency “Turf Wars”: Jurisdictional overlaps and competition with other agencies can hinder resource sharing and collaboration, creating inefficiencies in the federal response to cybercrime.46 Cybercrime’s complexity requires better coordination among federal, state, and local agencies.47
Training Gaps: The FBI Academy offers coursework on digital forensics and AI analytics, but its curricula cannot keep up with rapidly evolving AI technologies.48 Ongoing initiatives seek to improve proficiency in AI-related analysis, blockchain verification, and analytical techniques that can dissect AI-generated content, but it is difficult to account for a constantly evolving threat landscape (see footnote 35). AI and deepfake technologies are evolving so rapidly that no training program can stay completely up-to-date (see footnote 2).

4.5. Challenges

The FBI faces serious obstacles in combating deepfake-based cybercrime, an ever-adaptive danger that undermines trust and technology. Hyper-realistic deepfakes consistently outpace existing detection methods, such as neural network classifiers and voice analysis, as the digital footprints they leave are elusive at best. Attribution to individuals remains difficult because cybercriminals can use VPNs, encrypted channels, and cryptocurrencies to hide their identities, limiting the ability of police to conduct transnational investigations, particularly when perpetrators operate from jurisdictions that cooperate very little.49 Finally, competition with the private sector for AI talent yields shortages that limit access to advanced countermeasures, while agent burnout further strains the workforce and hampers operations.50
On the front lines of this effort, ethical dilemmas add new hurdles to progress when AI-powered surveillance tools escalate privacy concerns, prompting a public backlash to perceived overreach and demanding a careful balance of civil liberties. Researchers have pointed out that the misuse of AI technologies can infringe upon privacy and perpetuate bias, bringing challenges, such as algorithmic discrimination and mismanagement of personal data (Ahmad et al. 2025; Ezzeddine 2024; Shalevska 2024).51 As Interpol and the United Nations Interregional Crime and Justice Research Institute (UNICRI) note, “the responsible use of AI in law enforcement prioritised the alignment of policing principles, ethical standards, and human rights compliance.”52 The problems are global in nature, where fragmented legal frameworks amplify the enforcement difficulties faced by agencies, such as Interpol. Tackling this complex crisis requires the development of creative tech improvements, strong global cooperation, and rebuilding lost confidence, as well as political frameworks for ethical practice to counterbalance deepfake hazards.

4.6. Strategies

The FBI should implement a comprehensive technology–governance framework comprising robust law enforcement, stakeholder cooperation, and domestic–international regulations to tackle the deepfake cybercrime challenge. Enhanced tools and techniques, such as neural network analysers, could help identify deepfakes, and public–private partnerships with technology firms can fast-track such initiatives. Proposed amendments to existing computer crime legislation seek expressly to criminalise synthetic media fraud, thus helping to bolster prosecution efforts through legislative reform. International cooperation encourages global initiatives to standardise investigative procedures, enabling law enforcement to transcend jurisdictional barriers and fight cybercriminals who operate across national boundaries.
Deepfakes are an emerging technology that can compromise a wide range of sophisticated applications, from diplomacy to journalism, at rapid scale and in real time. Preparing for emerging threats includes investing in quantum-resistant encryption and predictive analytics to anticipate next-generation deepfakes, enabling organisations to deploy pre-emptive defences before they are compromised. Agile task forces made up of a cross-section of federal, state, and private sector experts can enhance responsiveness and adapt quickly to emerging threats through flexible, collaborative structures. These strategies underscore the urgent role of technological innovation, clear legislation, international cooperation, public awareness, and frameworks and protocols in protecting society from the rising threat of deepfake-led crime.

5. Conclusions and Recommendations

The FBI’s specialised Cyber Division, advanced detection technologies, and inter-agency partnerships position it as a leader in combating rapidly evolving AI-driven cybercrime and deepfake fraud. Maguire’s (2003) framework highlights strengths in functional differentiation but reveals challenges in structural complexity (such as bureaucratic hierarchies and inter-agency rivalry), resource limitations (such as lack of access to advanced detection tools), and ethical concerns (such as privacy risks of AI-enabled surveillance). Similar issues affect international agencies, such as Interpol, which face fragmented legal frameworks. This study’s policy recommendations include the following: (1) To address structural complexity and inter-agency rivalry, rapid and decentralised task forces are recommended to increase responsiveness. This approach would directly mitigate delays caused by bureaucratic hierarchies and reduce inter-agency rivalry by fostering collaborative decision-making and breaking down competitive barriers between agencies. (2) To address resource limitations, public–private partnerships are needed to expand the availability of cutting-edge detection tools. These partnerships would leverage private sector innovation and expertise to strengthen law enforcement capabilities. (3) To address fragmented legal frameworks, this study urges advancing multilateral and bilateral treaties to harmonise laws against cybercrime, fostering consistent international standards and reducing jurisdictional inconsistencies. (4) To address ethical concerns and privacy risks, proper ethical AI frameworks are needed to balance enforcement with civil liberties. These frameworks would establish clear guidelines to protect individual rights in AI-enabled surveillance and policing operations.
Regarding theoretical recommendations, Maguire’s theory could benefit from modifications that directly address the identified organisational issues, such as (1) integrating mechanisms for real-time decision-making and decentralised intelligence sharing, specifically to overcome structural complexity and inter-agency rivalry. This would enable agencies to respond swiftly to AI-driven threats through streamlined information flows. (2) Incorporating ethical oversight structures to govern the use of AI, directly addressing ethical concerns by ensuring accountability and transparency in AI applications within law enforcement. (3) Enhancing organisational adaptability to respond to rapidly evolving technological threats and resource constraints, addressing both the need for continuous technological integration and the challenge of limited resources.
Given the rapid development of AI and deepfake technologies, the above recommendations should be treated as initial steps in a long-term and adaptive process to help government regulations, law enforcement, and societies keep pace. By aligning each recommendation with the corresponding organisational issue, this conclusion strengthens the logical foundation and strategic roadmap for addressing AI-driven cybercrime. It is essential to continually evaluate and refine policy and law enforcement strategies. Especially in the US context, AI-enabled cybercrime intersects with a complex federal, state, and local legal and enforcement framework. Fragmented jurisdictions and diverse laws at different levels create unique challenges for law enforcement agencies. For instance, “turf wars” can be mitigated through rapid and decentralised task forces, as described above, which promote collaborative decision-making and reduce competitive barriers between agencies. Therefore, ongoing research is needed to address these legal, regulatory, and even political challenges. Since organisational theory alone may overlook the impact of legal and political contexts, future studies should integrate these perspectives for a fuller understanding of how US organisations operate within this intricate web of laws and political dynamics.
Future research should adopt multi-case designs, comparing federal, state, and local agencies, or employ quantitative analysis of cybercrime response metrics to enhance robustness. Comparative studies should also include insights from international bodies, such as the European Union Agency for Law Enforcement Cooperation (Europol), which supports investigations initiated by Member States.53 A comparative organisational analysis between US agencies and their international counterparts would be particularly valuable. Such studies could further explore how global treaties address fragmented legal frameworks by comparing Europol’s coordinated approach with US inter-agency efforts. Furthermore, research should pursue industry-oriented directions, evaluate the interplay between US organisations and evolving legal regimes, and explore the effectiveness of policy interventions and the impact of AI on policing capabilities. Viewed through the lens of organisational structures operating within an intricate web of diverse laws and political dynamics, such studies can better address the challenges posed by AI-driven cybercrime and inform more adaptive strategies.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable; this study did not involve humans or animals.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Ahmad, Rafaq, Sumaira Saleem, and Sayyad Hussain. 2025. Ethical and Legal Challenges of Artificial Intelligence: Implications for Human Right. Journal of Law, Society and Policy Review 2: 10–25. [Google Scholar]
  2. Arora, Muskan, Kaushal Kishore Mishra, Mandeep Singh, Praveen Singh, and Rashmi Tripathi. 2024. Deepfake Technology and Its Implications for Influencer Marketing. In Navigating the World of Deepfake Technology. Hershey: IGI Global, pp. 66–90. [Google Scholar]
  3. Birrer, Alena, and Natascha Just. 2024. What we know and don’t know about deepfakes: An investigation into the state of the research and regulatory landscape. New Media & Society, 1–20. [Google Scholar]
  4. Bittner, Egon. 1965. The concept of organization. Social Research 32: 239–55. [Google Scholar]
  5. Blancaflor, Eric B., Raphael M. Abaleta, Luke Martin D. L. Achacoso, Alden Christian C. Amper, and Pfrancis Isaiah R. Ampiloquio. 2024. Emerging Threat: The Use of AI Voice Cloning Software and Services to Deceive Victims Through Phone Conversations and its Potential Effects on the Filipino Population. Paper presented at the 2024 5th Asia Service Sciences and Software Engineering Conference, Tokyo, Japan, September 11–13. [Google Scholar]
  6. Boyko, Konstantin. 2024. Integrating Intelligence Analysis: A Key to Effective Leadership. Available online: https://leb.fbi.gov/articles/featured-articles/integrating-intelligence-analysis-a-key-to-effective-leadership (accessed on 14 June 2025).
  7. Brandqvist, Johan. 2024. The Cybersecurity Threat of Deepfake. Master thesis, University of Skövde, Skövde, Sweden. [Google Scholar]
  8. Broklyn, Peter, Axel Egon, and Ralph Shad. 2024. Deepfakes and Cybersecurity: Detection and Mitigation. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4904874 (accessed on 14 June 2025).
  9. Buçaj, Enver, and Kenan Idrizaj. 2025. The need for cybercrime regulation on a global scale by the international law and cyber convention. Multidisciplinary Reviews 8: 2025024. [Google Scholar] [CrossRef]
  10. Chaudhary, Ankur, Ritesh Rastogi, Aditee Mattoo, Punit Kumar, Tanvi Kumari, and Devansh Dubey. 2025. Generative Adversarial Networks (GANs). Generative AI: Disruptive Technologies for Innovative Applications, 29–55. [Google Scholar]
  11. Chesney, Bobby, and Danielle Citron. 2019a. Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review 107: 1753. [Google Scholar] [CrossRef]
  12. Chesney, Bobby, and Danielle Citron. 2019b. Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Affairs 98: 147. [Google Scholar]
  13. Daft, Richard L. 2015. Organization Theory and Design. Boston: Cengage Learning Canada Inc. [Google Scholar]
  14. Diakopoulos, Nicholas, and Deborah Johnson. 2021. Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society 23: 2072–98. [Google Scholar]
  15. Donaldson, Lex. 1995. American Anti-Management Theories of Organization: A Critique of Paradigm Proliferation. Cambridge: Cambridge University Press, vol. 25. [Google Scholar]
  16. Dsouza, Darren Steve, Ayman El Hajjar, and Hamid Jahankhani. 2024. Deepfakes in Social Engineering Attacks. In Space Law Principles and Sustainable Measures. Berlin and Heidelberger: Springer, pp. 153–83. [Google Scholar]
  17. Ezzeddine, Yasmine. 2024. Artificial Intelligence in Law Enforcement Surveillance: Citizen Perspectives, Resistance and Counterstrategies. Sheffield: Sheffield Hallam University. [Google Scholar]
  18. Farouk, Mohamed Adel, and Bassant Mourad Fahmi. 2024. Deepfakes and media integrity: Navigating the new reality of synthetic content. Journal of Media and Interdisciplinary Studies 3: 47–94. [Google Scholar] [CrossRef]
  19. FBI. 2024. FBI Warns of Increasing Threat of Cyber Criminals Utilizing Artificial Intelligence. Available online: https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence#:~:text=SAN%20FRANCISCO%E2%80%94The%20FBI%20San%20Francisco%20division%20is%20warning,sophisticated%20phishing%2Fsocial%20engineering%20attacks%20and%20voice%2Fvideo%20cloning%20scams (accessed on 14 June 2025).
  20. Genelza, Genesis Gregorious. 2024. A systematic literature review on AI voice cloning generator: A game-changer or a threat? Journal of Emerging Technologies 4: 54–61. [Google Scholar]
  21. George, A. Shaji, and A. S. Hovan George. 2023. Deepfakes: The evolution of hyper realistic media manipulation. Partners Universal Innovative Research Publication 1: 58–74. [Google Scholar]
  22. Hawdon, James. 2021. Cybercrime: Victimization, perpetration, and techniques. American Journal of Criminal Justice 46: 837–42. [Google Scholar] [CrossRef]
  23. Helmus, Todd. C. 2022. Artificial Intelligence, Deepfakes, and Disinformation. Rand Corporation. Available online: https://www.rand.org/content/dam/rand/pubs/perspectives/PEA1000/PEA1043-1/RAND_PEA1043-1.pdf (accessed on 14 June 2025).
  24. Internet Organised Crime Threat Assessment (IOCTA). 2024. Publications Office of the European Union. Available online: https://www.europol.europa.eu/publication-events/main-reports/internet-organised-crime-threat-assessment-iocta-2024 (accessed on 14 June 2025).
  25. Jheelan, Jhanvi, and Sameerchand Pudaruth. 2025. Using Deep Learning to Identify Deepfakes Created Using Generative Adversarial Networks. Computers 14: 60. [Google Scholar] [CrossRef]
  26. Kalleberg, Arne. L. 1996. Organizations in America: Analysing Their Structures and Human Resource Practices. Thousand Oaks: Sage Publications. [Google Scholar]
  27. Kanade, Vijay. 2022. What Is Machine Learning? Understanding Types & Applications. Austin: Spiceworks Inc. [Google Scholar]
  28. Kaur, Jaspreet, Kapil Sharma, and M. P. Singh. 2024. Exploring the Depth: Ethical Considerations, Privacy Concerns, and Security Measures in the Era of Deepfakes. In Navigating the World of Deepfake Technology. Hershey: IGI Global, pp. 141–65. [Google Scholar]
  29. Korshunov, Pavel, and Sebastien Marcel. 2018. Deepfakes: A new threat to face recognition? Assessment and detection. arXiv arXiv:1812.08685. [Google Scholar]
  30. Laffier, Jennifer, and Aalyia Rehman. 2023. Deepfakes and harm to women. Journal of Digital Life and Learning 3: 1–21. [Google Scholar] [CrossRef]
  31. Langworthy, Robert H. 1986. The Structure of Police Organizations. New York: Praeger New York. [Google Scholar]
  32. Lin, Leo S. F. 2022. Globalization of crime and digitized societies: A recent survey. In Evolution of Digitized Societies Through Advanced Technologies. Berlin and Heidelberger: Springer, pp. 153–63. [Google Scholar]
  33. Lin, Leo S. F. 2024. A Study on the Organizational Characteristics of Law Enforcement Agencies and their Manifestation in Technological Crime Investigation. Journal of Police Management 20: 191–214. (In Chinese). [Google Scholar]
  34. Lin, S. F., Duane Aslett, Geberew Mekonnen, and Mladen Zecevic. 2024. The Dangers of Voice Cloning and How to Combat it. Available online: https://theconversation.com/the-dangers-of-voice-cloning-and-how-to-combat-it-23992 (accessed on 14 June 2025).
  35. Macron, Tolu. 2025. Generative AI and Cybersecurity: Analyzing the Dual Use of Deepfake Technology for Threats and Defensive Measures. Available online: https://www.researchgate.net/profile/Tolu-Macron/publication/388323713_Generative_AI_and_Cybersecurity_Analyzing_the_Dual_Use_of_Deepfake_Technology_for_Threats_and_Defensive_Measures/links/6792cbc94c479b26c9b18872/Generative-AI-and-Cybersecurity-Analyzing-the-Dual-Use-of-Deepfake-Technology-for-Threats-and-Defensive-Measures.pdf?__cf_chl_tk=_kmCoP03rMrM1_pPncHQTTEsrea3aDPmeEexgLk_xaw-1751263022-1.0.1.1-UKYi4peAFAkVZHUye.gXAa8OtLWJkTCFNtrpg5PDkCs (accessed on 14 June 2025).
  36. Maguire, Edward R. 2003. Organizational Structure in American Police Agencies: Context, Complexity, and Control. New York: Suny Press. [Google Scholar]
  37. Mahlasela, Oyena, Errol Baloyi, Nokuthaba Siphambilie, and Zubeida C. Khan. 2024. Artificial Intelligence Impact on the Realism and Prevalence of Deepfakes. Available online: https://www.researchgate.net/profile/Errol-Baloyi-2/publication/384056978_Artificial_Intelligence_Impact_on_the_realism_and_prevalence_of_deepfakes/links/66e7ec96dde50b3258772247/Artificial-Intelligence-Impact-on-the-realism-and-prevalence-of-deepfakes.pdf (accessed on 14 June 2025).
  38. Majchrzak, Adam. 2023. Russian disinformation and the use of images generated by artificial intelligence (deepfake) in the first year of the invasion of Ukraine. Media Business Culture 2023: 42–55. [Google Scholar]
  39. Maras, Marie-Helen, and Alex Alexandrou. 2019. Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. The International Journal of Evidence & Proof 23: 255–62. [Google Scholar]
  40. Matey, Gustavo Díaz. 2024. Non-state Actors and Technological Revolution: Organized Crime and International Terrorism. In International Relations and Technological Revolution 4.0: World Order, Power and New International Society. Berlin and Heidelberger: Springer, pp. 89–106. [Google Scholar]
  41. Mwangi, Phillip. 2024. Cybersecurity Threats and National Security in the Digital Age. American Journal of International Relations 9: 26–35. [Google Scholar]
  42. Nowacki, Jeffrey, and Dale Willits. 2020. An organizational approach to understanding police response to cybercrime. Policing: An International Journal 43: 63–76. [Google Scholar] [CrossRef]
  43. Oza, Priyanshi, Nirjharaa Patel, and Ayushi Patel. 2024. Deepfake Technology: Overview and Emerging Trends in Social Media. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4981040 (accessed on 14 June 2025).
  44. Qudus, Lawal. 2025. Cybersecurity Governance: Strengthening Policy Frameworks to Address Global Cybercrime and Data Privacy Challenges. Available online: https://journalijsra.com/sites/default/files/fulltext_pdf/IJSRA-2025-0225.pdf (accessed on 14 June 2025).
  45. Robbins, Stephen P. 1990. Organization Theory: Structures, Designs, and Applications, 3rd ed. Chennai: Pearson Education India. [Google Scholar]
  46. Shalevska, Elena. 2024. Human Rights in the Age of AI: Understanding the Risks, Ethical Dilemmas, and the Role of Education in Mitigating Threats. Journal of Legal and Political Education 1: 38–52. [Google Scholar] [CrossRef]
  47. Suomala, Jyrki, and Janne Kauttonen. 2022. Human’s intuitive mental models as a source of realistic artificial intelligence and engineering. Frontiers in Psychology 13: 873289. [Google Scholar] [CrossRef]
  48. Tisdell, Elizabeth J., Sharan B. Merriam, and Heather L Stuckey-Peyrot. 2025. Qualitative Research: A Guide to Design and Implementation. Hoboken: John Wiley & Sons. [Google Scholar]
  49. Willits, Dale, and Jeffrey Nowacki. 2016. The use of specialized cybercrime policing units: An organizational analysis. Criminal Justice Studies 29: 105–24. [Google Scholar] [CrossRef]
  50. Yin, Robert K. 2017. Case Study Research and Applications: Design and Methods. Thousand Oaks: Sage Publications. [Google Scholar]
  51. Zhao, Jihong, Ling Ren, and Nicholas Lovrich. 2010. Police organizational structures during the 1990s: An application of contingency theory. Police Quarterly 13: 209–32. [Google Scholar] [CrossRef]
Notes
25. See https://www.fbi.gov/about (accessed on 14 June 2025).
37. See https://fbijobs.gov/locations (accessed on 14 June 2025).
51. The fast-evolving nature of AI technologies has had both positive and negative effects on human rights and ethical standards. Positive effects include broader access to information and services, wider communication and content creation, and greater efficiency in analysing legal documents. Negative effects include privacy risks, bias and discrimination, and gaps in accountability and transparency. Upholding ethical standards therefore requires fairness, transparency, regulation, and public participation and review mechanisms.
53. Unlike the FBI, Europol officers cannot arrest citizens or initiate investigations.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
