
Exploring and Understanding Law Enforcement’s Relationship with Technology: A Qualitative Interview Study of Police Officers in North Carolina

Department of Philosophy and Religious Studies, North Carolina State University, Raleigh, NC 27695-8103, USA
Department of Public Administration, North Carolina State University, Raleigh, NC 27695-8102, USA
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3887;
Submission received: 3 February 2023 / Revised: 10 March 2023 / Accepted: 13 March 2023 / Published: 18 March 2023



Featured Application

The study draws on qualitative analysis to (1) highlight potential societal benefits and the ethical concerns of artificial intelligence (AI) technologies in policing, and (2) inform the responsible design and integration of AI technologies in law enforcement.


Integrating artificial intelligence (AI) technologies into law enforcement has become a concern of contemporary politics and public discourse. In this paper, we qualitatively examine perspectives on AI technologies drawn from 20 semi-structured interviews with law enforcement professionals in North Carolina. We investigate how integrating AI technologies, such as predictive policing and autonomous vehicle (AV) technology, impacts the relationships between communities and police jurisdictions. The evidence suggests that police officers maintain that AI plays a limited role in policing but believe the technologies will continue to expand, improving public safety and increasing policing capability. Conversely, police officers believe that AI will not necessarily increase trust between police and the community, citing ethical concerns and the potential to infringe on civil rights. It is thus argued that the trends toward integrating AI technologies into law enforcement are not without risk. Policymaking guided by public consensus and collaborative discussion with law enforcement professionals must aim to promote accountability through the responsible design of AI in policing, with the end state of providing societal benefits and mitigating harm to the populace. Society has a moral obligation to mitigate the detrimental consequences of fully integrating AI technologies into law enforcement.

1. Introduction

The proliferation of artificial intelligence (AI) in the public and private sectors, particularly in the domain of law enforcement, has generated substantial controversy based on ethical concerns and media attention [1,2,3,4,5]. Specifically, law enforcement agencies in the United States use AI technologies for predictive policing, surveillance, and automation. Yet, despite the considerable growth of research examining the contributions of AI technologies to societal development and sustainability [6], no United States-based studies to date have explored police officer perspectives on the application of AI in the domain of law enforcement and its ethical considerations (see Appendix A for additional details on the literature search for studies). Moreover, law enforcement professionals and citizens hold distinctly different views about the role of police in American society and the legitimacy of practices used by police to carry out their mandate [7,8]. Accordingly, to create consensus and improve democratic accountability, there is a need to holistically evaluate police officer perspectives on the role of AI technology and its increasing deployment in policing.
The traditional role of law enforcement in the 21st century United States has been one of crime control and order maintenance [9]. The United States Bureau of Justice Statistics expands on this description, stating that law enforcement agencies and their employees are responsible for enforcing laws, maintaining public order, and managing public safety. To carry out these responsibilities, police officers must uphold every aspect of their profession to promote public safety and maintain the rule of law. Doing so requires a sense of ethics and the capacity for moral decision-making to navigate the grey areas of law enforcement and complex situations involving life and death. In addition, certain ideals and character traits enable police officers to act in ways that promote public safety and uphold the rule of law. Nevertheless, as technology advances, AI can increasingly enhance and, in some instances, replace various human elements of policing processes, procedures, and operations in real-world situations [10,11].
Police increasingly apply advances in AI technologies to enhance their capacities and capabilities to control crime and maintain order [4]. In particular, law enforcement agencies in the United States use AI-enabled algorithms and machine learning for predictive policing, surveillance, and automation of certain tasks (e.g., reading license plates). Moreover, predictive policing and big data surveillance are not novel forms of police work. Instead, from a historical context, they are simply a sociotechnical extension of what has been long practiced [10].
Unfortunately, this technology also has the potential to do considerable harm, creating an additional, albeit distinct, set of societal and moral implications for 21st century policing. For instance, can predictive policing applications offer a more impartial approach to reducing crime than their human counterparts? These programs claim to be impersonal and objective in their application, yet they rely on humans to create the computer algorithms and train the machine learning systems that underpin their function [5,10]. Furthermore, the data-driven models used in these systems may be incomplete, biased, and may disproportionately target underrepresented populations [10]. The lack of transparency and explainability in the operation and decision-making processes of AI technologies employed in law enforcement is also problematic [12]. How can law enforcement professionals be held accountable when even experts struggle to explain the decision-making processes that AI technologies use? This difficulty compounds and creates ancillary trust issues between police and the public they serve. Additional implications stemming from the risks of incorporating AI technologies into policing include privacy and fairness concerns and accountability challenges [10,13]. Therefore, efforts to implement ethical behavior in AI should govern and guide the design of artificial-intelligence-based components of policing programs [3].
In order to understand law enforcement's relationship with AI technologies and their societal and moral implications, we qualitatively analyze perspectives on AI technologies drawn from semi-structured interviews with law enforcement professionals in North Carolina. The main goals of this study are to explore (a) police officer views of law enforcement in the 21st century United States and (b) police officer views on artificial intelligence technologies, including self-driving vehicles, and to examine (c) their combined societal and ethical implications. In addition, to expand knowledge on the responsible implementation of ethical police practices using AI technologies and their impact on society, the current study aims to synthesize the police officer views with the background literature on AI ethics. The study identifies three central themes: (1) AI-enhanced law enforcement necessitates consideration of community relationship dynamics, (2) principled ethics of police practices using AI technologies, together with law enforcement values and diversity factors, are critical for the responsible design of AI technologies, and (3) algorithmic policing technologies have the potential to create perceived societal benefits, but not without risk of causing harm related to civil rights and eroding democratic accountability of policing.
Thus, we argue that the trends toward integrating AI technologies into the law enforcement domain are not without risk and have the potential to erode critical normative and legal safeguards around civil rights. Ethicists have long warned of the potential harms that accompany the use of new technologies, especially those which intrude on the privacy expectations of individuals [14,15]. Police body-worn cameras, intended to improve the transparency of interactions with community members, may disadvantage victims of sexual or domestic violence as they struggle to recount details of the encounter [16]. As an antidote, the 2015 report of the President's Task Force on 21st Century Policing recommends greater community oversight over the adoption, implementation, and evaluation of new technologies [17]. Therefore, society has a moral obligation to implement AI-enhanced law enforcement policies that uphold the values and diversity factors deemed crucial to fostering positive police and community relationships and to design ethically principled AI technologies for police work.

2. Materials and Methods

The study applies a cross-sectional design, and the methodological approach used was qualitative content analysis of interviews with law enforcement professionals [18].

2.1. Participants and Setting

As part of a more extensive study on the ethical and social impacts of artificial intelligence technologies, a sample of 20 participants was recruited (North Carolina State University Institutional Review Board approval No. 20276). For this sample, the inclusion criteria required that participants be law enforcement professionals currently serving in North Carolina at the state or local level. The participants were initially recruited through the Law Enforcement Executive Program (LEEP) and Administrative Officers Management Program (AOMP), North Carolina State University programs that provide leadership education and management training for their students. This approach enabled building rapport with a traditionally difficult-to-recruit population, despite their role as public officials serving their communities [19].
After the initial recruitment of participants, a convenience sampling method was employed that targeted police officer social networks to provide access to potential participants. Under this approach, the sample size snowballs as each additional participant recruits further participants. Researchers frequently use this sampling method when they need to study a population whose members are challenging to reach or are viewed as part of a hidden population [20]. Additional details on sample demographics can be found in Table 1 below.

2.2. Data Collection

Data were collected through semi-structured interviews performed from November 2021 to March 2022. Semi-structured interviewing was chosen as a data collection method to strike a balance between naturalism and control. Imposing greater control, such as restricting eligibility to participants with only a high school education, could influence the data in ways that compromise the representativeness of the subsequent analysis. In essence, implementing strict controls could inhibit the qualitative discovery of critical information and add another layer of complexity in accessing an already hidden population.
The planned duration of each interview was 60 min, although interviews ranged between 50 and 70 min, and each was conducted using the cloud-based, secure video platform Zoom. Upon accepting the interview request, participants were given verbal and written information about the study and told that they would receive a $60.00 USD gift card for their time; once informed consent was obtained, an appointment was scheduled for an interview. The goal of providing a small monetary incentive was to reduce the rate of recruitment failure given the hidden population dynamics of an already challenging study population [20] and to ensure that participants incurred no costs, providing a revenue-neutral experience. The interviewer had prior background experience working with law enforcement professionals and AI technologies, facilitating an interview protocol process that helped guide each interview and make efficient use of the allotted time. The participants were informed at the beginning of the interview that they could opt out at any time without repercussions. Additionally, the interviewer briefed all 20 participants that the interview data were confidential, and all participants consented to be audio-recorded.
The interviews were transcribed using an intelligent verbatim transcription technique, reviewed for de-identification of any personal data, and coded using the qualitative analysis software MAXQDA. A data analysis protocol was developed to manage the results of the 20 interviews. Initially, numerous dimensions were considered for data analysis, but similar to prior work from our group (e.g., [21]), the data were coded using an abductive inference approach to qualitative research. First-cycle coding consisted of establishing structural codes based on the interview questions and open (initial) coding. Second-cycle coding consisted of axial coding to facilitate the development of a thematic framework and to identify and map the overarching themes in the 20 interviews (see Figure 1). Two researchers thoroughly reviewed a coding pilot subset of four transcripts to establish intercoder reliability. Percent agreement between coders was 91.89%, with a Kappa (RK) of 0.93, well within the acceptable rates of consensus. Coders discussed coding conflicts and refined the definitions of codes, and the remaining transcripts were then analyzed independently by one researcher.
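The intercoder reliability figures above were produced with MAXQDA's built-in agreement tools; the kappa labeled "RK" is MAXQDA's chance-corrected coefficient, which is computed differently from the classical Cohen's statistic. As an illustration only, the following minimal sketch shows how two coders' segment-level code assignments can be compared using percent agreement and standard Cohen's kappa (the code labels and data here are hypothetical, not drawn from the study):

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of coding decisions on which two coders agree."""
    assert len(codes_a) == len(codes_b)
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement corrected for chance, based on
    each coder's marginal code frequencies."""
    n = len(codes_a)
    po = percent_agreement(codes_a, codes_b)          # observed agreement
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Expected agreement if both coders assigned codes independently
    # according to their own marginal frequencies.
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical codes assigned by two coders to eight transcript segments
a = ["role", "role", "ethics", "AI", "AI", "trust", "role", "ethics"]
b = ["role", "ethics", "ethics", "AI", "AI", "trust", "role", "AI"]
print(percent_agreement(a, b))          # → 0.75
print(round(cohens_kappa(a, b), 3))     # → 0.66
```

Because Cohen's kappa subtracts chance agreement, it is bounded above by the observed agreement, which is one reason a kappa reported higher than percent agreement (as with MAXQDA's RK coefficient here) signals a different underlying formula.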
Since the subject pool was drawn from participants attending university-based executive training programs, respondents report longer years of service (85% over ten years) and more senior ranks (55% supervise other personnel) than would be expected from a random distribution of sworn law enforcement officers. In addition, female officers and military veterans are slightly overrepresented in the sample (20% and 30%, respectively). Nationally in the United States, 12.3% of state and local law enforcement officers are female [22], and 19% of law enforcement officers are military veterans [23].

3. Results

In the following sections, the results from the qualitative analysis are synthesized into eight identified categories, three domains, and three overarching themes as presented in Figure 1.

3.1. Role of Law Enforcement

Participants were well aware of their law enforcement duties and responsibilities but provided varied responses on their perspectives on the role of law enforcement in the 21st century United States. The requirement to serve and protect was mentioned often [40%, n = 8], with participants explaining the need to protect citizens, protect property, and in two instances, act as a mental health counselor for the public when the situation called for it. Managing public safety [30%, n = 6], maintaining order [20%, n = 4], and crime control [25%, n = 5] were mentioned somewhat less frequently among the participants. Additionally, some officers identified community policing [20%, n = 4] as an essential aspect of law enforcement’s role and the need to engage their communities to be able to protect and serve them. The following excerpts from the participants are descriptive of the two primary roles mentioned most often in the study.
Serve and Protect: “Well, in my opinion, I mean, we are still on the front lines of protecting, but we also forget that second part of serving, right? So, most agencies have it somewhere in their motto or code of ethics or something like that, serving and protecting.”
(Police Officer 14)
Managing Public Safety: “My particular role now is to keep the highways safe, keep them clear of obstruction, assist motorists, whether they have struck something in the road or someone else has struck them, and remove impaired drivers from the road. I do my little part in the big wheel of law enforcement in general.”
(Police Officer 7)

3.2. Qualities of Law Enforcement Professionals

According to the participants, there are numerous personal and professional qualities expected of police officers. Nevertheless, there was some agreement among the participants on several critical qualities that comprise the moral character of the ideal law enforcement professional. The respondents consistently emphasized the virtues of integrity [50%, n = 10] and honesty [40%, n = 8], with empathy [25%, n = 5] and loyalty [20%, n = 4] mentioned somewhat less frequently, and compassion discussed by two participants [10%, n = 2]. In addition, several participants discussed the need to think quickly on their feet and make split-second decisions in stressful situations as a law enforcement quality [15%, n = 3]. Good moral character [10%, n = 2], as a quality over and above integrity and honesty, was mentioned by only two participants, with one participant emphasizing knowing the difference between right and wrong and using discretion when making decisions. One participant emphasized the need for patience during citizen/officer encounters as necessary to mitigate rash decision-making. The following excerpts from the study are descriptive of the two qualities mentioned most often and highlight the criticality of split-second decision-making in policing.
Integrity: “Of course, integrity, based on what we do, I mean, integrity is something that is required, and a lot of it has to do with the fact that we’re dealing with a lot of individuals who would be easy to manipulate or take advantage of, or steal from, and having that integrity aspect of characteristics would put the officers in a position where they know what they’re doing could be harmful to the individuals, and blatantly obvious immoral behavior such as stealing, hurting others, things like that. The definition I had one time of integrity was doing the right thing when nobody’s looking, but I disagree with that. Mine is, doing the right thing, not caring who’s looking, like you know, got to do the right thing whether they’re looking or not. So, I think integrity is a big deal.”
(Police Officer 5)
Honesty: “You have to be honest because the citizens have to trust you. You are put in a position where you are the voice of the disenfranchised, those who cannot speak. You are there for the victims and the witnesses who are scared or cannot speak for themselves, so you have to do it for them. And if you are not honest, then you cannot do that.”
(Police Officer 18)
Split-Second Decision-Making (Indecisiveness): “And with decision-making […] some people have a hard time making a decision in a split-second, or making a decision under duress, under force, and that will get people killed if they hesitate to try to make a decision.”
(Police Officer 1)

3.3. Diversity in Law Enforcement

When participants described the importance of diversity in law enforcement, a majority of study participants explicitly stated that diversity was very important or important [70%, n = 14]. In comparison, six participants [30%, n = 6] did not offer an opinion on its importance. According to the participants, police departments should be demographically representative of the communities they serve. Additionally, diversity in law enforcement is viewed as creating recruitment challenges [20%, n = 4], specifically concerns about balancing diversity with finding qualified candidates who mirror the community. In terms of gender, one participant expressed the view that law enforcement is a male-dominated profession, which creates many challenges for women. In contrast, another participant described a positive experience for women in the profession. The following excerpts from the study demonstrate the importance the study participants place on community representation and on balancing diversity with qualifications.
Community Representation: “Yeah, it’s important, you want an agency that looks like the people that we serve. So, if you have a predominant demographic, predominant race […] then that should probably […] be the predominant race or demographic in that agency, because if we serve the community, so we must be part of the community. So, we should look like the community.”
(Police Officer 18)
Balancing Diversity with Qualifications: “Well, I am of the opinion that [we should recruit] the best person for the job […] regardless of the background. So, I do want to put that out there. I do not believe we should be hiring just because, or promoting, if you want to say that, promoting or hiring or anything along that, just because of racial or ethnicity issues. But getting back to your question, some of the problems that I think could arise is that we can get one-dimensional. Let’s say we have very little diversity in one agency. Their experience levels and their background levels are not going to be as vast and expansive as it would if we are able to bring in different backgrounds and different ethnicities and different genders.”
(Police Officer 14)

3.4. View of Artificial Intelligence (AI) Technologies

Law enforcement professionals had mixed sentiments when presenting their opinions on AI technologies. Additionally, several participants [n = 5] had to be prompted by the interviewer about the meaning of AI. The interviewer provided AI application examples to these study participants so that they could comprehend the context behind the interview question, suggesting that a knowledge gap exists in policing regarding the understanding of AI. Overall, 50 percent of the sample expressed a positive view of AI technologies, 10 percent expressed a negative view, and 40 percent maintained an ambivalent stance. Several participants questioned whether AI tools are reliable enough to justify their use in policing and emphasized the need for additional research, prompting an ambivalent, fence-sitting stance (see Table 2 for an example excerpt from the study that captures ambivalent views of AI technologies).

3.5. View of Self-Driving Technology

Law enforcement professionals generally presented more mixed sentiments about self-driving technology than about other types of artificial intelligence. Overall, 30 percent of the sample expressed a positive view of self-driving technology, 35 percent expressed a negative view, and 35 percent held an ambivalent view. Table 2 provides sample quotes illustrative of the views of AI technologies and self-driving technology, and Figure 2 presents a quantitative comparison of the two sets of views.

3.6. Role of Artificial Intelligence in Policing

The range of AI applications in law enforcement and their usage varied considerably across the sample. The AI-based policing technologies most frequently referenced were predictive policing, facial recognition, gunshot detection, license plate readers, and crime analysis software. In terms of how AI policing technology would enhance law enforcement capacity and capability, predictive policing and surveillance technologies were described by participants as tools that could increase police intelligence capabilities, make more efficient use of human resources, and increase police responsiveness to calls. Three participants (15%) cited all the policing technologies referenced above as technologies with the potential to reduce the severity or probability of police officer and citizen injuries and fatalities during encounters. Of particular note, several participants [n = 3] commented on the track record of gunshot detection technology, referencing ShotSpotter (accessed on 1 February 2023) as a technology that improves response times, saves lives, and helps to bridge the gap where citizens might not report gunshots fired in their communities, by making law enforcement aware of potential incidents. Other participants [n = 3] expressed the opinion that the technology has no effect on crime, is too expensive for their departments to employ, and does not work in rural areas. Separately, four participants (20%) expressed a lack of familiarity with AI policing technologies because, to their knowledge, such technologies are not employed in their jurisdictions. One participant mentioned that society is not ready for law enforcement to use AI policing technologies in their communities.

3.7. Societal Impacts of Self-Driving Technology

Self-driving technology has both beneficial and harmful implications for communities and law enforcement, as shown in Figure 3. According to the participants, autonomous vehicles have the potential to increase public safety and reduce traffic offenses or infractions for the general public. Additionally, self-driving technology could reduce distracted driving and instances of driving while intoxicated (DWI), promoting traffic safety and mitigating driver error. In contrast, fears of autonomous vehicles malfunctioning or operating incorrectly, including the fear of hacking, were a steady minority concern [20%, n = 4] that could affect public safety and trust in the technology. However, the consensus was that as the technology improves, it will become less of a concern. One participant suggested that autonomous vehicles could increase anxiety by taking away the time individuals use to decompress from stress during the physical act of manually driving a vehicle.
Moreover, accountability concerns regarding the assignment of fault and responsibility for accidents were an issue. According to some of the participants who emphasized accountability concerns, the vehicle owner retains responsibility for any incidents involving autonomous vehicle accidents or infractions, citing the obligation to maintain awareness of surrounding traffic and to override the autonomous control system in emergencies. In one instance, a participant stated that the autonomous vehicle itself bears responsibility. Alternatively, another participant claimed that responsibility for autonomous vehicle accidents or violations is situationally dependent. The following quotes from the study highlight the participants' views on the societal impacts of self-driving technology.
Public Safety (Reduce Driver Operator Error): “I am open to it because a very, very high percentage of accidents are based on driver error. So very little is vehicle problems, and very little is environmental problems. The rest of it is going to be on the driver. So, I would think that this will help […] I have been to a lot of accidents and the overwhelming majority of it is because of an error on the part of the driver. So, I am hopeful that this will actually help and be beneficial to safety.”
(Police Officer 14)
Accountability Concerns: “I would say whoever is sitting technically in the driver’s seat because I would look at it as like a plane, if you are flying a plane that is on autopilot and the autopilot messes up, you as the pilot have to step in and take over the plane. So, if you are in a car that has self-driving technology and it starts messing up, you must step in and take over. So, you still must be paying attention, you cannot just hit auto drive and take a nap.”
(Police Officer 18)

3.8. Law Enforcement Impacts of Self-Driving Technology

Widespread application of self-driving or AV technology creates novel challenges for law enforcement officers, as shown in Figure 4. Participants described traffic enforcement as problematic, citing concerns about assigning culpability during traffic stops and accidents, public policy implications, insurance coverage issues, and deficiencies in motor vehicle law enforcement training for police officers encountering autonomous vehicles. Conversely, participants described how self-driving technology could increase law enforcement capacity to concentrate on other core policing tasks, reallocating human resources to more pressing law enforcement issues and increasing organizational efficiency. As a way of explicitly enhancing police capacity, it was noted that self-driving technology could decrease police response times to incidents through improved navigation. Alternatively, it was suggested that the technology could increase police response times, a view based in part on a lack of practical experience, as study participants did not know or fully understand the capabilities of autonomous vehicles for policing. In addition, one participant expressed concern that officer use of self-driving technology may cause driving skills to atrophy. The following quotes from the study highlight the participants' views of self-driving technology.
Increase Law Enforcement Capacity (Reallocate Human Resources): “Well, we investigate probably 125 motor vehicle collisions every single month. So mathematically, you are talking about, well over a thousand wrecks a year. So, if you could substantially reduce those crashes, that’s a lot of man hours that officers are not having to investigate those crashes […] Instead of them investigating crashes, they are doing something else.”
(Police Officer 16)
Criminal Justice Challenges of AVs: “[…] I am still going to charge them. I mean, they are in the vehicle, they are supposed to be at least in some kind of control of the vehicle, whether they are touching the steering wheel or hitting the gas or not. Obviously, I do not know how that is going to work when it comes time to convict him. I mean, I am sure somebody will come up with some kind of defense where it is not the person’s fault, it is somebody else’s fault of course.”
(Police Officer 3)
Atrophy of Police Driving Skills: “[…] where you run into issues is if someone does not drive because they are using that program, that car all the time, and then all of a sudden, they have to drive, I could see that could cause problems. Especially if you are doing some type of high-risk maneuver.”
(Police Officer 12)

4. Discussion

The findings in this study suggest that although AI technologies are becoming ubiquitous in society, their role in law enforcement from the perspective of police officers varies considerably depending on general familiarity with the concept of AI and how extensively individual jurisdictions employ these technologies in their communities. In addition, this study reveals that integrating technological advancements, including autonomous vehicles, could impact the relationships between communities and police jurisdictions from a public safety and traffic enforcement perspective. Building on these points, this study offers critical considerations for developing ethics and procedural training for police officers who employ and increasingly interact with artificial intelligence technologies.
At the same time, there are objective prerequisites and reasons for applying AI technologies in the course of law enforcement work. They are conditioned upon the fact that modern policing is required to address many issues: reducing crime, optimizing law enforcement agencies, improving the efficiency of the resources that support law enforcement activities, increasing public confidence in law enforcement, and reducing corruption. All the participants in this study echoed these requirements in their descriptions of the role of policing in general and when considering how AI impacts law enforcement. The police officers' perceptions captured in this study raise the question: could AI create congruency with established norms and rules set forth by public policy?
Nevertheless, police officers believe that AI technologies perform a limited role in the law enforcement domain, with consensus among the participants that the technologies will expand and become widespread in the next 5 to 10 years. The evidence from the findings indicates that police officers think AI technologies positively impact law enforcement, improve public safety, reduce crime, and increase policing capability and capacity. Conversely, the evidence suggests that law enforcement professionals believe AI technologies will not necessarily increase police and community trust. Ethical concerns raised include autonomy, privacy, affective empathy, and the potential to infringe on civil rights if the technology is not used responsibly.

4.1. The Intersection of Law Officer Qualities with Artificial Intelligence

High-profile instances of police violence and discrimination [24,25,26] compound the skepticism and distrust of a public that is generally not comfortable with AI, creating obstacles to developing ethical reasoning about policing AI technologies [3,4,27]. This study expands upon previous work that examined the ethical implications of AI by establishing which law enforcement qualities matter most to police officers, extending research on the morality of AI and on what would constitute virtuous characteristics for policing AI [28]. Responsible AI would incorporate the virtues of integrity, honesty, loyalty, and compassion into the design of its non-human agency to reduce mistrust and build perceptions of competence in law enforcement applications. Furthermore, as the technology evolves, AI can progressively enhance the reliability and performance of policing practices [10,11]. At least in principle, artificial intelligence technologies have the potential to support moral evaluations with greater accuracy and precision in situations where police officers must make split-second decisions. For instance, parallel efforts apply big data and artificial intelligence to rational decision-making in the medical field to enhance the accuracy of medical protocols [29]. In turn, the added reliability and performance of AI technologies relative to their human counterparts could reduce aversion to implementing these technologies in communities, diminishing ambivalent and negative sentiments [27]. These benefits position AI policing technologies as socially desirable public safety goods, reducing the stigma of the sociotechnical extension of law enforcement and fostering community engagement.

4.2. Artificial Intelligence (AI) Technologies as Public Safety Goods

The evidence presented in this study suggests that AI technologies that enhance policing could be deployed more effectively as public safety goods, with the well-being of community members central to the demand for such implementation. Moreover, if emerging technologies such as predictive policing, facial recognition, surveillance, and social media scraping and monitoring, which generate massive volumes of data, are well regulated and carefully implemented, AI could detect criminal activities that would otherwise go unnoticed and facilitate crime prevention and public safety [30]. AI technologies deployed as public safety goods can potentially increase community confidence in policing and the criminal justice system. However, study participants expressed concerns about the risks of algorithmic bias (diversity and representativeness challenges), the difficulty of replicating the human factor of empathy, and privacy and trust. In addition, fairness, accountability, transparency, and explainability challenges remain, as presented in the broader academic debate [1,12].
Artificial intelligence has the potential to bridge or hamper police and community engagement and relationships. To serve as a bridge for police and community engagement, AI policing technologies must be fair, accountable, transparent, and explainable. If algorithmic biases are not reined in, they risk creating a feedback loop that disproportionately targets minority and low-income communities, recreating the same public perceptions and problems of discrimination associated with human officers. Privacy and safety protocols, as well as fairness and non-discrimination regulations, should be put in place to protect both law enforcement professionals working with artificial intelligence and members of the public, whether they are direct beneficiaries of the AI or suspects targeted on the basis of historical crime data and datasets reflecting higher crime rates [1]. Moreover, any use of AI policing technologies must respect due process and the presumption of innocence while avoiding policing that discriminates against selected populations. A culture of accountability must be established at the institutional and organizational level that transparently shows how AI technologies in the context of police work develop conclusions and reach decisions. Finally, AI policing technologies must be explainable, at least in general terms, in how decisions are reached [31]. Law enforcement professionals should, at a minimum, have a broad understanding of the AI technologies used in their jurisdictions and in the criminal justice system as a whole. As these technologies become more prevalent in law enforcement, procedural training for police officers who employ them should begin with basic police officer training to close the knowledge gap and foster the explainability principle.
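The feedback-loop risk can be illustrated with a deliberately simplified simulation. This is a hypothetical sketch, not drawn from the study data or from any deployed system: it assumes patrols are dispatched wherever the most incidents have been recorded and that incidents are recorded only where patrols are present, so a historically biased record locks in over-policing of one district even when true crime rates are identical.

```python
def simulate_greedy(true_rates, recorded, steps=20):
    """Toy model of a predictive-policing feedback loop.

    Each step, the single patrol is dispatched to the district with the
    most recorded incidents, and new incidents are recorded only in the
    patrolled district (observation depends on police presence).
    """
    visits = [0] * len(recorded)
    for _ in range(steps):
        # dispatch to the district with the largest historical record
        target = max(range(len(recorded)), key=lambda i: recorded[i])
        visits[target] += 1
        # only the patrolled district accumulates new recorded incidents
        recorded[target] += true_rates[target]
    return visits

# Two districts with identical true crime rates but biased historical records:
# district 0 receives every patrol, and district 1's record never grows.
visits = simulate_greedy(true_rates=[1.0, 1.0], recorded=[60.0, 40.0])
```

Under these assumptions, the initial 60:40 disparity is never corrected, mirroring the concern that biased training data can reproduce discriminatory patterns; real predictive-policing systems are far more complex, but the lock-in mechanism is the same in kind.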
Moreover, AI technologies should not be so incomprehensible that the public cannot determine their use as a public good to promote public safety [32]. Figure 5 conceptualizes the findings from the study and from the background literature through the responsible design of AI policing technologies that addresses ethical and societal concerns.

4.3. Self-Driving (Autonomous) Vehicles: Implications for Society and Policing

Over the last few years, self-driving vehicle research and real-world deployment have increased significantly [33], providing an opportunity to examine the ethical implications of artificial intelligence for society and for law enforcement as an institution [34]. The results of our study suggest that police officers generally view self-driving technology less favorably than other AI technologies. This view may be attributed to accidents involving self-driving vehicles, uncertainty regarding traffic enforcement, and overall judgments of moral acceptability that biased some study participants. Nevertheless, these unfavorable views underscore the need to resolve liability and culpability between the human owners of vehicles and manufacturers as the technology becomes more widely available and affordable for the public [33]. Future research exploring neurocomputational ethics applications, such as the agent–deed–consequence (ADC) model of moral judgment developed by Dubljević and colleagues [34,35], could offer a path for implementing an ethics code into AI that improves upon currently available single-focus approaches and helps mitigate the traffic enforcement and criminal justice challenges noted by the study participants. Of note, the ADC model applies the moral theories of virtue ethics, deontology, and consequentialism to the agent, deed, and consequence components, respectively. See Dubljević 2020 for discussion and application of the ADC model in autonomous vehicles [34].
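As a hedged illustration of how the ADC model's three components could be operationalized in software, the sketch below scores each component separately and combines them. The numeric scales, equal weights, and linear aggregation are our illustrative assumptions, not part of the published model, which specifies the components rather than any particular aggregation rule.

```python
from dataclasses import dataclass

@dataclass
class ADCInput:
    agent: float        # virtue-ethics appraisal of the agent's character/intent, in [-1, 1]
    deed: float         # deontological appraisal of the action itself, in [-1, 1]
    consequence: float  # consequentialist appraisal of the outcome, in [-1, 1]

def adc_judgment(x, weights=(1.0, 1.0, 1.0)):
    """Combine the three ADC components into one moral-acceptability score.

    The weighted average is an illustrative assumption for this sketch;
    see Dubljević 2020 [34] for the model itself.
    """
    wa, wd, wc = weights
    return (wa * x.agent + wd * x.deed + wc * x.consequence) / (wa + wd + wc)

# e.g., an autonomous vehicle making an illegal swerve (negative deed)
# with good intent (positive agent) that avoids a collision (positive consequence)
score = adc_judgment(ADCInput(agent=0.8, deed=-0.5, consequence=0.9))
```

Such a decomposition makes each moral dimension inspectable on its own, which aligns with the explainability requirements discussed above.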
In contrast, self-driving vehicles have the potential to increase law enforcement capacity and capabilities. For instance, with fewer officers needed in traffic units, police departments could allocate human resources to other police work, such as searching for missing persons or responding to emergency calls. Response times may also improve through enhanced navigation and automated driving. Additionally, autonomous driving could free police officers' cognitive faculties to develop more suitable and feasible courses of action for incidents, reducing the severity and probability of injury or loss of human life.

5. Policy Implications

Our qualitative study provides insight into near- and far-term trends in AI technologies and the associated policy implications as AI continues to expand into the domain of law enforcement and diffuse into the communities it serves. Conducted across various law enforcement departments in North Carolina, this study provides a first snapshot of law enforcement's relationship with AI. When coalesced with other research [1,2,3,4,5], the findings suggest that premature deployment of AI technologies can aggravate existing biases and discrimination or violate data privacy and protection practices, infringing on civil rights. Future policy formulation guided by social innovation [36] and public consensus must aim to promote the accountability of law enforcement through the responsible design of AI in policing, as shown in Figure 5, with the end state of providing societal benefits and mitigating harm to the populace. Additionally, agenda setting at the institutional level should mandate the development and adoption of ethical and procedural training standards for law enforcement agencies that currently employ or intend to employ AI technologies for policing. As with any novel technology, broad public discussion of the downstream effects of widespread implementation needs to be facilitated before crafting detailed policy proposals [37].

6. Limitations

There are limitations to consider when interpreting the results of the study. In particular, using a snowball strategy to recruit study participants can create problems of representativeness and sampling [20]. Specifically, convenience sampling with a snowball strategy could bias selection toward law enforcement officers who have preexisting relationships with other officers, conceivably limiting the sample's external validity. To address this weakness in the recruitment strategy, the study emphasized diversity across police officers' roles, gender, years of experience, and jurisdictions during recruitment. Additionally, the study used the professional networking platform LinkedIn to recruit several participants [n = 3], and the study author gave two presentations to law enforcement officers enrolled in the Law Enforcement Executive Program (LEEP) and the Administrative Officers Management Program (AOMP) to recruit participants and mitigate selection bias.
A second limitation is that the study imposed no prior AI or AI-specific experience criterion and no requirement regarding the size of the city or county in which officers serve. The specific geographical setting could potentially affect the results (i.e., relative exposure to certain technologies may differ for officers from larger cities in contrast to rural areas). However, imposing additional constraints when using convenience sampling with an already hidden population would have significantly increased participant recruitment challenges. Moreover, applying an AI experience criterion would have limited the qualitative data and prevented the study from uncovering disparities in the breadth and depth of law enforcement's understanding of AI technologies and their implications.

7. Conclusions

The expansion of AI into the public, private, and government sectors has significantly contributed to humanity's advancement. For better or worse, no technology is without its ethical and social implications, and as such, the incorporation of AI technologies into law enforcement has gained prominence at a controversial time when instances of police violence have garnered increased media scrutiny and triggered public protests. This study investigated how integrating technological advancements, including gunshot detection, facial recognition, crime prediction, and autonomous vehicle technology, impacts the relationships between communities and police jurisdictions. Additionally, the current study contributes to an underexplored aspect of AI in policing by examining how police officers reflect on and make sense of AI technologies in the context of their law enforcement work, and it provides a snapshot of their views on how AI technologies impact the communities they serve. Furthermore, the consideration of self-driving technology offers unique insight into the perspectives, ethical considerations, and challenges of AI in policing.
The qualitative findings and core themes synthesized in this study provide a platform for developing robust quantitative future research. Surveys with stakeholders might elucidate (1) how AI-enhanced law enforcement may impact community relationship dynamics, (2) how the principled ethics of police practices, law enforcement values, and diversity factors impact the responsible design of AI technologies, and (3) how algorithmic policing technologies can create perceived societal benefits and how the risk of causing harm can be mitigated. Society has a moral obligation to craft and implement well-informed policies that address these concerns and mitigate the consequences of fully integrating AI technologies into law enforcement.

Author Contributions

Conceptualization, R.P.D. and V.D.; methodology, R.P.D. and V.D.; validation, J.R.B.; formal analysis, R.P.D. and V.D.; investigation, R.P.D. and V.D.; resources, R.P.D. and J.R.B.; data curation, R.P.D.; writing—original draft preparation, R.P.D.; writing—review and editing, J.R.B. and V.D.; visualization, R.P.D.; supervision, V.D.; project administration, V.D.; funding acquisition, J.R.B. and V.D. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

The study protocol was approved by the Institutional Review Board of North Carolina State University, Approval No. 20276, Date of Approval: 12/06/2019.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available upon request from the corresponding author.


Acknowledgments

The authors would like to thank Elizabeth Eskander for research assistance and support.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

An exhaustive search of multiple scholarly databases, conducted in cooperation with a university research librarian in March 2022, confirmed the absence of prior peer-reviewed studies. ProQuest Central returned 12 hits with no relevant results using the search string: ab((police OR “law enforcement” OR LEO) AND (AI OR “artificial intelligence”) AND qualitative). Academic Search Complete returned three hits with no relevant results using the search string: (police OR “law enforcement” OR LEO) AND (AI OR “artificial intelligence”) AND qualitative. Web of Science returned five hits with no relevant results using the search string (Abstract): (police OR “law enforcement” OR LEO) AND (AI OR “artificial intelligence”) AND qualitative. A gray literature search of Google Scholar returned one significant result, a study in the United Kingdom [38].


  1. Alikhademi, K.; Drobina, E.; Prioleau, D.; Richardson, B.; Purves, D.; Gilbert, J.E. A Review of Predictive Policing from the Perspective of Fairness. Artif. Intell. Law 2022, 30, 1–17. [Google Scholar] [CrossRef]
  2. Berk, R.A. Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement. Annu. Rev. Criminol. 2021, 4, 209–237. [Google Scholar] [CrossRef]
  3. Ouchchy, L.; Coin, A.; Dubljević, V. AI in the headlines: The Portrayal of the Ethical Issues of Artificial Intelligence in the Media. AI & Soc. 2020, 35, 927–936. [Google Scholar] [CrossRef] [Green Version]
  4. Paulsen, J.E. AI, Trustworthiness, and the Digital Dirty Harry Problem. Nord. J. Stud. Polic. 2021, 8, 1–19. [Google Scholar] [CrossRef]
  5. Vestby, A.; Vestby, J. Machine Learning and the Police: Asking the Right Questions. Polic. A J. Policy Pract. 2021, 15, 44–58. [Google Scholar] [CrossRef] [Green Version]
  6. Vinuesa, R.; Azizpour, H.; Leite, I.; Balaam, M.; Dignum, V.; Domisch, S.; Felländer, A.; Langhans, S.D.; Tegmark, M.; Nerini, F.F. The Role of Artificial Intelligence in Achieving the Sustainable Development Goals. Nat. Commun. 2020, 11, 233. [Google Scholar] [CrossRef] [Green Version]
  7. Morin, R.; Parker, K.; Stepler, R.; Mercer, A. Behind the Badge: Amid Protests and Calls for Reform, How Police View Their Jobs, Key Issues and Recent Fatal Encounters between Blacks and Police. Pew Research Center. 2017, pp. 1–96. Available online: (accessed on 1 June 2022).
  8. Tyler, T.R. Enhancing Police Legitimacy. ANNALS Am. Acad. Political Soc. Sci. 2004, 593, 84–99. [Google Scholar] [CrossRef]
  9. Cortright, C.E.; McCann, W.; Willits, D.; Hemmens, C.; Stohr, M.K. An Analysis of State Statutes Regarding the Role of Law Enforcement. Crim. Justice Policy Rev. 2020, 31, 103–132. [Google Scholar] [CrossRef]
  10. Meijer, A.; Wessels, M. Predictive Policing: Review of Benefits and Drawbacks. Int. J. Public Adm. 2019, 42, 1031–1039. [Google Scholar] [CrossRef] [Green Version]
  11. Minocher, X.; Randall, C. Predictable policing: New technology, old bias, and future resistance in big data surveillance. Convergence 2020, 26, 1108–1124. [Google Scholar] [CrossRef]
  12. Matulionyte, R.; Hanif, A. A call for more explainable AI in law enforcement. SSRN Electron. J. 2021, 1–6. [Google Scholar] [CrossRef]
  13. Yen, C.P.; Hung, T.W. Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective. Sci. Eng. Ethics 2021, 27, 1–16. [Google Scholar] [CrossRef] [PubMed]
  14. Marx, G.T. Ethics for the New Surveillance. Inf. Soc. 1998, 14, 171–185. [Google Scholar] [CrossRef]
  15. DeCew, J.W. In Pursuit of Privacy: Law, Ethics, and the Rise of Technology; Cornell University Press: New York, NY, USA, 1997. [Google Scholar]
  16. Adams, I.; Mastracci, S. Visibility is a Trap: The Ethics of Police Body-worn Cameras and Control. Adm. Theory Prax. 2017, 39, 313–328. [Google Scholar] [CrossRef]
  17. Office of Community Oriented Policing Services. President’s Task Force on 21st Century Policing. In Final Report of the President’s Task Force on 21st Century Policing; Office of Community Oriented Policing Services: Washington, DC, USA, 2015; pp. 1–36. Available online: (accessed on 1 June 2022).
  18. Timmermans, S.; Tavory, I. Theory Construction in Qualitative Research: From Grounded Theory to Abductive Analysis. Sociol. Theory 2012, 30, 167–186. [Google Scholar] [CrossRef]
  19. Kopak, A. Lights, Cameras, Action: A Mixed Methods Analysis of Police Perceptions of Citizens Who Video Record Officers in the Line of Duty in the United States. Int. J. Crim. Justice Sci. 2014, 9, 225–240. Available online: (accessed on 1 December 2022).
  20. Atkinson, R.; Flint, J. Accessing Hidden and Hard-to-Reach Populations: Snowball Research Strategies. Soc. Res. Update 2001, 33, 1–4. [Google Scholar]
  21. Coin, A.; Mulder, M.; Dubljević, V. Ethical Aspects of BCI Technology: What is the State of the Art? Philosophies 2020, 5, 31. [Google Scholar] [CrossRef]
  22. Gardner, A.M.; Scott, K.M. Census of State and Local Law Enforcement Agencies, 2018—Statistical Tables. U.S. Department of Justice, 2022. Available online: (accessed on 1 December 2022).
  23. Weichselbaum, S.; Schwartzapfel, B. When veterans become cops, some bring war home. USA Today, 2017. Available online: (accessed on 1 November 2022).
  24. Anderson, M.; Barthel, M.; Perrin, A.; Vogels, E.A. #BlackLivesMatter Surges on Twitter after George Floyd’s Death. Pew Research Center. 2020. Available online: (accessed on 1 November 2022).
  25. Cowell, M.; Corsi, C.; Johnson, T.; Brinkley-Rubinstein, L. The Factors that Motivate Law Enforcement’s Use of Force: A Systematic Review. Am. J. Community Psychol. 2021, 67, 142–151. [Google Scholar] [CrossRef]
  26. Peeples, L. What the data say about police brutality and racial bias—and which reforms might work. Nature 2021, 583, 22–24. [Google Scholar] [CrossRef]
  27. Bigman, Y.E.; Gray, K. People are Averse to Machines Making Moral Decisions. Cognition 2018, 181, 21–34. [Google Scholar] [CrossRef]
  28. Shank, D.B.; North, M.; Arnold, C.; Gamez, P. Can Mind Perception Explain Virtuous Character Judgments of Artificial Intelligence? Technol. Mind Behav. 2021, 2, 1–38. [Google Scholar] [CrossRef]
  29. Kim, H.S. Decision-Making in Artificial Intelligence: Is It Always Correct? J. Korean Med. Sci. 2020, 35, 1–3. [Google Scholar] [CrossRef]
  30. Rigano, C. Using Artificial Intelligence to Address Criminal Justice Needs. Natl. Inst. Justice (NIJ) J. 2019, 1–10. Available online: (accessed on 1 December 2022).
  31. Bauer, W.A.; Dubljević, V. AI Assistants and the Paradox of Internal Automaticity. Neuroethics 2020, 13, 303–310. [Google Scholar] [CrossRef]
  32. Dubljević, V. Neuroethics, Justice, and Autonomy: Public Reason in the Cognitive Enhancement Debate; Springer: Cham, Switzerland, 2019. [Google Scholar]
  33. Othman, K. Public Acceptance and Perception of Autonomous Vehicles: A Comprehensive Review. AI Ethics 2021, 1, 355–387. [Google Scholar] [CrossRef] [PubMed]
  34. Dubljević, V. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Sci. Eng. Ethics 2020, 26, 2461–2472. [Google Scholar] [CrossRef]
  35. Dubljević, V.; Sattler, S.; Racine, E. Deciphering Moral Intuition: How Agents, Deeds, and Consequences Influence Moral Judgment. PLoS ONE 2018, 13, e0204631. [Google Scholar] [CrossRef] [Green Version]
  36. Bokhari, S.A.A.; Myeong, S. Use of Artificial Intelligence in Smart Cities for Smart Decision-Making: A Social Innovation Perspective. Sustainability 2022, 14, 620. [Google Scholar] [CrossRef]
  37. Dubljević, V.; Venero, C.; Knafo, S. What is Cognitive Enhancement? In Cognitive Enhancement; Knafo, S., Venero, C., Eds.; Elsevier: Amsterdam, The Netherlands; Academic Press: London, UK, 2015; pp. 1–9. [Google Scholar]
  38. Urquhart, L.; Miranda, D. Policing Faces: The Present and Future of Intelligent Facial Surveillance. Inf. Commun. Technol. Law 2022, 31, 194–219. [Google Scholar] [CrossRef]
Figure 1. Overarching themes in law enforcement and artificial intelligence technologies.
Figure 2. Views of AI Technologies and Self-Driving Technology.
Figure 3. Societal effects of self-driving technology.
Figure 4. Self-driving technology implications for policing.
Figure 5. Addressing ethical and societal concerns by responsible design in AI policing technologies. This figure captures the findings from the study and synthesizes a framework with existing literature for addressing the ethical and societal concerns in AI policing by responsible design [1,2,10,11].
Table 1. Sample demographics.
Demographic Characteristic (Interview Sample, N = 20)    Frequency    Percentage    Mean (SD)
Tenure: Experience (years)                                                          18.8 (6.34)
     5 years or less                                     1            5%
     6 to 10 years                                       2            10%
     10 to 19 years                                      9            45%
     20 years or more                                    8            40%
Education Level 1
     High School Diploma or Undergraduate Degree         7            35%
     Postgraduate Degree                                 1            5%
Military Experience
Law Enforcement Role
     Line/Patrol Officer                                 5            25%
     Special Agent                                       1            5%
     Supervisory Position                                10           50%
     Senior Leadership                                   1            5%
1 Study participants [n = 7] who did not provide comments about their education level were coded as high school graduates or who had passed the General Educational Development (G.E.D.) Test indicating high school equivalency as per the minimum requirements for employment as a law enforcement officer in the state of North Carolina. For more information, see (accessed on 1 June 2022).
Table 2. Excerpts on the Views of Artificial Intelligence Technologies and Self-Driving Technology.
Positive View of AI Technologies (PO3 *): “Yeah, I mean, I am all for technology. I mean, especially in law enforcement. I think, and of course I cannot speak for all law enforcement agencies, but I think most of the agencies are kind of behind on the times. And that all obviously has to do with money, being able to get grants, to get the technology. But anything that can help a law enforcement officer carry out his job, duty, or help the agency carry out their mission, I think is a good thing and I think it’s helpful.”

Negative View of AI Technologies (PO13): “But, to me, there is no substitute for old school investigation of going and talking to somebody. I think you are going to make the problem even worse than what we have right now by doing that because I think everybody is looking at each other as an object, you are A, or you are B or your C, you are not a human being. And I think AI would make that worse. We need to think of each other.”

Ambivalent View of AI Technologies (PO8): “I think it needs to be pretty much studied more so that we can be certain of the efficacy. I am not really sold. I do think it would free up and allow manpower or increase officer ability to spend their time doing other things. But I am not too sure if I like that it could rid some jobs or just how effective it would be into providing correct information on things that humans can do.”

Positive View of Self-Driving Technology (PO16): “I feel like in a perfect world, that is probably better than people, because with that automatic driving technology, everything does what it is supposed to do, whereas when humans drive, nobody does what they are supposed to do. People drive at different speeds, they have different following distances and that is what creates all the problems, but if everybody drives the same speed and has the same following distance, you probably will never have wrecks or have very few of them.”

Negative View of Self-Driving Technology (PO13): “Oh, I absolutely hate it. Yeah, I am not a fan. I like driving for one and I do not trust computers that much. [...] I do not believe in putting an Alexa in your house to hear everything that you say or do is recorded, because there is so many ways to hack in now. What if say, you are driving your Tesla down the road and somebody hacks in and next thing you know, you crash. I don’t like it. I like driving. I have always liked driving since I was young, and I would not trust it.”

Ambivalent View of Self-Driving Technology (PO1): “I think the passive technologies that are here now with the light mitigation and stop mitigation, and anti-backing when it stops you from backing over somebody, I think that’s great. I don’t know, given here recently on TV, I saw a report of a Tesla. I think it was a Tesla that had wrecked, and it was in the self-drive mode. So, I think it’s still got a little bit farther to go before it matures enough to be widespread as they want it to be.”
* For clarification, PO3 refers to the third police officer (law enforcement) participant in the study. Each additional participant is identified using the same method.

Share and Cite

MDPI and ACS Style

Dempsey, R.P.; Brunet, J.R.; Dubljević, V. Exploring and Understanding Law Enforcement’s Relationship with Technology: A Qualitative Interview Study of Police Officers in North Carolina. Appl. Sci. 2023, 13, 3887.
