Abstract
This article considers the legal regulation and practical implementation of artificial intelligence (AI) in Kazakhstan’s criminal procedure within the context of judicial digital transformation. The deployment of AI creates risks for fundamental procedural principles, including the presumption of innocence, adversarial process, and the protection of individual rights and freedoms; maintaining procedural balance therefore requires legislative mechanisms that ensure the lawful and rights-based application of AI in criminal proceedings. Comparative legal analysis, formal legal research, and a systemic approach reveal gaps in existing legislation: the absence of clear definitions, insufficient regulation, and the lack of accountability for AI use. Legal recognition of AI and the establishment of procedural safeguards are essential. The novelty of the study lies in the development of concrete approaches to the introduction of artificial intelligence technologies into criminal procedure, taking into account Kazakhstan’s practical experience with the digitalization of criminal case management. Unlike existing research, which examines AI in the legal profession primarily from a theoretical perspective, this work proposes detailed mechanisms for integrating models and algorithms into the processing of criminal cases. The implementation of AI in criminal justice enhances the efficiency, transparency, and accuracy of case handling by automating document preparation, data analysis, and monitoring of compliance with procedural deadlines. At the same time, several constraints persist, including dependence on the quality of training datasets, the impossibility of fully replacing human legal judgment, and the need to uphold the presumption of innocence, the right to privacy, and algorithmic transparency. The findings underscore the potential of AI, provided that procedural safeguards are strictly observed and competent authorities exercise appropriate oversight.
Two potential approaches are outlined: selective amendments to the Criminal Procedure Code concerning rights protection, privacy, and judicial powers; or adoption of a separate provision on digital technologies and AI. Implementation of these measures would create a balanced legal framework that enables effective use of AI while preserving core procedural guarantees.
1. Introduction
Modern trends in digitalization affect virtually all areas of public life, including law enforcement and the judiciary. One of the most widely debated aspects of technological progress is the integration of artificial intelligence (AI) into legal practice, particularly in criminal procedure. The use of AI in criminal justice promises greater efficiency in crime investigation, streamlining of procedural processes, and a reduction in both time and resource expenditures.
At the same time, the active deployment of such technologies raises serious legal, ethical, and procedural challenges. The use of algorithmic systems in evidence collection, decisions on coercive measures, risk assessments of recidivism, and even in shaping judicial positions must strictly comply with the principles of criminal procedure. These include legality (Art. 10 of the CPC of the Republic of Kazakhstan), judicial protection of human rights and freedoms (Art. 12 CPC), safeguarding individual rights during criminal proceedings (Art. 15 CPC), the right to privacy (Art. 16 CPC), the presumption of innocence (Art. 19 CPC), and adversarial proceedings with equality of arms (Art. 21 CPC) (Criminal Procedure Code of the Republic of Kazakhstan 2014).
The European Union has already enacted the Artificial Intelligence Act (AI Act), which came into effect on 1 August 2024. Its purpose is to mitigate risks associated with the use of AI and to establish a supervisory authority to ensure compliance. Violations may result in substantial fines. The Act distinguishes four risk categories of AI systems (unacceptable, high, limited, and minimal) and lays down separate rules for general-purpose, including generative, AI models. Prohibited uses include AI systems that enable cognitive or behavioral manipulation of individuals or vulnerable groups (e.g., children), as well as real-time remote biometric identification in publicly accessible spaces, such as live facial recognition.1
Kazakhstan is following global trends in the regulation of artificial intelligence, developing its own legislation that takes into account both international experience and the specific features of the national economy. In particular, the draft law “On Artificial Intelligence” is currently under active discussion. On 3 March 2025, Member of Parliament E. Smyshlyayeva introduced the bill, which aims to establish a transparent legal framework for the integration of AI technologies into the country’s economy. The draft consists of seven chapters and 27 articles, introducing a classification of AI systems based on levels of risk (minimal, medium, and high) to regulate their use.2
At present, however, criminal procedure law does not contain clear provisions governing the application of AI in the context of criminal proceedings. This results in legal uncertainty and highlights the need for comprehensive scholarly analysis. Accordingly, there arises a necessity to examine the normative preconditions, limitations, and prospects for regulating the use of AI within the criminal justice system.
The purpose of this article is to examine the key legal aspects of introducing artificial intelligence into criminal procedure, to identify potential risks, and to develop approaches to shaping an appropriate legal framework. The research objectives are: to reveal the theoretical and legal nature of artificial intelligence and to outline its potential applications in criminal proceedings; to analyze current criminal procedure legislation of the Republic of Kazakhstan with regard to AI technologies; to study foreign experience—particularly that of the European Union, the United States, and China—in regulating the use of AI in criminal justice; to identify the main legal and ethical risks arising from the integration of AI into criminal procedure; and to formulate proposals for improving criminal procedure legislation in order to ensure the safe and lawful use of AI in legal practice.
The object of the study is criminal procedure in the context of digitalization and the introduction of artificial intelligence technologies. The subject of the study is the provisions of the criminal procedure legislation of the Republic of Kazakhstan that regulate, or are subject to regulation, in the field of AI application, as well as the legal and practical aspects of AI use at different stages of criminal proceedings.
2. Methods
The methodological framework of the study is shaped by the need for a comprehensive assessment of the normative, theoretical, and practical dimensions of integrating artificial intelligence technologies into the criminal procedure of the Republic of Kazakhstan. The research employs both general scientific and specialized legal methods that make it possible to evaluate the current state of AI regulation and determine the prospects for its incorporation into criminal justice.
1. Historical-Legal Method
This method is used to examine the evolution of Kazakhstan’s criminal procedure legislation in the context of digitalization, beginning with the introduction of electronic legal and procedural tools such as the “electronic criminal case”, digital databases, and online judicial services. This approach enables the identification of how the modernization of electronic procedures has prepared the normative environment for the potential integration of AI technologies, as well as recognition of institutional features of the criminal process—centralized pre-trial investigation, the primacy of written procedure, and the role of prosecutorial oversight—that influence the application of algorithmic systems.
Given its fundamental characteristics—codified legislation, the primacy of written law, the structure of criminal procedure, and methods of legal regulation—Kazakhstan aligns with the Romano-Germanic (continental) legal family. This is directly relevant to assessing the feasibility of introducing artificial intelligence, as the continental model is grounded in a high degree of normative precision, requiring clear statutory definitions of concepts, procedures, and boundaries for the use of AI.
In recent years, Kazakhstan has intensified its international cooperation in the fields of digitalization and artificial intelligence, concluding a series of agreements with key global partners. Collaboration with the United States includes memoranda with major technology companies such as Hewlett Packard Enterprise (HPE) and Oracle, as well as cooperation with Groq, which contributes to the development of AI infrastructure and the exchange of advanced technological solutions.3 Kazakhstan and China have agreed to establish a joint international laboratory on artificial intelligence and sustainable development, thereby expanding cooperation in emerging technologies and scientific research.4 A significant area of progress has also been the partnership with the United Arab Emirates: the agreement between Samruk-Kazyna and the company AIQ is aimed at introducing digital and AI-based technologies in the energy sector.5 In parallel, cooperation with Russia is also developing, with discussions focusing on opportunities for deepening collaboration in digitalization and IT development, including the application of artificial intelligence-based solutions.6
These strategic agreements concluded by Kazakhstan with a number of foreign states in the field of digitalization and artificial intelligence do not alter the nature of the Kazakhstani legal system nor its affiliation with the continental legal tradition. Their impact is limited to technical aspects of cooperation, including data exchange, interoperability of digital security standards, and the potential integration of big-data analytical methodologies. The legal architecture of criminal procedure remains national in character and fully self-sufficient.
2. Comparative Legal Method
This method was employed to compare the legislation of Kazakhstan and the practices of digitalizing criminal procedure with international models, including:
- the European Union Artificial Intelligence Act (2024);
- U.S. approaches to predictive analytics and recidivism-risk assessment;
- China’s system of “smart courts” and algorithm-based prosecutorial practice;
- the Council of Europe’s recommendations on the ethical use of AI in the justice sector.
This approach made it possible to determine the extent to which foreign mechanisms may be applied within Kazakhstan’s legal system and identify the most promising directions for normative development.
3. Formal Legal Method
This method was used to analyze the content of the current normative acts of the Republic of Kazakhstan, including the Criminal Procedure Code, the Law “On Personal Data”, and the draft Law “On Artificial Intelligence”. Its application enabled the identification of key gaps, such as the absence of a legal definition of artificial intelligence in procedural legislation, insufficient regulation of digital evidence, and the lack of procedural safeguards and liability mechanisms for the use of algorithmic systems.
4. Content Analysis of Law-Enforcement Practice
An examination was conducted of the digital tools actually used by investigative bodies and the courts, including:
- the electronic criminal case file;
- systems of video recording;
- analytical platforms for data identification and processing;
- biometric systems.
This analysis made it possible to assess the practical readiness of law-enforcement agencies to employ AI technologies and identify concrete risks associated with the introduction of algorithmic systems.
5. Systemic Approach
This approach was applied to examine artificial intelligence as an integral component of the broader digital transformation of criminal procedure. The analysis encompassed normative acts, technological infrastructure, the competencies of procedural actors, and mechanisms of judicial oversight. This made it possible to assess AI not in isolation but within the operational framework of the entire system of procedural safeguards.
6. Prognostic Method
The method was used to develop a model for the prospective evolution of criminal procedure regulation in the sphere of artificial intelligence. Based on an assessment of existing legislative gaps, international experience, and the current level of digitalization, the following potential directions were identified:
- targeted amendments to the Criminal Procedure Code of the Republic of Kazakhstan;
- or the development of a separate article or chapter governing the use of AI.
This method enabled the projection of how the proposed regulatory solutions may influence the balance between investigative efficiency and the protection of individual procedural rights.
7. Legal Modeling Method
Drawing on the analysis of existing legal norms and international standards, specific regulatory proposals were formulated:
- introducing a definition of artificial intelligence into Article 7 of the CPC of the Republic of Kazakhstan, aligned with the forthcoming special law;
- establishing procedural limitations on the use of AI technologies;
- defining forms of liability for algorithmic errors;
- specifying the powers of investigators, inquiry officers, prosecutors, and judges when employing AI tools.
8. Analysis of Scholarly Sources
The research base encompasses an examination of domestic and foreign academic publications, normative legal acts, international instruments, and open-source informational resources. In particular, materials from Wikipedia were consulted, as they provide systematized information on artificial intelligence technologies, their software implementations, and patterns of practical use across various jurisdictions. These data made it possible to compare international approaches to the integration of AI into the legal domain, identify differences in state regulatory frameworks, and assess the extent to which algorithmic systems are employed in criminal procedure within the EU, the United States, China, and several other jurisdictions. The use of open-source materials contributed to a more comprehensive comparative legal analysis and facilitated the formation of a holistic understanding of global trends in AI development.
The theoretical and normative foundation of the research consists of the provisions of the Criminal Procedure Code of the Republic of Kazakhstan, international legal instruments, draft laws, doctrinal sources, scholarly publications, as well as official documents of international organizations such as the European Commission and the Council of Europe.
3. Historical Background and International Approaches to AI Regulation
3.1. Theoretical Foundations and Evolution of Artificial Intelligence
According to Article 1 of the Draft Law of the Republic of Kazakhstan “On Artificial Intelligence” (2025), artificial intelligence is defined as an information and communication technology that can imitate or even surpass human cognitive functions in order to perform intellectual tasks and find solutions.
The history of artificial intelligence as an independent scientific field begins in the mid-20th century. By that time, a broad intellectual foundation had already been established—ranging from philosophical reflections on the nature of knowledge, to theories of brain function in neurophysiology and psychology, to the efforts of economists and mathematicians to formalize knowledge and develop optimal computations. A key milestone was the emergence of the mathematical theory of algorithms and the creation of the first electronic computers.
The advent of computers capable of performing calculations many times faster than humans raised a fundamental question: can machines replicate human thinking?
The first scholarly work generally recognized as devoted to AI is Warren McCulloch and Walter Pitts’ article “A Logical Calculus of the Ideas Immanent in Nervous Activity”. In it, they proposed a mathematical model of neural elements capable of performing logical operations, laying the foundation for artificial neural networks (McCulloch and Pitts 1943). However, the first fundamental theoretical study directly addressing machine intelligence as such was Alan Turing’s article “Computing Machinery and Intelligence”. There, he reformulated the classic question “Can machines think?” by proposing an alternative: the “imitation game”—later known as the Turing Test—as a criterion for assessing whether a machine’s behavior could be considered indistinguishable from that of a human (Turing 1950).
A practical confirmation of AI’s emergence as an applied discipline was the development of the Logic Theorist program in 1955–1956 by Allen Newell, Herbert Simon, and Cliff Shaw. Logic Theorist was the first program capable of independently performing logical proofs, and it is often regarded as the first “true” implementation of artificial intelligence (Newell and Simon 1956).
The history of AI can therefore be outlined as follows:
- 1943—McCulloch and Pitts: formalization of neural networks;
- 1950—Turing: the test for machine intelligence;
- 1955—Newell, Simon, and Shaw: Logic Theorist as the first working system.
These works—both theoretical and applied—laid the foundation of what we now call artificial intelligence. They introduced key concepts (neural networks, the Turing Test, heuristic search) and made it possible to move from philosophical speculation to concrete algorithms. The foundations of the subsequent scholarly field of AI & Law were laid somewhat later, when in the 1970s–1980s researchers began attempting to formalize legal reasoning and develop algorithms for the analysis of legal norms. Notable examples include L. Thorne McCarty’s Taxman system, the logical model of the British Nationality Act (Sergot et al. 1986), conceptual legal analysis systems (Hafner 1987), and early case-based legal reasoning systems (Ashley 1990).
The landmark Dartmouth Conference of 1956 is widely regarded as the moment when artificial intelligence emerged as a distinct scientific discipline. It was there that John McCarthy first introduced the term “Artificial Intelligence”. This event marked the beginning of AI as a separate field of inquiry, which evolved from the symbolic systems of the 1950s–1960s to contemporary statistical methods and machine learning. In a broader historical context, as noted by Gil Press, the origins of AI can be traced back to mechanical systems of reasoning developed between the thirteenth and seventeenth centuries—including the works of the Catalan philosopher Ramon Llull, such as Ars generalis ultima (1308), a system for the mechanical combination of concepts, and Gottfried Leibniz’s Dissertatio de arte combinatoria (1666). Press therefore draws a continuous intellectual trajectory: from medieval attempts to formalize human thought, to the formal establishment of AI as a scientific discipline, and further to the rapid development of modern technologies such as deep learning and large-scale data analysis.7
Researchers then began developing programs capable of solving logical problems, playing chess, and even proving theorems. However, it soon became clear that the path to genuine AI was far more complex. Despite early enthusiasm, progress was slow, and by the 1970s, government funding for AI research in the United States had begun to decline. This period became known as the “AI winter”—a time of disappointment and waning interest (Bíró 2024).
In the 1980s, however, AI experienced a resurgence. In the United States, expert systems—computer programs designed to provide advice in specialized domains such as medicine or engineering—were actively developed. These systems found applications in industry and business, and although they were functionally limited, they revived interest in AI. Yet this stage also ended in disillusionment: expert systems were rigid, resource-intensive, and difficult to scale. By the early 1990s, the United States entered a second “AI winter” (Sarikaya 2024).
At the same time, advances in computing and new algorithms shifted attention away from strict logic-based approaches toward learning methods, particularly neural networks. Machine learning became the new paradigm. Research conducted at universities and laboratories such as MIT, Stanford, and Carnegie Mellon began to bear fruit. The scientific schools of these universities played a decisive role in the development of neural networks and statistical data analysis. This research also started to influence the legal domain, which is reflected in works such as Rissland et al. (2005), Prakken and Sartor (1998), and Bench-Capon (1991). The United States regained its leadership, this time with a focus on statistics and data. By the 2000s, American technology companies—Google, Amazon, Microsoft—had begun investing in AI divisions, recognizing its strategic importance for the future.
The 2010s marked the beginning of the deep learning era. A true breakthrough came when neural networks achieved near-human performance in image analysis, speech recognition, and machine translation. In the U.S., this was also the period when powerful research groups and startups emerged, dedicated to advancing AI. Among them was OpenAI, founded in 2015, which committed itself to open research in the pursuit of general AI.
It was American companies—Google (including DeepMind), Meta, Microsoft, NVIDIA, and others—that came to the forefront of the so-called “AI race”. They began creating systems capable of generating text, music, images, and even computer code. The leap in generative AI was especially striking: first GPT-3, then GPT-4, and in 2024, GPT-4o. These models no longer merely “solve tasks” but interact with humans in ways increasingly close to natural communication. During this period, research on algorithmic interpretability and transparency in the legal domain gained particular relevance—most notably the works of Doshi-Velez and Kim (2017) and Rudin (2019), which emphasize the necessity of explainable models in legally consequential decision-making.
Today, the United States is at the center of global AI development. Government programs are investing in education, science, and security, while ethical standards and regulatory frameworks are being created to ensure responsible technological progress. Artificial intelligence is now applied in medicine, transportation, education, the military, and everyday life. Yet, these advances also bring challenges: how to control powerful models, how to protect user data, and how to prevent distortions of truth and manipulation. Thus, the trajectory of AI in the U.S. has been one of rises and setbacks, scientific breakthroughs, disappointments, and extraordinary achievements. From the simple idea of an intelligent machine to advanced neural networks capable of writing texts, treating diseases, and even modeling the physics of the world—all this has become possible thanks to decades of American efforts at the intersection of science, technology, and philosophy. Illustrative examples include predictive policing systems (Lum and Isaac 2016), big data analytical algorithms (Alkhazraji and Yahya 2024), and the widely known COMPAS risk assessment tool.
3.2. The U.S. Experience in AI Implementation and Challenges
In the United States, AI is also increasingly applied at various stages of criminal proceedings—from the preliminary analysis of suspects to sentencing. However, despite its widespread adoption, U.S. criminal procedure law does not yet contain a unified, comprehensive legal framework for AI, which raises both practical and constitutional issues.
Key questions arising from the use of AI in U.S. criminal justice include:
- the presumption of innocence and equality before the law;
- the right of the accused to know and challenge evidence, including algorithmic predictions;
- the absence of procedural safeguards when “black boxes” (algorithms inaccessible to scrutiny) are used.
At present, the U.S. Supreme Court has not issued precedent-setting decisions directly regulating the use of AI in criminal proceedings. However, some lower courts have already faced the need to take AI into account as a factor influencing the fairness of trials.
AI is widely used at the investigative stage, for example:
- predictive policing systems (PredPol, ShotSpotter);
- algorithms for analyzing financial transactions, phone records, and geolocation data;
- automated systems for assessing national security threats (e.g., the No-Fly Watchlist8).
However, these systems are often not subject to judicial review before the initiation of formal criminal prosecution, raising concerns regarding potential violations of the Fourth Amendment (prohibition of unreasonable searches and seizures).
One of the most well-known examples is the COMPAS system (Correctional Offender Management Profiling for Alternative Sanctions), used in some states to:
- assess the risk of recidivism;
- determine the level of danger posed by the defendant;
- assist in decisions on pre-trial detention or sentencing (Brennan and Dieterich 2017).
The system came under heavy criticism after a 2016 ProPublica investigation revealed systemic racial bias: COMPAS consistently overestimated the risk of recidivism for African American defendants compared to white defendants under similar circumstances. The algorithm is proprietary (owned by Northpointe), which creates a problem of opacity and the inability to challenge it in court—a violation of the right to defense and due process under the Fifth and Fourteenth Amendments to the U.S. Constitution. This controversy has sparked extensive discussion in legal scholarship and the field of algorithmic fairness (Brennan and Dieterich 2017; Larson et al. 2016; Dressel and Farid 2018; Barocas and Selbst 2016; Selbst 2018; Kroll et al. 2017).
Scholars note that the deployment of AI in U.S. criminal procedure is advancing more rapidly than its legal regulation. This creates risks of violating the principles of due process, the presumption of innocence, and equality of arms. In the absence of U.S. Supreme Court precedents, regulation is shaped primarily by lower courts. However, academic studies emphasize the need to adopt federal standards ensuring the transparency and explainability of algorithmic systems (Hamilton 2021; Berk 2018).
Thus, the use of AI in U.S. criminal proceedings is advancing more rapidly than legislation can adapt. At present, there is a significant gap between technological capabilities and legal safeguards. Without legislative intervention and judicial oversight, AI use risks:
- reinforcing discrimination;
- undermining trust in the justice system;
- limiting the defendant’s right to a fair and transparent trial.
There is a pressing need for federal standards on algorithmic transparency and explainability, as well as for stronger protections of constitutional rights in the age of digital justice.
3.3. The European Union’s Approach to AI Ethics and Regulation
The history of artificial intelligence in the European Union (EU) reflects a trajectory distinct from that of the United States, marked by a stronger emphasis on ethics, privacy, law, and cross-country research coordination. The EU has traditionally focused not only on technological advancement but also on the humanitarian, social, and legal dimensions of AI.
AI development in Europe began in the same decades as in the U.S.: the mid-20th century. European philosophers, logicians, and mathematicians such as Ludwig Wittgenstein, Alan Turing (a British scientist whose influence was global), Jean Piaget, and others shaped approaches to understanding intelligence and formalizing thought. From the 1950s to the 1970s, AI in Europe was advanced primarily within universities: in the United Kingdom, France, Germany, and the Netherlands, small research groups worked on systems for automatic translation, logical reasoning, and early expert systems.
Unlike in the U.S., where AI development was closely tied from the outset to military contracts and large corporations, in Europe, the process unfolded in a more academic manner. The main focus was placed on theory, linguistics, and the philosophical aspects of intelligence. During the 1980s, many European countries began launching national AI support programs. For example, the United Kingdom supported projects in machine learning, Germany focused on robotics, and France on computational linguistics.
By the 1990s, AI in the EU began to take on a more applied form: expert systems were introduced in medicine, energy, and finance. The European Commission began supporting research initiatives through the Framework Programs for Research and Technological Development (FP)—large-scale scientific initiatives uniting universities, companies, and government institutions.9 This allowed for coordination among member states and the development of AI technologies in line with shared European values (Douglas et al. 2020).
Since the early 2000s, Europe has been actively involved in the advancement of machine learning and robotics. Institutions such as CAIRNE (Confederation of Laboratories for AI Research in Europe) and ELLIS (European Laboratory for Learning and Intelligent Systems) were established to strengthen Europe’s presence against the growing influence of the U.S. and China in AI. Research became more coordinated, and the EU began investing billions of euros into the development of digital technologies.
Since 2018, the EU has emerged as a global leader in AI regulation. The Ethics Guidelines for Trustworthy AI10 outlined clear European priorities: the protection of human rights, algorithmic transparency, safety, and oversight of autonomous systems. This stood in sharp contrast to the more laissez-faire approach in the United States.
In 2021, the European Commission introduced the world’s first legislative proposal regulating artificial intelligence (European Commission 2021), an ambitious initiative aimed at establishing a legal framework for the safe and ethical use of AI. Following extensive negotiations, the European Parliament and the Council of the European Union reached a political agreement in December 2023. Regulation (EU) 2024/1689 (the “AI Act”) was formally signed on 13 June 2024. It was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024.
The implementation of its provisions is phased: certain rules have applied since 2 February 2025, while others will take effect gradually until 2 August 2027. The Act classifies AI systems according to levels of risk and imposes stringent requirements on technologies defined as “high-risk”.11
Today, AI in the EU is actively developing across healthcare, transportation, energy, environmental protection, and digital governance. Most efforts are aimed at leveraging AI in the public interest: combating climate change, digitizing public services, and supporting elderly and vulnerable groups. European countries emphasize “trustworthy AI”—technologies that can be relied upon from legal, ethical, and practical perspectives (High-Level Expert Group on Artificial Intelligence (AI HLEG) 2019).
The EU is also playing an active role in international negotiations on AI ethics and standards, promoting its model of combining innovation with legal accountability. While Europe may lag behind the U.S. and China in terms of deployment speed and market scale, its approach has become a regulatory benchmark to which other countries pay close attention.
The history of AI in the European Union thus represents a trajectory from fundamental scientific research and philosophical debate to the creation of an advanced system of regulations and programs supporting the safe and ethical development of technology. The EU has chosen a human-centric approach, where technology serves society rather than the other way around. This marks its distinctive role in the global history of artificial intelligence.12
The integration of AI into the EU’s criminal justice system offers significant potential for improving crime prevention, recidivism forecasting, and individualized assessments of social risk. At the same time, this development raises serious legal, ethical, and social concerns. In practice, AI systems can both enhance analytics and streamline procedures, but also generate risks of human rights violations, discrimination, and the erosion of judicial independence.
The European Parliament, in a number of resolutions, has emphasized the necessity of safeguarding human rights when introducing AI into the legal sphere, including a ban on the use of mass facial recognition technologies in public spaces. Discussions have also focused on the need for special legal frameworks and quality standards for “judicial AI”.
The AI Act classifies AI systems used in law enforcement and judicial activities as high-risk. This means that they are subject to strict requirements regarding transparency, risk management, human oversight, and data protection.
The Council of Europe’s Ethical Charter stresses that the use of AI in criminal proceedings must strictly comply with fundamental rights and the principles of a fair trial. Particular attention is given to:
- the principle of non-discrimination—AI systems must not allow algorithmic bias, especially in decisions concerning pre-trial detention, sentencing, or recidivism assessment;
- the principle of transparency, impartiality, and reliability—any decision involving AI must be understandable, justified, and subject to review by the court or the parties;
- the preservation of human oversight—AI may serve only as an auxiliary tool, not as a substitute for a judge or investigator;
- data protection—the processing of personal data in criminal proceedings involving AI requires special safeguards to prevent violations of the right to privacy;
- system quality and safety—only verified, reliable, and certified AI systems may be applied in criminal cases.13
As we can see, the Charter advocates for the ethical and lawful use of AI, where the protection of human rights, fairness, and the rule of law take precedence over technological interests.
These issues are particularly evident in the case of HART (the Harm Assessment Risk Tool)—a recidivism prediction technology developed by Durham Constabulary in partnership with the University of Cambridge and trained on Durham police records. The system is based on machine learning and analyzes about 30 factors, including not only the characteristics of the offense but also social parameters such as postal code and gender. In experimental settings, HART demonstrated high accuracy in predicting both low and high risk of recidivism—up to 98% and 88%, respectively.14
However, its application has raised concerns over the opacity of the algorithm, the potential reinforcement of social stereotypes, and the risk of algorithmic discrimination. The very use of social factors not directly related to criminal behavior may conflict with the principle of individualized sentencing and the presumption of innocence. These issues are widely examined in the scholarship on algorithmic fairness (Wessels 2024; Copelin 2025).
As analysis shows, even with high levels of accuracy, AI tools such as HART should be applied cautiously and only as auxiliary instruments, never as substitutes for judicial evaluation.
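To make these concerns concrete, the following is a deliberately simplified Python sketch of how a tool of this kind combines case factors into a risk band. All feature names, weights, and thresholds here are invented for illustration and do not reflect HART’s actual model or data; the point is that even a fully transparent scoring rule shows how a social factor such as a postcode index shifts the outcome, which is precisely what a court or defendant needs to be able to examine.

```python
# Illustrative sketch only: how a HART-style tool might combine case
# factors into a risk band. Feature names, weights, and thresholds are
# invented for demonstration and do not reflect HART's actual model.

WEIGHTS = {
    "prior_offences":        0.6,   # criminal-history factor
    "age_at_first_offence": -0.05,  # earlier onset raises the score
    "years_offence_free":   -0.4,   # desistance lowers the score
    "postcode_risk_index":   0.3,   # the kind of social factor critics question
}

def risk_band(case: dict) -> str:
    """Map a weighted sum of case factors to a categorical risk band."""
    score = sum(w * case.get(k, 0.0) for k, w in WEIGHTS.items())
    if score >= 5.0:
        return "high"
    if score >= 1.0:
        return "moderate"
    return "low"

example = {
    "prior_offences": 12,
    "age_at_first_offence": 16,
    "years_offence_free": 1,
    "postcode_risk_index": 2,
}
print(risk_band(example))  # → high
```

Even this readable linear rule raises the questions discussed above; an actual ensemble of hundreds of decision trees is far harder for the parties to interrogate, which is why challengeability must be guaranteed procedurally rather than assumed technically.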
Moreover, the European approach—unlike the American one—requires strict compliance with human rights standards, including the defendant’s right to know and challenge algorithmic conclusions. This is ensured in particular by the General Data Protection Regulation (GDPR). In the U.S., in contrast, algorithms such as the COMPAS system are used without sufficient transparency and with proven cases of discrimination, including on racial grounds.
Thus, the application of AI in EU criminal justice requires an extremely balanced approach. Technologies based on big data and machine learning must not replace judicial intuition, humanism, and the principle of individualized consideration of each person. Any use of such tools must be accompanied by:
- strict legal regulation;
- continuous monitoring of their effectiveness and fairness;
- the possibility of independent expert review;
- and, most importantly, the preservation of judicial sovereignty—both institutional and moral.
AI in criminal justice can be beneficial, but only if it remains subordinate to, and does not substitute for, human justice grounded in the values of dignity, equality, and legality. In this domain, significant importance attaches to scholarship on the modeling of legal reasoning and argumentation (Bench-Capon and Sartor 2003; Gordon et al. 2007; Atkinson and Bench-Capon 2007), as well as to contemporary research on classifiers and explainability in legal tasks (Richmond et al. 2023).
The article by E. Eteris and I. Veikša (Eteris and Veikša 2025) examines the key challenges and opportunities that artificial intelligence brings to the legal profession in Europe. The authors emphasize that technological breakthroughs—such as deep learning and natural language processing—are transforming the work of legal professionals by streamlining routine tasks and enhancing analytical capabilities. At the same time, they draw attention to the shortage of qualified specialists capable of operating within new digital environments, which necessitates a rethinking of legal education and the retraining of practicing lawyers.
The authors analyzed the digital transition in the EU from the standpoint of the regulatory environment and noted that legislative changes, such as Regulation (EU) 2024/1689, reinforce the need for the legal profession to adapt to AI. They propose introducing new branches of law into national legal systems and orienting the legal profession toward integration with digital technologies. This includes reforms in the training of future lawyers as well as the development of retraining programs for practitioners already working in the field.
Moreover, E. Eteris and I. Veikša draw attention to a range of ethical and responsibility-related challenges associated with the integration of artificial intelligence into legal practice. According to the authors, the use of AI necessitates the development of clear professional standards to safeguard impartiality, ensure transparency, and maintain effective human oversight over algorithmic systems. They further argue that law firms, public authorities, and other legal institutions must actively invest in the acquisition of AI-related competencies, facilitate the systematic integration of such technologies into professional workflows, and develop new regulatory categories that reflect the ongoing digital transformation of the legal domain.
Taken as a whole, the findings of Eteris and Veikša demonstrate that the European approach to AI in the legal profession extends far beyond the technical deployment of algorithmic tools. It presupposes a fundamental restructuring of professional responsibilities, legal education, and the regulatory landscape. In this context, the ultimate purpose of AI implementation is to enhance the quality of justice while preserving the integrity of ethical and legal standards.
This position is fully supported by the present study. Indeed, the introduction of AI into the legal sphere should not be reduced to the mechanistic automation of routine operations. For technological innovations to produce substantive benefits, several interconnected dimensions require reconsideration.
First, professional roles must evolve. Legal practitioners will increasingly be required to master digital competencies, understand the logic of algorithmic decision-making, and critically assess the reliability and validity of AI-generated outputs. While AI can operate as an auxiliary tool, the authority to make final, legally significant decisions must remain with a human professional.
Second, legal education and professional training require systematic modernization. The preparation of new specialists must incorporate instruction on AI applications in law, digital analytics, and the ethical implications of technological decision-making. Equally important is the retraining of practicing lawyers, whose adaptation to the digitalized legal environment is essential for ensuring the integrity and sustainability of legal practice.
Third, the regulatory framework must be updated. The introduction of AI into legal procedures necessitates the development of norms governing the use of algorithmic systems, mechanisms ensuring transparency, safeguards for personal data, and guarantees of procedural equality. Only a robust regulatory foundation can ensure conformity with the principles of justice and prevent risks associated with algorithmic bias or opacity.
In conclusion, the position advanced by Eteris and Veikša aligns with the broader conceptual understanding that AI represents not simply a technological instrument but a driver of systemic transformation affecting the legal profession, legal education, and legislative development. A comprehensive, multi-level approach to AI regulation and implementation enables the mitigation of risks to fundamental rights and supports the maintenance of high ethical and professional standards in the administration of justice.
3.4. China’s Experience with “Smart Courts” and Government Leadership
Interest in AI in China emerged in the late 1970s, following Deng Xiaoping’s key economic reforms. The primary focus at that stage was on automated theorem proving and logical inference (Luong and Fedasiuk 2022).
In 1981, the Chinese Association for Artificial Intelligence (CAAI) was established. In 1986, AI was included in the national 863 Program (a high-level scientific and technological development initiative).15
In 1987, the first Chinese academic publication on AI appeared (Tianjin University), and by the late 1980s, dedicated journals and conferences were launched, laying the foundation for an academic AI community.16
From the early 2000s, the Chinese government began large-scale funding of AI; in 2006, this priority was formally enshrined in the Five-Year Plans. In 2011, the Beijing branch of the American Association for Artificial Intelligence (AAAI) opened; the Wu Wenjun AI Science and Technology Award was established; and in 2013, the International Joint Conference on Artificial Intelligence (IJCAI) was held in Beijing for the first time—officially integrating China into the global AI ecosystem.
In July 2017, the State Council launched the New Generation AI Development Plan, aiming for global leadership by 2030 with a target of building an industry worth $150 billion. Other key initiatives included embedding AI in successive Five-Year Plans, establishing industrial clusters and funds for infrastructure, big data, and supercomputing (targeting 300 EFLOPS by 2025) (Luo 2018).
By 2021, Chinese universities had developed large language models, such as the GLM series, with hundreds of billions of parameters. In 2023–2024, a domestic ecosystem of foundation models emerged, such as Ernie Bot by Baidu (launched in 2023 and updated to version 4.5 in March 2025).17
China also fostered its national “AI tigers”—companies such as SenseTime, iFlytek, and MiniMax. In 2024, MiniMax raised $600 million and was valued at $2.5 billion. In 2018, Xinhua launched the first “AI news anchor”.18 In 2017–2018, Xiaoyi by iFlytek became the first robot to pass the national medical licensing exam.19
AI has been actively applied in speech and image recognition, as well as in education—for example, Squirrel AI, which provides adaptive learning platforms.20
In August 2023, China introduced the first mandatory regulations governing public-facing generative AI services; developers are required to ensure compliance with “socialist values”. The rules introduced content controls and new restrictions, including the government’s authority to block foreign services within China. By comparison, Chinese researchers have published roughly 25% more AI papers than their U.S. counterparts since 2018. Xi Jinping’s strategy of “self-reliance in semiconductors” and dominance in AI positions China as a true peer competitor to the United States (Government of Kazakhstan 2017).
The success of DeepSeek (2024) and Ernie Bot illustrates how the Chinese AI ecosystem is shifting “from imitation to leadership”. DeepSeek is a powerful open-source AI platform that quickly rose to prominence among leading AI systems thanks to its accessibility, efficiency, and intellectual capabilities comparable to GPT-4. Its flagship models—DeepSeek-R1 and DeepSeek-V3—demonstrate strong performance in language processing and programming tasks, making advanced AI accessible to a global audience.
Since the late 2000s, China has actively implemented “smart court” systems using AI, big data, and blockchain to automate many stages of judicial proceedings:
The Hangzhou Internet Court (operational since 2019) handles Internet-related disputes, including criminal cases, and employs AI for evidence analysis, facial recognition, and speech processing.21
Programs such as Little Wisdom (Xiao Zhi 3.0) and the Xiao Baogong Intelligent Sentencing Prediction System are used for case preparation, record-keeping, evidence analysis, and sentencing recommendations (Zhabina 2023).
The 206 System, developed by iFlytek in cooperation with the Shanghai court, automates the mapping of evidence to legal norms, identifies gaps in the evidentiary base, assists in interrogations, and provides judges with guidance on the consistency of facts with the law. While the system acts as an assistant, judges remain fully responsible for the final decision (Liang 2019). Judicial authorities also deploy platforms such as Wise Judge (Beijing High Court) and the Criminal Case Handling Assistant (Shanghai High Court) to ensure consistent rulings—“one case—one judgment” grounded in prior case law.
China is not merely adopting AI in criminal justice but is institutionalizing these practices with clear regulatory guidance. In 2022, the Supreme People’s Court of China outlined an official trajectory for AI in the judiciary by issuing the document “Opinions on Regulating and Strengthening the Applications of AI in Judicial Fields”. This document reflects the judiciary’s intention to balance innovation with the protection of fundamental rights. The principles set forth emphasize that AI cannot replace human justice but may complement it. Special attention is given to technological transparency to ensure algorithmic explainability and accountability, while also preventing discrimination and safeguarding the rights of trial participants (Supreme People’s Court of China 2022).
Recent scholarly research also demonstrates that the People’s Republic of China is developing its own legal model of digital justice. For example, the article by Junlin Peng and Wen Xiang examines the emergence of “smart courts” in China as an element of a broader state-driven initiative aimed at digitizing judicial processes and enhancing the governability of the court system. According to the authors, digitalization within Chinese courts creates significant opportunities for improving the administration of justice. It accelerates case processing, increases procedural transparency, reduces corruption risks, and decreases the administrative burden placed on judges. They further emphasize that the integration of big-data analytics enables the unification of judicial practice: algorithmic comparison of similar cases contributes to greater consistency in judicial decision-making, thereby strengthening the predictability of the judicial system. Given the exceptionally high volume of cases adjudicated in China, technological enhancement of judicial capacity has become a key mechanism for maintaining institutional efficiency.
However, Peng and Xiang also note that smart courts generate complex challenges. One of the most serious concerns relates to the risk of excessive reliance on algorithmic recommendations and the potential erosion of judicial autonomy: if AI-generated outputs become dominant, judges may gradually lose the ability to conduct independent legal reasoning and factual assessment of evidence. The authors also raise issues of fairness and transparency in algorithmic systems; opaque or non-transparent models may reproduce latent biases or be employed to reinforce administrative control over the judiciary. Problems related to privacy and data protection, as well as digital inequality that limits online access for certain segments of the population, are likewise identified as significant risks, especially in criminal proceedings.
Peng and Xiang conclude that China’s smart courts represent both a major achievement of the digital era and a complex regulatory challenge. For digitalization to truly strengthen, rather than undermine, the administration of justice, it is necessary to ensure algorithmic transparency, maintain the independence of judicial discretion, and develop robust standards for data protection. The Chinese model illustrates the remarkable potential of emerging technologies; however, its long-term success will depend on the system’s ability to balance efficiency and oversight, technological advancement and legal safeguards (Peng and Xiang 2019).
The article “Smart Courts: A New Path to Justice in China?” (Shi et al. 2021) examines the ways in which China integrates digital technologies into its judicial system and explains why this experience is considered unique in the global context. The authors emphasize that, unlike many other jurisdictions, the Chinese government adopts a centralized and comprehensive model that encompasses all courts across the country. Through the use of big data, blockchain technologies, online judicial procedures, and artificial intelligence algorithms, the smart court system has already contributed to broader access to justice, expedited case processing, reduced administrative costs, and enhanced transparency in judicial operations.
The development of China’s smart courts is presented as the outcome of a long-term transformation that has unfolded in three stages. The first stage (1996–2003) was characterized by the complete digitization of judicial documentation and the transition of court operations to electronic formats following the landmark 1996 conference. The second stage (2004–2013) witnessed the introduction of online hearings and the expansion of remote modes of interaction with the judiciary. The third stage began in 2014, when the smart court initiative was formally launched, prioritizing the deeper integration of advanced technologies and the establishment of an “open, transparent, and user-centered” judiciary, as outlined in the Fourth Five-Year Reform Plan.
Despite the notable progress achieved, the authors caution that extensive technological implementation is accompanied by significant challenges. These include risks associated with algorithmic decision-making, the potential exacerbation of digital inequality, concerns regarding the influence of technological infrastructure on judicial independence, and issues related to privacy and data protection. The article concludes that the further development of smart courts requires a deliberate and balanced approach, particularly in the context of adopting advanced artificial intelligence solutions. Accelerating and reducing the cost of judicial procedures must not come at the expense of fairness, due process, or the quality of judicial protection.
The article by Straton Papagianneas and Nino Junius (Papagianneas and Junius 2023), devoted to the development of China’s “smart courts”, offers a conceptual analysis of how the digital transformation of justice in China is closely intertwined with the ideological framework and political-legal worldview of the Chinese Communist Party (CCP). The authors do not merely describe technological innovations; rather, they examine the type of “justice” these technologies produce and legitimize, as well as the reasons why digitalization has been introduced into China’s judicial system so rapidly and systematically.
According to the authors, smart courts are framed by the Chinese state as tools for strengthening procedural justice—primarily through enhanced transparency, accountability, and adherence to due process. However, these values are interpreted through the prism of the Party’s official worldview, in which “justice” is closely linked to social order, governability, and the maintenance of institutional legitimacy. Automation, therefore, is not ideologically neutral: it reinforces a specific model of justice that the state considers normatively correct.
Papagianneas and Junius identified three central challenges that, according to Chinese policy documents and reform advocates, smart courts are designed to address: increasing efficiency, strengthening public trust, and ensuring social stability. The proposed solutions within the Chinese judicial reform agenda are organized along two dimensions: procedural justice and substantive justice. The authors further emphasize that the Smart Court Reform seeks to integrate these two layers, despite their inherent tensions. Formal procedural requirements are expected to serve substantive goals of the Party-state, such as social harmony and political legitimization.
Ultimately, the authors conclude that the automation of Chinese justice represents not a mere technological upgrade, but an institutional mechanism for embedding a particular ideological conception of justice. Smart courts are firmly integrated into the Party-state model and function to reinforce it by creating a “digital infrastructure of legitimacy” that renders justice more efficient, controllable, and predictable within the boundaries of the official political trajectory. This, according to the authors, explains the rapid expansion of AI and automated systems in China’s judiciary: these technologies align naturally with the CCP’s ideological objectives and facilitate their reproduction.
Thus, China demonstrates an approach where human-centered principles remain a priority despite the rapid push toward digitalization. This is particularly important in criminal justice, where personal freedoms and human destinies are at stake. China’s strategy is aimed not merely at technical progress but at integrating AI into a framework of fair, accountable, and ethically oriented justice.
4. Development of AI and Digital Transformation in Kazakhstan
4.1. National Strategy and Overall Progress in Artificial Intelligence
The development of AI in Kazakhstan has evolved from early steps within the framework of national digitalization to the active implementation of a comprehensive national strategy. Initially, AI adoption was linked to the Digital Kazakhstan program, aimed at modernizing public administration, healthcare, and education (Government of Kazakhstan 2017). By the early 21st century, Kazakh scholars such as Altynbek Sharipbayev began advancing Kazakh computational linguistics and natural language processing algorithms (Sharipbaev 2024), providing an essential foundation for subsequent AI progress in the country.
In 2024, Kazakhstan adopted the Concept for the Development of Artificial Intelligence for 2024–2029 (Government of Kazakhstan 2024). Its objectives include creating high-performance computing infrastructure such as data centers and supercomputers, launching a unified digital platform Smart Data Ukimet,22 and training one million citizens in digital and AI-related skills. In parallel, development began on the national language model KazLLM, trained on 148 billion tokens, intended to ensure the country’s digital sovereignty, including for the Kazakh language (Schmidhuber 2015).
AI is already being deployed in public services: in electronic notary systems, mortgage and credit services, and the automation of document workflows. In healthcare, AI algorithms are used for diagnostics and medical imaging analysis. For example, the domestic startup CerebraAI successfully applies AI for stroke analysis. In education, “smart schools” and digital platforms are being launched. In transportation, intelligent video surveillance systems, smart traffic lights, and monitoring algorithms are being developed.
In addition, Kazakhstan is building an AI-centered business ecosystem with startups and private projects in finance, logistics, education, and medicine. Initiatives such as AI-Campus and Alem.AI are emerging to provide resources for startups and AI developers. The state is actively investing in infrastructure, stimulating scientific research, and preparing a legislative framework. A forthcoming law on artificial intelligence is expected to define usage rules, ban practices such as social scoring, manipulative technologies, and unjustified biometric recognition, and establish mechanisms for ethical oversight and regulation.
Despite rapid development, Kazakhstan continues to face a number of challenges: a shortage of qualified specialists, the need to ensure ethical transparency, the sustainability of digital infrastructure, and the protection of personal data. The absence of a comprehensive legal framework also limits the broader application of AI in sensitive areas, including law enforcement and the judiciary.
4.2. The Role of Artificial Intelligence in the Legislative Process
Over the course of several months, beginning in March, the Republic of Kazakhstan engaged in intensive public and expert discussions concerning a draft law aimed at regulating the use of artificial intelligence. Following detailed debates, expert consultations, and the introduction of multiple amendments, the legislation has now been enacted, marking a significant milestone in the legal formalization of this complex and multidimensional domain.23
The very adoption of the law indicates that Kazakhstan’s traditional legislative mechanisms have demonstrated the capacity to adapt to the rapid pace of technological change. Despite substantial disagreements regarding conceptual approaches, the legislative system ultimately produced well-reasoned solutions capable of responding to the accelerating development of technological innovations.
The complexity of regulating AI stemmed from the need to introduce new legal categories—such as “machine-learning systems”, “algorithmic accountability”, and “model transparency”—while also accounting for the diverse architectures and capabilities of different AI systems. The adopted law codifies several foundational principles, including the reliability and safety of AI systems, the accountability of developers and users, as well as the protection of personal data and privacy.
In addition, the legislation provides for the establishment of a National AI Platform, envisioned as a unified infrastructure for the development, testing, and evaluation of AI solutions in Kazakhstan. This platform is expected to enable experts and policymakers to employ AI as an auxiliary instrument for refining regulatory acts and monitoring the performance of algorithmic systems.
The adoption of the law reflects the state’s recognition of the importance of technological innovation while simultaneously seeking to construct a balanced regulatory environment in which emerging technologies serve the public interest and legal guarantees remain robust. The new law is designed to ensure regulatory flexibility while maintaining high standards of safety, transparency, and accountability.
Previous stages of our research have already indicated that the use of AI in law-making holds substantial potential for improving the quality and consistency of the legal system. In particular, AI algorithms can be effectively employed to analyze existing legislation in order to identify duplications or contradictions, logical inconsistencies, outdated wording, and redundant provisions. Moreover, such technologies are capable of generating predictive models based on the analysis of law-enforcement practices and the potential consequences of adopting certain legal norms.
Therefore, the protracted discussions around the AI draft law themselves illustrate the relevance of integrating intelligent digital instruments into legislative activity. The use of AI in the law-making process does not replace human participation but can significantly enhance the efficiency of analytical and predictive work during the preparation and expert review of draft regulations. This is especially critical when the subject of regulation is directly linked to high-technology sectors and requires a timely yet well-grounded and legally precise response from the state.
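As an illustration of the duplication analysis mentioned above, the following minimal Python sketch flags near-identical provisions in a draft text by token overlap (Jaccard similarity). The provision wordings, article numbers, and threshold are hypothetical examples; a production system would rely on legal-domain language models and semantic comparison rather than this toy lexical metric.

```python
# Minimal sketch, assuming plain-text provisions: flag near-duplicate
# articles by token overlap (Jaccard similarity). All provision texts
# and the threshold below are invented for illustration.

def tokens(text: str) -> set:
    """Lowercased word set of a provision's text."""
    return set(text.lower().split())

def jaccard(a: str, b: str) -> float:
    """Share of common words between two texts (0.0 to 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_duplicates(provisions: dict, threshold: float = 0.8) -> list:
    """Return pairs of provision IDs whose wording largely overlaps."""
    ids = sorted(provisions)
    pairs = []
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if jaccard(provisions[a], provisions[b]) >= threshold:
                pairs.append((a, b))
    return pairs

draft = {
    "art_12": "the investigator shall notify the prosecutor within 24 hours",
    "art_47": "the investigator shall notify the prosecutor within 24 hours of detention",
    "art_90": "evidence obtained unlawfully is inadmissible",
}
print(find_duplicates(draft))  # → [('art_12', 'art_47')]
```

A flagged pair is only a candidate for expert review: as the discussion above stresses, the decision that two provisions actually duplicate or contradict one another remains a human, legal judgment.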
4.3. Digital Transformation of Criminal Justice in Kazakhstan: Systems, Practices, and the Emerging Role of AI
This becomes particularly relevant when drafting or amending the Criminal Procedure Code of the Republic of Kazakhstan. AI systems are able to analyze judicial practice, identifying provisions whose interpretation causes difficulties in real-world application, thereby allowing potential legal uncertainty to be eliminated before a law is passed. This would help ensure a more precise, coherent, and stable legal architecture, reduce legal conflicts, and improve the effectiveness of law enforcement. In the context of criminal proceedings, AI can be applied to the automated analysis of evidence, crime prediction, and the processing of large volumes of data during pre-trial investigations and court hearings.
In 2015, the Unified Register of Pre-Trial Investigations (URPI) was launched, and since 2018 it has allowed criminal cases to be handled in electronic format. Currently, more than 160,000 criminal cases—about 76% of the total—are investigated through this system. All procedural documents are generated via the URPI, decisions are automatically forwarded to the prosecutor, and participants in the proceedings—investigators, prosecutors, and defense attorneys—gain online access to case materials, which increases transparency and reduces the risk of falsification.
Alongside the URPI, a whole range of digital projects has been implemented: the Unified Register of Administrative Proceedings (URAP), the Unified Register of Inspection Subjects and Objects (URISO), the “Electronic Appeals” system, an analytical center, and the SIO PSO platform for inter-agency information exchange. The prosecution service has gained the ability to approve decisions online regarding recognition of suspects, requalification, or termination of investigations—and without such verification, decisions lack legal force. Mobile applications and offline tools (e.g., during crime scene inspections) simplify interactions and eliminate paper-based workflows. Prosecutors’ personal dashboards and analytical services help identify risks, assess crime trends, and monitor inspections, including through the use of geoinformation maps. Since 2021, the e-Obrashenie system has enabled individuals and legal entities to submit requests and complaints to the prosecutor’s office online—from registration to receipt of a response.24
Kazakhstan also operates both the “E-Criminal Case” (E-Qylmystyq is)25—a unified information system created to automate all stages of the criminal process from registration to court proceedings—and the “E-Törelik” system,26 an electronic judicial platform covering all court levels and ensuring automation. Both systems play an essential role in the digitalization of criminal and judicial procedures. Since 2018, a significant part of the criminal process has been fully digitized. As of early 2025, 95% of all cases are conducted electronically.
One of the most prominent achievements in the digitalization of Kazakhstan’s judiciary is the “Judicial Office” service, which implements the concept of a virtual court. Through this platform, citizens can file lawsuits, submit petitions, and pay state fees online. The service also offers e-mediation functions, online hearings, court schedules, electronic powers of attorney, and case search tools. According to Judge Nurzhan Moldakov, a “Judge Assistance” module is now also used in criminal proceedings to help judges select appropriate sentences, and the 20% decrease in prosecutorial appeals over four years demonstrates the effectiveness of these innovations (Bryushko 2024).
However, the electronic system itself does not constitute artificial intelligence. It merely provides automated storage, access, and processing of documents, but does not perform prediction, contradiction analysis, or the generation of recommendations—functions that are characteristic of AI models. Accordingly, any references to the use of AI in Kazakhstan during the period of 2015–2018 should be interpreted as references to digital systems and electronic registries rather than to the deployment of full-scale artificial intelligence models. Meaningful experimentation with AI in criminal and judicial processes became feasible only at a much later stage, once sufficient datasets had been accumulated and appropriate algorithmic solutions had emerged.
Since February 2023, the “Digital Analytics of Judicial Practice” service has been in operation, processing millions of judicial acts to detect anomalies, provide statistical insights, and ensure the unification of judicial practice. The judiciary now receives real-time analytics on similar cases upon case submission. From July 2024, the process of drafting court rulings in administrative violation cases, including traffic offenses, has been automated. AI generates a draft ruling, which the judge only needs to edit and sign. Courts have also implemented secure electronic platforms for filing claims, exchanging documents, and monitoring cybersecurity threats.27
At the meeting of the Supreme Court Chairpersons of SCO member states in China, the Chairperson of the Supreme Court of Kazakhstan, Aslambek Mergaliev, presented the country’s successful experience in implementing AI and digital technologies in the judiciary, including measures for cybersecurity, secure claim submission platforms, and specialized training for judges in information security.28
Since June 2025, the pilot project “Digital Assistant for Investigators” has been operating in Astana. The system analyzes connections between cases, generates recommendations on investigation strategies, and creates document templates, offering schemes for interrogations and forensic examinations.29 Thus, AI assists in case registration, analysis, document preparation, and decision-making, thereby enhancing both the efficiency and transparency of the process. In the near future, these technologies are expected to be formally codified, accompanied by the development of ethical standards in cooperation with international organizations.
On 19 May 2023, during the international scientific and practical conference “Artificial Intelligence and Big Data in the Judicial and Law Enforcement System: Realities and Requirements of the Time”, organized by the Supreme Court of the Republic of Kazakhstan, the Union of Judges, and the Law Enforcement Academy under the General Prosecutor’s Office, a wide range of pressing issues related to the introduction of information technologies and artificial intelligence into the judicial and law enforcement domains were discussed. The forum brought together more than 100 participants, including senior representatives of the judiciary, the prosecution service, government ministries, members of parliament, as well as domestic and foreign scholars.
The discussions focused primarily on the development of electronic justice, the automation of judicial proceedings, the use of big data, the establishment of electronic courts and case-management systems, and the integration of speech recognition technologies and digital workflow solutions. The speakers emphasized that AI and digitalization enhance the efficiency and overall quality of the work of courts and prosecution bodies, support the forecasting of case outcomes, and improve public access to justice. Importantly, they highlighted that AI is not intended to replace judges but to serve as an auxiliary instrument that facilitates informed and timely decision-making.
Another key aspect addressed during the conference was the need to ensure strict compliance with ethical standards and cybersecurity requirements when working with big data and AI technologies. The event underscored the strategic importance of digital transformation for the modernization of the judicial system, the protection of citizens’ rights, and the effective functioning of public institutions.30
In Kazakhstan, a substantial body of academic scholarship has already emerged addressing the use of artificial intelligence in the legal domain, including its potential application in criminal procedure. Although studies that directly examine the use of AI in criminal justice remain limited, a number of works have explored adjacent dimensions such as the digitalization of law-enforcement activities, the legal nature of AI, access to justice, and the application of analytical systems within courts and investigative bodies.
Among these contributions is the article by A.M. Maitanov and T.N. Suleimenov (Maitanov and Suleimenov 2025), which focused on the use of AI in the investigation of crimes related to illicit drug trafficking. The authors examined the capacity of algorithmic systems to optimize evidence collection and detect criminal activity online, thereby illustrating the practical prospects for the integration of AI into pre-trial investigations.
Another relevant study is that of N.O. Dulatbekov and S.N. Bachurin (Dulatbekov and Bachurin 2024), which considered the theoretical foundations for employing AI-based tools within the three-tier structure of Kazakhstan’s judicial and law-enforcement system. Their analysis underscores that AI may support activities at the pre-trial stage, during expert analytical processes, and throughout judicial proceedings, while significantly transforming the logistics and architecture of criminal procedure.
A further important contribution is the article by N.V. Sidorova and A.M. Serikbayev (Sidorova and Serikbayev 2025), which examined access to justice in the context of increasing AI adoption. The authors analyzed how technological solutions can facilitate interaction between citizens and the judicial system, reduce procedural barriers, and minimize time expenditures. These findings hold direct relevance for criminal proceedings, particularly in relation to legal assistance for suspects and defendants.
Similar issues were explored in the study by I.S. Saktaganova, E.V. Mitskaya, and A.B. Saktaganova (Saktaganova et al. 2025), which addresses both the opportunities and risks associated with AI in the administration of justice. The authors emphasize the need for regulatory safeguards, certification of algorithmic systems, and limitations on the involvement of AI in decision-making processes of critical importance, including the imposition of criminal sentences.
The article “Artificial Intelligence in Judicial Proceedings: A Legal Experiment in Singapore, China, and Kazakhstan” (Alekseeva 2025) examined how three jurisdictions introduced artificial intelligence technologies into their judicial systems and identified the legal, practical, and ethical challenges arising from this process. The author demonstrated that the growing interest in AI-supported justice is driven primarily by its potential to enhance institutional efficiency. Technological tools accelerate the processing of case materials, automate routine procedural actions, optimize the distribution of judicial workload, and improve citizens’ access to judicial services. The use of big data, intelligent document-analysis systems, and online judicial platforms contributes not only to increased procedural speed but also to improved quality of legal service delivery. This is particularly evident in Singapore and China, where AI-based tools are already employed for preliminary case assessment, evidence processing, and the conduct of online hearings.
At the same time, the article underscores the substantial risks associated with the integration of these technologies. The author notes that excessive automation may undermine the indispensable role of judicial discretion, which is especially critical in the sensitive domain of adjudication. The problem of algorithmic opacity—the so-called “black box”—raises concerns regarding the fairness, transparency, and reasoned nature of decisions in which AI systems play a preparatory or advisory role. Significant attention is also paid to risks related to confidentiality and data security, as judicial processes involve large volumes of sensitive personal information. Furthermore, digital inequality may create new barriers to accessing justice for socially vulnerable groups who may lack the ability or resources to use digital judicial services.
The author concludes that while international experience demonstrates the clear practical benefits of AI in judicial proceedings, its application must be subject to strict regulatory safeguards. AI should function exclusively as an auxiliary instrument and must neither replace judicial discretion nor alter the fundamental principles of fair trial. Safe and responsible implementation requires algorithmic transparency, institutional oversight, strong data-protection guarantees, and equal access to digital services. By comparing the experiences of Singapore, China, and Kazakhstan, the author argues that successful integration of AI into judicial systems is achievable only when technological advancement is accompanied by a coherent legal framework and strict adherence to human-rights standards.
5. Gaps and the Need for Legal Recognition of AI in the Criminal Procedure Code of Kazakhstan
5.1. Identifying Gaps and Developing Regulatory Strategies
The integration of AI into criminal procedure law represents not merely a technological innovation but a profound legal and ethical transformation. To ensure that the use of AI complies with the principles of legality, transparency, and accountability, it must be formally enshrined in the Criminal Procedure Code. Two approaches are possible: first, introducing targeted amendments to existing articles of the Code—particularly those concerning the protection of citizens’ rights and freedoms, the safeguarding of private life, the use of technical means to record procedural actions, and clarifying the powers of judges and prosecutors in the context of AI use; second, developing a separate article or chapter dedicated to the application of digital technologies and AI in criminal proceedings.
5.2. Protection of Rights and Freedoms (Articles 11 and 24 of the Criminal Procedure Code of Kazakhstan)
For example, Article 11, “Protection of the rights and freedoms of citizens in criminal proceedings”, could be supplemented with a provision stating that the use of automated systems, including AI, must not violate the procedural guarantees of the parties—the suspect, the accused, the victim, and other participants in the process. All decisions must be made exclusively by a human, not by an algorithm.
5.3. Privacy and the Inviolability of Private Life (Article 16 of the Criminal Procedure Code of Kazakhstan)
In the context of introducing AI technologies into Kazakhstan’s criminal justice process, Article 16 of the CPC RK, “Inviolability of private life. Confidentiality of correspondence, telephone conversations, postal, telegraphic and other communications”, requires special attention. AI technologies used in data processing—including geolocation analysis, audio interception, automated filtering of large datasets of messages and images—significantly expand the technical capabilities of investigative bodies and the prosecution service. However, they also heighten the risks of arbitrary interference in private life, which necessitates clear legal regulation.
The proposed addition to Article 16 of the CPC RK would state:
The use of automated information systems, including artificial intelligence technologies, for the collection, storage, analysis, and interpretation of data affecting a person’s private life shall be permitted only on the basis of judicial authorization in cases provided for by this Code. The algorithms employed must ensure transparency, technical verifiability, and compliance with the principles of legality, proportionality, and minimal interference in private life. Any person with respect to whom such technologies were applied shall have the right to be notified and to appeal actions that resulted in the restriction of the inviolability of private life.
This addition secures a balance between the efficiency of investigation and the protection of human rights. It aligns with:
- the Constitution of the Republic of Kazakhstan (Article 18)—guaranteeing the protection of private life and the confidentiality of correspondence (Constitution of the Republic of Kazakhstan 1995);
- international standards (e.g., Article 8 of the European Convention on Human Rights);31
- the provisions of the Law of the Republic of Kazakhstan “On Personal Data and Their Protection” (Law of the Republic of Kazakhstan “On Personal Data and Its Protection” 2013).
5.4. Judicial Oversight and the Presumption of Innocence (Articles 53 and 19 of the Criminal Procedure Code of Kazakhstan)
To mitigate the risk of violating citizens’ rights and freedoms, it is proposed to supplement Article 24 with the following provision: “The use of digital and intelligent technologies, including artificial intelligence, shall not limit or diminish the constitutional rights and freedoms of participants in criminal proceedings”. This amendment corresponds to international standards, including the position of the European Court of Human Rights regarding interference with the digital dimension of individual rights (Kubasheva 2025).
The court, as the guarantor of legality and personal rights, must oversee the proper application of AI. Accordingly, it is suggested to supplement Article 53 with the following clause: “The court shall exercise control over the lawful use of artificial intelligence technologies in the course of criminal proceedings and may take measures in cases of violations arising from improper, inaccurate, or non-transparent algorithmic data processing”. This provision codifies the principle of technological accountability and prevents a “black box” situation, where algorithms become inaccessible for assessment. Professors of the Department of Constitutional and Civil Law at L.N. Gumilyov Eurasian National University rightly emphasize that responsibility for decisions made with the involvement of AI must remain with the judge, since AI itself cannot be a subject of legal responsibility, and we fully share this position (Saktaganova et al. 2025).
In light of Article 19 of the CPC RK, which enshrines the principle of the presumption of innocence, the use of AI technologies in criminal proceedings must be strictly limited to frameworks that exclude automatic issuance of indictments or forecasts without proper judicial evaluation. AI must not create presumptions of guilt on the basis of algorithmic predictions (e.g., recidivism risk or criminal profiling), as this would contravene the fundamental principle that a person is presumed innocent until proven guilty in accordance with the law. Therefore, any AI-generated conclusions can serve only as auxiliary tools without independent evidentiary value. It is advisable to include a provision stipulating that algorithmic forecasts and analytical reports obtained through AI cannot be considered as proof of guilt without proper judicial evaluation and do not relieve the prosecution of its burden of proof.
6. Specific Legislative Proposals for the Criminal Procedure Code of Kazakhstan
6.1. Regulation of Digital Evidence (Articles 125 and 120 of the Criminal Procedure Code of Kazakhstan)
Modern AI algorithms, actively used for analyzing audio, video, and digital traces, necessitate clarification of the evidentiary framework in the CPC RK to legitimize the inclusion of such data in the process of proof. In this regard, Article 125 could be supplemented with the following norm: “Information obtained through the use of artificial intelligence technologies, including the analysis of images, video recordings, audio files, and digital traces, may be recognized as physical evidence, provided that its authenticity, reliability, and reproducibility are confirmed”. Such a formulation enables the admissibility of digital evidence while safeguarding procedural guarantees.
In Article 120, “Documents”, it would be appropriate to stipulate that documents obtained with the use of artificial intelligence technologies may be considered relevant to a criminal case. However, such documents should be assessed on an equal footing with other evidence, without being granted special status. This would allow automated findings to be used alongside other written evidence, provided they are subject to expert review.
6.2. Powers of the Prosecutor (Article 58 of the Criminal Procedure Code of Kazakhstan)
In the context of digitalization and the integration of AI technologies into criminal proceedings, the role of the prosecutor acquires additional dimensions. Beyond the traditional functions of supervising the legality of pre-trial investigations and maintaining the public prosecution, prosecutors are now entrusted with overseeing the lawful use of AI systems in criminal proceedings. This includes verifying the admissibility and reliability of evidence obtained through AI, ensuring compliance with the principles of equality of arms and the presumption of innocence, and preventing violations of the rights of suspects, defendants, and victims as a result of automated data processing. Prosecutors must also assess whether algorithms contain bias, discriminatory elements, or breaches of confidentiality, and, where necessary, initiate further inspections or expert evaluations. In the future, the prosecutorial authorities may be assigned the function of certifying or accrediting AI systems used in the criminal procedure sphere.
Accordingly, it seems appropriate to amend Article 58 of the Criminal Procedure Code of the Republic of Kazakhstan, which regulates prosecutorial powers, by introducing a specific clause concerning the use of AI technologies:
“The prosecutor may employ automated analytical tools and artificial intelligence-based technologies in the exercise of supervisory and procedural functions, provided that such technologies are applied exclusively as auxiliary means, do not replace the prosecutor’s personal legal assessment, do not diminish the procedural rights of participants in the proceedings, and comply with the principles of legality and objectivity. The use of algorithms that influence decision-making must be transparent, reproducible, and subject to review by the court or a superior prosecutor”.
This addition reflects both international and national standards of law enforcement, including the principles articulated in the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems (Council of Europe 2018). It underscores that AI may be used by the prosecution service (e.g., for big data analysis, detection of repeat offenses, or risk assessment) but must not undermine the presumption of innocence or substitute the conscious decision-making of an official.
6.3. Powers of Investigators and Inquirers (Articles 59, 60, 62, and 63 of the Criminal Procedure Code of Kazakhstan)
Moreover, the digitalization of criminal proceedings necessitates the development of an appropriate regulatory framework at the pre-trial investigation stage, where participants from the prosecutorial side—investigators, inquirers, the head of the investigative department, and the head of the inquiry body—play an active role. These officials bear responsibility for making key procedural decisions, collecting, examining, and evaluating evidence, monitoring deadlines, and coordinating actions within the framework of a criminal case.
The use of AI technologies in their work is already becoming an integral part of day-to-day operations—whether in automating data analysis, risk prediction, managing digital document workflows, or providing technical support in recording investigative actions. However, the current CPC RK lacks provisions that directly regulate such forms of support and ensure their legal legitimacy. This gap creates risks of procedural violations, infringements of participants’ rights, and potential abuse.
Therefore, in order to reflect the use of AI in the work of the head of the investigative department and investigators, as well as the head of the inquiry body and inquirers, it appears advisable to introduce targeted amendments to the CPC RK. These amendments should take into account the procedural status of the aforementioned officials, define permissible forms of AI use in their activities, and establish safeguards of oversight, accountability, and protection of the rights of participants in criminal proceedings.
In particular, it is proposed to supplement Article 59 of the CPC RK, which regulates the powers of the head of the investigative department, with a provision allowing the use of AI systems for the analysis of criminal cases, the automated monitoring of compliance with procedural deadlines and investigative stages, as well as for the assessment of procedural decisions made by investigators. At the same time, it must be expressly stated that the use of such technologies does not release the head of the department from the duty to exercise personal procedural oversight and to make decisions strictly in accordance with the law. This would allow AI to be employed as an analytical tool without substituting the official’s authority.
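The automated monitoring of procedural deadlines envisaged above can be illustrated with a minimal sketch. The record fields, statutory periods, and warning threshold here are illustrative assumptions, not provisions of the CPC RK; the output is advisory only, consistent with the requirement that the head of the department retain personal procedural oversight.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CaseRecord:
    """Hypothetical criminal-case record; field names are illustrative."""
    case_id: str
    investigation_start: date
    deadline_days: int  # assumed statutory investigation period

def overdue_alerts(cases: list[CaseRecord], today: date, warn_days: int = 10) -> list[str]:
    """Flag cases whose investigation deadline has passed or is approaching.

    Advisory output only: the decision on any procedural consequence
    remains with the responsible official, not the algorithm.
    """
    alerts = []
    for c in cases:
        deadline = c.investigation_start + timedelta(days=c.deadline_days)
        remaining = (deadline - today).days
        if remaining < 0:
            alerts.append(f"{c.case_id}: deadline exceeded by {-remaining} day(s)")
        elif remaining <= warn_days:
            alerts.append(f"{c.case_id}: {remaining} day(s) remaining")
    return alerts
```

A system of this kind surfaces risks of missed deadlines across a large caseload without itself taking any procedural action.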
Article 60 of the CPC RK, concerning the powers of investigators, should be supplemented with a norm permitting the use of AI technologies in information retrieval, evidence analysis, the modeling of event scenarios, and the preparation of draft procedural decisions. At the same time, the independence of investigators must remain paramount, and AI cannot serve as the basis for decision-making in the absence of their participation.
Similarly, Article 62, which regulates the activities of the head of the inquiry body, should provide for the use of AI-based digital systems to oversee the work of inquirers, monitor inquiry deadlines, and manage electronic document workflows. Such systems must be applied strictly in compliance with requirements for the protection of personal data, privacy, and other constitutional guarantees.
Article 63 of the CPC RK, devoted to the powers of inquirers, should establish the right to use AI for the preliminary analysis of information, the assessment of risks of repeat offenses, and the initial systematization of evidence. At the same time, final legal conclusions—including the initiation of pre-trial investigations, the qualification of offenses, and other significant decisions—must remain exclusively within the competence of inquirers as procedurally independent actors.
6.4. Use of Artificial Intelligence in Investigative and Judicial Proceedings (Articles 243, 210, 252, 219, 270, and 287 of the Criminal Procedure Code of Kazakhstan)
In all cases, a general provision must be codified stating that AI technologies cannot replace the decision-making will of an official. Automated decision-making that affects the rights and freedoms of citizens without the direct participation of an authorized person is impermissible. These amendments are aimed at ensuring technological support for law enforcement agencies without undermining the principles of criminal procedure or the constitutional rights of individuals.
Modern investigative actions are increasingly linked to the use of AI technologies, particularly in the collection and processing of digital evidence, including audio and video recordings, metadata, and the content of communications. In this regard, it is proposed to introduce targeted amendments to a number of procedural provisions of the CPC RK in order to legally establish the admissibility, limits, and conditions of AI use in actions such as covert surveillance, interception and retrieval of information transmitted via telecommunications networks, the appointment of expert examinations, and the conduct of interrogations, inspections, searches, or seizures. For example, Article 243 should be supplemented with a provision stipulating that AI may be used in the interception of communications only as an auxiliary tool. Software solutions must not substitute legal procedures or human judgment and must ensure both the protection of personal data and the possibility of appealing the results.
AI can be used as an auxiliary tool in recording testimony. It is proposed to amend Article 210 of the CPC RK as follows: “The use of technologies based on artificial intelligence for the automatic transcription and analysis of speech during interrogation shall be permitted, provided that the accuracy of processing and the authenticity of the recorded information are ensured”. This measure could enhance the efficiency of investigative actions and guarantee proper documentation. Where doubts arise regarding the correctness of algorithmic conclusions, the court must have the ability to verify them.
Article 252 should be supplemented with a norm requiring expert oversight of the use of intelligent systems in digital searches, particularly when extracting information from electronic devices. The use of AI must be strictly limited to the scope of the specific warrant, and access to data unrelated to the case must be excluded.
In Article 219, it is advisable to establish that, during the examination of digital objects (such as mobile phones, servers, or cloud storage systems), AI-based software tools may be applied for sorting, searching, and classifying information. Such actions must be accompanied by logging of all processing stages and verification of the accuracy of the extracted data. AI can significantly accelerate the analysis of large volumes of digital files (for instance, those obtained from a crime scene), but automation without proper documentation and independent verification could undermine the reliability of the information obtained.
Article 270, “Appointment of Expert Examination”, should be amended to include a provision allowing the use of AI-based software solutions, provided that the criteria of reproducibility, algorithmic transparency, and mandatory human expert oversight are observed.
It is also proposed to amend Article 287 of the CPC RK with the following: “If algorithms of artificial intelligence were used during the initial expert examination, the court shall have the right to appoint an additional or repeated expert examination with the involvement of specialists in digital technologies to verify the accuracy of the results obtained and the methods applied”. This would ensure scientific verification and the legal reliability of AI-based findings.
These legislative proposals are aimed at maintaining a balance between the efficiency of digital investigative actions and the safeguarding of constitutional guarantees of the individual, including the right to privacy, the protection of personal data, and due legal process. Of particular relevance in the context of digitalization and the integration of AI are the articles relating to remote access, the analysis of personal data, and digital evidence. Their legal reinforcement will ensure respect for both the effectiveness of investigations and the protection of fundamental rights enshrined in the Constitution and the CPC.
6.5. Use of Artificial Intelligence in Record-Keeping and Protocol Management (Articles 199 and 347 of the Criminal Procedure Code of Kazakhstan)
AI can also be applied to the automation of procedural formalities. The use of AI in protocol drafting under the CPC RK opens new opportunities for enhancing the accuracy, completeness, and objectivity of recording procedural actions. Within the framework of Article 199, which regulates the preparation of investigation protocols, AI can be employed for the automatic conversion of audio and video recordings into text, the creation of real-time draft protocols, and the structuring of recorded information in accordance with statutory requirements. This is particularly relevant for interrogations, inspections, searches, and other actions where precision of wording is of crucial importance.
In the context of Part 3 of Article 199, which establishes requirements for protocol content, AI could ensure automatic verification of mandatory elements such as date, time, place, participants, and signatures. It could analyze protocols for logical gaps or inconsistencies and notify the investigator about missing elements. However, legal responsibility for the document’s content must remain exclusively with the official. Part 6 of Article 199, addressing objections to the protocol, could also be expanded with digital functions: AI may record parties’ objections to specific parts of the text and correlate them with the original audio or video recordings, thereby facilitating verification of the accuracy of the record. This would allow a more precise reflection of participants’ positions and strengthen confidence in the protocol as a source of evidence.
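The automatic verification of mandatory protocol elements described above amounts to a simple completeness check. The following sketch assumes an illustrative set of required fields and a dictionary representation of a draft protocol; the actual list of mandatory elements must follow the statutory text of Article 199.

```python
# Illustrative mandatory elements under Part 3 of Article 199 CPC RK
# (field names are assumptions made for this sketch).
REQUIRED_ELEMENTS = ("date", "time", "place", "participants", "signatures")

def missing_elements(protocol: dict) -> list[str]:
    """Return the mandatory elements that are absent or empty in a draft.

    The notification is advisory: legal responsibility for the document's
    content remains exclusively with the official who signs it.
    """
    return [field for field in REQUIRED_ELEMENTS if not protocol.get(field)]
```

For a draft lacking signatures, such a check would notify the investigator before the protocol is finalized, without altering the document itself.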
Under Part 5 of Article 199, which regulates the use of audio and video recordings of investigative actions, AI could not only automatically transcribe recordings but also synchronize them with the text of the protocol, create time stamps, and—subject to participants’ consent—recognize speakers by voice or image. These technologies would enhance the objectivity of the recorded information and reduce the risk of distortion.
Such approaches may also be applied at the judicial stages of criminal proceedings, including the recording of court hearings. Here, AI could perform the function of automatic transcription, reducing the workload of court clerks and providing judges with tools for rapid search and analysis of information presented during proceedings. Accordingly, it is proposed to amend Article 347 of the CPC RK to state: “The recording of court hearings may be carried out using automated speech recognition systems based on artificial intelligence technologies, subject to mandatory subsequent verification and approval of the final text by the judge”. This would optimize the process without undermining legal validity.
At the same time, the use of AI must remain strictly within the framework of the law: the protocol is a piece of evidence, and therefore AI cannot itself sign the document or alter information without human confirmation. A protocol generated with the assistance of AI must be authenticated by authorized participants, and any automated adjustments must be transparent and documented.
Thus, AI in protocol drafting can serve an auxiliary function, significantly facilitating and accelerating the process of recording, while not replacing human involvement or violating legal procedures. Considering the potential risks associated with data reliability and security, its application should be regulated by specific provisions of the CPC, which must define both the possibilities and the limitations of such digital solutions.
The question of how criminal procedure norms can be integrated into artificial intelligence models is central to ensuring the procedural protection of personal data, preventing arbitrary interference with private life, and safeguarding the rights of participants in criminal proceedings. Such integration is feasible only if algorithmic systems are designed from the outset in conformity with legal requirements and are technically capable of implementing them. To meet these conditions, AI architectures must incorporate mechanisms for automatic logging of all actions, strict differentiation of access levels, mandatory user authentication, filtering of personal data in accordance with established access categories, and the capacity to ensure verifiability of analytical results and reproducibility of algorithmic outputs. Only under these conditions can AI be employed in criminal procedure without jeopardizing procedural guarantees or fundamental rights.
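The architectural requirements listed above—authentication, differentiated access levels, filtering of personal data, and verifiable logging—can be sketched in miniature. The role names, field categories, and hash-chained log are illustrative design assumptions, not a description of any deployed system.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditedDataAccess:
    """Sketch of role-based filtering with a tamper-evident access log."""

    # Fields visible per access level (illustrative assumption).
    ROLE_FIELDS = {
        "investigator": {"case_id", "charges", "suspect_name", "evidence_refs"},
        "analyst": {"case_id", "charges"},  # personal data filtered out
    }

    def __init__(self):
        self._log: list[dict] = []
        self._prev_hash = "0" * 64

    def query(self, user: str, role: str, record: dict) -> dict:
        """Return only the fields the role may see, and log the access."""
        if role not in self.ROLE_FIELDS:
            raise PermissionError(f"unknown role: {role}")
        visible = {k: v for k, v in record.items() if k in self.ROLE_FIELDS[role]}
        entry = {
            "user": user,
            "role": role,
            "fields": sorted(visible),
            "at": datetime.now(timezone.utc).isoformat(),
            "prev": self._prev_hash,
        }
        # Hash chaining makes retroactive edits to the log detectable,
        # supporting the verifiability requirement discussed above.
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._log.append(entry)
        return visible
```

The point of the design is that filtering and logging happen inside the access path itself, so no query of case data can bypass either safeguard.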
International technological solutions demonstrate that this form of integration is achievable. Platforms such as DeepJudge AI and Legora AI illustrate the potential for combining complex legal reasoning with AI architectures capable of monitoring compliance with procedural norms, structuring evidence in accordance with legal categories, and analyzing large volumes of judicial and investigative materials, while maintaining transparency of algorithmic reasoning. These systems employ secure data-storage mechanisms, maintain detailed operational logs, and ensure explainability of results, thereby enabling their use as auxiliary tools in judicial and investigative activities without the risk of substituting the will or discretion of a procedural authority.
With respect to national information systems in Kazakhstan—including the Unified Register of Pre-Trial Investigations, the Unified Register of Administrative Proceedings, the Unified Register of Subjects and Objects of Inspections, and other law-enforcement information systems of the URISO type—foreign technological solutions cannot be adopted directly due to differences in regulatory frameworks and information-security requirements. Nevertheless, their architectural principles are highly valuable for the development of domestic AI modules. These registries already operate in digital form and possess the infrastructural capacity to support the implementation of intelligent subsystems capable of automatically analyzing large datasets of criminal cases, identifying logical inconsistencies, monitoring compliance with procedural deadlines, generating analytical models of criminogenic patterns, and recording all operations in a secure audit log.
The technical approaches implemented in DeepJudge AI and Legora AI—including algorithmic explainability, secure document structuring, procedural filters, automatic activity logging, and built-in internal-audit tools—can significantly accelerate the creation of domestic digital solutions inherently aligned with procedural form and tailored to the requirements of the Criminal Procedure Code of the Republic of Kazakhstan.
6.6. Introduction of a Formal Definition of Artificial Intelligence and a Systemic Approach (Article 7 of the Criminal Procedure Code of Kazakhstan)
Incorporating a definition of artificial intelligence into Article 7 of the CPC RK, aligned with the terminology of sectoral legislation, appears both necessary and timely. At the current stage of digitalization of the legal system, AI is increasingly applied in law enforcement practice, including the automated analysis of evidence, digital protocol drafting, risk prediction, and the formation of preliminary legal assessments. In this context, legal certainty regarding the concept of “artificial intelligence” becomes a key prerequisite for its proper integration into criminal procedural mechanisms.
The proposed formulation—“artificial intelligence is a software or hardware-software complex capable of performing analytical, predictive, and learning tasks not limited to a pre-defined algorithm”—is already codified in sectoral regulatory acts of the Republic of Kazakhstan and reflects the technological specificity of AI systems. The inclusion of this definition in Article 7 of the CPC RK would ensure consistency of terminology across different branches of legislation and eliminate legal uncertainty in the use of AI within criminal proceedings.
In addition, the text of the Code should explicitly stipulate that the use of AI in criminal justice is permissible only under the condition of strict compliance with fundamental procedural principles—legality, judicial protection of human rights and freedoms, safeguarding of citizens’ rights in criminal proceedings, inviolability of private life, presumption of innocence, and adjudication based on adversarial proceedings and equality of arms. Given that artificial intelligence, by its very nature, can operate on autonomous algorithms, its deployment in the legal system must be firmly confined to the boundaries of human oversight and procedural guarantees. Without clear regulatory safeguards, risks may arise, including the substitution of human decisions with machine outputs, unjustified interference with the right to defense, or the denial of an objective legal evaluation.
We assume that embedding a definition of AI in Article 7 of the CPC RK would not only create the basis for its regulated application within criminal justice, but also open the way for developing special provisions that establish the procedures and limits of using intelligent digital systems. This would contribute to legal certainty, transparency, and the sustainable advancement of the digital transformation of criminal proceedings in Kazakhstan.
The creation of a separate article or chapter in the CPC RK devoted to the use of digital technologies and artificial intelligence represents one of the systemic approaches to integrating AI into criminal proceedings. Such an approach would make it possible to codify the principles, limits, and admissibility of digital solutions across different stages of the process—from pre-trial investigation to court proceedings. The introduction of such a provision could include regulations on the aims and objectives of digitalization in criminal justice, a description of permissible AI tools, restrictions on their application, as well as procedures for judicial or prosecutorial oversight of their use. Special attention should be given to ensuring confidentiality, protecting personal data, and establishing mechanisms for appealing decisions in which AI has played a role. This approach would concentrate legal regulation in one part of the Code, making it clear and accessible to all participants in criminal proceedings—from investigators, inquirers, prosecutors, and judges to defense lawyers, suspects, and victims.
The modeling of criminal procedure using computational models and algorithms presupposes the identification of key procedural stages that may be automated. These stages include the registration of a criminal case, the classification and preliminary analysis of case materials, the preparation of draft procedural decisions, the assignment of forensic examinations, the monitoring of statutory deadlines, and the allocation of tasks among investigators and prosecutors. The algorithms employed should be modular, with each procedural element represented as an independent, verifiable function capable of reproducing all intermediate computational steps. They must also ensure full traceability by recording logs, data sources, and decision-making criteria. A further essential requirement is the differentiation of user access in accordance with procedural status, as well as the alignment of algorithmic operations with legislative requirements, including compliance with statutory deadlines, the presumption of innocence, and evidentiary standards.
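The modular design described above — each procedural element as an independent, verifiable function, with full traceability of intermediate steps — can be sketched as a pipeline. Step names, field names, and the 60-day limit are hypothetical placeholders, not provisions of the CPC RK.

```python
from typing import Callable, Dict, List, Tuple

# Each procedural element is an independent, verifiable function.
def register_case(case: Dict) -> Dict:
    return {**case, "registered": True}

def check_deadlines(case: Dict) -> Dict:
    # Flag, never decide: deadline compliance is surfaced to a human reviewer.
    overdue = case.get("days_elapsed", 0) > case.get("statutory_limit_days", 60)
    return {**case, "deadline_flag": overdue}

def draft_decision(case: Dict) -> Dict:
    return {**case, "draft": f"Draft for case {case['id']} (pending human review)"}

def run_pipeline(case: Dict,
                 steps: List[Callable[[Dict], Dict]]) -> Tuple[Dict, List[str]]:
    """Runs the modular steps in order and records a trace of every stage,
    so each intermediate computation can be reproduced and audited."""
    trace: List[str] = []
    for step in steps:
        case = step(case)
        trace.append(step.__name__)
    return case, trace

case, trace = run_pipeline(
    {"id": "K-2025-001", "days_elapsed": 75, "statutory_limit_days": 60},
    [register_case, check_deadlines, draft_decision],
)
assert case["deadline_flag"] is True
assert trace == ["register_case", "check_deadlines", "draft_decision"]
```

Because each step is a pure function of its input, any stage can be re-run in isolation for verification, which is what the traceability requirement amounts to in practice.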
Training datasets are derived from real, completed criminal cases with full anonymization of personal data. The dataset is divided into training, testing, and validation subsets, while each case is annotated in accordance with stages of investigation, forensic results, and judicial outcomes. The algorithms are applied to classify cases, assess the risk of recidivism, predict the likelihood of appeals, generate draft procedural documents, and detect anomalies in procedural conduct, such as deadline violations or the absence of required evidence. All algorithmic outputs are subject to mandatory human review by an investigator or prosecutor, while the models undergo ongoing updating based on newly completed cases and are systematically evaluated for accuracy, completeness, and impartiality.
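The dataset preparation described above — anonymized, annotated cases partitioned into training, testing, and validation subsets — can be sketched with a reproducible split. The record fields and split proportions are illustrative assumptions.

```python
import random
from typing import Dict, List, Tuple

def split_cases(cases: List[Dict], train: float = 0.7, test: float = 0.15,
                seed: int = 42) -> Tuple[List[Dict], List[Dict], List[Dict]]:
    """Sketch of the split into training / testing / validation subsets.
    A fixed seed keeps the partition reproducible across audits."""
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    a, b = int(n * train), int(n * (train + test))
    return shuffled[:a], shuffled[a:b], shuffled[b:]

# Each record is anonymized and annotated by stage and judicial outcome
# (hypothetical schema for illustration only).
cases = [{"case_id": f"anon-{i}", "stage": "completed",
          "outcome": "conviction" if i % 2 else "acquittal"}
         for i in range(100)]
train_set, test_set, val_set = split_cases(cases)
assert len(train_set) == 70 and len(test_set) == 15 and len(val_set) == 15
assert len(train_set) + len(test_set) + len(val_set) == len(cases)
```

The fixed random seed is the technical counterpart of the reproducibility requirement: an auditor given the same anonymized corpus can reconstruct exactly the same partition.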
At the regulatory level, it is proposed to introduce explicit provisions stipulating that algorithms function solely as auxiliary tools and that all legally significant decisions remain within the exclusive competence of a human decision-maker. Such provisions may be incorporated into the articles of the Criminal Procedure Code of the Republic of Kazakhstan concerning privacy, the protection of the rights of participants in criminal proceedings, and judicial oversight of the legality of procedural actions. All algorithmic operations must be logged to ensure transparency, while participants in criminal proceedings must retain the right to challenge decisions generated or initiated by AI tools. Algorithmic systems should minimize intrusions into private life, ensure the protection of personal data through anonymization and differentiated access, and undergo periodic audits for compliance with international standards, including the requirements of the European Convention on Human Rights and the EU Artificial Intelligence Act.
As a result, the application of AI-based modeling to criminal procedure allows for the integration of real practical aspects of case processing, including the use of structured training datasets, thereby maintaining an appropriate balance between investigative efficiency, the automation of document management, and the procedural guarantees afforded to participants in criminal proceedings.
7. Conclusions
Thus, as scholars rightly note, digitalization contributes to strengthening the transparency and independence of the judicial system; however, its effectiveness depends directly on legal certainty and institutional safeguards. In support of this view, it should be emphasized that digital technologies in criminal proceedings must not merely automate procedures but should serve as instruments for reinforcing legal standards and ensuring real access to justice (Zhursimbayev et al. 2025).
Kazakhstan, in its pursuit of modernizing the legal system and advancing state digitalization, is well positioned to become one of the first CIS countries to establish a dedicated legislative framework for the use of AI in criminal proceedings. This would enhance trust in the judiciary, safeguard citizens’ rights in the context of digitalization, and represent a significant step toward more efficient and modern justice.
The integration of artificial intelligence technologies into criminal proceedings has substantial implications for enhancing the efficiency, transparency, and accuracy of procedural activities. The use of AI enables the accelerated processing of large volumes of data, the automation of procedural document preparation, the identification of recidivism risks, and the facilitation of oversight over procedural deadlines. Digitalization also reduces the likelihood of falsification, strengthens the transparency of interactions among participants in the criminal process, and improves access to information for both prosecutorial authorities and the public.
Nevertheless, the study identified a number of constraints. The effectiveness of AI systems is directly dependent on the quality and scope of the training datasets; incomplete or biased data may lead to erroneous conclusions. Automated algorithms cannot fully substitute for human legal judgment, which necessitates continuous oversight by judges, prosecutors, and investigators. There are also significant legal and ethical limitations, including the observance of the principles of the presumption of innocence, the right to privacy, the protection of personal data, and the transparency of algorithmic decision-making. Furthermore, the use of AI in criminal proceedings may create risks associated with technological dependency, difficulties in the verification of algorithmic mechanisms due to the “black box” problem, and the need for dedicated regulatory and ethical standards.
Overall, the findings demonstrate the substantial potential for integrating AI into criminal justice, while simultaneously underscoring the necessity of strict adherence to procedural safeguards, ensuring algorithmic transparency, and maintaining continuous oversight by competent authorities.
In conclusion, the integration of artificial intelligence into the criminal procedure system of the Republic of Kazakhstan opens wide opportunities for improving the efficiency, timeliness, and objectivity of justice. However, such changes cannot occur spontaneously or solely as a result of technological initiatives—they require clear legal regulation, the creation of stable normative frameworks, and the development of procedural mechanisms that guarantee the protection of individual rights and freedoms at all stages of the criminal process. In this way, the digital transformation of criminal proceedings should not undermine the foundations of justice but, on the contrary, should strengthen them by ensuring higher-quality investigations, reducing judicial errors, and increasing public confidence in the judiciary.
Author Contributions
Conceptualization, G.N.M.; methodology, G.N.M. and Y.T.A.; formal analysis, A.K.Z. and Y.T.A.; investigation, A.K.Z. and Y.T.A.; writing—original draft preparation, G.N.M.; writing—review and editing, A.K.Z., N.M.A. and Y.T.A.; supervision, G.N.M.; project administration, G.N.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan within the framework of the program-targeted financing project BR27101389, “The introduction of artificial intelligence tools into the legislative process of the Republic of Kazakhstan to optimize efficiency and enhance the transparency of legislation.”
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
Generative AI tools (Grammarly Premium and ChatGPT-5) were used for language editing, including grammar, style, and formatting improvements; the authors reviewed all outputs and take full responsibility for the final content.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Alekseeva, Ekaterina Vladimirovna. 2025. Artificial Intelligence in Judicial Proceedings: Experience of Legal Experiments in Singapore, China, and Kazakhstan. Legal Engineering 19: 549–53. [Google Scholar]
- Alkhazraji, Ibrahim Abdulla Mohammad Aldallal, and Mohd Yamani bin Yahya. 2024. The Effect of Big Data Analytics on Predictive Policing: The Mediation Role of Crisis Management. Revista de Gestão Social e Ambiental 18: e6033. [Google Scholar] [CrossRef]
- Ashley, Kevin D. 1990. Modeling Legal Argument: Reasoning with Cases and Hypotheticals, rev. ed. Cambridge: MIT Press, pp. 1–350. Available online: https://apps.dtic.mil/sti/tr/pdf/ADA250559.pdf (accessed on 14 October 2025).
- Atkinson, Katie, and Trevor Bench-Capon. 2007. Practical Reasoning about Cases in Law. Journal of Logic and Computation 17: 1–32. [Google Scholar]
- Barocas, Solon, and Andrew D. Selbst. 2016. Big Data’s Disparate Impact. California Law Review 104: 671–732. [Google Scholar] [CrossRef]
- Bench-Capon, Trevor. 1991. Knowledge-Based Systems and Legal Applications. Knowledge-Based Systems 4: 118–24. [Google Scholar]
- Bench-Capon, Trevor, and Giovanni Sartor. 2003. A Model of Legal Reasoning with Cases Incorporating Theories and Values. Artificial Intelligence 150: 97–143. [Google Scholar] [CrossRef]
- Berk, Richard. 2018. Machine Learning Risk Assessments in Criminal Sentencing. Federal Sentencing Reporter 30: 222–31. Available online: https://www.researchgate.net/publication/275352025_Machine_Learning_Forecasts_of_Risk_to_Inform_Sentencing_Decisions (accessed on 14 October 2025).
- Bíró, Gábor. 2024. The First AI Winter: How Early Expectations Collided with Reality. Available online: https://www.birow.com/ru/az-elso-ai-tel (accessed on 14 October 2025).
- Brennan, Tim, and William Dieterich. 2017. Correctional Offender Management Profiles for Alternative Sanctions (COMPAS). Available online: https://www.researchgate.net/publication/321528262_Correctional_Offender_Management_Profiles_for_Alternative_Sanctions_COMPAS (accessed on 8 June 2025).
- Bryushko, Mark. 2024. Artificial Intelligence Helps Kazakhstani Judges Deliver Verdicts. Baq.kz. Available online: https://rus.baq.kz/iskusstvennyy-intellekt-pomogaet-kazahstanskim-sudyam-vynosit-prigovory_300002163/ (accessed on 10 June 2025).
- Constitution of the Republic of Kazakhstan. 1995. Available online: https://online.zakon.kz/Document/?doc_id=1005029 (accessed on 14 October 2025).
- Copelin, Meagan T. 2025. Bias and Fairness in Algorithmic Decision-Making: Legal and Technical Perspectives. AI & Society. Available online: https://www.researchgate.net/publication/392130617_Bias_and_Discrimination_in_Algorithmic_Decision-Making_A_Legal_Perspective (accessed on 22 November 2025).
- Council of Europe. 2018. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment. Available online: https://rm.coe.int/ru-ethical-charter-en-version-17-12-2018-mdl-06092019-2-/16809860f4 (accessed on 14 October 2025).
- Criminal Procedure Code of the Republic of Kazakhstan. 2014. July 4. Available online: https://adilet.zan.kz/rus/docs/K1400000231 (accessed on 14 October 2025).
- Doshi-Velez, Finale, and Been Kim. 2017. Towards a Rigorous Science of Interpretable Machine Learning. arXiv arXiv:1702.08608. Available online: https://arxiv.org/abs/1702.08608 (accessed on 14 October 2025).
- Douglas, Tom, Jonathan Pugh, Ilina Singh, Julian Savulescu, and Seena Fazel. 2020. Risk Assessment Tools in Criminal Justice and Forensic Psychiatry: The Need for Better Data. European Psychiatry 63: e20. [Google Scholar] [CrossRef]
- Draft Law of the Republic of Kazakhstan “On Artificial Intelligence”. 2025. Available online: https://online.zakon.kz/Document/?doc_id=34868071 (accessed on 4 May 2025).
- Dressel, Julia, and Hany Farid. 2018. The Accuracy, Fairness, and Limits of Predicting Recidivism. Science Advances 4: eaao5580. [Google Scholar] [CrossRef]
- Dulatbekov, Nurlan O., and Sergey N. Bachurin. 2024. Theoretical Prerequisites for the Use of Artificial Intelligence Tools in Implementing a Three-Tier Model of Judicial and Law Enforcement Activities in the Republic of Kazakhstan. Criminal Law and Criminal Procedure Law 29: 115. [Google Scholar] [CrossRef]
- Eteris, Eugene, and Ingrida Veikša. 2025. Artificial Intelligence (AI) in the Legal Profession: European Approach. ETR 2: 129–36. [Google Scholar] [CrossRef]
- European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), COM/2021/206 Final. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 (accessed on 12 April 2025).
- Gordon, Thomas F., Henry Prakken, and Douglas Walton. 2007. The Carneades Model of Argument and Burden of Proof. Artificial Intelligence 171: 875–96. [Google Scholar] [CrossRef]
- Government of Kazakhstan. 2017. State Program “Digital Kazakhstan” (Adopted 12 December 2017). Available online: https://adilet.zan.kz/rus/docs/P1700000827 (accessed on 8 June 2025).
- Government of Kazakhstan. 2024. Concept for the Development of Artificial Intelligence for 2024–2029 (Adopted 24 July 2024). Available online: https://adilet.zan.kz/rus/docs/P2400000592 (accessed on 8 June 2025).
- Hafner, Carole D. 1987. Computer Understanding of Legal Rules: A Conceptual Approach. Artificial Intelligence and Law 1: 21–35. [Google Scholar]
- Hamilton, Melissa. 2021. Evaluating Algorithmic Risk Assessment. New Criminal Law Review 24: 156–90. [Google Scholar] [CrossRef]
- High-Level Expert Group on Artificial Intelligence (AI HLEG). 2019. Ethics Guidelines for Trustworthy AI. European Commission. Available online: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (accessed on 12 April 2025).
- Kroll, Joshua A., Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. 2017. Accountable Algorithms. University of Pennsylvania Law Review 165: 633–705. [Google Scholar]
- Kubasheva, Angela. 2025. Kazakhstan to Join the European Convention on Mutual Assistance in Criminal Matters. Available online: https://tengrinews.kz/kazakhstan_news/kazahstan-prisoedinitsya-evropeyskoy-konventsii-vzaimnoy-573631 (accessed on 2 June 2025).
- Larson, Jeff, Surya Mattu, Lauren Kirchner, and Julia Angwin. 2016. How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. Available online: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (accessed on 14 October 2025).
- Law of the Republic of Kazakhstan “On Personal Data and Its Protection”. 2013. Available online: https://adilet.zan.kz/eng/docs/Z1300000094 (accessed on 12 April 2025).
- Liang, Chenyu. 2019. Shanghai Court Adopts New AI Assistant. Sixth Tone. Available online: https://www.sixthtone.com/news/1003496 (accessed on 14 October 2025).
- Lum, Kristian, and William Isaac. 2016. To Predict and Serve? Significance 13: 14–19. [Google Scholar] [CrossRef]
- Luo, Yan. 2018. Covington Artificial Intelligence Update: China’s Vision for the Next Generation of AI. Available online: https://www.insideprivacy.com/artificial-intelligence/chinas-vision-for-the-next-generation-of-ai/ (accessed on 18 June 2025).
- Luong, Ngor, and Ryan Fedasiuk. 2022. State Plans, Research, and Funding. In Chinese Power and Artificial Intelligence, 1st ed. Edited by William C. Hannas and Huey-Meei Chang. London: Routledge, pp. 3–18. [Google Scholar]
- Maitanov, Aidar M., and Tolegen N. Suleimenov. 2025. Application of Artificial Intelligence in Combating Drug-Related Crimes in Kazakhstan: New Horizons and Opportunities. Bulletin of the Karaganda University “Law Series” 11830: 116–24. [Google Scholar] [CrossRef]
- McCulloch, Warren S., and Walter Pitts. 1943. A Logical Calculus of the Ideas Immanent in Nervous Activity. The Bulletin of Mathematical Biophysics 5: 115–33. [Google Scholar] [CrossRef]
- Newell, Allen, and Herbert A. Simon. 1956. The Logic Theorist: A Machine for Theorem-Proving. Santa Monica: RAND Corporation. [Google Scholar]
- Papagianneas, Straton, and Nino Junius. 2023. Fairness and Justice through Automation in China’s Smart Courts. Computer Law & Security Review 49: 105–30. [Google Scholar]
- Peng, Junlin, and Wen Xiang. 2019. The Rise of Smart Courts in China: Opportunities and Challenges to the Judiciary in a Digital Age. Naveiñ Reet: Nordic Journal of Law and Social Research 9: 345–72. [Google Scholar] [CrossRef]
- Prakken, Henry, and Giovanni Sartor. 1998. Modelling Reasoning with Precedents in a Formal Dialogue Game. In Judicial Applications of Artificial Intelligence. Dordrecht: Springer. [Google Scholar]
- Richmond, Karen McGregor, Satya Muddamsetty, Thomas Gammeltoft, and Henrik Palmer Olsen. 2023. Explainable AI and Law: An Evidential Survey. Available online: https://www.researchgate.net/publication/376661358_Explainable_AI_and_Law_An_Evidential_Survey (accessed on 14 October 2025).
- Rissland, Edwina L., Kevin D. Ashley, and L. Karl Branting. 2005. Case-based reasoning and law. The Knowledge Engineering Review 20: 293–98. [Google Scholar] [CrossRef]
- Rudin, Cynthia. 2019. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence 1: 206–15. [Google Scholar] [CrossRef] [PubMed]
- Saktaganova, Indira S., Elena V. Mitskaya, and Akmaral B. Saktaganova. 2025. Application of Artificial Intelligence in Justice: Prospects and Challenges. Bulletin of the Institute of Legislation and Legal Information of the Republic of Kazakhstan 80: 68–78. [Google Scholar] [CrossRef]
- Sarikaya, Ferhat. 2024. The Cycles of AI Winters: A Historical Analysis and Modern Perspective. Zenodo. [Google Scholar] [CrossRef]
- Schmidhuber, Jurgen. 2015. Critique of Paper by “Deep Learning Conspiracy” (Nature 521 p. 436). Available online: https://people.idsia.ch/~juergen/deep-learning-conspiracy.html (accessed on 5 June 2025).
- Selbst, Andrew D. 2018. Disparate Impact in Big Data Policing. Georgia Law Review 52: 109–84. [Google Scholar] [CrossRef]
- Sergot, Marek J., Fariba Sadri, Robert A. Kowalski, Frank Kriwaczek, Peter Hammond, and H. Terese Cory. 1986. The British Nationality Act as a Logic Program. Communications of the ACM 29: 370–86. [Google Scholar] [CrossRef]
- Sharipbaev, Altynbek Amirovich. 2024. Farabi University News. Available online: https://farabi.university/news/87532?lang=en (accessed on 7 June 2025).
- Shi, Changqing, Tania Sourdin, and Bin Li. 2021. The Smart Court—A New Pathway to Justice in China. International Journal for Court Administration 12: 4. [Google Scholar] [CrossRef]
- Sidorova, Natalia V., and Alibek M. Serikbayev. 2025. Access to Justice in the Republic of Kazakhstan and the Status Quo of Artificial Intelligence. Criminal Law and Criminal Procedure Law 30: 119. [Google Scholar] [CrossRef]
- Supreme People’s Court of China. 2022. Opinions on Regulating and Strengthening the Applications of Artificial Intelligence in the Judicial Field. Available online: https://www.chinajusticeobserver.com/law/x/the-supreme-peoples-court-the-opinions-on-regulating-and-strengthening-the-applications-of-artificial-intelligence-in-the-judicial-field-20221208 (accessed on 10 March 2025).
- Turing, Alan M. 1950. Computing Machinery and Intelligence. Mind 59: 433–60. [Google Scholar] [CrossRef]
- Wessels, Martijn. 2024. Algorithmic Policing Accountability: Eight Sociotechnical Challenges. Policing and Society 34: 124–38. Available online: https://pure.eur.nl/ws/portalfiles/portal/98305602/Algorithmic_policing_accountability_eight_sociotechnical_challenges.pdf (accessed on 14 October 2025). [CrossRef]
- Zhabina, Alena. 2023. How China’s AI Is Automating the Legal System. Deutsche Welle (DW). January 20. Available online: https://www.dw.com/en/how-chinas-ai-is-automating-the-legal-system/a-64465988 (accessed on 14 October 2025).
- Zhursimbayev, Sagyndyk K., Erzhan Kemali, and Alua Muratova. 2025. Improvement of the National Judicial System in the Republic of Kazakhstan: Analysis of Innovations and Problems. Bulletin of L.N. Gumilyov Eurasian National University. Law Series 151: 134–47. [Google Scholar] [CrossRef]
Notes
1. The EU AI Act entered into force on 1 August 2024; see Euronews report “EU Artificial Intelligence Act Enters into Force”, available online: https://ru.euronews.com/next/2024/08/01/the-eu-ai-act-enters-into-force, accessed on 23 May 2025.
2. See Draft Law of the Republic of Kazakhstan “On Artificial Intelligence” (2025), which is currently under parliamentary discussion.
3. Kazakhstan partners with U.S. companies to advance digitalization and AI, available online: https://turkic.world/en/articles/turkic_states/435475, accessed on 7 November 2025.
4. Kazakhstan and China to establish an international laboratory for artificial intelligence and sustainable development, available online: https://www.kt.kz/rus/science/kazahstan_i_kitay_sozdadut_mezhdunarodnuyu_laboratoriyu_1377974087.html, accessed on 23 May 2025.
5. AIQ set to advance digital transformation in Kazakhstan’s energy sector through agreement with Samruk Kazyna, available online: https://aiqintelligence.ai/newsroom/news-and-press-releases/AIQ-set-to-Advance-Digital-Transformation-in-Kazakhstan-s-Energy-Sector-through-Agreement-with-Samruk-Kazyna, accessed on 23 May 2025.
6. Kazakhstan and Russia strengthen cooperation in the field of artificial intelligence, available online: https://ru.sputnik.kz/20250404/kazakhstan-i-rossiya-ukreplyayut-sotrudnichestvo-v-sfere-iskusstvennogo-intellekta-52195777.html, accessed on 23 May 2025.
7. Gil Press, 114 Milestones in the History of Artificial Intelligence (AI). Forbes, available online: https://www.forbes.com/sites/gilpress/2021/05/19/114-milestones-in-the-history-of-artificial-intelligence-ai, accessed on 23 May 2025.
8. American Civil Liberties Union (ACLU) Watchlists, available online: https://www.aclu.org/issues/national-security/privacy-and-surveillance/watchlists, accessed on 23 May 2025.
9. Framework Programs for Research and Innovation of the European Community (National Research University “Higher School of Economics”), available online: https://fp.hse.ru/frame, accessed on 15 May 2025.
10. European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, available online: https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment, accessed on 18 June 2025.
11. AI Act Regulation (EU) 2024/1689, available online: https://eur-lex.europa.eu/eli/reg/2024/1689/oj, accessed on 18 June 2025.
12. European Commission, European Approach to Artificial Intelligence, available online: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, accessed on 18 June 2025.
13. See note 11.
14. HART—Assessment Risk Tool, predictive policing based on personal data, available online: https://ai-watch.github.io/AI-watch-T6-X/service/90142.html, accessed on 18 June 2025.
15. Chinese Association for Artificial Intelligence, Introduction, available online: https://en.caai.cn/index.php?s=/home/article/index/id/2.html, accessed on 10 June 2025.
16. First AI Article in Nature Machine Intelligence, available online: https://prc.today/pervaya-stat-ob-ii-v-zhurnale-nature-machine-intelligence, accessed on 10 June 2025.
17. Chinese Tech Giant Baidu Just Released Its Answer to ChatGPT, available online: https://www.technologyreview.com/2023/03/16/1069919/baidu-ernie-bot-chatgpt-launch, accessed on 10 June 2025.
18. Xinhua Hires the First AI TV Anchor, 22 February 2019, available online: https://hightech.plus/2019/02/22/sinhua-prinyala-na-rabotu-pervuyu-ii-televedushuyu, accessed on 10 March 2025.
19. Chinese Robot Doctor Makes History by Passing Medical Licensing Exam. Industry Tap, available online: https://www.industrytap.com/chinese-robot-doctor-makes-history-passing-medical-licensing-exam/44664, accessed on 21 March 2025.
20. Education Tech Firm Squirrel AI Bullish on Market Prospects, 16 September 2021, available online: https://global.chinadaily.com.cn/a/202109/16/WS6142fa9ca310e0e3a6822120.html, accessed on 14 October 2025.
21. Hangzhou Internet Court, available online: https://www.netcourt.gov.cn/?lang=En, accessed on 10 March 2025.
22. Government of Kazakhstan, Draft Concept for the Development of Artificial Intelligence for 2024–2029.
23. Large Language Model KazLLM Developed in Kazakhstan, available online: https://www.gov.kz/memleket/entities/mdai/press/news/details/902638?lang=ru, accessed on 14 October 2025.
24. Ministry of Internal Affairs of the Republic of Kazakhstan, E-Criminal Case System (“E-Ugolovnoe delo”), available online: https://www.gov.kz/memleket/entities/mdai/press/article/details/27005, accessed on 10 June 2025.
25. Modern Digital Services Created in the Prosecutor’s Office System for the Public and the State, available online: https://vecher.kz/ru/article/kakie-sovremennye-sifrovye-servisy-sozdany-v-sisteme-prokuratury-dlia-naseleniia-i-gosudarstva.html, accessed on 5 August 2025.
26. Smart Bridge—Passport Service (DODSVS-S-4332), available online: https://sb.egov.kz/services/passport/DODSVS-S-4332, accessed on 7 March 2025.
27. Artificial Intelligence Assists Judges in Kazakhstan, available online: https://sud.gov.kz/rus/print/325231, accessed on 5 August 2025.
28. Successful Application of AI in Kazakhstani Courts Discussed at the SCO Member States Meeting in China, available online: https://ru.sputnik.kz/20250425/ob-uspeshnom-primenenii-ii-v-sudakh-kazakhstana-rasskazali-na-soveschanii-chlenov-shos-v-kitae-52833037.html, accessed on 2 May 2025.
29. Ministry of Internal Affairs and the Prosecutor General’s Office Launch an AI Project for Criminal Investigations, available online: https://exclusive.kz/mvd-i-genprokuratura-zapuskajut-ai-proekt-dlja-rassledovanij, accessed on 10 July 2025.
30. See note 29.
31. European Convention on Human Rights and Fundamental Freedoms, Rome, 4 November 1950, available online: https://www.coe.int/ru/web/compass/the-european-convention-on-human-rights-and-its-protocols, accessed on 2 June 2025.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).