2. Legal Framework: Domestic and International
The application of AI in arbitration in India is still nascent. While Indian arbitration practitioners and institutions are beginning to use and explore AI tools at a personal level, there is no specific legal or regulatory framework addressing this area. The broader acceptance of technological advancement in arbitration practice is reflected in the Supreme Court’s recognition of electronic means for concluding arbitration agreements and conducting procedural steps [2]. The Government of India has also proposed the Draft Arbitration and Conciliation (Amendment) Bill, 2024, to amend the Arbitration and Conciliation Act, 1996, and align India’s arbitration rules and practices with prevailing international standards. While the proposed bill includes provisions for conducting arbitration proceedings through audio–video electronic means, it does not address the integration of AI into arbitral proceedings [3]. The Mediation Act, 2023, likewise provides guidance for online mediation but does not address the use or regulation of AI technology in mediation [4].
Moreover, Indian arbitration institutions, such as the Indian Council of Arbitration (ICA) and the Mumbai Centre for International Arbitration (MCIA), have not issued any guidelines, rules, or policies in this regard. It is thus evident that regulation of the use of AI technology in ADR has been left entirely to broader technology regulations and data privacy laws, including the Information Technology (IT) Act, 2000, and the Digital Personal Data Protection Act, 2023. The Government has incorporated AI-specific provisions in the proposed Digital India Act, including the regulation of high-risk AI for its safe and ethical use and measures addressing issues such as deepfakes. Until that legislation is enacted, amendments to the IT Rules, 2021, remain the only regulations that even loosely address AI-related concerns.
India can draw on several models and international best practices for integrating AI into its arbitration law. With the e-Courts project already in place, the same infrastructure could be expanded to include AI tools for arbitration. Below, we discuss a few models that India could adopt to regulate the use of AI by legal practitioners.
2.1. SVAMC Guidelines on the Use of AI in Arbitration
The Silicon Valley Arbitration & Mediation Center (SVAMC) has framed detailed guidelines on the use of AI in arbitration, which could effectively supplement national legislation on arbitration to secure the fair and balanced use of AI (Markus Altenkirch and Raika Hossbach, “The New Guidelines on the Use of Artificial Intelligence in Arbitration: Background and Essential Aspects”) [5]. These guidelines cover all the stakeholders involved and address their concerns separately. Guideline 2 clearly stipulates that all participants in arbitration proceedings are responsible for their use of AI and warns them against feeding confidential information into unreliable or public AI tools, such as ChatGPT. Only special AI tools that adequately safeguard confidentiality should be used with confidential information. Party representatives are also made liable for any uncorrected errors or inaccuracies in AI-generated or AI-refined output that they have submitted or used in furtherance of arbitral proceedings [6]. Further, to ensure transparency and accountability on the part of legal counsel, Guideline 3 empowers the arbitral tribunal to ask for complete disclosure of the AI tools used in drafting a submission, including the name, version, and relevant settings of the tool, how it was used, and the complete prompt used to obtain the associated output.
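To illustrate how such a disclosure could be recorded in practice, the sketch below captures the fields that Guideline 3 contemplates (tool name, version, relevant settings, manner of use, and the prompt). It is a hypothetical structure offered for illustration only, not a form prescribed by the SVAMC guidelines.

```python
# Hypothetical sketch of an AI-use disclosure record reflecting the fields
# contemplated by SVAMC Guideline 3; field names are illustrative only.
from dataclasses import dataclass, asdict
import json


@dataclass
class AIUseDisclosure:
    tool_name: str             # the generative AI product used
    tool_version: str          # version or model identifier
    settings: dict             # relevant configuration (e.g., temperature, plugins)
    purpose: str               # how the tool was used (drafting, research, translation)
    prompt: str                # the complete prompt submitted to obtain the output
    reviewed_by_counsel: bool  # whether a human verified the output before filing


disclosure = AIUseDisclosure(
    tool_name="<generative AI tool>",
    tool_version="<model/version>",
    settings={"temperature": 0.2},
    purpose="first draft of the statement of claim, later edited by counsel",
    prompt="<full prompt text>",
    reviewed_by_counsel=True,
)

# A tribunal-ordered disclosure could then be produced as a structured document.
print(json.dumps(asdict(disclosure), indent=2))
```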
The guidelines also hold arbitrators to the ethical and diligent use of AI tools. Guideline 6 stipulates that arbitrators cannot delegate their role to any AI tool; they must carry out an independent analysis of the facts, the law, and the evidence. This reinforces the view that AI cannot be used in the decision-making process without human oversight. Parties who submit their dispute to arbitration do not expect their representatives to blindly file an AI-drafted statement of claim, or the arbitrators to render an AI-drafted award in redressal of their dispute. Decision-making is a personal and non-delegable task, and even as AI technology advances further, the arbitrator’s responsibility to conduct an independent analysis informed by human considerations remains indispensable.
2.2. The EU AI Act
The European Union’s AI Act classifies AI systems based on their risk level, ranging from minimal to unacceptable risk [7]. AI systems used in the administration of justice, including arbitration, are classified as “high-risk AI systems” and are subject to stringent regulatory requirements. These include obligations to ensure transparency in their operations, demonstrate accuracy and reliability, mitigate risks of bias or discrimination, and provide clear documentation to enable human oversight. To strengthen legal practitioners’ understanding of the potential and risks of AI tools, the act emphasizes the importance of AI literacy for providers and deployers of AI systems [8]. Thus, while the EU AI Act does not explicitly regulate arbitration, its provisions on high-risk AI systems, including risk assessments and the implementation of safeguards, offer an opportunity to integrate AI into arbitration responsibly, balancing innovation with the protection of the parties’ basic rights. While the statute is compliance-based, creating obligations for AI developers and businesses, a framework for ascertaining liability for non-compliance with those obligations has yet to emerge.
2.3. SCC Arbitration Institute’s AI Guide
The Arbitration Institute of the Stockholm Chamber of Commerce (SCC) released a non-binding guide on the use of AI in cases administered under its rules in October 2024 [9]. The guide follows the EU AI Act’s definition of “artificial intelligence system” and advocates the responsible use of AI by practitioners. It encourages arbitral tribunals using AI systems in dispute resolution to keep in mind confidentiality, quality, integrity, and the non-delegation of the decision-making mandate. The SCC’s “light approach” promotes voluntary disclosure by arbitrators regarding their use of AI in researching and interpreting facts, applying law to facts, and other aspects of decision-making, to enhance transparency and procedural integrity.
2.4. Australia’s Guidelines on Responsible Use of AI in Litigation
The Victorian Supreme Court released the “Guidelines for the Responsible Use of Artificial Intelligence in Litigation” in May 2024 [10]. They stipulate that, wherever necessary, the use of AI tools in document preparation must be disclosed to the other parties and the court. Moreover, self-represented litigants and witnesses using generative AI to prepare documents are encouraged to include a statement disclosing the AI tool’s usage to assist the judicial officer in assessing such documents. The guidelines discourage the use of commercial or freely available public programs such as ChatGPT and Google Gemini to generate output and prepare legal documents. This is mostly because of the “black box” nature of AI [11], and users are encouraged to check that the output is not out of date, incomplete, inaccurate, inapplicable to the jurisdiction, or biased. Ultimately, a party or practitioner relying on or signing a document remains responsible for the accuracy of its content. The fact that a document was made using generative AI is no excuse for erroneous or unoriginal submissions.
Australia has also shown proactiveness in monitoring the use and application of AI. The New South Wales (NSW) Government adopted the AI Assessment Framework (NSW AIAF), based on Australia’s AI Ethics Principles [12], to guide government agencies’ use of AI technologies. It is a risk self-assessment that concludes with a rating for each of the five Ethics Principles [13]. This rating determines whether the assessment needs to be submitted to the AI Review Committee or whether the government agency may proceed without changes. The framework is updated from time to time as AI technology advances. While this initiative is primarily meant to promote the responsible use of AI and ensure fairness, privacy and security, transparency, and accountability within government operations, such a risk self-assessment framework could also be deployed in the dispute resolution context.
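Purely by way of illustration, the sketch below shows how a self-assessment of this kind might rate a proposed AI use against a set of ethics principles and decide whether escalation to a review committee is required. The principle names, rating scale, and escalation threshold are assumptions made for the example and do not reproduce the actual NSW AIAF.

```python
# Illustrative self-assessment sketch loosely modeled on the idea of rating an
# AI use case against ethics principles; principles, scale, and escalation
# threshold are assumptions, not the actual NSW AIAF.

PRINCIPLES = ["fairness", "privacy_and_security", "transparency",
              "accountability", "community_benefit"]


def assess(ratings: dict) -> str:
    """ratings maps each principle to a risk rating: 0 (low) to 3 (very high)."""
    missing = [p for p in PRINCIPLES if p not in ratings]
    if missing:
        raise ValueError(f"unrated principles: {missing}")
    if max(ratings.values()) >= 2:   # any high or very-high risk rating
        return "submit assessment to the review committee"
    return "proceed without changes; re-assess on material updates"


# Example: an AI-assisted case-management tool for an arbitral institution.
print(assess({
    "fairness": 1,
    "privacy_and_security": 2,   # processes confidential submissions
    "transparency": 1,
    "accountability": 1,
    "community_benefit": 0,
}))
```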
3. Challenges
India currently lacks a dedicated legal framework governing the use of artificial intelligence (AI) in arbitration. However, the government has introduced broader policy initiatives aimed at ensuring the ethical and responsible deployment of AI across various sectors. These initiatives, though not arbitration-specific, provide a foundation for understanding how AI-related risks, including bias, data security, and accountability, could be managed in the arbitration framework. The National Strategy for Artificial Intelligence introduced by NITI Aayog in 2018 emphasizes the integration of AI in key areas such as healthcare, smart governance, and financial services, underscoring principles of transparency, fairness, and security. These principles are crucial in arbitration, where procedural fairness and impartiality are central to the legitimacy of the process.
While no specific law regulates the use of AI in arbitration, certain sectoral guidelines may offer indirect insights into the regulatory approach India might take. For instance, the Securities and Exchange Board of India (SEBI) requires financial market participants to disclose their use of AI and machine learning models, promoting transparency and risk assessment. If applied to arbitration, a similar disclosure requirement could ensure that AI tools used by arbitrators, legal counsel, or arbitral institutions are subject to scrutiny, reducing the risk of biased decision-making or opaque reasoning. Likewise, the National Digital Health Mission’s approach to AI in healthcare—focused on data privacy and security—suggests a possible framework for handling confidential case data in arbitration, particularly when AI-powered legal research tools or decision support systems are employed.
Recognizing the risks associated with AI, the Ministry of Electronics and Information Technology (MeitY) has proposed establishing an AI Safety Institute to develop standards and identify risks associated with AI applications. If extended to arbitration, such an institution could help create best practices for AI-assisted dispute resolution, including guidelines for arbitrators using AI tools in decision-making, safeguards against automated bias, and cybersecurity protocols for AI-driven case management systems. The INR 20 crore allocated under the IndiaAI Mission for AI safety and interoperability initiatives highlights a growing governmental interest in regulating AI technologies more comprehensively, which could eventually extend to the legal sector, specifically alternative dispute resolution mechanisms including arbitration.
At the international level, India’s engagement with the Global Partnership on Artificial Intelligence (GPAI) indicates its commitment to responsible AI governance. In the context of arbitration, adopting global best practices from jurisdictions that have begun addressing AI’s role in legal decision-making could help shape a future regulatory framework. As India increasingly integrates AI into its legal and commercial sectors, the need for an arbitration-specific AI liability framework becomes more pressing. The absence of clear regulations raises critical questions regarding accountability—whether an arbitrator, legal counsel, or AI developer would be held liable in case of biased AI-generated decisions or data breaches. Addressing these issues through legislative intervention or guidelines from arbitral institutions such as the Singapore International Arbitration Centre (SIAC) or Mumbai Centre for International Arbitration (MCIA) could help establish clarity on liability and ethical AI use in arbitration.
While India is yet to implement a structured legal framework for AI in arbitration, existing regulatory approaches in other sectors indicate a growing recognition of AI-related risks. The principles of transparency, accountability, and fairness outlined in national AI policies provide a starting point for developing a liability framework for AI-assisted arbitration. Future reforms may include mandatory disclosures of AI usage, ethical guidelines for arbitrators relying on AI-generated insights, and mechanisms to challenge AI-influenced arbitral awards. Given the rapid adoption of AI in dispute resolution, India’s arbitration landscape will need to evolve with a regulatory framework that balances innovation with fairness and accountability. A few major challenges in the context of the use of AI in arbitration are as follows:
3.1. Human Oversight: The Risk of Automation Bias and Accountability Vacuums
Arbitration inherently involves judgment that considers not just legal rules but also the subtleties of fairness, equity, and context. The phenomenon of automation bias, where human decision-makers defer to AI outputs even when they are flawed, poses a grave risk [14]. AI lacks the human qualities of empathy and moral reasoning, which are essential in sensitive cases such as those involving domestic violence or child custody. Arbitrators may use public generative AI models like ChatGPT to assist them with drafting arbitral awards. While there is no apparent problem in taking AI’s assistance for technical tasks, such as grammar and refinement, the challenge arises when arbitrators start delegating their work to AI. In this context, Justice P.B. Balaji has stated that AI can help with summarizing pleadings and tagging cases but should not be used for writing judgments [15]. Over-reliance on AI tools could lead arbitrators to rubber-stamp machine-generated suggestions, diminishing critical analysis and undermining the human element of justice, human intuition, and reasoned deliberation.
Moreover, accountability becomes complex when decisions are AI-influenced. Traditional arbitration demands a clear attribution of responsibility for awards. If an opaque AI system has substantively influenced a tribunal’s decision, it raises troubling questions about accountability for unjust outcomes [16]. Without clear audit trails and explainability, human arbitrators may find it difficult even to realize when AI errors have tainted the process.
Human oversight is essential not merely as a procedural layer but also as a safeguard to arbitration’s legitimacy. AI must remain a tool, not an arbiter. Regulatory frameworks should enforce continuous human supervision, ensure that arbitrators understand the functioning (and limits) of the AI tools that they use, and impose disclosure obligations when AI meaningfully influences outcomes.
A recent study highlights that AI judges tend to follow the letter of the law with strict consistency, whereas human judges incorporate broader contextual reasoning into their decisions [17]. This finding offers a useful analogy for the debate surrounding the use of AI in arbitration. In a similar way, if arbitration increasingly relies on AI systems that prioritize rigid application of legal rules and patterns, it risks losing the nuanced appreciation of social, cultural, and factual contexts that human arbitrators bring to the process. Arbitration often requires balancing competing equities, understanding unwritten commercial norms, and adapting procedures to fit the complexities of transnational disputes—elements that purely algorithmic reasoning may systematically overlook.
Thus, much like the concern that AI judges may deliver legally correct but contextually insensitive outcomes, the use of AI in arbitration could lead to procedurally efficient but substantively unjust awards. Maintaining meaningful human involvement is therefore essential to preserving the flexibility, fairness, and responsiveness that have historically defined arbitration as a preferred mode of dispute resolution.
3.2. Algorithmic Bias and Procedural Fairness
Algorithms form the base for AI tools. All devices, from computers to generative AI programs, require algorithms to operate. The 1s and 0s in every program essentially decide how the AI system interacts. Algorithms thus act like the law, governing AI systems at every step. Naturally, like any other algorithmic process, machine learning falls prey to algorithmic bias. Given India’s complex social diversity, the risk of biased AI decisions is particularly acute. Without careful and considerate design, regular audits, and updates, AI could perpetuate existing inequalities and biases in arbitral outcomes. For instance, if most of the custody cases that come to arbitration have been decided in favor of the mother, the data that AI would use to extract “relevant” facts and analyze past trends would automatically be biased towards one party.
AI systems learn from historical data, and if past arbitration awards or legal materials contain biases (whether relating to nationality, corporate size, industry, or even arbitrator behavior), AI can replicate and reinforce those biases invisibly [18]. The “black box” nature of AI models compounds this risk. When AI’s internal decision-making processes are inaccessible to users, bias becomes difficult to detect, contest, or correct. Algorithmic bias often results not merely from biased data but also from model design choices, feature selection, and feedback loops—where initial biased outcomes are reinforced over time through adaptive learning [19].
Bias mitigation strategies must include bias testing at every stage of the AI lifecycle, diversifying training datasets, and embedding fairness constraints into algorithm design. Believing that AI is neutral is not only misguided but also dangerously naïve. In arbitration, where fairness and impartiality are non-negotiable, failure to address these sources of bias could erode trust in the arbitral process. Hence, AI systems used in arbitration must undergo independent bias audits, and standards of explainability and contestability must be enforced.
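As a minimal sketch of one step such an independent bias audit might involve, the example below compares favorable-outcome rates across party groups in the historical awards an AI tool would learn from and flags disparities above a chosen tolerance. The dataset, the grouping attribute, and the 20% tolerance are illustrative assumptions, not audit standards drawn from any framework discussed above.

```python
# Toy bias-audit step: compare favorable-outcome rates across groups in the
# historical awards an AI tool would learn from. The data, grouping attribute,
# and disparity tolerance are illustrative assumptions only.
from collections import defaultdict

past_awards = [
    {"claimant_type": "large corporation", "won": True},
    {"claimant_type": "large corporation", "won": True},
    {"claimant_type": "large corporation", "won": True},
    {"claimant_type": "individual", "won": False},
    {"claimant_type": "individual", "won": True},
    {"claimant_type": "individual", "won": False},
]


def favorable_rates(records, group_key="claimant_type"):
    """Return the share of favorable outcomes per group."""
    wins, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        wins[r[group_key]] += r["won"]
    return {g: wins[g] / totals[g] for g in totals}


rates = favorable_rates(past_awards)
disparity = max(rates.values()) - min(rates.values())
print(rates, f"disparity={disparity:.2f}")
if disparity > 0.2:   # tolerance chosen for illustration only
    print("flag: training data shows a group-level outcome skew; "
          "investigate before the tool is used in live proceedings")
```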
Poor human oversight allows algorithmic biases to slip through undetected. Conversely, biases embedded in AI recommendations can subtly steer human decision-makers, compromising their independence. For example, if the delicate nuances of the case, including an individual’s best interest, are not considered and analyzed by a human mind, the outcome could be against the principles of natural justice. Thus, the presence of algorithmic bias in AI systems greatly undermines due process and procedural fairness in the administration of justice. Without transparency, accountability, and rigorous bias mitigation, there is a risk that AI could institutionalize and normalize unfairness under the guise of “efficiency.”
3.3. Data Protection and Privacy Risks: Challenges Under the DPDP Act, 2023
As previously discussed, the use of AI in arbitration involves significant personal and sensitive data processing, including parties’ identities, financial details, and confidential information from arbitration submissions. These practices are subject to regulatory frameworks, with India’s Digital Personal Data Protection (DPDP) Act, 2023, seeking to ensure that such data are protected, primarily through informed consent, transparency, and data minimization principles.
The DPDP Act introduces obligations for data fiduciaries that control or process personal data, as outlined in Sections 5 and 6, which mandate clear notice and informed consent from individuals. However, in the context of AI, which often relies on processing vast amounts of data collected from multiple sources, ‘granular consent’ becomes difficult to secure for every use case. This lack of explicit consent could undermine the intent behind these sections, paralleling concerns within the General Data Protection Regulation (GDPR) of the EU, which also emphasizes transparency and explicit consent but faces challenges in the context of AI and big data analytics [20] (Sections 5 and 6).
Furthermore, the transparency requirements in Section 7 of the DPDP Act clash with AI’s “black box” nature, where the decision-making processes of algorithms are often opaque. This also resonates with GDPR’s call for explainability, which requires that individuals understand how decisions are made, especially when based on automated processes. Without transparent AI systems, arbitration parties might find it challenging to understand how their personal data influenced an AI-driven recommendation or decision, potentially violating their rights to due process [20] (Section 7).
Moreover, cross-border data transfer restrictions under Section 16 of the DPDP Act create additional layers of complexity. AI systems often rely on global cloud computing infrastructure, where data may be transferred across borders. The DPDP Act’s restrictions on transferring personal data outside India, unless certain safeguards are in place, could significantly hinder the use of AI in international arbitration, similar to the GDPR’s stringent data transfer provisions. Compliance with these provisions will require careful data localization or robust international agreements, complicating the deployment of AI technologies in global dispute resolution contexts [21].
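By way of illustration only, one practical response is to place a simple compliance gate in front of any AI tool that receives arbitration material, refusing to forward data unless recorded consent covers the intended purpose and the processing location satisfies the applicable transfer restrictions. The field names, consent model, and permitted-region check below are assumptions made for the sketch, not a statement of what the DPDP Act requires.

```python
# Illustrative compliance gate in front of an AI tool that receives arbitration
# material. Field names, the consent model, and the permitted-region check are
# assumptions for the sketch, not legal advice on the DPDP Act.

PERMITTED_REGIONS = {"in"}   # e.g., keep processing within India unless safeguards exist


def may_send_to_ai(record: dict, processing_region: str, purpose: str) -> bool:
    consent = record.get("consent", {})
    if purpose not in consent.get("purposes", []):
        return False   # no recorded consent for this use
    if record.get("contains_personal_data") and processing_region not in PERMITTED_REGIONS:
        return False   # cross-border transfer not cleared
    return True


submission = {
    "contains_personal_data": True,
    "consent": {"purposes": ["document management", "translation"]},
}

print(may_send_to_ai(submission, processing_region="us", purpose="outcome prediction"))  # False
print(may_send_to_ai(submission, processing_region="in", purpose="translation"))         # True
```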
Additionally, the DPDP Act’s provisions on algorithmic profiling under Section 9 must be taken into account. The act forbids unfair or discriminatory processing of data based on automated decisions that could affect individuals’ rights. In AI-driven arbitration, where algorithms may use profiling techniques to predict outcomes based on past disputes, there is a heightened risk that such profiling could be discriminatory [22]. This mirrors concerns raised in recent studies on AI governance, which warn about the risks of bias in algorithmic decision-making, underscoring the need for robust bias mitigation strategies and regulatory oversight [20] (Section 9). While the DPDP Act strengthens privacy rights, it also presents significant challenges for arbitration, especially with the increased adoption of AI systems.
3.4. Complexities of Ascertaining Liability
Beyond the DPDP Act’s obligations, the integration of AI systems into arbitration processes raises significant challenges regarding the ascertainment of liability. When AI tools are used to assist or even make decisions, questions emerge about who should be held responsible in the event of errors, biases, or breaches of data protection and fairness principles. Traditional legal frameworks, which assume human agency and accountability, often struggle to accommodate the complexities introduced by autonomous or semi-autonomous systems.
A central concern is whether liability should rest with the developers who created the AI model, the users (such as arbitral institutions or tribunals) who deploy it, or a combination of both. Allocation of responsibility becomes even more complex when the AI system operates as a “black box”, making it difficult to trace or explain decision-making errors.
Recent regulatory developments, such as the EU AI Act, attempt to address these challenges by imposing distinct obligations on both providers and users of high-risk AI systems and by proposing mechanisms to ease the burden of proof for affected parties [7]. However, a critical shortcoming of the EU’s approach is its over-reliance on ex ante compliance measures—such as risk assessments and conformity checks—without providing sufficient clarity on how liability will be apportioned when unforeseen harm occurs despite compliance [7] (Article 4). In highly dynamic fields like arbitration, where AI may evolve post-deployment or interact unpredictably with new data, this gap could lead to legal uncertainty and fragmented enforcement across member states [23].
There is thus no proper liability framework in place for situations where an AI system provides incorrect or biased recommendations. AI programs themselves clearly cannot be held liable, as they operate according to the algorithms developed by their creators. It therefore remains unclear who would be held responsible: the developers, the practitioners, the judiciary, or the government.
4. Conclusions and Recommendations
India currently lacks a comprehensive legal framework to regulate the use of AI in arbitration. Without proper regulation, there is a real risk that AI could be misused to justify predetermined outcomes or expedite cases at the cost of thorough deliberation and fairness. While concepts like copyright infringement and plagiarism are generally understood, AI-generated work and AI similarity issues are still relatively novel. Legal practitioners often tend to treat AI outputs as their own, raising significant concerns regarding intellectual property violations and ethical practice.
First and foremost, India must develop a dedicated legal framework for the use of AI in arbitration, rather than subsuming it under broader technology regulations. Notably, while the 2047 Viksit Bharat Mission envisions a future-ready India and significant amendments have been made to commercial dispute frameworks since 2014 (including the 2024 Draft Bill for further reforms), there remains no mention of the use of AI in arbitration or litigation processes. This oversight must be urgently addressed to align India’s arbitration ecosystem with global best practices.
Although the 2024 Union Budget announced funding for an arbitration regulator, the institutional infrastructure remains non-operational, creating a regulatory vacuum. This void must be filled swiftly with a specialized body capable of issuing guidelines on AI usage, ensuring ethical compliance, and resolving disputes arising out of AI’s role in arbitral proceedings.
Drawing from global practices, India should consider the model set forth by the SCC AI Guide and SVAMC Guidelines on AI use in arbitration. Based on these models, two primary reforms are suggested:
4.1. Disclosure of AI Usage
While AI should not be discouraged, its use must be regulated transparently. Party representatives and arbitral tribunals must disclose when and how AI has been used in case preparation, research, or decision support tasks. Mandating AI disclosure reports would enhance transparency, maintain party equality, and protect the integrity of arbitral proceedings.
4.2. Non-Delegation of Decision-Making
The arbitral tribunal must remain the ultimate decision-maker, fully applying its judicial mind to the facts and law. AI may be used to assist procedural matters, such as document management, initial settlement proposals, or data analysis. However, substantive decision-making and legal reasoning must remain exclusively within the human domain, preserving fairness, confidentiality, and accountability.
Moreover, to address the serious concern of algorithmic bias, arbitral institutions must prioritize the development of in-house AI systems subjected to regular audits to ensure representativeness and non-discrimination. Ethical principles such as inclusivity, fairness, and explainability must be embedded into the design, deployment, and operation of AI systems.
Importantly, training on the ethical and responsible use of AI must become an integral part of capacity-building initiatives for both arbitration and litigation practitioners. Arbitrators and legal professionals should be trained to critically understand the limitations, biases, and legal implications of AI technologies. Drawing from the spirit of Article 4 of the EU AI Act, ethical AI principles and AI literacy should be embedded in legal education and professional certification programs.
By proactively addressing these challenges, India can ensure that the adoption of AI in arbitration strengthens—rather than undermines—due process, fairness, and trust in the system, contributing meaningfully to its vision for a modern, efficient, and future-ready dispute resolution framework.