1. Introduction
1.1. Types of Artificial Intelligence
As artificial intelligence systems evolve and become integrated into our routines, their potential and utility in the legal context also become evident. Their use as an assistant in dispute resolution is most apparent in the speed and efficiency with which these tools can execute and improve tasks such as the management, review, and translation of documents, legal research (Jus Mundi, LexisNexis, CoCounsel, Harvey), and the analysis and presentation of evidence. Many legal professionals have already begun to use AI to assess the strength of evidence and estimate the probability of success in lawsuits, freeing up time for more strategic work by lawyers.
1.2. AI in Arbitration
Although such tools are used predominantly on an individual basis and outside the procedure itself, AI raises relevant concerns regarding the confidentiality and security of information. Arbitration involves sensitive material (documents, procedural strategies, the identity of parties and third parties) that can be inadvertently exposed when entered into platforms operated by third parties or into models that run on external servers.
Depending on the architecture of the tool and the volume of data shared, the use of systems such as generative models may even require prior consent from the parties, since certain information may leave the exclusive domain of the user and become part of external databases. As there is still no clear regulation on the limits of this sharing, zones of uncertainty persist that can give rise to procedural incidents, disputes over the validity of acts, and questions about the integrity of the proceedings.
In contrast, technologies used within the process, that is, formally incorporated into the procedural rules and executed as part of the proceedings, present challenges of a different nature. By assuming functions typically performed by participants in the process, such as conducting hearings or the technical analysis of documents, such tools can affect the structuring principles of arbitration, especially due process, equality, impartiality, and freedom of conviction.
Although they can streamline specific steps, they do not alter the central logic of the procedure, which continues to depend on human choices, methodological validation, and the procedural dialogue between the parties and the tribunal. In practice, more sophisticated systems tend to introduce new layers of debate (concerning algorithmic opacity, bias, the admissibility and probative force of automated results, and the security of the data processed), which can increase the duration or costs of arbitration rather than reduce them.
By contrast, already consolidated solutions for digitization and virtualization (such as electronic filing, case-management platforms, and remote hearings) show measurable reductions in time and costs. The same cannot be said of advanced AI tools: although promising, they remain of limited use, economically uneven, and dependent on close human supervision.
Thus, true efficiency in arbitration remains conditioned on a realistic assessment of the cost-benefit ratio of available technologies and on maintaining human control over acts that influence the arbitrators' conviction (Scherer, 2019).
2. Impacts of AI on the production of evidence in Arbitration
2.1. On the need for regulation
Although artificial intelligence has proven to be a powerful tool for the production and management of evidence in arbitration, the impacts its use may have in specific cases cannot be ignored. The construction of specific normative and guiding parameters is therefore essential, to prevent the indiscriminate adoption of these technologies from compromising the authenticity of the evidence, due process, and, ultimately, confidence in the arbitral procedure itself.
This risk is particularly acute in the case of so-called deepfakes, which can create false evidence or manipulate existing evidence, making it extremely difficult to distinguish what is authentic from what was generated or adulterated by AI. Reliance on falsified evidence can distort the outcome of arbitrations and, in extreme situations, lead to true miscarriages of justice, in addition to adding time and costs when the tribunal must resort to experts to verify the legitimacy of digital evidence.
For this reason, several normative texts have recently been published by arbitral institutions and other organizations, such as the CIArb Guideline on the Use of AI in Arbitration, under which a party intending to use a permitted AI tool shall make reasonable efforts to independently verify the sources and accuracy of the results obtained, correcting any errors before submitting documents or other evidence and, where necessary, throughout the proceedings (CIArb, 2024). Conversely, the use of AI to produce content capable of misleading the tribunal or the opposing party is expressly prohibited, whether through the fabrication or adulteration of evidence or through prompts designed to deliberately generate an untrue or biased result (CIArb, 2024).
From a similar perspective, the SVAMC Guidelines on the Use of Artificial Intelligence in Arbitration provide, as guidance, in Guideline 5, that the parties must respect the integrity of the proceedings and that AI may not be used to falsify evidence, so that the authenticity of the evidence and of the arbitral procedure is not compromised (SVAMC, 2024).
Similarly, the VIAC Note on the Use of Artificial Intelligence in Arbitration Proceedings acknowledges that it falls to the arbitrators, in the exercise of their discretion, to decide whether to require the disclosure of evidence produced with the support of AI, as well as to determine the admissibility, relevance, materiality, and probative value of such elements (VIAC, 2024).
The SCC Guide to the Use of AI in Cases Administered under the SCC Rules indicates that these tools should be equipped with solutions to detect and flag content that has been generated or manipulated by AI, using reliable methods for this purpose (SCC, 2024). Similarly, Leonardo F. Souza-McMurtrie highlights that artificial intelligence itself is one of the most promising solutions to the problem of evidence falsification, insofar as it can be trained to identify such falsification (Souza-McMurtrie, 2023).
Beyond the use of AI by the parties, the need to regulate its use by the arbitrators themselves is also under discussion, with clear parameters for their conduct. In general, the conditions imposed on the use of AI emphasize a combination of transparency and the personal responsibility of the human operator, who remains in charge of supervising the result generated by the tool and accountable for its content.
In this context, perhaps the most sensitive rules are those concerning the treatment of confidential information or trade secrets. Some courts have taken special care to prevent the mere filing of pleadings and documents from resulting in the disclosure of sensitive data, especially when widely used programs such as ChatGPT are involved. For example, it may be required that briefs containing confidential or proprietary information expressly identify those passages (for example, in parentheses or highlighted), that a non-confidential version of the document be filed with the sensitive information redacted, and that the recipients of confidential briefs refrain from disclosing their content to anyone not authorized to receive it. Even though the companies behind these systems claim there is no risk of undue access, it is feared that they may retain sensitive data entered by users and that, under certain circumstances, this information may become accessible to unauthorized third parties, in breach of the duty of confidentiality commonly present in arbitration.
In line with the CIArb Guideline on the Use of AI in Arbitration (2024, item 4.5), however, these precautions should not prevent the private use of AI tools by the parties and their teams in internal activities that do not interfere with the progress or integrity of the proceedings, since arbitrators should not regulate this type of use.
It is thus apparent that, with proper regulation, the use of artificial intelligence in arbitral proceedings can become an important instrument for the improvement and evolution of arbitration, provided that a set of guidelines preserving the good faith and credibility of the procedure is observed. Notwithstanding the regulatory advances, gaps remain in each of these instruments, and such normative frameworks still have much room to evolve. As Leonardo F. Souza-McMurtrie argues, the current guidelines are still premature and may disrupt a process under construction, thereby hindering the emergence of better solutions, which is why they must be continuously revisited in light of practical experience (Souza-McMurtrie, 2025).
2.2. Authenticity of evidence
Having addressed the need to regulate the use of artificial intelligence in arbitration, a second axis of concern emerges: the authenticity of digital evidence produced or processed by these tools. In a context in which the arbitral tribunal's decision depends, to a large extent, on confidence in the evidence presented, any uncertainty regarding the integrity of this material directly affects the legitimacy of the result.
In this scenario, the creation of false evidence emerges as one of the main threats associated with the use of AI in arbitration, especially in the production of evidence intended for the arbitral tribunal: the same technology that allows large volumes of data to be organized, searched, and analyzed can also be used to create false evidence or adulterate authentic content, as with deepfakes.
Conversely, the very distrust of certain types of evidence opens the door to what Rebecca Delfino has called the "deepfake defense," a strategy by which lawyers exploit skepticism about the integrity of digital evidence to cast doubt on its legitimacy, even when it is, in fact, authentic (Delfino, 2023). This creates an environment in which the arbitrators' apprehension about the possibility of technological manipulation can be used to challenge the credibility of practically any evidence, at any time.
Guidelines such as the CIArb Guideline on the Use of AI in Arbitration, although they already recognize risks associated with AI in the production of evidence, do not directly address specific strategies such as the so-called "deepfake defense," which sits at the intersection of technology, law, and ethics and compounds the difficulty tribunals face in dealing with this new type of allegation (CIArb, 2024).
For these reasons, it has been suggested (Limond; Calthrop, 2025) that digital evidence be accompanied by a declaration of authenticity signed by counsel, or by a specialized technical opinion attesting that it has been examined and found authentic and reliable, without relieving the parties of their responsibility, under the principle of good faith, to ensure the authenticity and legality of the evidence and arbitration documents they present.
2.3. Burden of proof
The use of artificial intelligence in the production, organization, and analysis of evidence significantly modifies the distribution of the burden of proof in arbitration. Digital evidence mediated by algorithms ceases to be a simple document presented to the tribunal and comes to involve technical processes that require human supervision, methodological transparency, and, when necessary, explanations about its origin, traceability, and integrity.
This additional burden arises from factors such as algorithmic opacity, a characteristic of models whose internal functioning is not verifiable, the potential for automatic reconstruction of content by generative tools, and the technical inequality between the parties, which can compromise the exercise of due process. In this environment, the notion of continuous human responsibility applies: although AI participates in the evidence production chain, the responsibility for verifying, reviewing, and validating the content remains entirely human.
The most recent guidelines from arbitral institutions reinforce this understanding. The CIArb Guideline on the Use of AI in Arbitration requires parties to indicate when they use AI to produce or process evidence and to independently verify the accuracy of the results (CIArb, 2024). The SVAMC Guidelines reiterate that technology cannot compromise the authenticity of the evidence (SVAMC, 2024). The VIAC Note recognizes the power of arbitrators to request supplementary information about the methods employed (VIAC, 2024), while the SCC Guide recommends the use of tools capable of detecting artificially generated or manipulated content (SCC, 2024). Together, these guidelines establish that the party that introduces technology into the procedure is responsible for demonstrating that it has not compromised the integrity of the evidence.
AI also gives rise to situations in which the burden of proof shifts. This occurs when the opposing party raises a plausible doubt about the authenticity of the material, especially in a context marked by deepfakes and sophisticated digital manipulation tools. In such situations, it falls to the party that presented the evidence to explain how it was obtained and to demonstrate the digital chain of custody.
This set of factors reinforces the importance of preserving the digital chain of custody. Evidence produced or processed by AI should be accompanied by information about the tool used, the parameters applied, the model versions, and any modifications made.
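By way of illustration only, since none of the guidelines cited here prescribes a particular format, such a traceability record could be as simple as a cryptographic hash of the file combined with the tool metadata just listed. The following sketch assumes hypothetical function and field names; a content hash fixes the evidence at the moment of submission, so any later alteration becomes detectable:

```python
import hashlib
from datetime import datetime, timezone

def evidence_record(file_bytes: bytes, tool: str,
                    model_version: str, parameters: dict) -> dict:
    """Build a provenance record for a piece of AI-processed evidence.

    The SHA-256 digest fixes the content at the time of registration;
    recomputing the hash later reveals any alteration of the file.
    """
    return {
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "tool": tool,
        "model_version": model_version,
        "parameters": parameters,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical example: register a machine-translated exhibit.
original = b"Exhibit C-14: contract, machine-translated from Portuguese."
record = evidence_record(original, tool="hypothetical-translator",
                         model_version="1.2.0", parameters={"target": "en"})

# Any party can later recompute the hash to confirm the file is unchanged.
assert record["sha256"] == hashlib.sha256(original).hexdigest()
```

The point of the sketch is not the particular fields but the principle: each link in the chain of custody pairs the content itself (via the digest) with the human-auditable metadata about how AI touched it.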
In summary, artificial intelligence does not diminish the burden of proof in arbitration; it transforms it. The party that makes use of AI assumes an expanded responsibility to demonstrate the authenticity, integrity, and reliability of the evidence presented, and must ensure adequate human supervision, methodological transparency, and full traceability of the material. In a scenario characterized by sophisticated possibilities of digital manipulation, risks of algorithmic error, and significant technical asymmetries, the burden of proof follows the technological risk introduced into the procedure, reaffirming that technology expands, rather than replaces, the evidentiary responsibility of the parties.
2.4. Limits of judicial cooperation in the search for truth
The complexity of digital evidence and the risk of manipulations by artificial intelligence can lead the parties, in exceptional situations, to seek the support of the Judiciary to preserve or enable the production of evidence in aid of arbitration – for example, through urgent injunctions, early production of evidence, or acts that depend on state coercive powers, especially in the face of third parties not subject to the jurisdiction of the arbitral tribunal (Lew; Mistelis; Kröll, 2003).
However, this cooperation has well-defined limits, as arbitration remains an autonomous procedure, founded on competence-competence and on the arbitrators' responsibility for conducting the evidentiary phase. State intervention cannot replace the evidentiary assessment of the arbitral tribunal or serve as an indirect route to relitigate issues of merit (Lew; Mistelis; Kröll, 2003).
As classic doctrine highlights, judicial intervention in arbitration should remain restricted to support measures intended to supply the coercive powers that arbitrators lack, such as ordering the preservation of evidence or the delivery of documents held by third parties. In these situations, state courts act only to make the arbitral process effective, without replacing the evidentiary judgment of the arbitral tribunal or interfering in the conduct of the merits (Lew; Mistelis; Kröll, 2003).
Another relevant limit is confidentiality. The submission of sensitive documents, technical metadata, or strategic business information to a judicial process, which is generally public, can compromise duties of confidentiality assumed by the parties in arbitration. Therefore, recourse to the Judiciary should be exceptional and, when unavoidable, accompanied by measures that reduce the risk of undue exposure.
Nor is it generally accepted that judicial cooperation be used strategically to transfer to the Judiciary the task of verifying the authenticity of digital evidence or of endorsing generic allegations of manipulation by artificial intelligence. In these situations, the discussion must be submitted to the arbitral tribunal itself, which is responsible for conducting the taking of evidence and assessing it. Only in extreme cases, in which it is argued that the award was rendered on the basis of false evidence, is there room for judicial review of the award, within the narrow limits of art. 32 of the Arbitration Law, as illustrated by recent decisions involving arbitrations with the participation of the Public Administration (AGU, 2023).
The legitimate space for judicial cooperation is therefore restricted to cases in which the effectiveness of the arbitral procedure depends on acts that require external coercive powers, such as orders directed to third parties, urgent preservation of volatile data, or preventive measures to avoid the destruction of evidence. Even in these cases, state action must be instrumental, always subordinate to the decisions of the arbitral tribunal and without invading its decision-making sphere.
3. Conclusion
Many argue that regulation is necessary precisely to preserve the human characteristics of the dispute resolution process without, however, stifling innovation. The main arbitral institutions have already been establishing important rules for the use of artificial intelligence, but regulation must come in measured and progressive doses, as has occurred with other technologies throughout history (CIArb, 2024; SVAMC, 2024; VIAC, 2024; SCC, 2024).
The technological race among large companies is irreversible, and those concerned with due process in arbitration need to recognize both the benefits and the risks of integrating artificial intelligence. In several jurisdictions, such as Switzerland and England, the analogy with the tribunal secretary offers a promising path: although useful, the secretary remains an assistant, limited by the arbitrator's directions and the fundamental principles of arbitration (Swiss Federal Tribunal, Case 4A_709/2014, 2015; P v Q [2017] EWHC 194 (Comm)). Similarly, artificial intelligence can and should be understood as a powerful tool, capable of streamlining proceedings, offering insights, and even assisting in the drafting of decisions, but always kept in the position of a technical assistant, subordinate to the final discretion and critical judgment of the human arbitrator (Lew; Mistelis; Kröll, 2003; Scherer, 2024).
Ultimately, the credibility of arbitration in an environment marked by disinformation and increasingly sophisticated digital manipulation will depend less on the speed of technology and more on the ability of parties and arbitrators to control it, audit it, and use it without abandoning evidentiary rigor. Artificial intelligence can strengthen the arbitral procedure, provided it is incorporated in a responsible, transparent, and proportional manner (Souza-McMurtrie, 2023; Delfino, 2023).


