DIGITAL EVIDENCE IN THE AGE OF DEEPFAKES: THE ADMISSIBILITY CHALLENGES OF AI-MANIPULATED VIDEOS, PHOTOS, AND AUDIO IN NIGERIAN CRIMINAL AND CIVIL TRIALS
Published on September 18, 2025
In today’s digital era, where technology shapes not only communication but also the very nature of truth, courts are increasingly confronted with novel challenges in assessing evidence. Among the most disruptive developments is the rise of deepfakes: AI-generated or manipulated videos, photos, and audio that can convincingly portray events that never occurred. Unlike traditional forms of digital alteration, deepfakes deploy advanced deep learning techniques to replicate facial expressions, voices, and gestures with startling accuracy, thereby blurring the line between fact and fabrication.
Nigeria, like many jurisdictions, has made significant strides in addressing the admissibility of electronic and computer-generated evidence through statutory reforms and judicial decisions. The enactment of the Evidence Act 2011, particularly Section 84, established a framework for admitting electronic evidence upon satisfying conditions of authenticity and reliability. Judicial authorities such as Trade Bank Plc v. Chami (2003) 13 NWLR (Pt. 836) 158 have entrenched the principle that courts cannot ignore the realities of technological advancement. Yet, the emergence of deepfakes has exposed critical gaps in this framework.
While Section 84 of the Evidence Act, 2011 focuses on the regular use and proper functioning of the computer system that produced the evidence, it does not directly address the integrity of the content itself, that is, whether the video, audio, or image genuinely reflects reality. This creates significant evidentiary and justice-related challenges in both criminal trials, where the State must prove guilt beyond reasonable doubt, and civil trials, where issues of defamation, privacy, and contractual disputes may hinge on manipulated media.
In this regard, Nigerian courts are now compelled to grapple with questions that go beyond traditional authentication, chief among them whether a video, audio recording, or image that satisfies the statutory conditions genuinely reflects the events it purports to capture.
The admissibility of digital evidence in Nigeria has undergone significant transformation over the last few decades. Prior to the enactment of the Evidence Act 2011, courts were reluctant to admit computer-generated evidence, largely due to concerns about authenticity, reliability, and the ease with which electronic records could be altered.
The turning point came with the recognition that the justice system could no longer afford to “shut its eyes to the mysteries of the computer,” as famously expressed in Esso West Africa Inc. v. Oyegbola (1969) NMLR 194 at 198, where the court emphasised that the law must adapt to technological realities, even before specific statutory provisions existed.
Section 84 of the Evidence Act 2011 marked a paradigm shift by expressly providing a framework for the admissibility of electronic evidence. Under this section, statements contained in documents produced by a computer are admissible once the party relying on such evidence satisfies conditions relating to the regular use of the computer, proper operation, and authenticity, typically supported by a certificate of compliance. Judicial authorities such as Kubor v. Dickson (2013) 4 NWLR (Pt. 1345) 534 and Dickson v. Sylva (2017) 8 NWLR (Pt. 1567) 167 have reinforced the mandatory nature of these requirements, stressing strict compliance before digital evidence can be admitted.
Despite these advancements, challenges persist. Section 84 addresses the process by which electronic documents are generated and stored, but it does not expressly regulate the content of such documents. In other words, once a digital exhibit passes the Section 84 threshold, courts often proceed to weigh it as credible evidence without a structured mechanism for assessing whether the material itself has been technologically manipulated. With the emergence of deepfakes, which can meet the technical admissibility requirements while being entirely fabricated, this gap in the law has become increasingly obvious.
Furthermore, the enactment of the Nigeria Data Protection Act 2023 (NDPA) has introduced new dimensions to the admissibility debate. Since deepfakes often involve the use of facial data and voice biometrics, questions now arise regarding whether their creation or use in litigation infringes privacy rights and data protection principles (see ss. 24–26 of the NDPA 2023 on the processing of sensitive personal data and the consent of the data subject). Nigerian courts must therefore strike a delicate balance between upholding evidentiary openness in the digital age and safeguarding the integrity of judicial proceedings and the fundamental rights of individuals.
Deepfakes represent a radical departure from earlier forms of digital manipulation. Traditional “cheap fakes” relied on crude editing techniques such as splicing, cropping, or audio overdubbing, which left detectable traces and could be challenged relatively easily in court. By contrast, deepfakes employ advanced deep learning models, particularly Generative Adversarial Networks (GANs), to replicate the likeness, voice, and gestures of real individuals with uncanny precision.
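To make the underlying mechanism concrete, the sketch below shows the adversarial training loop at the heart of a GAN, written in PyTorch. It is an illustration of the concept only: the toy dimensions, network sizes, and random stand-in data are assumptions made for brevity, and real deepfake systems rely on far larger convolutional models trained on genuine face or voice datasets.

```python
# Minimal sketch of the adversarial (GAN) training loop described above.
# Illustrative only: real deepfake generators are vastly larger and are
# trained on genuine face/voice data, not random noise.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # toy sizes chosen arbitrarily

# Generator: maps random noise to a synthetic sample (a flat vector
# standing in for an image or audio frame).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(32, DATA_DIM)   # stand-in for genuine media
    fake = generator(torch.randn(32, LATENT_DIM))

    # 1. Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the updated discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The point of legal significance is the arms race built into this loop: the generator is optimised precisely until the discriminator, and by extension an unaided human observer, can no longer reliably distinguish fabricated output from genuine material.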
The sophistication of these generative models presents two legal challenges for Nigerian courts: an evidentiary challenge, because synthetic media can satisfy Section 84’s procedural conditions while being entirely fabricated, and a privacy challenge, because deepfakes are built from the facial and voice data of real individuals.
Beyond evidentiary and privacy concerns, deepfakes also raise issues of criminal liability under the Cybercrimes (Prohibition, Prevention, etc.) Act 2015. For instance, Section 13 criminalises the alteration or suppression of computer data resulting in inauthentic data being relied upon as genuine, a description that aptly captures the essence of deepfake technology.
In addition, the Act contains provisions addressing the misdirection of electronic messages, unlawful interception of biometric data, computer-related fraud, identity theft and impersonation, and cyberstalking, pursuant to Sections 11, 12, 14, 22 and 24 respectively. Collectively, these provisions provide a statutory framework for prosecuting the misuse of deepfakes in Nigeria, particularly where synthetic media is weaponised for purposes such as defamation, harassment, or fraud.
Consequently, the age-old assumption that “seeing is believing” no longer holds. The law must therefore recalibrate the standards of authenticity in an era where sight and sound can be convincingly manufactured.
Nigerian courts have consistently shown openness to technological evidence; however, with the rise of deepfakes, mere openness is insufficient. What is required is structured vigilance. Courts must adopt a more active gatekeeping role by insisting on detailed Section 84 certificates, encouraging the use of expert testimony under Section 68, and weighing prejudice against probative value before admitting highly sensitive media.
But beyond certificates, expanded legislative provisions are necessary. Section 84 could be amended to expressly capture the risks of AI-generated manipulation, mandating that compliance certificates include declarations that the content has not been synthetically altered. This could work alongside a broader Artificial Intelligence Act regulating generative AI and synthetic media. In this way, admissibility reforms and substantive AI regulation would complement, not substitute, each other.
Comparatively, common law jurisdictions such as the United States and the United Kingdom have begun exploring provenance technologies (e.g., Content Authenticity Initiative (CAI), Coalition for Content Provenance and Authenticity (C2PA) standards) and stricter pre-trial disclosure rules for audio-visual exhibits. While Nigerian courts could adapt these lessons within the Evidence Act framework, doing so will likely require legislative or judicial intervention. Appellate courts, for instance, could expand the scope of Section 84 through purposive interpretation, creating judicial precedents that directly address synthetic media.
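As a rough illustration of the provenance idea underlying C2PA and the CAI, the sketch below signs a hash of a media file at capture time and verifies it at trial. The file name, keys, and workflow are hypothetical simplifications: real C2PA manifests embed far richer metadata (edit history, certificate chains) inside the file itself.

```python
# Simplified illustration of cryptographic content provenance: a capture
# device signs a hash of the media at creation, and an examiner later
# checks the exhibit against that signature. The exhibit file and keys
# here are hypothetical stand-ins.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def media_digest(path: str) -> bytes:
    """Hash the file so that any change to its bytes is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# Create a stand-in "exhibit" so the sketch runs end to end.
with open("exhibit_video.mp4", "wb") as f:
    f.write(b"stand-in video bytes")

# At capture time: the device signs the digest with its private key.
device_key = Ed25519PrivateKey.generate()
signature = device_key.sign(media_digest("exhibit_video.mp4"))

# At trial: anyone holding the device's public key can test whether the
# exhibit still matches what was signed at capture.
try:
    device_key.public_key().verify(signature, media_digest("exhibit_video.mp4"))
    print("Exhibit matches the capture-time signature.")
except InvalidSignature:
    print("Exhibit has been altered since it was signed.")
```

On this model, a tribunal can distinguish an exhibit that is unaltered since capture from one that has been modified, which is precisely the content-level question the Section 84 certificate currently leaves unanswered.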
Deepfakes expose the tension between technological advancement and the integrity of evidence in Nigeria’s justice system. While Section 84 of the Evidence Act provides a statutory gateway for admitting digital evidence, it does not directly confront the manipulation risks posed by AI-generated media. Therefore, authenticity must now be tested beyond the computer system to the truthfulness of the content itself.
Moving forward, Nigerian courts must adopt stricter authentication practices, expand reliance on expert testimony, and remain alert to the privacy and cybercrime dimensions of synthetic media. At the same time, modest statutory and procedural reforms, such as requiring Section 84 certificates to declare that the content has not been AI-manipulated, could strengthen admissibility rules without overhauling the entire Act.