What Are Deepfakes?

Deepfakes are AI-generated synthetic videos and audio that convincingly mimic human speech and movement, often impersonating real people in scenarios that never occurred. These AI-powered tools are the latest and most dangerous threat in the ongoing battle against disinformation. 

How Are They Being Used?  

  • Disinformation and Electoral Influence: Deepfake technology surged as a disinformation tool in 2024, with at least 133 documented campaigns impacting more than 30% of the 60 countries that held elections that year. Thirty-seven percent (37%) of these campaigns focused on U.S. elections, with actors linked to China, Iran, Russia, and North Korea weaponizing deepfakes to polarize and misinform voters.1
  • Cybercrime and Corporate Fraud: Criminals increasingly use deepfake technology to impersonate senior executives and infiltrate corporate systems. A common scheme involves fraudsters cloning an executive’s voice to authorize a fraudulent wire transfer or gain access to sensitive data, exposing companies to corporate theft, intellectual property compromise, and broader economic harm.2
  • Weaponized Harassment and Exploitation: Deepfake pornography made up 98% of all deepfake videos online in 2023, often targeting women, activists, or dissidents.3 Victims of these crimes face devastating personal and professional consequences, with such harassment increasingly used to silence critics of authoritarian regimes.4 

Blockchain Solutions and the Immutable Watermark 

Blockchain technology offers a compelling countermeasure to deepfakes by leveraging its unique attributes of immutability, transparency, and decentralized recordkeeping. When provenance metadata – data about where and how an image was captured or created – is securely stored and made public on a blockchain, analysts can verify the origin and history of a piece of media and confirm that it remains unaltered. This makes it nearly impossible for bad actors to tamper with or falsify content undetected.
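The core mechanism is straightforward: hash the media at capture time, write the hash and its provenance metadata to an immutable ledger, and later re-hash any circulating copy to see whether it matches. The minimal Python sketch below illustrates this flow; the in-memory LEDGER dictionary and the register_media/verify_media helpers are hypothetical stand-ins for a real on-chain registry.

```python
import hashlib
import time

# A plain dict stands in for an immutable on-chain registry in this sketch;
# a real deployment would write records to a blockchain via a smart contract.
LEDGER: dict[str, dict] = {}

def fingerprint(media_bytes: bytes) -> str:
    """Derive a content fingerprint: the SHA-256 hash of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def register_media(media_bytes: bytes, creator: str, device: str) -> str:
    """Record provenance metadata under the media's fingerprint at capture time."""
    digest = fingerprint(media_bytes)
    # Once written, an on-chain record could not be edited or deleted.
    LEDGER[digest] = {"creator": creator, "device": device, "timestamp": time.time()}
    return digest

def verify_media(media_bytes: bytes) -> dict | None:
    """Return the registered provenance if the media is byte-for-byte unaltered."""
    return LEDGER.get(fingerprint(media_bytes))

original = b"...raw camera sensor output..."
register_media(original, creator="newsroom-cam-01", device="field camera")

assert verify_media(original) is not None            # authentic: record found
assert verify_media(original + b"edit") is None      # any alteration changes the hash
```

Because SHA-256 output changes completely if even a single byte of the file changes, any post-capture manipulation breaks the match against the registered record.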

Blockchains also enable networks of verification nodes to maintain these provenance records, using economic incentives to ensure honest participation. For example, in proof-of-stake blockchains like Ethereum, validators risk losing a portion of their staked cryptocurrency through slashing mechanisms if they submit false data or act maliciously, thereby upholding the integrity of the system.
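As a rough illustration of how slashing aligns incentives, the sketch below models validators who attest to a media hash and forfeit part of their stake if their attestation contradicts the verifiable record. The names and the penalty rate are illustrative assumptions; Ethereum’s actual slashing conditions and amounts are protocol-defined and considerably more involved.

```python
from dataclasses import dataclass

SLASH_FRACTION = 0.5  # illustrative penalty; real chains use protocol-specific rates

@dataclass
class Validator:
    name: str
    stake: float  # cryptocurrency locked as collateral for honest behavior

def settle_attestations(validators, attestations, true_hash):
    """Slash any validator whose attestation contradicts the verifiable record."""
    for v in validators:
        if attestations[v.name] != true_hash:
            v.stake *= (1 - SLASH_FRACTION)  # dishonesty burns collateral

validators = [Validator("alice", 32.0), Validator("bob", 32.0)]
true_hash = "ab34..."
attestations = {"alice": "ab34...", "bob": "ffff..."}  # bob lies about the hash

settle_attestations(validators, attestations, true_hash)
print(validators)  # alice keeps 32.0; bob is slashed to 16.0
```

Because lying costs real collateral while honest attestation does not, rational validators are pushed toward reporting media fingerprints accurately.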

Furthermore, zero-knowledge proofs (ZKPs) strengthen blockchain’s ability to verify media by confirming authenticity without revealing sensitive information. A ZKP allows a prover to convince a verifier that a claim is true – for example, that a piece of media matches its original, unaltered state – without disclosing the underlying data itself, preserving privacy while ensuring accuracy.
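Production ZKP systems (such as zk-SNARKs) are too involved to reproduce here, but the classic Schnorr identification protocol captures the core idea: a prover convinces a verifier that it knows a secret without ever transmitting that secret. The toy Python sketch below uses deliberately tiny, insecure parameters purely for illustration; a media-verification system would instead prove statements about media hashes inside a succinct proof system.

```python
import secrets

# Toy Schnorr parameters (tiny and insecure; real systems use ~256-bit groups).
# p is prime, q = 509 is a prime factor of p - 1, and g generates the order-q subgroup.
p, q, g = 1019, 509, 4

# Prover's secret (e.g., a key attesting "I hold the original media"); never revealed.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)  # public key, published alongside the on-chain record

# --- one round of the Schnorr identification protocol ---
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)              # 1. prover sends a commitment
c = secrets.randbelow(q)      # 2. verifier replies with a random challenge
s = (r + c * x) % q           # 3. prover answers using the secret

# 4. verifier accepts iff the algebra holds; x itself was never transmitted
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: prover knows x without revealing it")
```

The verifier learns only that the check passes, which requires knowledge of x; nothing about x itself leaks from the transmitted values (t, c, s).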

In contrast, legacy approaches such as metadata watermarking and the Coalition for Content Provenance and Authenticity (C2PA) standard for identifying AI-manipulated media face real limitations. These methods lack the immutability, auditability, and economic incentive mechanisms of blockchains. C2PA provenance data is embedded as file metadata and is often stripped when a file is re-encoded or converted – such as from JPEG to PNG. Legacy detection methods also rely on AI models trained against generation techniques that evolve constantly, leaving them chasing a moving target. And C2PA audit trails are hosted by centralized entities like Adobe, Microsoft, and Google – all of which have suffered recent data breaches. Blockchain-based deepfake detection stands out as the most effective approach to mitigating the risks of deepfakes, addressing critical issues like AI-enabled harassment, corporate theft, and political disinformation.

Who We Are

The Digital Chamber (TDC) advocates for national and international standards that leverage blockchain’s inherent strengths to mitigate AI’s greatest risks.