US lawmakers introduce the NO FAKES Act to combat AI deepfakes and safeguard rights, though critics warn it may lead to censorship concerns.
As artificial intelligence continues to evolve, its capabilities are increasingly being exploited for harmful purposes, including defrauding cryptocurrency users. To address this growing threat, a group of US lawmakers has introduced a new bill aimed at protecting citizens from AI-generated deepfakes and unauthorized digital copies.
On September 12, Representatives Madeleine Dean and María Elvira Salazar unveiled the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. This legislation seeks to safeguard Americans from the misuse of AI and combat the spread of unauthorized AI-generated deepfakes. The lawmakers highlighted that the bill would empower individuals to take legal action against malicious actors who create or distribute unauthorized digital replicas of others for profit. Furthermore, it would protect media platforms from legal liability if they remove offending content.
While the NO FAKES Act is intended to promote innovation and protect free speech, not everyone is convinced of its positive impact. Some critics argue that the bill may unintentionally stifle lawful expression.
Concerns Over Censorship
Corynne McSherry, the legal director at the Electronic Frontier Foundation, a prominent digital rights advocacy organization, has expressed serious concerns about the bill. She warns that the NO FAKES Act could pave the way for “private censorship.”
In a blog post published in August, McSherry noted that while the bill might be beneficial for legal professionals, it could become a “nightmare” for everyday citizens. She criticized the NO FAKES Act for offering fewer protections for lawful speech compared to the Digital Millennium Copyright Act (DMCA), which governs the use of copyrighted material.
Under the DMCA, individuals have a straightforward process to file a counter-notice and restore their content if it has been wrongfully taken down. However, McSherry explained that the NO FAKES Act would require individuals to defend their rights in court within a tight 14-day window. “The wealthy and powerful have access to legal teams that can handle such demands, but most creators, activists, and citizen journalists do not,” she added.
While McSherry acknowledges that AI-generated deepfakes can cause significant harm, she believes that the flaws in the NO FAKES Act could undermine its goals.
Rising Threat of AI in Crypto Scams
In recent months, the misuse of AI technology has led to an increase in sophisticated scams targeting cryptocurrency users. In the second quarter of 2024 alone, software company Gen Digital reported that AI-powered deepfakes were responsible for stealing at least $5 million in cryptocurrency. Security experts are urging users to remain vigilant as AI-driven scams become increasingly convincing and difficult to detect.
Web3 security firm CertiK has also raised alarms about where AI-based attacks may head next. The firm predicts that such malicious activity will not stop at video and audio manipulation and could soon target cryptocurrency wallets that rely on facial recognition for security. CertiK advises providers of wallets with this feature to assess their readiness against AI attack vectors.
As AI technology continues to advance, the introduction of the NO FAKES Act reflects lawmakers' growing concern about its potential for abuse. Striking the right balance between protecting individuals from harm and preserving free speech, however, remains a challenge that legislators and digital rights advocates must navigate carefully.