BitBulteni
Policy & Regulation | July 12, 2024 | BitBulteni

Protecting Against AI Deep Fakes: The Importance of the COPIED Act

A bipartisan group of senators has introduced a new bill that would mandate watermarking of AI-generated content to prevent the misuse of artificial intelligence (AI) deep fakes.

This bill, introduced by Senator Maria Cantwell (D-WA), Senator Marsha Blackburn (R-TN), and Senator Martin Heinrich (D-NM), proposes a standard method for watermarking AI-generated content.

The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), would strengthen protections for creators and establish controls on the types of content AI can be trained on.

Senator Cantwell stated that this bill would provide “much-needed transparency” to AI-generated content, while emphasizing that it would enable “creators, including local journalists, artists and musicians, to take back control of their content.”

The bill would require AI service providers to attach origin information to the content their tools produce, implemented in a “machine-readable” format that cannot be bypassed or removed using AI-based tools.

The Federal Trade Commission (FTC) would oversee enforcement of the COPIED Act, treating violations as unfair or deceptive conduct, similar to other violations under the FTC Act. Since the introduction of artificial intelligence, there has been much debate about its ethical implications, given the technology’s ability to crawl large amounts of data across the web. These concerns were evident when tech giant Microsoft stepped back from its observer seat on OpenAI’s board.

“AI has given malicious actors the ability to create deep fakes by impersonating any individual, including members of the creative community, without their consent and profiting from fake content,” said Senator Blackburn. The proposed bill coincides with a 245% increase in deepfake-related fraud and scams.

A report from Bitget estimates that losses from such schemes will be worth $10 billion by 2025. In the crypto space, scammers are using artificial intelligence to impersonate well-known personalities such as Elon Musk and Vitalik Buterin to trick users.

In June 2024, a client of crypto exchange OKX lost more than $2 million after attackers managed to bypass the platform’s security using deep fake videos of the victim.

A month earlier, Hong Kong authorities cracked down on a scam platform that used Elon Musk’s likeness to mislead investors. These incidents show how serious the consequences of malicious AI use can be, underscoring the importance of the COPIED bill.

Meanwhile, tech giant Google has been criticized by National Cybersecurity Center (NCC) founder Michael Marcotte for taking inadequate precautions against crypto-targeted deep fakes.

Marcotte stated that Google should do more to prevent fake content from spreading on its platforms. These criticisms suggest that big tech companies need to be more proactive in preventing the misuse of AI.

The COPIED bill is seen as an important step towards ensuring the ethical and safe use of artificial intelligence technology. While the rapidly evolving capabilities of AI offer great opportunities in content production, they also present potential for abuse.

Therefore, such legal regulations are critical to both protect the rights of creators and ensure public safety. If the bill becomes law, it will ensure that AI-based content is more transparent and traceable, thus preventing deep fakes and other abuses.

Tags: Artificial intelligence, AI, Deep fake, COPIED Act, Senator Maria Cantwell, Senator Marsha Blackburn, Senator Martin Heinrich
