In brief: As AI-generated deepfakes become more realistic and convincing by the day, Microsoft has called on Congress to pass laws protecting against their use for election manipulation, crimes, and abuse. The plea comes just a few weeks after the US Senate introduced a bill to create a legal framework for ethical AI development.

Brad Smith, Microsoft's President and Vice Chairman, writes that while the tech sector and non-profit groups have taken steps to address the problems posed by deepfakes, particularly their use in fraud, abuse, and manipulation targeting children and seniors, laws need to evolve to combat these issues.

Smith urged the US to pass a comprehensive deepfake fraud statute that would give law enforcement a legal framework to prosecute criminals who use the technology to steal from everyday Americans. He also wants lawmakers to update federal and state laws on child sexual exploitation, abuse, and non-consensual intimate imagery to cover AI-generated content.

Microsoft also wants Congress to require AI system providers to use tools that label synthetic content, which it says is essential to building trust in the information ecosystem.

Earlier this month, the US Senate introduced new legislation called the "Content Origin Protection and Integrity from Edited and Deepfaked Media Act" (COPIED Act). The act is designed to outlaw the unethical use of AI-generated content and deepfake technology, and it would allow victims of sexually explicit deepfakes to sue the people who created them.

Deepfakes have been around for years, but advances in the underlying technology have thrust them into the spotlight recently. The explicit images of Taylor Swift shared online in January, seen by over 27 million people on X alone, prompted lawmakers to call for changes.

In the UK, explicit deepfakes were made illegal under the Online Safety Act in October 2023. PornHub, meanwhile, has banned deepfakes since 2018.

The political implications of deepfakes have proven a warranted concern during this election year. The January robocalls that used a cloned version of Joe Biden's voice led to the President calling for a ban on AI voice impersonations. Elsewhere, Microsoft has issued several warnings about China using generative AI to try to influence the election.

This week, Elon Musk reposted a digitally altered video of Kamala Harris in which her deepfaked voice calls her the "ultimate diversity hire" and describes President Biden as senile. Many say the video violates X's own rules on posting manipulated content.