Worried about AI hijacking your voice for a deepfake? This tool could help

November 13, 2023 – Media Mention
NPR

Yaél M. Weitz


Counsel Yaél Weitz was quoted in an NPR article discussing the development of new tools that make it easier to detect AI-generated deepfakes and harder for AI systems to create them. The article also discusses potential legislation to address liability for the use of people's likenesses without consent.

The article notes that "[a]rtificial intelligence has gotten so good at mimicking people's physical looks and voices that it can be hard to tell if they're real or fake." This has posed a particular problem for celebrities, who are trying to stay ahead of the AI bots. The tools being developed to combat deepfakes include one that scrambles the signal "such that it prevents the AI-based synthesis engine from generating an effective copycat." Another approach is deepfake detection, such as embedding digital watermarks in video and audio so that users can identify whether content was made by AI.

The article notes that members of the U.S. Senate recently announced they were discussing a new bipartisan bill, the "Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2023" (the "NO FAKES Act of 2023" for short), that would hold the creators of deepfakes liable if they use people's likenesses without authorization.

"When it comes to preventing deepfake abuses, consent is key." 

"The bill would provide a uniform federal law where currently the right of publicity varies from state to state," said Weitz. 

Right now, only half of the U.S. states have "right of publicity" laws, which give an individual the exclusive right to license the use of their identity for commercial promotion, and they offer differing degrees of protection. But a federal law may be years away.

Read the full article in NPR here. Access may require a subscription.