With the rise of AI-driven deepfakes, an increasingly urgent conversation is forming about how the technology will shape the future of both visual media and politics. From the use of AI to spread false information and bypass fact-checkers to the potential for malicious actors to impersonate people online, the implications of deepfakes are far-reaching. To explore these issues, the U.S. House of Representatives Subcommittee on Civil Rights and Civil Liberties will hold a hearing on Tuesday examining AI-driven deepfakes.
Leading the hearing will be Congresswoman Nancy Mace, who has been a prominent voice on deepfakes and the military use of AI since joining the subcommittee a year ago. In her opening statement, Mace outlined the potential impacts of AI-driven deepfakes and argued for greater regulation of the technology. “The proliferation of deepfakes in our society can have serious implications for both free speech and the democratic process,” Mace said. “If we do not work together to detect and prevent deepfake technologies from entering our society, we may wind up with a world in which facts and truth become irrelevant.”
For the hearing, Mace has called several prominent experts on the technology to testify, including Danielle Keats Citron, a professor at the University of Maryland’s School of Law and a leading advocate for greater regulation of deepfakes. Citron argued for a proactive approach to deepfakes, noting how little protection existing laws provide. “I think the most important point to garner from this hearing is that existing legal infrastructure provides limited resources and leaves us vulnerable in the face of an expansion of deepfake technology,” Citron said.
Other witnesses, such as Dr. Jonathan Follett of the Partnership on AI, argued that AI-driven deepfakes should be regulated, but only where the technology is likely to cause harm. “AI-driven deepfakes should be regulated,” Follett said. “But what is important is to differentiate between different types of deepfakes and to focus our efforts on those that are the most harmful, such as those designed to deceive or undermine trust.”
Mace also discussed the military use of AI-driven deepfakes, arguing that it should be subject to greater oversight. “Exploring and regulating military use of these technologies warrants our attention and action,” Mace said. “We should consider, through a measured approach, regulations or prohibitions around military use of AI-driven deepfakes, which endanger innocent lives.”
Tuesday’s hearing is an important step in examining the implications of AI-driven deepfakes. With regulation of the technology already underway in several states, it remains to be seen how best to protect the public while still allowing the technology to advance. But it is clear that Nancy Mace and other leaders in Congress are determined to ensure the conversation around deepfakes is an informed one.