The Federal Communications Commission (FCC) has taken a significant step against the rise of AI-generated voices in robocalls by declaring them illegal. The ruling, announced on Thursday, empowers states to prosecute those responsible for these deceptive calls. FCC Chairwoman Jessica Rosenworcel emphasized the urgency of addressing the misuse of AI-generated voices, which have been used to extort vulnerable individuals, impersonate celebrities, and misinform voters.
The move follows a concerning incident in New Hampshire, where voters received robocalls impersonating US President Joe Biden ahead of the state’s presidential primary. The calls, linked to companies in Texas, urged voters not to participate in the primary. An estimated 5,000 to 25,000 such calls were placed, prompting a criminal investigation.
The FCC highlighted the potential for AI-generated voice calls to spread misinformation by impersonating public figures and even family members. While state attorneys general have the authority to prosecute perpetrators for scams or fraud, the FCC’s decision directly targets the use of AI-generated voices in robocalls, providing additional legal avenues for enforcement.
The action taken by the FCC aligns with calls from attorneys general across 26 states, who urged restrictions on AI use in marketing calls. Led by Pennsylvania Attorney General Michelle Henry, the effort underscores the importance of safeguarding consumers from deceptive practices facilitated by advancing technology.
Concerns about AI manipulation extend beyond robocalls, with deepfake technology posing threats to electoral integrity globally. From the US and UK to India, political figures have been targeted by audio deepfakes, raising alarms among cybersecurity authorities about the potential impact on elections. As the technology continues to evolve, regulatory measures become essential to combating emerging threats and protecting democratic processes.