New Hampshire's attorney general has charged the political consultant who organized robocalls using an AI-generated imitation of US President Joe Biden's voice ahead of the presidential primaries; the consultant now faces a $6 million fine and several criminal charges.
The Federal Communications Commission (FCC) said the fine it proposed Thursday for Steven Kramer is its first involving generative AI technology. Lingo Telecom, the company accused of transmitting the calls, faces a $2 million fine, although in both cases the parties could settle or continue negotiating, the FCC said.
Kramer, 54, faces 13 felony counts of voter suppression and misdemeanor charges of impersonating a candidate after thousands of New Hampshire residents received a robocall message urging them not to vote until November.
Kramer has admitted to orchestrating the message, which was sent to thousands of voters two days before the Jan. 23 primary. The message reproduced a voice resembling the Democratic president's, used his characteristic phrase "what a bunch of malarkey," and falsely insinuated that voting in the primary would prevent voters from casting ballots in the November presidential election.
An attorney for Kramer could not immediately be identified.
There is growing concern in Washington that AI-generated content could deceive voters in November's presidential and congressional elections. Some senators want to pass legislation before November addressing AI threats to election integrity.
“New Hampshire remains committed to ensuring that our elections remain free of unlawful interference and our investigation into this matter remains ongoing,” said Attorney General John Formella.
Formella said he hopes the state and federal actions will "send a strong deterrent signal to anyone who might consider interfering with the election, whether through the use of artificial intelligence or otherwise."
On Wednesday, FCC Chairwoman Jessica Rosenworcel proposed requiring disclosure of AI-generated content in political ads on radio and television, for both candidate and issue ads, but not banning any AI-generated content.
The FCC said AI is expected to play a substantial role in political ads in 2024. It highlighted the potential for deceptive "deepfakes," which are "altered images, videos or audio recordings depicting people doing or saying things that they did not actually do or say, or events that did not actually occur."
[With information from Reuters and AP]