Some people fear that artificial intelligence (AI) will trigger the apocalypse and bring about mass extinction. But the reality is that it is already breeding a different type of catastrophe: one characterized by an onslaught of misinformation, fabricated data, compromised cybersecurity, and more.
On Saturday, January 20, residents across New Hampshire received an automated phone call supposedly from President Biden, urging voters against participating in the January 23 primary election. However, an announcement from the state attorney general warned citizens to disregard the call and stated that it used generative AI to imitate Biden’s voice.
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the recording said. “Your vote makes a difference in November, not this Tuesday.”
According to the attorney general’s office, the call was fabricated to appear as though an officer of a Democratic committee had sent it. It also opened using one of Joe Biden’s most popular phrases, “What a bunch of malarkey.”
AI has been used in the past in attempts to thwart political campaigns. In June of 2023, former Florida governor Ron DeSantis' presidential campaign posted deepfaked photos on Twitter (now X) of his political rival, Donald Trump, kissing and hugging former health official Anthony S. Fauci. Now, the robocalls mimicking Biden indicate the continuation of AI use in a political environment. With the continued development of generative AI, officials can only speculate what sort of electoral chaos may ensue as a result of false information. Whether the American people choose to accept it or not, the use of AI in politics has dramatically expanded the campaign playing field.
Lawmakers across the US have been pushing for stricter regulation of political content created by artificial intelligence. House and Senate committees have met nearly three dozen times and introduced at least 30 AI-focused bills in attempts to moderate the technology's use. Many of these bills emphasize the importance of guarding against deepfakes, whether audio or visual.
The Biden robocall situation is only one of many significant accounts of recent AI misuse. Another prominent figure, Taylor Swift, became the target of AI-generated deepfakes when fake, explicit images of her were released online. The images outraged millions of her fans, drawing even more attention to the escalating issue. As AI continues to be used unethically, victims are left to hope that prominent cases like Biden's and Swift's will raise enough awareness for more regulations to be established.