AI being used to deceive voters worldwide

LONDON — Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone with a smartphone and a devious imagination to create fake – but convincing – content aimed at fooling voters.

It marks a quantum leap from a few years ago, when creating phony photos, videos or audio clips required teams of people with time, technical skill and money. Now, using free and low-cost generative artificial intelligence services from companies like Google and OpenAI, anyone can create high-quality “deepfakes” with just a simple text prompt.

A wave of AI deepfakes tied to elections in Europe and Asia has coursed through social media for months, serving as a warning for more than 50 countries heading to the polls this year.

“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” said Henry Ajder, a leading expert in generative AI based in Cambridge, England.

The question is no longer whether AI deepfakes could affect elections, but how influential they will be, said Ajder, who runs a consulting firm called Latent Space Advisory.

As the US presidential race heats up, FBI Director Christopher Wray recently warned about the growing threat, saying generative AI makes it easy for “foreign adversaries to engage in malign influence.”

With AI deepfakes, a candidate’s image can be smeared, or softened. Voters can be steered toward or away from candidates — or even to avoid the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear.

Some recent examples of AI deepfakes include:

A video of Moldova’s pro-Western president throwing her support behind a political party friendly to Russia.

Audio clips of Slovakia’s liberal party leader discussing vote rigging and raising the price of beer.

A video of an opposition lawmaker in Bangladesh — a conservative Muslim majority nation — wearing a bikini.

The novelty and sophistication of the technology makes it hard to track who is behind AI deepfakes. Experts say governments and companies are not yet capable of stopping the deluge, nor are they moving fast enough to solve the problem.

As the technology improves, “definitive answers about a lot of the fake content are going to be hard to come by,” Ajder said.

Some AI deepfakes aim to sow doubt about candidates’ allegiances.

Audio-only deepfakes are especially hard to verify because, unlike photos and videos, they lack telltale signs of manipulated content.

In Slovakia, audio clips resembling the voice of the liberal party chief were shared widely on social media just days before parliamentary elections. The clips purportedly captured him talking about hiking beer prices and rigging the vote.

It’s understandable that voters might fall for the deception, Ajder said, because humans are “much more used to judging with our eyes than with our ears.”

In the US, robocalls impersonating President Joe Biden urged voters in New Hampshire to abstain from voting in January’s primary election. The calls were later traced to a political consultant who said he was trying to publicize the dangers of AI deepfakes. – AP
