
Deepfakes in an election year — is Asia ready to handle misinformation campaigns?

A video of the late Indonesian President Suharto endorsing the political party he once led went viral ahead of Indonesia's February 14 elections.  

The AI-generated deepfake video, which replicated his likeness and voice, received 4.7 million views on X alone.  

This was not an isolated instance.  

In Pakistan, a deepfake of former Prime Minister Imran Khan surfaced around the national elections, claiming that his party would boycott them. Meanwhile, in the United States, New Hampshire voters heard a deepfake of President Joe Biden urging them not to vote in the presidential primary.  

Deepfakes of politicians are becoming more common, especially with 2024 slated to be the largest worldwide election year in history.  

According to reports, at least 60 countries and more than four billion people will vote for presidents and legislators this year, raising serious concerns about deepfakes. 

According to a Sumsub report published in November, the number of deepfakes worldwide increased tenfold between 2022 and 2023. In APAC alone, deepfakes rose 1,530% over the same period.  

Between 2021 and 2023, identity fraud rates in online media, a category that includes social platforms and digital advertising, rose 274%, the largest increase of any sector. Professional services, healthcare, transportation, and video gaming were among the other industries affected by identity fraud.  

According to Simon Chesterman, senior director of AI governance at AI Singapore, Asia is not prepared to combat deepfakes in elections in terms of regulation, technology, or education.  

CrowdStrike’s 2024 Global Threat Report stated that with the number of elections scheduled this year, nation-state actors such as China, Russia, and Iran are very likely to execute misinformation or disinformation campaigns to cause disruption.  

“The more serious interventions would be if a major power decides they want to disrupt a country’s election — that’s probably going to be more impactful than political parties playing around on the margins,” Chesterman said.  

However, most deepfakes will still be created by actors within the countries themselves, he said.  

According to Carol Soon, principal research fellow and head of the society and culture department at Singapore’s Institute of Policy Studies, domestic actors can include opposition parties and political opponents, as well as far-right and far-left extremists. 

Deepfake risks  

At the very least, deepfakes poison the information ecosystem, making it difficult for individuals to locate reliable information or form informed opinions about a party or candidate, according to Soon.  

Voters may be turned off by a particular politician if a scandalous story goes viral before it is proven false, according to Chesterman. “Although several governments have tools (to prevent online falsehoods), the concern is the genie will be out of the bottle before there’s time to push it back in.”  

“We saw how quickly X could be taken over by deep fake pornography involving Taylor Swift—these things can spread incredibly quickly,” he said, adding that legislation is frequently insufficient and difficult to police. “It’s often too little too late.”  

According to Adam Meyers, CrowdStrike’s head of counter-adversary operations, deepfakes may also trigger confirmation bias in people: “Even if they know in their hearts it’s not true, if it’s the message they want and something they want to believe in, they’re not going to let that go.”  

Chesterman also stated that falsified footage demonstrating electoral malpractice, such as ballot stuffing, might lead to people losing faith in the election’s legitimacy.  

Candidates, on the other hand, may deny unflattering or inconvenient truths about themselves and blame deepfakes instead, Soon said. 

Who should take responsibility?  

According to Chesterman, there is a growing recognition that social media platforms must shoulder more responsibility as a result of their quasi-public status.  

In February, 20 leading technology companies, including Microsoft, Meta, Google, Amazon, IBM, artificial intelligence startup OpenAI, and social media companies such as Snap, TikTok, and X, announced a joint commitment to combat the deceptive use of AI in elections this year.  

Soon said the technology accord is an important first step, but its effectiveness will depend on implementation and enforcement. With internet companies adopting different approaches across their platforms, she believes a multi-pronged strategy is needed.  

Soon also said tech companies must be transparent about the decisions they make, such as the kinds of processes they put in place.  

However, Chesterman believes it is unfair to expect private corporations to perform what are essentially public functions. Deciding which content to allow on social media is a difficult call, and companies may take months to decide, he said.  

“We should not just be relying on the good intentions of these companies,” Chesterman went on to say. “That’s why regulations need to be established and expectations need to be set for these companies.”  

To that end, the Coalition for Content Provenance and Authenticity (C2PA), a non-profit organization, has introduced digital credentials for content, which will display verified information such as the creator’s name, where and when it was created, and whether generative AI was used to create the material. 

C2PA members include Adobe, Microsoft, Google, and Intel.  

OpenAI said it will add C2PA content credentials to images generated with DALL·E 3 early this year. 
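To illustrate the idea behind content credentials, here is a minimal Python sketch of a signed provenance manifest. It is a simplification, not the actual C2PA implementation: real C2PA manifests are embedded in the asset itself and signed with certificate-based (COSE/X.509) signatures, whereas the key, function names, and manifest fields below are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical shared key standing in for the issuer's signing key.
# Real C2PA manifests are signed with X.509 certificates (COSE);
# HMAC is used here only to illustrate the tamper-evidence idea.
ISSUER_KEY = b"demo-issuer-key"


def sign_manifest(asset_bytes: bytes, creator: str, tool: str,
                  used_generative_ai: bool) -> dict:
    """Build a simplified content-credential manifest for an asset."""
    claim = {
        "creator": creator,
        "tool": tool,
        "used_generative_ai": used_generative_ai,
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the manifest is intact and matches the asset bytes."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was altered after signing
    return manifest["claim"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


if __name__ == "__main__":
    image = b"...raw image bytes..."
    manifest = sign_manifest(image, creator="Example Studio",
                             tool="image generator", used_generative_ai=True)
    print(verify_manifest(image, manifest))            # True: asset untouched
    print(verify_manifest(image + b"edit", manifest))  # False: asset modified
```

The point is tamper evidence: any edit to the asset or to the claim invalidates the signature, which is what would let a platform flag content whose credentials no longer check out.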

In a Bloomberg House interview at the World Economic Forum in January, OpenAI co-founder and CEO Sam Altman said the company was “quite focused” on preventing its technology from being used to manipulate elections.  

“I think our role is very different than the role of a distribution platform” like a social media site or news publisher, he said. “We have to collaborate with them, so it’s like you create here and distribute here. And there should be a nice conversation between them.” 

Meyers recommended establishing a bipartisan, non-profit technology organization devoted entirely to the analysis and detection of deepfakes.  

“The public can then send them content that they suspect is manipulated,” he explained. “It’s not foolproof but at least there’s some sort of mechanism people can rely on.”  

While technology is part of the solution, Chesterman believes much of the burden falls on consumers, who are still unprepared.  

Soon, likewise, emphasized the importance of public education.  

“We need to continue outreach and engagement efforts to heighten the sense of vigilance and consciousness when the public comes across information,” she went on to say.  

The public needs to be more vigilant: in addition to fact-checking content that is highly suspicious, users should verify critical pieces of information, particularly before sharing it with others, she said.  

“There is something for everyone to do,” Soon stated. “It’s all hands on deck.”  

— CNBC’s MacKenzie Sigalos and Ryan Browne contributed to this story. 
