Will AI help or hurt elections in Asia?

Across Asia, the rise of artificial intelligence is posing a dilemma for democracy. From the Philippines to South Korea, governments and politicians are grappling with the double-edged nature of AI: its capacity to spread disinformation, but also to improve voter engagement, streamline campaigns, and enhance election administration.

While fears of AI-driven deepfakes and propaganda are legitimate, an overemphasis on risks could obscure the technology’s potential to strengthen democratic processes, according to a new report from the Council of Asian Liberals and Democrats (CALD) and Manila-based polling firm WR Numero.

The policy paper argues that ignoring the opportunities could leave “democratic and liberal parties at a disadvantage as opposing political forces and other industries embrace AI”.

AI is already revolutionising Asia’s elections and politics, changing how campaigns are run and “reshaping the entire electoral process”, said Cleve Arguelles, a political scientist and CEO of WR Numero. But, he added, “with these new opportunities come significant challenges we need to address.”

“Technology waits for no one,” warned Cambodian senator Mardi Seng, chair of CALD and a member of the liberal Khmer Will party, at the report’s launch on December 4. Seng stressed the urgency for governments to adopt AI to avoid being left behind.

The report highlights that countries across the region have reached a “critical juncture” in their use of AI. While the technology can open doors to ethical political engagement, it also amplifies risks such as biased messaging, data misuse and the spread of disinformation.

“The initial wave of AI adoption in politics has shown that while risks are evident, such as the potential for biased messaging and data misuse, the opportunities are just as significant,” it said.

Take South Korea, for example. The nation has pioneered the use of AI to increase voter participation and ensure transparency. However, the same technology has also been weaponised. During South Korea’s legislative elections in April, at least 129 deepfake videos circulated online, including fabricated clips of President Yoon Suk-yeol admitting to corruption and criticising rivals.

In Indonesia, AI-generated disinformation surged this year. Videos falsely showing presidential candidate Anies Baswedan speaking fluent Arabic – despite his lack of proficiency – and the late president Suharto endorsing Golkar Party candidates garnered millions of views. The Indonesian Anti-Defamation Society reported that AI-fuelled falsehoods had doubled compared to previous elections.

But AI isn’t just a tool for disinformation; it’s also driving innovation. Since 2014, Indonesia has used Sidalih, an AI-powered voter list system that centralises data and detects duplicates. This year, President Prabowo Subianto’s campaign tapped generative AI to create gemoy avatars – cartoonlike versions of himself and his running mate Gibran Rakabuming Raka – to connect with younger voters.
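
The report does not detail how Sidalih weeds out duplicate entries, but at its simplest, de-duplicating a voter roll is a record-matching exercise. The short Python sketch below is purely illustrative: the records, threshold and name-matching rule are invented for this article and are not drawn from the actual system.

```python
# Purely illustrative record matching on an invented voter list; not Sidalih's
# actual implementation.
from difflib import SequenceMatcher

voters = [
    {"name": "Siti Rahma", "birth_date": "1990-03-14", "district": "Bandung"},
    {"name": "Siti Rahmah", "birth_date": "1990-03-14", "district": "Bandung"},
    {"name": "Budi Santoso", "birth_date": "1985-07-02", "district": "Surabaya"},
]

def likely_duplicate(a, b, threshold=0.9):
    """Flag two records as probable duplicates when their birth dates match
    and their names are nearly identical."""
    if a["birth_date"] != b["birth_date"]:
        return False
    name_similarity = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    return name_similarity >= threshold

flagged = [
    (i, j)
    for i in range(len(voters))
    for j in range(i + 1, len(voters))
    if likely_duplicate(voters[i], voters[j])
]
print(flagged)  # [(0, 1)]: the two near-identical "Siti Rahma(h)" entries
```
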
Some regional governments are already taking steps to regulate AI in politics. South Korea’s updated election laws, introduced last year, established measures to combat AI misuse. The laws ban the creation and distribution of election-related deepfakes, with penalties of up to seven years in prison or fines of US$37,500. The report praised this approach, which balances electoral integrity “with constructive AI use, such as generating campaign slogans or speeches”.

Taiwan, too, is setting a precedent. Its Infodemic platform, developed by Taiwan AI Labs, uses large language models to track and counter disinformation in real time. Monitoring platforms like Facebook, YouTube, X and TikTok, the system identifies patterns in fake narratives and troll activity to preemptively address emerging threats.
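
Taiwan AI Labs has not published Infodemic’s internals, and the platform leans on large language models rather than anything this simple. Still, the core idea of spotting the same claim resurfacing across platforms can be pictured with ordinary text-similarity scoring, as in the Python sketch below; the posts and threshold are invented for this article.

```python
# Purely illustrative narrative clustering with TF-IDF similarity; a stand-in
# for the large language models a platform like Infodemic actually uses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "Candidate X secretly signed a deal to sell the port",                 # Facebook
    "BREAKING: candidate X signed secret deal selling the port",           # X
    "Cute cat compilation, part 12",                                       # YouTube
    "Secret port deal signed by candidate X, share before it is deleted",  # TikTok
]

vectors = TfidfVectorizer().fit_transform(posts)
similarity = cosine_similarity(vectors)

# Posts whose pairwise similarity clears the threshold are treated as one
# narrative; the same claim surfacing on several platforms at once is the
# kind of signal worth escalating to human fact-checkers.
THRESHOLD = 0.3
narrative = sorted({k for i in range(len(posts))
                      for j in range(i + 1, len(posts))
                      if similarity[i, j] > THRESHOLD
                      for k in (i, j)})
print(narrative)  # [0, 1, 3]: the port-deal claim, seen on three platforms
```
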
With the Philippines gearing up for major elections next year, analysts have urged the country to carefully navigate the fine line between AI’s promises and perils.

Dominic Ligot, founder of the social impact AI enterprise CirroLytix and the data ethics organisation Data Ethics PH, said the Philippines was particularly vulnerable to disinformation campaigns.

“The technology’s dual use in creating and detecting deepfakes means that without stringent controls and public education, the risks could outweigh the benefits,” Ligot told This Week in Asia. He suggested that AI could be deployed to create interactive voter education tools and train citizens to identify misinformation through initiatives like a “deepfake gallery”.

But time is not on the Philippines’ side. Ligot noted that the tight timeline might prevent institutions from developing comprehensive AI policies before the elections.

Instead, he recommended that “the best approach would be to focus on raising public awareness and deploying targeted AI tools to detect and counteract misinformation swiftly”.

Arguelles, the political scientist, acknowledged that there were risks but cautioned against framing the Philippines’ challenges as exceptional.

The country’s disinformation problem is “significant but not unique”, he said. “For example, while deepfakes and AI-generated propaganda have been weaponised … countries have also demonstrated how AI can be used effectively to monitor disinformation and enhance electoral transparency.”

Arguelles suggested that the Philippines could adapt Indonesia’s Sidalih system to streamline voter registration or use AI to authenticate candidacy documents. Chatbots could provide real-time polling information, while AI-powered fact-checking tools could help journalists debunk fake news.

“AI is already being used in the Philippines’ electoral processes, making it impractical to ignore its presence,” Arguelles said. “The key is to address the challenges directly … By taking these steps, AI can be harnessed as a tool to strengthen, rather than compromise, electoral integrity.”

As AI continues to evolve, the stakes for electoral integrity across Asia are only growing. The technology offers unprecedented opportunities to engage voters, personalise campaigns, and improve transparency. But, as the report warns, the same tools can undermine democracy if left unchecked.
