Digital Technologies Enable Artificial Armies and Artificial Candidates

The main impact of advances in generative AI technology appears to have been its ability to create artificial representations of key players. This occurred partly as misinformation, but several elections also included notable instances of clearly labeled AI usage to create videos of deceased or detained candidates. This transparent, and sometimes democracy-supporting, use of AI in elections deserves further attention.

India

In the 2024 general elections, the DMK party – a regional party based mostly in the state of Tamil Nadu – used AI to create videos of its deceased leader M. Karunanidhi, which were shown at campaign rallies. In one such speech, for example, Karunanidhi recalled his achievements as Chief Minister of Tamil Nadu while praising his son’s ability to govern and the efforts of party workers. Other leaders, including Prime Minister Narendra Modi, have also deployed these technologies, using AI clones and holograms for campaigning.

Read more in our report on the 2024 Indian Election.

Indonesia

During the 2024 general elections, voters in Indonesia faced the phenomenon of buzzers: typically young, tech-savvy individuals hired to generate online buzz by promoting political messages, products, or ideologies. Buzzers are often employed by political figures, businesses, or organizations seeking to influence public opinion, and they operate by amplifying narratives across platforms like Twitter and Facebook, blurring the line between genuine discourse and paid manipulation.

Buzzers are often deployed to artificially boost certain topics and scandals higher up the algorithmic agenda, although the effectiveness of buzzers alone at shaping discourse at the level of national politics is dubious.

Read more in our report on the 2024 Indonesia Election.

Pakistan

Imran Khan, leader of Pakistan Tehreek-e-Insaf (PTI), has been incarcerated on corruption charges since 2023, rendering him unable to participate in traditional campaign activities. To address this challenge, the PTI used an AI voice clone to simulate Khan’s voice during the election campaign.

According to the party, Khan provided new speech scripts through his legal representatives, which were then handed over to party officials. These scripts were transformed into audio by analyzing Khan’s previous speeches to replicate his distinctive speaking style. The resulting audio was released to the public as audiovisual presentations in which Khan’s AI-generated voice was synchronized with archival photos and videos. Throughout this strategy, however, the PTI clearly noted that the voice had been generated with AI tools.

Read more in our report on the 2024 Pakistan Election.

Mozambique

Presidential candidate Venâncio Mondlane used AI-generated images of himself online, including on his Facebook page, where he often live-streamed. These digitally generated portrayals presented him at the center of a group of adoring supporters. However, given his popularity and regular attendance at rallies before the election, these images may be seen as synthetic supplements to his authentic reputation as a rousing speaker rather than attempts to willfully deceive.

Social media, while rapidly rising in popularity in recent years, has very low penetration in Mozambique, even by regional standards. Social media strategies are therefore often aimed at a young, tech-savvy audience in the capital Maputo or in the broader diaspora, with the expectation that this audience both understands and recognizes intentional uses of AI.

Read more in our report on the 2024 Mozambique Election.

Georgia

Evidence collected by media monitors, and described in Meta’s report on closing inauthentic accounts before the election, showed that Georgia’s social media was the target of concerted computational propaganda from both domestic and foreign sources. This greatly expanded the reach of anti-European and pro-Georgian Dream narratives, with coordinated bot networks seeking to delegitimise the opposition and protest movements.

Read more in our report on the 2024 Georgia Election.

The United Kingdom

McLoughlin (2024) identifies four different varieties of AI-generated synthetic media in the UK election. Firstly, AI was used to create humorous or satirical content. Secondly, AI tools and AI-generated media were incorporated into campaign materials, with parties using AI to generate scenes or backgrounds to be combined with other assets. 

Thirdly, AI candidates, such as AI Steve in Brighton and Hove, served as stand-ins or representations for actual candidates. These AI candidates were presented as highly accessible and controlled by citizens, though they garnered more media attention than votes. AI was also used by paper candidates to present more individualized campaign materials despite limited resources.

Finally, AI-driven disinformation used deepfake images, videos, and audio to spread misleading content. Examples included deepfaked audio clips of Labour candidates, which gained considerable traction on social media and blurred the line between satirical in-jokes and intentional deception.

Read more in our report on the 2024 United Kingdom Election.

The United States

During the election, the Trump campaign and its supporters extensively used generative AI to both satirise and wilfully mislead, posting fabricated endorsements from figures like Taylor Swift, attacking rivals with imagery of Kamala Harris in communist attire, and circulating manipulated videos of Democratic leaders to misrepresent their statements.

Trump amplified AI-generated spectacles of himself on social media (specifically Truth Social) in heroic or absurd roles — a fighter pilot, the pope, or a conductor — to energise his base and the campaign used AI to visualise policy points, contrasting idyllic scenes with AI-rendered dystopias about "open borders."

Researchers identified AI-driven bot networks on X that generated pro-Trump content, while digital collectives produced viral memetic propaganda, including the use of AI-generated images — e.g. a fake photo of a child affected by Hurricane Helene — to criticise the incumbent Biden administration and blur the line between exaggeration and falsehood. 

Read more in our report on the 2024 United States Election.

Mexico

In several instances, generative AI was used to create artificial videos, particularly targeting Claudia Sheinbaum as the leading presidential candidate. Two of the most widely shared AI-generated videos against her sought to exploit her Jewish heritage and left-wing affiliations.

In one, Sheinbaum appears to advocate for closing Catholic churches, with a satanic symbol in the background. Just days after she won the election, another deepfake surfaced, depicting Sheinbaum speaking Russian with communist propaganda in the room. Although the significant gap between the candidates suggests that this type of manipulation had little impact, in a much tighter race, the consequences could have been more substantial.

Read more in our report on the 2024 Mexico Election.
