Safeguard democracy, ensure integrity of people’s voice



In just over a year, we will be holding the 2025 midterm elections. 

 

And with the advent of technology, the specter of deepfakes looms large over the electoral landscape. Sophisticated and deceptive videos created using artificial intelligence (AI) have the potential to sway public opinion, undermine trust, and disrupt the democratic process. 

 

Deepfakes are manipulated videos or images that convincingly depict individuals saying or doing things they never actually did.

 

On Wednesday, April 24, the Presidential Communications Office (PCO) urged the public to remain vigilant after no less than President Marcos became the victim of a deepfake. An audio deepfake made it appear that the President had directed the Armed Forces of the Philippines to act against a particular country. “No such directive exists nor has been made,” the PCO clarified.

 

With deepfakes becoming more prevalent, the Department of Information and Communications Technology (DICT) has raised a red flag, urging lawmakers to take decisive action to regulate the use of AI in producing misleading content.

 

In the context of elections, these maliciously crafted media pieces can be used to mislead voters, sow discord, and undermine trust. Deepfakes can falsely portray candidates endorsing policies or making controversial statements, leading voters astray. By spreading fabricated content, deepfakes can exacerbate divisions and create confusion among citizens. And when citizens can’t discern real from fake, trust in the electoral process erodes.

 

The lessons from the recent experiences of Moldova, Slovakia, Taiwan, and Bangladesh underscore the urgency of addressing this issue.

 

In Moldova’s 2023 elections, deepfakes flooded social media, including one that depicted a pro-Western politician endorsing a Russian-friendly party, sowing confusion and mistrust among voters.

 

In Slovakia, widely shared audio clips that mimicked the voices of political leaders making controversial statements tarnished their reputations and affected electoral results.

 

In Taiwan, the spread of manipulated videos raised concerns about the integrity of the democratic process. And in Bangladesh, deepfakes distorted candidates’ messages, leading to misinformation and polarization.

 

This is why the DICT’s call for the House of Representatives to draft legislation specifically targeting AI-generated deepfakes is crucial, not only for preserving the sanctity of the ballot but also for safeguarding the integrity of the information being disseminated.

 

Among the vital issues that any deepfake legislation should address are identification and labeling, criminalization, media literacy, and election integrity measures.

It is important for the law to mandate clear labeling of deepfake content to inform viewers that what they are seeing is not genuine. Equally crucial is criminalizing the creation and dissemination of deepfakes intended to deceive the public, with stiff penalties attached.

 

The government must also invest in public awareness campaigns to educate citizens about deepfakes and how to recognize them. Rigorous monitoring of social media platforms and prompt removal of deepfake content are essential.

 

The state must act swiftly to safeguard its democratic institutions. Congress should collaborate with experts, civil society, and technology companies to create a robust legal framework. Additionally, investing in AI-detection tools and training election officials to recognize deepfakes is crucial.

 

As the digital landscape evolves, so must our defenses against disinformation. The sanctity of our elections and the integrity of disseminated information depend on our ability to outpace the malevolent creativity of AI. Let us ensure that the voice of the people remains authentic and unaltered amid the rising tide of deepfakes.