
Lok Sabha Elections 2024: Shielding India From Disinformation Now And In The Future

As with most major elections in the past decade, disinformation is being weaponised to influence the masses, says Shamla Naidoo.

(Source: Unsplash)

The upcoming Lok Sabha elections in India are a high-stakes battle that extends beyond the country’s borders. A game of influence is underway, fuelled by the motivations of both domestic and external players to sway voters to one side or the other of the political spectrum, and for some, the ends justify the nefarious means.

In fact, 2024 is an unprecedented year not just for India but for global democracy, with a number of major elections that will involve more than half the world’s population.

The World Economic Forum has ranked India highest for the risk of misinformation and disinformation in its Global Risks Report 2024. As with most major elections in the past decade, disinformation is being weaponised to influence the masses, and today it is done via audio and video deepfakes. The threat is that voters’ decisions may be based not on legitimate information but on the blatant lies these fakes spread.

Let us examine the challenges and possible solutions:

AI Is Not The Answer (Yet)

Generative AI tools have considerably lowered the barriers to creating deepfakes. Malicious actors are leveraging them to fuel disinformation around major elections, and India is not being spared. Unfortunately, there has not been an equivalent AI breakthrough to fight disinformation.

Despite advances in the identification and control of fake news and deepfakes in recent years, AI’s role here is currently limited. Training algorithms that can consistently and reliably make the distinction between truth and lies is a considerably more complex endeavour than just creating fake news. Detecting a lie requires sophisticated cognitive skills that AI does not yet have.

Some tools can detect whether audiovisual content has been AI-generated, but they cannot yet operate at scale or automate the detection and removal of deepfakes without heavy human involvement behind the scenes.

Some AI solutions do exist, such as spread analysis, where the speed at which news travels can indicate whether it is fake (fake news and deepfakes tend to spread faster because clever sharing mechanics are built into the heart of the campaign). But the search for a technological silver bullet that would nip deepfakes in the bud is far from over, and with current approaches, the fight remains a time-consuming game of whack-a-mole.

In the absence of game-changing technological advancements to fight disinformation, we have to find solutions in other areas.

Media Smart Citizens

Avoiding mass manipulation requires an educated population. The Indian public still needs to grasp the threat of deepfakes, and citizens need support in the process, starting with the social media platforms that are the main vectors of disinformation. Meta has announced a series of measures, including a ‘Know What’s Real’ education campaign around the elections and a WhatsApp chatbot to report fake news and deepfakes, and has reinforced its third-party fact-checking team in India. A number of social media platforms are also mandating labels for AI-generated content, and more initiatives may come from the recently signed Tech Accord. But disinformation will not disappear after the elections, and we need more proactivity and support from social media giants to help users recognise fake news and to mitigate its spread across their platforms.

While political leaders and the Election Commission are proactively discussing the issue around this election, future governments will need to reflect on how they can properly equip citizens to navigate a future where AI-generated content is rife. In other parts of the world, conversations are taking place about teaching media literacy in schools, where children learn what disinformation is, and develop fact-checking and critical thinking reflexes.

But awareness and education alone will not be enough to mitigate the impact of disinformation.

More Accountability From More Parties

India currently lacks policies to regulate the use of AI and the creation and spread of deepfakes, even though both appear to be in the pipeline. That is not to say that deepfakes created with criminal intent are not already punishable, but each case may fall under a different existing law, and a bespoke regulatory framework would bring clarity and consistency to the regulation of AI usage in the country.

Importantly, it would also be an opportunity to define accountability for spreading disinformation, or for failing to act against it, and to bring more parties into the framework, which could help accelerate mitigation. In the case of political deepfakes, that could mean shared accountability among the authors, the social media platforms that fail to remove them, and the political parties concerned if they fail to alert the electorate in a timely fashion.

Countering disinformation is a complex journey, but one that democracies must undertake if they want to protect democratic values and their populations from those who wish to sow discord. Technology alone is unlikely to deliver a miracle solution; disinformation will not be properly fought without a more diverse arsenal that includes major education initiatives and the right regulations.

Shamla Naidoo is Adjunct Professor of Law at the University of Illinois Chicago School of Law, and Head of Cloud Strategy & Innovation at Netskope.

The views expressed here are those of the author and do not necessarily represent the views of NDTV Profit or its editorial team.