
Disinformation and fact-checking

PHONES rang in the state of New Hampshire ahead of primary elections. Joe Biden’s voice was heard over the line. “We know the value of voting Democrats. It’s important that you save your vote for the November election,” the voice said.

“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.” Between 5,000 and 25,000 such calls were made. The Biden campaign never initiated any of them; the voice was an AI-generated deepfake, engineered to sound like President Biden.

“These messages appear to be an unlawful attempt to disrupt the New Hampshire presidential primary election and to suppress New Hampshire voters,” said the state attorney general’s office in response to the ‘robocalls’.

These robocalls used caller ID spoofing, a technique that alters the caller ID to display a different phone number, hiding the actual source of the call. In this case, the robocall appeared to come from a number associated with Kathy Sullivan, a former chairperson of the New Hampshire Democratic Party.

This example is part of a broader trend where AI-generated content was used to mislead voters during election periods. In Bangladesh, feeds of ‘international’ news channels were created using AI ahead of elections.

In these fabricated news segments, AI-generated anchors falsely reported significant events, including allegations of US involvement in funding riots and violence in Bangladesh. Earlier, disinformation from external sources aimed at influencing voter behaviour in the 2016 US presidential election caught many off guard. Similar trends are observed globally. The challenge, then, is to find effective means to counteract political disinformation.

While imposing stringent laws or criminalising disinformation is often pitched as a solution, such measures could inadvertently criminalise free speech and be used to suppress legitimate discourse. The answer to combating disinformation, therefore, isn’t as straightforward as ‘enacting a law against fake news’ or erecting a ‘national firewall’. It requires a careful balance between regulation and the preservation of free expression.

An effective strategy in countering political disinformation is the ‘inoculation’ approach, a topic I’ve extensively covered in a previous op-ed. This method involves pre-emptively exposing the public to a weakened form of misinformation, thereby enabling them to better recognise and resist deceptive information.

Another proven strategy in combating disinformation is the consistent publication of fact-checks by reputable media organisations. This approach helps to identify and correct misinformation, fostering an informed public. It also enhances the credibility and reliability of media sources, making them trusted authorities in discerning truth from falsehood. However, this approach has challenges.

Monitoring social media platforms for disinformation is an uphill task due to the enormous volume of content. Even with a substantial team, addressing the flood of disinformation items for fact-checking is daunting. Selecting just a few pieces for verification from the thousands that circulate daily, while ensuring timely publication of fact-checks, is challenging. This is compounded by the financial constraints newsrooms have faced in recent years, which have caused massive layoffs and pay cuts, making it even more difficult to allocate sufficient human and technical resources to the task.

Additionally, the financial viability of many fact-checking organisations remains a concern. A significant number of these outlets struggle with sustaining operations due to funding challenges. Many of them operate without a robust sustainability plan, often relying on partnerships through various third-party fact-checking programmes funded by tech companies, or operating on a grant basis around landmark events. This lack of a stable financial model and the resultant limitations pose a risk to their mid- to long-term viability.

For third-party fact-checkers working with tech companies, a significant concern is the potential conflict of interest. The relationship between media and information literacy (MIL) initiatives and conventional, debunking-based fact-checking also highlights a substantial gap.

While tech platforms might support MIL and traditional fact-checking, their willingness to fund in-depth investigations into organised disinformation campaigns is less certain, especially where such investigations might scrutinise the role of the tech companies themselves, including their failure to effectively regulate hate speech against vulnerable groups. Comprehensive investigations into the sources and beneficiaries of disinformation, such as those conducted after the 2016 US presidential election, are crucial but may not always receive support from these platforms.

To effectively combat disinformation, it is crucial to strengthen credible newsrooms, the long-standing gatekeepers of information, rather than creating parallel structures with little to no transparency in ownership. This means making the publication of fact-checks a sustainable venture for credible newsrooms, enhancing their web traffic, and consequently, revenue.

There are several strategies to achieve this. Drawing from my recent experience with a newsroom that is profiting from publishing fact-checks, a combination of leveraging Cunningham’s Law, effective search engine optimisation, and smart social media tactics can be highly effective. This approach is also vital in redirecting web traffic to credible news sources, thereby countering the dominance of big tech companies over Pakistan’s digital advertising revenue.

Another strategy to empower newsrooms against disinformation is to improve their news sourcing and verification techniques, and to augment them through tech-based initiatives. My experience training 14 leading newsrooms in Pakistan highlighted a common practice of sourcing information from social media platforms. This practice, while essential in modern times, can sometimes contribute to the spread of misinformation.

Finally, the fight against misinformation necessitates an influx of new players in both pre-bunking and debunking strategies. While this may seem to counter the idea of avoiding parallel structures, the reality is that fresh perspectives are needed. This includes tech-savvy individuals who can innovate in digital investigations and tech-based initiatives to empower newsrooms.

It is also crucial for diverse entities, from independent digital-first media start-ups to third-party fact-checkers, to engage and grow in this space. Although there may be some concerns with the latter, their contributions are nonetheless vital, as they can reach and correct misinformation for thousands, if not more, daily.

The writer is a media strategist and trainer, and founder of Media Matters for Democracy, a media development organisation.

Published in Dawn, February 3rd, 2024
