NEW YORK: Facebook owner Meta announced major changes to its policies on digitally created and altered media on Friday, ahead of US elections poised to test its ability to police deceptive content generated by new artificial intelligence technologies.

The social media giant will start applying “Made with AI” labels next month to AI-generated videos, images and audio posted on its platforms, expanding a policy that previously addressed only a narrow slice of doctored videos, Vice President of Content Policy Monika Bickert said in a blog post.

Bickert said Meta would also apply separate and more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance”, regardless of whether the content was created using AI or other tools.

The new approach shifts the company’s treatment of manipulated content from one focused on removing a limited set of posts to one that keeps the content up while providing viewers with information about how it was made.

Meta previously announced a scheme to detect, via invisible markers built into the files, images made with other companies’ generative AI tools, but did not give a start date at the time.

A company spokesperson said the new labelling approach would apply to content posted on Meta’s Facebook, Instagram and Threads services.

Meta will begin applying the more prominent “high-risk” labels immediately, the spokesperson said.

Impact on elections

The changes come months before a US presidential election in November that tech researchers warn may be transformed by new generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines issued by providers.

In February, Meta’s oversight board called the company’s existing rules on manipulated media “incoherent” after reviewing a video of US President Joe Biden posted on Facebook last year that altered real footage to wrongfully suggest he had behaved inappropriately.

The footage was permitted to stay up, as Meta’s existing “manipulated media” policy bars misleadingly altered videos only if they were produced by artificial intelligence or if they make people appear to say words they never actually said.

Published in Dawn, April 6th, 2024
