A FACEBOOK clickbait headline.

Facebook has stepped up its two-year-old anti-clickbait campaign, changing its news feed algorithm to weed out manipulative or deceptive headlines. The company says this is being done by popular demand, but the reality is probably more complicated. As a heavy Facebook user with more than 90,000 subscribers and about 4,200 friends, I don’t want Facebook to do this: it’s another dangerous step towards shrinking the view of the world people receive through the social network.

Facebook researchers Alex Peysakhovich and Kristin Hendrix explained the algorithm change in a post on Thursday. Rather than relying solely on reducing the distribution of links from which users quickly bounce back to Facebook (a sign they were probably enticed to click and then disappointed), the company has tweaked the algorithm to look for phrases commonly used in clickbait headlines.

The definition of clickbait is important here. Facebook’s is currently narrow: to be considered clickbait, a headline needs to withhold information necessary to understand what the story is about (“You’ll Never Believe Who Tripped and Fell on the Red Carpet…”), forcing users to click on the link to find out the answer. Linguists call such tricks deixis (the use of words that require a context to be understood) and cataphora (the use of an expression that points forward to something identified only later in the text). Facebook also has a problem with headlines that exaggerate the impact of the story (“Apples Are Actually Bad For You?!” leading to a piece saying it’s a bad idea to eat 50 pounds of apples a day).
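
Facebook has not published the phrases its tweaked algorithm looks for, so any reconstruction is guesswork. Still, a minimal Python sketch makes the mechanism concrete; the phrase lists and the looks_like_clickbait helper below are invented stand-ins, not Facebook's actual signals:

    # Hypothetical stand-ins for the two categories Facebook describes:
    # headlines that withhold information and headlines that exaggerate.
    WITHHOLDING_PHRASES = [
        "you'll never believe",
        "you won't believe",
        "what happened next",
    ]
    EXAGGERATION_PHRASES = [
        "will blow your mind",
        "shocker",
    ]

    def looks_like_clickbait(headline: str) -> bool:
        # Flag a headline if it contains any listed phrase.
        text = headline.lower()
        return any(p in text for p in WITHHOLDING_PHRASES + EXAGGERATION_PHRASES)

    print(looks_like_clickbait(
        "You'll Never Believe Who Tripped and Fell on the Red Carpet"))  # True
    print(looks_like_clickbait(
        "Shocker: Facebook tweaks its news feed again"))  # also True

Even this toy version shows the approach's crudeness: a sarcastic headline that merely quotes a clickbait phrase gets flagged just the same, a problem I return to below.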

That narrow definition would not rule out most headlines from BuzzFeed, which, according to the social content consultancy NewsWhip, is the second-biggest magnet for Facebook comments, likes and shares among news sites.

BuzzFeed claims it “doesn’t do clickbait” — but only under the same narrow definition Facebook uses. That’s not how most people understand the term: to most of us, BuzzFeed’s formulaic listicle headlines — a number plus a strong sales pitch for the subject, as in “23 Things That Will Make You Feel Like an Adult” (the content being a collection of “native ads” for everything from collapsible lunchboxes to wristwatches) — fall into the clickbait category. Just like cataphoric headlines (“Why This Father Feeds His Son Freakish Fruit and Vegetables”), they create what the behavioural economist George Loewenstein, writing in 1994, called a “curiosity gap”: They focus the reader’s attention on a gap in her knowledge, producing “a feeling of deprivation labelled curiosity”. That makes them clickbait, too.

This year, a team from Bauhaus University in Weimar, Germany, published a paper on clickbait detection. They built a machine-learning model to analyse a set of news links from Twitter (not Facebook), but the results of their work — which applied 215 criteria in all, stricter than those Facebook says it is using — are revealing. Here’s what they found:

“Business Insider sends 51pc clickbait, followed by Huffington Post, The Independent, BuzzFeed, and The Washington Post with more than 40pc each. Most online-only news publishers (Business Insider, Huffington Post, BuzzFeed, Mashable) send at least 33pc clickbait, Bleacher Report being the only exception with a little less than 10pc. TV networks (CNN, NBC, ABC, Fox) are generally at the low end of the distribution.”
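
The Bauhaus model itself, with its 215 hand-crafted criteria, is not reproduced here. As a rough illustration of the general shape of such a classifier, though, a few lines of scikit-learn suffice; the tiny labelled training set is invented for the example (1 means clickbait):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    headlines = [
        "You'll Never Believe Who Tripped and Fell on the Red Carpet",
        "23 Things That Will Make You Feel Like an Adult",
        "Why This Father Feeds His Son Freakish Fruit and Vegetables",
        "UK votes to leave the European Union",
        "Central bank raises interest rates by 25 basis points",
        "Parliament passes budget after three days of debate",
    ]
    labels = [1, 1, 1, 0, 0, 0]

    # Word n-grams as features, logistic regression as the classifier:
    # a far simpler stand-in for the paper's hand-crafted criteria.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
        LogisticRegression(),
    )
    model.fit(headlines, labels)

    # Prints the model's guess for an unseen headline.
    print(model.predict(["You Won't Believe What This Cat Did Next"]))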

Most online news publishers use headlines that, in the words of the Bauhaus paper, “exploit cognitive biases to increase the likelihood of readers clicking an accompanying link”. Yet content providers such as BuzzFeed pay Facebook to promote their “native advertising” posts, and Facebook pays them to produce video content that helps it sell more expensive ads. My guess is that this limits Facebook’s willingness to hinder the distribution of their content.

Facebook ends up fighting only certain obvious kinds of clickbait, but as it does so, it’s developing the capability to block content from our news feeds based on certain words or expressions. Facebook has long been moving away from being an impartial platform on which users can place any content. Since late June, it’s been showing us more posts from friends and family and fewer from news organisations. Now comes the anti-clickbait change.

Facebook’s goal is to increase users’ engagement with every post and maximise the time spent on the social network: It’s good for revenue. Yet since news websites get more than 40 per cent of their referral traffic from Facebook, every such change limits the amount of information that reaches readers. And some changes, including the latest one, even create the potential for censorship and arbitrary selection.

I was happy with Facebook as a platform, and I’m not the only one. Readers are generally not dumb. In June, when the UK voted to leave the European Union, the Financial Times — which, unlike BuzzFeed, really doesn’t do clickbait — had one of the highest proportions of shares among its Facebook interactions, according to NewsWhip. The raciest tabloids with the flashiest headlines covered Brexit, too, but people wanted to share the FT’s sober journalism.

A Facebook user is usually smart enough to distinguish real journalism from entertainment. She doesn’t click on manipulative headlines expecting to find serious content, and if some users complain about such experiences, they are not necessarily representative of Facebook’s enormous user base. They certainly do not represent me. I don’t want Facebook to curate — especially algorithmically — the already-curated output of professional news organisations.

If somebody wants to block certain types of headlines — because they are manipulative, or for any other reason — they should have access to filters to personalise their news feed. Instead, Facebook presents users with a black box for fear of having its algorithm reverse-engineered. That’s worse than not messing with the natural flow of posts at all.
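
A reader-controlled filter is not hard to imagine. As a hypothetical sketch (the FeedFilter class and the sample feed are my inventions, not anything Facebook offers), the blocklist would belong to the user and its behaviour would be transparent:

    from dataclasses import dataclass, field

    @dataclass
    class FeedFilter:
        # Phrases this particular user has chosen to block.
        blocked_phrases: set = field(default_factory=set)

        def allows(self, headline: str) -> bool:
            text = headline.lower()
            return not any(p in text for p in self.blocked_phrases)

    my_filter = FeedFilter(blocked_phrases={"you won't believe", "will shock you"})
    feed = [
        "You Won't Believe This One Weird Trick",
        "Central bank raises interest rates",
    ]
    print([h for h in feed if my_filter.allows(h)])
    # Only the second headline survives.

The design point is the opposite of a black box: the user can see exactly why a headline disappeared, and can change the rule herself.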

There is one more reason Facebook’s anti-clickbait rule is wrong. At this stage of its development, artificial intelligence is terrible at processing human languages, and letting it police content is premature. The tweaked algorithm would probably classify The New York Times’ and the Guardian’s sarcastic headlines about Facebook as clickbait, because they contain the word “shocker” and the expression “you won’t believe”. Silicon Valley companies are overconfident in their technology. They have to admit humans are better at judging content, at least for now.

—By arrangement with Bloomberg-The Washington Post

Published in Dawn, August 9th, 2016
