THE authorities that govern the use of technology in Pakistan seem to be on a moral crusade of late. The intentions were apparent earlier this year when the Citizen Protection Rules were announced. Although the fate of these Rules is still unclear, regulation of online content is becoming increasingly draconian under the sanction of the Prevention of Electronic Crimes Act (Peca) 2016. The internet regulation model Pakistan is currently following will prove to be damaging not just for us — the users — but also for tech firms and the digital ecosystem at large.
‘Web 2.0’ is the term used to describe the internet as we know it today: an interactive virtual world which empowers users to access more information than ever before and create their own content, giving a new life to the ideas of freedom of speech and expression, right of access to information, and democratic participation.
Regulation of Web 2.0 is a unique challenge and our understanding of how to regulate it is continuously evolving. Regulatory challenges vis-à-vis social media platforms include curbing anti-competitive practices, ensuring a level playing field for new entrants, protecting data, blocking fake news and removing illegal content. On these issues, online platforms are often at odds with regulators — agile and innovative legal approaches are needed to resolve these challenges.
Social media platforms — or intermediaries — such as Facebook, Twitter and Google are often held accountable for the content generated on their platforms. Governments around the world frequently ask them to remove content deemed unlawful. Some types of content — such as videos that depict violent acts, fake news, or content that may infringe someone’s privacy — must be monitored, regulated and blocked. Social media companies employ teams tasked with scouring these platforms for such content and diligently removing it.
Beyond such obvious cases of unlawful content, there are grey areas. Where authorities may view a social media post as being against the “integrity, security or defence of Pakistan”, the user may view it as an opinion whose expression is integral to the idea of freedom of speech. How should the intermediary respond to a request for removal of such content?
Should the intermediary bend to murky legal requirements, no matter how flawed, or should it uphold the spirit of Web 2.0 and of Article 19 of the UN’s Universal Declaration of Human Rights: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers”? The latter may result in all-out bans of the platforms, as has happened in countries such as Pakistan and China.
Central to this issue is the debate around intermediary liability regimes. Should online platforms be held liable for third-party content? In assessing the costs and benefits, we can identify three stakeholders: the government, the intermediary, and the users. An intermediary liability regime works well for authoritarian governments — it is easy to regulate content on platforms when you can hold the platform liable for everything posted on it.
For the intermediary, such a regime imposes unsustainable operational costs. Under the Citizen Protection Rules, intermediaries would be required to open an office in Pakistan and appoint a focal person to liaise with the regulators. They would also be held responsible for all content posted on their platforms in order to comply with Section 37 of Peca — regardless of any other standards or rules the company upholds — and would have to remove, within 24 hours, any content deemed to be in violation of Pakistani law.
It would be cumbersome to monitor every post to ensure it is in compliance with ambiguous regulations — depending on who is interpreting them, any content may be deemed illegal. In fact, Peca is designed to do just that: leave a massive grey area allowing action against anything deemed to be against “the glory of Islam or the integrity, security or defence of Pakistan, public order, decency or morality”.
Moreover, the volume of requests to take down content and to hand over user information would render social media platforms unworkable, and the requirement to abide by Pakistani law regardless of any other community standards or guidelines would defeat the purpose of such a platform.
Next, consider the users. In a recent article on the Single National Curriculum, Dr Faisal Bari wrote: “The state has always had a deep interest in managing the national narrative. This has usually been done by using religion and nationalism to suppress alternative voices. ... If they do not fit the religious/nationalistic frame that has been forged by some elements of the state in Pakistan, they would be rejected. And education has been a battlefield for this rejection.”
Along with education, Web 2.0 is the new battlefield where alternative voices must be suppressed to maintain the national narrative. Web 2.0 puts power in the hands of the user — power to voice opinions, expose truths, access unfiltered information. This is perhaps one of the greatest threats to the national narrative. When intermediaries are required to remove content, it is the users who suffer as freedom of speech and access to information are diluted.
The questions of how to regulate social media platforms and what online content is permissible are not easy to answer. What is clear is that blanket bans, threats and censorship are not the right path. Intrinsic to the idea of Web 2.0 are decentralisation and democratic participation — trying to divorce these ideals from the medium would be a great disservice to the people of Pakistan, who deserve to benefit from and be empowered by new technologies.
The writer is a development and technology policy consultant.
Twitter: @anummalkani
Published in Dawn, September 14th, 2020