The trouble with technology

Published October 17, 2022

AN important global debate is underway about the disruptive impact of new technology. There is no doubt modern technology has been a force for good and responsible for innumerable positive developments — empowering people, improving lives, increasing productivity, advancing medical and scientific knowledge and transforming societies. Technological developments have helped to drive unprecedented social and economic progress. But the fourth industrial revolution has also involved the evolution of advanced technologies that are creating disruption, new vulnerabilities and harmful repercussions, which are not fully understood, much less managed. A digitalised world faces the challenge of cybersecurity as threats multiply across the globe. Data theft and fraud, and cyberattacks on critical systems, electricity networks and financial markets, are all part of these rising risks.

Communication technology now dominates our lives like never before. It brings untold benefits but also presents new dangers. The phenomenon of fake news, for example, is not new. But its omnipresence today has much to do with digital technology, which has produced a proliferation of information channels and an expansion of social media. Online platforms have become vehicles for the spread of misinformation. Fake news circulates easily because of the magnifying power of social media in a mostly unregulated environment. Anonymity on social media platforms gives trolls and purveyors of false stories the assurance that they will not be held accountable for their lies or hate messages. So fake news is posted on social media without fear of retribution. ‘Deepfakes’ — videos doctored using artificial intelligence (AI) — are now commonly used to mislead and deceive.

The profit motive and business model of social media companies prevent them from instituting real checks on divisive and sensational content, irrespective of whether it is true or false. That means ‘digital wildfires’ are rarely contained. Digital technology is also being abused to commit crimes, recruit terrorists and spread hate, all of which imperil societies. This presents challenges to social stability in what is now called the post-truth era.

Digital technology is also fuelling polarisation and divisiveness within countries. Studies have pointed to its disruptive impact on political systems and democracy. In an article in the European Journal of Futures Research in March 2022, the authors wrote that “In times of scepticism and a marked dependence on different types of AI in a network full of bots, trolls, and fakes, unprecedented standards of polarisation and intolerance are intensifying and crystallising with the coming to power of leaders of dubious democratic reputation”. The connection between the rise of right-wing populist leaders and their cynical but effective deployment of social media is now well established.

New technologies present opportunities and dangers for nations and people.

Artificial intelligence or machine intelligence presents many dangers such as invasion of privacy and compromise of multiple dimensions of security. The biggest threat posed by autonomous weapons systems is that they can take decisions and even strategies out of human hands. They can independently target and neutralise adversaries and operate without the benefit of human judgement or thoughtful calculation of risks. Today, AI is fuelling an arms race in lethal autonomous weapons in a new arena of superpower competition.


The Age of AI: And Our Human Future, co-authored by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher, lays bare the dangers ahead. AI has ushered in a new period of human consciousness, say the authors (Schmidt is Google’s former CEO), which “augurs a revolution in human affairs”. But this, they argue, can lead to human beings losing the ability to reason, reflect and conceptualise. It could in fact “permanently change our relationship with reality”.

Their discussion of the military uses of AI and how it is used to fight wars is especially instructive. AI would enhance conventional, nuclear and cyber capabilities in ways that would make security relations between rival powers more problematic and conflicts harder to limit. The authors say that in the nuclear era, the goal of national security strategy was deterrence. This depended on a set of key assumptions — the adversary’s known capabilities, recognised doctrines and predictable responses. Their core argument about the destabilising nature of AI weapons and cyber capabilities is that their value and efficacy stem from their “opacity and deniability and in some cases their operation at the ambiguous borders of disinformation, intelligence collection and sabotage … creating strategies without acknowledged doctrines”. They see this as leading to calamitous outcomes. They note the race for AI dominance between China and the US, which other countries are likely to join. AI capabilities are challenging the traditional notion of security, and this intelligent book emphasises that the injection of “nonhuman logic to military systems” can result in disaster.

Advanced new-generation military technologies are a source of increasing concern because of their wide implications for international peace and stability. The remote-control war waged by US-led Western forces in Afghanistan over two decades involved the use of unmanned aerial vehicles, or drones. This had serious consequences and resulted in the killing of innocent people. The use of a cyberweapon — the Stuxnet computer worm — by the US to target Iranian facilities in 2007 to degrade Iran’s nuclear programme was the first attack of its kind. More recently, the Russian and Ukrainian militaries have been using remotely operated aerial platforms in the Ukraine conflict. Reliance on technology can confront countries at war with unexpected problems. Frontline Ukrainian soldiers, for example, have faced outages of the satellite internet service they rely on, reportedly caused by restrictions intended to keep Russian forces from using the same technology. This digital disruption is reported to have caused a crucial loss of communication between Ukraine’s military forces.

Despite the risks and dangers of such new technologies, there is no international effort aimed at managing them, much less regulating their use. There is no move by the big powers towards any dialogue on cyber and AI arms control. If the global internet can’t be regulated and giant, unaccountable social media companies continue to rake in excessive profits, there is even less prospect of mitigating the destabilising effects of cyber and AI-enabled military capabilities.

The writer is a former ambassador to the US, UK & UN.

Published in Dawn, October 17th, 2022
