THE already heated debate around the potential and pitfalls of artificial intelligence reached fever pitch last week, when some of the most respected leaders in the field came together to warn that AI is improving at such a rate that it could potentially lead to an ‘extinction’ event. In other words, machine capability is advancing faster than human beings can devise ways to govern its new frontiers. What is now ChatGPT and other similar language-based chatbots could soon render many human endeavours redundant.
Some of those sounding the alarm are industry leaders whose companies are benefiting from the very progress they warn against. Together, they issued a statement released by the Centre for Artificial Intelligence Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The signatories include Sam Altman, chief executive of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei of Anthropic.
Besides these leaders, 350 other researchers, engineers and scientists also signed the statement, because they too believe that AI progressing into realms no longer governable by human beings is not just a possibility but a likelihood.
Altman has suggested that the world needs an overseer agency to govern AI, much as the International Atomic Energy Agency monitors the use of atomic technology and prevents those it deems bad actors from obtaining the means to build world-destroying bombs. This parallel should particularly resonate here in the subcontinent, where the technology to build atomic bombs has defined regional politics for decades.
Imagine if an arms race for AI were also underway, with errant actors threatening to use its most menacing capabilities to derail other countries’ transportation systems, flight operations, utilities and other critical mechanisms. Just as the constant threat of terrorists gaining control of an atomic bomb frightens people today, so too would the prospect of AI marching towards human extinction frighten people tomorrow. Add to this the fact that atomic governance was initiated at a moment when constitutional liberalism reigned and international cooperation was considered valuable in itself, and the picture grows more worrisome.
In today’s times, the world cannot even agree on basic measures to arrest the cataclysmic pace of climate change. It appears even more unlikely that a whole new frontier could be regulated and governed on the basis of common principles. A grim observer might take bets on which will end the world as we know it first: a climate catastrophe and a too-hot-to-inhabit planet, or the AI-induced extinction of the human race.
Nevertheless, a word must be said here about just how tantalising AI-generated tools are. When one American university professor was asked how she was dealing with students using ChatGPT, she shrugged her shoulders and said she wasn’t doing anything about it. The quality of writing taught in American high schools, she explained, had become so poor, and the readability of student papers was plummeting at such a rate, that she would be pleased if they used the tool and produced a readable paper.
Many in Gen Z, who do not remember a time when the internet did not exist, have already embraced the tool, putting it to use for all the onerous, non-creative sorts of writing, such as producing instruction manuals or giving directions from one place to another. It is not difficult to envision a time when this generation, spared all such ‘from scratch’ tasks, will be unable to imagine a world without AI.
Some hope for AI can be drawn from the fact that Sam Altman and others in the industry emphasise that the reason to invest in AI technology is simply that it has the potential to create a better world. The ability to synthesise vast troves of information — whether it is in biomedicine, education or energy — can drastically reduce waste of all sorts and vastly improve the utilisation of limited resources.
No one can stop AI from being developed further, and many say that it is precisely this potential for great good that is driving so many companies into the race to build ever more capable machines.
At the same time, it is also necessary to point out just how wide the gap between those with access to technology and those without it is becoming. Even as AI promises new frontiers of efficiency and optimisation, a drastic reduction in the utilisation of labour is in sight.
There are still enormous swathes of the world that do not have access to clean water, sanitation and vaccinations. One can say that, despite burgeoning demographic numbers, human extinction from want, from hunger and malnutrition, and so many entirely preventable disasters is already underway. One problem for AI to solve, then, must necessarily be to bring two worlds — one of great wealth, the other of great poverty — together.
The true measure of AI may well be how effective it proves in providing solutions to the more intractable problems the international community faces today. A supercomputer may be able to synthesise data, but can it truly go beyond replication and produce an entirely self-generated innovation? The world, and all of us in it, will soon find out.
The writer is an attorney teaching constitutional law and political philosophy.
Published in Dawn, June 8th, 2023