ARMED drones, unconventional warfare, special forces, private contractors and rapid developments in technology are changing the nature of the battlefield beyond recognition.
They have profound implications for existing international law as well as ethics and morality. And however much campaigners may wish to ban them, drones are here to stay.
So warns Noam Lubell, professor of International Law of Armed Conflict at the University of Essex, in an inaugural lecture to be delivered tonight (Nov 18).
In his important lecture, “Robot warriors, terrorists, and private contractors: What future for the ‘laws of war’?”, Lubell notes that “the image of the actor on the battlefield often does not conform to the model picture of a soldier in national military uniform”.
He adds: “This transformation has been exacerbated by three developments: the ‘war on terror’, the rising use of private contractors, and dramatic technological advances.”
Laws of armed conflict should cover the fight against terrorism, he says. The problem with private contractors is that they look like civilians and do not have the same hierarchical structure or training as regular armed forces. How do we hold them accountable?
But it is the future of drones, and the transition from “remote control” to “autonomous” machines, that we discussed before he gave his lecture.
Some argue that, from an ethical point of view, autonomous machines might be preferable to human involvement. Or would they simply “dehumanise” war?
Lubell noted that studies showed a significant number of soldiers admitted mistreating civilians. Would robots fare better? There was evidence that machines could recognise human emotions and act with “compassion”.
For example, health care robots could purr when stroked, helping to reduce blood pressure. “Robot pets” could act as companions for vulnerable and lonely people. Robots would be used more and more for teaching and surgery. Self-driving cars were being developed.
Robots cannot be divorced from battlefields.
Lubell referred to a distinction between “automated” and “autonomous” machines. The former, such as missile defence systems, are programmed to respond predictably to a particular event. The latter would be more adaptable, without the need for new programming.
However, he said, there is no clear dividing line between the two definitions; if there ever was one, he suggested, it is becoming increasingly blurred.
Lubell’s message is that there are real risks in the development of robot technology, and that those risks have to be carefully considered and minimised.
He called for a transparent debate about the capabilities of robots and what ethical and legal principles should guide their use.
The UK Ministry of Defence (MoD) internal thinktank, the Development, Concepts and Doctrine Centre (DCDC), noted in a report two years ago that military planners in the UK and US were keen to develop increasingly sophisticated automated weapons.
It identified some advantages of an unmanned weapons system. “Robots cannot be emotive, cannot hate. A robot cannot be driven by anger to carry out illegal actions such as those at My Lai [the massacre by US troops of hundreds of unarmed civilians in South Vietnam in March 1968]”, it said.
The MoD study added: “In theory, therefore, autonomy should enable more ethical and legal warfare. However, we must be sure that clear accountability for robotic thought exists, and this raises a number of difficult debates. Is a programmer guilty of a war crime if a system error leads to an illegal act?
Where is the intent required for an accident to become a crime?”
At a UN-sponsored meeting in Geneva last week, parties to the Convention on Certain Conventional Weapons (CCW) agreed to reconvene in May to discuss the future of “autonomous weapons”. Campaigners want to ban them.
But if there were a ban, asked Lubell, who would actually sign up to it?
By arrangement with the Guardian