Artificial intelligence’s current ability, future prospects discussed

Published March 24, 2018
Dr Muhammad Haris, Farieha Aziz and Dr Sajjad Haider participate in the discussion at Habib University on Thursday.—Photo by writer

KARACHI: “Would you be comfortable if I was actually a robot? What if your child’s teacher in school was a robot? Would you be fine with sending your child to school in a self-driving car?”

Two experts on artificial intelligence (AI) made the audience sit up and take notice of reality during a discussion about concerns regarding AI, titled 'Journey on intelligence: a dialogue where philosophy inquires artificial intelligence', organised by the School of Science and Engineering at Habib University on Thursday.

As we step into the age of AI, the question of ethics remains unresolved. How will mankind reconcile artificial intelligence with human intellect while also grappling with the moral and ethical issues it raises?

Dr Sajjad Haider, head of the AI lab at the Institute of Business Administration, started with Alan Turing’s 1950 test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. But ‘AI’ or ‘artificial intelligence’ was truly born in 1956 when the computer scientist John McCarthy coined the term.

‘Real-world application of robots has people worried about losing their jobs’

“The years following that saw millions of dollars being poured into developing human-like intelligence, which was not really happening, making it the dark period of AI,” he said.

But things began picking up pace around 1996 and 1997, when IBM's Deep Blue defeated the world chess champion Garry Kasparov.

“We didn’t have that much computing power in the 1950s but we did in the 1990s. Then recently, in 2016, Google DeepMind’s AlphaGo defeated Go champion Lee Sedol leading to much media hype about AI. Now we see the DARPA challenges involving autonomous or driverless vehicles.

“We also have robots now that look like humans,” he said. “They happen to have Asian faces because they are mostly manufactured in Japan or China. We already have service robots that look like machines but the human face on robots helps smooth interaction between the machine and the human,” he explained.

“But real-world application of robots has people worried about losing their jobs now. For instance, there is Google Translate, which has got call centre workers all worried,” he said.

“But throughout history we have seen that whenever something new comes along it may make some professions less popular but then there are new professions that come into demand while creating new openings,” Dr Sajjad pointed out.

Other developments that are equally or perhaps more worrisome for humans include data mining, deepfake technology and the like. There is Facebook, the world’s biggest social network, at the centre of an international scandal with Cambridge Analytica involving voter data, the 2016 US presidential election and Brexit. There are smart programmes that can analyse your facial expressions to infer your personality.

Dr Sajjad observed that people have built many powerful AI tools and will keep on using them. But what if only a few have access to such tools? “Then it will be just like nuclear technology, which can be misused by those who have access to it,” he pointed out.

Meanwhile, Dr Muhammad Haris, a professor of philosophy at Habib University, said that the combination of biogenetics, AI and state power is making us wonder who’s going to hold power in the future.

“When we think, we think and then there is a gap before the process of reflection,” he said. “That kind of gap diminishes with AI. So AI and genetics will get entangled,” he added.

He also reminded the audience of what already exists, such as systems of surveillance and facial recognition by computers. He cited the example of a movie rights company sending a legal notice to Vimeo to take down a video they presumed belonged to them. But they were in fact mistaken, as it was a computer simulation that was very difficult to distinguish from the original.

“So now with AI you have a situation where the sources of similarities are increasing, leading to the production of hyper-reality,” he said.

The discussion was moderated by Farieha Aziz, a journalist and co-founder of Bolo Bhi, a digital rights and civil liberties group.

Published in Dawn, March 24th, 2018
