Human-sounding Google Assistant sparks ethics questions
The new Google digital assistant converses so naturally it may seem like a real person.
The unveiling of the natural-sounding robo-assistant by the tech giant this week wowed some observers but left others fretting over the ethics of how the human-seeming software might be used.
Google chief Sundar Pichai played a recording of the Google Assistant independently calling a hair salon and a restaurant to make bookings — interacting with staff who evidently didn't realize they were dealing with artificial intelligence software, rather than a real customer.
Tell the Google Assistant to book a table for four at 6:00 pm, and it tends to the phone call in a human-sounding voice complete with “speech disfluencies” such as “ums” and “uhs.” “This is what people often do when they are gathering their thoughts,” Google engineers Yaniv Leviathan and Yossi Matias said in a Duplex blog post.
The Google Assistant, enhanced with “Duplex” artificial intelligence technology that lets it engage like a real person on the phone, was the surprise and, for some, unsettling star of the internet giant's annual developers conference this week in its hometown of Mountain View, California.
The digital assistant was also programmed to understand when to respond quickly, such as after someone says “hello,” versus pausing as a person might before answering complex questions.
Google pitched the enhanced assistant as a potential boon to busy people and to small businesses that lack websites where customers can make appointments.
“Our vision for our assistant is to help you get things done,” Pichai told the approximately 7,000 developers at the Google I/O conference, along with an online audience watching his streamed presentation on Tuesday.
Google will begin testing the Duplex improvement to its assistant in the months ahead.
Realistic robo-callers
The Duplex demonstration was quickly followed by debate over whether people answering phones should be told when they are speaking to human-sounding software and how the technology might be abused in the form of more convincing “robocalls” by marketers or political campaigns.
“Google Duplex is the most incredible, terrifying thing out of #IO18 so far,” tweeted Chris Messina, a product designer whose resume includes a stint at Google and credit for bringing the idea of the hashtag to Twitter.
Google Duplex is an important development and signals an urgent need to figure out proper governance of machines that can fool people into thinking they are human, according to Kay Firth-Butterfield, head of the AI and machine learning project at the World Economic Forum's Center for the Fourth Industrial Revolution.
“These machines could call on behalf of political parties and make ever more convincing recommendations for voting,” Firth-Butterfield reasoned.
“Will children be able to use these agents and receive calls from them?” she asked.
Digital assistants that make arrangements for people also raise the question of who is responsible for mistakes, such as a no-show or cancellation fee for an appointment set for the wrong time.
At a time of heightened concerns about online privacy, there were also worries expressed about what kind of data digital assistants might collect and who gets access to it.
“My sense is that humans, in general, don't mind talking to machines so long as they know they are doing so,” read a post credited to Lauren Weinstein in a chat forum below the Duplex blog post.
An array of comments on Twitter contended that not letting people know they were conversing with software was an ethical breach.
“If you've grown up watching 'Star Trek TNG' like me, then you probably considered natural voice interactions with computers a thing of the future,” read a post by Andreas Schafer in the blog chat forum. “Well, looks like the future is here.”