Artificial intelligence (A.I.) is a central component of the impending shift to ambient computing, where voice assistants handle many of the mundane tasks we just don’t enjoy. Google Duplex, which drew widespread criticism after its debut at I/O this spring, is on the verge of being rolled out – and Google is trying to frame the ethical debate around it.
The company invited a small gathering of tech press to a Manhattan restaurant recently for a demonstration, as well as more context about Duplex. The location wasn’t coincidental; the restaurant’s owner chimed in to tout Duplex’s technology as capable of helping their restaurant with the 100-plus reservation calls they reportedly receive on a daily basis.
A new video posted on YouTube (embedded below) also tries to ease concerns around the technology. It’s more best-case-scenario messaging from Google: A city street where it might be hard to make a call due to ambient noise, a restaurant name most digital assistants would screw up, and an oddly receptive host or restaurateur set the video’s tone. The message: Everyone loves how easy Google Duplex makes life, and you should, too.
Businesses can choose not to accept Duplex calls. The calls are recorded, but restaurants can opt out by explicitly telling the bot not to save the call when it informs them, at the start of the chat, about the recording. Try that with your local cable provider.
Press were able to stress-test Google Duplex, with Gizmodo claiming it was able to stump the software. When problems occur, Google Assistant (the overarching A.I. powering many of Google’s services these days) either gives up or hands the call off to a human operator. That works for now, but it doesn’t scale.
A.I., Ethics, and Behavior
In a way, Google is going back to square one. It’s returning to voice calls, a really old technology, after spending nearly a decade easing us into search and apps to handle most of our needs. (That emphasis on digital interaction is part of the reason mobile carriers shifted gears from how many minutes and text messages you were allotted to how much data your plan has.)
As in the early days of search, Google sees opportunity with A.I. and machine learning. Nobody is doing it at Google’s level, and others like Amazon and Apple are more keen to use their assistants for their own, often narrowly-focused services.
Behaviorally, Google Duplex is antithetical to many current trends. Hosts and restaurateurs won’t want to chat with bots, and it’s far easier (and more familiar) for users to dive into OpenTable and click a few buttons than to dictate to their phones where and when they want to eat.
With that in mind, Google’s main fight over Duplex’s future may be internal: It already utilizes OpenTable for reservations inside Google search, which is a far better experience than hoping the bot inside your phone does its job. (A clear counterpoint to that: Not every restaurant is on OpenTable, or any other booking engine – but every restaurant has a phone.)
While Google attempts to frame Duplex as cool and effective, it’s burying the motive. Google Duplex is a data gathering tool. It is the product of a company eager to learn all it can about human intelligence to improve artificial intelligence.
What Google has learned from observing human behavior is that we gladly trade personal data for convenience. Robots making appointments for you is the beautiful tip of the iceberg, but the strip-mining of personal data is the bummock – the iceberg’s submerged bulk – that could have far-reaching, and potentially destructive, consequences; it’s what sinks ships, metaphorically speaking.
The Ethical Dilemma for Google Duplex Rages On
In its A.I. manifesto, Google has the following to say about privacy:
We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
In lay terms, this says Google Assistant will tell humans it’s a bot and let them opt out of call recording. According to TechCrunch, the bot also won’t relay your information:
Asked for the user’s email address, on the other hand, the system simply says it doesn’t have the permission of its “client” to disclose such information, maintaining the whole “assistant” relationship.
But a human “assistant” wouldn’t pilfer your data and sell it to advertisers, because that would be unethical. And if your newly hired assistant were also working at an ad agency, you might consider that a conflict of interest. Google is an ad agency with services designed to funnel data through its pipelines so it can make money.
Google is framing the ethical debate around Duplex as ‘value added.’ Unfortunately, it’s framing a stained canvas. We’ve seen what happens when convenience is bartered for data, and it’s not pretty. Ambient computing can capture far more incidental data than we’re sometimes willing to acknowledge, and we still don’t know what Google plans to do with it. It’s a potentially dangerous precedent at the earliest stage of the next wave of computing.