Google Duplex Shows How Developers Must Take Care with A.I.

Last week, Google revealed Duplex, an artificial intelligence (A.I.) platform designed to speak to customer-service representatives on the phone. Although the search-engine giant hailed this innovation as a way to meaningfully improve humans’ day-to-day interactions with others, some critics immediately balked at the idea of a piece of software acting like a flesh-and-blood person.

With Duplex, Google went to great lengths to build a system that mimics human conversation; it even pauses and interjects “um” at natural moments. On its blog, Google has posted recordings of Duplex booking appointments and restaurant reservations. The company has made it very clear that the system can only engage in these very specific conversations; general discussions with human beings are beyond its capabilities—at least for now.

“At the core of Duplex is a recurrent neural network (RNN) designed to cope with these challenges, built using TensorFlow Extended (TFX),” is how Google’s blog explained the technology at the heart of the system. “To obtain its high precision, we trained Duplex’s RNN on a corpus of anonymized phone conversation data. The network uses the output of Google’s automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation (e.g. the desired service for an appointment, or the current time of day) and more.”
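Google has not published Duplex’s model code, but the architecture the blog describes—an RNN that consumes ASR output alongside audio features, conversation history, and call parameters—can be sketched in miniature. Everything below (the feature sizes, the simple Elman-style cell, the variable names) is an illustrative assumption, not Google’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy feature sizes -- placeholders, not values from Google's blog.
ASR_DIM, AUDIO_DIM, PARAM_DIM, HIDDEN_DIM = 8, 4, 3, 16
INPUT_DIM = ASR_DIM + AUDIO_DIM + PARAM_DIM

# Weights of a minimal Elman-style RNN cell.
W_in = rng.normal(scale=0.1, size=(HIDDEN_DIM, INPUT_DIM))
W_rec = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))
b = np.zeros(HIDDEN_DIM)

def step(state, asr_vec, audio_vec, param_vec):
    """One RNN step: concatenate the per-turn features, update the state."""
    x = np.concatenate([asr_vec, audio_vec, param_vec])
    return np.tanh(W_in @ x + W_rec @ state + b)

# Run a five-turn "conversation": the hidden state carries history forward,
# which is how an RNN keeps track of what has already been said.
state = np.zeros(HIDDEN_DIM)
for _ in range(5):
    state = step(state,
                 rng.normal(size=ASR_DIM),    # stand-in for ASR text features
                 rng.normal(size=AUDIO_DIM),  # stand-in for raw audio features
                 rng.normal(size=PARAM_DIM))  # e.g. desired service, time of day
print(state.shape)  # (16,)
```

The point of the sketch is the data flow: each turn’s speech-recognition output is combined with audio and call-parameter features, and the recurrent state is what lets the network condition its next action on the whole conversation so far.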

Soon after Google revealed Duplex, it responded to rising criticism by announcing that the system would identify itself at the beginning of a call. That adjustment, along with the controversy over Duplex’s human-like features, illustrates a key issue for developers and other tech pros as A.I. becomes more commercialized: Should you announce to your users that an A.I. is indeed at work?

For example, a number of companies (including Facebook and Microsoft) already offer bots that interact with customers on Messenger and other communication platforms. These bots are pretty easy to spot, even if they don’t explicitly announce themselves; they generally stick to very limited routines, with zero ability to improvise.
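The “limited routines” point is easy to see in code: a typical scripted bot matches a handful of intents and falls back to a canned line for anything else, which is exactly what gives it away. The keywords and replies below are invented for illustration:

```python
# A deliberately simple scripted bot: a few keyword-matched routines,
# plus one canned fallback for everything else -- no improvisation.
ROUTINES = {
    "hours": "We're open 9am-5pm, Monday through Friday.",
    "order": "You can check your order status on our website.",
    "refund": "Refunds are processed within 5-7 business days.",
}

FALLBACK = "Sorry, I didn't understand that. Can you rephrase?"

def reply(message: str) -> str:
    """Return the first matching scripted answer, or the fallback."""
    text = message.lower()
    for keyword, answer in ROUTINES.items():
        if keyword in text:
            return answer
    return FALLBACK

print(reply("What are your hours?"))  # a scripted answer
print(reply("Tell me a joke"))        # off-script, so the fallback
```

Any question outside the routine table produces the same fallback, which is why users spot these bots quickly even without a disclosure.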

But with larger datasets and more powerful dev tooling, such platforms could become far more effective at blending in, raising the question of whether all human-A.I. interactions should open with a declaration of identity. As the Duplex episode shows, when there’s any possibility of confusion, people seem more comfortable with an A.I. that announces itself. And if they’re comfortable, they’re much more likely to keep using a particular app or platform.
