Amazon, Google Robotics Hints at Possibility… and Danger

Will the world’s biggest tech companies begin pouring more resources into robotics?

Amazon has offered a few hints that it plans to ramp up its consumer-facing robotics work. Rohit Prasad, a scientist in Amazon’s Alexa division, told the audience at the MIT Technology Review’s EmTech Digital AI conference that “the only way to make smart assistants really smart is to give it eyes and let it explore the world.” (Hat tip to The Verge for the link.)

Does that mean Amazon is planning an Alexa-empowered robot that can follow you around the house like a plastic pet? Rumors to that effect have persisted for years, powered in part by Amazon’s continuing interest in automating other aspects of its business, including warehouse operations. But the ultimate nature of such a robot—and its abilities—is an open question.

Meanwhile, years after it shut down its robotics-related efforts, Google seems to have a renewed interest in figuring out how to build machines that think for themselves. In a posting on Google’s A.I. blog, Andy Zeng, a student researcher at Robotics at Google, described how the company partnered with researchers at Princeton, Columbia, and MIT to develop a… “TossingBot.”

The TossingBot, in Zeng’s words, is “a picking robot for our real, random world that learns to grasp and throw objects into selected boxes outside its natural range.” Throwing is a difficult task, requiring the robot to grasp (so to speak) concepts such as mass, friction, and aerodynamics; throwing a screwdriver is very different from hurling a full-sized car, for instance. The key, as you might expect, is judicious use of artificial intelligence (A.I.) and deep learning.
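To get a feel for why throwing is hard, consider the simplest possible ballistics model. The sketch below (my own illustration, not code from the TossingBot project) computes the release speed an ideal, drag-free projectile would need to land a given distance away — the kind of baseline estimate that mass, friction, and aerodynamics then complicate in the real world:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def release_speed(target_distance, angle_deg=45.0):
    """Release speed (m/s) for an ideal projectile (no drag) launched and
    landing at the same height, from the range formula R = v^2 * sin(2θ) / g."""
    theta = math.radians(angle_deg)
    return math.sqrt(target_distance * G / math.sin(2 * theta))

# A 2-meter toss at a 45-degree release angle:
v = release_speed(2.0)  # ≈ 4.43 m/s
```

The catch is that real objects violate this model in object-specific ways — a screwdriver tumbles, a ping-pong ball feels drag — which is why a fixed formula alone can’t produce accurate throws.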

“Previously we’ve shown that our robots can learn to push and grasp a large variety of objects, but accurately throwing objects requires a larger understanding of projectile physics,” Zeng wrote. “Acquiring this knowledge from scratch with only trial-and-error is not only time consuming and expensive, but also generally doesn’t work outside of very specific, and carefully set up training scenarios.”

In theory, A.I. will allow the robot to “train” itself on a variety of real-world scenarios, becoming more effective every time it runs through a particular sequence of “tosses.” If this experiment works out, Google can conceivably adjust it to fit all kinds of other robot operations. After that, will the company begin producing robots for the “real world”?
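A toy version of that trial-and-error loop looks like this. To be clear, this is not TossingBot’s actual algorithm (Zeng’s post describes a deep-learning system); the simulated “world,” its drag factor, and the step size below are all made up purely to illustrate how repeated tosses can correct an imperfect physics estimate:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ideal_speed(distance, angle_deg=45.0):
    """Drag-free release speed estimate from the ideal range formula."""
    theta = math.radians(angle_deg)
    return math.sqrt(distance * G / math.sin(2 * theta))

def landing_distance(speed, angle_deg=45.0):
    """Simulated 'real world': a crude 10% drag-like loss the ideal model ignores."""
    theta = math.radians(angle_deg)
    return 0.9 * speed ** 2 * math.sin(2 * theta) / G

target = 2.0       # meters
correction = 0.0   # learned additive tweak to the release speed
lr = 0.5           # step size for each update

for toss in range(20):
    v = ideal_speed(target) + correction
    error = target - landing_distance(v)  # observed undershoot/overshoot
    correction += lr * error              # nudge the next toss toward the target
```

After a handful of tosses, the learned correction compensates for the unmodeled drag and the robot lands on target — the same feedback idea, scaled up with deep networks and many objects, is what lets a real system improve with every throw.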

We’ve been down this road before, of course. When Google first acquired Boston Dynamics, the creators of the infamous “BigDog,” people assumed that a new era of robotics was upon us. But as with so many other Google projects (including Google Plus, Labs, and… oh, heck, just take a look at the graveyard for yourself), the robotics initiative floundered and died with nary a whimper.

Of course, all big tech firms have a tendency to hype up big projects, only for those projects to fail (raise a glass to all those mobile developers who poured resources into creating apps for Windows Phone back in the day). “Fail fast” is a mantra that many tech executives embrace, and with good reason: Maintaining a platform that has zero momentum isn’t anyone’s idea of a good time.

But it also means that any tech pros interested in robotics need to exhibit a bit of caution when companies say they’re going “all in” on robots. If Amazon, Google, and their rivals begin touting the benefits of combining A.I. with robotics hardware, and encourage third-party developers and tech pros to jump aboard, it might be worth waiting a bit to see if an ecosystem actually develops. Once burned, twice shy, as they say.