Weekend Roundup: Waymo’s Challenge to You; Amazon’s Secret Robot

It’s the weekend! Let’s touch on some of the big tech stories from the week you might have missed, including Waymo’s interesting new challenge, Amazon’s top-secret robot, and a fascinating A.I. snafu.

Waymo Wants You

Waymo, the autonomous-driving company that began life as Google’s self-driving car project, is hosting another round of dataset challenges, complete with a $15,000 top prize for each challenge. That’s in addition to the release of a new dataset (Waymo Open Dataset: Motion) focused on behavior prediction and motion forecasting.

There are four new Waymo Open Dataset challenges: motion prediction, interaction prediction, real-time 3D detection, and real-time 2D detection. The winning team for each challenge will receive the aforementioned $15,000, with second-place teams earning $5,000 and third-place teams scoring $2,000. You can find the challenges’ rules on Waymo’s website; in essence, you’re training a machine-learning model to accomplish each challenge’s goals. (Last year’s Waymo challenges were similar.)

Based on the challenges and open dataset update, it’s clear that Waymo is trying to refine how autonomous vehicles detect and predict everything happening on the road around them. “While AV researchers have made significant progress on the motion forecasting problem in recent years, more high-quality open-source motion data can help achieve new breakthroughs throughout the industry,” reads a note on the company’s blog. “Today, we’re expanding the Waymo Open Dataset with the publication of a motion dataset – which we believe to be the largest interactive dataset yet released for research into behavior prediction and motion forecasting for autonomous driving.”
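If you’re curious what working with the new motion data actually looks like, here’s a minimal sketch (not an official starter kit) of reading a single scenario, assuming the TFRecord-of-Scenario-protos format and the waymo-open-dataset Python package described in Waymo’s documentation; the file name below is a placeholder.

```python
# Minimal sketch: read one scenario from the Waymo Open Dataset motion data.
# Assumes the waymo-open-dataset pip package and the TFRecord/Scenario proto
# format described in Waymo's documentation; the file path is a placeholder.
import tensorflow as tf
from waymo_open_dataset.protos import scenario_pb2

FILENAME = "training.tfrecord-00000-of-01000"  # placeholder shard name

for raw_record in tf.data.TFRecordDataset(FILENAME).take(1):
    scenario = scenario_pb2.Scenario()
    scenario.ParseFromString(raw_record.numpy())

    # Each scenario flags the agents whose future motion entrants must predict.
    for required in scenario.tracks_to_predict:
        track = scenario.tracks[required.track_index]
        # Observed states for that agent: 2D positions where the data is valid.
        trajectory = [(s.center_x, s.center_y) for s in track.states if s.valid]
        print(scenario.scenario_id, track.id, len(trajectory), "observed states")
```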

So Waymo hopes that it can achieve at least some of its goals via crowdsourcing. That’s certainly one way of doing things; meanwhile, Tesla has beta-tested its autonomous-driving software using actual cars on actual roads. At least Waymo’s new challenges have zero chance of resulting in a real vehicle crashing into a wall.  

Amazon’s Top-Secret Home Robot

Robotics fans, take note: Amazon is reportedly hard at work on a “home robot” that basically sounds like an Alexa device on wheels. Codenamed “Vesta,” the robot could feature a screen, microphone, cameras, and sensors capable of detecting temperature and other atmospheric details. 

According to Business Insider, some 800 Amazon employees are working on the project. Amazon has a mixed record with hardware—although its Alexa devices were smash hits, the company’s Fire Phone was a serious failure—so the teams involved are reportedly cautious about the development and rollout of this particular effort. The big question is whether consumers will find a home robot essential or just annoying; while cameras, microphones and sensors on a rolling platform would be useful for exploring Mars, how much would that hardware ease most folks’ daily lives?

When Is an Apple an iPod?

Artificial intelligence (A.I.) and machine learning are the future of many industries. However, researchers know there’s still quite a bit of work to be done to improve the current generation of A.I. platforms, which can be a little bit… crude at times.

For example, OpenAI revealed this week (in a lengthy blog post that’s a must-read if you’re into anything A.I.-related) that someone can fool even sophisticated computer-vision software by slapping a piece of paper with something written on it over an object. Instead of accurately identifying the object, the software identifies whatever’s written on the paper. In OpenAI’s example, sticking a handwritten note that says “iPod” on an apple causes its CLIP model to classify the fruit as the iconic music player.

“We refer to these attacks as typographic attacks,” the blog added. “We believe attacks such as those described above are far from simply an academic concern. By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model. Like the Adversarial Patch, this attack works in the wild; but unlike such attacks, it requires no more technology than pen and paper.”
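Because CLIP is open source, the attack is easy to try for yourself. Here’s a rough sketch, assuming the pip-installable clip package from OpenAI’s GitHub repo and a hypothetical local photo of an apple with an “iPod” note stuck to it; it illustrates ordinary zero-shot classification, not OpenAI’s exact experimental setup. The attack works because CLIP was trained to match images with text, so legible writing inside the frame is a very strong signal.

```python
# Rough sketch of a typographic attack against OpenAI's CLIP model.
# Assumes the open-source clip package (github.com/openai/CLIP);
# the image file name is hypothetical.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# A photo of an apple with a handwritten "iPod" note stuck to it.
image = preprocess(Image.open("apple_with_ipod_note.jpg")).unsqueeze(0).to(device)
labels = ["a photo of an apple", "a photo of an iPod"]
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0)

# With the note in frame, the "iPod" label tends to win despite the fruit.
for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2%}")
```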

It’s just another reminder that it’s still early days for A.I., and the field needs researchers and technologists who can help weed out these bugs before they impact real-world systems.

That’s it! Have a great weekend, everyone!