Google Leads Charge Into A.I. Chipset Battle

Much as in the 1990s, we’re entering a new era of “silicon warfare,” with big tech firms pushing their own chipsets. This time around, artificial intelligence (A.I.) is the battleground.

At I/O 2016, Google introduced its first Tensor Processing Unit, or TPU. This custom chipset was designed to accelerate A.I. workloads, letting machines churn through complex tasks faster. In 2017, Google refreshed the TPU and introduced the ability to rent virtual machines backed by the new boards.

Now, Google is expanding its reach: TPUs are available on the Google Cloud Platform via a new beta program. The company says the program will help machine learning (ML) and A.I. workloads scale:

Cloud TPUs are a family of Google-designed hardware accelerators that are optimized to speed up and scale up specific ML workloads programmed with TensorFlow. Built with four custom ASICs, each Cloud TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory onto a single board. These boards can be used alone or connected together via an ultra-fast, dedicated network to form multi-petaflop ML supercomputers that we call “TPU pods.” We will offer these larger supercomputers on GCP later this year.
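Google's published figures imply roughly 45 teraflops per ASIC, and a quick back-of-the-envelope calculation shows why even a modest pod of these boards lands in "multi-petaflop" territory. The pod size below is illustrative, not Google's published configuration:

```python
# Back-of-the-envelope math from Google's published Cloud TPU specs:
# 180 teraflops of floating-point performance and 4 custom ASICs per board.
TFLOPS_PER_BOARD = 180
ASICS_PER_BOARD = 4

tflops_per_asic = TFLOPS_PER_BOARD / ASICS_PER_BOARD  # 45 TFLOPS per chip

# Hypothetical pod size -- Google only says pods are "multi-petaflop".
boards_in_pod = 64
pod_petaflops = boards_in_pod * TFLOPS_PER_BOARD / 1000

print(f"{tflops_per_asic:.0f} TFLOPS per ASIC")
print(f"{pod_petaflops:.2f} PFLOPS across a {boards_in_pod}-board pod")
```

At 180 teraflops per board, six boards are already enough to cross the petaflop line, which is what makes the dedicated inter-board network the interesting part of the pod design.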

The approach is unique to Google, but the strategy behind it is a familiar one across the broader A.I. market. While the Mountain View firm tries to solidify its position as a cloud-based ML platform provider, NVIDIA is making big bets on specialized silicon in big auto: its deal with VW is a watershed for the chipmaker, and possibly for an industry. NVIDIA's powerful hardware is aimed at car automation at scale, helping companies such as VW understand driving habits and driver preferences to build better, safer in-car experiences.

Various reports note that Apple is looking into manufacturing its own chipsets for use across its product lineup. Already designing its own SoCs for mobile, Apple is said to be examining its entire hardware stack with an eye toward completely custom designs. This may even extend to car automation, where Apple has acknowledged it is investing in research.

This all echoes Microsoft’s own ‘Intelligent Edge, Intelligent Cloud’ initiative: smarter devices feed more data to the cloud, where A.I. and ML take over to return better contextual results faster.

Each of these companies has an angle on the future of A.I. Apple’s stack is a silo: users get in, and their investment in the Apple ecosystem makes it harder to leave. This has always been true, but it’s much more difficult to pull yourself away from a cloud and/or on-device chipset program that learns how to better serve you via Siri – and transfers your data to new devices automatically as you upgrade. Google is doing something similar for Android.

NVIDIA simply wants to be the hardware provider in a still largely untapped market (cars) that has long been lucrative for third-party suppliers. The automotive parts industry is worth an estimated $106.5 billion. If it can make a home for its products inside smart cars, NVIDIA is well positioned for the future – and probably untouchable if it builds a big enough lead.

Google and Microsoft are pursuing a similar strategy in the cloud. The differentiator is the crowd they're chasing: Microsoft is far more interested in the enterprise, whereas Google is leaning on its consumer reach for a more sprawling approach to building and scaling both A.I. and ML.

In a short time, we’ve gone from teaching bots how to communicate with humans to scalable cloud-reliant A.I. with powerful processing units. Some of it happens on the device (or in the car), while some companies simply want devices to feed info to a hungry cloud. Happily, this newest silicon war is keeping everyone in their specialized lanes, helping companies succeed and users discover value.