Google may get a lot of credit for the development of world-girdling, cloud-sprouting exascale datacenters, but its IT infrastructure started on such a small scale that co-founder Larry Page was able to reinforce its data-storage interconnect structure by performing “miracle surgery” using a twist tie.
At the time, Google’s datacenter infrastructure didn’t have all the bells and whistles that it does now, though even before security became a critical concern, almost no one was able to set foot in a Google datacenter “because it was tiny,” according to a blog posted Feb. 4 by Urs Holzle, senior vice president of technical infrastructure and Google Fellow.
Holzle, a Fulbright scholar with a Ph.D. in computer science from Stanford University, is credited with the design and construction of Google’s network of high-efficiency, scale-out datacenters on which Google spent $7.3 billion during 2013 alone.
Holzle reminisced about the early days in a Feb. 4 blog written on the 15th anniversary of his first visit to Google’s enterprise data cage, a seven-by-four-meter structure holding about 30 PCs on shelves that barely fit in the space (even before adding a human admin).
The cage was on the floor of the Santa Clara, Calif., Internet hosting facility of Exodus Communications, elbow-to-elbow with eBay and down the hall from AltaVista, which was then the most popular search engine on the Internet.
Twenty-five of the caged PCs were there to build and serve the index. The other five ran the spiders that searched the (comparatively) tiny Internet of 1999 for content.
Storage drives for the servers sat in improvised cases put together by co-founders Larry Page and Sergey Brin, according to a comment Brin contributed under Holzle’s blog. The pair also made their own ribbon cables so they could connect seven drives to each server. (“We were very cheap!” Brin explained.)
Maintainability is the downside of cheap, however. One of the cables, which ran outside the boxes rather than inside, where it would have been protected, was damaged while Brin and Page were loading equipment into the server cage. “So, late that night, desperate to get the machines up and running Larry did a little miracle surgery to the cable with a twist tie,” Brin wrote. “Incredibly it worked!”
Hardware could be had or made for cheap, but bandwidth was another story.
“A megabit cost $1200/month and we had to buy two,” Holzle wrote, still sounding annoyed at both the cost and packaging, especially because Google didn’t actually use up the second megabit until the summer of 1999. At the time, 1 million queries per day ate up only about 1Mbit/sec of bandwidth, he wrote.
Small, cheap, and low volume the earliest Google data cage might have been, but it wasn’t pretty, either. “[It] was one of the ugliest and [most] unkempt cage[s] I’d ever seen,” former Exodus employee Joshua Franta recalled in a comment. “A few years later they were renting space in 35,000 sq.ft. increments rather than 28 and Googly-ness had made its way into the cabling.”
Another former Exodus employee, Jonas Luster, recalled getting frantic pages to rush over and reboot a Google cluster or “wiggle those cables and we’ll ping it from here.” The servers, the datacenters, and the technical support behind all that cable-wiggling have become far more efficient since then, according to less-nostalgic Google documentation.
Image: Google, Inc.