[Image: PCI-E cable vs. Intel's new MXC interconnect]

It's not often that something as simple as a cable can genuinely change something as large and complex as a datacenter, but Intel is giving it a try. Among the host of long-anticipated datacenter products Intel announced yesterday was the miracle-fiber MXC optical cable, developed with Corning Cable Systems.

No matter how miraculous its qualities, it would have been hard for a cable to attract any notice among the low-power, high-efficiency Atom chips, "disaggregated" server racks, 64-bit systems-on-a-chip and microservers that drew most of the attention from Intel and the media following the announcement. But Intel went to some extra effort to make sure not only that MXC got noticed, but that everyone understood a new cable could have a bigger long-term impact on datacenters than new processors. Intel will give a detailed technical presentation on MXC Sept. 12 at the Intel Developer Forum, which runs Sept. 10-12 in San Francisco.

Though only a cable, MXC may be the most heavily engineered, most ambitiously designed bit of cord ever to hit a datacenter. The custom silicon and interface systems at its head and tail come out of the long-running work of Intel's Silicon Photonics Operation; the cable itself is a still-unreleased, even-more-advanced version of the flexible, loss-resistant fiberoptic cable Corning already points to as one of its top recent technical achievements. Developing it took two years of joint work between Intel and Corning, on top of the years spent on the supporting technologies.

The resulting MXC is protocol independent, able to carry almost any type of data, and adaptable enough to be used for anything from super-short connections between blades in a server rack to links between server racks and storage, or between storage and networks. Intel plans to share details of the fiber's design and manufacture with other cable manufacturers or with the IEEE, in an effort to keep costs low, as soon as it resolves the resulting intellectual-property issues.

The hardware is much thinner than current fiberoptic cables, but it's the speed that matters. By Intel's estimate, MXC can carry 25Gbit/sec of data per strand of fiber across distances as great as 300 meters. As many as 64 fibers can be bundled into a single cable, bringing its total capacity to 1.6Tbit/sec. Even a single strand would smoke the 8Gbit/sec PCI-E interconnects inside many server racks. In thicker versions it would dwarf the 1Gbit/sec and 10Gbit/sec interconnects most common in datacenters now, as well as the 40Gbit/sec and 100Gbit/sec links market-research firms predict will be the norm by 2017. The only current technology that comes close is top-end VCSEL fiberoptics, which can run one strand at 25Gbit/sec but only across 100 meters; at 300 meters, even that slows to 10Gbit/sec.

For Intel, speed is merely a means to an end. Reducing the bulk of datacenter interconnect cabling will save space, improve air circulation and make heating and cooling more efficient, as well as giving servers more throughput.
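To put Intel's figures in perspective, here is a quick back-of-the-envelope sketch that uses only the numbers cited above; the variable names and the list of comparison links are illustrative assumptions, not anything from Intel's announcement:

```python
# Back-of-the-envelope comparison of Intel's published MXC numbers with the
# interconnect speeds mentioned in the article. All values in Gbit/sec.
# Names and the comparison list are illustrative, not from Intel.

MXC_PER_FIBER_GBPS = 25   # Intel's estimate per fiber strand, up to 300 meters
MXC_MAX_FIBERS = 64       # maximum fiber strands per MXC cable

# Links the article compares MXC against
other_links_gbps = {
    "PCI-E interconnect (in-rack)": 8,
    "1GbE": 1,
    "10GbE": 10,
    "40GbE": 40,
    "100GbE": 100,
    "Top-end VCSEL, one strand at 100m": 25,
}

# 25 Gbit/sec per fiber x 64 fibers = 1,600 Gbit/sec = 1.6 Tbit/sec per cable
mxc_cable_gbps = MXC_PER_FIBER_GBPS * MXC_MAX_FIBERS
print(f"Full 64-fiber MXC cable: {mxc_cable_gbps} Gbit/sec "
      f"({mxc_cable_gbps / 1000:.1f} Tbit/sec)")

for name, gbps in other_links_gbps.items():
    print(f"  {name}: {gbps} Gbit/sec "
          f"(a full MXC cable is {mxc_cable_gbps / gbps:.0f}x faster)")
```

The arithmetic is the whole story: 25Gbit/sec per fiber times 64 fibers yields the 1.6Tbit/sec figure, a couple of orders of magnitude above the in-rack and top-of-rack links it would replace.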
What convenient, high-bandwidth wires actually do is allow datacenter and systems designers to build more efficient systems that group memory, processors, I/O, storage and other components together according to function, rather than stuffing all of them into a space limited by the short practical reach of copper or current fiber cabling.

With unobtrusive, superfast fiber, the ultimate impact on datacenter design is similar, in the physical world, to what virtualization did online when it cut the ties between physical servers and virtual ones: all the processing power can be stacked in a rack at one end of the room and linked to memory, storage and network gateways housed in their own dedicated racks elsewhere. That would let datacenter managers buy or upgrade processing or memory capacity by changing just one rack of hardware, for example, rather than packing every component a system needs into each of dozens of racks because they won't work if they get too far from one another.

"We're excited about the flexibility that these technologies can bring to hardware and how silicon photonics will enable us to interconnect these resources with less concern about their physical placement," Facebook VP of hardware design and Open Compute Foundation Chairman Frank Frankovsky said in a January statement from Intel that first hinted at MXC's capabilities.

Ambitious visions of datacenter redesign are common enough, but not many are based on changes in the characteristics of a cable. "With simple physical things like cabling, you just don't think about it until you are in a data center with 200,000 servers or 100,000 servers [involved]," Chris Phillips, general manager of program management for Windows Server and System Center at Microsoft, said in a Sept. 4 statement. "Then you realize, 'wow, wires really are hard.' They block airflow, and they do all of these evil things to you, and humans touch them and they screw them up. So we are excited to be engaged again with Intel on architecture and design."

Image: Intel/Corning