It takes a real effort of will to reset our thinking in any situation, but that’s exactly what we’ll need to do in order to reinvent whole ecosystems (such as health care and education) for a massively connected, sensor-rich, API-infused world. The effort will pay off with huge improvements in the sustainable performance of these functions.

Intel’s 4004 was first advertised in November 1971, while the word ‘Internet’ wasn’t coined until three years later; the TCP/IP stack was not standardized until 1982. That gap provided a long, long time—seven Moore’s Law cycles—for isolated nodes of computation to swarm across the planet, with connectivity considered an extra-cost option and at most an intermittent convenience. We’re still carrying the resulting baggage, in the form of ziggurats of middleware that simulate what we might have much more simply built for real, if we’d had ubiquitous connection before we’d gotten cheap CPUs.

From a purely technical point of view, we can understand and rage at the perversity. If you want to enable simultaneous editing of a document by many contributors, for example, the obvious way is to have a shared data structure and a simple means of concurrent access and conflict resolution. The worst possible way is to give every editor a separate copy of the document, and to build a complex mechanism of replication and combination that tries to approximate what could be done better (and cheaper) the cloudy way. The former is something like Google Docs, the latter something like Microsoft Office (a minimal sketch of the contrast appears below). But what if the Internet had been invented, in some form, before the microprocessor? In a connected-first world, wouldn’t Google Docs have long been considered the norm, instead of being seen as an innovation?

Technical elegance, though, is much too narrow a frame of reference. Let’s think big. Let’s think about global, crucial, currently unsustainable institutions—such as the aforementioned health care. Too many people crowd into too few doctors’ offices to take too many tests, at too much cost, to deliver too many null results—results that say, in effect, “OK, you didn’t need to come into the office today after all.” In a world of “connection first,” we’d put radio nodes in medicine bottles to confirm that the patient is taking pills on schedule. We’d put network-connected sensors in toilets to do basic urinalysis. We’d let people opt in to connecting their grocery store frequent-buyer accounts to their primary care providers, so information on what we’re eating and drinking could be accurately recorded instead of optimistically self-reported.

Likewise higher education: instead of diverting huge fractions of tuition and fees to on-campus housing and recreation, we’d make the campus a place that’s occasionally visited for seminars and feedback sessions—while most learning takes place in apprentice- or intern-style engagements, and faculty members tailor sequences of content to match a student’s assignments in practical settings.

This is the way we need to think about the cloud: not merely as new connective tissue for the institutions and processes that we already have, but as a new environment in which we can rethink what those processes should be and how those institutions should do their jobs. That’s the difference between accident and design—and the difference between being a leader and being a victim of disruption.
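To make the collaborative-editing contrast concrete, here is a minimal, hypothetical Python sketch: one authoritative shared document guarded by a lock, versus per-editor copies that must be merged after the fact. The names SharedDocument and merge_copies are illustrative only; neither is meant to represent how Google Docs or Microsoft Office actually works.

```python
import threading

class SharedDocument:
    """The 'cloudy way': one authoritative copy, with a lock providing
    simple concurrent access. Conflict resolution is trivial because
    there is only ever a single source of truth."""

    def __init__(self):
        self._lines = []
        self._lock = threading.Lock()

    def append_line(self, editor, text):
        with self._lock:  # brief contention, then a consistent update
            self._lines.append(f"{editor}: {text}")

    def contents(self):
        with self._lock:
            return list(self._lines)


def merge_copies(base, copies):
    """The replicate-and-combine way: every editor worked on a private
    copy, so a merge step must reconcile divergent histories after the
    fact. This naive version just de-duplicates lines; real merge logic
    (ordering, conflicting edits to the same line) is far harder."""
    merged = list(base)
    for copy in copies:
        for line in copy:
            if line not in merged:
                merged.append(line)
    return merged


if __name__ == "__main__":
    # Shared-state editing: three concurrent editors, one document.
    doc = SharedDocument()
    threads = [
        threading.Thread(target=doc.append_line, args=(f"editor{i}", "a change"))
        for i in range(3)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(doc.contents())  # every edit landed exactly once, in some serial order

    # Replicate-and-merge editing: the same three edits, reconciled later.
    copies = [[f"editor{i}: a change"] for i in range(3)]
    print(merge_copies([], copies))
```

The point of the sketch is the asymmetry: the shared version needs only a few lines of locking, while the replicated version pushes an open-ended reconciliation problem downstream to be solved after every divergence.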
Peter Coffee spent ten years as a practicing engineer on petrochemical and aerospace projects before his 19-year detour as Technology Editor at PC Week (which later became eWEEK). He’s now VP and Head of Platform Research at Salesforce.com, where he works with CxOs and ISVs to advance the state of the art of cloud-based application development.