Cloud: Throughput Versus Bandwidth
One of the least acknowledged gotchas in cloud computing is the cost of transferring data. Most providers charge for data transfers in much the same way mobile service providers do. That means you not only have to estimate how much compute and storage you might need, but how much throughput as well.

Yes, I said "throughput," not "bandwidth." There's a difference, and it's an important one to understand as you consider migration to a cloud environment. Both bandwidth and throughput are network metrics. Bandwidth generally refers to the size and speed of the pipe you have between any two nodes on a network. A T1, for example, can carry up to 1.544 Mbps. That's the bandwidth available between the edge of the corporate network and the Internet. It's not, however, the throughput you'll necessarily see for an application.

That's because throughput measures the rate at which data is successfully transferred between two endpoints. It's affected by things like round-trip latency and TCP window sizes, in addition to limitations that may exist in the client and on the server, as the quick calculation below shows. Throughput is basically the measure of how fast a given application can transfer data. While throughput is certainly based in part on, and limited by, bandwidth, the two rates aren't the same, even though both are typically measured in bits per second.

That's why seeing a graphical representation can sometimes make folks a bit anxious. You'll note that connectivity between the server (where the app lives) and the rest of the network isn't exactly awe-inspiring in terms of bandwidth. Nor is the client connectivity, the infamous "last mile," all that promising.
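To make that relationship concrete, here's a minimal sketch of the classic single-connection TCP ceiling: throughput can't exceed the window size divided by the round-trip time, no matter how big the pipe is. The window size and round-trip time below are illustrative assumptions, not measurements from any particular network or provider.

```python
# Rough ceiling on single-connection TCP throughput:
# throughput <= window size / round-trip time, regardless of link bandwidth.

def max_tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on throughput for one TCP connection, in bits per second."""
    return (window_bytes * 8) / rtt_seconds

# Illustrative assumptions: a 64 KB window over a 50 ms round trip.
window = 64 * 1024   # bytes
rtt = 0.050          # seconds

print(f"{max_tcp_throughput_bps(window, rtt) / 1e6:.1f} Mbps")
# ~10.5 Mbps, far below what a 100 Mbps or 1 Gbps link could carry
```

In other words, a connection with a healthy amount of bandwidth can still deliver modest throughput once latency and window size enter the picture, which is exactly why the two numbers shouldn't be used interchangeably when estimating cloud data-transfer costs.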