Wednesday 17 June 2009

Faster than a speeding bullet

Over the next couple of posts I'm intending to go into a little more depth about the fallacies of network computing I mentioned last time. During the past week I have been doing some work using Amazon's Web Services platform to speed up one of our websites.

While demonstrating the delivery speed gained by using Amazon's CloudFront web service, I encountered a surprising (to me at least) reaction. Though impressed by the visible improvement, many non-web developers raised points such as, "Why does it matter where our site is hosted? I thought the web was globally available", and "I thought we were paying for unlimited bandwidth with our ISP?"...

Well, this reaction fell straight into the path of fallacies number 2 and 3!

Latency is zero.
Bandwidth is infinite.

WRONG!!!

Many people assume that bandwidth and latency are the same thing, but they're not. The common metaphor is a system of roads: the number of lanes on the road is equivalent to its bandwidth, and the time it takes to travel its length is equivalent to its latency.

A high-bandwidth connection often has lower latency, since it is less likely to become congested at peak times. However, latency and bandwidth are NOT directly related: high bandwidth usually means better ping times, but it doesn't have to.

Even if we could buy "unlimited" bandwidth from our supplier, a website served up from a single location, no matter how fast its connection, will still suffer from the effects of network latency (lag) when viewed from the other side of the world.

It's all to do with the speed of light. No, really! It's Physics time. Hooray!

With the exception of some new-fangled physics theories, as far as normal physics goes, the speed of light is constant. Nothing can travel faster than it, and this "nothing" rule therefore applies to the messages you send around the internet.

Light goes very "fastly" (as my 2 year old son says) - about 300,000 km/second. That is roughly a million times faster than sound, and fast enough to zoom around the Earth more than 7 times in one second. It sounds really quick, and for day-to-day things like phoning someone (or using smoke signals) the delay is imperceptible. Over distances of thousands of miles, however, the assumption that you never need to worry about that speed starts to break down.
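
Both of those claims stand up to a quick check in Python (using ~343 m/s for the speed of sound in air and ~40,075 km for the Earth's circumference):

    SPEED_OF_LIGHT_M_S = 299_792_458   # metres per second (exact)
    SPEED_OF_SOUND_M_S = 343           # in air at sea level (approx.)
    EARTH_CIRCUMFERENCE_KM = 40_075    # at the equator

    print(SPEED_OF_LIGHT_M_S / SPEED_OF_SOUND_M_S)              # ~874,000 x sound
    print(SPEED_OF_LIGHT_M_S / 1_000 / EARTH_CIRCUMFERENCE_KM)  # ~7.5 laps per second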

You can sometimes see this effect if you have satellite TV and the same programme is being broadcast on both terrestrial and satellite channels. Flick between them and you'll see the satellite picture arrive a moment later than the one broadcast from the ground - the signal has to travel roughly 36,000 km up to a geostationary satellite and 36,000 km back down again.

Broadcasting uses "unreliable" protocols, so if messages are held up they are assumed missing in action, and your picture will break up. TCP, however, is a "reliable" protocol: if messages are held up you have to wait for them to arrive, so your experience is that the connection appears to slow down.
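
To make the distinction concrete, here's a minimal Python sketch of the two styles (example.com and port 9999 are just placeholders, not real endpoints you should care about):

    import socket

    # UDP: "unreliable" - the datagram is sent exactly once, with no
    # acknowledgement and no retransmission. If it's lost in transit,
    # it's gone, which is why a broadcast picture breaks up rather
    # than pauses.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("example.com", 9999))  # fire and forget
    udp.close()

    # TCP: "reliable" - every byte is acknowledged and lost segments
    # are retransmitted, so the receiver waits for held-up data
    # (the stream slows down) instead of losing it.
    tcp = socket.create_connection(("example.com", 80), timeout=5)
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(128))  # blocks until the (acknowledged) reply arrives
    tcp.close()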

Now imagine you're a computer sending a message to the other side of the world: even at the full speed of light, a "ping" (there and back) from here in the UK to a computer in Australia takes over 100 milliseconds.
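
Here's that back-of-the-envelope calculation; the 17,000 km figure is my rough estimate of the UK-to-Australia great-circle distance:

    # Best-case round trip ("ping") from the UK to Australia,
    # assuming the signal travels at the speed of light in a vacuum.
    SPEED_OF_LIGHT_KM_S = 300_000   # ~300,000 km/s
    UK_TO_AUSTRALIA_KM = 17_000     # rough great-circle distance

    round_trip_s = (2 * UK_TO_AUSTRALIA_KM) / SPEED_OF_LIGHT_KM_S
    print(f"Best-case round trip: {round_trip_s * 1000:.0f} ms")  # -> 113 ms

In practice the signal travels through optical fibre at roughly two-thirds of that speed, over a far-from-straight route, which is why the real numbers below are quite a bit higher.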

Try this!... Ping a well known server in Australia:

> ping wwf.org.au

...and a well known server here in the UK:

> ping direct.gov.uk

Depending upon where you are located, you'll probably see very different times. I'm seeing ping times of between 340 and 360 ms to Australia, and just 14 to 21 ms pinging locally (from Scotland to London).
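
If ICMP ping is blocked on your network, you can get a rough equivalent by timing a TCP handshake instead. A minimal Python sketch, assuming both hosts answer on port 80:

    import socket
    import time

    def tcp_rtt_ms(host, port=80):
        """Time a TCP connection - a rough stand-in for ping.
        Note this also includes the time for the DNS lookup."""
        start = time.perf_counter()
        conn = socket.create_connection((host, port), timeout=10)
        elapsed_ms = (time.perf_counter() - start) * 1000
        conn.close()
        return elapsed_ms

    for host in ("wwf.org.au", "direct.gov.uk"):
        print(f"{host}: {tcp_rtt_ms(host):.0f} ms")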

So what does this mean to me as a developer? Well, if my users are in London and my site is hosted in Australia, it's going to take a long time to load: every request pays that round trip before the first byte even arrives, and a typical page involves many requests.

The most common way to mitigate this problem is to keep message sizes as small as possible - compress your graphics, and so on. However, for rich content like large Flash or Silverlight files, or streaming video, this is often impossible. An alternative is to put your content as close as possible, in network terms, to your end user - which is exactly what a CDN like CloudFront does.
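
On the message-size point, it's striking how much difference compression alone makes for text content. A quick illustration with Python's standard gzip module (the payload is an artificial stand-in for a typical HTML response):

    import gzip

    # Markup compresses very well because it is highly repetitive.
    payload = b"<div class='product'><span class='price'></span></div>\n" * 500

    compressed = gzip.compress(payload)
    print(f"original:   {len(payload):>6} bytes")
    print(f"compressed: {len(compressed):>6} bytes")  # typically a 90%+ saving here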

Whatever your application does, there are some important design decisions to be made in terms of how to partition, deploy and manage it to ensure an optimum experience for end users.

The most common design practices to work around these 2 assumptions are...

1. Optimise your message sizes carefully.
2. Compress any large content.
3. Make a clear separation between types of content, so that your application can be partitioned either logically or geographically if necessary.

At a basic level you might separate static from dynamically generated content, but you could also partition based on expected usage, data size, or resource locale, for example.
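
As a sketch of what that separation might look like in code - cdn.example.com and the idea of routing purely by file extension are illustrative, not a recipe:

    # Hypothetical helper: send static assets to a CDN edge host,
    # leave dynamically generated pages on the origin servers.
    STATIC_EXTENSIONS = (".css", ".js", ".png", ".jpg", ".swf", ".flv")
    CDN_HOST = "http://cdn.example.com"     # e.g. a CloudFront distribution
    ORIGIN_HOST = "http://www.example.com"  # the application servers

    def asset_url(path: str) -> str:
        """Return the host the browser should fetch this path from."""
        if path.lower().endswith(STATIC_EXTENSIONS):
            return CDN_HOST + path   # served from an edge close to the user
        return ORIGIN_HOST + path    # must come from the origin

    print(asset_url("/media/intro.swf"))  # -> http://cdn.example.com/media/intro.swf
    print(asset_url("/basket/checkout")) # -> http://www.example.com/basket/checkout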

Whatever you need to take into account for your specific application, the one rule to rule them all is:
"Never assume that the network is fast enough by default."
