Thursday, 30 July 2009

Do it Again!

This is the last in my mini-series on the fallacies of network computing, before I move on to the practical side of dealing with them.

The final fallacy to address states that "the network is reliable". As you know if you've ever used AOL in its, erm... heyday? - it isn't. It may go a bit faster now, but it's the same old network it always was.

What kinds of operations can be safely performed on something so unreliable? The answer is: "Idempotent" ones. Gosh, Latin! How old is this internet?

idem - meaning "the same". You'll see this abbreviated to "id." in legal papers, as lawyers do tend to repeat themselves a lot, and are by far the biggest fans of dead languages (outwith Delphi programming circles).

potent - meaning "power".

So: "having the same power". But what does that mean? Essentially it's a big fancy word for something that's very simple (in theory at least).

Idempotent operations are essentially repeatable: performing one of them once, or a hundred times, leaves the system in the same state.

Because of the way the net is built, messages between endpoints on the web may be sent and received more than once, so it's important that processing a message multiple times does not cause additional side effects. The clichéd example of this is the browser refresh, and while that is an example of idempotence, I find it much clearer to explain by showing what idempotence is not.

Voting systems are a common example of failing to take idempotence into account.

Even though the "replay attack" is one of the oldest forms of internet abuse, it is still surprisingly common to come across sites that contain polls which are easily rigged. Essentially the web page presents "Vote for A" and "Vote for B" as functions. You click "Vote for B" and a counter is incremented. If you fire the request again, the counter is incremented again.

Write...

for (int i = 0; i < 10000; i++) { VoteForB(); }

... and zap! In a fraction of a second, "B" becomes the clear winner.

Increment() IS NOT an idempotent function. It has side effects on application state, and repeating it changes that state every time.
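
One way to fix the poll - and it is only a sketch, with invented names - is to record each vote as a fact about the voter, rather than blindly incrementing a counter. Replaying the same request then has no further effect:

using System.Collections.Generic;
using System.Linq;

public class Poll
{
    private readonly Dictionary<string, string> votesByVoter = new Dictionary<string, string>();

    // Idempotent: calling this once or 10,000 times with the same voterId and
    // choice leaves the poll in exactly the same state.
    public void RecordVote(string voterId, string choice)
    {
        votesByVoter[voterId] = choice;
    }

    // The count is derived from the recorded facts; it is never incremented directly.
    public int CountVotesFor(string choice)
    {
        return votesByVoter.Values.Count(v => v == choice);
    }
}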

An idempotent operation is repeatable: the first call may do work, but repeat calls add nothing to it.

So, a Toggle() function is not idempotent. A Square() function, or the evaluation of a regular expression against a given input, is idempotent: it doesn't change any value given to it and has no side effects.

A * A will always equal A squared.

You can of course deconstruct many non-idempotent operations into a number of idempotent equivalents, e.g. SwitchOff() and SwitchOn() functions rather than Toggle().

Calling SwitchOff() SwitchOff() SwitchOff() SwitchOff() SwitchOff() SwitchOff() will always result in the Switch being Off.
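
As a rough sketch (a hypothetical LightSwitch class, not any particular API):

public class LightSwitch
{
    public bool IsOn { get; private set; }

    public void Toggle()    { IsOn = !IsOn; }  // NOT idempotent: depends on current state
    public void SwitchOn()  { IsOn = true;  }  // idempotent: sets an absolute state
    public void SwitchOff() { IsOn = false; }  // idempotent: sets an absolute state
}

// Toggle(); Toggle(); Toggle();          => the result depends on how many calls arrived
// SwitchOff(); SwitchOff(); SwitchOff(); => the switch is Off, however many arrived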

Idempotence is often confused with method safety. A "Safe" method has absolutely no side effects; a read operation, for example, is Safe. A Delete() operation is not "Safe", but it can be made idempotent, i.e. repeatable without any further effect.

This type of fault tolerance is actually designed into HTTP itself. See:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.2

Fundamentally, the RESTful style of the web is different from the CRUD style of most database-oriented systems.

Under RESTful systems using HTTP, calling PUT multiple times will result in the resource existing at a particular address. Calling DELETE multiple times will result in the item no longer existing.

You can call PUT PUT PUT or DELETE DELETE DELETE and the end result will be the same.

The comparable operations using something like SQL will result in errors beyond their first call: "The record already exists" or "The record does not exist", respectively.
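
In code, the difference might look something like this - a sketch only, with invented class and method names:

using System.Collections.Generic;

public class ResourceStore
{
    private readonly Dictionary<string, string> resources = new Dictionary<string, string>();

    // RESTful/idempotent: "make sure this resource exists at this address".
    public void Put(string address, string content)
    {
        resources[address] = content;    // create or replace
    }

    // RESTful/idempotent: "make sure this resource does not exist".
    public void Delete(string address)
    {
        resources.Remove(address);       // removing a missing key is simply a no-op
    }

    // CRUD-style: the second call fails with "the record already exists".
    public void Insert(string address, string content)
    {
        resources.Add(address, content); // throws on a duplicate key
    }

    // CRUD-style: the second call fails with "the record does not exist".
    public void CrudDelete(string address)
    {
        if (!resources.Remove(address))
            throw new KeyNotFoundException("The record does not exist: " + address);
    }
}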

On the web, you ignore idempotence at your peril. Famously, Google Web Accelerator wreaked havoc and was quickly withdrawn due to web developers widely ignoring idempotence and method safety.

Almost overnight, millions of web developers learned about idempotence the hard way.

While idempotence seems fairly simple on the face of it, the reality can quickly become very, very complicated, particularly since, in most distributed systems, you need to take into account the idea of sequential idempotence. If you're looking at a system with multiple workers of some sort - threads, VM instances, or WorkerRoles in Azure - and you're using lock-free structures (as you should be), the possibility exists that you may have to process the same message not just more than once, but also out of order.

For example, suppose that someone updates a record while someone else simultaneously deletes it.
Here, I have an idempotent PUT and an idempotent DELETE verb. I can call PUT PUT PUT or DELETE DELETE DELETE, and have the same end result. But now I have an extra problem: I require two independent operations to have combined idempotence.

The following sequences (and even all of them run in parallel)

PUT DELETE
PUT PUT DELETE PUT DELETE PUT
DELETE PUT


should all have the same end result. What is that end result?

Now that you can see how idempotence works, you can think about how to solve this. If you want a clue, see this article:
http://www.spamdailynews.com/publish/Google_subpoenaed_to_turn_over_deleted_Gmail_messages.shtml
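
One possible shape of an answer - and this is only a sketch, with invented names - is hinted at by that article: never physically remove anything. Record every PUT and DELETE with a timestamp, treat DELETE as writing a "tombstone", and let the operation with the latest timestamp win. Any ordering, and any number of replays, of the same operations then converges on the same end state.

using System;
using System.Collections.Generic;

public class LastWriterWinsStore
{
    private class Entry
    {
        public string Content;
        public DateTime Timestamp;
        public bool Deleted;    // the "tombstone" flag
    }

    private readonly Dictionary<string, Entry> entries = new Dictionary<string, Entry>();

    public void Put(string key, string content, DateTime timestamp)
    {
        Apply(key, new Entry { Content = content, Timestamp = timestamp, Deleted = false });
    }

    public void Delete(string key, DateTime timestamp)
    {
        Apply(key, new Entry { Content = null, Timestamp = timestamp, Deleted = true });
    }

    // Idempotent and order-tolerant: an older operation can never overwrite a
    // newer one, and replaying any operation changes nothing.
    private void Apply(string key, Entry incoming)
    {
        Entry existing;
        if (!entries.TryGetValue(key, out existing) || incoming.Timestamp >= existing.Timestamp)
        {
            entries[key] = incoming;
        }
    }

    public string Get(string key)
    {
        Entry e;
        return entries.TryGetValue(key, out e) && !e.Deleted ? e.Content : null;
    }
}
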
The important thing to remember is to write code that is repeatable: running it again causes no further side effects. The result is that you enable fault tolerance, using the simple policy of "If you're not sure it worked, just do it again".

Thursday, 23 July 2009

For Sale: One slightly worn RDBMS

Web and cloud apps have to be able to scale as close to linearly as possible, in a hostile environment where network topology and the number of users can fluctuate wildly. Contrast this with a desktop application, which just has to handle a single user, or with a client/server app, which may have to handle a couple of hundred users, usually on the same network. If millions of users start using our distributed system, the experience of each should be no different from what it would be if only 5 or 6 people were using it.

While web developers are used to dealing with concurrency, either consciously or just by knowing that calling Application.Lock() from VBScript will cripple their app, it is also becoming a wider issue for desktop developers, as manufacturers squeeze more and more cores onto their chips.

The arch-enemy of concurrency is shared data. Any data that two parties have to update can cause concurrency problems. This is actually made slightly simpler in web programming by virtue of HTTP being stateless, and it is one reason why the functional style of programming seen on the web is becoming more popular in other kinds of applications.

If we can maintain the qualities of statelessness and atomicity in our operations, we are already well on our way to making our code safe to run concurrently. It's only when you attempt to force a stateful model that you run into problems.

If shared data is the enemy of concurrency, locks are its atomic bombs. Locks work by serialising access to shared data, so that only one process / thread / fibre can write it at any one time. Everyone else just has to wait in line. Serialisation is the arch-enemy of scalability.
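
To make that concrete, here's a tiny (hypothetical) example of a lock doing exactly that:

public class SharedCounter
{
    private readonly object gate = new object();
    private long total;

    public void Add(long amount)
    {
        lock (gate)           // only one thread at a time gets past this line...
        {
            total += amount;  // ...so under load, everyone else queues up right here
        }
    }
}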

So, NEVER hold editable shared data in your application layers. I will of course break this absolute rule in a future post for grandstanding purposes, but I'm allowed to - It's my blog! And of course, it's to illustrate a good point.

This issue doesn't tend to be a problem for those accustomed to web programming, since most web frameworks either enforce, or heavily imply, avoidance of global state. In most real systems today, shared state is handled by a backend RDBMS, such as Oracle or SQL Server, which offers a gigantic improvement over file-based DBs such as Access or Paradox. These do a very good job, at the moment, of handling medium to large scale apps. However, if we scale to the web, where fast global availability is required, then "Houston, we have a problem." We realise all we've done is offload the problem further down the chain. Because these client-server style databases have transactions, and therefore locks, embedded firmly at the heart of their architecture, they will not scale globally. You can partition them; you can put them on ever larger machines, or even clusters of machines; but they will always require a global lock to perform a commit.

So if RDBMSs such as SQL Server don't scale well, then what does?

Whilst still a drop in the ocean in terms of the commercial enterprise database market, databases based on distributed hash tables, such as CouchDB, Google's BigTable, Voldemort, AWS SimpleDB and similar, are gaining a lot of traction.

Microsoft's forthcoming Azure Table Storage falls into this category too, unlike their slightly clumsy and poorly positioned "old man in a new hat" - SQL Azure Services.

These types of database characteristically emphasise decentralisation, scalability and fault tolerance, with nodes continuously joining, leaving, and failing within the system.

Sound familiar? Well, these are all databases designed to work not only on the web, but in accordance with the fundamental architecture of the web. They emphasise availability over transactional consistency, which can cause serious headaches for people stuck in an RDBMS world view. (The idea that the database might sometimes return a stale result is horrifying to them.)

Scalable transactions are EXTREMELY complex (actually impossible, as previously discussed) in a distributed system, so support for transactions in distributed databases is rare or limited. In Amazon's SimpleDB or Azure Table Storage, a "BatchPut" or an "EntityGroupUpdate" respectively does succeed or fail as a unit. However, these are subject to limitations well below the expectations of developers who are used to developing against RDBMS systems: maximum request sizes of 4MB, or a cap on the number of items in a transaction (25 for SDB), seem almost laughable. But these distributed, document-style databases are a fundamentally different type of application from the RDBMS.

Unlike SQL Server or Oracle, however, they are reliable, cheap, and MASSIVELY scalable. Therefore, while they account for a tiny fraction of the database market at the moment, they do offer a very serious cost-based competitive advantage over large-scale installations of RDBMS systems. I will write about lock-free data management patterns and post-relational DBs in a future post, but once again, for now the key takeaway for concurrency is:

Locks are mindblowingly expensive in a distributed system.

And they get more expensive the more they are used. If you really MUST use them, stand well back and wear very dark sunglasses.

Next and last up: Idempotence

Monday, 13 July 2009

"It Worked."

In my last post I briefly discussed statelessness. The key point about statelessness is that between HTTP calls, whether to fetch web pages or to access web services or other HTTP-based services, each part of the system should be capable of being restarted without causing the others to fail.

Ideally, services should require no state. There are ways of maintaining a "session state" in many web platforms, but these should very much be seen as an optimisation or workaround, rather than a first port of call. I will look at the various session state patterns in a later post.

One side effect of statelessness is that we do not really have the context of a conversation available to us, so all the state required to complete a transaction must be available during a single request/response exchange, with no dependencies outside it. This characteristic is referred to as "Atomicity", and it normally produces a particularly interesting expression on the face of a developer who realises it for the first time. Usually that's the result of attempting to use a component that has adopted what I would call the "Barefaced Liar" pattern - all seemed fine when the code ran locally.

This is where a component is designed to expose a simple interface, common in UI development, but actually impossible to guarantee in a distributed environment. There are lots of examples of this, but often it's a Distributed Transaction Coordinator of some sort - although session state providers, and WCF's "Bi-Directional" channel, can cause similar nauseating effects.

So... we have no state available here in which to hold stateful conversations.

The mother of all conversations, in software circles at least, is the "2-Phase Commit". This protocol ensures that when I ask for work to be done by, say, three co-operating participants, either all of it is done, or none of it is. The canonical example is the bank transfer. We want money to be taken out of one account and put into another, but if one of those actions fails, we don't want to quietly undo just that one and carry on as if nothing untoward had happened. We don't want both (or neither!) of the accounts to have the money.

It essentially works like this. A coordinator asks each participant to do work and waits until it has received a confirmation reply from all participants.

The participants do their work up to the point where they will be asked to commit to it. Each remembers how to undo what they've done so far. They then reply with an agreement to commit their work if the work was done, or a "failed" message if they could not perform the work.

If the coordinator received an agreement message from all participants, then it sends a commit message to them all, asking them to commit their work. On the other hand, if the coordinator received a fail message from any of the participants, then it sends a "rollback" message to all, asking them to undo their work.

Then, it waits again for responses from all participants, confirming they've committed or rolled back. Only when all have confirmed a successful commit does the co-ordinator inform whoever is interested that the transaction has succeeded.
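
A rough sketch of the coordinator's side of that exchange, with hypothetical participant interfaces (a real transaction manager must also cope with timeouts, lost messages, and the coordinator itself crashing):

using System.Collections.Generic;

public interface IParticipant
{
    bool Prepare();    // do the work, remember how to undo it, vote yes or no
    void Commit();     // make the work permanent
    void Rollback();   // undo the work
}

public class TwoPhaseCommitCoordinator
{
    // Returns true if the transaction committed, false if it was rolled back.
    public bool Run(IList<IParticipant> participants)
    {
        var prepared = new List<IParticipant>();
        bool allAgreed = true;

        // Phase one: ask every participant to do its work and vote.
        foreach (var participant in participants)
        {
            if (participant.Prepare())
            {
                prepared.Add(participant);
            }
            else
            {
                allAgreed = false;
                break;
            }
        }

        // Phase two: tell everyone who prepared either to commit or to undo.
        foreach (var participant in prepared)
        {
            if (allAgreed) participant.Commit();
            else           participant.Rollback();
        }

        return allAgreed;
    }
}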

This is a staple pattern of client/server systems, particularly of the RDBMS kind. However, it is not guaranteed to work in a stateless system: conversations require context, and so are not stateless. It also breaks our previous advice about maintaining disposability - you cannot invisibly, and without side effects, dispose of a participant in a transaction that is waiting to commit.

So... Operations exposed on the web should all be Atomic. They should fail or pass as a single unit.

To a client/server developer this of course seems at first sight to be monumentally impractical in many real world scenarios. But in fact most activities of any real-world value cannot use transactions as we know them.

In a standard web services integration, the BookFlight(), BookHotel() and BookCar() methods in a BookHoliday() operation may well be run by completely different organisations. Each of those organisations is likely to have different policies and data platforms.

I've always found it funny that the poster boy for the 2-phase commit, The Bank Account Transfer, in reality cannot - and does not - use the 2-phase commit protocol.

So what does it use? Well, the answer lies in a fairly pungent concoction of asynchronous co-ordination, reconciliation and compensation that is definitely an acquired taste. I will come back to look at specific ingredients you can use in a later post, but for now the important thing to remember is...

At the service boundary, operations should fail or pass in their entirety.
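
To give a flavour of what that concoction can look like - and it is only a flavour, a sketch with hypothetical service interfaces whose Cancel... methods are invented for illustration - each booking is atomic at its own service boundary, and a failure later in the sequence is handled by compensating the steps that already succeeded, rather than by rolling back a distributed transaction:

public interface IFlightService { string BookFlight(); void CancelFlight(string reference); }
public interface IHotelService  { string BookHotel();  void CancelHotel(string reference);  }
public interface ICarService    { string BookCar();    void CancelCar(string reference);    }

public class HolidayBooker
{
    private readonly IFlightService flights;
    private readonly IHotelService hotels;
    private readonly ICarService cars;

    public HolidayBooker(IFlightService flights, IHotelService hotels, ICarService cars)
    {
        this.flights = flights;
        this.hotels = hotels;
        this.cars = cars;
    }

    public bool BookHoliday()
    {
        string flightRef = flights.BookFlight();   // atomic at its own boundary

        string hotelRef;
        try { hotelRef = hotels.BookHotel(); }
        catch
        {
            flights.CancelFlight(flightRef);       // compensate the step that succeeded
            return false;
        }

        try { cars.BookCar(); }
        catch
        {
            hotels.CancelHotel(hotelRef);          // compensate in reverse order
            flights.CancelFlight(flightRef);
            return false;
        }

        return true;
    }
}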

Next up: Concurrency

Thursday, 2 July 2009

The state I am in

One of the fallacies of distributed computing states that "Network topology doesn't change."

Well, yes it does, particularly on the internet. Pieces of hardware and software are constantly being added, upgraded and removed. The network is in a constant state of flux. So how do you perform reliable operations in such a volatile environment?

Well, the answer lies in part with the idea of statelessness. HTTP is what’s known as a stateless protocol. For those coming from a non-distributed platform background, statelessness is one of the most misunderstood aspects of web development.

Partly this is due to attempts to hide statelessness by bolting on some - often clunky - state management system. This is because statelessness is often viewed as a kind of deficiency. Framework developers all over the world say, "Oh yuk. Statelessness. This is soOOo difficult to work with. I want my developers to be able to put a button on a form and say OnClick => SayHello(). It's 2009! I don't want to have to worry about state issues on a button click!".

If you're dealing with a Windows Forms style of application, your button is more or less the same "object" that is shown on the user's screen; but in many web frameworks, a "button" and its on-screen representation in a user's browser may be thousands of miles apart. The abstraction of "a button" assumes a single object, with a state - but this is not the reality. What appears to be one button is in fact two: a logical one on a server, and a UI representation in a browser, with a "click" message being sent between them. The concept of "a button" is a leaky abstraction, and the reason it's leaky is that it does not take into account the network volatility that is a standard feature of the web.

So, what do I mean by statelessness?

When you use the web, it enlists the resources required to fulfil your request dynamically. Once the request has been completed, the various physical components that took part are disconnected, and neither know nor care anything for each other anymore. How sad :-(

Let's look at it in terms of a conversation:

Customer: Hello
Teller: Hello, what can I do for you today?
Customer: I'd like to know my bank balance please.
Teller: Can I have your account number please?
Customer: It's 09-F9-11-02-9D-74-E3-5B-D8-41-56-C5-63-56-88-C0 ;-)
Teller: Well. Your balance is only $1,000,000.
Customer: Ok, can I transfer all of it to my Swiss bank account please?
Teller: Yeah sure, what's the account number?
Customer: 12345678.
Teller: Ok, I'll just do that now for you. Would you like a receipt?
Customer: Yes please.
Teller: Here you go => Receipt Provided.
Customer: Bye.
Teller: Bye.

This conversation contains state. Both the Customer and the Teller must know the context of the messages being exchanged.

The stateless version of this would go more like...

Customer: Hello, I'd like to know the balance for account 09-F9-11-02-9D-74-E3-5B-D8-41-56-C5-63-56-88-C0.
Teller: It's $1,000,000.

Customer: Could you transfer $1,000,000 from account 09-F9-11-02-9D-74-E3-5B-D8-41-56-C5-63-56-88-C0 to Swiss bank account number 12345678 and provide a receipt, please?
Teller: Yup. Here you go => Receipt Provided.

The end results are the same. However, the first example uses "fine-grained operations": the Customer and the Teller need to know where they are in the conversation to understand the context of each statement. In the second, each individual part of the conversation contains all of the information required to perform the requested task.
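
In interface terms, the difference might look something like this (a hypothetical sketch, not a real banking API):

// Fine-grained and stateful: the server has to remember where each caller is
// in the conversation between calls.
public interface IChattyTeller
{
    void StartConversation();
    void GiveAccountNumber(string accountNumber);
    decimal AskForBalance();
    void AskToTransfer(decimal amount);
    void GiveDestinationAccount(string accountNumber);
    string AskForReceipt();
}

// Coarse-grained and stateless: every request carries everything needed to
// service it, so any server - or a freshly restarted one - can handle any call.
public interface IStatelessTeller
{
    decimal GetBalance(string accountNumber);
    string Transfer(string fromAccount, string toAccount, decimal amount);  // returns a receipt
}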

If we think of these as client and server message exchanges, then in the second example it doesn't matter if the server (teller) is restarted, reinstalled or is blown-up and replaced between the two requests. This is invisible to the end user. You would never be aware of the swap.

However, in the first example, if the server was restarted it would "forget" where you were in the conversation.

Customer: Yes please.
Teller: Sorry? What?

...or in pseudocode...

Client: Yes please.
Server: Sorry? What? WTF? Ka-Boom!

This is a fundamental difference between programming in a “stateful” way, and distributed programming.

Understanding statelessness, its implications, and how to manage state in a stateless environment is a core skill for a web developer. Without understanding the implications of the various state management systems, you simply cannot develop large-scale applications. In a web application there is ALWAYS a client, a server, and a potentially lengthy request/response message exchange between them.

Most programmers have a fairly firm grip on object-oriented programming, and one of the keys to this is the packaging together of state and behaviour. As a result, they get locked into a system of thinking that says state and behaviour should always go together - hence the popularity of technologies such as CORBA, DCOM, and Remoting. These technologies make it easy for people who are used to dealing with "objects" to carry on doing so remotely, without worrying directly about the messaging process that goes on behind the scenes.

However, you often hear complaints about the scalability of these types of remote object technologies, and the reason often involves assumptions about state. Remoting interfaces are frequently generated automatically by whatever remoting framework you are using, and the objects involved are very often deliberately designed to be re-usable (although not necessarily performant) in both distributed and non-distributed environments. The result is chatty, context-bound, fine-grained interfaces, which have a hard time keeping up when a large number of conversations are going on at once.

Ceteris paribus, if you imagine the two conversations above in a remote system where each message exchange took a second, the first would take 7 seconds, the second 2.

What's also interesting is that the two message exchanges in the second conversation can be made either way round or, if required, in parallel. This of course only works if you are able to make some assumptions about the availability of cash - for example, that you have an overdraft facility of $1,000,000 which you are not currently using. I will come back to this when I discuss Concurrency and Data Consistency, but it is perfectly reasonable to run these operations in parallel. This, along with the rapid growth in multi-core and cloud computing, is one reason for the recent resurgence of interest in functional/stateless programming languages such as Erlang and F#.
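
As a sketch of that idea, reusing the hypothetical IStatelessTeller interface from above (and using the Task Parallel Library purely for brevity - any threading approach would do):

using System;
using System.Threading.Tasks;

public static class ParallelBanking
{
    public static void Run(IStatelessTeller teller)
    {
        const string account = "09-F9-11-02-9D-74-E3-5B-D8-41-56-C5-63-56-88-C0";
        decimal balance = 0;
        string receipt = null;

        // Each request is self-contained, so neither call needs to wait for the
        // other (given the overdraft assumption above).
        Parallel.Invoke(
            () => balance = teller.GetBalance(account),
            () => receipt = teller.Transfer(account, "12345678", 1000000m));

        Console.WriteLine("Balance: {0}, Receipt: {1}", balance, receipt);
    }
}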

The important thing to remember about statelessness is that it is all about maintaining a high degree of disposability. You should be able to throw everything away between message exchanges without a user noticing a server restart. Doing this will enable your application to handle server restarts and changes in network topology gracefully.

Next up: Atomicity.