As I said the other day, I'm currently in the process of going through my book manuscript [ADDDP] and one of the tasks is to delete sections that just don't fit for some reason. It's a bit painful of course, but "kill your darlings" is probably good advice here.
However, instead of just throwing these sections away, I thought I might be able to use them for something else, so here's a part that I wrote at the very beginning of the book, eighteen months ago. It's called "Computing by Bricks" and it's about a certain style of distribution that might interest you if you need to cope with extreme load and therefore need extreme scalability. Here goes.
When you look at a system under huge load, you can normally see that its different parts have different characteristics. That knowledge can be used very effectively when designing and deploying the system.
Assume you are about to build a very large online shopping application. You will probably need some browsing functionality so that the users can browse the catalogue of products. That information is normally very close to read-only, which means that the cluster taking care of the catalogue browsing can share nothing. Each server lives a life of its own, having all the information it needs. Each server is not only a web server and an application server but also a database server, and it can and should also use extensive caching (see Figure 1). If one machine burns out, it doesn't take long to add a new machine to the cluster as an exact replica of the others. If the load is too high, just add more replicas.
Figure 1: The catalogue browsing
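To make the idea concrete, here's a minimal Python sketch of what the read path on one such self-contained server might look like. SQLite stands in for the server's local copy of the catalogue, and all the names are made up for illustration:

```python
import sqlite3

# Minimal sketch of the read path on one shared-nothing catalogue server.
# Each server owns a local copy of the catalogue database and keeps a
# simple in-process cache in front of it; names here are illustrative.

local_db = sqlite3.connect(":memory:")  # stand-in for the server's own catalogue copy
local_db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL)")
local_db.execute("INSERT INTO products VALUES (1, 'Book', 29.90)")

cache = {}  # extensive caching is cheap and safe when the data is near read-only

def get_product(product_id):
    if product_id in cache:            # cache hit: no database access at all
        return cache[product_id]
    row = local_db.execute(
        "SELECT id, name, price FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    cache[product_id] = row            # aggressive caching: the data rarely changes
    return row

print(get_product(1))  # first call reads the local database, later calls hit the cache
```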
The biggest problem with this part of the system is dealing with catalogue updates. Each user should be able to find the same information no matter which server they come to at the next refresh. Consequently, the updates must go to all servers at the same time. One solution to the problem is using generations of the catalogue, so that all servers get the update but the new generation isn't valid until a certain point in time.
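A minimal sketch of that generation trick, assuming each server simply keeps a list of pre-distributed catalogue snapshots tagged with an agreed switch-over time; the names and values are illustrative only:

```python
import time

# Sketch of catalogue "generations": every server receives new generations
# ahead of time, but a generation only becomes the active one once its
# agreed switch-over time has passed. The list is kept sorted by that time.

generations = [
    # (valid_from_epoch_seconds, catalogue_snapshot)
    (0,                {"book": 29.90}),
    (time.time() + 60, {"book": 24.90}),  # pre-distributed, valid in one minute
]

def current_catalogue(now=None):
    now = now if now is not None else time.time()
    # Pick the newest generation whose switch-over time has passed. All
    # servers use the same timestamps, so they all flip at the same moment.
    valid = [snapshot for valid_from, snapshot in generations if valid_from <= now]
    return valid[-1]

print(current_catalogue())                    # still the old generation
print(current_catalogue(time.time() + 3600))  # the new generation after the switch
```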
The second part of the system deals with the shopping carts. Here, too, there is a cluster of servers, but in this case the database for the shopping carts is a shared resource, once again so that the users can hit any of the servers without noticing any difference (see Figure 2). It might even be that, for reasons of availability, the database runs on a failover cluster, but it's not very likely.
Figure 2: The shopping carts
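As a sketch of how this part differs from the first, the cart state lives in one shared database rather than on each server. A file-backed SQLite database stands in for that shared resource here, and the schema is an assumption for illustration only:

```python
import sqlite3
import time

# Sketch of the cart cluster's shared resource: every web/application
# server talks to the same cart database, so a user can hit any server
# and still see the same cart. SQLite stands in for the shared database.

carts = sqlite3.connect("shared_carts.db")
carts.execute("""
    CREATE TABLE IF NOT EXISTS cart_items (
        cart_id      TEXT,
        product_id   INTEGER,
        quantity     INTEGER,
        last_updated REAL
    )
""")

def add_to_cart(cart_id, product_id, quantity):
    # Any server in the cluster can run this; the state lives in the
    # shared database, not on the server that happened to take the request.
    carts.execute(
        "INSERT INTO cart_items VALUES (?, ?, ?, ?)",
        (cart_id, product_id, quantity, time.time()),
    )
    carts.commit()

add_to_cart("cart-42", 1, 2)
```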
At certain time intervals, old shopping carts that were abandoned without creating an order are emptied. This is a pretty heavy operation, but it only affects the second part of the system, that is, the shopping cart module.
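A sketch of that cleanup job, assuming the cart_items table from the sketch above with a last_updated timestamp per row; the abandonment window is a made-up example value:

```python
import sqlite3
import time

ABANDONED_AFTER_SECONDS = 2 * 24 * 3600  # e.g. two days without activity

def empty_abandoned_carts(db_path="shared_carts.db"):
    db = sqlite3.connect(db_path)
    # Ensure the table exists so the sketch also runs standalone.
    db.execute("""
        CREATE TABLE IF NOT EXISTS cart_items (
            cart_id TEXT, product_id INTEGER, quantity INTEGER, last_updated REAL
        )
    """)
    cutoff = time.time() - ABANDONED_AFTER_SECONDS
    # Heavy for the cart database, but it never touches the other parts of
    # the system: browsing, customer data and order processing go on as usual.
    db.execute("DELETE FROM cart_items WHERE last_updated < ?", (cutoff,))
    db.commit()
    db.close()

empty_abandoned_carts()
```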
The third part of the system deals with persistent information such as customer data. Here the load is very low, and even for very large applications we might be able to run with a single web server/application server. On the other hand, it's quite likely that the database lives in a failover cluster (see Figure 3).
Figure 3: The basic data
Finally, the fourth part of the system is for filling the final orders. Compared to the first part of the system, the load here is typically very low. It's usually fine to use a message queue for the orders, which are pushed from the shopping carts and then popped, one by one, as soon as possible. That way, this part is stabilized and can at the same time execute on very moderate hardware (see Figure 4).
Figure 4: Processing orders
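A small sketch of that push/pop flow, with an in-process queue.Queue standing in for a real message queue product; the function names are made up for illustration:

```python
import queue
import threading
import time

# Sketch of the order-filling part: the cart servers push finished orders
# onto a queue, and a single worker on modest hardware pops them off one
# by one. An in-process queue stands in for a real message queue product.

orders = queue.Queue()

def checkout(cart_id):
    # Called by the shopping cart part: pushing is cheap and never blocks
    # the user, no matter how slowly orders are actually being filled.
    orders.put({"cart_id": cart_id, "placed_at": time.time()})

def order_filler():
    # The queue evens out load spikes: the worker drains it at its own
    # pace, which is why this part gets by on very moderate hardware.
    while True:
        order = orders.get()
        print("filling order for cart", order["cart_id"])
        orders.task_done()

threading.Thread(target=order_filler, daemon=True).start()
checkout("cart-42")
orders.join()  # wait until the worker has processed the pushed order
```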
Thinking like this, you can grow your application to handle extreme load in a very flexible manner. Sure, the administrative burden is much higher than with a single box, but the scalability can also be extremely nice, especially from a price/performance perspective. Do we need this for most systems? No. For some systems? Yes.