
Ruby on Rails, Io, Lisp, JavaScript, Dynamic Languages, Prototype-based programming and more...


Monday, July 09, 2007

Advanced Concepts in Ruby on Rails Hosting Part IV

Over the past few weeks, we have discussed the transition from a standard reverse proxy (represented by a single manager handing documents, one at a time as they came in, to various translators) to a system I call drproxy, in which clients running on each application server buffer requests and hand them to instances of the application as soon as those instances are ready (represented by office managers buffering documents for their translators).

It seems as though we have created a pretty streamlined system over these weeks, but there is still a bottleneck. Most sites will never hit it, but those that do will feel real pain. That bottleneck is the request server (represented by the manager who hands documents to the office managers). The request server is very fast and can take many requests per second without flinching; however, it is a single point of failure. If it goes down, nothing gets through. In the analogy, if the main manager stays home sick, no documents get translated that day. Adding a second request server also doubles the number of incoming requests the system can accept. Drproxy is built to run multiple request servers easily, both for load distribution and for redundancy.
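To make the idea concrete, here is a minimal Ruby sketch of how a buffering client might fail over between several request servers. The server list, port, and fetch_work method are hypothetical names for illustration, not drproxy's actual API.

    require "socket"

    # Hypothetical list of request servers the buffering client knows about.
    REQUEST_SERVERS = [["10.0.0.1", 9000], ["10.0.0.2", 9000]]

    # Try the request servers in random order and use the first one that answers.
    # Losing one server reduces capacity instead of taking the whole site down.
    def fetch_work
      REQUEST_SERVERS.shuffle.each do |host, port|
        begin
          return TCPSocket.new(host, port)  # hand the connection to the local buffer
        rescue SystemCallError
          next  # this request server is down; try another
        end
      end
      nil  # every request server is unreachable
    end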

Drproxy has been forged in the crucible: we have been using it at MOG for over a month now, tweaking it and making improvements. I am working on packaging the software, which will be open source, and by the time we release it, it should work out of the box for most Rails websites. Thanks for following this series, and I really hope you enjoy drproxy.



Monday, July 02, 2007

Advanced Concepts in Ruby on Rails Hosting Part III

In our discussion of distributing web requests to different servers via the analogy of a translation company, we ended last week with a question. To recap, the analogy compares application servers to translation offices and instances of the application to translators. A manager (the reverse proxy) sits in front of the offices, distributing tasks to each one. Last week we realized that to increase the efficiency of our translators, we could put a manager in each office to buffer requests, so that any individual bottleneck could not hold up the rest of the queue. For distributing web requests, I created a reverse proxy called drproxy to do exactly this.
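As a rough illustration of the office-manager idea (and not drproxy's actual code), here is a small Ruby sketch: one buffered queue per application server, with each application instance pulling the next request the moment it finishes the previous one.

    require "thread"

    requests = Queue.new        # the office manager's inbox of buffered requests
    NUM_INSTANCES = 3           # translators (application instances) in this office

    workers = NUM_INSTANCES.times.map do |i|
      Thread.new do
        while (req = requests.pop)             # block until a request is available
          puts "instance #{i} handled #{req}"  # a real app instance would render a response here
        end
      end
    end

    10.times { |n| requests << "request-#{n}" }
    NUM_INSTANCES.times { requests << nil }  # nil tells each instance to stop
    workers.each(&:join)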

These kinds of systems show up over and over again in the real world. Just this weekend I was in line to order a Polish dog at Costco. There was a single line feeding two servers. I watched one server stuck while a father couldn't get decisions out of his three children, yet the other server's booth kept moving and brought me closer and closer to my Polish dog. You can find similar lines at Nordstrom Rack, Fry's Electronics, any restaurant, and many other places.

One system that works like the less efficient "round-robin" method described a few weeks ago is the checkout lines found in grocery stores. You get in a line praying that the people in front of you don't like to write checks or count out change, because if they take their time, they hold you up. How many times have you chosen what looked like the fastest checkout line, only to watch people in other lines get through faster because of one price check on aisle five? Some grocery stores have started installing self-checkout systems. Whenever given the choice, I go straight to the self-checkout because it is almost always faster. One of the reasons it is faster is that there is a single line for four checkout machines. You could have one person counting change and another waiting on a price check and still have two machines checking people out smoothly. It is amazing that grocery stores have not realized this and implemented better line processing.
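Here is a toy Ruby simulation of that difference, using made-up service times: the same twelve customers are either dealt out round-robin to four independent lines or pulled from one shared line by whichever machine frees up first. In the shared-line case, a slow customer ties up only one machine instead of everyone queued behind them.

    SERVICE_TIMES = [1, 1, 8, 1, 1, 1, 9, 1, 1, 1, 1, 1]  # minutes per customer
    MACHINES = 4

    # Round-robin: customer i is assigned to line i % MACHINES, backlog or not.
    def round_robin_finish(times, machines)
      lines = Array.new(machines, 0)
      times.each_with_index { |t, i| lines[i % machines] += t }
      lines.max
    end

    # Shared queue: the next customer goes to whichever machine frees up first.
    def shared_queue_finish(times, machines)
      lines = Array.new(machines, 0)
      times.each { |t| lines[lines.index(lines.min)] += t }
      lines.max
    end

    puts "round-robin:  #{round_robin_finish(SERVICE_TIMES, MACHINES)} minutes"   # 18
    puts "shared queue: #{shared_queue_finish(SERVICE_TIMES, MACHINES)} minutes"  # 10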

The question I posed at the end of last week was whether there is an even more efficient way to distribute requests. I propose that there is. Here is why: just as any individual translator might end up with a backed-up queue of requests, a whole translation office can become overwhelmed. By pure chance, one office might build up a queue of 100 requests while another sits there queue-less. There are a few ways to tackle this problem, but whichever one you choose, you must know the size of each office's queue at any given time. Given that knowledge, you could choose to hand requests only to the least busy office. I am not a big fan of that approach: it is easy to imagine long runs of documents going to the same office in a row, and I prefer to distribute requests randomly to prevent buildups and attacks on the system. The way I implemented request distribution in drproxy was to pick any office at random, except for the busiest one. If one office builds up a backlog while the others have no requests, no new requests are sent to the busy office until it frees up. This load-balancing scheme works very effectively.
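In Ruby, the selection rule might look something like the sketch below. The Backend struct and queue_size field are illustrative names; drproxy's internals are not published yet, so treat this as the idea rather than the implementation.

    Backend = Struct.new(:host, :queue_size)

    # Pick any backend at random, except the busiest one (when it has a backlog).
    def pick_backend(backends)
      return backends.first if backends.size == 1

      busiest = backends.max_by(&:queue_size)
      candidates =
        if busiest.queue_size > 0
          backends.reject { |b| b.equal?(busiest) }  # skip the backed-up office
        else
          backends                                   # everyone is idle; all are fair game
        end
      candidates.sample
    end

    offices = [Backend.new("app1", 0), Backend.new("app2", 12), Backend.new("app3", 1)]
    puts pick_backend(offices).host  # never "app2" while its queue is backed up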

I hate to do this to you again, but can you think of any other major bottlenecks in our system? I can.



 
