Thursday, July 3, 2008

Building Load Resilient Web Servers

One of the bigger challenges in deploying dynamic web servers is coping with the load peaks that occur when a URL is communicated to a large community in a short time. I have seen several sites get slashdotted and collapse under the load, and it is common to blame the underlying server technology for such crashes; being Lisp lovers, we don't want to see that happen to our sites.

Using a caching frontend to reduce backend load

Following industry practice, we have been using a caching reverse proxy in front of our Lisp-based dynamic web servers from the beginning. Our choice was Squid, as I had prior experience configuring it and found it to work reliably once the configuration was in place. I was never quite happy with that choice, though, because Squid's main mode of operation is forward proxying. It has gazillions of configuration options that are not needed in a reverse proxy, which made us feel that Squid might not be a perfect match for our demands.

This is not to say that Squid has hurt us in any way. It served us well during a load peak on the create-rainforest web site, but as reverse proxying has become very common in the last few years, we thought it was about time to go shopping for a solution that might be a better match for our needs.

We hoped to find a frontend proxy having the following features:

  • Caching - We want the frontend to cache all content that is not session dependent in order to reduce the load on our Lisp backend.
  • Scalability - The frontend must be able to handle loads much larger than what we normally see, in order to accommodate peaks.
  • Request queueing - Ideally, we'd like the frontend to handle all concurrency issues and send requests to the backend one after the other over a single persistent HTTP connection, maybe with pipelining.

This set of requirements reduced the available options dramatically, and we ended up giving varnish serious consideration.

Evaluating varnish

varnish seems to have most of the features that we require: it supports caching, has been tested under very high loads, and runs on FreeBSD, which is our deployment platform. Varnish has been written by Poul-Henning Kamp of FreeBSD fame, so we also found it to be culturally compatible.

Our evaluation revealed that varnish is under active development and support, and we found the developers to be very responsive to our bug reports and requests. Its architecture looks well thought out, and the configuration language (even if underdocumented) makes the request handling process very transparent.

There are a number of downsides with varnish, though:

  • Eager cache cleanup policy: varnish removes objects from its cache as soon as they expire. This means that expired objects are always fetched from the backend, even if the previously cached copy is still up to date. It is possible to work around this either by hacking varnish to revalidate cached objects before expiring them, or by not expiring them automatically and instead sending explicit purge requests from the backend to varnish (a sketch of the latter approach follows below). Both options, while doable, require substantial work.
  • No high load safeguards: varnish offers no control over the number of backend connections established and does not support any queueing or backend resource control mechanisms. This means that it will unconditionally try to establish a connection to the backend if an object can't be served from the cache. Once the backend reaches saturation, varnish will aggravate the situation by sending even more requests to it, increasing the time needed to recover from the saturation.
  • Threading: varnish uses threading to schedule requests. While this in principle should not be a bad thing, threaded programs are much harder to debug in practice, and many of the concurrency bugs that threaded code is susceptible to only show up after a long time or under certain load patterns that may be hard to reproduce. Admittedly, I am a threads hater, but I mention this here because we found a serious threading race condition on the second day of our evaluation which prevented varnish from even starting up. My trust in the code was seriously affected by this.

When we talked to the varnish developers, they acknowledged these problems; help with the fixable ones came very quickly, and they told us that more advanced features will be developed after the upcoming 2.0 release of varnish.
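
To illustrate the purge-based workaround mentioned in the first point, here is a minimal sketch of how the backend could send such requests. It assumes that varnish has been configured (in its VCL) to honor the PURGE method, and it uses the Drakma HTTP client; the host name and path are made up for illustration:

    ;; Sketch: tell the frontend cache to discard its copy of a
    ;; resource so that the next request for it is fetched from the
    ;; backend.  Assumes the frontend accepts PURGE requests.
    (defparameter *frontend-url* "http://varnish.example.com")

    (defun purge-frontend-cache (path)
      "Send a PURGE request for PATH to the frontend cache."
      (drakma:http-request (concatenate 'string *frontend-url* path)
                           :method :purge))

    ;; After updating a resource, the backend would call, e.g.:
    ;; (purge-frontend-cache "/sold-counter")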

We would have liked to switch to varnish because of the very good support and because it is meant to be a web frontend, nothing else. Yet, at the current point in time, it does not seem to be mature enough to serve our needs. After having evaluated it, we turned back to Squid, as it seemed to be the only other option. We found that Squid meets our requirements for cache cleaning and revalidation of cached objects very well.

Making objects cacheable

Choosing the frontend software is only part of what needs to be done to make a web system fast and robust enough to withstand high loads. The most important factor is to make the frontend serve a large percentage of the incoming requests from its cache and consult the backend server only for content that is really dynamic. The HTTP/1.1 protocol provides request and response headers that control how content can be cached, and there is little need for explicit frontend configuration if these headers are used correctly.

Using If-Modified-Since

One way to limit the traffic to the backend is to implement the If-Modified-Since mechanism. Hunchentoot supports it for static files by default, and it can also be used in dynamic handlers that can check whether a resource has changed since it was last requested. The frontend cache sends an If-Modified-Since header when it revalidates a resource that it already holds in its cache, so every cacheable resource will normally be transferred from the backend to the frontend only once.
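
As a minimal sketch, a dynamic Hunchentoot handler could look like the following; last-change-time and generate-sales-report are hypothetical functions standing in for application logic:

    ;; Sketch of If-Modified-Since handling in a dynamic handler.
    ;; LAST-CHANGE-TIME returns the universal time of the resource's
    ;; last modification (hypothetical application function).
    (hunchentoot:define-easy-handler (sales-report :uri "/sales-report") ()
      (let ((last-modified (last-change-time)))
        ;; Replies with 304 Not Modified and aborts this handler if the
        ;; client's If-Modified-Since header matches LAST-MODIFIED.
        (hunchentoot:handle-if-modified-since last-modified)
        (setf (hunchentoot:header-out :last-modified)
              (hunchentoot:rfc-1123-date last-modified))
        (generate-sales-report)))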

Controlling cache refreshing

Often, resources are dynamic, yet it is not crucial that every client sees the absolutely latest version of the resource. For example, in our square meter sales application, visitors should always see the current number of square meters sold, but it is not vital that this information is accurate to the second. Thus, we want the cache to refresh such pages only at a certain interval. HTTP/1.1 provides the Cache-Control response header for this, in particular its max-age parameter, which specifies how long a cache may consider a cached resource valid without revalidating it with the originating server. By setting this parameter in the responses sent by the backend, we can effectively limit the refresh rate for dynamic resources that do not need to be completely up to date every time.
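
Here is a minimal sketch of such a handler; the URI and the count-sold-square-meters function are made up for illustration:

    ;; Sketch: allow caches to reuse this page for up to 60 seconds
    ;; before revalidating with the backend.
    (hunchentoot:define-easy-handler (sold-counter :uri "/sold-counter") ()
      (setf (hunchentoot:header-out :cache-control) "max-age=60")
      (format nil "~D square meters sold so far."
              (count-sold-square-meters)))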

Testing realistically

Testing the performance of a web system requires better tools than the often-used ApacheBench, which only measures response times and throughput for a single URL. For meaningful results, one should simulate a load that reflects the load that real users create. I have often used SIEGE for informal testing, as it is easy to use, but I have also found it a little flaky and prone to random crashes, which made us look for a more reliable solution.

In the FreeBSD ports collection, we found Tsung. Tsung is an open-source multi-protocol distributed load testing tool written in Erlang, and it has good support for HTTP. In addition to running load tests against web servers and generating statistics reports, it includes a session recorder that can be used to log the URLs a user visits while browsing a web site; such a log can then be used directly to simulate many simultaneous users. As an added bonus, it is possible to capture dynamically generated information from web server responses into variables and use them in subsequent requests of a session. This feature can be used to generate sale transactions or user registrations in a load simulation.
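
For illustration, a Tsung session definition looks roughly like the sketch below; the URLs are made up, and attribute details vary between Tsung releases, so consult the manual of the installed version:

    <!-- Sketch of a Tsung HTTP session: each simulated user fetches
         the front page, pauses, then fetches a dynamic page. -->
    <sessions>
      <session name="visitor" probability="100" type="ts_http">
        <request> <http url="/" method="GET" version="1.1"/> </request>
        <thinktime value="5" random="true"/>
        <request> <http url="/sold-counter" method="GET" version="1.1"/> </request>
      </session>
    </sessions>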

Using Tsung and Squid, we were able to tune our Lisp backend so that all non-dynamic content is served from the cache, to pinpoint a serious performance problem, and to simulate realistic loads. We are now confident that our setup can withstand the next rush of users without crashing.

7 comments:

    1. In our work we use Web Polygraph - a very useful tool for load testing web proxies.

    2. Great piece. I hope we'll see more like it.

      Varnish not doing re-validation is a real bummer. There's a class of applications that, without any TTL-based expiring cache at all, could benefit tremendously simply by performing last modified / "deep etag" generation up-front and just not doing the heavy stuff when nothing's changed.

      I wish there were a caching system that felt more like Make than like a traditional cache. Serving requests should be all about dependency resolution and time-stamp tracking. Varnish seems real close to being a solid core for such a system.

    3. Varnish 2.0.2 is out. Key features vs. V1:
      * ESI include support
      * Round-robin and random backend load balancers
      * Backend health check
      * Serve expired objects, until we have a fresh one
      * OpenSolaris support
      * Much improved malloc backend, particularly on Linux
      * Lots and lots of bug fixes

    4. Nice analysis. The question, of course, is: did it work when you got peak usage? From your blog I guess that you are using a Lisp server - does that hold up under such peak loads as you described?

    5. My squid+Hunchentoot combination survived several load peaks, although the largest was about 2000 visitors in one hour. This is by no means a gigantic load, yet as everything is fully dynamic, I am pretty happy with the result.

      Hunchentoot, the backend HTTP server written in Common Lisp, is not designed to handle a large number of concurrent connections. This is why I am using a frontend cache in the first place: only those requests that actually need to be computed hit the backend.
