Monday, September 10, 2007

Declarative caching in the UI layer

In my recent post about the site's architecture, I mentioned that a fair amount of caching takes place in the web UI layer. I'd like to expand on that.

The data consumed by the web UI layer comes solely from the service layer. The services are injected into Tapestry components by Spring.

Let's take an example. If you look at the home page, you'll see that recently added crossword puzzles are displayed for different categories (e.g. American or British type puzzles). The Tapestry component responsible for outputting this bit of HTML uses the "constructor" (as in puzzle constructor) service, injected into the component at runtime. Since the component is located on a high-traffic page, its performance needs to scale, so hitting the service layer (and thus the database) on every request is out of the question.

My first solution to this problem was to cache the service layer data in the component itself. It worked for a while but proved inflexible in the long run. For instance, different caching strategies were needed depending on where a component was located (e.g. a high- or low-traffic page).

My second solution (and the one I'm using now) was to configure caching proxies in Spring using Spring Modules and Ehcache, and inject those proxies into the components instead of the real services. This way, I can turn caching on or off depending on the specific needs of each component, without the component being aware of anything.
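The underlying idea can be sketched in plain Java with a dynamic proxy. This is a minimal, hypothetical sketch, not the actual setup: the real thing uses Spring Modules' declarative caching with Ehcache, which also handles expiry and cache sizing, and the service interface and method names below are made up.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingProxyDemo {

    // Hypothetical service interface, standing in for the real "constructor" service.
    interface ConstructorService {
        List<String> findRecentPuzzles(String category);
    }

    /** Wraps any interface in a proxy that memoizes results by method name and arguments. */
    @SuppressWarnings("unchecked")
    static <T> T cached(Class<T> iface, T target) {
        Map<String, Object> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (proxy, method, args) -> {
            String key = method.getName() + Arrays.deepToString(args);
            Object value = cache.get(key);
            if (value == null) {
                value = method.invoke(target, args);  // delegate to the real service once
                cache.put(key, value);
            }
            return value;
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        int[] calls = { 0 };
        ConstructorService real = category -> {
            calls[0]++;  // pretend this is an expensive database hit
            return Arrays.asList(category + " puzzle 1", category + " puzzle 2");
        };
        ConstructorService proxy = cached(ConstructorService.class, real);

        proxy.findRecentPuzzles("American");
        proxy.findRecentPuzzles("American");  // second call is served from the cache
        System.out.println("database hits: " + calls[0]);  // prints "database hits: 1"
    }
}
```

The component only sees the `ConstructorService` interface, so swapping the cached proxy for the real bean (or back) is purely a wiring decision.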

Wednesday, September 5, 2007

About the blog

In 2000, while still in university, I created an online crossword puzzle site. It was (and still is) just a hobby, an outlet to hone my programming, graphic design, web design and system administration skills.

Over the years, I've improved and grown the site, even doing a complete rewrite in 2005, going from Perl to Java. I also started a company called Pyromod Software Inc to accommodate the growth.

I started this blog to share my experience on the many topics related to building and growing a web site. I hope you enjoy reading it, and hopefully it will inspire you to start your own web site!

Site architecture

This document describes the hardware and software architecture of the site (more info about the site here). Hopefully, it will provide guidance to people who want to undertake such a project but are not sure where to start.


Stats

Here are some stats to help you assess the kind of scalability that can be achieved with this architecture on the specified hardware.

As of September 2007:

Monthly visits: 1.4 million
Monthly pageviews: 5.5 million
Members: 224,000
Peak simultaneous visitors: 4,500
Database tables: 30
Database size: 1.6 GB
Average page rendering time: 50 ms
Average load average: 0.33
Average CPU usage: 10%


Overview

The site runs on a single machine. An Apache server receives web connections; its functions include URL rewriting and sanitizing, SSL handling, and logging. It then proxies connections to a Tomcat appserver, which is responsible for processing the requests. MySQL is used to persist web sessions and business data.

,--------, ,--------, ,-------,
--->| Apache |--->| Tomcat |--->| MySQL |
`--------' `--------' `-------'
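As a rough sketch, the front-end Apache role described above might look like this in a virtual host. The hostnames, paths, and rewrite rule are placeholders, not the site's actual configuration:

```apache
# Hypothetical sketch of the front-end Apache role; all names are placeholders.
<VirtualHost *:80>
    ServerName www.example.com

    # URL rewriting and sanitizing
    RewriteEngine On
    RewriteRule ^/puzzles/?$ /puzzles/index [R=301,L]

    # Logging
    CustomLog /var/log/apache2/access.log combined

    # Proxy everything else to the Tomcat appserver
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```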

Another machine is used for failover and has the same architecture.


Servers

Both the main and failover servers have the same configuration:

Hardware: 2.8 GHz Pentium 4 HT with 1 GB of memory
OS: Debian sarge (3.1) using the 2.6 kernel
Web: Apache 2.0.54
Appserver: Tomcat 5.5.17 with a customized JDBC session store
Database: MySQL 4.1.11 using the InnoDB engine
Mail: Exim 4.5


Software architecture

I use a multi-tier architecture:

,--------,
| web UI |------,
`--------'      |
                v
          ,---------,    ,--------,    ,-------------,
          | service |--->| domain |--->| persistence |
          `---------'    `--------'    `-------------'
                ^
                |
,-------------, |
| web service |-'
`-------------'

The following technologies are used: Tapestry 4.0 (web UI), Axis 1.3 (web service), Spring 2.0 (web UI, web service, service), Hibernate 3.2 (persistence).


Persistence layer

Hibernate is used to map domain model entities to the database. Transaction support is provided by the InnoDB MySQL storage engine.

I also use Data Access Objects to move data in and out of the database. The bulk of the code the DAOs implement is query logic. Separating that logic from the service layer is essential to make database optimization as easy as possible.
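To illustrate the separation, here is a plain-Java sketch of what a DAO looks like from the service layer's point of view. The entity, field names, and query are hypothetical, and the in-memory implementation stands in for the real Hibernate-backed one:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class DaoDemo {

    // Hypothetical entity; field names are illustrative, not the site's actual model.
    static class Puzzle {
        final String title;
        final String category;
        final Date dateAdded;
        Puzzle(String title, String category, Date dateAdded) {
            this.title = title;
            this.category = category;
            this.dateAdded = dateAdded;
        }
    }

    // The DAO owns the query logic; the service layer only sees intent-revealing methods,
    // so queries can be reworked for the database without touching the services.
    interface PuzzleDao {
        List<Puzzle> findRecentByCategory(String category, int limit);
    }

    // In-memory stand-in for the Hibernate-backed implementation.
    static class InMemoryPuzzleDao implements PuzzleDao {
        private final List<Puzzle> table = new ArrayList<>();

        void save(Puzzle p) { table.add(p); }

        @Override
        public List<Puzzle> findRecentByCategory(String category, int limit) {
            List<Puzzle> result = new ArrayList<>();
            // newest first: walk the "table" backwards
            for (int i = table.size() - 1; i >= 0 && result.size() < limit; i--) {
                if (table.get(i).category.equals(category)) {
                    result.add(table.get(i));
                }
            }
            return result;
        }
    }
}
```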


Domain layer

The site's domain logic is implemented in this layer. The domain model contains about two dozen entities, implemented as POJOs.


Service layer

There are about a dozen services in this layer, all implemented as Spring beans. Spring provides transaction demarcation and caching. The services use the DAOs provided by the persistence layer to access and persist the entities.

The service layer has two purposes. First, it is used to implement workflow logic. Workflow logic is business logic that doesn't pertain to the domain model per se, for example, sending a welcome email when a new member signs up. Second, it provides a procedural and remotable API to the layers above.
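The signup example might look roughly like this. This is a plain-Java sketch: the interface names, method signatures, and email text are hypothetical, and in the real site the collaborators would be wired as Spring beans:

```java
public class SignupServiceDemo {

    // Hypothetical collaborators; the real site wires these as Spring beans.
    interface MemberDao {
        void save(String username, String email);
    }

    interface Mailer {
        void send(String to, String subject, String body);
    }

    /** Workflow logic lives in the service: persist the member, then send the welcome email. */
    static class MemberService {
        private final MemberDao dao;
        private final Mailer mailer;

        MemberService(MemberDao dao, Mailer mailer) {
            this.dao = dao;
            this.mailer = mailer;
        }

        void signUp(String username, String email) {
            dao.save(username, email);  // persistence concern, delegated to the DAO
            mailer.send(email, "Welcome!", "Thanks for joining, " + username);  // workflow step
        }
    }
}
```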

Data Transfer Objects are used to carry data in and out of this layer. The choice to use DTOs was made at the beginning because it was my original intent to expose that layer through web services. However, I realized that I wouldn't be able to keep the services' APIs stable enough for the needs of a public interface, so I created a separate web service layer. Given the chance to redo it, I would probably ditch the DTOs and use the domain model entities for data transfers. This would save me a lot of data mapping code and would certainly improve performance.
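To make the mapping overhead concrete, here is a minimal sketch of the kind of field-by-field copying a DTO layer requires. The entity and its fields are hypothetical:

```java
public class DtoMappingDemo {

    // Domain entity (hypothetical fields).
    static class Member {
        String username;
        String email;
    }

    // DTO exposed by the service layer: same data, different class.
    static class MemberDto {
        String username;
        String email;
    }

    // The mapping code the post calls overhead: copy field by field,
    // and a mirror-image method is needed for the other direction.
    static MemberDto toDto(Member m) {
        MemberDto dto = new MemberDto();
        dto.username = m.username;
        dto.email = m.email;
        return dto;
    }
}
```

Multiply this by every entity and every direction and the appeal of passing the entities themselves becomes clear.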


Web UI layer

The web UI layer contains a couple hundred Tapestry pages and components.

The components get their data from the service layer. Since that data can often afford to be a bit out of date, a caching strategy was devised: I simply stick a caching proxy in front of the Spring beans whose results I want to cache, then inject the cached bean into the components that can benefit from it. This reduces the load on the database and improves page rendering time.

I originally designed the web UI layer to be stateless until the visitor logged in, at which point it would become stateful. Tapestry makes it easy to do that, and since the majority of visitors would not log in, all was well. However, as the web UI layer became more sophisticated, statelessness started to become a hindrance, so the web UI now runs statefully from the moment a visitor enters the site. This increased the number of web sessions in the database, so I had to rewrite parts of the Tomcat JDBC session store to accommodate the extra load.


Web service layer

This layer gives third parties programmatic access to the functionality of the site. It is composed of a thin facade of services adapting and stabilizing the APIs provided by the service layer. So that I can evolve the interface without disrupting established clients, I incorporated a version number into the API. So far, all the changes made have been backward compatible, but it reassures me that, if need be, incompatible changes can be made to the API, as long as the version is incremented.
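The versioning idea can be sketched as follows. This is plain Java with hypothetical operation names and version numbers; the real API is exposed through Axis:

```java
public class VersionedApiDemo {

    // Illustrative value; bumped whenever an incompatible change is made.
    static final int CURRENT_VERSION = 2;

    // Hypothetical facade operation: each call carries the API version
    // the client was written against, so old response shapes can be preserved.
    static String getRecentPuzzles(int apiVersion, String category) {
        if (apiVersion > CURRENT_VERSION) {
            throw new IllegalArgumentException("Unsupported API version: " + apiVersion);
        }
        // Pretend v2 added the category prefix; v1 clients keep the old shape.
        return apiVersion < 2 ? "puzzles" : category + " puzzles";
    }
}
```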


Hosting

The site was initially hosted on a shared server, then moved to a virtual private server, and finally to a dedicated server. Recently, repeated hardware failures and downtime prompted me to start using DNS failover to provide redundancy for the site.

Before rolling out the solution, I performed an experiment to assess the viability of DNS failover. You can read more about it here. Granted, this solution doesn't provide 100% availability since DNS changes take time to propagate (even with a low TTL), but it's good enough.

The setup is simple: I have two machines, each running its own copy of the site. During normal operations, the DNS A record of the domain name points to the main machine. When a failure occurs, the A record is reconfigured to point to the failover machine. The failover machine is kept in sync with the main server using rsync over SSH for the filesystem data and MySQL master/slave replication over a Virtual Private Network for the database. I host the DNS records at DNSMadeEasy because they offer basic server monitoring and automatic DNS failover.

An added bonus of the MySQL master/slave replication is that it simplifies database backups: I simply stop the slave from replicating while I do a database dump. This has no performance impact on the master, and the dumps have referential integrity.
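The slave-side backup can be sketched like this. These are not the actual scripts; paths are placeholders and credentials are assumed to come from the MySQL client configuration:

```shell
#!/bin/sh
# Hypothetical sketch of the backup on the failover (slave) machine.
mysql -e "STOP SLAVE"    # pause replication so the dump is self-consistent
mysqldump --all-databases --single-transaction > /backup/db-$(date +%F).sql
mysql -e "START SLAVE"   # resume catching up with the master
```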

Monday, September 3, 2007

Viability of DNS failover

In Spring 2006, my site was plagued with recurrent hardware problems causing serious downtime. At the time, the site was hosted on a dedicated server and I had no failover strategy whatsoever so when a hard disk failed on the server, you could expect a few days of downtime.

At the beginning of the summer, I got fed up and started investigating possible solutions to this problem and, after some experimentation, finally settled on DNS failover. Here are the results of the experimentation, originally posted on WebHosting Talk:

I run a site with about 1,000,000 unique visitors per month, and recent server failures made me decide to get a failover server to minimize downtime. My goal wasn't to get 99.999% uptime but to be able to be back on track after a failure in a "reasonable" amount of time. After evaluating several solutions, I decided to go with DNS failover. Here's how the setup works:

1) the domain's A record points to the main server with a very low TTL (time to live)
2) the failover server replicates data from the main server
3) when the main server goes down, the A record is changed to point to the failover server

The drawback is the DNS propagation time, since some DNS servers don't honor the TTL and there is some caching happening on the user's machine and browser. I looked for empirical data to gauge the extent of the problem but couldn't find any, so I decided to set up my own experiment:

I start with the domain pointing to the main server with a TTL of 1800 seconds (half an hour). I then change it to point to the failover server, which simply port-forwards to the main server. On the main server, I periodically compute the percentage of requests coming from the failover server, which gives me the percentage of users for whom the DNS change has propagated.
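The measurement boils down to a small log-analysis routine along these lines. This is a hypothetical sketch: the failover IP and the log format (client IP first on each line) are made up:

```java
import java.util.List;

public class PropagationDemo {

    // Hypothetical address: requests relayed by the failover server arrive
    // at the main server with the failover machine's IP as the client.
    static final String FAILOVER_IP = "192.0.2.2";

    /** Percentage of access-log lines (client IP first) relayed by the failover box. */
    static double propagatedPercent(List<String> accessLogLines) {
        if (accessLogLines.isEmpty()) {
            return 0.0;
        }
        long relayed = accessLogLines.stream()
                .filter(line -> line.startsWith(FAILOVER_IP + " "))
                .count();
        return 100.0 * relayed / accessLogLines.size();
    }
}
```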

I made the DNS change at exactly 16:04 on 06/21/06, and here are the percentages of propagated users:

06/21/06 16:00 0 %
06/21/06 16:05 3 %
06/21/06 16:10 20 %
06/21/06 16:15 37 %
06/21/06 16:20 59 %
06/21/06 16:25 69 %
06/21/06 16:30 76 %
06/21/06 16:35 80 %
06/21/06 16:40 86 %
06/21/06 16:45 90 %
06/21/06 16:50 91 %
06/21/06 16:55 92 %
06/21/06 17:00 93 %
06/21/06 17:05 94 %
06/21/06 17:10 94 %
06/21/06 17:15 95 %
06/21/06 17:35 95 %
06/21/06 17:40 96 %
06/21/06 17:45 97 %
06/22/06 10:40 99 %

So even after 18 hours, a certain percentage of users was still going to the old server, so DNS failover is obviously not a 99.999% uptime solution. However, since more than 90% of users were propagated within the first hour, the solution works well enough for me.