{"id":246,"date":"2009-09-08T16:22:13","date_gmt":"2009-09-08T19:22:13","guid":{"rendered":"http:\/\/blog.plataformatec.com.br\/?p=246"},"modified":"2009-09-18T13:26:20","modified_gmt":"2009-09-18T16:26:20","slug":"how-to-avoid-dog-pile-effect-rails-app","status":"publish","type":"post","link":"https:\/\/blog.plataformatec.com.br\/2009\/09\/how-to-avoid-dog-pile-effect-rails-app\/","title":{"rendered":"How to avoid the dog-pile effect on your Rails app"},"content":{"rendered":"
<p>Everyone has heard about scalability at least once. Everyone has heard about memcached as well. What not everyone might have heard of is the dog-pile effect and how to avoid it. But before we get there, let's take a look at how to use Rails with memcached.</p>

<h3>Rails + Memcached = &lt;3</h3>

<p>First, if you have never used memcached with Rails or never read or heard much about scalability, I recommend checking out the Scaling Rails episodes done by Gregg Pollack, especially the episode about memcached.</p>

<p>Assuming that you have memcached installed and want to use it in your application, you just need to add the following to your configuration files (for example production.rb):</p>
<pre>
config.cache_store = :mem_cache_store
</pre>

<p>By default, Rails will search for a memcached process running on localhost:11211.</p>

<p>But wait, why would I want to use memcached? Well, imagine that your application has a page where a slow query is executed against the database to generate a ranking of blog posts based on the author's influence, and this query takes 5 seconds on average. In this case, every time a user accesses this page the query is executed, and your application ends up with a very high response time.</p>

<p>Since you don't want users to wait 5 seconds every time they want to see the ranking, what do you do? You store the query results inside memcached. Once the query result is cached, your users no longer have to wait for those damn 5 seconds!</p>
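<p>To make that concrete, here is a minimal sketch of what this kind of caching could look like with Rails.cache.fetch, assuming the memcached store configured above. The Post.ranking_by_author_influence method and the "posts_ranking" key are made up for the example:</p>

<pre>
# Somewhere in the action that renders the ranking page.
# fetch returns the cached value when the key is present; otherwise it runs
# the block, stores the result in memcached and returns it.
@ranking = Rails.cache.fetch("posts_ranking", :expires_in => 5.minutes) do
  Post.ranking_by_author_influence  # stands in for the slow, ~5 seconds query
end
</pre>

<p>The first request after the key expires pays the 5 seconds and repopulates the cache; every request in the following 5 minutes is served straight from memcached. Keep that time-based expiration in mind, because it is exactly the setup we are about to poke at.</p>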
<h3>What is the dog-pile effect?</h3>

<p>Nice, we started caching our query results, our application is responsive again and we can finally sleep at night, right?</p>

<p>That depends. Let's suppose we are expiring the cache based on a time interval, for example 5 minutes. Let's see how that works out in two scenarios:</p>

<p><b>1 user accessing the page after the cache has expired:</b></p>

<p>In this first case, when the user accesses the page after the cache has expired, the query is executed again. After 5 seconds the user gets to see the ranking, your server did a little work and your application is still up and running.</p>

<p><b>N users accessing the page after the cache has expired:</b></p>

<p>Now imagine that at a certain hour this page receives 4 requests per second on average. In that case, 5 seconds will pass between the first request and the query results being returned, and around 20 requests will hit your server in the meantime. The problem is that all 20 of those requests will miss the cache, and your application will try to execute the slow query for every one of them, consuming a lot of CPU and memory. This is the dog-pile effect (the short sketch at the end of this section simulates exactly this situation).</p>

<p>Depending on how many requests hit your server and how many resources the query needs, the dog-pile effect can bring your application down. Holy cow!</p>

<p>Luckily, there are a few ways to handle this effect. Let's take a look at one of them.</p>
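<p>Before we do, here is a tiny self-contained Ruby sketch of that second scenario, using plain threads and a hash standing in for memcached (this is not code from the application above, just a toy to show the stampede): 20 concurrent requests hit an expired key and every single one of them runs the slow query.</p>

<pre>
require "thread"

CACHE = {}        # stands in for memcached
MUTEX = Mutex.new
executed = 0

def slow_ranking_query
  sleep 5         # the expensive 5 seconds query
  "ranking"
end

# 20 "requests" arrive while the key is expired (i.e. missing).
threads = (1..20).map do
  Thread.new do
    unless CACHE["posts_ranking"]    # every request sees a cache miss...
      MUTEX.synchronize { executed += 1 }
      CACHE["posts_ranking"] = slow_ranking_query  # ...and runs the query itself
    end
  end
end
threads.each { |t| t.join }

puts "slow query executed #{executed} times"  # => 20, not just once
</pre>

<p>Each of those 20 requests spends the full 5 seconds doing work that only one of them needed to do, which is exactly what hits your Rails processes during a real stampede.</p>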