Posts tagged "performance"

We, Rails developers, have always worried about improving the performance of our test suites. Today I would like to share three quick tips we employ in our projects that can drastically speed up your test suite.

1. Reduce Devise.stretches

Add the following to your spec/test helper:

Devise.stretches = 1

Explanation: Devise uses bcrypt-ruby by default to encrypt your password. Bcrypt is one of the best choices for this job because, unlike hash functions such as MD5, SHA1 and SHA2, it was designed to be slow. So if someone steals your database, it will take them a long time to crack each password in it.

That said, it is expected that Devise will also be slow during tests, since many tests generate and compare passwords. For this reason, a very easy way to improve your test suite performance is to reduce the value of Devise.stretches, which represents the cost bcrypt uses when generating a password. This makes your passwords less secure, but that is ok as long as it applies only to the test environment.

The latest Devise versions already set stretches to one for the test environment in your initializer, but if you have an older application, this will yield a nice improvement!
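
For reference, newer Devise initializers do roughly the following (a sketch; the non-test cost value is just an example and depends on your Devise version):

# config/initializers/devise.rb
Devise.setup do |config|
  # Keep bcrypt cheap in tests, expensive everywhere else.
  config.stretches = Rails.env.test? ? 1 : 10
end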

2. Increase your log level

Add the following to your spec/test helper:

Rails.logger.level = 4

Explanation: By default, Rails logs everything that happens in your test environment to “log/test.log”. By increasing the logger level, you reduce the IO during your tests. The only downside of this approach is that, if a test is failing, you won’t have anything logged. In such cases, just comment out the configuration option above and run your tests again.
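
If you prefer not to edit the helper every time you need the log back, a small variant lets you re-enable logging per run (the VERBOSE_LOGS flag name is just a suggestion):

require "logger"

# Silence logging by default; run `VERBOSE_LOGS=1 rspec` (or rake test)
# to get full logging back while debugging a failure.
Rails.logger.level = ENV["VERBOSE_LOGS"] ? Logger::DEBUG : Logger::FATAL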

3. Use shared connection with transactional fixtures

If you are using Capybara for javascript tests with Active Record, add the lines below to your spec/test helper and be sure you are running with transactional fixtures set to true:

class ActiveRecord::Base
  mattr_accessor :shared_connection
  @@shared_connection = nil
 
  def self.connection
    @@shared_connection || retrieve_connection
  end
end
 
# Forces all threads to share the same connection. This works on
# Capybara because it starts the web server in a thread.
ActiveRecord::Base.shared_connection = ActiveRecord::Base.connection
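
And make sure transactional fixtures are turned on; with RSpec that is typically just (a sketch, assuming rspec-rails):

RSpec.configure do |config|
  config.use_transactional_fixtures = true
end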

Explanation: A long time ago, when Rails was still in the 1.x branch, a new configuration option called use_transactional_fixtures was added to Rails. The feature is very simple: before each test, Active Record issues a begin transaction statement, and it issues a rollback after the test is executed. This is awesome because Active Record ensures that no data is left in our database simply by using transactions, which is really, really fast.

However, this approach may not work in all cases. Active Record’s connection pool works by creating a new connection to the database for each thread. And, by default, database connections do not share transaction state. This means that if you create data inside a transaction in one thread (which has its own database connection), another thread cannot see that data at all! This is usually not an issue, unless you are using Capybara with javascript tests.

When using Capybara with javascript tests, Capybara starts your Rails application inside a thread so the underlying driver (Selenium, Webkit, Celerity, etc.) can access it. Since the test suite and the server run in different threads, if the test suite is running inside a transaction, all the data created by the test suite will not be available to the server. Conversely, since the server is outside the transaction, data created by the server won’t be cleaned up. For this reason, many people turn off use_transactional_fixtures and use Database Cleaner to clean up their database after each test. However, this hurts your test suite performance badly.

The patch above, however, provides a very simple solution to both problems: it forces Active Record to share the same connection between all threads. This is not a problem in your test suite because when the test thread is running, there is no request being processed in the server thread, and when the server thread is running, the test thread is waiting for a response from the server. So it is unlikely that both will use the connection at the same time. Therefore, with the patch above, you no longer need Database Cleaner (unless you are using another database like Mongo) and, more importantly, you can turn use_transactional_fixtures back to true, which will wrap both your test and server data in a single transaction, providing a great boost in your test suite performance.

Finally, if any part of your code uses threads to access the database and you need to test it, you can simply set ActiveRecord::Base.shared_connection = nil during that specific test and everything should work great!
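
With RSpec, for example, you could reset the shared connection around those specific examples; a minimal sketch, where the :threaded tag is just a name picked for illustration:

RSpec.configure do |config|
  config.around(:each, :threaded => true) do |example|
    # Let each thread get its own connection for this example only.
    ActiveRecord::Base.shared_connection = nil
    example.run
    ActiveRecord::Base.shared_connection = ActiveRecord::Base.connection
  end
end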

Conclusion

That’s it! I hope you have enjoyed those tips and, if they helped you boost your test suite performance, please let us know in the comments the time your test suite took to run before and after those changes! Also, please share any tips you may have as well!

A new I18n gem was just released and it comes with two new backend extensions: Fast and InterpolationCompiler.

First, what is a backend?

The I18n.t, I18n.translate, I18n.l and I18n.localize methods are actually just wrappers around I18n.backend, which is what does all the heavy lifting. This means you can swap in another backend, as long as it respects the required API.

By default, I18n comes with the Simple backend, but others are available. For example, I18n has an ActiveRecord backend, which stores translations in the database. This is useful when someone needs to change translations through a web interface. To use it, you just need to do:

  I18n.backend = I18n::Backend::ActiveRecord.new

There are a couple of other backends and extensions, like one which implements fallbacks, so if a translation cannot be found in a given language, like German (:de), it can fall back to English (:en). You can check the whole list, but for now we are going to focus on the two new backend extensions.
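The fallbacks behavior, for instance, also ships as a module you can mix into the Simple backend, along these lines:

  I18n::Backend::Simple.send :include, I18n::Backend::Fallbacks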

Fast

Fast means fast. And oh boy, this one is fast. This extension flattens translations to speed up the lookup. For example, the hash { :a => { :b => { :c => "foo" } } } gets flattened to { :"a.b.c" => "foo" }, so instead of recursively looking into nested hashes, it does a single lookup. The obvious expense is that whenever you store translations, the translation hash needs to be flattened first, which takes more time than usual.
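
Just to illustrate the idea (this is not the gem’s actual implementation), flattening a nested translations hash could look like this:

# Illustration only: turn nested hashes into dotted keys.
def flatten_translations(hash, prefix = nil)
  hash.inject({}) do |result, (key, value)|
    flat_key = prefix ? :"#{prefix}.#{key}" : key
    if value.is_a?(Hash)
      result.merge(flatten_translations(value, flat_key))
    else
      result.merge(flat_key => value)
    end
  end
end

flatten_translations(:a => { :b => { :c => "foo" } })
# => { :"a.b.c" => "foo" }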

In order to measure different backend implementations, I pushed some benchmark setup to the I18n repository. The current setup measures the storage time, the time it takes to translate a key (the depth of the key means how many nested hashes it sits under), the time to translate a key falling back to the default key, and the time to translate a key (at depth 5) and interpolate. The results comparing the Simple backend without and with the Fast extension are shown below:

[Benchmark chart: Simple vs. Fast]

In other words, a simple lookup using the Fast extension is 3 to 4 times faster than with the plain Simple backend. Besides, configuring your application to use it is very simple:

  I18n::Backend::Simple.send :include, I18n::Backend::Fast

Nice!

Interpolation compiler

The InterpolationCompiler is a backend extension which extracts all required interpolation keys from a string up front, leaving just the minimum amount of work to runtime. Imagine the following string: "This is a custom blank message for {{model}}: {{attribute}}". This extension annotates the string so it knows it needs to interpolate both model and attribute, and knows exactly where the interpolation should happen. We can compare the Simple backend without and with the InterpolationCompiler below:

[Benchmark chart: Simple vs. Interpol]

The InterpolationCompiler only changes the time taken when we have interpolation keys, without affecting the other translations too much. You can add it to your app as easily as the Fast extension:

  I18n::Backend::Simple.send :include, I18n::Backend::InterpolationCompiler
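
After including it, nothing changes in how you interpolate; the extension simply moves most of the work to store time. A small usage sketch (the :blank_message key is made up for this example):

  I18n.backend.store_translations :en,
    :blank_message => "This is a custom blank message for {{model}}: {{attribute}}"
  I18n.t :blank_message, :model => "User", :attribute => "name"
  # => "This is a custom blank message for User: name"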

Run, I18n, run!

But the best is still to come! Fast and InterpolationCompiler can actually be used together, achieving performance not seen before in I18n. The benchmark speaks for itself:

[Benchmark chart: Simple vs. FastInterpol]

While simple lookups get around four times faster, the Fast and InterpolationCompiler improvements combine whenever we need interpolation, becoming around six times faster!

As said previously, both extensions increase the time taken to store translations as a side effect. This can be seen below:

[Benchmark chart: Store]

The YAML hash used in the benchmark is relatively small, but it already shows how the time taken to store translations grows with these extensions. But remember, you are constantly storing translations only in development (before each request is processed). In production, translations are stored at startup and that is it.

Using within Rails

You should be able to use these features today in Rails 2.3.5, and it will also be possible in Rails 3. You just need to install the I18n gem and configure it in your environment.
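
A minimal sketch of what that could look like in a Rails 2.3 app (the initializer file name is just a suggestion):

# config/environment.rb, inside the Rails::Initializer.run block:
config.gem "i18n"

# config/initializers/i18n_backends.rb
I18n::Backend::Simple.send :include, I18n::Backend::Fast
I18n::Backend::Simple.send :include, I18n::Backend::InterpolationCompiler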

Why care?

All the times shown are in milliseconds. In other words, why care? If you are building a simple application using just one language, these improvements likely won’t change anything. But in an application that relies on I18n, I18n is invoked many times during a request/response lifecycle: error messages for models, flash messages, page titles, e-mail subjects, page content, date and time localization, pluralization rules and even many ActionView helpers. In such cases, it’s worth giving these extensions a try.

Running benchmarks on your own

If you want to run benchmarks on your own, it’s quite simple. You just need to do:

git clone git://github.com/svenfuchs/i18n.git
cd i18n
ruby benchmark/run.rb

Credits

The possibility of having backends and such extensions is thanks to Sven Fuchs, who leads the I18n effort quite well.

Many of the backends were added by the I18n community, and Fast and InterpolationCompiler were created by thedarkone.

Guys, I owe you a beer! ;)

Enjoy!