It is common for web applications to interface with external services. When testing, depending on an external service makes the suite fragile, so we end up mocking the interaction with such services. However, once in a while, it is still a good idea to check that the contract between your application and the service is still valid.
For example, this week we had to interact with a SOAP service, let's call it KittenInfo (why someone would provide kitten information via a SOAP service is beyond the scope of this blog post). We only need to contact one endpoint of KittenInfo, called get_details, which receives a kitten identifier and returns kitten information:
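The original snippet is not reproduced here; as a sketch of what such a client could look like, here is a plain-Ruby stand-in (the module name is taken from the post, but the endpoint URL, response fields and internal method are hypothetical; a real implementation would use a SOAP library such as Savon for the transport):

```ruby
# A minimal sketch of a client for the hypothetical KittenInfo service.
module KittenInfo
  class Client
    ENDPOINT = "http://kitteninfo.example.com/soap" # hypothetical URL

    # Receives a kitten identifier and returns a hash of details.
    def get_details(kitten_id)
      # In a real client this would issue a SOAP request to ENDPOINT
      # and parse the XML body. Here we only show the expected shape.
      perform_soap_call(:get_details, id: kitten_id)
    end

    private

    # Stand-in for the actual SOAP transport.
    def perform_soap_call(_operation, params)
      { id: params[:id], name: "Gorbypuff", owner: "tenderlove" }
    end
  end
end
```

With this shape, `KittenInfo::Client.new.get_details("gorbypuff")[:owner]` returns `"tenderlove"`.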
Since this API is simple, it is very easy to mock the client wherever our application requires it. On the other hand, we still need to verify that the integration between the KittenInfo SOAP service and our application works correctly, so we write some tests for it:
describe KittenInfo::Client do
  it "retrieves kitten details" do
    client = KittenInfo::Client.new
    details = client.get_details("gorbypuff")
    details[:owner].should == "tenderlove"
  end
end
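For the everyday unit tests that should not hit the network, a fake can stand in for the client. In an RSpec suite you would more likely use a double such as `instance_double(KittenInfo::Client)`; this hand-rolled plain-Ruby sketch (all names hypothetical) only illustrates the idea:

```ruby
# A fake that returns canned data instead of performing a SOAP request.
class FakeKittenInfoClient
  def get_details(kitten_id)
    { id: kitten_id, name: "Gorbypuff", owner: "tenderlove" }
  end
end

# Code that depends on the client only through its interface can
# receive the fake in tests:
class KittenGreeter
  def initialize(client)
    @client = client
  end

  def greeting(kitten_id)
    "Hello, #{@client.get_details(kitten_id)[:name]}!"
  end
end
```

For example, `KittenGreeter.new(FakeKittenInfoClient.new).greeting("gorbypuff")` returns `"Hello, Gorbypuff!"` without any network access.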
However, since this actually contacts the SOAP service, it may make your test suite slower and more fragile, especially in this case, where the SOAP service's responses take as long as a kitten's staring contest.
One possible solution to this problem is to make use of filter tags to exclude the SOAP integration tests from running, except when explicitly desired. We could do this by simply setting:
describe KittenInfo::Client, external: true do
  # ...
end
Then, in your spec_helper.rb, just set:
RSpec.configure do |config|
  config.filter_run_excluding external: true
end
Now, running your specs will by default skip all groups that have :external set to true. Whenever you tweak the client, or in your builds, you can run those specific tests with:
$ rspec --tag external
Conversely, prefixing a tag with ~ excludes it, as in $ rspec --tag ~external.
What about you? What is your favorite RSpec trick?
When David Chelimsky visited São Paulo last April, we invited him out for coffee, beers and Brazilian appetizers. We had a great time talking about different topics like OO, programming languages, authoring books and, as expected, testing.
One of the topics in our testing discussion was the current confusion in rspec-rails request specs when using Capybara. There is an open issue for this in rspec-rails' issue tracker, and discussing it in person allowed us to talk through some possible solutions, something that could otherwise take months in internet time.
rspec-rails is a gem that wraps Rails testing behaviors into RSpec's example groups. For example, the controller example group is based on ActionController::TestCase::Behavior. There are also example groups for views, helpers and so forth, but for now we are interested in the request example group, which is a wrapper for ActionDispatch::Integration::Runner. Rails' integration runner is built on top of rack-test, a great small gem that adds support for methods like delete and handles the Rack request and response objects.
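To make the layering concrete, here is a sketch of what exercising a Rack application at this level looks like, written by hand and without the actual rack-test API; rack-test and Rails' integration runner build conveniences like get and delete on top of exactly this call convention (the app and its routes are hypothetical):

```ruby
# A Rack application is just an object responding to #call, taking an
# env hash and returning [status, headers, body]. rack-test wraps this
# plumbing behind helpers such as `get`, `delete` and `last_response`.
app = lambda do |env|
  if env["REQUEST_METHOD"] == "DELETE" && env["PATH_INFO"] == "/kittens/1"
    [200, { "Content-Type" => "text/plain" }, ["deleted"]]
  else
    [404, { "Content-Type" => "text/plain" }, ["not found"]]
  end
end

# Build a minimal env hash by hand, the way a test driver would.
env = { "REQUEST_METHOD" => "DELETE", "PATH_INFO" => "/kittens/1" }
status, _headers, body = app.call(env)
```

Here `status` is `200` and `body.join` is `"deleted"`; a `GET` to an unknown path would yield `404`.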
This setup with the request example group running on top of Rails’ Integration Runner works fine until you add Capybara to your application (which is always a good idea). The issue is that Capybara by default includes its DSL in the same request example group and that’s when the confusion starts.
Capybara, being an acceptance test framework, does not expose low-level details like the request or response objects. To access a web page with Capybara, the developer uses visit (instead of get). To read the accessed page body, the developer must use page instead of manipulating the response object directly.
However, since both the Capybara DSL and Rails' integration runner are included in the same example group, both visit and get are available! Not only that: even if I visit a web page using Capybara's visit, I can still access the request and response objects that come from Rails, except that they will be blank, since Capybara uses a completely different stack to access the application.
This confusion not only happens inside each test, it also leads to a poor test suite. I have seen many, many files inside spec/requests that mix both syntaxes.
Talking to David, I expressed a possible solution to this problem based on how we have been building applications at Plataformatec. First of all, we start by having two directories: spec/requests and spec/acceptance. Since both are supported by Capybara, this (mostly) works out of the box.
Everything you want to test from the user/browser perspective goes under spec/acceptance. So if you want to test that filling in the body and title fields and pressing the "Publish" button publishes a new blog post, you test that under acceptance (protip: we usually have subdirectories inside spec/acceptance based on the application roles). Everything under spec/requests applies to the inner workings of your application. Is it returning the proper HTTP headers? Is this route streaming the correct JSON response? Also, since APIs are not part of the user/browser perspective, they are tested under spec/requests and not under spec/acceptance.
This separation of concerns already helps resolve the confusion above. Under spec/acceptance, you should use only Capybara helpers; inside spec/requests, you use the tools Rails provides. However, this does not solve the underlying problem that both sets of helpers are still included in spec/requests.
Therefore, while this blog post means to provide some guidance for those who run into such problems, we would also like to propose a solution that we discussed with David. The solution goes like this:
1) We change rspec-rails to no longer generate spec/requests, but both spec/api and spec/features (I had proposed spec/acceptance, but David pointed out those are not, strictly speaking, acceptance tests). The Capybara DSL (page and friends) should not be included in spec/api under any circumstance.
2) We change Capybara to include its DSL and RSpec matchers by default under spec/features, and change the feature method to rely on the type :features instead of :requests.
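As a sketch of what the Capybara side of this proposal might look like, the DSL would be included per example-group type rather than globally. This is a hypothetical configuration fragment, not what either gem ships today:

```ruby
# Hypothetical sketch of the proposed scoping: include Capybara's DSL
# and matchers only in feature-typed groups, leaving request/api
# groups with the plain Rails integration helpers.
RSpec.configure do |config|
  config.include Capybara::DSL,           type: :features
  config.include Capybara::RSpecMatchers, type: :features
end
```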
The proposal suggests adding two new directories instead of changing the behavior of the existing ones, in order to be backwards compatible while ensuring a safer and more semantic future for everyone. David asked me to outline our conversation in a blog post so we can gather some awareness and feedback before undergoing such changes. So, what do you think?