Posts by Rodrigo Flores

*This blog post describes how we evolved a VCS workflow into another one that suited both our needs and the customer's. The result was great: we minimized the chances of facing one of the worst problems for a developer on a project, the big integration, while maintaining an ‘almost releasable branch’ all the time.*

In the last few months we've been working on a project with a mixed development team (Plataformatec's team and the customer's team). We used a version control system, of course (specifically git), and we set up a nice git branching model for our team. As agilists, we knew we should not use anything that requires a lot of bureaucracy (things like opening a ticket to integrate a branch into the trunk).

Using the nvie guide as a base, we developed our own git workflow. First of all, we had three main branches:

  • production: contains the code that is currently in production. We also have a production server that, obviously, runs this code.
  • staging: contains the code that is being tested before going to production. We used this branch to deploy to a production-like environment that worked as a final test stage before production (this environment is also called staging).
  • master: contains the already accepted features. To consider a feature “accepted”, we deployed it to another environment (called “dev”) and asked a QA analyst to test it. Once approved, we merged the commits. This “dev” environment is used for this kind of approval and also for general purposes, like when we say: “take a look at this new awesome feature we’re developing”.

For each feature we developed, we created a git branch (we pushed almost all of them to the remote server, to make code review easier and to allow deploys to the “dev” environment). Every day, we ran git rebase master to update our branch's code (except on features developed by more than one developer). Once the feature was complete, we rebased it on top of master and merged it using --no-ff (to create a merge commit). For the branches that more than one developer worked on, we usually talked and set up a “rebase period”, in which one developer does the rebase and force-pushes to update the remote branch (the rebase rewrites the local history, so git rejects a regular push).
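For instance, updating a shared branch during one of these “rebase periods” looked roughly like this (the branch name is illustrative):

git checkout shared-feature
git rebase master
# history was rewritten, so a regular push would be rejected
git push --force origin shared-feature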

Close to the production deploy, we merged the master branch into the staging branch (always using --no-ff) and deployed to staging. Once approved, we merged staging into production, created a tag recording the current version of the application and then did the deploy. When we deployed to production, we also removed the merged feature branches from the remote repository.

One of the great advantages of this scheme is that master is always “almost” ready for a release. Yeah, some features really deserve to be validated right before the deploy, because another feature can break them, but we kept master as an “always releasable, stable” branch (and we also used a continuous integration tool in order to enforce that all tests pass). Another great advantage is that, as we updated our code every day, it was very unusual for us to face big integration scenarios.

For the “dev” environment deploy, we also set up a Capistrano task that asks which branch we want to deploy, making it possible to deploy from any branch.
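A minimal sketch of such a task, assuming Capistrano 2 (the prompt text is illustrative):

# config/deploy.rb
# ask for the branch at deploy time instead of hardcoding it
set :branch do
  Capistrano::CLI.ui.ask("Which branch do you want to deploy? ")
end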

This workflow has worked really well for us and maybe it will be useful to you too (or you can adapt it into something that works even better for you, as we did with the workflow suggested on nvie).

To summarize, this is our git workflow in commands (supposing that we are on the master branch):

git checkout -b my-awesome-feature
# (... you do some code and some commits and you go home to have some sleep or maybe play some Starcraft 2 ...)

# (arrived at the office on the next day)
git rebase master

# (... continue working and committing and sleeping (or maybe playing some Starcraft 2) ...)
git rebase master

# (... some commits ... and voila ... you've finished...)
git push origin my-awesome-feature
cap dev deploy

# (...YAY! QA analyst just approved it ...)
git fetch origin
git rebase origin/master

# (run the tests to ensure all of them pass)
git checkout master
git pull --rebase origin master
git merge --no-ff my-awesome-feature
git push origin master

#(... it is time to validate on staging)
git checkout staging
git pull origin staging
git merge --no-ff master
git push origin staging
cap staging deploy

#(... QA analysts validate the staging ...)
git checkout production
git merge --no-ff staging
git tag -a v1.4.2 -m "Releasing on 13th February"
git push origin production
git push --tags origin production
cap production deploy

Well, this is how we improved a git workflow based on another one. As with almost everything, it is not bulletproof, but we found it interesting to share this experience, as it was a success (every developer on the team enjoyed it). And please, we would love to receive some feedback about it :-). Have you used something similar in your team? Do you have any ideas on how we can improve it?

One of the great gems released in the past few months was OmniAuth. It is very easy to use, it works without tons of configuration (unless configuring XML files is your thing) and there are already plenty of resources about it on the internet.

However, it is not easy to do acceptance tests with OmniAuth, as it depends on external services to work. So what should we do? When I face a scenario like this, I split the acceptance test in two parts: one before the request to the external service and one after the external service's response.

Testing the first part is trivial: you only have to ensure there is an <a> tag with an href equal to “/auth/facebook” (or “/auth/#{insert_your_provider_here}” if you use another provider). We don't test any of the redirects done by OmniAuth's internals, because they are already covered by the gem's own tests, so we go to the next step: testing the OmniAuth callback.
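A minimal sketch of this first test, assuming Capybara with RSpec (the path and description are illustrative):

it "shows the Facebook sign in link" do
  visit root_path
  page.should have_css("a[href='/auth/facebook']")
end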

Testing OmniAuth callbacks is, in general, cumbersome, but for OAuth2 providers it is a bit easier, as OmniAuth uses Faraday internally to connect to the provider. With Faraday, we can configure a test adapter and stub calls to return whatever we want.

The OmniAuth strategy provides an entry point to the Faraday connection, but we don't have direct access to the strategy, so we need to store it globally. For a Facebook strategy, we can achieve this as below when configuring the OmniAuth middleware:

# mattr_accessor comes from ActiveSupport, which is already available in a Rails app
module OmniAuth
  mattr_accessor :facebook_strategy
end
 
middleware.use OmniAuth::Strategies::Facebook, "APP_ID", "APP_SECRET" do |strategy|
  OmniAuth.facebook_strategy = strategy
end

Note that you will need OmniAuth 0.2.0 (which currently is OmniAuth master), as previous versions don't yield a block on initialization like the above.

With access to the strategy, we can write the acceptance test. First, we define the hashes that will be returned in the Faraday responses.

FACEBOOK_INFO = {
  :id => '12345',
  :link => 'http://facebook.com/plataformatec',
  :email => 'myemail@example.com',
  :first_name => 'Plataforma',
  :last_name => 'Tecnologia',
  :website => 'http://blog.plataformatec.com.br'
}
 
ACCESS_TOKEN = {
  :access_token => "nevergonnagiveyouup"
}

Now we will define the stubs. OAuth2 strategies make two requests: one to retrieve the access token and another to retrieve the user information. In this example, let's stub the Facebook requests and assign these stubs to a new connection.

stubs = Faraday::Adapter::Test::Stubs.new do |b|
  b.post('/oauth/access_token') { [200, {}, ACCESS_TOKEN.to_json] }
  b.get('/me?access_token=nevergonnagiveyouup') { [200, {}, FACEBOOK_INFO.to_json] }
end
 
OmniAuth.facebook_strategy.client.connection = Faraday::Connection.new do |builder|
  builder.adapter :test, stubs
end

The URLs may differ for each provider, so an idea is to do this in a TDD way (or you can browse through the OmniAuth source code and see which URLs it requests):

  1. Assign the fake Faraday connection without any stubs.
  2. Run your test.
  3. Watch the test raise an exception like “No stubbed request for DESIRED_URL”.
  4. Add the stubbed request with the response that you want.
  5. Repeat this process until the test passes.

This is what we do on acceptance tests with OmniAuth: testing before and after the access to the external services.

Another approach is to do only one test by short-circuiting the provider authentication URLs. To do that in a Rails application, you can store the provider URL in a method like “OmniAuth.facebook_url” and stub that method to return the callback URL in your test (see the sketch below). If you happen to be using Devise, the upcoming 1.2 version does the short-circuiting automatically for you, as you can see in Devise's integration tests.
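A rough sketch of the idea, assuming Mocha for stubbing (everything except the OmniAuth.facebook_url name from the paragraph above is illustrative):

# In the application, the sign in link uses a method instead of a hardcoded path
module OmniAuth
  def self.facebook_url
    "/auth/facebook"
  end
end

# In the test, stub the method so following the link skips the trip to the provider
OmniAuth.stubs(:facebook_url).returns("/auth/facebook/callback")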

What about you? How do you stub OmniAuth requests and responses on your applications?

Suppose you park your car in a neighborhood with many occurrences of vandalism and crime. If you return two weeks later, there is a good chance you will find the car in the same state it was in when you left it there. Now imagine you parked the same car but with a broken window. There is a good chance you will find an engineless car, without tires and many other parts. Of course, we did not conduct this social experiment, but we know about a theory called the Broken windows theory, which says that if we have something intact, it has a tendency to remain intact; but if we have something broken, there is a tendency for it to get even more damaged over time.

But how does this apply to software development? Suppose you have some developers working on a project with about a thousand tests covering all the code, and all of them are passing. If a developer implements a feature and it breaks some tests, it seems reasonable that he will make an effort to make the necessary modifications so all tests pass, or maybe remove the tests if they don't make sense anymore. It is also expected that he will write new tests to cover the new feature.

Now suppose a different scenario: the project has a thousand tests and a hundred of them are failing. Even worse, the code coverage is about 70%. As the developer writes the new feature, there is a tendency that he won't create tests nor fix the broken tests his implementation caused. The analogy with broken windows is this: if you allow a failing test to stay there, there is a good chance that your team will stop fixing broken tests, the number of failing tests will increase over time, and you will end up with poor quality software.

OK, you have heard about the problem, but how can we solve it? A first idea is to hire a monkey that keeps an eye on the version control system and, every time there is an update, presses a key that runs all the tests. The monkey is actually very smart and it will scream if it sees red F's on the screen. Every time the monkey screams, you know that somebody committed code that broke a test, and you also know that every commit from then on will make the monkey scream. So it's time to fix the broken test. But you may have problems with Greenpeace doing that, so it is a good idea to have something that does not depend on living animals. Maybe a robot? Yes, this is what we call Continuous Integration tools.

Continuous integration is what we call the process in which developers integrate their code continuously. A continuous integration tool is an application that executes a build every time the code is updated in version control. This build can watch for compilation/interpretation errors, failing tests, performance issues or anything else you may think reasonable to watch.

There are lots of continuous integration solutions suitable for Ruby (and consequently Ruby on Rails), some Java-based (Hudson) and some written in Ruby (CruiseControl.rb or Signal). We tried Signal at first, but we had some problems related to the gem environment (we could not make it use the project's gemset instead of Signal's own gemset). So we tried the minimalistic CI Joe.

CI Joe is a CI server created by the GitHubber Chris Wanstrath. It is Sinatra-based and does not even need a database to run: all of its configuration is recorded in the .git/config file and it uses git hooks to run the desired actions. There are only two requirements: your project must use git for version control, and the build must return a zero shell status for success and anything different from zero for failure. You can use it with any programming language or framework. I strongly encourage you to configure your repository to trigger the build every time you do a git push (you can do that simply by doing a POST to CI Joe's URL).
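For instance, a post-receive hook on the central repository can trigger the build with a plain POST (the host and port below are hypothetical; CI Joe starts a build on a POST to its root URL):

#!/bin/sh
# .git/hooks/post-receive on the central repository
curl -s -X POST http://ci.example.com:4567/ > /dev/null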

The most awesome thing about CI Joe is that it is dead simple to configure it to do almost anything. You can configure Campfire notifications, e-mail deliveries or a buzzer that goes off when the build fails. The only requirement is that a shell command for it exists (if it requires a sequence of shell commands, you can join them using ‘;’). A great idea is to integrate electronic devices like LEDs, buzzers or even a traffic light.
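As a sketch, assuming CI Joe's build-failed hook is run when a build breaks, the notification command below stands in for whatever shell command you prefer:

#!/bin/sh
# .git/hooks/build-failed -- executed by CI Joe when the build fails
notify-send "CI Joe" "Build failed!"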

The project's comprehensive README has a lot of examples, including how to protect it with HTTP basic authentication and how to configure a queue of build requests. For our projects, we usually customize it by setting a git after-reset hook to execute “bundle install” (CI Joe always executes this git hook before the build starts) and then modifying the HTML template to include the project logo. Next, CI is ready to rock and roll!
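That after-reset hook is just an executable shell script, along these lines:

#!/bin/sh
# .git/hooks/after-reset -- executed by CI Joe before every build
bundle install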

And you? What do you think about CI JOE? Are you customizing it in any way? Are you using another CI solution for your Ruby on Rails projects?