Posts by José Valim

Swift was recently announced by Apple and I have been reading the docs and playing with the language out of curiosity. I was pleasantly surprised by many features in the language, like the handling of optional values (and types) and immutability being promoted throughout the language.

The language also feels extensible. For extensibility, I am using the same criterion we use for Elixir: the ability to implement language constructs using the language itself.

For example, in many languages the short-circuiting && operator is defined as a special part of the language. In those languages, you can’t reimplement the operator using the constructs provided by the language.

In Elixir, however, you can implement the && operator as a macro:

defmacro left && right do
  quote do
    case unquote(left) do
      false -> false
      _ -> unquote(right)
    end
  end
end

In Swift, you can also implement operators and easily define the && operator with the help of the @auto_closure attribute:

func &&(lhs: LogicValue, rhs: @auto_closure () -> LogicValue) -> Bool {
    if lhs {
        if rhs() == true {
            return true
        }
    }
    return false
}

The @auto_closure attribute automatically wraps the tagged argument in a closure, allowing you to control when it is executed and therefore implement the short-circuiting property of the && operator.
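
To make the short-circuiting concrete, here is a hypothetical call site. loadRemoteFlag is a made-up name for illustration, and the sketch assumes the && definition above is the one the compiler resolves to:

func loadRemoteFlag() -> Bool {
    println("loadRemoteFlag() was evaluated")
    return true
}

// Because the right-hand side arrives wrapped in a closure, it only runs
// when the left-hand side is true.
let a = false && loadRemoteFlag() // prints nothing; the closure is never invoked
let b = true && loadRemoteFlag()  // prints "loadRemoteFlag() was evaluated"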

However, one of the features I suspect will actually hurt extensibility in Swift is the extensions feature. I compared the protocol implementation in Swift with the ones found in Elixir and Clojure on Twitter and, as developers asked for a more detailed explanation, I am writing this blog post as a result!

Extensions

The extension feature in Swift has many use cases, all of which you can read about in more detail in the documentation. For now, we will cover the general case and then discuss the protocol case, which is the bulk of this blog post.

Following the example from Apple’s own documentation:

extension Double {
    var km: Double { return self * 1_000.0 }
    var m: Double { return self }
    var cm: Double { return self / 100.0 }
    var mm: Double { return self / 1_000.0 }
    var ft: Double { return self / 3.28084 }
}

let oneInch = 25.4.mm
println("One inch is \(oneInch) meters")
// prints "One inch is 0.0254 meters"

let threeFeet = 3.ft
println("Three feet is \(threeFeet) meters")
// prints "Three feet is 0.914399970739201 meters"

In the example above, we are extending the Double type, adding our own computed properties. Those extensions are global and, if you are a Ruby developer, they will remind you of monkey patching in Ruby. However, in Ruby classes are always open, while here the extension is always explicit (which I personally consider to be a benefit).

What troubles extensions is exactly the fact that they are global. While I understand some extensions would be useful to define globally, they always come with the possibility of namespace pollution and name conflicts. Two libraries can define the same extension to the Double type with slightly different behavior, leading to bugs.

This has always been a hot topic in the Ruby community with Refinements being proposed in late 2010 as a solution to the problem. At this moment, it is unclear if extensions can be scoped in any way in Swift.

The case for protocols

Protocols are a fantastic feature in Swift. Per the documentation: “a protocol defines a blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality”.

Let’s see their example:

protocol FullyNamed {
    var fullName: String { get }
}

struct Person: FullyNamed {
    var fullName: String
}

let john = Person(fullName: "John Appleseed")
// john.fullName is "John Appleseed"

In the example above we defined a FullyNamed protocol and implemented it while defining the Person struct. The benefit of protocols is that the compiler can now guarantee that the struct complies with the definitions specified in the protocol. If the protocol changes in the future, you will know immediately by recompiling your project.
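
As a quick sketch (Robot is a made-up struct for illustration): if a conforming type misses a requirement, or if the protocol later gains a new one, the compiler reports it immediately:

struct Robot: FullyNamed {
    var serial: String
    // error: type 'Robot' does not conform to protocol 'FullyNamed'
    // (it is missing the fullName property the protocol requires)
}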

I have long been advocating this feature for Ruby. For example, imagine you have the following Ruby code:

class Person
  attr_accessor :first, :last

  def full_name
    first + " " + last
  end
end

And you have a method somewhere that expects an object that implements full_name:

def print_full_name(obj)
  puts obj.full_name
end

At some point, you may want to print the title too:

def print_full_name(obj)
  if title = obj.title
    puts title + " " + obj.full_name
  else
    puts obj.full_name
  end
end

Your contract has now changed, but there is no mechanism to notify implementations of such a change. This is particularly cumbersome because sometimes such changes are made by accident, when you don’t actually want to modify the contract.

This issue has happened multiple times in Rails. Before Rails 3, there was no official contract between the controller and the model and between the view and the model. This meant that, while Rails worked fine with Active Record (Rails’ built-in model layer), every Rails release could possibly break integration with other models because the contract suddenly became larger due to changes in the implementation.

Since Rails 3, we actually define a contract for those interactions, but there is still no way to:

  • guarantee an object complies with the contract (besides extensive use of tests)
  • guarantee controllers and views obey the contract (besides extensive use of tests)

Similar to real-life contracts, unless you write it down and sign it, there is no guarantee both parties will actually honor it.

The ideal solution is to be able to define multiple, tiny protocols. Someone using Swift would likely define separate protocols for the controller and view layers:

protocol URL {
    func toParam() -> String
}

protocol FormErrors {
    var errors: Dictionary<String, Array<String>> { get }
}

The interesting aspect of Swift protocols is that you can define and implement protocols for any given type, at any time. The trouble, though, is that the implementations of those protocols are defined in the class/struct itself and, as such, they change the class/struct globally.
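
For instance, reusing the FullyNamed protocol from the previous section, here is a sketch of retroactively conforming an existing type through an extension. The conformance applies to Double everywhere, not only in the code that declares it:

extension Double: FullyNamed {
    var fullName: String { return "the number \(self)" }
}

// Any Double can now be used wherever a FullyNamed value is expected.
let answer: FullyNamed = 42.0
println(answer.fullName) // prints "the number 42.0"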

Protocols and Extensions

Since protocols in Swift are implemented directly in the class/struct, be it during definition or via extension, the protocol implementation ends up changing the class/struct globally. To see the issue with this, imagine that you have two different libraries relying on different JSON protocols:

protocol JSONA {
    func toJSON(precision: Int) -> String
}

protocol JSONB {
    func toJSON(scale: Int) -> String
}

If the protocols above have different specifications on how the integer argument must be handled, we will be able to implement only one of the two. That’s because implementing either protocol means adding a toJSON(Int) method to the class/struct, and there can be only one such method per class/struct.
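
Here is a sketch of the clash, adopting both protocols on Double: a single method has to stand in for both requirements, so only one of the two behaviors can actually be honored:

extension Double: JSONA, JSONB {
    // This one method is all we get. Is its argument JSONA's precision
    // or JSONB's scale? We can only implement one of the two meanings.
    func toJSON(precision: Int) -> String {
        return "\(self)"
    }
}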

Furthermore, if implementing protocols means globally adding methods to classes and structs, it can actually hinder the use of protocols as a whole, as the concerns about name clashes and namespace pollution will speak louder than the protocol benefits.

Let’s contrast this with protocols in Elixir:

defprotocol JSONA do
  def to_json(data, precision)
end

defprotocol JSONB do
  def to_json(data, scale)
end

defimpl JSONA, for: Integer do
  def to_json(data, _precision) do
    Integer.to_string(data)
  end
end

JSONA.to_json(1, 10)
#=> "1"

Elixir protocols are heavily influenced by Clojure protocols where the implementation of a protocol is tied to the protocol itself and not to the data type implementing the protocol. This means you can implement both JSONA and JSONB protocols for the same data types and they won’t clash!

Protocols in Elixir work by dispatching on the first argument of the protocol function. So when you invoke JSONA.to_json(1, 10), Elixir checks the first argument, sees it is an integer and dispatches to the appropriate implementation.

What is interesting is that we can actually emulate this functionality in Swift! In Swift we can define the same method multiple times, as long as the type signatures do not clash. So, using static methods and extensions, we can emulate the behaviour above:

// Define a class to act as protocol dispatch
class JSON {
}

// Implement it for Double
extension JSON {
    class func toJSON(double: Double) -> String {
        return String(double)
    }
}

// Someone may implement it later for Float too
extension JSON {
    class func toJSON(float: Float) -> String {
        return String(float)
    }
}

JSON.toJSON(2.3)

The example above emulates the dynamic dispatch ability found in Elixir and Clojure, which guarantees no clashes between multiple implementations. After all, if someone defines a JSONB class, all of its implementations would live in the JSONB class.
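
To make that concrete, here is a sketch of a second library shipping its own JSONB dispatch class, coexisting with the JSON class above:

// A second library defines its own dispatch class...
class JSONB {
}

// ...and its implementations live inside JSONB, never clashing with JSON's.
extension JSONB {
    class func toJSON(double: Double) -> String {
        // a different formatting policy from JSON's could live here
        return "\(double)"
    }
}

JSON.toJSON(2.3)  // uses the JSON implementation
JSONB.toJSON(2.3) // uses the JSONB implementation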

Since dynamic dispatch is already available, we hope protocols in Swift are improved to support local implementations instead of changing classes/structs globally.

Summing up

Swift is a very new language and in active development. The documentation so far doesn’t cover topics like exceptions, the module system and concurrency, which indicates there are many more exciting aspects to build, discuss and develop.

It is the first time I am excited about doing some mobile development. Plus, the Swift playground may become a fantastic way to introduce programming.

Finally, I would personally love it if Swift protocols evolved to support non-global implementations. Protocols are a very extensible mechanism for defining and implementing contracts, and it would be a pity to see their potential hindered by the global side effects they may cause in the codebase.

Charlie Somerville recently tweeted that he wished there was a good guide about maintaining open source software.

In between consultancy jobs and building projects for our clients, our team is directly involved in many Open Source projects. So, after exchanging some messages, I realized it would be useful to document some of the common practices we follow at Plataformatec for maintaining Open Source Software.

Although this post is not a complete guide, it does share our experience of maintaining and working on the issues tracker of many projects like Rails, Simple Form, Devise, and lately Elixir.

Avoid burnout

One of the most important things when it comes to OSS is to avoid burnout. This is extremely important for small projects, which mostly depend on one or two contributors. If someone burns out in a large project, there is always someone else to take over the issues tracker and maintenance work until you gather your energy back.

For small projects though, if you need to step away, it likely means the project will go unmaintained for a while, which will just make the situation worse as issues will continue to arrive.

So a lot of what we will discuss here is meant to avoid burnout. Here are two examples:

  1. Avoid letting issues pile up – it is likely that, if the issues tracker has 30 open issues, you will feel less and less compelled to work on them;

  2. Learn how to say “no” – merging code or features you are not 100% comfortable with may make things worse. Every time there is a bug in that particular feature, you may feel less compelled to work on it, or you may not be willing to improve/refactor the code in the future, etc;

Of course, different developers have different reasons for burning out, so whenever you find yourself lacking the energy to work on a project, try to understand why the energy isn’t there and find ways to keep the problem from repeating in the future.

Contributing.md file

If you are hosting your project on GitHub, it allows you to place a Contributing.md file with instructions on how to contribute to the project. For example, here are the files for the Devise and Elixir projects.

Remember that the community wants the same thing you do: the best for your Open Source project. So having detailed instructions on how to collaborate and contribute to the project helps both you and the community. You will get more detailed bug reports, better patches/pull requests, and other developers will find it easier to give back.

However, keep in mind that there will still be incomplete or wrong reports from developers who didn’t read the Contributing.md file. That leads us to the next tip.

ABC (Always Be Closing)

If you have ever been to a sales workshop or read about how to improve your sales skills, it is very likely you have seen Alec Baldwin’s speech from Glengarry Glen Ross.

Although we are not doing any sales here, I strongly apply the concept of “Always Be Closing” to the issues tracker. I am always, actively, closing issues. Here are the main reasons I close open issues:

  • The user has not read the Contributing.md file and has posted a bug report without relevant information like OS, software versions, stacktraces, etc;

  • Users are asking questions on the issues tracker when a project has a specific channel for those, like a mailing list or a tag on Stack Overflow;

  • The proposed feature is too complex and not worth the time required to implement it. Or you don’t see the possibility of the feature being implemented in the medium term;

Usually the first thing I do in the morning is give my inbox a quick pass, looking specifically at open issues. Issues that are invalid or that cannot be acted on are closed with a proper message. In case the user provides the missing information after a bad bug report is closed, the issue will be reopened the next morning. This activity only takes a couple of minutes of your morning, which you can spend sipping your hot coffee. :)

Soon after I adopted this workflow, Vasiliy Ermolovich noticed that most of the time I was closing issues for similar reasons, so he created the extremely useful José vs OSS project, which integrates with Firefox and Chrome to help me close issues more quickly. This project has probably saved me days of work!

Ask for contributions

The community is there to help you! There are many ways you can reduce your workload by interacting with the community:

  • Depending on the project, asking for a sample application that reproduces the issue is extremely helpful. A lot of bug reports are closed after the reporter tries to reproduce the issue and, in the process, finds out the bug was caused by something specific to their application. It saves you precious hours you would otherwise spend trying to reproduce a bug you would never find!

  • If a user reports a bug, ask if they feel comfortable sending a pull request with a fix. If you have an idea of where the bug is, send the user links to the relevant parts of the code, and make it clear you are available in case there are any questions;

  • For Elixir, we are experimenting with tagging issues with their “perceived difficulty”, so users who want to contribute to the language can help solve existing issues. I often list small/medium features we plan to tackle there, and they are often implemented by the community and sent as pull requests;

Find out what works for you and the community

The important thing is to find out what works for you, the other maintainers and the community. For example, while working on Elixir, I used to close feature requests that I could see would not be part of the language in the next 6 months, even when they were features I liked too! This eventually led to some questions from the community about the approach taken in our issues tracker.

The truth is: if the feature is really useful, you will hear about it many more times! I explained the approach to the community and showed how not having such entries in the issues tracker helps us keep our focus on actual bug reports and actionable tasks, which led everyone to agree with this direction.

Try things out, listen to the feedback, and hopefully you will find a workflow that works nicely for everyone involved!

Be nice!

Finally, be nice and be polite! Let your contributors know that you appreciate their work as much as they appreciate yours!

What about you? Do you have any tips to share regarding OSS projects maintenance?

It has been reported that malicious users can perform e-mail enumeration on sign in via timing attacks, despite paranoid mode being enabled.

Whenever you try to reset your password or confirm your account, Devise gives you precise information on how to proceed: whether the given e-mail is valid, whether the token has expired, and so on. This means that, by trying any given e-mail, a third party can find out whether a particular e-mail is registered on that website or not.

While this is not a problem for many applications, some applications would like to keep their user information completely private, even if it means loss of usability on features like account confirmation. For such use cases, Devise supports something called paranoid mode, which has been reported to still be vulnerable to enumeration on sign in.

Releases

Only applications using Devise paranoid mode need to update. New releases have been made for Devise branches 3.2 (3.2.1), 3.1 (3.1.2), 3.0 (3.0.4) and 2.2 (2.2.8).

Users running on those branches who cannot upgrade immediately can fix this issue by applying this patch. Users running older versions are recommended to upgrade to a supported branch immediately.

Acknowledgements

We want to thank Tim Goddard, from YouDo Ltd for reporting the issue and working with us on a fix.

We are glad to announce that Devise 3.1.0.rc is out. In this version, we have focused on some security enhancements regarding our defaults and on the deprecation of TokenAuthenticatable. This blog post explains the rationale behind those changes and how to upgrade.

Devise 3.1.0.rc runs on both Rails 3.2 and Rails 4.0. There is a TL;DR for upgrading at the end of this post.

Note: We have yanked 3.1.0.rc and released 3.1.0.rc2, which fixes some regressions. Thanks everyone for trying out the release candidates!

Do not sign the user in after confirmation

In previous Devise versions, the user was automatically signed in after confirmation. This meant that anyone who could access the confirmation e-mail could sign into someone’s account by simply clicking the link.

Automatically signing the user in could also be harmful in the e-mail reconfirmation workflow. Imagine that a user decides to change his e-mail address and, while doing so, makes a typo in the new address. The confirmation e-mail will be sent to someone else who, with the token in hand, would be able to sign into that account.

If the user corrects the e-mail straight away, no harm will be done. But if not, someone else could sign into that account and the user would not know that it happened.

For this reason, Devise 3.1 no longer automatically signs the user in after confirmation. You can temporarily bring the old behavior back after upgrading by setting the following in your config/initializers/devise.rb:

config.allow_insecure_sign_in_after_confirmation = true

This option will be available only temporarily to aid migration.

Thanks to Andri Möll for reporting this issue.

Do not confirm account after password reset

In previous Devise versions, resetting the password automatically confirmed user accounts. This worked fine when Devise confirmed the e-mail address only on sign up, since the address that both confirmation and password reset tokens were sent to was guaranteed to be the same. With the addition of reconfirmable, this is no longer the case, so Devise will no longer confirm the account after a password reset.

Thanks to Andri Möll for reporting this issue and working with us on a fix.

CSRF on sign in

Devise’s sign in page was vulnerable to CSRF attacks when used with the rememberable feature. Note that the CSRF vulnerability is restricted to the sign in page, allowing an attacker to sign the user into an account controlled by the attacker. This vulnerability does not allow the attacker to access or change a user account in any way.

This issue is fixed in Devise 3.1.0 as well as 3.0.2 and 2.2.6. Users on previous Devise versions can patch their applications by simply defining the following in their ApplicationController:

def handle_unverified_request
  super
  Devise.mappings.each_key do |key|
    cookies.delete "remember_#{key}_token"
  end
end

Thanks to Kevin Dew for reporting this issue and working with us on a fix.

Store digested tokens in the database

In previous versions, Devise stored the tokens for confirmation, reset password and unlock directly in the database. This meant that somebody with read access to the database could use such tokens to sign in as someone else by, for example, resetting their password.

In Devise 3.1, we store an encrypted token in the database and the actual token is sent only via e-mail to the user. This means that:

  • Devise now requires a config.secret_key configuration. As soon as you boot your application under Devise 3.1, you will get an error with information about how to proceed;
  • Every time the user asks for a token to be resent, a new token will be generated;
  • The Devise mailer now receives one extra token argument on each method. If you have customized the Devise mailer, you will have to update it. All mailer views also need to be updated to use @token, as shown here, instead of getting the token directly from the resource;
  • Any token previously stored in the database will no longer work unless you set config.allow_insecure_token_lookup = true in your Devise initializer. We recommend users who are upgrading to set this option in production only for a couple of days, allowing users who just requested a token to get their job done.

Thanks to Stephen Touset for reporting this issue and working with us on a solution.

Token Authenticatable

Jay Feldblum also wrote to us to let us know that our token lookups are also vulnerable to timing attacks. Although we haven’t heard of any exploit via timing attacks on database tokens, there is a lot of research happening in this area and some attacks have been successful over the local network. For this reason, we have decided to protect applications using Devise from now on.

By digesting the confirmation, reset password and unlock tokens, as described in the previous section, we automatically protected those tokens from timing attacks.

However, we cannot digest the authentication token provided by TokenAuthenticatable, as it is often part of APIs where the token is used many times. Since the usage of the authentication token can vary considerably between applications, each requiring different safety guarantees, we have decided to remove TokenAuthenticatable from Devise, allowing users to pick the best option. This gist describes two of the available solutions.

Thanks to Jay Feldblum for reporting this issue and working with us on a solution.

TL;DR for upgrading

As soon as you update Devise, you will get a warning asking you to set your config.secret_key. By upgrading Devise, your previous confirmation, reset and unlock tokens in the database will no longer work unless you set the following option to true in your Devise initializer:

config.allow_insecure_token_lookup = true

It is recommended to leave this option on for just a couple of days, to allow tokens recently generated by your application to be consumed by users. TokenAuthenticatable has not been affected by those changes; however, it has been deprecated and you will have to move to your own token authentication mechanism.

Furthermore, the Devise mailer now receives an extra token argument on each method. If you have customized the Devise mailer, you will have to update it. All mailer views also need to be updated to use @token, as shown here, instead of getting the token directly from the resource.

With those changes, we hope to provide an even more secure authentication solution for Rails developers, while maintaining the flexibility expected from Devise.

One more thing, we’re writing a free ebook about Devise!



Devise has been reported to be vulnerable to CSRF token fixation attacks.

The attack can only be exploited if the attacker can set the target session, either via subdomain cookies (similar to what is described here) or by fixation over the same Wi-Fi network. If the attacker knows the CSRF token, cross-site request forgery attacks can be made. More information can be found here.

Note that Devise is not vulnerable to session fixation attacks (i.e. a user cannot steal another user’s session by fixating the session id).

Releases

Devise 3.0.1 and 2.2.5 have been released with fixes for the attack.

If you can’t upgrade, you must protect your Devise application by adding the following three lines to a Rails initializer:

Warden::Manager.after_authentication do |record, warden, options|
  warden.request.session.try(:delete, :_csrf_token)
end

Notice that the code above, as well as the updated Devise versions, will clean up the CSRF token after any authentication (sign in, sign up, reset password, etc). So if you are using AJAX for such features, you will need to fetch a new CSRF token from the server.

Acknowledgements

We want to thank Egor Homakov for reporting the issue and working with us on a fix.

We are very glad to announce the logos for two of our favorite Rails open source projects…

Simple Form:

Simple Form Logo

And Devise:

Devise Logo

We would like to congratulate our designer, Bruna Kochi, who was able to capture the essence of each project in their logos. We will write about their design process soon!

Those projects have been in the Rails community for almost 4 years and it was about time for them to have their own visual identity! We would like to thank all the contributors and users who have helped make those projects more robust, flexible and popular!