When we start programming in Ruby, one of the first niceties we learn about is Ruby blocks. In the beginning, it's easy to get tricked by the two existing forms of blocks and when to use each:

%w(a b c).each { |char| puts char }
%w(a b c).each do |char| puts char end

The Ruby community has sort of created a "guideline" for when to use one form versus the other: for short or inline blocks, use curly brackets { }; for longer or multiline blocks, use the do..end format. But did you know there is actually a slight difference between them? Sit tight, we'll cover it now.

Operator Precedence

Languages have operators, and these operators obey precedence rules so that the interpreter knows the order of evaluation: an operator with higher precedence is evaluated before operators with lower precedence. Consider the following example:

a || b && c

Which operation gets evaluated first, a || b or b && c? This is where operator precedence comes into play. In this case, the code is equivalent to this:

a || (b && c)

This means && has higher precedence than || in Ruby. However, if you want the condition a || b to be evaluated first, you can enforce it with parentheses:

(a || b) && c

This way you are explicitly telling the interpreter that the condition inside the parentheses should be evaluated first.
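
To see the difference in practice, here is a small illustration of my own with values for which the two groupings give different results:

a, b, c = true, false, false

a || b && c    # => true,  read as a || (b && c)
(a || b) && c  # => false, (a || b) is evaluated first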

What about blocks?

It turns out blocks have precedence too! Let's see an example that mimics the Rails router with the redirect method:

def get(path, options = {}, &block)
  puts "get received block? #{block_given?}"
end
 
def redirect(&block)
  puts "redirect received block? #{block_given?}"
end
 
puts '=> brackets { }'
get 'eggs', to: redirect { 'eggs and bacon' }
 
puts
 
puts '=> do..end'
get 'eggs', to: redirect do 'eggs and bacon' end

This example shows a rather common pattern in Rails apps: a get route that redirects to some other route in the app (some arguments from the real redirect call were omitted for clarity). All these methods do is output whether they received a block or not.

At a glance these two calls to get + redirect could be considered exactly the same, but they behave differently because of block precedence. Can you guess the output? Take a look:

=> brackets { }
redirect received block? true
get received block? false
 
=> do..end
redirect received block? false
get received block? true

Curly brackets have higher precedence than do..end, which means a block written with {..} will attach to the inner method call, in this example redirect, whereas a do..end block will attach to the outer method call, get.
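
If you prefer do..end for a longer block but need it to attach to the inner call, you can make the grouping explicit with parentheses. A minimal sketch, reusing the get and redirect methods defined above:

# Wrapping the inner call in parentheses makes the do..end block part of
# that expression, so it attaches to redirect instead of get.
get 'eggs', to: (redirect do 'eggs and bacon' end)
# prints:
#   redirect received block? true
#   get received block? false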

Wrapping up

This blog post originated from a real Rails issue, where you can read a little bit more about the subject and see that even Rails got it wrong in its documentation (which is now fixed). The precedence is a subtle but important difference between {..} and do..end blocks, so be careful not to be caught off guard by it.

Do you know any other interesting facts about Ruby blocks that people may not be aware of? Or maybe you learned something tricky about Ruby recently? Please share it in the comments section, we would love to hear about it.


This post is part of a collection of posts we're publishing on the subjects of low internal software quality, refactoring, and rewrites.

Not only physical matter deteriorates, software does too

It’s known that physical matter deteriorates. People accept that and have always dealt with it. What people don’t accept so easily is that software “deteriorates” too. Unlike physical matter, it doesn’t happen due to some physical or chemical phenomenon. It usually happens because of some business change or people change. Let me give you an example.

Imagine you’re leading the tech or product team of a startup; you’re the CTO. You already launched your product’s first version, and it was a success. Your business model was validated, and now you’re in a growth stage. That’s awesome! But it has its costs, and it brings a new set of challenges.

The first version of your product is working, but the codebase is not in the shape you'll need from now on. Maybe your team's velocity is not as good as it used to be. Your team keeps complaining about the code quality. The CEO and the product director want new features, and your current projections will not meet the business needs.

It’s not uncommon that one of the main sources of all these problems is the poor quality of your product’s codebase. You may need a refactor1 or a rewrite.

When the codebase is not in good shape, everyone can get frustrated

If the internal quality of your product is not good, everyone becomes frustrated.

Your whole team, including developers, will get frustrated because they would like to ship features faster, but the current code quality and architecture are not helping.

The IT, product, and software departments suffer because they’re not able to meet the expectations of the other departments.

The customer also suffers because of frequent bugs, how long it takes for them to be resolved, and how long it takes new features to be launched.

You get the picture.

Identifying the symptoms

It’s the leader’s job (let’s say the CTO) to identify when a refactor or a rewrite is needed. In order to do that, he or she can look around for some symptoms, like the ones below:

  • Everything is hard: Almost every feature or bug fix your team needs to do is hard. It was not always like that. You remember the good old days when your team was fast and everything ran smoothly.
  • Slow velocity: Your team’s velocity decreased or is decreasing. When you were building the first version of your product, it was fast to develop a new feature, and your team used to build lots of them every iteration. Now it’s different.
  • Slow test suite: Your test suite takes 10x, 20x, 30x more time to run than before.
  • Bugs that don’t go away: Your team fixes a bug, then in a week or so it appears again. Every now and then your team is fixing a regression bug.
  • Your team is demotivated: Your team keeps complaining that working on the project is not as productive as it was in the past. A single person can't build one feature alone; there are too many moving parts.
  • Knowledge silos: There are some parts of the software that only a single developer knows well enough to maintain. It’s difficult for the rest of the team to work with that specific code.
  • New developer ramp-up time is taking too long: When new developers join the team, it takes too much time for them to be fully productive.

The reason you got into one of these situations is probably not a technical one. Maybe you needed to deliver too much, too fast while you were building the first version of your product. Maybe your team didn't have the maturity and experience in the past that they have now. Analyzing the root cause is important too, but you need to do something else. You need to solve your problem.

If you're experiencing the symptoms above, you probably have a low internal software quality problem. Recognizing the symptoms is already a big step. The next step is to think of solutions, such as a refactoring or a rewrite process.

Refactor or rewrite?

There’s no definitive guide about when you should do a big refactor or a rewrite, because it depends a lot on your context. That said, there are some rules of thumb that you should consider when evaluating which solution to go with:

When to rewrite

  • The technology you use is outdated, and it’s not maintained anymore.
  • Your software is really slow, and changing the architecture is not enough or is not viable.
  • The supply of software developers that know the technology you use is low and decreasing.
  • There are new technologies that offer a significant advantage compared to what you’re using.

When to refactor

  • The technology you use is still maintained and relevant.
  • It’s viable to improve your application in an incremental fashion.
  • The problem you’re solving is just technical and not a business one.

Choosing one of these options is not an easy decision, and once you go with one of them, there will be an entirely new set of concerns you'll encounter. Stay tuned: in our next blog posts we'll talk about what to consider when doing a big refactor or a rewrite.

Now I would like to know about your experiences. Have you ever been in a similar situation? How did you identify that your problem was low internal software quality? Please share with us!


  1. I prefer the term “code refurbishment”, but people aren’t generally used to it. So I’ll use refactoring in this blog post for the sake of clarity. 

Swift was recently announced by Apple, and I have been reading the docs and playing with the language out of curiosity. I was pleasantly surprised by many features in the language, like the handling of optional values (and types) and the way immutability is promoted throughout the language.

The language also feels extensible. For extensibility, I am using the same criterion we use for Elixir, which is the ability to implement language constructs using the language itself.

For example, in many languages the short-circuiting && operator is defined as a special part of the language. In those languages, you can't reimplement the operator using the constructs provided by the language.

In Elixir, however, you can implement the && operator as a macro:

defmacro left && right do
  quote do
    case unquote(left) do
      false -> false
      _ -> unquote(right)
    end
  end
end

In Swift, you can also implement operators and easily define the && operator with the help of the @auto_closure attribute:

func &&(lhs: LogicValue, rhs: @auto_closure () -> LogicValue) -> Bool {
    if lhs {
        if rhs() == true {
            return true
        }
    }
    return false
}

The @auto_closure attribute automatically wraps the tagged argument in a closure, allowing you to control when it is executed and therefore implement the short-circuiting property of the && operator.

However, one feature that I suspect will actually hurt extensibility in Swift is extensions. I compared the protocols implementation in Swift with the ones found in Elixir and Clojure on Twitter and, as developers asked for a more detailed explanation, I am writing this blog post as a result!

Extensions

The extension feature in Swift has many use cases. You can read them all in more detail in their documentation. For now, we will cover the general case and discuss the protocol case, which is the bulk of this blog post.

Following the example from Apple's own documentation:

extension Double {
    var km: Double { return self * 1_000.0 }
    var m: Double { return self }
    var cm: Double { return self / 100.0 }
    var mm: Double { return self / 1_000.0 }
    var ft: Double { return self / 3.28084 }
}

let oneInch = 25.4.mm
println("One inch is \(oneInch) meters")
// prints "One inch is 0.0254 meters"

let threeFeet = 3.ft
println("Three feet is \(threeFeet) meters")
// prints "Three feet is 0.914399970739201 meters"

In the example above, we are extending the Double type, adding our own computed properties. Those extensions are global and, if you are a Ruby developer, they will remind you of monkey patching in Ruby. However, in Ruby classes are always open, while here the extension is always explicit (which I personally consider to be a benefit).
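
For comparison, here is a rough Ruby parallel via monkey patching (a sketch of mine, not from Apple's documentation): reopening Float adds the methods globally, and nothing at the call site indicates they came from elsewhere.

class Float
  def km; self * 1_000.0; end
  def m;  self;           end
  def cm; self / 100.0;   end
  def mm; self / 1_000.0; end
end

one_inch = 25.4.mm
puts "One inch is #{one_inch} meters"
# prints "One inch is 0.0254 meters"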

What troubles extensions is exactly the fact that they are global. While I understand some extensions would be useful to define globally, they always come with the possibility of namespace pollution and name conflicts. Two libraries can define the same extensions to the Double type that behave slightly differently, leading to bugs.

This has always been a hot topic in the Ruby community, with Refinements being proposed in late 2010 as a solution to the problem. At this moment, it is unclear if extensions can be scoped in any way in Swift.
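
For reference, here is a minimal sketch of what refinements look like in Ruby today (the module name is illustrative): the patch is only visible where it is explicitly activated with using.

module Metric
  refine Float do
    def mm
      self / 1_000.0
    end
  end
end

using Metric  # activates the refinement for the rest of this file
puts 25.4.mm  # prints 0.0254
# In a file that never calls `using Metric`, 25.4.mm raises NoMethodError.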

The case for protocols

Protocols are a fantastic feature in Swift. Per the documentation: “a protocol defines a blueprint of methods, properties, and other requirements that suit a particular task or piece of functionality”.

Let’s see their example:

protocol FullyNamed {
    var fullName: String { get }
}

struct Person: FullyNamed {
    var fullName: String
}

let john = Person(fullName: "John Appleseed")
// john.fullName is "John Appleseed"

In the example above we defined a FullyNamed protocol and implemented it while defining the Person struct. The benefit of protocols is that the compiler can now guarantee the struct complies with the definitions specified in the protocol. If the protocol changes in the future, you will know immediately when you recompile your project.

I have long been advocating this feature for Ruby. For example, imagine you have the following Ruby code:

class Person
  attr_accessor :first, :last

  def full_name
    first + " " + last
  end
end

And you have a method somewhere that expects an object that implements full_name:

def print_full_name(obj)
  puts obj.full_name
end

At some point, you may want to print the title too:

def print_full_name(obj)
  if title = obj.title
    puts title + " " + obj.full_name
  else
    puts obj.full_name
  end
end

Your contract has now changed, but there is no mechanism to notify implementations of the change. This is particularly cumbersome because sometimes such changes happen by accident, when you don't actually intend to modify the contract.
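
The best you can do in plain Ruby is a hand-rolled runtime check with respond_to?; the helper below is a hypothetical sketch of mine, not something Ruby or Rails provides, and it only fails when the code runs, never when Person is defined.

# Hypothetical helper: raises if the object does not respond to the methods
# this "contract" expects. Nothing warns the implementer when the list grows.
def assert_full_name_contract(obj)
  missing = [:full_name, :title].reject { |method| obj.respond_to?(method) }
  raise ArgumentError, "#{obj.class} is missing #{missing.join(', ')}" if missing.any?
end

def print_full_name(obj)
  assert_full_name_contract(obj) # fails only at runtime

  if title = obj.title
    puts title + " " + obj.full_name
  else
    puts obj.full_name
  end
end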

This issue has happened multiple times in Rails. Before Rails 3, there was no official contract between the controller and the model and between the view and the model. This meant that, while Rails worked fine with Active Record (Rails’ built-in model layer), every Rails release could possibly break integration with other models because the contract suddenly became larger due to changes in the implementation.

Since Rails 3, we actually define a contract for those interactions, but there is still no way to:

  • guarantee an object complies with the contract (besides extensive use of tests)
  • guarantee controllers and views obey the contract (besides extensive use of tests)

Similar to real-life contracts, unless you write it down and sign it, there is no guarantee both parties will actually honor it.

The ideal solution is to be able to define multiple, tiny protocols. In Swift, for example, one could define separate protocols for the controller and view layers:

protocol URL {
    func toParam() -> String
}

protocol FormErrors {
    var errors: [String: [String]] { get }
}

The interesting aspect of Swift protocols is that you can define and implement protocols for any given type, at any time. The trouble, though, is that the implementations of the protocols are defined in the class/struct itself and, as such, they change the class/struct globally.

Protocols and Extensions

Since protocols in Swift are implemented directly in the class/struct, be it during definition or via extension, the protocol implementation ends up changing the class/struct globally. To see the issue with this, imagine that you have two different libraries relying on different JSON protocols:

protocol JSONA {
    func toJSON(precision: Int) -> String
}

protocol JSONB {
    func toJSON(scale: Int) -> String
}

If the protocols above have different specifications on how the precision argument must be handled, we will be able to implement only one of the two. That's because implementing either protocol means adding a toJSON(Int) method to the class/struct, and there can be only one such method per class/struct.

Furthermore, if implementing protocols means globally adding methods to classes and structs, it can actually hinder the use of protocols as a whole, as the concerns about name clashes and namespace pollution will speak louder than the protocol benefits.

Let’s contrast this with protocols in Elixir:

defprotocol JSONA do
  def to_json(data, precision)
end

defprotocol JSONB do
  def to_json(data, scale)
end

defimpl JSONA, for: Integer do
  def to_json(data, _precision) do
    Integer.to_string(data)
  end
end

JSONA.to_json(1, 10)
#=> "1"

Elixir protocols are heavily influenced by Clojure protocols where the implementation of a protocol is tied to the protocol itself and not to the data type implementing the protocol. This means you can implement both JSONA and JSONB protocols for the same data types and they won’t clash!

Protocols in Elixir work by dispatching on the first argument of the protocol function. So when you invoke JSONA.to_json(1, 10), Elixir checks the first argument, sees it is an integer and dispatches to the appropriate implementation.

What is interesting is that we can actually emulate this functionality in Swift! In Swift we can define the same method multiple times, as long as the type signatures do not clash. So if we use class methods and extensions, we can emulate the behaviour above:

// Define a class to act as protocol dispatch
class JSON {
}

// Implement it for Double
extension JSON {
    class func toJSON(double: Double) -> String {
        return String(double)
    }
}

// Someone may implement it later for Float too
extension JSON {
    class func toJSON(float: Float) -> String {
        return String(float)
    }
}

JSON.toJSON(2.3)

The example above emulates the dynamic dispatch ability found in Elixir and Clojure, which guarantees no clashes between multiple implementations. After all, if someone defines a JSONB class, all of its implementations would live in the JSONB class.

Since dynamic dispatch is already available, we hope protocols in Swift are improved to support local implementations instead of changing classes/structs globally.

Summing up

Swift is a very new language and in active development. The documentation so far doesn’t cover topics like exceptions, the module system and concurrency, which indicates there are many more exciting aspects to build, discuss and develop.

It is the first time I am excited to do some mobile development. Plus, the Swift playground may become a fantastic way to introduce programming.

Finally, I would personally love it if Swift protocols evolved to support non-global implementations. Protocols are a very extensible mechanism to define and implement contracts, and it would be a pity to see their potential hindered by the global side-effects they may cause in the codebase.

One of the most common questions discussed in the Agile community is: what should be done when a team doesn't finish a user story (US) in a sprint? How can people track the progress made on an incomplete user story? In this blog post, I'll share our approach to these questions.

According to the community, when a developer finishes their work in the last few hours of an iteration, they should first try to help their teammates finish their work. Otherwise, it's recommended they help prepare the next cycle of work: analyzing the next user stories, refactoring a piece of code that could be better implemented, or writing tests. It is not advisable for a developer to start a new user story if they won't be able to finish it in the same cycle. However, this first approach is not always possible, because user stories can be underestimated or something can happen that delays the delivery of the user story.

A second alternative is to split the user story into two smaller ones and develop the one that can be finished on time. The first user story's points are credited in the current cycle, and the second one's in the next cycle. This approach improves the visibility of what was done in the current cycle. However, it hurts the agile philosophy, since in a way it results in a delivery without business value for the customer.

The third way is for the unfinished user story to go to the next cycle with the original estimate. When it gets completed, the user story's full effort estimate gets credited to the velocity of the new iteration. This could skew the average velocity metric, so be careful, because this metric is important to the Product Owner (PO) for forecasting and planning. Also beware of having a bunch of backlog items that are almost done: one user story delivered has more value than a lot of user stories that are 90% complete.

How do we do it?

Usually, we use the first and third approaches in the following way:

  • We try to concentrate efforts on work that is closest to delivery. As soon as a developer finishes the first US, they will verify if someone needs help with finishing a task or if some user story in the current cycle has defects that need to be fixed. Keeping the work-in-progress as low as possible helps to focus on what matters most. This process is repeated until the end.
  • If this list is empty and the cycle is almost over, the developer looks for the smallest or most valuable user story (depending on the project's context) to work on.
  • If the user story has not been finished by the end of the cycle, this US shifts to the next cycle with the original estimate. However, when we plan the next cycle, we’ll consider just the missing points to finish the US.
  • When this US gets finished, we credit the whole user story’s estimate in our velocity.
  • If, after resuming work on the US in the next iteration, the developer realizes that the user story was overestimated or underestimated, we usually don't change the estimate on the story itself, but we update our estimation ruler with the real estimate, as a lesson learned.

Note that we do not use these exact steps every single time; everything depends on and adapts to the context of the project and the moment. The most important thing is to prioritize delivering maximum business value to the customer.

And you? What do you do when you have an unfinished user story in your cycle? Share your experiences with us!

One year ago I joined Plataformatec, and today I'm going to tell you about some practices I have learned while working here over this past year. I hope you'll find something helpful to improve your team or your company.

Sustainable work hours

Some companies expect their employees to work overtime when a project gets close to the deadline and it's not finished yet. When I came to Plataformatec, I discovered that the default here is to work in a sustainable way (40 hours/week). There have been a few exceptions, but we are not encouraged to work overtime, and the extra hours can be compensated as well.

Software development tools and practices

At Plataformatec we choose to start with a small set of basic tools and practices, adapting as we learn about the project and the customer.

One of the practices that I liked the most and had no previous experience with is retrospective meetings. If you have never heard of them, it's simple! It's a meeting we hold at the end of an iteration to review everything that happened: not only the good things, but also the ones we need to do better, so that we can plan actions to continually improve. For me, it is a very pleasant meeting because it's an opportunity to recognize the good work someone has done or to search for a solution to a recurring problem. One interesting fact is that we also use retrospective meetings internally, for the whole company itself.

We don’t cut corners

We really care about our customer’s products and the code quality. We don’t like adding ineffective, incomprehensible or quick-and-dirty code (also known in Brazil as “gambiarra”), because we know that it may cause problems in the future.

We know that it's not easy to avoid bad code and beginner mistakes, and there is no definitive solution for this kind of problem. Here at Plataformatec we use two tools to achieve a greater level of code quality: our guidelines and a simple process called code review via pull request. If you're going to adopt just one of these practices, choose code review.

Every single line of code matters; that's why it's better not to put all that responsibility on just one person's shoulders. The code review process can help with that. When doing code reviews, everyone in the team is accountable for the code that is being shipped, not just the developer who created the pull request.

Before working here, I thought pull requests were only an open source practice that had no place in commercial projects. It was really mind-blowing to see how it works on a daily basis. We reduce errors, ask more experienced programmers for help, learn about features of frameworks and languages, discuss algorithms… it is an awesome communication tool, and the best part is that all that communication happens in the context of the source code.

Focus

It’s simple: we do software development, we work hard to improve the way we do it and to become one of the best references on it. The whole company works as a team, and everyone shares the same goal. It means that the salespeople close the deals with terms that developers and managers can work in a healthy and challenging environment, and, most importantly, the customers know what to expect from us.

Knowledge sharing

It was a surprise to me when I came here and saw that all of my new coworkers love the programming community and that most of their open source work is done during their free time. We are frequently encouraged to contribute to open source, attend local and international events, write blog posts, present talks, and discuss software development subjects at our Hacking Evening every Tuesday. Best of all, you don't need to do it alone; there is always someone to lend you a hand. Knowing these people and working with them motivates me and helps me love what I do every day even more.

That’s all I had to say for now. Working at Plataformatec has taught me a lot of good lessons and I know I still have a lot of things to learn. I selected the practices that I liked the most and wanted to share.

Do you have any practices that you appreciate? Tell us about them in the comments below!

Last month our team went to Agile Trends, an event in São Paulo where discussions revolved mainly around agility in software projects.


Opening the event activities, Niels Pflaeging presented a keynote in which he argued that Agile alone is not sufficient to take organizations into the Knowledge Era, and that deeper transformations are necessary for that. He started with an overview of the history of management, why bigger companies are structured the way we know them, and what he believes needs to change so they can adapt to the new reality and remain competitive.


After the keynote, the so-called "Trend Talks" started. Each Trend Talk session was composed of two 18-minute talks revolving around the same theme, followed by a 20-minute discussion round. In the Trend Talks, most of what was presented and discussed was related to the challenges companies face when adopting agile.

Most of the success cases presented showed how companies adopted the Agile principles but adapted the practices to their own realities, be it a startup or a bigger corporation. This is similar to what we've been doing in our own projects: whenever we start a new engagement we use our custom methodology as a starting point and, as we get to know more about our clients, we adapt it to better suit their realities.

Other presentations showcased some of the change management challenges that companies have to face in order to make this work. These topics are similar to our own challenges when engaging in client projects, and it was interesting to hear about the struggles of other professionals so we can make sure to always keep our toolbox up to date.


The event also included some workshops, and we joined one that had Kanban as its theme. In this workshop, we honed our skills while running an imaginary pizza place (too bad the pizzas weren't real) and, as we progressed in the game, we could also buy adjustments to our workflow in an "upgrade store" using the cash we made. The point was to get us thinking about how our decisions affected the business's performance and profitability. Besides that, we also joined a workshop about Learning 3.0, where the participants were divided into small groups and each group had a real problem to work on. We had to understand the problem, share experiences related to it, and discuss ideas about how to solve it. In this workshop we had the opportunity to work with people from other companies with different backgrounds, which contributed to bringing different perspectives to the discussions.

Finally, we also attended some interesting keynotes addressing themes that were not directly related to Agile, such as internet privacy and the Internet of Things. Even though they were not directly related to project management methodologies, they were nice in the sense that they help us keep our minds open to other trends in the tech landscape.


Being part of the event was a great opportunity to see how events related to Agile have gained traction during the past couple of years. It was also nice meeting old friends there, as well as meeting new people.

Have you also been to Agile Trends? Did any of the talks particularly catch your attention? Please let us know in the comments.