Plataformatec Blog: Plataformatec's place to talk about Ruby, Ruby on Rails and software engineering

Introducing reducees
Thu, 21 May 2015

Elixir provides the concept of collections, which may be in-memory data structures, as well as events, I/O resources and more. Those collections are supported by the Enumerable protocol, which is an implementation of an abstraction we call "reducees".

In this article, we will outline the design decisions behind this abstraction, often exploring ideas from Haskell, Clojure and Scala that eventually led us to develop reducees, focusing especially on the constraints and performance characteristics of the Erlang Virtual Machine.

At the end, there is a link to a talk I recently gave at ElixirConf EU 2015, the first Elixir conference in Europe, which explores the next steps and how we plan to introduce asynchrony into the standard library.

Recursion and Elixir

Elixir is a functional programming language that runs on the Erlang VM. All the examples in this article will be written in Elixir, although we will introduce the concepts bit by bit.

Elixir provides linked lists. Lists can hold many items and, with pattern matching, it is easy to extract the head (the first item) and the tail (the rest) of a list:

iex> [h|t] = [1, 2, 3]
[1, 2, 3]
iex> h
1
iex> t
[2, 3]

An empty list won’t match the pattern [h|t]:

[h|t] = []
** (MatchError) no match of right hand side value: []

Suppose we want to recurse every element in the list, multiplying each element by 2. Let’s write a double function:

defmodule Recursion do
  def double([h|t]) do
    [h * 2|double(t)]
  end

  def double([]) do
    []
  end
end

The function above recursively traverses the list, doubling the head at each step and invoking itself with the tail. We could define a similar function if we wanted to triple every element in the list but it makes more sense to abstract our current implementation. Let’s define a function called map that applies a given function to each element in the list:

defmodule Recursion do
  def map([h|t], fun) do
    [fun.(h)|map(t, fun)]
  end

  def map([], _fun) do
    []
  end
end

double could now be defined in terms of map as follows:

def double(list) do
  map(list, fn x -> x * 2 end)
end

Manually recursing the list is straightforward but it doesn't really compose. Imagine we would like to implement other functional operations like filter, reduce, take and so on for lists. Then imagine we introduce sets, dictionaries and queues into the language and would like to provide the same operations for all of them.

Instead of manually implementing all of those operations for each data structure, it is better to provide an abstraction that allows us to define those operations only once, and they will work with different data structures.

That’s our next step.

Introducing Iterators

The idea behind iterators is that we ask the data structure what is the next item until the data structure no longer has items to emit.

Let’s implement iterators for lists. This time, we will be using Elixir documentation and doctests to detail how we expect iterators to work:

defmodule Iterator do
  @doc """
  Each step needs to return a tuple containing
  the next element and a payload that will be
  passed back to `next` the next time around.
  `:done` is returned when there are no items left.

      iex> next([1, 2, 3])
      {1, [2, 3]}
      iex> next([2, 3])
      {2, [3]}
      iex> next([3])
      {3, []}
      iex> next([])
      :done

  """
  def next([h|t]) do
    {h, t}
  end

  def next([]) do
    :done
  end
end

We can implement map on top of next:

def map(collection, fun) do
  map_next(next(collection), fun)
end

defp map_next({h, t}, fun) do
  [fun.(h)|map_next(next(t), fun)]
end

defp map_next(:done, _fun) do
  []
end

Since map uses the next function, as long as we implement next for a new data structure, map (and all future functions) should work out of the box. This brings the polymorphism we desired but it has some downsides.
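To see that polymorphism in action, here is a sketch for a hypothetical range collection represented as a {:range, from, to} tuple (the representation is ours, purely for illustration; in practice this dispatch would go through a protocol). The map functions are copied unchanged from above:

```elixir
defmodule RangeIterator do
  # Hypothetical range collection: {:range, from, to}.
  # Implementing next/1 is all that map needs to work.
  def next({:range, from, to}) when from <= to do
    {from, {:range, from + 1, to}}
  end

  def next({:range, _from, _to}) do
    :done
  end

  # Same map as before, written only against next/1.
  def map(collection, fun) do
    map_next(next(collection), fun)
  end

  defp map_next({h, t}, fun) do
    [fun.(h)|map_next(next(t), fun)]
  end

  defp map_next(:done, _fun) do
    []
  end
end

RangeIterator.map({:range, 1, 3}, fn x -> x * 2 end)
#=> [2, 4, 6]
```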

Besides not having ideal performance, it is quite hard to make iterators work with resources (events, I/O, etc), leading to messy and error-prone code.

The trouble with resources is that, if something goes wrong, we need to tell the resource that it should be closed. After all, we don’t want to leave file descriptors or database connections open. This means we need to extend our next contract to introduce at least one other function called halt.

halt should be called if the iteration is interrupted suddenly, either because we are no longer interested in the next items (for example, if someone calls take(collection, 5) to retrieve only the first five items) or because an error happened. Let’s start with take:

def take(collection, n) do
  take_next(next(collection), n)
end

# Invoked on every step
defp take_next({h, t}, n) when n > 0 do
  [h|take_next(next(t), n - 1)]
end

# If we reach this, the collection finished
defp take_next(:done, _n) do
  []
end

# If we reach this, we took all we cared about before finishing
defp take_next(value, 0) do
  halt(value) # Invoke halt as a "side-effect" for resources
  []
end

Implementing take is somewhat straightforward. However, we also need to modify map, since every step of the user-supplied function can fail. Therefore we need to make sure we call halt at every possible step in case of failures:

def map(collection, fun) do
  map_next(next(collection), fun)
end

defp map_next({h, t}, fun) do
  [try do
     fun.(h)
   rescue
     e ->
       # Invoke halt as a "side-effect" for resources
       # in case of failures and then re-raise
       halt(t)
       raise(e)
   end|map_next(next(t), fun)]
end

defp map_next(:done, _fun) do
  []
end

This is neither elegant nor performant. Furthermore, it is very error-prone: if we forget to call halt at some particular point, we can end up with a dangling resource that may never be closed.

Introducing reducers

Not long ago, Clojure introduced the concept of reducers.

Since Elixir protocols were heavily inspired by Clojure protocols, I was very excited to see their take on collection processing. Instead of imposing a particular mechanism for traversing collections, as iterators do, reducers are about sending computations to the collection so the collection applies the computation to itself. From the announcement: "the only thing that knows how to apply a function to a collection is the collection itself".

Instead of using a next function, reducers expect a reduce implementation. Let’s implement this reduce function for lists:

defmodule Reducer do
  def reduce([h|t], acc, fun) do
    reduce(t, fun.(h, acc), fun)
  end

  def reduce([], acc, _fun) do
    acc
  end
end

With reduce, we can easily calculate the sum of a collection:

def sum(collection) do
  reduce(collection, 0, fn x, acc -> x + acc end)
end

We can also implement map in terms of reduce. The list, however, will be reversed at the end, requiring us to reverse it back:

def map(collection, fun) do
  reversed = reduce(collection, [], fn x, acc -> [fun.(x)|acc] end)
  # Call Erlang reverse (implemented in C for performance)
  :lists.reverse(reversed)
end
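Other operations fit the same mold. For example, here is a sketch of filter written in terms of reduce, with the list reduce inlined so the snippet is self-contained:

```elixir
defmodule ListReducer do
  def reduce([h|t], acc, fun), do: reduce(t, fun.(h, acc), fun)
  def reduce([], acc, _fun), do: acc

  # filter keeps the elements for which fun returns a truthy value;
  # like map, it builds the result reversed and reverses it back.
  def filter(collection, fun) do
    reversed =
      reduce(collection, [], fn x, acc ->
        if fun.(x), do: [x|acc], else: acc
      end)
    :lists.reverse(reversed)
  end
end

ListReducer.filter([1, 2, 3, 4], fn x -> rem(x, 2) == 0 end)
#=> [2, 4]
```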

Reducers provide many advantages:

  • They are conceptually simpler and faster
  • Operations like map, filter, etc. are easier to implement than their iterator counterparts since the recursion is pushed into the collection instead of being part of every operation
  • They open the door to parallelism, as their operations are no longer serial (in contrast to iterators)
  • No conceptual changes are required to support resources as collections

The last bullet is the most important for us. Because the collection is the one applying the function, we don't need to change map to support resources; all we need to do is implement reduce itself. Here is a pseudo-implementation of reducing a file line by line:

def reduce(file, acc, fun) do
  descriptor = File.open!(file)

  try do
    reduce_next(IO.readline(descriptor), acc, fun)
  after
    File.close(descriptor)
  end
end

defp reduce_next({line, descriptor}, acc, fun) do
  reduce_next(IO.readline(descriptor), fun.(line, acc), fun)
end

defp reduce_next(:done, acc, _fun) do
  acc
end

Even though our file reducer uses something that looks like an iterator (because that's the best way to traverse a file), from the map function's perspective we don't care which operation is used internally. Furthermore, it is guaranteed that the file is closed after reducing, regardless of success or failure.

There are, however, two issues with bringing reducers as proposed in Clojure into Elixir.

First of all, some operations like take cannot be implemented in a purely functional way. For example, Clojure relies on reference types in its take implementation. This may not be an issue depending on the language/platform (it certainly isn't in Clojure), but it is an issue in Elixir, as side-effects would require us to spawn new processes every time take is invoked.

Another drawback of reducers is that, because the collection controls the reducing, we cannot implement operations like zip, which requires taking one item from a collection, suspending its reduction, taking an item from another collection and suspending it too, then resuming the first one, and so on. Again, at least not in a purely functional way.

With reducers, we achieved the goal of a single abstraction that works efficiently with in-memory data structures and resources. However, reducers are limited in the number of operations they can support efficiently, in a purely functional way, so we had to keep looking.

Introducing iteratees

It was at Code Mesh 2013 that I first heard about iteratees. I attended a talk by Jessica Kerr and, in the first minutes, she described exactly where my mind was at the moment: iterators and reducers indeed have their limitations, but those limitations had been solved in scalaz-stream.

After the talk, Jessica and I started to explore how scalaz-stream solves those problems, eventually leading us to the Monad.Reader issue that introduces iteratees. After some experiments, we had a prototype of iteratees working in Elixir.

With iteratees, we have "instructions" going "up and down" between the source and the reducing function, telling each other what the next step in the collection processing is:

defmodule Iteratee do
  @doc """
  Enumerates the collection with the given instruction.

  If the instruction is a `{:cont, fun}` tuple, the given
  function will be invoked with `{:some, h}` if there is
  an entry in the collection, otherwise `:done` will be
  given.

  If the instruction is `{:halt, acc}`, it means there is
  nothing to process and the collection should halt.
  """
  def enumerate([h|t], {:cont, fun}) do
    enumerate(t, fun.({:some, h}))
  end

  def enumerate([], {:cont, fun}) do
    fun.(:done)
  end

  def enumerate(_, {:halt, acc}) do
    {:halted, acc}
  end
end

With enumerate defined, we can write map:

def map(collection, fun) do
  {:done, acc} = enumerate(collection, {:cont, mapper([], fun)})
  :lists.reverse(acc)
end

defp mapper(acc, fun) do
  fn
    {:some, h} -> {:cont, mapper([fun.(h)|acc], fun)}
    :done      -> {:done, acc}
  end
end

enumerate is called with {:cont, mapper}, where mapper will receive {:some, h} or :done, as defined by enumerate. The mapper function then either returns {:cont, mapper}, with a new mapper function, or {:done, acc} once the collection has told us no new items will be emitted.

The Monad.Reader publication defines iteratees as teaching fold (reduce) new tricks, and that is precisely what we have done here. For example, while map only returns {:cont, mapper}, it could have returned {:halt, acc}, which would have told the collection to halt. That's how take could be implemented with iteratees: we would send cont instructions until we are no longer interested in new elements, finally returning halt.
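As a sketch of that idea, here is take built on the enumerate contract above. The counting logic is our own illustration, not the actual library code:

```elixir
defmodule IterateeTake do
  # Same enumerate contract as in the text.
  def enumerate([h|t], {:cont, fun}), do: enumerate(t, fun.({:some, h}))
  def enumerate([], {:cont, fun}), do: fun.(:done)
  def enumerate(_, {:halt, acc}), do: {:halted, acc}

  # take answers :cont while more than one element is still wanted,
  # then :halt so the collection stops emitting.
  def take(collection, n) when n > 0 do
    {_, acc} = enumerate(collection, {:cont, taker([], n)})
    :lists.reverse(acc)
  end

  defp taker(acc, n) do
    fn
      {:some, h} when n > 1 -> {:cont, taker([h|acc], n - 1)}
      {:some, h}            -> {:halt, [h|acc]}
      :done                 -> {:done, acc}
    end
  end
end

IterateeTake.take([1, 2, 3, 4], 2)
#=> [1, 2]
```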

So while iteratees allow us to teach reduce new tricks, they are much harder to grasp conceptually. Not only that: functions implemented with iteratees were 6 to 8 times slower in Elixir than their reducer counterparts.

In fact, it is even hard to see how iteratees are actually based on reduce, since the accumulator is hidden inside a closure (the mapper function, in this case). This is also the cause of the performance issues in Elixir: for each mapped element in the collection, we need to generate a new closure, which becomes very expensive when mapping, filtering or taking items multiple times.

That’s when we asked: what if we could keep what we have learned with iteratees while maintaining the simplicity and performance characteristics of reduce?

Introducing reducees

Reducees are similar to iteratees. The difference is that they map clearly to a reduce operation and do not create closures as we traverse the collection. Let's implement a reducee for lists:

defmodule Reducee do
  @doc """
  Reduces the collection with the given instruction,
  accumulator and function.

  If the instruction is a `{:cont, acc}` tuple, the given
  function will be invoked with the next item and the
  accumulator.

  If the instruction is `{:halt, acc}`, it means there is
  nothing to process and the collection should halt.
  """
  def reduce([h|t], {:cont, acc}, fun) do
    reduce(t, fun.(h, acc), fun)
  end

  def reduce([], {:cont, acc}, _fun) do
    {:done, acc}
  end

  def reduce(_, {:halt, acc}, _fun) do
    {:halted, acc}
  end
end

Our reducee implementation maps cleanly to the original reduce implementation. The only differences are that the accumulator is always wrapped in a tuple containing the next instruction, plus the addition of a clause that checks for halt.

Implementing map only requires us to send those instructions as we reduce:

def map(collection, fun) do
  {:done, acc} =
    reduce(collection, {:cont, []}, fn x, acc ->
      {:cont, [fun.(x)|acc]}
    end)
  :lists.reverse(acc)
end

Compared to the original reduce implementation:

def map(collection, fun) do
  reversed = reduce(collection, [], fn x, acc -> [fun.(x)|acc] end)
  :lists.reverse(reversed)
end

The only difference between the two implementations is the accumulator wrapped in tuples. We have effectively replaced the closures of iteratees with two-item tuples in reducees, which provides a considerable speed-up.
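To see how little changes, here is the sum function from earlier ported to reducees, as a self-contained sketch with the list reducee inlined; the only change from the plain reduce version is wrapping the accumulator in a :cont instruction:

```elixir
defmodule ReduceeSum do
  # Same reducee contract as in the text.
  def reduce([h|t], {:cont, acc}, fun), do: reduce(t, fun.(h, acc), fun)
  def reduce([], {:cont, acc}, _fun), do: {:done, acc}
  def reduce(_, {:halt, acc}, _fun), do: {:halted, acc}

  # sum never halts early, so it always finishes with {:done, acc}.
  def sum(collection) do
    {:done, acc} =
      reduce(collection, {:cont, 0}, fn x, acc -> {:cont, x + acc} end)
    acc
  end
end

ReduceeSum.sum([1, 2, 3])
#=> 6
```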

The tuple approach allows us to teach new tricks to reducees too. For example, our initial implementation already supports passing {:halt, acc} instead of {:cont, acc}, which we can use to implement take on top of reducees:

def take(collection, n) when n > 0 do
  {_, {acc, _}} =
    reduce(collection, {:cont, {[], n}}, fn
      x, {acc, count} -> {take_instruction(count), {[x|acc], count - 1}}
    end)
  :lists.reverse(acc)
end

defp take_instruction(1), do: :halt
defp take_instruction(_), do: :cont

The accumulator given to reduce now holds both a list, to collect results, and the number of elements we still need to take from the collection. Once we are taking the last item (count == 1), we halt the collection.

At the end of the day, this is the abstraction that ships with Elixir. It meets all the requirements outlined so far: it is simple and fast, works with both in-memory data structures and resources as collections, and supports operations like take and zip in a purely functional way.

The path forward

Elixir developers mostly do not need to worry about the underlying reducee abstraction. Instead, developers work directly with the Enum module, which provides a series of operations that work with any collection. For example:

iex>[1, 2, 3], fn x -> x * 2 end)
[2, 4, 6]

All functions in Enum are eager. The map operation above receives a list and immediately returns a list. Nonetheless, it didn't take long for us to add lazy variants of those operations:

iex>[1, 2, 3], fn x -> x * 2 end)
#Stream<[enum: [1, 2, 3], funs: [#Function<...>]]>

All the functions in Stream are lazy: they only store the computation to be performed, traversing the collection just once after all desired computations have been expressed.
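For example, the two stream operations below are composed and the list is traversed a single time, with each element flowing through both functions before the next element is fetched:

```elixir
# Stream operations only build up a recipe; Enum.to_list/1
# finally traverses the collection, once, applying both steps.
[1, 2, 3, 4]
|>, fn x -> x * 2 end)
|> Stream.filter(fn x -> x > 4 end)
|> Enum.to_list()
#=> [6, 8]
```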

In addition, the Stream module provides a series of functions for abstracting resources, generating infinite collections and more.

In other words, in Elixir we use the same abstraction to provide both eager and lazy operations, accepting both in-memory data structures and resources as collections, all conveniently encapsulated in the Enum and Stream modules.

While reducees are an important milestone, they are not our end goal. After all, the operations in Enum and Stream we have implemented so far are still purely functional, and the Rx developers have shown us there is a long way to go once we decide to tackle asynchrony.

That’s exactly what we want to explore next for Elixir. For those interested in learning more, I have explored those topics at Elixirconf EU 2015 (the content related to this post starts at 30:39):

We hope you are as excited as we are about our foundations and what is coming next!

P.S.: An enormous thank you to Jessica Kerr for introducing me to iteratees and pairing with me at Code Mesh. Also, thanks to Jafar Husain for the conversations at Code Mesh and to the team behind Rx, which we are exploring next. Finally, thank you to James Fish, Peter Hamilton, Eric Meadows-Jönsson and Alexei Sholik for the countless reviews, feedback and prototypes regarding Elixir's future.

Subscribe to Elixir Radar

Nobody told me Minitest was this fun
Mon, 11 May 2015

Ever since I started working with Ruby I have been using RSpec to test my apps and gems, without giving Minitest much thought. Recently I started a new non-Rails project and decided to give Minitest a try just for the fun of it. Migrating from one tool to the other was refreshingly fun because Minitest and RSpec aren't so different from each other: they both have the basic features we need in a testing library to get things running, and if you are used to testing your code, moving from one to the other might not be as scary as you expect.

Translating testing idioms

One of the first things I looked into was how some common RSpec idioms should be implemented when using Minitest.

The classic ones are fairly simple: the before and after lifecycle hooks are equivalent to implementing the setup and teardown methods in your test class, and you have control over the inheritance chain by selecting when/where to call super. let and subject can be achieved with methods that use memoization to cache their values.

# A classic RSpec subject/before usage.
require 'spec_helper'

describe Post do
  subject(:post) { }
  before { post.publish! }
end

# The equivalent with Minitest & Ruby.
require 'test_helper'

class PostTest < Minitest::Test
  def post
    @post ||=
  end

  def setup
    post.publish!
  end
end

RSpec shared examples, where you can reuse a set of examples across your test suite, can be replicated by simply writing your tests in modules and depending on accessor methods to inject any objects your tests might depend on.

# What used to be a shared_examples 'Serialization' can be a module...
module SerializationTests
  def serializer
    raise NotImplementedError
  end

  # ... with the shared test methods defined here ...
end

# And your test cases can include that module to copy the tests
class JSONSerializationTest < Minitest::Test
  include SerializationTests

  def serializer
    JSONSerializer.new # hypothetical serializer under test
  end
end

class MarshalSerializationTest < Minitest::Test
  include SerializationTests

  def serializer
    MarshalSerializer.new # hypothetical serializer under test
  end
end
Mocks and stubs, which are incredibly flexible when using RSpec, are available in Minitest without any third party gem:

class PostTest < Minitest::Test
  def test_notifies_on_publish
    notifier = Minitest::Mock.new
    notifier.expect :notify!, true

    post.publish!(notifier: notifier)

    notifier.verify
  end

  def test_does_not_notify_on_republish
    notifier = Minitest::Mock.new

    post.stub :published?, true do
      post.publish!(notifier: notifier)
    end

    notifier.verify
  end
end

If you want a different or more fluent API, you can use something like mocha to improve your mocks, or even bring the RSpec API into the mix: with some manual setup you can pick the rspec-mocks gem and define your mocks and stubs just like when using the complete RSpec tooling:

require 'rspec/mocks'

class PostTest < Minitest::Test
  include ::RSpec::Mocks::ExampleMethods

  def before_setup
    ::RSpec::Mocks.setup
    super
  end

  def after_teardown
    super
    ::RSpec::Mocks.verify
  ensure
    ::RSpec::Mocks.teardown
  end

  def test_notifies_on_publish
    notifier = double('A notifier')
    expect(notifier).to receive(:notify!)

    post.publish!(notifier: notifier)
  end
end

Know your assertions

One of my favorite parts of RSpec is how expressive the assertions can be, from the Ruby code we write to the errors the test runner emits when something is broken. One might think that we can't have something similar when working with Minitest, but that is not exactly true.

Let’s say we want to test a method like Post#active?. Using a dynamic matcher from RSpec like expect(post).to be_active will produce a very straightforward message when that assertion fails: expected #<Post: …>.active? to return false, got true.

With Minitest, we might be tempted to write an assertion like assert !post.active?, but then the failure message wouldn't be very useful: Failed assertion, no message given. But fear not: for something like this we have the assert_predicate and refute_predicate assertions, which produce very straightforward failure messages like Expected #<Post:…> to not be active?., clearly explaining what went wrong with our tests.

Besides the predicate assertions, we have a few other assertion methods that can be useful instead of playing with the plain assert method: assert_includes, assert_same, assert_operator and so on – and every one of those has a refute_ counterpart for negative assertions.
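A quick sketch of those assertions in action (the values here are made up purely for illustration):

```ruby
require 'minitest/autorun'

class AssertionExamplesTest < Minitest::Test
  def test_descriptive_assertions
    tags = %w[elixir ruby]

    # On failure, names the collection and the missing element,
    # instead of "Failed assertion, no message given."
    assert_includes tags, 'ruby'
    refute_includes tags, 'python'

    # assert_operator spells out the comparison on failure.
    assert_operator 1, :<, 2

    # assert_same checks object identity (equal?), not equality.
    a = 'post'
    assert_same a, a
  end
end
```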

It’s always a matter of checking the documentation – The Minitest::Assertions module explains all the default assertions that you use with Minitest.

And in the case where you want to write a new assertion, you can always mimic how the built-in assertions are written to write your own:

module ActiveModelAssertions
  def assert_valid(model, msg = nil)
    valid = model.valid?
    errors = model.errors.full_messages.join(', ')
    msg = message(msg) { "Expected #{model} to be valid, but got errors: #{errors}." }
    assert valid, msg
  end
end

class PostTest < Minitest::Test
  include ActiveModelAssertions

  def test_post_validations
    post = 'The Post')
    assert_valid post
  end
end

Active Support goodies

If you want some extra sugar in your tests, you can bring in some of the extensions that Active Support has for Minitest, which are available when working with Rails: a more declarative API, some extra assertions, time traveling and anything else Rails might bring to the table.

require 'active_support'
require 'active_support/test_case'
require 'minitest/autorun'

ActiveSupport.test_order = :random

class PostTest < ActiveSupport::TestCase
  # 'setup' and 'teardown' can be blocks,
  # like RSpec 'before' and 'after'.
  setup do
    @post =
  end

  # 'test' is a declarative way to define
  # test methods.
  test 'deactivating a post' do
    @post.deactivate!
    refute_predicate @post, :active?
  end
end

Tweaking the toolchain

Minitest's simplicity might not be so great when it comes to the default runner and reporter, which lack some of my favorite parts of RSpec: the verbose and colored output, the handful of command line flags, and the failure report that prints the command to re-run a single failing test. On the bright side, even though Minitest does not ship with some of those features by default, there are a great number of gems that can help our test suite be more verbose and friendly whenever we need to fix a failing test.

For instance, with the minitest-reporters gem you can bring some color to your test output or use reporters compatible with tools like JUnit, RubyMine and TeamCity. You can use minitest-fail-fast to bring RSpec's --fail-fast behavior and exit your test suite as soon as a test fails. Or you can track down object allocations in your tests using minitest-gcstats.

If none of those gems is exactly the setup you want, you can always mix things up a bit and roll your own gem with the reporters, helpers and improvements that suit the way you write your tests.

Thanks to this extensibility, Rails 5 will change how you run the tests in your app, improving the overall testing experience with Rails (be sure to check this Pull Request and the improvements from other Pull Requests).

What I have learned in my first three months at Plataformatec
Thu, 23 Apr 2015

It hasn't been long since I began working at Plataformatec, but I've learned a lot already! I work with great professionals who are brought together not only by common goals but also by an awesome culture. I'll list a few of the things that have been great so far.

Be pragmatic and agile

Plataformatec is a very pragmatic company that is in a constant learning process. We are always on the lookout for new knowledge. Empirical knowledge is analyzed with care, and we try to document everything for future use. A good experience is always a candidate to become a practice, and when it's not, it becomes useful for future projects or similar situations.

Our practices are continuously re-evaluated to ensure they still fulfill their purpose. We're not afraid of deprecating them when needed, but we always collect data from them for continuous improvement.

Teams should share without being afraid

Sharing what you’ve learned is great for helping people and get feedback from them. In the other hand, sharing doubts isn’t easy as it seems. People tend to ask for help in a private context so fewer people know about it. I always remember my teachers saying in the class: “Don’t be ashamed of asking any doubts”. But that happened anyway, because being mocked was never good.

Our team is composed of really nice and helpful people. We enjoy helping others and we share with each other pretty much everything we find interesting. It's very common to hear someone here saying "one person's doubt could be another person's doubt too" or "I may have an answer for you, but it's quite possible that someone else has a better way to do this". Doubts can be a great source of feedback and knowledge when they are public. That's why we don't usually use private communication tools unless the matter is personal.

Open source culture

Like many people, I got to know Plataformatec through its contributions to the open source community. But when I joined the company, I noticed that the open source ideas go even deeper here. I'd say that Plataformatec's soul is open source, not only because of its software contributions but also because of the way it approaches everything else.

I’m sure that I have learned a lot about code review and documentation, but the contribution practices here goes beyond that. Lots of things we do are made with the help of the whole team: from pull requests to blog posts, to ceremonies and company decisions. We are very communicative! Take a look at this blog post to know more about our communication processes.

These were the points I think are the most interesting to share so far. I hope I can contribute more and more to this team and share what I learn with you along the way.

Plataformatec at Erlang Factory San Francisco 2015
Thu, 16 Apr 2015

Two weeks ago, José Valim and I went to Erlang Factory San Francisco and had a great time. Erlang Factory San Francisco is one of the biggest events in the Erlang community. One of the things that struck me most was how many people were excited and talking about Elixir throughout the event.

Elixir talks

The event itself had a whole track dedicated to Elixir. I may be biased, but the talk I liked the most was Valim's. In it, he explains what Elixir is about and how the language's foundation is enabling Elixir maintainers to explore different strategies for laziness, parallelism and distribution. You should definitely watch it:

Besides Valim’s talk, there were many others. Jamie Winsor shared his experience in developing a whole Massively Multiplayer Online Game platform using Elixir. In another room, Frank Hunleth introduced the Nerves Project to run Elixir in embedded devices.

I’ll not comment on each one of the talks because this week’s Elixir Radar will be a special edition focusing only on the Elixir talks at Erlang Factory. Subscribe to Elixir Radar and stay tuned! Also, you can go ahead and watch all of the those Elixir talks on Youtube.

People using Elixir in production

The highlight of the event was chatting with other Elixir developers. It was awesome to see how many developers are falling in love with Elixir.

Not only that, many of them are already running Elixir in production! So Valim and I decided to grab our phones, order a microphone from Amazon and record some video interviews to show their experience running Elixir in different environments, from marketing and video companies to messaging systems and embedded devices.

We’re still editing those videos, but as soon as we get them done, we’re going to publish them in our Youtube channel and will announce them here and in our Twitter account.

Stay tuned for more Elixir news!

Update (04/23/2015)
We already have a video teaser of the interviews we did with companies using Elixir in production. Check it out!


Build embedded and start permanent in Elixir 1.0.4
Fri, 10 Apr 2015

Elixir v1.0.4 ships with two important new options for new projects. If you generate a new application with mix new, you will see in your mix.exs:

[build_embedded: Mix.env == :prod,
 start_permanent: Mix.env == :prod]

Although those options were originally meant to be in Elixir v1.1, we have decided to bring them into v1.0.4 and do a new release. In this post, we will explain why.

Protocol consolidation

One of Elixir’s most important features are protocols. Protocols allow developers to write code that accept any data type, dispatching to the appropriate implementation of the protocol at runtime. For example:

defprotocol JSON do
  def encode(data)
end

defimpl JSON, for: List do
  def encode(list) do
    # ...
  end
end
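To make the dispatch concrete, here is a toy version with the encode bodies filled in. The module name and encoding rules are ours, purely for illustration; they are not the actual protocol from the post:

```elixir
defprotocol JSONDemo do
  def encode(data)
end

# Lists delegate back to the protocol for each element,
# so dispatch happens per data type at runtime.
defimpl JSONDemo, for: List do
  def encode(list) do
    "[" <> Enum.map_join(list, ",", &JSONDemo.encode/1) <> "]"
  end
end

defimpl JSONDemo, for: Integer do
  def encode(int), do: Integer.to_string(int)
end

JSONDemo.encode([1, 2, 3])
#=> "[1,2,3]"
```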

We have written about protocols before, and in my recent Erlang Factory talk I explored the foundation protocols have allowed us to build.

However, in order to play nicely with the dynamic nature of the Erlang VM, where modules (and therefore protocol implementations) can be loaded at any time, protocols need to check on every dispatch whether a new implementation is available for any given data type.

While we gladly pay this price in development, as it gives developers flexibility, we would like to avoid it in production: a deployment gives us a consolidated view of all modules in the system, allowing us to skip those runtime checks. For this reason, Elixir provides a feature called protocol consolidation, which consolidates all protocols with their implementations, giving protocols a fast dispatch to use in production.

Prior to Elixir v1.0.4, protocol consolidation had to be manually invoked by calling mix compile.protocols, which would consolidate protocols into a predefined directory, and that directory had to be explicitly added to your load path when starting your project. Due to the manual nature of those commands, a lot of developers ended up not running them in production, or were often confused when doing so.

For this reason, Elixir v1.0.4 introduces a :consolidate_protocols option to your projects which will take care of consolidating and loading all protocols before your application starts. This option is also set to true when :build_embedded is true.

Build embedded

When compiling your projects, Elixir will place all compiled artifacts into the _build directory:


Many of those applications and dependencies have artifacts in their source that are required during runtime. Such artifacts are placed in the priv directory in Elixir applications. By default, Elixir will symlink to their source directories during development.

In production, though, we can copy those contents instead of symlinking, embedding all files relevant to running your application into the _build directory, with no need for their sources.

That’s what the :build_embedded option does and it defaults to true in production for new applications.

Start permanent

Elixir code is packaged into applications. For example, each entry we saw under _build/dev/lib above is a different application. When an application is started, it can be started in one of the three following modes:

  • permanent – if the app terminates, all other applications and the entire node are also terminated
  • transient – if the app terminates with reason :normal, the termination is reported but no other applications are terminated. If a transient application terminates abnormally, all other applications and the entire node are also terminated
  • temporary – if the app terminates, the termination is reported but no other applications are terminated

The default mode is temporary, which, again, makes sense for development. For example, our test framework, ExUnit, is itself an application. If the application being tested crashes, we still want the ExUnit application to keep running in order to finish all tests and generate the proper reports. In this case, you definitely do not want your application to run as permanent.

However, in production, once your application crashes permanently, beyond recovery, we want the whole node to terminate; otherwise, whatever is monitoring your application at the operating system level won’t notice any change.

The :start_permanent option starts your application as :permanent, and it defaults to true in production for new applications.
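Putting it together, here is a sketch of how these production-oriented options sit side by side in a project definition (names are placeholders; projects generated on v1.0.4 include similar lines):

```elixir
# mix.exs (illustrative)
def project do
  [app: :my_app,
   version: "0.0.1",
   # copy runtime artifacts into _build instead of symlinking
   build_embedded: Mix.env == :prod,
   # terminate the whole node if the application crashes in production
   start_permanent: Mix.env == :prod,
   deps: deps]
end
```

Gating both options on Mix.env == :prod keeps the faster, more flexible behavior in development and test, while production builds get embedded artifacts and permanent start.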

Summing up

These new options were introduced in Elixir v1.0.4 because they are very important for running Elixir in production, bringing more performance and stability to your Elixir-based systems.

There are other smaller changes in this release, like support for Erlang 17.5 and 18.0-rc1, as well as bug fixes. Check the release notes for more information and enjoy!


Continuous communication Wed, 11 Mar 2015 12:00:45 +0000 After continuous integration, which evolved to discrete integration, and continuous delivery, why not try continuous communication to avoid misleading messages inside your team?

Why does communication matter?

It’s well known that communication issues result in many software development problems.

Some agile frameworks, such as Scrum, have well-defined communication activities, like daily meetings and sprint planning.

Communication is an important subject, not only in software development but also in many other areas. There are some social frameworks, such as collective impact and collaboration for impact, that use continuous communication to help people achieve their goals.

Here at Plataformatec, we take this quite seriously as well. We’re always evolving our communication practices to achieve our goals and to keep our culture strong.

Continuous communication at Plataformatec

We talk a lot to each other. We use Campfire for team chat, Basecamp for persistent messages, and Hangouts or Skype for face-to-face calls.

Besides the day-to-day communication, we use some agile practices such as daily meetings and retrospectives. Some of these meetings/tools are used inside projects, others across projects, and finally, we have company-wide meetings. Let’s see how these practices evolved to fit our current modus operandi.

Inside projects


In order to keep everyone inside the project on the same page, we hold daily meetings. An important detail here is the client’s daily presence, which helps avoid reworking tasks, since the client knows what is happening on a day-to-day basis.

Weekly project meetings

Once a week, every project team also has a meeting with the Account Manager. Our Account Manager is the person who helps the team solve problems at a higher level, typically overseeing multiple projects at a time, so they are not directly involved in the project’s day-to-day tasks.

We have an open communication channel with them, but it’s important to follow up with the whole team together. This ensures that the team exchanges more information and keeps the Account Manager better informed about the project, so they can provide the team with better advice on how to solve day-to-day challenges.


Dashboard meetings

Every Friday, our company daily meeting takes on a new role. In addition to company announcements, we also share each project’s status: what happened during the week, new technical challenges, techniques applied, releases delivered, and so on.

Everyone on a project assigns it a grade, a value between -1 and 2. A grade of -1 means “We are performing really badly” and 2 means “We are performing really well”. Together with the grade, we write a brief explanation justifying it.

We start the dashboard meeting by analyzing the current and past grades, so we can see how each project is evolving. This is important because everyone gets a basic sense of how the other projects are moving along and of new things our colleagues are using. Having a big picture of each project paves the way for advice and knowledge sharing.


Company Dailies

Back when the Plataformatec team was smaller, we used the dailies to exchange project information and other company announcements, such as new employees or new clients. But as the team got bigger, we had more and more project teams, and exchanging all that information became complex.

Our dailies couldn’t include project information anymore, so we changed their purpose: today we use them to share information at the company level. Information about projects is now shared weekly, in the dashboard meetings.

Weekly reports

We also have a weekly summary email, where we point out project highlights, new leads, new employees, and upcoming events. It’s a very good tool for those who missed a meeting during the week.

Monthly retrospectives

Back when our team was smaller, we used to have biannual retrospectives, where we listed our good points, things to be maintained, and points we needed to improve. We also used the retrospective meetings for team appreciation, where everyone had the opportunity to congratulate or appreciate someone else’s work on the team.

We used to dedicate a whole day to this meeting, but as new members joined the team, that was no longer enough. So we switched to a monthly retrospective, focused on the good points and the points to improve. Every month we select one or more subjects to discuss, and quarterly we re-prioritize and list new subjects if necessary. Appreciation is now done in the biannual reviews.

Biannual reviews

With the retrospectives occurring monthly, we now have more time in our biannual reviews. We keep doing team appreciation, and we also:

  • check our goal status (Mid-Year review, usually in June or July) and
  • present the year’s analysis at the end of the year, usually in November.


At the very beginning of each year, the partners share the company goals and the strategic vision. This is the moment to review where we were last year, where we are now, and where we want to be in the near future. It is also at this meeting that we go over last year’s balance and the financial projections.

One-on-one conversations

The main purpose of all these meetings is to give and receive feedback. Communication problems occur when some information is not crystal clear; feedback mitigates this, since everyone has more opportunities to clear up their doubts.

However, some people have difficulty talking in crowded settings, so we created an opportunity for them too: a bi-monthly face-to-face meeting with HR. Also, all partners have a weekly one-hour slot on their agendas to receive anyone who wants to talk about any subject.

How to evolve

It’s part of our culture to have good, clear communication. The practices we are using now certainly will not stay the same forever. Our team is still growing; every year we have more and more projects and people joining us. Adapting and improving these practices is required to keep the company’s communication evolving.

It may seem that we have a lot of meetings, but they are organized and prepared to be efficient and to avoid the problems that relying only on ad-hoc communication could cause. All the meetings cited here are timeboxed and have a specific goal, and we work hard to achieve each meeting’s goals within its timeframe.

But beware! This is not about creating new formal meetings per se. You should focus on promoting a healthy culture of good, clear communication, and keeping feedback channels wide open across the whole company is crucial.

And what about you? What are your communication practices? Have you ever changed them?
