Plataformatec Blog
Plataformatec's place to talk about Ruby, Ruby on Rails and software engineering

Organizing microservices in a single git repository
Thu, 22 Jan 2015

Microservices have gained popularity recently, and some projects I've worked on have followed this approach. Basically, it is a software architecture approach that breaks monolithic applications into smaller, decoupled, business-oriented and independently deployable applications.

Each microservice is normally hosted in its own git repository, since it has well-defined business boundaries and its code must be isolated from other microservices to ensure decoupling and deployment independence.

It may work well if you organize one team per microservice. If a team is responsible for a given microservice and won't work on other microservices, this organization may be good enough.

During project development, we at Plataformatec learned that it is not very productive to focus on specific parts of a feature. Instead, we design and develop features by perceiving them as a whole, as they would be perceived by the end user. We don't work with application specialists; we work with generalists and a lot of communication through pull requests.

So the best fit for the way we work, as our experience has shown us, is to put all the microservices and the clients that consume them into a single git repository. It may sound weird or semantically wrong to some, but after all, those microservices are small parts of a bigger whole, something called a software ecosystem. Since they share or exchange information among themselves, they're somehow connected to each other.

This pragmatic approach is not exclusively ours; many people out there apply it. Two very nice examples are Facebook and Google. Of course, their codebases are far larger than a normal application's; they're an exception. Google's codebase, for instance, even keeps really low-level information like operating system configurations.

Using a single repository has proven to be a very good practice for us, because we can keep track of relevant pull requests more easily; we can refactor, create and test new features across all the microservices faster; and we can test their integration without leaving the current context. Also, project gardening is way simpler: upgrading the Ruby or Rails version, updating gems, extracting shared code as gems, and running tests and deploys for all of them can be automated and executed across all microservices.
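As a rough illustration of that gardening automation, here is how running the same task across every service in one repository might look. The `services/` layout and the `run_tests.sh` script are hypothetical, created here just for the demo:

```shell
set -e
root=$(mktemp -d)
# Hypothetical layout: one directory per microservice in a single repository,
# each with its own test entry point.
for svc in auth billing catalog; do
  mkdir -p "$root/services/$svc"
  printf '#!/bin/sh\necho "%s: tests passed"\n' "$svc" > "$root/services/$svc/run_tests.sh"
  chmod +x "$root/services/$svc/run_tests.sh"
done
# Project gardening: run the same task across every service in one pass.
for service in "$root"/services/*/; do
  (cd "$service" && ./run_tests.sh)
done
```

The same loop works for dependency updates or deploys; the point is that one script can visit every service because they all live side by side.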

Have you worked with a single or multiple repositories? Please share your thoughts about it in the comments below!

Empirical knowledge formalization in project retrospectives
Wed, 14 Jan 2015

All software projects are challenging, although not all at the same level. Even the easiest projects are tough and demand attention in order to make good decisions and adjust course.

There are plenty of ways of doing so, like sprint retrospectives, where the good and the bad things are exposed and discussed. Another very good way, which is not seen very often, is the project retrospective.

Project retrospectives allow a deeper analysis of the current scenario. Not a code analysis per se, but how stakeholders handle changes, how the team sees the future, which challenges the team overcame, where the team failed, and all the questions that apply to sprint retrospectives, but applied to the whole project, from its start until the day of the retrospective.

As stated before, you can list good and bad things, like in any other retrospective, and discuss how to act on the bad ones. But what I came here to talk about is a technique that is not very common: questioning the good things.

Where the wild things are

The human mind is complex, and so is knowledge acquisition, which is not always organized or systematically explainable. Some things you just know, and you can't explain why. This happens because most of it is empirical knowledge (or empirical evidence), i.e., knowledge that you obtain by observation or by experience.

I remember a toy I had, a train that would run a few meters if you pressed a button. The problem was that everybody, at first, thought it was broken, but I knew how to press the button correctly to make it work. When other toys of mine stopped working, I always tried pressing their buttons in different ways, including the way my train responded to, because I had empirical knowledge that it could work, since it had worked before.

If you apply this logic to software development, you can see a lot of practices, like code reviews, plannings and retrospectives, which are now documented but previously were just known. People are always improving books and creating new techniques based on what already exists, by observing and adjusting. This is how things take shape.

But there's a problem when empirical knowledge is used: you don't know whether it applies in your context, and you have no clue whether it will work or not. It's like taking a guess. In order to avoid that problem, you need a formalization of that very same knowledge.

By formalizing empirical knowledge, I mean understanding what happened. Instead of just pressing the train's button all the time, formalization would require some electrical investigation, checking why it works when pressed in a specific way. The same thing could be done in the software development life-cycle, in order to keep improving ourselves.

How to formalize

Well, back to the good points listed in a project retrospective! I've been in very few sprint retrospectives where the good stuff was analyzed, and the way it was analyzed made no difference to me. I just couldn't see the value of making a plan to keep up that work. It was good; it obviously should be continued.

But there is a technique called five whys, intended to “discover underlying conditions that contribute to an issue”1, which can help you dig into facts and scenarios where a better analysis can be done.

But “whys” alone won't drive you down the right path. You will need more interrogative words, like what, when and who. You need to understand the facts that lead to a good thing, and then build the theory around it.

Let me share an example that happened to me and how far it went. In a project retrospective once, we had a good-things post-it with the text “UX”, and the questions began:

“Why was UX good for the project?”
– “Because we knew everything we had to do before we started developing,” said the developer.
“So, it was good to have a UX expert drawing the screens before development. Why?”
– “Actually, it is not just before development. The screens were ready before the planning, so we could break the User Stories into very small ones and have grooming meetings where we discussed possible problems,” said the Scrum Master.
– “And it was not only a single screen, it was the whole flow that contains 3 screens and their transitions.”
“How did you use the drawings to discuss it in groomings?”
– “We put the acceptance criteria and the drawings up on a screen. The Product Owner, the UX expert, a developer and the Scrum Master would discuss technical limitations, simplifications and improvements.”
“So, you said you needed the User Story and the drawings to discuss in the grooming meetings. What comes first, the US or the drawing?”
– “Currently, they come almost at the same time, because the Product Owner can discuss UX things with the UX expert, since he has some knowledge of it. So, the conception is almost at the same time.”

In order to keep this example short, I'll stop here and state our final lesson. Instead of just stating “UX was good for the project”, we figured out that “UX is good for the project if the UX experts are close to and aware of the business. UX work must be done at the very beginning and discussed with the whole team, to check technical feasibility and allow better scheduling.”

Hope you’ve enjoyed the tips. If you do something similar, please let us know in the comments below, and start sharing knowledge =).

  1. Agile Retrospectives: Making Good Teams Great. The Pragmatic Programmers, page 82.

We are hiring project managers
Fri, 19 Dec 2014

Do you have the profile to deal with people? Do you like agile methodologies such as Scrum, Kanban and Lean?

We have an opportunity for you. See below what it's like to be a project manager at Plataformatec, and what our company culture is like.

About Plataformatec

Plataformatec is a consulting company specialized in custom software development. We use Agile, Ruby, Rails, Elixir and strategic business analysis methods to help our clients achieve their goals. We are organized into teams composed of one project manager and two to six developers, who take on the following challenges:

  • understanding how a software project fits into a client's strategy and how that project will help the client achieve their goals
  • helping the client define and clarify business goals
  • creating a roadmap in the form of a release backlog, ready to be turned into software
  • developing software iteratively, incrementally and collaboratively

Since we understand that delivering results is the whole team's responsibility, there is no hierarchy within a team. Each person plays their role as well as possible to achieve the client's goals. We don't just want clients; our goal is to have fans.

As a company, we love sharing our work and producing knowledge. We maintain several open-source projects, created our own programming language, wrote three books about software development and have spoken at dozens of development and agile events. We are also a worldwide reference in the Ruby on Rails community.

Continuous improvement has always been in our DNA. We have been in the market for almost six years, and from the beginning we have applied the agile philosophy across the whole company, from the project teams to HR, from sales to administration.

We love our work.

About the position

What a project manager does at Plataformatec

Our project managers don't follow the so-called traditional line. The main goal of our PM is to amplify the work of the rest of the team. To do so, they blend the activities of a project manager with those of a scrum master, requirements analyst and PO proxy.

Some typical activities of our project managers are:

  • taking part in understanding the client's business goals
  • helping the client turn their ideas into software requirements
  • facilitating communication between the team and the client
  • organizing and tracking the project's releases
  • helping the client optimize the project scope to maximize business value within the project's time frame
  • planning and running the agile ceremonies: planning, daily, grooming, review and retrospective
  • monitoring delivery velocity and the project's success indicators
  • removing the team's impediments

Requirements for the position

  • Education
    • a completed degree in a computing-related field
    • intermediate English (spoken and written)
  • Experience
    • 1 year managing software projects
    • 1 year managing projects with agile methodologies (Scrum, Kanban, Lean etc.)
  • Skills
    • analytical ability
    • being results-oriented
    • enjoying learning
    • good communication (verbal and written)
    • being able to communicate technical information to non-technical people
  • Availability to work in São Paulo

How to join the selection process

To take part in the selection process, visit and send your resume. Also see our Facebook page to learn more about our team and our office.

We want people who share our values, who care about what they do and who never tire of learning and outdoing themselves.

If you share our vision, join the selection process. And if you know someone who might be interested, pass this along to your friend.

The pros and cons of 4 deployment process techniques
Tue, 02 Dec 2014

The way you deliver your product code to your customers is commonly called “deployment”. It is an important matter because it impacts how fast your product responds to changes and the quality of each change.

Depending on which deployment approach you choose, it will impact your team and the way you use your version control system.

As a consultancy, we have worked on lots of projects, and together with our customers we have devised many ways to deliver their products to their customers. We have seen some patterns, advantages and challenges in each way, and today I would like to discuss some of them:

  1. The open-source way
  2. The pipeline way
  3. The support branch way
  4. The feature toggle way

The open-source way

In the open-source world, most of the time we must maintain many versions of the same product. For example, Ruby on Rails has many released versions, like 2, 3.2, 4.0 and 4.1. Bugs happen and new features are created, so new releases must be delivered, but only within the set of supported versions. Sticking with the Rails example, the supported releases are 4.1, 4.0 and 3.2. But how does this releasing work?

The most recent version of the product is maintained on the master branch, while the previous major releases have their own branches. In Rails, master holds the upcoming 4.2 release, and we still have the 4-1-stable, 4-0-stable and 3-2-stable branches. By following this organization, we can easily apply changes to the desired versions.

For each release, a tag must be created. For example, there's a tag for Rails 4.0.0, one for 4.1.0 and so on. With tags, it is possible to navigate between the released versions, and if the worst happens, like losing a “version-stable” branch, it's easy to create another one from the last released version.

Usually, a web product has just one version to maintain, so we don't usually need the “version-stable” branches. We can keep the new product releases on the master branch and create a tag whenever we want to package and release a new product version.

When we need a hotfix or an urgent feature and master is not yet ready for production, we can easily create a branch from the latest tagged version, apply the desired changes, create a new tag and release a new version. By the way, with this approach you can release any branch you want. All that manipulation of applying commits, merging and creating branches and tags can be simplified with a powerful version control system like Git.
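That hotfix flow can be sketched with a throwaway repository; the version numbers, file and branch names below are made up for the demo:

```shell
set -e
# Toy repository to demonstrate the flow.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
default=$(git symbolic-ref --short HEAD)   # master (or main) branch name
echo "v1 code" > app.txt
git add app.txt; git commit -q -m "release work"
git tag v1.0.0                             # the last released version
echo "unfinished" > wip.txt
git add wip.txt; git commit -q -m "wip: not ready for production"
# Hotfix: branch from the last released tag, not from master.
git checkout -q -b hotfix-1.0.1 v1.0.0
echo "fixed" >> app.txt
git add app.txt; git commit -q -m "fix: urgent bug"
git tag v1.0.1                             # release the hotfix from this branch
# Bring the fix back so the next regular release includes it.
git checkout -q "$default"
git cherry-pick -x v1.0.1
```

The final cherry-pick is the step the “common phrase” below is about: forget it, and the fix never reaches master.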

Strong points

  • The flexible package creation and release.
  • It works for large teams, primarily when there are planned releases.

The challenges

  • It requires the infrastructure to be flexible enough to support it.
  • It requires time to control what can be merged into master before the package creation.
  • It requires good skills with the version control system.
  • Managing the release versions.

Common phrases with this approach

  • “Sorry pals, I forgot to apply that hotfix patch on master.” – A developer after releasing a new product version.

The pipeline way

Using a pipeline in your deployment process means you have well-defined steps, and all of them must be completed in order to do a deployment.

Usually the steps are: run the automated tests, release to the test/QA environment, create the release tag, and release to production. After the steps are defined, you need software that allows the team to automate some steps and to add the option of requiring approval before the next step. For example, you only want to release the package to production after your QA team and PO have approved the version on QA.
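As a sketch, those steps could be described in a tool-agnostic pipeline definition; the format and the stage names below are illustrative, not the syntax of any specific CI tool:

```yaml
# Hypothetical pipeline definition; the stages mirror the steps above.
stages:
  - name: test          # run the automated tests
    trigger: automatic
  - name: qa-release    # release to the test/QA environment
    trigger: automatic
  - name: qa-approval   # QA team and PO approve the version on QA
    trigger: manual
  - name: tag           # create the release tag
    trigger: automatic
  - name: production    # release the approved package to production
    trigger: manual
```

The manual stages are where the approval gates live; everything else runs without human intervention.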

Having a pipeline means your master branch is always production-ready. Any new code merged into master must pass through the pipeline, so it is very important that both the team and the pipeline are able to respond quickly to changes.

One important precaution is to make sure that only wanted features are on master, because all code on master will be deployed in the next software release. I have seen some confusion in this regard, because some companies are a bit more bureaucratic and have strict deployment rules.

For example, a feature can only go to production when the QA team and the PO approve it. Placing the QA process in the pipeline means you'll put features on master that are not yet ready for production. This generates a problem I see regularly with this approach; I'll call it, for now, the “release lock”.

Release lock

The release lock can be better understood with an example:

  1. The developers have merged Feature A and Feature B into master.
  2. The QA team finds a bug in Feature A.
  3. The developers merge Feature C into master.
  4. The developers fix the bug.
  5. The PO approves Features A and B and wants a deploy.

Can we deploy a release with Feature C untested and unapproved by the PO? Most of the time, the company's answer is no.

Some approaches we can take here are: revert Feature C's commits, or simply lock code changes on master while the entire team focuses on finishing the release including Feature C.
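The revert option can be sketched like this, in a throwaway repository with made-up feature files standing in for real changes:

```shell
set -e
# Toy repository; the feature names mirror the example above.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
echo "A" > feature_a.txt; git add .; git commit -q -m "Feature A"
echo "B" > feature_b.txt; git add .; git commit -q -m "Feature B"
echo "C" > feature_c.txt; git add .; git commit -q -m "Feature C"
# Take the unapproved Feature C out of the release without rewriting history:
git revert --no-edit HEAD
```

After the revert, Feature C's changes are gone from the working tree, but its commits remain in the history, so the feature can be re-applied later.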

Of course there are other approaches we can incorporate in the pipeline process, and we’ll see a further discussion about it later in this post.

Strong points

  • With the pipeline, it is easier for everyone on the team to understand how the deployment works.
  • The pipeline makes it possible for anyone on the team to launch a release.
  • Less time spent managing versioning.

The challenges

  • You lose the flexibility to deploy arbitrary branches.
  • In large teams, and under some company rules, release locks can happen often.

Common phrases with this approach

  • “What? This feature is already in production?” – A member of the QA team looking at the version in production.
  • “Hey, stop merging into master! We need a release today!” – The product manager after receiving pressure from stakeholders.

The support branch way

You define a branch as the QA or test branch. This branch is useful for testing features which aren't ready for production, for example when your deployment process requires QA/PO approval of features.

With this approach, you first send features to the support branch. When a feature is approved, you send it to the master branch and follow the normal deployment process flow. It is important to run a regression test over the merged features on master. When a regression test finds a defect, it is easy to fix it on master, since master is clear of unwanted features.

While using this approach, you should be aware that you now have two integration points. Resolving merge conflicts twice is a problem that can happen often, but the most troublesome issue is when the integration on the support branch breaks application functionality.

When the support branch integration is broken you need to analyze when and where the patch with the fix will be applied.

If you apply the fix on support branch, you must remember to apply it on master again.

The other option is to find out which changes made the features incompatible with each other. Once you find that, you can apply those changes to the branch that doesn't have them. Be aware that, depending on how you do this, you may end up needing to release both features together.
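The two integration points can be sketched with a throwaway repository; here “qa” is the support branch, and all the names are illustrative:

```shell
set -e
# Toy repository demonstrating the support-branch flow.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev
default=$(git symbolic-ref --short HEAD)   # master (or main) branch name
echo "base" > app.txt; git add .; git commit -q -m "initial"
git branch qa                              # the support (QA) branch
git checkout -q -b feature-a
echo "feature A" > feature_a.txt; git add .; git commit -q -m "Feature A"
# First integration point: merge into the support branch for QA testing.
git checkout -q qa
git merge -q --no-ff -m "Merge feature-a into qa" feature-a
# After QA/PO approval, the second integration point: merge into master.
git checkout -q "$default"
git merge -q --no-ff -m "Merge feature-a into master" feature-a
```

The duplicated merge is exactly where the “fix that merge conflict again” pain comes from: any conflict feature-a causes must be resolved once on qa and once on master.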

Strong points

  • You can easily combine the support branch with any deployment process you choose.
  • You mitigate the release lock problem.

The challenges

  • Two feature integration points.
  • It requires good skills with the version control system.
  • Maintaining the support branch.

Common phrases with this approach

  • “Gosh! I need to fix that merge conflict again.” – A developer merging a QA-approved feature into master.

The feature toggle way

Sometimes you are using feature toggles without knowing it. For example: when you enable some features only for beta users, enable some application routes only for certain network IPs, or create A/B tests for your users. In general, you are using a feature toggle whenever your application restricts access to some features in some way.

To solve the release lock problem, some teams put every feature that needs approval behind a feature toggle. This way, the team can send unapproved features to production turned off. When a feature is approved, it can be turned on without a new deploy. Be aware that sending turned-off features to production also means that unapproved feature code will be sent too.
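The idea can be sketched in a few lines of Ruby; the `FeatureToggle` class and the feature names below are hypothetical, not the API of any real library:

```ruby
# Hypothetical FeatureToggle class, for illustration only.
class FeatureToggle
  def initialize(rules = {})
    @rules = rules
  end

  # A rule can be a boolean (globally on/off) or a proc (e.g. beta users only).
  # Unknown features default to off.
  def on?(feature, user: nil)
    rule = @rules.fetch(feature, false)
    rule.respond_to?(:call) ? rule.call(user) : rule
  end
end

TOGGLES = FeatureToggle.new(
  new_checkout:   true,                              # approved: fully on
  beta_dashboard: ->(user) { user == :beta_tester }  # unapproved: beta users only
)

TOGGLES.on?(:new_checkout)                       # => true
TOGGLES.on?(:beta_dashboard, user: :beta_tester) # => true
TOGGLES.on?(:beta_dashboard, user: :visitor)     # => false
```

Flipping `beta_dashboard` to `true` after approval is the “turn it on without a new deploy” step, assuming the rules are loaded from somewhere the running application can reread.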

Creating toggles for your features means more code and tests to control what your software does with and without the toggles. Each feature toggle you add increases the complexity and the maintenance cost of your codebase.

Thus, it is important to remove them after the feature is approved, before they start damaging your software. I know what you're thinking, and yes, it is true: most of the time the QA/PO team will want to test the toggle removal, and you might face the release lock problem again.

Strong points

  • You can use the pipeline with only one point of integration.
  • You reduce the release lock problem.

The Challenges

  • You may increase the cost of development because of maintenance and removal of feature toggles.
  • The feature toggle management.

Common phrases with this approach

  • “What does this method do?” – A developer asking a teammate.
  • “It depends. Which toggles are on?” – The answer to the first question.


Most of the challenges of each deployment process require team engagement and organization. It's hard to decide which one is best, because it fully depends on how your team adapts to the process.

Each person perceives problems in different ways. What is a huge issue for one person is just a small itch for another. But if you still want an answer to “What is the best option?”, I would say it is the same answer as to “Which challenges will your team endure best?”.

Even if you prefer one deployment process over the others, I still think no one should be totally attached to a single process forever. Your problems can change, your team can change, your company rules can change, your application can change. Therefore, your deployment process should change along with them to deal with the new challenges. You can change your deployment in many ways, for example by mixing ideas from each of the processes we discussed.

I'm curious to know how your team delivers features: whether you use one of these options, mix them, or do something different. If you want to share this knowledge, please leave a comment below.

XSS vulnerability on Simple Form
Wed, 26 Nov 2014

There is an XSS vulnerability in Simple Form's error options.

  • Versions affected: >= 2.0.0
  • Not affected: < 2.0.0
  • Fixed versions: 3.1.0, 3.0.3, 2.1.2


Impact

When Simple Form renders an error message, it marks the text as being HTML safe, even though it may contain HTML tags. In applications where the error message can be provided by the users, malicious values can be provided and Simple Form will mark them as safe.

Changes in behavior

To fix this vulnerability, error messages are now always escaped. If users need to mark them as safe, they will need to explicitly use the :error option:

f.input :name, error: raw('My error')
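The fixed behavior relies on standard HTML escaping. As a quick illustration, independent of Simple Form itself, here is what escaping does to a hostile message using Ruby's built-in ERB::Util:

```ruby
require "erb"

# A user-supplied "error message" carrying a script tag.
malicious = %(<script>alert("xss")</script>)

# Escaped (the fixed behavior): the tags are rendered as inert text.
escaped = ERB::Util.html_escape(malicious)
puts escaped  # => &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

With the fix, this escaping always happens unless you opt out via `raw`, as shown above.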


Releases

The 3.1.0, 3.0.3 and 2.1.2 releases are available at the regular locations.


Workarounds

There are no feasible workarounds for this issue. We recommend all users upgrade as soon as possible.


Patches

To aid users who aren't able to upgrade immediately, we have provided patches. They are in git-am format and consist of a single changeset.


Credits

Thanks to Jake Goulding from WhiteHat Security and Nicholas Rutherford from Medify Ltd. for reporting the issue and working with us on a fix.

Converting Erlang code into Elixir
Wed, 12 Nov 2014

When you are new to a language, you probably want to run some existing code just to see how it works. Achieving success while trying new things is important, because it helps fuel your interest.

The number of code examples in Elixir is increasing, but sometimes you will have to read some Erlang code. Recently, I wanted to play a little with the Cowboy HTTP server, which is written in Erlang. The Cowboy repo has a lot of small examples presenting the features it provides. When I tried to convert one of them to Elixir, it wasn't as simple as I expected, since I'm not so familiar with the language yet.

When converting, you may run into some misleading code that doesn't work as you expected at first. So, I'm going to present a transcoding of Cowboy's WebSocket server example from Erlang to Elixir, so that you can learn some of the details involved in porting Erlang code to Elixir.

This will not be a tutorial explaining how that Cowboy example works; it's just about how to convert it to Elixir. Also, I'm not going to show how it could be done in idiomatic Elixir; the goal here is to translate the Erlang code into Elixir in the simplest way possible.

So let’s start!

Creating the project

Create a project called ws_cowboy with the following command:

mix new ws_cowboy
cd ws_cowboy

After that we are going to change/create 4 files:

  • mix.exs: declares the dependencies in the project and the application module to run
  • lib/ws_cowboy.ex: the application module that sets up the Cowboy routes and HTTP server
  • lib/ws_handler.ex: handles a WebSocket request connection
  • lib/ws_supervisor.ex: supervisor for Cowboy server

Also, copy the whole priv directory from the Cowboy example to the project’s root dir.

The project definition

In the mix.exs file, we are going to add the Cowboy dependency and also configure the application module, in this case WsCowboy.

defmodule WsCowboy.Mixfile do
  use Mix.Project

  def project do
    [app: :ws_cowboy,
     version: "0.0.1",
     elixir: "~> 1.0",
     deps: deps]
  end

  # Configuration for the OTP application
  # Type `mix help` for more information
  def application do
    [applications: [:logger, :cowboy],
     mod: {WsCowboy, []}]
  end

  # Dependencies can be Hex packages:
  #   {:mydep, "~> 0.3.0"}
  # Or git/path repositories:
  #   {:mydep, git: "", tag: "0.1.0"}
  # Type `mix help deps` for more examples and options
  defp deps do
    [{:cowboy, "~> 1.0.0"}]
  end
end

Configuring the HTTP application

Here is the transcode of websocket_app.erl file to the ws_cowboy.ex file:

defmodule WsCowboy do
  @behaviour :application

  def start(_type, _args) do
    dispatch = :cowboy_router.compile([
      {:_, [
        {"/", :cowboy_static, {:priv_file, :ws_cowboy, "index.html"}},
        {"/websocket", WsHandler, []},
        {"/static/[...]", :cowboy_static, {:priv_dir, :ws_cowboy, "static"}}
      ]}
    ])
    {:ok, _} = :cowboy.start_http(:http, 100, [{:port, 8080}],
                                  [{:env, [{:dispatch, dispatch}]}])
    WsSupervisor.start_link()
  end

  def stop(_state) do
    :ok
  end
end
If you never read any Erlang code and you came from a language like Ruby, you might get confused on basic things. So, let’s go through some of the details of porting websocket_app.erl to the ws_cowboy.ex.

An Erlang file represents a module; in Elixir it is the same thing, but here we use the defmodule macro. Erlang modules can be accessed from Elixir as :<module_name>; in this case, we are declaring that this module has the application behaviour.

Unlike in other languages such as Ruby, lowercase names aren't variables in Erlang, but atoms, and in Elixir atoms look the same as Ruby symbols. So, we scanned every lowercase name and replaced it with :<name>.

Uppercase names in Erlang aren't constants but variables, so we changed them to lowercase. It is good to do this in reverse order, to avoid mixing up variables and atoms.

During the process of converting this file, there was a line making the application not work, and I spent some time trying to figure out what was wrong. In Ruby, 'foo' and "foo" are both strings, but in Elixir and Erlang they are different things. Single quotes in Erlang denote an atom (a symbol), so the '_' line must be converted to :_ in Elixir. If you miss this little detail, unfortunately the code will still compile and run, but Cowboy will always return a 400 status code.

Other than that, everything is pretty straightforward. The only detail is the :cowboy_static definition, where you have to use your own app name, in this case :ws_cowboy.

To transcode function calls, you just have to replace the : with a ., as in Ruby.

Handling the WebSocket connection

You can read more about how Cowboy handles WebSocket here. Here’s the transcode of the file ws_handler.erl to ws_handler.ex:

defmodule WsHandler do
  @behaviour :cowboy_websocket_handler

  def init({:tcp, :http}, _req, _opts) do
    {:upgrade, :protocol, :cowboy_websocket}
  end

  def websocket_init(_transport_name, req, _opts) do
    :erlang.start_timer(1000, self(), "Hello!")
    {:ok, req, :undefined_state}
  end

  def websocket_handle({:text, msg}, req, state) do
    {:reply, {:text, "That's what she said! #{msg}"}, req, state}
  end

  def websocket_handle(_data, req, state) do
    {:ok, req, state}
  end

  def websocket_info({:timeout, _ref, msg}, req, state) do
    :erlang.start_timer(1000, self(), "How' you doin'?")
    {:reply, {:text, msg}, req, state}
  end

  def websocket_info(_info, req, state) do
    {:ok, req, state}
  end

  def websocket_terminate(_reason, _req, _state) do
    :ok
  end
end

Following the steps done for the previous file, there is no secret to this one. The only detail is that the Erlang version used binary notation for strings, but in Elixir you can just use "string" normally. You can also use string interpolation: "That's what she said! #{msg}".

Writing the supervisor

Now there's just one translation missing, from websocket_sup.erl to ws_supervisor.ex. Here we just used __MODULE__ instead of Erlang's ?MODULE:

defmodule WsSupervisor do
  @behaviour :supervisor

  def start_link do
    :supervisor.start_link({:local, __MODULE__}, __MODULE__, [])
  end

  def init([]) do
    procs = []
    {:ok, {{:one_for_one, 10, 10}, procs}}
  end
end

Running your server

Running your server is pretty easy: just run the mix run --no-halt command and open localhost:8080 (the port configured above) in your browser.


The goal of this post was to show an example of how to port Erlang code into Elixir code. The result we got is the simplest translation possible; it was not my intention to write idiomatic Elixir here. For example, in Elixir you would not use Erlang's supervisor module, but rather Elixir's Supervisor.

I hope you got the picture of what it is like to translate Erlang code into Elixir, how it's not so hard, and some of the details you must pay attention to while doing it.

If you are interested in learning more about Elixir, check out the getting started page.

Did you have any issues when starting to play with Elixir that made you crack your head figuring out why something didn't work? Share your experiences and doubts with us!
