Organizing microservices in a single git repository

Microservices have gained popularity recently, and some projects I’ve worked on have followed this approach. In essence, it’s a software architecture style that breaks a monolithic application into smaller, decoupled, business-oriented and independently deployable applications.

Each microservice is normally hosted in its own git repository, since it has well-defined business boundaries and its code must be isolated from other microservices to ensure decoupling and deployment independence.

This can work well if you organize teams around microservices: if a team is responsible for a given microservice and won’t work on the others, that organization may be good enough.

During our projects, we at Plataformatec learned that it is not very productive to focus on isolated parts of a feature. Instead, we design and develop features as a whole, the way they are perceived by the end user. We don’t work with application specialists; we work with generalists and a lot of communication through pull requests.

So the best fit for the way we work, as our experience has shown, is to put all the microservices and the clients that consume them into a single git repository. It may sound weird or even semantically wrong to some, but after all, those microservices are small parts of a bigger whole: a software ecosystem. Since they share and exchange information with each other, they are inherently connected.

This pragmatic approach is not exclusive to us; many others apply it, with Facebook and Google as two well-known examples. Of course, their codebases are far larger than a typical application’s, so they are exceptions. Google’s single repository, for instance, even holds low-level information such as operating system configurations.

Using a single repository has proven to be a very good practice for us: we can track relevant pull requests more easily; we can refactor, create and test new features across all the microservices faster; and we can test their integration without leaving the current context. Project gardening also becomes much simpler: upgrading the Ruby and Rails versions, updating gems, sharing code through path-based gems, and running tests and deploys can all be automated across every microservice.

Have you worked with a single or multiple repositories? Please share your thoughts about it in the comments below!

  • milgner

    My preferred solution is to have individual repositories for each project and then tie them together in a main repository via git submodules. That way each project has its individual commit history and branches while still allowing version tagging in the parent repository.

    Sure, it adds a slight overhead and you have to `git submodule update` regularly but that can be easily alleviated using git hooks and/or aliases.
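    The hook approach mentioned above can be sketched like this (a minimal example, not the commenter’s actual setup; the alias name `spull` is an arbitrary choice):

    ```shell
    #!/bin/sh
    # Sketch of a .git/hooks/post-merge (or post-checkout) hook that
    # keeps submodules in sync automatically after pulls and checkouts.
    git submodule update --init --recursive
    ```

    Alternatively, an alias can bundle the pull and the submodule update into one command: `git config alias.spull '!git pull && git submodule update --init --recursive'`.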

  • We have multiple repositories, and we use a bunch of bash scripts to manage that. My feeling is that most features that we develop have one service that is most involved, and that is where we open the pull request. If that’s not the case, then maybe the new feature is too big.

  • Here we use this solution too. Works fine for us.

  • That’s what we use too, to handle the custom gems required by the project, but we don’t often need to work on them. If those gems were constantly changing to keep up with the main project, we’d have to re-evaluate and see whether getting rid of the submodules would lead to a better experience. Anyway, I don’t feel any pain working with submodules, as I’ve been used to them for years, so this may just be a matter of personal preference on how to manage the small pieces. I’d probably be fine with either submodules or a single repository.

  • Richard Santos

    How about your deployment process?

  • MatthewWAdams

    While I don’t disagree with any individual thing you’ve said here, the cumulative effect is that you

    1) version all your microservices together

    2) upgrade common dependencies together

    3) tightly couple microservices in subsystems

    4) commonly change several coupled microservices to implement a feature

    That doesn’t sound to me like you are talking about microservices at all. What I think you are describing is a standard component-based application architecture.

    Microservices add additional design constraints: e.g. small surface-area (single function) services which tend to be built once, well tested at the public API (integration) level, which is strongly defined and long outlasts any particular implementation. Updates fix bugs rather than add features; additional features necessarily imply a new microservice.

    Within that context, a feature-based development focus is certainly a good approach, and is not incompatible with the rapid creation of new microservices. These can be developed in the parent repo until the feature is ready (while their APIs are being discovered and iterated), and migrated into their own repositories once they have stabilized at the end of the particular feature delivery sprint.

  • Gustavo Dutra

    I’d say it depends. You can deploy normally and run only the relevant services on a given machine. Or it can be part of the deploy process to remove unused code from the deployed machine. It differs from deploy tool to deploy tool; maybe I can help you better if you have a specific deploy question.