{"id":2820,"date":"2012-06-15T11:00:36","date_gmt":"2012-06-15T14:00:36","guid":{"rendered":"http:\/\/blog.plataformatec.com.br\/?p=2820"},"modified":"2012-06-16T14:55:05","modified_gmt":"2012-06-16T17:55:05","slug":"why-your-web-framework-should-not-adopt-rack-api","status":"publish","type":"post","link":"https:\/\/blog.plataformatec.com.br\/2012\/06\/why-your-web-framework-should-not-adopt-rack-api\/","title":{"rendered":"Why your web framework should not adopt Rack API"},"content":{"rendered":"

Or, even better, why your web framework should not adopt a CGI-based API.

For the past few years I have been closely studying and observing the development of different emerging languages, with a special focus on web frameworks and servers. Unfortunately, most of the new web frameworks are following the Rack/WSGI specification, which may be a mistake depending on the platform you are targeting (particularly true for Erlang and Node.js, which have very strong streaming foundations that are part of their stacks by default).

This blog post is an attempt to detail the limitations of Rack/CGI-based APIs that the Rails Core Team found while working on the streaming feature that shipped with Rails 3.1, and why we need better abstractions in the long term.

### Case study

The use case we have in mind here is streaming. In Rails, we have focused on streaming as a way to quickly return the head of the HTML page to the browser, so the browser can start downloading assets (like JavaScript and stylesheets) while the server is generating the rest of the page. There is a great entry on the Rails weblog about streaming in general, and a Railscast if you want to focus on how to use it in your Rails applications. However, streaming is not limited to HTML responses: it can also be useful in API endpoints, for example to stream search results as they pop up or to synchronize with mobile devices.

### The Rack specification in a nutshell

In Rack, the response is an array with three elements: `[status, headers, body]`.

The body can be any object that responds to the method `each`. This means streaming can be done by passing an object that will, for example, lazily read a file and stream chunks as `each` is called.
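As an illustration (the `FileStreamer` name is mine, not part of any specification), a minimal sketch of such a lazy body could look like this:

```ruby
# A hypothetical lazy body: nothing is read from disk until the server
# calls #each, and chunks are yielded one at a time as they are read.
class FileStreamer
  CHUNK_SIZE = 16 * 1024

  def initialize(path)
    @path = path
  end

  def each
    File.open(@path, "rb") do |file|
      while (chunk = file.read(CHUNK_SIZE))
        yield chunk
      end
    end
  end
end
```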

A Rack application is any object that implements the method `call` and receives an environment hash/dictionary with the request information. When I said above that most new web frameworks are following the Rack specification, it is because they are introducing an API similar to the one just described.
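Putting the two together, a bare-bones (and purely illustrative) Rack application is just:

```ruby
class HelloApp
  # Any object responding to #call(env) and returning the
  # [status, headers, body] triple is a valid Rack application.
  def call(env)
    body = ["<html><body>Hello from Rack</body></html>"] # arrays respond to #each
    [200, { "Content-Type" => "text/html" }, body]
  end
end
```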

### The issue

In order to understand the issue, we will consider three entities: the client, the server and the application. The client (for example a browser) sends a request to the server, which forwards it to an application. In this case, the server and the application communicate via the Rack API.

The issue in streaming cases is that returning a response from the application does not mean the application has finished processing. For example, consider a middleware (a middleware is an object that sits between the server and our application) that checks out a database connection for the duration of the request and checks it back in afterwards:

```ruby
def call(env)
  connection = DB.checkout_connection
  env["db.connection"] = connection
  @app.call(env)
ensure
  DB.checkin_connection connection
end
```

Without streaming, it would work as follows:

1. The server receives a request and passes it down the stack
2. The request reaches the middleware
3. The middleware checks out the connection
4. The application is invoked, renders a view accessing the database using the connection and returns the rendered view as a string
5. The middleware checks the connection back in
6. The response is sent back to the client

With streaming, this would happen:

1. The server receives a request and passes it down the stack
2. The request reaches the middleware
3. The middleware checks out the connection
4. The app is called but does not render anything. Instead, it returns a lazy object as the response body, which will stream the HTML page in chunks as its `each` method is called
5. The middleware checks the connection back in
6. Back in the server, we receive the lazy body and start streaming it
7. While streaming the body, since it is lazily calculated, this is when it must access the database. But because the middleware has already checked the connection back in, our code fails with a “not connected” exception (a sketch of such a lazy body follows this list)
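A minimal sketch of such a lazy body (the names and the connection API below are illustrative, not Rails internals) makes the ordering problem explicit: the database is only touched inside `each`, which the server calls after the middleware's `ensure` block has already run:

```ruby
# Illustrative only: the query runs inside #each, i.e. only when the
# server starts streaming -- by which time the middleware has already
# checked the connection back in.
class LazyBody
  def initialize(connection)
    @connection = connection
  end

  def each
    yield "<html><head>...</head><body>" # the head can be flushed right away
    @connection.query("SELECT * FROM posts") do |row| # fails: "not connected"
      yield "<p>#{row['title']}</p>"
    end
    yield "</body></html>"
  end
end
```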

The first reaction to this issue is to ensure that all streaming happens inside the application, i.e. the application would have a mechanism to stream the response and would only return the Rack response once it is done. However, if the application does this, any middleware that wants to modify the headers or the response body won't be able to do so, because the response has already been streamed from inside the application.

Our work-around for Rails was to create proxies that wrap the response body:

```ruby
def call(env)
  connection = DB.checkout_connection
  env["db.connection"] = connection
  response = @app.call(env)
  # The proxy invokes the block only after the body has been fully streamed.
  ResponseProxy.new(response).on_close do
    DB.checkin_connection connection
  end
end
```

However, this is inefficient and extremely limited (not all middleware can be converted to such an approach). For streaming to be successful, the underlying server API needs to acknowledge that the headers and the response body can be sent at different times. Not only that, it needs to provide proper callbacks around the response life cycle (before sending headers, when the response is closed, on each streamed chunk, etc.).

The trade-off here is that this can no longer be achieved with an API as simple as Rack's. In general, we would like to have a response object that provides several life-cycle hooks. For example, the middleware above could be rewritten as:

```ruby
def call(request, response)
  connection = DB.checkout_connection
  request.env["db.connection"] = connection
  response.on_close { DB.checkin_connection(connection) }
  @app.call(request, response)
end
```

The Java Servlet specification is a good example of how request and response objects could be designed to provide such hooks.
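As a rough sketch of what such a response object could provide (this is not an existing framework's API, just an illustration of the idea), the hooks would be registered by middleware and fired by the server once it is done writing to the socket:

```ruby
# Sketch of a response object with life-cycle hooks. The server, not the
# application, decides when each hook fires.
class Response
  def initialize
    @close_callbacks = []
  end

  # Registered by middleware and applications.
  def on_close(&block)
    @close_callbacks << block
  end

  # Invoked by the server after the last chunk has been written.
  def close
    @close_callbacks.each(&:call)
  end
end
```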

### Other middleware

In the example above I focused on the database connection middleware, but this limitation exists, in one way or another, in the majority of middleware in a stack. For example, a middleware that rescues any exception raised inside the application in order to render a 500 page also needs to be adapted. Other middleware simply won't work. For instance, Rails ships with a middleware that provides an ETag header based on the body, which has to be disabled when streaming.
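To see why, consider what a body-based ETag middleware has to do. In the simplified sketch below (illustrative, not Rails' actual implementation), the digest can only be computed after buffering the entire body, which defeats streaming altogether:

```ruby
require "digest/md5"

# Simplified sketch: the digest is only available after reading the whole
# body, so the lazy body is forced to run up front and nothing is
# actually streamed to the client.
class SimpleETag
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)

    buffer = String.new
    body.each { |chunk| buffer << chunk }

    headers["ETag"] = %("#{Digest::MD5.hexdigest(buffer)}")
    [status, headers, [buffer]]
  end
end
```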

### Looking back

Does this mean moving to Rack was a mistake? **Not at all**. Rack appeared when the Ruby web development community was fragmented, and the simplicity of the Rack API made it possible to unify the different web frameworks and web servers available. Looking back, I would take the standardization provided by Rack any day, regardless of the limitations it brings. Now that we have a standard, we are already working on addressing such issues, which leads us to…

### Looking forward

Streaming will become more and more important. While working with HTML streaming requires special attention, both technically and in terms of usability, as outlined in Rails' documentation, API endpoints could benefit from it at basically no extra cost. Not only that, HTML5 features like server-sent events could easily be built on top of streaming without requiring a specific endpoint in your application to handle them.
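For example, assuming a server that writes each chunk to the socket as soon as it is produced (instead of buffering the whole body), a server-sent events endpoint is little more than a body that yields `data:` frames over time. The class below is purely illustrative:

```ruby
# Illustrative only: a body that emits one server-sent event per second.
class Ticker
  def each
    5.times do |i|
      yield "data: tick #{i}\n\n"
      sleep 1
    end
  end
end

# Used as a Rack-style response:
#   [200, { "Content-Type" => "text/event-stream" }, Ticker.new]
```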

While CGI was originally streaming friendly, the abstractions we built on top of it (like middleware) are not. I believe web frameworks should move towards better connection/socket abstractions and away from the old CGI-based APIs, which served us well, but it is finally time for us to let them go.

PS: Thanks to Aaron Patterson (who has also written about this issue on his blog), Yehuda Katz, Konstantin Haase and James Tucker for early review and feedback.

### F.A.Q.

This section was added after the blog post was published, based on some common questions.

**Q:** Isn't it a bad idea to mix both streaming and non-streaming behavior in the same stack?

That depends on the stack. This is definitely not an issue with Erlang and Node.js, since both stacks are streaming based. In Ruby, I believe a threaded JRuby or Thin will allow you to get away with keeping a socket open waiting for responses, but it will probably turn out to be a bad idea with other servers, since the process holding the socket won't be able to respond to any other request.

**Q:** Is there a need to do everything streaming based when a request/response would be fine?

No, there is no need. The point of this blog post is **not** to advocate for streaming-only frameworks, but simply to state that a Rack API may severely limit your streaming capabilities in case your platform supports them. Personally, I would like to be able to choose and mix both, if my stack allows me to do so.
