On Tue, Jun 7, 2011 at 10:20 AM, James Tucker <jftucker@gmail.com> wrote:
The problem is, this isn't simple. Different servers have different scheduling mechanisms, and a deferred-operations specification would reach into scheduling in a horrible way.
On Jun 3, 2011, at 4:23 PM, ghazel wrote:
> It seems to me that Rack is in need of a new post-response stage of
> processing. This stage would occur after the response is fully written
> and the client is unblocked, and before the next request is processed.
>
> Similar to what OobGC ( http://bogomips.org/unicorn.git/tree/lib/unicorn/oob_gc.rb#n59
> ) accomplishes, it is sometimes useful to perform additional
> operations after the response is written without blocking the client.
> For example, the Oink middleware logs statistics about the request,
> but it blocks the response because it has no way not to (
> https://github.com/noahd1/oink/blob/4158d71bc9150f011072b2c6eefe73c720a78d46/lib/oink/middleware.rb#L16
> ). This processing takes time and needlessly delays the response.
>
> This proposal would entail something like a single function that is
> called on each middleware after the response is written to the client
> and the socket is closed (depending on the server implementation).
> For servers that cannot avoid blocking the client or delaying further
> requests, the function should still be called; the impact would be
> similar to today's behavior.
>
> Thoughts?
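To make the questions below concrete, here is a minimal sketch of what the proposal seems to be asking for. The method name `post_response` and the way a server would invoke it are my assumptions, not part of any Rack spec:

```ruby
# Hypothetical sketch of the proposed post-response stage.
# `post_response` and its invocation by the server are assumptions.
class StatsLogger
  def initialize(app)
    @app = app
  end

  # Normal request/response phase: stay fast, just record timings.
  def call(env)
    start = Time.now
    status, headers, body = @app.call(env)
    env['stats.elapsed'] = Time.now - start
    [status, headers, body]
  end

  # Hypothetically called by the server after the response is written
  # and the socket is closed; the client is no longer waiting on this.
  def post_response(env)
    $stderr.puts "request took #{env['stats.elapsed']}s"
  end
end
```

Even in this tiny sketch the scheduling questions are unanswered: nothing says which thread runs `post_response`, or in what order across the middleware stack.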
Should these run linearly? Should they be able to be pooled when env['rack.multithread'] is set? In that case, should they receive the same number of workers as the main request/response pool? Should they share that pool?
And that's just some basics with threads...
You can quite easily handle this on your own today, in middleware or in servers, in a number of ways, without introducing either far-reaching / extensive specs or incomplete restrictions that parallel ones we already have (like stack-based control).
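For instance, some servers expose an env['rack.after_reply'] array of callables that they run after the response has been written. A middleware can opportunistically use it and fall back to inline work elsewhere; this is a sketch assuming that convention, not a spec requirement:

```ruby
# Middleware that defers its logging until after the response is
# written, using the server-provided `rack.after_reply` array where
# available. Falls back to running the work inline when it isn't.
class DeferredLogger
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    work = -> { log_stats(env, status) }
    if env['rack.after_reply'].respond_to?(:<<)
      env['rack.after_reply'] << work  # server runs this after the socket is flushed
    else
      work.call                        # no server support: do it now, blocking
    end
    [status, headers, body]
  end

  private

  def log_stats(env, status)
    $stderr.puts "#{env['REQUEST_METHOD']} #{env['PATH_INFO']} -> #{status}"
  end
end
```

Note this keeps the scheduling decision in the server's hands: the middleware only appends a callable, and the server decides when and on which thread to run it.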