rack-devel archive mirror (unofficial) https://groups.google.com/group/rack-devel
* big responses to slow clients: Rack vs PSGI
@ 2016-11-15 23:10 Eric Wong
  2016-12-15 20:07 ` James Tucker
  0 siblings, 1 reply; 5+ messages in thread
From: Eric Wong @ 2016-11-15 23:10 UTC (permalink / raw)
  To: rack-devel

I've been poking around in Plack/PSGI for Perl5 for some months,
and in some ways I like it more than Rack.

This only covers server-agnostic web applications; IMHO exposing
applications to server-specific stuff defeats the purpose of
these common specs.

In Rack, one major problem I have is that streaming large
responses requires calling body.each synchronously.

When writing large responses to slow clients, this means a Rack
web server has two choices:


1) Block the calling Thread, Fiber, or process until the
   slow client can consume the response.  This hurts if you have
   many slow clients blocking all your threads.

      body.each { |buf| client.write(buf) }
      body.close

   Simple, but your app is at the mercy of how fast the client
   chooses to read the response.


2) Detect :wait_writable/:wait_readable (EAGAIN) when writing to
   the slow client and start buffering the response to memory or
   filesystem.

   This may lead to out-of-memory or out-of-storage conditions.

   nginx does this by default when proxying, so Rubyists are
   often unaware of this as it's common to use nginx in front
   of Rack servers for this purpose.

   Something like the following should handle slow clients
   without relying on nginx for buffering:

      require 'tempfile' # needed for filesystem buffering

      tmp = nil
      body.each do |buf|
        if tmp
          tmp.write(buf)
        else
          # the optimistic case:
          case ret = client.write_nonblock(buf, exception: false)
          when :wait_writable, :wait_readable # EAGAIN :<
            tmp = Tempfile.new(ret.to_s)
            tmp.write(buf)
          when Integer
            exp = buf.bytesize
            if exp > ret # partial write :<
               tmp = Tempfile.new('partial')
               tmp.write(buf.byteslice(ret, exp - ret))
            end
          end
        end
      end

      if tmp
        server_specific_finish(client, tmp, body)
      else
        body.close if body.respond_to?(:close)
      end

   Gross; but smaller responses never get buffered this way.
   Any server-specific logic is still contained within the
   server itself; the Rack app may remain completely unaware
   of how a server handles slow clients.



PSGI allows at least two methods for streaming large responses.
I will only cover the "pull" method of getline+close below.

Naively, getline+close can be used just like Rack method 1)
with body.each:

      # Note: "getline" in Plack/PSGI is not required to return
      # a "line", so it can behave like "readpartial" in Ruby.
      while (defined(my $buf = $body->getline)) {
          $client->write($buf);
      }
      $body->close;

...With all the problems of blocking on the $client->write call.

On the surface, the difference between Rack and PSGI here is
minor.


However, "getline" yielding control to the server entirely has a
significant advantage over the Rack app calling a Proc provided
by the server: The server can stop calling $body->getline once
it detects a client is slow.

      # For the non-Perl-literate, it's pretty similar to Ruby.
      # Scalar variables are prefixed with $, and method
      # calls are "$foo->METHOD" instead of "foo.METHOD" in Ruby;
      # if/else/elsif/while all work the same as in Ruby
      # I will over-comment here assuming readers here are not
      # familiar with Perl.

      # Make the client socket non-blocking, equivalent to
      # "IO#nonblock = true" in Ruby; normal servers would only
      # call this once after accept()-ing a connection.
      $client->blocking(0);

      my $blocked; # "my" declares a locally-scoped variable

      # "undef" in Perl are the equivalent of "nil" in Ruby,
      # so "defined" checks here are equivalent to Ruby nil checks
      while (defined(my $buf = $body->getline)) {
          # length($buf) is roughly buf.bytesize in Ruby;
          # I'll assume all data is binary since Perl's Unicode
          # handling confuses me no matter how many times I RTFM.
          my $exp = length($buf);

          # Behaves like Ruby IO#write_nonblock after the
          # $client->blocking(0) call above:
          my $ret = $client->syswrite($buf);

          # $ret is the number of bytes written on success:
          if (defined $ret) {
              if ($exp > $ret) { # partial write :<

                  # similar to String#byteslice in Ruby:
                  $blocked = substr($buf, $ret, $exp - $ret);

                  last; # break out of the while loop
              } # else { continue looping on while }

          # $! is the system errno from syswrite (see perlvar manpage
          # for details), $!{E****} just checks for $! matching the
          # particular error number.
          } elsif ($!{EAGAIN} || $!{EWOULDBLOCK}) {
              # A minor detail in this example:
              # this assignment is a copy, so equivalent to
              # "blocked = buf.dup" in Ruby, NOT merely
              # "blocked = buf".
              $blocked = $buf;

              last; # break out of the while loop
          } else {
              # Perl does not raise exceptions by default on
              # syscall errors, "die" is the standard exception
              # throwing mechanism:
              die "syswrite failed: $!\n";
          }
      }
      if (defined $blocked) {
          server_specific_finish($client, $blocked, $body);
      } else {
          $body->close;
      }

In both my Rack and PSGI examples, I have a reference to a
server_specific_finish call.  In the Rack example, this method
will stream the entire contents of tmp (a Tempfile) to the
client.
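
Here's a rough sketch (hypothetical, and blocking a thread for
simplicity) of what that finisher could look like on the Rack
side; a real server would register the client with its event
loop instead of sitting in IO.select like this:

      def server_specific_finish(client, tmp, body)
        tmp.rewind
        while buf = tmp.read(16384)
          until buf.empty?
            case ret = client.write_nonblock(buf, exception: false)
            when :wait_writable
              IO.select(nil, [client]) # wait for the slow client to drain
            when :wait_readable # possible with SSL sockets
              IO.select([client])
            when Integer # partial writes leave a remainder
              buf = buf.byteslice(ret, buf.bytesize - ret)
            end
          end
        end
        tmp.close! # close and unlink the Tempfile
        body.close if body.respond_to?(:close)
      end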

The problem is tmp in the Rack example may be as large as
the entire response.  This sucks for big responses.

In the PSGI example, the server_specific_finish call will only
have the contents of one buffer from $body->getline in memory at
a time.  The server will make further calls to $body->getline
when (and only when) the previous buffer is fully-written to the
client socket.  There is only one (app-provided) buffer in
server memory at once, not the entire response.

Both server_specific_finish calls will call the "close" method
on the body when the entire response is written to the client
socket.  Delaying the "close" call may make sense for logging
purposes in Rack, even if body.each has long finished running;
it is obviously required in the PSGI case since further
"getline" calls need to be made before "close".
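
As a concrete Rack example of why delaying "close" matters,
here's a sketch of timing middleware built on Rack::BodyProxy
(BodyProxy ships with rack itself; the logging here is made up):

      require 'rack' # for Rack::BodyProxy

      class LogDuration
        def initialize(app)
          @app = app
        end

        def call(env)
          start = Time.now
          status, headers, body = @app.call(env)
          body = Rack::BodyProxy.new(body) do
            # runs when the *server* calls body.close, i.e. only
            # after the last byte is written to the client
            env['rack.errors'].puts("took #{Time.now - start}s")
          end
          [status, headers, body]
        end
      end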


The key difference is that in Rack, the data is "pushed" to the
server by the Rack app.  In PSGI, the app may instead ask the
server to "pull" that data.


Anyways, thanks for reading this far.  I just felt like writing
something down for future Rack/Ruby-related projects.  I'm not
sure if Rack can change without breaking all existing apps
and middlewares.


* Re: big responses to slow clients: Rack vs PSGI
  2016-11-15 23:10 big responses to slow clients: Rack vs PSGI Eric Wong
@ 2016-12-15 20:07 ` James Tucker
  2016-12-24 23:15   ` Eric Wong
  2017-06-01 22:05   ` Eric Wong
  0 siblings, 2 replies; 5+ messages in thread
From: James Tucker @ 2016-12-15 20:07 UTC (permalink / raw)
  To: Rack Development


I agree with you, with a caveat: I think the pull model is better for
some specific use cases.

_ry and I had this debate at reasonable length back before node, when he
was writing flow and I prepped the first thin async patch/rack hack.

The fundamental problem here is a competition of use cases and ideals. It
would be ideal for server authors if Ruby had lightweight coroutines that
could be scheduled across threads and relinquish control on IO. You can
emulate some of this with Fibers, and Neverblock went really far down this
path, as did Goliath. There's really no substitute for real
user-schedulable goroutines and a decent IO subsystem though - we can
build the latter, but the former is squarely in MRI's control - and
they're now headed further down the path of thread isolation.

Skipping a very long background discussion and diatribe about these
choices in MRI, I will say that the chosen path is really not suitable
for very high-scale processes. To be clear, I'm not saying "ruby doesn't
scale" in the sense
that we can't use it for services that handle Mqps (millions of queries
per second) and up. You can - it's not cheap, but it's entirely doable,
and the engineering cost is on par with other choices, assuming
reasonable problem solving. What I am saying
is that Ruby, as designed and where it is headed, will not be ideal for
tens of thousands of open sockets, or more. It does not have enough
lightweight datastructures. It doesn't have the right IO primitives, and it
doesn't have the right concurrency primitives. The newer concurrency
primitives that are being discussed also make solving the general case of
this problem harder. The general case assumes that task latencies are
unpredictable, and in that space, you need a scheduler that can perform
memory and CPU offloading. MRI will not deliver either of those
capabilities.

The reason I describe the above is that if we accept that Ruby is only
good for limited scale and cost per process, then we can view the
problem a little differently. Instead of trying to force it, or
constantly work around those limitations in library-level systems, we
can work around the problem in flow-control and traffic-management
systems. Indeed this is what
we do today, though as you note, often by accident/side effect. The kinds
of catastrophic load-balancing challenges such as the Rap Genius saga
will remain, as will slow-client challenges. We can deal with them. At
scale you never really get to escape these anyway, except by engineering
systems to specifically combat those challenges. Either you're so fast and
efficient that attacks are impractical, or you start needing to put
appropriate limitations in place in upstream defenses. Ultimately the
former is brittle, and as such the latter eventually gets deployed by
SRE/sysadmins/ops folks.

I also don't mean to say that it would be bad to explore some more of
these ideas. I think it would be worthwhile, particularly if it could
lead to more examples of what servers need in order to be efficient, as
strong cases for MRI/Ruby
designers to consider. I also think there are good cases to be made for
alternative servers for specific use cases - I know you specifically,
Eric, have done great work in this area. I would encourage you, absolutely, to
freely depart from Rack for those use cases. I'd also be really happy to
eat my words if you find some way of taking on elixir style scalability
with MRI, though on a personal level I don't know if it's worth the time.
As always, thank you for everything you've done, and for the discussion.

On Nov 15, 2016 3:10 PM, "Eric Wong" <e@80x24.org> wrote:

> <snip: full quote of the original message, reproduced above>


* Re: big responses to slow clients: Rack vs PSGI
  2016-12-15 20:07 ` James Tucker
@ 2016-12-24 23:15   ` Eric Wong
  2016-12-27 16:00     ` James Tucker
  2017-06-01 22:05   ` Eric Wong
  1 sibling, 1 reply; 5+ messages in thread
From: Eric Wong @ 2016-12-24 23:15 UTC (permalink / raw)
  To: rack-devel

Thank you for your response.

As far as departing from Rack...  I guess PSGI was one departure :)

Looking back, I think the possibility of an ecosystem of
stdlib/gems to support lightweight coroutines was lost when MRI
got 1:1 threads with YARV.  Fibers, Neverblock, Goliath never
got the critical mass to affect stdlib or most gems after that.
So yeah, I agree this really needs core/stdlib support which I
doubt can still happen.  Coro for Perl5 is in a bad, perhaps
worse spot, even.

And yet I know my brain still favors OS kernel primitives over
language-level primitives; so in that way MRI/YARV today is
closer to how my brain works w.r.t. non-blocking I/O combined
with native threads or processes as needed.  *shrug*

Yet, it seems the Ruby mainstream is content with primitive
servers like unicorn.  I often wonder if the unintentional
popularity of unicorn set the Ruby ecosystem back 5-10 years in
terms of concurrency.  Likely so, but the damage is done :<
In my defense, I suck at marketing, so it's not my fault.


Anyways, my original post was really a reporting-in-from-hiatus
message.  I haven't done anything new with server design in over
5 years, and don't expect I will in the future; just occasional
janitorial work.  yahns was merely a repackaging and
consolidation of findings from Rainbows! as a "best of" release
with some warts removed.


* Re: big responses to slow clients: Rack vs PSGI
  2016-12-24 23:15   ` Eric Wong
@ 2016-12-27 16:00     ` James Tucker
  0 siblings, 0 replies; 5+ messages in thread
From: James Tucker @ 2016-12-27 16:00 UTC (permalink / raw)
  To: Rack Development


On Dec 24, 2016 7:15 PM, "Eric Wong" <e@80x24.org> wrote:

> Thank you for your response.
>
> As far as departing from Rack...  I guess PSGI was one departure :)
>
> Looking back, I think the possibility of an ecosystem of
> stdlib/gems to support lightweight coroutines was lost when MRI
> got 1:1 threads with YARV.  Fibers, Neverblock, Goliath never
> got the critical mass to affect stdlib or most gems after that.
> So yeah, I agree this really needs core/stdlib support which I
> doubt can still happen.  Coro for Perl5 is in a bad, perhaps
> worse spot, even.


Fibers as implemented lose their usefulness as scheduler primitives
because of their thread-local overloading and strict thread binding. If
that were fixed, they could be used like coroutines are used for
scheduling elsewhere.

> And yet I know my brain still favors OS kernel primitives over
> language-level primitives; so in that way MRI/YARV today is
> closer to how my brain works w.r.t. non-blocking I/O combined
> with native threads or processes as needed.  *shrug*


From a server/IO perspective this makes sense. The problem is the
app/user perspective: they ideally need to be able to "just write"
without managing buffers, non-blocking IO, multiplexing, etc. That's
what the stack solutions give them that no other solution does. I do
think a write-capable API is OK - that's why I introduced hijack, as
imperfect as it is. Of course hijack is hard to implement correctly in
the face of HTTP/1.0 and 1.1 support, especially if you try to support
pipelining. The split boolean design was to allow copy-on-write
cloneable environment objects, only allocating space for a bool and
some buffer pointers for most requests, with the hijack methods coming
from some preexisting server context.
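
For anyone who hasn't played with it, here is a minimal,
hypothetical sketch of full hijack in a config.ru; once the app
takes the socket, the server ignores the returned response
triple:

      run lambda { |env|
        if env['rack.hijack?']
          # Full hijack: the app takes ownership of the raw client
          # socket and the server stops managing the response.
          io = env['rack.hijack'].call # also set in env['rack.hijack_io']
          Thread.new do
            io.write("HTTP/1.1 200 OK\r\nConnection: close\r\n\r\n")
            io.write("streamed at the app's own pace\n")
            io.close
          end
          [-1, {}, []] # placeholder; ignored after a full hijack
        else
          [501, { 'Content-Type' => 'text/plain' }, ["hijack unsupported\n"]]
        end
      }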

> Yet, it seems the Ruby mainstream is content with primitive
> servers like unicorn.  I often wonder if the unintentional
> popularity of unicorn set the Ruby ecosystem back 5-10 years in
> terms of concurrency.  Likely so, but the damage is done :<
> In my defense, I suck at marketing, so it's not my fault.


Unicorn was about deploying Rails easily for most people, which it
really did. Thin had some other challenges, from the fact that the EM
build often had issues with OpenSSL, to unintended buffering issues that
are fine for short requests but very bad for long requests.

Goliath tried, with stronger marketing, to address the concurrency
issues, but Goliath's baseline performance was too far behind more basic
approaches to make it a viable option. Anyone who benched their apps
and/or didn't tune explicitly for it will have seen this very clearly.

> Anyways, my original post was really a reporting-in-from-hiatus
> message.  I haven't done anything new with server design in over
> 5 years, and don't expect I will in the future; just occasional
> janitorial work.  yahns was merely a repackaging and
> consolidation of findings from Rainbows! as a "best of" release
> with some warts removed.


I'd love to see it happen. I think about doing something periodically,
but for general user use cases it's quite a lot of work (as one has to
write good primitives first). I don't really deploy much Ruby anymore;
my last bit is probably going away by 2018. Maybe I'll be back, maybe
work will bring me back, but for now, it's someone else's race.

Happy Christmas and New Year!



* Re: big responses to slow clients: Rack vs PSGI
  2016-12-15 20:07 ` James Tucker
  2016-12-24 23:15   ` Eric Wong
@ 2017-06-01 22:05   ` Eric Wong
  1 sibling, 0 replies; 5+ messages in thread
From: Eric Wong @ 2017-06-01 22:05 UTC (permalink / raw)
  To: rack-devel

James Tucker <jftucker@gmail.com> wrote:
> The fundamental problem here is a competition of use cases and ideals. It
> would be ideal for server authors if Ruby had lightweight coroutines that
> could be scheduled across threads and relinquish control on IO. You can
> emulate some of this with Fibers, and Neverblock went really far down this
> path, as did Goliath. There's really no substitute for real user
> schedulable goroutines and a decent IO subsystem though - we can build the
> latter, but the prior is squarely in MRI's control - and they're now headed
> further down the path of thread isolation.

Heh, I guess this thread inspired me a bit and I'm at least
proposing Fiber#start + auto-scheduling Fibers for MRI:

	https://bugs.ruby-lang.org/issues/13618

Fibers can't be scheduled across threads, yet(*); but
reimplementing 1.8 green threads with a different name is
at least getting under way...


(*) it might be possible if we gave up getcontext/setcontext
    optimizations with FIBER_USE_NATIVE in cont.c  *shrug*
    ko1 was going to work on general context switching
    improvements for 2.5.

<snip>

> tens of thousands of open sockets, or more. It does not have enough
> lightweight datastructures. It doesn't have the right IO primitives, and it
> doesn't have the right concurrency primitives. The newer concurrency
> primitives that are being discussed also make solving the general case of
> this problem harder. The general case assumes that task latencies are
> unpredictable, and in that space, you need a scheduler that can perform
> memory and cpu offloading. MRI will not deliver either of those
> capabilities.

*shrug*  Honestly I have no clue when it comes to designing
APIs for Ruby.   That's for matz and others, I guess...

AFAIK, ko1 is working on reducing Fiber costs...

I'll just be happy if I can get memory usage of my Ruby
processes down to what they were around a decade ago.  Of
course, I'm being motivated by continuing to use hardware which
was weak and outdated a decade ago :x

<snip>

> I also don't mean to say that it would be bad to explore some more of these
> ideas. I think it would, particularly if it could lead to more examples of
> what servers need in order to be efficient, as strong cases for MRI/Ruby
> designers to consider. I also think there are good cases to be made for
> alternative servers for specific use cases - I know you specifically Eric
> have done great work in this area. I would encourage you, absolutely, to
> freely depart from Rack for those use cases. I'd also be really happy to
> eat my words if you find some way of taking on elixir style scalability
> with MRI, though on a personal level I don't know if it's worth the time.
> As always, thank you for everything you've done, and for the discussion.

*shrug*  The actual hacking is pretty much mindless zombie work
at this point.  Designing APIs is hard, so I steal, or let
others deal with it.  Stealing is good at least because it
should be easier to port existing code over.

I dunno much outside of C/Ruby/Perl5, so I'm guessing "elixir
style scalability" is really great (maybe like Go+goroutines)?
As far as network servers/clients go, memory overhead
per-connection (also idle vs active) seems to be the metric to
go with.

Some MRI structs are pretty big, but work is slowly underway on
shrinking them...  And rb_io_t is in the public API, sadly :<

Still, I'm not sure if anything done with MRI could match
my benchmark for Linux/*BSD server scalability at the moment:
128 bytes per-connected client(**), (+ 128 when active) and
being able to read/write across dozens (perhaps hundreds) of
rotational disks simultaneously without starvation:

	https://bogomips.org/cmogstored/design.txt
	https://bogomips.org/cmogstored/queues.txt
	git clone git://bogomips.org/cmogstored

The internal API is enum + switch/case-based; so it's more error-prone
if done in languages without enum return values.
yahns is nearly identical in design; but Ruby+Rack adds a
truckload of overhead and unpredictability with GC :<

(**) 64 bytes is doable without too many sacrifices, even;
     but I'm not sure it's worth the complexity + effort
     given the size of underlying kernel structs.


end of thread, other threads:[~2017-06-01 22:05 UTC | newest]

Thread overview: 5+ messages
-- links below jump to the message on this page --
2016-11-15 23:10 big responses to slow clients: Rack vs PSGI Eric Wong
2016-12-15 20:07 ` James Tucker
2016-12-24 23:15   ` Eric Wong
2016-12-27 16:00     ` James Tucker
2017-06-01 22:05   ` Eric Wong
