rack-devel archive mirror (unofficial) https://groups.google.com/group/rack-devel
From: Eric Wong <e@80x24.org>
To: rack-devel@googlegroups.com
Subject: Re: big responses to slow clients: Rack vs PSGI
Date: Thu, 1 Jun 2017 22:05:40 +0000	[thread overview]
Message-ID: <20170601220540.GA22960@starla> (raw)
In-Reply-To: <CABGa_T-jwSwA_upw29LpE9ab9-x4qQt5wz9yO-=jU7bAi1iTfg@mail.gmail.com>

James Tucker <jftucker@gmail.com> wrote:
> The fundamental problem here is a competition of use cases and ideals. It
> would be ideal for server authors if Ruby had lightweight coroutines that
> could be scheduled across threads and relinquish control on IO. You can
> emulate some of this with Fibers, and Neverblock went really far down this
> path, as did Goliath. There's really no substitute for real
> user-schedulable goroutines and a decent IO subsystem though - we can
> build the latter, but the former is squarely in MRI's control - and
> they're now headed further down the path of thread isolation.

Heh, I guess this thread inspired me a bit and I'm at least
proposing Fiber#start + auto-scheduling Fibers for MRI:

	https://bugs.ruby-lang.org/issues/13618

Fibers can't be scheduled across threads, yet(*); but
reimplementing 1.8 green threads with a different name is
at least getting under way...


(*) it might be possible if we gave up getcontext/setcontext
    optimizations with FIBER_USE_NATIVE in cont.c  *shrug*
    ko1 was going to work on general context switching
    improvements for 2.5.
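
To give an idea of the API I have in mind there (just a sketch of
my reading of the ticket, written as Fiber.start by analogy with
Thread.start; none of this is a shipped MRI interface):

	require 'socket'

	srv = TCPServer.new(8080)
	loop do
	  io = srv.accept
	  # hypothetical: Fiber.start creates an auto-scheduled Fiber;
	  # blocking IO inside it implicitly yields to other runnable
	  # Fibers on the same thread instead of blocking the thread
	  Fiber.start do
	    while (buf = io.read(16384))
	      io.write(buf)
	    end
	    io.close
	  end
	end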

<snip>

> tens of thousands of open sockets, or more. It does not have enough
> lightweight data structures. It doesn't have the right IO primitives, and it
> doesn't have the right concurrency primitives. The newer concurrency
> primitives that are being discussed also make solving the general case of
> this problem harder. The general case assumes that task latencies are
> unpredictable, and in that space, you need a scheduler that can perform
> memory and cpu offloading. MRI will not deliver either of those
> capabilities.

*shrug*  Honestly I have no clue when it comes to designing
APIs for Ruby.   That's for matz and others, I guess...

AFAIK, ko1 is working on reducing Fiber costs...

I'll just be happy if I can get the memory usage of my Ruby
processes down to what it was around a decade ago.  Of
course, I'm being motivated by continuing to use hardware which
was weak and outdated a decade ago :x

<snip>

> I also don't mean to say that it would be bad to explore some more of
> these ideas. I think it would be worthwhile, particularly if it could
> lead to more examples of what servers need in order to be efficient,
> as strong cases for MRI/Ruby
> designers to consider. I also think there are good cases to be made for
> alternative servers for specific use cases - I know you specifically Eric
> have done great work in this area. I would encourage you, absolutely, to
> freely depart from Rack for those use cases. I'd also be really happy to
> eat my words if you find some way of taking on elixir style scalability
> with MRI, though on a personal level I don't know if it's worth the time.
> As always, thank you for everything you've done, and for the discussion.

*shrug*  The actual hacking is pretty much mindless zombie work
at this point.  Designing APIs is hard, so I steal, or let
others deal with it.  Stealing is good at least because it
should be easier to port existing code over.

I dunno much outside of C/Ruby/Perl5, so I'm guessing "elixir
style scalability" is really great (maybe like Go+goroutines)?
As far as network servers/clients go, memory overhead
per-connection (both idle and active) seems to be the metric
to go with.
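
One crude way to eyeball that metric (my own quick hack, not from
any server's test suite) is to hold N idle connections open
against a server and divide its RSS growth by N:

	require 'socket'

	# crude idle-connection cost probe (Linux-only: reads /proc)
	# usage: ruby probe.rb SERVER_PID [PORT]
	pid  = Integer(ARGV[0])        # PID of the server under test
	port = (ARGV[1] || 8080).to_i  # assumed: server listens here
	n    = 5000                    # mind "ulimit -n" for this

	rss_kb = ->(p) {
	  File.read("/proc/#{p}/status")[/^VmRSS:\s+(\d+)/, 1].to_i
	}

	before = rss_kb.(pid)
	conns = Array.new(n) { TCPSocket.new('127.0.0.1', port) }
	sleep 1 # give the server time to finish accepting
	printf("~%0.1f KB per idle connection\n",
	       (rss_kb.(pid) - before).fdiv(n))
	conns.each(&:close)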

Some MRI structs are pretty big, but there's also slow, ongoing
work on shrinking them...  And rb_io_t is in the public API,
sadly :<

Still, I'm not sure anything done with MRI could match
my benchmark for Linux/*BSD server scalability at the moment:
128 bytes per connected client(**) (+128 when active), and
being able to read/write across dozens (perhaps hundreds) of
rotational disks simultaneously without starvation:

	https://bogomips.org/cmogstored/design.txt
	https://bogomips.org/cmogstored/queues.txt
	git clone git://bogomips.org/cmogstored
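
The disk side boils down to per-device queues with a bounded
number of workers per device, so one slow disk only stalls its
own requests.  A toy Ruby rendering of that idea from queues.txt
(names are mine, not cmogstored's actual C internals):

	# one queue + a small worker pool per disk; a saturated
	# disk then blocks only jobs destined for it, never the
	# whole server
	queues = Hash.new do |h, dev|
	  q = h[dev] = Queue.new
	  2.times do # bound concurrent IO per rotational disk
	    Thread.new do
	      while (job = q.pop)
	        job.call
	      end
	    end
	  end
	  q
	end

	# callers enqueue IO jobs onto the disk owning the file:
	queues['/dev/sda'] << -> { File.binread('/srv/mog/sda/blob') }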

The internal API is enum + switch/case-based, so it's more
error-prone if done in languages without enum return values.
yahns is nearly identical in design; but Ruby+Rack adds a
truckload of overhead and unpredictability with GC :<
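
Condensed to its skeleton in Ruby (symbols standing in for the
C enum; this is my own toy echo server, not actual yahns code):

	require 'socket'

	# step() runs one client until it would block, then returns
	# a symbol telling the event loop what the client needs next
	def step(io)
	  case buf = io.read_nonblock(16384, exception: false)
	  when nil then :close                   # EOF
	  when :wait_readable then :wait_readable
	  else
	    io.write(buf) # cheating: real code buffers partial writes
	    :wait_readable
	  end
	end

	srv = TCPServer.new(8080)
	readers = [srv]
	loop do
	  ready, = IO.select(readers)
	  ready.each do |io|
	    next readers << srv.accept_nonblock if io.equal?(srv)
	    # the dispatch switch; a C compiler warns when an enum
	    # value goes unhandled, symbols get no such check:
	    case step(io)
	    when :wait_readable then nil # stay armed for reading
	    when :close then readers.delete(io).close
	    else raise "BUG: unhandled step return value"
	    end
	  end
	end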

(**) 64 bytes is doable without too many sacrifices, even;
     but I'm not sure it's worth the complexity + effort
     given the size of underlying kernel structs.


Thread overview: 5+ messages
2016-11-15 23:10 big responses to slow clients: Rack vs PSGI Eric Wong
2016-12-15 20:07 ` James Tucker
2016-12-24 23:15   ` Eric Wong
2016-12-27 16:00     ` James Tucker
2017-06-01 22:05   ` Eric Wong [this message]
