ruby-core@ruby-lang.org archive (unofficial mirror)
From: chris@chrisseaton.com
To: ruby-core@ruby-lang.org
Subject: [ruby-core:98248] [Ruby master Feature#16786] Light-weight scheduler for improved concurrency.
Date: Sun, 10 May 2020 14:17:56 +0000 (UTC)	[thread overview]
Message-ID: <redmine.journal-85489.20200510141755.3344@ruby-lang.org> (raw)
In-Reply-To: redmine.issue-16786.20200414143858.3344@ruby-lang.org

Issue #16786 has been updated by chrisseaton (Chris Seaton).


I recently did a deep dive into this approach and how it would fit into the Ruby ecosystem as part of my work on TruffleRuby.

I think what is being proposed here looks like a very practical idea for improving concurrency on Ruby, for the common use-case of applications doing a lot of IO with many clients. The core idea is simple to explain and understand, which I think is the real strong point here.

I also appreciate how the proposal has been architected to have a pluggable backend. As a researcher, that means we're free to experiment with some more radical ideas while running the same user code.

If you didn't know, TruffleRuby implements fibres as threads, which is also what JRuby does. This is because the JVM doesn't have any lightweight threading mechanism at the moment. The JVM will get fibres through Project Loom, and some experimental work we did to integrate this into TruffleRuby was promising. It should work the same on JRuby. I'm planning to implement this issue in TruffleRuby for experimentation, even if we don't have the expected performance characteristics of fibres yet.


----------------------------------------
Feature #16786: Light-weight scheduler for improved concurrency.
https://bugs.ruby-lang.org/issues/16786#change-85489

* Author: ioquatix (Samuel Williams)
* Status: Open
* Priority: Normal
----------------------------------------
# Abstract

We propose to introduce a lightweight fiber scheduler to improve the concurrency of Ruby code with minimal changes.

# Background

We have been discussing and considering options to improve Ruby scalability for several years. More context can be provided by the following discussions:

- https://bugs.ruby-lang.org/issues/14736
- https://bugs.ruby-lang.org/issues/13618

The final Ruby Concurrency report provides some background on the various issues considered in the latest iteration: https://www.codeotaku.com/journal/2020-04/ruby-concurrency-final-report/index

# Proposal

We propose to introduce the following concepts:

- A `Scheduler` interface which provides hooks for user-supplied event loops.
- Non-blocking `Fiber` which can invoke the scheduler when it would otherwise block.

## Scheduler

The per-thread fiber scheduler interface is used to intercept blocking operations. A typical implementation would be a wrapper for a gem like EventMachine or Async. This design provides separation of concerns between the event loop implementation and application code. It also allows for layered schedulers which can perform instrumentation, enforce constraints (e.g. during testing) and provide additional logging. You can see a [sample implementation here](https://github.com/socketry/async/pull/56).

```ruby
class Scheduler
  # Wait for the given file descriptor to become readable.
  def wait_readable(io)
  end

  # Wait for the given file descriptor to become writable.
  def wait_writable(io)
  end

  # Wait for the given file descriptor to match the specified events within
  # the specified timeout.
  # @param events [Integer] a bit mask of `IO::WAIT_READABLE`,
  #   `IO::WAIT_WRITABLE` and `IO::WAIT_PRIORITY`.
  # @param timeout [#to_f] the amount of time to wait for the event.
  def wait_any(io, events, timeout)
  end

  # Sleep the current task for the specified duration, or forever if not
  # specified.
  # @param duration [#to_f] the amount of time to sleep.
  def wait_sleep(duration = nil)
  end

  # The Ruby virtual machine is going to enter a system level blocking
  # operation.
  def enter_blocking_region
  end

  # The Ruby virtual machine has completed the system level blocking
  # operation.
  def exit_blocking_region
  end

  # Intercept the creation of a non-blocking fiber.
  def fiber(&block)
    Fiber.new(blocking: false, &block)
  end

  # Invoked when the thread exits.
  def run
    # Implement event loop here.
  end
end
```
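
To illustrate the layered schedulers mentioned above, here is a hedged sketch of a wrapper that delegates each hook to an inner scheduler while logging how long each wait takes. The `LoggingScheduler` name and the timing details are illustrative, not part of the proposal.

```ruby
# Illustrative only: wraps another scheduler and logs the duration of each
# wait, delegating every hook to the wrapped implementation.
class LoggingScheduler
  def initialize(inner)
    @inner = inner
  end

  def wait_readable(io)
    measure(:wait_readable) { @inner.wait_readable(io) }
  end

  def wait_writable(io)
    measure(:wait_writable) { @inner.wait_writable(io) }
  end

  def wait_sleep(duration = nil)
    measure(:wait_sleep) { @inner.wait_sleep(duration) }
  end

  # wait_any, enter_blocking_region, etc. would delegate in the same way.

  def fiber(&block)
    @inner.fiber(&block)
  end

  def run
    @inner.run
  end

  private

  def measure(name)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
  ensure
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    warn "#{name} took #{(elapsed * 1000.0).round(2)}ms"
  end
end
```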

Each thread has a non-blocking fiber scheduler. All blocking operations performed on non-blocking fibers are hooked by the scheduler, which can switch to another fiber. If a fiber has acquired a mutex, the scheduler is not called; the behaviour is the same as for a blocking fiber.

Schedulers can be written in Ruby. This is a desirable property as it allows them to be used in different implementations of Ruby easily.

To enable non-blocking fiber switching on blocking operations:

- Specify a scheduler: `Thread.current.scheduler = Scheduler.new`.
- Create several non-blocking fibers: `Fiber.new(blocking: false) {...}`.
- As the main fiber exits, `Thread.current.scheduler.run` is invoked, which
  runs the event loop until all fibers are finished (see the sketch below).
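
A minimal sketch of these steps, assuming a working scheduler behind the interface defined above; the `sleep` calls are only placeholders for blocking operations:

```ruby
Thread.current.scheduler = Scheduler.new

Fiber.new(blocking: false) do
  sleep(1) # intercepted by Scheduler#wait_sleep
end.resume

Fiber.new(blocking: false) do
  sleep(1) # runs concurrently with the fiber above
end.resume

# When the main fiber exits, Thread.current.scheduler.run is invoked and the
# event loop drives both fibers to completion.
```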

### Time/Duration Arguments

Tony Arcieri advised against using floating point values for times/durations, because they can accumulate rounding errors and other issues. He has a wealth of experience in this area, so his advice should be considered carefully; however, I have yet to see these issues occur in an event loop. That said, round-tripping between `struct timeval` and `double`/`VALUE` seems somewhat inefficient. One option is to accept an opaque argument that responds to `to_f`, and potentially `seconds` and `microseconds` or some similar interface (such an opaque argument could also be supported by `IO.select`, for example).
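
As a purely illustrative sketch of what such an opaque argument could look like (the `Duration` class name and accessors are not part of the proposal), the scheduler would receive an object it can interrogate without a lossy float conversion:

```ruby
# Illustrative only: an opaque duration that can be queried either as a
# float (seconds) or as exact integer components.
class Duration
  attr_reader :seconds, :microseconds

  def initialize(seconds, microseconds = 0)
    @seconds = seconds
    @microseconds = microseconds
  end

  def to_f
    @seconds + @microseconds / 1_000_000.0
  end
end

# A scheduler could then pass duration.to_f to IO.select, or use
# duration.seconds and duration.microseconds to fill a `struct timeval`
# without round-tripping through a double.
```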

### File Descriptor Arguments

Because of the public C interface we may need to support a specific set of wrappers for CRuby.

```c
int rb_io_wait_readable(int);
int rb_io_wait_writable(int);
int rb_wait_for_single_fd(int fd, int events, struct timeval *tv);
```

One option is to introduce hooks specific to CRuby:

```ruby
class Scheduler
  # Wrapper for rb_io_wait_readable(int) C function.
  def wait_readable_fd(fd)
    wait_readable(::IO.from_fd(fd, autoclose: false))
  end

  # Wrapper for rb_io_wait_writable(int) C function.
  def wait_writable_fd(fd)
    wait_writable(::IO.from_fd(fd, autoclose: false))
  end

  # Wrapper for rb_wait_for_single_fd() C function.
  def wait_for_single_fd(fd, events, duration)
    wait_any(::IO.from_fd(fd, autoclose: false), events, duration)
  end
end
```

Alternatively, in CRuby, it may be possible to map from an `fd` to its `IO` instance. Most C-level schedulers only care about the file descriptor, so such a mapping would introduce a small performance penalty without providing them any benefit.
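
If that mapping cost matters, a Ruby scheduler could memoise the wrappers it creates. This is only a hedged sketch; the cache and its invalidation policy are implementation details, not part of the proposal.

```ruby
class Scheduler
  # Illustrative only: cache IO wrappers per file descriptor. The cache must
  # be invalidated if a descriptor is closed and its number is reused.
  def io_for(fd)
    @io_cache ||= {}
    @io_cache[fd] ||= ::IO.from_fd(fd, autoclose: false)
  end

  def wait_readable_fd(fd)
    wait_readable(io_for(fd))
  end

  def wait_writable_fd(fd)
    wait_writable(io_for(fd))
  end
end
```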

## Non-blocking Fiber

We propose to introduce a per-fiber flag, `blocking: true/false`.

A fiber created by `Fiber.new(blocking: true)` (the default for `Fiber.new`) becomes a "blocking Fiber" and is unchanged from the current Fiber implementation.

A fiber created by `Fiber.new(blocking: false)` becomes a "non-blocking Fiber" and will be scheduled by the per-thread scheduler when blocking operations (blocking I/O, sleep, and so on) occur.

```ruby
Fiber.new(blocking: false) do
  puts Fiber.current.blocking? # false

  # May invoke `Thread.scheduler&.wait_readable`.
  io.read(...)

  # May invoke `Thread.scheduler&.wait_writable`.
  io.write(...)

  # Will invoke `Thread.scheduler&.wait_sleep`.
  sleep(n)
end.resume
```

Non-blocking fibers also support `Fiber#resume`, `Fiber#transfer` and `Fiber.yield`, which are necessary to create a scheduler.

### Fiber Method

We also introduce a new method which simplifies the creation of these non-blocking fibers:

```ruby
Fiber do
  puts Fiber.current.blocking? # false
end
```

This method invokes `Scheduler#fiber(...)`. The purpose of this method is to allow the scheduler to internally decide the policy for when to start the fiber, and whether to use symmetric or asymmetric fibers.

If no scheduler is specified, it is an error: `RuntimeError.new("No scheduler is available")`.
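
In Ruby pseudocode, the semantics could look roughly like the following sketch. It assumes the method is exposed as a global `Fiber` method (e.g. on `Kernel`) and that the scheduler is reached via `Thread.current.scheduler` as described earlier; the actual implementation may differ.

```ruby
module Kernel
  # Sketch: create a non-blocking fiber via the current thread's scheduler.
  def Fiber(&block)
    scheduler = Thread.current.scheduler

    unless scheduler
      raise RuntimeError, "No scheduler is available"
    end

    scheduler.fiber(&block)
  end
end
```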

In the future we may expand this to support some kind of default scheduler.

## Non-blocking I/O

`IO#nonblock` is an existing interface to control whether I/O uses blocking or non-blocking system calls. We can take advantage of this:

- `IO#nonblock = false` prevents that particular IO from utilising the scheduler. This should be the default for `stderr`.
- `IO#nonblock = true` enables that particular IO to utilise the scheduler. We should enable this where possible.

As proposed by Eric Wong, we believe that making I/O non-blocking by default is the right approach, and we have expanded his work in the current implementation. By doing this, when the user writes `Fiber do ... end` they are guaranteed the best concurrency possible, without any further changes to their code. As an example, one of the tests shows `Net::HTTP.get` being used in this way with no modifications required.

To support this further, consider the counterpoint: if `Net::HTTP.get(..., blocking: false)` were required for concurrent requests, library code might not expose the relevant options, severely limiting the user's ability to improve concurrency even when that is what they desire.
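
For illustration, concurrent requests under this proposal would look like the following sketch. It assumes a working scheduler behind the interface defined above whose `fiber` hook starts the fiber it creates; the URLs are placeholders.

```ruby
require 'net/http'
require 'uri'

Thread.current.scheduler = Scheduler.new

%w[https://www.ruby-lang.org https://bugs.ruby-lang.org].each do |url|
  Fiber do
    # Net::HTTP.get needs no changes: its blocking reads and writes are
    # intercepted by the scheduler, so both requests proceed concurrently.
    Net::HTTP.get(URI(url))
  end
end

# When the main fiber exits, the scheduler's event loop runs the outstanding
# fibers to completion.
```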

# Implementation

We have an evolving implementation here: https://github.com/ruby/ruby/pull/3032 which we will continue to update as the proposal changes.

# Evaluation

This proposal provides the hooks for scheduling fibers. With regards to performance, there are several things to consider:

- The impact of the scheduler design on non-concurrent workloads. We believe it's acceptable.
- The impact of the scheduler design on concurrent workloads. Our results are promising.
- The impact of different event loops on throughput and latency. We have independent tests which confirm the scalability of the approach.

The first two points are within the control of this proposal, and depending on the design we may help or hinder wrapper implementations.

In the tests, we provide a basic implementation using `IO.select`. As this proposal is finalised, we will introduce some basic benchmarks using this approach.
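
A minimal sketch of such an `IO.select`-based scheduler follows (illustrative only; the implementation used in the tests lives in the linked pull request). It shows how the wait hooks suspend the current fiber with `Fiber.yield` and how `run` resumes fibers as their descriptors become ready:

```ruby
class SelectScheduler
  def initialize
    @readable = {} # io => fiber waiting for readability
    @writable = {} # io => fiber waiting for writability
  end

  def wait_readable(io)
    @readable[io] = Fiber.current
    Fiber.yield # suspend until #run resumes us
  end

  def wait_writable(io)
    @writable[io] = Fiber.current
    Fiber.yield
  end

  # wait_any, wait_sleep, etc. omitted for brevity.

  def fiber(&block)
    fiber = Fiber.new(blocking: false, &block)
    fiber.resume # this scheduler's policy: start the fiber immediately
    fiber
  end

  def run
    until @readable.empty? && @writable.empty?
      readable, writable, = IO.select(@readable.keys, @writable.keys)

      Array(readable).each { |io| @readable.delete(io).resume }
      Array(writable).each { |io| @writable.delete(io).resume }
    end
  end
end
```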

# Discussion

The following points are good ones for discussion:

- Handling of file descriptors vs `IO` instances.
- Handling of time/duration arguments.
- General design and naming conventions.
- Potential platform issues (e.g. CRuby vs JRuby vs TruffleRuby, etc).

The following is planned to be described by @eregon in another design document:

- Semantics of non-blocking mutex (e.g. `Mutex.new(blocking: false)` or some other approach).

In the future we hope to extend the scheduler to handle other blocking operations, including name resolution, file I/O (e.g. using `io_uring`) and others.




-- 
https://bugs.ruby-lang.org/


Thread overview: 71+ messages
2020-04-14 14:38 [ruby-core:97878] [Ruby master Feature#16786] Light-weight scheduler for improved concurrency samuel
2020-04-14 14:54 ` [ruby-core:97879] " shevegen
2020-04-14 15:31 ` [ruby-core:97880] " samuel
2020-04-14 18:15 ` [ruby-core:97883] " headius
2020-04-14 18:45 ` [ruby-core:97884] " headius
2020-04-14 18:55 ` [ruby-core:97885] " tom.enebo
2020-04-14 23:55 ` [ruby-core:97886] " samuel
2020-04-14 23:58 ` [ruby-core:97887] " samuel
2020-04-15  0:42 ` [ruby-core:97888] " samuel
2020-04-15  1:18 ` [ruby-core:97890] " samuel
2020-04-16  9:05 ` [ruby-core:97908] " eregontp
2020-04-16  9:15 ` [ruby-core:97909] " eregontp
2020-04-17  5:27 ` [ruby-core:97932] " samuel
2020-04-17  5:36 ` [ruby-core:97933] " samuel
2020-04-17  5:42 ` [ruby-core:97934] " samuel
2020-04-24  9:10 ` [ruby-core:98053] " samuel
2020-04-24  9:26 ` [ruby-core:98054] " eregontp
2020-04-28  7:12 ` [ruby-core:98079] " sam.saffron
2020-04-28 20:49 ` [ruby-core:98080] " eregontp
2020-05-06 23:33 ` [ruby-core:98164] " samuel
2020-05-10 14:17 ` chris [this message]
2020-05-12 22:00 ` [ruby-core:98303] " ko1
2020-05-13  4:46 ` [ruby-core:98311] " samuel
2020-05-13  4:47 ` [ruby-core:98312] " samuel
2020-05-13  6:25 ` [ruby-core:98313] " samuel
2020-05-13  6:58 ` [ruby-core:98314] " ko1
2020-05-13  7:00 ` [ruby-core:98315] " ko1
2020-05-13 18:44 ` [ruby-core:98329] " daniel
2020-05-14  7:16 ` [ruby-core:98342] " samuel
2020-05-14  7:22 ` [ruby-core:98343] " matz
2020-05-14  8:50 ` [ruby-core:98348] " samuel
2020-05-14  9:25 ` [ruby-core:98349] " duerst
2020-05-14 10:44 ` [ruby-core:98350] " samuel
2020-05-15  3:10 ` [ruby-core:98370] " matz
2020-05-15 15:06 ` [ruby-core:98388] " midnight_w
2020-05-29  3:39 ` [ruby-core:98565] " mauricio
2020-07-21  6:49 ` [ruby-core:99247] " ciconia
2020-07-21 17:53 ` [ruby-core:99253] " eregontp
2020-08-17 11:02 ` [ruby-core:99613] " samuel
2020-08-17 11:09 ` [ruby-core:99614] " samuel
2020-08-17 11:35 ` [ruby-core:99615] " samuel
2020-08-18  4:48 ` [ruby-core:99621] " ko1
2020-08-18  8:02 ` [ruby-core:99622] " samuel
2020-08-18 14:07 ` [ruby-core:99627] " eregontp
2020-08-20 13:38 ` [ruby-core:99657] " samuel
2020-08-20 13:46 ` [ruby-core:99659] " samuel
2020-09-03  3:06 ` [ruby-core:99862] " matz
2020-09-05  6:31 ` [ruby-core:99935] " samuel
2020-09-14  5:23 ` [ruby-core:100005] " samuel
2020-09-21  0:22 ` [ruby-core:100056] " samuel
2020-09-21  9:06 ` [ruby-core:100058] " eregontp
2020-09-25  1:53 ` [ruby-core:100115] " samuel
2020-09-27 10:15 ` [ruby-core:100189] " eregontp
2020-09-30 23:23 ` [ruby-core:100245] " samuel
2020-10-02 18:17 ` [ruby-core:100283] " eregontp
2020-10-03  8:12 ` [ruby-core:100288] " matz
2020-10-06  3:27 ` [ruby-core:100305] " samuel
2020-10-07 19:55 ` [ruby-core:100337] " eregontp
2020-11-07  2:12 ` [ruby-core:100731] " matz
2020-11-07 10:40 ` [ruby-core:100734] " samuel
2020-11-07 14:36 ` [ruby-core:100737] " eregontp
2020-11-07 14:43 ` [ruby-core:100739] " eregontp
2020-11-08  6:05 ` [ruby-core:100747] " matz
2020-11-08 15:35 ` [ruby-core:100749] " eregontp
2020-11-11 23:51 ` [ruby-core:100802] " eregontp
2020-11-13 17:17 ` [ruby-core:100838] " daniel
2020-11-13 17:23 ` [ruby-core:100840] " eregontp
2020-11-16 20:14 ` [ruby-core:100879] " samuel
2020-11-20  8:21 ` [ruby-core:100967] " matz
2020-11-20  8:27 ` [ruby-core:100969] " matz
2020-12-06  6:36 ` [ruby-core:101259] " samuel
