ruby-core@ruby-lang.org archive (unofficial mirror)
From: samuel@oriontransfer.net
To: ruby-core@ruby-lang.org
Subject: [ruby-core:99613] [Ruby master Feature#16786] Light-weight scheduler for improved concurrency.
Date: Mon, 17 Aug 2020 11:02:18 +0000 (UTC)
Message-ID: <redmine.journal-87092.20200817110218.3344@ruby-lang.org>
In-Reply-To: redmine.issue-16786.20200414143858.3344@ruby-lang.org

Issue #16786 has been updated by ioquatix (Samuel Williams).


There has been some discussion about the interface of the Scheduler.

## C Interface Exposure

The scheduler interface was largely copied from the existing Ruby and C interfaces where it seemed to make sense, for example `rb_wait_for_single_fd` -> `wait_for_single_fd`, etc.

We discussed the current (public) C interface, which is:

``` c
int rb_io_wait_readable(int);
int rb_io_wait_writable(int);
int rb_wait_for_single_fd(int fd, int events, struct timeval *tv);
```

@ko1 said he doesn't want to expose these methods to the scheduler, and he would rather have an implicit (non-cached) `IO.from_fd` for every operation so that the scheduler only sees `IO` instances.

- This may introduce a performance issue.
- This may introduce a consistency problem.

Right now, the proof-of-concept scheduler caches `fd -> IO` and `IO -> Wrapper`. A wrapper contains the cached state of the `epoll`/`kqueue` registration, which requires one system call to register and one system call to deregister. I agree that, ideally, every file descriptor would be represented by a unique `IO` object, so that we could cache this correctly. However, by creating an `IO` instance for each `read` and `write` call, not only do we create a lot of garbage, we also introduce duplicate `IO`s for the same underlying `fd`.
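
For illustration, here is a minimal sketch of that caching scheme (the class name, the `selector` object and its `register`/`deregister`/`wait` methods are invented for this example and are not part of the proposal):

```ruby
class CachingScheduler
  Wrapper = Struct.new(:io, :registered)

  def initialize(selector)
    @selector = selector # hypothetical epoll/kqueue wrapper
    @ios = {}            # fd -> IO, avoids creating a new IO object per operation
    @wrappers = {}       # IO -> Wrapper, caches the selector registration state
  end

  def wait_for_single_fd(fd, events, timeout)
    io = @ios[fd] ||= IO.for_fd(fd, autoclose: false)
    wrapper = @wrappers[io] ||= Wrapper.new(io, false)

    unless wrapper.registered
      @selector.register(io, events) # one system call (e.g. EPOLL_CTL_ADD)
      wrapper.registered = true
    end

    @selector.wait(io, events, timeout)
  end

  def close(io)
    if (wrapper = @wrappers.delete(io)) && wrapper.registered
      @selector.deregister(io) # one system call (e.g. EPOLL_CTL_DEL)
    end
  end
end
```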

So, these issues need to be addressed somehow. Ultimately, I'm fine with removing the C `_fd` wrappers, provided that:

- We deprecate the existing C functions which take file descriptors.
- We introduce new C functions which take `IO` instances.
- We update all code in CRuby to use these new functions.
- In order to remove those C wrappers from the Scheduler, we need to implement some kind of `fd -> IO` cache.

This would make the interface of the scheduler simpler, so I'm happy with that. But it's a lot of work, and the current proposal is working.

## C Interface "Surface Area"

The proposed scheduler replicates methods from `IO`, including `IO#wait_readable` -> `Scheduler#wait_readable`, `IO#wait_writable` -> `Scheduler#wait_writable` and `IO#wait` -> `Scheduler#wait_any`.

It was brought to my attention that the surface area of this was too big.

I'm also okay with this point. However, my original design was to avoid making changes and to follow the existing interfaces.

That being said, if change is desired here, then after discussion this is what I would suggest:

- We should introduce `IO#wait_priority`. It's a third kind of event which is currently not handled.
- We should change `IO#wait` to take a bitmask of flags, e.g. `IO::READABLE`, `IO::WRITABLE`, `IO::PRIORITY`. There are sometimes system-specific flags too. The current order of arguments is also cumbersome if you don't want to specify a timeout.
- We should define `IO#wait_$event(timeout)` in terms of `IO#wait($EVENT, timeout)`.
- `IO#wait` should be redirected to `Scheduler#wait` (or `#wait_io`).
- As part of this, we should rename `wait_sleep` to `sleep` if we want to be consistent with the `wait` naming (i.e. `Kernel#sleep` -> `Scheduler#sleep` and `IO#wait` -> `Scheduler#wait`).

This greatly reduces the surface area of the functions that get forwarded into the scheduler and should also reduce the size of the CRuby implementation. It provides a nice central funnel for the `wait` event and using an integer bitmask is much better for forwards compatibility (i.e. not just readable/writable but also priority, out of band data, other system specific events).
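
To make the suggestion concrete, here is a hedged sketch of that consolidation (the constant names follow the list above and the per-thread `Thread.current.scheduler` accessor from the proposal; none of this is a finalised interface):

```ruby
class IO
  # Illustrative flag values only; real values would be defined by the implementation.
  READABLE = 1 unless defined?(READABLE)
  WRITABLE = 2 unless defined?(WRITABLE)
  PRIORITY = 4 unless defined?(PRIORITY)

  def wait_readable(timeout = nil)
    wait(READABLE, timeout)
  end

  def wait_writable(timeout = nil)
    wait(WRITABLE, timeout)
  end

  def wait_priority(timeout = nil)
    wait(PRIORITY, timeout)
  end

  # The single funnel that forwards into the scheduler:
  def wait(events, timeout = nil)
    if (scheduler = Thread.current.scheduler)
      scheduler.wait(self, events, timeout)
    else
      # Fall back to the existing blocking implementation.
    end
  end
end
```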

## Blocking Hooks

The proposed scheduler introduced some new hooks. This was a somewhat experimental feature to detect blocking operations. However, after testing it, we found it cannot detect every blocking operation (to be expected, I suppose). For example, in SQLite3 the GVL is not released even for long blocking operations. So, this feature is dangerous.

That being said, fibers that tie up the event loop are a significant issue for event-driven code. This is a well-known issue in any event-driven system. Care must be taken to off-load such work to threads (or Ractors!). Therefore, I'm happy to remove these hooks. But we should be aware that this doesn't remove the need for instrumentation around fiber context switches, and if we can't provide this feedback, users may have a bad experience with event-driven Ruby.

My conclusion is that we need a better sampling profiler which also measures fiber context switches. But that is a lot of work.

## "Fiber{}" Naming

I personally like `Fiber do ... end`. It's short, and we don't break any existing code by using it. Contrary to prior discussion, this *does* return a Fiber in every way, shape and form; the only difference is that it defers to the scheduler for creation. It feels consistent with how we use things like `Integer`, `Float` and so on.

In `Async`, it actually creates an `Async::Task` and returns the task's fiber. @ko1 recently created his own scheduler and implemented a pessimistic approach by returning the fiber and executing it later. Async is an optimistic scheduler and will execute fibers immediately, rather than adding them to a work queue. The flexibility of this design is enabled by the `def fiber` hook provided by the scheduler.

Since the initial proposal, we have made it an error if the scheduler is not defined, which means the user can clearly indicate that a piece of code *requires* a scheduler.

Regarding the specific name, I am not convinced by any of the proposed alternatives.

- `spawn`: `Kernel#spawn` is already defined: https://rubyapi.org/2.7/o/kernel#method-i-spawn
- `Scheduler#fiber`: Calling this method directly cannot have the same convenient error checking as some top level method like `Fiber{}`.
- `Fiber.nonblocking{}`: Well, maybe? It's longer (bad?), it's also not strictly speaking `Fiber.new(nonblocking: true)` because it goes via the scheduler... so... being more specific actually makes it easier to be wrong.

The logic of `Fiber{}` is simple: it creates a fiber, you can run code in it, and it requires a scheduler to exist. The scheduler might not even be a non-blocking scheduler; that is implementation dependent.

Let me speak frankly: I've seen so many discussions waste so much time on naming things. Before you suggest an alternative, it's probably best to spend a few hours trying out the current situation to see how it feels. If you have a strong opinion about it, let's discuss it in private so that I can compile a list of options for a final discussion. In the end, we need to choose something, and there is no perfect answer.


----------------------------------------
Feature #16786: Light-weight scheduler for improved concurrency.
https://bugs.ruby-lang.org/issues/16786#change-87092

* Author: ioquatix (Samuel Williams)
* Status: Open
* Priority: Normal
----------------------------------------
# Abstract

We propose to introduce a light-weight fiber scheduler, to improve the concurrency of Ruby code with minimal changes.

# Background

We have been discussing and considering options to improve Ruby scalability for several years. More context can be provided by the following discussions:

- https://bugs.ruby-lang.org/issues/14736
- https://bugs.ruby-lang.org/issues/13618

The final Ruby Concurrency report provides some background on the various issues considered in the latest iteration: https://www.codeotaku.com/journal/2020-04/ruby-concurrency-final-report/index

# Proposal

We propose to introduce the following concepts:

- A `Scheduler` interface which provides hooks for user-supplied event loops.
- Non-blocking `Fiber` which can invoke the scheduler when it would otherwise block.

## Scheduler

The per-thread fiber scheduler interface is used to intercept blocking operations. A typical implementation would be a wrapper for a gem like EventMachine or Async. This design provides separation of concerns between the event loop implementation and application code. It also allows for layered schedulers which can perform instrumentation, enforce constraints (e.g. during testing) and provide additional logging. You can see a [sample implementation here](https://github.com/socketry/async/pull/56).

```ruby
class Scheduler
  # Wait for the given io to become readable.
  def wait_readable(io)
  end

  # Wait for the given io to become writable.
  def wait_writable(io)
  end

  # Wait for the given io to match the specified events within
  # the specified timeout.
  # @param events [Integer] a bit mask of `IO::WAIT_READABLE`,
  #   `IO::WAIT_WRITABLE` and `IO::WAIT_PRIORITY`.
  # @param timeout [#to_f] the amount of time to wait for the event.
  def wait_any(io, events, timeout)
  end

  # Sleep the current task for the specified duration, or forever if not
  # specified.
  # @param duration [#to_f] the amount of time to sleep.
  def wait_sleep(duration = nil)
  end

  # The Ruby virtual machine is going to enter a system level blocking
  # operation.
  def enter_blocking_region
  end

  # The Ruby virtual machine has completed the system level blocking
  # operation.
  def exit_blocking_region
  end

  # Intercept the creation of a non-blocking fiber.
  def fiber(&block)
    Fiber.new(blocking: false, &block)
  end

  # Invoked when the thread exits.
  def run
    # Implement event loop here.
  end
end
```

Each thread has its own fiber scheduler for non-blocking fibers. All blocking operations on non-blocking fibers are hooked by the scheduler, which can then switch to another fiber. If any mutex is acquired by a fiber, the scheduler is not called; this is the same behaviour as a blocking fiber.

Schedulers can be written in Ruby. This is a desirable property as it allows them to be used in different implementations of Ruby easily.

To enable non-blocking fiber switching on blocking operations:

- Specify a scheduler: `Thread.current.scheduler = Scheduler.new`.
- Create several non-blocking fibers: `Fiber.new(blocking: false) { ... }`.
- As the main fiber exits, `Thread.current.scheduler.run` is invoked which
  begins executing the event loop until all fibers are finished.
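
Putting those steps together (a sketch only; it assumes `Scheduler` actually implements `wait_sleep` and `run` with a real event loop, unlike the empty bodies shown above):

```ruby
Thread.current.scheduler = Scheduler.new

Fiber.new(blocking: false) do
  sleep(1) # hooked as Scheduler#wait_sleep instead of blocking the thread
  puts "first"
end.resume

Fiber.new(blocking: false) do
  sleep(1) # runs concurrently with the fiber above
  puts "second"
end.resume

# As the main fiber exits, Thread.current.scheduler.run is invoked and drives
# the event loop until both fibers are finished (roughly 1 second in total).
```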

### Time/Duration Arguments

Tony Arcieri advised against using floating point values for times/durations, because they can accumulate rounding errors and have other issues. He has a wealth of experience in this area, so his advice should be considered carefully. However, I have yet to see these issues occur in an event loop. That being said, round-tripping between `struct timeval` and `double`/`VALUE` seems a bit inefficient. One option is an opaque argument that responds to `to_f`, and potentially `seconds` and `microseconds` or some other such interface (such an opaque argument could also be supported by `IO.select`, for example).
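
As a hedged sketch only, such an opaque argument might look like this (the class and method names are purely illustrative, not proposed API):

```ruby
class Duration
  attr_reader :seconds, :microseconds

  def initialize(seconds, microseconds = 0)
    @seconds = seconds
    @microseconds = microseconds
  end

  # Schedulers that only want a float can still call #to_f:
  def to_f
    @seconds + @microseconds / 1_000_000.0
  end
end

# e.g. scheduler.wait_sleep(Duration.new(1, 500_000)) rather than wait_sleep(1.5)
```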

### File Descriptor Arguments

Because of the public C interface we may need to support a specific set of wrappers for CRuby.

```c
int rb_io_wait_readable(int);
int rb_io_wait_writable(int);
int rb_wait_for_single_fd(int fd, int events, struct timeval *tv);
```

One option is to introduce hooks specific to CRuby:

```ruby
class Scheduler
  # Wrapper for rb_io_wait_readable(int) C function.
  def wait_readable_fd(fd)
    wait_readable(::IO.from_fd(fd, autoclose: false))
  end

  # Wrapper for the rb_io_wait_writable(int) C function.
  def wait_writable_fd(fd)
    wait_writable(::IO.from_fd(fd, autoclose: false))
  end

  # Wrapper for rb_wait_for_single_fd(int) C function.
  def wait_for_single_fd(fd, events, duration)
    wait_any(::IO.from_fd(fd, autoclose: false), events, duration)
  end
end
```

Alternatively, in CRuby, it may be possible to map from an `fd` to its `IO` instance. However, most C-level schedulers only care about the file descriptor, not the `IO` instance, so such a mapping would introduce a small performance penalty without benefiting them.

## Non-blocking Fiber

We propose to introduce a per-fiber flag, `blocking: true/false`.

A fiber created by `Fiber.new(blocking: true)` (the default for `Fiber.new`) becomes a "blocking Fiber" and has no changes from the current Fiber implementation. This includes the root fiber.

A fiber created by `Fiber.new(blocking: false)` becomes a "non-blocking Fiber" and will be scheduled by the per-thread scheduler when blocking operations (blocking I/O, sleep, and so on) occur.

```ruby
Fiber.new(blocking: false) do
  puts Fiber.current.blocking? # false

  # May invoke `Thread.scheduler&.wait_readable`.
  io.read(...)

  # May invoke `Thread.scheduler&.wait_writable`.
  io.write(...)

  # Will invoke `Thread.scheduler&.wait_sleep`.
  sleep(n)
end.resume
```

Non-blocking fibers also support `Fiber#resume`, `Fiber#transfer` and `Fiber.yield`, which are necessary to create a scheduler.
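
To illustrate why those primitives are sufficient, here is a hedged sketch of a scheduler built only on `Fiber#resume`/`Fiber.yield` and `IO.select` (it ignores timeouts and sleeping fibers, and is far simpler than a real implementation):

```ruby
require 'fiber' # for Fiber.current on older Rubies

class SelectScheduler
  def initialize
    @readable = {} # io => fiber waiting for readability
    @writable = {} # io => fiber waiting for writability
  end

  def wait_readable(io)
    @readable[io] = Fiber.current
    Fiber.yield # suspend until #run resumes us
  end

  def wait_writable(io)
    @writable[io] = Fiber.current
    Fiber.yield
  end

  def fiber(&block)
    fiber = Fiber.new(blocking: false, &block)
    fiber.resume # start eagerly; a pessimistic scheduler could queue it instead
    fiber
  end

  def run
    until @readable.empty? && @writable.empty?
      readable, writable = IO.select(@readable.keys, @writable.keys)
      readable&.each { |io| @readable.delete(io).resume }
      writable&.each { |io| @writable.delete(io).resume }
    end
  end
end
```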

### Fiber Method

We also introduce a new method which simplifies the creation of these non-blocking fibers:

```ruby
Fiber do
  puts Fiber.current.blocking? # false
end
```

This method invokes `Scheduler#fiber(...)`. The purpose of this method is to allow the scheduler to internally decide the policy for when to start the fiber, and whether to use symmetric or asymmetric fibers.

If no scheduler is specified, it is an error: `RuntimeError.new("No scheduler is available")`.

In the future we may expand this to support some kind of default scheduler.
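
For illustration, the top-level method could look roughly like this (a sketch; the exact error class and lookup are whatever the implementation settles on):

```ruby
module Kernel
  # Called as `Fiber do ... end`, as in the example above.
  def Fiber(&block)
    scheduler = Thread.current.scheduler

    raise RuntimeError, "No scheduler is available" unless scheduler

    # The scheduler decides when (and how) the fiber actually runs:
    scheduler.fiber(&block)
  end
end
```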

## Non-blocking I/O

`IO#nonblock` is an existing interface to control whether I/O uses blocking or non-blocking system calls. We can take advantage of this:

- `IO#nonblock = false` prevents that particular IO from utilising the scheduler. This should be the default for `stderr`.
- `IO#nonblock = true` enables that particular IO to utilise the scheduler. We should enable this where possible.
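
For example (a sketch using the existing `io/nonblock` interface; `socket` is a placeholder for whichever IO you want the scheduler to manage):

```ruby
require 'io/nonblock'

$stderr.nonblock = false # this IO always uses blocking system calls
socket.nonblock = true   # this IO may be hooked by the scheduler
```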

As proposed by Eric Wong, we believe that making I/O non-blocking by default is the right approach, and we have expanded his work in the current implementation. By doing this, when the user writes `Fiber do ... end` they are guaranteed the best concurrency possible, without any further changes to their code. As an example, one of the tests shows `Net::HTTP.get` being used in this way with no further modifications required.
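
As a hedged sketch of that usage (it assumes a working scheduler implementation has been installed on the current thread; the URLs are placeholders):

```ruby
require 'net/http'

Thread.current.scheduler = Scheduler.new # any working scheduler implementation

%w[https://www.ruby-lang.org https://bugs.ruby-lang.org].each do |url|
  Fiber do
    # No changes to Net::HTTP: its sockets are non-blocking by default, so both
    # requests make progress concurrently on the same thread.
    puts Net::HTTP.get(URI(url)).bytesize
  end
end
```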

To support this further, consider the counterpoint: suppose `Net::HTTP.get(..., blocking: false)` were required for concurrent requests. Library code may not expose the relevant options, severely limiting the user's ability to improve concurrency, even if that is what they desire.

# Implementation

We have an evolving implementation here: https://github.com/ruby/ruby/pull/3032 which we will continue to update as the proposal changes.

# Evaluation

This proposal provides the hooks for scheduling fibers. With regards to performance, there are several things to consider:

- The impact of the scheduler design on non-concurrent workloads. We believe it's acceptable.
- The impact of the scheduler design on concurrent workloads. Our results are promising.
- The impact of different event loops on throughput and latency. We have independent tests which confirm the scalability of the approach.

We can control for the first two in this proposal, and depending on the design we may help or hinder the wrapper implementation.

In the tests, we provide a basic implementation using `IO.select`. As this proposal is finalised, we will introduce some basic benchmarks using this approach.

# Discussion

The following points are good ones for discussion:

- Handling of file descriptors vs `IO` instances.
- Handling of time/duration arguments.
- General design and naming conventions.
- Potential platform issues (e.g. CRuby vs JRuby vs TruffleRuby, etc).

The following is planned to be described by @eregon in another design document:

- Semantics of non-blocking mutex (e.g. `Mutex.new(blocking: false)` or some other approach).

In the future we hope to extend the scheduler to handle other blocking operations, including name resolution, file I/O (via `io_uring`) and others. We may need to introduce additional hooks. If these hooks are not defined on the scheduler implementation, we will fall back to the blocking implementation where possible.



-- 
https://bugs.ruby-lang.org/


Thread overview: 71+ messages
2020-04-14 14:38 [ruby-core:97878] [Ruby master Feature#16786] Light-weight scheduler for improved concurrency samuel
2020-04-14 14:54 ` [ruby-core:97879] " shevegen
2020-04-14 15:31 ` [ruby-core:97880] " samuel
2020-04-14 18:15 ` [ruby-core:97883] " headius
2020-04-14 18:45 ` [ruby-core:97884] " headius
2020-04-14 18:55 ` [ruby-core:97885] " tom.enebo
2020-04-14 23:55 ` [ruby-core:97886] " samuel
2020-04-14 23:58 ` [ruby-core:97887] " samuel
2020-04-15  0:42 ` [ruby-core:97888] " samuel
2020-04-15  1:18 ` [ruby-core:97890] " samuel
2020-04-16  9:05 ` [ruby-core:97908] " eregontp
2020-04-16  9:15 ` [ruby-core:97909] " eregontp
2020-04-17  5:27 ` [ruby-core:97932] " samuel
2020-04-17  5:36 ` [ruby-core:97933] " samuel
2020-04-17  5:42 ` [ruby-core:97934] " samuel
2020-04-24  9:10 ` [ruby-core:98053] " samuel
2020-04-24  9:26 ` [ruby-core:98054] " eregontp
2020-04-28  7:12 ` [ruby-core:98079] " sam.saffron
2020-04-28 20:49 ` [ruby-core:98080] " eregontp
2020-05-06 23:33 ` [ruby-core:98164] " samuel
2020-05-10 14:17 ` [ruby-core:98248] " chris
2020-05-12 22:00 ` [ruby-core:98303] " ko1
2020-05-13  4:46 ` [ruby-core:98311] " samuel
2020-05-13  4:47 ` [ruby-core:98312] " samuel
2020-05-13  6:25 ` [ruby-core:98313] " samuel
2020-05-13  6:58 ` [ruby-core:98314] " ko1
2020-05-13  7:00 ` [ruby-core:98315] " ko1
2020-05-13 18:44 ` [ruby-core:98329] " daniel
2020-05-14  7:16 ` [ruby-core:98342] " samuel
2020-05-14  7:22 ` [ruby-core:98343] " matz
2020-05-14  8:50 ` [ruby-core:98348] " samuel
2020-05-14  9:25 ` [ruby-core:98349] " duerst
2020-05-14 10:44 ` [ruby-core:98350] " samuel
2020-05-15  3:10 ` [ruby-core:98370] " matz
2020-05-15 15:06 ` [ruby-core:98388] " midnight_w
2020-05-29  3:39 ` [ruby-core:98565] " mauricio
2020-07-21  6:49 ` [ruby-core:99247] " ciconia
2020-07-21 17:53 ` [ruby-core:99253] " eregontp
2020-08-17 11:02 ` samuel [this message]
2020-08-17 11:09 ` [ruby-core:99614] " samuel
2020-08-17 11:35 ` [ruby-core:99615] " samuel
2020-08-18  4:48 ` [ruby-core:99621] " ko1
2020-08-18  8:02 ` [ruby-core:99622] " samuel
2020-08-18 14:07 ` [ruby-core:99627] " eregontp
2020-08-20 13:38 ` [ruby-core:99657] " samuel
2020-08-20 13:46 ` [ruby-core:99659] " samuel
2020-09-03  3:06 ` [ruby-core:99862] " matz
2020-09-05  6:31 ` [ruby-core:99935] " samuel
2020-09-14  5:23 ` [ruby-core:100005] " samuel
2020-09-21  0:22 ` [ruby-core:100056] " samuel
2020-09-21  9:06 ` [ruby-core:100058] " eregontp
2020-09-25  1:53 ` [ruby-core:100115] " samuel
2020-09-27 10:15 ` [ruby-core:100189] " eregontp
2020-09-30 23:23 ` [ruby-core:100245] " samuel
2020-10-02 18:17 ` [ruby-core:100283] " eregontp
2020-10-03  8:12 ` [ruby-core:100288] " matz
2020-10-06  3:27 ` [ruby-core:100305] " samuel
2020-10-07 19:55 ` [ruby-core:100337] " eregontp
2020-11-07  2:12 ` [ruby-core:100731] " matz
2020-11-07 10:40 ` [ruby-core:100734] " samuel
2020-11-07 14:36 ` [ruby-core:100737] " eregontp
2020-11-07 14:43 ` [ruby-core:100739] " eregontp
2020-11-08  6:05 ` [ruby-core:100747] " matz
2020-11-08 15:35 ` [ruby-core:100749] " eregontp
2020-11-11 23:51 ` [ruby-core:100802] " eregontp
2020-11-13 17:17 ` [ruby-core:100838] " daniel
2020-11-13 17:23 ` [ruby-core:100840] " eregontp
2020-11-16 20:14 ` [ruby-core:100879] " samuel
2020-11-20  8:21 ` [ruby-core:100967] " matz
2020-11-20  8:27 ` [ruby-core:100969] " matz
2020-12-06  6:36 ` [ruby-core:101259] " samuel
