ruby-core@ruby-lang.org archive (unofficial mirror)
From: samuel@oriontransfer.org
To: ruby-core@ruby-lang.org
Subject: [ruby-core:86768] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid
Date: Mon, 30 Apr 2018 01:24:32 +0000 (UTC)	[thread overview]
Message-ID: <redmine.journal-71723.20180430012431.01cc5918a1bcf852@ruby-lang.org> (raw)
In-Reply-To: redmine.issue-13618.20170601001407@ruby-lang.org

Issue #13618 has been updated by ioquatix (Samuel Williams).


> Using a background thread is your mistake.

Don't assume I made this design; it was made by other people. I merely tested it because I was interested in the performance overhead, and yes, the overhead is significant. Let's also be generous: people who invested their time and effort to build such a thing for Ruby deserve our appreciation, and knowing that the path they chose to explore was not a good one is equally valuable.

> Multiple foreground threads safely use epoll_wait or kevent on the SAME epoll or kqueue fd. It's perfectly safe to do that.

Sure, that's reasonable. If you want to share those data structures across threads, you can dispatch your work to different threads too. I liked what you did with https://yhbt.net/yahns/yahns.txt; it's an interesting design.

The single biggest benefit of this design is that a blocking operation in an individual "task" or "worker" won't block any other "task" or "worker", up to the limit of the thread pool you allocate; once the pool is exhausted, blocking WILL occur. So even this design cannot avoid blocking entirely.

The major downside of such a design is that workers have to assume they could be running on different threads, so shared data structures need locking and will suffer contention. In addition, the current state of the Ruby GIL means that any such design will generally perform poorly.
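
To make the locking point concrete, here is a minimal sketch (not taken from any real server, just an illustration of a multi-threaded worker pool sharing state): every access to the shared structure has to be guarded by a `Mutex`, and under the GIL the threads still end up serializing on the interpreter anyway.

```
# Hypothetical shared state accessed by workers that may run on any thread.
connections = {}
lock = Mutex.new

workers = 4.times.map do |i|
  Thread.new do
    100.times do |n|
      # Every read/write of the shared Hash must be guarded, because
      # another worker thread may touch it concurrently.
      lock.synchronize { connections["worker-#{i}"] = n }
    end
  end
end

workers.each(&:join)
p connections.size # => 4
```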

Here is an almost identical code path, run once with 8 threads and once with 8 forked processes, on Ruby 2.5:

```
> falcon serve --threaded
> wrk -t8 -c8 -d10 http://localhost:9292
Running 10s test @ http://localhost:9292
  8 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    54.67ms   25.39ms 189.02ms   72.29%
    Req/Sec    18.50      7.18    40.00     53.38%
  1483 requests in 10.04s, 174.88MB read
Requests/sec:    147.74
Transfer/sec:     17.42MB

> falcon serve --forked
> wrk -t8 -c8 -d10 http://localhost:9292
Running 10s test @ http://localhost:9292
  8 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    29.77ms   66.90ms 571.70ms   93.71%
    Req/Sec    71.50     19.54   128.00     83.42%
  5442 requests in 10.10s, 641.61MB read
Requests/sec:    538.90
Transfer/sec:     63.54MB
```

This test is actually against a fresh Rails website (Rails performance isn't great to begin with), on macOS, which has pretty bad IO performance. Running the same thing on Linux gives:

```
% falcon serve --threaded
% wrk -t8 -c8 -d10 http://localhost:9292
Running 10s test @ http://localhost:9292
  8 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    26.41ms   13.74ms 123.01ms   69.85%
    Req/Sec    38.53     11.26    80.00     63.38%
  3082 requests in 10.01s, 363.36MB read
Requests/sec:    307.99
Transfer/sec:     36.31MB

% falcon serve --forked
% wrk -t8 -c8 -d10 http://localhost:9292
Running 10s test @ http://localhost:9292
  8 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     9.78ms   24.91ms 309.70ms   97.59%
    Req/Sec   168.68     49.75   262.00     63.89%
  13203 requests in 10.02s, 1.52GB read
Requests/sec:   1318.05
Transfer/sec:    155.39MB
```

So I think it's safe to say that, in an end-to-end test, the GIL is a MAJOR performance issue. Feel free to correct me if you think I'm wrong. I'm sure the story is more complicated than the above benchmarks suggest, but I felt it was a useful comparison.

Therefore, right now, for highly concurrent IO with Ruby, what you actually want is the following:

- One process per CPU core.
- One IO thread per process.
- Multiple fibers, one per worker.

Blocking operations that cause performance issues should be offloaded to a thread pool. For things like launching an external process or making a blocking syscall and waiting for it to finish, threads are ideal.
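
For example (a minimal sketch, not from any existing code): the fibers stay responsive while a genuinely blocking call runs on its own thread, and the caller simply waits on the thread's value.

```
# A blocking operation (here, an external command) offloaded to a thread.
helper = Thread.new do
  `uname -a`        # blocks only this helper thread; the GVL is released while waiting
end

# ... fibers / other work keep running here ...

puts helper.value   # Thread#value joins the thread and returns the block's result
```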

The major benefit of such a design is that the individual fibers all run on the same thread. You ultimately have blocking issues similar to yahns. However, because all workers run cooperatively on a single thread, you don't have any locking, concurrency, or mutability issues. To me, this is a massive benefit, as it makes writing code with this model very easy.
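
As a minimal sketch of that property (plain `Fiber` here, not any particular reactor implementation): several workers interleave on one thread and mutate shared state without a `Mutex`, because only one of them ever runs at a time and switches happen only at explicit yield points.

```
require 'fiber'

results = []   # shared state, no Mutex required

workers = 3.times.map do |i|
  Fiber.new do
    2.times do |n|
      results << "worker #{i}, step #{n}"
      Fiber.yield   # cooperative switch point (where a reactor would park us on blocking IO)
    end
  end
end

# A toy round-robin "scheduler": resume each live fiber until all have finished.
workers.each { |f| f.resume if f.alive? } until workers.none?(&:alive?)

puts results
```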

> Typical reactor is not designed to handle that :P

Yes, but that's by design, not by accident. If you need to scale up, just fork more reactors. On the Linux desktop above, `async-http` can handle 100,000+ requests per second using 8 cores in trivial benchmarks. So performance is something that can scale; the next question, then, is design.

There is some elegance in the design you propose. Your proposal requires some kind of "Task" or "Worker": a fiber that yields when IO would block and resumes when IO is ready. Based on what you've said, would you mind explaining whether the "Task" or "Worker" is resumed on the same thread or on a different one? Do you maintain a thread pool?

If it's always resumed on the same thread, how do you manage that? e.g. perhaps you can show me how the following would work:

```
Thread.new do
	Worker.new do
		# .. blocking IO
	end
	
	Worker.new do
		# .. blocking IO
	end
	
	# implicitly waits for all workers to complete?
end
```

If you follow this model, the thread must be calling into `epoll` or `kqueue` in order to resume work. But based on what you've said, if several of the above threads are running and a thread invoking `epoll_wait` receives events for fibers that belong to a different thread, how does that work? Do you send the events to the other thread? If so, what is the overhead? If not, do you move workers between threads?

Then why not consider a model similar to async, which uses per-thread reactors? The workers never move between threads, and the reactor never needs to send events to other threads.
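
Concretely, here is a minimal sketch of the per-thread reactor idea (using `IO.select` and plain `Fiber` for portability, not async's actual implementation): each thread owns its own set of file descriptors and its own fibers, so readiness events never have to cross a thread boundary.

```
require 'fiber'

# Each thread runs its own tiny reactor: it only selects on FDs registered
# by its own fibers and only ever resumes its own fibers.
def run_reactor(ios)
  waiting = {} # io => fiber parked on that io

  fibers = ios.map do |io|
    Fiber.new do
      waiting[io] = Fiber.current
      Fiber.yield                    # park until this reactor sees io as readable
      puts "#{Thread.current.inspect}: #{io.gets.inspect}"
    end
  end
  fibers.each(&:resume)              # run each fiber up to its first wait

  until waiting.empty?
    ready, = IO.select(waiting.keys)
    ready.each { |io| waiting.delete(io).resume }
  end
end

# Two independent reactors on two threads; readiness events never cross threads.
pairs   = 2.times.map { IO.pipe }
threads = pairs.map { |r, _w| Thread.new { run_reactor([r]) } }
pairs.each { |_r, w| w.puts("hello"); w.close }
threads.each(&:join)
```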

Thanks for your continued time and patience discussing these interesting issues.

----------------------------------------
Feature #13618: [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid
https://bugs.ruby-lang.org/issues/13618#change-71723

* Author: normalperson (Eric Wong)
* Status: Assigned
* Priority: Normal
* Assignee: normalperson (Eric Wong)
* Target version: 
----------------------------------------
```
auto fiber schedule for rb_wait_for_single_fd and rb_waitpid

Implement automatic Fiber yield and resume when running
rb_wait_for_single_fd and rb_waitpid.

The Ruby API changes for Fiber are named after existing Thread
methods.

main Ruby API:

    Fiber#start -> enable auto-scheduling and run Fiber until it
		   automatically yields (due to EAGAIN/EWOULDBLOCK)

The following behave like their Thread counterparts:

    Fiber.start - Fiber.new + Fiber#start (prelude.rb)
    Fiber#join - run internal scheduler until Fiber is terminated
    Fiber#value - ditto
    Fiber#run - like Fiber#start (prelude.rb)

Right now, it takes over rb_wait_for_single_fd() and
rb_waitpid() functions if the running Fiber is auto-enabled
(cont.c::rb_fiber_auto_sched_p)

Changes to existing functions are minimal.

New files (all new structs and relations should be documented):

    iom.h - internal API for the rest of RubyVM (incomplete?)
    iom_internal.h - internal header for iom_(select|epoll|kqueue).h
    iom_epoll.h - epoll-specific pieces
    iom_kqueue.h - kqueue-specific pieces
    iom_select.h - select-specific pieces
    iom_pingable_common.h - common code for iom_(epoll|kqueue).h
    iom_common.h - common footer for iom_(select|epoll|kqueue).h

Changes to existing data structures:

    rb_thread_t.afrunq   - list of fibers to auto-resume
    rb_vm_t.iom          - Ruby I/O Manager (rb_iom_t) :)

Besides rb_iom_t, all the new structs are stack-only and rely
extensively on ccan/list for branch-less, O(1) insert/delete.

As usual, understanding the data structures first should help
you understand the code.

Right now, I reuse some static functions in thread.c,
so thread.c includes iom_(select|epoll|kqueue).h

TODO:

    Hijack other blocking functions (IO.select, ...)

I am using "double" for timeout since it is more convenient for
arithmetic like parts of thread.c.   Most platforms have good FP,
I think.  Also, all "blocking" functions (rb_iom_wait*) will
have timeout support.

./configure gains a new --with-iom=(select|epoll|kqueue) switch

libkqueue:

  libkqueue support is incomplete; corner cases are not handled well:

    1) multiple fibers waiting on the same FD
    2) waiting for both read and write events on the same FD

  Bugfixes to libkqueue may be necessary to support all corner cases.
  Supporting these corner cases for native kqueue was challenging,
  even.  See comments on iom_kqueue.h and iom_epoll.h for
  nuances.

Limitations

Test script I used to download a file from my server:
----8<---
require 'net/http'
require 'uri'
require 'digest/sha1'
require 'fiber'

url = 'http://80x24.org/git-i-forgot-to-pack/objects/pack/pack-97b25a76c03b489d4cbbd85b12d0e1ad28717e55.idx'

uri = URI(url)
use_ssl = "https" == uri.scheme
fibs = 10.times.map do
  Fiber.start do
    cur = Fiber.current.object_id
    # XXX getaddrinfo() and connect() are blocking
    # XXX resolv/replace + connect_nonblock
    Net::HTTP.start(uri.host, uri.port, use_ssl: use_ssl) do |http|
      req = Net::HTTP::Get.new(uri)
      http.request(req) do |res|
        dig = Digest::SHA1.new
        res.read_body do |buf|
          dig.update(buf)
          #warn "#{cur} #{buf.bytesize}\n"
        end
        warn "#{cur} #{dig.hexdigest}\n"
      end
    end
    warn "done\n"
    :done
  end
end

warn "joining #{Time.now}\n"
fibs[-1].join(4)
warn "joined #{Time.now}\n"
all = fibs.dup

warn "1 joined, wait for the rest\n"
until fibs.empty?
  fibs.each(&:join)
  fibs.keep_if(&:alive?)
  warn fibs.inspect
end

p all.map(&:value)

Fiber.new do
  puts 'HI'
end.run.join
```
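
For readers skimming the long test script above, here is a condensed sketch of the proposed API, assuming the semantics described in the patch summary (Fiber.start enables auto-scheduling; #join and #value behave like their Thread counterparts). It requires the patched Ruby, and the endpoint is only a placeholder.

```
require 'fiber'
require 'socket'

fibs = 3.times.map do
  Fiber.start do                             # enable auto-scheduling and run
    sock = TCPSocket.new('example.com', 80)  # NB: connect itself still blocks
    sock.write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    body = sock.read                         # rb_wait_for_single_fd auto-yields here
    sock.close
    body.bytesize                            # becomes Fiber#value
  end
end

fibs.each(&:join)                            # drive the internal scheduler until done
p fibs.map(&:value)
```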


---Files--------------------------------
0001-auto-fiber-schedule-for-rb_wait_for_single_fd-and-rb.patch (82.8 KB)


-- 
https://bugs.ruby-lang.org/

Thread overview: 173+ messages
     [not found] <redmine.issue-13618.20170601001407@ruby-lang.org>
2017-06-01  0:14 ` [ruby-core:81492] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid normalperson
2017-06-01  0:36   ` [ruby-core:81493] " Eric Wong
2017-06-29  6:11   ` [ruby-core:81826] " Eric Wong
2018-08-27 20:27   ` [ruby-core:88695] " Eric Wong
2018-09-01 13:13     ` [ruby-core:88800] " Eric Wong
2018-09-02  9:24     ` [ruby-core:88806] " Eric Wong
2018-09-12 20:27       ` [ruby-core:88961] " Eric Wong
2018-11-14 22:03   ` [ruby-core:89799] Thread::Light#raise and Thread::Light#kill Eric Wong
2018-11-20  8:44   ` [ruby-core:89900] Thread::Light patch against r65832 Eric Wong
2018-11-20 10:20     ` [ruby-core:89904] " Koichi Sasada
2018-11-20 15:15       ` [ruby-core:89909] " Eric Wong
2018-11-21 10:59         ` [ruby-core:89920] " Eric Wong
2018-12-15 12:19   ` [ruby-core:90546] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid Eric Wong
2017-06-01  2:15 ` [ruby-core:81495] " ko1
2017-06-01  9:18   ` [ruby-core:81500] " Eric Wong
2017-06-01  5:48 ` [ruby-core:81498] " ko1
2017-06-01 14:40 ` [ruby-core:81507] " ko1
2017-06-01 21:51   ` [ruby-core:81514] " Eric Wong
2017-06-02 18:05 ` [ruby-core:81537] " eregontp
2017-06-02 23:18   ` [ruby-core:81543] " Eric Wong
2017-06-09  8:15 ` [ruby-core:81631] " samuel
2017-06-09 20:32   ` [ruby-core:81643] " Eric Wong
2017-06-14  2:13 ` [ruby-core:81672] " samuel
2017-06-14  2:49   ` [ruby-core:81674] " Eric Wong
2017-06-15  1:56 ` [ruby-core:81687] " samuel
2017-06-15 20:28   ` [ruby-core:81695] " Eric Wong
2017-06-19 13:17 ` [ruby-core:81721] " samuel
2017-06-20 19:08   ` [ruby-core:81732] " Eric Wong
2017-07-13  8:37 ` [ruby-core:82028] " ko1
2017-07-13 22:31   ` [ruby-core:82040] " Eric Wong
2017-07-31  9:19 ` [ruby-core:82214] " samuel
2017-07-31  9:21 ` [ruby-core:82215] " samuel
2017-08-30  3:16 ` [ruby-core:82518] " mame
2017-08-31  5:58   ` [ruby-core:82552] " Eric Wong
2017-09-12  5:40     ` [ruby-core:82756] " Eric Wrong
2017-09-28  1:06       ` [ruby-core:83034] " Eric Wrong
2017-12-07  4:30         ` [ruby-core:84118] " Eric Wong
2017-12-10 22:30 ` [ruby-core:84149] " samuel
2017-12-11  1:37   ` [ruby-core:84153] " Eric Wong
2018-01-23 11:35 ` [ruby-core:84980] [Ruby trunk Feature#13618][Assigned] " hsbt
2018-01-23 17:31   ` [ruby-core:85012] " Eric Wong
2018-01-24 21:51     ` [ruby-core:85081] " Eric Wong
2018-01-24 22:01       ` [ruby-core:85082] " Eric Wong
2018-01-25  1:13         ` [ruby-core:85087] " Daniel Ferreira
2018-01-28 14:09         ` [ruby-core:85181] " Koichi Sasada
2018-01-28 20:00           ` [ruby-core:85189] " Eric Wong
2018-01-28 14:01       ` [ruby-core:85180] " Koichi Sasada
2018-01-28 11:02     ` [ruby-core:85173] " Eric Wong
2018-01-28 14:32     ` [ruby-core:85183] " Koichi Sasada
2018-01-25  1:15 ` [ruby-core:85088] [Ruby trunk Feature#13618] " danieldasilvaferreira
2018-01-25  4:34   ` [ruby-core:85094] " Eric Wong
2018-01-25  5:06     ` [ruby-core:85095] " Daniel Ferreira
2018-01-26 10:16 ` [ruby-core:85128] " samuel
2018-01-26 19:13   ` [ruby-core:85136] " Eric Wong
2018-01-26 20:31     ` [ruby-core:85138] " Daniel Ferreira
2018-01-26 21:36       ` [ruby-core:85139] " Eric Wong
2018-01-27  1:08 ` [ruby-core:85140] " hsbt
2018-01-27  1:17 ` [ruby-core:85141] " danieldasilvaferreira
2018-01-27  3:45 ` [ruby-core:85144] " danieldasilvaferreira
2018-01-28 10:47   ` [ruby-core:85171] " Eric Wong
2018-01-27 23:34 ` [ruby-core:85162] " merch-redmine
2018-01-28  0:02 ` [ruby-core:85163] " danieldasilvaferreira
2018-01-28  0:33 ` [ruby-core:85164] " danieldasilvaferreira
2018-01-28 10:54   ` [ruby-core:85172] " Eric Wong
2018-01-28  5:20 ` [ruby-core:85168] " sam.saffron
2018-01-28 10:41   ` [ruby-core:85170] " Eric Wong
2018-01-28 12:29 ` [ruby-core:85174] " danieldasilvaferreira
2018-01-28 17:50 ` [ruby-core:85186] " danieldasilvaferreira
2018-01-28 20:18   ` [ruby-core:85190] " Eric Wong
2018-01-28 20:43 ` [ruby-core:85191] " danieldasilvaferreira
2018-01-28 21:19   ` [ruby-core:85193] " Eric Wong
2018-01-29  0:39 ` [ruby-core:85199] " sam.saffron
2018-01-29  4:42   ` [ruby-core:85204] " Eric Wong
2018-01-29  5:06 ` [ruby-core:85206] " sam.saffron
2018-01-29  5:14   ` [ruby-core:85207] " Koichi Sasada
2018-01-29  9:48   ` [ruby-core:85217] " Eric Wong
2018-01-29  5:38 ` [ruby-core:85209] " sam.saffron
2018-01-29 20:56 ` [ruby-core:85235] " shannonskipper
2018-01-29 21:28 ` [ruby-core:85236] " sam.saffron
2018-01-29 22:23   ` [ruby-core:85237] " Eric Wong
2018-01-31  2:48 ` [ruby-core:85273] " samuel
2018-02-02  5:46 ` [ruby-core:85335] " sam.saffron
2018-02-02  6:22   ` [ruby-core:85336] " Eric Wong
2018-02-03  1:36 ` [ruby-core:85353] " sam.saffron
2018-02-03  9:33   ` [ruby-core:85362] " Eric Wong
2018-02-04  6:14 ` [ruby-core:85371] " jjyruby
2018-02-05 21:42   ` [ruby-core:85417] " Eric Wong
2018-02-08  0:25 ` [ruby-core:85472] " sam.saffron
2018-02-13 22:39   ` [ruby-core:85531] " Eric Wong
2018-02-15  3:22 ` [ruby-core:85575] " samuel
2018-02-15  4:02   ` [ruby-core:85576] " Eric Wong
2018-02-15 13:13 ` [ruby-core:85585] " samuel
2018-02-20  6:42 ` [ruby-core:85674] " matz
2018-02-20  9:06   ` [ruby-core:85686] " Eric Wong
2018-02-21  1:52     ` [ruby-core:85704] " Koichi Sasada
2018-02-21  8:07       ` [ruby-core:85726] " Eric Wong
2018-02-21  8:23         ` [ruby-core:85727] " Koichi Sasada
2018-02-21 14:55 ` [ruby-core:85732] " jjyruby
2018-02-28 19:25 ` [ruby-core:85868] " keystonelemur
2018-03-13  2:57 ` [ruby-core:86092] " samuel
2018-04-21 11:23   ` [ruby-core:86639] " Eric Wong
2018-04-26  4:57 ` [ruby-core:86689] " samuel
2018-04-26  6:01   ` [ruby-core:86691] " Eric Wong
2018-04-30  1:24 ` samuel [this message]
2018-04-30 10:25   ` [ruby-core:86774] " Eric Wong
2018-04-30  1:37 ` [ruby-core:86769] " samuel
2018-04-30 10:47   ` [ruby-core:86775] " Eric Wong
2018-05-02  5:20 ` [ruby-core:86821] " samuel
2018-05-02  7:54   ` [ruby-core:86826] " Eric Wong
2018-05-02  8:38 ` [ruby-core:86829] " samuel
2018-05-02 10:56   ` [ruby-core:86832] " Eric Wong
2018-05-02 23:36 ` [ruby-core:86850] " samuel
2018-05-03  1:15   ` [ruby-core:86853] " Eric Wong
2018-05-05 13:06 ` [ruby-core:86910] " samuel
2018-05-06  3:03   ` [ruby-core:86915] " Eric Wong
2018-05-07 11:39 ` [ruby-core:86929] " samuel
2018-05-10 20:06   ` [ruby-core:86972] " Eric Wong
2018-05-08  5:25 ` [ruby-core:86942] " samuel
2018-05-08  6:40 ` [ruby-core:86943] " samuel
2018-05-08  7:01 ` [ruby-core:86944] " samuel
2018-05-10 21:09   ` [ruby-core:86973] " Eric Wong
2018-05-11  2:09 ` [ruby-core:86976] " samuel
2018-06-13  1:16 ` [ruby-core:87484] " Eric Wong
2018-06-18  0:59   ` [ruby-core:87504] " Eric Wong
2018-07-04  7:37 ` [ruby-core:87776] " funny.falcon
2018-07-04  8:45 ` [ruby-core:87779] " samuel
2018-07-04 16:40 ` [ruby-core:87786] " funny.falcon
2018-07-05  7:20 ` [ruby-core:87803] " funny.falcon
2018-07-05  8:43 ` [ruby-core:87810] " funny.falcon
2018-07-05  9:35 ` [ruby-core:87811] " samuel
2018-07-05 18:12 ` [ruby-core:87818] " funny.falcon
2018-07-05 21:55 ` [ruby-core:87822] " samuel
2018-07-06  7:48 ` [ruby-core:87835] " funny.falcon
2018-07-06  9:16 ` [ruby-core:87837] " samuel
2018-07-06 18:10 ` [ruby-core:87839] " funny.falcon
2018-07-06 21:11   ` [ruby-core:87840] " Eric Wong
2018-08-08  1:21 ` [ruby-core:88331] " samuel
2018-08-08  8:48   ` [ruby-core:88350] " Eric Wong
2018-08-08 11:14     ` [ruby-core:88352] " Matthew Kerwin
2018-08-09  8:04 ` [ruby-core:88374] " samuel
2018-08-09  8:25 ` [ruby-core:88376] " samuel
2018-08-09  8:34   ` [ruby-core:88378] " Eric Wong
2018-08-10  9:33 ` [ruby-core:88433] " ko1
2018-08-14  0:42   ` [ruby-core:88467] " Eric Wong
2018-08-14  8:22     ` [ruby-core:88476] " Koichi Sasada
2018-08-14 17:47       ` [ruby-core:88484] " Eric Wong
2018-08-10 11:45 ` [ruby-core:88437] " samuel
2018-08-14  9:00 ` [ruby-core:88478] " danieldasilvaferreira
2018-08-14 18:25   ` [ruby-core:88486] " Eric Wong
2018-09-04 21:36 ` [ruby-core:88838] " Greg.mpls
2018-09-05 21:47   ` [ruby-core:88873] " Eric Wong
2018-09-13  8:17 ` [ruby-core:88989] " matz
2018-09-13  9:18   ` [ruby-core:88992] " Eric Wong
2018-09-13 16:23 ` [ruby-core:88999] " shevegen
2018-09-21 17:58 ` [ruby-core:89120] " shannonskipper
2018-09-28  2:35   ` [ruby-core:89204] " Eric Wong
2018-11-20 20:58 ` [ruby-core:89913] " shevegen
2018-11-22  1:28 ` [ruby-core:89939] " me
2018-11-22  2:14   ` [ruby-core:89943] " Eric Wong
2018-11-22 10:40     ` [ruby-core:89968] Thread::Light updated for r65925 (sleep fix) Eric Wong
2018-11-28 11:05       ` [ruby-core:90115] Thread::Light r66072 Eric Wong
2018-11-22 15:40 ` [ruby-core:89978] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid me
2018-11-28 10:22 ` [ruby-core:90111] " matz
2018-11-28 10:44   ` [ruby-core:90112] " Eric Wong
2018-11-28 12:19 ` [ruby-core:90117] " takashikkbn
2018-11-28 19:51   ` [ruby-core:90134] " Eric Wong
2018-11-28 20:07 ` [ruby-core:90136] " samuel
2018-11-29  0:16   ` [ruby-core:90141] " Eric Wong
2018-11-29 11:26     ` [ruby-core:90161] Thread::Light#run and Thread::Light#wakeup Eric Wong
2019-01-01 22:58 ` [ruby-core:90846] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid me
2019-01-04 14:49   ` [ruby-core:90890] " Eric Wong
2019-02-13  9:44 ` [ruby-core:91528] Re: Technical question on ruby Thread::Light scheduling Eric Wong
2019-05-07  7:24 ` [ruby-core:92579] [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid shevegen
