Date: Thu, 10 May 2018 21:09:22 +0000
From: Eric Wong
To: ruby-core@ruby-lang.org
Message-ID: <20180510210922.GB3189@dcvr>
Subject: [ruby-core:86973] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid

samuel@oriontransfer.net wrote:
> I hacked in EPOLLONESHOT semantics into my runloop. It was
> about the same performance.
> But when I leveraged it correctly (calling `EPOLL_CTL_ADD` once
> when accepting the IO, `EPOLL_CTL_DEL` when closing the IO, and
> `EPOLL_CTL_MOD` when waiting for an event), I saw a 25%
> improvement in throughput. It was just a very rough test case,
> but interesting nonetheless.

I would not expect one-shot to improve things unless you design
your application around it. It also won't help if you only
expect to deal with well-behaved clients and your application
processing times are uniform.

One-shot helps with application design and allows resource
migration when sharing the queue across threads. Again, this
design may harm overall throughput and performance under IDEAL
conditions: there is a single queue, which assumes all requests
can be processed at roughly the same speed. However, in
NON-IDEAL conditions, some endpoints are handled more slowly
than others. They are slow because the application needs to do
more work, like an expensive calculation or FS access, NOT
because of a "slow client".

One-shot also makes application design easier when facing an
evil client which aggressively pipelines requests for large
responses, yet reads slowly. That is, the evil client is fast
at writing requests, but slow at reading responses.

Server and reactor designers sometimes don't consider this case:
I haven't checked in years, but EventMachine was a huge offender
here since it didn't allow disabling read callbacks at all. What
happened was evil clients could keep sending requests, and the
server would keep processing them and writing responses to a
userspace buffer which the evil client was never draining. So,
eventually, it would trigger OOM on the server.

Non-oneshot reactor designs need to consider this attack vector.
One-shot designs don't even need to think about it, because the
design is not "reacting" with callbacks. One-shot uses
EPOLL_CTL_MOD (or EV_ADD) only when the reader/writer hits
EAGAIN.
With one-shot, you won't have to deal with disabling callbacks
which blindly "react" to whatever evil clients send you. So in
my experience, one-shot saves me a lot of time, since I don't
have to keep track of as much state in userspace or remember to
disable callbacks from firing when an evil client is sending
requests faster than it reads the responses.