Date: Thu, 26 Apr 2018 06:01:37 +0000
From: Eric Wong
To: ruby-core@ruby-lang.org
Message-ID: <20180426060137.GA11978@dcvr>
Subject: [ruby-core:86691] Re: [Ruby trunk Feature#13618] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid

samuel@oriontransfer.org wrote:
> If you are unsure of a good definition for the reactor
> pattern, I think this is a good one:
> https://en.wikipedia.org/wiki/Reactor_pattern except the
> assumption that you need to invert flow control which is not
> necessary using fibers.

Right, I was reading the Wikipedia page and that description does
not resemble the implementation I have for this feature.

> In my experience, experimenting with implementations that use
> shared epoll/kqueue on a background thread

Using a background thread is your mistake.  Multiple foreground
threads can call epoll_wait or kevent on the SAME epoll or kqueue
fd; it is perfectly safe to do that.  A typical reactor is not
designed to handle that :P

If we eventually encounter contention, we can add more epoll or
kqueue descriptors, but I doubt it'll ever come to that.

Back to the diner analogy: multiple restaurant waiters can sit at
the counter and wait if the cooks are slow and no diners are
placing new orders.

> , the thread contention is a pretty big overhead, I think
> somewhere between 5x and 10x overhead but I'd prefer to back
> that up with real numbers. Not only that, the practical
> implementation is more complicated since you need to implement
> IPC, locking etc.

IPC?  Interprocess communication?  What?  There are no processes
here.  No extra locking, either.  The kernel already does the
locking; there's no point in doing it again in luserspace.
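
To make the shared-descriptor point concrete, here is a minimal
standalone sketch (not taken from the patch; the pipe, the thread
count and the EPOLLONESHOT choice are just illustrative) of several
pthreads blocking in epoll_wait on the same epoll fd.  EPOLLONESHOT
is one way to make sure each readiness event wakes exactly one
waiter; the waiter re-arms the watch when it is done:

  /* build with: cc -pthread shared_epoll.c */
  #include <sys/epoll.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static int epfd;  /* one epoll instance shared by every waiter thread */

  static void *waiter(void *arg)
  {
          struct epoll_event ev;

          /* many threads may block here on the SAME epoll fd;
           * the kernel serializes access, no userspace lock needed */
          if (epoll_wait(epfd, &ev, 1, -1) == 1) {
                  char c;
                  ssize_t n = read(ev.data.fd, &c, 1);

                  printf("thread %lu got %zd byte(s)\n",
                         (unsigned long)pthread_self(), n);

                  /* EPOLLONESHOT disarmed the watch; re-arm it so
                   * another waiter can pick up the next event */
                  ev.events = EPOLLIN | EPOLLONESHOT;
                  epoll_ctl(epfd, EPOLL_CTL_MOD, ev.data.fd, &ev);
          }
          return NULL;
  }

  int main(void)
  {
          pthread_t tid[4];
          int pipefd[2];
          struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT };

          pipe(pipefd);
          epfd = epoll_create1(0);
          ev.data.fd = pipefd[0];
          epoll_ctl(epfd, EPOLL_CTL_ADD, pipefd[0], &ev);

          for (int i = 0; i < 4; i++)
                  pthread_create(&tid[i], NULL, waiter, NULL);

          /* four writes -> four wakeups, each handed to one thread */
          for (int i = 0; i < 4; i++)
                  write(pipefd[1], "x", 1);

          for (int i = 0; i < 4; i++)
                  pthread_join(tid[i], NULL);

          close(epfd);
          return 0;
  }

kevent on a shared kqueue fd can be used the same way; EV_ONESHOT
(or EV_DISPATCH) plays the role EPOLLONESHOT plays here.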