From: Eric Wong
To: ruby-core@ruby-lang.org
Date: Sun, 28 Jan 2018 20:00:14 +0000
Message-ID: <20180128200014.GB10749@starla>
Subject: [ruby-core:85189] Re: [Ruby trunk Feature#13618][Assigned] [PATCH] auto fiber schedule for rb_wait_for_single_fd and rb_waitpid

Koichi Sasada wrote:
> On 2018/01/25 7:01, Eric Wong wrote:
> > For everything else that serves multiple clients in a single
> > process, fair sharing is preferable.
>
> Could you elaborate more? Generally, fairness is preferable. But I
> think we can document "we don't guarantee fair scheduling for this
> feature", because our motivation is to provide a way to process
> multiple connections. Thoughts?

If I write a multi-process server with many long-lived connections,
it's best to balance those connections across processes to mitigate
bottlenecks/problems which exist in each process. That way, any
slowdown or crash affecting one process only affects its fair
subset of connections. This is fair sharing across different *nix
processes...

Within each process, Threadlets are also scheduled round-robin, but
each runs until it can no longer make progress (i.e. until it would
block).

> Or does it cause live-lock? (no problem for server-client apps,
> but multi-agent programs seem prone to live-locking)

It should not: Threadlet keeps a FIFO queue of "ready" Fibers, and
the epoll and kqueue readiness queues are FIFO internally, too.

Blocking accept() mitigates live-lock/thundering herd across
different processes. For non-blocking accept(), I will add
EPOLLEXCLUSIVE support.
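
Below is a minimal standalone sketch of what EPOLLEXCLUSIVE
registration looks like on Linux 4.5+; this is not from the actual
patch, and the port number and one-event-at-a-time loop are
arbitrary choices for illustration. Each process sharing the listen
socket adds it with EPOLLEXCLUSIVE so the kernel wakes only one (or
a few) waiters per incoming connection; losers of the accept() race
simply see EAGAIN:

#define _GNU_SOURCE /* accept4, SOCK_NONBLOCK on glibc */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <errno.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr;
	struct epoll_event ev;
	int lfd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
	int epfd = epoll_create1(0);

	if (lfd < 0 || epfd < 0)
		err(1, "socket/epoll_create1");

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
	addr.sin_port = htons(8080); /* arbitrary port for this sketch */
	if (bind(lfd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		err(1, "bind");
	if (listen(lfd, SOMAXCONN) < 0)
		err(1, "listen");

	/* EPOLLEXCLUSIVE is only valid with EPOLL_CTL_ADD */
	ev.events = EPOLLIN | EPOLLEXCLUSIVE;
	ev.data.fd = lfd;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev) < 0)
		err(1, "epoll_ctl");

	for (;;) {
		struct epoll_event event;
		int cfd;

		if (epoll_wait(epfd, &event, 1, -1) < 0) {
			if (errno == EINTR)
				continue;
			err(1, "epoll_wait");
		}
		cfd = accept4(lfd, NULL, NULL, SOCK_NONBLOCK);
		if (cfd < 0) {
			/* another process won the race for this client */
			if (errno == EAGAIN || errno == EWOULDBLOCK)
				continue;
			err(1, "accept4");
		}
		/* hand cfd off to a Fiber/Threadlet here... */
		close(cfd);
	}
}

Without EPOLLEXCLUSIVE, every process sleeping in epoll_wait() on
the same listen socket gets woken for each new connection, and all
but one of them waste a failed accept4() call.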