Date: Wed, 3 Apr 2024 22:31:44 +0000
From: Eric Wong
To: Konstantin Ryabitsev
Cc: meta@public-inbox.org
Subject: Re: sample robots.txt to reduce WWW load
Message-ID: <20240403223144.M651128@dcvr>
References: <20240401132145.M567778@dcvr>
 <20240403-able-meticulous-narwhal-aeea54@lemur>
In-Reply-To: <20240403-able-meticulous-narwhal-aeea54@lemur>

Konstantin Ryabitsev wrote:
> On Mon, Apr 01, 2024 at 01:21:45PM +0000, Eric Wong wrote:
> > Performance is still slow, and crawler traffic patterns tend to
> > do bad things with caches at all levels, so I've regretfully had
> > to experiment with robots.txt to mitigate performance problems.
>
> This has been the source of grief for us, because aggressive bots
> don't appear to be paying any attention to robots.txt, and they are
> fudging their user-agent string to pretend to be a regular browser.
> I am dealing with one that is hammering us from China Mobile IP
> ranges and is currently trying to download every possible snapshot
> of torvalds/linux, while pretending to be various versions of Chrome.

Ouch, that's from cgit doing `git archive` on every single commit?
Yeah, that's a PITA and not something varnish can help with :/

I suppose you're already using some nginx knobs to throttle or
limit requests from their IP ranges?

It's been years since I've used nginx myself, but AFAIK nginx
buffering is all-or-nothing: it either buffers the entire response
before sending or doesn't buffer at all.  IOW (AFAIK), there's no
lazy buffering that sends whatever it can right away and falls back
to buffering only when the client is the bottleneck.

I recommend "proxy_buffering off" in nginx for
public-inbox-{httpd,netd}, since the lazy buffering done by our
Perl logic is ideal for git-{archive,http-backend} trickling to
slow clients (see the sketch below).  This ensures the git memory
hogs finish as quickly as possible while we slowly trickle output
to slow (or throttled) clients with minimal memory overhead.

When I run cgit nowadays, it's _always_ run by
public-inbox-{httpd,netd} to get this lazy buffering behavior.
Previously, I used another poorly-marketed (epoll|kqueue)
multi-threaded Ruby HTTP server to get the same lazy buffering
behavior (I still rely on that server for HTTPS instead of nginx,
since I don't yet have a Perl reverse proxy).

All that said, PublicInbox::WwwCoderepo (a JS-free cgit replacement
with inbox integration UI) only generates archive links for tags,
not for every single commit.
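Something like the following (untested sketch; the backend address,
hostname, and rate-limit numbers are only for illustration, not
public-inbox or nginx defaults):

  # inside the http {} block of nginx.conf:
  # per-IP throttling, one possible knob for abusive ranges
  limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

  server {
    listen 80;
    server_name lore.example.org;  # hypothetical host

    location / {
      limit_req zone=perip burst=10 nodelay;

      # let public-inbox-httpd's Perl event loop do the lazy
      # buffering so git processes can exit quickly; nginx would
      # otherwise buffer fully or not at all
      proxy_buffering off;
      proxy_pass http://127.0.0.1:8080;  # assumed -httpd listener
    }
  }

With that, a slow client only ties up a socket in the Perl event
loop instead of a whole git-{archive,http-backend} process.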
> So, while I welcome having a robots.txt recommendation, it kinda
> assumes that robots will actually play nice and won't try to suck
> down as much as possible as quickly as possible for training some
> LLM-du-jour.

robots.txt actually made a significant difference before I started
playing around with jemalloc-inspired size classes for malloc in
glibc [1] and mwrap-perl [2].  I've since unleashed the bots again
and let them run rampant on the https://80x24.org/lore/ HTML pages.

I'll still need to add malloc tracing of my own to generate
reproducible results and prove the size classes are worth adding
to glibc malloc...

[1] https://public-inbox.org/libc-alpha/20240401191925.M515362@dcvr/
[2] https://80x24.org/mwrap-perl/20240403214222.3258695-2-e@80x24.org/
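P.S. in case the size class idea from [1] and [2] is unfamiliar,
here's a toy C sketch of jemalloc-style rounding (4 classes per
power-of-two doubling).  It only illustrates the general concept
and is not the actual glibc or mwrap-perl change:

  #include <stdio.h>
  #include <stddef.h>

  /* round a request up to a jemalloc-style size class */
  static size_t size_class(size_t n)
  {
          if (n <= 16)
                  return 16;

          /* find g: the power of two with g < n <= 2*g */
          size_t g = 16;
          while (g * 2 < n)
                  g *= 2;

          /* 4 classes per doubling, spaced g/4 apart (min 16) */
          size_t spacing = g / 4;
          if (spacing < 16)
                  spacing = 16;
          return (n + spacing - 1) / spacing * spacing;
  }

  int main(void)
  {
          size_t sizes[] = { 1, 17, 33, 65, 129, 4097 };
          for (size_t i = 0; i < sizeof(sizes)/sizeof(sizes[0]); i++)
                  printf("%zu => %zu\n", sizes[i], size_class(sizes[i]));
          return 0;
  }

Rounding requests up to a small set of classes makes freed chunks
far more likely to exactly satisfy later requests, which limits
fragmentation when crawlers churn through many odd-sized
allocations.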