From: Taylor Blau <>
To: Jeff King <>
Cc: Thomas Braun <>,
	Derrick Stolee <>,
Subject: Re: [PATCH 0/5] handling 4GB .idx files
Date: Tue, 1 Dec 2020 13:27:26 -0500	[thread overview]
Message-ID: <X8aLDlzcNCwP699c@nand.local> (raw)
In-Reply-To: <>

On Tue, Dec 01, 2020 at 06:23:28AM -0500, Jeff King wrote:
> I'm not sure if EXPENSIVE is the right ballpark, or if we'd want a
> VERY_EXPENSIVE. On my machine, the whole test suite for v2.29.0 takes 64
> seconds to run, and setting GIT_TEST_LONG=1 bumps that to 103s. It got a
> bit worse since then, as t7900 adds an EXPENSIVE test that takes ~200s
> (it's not strictly additive, since we can work in parallel on other
> tests for the first bit, but still, yuck).
> So we're looking at 2-3x to run the expensive tests now. This new one
> would be 20x or more. I'm not sure if anybody would care or not (i.e.,
> whether anyone actually runs the whole suite with this flag). I thought
> we did for some CI job, but it looks like it's just the one-off in
> t5608.

I had written something similar yesterday before mutt crashed and I
decided to stop work for the day.

I have a sense that very few people actually run the suite with
GIT_TEST_LONG regularly, and that group may vanish entirely if we add a
test which increases the runtime of the suite by 20x in this mode.

I have mixed feelings about VERY_EXPENSIVE. On one hand, having this
test checked in so that we can quickly refer back to it in the case of a
regression is useful. On the other hand, how much is it worth having
this in-tree if nobody ever runs it? I'm speculating about whether or not
people would run this, of course.
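
For reference, the EXPENSIVE prerequisite is keyed off GIT_TEST_LONG in
t/test-lib.sh via test_lazy_prereq. A minimal standalone sketch of how a
separate VERY_EXPENSIVE tier could be guarded by its own knob (the
GIT_TEST_VERY_LONG variable here is hypothetical, not an existing
switch; the helper functions stand in for test-lib's prerequisite
machinery):

```shell
# Existing behavior in git's test suite (roughly):
#   test_lazy_prereq EXPENSIVE 'test -n "$GIT_TEST_LONG"'
#
# A VERY_EXPENSIVE tier could use a new, separate variable so that
# plain GIT_TEST_LONG=1 runs do not pick up the 20x-slower tests.

GIT_TEST_LONG=1
GIT_TEST_VERY_LONG=   # hypothetical knob; empty means "off"

# Stand-ins for test-lib's lazy prerequisite checks:
is_expensive () { test -n "$GIT_TEST_LONG"; }
is_very_expensive () { test -n "$GIT_TEST_VERY_LONG"; }

if is_expensive; then echo "EXPENSIVE tests enabled"; fi
if is_very_expensive; then echo "VERY_EXPENSIVE tests enabled"; fi
# prints only "EXPENSIVE tests enabled"
```

That way the (possibly empty) set of people who run GIT_TEST_LONG=1
would not be penalized, and the new test would only run when someone
opts in explicitly.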

My hunch is that anybody who is interested enough to fix regressions in
this area would be able to refer back to the list archive to dig up this
thread and recover the script.

I don't feel strongly, really, but just noting some light objections to
checking this test into the suite.


Thread overview: 19+ messages
2020-11-13  5:06 Jeff King
2020-11-13  5:06 ` [PATCH 1/5] compute pack .idx byte offsets using size_t Jeff King
2020-11-13  5:07 ` [PATCH 2/5] use size_t to store pack .idx byte offsets Jeff King
2020-11-13  5:07 ` [PATCH 3/5] fsck: correctly compute checksums on idx files larger than 4GB Jeff King
2020-11-13  5:07 ` [PATCH 4/5] block-sha1: take a size_t length parameter Jeff King
2020-11-13  5:07 ` [PATCH 5/5] packfile: detect overflow in .idx file size checks Jeff King
2020-11-13 11:02   ` Johannes Schindelin
2020-11-15 14:43 ` [PATCH 0/5] handling 4GB .idx files Thomas Braun
2020-11-16  4:10   ` Jeff King
2020-11-16 13:30     ` Derrick Stolee
2020-11-16 23:49       ` Jeff King
2020-11-30 22:57     ` Thomas Braun
2020-12-01 11:23       ` Jeff King
2020-12-01 11:39         ` t7900's new expensive test Jeff King
2020-12-01 20:55           ` Derrick Stolee
2020-12-02  2:47             ` [PATCH] t7900: speed up " Jeff King
2020-12-03 15:23               ` Derrick Stolee
2020-12-01 18:27         ` Taylor Blau [this message]
2020-12-02 13:12           ` [PATCH 0/5] handling 4GB .idx files Jeff King
