From: Jeff King <>
To: Derrick Stolee <>
Cc: Thomas Braun <>,
	Derrick Stolee <>,
Subject: [PATCH] t7900: speed up expensive test
Date: Tue, 1 Dec 2020 21:47:56 -0500	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On Tue, Dec 01, 2020 at 03:55:00PM -0500, Derrick Stolee wrote:

> > Since Stolee is on the cc and has already seen me complaining about his
> > test, I guess I should expand a bit. ;)
> Ha. I apologize for causing pain here. My thought was that GIT_TEST_LONG=1
> was only used by someone really willing to wait, or someone specifically
> trying to investigate a problem that only triggers on very large cases.
> In that sense, it's not so much intended as a frequently-run regression
> test, but a "run this if you are messing with this area" kind of thing.
> Perhaps there is a different pattern to use here?

No, I think your interpretation is pretty reasonable. I definitely do
not run with GIT_TEST_LONG normally, but was only poking at it because
of the other discussion.

> > Though speaking of which, another easy win might be setting
> > core.compression to "0". We know the random data won't compress anyway,
> > so there's no point in spending cycles on zlib.
> The intention is mostly to expand the data beyond two gigabytes, so
> dropping compression to get there seems like a good idea. If we are
> not compressing at all, then perhaps we can reliably cut ourselves
> closer to the 2GB limit instead of overshooting as a precaution.

Probably, though I think at best we could save 20% of the test cost. I
didn't play much with it.
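(For reference, that 20% lines up with the arithmetic: each file is five
appends of 512MB-plus-one-byte, i.e. roughly 2.5GB, so trimming down to
just past the 2GB limit would drop about 0.5GB of the 2.5GB written. A
quick shell sketch of the numbers, assuming 64-bit shell arithmetic:)

```shell
# Sizes from the test's loop: five appends of (512MB + 1 byte) per file.
per_file=$((5 * (512 * 1024 * 1024 + 1)))
limit=$((2 * 1024 * 1024 * 1024))
echo "per-file:  $per_file bytes"          # 2684354565, ~2.5GB
echo "overshoot: $((per_file - limit))"    # 536870917, ~0.5GB past 2GB
```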

> Cutting out 70% out seems like a great idea. I don't think it was super
> intentional to slow down those tests.

Here it is as a patch (the numbers are slightly different this time
because I used my usual ramdisk, but the overall improvement is roughly
the same).

I didn't look into whether the test should clean up after itself (test
16 takes longer, too, because of the extra on-disk bytes). I also
noticed while running with "-v" that it complains of corrupted refs. I
assume this is leftover cruft from earlier tests, and I didn't dig into
it. But it may be something worth cleaning up for somebody more familiar
with these tests.

-- >8 --
Subject: [PATCH] t7900: speed up expensive test

A test marked with EXPENSIVE creates two 2.5GB files and adds them to
the repository. This takes 194s to run on my machine, versus 2s when the
EXPENSIVE prereq isn't set. We can trim this down a bit by doing two
things:

  - use "git commit --quiet" to avoid spending time generating a diff
    summary (this actually only helps for the second commit, but I've
    added it to both for consistency). This shaves off 8s.

  - set core.compression to 0. We know these files are full of random
    bytes, and so won't compress (that's the point of the test!).
    Spending cycles on zlib is pointless. This shaves off 122s.
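
(As a quick sanity check of that claim, here gzip stands in for git's
zlib; this snippet is not part of the patch and assumes gzip and
/dev/urandom are available. A megabyte of random bytes does not shrink
even at maximum effort, so skipping compression loses nothing.)

```shell
# Random bytes don't deflate: compressed output is no smaller than input.
tmp=$(mktemp -d)
head -c 1000000 /dev/urandom >"$tmp/blob"
gzip -9 -c "$tmp/blob" >"$tmp/blob.gz"
orig=$(wc -c <"$tmp/blob")
comp=$(wc -c <"$tmp/blob.gz")
echo "original=$orig compressed=$comp"   # compressed is not smaller
rm -r "$tmp"
```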

After this, my total time to run the script is 64s. That won't help
normal runs without GIT_TEST_LONG set, of course, but it's easy enough
to do.

Signed-off-by: Jeff King <>
 t/ | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/t/ b/t/
index d9e68bb2bf..d9a02df686 100755
--- a/t/
+++ b/t/
@@ -239,13 +239,15 @@ test_expect_success 'incremental-repack task' '
 test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
+	test_config core.compression 0 &&
 	for i in $(test_seq 1 5)
 	do
 		test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
 		return 1
 	done &&
 	git add big &&
-	git commit -m "Add big file (1)" &&
+	git commit -qm "Add big file (1)" &&
 	# ensure any possible loose objects are in a pack-file
 	git maintenance run --task=loose-objects &&
@@ -257,7 +259,7 @@ test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
 		return 1
 	done &&
 	git add big &&
-	git commit -m "Add big file (2)" &&
+	git commit -qm "Add big file (2)" &&
 	# ensure any possible loose objects are in a pack-file
 	git maintenance run --task=loose-objects &&
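
(For anyone wanting to reproduce the timings: the EXPENSIVE prereq is
enabled by GIT_TEST_LONG, as noted above. An invocation sketch from a
git.git checkout; the script name is globbed here since the full
filename isn't spelled out in this message:)

```shell
# Run the t7900 script with EXPENSIVE-marked tests enabled.
cd t
GIT_TEST_LONG=1 ./t7900-*.sh -v
```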


Thread overview: 19+ messages
2020-11-13  5:06 [PATCH 0/5] handling 4GB .idx files Jeff King
2020-11-13  5:06 ` [PATCH 1/5] compute pack .idx byte offsets using size_t Jeff King
2020-11-13  5:07 ` [PATCH 2/5] use size_t to store pack .idx byte offsets Jeff King
2020-11-13  5:07 ` [PATCH 3/5] fsck: correctly compute checksums on idx files larger than 4GB Jeff King
2020-11-13  5:07 ` [PATCH 4/5] block-sha1: take a size_t length parameter Jeff King
2020-11-13  5:07 ` [PATCH 5/5] packfile: detect overflow in .idx file size checks Jeff King
2020-11-13 11:02   ` Johannes Schindelin
2020-11-15 14:43 ` [PATCH 0/5] handling 4GB .idx files Thomas Braun
2020-11-16  4:10   ` Jeff King
2020-11-16 13:30     ` Derrick Stolee
2020-11-16 23:49       ` Jeff King
2020-11-30 22:57     ` Thomas Braun
2020-12-01 11:23       ` Jeff King
2020-12-01 11:39         ` t7900's new expensive test Jeff King
2020-12-01 20:55           ` Derrick Stolee
2020-12-02  2:47             ` Jeff King [this message]
2020-12-03 15:23               ` [PATCH] t7900: speed up " Derrick Stolee
2020-12-01 18:27         ` [PATCH 0/5] handling 4GB .idx files Taylor Blau
2020-12-02 13:12           ` Jeff King
