From: "Ævar Arnfjörð Bjarmason" <avarab@gmail.com>
To: Gregory Szorc <gregory.szorc@gmail.com>
Cc: git@vger.kernel.org, stolee@gmail.com, Taylor Blau <me@ttaylorr.com>
Subject: Re: Race condition between repack and loose-objects maintenance task
Date: Wed, 20 Jul 2022 11:52:11 +0200
Message-ID: <220720.86cze0um88.gmgdl@evledraar.gmail.com>
In-Reply-To: <da675dd2-c9cf-c3b4-0231-58b3cf3ce9d7@gmail.com>


On Tue, Jul 19 2022, Gregory Szorc wrote:

> On 6/29/2022 8:23 PM, Taylor Blau wrote:
>> On Wed, Jun 29, 2022 at 11:19:09PM -0400, Taylor Blau wrote:
>>>> However, I think there is yet another bug at play: running
>>>> `incremental-repack` appears to be able to repack the cruft packfile.
>>>> In doing so, it deletes its .mtimes file and the metadata inside.
>>>
>>> That sounds like a bug to me. I'll take a look into it and see what I
>>> can find.
>> I actually think that there are two bugs here.
>> One is that we run a MIDX repack and expire, which could lead to us
>> repacking the entire contents of the cruft pack and then expiring the
>> metadata file. This is a bug, and we should exclude cruft packs from
>> this computation.
>> Another bug can happen when the cruft pack gets written into the
>> MIDX
>> and is MIDX-expireable (meaning that no objects are selected from the
>> cruft pack). In that case, the `git multi-pack-index expire` step would
>> remove the cruft pack entirely, which is also incorrect.
>> I'll take a look at fixing both of these, and thanks for pointing
>> them
>> out!
>
> For posterity, when I disabled cruft packfiles after having it enabled
> for a few weeks, the next `git gc` invocation on a high traffic repo 
> resulted in >100k loose objects/files being created before they were
> summarily deleted by the GC's prune. This is significantly greater
> than the unreferenced object creation rate of the underlying repo. So
> it appears as if the MIDX stripping of the cruft packfile mtimes
> effectively disabled pruning, leading to a build-up of unreferenced objects.
>
> Fortunately I hadn't deployed cruft packfiles to production. If I had,
> the excessive filesystem churn on NFS would have caused an incident
> due to degraded performance.
>
> Since the interaction between cruft packfiles and MIDX appears to be
> buggy, I think I'm going to leave cruft packfiles disabled until these 
> features work better together.

I haven't looked deeply into whether there's any cruft packfile & MIDX
interaction at play here, but I suspect based on past experience that
this has nothing whatsoever to do with the MIDX.

It's because git suddenly found a bunch of objects that should be
evicted from the packs, as you disabled the cruft pack feature.

Here's a past test-case & report of mine where I ran into that:

    https://lore.kernel.org/git/87fu6bmr0j.fsf@evledraar.gmail.com/

That was way before cruft packfiles were integrated, but from my
understanding of how they work the mechanism is exactly the same.

I.e. in that case I deleted a bunch of refs (tag objects), which caused
those to become unreferenced, so on the next "gc/repack" they were all
evicted from their respective packs, at which point the "gc.pruneExpire"
clock started ticking for them.

Whereas in your case you disabled "cruft packs", which are basically a
mechanism where git keeps such "unused" objects in a pack. Once you
disabled it git stopped considering the new *.mtimes file, and started
extracting these objects as loose objects en masse.
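
A rough way to see that mechanism in action on a repository that
already has a cruft pack (just a sketch; the counts and how much gets
pruned outright will depend on your gc.pruneExpire):

    # the cruft pack is the one with a corresponding *.mtimes file
    ls .git/objects/pack/pack-*.mtimes

    git count-objects -v                  # loose objects before

    # repack with cruft packs disabled; the previously "crufted"
    # unreachable objects get ejected as loose objects again (or
    # pruned right away if they're already past gc.pruneExpire)
    git -c gc.cruftPacks=false gc

    git count-objects -v                  # loose objects after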

Which b.t.w. is something I think we might want to reconsider. I.e. we
just pass "--cruft" to pack-objects (see b757353676d
(builtin/pack-objects.c: --cruft without expiration, 2022-05-20)), but
don't have a separate "write cruft" vs. "read cruft" setting (but see below).

The one thing that gives me pause here is your mention of "summarily
deleted by the GC's prune". Did you run "git prune --expire=now"
manually, or do you have a really aggressive "gc.pruneExpire" setting?
It's also possible that I'm entirely misunderstanding this whole thing...
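
(For what it's worth, the relevant knobs can be checked with e.g.:

    git config --get gc.pruneExpire   # unset means the "2.weeks.ago" default
    git config --get gc.cruftPacks

i.e. with the defaults there should be a two-week window before "prune"
deletes the newly-exploded loose objects.)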

Anyway.

Since that 2018-era post of mine I've thought a bit about how to best
handle this particular edge case.

I think the simplest implementation that would work well enough would be
to teach "git repack" to limit how many objects it's willing to extract
from a pack. I.e. the number of "unreachable" objects we're willing to
eject from the pack would be capped, with a check similar to how
"gc.auto" works (i.e. once we reach the limit we'd implicitly turn on
"--keep-unreachable").

It's far from perfect: if the N objects you're extracting happen to
delta-compress particularly well you could still have cases where e.g.
a 1GB repo becomes a 10GB one once those objects are written out loose.

But it would handle the common case, by capping the explosion at e.g.
1k new loose objects rather than 100k or whatever.

It does have the downside that unless you disable that limit, the "gc"
-> "gc.pruneExpire" window becomes even more fuzzy. I.e. now if you run
"gc" the clock starts ticking right away; with this change the clock
would only start ticking whenever we happened to extract those objects.

So there are certainly less dumb ways to do this, and we'd need to make
sure that whatever limit we set is aggressive enough that we'd always
"stay ahead". I.e. if a repo has a very high creation rate of objects
that become unreachable, we'd need to ensure that .git doesn't grow
forever because our eviction rate can't keep up with our additions.

For cruft packs (this is a continuation of the "see below" above) we'd
do the same thing, but could benefit from knowing that such objects are
"already expired", so we could read the *.mtimes, and possibly expire
some right away.
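
Right now that expiration only happens when the cruft pack itself gets
rewritten, e.g. (a sketch of the existing interface, spelling out the
usual two-week grace period):

    # rewrite the cruft pack, dropping cruft objects whose recorded
    # mtime is older than the grace period
    git repack --cruft --cruft-expiration=2.weeks.ago -d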

Aside from some very obscure use-cases I think these sorts of "loose
explosions" are one-off events, so it would be good if we could smooth
them out. The various gc.* settings can be hard to reason about, but we
really do try to be gentle by default; we just fail very badly at that
in some known edge cases, such as this one.




Thread overview: 16+ messages
2022-06-29 16:55 Race condition between repack and loose-objects maintenance task Gregory Szorc
2022-06-29 17:03 ` Taylor Blau
2022-06-29 17:10   ` Taylor Blau
2022-06-29 17:16   ` Gregory Szorc
2022-06-29 17:21     ` Taylor Blau
2022-06-30  0:44       ` Gregory Szorc
2022-06-30  3:19         ` Taylor Blau
2022-06-30  3:23           ` Taylor Blau
2022-06-30 20:12             ` Junio C Hamano
2022-07-05 18:43               ` Gregory Szorc
2022-07-06  8:50                 ` Ævar Arnfjörð Bjarmason
2022-07-20  1:40             ` Gregory Szorc
2022-07-20  9:52               ` Ævar Arnfjörð Bjarmason [this message]
2022-07-26 16:22                 ` Gregory Szorc
2022-07-26 18:11                   ` Ævar Arnfjörð Bjarmason
2022-07-20  1:41             ` Gregory Szorc
