From: Duy Nguyen <pclouds@gmail.com>
To: Jeff King <peff@peff.net>
Cc: "Ævar Arnfjörð Bjarmason" <avarab@gmail.com>,
	"Git Mailing List" <git@vger.kernel.org>,
	"Junio C Hamano" <gitster@pobox.com>,
	"Christian Couder" <christian.couder@gmail.com>
Subject: Re: git gc --auto yelling at users where a repo legitimately has >6700 loose objects
Date: Fri, 12 Jan 2018 21:23:05 +0700
Message-ID: <20180112142305.GA338@ash>
In-Reply-To: <20180112134609.GB7880@sigill.intra.peff.net>

On Fri, Jan 12, 2018 at 08:46:09AM -0500, Jeff King wrote:
> On Thu, Jan 11, 2018 at 10:33:15PM +0100, Ævar Arnfjörð Bjarmason wrote:
> 
> >  4. At the end of all this, we check *again* if we have >6700 objects,
> >     if we do we print "run 'git prune'" to .git/gc.log, and will just
> >     emit that error for the next day before trying again, at which point
> >     we unlink the gc.log and retry, see gc.logExpiry.
> > 
> > Right now I've just worked around this by setting gc.pruneExpire to a
> > lower value (4.days.ago). But there's a larger issue to be addressed
> > here, and I'm not sure how.
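
(For anyone who wants to copy that workaround, I believe it boils down
to a single config knob, something like

    git config gc.pruneExpire 4.days.ago   # default is 2.weeks.ago

which only shortens the expiry window for unreachable loose objects.
The gc.log retry window mentioned above is a separate knob,
gc.logExpiry, which defaults to one day.)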
> 
> IMHO the right solution is to stop exploding loose objects, and instead
> write them all into a "cruft" pack. That's more efficient, to boot
> (since it doesn't waste inodes, and may even retain deltas between cruft
> objects).
> 
> But there are some tricks around timestamps. I wrote up some thoughts
> in:
> 
>   https://public-inbox.org/git/20170610080626.sjujpmgkli4muh7h@sigill.intra.peff.net/
> 
> and downthread from there.

My thoughts were moving towards that "multiple cruft packs" idea in
your last email of that thread [1]. I'll quote it here so people don't
have to open the link:

> > Why can't we generate a new cruft-pack on every gc run that
> > detects too many unreachable objects? That would not be as
> > efficient as a single cruft-pack but it should be way more
> > efficient than the individual objects, no?
> > 
> > Plus, chances are that the existing cruft-packs are purged with
> > the next gc run anyways.
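
(To make that idea concrete: even with today's plumbing you could
hand-roll something vaguely similar, packing the currently unreachable
objects into their own pack instead of exploding them loose. A sketch
only, not something git does for you, and it side-steps the timestamp
questions discussed below:

    git fsck --unreachable 2>/dev/null |
        awk '$1 == "unreachable" { print $3 }' |
        git pack-objects --non-empty .git/objects/pack/pack-cruft
    git prune-packed

The resulting pack gets a fresh mtime, which is exactly the expiry
question the rest of this mail is about.)
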
> 
> Interesting idea. Here are some thoughts in random order.
> 
> That loses some delta opportunities between the cruft packs, but
> that's certainly no worse than the all-loose storage we have today.

Does that also affect deltas when we copy some objects into the newly
repacked pack (e.g. when some objects in a cruft pack get referenced
again)? I remember we do reuse deltas sometimes, but I don't remember
the details. I guess we probably won't end up with suboptimal deltas ...

> 
> One nice aspect is that it means cruft objects don't incur any I/O
> cost during a repack.

But cruft packs do incur an object lookup cost, since we still search
all packs linearly. The multi-pack index being discussed recently
would help. But even without that, packs are searched in rough mtime
order, so old cruft packs shouldn't hurt much, I guess, as long as
there isn't a zillion of them lying around. At that point even
prepare_packed_git() takes a hit.
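
(Concretely, the cost I worry about is roughly "how many *.idx files
might we have to probe per lookup", which you can eyeball with
something like

    ls .git/objects/pack/*.idx | wc -l
    ls -t .git/objects/pack/*.pack   # search order is roughly newest first

nothing exact, just to get a feel for the numbers.)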

> It doesn't really solve the "too many loose objects after gc"
> problem.  It just punts it to "too many packs after gc". This is
> likely to be better because the number of packs would scale with the
> number of gc runs, rather than the number of crufty objects. But
> there would still be corner cases if you run gc frequently. Maybe
> that would be acceptable.
> 
> I'm not sure how the pruning process would work, especially with
> respect to objects reachable from other unreachable-but-recent
> objects. Right now the repack-and-delete procedure is done by
> git-repack, and is basically:
> 
>   1. Get a list of all of the current packs.
> 
>   2. Ask pack-objects to pack everything into a new pack. Normally this
>      is reachable objects, but we also include recent objects and
>      objects reachable from recent objects. And of course with "-k" all
>      objects are kept.
> 
>   3. Delete everything in the list from (1), under the assumption that
>      anything worth keeping was repacked in step (2), and anything else
>      is OK to drop.
> 
> So if there are regular packs and cruft packs, we'd have to know in
> step 3 which are which. We'd delete the regular ones, whose objects
> have all been migrated to the new pack (either a "real" one or a
> cruft one), but keep the crufty ones whose timestamps are still
> fresh.
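
(Roughly, in shell, with that twist bolted onto step 3. This is only a
sketch: the real logic lives in builtin/repack.c, step 2 glosses over
the recent/unreachable handling entirely, and the "is this a cruft
pack" test below is completely made up:

    old=$(ls .git/objects/pack/pack-*.pack)           # step 1
    git pack-objects --all --reflog \
        .git/objects/pack/pack </dev/null             # step 2, simplified
    for p in $old; do                                  # step 3
        case "$p" in
        *cruft*) ;;  # hypothetical cruft pack: keep until its mtime expires
        *)       rm -f "$p" "${p%.pack}.idx" ;;
        esac
    done

That bookkeeping looks mechanical enough; the hard part is what you
describe next.)
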
> 
> That's a small change, and works except for one thing: the reachable
> from recent objects. You can't just delete a whole cruft pack. Some
> of its objects may be reachable from objects in other cruft packs
> that we're keeping. In other words, you have cruft packs where you
> want to keep half of the objects they contain. How do you do that?

Do we have to? Objects reachable from recent objects must have ended
up in the new pack created at step 2, correct? Which means we can
safely remove the whole cruft pack.

As for objects reachable from other cruft packs, I'm not sure that is
any different from when these objects are loose: if a loose object A
depends on B, but B is much older than A, then B may get pruned anyway
while A stays (which does not sound right if A gets reused).

> I think you'd have to make pack-objects aware of the concept of
> cruft packs, and that it should include reachable-from-recent
> objects in the new pack only if they're in a cruft pack that is
> going to be deleted. So those objects would be "rescued" from the
> cruft pack before it goes away and migrated to the new cruft
> pack. That would effectively refresh their timestamp, but that's
> fine. They're reachable from objects with that fresh timestamp
> already, so effectively they couldn't be deleted until that
> timestamp is hit.
> 
> So I think it's do-able, but it is a little complicated.

[1] https://public-inbox.org/git/20170620140837.fq3wxb63lnqay6xz@sigill.intra.peff.net/
--
Duy

