git@vger.kernel.org mailing list mirror (one of many)
* Settings for minimizing repacking (and keeping 'rsync' happy)
@ 2019-07-27 23:41 ardi
  2019-07-29  9:42 ` Jeff King
  2019-07-29 14:35 ` Konstantin Ryabitsev
  0 siblings, 2 replies; 5+ messages in thread
From: ardi @ 2019-07-27 23:41 UTC (permalink / raw)
  To: git

Hi!

Some of my Git repositories have mirrors, maintained with 'rsync'. I
want to have some level of repacking, so that the repositories are
efficient, but I also want to minimize it, so that 'rsync' never
has to perform a big transfer for the repositories.

For example, I think it would be fine if files are repacked just once
in their lifetimes, and then that resulting pack file is never
repacked again. I did read the gc.bigPackThreshold and
gc.autoPackLimit settings, but I don't think they would accomplish
that.

Basically, what I'm describing is the behaviour of not packing files
until the resulting pack would be a given size (say 10MB for example),
and then never repacking such ~10MB packs again, ever.

Can this be done with some Git settings? And do you foresee any kind
of serious drawback or potential problem with this kind of behaviour?

Thanks!!

ardi

* Re: Settings for minimizing repacking (and keeping 'rsync' happy)
  2019-07-27 23:41 Settings for minimizing repacking (and keeping 'rsync' happy) ardi
@ 2019-07-29  9:42 ` Jeff King
  2019-07-29 12:56   ` Ævar Arnfjörð Bjarmason
  2019-07-29 14:35 ` Konstantin Ryabitsev
  1 sibling, 1 reply; 5+ messages in thread
From: Jeff King @ 2019-07-29  9:42 UTC (permalink / raw)
  To: ardi; +Cc: git

On Sun, Jul 28, 2019 at 01:41:34AM +0200, ardi wrote:

> Some of my Git repositories have mirrors, maintained with 'rsync'. I
> want to have some level of repacking, so that the repositories are
> efficient, but I also want to minimize it, so that 'rsync' never
> has to perform a big transfer for the repositories.

Yes, this is a common problem. The solutions I've seen/used are:

  - use a git-aware transport like git-fetch that can negotiate which
    objects to send

  - use a tool that can find duplicated chunks across files. Many
    de-duping backup systems (e.g., borg) use a rolling hash similar to
    rsync to find moveable chunks, but then look up those chunks in a
    master index (whereas rsync is always looking to match chunks in a
    file of the same name). This works well in practice because Git is
    not usually rewriting most of the data, but just shuffling it around
    between files.

    In theory it shouldn't be that hard to tell the receiving rsync to
    look for source chunks not just in the file of the same name, but
    from a set of existing packfiles (say, everything already in
    .git/objects/pack/ on the receiver). But I don't know offhand of an
    option to rsync to do so.
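
To make those two concrete, here are rough, untested sketches (the
repository URL and paths below are just placeholders). A git-native
mirror that negotiates and transfers only the new objects on each
update:

    git clone --mirror https://example.com/repo.git repo.git
    git -C repo.git remote update --prune

And a de-duplicating backup of the repository with borg, which matches
chunks through its own index regardless of which packfile they ended
up in:

    borg init --encryption=none /backups/repo.borg
    borg create /backups/repo.borg::repo-{now} /path/to/repo.git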

> For example, I think it would be fine if files are repacked just once
> in their lifetimes, and then that resulting pack file is never
> repacked again. I did read the gc.bigPackThreshold and
> gc.autoPackLimit settings, but I don't think they would accomplish
> that.
> 
> Basically, what I'm describing is the behaviour of not packing files
> until the resulting pack would be a given size (say 10MB for example),
> and then never repacking such ~10MB packs again, ever.
> 
> Can this be done with some Git settings? And do you foresee any kind
> of serious drawback or potential problem with this kind of behaviour?

You can mark a pack to be kept forever by creating a matching
"pack-1234abcd.keep" file. That doesn't do your automatic "I want 10MB
packs" thing, but if you did it occasionally at the right frequency,
you'd end up with a bunch of 10MB-ish packs.
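
As an illustration (untested, and 10MB is just an example threshold), a
periodic job along those lines might look like:

    # from the top of the repository
    git repack -a -d
    cd .git/objects/pack
    for p in pack-*.pack; do
        keep="${p%.pack}.keep"
        # freeze packs that grew past ~10MB; later repacks will leave
        # their objects alone and never delete them
        if [ ! -e "$keep" ] && [ $(wc -c < "$p") -ge 10485760 ]; then
            touch "$keep"
        fi
    done

Each run repacks everything that isn't already in a kept pack, and any
pack that crossed the threshold gets frozen, so over time you accumulate
~10MB packs that never change again.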

But there are downsides to having a bunch of packs:

  - object lookups are O(log n) within a single pack, but O(n) over the
    number of packs. So if you get a very large number of packs, normal
    operations will start to suffer. This is mitigated by the new "midx"
    feature, which generates an index for multiple packs.

  - git doesn't allow delta compression across packs. So imagine you
    have ten versions of a file that's 5kb, and each version changes
    about 100 bytes. In a single pack, we'd store one base object, plus
    9 deltas, for a total of about 6kb (5000 + 9*100). Across two packs,
    we'd store ~11kb (2*5000 + 8*100). And the worst case is ten packs
    at 50kb.

    As a more real-world example, try this:

      git -c pack.packsizelimit=10M repack -ad

    In a fresh clone of git.git, the size of the pack directory jumps
    from 88MB to 168MB. And in a time-based split (i.e., creating a new
    10MB pack every week), it may be even worse. The command above
    ordered the objects optimally to keep deltas together and _then_
    split things. Whereas a time-based scheme would likely sprinkle
    versions of a file across more packs.

    It should be possible to loosen this restriction and allow
    cross-pack deltas, but it would be very risky. The assumption that
    packs are independent of each other is implicit in much of Git's
    repacking code, so it would be easy to introduce a bug where we
    generate a circular dependency (object A in pack X is a delta
    against object B in pack Y, which is a delta against object A --
    oops, we don't have a full copy anymore).
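
(For completeness: in recent versions of Git the midx mentioned above
can be generated and enabled with something like

    git multi-pack-index write
    git config core.multiPackIndex true

which helps with the lookup cost across many packs, but does nothing
about the lost delta opportunities.)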

-Peff

* Re: Settings for minimizing repacking (and keeping 'rsync' happy)
  2019-07-29  9:42 ` Jeff King
@ 2019-07-29 12:56   ` Ævar Arnfjörð Bjarmason
  2019-07-29 20:16     ` Jeff King
  0 siblings, 1 reply; 5+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2019-07-29 12:56 UTC (permalink / raw)
  To: Jeff King; +Cc: ardi, git


On Mon, Jul 29 2019, Jeff King wrote:

> On Sun, Jul 28, 2019 at 01:41:34AM +0200, ardi wrote:
>
>> Some of my Git repositories have mirrors, maintained with 'rsync'. I
>> want to have some level of repacking, so that the repositories are
>> efficient, but I also want to minimize it, so that 'rsync' never
>> has to perform a big transfer for the repositories.
>
> Yes, this is a common problem. The solutions I've seen/used are:
>
>   - use a git-aware transport like git-fetch that can negotiate which
>     objects to send
>
>   - use a tool that can find duplicated chunks across files. Many
>     de-duping backup systems (e.g., borg) use a rolling hash similar to
>     rsync to find moveable chunks, but then look up those chunks in a
>     master index (whereas rsync is always looking to match chunks in a
>     file of the same name). This works well in practice because Git is
>     not usually rewriting most of the data, but just shuffling it around
>     between files.
>
>     In theory it shouldn't be that hard to tell the receiving rsync to
>     look for source chunks not just in the file of the same name, but
>     from a set of existing packfiles (say, everything already in
>     .git/objects/pack/ on the receiver). But I don't know offhand of an
>     option to rsync to do so.
>
>> For example, I think it would be fine if files are repacked just once
>> in their lifetimes, and then that resulting pack file is never
>> repacked again. I did read the gc.bigPackThreshold and
>> gc.autoPackLimit settings, but I don't think they would accomplish
>> that.
>>
>> Basically, what I'm describing is the behaviour of not packing files
>> until the resulting pack would be a given size (say 10MB for example),
>> and then never repacking such ~10MB packs again, ever.
>>
>> Can this be done with some Git settings? And do you foresee any kind
>> of serious drawback or potential problem with this kind of behaviour?
>
> You can mark a pack to be kept forever by creating a matching
> "pack-1234abcd.keep" file. That doesn't do your automatic "I want 10MB
> packs" thing, but if you did it occasionally at the right frequency,
> you'd end up with a bunch of 10MB-ish packs.
>
> But there are downsides to having a bunch of packs:
>
>   - object lookups are O(log n) within a single pack, but O(n) over the
>     number of packs. So if you get a very large number of packs, normal
>     operations will start to suffer. This is mitigated by the new "midx"
>     feature, which generates an index for multiple packs.
>
>   - git doesn't allow delta compression across packs. So imagine you
>     have ten versions of a file that's 5kb, and each version changes
>     about 100 bytes. In a single pack, we'd store one base object, plus
>     9 deltas, for a total of about 6kb (5000 + 9*100). Across two packs,
>     we'd store ~11kb (2*5000 + 8*100). And the worst case is ten packs
>     at 50kb.
>
>     As a more real-world example, try this:
>
>       git -c pack.packsizelimit=10M repack -ad
>
>     In a fresh clone of git.git, the size of the pack directory jumps
>     from 88MB to 168MB. And in a time-based split (i.e., creating a new
>     10MB pack every week), it may be even worse. The command above
>     ordered the objects optimally to keep deltas together and _then_
>     split things. Whereas a time-based scheme would likely sprinkle
>     versions of a file across more packs.
>
>     It should be possible to loosen this restriction and allow
>     cross-pack deltas, but it would be very risky. The assumption that
>     packs are independent of each other is implicit in much of Git's
>     repacking code, so it would be easy to introduce a bug where we
>     generate a circular dependency (object A in pack X is a delta
>     against object B in pack Y, which is a delta against object A --
>     oops, we don't have a full copy anymore).

The thread I started at
https://public-inbox.org/git/87bmhiykvw.fsf@evledraar.gmail.com/ should
also be of interest. I.e. we could have some knobs to create more
"stable" packs. I know rsync does some in-file hashing, but I don't
know if/how that works when you have one file split into N files, where
some of the chunks in those N files come from the one original file.

But it's possible to imagine a repacking algorithm that would keep
producing entirely new packs, but arrange for them to be ordered/delta'd
in such a way that they optimize, to some degree, for page-by-page
similarity to an older pack.

So e.g. in the example you mention, we'd break the delta chain at 5,
then pick it up again once it reaches 10, etc. The intermediate packs at
6, 7, 8 and 9 would then have the new stuff at the end.

* Re: Settings for minimizing repacking (and keeping 'rsync' happy)
  2019-07-27 23:41 Settings for minimizing repacking (and keeping 'rsync' happy) ardi
  2019-07-29  9:42 ` Jeff King
@ 2019-07-29 14:35 ` Konstantin Ryabitsev
  1 sibling, 0 replies; 5+ messages in thread
From: Konstantin Ryabitsev @ 2019-07-29 14:35 UTC (permalink / raw)
  To: ardi; +Cc: Git Mailing List

On Sat, 27 Jul 2019 at 19:41, ardi <ardillasdelmonte@gmail.com> wrote:
>
> Hi!
>
> Some of my Git repositories have mirrors, maintained with 'rsync'. I
> want to have some level of repacking, so that the repositories are
> efficient, but I also want to minimize it, so that 'rsync' never
> has to perform a big transfer for the repositories.

We use grokmirror to replicate several thousand repositories worldwide
using git-native protocols. It is fast and very efficient.
https://github.com/mricon/grokmirror

Regards,
-- 
Konstantin Ryabitsev
Director, Projects IT
The Linux Foundation
Montréal, Québec

* Re: Settings for minimizing repacking (and keeping 'rsync' happy)
  2019-07-29 12:56   ` Ævar Arnfjörð Bjarmason
@ 2019-07-29 20:16     ` Jeff King
  0 siblings, 0 replies; 5+ messages in thread
From: Jeff King @ 2019-07-29 20:16 UTC (permalink / raw)
  To: Ævar Arnfjörð Bjarmason; +Cc: ardi, git

On Mon, Jul 29, 2019 at 02:56:34PM +0200, Ævar Arnfjörð Bjarmason wrote:

> The thread I started at
> https://public-inbox.org/git/87bmhiykvw.fsf@evledraar.gmail.com/ should
> also be of interest. I.e. we could have some knobs to create more
> "stable" packs, I know rsync does some in-file hashing, but I don't
> if/how that works if you have 1 file split into N where some chunks in
> the N are in the one file.
> 
> But it's possible to imagine a repacking algorithm that would keep
> producing entirely new packs, but arrange for them to be ordered/delta'd
> in such a way that they optimize, to some degree, for page-by-page
> similarity to an older pack.

I actually think that's the part that rsync does well. We don't keep
page-by-page similarity, but rsync (and other tools like borg) are
really good at finding the moved chunks. The problem is just that it
doesn't know to compare chunks between two files with unrelated names.
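
(If you want to see how much the receiver could actually reuse, running
the transfer with something like

    rsync -av --stats repo.git/ mirror.example.com:/path/to/repo.git/

and comparing the "Matched data" and "Literal data" lines in the stats
output gives a rough idea; the host and paths above are just
placeholders.)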

-Peff
