From: Jeff King <peff@peff.net>
To: Duy Nguyen <pclouds@gmail.com>
Cc: Git Mailing List <git@vger.kernel.org>,
Christian Couder <chriscool@tuxfamily.org>,
Derrick Stolee <dstolee@microsoft.com>
Subject: Re: [PATCH 1/3] pack-objects: fix tree_depth and layer invariants
Date: Thu, 22 Nov 2018 11:50:12 -0500
Message-ID: <20181122165012.GH28192@sigill.intra.peff.net>
In-Reply-To: <CACsJy8AETNxNnEq-8ROrQTkjy-_9mtoprmc=BQ554f7QECajPw@mail.gmail.com>
On Tue, Nov 20, 2018 at 05:37:18PM +0100, Duy Nguyen wrote:
> > But in (b), we use the number of stored objects, _not_ the allocated
> > size of the objects array. So we can run into a situation like this:
> >
> > 1. packlist_alloc() needs to store the Nth object, so it grows the
> > objects array to M, where M > N.
> >
> > 2. oe_set_tree_depth() wants to store a depth, so it allocates an
> > array of length N. Now we've violated our invariant.
> >
> > 3. packlist_alloc() needs to store the N+1th object. But it _doesn't_
> > grow the objects array, since N <= M still holds. We try to assign
> > to tree_depth[N+1], which is out of bounds.
>
> Do you think this splitting of data out into packing_data is too
> fragile, and that we should just scrap the whole thing and move all
> the data back into object_entry[]? We would use more memory of
> course, but higher memory usage is still better than more bugs (if
> these are likely to show up again).
Certainly that thought crossed my mind while working on these patches. :)
Especially given the difficulties it introduced into the recent
bitmap-reuse topic, and the size fixes we had to deal with in v2.19.

Overall, though, I dunno. This fix, while subtle, turned out not to be
too complicated. And the memory savings are real. I consider 100M
objects to be on the large side of what's feasible for stock Git these
days, and I think we are talking about on the order of 4GB of memory
savings there. You need a big machine to handle a repository of that
size, but 4GB is still appreciable.
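
(Back-of-the-envelope, using those same figures: 4GB spread over 100M
objects works out to roughly 40 bytes saved per object_entry on
average.)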

So I guess at this point, with all (known) bugs fixed, we should stick
with it for now. If it becomes a problem for the development of some
future feature, we can re-evaluate at that point.
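
To make the failure mode above concrete, here is a rough sketch in C.
This is hypothetical, simplified code, not the actual pack-objects
source: the real struct packing_data and helpers like packlist_alloc()
and oe_set_tree_depth() have different signatures, and the growth math
here is made up. Only the shape of the bug is the point.

/*
 * Sketch of the broken invariant (hypothetical, simplified; not the
 * real pack-objects code).
 */
#include <stdlib.h>

struct object_entry { unsigned int dummy; };

struct packing_data {
	struct object_entry *objects;
	unsigned int nr_objects;   /* N: objects actually stored */
	unsigned int nr_alloc;     /* M: slots allocated, M >= N */
	unsigned int *tree_depth;  /* must stay as large as the objects array */
};

static void packlist_alloc(struct packing_data *pdata)
{
	if (pdata->nr_objects >= pdata->nr_alloc) {
		pdata->nr_alloc = (pdata->nr_alloc + 16) * 3 / 2;
		pdata->objects = realloc(pdata->objects,
					 pdata->nr_alloc * sizeof(*pdata->objects));
		/* note: nothing here grows tree_depth to match nr_alloc */
	}
	pdata->nr_objects++;
}

static void oe_set_tree_depth(struct packing_data *pdata,
			      unsigned int idx, unsigned int depth)
{
	if (!pdata->tree_depth)
		/* BUG: sized from nr_objects (N), not nr_alloc (M) */
		pdata->tree_depth = calloc(pdata->nr_objects,
					   sizeof(*pdata->tree_depth));
	pdata->tree_depth[idx] = depth;
}

int main(void)
{
	struct packing_data pdata = { NULL, 0, 0, NULL };

	packlist_alloc(&pdata);           /* 1st object; nr_alloc grows to M > 1 */
	oe_set_tree_depth(&pdata, 0, 1);  /* tree_depth gets only nr_objects == 1 slot */
	packlist_alloc(&pdata);           /* 2nd object fits within M, no growth... */
	oe_set_tree_depth(&pdata, 1, 2);  /* ...so this write is past the end of tree_depth */

	return 0;
}

As the quoted text describes, the invariant is that any such auxiliary
array stays at least as long as the allocated objects array (the M
above), not just the number of stored objects (the N).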
-Peff