git@vger.kernel.org mailing list mirror (one of many)
 help / color / mirror / code / Atom feed
From: "Ben Peart" <peartben@gmail.com>
To: "'Shawn Pearce'" <spearce@spearce.org>
Cc: "'git'" <git@vger.kernel.org>, <benpeart@microsoft.com>
Subject: RE: [RFC] Add support for downloading blobs on demand
Date: Tue, 17 Jan 2017 16:50:13 -0500	[thread overview]
Message-ID: <002501d2710b$af74c4d0$0e5e4e70$@gmail.com> (raw)
In-Reply-To: <CAJo=hJumYXTRN_B3iZdmcpomp7wJ+UPcikxGb6rn9W=uJeYmfw@mail.gmail.com>

Thanks for the encouragement, support, and good ideas to look into.

Ben

> -----Original Message-----
> From: Shawn Pearce [mailto:spearce@spearce.org]
> Sent: Friday, January 13, 2017 4:07 PM
> To: Ben Peart <peartben@gmail.com>
> Cc: git <git@vger.kernel.org>; benpeart@microsoft.com
> Subject: Re: [RFC] Add support for downloading blobs on demand
> 
> On Fri, Jan 13, 2017 at 7:52 AM, Ben Peart <peartben@gmail.com> wrote:
> >
> > Goal
> > ~~~~
> >
> > To be able to better handle repos with many files that any individual
> > developer doesn’t need it would be nice if clone/fetch only brought
> > down those files that were actually needed.
> >
> > To enable that, we are proposing adding a flag to clone/fetch that
> > will instruct the server to limit the objects it sends to commits and
> > trees and to not send any blobs.
> >
> > When git performs an operation that requires a blob that isn’t
> > currently available locally, it will download the missing blob and add
> > it to the local object store.
> 
> Interesting. This is also an area I want to work on with my team at $DAY_JOB.
> Repositories are growing along multiple dimensions, and developers or
> editors don't always need all blobs for all time available locally to successfully
> perform their work.
> 
> > Design
> > ~~~~~~
> >
> > Clone and fetch will pass a “--lazy-clone” flag (open to a better name
> > here) similar to “--depth” that instructs the server to only return
> > commits and trees and to ignore blobs.
> 
> My group at $DAY_JOB hasn't talked about it yet, but I want to add a
> protocol capability that lets clone/fetch ask only for blobs smaller than a
> specified byte count. This could be set to a reasonable text file size (e.g. <= 5
> MiB) to predominately download only source files and text documentation,
> omitting larger binaries.
> 
> If the limit was set to 0, its the same as your idea to ignore all blobs.
> 

This is an interesting idea that may be an easier way to help mitigate 
the cost of very large files.  While our primary issue today is the 
sheer number of files, I'm sure at some point we'll run into issues with 
file size as well.  

> > Later during git operations like checkout, when a blob cannot be found
> > after checking all the regular places (loose, pack, alternates, etc),
> > git will download the missing object and place it into the local
> > object store (currently as a loose object) then resume the operation.
> 
> Right. I'd like to have this object retrieval be inside the native Git wire
> protocol, reusing the remote configuration and authentication setup. That
> requires expanding the server side of the protocol implementation slightly
> allowing any reachable object to be retrieved by SHA-1 alone. Bitmap indexes
> can significantly reduce the computational complexity for the server.
> 

Agree.  

> > To prevent git from accidentally downloading all missing blobs, some
> > git operations are updated to be aware of the potential for missing blobs.
> > The most obvious being check_connected which will return success as if
> > everything in the requested commits is available locally.
> 
> This ... sounds risky for the developer, as the repository may be corrupt due
> to a missing object, and the user cannot determine it.
> 
> Would it be reasonable for the server to return a list of SHA-1s it knows
> should exist, but has omitted due to the blob threshold (above), and the
> local repository store this in a binary searchable file?
> During connectivity checking its assumed OK if an object is not present in the
> object store, but is listed in this omitted objects file.
> 

Corrupt repos due to missing blobs must be pretty rare as I've never 
seen anyone report that error but for this and other reasons (see Peff's 
suggestion on how to minimize downloading unnecessary blobs) having this 
data could be valuable.  I'll add it to the list of things to look into.

> > To minimize the impact on the server, the existing dumb HTTP protocol
> > endpoint “objects/<sha>” can be used to retrieve the individual
> > missing blobs when needed.
> 
> I'd prefer this to be in the native wire protocol, where the objects are in pack
> format (which unfortunately differs from loose format). I assume servers
> would combine many objects into pack files, potentially isolating large
> uncompressable binaries into their own packs, stored separately from
> commits/trees/small-text-blobs.
> 
> I get the value of this being in HTTP, where HTTP caching inside proxies can
> be leveraged to reduce master server load. I wonder if the native wire
> protocol could be taught to use a variation of an HTTP GET that includes the
> object SHA-1 in the URL line, to retrieve a one-object pack file.
> 

You make a good point. I don't think the benefit of hitting this 
"existing" end point outweighs the many drawbacks.  Adding the ability 
to retrieve an individual blob via the native wire protocol seems a 
better plan.

> > Performance considerations
> > ~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > We found that downloading commits and trees on demand had a
> > significant negative performance impact.  In addition, many git
> > commands assume all commits and trees are available locally so they
> > quickly got pulled down anyway.  Even in very large repos the commits
> > and trees are relatively small so bringing them down with the initial
> > commit and subsequent fetch commands was reasonable.
> >
> > After cloning, the developer can use sparse-checkout to limit the set
> > of files to the subset they need (typically only 1-10% in these large
> > repos).  This allows the initial checkout to only download the set of
> > files actually needed to complete their task.  At any point, the
> > sparse-checkout file can be updated to include additional files which
> > will be fetched transparently on demand.
> >
> > Typical source files are relatively small so the overhead of
> > connecting and authenticating to the server for a single file at a
> > time is substantial.  As a result, having a long running process that
> > is started with the first request and can cache connection information
> > between requests is a significant performance win.
> 
> Junio and I talked years ago (offline, sorry no mailing list archive) about
> "narrow checkout", which is the idea of the client being able to ask for a pack
> file from the server that only includes objects along specific path names. This
> would allow a client to amortize the setup costs, and even delta compress
> source files against each other (e.g.
> boilerplate across Makefiles or license headers).
> 
> If the paths of interest can be determined as a batch before starting the
> connection, this may be easier than maintaining a cross platform connection
> cache in a separate process.
> 

We looked into sparse/narrow-clone but for a variety of reasons it 
didn't work well for our usage patterns (see my response to Peff's 
feedback for more details).

> > Now some numbers
> > ~~~~~~~~~~~~~~~~
> >
> > One repo has 3+ million files at tip across 500K folders with 5-6K
> > active developers.  They have done a lot of work to remove large files
> > from the repo so it is down to < 100GB.
> >
> > Before changes: clone took hours to transfer the 87GB .pack + 119MB
> > .idx
> >
> > After changes: clone took 4 minutes to transfer 305MB .pack + 37MB
> > .idx
> >
> > After hydrating 35K files (the typical number any individual developer
> > needs to do their work), there was an additional 460 MB of loose files
> > downloaded.
> >
> > Total savings: 86.24 GB * 6000 developers = 517 Terabytes saved!
> >
> > We have another repo (3.1 M files, 618 GB at tip with no history with
> > 3K+ active developers) where the savings are even greater.
> 
> This is quite impressive, and shows this strategy has a lot of promise.
> 
> 
> > Future Work
> > ~~~~~~~~~~~
> >
> > The current prototype calls a new hook proc in
> > sha1_object_info_extended and read_object, to download each missing
> > blob.  A better solution would be to implement this via a long running
> > process that is spawned on the first download and listens for requests
> > to download additional objects until it terminates when the parent git
> > operation exits (similar to the recent long running smudge and clean filter
> work).
> 
> Or batching these up in advance. checkout should be able to determine
> which path entries from the index it wants to write to the working tree. Once
> it has that set of paths it wants to write, it should be fast to construct a
> subset of paths for which the blobs are not present locally, and then pass the
> entire group off for download.
> 

Yes, I'm optimistic that we can optimize for the checkout case (which is 
a _very_ common case!).

> > Need to do more investigation into possible code paths that can
> > trigger unnecessary blobs to be downloaded.  For example, we have
> > determined that the rename detection logic in status can also trigger
> > unnecessary blobs to be downloaded making status slow.
> 
> There isn't much of a workaround here. Only options I can see are disabling
> rename detection when objects are above a certain size, or removing entries
> from the rename table when the blob isn't already local, which may yield
> different results than if the blob(s) were local.
> 
> Another is to try to have actual source files always be local, and thus we only
> punt on rename detection for bigger files that are more likely to be binary,
> and thus less likely to match for rename[1] unless it was SHA-1 identity
> match, which can be done without the
> blob(s) present.
> 

While large files can be a real problem, our biggest issue today is 
having a lot (millions!) of source files when any individual developer 
only needs a small percentage of them.  Git with 3+ million local files 
just doesn't perform well.

We'll see what we can come up with here - especially if we had some 
information _about_ the blob, even though we didn't have the blob itself.

> 
> [1] I assume most really big files are some sort of media asset (e.g.
> JPEG), where a change inside the source data may result in large difference in
> bytes due to the compression applied by the media file format.
> 
> > Need to investigate an alternate batching scheme where we can make a
> > single request for a set of "related" blobs and receive single a
> > packfile (especially during checkout).
> 
> Heh, what I just said above. Glad to see you already thought of it.
> 
> > Need to investigate adding a new endpoint in the smart protocol that
> > can download both individual blobs as well as a batch of blobs.
> 
> Agreed, I said as much above. Again, glad to see you have similar ideas. :)


  reply	other threads:[~2017-01-17 21:50 UTC|newest]

Thread overview: 13+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2017-01-13 15:52 [RFC] Add support for downloading blobs on demand Ben Peart
2017-01-13 21:07 ` Shawn Pearce
2017-01-17 21:50   ` Ben Peart [this message]
2017-01-17 22:05     ` Martin Fick
2017-01-17 22:23       ` Stefan Beller
2017-01-18 18:27         ` Ben Peart
2017-01-17 18:42 ` Jeff King
2017-01-17 21:50   ` Ben Peart
2017-02-05 14:03     ` Christian Couder
2017-02-07 18:21       ` Ben Peart
2017-02-07 21:56         ` Jakub Narębski
2017-02-08  2:18           ` Ben Peart
2017-02-23 15:39         ` Ben Peart

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

  List information: http://vger.kernel.org/majordomo-info.html

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='002501d2710b$af74c4d0$0e5e4e70$@gmail.com' \
    --to=peartben@gmail.com \
    --cc=benpeart@microsoft.com \
    --cc=git@vger.kernel.org \
    --cc=spearce@spearce.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
Code repositories for project(s) associated with this public inbox

	https://80x24.org/mirrors/git.git

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).