This feature allows servers to serve part of their packfile response as URIs.
This allows server designs that improve scalability in bandwidth and CPU usage
(for example, by serving some data through a CDN), and (in the future) provides
some measure of resumability to clients.
This feature is available only in protocol version 2.
The server advertises the `packfile-uris` capability.
If the client then communicates which protocols (HTTPS, etc.) it supports with
a `packfile-uris` argument, the server MAY send a `packfile-uris` section
directly before the `packfile` section (right after `wanted-refs` if it is
sent) containing URIs of any of the given protocols. The URIs point to
packfiles that use only features that the client has declared that it supports
(e.g. ofs-delta and thin-pack). See protocol-v2.txt for the documentation of
this section.
Clients should then download and index all the given URIs (in addition to
downloading and indexing the packfile given in the `packfile` section of the
response) before performing the connectivity check.
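As a sketch of the client-side handling, each line of a `packfile-uris`
section pairs a packfile hash with a URI. The helper below is a hypothetical
illustration of splitting such lines, not code from Git itself:

```python
# Hypothetical sketch: split the lines of a `packfile-uris` section into
# (pack-hash, URI) pairs. Illustration only; not Git's actual parser.

def parse_packfile_uris(lines):
    """Return a list of (pack_hash, uri) tuples, one per section line."""
    entries = []
    for line in lines:
        # Each line is "<pack-hash> SP <uri>"; split on the first space
        # so URIs containing spaces-after-encoding are left intact.
        pack_hash, uri = line.split(" ", 1)
        entries.append((pack_hash, uri))
    return entries

section = [
    "a" * 40 + " https://cdn.example.com/pack-example.pack",  # fake hash/URL
]
print(parse_packfile_uris(section))
```

The client would then download each listed URI and index the resulting
packfile alongside the inline one before the connectivity check.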
The server can be trivially made compatible with the proposed protocol by
having it advertise `packfile-uris`, tolerating the client sending
`packfile-uris`, and never sending any `packfile-uris` section. But we should
include some sort of non-trivial implementation in the Minimum Viable Product,
at least so that we can test the client.
This is the implementation: a feature, marked experimental, that allows
the server to be configured by one or more entries with the format:
uploadpack.excludeobject=<object-hash> <level> <pack-hash> <uri>
`<object-hash>` is the key of the entry; the object it names can be a
blob, tree, commit, or tag. The value of the entry has three parts:
`<level>` controls the scope of the exclusion (see below), `<pack-hash>`
identifies the packfile that contains the given `<object-hash>` object,
and `<uri>` is the URI from which the client downloads that packfile.
For example, when a blob is configured with `uploadpack.excludeobject`,
then whenever that blob would be sent, the object will instead be
excluded from the packfile response and fetched by the client from the
configured URI.
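The entry format can be illustrated with a small helper. This is a
hypothetical sketch of splitting a configured value into its four parts,
not Git's implementation:

```python
# Hypothetical sketch: split an `uploadpack.excludeobject` value of the
# form "<object-hash> <level> <pack-hash> <uri>" into its parts.
# Illustration only; not code from Git itself.

def parse_excludeobject(value):
    object_hash, level, pack_hash, uri = value.split(" ", 3)
    return {
        "object_hash": object_hash,  # key: the object to exclude
        "level": int(level),         # 0, 1, or 2 (scope of exclusion)
        "pack_hash": pack_hash,      # packfile containing the object
        "uri": uri,                  # where the client downloads it
    }

# Fake placeholder hashes and URL, purely for illustration:
entry = parse_excludeobject(
    "a" * 40 + " 0 " + "b" * 40 + " https://cdn.example.com/p.pack")
print(entry["level"], entry["uri"])
```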
In addition to excluding a single object such as a blob, it is
sometimes desirable to exclude not only the object itself but also the
objects related to it, such as all the objects a tree contains or all
the ancestors a commit can reach. For these cases, `<level>`
distinguishes the scope of the exclusion. It supports three levels:
- Level 0: Exclude only the object itself, without any objects related
to it.
- Level 1: Exclude the object itself and the objects it contains.
- Level 2: Exclude the object itself, the objects it contains, and the
ancestors it can reach.
If `<level>` is configured as 0, only the object itself will be
excluded, no matter what the object type is. This is a common scenario
for large blobs, but it does not make much sense for other object types
(e.g. downloading a single commit without downloading the trees and
blobs it references).
If `<level>` is configured as 1, not only the single object but also
all the objects it contains will be excluded. This applies to
scenarios where a specified non-blob object that includes some large
objects should be excluded.
- If <object-hash> is a blob, the result is the same as level 0,
because a blob contains nothing but itself.
- If <object-hash> is a tree, the tree itself, and all blobs and trees
in it will be excluded.
- If <object-hash> is a commit, the commit itself, the referenced
root-tree, and all blobs and trees in the root-tree will be excluded.
- If <object-hash> is a tag, the tag itself, the dereferenced commit
and all trees and blobs contained in its root-tree will be excluded.
If `<level>` is configured as 2, not only the objects in the scope of
level 1, but also the reachable ancestors will be excluded, provided
`<object-hash>` is a commit or tag.
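The three levels can be sketched with a toy object graph. The graph, the
hashes, and the helpers below are hypothetical illustrations of the scopes
described above, not Git's implementation:

```python
# Toy simulation of the three exclusion levels. Everything here is a
# hypothetical illustration; the names are not real object hashes.

OBJECTS = {
    # id: (type, contained objects, commit parents)
    "blob1":   ("blob",   [],         []),
    "tree1":   ("tree",   ["blob1"],  []),
    "commit1": ("commit", ["tree1"],  []),          # root commit
    "commit2": ("commit", ["tree1"],  ["commit1"]),
    "tag1":    ("tag",    ["commit2"], []),
}

def contained_closure(oid):
    """The object itself plus everything it transitively contains
    (a tag leads to its commit, a commit to its root tree)."""
    seen, stack = set(), [oid]
    while stack:
        cur = stack.pop()
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(OBJECTS[cur][1])
    return seen

def ancestors(oid):
    """Commits reachable via parent links, dereferencing a tag first."""
    typ, contained, parents = OBJECTS[oid]
    if typ == "tag":
        parents = OBJECTS[contained[0]][2]  # dereference to the commit
    result, stack = set(), list(parents)
    while stack:
        cur = stack.pop()
        if cur in result:
            continue
        result.add(cur)
        stack.extend(OBJECTS[cur][2])
    return result

def excluded(oid, level):
    if level == 0:
        return {oid}                        # level 0: the object alone
    result = contained_closure(oid)         # level 1: object + contents
    if level == 2:                          # level 2: also ancestors
        for parent in ancestors(oid):
            result |= contained_closure(parent)
    return result

print(sorted(excluded("commit2", 0)))
print(sorted(excluded("commit2", 1)))
print(sorted(excluded("commit2", 2)))
```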
The old configuration of packfile-uri:
uploadpack.blobPackfileUri=<object-hash> <pack-hash> <uri>
The old configuration is compatible with the new one, but it only
supports the exclusion of blob objects.
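Since the old form carries no `<level>` and applies only to blobs, it is
equivalent to a level-0 entry for that blob. A hypothetical conversion
helper (illustration only, not code from Git):

```python
# Hypothetical sketch: map an old `uploadpack.blobPackfileUri` value
# ("<object-hash> <pack-hash> <uri>") onto the new format by inserting
# level 0, since the old form excludes only the single blob itself.

def upgrade_blob_packfile_uri(old_value):
    object_hash, pack_hash, uri = old_value.split(" ", 2)
    return f"{object_hash} 0 {pack_hash} {uri}"
```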
The client has a config variable `fetch.uriprotocols` that determines which
protocols the end user is willing to use. By default, this is empty.
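The effect of this variable can be sketched as a simple filter: only URIs
whose scheme appears in `fetch.uriprotocols` are followed, and an empty
value lets nothing through. The helper below is a hypothetical
illustration, not Git's implementation:

```python
# Hypothetical sketch: keep only the advertised URIs whose protocol the
# user allows via a comma-separated `fetch.uriprotocols` value.
# An empty value allows no protocols, i.e. the feature stays off.

from urllib.parse import urlsplit

def usable_uris(uris, uriprotocols):
    allowed = [p for p in uriprotocols.split(",") if p]
    return [u for u in uris if urlsplit(u).scheme in allowed]

print(usable_uris(
    ["https://cdn.example.com/a.pack", "ftp://old.example.com/b.pack"],
    "https"))
# prints ['https://cdn.example.com/a.pack']
```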
When the client downloads the given URIs, it should store them with "keep"
files, just like it does with the packfile in the `packfile` section. These
additional "keep" files can only be removed after the refs have been updated -
just like the "keep" file for the packfile in the `packfile` section.
The division of work (initial fetch + additional URIs) introduces convenient
points for resumption of an interrupted clone - such resumption can be done
after the Minimum Viable Product (see "Future work").
The protocol design allows some evolution of the server and client without any
need for protocol changes, so only a small-scoped design is included here to
form the MVP. For example, the following can be done:
* On the client, resumption of clone. If a clone is interrupted, information
could be recorded in the repository's config and a "clone-resume" command
can resume the clone in progress. (Resumption of subsequent fetches is more
difficult because that must deal with the user wanting to use the repository
even after the fetch was interrupted.)
There are some possible features that will require a change in protocol:
* Additional HTTP headers (e.g. authentication)
* Byte range support
* Different file formats referenced by URIs (e.g. raw object)