From: Christian Couder <christian.couder@gmail.com>
To: Stefan Beller <sbeller@google.com>
Cc: "git@vger.kernel.org" <git@vger.kernel.org>,
	Junio C Hamano <gitster@pobox.com>, Jeff King <peff@peff.net>,
	Ben Peart <Ben.Peart@microsoft.com>,
	Jonathan Tan <jonathantanmy@google.com>,
	Nguyen Thai Ngoc Duy <pclouds@gmail.com>,
	Mike Hommey <mh@glandium.org>,
	Lars Schneider <larsxschneider@gmail.com>,
	Eric Wong <e@80x24.org>,
	Christian Couder <chriscool@tuxfamily.org>
Subject: Re: [PATCH v5 35/40] Add Documentation/technical/external-odb.txt
Date: Fri, 25 Aug 2017 08:14:08 +0200
Message-ID: <CAP8UFD1oONnj93UKf=nBzgOQtY2E+ZVvoLGDNGLsZVobfiN90Q@mail.gmail.com>
In-Reply-To: <CAGZ79kYhUJ5mmTEO3b9G7M6onuCusBUTSsD7KeCmeMpfaOvroQ@mail.gmail.com>

On Thu, Aug 3, 2017 at 8:38 PM, Stefan Beller <sbeller@google.com> wrote:
> On Thu, Aug 3, 2017 at 2:19 AM, Christian Couder
> <christian.couder@gmail.com> wrote:
>> This describes the external odb mechanism's purpose and
>> how it works.
>
> Thanks for providing this documentation patch!
>
> I read through it sequentially, see questions that came to mind
> in between.

Thanks for your feedback!

> If the very last paragraph came earlier (or an example), it
> would have helped me to understand the big picture better.

Ok, I added the following at the end of the "helpers" section:

"Early on git commands send an 'init' instruction to the registered
commands. A capability negociation will take place during this
request/response exchange which will let Git and the helpers know how
they can further collaborate. The attribute system can also be used to
tell Git which objects should be handled by which helper."
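
For illustration (this is not part of the patch text, and the helper
path is made up), registering a "magic" odb and routing some blobs to
it could look like this:

------------------------
$ git config odb.magic.scriptCommand /path/to/magic-helper.sh
$ echo '*.jpg odb=magic' >>.gitattributes
------------------------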

>> +Purpose
>> +=======
>> +
>> +The purpose of this mechanism is to make it possible to handle Git
>> +objects, especially blobs, in much more flexible ways.
>> +
>> +Currently Git can store its objects only in the form of loose objects
>> +in separate files or packed objects in a pack file.
>> +
>> +This is not flexible enough for some important use cases like handling
>> +really big binary files or handling a really big number of files that
>> +are fetched only as needed. And it is not realistic to expect that Git
>> +could fully natively handle many of such use cases.
>
> This is a strong statement. Why is it not realistic? What are these
> "many of such use cases"?

What I mean is that Git's default storage (loose objects and packed
objects) cannot easily be optimized for many different kinds of
content. Currently it is optimized for a moderate number of not very
big text files, and it also works quite well when there are not too
many small binary files. And then there are tweaks that can be used to
improve things in specific cases (for example if you have very big
text files, you can set "core.bigfilethreshold" to a size bigger than
your text files so that they will still be delta-compressed, as Peff
explained in a recent thread).
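
For example (the value is just an illustration and depends on the
repo), that tweak is a one-liner:

------------------------
$ git config core.bigFileThreshold 2g
------------------------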

As Git is used by more and more people with different needs, I think
it is not realistic to expect that we can optimize its object storage
for all these different needs. So a better strategy is to just let
them store objects in external stores.

If we wanted to optimize for different use cases without letting
people use external stores, we would anyway need to implement
different internal stores, which would be a huge burden and which
could lead us to re-implement things like HTTP servers that already
exist outside Git.

About these many use cases, I gave the "really big binary files"
example, which is why Git LFS exists (and which GitLab is interested
in solving better), and the "really big number of files that are
fetched only as needed" example, which Microsoft is interested in
solving. I could also imagine that some people have both big text
files and big binary files, in which case the "core.bigfilethreshold"
tweak might not work well, or that some people already have blobs in
some different stores (like HTTP servers, Docker registries, artifact
stores, ...) and want to fetch them from there as much as possible.
And then letting people use different stores can make clones or
fetches restartable, which would solve another problem people have
long been complaining about...

I will try to use the above explanations to improve the statement in
the documentation, though I don't want it to be as long as the above.
Do you have an idea about what the right balance should be?

>> +Furthermore many improvements that are dependent on specific setups
>> +could be implemented in the way Git objects are managed if it was
>> +possible to customize how the Git objects are handled. For example a
>> +restartable clone using the bundle mechanism has often been requested,
>> +but implementing that would go against the current strict rules under
>> +which the Git objects are currently handled.
>
> So in this example, you would use todays git-clone to obtain a small version
> of the repo and then obtain other objects later?

The problem with explaining how it would work is that the
--initial-refspec option is added to git clone later in the patch
series. And there could be changes in the later part of the patch
series. So I don't want to promise or explain too much here.
But maybe I could add another patch to better explain that at the end
of the series.

>> +What Git needs is a mechanism to make it possible to customize in a lot
>> +of different ways how the Git objects are handled.
>
> I do not understand why we need this. Is this aimed to support git LFS,
> which by its model has additional objects not natively tracked by Git, that
> are fetched later when needed?

It is aimed at supporting not just something like Git LFS, but also
many different use cases (see my explanations above).

>> Though this
>> +mechanism should try as much as possible to avoid interfering with the
>> +usual way in which Git handles its objects.
>> +
>> +Helpers
>> +=======
>> +
>> +ODB helpers are commands that have to be registered using either the
>> +"odb.<odbname>.subprocessCommand" or the "odb.<odbname>.scriptCommand"
>> +config variables.
>> +
>> +Registering such a command tells Git that an external odb called
>> +<odbname> exists and that the registered command should be used to
>> +communicate with it.
>> +
>> +There are 2 kinds of commands. Commands registered using the
>> +"odb.<odbname>.subprocessCommand" config variable are called "process
>> +commands" and the associated mode is called "process mode". Commands
>> +registered using the "odb.<odbname>.scriptCommand" config variables
>> +are called "script commands" and the associated mode is called "script
>> +mode".
>
> So there is the possibility for multiple ODBs by the nature of the config
> as we can have multiple <odbname> sections. How does Git know which
> odb to talk to? (does it talk to all of them when asking for a random object?)
>
> When writing an object how does Git decide where to store an object
> (internally or in one of its ODB? Maybe in multiple ODBs? Does the user
> give rules how to tackle the problem or will Git have some magic to do
> the right thing? If so where can I read about that?)

Yeah, it's possible to configure many ODBs. In that case, after the
'init' instruction, Git will know which instructions are supported by
each ODB. If more than one ODB supports a 'get_*' instruction, then
yes, Git will ask the ODBs supporting that instruction in turn for
each object it did not already find. If more than one ODB supports a
'put_*' instruction and the attributes for a blob correspond to more
than one of these ODBs, Git will try to "put" the blob into these ODBs
in turn until it succeeds.
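
For example (the helper names and paths are made up), with:

------------------------
$ git config odb.fast-cache.subprocessCommand /path/to/cache-helper
$ git config odb.slow-archive.subprocessCommand /path/to/archive-helper
------------------------

Git would first ask "fast-cache" and then "slow-archive" for an object
it cannot find locally, because that is the order in which they appear
in the config.
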

> One could think that one ODB is able to learn about objects out of band
> i.e. to replace the fetch/clone/push mechanism, whereas another ODB is
> capable of efficient fast local storage and yet another one that is optimized
> for storing large binary files.

Yeah, all these things are possible.

Hopefully the following will clarify that:

"The communication happens through instructions that are sent by Git
and that the commands should answer. If it makes sense, Git will send
the same instruction to many commands in the order in which they are
configured."

>> +Process Mode
>> +============
>> +
>> +In process mode the command is started as a single process invocation
>> +that should last for the entire life of the single Git command that
>> +started it.
>> +
>> +A packet format (pkt-line, see technical/protocol-common.txt) based
>> +protocol over standard input and standard output is used for
>> +communication between Git and the helper command.
>> +
>> +After the process command is started, Git sends a welcome message
>> +("git-read-object-client"), a list of supported protocol version
>> +numbers, and a flush packet. Git expects to read a welcome response
>> +message ("git-read-object-server"), exactly one protocol version
>> +number from the previously sent list, and a flush packet. All further
>> +communication will be based on the selected version.
>> +
>> +The remaining protocol description below documents "version=1". Please
>> +note that "version=42" in the example below does not exist and is only
>> +there to illustrate how the protocol would look with more than one
>> +version.
>> +
>> +After the version negotiation Git sends a list of all capabilities
>> +that it supports and a flush packet. Git expects to read a list of
>> +desired capabilities, which must be a subset of the supported
>> +capabilities list, and a flush packet as response:
>> +
>> +------------------------
>> +packet: git> git-read-object-client
>> +packet: git> version=1
>> +packet: git> version=42
>> +packet: git> 0000
>> +packet: git< git-read-object-server
>> +packet: git< version=1
>> +packet: git< 0000
>> +packet: git> capability=get_raw_obj
>> +packet: git> capability=have
>> +packet: git> capability=put_raw_obj
>> +packet: git> capability=not-yet-invented
>> +packet: git> 0000
>> +packet: git< capability=get_raw_obj
>> +packet: git< 0000
>> +------------------------
>> +
>> +Afterwards Git sends a list of "key=value" pairs terminated with a
>> +flush packet. The list will contain at least the instruction (based on
>> +the supported capabilities) and the arguments for the
>> +instruction. Please note, that the process must not send any response
>> +before it received the final flush packet.
>> +
>> +In general any response from the helper should end with a status
>> +packet. See the documentation of the 'get_*' instructions below for
>> +examples of status packets.
>> +
>> +After the helper has processed an instruction, it is expected to wait
>> +for the next "key=value" list containing another instruction.
>> +
>> +On exit Git will close the pipe to the helper. The helper is then
>> +expected to detect EOF and exit gracefully on its own. Git will wait
>> +until the process has stopped.
>> +
>> +Script Mode
>> +===========
>> +
>> +In this mode Git launches the script command each time it wants to
>> +communicate with the helper. There is no welcome message and no
>> +protocol version in this mode.
>> +
>> +The instruction and associated arguments are passed as arguments when
>> +launching the script command and if needed further information is
>> +passed between Git and the command through stdin and stdout.
>> +
>> +Capabilities/Instructions
>> +=========================
>> +
>> +The following instructions are currently supported by Git:
>> +
>> +- init
>> +- get_git_obj
>> +- get_raw_obj
>> +- get_direct
>> +- put_raw_obj
>> +- have
>> +
>> +The plan is to also support 'put_git_obj' and 'put_direct' soon, for
>> +consistency with the 'get_*' instructions.
>> +
>> + - 'init'
>> +
>> +All the process and script commands must accept the 'init'
>> +instruction. It should be the first instruction sent to a command. It
>> +should not be advertised in the capability exchange. Any argument
>> +should be ignored.
>> +
>> +In process mode, after receiving the 'init' instruction and a flush
>> +packet, the helper should just send a status packet and then a flush
>> +packet. See the 'get_*' instructions below for examples of status
>> +packets.
>> +
>> +In script mode the command should print on stdout the capabilities
>> +that it supports if any. This is the only time in script mode when a
>> +capability exchange happens.
>> +
>> +For example a script command could use the following shell code
>> +snippet to handle the 'init' instruction:
>> +
>> +------------------------
>> +case "$1" in
>> +init)
>> +       echo "capability=get_git_obj"
>> +       echo "capability=put_raw_obj"
>> +       echo "capability=have"
>> +       ;;
>> +------------------------
>
> I can see the rationale for script mode, but not quite for process mode
> as in process mode we could do the same init work that is needed after
> the welcome message?
>
> Is it kept in process mode to keep consistent with script mode?

Yes, and because I want only the 'init' instruction to be required.
In process mode, if there were no 'init' instruction, how could Git
know whether it is ok to send, for example, a 'get' instruction when
it does not yet know the helper's capabilities? And how could a helper
that only sets things up be called only once?

> I assume this is to set up the ODB, which then can also state things like
> "I am not in a state to work, as the network connection is missing"
> or ask the user for a password for the encrypted database?

Yeah, the helper can also take advantage of 'init' to set up and check
everything.

Do you think I should clarify something?
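
In case it helps, here is a rough sketch (not part of the patch, bash
is assumed, only 'init' is handled and binary payloads are ignored) of
the helper side of the process-mode handshake, using the pkt-line
framing from technical/protocol-common.txt:

------------------------
#!/bin/bash
# Write one text packet: 4 hex digits giving the total length (the 4
# length bytes, the payload and its trailing LF), then payload and LF.
packet_txt_write () {
	printf '%04x%s\n' $((${#1} + 5)) "$1"
}

packet_flush () {
	printf '0000'
}

# Read one text packet into $line; $line is empty for a flush packet.
# Returns non-zero on EOF.
packet_txt_read () {
	local len
	line=
	read -r -n 4 len || return 1
	test "$len" = "0000" && return 0
	read -r -n $((16#$len - 4)) line
}

# Welcome and version negotiation.
packet_txt_read && test "$line" = "git-read-object-client" || exit 1
while packet_txt_read && test -n "$line"; do :; done	# version list
packet_txt_write "git-read-object-server"
packet_txt_write "version=1"
packet_flush

# Capability negotiation (this sketch just advertises a fixed subset).
while packet_txt_read && test -n "$line"; do :; done	# Git's capabilities
packet_txt_write "capability=get_raw_obj"
packet_flush

# Main loop: read "key=value" lists up to a flush; only 'init' is handled.
while packet_txt_read
do
	command=
	while test -n "$line"
	do
		case "$line" in
		command=*) command=${line#command=} ;;
		esac
		packet_txt_read || exit 0
	done
	if test "$command" = "init"
	then
		packet_txt_write "status=success"
	else
		packet_txt_write "status=abort"
	fi
	packet_flush
done
------------------------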

>> + - 'get_git_obj <sha1>' and 'get_raw_obj <sha1>'
>> +
>> +These instructions should have a hexadecimal <sha1> argument to tell
>> +which object the helper should send to git.
>> +
>> +In process mode the sha1 argument should be followed by a flush packet
>> +like this:
>> +
>> +------------------------
>> +packet: git> command=get_git_obj
>> +packet: git> sha1=0a214a649e1b3d5011e14a3dc227753f2bd2be05
>> +packet: git> 0000
>> +------------------------
>> +
>> +After reading that the helper should send the requested object to Git in a
>> +packet series followed by a flush packet. If the helper does not experience
>> +problems then the helper must send a "success" status like the following:
>> +
>> +------------------------
>> +packet: git< status=success
>> +packet: git< 0000
>> +------------------------
>> +
>> +In case the helper cannot or does not want to send the requested
>> +object as well as any other object for the lifetime of the Git
>> +process, then it is expected to respond with an "abort" status at any
>> +point in the protocol:
>> +
>> +------------------------
>> +packet: git< status=abort
>> +packet: git< 0000
>> +------------------------
>> +
>> +Git neither stops nor restarts the helper in case the "error"/"abort"
>> +status is set.
>> +
>> +If the helper dies during the communication or does not adhere to the
>> +protocol then Git will stop and restart it with the next instruction.
>> +
>> +In script mode the helper should just send the requested object to Git
>> +by writing it to stdout and should then exit. The exit code should
>> +signal to Git if a problem occurred or not.
>> +
>> +The only difference between 'get_git_obj' and 'get_raw_obj' is that in
>> +case of 'get_git_obj' the requested object should be sent as a Git
>> +object (that is in the same format as loose object files). In case of
>> +'get_raw_obj' the object should be sent in its raw format (that is the
>> +same output as `git cat-file <type> <sha1>`).
>
> In case of abort, what are the implications for Git? How do we deliver the
> message to the user (should the helper print to stderr, or is there a way
> to relay it through Git such that we do not have racy output?)

The helper can print something to stderr. Hopefully it will be printed
using a single printf() call or something like that, which will make
it not so racy. (Or what kind of race are you talking about?) And Git
will remove the current instruction from the helper's capabilities, so
it will not send the same instruction again (for the duration of the
current git process).
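
To make the script mode side of this more concrete, here is a rough
sketch (the store path and layout are made up) of a 'get_raw_obj'
handler that reports problems through its exit code:

------------------------
case "$1" in
get_raw_obj)
	sha1="$2"
	store=/path/to/magic/store	# hypothetical: one raw file per sha1
	if test -f "$store/$sha1"
	then
		# raw content, as `git cat-file <type> <sha1>` would print it
		cat "$store/$sha1"
	else
		exit 1			# non-zero exit code reports the failure to Git
	fi
	;;
esac
------------------------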

>> + - 'get_direct <sha1>'
>> +
>> +This instruction is similar to the other 'get_*' instructions except
>> +that no object should be sent from the helper to Git. Instead the
>> +helper should directly write the requested object into a loose object
>> +file in the ".git/objects" directory.
>> +
>> +After the helper has sent the "status=success" packet and the
>> +following flush packet in process mode, or after it has exited in the
>> +script mode, Git should lookup again for a loose object file with the
>> +requested sha1.
>
> Does it have to be a loose object or is the helper also allowed
> to put a packfile into $GIT_OBJECT_DIRECTORY/pack ?
> If so, is it expected to also produce an idx file?

It could also be a packfile and an idx file, but I expect most helpers
of this kind will just create loose object files.
I will clarify with:

"...Git will again look for the requested sha1 in its loose object
files and pack files."

>> + - 'put_raw_obj <sha1> <size> <type>'
>> +
>> +This instruction should be followed by three arguments to tell which
>> +object the helper will receive from git: <sha1>, <size> and
>> +<type>. The hexadecimal <sha1> argument describes the object that will
>> +be sent from Git to the helper. The <type> is the object type (blob,
>> +tree, commit or tag) of this object. The <size> is the size of the
>> +(decompressed) object content.
>
> So the type is encoded as strings "blob", "tree" ... Maybe quote them?

Ok, they will be quoted in the next version.

> The size is "in bytes" (maybe add that unit?). I expect there is no fanciness
> allowed such as "3.3MB" as that is not precise enough.

Yeah, I added "in bytes".

>> +In process mode the last argument (the type) should be followed by a
>> +flush packet.
>> +
>> +After reading that the helper should read the announced object from
>> +Git in a packet series followed by a flush packet.
>> +
>> +If the helper does not experience problems when receiving and storing
>> +or processing the object, then the helper must send a "success" status
>> +as described for the 'get_*' instructions.
>
> Otherwise an abort is expected?

There are also "notfound" and "error" failures. I will clarify this
with the following:

"Git neither stops nor restarts the helper in case a
"notfound"/"error"/"abort" status is set. An "error" status means a
possibly more transient error than an abort. The helper should also
send a "notfound" error in case of a "get_*" instruction, which means
that the requested object cannot be found."
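
In script mode (described just below), a 'put_raw_obj' handler can be
as small as this sketch (the store path is made up):

------------------------
case "$1" in
put_raw_obj)
	sha1="$2" size="$3" type="$4"	# size/type announced but unused here
	store=/path/to/magic/store	# hypothetical: one raw file per sha1
	# The announced object arrives on stdin in its raw format.
	cat >"$store/$sha1" || exit 1
	;;
esac
------------------------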

>> +In script mode the helper should just receive the announced object
>> +from its standard input. After receiving and processing the object,
>> +the helper should exit and its exit code should signal to Git if a
>> +problem occurred or not.
>> +
>> +- 'have'
>> +
>> +In process mode this instruction should be followed by a flush
>> +packet. After receiving this packet the helper should send the sha1,
>> +size and type, in this order, of all the objects it can provide to Git
>> +(through a 'get_*' instruction). There should be a space character
>> +between the sha1 and the size and between the size and the type, and
>> +then a new line character after the type.
>
> As this is also inside a packet, do we need to care about splitting
> up the payload? i.e. when we have a lot of objects such that we need
> multiple packets to present all 'have's, are we allowed to split
> up anywhere or just after a '\n' ?

The code only supports splitting just after a '\n'. I will clarify with:

"If many packets are needed to send back all this information, the
splits between packets should be made after newline characters."
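
For the script mode side, a 'have' handler could look roughly like
this (store path and layout made up; this store only holds blobs):

------------------------
case "$1" in
have)
	store=/path/to/magic/store	# hypothetical: one raw file per sha1
	# One "<sha1> <size> <type>" line per object we can provide.
	for obj in "$store"/*
	do
		test -f "$obj" || continue
		size=$(wc -c <"$obj" | tr -d '[:space:]')
		printf '%s %s blob\n' "$(basename "$obj")" "$size"
	done
	;;
esac
------------------------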

>> +If the helper does not experience problems, then it must then send a
>> +"success" status as described for the 'get_*' instructions.
>> +
>> +In script mode the helper should send to its standard output the sha1,
>> +size and type, in this order, of all the objects it can provide to
>> +Git. There should also be a space character between the sha1 and the
>> +size and between the size and the type, and then a new line character
>> +after the type.
>> +
>> +After sending this, the script helper should exit and its exit code
>> +should signal to Git if a problem occurred or not.
>> +
>> +Selecting objects
>> +=================
>> +
>> +To select objects that should be handled by an external odb, one can
>> +use the git attributes system. For now this will only work with blobs
>> +and this will only work along with the 'put_raw_obj' instruction.
>> +
>> +For example if one has an external odb called "magic" and has
>> +registered an associated process command helper that supports the
>> +'put_raw_obj' instruction, then one can tell Git that all the .jpg
>> +files should be handled by the "magic" odb using a .gitattributes
>> +file that contains:
>> +
>> +------------------------
>> +*.jpg           odb=magic
>> +------------------------
>
> Hah that answers some questions that are asked earlier!
>
> What happens if I say
>
>   *.jpg odb=my-magic-store,my-jpeg-store
>
> ?

I am not sure how the attributes system works, but I think it should
handle this. So the above would mean that Git will try to send the
.jpg files to both the "my-magic-store" and the "my-jpeg-store"
helpers. The order depends on which of those appears first in the
config files.

> Maybe relevant:
> https://public-inbox.org/git/20170725211300.vwlpioy5jes55273@sigill.intra.peff.net/
> "Extend the .gitattributes file to also specify file sizes"

Yeah, it looks like this could help if some attributes could be set
depending on file sizes.

Thanks,
Christian.
