git@vger.kernel.org mailing list mirror (one of many)
* [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
@ 2017-09-29 20:11 Jonathan Tan
  2017-09-29 21:08 ` Johannes Schindelin
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Jonathan Tan @ 2017-09-29 20:11 UTC (permalink / raw)
  To: git; +Cc: Jonathan Tan, gitster, git, peartben, christian.couder

These patches are also available online:
https://github.com/jonathantanmy/git/commits/partialclone3

(I've announced it in another e-mail, but am now sending the patches to the
mailing list too.)

Here's an update of my work so far. Notable features:
 - These 18 patches allow a user to clone with --blob-max-bytes=<bytes>,
   creating a partial clone that is automatically configured to lazily
   fetch missing objects from the origin. The local repo also has fsck
   working offline, and GC working (albeit only on locally created
   objects).
 - Cloning and fetching are currently only able to exclude blobs by a
   size threshold, but the local repository is already capable of
   fetching missing objects of any type. For example, if a repository
   with missing trees or commits is generated by some tool (for example,
   a future version of Git), current Git with my patches will still be
   able to operate on it, automatically fetching those missing trees
   and commits when needed.
 - Missing blobs are fetched all at once during checkout.

Jeff Hostetler has sent out some object-filtering patches [1] that are a
superset of the object-filtering functionality that I have (in the
pack-objects patches). I have gone for the minimal approach here, but if
his patches are merged, I'll update my patch set to use those.

[1] https://public-inbox.org/git/20170922203017.53986-6-git@jeffhostetler.com/

Demo
====

Obtain a repository.

    $ make prefix=$HOME/local install
    $ cd $HOME/tmp
    $ git clone https://github.com/git/git

Make it advertise the new feature and allow requests for arbitrary blobs.

    $ git -C git config uploadpack.advertiseblobmaxbytes 1
    $ git -C git config uploadpack.allowanysha1inwant 1

Perform the partial clone and check that it is indeed smaller. Specify
"file://" in order to exercise the partial clone mechanism. (Otherwise,
Git performs a local clone, which unselectively copies every object.)

    $ git clone --blob-max-bytes=0 "file://$(pwd)/git" git2
    $ git clone "file://$(pwd)/git" git3
    $ du -sh git2 git3
    85M	git2
    130M	git3

Observe that the new repo is automatically configured to fetch missing objects
from the original repo. Subsequent fetches will also be partial.

    $ cat git2/.git/config
    [core]
    	repositoryformatversion = 1
    	filemode = true
    	bare = false
    	logallrefupdates = true
    [remote "origin"]
    	url = [snip]
    	fetch = +refs/heads/*:refs/remotes/origin/*
    	blobmaxbytes = 0
    [extensions]
    	partialclone = origin
    [branch "master"]
    	remote = origin
    	merge = refs/heads/master

Design
======

Local repository layout
-----------------------

A repository declares its dependence on a *promisor remote* (a remote that
declares that it can serve certain objects when requested) by a repository
extension "partialclone". `extensions.partialclone` must be set to the name of
the remote ("origin" in the demo above).
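
For concreteness, this setup can be reproduced by hand on any repository. A minimal sketch (the remote URL is a placeholder, and it assumes a Git that understands the "partialclone" extension, which stock Git has since adopted):

```shell
# Create a throwaway repository and declare "origin" as its promisor remote.
# Declaring any repository extension also requires repositoryformatversion 1.
git init --quiet partial-demo
cd partial-demo
git remote add origin https://example.com/repo.git   # placeholder URL
git config core.repositoryformatversion 1
git config extensions.partialclone origin
git config --get extensions.partialclone             # prints: origin
```

A Git version that does not understand the extension will refuse to operate on such a repository rather than mishandle its deliberately missing objects.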

A packfile can be annotated as originating from the promisor remote by the
existence of a "(packfile name).promisor" file with arbitrary contents (similar
to the ".keep" file). Whenever a promisor remote sends an object, it declares
that it can serve every object directly or indirectly referenced by the sent
object.

A promisor packfile is a packfile annotated with the ".promisor" file. A
promisor object is an object that the promisor remote is known to be able to
serve, because it is either contained in a promisor packfile or directly
referenced by an object in a promisor packfile.

(In the future, we might need to add ".promisor" support to loose objects.)
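
The marking itself is nothing more than an empty sibling file, which can be tried with stock Git plumbing. A sketch (the identity lines are placeholders, present only so the commit succeeds in a fresh environment):

```shell
# Build a small repository whose objects live in one packfile, then mark
# that pack as having come from a promisor remote.
git init --quiet promisor-demo
cd promisor-demo
git config user.email demo@example.com    # placeholder identity
git config user.name Demo
git commit --allow-empty --quiet -m 'first commit'
git repack -a -d -q                       # pack up all loose objects
for pack in .git/objects/pack/pack-*.pack; do
    : > "${pack%.pack}.promisor"          # contents are arbitrary; empty works
done
ls .git/objects/pack/
```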

Connectivity check and gc
-------------------------

The object walk done by the connectivity check (as used by fsck and fetch) stops
at all promisor objects.

The object walk done by gc also stops at all promisor objects. Only non-promisor
packfiles are deleted (if pack deletion is requested); promisor packfiles are
left alone. This maintains the distinction between promisor packfiles and
non-promisor packfiles. (In the future, we might need to do something more
sophisticated with promisor packfiles.)
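
The fsck behavior can be demonstrated end to end with a repository that is deliberately missing a blob. This is a sketch assuming a Git that implements the design above (stock Git has since absorbed it); the remote URL and identity are placeholders. The blob is omitted from the pack and deleted, yet fsck succeeds because the tree referencing it sits in a promisor pack:

```shell
# A repository whose only blob is missing, but "promised" by a promisor pack.
git init --quiet fsck-demo
cd fsck-demo
git config user.email demo@example.com    # placeholder identity
git config user.name Demo
echo data > file.txt
git add file.txt
git commit --quiet -m 'one'
commit=$(git rev-parse HEAD)
tree=$(git rev-parse 'HEAD^{tree}')
# Pack only the commit and the tree -- not the blob -- and mark the
# resulting pack as a promisor pack.
printf '%s\n%s\n' "$commit" "$tree" |
    git pack-objects --quiet .git/objects/pack/pack >/dev/null
for p in .git/objects/pack/pack-*.pack; do : > "${p%.pack}.promisor"; done
# Delete the loose objects (and the index, which also refers to the blob),
# leaving the blob missing from the repository entirely.
rm -r .git/objects/??
rm -f .git/index
# Declare the promisor remote; fsck now stops at promisor objects.
git config core.repositoryformatversion 1
git remote add origin https://example.com/repo.git   # placeholder URL
git config extensions.partialclone origin
git fsck
```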

Fetching of missing objects
---------------------------

When `sha1_object_info_extended()` (or similar) is invoked, it will
automatically attempt to fetch a missing object from the promisor remote if that
object is not in the local repository. For efficiency, no check is made as to
whether that object is known to be a promisor object or not.

This automatic fetching can be toggled on and off by the `fetch_if_missing`
global variable, and it is on by default.

The actual fetch is done through the fetch-pack/upload-pack protocol. Right now,
this uses the fact that upload-pack allows blob and tree "want"s, and this
incurs the overhead of the unnecessary ref advertisement. I hope that protocol
v2 will allow us to declare that blob and tree "want"s are allowed, and allow
the client to declare that it does not want the ref advertisement. All packfiles
downloaded in this way are annotated with ".promisor".
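
The single-object fetch can be tried directly with `git fetch-pack` against a server that sets `uploadpack.allowanysha1inwant`, as in the demo above. A sketch (the repository names, file, and identity are made up for illustration; it relies only on stock upload-pack behavior):

```shell
# Server: a repository that permits "want"s for arbitrary objects.
git init --quiet server
git -C server config user.email demo@example.com   # placeholder identity
git -C server config user.name Demo
echo hello > server/greeting.txt
git -C server add greeting.txt
git -C server commit --quiet -m 'greeting'
git -C server config uploadpack.allowanysha1inwant 1
blob=$(git -C server rev-parse HEAD:greeting.txt)

# Client: an otherwise empty repository that asks for that one blob by hash.
git init --quiet client
cd client
git fetch-pack --no-progress ../server "$blob"
git cat-file blob "$blob"
```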

Fetching with `git fetch`
-------------------------

The fetch-pack/upload-pack protocol has also been extended to support omission
of blobs above a certain size. The client only allows this when fetching from
the promisor remote, and will annotate any packs received from the promisor
remote with ".promisor".

Jonathan Tan (18):
  fsck: introduce partialclone extension
  fsck: support refs pointing to promisor objects
  fsck: support referenced promisor objects
  fsck: support promisor objects as CLI argument
  index-pack: refactor writing of .keep files
  introduce fetch-object: fetch one promisor object
  sha1_file: support lazily fetching missing objects
  rev-list: support termination at promisor objects
  gc: do not repack promisor packfiles
  pack-objects: rename want_.* to ignore_.*
  pack-objects: support --blob-max-bytes
  fetch-pack: support excluding large blobs
  fetch: refactor calculation of remote list
  fetch: support excluding large blobs
  clone: support excluding large blobs
  clone: configure blobmaxbytes in created repos
  unpack-trees: batch fetching of missing blobs
  fetch-pack: restore save_commit_buffer after use

 Documentation/git-pack-objects.txt                |  12 +-
 Documentation/technical/pack-protocol.txt         |   9 +
 Documentation/technical/protocol-capabilities.txt |   7 +
 Documentation/technical/repository-version.txt    |  12 +
 Makefile                                          |   1 +
 builtin/cat-file.c                                |   2 +
 builtin/clone.c                                   |  24 +-
 builtin/fetch-pack.c                              |  21 ++
 builtin/fetch.c                                   |  36 ++-
 builtin/fsck.c                                    |  26 +-
 builtin/gc.c                                      |   3 +
 builtin/index-pack.c                              | 113 ++++---
 builtin/pack-objects.c                            |  97 ++++--
 builtin/prune.c                                   |   7 +
 builtin/repack.c                                  |   7 +-
 builtin/rev-list.c                                |  13 +
 cache.h                                           |  13 +-
 connected.c                                       |   1 +
 environment.c                                     |   1 +
 fetch-object.c                                    |  45 +++
 fetch-object.h                                    |  11 +
 fetch-pack.c                                      |  23 +-
 fetch-pack.h                                      |   3 +
 list-objects.c                                    |  16 +-
 object.c                                          |   2 +-
 packfile.c                                        |  77 ++++-
 packfile.h                                        |  13 +
 remote-curl.c                                     |  21 +-
 remote.c                                          |   2 +
 remote.h                                          |   2 +
 revision.c                                        |  33 ++-
 revision.h                                        |   5 +-
 setup.c                                           |   7 +-
 sha1_file.c                                       |  38 ++-
 t/t0410-partial-clone.sh                          | 343 ++++++++++++++++++++++
 t/t5300-pack-object.sh                            |  45 +++
 t/t5500-fetch-pack.sh                             | 115 ++++++++
 t/t5601-clone.sh                                  | 101 +++++++
 t/test-lib-functions.sh                           |  12 +
 transport-helper.c                                |   4 +
 transport.c                                       |  18 ++
 transport.h                                       |  12 +
 unpack-trees.c                                    |  22 ++
 upload-pack.c                                     |  16 +-
 44 files changed, 1278 insertions(+), 113 deletions(-)
 create mode 100644 fetch-object.c
 create mode 100644 fetch-object.h
 create mode 100755 t/t0410-partial-clone.sh

-- 
2.14.2.822.g60be5d43e6-goog


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-09-29 20:11 [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches) Jonathan Tan
@ 2017-09-29 21:08 ` Johannes Schindelin
  2017-10-02  4:23 ` Junio C Hamano
  2017-10-03  6:15 ` Christian Couder
  2 siblings, 0 replies; 8+ messages in thread
From: Johannes Schindelin @ 2017-09-29 21:08 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: git, gitster, git, peartben, christian.couder

Hi Jonathan,

On Fri, 29 Sep 2017, Jonathan Tan wrote:

> Jeff Hostetler has sent out some object-filtering patches [1] that are a
> superset of the object-filtering functionality that I have (in the
> pack-objects patches). I have gone for the minimal approach here, but if
> his patches are merged, I'll update my patch set to use those.

I wish there were a way for you to work *with* Jeff on this. It seems that
your aims are similar enough for that (you both need changes in the
protocol) yet different enough to allow for talking past each other (big
blobs vs narrow clone).

And I get the impression that in this instance, it slows everything down
to build competing, large patch series rather than building on top of each
other's work.

Additionally, I am not helping by pestering Jeff all the time about
different issues, so it is partially my fault.

But maybe there is a chance to *really* go for a minimal approach, as in
"incremental enough that you can share the first <N> patches"? And even
better: "come up with those first <N> patches together"?

Ciao,
Dscho




* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-09-29 20:11 [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches) Jonathan Tan
  2017-09-29 21:08 ` Johannes Schindelin
@ 2017-10-02  4:23 ` Junio C Hamano
  2017-10-03  6:15 ` Christian Couder
  2 siblings, 0 replies; 8+ messages in thread
From: Junio C Hamano @ 2017-10-02  4:23 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: git, git, peartben, christian.couder

Jonathan Tan <jonathantanmy@google.com> writes:

> Jeff Hostetler has sent out some object-filtering patches [1] that are a
> superset of the object-filtering functionality that I have (in the
> pack-objects patches). I have gone for the minimal approach here, but if
> his patches are merged, I'll update my patch set to use those.
>
> [1] https://public-inbox.org/git/20170922203017.53986-6-git@jeffhostetler.com/

Sounds good.  Or perhaps rebasing the other way around, if we feel
that the "fsck with known-missing object" part of your series is
further along than Jeff's series (which is my impression, but I have
an obvious bias: I happened to have reviewed your series with a
finer-toothed comb before I saw Jeff's series).

Thanks for working well together ;-).


* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-09-29 20:11 [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches) Jonathan Tan
  2017-09-29 21:08 ` Johannes Schindelin
  2017-10-02  4:23 ` Junio C Hamano
@ 2017-10-03  6:15 ` Christian Couder
  2017-10-03  8:50   ` Junio C Hamano
  2 siblings, 1 reply; 8+ messages in thread
From: Christian Couder @ 2017-10-03  6:15 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: git, Junio C Hamano, Jeff Hostetler, Ben Peart

On Fri, Sep 29, 2017 at 10:11 PM, Jonathan Tan <jonathantanmy@google.com> wrote:
> These patches are also available online:
> https://github.com/jonathantanmy/git/commits/partialclone3
>
> (I've announced it in another e-mail, but am now sending the patches to the
> mailing list too.)
>
> Here's an update of my work so far. Notable features:
>  - These 18 patches allow a user to clone with --blob-max-bytes=<bytes>,
>    creating a partial clone that is automatically configured to lazily
>    fetch missing objects from the origin. The local repo also has fsck
>    working offline, and GC working (albeit only on locally created
>    objects).
>  - Cloning and fetching is currently only able to exclude blobs by a
>    size threshold, but the local repository is already capable of
>    fetching missing objects of any type. For example, if a repository
>    with missing trees or commits is generated by any tool (for example,
>    a future version of Git), current Git with my patches will still be
>    able to operate on them, automatically fetching those missing trees
>    and commits when needed.
>  - Missing blobs are fetched all at once during checkout.

Could you give a bit more details about the use cases this is designed for?
It seems that when people review my work they want a lot of details
about the use cases, so I guess they would also be interested in
getting this kind of information for your work too.

Could this support users who would be interested in lazily cloning
only one kind of files, for example *.jpeg?


* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-10-03  6:15 ` Christian Couder
@ 2017-10-03  8:50   ` Junio C Hamano
  2017-10-03 14:39     ` Jeff Hostetler
  0 siblings, 1 reply; 8+ messages in thread
From: Junio C Hamano @ 2017-10-03  8:50 UTC (permalink / raw)
  To: Christian Couder; +Cc: Jonathan Tan, git, Jeff Hostetler, Ben Peart

Christian Couder <christian.couder@gmail.com> writes:

> Could you give a bit more details about the use cases this is designed for?
> It seems that when people review my work they want a lot of details
> about the use cases, so I guess they would also be interested in
> getting this kind of information for your work too.
>
> Could this support users who would be interested in lazily cloning
> only one kind of files, for example *.jpeg?

I do not know about others, but the reason why I was not interested
in finding out "use cases" is that the value of this series is
use-case agnostic.

At least to me, the most interesting part of the series is that it
allows you to receive a set of objects transferred from the other
side that lacks some of the objects that would otherwise be required
to be present for connectivity purposes, and it introduces a mechanism
that allows the object transfer layer, gc, and fsck to work well
together in the resulting repository that deliberately lacks some
objects.  The transfer layer marks the objects obtained from a
specific remote as such, and gc and fsck are taught not to try to
follow a missing link or diagnose a missing link as an error, if a
missing link is expected according to the mark the transfer layer left.

And it does so in such a way that it is use-case agnostic.  The
mechanism does not care how the objects to be omitted were chosen,
or how the omission criteria were negotiated between the sender and
the receiver of the pack.

I think the series comes with one filter that is size-based, but I
view it as a technology demonstration.  It does not have to be the
primary use case.  IOW, I do not think the series is meant as a
declaration that size-based filtering is the most important thing
and other omission criteria are less important.

You should be able to build path based omission (i.e. narrow clone)
or blobtype based omission.  Depending on your needs, you may want
different object omission criteria.  It is something you can build
on top.  And the work done on transfer/gc/fsck in this series does
not have to change to accommodate these different "use cases".




* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-10-03  8:50   ` Junio C Hamano
@ 2017-10-03 14:39     ` Jeff Hostetler
  2017-10-03 23:42       ` Jonathan Tan
  0 siblings, 1 reply; 8+ messages in thread
From: Jeff Hostetler @ 2017-10-03 14:39 UTC (permalink / raw)
  To: Junio C Hamano, Christian Couder; +Cc: Jonathan Tan, git, Ben Peart



On 10/3/2017 4:50 AM, Junio C Hamano wrote:
> Christian Couder <christian.couder@gmail.com> writes:
> 
>> Could you give a bit more details about the use cases this is designed for?
>> It seems that when people review my work they want a lot of details
>> about the use cases, so I guess they would also be interested in
>> getting this kind of information for your work too.
>>
>> Could this support users who would be interested in lazily cloning
>> only one kind of files, for example *.jpeg?
> 
> I do not know about others, but the reason why I was not interested
> in finding out "use cases" is that the value of this series is
> use-case agnostic.
> 
> At least to me, the most interesting part of the series is that it
> allows you to receive a set of objects transferred from the other
> side that lacks some of the objects that would otherwise be required
> to be present for connectivity purposes, and it introduces a mechanism
> that allows the object transfer layer, gc, and fsck to work well
> together in the resulting repository that deliberately lacks some
> objects.  The transfer layer marks the objects obtained from a
> specific remote as such, and gc and fsck are taught not to try to
> follow a missing link or diagnose a missing link as an error, if a
> missing link is expected according to the mark the transfer layer left.
> 
> And it does so in such a way that it is use-case agnostic.  The
> mechanism does not care how the objects to be omitted were chosen,
> or how the omission criteria were negotiated between the sender and
> the receiver of the pack.
> 
> I think the series comes with one filter that is size-based, but I
> view it as a technology demonstration.  It does not have to be the
> primary use case.  IOW, I do not think the series is meant as a
> declaration that size-based filtering is the most important thing
> and other omission criteria are less important.
> 
> You should be able to build path based omission (i.e. narrow clone)
> or blobtype based omission.  Depending on your needs, you may want
> different object omission criteria.  It is something you can build
> on top.  And the work done on transfer/gc/fsck in this series does
> not have to change to accommodate these different "use cases".
> 

Agreed.

There are lots of reasons for wanting partial clones, and we've been
flinging lots of RFCs at each other that each seem to have a different
base assumption (small-blobs-only vs. sparse-checkout vs. <whatever>),
without reaching consensus or closure.

The main thing is to allow the client to use partial clone to request
a "subset", let the server deliver that "subset", and let the client
tooling deal with an incomplete view of the repo.

As I see it, there are the following major parts to partial clone:
1. How to let git-clone (and later git-fetch) specify the desired
    subset of objects that it wants?  (A ref-relative request.)
2. How to let the server and git-pack-objects build that incomplete
    packfile?
3. How to remember in the local config that a partial clone (or
    fetch) was used and that missing objects should be expected?
4. How to dynamically fetch missing objects individually?
     (Not a ref-relative request.)
5. How to augment the local ODB with partial clone information and
    let git-fsck (and friends) perform limited consistency checking?
6. Methods for bulk fetching of missing objects (whether in a pre-verb
    hook or in unpack-tree).
7. Miscellaneous issues (e.g. fixing places that accidentally cause
    a missing object to be fetched when they don't really need it).

My proposal [1] includes a generic filtering mechanism that handles 3
types of filtering and makes it easy to add other techniques as we
see fit.  It slips in at the list-objects / traverse_commit_list
level and hides all of the details from rev-list and pack-objects.
I have a follow on proposal [2] that extends the filtering parameter
handling to git-clone, git-fetch, git-fetch-pack, git-upload-pack
and the pack protocol.  That takes care of items 1 and 2 above.

Jonathan's proposal [3] includes code to update the local config,
dynamically fetch individual objects, and handle the local ODB and
fsck consistency checking.  That takes care of items 3, 4, and 5.

As was suggested above, I think we should merge our efforts:
using my filtering for 1 and 2 and Jonathan's code for 3, 4, and 5.
I would need to eliminate the "relax" options in favor of his
is_promised() functionality for index-pack and similar.  And omit
his blob-max-bytes changes from pack-objects, the protocol and
related commands.

That should be a good first step.

We both have thoughts on bulk fetching (mine in pre-verb hooks and
his in unpack-tree).  We don't need this immediately, but can wait
until the above is working to revisit.

[1] https://github.com/jeffhostetler/git/pull/3
[2] https://github.com/jeffhostetler/git/pull/4
[3] https://github.com/jonathantanmy/git/tree/partialclone3

Thoughts?

Thanks,
Jeff


* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-10-03 14:39     ` Jeff Hostetler
@ 2017-10-03 23:42       ` Jonathan Tan
  2017-10-04 13:30         ` Jeff Hostetler
  0 siblings, 1 reply; 8+ messages in thread
From: Jonathan Tan @ 2017-10-03 23:42 UTC (permalink / raw)
  To: Jeff Hostetler; +Cc: Junio C Hamano, Christian Couder, git, Ben Peart

On Tue, Oct 3, 2017 at 7:39 AM, Jeff Hostetler <git@jeffhostetler.com> wrote:
>
> As I see it, there are the following major parts to partial clone:
> 1. How to let git-clone (and later git-fetch) specify the desired
>    subset of objects that it wants?  (A ref-relative request.)
> 2. How to let the server and git-pack-objects build that incomplete
>    packfile?
> 3. How to remember in the local config that a partial clone (or
>    fetch) was used and that missing objects should be expected?
> 4. How to dynamically fetch missing objects individually?
>     (Not a ref-relative request.)
> 5. How to augment the local ODB with partial clone information and
>    let git-fsck (and friends) perform limited consistency checking?
> 6. Methods for bulk fetching of missing objects (whether in a pre-verb
>    hook or in unpack-tree).
> 7. Miscellaneous issues (e.g. fixing places that accidentally cause
>    a missing object to be fetched when they don't really need it).

Thanks for the enumeration.

> As was suggested above, I think we should merge our efforts:
> using my filtering for 1 and 2 and Jonathan's code for 3, 4, and 5.
> I would need to eliminate the "relax" options in favor of his
> is_promised() functionality for index-pack and similar.  And omit
> his blob-max-bytes changes from pack-objects, the protocol and
> related commands.
>
> That should be a good first step.

This sounds good to me. Jeff Hostetler's filtering (all blobs, blobs
by size, blobs by sparse checkout specification) is more comprehensive
than mine, so removing blob-max-bytes from my code is not a problem.

> We both have thoughts on bulk fetching (mine in pre-verb hooks and
> his in unpack-tree).  We don't need this immediately, but can wait
> until the above is working to revisit.

Agreed.


* Re: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
  2017-10-03 23:42       ` Jonathan Tan
@ 2017-10-04 13:30         ` Jeff Hostetler
  0 siblings, 0 replies; 8+ messages in thread
From: Jeff Hostetler @ 2017-10-04 13:30 UTC (permalink / raw)
  To: Jonathan Tan; +Cc: Junio C Hamano, Christian Couder, git, Ben Peart



On 10/3/2017 7:42 PM, Jonathan Tan wrote:
> On Tue, Oct 3, 2017 at 7:39 AM, Jeff Hostetler <git@jeffhostetler.com> wrote:
>>
>> As I see it, there are the following major parts to partial clone:
>> 1. How to let git-clone (and later git-fetch) specify the desired
>>     subset of objects that it wants?  (A ref-relative request.)
>> 2. How to let the server and git-pack-objects build that incomplete
>>     packfile?
>> 3. How to remember in the local config that a partial clone (or
>>     fetch) was used and that missing objects should be expected?
>> 4. How to dynamically fetch missing objects individually?
>>      (Not a ref-relative request.)
>> 5. How to augment the local ODB with partial clone information and
>>     let git-fsck (and friends) perform limited consistency checking?
>> 6. Methods for bulk fetching of missing objects (whether in a pre-verb
>>     hook or in unpack-tree).
>> 7. Miscellaneous issues (e.g. fixing places that accidentally cause
>>     a missing object to be fetched when they don't really need it).
> 
> Thanks for the enumeration.
> 
>> As was suggested above, I think we should merge our efforts:
>> using my filtering for 1 and 2 and Jonathan's code for 3, 4, and 5.
>> I would need to eliminate the "relax" options in favor of his
>> is_promised() functionality for index-pack and similar.  And omit
>> his blob-max-bytes changes from pack-objects, the protocol and
>> related commands.
>>
>> That should be a good first step.
> 
> This sounds good to me. Jeff Hostetler's filtering (all blobs, blobs
> by size, blobs by sparse checkout specification) is more comprehensive
> than mine, so removing blob-max-bytes from my code is not a problem.
> 
>> We both have thoughts on bulk fetching (mine in pre-verb hooks and
>> his in unpack-tree).  We don't need this immediately, but can wait
>> until the above is working to revisit.
> 
> Agreed.
> 

Thanks.

I'll make a first pass at merging our efforts then and
post something shortly.

Jeff



