git@vger.kernel.org mailing list mirror (one of many)
* Make `git fetch --all` parallel?
@ 2016-10-11 20:12 Ram Rachum
  2016-10-11 20:53 ` Stefan Beller
  0 siblings, 1 reply; 12+ messages in thread
From: Ram Rachum @ 2016-10-11 20:12 UTC (permalink / raw)
  To: git

Hi everyone!

I have a repo that has a bunch of different remotes, and I noticed
slowness when doing `git fetch --all`. Is it currently run
sequentially? Do you think that maybe it could be done in parallel so
it could be much faster?

Thanks,
Ram.


* Re: Make `git fetch --all` parallel?
  2016-10-11 20:12 Make `git fetch --all` parallel? Ram Rachum
@ 2016-10-11 20:53 ` Stefan Beller
  2016-10-11 22:37   ` Junio C Hamano
  0 siblings, 1 reply; 12+ messages in thread
From: Stefan Beller @ 2016-10-11 20:53 UTC (permalink / raw)
  To: Ram Rachum; +Cc: git@vger.kernel.org

On Tue, Oct 11, 2016 at 1:12 PM, Ram Rachum <ram@rachum.com> wrote:
> Hi everyone!
>
> I have a repo that has a bunch of different remotes, and I noticed
> slowness when doing `git fetch --all`. Is it currently run
> sequentially? Do you think that maybe it could be done in parallel so
> it could be much faster?
>
> Thanks,
> Ram.

If you were to run the fetches from each remote in parallel,
assuming the workload is unchanged, this could speed up the
execution by up to a factor of the number of remotes.

This sounds pretty easy at first, but once you look into
the details it is not as easy any more:

What if 2 remotes have the same object (e.g. the same commit)?
Currently this is easy: the first remote you fetch from will deliver
that object to you.

When fetching in parallel, we would want to download that object from
just one remote, preferably the remote with better network connectivity(?)

So I do think it would be much faster, but I also think patches for this would
require some thought and a lot of refactoring of the fetch code.

The current fetch protocol is roughly:

remote: I have these refs:
8a36cd87b7c85a651ab388d403629865ffa3ba0d HEAD
10d26b0d1ef1ebfd09418ec61bdadc299ac988e2 refs/heads/ab/gitweb-abbrev-links
77947bbe24e0306d1ce5605c962c4a25f5aca22f refs/heads/ab/gitweb-link-html-escape
...

client: I want 8a36cd87b7c85a651ab388d403629865ffa3ba0d,
and I have 231ce93d2a0b0b4210c810e865eb5db7ba3032b2
and I have 02d0927973782f4b8b7317b499979fada1105be6
and I have 1172e16af07d6e15bca6398f0ded18a0ae7b9249

remote: I don't know about 231ce93d2a0b0b4210c810e865eb5db7ba3032b2,
nor 02d0927973782f4b8b7317b499979fada1105be6, but
I know about 1172e16af07d6e15bca6398f0ded18a0ae7b9249

.... conversation continues...

remote: Ok I figured out what you need, here is a packfile:
<binary stuff>
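
If you want to see the first step of that conversation for yourself,
ls-remote prints just the remote's ref advertisement (same refs as
above):

  $ git ls-remote origin
  8a36cd87b7c85a651ab388d403629865ffa3ba0d  HEAD
  10d26b0d1ef1ebfd09418ec61bdadc299ac988e2  refs/heads/ab/gitweb-abbrev-links
  ...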


During the negotiation phase a client would have to be able to change its
mind (add more "haves", which in the case of parallel fetching become
"will-have-soons", even though the remote had earlier concluded that the
client did not have them).

For more details, see Documentation/technical/pack-protocol.txt


* Re: Make `git fetch --all` parallel?
  2016-10-11 20:53 ` Stefan Beller
@ 2016-10-11 22:37   ` Junio C Hamano
  2016-10-11 22:50     ` Stefan Beller
  0 siblings, 1 reply; 12+ messages in thread
From: Junio C Hamano @ 2016-10-11 22:37 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Ram Rachum, git@vger.kernel.org

Stefan Beller <sbeller@google.com> writes:

> So I do think it would be much faster, but I also think patches for this would
> require some thought and a lot of refactoring of the fetch code.
> ...
> During the negotiation phase a client would have to be able to change its
> mind (add more "haves", which in the case of parallel fetching become
> "will-have-soons", even though the remote had earlier concluded that the
> client did not have them).

Even though a fancy optimization like the one you outlined might be ideal, I
suspect that users would be happier if the network bandwidth were
used to talk to multiple remotes at the same time, even if they
end up receiving the same recent objects from more than one place in
the end.

Is the order in which "git fetch --all" iterates over "all remotes"
predictable and documented?  If so, listing the remotes from the more
powerful and better-connected places to the slower ones and then doing
the equivalent of a stupid

	for remote in $list_of_remotes_ordered_in_such_a_way
	do
		git fetch "$remote" &
		sleep 2
	done

might be a fairly easy way to bring happiness.
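
(A complete script would also wait for the backgrounded fetches, i.e.
with made-up remote names:

	list_of_remotes_ordered_in_such_a_way="origin korg github" # made-up names
	for remote in $list_of_remotes_ordered_in_such_a_way
	do
		git fetch "$remote" &
		sleep 2
	done
	wait

but you get the idea.)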


* Re: Make `git fetch --all` parallel?
  2016-10-11 22:37   ` Junio C Hamano
@ 2016-10-11 22:50     ` Stefan Beller
  2016-10-11 22:58       ` Junio C Hamano
                         ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Stefan Beller @ 2016-10-11 22:50 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Ram Rachum, git@vger.kernel.org

On Tue, Oct 11, 2016 at 3:37 PM, Junio C Hamano <gitster@pobox.com> wrote:
> Stefan Beller <sbeller@google.com> writes:
>
>> So I do think it would be much faster, but I also think patches for this would
>> require some thought and a lot of refactoring of the fetch code.
>> ...
>> During the negotiation phase a client would have to be able to change its
>> mind (add more "haves", which in the case of parallel fetching become
>> "will-have-soons", even though the remote had earlier concluded that the
>> client did not have them).
>
> Even though a fancy optimization like the one you outlined might be ideal, I
> suspect that users would be happier if the network bandwidth were
> used to talk to multiple remotes at the same time, even if they
> end up receiving the same recent objects from more than one place in
> the end.

I agree. Though even for implementing the "dumb" case of fetching
objects twice we'd have to take care of some racing issues, I would assume.

Why did you put a "sleep 2" below?
* a slow start to better spread load locally? (keep the workstation responsive?)
* a slow start to have different fetches in a different phase of the
fetch protocol?
* avoiding some subtle race?

At the very least we would need something similar to what Jeff recently sent
for the push case, with objects quarantined and then made available in one go?

>
> Is the order in which "git fetch --all" iterates over "all remotes"
> predictable and documented?

It is predictable, as it is just the order reported by
$ grep "\[remote " .git/config, i.e. the order of the file, which in my
case turns out to be sorted by importance/history quite naturally.
But reordering my config file would not be a big deal.
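
(That is, the order of the [remote "..."] sections in the file; names
and URLs below are just illustrative:

	# names and URLs below are illustrative
	[remote "origin"]
		url = git://github.com/gitster/git.git
	[remote "sbeller"]
		url = https://github.com/sbeller/git.git
)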

I dunno if it is documented, though.

> If so, listing the remotes from the more
> powerful and better-connected places to the slower ones and then doing
> the equivalent of a stupid
>
>         for remote in $list_of_remotes_ordered_in_such_a_way

list_of_remotes_ordered_in_such_a_way is roughly:
$(git config --get-regexp 'remote.*.url' | tr '.' ' ' | awk '{print $2}')

>         do
>                 git fetch "$remote" &
>                 sleep 2
>         done
>
> might be fairly easy thing to bring happiness.

I would love to see the implementation though, as over time I accumulate
a lot of remotes. (Someone published patches on the mailing list and made
them available somewhere hosted? Grabbing them from their hosting site
is easier for me than applying patches, so I'd rather fetch them... so I have
some remotes now.)


* Re: Make `git fetch --all` parallel?
  2016-10-11 22:50     ` Stefan Beller
@ 2016-10-11 22:58       ` Junio C Hamano
  2016-10-11 22:58       ` Stefan Beller
  2016-10-11 22:59       ` Jeff King
  2 siblings, 0 replies; 12+ messages in thread
From: Junio C Hamano @ 2016-10-11 22:58 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Ram Rachum, git@vger.kernel.org

Stefan Beller <sbeller@google.com> writes:

> Why did you put a "sleep 2" below?

The assumption was to fetch from the faster remotes near the center of the
project universe early; by giving them a head start, fetches that
start in later rounds may have a chance to see newly updated remote-tracking
refs when telling the poorer other ends what we have.

> At the very least we would need something similar to what Jeff recently sent
> for the push case, with objects quarantined and then made available in one go?

There is no race; ref updates are done only after objects are fully
finalized.  You can do the quarantine but that would defeat the "let
the ones from the center of the universe finish early so later ones from
the poor periphery have more .have's to work with" idea, I suspect.



* Re: Make `git fetch --all` parallel?
  2016-10-11 22:50     ` Stefan Beller
  2016-10-11 22:58       ` Junio C Hamano
@ 2016-10-11 22:58       ` Stefan Beller
  2016-10-11 22:59       ` Jeff King
  2 siblings, 0 replies; 12+ messages in thread
From: Stefan Beller @ 2016-10-11 22:58 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Ram Rachum, git@vger.kernel.org

> I dunno, if documented though.

http://stackoverflow.com/questions/26373995/how-to-control-the-order-of-fetching-when-fetching-all-remotes-by-git-fetch-al

We make no promises about the order of --all (I checked our
documentation as well). However, there seems to be a
grouping scheme for remotes whose order you can control by
setting remotes.default (which is not documented in the git config
man page, but only in the git remote man page).
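
For reference, that looks something like this (remote names made up):

    # made-up remote names
    [remotes]
            default = origin gitster peff

    $ git remote update   # fetches the remotes.default group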


* Re: Make `git fetch --all` parallel?
  2016-10-11 22:50     ` Stefan Beller
  2016-10-11 22:58       ` Junio C Hamano
  2016-10-11 22:58       ` Stefan Beller
@ 2016-10-11 22:59       ` Jeff King
  2016-10-11 23:16         ` Ævar Arnfjörð Bjarmason
  2016-10-11 23:18         ` Stefan Beller
  2 siblings, 2 replies; 12+ messages in thread
From: Jeff King @ 2016-10-11 22:59 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Junio C Hamano, Ram Rachum, git@vger.kernel.org

On Tue, Oct 11, 2016 at 03:50:36PM -0700, Stefan Beller wrote:

> I agree. Though even for implementing the "dumb" case of fetching
> objects twice we'd have to take care of some racing issues, I would assume.
> 
> Why did you put a "sleep 2" below?
> * a slow start to better spread load locally? (keep the workstation responsive?)
> * a slow start to have different fetches in a different phase of the
> fetch protocol?
> * avoiding some subtle race?
> 
> At the very least we would need a similar thing as Jeff recently sent for the
> push case with objects quarantined and then made available in one go?

I don't think so. The object database is perfectly happy with multiple
simultaneous writers, and nothing impacts the have/wants until actual
refs are written. Quarantining objects before the refs are written is an
orthogonal concept.

I'm not altogether convinced that parallel fetch would be that much
faster, though. Your bottleneck for a fetch is generally the network for
most of the time, then a brief spike of CPU during delta resolution. You
might get some small benefit from overlapping the fetches so that you
spend CPU on one while you spend network on the other, but I doubt it
would be nearly as beneficial as the parallel submodule clones (which
generally have a bigger CPU segment, and also are generally considered
independent, so there's no real tradeoff of getting duplicate objects).

Sometimes the bottleneck is the server preparing the pack, but if that
is the case, you should probably ask your server admin to enable
bitmaps. :)
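
(Server-side that is roughly:

  $ git config repack.writeBitmaps true   # needs a git new enough to know this knob
  $ git repack -ad                        # next full repack writes the bitmap
)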

> I would love to see the implementation though, as over time I accumulate
> a lot of remotes. (Someone published patches on the mailing list and made
> them available somewhere hosted? Grabbing them from their hosting site
> is easier for me than applying patches, so I'd rather fetch them... so I have
> some remotes now.)

I usually just do a one-off fetch of their URL in such a case, exactly
because I _don't_ want to end up with a bunch of remotes. You can also
mark them with skipDefaultUpdate if you only care about them
occasionally (so you can "git fetch sbeller" when you care about it, but
it doesn't slow down your daily "git fetch").
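
That is, e.g.:

  $ git config remote.sbeller.skipDefaultUpdate true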

-Peff


* Re: Make `git fetch --all` parallel?
  2016-10-11 22:59       ` Jeff King
@ 2016-10-11 23:16         ` Ævar Arnfjörð Bjarmason
  2016-10-11 23:18         ` Stefan Beller
  1 sibling, 0 replies; 12+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2016-10-11 23:16 UTC (permalink / raw)
  To: Jeff King; +Cc: Stefan Beller, Junio C Hamano, Ram Rachum, git@vger.kernel.org

On Wed, Oct 12, 2016 at 12:59 AM, Jeff King <peff@peff.net> wrote:

> I'm not altogether convinced that parallel fetch would be that much
> faster, though.

I have local aliases to use GNU parallel for stuff like this, on my
git.git which has accumulated 17 remotes:

    $ time parallel -j1 'git fetch {}' ::: $(git remote)
    real    0m18.265s
    $ time parallel -j8 'git fetch {}' ::: $(git remote)
    real    0m2.957s

In that case I didn't have any new objects to fetch, but just doing
the negotiation in parallel was a lot faster.

So there's big wins in some cases.
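
(The aliases boil down to roughly this in ~/.gitconfig:

    [alias]
        # rough reconstruction, not my exact alias
        pfetch = "!parallel -j8 'git fetch {}' ::: $(git remote)"
)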


* Re: Make `git fetch --all` parallel?
  2016-10-11 22:59       ` Jeff King
  2016-10-11 23:16         ` Ævar Arnfjörð Bjarmason
@ 2016-10-11 23:18         ` Stefan Beller
  2016-10-12  1:34           ` Jeff King
  1 sibling, 1 reply; 12+ messages in thread
From: Stefan Beller @ 2016-10-11 23:18 UTC (permalink / raw)
  To: Jeff King; +Cc: Junio C Hamano, Ram Rachum, git@vger.kernel.org

On Tue, Oct 11, 2016 at 3:59 PM, Jeff King <peff@peff.net> wrote:
> On Tue, Oct 11, 2016 at 03:50:36PM -0700, Stefan Beller wrote:
>
>> I agree. Though even for implementing the "dumb" case of fetching
>> objects twice we'd have to take care of some racing issues, I would assume.
>>
>> Why did you put a "sleep 2" below?
>> * a slow start to better spread load locally? (keep the workstation responsive?)
>> * a slow start to have different fetches in a different phase of the
>> fetch protocol?
>> * avoiding some subtle race?
>>
>> At the very least we would need something similar to what Jeff recently sent
>> for the push case, with objects quarantined and then made available in one go?
>
> I don't think so. The object database is perfectly happy with multiple
> simultaneous writers, and nothing impacts the have/wants until actual
> refs are written. Quarantining objects before the refs are written is an
> orthogonal concept.

If a remote advertises its tips, we'd need to look these up (client-side) to
decide if we have them, and I do not think we'd do that via a reachability
check, but via a direct lookup in the object database? So I do not quite
understand what we gain from the atomic ref writes in e.g. refs/remotes/origin/.


> I'm not altogether convinced that parallel fetch would be that much
> faster, though.

Ok, time to present data... Let's assume a degenerate case first:
"up-to-date with all remotes" because that is easy to reproduce.

I have 14 remotes currently:

$ time git fetch --all
real 0m18.016s
user 0m2.027s
sys 0m1.235s

$ time git config --get-regexp remote.*.url | awk '{print $2}' |
    xargs -P 14 -I % git fetch %
real 0m5.168s
user 0m2.312s
sys 0m1.167s

A factor of >3, so I suspect there is improvement ;)

Well just as Ævar pointed out, there is some improvement.

>
> I usually just do a one-off fetch of their URL in such a case, exactly
> because I _don't_ want to end up with a bunch of remotes. You can also
> mark them with skipDefaultUpdate if you only care about them
> occasionally (so you can "git fetch sbeller" when you care about it, but
> it doesn't slow down your daily "git fetch").

And I assume you don't want the remotes because it takes time to fetch and not
because your disk space is expensive. ;)

>
> -Peff


* Re: Make `git fetch --all` parallel?
  2016-10-11 23:18         ` Stefan Beller
@ 2016-10-12  1:34           ` Jeff King
  2016-10-12  1:52             ` Jeff King
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff King @ 2016-10-12  1:34 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Junio C Hamano, Ram Rachum, git@vger.kernel.org

On Tue, Oct 11, 2016 at 04:18:15PM -0700, Stefan Beller wrote:

> >> At the very least we would need something similar to what Jeff recently sent
> >> for the push case, with objects quarantined and then made available in one go?
> >
> > I don't think so. The object database is perfectly happy with multiple
> > simultaneous writers, and nothing impacts the have/wants until actual
> > refs are written. Quarantining objects before the refs are written is an
> > orthogonal concept.
> 
> If a remote advertises its tips, we'd need to look these up (client-side) to
> decide if we have them, and I do not think we'd do that via a reachability
> check, but via a direct lookup in the object database? So I do not quite
> understand what we gain from the atomic ref writes in e.g. refs/remotes/origin/.

It's been a while since I've dug into the fetch protocol. But I think we
cover the "do we have the objects already" check via quickfetch(), which
does do a reachability check. And then we advertise our "have" commits
by walking backwards from our ref tips, so everything there is
reachable.

Anything else would be questionable, especially under older versions of
git, as we promise only to have a complete graph for objects reachable
from the refs. Older versions of git would happily truncate unreachable
history based on the 2-week prune expiration period.

> > I'm not altogether convinced that parallel fetch would be that much
> > faster, though.
> 
> Ok, time to present data... Let's assume a degenerate case first:
> "up-to-date with all remotes" because that is easy to reproduce.
> 
> I have 14 remotes currently:
> 
> $ time git fetch --all
> real 0m18.016s
> user 0m2.027s
> sys 0m1.235s
> 
> $ time git config --get-regexp remote.*.url | awk '{print $2}' |
>     xargs -P 14 -I % git fetch %
> real 0m5.168s
> user 0m2.312s
> sys 0m1.167s

So first, thank you (and Ævar) for providing real numbers. It's clear
that I was talking nonsense.

Second, I wonder where all that time is going. Clearly there's an
end-to-end latency issue, but I'm not sure where it is. Is it startup
time for git-fetch? Is it in getting and processing the ref
advertisement from the other side? What I'm wondering is if there are
opportunities to speed up the serial case (but nobody really cared
before because it doesn't matter unless you're doing 14 of them back to
back).
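
One rough way to split that up (just a sketch):

  $ time git ls-remote origin >/dev/null   # connection + ref advertisement only
  $ time git fetch origin                  # the above, plus deciding what to
                                           # fetch and updating refs

If the two times are close, the connection and advertisement dominate.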

> > I usually just do a one-off fetch of their URL in such a case, exactly
> > because I _don't_ want to end up with a bunch of remotes. You can also
> > mark them with skipDefaultUpdate if you only care about them
> > occasionally (so you can "git fetch sbeller" when you care about it, but
> > it doesn't slow down your daily "git fetch").
> 
> And I assume you don't want the remotes because it takes time to fetch and not
> because your disk space is expensive. ;)

That, and it clogs the ref namespace. You can mostly ignore the extra
refs, but they show up in the "git checkout ..." DWIM, for example.

-Peff


* Re: Make `git fetch --all` parallel?
  2016-10-12  1:34           ` Jeff King
@ 2016-10-12  1:52             ` Jeff King
  2016-10-12  6:47               ` Stefan Beller
  0 siblings, 1 reply; 12+ messages in thread
From: Jeff King @ 2016-10-12  1:52 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Junio C Hamano, Ram Rachum, git@vger.kernel.org

On Tue, Oct 11, 2016 at 09:34:28PM -0400, Jeff King wrote:

> > Ok, time to present data... Let's assume a degenerate case first:
> > "up-to-date with all remotes" because that is easy to reproduce.
> > 
> > I have 14 remotes currently:
> > 
> > $ time git fetch --all
> > real 0m18.016s
> > user 0m2.027s
> > sys 0m1.235s
> > 
> > $ time git config --get-regexp remote.*.url | awk '{print $2}' |
> >     xargs -P 14 -I % git fetch %
> > real 0m5.168s
> > user 0m2.312s
> > sys 0m1.167s
> 
> So first, thank you (and Ævar) for providing real numbers. It's clear
> that I was talking nonsense.
> 
> Second, I wonder where all that time is going. Clearly there's an
> end-to-end latency issue, but I'm not sure where it is. Is it startup
> time for git-fetch? Is it in getting and processing the ref
> advertisement from the other side? What I'm wondering is if there are
> opportunities to speed up the serial case (but nobody really cared
> before because it doesn't matter unless you're doing 14 of them back to
> back).

Hmm. I think it really might be just network latency. Here's my fetch
time:

  $ git config remote.origin.url
  git://github.com/gitster/git.git

  $ time git fetch origin
  real    0m0.183s
  user    0m0.072s
  sys     0m0.008s

14 of those in a row shouldn't take more than about 2.5 seconds, which
is still twice as fast as your parallel case. So what's going on?

One is that I live about a hundred miles from GitHub's data center, and
my ping time there is ~13ms. The other side of the country, let alone
Europe, is going to be noticeably slower just for the TCP handshake.

The second is that git:// is really cheap and simple. git-over-ssh is
over twice as slow:

  $ time git fetch git@github.com:gitster/git
  ...
  real    0m0.432s
  user    0m0.100s
  sys     0m0.032s

HTTP fares better than I would have thought, but is also slower:

  $ time git fetch https://github.com/gitster/git
  ...
  real    0m0.258s
  user    0m0.080s
  sys     0m0.032s

-Peff


* Re: Make `git fetch --all` parallel?
  2016-10-12  1:52             ` Jeff King
@ 2016-10-12  6:47               ` Stefan Beller
  0 siblings, 0 replies; 12+ messages in thread
From: Stefan Beller @ 2016-10-12  6:47 UTC (permalink / raw)
  To: Jeff King; +Cc: Junio C Hamano, Ram Rachum, git@vger.kernel.org

On Tue, Oct 11, 2016 at 6:52 PM, Jeff King <peff@peff.net> wrote:
> On Tue, Oct 11, 2016 at 09:34:28PM -0400, Jeff King wrote:
>
>> > Ok, time to present data... Let's assume a degenerate case first:
>> > "up-to-date with all remotes" because that is easy to reproduce.
>> >
>> > I have 14 remotes currently:
>> >
>> > $ time git fetch --all
>> > real 0m18.016s
>> > user 0m2.027s
>> > sys 0m1.235s
>> >
>> > $ time git config --get-regexp remote.*.url | awk '{print $2}' |
>> >     xargs -P 14 -I % git fetch %
>> > real 0m5.168s
>> > user 0m2.312s
>> > sys 0m1.167s
>>
>> So first, thank you (and Ævar) for providing real numbers. It's clear
>> that I was talking nonsense.
>>
>> Second, I wonder where all that time is going. Clearly there's an
>> end-to-end latency issue, but I'm not sure where it is. Is it startup
>> time for git-fetch? Is it in getting and processing the ref
>> advertisement from the other side? What I'm wondering is if there are
>> opportunities to speed up the serial case (but nobody really cared
>> before because it doesn't matter unless you're doing 14 of them back to
>> back).
>
> Hmm. I think it really might be just network latency. Here's my fetch
> time:
>
>   $ git config remote.origin.url
>   git://github.com/gitster/git.git
>
>   $ time git fetch origin
>   real    0m0.183s
>   user    0m0.072s
>   sys     0m0.008s
>
> 14 of those in a row shouldn't take more than about 2.5 seconds, which
> is still twice as fast as your parallel case. So what's going on?
>
> One is that I live about a hundred miles from GitHub's data center, and
> my ping time there is ~13ms. The other side of the country, let alone
> Europe, is going to be noticeably slower just for the TCP handshake.
>
> The second is that git:// is really cheap and simple. git-over-ssh is
> over twice as slow:
>
>   $ time git fetch git@github.com:gitster/git
>   ...
>   real    0m0.432s
>   user    0m0.100s
>   sys     0m0.032s
>
> HTTP fares better than I would have thought, but is also slower:
>
>   $ time git fetch https://github.com/gitster/git
>   ...
>   real    0m0.258s
>   user    0m0.080s
>   sys     0m0.032s
>
> -Peff

Well, 9/14 are https for me; the rest are git://.
Also 9/14 (but a different set) are GitHub; the rest are
either internal or kernel.org.

Fetching from GitHub (https) takes only 0.9s from here
(SF bay area; I'm not in Europe any more ;) )

I would have expected a speedup
of roughly 2, plus latency gains. Factor 2 because
in the current state of affairs either the client or the
remote is working, i.e. the other side is idle/waiting, so
factor 2 seemed reasonable (and of course the latency), so I
was a bit surprised to see a higher yield.

