git@vger.kernel.org mailing list mirror (one of many)
* Re: 0 bot for Git
       [not found] <CAGZ79kYWGFN1W0_y72-V6M3n4WLgtLPzs22bWgs1ObCCDt5BfQ@mail.gmail.com>
@ 2016-04-12  4:29 ` Stefan Beller
  2016-04-12  6:41   ` Greg KH
                     ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: Stefan Beller @ 2016-04-12  4:29 UTC (permalink / raw)
  To: Greg KH, git@vger.kernel.org

Resending as plain text. (I need to tame my mobile)

On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
> Hi Greg,
>
> Thanks for your talk at the Git Merge 2016!
> The Git community uses the same workflow as the kernel. So we may be
> interested in the 0 bot which could compile and test each patch on the list.
> Could you put us in touch with the authors/maintainers of said tool?
>
> Unlike the kernel we would not need hardware testing and we're low traffic
> compared to the kernel, which would make it easier to set it up.
>
> A healthier Git would help the kernel long term as well.
>
> Thanks,
> Stefan


* Re: 0 bot for Git
  2016-04-12  4:29 ` 0 bot for Git Stefan Beller
@ 2016-04-12  6:41   ` Greg KH
  2016-04-12  7:23   ` Matthieu Moy
  2016-04-12  9:42   ` Duy Nguyen
  2 siblings, 0 replies; 41+ messages in thread
From: Greg KH @ 2016-04-12  6:41 UTC (permalink / raw)
  To: Stefan Beller; +Cc: git@vger.kernel.org

On Mon, Apr 11, 2016 at 09:29:59PM -0700, Stefan Beller wrote:
> Resending as plain text. (I need to tame my mobile)
> 
> On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
> > Hi Greg,
> >
> > Thanks for your talk at the Git Merge 2016!
> > The Git community uses the same workflow as the kernel. So we may be
> > interested in the 0 bot which could compile and test each patch on the list.
> > Could you put us in touch with the authors/maintainers of said tool?
> >
> > Unlike the kernel we would not need hardware testing and we're low traffic
> > compared to the kernel, which would make it easier to set it up.

We don't get much, if any, real hardware testing from the 0-day bot;
it's just lots and lots of builds and static testing tools.

You can reach the developers of it at:
	kbuild test robot <lkp@intel.com>

Hope this helps,

greg k-h


* Re: 0 bot for Git
  2016-04-12  4:29 ` 0 bot for Git Stefan Beller
  2016-04-12  6:41   ` Greg KH
@ 2016-04-12  7:23   ` Matthieu Moy
  2016-04-12 14:52     ` Stefan Beller
  2016-04-12  9:42   ` Duy Nguyen
  2 siblings, 1 reply; 41+ messages in thread
From: Matthieu Moy @ 2016-04-12  7:23 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Greg KH, git@vger.kernel.org

Stefan Beller <sbeller@google.com> writes:

> Hi Greg,
>
> Thanks for your talk at the Git Merge 2016!
> The Git community uses the same workflow as the kernel. So we may be
> interested in the 0 bot which could compile and test each patch on the list.

In the case of Git, we already have Travis-CI that can do rather
thorough testing automatically (run the complete testsuite on a clean
machine for several configurations). You get the benefit from it only if
you use GitHub pull-requests today. It would be interesting to have a
bot watch the list, apply patches and push to a travis-enabled fork of
git.git on GitHub to get the same benefit when posting emails directly
to the list.
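
Such a bridge does not need to be big. A rough, untested sketch in Python of
the "apply the series and push it to a Travis-enabled fork" step (the fork
URL and the branch naming are placeholders, not an existing tool; the mbox
file is assumed to have been fetched from the list by some other means):

#!/usr/bin/env python3
# Sketch: apply a patch series (one mbox file) on top of origin/master and
# push the result as a new branch to a Travis-enabled fork of git.git.
import hashlib
import subprocess
import sys

FORK_URL = "git@github.com:example/git.git"   # placeholder Travis-enabled fork

def run(*args):
    subprocess.run(args, check=True)

def push_series(mbox_path):
    # Name the branch after the series content so a re-rolled series gets
    # a fresh branch (and a fresh Travis build).
    digest = hashlib.sha1(open(mbox_path, "rb").read()).hexdigest()[:12]
    branch = "series/" + digest
    run("git", "fetch", "origin")
    run("git", "checkout", "-B", branch, "origin/master")
    try:
        run("git", "am", "--3way", mbox_path)
    except subprocess.CalledProcessError:
        subprocess.run(["git", "am", "--abort"])
        return None
    run("git", "push", FORK_URL, branch)
    return branch

if __name__ == "__main__":
    print(push_series(sys.argv[1]))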

-- 
Matthieu Moy
http://www-verimag.imag.fr/~moy/


* Re: 0 bot for Git
  2016-04-12  4:29 ` 0 bot for Git Stefan Beller
  2016-04-12  6:41   ` Greg KH
  2016-04-12  7:23   ` Matthieu Moy
@ 2016-04-12  9:42   ` Duy Nguyen
  2016-04-12 14:59     ` Stefan Beller
  2 siblings, 1 reply; 41+ messages in thread
From: Duy Nguyen @ 2016-04-12  9:42 UTC (permalink / raw)
  To: Stefan Beller; +Cc: git@vger.kernel.org

> On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
>> Hi Greg,
>>
>> Thanks for your talk at the Git Merge 2016!

Huh? It already happened?? Any interesting summary to share with us?
-- 
Duy


* Re: 0 bot for Git
  2016-04-12  7:23   ` Matthieu Moy
@ 2016-04-12 14:52     ` Stefan Beller
  2016-04-12 15:15       ` Philip Li
  2016-04-12 20:29       ` Matthieu Moy
  0 siblings, 2 replies; 41+ messages in thread
From: Stefan Beller @ 2016-04-12 14:52 UTC (permalink / raw)
  To: Matthieu Moy, lkp; +Cc: Greg KH, git@vger.kernel.org

On Tue, Apr 12, 2016 at 12:23 AM, Matthieu Moy
<Matthieu.Moy@grenoble-inp.fr> wrote:
> Stefan Beller <sbeller@google.com> writes:
>
>> Hi Greg,
>>
>> Thanks for your talk at the Git Merge 2016!
>> The Git community uses the same workflow as the kernel. So we may be
>> interested in the 0 bot which could compile and test each patch on the list.
>
> In the case of Git, we already have Travis-CI that can do rather
> thorough testing automatically (run the complete testsuite on a clean
> machine for several configurations). You get the benefit from it only if
> you use GitHub pull-requests today.

But who uses that? (Not a lot of old-timers here, that's for sure)

> It would be interesting to have a
> bot watch the list, apply patches and push to a travis-enabled fork of
> git.git on GitHub to get the same benefit when posting emails directly
> to the list.

That is better (and probably more work) than what I had in mind.
IIUC the 0 bot can grab a patch from a mailing list and apply it to a
base (either the real base as encoded in the patch or a best guess)
and then run "make".

At least that's how I understand the kernel setup. So my naive thought
is that the 0 bot maintainer "only" needs to add another mailing list
to the watch list of the 0 bot?
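
(For the "apply and build" part alone, the per-patch check could be as small
as the following naive sketch, assuming the patch is available as an mbox
file and a base has already been picked; "make test" is git's own test suite
target.)

# Naive per-patch check: apply on a chosen base, build, run the test suite.
import subprocess

def build_check(mbox_path, base="origin/master"):
    def git(*args):
        return subprocess.run(("git",) + args).returncode
    git("checkout", "--detach", base)
    if git("am", "--3way", mbox_path) != 0:
        git("am", "--abort")
        return "does not apply on " + base
    if subprocess.run(["make", "-j8"]).returncode != 0:
        return "build failure"
    if subprocess.run(["make", "test"]).returncode != 0:
        return "test failure"
    return "ok"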

Stefan

>
> --
> Matthieu Moy
> http://www-verimag.imag.fr/~moy/


* Re: 0 bot for Git
  2016-04-12  9:42   ` Duy Nguyen
@ 2016-04-12 14:59     ` Stefan Beller
  2016-04-14 22:04       ` Christian Couder
  0 siblings, 1 reply; 41+ messages in thread
From: Stefan Beller @ 2016-04-12 14:59 UTC (permalink / raw)
  To: Duy Nguyen, git@vger.kernel.org

On Tue, Apr 12, 2016 at 2:42 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>> On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
>>> Hi Greg,
>>>
>>> Thanks for your talk at the Git Merge 2016!
>
> Huh? It already happened?? Any interesting summary to share with us?

Summary from the contributors summit:
* encoding was a huge discussion point:
  If you use Git on a case sensitive file system, you may have
  branches "foo" and "FOO" and it just works. Once you switch over
  to another file system this may break horribly.

  The discussion revealed lots more of these points in Git, and then sparked
  a discussion on fundamentals: whether we want to go by the lowest common
  denominator, treat some systems specially, or keep things as they are, but
  less broken.

* We still don't know how to handle large repositories
  (I am working on submodules but that is too vague and does not solve
  the actual problem; people really want
    * large files
    * large trees
   [* lots of commits (i.e. lots of objects)]
   in one repo and it should work just as fast.)

That was my main take away.

Stefan

> --
> Duy


* Re: 0 bot for Git
  2016-04-12 14:52     ` Stefan Beller
@ 2016-04-12 15:15       ` Philip Li
  2016-04-12 20:29       ` Matthieu Moy
  1 sibling, 0 replies; 41+ messages in thread
From: Philip Li @ 2016-04-12 15:15 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Matthieu Moy, lkp, Greg KH, git@vger.kernel.org

On Tue, Apr 12, 2016 at 07:52:02AM -0700, Stefan Beller wrote:
> On Tue, Apr 12, 2016 at 12:23 AM, Matthieu Moy
> <Matthieu.Moy@grenoble-inp.fr> wrote:
> > Stefan Beller <sbeller@google.com> writes:
> >
> >> Hi Greg,
> >>
> >> Thanks for your talk at the Git Merge 2016!
> >> The Git community uses the same workflow as the kernel. So we may be
> >> interested in the 0 bot which could compile and test each patch on the list.
> >
> > In the case of Git, we already have Travis-CI that can do rather
> > thorough testing automatically (run the complete testsuite on a clean
> > machine for several configurations). You get the benefit from it only if
> > you use GitHub pull-requests today.
> 
> But who uses that? (Not a lot of old-timers here, that's for sure)
> 
> > It would be interesting to have a
> > bot watch the list, apply patches and push to a travis-enabled fork of
> > git.git on GitHub to get the same benefit when posting emails directly
> > to the list.
> 
> That is better (and probably more work) than what I had in mind.
> IIUC the 0 bot can grab a patch from a mailing list and apply it to a
> base (either the real base as encoded in the patch or a best guess)
> and then run "make".
> 
> At least that's how I understand the kernel setup. So my naive thought
> is that the 0 bot maintainer "only" needs to add another mailing list
> to the watch list of the 0 bot?

Yes, this is feasible in 0 bot, though it requires some refactoring of the
current mailing list logic, which is focused on the Linux kernel: for
example, it uses guesswork, following a set of rules, to find out which
maintainer tree to apply a patch on. Once the current proposal to add extra
info to git-format-patch is accepted, this can become smoother.
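
(If that extra info ends up being something like a "base-commit:" trailer in
the mail body -- the exact name and format are still under discussion, so
treat it as an assumption -- picking the base becomes a trivial parsing job,
e.g.:)

import email
import re

def guess_base(mbox_path):
    # Look for an assumed "base-commit: <sha1>" trailer in a simple,
    # non-multipart patch mail; return None when it is absent.
    with open(mbox_path, "rb") as f:
        msg = email.message_from_binary_file(f)
    body = msg.get_payload(decode=True).decode("utf-8", errors="replace")
    m = re.search(r"^base-commit: ([0-9a-f]{40})$", body, re.MULTILINE)
    return m.group(1) if m else None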

> 
> Stefan
> 
> >
> > --
> > Matthieu Moy
> > http://www-verimag.imag.fr/~moy/
> 


* Re: 0 bot for Git
  2016-04-12 14:52     ` Stefan Beller
  2016-04-12 15:15       ` Philip Li
@ 2016-04-12 20:29       ` Matthieu Moy
  2016-04-12 20:49         ` Junio C Hamano
  1 sibling, 1 reply; 41+ messages in thread
From: Matthieu Moy @ 2016-04-12 20:29 UTC (permalink / raw)
  To: Stefan Beller; +Cc: lkp, Greg KH, git@vger.kernel.org

Stefan Beller <sbeller@google.com> writes:

> On Tue, Apr 12, 2016 at 12:23 AM, Matthieu Moy
> <Matthieu.Moy@grenoble-inp.fr> wrote:
>> Stefan Beller <sbeller@google.com> writes:
>>
>>> Hi Greg,
>>>
>>> Thanks for your talk at the Git Merge 2016!
>>> The Git community uses the same workflow as the kernel. So we may be
>>> interested in the 0 bot which could compile and test each patch on the list.
>>
>> In the case of Git, we already have Travis-CI that can do rather
>> thorough testing automatically (run the complete testsuite on a clean
>> machine for several configurations). You get the benefit from it only if
>> you use GitHub pull-requests today.
>
> But who uses that? (Not a lot of old-timers here, that's for sure)

Not many people clearly. I sometimes do, but SubmitGit as it is today
doesn't (yet?) beat "git send-email" for me. In a perfect world where I
could just ask SubmitGit "please wait for Travis to complete, if it
passes then send to the list, otherwise email me", I would use it more.
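
Something like the following would already be enough for me (a sketch only:
STATUS_URL stands for whatever per-branch JSON status endpoint the CI
exposes, and the "state" field is an assumption, not a documented API):

import json
import subprocess
import time
import urllib.request

STATUS_URL = "https://ci.example.org/status/%s"   # hypothetical endpoint

def send_when_green(branch, patches, to="git@vger.kernel.org"):
    # Poll the CI status for the branch; send the series with git send-email
    # only when it passes, otherwise report the failure locally.
    while True:
        with urllib.request.urlopen(STATUS_URL % branch) as r:
            state = json.load(r).get("state")   # assumed: passed/failed/pending
        if state == "passed":
            subprocess.run(["git", "send-email", "--to", to] + patches,
                           check=True)
            return True
        if state == "failed":
            print("CI failed for %s; not sending" % branch)
            return False
        time.sleep(60)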

But my point wasn't to say "we already have everything we need", but
rather "we already have part of the solution, so an ideal complete
solution could integrate with it".

>> It would be interesting to have a
>> bot watch the list, apply patches and push to a travis-enabled fork of
>> git.git on GitHub to get the same benefit when posting emails directly
>> to the list.
>
> That is better (and probably more work) than what I had in mind.
> IIUC the 0 bot can grab a patch from a mailing list and apply it to a
> base (either the real base as encoded in the patch or a best guess)
> and then run "make".

I don't know how 0 bot solves this, but the obvious issue with this
approach is how to deal with someone sending a patch like

+++ Makefile
--- Makefile
+all:
+	rm -fr $(HOME); sudo rm -fr /

to the list. One thing that Travis gives us for free is isolation:
malicious code in the build cannot break the bot, only the build itself.
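
(For completeness: if someone wanted to run the builds outside Travis, the
same kind of isolation can be approximated by building in a throwaway
container with no network access. A sketch, assuming Docker is available;
the image name is a placeholder for something with git's build dependencies:)

import subprocess

def sandboxed_build(worktree):
    # Disposable container: read-only source mount, no network, memory cap,
    # nothing shared with the bot that launched it.
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",
        "--memory", "2g",
        "--volume", worktree + ":/src:ro",
        "gitbuild:latest",                 # placeholder build image
        "sh", "-c", "cp -a /src /build && cd /build && make -j4 && make test",
    ]
    return subprocess.run(cmd).returncode == 0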

-- 
Matthieu Moy
http://www-verimag.imag.fr/~moy/


* Re: 0 bot for Git
  2016-04-12 20:29       ` Matthieu Moy
@ 2016-04-12 20:49         ` Junio C Hamano
  2016-04-13  5:43           ` Matthieu Moy
                             ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: Junio C Hamano @ 2016-04-12 20:49 UTC (permalink / raw)
  To: Matthieu Moy; +Cc: Stefan Beller, lkp, Greg KH, git@vger.kernel.org

Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:

> But my point wasn't to say "we already have everything we need", but
> rather "we already have part of the solution, so an ideal complete
> solution could integrate with it".

Yes.  That is a good direction to go.

They may already have part of the solution, and their half may be
better than what we have, in which case we may want to switch, but
if what we have already works well there is no need to.

> I don't know how 0 bot solves this, but the obvious issue with this
> approach is how to deal with someone sending a patch like
>
> +++ Makefile
> --- Makefile
> +all:
> +	rm -fr $(HOME); sudo rm -fr /
>
> to the list. One thing that Travis gives us for free is isolation:
> malicious code in the build cannot break the bot, only the build
> itself.

True, presumably the Travis integration already solves that part, so
I suspect it is just the matter of setting up:

 - a fork of git.git and have Travis monitor any and all new
   branches;

 - a bot that scans the list traffic, applies each series it sees to
   a branch dedicated for that series and pushes to the above fork.

isn't it?


* Re: 0 bot for Git
  2016-04-12 20:49         ` Junio C Hamano
@ 2016-04-13  5:43           ` Matthieu Moy
  2016-04-13 12:16             ` Lars Schneider
  2016-04-13  6:11           ` Lars Schneider
  2016-04-13 13:44           ` Fengguang Wu
  2 siblings, 1 reply; 41+ messages in thread
From: Matthieu Moy @ 2016-04-13  5:43 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Stefan Beller, lkp, Greg KH, git@vger.kernel.org

Junio C Hamano <gitster@pobox.com> writes:

> Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:
>
> True, presumably the Travis integration already solves that part, so
> I suspect it is just the matter of setting up:
>
>  - a fork of git.git and have Travis monitor any and all new
>    branches;
>
>  - a bot that scans the list traffic, applies each series it sees to
>    a branch dedicated for that series and pushes to the above fork.

... and to make it really useful: a way to get a notification email sent
on-list or at least to the submitter as a reply to the patch series.
Just having a web interface somewhere that knows how broken the code is
would not be that useful.
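
The threading part is cheap as long as the bot kept the Message-Id of the
cover letter: the report only needs In-Reply-To/References headers. A small
sketch with Python's standard library (the SMTP relay is a placeholder):

import smtplib
from email.message import EmailMessage

def send_report(orig_msgid, subject, body,
                to="git@vger.kernel.org", sender="ci-bot@example.org"):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = to
    msg["Subject"] = "Re: " + subject
    # These two headers are what make mailers show the report inside the
    # original patch thread instead of as a new top-level message.
    msg["In-Reply-To"] = orig_msgid
    msg["References"] = orig_msgid
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.org") as s:   # placeholder relay
        s.send_message(msg)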

-- 
Matthieu Moy
http://www-verimag.imag.fr/~moy/


* Re: 0 bot for Git
  2016-04-12 20:49         ` Junio C Hamano
  2016-04-13  5:43           ` Matthieu Moy
@ 2016-04-13  6:11           ` Lars Schneider
  2016-04-13 16:27             ` Junio C Hamano
  2016-04-13 13:44           ` Fengguang Wu
  2 siblings, 1 reply; 41+ messages in thread
From: Lars Schneider @ 2016-04-13  6:11 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Matthieu Moy, Stefan Beller, lkp, Greg KH, git@vger.kernel.org


On 12 Apr 2016, at 22:49, Junio C Hamano <gitster@pobox.com> wrote:

> Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:
> 
>> But my point wasn't to say "we already have everything we need", but
>> rather "we already have part of the solution, so an ideal complete
>> solution could integrate with it".
> 
> Yes.  That is a good direction to go.
> 
> They may already have part of the solution, and their half may be
> better than what we have, in which case we may want to switch, but
> if what we have already works well there is no need to.
> 
>> I don't know how 0 bot solves this, but the obvious issue with this
>> approach is how to deal with someone sending a patch like
>> 
>> +++ Makefile
>> --- Makefile
>> +all:
>> +	rm -fr $(HOME); sudo rm -fr /
>> 
>> to the list. One thing that Travis gives us for free is isolation:
>> malicious code in the build cannot break the bot, only the build
>> itself.
> 
> True, presumably the Travis integration already solves that part, so
> I suspect it is just the matter of setting up:
> 
> - a fork of git.git and have Travis monitor any and all new
>   branches;
> 
> - a bot that scans the list traffic, applies each series it sees to
>   a branch dedicated for that series and pushes to the above fork.
> 
> isn't it?

Mailing list users can already use Travis CI to check their patches
prior to sending them. I just posted a patch with setup instructions
(see $gmane/291371).

@Junio:
If you setup Travis CI for your https://github.com/gitster/git fork
then Travis CI would build all your topic branches and you (and 
everyone who is interested) could check 
https://travis-ci.org/gitster/git/branches to see which branches 
will break pu if you integrate them.

I talked to Josh Kalderimis from Travis CI and he told me the load
wouldn't be a problem for Travis CI at all.

- Lars


* Re: 0 bot for Git
  2016-04-13  5:43           ` Matthieu Moy
@ 2016-04-13 12:16             ` Lars Schneider
  2016-04-13 12:30               ` Matthieu Moy
  0 siblings, 1 reply; 41+ messages in thread
From: Lars Schneider @ 2016-04-13 12:16 UTC (permalink / raw)
  To: Matthieu Moy
  Cc: Junio C Hamano, Stefan Beller, lkp, Greg KH, git@vger.kernel.org


> On 13 Apr 2016, at 07:43, Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> wrote:
> 
> Junio C Hamano <gitster@pobox.com> writes:
> 
>> Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:
>> 
>> True, presumably the Travis integration already solves that part, so
>> I suspect it is just the matter of setting up:
>> 
>> - a fork of git.git and have Travis monitor any and all new
>>   branches;
>> 
>> - a bot that scans the list traffic, applies each series it sees to
>>   a branch dedicated for that series and pushes to the above fork.
> 
> ... and to make it really useful: a way to get a notification email sent
> on-list or at least to the submitter as a reply to the patch series.
> Just having a web interface somewhere that knows how broken the code is
> would not be that useful.

Travis CI could do this but I intentionally disabled it to not annoy anyone.
It would be easy to enable it here:
https://github.com/git/git/blob/7b0d47b3b6b5b64e02a5aa06b0452cadcdb18355/.travis.yml#L98-L99

Cheers,
Lars


* Re: 0 bot for Git
  2016-04-13 12:16             ` Lars Schneider
@ 2016-04-13 12:30               ` Matthieu Moy
  2016-04-13 16:14                 ` Lars Schneider
  2016-04-13 16:15                 ` Junio C Hamano
  0 siblings, 2 replies; 41+ messages in thread
From: Matthieu Moy @ 2016-04-13 12:30 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Junio C Hamano, Stefan Beller, lkp, Greg KH, git@vger.kernel.org

Lars Schneider <larsxschneider@gmail.com> writes:

>> On 13 Apr 2016, at 07:43, Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> wrote:
>> 
>> Junio C Hamano <gitster@pobox.com> writes:
>> 
>>> Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:
>>> 
>>> True, presumably the Travis integration already solves that part, so
>>> I suspect it is just the matter of setting up:
>>> 
>>> - a fork of git.git and have Travis monitor any and all new
>>>   branches;
>>> 
>>> - a bot that scans the list traffic, applies each series it sees to
>>>   a branch dedicated for that series and pushes to the above fork.
>> 
>> ... and to make it really useful: a way to get a notification email sent
>> on-list or at least to the submitter as a reply to the patch series.
>> Just having a web interface somewhere that knows how broken the code is
>> would not be that useful.
>
> Travis CI could do this but I intentionally disabled it to not annoy anyone.
> It would be easy to enable it here:
> https://github.com/git/git/blob/7b0d47b3b6b5b64e02a5aa06b0452cadcdb18355/.travis.yml#L98-L99

The missing part would be "as a reply to the patch series". When I start
reviewing a series, if the patch is broken and the CI system already
knows, I'd rather have the information attached in the same thread right
inside my mailer.

-- 
Matthieu Moy
http://www-verimag.imag.fr/~moy/


* Re: 0 bot for Git
  2016-04-12 20:49         ` Junio C Hamano
  2016-04-13  5:43           ` Matthieu Moy
  2016-04-13  6:11           ` Lars Schneider
@ 2016-04-13 13:44           ` Fengguang Wu
  2 siblings, 0 replies; 41+ messages in thread
From: Fengguang Wu @ 2016-04-13 13:44 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Matthieu Moy, Stefan Beller, Greg KH, git@vger.kernel.org,
	Philip Li

> > I don't know how 0 bot solves this, but the obvious issue with this
> > approach is how to deal with someone sending a patch like
> >
> > +++ Makefile
> > --- Makefile
> > +all:
> > +	rm -fr $(HOME); sudo rm -fr /
> >
> > to the list. One thing that Travis gives us for free is isolation:
> > malicious code in the build cannot break the bot, only the build
> > itself.

Sure, isolation is a must have for public test services like Travis or
0day. We optimize the 0day infrastructure for good behaviors and also
have ways to isolate malicious ones.

> True, presumably the Travis integration already solves that part, so
> I suspect it is just the matter of setting up:
> 
>  - a fork of git.git and have Travis monitor any and all new
>    branches;
> 
>  - a bot that scans the list traffic, applies each series it sees to
>    a branch dedicated for that series and pushes to the above fork.
> 
> isn't it?

Right. 0day bot could automatically maintain a patch-representing git tree
for Travis to monitor and test, as we already do for the linux kernel
project, creating one git branch per patchset posted to the lists:

        https://github.com/0day-ci/linux/branches

In principle the git project should have simpler rules for deciding
"which base should the robot apply a patch to". But we do need some
hints about the git community's rules in order to start the work.
Without such hints from the community, we may start with dumb rules
like "apply to latest origin/master" or "apply to latest release tag".

Thanks,
Fengguang


* Re: 0 bot for Git
  2016-04-13 12:30               ` Matthieu Moy
@ 2016-04-13 16:14                 ` Lars Schneider
  2016-04-13 16:15                 ` Junio C Hamano
  1 sibling, 0 replies; 41+ messages in thread
From: Lars Schneider @ 2016-04-13 16:14 UTC (permalink / raw)
  To: Matthieu Moy
  Cc: Junio C Hamano, Stefan Beller, lkp, Greg KH, git@vger.kernel.org


> On 13 Apr 2016, at 14:30, Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> wrote:
> 
> Lars Schneider <larsxschneider@gmail.com> writes:
> 
>>> On 13 Apr 2016, at 07:43, Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> wrote:
>>> 
>>> Junio C Hamano <gitster@pobox.com> writes:
>>> 
>>>> Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:
>>>> 
>>>> True, presumably the Travis integration already solves that part, so
>>>> I suspect it is just the matter of setting up:
>>>> 
>>>> - a fork of git.git and have Travis monitor any and all new
>>>>  branches;
>>>> 
>>>> - a bot that scans the list traffic, applies each series it sees to
>>>>  a branch dedicated for that series and pushes to the above fork.
>>> 
>>> ... and to make it really useful: a way to get a notification email sent
>>> on-list or at least to the submitter as a reply to the patch series.
>>> Just having a web interface somewhere that knows how broken the code is
>>> would not be that useful.
>> 
>> Travis CI could do this but I intentionally disabled it to not annoy anyone.
>> It would be easy to enable it here:
>> https://github.com/git/git/blob/7b0d47b3b6b5b64e02a5aa06b0452cadcdb18355/.travis.yml#L98-L99
> 
> The missing part would be "as a reply to the patch series". When I start
> reviewing a series, if the patch is broken and the CI system already
> knows, I'd rather have the information attached in the same thread right
> inside my mailer.
I see. How would the automation know where the email patch needs to be applied?

- Lars


* Re: 0 bot for Git
  2016-04-13 12:30               ` Matthieu Moy
  2016-04-13 16:14                 ` Lars Schneider
@ 2016-04-13 16:15                 ` Junio C Hamano
  1 sibling, 0 replies; 41+ messages in thread
From: Junio C Hamano @ 2016-04-13 16:15 UTC (permalink / raw)
  To: Matthieu Moy
  Cc: Lars Schneider, Stefan Beller, lkp, Greg KH, git@vger.kernel.org

Matthieu Moy <Matthieu.Moy@grenoble-inp.fr> writes:

>>>> True, presumably the Travis integration already solves that part, so
>>>> I suspect it is just the matter of setting up:
>>>> 
>>>> - a fork of git.git and have Travis monitor any and all new
>>>>   branches;
>>>> 
>>>> - a bot that scans the list traffic, applies each series it sees to
>>>>   a branch dedicated for that series and pushes to the above fork.
>>> 
>>> ... and to make it really useful: a way to get a notification email sent
>>> on-list or at least to the submitter as a reply to the patch series.
>> Travis CI could do this ...
> The missing part would be "as a reply to the patch series". When I start
> reviewing a series, if the patch is broken and the CI system already
> knows, I'd rather have the information attached in the same thread right
> inside my mailer.

Yeah, such a message thrown randomly at the list would be too noisy
to be useful, but if it is sent to a specific thread as a response,
it would grab attention of those who are interested in the series,
which is exactly what we want.

So with what you added, the list of what is needed is now:

 - a fork of git.git and have Travis monitor any and all new
   branches;
 
 - a bot that scans the list traffic, applies each series it sees to
   a branch dedicated for that series and pushes to the above fork;

 - a bot (which can be the same as the above or a different one, as
   long as the former and the latter have a way to associate a topic
   branch and the original message) that receives success/failure
   notice from Travis, relays it as a response to the original patch
   on the list.
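
(One cheap way to keep that association is to encode the Message-Id into the
branch name itself, so the relaying bot can reconstruct the reply headers
from nothing but the branch name reported by Travis. A rough sketch; the
encoding is arbitrary:)

import base64

def branch_for(msgid):
    # Turn a Message-Id into a ref-safe branch name.
    token = base64.urlsafe_b64encode(msgid.encode()).decode().rstrip("=")
    return "series/" + token

def msgid_for(branch):
    # Recover the Message-Id from such a branch name.
    token = branch[len("series/"):]
    token += "=" * (-len(token) % 4)        # restore base64 padding
    return base64.urlsafe_b64decode(token.encode()).decode()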


* Re: 0 bot for Git
  2016-04-13  6:11           ` Lars Schneider
@ 2016-04-13 16:27             ` Junio C Hamano
  2016-04-13 17:09               ` Lars Schneider
  0 siblings, 1 reply; 41+ messages in thread
From: Junio C Hamano @ 2016-04-13 16:27 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Matthieu Moy, Stefan Beller, lkp, Greg KH, git@vger.kernel.org

Lars Schneider <larsxschneider@gmail.com> writes:

> @Junio:
> If you setup Travis CI for your https://github.com/gitster/git fork
> then Travis CI would build all your topic branches and you (and 
> everyone who is interested) could check 
> https://travis-ci.org/gitster/git/branches to see which branches 
> will break pu if you integrate them.

I would not say such an arrangement is worthless, but it targets a
wrong point in the patch flow.

The patches that result in the most wastage of my time (i.e. a
shared bottleneck resource the community should strive to optimize
for) are the ones that fail to hit 'pu'.  Ones that do not even
build in isolation, ones that may build but fail even the new tests
they bring in, ones that break existing tests, and ones that are OK
in isolation but do not play well with topics already in flight.

Automated testing of what is already on 'pu' does not help reduce
the above cost, as the culling must be done by me _without_ help
from automated test you propose to run on topics in 'pu'.  Ever
heard of chicken and egg?

Your "You can setup your own CI" update to SubmittingPatches may
encourage people to test before sending.  The "Travis CI sends
failure notice as a response to a crappy patch" discussed by
Matthieu in the other subthread will be of great help.

Thanks.


* Re: 0 bot for Git
  2016-04-13 16:27             ` Junio C Hamano
@ 2016-04-13 17:09               ` Lars Schneider
  2016-04-13 17:29                 ` Stefan Beller
  2016-04-13 17:47                 ` Junio C Hamano
  0 siblings, 2 replies; 41+ messages in thread
From: Lars Schneider @ 2016-04-13 17:09 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Matthieu Moy, Stefan Beller, lkp, Greg KH, git@vger.kernel.org


> On 13 Apr 2016, at 18:27, Junio C Hamano <gitster@pobox.com> wrote:
> 
> Lars Schneider <larsxschneider@gmail.com> writes:
> 
>> @Junio:
>> If you setup Travis CI for your https://github.com/gitster/git fork
>> then Travis CI would build all your topic branches and you (and 
>> everyone who is interested) could check 
>> https://travis-ci.org/gitster/git/branches to see which branches 
>> will break pu if you integrate them.
> 
> I would not say such an arrangement is worthless, but it targets a
> wrong point in the patch flow.
> 
> The patches that result in the most wastage of my time (i.e. a
> shared bottleneck resource the community should strive to optimize
> for) are the ones that fail to hit 'pu'.  Ones that do not even
> build in isolation, ones that may build but fail even the new tests
> they bring in, ones that break existing tests, and ones that are OK
> in isolation but do not play well with topics already in flight.

I am not sure what you mean by "fail to hit 'pu'". Maybe we talk at
cross purposes. Here is what I think you do, please correct me:

1.) You pick the topics from the mailing list and create feature 
    branches for each one of them. E.g. one of my recent topics 
    is "ls/config-origin".

2.) At some point you create a new pu branch based on the latest
    next branch. You merge all the new topics into the new pu.

If you push the topics to github.com/gitster after step 1 then
Travis CI could tell you if the individual topic builds clean 
and passes all tests. Then you could merge only clean topics in 
step 2 which would result in a pu that is much more likely to 
build clean.

Could that process avoid wasting your time with bad patches?

> Automated testing of what is already on 'pu' does not help reduce
> the above cost, as the culling must be done by me _without_ help
> from automated test you propose to run on topics in 'pu'.  Ever
> heard of chicken and egg?
> 
> Your "You can setup your own CI" update to SubmittingPatches may
> encourage people to test before sending.  The "Travis CI sends
> failure notice as a response to a crappy patch" discussed by
> Matthieu in the other subthread will be of great help.
> 
> Thanks.
> 


* Re: 0 bot for Git
  2016-04-13 17:09               ` Lars Schneider
@ 2016-04-13 17:29                 ` Stefan Beller
  2016-04-13 17:43                   ` Greg KH
  2016-04-16 15:51                   ` Lars Schneider
  2016-04-13 17:47                 ` Junio C Hamano
  1 sibling, 2 replies; 41+ messages in thread
From: Stefan Beller @ 2016-04-13 17:29 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Junio C Hamano, Matthieu Moy, lkp, Greg KH, git@vger.kernel.org

On Wed, Apr 13, 2016 at 10:09 AM, Lars Schneider
<larsxschneider@gmail.com> wrote:
>
>> On 13 Apr 2016, at 18:27, Junio C Hamano <gitster@pobox.com> wrote:
>>
>> Lars Schneider <larsxschneider@gmail.com> writes:
>>
>>> @Junio:
>>> If you setup Travis CI for your https://github.com/gitster/git fork
>>> then Travis CI would build all your topic branches and you (and
>>> everyone who is interested) could check
>>> https://travis-ci.org/gitster/git/branches to see which branches
>>> will break pu if you integrate them.
>>
>> I would not say such an arrangement is worthless, but it targets a
>> wrong point in the patch flow.
>>
>> The patches that result in the most wastage of my time (i.e. a
>> shared bottleneck resource the community should strive to optimize
>> for) are the ones that fail to hit 'pu'.  Ones that do not even
>> build in isolation, ones that may build but fail even the new tests
>> they bring in, ones that break existing tests, and ones that are OK
>> in isolation but do not play well with topics already in flight.
>
> I am not sure what you mean by "fail to hit 'pu'". Maybe we talk at
> cross purposes. Here is what I think you do, please correct me:
>
> 1.) You pick the topics from the mailing list and create feature
>     branches for each one of them. E.g. one of my recent topics
>     is "ls/config-origin".

and by You you mean Junio.

Ideally the 0bot would have sent the message as a reply to the
cover letter with the information "doesn't compile/breaks test t1234",
so Junio could ignore that series (no time wasted on his part).

At Git Merge Greg said (paraphrasing here):

  We waste developers time, because we have plenty of it. Maintainers time
  however is precious because maintainers are the bottleneck and a scare
  resource to come by.

And I think Git and the kernel have the same community design here.
(Except the kernel is bigger and has more than one maintainer)

So the idea is to help Junio make the decision to drop/ignore those patches
with as few brain cycles spent as possible (not even spending 5 seconds on
it).

>
> 2.) At some point you create a new pu branch based on the latest
>     next branch. You merge all the new topics into the new pu.

but Junio also runs tests after each(?) merge(?) of a series, and once
tests fail, it takes time to sort out what caused it. (Is it the patch series
alone, or is it because 2 series interact badly with each other?)
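
That distinction is something a bot could also make mechanically: build and
test the series on its own base first, then merged on top of pu, and only
blame an interaction when the standalone run is green. A rough sketch (ref
names are placeholders):

import subprocess

def classify(series_branch, pu="origin/pu"):
    # Returns "broken on its own", "breaks when combined with pu", or "clean".
    def green(ref):
        subprocess.run(["git", "checkout", "--detach", ref], check=True)
        return (subprocess.run(["make", "-j8"]).returncode == 0
                and subprocess.run(["make", "test"]).returncode == 0)

    if not green(series_branch):
        return "broken on its own"
    subprocess.run(["git", "checkout", "--detach", pu], check=True)
    merge = subprocess.run(["git", "merge", "--no-edit", series_branch])
    if merge.returncode != 0:
        subprocess.run(["git", "merge", "--abort"])
        return "breaks when combined with pu"
    return "clean" if green("HEAD") else "breaks when combined with pu"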

>
> If you push the topics to github.com/gitster after step 1 then
> Travis CI could tell you if the individual topic builds clean
> and passes all tests. Then you could merge only clean topics in
> step 2 which would result in a pu that is much more likely to
> build clean.

IIRC Junio did not like granting travis access to the "blessed" repository,
as travis wants so many permissions, including write permission to that
repo. (We/he could have a second, non-advertised repo though.)

Also this would incur wait time on Junio's side:

1) collect patches (many series over the day)
2) push
3) wait
4) do the merges

however, a 0 bot would:
1) collect patches faster than Junio (0 bot is a computer after all,
working 24/7)
2) test each patch/series individually
3) send feedback without the wait time, so the contributor from a different
   time zone gets feedback quickly. (The round trip is just the build and
   test, which the developer apparently forgot to run anyway if it fails.)

>
> Could that process avoid wasting your time with bad patches?
>
>> Automated testing of what is already on 'pu' does not help reduce
>> the above cost, as the culling must be done by me _without_ help
>> from automated test you propose to run on topics in 'pu'.  Ever
>> heard of chicken and egg?
>>
>> Your "You can setup your own CI" update to SubmittingPatches may
>> encourage people to test before sending.  The "Travis CI sends
>> failure notice as a response to a crappy patch" discussed by
>> Matthieu in the other subthread will be of great help.
>>
>> Thanks.
>>
>


* Re: 0 bot for Git
  2016-04-13 17:29                 ` Stefan Beller
@ 2016-04-13 17:43                   ` Greg KH
  2016-04-16 15:51                   ` Lars Schneider
  1 sibling, 0 replies; 41+ messages in thread
From: Greg KH @ 2016-04-13 17:43 UTC (permalink / raw)
  To: Stefan Beller
  Cc: Lars Schneider, Junio C Hamano, Matthieu Moy, lkp,
	git@vger.kernel.org

On Wed, Apr 13, 2016 at 10:29:57AM -0700, Stefan Beller wrote:
> 
> At Git Merge Greg said (paraphrasing here):
> 
>   We waste developers time, because we have plenty of it. Maintainers time
>   however is precious because maintainers are the bottleneck and a scare
>   resource to come by.

s/scare/scarce/

Although some people might disagree :)


* Re: 0 bot for Git
  2016-04-13 17:09               ` Lars Schneider
  2016-04-13 17:29                 ` Stefan Beller
@ 2016-04-13 17:47                 ` Junio C Hamano
  1 sibling, 0 replies; 41+ messages in thread
From: Junio C Hamano @ 2016-04-13 17:47 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Matthieu Moy, Stefan Beller, lkp, Greg KH, git@vger.kernel.org

Lars Schneider <larsxschneider@gmail.com> writes:

> I am not sure what you mean by "fail to hit 'pu'". Maybe we talk at
> cross purposes. Here is what I think you do, please correct me:
>
> 1.) You pick the topics from the mailing list and create feature 
>     branches for each one of them. E.g. one of my recent topics 
>     is "ls/config-origin".

I do not do this step blindly.  The patch is studied in MUA, perhaps
applied to a new topic to view it in wider context, and tested in
isolation at this step.  In any of these steps, I may decide it is
way too premature for 'pu' and discard it.


* Re: 0 bot for Git
  2016-04-12 14:59     ` Stefan Beller
@ 2016-04-14 22:04       ` Christian Couder
  2016-04-15  9:51         ` Parallel checkout (Was Re: 0 bot for Git) Duy Nguyen
  0 siblings, 1 reply; 41+ messages in thread
From: Christian Couder @ 2016-04-14 22:04 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Duy Nguyen, git@vger.kernel.org

On Tue, Apr 12, 2016 at 4:59 PM, Stefan Beller <sbeller@google.com> wrote:
> On Tue, Apr 12, 2016 at 2:42 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>>> On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
>>>> Hi Greg,
>>>>
>>>> Thanks for your talk at the Git Merge 2016!
>>
>> Huh? It already happened?? Any interesting summary to share with us?
>
> Summary from the contributors summit:
> * encoding was a huge discussion point:
>   If you use Git on a case sensitive file system, you may have
>   branches "foo" and "FOO" and it just works. Once you switch over
>   to another file system this may break horribly.
>
>   The discussion revealed lots more of these points in Git, and then sparked
>   a discussion on fundamentals: whether we want to go by the lowest common
>   denominator, treat some systems specially, or keep things as they are, but
>   less broken.
>
> * We still don't know how to handle large repositories
>   (I am working on submodules but that is too vague and does not solve
>   the actual problem; people really want
>     * large files
>     * large trees
>    [* lots of commits (i.e. lots of objects)]
>    in one repo and it should work just as fast.)
>
> That was my main take away.

There is a draft of an article about the first part of the Contributor
Summit in the draft of the next Git Rev News edition:

https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md

Everyone is welcome to contribute anything that is missing or correct anything that is not accurate.


* Parallel checkout (Was Re: 0 bot for Git)
  2016-04-14 22:04       ` Christian Couder
@ 2016-04-15  9:51         ` Duy Nguyen
  2016-04-15 11:18           ` Christian Couder
                             ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: Duy Nguyen @ 2016-04-15  9:51 UTC (permalink / raw)
  To: Christian Couder; +Cc: Stefan Beller, git@vger.kernel.org

On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
> On Tue, Apr 12, 2016 at 4:59 PM, Stefan Beller <sbeller@google.com> wrote:
> > On Tue, Apr 12, 2016 at 2:42 AM, Duy Nguyen <pclouds@gmail.com> wrote:
> >>> On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
> >>>> Hi Greg,
> >>>>
> >>>> Thanks for your talk at the Git Merge 2016!
> >>
> >> Huh? It already happened?? Any interesting summary to share with us?
> 
> There is a draft of an article about the first part of the Contributor
> Summit in the draft of the next Git Rev News edition:
> 
> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md

Thanks. I read the sentence "This made people mention potential
problems with parallelizing git checkout" and wondered what these
problems were. And because it's easier to think while you test
something, to flesh out your thoughts, I wrote the below patch, which
does parallel checkout with multiple worker processes. I wonder if the
same set of problems apply to it.

The idea is simple, you offload some work to process workers. In this
patch, only entry.c:write_entry() is moved to workers. We still do
directory creation and all sort of checks and stat refresh in the main
process. Some more work may be moved away, for example, the entire
builtin/checkout.c:checkout_merged().

A multi-process model is less efficient than a multi-threaded one, but I
doubt we could make object db access thread-safe soon. The last discussion
was 2 years ago [1] and nothing much has happened.

Numbers are encouraging though. On the linux-2.6 repo, on Linux with an
ext4 filesystem, checkout_paths() dominates "git checkout :/".
Unmodified git takes about 31s.


16:26:00.114029 builtin/checkout.c:1299 performance: 31.184973659 s: checkout_paths
16:26:00.114225 trace.c:420             performance: 31.256412935 s: git command: 'git' 'checkout' '.'

When doing write_entry() on 8 processes, it takes 22s (shortened by ~30%)

16:27:39.973730 trace.c:420             performance: 5.610255442 s: git command: 'git' 'checkout-index' '--worker'
16:27:40.956812 trace.c:420             performance: 6.595082013 s: git command: 'git' 'checkout-index' '--worker'
16:27:41.397621 trace.c:420             performance: 7.032024175 s: git command: 'git' 'checkout-index' '--worker'
16:27:47.453999 trace.c:420             performance: 13.078537207 s: git command: 'git' 'checkout-index' '--worker'
16:27:48.986433 trace.c:420             performance: 14.612951643 s: git command: 'git' 'checkout-index' '--worker'
16:27:53.149378 trace.c:420             performance: 18.781762536 s: git command: 'git' 'checkout-index' '--worker'
16:27:54.884044 trace.c:420             performance: 20.514473730 s: git command: 'git' 'checkout-index' '--worker'
16:27:55.319990 trace.c:420             performance: 20.948326263 s: git command: 'git' 'checkout-index' '--worker'
16:27:55.863211 builtin/checkout.c:1299 performance: 22.723118420 s: checkout_paths
16:27:55.863398 trace.c:420             performance: 22.854547640 s: git command: 'git' 'checkout' '--parallel' '.'

I suspect on nfs or windows, the gain may be higher due to IO blocking
the main process more.

Note that this for-fun patch is not optimized at all (and definitely
not portable). I could have sent a group of paths to the worker in a
single system call instead of one per call. The trace above also shows
load-balancing issues, where some workers exit early because of my
naive work distribution. Numbers could get a bit better.

[1] http://thread.gmane.org/gmane.comp.version-control.git/241965/focus=242020

-- 8< --
diff --git a/builtin/checkout-index.c b/builtin/checkout-index.c
index 92c6967..7163216 100644
--- a/builtin/checkout-index.c
+++ b/builtin/checkout-index.c
@@ -9,6 +9,7 @@
 #include "quote.h"
 #include "cache-tree.h"
 #include "parse-options.h"
+#include "entry.h"
 
 #define CHECKOUT_ALL 4
 static int nul_term_line;
@@ -179,6 +180,9 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
 		OPT_END()
 	};
 
+	if (argc == 2 && !strcmp(argv[1], "--worker"))
+		return parallel_checkout_worker();
+
 	if (argc == 2 && !strcmp(argv[1], "-h"))
 		usage_with_options(builtin_checkout_index_usage,
 				   builtin_checkout_index_options);
diff --git a/builtin/checkout.c b/builtin/checkout.c
index efcbd8f..51caad2 100644
--- a/builtin/checkout.c
+++ b/builtin/checkout.c
@@ -20,6 +20,7 @@
 #include "resolve-undo.h"
 #include "submodule-config.h"
 #include "submodule.h"
+#include "entry.h"
 
 static const char * const checkout_usage[] = {
 	N_("git checkout [<options>] <branch>"),
@@ -236,7 +237,8 @@ static int checkout_merged(int pos, struct checkout *state)
 }
 
 static int checkout_paths(const struct checkout_opts *opts,
-			  const char *revision)
+			  const char *revision,
+			  int parallel)
 {
 	int pos;
 	struct checkout state;
@@ -357,6 +359,8 @@ static int checkout_paths(const struct checkout_opts *opts,
 	state.force = 1;
 	state.refresh_cache = 1;
 	state.istate = &the_index;
+	if (parallel)
+		start_parallel_checkout(&state);
 	for (pos = 0; pos < active_nr; pos++) {
 		struct cache_entry *ce = active_cache[pos];
 		if (ce->ce_flags & CE_MATCHED) {
@@ -367,11 +371,18 @@ static int checkout_paths(const struct checkout_opts *opts,
 			if (opts->writeout_stage)
 				errs |= checkout_stage(opts->writeout_stage, ce, pos, &state);
 			else if (opts->merge)
+				/*
+				 * XXX: in parallel mode, we may want
+				 * to let worker perform the merging
+				 * instead and send SHA-1 result back
+				 */
 				errs |= checkout_merged(pos, &state);
 			pos = skip_same_name(ce, pos) - 1;
 		}
 	}
 
+	errs |= run_parallel_checkout();
+
 	if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
 		die(_("unable to write new index file"));
 
@@ -1132,6 +1143,7 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
 	struct branch_info new;
 	char *conflict_style = NULL;
 	int dwim_new_local_branch = 1;
+	int parallel = 0;
 	struct option options[] = {
 		OPT__QUIET(&opts.quiet, N_("suppress progress reporting")),
 		OPT_STRING('b', NULL, &opts.new_branch, N_("branch"),
@@ -1159,6 +1171,8 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
 				N_("second guess 'git checkout <no-such-branch>'")),
 		OPT_BOOL(0, "ignore-other-worktrees", &opts.ignore_other_worktrees,
 			 N_("do not check if another worktree is holding the given ref")),
+		OPT_BOOL(0, "parallel", &parallel,
+			 N_("parallel checkout")),
 		OPT_BOOL(0, "progress", &opts.show_progress, N_("force progress reporting")),
 		OPT_END(),
 	};
@@ -1279,8 +1293,12 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
 		strbuf_release(&buf);
 	}
 
-	if (opts.patch_mode || opts.pathspec.nr)
-		return checkout_paths(&opts, new.name);
+	if (opts.patch_mode || opts.pathspec.nr) {
+		uint64_t start = getnanotime();
+		int ret = checkout_paths(&opts, new.name, parallel);
+		trace_performance_since(start, "checkout_paths");
+		return ret;
+	}
 	else
 		return checkout_branch(&opts, &new);
 }
diff --git a/entry.c b/entry.c
index a410957..5e0eb1c 100644
--- a/entry.c
+++ b/entry.c
@@ -3,6 +3,36 @@
 #include "dir.h"
 #include "streaming.h"
 
+#include <sys/epoll.h>
+#include "pkt-line.h"
+#include "argv-array.h"
+#include "run-command.h"
+
+struct checkout_item {
+	struct cache_entry *ce;
+	struct checkout_item *next;
+};
+
+struct checkout_worker {
+	struct child_process cp;
+	struct checkout_item *to_complete;
+	struct checkout_item *to_send;
+};
+
+struct parallel_checkout {
+	struct checkout state;
+	struct checkout_worker *workers;
+	struct checkout_item *items;
+	int nr_items, alloc_items;
+	int nr_workers;
+};
+
+static struct parallel_checkout *parallel_checkout;
+
+static int queue_checkout(struct parallel_checkout *,
+			  const struct checkout *,
+			  struct cache_entry *);
+
 static void create_directories(const char *path, int path_len,
 			       const struct checkout *state)
 {
@@ -290,5 +320,299 @@ int checkout_entry(struct cache_entry *ce,
 		return 0;
 
 	create_directories(path.buf, path.len, state);
+
+	if (!queue_checkout(parallel_checkout, state, ce))
+		/*
+		 * write_entry() will be done by parallel_checkout_worker() in
+		 * a separate process
+		 */
+		return 0;
+
 	return write_entry(ce, path.buf, state, 0);
 }
+
+int start_parallel_checkout(const struct checkout *state)
+{
+	if (parallel_checkout)
+		die("BUG: parallel checkout already initiated");
+	if (0 && state->force)
+		die("BUG: not support --force yet");
+	parallel_checkout = xmalloc(sizeof(*parallel_checkout));
+	memset(parallel_checkout, 0, sizeof(*parallel_checkout));
+	memcpy(&parallel_checkout->state, state, sizeof(*state));
+
+	return 0;
+}
+
+static int queue_checkout(struct parallel_checkout *pc,
+			  const struct checkout *state,
+			  struct cache_entry *ce)
+{
+	struct checkout_item *ci;
+
+	if (!pc ||
+	    !S_ISREG(ce->ce_mode) ||
+	    memcmp(&pc->state, state, sizeof(*state)))
+		return -1;
+
+	ALLOC_GROW(pc->items, pc->nr_items + 1, pc->alloc_items);
+	ci = pc->items + pc->nr_items++;
+	ci->ce = ce;
+	return 0;
+}
+
+static int item_cmp(const void *a_, const void *b_)
+{
+	const struct checkout_item *a = a_;
+	const struct checkout_item *b = b_;
+	return strcmp(a->ce->name, b->ce->name);
+}
+
+static int setup_workers(struct parallel_checkout *pc, int epoll_fd)
+{
+	int from, nr_per_worker, i;
+
+	pc->workers = xmalloc(sizeof(*pc->workers) * pc->nr_workers);
+	memset(pc->workers, 0, sizeof(*pc->workers) * pc->nr_workers);
+
+	nr_per_worker = pc->nr_items / pc->nr_workers;
+	from = 0;
+
+	for (i = 0; i < pc->nr_workers; i++) {
+		struct checkout_worker *worker = pc->workers + i;
+		struct child_process *cp = &worker->cp;
+		struct checkout_item *item;
+		struct epoll_event ev;
+		int to;
+
+		to = from + nr_per_worker;
+		if (i == pc->nr_workers - 1)
+			to = pc->nr_items;
+		item = NULL;
+		while (from < to) {
+			pc->items[from].next = item;
+			item = pc->items + from;
+			from++;
+		}
+		worker->to_send = item;
+		worker->to_complete = item;
+
+		cp->git_cmd = 1;
+		cp->in = -1;
+		cp->out = -1;
+		argv_array_push(&cp->args, "checkout-index");
+		argv_array_push(&cp->args, "--worker");
+		if (start_command(cp))
+			die(_("failed to run checkout worker"));
+
+		ev.events = EPOLLOUT | EPOLLERR | EPOLLHUP;
+		ev.data.u32 = i * 2;
+		if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, cp->in, &ev) == -1)
+			die_errno("epoll_ctl");
+
+		ev.events = EPOLLIN | EPOLLERR | EPOLLHUP;
+		ev.data.u32 = i * 2 + 1;
+		if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, cp->out, &ev) == -1)
+			die_errno("epoll_ctl");
+	}
+	return 0;
+}
+
+static int send_to_worker(struct checkout_worker *worker, int epoll_fd)
+{
+	if (!worker->to_send) {
+		struct epoll_event ev;
+
+		packet_flush(worker->cp.in);
+
+		epoll_ctl(epoll_fd, EPOLL_CTL_DEL, worker->cp.in, &ev);
+		close(worker->cp.in);
+		worker->cp.in = -1;
+		return 0;
+	}
+
+	/*
+	 * XXX: put the fd in non-blocking mode and send as many files
+	 * as possible in one go.
+	 */
+	packet_write(worker->cp.in, "%s %s",
+		     sha1_to_hex(worker->to_send->ce->sha1),
+		     worker->to_send->ce->name);
+	worker->to_send = worker->to_send->next;
+	return 0;
+}
+
+int parallel_checkout_worker(void)
+{
+	struct checkout state;
+	struct cache_entry *ce = NULL;
+
+	memset(&state, 0, sizeof(state));
+	/* FIXME: pass 'force' over */
+	for (;;) {
+		int len;
+		unsigned char sha1[20];
+		char *line = packet_read_line(0, &len);
+
+		if (!line)
+			return 0;
+
+		if (len < 40)
+			return 1;
+		if (get_sha1_hex(line, sha1))
+			return 1;
+		line += 40;
+		len -= 40;
+		if (*line != ' ')
+			return 1;
+		line++;
+		len--;
+		if (!ce || ce_namelen(ce) < len) {
+			free(ce);
+			ce = xcalloc(1, cache_entry_size(len));
+			ce->ce_mode = S_IFREG | ce_permissions(0644);
+		}
+		ce->ce_namelen = len;
+		hashcpy(ce->sha1, sha1);
+		memcpy(ce->name, line, len + 1);
+
+		if (write_entry(ce, ce->name, &state, 0))
+			return 1;
+		/*
+		 * XXX process in batch and send bigger number of
+		 * checked out entries back
+		 */
+		packet_write(1, "1");
+	}
+}
+
+static int receive_from_worker(struct checkout_worker *worker,
+			       int refresh_cache)
+{
+	int len, val;
+	char *line;
+
+	line = packet_read_line(worker->cp.out, &len);
+	val = atoi(line);
+	if (val <= 0)
+		die("BUG: invalid value");
+	while (val && worker->to_complete &&
+	       worker->to_complete != worker->to_send) {
+		if (refresh_cache) {
+			struct stat st;
+			struct cache_entry *ce = worker->to_complete->ce;
+
+			lstat(ce->name, &st);
+			fill_stat_cache_info(ce, &st);
+			ce->ce_flags |= CE_UPDATE_IN_BASE;
+		}
+		worker->to_complete = worker->to_complete->next;
+		val--;
+	}
+	if (val)
+		die("BUG: invalid value");
+	return 0;
+}
+
+static int finish_worker(struct checkout_worker *worker, int epoll_fd)
+{
+	struct epoll_event ev;
+	char buf[1];
+	int ret;
+
+	assert(worker->to_send == NULL);
+	assert(worker->to_complete == NULL);
+
+	ret = xread(worker->cp.out, buf, sizeof(buf));
+	if (ret != 0)
+		die("BUG: expect eof");
+	epoll_ctl(epoll_fd, EPOLL_CTL_DEL, worker->cp.out, &ev);
+	close(worker->cp.out);
+	worker->cp.out = -1;
+	if (finish_command(&worker->cp))
+		die("worker had a problem");
+	return 0;
+}
+
+static int really_finished(struct parallel_checkout *pc)
+{
+	int i;
+
+	for (i = 0; i < pc->nr_workers; i++)
+		if (pc->workers[i].to_complete)
+			return 0;
+	return 1;
+}
+
+/* XXX progress support for unpack-trees */
+int run_parallel_checkout(void)
+{
+	struct parallel_checkout *pc = parallel_checkout;
+	int ret, i;
+	int epoll_fd;
+	struct epoll_event *events;
+
+	if (!pc || !pc->nr_items) {
+		free(pc);
+		parallel_checkout = NULL;
+		return 0;
+	}
+
+	qsort(pc->items, pc->nr_items, sizeof(*pc->items), item_cmp);
+	pc->nr_workers = 8;
+	epoll_fd = epoll_create(pc->nr_workers * 2);
+	if (epoll_fd == -1)
+		die_errno("epoll_create");
+	ret = setup_workers(pc, epoll_fd);
+	events = xmalloc(sizeof(*events) * pc->nr_workers * 2);
+
+	ret = 0;
+	while (!ret) {
+		int maybe_all_done = 0, nr;
+
+		nr = epoll_wait(epoll_fd, events, pc->nr_workers * 2, -1);
+		if (nr == -1 && errno == EINTR)
+			continue;
+		if (nr == -1) {
+			ret = nr;
+			break;
+		}
+		for (i = 0; i < nr; i++) {
+			int is_in = events[i].data.u32 & 1;
+			int worker_id = events[i].data.u32 / 2;
+			struct checkout_worker *worker = pc->workers + worker_id;
+
+			if (!is_in && (events[i].events & EPOLLOUT))
+				ret = send_to_worker(worker, epoll_fd);
+			else if (events[i].events & EPOLLIN) {
+				if (worker->to_complete) {
+					int refresh = pc->state.refresh_cache;
+					ret = receive_from_worker(worker, refresh);
+					pc->state.istate->cache_changed |= CE_ENTRY_CHANGED;
+				} else {
+					ret = finish_worker(worker, epoll_fd);
+					maybe_all_done = 1;
+				}
+			} else if (events[i].events & (EPOLLERR | EPOLLHUP)) {
+				if (is_in && !worker->to_complete) {
+					ret = finish_worker(worker, epoll_fd);
+					maybe_all_done = 1;
+				} else
+					ret = -1;
+			} else
+				die("BUG: what??");
+			if (ret)
+				break;
+		}
+
+		if (maybe_all_done && really_finished(pc))
+			break;
+	}
+
+	close(epoll_fd);
+	free(pc->workers);
+	free(events);
+	free(pc);
+	parallel_checkout = NULL;
+	return ret;
+}
diff --git a/unpack-trees.c b/unpack-trees.c
index 9f55cc2..433c54e 100644
--- a/unpack-trees.c
+++ b/unpack-trees.c
@@ -220,6 +220,7 @@ static int check_updates(struct unpack_trees_options *o)
 	remove_marked_cache_entries(&o->result);
 	remove_scheduled_dirs();
 
+	/* start_parallel_checkout() */
 	for (i = 0; i < index->cache_nr; i++) {
 		struct cache_entry *ce = index->cache[i];
 
@@ -234,6 +235,7 @@ static int check_updates(struct unpack_trees_options *o)
 			}
 		}
 	}
+	/* run_parallel_checkout() */
 	stop_progress(&progress);
 	if (o->update)
 		git_attr_set_direction(GIT_ATTR_CHECKIN, NULL);
-- 8< --


* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15  9:51         ` Parallel checkout (Was Re: 0 bot for Git) Duy Nguyen
@ 2016-04-15 11:18           ` Christian Couder
  2016-04-15 11:32             ` Duy Nguyen
  2016-04-15 16:52             ` Jeff King
  2016-04-15 15:08           ` Stefan Beller
  2016-04-26 11:35           ` Duy Nguyen
  2 siblings, 2 replies; 41+ messages in thread
From: Christian Couder @ 2016-04-15 11:18 UTC (permalink / raw)
  To: Duy Nguyen
  Cc: Stefan Beller, git@vger.kernel.org, Jeff King, Michael Haggerty

On Fri, Apr 15, 2016 at 11:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
> On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
>>
>> There is a draft of an article about the first part of the Contributor
>> Summit in the draft of the next Git Rev News edition:
>>
>> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md
>
> Thanks. I read the sentence "This made people mention potential
> problems with parallelizing git checkout" and wondered what these
> problems were.

It may have been Michael or Peff (CC'ed) saying that it could break
some builds as the timestamps on the files might not always be ordered
in the same way.

Now perhaps parallel checkout could be activated only if a config
option was set. (Yeah, I know it looks like I am very often asking for
config options.)
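
To make that concrete, here is a minimal sketch of what I mean. The key
name "checkout.parallel" and the helper below are made up purely for
illustration; no such option exists today. In builtin/checkout.c it
could be as small as:

    static int use_parallel_checkout(void)
    {
            int enabled = 0;        /* default: keep serial checkout */

            /* hypothetical key, for illustration only */
            git_config_get_bool("checkout.parallel", &enabled);
            return enabled;
    }

and the result could then replace (or complement) the --parallel
command line option in your patch.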

> And because it's easier to think while you test
> something, to flesh out your thoughts, I wrote the below patch, which
> does parallel checkout with multiple worker processes.

Thanks for your work on this. It is very interesting indeed.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15 11:18           ` Christian Couder
@ 2016-04-15 11:32             ` Duy Nguyen
  2016-04-15 16:52             ` Jeff King
  1 sibling, 0 replies; 41+ messages in thread
From: Duy Nguyen @ 2016-04-15 11:32 UTC (permalink / raw)
  To: Christian Couder
  Cc: Stefan Beller, git@vger.kernel.org, Jeff King, Michael Haggerty

On Fri, Apr 15, 2016 at 6:18 PM, Christian Couder
<christian.couder@gmail.com> wrote:
> On Fri, Apr 15, 2016 at 11:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>> On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
>>>
>>> There is a draft of an article about the first part of the Contributor
>>> Summit in the draft of the next Git Rev News edition:
>>>
>>> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md
>>
>> Thanks. I read the sentence "This made people mention potential
>> problems with parallelizing git checkout" and wondered what these
>> problems were.
>
> It may have been Michael or Peff (CC'ed) saying that it could break
> some builds as the timestamps on the files might not always be ordered
> in the same way.

Very subtle. I suppose if we dumb down the distribution algorithm, we
could make it stable (even though it won't be the same as serial
checkout). Performance will degrade, though; I'm not sure it's still
worth parallelizing at that point.

> Now perhaps parallel checkout could be activated only if a config
> option was set. (Yeah, I know it looks like I am very often asking for
> config options.)

And I think it could be unconditionally activated at clone time (I
can't imagine a different timestamp order affecting anything at that
point), where the number of files to check out is the largest.
-- 
Duy

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15  9:51         ` Parallel checkout (Was Re: 0 bot for Git) Duy Nguyen
  2016-04-15 11:18           ` Christian Couder
@ 2016-04-15 15:08           ` Stefan Beller
  2016-04-16  0:16             ` Duy Nguyen
  2016-04-26 11:35           ` Duy Nguyen
  2 siblings, 1 reply; 41+ messages in thread
From: Stefan Beller @ 2016-04-15 15:08 UTC (permalink / raw)
  To: Duy Nguyen, Jonathan Nieder; +Cc: Christian Couder, git@vger.kernel.org

On Fri, Apr 15, 2016 at 2:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
> On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
>> On Tue, Apr 12, 2016 at 4:59 PM, Stefan Beller <sbeller@google.com> wrote:
>> > On Tue, Apr 12, 2016 at 2:42 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>> >>> On Mon, Apr 11, 2016 at 7:51 AM, Stefan Beller <sbeller@google.com> wrote:
>> >>>> Hi Greg,
>> >>>>
>> >>>> Thanks for your talk at the Git Merge 2016!
>> >>
>> >> Huh? It already happened?? Any interesting summary to share with us?
>>
>> There is a draft of an article about the first part of the Contributor
>> Summit in the draft of the next Git Rev News edition:
>>
>> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md
>
> Thanks. I read the sentence "This made people mention potential
> problems with parallelizing git checkout" and wondered what these
> problems were. And because it's easier to think while you test
> something, to flesh out your thoughts, I wrote the below patch, which
> does parallel checkout with multiple worker processes. I wonder if the
> same set of problems apply to it.

I mentioned it, as Jonathan mentioned it a while ago. (So I was just
referring to hearsay.)

>
> The idea is simple, you offload some work to process workers. In this
> patch, only entry.c:write_entry() is moved to workers. We still do
> directory creation and all sort of checks and stat refresh in the main
> process. Some more work may be moved away, for example, the entire
> builtin/checkout.c:checkout_merged().
>
> Multi process is less efficient than multi thread model. But I doubt
> we could make object db access thread-safe soon. The last discussion
> was 2 years ago [1] and nothing much has happened.
>
> Numbers are encouraging though. On linux-2.6 repo running on linux and
> ext4 filesystem, checkout_paths() would dominate "git checkout :/".
> Unmodified git takes about 31s.

Please also benchmark "make build" or another read-heavy operation
with these two different checkouts. IIRC that was the problem. (Checkout
improved, but due to file ordering on the fs, the operation afterwards
slowed down, such that it became a net negative.)

>
>
> 16:26:00.114029 builtin/checkout.c:1299 performance: 31.184973659 s: checkout_paths
> 16:26:00.114225 trace.c:420             performance: 31.256412935 s: git command: 'git' 'checkout' '.'
>
> When doing write_entry() on 8 processes, it takes 22s (shortened by ~30%)
>
> 16:27:39.973730 trace.c:420             performance: 5.610255442 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:40.956812 trace.c:420             performance: 6.595082013 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:41.397621 trace.c:420             performance: 7.032024175 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:47.453999 trace.c:420             performance: 13.078537207 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:48.986433 trace.c:420             performance: 14.612951643 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:53.149378 trace.c:420             performance: 18.781762536 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:54.884044 trace.c:420             performance: 20.514473730 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:55.319990 trace.c:420             performance: 20.948326263 s: git command: 'git' 'checkout-index' '--worker'
> 16:27:55.863211 builtin/checkout.c:1299 performance: 22.723118420 s: checkout_paths
> 16:27:55.863398 trace.c:420             performance: 22.854547640 s: git command: 'git' 'checkout' '--parallel' '.'
>
> I suspect on nfs or windows, the gain may be higher due to IO blocking
> the main process more.
>
> Note that this for-fun patch is not optimized at all (and definitely
> not portable). I could have sent a group of paths to the worker in a
> single system call instead of one per call. The trace above also shows
> unbalance issues with workers, where some workers exit early because
> of my naive work distribution. Numbers could get a bit better.
>
> [1] http://thread.gmane.org/gmane.comp.version-control.git/241965/focus=242020

Would it make sense to use the parallel processing infrastructure from
run-command.h instead of doing all the setup and teardown yourself?
(As you call it a for-fun patch, I'd assume the answer is: writing code
is more fun than using other people's code ;)
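
To make the suggestion a bit more concrete, here is a rough sketch of
the shape it could take. I am writing the callback signature from
memory, so it may not match run-command.h exactly, and struct batcher
and next_checkout_task() are made-up names:

    struct batcher {
            struct checkout_item *next;     /* items not yet handed out */
    };

    static int next_checkout_task(struct child_process *cp,
                                  struct strbuf *err,
                                  void *pp_cb, void **pp_task_cb)
    {
            struct batcher *b = pp_cb;
            int n = 0;

            if (!b->next)
                    return 0;       /* no more work to hand out */

            cp->git_cmd = 1;
            argv_array_push(&cp->args, "checkout-index");
            argv_array_push(&cp->args, "-f");
            argv_array_push(&cp->args, "--");
            /* give this child a batch of paths on its command line */
            while (b->next && n++ < 64) {
                    argv_array_push(&cp->args, b->next->ce->name);
                    b->next = b->next->next;
            }
            *pp_task_cb = NULL;
            return 1;
    }

    /* then, with a struct batcher pointed at the (linked) item list,
     * instead of the hand-rolled epoll loop: */
    run_processes_parallel(8, next_checkout_task, NULL, NULL, &b);

This loses the stat-refresh coordination your patch does over the pipe
(the children's output is only captured and printed), so it is meant to
show the mechanics rather than be a drop-in replacement.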


>
> -- 8< --
> diff --git a/builtin/checkout-index.c b/builtin/checkout-index.c
> index 92c6967..7163216 100644
> --- a/builtin/checkout-index.c
> +++ b/builtin/checkout-index.c
> @@ -9,6 +9,7 @@
>  #include "quote.h"
>  #include "cache-tree.h"
>  #include "parse-options.h"
> +#include "entry.h"
>
>  #define CHECKOUT_ALL 4
>  static int nul_term_line;
> @@ -179,6 +180,9 @@ int cmd_checkout_index(int argc, const char **argv, const char *prefix)
>                 OPT_END()
>         };
>
> +       if (argc == 2 && !strcmp(argv[1], "--worker"))
> +               return parallel_checkout_worker();
> +
>         if (argc == 2 && !strcmp(argv[1], "-h"))
>                 usage_with_options(builtin_checkout_index_usage,
>                                    builtin_checkout_index_options);
> diff --git a/builtin/checkout.c b/builtin/checkout.c
> index efcbd8f..51caad2 100644
> --- a/builtin/checkout.c
> +++ b/builtin/checkout.c
> @@ -20,6 +20,7 @@
>  #include "resolve-undo.h"
>  #include "submodule-config.h"
>  #include "submodule.h"
> +#include "entry.h"
>
>  static const char * const checkout_usage[] = {
>         N_("git checkout [<options>] <branch>"),
> @@ -236,7 +237,8 @@ static int checkout_merged(int pos, struct checkout *state)
>  }
>
>  static int checkout_paths(const struct checkout_opts *opts,
> -                         const char *revision)
> +                         const char *revision,
> +                         int parallel)
>  {
>         int pos;
>         struct checkout state;
> @@ -357,6 +359,8 @@ static int checkout_paths(const struct checkout_opts *opts,
>         state.force = 1;
>         state.refresh_cache = 1;
>         state.istate = &the_index;
> +       if (parallel)
> +               start_parallel_checkout(&state);
>         for (pos = 0; pos < active_nr; pos++) {
>                 struct cache_entry *ce = active_cache[pos];
>                 if (ce->ce_flags & CE_MATCHED) {
> @@ -367,11 +371,18 @@ static int checkout_paths(const struct checkout_opts *opts,
>                         if (opts->writeout_stage)
>                                 errs |= checkout_stage(opts->writeout_stage, ce, pos, &state);
>                         else if (opts->merge)
> +                               /*
> +                                * XXX: in parallel mode, we may want
> +                                * to let worker perform the merging
> +                                * instead and send SHA-1 result back
> +                                */
>                                 errs |= checkout_merged(pos, &state);
>                         pos = skip_same_name(ce, pos) - 1;
>                 }
>         }
>
> +       errs |= run_parallel_checkout();
> +
>         if (write_locked_index(&the_index, lock_file, COMMIT_LOCK))
>                 die(_("unable to write new index file"));
>
> @@ -1132,6 +1143,7 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
>         struct branch_info new;
>         char *conflict_style = NULL;
>         int dwim_new_local_branch = 1;
> +       int parallel = 0;
>         struct option options[] = {
>                 OPT__QUIET(&opts.quiet, N_("suppress progress reporting")),
>                 OPT_STRING('b', NULL, &opts.new_branch, N_("branch"),
> @@ -1159,6 +1171,8 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
>                                 N_("second guess 'git checkout <no-such-branch>'")),
>                 OPT_BOOL(0, "ignore-other-worktrees", &opts.ignore_other_worktrees,
>                          N_("do not check if another worktree is holding the given ref")),
> +               OPT_BOOL(0, "parallel", &parallel,
> +                        N_("parallel checkout")),
>                 OPT_BOOL(0, "progress", &opts.show_progress, N_("force progress reporting")),
>                 OPT_END(),
>         };
> @@ -1279,8 +1293,12 @@ int cmd_checkout(int argc, const char **argv, const char *prefix)
>                 strbuf_release(&buf);
>         }
>
> -       if (opts.patch_mode || opts.pathspec.nr)
> -               return checkout_paths(&opts, new.name);
> +       if (opts.patch_mode || opts.pathspec.nr) {
> +               uint64_t start = getnanotime();
> +               int ret = checkout_paths(&opts, new.name, parallel);
> +               trace_performance_since(start, "checkout_paths");
> +               return ret;
> +       }
>         else
>                 return checkout_branch(&opts, &new);
>  }
> diff --git a/entry.c b/entry.c
> index a410957..5e0eb1c 100644
> --- a/entry.c
> +++ b/entry.c
> @@ -3,6 +3,36 @@
>  #include "dir.h"
>  #include "streaming.h"
>
> +#include <sys/epoll.h>
> +#include "pkt-line.h"
> +#include "argv-array.h"
> +#include "run-command.h"
> +
> +struct checkout_item {
> +       struct cache_entry *ce;
> +       struct checkout_item *next;
> +};
> +
> +struct checkout_worker {
> +       struct child_process cp;
> +       struct checkout_item *to_complete;
> +       struct checkout_item *to_send;
> +};
> +
> +struct parallel_checkout {
> +       struct checkout state;
> +       struct checkout_worker *workers;
> +       struct checkout_item *items;
> +       int nr_items, alloc_items;
> +       int nr_workers;
> +};
> +
> +static struct parallel_checkout *parallel_checkout;
> +
> +static int queue_checkout(struct parallel_checkout *,
> +                         const struct checkout *,
> +                         struct cache_entry *);
> +
>  static void create_directories(const char *path, int path_len,
>                                const struct checkout *state)
>  {
> @@ -290,5 +320,299 @@ int checkout_entry(struct cache_entry *ce,
>                 return 0;
>
>         create_directories(path.buf, path.len, state);
> +
> +       if (!queue_checkout(parallel_checkout, state, ce))
> +               /*
> +                * write_entry() will be done by parallel_checkout_worker() in
> +                * a separate process
> +                */
> +               return 0;
> +
>         return write_entry(ce, path.buf, state, 0);
>  }
> +
> +int start_parallel_checkout(const struct checkout *state)
> +{
> +       if (parallel_checkout)
> +               die("BUG: parallel checkout already initiated");
> +       if (0 && state->force)
> +               die("BUG: not support --force yet");
> +       parallel_checkout = xmalloc(sizeof(*parallel_checkout));
> +       memset(parallel_checkout, 0, sizeof(*parallel_checkout));
> +       memcpy(&parallel_checkout->state, state, sizeof(*state));
> +
> +       return 0;
> +}
> +
> +static int queue_checkout(struct parallel_checkout *pc,
> +                         const struct checkout *state,
> +                         struct cache_entry *ce)
> +{
> +       struct checkout_item *ci;
> +
> +       if (!pc ||
> +           !S_ISREG(ce->ce_mode) ||
> +           memcmp(&pc->state, state, sizeof(*state)))
> +               return -1;
> +
> +       ALLOC_GROW(pc->items, pc->nr_items + 1, pc->alloc_items);
> +       ci = pc->items + pc->nr_items++;
> +       ci->ce = ce;
> +       return 0;
> +}
> +
> +static int item_cmp(const void *a_, const void *b_)
> +{
> +       const struct checkout_item *a = a_;
> +       const struct checkout_item *b = b_;
> +       return strcmp(a->ce->name, b->ce->name);
> +}
> +
> +static int setup_workers(struct parallel_checkout *pc, int epoll_fd)
> +{
> +       int from, nr_per_worker, i;
> +
> +       pc->workers = xmalloc(sizeof(*pc->workers) * pc->nr_workers);
> +       memset(pc->workers, 0, sizeof(*pc->workers) * pc->nr_workers);
> +
> +       nr_per_worker = pc->nr_items / pc->nr_workers;
> +       from = 0;
> +
> +       for (i = 0; i < pc->nr_workers; i++) {
> +               struct checkout_worker *worker = pc->workers + i;
> +               struct child_process *cp = &worker->cp;
> +               struct checkout_item *item;
> +               struct epoll_event ev;
> +               int to;
> +
> +               to = from + nr_per_worker;
> +               if (i == pc->nr_workers - 1)
> +                       to = pc->nr_items;
> +               item = NULL;
> +               while (from < to) {
> +                       pc->items[from].next = item;
> +                       item = pc->items + from;
> +                       from++;
> +               }
> +               worker->to_send = item;
> +               worker->to_complete = item;
> +
> +               cp->git_cmd = 1;
> +               cp->in = -1;
> +               cp->out = -1;
> +               argv_array_push(&cp->args, "checkout-index");
> +               argv_array_push(&cp->args, "--worker");
> +               if (start_command(cp))
> +                       die(_("failed to run checkout worker"));
> +
> +               ev.events = EPOLLOUT | EPOLLERR | EPOLLHUP;
> +               ev.data.u32 = i * 2;
> +               if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, cp->in, &ev) == -1)
> +                       die_errno("epoll_ctl");
> +
> +               ev.events = EPOLLIN | EPOLLERR | EPOLLHUP;
> +               ev.data.u32 = i * 2 + 1;
> +               if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, cp->out, &ev) == -1)
> +                       die_errno("epoll_ctl");
> +       }
> +       return 0;
> +}
> +
> +static int send_to_worker(struct checkout_worker *worker, int epoll_fd)
> +{
> +       if (!worker->to_send) {
> +               struct epoll_event ev;
> +
> +               packet_flush(worker->cp.in);
> +
> +               epoll_ctl(epoll_fd, EPOLL_CTL_DEL, worker->cp.in, &ev);
> +               close(worker->cp.in);
> +               worker->cp.in = -1;
> +               return 0;
> +       }
> +
> +       /*
> +        * XXX: put the fd in non-blocking mode and send as many files
> +        * as possible in one go.
> +        */
> +       packet_write(worker->cp.in, "%s %s",
> +                    sha1_to_hex(worker->to_send->ce->sha1),
> +                    worker->to_send->ce->name);
> +       worker->to_send = worker->to_send->next;
> +       return 0;
> +}
> +
> +int parallel_checkout_worker(void)
> +{
> +       struct checkout state;
> +       struct cache_entry *ce = NULL;
> +
> +       memset(&state, 0, sizeof(state));
> +       /* FIXME: pass 'force' over */
> +       for (;;) {
> +               int len;
> +               unsigned char sha1[20];
> +               char *line = packet_read_line(0, &len);
> +
> +               if (!line)
> +                       return 0;
> +
> +               if (len < 40)
> +                       return 1;
> +               if (get_sha1_hex(line, sha1))
> +                       return 1;
> +               line += 40;
> +               len -= 40;
> +               if (*line != ' ')
> +                       return 1;
> +               line++;
> +               len--;
> +               if (!ce || ce_namelen(ce) < len) {
> +                       free(ce);
> +                       ce = xcalloc(1, cache_entry_size(len));
> +                       ce->ce_mode = S_IFREG | ce_permissions(0644);
> +               }
> +               ce->ce_namelen = len;
> +               hashcpy(ce->sha1, sha1);
> +               memcpy(ce->name, line, len + 1);
> +
> +               if (write_entry(ce, ce->name, &state, 0))
> +                       return 1;
> +               /*
> +                * XXX process in batch and send bigger number of
> +                * checked out entries back
> +                */
> +               packet_write(1, "1");
> +       }
> +}
> +
> +static int receive_from_worker(struct checkout_worker *worker,
> +                              int refresh_cache)
> +{
> +       int len, val;
> +       char *line;
> +
> +       line = packet_read_line(worker->cp.out, &len);
> +       val = atoi(line);
> +       if (val <= 0)
> +               die("BUG: invalid value");
> +       while (val && worker->to_complete &&
> +              worker->to_complete != worker->to_send) {
> +               if (refresh_cache) {
> +                       struct stat st;
> +                       struct cache_entry *ce = worker->to_complete->ce;
> +
> +                       lstat(ce->name, &st);
> +                       fill_stat_cache_info(ce, &st);
> +                       ce->ce_flags |= CE_UPDATE_IN_BASE;
> +               }
> +               worker->to_complete = worker->to_complete->next;
> +               val--;
> +       }
> +       if (val)
> +               die("BUG: invalid value");
> +       return 0;
> +}
> +
> +static int finish_worker(struct checkout_worker *worker, int epoll_fd)
> +{
> +       struct epoll_event ev;
> +       char buf[1];
> +       int ret;
> +
> +       assert(worker->to_send == NULL);
> +       assert(worker->to_complete == NULL);
> +
> +       ret = xread(worker->cp.out, buf, sizeof(buf));
> +       if (ret != 0)
> +               die("BUG: expect eof");
> +       epoll_ctl(epoll_fd, EPOLL_CTL_DEL, worker->cp.out, &ev);
> +       close(worker->cp.out);
> +       worker->cp.out = -1;
> +       if (finish_command(&worker->cp))
> +               die("worker had a problem");
> +       return 0;
> +}
> +
> +static int really_finished(struct parallel_checkout *pc)
> +{
> +       int i;
> +
> +       for (i = 0; i < pc->nr_workers; i++)
> +               if (pc->workers[i].to_complete)
> +                       return 0;
> +       return 1;
> +}
> +
> +/* XXX progress support for unpack-trees */
> +int run_parallel_checkout(void)
> +{
> +       struct parallel_checkout *pc = parallel_checkout;
> +       int ret, i;
> +       int epoll_fd;
> +       struct epoll_event *events;
> +
> +       if (!pc || !pc->nr_items) {
> +               free(pc);
> +               parallel_checkout = NULL;
> +               return 0;
> +       }
> +
> +       qsort(pc->items, pc->nr_items, sizeof(*pc->items), item_cmp);
> +       pc->nr_workers = 8;
> +       epoll_fd = epoll_create(pc->nr_workers * 2);
> +       if (epoll_fd == -1)
> +               die_errno("epoll_create");
> +       ret = setup_workers(pc, epoll_fd);
> +       events = xmalloc(sizeof(*events) * pc->nr_workers * 2);
> +
> +       ret = 0;
> +       while (!ret) {
> +               int maybe_all_done = 0, nr;
> +
> +               nr = epoll_wait(epoll_fd, events, pc->nr_workers * 2, -1);
> +               if (nr == -1 && errno == EINTR)
> +                       continue;
> +               if (nr == -1) {
> +                       ret = nr;
> +                       break;
> +               }
> +               for (i = 0; i < nr; i++) {
> +                       int is_in = events[i].data.u32 & 1;
> +                       int worker_id = events[i].data.u32 / 2;
> +                       struct checkout_worker *worker = pc->workers + worker_id;
> +
> +                       if (!is_in && (events[i].events & EPOLLOUT))
> +                               ret = send_to_worker(worker, epoll_fd);
> +                       else if (events[i].events & EPOLLIN) {
> +                               if (worker->to_complete) {
> +                                       int refresh = pc->state.refresh_cache;
> +                                       ret = receive_from_worker(worker, refresh);
> +                                       pc->state.istate->cache_changed |= CE_ENTRY_CHANGED;
> +                               } else {
> +                                       ret = finish_worker(worker, epoll_fd);
> +                                       maybe_all_done = 1;
> +                               }
> +                       } else if (events[i].events & (EPOLLERR | EPOLLHUP)) {
> +                               if (is_in && !worker->to_complete) {
> +                                       ret = finish_worker(worker, epoll_fd);
> +                                       maybe_all_done = 1;
> +                               } else
> +                                       ret = -1;
> +                       } else
> +                               die("BUG: what??");
> +                       if (ret)
> +                               break;
> +               }
> +
> +               if (maybe_all_done && really_finished(pc))
> +                       break;
> +       }
> +
> +       close(epoll_fd);
> +       free(pc->workers);
> +       free(events);
> +       free(pc);
> +       parallel_checkout = NULL;
> +       return ret;
> +}
> diff --git a/unpack-trees.c b/unpack-trees.c
> index 9f55cc2..433c54e 100644
> --- a/unpack-trees.c
> +++ b/unpack-trees.c
> @@ -220,6 +220,7 @@ static int check_updates(struct unpack_trees_options *o)
>         remove_marked_cache_entries(&o->result);
>         remove_scheduled_dirs();
>
> +       /* start_parallel_checkout() */
>         for (i = 0; i < index->cache_nr; i++) {
>                 struct cache_entry *ce = index->cache[i];
>
> @@ -234,6 +235,7 @@ static int check_updates(struct unpack_trees_options *o)
>                         }
>                 }
>         }
> +       /* run_parallel_checkout() */
>         stop_progress(&progress);
>         if (o->update)
>                 git_attr_set_direction(GIT_ATTR_CHECKIN, NULL);
> -- 8< --

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15 11:18           ` Christian Couder
  2016-04-15 11:32             ` Duy Nguyen
@ 2016-04-15 16:52             ` Jeff King
  2016-04-15 17:31               ` Junio C Hamano
  2016-04-16  5:17               ` Michael Haggerty
  1 sibling, 2 replies; 41+ messages in thread
From: Jeff King @ 2016-04-15 16:52 UTC (permalink / raw)
  To: Christian Couder
  Cc: Duy Nguyen, Stefan Beller, git@vger.kernel.org, Michael Haggerty

On Fri, Apr 15, 2016 at 01:18:46PM +0200, Christian Couder wrote:

> On Fri, Apr 15, 2016 at 11:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
> > On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
> >>
> >> There is a draft of an article about the first part of the Contributor
> >> Summit in the draft of the next Git Rev News edition:
> >>
> >> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md
> >
> > Thanks. I read the sentence "This made people mention potential
> > problems with parallelizing git checkout" and wondered what these
> > problems were.
> 
> It may have been Michael or Peff (CC'ed) saying that it could break
> some builds as the timestamps on the files might not always be ordered
> in the same way.

I don't think it was me. I'm also not sure how it would break a build.
Git does not promise a particular timing or order for updating files as
it is. So if we are checking out two files "a" and "b", and your build
process depends on the timestamp between them, I think all bets are
already off.

-Peff

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15 16:52             ` Jeff King
@ 2016-04-15 17:31               ` Junio C Hamano
  2016-04-15 17:38                 ` Jeff King
  2016-04-16  5:17               ` Michael Haggerty
  1 sibling, 1 reply; 41+ messages in thread
From: Junio C Hamano @ 2016-04-15 17:31 UTC (permalink / raw)
  To: Jeff King
  Cc: Christian Couder, Duy Nguyen, Stefan Beller, git@vger.kernel.org,
	Michael Haggerty

Jeff King <peff@peff.net> writes:

> On Fri, Apr 15, 2016 at 01:18:46PM +0200, Christian Couder wrote:
>
>> On Fri, Apr 15, 2016 at 11:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>> > On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
>> >>
>> >> There is a draft of an article about the first part of the Contributor
>> >> Summit in the draft of the next Git Rev News edition:
>> >>
>> >> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md
>> >
>> > Thanks. I read the sentence "This made people mention potential
>> > problems with parallelizing git checkout" and wondered what these
>> > problems were.
>> 
>> It may have been Michael or Peff (CC'ed) saying that it could break
>> some builds as the timestamps on the files might not always be ordered
>> in the same way.
>
> I don't think it was me. I'm also not sure how it would break a build.

Yup, "will break a build" is a crazy-talk that I'd be surprised if
you said something silly like that ;-)

Last time I checked, I think the access to attributes from the
convert.c code was one of the things that are cumbersome to make
safe in a multi-threaded world.

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15 17:31               ` Junio C Hamano
@ 2016-04-15 17:38                 ` Jeff King
  0 siblings, 0 replies; 41+ messages in thread
From: Jeff King @ 2016-04-15 17:38 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Christian Couder, Duy Nguyen, Stefan Beller, git@vger.kernel.org,
	Michael Haggerty

On Fri, Apr 15, 2016 at 10:31:39AM -0700, Junio C Hamano wrote:

> Last time I checked, I think the accesses to attributes from the
> convert.c thing was one of the things that are cumbersome to make
> safe in multi-threaded world.

Multi-threaded grep has the same problem. I think we started with a big
lock on the attribute access. That works OK, as long as you hold the
lock only for the lookup and not the actual filtering. We later moved to
pre-loading the attributes in 9dd5245c104, because looking up attributes
in order is much more efficient (because locality of paths lets us reuse
work from the previous request).

So I'm guessing the major work here will be to split the "look up smudge
attributes" step from "do the smudge".

-Peff

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15 15:08           ` Stefan Beller
@ 2016-04-16  0:16             ` Duy Nguyen
  0 siblings, 0 replies; 41+ messages in thread
From: Duy Nguyen @ 2016-04-16  0:16 UTC (permalink / raw)
  To: Stefan Beller; +Cc: Jonathan Nieder, Christian Couder, git@vger.kernel.org

On Fri, Apr 15, 2016 at 10:08 PM, Stefan Beller <sbeller@google.com> wrote:
> On Fri, Apr 15, 2016 at 2:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>> On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
>> The idea is simple, you offload some work to process workers. In this
>> patch, only entry.c:write_entry() is moved to workers. We still do
>> directory creation and all sort of checks and stat refresh in the main
>> process. Some more work may be moved away, for example, the entire
>> builtin/checkout.c:checkout_merged().
>>
>> Multi process is less efficient than multi thread model. But I doubt
>> we could make object db access thread-safe soon. The last discussion
>> was 2 years ago [1] and nothing much has happened.
>>
>> Numbers are encouraging though. On linux-2.6 repo running on linux and
>> ext4 filesystem, checkout_paths() would dominate "git checkout :/".
>> Unmodified git takes about 31s.
>
> Please also benchmark "make build" or another read heavy operation
> with these 2 different checkouts. IIRC that was the problem. (checkout
> improved, but due to file ordering on the fs, the operation afterwards
> slowed down, such that it became a net negative)

That's way too close to fs internals. Don't filesystems these days
have b-trees and indexes to speed up pathname lookup (which makes file
creation order meaningless, I guess)? If it only happens to a fs or
two, I'm leaning toward saying "your problem, fix your file system". A
mitigation may be to let a worker handle a whole directory
(non-recursively) so file creation order within a directory is almost
the same.
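
Something like this on top of the patch (untested, just to show the
idea): after the qsort(), never let a worker's slice end in the middle
of a directory, so each directory is populated by exactly one worker
and file creation order within it stays close to the serial order.

    /* do the entries at a and b live in the same directory? */
    static int same_dir(const struct checkout_item *a,
                        const struct checkout_item *b)
    {
            const char *pa = a->ce->name, *pb = b->ce->name;
            const char *sa = strrchr(pa, '/'), *sb = strrchr(pb, '/');
            size_t la = sa ? sa - pa : 0, lb = sb ? sb - pb : 0;

            return la == lb && !memcmp(pa, pb, la);
    }

    /* in setup_workers(), after "to = from + nr_per_worker;" */
    while (to > 0 && to < pc->nr_items &&
           same_dir(&pc->items[to - 1], &pc->items[to]))
            to++;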

> Would it make sense to use the parallel processing infrastructure from
> run-command.h
> instead of doing all setup and teardown yourself?
> (As you call it for-fun patch, I'd assume the answer is: Writing code
> is more fun than
> using other peoples code ;)

I did look at run-command.h. Your run_processes_parallel() looked like
an almost perfect fit, but I needed control over stdout for coordination,
not to have it captured and printed. At that point, yes, writing new code
was more fun than tweaking run_processes_parallel() :-D
-- 
Duy

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15 16:52             ` Jeff King
  2016-04-15 17:31               ` Junio C Hamano
@ 2016-04-16  5:17               ` Michael Haggerty
  1 sibling, 0 replies; 41+ messages in thread
From: Michael Haggerty @ 2016-04-16  5:17 UTC (permalink / raw)
  To: Jeff King, Christian Couder
  Cc: Duy Nguyen, Stefan Beller, git@vger.kernel.org

On 04/15/2016 06:52 PM, Jeff King wrote:
> On Fri, Apr 15, 2016 at 01:18:46PM +0200, Christian Couder wrote:
> 
>> On Fri, Apr 15, 2016 at 11:51 AM, Duy Nguyen <pclouds@gmail.com> wrote:
>>> On Fri, Apr 15, 2016 at 12:04:49AM +0200, Christian Couder wrote:
>>>>
>>>> There is a draft of an article about the first part of the Contributor
>>>> Summit in the draft of the next Git Rev News edition:
>>>>
>>>> https://github.com/git/git.github.io/blob/master/rev_news/drafts/edition-14.md
>>>
>>> Thanks. I read the sentence "This made people mention potential
>>> problems with parallelizing git checkout" and wondered what these
>>> problems were.
>>
>> It may have been Michael or Peff (CC'ed) saying that it could break
>> some builds as the timestamps on the files might not always be ordered
>> in the same way.
> 
> I don't think it was me. I'm also not sure how it would break a build.
> Git does not promise a particular timing or order for updating files as
> it is. So if we are checking out two files "a" and "b", and your build
> process depends on the timestamp between them, I think all bets are
> already off.

I'm hazy on this, but I think somebody at Git Merge pointed out that
parallel checkouts (within a single repository) could be tricky if
multiple Git filenames are mapped to the same file due to filesystem
case-insensitivity or encoding normalization.
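
Purely as an illustration of the kind of guard a parallel checkout
would probably want (nothing like this exists in the patch, and
strcasecmp() only covers ASCII case folding, not the Unicode
normalization HFS+ does):

    /* sort a scratch copy of the entries case-insensitively; any equal
     * neighbours would collide on a case-insensitive filesystem */
    static int icase_cmp(const void *a_, const void *b_)
    {
            const struct checkout_item *a = a_, *b = b_;
            return strcasecmp(a->ce->name, b->ce->name);
    }

    ...
    qsort(scratch, nr, sizeof(*scratch), icase_cmp);
    for (i = 1; i < nr; i++)
            if (!strcasecmp(scratch[i - 1].ce->name, scratch[i].ce->name))
                    collision = 1;  /* fall back to serial checkout */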

Michael

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-13 17:29                 ` Stefan Beller
  2016-04-13 17:43                   ` Greg KH
@ 2016-04-16 15:51                   ` Lars Schneider
  2016-04-16 18:02                     ` Junio C Hamano
  1 sibling, 1 reply; 41+ messages in thread
From: Lars Schneider @ 2016-04-16 15:51 UTC (permalink / raw)
  To: Stefan Beller
  Cc: Junio C Hamano, Matthieu Moy, lkp, Greg KH, git@vger.kernel.org


On 13 Apr 2016, at 19:29, Stefan Beller <sbeller@google.com> wrote:

> On Wed, Apr 13, 2016 at 10:09 AM, Lars Schneider
> <larsxschneider@gmail.com> wrote:
>> 
>>> On 13 Apr 2016, at 18:27, Junio C Hamano <gitster@pobox.com> wrote:
>>> 
>>> Lars Schneider <larsxschneider@gmail.com> writes:
>>> 
>>>> @Junio:
>>>> If you setup Travis CI for your https://github.com/gitster/git fork
>>>> then Travis CI would build all your topic branches and you (and
>>>> everyone who is interested) could check
>>>> https://travis-ci.org/gitster/git/branches to see which branches
>>>> will break pu if you integrate them.
>>> 
>>> I would not say such an arrangement is worthless, but it targets a
>>> wrong point in the patch flow.
>>> 
>>> The patches that result in the most wastage of my time (i.e. a
>>> shared bottleneck resource the community should strive to optimize
>>> for) are the ones that fail to hit 'pu'.  Ones that do not even
>>> build in isolation, ones that may build but fail even the new tests
>>> they bring in, ones that break existing tests, and ones that are OK
>>> in isolation but do not play well with topics already in flight.
>> 
>> I am not sure what you mean by "fail to hit 'pu'". Maybe we talk at
>> cross purposes. Here is what I think you do, please correct me:
>> 
>> 1.) You pick the topics from the mailing list and create feature
>>    branches for each one of them. E.g. one of my recent topics
>>    is "ls/config-origin".
> 
> and by You you mean Junio.
Yes.


> Ideally the 0bot would have sent the message as a reply to the
> cover letter with the information "doesn't compile/breaks test t1234",
> so Junio could ignore that series (no time wasted on his part).
> 
> At Git Merge Greg said (paraphrasing here):
> 
>  We waste developers' time, because we have plenty of it. Maintainers'
>  time, however, is precious because maintainers are the bottleneck and a
>  scarce resource to come by.
> 
> And I think Git and the kernel have the same community design here.
> (Except the kernel is bigger and has more than one maintainer)
> 
> So the idea is to help Junio make a decision to drop/ignore those patches
> with as few brain cycles spent as possible. (Not even spending 5
> seconds on it.)
That sounds great. I just wonder how 0bot would know where to apply
the patches?


>> 2.) At some point you create a new pu branch based on the latest
>>    next branch. You merge all the new topics into the new pu.
> 
> but Junio also runs tests after each(?) merge(?) of a series, and once
> tests fail, it takes time to sort out what caused it. (Is it the patch series
> alone, or is it because two series interact badly with each other?)
> 
>> 
>> If you push the topics to github.com/gitster after step 1 then
>> Travis CI could tell you if the individual topic builds clean
>> and passes all tests. Then you could merge only clean topics in
>> step 2 which would result in a pu that is much more likely to
>> build clean.
> 
> IIRC Junio did not like granting travis access to the "blessed" repository,
> as travis wants so many permissions, including write permission to that
> repo. (We/He could have a second, non-advertised repo though.)
AFAIK TravisCI does not ask for repo write permissions. They ask for
permission to write the status of commits (little green checkmark on
GitHub) and repo hooks:
https://docs.travis-ci.com/user/github-oauth-scopes


> Also this would incur wait time on Junio's side
> 
> 1) collect patches (many series over the day)
> 2) push
> 3) wait
> 4) do the merges
He could do the merges as he does them today, but after some time
he (and the contributor of a patch) would know if a certain patch
breaks pu.


> however a 0 bot would do
> 1) collect patches faster than Junio (0 bot is a computer after all,
> working 24/7)
> 2) test each patch/series individually
> 3) send feedback without the wait time, so the contributor from a different
>   time zone gets feedback quickly. (The round trip is just the build and test
>   time, which the developer forgot to run anyway if it fails.)
I agree that this would be even better. However, I assume this mechanism
requires some setup? TravisCI works today.


> 
>> 
>> Could that process avoid wasting your time with bad patches?
>> 
>>> Automated testing of what is already on 'pu' does not help reduce
>>> the above cost, as the culling must be done by me _without_ help
>>> from automated test you propose to run on topics in 'pu'.  Ever
>>> heard of chicken and egg?
>>> 
>>> Your "You can setup your own CI" update to SubmittingPatches may
>>> encourage people to test before sending.  The "Travis CI sends
>>> failure notice as a response to a crappy patch" discussed by
>>> Matthieu in the other subthread will be of great help.
>>> 
>>> Thanks.
>>> 
>> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-16 15:51                   ` Lars Schneider
@ 2016-04-16 18:02                     ` Junio C Hamano
  2016-04-22  8:19                       ` Lars Schneider
  0 siblings, 1 reply; 41+ messages in thread
From: Junio C Hamano @ 2016-04-16 18:02 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Stefan Beller, Matthieu Moy, lkp, Greg KH, git@vger.kernel.org

Lars Schneider <larsxschneider@gmail.com> writes:

>> Also this would incur wait time on Junio's side
>> 
>> 1) collect patches (many series over the day)
>> 2) push
>> 3) wait
>> 4) do the merges
> He could do the merges as he does them today, but after some time
> he (and the contributor of a patch) would know if a certain patch
> breaks pu.

Read what you wrote again and realize that your step 1 does not
require any expertise or taste from the person who does it.  Anybody
could do it, in other words.  Instead of demanding that I do more
mindless chores, why don't you try doing that yourself with your fork
at GitHub?

I suspect you haven't read my response $gmane/291469 to your message
yet, but "as he does them today" would mean _all_ of the following
has to happen during phase 1) above:

 - Look at the patch and see if it is even remotely interesting;

 - See what maintenance track it should apply to by comparing its
   context and checking the availability of features the post-image
   wants to use in the maintenance tracks;
 
 - Fork a topic and apply, and inspect the result with larger -U
   value (or log -p -W);
 
 - Run tests on the topic.

 - Try merging it to the eventual target track (e.g. 'maint-2.7'),
   'master', 'next' and 'pu' (note that this is not "one of these",
   but "all of these"), and build the result (and optionally test).
   Then discard these trial merges.

Two things you seem to be missing are:

 * I do not pick up patches from the list with the objective of
   queuing them in 'pu'.  I instead look for and process topics that
   could go to 'next', or that I want to see in 'next' eventually
   with fixes.  Queuing leftover bits in 'pu' as "not ready for
   'next'" is done only because I saw promise in them (and that
   determination requires time from me), and they did not fail in
   earlier steps before they even gained a topic branch in my tree
   (otherwise I wouldn't be able to keep up with the traffic).

 * The last step, trial merges, is often a very good method to see
   potential problems and unintended interactions with other topics.
   A fix we would want to see in older maintenance tracks may depend
   on too new a feature we added recently, etc.

Also see $gmane/291469

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-16 18:02                     ` Junio C Hamano
@ 2016-04-22  8:19                       ` Lars Schneider
  2016-04-22 17:30                         ` Junio C Hamano
  0 siblings, 1 reply; 41+ messages in thread
From: Lars Schneider @ 2016-04-22  8:19 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Stefan Beller, Matthieu Moy, lkp, Greg KH, git@vger.kernel.org


> On 16 Apr 2016, at 20:02, Junio C Hamano <gitster@pobox.com> wrote:
> 
> Lars Schneider <larsxschneider@gmail.com> writes:
> 
>>> Also this would incur wait time on Junio's side
>>> 
>>> 1) collect patches (many series over the day)
>>> 2) push
>>> 3) wait
>>> 4) do the merges
>> He could do the merges as he does them today, but after some time
>> he (and the contributor of a patch) would know if a certain patch
>> breaks pu.
> 
> Read what you wrote again and realize that your step 1. does not
> require any expertise or taste from the person who does so.  Anybody
> could do it, in other words.  Instead of demanding me to do more of
> mindless chore, why don't you try doing that yourself with your fork
> at GitHub?
> 
> I suspect you haven't read my response $gmane/291469 to your message
> yet, but "as he does them today" would mean _all_ of the following
> has to happen during phase 1) above:
> 
> - Look at the patch and see if it is even remotely interesting;
> 
> - See what maintenance track it should apply to by comparing its
>   context and checking the availability of features the post-image
>   wants to use in the maintenance tracks;
> 
> - Fork a topic and apply, and inspect the result with larger -U
>   value (or log -p -W);
> 
> - Run tests on the topic.
> 
> - Try merging it to the eventual target track (e.g. 'maint-2.7'),
>   'master', 'next' and 'pu' (note that this is not "one of these",
>   but "all of these"), and build the result (and optionally test).
>   Then discard these trial merges.
> 
> Two things you seem to be missing are:
> 
> * I do not pick up patches from the list with the objective of
>   queuing them in 'pu'.  I instead look for and process topics that
>   could go to 'next', or that I want to see in 'next' eventually
>   with fixes.  Queuing leftover bits in 'pu' as "not ready for
>   'next'" is done only because I saw promise in them (and that
>   determination requires time from me), and they did not fail in
>   earlier steps before they even gained a topic branch in my tree
>   (otherwise I wouldn't be able to keep up with the traffic).
> 
> * The last step, trial merges, is often a very good method to see
>   potential problems and unintended interactions with other topics.
>   A fix we would want to see in older maintenance tracks may depend
>   on too new a feature we added recently, etc.
> 
> Also see $gmane/291469

Thanks for the explanation. My intention was not to be offensive.
I was curious about your workflow and I was wondering if the
Travis CI integration could be useful for you in any way.

Best,
Lars

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-22  8:19                       ` Lars Schneider
@ 2016-04-22 17:30                         ` Junio C Hamano
  2016-04-24  7:15                           ` Johannes Schindelin
  0 siblings, 1 reply; 41+ messages in thread
From: Junio C Hamano @ 2016-04-22 17:30 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Stefan Beller, Matthieu Moy, lkp, Greg KH, git@vger.kernel.org

Lars Schneider <larsxschneider@gmail.com> writes:

> Thanks for the explanation. My intention was not to be offensive.
> I was curious about your workflow and I was wondering if the
> Travis CI integration could be useful for you in any way.

Don't worry; I didn't feel offended.  The Travis stuff running on
the branches at http://github.com/git/git would surely catch issues
on MacOSX and/or around git-p4 (neither of which I test myself when
merging to 'pu') before they hit 'next', and that is already helping
us greatly.

And if volunteers or bots pick up in-flight patches that have not
hit 'pu' and feed them to Travis through their repositories, that
would also help the project, so your work on hooking up our source
tree with Travis is greatly appreciated.

It was just that Travis running on broken-down topic branches that
appear in http://github.com/gitster/git would not help my workflow,
which was the main point of illustrating how these branches work.

Thanks.


  

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-22 17:30                         ` Junio C Hamano
@ 2016-04-24  7:15                           ` Johannes Schindelin
  2016-04-24 12:19                             ` SZEDER Gábor
       [not found]                             ` <CAE5ih7-fyuEvSDzmHVfXta3hd_7ZRp1+_VtytDM+u0372Kyidg@mail.gmail.com>
  0 siblings, 2 replies; 41+ messages in thread
From: Johannes Schindelin @ 2016-04-24  7:15 UTC (permalink / raw)
  To: Junio C Hamano
  Cc: Lars Schneider, Stefan Beller, Matthieu Moy, lkp, Greg KH,
	git@vger.kernel.org

Hi Lars & Junio,

On Fri, 22 Apr 2016, Junio C Hamano wrote:

> Lars Schneider <larsxschneider@gmail.com> writes:
> 
> > Thanks for the explanation. My intention was not to be offensive.
> > I was curious about your workflow and I was wondering if the
> > Travis CI integration could be useful for you in any way.
> 
> Don't worry; I didn't feel offended.  The Travis stuff running on
> the branches at http://github.com/git/git would surely catch issues
> on MacOSX and/or around git-p4 (neither of which I test myself when
> merging to 'pu') before they hit 'next', and that is already helping
> us greatly.

I agree that it helps to catch those Mac and P4 issues early.

However, it is possible that bogus errors are reported that might not have
been introduced by the changes of the PR, and I find it relatively hard to
figure out the specifics. Take for example

	https://travis-ci.org/git/git/jobs/124767554

It appears that t9824 fails with my interactive rebase work on MacOSX,
both Clang and GCC versions. I currently have no access to a Mac for
developing (so I am denied my favorite debugging technique: sh t... -i -v
-x), and I seem to be unable to find any useful log of what went wrong
*specifically*.

Any ideas how to find out?

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-24  7:15                           ` Johannes Schindelin
@ 2016-04-24 12:19                             ` SZEDER Gábor
  2016-04-24 13:05                               ` Johannes Schindelin
       [not found]                             ` <CAE5ih7-fyuEvSDzmHVfXta3hd_7ZRp1+_VtytDM+u0372Kyidg@mail.gmail.com>
  1 sibling, 1 reply; 41+ messages in thread
From: SZEDER Gábor @ 2016-04-24 12:19 UTC (permalink / raw)
  To: Johannes Schindelin
  Cc: SZEDER Gábor, Junio C Hamano, Lars Schneider, Stefan Beller,
	Matthieu Moy, lkp, Greg KH, git@vger.kernel.org


> > Don't worry; I didn't feel offended.  The Travis stuff running on
> > the branches at http://github.com/git/git would surely catch issues
> > on MacOSX and/or around git-p4 (neither of which I test myself when
> > merging to 'pu') before they hit 'next', and that is already helping
> > us greatly.
> 
> I agree that it helps to catch those Mac and P4 issues early.
> 
> However, it is possible that bogus errors are reported that might not have
> been introduced by the changes of the PR, and I find it relatively hard to
> figure out the specifics. Take for example
> 
> 	https://travis-ci.org/git/git/jobs/124767554
> 
> It appears that t9824 fails with my interactive rebase work on MacOSX,
> both Clang and GCC versions. I currently have no access to a Mac for
> developing (so I am denied my favorite debugging technique: sh t... -i -v
> -x), and I seem to be unable to find any useful log of what went wrong
> *specifically*.
> 
> Any ideas how to find out?

You could patch .travis.yml on top of your changes to run only t9824
(so travis-ci returns feedback faster) and to run it with additional
options '-i -x', push it to github, and await developments.  I'm not
saying it's not cumbersome, because it is, and the p4 cleanup timeout
loop with '-x' is particularly uninteresting...  but it works, though
in this case '-x' doesn't output anything useful.

You could also check how independent branches or even master are
faring, and if they fail at the same tests, as they do in this case,
then it's not your work that causes the breakage.

It seems you experience the same breakage that is explained and fixed
in this thread:

  http://thread.gmane.org/gmane.comp.version-control.git/291917

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
  2016-04-24 12:19                             ` SZEDER Gábor
@ 2016-04-24 13:05                               ` Johannes Schindelin
  0 siblings, 0 replies; 41+ messages in thread
From: Johannes Schindelin @ 2016-04-24 13:05 UTC (permalink / raw)
  To: SZEDER Gábor
  Cc: Junio C Hamano, Lars Schneider, Stefan Beller, Matthieu Moy, lkp,
	Greg KH, git@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 1609 bytes --]

Hi Gábor,

On Sun, 24 Apr 2016, SZEDER Gábor wrote:

> > > Don't worry; I didn't feel offended.  The Travis stuff running on
> > > the branches at http://github.com/git/git would surely catch issues
> > > on MacOSX and/or around git-p4 (neither of which I test myself when
> > > merging to 'pu') before they hit 'next', and that is already helping
> > > us greatly.
> > 
> > I agree that it helps to catch those Mac and P4 issues early.
> > 
> > However, it is possible that bogus errors are reported that might not
> > have been introduced by the changes of the PR, and I find it
> > relatively hard to figure out the specifics. Take for example
> > 
> > 	https://travis-ci.org/git/git/jobs/124767554
> > 
> > It appears that t9824 fails with my interactive rebase work on MacOSX,
> > both Clang and GCC versions. I currently have no access to a Mac for
> > developing (so I am denied my favorite debugging technique: sh t... -i
> > -v -x), and I seem to be unable to find any useful log of what went
> > wrong *specifically*.
> > 
> > Any ideas how to find out?
> 
> You could patch .travis.yml on top of your changes to run only t9824 (so
> travis-ci returns feedback faster) and to run it with additional options
> '-i -x', push it to github, and await developments.  I'm not saying it's
> not cumbersome, because it is, and the p4 cleanup timeout loop with '-x'
> is particularly uninteresting...  but it works, though in this case '-x'
> doesn't output anything useful.

*slaps-his-head* of course! Because I can change the build definition...
Thanks!

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
       [not found]                                         ` <CAE5ih78arC2V76XR7yUoXk77c0d_z3Hzupw6MA1+saS3faXjTw@mail.gmail.com>
@ 2016-04-24 13:05                                           ` Johannes Schindelin
       [not found]                                           ` <1C553D20-26D9-4BF2-B77E-DEAEDDE869E2@gmail.com>
  1 sibling, 0 replies; 41+ messages in thread
From: Johannes Schindelin @ 2016-04-24 13:05 UTC (permalink / raw)
  To: Luke Diamand
  Cc: Lars Schneider, lkp, Greg KH, Stefan Beller, Matthieu Moy,
	Junio C Hamano, git@vger.kernel.org

[-- Attachment #1: Type: text/plain, Size: 1565 bytes --]

Hi Luke,

On Sun, 24 Apr 2016, Luke Diamand wrote:

> On 24 Apr 2016 08:19, "Johannes Schindelin" <Johannes.Schindelin@gmx.de>
> wrote:
> >
> > Hi Lars & Junio,
> >
> > On Fri, 22 Apr 2016, Junio C Hamano wrote:
> >
> > > Lars Schneider <larsxschneider@gmail.com> writes:
> > >
> > > > Thanks for the explanation. My intention was not to be offensive.
> > > > I was curious about your workflow and I was wondering if the
> > > > Travis CI integration could be useful for you in any way.
> > >
> > > Don't worry; I didn't feel offended.  The Travis stuff running on
> > > the branches at http://github.com/git/git would surely catch issues
> > > on MacOSX and/or around git-p4 (neither of which I test myself when
> > > merging to 'pu') before they hit 'next', and that is already helping
> > > us greatly.
> >
> > I agree that it helps to catch those Mac and P4 issues early.
> >
> > However, it is possible that bogus errors are reported that might not have
> > been introduced by the changes of the PR, and I find it relatively hard to
> > figure out the specifics. Take for example
> >
> >         https://travis-ci.org/git/git/jobs/124767554
> >
> > It appears that t9824 fails with my interactive rebase work on MacOSX,
> > both Clang and GCC versions. I
> 
> That test is failing because git-lfs has changed its  output format and
> git-p4 had not yet been taught about this.
> 
> There's a patch from Lars to fix it -  see the mailing list for details.

Yeah, thanks, Gábor provided me with the link.

Ciao,
Johannes

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: 0 bot for Git
       [not found]                                           ` <1C553D20-26D9-4BF2-B77E-DEAEDDE869E2@gmail.com>
@ 2016-04-25 14:07                                             ` Johannes Schindelin
  0 siblings, 0 replies; 41+ messages in thread
From: Johannes Schindelin @ 2016-04-25 14:07 UTC (permalink / raw)
  To: Lars Schneider
  Cc: Luke Diamand, lkp, Greg KH, Stefan Beller, Matthieu Moy,
	Junio C Hamano, git@vger.kernel.org

Hi Lars,

On Sun, 24 Apr 2016, Lars Schneider wrote:

> [...] the current Git Travis CI OSX build always installs the latest
> versions of Git LFS and Perforce via brew [1] and the Linux build
> installs fixed versions [2].  Consequently new LFS/Perforce versions can
> break the OS X build even if there is no change in Git.
> 
> You could argue that this is bad CI practice because CI results should
> be reproducible.  However, it has value to test the latest versions to
> detect integration errors such as the one with Git LFS 1.2. We could add
> additional build jobs to test fixed versions and latest versions but
> this would just burn a lot of CPU cycles... that was the reason why I
> chose the way it is implemented right now. I will add a comment to the
> OSX build to make the current strategy more clear (Linux fixed versions,
> OS X latest versions).
> 
> What do you think about that? Do you agree/disagree? Do you see a better
> way?

I agree with your reasoning.

Ciao,
Dscho

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Parallel checkout (Was Re: 0 bot for Git)
  2016-04-15  9:51         ` Parallel checkout (Was Re: 0 bot for Git) Duy Nguyen
  2016-04-15 11:18           ` Christian Couder
  2016-04-15 15:08           ` Stefan Beller
@ 2016-04-26 11:35           ` Duy Nguyen
  2 siblings, 0 replies; 41+ messages in thread
From: Duy Nguyen @ 2016-04-26 11:35 UTC (permalink / raw)
  To: Christian Couder; +Cc: Stefan Beller, git@vger.kernel.org

On Fri, Apr 15, 2016 at 4:51 PM, Duy Nguyen <pclouds@gmail.com> wrote:
> Numbers are encouraging though. On linux-2.6 repo running on linux and
> ext4 filesystem, checkout_paths() would dominate "git checkout :/".
> Unmodified git takes about 31s.
>
>
> 16:26:00.114029 builtin/checkout.c:1299 performance: 31.184973659 s: checkout_paths
> 16:26:00.114225 trace.c:420             performance: 31.256412935 s: git command: 'git' 'checkout' '.'
>
> When doing write_entry() on 8 processes, it takes 22s (shortened by ~30%)

I continued to develop it into a series. On this same laptop it now
cuts checkout time on linux-2.6 by closer to 50%. However, my other
laptop gives me the opposite result: parallel checkout takes longer to
complete. I suspect the CPU becomes a temporary bottleneck only when
the disks are fast enough. That is where parallel checkout shines,
because it spreads the load out and quickly moves the bottleneck back
to I/O (after a while the I/O queues should be fully populated again).
On systems with slower disks like mine, I/O is always the bottleneck,
and spreading it over many processes just makes things worse (probably
by confusing the I/O scheduler more).
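
For anyone who wants a rough feel for how their own disk reacts without
applying the series, the effect can be approximated from the shell by
chunking the index across several checkout-index processes. The chunk
size and process count below are arbitrary, and this is not the
mechanism the branch actually uses:

    # rewrite the working tree using up to 8 concurrent checkout-index jobs
    # (needs an xargs that supports -P, e.g. GNU or BSD xargs)
    git ls-files -z | xargs -0 -n 512 -P 8 git checkout-index -f --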

Since it's not doing anything for _me_, I'm dropping this. Anybody
interested can check it out and maybe try it from the parallel-checkout
branch [1]. It can probably build on Windows (epoll is gone), and it
probably helps improve performance when a smudge filter is used
(because that can potentially add more load to the CPU). There are more
notes in commit 8fe9b5c (entry.c: parallel checkout support - 2016-04-18).

[1] https://github.com/pclouds/git/commits/parallel-checkout
-- 
Duy

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2016-04-26 11:36 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CAGZ79kYWGFN1W0_y72-V6M3n4WLgtLPzs22bWgs1ObCCDt5BfQ@mail.gmail.com>
2016-04-12  4:29 ` 0 bot for Git Stefan Beller
2016-04-12  6:41   ` Greg KH
2016-04-12  7:23   ` Matthieu Moy
2016-04-12 14:52     ` Stefan Beller
2016-04-12 15:15       ` Philip Li
2016-04-12 20:29       ` Matthieu Moy
2016-04-12 20:49         ` Junio C Hamano
2016-04-13  5:43           ` Matthieu Moy
2016-04-13 12:16             ` Lars Schneider
2016-04-13 12:30               ` Matthieu Moy
2016-04-13 16:14                 ` Lars Schneider
2016-04-13 16:15                 ` Junio C Hamano
2016-04-13  6:11           ` Lars Schneider
2016-04-13 16:27             ` Junio C Hamano
2016-04-13 17:09               ` Lars Schneider
2016-04-13 17:29                 ` Stefan Beller
2016-04-13 17:43                   ` Greg KH
2016-04-16 15:51                   ` Lars Schneider
2016-04-16 18:02                     ` Junio C Hamano
2016-04-22  8:19                       ` Lars Schneider
2016-04-22 17:30                         ` Junio C Hamano
2016-04-24  7:15                           ` Johannes Schindelin
2016-04-24 12:19                             ` SZEDER Gábor
2016-04-24 13:05                               ` Johannes Schindelin
     [not found]                             ` <CAE5ih7-fyuEvSDzmHVfXta3hd_7ZRp1+_VtytDM+u0372Kyidg@mail.gmail.com>
     [not found]                               ` <CAE5ih7_fabo7DyZyHoxdp1Y4ygby2qwKA8E1N6MuGzHa60Wf4w@mail.gmail.com>
     [not found]                                 ` <CAE5ih7-DzFe_3=kyan=E17zCo-iNdNdH7DE5ZwYVHmFvUBkDkA@mail.gmail.com>
     [not found]                                   ` <CAE5ih7-d9ow0dF6wfWCkmx+HAQOzVBONGCC_U4-2bkDUZGPecQ@mail.gmail.com>
     [not found]                                     ` <CAE5ih7_OEAWjYm9LwMAwBCtnvG+KgGo1aFuT9CyQjeN6nFmC5w@mail.gmail.com>
     [not found]                                       ` <CAE5ih7-z8K5Z7HuBa=pF53b6obU60ZCxrEkTLWbaSMsg0G1Ctg@mail.gmail.com>
     [not found]                                         ` <CAE5ih78arC2V76XR7yUoXk77c0d_z3Hzupw6MA1+saS3faXjTw@mail.gmail.com>
2016-04-24 13:05                                           ` Johannes Schindelin
     [not found]                                           ` <1C553D20-26D9-4BF2-B77E-DEAEDDE869E2@gmail.com>
2016-04-25 14:07                                             ` Johannes Schindelin
2016-04-13 17:47                 ` Junio C Hamano
2016-04-13 13:44           ` Fengguang Wu
2016-04-12  9:42   ` Duy Nguyen
2016-04-12 14:59     ` Stefan Beller
2016-04-14 22:04       ` Christian Couder
2016-04-15  9:51         ` Parallel checkout (Was Re: 0 bot for Git) Duy Nguyen
2016-04-15 11:18           ` Christian Couder
2016-04-15 11:32             ` Duy Nguyen
2016-04-15 16:52             ` Jeff King
2016-04-15 17:31               ` Junio C Hamano
2016-04-15 17:38                 ` Jeff King
2016-04-16  5:17               ` Michael Haggerty
2016-04-15 15:08           ` Stefan Beller
2016-04-16  0:16             ` Duy Nguyen
2016-04-26 11:35           ` Duy Nguyen

Code repositories for project(s) associated with this public inbox

	https://80x24.org/mirrors/git.git

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).