Hi René,

On Sat, 27 Apr 2019, René Scharfe wrote:

> On 27.04.19 at 11:59, René Scharfe wrote:
> > On 26.04.19 at 16:51, Johannes Schindelin wrote:
> >>
> >> On Mon, 15 Apr 2019, Jeff King wrote:
> >>
> >>> On Sun, Apr 14, 2019 at 12:01:10AM +0200, René Scharfe wrote:
> >>>
> >>>> Doing compression in its own thread may be a good idea.
> >>>
> >>> Yeah. It might even make the patch simpler, since I'd expect it to
> >>> be implemented with start_async() and a descriptor, making it look
> >>> just like a gzip pipe to the caller. :)
> >>
> >> Sadly, it does not really look like it is simpler.
> >
> > I have to agree -- at least I was unable to pull off the stdout
> > plumbing trick.
>
> The simplest solution is of course to not touch the archive code.

We could do that, of course. We could also avoid adding a new command
that we would have to support for eternity by introducing a command mode
for `git archive` instead (think: `git archive --gzip -9`), marking that
command mode clearly as an internal implementation detail.

But since the performance is still not quite on par with `gzip`, I would
rather just punt on that one and state that people interested in higher
performance should use `pigz`.

And who knows, maybe nobody will complain about the performance at all?
It is not as if `gzip` were really, really fast (IIRC LZO blows gzip out
of the water, speed-wise). And if we get "bug" reports about this, we

1) have a very easy workaround:

	git config --global tar.tgz.command 'gzip -cn'

2) could always implement a pigz-like multi-threading solution. I
   strongly expect a YAGNI here, though.

Ciao,
Dscho
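P.S.: For anyone who wants to try the workaround end to end, here is a
sketch using a scratch repository. It assumes git's standard
`tar.<format>.command` mechanism for archive output filters (with
`gzip -cn` as the stock filter); the identity values and file names are
placeholders:

```shell
# Sketch: exercise the tar-filter workaround in a throwaway repository.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com   # placeholder identity
git config user.name "You"
echo hello >file.txt
git add file.txt
git commit -qm initial

# Route .tgz archives through an external gzip. This is repo-local;
# add --global to apply it everywhere:
git config tar.tgz.command 'gzip -cn'

# Produce a gzip'ed tarball and sanity-check the stream:
git archive --format=tgz --output=snapshot.tgz HEAD
gzip -t snapshot.tgz
```

Swapping in `pigz -cn` as the filter value is the multi-threaded
variant mentioned above, for those who have `pigz` installed.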