git@vger.kernel.org mailing list mirror (one of many)
* High locking contention during repack?
@ 2018-12-12  3:01 Iucha, Florin
  2018-12-12 11:24 ` Jeff King
  0 siblings, 1 reply; 9+ messages in thread
From: Iucha, Florin @ 2018-12-12  3:01 UTC (permalink / raw)
  To: git@vger.kernel.org


Greetings,

I am running “git-repack  -A -d -f -F --window=250 --depth=250” on a Git repository converted using git-svn.

The repository contains more than 10 years of development and a mixture of source code and media assets. The size of the objects directory is around 50GB, and there are around 700k objects. There are several svn branches.

I have run the repack with Git 2.17.1 (the stock package from Ubuntu 18.04), 2.19.1 (the package from the Ubuntu Git PPA), and with git next (bc1bbc6f855c3b5ef7fcbd0f688f647c4e5b208b).

The system is a 16-core / 32-thread Threadripper with 128GB of RAM and NVMe storage. The repack starts strong, with 32 threads, but it fairly quickly gets to 99% done and the number of threads drops to 4, then 3, then 2. However, running “dstat 5” I see lots of “sys” time without any IO time (the network traffic you see is caused by SSH).

--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
24  24  48   4   0|   0     0 | 362B 8392B|   0     0 |  64k 5042
27  25  44   4   0|   0     0 | 289B 4830B|   0     0 |  36k 5017
24  24  48   4   0|   0     0 | 362B 8392B|   0     0 |  64k 5042
27  25  44   4   0|   0     0 | 289B 4830B|   0     0 |  36k 5017
20  22  54   4   0|   0     0 | 330B 4826B|   0     0 |  31k 4657
22  23  51   3   0|   0     0 | 286B 4834B|   0     0 |  27k 4536
17  23  56   4   0|   0     0 | 299B 4820B|   0     0 |  29k 4099
24  23  49   4   0|   0     0 | 347B 4830B|   0     0 |  32k 4436
27  24  45   4   0|   0     0 | 292B 4820B|   0     0 |  22k 4513
26  27  44   3   0|   0     0 | 292B 4820B|   0     0 |  24k 4878
25  28  44   4   0|   0     0 | 329B 4837B|   0     0 |  45k 5023
23  30  44   3   0|   0  9830B| 370B 4836B|   0     0 |  40k 5064
23  26  47   4   0|   0     0 | 373B 4848B|   0     0 |  52k 4746
25  26  45   4   0|   0     0 | 323B 4833B|   0     0 |  31k 4874
24  28  45   3   0|   0     0 | 269B 4842B|   0     0 |  30k 4820
25  28  43   4   0|   0     0 | 280B 4826B|   0     0 |  45k 5237
25  29  43   3   0|   0     0 | 263B 4826B|   0     0 |  26k 5037
22  30  45   4   0|   0     0 | 287B 4823B|   0     0 |  45k 4154
  0  53  42   5   0|   0     0 | 275B 1210B|   0     0 |5017  1331
  0  53  42   5   0|   0     0 |  97B    0 |   0     0 |5006  1272
  0  53  42   5   0|   0     0 |  70B    0 |   0     0 |5008  1272
  0  53  42   5   0|   0     0 |  24B    0 |   0     0 |5007  1283
11  43  43   4   0|   0     0 | 644B   19k|   0     0 |  20k 2995
25  28  43   4   0|   0   123k| 304B 4802B|   0     0 |  29k 5118
25  27  44   4   0|   0     0 | 297B 4800B|   0     0 |  41k 5121
26  27  43   4   0|   0     0 | 320B 4786B|   0     0 |  29k 4973
22  30  44   3   0|   0     0 | 332B 4786B|   0     0 |  32k 4610
24  27  46   3   0|   0     0 | 296B 4786B|   0     0 |  56k 4632
35  17  44   3   0|   0     0 | 292B 4776B|   0     0 | 100k 4889
34  20  43   4   0|   0  9830B| 610B 4820B|   0     0 |  59k 4811


This is when it gets to the two-thread phase:

  1  40  59   0   0|   0    38k| 373B 2458B|   0     0 |  50k 2874
  0  40  59   0   0|   0     0 | 268B 2892B|   0     0 |  10k 2858
  0  40  59   0   0|   0     0 | 299B 3102B|   0     0 |8392  2919
  1  40  59   0   0|   0     0 | 297B 3176B|   0     0 |  21k 2831
  0  40  59   0   0|   0     0 | 281B 3414B|   0     0 |9192  2856
  1  40  59   0   0|   0     0 | 264B 3493B|   0     0 |  17k 2905
  1  40  59   0   0|   0    35k| 340B 3571B|   0     0 |  13k 2885

 

The repack still progresses, as I see the count of repacked objects being incremented every few minutes. But I suspect there is some terrible lock contention, because otherwise I can’t explain the high system usage with no IO.

Attaching strace to the running git-repack process shows only these:
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
--- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---

Any idea on how to debug this? I have run git-repack under gdb, but it seems to spin on builtin/repack.c line 409.

Thank you for Git,

florin


* Re: High locking contention during repack?
  2018-12-12  3:01 High locking contention during repack? Iucha, Florin
@ 2018-12-12 11:24 ` Jeff King
  2018-12-12 14:04   ` Ævar Arnfjörð Bjarmason
  2018-12-12 16:49   ` Iucha, Florin
  0 siblings, 2 replies; 9+ messages in thread
From: Jeff King @ 2018-12-12 11:24 UTC (permalink / raw)
  To: Iucha, Florin; +Cc: git@vger.kernel.org

On Wed, Dec 12, 2018 at 03:01:47AM +0000, Iucha, Florin wrote:

> I am running “git-repack  -A -d -f -F --window=250 --depth=250” on a
> Git repository converted using git-svn.

Sort of tangential to your question, but:

  - Using "-F" implies "-f" already, so you don't need both. That said,
    you are probably wasting CPU to use "-F", unless you have adjusted
    zlib compression settings since the last pack. (Whereas using "-f"
    is useful, if you're willing to pay the CPU tradeoff).

  - Using --depth=250 does not actually decrease the packfile size very
    much, and results in a packfile which is more expensive for
    subsequent processes to use. Some experimentation showed that
    --depth=50 is a sweet spot, and that is the default for both normal
    "git gc" and "git gc --aggressive" these days.

    See 07e7dbf0db (gc: default aggressive depth to 50, 2016-08-11) for
    more discussion.
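
For example, applied to your invocation that would be something like
(just a sketch, keeping the full delta search but dropping -F and the
deep chains):

  git repack -A -d -f --window=250 --depth=50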

> The system is a 16 core / 32 thread Threadripper with 128GB of RAM and
> NVMe storage. The repack starts strong, with 32 threads but it fairly
> quickly gets to 99% done and the number of threads drops to 4 then 3
> then 2. However, running “dstat 5” I see lots of “sys” time without
> any IO time (the network traffic you see is caused by SSH).

This sounds mostly normal and expected. The parallel part of a repack is
the delta search, which is not infinitely parallelizable. Each worker
thread is given a "chunk" of objects, and it uses a sliding window to
search for delta pairs through that chunk. You don't want a chunk that
approaches the window size, since at every chunk boundary you're missing
delta possibilities.

The default chunk size is about 1/nr_threads of the total list size
(i.e., we portion out all the work). And then when a thread finishes, we
take work from the thread with the most work remaining, and portion it
out. However, at some point the chunks approach their minimum, and we
stop dividing. So the number of threads will drop, eventually to 1, and
you'll be waiting on it to finish that final chunk.
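
(As a back-of-the-envelope example with your numbers: ~700k objects
split across 32 threads is on the order of 20k objects per initial
chunk; once the remaining chunks have been split down to their minimum
size, there is nothing left to hand out to idle threads.)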

So that's all working as planned. Having high sys load does seem a bit
odd. Most of the effort should be going to reading the mmap'd data from
disk, zlib-inflating it and computing a fingerprint, and then comparing
the fingerprints. So that would mostly be user time.
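
(If you want to see where that sys time is going, one generic thing to
try, untested here, is attaching a syscall summary to the pack-objects
child for a little while:

  strace -c -f -p <pid of pack-objects>

and interrupting it after a bit to get the per-syscall totals.)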

> Running a strace on the running git-repack process shows only these:
> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
> 
> Any idea on how to debug this? I have run git-repack under gdb, but it seems to spin on builtin/repack.c line 409.

The heavy lifting here is done by the pack-objects child process, not
git-repack itself. Try running with GIT_TRACE=1 in the environment to
see the exact invocation, but timing and debugging:

  git pack-objects --all --no-reuse-delta --delta-base-offset --stdout \
    </dev/null >/dev/null

should produce interesting results.
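
E.g., something like:

  GIT_TRACE=1 git repack -A -d -f --window=250 --depth=50

should print the exact pack-objects command line that repack spawns.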

The SIGALRM loop you see above is likely just the progress meter
triggering once per second (the actual worker threads are updating an
int, and then at least once per second we'll show the int).

-Peff


* Re: High locking contention during repack?
  2018-12-12 11:24 ` Jeff King
@ 2018-12-12 14:04   ` Ævar Arnfjörð Bjarmason
  2018-12-12 16:49   ` Iucha, Florin
  1 sibling, 0 replies; 9+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2018-12-12 14:04 UTC (permalink / raw)
  To: Jeff King; +Cc: Iucha, Florin, git@vger.kernel.org


On Wed, Dec 12 2018, Jeff King wrote:

> On Wed, Dec 12, 2018 at 03:01:47AM +0000, Iucha, Florin wrote:
>
>> I am running “git-repack  -A -d -f -F --window=250 --depth=250” on a
>> Git repository converted using git-svn.
>
> Sort of tangential to your question, but:
>
>   - Using "-F" implies "-f" already, so you don't need both. That said,
>     you are probably wasting CPU to use "-F", unless you have adjusted
>     zlib compression settings since the last pack. (Whereas using "-f"
>     is useful, if you're willing to pay the CPU tradeoff).
>
>   - Using --depth=250 does not actually decrease the packfile size very
>     much, and results in a packfile which is more expensive for
>     subsequent processes to use. Some experimentation showed that
>     --depth=50 is a sweet spot, and that is the default for both normal
>     "git gc" and "git gc --aggressive" these days.
>
>     See 07e7dbf0db (gc: default aggressive depth to 50, 2016-08-11) for
>     more discussion.

I wonder if the default depth is still too high. I'd rather take the
5-10% speedup quoted in that commit than a slight decrease in disk
space use.

Here is the git.git repository repacked with the command from that
commit message, at various --depth=$n values:

    573M    /tmp/git-1
    200M    /tmp/git-2
    118M    /tmp/git-3
    91M     /tmp/git-4
    82M     /tmp/git-5
    77M     /tmp/git-6
    74M     /tmp/git-7
    72M     /tmp/git-8
    71M     /tmp/git-9
    70M     /tmp/git-10
    67M     /tmp/git-15
    66M     /tmp/git-20
    65M     /tmp/git-40
    65M     /tmp/git-35
    65M     /tmp/git-30
    65M     /tmp/git-25
    64M     /tmp/git-50
    64M     /tmp/git-45

Produced via:

    parallel 'rm -rf /tmp/git-{}; cp -Rvp /tmp/git.git/ /tmp/git-{} && git -C /tmp/git-{} repack -adf --depth={} --window=250' ::: {1..10} 15 20 25 30 35 40 45 50

And then the log -S benchmarks:

              s/iter log -S 1 log -S 50 log -S 45 log -S 35 log -S 30 log -S 25 log -S 20 log -S 15 log -S 10 log -S 5
    log -S 1    12.6       --      -26%      -27%      -32%      -36%      -38%      -38%      -39%      -40%     -42%
    log -S 50   9.28      36%        --       -0%       -7%      -13%      -15%      -16%      -18%      -19%     -21%
    log -S 45   9.25      36%        0%        --       -7%      -12%      -15%      -16%      -17%      -19%     -20%
    log -S 35   8.62      46%        8%        7%        --       -6%       -9%      -10%      -11%      -13%     -14%
    log -S 30   8.12      55%       14%       14%        6%        --       -3%       -4%       -6%       -8%      -9%
    log -S 25   7.86      60%       18%       18%       10%        3%        --       -1%       -3%       -5%      -6%
    log -S 20   7.77      62%       19%       19%       11%        4%        1%        --       -2%       -3%      -5%
    log -S 15   7.64      65%       21%       21%       13%        6%        3%        2%        --       -2%      -4%
    log -S 10   7.51      68%       24%       23%       15%        8%        5%        3%        2%        --      -2%
    log -S 5    7.37      71%       26%       25%       17%       10%        7%        5%        4%        2%       --

So we're getting a ~20% speedup in log -S by using --depth=5 instead of
--depth=50, at the cost of about 20% more disk space, but we could also
get a 19% speedup for only 8% more disk space (--depth=10), etc.

Then in rev-list it makes less of a difference, narrowing this down a
bit:

                s/iter rev-list 50 rev-list 25 rev-list 10  rev-list 5
    rev-list 50   1.61          --         -1%         -4%         -5%
    rev-list 25   1.60          1%          --         -3%         -4%
    rev-list 10   1.55          4%          3%          --         -1%
    rev-list 5    1.54          5%          4%          1%          --

It seems to me we should at least move this to --depth=25 by default. A
~15% speedup in log -S & other mass-blob lookups for 1.5% more disk
space seems like a good trade-off. As your commit notes, RAM (or disk
space) is not commonly the bottleneck anymore.
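
(Anyone wanting to try a shallower depth locally without waiting for a
default change can presumably just set it per repository, e.g.:

    git config pack.depth 25
    git config gc.aggressiveDepth 25

and let repack/gc pick that up.)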

>> The system is a 16 core / 32 thread Threadripper with 128GB of RAM and
>> NVMe storage. The repack starts strong, with 32 threads but it fairly
>> quickly gets to 99% done and the number of threads drops to 4 then 3
>> then 2. However, running “dstat 5” I see lots of “sys” time without
>> any IO time (the network traffic you see is caused by SSH).
>
> This sounds mostly normal and expected. The parallel part of a repack is
> the delta search, which is not infinitely parallelizable. Each worker
> thread is given a "chunk" of objects, and it uses a sliding window to
> search for delta pairs through that chunk. You don't want a chunk that
> approaches the window size, since at every chunk boundary you're missing
> delta possibilities.
>
> The default chunk size is about 1/nr_threads of the total list size
> (i.e., we portion out all the work). And then when a thread finishes, we
> take work from the thread with the most work remaining, and portion it
> out. However, at some point the chunks approach their minimum, and we
> stop dividing. So the number of threads will drop, eventually to 1, and
> you'll be waiting on it to finish that final chunk.
>
> So that's all working as planned. Having high sys load does seem a bit
> odd. Most of the effort should be going to reading the mmap'd data from
> disk, zlib-inflating it and computing a fingerprint, and then comparing
> the fingerprints. So that would mostly be user time.
>
>> Running a strace on the running git-repack process shows only these:
>> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>> --- SIGALRM {si_signo=SIGALRM, si_code=SI_KERNEL} ---
>>
>> Any idea on how to debug this? I have run git-repack under gdb, but it seems to spin on builtin/repack.c line 409.
>
> The heavy lifting here is done by the pack-objects child process, not
> git-repack itself. Try running with GIT_TRACE=1 in the environment to
> see the exact invocation, but timing and debugging:
>
>   git pack-objects --all --no-reuse-delta --delta-base-offset --stdout \
>     </dev/null >/dev/null
>
> should produce interesting results.
>
> The SIGALRM loop you see above is likely just the progress meter
> triggering once per second (the actual worker threads are updating an
> int, and then at least once per second we'll show the int).
>
> -Peff


* RE: High locking contention during repack?
  2018-12-12 11:24 ` Jeff King
  2018-12-12 14:04   ` Ævar Arnfjörð Bjarmason
@ 2018-12-12 16:49   ` Iucha, Florin
  2018-12-12 16:54     ` Ævar Arnfjörð Bjarmason
  2018-12-12 18:08     ` Iucha, Florin
  1 sibling, 2 replies; 9+ messages in thread
From: Iucha, Florin @ 2018-12-12 16:49 UTC (permalink / raw)
  To: Jeff King; +Cc: git@vger.kernel.org

Jeff,

Thank you for the advice, I will reduce the depth.

Running "git pack-objects --all --no-reuse-delta --delta-base-offset --stdout </dev/null >/dev/null", it got to 99% fairly quickly, now it has 5 threads running (using 99.9% CPU), but the "dstat 5" still shows lots of nothing, with some heavy system activity:

--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw 
  1   8  91   0   0| 166k  224k|   0     0 |   0     0 |6892   970 
  0  13  87   0   0|   0     0 | 333B 1758B|   0     0 |2285  1374 
  0  13  87   0   0|   0     0 | 309B 1190B|   0     0 |2233  1265 
  0  13  87   0   0|   0     0 | 282B 1053B|   0     0 |2174  1239 
  0  13  87   0   0|   0     0 | 278B 1251B|   0     0 |1930  1173 
  0  13  87   0   0|   0     0 | 274B  942B|   0     0 |1987  1139 
  0  13  87   0   0|   0     0 | 264B 1168B|   0     0 |1928  1205 
  0  13  87   0   0|   0     0 | 388B 2435B|   0     0 |2245  1280 
  0  13  87   0   0|   0     0 | 268B 1145B|   0     0 |2225  1221 
  0  13  87   0   0|   0     0 | 164B  732B|   0     0 |2607  1333 
  0  13  87   0   0|   0     0 | 156B  996B|   0     0 |2100  1270 
  0  13  87   0   0|   0     0 | 206B 1019B|   0     0 |2192  1296 
  0  13  87   0   0|   0     0 | 198B  824B|   0     0 |2019  1236 
  0  13  87   0   0|   0     0 | 245B  435B|   0     0 |1974  1195 
  0  13  87   0   0|   0     0 | 252B  855B|   0     0 |1852  1166 
  0  13  87   0   0|   0     0 | 230B  758B|   0     0 |2066  1299 
  0  13  87   0   0|   0     0 | 284B  925B|   0     0 |1860  1225 
  0  13  87   0   0|   0     0 | 289B 2682B|   0     0 |1796  1197 
  0  13  87   0   0|   0     0 | 939B 1263B|   0     0 |1913  1304 
  0  13  87   0   0|   0     0 |1212B 1366B|   0     0 |1915  1343

I will try running the command under the debugger and stop it when it gets to this point and poke around.

florin


* Re: High locking contention during repack?
  2018-12-12 16:49   ` Iucha, Florin
@ 2018-12-12 16:54     ` Ævar Arnfjörð Bjarmason
  2018-12-12 18:08     ` Iucha, Florin
  1 sibling, 0 replies; 9+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2018-12-12 16:54 UTC (permalink / raw)
  To: Iucha, Florin; +Cc: Jeff King, Git Mailing List

On Wed, Dec 12, 2018 at 5:52 PM Iucha, Florin <Florin.Iucha@amd.com> wrote:
>
> Jeff,
>
> Thank you for the advice, I will reduce the depth.
>
> Running "git pack-objects --all --no-reuse-delta --delta-base-offset --stdout </dev/null >/dev/null", it got to 99% fairly quickly, now it has 5 threads running (using 99.9% CPU), but the "dstat 5" still shows lots of nothing, with some heavy system activity:
>
> --total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
> usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw
>   1   8  91   0   0| 166k  224k|   0     0 |   0     0 |6892   970
>   0  13  87   0   0|   0     0 | 333B 1758B|   0     0 |2285  1374
>   0  13  87   0   0|   0     0 | 309B 1190B|   0     0 |2233  1265
>   0  13  87   0   0|   0     0 | 282B 1053B|   0     0 |2174  1239
>   0  13  87   0   0|   0     0 | 278B 1251B|   0     0 |1930  1173
>   0  13  87   0   0|   0     0 | 274B  942B|   0     0 |1987  1139
>   0  13  87   0   0|   0     0 | 264B 1168B|   0     0 |1928  1205
>   0  13  87   0   0|   0     0 | 388B 2435B|   0     0 |2245  1280
>   0  13  87   0   0|   0     0 | 268B 1145B|   0     0 |2225  1221
>   0  13  87   0   0|   0     0 | 164B  732B|   0     0 |2607  1333
>   0  13  87   0   0|   0     0 | 156B  996B|   0     0 |2100  1270
>   0  13  87   0   0|   0     0 | 206B 1019B|   0     0 |2192  1296
>   0  13  87   0   0|   0     0 | 198B  824B|   0     0 |2019  1236
>   0  13  87   0   0|   0     0 | 245B  435B|   0     0 |1974  1195
>   0  13  87   0   0|   0     0 | 252B  855B|   0     0 |1852  1166
>   0  13  87   0   0|   0     0 | 230B  758B|   0     0 |2066  1299
>   0  13  87   0   0|   0     0 | 284B  925B|   0     0 |1860  1225
>   0  13  87   0   0|   0     0 | 289B 2682B|   0     0 |1796  1197
>   0  13  87   0   0|   0     0 | 939B 1263B|   0     0 |1913  1304
>   0  13  87   0   0|   0     0 |1212B 1366B|   0     0 |1915  1343
>
> I will try running the command under the debugger and stop it when it gets to this point and poke around.

FWIW compiling with gcc's gprof support is probably a better way to
figure out "what was my program doing all this time?".


* RE: High locking contention during repack?
  2018-12-12 16:49   ` Iucha, Florin
  2018-12-12 16:54     ` Ævar Arnfjörð Bjarmason
@ 2018-12-12 18:08     ` Iucha, Florin
  2018-12-12 18:30       ` Iucha, Florin
  1 sibling, 1 reply; 9+ messages in thread
From: Iucha, Florin @ 2018-12-12 18:08 UTC (permalink / raw)
  To: Jeff King; +Cc: git@vger.kernel.org, Ævar Arnfjörð Bjarmason

(Apologies for top-posting, but making Outlook quote properly is a bear.)

Down to 1+4 threads:

--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read  writ| recv  send|  in   out | int   csw 
  0  13  87   0   0|   0     0 |2059B   15k|   0     0 |2200  1730 
  0  13  87   0   0|   0     0 | 585B 2754B|   0     0 |1792  1269 
  0  13  87   0   0|   0     0 | 178B  734B|   0     0 |1691  1157 
  0  13  87   0   0|   0  6554B| 225B  734B|   0     0 |1885  1238 
  0  13  87   0   0|   0     0 | 159B  736B|   0     0 |1958  1262 
  0  13  87   0   0|   0     0 | 310B  760B|   0     0 |1813  1175

(gdb) info threads
  Id   Target Id         Frame 
* 1    Thread 0x7f7b837dcb80 (LWP 11918) "git" 0x000055c7dded56eb in ll_find_deltas (processed=0x7ffce7da82a8, depth=<optimized out>, window=<optimized out>, 
    list_size=0, list=<optimized out>) at builtin/pack-objects.c:2516
  2    Thread 0x7f782e16a700 (LWP 11929) "git" create_delta (index=0x55c807d2ff20, trg_buf=<optimized out>, trg_size=trg_size@entry=9677636, 
    delta_size=delta_size@entry=0x7f782e169e78, max_size=3585674) at diff-delta.c:381
  3    Thread 0x7f6fe77fe700 (LWP 11948) "git" create_delta (index=0x7f6e35171010, trg_buf=<optimized out>, trg_size=trg_size@entry=163385853, 
    delta_size=delta_size@entry=0x7f6fe77fde78, max_size=2328926) at diff-delta.c:381
  4    Thread 0x7f6fe67fc700 (LWP 11950) "git" create_delta (index=0x7f6e3f7f3010, trg_buf=<optimized out>, trg_size=trg_size@entry=74543780, 
    delta_size=delta_size@entry=0x7f6fe67fbe78, max_size=2582505) at diff-delta.c:381
  5    Thread 0x7f6fe5ffb700 (LWP 11951) "git" create_delta (index=0x7f6f01d06010, trg_buf=<optimized out>, trg_size=trg_size@entry=28510185, 
    delta_size=delta_size@entry=0x7f6fe5ffae78, max_size=9146484) at diff-delta.c:381
(gdb) bt
#0  0x00007f7b82f909f3 in futex_wait_cancelable (private=<optimized out>, expected=0, futex_word=0x55c7de2ff348 <progress_cond+40>)
    at ../sysdeps/unix/sysv/linux/futex-internal.h:88
#1  __pthread_cond_wait_common (abstime=0x0, mutex=0x55c7de2ff360 <progress_mutex>, cond=0x55c7de2ff320 <progress_cond>) at pthread_cond_wait.c:502
#2  __pthread_cond_wait (cond=cond@entry=0x55c7de2ff320 <progress_cond>, mutex=mutex@entry=0x55c7de2ff360 <progress_mutex>) at pthread_cond_wait.c:655
#3  0x000055c7dded56eb in ll_find_deltas (processed=0x7ffce7da82a8, depth=<optimized out>, window=<optimized out>, list_size=0, list=<optimized out>)
    at builtin/pack-objects.c:2516
#4  prepare_pack (depth=<optimized out>, window=<optimized out>) at builtin/pack-objects.c:2673
#5  cmd_pack_objects (argc=<optimized out>, argv=<optimized out>, prefix=<optimized out>) at builtin/pack-objects.c:3491
#6  0x000055c7dde74711 in run_builtin (argv=<optimized out>, argc=<optimized out>, p=<optimized out>) at git.c:421
#7  handle_builtin (argc=<optimized out>, argv=<optimized out>) at git.c:647
#8  0x000055c7dde75725 in run_argv (argv=0x7ffce7daa7e0, argcp=0x7ffce7daa7ec) at git.c:701
#9  cmd_main (argc=<optimized out>, argv=<optimized out>) at git.c:798
#10 0x000055c7dde7437f in main (argc=6, argv=0x7ffce7daaa58) at common-main.c:45
(gdb) thread 2
[Switching to thread 2 (Thread 0x7f782e16a700 (LWP 11929))]
#0  create_delta (index=0x55c807d2ff20, trg_buf=<optimized out>, trg_size=trg_size@entry=9677636, delta_size=delta_size@entry=0x7f782e169e78, max_size=3585674)
    at diff-delta.c:381
381                                     if (entry->val != val)
(gdb) bt
#0  create_delta (index=0x55c807d2ff20, trg_buf=<optimized out>, trg_size=trg_size@entry=9677636, delta_size=delta_size@entry=0x7f782e169e78, max_size=3585674)
    at diff-delta.c:381
#1  0x000055c7dded0606 in try_delta (mem_usage=<synthetic pointer>, max_depth=50, src=0x7f702c256bc0, trg=0x7f702c256ca0) at builtin/pack-objects.c:2129
#2  find_deltas (list=<optimized out>, list_size=list_size@entry=0x55c7e467daf4, window=<optimized out>, depth=<optimized out>, processed=<optimized out>)
    at builtin/pack-objects.c:2262
#3  0x000055c7dded0e29 in threaded_find_deltas (arg=0x55c7e467dae0) at builtin/pack-objects.c:2408
#4  0x00007f7b82f8a6db in start_thread (arg=0x7f782e16a700) at pthread_create.c:463
#5  0x00007f7b82cb388f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb) thread 2
[Switching to thread 2 (Thread 0x7f782e16a700 (LWP 11929))]
#0  create_delta (index=0x55c807d2ff20, trg_buf=<optimized out>, trg_size=trg_size@entry=9677636, delta_size=delta_size@entry=0x7f782e169e78, max_size=3585674)
    at diff-delta.c:381
381                                     if (entry->val != val)
(gdb) bt
#0  create_delta (index=0x55c807d2ff20, trg_buf=<optimized out>, trg_size=trg_size@entry=9677636, delta_size=delta_size@entry=0x7f782e169e78, max_size=3585674)
    at diff-delta.c:381
#1  0x000055c7dded0606 in try_delta (mem_usage=<synthetic pointer>, max_depth=50, src=0x7f702c256bc0, trg=0x7f702c256ca0) at builtin/pack-objects.c:2129
#2  find_deltas (list=<optimized out>, list_size=list_size@entry=0x55c7e467daf4, window=<optimized out>, depth=<optimized out>, processed=<optimized out>)
    at builtin/pack-objects.c:2262
#3  0x000055c7dded0e29 in threaded_find_deltas (arg=0x55c7e467dae0) at builtin/pack-objects.c:2408
#4  0x00007f7b82f8a6db in start_thread (arg=0x7f782e16a700) at pthread_create.c:463
#5  0x00007f7b82cb388f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

florin


* RE: High locking contention during repack?
  2018-12-12 18:08     ` Iucha, Florin
@ 2018-12-12 18:30       ` Iucha, Florin
  2018-12-12 19:05         ` Iucha, Florin
  0 siblings, 1 reply; 9+ messages in thread
From: Iucha, Florin @ 2018-12-12 18:30 UTC (permalink / raw)
  To: git@vger.kernel.org; +Cc: Ævar Arnfjörð Bjarmason, Jeff King

It seems to be spending a lot of time in create_delta. I have added a breakpoint on create_delta, and a good few minutes pass between hits of the breakpoint. The large loop in create_delta has practically no outgoing function calls (save for the realloc on line 476), so I still don't understand how dstat could report 0% user time. htop indicates the worker threads are using 100% CPU time.

florin


* RE: High locking contention during repack?
  2018-12-12 18:30       ` Iucha, Florin
@ 2018-12-12 19:05         ` Iucha, Florin
  2018-12-12 21:50           ` Iucha, Florin
  0 siblings, 1 reply; 9+ messages in thread
From: Iucha, Florin @ 2018-12-12 19:05 UTC (permalink / raw)
  To: git@vger.kernel.org; +Cc: Ævar Arnfjörð Bjarmason, Jeff King

Is there an easy way to figure out what object we're processing in create_delta? I have filtered out some large data files in .gitattributes already.

florin


* RE: High locking contention during repack?
  2018-12-12 19:05         ` Iucha, Florin
@ 2018-12-12 21:50           ` Iucha, Florin
  0 siblings, 0 replies; 9+ messages in thread
From: Iucha, Florin @ 2018-12-12 21:50 UTC (permalink / raw)
  To: git@vger.kernel.org; +Cc: Ævar Arnfjörð Bjarmason, Jeff King

> Is there an easy way to figure out what object we're processing in create_delta? I have filtered out some large data files in .gitattributes already.

For the record:
* add breakpoint on create_delta at diff-delta.c:321
* stop the execution
* switch to the worker thread
* select frame 3: threaded_find_deltas at builtin/pack-objects.c:2408
* p/x me->list.idx.oid.hash
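
As a rough gdb transcript of those steps (assuming the same frame
layout as in the earlier backtrace; untested):

(gdb) break diff-delta.c:321
(gdb) continue
(gdb) thread 2
(gdb) frame 3
(gdb) p/x me->list.idx.oid.hash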

florin

