On Tue, Jul 16, 2019 at 8:00 AM Florian Weimer wrote:
>
> One experiment I'd like to do (and maybe someone wants to help with
> that): Instrument CC and CXX invocations with something that captures
> the current directory and the command lines during a regular build.
> Afterwards, replay all the compiler invocations, in parallel.  (This may
> need some manual tweaks for adding barriers, but perhaps not if we run
> this test on an already-built tree.)  This should give a number: How
> much time does it take, at minimum, to build glibc, without make
> overhead or artificial serialization.  It will tell us how inefficient
> the current build system really is.  Is the make overhead for a
> from-scratch build just those 12 seconds I mentioned above, or is it
> much larger?
>
> This should give us some guidance whether we need to focus on
> from-scratch performance, or on making incremental builds accurate.

This isn't the data point you were asking for, but it's a complementary
one: I wrote a simple program (attached, and sketched below; it's in
Python, so we've got interpreter overhead in here, but I expect the
dominant factor is OS overhead) that calls lstat() on every file in a
set of directory trees, without descending into any subdirectory named
'.git', records the results in a data structure, and then reports how
long that took.  This should approximate the minimum possible time for
an ideal build tool to determine that an incremental build has nothing
to do.

On the computer I'm typing this on (Xeon E3-1245 v3, Linux 4.19,
speculative-execution mitigations active, SSD), I ran this test 50 times
on a glibc source and build tree and got a median time of 0.181 seconds,
with an interquartile range of 0.00163 seconds.

zw
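
P.S. In case the attachment gets stripped somewhere along the way, the
script is essentially the following (a sketch of the idea, not the
attached program verbatim; the file name and details are illustrative):

#!/usr/bin/env python3
# Sketch of the measurement described above: lstat() every entry under
# a set of directory trees, skipping anything inside a '.git'
# directory, and report how long the whole traversal took.
import os
import sys
import time

def stat_tree(root, results):
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune '.git' in place so os.walk doesn't descend into it.
        dirnames[:] = [d for d in dirnames if d != '.git']
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                results[path] = os.lstat(path)
            except OSError:
                pass

def main(roots):
    results = {}
    start = time.monotonic()
    for root in roots:
        stat_tree(root, results)
    elapsed = time.monotonic() - start
    print("lstat'ed %d entries in %.6f seconds" % (len(results), elapsed))

if __name__ == '__main__':
    main(sys.argv[1:] or ['.'])

Run it as, e.g., python3 statwalk.py <srcdir> <builddir>.  It uses
lstat() rather than stat() so that symbolic links are recorded without
being followed.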
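
As for the capture half of the experiment you describe at the top,
something as simple as the following wrapper, pointed to by CC/CXX,
might be enough to record the working directory and command line of
each compiler invocation (an untested sketch; REAL_CC, CC_CAPTURE_LOG,
and the default paths are placeholders I'm making up here, not anything
the glibc build system defines):

#!/usr/bin/env python3
# Illustrative capture wrapper: set CC/CXX to this script so every
# compiler invocation appends its working directory and argument
# vector to a log file, then exec's the real compiler.
import json
import os
import sys

REAL_CC = os.environ.get('REAL_CC', '/usr/bin/gcc')
LOG = os.environ.get('CC_CAPTURE_LOG', '/tmp/cc-invocations.jsonl')

record = json.dumps({'cwd': os.getcwd(), 'argv': [REAL_CC] + sys.argv[1:]})
with open(LOG, 'a') as fp:
    # One line per invocation; with make -j, concurrent appends of short
    # lines are unlikely to interleave in practice, but a lock file would
    # be the careful way to do it.
    fp.write(record + '\n')

os.execv(REAL_CC, [REAL_CC] + sys.argv[1:])

Replaying is then a matter of reading the log back and launching each
recorded command from its recorded directory, with something like
concurrent.futures (or xargs -P) providing the parallelism.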