Date | Commit message |
|
Noticed while testing on FreeBSD 11.2 amd64 with the optional
Inline::C extension using clang 6.0.0. The end result on
FreeBSD was that spawning processes failed badly and things
were immediately unusable with Inline::C enabled.
av_len is a misleading API, and I failed to read the API
comments in perl:/av.c which state:
> Note that, unlike what the name implies, it returns
> the highest index in the array, so to get the size of
> the array you need to use "av_len(av) + 1".
> This is unlike "sv_len", which returns what you would expect.
If this bug affected anybody, it would've only affected users
who both used the optional Inline::C module AND set the
PERL_INLINE_DIRECTORY environment variable.
That said, I've never seen any evidence of it on Debian
GNU/Linux + gcc on any x86 variant. That includes full 64-bit
systems, a full 32-bit system, a 64-bit system with 32-bit
userspace, across multiple gcc versions since 2016.
|
|
Our high-level config already treats a single limit value as
soft==hard for limiters, so stop handling that redundantly
in the low-level spawn() sub.
|
|
This allows users to configure RLIMIT_{CORE,CPU,DATA} using
our "limiter" config directive when spawning external processes.
|
|
cgit (and most other CGI executables) is not typically installed
for use via $PATH, so we'll need to support absolute paths to
run it.
|
|
We'll be spawning cgit and git-diff, which can take gigantic
amounts of CPU time and/or heap given the right (ermm... wrong)
input. Limit the damage that large/expensive diffs can cause.
|
|
Using update-copyrights from gnulib
While we're at it, use the SPDX identifier for AGPL-3.0+ to
ease mechanical processing.
|
|
fork failures are unfortunately common when Xapian has
gigabytes and gigabytes mmapped.
|
|
While we only want to stop our daemons and gracefully destroy
subprocesses, it is common for 'Ctrl-C' from a terminal to kill
the entire pgroup.
Killing an entire pgroup nukes subprocesses like git-upload-pack
and breaks graceful shutdown on long clones. Make a best effort
to ensure git-upload-pack processes are not broken when somebody
signals an entire process group.
Followup-to: commit 37bf2db81bbbe114d7fc5a00e30d3d5a6fa74de5
("doc: systemd examples should only kill one process")
|
|
We can't rely on absolute paths when installed on other
systems.
Unfortunately, mlmmj-* requires them, but none of the core
code will use them.
|
|
We cannot afford to fire Perl-level signal handlers in the
vforked child process since they're not designed to run in
the child like that.
Thus we need to block all signals before calling vfork, reset
signal dispositions in the child, and restore the signal mask in
the parent.
ref: https://ewontfix.com/7
|
|
This makes for better compile-time checking and also helps
document which calls are private for HTTP and NNTP.
While we're at it, use IO::Handle::* functions procedurally,
too, since we know we're working with native glob handles.
|
|
We can rely on timely auto-destruction based on reference
counting, reducing the chance of redundant close(2) calls
which may hit the wrong FD.
We do care about certain close calls (e.g. writing to a buffered
IO handle) if we require error-checking for write-integrity. In
other cases, let things go out-of-scope so it can be freed
automatically after use.
|
|
This is necessary since we want to be able to do arbitrary redirects
via the popen interface. Oh well, we'll be a little slower for now
for users without vfork. vfork users will get all the performance
benefits.
|
|
We must stash the error correctly when nesting evals, oops :x
|
|
This should reduce overhead of spawning git processes
from our long-running httpd and nntpd servers.
|
|
Under Linux, vfork maintains constant performance as
parent process size increases. fork needs to prepare pages
for copy-on-write, requiring a linear scan of the address
space.
|