|
This will make it easier to support asynchronous blob
retrievals. The `$ctx->{nr}' counter is no longer implicitly
supplied since many users didn't care for it, so stack overhead
is slightly reduced.
|
|
Like with WwwAtomStream and MboxGz, we can bless the existing
$ctx object directly to avoid allocating a new hashref. We'll
also switch from "->" to "::" to reduce stack utilization.
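The re-blessing trick can be sketched in a few lines; the package and field names below are hypothetical, not the actual public-inbox classes:

```perl
use strict;
use warnings;

package MyStream;

sub response {
    my ($ctx) = @_;
    bless $ctx, __PACKAGE__; # reuse the caller's hashref, no new allocation
    # a plain "::" function call with an explicit argument skips
    # method dispatch compared to $ctx->emit(...):
    MyStream::emit($ctx, 'hello');
    $ctx;
}

sub emit {
    my ($ctx, $chunk) = @_;
    $ctx->{buf} .= $chunk;
}

package main;

my $ctx = { buf => '' };               # pre-existing per-request context
my $stream = MyStream::response($ctx); # same hashref, now blessed
```

Since the object is the same hashref, callers holding $ctx see every field the stream code sets.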
|
|
This allows -httpd to handle other requests while waiting
for git to retrieve and decode blobs. We'll also break
apart t/psgi_v2.t further to ensure tests run against
-httpd in addition to generic PSGI testing.
Using xt/httpd-async-stream.t to test against clones of meta@public-inbox.org
shows a 10-12% performance improvement with the following env:
TEST_JOBS=1000 TEST_CURL_OPT=--compressed TEST_ENDPOINT=new.atom
|
|
We want to be able to parallelize and stress test more
endpoints and toggle `--compressed' and possibly other
options in curl.
|
|
No need to deepen our object graph here.
|
|
stat(2) on the inboxdir is unlikely to be correct, now that
msgmap truncates its journal (rather than unlinking it).
|
|
We always return Z (UTC) times anyway, so we'll always
use gmtime() on the seconds-after-the-epoch value.
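Formatting an epoch value with gmtime() yields the same "Z" timestamp regardless of the local timezone; `fmt_z` below is an illustrative helper, not an actual public-inbox function:

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# Format seconds-after-the-epoch as an Atom-style UTC ("Z") timestamp.
# gmtime() avoids any dependency on TZ or local DST rules.
sub fmt_z {
    my ($epoch) = @_;
    strftime('%Y-%m-%dT%H:%M:%SZ', gmtime($epoch));
}

my $ts = fmt_z(0); # the Unix epoch itself
```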
|
|
This restores gzip-by-default behavior for /$INBOX/$MSGID/raw
endpoints for all indexed inboxes. Unindexed v1 inboxes will
remain uncompressed, for now.
|
|
We can bless $ctx directly into a MboxGz object to reduce
hash lookups and allocations.
|
|
This lets the -httpd worker process make better use of time
instead of waiting for git-cat-file to respond. With 4 jobs in
the new test case against a clone of
<https://public-inbox.org/meta/>, a speedup of 10-12% is shown.
Even a single job shows a 2-5% improvement on an SSD.
|
|
Instead of gzipping some responses (mbox.gz, manifest.js.gz) and
leaving Plack::Middleware::Deflater to do the rest, we now gzip
everything ourselves, so the middleware is redundant.
|
|
This will allow us to gzip responses generated by cgit
and any other CGI programs or long-lived streaming
responses we may spawn.
|
|
This will allow others to mimic our award-winning homepage
design without needing to rely on Plack::Middleware::Deflater
or varnish to compress responses.
|
|
It's no longer needed: we no longer show a runtime error when
zlib is missing, since zlib is a hard requirement.
Fixes: a318e758129d616b ("make zlib-related modules a hard dependency")
|
|
This simplifies callers, as witnessed by the change to
WwwListing. It adds overhead to NoopFilter, but NoopFilter
should see little use as nearly all HTTP clients request gzip.
|
|
The new ->zmore and ->zflush APIs make it possible to replace
existing verbose usages of Compress::Raw::Deflate and simplify
buffering logic for streaming large gzipped data.
One potentially user-visible change is that we now abort the
mbox.gz response on zlib failures instead of silently continuing
onto the next message. zlib only seems to fail on OOM, which
should be rare, so it's best we drop the connection anyway.
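A minimal sketch of what such helpers look like on top of Compress::Raw::Zlib; the `zmore`/`zflush` below are illustrative stand-ins, not the actual PublicInbox::GzipFilter methods:

```perl
use strict;
use warnings;
use Compress::Raw::Zlib qw(Z_OK Z_STREAM_END Z_FINISH WANT_GZIP);

my ($gz, $err) = Compress::Raw::Zlib::Deflate->new(
    -WindowBits => WANT_GZIP,  # emit a gzip (not raw zlib) stream
    -AppendOutput => 1);       # append to the buffer we pass in
die "deflate init failed: $err" if $err != Z_OK;

my $zbuf = '';

sub zmore { # compress one chunk; die on zlib failure (e.g. OOM)
    my ($chunk) = @_;
    my $status = $gz->deflate($chunk, $zbuf);
    die "zlib deflate failed: $status" if $status != Z_OK;
}

sub zflush { # finish the stream and return the gzipped buffer
    my $status = $gz->flush($zbuf, Z_FINISH);
    die "zlib flush failed: $status" if $status != Z_OK;
    $zbuf;
}

zmore("hello ");
zmore("world\n");
my $gzipped = zflush();
```

Dying on a non-Z_OK status is what turns a zlib failure into a dropped connection instead of a silently truncated stream.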
|
|
The changes to GzipFilter here may be beneficial for building
HTML and XML responses in other places, too.
|
|
It'll give us a nicer HTML header and footer.
|
|
No point in streaming a tiny response via ->getline,
but we may stream to a gzipped buffer, later.
|
|
Most of our plain-text responses are config files
big enough to warrant compression.
|
|
Our most common endpoints deserve to be gzipped.
|
|
Plack::Middleware::Deflater forces us to use a memory-intensive
closure. Instead, work towards building compressed strings in
memory to reduce the overhead of buffering large HTML output.
|
|
We currently don't use bytes::length in ->write, so there's no
need to `use bytes'. Favor `//=' to describe the intent of the
conditional assignment since the C::R::Z::Deflate object is
always truthy. Also use the local $gz variable to avoid
unnecessary {gz} hash lookups.
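The distinction matters because `||=` also fires on defined-but-false values, while `//=` only fires on undef. A quick illustration (the {gz} field name mirrors the commit; the values are made up):

```perl
use strict;
use warnings;

my $self = {};
$self->{gz} //= 'deflate-object';  # assigned: {gz} was undef
my $gz = $self->{gz};              # local copy avoids repeated hash lookups
$self->{gz} //= 'other';           # no-op: {gz} is already defined

# why `//=` states intent better than `||=` for always-truthy objects:
my $n = 0;
$n //= 5;   # stays 0: defined, just false
my $m = 0;
$m ||= 5;   # becomes 5: `||=` fires on any false value
```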
|
|
We avoided a managed circular reference in 10ee3548084c125f
but introduced a pipe FD leak, instead. So handle the EOF
we get when the "git cat-file --batch" process exits and
closes its stdout FD.
v2: remove ->close entirely. PublicInbox::Git->cleanup
handles all cleanup. This prevents us from inadvertently
deleting the {async_cat} field associated with a different
pipe than the one GAC is monitoring.
Fixes: 10ee3548084c125f ("git_async_cat: remove circular reference")
|
|
inotify_add_watch(2), open(2), stat(2) may all fail due to
permissions errors, especially when running -nntpd/-imapd
as `nobody' as recommended.
|
|
Spammers may send emails with nasty characters which can throw
off git-fast-import. Users with non-existent or weaker spam
filters may be susceptible to corruption in the fast-import
stream as a result.
This was actually quietly fixed in git on 2020-06-01 by
commit 9ab886546cc89f37819e1ef09cb49fd9325b3a41
("smsg: introduce ->populate method"), but no test case
was created.
Reported-by: Eric W. Biederman <ebiederm@xmission.com>
Link: https://public-inbox.org/meta/87imf4qn87.fsf@x220.int.ebiederm.org/
Link: https://public-inbox.org/meta/20200601100657.14700-6-e@yhbt.net/
|
|
Network connections fail and need to be detected sooner rather
than later during IDLE to avoid backtrace floods. In case the
IDLE process dies completely, don't respawn right away, either,
to avoid entering a respawn loop.
There's also a typo fix :P
|
|
We no longer use writev(2) in pi_fork_exec to emit errors.
|
|
I was wondering about this myself the other day and had to read
up on it. So make a note of it for future readers.
|
|
The default (and fast) TEST_RUN_MODE=2 preloads most modules,
but TEST_RUN_MODE=0 is more realistic and can catch some
problems which may show up in real-world use.
|
|
To ensure reliable signal delivery in Perl, it seems we need to
repeatedly signal processes which aren't using signalfd (or
EVFILT_SIGNAL) with our event loop.
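As a rough illustration of the approach (not the actual daemon code):

```perl
use strict;
use warnings;
use POSIX qw(WNOHANG);

# Sketch of "signal until reaped": a process not watching signals via
# signalfd/EVFILT_SIGNAL may miss a one-shot signal, so the parent
# re-sends SIGTERM until waitpid() confirms the child exited.
# The sleeping child below stands in for a real worker process.
my $pid = fork // die "fork: $!";
if ($pid == 0) { sleep 60; exit 0 }

my $reaped = 0;
for (1 .. 100) { # bounded retry loop, ~5s worst case
    kill('TERM', $pid);
    if (waitpid($pid, WNOHANG) == $pid) { $reaped = 1; last }
    select(undef, undef, undef, 0.05); # brief pause before re-signaling
}
```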
|
|
parent.pm is smaller than base.pm, and we'll also move
towards relying on `-w' (or not) to toggle process-wide
warnings during development.
|
|
Making the RLIMITS list a function doesn't allow constant
folding, so just make it an array accessible to other modules.
|
|
Anonymous subs cost over 5K each on x86-64. So prefer the
less-recommended-but-still-documented way of using
Linux::Inotify2::watch to register watchers.
This also updates FakeInotify to detect modifications correctly
when used on systems with neither IO::KQueue nor
Linux::Inotify2.
|
|
This will help us catch warnings in new code and notice
inadvertently skipped tests.
|
|
Maildir scanning still happens in the main process. Scanning
dozens of Maildirs is still time-consuming and monopolizes the
event loop during WatchMaildir::event_step. This can cause
zombies to accumulate before Sigfd::event_step triggers
DS::reap_pids.
|
|
Subprocesses we spawn may want to use SIGCHLD for themselves.
This also ensures we restore default signal handlers
in the pure Perl version.
|
|
In case our git or spam checker subprocesses spawn
subprocesses of their own. We'll also ensure signal
handlers are properly setup before unblocking them.
|
|
It could be useful to see warnings generated for known problematic
messages just as it is for possibly non-problematic ones.
|
|
It's cheaper to check for duplicates than run `spamc'
repeatedly when rechecking. We already do this for
v1 by using the "ls" command with fast-import,
but v2 requires checking against over.sqlite3.
|
|
We won't be attempting to reuse the Mail::IMAPClient connections used to
check authentication info, for now, so stop storing
$self->{mics}.
We can also combine $poll initialization for IMAP and NNTP
to avoid data structure duplication. Furthermore, rely on
autovivification to create {idle_pids} and {poll_pids}.
|
|
SQLite only issues non-blocking F_SETLK ops (not F_SETLKW) and
retries failures using a configurable busy_timeout. SQLite's
busy loop sleeps for a millisecond and retries the lock until
the configured busy_timeout is hit.
Trying to set ->sqlite_busy_timeout to larger values (e.g. 30000
milliseconds) still leads to failure when running the new stress
test with 8 processes with TMPDIR on a 7200 RPM HDD.
Inspection of the SQLite source reveals there's no built-in way to
use F_SETLKW, so we tack on the existing flock(2) support we use to
synchronize git + SQLite + Xapian for inbox writing. We use
flock(2) instead of POSIX fcntl(2) locks since Perl doesn't
provide a way to manipulate "struct flock" portably.
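A sketch of the flock(2) approach in Perl (the lock file path is illustrative; flock is built in, so no "struct flock" handling is needed):

```perl
use strict;
use warnings;
use Fcntl qw(:flock O_CREAT O_RDWR);
use File::Temp qw(tempdir);

# Serialize writers with flock(2): unlike SQLite's internal F_SETLK +
# sleep/retry busy loop, LOCK_EX blocks in the kernel until the lock
# is free, so no busy_timeout tuning is needed.
my $dir = tempdir(CLEANUP => 1);
sysopen(my $lk, "$dir/inbox.lock", O_CREAT | O_RDWR) or die "open: $!";

flock($lk, LOCK_EX) or die "flock: $!"; # blocks, no polling
my $did_work = 1; # ... write to git + SQLite + Xapian here ...
flock($lk, LOCK_UN) or die "unlock: $!";
```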
|
|
While git-credential-netrc exists in git.git contrib/, it may
not be widely known or installed. Net::Netrc is already a
standard part of most (if not all) Perl installations, so use it
directly if available.
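Usage is small enough to sketch directly (the host name is illustrative):

```perl
use strict;
use warnings;
use Net::Netrc; # ships with Perl's standard libnet distribution

# Look up ~/.netrc credentials for a host. lookup() returns undef when
# no matching (or "default") entry exists, so callers can fall back to
# prompting via git-credential or similar.
my $mach = Net::Netrc->lookup('news.example.com');
my ($login, $pass) = $mach ? ($mach->login, $mach->password) : ();
```

Note that Net::Netrc refuses to read a ~/.netrc that is readable by other users, so missing credentials can also mean bad file permissions.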
|
|
Git.pm may not be installed on some systems; or some users have
multiple Perl installations and Git.pm is not available to the
Perl running -watch. Accommodate both those types of users by
providing our own "git credential" wrapper.
|
|
In case output is redirected to a pipe, ensure stdout and stderr
are always unbuffered, since -watch may go long periods without
producing enough output to fill stdio buffers.
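In Perl that amounts to (a minimal sketch):

```perl
use strict;
use warnings;
use IO::Handle; # provides the ->autoflush method on bare handles

# Unbuffer stdout and stderr so sparse log output reaches a redirected
# pipe immediately instead of lingering in stdio buffers.
STDOUT->autoflush(1);
STDERR->autoflush(1);
```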
|
|
Since we use the non-ref scalar URL in many error messages,
favor keeping the unblessed URL in the long-lived process.
This avoids showing "snews://" to users who've specified
"nntps://" URLs, since "nntps" is IANA-registered nowadays and
what we show in our documentation, while "snews" was just a
draft the URI package picked up decades ago.
|
|
This is similar to IMAP support, but only supports polling.
Automatic altid support is not implemented yet, but may
be in the future.
v2: small grammar fix by Kyle Meyer
Link: https://public-inbox.org/meta/87sgeg5nxf.fsf@kyleam.com/
|
|
Existing use of $ENV{TAIL} relied on parsing --std{out,err},
which was only usable for read-only daemons. However, -watch
doesn't use PublicInbox::Daemon code(*), so attempt to figure
out redirects.
(*) -watch won't be able to run as a daemon in cases when
git-credential prompts for IMAP/NNTP passwords.
PublicInbox::Daemon is also designed for read-only
parallelism where all worker processes are the same.
Any subprocesses spawned by -watch are to do specific
tasks for a particular set of inboxes.
|
|
We may just modify PublicInbox::Config->urlmatch in the future
to support git <1.8.5, but I wonder if there are enough users on
git <1.8.5 to justify it.
|
|
Since we store all watched directory names as keys in %mdmap,
there should be no need to keep an array of those directories
around.
t/watch_maildir*.t required changes to remove trained spam.
Once we've trained something as spam, there shouldn't be
a need to rescan it.
|