|
RFC3977 6.1.2.2 LISTGROUP allows a [range] arg after [group],
and supporting it allows NNTP support in neomutt to work again.
Tested with NeoMutt 20170113 (1.7.2) on Debian stretch
(oldstable)
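A sketch of the optional-range form, following the example in RFC 3977
6.1.2.2 ([C]/[S] mark client and server lines):

```
[C] LISTGROUP misc.test 3000238-3000248
[S] 211 2000 3000234 3002322 misc.test list follows
[S] 3000238
[S] 3000239
[S] .
```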
|
|
RFC3977 8.4.2 mandates the order of non-standard headers
to be after the first seven standard headers/metadata;
so "Xref:" must appear after "Lines:"|":lines".
Additionally, non-required header names must be followed
by ":full".
Cc: Jonathan Corbet <corbet@lwn.net>
Reported-by: Urs Janßen
<E1hmKBw-0008Bq-8t@akw>
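For reference, the mandated ordering matches the LIST OVERVIEW.FMT
example in RFC 3977 8.4.2, where the non-required "Xref:full" follows
the seven required fields:

```
[C] LIST OVERVIEW.FMT
[S] 215 Order of fields in overview database.
[S] Subject:
[S] From:
[S] Date:
[S] Message-ID:
[S] References:
[S] :bytes
[S] :lines
[S] Xref:full
[S] .
```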
|
|
We won't need further layering after enabling compression. So
be explicit about which sub we're calling when we hit ->do_read
from NNTP and eliminate the need for the comment.
|
|
We need to ensure further timers can be registered when there
are currently no idle clients.
|
|
In HTTP.pm, we can use the same technique NNTP.pm uses with
long_response with the $long_cb callback and avoid storing
$pull in the per-client structure at all. We can also reuse
the same logic to push the callback into wbuf from NNTP.
This does NOT introduce a new circular reference, but documents
it more clearly.
|
|
We're using Qspawn now
|
|
Relying on "use" to import during BEGIN means we get to take
advantage of prototype checking of function args during the rest
of the compilation phase.
|
|
*glob notation isn't always necessary, and this way there's
no need to disable 'once' warnings.
|
|
No point in uglifying our code since we need the POSIX
module in many places, anyway.
|
|
* origin/nntp-compress:
nntp: improve error reporting for COMPRESS
nntp: reduce memory overhead of zlib
nntp: support COMPRESS DEFLATE per RFC 8054
nntp: move LINE_MAX constant to the top
nntp: use msg_more as a method
|
|
While we're usually not stuck waiting on waitpid after
seeing a pipe EOF or even triggering SIGPIPE in the process
(e.g. git-http-backend) we're reading from, it MAY happen
and we should be careful to never hang the daemon process
on waitpid calls.
v2: use "eq" for string comparison against 'DEFAULT'
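The non-blocking reaping strategy can be sketched in Python (a minimal
illustration, not the Perl implementation): poll waitpid() with WNOHANG
so the daemon never blocks waiting for a slow-to-exit child.

```python
import os
import time

# Reap a child without ever blocking the daemon: poll waitpid()
# with WNOHANG instead of making a blocking waitpid() call.
pid = os.fork()
if pid == 0:
    os._exit(0)          # child: exit immediately

reaped, status = 0, 0
for _ in range(1000):    # a real daemon would poll from SIGCHLD/timers
    reaped, status = os.waitpid(pid, os.WNOHANG)
    if reaped:           # (0, 0) means "still running"; keep serving
        break
    time.sleep(0.01)
```

A blocked waitpid() in the event loop would stall every connected
client, which is why even an unlikely hang must be avoided.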
|
|
Add some checks for errors at initialization, though there's not
much that can be done with ENOMEM-type errors aside from
dropping clients.
We can also get rid of the scary FIXME for MemLevel=8. It was a
stupid error on my part: the original per-client deflate
stream implementation called C::R::Z::{Inflate,Deflate} in
list context, getting the extra dualvar error code as a
string result and causing the {zin}/{zout} array refs to have
extra array elements.
|
|
Using Z_FULL_FLUSH at the right places in our event loop, it
appears we can share a single zlib deflate context across ALL
clients in a process.
The zlib deflate context is the biggest factor in per-client
memory use, so being able to share that across many clients
results in a large memory savings.
With 10K idle-but-did-something NNTP clients connected to a
single process on a 64-bit system, TLS+DEFLATE used around
1.8 GB of RSS before this change. It now uses around 300 MB.
TLS via IO::Socket::SSL alone uses <200MB in the same situation,
so the actual memory reduction is over 10x.
This makes compression less efficient and bandwidth increases
around 45% in informal testing, but it's far better than no
compression at all. It's likely around the same level of
compression gzip gives on the HTTP side.
Security implications with TLS? I don't know, but I don't
really care, either... public-inbox-nntpd doesn't support
authentication and it's up to the client to enable compression.
It's not too different than Varnish caching gzipped responses
on the HTTP side and having responses go to multiple HTTPS
clients.
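The trick can be sketched with Python's zlib bindings (an illustrative
sketch, not the Perl code; compress_chunk is a hypothetical helper):
Z_FULL_FLUSH resets the compression dictionary, so each flushed chunk
from the single shared deflate context is self-contained, and the
chunks belonging to one client concatenate into a valid raw DEFLATE
stream even when other clients' output was compressed in between.

```python
import zlib

# One deflate context shared by ALL clients (raw DEFLATE, wbits=-15,
# as used for NNTP COMPRESS).  Z_FULL_FLUSH after each client's data
# resets the dictionary, making every chunk self-contained.
shared = zlib.compressobj(6, zlib.DEFLATED, -15)

def compress_chunk(data):
    # hypothetical helper: compress one client's pending output
    return shared.compress(data) + shared.flush(zlib.Z_FULL_FLUSH)

a1 = compress_chunk(b"client A, line 1\r\n")
b1 = compress_chunk(b"client B, line 1\r\n")  # interleaved other client
a2 = compress_chunk(b"client A, line 2\r\n")

# Chunks for client A alone still form a valid raw deflate stream:
out = zlib.decompressobj(-15).decompress(a1 + a2)
```

Those frequent full flushes are also exactly why the compression ratio
suffers compared to one long-lived deflate stream per client.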
|
|
This is only tested so far with my patches to Net::NNTP at:
https://rt.cpan.org/Ticket/Display.html?id=129967
Memory use in C10K situations is disappointing, but that's
the nature of compression.
gzip compression over HTTPS does have the advantage of not
keeping zlib streams open when clients are idle, at the
cost of worse compression.
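Per RFC 8054, activation looks like this on the wire; after the 206
response, both directions switch to raw DEFLATE streams:

```
[C] COMPRESS DEFLATE
[S] 206 Compression active
```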
|
|
It'll be accessible from other places, and there was no real
point in having it declared inside a sub.
|
|
It's a tad slower, but we'll be able to subclass this to rely
on zlib deflate buffering. That's advantageous for TLS clients,
since (AFAIK) IO::Socket::SSL/OpenSSL doesn't give us a way to
use MSG_MORE or writev(2) like GnuTLS does.
|
|
Given most folks have multiple mail accounts, there's no reason
we can't support multiple Maildirs.
|
|
We can drop some unnecessary imports now that we've switched
to InboxWritable.
|
|
Diffstat summary comments were added to git last year and
we need to filter them out to get anchors working properly.
Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
https://public-inbox.org/meta/20190704231123.GF20404@szeder.dev/
|
|
Net::NNTP won't attempt to use older versions of IO::Socket::SSL
because 2.007 is the "first version with default CA on most platforms"
according to comments in Net::NNTP. But then again we don't make
remote requests when testing...
|
|
We need to ensure the BIN_DETECT (8000 byte) check in
ViewVCS can be handled properly when sending giant
files. Otherwise, EPOLLET won't notify us again
and responses can get stuck.
While we're at it, bump the read size up to 4096
bytes so we make fewer trips to the kernel.
|
|
* origin/nntp:
nntp: add support for CAPABILITIES command
nntp: remove DISABLED hash checks
|
|
Some clients may rely on this for STARTTLS support.
|
|
Before I figured out the long_response API, I figured there'd
be expensive, process-monopolizing commands which admins might
want to disable. Nearly 4 years later, we've never needed it
and running a server without commands such as OVER/XOVER is
unimaginable.
|
|
We need to be able to successfully connect() to the socket
before attempting further tests. Merely testing for the
existence of a socket isn't enough, since the server may've
only done bind(), not listen().
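The distinction is easy to demonstrate (a Python sketch of the general
socket behavior, not the Perl test code): a bound Unix socket exists on
the filesystem, yet connect() is refused until listen() is called.

```python
import os
import socket
import tempfile

# A bound-but-not-listening Unix socket exists on disk, but has no
# accept queue, so connect() fails with ECONNREFUSED.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test.sock")
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    assert os.path.exists(path)      # the socket file is there...

    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        cli.connect(path)            # ...but nobody is listening
        refused = False
    except ConnectionRefusedError:
        refused = True

    srv.listen(5)                    # now there's an accept queue
    cli2 = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli2.connect(path)               # succeeds
```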
|
|
For users relying on socket activation via a service manager (e.g.
systemd) and running multiple service instances (@1, @2),
we need to ensure the socket is configured as NonBlocking.
Otherwise, service managers such as systemd may clear the
O_NONBLOCK flag, creating a small window where accept/accept4
blocks:
public-inbox-nntpd@1 |systemd |public-inbox-nntpd@2
--------------------------+----------------+--------------------
F_SETFL,O_NONBLOCK|O_RDWR | | (not running, yet)
|F_SETFL, O_RDWR |
|fork+exec @2... |
accept(...) # blocks! | |(started by systemd)
| |F_SETFL,O_NONBLOCK|O_RDWR
| |accept(...) non-blocking
It's a very small window where O_NONBLOCK can be cleared,
but it exists, and I finally hit it after many years.
|
|
IO::Socket::*->new options are verbose and we can save
a bunch of code by putting this into t/common.perl,
since the related spawn_listener stuff is already there.
|
|
20190431 isn't a real date; NNTP.pm failed to parse it when our
test client sent it.
|
|
* origin/email-simple-mem:
nntp: reduce syscalls for ARTICLE and BODY
mbox: split header and body processing
mbox: use Email::Simple->new to do in-place modifications
nntp: rework and simplify art_lookup response
|
|
For users running multiple instances (-nntpd@1, -nntpd@2) of
either -httpd or -nntpd via systemd to implement zero-downtime
restarts, it's possible for a listen socket to become blocking
for a moment during an accept syscall and cause a daemon to
get stuck in a blocking accept() during
PublicInbox::Listener::event_step (event_read in previous
versions).
Since O_NONBLOCK is a file description flag, systemd clearing
O_NONBLOCK momentarily (before PublicInbox::Listener::new
re-enables it) creates a window for another instance of our
daemon to get stuck in accept().
cf. systemd.service(5)
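The file-description semantics are easy to see in a short Python
sketch (illustrative only; the dup stands in for the fd inherited by
the other process): toggling O_NONBLOCK through one descriptor is
visible through every descriptor sharing the same open file
description.

```python
import os
import socket

# O_NONBLOCK lives on the open file *description*, not the descriptor,
# so a dup (or an fd inherited across fork/exec, as with systemd
# socket activation) shares the flag with the original.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
dup_fd = os.dup(s.fileno())              # stands in for "the other process"

os.set_blocking(s.fileno(), False)       # our daemon sets O_NONBLOCK...
assert os.get_blocking(dup_fd) is False  # ...visible through the dup

os.set_blocking(dup_fd, True)            # "systemd" clears it again...
blocking_again = os.get_blocking(s.fileno())  # ...and accept() can block
```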
|
|
We need to ensure all these subroutines return false on
incomplete.
|
|
Since we have EPOLL_CTL_DEL implemented for the poll(2) and
kqueue backends, we can rely on Perl refcounting to gently
close(2) the underlying file descriptors as references get
dropped.
This may be beneficial in the future if we want to drop a
descriptor from the event loop without actually closing it.
|
|
Perl prior to 5.22 did not bundle a Net::NNTP (or libnet)
capable of handling TLS.
|
|
EV_DISPATCH is actually a better match for EPOLLONESHOT
semantics than EV_ONESHOT in that it doesn't require EV_ADD
for every mod operation.
Blindly using EV_ADD everywhere forces the FreeBSD kernel to
do extra allocations up front, so it's best avoided.
|
|
Linux pipes default to 65536 bytes in size, and we want to read
external processes as fast as possible now that we don't use
Danga::Socket or buffer to heap.
However, drop the buffer ASAP if we need to wait on anything,
since idle buffers can sit unused for eons. This lets other
execution contexts reuse that memory right away.
|
|
With DS buffering to a temporary file nowadays, applying
backpressure to git-http-backend(1) hurts overall memory
usage of the system. Instead, try to get git-http-backend(1)
to finish as quickly as possible and use edge-triggered
notifications to reduce wakeups on our end.
|
|
We can close directly in event_step without bad side effects,
and then we also don't need to take a reason arg from worker_quit,
since we weren't logging it anywhere.
|
|
The master process only dies once and we close ourselves right
away. So it doesn't matter if it's level-triggered or
edge-triggered, actually, but one-shot is most consistent with
our use and keeps the kernel from doing extra work.
|
|
It's barely any effort at all to support HTTPS now that we have
NNTPS support and can share all the code for writing daemons.
However, we still depend on Varnish to avoid hug-of-death
situations, so supporting reverse-proxying will be required.
|
|
We need to be careful about handling EAGAIN on write(2)
failures and deal with SSL_WANT_READ vs SSL_WANT_WRITE as
appropriate.
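The shape of that handling can be sketched with Python's ssl module
(an illustration of the general pattern; ssl_write is a hypothetical
helper, not from this codebase): unlike plain sockets, a TLS write can
demand that we wait for *readability* before retrying.

```python
import ssl

def ssl_write(sock, data):
    """One write attempt on a non-blocking TLS socket; returns which
    event to wait for before retrying, or None on success."""
    try:
        sock.send(data)
        return None
    except ssl.SSLWantReadError:
        # e.g. renegotiation: wait for the socket to become READABLE
        return "read"
    except ssl.SSLWantWriteError:
        # kernel buffer full (TLS analog of EAGAIN): wait for writability
        return "write"
```

Registering the wrong event with epoll/kqueue here means the retry
never fires and the response stalls.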
|
|
Our hacks in EvCleanup::next_tick and EvCleanup::asap were due
to the fact "closed" sockets were deferred and could not wake
up the event loop, causing certain actions to be delayed until
an event fired.
Instead, ensure we don't sleep if there are pending sockets to
close.
We can then remove most of the EvCleanup stuff.
While we're at it, split out immediate timer handling into a
separate array so we don't need to deal with time calculations
for the event loop.
|
|
We don't need extra wakeups from the kernel when we know a
listener is already active.
|
|
Don't use epoll or kqueue to watch for anything unless we hit
EAGAIN, since we don't know if a socket is SSL or not.
|
|
We'll be reusing requeue in other places to reduce trips to
the kernel to retrieve "hot" descriptors.
|
|
Doing this for HTTP cuts the memory usage of 10K
idle-after-one-request HTTP clients from 92 MB to 47 MB.
The savings over the equivalent NNTP change in commit
6f173864f5acac89769a67739b8c377510711d49
("nntp: lazily allocate and stash rbuf") seems to come down
to the size of HTTP requests and the fact that HTTP is a
client-sends-first protocol, whereas NNTP is server-sends-first.
|
|
We need to ensure we run lsof on the sleep(1) process, and not
the fork of ourselves before execve(2). This race applies when
we're using the default pure-Perl spawn() implementation.
|
|
Knowing which message failed a spam check is tough when I have
many Maildirs and don't have a search indexing tool set up for
spam mail.
|
|
Chances are we already have extra buffer space following the
expensive LF => CRLF conversion, so we can safely append an
extra CRLF in those places without incurring a copy of the
full string buffer.
While we're at it, document where our pain points are in terms
of memory usage, since tracking/controlling memory use isn't
exactly obvious in high-level languages.
Perhaps we should start storing messages in git as CRLF...
|
|
When dealing with ~30MB messages, we can save another ~30MB by
splitting the header and body processing and not appending the
body string back to the header.
We'll rely on buffering in gzip or kernel (via MSG_MORE)
to prevent silly packet sizes.
|
|
Email::Simple->new will split the head from the body in-place,
and we can avoid using Email::Simple::body. This saves us from
holding an extra copy of the message in memory: around 30MB
when operating on ~30MB messages.
|