Date | Commit message |
|
We can cut down on the number of operations required
using "grep" instead of "foreach".
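A minimal sketch of the pattern (variable names here are illustrative, not from the actual commit):

```perl
use strict;
use warnings;

my @fds = (3, undef, 5, undef, 7);

# before: an explicit foreach loop with a conditional push
my @keep_loop;
foreach my $fd (@fds) {
    push @keep_loop, $fd if defined $fd;
}

# after: a single grep op builds the same list
my @keep_grep = grep { defined } @fds;

print scalar(@keep_grep), "\n"; # 3
```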
|
|
We can rely on autovivification to turn the `undef' value of
{wbuf} into an arrayref.
Furthermore, "push" returns the (new) size of the array since at
least Perl 5.0 (I didn't look further back), so we can use that
return value instead of calling "scalar" again.
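Both behaviors are easy to demonstrate (a minimal sketch; the {wbuf} field name is taken from the commit message):

```perl
use strict;
use warnings;

my $self = {}; # {wbuf} starts out undef

# push() autovivifies the undef {wbuf} slot into an arrayref,
# and it returns the new element count, so there is no need
# for a separate scalar(@{$self->{wbuf}}) call:
my $n = push @{$self->{wbuf}}, "first chunk";
print "$n ", ref($self->{wbuf}), "\n"; # 1 ARRAY
```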
|
|
We cannot safely call "fileno(undef)" without bringing down the
entire -nntpd process :x. To ensure no logging regression, we
now stash the FD for the duration of the long response to ensure
the error can be matched to the original command in logs.
Fixes: 207b89615a1a0c06 ("nntp: remove cyclic refs from long_response")
|
|
Time::Local has the concept of a "rolling century" which is
defined as 50 years on either side of the current year. Since
it's now 2020 and >50 years since the Unix epoch, the year "70"
gets interpreted by Time::Local as 2070-01-01 instead of
1970-01-01.
Since NNTP servers are unlikely to store messages from the
future, we'll feed a 4-digit year to Time::Local::{timegm,timelocal}
and hopefully not have to worry about things until Y10K.
This fixes test failures on t/v2writable.t and t/nntpd.t since
2020-01-01.
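The difference can be seen directly with Time::Local (a minimal sketch; the eval merely guards against time_t range limits on 32-bit builds):

```perl
use strict;
use warnings;
use Time::Local qw(timegm);

# A two-digit year of 70 falls in the "rolling century" window,
# so from 2020 onward it maps to 2070 rather than 1970:
my $ambiguous = eval { timegm(0, 0, 0, 1, 0, 70) }; # likely 2070-01-01

# A 4-digit year is unambiguous:
my $explicit = timegm(0, 0, 0, 1, 0, 1970); # always the Unix epoch
print "explicit: $explicit\n"; # 0
```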
|
|
Introduce xover_i, which does the same thing as the anonymous
sub it replaces.
|
|
Introduce hdr_msgid_range_i, which does the same thing as the
anonymous sub it replaces.
|
|
Introduce newnews_i, which does the same thing as the anonymous
sub it replaces.
|
|
Introduce listgroup_range_i and listgroup_all_i subs which
do the same things as the anonymous subs they replace.
|
|
Introduce xrover_i which does the same thing as the anonymous
sub it replaces.
|
|
Introduce searchmsg_range_i, which does the same thing as
the anonymous sub it replaces.
|
|
Leftover cyclic references are a source of memory leaks. While
our code is AFAIK unaffected by such leaks at the moment,
eliminating a potential source of bugs will make maintenance
easier.
We make the long_response API cycle-free by stashing the
callback into the NNTP object. However, callers will need
to be updated to get rid of the circular reference to $self.
We do that by replacing anonymous subs with named subroutine
references, such as xref_range_i replacing the formerly
anonymous sub inside hdr_xref.
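A minimal sketch of why the named-sub form avoids the cycle (the class and sub names here are illustrative, not from the actual code):

```perl
use strict;
use warnings;

my $destroyed = 0;
{ package Client; sub new { bless {}, shift } sub DESTROY { $destroyed++ } }

sub noop_i { my ($self) = @_ } # named sub: $self arrives per call

sub with_closure {
    my $self = Client->new;
    $self->{long_cb} = sub { $self }; # closure captures $self: cycle
}

sub with_named_sub {
    my $self = Client->new;
    $self->{long_cb} = \&noop_i;      # plain code ref: no capture
}

with_closure();
with_named_sub();
print "destroyed: $destroyed\n"; # only the named-sub client was freed
```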
|
|
...Instead of just returning a plain scalar inside an arrayref.
This is because we usually pass the result of NNTP::get_range to
Msgmap::msg_range. Upcoming changes will move us away from
anonymous subroutines, so this change will make followup commits
easier-to-digest as modifications to the underlying scalar can
be more easily propagated between non-anonymous-subs.
|
|
Our NNTP code no longer relies on search or Xapian. Msgmap
and Git modules are loaded anyways through Inbox->(git|mm|over)
methods, however.
|
|
No need to do an eval dance or disable strict refs.
|
|
We'll be supporting idle timeout for the HTTP code in the
future to deal directly with Internet-exposed clients w/o
Varnish or nginx.
|
|
EvCleanup only existed since Danga::Socket was a separate
component, and cleanup code belongs with the event loop.
|
|
We don't want to get stuck in a state where we see "\n" via
index(), yet cannot consume rbuf in the while loop. So tweak
the regexp to ensure we always consume rbuf.
I suspect this is what causes occasional 100% CPU usage of
-nntpd, but reproducing it's been difficult..
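A hypothetical reconstruction of the hazard (the regexps here are illustrative, not the exact ones from the code): index() sees the "\n", but a CRLF-only regexp never matches, so the buffer never shrinks:

```perl
use strict;
use warnings;

my $rbuf = "line without a CR\n";
if (index($rbuf, "\n") >= 0) {
    # a strict /\A([^\n]*)\r\n/ would never match here and the
    # loop would spin; tolerating a lone \n always consumes rbuf:
    $rbuf =~ s/\A([^\n]*)\r?\n//;
}
print length($rbuf), "\n"; # 0
```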
|
|
Since Net::NNTP::listgroup doesn't support the range parameter,
I had to test this manually and noticed extra CRLF were emitted.
|
|
RFC3977 6.1.2.2 LISTGROUP allows a [range] arg after [group],
and supporting it allows NNTP support in neomutt to work again.
Tested with NeoMutt 20170113 (1.7.2) on Debian stretch
(oldstable)
|
|
RFC3977 8.4.2 mandates the order of non-standard headers
to be after the first seven standard headers/metadata;
so "Xref:" must appear after "Lines:"|":lines".
Additionally, non-required header names must be followed
by ":full".
Cc: Jonathan Corbet <corbet@lwn.net>
Reported-by: Urs Janßen
<E1hmKBw-0008Bq-8t@akw>
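For reference, RFC 3977 8.4.2 gives the mandatory order of the seven standard fields, so a conforming LIST OVERVIEW.FMT response looks like:

```
Subject:
From:
Date:
Message-ID:
References:
:bytes
:lines
Xref:full
```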
|
|
We need to ensure further timers can be registered if there
are currently no idle clients.
|
|
In HTTP.pm, we can use the same technique NNTP.pm uses with
long_response with the $long_cb callback and avoid storing
$pull in the per-client structure at all. We can also reuse
the same logic to push the callback into wbuf from NNTP.
This does NOT introduce a new circular reference, but documents
it more clearly.
|
|
Relying on "use" to import during BEGIN means we get to take
advantage of prototype checking of function args during the rest
of the compilation phase.
|
|
Add some checks for errors at initialization, though there's not
much that can be done with ENOMEM-type errors aside from
dropping clients.
We can also get rid of the scary FIXME for MemLevel=8. It was a
stupid error on my part in the original per-client deflate
stream implementation calling C::R::Z::{Inflate,Deflate} in
array context and getting the extra dualvar error code as a
string result, causing the {zin}/{zout} array refs to have
extra array elements.
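The list-context gotcha is easy to reproduce with Compress::Raw::Zlib (a minimal sketch):

```perl
use strict;
use warnings;
use Compress::Raw::Zlib qw(Z_OK);

# List context: the constructor returns ($object, $status)
my ($deflate, $err) = Compress::Raw::Zlib::Deflate->new;
print "init ok\n" if $err == Z_OK;

# Assigning into an array is also list context, so the dualvar
# status sneaks in as a second element -- the bug behind the
# old MemLevel=8 FIXME:
my @zout = Compress::Raw::Zlib::Deflate->new;
print scalar(@zout), "\n"; # 2, not 1
```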
|
|
Using Z_FULL_FLUSH at the right places in our event loop, it
appears we can share a single zlib deflate context across ALL
clients in a process.
The zlib deflate context is the biggest factor in per-client
memory use, so being able to share that across many clients
results in a large memory savings.
With 10K idle-but-did-something NNTP clients connected to a
single process on a 64-bit system, TLS+DEFLATE used around
1.8 GB of RSS before this change. It now uses around 300 MB.
TLS via IO::Socket::SSL alone uses <200MB in the same situation,
so the actual memory reduction is over 10x.
This makes compression less efficient and bandwidth increases
around 45% in informal testing, but it's far better than no
compression at all. It's likely around the same level of
compression gzip gives on the HTTP side.
Security implications with TLS? I don't know, but I don't
really care, either... public-inbox-nntpd doesn't support
authentication and it's up to the client to enable compression.
It's not too different than Varnish caching gzipped responses
on the HTTP side and having responses go to multiple HTTPS
clients.
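A minimal sketch of the flush pattern (the response line is illustrative):

```perl
use strict;
use warnings;
use Compress::Raw::Zlib qw(Z_OK Z_FULL_FLUSH);

# One shared deflate context; Z_FULL_FLUSH at each message
# boundary emits all pending output, so per-client responses
# stay byte-delimited even though the context is shared:
my ($zout, $err) = Compress::Raw::Zlib::Deflate->new(AppendOutput => 1);
die 'deflate init failed' unless $err == Z_OK;

my $out = '';
$zout->deflate("211 1 1 1 inbox.test\r\n", $out);
$zout->flush($out, Z_FULL_FLUSH); # safe point to switch clients
print length($out) ? "flushed\n" : "empty\n";
```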
|
|
This is only tested so far with my patches to Net::NNTP at:
https://rt.cpan.org/Ticket/Display.html?id=129967
Memory use in C10K situations is disappointing, but that's
the nature of compression.
gzip compression over HTTPS does have the advantage of not
keeping zlib streams open when clients are idle, at the
cost of worse compression.
|
|
It'll be accessible from other places, and there was no real
point in having it declared inside a sub.
|
|
It's a tad slower, but we'll be able to subclass this to rely
on zlib deflate buffering. This is advantageous for TLS clients
since (AFAIK) IO::Socket::SSL/OpenSSL doesn't give us ways to use
MSG_MORE or writev(2) like GnuTLS does.
|
|
Some clients may rely on this for STARTTLS support.
|
|
Before I figured out the long_response API, I figured there'd
be expensive, process-monopolizing commands which admins might
want to disable. Nearly 4 years later, we've never needed it
and running a server without commands such as OVER/XOVER is
unimaginable.
|
|
* origin/email-simple-mem:
nntp: reduce syscalls for ARTICLE and BODY
mbox: split header and body processing
mbox: use Email::Simple->new to do in-place modifications
nntp: rework and simplify art_lookup response
|
|
We need to be careful about handling EAGAIN on write(2)
failures and deal with SSL_WANT_READ vs SSL_WANT_WRITE as
appropriate.
|
|
Our hacks in EvCleanup::next_tick and EvCleanup::asap were due
to the fact "closed" sockets were deferred and could not wake
up the event loop, causing certain actions to be delayed until
an event fired.
Instead, ensure we don't sleep if there are pending sockets to
close.
We can then remove most of the EvCleanup stuff.
While we're at it, split out immediate timer handling into a
separate array so we don't need to deal with time calculations
for the event loop.
|
|
We'll be reusing requeue in other places to reduce trips to
the kernel to retrieve "hot" descriptors.
|
|
Doing this for HTTP cuts the memory usage of 10K
idle-after-one-request HTTP clients from 92 MB to 47 MB.
The savings over the equivalent NNTP change in commit
6f173864f5acac89769a67739b8c377510711d49,
("nntp: lazily allocate and stash rbuf") seems down to the
size of HTTP requests and the fact HTTP is a client-sends-first
protocol whereas NNTP is server-sends-first.
|
|
Chances are we already have extra buffer space following the
expensive LF => CRLF conversion, so we can safely append an
extra CRLF in those places without incurring a copy of the
full string buffer.
While we're at it, document where our pain points are in terms
of memory usage, since tracking/controlling memory use isn't
exactly obvious in high-level languages.
Perhaps we should start storing messages in git as CRLF...
|
|
We don't need some of the array elements returned from
art_lookup, anymore (and haven't used them in years).
We can also shorten the lifetime of the Email::Simple object by
relying on the fact Email::Simple->new will modify its arg if
given a SCALARREF and allow us to avoid Email::Simple::body
calls.
Unfortunately, this doesn't seem to provide any noticeable
improvement in memory usage when dealing with a 30+ MB test
message, since our previous use of ->body_set('') was saving
some memory, but forcing a LF-only body to be CRLF was making
Perl allocate extra space for s///sg.
|
|
A tiny write() for the greeting on a just accept()-ed TCP socket
won't fail with EAGAIN, so we can avoid the extra epoll syscall
traffic with plain sockets.
|
|
Allocating a per-client buffer up front is unnecessary and
wastes a hash slot. For the majority of (non-malicious)
clients, we won't need to store rbuf in a long-lived object
associated with a client socket at all.
This saves around 10M on 64-bit with 20K connected-but-idle
clients.
|
|
We can get rid of the {long_res} field and reuse the write
buffer ordering logic to prevent nesting of responses from
requeue.
On FreeBSD, this fixes a problem of callbacks firing twice
because kqueue as event_step is now our only callback entry
point.
There's a slight change in the stdout "logging" format, in
that we can no longer distinguish between writes blocked
due to slow clients or deferred long responses. Not sure
if this affects anybody parsing logs or not, but preserving
the old format could prove expensive and not worth the
effort.
|
|
No need to allocate a new PerlIO::scalar filehandle for every
client, instead we can now pass the same CODE reference which
calls DS->write on a reused string reference.
|
|
This is in accordance with TLS standards and will be needed
to support session caching/reuse in the future. However, we
don't issue shutdown(2) since we know not to inadvertently
share our sockets with other processes.
|
|
IO::Socket::SSL will try to re-bless back to the original class
on TLS negotiation failure. Unfortunately, the original class
is 'GLOB', and re-blessing to 'GLOB' takes away all the IO::Handle
methods, because Filehandle/IO are a special case in Perl5.
Anyways, since we already use syswrite() and sysread() as functions
on our socket, we might as well use CORE::close(), as well (and
it plays nicely with tied classes).
|
|
It kinda, barely works, and I'm most happy I got it working
without any modifications to the main NNTP::event_step callback
thanks to the DS->write(CODE) support we inherited from
Danga::Socket.
|
|
This will be needed for NNTPS support, since we need
to negotiate the TLS connection before writing the
greeting and we can reuse the existing buffer layer
to enqueue writes.
|
|
We can be smarter about requeuing clients to run and avoid
excessive epoll_ctl calls since we can trust event_step to do
the right thing depending on the state of the client.
|
|
Both NNTP and HTTP have common needs and we can factor
out some common code to make dealing with IO::Socket::SSL
easier.
|
|
It should not matter because our rbuf is always from
a socket without encoding layers, but this makes things
easier to follow.
|
|
This is cleaner in most cases and may allow Perl to reuse memory
from unused fields.
We can do this now that we no longer support Perl 5.8; since
Danga::Socket was written with struct-like pseudo-hash support
in mind, and Perl 5.9+ dropped support for pseudo-hashes over
a decade ago.
|
|
Integer comparisons of "$!" are faster than hash lookups.
See commit 6fa2b29fcd0477d126ebb7db7f97b334f74bbcbc
("ds: cleanup Errno imports and favor constant comparisons")
for benchmarks.
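Since "$!" is a dualvar, a numeric compare hits the errno value directly (a minimal sketch; the path is illustrative):

```perl
use strict;
use warnings;
use Errno qw(ENOENT);

my $hit = 0;
# '==' compares the numeric errno; 'eq' would stringify $! to
# the strerror() message, and a hash lookup keyed on the error
# name is slower still:
open my $fh, '<', '/no/such/file/for/testing' or $hit = ($! == ENOENT);
print $hit ? "got ENOENT\n" : "other error\n";
```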
|