Date | Commit message |
|
As in Import, we'll fall back to Sender: if From: is missing,
and use the primary_address of the inboxes to indicate the total
absence of those fields.
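A hedged sketch of that fallback chain, using a plain hash in place of the real header and inbox objects (all names here are illustrative, not the actual Import code):

```perl
use strict;
use warnings;

# illustrative stand-ins for the message headers and inbox;
# the message below lacks From:, so Sender: is used
my $hdr = { Sender => 'sender@example.com' };
my $primary_address = 'inbox@example.com';

my $from = $hdr->{From}        # preferred
        // $hdr->{Sender}      # fall back to Sender:
        // $primary_address;   # both fields absent entirely
print "$from\n"; # sender@example.com
```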
|
|
Non-slice mailboxes never have messages themselves,
so we must not assume a message exists when sending
untagged EXISTS messages.
|
|
The resulting OID ("oid_b") is a required arg and part of
$env->{PATH_INFO} instead, so it's never part of an optional
query parameter.
|
|
Returning an empty string for a filename makes no sense,
so instead return `undef' so the caller can set up a fallback
using the "//" operator.
This fixes uninitialized variable warnings because split()
on an empty string returns `undef', which caused to_filename
to warn on s// and tr// ops.
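The behavior described above can be seen in a few lines (a standalone sketch, not the actual to_filename code):

```perl
use strict;
use warnings;

# split() on an empty string yields no fields at all, so list
# assignment leaves $first undef (not ""):
my ($first) = split(/\s+/, '');

# "//" lets the caller supply a defined fallback, avoiding
# "uninitialized value" warnings in later s/// or tr/// ops:
my $name = $first // 'fallback';
print "$name\n"; # fallback
```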
|
|
This means we need to filter out "" from query parameters.
While we're at it, update comments for the WWW endpoint.
|
|
I'm not sure why this wasn't done in June/July 2016 when I was
working on PublicInbox::Address to replace the DoS-vulnerable
Email::Address.
Nowadays, PublicInbox::Address allows using Email::Address::XS
which should be fast and robust.
|
|
parent.pm is leaner than base.pm, and Time::HiRes::stat is
more accurate, so take advantage of these Perl 5.10+-isms
since it's been over a year since we left 5.8 behind.
|
|
We actually don't do anything with {env} or {'psgix.io'}
on client aborts, so checking the truthiness of '{forward}'
is necessary.
|
|
-edit and -purge should be rare, and TOCTOU around them
rarer still, so missing {blobs} could be indicative of a real
bug elsewhere. Warn on them.
And I somehow ended up with 3 different field names for Inbox
objects. Perhaps they'll be made consistent in the future.
|
|
While all the {async_next} callbacks needed eval guards anyways
because of DS->write, {async_eml} callbacks did not.
Ensure any bugs in our code or data corruption result in
termination of the HTTP connection, so as not to leave clients
hanging on a response which never comes or is mangled in some
way.
|
|
We can reuse some of the GzipFilter infrastructure used by other
WWW components to handle slow blob retrieval, here. The
difference from previous changes is we don't decide on the 200
status code until we've retrieved the blob and found the
attachment.
While we're at it, ensure we can compress text attachment
responses once again, since all text attachments are served
as text/plain.
|
|
gzf_maybe always returns a GzipFilter object, even if it uses
CompressNoop. We can also use ->zflush instead of
->translate(undef) here for the final bit.
|
|
This simplifies the primary callers of eml_entry while only making
mknews.perl worse.
|
|
We no longer favor getline+close for streaming PSGI responses
when using public-inbox-httpd. We still support it for other
PSGI servers, though.
|
|
All of our streaming responses use ::aresponse, now, and our
synchronous responses use html_oneshot. So there's no need
for the old WwwStream::response.
|
|
We can build and buffer the HTML <head> section once the first
non-ghost message in a thread is loaded, so there's no need to
perform an extra check on $ctx->{nr} once the $eml is ready.
|
|
We can save stack space and simplify subroutine calls, here.
|
|
Another 10% or so speedup when displaying full messages off
search results.
|
|
Once again this speeds another endpoint up 10% or so.
|
|
$ctx->{msgs} won't ever contain undef values.
|
|
Another 10% or so speedup in a frequently-hit endpoint.
|
|
Once again, this shows a ~10% speedup with multi-message
threads in xt/httpd-async-stream.t regardless of whether
TEST_JOBS is 1 or 100.
|
|
This will allow -httpd to handle other requests while waiting
on an HDD seek or for git to decode a blob.
|
|
This makes WwwStream closer to MboxGz and WwwAtomStream
and will eventually allow us to follow the same patterns.
|
|
parent.pm is leaner than base.pm, and we'll rely on `-w' for
warnings during development.
|
|
Z_FINISH is the default for Compress::Raw::Zlib::Deflate->flush,
anyways, so there's no reason to import it. And none of C::R::Z
is needed in WwwText now that gzf_maybe handles it all.
|
|
Virtually all of our responses are going to be gzipped, anyways.
This will allow us to utilize zlib as a buffering layer and
share common code for async blob retrieval responses.
To streamline this and allow GzipFilter to be a parent class,
we'll replace the NoopFilter with a similar CompressNoop class
which emulates the two Compress::Raw::Zlib::Deflate methods we
use.
This drops a bunch of redundant code and will hopefully make
upcoming WwwStream changes easier to reason about.
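A minimal sketch of what such a no-op stand-in might look like, emulating the two Compress::Raw::Zlib::Deflate methods a gzip filter calls while passing data through unchanged (this is an assumption about the interface, not the project's actual CompressNoop code):

```perl
use strict;
use warnings;

package CompressNoop;
# emulate deflate() and flush() from Compress::Raw::Zlib::Deflate,
# appending input to the output buffer untouched
sub new { bless \(my $self), __PACKAGE__ }
sub deflate { # ($self, $input, $output) -> status
	$_[2] .= ref($_[1]) ? ${$_[1]} : $_[1];
	0; # Z_OK
}
sub flush { 0 } # nothing buffered, report Z_OK

package main;
my $noop = CompressNoop->new;
my $out = '';
$noop->deflate('hello', $out);
$noop->flush($out);
print "$out\n"; # hello
```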
|
|
This will make it easier to support asynchronous blob
retrievals. The `$ctx->{nr}' counter is no longer implicitly
supplied since many users didn't care for it, so stack overhead
is slightly reduced.
|
|
Like with WwwAtomStream and MboxGz, we can bless the existing
$ctx object directly to avoid allocating a new hashref. We'll
also switch from "->" to "::" to reduce stack utilization.
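The pattern can be sketched as follows, with illustrative names standing in for the real stream classes; the "::" call also shows the plain-subroutine dispatch mentioned above:

```perl
use strict;
use warnings;

package MyStream; # illustrative, not the real WwwAtomStream
sub attach { # called as MyStream::attach(...), not ->attach
	my ($class, $ctx) = @_;
	bless $ctx, $class; # reuse the hashref, no new allocation
}
sub line { $_[0]->{line} } # methods see the original keys

package main;
my $ctx = { line => 'hello' };   # existing request context
my $stream = MyStream::attach('MyStream', $ctx);
print $stream == $ctx ? "same ref\n" : "copied\n"; # same ref
print $stream->line, "\n"; # hello
```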
|
|
This allows -httpd to handle other requests while waiting
for git to retrieve and decode blobs. We'll also break
apart t/psgi_v2.t further to ensure tests run against
-httpd in addition to generic PSGI testing.
Using xt/httpd-async-stream.t to test against clones of meta@public-inbox.org
shows a 10-12% performance improvement with the following env:
TEST_JOBS=1000 TEST_CURL_OPT=--compressed TEST_ENDPOINT=new.atom
|
|
No need to deepen our object graph, here.
|
|
stat(2) on the inboxdir is unlikely to be correct, now that
msgmap truncates its journal (rather than unlinking it).
|
|
We always return Z (UTC) times, anyways, so we'll always
use gmtime() on the seconds-after-the-epoch.
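For illustration, formatting an epoch value as a UTC timestamp via gmtime() (a generic sketch, not the project's exact format string):

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# always render as UTC ("Z"); gmtime() keeps the result
# independent of the server's local timezone
my $t = strftime('%Y-%m-%dT%H:%M:%SZ', gmtime(0));
print "$t\n"; # 1970-01-01T00:00:00Z
```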
|
|
This restores gzip-by-default behavior for /$INBOX/$MSGID/raw
endpoints for all indexed inboxes. Unindexed v1 inboxes will
remain uncompressed, for now.
|
|
We can bless $ctx directly into a MboxGz object to reduce
hash lookups and allocations.
|
|
This lets the -httpd worker process make better use of time
instead of waiting for git-cat-file to respond. With 4 jobs in
the new test case against a clone of
<https://public-inbox.org/meta/>, a speedup of 10-12% is shown.
Even a single job shows a 2-5% improvement on an SSD.
|
|
This will allow us to gzip responses generated by cgit
and any other CGI programs or long-lived streaming
responses we may spawn.
|
|
This will allow others to mimic our award-winning homepage
design without needing to rely on Plack::Middleware::Deflater
or varnish to compress responses.
|
|
It's no longer needed: we no longer show a runtime error
when zlib is missing, since zlib is a hard requirement.
Fixes: a318e758129d616b ("make zlib-related modules a hard dependency")
|
|
This simplifies callers, as witnessed by the change to
WwwListing. It adds overhead to NoopFilter, but NoopFilter
should see little use as nearly all HTTP clients request gzip.
|
|
The new ->zmore and ->zflush APIs make it possible to replace
existing verbose usages of Compress::Raw::Zlib::Deflate and
simplify buffering logic for streaming large gzipped data.
One potentially user-visible change is we now break the mbox.gz
response on zlib failures, instead of silently continuing onto
the next message. zlib only seems to fail on OOM, which should
be rare, so it's ideal we drop the connection anyways.
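A hedged sketch of the buffering these APIs wrap: repeated deflate() calls (what ->zmore does) followed by a single flush (->zflush), dying on any zlib failure instead of continuing silently. The option names are the documented Compress::Raw::Zlib ones; the error handling is an assumption about the intent:

```perl
use strict;
use warnings;
use Compress::Raw::Zlib; # exports Z_OK, Z_FINISH, etc.

my ($gz, $err) = Compress::Raw::Zlib::Deflate->new(
	-WindowBits => 31, # 15 (max window) + 16 selects gzip format
	-AppendOutput => 1,
);
die "deflate init: $err" if $err != Z_OK;

my $zbuf = '';
for my $chunk ("From mboxrd\@z ...\n", "Subject: test\n") {
	$gz->deflate($chunk, $zbuf) == Z_OK or die 'deflate failed';
}
$gz->flush($zbuf, Z_FINISH) == Z_OK or die 'flush failed';
print length($zbuf) ? "gzipped\n" : "empty\n"; # gzipped
```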
|
|
The changes to GzipFilter here may be beneficial for building
HTML and XML responses in other places, too.
|
|
It'll give us a nicer HTML header and footer.
|
|
No point in streaming a tiny response via ->getline,
but we may stream to a gzipped buffer, later.
|
|
Most of our plain-text responses are config files
big enough to warrant compression.
|
|
Our most common endpoints deserve to be gzipped.
|
|
Plack::Middleware::Deflater forces us to use a memory-intensive
closure. Instead, work towards building compressed strings in
memory to reduce the overhead of buffering large HTML output.
|
|
We currently don't use bytes::length in ->write, so there's no
need to `use bytes'. Favor `//=' to describe the intent of the
conditional assignment since the C::R::Z::Deflate object is
always truthy. Also use the local $gz variable to avoid
unnecessary {gz} hash lookups.
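The idiom in miniature (make_gz here merely stands in for the real Compress::Raw::Zlib::Deflate constructor):

```perl
use strict;
use warnings;

# "//=" only assigns when the slot is undef; since a
# C::R::Z::Deflate object is always truthy, "||=" would also
# work, but "//=" states the defined-ness intent precisely
sub make_gz { 'deflate-object-stand-in' } # illustrative only

my $self = {};
my $gz = $self->{gz} //= make_gz();
# reuse the local $gz below rather than repeating $self->{gz}
print $gz eq $self->{gz} ? "cached\n" : "differs\n"; # cached
```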
|
|
We avoided a circular reference in 10ee3548084c125f
but introduced a pipe FD leak, instead. So handle the EOF
we get when the "git cat-file --batch" process exits and
closes its stdout FD.
v2: remove ->close entirely. PublicInbox::Git->cleanup
handles all cleanup. This prevents us from inadvertently
deleting the {async_cat} field associated with a different
pipe than the one GAC is monitoring.
Fixes: 10ee3548084c125f ("git_async_cat: remove circular reference")
|
|
inotify_add_watch(2), open(2), stat(2) may all fail due to
permissions errors, especially when running -nntpd/-imapd
as `nobody' as recommended.
|