|
We still pull it in via Email::LocalDelivery, but that
dependency will go away soon.
|
|
Or whatever the appropriate Perl terminology is...
And we will need to do something appropriate for other
encodings, too. I still barely understand Perl Unicode
despite poring over the docs for years...
|
|
We need to ensure we show the message body ASAP since
the thread generation via Xapian could take a while
and maybe even raise an exception or crash.
|
|
They're effectively no-ops anyway, and we don't want to be
holding a reference to the read end of the parent pipe.
|
|
Otherwise, URLs can be crafted to inject HTML.
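A minimal sketch of the kind of escaping this implies (public-inbox has its own HTML-escaping layer; the helper below is illustrative only):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# hypothetical helper: entity-escape any URL-derived value before
# interpolating it into markup
sub html_escape {
	my ($s) = @_;
	my %ent = ('&' => '&amp;', '<' => '&lt;', '>' => '&gt;',
		'"' => '&quot;', "'" => '&#39;');
	$s =~ s/([&<>"'])/$ent{$1}/g;
	$s;
}

# a crafted URL component is inert once escaped:
my $evil = q{"><script>alert(1)</script>};
print html_escape($evil), "\n";
# → &quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;
```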
|
|
Oops, pesky users of single-character email addresses!
|
|
* unsubscribe:
unsubscribe.milter: use default postfork dispatcher
unsubscribe: prevent decrypt from showing random crap
examples/unsubscribe-psgi@.service: disable worker processes
unsubscribe: bad URL fixup
unsubscribe: get off mah lawn^H^H^Hist
|
|
While we may end up mirroring lists which allow HTML mail,
encourage plain-text for compatibility since all current
inboxes we host are text-only.
|
|
Oops, needless waste of space.
|
|
Oops :x  Add a test against live data for any
unprintable characters, too, since this could be a dangerous
source of HTML injection.
|
|
This should reduce link following for replies and improve
visibility. This should also reduce cache overhead/footprint
for crawlers.
|
|
Oops, this quiets down a warning seen in logs.
|
|
No need to duplicate the string when transforming it;
learned from studying SpamAssassin 3.4.1
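The idea can be sketched like this (illustrative; the point is simply to substitute on the original scalar rather than a copy):

```perl
use strict;
use warnings;

# transforming a copy duplicates the whole buffer:
#   my $copy = $str;
#   $copy =~ s/\s+\z//;
# substituting on the scalar itself avoids the duplicate:
my $str = "Subject: hello \t\n";
$str =~ s/\s+\z//;  # in-place, no second buffer
print "[$str]\n";   # → [Subject: hello]
```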
|
|
We cannot let a client monopolize the single-threaded server
even if it can drain the socket buffer faster than we can
emit data.
While we're at it, acknowledge this behavior (which happens
naturally) in httpd/async.
The same idea is present in NNTP for the long_response code.
This is the HTTP followup to:
commit 0d0fde0bff97 ("nntp: introduce long response API for streaming")
commit 79d8bfedcdd2 ("nntp: avoid signals for long responses")
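The long-response pattern referenced above can be sketched with a toy event loop (assumed structure for illustration, not the actual Danga::Socket-based code): emit one slice of output per loop tick and requeue the callback, so other clients get serviced between slices.

```perl
use strict;
use warnings;

my @loop;                      # toy stand-in for the event loop
sub requeue { push @loop, $_[0] }

# emit one slice per loop "tick", requeueing until done
# (a real implementation would weaken $cb to avoid a cycle):
sub long_response {
	my ($emit) = @_;
	my ($i, $limit) = (0, 3);
	my $cb;
	$cb = sub {
		$emit->("slice $i");
		requeue($cb) if ++$i < $limit;
	};
	requeue($cb);
}

my @out;
long_response(sub { push @out, $_[0] });
(shift @loop)->() while @loop; # drain the toy loop
print join(', ', @out), "\n";  # → slice 0, slice 1, slice 2
```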
|
|
Still a work in progress, but SearchView no longer depends
on Plack::Request at all and Feed is getting there.
We now parse all query parameters up front, but we may do
that lazily again in the future.
|
|
Accessing $env directly is faster and we will eventually
remove all Plack::Request dependencies.
|
|
Plack::Request is unnecessary overhead for this given the
strictness of git-http-backend. Furthermore, having to make
commit 311c2adc8c63 ("avoid Plack::Request parsing body")
to avoid tempfiles should not have been necessary.
|
|
Oops, we totally forgot to automate testing for this :x
|
|
We can't leave them lingering in the parent process at
all due to the risk of corruption with multiple processes.
|
|
We cannot have strftime using the local timezone for %z.
This fixes output when a server is not running UTC.
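A sketch of the pitfall and the workaround (illustrative; epoch and format chosen for the example): Perl's POSIX::strftime renders %z from the process's local timezone even when handed gmtime() fields, so the zone must be emitted explicitly when formatting UTC.

```perl
use strict;
use warnings;
use POSIX qw(strftime);

# %z follows the process's local timezone even when given
# gmtime() fields, so append the zone explicitly for UTC output:
my $t = 1000000000; # 2001-09-09 01:46:40 UTC
my $date = strftime('%a, %d %b %Y %H:%M:%S', gmtime($t)) . ' +0000';
print $date, "\n"; # → Sun, 09 Sep 2001 01:46:40 +0000
```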
|
|
Most of its functionality is in the PublicInbox::Inbox class.
While we're at it, we no longer auto-create newsgroup names
based on the inbox name, since newsgroup names probably deserve
some thought when it comes to hierarchy.
|
|
It's moved into the Inbox module and we no longer use it
in WWW.
|
|
I haven't used it in a while and the existing "description"
is probably good enough.
If we support it again, it should be plain-text + auto-linkified
for ease-of-maintenance and consistency.
|
|
It's a low priority, but acknowledge it.
|
|
Oops, added a test to prevent regressions while we're at it.
|
|
The generic PSGI code needs to avoid resource leaks if
smart cloning is disabled (due to resource constraints).
|
|
This makes more sense as it keeps management of rpipe
nice and neat.
|
|
The restart_read callback has no chance of circular reference,
and weakening $self before we create it can cause $self to
be undefined inside the callback (seen during stress testing).
Fixes: 395406118cb2 ("httpd/async: prevent circular reference")
|
|
We need to avoid circular references in the generic PSGI layer,
do it by abusing DESTROY.
|
|
Lightly tested, this seems to work when mass-aborting
responses. Will still need to automate the testing...
|
|
We must avoid circular references which can cause leaks in
long-running processes. This callback is dangerous since
it may never be called to properly terminate everything.
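The general technique can be sketched as follows (simplified; not the actual httpd/async code): break the cycle by closing over a weakened alias rather than the object itself.

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

# a callback closing over its own container forms a cycle:
#   $obj->{cb} = sub { $obj->{name} };  # leaks in long-lived procs
# closing over a weakened alias breaks the cycle, while the
# caller's strong reference keeps the object alive:
my $obj = { name => 'resp' };
my $alias = $obj;
weaken($alias);
$obj->{cb} = sub { $alias ? $alias->{name} : undef };
print $obj->{cb}->(), "\n"; # → resp
```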
|
|
git has stricter requirements for ident names (no '<>')
than Email::Address, which allows them.
Even in 1.908, Email::Address also has an incomplete fix for
CVE-2015-7686 with a DoS-able regexp for comments. Since we
don't care for or need all the RFC compliance of Email::Address,
avoiding it entirely may be preferable.
Email::Address will still be installed as a requirement for
Email::MIME, but it is only used by
Email::MIME::header_str_set, which we do not use.
|
|
Having an excessive amount of git-pack-objects processes is
dangerous to the health of the server. Queue up process spawning
for long-running responses and serve them sequentially, instead.
|
|
We no longer override Danga::Socket::event_write and instead
re-enable reads by queuing up another callback in the $close
response callback. This is necessary because event_write may not be
completely done writing a response, only the existing buffered data.
Furthermore, the {closed} field can be set at almost any time
when writing, so we must check it before acting on pipelined
requests as well as during write callbacks in more().
|
|
Standardize the code we have in place to avoid creating too many
timer objects. We do not need exact timers for things that don't
need to be run ASAP, so we can play things fast and loose to avoid
wasting power with unnecessary wakeups.
We only need two classes of timers:
* asap - run this on the next loop tick, after operating on
@Danga::Socket::ToClose to close remaining sockets
* later - run at some point in the future. It could be as
soon as immediately (like "asap"), and as late as 60s into
the future.
In the future, we may support an "emergency" switch to fire
"later" timers immediately.
|
|
Oops, really gotta start checking logs in tests :x
Fixes: bb38f0fcce739 ("http: chunk in the server, not middleware")
|
|
Since PSGI does not require Transfer-Encoding: chunked or
Content-Length, we cannot expect random apps we host to chunk
their responses.
Thus, to improve interoperability, chunk at the HTTP layer like
other PSGI servers do. I'm choosing a more syscall-intensive
method (via multiple send(...MSG_MORE) calls) for now to reduce
copy + packet overhead.
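The framing being added looks like this (the wire format from the HTTP/1.1 spec; the send()/MSG_MORE details are elided):

```perl
use strict;
use warnings;

# HTTP/1.1 chunked transfer coding: hex length, CRLF, payload,
# CRLF; a zero-length chunk terminates the body.
sub chunk {
	my ($buf) = @_;
	sprintf("%x\r\n", length($buf)) . $buf . "\r\n";
}

print chunk('hello'), chunk('world!'), "0\r\n\r\n";
```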
|
|
We will have clients dropping connections during long clone
and fetch operations; so do not retain references holding
backend processes once we detect a client has dropped.
|
|
Only check query parameters since there's no useful body
in there.
|
|
This bit is still being redone to support gigantic repos.
|
|
We may spawn this in a large server process, so be sure
to take advantage of the optional vfork() support
for folks who set PERL_INLINE_DIRECTORY.
|
|
Users may change terminal sizes if the process is connected to a
terminal, so we can't reasonably expect SIGWINCH to work as
intended.
|
|
We can't rely on absolute paths when installed on other
systems.
Unfortunately, mlmmj-* requires them, but none of the core
code will use them.
|
|
The offset argument must be an integer for Xapian,
however users (or bots) type the darndest things.
AFAIK this has no security implications besides triggering
a warning (which could lead to out-of-space errors).
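A sketch of the coercion this implies (hypothetical helper name, not the actual code):

```perl
use strict;
use warnings;

# coerce an untrusted query parameter to a non-negative integer
# before it reaches Xapian (hypothetical helper):
sub sanitize_offset {
	my ($o) = @_;
	(defined($o) && $o =~ /\A[0-9]+\z/) ? $o + 0 : 0;
}

print sanitize_offset('200'), ' ', sanitize_offset('1e9; x'), "\n";
# → 200 0
```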
|
|
This simplifies the code somewhat, but it could probably
still be made simpler. It will need to support command
queueing so processes for expensive commands can be
spawned sequentially.
|
|
Unfortunately, the original design did not work because
middleware can wrap the response body and make `async_pass'
invisible to HTTP.pm
|
|
We can rely entirely on getline + close callbacks
and be compatible with 100% of PSGI servers.
|
|
We will figure out a different way to avoid overloading...
|
|
This can avoid an expensive copy for big strings.
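For example (illustrative; the callee works on the caller's buffer through a reference instead of copying it):

```perl
use strict;
use warnings;

# pass a reference so the callee edits the caller's buffer
# rather than duplicating a potentially huge scalar:
sub strip_cr {
	my ($ref) = @_;
	$$ref =~ s/\r\n/\n/g;
}

my $body = "line1\r\nline2\r\n";
strip_cr(\$body);
print $body; # → "line1\nline2\n"
```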
|
|
Otherwise, we get deep recursion as we keep calling
ourselves on giant responses.
|