Date | Commit message |
|
We cannot let a client monopolize the single-threaded server
even if it can drain the socket buffer faster than we can
emit data.
While we're at it, acknowledge this behavior (which happens
naturally) in httpd/async.
The same idea is present in NNTP for the long_response code.
This is the HTTP followup to:
commit 0d0fde0bff97 ("nntp: introduce long response API for streaming")
commit 79d8bfedcdd2 ("nntp: avoid signals for long responses")
|
|
User input is imperfect; do not pollute our mail logs with
warnings we cannot fix. This is documented in the
Email::MIME::ContentType manpage so it should remain supported.
|
|
Still a work in progress, but SearchView no longer depends
on Plack::Request at all and Feed is getting there.
We now parse all query parameters up front, but we may do
that lazily again in the future.
|
|
Accessing $env directly is faster and we will eventually
remove all Plack::Request dependencies.
|
|
Plack::Request is unnecessary overhead for this given the
strictness of git-http-backend. Furthermore,
commit 311c2adc8c63 ("avoid Plack::Request parsing body")
to avoid tempfiles should not have been necessary.
|
|
Oops, we totally forgot to automate testing for this :x
|
|
We can't leave them lingering in the parent process at
all due to the risk of corruption with multiple processes.
|
|
We cannot have strftime using the local timezone for %z.
This fixes output when a server is not running UTC.
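An illustrative Python analog of the fix (the project itself is Perl, and the function name here is hypothetical): format with gmtime and an explicit zone instead of letting strftime substitute the server's local timezone for %z.

```python
import time

def rfc822_date_utc(epoch):
    # format in UTC with a hard-coded numeric zone; strftime('%z')
    # on a localtime struct would reflect the server's timezone
    return time.strftime('%a, %d %b %Y %H:%M:%S +0000', time.gmtime(epoch))
```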
|
|
Ugh, this is a nasty corruption bug and I can't recommend
this project for Debian 8.0 users without documenting this.
|
|
It's no longer a part of the stock Perl distribution,
and we don't need a whole module for just one function.
|
|
Most of its functionality is in the PublicInbox::Inbox class.
While we're at it, we no longer auto-create newsgroup names
based on the inbox name, since newsgroup names probably deserve
some thought when it comes to hierarchy.
|
|
It's moved into the Inbox module and we no longer use it
in WWW.
|
|
I haven't used it in a while and the existing "description"
is probably good enough.
If we support it again, it should be plain-text + auto-linkified
for ease-of-maintenance and consistency.
|
|
We build the atomUrl from url, which can change
dynamically depending on what PSGI environment it
is called under.
|
|
Relying on the number of processors isn't a great idea
since some of our tests rely on delays to test blocking
and slow client behavior.
|
|
It's a low priority, but acknowledge it.
|
|
We don't serve things like robots.txt, favicon.ico, or
.well-known/ endpoints ourselves, but ensure we can be
used with Plack::App::Cascade for others.
|
|
Oops, added a test to prevent regressions while we're at it.
|
|
The generic PSGI code needs to avoid resource leaks if
smart cloning is disabled (due to resource constraints).
|
|
This makes more sense as it keeps management of rpipe
nice and neat.
|
|
The restart_read callback has no chance of circular reference,
and weakening $self before we create it can cause $self to
be undefined inside the callback (seen during stress testing).
Fixes: 395406118cb2 ("httpd/async: prevent circular reference")
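The pitfall translates to any language with weak references; a Python `weakref` analog (illustrative only, `Conn` and `make_cb` are hypothetical names, not the project's Perl code): once the last strong reference is gone, the weakened reference resolves to nothing inside the callback.

```python
import weakref

class Conn:
    def on_restart_read(self):
        return 'reading'

def make_cb(conn):
    # analog of weakening $self before creating the callback: if the
    # last strong reference is dropped, the weakref yields None
    # (Perl's undef) when the callback finally runs
    wconn = weakref.ref(conn)
    def restart_read():
        c = wconn()
        return c.on_restart_read() if c is not None else None
    return restart_read
```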
|
|
We need to avoid circular references in the generic PSGI layer;
do it by abusing DESTROY.
|
|
Lightly tested, this seems to work when mass-aborting
responses. Will still need to automate the testing...
|
|
We must avoid circular references which can cause leaks in
long-running processes. This callback is dangerous since
it may never be called to properly terminate everything.
|
|
git has stricter requirements for ident names (no '<>')
which Email::Address allows.
Even in 1.908, Email::Address also has an incomplete fix for
CVE-2015-7686 with a DoS-able regexp for comments. Since we
don't care for or need all the RFC compliance of Email::Address,
avoiding it entirely may be preferable.
Email::Address will still be installed as a requirement for
Email::MIME, but it is only used by
Email::MIME::header_str_set, which we do not use.
|
|
Having an excessive amount of git-pack-objects processes is
dangerous to the health of the server. Queue up process spawning
for long-running responses and serve them sequentially, instead.
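The queueing idea can be sketched as a small FIFO limiter (illustrative Python; `SpawnQueue` and `limit` are hypothetical names, not the project's Perl API): spawns beyond the limit wait their turn until a running child finishes.

```python
from collections import deque

class SpawnQueue:
    # serialize expensive subprocess spawns: run at most `limit`
    # at once, queue the rest in FIFO order
    def __init__(self, limit=1):
        self.limit = limit
        self.running = 0
        self.pending = deque()

    def spawn(self, start_fn):
        if self.running < self.limit:
            self.running += 1
            start_fn()
        else:
            self.pending.append(start_fn)

    def done(self):
        # call when a child exits; start the next queued spawn
        if self.pending:
            self.pending.popleft()()
        else:
            self.running -= 1
```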
|
|
We no longer override Danga::Socket::event_write and instead
re-enable reads by queuing up another callback in the $close
response callback. This is necessary because event_write may not be
completely done writing a response, only the existing buffered data.
Furthermore, the {closed} field can be set at almost any time when
writing, so we must check it before acting on pipelined requests as
well as during write callbacks in more().
|
|
Standardize the code we have in place to avoid creating too many
timer objects. We do not need exact timers for things that don't
need to be run ASAP, so we can play things fast and loose to avoid
wasting power with unnecessary wakeups.
We only need two classes of timers:
* asap - run this on the next loop tick, after operating on
@Danga::Socket::ToClose to close remaining sockets
* later - run at some point in the future. It could be as
soon as immediately (like "asap"), and as late as 60s into
the future.
In the future, we may support an "emergency" switch to fire
"later" timers immediately.
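The two timer classes could be sketched as follows (illustrative Python, not the project's Danga::Socket-based Perl; all names here are hypothetical). "later" callbacks share a single coalesced deadline instead of one wakeup per timer.

```python
import time

_asap = []            # run on the next loop tick
_later = []           # run within the next 60s, sharing one wakeup
_later_deadline = None

def asap(cb):
    _asap.append(cb)

def later(cb, max_delay=60):
    global _later_deadline
    _later.append(cb)
    if _later_deadline is None:
        # one shared deadline for all "later" timers avoids
        # creating a wakeup per timer object
        _later_deadline = time.monotonic() + max_delay

def run_asap():
    # the event loop calls this every tick
    while _asap:
        _asap.pop(0)()

def fire_later_now():
    # the "emergency" switch: flush "later" timers immediately
    # (the loop would otherwise flush them when the deadline passes)
    global _later_deadline
    _later_deadline = None
    while _later:
        _later.pop(0)()
```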
|
|
Oops, really gotta start checking logs in tests :x
Fixes: bb38f0fcce739 ("http: chunk in the server, not middleware")
|
|
Since PSGI does not require Transfer-Encoding: chunked or
Content-Length, we cannot expect random apps we host to chunk
their responses.
Thus, to improve interoperability, chunk at the HTTP layer like
other PSGI servers do. I'm choosing a more syscall-intensive method
(multiple send(...MSG_MORE) calls) for now to reduce copy and
packet overhead.
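The chunk framing itself is simple; a minimal Python sketch of RFC 7230 chunked coding (illustrative, not the project's Perl implementation): each chunk is a hex length, CRLF, the payload, CRLF, terminated by a zero-length chunk.

```python
def chunk(data: bytes) -> bytes:
    # frame one piece of a response body: hex length, CRLF,
    # payload, CRLF
    return b'%x\r\n%s\r\n' % (len(data), data)

def last_chunk() -> bytes:
    # zero-length chunk terminates the body
    return b'0\r\n\r\n'
```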
|
|
We will have clients dropping connections during long clone
and fetch operations, so do not retain references holding
backend processes once we detect a client has dropped.
|
|
Only check query parameters since there's no useful body
in there.
|
|
Some readers will want to use "HTTPS Everywhere" conveniently,
and I will support it.
|
|
This bit is still being redone to support gigantic repos.
|
|
We may spawn this in a large server process, so be sure
to take advantage of the optional vfork() support for
folks who set PERL_INLINE_DIRECTORY.
|
|
Followup-to: commit 24e0219f364ed402f9136227756e0f196dc651aa
("remove GIT_DIR env usage in favor of --git-dir")
|
|
Users may change terminal sizes if the process is connected to a
terminal, so we can't reasonably expect SIGWINCH to work as
intended.
|
|
We can't rely on absolute paths when installed on other
systems.
Unfortunately, mlmmj-* requires them, but none of the core
code will use them.
|
|
The offset argument must be an integer for Xapian,
however users (or bots) type the darndest things.
AFAIK this has no security implications besides triggering
a warning (which could lead to out-of-space errors).
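A hedged Python sketch of the defensive coercion (the project is Perl, and `sanitize_offset` is an illustrative name, not the project's API): fall back to a default instead of passing garbage through.

```python
def sanitize_offset(param, default=0):
    # Xapian expects a non-negative integer offset; users and bots
    # send arbitrary strings, so coerce instead of warning
    try:
        off = int(param)
    except (TypeError, ValueError):
        return default
    return off if off >= 0 else default
```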
|
|
This simplifies the code somewhat, but it could probably
still be made simpler. It will still need command queueing
so expensive processes can be run sequentially.
|
|
Unfortunately, the original design did not work because
middleware can wrap the response body and make `async_pass'
invisible to HTTP.pm
|
|
We can rely entirely on getline + close callbacks
and be compatible with 100% of PSGI servers.
|
|
We will figure out a different way to avoid overloading...
|
|
We need to ensure $? is set properly for users.
|
|
This can avoid an expensive copy for big strings.
|
|
Otherwise, giant responses trigger deep recursion as we
keep calling ourselves.
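Depth can be kept constant by looping over the buffer instead of recursing per write; a minimal Python sketch (the project itself is Perl, and `emit` here is a hypothetical writer callback):

```python
def write_all(emit, buf, chunk_size=8192):
    # iterative write loop: slicing in a loop rather than recursing
    # once per chunk keeps the stack flat for giant responses
    off = 0
    while off < len(buf):
        emit(buf[off:off + chunk_size])
        off += chunk_size
```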
|
|
Sometimes we need to read something to ensure it's a successful
response.
|
|
This will allow us to minimize buffering after we wait
(possibly a long time) for readability. This also greatly
reduces the amount of Danga::Socket-specific knowledge we
have in our PSGI code, making it easier for others to
understand.
|
|
We don't need to update-server-info (or read-tree) if fast
import was spawned for removals and no changes were made.
|
|
We shouldn't need sigprocmask unless we're running multiple
native threads or using vfork, neither of which is the case,
here.
|