Date | Commit message |
|
We must use a normal write instead of send(.., MSG_MORE)
when writing responses of "Content-Length: 0" to avoid
the corking effect MSG_MORE provides. We only want to
cork headers if we will send a non-empty body.
Fixes: c3eeaf664cf0 ("http: clarify intent for persistence")
This needs a proper test.
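The flag-selection logic described above might be sketched like this in Python (the real code is Perl; `pick_send_flags` is a hypothetical helper, and `MSG_MORE` is a Linux-only flag):

```python
import socket

# MSG_MORE corks output until a send without it follows. For a
# zero-length body there is nothing more to send, so corking the
# header write would only delay the response.
def pick_send_flags(content_length):
    """Return send() flags for the header write of a response."""
    if content_length == 0:
        return 0  # plain write: flush headers immediately
    return getattr(socket, "MSG_MORE", 0)  # cork headers until the body follows
```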
|
|
Server admins may not be able to afford to have too many
git-pack-objects processes running at once. Since PSGI
HTTP servers should already be configured to use multiple
processes for other requests, limit concurrency of smart
backends to one and fall back to dumb responses if we're
already generating a pack.
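A minimal sketch of that fallback policy (hypothetical names; the real code is Perl/PSGI):

```python
# Allow at most one git-pack-objects ("smart") backend at a time;
# everything else falls back to the cheaper "dumb" protocol.
MAX_SMART = 1
_active_smart = 0

def choose_backend():
    """Return 'smart' if a pack-generation slot is free, else 'dumb'."""
    global _active_smart
    if _active_smart < MAX_SMART:
        _active_smart += 1
        return "smart"
    return "dumb"

def release_backend(kind):
    """Free the slot once the smart response finishes."""
    global _active_smart
    if kind == "smart":
        _active_smart -= 1
```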
|
|
Using http.getanyfile still keeps the http-backend process
alive, so it's better to break out of that process and
handle serving entirely within the HTTP server.
|
|
This is probably trivial enough to be final?
|
|
While we're at it, update some references to ssoma in the
Makefile.PL comment.
|
|
No need to maintain per-block environment state when we can
localize it to per-command. We've had --git-dir= in git
since 1.4.2 (2006-08-12) and already use it all over the
place.
|
|
By converting to our git-fast-import-based Import
module. This should make us easier to install.
|
|
Quote-folding was a major design mistake pre-1.0. Since this
project is still in its infancy and unlikely to be in wide
use at the moment, redirect the /f/ endpoints back to the
plain message.
|
|
...And mark quotes as <span class="q"> since it barely
costs us anything and allows users to choose colors
themselves with custom, user-supplied CSS.
Reduce allocations of the Linkify object, too.
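The quote markup might look like this sketch (Python; `mark_quotes` is a hypothetical helper, the real code is Perl):

```python
import html

def mark_quotes(text):
    """Wrap quoted lines ('>' prefix) in <span class="q"> so readers can
    pick their own colors via custom CSS; other lines are only escaped."""
    out = []
    for line in text.split("\n"):
        esc = html.escape(line)
        if line.startswith(">"):
            out.append('<span class="q">%s</span>' % esc)
        else:
            out.append(esc)
    return "\n".join(out)
```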
|
|
Quote-folding can be detrimental as it fails to hide the
real problem of over-quoting.
Over-quoting wastes bandwidth and space for all readers, not
just WWW readers of the public-inbox. So hopefully removing
quote-folding support from the WWW interface can shame those
repliers into quoting only relevant portions of what they reply
to.
|
|
This will allow us to write fast importers for existing
archives as well as eventually removing the ssoma dependency
for performance and ease-of-installation.
|
|
This lets us easily one-line git commands like ``,
without having to remember --git-dir or escape arguments.
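A rough Python analog of such a helper (the original is a Perl method; names here are hypothetical):

```python
import subprocess

def git_cmd(git_dir, *args):
    """Build an argv list with --git-dir baked in; list form means no
    shell quoting or escaping is ever needed."""
    return ["git", "--git-dir=" + git_dir, *args]

def git(git_dir, *args):
    """One-liner: run git against a specific repository, return stdout."""
    return subprocess.run(git_cmd(git_dir, *args),
                          capture_output=True, text=True).stdout
```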
|
|
Allow users to do wacky things here if they really wish...
It's bad practice, but at least allow other readers to
mock users of these headers :P
|
|
This allows us to reduce installation dependencies while
retaining performance as it favors HTTP::Parser::XS when
it is installed and available.
PLACK_HTTP_PARSER_PP may be set to 1 to force a pure Perl
parser for testing.
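The selection logic is roughly this (a Python sketch of Plack's Perl behavior; `choose_parser` is hypothetical):

```python
# Prefer the fast XS parser when present; setting PLACK_HTTP_PARSER_PP=1
# forces the pure-Perl fallback so that code path can be tested, too.
def choose_parser(xs_available, env):
    if env.get("PLACK_HTTP_PARSER_PP") == "1":
        return "pp"
    return "xs" if xs_available else "pp"
```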
|
|
We cannot risk using all of a user's disk space buffering
gigantic requests. Use the defaults git gives us since
we primarily host git repositories.
|
|
HTTP::Parser::XS::PP does not reject excessively large
headers the way the XS version does. Ensure we reject headers
over 16K since public-inbox should never need such large
request headers.
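The cap might be enforced like this sketch (Python; the real check is Perl):

```python
MAX_HEADER = 16 * 1024  # public-inbox should never need larger request headers

def check_header_size(raw_headers):
    """Reject request headers over 16K, mirroring the limit the XS
    parser enforces but HTTP::Parser::XS::PP lacks."""
    if len(raw_headers) > MAX_HEADER:
        raise ValueError("request headers too large")
    return raw_headers
```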
|
|
This means we can avoid false-positives when inheriting multiple
Unix domain sockets.
|
|
Non-socket activation users will want to install Net::Server
for daemonization, pid file writing, and user/group switching.
|
|
Due to the deterministic way reference counting works,
we do not want to drop references to existing FDs
even if we no longer need the glob reference; the actual
FD is all we can pass through on exec.
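In Python terms, keeping the descriptor usable across exec looks roughly like this (a sketch; the original is Perl glob handling):

```python
import os

def keep_fd_for_exec(fd):
    """Clear close-on-exec so the descriptor number itself survives
    exec(); the high-level handle object can go away, the FD cannot."""
    os.set_inheritable(fd, True)
    return fd
```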
|
|
Just to ensure we hit the code path independently of
WWW code.
|
|
Listening on Unix domain sockets can be convenient for running
behind reverse proxies, avoiding port conflicts, limiting access,
or avoiding the overhead (if any) of TCP over loopback.
|
|
This allows us to share more code between daemons and avoids
having to make additional syscalls for preparing REMOTE_HOST
and REMOTE_PORT in the PSGI env in -httpd.
This will also make supporting HTTP (and NNTP) over Unix sockets
easier in a future commit.
|
|
This should make identifying leftover directories
from SIGKILL-ed tests easier.
|
|
It seems common for users to end statements with URLs,
while it is rare for a URL itself to end with a '.' or ';'.
So make a guess and assume the URL was not intended
to include the trailing '.' or ';'.
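The heuristic might be sketched like this (Python; the real linkifier is Perl, and the regex here is simplified):

```python
import re

URL_RE = re.compile(r'https?://\S+')

def extract_url(text):
    """Find a URL, then drop trailing '.' or ';' characters: sentences
    often end with URLs, but URLs rarely end with those characters."""
    m = URL_RE.search(text)
    if not m:
        return None
    url = m.group(0)
    while url.endswith(('.', ';')):
        url = url[:-1]
    return url
```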
|
|
We do not need to load Plack::Request outside of WWW anymore.
|
|
This is a step towards having consistent, reproducible
test output. (ugh, but each %hash usage screws that up).
|
|
In case folks do not use eatmydata or tmpfs for testing,
use transactions to reduce the number of fsync calls
made and hopefully prevent drives from wearing out.
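The idea in miniature, using SQLite from Python (table and column names are hypothetical):

```python
import os
import sqlite3
import tempfile

# Grouping many inserts into one transaction means a single COMMIT
# (and its fsync-heavy journal flush) instead of one per statement.
def bulk_insert(db_path, rows):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS msgs (num INTEGER, mid TEXT)")
    with con:  # one BEGIN ... COMMIT around all the inserts
        con.executemany("INSERT INTO msgs VALUES (?, ?)", rows)
    (count,) = con.execute("SELECT COUNT(*) FROM msgs").fetchone()
    con.close()
    return count

db = os.path.join(tempfile.mkdtemp(), "test.db")
inserted = bulk_insert(db, [(1, "<a@x>"), (2, "<b@x>")])
```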
|
|
HTTP responses may be long-running or requests may be slow or
pipelined. Ensure we don't kill them off prematurely.
|
|
No point in loading Data::Dumper if we do not use it
in the tests.
|
|
This should reduce overhead of spawning git processes
from our long-running httpd and nntpd servers.
|
|
Under Linux, vfork maintains constant performance as
parent process size increases. fork needs to prepare pages
for copy-on-write, requiring a linear scan of the address
space.
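Python's `os.posix_spawn` (3.8+) exposes the same idea: spawning without duplicating the parent's address space (glibc may implement it with vfork or clone(CLONE_VM) underneath):

```python
import os

# posix_spawn avoids fork()'s copy-on-write page-table setup, so spawn
# cost stays roughly flat even as the parent process grows large.
pid = os.posix_spawn("/bin/sh", ["sh", "-c", "true"], dict(os.environ))
_, status = os.waitpid(pid, 0)
exited_ok = (status == 0)
```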
|
|
Just in case my knowledge of chunking is wrong.
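For reference, HTTP/1.1 chunked encoding is just a hex length, CRLF, the data, CRLF, ended by a zero-length chunk (a sketch, not the project's code):

```python
def chunk(data: bytes) -> bytes:
    """Encode one HTTP/1.1 chunk: hex length, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

LAST_CHUNK = b"0\r\n\r\n"  # zero-length chunk terminates the body
```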
|
|
This is meant to provide an easy starting point for server admins.
It provides a basic HTTP server for admins unfamiliar with
configuring PSGI applications, as well as a management
interface identical to that of our nntpd implementation.
This HTTP server can also act as a generic server for
existing Plack/PSGI applications.
|
|
This should not depend on what is in the user's
$HOME config, oops.
|
|
This is enabled by default, for now.
Smart HTTP cloning support will be added later, but it will
be optional since it can be highly CPU and memory intensive.
|
|
Sometimes users forget trailing slashes; but we should not punish
them with infinite loops.
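The fix amounts to redirecting exactly once (a hypothetical sketch; the real code is Perl):

```python
def redirect_location(path):
    """Redirect a missing trailing slash exactly once; returning None
    for paths already ending in '/' avoids a redirect loop."""
    if path.endswith("/"):
        return None
    return path + "/"
```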
|
|
Fixes commit 83fedde4cde6539386c9d3ecf37fb99d74af8d93
("tests: fixup requirements for tests")
|
|
We should be able to run tests on bare-bones systems more easily.
|
|
HTTP/1.1 clients will want persistent connections and
need to know where responses terminate.
|
|
We'll be using it for more than just cat-file.
Adding a `popen' API for internal use allows us to save a bunch
of code in other places.
|
|
The "cat_file" sub now allows a block to be passed for partial
processing. Additionally, a new "check" method is added to
retrieve only object metadata: (SHA-1 identifier, type, size)
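The metadata comes from `git cat-file --batch-check` output, which might be parsed like this sketch (Python; the actual method is Perl):

```python
def parse_check(line):
    """Parse one `git cat-file --batch-check` line into the
    (sha1, type, size) triple the new "check" method returns."""
    sha1, objtype, size = line.split()
    return sha1, objtype, int(size)
```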
|
|
We use it as a general compressor for identifiers such as
subject paths, so using the "mid_" prefix probably is not
appropriate.
|
|
More testing is good, especially since clients I use
don't implement all the commands.
|
|
Multiline responses must end with "\r\n.\r\n", so we won't
break out early in case the OS doesn't support MSG_MORE.
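The terminator looks like this sketch (real code must also dot-stuff body lines beginning with '.', which is omitted here):

```python
def end_multiline(payload: bytes) -> bytes:
    """Terminate an NNTP multi-line response: it must end with
    CRLF '.' CRLF so clients know where the data ends."""
    if not payload.endswith(b"\r\n"):
        payload += b"\r\n"
    return payload + b".\r\n"
```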
|
|
The document data of a search message already contains a good chunk
of the information needed to respond to OVER/XOVER commands quickly.
Expand on that and serve OVER/XOVER directly from the
document data.
This adds a dependency on Xapian being available for nntpd usage,
but is probably alright since nntpd is esoteric enough that anybody
willing to run nntpd will also want search functionality offered
by Xapian.
This also speeds up XHDR/HDR with the To: and Cc: headers and
:bytes/:lines article metadata used by some clients for header
displays and marking messages as read/unread.
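An OVER/XOVER response line is eight tab-separated fields (RFC 3977 section 8.3 / RFC 2980); a sketch of the formatting, with the field values assumed to come from the Xapian document data:

```python
def over_line(num, subject, frm, date, mid, refs, nbytes, nlines):
    """Format one overview line: article number, Subject, From, Date,
    Message-ID, References, :bytes, :lines -- tab-separated."""
    fields = (str(num), subject, frm, date, mid, refs,
              str(nbytes), str(nlines))
    return "\t".join(fields)
```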
|
|
Oops, we need to test commands more closely :x
Add a missing prototype while we're at it for extra
checking.
|
|
This is similar to XHDR, but differs in how it handles Message-ID
lookups.
|
|
This is allowed by RFC 2980, and the HDR command
(to be implemented) in RFC 3977 supports it, too.
|
|
We'll require some modifications for HDR support, though.
|
|
RFC 3977 supports YYYYMMDD dates while retaining backwards
compatibility for old YYMMDD dates.
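Accepting both widths might look like this sketch (the two-digit century cutoff here is a simplification; RFC 3977 picks the interpretation closest to the current date):

```python
def expand_date(s):
    """Accept RFC 3977 YYYYMMDD dates while still taking legacy
    6-digit YYMMDD dates."""
    if len(s) == 8:
        return s
    if len(s) == 6:
        yy = int(s[:2])
        # simplistic fixed cutoff, not the exact RFC 3977 rule
        century = "19" if yy >= 70 else "20"
        return century + s
    raise ValueError("bad date: " + s)
```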
|