It was harmless, besides wasting space and memory.
|
|
...if the Email::MIME ->crlf is LF.
Email::MIME::Encodings forces everything to CRLF on
quoted-printable messages for RFC-compliance, and
git-apply --ignore-whitespace seems to miss a context
line which is just "\r\n" (without a leading space).
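The mismatch above can be sketched in Python (the helper name is hypothetical, not from public-inbox): quoted-printable decoding yields CRLF line endings, so normalizing to the LF endings of the target blobs avoids context lines that are bare "\r\n".

```python
import quopri

def normalize_eol(patch_bytes):
    # Hypothetical helper: QP-encoded bodies arrive with CRLF line
    # endings, but the blobs being patched use bare LF, so CRLF
    # context lines would not match without normalization.
    return patch_bytes.replace(b"\r\n", b"\n")

# quoted-printable encoders emit CRLF ("=0D=0A" decodes to "\r\n")
decoded = quopri.decodestring(b"context line=0D=0A")
print(normalize_eol(decoded))
```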
|
|
We need to keep line-numbers from <a> tags synced to the actual
line numbers in the code when working in smaller viewports.
Maybe I only work on reasonable projects, but excessively
long lines seem to be less of a problem in code than they are
in emails.
|
|
We must reset the diff context when starting a new file,
and we must correctly check for all-zero object IDs as the
post-image.
|
|
We still need to use XHTML in the Atom feed, and XHTML requires
attributes to be quoted, whereas HTML 5 does not.
|
|
This makes things less error-prone and allows us to only
highlight the "@@ -\S+ \+\S+ @@" part of the hunk header
line, without highlighting the function context.
This more closely matches the coloring behavior of git-diff(1).
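A minimal sketch of splitting a hunk header on the regex quoted above, so only the range portion is colorized while the function context stays plain:

```python
import re

# highlight only the "@@ -... +... @@" range, not the function context
HUNK_RE = re.compile(r'^(@@ -\S+ \+\S+ @@)(.*)')

line = '@@ -102,7 +102,8 @@ sub git_unquote {'
m = HUNK_RE.match(line)
print(m.group(1))  # the part to colorize
print(m.group(2))  # function context, left as plain text
```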
|
|
Having diff highlighting alone is still useful, even
if blob-resolution/recreation is too expensive or
unfeasible.
|
|
Since we now support more CSS classes for coloring,
give this feature more visibility.
|
|
Maybe we'll default to a dark theme to promote energy savings...
See contrib/css/README for details.
|
|
Apparently Email::MIME returns quoted-printable text
with CRLF. So use --ignore-whitespace with git-apply(1)
and ensure we don't capture '\r' in pathnames from
those emails.
And restore "$@" dumping when we die while solving.
|
|
As with our use of the trailing slash in the $MESSAGE_ID/T/ and
$MESSAGE_ID/t/ endpoints, this is for 'wget -r --mirror'
compatibility, as well as allowing sysadmins to quickly stand up
a static directory with "index.html" in it to reduce load.
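The mapping this enables can be sketched as follows (a hypothetical resolver, not public-inbox code): a static file server, like wget's mirror layout, resolves a trailing-slash URL to an index.html inside the corresponding directory.

```python
import posixpath

def resolve_static(docroot, url_path):
    # Hypothetical: trailing-slash URLs resolve to index.html,
    # matching the directory layout 'wget -r --mirror' produces.
    if url_path.endswith('/'):
        url_path += 'index.html'
    return posixpath.normpath(posixpath.join(docroot, url_path.lstrip('/')))

print(resolve_static('/srv/archive', '/inbox/some-msgid/T/'))
```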
|
|
Applying a 100+ patch series can be a pain and lead to a wayward
client monopolizing the connection. On the other hand, we'll
also need to be careful and limit the number of in-flight file
descriptors and parallel git-apply processes when we move to an
evented model, here.
|
|
It's not likely to be worth our time to support
a callback-driven model for something which happens
once per patch series.
|
|
This will allow each patch search via Xapian to "yield" the
current client in favor of another client in the PSGI web
interface for fairness.
|
|
We'll be breaking this up into several steps, too, since
searching inboxes for patch blobs can take tens of milliseconds
for me.
|
|
A bit messy at the moment, but we need to break this up
into smaller steps for fairness with other clients, as
applying dozens of patches can take several hundred
milliseconds.
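One way to picture the fairness concern (a toy sketch, not the actual implementation): express each client's work as a generator of small steps and round-robin between clients, so one long patch series cannot monopolize the server.

```python
from collections import deque

def solve_steps(client, n_patches):
    # Hypothetical: one yield per patch-application step
    for i in range(1, n_patches + 1):
        yield f"{client}: applied patch {i}/{n_patches}"

def round_robin(jobs):
    # run one step per client per turn, for fairness
    queue = deque(jobs)
    order = []
    while queue:
        job = queue.popleft()
        try:
            order.append(next(job))
            queue.append(job)  # re-queue so other clients get a turn
        except StopIteration:
            pass
    return order

print(round_robin([solve_steps("A", 3), solve_steps("B", 2)]))
```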
|
|
We want more fine-grained scheduling for PSGI use, as
the patch application step can take hundreds of milliseconds
on my modest hardware
|
|
Help users find out where each step of the resolution came from.
Also, we must cleanly abort the process if we have missing blobs.
And refine the output to avoid unnecessary braces, too.
|
|
David Turner's patch to return "ambiguous" seems like a reasonable
patch for future versions of git:
https://public-inbox.org/git/672a6fb9e480becbfcb5df23ae37193784811b6b.camel@novalis.org/
|
|
Meaningful names in URLs are nice, and they can make
life easier when supporting syntax highlighting.
|
|
No need to incur extra I/O traffic with a working-tree and
uncompressed files on the filesystem. git can handle patch
application in memory, and we rely on exact blob matching
anyway, so there's no need for 3-way patch application.
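A sketch of the idea (command construction only; the actual solver internals differ): pointing GIT_INDEX_FILE at a throwaway index lets `git apply --cached` operate entirely on blobs, with no working-tree checkout.

```python
import os
import tempfile

def apply_in_memory_cmd(git_dir, patch_path):
    # Hypothetical sketch: a temporary index keeps I/O off the
    # filesystem; exact blob matching means no 3-way merge needed.
    tmp = tempfile.NamedTemporaryFile(prefix='solver-index-', delete=False)
    tmp.close()
    env = dict(os.environ, GIT_DIR=git_dir, GIT_INDEX_FILE=tmp.name)
    argv = ['git', 'apply', '--cached', patch_path]
    return env, argv

env, argv = apply_in_memory_cmd('/path/to/repo.git', '/tmp/1.patch')
print(argv)
```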
|
|
Ambiguity is not worth it for internal usage with the
solver.
|
|
Remove the make_path dependency and call mkdir directly.
Capture mode on new files, avoid referencing non-existent
functions and enhance the debug output for users to read.
|
|
This will be useful for disambiguating short OIDs in older
emails when abbreviations were shorter.
Tested against the following script with /path/to/git.git
==> t.perl <==
use strict;
use PublicInbox::Git;
use Data::Dumper;
my $dir = shift or die "Usage: $0 GIT_DIR # (of git.git)";
my $git = PublicInbox::Git->new($dir);
my @res = $git->check('dead');
print Dumper({res => \@res, err => $git->last_check_err});
@res = $git->check('5335669531d83d7d6c905bcfca9b5f8e182dc4d4');
print Dumper({res => \@res, err=> $git->last_check_err});
|
|
It'll be helpful for displaying progress in SolverGit
output.
|
|
For redundancy and centralization resistance.
|
|
This will lookup git blobs from associated git source code
repositories. If the blobs can't be found, an attempt to
"solve" them via patch application will be performed.
Eventually, this may become the basis of a type-agnostic
frontend similar to "git show".
|
|
Same reasoning as commit 7b7885fc3be2719c068c0a2fc860d53f17a1d933:
GUI browsers have a tendency to use a different
font-family (and thus a different size) than the rest of the page.
|
|
It seems pointless due to the indentation, and interacts
badly with some CSS colouring.
|
|
We need to work with 0x22 (double-quote) and 0x5c (backslash),
even if they're oddball characters in filenames which wouldn't
be used by projects I'd want to work on.
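git C-quotes such filenames in diff output; here is a Python sketch of the unquoting (mirroring the Perl git_unquote benchmark script later in this log), extended to handle `\"` and `\\` alongside the octal escapes:

```python
import re

GIT_ESC = {'a': '\a', 'b': '\b', 'f': '\f', 'n': '\n',
           'r': '\r', 't': '\t', 'v': '\v', '"': '"', '\\': '\\'}

def git_unquote(s):
    # Undo git's C-style quoting of unusual pathnames
    m = re.fullmatch(r'"(.*)"', s)
    if not m:
        return s  # unquoted: the common case
    s = m.group(1)
    s = re.sub(r'\\([abfnrtv"\\])', lambda m: GIT_ESC[m.group(1)], s)
    s = re.sub(r'\\([0-7]{1,3})', lambda m: chr(int(m.group(1), 8)), s)
    return s

print(git_unquote(r'"odd\"name\\here"'))  # odd"name\here
```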
|
|
Alpine is apparently stricter than other clients I've tried
w.r.t. using CRLF for headers. So do the same thing we do for
bodies to ensure we only emit CRLFs and no bare LFs.
Reported-by: Wang Kang <i@scateu.me>
https://public-inbox.org/meta/alpine.DEB.2.21.99.1901161043430.29788@la.scateu.me/
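The header fix can be sketched as converting any bare LF to CRLF, the same treatment as bodies (the function name here is hypothetical):

```python
import re

def lf_to_crlf(s):
    # Ensure only CRLF line endings; bare LFs in headers upset
    # strict clients like Alpine. The lookbehind leaves existing
    # CRLF pairs untouched.
    return re.sub(r'(?<!\r)\n', '\r\n', s)

hdr = "Subject: test\nMessage-ID: <x@y>\r\n"
print(repr(lf_to_crlf(hdr)))
```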
|
|
Actually, it turns out git.git/remote.c::valid_remote_nick
rules alone are insufficient; more checking is performed as
part of refname validation in git.git/refs.c::check_refname_component.
I also considered rejecting URL-unfriendly inbox names entirely,
but realized some users may intentionally configure names not
handled by our WWW endpoint for archives they don't want
accessible over HTTP.
|
|
This function doesn't have a lot of callers at the moment so
none of them are affected by this change. But the plan is to
use this in our WWW code for things, so do it now before we
call it in more places.
Results from a Thinkpad X200 with a Core2Duo P8600 @ 2.4GHz:
Benchmark: timing 10 iterations of cp, ip...
cp: 12.868 wallclock secs (12.86 usr + 0.00 sys = 12.86 CPU) @ 0.78/s (n=10)
ip: 10.9137 wallclock secs (10.91 usr + 0.00 sys = 10.91 CPU) @ 0.92/s (n=10)
Note: I mainly care about unquoted performance because
that's the common case for the target audience of public-inbox.
Script used to get benchmark results against the Linux source tree:
==> bench_unquote.perl <==
use strict;
use warnings;
use Benchmark ':hireswallclock';
my $nr = 50;
my %GIT_ESC = (
a => "\a",
b => "\b",
f => "\f",
n => "\n",
r => "\r",
t => "\t",
v => "\013",
);
sub git_unquote_ip ($) {
return $_[0] unless ($_[0] =~ /\A"(.*)"\z/);
$_[0] = $1;
$_[0] =~ s/\\([abfnrtv])/$GIT_ESC{$1}/g;
$_[0] =~ s/\\([0-7]{1,3})/chr(oct($1))/ge;
$_[0];
}
sub git_unquote_cp ($) {
my ($s) = @_;
return $s unless ($s =~ /\A"(.*)"\z/);
$s = $1;
$s =~ s/\\([abfnrtv])/$GIT_ESC{$1}/g;
$s =~ s/\\([0-7]{1,3})/chr(oct($1))/ge;
$s;
}
chomp(my @files = `git -C ~/linux ls-tree --name-only -r v4.19.13`);
timethese(10, {
cp => sub { for (0..$nr) { git_unquote_cp($_) for @files } },
ip => sub { for (0..$nr) { git_unquote_ip($_) for @files } },
});
|
|
We'll be using it outside of searchidx...
|
|
* commit 'mem':
view: more culling for search threads
over: cull unneeded fields for get_thread
searchmsg: remove unused fields for PSGI in Xapian results
searchview: drop unused {seen} hashref
searchmsg: remove Xapian::Document field
searchmsg: get rid of termlist scanning for mid
httpd: remove psgix.harakiri reference
|
|
This allows v1 tests to continue working on git 1.8.0 for
now, so git 2.1.4 as packaged with Debian 8 ("jessie")
can still run old tests, at least.
I suppose it's safe to drop Debian 7 ("wheezy") due to our
dependency on git 1.8.0 for "merge-base --is-ancestor".
Writing V2 repositories requires git 2.6 for "get-mark"
support, so mask out tests for older gits.
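The version gating could look something like this (a hypothetical helper; the actual test harness is Perl):

```python
def git_version_tuple(version_str):
    # Parse 'git version 2.1.4' into a comparable tuple
    last = version_str.strip().split()[-1]
    return tuple(int(x) for x in last.split('.')[:3] if x.isdigit())

def supports_get_mark(version_str):
    # per the message above, writing V2 repos needs git 2.6
    return git_version_tuple(version_str) >= (2, 6)

print(supports_get_mark('git version 2.1.4'))  # False
print(supports_get_mark('git version 2.6.0'))  # True
```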
|
|
It looks like IO::Socket::IP comes with Perl 5.20 and
later, so we won't have to hassle users with another
package to install.
|
|
Hopefully this helps people familiarize themselves with
the source code.
|
|
{mapping} overhead is now down to ~1.3M at the end of
a giant thread from hell.
|
|
On a certain ugly /$INBOX/$MESSAGE_ID/T/ endpoint with 1000
messages in the thread, this cuts memory usage from 2.5M to 1.9M
(which still isn't great, but it's a start).
|
|
These fields are only necessary in NNTP and not even stored in
Xapian; so keeping them around for the PSGI web UI search
results wastes nearly 80K when loading large result sets.
|
|
Unused since commit 5f09452bb7e6cf49fb6eb7e6cf166a7c3cdc5433
("view: cull redundant phrases in subjects")
|
|
We don't need to be carrying this around with the many SearchMsg
objects we have. This saves about 20K from a large SearchView
"&x=t" response.
|
|
It doesn't seem to be used anywhere.
|
|
We don't need to set "psgix." extension fields for things
we don't support. This saves 138 bytes per-client in $env
as measured by Devel::Size::total_size.
|
|
We need to parse the MIME object in order to get the
datestamp for those sites.
Fixes: 7d02b9e64455 ("view: stop storing all MIME objects on large threads")
|
|
do_write must return 0 or 1.
|
|
While we try to discard the $smsg (SearchMsg) objects quickly,
they remain referenced via $node (SearchThread::Msg) objects,
which are stored forever in $ctx->{mapping} to cull redundant
words out of subjects in the thread skeleton.
This significantly cuts memory bloat for large search results
with '&x=t'. Now, the search results overhead of
SearchThread::Msg and linked objects is stable at around 350K
instead of ~7M per response in a rough test (there are more
savings to be had in the same areas).
Several hundred kilobytes is still huge and a large per-client
cost; but it's far better than MEGABYTES per-client.
|
|
I've hit /proc/sys/fs/pipe-user-pages-* limits on some systems.
So stop hogging resources on pipes which don't benefit from
giant sizes.
Some of these can use eventfd in the future to further reduce
resource use.
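On Linux, pipe capacity can be shrunk with the F_SETPIPE_SZ fcntl (a sketch; the constants fall back to the raw Linux values for older Pythons, and this is Linux-only):

```python
import fcntl
import os

# Linux-only fcntls; fall back to the raw Linux values if the
# fcntl module predates them
F_SETPIPE_SZ = getattr(fcntl, 'F_SETPIPE_SZ', 1031)
F_GETPIPE_SZ = getattr(fcntl, 'F_GETPIPE_SZ', 1032)

r, w = os.pipe()
# default is usually 64K; one page is plenty for small streams,
# and smaller pipes count less against pipe-user-pages-* limits
fcntl.fcntl(w, F_SETPIPE_SZ, 4096)
print(fcntl.fcntl(w, F_GETPIPE_SZ))  # kernel rounds up to >= one page
os.close(r)
os.close(w)
```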
|