2018-12-18 TODO: add a note for davfs2 Range: support
And maybe I or somebody else interested will implement it, since fusedav is abandoned upstream and removed from Debian testing: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=840388 Yes, I have fusedav patches at https://bogomips.org/fusedav.git as noted in the above bug report, but I think davfs2 has more momentum at the moment.
2018-12-12 doc/hosted: add glibc and bug-gnulib mirrors
These have existed for a while, actually, so we might as well publicize them. While we're at it, add a disclaimer to discourage reliance on single points of failure.
2018-12-06 nntp: prevent event_read from firing twice in a row
When a client starts pipelining requests to us which trigger long responses, we need to keep socket readiness checks disabled and only enable them when our socket rbuf is drained. Failure to do this caused aborted clients with "BUG: nested long response" when Danga::Socket calls event_read for read-readiness after our "next_tick" sub fires in the same event loop iteration. Reported-by: Jonathan Corbet <corbet@lwn.net> cf. https://public-inbox.org/meta/20181013124658.23b9f9d2@lwn.net/
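A sketch of the pattern with Danga::Socket (not the actual public-inbox code; the {long_res} field and method names here are illustrative):

    sub long_response {
            my ($self, $cb) = @_;
            $self->watch_read(0); # stop event_read from firing mid-response
            $self->{long_res} = $cb; # resumed via next_tick in the event loop
    }

    sub long_response_done {
            my ($self) = @_;
            delete $self->{long_res};
            # re-enable read-readiness checks only once rbuf is drained;
            # otherwise event_read can fire again in the same loop iteration
            $self->watch_read(1) if $self->{rbuf} eq '';
    }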
2018-10-16 Add Xrefs to over/xover lines
Putting the Xref field into xover lines allows newsreaders to mark cross-posted messages read when catching up a group. That, in turn, massively improves the life of crazy people who try to follow dozens of kernel lists, where emails are often heavily cross-posted.
2018-10-16 Put the NNTP server name into Xref lines
RFC 5536 sec 3.2.14 says the server-name in an Xref line indicates "which news server generated the header field"; indeed, that is necessary for newsreaders like gnus to handle references properly. So pick up the server name from the config if available (the first name if there's more than one), or from the host name otherwise, and use it rather than the domain name of the list server. Tests have been adjusted to match the new behavior.
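For illustration, a generated header might look like this (hypothetical server and group names):

    Xref: news.example.org inbox.comp.example.test:1234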
2018-08-10 Import.pm: When purging, replace a purged file with a zero-length file
This ensures the number of added files remains the same, and thus the article numbers derived from a repository will remain the same. I think this is the last place in public-inbox that has to be tweaked to guarantee the generated article numbers will remain the same in a public-inbox archive. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-05 overidx: preserve `tid' column on re-indexing
Otherwise, walking backwards through history could mean the root message in a thread forgets its `tid', which prevents messages from being looked up by it. This bug was hidden by the fact that `sid' matches were often good enough to link threads together.
2018-08-05 view: distinguish strict and loose thread matches
The "loose" (Subject:-based) thread matching yields too many hits for some common subjects (e.g. "[GIT] Networking" on LKML) and causes thread skeletons to not show the current messages. Favor strict matches in the query and only add loose matches if there's space. While working on this, I noticed the backwards --reindex walk breaks `tid' on v1 repositories, at least. That bug was hidden by the Subject: match logic and not discovered until now. It will be fixed separately. Reported-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>
2018-08-03 Merge branch 'eb/index-incremental'
Incremental indexing fixes from Eric W. Biederman. These prevent the highest message number in msgmap from being reassigned after deletes in rare cases and ensure messages are deleted from msgmap in v2.

* eb/index-incremental:
  V2Writeable.pm: In unindex_oid delete the message from msgmap
  V2Writeable.pm: Ensure that a found message number is in the msgmap
  SearchIdx,V2Writeable: Update num_highwater on optimized deletes
  t/v[12]reindex.t: Verify the num highwater is as expected
  t/v[12]reindex.t Verify num_highwater
  Msgmap.pm: Track the largest value of num ever assigned
  SearchIdx.pm: Always assign numbers backwards during incremental indexing
  t/v[12]reindex.t: Test incremental indexing works
  t/v[12]reindex.t: Test that the resulting msgmap is as expected
  t/v[12]reindex.t: Place expected second in Xapian tests
  t/v2reindex.t: Isolate the test cases more
  t/v1reindex.t: Isolate the test cases
  Import.pm: Don't assume {in} and {out} always exist
2018-08-03 V2Writeable.pm: In unindex_oid delete the message from msgmap
Now that we track the num highwater mark, it is safe to remove messages from msgmap that have been previously allocated. Removing even the highest-numbered article will no longer cause new message numbers to move backwards. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-03 V2Writeable.pm: Ensure that a found message number is in the msgmap
The lookup to see if a num has already been assigned to a message happens in a temporary copy of the message map, so it is possible that the number has been removed from the current message map. The unindex/reindex after a history rewrite triggered by a purge should be one such case. Therefore, add the number to the msgmap in case it is not currently present. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-03 SearchIdx,V2Writeable: Update num_highwater on optimized deletes
When performing an incremental index update with index_sync, if a message is seen to be both added and deleted, update the num_highwater mark even though the message is not otherwise indexed. This ensures index_sync generates the same msgmap no matter which commit it stops at during incremental syncs. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-03 t/v[12]reindex.t: Verify the num highwater is as expected
Instrument the tests to verify the num highwater mark is where it is expected. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-03 t/v[12]reindex.t Verify num_highwater
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-03 Msgmap.pm: Track the largest value of num ever assigned
Today the only thing that prevents public-inbox from reusing the message numbers of deleted messages is the SQLite autoincrement magic, and that only works part of the time. The new incremental indexing test has revealed areas where public-inbox today does try to reuse the numbers of deleted messages. Reusing the message numbers of existing messages is a problem because, if a client ever sees messages that are subsequently deleted, the client will not see the new messages with their old numbers. In practice this is difficult to trigger, as it requires the most recently added message to be removed and have the removal show up in a separate pull request. Still, it can happen and it should be handled. Instead of inferring the highest number ever used by finding the maximum number in the message map, track the largest number ever assigned directly. Update Msgmap to track this value and update the indexers to use it. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
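A minimal sketch of the approach, assuming a DBI handle $dbh and a Message-ID $mid; the table and key names here are illustrative, not the real Msgmap.pm schema:

    # remember the largest num ever handed out, independent of MAX(num),
    # so deleting the newest row cannot cause a number to be reused:
    my ($hwm) = $dbh->selectrow_array(
            "SELECT val FROM meta WHERE key = 'num_highwater'");
    my $num = ($hwm // 0) + 1;
    $dbh->do('INSERT INTO msgmap (num, mid) VALUES (?,?)', undef, $num, $mid);
    $dbh->do("REPLACE INTO meta (key, val) VALUES ('num_highwater', ?)",
            undef, $num);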
2018-08-03 search: (really) match the behavior of WWW for indexing text
Not sure what was going through my mind when I made my first attempt at this, but we really want to make sure we index all the text we display in the web view (and presumably anything a reasonable mail client can display). Followup-to: 0cf6196025d4e4880cd1ed859257ce21dd3cdcf6 ("search: match the behavior of WWW for indexing text")
2018-08-02 SearchIdx.pm: Always assign numbers backwards during incremental indexing
When walking messages newest to oldest, assigning the larger numbers before smaller numbers ensures older messages get smaller numbers. This leads to the possibility of a msgmap that can be regenerated when needed. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-02 t/v[12]reindex.t: Test incremental indexing works
Capture interesting commits of the test repository in mark variables. Use those marks to build interesting scenarios where index_sync proceeds as if those marks were the heads of the repository. Use this capability to test what happens when adds and deletes are mixed within a repository. Be sad because things don't yet work as they should. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-02 t/v[12]reindex.t: Test that the resulting msgmap is as expected
Deeply inspect the entire message map in the reindexing tests as the actual message order is significant and can result in surprises. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-02 t/v[12]reindex.t: Place expected second in Xapian tests
Place the expected value second in is and isnt tests, because when these tests fail they report the second value as the expected value. A report saying "got 0, expected 8" for 'no Xapian search results' can be confusing. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
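For reference, Test::More's is() treats its second argument as the expected value:

    is($got, $expected, 'name');                   # failure prints got/expected
    is(scalar(@res), 8, 'Xapian search results');  # reads correctly on failure
    is(8, scalar(@res), 'Xapian search results');  # swaps got/expected in output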
2018-08-02 t/v2reindex.t: Isolate the test cases more
While inspecting the tests, I realized that because we have been reusing variables, state can carry over from one test case to another. Add scopes and local variables to prevent unintended carry-over between test cases. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-02 t/v1reindex.t: Isolate the test cases
While inspecting the tests, I realized that because we have been reusing variables, state can carry over from one test case to another. Add scopes and local variables to prevent unintended carry-over between one test and another. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-08-02 Import.pm: Don't assume {in} and {out} always exist
While working on one of the tests I did:

    my $im = PublicInbox::V2Writable->new($ibx, 1);
    my $im0 = $im->importer();
    $im->add($mime);

which resulted in a warning about the use of an undefined value from atfork_child, and the test failing nastily. Inspection of the code reveals this can happen any time gfi_start has not been called, so just fix atfork_child to skip closing file descriptors that have not yet been set up. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-30 ProcessPipe.pm: Use read not sysread
While playing with git fast-export I discovered that mixing <> and read would give inconsistent results. I tracked the issue down to ProcessPipe using sysread instead of plain read. If it is desirable to use readline, I can't see how sysread can work: readline needs buffered I/O to be efficient, and sysread bypasses that buffer. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
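Illustratively (not the ProcessPipe code): readline (<>) goes through Perl's I/O buffer, sysread bypasses it, and plain read shares it:

    my $line = <$fh>;            # readline: fills Perl's internal buffer
    sysread($fh, my $raw, 512);  # bypasses that buffer, skipping bytes
                                 # readline already slurped ahead
    read($fh, my $ok, 512);      # shares the buffer, so it stays consistent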
2018-07-29 mda: allow configuring globally without spamc support
This reuses some of the configuration from -watch, but remains independent since some configurations will use -watch for some inboxes and -mda for others. The default remains "spamc" for -mda users so nothing changes without explicit configuration. Per-inbox configurations may also be supported in the future.
2018-07-29 mda: v2: ensure message bodies are indexed
We must not clobber the original message string, as Email::MIME(*) still needs it for iterating through parts in SearchIdx (but not when handing it as a raw string to git-fast-import). I've noticed message bodies (especially dfpre/dpost) were not getting indexed when going through -mda (no problems with -watch). This also did not affect v1 repos, since indexing is a separate process for v1 and requires re-reading the data from git. (*) tested Email::MIME 1.937 on Debian stretch
2018-07-29 t/v2mda: make it easy to test v1 repos here, too
It will help track down a bug which only seems to happen in v2 repos.
2018-07-29 mda: use InboxWritable
It's a convenient wrapper nowadays, so get rid of some legacy code and minimize differences from the -watch code.
2018-07-20 search: use boolean prefixes for git blob queries
I've hit some cases where probabilistic searches don't work when using dfpre:/dfpost:/dfblob: search prefixes, because stemming in the query parser interferes. In any case, our indexing code indexes longer/unabbreviated blob names down to their 7-character abbreviation, so there should be no need to do wildcard searches on git blob names.
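With Search::Xapian's QueryParser the distinction looks roughly like this; the term-prefix strings are illustrative:

    $qp->add_boolean_prefix('dfpre', 'XDFPRE');    # matched verbatim
    $qp->add_boolean_prefix('dfpost', 'XDFPOST');  # no stemming applied
    $qp->add_prefix('s', 'S');   # probabilistic: free text, gets stemmed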
2018-07-20 v1: allow upgrading indexlevel=basic to 'medium' or 'full'
For v1 repos, we don't need to write any metadata to Xapian and changing from 'basic' to 'medium' or 'full' will work. For v2, the metadata for indexing is stored in msgmap (because the Xapian databases are partitioned for parallelism), so a reindex is required.
2018-07-19 tests: fixup indexlevel setting in tests
The correct field is underscore-less for consistency with git-config naming conventions. While we're at it, beef up the v2 tests with actual size checks, too. I also noticed phrase searching still seems to work for the limited test case, so I left it documented; but the size checking verifies the space savings.
2018-07-19 Import.pm: Deal with potentially missing From and Sender headers
Use ||= '' to ensure that if the From or Sender header is not present, the code sees an empty string instead of undef. I had some email messages with a From field without an @ (because the sender was local) and without a Sender, which were causing errors when imported. I think this was bad enough that the email messages were failing to be imported. Signed-off-by: Eric Biederman <ebiederm@xmission.com>
2018-07-19 searchidx: respect XAPIAN_FLUSH_THRESHOLD env if set
Xapian documents and respects XAPIAN_FLUSH_THRESHOLD to define the interval, in documents, at which to flush, so don't override it with our own BATCH_BYTES. This is helpful for initial indexing for those on slower storage but with enough RAM. It is unnecessary for -watch and frequent incremental indexing, and it increases transaction times if -watch is playing "catch-up" after being stopped for a while. The original BATCH_BYTES was tuned for a machine with little memory, as the default XAPIAN_FLUSH_THRESHOLD of 10000 documents was causing swap storms. Using document counts also proved an inaccurate estimator of RAM usage compared to the actual bytes processed.
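For example, an initial index on a machine with plenty of RAM might run (threshold value illustrative):

    $ XAPIAN_FLUSH_THRESHOLD=100000 public-inbox-index /path/to/inbox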
2018-07-19 public-inbox-init: Initialize indexlevel
If indexlevel is specified on the command line, prefer that. Otherwise, if indexlevel is specified in the config file, prefer that. If indexlevel is not specified anywhere, default to full. This should make indexlevel somewhat approachable. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-19 SearchIdx: Allow the amount of indexing to be configured
This adds a new inbox configuration option, 'indexlevel', that can take the values 'full', 'medium', and 'basic'. When set to 'full', everything is indexed, including the positions of all terms. When set to 'medium', everything except the positions of terms is indexed. When set to 'basic', neither terms nor positions are indexed; just the overview database for NNTP is created. That is still quite good and allows looking up messages by Message-ID, but there are no indexes to support searching inside the email messages themselves. Update the reindex tests to exercise the full, medium, and basic code paths. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
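In git-config terms the setting looks like this (inbox name and paths illustrative):

    [publicinbox "test"]
            mainrepo = /path/to/test.git
            address = test@example.com
            indexlevel = medium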
2018-07-19 SearchIdx: Add the mechanism for making all Xapian indexing optional
Create a new method, add_xapian, that holds all of the code to create Xapian indexes. Creating this method simply involved identifying the relevant code and moving it out of add_message. A call to add_xapian is added in add_message to keep everything working as it currently does. The new call is made conditional upon index levels of 'full' and 'medium', the index levels that index positions and terms: the two things public-inbox uses Xapian to index. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-19 SearchIdx.pm: Make indexing search positions optional
About half the size of the Xapian search index turns out to be search positions, yet search positions are only used by a very narrow set of queries. Make them optional so people don't need to pay the cost of queries they will never make. This also makes public-inbox more approachable for light hacking, as generating all of the indexes is time-consuming. The way this is done is to add a method to SearchIdx called index_text that wraps the term generator's index_text method. The new index_text takes care of calling both index_text and increase_termpos (the two functions responsible for position data). Then index_users, index_diff_inc, index_old_diff_fn, index_diff, and index_body are made proper methods that call the new index_text. Callers of the new index_text are slightly simplified, as they no longer need to call increase_termpos as well. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
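A sketch of the wrapper's shape, matching the description above (field and method names assumed, not copied from the patch):

    sub index_text {
            my ($self, $text, $wdf_inc, $prefix) = @_;
            my $tg = $self->term_generator;
            if ($self->{indexlevel} eq 'full') {
                    $tg->index_text($text, $wdf_inc, $prefix);
                    $tg->increase_termpos; # position data for phrase queries
            } else {
                    $tg->index_text_without_positions($text, $wdf_inc, $prefix);
            }
    }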
2018-07-18 t/v2reindex.t: Swap the order of minmax tests so errors make sense
Previously if a minmax test failed it would say it was expecting the incorrect value, which is confusing when looking into why the test fails. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-18 t/v2reindex.t: Don't reuse $ibx as two different kinds of variable
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-18 t/search.t t/v2writable.t: Teach search tests to fail more cleanly.
Now that some of the indexes are optional, these tests might fail, so teach them to fail more cleanly. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-18 t/v2reindex.t: Ensure the numbers 1 to 10 are used
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-18 SearchIdx: Decrement regen_down even for added messages that are later deleted.
Decrement regen_down when visiting messages that appear in %D and that we know will later be deleted. This ensures consistent message numbers are generated no matter which commit is on top, allowing deletes to propagate separately from the messages they delete without causing problems. The v2 trees already do this, and when their indexes are deleted and rebuilt they maintain their numbers. Add a v1 version of the v2reindex test to verify that reindexing works properly on v1 as well as v2. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
2018-07-18 index: avoid false-positive warning on off-by-one
We subtract one from "jobs" to map to "partitions" to account for the overview index and git fast-import jobs.
2018-07-15 v2writable: unindex deleted messages after incremental fetch
The normal behavior is to prevent deleted messages from being indexed in the first place. However, when fetching incrementally via git, public-inbox-index needs to account for deleted files which were created outside of the most recent fetch/reindexing window. Reported-by: Eric W. Biederman <ebiederm@xmission.com>
2018-07-07 Import: Don't copy nulls from emails into git
Recently I ran git --git-dir=lkml/git/1.git fsck and it reported:

  > warning in commit 299dbd50b6995c6debe2275f0df984ce697fb4cc: nulInCommit: NULL byte in the commit object body

which I found quite scary; nulls in the wrong place have a bad tendency to make programs misbehave. It turns out someone had placed "=?iso-8859-1?q?=00?=" at the end of their subject line, which is the MIME encoding for NUL. Email::MIME had correctly decoded the header, and then public-inbox had simply copied the contents of the header into the subject line of the git commit. To prevent that from causing problems, replace nulls in such subject lines with spaces. Signed-off-by: Eric Biederman <ebiederm@xmission.com>
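The guard itself can be as small as one substitution (illustrative, not necessarily the exact code):

    $subject =~ tr/\0/ /;  # replace NUL bytes with spaces before the
                           # subject reaches the git commit object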
2018-07-06 MsgTime.pm: Use strptime to compute the time zone
Recently I had trouble cloning lkml/git/0.git because git fsck on receive was failing. The output of git fsck was:

  > Checking object directories: 100% (256/256), done.
  > warning in commit 59173dc1fe67b113ace4ce83e7f522414b3e0404: badTimezone: invalid author/committer line - bad time zone
  > warning in commit ff22aaff22eb4479e49e93f697e385f76db51c55: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 609b744909693f5f00aff5ed9928beeeee9ded2e: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 084572141db8e0d879428afb278bd338f2dbb053: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 789d204de27cd12c6da693d903390a241a1a4bca: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 0d9a65948b0c957007ca387cd56b690f9bab9c08: badTimezone: invalid author/committer line - bad time zone
  > warning in commit f7468c42b4196ee6323afb373ab9323971c38d69: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 85e0cd6dd527cd55ad0440f14384529b83818228: badTimezone: invalid author/committer line - bad time zone
  > warning in commit f31e19a2e772c9ed00728ef142af9c550ea5de6a: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 56eb7384443ef84e17e29504a304a071b189ae67: badTimezone: invalid author/committer line - bad time zone
  > warning in commit e4470030471e6810414b9de5e3b52e16f2245d12: badTimezone: invalid author/committer line - bad time zone
  > warning in commit f913b48caa097c3b2cb3f491707944f88d52d89f: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 4390f26923d572c6dab6cce8282c7cad5520d785: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 0f66db71a06bd7d651a0cd80877d8043b70fda20: badTimezone: invalid author/committer line - bad time zone
  > warning in commit d71472c40b36dcdf0396afc9778f6137eea45887: badTimezone: invalid author/committer line - bad time zone
  > warning in commit e8d3b19a91a2d86b6a91bd19dc811e851398b519: badTimezone: invalid author/committer line - bad time zone
  > warning in commit afd9fc0cc87e56ed7736d633e17d0ef77817b3cc: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 811b3217708358cf1b75fba4602a64a426fce0f5: badTimezone: invalid author/committer line - bad time zone
  > warning in commit e7a751a597c6f5e4770c61bdee6220d55a37cba9: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 3e32ad6192fe093e03e6b9346c3a90b16d9905c0: badTimezone: invalid author/committer line - bad time zone
  > warning in commit 5e66b47528e79d3bbb769e137f036a1fa99cccf9: badTimezone: invalid author/committer line - bad time zone
  > warning in commit d90d67d94ca47142670dff13fcb81ab7afab07bb: badTimezone: invalid author/committer line - bad time zone
  > Checking objects: 100% (1711464/1711464), done.
  > Checking connectivity: 1711464, done.

Upon examination with git show --pretty=raw, all of the problem commits had a time zone that was not 4 digits long. This time zone had been passed straight from the Date line in the email into the author line of the commit. Looking into that, I discovered that str2time takes the time zone into account and was actually able to process these weird time zones. So get the normalized time zone with strptime and convert it from seconds from GMT to hours and minutes from GMT. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
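A sketch of the conversion using Date::Parse (variable names illustrative; strptime's seventh return value is the zone offset in seconds from GMT):

    use Date::Parse qw(strptime);
    my ($ss, $mm, $hh, $day, $month, $year, $zone_s) = strptime($date_header);
    $zone_s //= 0;                       # seconds from GMT
    my $sign = ($zone_s < 0) ? '-' : '+';
    my $abs = abs($zone_s);
    my $tz = sprintf('%s%02d%02d', $sign, int($abs / 3600),
                     int(($abs % 3600) / 60));  # e.g. '+0530', always 4 digits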
2018-07-04 v2: fill alternates with old epochs on init from mirrors
For v2 repositories with multiple epochs, we must not forget about earlier epochs in clones. Ensure we update the alternates file with all known epochs up to the current one. Reported-by: Eric W. Biederman <ebiederm@xmission.com> https://public-inbox.org/meta/871scj2vzi.fsf@xmission.com/
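Concretely, after cloning an inbox with three epochs, the newest epoch's alternates file should list the object directories of all earlier epochs (paths illustrative):

    $ cat /path/to/v2inbox/git/2.git/objects/info/alternates
    /path/to/v2inbox/git/0.git/objects
    /path/to/v2inbox/git/1.git/objects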
2018-06-26 additional tests for bad Message-IDs in URLs
Followup-to: 73cfed86d8a8287a ("www: use undecoded paths for Message-ID extraction") Reported-by: Leah Neukirchen <leah@vuxu.org> https://public-inbox.org/meta/8736xsb5s5.fsf@vuxu.org/
2018-06-26 www: use undecoded paths for Message-ID extraction
In PSGI, PATH_INFO contains URI-decoded paths, which cause problems when Message-IDs contain characters that are ambiguous with those used for routing. Instead, extract the undecoded path from REQUEST_URI and use that. Reported-by: Leah Neukirchen <leah@vuxu.org> https://public-inbox.org/meta/8736xsb5s5.fsf@vuxu.org/
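For example, a Message-ID containing an escaped '/' ('%2F') arrives in PATH_INFO already decoded, making the routing boundary ambiguous. A sketch (the route and regexp are hypothetical, not the actual dispatch code):

    # PATH_INFO decodes "%2F" to "/", so "/inbox/a%2Fb@example/" and
    # "/inbox/a/b@example/" become indistinguishable; REQUEST_URI
    # keeps the raw, undecoded bytes:
    my ($mid) = ($env->{REQUEST_URI} =~ m!\A/inbox/([^/]+)/!);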
2018-06-19 Tweak over.sqlite3 queries for sqlite < 3.8
The query planner in sqlite3 < 3.8 is not very clever, so when it sees num mentioned in the query filter, it decides not to use the fast idx_ts index and goes for the much slower autoindex. CentOS-7 still has sqlite-3.7, so loading the http landing page of a very large archive (LKML) was taking over 18 seconds, as opposed to milliseconds on a system with sqlite-3.8 and above:

    $ time sqlite3 -line over.sqlite3 'SELECT ts,ds,ddd FROM over \
        WHERE num > 0 ORDER BY ts DESC LIMIT 1000;' > /dev/null

    real    0m19.610s
    user    0m17.805s
    sys     0m1.805s

    $ sqlite3 -line over.sqlite3 'EXPLAIN QUERY PLAN SELECT ts,ds,ddd \
        FROM over WHERE num > 0 ORDER BY ts DESC LIMIT 1000;'
    selectid = 0
       order = 0
        from = 0
      detail = SEARCH TABLE over USING INDEX sqlite_autoindex_over_1 (num>?) (~250000 rows)

However, if we slightly tweak the query per SQLite recommendations [1] by adding + to the num filter, we force it to use the correct index and see much faster performance:

    $ time sqlite3 -line over.sqlite3 'SELECT ts,ds,ddd FROM over \
        WHERE +num > 0 ORDER BY ts DESC LIMIT 1000;' > /dev/null

    real    0m0.007s
    user    0m0.005s
    sys     0m0.002s

    $ sqlite3 -line over.sqlite3 'EXPLAIN QUERY PLAN SELECT ts,ds,ddd \
        FROM over WHERE +num > 0 ORDER BY ts DESC LIMIT 1000;'
    selectid = 0
       order = 0
        from = 0
      detail = SCAN TABLE over USING INDEX idx_ts (~1464303 rows)

This appears to be the only place where this is needed in order to avoid running into this issue. As far as I can tell, this change has no impact on systems running newer sqlite3 (>= 3.8).

.. [1] https://sqlite.org/optoverview.html#disqualifying_where_clause_terms_using_unary_

Signed-off-by: Konstantin Ryabitsev <konstantin@linuxfoundation.org>