user/dev discussion of public-inbox itself
* Re: [PATCH 1/5] v2writable: fix batch size accounting
  2020-08-07 10:52 ` [PATCH 1/5] v2writable: fix batch size accounting Eric Wong
@ 2020-08-07 13:13   ` Eric Wong
  0 siblings, 0 replies; 3+ results
From: Eric Wong @ 2020-08-07 13:13 UTC (permalink / raw)
  To: meta

Eric Wong <e@yhbt.net> wrote:
> We need to account for whether shard parallelization is
> enabled or not, since users of parallelization are expected
> to have more RAM.
> ---
>  lib/PublicInbox/V2Writable.pm | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/PublicInbox/V2Writable.pm b/lib/PublicInbox/V2Writable.pm
> index a029fe4c..03320b9c 100644
> --- a/lib/PublicInbox/V2Writable.pm
> +++ b/lib/PublicInbox/V2Writable.pm
> @@ -152,6 +152,12 @@ sub add {
>  	$self->{ibx}->with_umask(\&_add, $self, $eml, $check_cb);
>  }
>  
> +sub batch_bytes ($) {
> +	my ($self) = @_;
> +	$self->{parallel} ? $PublicInbox::SearchIdx::BATCH_BYTES
> +		: $PublicInbox::SearchIdx::BATCH_BYTES * $self->{shards};
> +}

Oops, that was backwards :x  Will squash this in:

diff --git a/lib/PublicInbox/V2Writable.pm b/lib/PublicInbox/V2Writable.pm
index 03320b9c0..f7a318e5b 100644
--- a/lib/PublicInbox/V2Writable.pm
+++ b/lib/PublicInbox/V2Writable.pm
@@ -154,8 +154,8 @@ sub add {
 
 sub batch_bytes ($) {
 	my ($self) = @_;
-	$self->{parallel} ? $PublicInbox::SearchIdx::BATCH_BYTES
-		: $PublicInbox::SearchIdx::BATCH_BYTES * $self->{shards};
+	($self->{parallel} ? $self->{shards} : 1) *
+		$PublicInbox::SearchIdx::BATCH_BYTES;
 }
 
 # indexes a message, returns true if checkpointing is needed

>  sub do_idx ($$$$) {
>  	my ($self, $msgref, $mime, $smsg) = @_;
> @@ -160,7 +166,7 @@ sub do_idx ($$$$) {
>  	my $idx = idx_shard($self, $smsg->{num} % $self->{shards});
>  	$idx->index_raw($msgref, $mime, $smsg);
>  	my $n = $self->{transact_bytes} += $smsg->{raw_bytes};
> -	$n >= ($PublicInbox::SearchIdx::BATCH_BYTES * $self->{shards});
> +	$n >= batch_bytes($self);
>  }

...Because the old code always assumed parallel shards (even
with --jobs=0).
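
To make the intent concrete, here is a throwaway sketch of the fixed
logic (not part of the patch; the BATCH_BYTES value below is made up,
the real one lives in $PublicInbox::SearchIdx::BATCH_BYTES):

use strict;
use warnings;

my $BATCH_BYTES = 1_000_000; # placeholder value for the example

sub batch_bytes {
	my ($self) = @_;
	($self->{parallel} ? $self->{shards} : 1) * $BATCH_BYTES;
}

# --jobs=4 (parallel shards): checkpoint after ~4 * BATCH_BYTES,
# since users of parallelization are expected to have more RAM:
print batch_bytes({ parallel => 1, shards => 4 }), "\n"; # 4000000

# --jobs=0 (sequential): checkpoint after a single BATCH_BYTES; the
# old code got this case backwards and kept multiplying by the
# shard count:
print batch_bytes({ parallel => 0, shards => 4 }), "\n"; # 1000000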

>  sub _add {
> @@ -1195,7 +1201,7 @@ sub index_xap_step ($$$;$) {
>  	my $ibx = $self->{ibx};
>  	my $all = $ibx->git;
>  	my $over = $ibx->over;
> -	my $batch_bytes = $PublicInbox::SearchIdx::BATCH_BYTES;
> +	my $batch_bytes = batch_bytes($self);
>  	$step //= $self->{shards};
>  	my $end = $sync->{art_end};
>  	if (my $pr = $sync->{-opt}->{-progress}) {

And the new index_xap_step was not initially designed for parallel
operation.


* [PATCH 1/5] v2writable: fix batch size accounting
  2020-08-07 10:52 [PATCH 0/5] more indexing improvements Eric Wong
@ 2020-08-07 10:52 ` Eric Wong
  2020-08-07 13:13   ` Eric Wong
  0 siblings, 1 reply; 3+ results
From: Eric Wong @ 2020-08-07 10:52 UTC (permalink / raw)
  To: meta

We need to account for whether shard parallelization is
enabled or not, since users of parallelization are expected
to have more RAM.
---
 lib/PublicInbox/V2Writable.pm | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/PublicInbox/V2Writable.pm b/lib/PublicInbox/V2Writable.pm
index a029fe4c..03320b9c 100644
--- a/lib/PublicInbox/V2Writable.pm
+++ b/lib/PublicInbox/V2Writable.pm
@@ -152,6 +152,12 @@ sub add {
 	$self->{ibx}->with_umask(\&_add, $self, $eml, $check_cb);
 }
 
+sub batch_bytes ($) {
+	my ($self) = @_;
+	$self->{parallel} ? $PublicInbox::SearchIdx::BATCH_BYTES
+		: $PublicInbox::SearchIdx::BATCH_BYTES * $self->{shards};
+}
+
 # indexes a message, returns true if checkpointing is needed
 sub do_idx ($$$$) {
 	my ($self, $msgref, $mime, $smsg) = @_;
@@ -160,7 +166,7 @@ sub do_idx ($$$$) {
 	my $idx = idx_shard($self, $smsg->{num} % $self->{shards});
 	$idx->index_raw($msgref, $mime, $smsg);
 	my $n = $self->{transact_bytes} += $smsg->{raw_bytes};
-	$n >= ($PublicInbox::SearchIdx::BATCH_BYTES * $self->{shards});
+	$n >= batch_bytes($self);
 }
 
 sub _add {
@@ -1195,7 +1201,7 @@ sub index_xap_step ($$$;$) {
 	my $ibx = $self->{ibx};
 	my $all = $ibx->git;
 	my $over = $ibx->over;
-	my $batch_bytes = $PublicInbox::SearchIdx::BATCH_BYTES;
+	my $batch_bytes = batch_bytes($self);
 	$step //= $self->{shards};
 	my $end = $sync->{art_end};
 	if (my $pr = $sync->{-opt}->{-progress}) {


* [PATCH 0/5] more indexing improvements
@ 2020-08-07 10:52 Eric Wong
  2020-08-07 10:52 ` [PATCH 1/5] v2writable: fix batch size accounting Eric Wong
  0 siblings, 1 reply; 3+ results
From: Eric Wong @ 2020-08-07 10:52 UTC (permalink / raw)
  To: meta

VERY big batch sizes seem helpful on HDDs.  And I also blew up
a run because --compact ran in parallel with 32 shards :x

And --help should exist for all commands users may run from
the CLI.

Eric Wong (5):
  v2writable: fix batch size accounting
  index: --compact respects --sequential-shard
  index: max out XAPIAN_FLUSH_THRESHOLD if using --batch-size
  searchidx: use Perl truthiness to detect XAPIAN_FLUSH_THRESHOLD
  index: add built-in --help / -?

 Documentation/public-inbox-index.pod |  4 +-
 lib/PublicInbox/SearchIdx.pm         |  3 +-
 lib/PublicInbox/V2Writable.pm        | 10 ++++-
 script/public-inbox-index            | 58 ++++++++++++++++++++++------
 4 files changed, 58 insertions(+), 17 deletions(-)
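
For anyone curious, the XAPIAN_FLUSH_THRESHOLD bits boil down to
roughly the sketch below (illustrative only, not the actual diff;
the option hash and the exact threshold value are placeholders).
Xapian flushes automatically every N documents and reads
XAPIAN_FLUSH_THRESHOLD from the environment to override N, so once
--batch-size puts us in charge of flushing by byte count we push
the document-count threshold out of the way; and since "0" or an
empty string would be useless settings, a plain truthiness check
on the variable is enough:

use strict;
use warnings;

# placeholder for parsed CLI options; the real parsing lives in
# script/public-inbox-index:
my %opt = ('batch-size' => '100m');

if ($opt{'batch-size'} && !$ENV{XAPIAN_FLUSH_THRESHOLD}) {
	# we flush by byte count, so keep Xapian from flushing on
	# its own in the middle of a batch; the exact value used in
	# the patch may differ:
	$ENV{XAPIAN_FLUSH_THRESHOLD} = 0x7fffffff;
}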

