author     Eric Wong <e@yhbt.net>  2020-03-19 03:32:53 -0500
committer  Eric Wong <e@yhbt.net>  2020-03-20 18:22:51 +0000
commit     8fb8fc52420ef669c5b9c583d32647e9fbdffd88 (patch)
tree       fd72fc5be02fd3e5bb901f2330626756534a5d89 /t
parent     c713cd419189cbe5cf72b6e60e846458985ffcdb (diff)
download   public-inbox-8fb8fc52420ef669c5b9c583d32647e9fbdffd88.tar.gz
We already lazy-load WwwListing for the CGI script, and hiding
another layer of lazy-loading inside it makes it difficult for
WWW->preload to do its job.

We want long-lived processes to do all long-lived allocations up
front to avoid fragmentation in the allocator, but we'll still
support short-lived processes by lazy-loading individual modules
in the PublicInbox::* namespace.
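
A rough sketch of the two process types this distinction targets
(the wrappers below are simplified illustrations, not the actual
public-inbox scripts; the ->preload call form is assumed from the
WWW->preload mentioned above):

	# long-lived daemon: require everything up front so long-lived
	# allocations are grouped together before request traffic starts
	use PublicInbox::WWW;
	PublicInbox::WWW->preload; # eagerly requires what it will need

	# short-lived CGI: pay only for what a single request touches;
	# further PublicInbox::* modules get require'd on demand
	require PublicInbox::WWW;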

Mixing up allocation lifetimes (e.g. doing immortal allocations
while a large amount of space is taken by short-lived objects)
will cause fragmentation in any allocator which favors large
contiguous regions for performance reasons.  This includes any
malloc implementation which relies on sbrk() for the primary
heap, including glibc malloc.
Diffstat (limited to 't')
-rw-r--r--  t/www_listing.t  4
1 file changed, 2 insertions, 2 deletions
diff --git a/t/www_listing.t b/t/www_listing.t
index 5168e16a..39c19577 100644
--- a/t/www_listing.t
+++ b/t/www_listing.t
@@ -9,8 +9,8 @@ use PublicInbox::TestCommon;
 require_mods(qw(URI::Escape Plack::Builder Digest::SHA
                 IO::Compress::Gzip IO::Uncompress::Gunzip HTTP::Tiny));
 require PublicInbox::WwwListing;
-my $json = eval { PublicInbox::WwwListing::_json() };
-plan skip_all => "JSON module missing: $@" if $@;
+my $json = $PublicInbox::WwwListing::json or
+        plan skip_all => "JSON module missing";
 
 use_ok 'PublicInbox::Git';
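
The test-side change reads a package variable instead of eval-ing a
private helper: if no JSON module was found when PublicInbox::WwwListing
loaded, $PublicInbox::WwwListing::json is simply unset and the test
skips.  A hypothetical sketch of how such load-time initialization could
look (the JSON module list and construction below are assumptions, not
copied from WwwListing.pm):

	package PublicInbox::WwwListing;
	use strict;

	our $json; # stays undef if no JSON implementation is available
	for my $mod (qw(Cpanel::JSON::XS JSON::MaybeXS JSON JSON::PP)) {
		eval "require $mod" or next;
		$json = $mod->new; # all of the above provide ->new
		last;
	}

	# callers (and t/www_listing.t) just read the package variable:
	#   my $json = $PublicInbox::WwwListing::json or ...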