From: Jeff King <peff@peff.net>
To: Duy Nguyen <pclouds@gmail.com>
Cc: Elijah Newren <newren@gmail.com>, Git Mailing List <git@vger.kernel.org>
Subject: Re: 2.18.0 Regression: packing performance and effectiveness
Date: Thu, 19 Jul 2018 15:17:20 -0400
Message-ID: <20180719191719.GA22504@sigill.intra.peff.net>
In-Reply-To: <20180719182442.GA5796@duynguyen.home>

On Thu, Jul 19, 2018 at 08:24:42PM +0200, Duy Nguyen wrote:

> > > Looking at that output, my _guess_ is that we somehow end up with a
> > > bogus delta_size value and write out a truncated entry. But I couldn't
> > > reproduce the issue with smaller test cases.
> > 
> > Could it be a race condition?
> 
> I'm convinced my code is racy (between two writes). I created a broken
> pack once with 32 threads. Elijah, please try again with this new
> patch. It should fix this (I have only tried repacking a few times so
> far, but will continue).

Good thinking, it's definitely racy. And that's why my tiny reproduction
didn't work. I even tried bumping it up to 10 blobs instead of 2, but
that's not nearly enough.

> The race is this:
> 
> 1. Thread one sees a large delta size and a NULL delta_size[] array,
>    allocates the new array, and is in the middle of copying the old
>    delta sizes over.
> 
> 2. Thread two wants to write a new (large) delta size. It sees that
>    delta_size[] is already allocated, so it writes the correct size
>    there (and a truncated one in object_entry->delta_size_).
> 
> 3. Back in thread one, the copy loop now copies the truncated value
>    in delta_size_ from step 2 into the delta_size[] array,
>    overwriting the good value that thread two wrote.

Right. Or we could even allocate two delta_size arrays, since the
NULL-check and the allocation are not atomic.
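
To spell it out, here is a stripped-down, standalone sketch of roughly
that shape (invented names, simplified so that every size goes into the
array once it exists; this is not the real pack-objects code). The
comments mark how steps 1-3 above interleave:

  #include <stdint.h>
  #include <stdlib.h>

  struct sketch_pack {
      uint32_t *delta_size_;      /* truncated per-entry sizes (stand-in) */
      unsigned long *delta_size;  /* overflow array, lazily allocated */
      size_t nr;
  };

  /* NOT thread-safe: this mirrors the racy flow described above. */
  static void set_delta_size_racy(struct sketch_pack *p, size_t i,
                                  unsigned long size)
  {
      p->delta_size_[i] = (uint32_t)size;  /* may silently truncate */
      if (!p->delta_size) {
          size_t j;

          /* Two threads can both pass the NULL check above, so the
           * array can even be allocated twice (step 1). */
          p->delta_size = calloc(p->nr, sizeof(*p->delta_size));
          for (j = 0; j < p->nr; j++)
              /* Step 3: if another thread has just stored a full-size
               * value at j, this copy of the truncated field silently
               * overwrites it. */
              p->delta_size[j] = p->delta_size_[j];
      }
      /* Step 2: the correct value, but the copy loop still running in
       * the allocating thread can clobber it a moment later. */
      p->delta_size[i] = size;
  }

Taking a single lock around the whole check/allocate/copy sequence, as
your patch does, closes both windows.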

> There is also a potential read/write race where a read from
> delta_size[] happens while the array is not ready. But I don't think
> it can happen with the current try_delta() code. I protect it anyway
> to be safe.

Hrm. That one's disappointing, because we read much more often than we
write, and this introduces potential lock contention. It may not matter
much in practice, though.
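
(For illustration only, and not something your patch needs to do: a
read-mostly pattern like this is what a pthread rwlock is meant for,
since shared read locks don't contend with each other, only with the
rare writes. The names below are made up.)

  #include <pthread.h>
  #include <stddef.h>

  /* Sketch: readers take the lock in shared mode, so they do not
   * serialize against each other; only the (rare) writers do. */
  static pthread_rwlock_t delta_size_rwlock = PTHREAD_RWLOCK_INITIALIZER;

  static unsigned long read_overflow_size(const unsigned long *arr, size_t idx)
  {
      unsigned long size;

      pthread_rwlock_rdlock(&delta_size_rwlock);
      size = arr[idx];
      pthread_rwlock_unlock(&delta_size_rwlock);
      return size;
  }

  static void write_overflow_size(unsigned long *arr, size_t idx,
                                  unsigned long size)
  {
      pthread_rwlock_wrlock(&delta_size_rwlock);
      arr[idx] = size;
      pthread_rwlock_unlock(&delta_size_rwlock);
  }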

> +static unsigned long oe_delta_size(struct packing_data *pack,
> +				   const struct object_entry *e)
> +{
> +	unsigned long size;
> +
> +	read_lock();	 /* to protect access to pack->delta_size[] */
> +	if (pack->delta_size)
> +		size = pack->delta_size[e - pack->objects];
> +	else
> +		size = e->delta_size_;
> +	read_unlock();
> +	return size;
> +}

Yuck, we even have to pay the read_lock() cost when we don't overflow
into the pack->delta_size array (but I agree we have to for
correctness).

Again, though, this amount of contention probably doesn't make a big
difference, since we're holding the lock for such a short time
(especially compared to all the work of computing the deltas).

This delta-size lock could be separate from the read_lock(), though,
since that one can block for much longer (e.g., while zlib inflates
objects from disk).

> +static void oe_set_delta_size(struct packing_data *pack,
> +			      struct object_entry *e,
> +			      unsigned long size)
> +{
> +	read_lock();	 /* to protect access to pack->delta_size[] */
> +	if (!pack->delta_size && size < pack->oe_delta_size_limit) {
> +		e->delta_size_ = size;
> +		read_unlock();
> +		return;
> +	}

And ditto for this one. I thought we could get away with the "fast case"
skipping the lock, but we have to check pack->delta_size atomically to
even use it.

If each individual delta_size had an overflow bit that indicates "use me
literally" or "look me up in the array", then I think the "quick" ones
could avoid locking. It may not be worth the complexity though.
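
Something like this is what I have in mind (purely a sketch; the field
names and widths are invented, and the real object_entry packs its bits
differently). As with try_delta(), assume a reader only asks about an
entry whose size has already been stored by the thread that owns it,
and that the overflow array is preallocated:

  #include <pthread.h>

  struct sketch_entry {
      unsigned delta_size_:31;     /* the size itself, when it fits */
      unsigned delta_size_valid:1; /* 1 = read delta_size_ directly */
  };

  struct sketch_pack {
      struct sketch_entry *objects;
      unsigned long *delta_size;   /* overflow array, preallocated here
                                    * to keep the sketch simple */
      pthread_mutex_t delta_lock;  /* guards only the overflow array */
  };

  static void set_delta_size(struct sketch_pack *p, struct sketch_entry *e,
                             unsigned long size)
  {
      if (size < (1UL << 31)) {
          e->delta_size_ = size;
          e->delta_size_valid = 1; /* later reads need no lock */
      } else {
          pthread_mutex_lock(&p->delta_lock);
          p->delta_size[e - p->objects] = size;
          pthread_mutex_unlock(&p->delta_lock);
          e->delta_size_valid = 0;
      }
  }

  static unsigned long get_delta_size(struct sketch_pack *p,
                                      const struct sketch_entry *e)
  {
      unsigned long size;

      if (e->delta_size_valid)
          return e->delta_size_;   /* the common, "quick" case: lock-free */
      pthread_mutex_lock(&p->delta_lock);
      size = p->delta_size[e - p->objects];
      pthread_mutex_unlock(&p->delta_lock);
      return size;
  }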

> @@ -160,6 +162,8 @@ struct object_entry *packlist_alloc(struct packing_data *pdata,
>  
>  		if (!pdata->in_pack_by_idx)
>  			REALLOC_ARRAY(pdata->in_pack, pdata->nr_alloc);
> +		if (pdata->delta_size)
> +			REALLOC_ARRAY(pdata->delta_size, pdata->nr_alloc);
>  	}
>  

This realloc needs to happen under the lock, too, I think. It would be
OK without locking for an in-place realloc, but if the chunk has to be
moved, somebody in oe_set_delta_size() might write to the old memory.

This is in a file that doesn't even know about read_lock(), of course.
Probably you need a delta mutex as part of the "struct packing_data",
and then it can just be handled inside pack-objects.c entirely.
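
I.e., something of roughly this shape (just a sketch with invented
names, using a bare realloc instead of REALLOC_ARRAY so it stands
alone):

  #include <pthread.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* Sketch: the mutex lives in the packing_data itself, so both the
   * oe_*_delta_size() helpers and packlist_alloc() can take the same
   * lock without either one knowing about pack-objects' read_lock(). */
  struct sketch_packing_data {
      unsigned long *delta_size;   /* overflow array */
      uint32_t nr_alloc;
      pthread_mutex_t delta_size_lock;
  };

  static void sketch_grow_delta_size(struct sketch_packing_data *pdata)
  {
      pthread_mutex_lock(&pdata->delta_size_lock);
      if (pdata->delta_size)
          /* A moving realloc is safe now: no writer can be going
           * through the old pointer while we hold the lock.
           * (Error handling omitted; git would use xrealloc.) */
          pdata->delta_size = realloc(pdata->delta_size,
                                      (size_t)pdata->nr_alloc *
                                      sizeof(*pdata->delta_size));
      pthread_mutex_unlock(&pdata->delta_size_lock);
  }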

-Peff
