From: Noah Goldstein via Libc-alpha <libc-alpha@sourceware.org>
To: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Cc: GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH v3 3/7] stdlib: Optimization qsort{_r} swap implementation (BZ #19305)
Date: Fri, 15 Oct 2021 12:45:16 -0500	[thread overview]
Message-ID: <CAFUsyfK8LLTuXNa5B74VGScYLMLBKb++LEimxKXNdJsxxCwqvg@mail.gmail.com> (raw)
In-Reply-To: <ee6580d8-8929-a83d-c561-59db1827fb34@linaro.org>

On Fri, Oct 15, 2021 at 12:34 PM Adhemerval Zanella
<adhemerval.zanella@linaro.org> wrote:
>
>
>
> On 15/10/2021 14:17, Noah Goldstein wrote:
> > On Fri, Oct 15, 2021 at 8:29 AM Adhemerval Zanella
> > <adhemerval.zanella@linaro.org> wrote:
> >>
> >>
> >>
> >> On 13/10/2021 00:39, Noah Goldstein wrote:
> >>>
> >>>
> >>> On Tue, Oct 12, 2021 at 11:29 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >>>
> >>>
> >>>
> >>>     On Fri, Sep 3, 2021 at 1:14 PM Adhemerval Zanella via Libc-alpha <libc-alpha@sourceware.org> wrote:
> >>>
> >>>         The optimization takes into consideration that the most common
> >>>         element sizes are either 32 or 64 bits [1] and that inputs are
> >>>         aligned to the word boundary.  This is similar to the
> >>>         optimization done on lib/sort.c from Linux.
> >>>
> >>>         This patch adds an optimized swap operation to qsort based on the
> >>>         previous msort one.  Instead of a byte-wise operation, three
> >>>         variants are provided:
> >>>
> >>>           1. Using uint32_t loads and stores.
> >>>           2. Using uint64_t loads and stores.
> >>>           3. Generic one with a temporary buffer and memcpy/mempcpy.
> >>>
> >>>         Options 1. and 2. are selected only if the architecture defines
> >>>         _STRING_ARCH_unaligned or if the base pointer is aligned to the
> >>>         required type.
> >>>
> >>>         It also fixes BZ#19305 by checking the input size against 1
> >>>         element in addition to 0.
> >>>
> >>>         Checked on x86_64-linux-gnu.
> >>>
> >>>         [1] https://sourceware.org/pipermail/libc-alpha/2018-August/096984.html
> >>>         ---
> >>>          stdlib/qsort.c | 109 +++++++++++++++++++++++++++++++++++++++++--------
> >>>          1 file changed, 91 insertions(+), 18 deletions(-)
> >>>
> >>>         diff --git a/stdlib/qsort.c b/stdlib/qsort.c
> >>>         index 23f2d28314..59458d151b 100644
> >>>         --- a/stdlib/qsort.c
> >>>         +++ b/stdlib/qsort.c
> >>>         @@ -24,20 +24,85 @@
> >>>          #include <limits.h>
> >>>          #include <stdlib.h>
> >>>          #include <string.h>
> >>>         +#include <stdbool.h>
> >>>
> >>>         -/* Byte-wise swap two items of size SIZE. */
> >>>         -#define SWAP(a, b, size)                                                     \
> >>>         -  do                                                                         \
> >>>         -    {                                                                        \
> >>>         -      size_t __size = (size);                                                \
> >>>         -      char *__a = (a), *__b = (b);                                           \
> >>>         -      do                                                                     \
> >>>         -       {                                                                     \
> >>>         -         char __tmp = *__a;                                                  \
> >>>         -         *__a++ = *__b;                                                      \
> >>>         -         *__b++ = __tmp;                                                     \
> >>>         -       } while (--__size > 0);                                               \
> >>>         -    } while (0)
> >>>         +/* Swap SIZE bytes between addresses A and B.  These helpers are provided
> >>>         +   along with the generic one as an optimization.  */
> >>>         +
> >>>         +typedef void (*swap_func_t)(void * restrict, void * restrict, size_t);
> >>>         +
> >>>         +/* Return true if elements can be copied using word loads and stores.
> >>>         +   Both the size and the base address must be multiples of the
> >>>         +   alignment.  */
> >>>         +static inline bool
> >>>         +is_aligned_to_copy (const void *base, size_t size, size_t align)
> >>>         +{
> >>>         +  unsigned char lsbits = size;
> >>>         +#if !_STRING_ARCH_unaligned
> >>>         +  lsbits |= (unsigned char)(uintptr_t) base;
> >>>         +#endif
> >>>         +  return (lsbits & (align - 1)) == 0;
> >>>         +}
> >>>         +
> >>>         +#define SWAP_WORDS_64 (swap_func_t)0
> >>>         +#define SWAP_WORDS_32 (swap_func_t)1
> >>>         +#define SWAP_BYTES    (swap_func_t)2
> >>>         +
> >>>         +static void
> >>>         +swap_words_64 (void * restrict a, void * restrict b, size_t n)
> >>>         +{
> >>>         +  do
> >>>         +   {
> >>>         +     n -= 8;
> >>>         +     uint64_t t = *(uint64_t *)(a + n);
> >>>         +     *(uint64_t *)(a + n) = *(uint64_t *)(b + n);
> >>>         +     *(uint64_t *)(b + n) = t;
> >>>         +   } while (n);
> >>>         +}
> >>>         +
> >>>         +static void
> >>>         +swap_words_32 (void * restrict a, void * restrict b, size_t n)
> >>>         +{
> >>>         +  do
> >>>         +   {
> >>>         +     n -= 4;
> >>>         +     uint32_t t = *(uint32_t *)(a + n);
> >>>         +     *(uint32_t *)(a + n) = *(uint32_t *)(b + n);
> >>>         +     *(uint32_t *)(b + n) = t;
> >>>         +   } while (n);
> >>>         +}
> >>>
> >>>
> >>> I'm not certain swap_words_32 / swap_words_8 will be optimal for larger
> >>> key sizes. Looking at GCC's code generation for swap_generic on modern x86_64:
> >>> https://godbolt.org/z/638h3Y9va
> >>> It's able to optimize the temporary buffer out of the loop and use xmm
> >>> registers, which will likely win out for larger sizes.
> >>
> >> It is probably not the most optimized code the compiler can generate,
> >> since I tried to go for more generic code that gives somewhat better
> >> results in an architecture-agnostic way.  I am trying to mimic some of
> >> the optimizations Linux did in 37d0ec34d111acfdb, and for swap_bytes I
> >> used the same strategy as d3496c9f4f27d (Linux did not optimize anything
> >> for the byte version).
> >>
> >> I think it would be possible to tune it for a specific architecture
> >> and/or compiler, but I would prefer to use a good-enough algorithm that
> >> works reasonably well on multiple architectures.
> >
> > That's fair.  Although I still think there are some possible improvements.
> >
> > Looking at the assembly for all three, it in fact seems GCC optimizes all
> > of them to larger register copies: https://godbolt.org/z/bd9nnnoEY
> >
> > The biggest difference seems to be the setup / register spills for
> > the generic version, so for the common case of a relatively small key
> > the special cases for 4/8 make sense.
> >
> > Have you checked that GCC is able to use the conditions for selecting
> > the swap function to optimize the functions themselves? In the godbolt
> > link I got reasonable value out of adding the invariants to swap_64/swap_32.
>
> Not yet, but I think the generated code for both x86_64 and aarch64 seems
> simple enough and should cover the common case (keys with size 4 or 8)
> fast enough [1].

swap is in the inner loop, so it seems like a pretty critical component to
have fully optimized. The aarch64 version looks good, but the x86_64 version
seems to be lacking. I'm not arguing for an arch-specific version, but if the
directives can add value to x86_64 without detracting from aarch64, that
seems like a zero-cost improvement.

>
> And since for v4 my plan is not to remove mergesort anymore but to limit
> it to a static buffer, it might be possible to use the same swap routines
> for both mergesort and quicksort.
>
> >
> > It may also be worth writing a custom memcpy implementation for
> > size = 0..SWAP_GENERIC_SIZE so it can be inlined (and probably be more
> > optimized than what the generic memcpy can get away with).
> >
>
> I *really* do not want to go this way.  My hope is that with small sizes
> the compiler can inline the memcpy (which GCC does for most common
> architectures).

Since size is non-constant for the tail, I don't see how we are going to
avoid 3x memcpy calls, although that can be another patch if it adds value.
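
One way to sidestep them would be to handle the non-constant tail with a
plain byte loop, roughly as sketched below. The chunked shape and the
SWAP_GENERIC_SIZE value are only my assumptions about the generic path,
not code taken from v3.

  #include <stddef.h>
  #include <string.h>

  #define SWAP_GENERIC_SIZE 32    /* assumed chunk size */

  static void
  swap_generic_sketch (void * restrict a, void * restrict b, size_t n)
  {
    unsigned char tmp[SWAP_GENERIC_SIZE];
    unsigned char *pa = a, *pb = b;

    /* Constant-size chunks: memcpy with a literal size gets inlined.  */
    while (n >= SWAP_GENERIC_SIZE)
      {
        memcpy (tmp, pa, SWAP_GENERIC_SIZE);
        memcpy (pa, pb, SWAP_GENERIC_SIZE);
        memcpy (pb, tmp, SWAP_GENERIC_SIZE);
        pa += SWAP_GENERIC_SIZE;
        pb += SWAP_GENERIC_SIZE;
        n -= SWAP_GENERIC_SIZE;
      }

    /* Non-constant tail: swap byte by byte instead of issuing three
       memcpy calls with a runtime size.  */
    while (n-- != 0)
      {
        unsigned char t = *pa;
        *pa++ = *pb;
        *pb++ = t;
      }
  }

Whether the byte tail actually beats the three memcpy calls is something
bench-qsort would have to show.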

>
>
> [1] https://godbolt.org/z/v7e4xxqGa
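
For reference, the dispatch that pairs with is_aligned_to_copy is not part
of the quoted hunk; I would expect it to look roughly like the sketch below,
with swap_bytes standing in for the generic buffer-and-memcpy fallback that
the hunk also does not show. This is only a guess at its shape, meant to be
read together with the quoted code.

  static swap_func_t
  select_swap_func (const void *base, size_t size)
  {
    if (is_aligned_to_copy (base, size, 8))
      return SWAP_WORDS_64;
    if (is_aligned_to_copy (base, size, 4))
      return SWAP_WORDS_32;
    return SWAP_BYTES;
  }

  static void
  do_swap (void * restrict a, void * restrict b, size_t size,
           swap_func_t swap_func)
  {
    if (swap_func == SWAP_WORDS_64)
      swap_words_64 (a, b, size);
    else if (swap_func == SWAP_WORDS_32)
      swap_words_32 (a, b, size);
    else
      swap_bytes (a, b, size);  /* hypothetical generic fallback */
  }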


Thread overview: 41+ messages
2021-09-03 17:11 [PATCH v3 0/7] Use introsort for qsort Adhemerval Zanella via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 1/7] benchtests: Add bench-qsort Adhemerval Zanella via Libc-alpha
2021-09-04  9:09   ` Alexander Monakov via Libc-alpha
2021-09-06 18:30     ` Adhemerval Zanella via Libc-alpha
2021-10-13  3:19   ` Noah Goldstein via Libc-alpha
2021-10-15 12:52     ` Adhemerval Zanella via Libc-alpha
2021-10-15 16:39       ` Noah Goldstein via Libc-alpha
2021-10-15 17:19         ` Adhemerval Zanella via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 2/7] support: Fix getopt_long with CMDLINE_OPTIONS Adhemerval Zanella via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 3/7] stdlib: Optimization qsort{_r} swap implementation (BZ #19305) Adhemerval Zanella via Libc-alpha
2021-10-13  3:29   ` Noah Goldstein via Libc-alpha
2021-10-13  3:39     ` Noah Goldstein via Libc-alpha
2021-10-15 13:29       ` Adhemerval Zanella via Libc-alpha
2021-10-15 17:17         ` Noah Goldstein via Libc-alpha
2021-10-15 17:34           ` Adhemerval Zanella via Libc-alpha
2021-10-15 17:45             ` Noah Goldstein via Libc-alpha [this message]
2021-10-15 17:56               ` Adhemerval Zanella via Libc-alpha
2021-10-15 13:12     ` Adhemerval Zanella via Libc-alpha
2021-10-15 16:45       ` Noah Goldstein via Libc-alpha
2021-10-15 17:21         ` Adhemerval Zanella via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 4/7] stdlib: Move insertion sort out qsort Adhemerval Zanella via Libc-alpha
2021-09-06 20:35   ` Fangrui Song via Libc-alpha
2021-09-06 20:48     ` Fangrui Song via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 5/7] stdlib: qsort: Move some macros to inline function Adhemerval Zanella via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 6/7] stdlib: Implement introsort with qsort Adhemerval Zanella via Libc-alpha
2021-09-04  9:17   ` Alexander Monakov via Libc-alpha
2021-09-06 18:43     ` Adhemerval Zanella via Libc-alpha
2021-09-06 20:23   ` Fangrui Song via Libc-alpha
2021-10-13  3:53   ` Noah Goldstein via Libc-alpha
2021-09-03 17:11 ` [PATCH v3 7/7] stdlib: Remove use of mergesort on qsort (BZ #21719) Adhemerval Zanella via Libc-alpha
2021-09-03 19:18 ` [PATCH v3 0/7] Use introsort for qsort Paul Eggert
2021-09-06 14:13   ` Carlos O'Donell via Libc-alpha
2021-09-06 17:03     ` Zack Weinberg via Libc-alpha
2021-09-06 18:19       ` Adhemerval Zanella via Libc-alpha
2021-09-07  0:14     ` Paul Eggert
2021-09-07 14:32       ` Adhemerval Zanella via Libc-alpha
2021-09-07 17:39         ` Paul Eggert
2021-09-07 18:07           ` Adhemerval Zanella via Libc-alpha
2021-09-07 19:28             ` Paul Eggert
2021-09-08 11:56               ` Adhemerval Zanella via Libc-alpha
2021-09-09  0:39                 ` Paul Eggert
