From: "H.J. Lu via Libc-alpha" <libc-alpha@sourceware.org>
To: Sajan Karumanchi <sajan.karumanchi@amd.com>
Cc: Florian Weimer <fweimer@redhat.com>,
"Mallappa, Premachandra" <premachandra.mallappa@amd.com>,
GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
Date: Mon, 11 Jan 2021 09:27:13 -0800 [thread overview]
Message-ID: <CAMe9rOrLMhCnSqTY6zB47ZcMo5VXFCh9pM0XNV_J0TttYzYfUw@mail.gmail.com> (raw)
In-Reply-To: <20210111104301.205094-1-sajan.karumanchi@amd.com>
On Mon, Jan 11, 2021 at 2:43 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we found that
> vector move operations outperform Enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this case, we are adding an upper-bound parameter for
> Enhanced REP MOVSB: '__x86_max_rep_movsb_threshold'.
> Based on large-bench results, we set this parameter to the L2 cache
> size on AMD machines, applicable from the Zen3 architecture onwards,
> which supports the ERMS feature.
> For architectures other than AMD, it is set to the computed value of
> the non-temporal threshold parameter.
>
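The size-based strategy selection described above can be sketched in C. This is a model only: the real dispatch lives in hand-written assembly in memmove-vec-unaligned-erms.S, and the threshold values below are illustrative stand-ins for the values computed at startup, not the real defaults on any particular CPU.

```c
#include <stddef.h>

/* Illustrative thresholds; the real ones are derived from CPUID
   cache information in init_cacheinfo.  */
static long rep_movsb_threshold = 2048;            /* lower bound for ERMS */
static long max_rep_movsb_threshold = 512 * 1024;  /* upper bound: L2 size */

enum copy_strategy
{
  VECTOR_MOVES,  /* small copies: unaligned vector loads/stores */
  REP_MOVSB,     /* middle range: Enhanced REP MOVSB */
  LARGE_COPY     /* above L2: large vector loop / non-temporal path */
};

/* Pick a copy strategy for an n-byte memcpy, mirroring the two
   threshold comparisons the patch introduces.  */
enum copy_strategy
select_strategy (size_t n)
{
  if (n < (size_t) rep_movsb_threshold)
    return VECTOR_MOVES;
  if (n < (size_t) max_rep_movsb_threshold)
    return REP_MOVSB;
  return LARGE_COPY;
}
```

With these illustrative values, a 4 KiB copy would take the REP MOVSB path, while a 1 MiB copy would fall through to the large-copy path, which is the behavior change the patch is after on Zen3.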
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
> sysdeps/x86/cacheinfo.h | 14 ++++++++++++++
> .../x86_64/multiarch/memmove-vec-unaligned-erms.S | 2 +-
> 2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 00d2d8a52a..00c3a823f0 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> /* Threshold to use Enhanced REP STOSB. */
> long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB. */
> +long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;
The default should be the same as __x86_shared_non_temporal_threshold.
> static void
> get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> long int core)
> @@ -351,6 +354,11 @@ init_cacheinfo (void)
> /* Account for exclusive L2 and L3 caches. */
> shared += core;
> }
> + /* The ERMS feature is implemented from the Zen3 architecture
> + onwards, and it performs poorly for data above the L2 cache size.
> + Hence, add an upper-bound threshold to limit the use of Enhanced
> + REP MOVSB operations, and set its value to the L2 cache size. */
> + __x86_max_rep_movsb_threshold = core;
> }
> }
>
> @@ -423,6 +431,12 @@ init_cacheinfo (void)
> else
> __x86_rep_movsb_threshold = rep_movsb_threshold;
>
> + /* Set the upper bound of ERMS to the known default value of the
> + non-temporal threshold for architectures other than AMD. */
> + if (cpu_features->basic.kind != arch_kind_amd)
> + __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
> +
> +
> # if HAVE_TUNABLES
> __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> # endif
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..5682e7a9fd 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -240,7 +240,7 @@ L(return):
> ret
>
> L(movsb):
> - cmp __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> + cmp __x86_max_rep_movsb_threshold(%rip), %RDX_LP
Please add some comments here and update the algorithm at the
beginning of the function.
> jae L(more_8x_vec)
> cmpq %rsi, %rdi
> jb 1f
> --
> 2.25.1
>
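For reference, the threshold initialization the patch performs in init_cacheinfo can be modeled as below. This is a hedged sketch: `is_amd_zen3_or_later` is a made-up predicate standing in for the vendor and feature checks the patch does via cpu_features, and the function simply expresses the two assignments in the diff.

```c
/* Model of the patch's choice of __x86_max_rep_movsb_threshold:
   on AMD machines from Zen3 onwards, use the per-core (L2) cache
   size; otherwise fall back to the non-temporal threshold, which
   matches the pre-patch comparison in the assembly.  */
long
pick_max_rep_movsb_threshold (int is_amd_zen3_or_later,
                              long l2_cache_size,
                              long non_temporal_threshold)
{
  if (is_amd_zen3_or_later)
    return l2_cache_size;
  return non_temporal_threshold;
}
```

On non-AMD CPUs the new `cmp __x86_max_rep_movsb_threshold` therefore compares against the same value as the old `cmp __x86_shared_non_temporal_threshold`, so behavior there is unchanged.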
--
H.J.