unofficial mirror of libc-alpha@sourceware.org
* [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
@ 2021-01-07 16:22 sajan.karumanchi--- via Libc-alpha
  2021-01-08 14:03 ` Florian Weimer via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: sajan.karumanchi--- via Libc-alpha @ 2021-01-07 16:22 UTC (permalink / raw)
  To: libc-alpha, carlos, fweimer, hjl.tools
  Cc: Sajan Karumanchi, premachandra.mallappa

From: Sajan Karumanchi <sajan.karumanchi@amd.com>

In the process of optimizing memcpy for AMD machines, we have found
that vector move operations outperform enhanced REP MOVSB for data
transfers above the L2 cache size on Zen3 architectures.
To handle this use case, we are adding an upper bound parameter on
enhanced REP MOVSB: '__x86_max_rep_movsb_threshold'.
As per the large-bench results, we are configuring this parameter to
the L2 cache size for AMD machines; it applies from the Zen3
architecture onwards, which supports the ERMS feature.
For architectures other than AMD, it is set to the computed value of
the non-temporal threshold parameter.
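
For illustration, a minimal C sketch of the dispatch policy this threshold
establishes (the threshold values, the copy_dispatch name and the
vector/non-temporal stand-ins are placeholders, not glibc internals):

  #include <stddef.h>
  #include <string.h>

  /* Illustrative threshold values; glibc computes the real ones at
     startup.  */
  static long int rep_movsb_threshold = 2048;
  static long int max_rep_movsb_threshold = 512 * 1024; /* e.g. L2 size */

  /* Stand-ins for the unaligned-vector and non-temporal copy loops.  */
  static void vector_copy (void *d, const void *s, size_t n)
  { memcpy (d, s, n); }
  static void non_temporal_copy (void *d, const void *s, size_t n)
  { memcpy (d, s, n); }

  static void
  rep_movsb_copy (void *dst, const void *src, size_t n)
  {
    /* REP MOVSB copies RCX bytes from (RSI) to (RDI).  */
    __asm__ volatile ("rep movsb"
                      : "+D" (dst), "+S" (src), "+c" (n)
                      : : "memory");
  }

  void
  copy_dispatch (void *dst, const void *src, size_t n)
  {
    if (n < (size_t) rep_movsb_threshold)
      vector_copy (dst, src, n);        /* small copies: vector moves */
    else if (n < (size_t) max_rep_movsb_threshold)
      rep_movsb_copy (dst, src, n);     /* ERMS pays off in this window */
    else
      non_temporal_copy (dst, src, n);  /* above L2: vector/NT stores */
  }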

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
---
 sysdeps/x86/cacheinfo.c                           | 15 ++++++++++++++-
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S |  4 ++--
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 3fb4a028d8..9d7f8992be 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -1,5 +1,5 @@
 /* x86_64 cache info.
-   Copyright (C) 2003-2020 Free Software Foundation, Inc.
+   Copyright (C) 2003-2021 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -533,6 +533,9 @@ long int __x86_shared_non_temporal_threshold attribute_hidden;
 /* Threshold to use Enhanced REP MOVSB.  */
 long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 
+/* Threshold to stop using Enhanced REP MOVSB.  */
+long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;
+
 /* Threshold to use Enhanced REP STOSB.  */
 long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 
@@ -839,6 +842,11 @@ init_cacheinfo (void)
 	      /* Account for exclusive L2 and L3 caches.  */
 	      shared += core;
             }
+	  /* The ERMS feature is implemented from the Zen3 architecture and it
+	     performs poorly for data above the L2 cache size.  Hence, add an
+	     upper bound threshold parameter to limit the usage of Enhanced
+	     REP MOVSB operations and set its value to the L2 cache size.  */
+	  __x86_max_rep_movsb_threshold = core;
 	}
     }
 
@@ -909,6 +917,11 @@ init_cacheinfo (void)
   else
     __x86_rep_movsb_threshold = rep_movsb_threshold;
 
+  /* Setting the upper bound of ERMS to the known default value of
+     non-temporal threshold for architectures other than AMD.  */
+  if (cpu_features->basic.kind != arch_kind_amd)
+    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
+
 # if HAVE_TUNABLES
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
 # endif
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index bd5dc1a3f3..c18eaf7ef6 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -1,5 +1,5 @@
 /* memmove/memcpy/mempcpy with unaligned load/store and rep movsb
-   Copyright (C) 2016-2020 Free Software Foundation, Inc.
+   Copyright (C) 2016-2021 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
    The GNU C Library is free software; you can redistribute it and/or
@@ -233,7 +233,7 @@ L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	cmp	__x86_max_rep_movsb_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-07 16:22 [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB sajan.karumanchi--- via Libc-alpha
@ 2021-01-08 14:03 ` Florian Weimer via Libc-alpha
  2021-01-11 10:46   ` Karumanchi, Sajan via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: Florian Weimer via Libc-alpha @ 2021-01-08 14:03 UTC (permalink / raw)
  To: sajan.karumanchi--- via Libc-alpha
  Cc: premachandra.mallappa, sajan.karumanchi

* sajan karumanchi:

> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we have found the
> vector move operations are outperforming enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on
> enhanced REP MOVSB:'__x86_max_rep_movsb_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>

Thanks for the patch.  Would you be able to rebase it on top of current
master?  There are some non-trivial conflicts, as far as I can see.

Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
@ 2021-01-11 10:43 sajan.karumanchi--- via Libc-alpha
  2021-01-11 17:27 ` H.J. Lu via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: sajan.karumanchi--- via Libc-alpha @ 2021-01-11 10:43 UTC (permalink / raw)
  To: libc-alpha, carlos, fweimer, hjl.tools
  Cc: Sajan Karumanchi, premachandra.mallappa

From: Sajan Karumanchi <sajan.karumanchi@amd.com>

In the process of optimizing memcpy for AMD machines, we have found
that vector move operations outperform enhanced REP MOVSB for data
transfers above the L2 cache size on Zen3 architectures.
To handle this use case, we are adding an upper bound parameter on
enhanced REP MOVSB: '__x86_max_rep_movsb_threshold'.
As per the large-bench results, we are configuring this parameter to
the L2 cache size for AMD machines; it applies from the Zen3
architecture onwards, which supports the ERMS feature.
For architectures other than AMD, it is set to the computed value of
the non-temporal threshold parameter.

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
---
 sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  2 +-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index 00d2d8a52a..00c3a823f0 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 /* Threshold to use Enhanced REP STOSB.  */
 long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 
+/* Threshold to stop using Enhanced REP MOVSB.  */
+long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;
+
 static void
 get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
 		       long int core)
@@ -351,6 +354,11 @@ init_cacheinfo (void)
 	      /* Account for exclusive L2 and L3 caches.  */
 	      shared += core;
             }
+	  /* The ERMS feature is implemented from the Zen3 architecture and it
+	     performs poorly for data above the L2 cache size.  Hence, add an
+	     upper bound threshold parameter to limit the usage of Enhanced
+	     REP MOVSB operations and set its value to the L2 cache size.  */
+	  __x86_max_rep_movsb_threshold = core;
       }
     }
 
@@ -423,6 +431,12 @@ init_cacheinfo (void)
   else
     __x86_rep_movsb_threshold = rep_movsb_threshold;
 
+  /* Setting the upper bound of ERMS to the known default value of
+     non-temporal threshold for architectures other than AMD.  */
+  if (cpu_features->basic.kind != arch_kind_amd)
+    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
+
+
 # if HAVE_TUNABLES
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
 # endif
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 0980c95378..5682e7a9fd 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -240,7 +240,7 @@ L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* RE: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-08 14:03 ` Florian Weimer via Libc-alpha
@ 2021-01-11 10:46   ` Karumanchi, Sajan via Libc-alpha
  2021-01-18 17:07     ` Florian Weimer via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: Karumanchi, Sajan via Libc-alpha @ 2021-01-11 10:46 UTC (permalink / raw)
  To: Florian Weimer, sajan.karumanchi--- via Libc-alpha; +Cc: Mallappa, Premachandra

Hi Florian,

I have pushed a new patch on top of the rebased master branch.

Thanks & Regards,
Sajan K.

-----Original Message-----
From: Florian Weimer <fweimer@redhat.com> 
Sent: Friday, January 8, 2021 7:33 PM
To: sajan.karumanchi--- via Libc-alpha <libc-alpha@sourceware.org>
Cc: carlos@redhat.com; hjl.tools@gmail.com; Karumanchi, Sajan <Sajan.Karumanchi@amd.com>; Mallappa, Premachandra <Premachandra.Mallappa@amd.com>
Subject: Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.

* sajan karumanchi:

> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we have found 
> the vector move operations are outperforming enhanced REP MOVSB for 
> data transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on 
> enhanced REP MOVSB:'__x86_max_rep_movsb_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture 
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of 
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>

Thanks for the patch.  Would you be able to rebase it on top of current master?  There are some non-trivial conflicts, as far as I can see.

Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-11 10:43 sajan.karumanchi--- via Libc-alpha
@ 2021-01-11 17:27 ` H.J. Lu via Libc-alpha
  2021-01-12 18:56   ` Karumanchi, Sajan via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: H.J. Lu via Libc-alpha @ 2021-01-11 17:27 UTC (permalink / raw)
  To: Sajan Karumanchi; +Cc: Florian Weimer, Mallappa, Premachandra, GNU C Library

On Mon, Jan 11, 2021 at 2:43 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we have found the
> vector move operations are outperforming enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on
> enhanced REP MOVSB:'__x86_max_rep_movsb_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
>  sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
>  .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  2 +-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 00d2d8a52a..00c3a823f0 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>  /* Threshold to use Enhanced REP STOSB.  */
>  long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB.  */
> +long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;

The default should be the same as __x86_shared_non_temporal_threshold.

>  static void
>  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>                        long int core)
> @@ -351,6 +354,11 @@ init_cacheinfo (void)
>               /* Account for exclusive L2 and L3 caches.  */
>               shared += core;
>              }
> +         /* ERMS feature is implemented from Zen3 architecture and it is
> +            performing poorly for data above L2 cache size. Henceforth, adding
> +            an upper bound threshold parameter to limit the usage of Enhanced
> +            REP MOVSB operations and setting its value to L2 cache size.  */
> +         __x86_max_rep_movsb_threshold = core;
>        }
>      }
>
> @@ -423,6 +431,12 @@ init_cacheinfo (void)
>    else
>      __x86_rep_movsb_threshold = rep_movsb_threshold;
>
> +  /* Setting the upper bound of ERMS to the known default value of
> +     non-temporal threshold for architectures other than AMD.  */
> +  if (cpu_features->basic.kind != arch_kind_amd)
> +    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
> +
> +
>  # if HAVE_TUNABLES
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>  # endif
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..5682e7a9fd 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -240,7 +240,7 @@ L(return):
>         ret
>
>  L(movsb):
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP

Please add some comments here and update the algorithm at the
beginning of the function.

>         jae     L(more_8x_vec)
>         cmpq    %rsi, %rdi
>         jb      1f
> --
> 2.25.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* RE: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-11 17:27 ` H.J. Lu via Libc-alpha
@ 2021-01-12 18:56   ` Karumanchi, Sajan via Libc-alpha
  0 siblings, 0 replies; 13+ messages in thread
From: Karumanchi, Sajan via Libc-alpha @ 2021-01-12 18:56 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Florian Weimer, Mallappa, Premachandra, GNU C Library

Hi H.J.Lu,

I have pushed the patch with updated comments and the algorithm description. As __x86_shared_non_temporal_threshold is a variable (not a constant value) and is computed during the initialization phase, I cannot set it as the default value for '__x86_max_rep_movsb_threshold'.
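
To illustrate the constraint (names shortened; this is not the actual
glibc code): the non-temporal threshold only becomes known after cache
detection runs, so it cannot appear in a static initializer and has to be
copied over inside the init routine instead.

  /* Computed from the detected cache sizes during startup; unknown at
     compile time.  */
  long int shared_non_temporal_threshold;

  /* A static initializer must be a compile-time constant, so only a
     fixed fallback (a hypothetical 512 KiB here) can be used.  */
  long int max_rep_movsb_threshold = 512 * 1024;

  void
  init_thresholds (int is_amd, long int l2_size)
  {
    /* In the real init routine, shared_non_temporal_threshold has been
       computed by this point.  */
    if (is_amd)
      max_rep_movsb_threshold = l2_size;
    else
      max_rep_movsb_threshold = shared_non_temporal_threshold;
  }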

Thanks & Regards,
Sajan K.

-----Original Message-----
From: H.J. Lu <hjl.tools@gmail.com> 
Sent: Monday, January 11, 2021 10:57 PM
To: Karumanchi, Sajan <Sajan.Karumanchi@amd.com>
Cc: GNU C Library <libc-alpha@sourceware.org>; Carlos O'Donell <carlos@redhat.com>; Florian Weimer <fweimer@redhat.com>; Mallappa, Premachandra <Premachandra.Mallappa@amd.com>
Subject: Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.

On Mon, Jan 11, 2021 at 2:43 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we have found 
> the vector move operations are outperforming enhanced REP MOVSB for 
> data transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on 
> enhanced REP MOVSB:'__x86_max_rep_movsb_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture 
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of 
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
>  sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
>  .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  2 +-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h index 
> 00d2d8a52a..00c3a823f0 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden 
> = 2048;
>  /* Threshold to use Enhanced REP STOSB.  */  long int 
> __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB.  */ long int 
> +__x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;

The default should be the same as __x86_shared_non_temporal_threshold.

>  static void
>  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>                        long int core)
> @@ -351,6 +354,11 @@ init_cacheinfo (void)
>               /* Account for exclusive L2 and L3 caches.  */
>               shared += core;
>              }
> +         /* ERMS feature is implemented from Zen3 architecture and it is
> +            performing poorly for data above L2 cache size. Henceforth, adding
> +            an upper bound threshold parameter to limit the usage of Enhanced
> +            REP MOVSB operations and setting its value to L2 cache size.  */
> +         __x86_max_rep_movsb_threshold = core;
>        }
>      }
>
> @@ -423,6 +431,12 @@ init_cacheinfo (void)
>    else
>      __x86_rep_movsb_threshold = rep_movsb_threshold;
>
> +  /* Setting the upper bound of ERMS to the known default value of
> +     non-temporal threshold for architectures other than AMD.  */  if 
> + (cpu_features->basic.kind != arch_kind_amd)
> +    __x86_max_rep_movsb_threshold = 
> + __x86_shared_non_temporal_threshold;
> +
> +
>  # if HAVE_TUNABLES
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>  # endif
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S 
> b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..5682e7a9fd 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -240,7 +240,7 @@ L(return):
>         ret
>
>  L(movsb):
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP

Please add some comments here and update the algorithm at the beginning of the function.

>         jae     L(more_8x_vec)
>         cmpq    %rsi, %rdi
>         jb      1f
> --
> 2.25.1
>


--
H.J.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-12 20:04 [PATCH 1/1] " H.J. Lu via Libc-alpha
@ 2021-01-13 15:18 ` sajan.karumanchi--- via Libc-alpha
  2021-01-13 15:26   ` H.J. Lu via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: sajan.karumanchi--- via Libc-alpha @ 2021-01-13 15:18 UTC (permalink / raw)
  To: hjl.tools; +Cc: Sajan Karumanchi, Premachandra Mallappa, libc-alpha, fweimer

From: Sajan Karumanchi <sajan.karumanchi@amd.com>

In the process of optimizing memcpy for AMD machines, we have found
that vector move operations outperform enhanced REP MOVSB for data
transfers above the L2 cache size on Zen3 architectures.
To handle this use case, we are adding an upper bound parameter on
enhanced REP MOVSB: '__x86_max_rep_movsb_threshold'.
As per the large-bench results, we are configuring this parameter to
the L2 cache size for AMD machines; it applies from the Zen3
architecture onwards, which supports the ERMS feature.
For architectures other than AMD, it is set to the computed value of
the non-temporal threshold parameter.

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
---
 sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  8 ++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index 00d2d8a52a..f20b7fea4f 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 /* Threshold to use Enhanced REP STOSB.  */
 long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 
+/* Threshold to stop using Enhanced REP MOVSB.  */
+long int __x86_max_rep_movsb_threshold attribute_hidden;
+
 static void
 get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
 		       long int core)
@@ -351,6 +354,11 @@ init_cacheinfo (void)
 	      /* Account for exclusive L2 and L3 caches.  */
 	      shared += core;
             }
+	  /* The ERMS feature is implemented from the Zen3 architecture and it
+	     performs poorly for data above the L2 cache size.  Hence, add an
+	     upper bound threshold parameter to limit the usage of Enhanced
+	     REP MOVSB operations and set its value to the L2 cache size.  */
+	  __x86_max_rep_movsb_threshold = core;
       }
     }
 
@@ -423,6 +431,12 @@ init_cacheinfo (void)
   else
     __x86_rep_movsb_threshold = rep_movsb_threshold;
 
+  /* Setting the upper bound of ERMS to the known default value of
+     non-temporal threshold for architectures other than AMD.  */
+  if (cpu_features->basic.kind != arch_kind_amd)
+    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
+
+
 # if HAVE_TUNABLES
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
 # endif
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 0980c95378..c7e75bfbda 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -30,7 +30,10 @@
       load and aligned store.  Load the last 4 * VEC and first VEC
       before the loop and store them after the loop to support
       overlapping addresses.
-   6. If size >= __x86_shared_non_temporal_threshold and there is no
+   6. On machines with the ERMS feature, if the size is greater than or
+      equal to __x86_rep_movsb_threshold and less than
+      __x86_max_rep_movsb_threshold, then REP MOVSB will be used.
+   7. If size >= __x86_shared_non_temporal_threshold and there is no
       overlap between destination and source, use non-temporal store
       instead of aligned store.  */
 
@@ -240,7 +243,8 @@ L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	/* Avoid REP MOVSB for sizes above max threshold.  */
+	cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-13 15:18 ` [PATCH] " sajan.karumanchi--- via Libc-alpha
@ 2021-01-13 15:26   ` H.J. Lu via Libc-alpha
  0 siblings, 0 replies; 13+ messages in thread
From: H.J. Lu via Libc-alpha @ 2021-01-13 15:26 UTC (permalink / raw)
  To: Sajan Karumanchi, Adhemerval Zanella
  Cc: Florian Weimer, Premachandra Mallappa, GNU C Library

On Wed, Jan 13, 2021 at 7:19 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we have found the
> vector move operations are outperforming enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on
> enhanced REP MOVSB:'__x86_max_rep_movsb_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
>  sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
>  .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  8 ++++++--
>  2 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 00d2d8a52a..f20b7fea4f 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>  /* Threshold to use Enhanced REP STOSB.  */
>  long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB.  */
> +long int __x86_max_rep_movsb_threshold attribute_hidden;
> +
>  static void
>  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>                        long int core)
> @@ -351,6 +354,11 @@ init_cacheinfo (void)
>               /* Account for exclusive L2 and L3 caches.  */
>               shared += core;
>              }
> +         /* ERMS feature is implemented from Zen3 architecture and it is
> +            performing poorly for data above L2 cache size. Henceforth, adding
> +            an upper bound threshold parameter to limit the usage of Enhanced
> +            REP MOVSB operations and setting its value to L2 cache size.  */
> +         __x86_max_rep_movsb_threshold = core;
>        }
>      }
>
> @@ -423,6 +431,12 @@ init_cacheinfo (void)
>    else
>      __x86_rep_movsb_threshold = rep_movsb_threshold;
>
> +  /* Setting the upper bound of ERMS to the known default value of
> +     non-temporal threshold for architectures other than AMD.  */
> +  if (cpu_features->basic.kind != arch_kind_amd)
> +    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
> +
> +
>  # if HAVE_TUNABLES
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>  # endif
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..c7e75bfbda 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -30,7 +30,10 @@
>        load and aligned store.  Load the last 4 * VEC and first VEC
>        before the loop and store them after the loop to support
>        overlapping addresses.
> -   6. If size >= __x86_shared_non_temporal_threshold and there is no
> +   6. On machines with ERMS feature, if size greater than equal or to
> +      __x86_rep_movsb_threshold and less than
> +      __x86_max_rep_movsb_threshold, then REP MOVSB will be used.
> +   7. If size >= __x86_shared_non_temporal_threshold and there is no
>        overlap between destination and source, use non-temporal store
>        instead of aligned store.  */
>
> @@ -240,7 +243,8 @@ L(return):
>         ret
>
>  L(movsb):
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       /* Avoid REP MOVSB for sizes above max threshold.  */
> +       cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP
>         jae     L(more_8x_vec)
>         cmpq    %rsi, %rdi
>         jb      1f
> --
> 2.25.1
>

LGTM.

We are in code freeze for glibc 2.33.

Thanks.

-- 
H.J.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-11 10:46   ` Karumanchi, Sajan via Libc-alpha
@ 2021-01-18 17:07     ` Florian Weimer via Libc-alpha
  2021-01-18 17:10       ` Adhemerval Zanella via Libc-alpha
  2021-01-22 10:18       ` sajan.karumanchi--- via Libc-alpha
  0 siblings, 2 replies; 13+ messages in thread
From: Florian Weimer via Libc-alpha @ 2021-01-18 17:07 UTC (permalink / raw)
  To: Karumanchi, Sajan
  Cc: Mallappa, Premachandra, sajan.karumanchi--- via Libc-alpha

* Sajan Karumanchi:

> Hi Florian,
>
> I have pushed a new patch on top the rebased master branch.

I've received an RM ack (from Adhemerval) for the patch off-list, and I
think we should put it into the release.

However, we need another rebase. 8-( Sorry about that.  Would you please
be so kind as to post it?

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-18 17:07     ` Florian Weimer via Libc-alpha
@ 2021-01-18 17:10       ` Adhemerval Zanella via Libc-alpha
  2021-01-22 10:18       ` sajan.karumanchi--- via Libc-alpha
  1 sibling, 0 replies; 13+ messages in thread
From: Adhemerval Zanella via Libc-alpha @ 2021-01-18 17:10 UTC (permalink / raw)
  To: Florian Weimer, Karumanchi, Sajan
  Cc: sajan.karumanchi--- via Libc-alpha, Mallappa, Premachandra



On 18/01/2021 14:07, Florian Weimer via Libc-alpha wrote:
> * Sajan Karumanchi:
> 
>> Hi Florian,
>>
>> I have pushed a new patch on top the rebased master branch.
> 
> I've received RM ack (from Adhemerval) for the patch off-list, and I
> think we should put it unto the release.

Btw, I sent the following message to Florian in private, when it should
have been sent to libc-alpha.

--
This is ok for 2.33. If I understand correctly, it should affect only
memcpy performance on x86, right?
--

> 
> However, we need another rebase. 8-( Sorry about that.  Would you please
> be so kind to post it?
> 
> Thanks,
> Florian
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-18 17:07     ` Florian Weimer via Libc-alpha
  2021-01-18 17:10       ` Adhemerval Zanella via Libc-alpha
@ 2021-01-22 10:18       ` sajan.karumanchi--- via Libc-alpha
  2021-02-01 17:05         ` H.J. Lu via Libc-alpha
  1 sibling, 1 reply; 13+ messages in thread
From: sajan.karumanchi--- via Libc-alpha @ 2021-01-22 10:18 UTC (permalink / raw)
  To: fweimer; +Cc: Sajan Karumanchi, Premachandra Mallappa, libc-alpha

From: Sajan Karumanchi <sajan.karumanchi@amd.com>

In the process of optimizing memcpy for AMD machines, we have found
that vector move operations outperform enhanced REP MOVSB for data
transfers above the L2 cache size on Zen3 architectures.
To handle this use case, we are adding an upper bound parameter on
enhanced REP MOVSB: '__x86_rep_movsb_stop_threshold'.
As per the large-bench results, we are configuring this parameter to
the L2 cache size for AMD machines; it applies from the Zen3
architecture onwards, which supports the ERMS feature.
For architectures other than AMD, it is set to the computed value of
the non-temporal threshold parameter.
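
As a rough way to observe the crossover described above, a hypothetical
timing loop such as the following can be used (this is not the glibc
benchtest behind the quoted large-bench results; the 512 KiB and 8 MiB
sizes are assumptions chosen to land below and above a Zen3 L2):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  /* Copy SIZE bytes ITERATIONS times and report the throughput.  */
  static double
  bandwidth_gbps (size_t size, int iterations)
  {
    char *src = malloc (size), *dst = malloc (size);
    if (src == NULL || dst == NULL)
      return 0.0;
    memset (src, 1, size);
    struct timespec t0, t1;
    clock_gettime (CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iterations; i++)
      memcpy (dst, src, size);
    clock_gettime (CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    free (src);
    free (dst);
    return (double) size * iterations / secs / 1e9;
  }

  int
  main (void)
  {
    printf ("512 KiB: %.2f GB/s\n", bandwidth_gbps (512 * 1024, 10000));
    printf ("  8 MiB: %.2f GB/s\n", bandwidth_gbps (8 * 1024 * 1024, 1000));
    return 0;
  }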

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
---
 sysdeps/x86/cacheinfo.h                           |  4 ++++
 sysdeps/x86/dl-cacheinfo.h                        | 15 ++++++++++++++-
 sysdeps/x86/include/cpu-features.h                |  2 ++
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S |  7 +++++--
 4 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index 68c253542f..0f0ca7c08c 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -54,6 +54,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 /* Threshold to use Enhanced REP STOSB.  */
 long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 
+/* Threshold to stop using Enhanced REP MOVSB.  */
+long int __x86_rep_movsb_stop_threshold attribute_hidden;
+
 static void
 init_cacheinfo (void)
 {
@@ -79,5 +82,6 @@ init_cacheinfo (void)
 
   __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
+  __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
 }
 #endif
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index a31fa0783a..374ba82467 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -704,7 +704,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   int max_cpuid_ex;
   long int data = -1;
   long int shared = -1;
-  long int core;
+  long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
   unsigned long int level1_dcache_size = -1;
@@ -886,6 +886,18 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
 #endif
     }
 
+  unsigned long int rep_movsb_stop_threshold;
+  /* The ERMS feature is implemented from the AMD Zen3 architecture and it
+     performs poorly for data above the L2 cache size.  Hence, add an upper
+     bound threshold parameter to limit the usage of Enhanced REP MOVSB
+     operations and set its value to the L2 cache size.  */
+  if (cpu_features->basic.kind == arch_kind_amd)
+    rep_movsb_stop_threshold = core;
+  /* Setting the upper bound of ERMS to the computed value of
+     non-temporal threshold for architectures other than AMD.  */
+  else
+    rep_movsb_stop_threshold = non_temporal_threshold;
+
   /* The default threshold to use Enhanced REP STOSB.  */
   unsigned long int rep_stosb_threshold = 2048;
 
@@ -935,4 +947,5 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->non_temporal_threshold = non_temporal_threshold;
   cpu_features->rep_movsb_threshold = rep_movsb_threshold;
   cpu_features->rep_stosb_threshold = rep_stosb_threshold;
+  cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
 }
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 624736b40e..475e877294 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -870,6 +870,8 @@ struct cpu_features
   unsigned long int non_temporal_threshold;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
+  /* Threshold to stop using "rep movsb".  */
+  unsigned long int rep_movsb_stop_threshold;
   /* Threshold to use "rep stosb".  */
   unsigned long int rep_stosb_threshold;
   /* _SC_LEVEL1_ICACHE_SIZE.  */
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 0980c95378..50bb1fccb2 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -30,7 +30,10 @@
       load and aligned store.  Load the last 4 * VEC and first VEC
       before the loop and store them after the loop to support
       overlapping addresses.
-   6. If size >= __x86_shared_non_temporal_threshold and there is no
+   6. On machines with the ERMS feature, if the size is greater than or
+      equal to __x86_rep_movsb_threshold and less than
+      __x86_rep_movsb_stop_threshold, then REP MOVSB will be used.
+   7. If size >= __x86_shared_non_temporal_threshold and there is no
       overlap between destination and source, use non-temporal store
       instead of aligned store.  */
 
@@ -240,7 +243,7 @@ L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	cmp     __x86_rep_movsb_stop_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-01-22 10:18       ` sajan.karumanchi--- via Libc-alpha
@ 2021-02-01 17:05         ` H.J. Lu via Libc-alpha
  2022-04-27 23:38           ` Sunil Pandey via Libc-alpha
  0 siblings, 1 reply; 13+ messages in thread
From: H.J. Lu via Libc-alpha @ 2021-02-01 17:05 UTC (permalink / raw)
  To: Sajan Karumanchi; +Cc: Florian Weimer, Premachandra Mallappa, GNU C Library

On Fri, Jan 22, 2021 at 2:19 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we have found the
> vector move operations are outperforming enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on
> enhanced REP MOVSB:'__x86_rep_movsb_stop_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
>  sysdeps/x86/cacheinfo.h                           |  4 ++++
>  sysdeps/x86/dl-cacheinfo.h                        | 15 ++++++++++++++-
>  sysdeps/x86/include/cpu-features.h                |  2 ++
>  .../x86_64/multiarch/memmove-vec-unaligned-erms.S |  7 +++++--
>  4 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 68c253542f..0f0ca7c08c 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -54,6 +54,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>  /* Threshold to use Enhanced REP STOSB.  */
>  long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB.  */
> +long int __x86_rep_movsb_stop_threshold attribute_hidden;
> +
>  static void
>  init_cacheinfo (void)
>  {
> @@ -79,5 +82,6 @@ init_cacheinfo (void)
>
>    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> +  __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
>  }
>  #endif
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index a31fa0783a..374ba82467 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -704,7 +704,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    int max_cpuid_ex;
>    long int data = -1;
>    long int shared = -1;
> -  long int core;
> +  long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
>    unsigned long int level1_dcache_size = -1;
> @@ -886,6 +886,18 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>  #endif
>      }
>
> +  unsigned long int rep_movsb_stop_threshold;
> +  /* ERMS feature is implemented from AMD Zen3 architecture and it is
> +     performing poorly for data above L2 cache size. Henceforth, adding
> +     an upper bound threshold parameter to limit the usage of Enhanced
> +     REP MOVSB operations and setting its value to L2 cache size.  */
> +  if (cpu_features->basic.kind == arch_kind_amd)
> +    rep_movsb_stop_threshold = core;
> +  /* Setting the upper bound of ERMS to the computed value of
> +     non-temporal threshold for architectures other than AMD.  */
> +  else
> +    rep_movsb_stop_threshold = non_temporal_threshold;
> +
>    /* The default threshold to use Enhanced REP STOSB.  */
>    unsigned long int rep_stosb_threshold = 2048;
>
> @@ -935,4 +947,5 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->non_temporal_threshold = non_temporal_threshold;
>    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
>    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> +  cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
>  }
> diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> index 624736b40e..475e877294 100644
> --- a/sysdeps/x86/include/cpu-features.h
> +++ b/sysdeps/x86/include/cpu-features.h
> @@ -870,6 +870,8 @@ struct cpu_features
>    unsigned long int non_temporal_threshold;
>    /* Threshold to use "rep movsb".  */
>    unsigned long int rep_movsb_threshold;
> +  /* Threshold to stop using "rep movsb".  */
> +  unsigned long int rep_movsb_stop_threshold;
>    /* Threshold to use "rep stosb".  */
>    unsigned long int rep_stosb_threshold;
>    /* _SC_LEVEL1_ICACHE_SIZE.  */
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..50bb1fccb2 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -30,7 +30,10 @@
>        load and aligned store.  Load the last 4 * VEC and first VEC
>        before the loop and store them after the loop to support
>        overlapping addresses.
> -   6. If size >= __x86_shared_non_temporal_threshold and there is no
> +   6. On machines with ERMS feature, if size greater than equal or to
> +      __x86_rep_movsb_threshold and less than
> +      __x86_rep_movsb_stop_threshold, then REP MOVSB will be used.
> +   7. If size >= __x86_shared_non_temporal_threshold and there is no
>        overlap between destination and source, use non-temporal store
>        instead of aligned store.  */
>
> @@ -240,7 +243,7 @@ L(return):
>         ret
>
>  L(movsb):
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       cmp     __x86_rep_movsb_stop_threshold(%rip), %RDX_LP
>         jae     L(more_8x_vec)
>         cmpq    %rsi, %rdi
>         jb      1f
> --
> 2.25.1
>

LGTM.   OK for 2.34.

Thanks.

-- 
H.J.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
  2021-02-01 17:05         ` H.J. Lu via Libc-alpha
@ 2022-04-27 23:38           ` Sunil Pandey via Libc-alpha
  0 siblings, 0 replies; 13+ messages in thread
From: Sunil Pandey via Libc-alpha @ 2022-04-27 23:38 UTC (permalink / raw)
  To: H.J. Lu, libc-stable
  Cc: Florian Weimer, Sajan Karumanchi, GNU C Library,
	Premachandra Mallappa

On Mon, Feb 1, 2021 at 9:13 AM H.J. Lu via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On Fri, Jan 22, 2021 at 2:19 AM <sajan.karumanchi@amd.com> wrote:
> >
> > From: Sajan Karumanchi <sajan.karumanchi@amd.com>
> >
> > In the process of optimizing memcpy for AMD machines, we have found the
> > vector move operations are outperforming enhanced REP MOVSB for data
> > transfers above the L2 cache size on Zen3 architectures.
> > To handle this use case, we are adding an upper bound parameter on
> > enhanced REP MOVSB:'__x86_rep_movsb_stop_threshold'.
> > As per large-bench results, we are configuring this parameter to the
> > L2 cache size for AMD machines and applicable from Zen3 architecture
> > supporting the ERMS feature.
> > For architectures other than AMD, it is the computed value of
> > non-temporal threshold parameter.
> >
> > Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> > ---
> >  sysdeps/x86/cacheinfo.h                           |  4 ++++
> >  sysdeps/x86/dl-cacheinfo.h                        | 15 ++++++++++++++-
> >  sysdeps/x86/include/cpu-features.h                |  2 ++
> >  .../x86_64/multiarch/memmove-vec-unaligned-erms.S |  7 +++++--
> >  4 files changed, 25 insertions(+), 3 deletions(-)
> >
> > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> > index 68c253542f..0f0ca7c08c 100644
> > --- a/sysdeps/x86/cacheinfo.h
> > +++ b/sysdeps/x86/cacheinfo.h
> > @@ -54,6 +54,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> >  /* Threshold to use Enhanced REP STOSB.  */
> >  long int __x86_rep_stosb_threshold attribute_hidden = 2048;
> >
> > +/* Threshold to stop using Enhanced REP MOVSB.  */
> > +long int __x86_rep_movsb_stop_threshold attribute_hidden;
> > +
> >  static void
> >  init_cacheinfo (void)
> >  {
> > @@ -79,5 +82,6 @@ init_cacheinfo (void)
> >
> >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> > +  __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> >  }
> >  #endif
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index a31fa0783a..374ba82467 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -704,7 +704,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    int max_cpuid_ex;
> >    long int data = -1;
> >    long int shared = -1;
> > -  long int core;
> > +  long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> >    unsigned long int level1_dcache_size = -1;
> > @@ -886,6 +886,18 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >  #endif
> >      }
> >
> > +  unsigned long int rep_movsb_stop_threshold;
> > +  /* ERMS feature is implemented from AMD Zen3 architecture and it is
> > +     performing poorly for data above L2 cache size. Henceforth, adding
> > +     an upper bound threshold parameter to limit the usage of Enhanced
> > +     REP MOVSB operations and setting its value to L2 cache size.  */
> > +  if (cpu_features->basic.kind == arch_kind_amd)
> > +    rep_movsb_stop_threshold = core;
> > +  /* Setting the upper bound of ERMS to the computed value of
> > +     non-temporal threshold for architectures other than AMD.  */
> > +  else
> > +    rep_movsb_stop_threshold = non_temporal_threshold;
> > +
> >    /* The default threshold to use Enhanced REP STOSB.  */
> >    unsigned long int rep_stosb_threshold = 2048;
> >
> > @@ -935,4 +947,5 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->non_temporal_threshold = non_temporal_threshold;
> >    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> >    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> > +  cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> >  }
> > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> > index 624736b40e..475e877294 100644
> > --- a/sysdeps/x86/include/cpu-features.h
> > +++ b/sysdeps/x86/include/cpu-features.h
> > @@ -870,6 +870,8 @@ struct cpu_features
> >    unsigned long int non_temporal_threshold;
> >    /* Threshold to use "rep movsb".  */
> >    unsigned long int rep_movsb_threshold;
> > +  /* Threshold to stop using "rep movsb".  */
> > +  unsigned long int rep_movsb_stop_threshold;
> >    /* Threshold to use "rep stosb".  */
> >    unsigned long int rep_stosb_threshold;
> >    /* _SC_LEVEL1_ICACHE_SIZE.  */
> > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > index 0980c95378..50bb1fccb2 100644
> > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > @@ -30,7 +30,10 @@
> >        load and aligned store.  Load the last 4 * VEC and first VEC
> >        before the loop and store them after the loop to support
> >        overlapping addresses.
> > -   6. If size >= __x86_shared_non_temporal_threshold and there is no
> > +   6. On machines with ERMS feature, if size greater than equal or to
> > +      __x86_rep_movsb_threshold and less than
> > +      __x86_rep_movsb_stop_threshold, then REP MOVSB will be used.
> > +   7. If size >= __x86_shared_non_temporal_threshold and there is no
> >        overlap between destination and source, use non-temporal store
> >        instead of aligned store.  */
> >
> > @@ -240,7 +243,7 @@ L(return):
> >         ret
> >
> >  L(movsb):
> > -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> > +       cmp     __x86_rep_movsb_stop_threshold(%rip), %RDX_LP
> >         jae     L(more_8x_vec)
> >         cmpq    %rsi, %rdi
> >         jb      1f
> > --
> > 2.25.1
> >
>
> LGTM.   OK for 2.34.
>
> Thanks.
>
> --
> H.J.

I would like to backport this patch to release branches.
Any comments or objections?

--Sunil

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-04-27 23:39 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-07 16:22 [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB sajan.karumanchi--- via Libc-alpha
2021-01-08 14:03 ` Florian Weimer via Libc-alpha
2021-01-11 10:46   ` Karumanchi, Sajan via Libc-alpha
2021-01-18 17:07     ` Florian Weimer via Libc-alpha
2021-01-18 17:10       ` Adhemerval Zanella via Libc-alpha
2021-01-22 10:18       ` sajan.karumanchi--- via Libc-alpha
2021-02-01 17:05         ` H.J. Lu via Libc-alpha
2022-04-27 23:38           ` Sunil Pandey via Libc-alpha
  -- strict thread matches above, loose matches on Subject: below --
2021-01-11 10:43 sajan.karumanchi--- via Libc-alpha
2021-01-11 17:27 ` H.J. Lu via Libc-alpha
2021-01-12 18:56   ` Karumanchi, Sajan via Libc-alpha
2021-01-12 20:04 [PATCH 1/1] " H.J. Lu via Libc-alpha
2021-01-13 15:18 ` [PATCH] " sajan.karumanchi--- via Libc-alpha
2021-01-13 15:26   ` H.J. Lu via Libc-alpha
