From: Carlos O'Donell via Libc-alpha <libc-alpha@sourceware.org>
To: Florian Weimer <fw@deneb.enyo.de>,
	Prem Mallappa via Libc-alpha <libc-alpha@sourceware.org>
Cc: codonell@redhat.com, Michael Matz <matz@suse.de>,
	Prem Mallappa <Premachandra.Mallappa@amd.com>,
	Prem Mallappa <prem.mallappa@gmail.com>,
	schwab@suse.com
Subject: Re: [PATCH 0/3] RFC: Platform Support for AMD Zen and AVX2/AVX
Date: Tue, 17 Mar 2020 09:17:00 -0400
Message-ID: <6a5b92f9-31d9-01c6-e6c6-acf1554e4458@redhat.com>
In-Reply-To: <87wo7je4me.fsf@mid.deneb.enyo.de>

On 3/17/20 5:02 AM, Florian Weimer wrote:
> * Prem Mallappa via Libc-alpha:
> 
>> From: Prem Mallappa <Premachandra.Mallappa@amd.com>
>>
>> Hello Glibc Community,
>>
>> == (cross-posting to libc-alpha, apologies for the spam) ==
>>
>> This is in response to
>>
>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=24979
>> [2] https://sourceware.org/bugzilla/show_bug.cgi?id=24080
>> [3] https://sourceware.org/bugzilla/show_bug.cgi?id=23249
>>
>> It is clear that there is no panacea here. However,
>> here is an attempt to address these issues in parts.
>>
>> From [1]: enable customers who already have
>> "haswell" libs and have seen performance benefits
>> by loading them on AMD Zen.
>> (Load libraries by placing them in LD_LIBRARY_PATH/zen
>> or via a symbolic link zen -> haswell.)
>>
>> From [2] and [3]: forward-looking generic-avx2/generic-avx
>> libs enable OS vendors to supply an optimized set.
>> haswell/zen are really supersets of these, hence
>> keeping them made sense.
>>
>> With this we would like to open the naming up for discussion:
>> haswell/zen could instead be intel/amd
>> (or any other name), with ifunc-based loading
>> supplied internally.
> 
> I think we cannot use the platform subdirectory for that because there
> is just a single one.  If we want an Intel/AMD split, we need to
> enhance the dynamic loader to try the CPU vendor directory first, and
> then fall back to a shared subdirectory.  Most distributions do not
> want to test and ship binaries specific to Intel or AMD CPUs.

I agree. The additional burden of testing, maintaining, and supporting
vendor-specific libraries is more than distributions can take on.
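
For concreteness, the lookup order being discussed would look roughly
like the sketch below. This is illustrative only: the directory names
are placeholders and real ld.so path handling is considerably more
involved.

#include <stdio.h>

/* Candidate subdirectories in priority order: CPU vendor first, then
   a vendor-neutral feature level, then the plain directory.  */
static const char *subdirs[] = { "zen", "generic-avx2", "" };

int
main (void)
{
  char path[4096];
  for (unsigned int i = 0; i < sizeof subdirs / sizeof subdirs[0]; i++)
    {
      if (subdirs[i][0] != '\0')
        snprintf (path, sizeof path, "/usr/lib64/%s/%s",
                  subdirs[i], "libfoo.so.1");
      else
        snprintf (path, sizeof path, "/usr/lib64/%s", "libfoo.so.1");
      /* ld.so would attempt to open each candidate and use the first
         one that exists.  */
      printf ("would try: %s\n", path);
    }
  return 0;
}

The key point is that the vendor directory is only a preference:
anything not found there falls back to the shared subdirectory and
then to the default path.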

> That's a generic loader change which will need some time to implement,
> but we can work on something else in the meantime:
> 
> We need to check for *all* relevant CPU flags such code can use,
> and only enable a subdirectory if they are present.  This is necessary
> because virtualization and microcode updates can disable individual
> CPU features.

Agreed. This is the only sensible plan. The platform directories already
imply some of this, but it's not well structured.
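
For example, for AVX2 it is not enough to look at the CPUID feature
bit: the OS must also have enabled AVX state in XCR0, and a hypervisor
can withhold either. A minimal user-space sketch, assuming GCC or
clang on x86-64 (this is not the cpu_features code itself):

#include <cpuid.h>
#include <stdint.h>
#include <stdio.h>

static int
avx2_usable (void)
{
  unsigned int eax, ebx, ecx, edx;

  /* CPUID.1:ECX bit 27 = OSXSAVE, i.e. XGETBV is available.  */
  if (!__get_cpuid (1, &eax, &ebx, &ecx, &edx) || !(ecx & (1u << 27)))
    return 0;

  /* XCR0 bits 1 and 2: SSE and AVX state enabled by the OS.  */
  uint32_t xcr0_lo, xcr0_hi;
  __asm__ ("xgetbv" : "=a" (xcr0_lo), "=d" (xcr0_hi) : "c" (0));
  if ((xcr0_lo & 0x6) != 0x6)
    return 0;

  /* CPUID.(EAX=7,ECX=0):EBX bit 5 = AVX2.  */
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    return 0;
  return (ebx & (1u << 5)) != 0;
}

int
main (void)
{
  printf ("AVX2 usable: %s\n", avx2_usable () ? "yes" : "no");
  return 0;
}

In glibc this logic belongs with the existing cpu_features probing,
not in user code; the point is only that every feature gating a
subdirectory needs this kind of run-time verification.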

> For the new shared subdirectory, I think we should not restrict
> ourselves just to AVX2, but we should also include useful extensions
> that are in practice always implemented in silicon along with AVX2,
> but can be separately tweaked.

Agreed.
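
For example, the gate for a shared generic-avx2 subdirectory would be
a conjunction over the whole feature set, along the lines of this
sketch (the particular features shown are an example, not a proposal):

#include <stdint.h>

enum
{
  FEAT_AVX2  = 1u << 0,
  FEAT_BMI1  = 1u << 1,
  FEAT_BMI2  = 1u << 2,
  FEAT_FMA   = 1u << 3,
  FEAT_LZCNT = 1u << 4,
  FEAT_MOVBE = 1u << 5,
};

/* Every required bit must be present, not just AVX2.  */
static const uint32_t generic_avx2_required
  = FEAT_AVX2 | FEAT_BMI1 | FEAT_BMI2 | FEAT_FMA | FEAT_LZCNT
    | FEAT_MOVBE;

static int
subdir_usable (uint32_t detected)
{
  return (detected & generic_avx2_required) == generic_avx2_required;
}

Requiring the full conjunction keeps the directory safe under
virtualization: if any one feature is masked off, the whole
subdirectory is skipped.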

> This seems to be a reasonable list of CPU feature flags to start with:
> 
>   3DNOW
>   3DNOWEXT
>   3DNOWPREFETCH
>   ABM
>   ADX
>   AES
>   AVX
>   AVX2
>   BMI
>   BMI2
>   CET
>   CLFLUSH
>   CLFLUSHOPT
>   CLWB
>   CLZERO
>   CMPXCHG16B
>   ERMS
>   F16C
>   FMA
>   FMA4
>   FSGSBASE
>   FSRM
>   FXSR
>   HLE
>   LAHF
>   LZCNT
>   MOVBE
>   MWAITX
>   PCLMUL
>   PCOMMIT
>   PKU
>   POPCNT
>   PREFETCHW
>   RDPID
>   RDRAND
>   RDSEED
>   RDTSCP
>   RTM
>   SHA
>   SSE3
>   SSE4.1
>   SSE4.2
>   SSE4A
>   SSSE3
>   TSC
>   XGETBV
>   XSAVE
>   XSAVEC
>   XSAVEOPT
>   XSAVES
> 
> You (as in AMD) need to go through this list and come back with the
> subset that you think should be enabled for current and future CPUs,
> based on your internal roadmap and known errata for existing CPUs.  We
> do not need a rationale for how you filter down the list, merely the
> outcome.

And this is the hard part that we can't solve without AMD's help.

Even setting aside future CPUs, it would be useful to get this list
for all currently released CPUs, drawing on your architectural
knowledge, errata, and other factors such as microcode.

> (I already have the trimmed-down list from Intel.)
> 


-- 
Cheers,
Carlos.


Thread overview: 9+ messages
2020-03-17  4:46 [PATCH 0/3] RFC: Platform Support for AMD Zen and AVX2/AVX Prem Mallappa via Libc-alpha
2020-03-17  4:46 ` [PATCH 1/3] x86: Refactor platform support in cpu_features Prem Mallappa via Libc-alpha
2020-03-17  4:46 ` [PATCH 2/3] x86: Add AMD Zen and AVX2/AVX platform support Prem Mallappa via Libc-alpha
2020-03-17  4:46 ` [PATCH 3/3] x86: test to load from PLATFORM path Prem Mallappa via Libc-alpha
2020-03-17  9:02 ` [PATCH 0/3] RFC: Platform Support for AMD Zen and AVX2/AVX Florian Weimer
2020-03-17 13:17   ` Carlos O'Donell via Libc-alpha [this message]
2020-03-17 19:27     ` Adhemerval Zanella via Libc-alpha
2020-03-17 21:37       ` Carlos O'Donell via Libc-alpha
2020-03-27 14:26         ` Florian Weimer
