From: Noah Goldstein via Libc-alpha
Reply-To: Noah Goldstein
To: "H.J. Lu"
Cc: GNU C Library
Date: Mon, 19 Apr 2021 14:07:31 -0700
Subject: Re: [PATCH v2 1/2] x86: Optimize less_vec evex and avx512 memset-vec-unaligned-erms.S
References: <20210419163025.2285675-1-goldstein.w.n@gmail.com>

On Mon, Apr 19, 2021 at 1:39 PM H.J. Lu wrote:
>
> On Mon, Apr 19, 2021 at 12:35 PM Noah Goldstein wrote:
> >
> > On Mon, Apr 19, 2021 at 2:45 PM H.J. Lu wrote:
> > >
> > > On Mon, Apr 19, 2021 at 9:30 AM Noah Goldstein wrote:
> > > >
> > > > No bug. This commit adds optimized cased for less_vec memset case that
> > > > uses the avx512vl/avx512bw mask store avoiding the excessive
> > > > branches. test-memset and test-wmemset are passing.
> > > >
> > > > Signed-off-by: Noah Goldstein
> > > > ---
> > > >  sysdeps/x86_64/multiarch/ifunc-memset.h       |  6 ++-
> > > >  .../multiarch/memset-avx512-unaligned-erms.S  |  2 +-
> > > >  .../multiarch/memset-evex-unaligned-erms.S    |  2 +-
> > > >  .../multiarch/memset-vec-unaligned-erms.S     | 52 +++++++++++++++----
> > > >  4 files changed, 47 insertions(+), 15 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86_64/multiarch/ifunc-memset.h b/sysdeps/x86_64/multiarch/ifunc-memset.h
> > > > index 502f946a84..eda5640541 100644
> > > > --- a/sysdeps/x86_64/multiarch/ifunc-memset.h
> > > > +++ b/sysdeps/x86_64/multiarch/ifunc-memset.h
> > > > @@ -54,7 +54,8 @@ IFUNC_SELECTOR (void)
> > > >        && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512))
> > > >      {
> > > >        if (CPU_FEATURE_USABLE_P (cpu_features, AVX512VL)
> > > > -          && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW))
> > > > +          && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
> > > > +          && CPU_FEATURE_USABLE_P (cpu_features, BMI2))
> > > >         {
> > > >           if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > >             return OPTIMIZE (avx512_unaligned_erms);
> > > > @@ -68,7 +69,8 @@ IFUNC_SELECTOR (void)
> > > >    if (CPU_FEATURE_USABLE_P (cpu_features, AVX2))
> > > >      {
> > > >        if (CPU_FEATURE_USABLE_P (cpu_features, AVX512VL)
> > > > -          && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW))
> > > > +          && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
> > > > +          && CPU_FEATURE_USABLE_P (cpu_features, BMI2))
> > > Please also update ifunc-impl-list.c.
> > Done.
> >
> > > >         {
> > > >           if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > >             return OPTIMIZE (evex_unaligned_erms);
> > > > diff --git a/sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S b/sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S
> > > > index 22e7b187c8..d03460be93 100644
> > > > --- a/sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S
> > > > +++ b/sysdeps/x86_64/multiarch/memset-avx512-unaligned-erms.S
> > > > @@ -19,6 +19,6 @@
> > > >  # define SECTION(p)            p##.evex512
> > > >  # define MEMSET_SYMBOL(p,s)    p##_avx512_##s
> > > >  # define WMEMSET_SYMBOL(p,s)   p##_avx512_##s
> > > > -
> > > > +# define USE_LESS_VEC_MASKMOV  1
> > > USE_LESS_VEC_MASKED_STORE
> > Done.
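For context, the masked-store idea behind USE_LESS_VEC_MASKED_STORE is
roughly the following C intrinsics sketch. This is not the glibc
assembly; the function name is made up and it assumes the 32-byte evex
path. It also shows why BMI2 joins the ifunc check: the mask comes from
bzhi.

/* Illustrative only: set n < 32 bytes with one masked store instead of
   a branch tree over sizes.  Needs AVX512VL + AVX512BW for the
   byte-granular masked store and BMI2 for bzhi
   (compile with -mavx512vl -mavx512bw -mbmi2).  */
#include <immintrin.h>
#include <stddef.h>

static void
memset_less_vec_sketch (char *dst, int c, size_t n)
{
  /* Broadcast the fill byte to all 32 lanes.  */
  __m256i v = _mm256_set1_epi8 ((char) c);
  /* Build a mask with the low n bits set; bzhi is the BMI2 instruction
     that motivates the extra CPU_FEATURE_USABLE_P check.  */
  __mmask32 k = (__mmask32) _bzhi_u32 (~0U, (unsigned int) n);
  /* Lanes with a clear mask bit are not written, so no size branches
     are needed for 0 <= n < 32.  */
  _mm256_mask_storeu_epi8 (dst, k, v);
}

The masked store never touches bytes past n, but when the VEC_SIZE
window would run into an unmapped page the store has to fault-suppress,
which is slow; that is what the page-cross check later in the patch
guards against.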
> >
> > > >  # include "memset-vec-unaligned-erms.S"
> > > >  #endif
> > > > diff --git a/sysdeps/x86_64/multiarch/memset-evex-unaligned-erms.S b/sysdeps/x86_64/multiarch/memset-evex-unaligned-erms.S
> > > > index ae0a4d6e46..eb3541ef60 100644
> > > > --- a/sysdeps/x86_64/multiarch/memset-evex-unaligned-erms.S
> > > > +++ b/sysdeps/x86_64/multiarch/memset-evex-unaligned-erms.S
> > > > @@ -19,6 +19,6 @@
> > > >  # define SECTION(p)            p##.evex
> > > >  # define MEMSET_SYMBOL(p,s)    p##_evex_##s
> > > >  # define WMEMSET_SYMBOL(p,s)   p##_evex_##s
> > > > -
> > > > +# define USE_LESS_VEC_MASKMOV  1
> > > >  # include "memset-vec-unaligned-erms.S"
> > > >  #endif
> > > > diff --git a/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> > > > index 584747f1a1..6b02e87f48 100644
> > > > --- a/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> > > > +++ b/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> > > > @@ -63,6 +63,9 @@
> > > >  # endif
> > > >  #endif
> > > >
> > > > +#define PAGE_SIZE 4096
> > > > +#define LOG_PAGE_SIZE 12
> > > > +
> > > >  #ifndef SECTION
> > > >  # error SECTION is not defined!
> > > >  #endif
> > > > @@ -213,11 +216,38 @@ L(loop):
> > > >         cmpq    %rcx, %rdx
> > > >         jne     L(loop)
> > > >         VZEROUPPER_SHORT_RETURN
> > > > +
> > > > +       .p2align 4
> > > >  L(less_vec):
> > > >         /* Less than 1 VEC.  */
> > > >  # if VEC_SIZE != 16 && VEC_SIZE != 32 && VEC_SIZE != 64
> > > >  #  error Unsupported VEC_SIZE!
> > > >  # endif
> > > > +# ifdef USE_LESS_VEC_MASKMOV
> > > > +       /* Clear high bits from edi. Only keeping bits relevant to page
> > > > +          cross check. Using sall instead of andl saves 3 bytes. Note
> > > > +          that we are using rax which is set in
> > > > +          MEMSET_VDUP_TO_VEC0_AND_SET_RETURN as ptr from here on out. */
> > > > +       sall    $(32 - LOG_PAGE_SIZE), %edi
> > > > +       /* Check if VEC_SIZE load cross page. Mask loads suffer serious
> > > > +          performance degradation when it has to fault supress. */
> > > > +       cmpl    $((PAGE_SIZE - VEC_SIZE) << (32 - LOG_PAGE_SIZE)), %edi
> > >
> > > Please use AND and CMP since AND has higher throughput.
> >
> > AND uses more code size for VEC_SIZE=16/32 and just barely pushes the
> > L(cross_page) to the next 16 byte chunk so the extra 3 bytes from AND
> > end up costing 16 bytes. Not aligning L(cross_page) to 16 also
> > introduces higher variance to benchmarks so I think it has to be all 16 bytes.
> >
> > As is I don't think throughput of AND / SAL is on the critical
> > path so code size should win out. (We can also decode MOV -1, ecx
> > first cycle with SAL as opposed to AND).
> >
> > What do you think?
> I prefer AND over SAL.  Something like
>
> diff --git a/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> b/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> index 3a59d39267..763fb907b9 100644
> --- a/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S
> @@ -217,21 +217,17 @@ L(loop):
>         jne     L(loop)
>         VZEROUPPER_SHORT_RETURN
>
> -       .p2align 4
> +       /* NB: Don't align this branch target to reduce code size.  */

Not aligning this branch can harm performance. Median stays about the
same but variance / geomean go up.

> L(less_vec):
>         /* Less than 1 VEC.  */
> # if VEC_SIZE != 16 && VEC_SIZE != 32 && VEC_SIZE != 64
> #  error Unsupported VEC_SIZE!
> # endif
> # ifdef USE_LESS_VEC_MASK_STORE
> -       /* Clear high bits from edi. Only keeping bits relevant to page
> -          cross check. Using sall instead of andl saves 3 bytes. Note
> -          that we are using rax which is set in
> -          MEMSET_VDUP_TO_VEC0_AND_SET_RETURN as ptr from here on out. */
> -       sall    $(32 - LOG_PAGE_SIZE), %edi
> -       /* Check if VEC_SIZE load cross page. Mask loads suffer serious
> +       /* Check if VEC_SIZE store cross page. Mask stores suffer serious
>            performance degradation when it has to fault supress. */
> -       cmpl    $((PAGE_SIZE - VEC_SIZE) << (32 - LOG_PAGE_SIZE)), %edi
> +       andl    $(PAGE_SIZE - 1), %edi
> +       cmpl    $(PAGE_SIZE - VEC_SIZE), %edi
>         ja      L(cross_page)
> # if VEC_SIZE > 32
>         movq    $-1, %rcx
>
> Thanks.
>
> --
> H.J.
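Since the SAL-vs-AND exchange above is easy to lose in the assembly:
both forms test the same condition. A rough C rendering follows; the
function names are mine, VEC_SIZE 32 is just picked for the example,
and PAGE_SIZE / LOG_PAGE_SIZE mirror the macros in the quoted patch.

/* Both predicates are true exactly when a VEC_SIZE-byte store starting
   at dst would spill into the next 4 KiB page, i.e. when the masked
   store could need fault suppression.  */
#include <stdint.h>

#define PAGE_SIZE     4096
#define LOG_PAGE_SIZE 12
#define VEC_SIZE      32

/* AND + CMP form: isolate the offset within the page, then compare it
   against the last offset at which a full VEC_SIZE store still fits.  */
static int
crosses_page_and (uintptr_t dst)
{
  return (dst & (PAGE_SIZE - 1)) > (PAGE_SIZE - VEC_SIZE);
}

/* SAL + CMP form: shift the page offset into the top bits instead of
   masking it; the unsigned compare of the shifted values is equivalent.
   The andl needs a 4-byte immediate (PAGE_SIZE - 1 does not fit in 8
   bits) while sall takes a 1-byte shift count, which is presumably the
   3-byte size difference discussed above.  */
static int
crosses_page_sal (uintptr_t dst)
{
  uint32_t lhs = (uint32_t) dst << (32 - LOG_PAGE_SIZE);
  uint32_t rhs = (uint32_t) (PAGE_SIZE - VEC_SIZE) << (32 - LOG_PAGE_SIZE);
  return lhs > rhs;
}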