From mboxrd@z Thu Jan 1 00:00:00 1970
From: "H.J. Lu via Libc-alpha"
Reply-To: "H.J. Lu"
To: "Carlos O'Donell"
Cc: GNU C Library
Date: Mon, 26 Jul 2021 20:11:05 -0700
Subject: Re: [PATCH] x86-64: Add Avoid_Short_Distance_REP_MOVSB
References: <20210726120055.1089971-1-hjl.tools@gmail.com>
List-Id: Libc-alpha mailing list
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Mon, Jul 26, 2021 at 7:15 PM Carlos O'Donell wrote:
>
> On 7/26/21 8:00 AM, H.J. Lu via Libc-alpha wrote:
> > commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
> > Author: H.J. Lu
> > Date:   Sat Jan 25 14:19:40 2020 -0800
> >
> >     x86-64: Avoid rep movsb with short distance [BZ #27130]
> >
> > introduced some regressions on Intel processors without Fast Short REP
> > MOV (FSRM).  Add Avoid_Short_Distance_REP_MOVSB to avoid rep movsb with
> > short distance only on Intel processors with FSRM.  bench-memmove-large
> > on Skylake server shows that cycles of __memmove_evex_unaligned_erms are
> > improved for the following data sizes:
> >
> >                                      before    after    Improvement
> > length=4127, align1=3, align2=0:     479.38    343.00       28%
> > length=4223, align1=9, align2=5:     405.62    335.50       17%
> > length=8223, align1=3, align2=0:     786.12    495.00       37%
> > length=8319, align1=9, align2=5:     256.69    170.38       33%
> > length=16415, align1=3, align2=0:   1436.88    839.50       41%
> > length=16511, align1=9, align2=5:   1375.50    840.62       39%
> > length=32799, align1=3, align2=0:   2890.00   1850.62       36%
> > length=32895, align1=9, align2=5:   2891.38   1948.62       32%
> >
> > There are no regressions on Ice Lake server.
>
> At this point we're waiting on Noah to provide feedback on the performance
> results given the alignment nop insertion you provided as a follow-up patch

We are testing 25-byte nop padding now:

https://gitlab.com/x86-glibc/glibc/-/commit/de8985640a568786a59576716db54e0749d420e8

> (unless you can confirm this yourself).
>
> Looking forward to a v2 that incorporates the alignment fix (pending Noah's
> comments), and my suggestions below.
>
> > ---
> >  sysdeps/x86/cacheinfo.h                                    | 7 +++++++
> >  sysdeps/x86/cpu-features.c                                 | 5 +++++
> >  .../x86/include/cpu-features-preferred_feature_index_1.def | 1 +
> >  sysdeps/x86/sysdep.h                                       | 3 +++
> >  sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S      | 5 +++++
> >  5 files changed, 21 insertions(+)
> >
> > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> > index eba8dbc4a6..174ea38f5b 100644
> > --- a/sysdeps/x86/cacheinfo.h
> > +++ b/sysdeps/x86/cacheinfo.h
> > @@ -49,6 +49,9 @@ long int __x86_rep_stosb_threshold attribute_hidden = 2048;
> >  /* Threshold to stop using Enhanced REP MOVSB.  */
> >  long int __x86_rep_movsb_stop_threshold attribute_hidden;
> >
> > +/* String/memory function control.  */
> > +int __x86_string_control attribute_hidden;
>
> Please expand comment.
>
> Suggest:
>
> /* A bit-wise OR of string/memory requirements for optimal performance
>    e.g. X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB.  These bits
>    are used at runtime to tune implementation behavior.  */
> int __x86_string_control attribute_hidden;

I will fix it in the v2 patch.  Thanks.

> > +
> >  static void
> >  init_cacheinfo (void)
> >  {
> > @@ -71,5 +74,9 @@ init_cacheinfo (void)
> >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> >    __x86_rep_movsb_stop_threshold = cpu_features->rep_movsb_stop_threshold;
> > +
> > +  if (CPU_FEATURES_ARCH_P (cpu_features, Avoid_Short_Distance_REP_MOVSB))
> > +    __x86_string_control
> > +      |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
>
> OK.
>
> >  }
> >  #endif
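As an aside for the thread: the bit set in init_cacheinfo above is exactly
what the memmove implementation tests at run time.  Below is a minimal
standalone C sketch of the decision it gates; use_rep_movsb is a
hypothetical name, and the control variable and macro are redeclared
locally so the sketch compiles on its own:

#include <stdint.h>
#include <stdio.h>

/* Names mirror the patch; redeclared here so the sketch builds alone.  */
#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)
static int __x86_string_control
  = X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;  /* as if FSRM set */

/* Hypothetical helper: decide whether a forward copy should use
   "rep movsb".  */
static int
use_rep_movsb (const void *dst, const void *src)
{
  /* Compare only the low 32 bits of the distance, as the assembly's
     "cmpl $63, %ecx" does: distances of N*4GB + [1..63] with N >= 0
     also take the fallback.  */
  unsigned int dist = (unsigned int) ((uintptr_t) dst - (uintptr_t) src);

  if ((__x86_string_control
       & X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB)
      && dist >= 1 && dist <= 63)
    return 0;  /* short forward distance: use the vectorized loop */
  return 1;
}

int
main (void)
{
  char buf[256];
  printf ("dst = src + 16:  %s\n",
          use_rep_movsb (buf + 16, buf) ? "rep movsb" : "vector loop");
  printf ("dst = src + 128: %s\n",
          use_rep_movsb (buf + 128, buf) ? "rep movsb" : "vector loop");
  return 0;
}

Compiled as-is, the 16-byte distance takes the vectorized fallback and
the 128-byte distance keeps "rep movsb".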
> > > } > > #endif > > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c > > index 706a172ba9..645bba6314 100644 > > --- a/sysdeps/x86/cpu-features.c > > +++ b/sysdeps/x86/cpu-features.c > > @@ -555,6 +555,11 @@ init_cpu_features (struct cpu_features *cpu_features) > > cpu_features->preferred[index_arch_Prefer_AVX2_STRCMP] > > |= bit_arch_Prefer_AVX2_STRCMP; > > } > > + > > + /* Avoid avoid short distance REP MOVSB on processor with FSRM. */ > > + if (CPU_FEATURES_CPU_P (cpu_features, FSRM)) > > + cpu_features->preferred[index_arch_Avoid_Short_Distance_REP_MOVSB] > > + |= bit_arch_Avoid_Short_Distance_REP_MOVSB; > > OK. > > > } > > /* This spells out "AuthenticAMD" or "HygonGenuine". */ > > else if ((ebx == 0x68747541 && ecx == 0x444d4163 && edx == 0x69746e65) > > diff --git a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def > > index 133aab19f1..d7c93f00c5 100644 > > --- a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def > > +++ b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def > > @@ -33,3 +33,4 @@ BIT (Prefer_No_AVX512) > > BIT (MathVec_Prefer_No_AVX512) > > BIT (Prefer_FSRM) > > BIT (Prefer_AVX2_STRCMP) > > +BIT (Avoid_Short_Distance_REP_MOVSB) > > OK. > > > diff --git a/sysdeps/x86/sysdep.h b/sysdeps/x86/sysdep.h > > index 51c069bfe1..35cb90d507 100644 > > --- a/sysdeps/x86/sysdep.h > > +++ b/sysdeps/x86/sysdep.h > > @@ -57,6 +57,9 @@ enum cf_protection_level > > #define STATE_SAVE_MASK \ > > ((1 << 1) | (1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 7)) > > > > Suggest adding: > > /* Constants for bits in __x86_string_control: */ > > > +/* Avoid short distance REP MOVSB. */ > > +#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0) > > OK. > > > + > > #ifdef __ASSEMBLER__ > > > > /* Syntactic details of assembler. */ > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > > index a783da5de2..9f02624375 100644 > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > > @@ -325,12 +325,16 @@ L(movsb): > > /* Avoid slow backward REP MOVSB. */ > > jb L(more_8x_vec_backward) > > # if AVOID_SHORT_DISTANCE_REP_MOVSB > > + andl $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip) > > + jz 3f > > OK. > > > movq %rdi, %rcx > > subq %rsi, %rcx > > jmp 2f > > # endif > > 1: > > # if AVOID_SHORT_DISTANCE_REP_MOVSB > > + andl $X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip) > > + jz 3f > > OK. > > > movq %rsi, %rcx > > subq %rdi, %rcx > > 2: > > @@ -338,6 +342,7 @@ L(movsb): > > is N*4GB + [1..63] with N >= 0. */ > > cmpl $63, %ecx > > jbe L(more_2x_vec) /* Avoid "rep movsb" if ECX <= 63. */ > > +3: > > OK. > > > # endif > > mov %RDX_LP, %RCX_LP > > rep movsb > > > > > -- > Cheers, > Carlos. > -- H.J.