From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 26 Jul 2021 13:20:18 -0400
Subject: Re: [PATCH] x86-64: Add Avoid_Short_Distance_REP_MOVSB
From: Noah Goldstein via Libc-alpha
Reply-To: Noah Goldstein
To: "H.J. Lu"
Cc: GNU C Library
In-Reply-To: <20210726120055.1089971-1-hjl.tools@gmail.com>
References: <20210726120055.1089971-1-hjl.tools@gmail.com>
List-Id: Libc-alpha mailing list
Content-Type: text/plain; charset="UTF-8"

On Mon, Jul 26, 2021 at 8:02 AM H.J. Lu via Libc-alpha <
libc-alpha@sourceware.org> wrote:

> commit 3ec5d83d2a237d39e7fd6ef7a0bc8ac4c171a4a5
> Author: H.J. Lu
> Date:   Sat Jan 25 14:19:40 2020 -0800
>
>     x86-64: Avoid rep movsb with short distance [BZ #27130]
>
> introduced some regressions on Intel processors without Fast Short REP
> MOV (FSRM).  Add Avoid_Short_Distance_REP_MOVSB to avoid rep movsb with
> short distance only on Intel processors with FSRM.
> bench-memmove-large on Skylake server shows that cycles of
> __memmove_evex_unaligned_erms are improved for the following data sizes:
>
>                                      before    after    Improvement
> length=4127, align1=3, align2=0:     479.38   343.00        28%
> length=4223, align1=9, align2=5:     405.62   335.50        17%
> length=8223, align1=3, align2=0:     786.12   495.00        37%
> length=8319, align1=9, align2=5:     256.69   170.38        33%
> length=16415, align1=3, align2=0:   1436.88   839.50        41%
> length=16511, align1=9, align2=5:   1375.50   840.62        39%
> length=32799, align1=3, align2=0:   2890.00  1850.62        36%
> length=32895, align1=9, align2=5:   2891.38  1948.62        32%
>
> There are no regressions on Ice Lake server.

On Tigerlake I see some strange results for the random tests:

  "ifuncs": ["__memcpy_avx_unaligned", "__memcpy_avx_unaligned_erms",
             "__memcpy_evex_unaligned", "__memcpy_evex_unaligned_erms",
             "__memcpy_ssse3_back", "__memcpy_ssse3",
             "__memcpy_avx512_no_vzeroupper", "__memcpy_avx512_unaligned",
             "__memcpy_avx512_unaligned_erms", "__memcpy_sse2_unaligned",
             "__memcpy_sse2_unaligned_erms", "__memcpy_erms"],

Without the patch:

  "length": 4096, "timings": [117793, 118814, 95009.2, 140061, 209016,
  162007, 112210, 113011, 139953, 106604, 106483, 116845]

With the patch:

  "length": 4096, "timings": [136386, 95256.7, 134947, 102466, 182687,
  163942, 110546, 127766, 98344.5, 107647, 109190, 118613]

It seems like some of the erms versions are heavily pessimized while the
non-erms versions benefit significantly.  I think it has to do with the
change in alignment of L(less_vec), though I am not certain.

Are you seeing the same performance changes on Skylake/Icelake server?
> ---
>  sysdeps/x86/cacheinfo.h                                    | 7 +++++++
>  sysdeps/x86/cpu-features.c                                 | 5 +++++
>  .../x86/include/cpu-features-preferred_feature_index_1.def | 1 +
>  sysdeps/x86/sysdep.h                                       | 3 +++
>  sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S      | 5 +++++
>  5 files changed, 21 insertions(+)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index eba8dbc4a6..174ea38f5b 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -49,6 +49,9 @@ long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>  /* Threshold to stop using Enhanced REP MOVSB.  */
>  long int __x86_rep_movsb_stop_threshold attribute_hidden;
>
> +/* String/memory function control.  */
> +int __x86_string_control attribute_hidden;
> +
>  static void
>  init_cacheinfo (void)
>  {
> @@ -71,5 +74,9 @@ init_cacheinfo (void)
>    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>    __x86_rep_movsb_stop_threshold = cpu_features->rep_movsb_stop_threshold;
> +
> +  if (CPU_FEATURES_ARCH_P (cpu_features, Avoid_Short_Distance_REP_MOVSB))
> +    __x86_string_control
> +      |= X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB;
>  }
>  #endif
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 706a172ba9..645bba6314 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -555,6 +555,11 @@ init_cpu_features (struct cpu_features *cpu_features)
>  	      cpu_features->preferred[index_arch_Prefer_AVX2_STRCMP]
>  		|= bit_arch_Prefer_AVX2_STRCMP;
>  	    }
> +
> +	  /* Avoid short distance REP MOVSB on processors with FSRM.  */
> +	  if (CPU_FEATURES_CPU_P (cpu_features, FSRM))
> +	    cpu_features->preferred[index_arch_Avoid_Short_Distance_REP_MOVSB]
> +	      |= bit_arch_Avoid_Short_Distance_REP_MOVSB;
>  	}
>      /* This spells out "AuthenticAMD" or "HygonGenuine".  */
>      else if ((ebx == 0x68747541 && ecx == 0x444d4163 && edx == 0x69746e65)
> diff --git a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
> index 133aab19f1..d7c93f00c5 100644
> --- a/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
> +++ b/sysdeps/x86/include/cpu-features-preferred_feature_index_1.def
> @@ -33,3 +33,4 @@ BIT (Prefer_No_AVX512)
>  BIT (MathVec_Prefer_No_AVX512)
>  BIT (Prefer_FSRM)
>  BIT (Prefer_AVX2_STRCMP)
> +BIT (Avoid_Short_Distance_REP_MOVSB)
> diff --git a/sysdeps/x86/sysdep.h b/sysdeps/x86/sysdep.h
> index 51c069bfe1..35cb90d507 100644
> --- a/sysdeps/x86/sysdep.h
> +++ b/sysdeps/x86/sysdep.h
> @@ -57,6 +57,9 @@ enum cf_protection_level
>  #define STATE_SAVE_MASK \
>    ((1 << 1) | (1 << 2) | (1 << 3) | (1 << 5) | (1 << 6) | (1 << 7))
>
> +/* Avoid short distance REP MOVSB.  */
> +#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB	(1 << 0)
> +
>  #ifdef __ASSEMBLER__
>
>  /* Syntactic details of assembler.  */
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index a783da5de2..9f02624375 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -325,12 +325,16 @@ L(movsb):
>  	/* Avoid slow backward REP MOVSB.  */
>  	jb	L(more_8x_vec_backward)
>  # if AVOID_SHORT_DISTANCE_REP_MOVSB
> +	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> +	jz	3f
>  	movq	%rdi, %rcx
>  	subq	%rsi, %rcx
>  	jmp	2f
>  # endif
>  1:
>  # if AVOID_SHORT_DISTANCE_REP_MOVSB
> +	andl	$X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB, __x86_string_control(%rip)
> +	jz	3f
>  	movq	%rsi, %rcx
>  	subq	%rdi, %rcx
>  2:
> @@ -338,6 +342,7 @@ L(movsb):
>  	   is N*4GB + [1..63] with N >= 0.  */
>  	cmpl	$63, %ecx
>  	jbe	L(more_2x_vec)	/* Avoid "rep movsb" if ECX <= 63.  */
> +3:
>  # endif
>  	mov	%RDX_LP, %RCX_LP
>  	rep movsb
> --
> 2.31.1
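As a side note for anyone skimming the asm: the guard the patch adds around rep movsb amounts to roughly the following C sketch.  This is illustrative only, not the actual implementation; the names mirror the patch, but the real code works on %rcx exactly as the diff shows.

```c
/* Illustrative sketch of the control flow the patch adds around
   "rep movsb" in memmove-vec-unaligned-erms.S.  Not the actual
   implementation.  */
#include <stdint.h>

#define X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB (1 << 0)

/* Return 1 if "rep movsb" should be used, 0 if the vector path
   (L(more_2x_vec)) should be taken instead.  */
static int
use_rep_movsb (uintptr_t dst, uintptr_t src, int string_control)
{
  if (string_control & X86_STRING_CONTROL_AVOID_SHORT_DISTANCE_REP_MOVSB)
    {
      /* Distance between source and destination, truncated to 32 bits
	 just as "cmpl $63, %ecx" sees it, so a distance of
	 N*4GB + [1..63] also takes the fallback.  */
      uint32_t distance = (uint32_t) (dst > src ? dst - src : src - dst);
      if (distance <= 63)
	return 0;
    }
  return 1;
}
```

So on CPUs where the Avoid_Short_Distance_REP_MOVSB bit was not set (non-FSRM), the new jz 3f skips both the distance computation and the cmpl/jbe, restoring the pre-3ec5d83d behavior.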