From: "H.J. Lu via Libc-alpha"
Reply-To: "H.J. Lu"
Date: Tue, 12 Jan 2021 12:04:54 -0800
Subject: Re: [PATCH 1/1] x86: Adding an upper bound for Enhanced REP MOVSB.
To: Sajan Karumanchi
In-Reply-To: <20210112184841.416254-1-sajan.karumanchi@amd.com>
References: <20210112184841.416254-1-sajan.karumanchi@amd.com>
Lu" Cc: Florian Weimer , Premachandra Mallappa , GNU C Library Errors-To: libc-alpha-bounces@sourceware.org Sender: "Libc-alpha" On Tue, Jan 12, 2021 at 10:49 AM wrote: > > From: Sajan Karumanchi > > In the process of optimizing memcpy for AMD machines, we have found the > vector move operations are outperforming enhanced REP MOVSB for data > transfers above the L2 cache size on Zen3 architectures. > To handle this use case, we are adding an upper bound parameter on > enhanced REP MOVSB:'__x86_max_rep_movsb_threshold'. > As per large-bench results, we are configuring this parameter to the > L2 cache size for AMD machines and applicable from Zen3 architecture > supporting the ERMS feature. > For architectures other than AMD, it is the computed value of > non-temporal threshold parameter. > > Reviewed-by: Premachandra Mallappa > --- > sysdeps/x86/cacheinfo.h | 14 ++++++++++++++ > .../x86_64/multiarch/memmove-vec-unaligned-erms.S | 10 ++++++++-- > 2 files changed, 22 insertions(+), 2 deletions(-) > > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h > index 00d2d8a52a..00c3a823f0 100644 > --- a/sysdeps/x86/cacheinfo.h > +++ b/sysdeps/x86/cacheinfo.h > @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048; > /* Threshold to use Enhanced REP STOSB. */ > long int __x86_rep_stosb_threshold attribute_hidden = 2048; > > +/* Threshold to stop using Enhanced REP MOVSB. */ > +long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024; This should be 0 just like __x86_shared_non_temporal_threshold. > static void > get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr, > long int core) > @@ -351,6 +354,11 @@ init_cacheinfo (void) > /* Account for exclusive L2 and L3 caches. */ > shared += core; > } > + /* ERMS feature is implemented from Zen3 architecture and it is > + performing poorly for data above L2 cache size. Henceforth, adding > + an upper bound threshold parameter to limit the usage of Enhanced > + REP MOVSB operations and setting its value to L2 cache size. */ > + __x86_max_rep_movsb_threshold = core; > } > } > > @@ -423,6 +431,12 @@ init_cacheinfo (void) > else > __x86_rep_movsb_threshold = rep_movsb_threshold; > > + /* Setting the upper bound of ERMS to the known default value of > + non-temporal threshold for architectures other than AMD. */ > + if (cpu_features->basic.kind != arch_kind_amd) > + __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold; > + > + > # if HAVE_TUNABLES > __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold; > # endif > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > index 0980c95378..8c1a592552 100644 > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > @@ -30,7 +30,10 @@ > load and aligned store. Load the last 4 * VEC and first VEC > before the loop and store them after the loop to support > overlapping addresses. > - 6. If size >= __x86_shared_non_temporal_threshold and there is no > + 6. On machines with ERMS feature, if size greater than equal or to > + __x86_rep_movsb_threshold and less than > + __x86_max_rep_movsb_threshold, then REP MOVSB will be used. > + 7. If size >= __x86_shared_non_temporal_threshold and there is no > overlap between destination and source, use non-temporal store > instead of aligned store. 
*/ > > @@ -240,7 +243,10 @@ L(return): > ret > > L(movsb): > - cmp __x86_shared_non_temporal_threshold(%rip), %RDX_LP > + /* Avoid REP MOVSB for sizes above max threshold, which is > + L2 cache size for AMD machines and for all other machines > + it is __x86_shared_non_temporal_threshold. */ Just /* Avoid REP MOVSB for sizes above the maximum threshold. */ > + cmp __x86_max_rep_movsb_threshold(%rip), %RDX_LP > jae L(more_8x_vec) > cmpq %rsi, %rdi > jb 1f > -- > 2.25.1 > -- H.J.