Date: Thu, 24 Sep 2020 03:31:25 -0400
From: Jeff King
To: Han-Wen Nienhuys
Cc: Junio C Hamano, Han-Wen Nienhuys via GitGitGadget, git, Han-Wen Nienhuys
Subject: Re: [PATCH 06/13] reftable: (de)serialization for the polymorphic record type.
Message-ID: <20200924073125.GD1851751@coredump.intra.peff.net>
References: <791f69c000556e93bf5fcfc0ec9304833b12565b.1600283416.git.gitgitgadget@gmail.com> <20200924072151.GC1851751@coredump.intra.peff.net>
In-Reply-To: <20200924072151.GC1851751@coredump.intra.peff.net>
List-ID: <git.vger.kernel.org>

On Thu, Sep 24, 2020 at 03:21:51AM -0400, Jeff King wrote:

> > I originally had
> >
> > +void put_be64(uint8_t *out, uint64_t v)
> > +{
> > +	int i = sizeof(uint64_t);
> > +	while (i--) {
> > +		out[i] = (uint8_t)(v & 0xff);
> > +		v >>= 8;
> > +	}
> > +}
> >
> > in my reftable library, which is portable. Is there a reason for the
> > magic with htonll and friends?
>
> Presumably it was thought to be faster. This comes originally from the
> block-sha1 code in 660231aa97 (block-sha1: support for architectures
> with memory alignment restrictions, 2009-08-12). I don't know how it
> compares in practice, and especially these days.
>
> Our fallback routines are similar to an unrolled version of what you
> wrote above.

We should be able to measure it pretty easily, since block-sha1 uses a
lot of get_be32/put_be32. I generated a 4GB random file, built with
BLK_SHA1=Yes and -O2, and timed:

  t/helper/test-tool sha1