git@vger.kernel.org mailing list mirror (one of many)
* [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
@ 2017-03-24 23:24 Johannes Schindelin
  2017-03-24 23:24 ` [PATCH 1/7] sha1dc: safeguard against outside definitions of BIGENDIAN Johannes Schindelin
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:24 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

As I pointed out several times in the past, the performance hit of
enabling SHA1DC globally is not acceptable. This patch series not only
demonstrates that clearly with the perf test it adds (the last patch in
the series; its commit message has some numbers), but also gives an early
glimpse of what I will be proposing to fix the situation.

Obviously, this patch series is not complete yet:

- the first patch is an obvious bug fix that I sent out separately, but
  it is needed to unbreak almost all tests on Windows.

- the most important part will be the patch turning core.enableSHA1DC
  into a tristate: "externalOnly" or "smart" or "auto" or some value
  indicating that it switches on collision detection only for commands
  that accept objects from an outside source into the local repository,
  such as fetch, fast-import, etc.


Johannes Schindelin (7):
  sha1dc: safeguard against outside definitions of BIGENDIAN
  Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL
  config: add the core.enablesha1dc setting
  t0013: do not skip the entire file wholesale without DC_SHA1
  t0013: test DC_AND_OPENSSL_SHA1, too
  mingw: enable DC_AND_OPENSSL_SHA1 by default
  p0013: new test to compare SHA1DC vs OpenSSL

 Makefile               | 13 +++++++++++++
 config.c               |  8 ++++++++
 config.mak.uname       |  1 +
 hash.h                 |  2 +-
 sha1dc/sha1.c          | 23 ++++++++++++++++++++++-
 sha1dc/sha1.h          | 16 ++++++++++++++++
 t/helper/test-sha1.c   | 10 ++++++++++
 t/perf/p0013-sha1dc.sh | 23 +++++++++++++++++++++++
 t/t0013-sha1dc.sh      | 18 ++++++++++++------
 9 files changed, 106 insertions(+), 8 deletions(-)
 create mode 100644 t/perf/p0013-sha1dc.sh


base-commit: 063fe858b89ef8ee27965115fd6b1ed12e42e793
Published-As: https://github.com/dscho/git/releases/tag/sha1dc-knob-v1
Fetch-It-Via: git fetch https://github.com/dscho/git sha1dc-knob-v1

-- 
2.12.1.windows.1



* [PATCH 1/7] sha1dc: safeguard against outside definitions of BIGENDIAN
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
@ 2017-03-24 23:24 ` Johannes Schindelin
  2017-03-24 23:24 ` [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL Johannes Schindelin
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:24 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

In sha1dc/sha1.c, we #define BIGENDIAN under certain circumstances, and
obviously leave the door open for scenarios where our checks do not
trigger and that constant is #defined elsewhere.

However, we did not expect that anybody would possibly #define BIGENDIAN
to 0, indicating that the current platform is *not* big endian.

This is not just a theoretical consideration: On Windows, the winsock2.h
header file (which is used to allow Git to communicate via network) does
indeed do this.

Let's test for that circumstance, too, and byte-swap as intended in that
case.

This fixes a massive breakage on Windows where current `pu` (having
switched on DC_SHA1 by default) breaks pretty much every single test
case.
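
To illustrate, here is a minimal stand-alone C program (not Git code); the
`#define BIGENDIAN 0` below stands in for what the commit message says
winsock2.h effectively does on little-endian Windows, and only the
value-checking form of the test byte-swaps as intended:

  #include <stdio.h>
  #include <stdint.h>

  #define BIGENDIAN 0	/* stand-in for a header defining it to 0 */

  static uint32_t bswap32(uint32_t x)
  {
  	return (x >> 24) | ((x >> 8) & 0xff00u) |
  	       ((x << 8) & 0xff0000u) | (x << 24);
  }

  int main(void)
  {
  	uint32_t word = 0x01020304;

  #if defined(BIGENDIAN)
  	/* old check: taken even though BIGENDIAN is 0, so no swap happens */
  	printf("old check:   0x%08x\n", (unsigned)word);
  #endif
  #if defined(BIGENDIAN) && BIGENDIAN != 0
  	printf("fixed check: no swap needed\n");
  #else
  	/* fixed check: falls through to the byte swap as intended */
  	printf("fixed check: 0x%08x\n", (unsigned)bswap32(word));
  #endif
  	return 0;
  }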

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 sha1dc/sha1.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
index 6dd0da36084..d99db4f2e1b 100644
--- a/sha1dc/sha1.c
+++ b/sha1dc/sha1.c
@@ -35,7 +35,7 @@
 
 #define sha1_mix(W, t)  (rotate_left(W[t - 3] ^ W[t - 8] ^ W[t - 14] ^ W[t - 16], 1))
 
-#if defined(BIGENDIAN)
+#if defined(BIGENDIAN) && BIGENDIAN != 0
 	#define sha1_load(m, t, temp)  { temp = m[t]; }
 #else
 	#define sha1_load(m, t, temp)  { temp = m[t]; sha1_bswap32(temp); }
-- 
2.12.1.windows.1




* [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
  2017-03-24 23:24 ` [PATCH 1/7] sha1dc: safeguard against outside definitions of BIGENDIAN Johannes Schindelin
@ 2017-03-24 23:24 ` Johannes Schindelin
  2017-03-25 19:51   ` Ævar Arnfjörð Bjarmason
  2017-03-30 16:16   ` Junio C Hamano
  2017-03-24 23:24 ` [PATCH 3/7] config: add the core.enablesha1dc setting Johannes Schindelin
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:24 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

Nowadays, there are practical collision attacks on the SHA-1 algorithm.
For that reason, Git integrated code that detects attempts to sneak in
objects using those known attack vectors and enabled it by default.

The collision detection is not for free, though: when using the SHA1DC
code, calculating the SHA-1 takes substantially longer than using
OpenSSL's (in some case hardware-accelerated) SHA1-routines, and this
happens even when switching off the collision detection in SHA1DC's code.

Therefore, it makes sense to limit the use of the collision-detecting
SHA1DC to the cases where objects are introduced from any outside source,
and use the fast OpenSSL code instead when working on implicitly trusted
data (such as when the user calls `git add`).

This patch introduces the DC_AND_OPENSSL_SHA1 Makefile knob to compile
with both SHA1DC and OpenSSL's SHA-1 routines, defaulting to SHA1DC. A
later patch will add the ability to switch between them.
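
As a self-contained sketch of the technique (made-up names, not the patch's
actual code), routing the hash calls through function pointers, as the diff
below does, lets a paranoid and a fast backend be swapped at run time:

  #include <stdio.h>

  typedef struct { int backend; } demo_ctx;

  static void detecting_init(demo_ctx *c) { c->backend = 1; puts("init: collision-detecting"); }
  static void fast_init(demo_ctx *c)      { c->backend = 2; puts("init: fast"); }

  /* paranoid by default, matching the patch's default */
  static void (*hash_init)(demo_ctx *) = detecting_init;

  static void toggle_detection(int enable)
  {
  	hash_init = enable ? detecting_init : fast_init;
  }

  int main(void)
  {
  	demo_ctx ctx;

  	hash_init(&ctx);	/* default: collision-detecting backend */
  	toggle_detection(0);	/* e.g. after core.enablesha1dc=false */
  	hash_init(&ctx);	/* now the fast backend */
  	return 0;
  }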

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 Makefile      | 12 ++++++++++++
 hash.h        |  2 +-
 sha1dc/sha1.c | 21 +++++++++++++++++++++
 sha1dc/sha1.h | 16 ++++++++++++++++
 4 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 9f8b35ad41f..3e181d2f0e2 100644
--- a/Makefile
+++ b/Makefile
@@ -147,6 +147,11 @@ all::
 # Define OPENSSL_SHA1 environment variable when running make to link
 # with the SHA1 routine from openssl library.
 #
+# Define DC_AND_OPENSSL_SHA1 environment variable when running make to use
+# the collision-detecting sha1 algorithm by default, configurable via the
+# core.enableSHA1DC setting, falling back to OpenSSL's faster algorithm when
+# the collision detection is disabled.
+#
 # Define SHA1_MAX_BLOCK_SIZE to limit the amount of data that will be hashed
 # in one call to the platform's SHA1_Update(). e.g. APPLE_COMMON_CRYPTO
 # wants 'SHA1_MAX_BLOCK_SIZE=1024L*1024L*1024L' defined.
@@ -1391,6 +1396,12 @@ ifdef APPLE_COMMON_CRYPTO
 	SHA1_MAX_BLOCK_SIZE = 1024L*1024L*1024L
 endif
 
+ifdef DC_AND_OPENSSL_SHA1
+	EXTLIBS += $(LIB_4_CRYPTO)
+	LIB_OBJS += sha1dc/sha1.o
+	LIB_OBJS += sha1dc/ubc_check.o
+	BASIC_CFLAGS += -DSHA1_DC_AND_OPENSSL
+else
 ifdef OPENSSL_SHA1
 	EXTLIBS += $(LIB_4_CRYPTO)
 	BASIC_CFLAGS += -DSHA1_OPENSSL
@@ -1415,6 +1426,7 @@ endif
 endif
 endif
 endif
+endif
 
 ifdef SHA1_MAX_BLOCK_SIZE
 	LIB_OBJS += compat/sha1-chunked.o
diff --git a/hash.h b/hash.h
index a11fc9233fc..3a09343270d 100644
--- a/hash.h
+++ b/hash.h
@@ -7,7 +7,7 @@
 #include <CommonCrypto/CommonDigest.h>
 #elif defined(SHA1_OPENSSL)
 #include <openssl/sha.h>
-#elif defined(SHA1_DC)
+#elif defined(SHA1_DC) || defined(SHA1_DC_AND_OPENSSL)
 #include "sha1dc/sha1.h"
 #else /* SHA1_BLK */
 #include "block-sha1/sha1.h"
diff --git a/sha1dc/sha1.c b/sha1dc/sha1.c
index d99db4f2e1b..e6bcf0ffa86 100644
--- a/sha1dc/sha1.c
+++ b/sha1dc/sha1.c
@@ -1806,3 +1806,24 @@ void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *vdata, unsigned long len)
 	}
 	SHA1DCUpdate(ctx, data, len);
 }
+
+#ifdef SHA1_DC_AND_OPENSSL
+void (*SHA1_Init_func)(SHA_CTX_union *ctx) = (void *)SHA1DCInit;
+void (*SHA1_Update_func)(SHA_CTX_union *ctx, const void *pointer, size_t size) =
+	(void *)git_SHA1DCUpdate;
+int (*SHA1_Final_func)(unsigned char sha1[20], SHA_CTX_union *ctx) =
+	(void *)git_SHA1DCFinal;
+
+void toggle_sha1dc(int enable)
+{
+	if (enable) {
+		SHA1_Init_func = (void *)SHA1DCInit;
+		SHA1_Update_func = (void *)git_SHA1DCUpdate;
+		SHA1_Final_func = (void *)git_SHA1DCFinal;
+	} else {
+		SHA1_Init_func = (void *)SHA1_Init;
+		SHA1_Update_func = (void *)SHA1_Update;
+		SHA1_Final_func = (void *)SHA1_Final;
+	}
+}
+#endif
diff --git a/sha1dc/sha1.h b/sha1dc/sha1.h
index bd8bd928fb3..243c2fe0b6b 100644
--- a/sha1dc/sha1.h
+++ b/sha1dc/sha1.h
@@ -110,10 +110,26 @@ void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
  */
 void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *data, unsigned long len);
 
+#ifdef SHA1_DC_AND_OPENSSL
+extern void toggle_sha1dc(int enable);
+
+typedef union {
+	SHA1_CTX dc;
+	SHA_CTX openssl;
+} SHA_CTX_union;
+extern void (*SHA1_Init_func)(SHA_CTX_union *ctx);
+extern void (*SHA1_Update_func)(SHA_CTX_union *ctx, const void *data, size_t len);
+extern int (*SHA1_Final_func)(unsigned char sha1[20], SHA_CTX_union *ctx);
+#define platform_SHA_CTX SHA_CTX_union
+#define platform_SHA1_Init SHA1_Init_func
+#define platform_SHA1_Update SHA1_Update_func
+#define platform_SHA1_Final SHA1_Final_func
+#else
 #define platform_SHA_CTX SHA1_CTX
 #define platform_SHA1_Init SHA1DCInit
 #define platform_SHA1_Update git_SHA1DCUpdate
 #define platform_SHA1_Final git_SHA1DCFinal
+#endif
 
 #if defined(__cplusplus)
 }
-- 
2.12.1.windows.1




* [PATCH 3/7] config: add the core.enablesha1dc setting
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
  2017-03-24 23:24 ` [PATCH 1/7] sha1dc: safeguard against outside definitions of BIGENDIAN Johannes Schindelin
  2017-03-24 23:24 ` [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL Johannes Schindelin
@ 2017-03-24 23:24 ` Johannes Schindelin
  2017-03-24 23:25 ` [PATCH 4/7] t0013: do not skip the entire file wholesale without DC_SHA1 Johannes Schindelin
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:24 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

When compiled with DC_AND_OPENSSL_SHA1, this new config setting allows
switching from the collision-detecting SHA-1 routines (SHA1DC) to the
noticeably faster OpenSSL ones.

The default is still to detect collisions.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 config.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/config.c b/config.c
index 1a4d85537b3..f94e2963e57 100644
--- a/config.c
+++ b/config.c
@@ -1205,6 +1205,14 @@ static int git_default_core_config(const char *var, const char *value)
 		return 0;
 	}
 
+	if (!strcmp(var, "core.enablesha1dc")) {
+#ifdef DC_AND_OPENSSL_SHA1
+		toggle_sha1dc(git_config_bool(var, value));
+#else
+		warning("Ignoring core.enablesha1dc='%s'", value);
+#endif
+	}
+
 	/* Add other config variables here and to Documentation/config.txt. */
 	return 0;
 }
-- 
2.12.1.windows.1




* [PATCH 4/7] t0013: do not skip the entire file wholesale without DC_SHA1
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
                   ` (2 preceding siblings ...)
  2017-03-24 23:24 ` [PATCH 3/7] config: add the core.enablesha1dc setting Johannes Schindelin
@ 2017-03-24 23:25 ` Johannes Schindelin
  2017-03-24 23:25 ` [PATCH 5/7] t0013: test DC_AND_OPENSSL_SHA1, too Johannes Schindelin
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:25 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

So far, there is only one test case in that script, and that case indeed
requires that the code was compiled with the DC_SHA1 flag.

However, we are about to add another test case to verify that the
DC_AND_OPENSSL_SHA1 flag works correctly, too.

So let's refactor the code a little.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 t/t0013-sha1dc.sh | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/t/t0013-sha1dc.sh b/t/t0013-sha1dc.sh
index 6d655cb161b..435a96d6108 100755
--- a/t/t0013-sha1dc.sh
+++ b/t/t0013-sha1dc.sh
@@ -4,13 +4,9 @@ test_description='test sha1 collision detection'
 . ./test-lib.sh
 TEST_DATA="$TEST_DIRECTORY/t0013"
 
-if test -z "$DC_SHA1"
-then
-	skip_all='skipping sha1 collision tests, DC_SHA1 not set'
-	test_done
-fi
+test -z "$DC_SHA1" || test_set_prereq DC_SHA1
 
-test_expect_success 'test-sha1 detects shattered pdf' '
+test_expect_success DC_SHA1 'test-sha1 detects shattered pdf' '
 	test_must_fail test-sha1 <"$TEST_DATA/shattered-1.pdf" 2>err &&
 	test_i18ngrep collision err &&
 	grep 38762cf7f55934b34d179ae6a4c80cadccbb7f0a err
-- 
2.12.1.windows.1




* [PATCH 5/7] t0013: test DC_AND_OPENSSL_SHA1, too
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
                   ` (3 preceding siblings ...)
  2017-03-24 23:25 ` [PATCH 4/7] t0013: do not skip the entire file wholesale without DC_SHA1 Johannes Schindelin
@ 2017-03-24 23:25 ` Johannes Schindelin
  2017-03-24 23:28 ` [PATCH 6/7] mingw: enable DC_AND_OPENSSL_SHA1 by default Johannes Schindelin
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:25 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 Makefile             |  1 +
 t/helper/test-sha1.c | 10 ++++++++++
 t/t0013-sha1dc.sh    | 10 ++++++++++
 3 files changed, 21 insertions(+)

diff --git a/Makefile b/Makefile
index 3e181d2f0e2..0b581357625 100644
--- a/Makefile
+++ b/Makefile
@@ -2251,6 +2251,7 @@ GIT-BUILD-OPTIONS: FORCE
 	@echo NO_UNIX_SOCKETS=\''$(subst ','\'',$(subst ','\'',$(NO_UNIX_SOCKETS)))'\' >>$@+
 	@echo PAGER_ENV=\''$(subst ','\'',$(subst ','\'',$(PAGER_ENV)))'\' >>$@+
 	@echo DC_SHA1=\''$(subst ','\'',$(subst ','\'',$(DC_SHA1)))'\' >>$@+
+	@echo DC_AND_OPENSSL_SHA1=\''$(subst ','\'',$(subst ','\'',$(DC_AND_OPENSSL_SHA1)))'\' >>$@+
 ifdef TEST_OUTPUT_DIRECTORY
 	@echo TEST_OUTPUT_DIRECTORY=\''$(subst ','\'',$(subst ','\'',$(TEST_OUTPUT_DIRECTORY)))'\' >>$@+
 endif
diff --git a/t/helper/test-sha1.c b/t/helper/test-sha1.c
index a1c13f54eca..27ce8869e51 100644
--- a/t/helper/test-sha1.c
+++ b/t/helper/test-sha1.c
@@ -8,6 +8,16 @@ int cmd_main(int ac, const char **av)
 	int binary = 0;
 	char *buffer;
 
+	if (ac > 1 && !strcmp(av[1], "--disable-sha1dc")) {
+#ifdef SHA1_DC_AND_OPENSSL
+		toggle_sha1dc(0);
+#else
+		die("Not compiled with DC_AND_OPENSSL_SHA1");
+#endif
+		ac--;
+		av++;
+	}
+
 	if (ac == 2) {
 		if (!strcmp(av[1], "-b"))
 			binary = 1;
diff --git a/t/t0013-sha1dc.sh b/t/t0013-sha1dc.sh
index 435a96d6108..2b529b31b4e 100755
--- a/t/t0013-sha1dc.sh
+++ b/t/t0013-sha1dc.sh
@@ -5,6 +5,10 @@ test_description='test sha1 collision detection'
 TEST_DATA="$TEST_DIRECTORY/t0013"
 
 test -z "$DC_SHA1" || test_set_prereq DC_SHA1
+test -z "$DC_AND_OPENSSL_SHA1" || {
+	test_set_prereq DC_AND_OPENSSL_SHA1
+	test_set_prereq DC_SHA1
+}
 
 test_expect_success DC_SHA1 'test-sha1 detects shattered pdf' '
 	test_must_fail test-sha1 <"$TEST_DATA/shattered-1.pdf" 2>err &&
@@ -12,4 +16,10 @@ test_expect_success DC_SHA1 'test-sha1 detects shattered pdf' '
 	grep 38762cf7f55934b34d179ae6a4c80cadccbb7f0a err
 '
 
+test_expect_success DC_AND_OPENSSL_SHA1 'sha1dc can be turned off' '
+	test-sha1 --disable-sha1dc <"$TEST_DATA/shattered-1.pdf" 2>err &&
+	! test_i18ngrep collision err &&
+	! grep 38762cf7f55934b34d179ae6a4c80cadccbb7f0a err
+'
+
 test_done
-- 
2.12.1.windows.1




* [PATCH 6/7] mingw: enable DC_AND_OPENSSL_SHA1 by default
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
                   ` (4 preceding siblings ...)
  2017-03-24 23:25 ` [PATCH 5/7] t0013: test DC_AND_OPENSSL_SHA1, too Johannes Schindelin
@ 2017-03-24 23:28 ` Johannes Schindelin
  2017-03-24 23:28 ` [PATCH 7/7] p0013: new test to compare SHA1DC vs OpenSSL Johannes Schindelin
  2017-03-25  6:37 ` [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Junio C Hamano
  7 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:28 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 config.mak.uname | 1 +
 1 file changed, 1 insertion(+)

diff --git a/config.mak.uname b/config.mak.uname
index 399fe192719..a16e9ef0551 100644
--- a/config.mak.uname
+++ b/config.mak.uname
@@ -552,6 +552,7 @@ else
 		USE_LIBPCRE= YesPlease
 		NO_CURL =
 		USE_NED_ALLOCATOR = YesPlease
+		DC_AND_OPENSSL_SHA1 = YesPlease
 	else
 		COMPAT_CFLAGS += -D__USE_MINGW_ANSI_STDIO
 		NO_CURL = YesPlease
-- 
2.12.1.windows.1




* [PATCH 7/7] p0013: new test to compare SHA1DC vs OpenSSL
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
                   ` (5 preceding siblings ...)
  2017-03-24 23:28 ` [PATCH 6/7] mingw: enable DC_AND_OPENSSL_SHA1 by default Johannes Schindelin
@ 2017-03-24 23:28 ` Johannes Schindelin
  2017-03-25  6:37 ` [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Junio C Hamano
  7 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-24 23:28 UTC (permalink / raw)
  To: git; +Cc: Junio C Hamano

To demonstrate the need for the core.enableSHA1DC knob, this test
compares the performance of the collision-detecting SHA-1 implementation
with OpenSSL's (which does not detect attempted collision attacks).

The payload size of 300MB was not concocted from thin air: it is based
on the massive Windows monorepo, whose index file weighs in at roughly
that size (and which is SHA-1'ed upon every single read and write).

On this developer's machine, this comparison shows a hefty difference:

	0013.1: calculate SHA-1 for 300MB (SHA1DC)    3.03(0.03+0.17)
	0013.2: calculate SHA-1 for 300MB (OpenSSL)   0.58(0.06+0.16)

Not only is the ~6x slower performance a pretty tall order; the
absolute numbers themselves speak a very clear language: having to wait
one second every time a file is `git add`ed is noticeable, but one can
handwave it away. Having to wait six seconds (3 to read the index, a
fraction of a millisecond to hash the new contents and update the
in-memory index, then 3 seconds to write out the index) is outright
annoying. And unnecessary, too: the content of the index is never
crafted to cause SHA-1 collisions.

Obviously, this test requires that Git was built with the new
DC_AND_OPENSSL_SHA1 make flag; it is skipped otherwise.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
---
 t/perf/p0013-sha1dc.sh | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
 create mode 100644 t/perf/p0013-sha1dc.sh

diff --git a/t/perf/p0013-sha1dc.sh b/t/perf/p0013-sha1dc.sh
new file mode 100644
index 00000000000..e08473ac969
--- /dev/null
+++ b/t/perf/p0013-sha1dc.sh
@@ -0,0 +1,23 @@
+#!/bin/sh
+
+test_description="Tests performance of SHA1DC vs OpenSSL"
+
+. ./perf-lib.sh
+
+test -n "$DC_AND_OPENSSL_SHA1" || {
+	skip_all='DC_AND_OPENSSL_SHA1 required for this test'
+	test_done
+}
+
+test_perf 'calculate SHA-1 for 300MB (SHA1DC)' '
+	dd if=/dev/zero bs=1M count=300 |
+	test-sha1
+'
+
+test_perf 'calculate SHA-1 for 300MB (OpenSSL)' '
+	dd if=/dev/zero bs=1M count=300 |
+	test-sha1 --disable-sha1dc
+'
+
+test_done
+
-- 
2.12.1.windows.1


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
                   ` (6 preceding siblings ...)
  2017-03-24 23:28 ` [PATCH 7/7] p0013: new test to compare SHA1DC vs OpenSSL Johannes Schindelin
@ 2017-03-25  6:37 ` Junio C Hamano
  2017-03-25 16:58   ` Junio C Hamano
                     ` (2 more replies)
  7 siblings, 3 replies; 23+ messages in thread
From: Junio C Hamano @ 2017-03-25  6:37 UTC (permalink / raw)
  To: Johannes Schindelin; +Cc: git

Johannes Schindelin <johannes.schindelin@gmx.de> writes:

> - the most important part will be the patch turning core.enableSHA1DC
>   into a tristate: "externalOnly" or "smart" or "auto" or something
>   indicating that it switches on collision detection only for commands
>   that accept objects from an outside source into the local repository,
>   such as fetch, fast-import, etc

There are different uses of SHA-1 hashing in Git, and I do agree
that depending on the use, some of them do not need the overhead for
the collision-attack detection.  As DC_SHA1 with attack detection
disabled may still be slower than other implementations, it may make
sense to have a compile-time option to use DC_SHA1 for places that
need protection, and another implementation [*1*] for places that
don't.

I think the places that MUST use the DC variant are those that hash
content for a single object to compute its object name.  "git add"
[*2*] that creates a new loose object, "git index-pack" that reads
the incoming pack stream over the wire, reconstitutes each object's data
while resolving deltas and hashes them to get their names to construct
the .idx file and "git unpack-objects" that does the same but
explodes the pack contents into loose objects, "git write-tree" that
creates a new tree object given the contents of the index, etc.

One notable exception to the above is "update-index --refresh".  We
already have contents in the index and in the object store, and we
are hashing the contents in the working tree to see if it hashes to
the same value.  When the hash does not match, it won't go into the
object store.  When the hash does match, it either is indeed the
same content (i.e. no collision), in which case we earlier must have
done the collision-attack detecting hash when we added the object to
the object store.  Or the object we have in the object store and
what is in the working tree are different contents that hash to the
same name, in which case the user already has a colliding pair and it
is too late to invoke the collision-attack detecting variant ;-)

The running checksum over the whole file csum-file.c computes does
not have to be the collision-attack detecting kind.  This is the
hash at the end of various files like the index, .pack .idx, etc.
These are used to protect us against bit-flipping disks and we are
not fighting with a clever disk that can mount a collision attack.  For
that matter, some of these checksums do not even have to be SHA-1.
If one hacks his own Git to replace the SHA-1 checksum at the end of the
index file with something faster and weaker and uses it in one's
repository, nobody else would notice nor care.  The same thing can
be said for the .idx file.  The one at the end of .pack does get
checked at the receiving end when it comes over the wire, so it MUST
be SHA-1, but it does not have to be hashed with collision-attack
detection.
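
A minimal sketch of that trailing-checksum idea (illustration only; Git's
csum-file.c streams the hash while writing rather than buffering like
this, and the example needs OpenSSL's libcrypto to build):

  #include <openssl/sha.h>
  #include <stdio.h>
  #include <string.h>

  /* last SHA_DIGEST_LENGTH bytes are the SHA-1 of everything before them */
  static int trailer_ok(const unsigned char *file, size_t len)
  {
  	unsigned char expect[SHA_DIGEST_LENGTH];

  	if (len < SHA_DIGEST_LENGTH)
  		return 0;
  	SHA1(file, len - SHA_DIGEST_LENGTH, expect);
  	return !memcmp(expect, file + len - SHA_DIGEST_LENGTH, SHA_DIGEST_LENGTH);
  }

  int main(void)
  {
  	unsigned char buf[64 + SHA_DIGEST_LENGTH] = "fake index contents";
  	size_t payload = 64;

  	SHA1(buf, payload, buf + payload);	/* append the trailer */
  	printf("intact:     %d\n", trailer_ok(buf, payload + SHA_DIGEST_LENGTH));
  	buf[3] ^= 1;				/* simulate a flipped bit */
  	printf("after flip: %d\n", trailer_ok(buf, payload + SHA_DIGEST_LENGTH));
  	return 0;
  }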

The rerere database is indexed with SHA-1 of (normalized) conflict
text.  This does not have to be hashed with the collision-attack
detection logic.  Thanks to recent update that allows multiple pairs
of conflict and its resolution, the subsystem is prepared to see two
conflicts that share the same hash already (for completely different
reasons).

The hash that names a packfile is constructed by sorting all the
names of the objects contained in the packfile and running SHA-1
hash over it.  I think this MUST be hashed with collision-attack
detection.  A malicious site can feed you a packfile that contains
objects the site crafts so that the sorted object names would result
in a collision-attack, ending up with one pack that contains a set
of objects different from another pack that happens to have the same
packname, causing Git to say "Ah, this new pack must have the same
set of objects as the pack we already have" and discard it,
resulting in lost objects and a corrupt repository with missing
objects.


[Footnote]

*1* I think this PREVIEW hardcodes OpenSSL only for illustration and
    that is OK for a preview.  Given the recent news on licensing,
    however, if we want to pursue this dual hashing scheme, we must
    consider allowing other implementations as well in the final
    form.

*2* In this paragraph, whenever "git" command is named, I mean both
    the command and its underlying machinery.  When I say "git
    write-tree", for example, write_index_as_tree() obviously is
    included.


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-25  6:37 ` [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Junio C Hamano
@ 2017-03-25 16:58   ` Junio C Hamano
  2017-03-26  6:18   ` Jeff King
  2017-03-29 20:02   ` Johannes Schindelin
  2 siblings, 0 replies; 23+ messages in thread
From: Junio C Hamano @ 2017-03-25 16:58 UTC (permalink / raw)
  To: Johannes Schindelin; +Cc: git

Junio C Hamano <gitster@pobox.com> writes:

> The hash that names a packfile is constructed by sorting all the
> names of the objects contained in the packfile and running SHA-1
> hash over it.

Sorry, but I need to make a correction here.

This "SHA-1 over sorted object names" is a description of an ancient
behaviour before 1190a1ac ("pack-objects: name pack files after
trailer hash", 2013-12-05) happened.  These days the pack name is
the same as the csum-file checksum of the .pack contents.

This however does not change the fact that the site that feeds us a
packfile is in control of the hash, hence the name we give to the
resulting packfile.  Unlike the use of csum-file for the trailing
hash for the index file, which is only to protect against bit
flipping, "SHA-1 over .pack contents" done here is used to come up
with a unique name used for identification and deduplication (of the
packfile, not of individual objects), and the need for protection
against collision attack attempts does not change between the
implementation before 1190a1ac and after that commit.


* Re: [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL
  2017-03-24 23:24 ` [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL Johannes Schindelin
@ 2017-03-25 19:51   ` Ævar Arnfjörð Bjarmason
  2017-03-30 16:16   ` Junio C Hamano
  1 sibling, 0 replies; 23+ messages in thread
From: Ævar Arnfjörð Bjarmason @ 2017-03-25 19:51 UTC (permalink / raw)
  To: Johannes Schindelin; +Cc: Git Mailing List, Junio C Hamano

On Sat, Mar 25, 2017 at 12:24 AM, Johannes Schindelin
<johannes.schindelin@gmx.de> wrote:
> The collision detection is not for free, though: when using the SHA1DC
> code, calculating the SHA-1 takes substantially longer than using
> OpenSSL's (in some case hardware-accelerated) SHA1-routines, and this
> happens even when switching off the collision detection in SHA1DC's code.

[...]in some case*s*[...]


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-25  6:37 ` [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Junio C Hamano
  2017-03-25 16:58   ` Junio C Hamano
@ 2017-03-26  6:18   ` Jeff King
  2017-03-26 23:16     ` Junio C Hamano
  2017-03-29 20:02   ` Johannes Schindelin
  2 siblings, 1 reply; 23+ messages in thread
From: Jeff King @ 2017-03-26  6:18 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Johannes Schindelin, git

On Fri, Mar 24, 2017 at 11:37:54PM -0700, Junio C Hamano wrote:

> The hash that names a packfile is constructed by sorting all the
> names of the objects contained in the packfile and running SHA-1
> hash over it.  I think this MUST be hashed with collision-attack
> detection.  A malicious site can feed you a packfile that contains
> objects the site crafts so that the sorted object names would result
> in a collision-attack, ending up with one pack that contains a set
> of objects different from another pack that happens to have the same
> packname, causing Git to say "Ah, this new pack must have the same
> set of objects as the pack we already have" and discard it,
> resulting in lost objects and a corrupt repository with missing
> objects.

I don't think this case really matters for collision detection. What's
important is what Git does when it receives a brand-new packfile that
would overwrite an existing one. It _should_ keep the old one, under the
usual "existing data wins" rule.

It should be easy to test, though:

  $ git init tmp && cd tmp
  $ git commit --allow-empty -m foo
  $ git gc

  $ touch -d yesterday .git/objects/pack/*
  $ ls -l .git/objects/pack
  -r--r--r-- 1 peff peff 1128 Mar 25 02:10 pack-7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b.idx
  -r--r--r-- 1 peff peff  153 Mar 25 02:10 pack-7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b.pack

  $ git index-pack --stdin <.git/objects/pack/*.pack
  pack	7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b

  $ ls -l .git/objects/pack
  -r--r--r-- 1 peff peff 1128 Mar 25 02:10 pack-7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b.idx
  -r--r--r-- 1 peff peff  153 Mar 25 02:10 pack-7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b.pack

Looks like the timestamps were retained. <phew> And if we use strace, we
can see what happens:

  $ strace git index-pack --stdin <.git/objects/pack/*.pack
  link(".git/objects/pack/tmp_pack_YSrdWU", ".git/objects/pack/pack-7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b.pack") = -1 EEXIST (File exists)
  unlink(".git/objects/pack/tmp_pack_YSrdWU") = 0
  link(".git/objects/pack/tmp_idx_O94NNU", ".git/objects/pack/pack-7e9d64ac27adc9ce1b12774dd287ff9bd8a9345b.idx") = -1 EEXIST (File exists)
  unlink(".git/objects/pack/tmp_idx_O94NNU") = 0

This is due to the link()/EEXIST handling in finalize_object_file. It
has a FIXME for a collision check, so we could actually detect at that
point whether we have a real collision, or if the other side just
happened to send us the same pack.
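
A much-simplified sketch of that pattern (not the actual
finalize_object_file(), and with made-up file names):

  #include <errno.h>
  #include <stdio.h>
  #include <unistd.h>

  static int move_into_place(const char *tmpfile, const char *final_name)
  {
  	if (!link(tmpfile, final_name)) {
  		unlink(tmpfile);
  		return 0;	/* installed under the final name */
  	}
  	if (errno == EEXIST) {
  		/* the FIXME: ideally compare contents here instead of
  		 * assuming the existing file is identical */
  		unlink(tmpfile);
  		return 0;	/* keep what we already had */
  	}
  	perror("link");
  	return -1;
  }

  int main(void)
  {
  	FILE *f = fopen("tmp_pack_demo", "w");	/* stand-in temporary file */

  	if (!f)
  		return 1;
  	fputs("demo pack data\n", f);
  	fclose(f);
  	return move_into_place("tmp_pack_demo", "pack-demo.pack");
  }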

I wouldn't be surprised if the dumb-http walker is not so careful,
though (but the solution is to make it careful, not to worry about
a weak hash algorithm).

The rest of your email all made sense to me.

-Peff


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-26  6:18   ` Jeff King
@ 2017-03-26 23:16     ` Junio C Hamano
  2017-03-27  1:11       ` Jeff King
  0 siblings, 1 reply; 23+ messages in thread
From: Junio C Hamano @ 2017-03-26 23:16 UTC (permalink / raw)
  To: Jeff King; +Cc: Johannes Schindelin, git

Jeff King <peff@peff.net> writes:

> On Fri, Mar 24, 2017 at 11:37:54PM -0700, Junio C Hamano wrote:
>
>> The hash that names a packfile is constructed by sorting all the
>> names of the objects contained in the packfile and running SHA-1
>> hash over it.  I think this MUST be hashed with collision-attack
>> detection.  A malicious site can feed you a packfile that contains
>> objects the site crafts so that the sorted object names would result
>> in a collision-attack, ending up with one pack that contains a set
>> of objects different from another pack that happens to have the same
>> packname, causing Git to say "Ah, this new pack must have the same
>> set of objects as the pack we already have" and discard it,
>> resulting in lost objects and a corrupt repository with missing
>> objects.
>
> I don't think this case really matters for collision detection. What's
> important is what Git does when it receives a brand-new packfile that
> would overwrite an existing one. It _should_ keep the old one, under the
> usual "existing data wins" rule.

If a malicious site can craft two packfiles that hash to the same,
then it can first feed one against a fetch request, then feed the
other one when a later fetch request comes, and then the later pack
is discarded by the "existing data wins" rule.  Even though this
later pack may have all the objects necessary to complete the refs
this later fetch request asked for, and an earlier pack that
happened to have the same pack trailer hash doesn't have these
necessary objects, the receiver ends up discarding this necessary
pack.  The end result is that the receiver's repository is now
corrupt, lacking some objects.

It is a different matter if such an "attack" is a useful one to
conduct from the attacker's point of view.

I highlighted this case as notable because the way the trailing hash
is also used as the name of packfile makes this fall into the same
category as object hash, in that the hash is used for identification
and deduplication (because we have a rule that says there can be
only one _thing_ that hashes to one value), for which we do want to
use the collision-attack detecting kind of hashing, even though it
otherwise should fall into the other category (i.e. use of csum-file
API to compute trailer hash), where the hash is used merely for
bit-flip detection (we are perfectly OK if you have multiple index
files with different contents that happen to have the same hash in
the trailer, because the hash is not used for identification and
deduplication).


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-26 23:16     ` Junio C Hamano
@ 2017-03-27  1:11       ` Jeff King
  2017-03-27  6:07         ` Junio C Hamano
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff King @ 2017-03-27  1:11 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Johannes Schindelin, git

On Sun, Mar 26, 2017 at 04:16:06PM -0700, Junio C Hamano wrote:

> > I don't think this case really matters for collision detection. What's
> > important is what Git does when it receives a brand-new packfile that
> > would overwrite an existing one. It _should_ keep the old one, under the
> > usual "existing data wins" rule.
> 
> If a malicious site can craft two packfiles that hash to the same,
> then it can first feed one against a fetch request, then feed the
> other one when a later fetch request comes, and then the later pack
> is discarded by the "existing data wins" rule.  Even though this
> later pack may have all the objects necessary to complete the refs
> this later fetch request asked for, and an earlier pack that
> happened to have the same pack trailer hash doesn't have these
> necessary objects, the receiver ends up discarding this necessary
> pack.  The end result is that the receiver's repository is now
> corrupt, lacking some objects.

No, I don't think so. We don't trust the trailer hash for anything to do
with corruption; we actually inflate the objects and see which ones we
got. So the victim will notice immediately that what the attacker sent
it is insufficient to complete the fetch (or push), and will refuse to
update the refs. The fetch transfers, but nobody gets corrupted.

Or another way to think about it: we don't care what the trailer hash
is. It comes from the untrusted attacker in the first place. So this
attack is no different than the other side just sending a bogus pack
that omits an object (but has a valid checksum).  The fact that it
happens to use the same on-disk name as something we already have is
irrelevant. We'll still notice that we don't have everything we need to
update the refs.

So the best you could do is probably a mini-DoS. Something like:

  0. Attacker generates two colliding packs, A and B. Only pack B has
     object X.

  1. Somehow, the attacker convinces upstream to take history with X
     _and_ to repack such that serving a fetch will yield pack B.

  2. You fetch from the attacker, who feeds you A.

  3. Now you fetch from upstream, who tries to send you B. You refuse
     it, thinking you already have it (but you really have A). The fetch
     fails because you are missing X.

That situation persists until either you repack (at which point you no
longer have A, unless your repack happens to generate the exact same
pack again!), or upstream history moves (or is repacked), causing them
to send a different pack.

I think arranging for (1) is exceedingly unlikely, and the fact that
your investment in (0) is lost the moment either the victim or the
upstream changes a single bit makes it an unlikely attack.

> I highlighted this case as notable because the way the trailing hash
> is also used as the name of packfile makes this fall into the same
> category as object hash, in that the hash is used for identification
> and deduplication (because we have a rule that says there can be
> only one _thing_ that hashes to one value), for which we do want to
> use the collision-attack detecting kind of hashing, even though it
> otherwise should fall into the other category (i.e. use of csum-file
> API to compute trailer hash), where the hash is used merely for
> bit-flip detection (we are perfectly OK if you have multiple index
> files with different contents that happen to have the same hash in
> the trailer, because the hash is not used for identification and
> deduplication).

If we really wanted to address this (and I remain unconvinced that it's
worth caring about), the simplest thing is not to do collision
detection, but simply to give the packs a less predictable name. There
is nothing at all that cares about the filenames beyond the syntactic
"pack-[0-9a-f]{40}.pack", and it does not need to match the trailer
checksum (and as you know, we switched it recently without ill effect).

So we could literally just switch to writing out pack-$x.pack, where $x
is something like the 160-bit truncated sha-256, and the readers would
be none the wiser.
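
Hedged sketch of that idea (the helper name and the use of OpenSSL's
one-shot SHA256() are just illustrative choices here, not a concrete
proposal):

  #include <openssl/sha.h>
  #include <stdio.h>
  #include <string.h>

  /* derive a "pack-<40 hex digits>.pack" name from the first 160 bits
   * of SHA-256 over the pack contents; out must hold >= 51 bytes */
  static void pack_name_from_sha256(const unsigned char *data, size_t len, char *out)
  {
  	unsigned char digest[SHA256_DIGEST_LENGTH];
  	int i;

  	SHA256(data, len, digest);
  	strcpy(out, "pack-");
  	for (i = 0; i < 20; i++)
  		sprintf(out + 5 + 2 * i, "%02x", digest[i]);
  	strcpy(out + 45, ".pack");
  }

  int main(void)
  {
  	const unsigned char fake_pack[] = "not a real packfile, just demo bytes";
  	char name[51];

  	pack_name_from_sha256(fake_pack, sizeof(fake_pack), name);
  	printf("%s\n", name);	/* syntactically pack-[0-9a-f]{40}.pack */
  	return 0;
  }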

-Peff


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-27  1:11       ` Jeff King
@ 2017-03-27  6:07         ` Junio C Hamano
  2017-03-27  7:09           ` Jeff King
  0 siblings, 1 reply; 23+ messages in thread
From: Junio C Hamano @ 2017-03-27  6:07 UTC (permalink / raw)
  To: Jeff King; +Cc: Johannes Schindelin, git

Jeff King <peff@peff.net> writes:

>> If a malicious site can craft two packfiles that hash to the same,
>> then it can first feed one against a fetch request, then feed the
>> other one when a later fetch request comes, and then the later pack
>> is discarded by the "existing data wins" rule.  Even though this
>> later pack may have all the objects necessary to complete the refs
>> this later fetch request asked for, and an earlier pack that
>> happened to have the same pack trailer hash doesn't have these
>> necessary objects, the receiver ends up discarding this necessary
>> pack.  The end result is that the receiver's repository is now
>> corrupt, lacking some objects.
>
> No, I don't think so. We don't trust the trailer hash for anything to do
> with corruption; we actually inflate the objects and see which ones we
> got. So the victim will notice immediately that what the attacker sent
> it is insufficient to complete the fetch (or push), and will refuse to
> update the refs. The fetch transfers, but nobody gets corrupted.

In the scenario I was presenting, both the original fetch that gives
one packdata and the later fetch that gives another packdata (which
happens to share the csum-file trailing checksum) satisfy the "does
the new pack give us enough objects to really complete the tips of
refs?" check.  The second fetch transfers, we validate the packdata
using index-pack (we may pass --check-self-contained-and-connected
and we would pass --strict if transfer-fsck is set), we perhaps even
store it in a quarantine area while adding it to the list of in-core
packs, make sure everything is now connected from the refs using
pre-existing packs and this new pack.  The index-pack may see
everything is good and then would report the resulting pack name
back to index_pack_lockfile() called by fetch-pack.c::get_pack().

That was the scenario I had in mind.  Not "bogus sender throws a
corrupt pack at you" case, where we check the integrity and
connectivity of the objects ourselves.

And the trailer hash the sender added at the end of the pack data
stream does not have to come into the picture.  When we compute that
ourselves for the received pack data, because the sender is trying a
successful collision attack (they gave us one pack that hashes to
a certain value earlier; now they are giving us the other one), we
would end up computing the same.

But even though both of these packs _are_ otherwise valid (in the
sense that they satisfactorily transfer objects necessary to make
the refs that were fetched complete), because we name the packs
after the trailer hash and we cannot have two files with the same
name, we end up throwing away the later one.

As I said, it is a totally different matter if this attack scenario
is a practical threat.  For one thing, it is probably harder than
just applying the straight "shattered" attack to create a single
object collision--you have to make two packs share the same trailing
hash _and_ make sure that both of them record data for valid
objects.  But I am not convinced that it would be much harder
(e.g. I understand that zlib deflate can be told not to attempt
compression at all, and the crafted garbage used in the middle part
of the "shattered" attack can be such a blob object expressed as a
base object--once the attacker has two such packfiles that hash the
same, two object names for these garbage blobs can be used to
present two versions of the values for a ref to be fetched by these
two fetch requests).


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-27  6:07         ` Junio C Hamano
@ 2017-03-27  7:09           ` Jeff King
  2017-03-27 17:15             ` Junio C Hamano
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff King @ 2017-03-27  7:09 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: Johannes Schindelin, git

On Sun, Mar 26, 2017 at 11:07:02PM -0700, Junio C Hamano wrote:

> > No, I don't think so. We don't trust the trailer hash for anything to do
> > with corruption; we actually inflate the objects and see which ones we
> > got. So the victim will notice immediately that what the attacker sent
> > it is insufficient to complete the fetch (or push), and will refuse to
> > update the refs. The fetch transfers, but nobody gets corrupted.
> 
> In the scenario I was presenting, both the original fetch that gives
> one packdata and the later fetch that gives another packdata (which
> happens to share the csum-file trailing checksum) satisfy the "does
> the new pack give us enough objects to really complete the tips of
> refs?" check.

Right, my point was that we do that check _after_ throwing away the
duplicate-named pack. So you cannot fool that check, update the ref, and
then throw away the pack to get a corrupt receiver. The receiver throws
away the pack first, then says "hey, I don't have all the objects" and
aborts.

That said...

> The second fetch transfers, we validate the packdata using index-pack
> (we may pass --check-self-contained-and-connected and we would pass
> --strict if transfer-fsck is set), we perhaps even store it in
> quarantine area while adding it to the list of in-core packs, make
> sure everything is now connected from the refs using pre-existing
> packs and this new pack.  The index-pack may see everything is good
> and then would report the resulting pack name back to
> index_pack_lockfile() called by fetch-pack.c::get_pack().

These are interesting corner cases. We only use
--check-self-contained-and-connected with clones, but you may still have
packs from an alternate during a clone (although I think the two packs
would be allowed to co-exist indefinitely, then).

The quarantine case is more interesting. The two packs _do_ co-exist
while we do the connectivity check there, and then afterwards we can
have only one. So that reversal of operations introduces a problem, and
you could end up with a lasting corruption as a result.

> But even though both of these packs _are_ otherwise valid (in the
> sense that they satisfactorily transfer objects necessary to make
> the refs that were fetched complete), because we name the packs
> after the trailer hash and we cannot have two files with the same
> name, we end up throwing away the later one.

I kind of wonder if we should simply allow potential duplicates to
co-exist. The pack names really aren't used for duplicate suppression in
any meaningful sense. We effectively use them as UUIDs so that each new
pack gets a unique name without having to do any locking or other
coordination. It would not be unreasonable to say "oops, 1234abcd
already exists; I'll just increment and call this new one 1234abce". The
two presumably-the-same packs would then co-exist until the new "repack
-a" removes duplicates (not just at the pack level, but at the object
level).

The biggest problem there is that "claiming" a pack name is not
currently atomic. We just do it blindly. So switching to some other
presumed-unique UUID might actually be easier (whether SHA-256 of the
pack contents or some other method).

> As I said, it is a totally different matter if this attack scenario
> is a practical threat.  For one thing, it is probably harder than
> just applying the straight "shattered" attack to create a single
> object collision--you have to make two packs share the same trailing
> hash _and_ make sure that both of them record data for valid
> objects.  But I am not convinced that it would be much harder
> (e.g. I understand that zlib deflate can be told not to attempt
> compression at all, and the crafted garbage used in the middle part
> of the "shattered" attack can be such a blob object expressed as a
> base object--once the attacker has two such packfiles that hash the
> same, two object names for these garbage blobs can be used to
> present two versions of the values for a ref to be fetched by these
> two fetch requests).

Yeah, I think we can assume it will be possible with SHAttered levels of
effort. An attacker can use it to create a persistent corruption by
having somebody fetch from them twice. So not really that interesting an
attack, but it is something. I still think that ditching SHA-1 for the
naming is probably a better fix than worrying about SHA-1 collisions.

-Peff


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-27  7:09           ` Jeff King
@ 2017-03-27 17:15             ` Junio C Hamano
  0 siblings, 0 replies; 23+ messages in thread
From: Junio C Hamano @ 2017-03-27 17:15 UTC (permalink / raw)
  To: Jeff King; +Cc: Johannes Schindelin, git

Jeff King <peff@peff.net> writes:

> Yeah, I think we can assume it will be possible with SHAttered levels of
> effort. An attacker can use it to create a persistent corruption by
> having somebody fetch from them twice. So not really that interesting an
> attack, but it is something. I still think that ditching SHA-1 for the
> naming is probably a better fix than worrying about SHA-1 collisions.

Yes, I agree with that part.  

Our trailer checksum happens to be SHA-1 mostly because the code was
available, not because they need to be a crypto-strong hash.  It can
safely be changed to something other than SHA-1 that is much faster,
if that is desired, when it is used only for bit-flip detection of
local files like the index file.

I also agree that changing the naming scheme (e.g. use the "hash" as
a hash to choose hash-bucket but accept the fact that hashes can
collide) is a better solution, if this "packname can collide" were
to become a real problem.

Thanks.



* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-25  6:37 ` [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Junio C Hamano
  2017-03-25 16:58   ` Junio C Hamano
  2017-03-26  6:18   ` Jeff King
@ 2017-03-29 20:02   ` Johannes Schindelin
  2017-03-30  0:31     ` Junio C Hamano
  2 siblings, 1 reply; 23+ messages in thread
From: Johannes Schindelin @ 2017-03-29 20:02 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git

Hi Junio,

On Fri, 24 Mar 2017, Junio C Hamano wrote:

> Johannes Schindelin <johannes.schindelin@gmx.de> writes:
> 
> > - the most important part will be the patch turning core.enableSHA1DC
> > into a tristate: "externalOnly" or "smart" or "auto" or something
> > indicating that it switches on collision detection only for commands
> > that accept objects from an outside source into the local repository,
> > such as fetch, fast-import, etc
> 
> There are different uses of SHA-1 hashing in Git, and I do agree
> that depending on the use, some of them do not need the overhead for
> the collision-attack detection.

Indeed.

I guess I should have clarified very clearly what I intended to accomplish
with this preview.

Let me first summarize the background:

- Git originally used SHA-1 as a convenient catch-all hashing algorithm,
  with the security of the hash merely being a nice afterthought.
  Unfortunately, SHA-1 was hardcoded. Mistakes were made. They always are.

- After the SHAttered blog post became public, Linus first made the case
  that it matters not all that much: the development of the Linux kernel
  is based on trust, and nobody would pull from a person they do not trust.
  This approach obviously does not extend to most other projects.

- By switching the default to DC_SHA1 already in `master`, you now took
  the exact opposite position: it *always* matters, even when you trust
  people, and the 6x slowdown demonstrated by my perf test is something that
  everybody needs to accept, even if it is spent for no benefit in return.

Between these two extremes ("collision attacks do not matter if you run
your project based on trust" vs "we always mistrust everybody, including
the user's own source code that they stage for commit"), I think there are
many shades of green, and I think it would be delusional to believe that
we can predict the trust model for each and every Git user [*1*], baking
it into a single compile time setting.

That is why I wanted to implement a tristate config so that users can
adapt Git to their particular scenario. That way, maintainers of
precompiled Git packages do not have to dictate to Git users what trust
model they should use.

One scenario seems to be common, and it is certainly one that I have a
direct interest in supporting: inside a company, where the server as well
as the developers are implicitly trusted not to fiddle with collision
attacks (because there are much easier ways to hide malicious code, let's
be frank). And in this scenario, slowing down the SHA-1 computation by
half an order of magnitude to try to detect collision attacks is simply
unacceptable, because there is no benefit, only cost to it.

An even more common scenario is when a developer works on a local
repository, adds a couple of files, then runs rebase, then merges, etc. In
*none* of these cases does the developer distrust any of the objects
flying about. Like above, forcing the developer to accept half an order of
magnitude slow down of the SHA-1 computation is something I would consider
disrespectful of those developers' time. Note: in this scenario, any
object coming from elsewhere would most likely be subject to the collision
detection, as the developer may not trust anybody but themselves.

In other words: I disagree that, say, `git add` should use the collision
detecting SHA-1.

I also suspect that you had a much more elaborate (maybe even fragile)
strategy than mine in mind when you tried to determine which code paths
would need collision detection and which ones would not: we have *no*
context in the object-oriented sense whenever we call the object hashing
functions. Meaning that you would have to introduce such a context, or to
add some sort of thread local state. I have to admit that I do not like
either way.

The approach I chose instead was to make the switch global, per command.
Obviously, the next step is to identify the Git commands which accept
objects from external sources (`clone`, `fetch`, `fast-import` and all the
remote helpers come to mind) and let them set a global flag before asking
git_default_config() to parse the core.enableSHA1DC setting, so that the
special value `external` could trigger the collision detecting code for
those, and only those, commands. That would still err on the safe side if
these commands are used to hash objects originating from the same machine,
but that is a price I would be willing to pay for the simplicity of this
approach.

Does my explanation manage to illustrate in a plausible way why I chose
the approach that I did?

Ciao,
Johannes

Footnote *1*: It is equally delusional, of course, to claim that every Git
user can and should configure and compile Git for themselves.


* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-29 20:02   ` Johannes Schindelin
@ 2017-03-30  0:31     ` Junio C Hamano
  2017-04-18 11:30       ` Johannes Schindelin
  0 siblings, 1 reply; 23+ messages in thread
From: Junio C Hamano @ 2017-03-30  0:31 UTC (permalink / raw)
  To: Johannes Schindelin; +Cc: git

Johannes Schindelin <Johannes.Schindelin@gmx.de> writes:

> - After the SHAttered blog post became public, Linus first made the case
>   that it matters not all that much: the development of the Linux kernel
>   is based on trust, and nobody would pull from a person they do not trust.
>   This approach does obviously not extend to most other projects.
>
> - By switching the default to DC_SHA1 already in `master`, you now took
>   the exact opposite position: it *always* matters, even when you trust
>   people, and the 6x slowdown demonstrated by my perf test is something that
>   everybody needs to accept, even if it is spent for no benefit in return.

You seem to be trying very hard to make it look as if Linus
said one thing and I decided to do something else, but if one digs
into the archive, it should be clear that that is not how the
current line of development has happened.  Even from the same
person, you can hear that "the social aspect of how the project is
structured makes the kernel less susceptible to the shattered
attack" [*1*] and that "mitigation at Git level is easy" [*2*].

The kernel as a project may not have to do much, but the world of
projects using Git is wider than the kernel alone, and a mitigation
that applies to all was available and easy to make the default.

> The approach I chose instead was to make the switch global, per command.
> Obviously, the next step is to identify the Git commands which accept
> objects from external sources (`clone`, `fetch`, `fast-import` and all the
> remote helpers come to mind) and let them set a global flag before asking
> git_default_config() to parse the core.enableSHA1DC setting, so that the
> special value `external` could trigger the collision detecting code for
> those, and only those, commands. That would still err on the safe side if
> these commands are used to hash objects originating from the same machine,
> but that is a price I would be willing to pay for the simplicity of this
> approach.
>
> Does my explanation manage to illustrate in a plausible way why I chose
> the approach that I did?

I agree that there may be room to tweak the degree of paranoia per
codepath (I said that already in the message you are responding to),
but as Linus and Peff already said in the old discussion thread
[*3*], I doubt that it needs to be runtime configurable.

In any case, I do not think you should blindly trust what end-users
add to the repository locally.  A vendor may send in a binary driver
update via a pull-request, and you would want to protect "clone" and
"fetch" client because of that.  The same driver update however may
come as a patch that is accepted by running "git am".  Or you may
get a vendor tarball ftp'ed over and "git add ." the whole thing.

These "local" operations still need the same degree of paranoia as
your "git fetch"; "per command" may not be such a good criterion
because of that.

That is why I mentioned "you want to be paranoid any time you add
new data to the object database" as a rule of thumb in the message
you are responding to.  It may be overly broad, but would be a
better starting point than "'git add' is always safe as it is
local".

So in short, I do agree with your idea of using "faster" hashing for
some codepaths and "paranoid" ones for others.  I think a "slower but
collision-attack-detecting" one (aka DC_SHA1) plus a choice of a
"faster" one at compile time would be a sensible way to go, and if
one is truly paranoid, the "faster" one _could_ be sha1dc.c with
collision detection disabled.  On the other hand, if one is in a
closed environment, one may want both hashes configurable and let
the OpenSSL implementation be chosen for both the "slower but DC"
and the "faster" hash.  And I am OK with that, too.


[Reference]

*1* https://public-inbox.org/git/CA+55aFz98r7NC_3BW_1HU9-0C-HcrFou3=0gmRcS38=-x8dmVw@mail.gmail.com/

*2* https://public-inbox.org/git/CA+55aFxmr6ntWGbJDa8tOyxXDX3H-yd4TQthgV_Tn1u91yyT8w@mail.gmail.com/

*3* https://public-inbox.org/git/20170323174750.xyucxmfhuc6dbrzc@sigill.intra.peff.net/

* Re: [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL
  2017-03-24 23:24 ` [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL Johannes Schindelin
  2017-03-25 19:51   ` Ævar Arnfjörð Bjarmason
@ 2017-03-30 16:16   ` Junio C Hamano
  2017-03-30 16:47     ` Junio C Hamano
  2017-04-18 11:28     ` Johannes Schindelin
  1 sibling, 2 replies; 23+ messages in thread
From: Junio C Hamano @ 2017-03-30 16:16 UTC (permalink / raw)
  To: Johannes Schindelin; +Cc: git

Johannes Schindelin <johannes.schindelin@gmx.de> writes:

> +#ifdef SHA1_DC_AND_OPENSSL
> +void (*SHA1_Init_func)(SHA_CTX_union *ctx) = (void *)SHA1DCInit;
> +void (*SHA1_Update_func)(SHA_CTX_union *ctx, const void *pointer, size_t size) =
> +	(void *)git_SHA1DCUpdate;
> +int (*SHA1_Final_func)(unsigned char sha1[20], SHA_CTX_union *ctx) =
> +	(void *)git_SHA1DCFinal;
> +
> +void toggle_sha1dc(int enable)
> +{
> +	if (enable) {
> +		SHA1_Init_func = (void *)SHA1DCInit;
> +		SHA1_Update_func = (void *)git_SHA1DCUpdate;
> +		SHA1_Final_func = (void *)git_SHA1DCFinal;
> +	} else {
> +		SHA1_Init_func = (void *)SHA1_Init;
> +		SHA1_Update_func = (void *)SHA1_Update;
> +		SHA1_Final_func = (void *)SHA1_Final;
> +	}
> +}
> +#endif

As I understand that this is a demonstration series, the approach
above is OK as an expedient way to illustrate how run-time
switching could be done.  The approach is not very thread-friendly,
though.

> diff --git a/sha1dc/sha1.h b/sha1dc/sha1.h
> index bd8bd928fb3..243c2fe0b6b 100644
> --- a/sha1dc/sha1.h
> +++ b/sha1dc/sha1.h
> @@ -110,10 +110,26 @@ void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
>   */
>  void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *data, unsigned long len);
>  
> +#ifdef SHA1_DC_AND_OPENSSL
> +extern void toggle_sha1dc(int enable);
> +
> +typedef union {
> +	SHA1_CTX dc;
> +	SHA_CTX openssl;
> +} SHA_CTX_union;

The use of union is a good ingredient for a solution.  I would have
chosen to do this slightly differently if I were doing it.

        typedef struct {
                int safe;
                union {
                        SHA1_CTX_SAFE safe;
                        SHA1_CTX_FAST fast;
                } u;
        } git_SHA_CTX;

        void git_SHA1_Init(git_SHA_CTX *ctx, int safe);
        void git_SHA1_Update(git_SHA_CTX *ctx, const void *, unsigned long);
        int git_SHA1_Final(unsigned char [20], git_SHA_CTX *ctx);

where SHA1_CTX_FAST may be chosen from the Makefile just like we
currently choose platform_SHA_CTX.  SHA1_CTX_SAFE could also be made
configurable but it may be OK to hardcode it to refer to SHA1_CTX of
DC's.

As you already know, I am assuming that each codepath pretty much
knows if it needs safe or fast one (e.g. the one used in csum-file.c
knows it does not have to), so each git_SHA_CTX is told which one to
use when it gets initialized.
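
For completeness, the Update/Final wrappers would then merely dispatch on
that flag; roughly (again only a sketch, with SHA1_CTX_SAFE being sha1dc's
SHA1_CTX and platform_SHA1_* the Makefile-selected implementation we
already have):

    void git_SHA1_Update(git_SHA_CTX *ctx, const void *data, unsigned long len)
    {
            if (ctx->safe)
                    git_SHA1DCUpdate(&ctx->u.safe, data, len);
            else
                    platform_SHA1_Update(&ctx->u.fast, data, len);
    }

    int git_SHA1_Final(unsigned char hash[20], git_SHA_CTX *ctx)
    {
            if (ctx->safe) {
                    /* the DC wrapper deals with a detected collision itself */
                    git_SHA1DCFinal(hash, &ctx->u.safe);
                    return 1;
            }
            return platform_SHA1_Final(hash, &ctx->u.fast);
    }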

* Re: [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL
  2017-03-30 16:16   ` Junio C Hamano
@ 2017-03-30 16:47     ` Junio C Hamano
  2017-04-18 11:28     ` Johannes Schindelin
  1 sibling, 0 replies; 23+ messages in thread
From: Junio C Hamano @ 2017-03-30 16:47 UTC (permalink / raw)
  To: Johannes Schindelin; +Cc: git

Junio C Hamano <gitster@pobox.com> writes:

> ...
> The use of union is a good ingredient for a solution.  I would have
> chosen to do this slightly differently if I were doing it.
>
>         typedef struct {
>                 int safe;
>                 union {
>                         SHA1_CTX_SAFE safe;
>                         SHA1_CTX_FAST fast;
>                 } u;
>         } git_SHA_CTX;
>
>         void git_SHA1_Init(git_SHA_CTX *ctx, int safe);
>         void git_SHA1_Update(git_SHA_CTX *ctx, const void *, unsigned long);
>         int git_SHA1_Final(unsigned char [20], git_SHA_CTX *ctx);
>
> where SHA1_CTX_FAST may be chosen from the Makefile just like we
> currently choose platform_SHA_CTX.  SHA1_CTX_SAFE could also be made
> configurable but it may be OK to hardcode it to refer to SHA1_CTX of
> DC's.
>
> As you already know, I am assuming that each codepath pretty much
> knows if it needs safe or fast one (e.g. the one used in csum-file.c
> knows it does not have to), so each git_SHA_CTX is told which one to
> use when it gets initialized.

And if we wanted to declare "git add" is always safe, we could still
do

    int sha1_safety_global_override = -1; /* unspecified */

    void git_SHA1_Init(git_SHA_CTX *ctx, int safe)
    {
	if (sha1_safety_global_override >= 0)
	    ctx->safe = sha1_safety_global_override;
        else
	    ctx->safe = safe;

	if (ctx->safe)
	    SHA1DCInit(&(ctx->u.safe));
	else
	    platform_SHA1_Init(&(ctx->u.fast));
   }

and then have cmd_add() in builtin/add.c flip that global
override bit to say "this does not have to be safe".  I personally
do not think it is a good idea, but I am showing that it is still
doable.
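
Spelled out (purely hypothetical, just to show the shape), builtin/add.c
would gain something like:

    extern int sha1_safety_global_override;

    int cmd_add(int argc, const char **argv, const char *prefix)
    {
            /* everything "git add" hashes comes from the local worktree */
            sha1_safety_global_override = 0;

            /* ... the existing cmd_add() body would follow unchanged ... */
            return 0;
    }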

And as long as the assignment to sha1_safety_global_override is done
in a thread-friendly way, such a scheme would be more thread-friendly
as a whole than the "toggle_sha1dc()" approach, where each CTX
instance does not know which side of the union it is being used as
(which, if mixed up, would of course lead to funny behaviour).


* Re: [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL
  2017-03-30 16:16   ` Junio C Hamano
  2017-03-30 16:47     ` Junio C Hamano
@ 2017-04-18 11:28     ` Johannes Schindelin
  1 sibling, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-04-18 11:28 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git

Hi Junio,

On Thu, 30 Mar 2017, Junio C Hamano wrote:

> Johannes Schindelin <johannes.schindelin@gmx.de> writes:
> 
> > +#ifdef SHA1_DC_AND_OPENSSL
> > +void (*SHA1_Init_func)(SHA_CTX_union *ctx) = (void *)SHA1DCInit;
> > +void (*SHA1_Update_func)(SHA_CTX_union *ctx, const void *pointer, size_t size) =
> > +	(void *)git_SHA1DCUpdate;
> > +int (*SHA1_Final_func)(unsigned char sha1[20], SHA_CTX_union *ctx) =
> > +	(void *)git_SHA1DCFinal;
> > +
> > +void toggle_sha1dc(int enable)
> > +{
> > +	if (enable) {
> > +		SHA1_Init_func = (void *)SHA1DCInit;
> > +		SHA1_Update_func = (void *)git_SHA1DCUpdate;
> > +		SHA1_Final_func = (void *)git_SHA1DCFinal;
> > +	} else {
> > +		SHA1_Init_func = (void *)SHA1_Init;
> > +		SHA1_Update_func = (void *)SHA1_Update;
> > +		SHA1_Final_func = (void *)SHA1_Final;
> > +	}
> > +}
> > +#endif
> 
> As I understand that this is a demonstration series, the approach
> above is OK as an expedient way to illustrate how run-time
> switching could be done.  The approach is not very thread-friendly,
> though.

Indeed. However, the toggle is meant to be coarse, heavy-handed. I have to
protect my time as Git for Windows maintainer, so I have to keep this as
simple to maintain as possible while also benefitting my users. This
toggle is intended to be run very, very early in the cmd_main() functions.
As in: right when the config is read.
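
For reference, the hookup is tiny; the core.enablesha1dc handling in patch
3/7 amounts to something like this inside git_default_config() (sketched
from memory, the actual hunk may differ in its details):

    #ifdef SHA1_DC_AND_OPENSSL
            if (!strcmp(var, "core.enablesha1dc")) {
                    toggle_sha1dc(git_config_bool(var, value));
                    return 0;
            }
    #endif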

> > diff --git a/sha1dc/sha1.h b/sha1dc/sha1.h
> > index bd8bd928fb3..243c2fe0b6b 100644
> > --- a/sha1dc/sha1.h
> > +++ b/sha1dc/sha1.h
> > @@ -110,10 +110,26 @@ void git_SHA1DCFinal(unsigned char [20], SHA1_CTX *);
> >   */
> >  void git_SHA1DCUpdate(SHA1_CTX *ctx, const void *data, unsigned long len);
> >  
> > +#ifdef SHA1_DC_AND_OPENSSL
> > +extern void toggle_sha1dc(int enable);
> > +
> > +typedef union {
> > +	SHA1_CTX dc;
> > +	SHA_CTX openssl;
> > +} SHA_CTX_union;
> 
> The use of union is a good ingredient for a solution.  I would have
> chosen to do this slightly differently if I were doing it.
> 
>         typedef struct {
>                 int safe;
>                 union {
>                         SHA1_CTX_SAFE safe;
>                         SHA1_CTX_FAST fast;
>                 } u;
>         } git_SHA_CTX;
> 
>         void git_SHA1_Init(git_SHA_CTX *ctx, int safe);
>         void git_SHA1_Update(git_SHA_CTX *ctx, const void *, unsigned long);
>         int git_SHA1_Final(unsigned char [20], git_SHA_CTX *ctx);
> 
> where SHA1_CTX_FAST may be chosen from the Makefile just like we
> currently choose platform_SHA_CTX.  SHA1_CTX_SAFE could also be made
> configurable but it may be OK to hardcode it to refer to SHA1_CTX of
> DC's.
> 
> As you already know, I am assuming that each codepath pretty much
> knows if it needs safe or fast one (e.g. the one used in csum-file.c
> knows it does not have to), so each git_SHA_CTX is told which one to
> use when it gets initialized.

Thanks. At this stage it is pretty clear, though, that I will have to
maintain this. I am not willing to make this as configurable as you
suggested, as these patches will have to stay in git-for-windows/git.

Ciao,
Dscho

* Re: [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag
  2017-03-30  0:31     ` Junio C Hamano
@ 2017-04-18 11:30       ` Johannes Schindelin
  0 siblings, 0 replies; 23+ messages in thread
From: Johannes Schindelin @ 2017-04-18 11:30 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git

Hi Junio,

On Wed, 29 Mar 2017, Junio C Hamano wrote:

> Johannes Schindelin <Johannes.Schindelin@gmx.de> writes:
> 
> > The approach I chose instead was to make the switch global, per
> > command.  Obviously, the next step is to identify the Git commands
> > which accept objects from external sources (`clone`, `fetch`,
> > `fast-import` and all the remote helpers come to mind) and let them
> > set a global flag before asking git_default_config() to parse the
> > core.enableSHA1DC setting, so that the special value `external` could
> > trigger the collision detecting code for those, and only those,
> > commands. That would still err on the safe side if these commands are
> > used to hash objects originating from the same machine, but that is a
> > price I would be willing to pay for the simplicity of this approach.
> >
> > Does my explanation manage to illustrate in a plausible way why I
> > chose the approach that I did?
> 
> I agree that there may be room to tweak the degree of paranoia per
> codepath (I said that already in the message you are responding to),
> but as Linus and Peff already said in the old discussion thread
> [*3*], I doubt that it needs to be runtime configurable.

My real-world scenario here was intended to dispel your doubts:
the very real slowdown is simply unacceptable.

Given your reluctance to accept that the performance impact is large
enough to be painful, and that there are situations where the trust in
your infrastructure and your colleagues makes that pain unnecessary, you
leave me no other option than to make this a Git for Windows-only feature.

I really tried to reconcile your concerns with the needs of Git for
Windows' users, to demonstrate that the compile-time switch is
unacceptable, but it is time to accept defeat and the newly acquired
maintenance burden.

Moving on,
Dscho


end of thread

Thread overview: 23+ messages:
2017-03-24 23:24 [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Johannes Schindelin
2017-03-24 23:24 ` [PATCH 1/7] sha1dc: safeguard against outside definitions of BIGENDIAN Johannes Schindelin
2017-03-24 23:24 ` [PATCH 2/7] Makefile: optionally compile with both SHA1DC and SHA1_OPENSSL Johannes Schindelin
2017-03-25 19:51   ` Ævar Arnfjörð Bjarmason
2017-03-30 16:16   ` Junio C Hamano
2017-03-30 16:47     ` Junio C Hamano
2017-04-18 11:28     ` Johannes Schindelin
2017-03-24 23:24 ` [PATCH 3/7] config: add the core.enablesha1dc setting Johannes Schindelin
2017-03-24 23:25 ` [PATCH 4/7] t0013: do not skip the entire file wholesale without DC_SHA1 Johannes Schindelin
2017-03-24 23:25 ` [PATCH 5/7] t0013: test DC_AND_OPENSSL_SHA1, too Johannes Schindelin
2017-03-24 23:28 ` [PATCH 6/7] mingw: enable DC_AND_OPENSSL_SHA1 by default Johannes Schindelin
2017-03-24 23:28 ` [PATCH 7/7] p0013: new test to compare SHA1DC vs OpenSSL Johannes Schindelin
2017-03-25  6:37 ` [PATCH 0/7] PREVIEW: Introduce DC_AND_OPENSSL_SHA1 make flag Junio C Hamano
2017-03-25 16:58   ` Junio C Hamano
2017-03-26  6:18   ` Jeff King
2017-03-26 23:16     ` Junio C Hamano
2017-03-27  1:11       ` Jeff King
2017-03-27  6:07         ` Junio C Hamano
2017-03-27  7:09           ` Jeff King
2017-03-27 17:15             ` Junio C Hamano
2017-03-29 20:02   ` Johannes Schindelin
2017-03-30  0:31     ` Junio C Hamano
2017-04-18 11:30       ` Johannes Schindelin
