unofficial mirror of libc-alpha@sourceware.org
From: Eyal Itkin via Libc-alpha <libc-alpha@sourceware.org>
To: "Carlos O'Donell" <carlos@redhat.com>
Cc: Florian Weimer <fweimer@redhat.com>,
	GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH] Update tcache double-free check
Date: Mon, 10 Aug 2020 16:07:28 +0300	[thread overview]
Message-ID: <CAA=iMULaEe72oAnaCtn4AUQRUAiaMp+Mn8hy2RWh8+2J0Q-mgQ@mail.gmail.com> (raw)
In-Reply-To: <22dd78e7-13a7-0bec-37ba-f3e32a9630ab@redhat.com>

[-- Attachment #1: Type: text/plain, Size: 3810 bytes --]

Updated the patch to perform an atomic update on the global entropy
state, to avoid races when multiple threads are initialized
simultaneously. The patch now accumulates entropy across threads,
while still falling back to ~tcache to handle cases in which no
entropy is available yet.

As Carlos mentioned earlier, I guess you will want to discuss this
patch before integrating it. Also, feel free to update the patch if
I missed any whitespace or long-line coding-style issues.

Thanks again,
Eyal


On Sun, Jul 26, 2020 at 12:07 AM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On 7/25/20 6:39 AM, Eyal Itkin wrote:
> > I've gone over the compiled binaries before/after the patch, and the
> > change on my x64 binary is as follows: no change to tcache_get() or
> > tcache_put(), and an addition of 2 asm instructions to _int_free(). If
> > we would have only used ~tcache this would have been reduced to an
> > addition of a single asm instruction in _int_free().
> >
> > The patch's impact on benchtests/bench-malloc-(simple,thread) is
> > negligible, and is less than the natural noise I get on my GCP
> > instance between two consecutive benchmark executions.
> >
> > Since the call to getrandom() is only on tcache_init(), hence once per
> > thread, I'm fine with it being called for all threads, but it is
> > really for you to decide.
> >
> > There is a possible fix to make sure that the entropy we get from
> > getrandom() will be accumulated, thus helping all future threads that
> > might need it. We won't be able to update the keys of previous threads
> > (if they lacked entropy), but we can enrich future threads with the
> > entropy even if they failed to use getrandom(). This solution will
> > also enable us to use all of the received entropy from getrandom()
> > even when we get less than the wanted amount of bytes.
> >
> > The solution for such an entropy accumulation is quite simple:
> >
> > static __thread size_t tcache_key = 0;
> > static size_t tcache_entropy = 0;
> >
> > void tcache_init()
> > {
> >       ...
> >       /* Attempt to get a random key for the double-free checks.  */
> >       size_t local_entropy = 0;
> >       getrandom (&local_entropy, sizeof(local_entropy), GRND_NONBLOCK);
> >       /* always accumulate it with the past seen entropy */
> >       tcache_entropy ^= local_entropy;
> >       /* always use the basic alternative of ~tcache in case we didn't
> > have any entropy.  */
> >       tcache_key = tcache_entropy ^ ~((size_t) tcache);
> > }
> >
> > As can be seen, in case that all calls to getrandom() failed, and we
> > received nothing from it, we would still use ~tcache, just like
> > before. In addition, this use of "~tcache" also adds the entropy from
> > ASLR to the tcache_key, so we don't lose anything from using it.
> >
> > Please advise me of the best way to proceed.
>
> I like the accumulation of entropy. I'm a little worried about thread
> startup and shutdown costs, but I haven't measured that yet. I'm also
> busy getting the release out the door (I'm the release manager).
> If you have thread startup/shutdown costs that would be interesting
> for me to see as a reviewer.
>
> I'd like to see Florian's opinion on this.
>
> I fully expect that if you flesh this out you'll have to handle the
> getrandom failure case, and the atomics required to avoid the data
> race with multiple threads and global state. However, hold off on
> that until we get a bit more consensus on whether we really want to go to
> this level of complexity. It would be good to keep the entropy we
> have in the event the system is under some kind of entropy starvation
> at a later point (I've seen at least one of these for TCP sequence
> numbers, read and drop from /dev/random etc.)
>
> --
> Cheers,
> Carlos.
>

[-- Attachment #2: 0001-Update-tcache-double-free-check.patch --]
[-- Type: application/octet-stream, Size: 3575 bytes --]

From baf66196c486f6ab68be340fdbffd0e0df784e77 Mon Sep 17 00:00:00 2001
From: Eyal Itkin <eyalit@checkpoint.com>
Date: Mon, 10 Aug 2020 15:58:08 +0300
Subject: [PATCH] Update tcache double-free check

Update the value used for the tcache entry->key when checking for
double free operations. Use a random value by default, and accumulate
the gathered entropy over time and across threads. In addition, use
~tcache as a backup value in case there isn't enough entropy or
entropy isn't available.

The original key value was "tcache", which may lead to security issues
in code with use-after-free vulnerabilities (the "House of Io"
exploit). The new key is no longer a valid pointer to a critical
metadata struct.
---
 malloc/malloc.c | 29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index ee87ddbbf9..4e9a5e17ef 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -247,6 +247,9 @@
 /* For SINGLE_THREAD_P.  */
 #include <sysdep-cancel.h>
 
+/* For tcache double-free checks.  */
+#include <sys/random.h>
+
 /*
   Debugging:
 
@@ -2910,7 +2913,7 @@ typedef struct tcache_entry
 {
   struct tcache_entry *next;
   /* This field exists to detect double frees.  */
-  struct tcache_perthread_struct *key;
+  size_t key;
 } tcache_entry;
 
 /* There is one of these for each thread, which contains the
@@ -2926,6 +2929,8 @@ typedef struct tcache_perthread_struct
 
 static __thread bool tcache_shutting_down = false;
 static __thread tcache_perthread_struct *tcache = NULL;
+static __thread size_t tcache_key = 0;
+static size_t tcache_entropy = 0;
 
 /* Caller must ensure that we know tc_idx is valid and there's room
    for more chunks.  */
@@ -2936,7 +2941,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
 
   /* Mark this chunk as "in the tcache" so the test in _int_free will
      detect a double free.  */
-  e->key = tcache;
+  e->key = tcache_key;
 
   e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
   tcache->entries[tc_idx] = e;
@@ -2953,7 +2958,7 @@ tcache_get (size_t tc_idx)
     malloc_printerr ("malloc(): unaligned tcache chunk detected");
   tcache->entries[tc_idx] = REVEAL_PTR (e->next);
   --(tcache->counts[tc_idx]);
-  e->key = NULL;
+  e->key = 0;
   return (void *) e;
 }
 
@@ -3019,8 +3024,22 @@ tcache_init(void)
     {
       tcache = (tcache_perthread_struct *) victim;
       memset (tcache, 0, sizeof (tcache_perthread_struct));
-    }
 
+      /* Attempt to get a random key for the double-free checks.  */
+      size_t local_entropy = 0, updated_entropy, entropy, old;
+      getrandom (&local_entropy, sizeof(local_entropy), GRND_NONBLOCK);
+      /* Always accumulate it with past seen entropy.  */
+      old = tcache_entropy;
+      do
+        {
+          entropy = old;
+          updated_entropy = local_entropy ^ entropy;
+        }
+      while ((old = catomic_compare_and_exchange_val_acq (&tcache_entropy,
+        updated_entropy, entropy)) != entropy);
+      /* Always use the default agreed alternative: ~tcache.  */
+      tcache_key = updated_entropy ^ ~((size_t) tcache);
+    }
 }
 
 # define MAYBE_INIT_TCACHE() \
@@ -4218,7 +4237,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	   trust it (it also matches random payload data at a 1 in
 	   2^<size_t> chance), so verify it's not an unlikely
 	   coincidence before aborting.  */
-	if (__glibc_unlikely (e->key == tcache))
+	if (__glibc_unlikely (e->key == tcache_key))
 	  {
 	    tcache_entry *tmp;
 	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
-- 
2.17.1


  reply	other threads:[~2020-08-10 13:07 UTC|newest]

Thread overview: 15+ messages
2020-07-24 13:37 [PATCH] Update tcache double-free check Eyal Itkin via Libc-alpha
2020-07-24 21:05 ` Carlos O'Donell via Libc-alpha
2020-07-25 10:39   ` Eyal Itkin via Libc-alpha
2020-07-25 21:07     ` Carlos O'Donell via Libc-alpha
2020-08-10 13:07       ` Eyal Itkin via Libc-alpha [this message]
2020-08-10 13:12         ` Carlos O'Donell via Libc-alpha
2020-08-10 13:35           ` Eyal Itkin via Libc-alpha
2020-08-10 13:44             ` Carlos O'Donell via Libc-alpha
2021-07-02  7:24             ` Siddhesh Poyarekar
2021-07-02  7:57               ` Eyal Itkin via Libc-alpha
2021-07-02  8:45                 ` Siddhesh Poyarekar
2020-08-26 20:40           ` Carlos O'Donell via Libc-alpha
2020-10-03  9:04             ` Eyal Itkin via Libc-alpha
2020-10-04 19:41               ` Carlos O'Donell via Libc-alpha
2020-10-14 13:44                 ` Eyal Itkin via Libc-alpha
