To: Adhemerval Zanella
Subject: Re: [PATCH v7 1/4] support: Add support_stack_alloc
References: <20210706145839.1658623-1-adhemerval.zanella@linaro.org>
 <20210706145839.1658623-2-adhemerval.zanella@linaro.org>
 <87k0m2a0na.fsf@oldenburg.str.redhat.com>
Date: Wed, 07 Jul 2021 19:15:20 +0200
In-Reply-To: (Adhemerval Zanella's message of "Wed, 7 Jul 2021 09:17:35 -0300")
Message-ID: <87eeca2ggn.fsf@oldenburg.str.redhat.com>
From: Florian Weimer via Libc-alpha
Reply-To: Florian Weimer
Cc: Adhemerval Zanella via Libc-alpha

* Adhemerval Zanella:

> On 07/07/2021 07:17, Florian Weimer wrote:
>> * Adhemerval Zanella via Libc-alpha:
>>
>>> The code to allocate a stack from xsigstack is refactored so it can
>>> be more generic.  The new support_stack_alloc() also sets PROT_EXEC
>>> if DEFAULT_STACK_PERMS has PF_X.  This is required on some
>>> architectures (hppa, for instance), and trying to access the rtld
>>> global from the testsuite would require more intrusive refactoring
>>> of the ldsodefs.h header.
>>
>> DEFAULT_STACK_PERMS is misnamed, it's really HISTORIC_STACK_PERMS.
>> All architectures override it to RW permissions in the toolchain
>> (maybe with the exception of Hurd, which uses trampolines for nested
>> functions).
>
> These are in fact two different requirements; this gnulib thread gives
> a nice summary of the permissions required for trampolines [1].
> Another requirement is how Linux lays out the signal return code on
> the signal handler stack.  It seems that hppa still requires
> executable stacks, since tst-xsigstack fails without an executable
> stack even on a recent 5.10.46-1 kernel.

Ugh, okay.

>> I have a cstack_allocate version that handles this.  It can only be
>> done from within glibc proper because we do not export the stack
>> execution status directly.  But I think it's out of scope for glibc
>> 2.34 by now.
>
> We can in theory access the ldsodefs.h fields directly and then use
> the GL (dl_stack_flags) information to decide whether to make the
> stack executable.  The problem is that ldsodefs.h is quite
> convoluted, and it would require more refactoring to use it outside
> of libc.so code.  But I agree with you that having a less hacky way
> to obtain this information would be better.
>
> So are you okay with the current approach, or with being conservative
> and using DEFAULT_STACK_PERMS in libsupport?

DEFAULT_STACK_PERMS with a comment is fine.  I will resubmit my
cstack_allocate patches for glibc 2.35, and they will fully handle
executable stacks.

>>> +  /* The guard bands need to be large enough to intercept offset
>>> +     accesses from a stack address that might otherwise hit another
>>> +     mapping.  Make them at least twice as big as the stack itself, to
>>> +     defend against an offset by the entire size of a large
>>> +     stack-allocated array.  The minimum is 1MiB, which is arbitrarily
>>> +     chosen to be larger than any "typical" wild pointer offset.
>>> +     Again, no matter what the number is, round it up to a whole
>>> +     number of pages.  */
>>> +  size_t guardsize = roundup (MAX (2 * stacksize, 1024 * 1024), pagesize);
>>> +  size_t alloc_size = guardsize + stacksize + guardsize;
>>> +  /* Use MAP_NORESERVE so that RAM will not be wasted on the guard
>>> +     bands; touch all the pages of the actual stack before returning,
>>> +     so we know they are allocated.  */
>>> +  void *alloc_base = xmmap (0,
>>> +                            alloc_size,
>>> +                            PROT_NONE,
>>> +                            MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE|MAP_STACK,
>>> +                            -1);
>>> +  /* PF_X can be overridden if PT_GNU_STACK is present.  */
>>> +  int prot = PROT_READ | PROT_WRITE
>>> +             | (DEFAULT_STACK_PERMS & PF_X ? PROT_EXEC : 0);
>>> +  xmprotect (alloc_base + guardsize, stacksize, prot);
>>> +  memset (alloc_base + guardsize, 0xA5, stacksize);
>>> +  return (struct support_stack) { alloc_base + guardsize, stacksize, guardsize };
>>
>> This doesn't handle different stack growth directions.
>
> At least for the current uses of the routine, it does not require any
> adjustment: xsigaltstack and xclone will handle it.  I saw no
> regression in tst-xsigaltstack and tst-clone_range.

Huh, I would expect the guard area to be outside of the stack region
returned by stack allocation.  That's how the cstack_allocate API does
it.  If the current tests expect something else, then the approach in
the patch is okay (with a comment for DEFAULT_STACK_PERMS).

Thanks,
Florian