From: Florian Weimer <fweimer@redhat.com>
To: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
Cc: "H.J. Lu" <hjl.tools@gmail.com>,
Carlos O'Donell <carlos@redhat.com>, nd <nd@arm.com>,
GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH] Add getcpu
Date: Mon, 10 Dec 2018 13:22:06 +0100
Message-ID: <871s6p7es1.fsf@oldenburg2.str.redhat.com>
In-Reply-To: <79399800-c68b-a1d6-6871-ac407d37c639@arm.com> (Szabolcs Nagy's message of "Mon, 10 Dec 2018 12:20:11 +0000")
* Szabolcs Nagy:
> On 05/12/18 17:41, H.J. Lu wrote:
>> On Wed, Dec 5, 2018 at 9:33 AM Carlos O'Donell <carlos@redhat.com> wrote:
>>>
>>> On 12/5/18 9:29 AM, H.J. Lu wrote:
>>>> To optimize for multi-node NUMA systems, I need a very fast way to identify which
>>>> node the current process is running on. getcpu:
>>>>
>>>> NAME
>>>> getcpu - determine CPU and NUMA node on which the calling thread is
>>>> running
> ...
>>> I see that on x86 you have a vDSO vgetcpu, and that LSL is one instruction and
>>> loads the cpu and node bits in one shot (atomically). So this should
>>> work fine, but for all other callers I assume this will be a syscall.
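[Editor's note: a minimal sketch of the interface under discussion, assuming the glibc getcpu wrapper that this patch eventually added (declared in <sched.h> under _GNU_SOURCE, glibc 2.29 and later); the helper name is illustrative, not from the thread.]

```c
#define _GNU_SOURCE
#include <sched.h>   /* getcpu, declared here since glibc 2.29 */

/* Return the NUMA node the calling thread is currently running on,
   or -1 on error.  On x86 the wrapper goes through the vDSO, so this
   is much cheaper than a real system call.  */
static int
current_numa_node (void)
{
  unsigned int cpu, node;
  if (getcpu (&cpu, &node) != 0)
    return -1;
  return (int) node;
}
```

Note the result is only a hint: the thread may be migrated to another CPU (and node) immediately after the call returns, which is why H.J.'s NUMA spinlock use case only needs it to be fast, not pinned.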
>>
>> I need the vDSO getcpu to avoid a syscall. I am working on a NUMA spinlock
>> library which depends on a very fast getcpu.
>
> Hm, I thought rseq would let you access the cpu id even faster than the vDSO.
> (And it can even protect against preemptive scheduling.
> I don't know much about NUMA, but I'd expect the cpu id to
> NUMA node mapping to be fixed, and then the cpu id is enough.)
The mapping is not fixed, due to CPU hotplug/unplug, VM migration, and
CRIU (checkpoint/restore).
Thanks,
Florian
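[Editor's note: the rseq fast path Szabolcs mentions can be sketched as follows. This assumes the glibc rseq integration (__rseq_offset, __rseq_size in <sys/rseq.h>, glibc 2.35+) and a compiler with __builtin_thread_pointer, both of which postdate this thread; at the time, rseq registration had to be done manually via the raw syscall.]

```c
#include <stddef.h>
#include <sys/rseq.h>   /* struct rseq, __rseq_offset, __rseq_size (glibc 2.35+) */

/* Read the current cpu id from the thread's rseq area, which the
   kernel keeps up to date across preemption; no syscall, no vDSO call,
   just a thread-local memory load.  Returns -1 if rseq is not
   registered for this thread.  */
static int
rseq_cpu_id (void)
{
  if (__rseq_size == 0)
    return -1;
  struct rseq *rs = (struct rseq *)
    ((char *) __builtin_thread_pointer () + __rseq_offset);
  return (int) rs->cpu_id;
}
```

As Florian points out above, even with a fast cpu id, the cpu-to-node mapping must still be looked up dynamically, since hotplug, VM migration, and CRIU can change it.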
Thread overview: 32+ messages
2018-12-05 14:29 RFC: Add getcpu wrapper H.J. Lu
2018-12-05 15:35 ` Florian Weimer
2018-12-05 15:40 ` H.J. Lu
2018-12-05 15:48 ` Florian Weimer
2018-12-05 16:00 ` Zack Weinberg
2018-12-05 16:04 ` H.J. Lu
2018-12-05 17:33 ` Carlos O'Donell
2018-12-05 17:41 ` [PATCH] Add getcpu H.J. Lu
2018-12-05 18:13 ` H.J. Lu
2018-12-05 18:48 ` Florian Weimer
2018-12-05 19:50 ` H.J. Lu
2018-12-05 19:57 ` Florian Weimer
2018-12-05 20:07 ` H.J. Lu
2018-12-05 20:14 ` H.J. Lu
2018-12-07 12:50 ` Tulio Magno Quites Machado Filho
2018-12-07 14:49 ` H.J. Lu
2018-12-05 20:43 ` Joseph Myers
2018-12-05 20:56 ` The future of the manual (was Re: [PATCH] Add getcpu) Zack Weinberg
2018-12-05 21:28 ` Joseph Myers
2018-12-07 13:11 ` Florian Weimer
2018-12-05 21:39 ` [PATCH] Add getcpu H.J. Lu
2018-12-05 21:45 ` Joseph Myers
2018-12-05 21:55 ` Florian Weimer
2018-12-06 20:26 ` H.J. Lu
2018-12-07 16:51 ` Florian Weimer
2018-12-07 17:01 ` H.J. Lu
2018-12-07 17:15 ` Florian Weimer
2018-12-07 20:29 ` [PATCH] Don't use __typeof__ (getcpu) H.J. Lu
2018-12-07 20:38 ` Samuel Thibault
2018-12-07 20:49 ` H.J. Lu
2018-12-10 12:20 ` [PATCH] Add getcpu Szabolcs Nagy
2018-12-10 12:22 ` Florian Weimer [this message]