From - Wed Dec 18 11:05:12 2019
From: Juergen Gross <jgross@suse.com>
To:
CC: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
    Julien Grall, Konrad Rzeszutek Wilk, Stefano Stabellini, Wei Liu,
    Dario Faggioli, Josh Whitehead, Stewart Hildebrand, Meng Xu
Subject: [PATCH 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory
Date: Wed, 18 Dec 2019 08:48:51 +0100
Message-ID: <20191218074859.21665-2-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>

Move sched*.c and cpupool.c to a new directory common/sched.
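
For context: the only textual change in the renamed compat wrapper is its
include path, since that file builds the 32-bit compat variant by
overriding a couple of names and then pulling in the native source. An
abridged sketch of the pattern, condensed from the compat_schedule.c hunk
further down in this patch (not a complete file):

/* compat_schedule.c, abridged: the compat flavour is generated by
 * redefining the entry points and then textually including the native
 * implementation, which after the move lives in the same directory. */
#define do_poll      compat_poll
#define sched_poll   compat_sched_poll

#include "schedule.c"    /* was: #include "../schedule.c" */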
Signed-off-by: Juergen Gross --- MAINTAINERS | 8 +-- xen/common/Kconfig | 66 +--------------------- xen/common/Makefile | 8 +-- xen/common/sched/Kconfig | 65 +++++++++++++++++++++ xen/common/sched/Makefile | 7 +++ .../{compat/schedule.c => sched/compat_schedule.c} | 2 +- xen/common/{ => sched}/cpupool.c | 0 xen/common/{ => sched}/sched_arinc653.c | 0 xen/common/{ => sched}/sched_credit.c | 0 xen/common/{ => sched}/sched_credit2.c | 0 xen/common/{ => sched}/sched_null.c | 0 xen/common/{ => sched}/sched_rt.c | 0 xen/common/{ => sched}/schedule.c | 2 +- 13 files changed, 80 insertions(+), 78 deletions(-) create mode 100644 xen/common/sched/Kconfig create mode 100644 xen/common/sched/Makefile rename xen/common/{compat/schedule.c => sched/compat_schedule.c} (97%) rename xen/common/{ => sched}/cpupool.c (100%) rename xen/common/{ => sched}/sched_arinc653.c (100%) rename xen/common/{ => sched}/sched_credit.c (100%) rename xen/common/{ => sched}/sched_credit2.c (100%) rename xen/common/{ => sched}/sched_null.c (100%) rename xen/common/{ => sched}/sched_rt.c (100%) rename xen/common/{ => sched}/schedule.c (99%) diff --git a/MAINTAINERS b/MAINTAINERS index 012c847ebd..37d4da2bc2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -174,7 +174,7 @@ M: Josh Whitehead M: Stewart Hildebrand S: Supported L: DornerWorks Xen-Devel -F: xen/common/sched_arinc653.c +F: xen/common/sched/sched_arinc653.c F: tools/libxc/xc_arinc653.c ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE @@ -212,7 +212,7 @@ CPU POOLS M: Juergen Gross M: Dario Faggioli S: Supported -F: xen/common/cpupool.c +F: xen/common/sched/cpupool.c DEVICE TREE M: Stefano Stabellini @@ -378,13 +378,13 @@ RTDS SCHEDULER M: Dario Faggioli M: Meng Xu S: Supported -F: xen/common/sched_rt.c +F: xen/common/sched/sched_rt.c SCHEDULING M: George Dunlap M: Dario Faggioli S: Supported -F: xen/common/sched* +F: xen/common/sched/ SEABIOS UPSTREAM M: Wei Liu diff --git a/xen/common/Kconfig b/xen/common/Kconfig index 2f516da101..79465fc1f9 100644 --- a/xen/common/Kconfig +++ b/xen/common/Kconfig @@ -278,71 +278,7 @@ config ARGO If unsure, say N. -menu "Schedulers" - visible if EXPERT = "y" - -config SCHED_CREDIT - bool "Credit scheduler support" - default y - ---help--- - The traditional credit scheduler is a general purpose scheduler. - -config SCHED_CREDIT2 - bool "Credit2 scheduler support" - default y - ---help--- - The credit2 scheduler is a general purpose scheduler that is - optimized for lower latency and higher VM density. - -config SCHED_RTDS - bool "RTDS scheduler support (EXPERIMENTAL)" - default y - ---help--- - The RTDS scheduler is a soft and firm real-time scheduler for - multicore, targeted for embedded, automotive, graphics and gaming - in the cloud, and general low-latency workloads. - -config SCHED_ARINC653 - bool "ARINC653 scheduler support (EXPERIMENTAL)" - default DEBUG - ---help--- - The ARINC653 scheduler is a hard real-time scheduler for single - cores, targeted for avionics, drones, and medical devices. - -config SCHED_NULL - bool "Null scheduler support (EXPERIMENTAL)" - default y - ---help--- - The null scheduler is a static, zero overhead scheduler, - for when there always are less vCPUs than pCPUs, typically - in embedded or HPC scenarios. - -choice - prompt "Default Scheduler?" 
- default SCHED_CREDIT2_DEFAULT - - config SCHED_CREDIT_DEFAULT - bool "Credit Scheduler" if SCHED_CREDIT - config SCHED_CREDIT2_DEFAULT - bool "Credit2 Scheduler" if SCHED_CREDIT2 - config SCHED_RTDS_DEFAULT - bool "RT Scheduler" if SCHED_RTDS - config SCHED_ARINC653_DEFAULT - bool "ARINC653 Scheduler" if SCHED_ARINC653 - config SCHED_NULL_DEFAULT - bool "Null Scheduler" if SCHED_NULL -endchoice - -config SCHED_DEFAULT - string - default "credit" if SCHED_CREDIT_DEFAULT - default "credit2" if SCHED_CREDIT2_DEFAULT - default "rtds" if SCHED_RTDS_DEFAULT - default "arinc653" if SCHED_ARINC653_DEFAULT - default "null" if SCHED_NULL_DEFAULT - default "credit2" - -endmenu +source "common/sched/Kconfig" config CRYPTO bool diff --git a/xen/common/Makefile b/xen/common/Makefile index 62b34e69e9..2abb8250b0 100644 --- a/xen/common/Makefile +++ b/xen/common/Makefile @@ -3,7 +3,6 @@ obj-y += bitmap.o obj-y += bsearch.o obj-$(CONFIG_CORE_PARKING) += core_parking.o obj-y += cpu.o -obj-y += cpupool.o obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o obj-y += domctl.o @@ -38,12 +37,6 @@ obj-y += radix-tree.o obj-y += rbtree.o obj-y += rcupdate.o obj-y += rwlock.o -obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o -obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o -obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o -obj-$(CONFIG_SCHED_RTDS) += sched_rt.o -obj-$(CONFIG_SCHED_NULL) += sched_null.o -obj-y += schedule.o obj-y += shutdown.o obj-y += softirq.o obj-y += sort.o @@ -74,6 +67,7 @@ obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall extra-y := symbols-dummy.o subdir-$(CONFIG_COVERAGE) += coverage +subdir-y += sched subdir-$(CONFIG_UBSAN) += ubsan subdir-$(CONFIG_NEEDS_LIBELF) += libelf diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig new file mode 100644 index 0000000000..883ac87cab --- /dev/null +++ b/xen/common/sched/Kconfig @@ -0,0 +1,65 @@ +menu "Schedulers" + visible if EXPERT = "y" + +config SCHED_CREDIT + bool "Credit scheduler support" + default y + ---help--- + The traditional credit scheduler is a general purpose scheduler. + +config SCHED_CREDIT2 + bool "Credit2 scheduler support" + default y + ---help--- + The credit2 scheduler is a general purpose scheduler that is + optimized for lower latency and higher VM density. + +config SCHED_RTDS + bool "RTDS scheduler support (EXPERIMENTAL)" + default y + ---help--- + The RTDS scheduler is a soft and firm real-time scheduler for + multicore, targeted for embedded, automotive, graphics and gaming + in the cloud, and general low-latency workloads. + +config SCHED_ARINC653 + bool "ARINC653 scheduler support (EXPERIMENTAL)" + default DEBUG + ---help--- + The ARINC653 scheduler is a hard real-time scheduler for single + cores, targeted for avionics, drones, and medical devices. + +config SCHED_NULL + bool "Null scheduler support (EXPERIMENTAL)" + default y + ---help--- + The null scheduler is a static, zero overhead scheduler, + for when there always are less vCPUs than pCPUs, typically + in embedded or HPC scenarios. + +choice + prompt "Default Scheduler?" 
+    default SCHED_CREDIT2_DEFAULT
+
+    config SCHED_CREDIT_DEFAULT
+        bool "Credit Scheduler" if SCHED_CREDIT
+    config SCHED_CREDIT2_DEFAULT
+        bool "Credit2 Scheduler" if SCHED_CREDIT2
+    config SCHED_RTDS_DEFAULT
+        bool "RT Scheduler" if SCHED_RTDS
+    config SCHED_ARINC653_DEFAULT
+        bool "ARINC653 Scheduler" if SCHED_ARINC653
+    config SCHED_NULL_DEFAULT
+        bool "Null Scheduler" if SCHED_NULL
+endchoice
+
+config SCHED_DEFAULT
+    string
+    default "credit" if SCHED_CREDIT_DEFAULT
+    default "credit2" if SCHED_CREDIT2_DEFAULT
+    default "rtds" if SCHED_RTDS_DEFAULT
+    default "arinc653" if SCHED_ARINC653_DEFAULT
+    default "null" if SCHED_NULL_DEFAULT
+    default "credit2"
+
+endmenu
diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile
new file mode 100644
index 0000000000..359af4f8bb
--- /dev/null
+++ b/xen/common/sched/Makefile
@@ -0,0 +1,7 @@
+obj-y += cpupool.o
+obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o
+obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o
+obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o
+obj-$(CONFIG_SCHED_RTDS) += sched_rt.o
+obj-$(CONFIG_SCHED_NULL) += sched_null.o
+obj-y += schedule.o
diff --git a/xen/common/compat/schedule.c b/xen/common/sched/compat_schedule.c
similarity index 97%
rename from xen/common/compat/schedule.c
rename to xen/common/sched/compat_schedule.c
index 8b6e6f107d..2e450685d6 100644
--- a/xen/common/compat/schedule.c
+++ b/xen/common/sched/compat_schedule.c
@@ -37,7 +37,7 @@ static int compat_poll(struct compat_sched_poll *compat)
 #define do_poll compat_poll
 #define sched_poll compat_sched_poll
 
-#include "../schedule.c"
+#include "schedule.c"
 
 int compat_set_timer_op(u32 lo, s32 hi)
 {
diff --git a/xen/common/cpupool.c b/xen/common/sched/cpupool.c
similarity index 100%
rename from xen/common/cpupool.c
rename to xen/common/sched/cpupool.c
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched/sched_arinc653.c
similarity index 100%
rename from xen/common/sched_arinc653.c
rename to xen/common/sched/sched_arinc653.c
diff --git a/xen/common/sched_credit.c b/xen/common/sched/sched_credit.c
similarity index 100%
rename from xen/common/sched_credit.c
rename to xen/common/sched/sched_credit.c
diff --git a/xen/common/sched_credit2.c b/xen/common/sched/sched_credit2.c
similarity index 100%
rename from xen/common/sched_credit2.c
rename to xen/common/sched/sched_credit2.c
diff --git a/xen/common/sched_null.c b/xen/common/sched/sched_null.c
similarity index 100%
rename from xen/common/sched_null.c
rename to xen/common/sched/sched_null.c
diff --git a/xen/common/sched_rt.c b/xen/common/sched/sched_rt.c
similarity index 100%
rename from xen/common/sched_rt.c
rename to xen/common/sched/sched_rt.c
diff --git a/xen/common/schedule.c b/xen/common/sched/schedule.c
similarity index 99%
rename from xen/common/schedule.c
rename to xen/common/sched/schedule.c
index e70cc70a65..a550dd8f93 100644
--- a/xen/common/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -3125,7 +3125,7 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
 #endif
 
 #ifdef CONFIG_COMPAT
-#include "compat/schedule.c"
+#include "compat_schedule.c"
 #endif
 
 #endif /* !COMPAT */
--
2.16.4

From - Wed Dec 18 11:05:12 2019
From: Juergen Gross <jgross@suse.com>
To:
CC: Juergen Gross, Jan Beulich, Andrew Cooper, Wei Liu, Roger Pau Monné,
    George Dunlap, Ian Jackson, Julien Grall, Konrad Rzeszutek Wilk,
    Stefano Stabellini, Dario Faggioli, Josh Whitehead, Stewart Hildebrand,
    Meng Xu
Subject: [PATCH 2/9] xen/sched: make sched-if.h really scheduler private
Date: Wed, 18 Dec 2019 08:48:52 +0100
Message-ID: <20191218074859.21665-3-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>

include/xen/sched-if.h should be private to scheduler code, so move it
to common/sched/sched-if.h and move the remaining use cases to
cpupool.c and schedule.c.
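
The new accessors keep knowledge of struct cpupool inside common/sched;
code elsewhere is expected to go through them rather than dereference the
structure. A minimal caller-side sketch under that assumption
(report_domain_pool is a made-up example, not part of the patch; it only
uses the interfaces this patch exports via xen/sched.h):

#include <xen/lib.h>
#include <xen/sched.h>

/* Hypothetical consumer outside common/sched: it can query cpupool
 * information through the exported accessors, but no longer sees the
 * struct cpupool layout, which is private to common/sched/sched-if.h. */
static void report_domain_pool(const struct domain *d)
{
    int poolid = cpupool_get_id(d);          /* CPUPOOLID_NONE if unassigned */
    const cpumask_t *cpus = cpupool_valid_cpus(cpupool0);

    printk("dom%d is in cpupool %d; pool0 has %u cpus\n",
           d->domain_id, poolid, cpumask_weight(cpus));
}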
Signed-off-by: Juergen Gross --- xen/arch/x86/dom0_build.c | 5 +- xen/common/domain.c | 70 ---------- xen/common/domctl.c | 135 +------------------ xen/common/sched/cpupool.c | 13 +- xen/{include/xen => common/sched}/sched-if.h | 3 - xen/common/sched/sched_arinc653.c | 3 +- xen/common/sched/sched_credit.c | 2 +- xen/common/sched/sched_credit2.c | 3 +- xen/common/sched/sched_null.c | 3 +- xen/common/sched/sched_rt.c | 3 +- xen/common/sched/schedule.c | 191 ++++++++++++++++++++++++++- xen/include/xen/domain.h | 3 + xen/include/xen/sched.h | 7 + 13 files changed, 228 insertions(+), 213 deletions(-) rename xen/{include/xen => common/sched}/sched-if.h (99%) diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c index 28b964e018..56c2dee0fc 100644 --- a/xen/arch/x86/dom0_build.c +++ b/xen/arch/x86/dom0_build.c @@ -9,7 +9,6 @@ #include #include #include -#include #include #include @@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void) dom0_nodes = node_online_map; for_each_node_mask ( node, dom0_nodes ) cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node)); - cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid); + cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0)); if ( cpumask_empty(&dom0_cpus) ) - cpumask_copy(&dom0_cpus, cpupool0->cpu_valid); + cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0)); max_vcpus = cpumask_weight(&dom0_cpus); if ( opt_dom0_max_vcpus_min > max_vcpus ) diff --git a/xen/common/domain.c b/xen/common/domain.c index 611116c7fc..f4f0a66262 100644 --- a/xen/common/domain.c +++ b/xen/common/domain.c @@ -10,7 +10,6 @@ #include #include #include -#include #include #include #include @@ -565,75 +564,6 @@ void __init setup_system_domains(void) #endif } -void domain_update_node_affinity(struct domain *d) -{ - cpumask_var_t dom_cpumask, dom_cpumask_soft; - cpumask_t *dom_affinity; - const cpumask_t *online; - struct sched_unit *unit; - unsigned int cpu; - - /* Do we have vcpus already? If not, no need to update node-affinity. */ - if ( !d->vcpu || !d->vcpu[0] ) - return; - - if ( !zalloc_cpumask_var(&dom_cpumask) ) - return; - if ( !zalloc_cpumask_var(&dom_cpumask_soft) ) - { - free_cpumask_var(dom_cpumask); - return; - } - - online = cpupool_domain_master_cpumask(d); - - spin_lock(&d->node_affinity_lock); - - /* - * If d->auto_node_affinity is true, let's compute the domain's - * node-affinity and update d->node_affinity accordingly. if false, - * just leave d->auto_node_affinity alone. - */ - if ( d->auto_node_affinity ) - { - /* - * We want the narrowest possible set of pcpus (to get the narowest - * possible set of nodes). What we need is the cpumask of where the - * domain can run (the union of the hard affinity of all its vcpus), - * and the full mask of where it would prefer to run (the union of - * the soft affinity of all its various vcpus). Let's build them. - */ - for_each_sched_unit ( d, unit ) - { - cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity); - cpumask_or(dom_cpumask_soft, dom_cpumask_soft, - unit->cpu_soft_affinity); - } - /* Filter out non-online cpus */ - cpumask_and(dom_cpumask, dom_cpumask, online); - ASSERT(!cpumask_empty(dom_cpumask)); - /* And compute the intersection between hard, online and soft */ - cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask); - - /* - * If not empty, the intersection of hard, soft and online is the - * narrowest set we want. If empty, we fall back to hard&online. - */ - dom_affinity = cpumask_empty(dom_cpumask_soft) ? 
- dom_cpumask : dom_cpumask_soft; - - nodes_clear(d->node_affinity); - for_each_cpu ( cpu, dom_affinity ) - node_set(cpu_to_node(cpu), d->node_affinity); - } - - spin_unlock(&d->node_affinity_lock); - - free_cpumask_var(dom_cpumask_soft); - free_cpumask_var(dom_cpumask); -} - - int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity) { /* Being disjoint with the system is just wrong. */ diff --git a/xen/common/domctl.c b/xen/common/domctl.c index 03d0226039..3407db44fd 100644 --- a/xen/common/domctl.c +++ b/xen/common/domctl.c @@ -11,7 +11,6 @@ #include #include #include -#include #include #include #include @@ -65,9 +64,9 @@ static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap, return err; } -static int xenctl_bitmap_to_bitmap(unsigned long *bitmap, - const struct xenctl_bitmap *xenctl_bitmap, - unsigned int nbits) +int xenctl_bitmap_to_bitmap(unsigned long *bitmap, + const struct xenctl_bitmap *xenctl_bitmap, + unsigned int nbits) { unsigned int guest_bytes, copy_bytes; int err = 0; @@ -200,7 +199,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info) info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info)); BUG_ON(SHARED_M2P(info->shared_info_frame)); - info->cpupool = d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE; + info->cpupool = cpupool_get_id(d); memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t)); @@ -234,16 +233,6 @@ void domctl_lock_release(void) spin_unlock(¤t->domain->hypercall_deadlock_mutex); } -static inline -int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff) -{ - return vcpuaff->flags == 0 || - ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) && - guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) || - ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) && - guest_handle_is_null(vcpuaff->cpumap_soft.bitmap)); -} - void vnuma_destroy(struct vnuma_info *vnuma) { if ( vnuma ) @@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl) case XEN_DOMCTL_setvcpuaffinity: case XEN_DOMCTL_getvcpuaffinity: - { - struct vcpu *v; - const struct sched_unit *unit; - struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity; - - ret = -EINVAL; - if ( vcpuaff->vcpu >= d->max_vcpus ) - break; - - ret = -ESRCH; - if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL ) - break; - - unit = v->sched_unit; - ret = -EINVAL; - if ( vcpuaffinity_params_invalid(vcpuaff) ) - break; - - if ( op->cmd == XEN_DOMCTL_setvcpuaffinity ) - { - cpumask_var_t new_affinity, old_affinity; - cpumask_t *online = cpupool_domain_master_cpumask(v->domain); - - /* - * We want to be able to restore hard affinity if we are trying - * setting both and changing soft affinity (which happens later, - * when hard affinity has been succesfully chaged already) fails. - */ - if ( !alloc_cpumask_var(&old_affinity) ) - { - ret = -ENOMEM; - break; - } - cpumask_copy(old_affinity, unit->cpu_hard_affinity); - - if ( !alloc_cpumask_var(&new_affinity) ) - { - free_cpumask_var(old_affinity); - ret = -ENOMEM; - break; - } - - /* Undo a stuck SCHED_pin_override? */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE ) - vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE); - - ret = 0; - - /* - * We both set a new affinity and report back to the caller what - * the scheduler will be effectively using. 
- */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - { - ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), - &vcpuaff->cpumap_hard, - nr_cpu_ids); - if ( !ret ) - ret = vcpu_set_hard_affinity(v, new_affinity); - if ( ret ) - goto setvcpuaffinity_out; - - /* - * For hard affinity, what we return is the intersection of - * cpupool's online mask and the new hard affinity. - */ - cpumask_and(new_affinity, online, unit->cpu_hard_affinity); - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, - new_affinity); - } - if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) - { - ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), - &vcpuaff->cpumap_soft, - nr_cpu_ids); - if ( !ret) - ret = vcpu_set_soft_affinity(v, new_affinity); - if ( ret ) - { - /* - * Since we're returning error, the caller expects nothing - * happened, so we rollback the changes to hard affinity - * (if any). - */ - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - vcpu_set_hard_affinity(v, old_affinity); - goto setvcpuaffinity_out; - } - - /* - * For soft affinity, we return the intersection between the - * new soft affinity, the cpupool's online map and the (new) - * hard affinity. - */ - cpumask_and(new_affinity, new_affinity, online); - cpumask_and(new_affinity, new_affinity, - unit->cpu_hard_affinity); - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, - new_affinity); - } - - setvcpuaffinity_out: - free_cpumask_var(new_affinity); - free_cpumask_var(old_affinity); - } - else - { - if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, - unit->cpu_hard_affinity); - if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) - ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, - unit->cpu_soft_affinity); - } + ret = vcpu_affinity_domctl(d, op->cmd, &op->u.vcpuaffinity); break; - } case XEN_DOMCTL_scheduler_op: ret = sched_adjust(d, &op->u.scheduler_op); diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 4d3adbdd8d..d5b64d0a6a 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -16,11 +16,12 @@ #include #include #include -#include #include #include #include +#include "sched-if.h" + #define for_each_cpupool(ptr) \ for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next)) @@ -876,6 +877,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op) return ret; } +int cpupool_get_id(const struct domain *d) +{ + return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE; +} + +cpumask_t *cpupool_valid_cpus(struct cpupool *pool) +{ + return pool->cpu_valid; +} + void dump_runq(unsigned char key) { unsigned long flags; diff --git a/xen/include/xen/sched-if.h b/xen/common/sched/sched-if.h similarity index 99% rename from xen/include/xen/sched-if.h rename to xen/common/sched/sched-if.h index b0ac54e63d..a702fd23b1 100644 --- a/xen/include/xen/sched-if.h +++ b/xen/common/sched/sched-if.h @@ -12,9 +12,6 @@ #include #include -/* A global pointer to the initial cpupool (POOL0). 
*/ -extern struct cpupool *cpupool0; - /* cpus currently in no cpupool */ extern cpumask_t cpupool_free_cpus; diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c index 565575c326..fe15754900 100644 --- a/xen/common/sched/sched_arinc653.c +++ b/xen/common/sched/sched_arinc653.c @@ -26,7 +26,6 @@ #include #include -#include #include #include #include @@ -35,6 +34,8 @@ #include #include +#include "sched-if.h" + /************************************************************************** * Private Macros * **************************************************************************/ diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index aa41a3301b..a098ca0f3a 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -15,7 +15,6 @@ #include #include #include -#include #include #include #include @@ -24,6 +23,7 @@ #include #include +#include "sched-if.h" /* * Locking: diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c index f7c477053c..5bfe1441a2 100644 --- a/xen/common/sched/sched_credit2.c +++ b/xen/common/sched/sched_credit2.c @@ -18,7 +18,6 @@ #include #include #include -#include #include #include #include @@ -26,6 +25,8 @@ #include #include +#include "sched-if.h" + /* Meant only for helping developers during debugging. */ /* #define d2printk printk */ #define d2printk(x...) diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c index 3f3418c9b1..5a23a7e7dc 100644 --- a/xen/common/sched/sched_null.c +++ b/xen/common/sched/sched_null.c @@ -29,10 +29,11 @@ */ #include -#include #include #include +#include "sched-if.h" + /* * null tracing events. Check include/public/trace.h for more details. */ diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c index b2b29481f3..379b56bc2a 100644 --- a/xen/common/sched/sched_rt.c +++ b/xen/common/sched/sched_rt.c @@ -20,7 +20,6 @@ #include #include #include -#include #include #include #include @@ -31,6 +30,8 @@ #include #include +#include "sched-if.h" + /* * TODO: * diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c index a550dd8f93..c751faa741 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -23,7 +23,6 @@ #include #include #include -#include #include #include #include @@ -38,6 +37,8 @@ #include #include +#include "sched-if.h" + #ifdef CONFIG_XEN_GUEST #include #else @@ -1607,6 +1608,194 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason) return ret; } +static inline +int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff) +{ + return vcpuaff->flags == 0 || + ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) && + guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) || + ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) && + guest_handle_is_null(vcpuaff->cpumap_soft.bitmap)); +} + +int vcpu_affinity_domctl(struct domain *d, uint32_t cmd, + struct xen_domctl_vcpuaffinity *vcpuaff) +{ + struct vcpu *v; + const struct sched_unit *unit; + int ret = 0; + + if ( vcpuaff->vcpu >= d->max_vcpus ) + return -EINVAL; + + if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL ) + return -ESRCH; + + if ( vcpuaffinity_params_invalid(vcpuaff) ) + return -EINVAL; + + unit = v->sched_unit; + + if ( cmd == XEN_DOMCTL_setvcpuaffinity ) + { + cpumask_var_t new_affinity, old_affinity; + cpumask_t *online = cpupool_domain_master_cpumask(v->domain); + + /* + * We want to be able to restore hard affinity if we are trying + * setting both and changing soft affinity 
(which happens later, + * when hard affinity has been succesfully chaged already) fails. + */ + if ( !alloc_cpumask_var(&old_affinity) ) + return -ENOMEM; + + cpumask_copy(old_affinity, unit->cpu_hard_affinity); + + if ( !alloc_cpumask_var(&new_affinity) ) + { + free_cpumask_var(old_affinity); + return -ENOMEM; + } + + /* Undo a stuck SCHED_pin_override? */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE ) + vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE); + + ret = 0; + + /* + * We both set a new affinity and report back to the caller what + * the scheduler will be effectively using. + */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + { + ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), + &vcpuaff->cpumap_hard, nr_cpu_ids); + if ( !ret ) + ret = vcpu_set_hard_affinity(v, new_affinity); + if ( ret ) + goto setvcpuaffinity_out; + + /* + * For hard affinity, what we return is the intersection of + * cpupool's online mask and the new hard affinity. + */ + cpumask_and(new_affinity, online, unit->cpu_hard_affinity); + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, new_affinity); + } + if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) + { + ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity), + &vcpuaff->cpumap_soft, nr_cpu_ids); + if ( !ret) + ret = vcpu_set_soft_affinity(v, new_affinity); + if ( ret ) + { + /* + * Since we're returning error, the caller expects nothing + * happened, so we rollback the changes to hard affinity + * (if any). + */ + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + vcpu_set_hard_affinity(v, old_affinity); + goto setvcpuaffinity_out; + } + + /* + * For soft affinity, we return the intersection between the + * new soft affinity, the cpupool's online map and the (new) + * hard affinity. + */ + cpumask_and(new_affinity, new_affinity, online); + cpumask_and(new_affinity, new_affinity, unit->cpu_hard_affinity); + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, new_affinity); + } + + setvcpuaffinity_out: + free_cpumask_var(new_affinity); + free_cpumask_var(old_affinity); + } + else + { + if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD ) + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, + unit->cpu_hard_affinity); + if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT ) + ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, + unit->cpu_soft_affinity); + } + + return ret; +} + +void domain_update_node_affinity(struct domain *d) +{ + cpumask_var_t dom_cpumask, dom_cpumask_soft; + cpumask_t *dom_affinity; + const cpumask_t *online; + struct sched_unit *unit; + unsigned int cpu; + + /* Do we have vcpus already? If not, no need to update node-affinity. */ + if ( !d->vcpu || !d->vcpu[0] ) + return; + + if ( !zalloc_cpumask_var(&dom_cpumask) ) + return; + if ( !zalloc_cpumask_var(&dom_cpumask_soft) ) + { + free_cpumask_var(dom_cpumask); + return; + } + + online = cpupool_domain_master_cpumask(d); + + spin_lock(&d->node_affinity_lock); + + /* + * If d->auto_node_affinity is true, let's compute the domain's + * node-affinity and update d->node_affinity accordingly. if false, + * just leave d->auto_node_affinity alone. + */ + if ( d->auto_node_affinity ) + { + /* + * We want the narrowest possible set of pcpus (to get the narowest + * possible set of nodes). What we need is the cpumask of where the + * domain can run (the union of the hard affinity of all its vcpus), + * and the full mask of where it would prefer to run (the union of + * the soft affinity of all its various vcpus). Let's build them. 
+ */ + for_each_sched_unit ( d, unit ) + { + cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity); + cpumask_or(dom_cpumask_soft, dom_cpumask_soft, + unit->cpu_soft_affinity); + } + /* Filter out non-online cpus */ + cpumask_and(dom_cpumask, dom_cpumask, online); + ASSERT(!cpumask_empty(dom_cpumask)); + /* And compute the intersection between hard, online and soft */ + cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask); + + /* + * If not empty, the intersection of hard, soft and online is the + * narrowest set we want. If empty, we fall back to hard&online. + */ + dom_affinity = cpumask_empty(dom_cpumask_soft) ? + dom_cpumask : dom_cpumask_soft; + + nodes_clear(d->node_affinity); + for_each_cpu ( cpu, dom_affinity ) + node_set(cpu_to_node(cpu), d->node_affinity); + } + + spin_unlock(&d->node_affinity_lock); + + free_cpumask_var(dom_cpumask_soft); + free_cpumask_var(dom_cpumask); +} + typedef long ret_t; #endif /* !COMPAT */ diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h index 769302057b..c931eab4a9 100644 --- a/xen/include/xen/domain.h +++ b/xen/include/xen/domain.h @@ -27,6 +27,9 @@ struct xen_domctl_getdomaininfo; void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info); void arch_get_domain_info(const struct domain *d, struct xen_domctl_getdomaininfo *info); +int xenctl_bitmap_to_bitmap(unsigned long *bitmap, + const struct xenctl_bitmap *xenctl_bitmap, + unsigned int nbits); /* * Arch-specifics. diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 9f7bc69293..2507a833c2 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -50,6 +50,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t); /* A global pointer to the hardware domain (usually DOM0). */ extern struct domain *hardware_domain; +/* A global pointer to the initial cpupool (POOL0). 
*/
+extern struct cpupool *cpupool0;
+
 #ifdef CONFIG_LATE_HWDOM
 extern domid_t hardware_domid;
 #else
@@ -929,6 +932,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
 int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
+int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
+                         struct xen_domctl_vcpuaffinity *vcpuaff);
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
@@ -1054,6 +1059,8 @@ int cpupool_add_domain(struct domain *d, int poolid);
 void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
+int cpupool_get_id(const struct domain *d);
+cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
 void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
--
2.16.4

From - Wed Dec 18 11:05:12 2019
From: Juergen Gross <jgross@suse.com>
To:
CC: Juergen Gross, George Dunlap, Dario Faggioli, Andrew Cooper, Ian Jackson,
    Jan Beulich, Julien Grall, Konrad Rzeszutek Wilk, Stefano Stabellini,
    Wei Liu
Subject: [PATCH 3/9] xen/sched: cleanup sched.h
Date: Wed, 18 Dec 2019 08:48:53 +0100
Message-ID: <20191218074859.21665-4-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>

There are some items in include/xen/sched.h which can be moved to
sched-if.h as they are scheduler private.

Signed-off-by: Juergen Gross
---
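The underlying idea is the usual public/private header split: declarations
that only the scheduler implementation needs live in the private header, so
the rest of the tree cannot grow dependencies on them. A rough, self-contained
C sketch of the pattern (single file, invented names standing in for the two
headers; not the actual Xen code):

#include <stdio.h>

/* "public header" stand-in: what the whole tree may rely on. */
struct domain;                                /* opaque outside the scheduler */
int sched_init_domain(struct domain *d, int poolid);

/* "private header" stand-in: only scheduler code would include this. */
struct scheduler {
    const char *name;
};
struct scheduler *scheduler_get_default(void);

/* Scheduler implementation: the only place the private bits are used. */
struct domain {
    int id;
};

static struct scheduler default_sched = { .name = "demo" };

struct scheduler *scheduler_get_default(void)
{
    return &default_sched;
}

int sched_init_domain(struct domain *d, int poolid)
{
    /* Only scheduler code ever needs scheduler_get_default(). */
    printf("domain %d joins pool %d under '%s'\n",
           d->id, poolid, scheduler_get_default()->name);
    return 0;
}

int main(void)
{
    struct domain d = { .id = 1 };

    return sched_init_domain(&d, 0);
}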
 xen/common/sched/sched-if.h | 13 +++++++++++++
 xen/common/sched/schedule.c |  2 +-
 xen/include/xen/sched.h     | 17 -----------------
 3 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h
index a702fd23b1..edce354dc7 100644
--- a/xen/common/sched/sched-if.h
+++ b/xen/common/sched/sched-if.h
@@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
 struct cpupool
 {
     int              cpupool_id;
+#define CPUPOOLID_NONE -1
     unsigned int     n_dom;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t    res_valid;      /* all scheduling resources of pool */
@@ -618,5 +619,17 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
 
+void schedule_dump(struct cpupool *c);
+struct scheduler *scheduler_get_default(void);
+struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+void scheduler_free(struct scheduler *sched);
+int cpu_disable_scheduler(unsigned int cpu);
+int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+int schedule_cpu_rm(unsigned int cpu);
+int sched_move_domain(struct domain *d, struct cpupool *c);
+struct cpupool *cpupool_get_by_id(int poolid);
+void cpupool_put(struct cpupool *pool);
+int cpupool_add_domain(struct domain *d, int poolid);
+void cpupool_rm_domain(struct domain *d);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index c751faa741..db8ce146ca 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -1346,7 +1346,7 @@ int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity)
     return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_hard_affinity);
 }
 
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
+static int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
 {
     return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_soft_affinity);
 }
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2507a833c2..55335d6ab3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -685,7 +685,6 @@ int sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
 int sched_init_domain(struct domain *d, int poolid);
 void sched_destroy_domain(struct domain *d);
-int sched_move_domain(struct domain *d, struct cpupool *c);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int sched_id(void);
@@ -918,19 +917,10 @@ static inline bool sched_has_urgent_vcpu(void)
     return atomic_read(&this_cpu(sched_urgent_count));
 }
 
-struct scheduler;
-
-struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
-void scheduler_free(struct scheduler *sched);
-int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-int schedule_cpu_rm(unsigned int cpu);
 void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
-int cpu_disable_scheduler(unsigned int cpu);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
@@ -1051,17 +1041,10 @@ extern enum cpufreq_controller {
     FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
 } cpufreq_controller;
 
-#define CPUPOOLID_NONE -1
-
-struct cpupool *cpupool_get_by_id(int poolid);
-void cpupool_put(struct cpupool *pool);
-int cpupool_add_domain(struct domain *d, int poolid);
-void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
 cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
-void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4

From - Wed Dec 18 11:05:12 2019
From: Juergen Gross
To:
CC: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH 4/9] xen/sched: remove special cases for free cpus in schedulers
Date: Wed, 18 Dec 2019 08:48:54 +0100
Message-ID: <20191218074859.21665-5-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>

With the idle scheduler now taking care of all cpus not in any cpupool,
the special cases in the other schedulers for cpus without an associated
cpupool can be removed.

Signed-off-by: Juergen Gross
---
 xen/common/sched/sched_credit.c  |  7 ++-----
 xen/common/sched/sched_credit2.c | 30 ------------------------------
 2 files changed, 2 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c
index a098ca0f3a..8b1de9b033 100644
--- a/xen/common/sched/sched_credit.c
+++ b/xen/common/sched/sched_credit.c
@@ -1690,11 +1690,8 @@ csched_load_balance(struct csched_private *prv, int cpu,
 
     BUG_ON(get_sched_res(cpu) != snext->unit->res);
 
-    /*
-     * If this CPU is going offline, or is not (yet) part of any cpupool
-     * (as it happens, e.g., during cpu bringup), we shouldn't steal work.
-     */
-    if ( unlikely(!cpumask_test_cpu(cpu, online) || c == NULL) )
+    /* If this CPU is going offline, we shouldn't steal work. */
+    if ( unlikely(!cpumask_test_cpu(cpu, online)) )
         goto out;
 
     if ( snext->pri == CSCHED_PRI_IDLE )
diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c
index 5bfe1441a2..f9e521a3a8 100644
--- a/xen/common/sched/sched_credit2.c
+++ b/xen/common/sched/sched_credit2.c
@@ -2744,40 +2744,10 @@ static void
 csched2_unit_migrate(
     const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu)
 {
-    struct domain *d = unit->domain;
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();
 
-    /*
-     * Being passed a target pCPU which is outside of our cpupool is only
-     * valid if we are shutting down (or doing ACPI suspend), and we are
-     * moving everyone to BSP, no matter whether or not BSP is inside our
-     * cpupool.
-     *
-     * And since there indeed is the chance that it is not part of it, all
-     * we must do is remove _and_ unassign the unit from any runqueue, as
-     * well as updating v->processor with the target, so that the suspend
-     * process can continue.
-     *
-     * It will then be during resume that a new, meaningful, value for
-     * v->processor will be chosen, and during actual domain unpause that
-     * the unit will be assigned to and added to the proper runqueue.
-     */
-    if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_master_cpumask(d))) )
-    {
-        ASSERT(system_state == SYS_STATE_suspend);
-        if ( unit_on_runq(svc) )
-        {
-            runq_remove(svc);
-            update_load(ops, svc->rqd, NULL, -1, now);
-        }
-        _runq_deassign(svc);
-        sched_set_res(unit, get_sched_res(new_cpu));
-        return;
-    }
-
-    /* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity. */
     ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized));
     ASSERT(cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity));
 
-- 
2.16.4

From - Wed Dec 18 11:05:12 2019
From: Juergen Gross
To:
CC: Juergen Gross, Dario Faggioli, Meng Xu, George Dunlap
Subject: [PATCH 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack
Date: Wed, 18 Dec 2019 08:48:55 +0100
Message-ID: <20191218074859.21665-6-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>

In sched_rt there are three instances of cpumasks allocated on the stack.
Replace them by using cpumask_scratch.

Signed-off-by: Juergen Gross
---
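A cpumask_t sized for a large NR_CPUS quickly becomes too big to keep putting
on the stack, which is what motivates the change; as the new _locked variant
and the pcpu_schedule_lock_irq() call in rt_unit_insert() show, the scratch
mask of a cpu is meant to be used only while that cpu's scheduler lock is
held. A rough, self-contained C sketch of the pattern (pthreads and invented
names, not the Xen implementation):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NR_CPUS 1024   /* a mask covering this many cpus is already 128 bytes */

typedef struct { uint64_t bits[NR_CPUS / 64]; } cpumask_t;

/* One scratch mask per cpu, usable only while holding that cpu's lock. */
static cpumask_t scratch_mask[NR_CPUS];
static pthread_mutex_t cpu_lock[NR_CPUS];

static cpumask_t *cpumask_scratch_cpu(unsigned int cpu)
{
    return &scratch_mask[cpu];
}

static void cpumask_and(cpumask_t *dst, const cpumask_t *a, const cpumask_t *b)
{
    for ( unsigned int i = 0; i < NR_CPUS / 64; i++ )
        dst->bits[i] = a->bits[i] & b->bits[i];
}

/*
 * Before: "cpumask_t tmp;" on the stack of every caller.
 * After: the caller names the cpu whose lock it holds and borrows that
 * cpu's scratch mask instead.
 */
static unsigned int count_common(const cpumask_t *a, const cpumask_t *b,
                                 unsigned int locked_cpu)
{
    cpumask_t *tmp = cpumask_scratch_cpu(locked_cpu);
    unsigned int n = 0;

    cpumask_and(tmp, a, b);
    for ( unsigned int i = 0; i < NR_CPUS / 64; i++ )
        n += __builtin_popcountll(tmp->bits[i]);

    return n;
}

int main(void)
{
    cpumask_t a, b;    /* stack masks are fine here in a userspace demo */
    unsigned int cpu = 0;

    for ( unsigned int i = 0; i < NR_CPUS; i++ )
        pthread_mutex_init(&cpu_lock[i], NULL);

    memset(&a, 0xff, sizeof(a));
    memset(&b, 0x0f, sizeof(b));

    pthread_mutex_lock(&cpu_lock[cpu]);   /* the scratch mask is ours now */
    printf("%u common cpus\n", count_common(&a, &b, cpu));
    pthread_mutex_unlock(&cpu_lock[cpu]);

    return 0;
}

That is also why rt_res_pick() below gains a _locked helper taking the cpu
whose lock is already held, instead of each call site allocating its own mask.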
 xen/common/sched/sched_rt.c | 56 ++++++++++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 379b56bc2a..264a753116 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
  * and available resources
  */
 static struct sched_resource *
-rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
-    cpumask_t cpus;
+    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
     cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
-    cpumask_and(&cpus, online, unit->cpu_hard_affinity);
+    cpumask_and(cpus, online, unit->cpu_hard_affinity);
 
-    cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
+    cpu = cpumask_test_cpu(sched_unit_master(unit), cpus)
             ? sched_unit_master(unit)
-            : cpumask_cycle(sched_unit_master(unit), &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+            : cpumask_cycle(sched_unit_master(unit), cpus);
+    ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) );
 
     return get_sched_res(cpu);
 }
 
+/*
+ * Pick a valid resource for the unit vc
+ * Valid resource of an unit is intesection of unit's affinity
+ * and available resources
+ */
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+{
+    struct sched_resource *res;
+
+    res = rt_res_pick_locked(unit, unit->res->master_cpu);
+
+    return res;
+}
+
 /*
  * Init/Free related code
  */
@@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_unit *svc = rt_unit(unit);
     s_time_t now;
     spinlock_t *lock;
+    unsigned int cpu = smp_processor_id();
 
     BUG_ON( is_idle_unit(unit) );
 
     /* This is safe because unit isn't yet being scheduled */
-    sched_set_res(unit, rt_res_pick(ops, unit));
+    lock = pcpu_schedule_lock_irq(cpu);
+    sched_set_res(unit, rt_res_pick_locked(unit, cpu));
+    pcpu_schedule_unlock_irq(lock, cpu);
 
     lock = unit_schedule_lock_irq(unit);
 
@@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_unit *
-runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
-    cpumask_t cpu_common;
+    cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
     cpumask_t *online;
 
     list_for_each ( iter, runq )
@@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
-        cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
-        cpumask_and(&cpu_common, mask, &cpu_common);
-        if ( cpumask_empty(&cpu_common) )
+        cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity);
+        cpumask_and(cpu_common, mask, cpu_common);
+        if ( cpumask_empty(cpu_common) )
             continue;
 
         ASSERT( iter_svc->cur_budget > 0 );
@@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
     }
     else
     {
-        snext = runq_pick(ops, cpumask_of(sched_cpu));
+        snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu);
 
         if ( snext == NULL )
             snext = rt_unit(sched_idle_unit(sched_cpu));
@@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
     struct rt_unit *iter_svc;
     struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
-    cpumask_t not_tickled;
+    cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
    cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
 
     online = cpupool_domain_master_cpumask(new->unit->domain);
-    cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
-    cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
+    cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity);
+    cpumask_andnot(not_tickled, not_tickled, &prv->tickled);
 
     /*
      * 1) If there are any idle CPUs, kick one.
      *    For cache benefit,we first search new->cpu.
      *    The same loop also find the one with lowest priority.
      */
-    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled);
+    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickled);
     while ( cpu!= nr_cpu_ids )
     {
         iter_unit = curr_on_cpu(cpu);
@@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
              compare_unit_priority(iter_svc, latest_deadline_unit) < 0 )
             latest_deadline_unit = iter_svc;
 
-        cpumask_clear_cpu(cpu, &not_tickled);
-        cpu = cpumask_cycle(cpu, &not_tickled);
+        cpumask_clear_cpu(cpu, not_tickled);
+        cpu = cpumask_cycle(cpu, not_tickled);
     }
 
     /* 2) candicate has higher priority, kick out lowest priority unit */
-- 
2.16.4

From - Wed Dec 18 11:05:12 2019
From: Juergen Gross
To:
CC: Juergen Gross, George Dunlap, Dario Faggioli
Subject: [PATCH 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook
Date: Wed, 18 Dec 2019 08:48:56 +0100
Message-ID: <20191218074859.21665-7-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>

Instead of keeping its own per-cpu variable for private per-cpu data, the
null scheduler should use the generic scheduler interface meant for that
purpose.

Signed-off-by: Juergen Gross
---
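The generic interface meant here is the alloc_pdata/free_pdata pair in the
scheduler ops table: the core allocates the per-cpu private data when a cpu
is handed to the scheduler, keeps the pointer in the scheduling resource
(what get_sched_res(cpu)->sched_priv refers to below), and frees it again on
removal. A rough, self-contained C sketch of that ownership model (invented
names, heavily simplified, not the Xen code):

#include <stdio.h>
#include <stdlib.h>

/* Per-cpu data owned by one particular scheduler. */
struct demo_pcpu {
    int unit_id;                       /* -1: no unit assigned */
};

/* Cut-down ops table: the core calls these when a cpu joins or leaves. */
struct demo_sched_ops {
    void *(*alloc_pdata)(unsigned int cpu);
    void  (*free_pdata)(void *pdata, unsigned int cpu);
};

static void *demo_alloc_pdata(unsigned int cpu)
{
    struct demo_pcpu *npc = calloc(1, sizeof(*npc));

    (void)cpu;
    if ( npc == NULL )
        return NULL;                   /* the real hook returns ERR_PTR(-ENOMEM) */

    npc->unit_id = -1;
    return npc;
}

static void demo_free_pdata(void *pdata, unsigned int cpu)
{
    (void)cpu;
    free(pdata);
}

static const struct demo_sched_ops demo_ops = {
    .alloc_pdata = demo_alloc_pdata,
    .free_pdata  = demo_free_pdata,
};

#define NR_CPUS 4
static void *sched_priv[NR_CPUS];      /* models get_sched_res(cpu)->sched_priv */

int main(void)
{
    /* Core side: cpus join the scheduler, the core keeps the pointers. */
    for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
        sched_priv[cpu] = demo_ops.alloc_pdata(cpu);

    /* Scheduler side: look the data up via the core, not via a percpu var. */
    struct demo_pcpu *npc = sched_priv[2];

    npc->unit_id = 7;
    printf("cpu2 runs unit %d\n", npc->unit_id);

    /* Core side: cpus leave again, the core frees the private data. */
    for ( unsigned int cpu = 0; cpu < NR_CPUS; cpu++ )
        demo_ops.free_pdata(sched_priv[cpu], cpu);

    return 0;
}

Compared with the removed DEFINE_PER_CPU variable, the data then lives only
as long as the cpu is actually assigned to this scheduler.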
 xen/common/sched/sched_null.c | 89 +++++++++++++++++++++++++++++--------------
 1 file changed, 60 insertions(+), 29 deletions(-)

diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c
index 5a23a7e7dc..11aab25743 100644
--- a/xen/common/sched/sched_null.c
+++ b/xen/common/sched/sched_null.c
@@ -89,7 +89,6 @@ struct null_private {
 struct null_pcpu {
     struct sched_unit *unit;
 };
-DEFINE_PER_CPU(struct null_pcpu, npc);
 
 /*
  * Schedule unit
@@ -159,32 +158,48 @@ static void null_deinit(struct scheduler *ops)
     ops->sched_data = NULL;
 }
 
-static void init_pdata(struct null_private *prv, unsigned int cpu)
+static void init_pdata(struct null_private *prv, struct null_pcpu *npc,
+                       unsigned int cpu)
 {
     /* Mark the pCPU as free, and with no unit assigned */
     cpumask_set_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
 }
 
 static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     struct null_private *prv = null_priv(ops);
 
-    /* alloc_pdata is not implemented, so we want this to be NULL. */
-    ASSERT(!pdata);
+    ASSERT(pdata);
 
-    init_pdata(prv, cpu);
+    init_pdata(prv, pdata, cpu);
 }
 
 static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
     struct null_private *prv = null_priv(ops);
+    struct null_pcpu *npc = pcpu;
 
-    /* alloc_pdata not implemented, so this must have stayed NULL */
-    ASSERT(!pcpu);
+    ASSERT(npc);
 
     cpumask_clear_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
+}
+
+static void *null_alloc_pdata(const struct scheduler *ops, int cpu)
+{
+    struct null_pcpu *npc;
+
+    npc = xzalloc(struct null_pcpu);
+    if ( npc == NULL )
+        return ERR_PTR(-ENOMEM);
+
+    return npc;
+}
+
+static void null_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
+{
+    xfree(pcpu);
 }
 
 static void *null_alloc_udata(const struct scheduler *ops,
@@ -268,6 +283,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
     cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -286,8 +302,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
      * don't, so we get to keep in the scratch cpumask what we have just
      * put in it.)
      */
-    if ( likely((per_cpu(npc, cpu).unit == NULL ||
-                 per_cpu(npc, cpu).unit == unit)
+    if ( likely((npc->unit == NULL || npc->unit == unit)
                 && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
     {
         new_cpu = cpu;
@@ -336,9 +351,11 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
 static void unit_assign(struct null_private *prv, struct sched_unit *unit,
                         unsigned int cpu)
 {
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+
     ASSERT(is_unit_online(unit));
 
-    per_cpu(npc, cpu).unit = unit;
+    npc->unit = unit;
     sched_set_res(unit, get_sched_res(cpu));
 
     cpumask_clear_cpu(cpu, &prv->cpus_free);
@@ -363,12 +380,13 @@ static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
     struct null_unit *wvc;
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(list_empty(&null_unit(unit)->waitq_elem));
-    ASSERT(per_cpu(npc, cpu).unit == unit);
+    ASSERT(npc->unit == unit);
     ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free));
 
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
     dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
@@ -436,7 +454,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
      */
     ASSERT(!local_irq_is_enabled());
 
-    init_pdata(prv, cpu);
+    init_pdata(prv, pdata, cpu);
 
     return &sr->_lock;
 }
@@ -446,6 +464,7 @@ static void null_unit_insert(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
     unsigned int cpu;
     spinlock_t *lock;
 
@@ -462,6 +481,7 @@ static void null_unit_insert(const struct scheduler *ops,
  retry:
     sched_set_res(unit, pick_res(prv, unit));
     cpu = sched_unit_master(unit);
+    npc = get_sched_res(cpu)->sched_priv;
 
     spin_unlock(lock);
 
@@ -471,7 +491,7 @@ static void null_unit_insert(const struct scheduler *ops,
                 cpupool_domain_master_cpumask(unit->domain));
 
     /* If the pCPU is free, we assign unit to it */
-    if ( likely(per_cpu(npc, cpu).unit == NULL) )
+    if ( likely(npc->unit == NULL) )
     {
         /*
          * Insert is followed by vcpu_wake(), so there's no need to poke
@@ -519,7 +539,10 @@ static void null_unit_remove(const struct scheduler *ops,
     /* If offline, the unit shouldn't be assigned, nor in the waitqueue */
     if ( unlikely(!is_unit_online(unit)) )
     {
-        ASSERT(per_cpu(npc, sched_unit_master(unit)).unit != unit);
+        struct null_pcpu *npc;
+
+        npc = unit->res->sched_priv;
+        ASSERT(npc->unit != unit);
         ASSERT(list_empty(&nvc->waitq_elem));
         goto out;
     }
@@ -548,6 +571,7 @@ static void null_unit_wake(const struct scheduler *ops,
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
     unsigned int cpu = sched_unit_master(unit);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(!is_idle_unit(unit));
 
@@ -569,7 +593,7 @@ static void null_unit_wake(const struct scheduler *ops,
     else
         SCHED_STAT_CRANK(unit_wake_not_runnable);
 
-    if ( likely(per_cpu(npc, cpu).unit == unit) )
+    if ( likely(npc->unit == unit) )
     {
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         return;
@@ -581,7 +605,7 @@ static void null_unit_wake(const struct scheduler *ops,
      * and its previous resource is free (and affinities match), we can just
      * assign the unit to it (we own the proper lock already) and be done.
      */
-    if ( per_cpu(npc, cpu).unit == NULL &&
+    if ( npc->unit == NULL &&
          unit_check_affinity(unit, cpu, BALANCE_HARD_AFFINITY) )
     {
         if ( !has_soft_affinity(unit) ||
@@ -622,6 +646,7 @@ static void null_unit_sleep(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     unsigned int cpu = sched_unit_master(unit);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     bool tickled = false;
 
     ASSERT(!is_idle_unit(unit));
@@ -640,7 +665,7 @@ static void null_unit_sleep(const struct scheduler *ops,
             list_del_init(&nvc->waitq_elem);
             spin_unlock(&prv->waitq_lock);
         }
-        else if ( per_cpu(npc, cpu).unit == unit )
+        else if ( npc->unit == unit )
             tickled = unit_deassign(prv, unit);
     }
 
@@ -663,6 +688,7 @@ static void null_unit_migrate(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
 
     ASSERT(!is_idle_unit(unit));
 
@@ -686,7 +712,8 @@ static void null_unit_migrate(const struct scheduler *ops,
      * If unit is assigned to a pCPU, then such pCPU becomes free, and we
      * should look in the waitqueue if anyone else can be assigned to it.
      */
-    if ( likely(per_cpu(npc, sched_unit_master(unit)).unit == unit) )
+    npc = unit->res->sched_priv;
+    if ( likely(npc->unit == unit) )
     {
         unit_deassign(prv, unit);
         SCHED_STAT_CRANK(migrate_running);
@@ -720,7 +747,8 @@ static void null_unit_migrate(const struct scheduler *ops,
      *
      * In latter, all we can do is to park unit in the waitqueue.
      */
-    if ( per_cpu(npc, new_cpu).unit == NULL &&
+    npc = get_sched_res(new_cpu)->sched_priv;
+    if ( npc->unit == NULL &&
          unit_check_affinity(unit, new_cpu, BALANCE_HARD_AFFINITY) )
     {
         /* unit might have been in the waitqueue, so remove it */
@@ -788,6 +816,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
     unsigned int bs;
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
+    struct null_pcpu *npc = get_sched_res(sched_cpu)->sched_priv;
     struct null_private *prv = null_priv(ops);
     struct null_unit *wvc;
 
@@ -802,14 +831,14 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         } d;
         d.cpu = cur_cpu;
         d.tasklet = tasklet_work_scheduled;
-        if ( per_cpu(npc, sched_cpu).unit == NULL )
+        if ( npc->unit == NULL )
        {
             d.unit = d.dom = -1;
         }
         else
         {
-            d.unit = per_cpu(npc, sched_cpu).unit->unit_id;
-            d.dom = per_cpu(npc, sched_cpu).unit->domain->domain_id;
+            d.unit = npc->unit->unit_id;
+            d.dom = npc->unit->domain->domain_id;
         }
         __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d);
     }
@@ -820,7 +849,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         prev->next_task = sched_idle_unit(sched_cpu);
     }
     else
-        prev->next_task = per_cpu(npc, sched_cpu).unit;
+        prev->next_task = npc->unit;
     prev->next_time = -1;
 
     /*
@@ -921,6 +950,7 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
@@ -930,9 +960,8 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
     printk("CPU[%02d] sibling={%*pbl}, core={%*pbl}",
            cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)),
            CPUMASK_PR(per_cpu(cpu_core_mask, cpu)));
-    if ( per_cpu(npc, cpu).unit != NULL )
-        printk(", unit=%pdv%d", per_cpu(npc, cpu).unit->domain,
-               per_cpu(npc, cpu).unit->unit_id);
+    if ( npc->unit != NULL )
+        printk(", unit=%pdv%d", npc->unit->domain, npc->unit->unit_id);
     printk("\n");
 
     /* current unit (nothing to say if that's the idle unit) */
@@ -1010,6 +1039,8 @@ static const struct scheduler sched_null_def = {
 
     .init           = null_init,
     .deinit         = null_deinit,
+    .alloc_pdata    = null_alloc_pdata,
+    .free_pdata     = null_free_pdata,
     .init_pdata     = null_init_pdata,
     .switch_sched   = null_switch_sched,
     .deinit_pdata   = null_deinit_pdata,
-- 
2.16.4

From - Wed Dec 18 11:05:12 2019
=?us-ascii?q?tyMsBM8q4xjBJVN4jDkNTp27Vkj/kzTVBYSVjmncLvbssPRgP1fBbOAELBbe?= =?us-ascii?q?7UHzDAzsDpbK/5coV+170L5Tu3vzvTU0L4N2nFlzKyDE/3dL8cyiCDPBlO/o?= =?us-ascii?q?q6d0QlD2+rV9/gZhChVb0/xTQr3b05gG/LPm8AIHB9dU1KtLiZ8SJfhL12BW?= =?us-ascii?q?VA6nNvKeTMlTye6qHULZMfsP0jBSoR9aoS+HMh175c9z1JXtRwiHGUtcN1rh?= =?us-ascii?q?eqn6jHyzZqVgZPticehI+PuhYHW+2R/Z1BVHDYuRMVuDzBVlJT/IYjUIe3/f?= =?us-ascii?q?kNmZDVman+KSlP6YfZ5sJBQcjfc5nYaD9/YVzoAD7RHE0OSjv4UAOXz0Fbjv?= =?us-ascii?q?yW8WWY65YgrZ250pgBULhAT3QuC+gXTE9iGZZRRfU/FiNhirOdgMMSsDCmqw?= =?us-ascii?q?LNQcxBop3dfveCW7P0NS2Uy7VJLUhto/uwPcEYMYv13FZnY19xkdHRGkbeat?= =?us-ascii?q?tKpzVocg4+pEgUoSpOC1Yr0kegUTuDpXoaFPq6hBkz01Msevk2+XHn5FJlfw?= =?us-ascii?q?OW9ht1q1E4nJDeuR7UaCT4df/iRpxNBmz/sE1jasqmETYwVhW7mAlfDBmBR7?= =?us-ascii?q?9ViOI7J2V70knHpIBCX/JRH/VJ?= X-IronPort-Anti-Spam-Filtered: true X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0HoAQCL2fldhg/ch8NlHQEBAQkBEQU?= =?us-ascii?q?FAYF+ghuBRiMECyqTL4MSmi4JBAEBCy8BAQGEPwKCGhwHAQQ0EwIDAQwBAQE?= =?us-ascii?q?DAQEBAgECAwICAQECEAEBAQoJCwgphUqCOykBg08CAQMSFVIQPxJXGSKDAIJ?= =?us-ascii?q?8oS09AiMBTIEEin4ziQ6BSIE2hz+EWRqBQT+BEYJec4QjhhQEj3qfD4I+lgY?= =?us-ascii?q?MG45Ri32pRIFpgXszGggbFTuCbFARFI0eDgmOJEAzjGWCQAEB?= X-IPAS-Result: =?us-ascii?q?A0HoAQCL2fldhg/ch8NlHQEBAQkBEQUFAYF+ghuBRiMEC?= =?us-ascii?q?yqTL4MSmi4JBAEBCy8BAQGEPwKCGhwHAQQ0EwIDAQwBAQEDAQEBAgECAwICA?= =?us-ascii?q?QECEAEBAQoJCwgphUqCOykBg08CAQMSFVIQPxJXGSKDAIJ8oS09AiMBTIEEi?= =?us-ascii?q?n4ziQ6BSIE2hz+EWRqBQT+BEYJec4QjhhQEj3qfD4I+lgYMG45Ri32pRIFpg?= =?us-ascii?q?XszGggbFTuCbFARFI0eDgmOJEAzjGWCQAEB?= X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; d="scan'208";a="10226265" X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown X-MGA-submission: =?us-ascii?q?MDGv0IRFckNIwcZFQml/6Va198tKPillF4KNc/?= =?us-ascii?q?vvWuP1cI6erK6QrdT6ItodiAFqwgg7qSD35zeEYaHzpaYJVoRwzorrNQ?= =?us-ascii?q?ZJ23LHFx6ByiZj+CKOfHnIWjLlTRd8DDxBpL/ssP4O+Fzgg7O1bez90i?= =?us-ascii?q?Y79xO0Lgw3g5+eAF0naJOtSw=3D=3D?= Received: from mx2.suse.de ([195.135.220.15]) by esa5.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:07 -0500 X-Virus-Scanned: by amavisd-new at test-mx.suse.de Received: from relay2.suse.de (unknown [195.135.220.254]) by mx2.suse.de (Postfix) with ESMTP id 5BFCDAE19; Wed, 18 Dec 2019 07:49:04 +0000 (UTC) From: Juergen Gross To: CC: Juergen Gross , Dario Faggioli , George Dunlap , Andrew Cooper , Ian Jackson , "Jan Beulich" , Julien Grall , "Konrad Rzeszutek Wilk" , Stefano Stabellini , Wei Liu , Josh Whitehead , Stewart Hildebrand , Meng Xu Subject: [PATCH 7/9] xen/sched: switch scheduling to bool where appropriate Date: Wed, 18 Dec 2019 08:48:57 +0100 Message-ID: <20191218074859.21665-8-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20191218074859.21665-1-jgross@suse.com> References: <20191218074859.21665-1-jgross@suse.com> Return-Path: jgross@suse.com Content-Type: text/plain X-MS-Exchange-Organization-Network-Message-Id: 13543f40-50d9-4061-bd5a-08d7838ec45b X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0 X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net X-MS-Exchange-Organization-AuthAs: Anonymous MIME-Version: 1.0 Scheduling code has several places using int or bool_t instead of bool. Switch those. 
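[Editorial note, not part of the original mail: the stand-alone C sketch below only illustrates the kind of conversion this patch applies across the scheduler -- integer-valued bool_t/int flags and 0/1 return codes become C99 bool from <stdbool.h>. All names in the sketch are made up for illustration; it is not Xen code.]

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: a flag that would previously have been
 * "bool_t awake;" (or plain "int awake;"). */
struct unit {
    bool awake;
};

/* Previously returned int 0/1; returning bool states the intent directly. */
static bool unit_is_runnable(const struct unit *u)
{
    return u->awake;
}

int main(void)
{
    struct unit u = { .awake = true };   /* previously: u.awake = 1; */

    printf("runnable: %d\n", unit_is_runnable(&u));
    return 0;
}

The conversion is purely mechanical and does not change behaviour; it only makes the true/false nature of these variables explicit to readers and to the compiler.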
Signed-off-by: Juergen Gross --- xen/common/sched/cpupool.c | 10 +++++----- xen/common/sched/sched-if.h | 2 +- xen/common/sched/sched_arinc653.c | 8 ++++---- xen/common/sched/sched_credit.c | 12 ++++++------ xen/common/sched/sched_rt.c | 14 +++++++------- xen/common/sched/schedule.c | 14 +++++++------- xen/include/xen/sched.h | 6 +++--- 7 files changed, 33 insertions(+), 33 deletions(-) diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index d5b64d0a6a..14212bb4ae 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void) * the searched id is returned * returns NULL if not found. */ -static struct cpupool *__cpupool_find_by_id(int id, int exact) +static struct cpupool *__cpupool_find_by_id(int id, bool exact) { struct cpupool **q; @@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, int exact) static struct cpupool *cpupool_find_by_id(int poolid) { - return __cpupool_find_by_id(poolid, 1); + return __cpupool_find_by_id(poolid, true); } -static struct cpupool *__cpupool_get_by_id(int poolid, int exact) +static struct cpupool *__cpupool_get_by_id(int poolid, bool exact) { struct cpupool *c; spin_lock(&cpupool_lock); @@ -185,12 +185,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, int exact) struct cpupool *cpupool_get_by_id(int poolid) { - return __cpupool_get_by_id(poolid, 1); + return __cpupool_get_by_id(poolid, true); } static struct cpupool *cpupool_get_next_by_id(int poolid) { - return __cpupool_get_by_id(poolid, 0); + return __cpupool_get_by_id(poolid, false); } void cpupool_put(struct cpupool *pool) diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h index edce354dc7..9d0db75cbb 100644 --- a/xen/common/sched/sched-if.h +++ b/xen/common/sched/sched-if.h @@ -589,7 +589,7 @@ unsigned int cpupool_get_granularity(const struct cpupool *c); * * The hard affinity is not a subset of soft affinity * * There is an overlap between the soft and hard affinity masks */ -static inline int has_soft_affinity(const struct sched_unit *unit) +static inline bool has_soft_affinity(const struct sched_unit *unit) { return unit->soft_aff_effective && !cpumask_subset(cpupool_domain_master_cpumask(unit->domain), diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c index fe15754900..dc45378952 100644 --- a/xen/common/sched/sched_arinc653.c +++ b/xen/common/sched/sched_arinc653.c @@ -75,7 +75,7 @@ typedef struct arinc653_unit_s * arinc653_unit_t pointer. */ struct sched_unit * unit; /* awake holds whether the UNIT has been woken with vcpu_wake() */ - bool_t awake; + bool awake; /* list holds the linked list information for the list this UNIT * is stored in */ struct list_head list; @@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit, * will mark the UNIT awake. 
*/ svc->unit = unit; - svc->awake = 0; + svc->awake = false; if ( !is_idle_unit(unit) ) list_add(&svc->list, &SCHED_PRIV(ops)->unit_list); update_schedule_units(ops); @@ -473,7 +473,7 @@ static void a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit) { if ( AUNIT(unit) != NULL ) - AUNIT(unit)->awake = 0; + AUNIT(unit)->awake = false; /* * If the UNIT being put to sleep is the same one that is currently @@ -493,7 +493,7 @@ static void a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { if ( AUNIT(unit) != NULL ) - AUNIT(unit)->awake = 1; + AUNIT(unit)->awake = true; cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ); } diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index 8b1de9b033..05930261d9 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -245,7 +245,7 @@ __runq_elem(struct list_head *elem) } /* Is the first element of cpu's runq (if any) cpu's idle unit? */ -static inline bool_t is_runq_idle(unsigned int cpu) +static inline bool is_runq_idle(unsigned int cpu) { /* * We're peeking at cpu's runq, we must hold the proper lock. @@ -344,7 +344,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now) svc->start_time += (credits * MILLISECS(1)) / CSCHED_CREDITS_PER_MSEC; } -static bool_t __read_mostly opt_tickle_one_idle = 1; +static bool __read_mostly opt_tickle_one_idle = true; boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle); DEFINE_PER_CPU(unsigned int, last_tickle_cpu); @@ -719,7 +719,7 @@ __csched_unit_is_migrateable(const struct csched_private *prv, static int _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit, - bool_t commit) + bool commit) { int cpu = sched_unit_master(unit); /* We must always use cpu's scratch space */ @@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit) * get boosted, which we don't deserve as we are "only" migrating. */ set_bit(CSCHED_FLAG_UNIT_MIGRATING, &svc->flags); - return get_sched_res(_csched_cpu_pick(ops, unit, 1)); + return get_sched_res(_csched_cpu_pick(ops, unit, true)); } static inline void @@ -975,7 +975,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu) * migrating it to run elsewhere (see multi-core and multi-thread * support in csched_res_pick()). 
*/ - new_cpu = _csched_cpu_pick(ops, currunit, 0); + new_cpu = _csched_cpu_pick(ops, currunit, false); unit_schedule_unlock_irqrestore(lock, flags, currunit); @@ -1108,7 +1108,7 @@ static void csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { struct csched_unit * const svc = CSCHED_UNIT(unit); - bool_t migrating; + bool migrating; BUG_ON( is_idle_unit(unit) ); diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c index 264a753116..8646d77343 100644 --- a/xen/common/sched/sched_rt.c +++ b/xen/common/sched/sched_rt.c @@ -490,10 +490,10 @@ rt_update_deadline(s_time_t now, struct rt_unit *svc) static inline bool deadline_queue_remove(struct list_head *queue, struct list_head *elem) { - int pos = 0; + bool pos = false; if ( queue->next != elem ) - pos = 1; + pos = true; list_del_init(elem); return !pos; @@ -505,14 +505,14 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *), struct list_head *queue) { struct list_head *iter; - int pos = 0; + bool pos = false; list_for_each ( iter, queue ) { struct rt_unit * iter_svc = (*qelem)(iter); if ( compare_unit_priority(svc, iter_svc) > 0 ) break; - pos++; + pos = true; } list_add_tail(elem, iter); return !pos; @@ -605,7 +605,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc) { struct list_head *replq = rt_replq(ops); struct rt_unit *rearm_svc = svc; - bool_t rearm = 0; + bool rearm = false; ASSERT( unit_on_replq(svc) ); @@ -622,7 +622,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc) { deadline_replq_insert(svc, &svc->replq_elem, replq); rearm_svc = replq_elem(replq->next); - rearm = 1; + rearm = true; } else rearm = deadline_replq_insert(svc, &svc->replq_elem, replq); @@ -1279,7 +1279,7 @@ rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit) { struct rt_unit * const svc = rt_unit(unit); s_time_t now; - bool_t missed; + bool missed; BUG_ON( is_idle_unit(unit) ); diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c index db8ce146ca..3307e88b6c 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -53,7 +53,7 @@ string_param("sched", opt_sched); * scheduler will give preferrence to partially idle package compared to * the full idle package, when picking pCPU to schedule vCPU. */ -bool_t sched_smt_power_savings = 0; +bool sched_smt_power_savings; boolean_param("sched_smt_power_savings", sched_smt_power_savings); /* Default scheduling rate limit: 1ms @@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v) { get_sched_res(v->processor)->curr = unit; get_sched_res(v->processor)->sched_unit_idle = unit; - v->is_running = 1; + v->is_running = true; unit->is_running = true; unit->state_entry_time = NOW(); } @@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit) unsigned long flags; unsigned int old_cpu, new_cpu; spinlock_t *old_lock, *new_lock; - bool_t pick_called = 0; + bool pick_called = false; struct vcpu *v; /* @@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit) if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) && cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) ) break; - pick_called = 1; + pick_called = true; } else { @@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit) * We do not hold the scheduler lock appropriate for this vCPU. * Thus we cannot select a new CPU on this iteration. Try again. 
*/ - pick_called = 0; + pick_called = false; } sched_spin_unlock_double(old_lock, new_lock, flags); @@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource *sr, vcpu_runstate_change(vnext, vnext->new_state, now); } - vnext->is_running = 1; + vnext->is_running = true; if ( is_idle_vcpu(vnext) ) vnext->sched_unit = next; @@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext) smp_wmb(); if ( vprev != vnext ) - vprev->is_running = 0; + vprev->is_running = false; } static void unit_context_saved(struct sched_resource *sr) diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 55335d6ab3..b2f48a3512 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -557,18 +557,18 @@ static inline bool is_system_domain(const struct domain *d) * Use this when you don't have an existing reference to @d. It returns * FALSE if @d is being destroyed. */ -static always_inline int get_domain(struct domain *d) +static always_inline bool get_domain(struct domain *d) { int old, seen = atomic_read(&d->refcnt); do { old = seen; if ( unlikely(old & DOMAIN_DESTROYED) ) - return 0; + return false; seen = atomic_cmpxchg(&d->refcnt, old, old + 1); } while ( unlikely(seen != old) ); - return 1; + return true; } /*
-- 2.16.4

From - Wed Dec 18 11:05:12 2019
From: Juergen Gross
To:
CC: Juergen Gross , Stefano Stabellini , Julien Grall , Volodymyr Babchuk , Andrew Cooper , George Dunlap , Ian Jackson , Jan Beulich , "Konrad Rzeszutek Wilk" , Wei Liu , Roger Pau Monné , Dario Faggioli
Subject: [PATCH 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
Date: Wed, 18 Dec 2019
08:48:58 +0100 Message-ID: <20191218074859.21665-9-jgross@suse.com> X-Mailer: git-send-email 2.16.4 In-Reply-To: <20191218074859.21665-1-jgross@suse.com> References: <20191218074859.21665-1-jgross@suse.com> Return-Path: jgross@suse.com Content-Type: text/plain X-MS-Exchange-Organization-Network-Message-Id: d0b607e7-f80a-4c41-faeb-08d7838ec420 X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0 X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net X-MS-Exchange-Organization-AuthAs: Anonymous MIME-Version: 1.0 sched_tick_suspend() and sched_tick_resume() only call rcu related functions, so eliminate them and do the rcu_idle_timer*() calling in rcu_idle_[enter|exit](). Signed-off-by: Juergen Gross --- xen/arch/arm/domain.c | 6 +++--- xen/arch/x86/acpi/cpu_idle.c | 15 ++++++++------- xen/arch/x86/cpu/mwait-idle.c | 8 ++++---- xen/common/rcupdate.c | 7 +++++-- xen/common/sched/schedule.c | 12 ------------ xen/include/xen/rcupdate.h | 3 --- xen/include/xen/sched.h | 2 -- 7 files changed, 20 insertions(+), 33 deletions(-) diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index c0a13aa0ab..aa3df3b3ba 100644 --- a/xen/arch/arm/domain.c +++ b/xen/arch/arm/domain.c @@ -46,8 +46,8 @@ static void do_idle(void) { unsigned int cpu = smp_processor_id(); - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); local_irq_disable(); @@ -58,7 +58,7 @@ static void do_idle(void) } local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); } void idle_loop(void) diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c index 5edd1844f4..2676f0d7da 100644 --- a/xen/arch/x86/acpi/cpu_idle.c +++ b/xen/arch/x86/acpi/cpu_idle.c @@ -599,7 +599,8 @@ void update_idle_stats(struct acpi_processor_power *power, static void acpi_processor_idle(void) { - struct acpi_processor_power *power = processor_powers[smp_processor_id()]; + unsigned int cpu = smp_processor_id(); + struct acpi_processor_power *power = processor_powers[cpu]; struct acpi_processor_cx *cx = NULL; int next_state; uint64_t t1, t2 = 0; @@ -648,8 +649,8 @@ static void acpi_processor_idle(void) cpufreq_dbs_timer_suspend(); - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); /* @@ -658,10 +659,10 @@ static void acpi_processor_idle(void) */ local_irq_disable(); - if ( !cpu_is_haltable(smp_processor_id()) ) + if ( !cpu_is_haltable(cpu) ) { local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -786,7 +787,7 @@ static void acpi_processor_idle(void) /* Now in C0 */ power->last_state = &power->states[0]; local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -794,7 +795,7 @@ static void acpi_processor_idle(void) /* Now in C0 */ power->last_state = &power->states[0]; - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); if ( cpuidle_current_governor->reflect ) diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c index 52413e6da1..f49b04c45b 100644 --- a/xen/arch/x86/cpu/mwait-idle.c +++ b/xen/arch/x86/cpu/mwait-idle.c @@ -755,8 +755,8 @@ static void mwait_idle(void) cpufreq_dbs_timer_suspend(); - sched_tick_suspend(); - /* sched_tick_suspend() can raise TIMER_SOFTIRQ. 
Process it now. */ + rcu_idle_enter(cpu); + /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */ process_pending_softirqs(); /* Interrupts must be disabled for C2 and higher transitions. */ @@ -764,7 +764,7 @@ static void mwait_idle(void) if (!cpu_is_haltable(cpu)) { local_irq_enable(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); return; } @@ -806,7 +806,7 @@ static void mwait_idle(void) if (!(lapic_timer_reliable_states & (1 << cstate))) lapic_timer_on(); - sched_tick_resume(); + rcu_idle_exit(cpu); cpufreq_dbs_timer_resume(); if ( cpuidle_current_governor->reflect ) diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c index a56103c6f7..cb712c8690 100644 --- a/xen/common/rcupdate.c +++ b/xen/common/rcupdate.c @@ -459,7 +459,7 @@ int rcu_needs_cpu(int cpu) * periodically poke rcu_pedning(), so that it will invoke the callback * not too late after the end of the grace period. */ -void rcu_idle_timer_start() +static void rcu_idle_timer_start(void) { struct rcu_data *rdp = &this_cpu(rcu_data); @@ -475,7 +475,7 @@ void rcu_idle_timer_start() rdp->idle_timer_active = true; } -void rcu_idle_timer_stop() +static void rcu_idle_timer_stop(void) { struct rcu_data *rdp = &this_cpu(rcu_data); @@ -633,10 +633,13 @@ void rcu_idle_enter(unsigned int cpu) * Se the comment before cpumask_andnot() in rcu_start_batch(). */ smp_mb(); + + rcu_idle_timer_start(); } void rcu_idle_exit(unsigned int cpu) { + rcu_idle_timer_stop(); ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask)); cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask); } diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c index 3307e88b6c..ddbface969 100644 --- a/xen/common/sched/schedule.c +++ b/xen/common/sched/schedule.c @@ -3265,18 +3265,6 @@ void schedule_dump(struct cpupool *c) rcu_read_unlock(&sched_res_rculock); } -void sched_tick_suspend(void) -{ - rcu_idle_enter(smp_processor_id()); - rcu_idle_timer_start(); -} - -void sched_tick_resume(void) -{ - rcu_idle_timer_stop(); - rcu_idle_exit(smp_processor_id()); -} - void wait(void) { schedule(); } diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h index 13850865ed..174d058113 100644 --- a/xen/include/xen/rcupdate.h +++ b/xen/include/xen/rcupdate.h @@ -148,7 +148,4 @@ int rcu_barrier(void); void rcu_idle_enter(unsigned int cpu); void rcu_idle_exit(unsigned int cpu); -void rcu_idle_timer_start(void); -void rcu_idle_timer_stop(void); - #endif /* __XEN_RCUPDATE_H */ diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index b2f48a3512..e4263de2d5 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -688,8 +688,6 @@ void sched_destroy_domain(struct domain *d); long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *); long sched_adjust_global(struct xen_sysctl_scheduler_op *); int sched_id(void); -void sched_tick_suspend(void); -void sched_tick_resume(void); void vcpu_wake(struct vcpu *v); long vcpu_yield(void); void vcpu_sleep_nosync(struct vcpu *v);
-- 2.16.4
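[Editorial note, not part of the original mail: the stand-alone C sketch below only illustrates the shape of the refactoring in the patch above -- the two thin wrappers sched_tick_suspend()/sched_tick_resume() go away, the rcu idle-timer calls move into rcu_idle_enter()/rcu_idle_exit(), and the idle loops call those directly. The function bodies here are placeholders, not the real Xen implementations.]

#include <stdio.h>

/* Placeholder bodies; in Xen these arm/disarm the RCU idle timer. */
static void rcu_idle_timer_start(void) { printf("idle timer armed\n"); }
static void rcu_idle_timer_stop(void)  { printf("idle timer disarmed\n"); }

/* After the change, the timer handling lives inside enter/exit ... */
static void rcu_idle_enter(unsigned int cpu)
{
    printf("cpu %u: rcu idle enter\n", cpu);
    rcu_idle_timer_start();
}

static void rcu_idle_exit(unsigned int cpu)
{
    rcu_idle_timer_stop();
    printf("cpu %u: rcu idle exit\n", cpu);
}

/* ... so wrappers that only chained these two calls are no longer needed
 * and an idle loop calls rcu_idle_enter()/rcu_idle_exit() directly. */
int main(void)
{
    unsigned int cpu = 0;

    rcu_idle_enter(cpu);
    /* ... the CPU would sleep here ... */
    rcu_idle_exit(cpu);
    return 0;
}

Folding the timer calls into enter/exit removes a layer of indirection and keeps the RCU idle bookkeeping in one place, which is the point of the patch.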
From - Wed Dec 18 11:05:12 2019
From: Juergen Gross
To:
CC: Juergen Gross , Dario Faggioli , George Dunlap , Andrew Cooper , Ian Jackson , "Jan Beulich" , Julien Grall , "Konrad Rzeszutek Wilk" , Stefano Stabellini , Wei Liu , Josh Whitehead , Stewart Hildebrand , Meng Xu
Subject: [PATCH 9/9] xen/sched: add const qualifier where appropriate
Date: Wed, 18 Dec 2019 08:48:59 +0100
Message-ID: <20191218074859.21665-10-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
MIME-Version: 1.0

Make use of the const qualifier more often in scheduling code.

Signed-off-by: Juergen Gross
---
xen/common/sched/cpupool.c | 2 +- xen/common/sched/sched_arinc653.c | 4 +-- xen/common/sched/sched_credit.c | 44 +++++++++++++++++---------------- xen/common/sched/sched_credit2.c | 52 ++++++++++++++++++++------------------- xen/common/sched/sched_null.c | 17 +++++++------ xen/common/sched/sched_rt.c | 32 ++++++++++++------------ xen/common/sched/schedule.c | 25 ++++++++++--------- xen/include/xen/sched.h | 9 ++++--- 8 files changed, 96 insertions(+), 89 deletions(-)
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c index 14212bb4ae..a6c04c46cb 100644 --- a/xen/common/sched/cpupool.c +++ b/xen/common/sched/cpupool.c @@ -882,7 +882,7 @@ int cpupool_get_id(const struct domain *d) return d->cpupool ?
d->cpupool->cpupool_id : CPUPOOLID_NONE; } -cpumask_t *cpupool_valid_cpus(struct cpupool *pool) +const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool) { return pool->cpu_valid; } diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c index dc45378952..0de4ba6b2c 100644 --- a/xen/common/sched/sched_arinc653.c +++ b/xen/common/sched/sched_arinc653.c @@ -608,7 +608,7 @@ static struct sched_resource * a653sched_pick_resource(const struct scheduler *ops, const struct sched_unit *unit) { - cpumask_t *online; + const cpumask_t *online; unsigned int cpu; /* @@ -639,7 +639,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int cpu, void *pdata, void *vdata) { struct sched_resource *sr = get_sched_res(cpu); - arinc653_unit_t *svc = vdata; + const arinc653_unit_t *svc = vdata; ASSERT(!pdata && svc && is_idle_unit(svc->unit)); diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c index 05930261d9..f2fc1cca5a 100644 --- a/xen/common/sched/sched_credit.c +++ b/xen/common/sched/sched_credit.c @@ -233,7 +233,7 @@ static void csched_tick(void *_cpu); static void csched_acct(void *dummy); static inline int -__unit_on_runq(struct csched_unit *svc) +__unit_on_runq(const struct csched_unit *svc) { return !list_empty(&svc->runq_elem); } @@ -349,11 +349,11 @@ boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle); DEFINE_PER_CPU(unsigned int, last_tickle_cpu); -static inline void __runq_tickle(struct csched_unit *new) +static inline void __runq_tickle(const struct csched_unit *new) { unsigned int cpu = sched_unit_master(new->unit); - struct sched_resource *sr = get_sched_res(cpu); - struct sched_unit *unit = new->unit; + const struct sched_resource *sr = get_sched_res(cpu); + const struct sched_unit *unit = new->unit; struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu)); struct csched_private *prv = CSCHED_PRIV(sr->scheduler); cpumask_t mask, idle_mask, *online; @@ -509,7 +509,7 @@ static inline void __runq_tickle(struct csched_unit *new) static void csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu) { - struct csched_private *prv = CSCHED_PRIV(ops); + const struct csched_private *prv = CSCHED_PRIV(ops); /* * pcpu either points to a valid struct csched_pcpu, or is NULL, if we're @@ -652,7 +652,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu, #ifndef NDEBUG static inline void -__csched_unit_check(struct sched_unit *unit) +__csched_unit_check(const struct sched_unit *unit) { struct csched_unit * const svc = CSCHED_UNIT(unit); struct csched_dom * const sdom = svc->sdom; @@ -700,8 +700,8 @@ __csched_vcpu_is_cache_hot(const struct csched_private *prv, static inline int __csched_unit_is_migrateable(const struct csched_private *prv, - struct sched_unit *unit, - int dest_cpu, cpumask_t *mask) + const struct sched_unit *unit, + int dest_cpu, const cpumask_t *mask) { const struct csched_unit *svc = CSCHED_UNIT(unit); /* @@ -725,7 +725,7 @@ _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit, /* We must always use cpu's scratch space */ cpumask_t *cpus = cpumask_scratch_cpu(cpu); cpumask_t idlers; - cpumask_t *online = cpupool_domain_master_cpumask(unit->domain); + const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain); struct csched_pcpu *spc = NULL; int balance_step; @@ -932,7 +932,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu) { struct sched_unit *currunit = current->sched_unit; struct csched_unit * const svc = CSCHED_UNIT(currunit); - 
struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); const struct scheduler *ops = sr->scheduler; ASSERT( sched_unit_master(currunit) == cpu ); @@ -1084,7 +1084,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit) { struct csched_unit * const svc = CSCHED_UNIT(unit); unsigned int cpu = sched_unit_master(unit); - struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); SCHED_STAT_CRANK(unit_sleep); @@ -1577,7 +1577,7 @@ static void csched_tick(void *_cpu) { unsigned int cpu = (unsigned long)_cpu; - struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); struct csched_pcpu *spc = CSCHED_PCPU(cpu); struct csched_private *prv = CSCHED_PRIV(sr->scheduler); @@ -1604,7 +1604,7 @@ csched_tick(void *_cpu) static struct csched_unit * csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step) { - struct sched_resource *sr = get_sched_res(cpu); + const struct sched_resource *sr = get_sched_res(cpu); const struct csched_private * const prv = CSCHED_PRIV(sr->scheduler); const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu); struct csched_unit *speer; @@ -1681,10 +1681,10 @@ static struct csched_unit * csched_load_balance(struct csched_private *prv, int cpu, struct csched_unit *snext, bool *stolen) { - struct cpupool *c = get_sched_res(cpu)->cpupool; + const struct cpupool *c = get_sched_res(cpu)->cpupool; struct csched_unit *speer; cpumask_t workers; - cpumask_t *online = c->res_valid; + const cpumask_t *online = c->res_valid; int peer_cpu, first_cpu, peer_node, bstep; int node = cpu_to_node(cpu); @@ -2008,7 +2008,7 @@ out: } static void -csched_dump_unit(struct csched_unit *svc) +csched_dump_unit(const struct csched_unit *svc) { struct csched_dom * const sdom = svc->sdom; @@ -2041,10 +2041,11 @@ csched_dump_unit(struct csched_unit *svc) static void csched_dump_pcpu(const struct scheduler *ops, int cpu) { - struct list_head *runq, *iter; + const struct list_head *runq; + struct list_head *iter; struct csched_private *prv = CSCHED_PRIV(ops); - struct csched_pcpu *spc; - struct csched_unit *svc; + const struct csched_pcpu *spc; + const struct csched_unit *svc; spinlock_t *lock; unsigned long flags; int loop; @@ -2132,12 +2133,13 @@ csched_dump(const struct scheduler *ops) loop = 0; list_for_each( iter_sdom, &prv->active_sdom ) { - struct csched_dom *sdom; + const struct csched_dom *sdom; + sdom = list_entry(iter_sdom, struct csched_dom, active_sdom_elem); list_for_each( iter_svc, &sdom->active_unit ) { - struct csched_unit *svc; + const struct csched_unit *svc; spinlock_t *lock; svc = list_entry(iter_svc, struct csched_unit, active_unit_elem); diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c index f9e521a3a8..1ed7bbde2f 100644 --- a/xen/common/sched/sched_credit2.c +++ b/xen/common/sched/sched_credit2.c @@ -692,7 +692,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask) */ static int get_fallback_cpu(struct csched2_unit *svc) { - struct sched_unit *unit = svc->unit; + const struct sched_unit *unit = svc->unit; unsigned int bs; SCHED_STAT_CRANK(need_fallback_cpu); @@ -774,7 +774,7 @@ static int get_fallback_cpu(struct csched2_unit *svc) * * FIXME: Do pre-calculated division? 
*/ -static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time, +static void t2c_update(const struct csched2_runqueue_data *rqd, s_time_t time, struct csched2_unit *svc) { uint64_t val = time * rqd->max_weight + svc->residual; @@ -783,7 +783,8 @@ static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time, svc->credit -= val; } -static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_unit *svc) +static s_time_t c2t(const struct csched2_runqueue_data *rqd, s_time_t credit, + const struct csched2_unit *svc) { return credit * svc->weight / rqd->max_weight; } @@ -792,7 +793,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c * Runqueue related code. */ -static inline int unit_on_runq(struct csched2_unit *svc) +static inline int unit_on_runq(const struct csched2_unit *svc) { return !list_empty(&svc->runq_elem); } @@ -849,9 +850,9 @@ static inline bool same_core(unsigned int cpua, unsigned int cpub) } static unsigned int -cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu) +cpu_to_runqueue(const struct csched2_private *prv, unsigned int cpu) { - struct csched2_runqueue_data *rqd; + const struct csched2_runqueue_data *rqd; unsigned int rqi; for ( rqi = 0; rqi < nr_cpu_ids; rqi++ ) @@ -917,7 +918,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight, list_for_each( iter, &rqd->svc ) { - struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem); + const struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem); if ( svc->weight > max_weight ) max_weight = svc->weight; @@ -970,7 +971,7 @@ _runq_assign(struct csched2_unit *svc, struct csched2_runqueue_data *rqd) } static void -runq_assign(const struct scheduler *ops, struct sched_unit *unit) +runq_assign(const struct scheduler *ops, const struct sched_unit *unit) { struct csched2_unit *svc = unit->priv; @@ -997,7 +998,7 @@ _runq_deassign(struct csched2_unit *svc) } static void -runq_deassign(const struct scheduler *ops, struct sched_unit *unit) +runq_deassign(const struct scheduler *ops, const struct sched_unit *unit) { struct csched2_unit *svc = unit->priv; @@ -1203,7 +1204,7 @@ static void update_svc_load(const struct scheduler *ops, struct csched2_unit *svc, int change, s_time_t now) { - struct csched2_private *prv = csched2_priv(ops); + const struct csched2_private *prv = csched2_priv(ops); s_time_t delta, unit_load; unsigned int P, W; @@ -1362,11 +1363,11 @@ static inline bool is_preemptable(const struct csched2_unit *svc, * Within the same class, the highest difference of credit. 
*/ static s_time_t tickle_score(const struct scheduler *ops, s_time_t now, - struct csched2_unit *new, unsigned int cpu) + const struct csched2_unit *new, unsigned int cpu) { struct csched2_runqueue_data *rqd = c2rqd(ops, cpu); struct csched2_unit * cur = csched2_unit(curr_on_cpu(cpu)); - struct csched2_private *prv = csched2_priv(ops); + const struct csched2_private *prv = csched2_priv(ops); s_time_t score; /* @@ -1441,7 +1442,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now) struct sched_unit *unit = new->unit; unsigned int bs, cpu = sched_unit_master(unit); struct csched2_runqueue_data *rqd = c2rqd(ops, cpu); - cpumask_t *online = cpupool_domain_master_cpumask(unit->domain); + const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain); cpumask_t mask; ASSERT(new->rqd == rqd); @@ -2005,7 +2006,7 @@ static void replenish_domain_budget(void* data) #ifndef NDEBUG static inline void -csched2_unit_check(struct sched_unit *unit) +csched2_unit_check(const struct sched_unit *unit) { struct csched2_unit * const svc = csched2_unit(unit); struct csched2_dom * const sdom = svc->sdom; @@ -2541,8 +2542,8 @@ static void migrate(const struct scheduler *ops, * - svc is not already flagged to migrate, * - if svc is allowed to run on at least one of the pcpus of rqd. */ -static bool unit_is_migrateable(struct csched2_unit *svc, - struct csched2_runqueue_data *rqd) +static bool unit_is_migrateable(const struct csched2_unit *svc, + const struct csched2_runqueue_data *rqd) { struct sched_unit *unit = svc->unit; int cpu = sched_unit_master(unit); @@ -3076,7 +3077,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data) static void csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit) { - struct csched2_unit *svc = unit->priv; + const struct csched2_unit *svc = unit->priv; struct csched2_dom * const sdom = svc->sdom; spinlock_t *lock; @@ -3142,7 +3143,7 @@ csched2_runtime(const struct scheduler *ops, int cpu, int rt_credit; /* Proposed runtime measured in credits */ struct csched2_runqueue_data *rqd = c2rqd(ops, cpu); struct list_head *runq = &rqd->runq; - struct csched2_private *prv = csched2_priv(ops); + const struct csched2_private *prv = csched2_priv(ops); /* * If we're idle, just stay so. 
Others (or external events)
@@ -3239,7 +3240,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
     struct csched2_private *prv = csched2_priv(sr->scheduler);
     bool yield = false, soft_aff_preempt = false;
@@ -3603,7 +3604,8 @@ static void csched2_schedule(
 }
 
 static void
-csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
+csched2_dump_unit(const struct csched2_private *prv,
+                  const struct csched2_unit *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
             svc->unit->domain->domain_id,
@@ -3626,8 +3628,8 @@ csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
 static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = csched2_priv(ops);
-    struct csched2_unit *svc;
+    const struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_unit *svc;
 
     printk("CPU[%02d] runq=%d, sibling={%*pbl}, core={%*pbl}\n",
            cpu, c2r(cpu),
@@ -3695,8 +3697,8 @@ csched2_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->sdom )
     {
-        struct csched2_dom *sdom;
-        struct sched_unit *unit;
+        const struct csched2_dom *sdom;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3737,7 +3739,7 @@ csched2_dump(const struct scheduler *ops)
         printk("RUNQ:\n");
         list_for_each( iter, runq )
         {
-            struct csched2_unit *svc = runq_elem(iter);
+            const struct csched2_unit *svc = runq_elem(iter);
 
             if ( svc )
             {
diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c
index 11aab25743..4906e02c62 100644
--- a/xen/common/sched/sched_null.c
+++ b/xen/common/sched/sched_null.c
@@ -278,12 +278,12 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  * So this is not part of any hot path.
  */
 static struct sched_resource *
-pick_res(struct null_private *prv, const struct sched_unit *unit)
+pick_res(const struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
-    cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -375,7 +375,7 @@ static void unit_assign(struct null_private *prv, struct sched_unit *unit,
 }
 
 /* Returns true if a cpu was tickled */
-static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
+static bool unit_deassign(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
@@ -441,7 +441,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 {
     struct sched_resource *sr = get_sched_res(cpu);
     struct null_private *prv = null_priv(new_ops);
-    struct null_unit *nvc = vdata;
+    const struct null_unit *nvc = vdata;
 
     ASSERT(nvc && is_idle_unit(nvc->unit));
 
@@ -940,7 +940,8 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
     prev->next_task->migrated = false;
 }
 
-static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
+static inline void dump_unit(const struct null_private *prv,
+                             const struct null_unit *nvc)
 {
     printk("[%i.%i] pcpu=%d", nvc->unit->domain->domain_id,
            nvc->unit->unit_id, list_empty(&nvc->waitq_elem) ?
@@ -950,8 +951,8 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
-    struct null_unit *nvc;
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
 
diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 8646d77343..560614ed9d 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -352,7 +352,7 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
+    const struct rt_unit *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -371,8 +371,8 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *runq, *depletedq, *replq, *iter;
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
-    struct rt_dom *sdom;
+    const struct rt_unit *svc;
+    const struct rt_dom *sdom;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -408,7 +408,7 @@ rt_dump(const struct scheduler *ops)
     printk("Domain info:\n");
     list_for_each ( iter, &prv->sdom )
     {
-        struct sched_unit *unit;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter, struct rt_dom, sdom_elem);
         printk("\tdomain: %d\n", sdom->dom->domain_id);
@@ -509,7 +509,7 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
 
     list_for_each ( iter, queue )
     {
-        struct rt_unit * iter_svc = (*qelem)(iter);
+        const struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
         pos = true;
@@ -547,7 +547,7 @@ replq_remove(const struct scheduler *ops, struct rt_unit *svc)
          */
         if ( !list_empty(replq) )
         {
-            struct rt_unit *svc_next = replq_elem(replq->next);
+            const struct rt_unit *svc_next = replq_elem(replq->next);
             set_timer(&prv->repl_timer, svc_next->cur_deadline);
         }
         else
@@ -604,7 +604,7 @@ static void
 replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
-    struct rt_unit *rearm_svc = svc;
+    const struct rt_unit *rearm_svc = svc;
     bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
@@ -640,7 +640,7 @@ static struct sched_resource *
 rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
     cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
@@ -1028,7 +1028,7 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
     cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
 
     list_for_each ( iter, runq )
     {
@@ -1197,15 +1197,15 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
  * lock is grabbed before calling this function
  */
 static void
-runq_tickle(const struct scheduler *ops, struct rt_unit *new)
+runq_tickle(const struct scheduler *ops, const struct rt_unit *new)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
-    struct rt_unit *iter_svc;
-    struct sched_unit *iter_unit;
+    const struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
+    const struct rt_unit *iter_svc;
+    const struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
     cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
-    cpumask_t *online;
+    const cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
@@ -1379,7 +1379,7 @@ rt_dom_cntl(
 {
     struct rt_private *prv = rt_priv(ops);
     struct rt_unit *svc;
-    struct sched_unit *unit;
+    const struct sched_unit *unit;
     unsigned long flags;
     int rc = 0;
     struct xen_domctl_schedparam_vcpu local_sched;
@@ -1484,7 +1484,7 @@ rt_dom_cntl(
  */
 static void repl_timer_handler(void *data){
     s_time_t now;
-    struct scheduler *ops = data;
+    const struct scheduler *ops = data;
     struct rt_private *prv = rt_priv(ops);
     struct list_head *replq = rt_replq(ops);
     struct list_head *runq = rt_runq(ops);
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index ddbface969..1d98e1fa8d 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -175,7 +175,7 @@ static inline struct scheduler *dom_scheduler(const struct domain *d)
 
 static inline struct scheduler *unit_scheduler(const struct sched_unit *unit)
 {
-    struct domain *d = unit->domain;
+    const struct domain *d = unit->domain;
 
     if ( likely(d->cpupool != NULL) )
         return d->cpupool->sched;
@@ -202,7 +202,7 @@ static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
 }
 #define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain)
 
-static inline void trace_runstate_change(struct vcpu *v, int new_state)
+static inline void trace_runstate_change(const struct vcpu *v, int new_state)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
     uint32_t event;
@@ -220,7 +220,7 @@ static inline void trace_runstate_change(struct vcpu *v, int new_state)
     __trace_var(event, 1/*tsc*/, sizeof(d), &d);
 }
 
-static inline void trace_continue_running(struct vcpu *v)
+static inline void trace_continue_running(const struct vcpu *v)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
 
@@ -302,7 +302,8 @@ void sched_guest_idle(void (*idle) (void), unsigned int cpu)
     atomic_dec(&per_cpu(sched_urgent_count, cpu));
 }
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate)
 {
     spinlock_t *lock;
     s_time_t delta;
@@ -324,7 +325,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 uint64_t get_cpu_idle_time(unsigned int cpu)
 {
     struct vcpu_runstate_info state = { 0 };
-    struct vcpu *v = idle_vcpu[cpu];
+    const struct vcpu *v = idle_vcpu[cpu];
 
     if ( cpu_online(cpu) && v )
         vcpu_runstate_get(v, &state);
@@ -392,7 +393,7 @@ static void sched_free_unit_mem(struct sched_unit *unit)
 
 static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
 {
-    struct vcpu *vunit;
+    const struct vcpu *vunit;
     unsigned int cnt = 0;
 
     /* Don't count to be released vcpu, might be not in vcpu list yet. */
@@ -522,7 +523,7 @@ static unsigned int sched_select_initial_cpu(const struct vcpu *v)
 
 int sched_init_vcpu(struct vcpu *v)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
     struct sched_unit *unit;
     unsigned int processor;
 
@@ -913,7 +914,7 @@ static void sched_unit_move_locked(struct sched_unit *unit,
                                    unsigned int new_cpu)
 {
     unsigned int old_cpu = unit->res->master_cpu;
-    struct vcpu *v;
+    const struct vcpu *v;
 
     rcu_read_lock(&sched_res_rculock);
 
@@ -1090,7 +1091,7 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(struct sched_unit *unit)
+static void sched_reset_affinity_broken(const struct sched_unit *unit)
 {
     struct vcpu *v;
 
@@ -1176,7 +1177,7 @@ void restore_vcpu_affinity(struct domain *d)
 int cpu_disable_scheduler(unsigned int cpu)
 {
     struct domain *d;
-    struct cpupool *c;
+    const struct cpupool *c;
     cpumask_t online_affinity;
     int ret = 0;
 
@@ -1251,8 +1252,8 @@ out:
 static int cpu_disable_scheduler_check(unsigned int cpu)
 {
     struct domain *d;
-    struct vcpu *v;
-    struct cpupool *c;
+    const struct vcpu *v;
+    const struct cpupool *c;
 
     c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e4263de2d5..fcf8e5037b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -771,7 +771,7 @@ static inline void hypercall_cancel_continuation(struct vcpu *v)
 extern struct domain *domain_list;
 
 /* Caller must hold the domlist_read_lock or domlist_update_lock. */
-static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
+static inline struct domain *first_domain_in_cpupool(const struct cpupool *c)
 {
     struct domain *d;
     for (d = rcu_dereference(domain_list); d && d->cpupool != c;
@@ -779,7 +779,7 @@ static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
     return d;
 }
 static inline struct domain *next_domain_in_cpupool(
-    struct domain *d, struct cpupool *c)
+    struct domain *d, const struct cpupool *c)
 {
     for (d = rcu_dereference(d->next_in_list); d && d->cpupool != c;
          d = rcu_dereference(d->next_in_list));
@@ -923,7 +923,8 @@ void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
 void scheduler_enable(void);
@@ -1042,7 +1043,7 @@ extern enum cpufreq_controller {
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 extern void dump_runq(unsigned char key);
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
 
-- 
2.16.4