git@vger.kernel.org mailing list mirror (one of many)
 help / color / mirror / code / Atom feed
* git-am doesn't strip CRLF line endings when the mbox is base64-encoded
@ 2019-12-18 11:42 George Dunlap
  2019-12-18 12:15 ` George Dunlap
  0 siblings, 1 reply; 6+ messages in thread
From: George Dunlap @ 2019-12-18 11:42 UTC (permalink / raw)
  To: git

[-- Attachment #1: Type: text/plain, Size: 575 bytes --]

Using git 2.24.0 from Debian testing.

It seems that git-am will strip CRLF endings from mails before applying
patches when the mail isn't encoded in any way.  It will also decode
base64-encoded mails.  But it won't strip CRLF endings from
base64-encoded mails.

Attached are two mbox files for two different recent series.
plainenc.am applies cleanly with `git am`, while base64enc.am doesn't.

Poking around the man pages, it looks like part of the issue might be
that the CRLF stripping is done in `git mailsplit`, before the base64
encoding, rather than after.

 -George

[-- Attachment #2: base64enc.am --]
[-- Type: text/plain, Size: 102145 bytes --]

From - Wed Dec 18 10:46:18 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Tue, 17 Dec 2019 16:12:28 +0100
Received: from MIAPEX02MSOL02.citrite.net (10.52.109.12) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Tue, 17 Dec 2019 10:12:27 -0500
Received: from esa2.hc3370-68.iphmx.com (10.9.154.239) by
 MIAPEX02MSOL02.citrite.net (10.52.109.12) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 10:12:26 -0500
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  aisaila@bitdefender.com) identity=pra; client-ip=40.107.5.93;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  aisaila@bitdefender.com designates 40.107.5.93 as permitted
  sender) identity=mailfrom; client-ip=40.107.5.93;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  postmaster@EUR03-VE1-obe.outbound.protection.outlook.com
  designates 40.107.5.93 as permitted sender) identity=helo;
  client-ip=40.107.5.93; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="postmaster@EUR03-VE1-obe.outbound.protection.outlook.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Authentication-Results: esa2.hc3370-68.iphmx.com; spf=None smtp.pra=aisaila@bitdefender.com; spf=Pass smtp.mailfrom=aisaila@bitdefender.com; spf=Pass smtp.helo=postmaster@EUR03-VE1-obe.outbound.protection.outlook.com; dkim=pass (signature verified) header.i=@bitdefender.onmicrosoft.com
IronPort-SDR: za36JUH8wxLX1Znk5weTy/oOQAtK6ZvsPotGYmEyJkRAKYIJWGHmc4K2b8q0W2wmpz/qVJ88Rr
 uqsuTNfQbIczSLAIWQCeXdmE2bH4pvdh+9bnm2UlZC92nV98mfpO1n8uDEMGvIoPzUD7kMxKxp
 C8TFCWER8INL34gsxa6RqyRzuZy1PtLukq4FENWDTobf3ZP5BjlkJk93RrxEe5oZmhdjY/+5TE
 j3TEnK8n+doVftRxD+w+0m6L0A+rk51RlrJa+jxiHx3DLFw1YmeijBn++w7HQQnxazjQbvTFbn
 ZCxvSYRlnAFVJUa4zNG1SuBq
X-IronPort-RemoteIP: 40.107.5.93
X-IronPort-MID: 9822583
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: SBRS_Whitelist
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 9822583
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 40.107.5.93
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3AoDXbLROZBALHRrpP2IMl6mtUPXoX/o7sNwtQ0K?=
 =?us-ascii?q?IMzox0I/r9rarrMEGX3/hxlliBBdydt6sfzbCO7Ou/ACQp2tWoiDg6aptCVh?=
 =?us-ascii?q?sI2409vjcLJ4q7M3D9N+PgdCcgHc5PBxdP9nC/NlVJSo6lPwWB6nK94iQPFR?=
 =?us-ascii?q?rhKAF7Ovr6GpLIj8Swyuu+54Dfbx9HiTagb75+Ngu6oRnTu8UZg4ZuN7s6xw?=
 =?us-ascii?q?fUrHdPZ+lY335jK0iJnxb76Mew/Zpj/DpVtvk86cNOUrj0crohQ7BAAzsoL2?=
 =?us-ascii?q?465MvwtRneVgSP/WcTUn8XkhVTHQfI6gzxU4rrvSv7sup93zSaPdHzQLspVz?=
 =?us-ascii?q?mu87tnRRn1gyocKTU37H/YhdBxjKJDoRKuuRp/w5LPYIqIMPZyZ77Rcc8GSW?=
 =?us-ascii?q?ZEWMtaSi5PDZ6mb4YXD+QPI/tWr5XzqVUNqhW+BBWjCuzgxTJTmn/5xrc33/?=
 =?us-ascii?q?g7HA3a3gEtGc8FvnTOrNXyMacfSfy4zK3WwjTMdfxW3yry6JLVchs8vP+MQa?=
 =?us-ascii?q?x/ccTLxkkpCgjLjUiepJDmMT2TyukGrm+W4PBhVeK0kWEnqgVxrSK0y8g3i4?=
 =?us-ascii?q?nJmp8axU7K9SpnxoY6OMO3SFRhbdG4F5tQsjiXOo1rSc0hW2FloCk3xqEctZ?=
 =?us-ascii?q?KmYCQHyo4ryh7FZ/GDfYWE+hbuWPuLLTtlgH9ofKiziwuu/UWv0OHxV9G40F?=
 =?us-ascii?q?hUoSdGjtXBs3UA2h/P5ceZVvRw+0Ws1DeB2g3Q7+xJI045mKXHJJMkwbM9ko?=
 =?us-ascii?q?ccvELeFSHsgkr2lrWZdkA89+io9evnZrLmq4eTOYB6lg/yLqojltWwDuo3Lw?=
 =?us-ascii?q?QCRm+b9v+i27H5+k35XalKgeYxkqnEtpDVON4XprajAw9SzoYs9QqwDyun0N?=
 =?us-ascii?q?Qfm3kLNlVFeA+bj4jtPFHOJ/P4Ae2jjFSrlTdn3/HGPrv/DZXRNnXOn6vtca?=
 =?us-ascii?q?xg50JAygc/19BS64hQB7wPOP7zX1X+tN3cDh83KQy0xOPnBc1n2YMfQmKAGK?=
 =?us-ascii?q?6ZMKfIvlKT/e0vIvWMa5ILtzbzNfcl4ePhgmEjmVMHYaap2p4XZGiiHvt6O0?=
 =?us-ascii?q?WZfWbsgtAZHGcFoAU+S/bqiFKcXjJJenmyQqQ85jUhB4K+ForMWJ2tjKad0y?=
 =?us-ascii?q?e6Ap1WfGFGC1+WHXj0cIWEXqREVCXHaN9slHkIWKasT6cl1Aqyr0nqxrx/NO?=
 =?us-ascii?q?3W9yYE85X525I9s/3ek1Q++CJ5C+yZ0nqRVCdkk2UQXTg00atj50tnxQHQ/7?=
 =?us-ascii?q?J/hqlxEdFD7vVFSBpyDoLRweV8Q/HJcyPlV5/dQVe9S9SiCBk7T849ztEDZU?=
 =?us-ascii?q?pwAZOpiRWVjHniOKMci7HeXM98yanbxXWkYp8lky+bhoUCrnxjb9VGMXCvmq?=
 =?us-ascii?q?948VSOXdzXxhzK36//b78VmTXN/T3ekznGtxRCXQp5QajJGmoSYkLGoNiqgy?=
 =?us-ascii?q?GKQ+qkCKgrPwVMz8KPMO1NbNjohk9BX/DtJJLVZGfi03zlHhuMy7iQa5CwPm?=
 =?us-ascii?q?8cwCnQBkUCnw0JuHGAMAk1HCC6pGzCSTdpEAGKAQvspMdkr3bpYkYowlOxck?=
 =?us-ascii?q?dj2rGpqDsYnueVRP5W/55WkyAntzhyABOBzsrbWf+hgickQqhGetI65gV3kE?=
 =?us-ascii?q?fi8iFtNZypKa9vw2UTdQh6pWrC/BV6AYYT9KpipnN/wCx+L46k02NvTBnH+b?=
 =?us-ascii?q?TgZLfxMzGu2xv3YaKGnWjv1vKrwqEA1sQ9qm/P5FT6cyhq+SBM+sF7wWqG5c?=
 =?us-ascii?q?3PCC8+Wqn4XlgJsBQllYzeay8v14bvlngzOJeZiXydnOotOKx/6julYMtuHY?=
 =?us-ascii?q?mIMlHqLsElK8K1DcYblF6ASxsiAOxd9IQXJ5iCdfWEm7WRJ8E6vTCClXQdo6?=
 =?us-ascii?q?Z+i1CL8Bt6Tv7whK88592c/EiMVTbMjHu6rZ3Jlax8VxISMCmRxjThWYhAaa?=
 =?us-ascii?q?hzIKpTDmGuDuam4PlYq5WuBy9+8GH9WG0hwZ/8KlKCKl3n2gtI0l4L5Gaqgj?=
 =?us-ascii?q?a802lsmioy/fDFjgXT3+TvcgYGMWdXRW5kyG3hOpWwk8tAAhL6YhMgzl2l7h?=
 =?us-ascii?q?2hmPAe+vQ5LnHTRFcOdC/zfCl5SqXlkL2EboZU7Y8w9z1NWbG1YEuTS7r0ix?=
 =?us-ascii?q?Ebzy/uEWZYyD0hMTqtv8axhAR03VqUN207t3/FYYd1zBbb6sbbQKtY3yEPRS?=
 =?us-ascii?q?1xoTPWGlSxMdSv8diO0ZzEt7P2THqvA6VaajKj1oacrG276GltVAW4hOy2k8?=
 =?us-ascii?q?b7HBIS/BLBj4IvawiR6RH2b8/syriwNv9hcg9wHljg5sFmG4Z41IwtmJUX3n?=
 =?us-ascii?q?tcjZKQrjILkmb2ZM1Swrm2LGEMSjgC38PP7UD71VdiIHOEy8OxVniUzsZ7Id?=
 =?us-ascii?q?jvSmkXxi4w4c1MBKqOqrtCmCp+uF2jqgzNJ/N6m25Hm8Ej43MbnewF/TEV4H?=
 =?us-ascii?q?7EWeI0Gk9VdWzhjB3SqdC19/4IPCPxIf6xzEp7jZaqC7TQ6gdbEG30fJsvB0?=
 =?us-ascii?q?oSpo12LU7M3Xvv64rlZMiYbNQdsQeRmgvBiO4dIYw4l/4Djy5qcWznunhtx+?=
 =?us-ascii?q?k+hB1olZa02erPY2xs56u4BhdwPDzpa84d9zfhgLwYlcGTnsiuEphnBjQXTc?=
 =?us-ascii?q?7wV/v7WDkWtPnhK0OPCGhg8jHCQeWZQEnOsxQDzTqHCZ2gOnCJKWNMwM5rHl?=
 =?us-ascii?q?+dLxcE31hRAmV8n4Y5ExDsz8vkIyIbrngc4ED1rhxUx6dmLR76By3WpRyhaz?=
 =?us-ascii?q?M9YJKeMBZb4AxE60rPd8eZ66ggekMQto3ktwGLJmGBMk5BDHoAW0iNL1riIr?=
 =?us-ascii?q?Wj69TG/+WCQOG5KrGdBNfG4fwbXPCOy5W114Jg9DvZLcSDME5pCPgj01ZCV3?=
 =?us-ascii?q?R0Qp6LoTgERi0Jmi6IVPa1+EbkqBV+tdv3sPnwUVip5YDUUOQKdIs/vRGuga?=
 =?us-ascii?q?KTceWXgXQxJTFd35IKjXjGrdpXlFcTkChvczCFG7UcuSPDQaTcl7URBBkeIy?=
 =?us-ascii?q?9+L8pH6asg0xIFZZad04uqkOM+36V9AkwNTVH7n8C1ecEGRgP1fEjKAkqGLv?=
 =?us-ascii?q?XOJDHGxd32fbLpTLRRiOtOsBjj8T2fEkLlInGCj2y1D1b2ab4K0WfKZ0872s?=
 =?us-ascii?q?n1aBtmBGn9QcizZwayapl3hmZtnuVx2COMNHYcNCg6eERI/djypWtVhOtyH2?=
 =?us-ascii?q?tZ4z9rN+6BzmyQ7vLRK5IfmfFqHih5me9c7HkgjbBS6WsXIZ490DuXtdNor1?=
 =?us-ascii?q?y8x6OGyyFuUR5HgjxKmI6Gs0hkNajDsJJHXDyXmXBFpXXVABMMqdx/D9TpsK?=
 =?us-ascii?q?0F0dnDmpX4LzJa+s7V988RVIDEbdiKO307PV/1CSbZWUEbGCWzOziV1Ck/2L?=
 =?us-ascii?q?mCs2eYpZ8gpt3wlYoSH/VFAUctGKpSC1w5ToBaZsYtGGtiyfnC0YYJ/Sbs8E?=
 =?us-ascii?q?GXHZ0F+MiBDrXLX5CNYH6YleUWOkFOmOuga9xVbsqihwRjcgUoxdyWXRaPG4?=
 =?us-ascii?q?gL+mo4MUc1uBsfqnEmFz9qghu3ZF/1uC1BUqLk+3x+wgpmP7Z3/W+1sQ5ufw?=
 =?us-ascii?q?jE+HNrwht2xYWtxDmVdHSodvWKUIpbCjT5uw0KCr2mHlwnVQSphgQkOSzNHf?=
 =?us-ascii?q?RRhOA7Kjgu1l+auINPHO4aRqpBMlccwvSeZvNg1lo5yG3v3Uhc+e7MEodvji?=
 =?us-ascii?q?MHTKT09Dd+9lsma9Q4Y6vNOKBO015cwLqUuTOl3fwwxwlYIFsR9GSVe2gDv0?=
 =?us-ascii?q?ltVPFuKyez/+Nq4BCPgHMfIC5VD6Vs/7Q7rQs0IKyYwjjl0qJfJ0z5LOGZI6?=
 =?us-ascii?q?6D+iDBmcOOXlIsxxYImk1Crt0UmY8od0uZUVxqzaPES0xPbJKdb1wTPpEBvG?=
 =?us-ascii?q?LediuPr+jXlJ9uNt/7FuuzFrDW8fhExEO8HAM5WY8L65dkfNHk3UfGIMPgNL?=
 =?us-ascii?q?NAxw8q4VGhLVSeBfVNcTqBkSsLrs+yypN6x89WITRXUgAfeW2no63aoAMnmq?=
 =?us-ascii?q?/JRNAtfnITRZcJLFobZffiwGtnki0FCzO6lOUE1AKF8jnw4DzKCyXxZMZiY/?=
 =?us-ascii?q?HSYg5wDNax+nM09K382ju1ut3OYmr9M9ploNrG7+gX8o2GB/1jRr54q07Amo?=
 =?us-ascii?q?NcSi/iQyvVHNWyPZS1d5g0YIm+FCOhSlLmwWFQLY+5LJO3I6OPmw2tWYtErN?=
 =?us-ascii?q?zRwmU4LcHkXjAGR0Us/6dSvuQkI1VEOsdzYAa05VhmcfXnf0HAlI3pGjjIS3?=
 =?us-ascii?q?MeTuEDn7jgIeUPl2x0KLf9kSdoT4lmnbDvrQhRG9dSyEmZnKnrZpEAA3L6Qi?=
 =?us-ascii?q?UPIlyW9yRlzzAzZKFukoJdiFvJqQdOaTnTLb4wMTUWsY1kXgHAZikmQi8xQ1?=
 =?us-ascii?q?vW1NuR0kuXx7kXuhBlsZNR2OxBvmL5u8aAMjitRKCmp5jTvyc6K9Mhpv8oaN?=
 =?us-ascii?q?GxEo69rJrb2wfnYtzQvwmCDHHoOsdgwoMVHg8BBf5ClCciJNAMvpdH5QwpTM?=
 =?us-ascii?q?AiKrdTCa4q4Le3dT5jCi1UxigcBdrZgG4yx9yk0r6frS++NYw4OUVb4p9Dnt?=
 =?us-ascii?q?cQXSNwbi4E4qSkUteOmg=3D=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0FjAQCb7/hdhl0FayhlHQEBAQkBEQU?=
 =?us-ascii?q?FAYF+gUtQgWYDBAsqCoN6g0YDhTqFVJZrhkwDGDwJBAEBCgEtAgEBAQGEPhm?=
 =?us-ascii?q?CHwYBBDQTAgECAQwBAQEDAQEBAgECAwICAQECEAEBAQgLCwgphT4BC4I7KQF?=
 =?us-ascii?q?pOTgBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEFAoEMQBY?=
 =?us-ascii?q?RBA0MAQE3AREBIgImAgQwFRIEDgUigwCCRwMvAgGiAz0CIwE/AguBBCmIYAE?=
 =?us-ascii?q?BdH8zgn4BAQWFGRhYgT8JgQ4ohz+EbQaBQT+EYoRIGIJ5gl6NRYI3OZ5UKII?=
 =?us-ascii?q?WlgQnmkktjiCaTQIEAgQFAg4BAQWBaYF7MyIbFUgNglJQERSNEgwOCYNQilN?=
 =?us-ascii?q?CATEBgSePSAGBDwEB?=
X-IPAS-Result: =?us-ascii?q?A0FjAQCb7/hdhl0FayhlHQEBAQkBEQUFAYF+gUtQgWYDB?=
 =?us-ascii?q?AsqCoN6g0YDhTqFVJZrhkwDGDwJBAEBCgEtAgEBAQGEPhmCHwYBBDQTAgECA?=
 =?us-ascii?q?QwBAQEDAQEBAgECAwICAQECEAEBAQgLCwgphT4BC4I7KQFpOTgBAQEBAQEBA?=
 =?us-ascii?q?QEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEFAoEMQBYRBA0MAQE3AREBI?=
 =?us-ascii?q?gImAgQwFRIEDgUigwCCRwMvAgGiAz0CIwE/AguBBCmIYAEBdH8zgn4BAQWFG?=
 =?us-ascii?q?RhYgT8JgQ4ohz+EbQaBQT+EYoRIGIJ5gl6NRYI3OZ5UKIIWlgQnmkktjiCaT?=
 =?us-ascii?q?QIEAgQFAg4BAQWBaYF7MyIbFUgNglJQERSNEgwOCYNQilNCATEBgSePSAGBD?=
 =?us-ascii?q?wEB?=
X-IronPort-AV: E=Sophos;i="5.69,325,1571716800"; 
   d="scan'208";a="9822583"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDGT96J1yveCxDBxD6BvL5WVW2gkAk6BIlj0rw?=
 =?us-ascii?q?VkBwhVw5/h0qWj2CL+fWO9YqVB/EUfql2jncblRMY677M3tR47nUakBW?=
 =?us-ascii?q?V1vP3uJqQV+3S+/VUTyNXGIomKsVAKoOW+jqw00BnN0Nm+OooX1Ziwng?=
 =?us-ascii?q?CrbixXcKZgCVX5ovUehX+Xsg=3D=3D?=
Received: from mail-eopbgr50093.outbound.protection.outlook.com (HELO EUR03-VE1-obe.outbound.protection.outlook.com) ([40.107.5.93])
  by esa2.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Dec 2019 10:12:25 -0500
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R62Ng1T0L5ZBK1oUbMvt2wJqtdmoKkcVI0Uo45hQS5Rqg4EpxMPSrkBGT48tfKzNEvMB4u4okOEoWpUB1R0ydUuuEqyYEjk4pmWEIXepTZRLIqYKD+O5r3VIwwaWAJK4qArX+WC4Uxywpf7j9LHJkTePd/iGhjUhoHNGBWkFMSVCKBCgIwqQygLc7Vg0/8rwSYTw5lzRxN0pV7wTrbhmgPpQoaGLw7wOoJp4xABKWmTfgzgx0WJIXgLwHK0ELvviBhPMR6seskVs/uYPjxZF06W3LEsxqSyBULNMuqYxpG9WvGJ3gaW3D/cV9atpueOmvYkway5iihzRpJI7OWE9vw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dNdnDTkXFSB7LLx6bNq04Oc0bd8/QYnLUPngVXmiRD0=;
 b=Kn1QZ2zPaLCZyTITnbHiUh9gEnqLvEPffhV4rkEIQVbTKCOMJvc61mxMtbgZqPAczdqsksdm2tPFV/dIxRZ5yUL9hWeriROdCEqVG1iiEETx/yTH8+koFQr0lwtMpWaaAZ7hu77FTSpSAkRolVyRPkLQb8DD5lTHgwqHmYAICnXAz/NQ8JfLI5WgHDxB3lr2AOnrN5qQc8A0mTvUcQ5ZizpCnITz1Wf6LanyTOZO/KulAaKyjQa26B8smLQHZiXYOIABmE3U/SUemBVlx3Fb4IVoIL+qwcJRdRg+C7ignsbyvFS6zORKcdjTtpktH+vpRsBU2+NozAsDDXOQiCI58A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=bitdefender.com; dmarc=pass action=none
 header.from=bitdefender.com; dkim=pass header.d=bitdefender.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=bitdefender.onmicrosoft.com; s=selector2-bitdefender-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dNdnDTkXFSB7LLx6bNq04Oc0bd8/QYnLUPngVXmiRD0=;
 b=CApFzyrk0mlMDcSadvV/d7ZUlanzXmS+m7aVES+7/PlT/2HEmwtYEBdN6pUdWDbsMBYgnAMaNTdghBKp3Ljfj+sRsO2IjLpx2+Nk1zooYnlxP3QQLDlJ+cjiXhDsw5VdDPZKhF+Fgtm5nygmi5D0onhECyNDNDc+49NlW67VOq8=
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com (10.255.30.78) by
 AM0PR02MB4386.eurprd02.prod.outlook.com (20.178.17.212) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2538.20; Tue, 17 Dec 2019 15:12:22 +0000
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d]) by AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d%4]) with mapi id 15.20.2538.019; Tue, 17 Dec 2019
 15:12:22 +0000
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Alexandru Stefan ISAILA <aisaila@bitdefender.com>, Razvan COJOCARU
	<rcojocaru@bitdefender.com>, Tamas K Lengyel <tamas@tklengyel.com>, "Petre
 Ovidiu PIRCALABU" <ppircalabu@bitdefender.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH V4 1/4] x86/mm: Add array_index_nospec to guest provided index
 values
Thread-Topic: [PATCH V4 1/4] x86/mm: Add array_index_nospec to guest provided
 index values
Thread-Index: AQHVtOxhsG39wuvEP0STO9uvU+j0gQ==
Date: Tue, 17 Dec 2019 15:12:21 +0000
Message-ID: <20191217151144.9781-1-aisaila@bitdefender.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-clientproxiedby: AM0PR01CA0067.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:e6::44) To AM0PR02MB5553.eurprd02.prod.outlook.com
 (2603:10a6:208:160::14)
x-ms-exchange-messagesentrepresentingtype: 1
x-mailer: git-send-email 2.17.1
x-originating-ip: [91.199.104.6]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d067ba9e-8c9c-4f83-ca0c-08d7830383e9
x-ms-traffictypediagnostic: AM0PR02MB4386:|AM0PR02MB4386:|AM0PR02MB4386:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR02MB43863905751FF81BCCB3A482AB500@AM0PR02MB4386.eurprd02.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:923;
x-forefront-prvs: 02543CD7CD
x-forefront-antispam-report: SFV:NSPM;SFS:(10019020)(366004)(136003)(396003)(39860400002)(346002)(376002)(189003)(199004)(6512007)(52116002)(64756008)(86362001)(186003)(8936002)(6486002)(4326008)(2906002)(54906003)(66476007)(71200400001)(66556008)(66946007)(36756003)(26005)(81156014)(81166006)(5660300002)(8676002)(6506007)(478600001)(66446008)(1076003)(2616005)(316002)(6916009);DIR:OUT;SFP:1102;SCL:1;SRVR:AM0PR02MB4386;H:AM0PR02MB5553.eurprd02.prod.outlook.com;FPR:;SPF:None;LANG:en;PTR:InfoNoRecords;MX:1;A:1;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: O/DKVO6WLk5mFLsOwWvTpR9EV9dOb/z8utBIF47hT/GmWxtlgygxrYXbFHURww2fBdoahCSbAU36jUcW9Af2P+DzsJFgABqDJstVvyxzROv/Cyma3xwE+P/8avTSh4/Kk5EhIFXsSq4PbkTGvBbhjvPoYngpML5prLyyw5D3ZbGcxbgFQoiahCzLrYlm8iebb0EYUtsAE5hAqXfoWTuPq8XbXdZJ6oF6w+AgGCJPXkTbeBzfbXbOujX1AMOUpqw4YTdrQjY4+uqitzCr1E5Fuu/jtC9sygJijnuBH5xtozAPxojbyfQQPl6adAqz2CdCj/FaR6pQEARfnpdMPHYaOQp/Q4sRit1eQYX12g8+YnLijmYlyWGdDF8nBQ0Ppt3rZYjRUTJl6rMyAzsh2ipeJc6/9+EuQQcn5ZDmHtkRXJu2oUJq/FCwdx27zs1pgrEl
Content-Type: text/plain; charset="utf-8"
Content-ID: <1944D3022A1EC5408CCD757F6F1CA470@eurprd02.prod.outlook.com>
Content-Transfer-Encoding: base64
X-MS-Exchange-CrossTenant-Network-Message-Id: d067ba9e-8c9c-4f83-ca0c-08d7830383e9
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Dec 2019 15:12:22.0510
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 487baf29-f1da-469a-9221-243f830c36f3
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: A4teMOnRYkBZigPSnz6BzBUsDtClQtl/013J/4L4xfNdJRcIMDnzaZnH0IuLq5LRt496/1IKqGcweibRK2gV5UF8Z71xJP43w54Eyy5pPPg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR02MB4386
Return-Path: aisaila@bitdefender.com
X-MS-Exchange-Organization-Network-Message-Id: e5abb707-a0fb-4a92-0f14-08d78303871e
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: MIAPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

VGhpcyBwYXRjaCBhaW1zIHRvIHNhbml0aXplIGluZGV4ZXMsIHBvdGVudGlhbGx5IGd1ZXN0IHBy
b3ZpZGVkDQp2YWx1ZXMsIGZvciBhbHRwMm1fZXB0cFtdIGFuZCBhbHRwMm1fcDJtW10gYXJyYXlz
Lg0KDQpTaWduZWQtb2ZmLWJ5OiBBbGV4YW5kcnUgSXNhaWxhIDxhaXNhaWxhQGJpdGRlZmVuZGVy
LmNvbT4NCi0tLQ0KQ0M6IFJhenZhbiBDb2pvY2FydSA8cmNvam9jYXJ1QGJpdGRlZmVuZGVyLmNv
bT4NCkNDOiBUYW1hcyBLIExlbmd5ZWwgPHRhbWFzQHRrbGVuZ3llbC5jb20+DQpDQzogUGV0cmUg
UGlyY2FsYWJ1IDxwcGlyY2FsYWJ1QGJpdGRlZmVuZGVyLmNvbT4NCkNDOiBHZW9yZ2UgRHVubGFw
IDxnZW9yZ2UuZHVubGFwQGV1LmNpdHJpeC5jb20+DQpDQzogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPg0KQ0M6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+
DQpDQzogV2VpIExpdSA8d2xAeGVuLm9yZz4NCkNDOiAiUm9nZXIgUGF1IE1vbm7DqSIgPHJvZ2Vy
LnBhdUBjaXRyaXguY29tPg0KQ0M6IEp1biBOYWthamltYSA8anVuLm5ha2FqaW1hQGludGVsLmNv
bT4NCkNDOiBLZXZpbiBUaWFuIDxrZXZpbi50aWFuQGludGVsLmNvbT4NCi0tLQ0KIHhlbi9hcmNo
L3g4Ni9tbS9tZW1fYWNjZXNzLmMgfCAxNSArKysrKysrKystLS0tLS0NCiB4ZW4vYXJjaC94ODYv
bW0vcDJtLWVwdC5jICAgIHwgIDUgKysrLS0NCiB4ZW4vYXJjaC94ODYvbW0vcDJtLmMgICAgICAg
IHwgMjcgKysrKysrKysrKysrKysrKystLS0tLS0tLS0tDQogMyBmaWxlcyBjaGFuZ2VkLCAyOSBp
bnNlcnRpb25zKCspLCAxOCBkZWxldGlvbnMoLSkNCg0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4
Ni9tbS9tZW1fYWNjZXNzLmMgYi94ZW4vYXJjaC94ODYvbW0vbWVtX2FjY2Vzcy5jDQppbmRleCAz
MjBiOWZlNjIxLi43MGYzNTI4YmIxIDEwMDY0NA0KLS0tIGEveGVuL2FyY2gveDg2L21tL21lbV9h
Y2Nlc3MuYw0KKysrIGIveGVuL2FyY2gveDg2L21tL21lbV9hY2Nlc3MuYw0KQEAgLTM2NywxMCAr
MzY3LDExIEBAIGxvbmcgcDJtX3NldF9tZW1fYWNjZXNzKHN0cnVjdCBkb21haW4gKmQsIGdmbl90
IGdmbiwgdWludDMyX3QgbnIsDQogICAgIGlmICggYWx0cDJtX2lkeCApDQogICAgIHsNCiAgICAg
ICAgIGlmICggYWx0cDJtX2lkeCA+PSBNQVhfQUxUUDJNIHx8DQotICAgICAgICAgICAgIGQtPmFy
Y2guYWx0cDJtX2VwdHBbYWx0cDJtX2lkeF0gPT0gbWZuX3goSU5WQUxJRF9NRk4pICkNCisgICAg
ICAgICAgICAgZC0+YXJjaC5hbHRwMm1fZXB0cFthcnJheV9pbmRleF9ub3NwZWMoYWx0cDJtX2lk
eCwgTUFYX0VQVFApXSA9PQ0KKyAgICAgICAgICAgICBtZm5feChJTlZBTElEX01GTikgKQ0KICAg
ICAgICAgICAgIHJldHVybiAtRUlOVkFMOw0KIA0KLSAgICAgICAgYXAybSA9IGQtPmFyY2guYWx0
cDJtX3AybVthbHRwMm1faWR4XTsNCisgICAgICAgIGFwMm0gPSBkLT5hcmNoLmFsdHAybV9wMm1b
YXJyYXlfaW5kZXhfbm9zcGVjKGFsdHAybV9pZHgsIE1BWF9BTFRQMk0pXTsNCiAgICAgfQ0KICNl
bHNlDQogICAgIEFTU0VSVCghYWx0cDJtX2lkeCk7DQpAQCAtNDI2LDEwICs0MjcsMTEgQEAgbG9u
ZyBwMm1fc2V0X21lbV9hY2Nlc3NfbXVsdGkoc3RydWN0IGRvbWFpbiAqZCwNCiAgICAgaWYgKCBh
bHRwMm1faWR4ICkNCiAgICAgew0KICAgICAgICAgaWYgKCBhbHRwMm1faWR4ID49IE1BWF9BTFRQ
Mk0gfHwNCi0gICAgICAgICAgICAgZC0+YXJjaC5hbHRwMm1fZXB0cFthbHRwMm1faWR4XSA9PSBt
Zm5feChJTlZBTElEX01GTikgKQ0KKyAgICAgICAgICAgICBkLT5hcmNoLmFsdHAybV9lcHRwW2Fy
cmF5X2luZGV4X25vc3BlYyhhbHRwMm1faWR4LCBNQVhfRVBUUCldID09DQorICAgICAgICAgICAg
IG1mbl94KElOVkFMSURfTUZOKSApDQogICAgICAgICAgICAgcmV0dXJuIC1FSU5WQUw7DQogDQot
ICAgICAgICBhcDJtID0gZC0+YXJjaC5hbHRwMm1fcDJtW2FsdHAybV9pZHhdOw0KKyAgICAgICAg
YXAybSA9IGQtPmFyY2guYWx0cDJtX3AybVthcnJheV9pbmRleF9ub3NwZWMoYWx0cDJtX2lkeCwg
TUFYX0FMVFAyTSldOw0KICAgICB9DQogI2Vsc2UNCiAgICAgQVNTRVJUKCFhbHRwMm1faWR4KTsN
CkBAIC00OTIsMTAgKzQ5NCwxMSBAQCBpbnQgcDJtX2dldF9tZW1fYWNjZXNzKHN0cnVjdCBkb21h
aW4gKmQsIGdmbl90IGdmbiwgeGVubWVtX2FjY2Vzc190ICphY2Nlc3MsDQogICAgIGVsc2UgaWYg
KCBhbHRwMm1faWR4ICkgLyogYWx0cDJtIHZpZXcgMCBpcyB0cmVhdGVkIGFzIHRoZSBob3N0cDJt
ICovDQogICAgIHsNCiAgICAgICAgIGlmICggYWx0cDJtX2lkeCA+PSBNQVhfQUxUUDJNIHx8DQot
ICAgICAgICAgICAgIGQtPmFyY2guYWx0cDJtX2VwdHBbYWx0cDJtX2lkeF0gPT0gbWZuX3goSU5W
QUxJRF9NRk4pICkNCisgICAgICAgICAgICAgZC0+YXJjaC5hbHRwMm1fZXB0cFthcnJheV9pbmRl
eF9ub3NwZWMoYWx0cDJtX2lkeCwgTUFYX0VQVFApXSA9PQ0KKyAgICAgICAgICAgICBtZm5feChJ
TlZBTElEX01GTikgKQ0KICAgICAgICAgICAgIHJldHVybiAtRUlOVkFMOw0KIA0KLSAgICAgICAg
cDJtID0gZC0+YXJjaC5hbHRwMm1fcDJtW2FsdHAybV9pZHhdOw0KKyAgICAgICAgcDJtID0gZC0+
YXJjaC5hbHRwMm1fcDJtW2FycmF5X2luZGV4X25vc3BlYyhhbHRwMm1faWR4LCBNQVhfQUxUUDJN
KV07DQogICAgIH0NCiAjZWxzZQ0KICAgICBBU1NFUlQoIWFsdHAybV9pZHgpOw0KZGlmZiAtLWdp
dCBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5j
DQppbmRleCBiNTUxNzc2OWM5Li5lMDg4YTYzZjU2IDEwMDY0NA0KLS0tIGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYw0KKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYw0KQEAgLTEzNTMs
NyArMTM1Myw4IEBAIHZvaWQgc2V0dXBfZXB0X2R1bXAodm9pZCkNCiANCiB2b2lkIHAybV9pbml0
X2FsdHAybV9lcHQoc3RydWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IGkpDQogew0KLSAgICBz
dHJ1Y3QgcDJtX2RvbWFpbiAqcDJtID0gZC0+YXJjaC5hbHRwMm1fcDJtW2ldOw0KKyAgICBzdHJ1
Y3QgcDJtX2RvbWFpbiAqcDJtID0NCisgICAgICAgICAgIGQtPmFyY2guYWx0cDJtX3AybVthcnJh
eV9pbmRleF9ub3NwZWMoaSwgTUFYX0FMVFAyTSldOw0KICAgICBzdHJ1Y3QgcDJtX2RvbWFpbiAq
aG9zdHAybSA9IHAybV9nZXRfaG9zdHAybShkKTsNCiAgICAgc3RydWN0IGVwdF9kYXRhICplcHQ7
DQogDQpAQCAtMTM2Niw3ICsxMzY3LDcgQEAgdm9pZCBwMm1faW5pdF9hbHRwMm1fZXB0KHN0cnVj
dCBkb21haW4gKmQsIHVuc2lnbmVkIGludCBpKQ0KICAgICBwMm0tPm1heF9tYXBwZWRfcGZuID0g
cDJtLT5tYXhfcmVtYXBwZWRfZ2ZuID0gMDsNCiAgICAgZXB0ID0gJnAybS0+ZXB0Ow0KICAgICBl
cHQtPm1mbiA9IHBhZ2V0YWJsZV9nZXRfcGZuKHAybV9nZXRfcGFnZXRhYmxlKHAybSkpOw0KLSAg
ICBkLT5hcmNoLmFsdHAybV9lcHRwW2ldID0gZXB0LT5lcHRwOw0KKyAgICBkLT5hcmNoLmFsdHAy
bV9lcHRwW2FycmF5X2luZGV4X25vc3BlYyhpLCBNQVhfRVBUUCldID0gZXB0LT5lcHRwOw0KIH0N
CiANCiB1bnNpZ25lZCBpbnQgcDJtX2ZpbmRfYWx0cDJtX2J5X2VwdHAoc3RydWN0IGRvbWFpbiAq
ZCwgdWludDY0X3QgZXB0cCkNCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0vcDJtLmMgYi94
ZW4vYXJjaC94ODYvbW0vcDJtLmMNCmluZGV4IGJhMTI2Zjc5MGEuLjdlN2Y0ZjFhN2MgMTAwNjQ0
DQotLS0gYS94ZW4vYXJjaC94ODYvbW0vcDJtLmMNCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0u
Yw0KQEAgLTI0OTksNyArMjQ5OSw3IEBAIHN0YXRpYyB2b2lkIHAybV9yZXNldF9hbHRwMm0oc3Ry
dWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IGlkeCwNCiAgICAgc3RydWN0IHAybV9kb21haW4g
KnAybTsNCiANCiAgICAgQVNTRVJUKGlkeCA8IE1BWF9BTFRQMk0pOw0KLSAgICBwMm0gPSBkLT5h
cmNoLmFsdHAybV9wMm1baWR4XTsNCisgICAgcDJtID0gZC0+YXJjaC5hbHRwMm1fcDJtW2FycmF5
X2luZGV4X25vc3BlYyhpZHgsIE1BWF9BTFRQMk0pXTsNCiANCiAgICAgcDJtX2xvY2socDJtKTsN
CiANCkBAIC0yNTQwLDcgKzI1NDAsNyBAQCBzdGF0aWMgaW50IHAybV9hY3RpdmF0ZV9hbHRwMm0o
c3RydWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IGlkeCkNCiANCiAgICAgQVNTRVJUKGlkeCA8
IE1BWF9BTFRQMk0pOw0KIA0KLSAgICBwMm0gPSBkLT5hcmNoLmFsdHAybV9wMm1baWR4XTsNCisg
ICAgcDJtID0gZC0+YXJjaC5hbHRwMm1fcDJtW2FycmF5X2luZGV4X25vc3BlYyhpZHgsIE1BWF9B
TFRQMk0pXTsNCiAgICAgaG9zdHAybSA9IHAybV9nZXRfaG9zdHAybShkKTsNCiANCiAgICAgcDJt
X2xvY2socDJtKTsNCkBAIC0yNjIyLDkgKzI2MjIsMTAgQEAgaW50IHAybV9kZXN0cm95X2FsdHAy
bV9ieV9pZChzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgaWR4KQ0KICAgICByYyA9IC1F
QlVTWTsNCiAgICAgYWx0cDJtX2xpc3RfbG9jayhkKTsNCiANCi0gICAgaWYgKCBkLT5hcmNoLmFs
dHAybV9lcHRwW2lkeF0gIT0gbWZuX3goSU5WQUxJRF9NRk4pICkNCisgICAgaWYgKCBkLT5hcmNo
LmFsdHAybV9lcHRwW2FycmF5X2luZGV4X25vc3BlYyhpZHgsIE1BWF9FUFRQKV0gIT0NCisgICAg
ICAgICBtZm5feChJTlZBTElEX01GTikgKQ0KICAgICB7DQotICAgICAgICBwMm0gPSBkLT5hcmNo
LmFsdHAybV9wMm1baWR4XTsNCisgICAgICAgIHAybSA9IGQtPmFyY2guYWx0cDJtX3AybVthcnJh
eV9pbmRleF9ub3NwZWMoaWR4LCBNQVhfQUxUUDJNKV07DQogDQogICAgICAgICBpZiAoICFfYXRv
bWljX3JlYWQocDJtLT5hY3RpdmVfdmNwdXMpICkNCiAgICAgICAgIHsNCkBAIC0yNjg2LDExICsy
Njg3LDEzIEBAIGludCBwMm1fY2hhbmdlX2FsdHAybV9nZm4oc3RydWN0IGRvbWFpbiAqZCwgdW5z
aWduZWQgaW50IGlkeCwNCiAgICAgbWZuX3QgbWZuOw0KICAgICBpbnQgcmMgPSAtRUlOVkFMOw0K
IA0KLSAgICBpZiAoIGlkeCA+PSBNQVhfQUxUUDJNIHx8IGQtPmFyY2guYWx0cDJtX2VwdHBbaWR4
XSA9PSBtZm5feChJTlZBTElEX01GTikgKQ0KKyAgICBpZiAoIGlkeCA+PSBNQVhfQUxUUDJNIHx8
DQorICAgICAgICAgZC0+YXJjaC5hbHRwMm1fZXB0cFthcnJheV9pbmRleF9ub3NwZWMoaWR4LCBN
QVhfRVBUUCldID09DQorICAgICAgICAgbWZuX3goSU5WQUxJRF9NRk4pICkNCiAgICAgICAgIHJl
dHVybiByYzsNCiANCiAgICAgaHAybSA9IHAybV9nZXRfaG9zdHAybShkKTsNCi0gICAgYXAybSA9
IGQtPmFyY2guYWx0cDJtX3AybVtpZHhdOw0KKyAgICBhcDJtID0gZC0+YXJjaC5hbHRwMm1fcDJt
W2FycmF5X2luZGV4X25vc3BlYyhpZHgsIE1BWF9BTFRQMk0pXTsNCiANCiAgICAgcDJtX2xvY2so
aHAybSk7DQogICAgIHAybV9sb2NrKGFwMm0pOw0KQEAgLTMwMzAsMTAgKzMwMzMsMTIgQEAgaW50
IHAybV9zZXRfc3VwcHJlc3NfdmUoc3RydWN0IGRvbWFpbiAqZCwgZ2ZuX3QgZ2ZuLCBib29sIHN1
cHByZXNzX3ZlLA0KICAgICBpZiAoIGFsdHAybV9pZHggPiAwICkNCiAgICAgew0KICAgICAgICAg
aWYgKCBhbHRwMm1faWR4ID49IE1BWF9BTFRQMk0gfHwNCi0gICAgICAgICAgICAgZC0+YXJjaC5h
bHRwMm1fZXB0cFthbHRwMm1faWR4XSA9PSBtZm5feChJTlZBTElEX01GTikgKQ0KKyAgICAgICAg
ICAgICBkLT5hcmNoLmFsdHAybV9lcHRwW2FycmF5X2luZGV4X25vc3BlYyhhbHRwMm1faWR4LCBN
QVhfRVBUUCldID09DQorICAgICAgICAgICAgIG1mbl94KElOVkFMSURfTUZOKSApDQogICAgICAg
ICAgICAgcmV0dXJuIC1FSU5WQUw7DQogDQotICAgICAgICBwMm0gPSBhcDJtID0gZC0+YXJjaC5h
bHRwMm1fcDJtW2FsdHAybV9pZHhdOw0KKyAgICAgICAgcDJtID0gYXAybSA9IGQtPmFyY2guYWx0
cDJtX3AybVthcnJheV9pbmRleF9ub3NwZWMoYWx0cDJtX2lkeCwNCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIE1BWF9BTFRQMk0pXTsN
CiAgICAgfQ0KICAgICBlbHNlDQogICAgICAgICBwMm0gPSBob3N0X3AybTsNCkBAIC0zMDczLDEw
ICszMDc4LDEyIEBAIGludCBwMm1fZ2V0X3N1cHByZXNzX3ZlKHN0cnVjdCBkb21haW4gKmQsIGdm
bl90IGdmbiwgYm9vbCAqc3VwcHJlc3NfdmUsDQogICAgIGlmICggYWx0cDJtX2lkeCA+IDAgKQ0K
ICAgICB7DQogICAgICAgICBpZiAoIGFsdHAybV9pZHggPj0gTUFYX0FMVFAyTSB8fA0KLSAgICAg
ICAgICAgICBkLT5hcmNoLmFsdHAybV9lcHRwW2FsdHAybV9pZHhdID09IG1mbl94KElOVkFMSURf
TUZOKSApDQorICAgICAgICAgICAgIGQtPmFyY2guYWx0cDJtX2VwdHBbYXJyYXlfaW5kZXhfbm9z
cGVjKGFsdHAybV9pZHgsIE1BWF9FUFRQKV0gPT0NCisgICAgICAgICAgICAgbWZuX3goSU5WQUxJ
RF9NRk4pICkNCiAgICAgICAgICAgICByZXR1cm4gLUVJTlZBTDsNCiANCi0gICAgICAgIHAybSA9
IGFwMm0gPSBkLT5hcmNoLmFsdHAybV9wMm1bYWx0cDJtX2lkeF07DQorICAgICAgICBwMm0gPSBh
cDJtID0gZC0+YXJjaC5hbHRwMm1fcDJtW2FycmF5X2luZGV4X25vc3BlYyhhbHRwMm1faWR4LA0K
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgTUFYX0FMVFAyTSldOw0KICAgICB9DQogICAgIGVsc2UNCiAgICAgICAgIHAybSA9IGhvc3Rf
cDJtOw0KLS0gDQoyLjE3LjENCg0K

From - Wed Dec 18 10:46:18 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Tue, 17 Dec 2019 16:12:39 +0100
Received: from LASPEX02MSOL01.citrite.net (10.160.21.45) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Tue, 17 Dec 2019 10:12:35 -0500
Received: from esa2.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL01.citrite.net (10.160.21.45) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 07:12:35 -0800
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  aisaila@bitdefender.com) identity=pra;
  client-ip=40.107.5.101; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  aisaila@bitdefender.com designates 40.107.5.101 as permitted
  sender) identity=mailfrom; client-ip=40.107.5.101;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  postmaster@EUR03-VE1-obe.outbound.protection.outlook.com
  designates 40.107.5.101 as permitted sender) identity=helo;
  client-ip=40.107.5.101; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="postmaster@EUR03-VE1-obe.outbound.protection.outlook.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Authentication-Results: esa2.hc3370-68.iphmx.com; spf=None smtp.pra=aisaila@bitdefender.com; spf=Pass smtp.mailfrom=aisaila@bitdefender.com; spf=Pass smtp.helo=postmaster@EUR03-VE1-obe.outbound.protection.outlook.com; dkim=pass (signature verified) header.i=@bitdefender.onmicrosoft.com
IronPort-SDR: GSSRfHFuftbjDZqjEGNGdGVpHvQ1LO0oONGULXCbprWGSf5zqtGpfGNh7gZy7I/ocm7KQtu568
 GktZuqO0lYsp0dsGfbNf684WENRVd0wAYKqN/YMYYP/uNRR1WZbugM7NcZnpedTqjYuegyOVim
 +t1mv3fkB31thRnzIGi/O/BSNKKMmuKnzDrY5a9pJoRWMNi9xI+76shRaROc3qlix2+/bWiR/r
 qMLJKQiChWR/KqIKqfysQNmS+hFoTzK8jBE2K9aCAvLXoE11mBv6sHUQfnbHMg9nRIWhFXE2Nd
 CyhRSiM0IiDhIWJt5ImuVUdq
X-IronPort-RemoteIP: 40.107.5.101
X-IronPort-MID: 9822597
X-IronPort-Reputation: 3.5
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: SBRS_Whitelist
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.5
X-MesageID: 9822597
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 40.107.5.101
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3A/ERh7RCbzdeTkaV11LYgUyQJP3N1i/DPJgcQr6?=
 =?us-ascii?q?AfoPdwSP37rsSwAkXT6L1XgUPTWs2DsrQY0rGQ6fi/EjNZqb+681k6OKRWUB?=
 =?us-ascii?q?EEjchE1ycBO+WiTXPBEfjxciYhF95DXlI2t1uyMExSBdqsLwaK+i764jEdAA?=
 =?us-ascii?q?jwOhRoLerpBIHSk9631+ev8JHPfglEnjWwba58IRmsswndqssbjYRgJ6os1x?=
 =?us-ascii?q?DEvmZGd+NKyG1yOFmdhQz85sC+/J5i9yRfpfcs/NNeXKv5Yqo1U6VWACwpPG?=
 =?us-ascii?q?4p6sLrswLDTRaU6XsHTmoWiBtIDBPb4xz8Q5z8rzH1tut52CmdIM32UbU5Ui?=
 =?us-ascii?q?ms4qt3VBPljjoMOiUn+2/LlMN/kKNboAqgpxNhxY7UfJqVP+d6cq/EYN8WWX?=
 =?us-ascii?q?ZNUsNXWidcAI2zcpEPAvIBM+hGsof9u1UAoxiwBQauA+3vyyNHiHD50qAhz+?=
 =?us-ascii?q?QuChvL0BA6Et4SsnnZqsj+OqcIUeCyyanF1SnOb/dI1jby8ofIdA0uoeuRXb?=
 =?us-ascii?q?ltbMTR1VcgFw3fgVWWtIfrPC6b2/gOvWad8+drSOWihHQmqwFquDevx8MshZ?=
 =?us-ascii?q?PSi40Oy1DE6Th2z5g7JdKmTk50fMCrEIFKuy6GMIt2R9ovTmd1syg0zb0GvI?=
 =?us-ascii?q?S0fCkMyJk/wh7QcfOGc5CI4hPjT+aeOyp3i29/eL6lnhq+6FGsyvH7Vsmz1F?=
 =?us-ascii?q?ZKtSxImcTPuHAVzxHe5dSLRuFg8ku92zuDzQDe5vtZLUwoiKbXM5oszqMqmp?=
 =?us-ascii?q?YOtUnOGjX6lFjqgKKZbEkp+/Ck6+r5bbjlupORMop5hwLiPaswhsOyBOY1Pw?=
 =?us-ascii?q?kAUmWY/Omx1rPu8ELlT7hPjfA7lrTWvZbHLsoBvKG5GRVa0oM75ha/ETim1N?=
 =?us-ascii?q?MYkGEIIl1LZByLk4bkN0jBL/73EPuzmlOsnyx1yPzcOb3hH4nNIWPEkLf8e7?=
 =?us-ascii?q?Zy9lRQyBIpzdBY+5JbFK0OIO7yWk/2stzUFBg5MxGow+bjD9V90YAeVXiTDa?=
 =?us-ascii?q?+eNaPeqV6I5uQxLOmQfIIYtyrxJ+I46/Lyj3I1g18QcbO00ZcLdXy0BvFmLF?=
 =?us-ascii?q?+YYXrojNcBC2AKvg8mQePwiV2CSiRcZ3e2X60m/Tw0E4OmDZveSY+zjryOwi?=
 =?us-ascii?q?G7EYBIaWBcEFyDDXDod4CcV/cWdC2SOtNhkiADVbW5So8uzxeuuBX4y7V9Mu?=
 =?us-ascii?q?XU+TYYtZXl1NVu+eLTiAs++iB1D8SByWGNTm51knkUSD8x2aB1uVZ9xUub0a?=
 =?us-ascii?q?hkn/xYEsRe6fJXXQc9L57cwPJ1B8r8VALceNeJTEypQs29DD4vU9I92cMBY0?=
 =?us-ascii?q?dhG9W4jxDC0DCmDKMSl7yOV9QI9feWz3X3Yspw1XvC/K0glEU9BNtCM3W8ga?=
 =?us-ascii?q?xy/BSVAJTG2Q3Nh6usMKgRwiPJ3GOC1naV+lFVVhZqVqfIVmxZYVHZ+4fX/E?=
 =?us-ascii?q?THGpaqBKQuNAdb1Ia5NqZEY9uhrG5vZdPKcIDXYnm4mmO5LRyJ2r+BYofsd2?=
 =?us-ascii?q?gHmi7aDR5XwEgo4X+aOF1mVW+aqGXEAWkrTAq3Oh6+3cpfjTaWUEYw0giWbk?=
 =?us-ascii?q?pni+fvqABA3K/UQqYJxbtBoyco8WgqTx62isjbD9OQqgYmZqhYaMMw7AQity?=
 =?us-ascii?q?rV4gZ8IpCtIa1kilMENQNxukLlzRJsDYtc18MtqSBi119pJKaV209GbWnd15?=
 =?us-ascii?q?HsNrDZJ2/+8QrqbKjT21rE182R9LtK4/M9+DCB9AH8OFAl9idL2sVYgUCB7J?=
 =?us-ascii?q?fHBxZAdJ/qSU84+l1bi+PxZS8h6oXIkEF9KaTmlx7n9pcXCfE+yxGmLecaFZ?=
 =?us-ascii?q?jBOR/5E8QcCMXrE+Esl1WzRz4vPO1Z9/1RXYusd67b6aPwPvxcn2KIn3xAvZ?=
 =?us-ascii?q?tyiEWMrSdnG63mxr0f4NKT3BKBXCbCj3y7vPHazNMhB3kYS1ex03H+DrASY6?=
 =?us-ascii?q?diXZ82WT/2HsOFwPtH3tnUfFJe+XLgLU0N+NGsYgaeSXHBhzRS0nozmlLkuy?=
 =?us-ascii?q?u/ySNLjiwqhLvBgTTk2Pb/LEsaKEBgdThGt3r9PraegIsbWhW3XlEtnSX0/W?=
 =?us-ascii?q?+l3PR7vIc4bGvLanlRfBP6NTpNdY/3hJyLZfYQ17MHknhYaeq8MWC1Eaz++S?=
 =?us-ascii?q?sk3Q+yBU9+5TU1ZSirpoyggT9DmkWzDmYv9C+KMdE1xA3Y4sTbX+IUxDcdWS?=
 =?us-ascii?q?1k3CHeHUPvZYP71MiIl5rFruG1Xn6gUZsWSyTw0Iecr3Hlvz9nGhTl2fC4wY?=
 =?us-ascii?q?a4SU1kgWn6z9lvRWPDqxOvKpLz2fGcNuRqNlJtGEe689BzT4h/iYIxgJg403?=
 =?us-ascii?q?kGipib8HwLnH21OtJeiurldHRYfTcQ2JbO5RT9nkhqL3aH3YX8A3yc2cJgY9?=
 =?us-ascii?q?CSaGIK1i8z4sZGBb3S57tBzmNuulTtlQvKerBmmysFj/sj7HlPm+YSpA8k1T?=
 =?us-ascii?q?mQGJg/IHMAZmnSuk3N6Nqz6qJKeGyobL68klJkmsysB62DpQcaX2vlfpAlHm?=
 =?us-ascii?q?l76cAseF7P0Xim8oj/Y5GQdtMcsBSIjg3Nx/ZYMpM/l/cGxGJnNGvxsGdjyr?=
 =?us-ascii?q?sThx1y0Zy0sY6LJn8r+6S8AxVCMSbyad9V8Tbo1PwMpMud0oGxE5kkIQ0lB8?=
 =?us-ascii?q?K0H9SvFj9a9fn8PlzIEDZn8SjDXOSPWw6H6EJ26XnIFsLjMXbfP3Qfwdh4IX?=
 =?us-ascii?q?vVbEVCnAAZWik7lZ8lB0irwsLmakJw+jEW4BbxtBJNzutiMxS3XH3YoU+kbT?=
 =?us-ascii?q?I9SZ7XKxQzjEkK7kbOOM2a58p5Hj1U85OsqgCAMCqQYAEJRWAFV0qYBkzya6?=
 =?us-ascii?q?G07IqlkaDQDe6/Iv3SJLSW/LAGEa7QmtTylNI+pmXELMiEM3h8Audu11FKBz?=
 =?us-ascii?q?Z5EJ+CxG1KFXxRljrNat7drxC5qUgV5oiy9urmXAX36M6BEbxXZJ9m+gu/gK?=
 =?us-ascii?q?OKH+SRmCp0Jzte2p4Wg3TPzfJMuTxawzErbDSrHbka4GTETbnZm6teJxQady?=
 =?us-ascii?q?9+OsZO468mmAJKPISI77G9nq49hfkzBVBfUFXnkcz8fs0GLVa2M1bfDVqKPr?=
 =?us-ascii?q?CLfGeZ+cz8bKKiRLEVt91660zq6w6SCFSrfjmYnmOvVxv0ar4UyX/Leh1GuI?=
 =?us-ascii?q?StNB1qDDqrQNXjYxy9eNh56F9+ibQ1nXrLMWc0Ozlgf05Do7uc4DkeifJ6U2?=
 =?us-ascii?q?BM9XtqK+CYlj3Rs7Gec85J96YtW3wt3+tBqGw30b5U8D1JSJkX0GPJo9hirk?=
 =?us-ascii?q?vn2uiDxzx7UQZf/zNChYaFp0Jnau3S8phNX2qB/QpYsT3WUkxV4YYjU4W26M?=
 =?us-ascii?q?UygpDVman+KSlP6YfZ5sJGQcjfc5nYaD9/YVzoAD7RHE0OSjv4UAOXz0Fbjv?=
 =?us-ascii?q?yW8WWY65YgrZ250pMPUbZaUFUdHPIGBklrEdoOLY0xVTQh2+3+7oZA9T+loR?=
 =?us-ascii?q?/dSd8P9JnGTfuTBfzHIjeFgbRKah0EzKm+JoMWfN6euQQqehxxm4LEHFDVVN?=
 =?us-ascii?q?ZGr3h6bwM6l05K9WB3Umw530+2IhPo+nIYEuS42wInkgYrK/p47y/iuh1kQz?=
 =?us-ascii?q?iC7Dt1ikQ6ns/pxCycYCKkZrnlRplYUmL1rxRjbsu9ElwzNUvq2hU5fDbcG+?=
 =?us-ascii?q?AN1/05LT8t0EmE/sIRfJwUBaxcPE1Nn7fOP61uiRIE7Xz7jU5fubmcUcckyF?=
 =?us-ascii?q?RsKdj06CsfkwN7MoxvLPSJdvMQlwpe2vrW7H3virBUokdWJl5TojmbIHda4R?=
 =?us-ascii?q?VRZLd6f3H6rKswuUTHkj9HMjFWBcAnqf9r6E4xft+44X+9j+x7I1uqf6ySNK?=
 =?us-ascii?q?rD/WjLzpXXGhZuhgUJj0lA7f591sJxO0aTUkkuyvOWGXFrfYLaLhpJasNJ6H?=
 =?us-ascii?q?XJVQuzi72WhKxYZsC6HO2uSvKSvqEJhE7iBBwuA4kH8sUGGN+rzV3cKsDkar?=
 =?us-ascii?q?UCzHBPrEznKU6EA/JAZB+Q2Gtf5Zjnk9kqm9AFf3kUGi1lPD+y56rLqwNim/?=
 =?us-ascii?q?eFUNosIz8bUoYCKnMqSZi6lipe7BEiRHG81uMUzhTH7iep+nyWVWGjKYclPa?=
 =?us-ascii?q?/NNlt2Bdq7+Csy6f27kleNtJXVfDqlbZE86pnO8eMft9CMDPYHKNs1+0rahY?=
 =?us-ascii?q?RcQGSnFmDVFtvgbZHxcIAtadXcAHemXlGxhjQ5Qt23N9GoZPvt40mgVcNPvY?=
 =?us-ascii?q?+X0Sp2f9e6DS0bEgxsqvsryZ9GPVZGXbdiJBniukI5KrC1Jxqe3pO2WWGxJD?=
 =?us-ascii?q?BKTv5Zi+Kne7hQyClqZei/gihFLNly36y89kgDQ4sPhxfVyKO4ZoVQZiP0H2?=
 =?us-ascii?q?RUZwTFoSdq33gkLOs5xf0zhQ/ZqVRJeS7eb/RnMSYX2rN0TUPXO3h9DXA0Ak?=
 =?us-ascii?q?ORnZaWqBD5xKgcpmNch4oGjbUD4SK45tmHJ2vxEK2z9ceJ62x5NYdg++spdt?=
 =?us-ascii?q?W9R6nO/JLGwm6CFt+J6lXDCGjiUKMG0tlIfHABGKUOxT5jYYpe/tMeoUspCJ?=
 =?us-ascii?q?VkLuQWWvB1/+KkNWI8X3xVkX98NcvI3SRc0L21g+KIz07JIpp+aEdW4tIe0p?=
 =?us-ascii?q?NYWipyKHpC9p/mbJ3fkiq/ckZOOB0atFse7gMcmoJ+c+bp7ZCOR5hJmWYP/q?=
 =?us-ascii?q?BEFxDTH5wtzGPVD2GbhV+kF6eMrtbxhkds4aup1dMWHhliFUJa2uBa0FMyL6?=
 =?us-ascii?q?16ILURuYiMtSKUcUT9vyTmz+70fQAAm/2RTEXxCc/+jUS5VyQd/XMOQooWly?=
 =?us-ascii?q?PVFIgengR0bqomvhNHJ4X0I0s=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0GvAQCb7/hdh2UFayhlHgELHINJUIF?=
 =?us-ascii?q?mAwQLKgqDeoNGA4U6hVSdNwMYPAkEAQEKAS0CAQEBAYQ+AheCHwYBBDQTAgE?=
 =?us-ascii?q?CAQwBAQEDAQEBAgECAwICAQECEAEBAQgNCQgphT4BC4I7KQFpOTgBAQEBAQE?=
 =?us-ascii?q?BAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEFAoEMPgIBAxIRBA0MAQE?=
 =?us-ascii?q?3AQ8CASICJgICAjAVEAIEDgUigwCCRwMvAgGiAz0CIwE/AguBBCmIYAEBdH8?=
 =?us-ascii?q?zgn4BAQWFGRhYgT8JgQ4ohz+CU4IaBoFBP4ERgmVshEgIECOCVoJejUGCOzm?=
 =?us-ascii?q?eVCiCFoxmiR4ngkOYBo5NgUaZBwIEAgQFAg4BAQWBaYF7MyIbFYMnUBEUjRI?=
 =?us-ascii?q?ag1mKU0IBMQGBJ49IAYEPAQE?=
X-IPAS-Result: =?us-ascii?q?A0GvAQCb7/hdh2UFayhlHgELHINJUIFmAwQLKgqDeoNGA?=
 =?us-ascii?q?4U6hVSdNwMYPAkEAQEKAS0CAQEBAYQ+AheCHwYBBDQTAgECAQwBAQEDAQEBA?=
 =?us-ascii?q?gECAwICAQECEAEBAQgNCQgphT4BC4I7KQFpOTgBAQEBAQEBAQEBAQEBAQEBA?=
 =?us-ascii?q?QEBAQEBAQEBAQEBAQEBAQEBAQEFAoEMPgIBAxIRBA0MAQE3AQ8CASICJgICA?=
 =?us-ascii?q?jAVEAIEDgUigwCCRwMvAgGiAz0CIwE/AguBBCmIYAEBdH8zgn4BAQWFGRhYg?=
 =?us-ascii?q?T8JgQ4ohz+CU4IaBoFBP4ERgmVshEgIECOCVoJejUGCOzmeVCiCFoxmiR4ng?=
 =?us-ascii?q?kOYBo5NgUaZBwIEAgQFAg4BAQWBaYF7MyIbFYMnUBEUjRIag1mKU0IBMQGBJ?=
 =?us-ascii?q?49IAYEPAQE?=
X-IronPort-AV: E=Sophos;i="5.69,325,1571716800"; 
   d="scan'208";a="9822597"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDEZotQ1MnFS2trojv24rD6d9bmMLY42s+40Cr?=
 =?us-ascii?q?nkmV90iWzkajj9ugbvzp5mecw7HoelgaVAvHT0BJzZInrtbyMTc4jU/b?=
 =?us-ascii?q?I6I+KGBjid3Mp+JKuFNzFJF0X2BDw8711O3ceMvoNzQT5qcetuQOuO31?=
 =?us-ascii?q?gp+8mQkkXQ0yi0Reuu5crdzw=3D=3D?=
Received: from mail-eopbgr50101.outbound.protection.outlook.com (HELO EUR03-VE1-obe.outbound.protection.outlook.com) ([40.107.5.101])
  by esa2.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Dec 2019 10:12:34 -0500
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=npc6E7hfAb/6d4ckjxiKauCZtCyMWNNv+ADaAvs4bOyN5OSSA9JGcPZLc6pwb4tFRNiX9sYLnPK/6GoU+mokU8EF6sIQbd3tZNmaY7WSYlXO+Oh/Y+YHq2x1NutsE6oxturlVsxUMKZImTEmJdgrT86flhMkmDc84WPNfnX4ZcWxgfzI0fBlr5N1FxmgAe3u26XUlw4LcrHAZHmJkAo2Jlj5i47HlDgRUGEvHXDJhKLzzWOqzEr4i56cCbNl6c74PivWvouXlfEsdk3QGlpvILkLOUEQi4FA8pZximYFVPdHhs5Yo5Hr+mumZSuAM1ugmcgALcYyz4Uw/bEIhkCDPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Ye3jrTk8Fzpo5sg2oh2es9+FqCrKKcjqncyVkCrfYE=;
 b=oPGnIe72ay85MMjdsb35kk5kkfLxYd7/n8JfZsJ1vkIOwUFJSQD2YTa0f85+ALzB5e7i9C2S4MszoJ5FiYYoR0ZwVGJ8MicSCrUV7NYJENeaCxO8NdaTgCZ1Q9O8Ige6osZGaPcqPxnMRDB9c22JgOPA9haMuNbSllbHPYHYYOUIYMkhySX8AlP87uhI1NeBWk05BfOs7ZB6B4ME43cI4yKk1RxKug0/AcUMBAqI+Xe2gacOzRC/2FgihT7633t9rpkei1M4uBDHrDzEegM/mjJ/rucSxWAVniruVfnFOhdJbqUVAWaJzWX88i9/Sto8jwD3ci4erU0NGS8zx2efeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=bitdefender.com; dmarc=pass action=none
 header.from=bitdefender.com; dkim=pass header.d=bitdefender.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=bitdefender.onmicrosoft.com; s=selector2-bitdefender-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Ye3jrTk8Fzpo5sg2oh2es9+FqCrKKcjqncyVkCrfYE=;
 b=Ufr1rbV+mbxHqU800UfZnFQ1/QODneD+BvcHvmstdNET2QjoVLSF+JhonwWuukBy65pBszu09uzFCY9LREywVBg3in5wY1laV9pA7s4HpK/+jpCVqkSdr3GOL/ZNanP7WHGK5gQfh8XG3zk2XVfD9rLHImfzqitq1vKQzOFJx40=
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com (10.255.30.78) by
 AM0PR02MB4386.eurprd02.prod.outlook.com (20.178.17.212) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2538.20; Tue, 17 Dec 2019 15:12:31 +0000
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d]) by AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d%4]) with mapi id 15.20.2538.019; Tue, 17 Dec 2019
 15:12:31 +0000
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Alexandru Stefan ISAILA <aisaila@bitdefender.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
	"Jan Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, George Dunlap <george.dunlap@eu.citrix.com>, Razvan
 COJOCARU <rcojocaru@bitdefender.com>, Tamas K Lengyel <tamas@tklengyel.com>,
	Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>
Subject: [PATCH V4 2/4] x86/altp2m: Add hypercall to set a range of sve bits
Thread-Topic: [PATCH V4 2/4] x86/altp2m: Add hypercall to set a range of sve
 bits
Thread-Index: AQHVtOxnMZNvtoiQIE62yqEH5VgLvA==
Date: Tue, 17 Dec 2019 15:12:31 +0000
Message-ID: <20191217151144.9781-2-aisaila@bitdefender.com>
References: <20191217151144.9781-1-aisaila@bitdefender.com>
In-Reply-To: <20191217151144.9781-1-aisaila@bitdefender.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-clientproxiedby: AM0PR01CA0067.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:e6::44) To AM0PR02MB5553.eurprd02.prod.outlook.com
 (2603:10a6:208:160::14)
x-ms-exchange-messagesentrepresentingtype: 1
x-mailer: git-send-email 2.17.1
x-originating-ip: [91.199.104.6]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: e424ed4a-de6c-4ee7-485b-08d7830389b2
x-ms-traffictypediagnostic: AM0PR02MB4386:|AM0PR02MB4386:|AM0PR02MB4386:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR02MB4386F75A2AB30B7A146F1250AB500@AM0PR02MB4386.eurprd02.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:83;
x-forefront-prvs: 02543CD7CD
x-forefront-antispam-report: SFV:NSPM;SFS:(10019020)(366004)(136003)(396003)(39860400002)(346002)(376002)(189003)(199004)(7416002)(6512007)(52116002)(64756008)(86362001)(186003)(8936002)(6486002)(4326008)(2906002)(54906003)(107886003)(66476007)(71200400001)(66556008)(66946007)(36756003)(26005)(81156014)(81166006)(5660300002)(8676002)(6506007)(478600001)(66446008)(1076003)(2616005)(316002)(6916009);DIR:OUT;SFP:1102;SCL:1;SRVR:AM0PR02MB4386;H:AM0PR02MB5553.eurprd02.prod.outlook.com;FPR:;SPF:None;LANG:en;PTR:InfoNoRecords;MX:1;A:1;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: EnEhYEFyc8I/6RKCBwYxsBw+g/HO2QKmmNcAj2HBYDK3qI3kvTQ90Tv82bM0wZmMGRfQIZLmu/9j5EyfSX6d3Z899Un1OvT9+oRfT54HXFdKGIJMTr1vVQDTfymOFkE9iYCKJeEyzeGwISLx1HYcT5fJ4aUEF1fPR2JxowwGMA7ldfxZuSvaig5OrNT1mX/9VH3a6EY2vtzmACqIWWM1/xLOUrnkxR5QrTwR6CI3RdQ6SYLayVxzEn03Wx33lK7dL5YHoJn1RlSwiCaGdMvu0F90PB3hFznXA0N+IYTR14/UixelN8h670dwHrC6lRy/8fCfjveAYPyH0Xay1KOcsFdywUlAb1PrN8eJ6RIFCxX81wg33z28xcL49R6iS/gtricRC3ia0ByyugqxeWoyMVFA++XyGPSST7S8RYnJIL4+Jo1f8gCz6KGfiU3gMAvY
Content-Type: text/plain; charset="utf-8"
Content-ID: <F8C450FB4DD57740BD59FDA42D132892@eurprd02.prod.outlook.com>
Content-Transfer-Encoding: base64
X-MS-Exchange-CrossTenant-Network-Message-Id: e424ed4a-de6c-4ee7-485b-08d7830389b2
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Dec 2019 15:12:31.6266
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 487baf29-f1da-469a-9221-243f830c36f3
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: WVqs/bu4Sceq/q/29XVEceoO1vbqpcJVfvXroP9HqNFNSqabScfs201Q3vJbdafY7pnE/eUMRZsiwoUpiCmwpmknkEH4+AgfExhYhCvmkXw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR02MB4386
Return-Path: aisaila@bitdefender.com
X-MS-Exchange-Organization-Network-Message-Id: 470dcf58-1cea-476c-b98f-08d783038c3d
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL01.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

QnkgZGVmYXVsdCB0aGUgc3ZlIGJpdHMgYXJlIG5vdCBzZXQuDQpUaGlzIHBhdGNoIGFkZHMgYSBu
ZXcgaHlwZXJjYWxsLCB4Y19hbHRwMm1fc2V0X3N1cHJlc3NfdmVfbXVsdGkoKSwNCnRvIHNldCBh
IHJhbmdlIG9mIHN2ZSBiaXRzLg0KVGhlIGNvcmUgZnVuY3Rpb24sIHAybV9zZXRfc3VwcHJlc3Nf
dmVfbXVsdGkoKSwgZG9lcyBub3QgYnJha2UgaW4gY2FzZQ0Kb2YgYSBlcnJvciBhbmQgaXQgaXMg
ZG9pbmcgYSBiZXN0IGVmZm9ydCBmb3Igc2V0dGluZyB0aGUgYml0cyBpbiB0aGUNCmdpdmVuIHJh
bmdlLiBBIGNoZWNrIGZvciBjb250aW51YXRpb24gaXMgbWFkZSBpbiBvcmRlciB0byBoYXZlDQpw
cmVlbXB0aW9uIG9uIGJpZyByYW5nZXMuDQpUaGUgZ2ZuIG9mIHRoZSBmaXJzdCBlcnJvciBpcyBz
dG9yZWQgaW4NCnhlbl9odm1fYWx0cDJtX3N1cHByZXNzX3ZlX211bHRpLmZpcnN0X2Vycm9yIGFu
ZCB0aGUgZXJyb3IgY29kZSBpcw0Kc3RvcmVkIGluIHhlbl9odm1fYWx0cDJtX3N1cHByZXNzX3Zl
X211bHRpLmZpcnN0X2Vycm9yX2NvZGUuDQpJZiBubyBlcnJvciBvY2N1cnJlZCB0aGUgdmFsdWVz
IHdpbGwgYmUgMC4NCg0KU2lnbmVkLW9mZi1ieTogQWxleGFuZHJ1IElzYWlsYSA8YWlzYWlsYUBi
aXRkZWZlbmRlci5jb20+DQotLS0NCkNDOiBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25AZXUuY2l0
cml4LmNvbT4NCkNDOiBXZWkgTGl1IDx3bEB4ZW4ub3JnPg0KQ0M6IEFuZHJldyBDb29wZXIgPGFu
ZHJldy5jb29wZXIzQGNpdHJpeC5jb20+DQpDQzogR2VvcmdlIER1bmxhcCA8R2VvcmdlLkR1bmxh
cEBldS5jaXRyaXguY29tPg0KQ0M6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4NCkND
OiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPg0KQ0M6IEtvbnJhZCBSemVzenV0ZWsgV2ls
ayA8a29ucmFkLndpbGtAb3JhY2xlLmNvbT4NCkNDOiBTdGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc+DQpDQzogIlJvZ2VyIFBhdSBNb25uw6kiIDxyb2dlci5wYXVAY2l0
cml4LmNvbT4NCkNDOiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGV1LmNpdHJpeC5jb20+
DQpDQzogUmF6dmFuIENvam9jYXJ1IDxyY29qb2NhcnVAYml0ZGVmZW5kZXIuY29tPg0KQ0M6IFRh
bWFzIEsgTGVuZ3llbCA8dGFtYXNAdGtsZW5neWVsLmNvbT4NCkNDOiBQZXRyZSBQaXJjYWxhYnUg
PHBwaXJjYWxhYnVAYml0ZGVmZW5kZXIuY29tPg0KLS0tDQpDaGFuZ2VzIHNpbmNlIFYzOg0KCS0g
VXBkYXRlIGNvbW1pdCBtZXNzYWdlDQoJLSBDaGVjayByYyBhbmQgX19jb3B5X3RvX2d1ZXN0KCkg
aW4gdGhlIHNhbWUgaWYNCgktIEZpeCBzdHlsZSBpc3N1ZQ0KCS0gRml4IGNvbW1lbnQgdHlwbw0K
CS0gSW5pdCBwMm0gd2l0aCBob3N0X3AybQ0KCS0gVXNlIGFycmF5X2luZGV4X25vc3BlYygpIGlu
IGFsdHAybV9wMm1bXSBhbmQgYWx0cDJtX2VwdHBbXQ0KCS0gRHJvcCBvcGFxdWUNCgktIFVzZSBw
YWQyIHRvIHJldHVybiBmaXJzdCBlcnJvciBjb2RlDQoJLSBVcGRhdGUgZmlyc3RfZ2ZuDQoJLSBT
dG9wIHRoZSByYW5nZSBsb29wIGF0IGNwdWlkLT5leHRkLm1heHBoeXNhZGRyLg0KLS0tDQogdG9v
bHMvbGlieGMvaW5jbHVkZS94ZW5jdHJsLmggICB8ICA0ICsrKw0KIHRvb2xzL2xpYnhjL3hjX2Fs
dHAybS5jICAgICAgICAgfCAzMyArKysrKysrKysrKysrKysrKw0KIHhlbi9hcmNoL3g4Ni9odm0v
aHZtLmMgICAgICAgICAgfCAxNSArKysrKysrKw0KIHhlbi9hcmNoL3g4Ni9tbS9wMm0uYyAgICAg
ICAgICAgfCA2NCArKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysNCiB4ZW4vaW5jbHVk
ZS9wdWJsaWMvaHZtL2h2bV9vcC5oIHwgMTMgKysrKysrKw0KIHhlbi9pbmNsdWRlL3hlbi9tZW1f
YWNjZXNzLmggICAgfCAgMyArKw0KIDYgZmlsZXMgY2hhbmdlZCwgMTMyIGluc2VydGlvbnMoKykN
Cg0KZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhjL2luY2x1ZGUveGVuY3RybC5oIGIvdG9vbHMvbGli
eGMvaW5jbHVkZS94ZW5jdHJsLmgNCmluZGV4IGY0NDMxNjg3YjMuLjJhY2U4ZWE4MGUgMTAwNjQ0
DQotLS0gYS90b29scy9saWJ4Yy9pbmNsdWRlL3hlbmN0cmwuaA0KKysrIGIvdG9vbHMvbGlieGMv
aW5jbHVkZS94ZW5jdHJsLmgNCkBAIC0xOTIzLDYgKzE5MjMsMTAgQEAgaW50IHhjX2FsdHAybV9z
d2l0Y2hfdG9fdmlldyh4Y19pbnRlcmZhY2UgKmhhbmRsZSwgdWludDMyX3QgZG9taWQsDQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50MTZfdCB2aWV3X2lkKTsNCiBpbnQgeGNfYWx0
cDJtX3NldF9zdXBwcmVzc192ZSh4Y19pbnRlcmZhY2UgKmhhbmRsZSwgdWludDMyX3QgZG9taWQs
DQogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdWludDE2X3Qgdmlld19pZCwgeGVuX3Bm
bl90IGdmbiwgYm9vbCBzdmUpOw0KK2ludCB4Y19hbHRwMm1fc2V0X3N1cHJlc3NfdmVfbXVsdGko
eGNfaW50ZXJmYWNlICpoYW5kbGUsIHVpbnQzMl90IGRvbWlkLA0KKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgdWludDE2X3Qgdmlld19pZCwgeGVuX3Bmbl90IGZpcnN0X2dmbiwN
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHhlbl9wZm5fdCBsYXN0X2dmbiwg
Ym9vbCBzdmUsDQorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB4ZW5fcGZuX3Qg
KmVycm9yX2dmbiwgdWludDMyX3QgKmVycm9yX2NvZGUpOw0KIGludCB4Y19hbHRwMm1fZ2V0X3N1
cHByZXNzX3ZlKHhjX2ludGVyZmFjZSAqaGFuZGxlLCB1aW50MzJfdCBkb21pZCwNCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICB1aW50MTZfdCB2aWV3X2lkLCB4ZW5fcGZuX3QgZ2ZuLCBi
b29sICpzdmUpOw0KIGludCB4Y19hbHRwMm1fc2V0X21lbV9hY2Nlc3MoeGNfaW50ZXJmYWNlICpo
YW5kbGUsIHVpbnQzMl90IGRvbWlkLA0KZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnhjL3hjX2FsdHAy
bS5jIGIvdG9vbHMvbGlieGMveGNfYWx0cDJtLmMNCmluZGV4IDA5ZGFkMDM1NWUuLjlmN2U4MzE1
YjMgMTAwNjQ0DQotLS0gYS90b29scy9saWJ4Yy94Y19hbHRwMm0uYw0KKysrIGIvdG9vbHMvbGli
eGMveGNfYWx0cDJtLmMNCkBAIC0yMzQsNiArMjM0LDM5IEBAIGludCB4Y19hbHRwMm1fc2V0X3N1
cHByZXNzX3ZlKHhjX2ludGVyZmFjZSAqaGFuZGxlLCB1aW50MzJfdCBkb21pZCwNCiAgICAgcmV0
dXJuIHJjOw0KIH0NCiANCitpbnQgeGNfYWx0cDJtX3NldF9zdXByZXNzX3ZlX211bHRpKHhjX2lu
dGVyZmFjZSAqaGFuZGxlLCB1aW50MzJfdCBkb21pZCwNCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHVpbnQxNl90IHZpZXdfaWQsIHhlbl9wZm5fdCBmaXJzdF9nZm4sDQorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB4ZW5fcGZuX3QgbGFzdF9nZm4sIGJvb2wg
c3ZlLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgeGVuX3Bmbl90ICplcnJv
cl9nZm4sIHVpbnQzMl90ICplcnJvcl9jb2RlKQ0KK3sNCisgICAgaW50IHJjOw0KKyAgICBERUNM
QVJFX0hZUEVSQ0FMTF9CVUZGRVIoeGVuX2h2bV9hbHRwMm1fb3BfdCwgYXJnKTsNCisNCisgICAg
YXJnID0geGNfaHlwZXJjYWxsX2J1ZmZlcl9hbGxvYyhoYW5kbGUsIGFyZywgc2l6ZW9mKCphcmcp
KTsNCisgICAgaWYgKCBhcmcgPT0gTlVMTCApDQorICAgICAgICByZXR1cm4gLTE7DQorDQorICAg
IGFyZy0+dmVyc2lvbiA9IEhWTU9QX0FMVFAyTV9JTlRFUkZBQ0VfVkVSU0lPTjsNCisgICAgYXJn
LT5jbWQgPSBIVk1PUF9hbHRwMm1fc2V0X3N1cHByZXNzX3ZlX211bHRpOw0KKyAgICBhcmctPmRv
bWFpbiA9IGRvbWlkOw0KKyAgICBhcmctPnUuc3VwcHJlc3NfdmVfbXVsdGkudmlldyA9IHZpZXdf
aWQ7DQorICAgIGFyZy0+dS5zdXBwcmVzc192ZV9tdWx0aS5maXJzdF9nZm4gPSBmaXJzdF9nZm47
DQorICAgIGFyZy0+dS5zdXBwcmVzc192ZV9tdWx0aS5sYXN0X2dmbiA9IGxhc3RfZ2ZuOw0KKyAg
ICBhcmctPnUuc3VwcHJlc3NfdmVfbXVsdGkuc3VwcHJlc3NfdmUgPSBzdmU7DQorDQorICAgIHJj
ID0geGVuY2FsbDIoaGFuZGxlLT54Y2FsbCwgX19IWVBFUlZJU09SX2h2bV9vcCwgSFZNT1BfYWx0
cDJtLA0KKyAgICAgICAgICAgICAgICAgIEhZUEVSQ0FMTF9CVUZGRVJfQVNfQVJHKGFyZykpOw0K
Kw0KKyAgICBpZiAoIGFyZy0+dS5zdXBwcmVzc192ZV9tdWx0aS5maXJzdF9lcnJvciApDQorICAg
IHsNCisgICAgICAgICplcnJvcl9nZm4gPSBhcmctPnUuc3VwcHJlc3NfdmVfbXVsdGkuZmlyc3Rf
ZXJyb3I7DQorICAgICAgICAqZXJyb3JfY29kZSA9IGFyZy0+dS5zdXBwcmVzc192ZV9tdWx0aS5m
aXJzdF9lcnJvcl9jb2RlOw0KKyAgICB9DQorDQorICAgIHhjX2h5cGVyY2FsbF9idWZmZXJfZnJl
ZShoYW5kbGUsIGFyZyk7DQorICAgIHJldHVybiByYzsNCit9DQorDQogaW50IHhjX2FsdHAybV9z
ZXRfbWVtX2FjY2Vzcyh4Y19pbnRlcmZhY2UgKmhhbmRsZSwgdWludDMyX3QgZG9taWQsDQogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50MTZfdCB2aWV3X2lkLCB4ZW5fcGZuX3QgZ2Zu
LA0KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgeGVubWVtX2FjY2Vzc190IGFjY2VzcykN
CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvaHZtL2h2bS5jIGIveGVuL2FyY2gveDg2L2h2bS9o
dm0uYw0KaW5kZXggNDc1NzNmNzFiOC4uYTEyOTA0OWQ2YiAxMDA2NDQNCi0tLSBhL3hlbi9hcmNo
L3g4Ni9odm0vaHZtLmMNCisrKyBiL3hlbi9hcmNoL3g4Ni9odm0vaHZtLmMNCkBAIC00NTUzLDYg
KzQ1NTMsNyBAQCBzdGF0aWMgaW50IGRvX2FsdHAybV9vcCgNCiAgICAgY2FzZSBIVk1PUF9hbHRw
Mm1fZGVzdHJveV9wMm06DQogICAgIGNhc2UgSFZNT1BfYWx0cDJtX3N3aXRjaF9wMm06DQogICAg
IGNhc2UgSFZNT1BfYWx0cDJtX3NldF9zdXBwcmVzc192ZToNCisgICAgY2FzZSBIVk1PUF9hbHRw
Mm1fc2V0X3N1cHByZXNzX3ZlX211bHRpOg0KICAgICBjYXNlIEhWTU9QX2FsdHAybV9nZXRfc3Vw
cHJlc3NfdmU6DQogICAgIGNhc2UgSFZNT1BfYWx0cDJtX3NldF9tZW1fYWNjZXNzOg0KICAgICBj
YXNlIEhWTU9QX2FsdHAybV9zZXRfbWVtX2FjY2Vzc19tdWx0aToNCkBAIC00NzExLDYgKzQ3MTIs
MjAgQEAgc3RhdGljIGludCBkb19hbHRwMm1fb3AoDQogICAgICAgICB9DQogICAgICAgICBicmVh
azsNCiANCisgICAgY2FzZSBIVk1PUF9hbHRwMm1fc2V0X3N1cHByZXNzX3ZlX211bHRpOg0KKyAg
ICAgICAgaWYgKCBhLnUuc3VwcHJlc3NfdmVfbXVsdGkucGFkMSB8fA0KKyAgICAgICAgICAgICBh
LnUuc3VwcHJlc3NfdmVfbXVsdGkuZmlyc3RfZXJyb3JfY29kZSB8fA0KKyAgICAgICAgICAgICBh
LnUuc3VwcHJlc3NfdmVfbXVsdGkuZmlyc3RfZXJyb3IgfHwNCisgICAgICAgICAgICAgYS51LnN1
cHByZXNzX3ZlX211bHRpLmZpcnN0X2dmbiA+IGEudS5zdXBwcmVzc192ZV9tdWx0aS5sYXN0X2dm
biApDQorICAgICAgICAgICAgcmMgPSAtRUlOVkFMOw0KKyAgICAgICAgZWxzZQ0KKyAgICAgICAg
ew0KKyAgICAgICAgICAgIHJjID0gcDJtX3NldF9zdXBwcmVzc192ZV9tdWx0aShkLCAmYS51LnN1
cHByZXNzX3ZlX211bHRpKTsNCisgICAgICAgICAgICBpZiAoICghcmMgfHwgcmMgPT0gLUVSRVNU
QVJUKSAmJiBfX2NvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgKQ0KKyAgICAgICAgICAgICAgICBy
YyA9IC1FRkFVTFQ7DQorICAgICAgICB9DQorICAgICAgICBicmVhazsNCisNCiAgICAgY2FzZSBI
Vk1PUF9hbHRwMm1fZ2V0X3N1cHByZXNzX3ZlOg0KICAgICAgICAgaWYgKCBhLnUuc3VwcHJlc3Nf
dmUucGFkMSB8fCBhLnUuc3VwcHJlc3NfdmUucGFkMiApDQogICAgICAgICAgICAgcmMgPSAtRUlO
VkFMOw0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS9wMm0uYw0KaW5kZXggN2U3ZjRmMWE3Yy4uMjUzY2FiMzQ1OCAxMDA2NDQNCi0tLSBhL3hlbi9h
cmNoL3g4Ni9tbS9wMm0uYw0KKysrIGIveGVuL2FyY2gveDg2L21tL3AybS5jDQpAQCAtMzA2NCw2
ICszMDY0LDcwIEBAIG91dDoNCiAgICAgcmV0dXJuIHJjOw0KIH0NCiANCisvKg0KKyAqIFNldC9j
bGVhciB0aGUgI1ZFIHN1cHByZXNzIGJpdCBmb3IgbXVsdGlwbGUgcGFnZXMuICBPbmx5IGF2YWls
YWJsZSBvbiBWTVguDQorICovDQoraW50IHAybV9zZXRfc3VwcHJlc3NfdmVfbXVsdGkoc3RydWN0
IGRvbWFpbiAqZCwNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgeGVuX2h2
bV9hbHRwMm1fc3VwcHJlc3NfdmVfbXVsdGkgKnN2ZSkNCit7DQorICAgIHN0cnVjdCBwMm1fZG9t
YWluICpob3N0X3AybSA9IHAybV9nZXRfaG9zdHAybShkKTsNCisgICAgc3RydWN0IHAybV9kb21h
aW4gKmFwMm0gPSBOVUxMOw0KKyAgICBzdHJ1Y3QgcDJtX2RvbWFpbiAqcDJtID0gaG9zdF9wMm07
DQorICAgIHVpbnQ2NF90IHN0YXJ0ID0gc3ZlLT5maXJzdF9nZm47DQorICAgIGludCByYyA9IDA7
DQorICAgIHVpbnQ2NF90IG1heF9waHlzX2FkZHIgPSAoMVVMIDw8IGQtPmFyY2guY3B1aWQtPmV4
dGQubWF4cGh5c2FkZHIpIC0gMTsNCisNCisgICAgaWYgKCBzdmUtPnZpZXcgPiAwICkNCisgICAg
ew0KKyAgICAgICAgaWYgKCBzdmUtPnZpZXcgPj0gTUFYX0FMVFAyTSB8fA0KKyAgICAgICAgICAg
ICBkLT5hcmNoLmFsdHAybV9lcHRwW2FycmF5X2luZGV4X25vc3BlYyhzdmUtPnZpZXcsIE1BWF9F
UFRQKV0gPT0NCisgICAgICAgICAgICAgbWZuX3goSU5WQUxJRF9NRk4pICkNCisgICAgICAgICAg
ICByZXR1cm4gLUVJTlZBTDsNCisNCisgICAgICAgIHAybSA9IGFwMm0gPSBkLT5hcmNoLmFsdHAy
bV9wMm1bYXJyYXlfaW5kZXhfbm9zcGVjKHN2ZS0+dmlldywNCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIE1BWF9BTFRQMk0pXTsNCisg
ICAgfQ0KKw0KKyAgICBwMm1fbG9jayhob3N0X3AybSk7DQorDQorICAgIGlmICggYXAybSApDQor
ICAgICAgICBwMm1fbG9jayhhcDJtKTsNCisNCisgICAgd2hpbGUgKCBzdmUtPmxhc3RfZ2ZuID49
IHN0YXJ0ICYmIHN0YXJ0IDwgbWF4X3BoeXNfYWRkciApDQorICAgIHsNCisgICAgICAgIHAybV9h
Y2Nlc3NfdCBhOw0KKyAgICAgICAgcDJtX3R5cGVfdCB0Ow0KKyAgICAgICAgbWZuX3QgbWZuOw0K
KyAgICAgICAgaW50IGVyciA9IDA7DQorDQorICAgICAgICBpZiAoIGFsdHAybV9nZXRfZWZmZWN0
aXZlX2VudHJ5KHAybSwgX2dmbihzdGFydCksICZtZm4sICZ0LCAmYSwgQVAyTUdFVF9xdWVyeSkg
KQ0KKyAgICAgICAgICAgIGEgPSBwMm0tPmRlZmF1bHRfYWNjZXNzOw0KKw0KKyAgICAgICAgaWYg
KCAoZXJyID0gcDJtLT5zZXRfZW50cnkocDJtLCBfZ2ZuKHN0YXJ0KSwgbWZuLCBQQUdFX09SREVS
XzRLLCB0LCBhLA0KKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgc3ZlLT5zdXBw
cmVzc192ZSkpICYmICFzdmUtPmZpcnN0X2Vycm9yICkNCisgICAgICAgIHsNCisgICAgICAgICAg
ICBzdmUtPmZpcnN0X2Vycm9yID0gc3RhcnQ7IC8qIFNhdmUgdGhlIGdmbiBvZiB0aGUgZmlyc3Qg
ZXJyb3IgKi8NCisgICAgICAgICAgICBzdmUtPmZpcnN0X2Vycm9yX2NvZGUgPSBlcnI7IC8qIFNh
dmUgdGhlIGZpcnN0IGVycm9yIGNvZGUgKi8NCisgICAgICAgIH0NCisNCisgICAgICAgIC8qIENo
ZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaXRlcmF0aW9uLiAqLw0K
KyAgICAgICAgaWYgKCBzdmUtPmxhc3RfZ2ZuID49ICsrc3RhcnQgJiYgaHlwZXJjYWxsX3ByZWVt
cHRfY2hlY2soKSApDQorICAgICAgICB7DQorICAgICAgICAgICAgcmMgPSAtRVJFU1RBUlQ7DQor
ICAgICAgICAgICAgYnJlYWs7DQorICAgICAgICB9DQorICAgIH0NCisNCisgICAgc3ZlLT5maXJz
dF9nZm4gPSBzdGFydDsNCisNCisgICAgaWYgKCBhcDJtICkNCisgICAgICAgIHAybV91bmxvY2so
YXAybSk7DQorDQorICAgIHAybV91bmxvY2soaG9zdF9wMm0pOw0KKw0KKyAgICByZXR1cm4gcmM7
DQorfQ0KKw0KIGludCBwMm1fZ2V0X3N1cHByZXNzX3ZlKHN0cnVjdCBkb21haW4gKmQsIGdmbl90
IGdmbiwgYm9vbCAqc3VwcHJlc3NfdmUsDQogICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWdu
ZWQgaW50IGFsdHAybV9pZHgpDQogew0KZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3B1YmxpYy9o
dm0vaHZtX29wLmggYi94ZW4vaW5jbHVkZS9wdWJsaWMvaHZtL2h2bV9vcC5oDQppbmRleCAzNTNm
ODAzNGQ5Li40OTk2NWQyNTZjIDEwMDY0NA0KLS0tIGEveGVuL2luY2x1ZGUvcHVibGljL2h2bS9o
dm1fb3AuaA0KKysrIGIveGVuL2luY2x1ZGUvcHVibGljL2h2bS9odm1fb3AuaA0KQEAgLTQ2LDYg
KzQ2LDE2IEBAIHN0cnVjdCB4ZW5faHZtX2FsdHAybV9zdXBwcmVzc192ZSB7DQogICAgIHVpbnQ2
NF90IGdmbjsNCiB9Ow0KIA0KK3N0cnVjdCB4ZW5faHZtX2FsdHAybV9zdXBwcmVzc192ZV9tdWx0
aSB7DQorICAgIHVpbnQxNl90IHZpZXc7DQorICAgIHVpbnQ4X3Qgc3VwcHJlc3NfdmU7IC8qIEJv
b2xlYW4gdHlwZS4gKi8NCisgICAgdWludDhfdCBwYWQxOw0KKyAgICB1aW50MzJfdCBmaXJzdF9l
cnJvcl9jb2RlOyAvKiBNdXN0IGJlIHNldCB0byAwIC4gKi8NCisgICAgdWludDY0X3QgZmlyc3Rf
Z2ZuOyAvKiBWYWx1ZSB3aWxsIGJlIHVwZGF0ZWQgKi8NCisgICAgdWludDY0X3QgbGFzdF9nZm47
DQorICAgIHVpbnQ2NF90IGZpcnN0X2Vycm9yOyAvKiBHZm4gb2YgdGhlIGZpcnN0IGVycm9yLiBN
dXN0IGJlIHNldCB0byAwLiAqLw0KK307DQorDQogI2lmIF9fWEVOX0lOVEVSRkFDRV9WRVJTSU9O
X18gPCAweDAwMDQwOTAwDQogDQogLyogU2V0IHRoZSBsb2dpY2FsIGxldmVsIG9mIG9uZSBvZiBh
IGRvbWFpbidzIFBDSSBJTlR4IHdpcmVzLiAqLw0KQEAgLTMzOSw2ICszNDksOCBAQCBzdHJ1Y3Qg
eGVuX2h2bV9hbHRwMm1fb3Agew0KICNkZWZpbmUgSFZNT1BfYWx0cDJtX3ZjcHVfZGlzYWJsZV9u
b3RpZnkgIDEzDQogLyogR2V0IHRoZSBhY3RpdmUgdmNwdSBwMm0gaW5kZXggKi8NCiAjZGVmaW5l
IEhWTU9QX2FsdHAybV9nZXRfcDJtX2lkeCAgICAgICAgICAxNA0KKy8qIFNldCB0aGUgIlN1cHJl
c3MgI1ZFIiBiaXQgZm9yIGEgcmFuZ2Ugb2YgcGFnZXMgKi8NCisjZGVmaW5lIEhWTU9QX2FsdHAy
bV9zZXRfc3VwcHJlc3NfdmVfbXVsdGkgMTUNCiAgICAgZG9taWRfdCBkb21haW47DQogICAgIHVp
bnQxNl90IHBhZDE7DQogICAgIHVpbnQzMl90IHBhZDI7DQpAQCAtMzUzLDYgKzM2NSw3IEBAIHN0
cnVjdCB4ZW5faHZtX2FsdHAybV9vcCB7DQogICAgICAgICBzdHJ1Y3QgeGVuX2h2bV9hbHRwMm1f
Y2hhbmdlX2dmbiAgICAgICAgICAgY2hhbmdlX2dmbjsNCiAgICAgICAgIHN0cnVjdCB4ZW5faHZt
X2FsdHAybV9zZXRfbWVtX2FjY2Vzc19tdWx0aSBzZXRfbWVtX2FjY2Vzc19tdWx0aTsNCiAgICAg
ICAgIHN0cnVjdCB4ZW5faHZtX2FsdHAybV9zdXBwcmVzc192ZSAgICAgICAgICBzdXBwcmVzc192
ZTsNCisgICAgICAgIHN0cnVjdCB4ZW5faHZtX2FsdHAybV9zdXBwcmVzc192ZV9tdWx0aSAgICBz
dXBwcmVzc192ZV9tdWx0aTsNCiAgICAgICAgIHN0cnVjdCB4ZW5faHZtX2FsdHAybV92Y3B1X2Rp
c2FibGVfbm90aWZ5ICBkaXNhYmxlX25vdGlmeTsNCiAgICAgICAgIHN0cnVjdCB4ZW5faHZtX2Fs
dHAybV9nZXRfdmNwdV9wMm1faWR4ICAgICBnZXRfdmNwdV9wMm1faWR4Ow0KICAgICAgICAgdWlu
dDhfdCBwYWRbNjRdOw0KZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3hlbi9tZW1fYWNjZXNzLmgg
Yi94ZW4vaW5jbHVkZS94ZW4vbWVtX2FjY2Vzcy5oDQppbmRleCBlNGQyNDUwMmUwLi4wMGU1OTRh
MGFkIDEwMDY0NA0KLS0tIGEveGVuL2luY2x1ZGUveGVuL21lbV9hY2Nlc3MuaA0KKysrIGIveGVu
L2luY2x1ZGUveGVuL21lbV9hY2Nlc3MuaA0KQEAgLTc1LDYgKzc1LDkgQEAgbG9uZyBwMm1fc2V0
X21lbV9hY2Nlc3NfbXVsdGkoc3RydWN0IGRvbWFpbiAqZCwNCiBpbnQgcDJtX3NldF9zdXBwcmVz
c192ZShzdHJ1Y3QgZG9tYWluICpkLCBnZm5fdCBnZm4sIGJvb2wgc3VwcHJlc3NfdmUsDQogICAg
ICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IGFsdHAybV9pZHgpOw0KIA0KK2ludCBw
Mm1fc2V0X3N1cHByZXNzX3ZlX211bHRpKHN0cnVjdCBkb21haW4gKmQsDQorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgc3RydWN0IHhlbl9odm1fYWx0cDJtX3N1cHByZXNzX3ZlX211bHRp
ICpzdXBwcmVzc192ZSk7DQorDQogaW50IHAybV9nZXRfc3VwcHJlc3NfdmUoc3RydWN0IGRvbWFp
biAqZCwgZ2ZuX3QgZ2ZuLCBib29sICpzdXBwcmVzc192ZSwNCiAgICAgICAgICAgICAgICAgICAg
ICAgICB1bnNpZ25lZCBpbnQgYWx0cDJtX2lkeCk7DQogDQotLSANCjIuMTcuMQ0KDQo=

From - Wed Dec 18 10:46:18 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Tue, 17 Dec 2019 16:12:41 +0100
Received: from LASPEX02MSOL01.citrite.net (10.160.21.45) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Tue, 17 Dec 2019 10:12:38 -0500
Received: from esa2.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL01.citrite.net (10.160.21.45) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 07:12:38 -0800
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  aisaila@bitdefender.com) identity=pra;
  client-ip=40.107.2.105; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  aisaila@bitdefender.com designates 40.107.2.105 as permitted
  sender) identity=mailfrom; client-ip=40.107.2.105;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  postmaster@EUR02-VE1-obe.outbound.protection.outlook.com
  designates 40.107.2.105 as permitted sender) identity=helo;
  client-ip=40.107.2.105; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="postmaster@EUR02-VE1-obe.outbound.protection.outlook.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Authentication-Results: esa2.hc3370-68.iphmx.com; spf=None smtp.pra=aisaila@bitdefender.com; spf=Pass smtp.mailfrom=aisaila@bitdefender.com; spf=Pass smtp.helo=postmaster@EUR02-VE1-obe.outbound.protection.outlook.com; dkim=pass (signature verified) header.i=@bitdefender.onmicrosoft.com
IronPort-SDR: PnqSc0B77LeANL3Hw+DGtt/O3/v0Y11DLjGxjQ++t1bxNGDqROWo17u2nW7CEhQ00f0s/7Ee84
 fYkLxjf0zQffOURGGvHJledwhb3/WYADU2X19VB+kRFiyF3CufCwYYcAqx8WJY+Nl0Ovk0Zc37
 RzQEo8EgXWDdaTycwKsAImCvl6gXrVq35lVzCNZzjK/TDBDpyAOEQqzIBA3222plXJv6sw0ojA
 Ptq6cu2Q/LimYZeaXhKpoIuv/MIeIC8nbCVilGkzdRyx+w8ZXhimhOahsfeTXGDZTtZU6HLFmJ
 vtTtF1H/jCRzDA79hWT0X6Jy
X-IronPort-RemoteIP: 40.107.2.105
X-IronPort-MID: 9822605
X-IronPort-Reputation: 3.5
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: SBRS_Whitelist
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.5
X-MesageID: 9822605
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 40.107.2.105
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3AgTaAPx3YxqAMNMPEsmDT+DRfVm0co7zxezQtwd?=
 =?us-ascii?q?8ZseMRLvad9pjvdHbS+e9qxAeQG9mCsLQe07Wd7PmocFdDyK7JiGoFfp1IWk?=
 =?us-ascii?q?1NouQttCtkPvS4D1bmJuXhdS0wEZcKflZk+3amLRodQ56mNBXdrXKo8DEdBA?=
 =?us-ascii?q?j0OxZrKeTpAI7SiNm82/yv95HJbAhEmTSwbalsIBmqrwjducsbjZZ/Iast1x?=
 =?us-ascii?q?XFpWdFdf5Lzm1yP1KTmBj85sa0/JF99ilbpuws+c1dX6jkZqo0VbNXAigoPG?=
 =?us-ascii?q?Az/83rqALMTRCT6XsGU2UZiQRHDg7Y5xznRJjxsy/6tu1g2CmGOMD9UL45VS?=
 =?us-ascii?q?i+46ptVRTlkzkMOSIn/27Li8xwlKNbrwynpxxj2I7ffYWZOONjcq/BYd8WQG?=
 =?us-ascii?q?xMVdtTWSNcGIOxd4sBAfQcM+ZEoYfzpFUOohm5CwmtGOzhxTBHi2Tq3aIhye?=
 =?us-ascii?q?ktDRvL0BAiEt8IrX/arM/1NKAXUe2t0KTIySvMbvxM1jf79YfIaA0hquyRU7?=
 =?us-ascii?q?Jsb8XRx1MgFwXYhVuTtILoIzCb2OIRvmOG6OdvT+Ovi3U9pAFwpjig3NwhhZ?=
 =?us-ascii?q?LIhoIJ11/L6yt5z5gzJdKlVkF7e8SoH4dXtyGfLoZ7RN4pTWJwuCsixbAKpY?=
 =?us-ascii?q?S3cDUWxJkl3RLTdeaLf5WU7h/jTOqcJSp0iGh4dL+7gxu+61WsxvHzW8Wu0V?=
 =?us-ascii?q?tGtiRFncfPu3wR0hHe78uKRedj8Ui93DuC1QHe5+9HLE0xl6fWJZEszaI1m5?=
 =?us-ascii?q?cQtUnMAyn7k1jsgqCMbEUr4O2o5vznYrr4op+cMJd5hA7wPaoyhsCzH/00PQ?=
 =?us-ascii?q?cBUWSG4Oi806bs8lPjTLVNk/02jrLWsJfHJcQdu6G1GRdV0pwk6xajETipzM?=
 =?us-ascii?q?gYnXgALFJDYh6HiJXpO03KIPD/Cve/gE6gnytsx/DDJrHhA5PNIWbfkLr5cr?=
 =?us-ascii?q?tx91RQxBcvwd1d/Z5YFKsNLO/vVkLxrNDYCwU2Mw2ww+bpEtV90YYeVHqVDa?=
 =?us-ascii?q?+YK6Pdq16I5uY1LOSXf4AVuSr9JOU76P7gk3A5n0IScrez3ZcNdH+4GfFmL1?=
 =?us-ascii?q?2DYXXwmtcBDXsKvg0mQezlllKCViRTZ3msUK4m+z47FYSmDYPZSYC1m7GOwj?=
 =?us-ascii?q?q7EodKaWBHFlCMC3bodoqeV/gQbyKSJ5wprjtRH6isTcot2A+jsCf+yqF7Na?=
 =?us-ascii?q?zE9ysAr5Xh2dNpoerJmlt6oSx5CYGR3n+AS0lwn3gUXHkm0aZnu0t/x1yfl6?=
 =?us-ascii?q?9ijKoLO8ZU4qZgVgoqPJjQ1fEyMMz/VAvHNvayYXeCa53yCDAqR9M1yvcEYl?=
 =?us-ascii?q?pxFtuviBzOxWyhBLpDxO/DP4A97q+Jhyu5HM160XuTkfB51wB7EOdsFEjjq7?=
 =?us-ascii?q?B28xXaG4/OlRnEzfS0IP1PlCeY7nyKiHGOtRsFDl04WvDfUHQWdkba6s704k?=
 =?us-ascii?q?rSQrP9bNZvPl5LyNKOJ6VHbNHklxNBQvLiM87ZeGW/hyG7AhPbjqjZd4fudm?=
 =?us-ascii?q?4B2z+HTUIFjwwe83uAOQUkQyCnpmPVFjt1ElzzJkjr9Lobyju7G2EuyATYQ0?=
 =?us-ascii?q?R92uiO5xQXiPeAGdYexakNtypkihkmN1G7w9/QF5+8thJsLoB9RP543lpdzm?=
 =?us-ascii?q?PesV5Ne7mbao1yjV4XdQt6+njj0Rl6EKxsusgnp3B5qWg6I6LN12p4fDHf3r?=
 =?us-ascii?q?vUJ6TXB07wxAKMO/LR+XOE3cy3y7gvzewJl3zzgyeNPGUd8CA0trsd2S6tuZ?=
 =?us-ascii?q?ntLwgjc7DfD2gozD17gOiBRAsl/I+E5GdtbaaI6BSY2IwrNPdi8A6OYdgBOr?=
 =?us-ascii?q?KPPh3AVOEhG+iDKP0KqnauP0MDZfID5oo1NeOvTKTWwoi0GN5/ljW4onhuzI?=
 =?us-ascii?q?9e/3ylqQ9fVtflxpEGmNrH3lu4aSnWjV76vNmpp6JubjMeJkOOlRT4PNF9QI?=
 =?us-ascii?q?hUJ6MnWEioL+6rmcdXvMLSRCJ57FiGPnJewMqAOhSyUHGojkVAkEUNpnq/nj?=
 =?us-ascii?q?GkiiZ5iCwtsv+H1TfVkL64JjMaJm5GQnVjhl7wII+yyuoXR1WscxNzykH36F?=
 =?us-ascii?q?3zne5br/8kcDGWHxcOfjD2KnEkWay15fKZe8AazpQuvG1MVfikJ0iAQ+v0og?=
 =?us-ascii?q?AT0ifqN2Ff2Dw2eTystpjj2Rd9jTHVN259+ULQYto43hLD/JrZTP9V0CABQX?=
 =?us-ascii?q?xxiCLeB1y1F9Oo4dmZmZrFvu2kEWmmU84bajHlmLuJrzDz/mh2GVu/kvS0z8?=
 =?us-ascii?q?XgChQ/2DTn2sNCeBjy9Ey5XKSykqOwPKRgY1ViA0L654xiAIZin4AshZYWn3?=
 =?us-ascii?q?8HmpGS+nlBmmD2Yp1X2qP7OWIEXiVDg8XU7w7sxFB5IzqXypj4WHSQzoopZ9?=
 =?us-ascii?q?Szbm4MnCNox8pLFKuZ4rFCkSZv5F2+qAPae/9mmTkBj/Ap7SdA0dsEswcs0C?=
 =?us-ascii?q?iRR4sqMxIBYX7KkBKFp5Czt6wNImakKuPvjA8gzZagFLGHskdXX3OrMpElVT?=
 =?us-ascii?q?R96MlyKjeumDX6953kdd/MbNkSqgzckhHOiPJQIY4wkfxCjDRuOGb0t3kog+?=
 =?us-ascii?q?Ahihkm0Za/tYmBY2JjmcDxShdZLTT0Y8o78DDxgatQk8Ca0prpFZJkW30KUJ?=
 =?us-ascii?q?buUfO0AWcKr/20UmTGWDY4q3qdBf/eBVrDsAE/9y2JSdbybyzEbGMUxthjWh?=
 =?us-ascii?q?SHcUFEiVpSXD5hxcFhU176gs35cEJpoDsW4w2dyFMEx+R2Oh34Smqaqh2vb2?=
 =?us-ascii?q?J+QZmEJRtS5ylI5lvZPMKT6O5+BWdT+Zjr/2nvYiSLIh9FC20EQBnODlH4N7?=
 =?us-ascii?q?iq7PHK8vSUC+SzKffDe/OFrukUBJLqjdq/l4Bh+TiLLMCGOHJvWuY61kR0Vn?=
 =?us-ascii?q?d8A83Fmj8LRnVNxRjAZMOauhqwvxZPgJzuqqbTURn0rcuCELIId9VkoEvp2e?=
 =?us-ascii?q?LdZ6iRnCZ8OXBT0ZZejXPPzbEe2hYVhUQMP3GkEK8JtCrEZKjRhqNaARMdZy?=
 =?us-ascii?q?5pcsBP6uow0xJMNsjSlt7unuIgyKdtVBEfEwW5x4mgfoQSLnu4NU/bCUruVv?=
 =?us-ascii?q?zOPjDNz8ztIOu9RbBWkORIpki1sDefHVXkO2fLnD3oWhazdOBU2X3DekUG58?=
 =?us-ascii?q?fnNE81VDuGLpqucBCwPd5pgCdjzKY93DXKPjVHbmA5LRMLr6WQ6DMeifJ6SA?=
 =?us-ascii?q?kjpjJoK/eJnyGB4qzWMJET5LFiAz99luZTyH4717dY4ixCSPFv3iDVq5Q9xj?=
 =?us-ascii?q?Pu2vnK0TdhXBdU/3xChZmCvEFrEa/Y6pVNV3vC8B8Xq26XDl5ZwrktQs2qsK?=
 =?us-ascii?q?dWxN/Vkav1IzoX6NPY8/wXAM3MId6GOn4sYlL5XSTZBwwfQXu3JHnS0gZDxe?=
 =?us-ascii?q?qK+CTf/f1Y4tD83YADQbhBWBkpG+MGXw57SccaLs4/Xyt4w+LDyp9SoyL49F?=
 =?us-ascii?q?6IGY1bpsyVCqrUWK2wbm7f1f4dOX5qifv5NdhBa9e9ghQ6LAE8xMOTRwLRRY?=
 =?us-ascii?q?wf+3U9KFNr5h0LqD8nESUywxy3MAr1uS1KTKfmkEJu0lksJrh9kVWkq1YveA?=
 =?us-ascii?q?iQrXNpwhBowIfr3WjKImy2cPf4GIhSD2Ct7RoLP5j2Qhh4YUiJpWI5aWuWf7?=
 =?us-ascii?q?tKlPMgeHti0kncssAURqYZE/0CYQcQwOHRbPItgxxQrSCuxEkP4uWga9MqjA?=
 =?us-ascii?q?wxbZultG5NwSpOUeRtf+nuCfMMyVJdwKWToiWvy+Y9hhcEIFoA+3+TfyhOv1?=
 =?us-ascii?q?EUMr4hJGyj+ekJi0TKlzZYeWcKXuYnubo2rgVkY7vGlWS5i+MLI1v5L+GFKq?=
 =?us-ascii?q?KFp2XM3dWFRF89zAJAlkVI+6R3zdZ2c0eQUBNKrvPZHBAIOMzebABNOpYKsi?=
 =?us-ascii?q?GLImDU97yVnME9JYi2G+H2QPXbub0d2ASkF11yQNxJsJRHH4Gs1VGeJsDif9?=
 =?us-ascii?q?tngV0g4hrmIFKdAbFHYhWOxX0Ootq2zZt+9YNcOj0QD2h7PSitoL3Qo0V55Z?=
 =?us-ascii?q?jLFMdzeXocUoYeYzguX9amnidCo3laJBSK6LpDjTavtnr7rCmWCyTgZd1+Yv?=
 =?us-ascii?q?vSfQlrFNy95TQ49e6xlELT9ZLdYWr9MJ4x372HofNfrJGBBfROSLB7uEqJgI?=
 =?us-ascii?q?hUSUuhVGvXGMK0LZz9OME8KMb5AXGgXhmjmio4Go3vac21IPHC0mSKDc5E9Z?=
 =?us-ascii?q?OW1zc5OYqhGyECTl1u8vob6vs0ZBVfMcZjJ0+y8Vx4b+vmfU+Zyon8Hz7rcG?=
 =?us-ascii?q?MJCaEZlaLjOdk1h2ItdrPolSFmF8liibHxqQlUGNkLlk2MnK7/IdUBF3C1Qj?=
 =?us-ascii?q?sEJE3OvXRrzWE5b7Rrm75tzk+Q6QtOdGzbEY4hIG1c4YNmDAvLcywvUzg2Gw?=
 =?us-ascii?q?fH39iEvl7kmrkW+2EEw4RklNZduX27hafxJSq2UfX0+5/UryYtY9Ugr6Brd4?=
 =?us-ascii?q?fkJ5ne7c6MrnnkVJDV9za9fmu6Gv5dxocCDR9iGKMNp0x8fMsMtMxG9FY7Ud?=
 =?us-ascii?q?o4K/pXEq4wq7u2aD1iSykP0SseUIDG1zsH0L7liunq0yyIeZFnCyQq9ZBLg9?=
 =?us-ascii?q?8TSSlzO3xMrq6/W4jYmmmIR3JNKwAWv11B?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0H5AACb7/hdh2kCayhlHAEBAQEBBwE?=
 =?us-ascii?q?BEQEEBAEBgX6BS1BxdQMECyoKg3qDRgOFOoVUlmuGTAMYPAkEAQEKARsSAgE?=
 =?us-ascii?q?BAQGBAoM8AheCHwYBBDQTAgECAQwBAQEDAQEBAgECAwICAQECEAEBAQgNCQg?=
 =?us-ascii?q?phT4BC4I7KQFpOTcBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQE?=
 =?us-ascii?q?BAQEBBQKBBwU+AgEDEhEEDQwBATcBDwIBIgImAgICMBUQAgQOBSKDAAGCRgM?=
 =?us-ascii?q?vAgGiAz0CIwE/AguBBCmIYAEBdH8zgn4BAQV3hCIYWIE/CYEOKIc/hG0GgUE?=
 =?us-ascii?q?/g3ZshEgYgnmCXo98OZ5UKIIWhy+OVSeaSS2OIJZBhAwCBAIEBQIOAQEFgWm?=
 =?us-ascii?q?BezMiGxVIDYJSCUcRFI0SDA4JgUmCB4pTQgExAYEnj0gBgQ8BAQ?=
X-IPAS-Result: =?us-ascii?q?A0H5AACb7/hdh2kCayhlHAEBAQEBBwEBEQEEBAEBgX6BS?=
 =?us-ascii?q?1BxdQMECyoKg3qDRgOFOoVUlmuGTAMYPAkEAQEKARsSAgEBAQGBAoM8AheCH?=
 =?us-ascii?q?wYBBDQTAgECAQwBAQEDAQEBAgECAwICAQECEAEBAQgNCQgphT4BC4I7KQFpO?=
 =?us-ascii?q?TcBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBBQKBBwU+A?=
 =?us-ascii?q?gEDEhEEDQwBATcBDwIBIgImAgICMBUQAgQOBSKDAAGCRgMvAgGiAz0CIwE/A?=
 =?us-ascii?q?guBBCmIYAEBdH8zgn4BAQV3hCIYWIE/CYEOKIc/hG0GgUE/g3ZshEgYgnmCX?=
 =?us-ascii?q?o98OZ5UKIIWhy+OVSeaSS2OIJZBhAwCBAIEBQIOAQEFgWmBezMiGxVIDYJSC?=
 =?us-ascii?q?UcRFI0SDA4JgUmCB4pTQgExAYEnj0gBgQ8BAQ?=
X-IronPort-AV: E=Sophos;i="5.69,325,1571716800"; 
   d="scan'208";a="9822605"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDHz5KiAI5OXPM3SvCR8Czkwc138DAbZFpqQ1a?=
 =?us-ascii?q?fGcOnegWlLifAOLt1ulQMrlDPZjoqFz06TbWsfFAQ+hE2AN8qEM0/358?=
 =?us-ascii?q?J2Hh4mq+8/smnycxnAQYnBeWdZ75WmZHQzIoJgWLuHgcuqMvBIEJozez?=
 =?us-ascii?q?fUXdF7YAi3hH9Sgw+uv+6eVg=3D=3D?=
Received: from mail-eopbgr20105.outbound.protection.outlook.com (HELO EUR02-VE1-obe.outbound.protection.outlook.com) ([40.107.2.105])
  by esa2.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Dec 2019 10:12:37 -0500
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UsqQ+JVzdelMk0G3Moh1xKAmfnKsezG16wWxRv/mqjT3V6s2GNwUqgNT+lpp4SZpq2doC/OIMysd4ppbyqYrx/VfZhprsIiHvlJGlOaRrL4lBtr6j4KY9+IosqGWAV1EuDzu70MfKSQ48uqiMHy41joCZpN7LjbP9JDokV1bQdeMKVZ8m+Y2M2KSKnc8fLYohKy9CD8n+/Y12lpTK1GC8jsV2WCVD6BJSCZ54lVS7H5gO4IsVWB3BtKDpBQX9I6i44cBm9lsYNgyy4d4dgRzz3ynS2JdAtjOqaP3w1sLmklcYS6yB5wPU9pCVl6Ne5hcEC1fYVxDLDbHAmjACglAJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oQQcj/kCLuqhEOjPzK84oKJ2erIYwHMpYTAxWNLBGWk=;
 b=E6pimmRW2Xk6NXhv8fQkEsbdZfI93flSsEdqhhdYiXQY/6ZMPzK/sr6f9ITblpDFrhlmP4BtZkAYFo1xzzSd2s6ziav2lgYS75uEl/iiUJkFtOT5ORsA5JiwaSsp90ftLrchYoJk3idKNdGJaWNlXbPv9AGh5vwxLWfVmByCV5a0DN77OffXEedL/Y87K+kKtd2nj53EoxCIZqEK9xqRHDtDOLM5vHFKlTMuPyoe8FNdAVlNRovc6UwQu4mOkEnCy6v5Cw1oSE8LagCEQ/UZ9rE0aeAQf1HQmwveWAMA8GCkRSufPY38641qL0AUr2kA8br2R3Tvctz0uBZrGZ6z4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=bitdefender.com; dmarc=pass action=none
 header.from=bitdefender.com; dkim=pass header.d=bitdefender.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=bitdefender.onmicrosoft.com; s=selector2-bitdefender-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oQQcj/kCLuqhEOjPzK84oKJ2erIYwHMpYTAxWNLBGWk=;
 b=Y7dOKjYJJF5FwSMiO13NJpsd8Yri4gT2C2k6jXt/RpFvg6bycBzZ/DStHEhyHYFo99o1z9zLibHaS78rNwLPwbovGtADgKEVA6MKtQEqok3C7h8WVvBia3fq6VJBcjdPAY4TtP7DNMN2MD9GndKt7zAS4Ww4IpnMQI0pmA+fHWE=
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com (10.255.30.78) by
 AM0PR02MB4386.eurprd02.prod.outlook.com (20.178.17.212) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2538.20; Tue, 17 Dec 2019 15:12:35 +0000
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d]) by AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d%4]) with mapi id 15.20.2538.019; Tue, 17 Dec 2019
 15:12:35 +0000
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Alexandru Stefan ISAILA <aisaila@bitdefender.com>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: [PATCH V4 3/4] x86/mm: Pull out the p2m specifics from
 p2m_init_altp2m_ept
Thread-Topic: [PATCH V4 3/4] x86/mm: Pull out the p2m specifics from
 p2m_init_altp2m_ept
Thread-Index: AQHVtOxpSoGNE4L+skG6HNN4NefFyg==
Date: Tue, 17 Dec 2019 15:12:34 +0000
Message-ID: <20191217151144.9781-3-aisaila@bitdefender.com>
References: <20191217151144.9781-1-aisaila@bitdefender.com>
In-Reply-To: <20191217151144.9781-1-aisaila@bitdefender.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-clientproxiedby: AM0PR01CA0067.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:e6::44) To AM0PR02MB5553.eurprd02.prod.outlook.com
 (2603:10a6:208:160::14)
x-ms-exchange-messagesentrepresentingtype: 1
x-mailer: git-send-email 2.17.1
x-originating-ip: [91.199.104.6]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: c1a01afe-51c6-4cc6-89c2-08d783038bb2
x-ms-traffictypediagnostic: AM0PR02MB4386:|AM0PR02MB4386:|AM0PR02MB4386:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR02MB4386E6445AB0E784035C3BB8AB500@AM0PR02MB4386.eurprd02.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:243;
x-forefront-prvs: 02543CD7CD
x-forefront-antispam-report: SFV:NSPM;SFS:(10019020)(366004)(136003)(396003)(39860400002)(346002)(376002)(189003)(199004)(6512007)(52116002)(64756008)(86362001)(186003)(8936002)(6486002)(4326008)(2906002)(54906003)(66476007)(71200400001)(66556008)(66946007)(36756003)(26005)(81156014)(81166006)(5660300002)(8676002)(6506007)(478600001)(66446008)(1076003)(2616005)(316002)(6916009);DIR:OUT;SFP:1102;SCL:1;SRVR:AM0PR02MB4386;H:AM0PR02MB5553.eurprd02.prod.outlook.com;FPR:;SPF:None;LANG:en;PTR:InfoNoRecords;MX:1;A:1;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: e4pRuDFPAcL6s+eDgRdWK3fVdAhp9PxWUVaUrBJ8R5xNELg4IptDimz7SKD228T4Mpu6Womhg+nTd26dw3AwKmKNSXgG73rYgL4fZQXMe3Ncf7gxEARtqag0B9o6wgkEtiQI0iEKr6169waPvlC/h48IFkK/rSvQq99igiuHkuqsAaoTPgkcNt0hV4ufdfedgvdTCut3dH8xZEiKaNyYAVcHuFIgDZHatGAcTxt1gxbD6/BMAPepgbakhrLvuuZQNawEBVh/CAKMJmTbvyHiDYPtvsi41b2HIFmIZCZ5TNafvWgOMb0HEk8psx2STVw6EmY2cxld+RSTsx+Ywh9OIC4vTHpjh1pgoBVbmopuZh+KRepkpyfc5Fyw4bN51UGiji+AnyNUCfrunjJ07uwzGbd97E1nDKfjVP8HKcPFwe7rEI/wPtjMN184qVNLhgbB
Content-Type: text/plain; charset="utf-8"
Content-ID: <6723024B440EC240AB191EC15FBD9BD3@eurprd02.prod.outlook.com>
Content-Transfer-Encoding: base64
X-MS-Exchange-CrossTenant-Network-Message-Id: c1a01afe-51c6-4cc6-89c2-08d783038bb2
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Dec 2019 15:12:34.9937
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 487baf29-f1da-469a-9221-243f830c36f3
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: AIk4KbwPHZV1QNglVJtBdgpq7WvRljvY6i1yFB0WJcRsn/GofL6SYt8pNdcGp0kqrHAE8UHto4fQoDcD/9yAsno8UxJeFIXN5kZFObrfYm0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR02MB4386
Return-Path: aisaila@bitdefender.com
X-MS-Exchange-Organization-Network-Message-Id: 3e5a23ab-6ab2-4622-71d9-08d783038e01
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL01.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

U2lnbmVkLW9mZi1ieTogQWxleGFuZHJ1IElzYWlsYSA8YWlzYWlsYUBiaXRkZWZlbmRlci5jb20+
DQotLS0NCkNDOiBKdW4gTmFrYWppbWEgPGp1bi5uYWthamltYUBpbnRlbC5jb20+DQpDQzogS2V2
aW4gVGlhbiA8a2V2aW4udGlhbkBpbnRlbC5jb20+DQpDQzogR2VvcmdlIER1bmxhcCA8Z2Vvcmdl
LmR1bmxhcEBldS5jaXRyaXguY29tPg0KQ0M6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4NCkNDOiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPg0KQ0M6IFdl
aSBMaXUgPHdsQHhlbi5vcmc+DQpDQzogIlJvZ2VyIFBhdSBNb25uw6kiIDxyb2dlci5wYXVAY2l0
cml4LmNvbT4NCi0tLQ0KIHhlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgfCA2IC0tLS0tLQ0KIHhl
bi9hcmNoL3g4Ni9tbS9wMm0uYyAgICAgfCA2ICsrKysrKw0KIDIgZmlsZXMgY2hhbmdlZCwgNiBp
bnNlcnRpb25zKCspLCA2IGRlbGV0aW9ucygtKQ0KDQpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMNCmluZGV4IGUwODhhNjNm
NTYuLjM2MmY3MDc5YWIgMTAwNjQ0DQotLS0gYS94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jDQor
KysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jDQpAQCAtMTM1OCwxMyArMTM1OCw3IEBAIHZv
aWQgcDJtX2luaXRfYWx0cDJtX2VwdChzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgaSkN
CiAgICAgc3RydWN0IHAybV9kb21haW4gKmhvc3RwMm0gPSBwMm1fZ2V0X2hvc3RwMm0oZCk7DQog
ICAgIHN0cnVjdCBlcHRfZGF0YSAqZXB0Ow0KIA0KLSAgICBwMm0tPmRlZmF1bHRfYWNjZXNzID0g
aG9zdHAybS0+ZGVmYXVsdF9hY2Nlc3M7DQotICAgIHAybS0+ZG9tYWluID0gaG9zdHAybS0+ZG9t
YWluOw0KLQ0KLSAgICBwMm0tPmdsb2JhbF9sb2dkaXJ0eSA9IGhvc3RwMm0tPmdsb2JhbF9sb2dk
aXJ0eTsNCiAgICAgcDJtLT5lcHQuYWQgPSBob3N0cDJtLT5lcHQuYWQ7DQotICAgIHAybS0+bWlu
X3JlbWFwcGVkX2dmbiA9IGdmbl94KElOVkFMSURfR0ZOKTsNCi0gICAgcDJtLT5tYXhfbWFwcGVk
X3BmbiA9IHAybS0+bWF4X3JlbWFwcGVkX2dmbiA9IDA7DQogICAgIGVwdCA9ICZwMm0tPmVwdDsN
CiAgICAgZXB0LT5tZm4gPSBwYWdldGFibGVfZ2V0X3BmbihwMm1fZ2V0X3BhZ2V0YWJsZShwMm0p
KTsNCiAgICAgZC0+YXJjaC5hbHRwMm1fZXB0cFthcnJheV9pbmRleF9ub3NwZWMoaSwgTUFYX0VQ
VFApXSA9IGVwdC0+ZXB0cDsNCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0vcDJtLmMgYi94
ZW4vYXJjaC94ODYvbW0vcDJtLmMNCmluZGV4IDI1M2NhYjM0NTguLmQzODFmNjg3N2YgMTAwNjQ0
DQotLS0gYS94ZW4vYXJjaC94ODYvbW0vcDJtLmMNCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0u
Yw0KQEAgLTI1NTksNiArMjU1OSwxMiBAQCBzdGF0aWMgaW50IHAybV9hY3RpdmF0ZV9hbHRwMm0o
c3RydWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IGlkeCkNCiAgICAgICAgIGdvdG8gb3V0Ow0K
ICAgICB9DQogDQorICAgIHAybS0+ZGVmYXVsdF9hY2Nlc3MgPSBob3N0cDJtLT5kZWZhdWx0X2Fj
Y2VzczsNCisgICAgcDJtLT5kb21haW4gPSBob3N0cDJtLT5kb21haW47DQorICAgIHAybS0+Z2xv
YmFsX2xvZ2RpcnR5ID0gaG9zdHAybS0+Z2xvYmFsX2xvZ2RpcnR5Ow0KKyAgICBwMm0tPm1pbl9y
ZW1hcHBlZF9nZm4gPSBnZm5feChJTlZBTElEX0dGTik7DQorICAgIHAybS0+bWF4X21hcHBlZF9w
Zm4gPSBwMm0tPm1heF9yZW1hcHBlZF9nZm4gPSAwOw0KKw0KICAgICBwMm1faW5pdF9hbHRwMm1f
ZXB0KGQsIGlkeCk7DQogDQogIG91dDoNCi0tIA0KMi4xNy4xDQoNCg==

From - Wed Dec 18 10:46:18 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Tue, 17 Dec 2019 16:12:46 +0100
Received: from LASPEX02MSOL02.citrite.net (10.160.21.46) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Tue, 17 Dec 2019 10:12:45 -0500
Received: from esa6.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL02.citrite.net (10.160.21.46) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 07:12:45 -0800
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  aisaila@bitdefender.com) identity=pra;
  client-ip=40.107.2.123; receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
  aisaila@bitdefender.com designates 40.107.2.123 as permitted
  sender) identity=mailfrom; client-ip=40.107.2.123;
  receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="aisaila@bitdefender.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
  postmaster@EUR02-VE1-obe.outbound.protection.outlook.com
  designates 40.107.2.123 as permitted sender) identity=helo;
  client-ip=40.107.2.123; receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="aisaila@bitdefender.com";
  x-sender="postmaster@EUR02-VE1-obe.outbound.protection.outlook.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:40.92.0.0/15 ip4:40.107.0.0/16
  ip4:52.100.0.0/14 ip4:104.47.0.0/17 ip6:2a01:111:f400::/48
  ip6:2a01:111:f403::/48 -all"
Authentication-Results: esa6.hc3370-68.iphmx.com; spf=None smtp.pra=aisaila@bitdefender.com; spf=Pass smtp.mailfrom=aisaila@bitdefender.com; spf=Pass smtp.helo=postmaster@EUR02-VE1-obe.outbound.protection.outlook.com; dkim=pass (signature verified) header.i=@bitdefender.onmicrosoft.com
IronPort-SDR: nGAb3eA9Sbc9iZuJr4Fergwi3EVWhqvxtnt563Axx6RJ2qnf8udqFPUPtyGwDiyoEg9ykwyv8A
 hXYlT4o/ydC7i4iJCUJ0gOXlsfCjrST7G8D1N2/Qx3gsHKPyE4EsXkQK6Hgx2XE63w9LOiKoh6
 BGPooJrXe0gsUwfr7H75bVFZ9bOvikJU10rEIfyl/7Q3rJeOUjbF7hA3XGOHn6i6NgYE/T41zC
 uENkisdgTWHeujm0bP8mGeCRpwmVEDeufBoLtW8mD/JflFBF9ipZUdqHPFNFblpdwuDHQfzxd0
 oTArjDzMZAJP1VMeceoDzG5l
X-IronPort-RemoteIP: 40.107.2.123
X-IronPort-MID: 10228222
X-IronPort-Reputation: 3.5
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: SBRS_Whitelist
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.5
X-MesageID: 10228222
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 40.107.2.123
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3AmtYeIxFdULJbAtTngD2zpp1GYnF86YWxBRYc79?=
 =?us-ascii?q?8ds5kLTJ76p8i6bnLW6fgltlLVR4KTs6sC17ON9fq4BidfuN6oizMrSNR0TR?=
 =?us-ascii?q?gLiMEbzUQLIfWuLgnFFsPsdDEwB89YVVVorDmROElRH9viNRWJ+iXhpTEdFQ?=
 =?us-ascii?q?/iOgVrO+/7BpDdj9it1+C15pbffxhEiCCybL9vIhi6txvdutUUjIdtKKs91w?=
 =?us-ascii?q?bCr2dVdehR2W5mP0+YkQzm5se38p5j8iBQtOwk+sVdT6j0fLk2QKJBAjg+PG?=
 =?us-ascii?q?87+MPktR/YTQuS/XQcSXkZkgBJAwfe8h73WIr6vzbguep83CmaOtD2TawxVD?=
 =?us-ascii?q?+/4apnVAPkhSEaPDMi7mrZltJ/g75aoBK5phxw3YjUYJ2ONPFjeq/RZM4WSX?=
 =?us-ascii?q?ZdUspUUSFKH4GyYJYVD+cZPehWsZTzp0cAoxW9CwmjBuThyj5UiXH50qI3yP?=
 =?us-ascii?q?ghHhrc0QA8Bd8CrHbZodPoP6kSS+C1y6zIwC3fYvNSxzj855LHchY8rvGWQL?=
 =?us-ascii?q?J7bNffyVUxGAPLlFqQr5HuMTCI2OkRsGaV6fZtXv+ohmE9qgFxuSKgxsYoio?=
 =?us-ascii?q?bTnY8a0VHE9Tlkz4krK923Vkh7bsO6H5tKrS2VK4x2QsY7TmxupS00yaUGtI?=
 =?us-ascii?q?a0cSQW0pgr2gLTZv6df4SW+B7vSfidLDlgiH9hZb6znQu+/Eykx+HmS8W4zF?=
 =?us-ascii?q?lHojBEn9XUuHACyR/e5tSCR/Z4/UquxSiA2BzW5+xGIE05m6nWJpsjz7M+mZ?=
 =?us-ascii?q?ccrUHOEyzylUrriqKda18q9fKy6+v9Z7Xrvp+cOJFwigH5KqkglcKwD/gkPg?=
 =?us-ascii?q?QQRmSU9uuy2KD68U3+W7pFkOc6kq7EsJDGPssbobO5AwlI3Yo58xa/FTCm0M?=
 =?us-ascii?q?gGknYbMFJFeRWHj47zN1HJPfD4EfO/g1OrkDdo3fzJIrrhApDVInjClrfuY6?=
 =?us-ascii?q?p95lZTxQYv19xT+o5YB7MbLP7tQEP9qcLUAxEhPwyx2ennCdF91o0EWWKIB6?=
 =?us-ascii?q?+UKLndsV6T5u01IumMYJUatS3mJPgk+/7hkGE2mVEDcqmvwJQYdmq4Eu56LE?=
 =?us-ascii?q?WeZHrgms0BHnsSvgoiUOzqj0WPXz9NaHa1RaI86S80BJioA4feQICthKaO3D?=
 =?us-ascii?q?+gE5JIe2BKEEyDEXb0d4WeWvcNZzieLdNmkjwBTbKhUZMu1QmytA/mzLpqNu?=
 =?us-ascii?q?vU9TcetZ751th6/fHclRIp9TxzCcSQyH+CT3xukmwURj82wLp/oU1yy1uZza?=
 =?us-ascii?q?d4hPlYRpRv4KYDQgo8cJLR0eF+I9TzQR7aOMeETk68RderCi13Scg+iZdac0?=
 =?us-ascii?q?t7XtmvkB3H9y6rGKMO0ayGAoQu9aDR1GS3INxynSXozq4k2nAiT9tGM2G8m+?=
 =?us-ascii?q?ZE6wXdDoiBsn6wtoKDPfAV3TXE9WOK5WCPoE1VXgN2XajfG3sYYx2F/pzC+k?=
 =?us-ascii?q?reQur2WvwcOQxbxJvHc/MSM4C7oXZnYbLOJt3afmutmmC2WUvSlK3ZNdmicj?=
 =?us-ascii?q?AHxyyYE0gNyV1Nrj6NYBIzAi6xrm6ZFjFqHELibxCJk6Fy/Xm6UkM5wQyMY0?=
 =?us-ascii?q?B7kb2z/x8en/uHTP0PmLkDvXRpsGBvEVKw1M7REYDFrQt9cKhSbNUx7U0C0m?=
 =?us-ascii?q?TctgdnOYenIbwnjVkbIGEV90O7+Q9wD9dsmNQn/kg3xgh/Ir7K9V5abDqX0N?=
 =?us-ascii?q?XRFu/8LW/u8RazLpLHwlyM9fez1+Ix5e4jqlLl7jHsM3Fn3mVs09BT3HbZ3Z?=
 =?us-ascii?q?jMAAcIeL7aUks89EsfxfnQYnwl/oDK2SIxMYC/uWWc4uwUFfod0jn5Y+ZiDq?=
 =?us-ascii?q?WpRUzKN5NLCOG/Isosv1SAO0FhXqhYoZUWLfL2UKfYx76rObhChzD7qHRW7c?=
 =?us-ascii?q?Nb+32x23VVFv/DwsgImNup9yamRhDtoGe7uNnai6NPVyBORECf7hHrFIBadr?=
 =?us-ascii?q?J3dqstMEq+BuK0n9tO3oeyZDl6zHWOPkwB+deFexOYUQan4SBtjRU4uiW5wX?=
 =?us-ascii?q?ea7jB0nxF1pa+g/w3I2NrYbTECJ0pQWWg4lHewLJatv8hKXmXxNhoujwq8vk?=
 =?us-ascii?q?Thw5QYlI58Fm2OYm5QcH2lZ3EnU7G3sKKFedIK8p4zrCFLBfy1ekvJEOSvig?=
 =?us-ascii?q?YG0y7lA2pVzSw6cDfvgJjighhmkzjDdiR+t3OHP8F7nkyAvJmCFbhQxjoDVG?=
 =?us-ascii?q?9zjjyETkOkMYyP+tOZ34zGrvj4T3ioA5RSaiDqyYqoviqn6WBkDBuzkur1kd?=
 =?us-ascii?q?riQkAhySGu79BxTm3TqQrkJIzi1qC0K+ViK0BpH1b978NSEIBinoY+iZcc1G?=
 =?us-ascii?q?JcjZKQrjIciWmmCdxAwurlaWYVAz4Gx9mA+A/+xEhqNW6E3arUa0/FmI5fSo?=
 =?us-ascii?q?D/ZWkbnCUg88pNFaGYqqRemjd4qUa5qgSXZuVhmjAayr0l73tJy+0KuQ941i?=
 =?us-ascii?q?yGGfhSBkhXOyXwig6Fp86ztqRZZWujMPCw2UNykMrnDeSqqABAVHv3d5EuED?=
 =?us-ascii?q?U26cN6MVnW12b05J2ic97VNIhBjBCfnhbeguQQE6ofzaNW1wxgP2+1/XA+wr?=
 =?us-ascii?q?B9jRc1hs7i+dbXbWR1/KepRBVfM2+9YcRb4TzrgatE+6Tel4myApVsHCkKV5?=
 =?us-ascii?q?r0XLqpFjwVr/HuKweJFnU1tH6aHbPVGQLX5l1hqjrDFJWiNnffI3d8r50qTR?=
 =?us-ascii?q?iGLU1biSgeXSk2k5A0EAyn3orqd0I4rjEd61jkqwddn/pyPkqaMC+XrwOpZz?=
 =?us-ascii?q?EoDZmHeUYOqFgauAGPd5XEtbE7BSxT85y/oRbYJ3eSOUJIBjpSBRTBWQClP6?=
 =?us-ascii?q?Gu4MmG+O+dVY/cZ7PDZ6uDretGWrKG35Wqh8Fv/i2FN8GGFnNjE/E21EdFUX?=
 =?us-ascii?q?1jXc/enn9cLk5f3zKIdMOdqBqmr2d+r9u28fDicAju+YeCBbZUPdh1vRuxhO?=
 =?us-ascii?q?3QUozYzDY8IjFe2JQWwHbOw7VKx18ehRZlcDy1GKgBvyrAFfiCoKJcAh8FZi?=
 =?us-ascii?q?82D/Nms/NmjDFEItWTyt7u3+A+jvVuUAgdEAKx3MCxZckaZWq6MQGPCEGOPb?=
 =?us-ascii?q?WAbTrFpqO/Kaq9UrRRiOx8vRystTuVHknvMy7FnD7sHxyiKuBDiiiHMQcW5N?=
 =?us-ascii?q?n7K04yTzKlFYi9IhSgeMd6lzg33aE5ihaofSYHPD5wflkM5ryc4CVEg+luTm?=
 =?us-ascii?q?lI735rN+6ByG6S6+jVLIpTsOM+XnwyzroFpi58kuEOv0QmDLRvlSDfr8BjuQ?=
 =?us-ascii?q?SrieDRjDpsC0ER8nMV1MSKpUVnKePS8ZwTPBSMtB8L82iUDAwH4tV/Ddi68a?=
 =?us-ascii?q?lR0dHOkKvbIjZe/9/a8M0QCtKSI8WCeilEU1KhCHvPAQ0JQCT+f2jenEVclP?=
 =?us-ascii?q?i6/HyJopU0p57glYBIQbheHg9QdLtSGgFuG9oMJ41yVzUvnOuAjcIG0nG5qQ?=
 =?us-ascii?q?HYWMRQup2UHuLXG/jkLyyVyKVVfxZdi62tNpwdb8eovi4qIkk/honBHFDcGM?=
 =?us-ascii?q?xAsjE0JBFhu11DqTB/VjFhhxqjO1nruDlLUqfp1h8u1lkiOaJ0rGiquxFvYQ?=
 =?us-ascii?q?OUwUl42EgpxYe423bIKGa3dOHoGtgKQyvs6xpoas+9H1kzNUvq2hU7fDbcGe?=
 =?us-ascii?q?ALhuM5Jzky0V3S5cMXS6wbEf0hAldYxOnJNa8hiQ0O83z+l0EbvbCXW9w+xE?=
 =?us-ascii?q?NveJqo5SsaiThuZ9M0O6HcYZFx4AUN2vC2tzSznqA82wZAYUYGqzjNIGtW6A?=
 =?us-ascii?q?oJLrkjN2yj+ek+oQCFnjJCfiALWZ9I6rpy8VghPu2b0y/6+5N+EBjrctKydu?=
 =?us-ascii?q?aesWWGktOUSFQt0E9Oj1NC4bV9zcYkdQyTSlwry7ySUR8OMK+gYUlZYtFT+3?=
 =?us-ascii?q?7aYSuV+bmVh8stYMPnULmyFqeHr+4Mj1ihHRo1EohE9ckHEpS2kQnZIcrhML?=
 =?us-ascii?q?8Z2EAt6QDsdzDnRLxCfBOGlitCotnqkMcxhNECYGtbWD8mYm2t673apxEnmq?=
 =?us-ascii?q?+OR95oJHcRBdBbbjdoCIu7gy5crzJLCzzkt4BRgAWE8TL4oTzdSTfmaN82Lv?=
 =?us-ascii?q?6VfhptDNie8zQj/6W4hFjb/4+YLGb/f4cH2JeH+aYBqpCLBukBB6F6qFvZkp?=
 =?us-ascii?q?JESmaCflP1SYTwGbWpLo4mYJrzF2qwVUG5h3QtVcDtMd2xL6+Oxwb1WYJTt4?=
 =?us-ascii?q?rd1zcmf5zYdHlWC1J7oOcN47h5bAsIbs8gYBLmgA85MrS2PAaS1tj9C3boMz?=
 =?us-ascii?q?ZdSONTiPmrf7EChTR5dfe0kTFzK/Nyh/ny60MGQ4sGyw3T1er2LZcLSjD9Qz?=
 =?us-ascii?q?RcY1md+XJ/xjInbqBqhb5iiBLQ7QtAa3bSLLMvMCoc+IhiYDHaaXRuVjhlHR?=
 =?us-ascii?q?nF1dKFukj0mOlOtypFw4QNibED7Ce45tmHJ2vzEK2z98ePunJ5P4F/+v9/bd?=
 =?us-ascii?q?S7cJnB6MO7/HSXTYGO4FeMCHfoTqMDyNYMeHkKEr4UySlgMMgC89Mdu3p0bd?=
 =?us-ascii?q?83Ivl0MIdpvqqjMGU2By8OwSIXWoWM0SZEieC5ieODy0WgNa86ORlBi61sx9?=
 =?us-ascii?q?sQVykqPXEzmZX7DcDoujbBTWIGZgAO8Q5L+QQM0Jdqefzo65bJS5kKzCNKp/?=
 =?us-ascii?q?VzUW3AEZw6rgKqGFHTukDxTbCaq8Ls2AtTyPz21dxCB0x1CFRRyuhbkEclMv?=
 =?us-ascii?q?d8LKxC54M=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0GvAQBv7/hdh3sCayhlHgELHINJKSe?=
 =?us-ascii?q?BZgMECyoKg3qDRgOFOoVUlmuGTAMYPAkEAQEKAS0CAQEBAYQ+AheCHwYBBDQ?=
 =?us-ascii?q?TAgECAQwBAQEDAQEBAgECAwICAQECEAEBAQgNCQgphT4BC4I7KQFpOTgBAQE?=
 =?us-ascii?q?BAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEBAQEFAoEMPgIBAxIRBA0?=
 =?us-ascii?q?MAQE3AQ8CASICJgICAjAVEAIEDgUigjVLgkcDLwIBogM9AiMBPwILgQQpiGA?=
 =?us-ascii?q?BAXR/M4J+AQEFhRkYWIE/CYEOKIc/glOCGgaBQT+BEYJlbIRIGCOCVoJej3w?=
 =?us-ascii?q?5nlQoghaWBCeaSS2OIJpNAgQCBAUCDgEBBYFpgXszIhsVgydQERSNEgwOCRW?=
 =?us-ascii?q?DO4pTQgExAYEnj0gBgQ8BAQ?=
X-IPAS-Result: =?us-ascii?q?A0GvAQBv7/hdh3sCayhlHgELHINJKSeBZgMECyoKg3qDR?=
 =?us-ascii?q?gOFOoVUlmuGTAMYPAkEAQEKAS0CAQEBAYQ+AheCHwYBBDQTAgECAQwBAQEDA?=
 =?us-ascii?q?QEBAgECAwICAQECEAEBAQgNCQgphT4BC4I7KQFpOTgBAQEBAQEBAQEBAQEBA?=
 =?us-ascii?q?QEBAQEBAQEBAQEBAQEBAQEBAQEBAQEFAoEMPgIBAxIRBA0MAQE3AQ8CASICJ?=
 =?us-ascii?q?gICAjAVEAIEDgUigjVLgkcDLwIBogM9AiMBPwILgQQpiGABAXR/M4J+AQEFh?=
 =?us-ascii?q?RkYWIE/CYEOKIc/glOCGgaBQT+BEYJlbIRIGCOCVoJej3w5nlQoghaWBCeaS?=
 =?us-ascii?q?S2OIJpNAgQCBAUCDgEBBYFpgXszIhsVgydQERSNEgwOCRWDO4pTQgExAYEnj?=
 =?us-ascii?q?0gBgQ8BAQ?=
X-IronPort-AV: E=Sophos;i="5.69,325,1571716800"; 
   d="scan'208";a="10228222"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDEVpuULf5u9cOc3D5DvTaUSB+5GE0BdTY6pRO?=
 =?us-ascii?q?XGYMJvsWzyQ6cf+b+4Vk5HJyoIteHv4NdBn6iDIZeNbS5VJb2LHnFjXR?=
 =?us-ascii?q?L3EPPS8nXqzb0llb0HPlErmDKpLoc71jl4L/S8SoP770Qfc7h30At0D+?=
 =?us-ascii?q?HcGt+ZUby2pcEA4ursoZJfYw=3D=3D?=
Received: from mail-eopbgr20123.outbound.protection.outlook.com (HELO EUR02-VE1-obe.outbound.protection.outlook.com) ([40.107.2.123])
  by esa6.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 17 Dec 2019 10:12:43 -0500
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nJS4Eg07+fBcFtVTgBMretvZBZ7ai7dElJStp/MAL9oJ8+iB70l/Zxt8hw4ZX8W3TzIlTV0s+ZGJLKo6DdZMh+Yel5G43A3U5iZRcwv5mufJzyboyy7lUJd+IMB87Ea3hZn5uVq1SdYNsw4QBD4DzyhbSY+k600PvyVcfqp+lekbkMlaatKwQq+K/lu/IKZGMrvsVSBsygz6c8zpTY1067vo+sLfsQGuZICpGr6kY27kQCIIg9A2zcM29YeguxBqsCvVC+mlCIuV99W1g7aHNID45rFvu7bfFqCShCIeV0Gf65o5UHH1ZoWE7b9OCLiE0ezffX6c/XsOIdF43tKyDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pqkvd75hAcc26QVUyyPpG1tYUVnE9+PL69gAybNnLdA=;
 b=VDTi2LdWW/cfC3qOdvTUtDAvVl60wCVeHTRm7lVnQwASjS67cFNNHis55Bt2hLBPINwpXppbF3mr5FXdULs1Kelw27GxlCYg3JrCEFptg3alhf8zlm5i+Ih9faz9mR+YPKJtQJSsbVObKWsGUc9KKBWeAEkWqt4p0/HODlVnZjzk8EViTHWX6tjx2Ba7f44+8xE7OvH9KCjNEfcc2RdTicIta3JqWxqg30Np5URM2F6FzULO/c2X88w9IF+hQLIDG1/6qVaMaffAMOimePecNLG6M2ptP6p6QOaLkD7bLPdxfjvHcecgXBzBOfj6Jouv4SJKEAoXQecqQQorC9pqWw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=bitdefender.com; dmarc=pass action=none
 header.from=bitdefender.com; dkim=pass header.d=bitdefender.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=bitdefender.onmicrosoft.com; s=selector2-bitdefender-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pqkvd75hAcc26QVUyyPpG1tYUVnE9+PL69gAybNnLdA=;
 b=SJzY8O46wxhm6Jsj5Mxye+NAWUD7A4ycu6e3BYAEIyOyDXrbqEzKcZy42BHOQgxfcxwmcENQExMGk1fX1q7W/JPCEQwoIpDjabV22TEU09Gs1v04JIdhdH4cmWKInsVRqCmuLsui2qH4evqXp8jE11zosyq3jslT+XBkQl5KApg=
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com (10.255.30.78) by
 AM0PR02MB4386.eurprd02.prod.outlook.com (20.178.17.212) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2538.20; Tue, 17 Dec 2019 15:12:42 +0000
Received: from AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d]) by AM0PR02MB5553.eurprd02.prod.outlook.com
 ([fe80::8cec:7638:734c:89d%4]) with mapi id 15.20.2538.019; Tue, 17 Dec 2019
 15:12:42 +0000
From: Alexandru Stefan ISAILA <aisaila@bitdefender.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Alexandru Stefan ISAILA <aisaila@bitdefender.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Julien Grall <julien@xen.org>, "Konrad Rzeszutek
 Wilk" <konrad.wilk@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Razvan COJOCARU <rcojocaru@bitdefender.com>, Tamas K Lengyel
	<tamas@tklengyel.com>, Petre Ovidiu PIRCALABU <ppircalabu@bitdefender.com>,
	George Dunlap <george.dunlap@eu.citrix.com>
Subject: [PATCH V4 4/4] x86/mm: Make use of the default access param from
 xc_altp2m_create_view
Thread-Topic: [PATCH V4 4/4] x86/mm: Make use of the default access param from
 xc_altp2m_create_view
Thread-Index: AQHVtOxtPhIY/fNOfU26Nr2n4YFnQg==
Date: Tue, 17 Dec 2019 15:12:41 +0000
Message-ID: <20191217151144.9781-4-aisaila@bitdefender.com>
References: <20191217151144.9781-1-aisaila@bitdefender.com>
In-Reply-To: <20191217151144.9781-1-aisaila@bitdefender.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-clientproxiedby: AM0PR01CA0067.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:e6::44) To AM0PR02MB5553.eurprd02.prod.outlook.com
 (2603:10a6:208:160::14)
x-ms-exchange-messagesentrepresentingtype: 1
x-mailer: git-send-email 2.17.1
x-originating-ip: [91.199.104.6]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d1c7465d-82bc-4165-5826-08d783038fd7
x-ms-traffictypediagnostic: AM0PR02MB4386:|AM0PR02MB4386:|AM0PR02MB4386:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR02MB438698F6D3702C0E5BE4C7B0AB500@AM0PR02MB4386.eurprd02.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2582;
x-forefront-prvs: 02543CD7CD
x-forefront-antispam-report: SFV:NSPM;SFS:(10019020)(366004)(136003)(396003)(39860400002)(346002)(376002)(189003)(199004)(7416002)(6512007)(52116002)(64756008)(86362001)(186003)(8936002)(6486002)(4326008)(2906002)(54906003)(66476007)(71200400001)(66556008)(66946007)(36756003)(26005)(81156014)(81166006)(5660300002)(8676002)(6506007)(478600001)(66446008)(1076003)(2616005)(316002)(6916009);DIR:OUT;SFP:1102;SCL:1;SRVR:AM0PR02MB4386;H:AM0PR02MB5553.eurprd02.prod.outlook.com;FPR:;SPF:None;LANG:en;PTR:InfoNoRecords;MX:1;A:1;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: Hvb8sdyrmFVpHcpMuHblM/GFZ0y7JsgE8e6hrXWrE4pUL+DhxOXTPUUCf7szvpsA//KDM+hj2zVlUcgVXlceThYKX3r1+jMqWZK/B9bRwH8IZWMZ/V269Tt343OrxhKK2lfUR+rI6iSlaMz5aZH15Nll1RQrHV2X53yCubu/NCSh/UJsUa6j67SCYJD9bCxS7h05r9i+6oG8kmbL3lZXoIFM76QyWftafbmYdCLhP+UITzbM6TTkDJCFh7nZw7Urdqy1nr00O9idYOlYvUElkq4Kv6nPfeBxST/PuzGJ8xgKOI3s9yQKYpxnxiCgPNEaq/oVjKnFztEuEmfPspAbrDs/ykUzWqUiTN9GWyr2HQPhEGUCi2Cb0wxiP8tS9yfvnq2pg5QWYh/DsAGghxMWRGPD+EtCK9kKOgGlVLB4pFs1Z6wEVqICccitza+vLNND
Content-Type: text/plain; charset="utf-8"
Content-ID: <CE425A9B4EFD9D44AB8E10D62D0B9390@eurprd02.prod.outlook.com>
Content-Transfer-Encoding: base64
X-MS-Exchange-CrossTenant-Network-Message-Id: d1c7465d-82bc-4165-5826-08d783038fd7
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Dec 2019 15:12:41.8998
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 487baf29-f1da-469a-9221-243f830c36f3
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: PCR1X1DxL4hxn9Ih2pTkoY5HjUOuRUi2ixtjHso5vquAQESEKZvoKD/WR3CjE8wyFMnPPpeiXWFLyVTvpjnnSuL388r8QMGazJuBSJQ1zTQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR02MB4386
Return-Path: aisaila@bitdefender.com
X-MS-Exchange-Organization-Network-Message-Id: 9fa57bc0-5c97-4872-ce1e-08d783039233
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

QXQgdGhpcyBtb21lbnQgdGhlIGRlZmF1bHRfYWNjZXNzIHBhcmFtIGZyb20geGNfYWx0cDJtX2Ny
ZWF0ZV92aWV3IGlzDQpub3QgdXNlZC4NCg0KVGhpcyBwYXRjaCBhc3NpZ25zIGRlZmF1bHRfYWNj
ZXNzIHRvIHAybS0+ZGVmYXVsdF9hY2Nlc3MgYXQgdGhlIHRpbWUgb2YNCmluaXRpYWxpemluZyBh
IG5ldyBhbHRwMm0gdmlldy4NCg0KU2lnbmVkLW9mZi1ieTogQWxleGFuZHJ1IElzYWlsYSA8YWlz
YWlsYUBiaXRkZWZlbmRlci5jb20+DQotLS0NCkNDOiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3Vz
ZS5jb20+DQpDQzogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4NCkND
OiBXZWkgTGl1IDx3bEB4ZW4ub3JnPg0KQ0M6ICJSb2dlciBQYXUgTW9ubsOpIiA8cm9nZXIucGF1
QGNpdHJpeC5jb20+DQpDQzogR2VvcmdlIER1bmxhcCA8R2VvcmdlLkR1bmxhcEBldS5jaXRyaXgu
Y29tPg0KQ0M6IElhbiBKYWNrc29uIDxpYW4uamFja3NvbkBldS5jaXRyaXguY29tPg0KQ0M6IEp1
bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+DQpDQzogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxr
b25yYWQud2lsa0BvcmFjbGUuY29tPg0KQ0M6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxp
bmlAa2VybmVsLm9yZz4NCkNDOiBSYXp2YW4gQ29qb2NhcnUgPHJjb2pvY2FydUBiaXRkZWZlbmRl
ci5jb20+DQpDQzogVGFtYXMgSyBMZW5neWVsIDx0YW1hc0B0a2xlbmd5ZWwuY29tPg0KQ0M6IFBl
dHJlIFBpcmNhbGFidSA8cHBpcmNhbGFidUBiaXRkZWZlbmRlci5jb20+DQpDQzogR2VvcmdlIER1
bmxhcCA8Z2VvcmdlLmR1bmxhcEBldS5jaXRyaXguY29tPg0KLS0tDQpDaGFuZ2VzIHNpbmNlIFYz
Og0KCS0gQ2hhbmdlIHR5cGUgb2YgaHZtbWVtX2RlZmF1bHRfYWNjZXNzIHRvIHhlbm1lbV9hY2Nl
c3NfdA0KCS0gRml4IHN0eWxlIGlzc3Vlcw0KCS0gUmVsZWFzZSBsb2NrIGJlZm9yZSByZXR1cm4u
DQotLS0NCiB4ZW4vYXJjaC94ODYvaHZtL2h2bS5jICAgICAgICAgIHwgIDMgKystDQogeGVuL2Fy
Y2gveDg2L21tL21lbV9hY2Nlc3MuYyAgICB8ICA2ICsrKy0tLQ0KIHhlbi9hcmNoL3g4Ni9tbS9w
Mm0uYyAgICAgICAgICAgfCAyNyArKysrKysrKysrKysrKysrKysrKysrLS0tLS0NCiB4ZW4vaW5j
bHVkZS9hc20teDg2L3AybS5oICAgICAgIHwgIDMgKystDQogeGVuL2luY2x1ZGUvcHVibGljL2h2
bS9odm1fb3AuaCB8ICAyIC0tDQogeGVuL2luY2x1ZGUveGVuL21lbV9hY2Nlc3MuaCAgICB8ICA0
ICsrKysNCiA2IGZpbGVzIGNoYW5nZWQsIDMzIGluc2VydGlvbnMoKyksIDEyIGRlbGV0aW9ucygt
KQ0KDQpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS9odm0uYyBiL3hlbi9hcmNoL3g4Ni9o
dm0vaHZtLmMNCmluZGV4IGExMjkwNDlkNmIuLmQ0YjE5ZDI0MTIgMTAwNjQ0DQotLS0gYS94ZW4v
YXJjaC94ODYvaHZtL2h2bS5jDQorKysgYi94ZW4vYXJjaC94ODYvaHZtL2h2bS5jDQpAQCAtNDY4
Nyw3ICs0Njg3LDggQEAgc3RhdGljIGludCBkb19hbHRwMm1fb3AoDQogICAgIH0NCiANCiAgICAg
Y2FzZSBIVk1PUF9hbHRwMm1fY3JlYXRlX3AybToNCi0gICAgICAgIGlmICggIShyYyA9IHAybV9p
bml0X25leHRfYWx0cDJtKGQsICZhLnUudmlldy52aWV3KSkgKQ0KKyAgICAgICAgaWYgKCAhKHJj
ID0gcDJtX2luaXRfbmV4dF9hbHRwMm0oZCwgJmEudS52aWV3LnZpZXcsDQorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICBhLnUudmlldy5odm1tZW1fZGVmYXVsdF9hY2Nl
c3MpKSApDQogICAgICAgICAgICAgcmMgPSBfX2NvcHlfdG9fZ3Vlc3QoYXJnLCAmYSwgMSkgPyAt
RUZBVUxUIDogMDsNCiAgICAgICAgIGJyZWFrOw0KIA0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4
Ni9tbS9tZW1fYWNjZXNzLmMgYi94ZW4vYXJjaC94ODYvbW0vbWVtX2FjY2Vzcy5jDQppbmRleCA3
MGYzNTI4YmIxLi4yODhjODY1ZmZhIDEwMDY0NA0KLS0tIGEveGVuL2FyY2gveDg2L21tL21lbV9h
Y2Nlc3MuYw0KKysrIGIveGVuL2FyY2gveDg2L21tL21lbV9hY2Nlc3MuYw0KQEAgLTMxNCw5ICsz
MTQsOSBAQCBzdGF0aWMgaW50IHNldF9tZW1fYWNjZXNzKHN0cnVjdCBkb21haW4gKmQsIHN0cnVj
dCBwMm1fZG9tYWluICpwMm0sDQogICAgIHJldHVybiByYzsNCiB9DQogDQotc3RhdGljIGJvb2wg
eGVubWVtX2FjY2Vzc190b19wMm1fYWNjZXNzKHN0cnVjdCBwMm1fZG9tYWluICpwMm0sDQotICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHhlbm1lbV9hY2Nlc3NfdCB4YWNj
ZXNzLA0KLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwMm1fYWNjZXNz
X3QgKnBhY2Nlc3MpDQorYm9vbCB4ZW5tZW1fYWNjZXNzX3RvX3AybV9hY2Nlc3Moc3RydWN0IHAy
bV9kb21haW4gKnAybSwNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB4ZW5tZW1f
YWNjZXNzX3QgeGFjY2VzcywNCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwMm1f
YWNjZXNzX3QgKnBhY2Nlc3MpDQogew0KICAgICBzdGF0aWMgY29uc3QgcDJtX2FjY2Vzc190IG1l
bWFjY2Vzc1tdID0gew0KICNkZWZpbmUgQUNDRVNTKGFjKSBbWEVOTUVNX2FjY2Vzc18jI2FjXSA9
IHAybV9hY2Nlc3NfIyNhYw0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYyBiL3hl
bi9hcmNoL3g4Ni9tbS9wMm0uYw0KaW5kZXggZDM4MWY2ODc3Zi4uZDY3MzI2ZjhiNyAxMDA2NDQN
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0uYw0KKysrIGIveGVuL2FyY2gveDg2L21tL3AybS5j
DQpAQCAtMjUsNiArMjUsNyBAQA0KIA0KICNpbmNsdWRlIDx4ZW4vZ3Vlc3RfYWNjZXNzLmg+IC8q
IGNvcHlfZnJvbV9ndWVzdCgpICovDQogI2luY2x1ZGUgPHhlbi9pb21tdS5oPg0KKyNpbmNsdWRl
IDx4ZW4vbWVtX2FjY2Vzcy5oPg0KICNpbmNsdWRlIDx4ZW4vdm1fZXZlbnQuaD4NCiAjaW5jbHVk
ZSA8eGVuL2V2ZW50Lmg+DQogI2luY2x1ZGUgPHB1YmxpYy92bV9ldmVudC5oPg0KQEAgLTI1MzMs
NyArMjUzNCw4IEBAIHZvaWQgcDJtX2ZsdXNoX2FsdHAybShzdHJ1Y3QgZG9tYWluICpkKQ0KICAg
ICBhbHRwMm1fbGlzdF91bmxvY2soZCk7DQogfQ0KIA0KLXN0YXRpYyBpbnQgcDJtX2FjdGl2YXRl
X2FsdHAybShzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgaWR4KQ0KK3N0YXRpYyBpbnQg
cDJtX2FjdGl2YXRlX2FsdHAybShzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgaWR4LA0K
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwMm1fYWNjZXNzX3QgaHZtbWVtX2RlZmF1
bHRfYWNjZXNzKQ0KIHsNCiAgICAgc3RydWN0IHAybV9kb21haW4gKmhvc3RwMm0sICpwMm07DQog
ICAgIGludCByYzsNCkBAIC0yNTU5LDcgKzI1NjEsNyBAQCBzdGF0aWMgaW50IHAybV9hY3RpdmF0
ZV9hbHRwMm0oc3RydWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IGlkeCkNCiAgICAgICAgIGdv
dG8gb3V0Ow0KICAgICB9DQogDQotICAgIHAybS0+ZGVmYXVsdF9hY2Nlc3MgPSBob3N0cDJtLT5k
ZWZhdWx0X2FjY2VzczsNCisgICAgcDJtLT5kZWZhdWx0X2FjY2VzcyA9IGh2bW1lbV9kZWZhdWx0
X2FjY2VzczsNCiAgICAgcDJtLT5kb21haW4gPSBob3N0cDJtLT5kb21haW47DQogICAgIHAybS0+
Z2xvYmFsX2xvZ2RpcnR5ID0gaG9zdHAybS0+Z2xvYmFsX2xvZ2RpcnR5Ow0KICAgICBwMm0tPm1p
bl9yZW1hcHBlZF9nZm4gPSBnZm5feChJTlZBTElEX0dGTik7DQpAQCAtMjU3Niw2ICsyNTc4LDcg
QEAgc3RhdGljIGludCBwMm1fYWN0aXZhdGVfYWx0cDJtKHN0cnVjdCBkb21haW4gKmQsIHVuc2ln
bmVkIGludCBpZHgpDQogaW50IHAybV9pbml0X2FsdHAybV9ieV9pZChzdHJ1Y3QgZG9tYWluICpk
LCB1bnNpZ25lZCBpbnQgaWR4KQ0KIHsNCiAgICAgaW50IHJjID0gLUVJTlZBTDsNCisgICAgc3Ry
dWN0IHAybV9kb21haW4gKmhvc3RwMm0gPSBwMm1fZ2V0X2hvc3RwMm0oZCk7DQogDQogICAgIGlm
ICggaWR4ID49IE1BWF9BTFRQMk0gKQ0KICAgICAgICAgcmV0dXJuIHJjOw0KQEAgLTI1ODMsMTYg
KzI1ODYsMjIgQEAgaW50IHAybV9pbml0X2FsdHAybV9ieV9pZChzdHJ1Y3QgZG9tYWluICpkLCB1
bnNpZ25lZCBpbnQgaWR4KQ0KICAgICBhbHRwMm1fbGlzdF9sb2NrKGQpOw0KIA0KICAgICBpZiAo
IGQtPmFyY2guYWx0cDJtX2VwdHBbaWR4XSA9PSBtZm5feChJTlZBTElEX01GTikgKQ0KLSAgICAg
ICAgcmMgPSBwMm1fYWN0aXZhdGVfYWx0cDJtKGQsIGlkeCk7DQorICAgICAgICByYyA9IHAybV9h
Y3RpdmF0ZV9hbHRwMm0oZCwgaWR4LCBob3N0cDJtLT5kZWZhdWx0X2FjY2Vzcyk7DQogDQogICAg
IGFsdHAybV9saXN0X3VubG9jayhkKTsNCiAgICAgcmV0dXJuIHJjOw0KIH0NCiANCi1pbnQgcDJt
X2luaXRfbmV4dF9hbHRwMm0oc3RydWN0IGRvbWFpbiAqZCwgdWludDE2X3QgKmlkeCkNCitpbnQg
cDJtX2luaXRfbmV4dF9hbHRwMm0oc3RydWN0IGRvbWFpbiAqZCwgdWludDE2X3QgKmlkeCwNCisg
ICAgICAgICAgICAgICAgICAgICAgICAgeGVubWVtX2FjY2Vzc190IGh2bW1lbV9kZWZhdWx0X2Fj
Y2VzcykNCiB7DQogICAgIGludCByYyA9IC1FSU5WQUw7DQogICAgIHVuc2lnbmVkIGludCBpOw0K
KyAgICBwMm1fYWNjZXNzX3QgYTsNCisgICAgc3RydWN0IHAybV9kb21haW4gKnAybTsNCisNCisg
ICAgaWYgKCBodm1tZW1fZGVmYXVsdF9hY2Nlc3MgPiBYRU5NRU1fYWNjZXNzX2RlZmF1bHQgKQ0K
KyAgICAgICAgcmV0dXJuIHJjOw0KIA0KICAgICBhbHRwMm1fbGlzdF9sb2NrKGQpOw0KIA0KQEAg
LTI2MDEsNyArMjYxMCwxNSBAQCBpbnQgcDJtX2luaXRfbmV4dF9hbHRwMm0oc3RydWN0IGRvbWFp
biAqZCwgdWludDE2X3QgKmlkeCkNCiAgICAgICAgIGlmICggZC0+YXJjaC5hbHRwMm1fZXB0cFtp
XSAhPSBtZm5feChJTlZBTElEX01GTikgKQ0KICAgICAgICAgICAgIGNvbnRpbnVlOw0KIA0KLSAg
ICAgICAgcmMgPSBwMm1fYWN0aXZhdGVfYWx0cDJtKGQsIGkpOw0KKyAgICAgICAgcDJtID0gZC0+
YXJjaC5hbHRwMm1fcDJtW2ldOw0KKw0KKyAgICAgICAgaWYgKCAheGVubWVtX2FjY2Vzc190b19w
Mm1fYWNjZXNzKHAybSwgaHZtbWVtX2RlZmF1bHRfYWNjZXNzLCAmYSkgKQ0KKyAgICAgICAgew0K
KyAgICAgICAgICAgIGFsdHAybV9saXN0X3VubG9jayhkKTsNCisgICAgICAgICAgICByZXR1cm4g
LUVJTlZBTDsNCisgICAgICAgIH0NCisNCisgICAgICAgIHJjID0gcDJtX2FjdGl2YXRlX2FsdHAy
bShkLCBpLCBhKTsNCiANCiAgICAgICAgIGlmICggIXJjICkNCiAgICAgICAgICAgICAqaWR4ID0g
aTsNCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L3AybS5oIGIveGVuL2luY2x1ZGUv
YXNtLXg4Ni9wMm0uaA0KaW5kZXggOTQyODVkYjFiNC4uYWMyZDI3ODdmNCAxMDA2NDQNCi0tLSBh
L3hlbi9pbmNsdWRlL2FzbS14ODYvcDJtLmgNCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvcDJt
LmgNCkBAIC04ODQsNyArODg0LDggQEAgYm9vbCBwMm1fYWx0cDJtX2dldF9vcl9wcm9wYWdhdGUo
c3RydWN0IHAybV9kb21haW4gKmFwMm0sIHVuc2lnbmVkIGxvbmcgZ2ZuX2wsDQogaW50IHAybV9p
bml0X2FsdHAybV9ieV9pZChzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBpbnQgaWR4KTsNCiAN
CiAvKiBGaW5kIGFuIGF2YWlsYWJsZSBhbHRlcm5hdGUgcDJtIGFuZCBtYWtlIGl0IHZhbGlkICov
DQotaW50IHAybV9pbml0X25leHRfYWx0cDJtKHN0cnVjdCBkb21haW4gKmQsIHVpbnQxNl90ICpp
ZHgpOw0KK2ludCBwMm1faW5pdF9uZXh0X2FsdHAybShzdHJ1Y3QgZG9tYWluICpkLCB1aW50MTZf
dCAqaWR4LA0KKyAgICAgICAgICAgICAgICAgICAgICAgICB4ZW5tZW1fYWNjZXNzX3QgaHZtbWVt
X2RlZmF1bHRfYWNjZXNzKTsNCiANCiAvKiBNYWtlIGEgc3BlY2lmaWMgYWx0ZXJuYXRlIHAybSBp
bnZhbGlkICovDQogaW50IHAybV9kZXN0cm95X2FsdHAybV9ieV9pZChzdHJ1Y3QgZG9tYWluICpk
LCB1bnNpZ25lZCBpbnQgaWR4KTsNCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9wdWJsaWMvaHZt
L2h2bV9vcC5oIGIveGVuL2luY2x1ZGUvcHVibGljL2h2bS9odm1fb3AuaA0KaW5kZXggNDk5NjVk
MjU2Yy4uMjU5OTg4N2Y3ZiAxMDA2NDQNCi0tLSBhL3hlbi9pbmNsdWRlL3B1YmxpYy9odm0vaHZt
X29wLmgNCisrKyBiL3hlbi9pbmNsdWRlL3B1YmxpYy9odm0vaHZtX29wLmgNCkBAIC0yNTEsOCAr
MjUxLDYgQEAgREVGSU5FX1hFTl9HVUVTVF9IQU5ETEUoeGVuX2h2bV9hbHRwMm1fdmNwdV9kaXNh
YmxlX25vdGlmeV90KTsNCiBzdHJ1Y3QgeGVuX2h2bV9hbHRwMm1fdmlldyB7DQogICAgIC8qIElO
L09VVCB2YXJpYWJsZSAqLw0KICAgICB1aW50MTZfdCB2aWV3Ow0KLSAgICAvKiBDcmVhdGUgdmll
dyBvbmx5OiBkZWZhdWx0IGFjY2VzcyB0eXBlDQotICAgICAqIE5PVEU6IGN1cnJlbnRseSBpZ25v
cmVkICovDQogICAgIHVpbnQxNl90IGh2bW1lbV9kZWZhdWx0X2FjY2VzczsgLyogeGVubWVtX2Fj
Y2Vzc190ICovDQogfTsNCiB0eXBlZGVmIHN0cnVjdCB4ZW5faHZtX2FsdHAybV92aWV3IHhlbl9o
dm1fYWx0cDJtX3ZpZXdfdDsNCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4vbWVtX2FjY2Vz
cy5oIGIveGVuL2luY2x1ZGUveGVuL21lbV9hY2Nlc3MuaA0KaW5kZXggMDBlNTk0YTBhZC4uZGM2
YWZjNzI1MiAxMDA2NDQNCi0tLSBhL3hlbi9pbmNsdWRlL3hlbi9tZW1fYWNjZXNzLmgNCisrKyBi
L3hlbi9pbmNsdWRlL3hlbi9tZW1fYWNjZXNzLmgNCkBAIC01OCw2ICs1OCwxMCBAQCB0eXBlZGVm
IGVudW0gew0KICAgICAvKiBOT1RFOiBBc3N1bWVkIHRvIGJlIG9ubHkgNCBiaXRzIHJpZ2h0IG5v
dyBvbiB4ODYuICovDQogfSBwMm1fYWNjZXNzX3Q7DQogDQorYm9vbCB4ZW5tZW1fYWNjZXNzX3Rv
X3AybV9hY2Nlc3Moc3RydWN0IHAybV9kb21haW4gKnAybSwNCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB4ZW5tZW1fYWNjZXNzX3QgeGFjY2VzcywNCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBwMm1fYWNjZXNzX3QgKnBhY2Nlc3MpOw0KKw0KIC8qDQogICogU2V0
IGFjY2VzcyB0eXBlIGZvciBhIHJlZ2lvbiBvZiBnZm5zLg0KICAqIElmIGdmbiA9PSBJTlZBTElE
X0dGTiwgc2V0cyB0aGUgZGVmYXVsdCBhY2Nlc3MgdHlwZS4NCi0tIA0KMi4xNy4xDQoNCg==


[-- Attachment #3: plainenc.am --]
[-- Type: text/plain, Size: 202093 bytes --]

From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:09 +0100
Received: from LASPEX02MSOL01.citrite.net (10.160.21.45) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:07 -0500
Received: from esa6.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL01.citrite.net (10.160.21.45) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 23:49:07 -0800
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: Ha5rE0IYy54WA6tjIHc/h/qA8IG2MNcxI60nwx4tPyC+1DmTk0jkR05xzr5PbwyuW+ZEFY0dYb
 P3ectOEeCUlWnhxg8BXooo0mykXNLxfyuwzod6LbaQMbGkZ258YpCB+uOno/Qk5HgHEeP44/5U
 3DDoToOLw3c8XRTYp7YE2bFsIoktrVTS8AbmqoXcs2COuq9mFhr4hx9lPXo/bzjV1g9hFWXzec
 wqV7QVf5uLwaP07vrFYh711lRfuHK5RtF19izsEWBWbjcckkYqDDnwP9hMogZcyjcTWPvKPtgC
 80XOAnR7NkyLTxLoMjRSl0Eq
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 10279306
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 10279306
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3ADGi6HxPCYTZ26F0ZeDsl6mtUPXoX/o7sNwtQ0K?=
 =?us-ascii?q?IMzox0Iv7/rarrMEGX3/hxlliBBdydt6sfzbCO4+u5ADdIyK3CmUhKSIZLWR?=
 =?us-ascii?q?4BhJdetC0bK+nBN3fGKuX3ZTcxBsVIWQwt1Xi6NU9IBJS2PAWK8TW94jEIBx?=
 =?us-ascii?q?rwKxd+KPjrFY7OlcS30P2594HObwlSizexfL1/IA+ooQnNtcQajpZuJrs/xx?=
 =?us-ascii?q?DUvnZGZuNayH9yK1mOhRj8/MCw/JBi8yRUpf0s8tNLXLv5caolU7FWFSwqPG?=
 =?us-ascii?q?8p6sLlsxnDVhaP6WAHUmoKiBpIAhPK4w/8U5zsryb1rOt92C2dPc3rUbA5XC?=
 =?us-ascii?q?mp4ql3RBP0jioMKiU0+3/LhMNukK1boQqhpx1hzI7SfIGVL+d1cqfEcd8HWW?=
 =?us-ascii?q?ZNQsNdWipcCY2+coQPFfIMM+ZGoYfgqVUArhywCguiBO701jNEmmX70bEg3u?=
 =?us-ascii?q?g9DQ3L2hErEdIUsHTTqdX4LKMcUf2rw6nSwjXMcfVW0ir85ojSdRAhuuqMVq?=
 =?us-ascii?q?93fMrTxkkvDQTFjk6LqYH+JDOVy/8NvHaB4+V8UuKvjncqpgdsqTas3schkp?=
 =?us-ascii?q?TFi4YVx1ze6Cl0zoY4KcemREJlfdKoCoZcuiGCO4drRs4vQ3tktDs0x7AGo5?=
 =?us-ascii?q?K3YjYGxIg9yxLBa/GKfI6F6Q/5WumLOzd3nndldaq/hxms9UigzfXxVsy70V?=
 =?us-ascii?q?pUtCZFicTMtmsT2BDJ98eIVONx/kan2TmRywDe8vxILEQ0mKbBNpIszL49mo?=
 =?us-ascii?q?ANvUjdAiP6glj6ga+OekUh4Oeo6uDnYrv8pp+bMo95kgP+Mqs0msy4GuQ4KR?=
 =?us-ascii?q?MDX3OG+eSnyrLv51H2QLJPjvEuiKnWrIjaJdgHpq6+GwJb05gs6xGlDzepzt?=
 =?us-ascii?q?sUh3cJLE9DeBKDlYfpI0rDL+7lDfuln1uskStrx+rHPrzuGJnCMn/DkLL5d7?=
 =?us-ascii?q?Zn90Fc0BYzzcxY559MC7EOOvTzVlXztNPCEhA5MBe0w+HhCNhmyIweRHiDDb?=
 =?us-ascii?q?OYMKPOq1+I5+0uL/OQa48SvTauY8Qisu7jizo1lEEQeYGt3IALczaoE/J+OU?=
 =?us-ascii?q?KbbHHwxNAbHjQkpA07Gc73hUeNXDgbSGy1RLl0sjM0EoW9Fq/YW5ugxreG2X?=
 =?us-ascii?q?HoTdVtemlaBwXUQj/TfIKeVqJJMXrKeJUzuyEYVbWnV44q3A2vswm/8bd8M+?=
 =?us-ascii?q?7I4XRI5cDuz9Eroe3YzkptrXkqX4Kcy2GIXyd/mWZbDzM13aUqp0t7xx/D1K?=
 =?us-ascii?q?VjmPVXGJRV4O8BSQY1M5PQjqR6Btn+VxiHf4KPT1CrEZ29GT9kaNU3zpcVZl?=
 =?us-ascii?q?plXc24h0XB0DCtGKQ9jKGQCdo/9aePl2PpKZNFwm3dnLIkk0FgR8JOMWO8ga?=
 =?us-ascii?q?sq9QfJAJXSu16EjKvsfqMZj2bW7GnW622IsQlDVRJoF6XIWXdKfkzNsdHw/V?=
 =?us-ascii?q?/PVZerGe5hKRZaxIiOJ/IQOOfkhlhHWvrvfe/mTTjhyjWWAhCFjvOBd4O0PW?=
 =?us-ascii?q?UWh36CUA1aw0YS5XaDJU41ASLz62TZRCdjE17ieSaOuaF3tW+7Q0kozgqLc1?=
 =?us-ascii?q?wp1ry7/QQQjOCdTPVb16wNuSMooTF5VFin2NeeB92FrgtnNKJSBLF1qFVIz2?=
 =?us-ascii?q?XCrCRmI4etaatlgx9Wcgh6uV/vywQiEp9JwoAhqHInyhY3KLrNiQIcMWrDgN?=
 =?us-ascii?q?aqYOWRczShmXLnI7Tb0VzfztuMr6oU4ap+q164517xUxRytXR/09xFlXCb48?=
 =?us-ascii?q?avbkJaXJTvX0Iw7xU/qavdZ3x35YzO0mZ3GbKpqTKE0NUsTrht2lO7ctFTPb?=
 =?us-ascii?q?nRXg39CcoBHOC1Nfcn3VOua1hXWYIavL5xNMSgefyc3aetN+s1hzOqg1NM54?=
 =?us-ascii?q?Vl216N/S5xE7Sa5ZsOzvCG0wfCbA/S3Az44Pj+gpsMJTwJFzT5ySO/X9EJI/?=
 =?us-ascii?q?MiO4cTCWK+Zcaww4c2i5noUn9evFmtYjFOkM2mYx2JdHTmwBZdk08QpDSrlD?=
 =?us-ascii?q?C5wDp9jzwy5vPOgWqXmLmkLkBffDcWDGB5xU/hO421k8wXUC3KJ0AymR2p6F?=
 =?us-ascii?q?y7j6lXqaJjLnXCFEJBfiz4NWZnAeO7sruPZdIK6Yt96HQKFr3nPBbAEOa7+k?=
 =?us-ascii?q?ZJtkGrV3FTzz06aTyw75jilkI8iGnGdykr6SSJP8BoxRLPotfbQK00vHJOSS?=
 =?us-ascii?q?9mhD3QHlX5McOu+IDelZjZtfulf3m8TZAVei7uh9DIpG6g6GtmDAfq1fK8gN?=
 =?us-ascii?q?r8CiAhzDT2kdJtUG+byXS0Kpmu3KO8P+V9e0BuD1Kp8Mt2FLZ1lY4ojY0R03?=
 =?us-ascii?q?wX1d2FuGAKmmDpPZBHyLrzOTATECUTzYefs22HkAVza2iEzIXjWjCBz9t9Mp?=
 =?us-ascii?q?OkN3gO1Ht17tgWWv7Mt/oV23or5AL/91iZYOAhzG5Elr13siZc2r9P4E13l0?=
 =?us-ascii?q?D/SvgTBRUKZHe00UnXqYj49OIOOi6uaeTijREh2478SunE+kYFBj74YstwRH?=
 =?us-ascii?q?MrqJwvbhSViSy1ssa+JLyyJZoSrkHGykufybITcNRp0aNUzSt/ZTCk4Sxjl7?=
 =?us-ascii?q?J91Vo2msjk9Imfdzc0ovj/WEIJcGesPIVKpG2I7+4Wn97KjdnxQtMwS3NTGs?=
 =?us-ascii?q?OvFKzgESpO5625akDXQGV68yvKX+OHRkee8Bs09i2UVcrzbDfNfCBflI05IX?=
 =?us-ascii?q?vVbE1H3FJNDW58x8ZoUFr3mIq5Nx0irjEJugyh+0cKk7o2cUOlCiGH413zD1?=
 =?us-ascii?q?V8AJmHcEgPv10Evh6Ld5XPvqQrWHsDtpy58F7UezHdPl8VSztTAgrcQAq4W9?=
 =?us-ascii?q?vmrdjYr7rBV7f4daeTJ+/V8aoGEK3TjZO3jtk/rmjKb5nJZyY6SaV8gBELXG?=
 =?us-ascii?q?glSZ+F3W9SEWpNzXyLPpX+xl/0+yt8qt2z/abwQAy2o42IFbZWNZNk/BX+gK?=
 =?us-ascii?q?GIM/OciXRiMTgCkJUL23LMzP4U21t36Wkmdj+mFakMuHzWVKyL3KlQERMfb2?=
 =?us-ascii?q?V4M84A7qQ32hRBNJzAkt2zzqR/kvM+F1ZCUxrmh92tYssJZWq6MTalTA7OPb?=
 =?us-ascii?q?CCb2SRkfv6aq69V7Bcyd5smUbo5mS9FEnudnSOjDC3ERCkaroT1GTFbFpfoI?=
 =?us-ascii?q?G4YlBmDm2xBNThIga2NtN6l1hUifU9m2/KOGgAMDN9b1IFr7ue6jldi+l+HG?=
 =?us-ascii?q?oJ52RsLO2NkSKUp+fCLZNevfxuCyVy3+VUhRZyg6NS9z1BTedplTH6q8405U?=
 =?us-ascii?q?q7ieTJxjcmGBtCpzBXhZ6a6EVvPaKKk/sIEX3A/R8L8SCRE0FT9oYjU4ay/f?=
 =?us-ascii?q?oKkp6WyfG7MjpJ/tPK8NFJCtPddoSHOyF6bkKsRm6SDRMFSC7tPmba1Ck/2L?=
 =?us-ascii?q?mf8GOYqp8ip93igp0LH/VZW0YyDegyEVl+EZoJJ5I9DVZG2faLydUF43aztk?=
 =?us-ascii?q?ybXMJBopXOTe6fG93qOGzflqRfal0EzPmrSOZbfp2+0EtkZF5gmY3MEEeFRt?=
 =?us-ascii?q?FBrBpqaQosqVlM+nxzFzxh6wfecgqopUQrO7uxlx8yhBF5ZL11pizx+FpxLV?=
 =?us-ascii?q?3P9nJpzBsB3O79iDXUSwbfab+qVNgOWTHprEV3OZT+EV4sMF+C2Hd8PTKBfI?=
 =?us-ascii?q?p/yrttcWcy013ZqcEJAuNHQOtIbU1IyA=3D=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0HpAQBM2fldhg/ch8NlHQEBAQkBEQU?=
 =?us-ascii?q?FAYF+ghuBRiMECyqTL51ACQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwE?=
 =?us-ascii?q?BAQIBAgMCAgEBAhABAQEKCQsIKYVKgjspAYNPAgEDElkOEB0iEkkOGSKDAIJ?=
 =?us-ascii?q?8oSg9AiMBTIEEizGJDoFIgTaHP4MWgUMagUE/gRGDUYo3BI1CgjifD4I+lgY?=
 =?us-ascii?q?MG45Ri32lKoQagWmBezMaCBsVgydQERSNHg4JjiRAMwGPJAEB?=
X-IPAS-Result: =?us-ascii?q?A0HpAQBM2fldhg/ch8NlHQEBAQkBEQUFAYF+ghuBRiMEC?=
 =?us-ascii?q?yqTL51ACQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwEBAQIBAgMCAgEBA?=
 =?us-ascii?q?hABAQEKCQsIKYVKgjspAYNPAgEDElkOEB0iEkkOGSKDAIJ8oSg9AiMBTIEEi?=
 =?us-ascii?q?zGJDoFIgTaHP4MWgUMagUE/gRGDUYo3BI1CgjifD4I+lgYMG45Ri32lKoQag?=
 =?us-ascii?q?WmBezMaCBsVgydQERSNHg4JjiRAMwGPJAEB?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="10279306"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDGQ9bSEHfMngCsmyG0Q6KFzr3M/kwx4riA/bH?=
 =?us-ascii?q?H68XkYwy5lIGNkk/AeU6EGDMEryakXcrQURQBsZG13tCRErumpnpu3r+?=
 =?us-ascii?q?ERPlKmDFgYnYhHi3v7sCVi9Fs4kx5ybHlpi1t9FQtDQvzq3IPpMVc0SU?=
 =?us-ascii?q?2SJ4UvYvvjN8wpe4lhuvYDsg=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa6.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:04 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 13462AB9B;
	Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>, "Wei Liu" <wl@xen.org>, Dario
 Faggioli <dfaggioli@suse.com>, Josh Whitehead
	<josh.whitehead@dornerworks.com>, Stewart Hildebrand
	<stewart.hildebrand@dornerworks.com>, Meng Xu <mengxu@cis.upenn.edu>
Subject: [PATCH 1/9] xen/sched: move schedulers and cpupool coding to dedicated directory
Date: Wed, 18 Dec 2019 08:48:51 +0100
Message-ID: <20191218074859.21665-2-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: 37c99f12-d910-4e98-d600-08d7838ec34c
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL01.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

Move sched*c and cpupool.c to a new directory common/sched.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS                                        |  8 +--
 xen/common/Kconfig                                 | 66 +---------------------
 xen/common/Makefile                                |  8 +--
 xen/common/sched/Kconfig                           | 65 +++++++++++++++++++++
 xen/common/sched/Makefile                          |  7 +++
 .../{compat/schedule.c => sched/compat_schedule.c} |  2 +-
 xen/common/{ => sched}/cpupool.c                   |  0
 xen/common/{ => sched}/sched_arinc653.c            |  0
 xen/common/{ => sched}/sched_credit.c              |  0
 xen/common/{ => sched}/sched_credit2.c             |  0
 xen/common/{ => sched}/sched_null.c                |  0
 xen/common/{ => sched}/sched_rt.c                  |  0
 xen/common/{ => sched}/schedule.c                  |  2 +-
 13 files changed, 80 insertions(+), 78 deletions(-)
 create mode 100644 xen/common/sched/Kconfig
 create mode 100644 xen/common/sched/Makefile
 rename xen/common/{compat/schedule.c => sched/compat_schedule.c} (97%)
 rename xen/common/{ => sched}/cpupool.c (100%)
 rename xen/common/{ => sched}/sched_arinc653.c (100%)
 rename xen/common/{ => sched}/sched_credit.c (100%)
 rename xen/common/{ => sched}/sched_credit2.c (100%)
 rename xen/common/{ => sched}/sched_null.c (100%)
 rename xen/common/{ => sched}/sched_rt.c (100%)
 rename xen/common/{ => sched}/schedule.c (99%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 012c847ebd..37d4da2bc2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -174,7 +174,7 @@ M:	Josh Whitehead <josh.whitehead@dornerworks.com>
 M:	Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
 S:	Supported
 L:	DornerWorks Xen-Devel <xen-devel@dornerworks.com>
-F:	xen/common/sched_arinc653.c
+F:	xen/common/sched/sched_arinc653.c
 F:	tools/libxc/xc_arinc653.c
 
 ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE
@@ -212,7 +212,7 @@ CPU POOLS
 M:	Juergen Gross <jgross@suse.com>
 M:	Dario Faggioli <dfaggioli@suse.com>
 S:	Supported
-F:	xen/common/cpupool.c
+F:	xen/common/sched/cpupool.c
 
 DEVICE TREE
 M:	Stefano Stabellini <sstabellini@kernel.org>
@@ -378,13 +378,13 @@ RTDS SCHEDULER
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Meng Xu <mengxu@cis.upenn.edu>
 S:	Supported
-F:	xen/common/sched_rt.c
+F:	xen/common/sched/sched_rt.c
 
 SCHEDULING
 M:	George Dunlap <george.dunlap@eu.citrix.com>
 M:	Dario Faggioli <dfaggioli@suse.com>
 S:	Supported
-F:	xen/common/sched*
+F:	xen/common/sched/
 
 SEABIOS UPSTREAM
 M:	Wei Liu <wl@xen.org>
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 2f516da101..79465fc1f9 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -278,71 +278,7 @@ config ARGO
 
 	  If unsure, say N.
 
-menu "Schedulers"
-	visible if EXPERT = "y"
-
-config SCHED_CREDIT
-	bool "Credit scheduler support"
-	default y
-	---help---
-	  The traditional credit scheduler is a general purpose scheduler.
-
-config SCHED_CREDIT2
-	bool "Credit2 scheduler support"
-	default y
-	---help---
-	  The credit2 scheduler is a general purpose scheduler that is
-	  optimized for lower latency and higher VM density.
-
-config SCHED_RTDS
-	bool "RTDS scheduler support (EXPERIMENTAL)"
-	default y
-	---help---
-	  The RTDS scheduler is a soft and firm real-time scheduler for
-	  multicore, targeted for embedded, automotive, graphics and gaming
-	  in the cloud, and general low-latency workloads.
-
-config SCHED_ARINC653
-	bool "ARINC653 scheduler support (EXPERIMENTAL)"
-	default DEBUG
-	---help---
-	  The ARINC653 scheduler is a hard real-time scheduler for single
-	  cores, targeted for avionics, drones, and medical devices.
-
-config SCHED_NULL
-	bool "Null scheduler support (EXPERIMENTAL)"
-	default y
-	---help---
-	  The null scheduler is a static, zero overhead scheduler,
-	  for when there always are less vCPUs than pCPUs, typically
-	  in embedded or HPC scenarios.
-
-choice
-	prompt "Default Scheduler?"
-	default SCHED_CREDIT2_DEFAULT
-
-	config SCHED_CREDIT_DEFAULT
-		bool "Credit Scheduler" if SCHED_CREDIT
-	config SCHED_CREDIT2_DEFAULT
-		bool "Credit2 Scheduler" if SCHED_CREDIT2
-	config SCHED_RTDS_DEFAULT
-		bool "RT Scheduler" if SCHED_RTDS
-	config SCHED_ARINC653_DEFAULT
-		bool "ARINC653 Scheduler" if SCHED_ARINC653
-	config SCHED_NULL_DEFAULT
-		bool "Null Scheduler" if SCHED_NULL
-endchoice
-
-config SCHED_DEFAULT
-	string
-	default "credit" if SCHED_CREDIT_DEFAULT
-	default "credit2" if SCHED_CREDIT2_DEFAULT
-	default "rtds" if SCHED_RTDS_DEFAULT
-	default "arinc653" if SCHED_ARINC653_DEFAULT
-	default "null" if SCHED_NULL_DEFAULT
-	default "credit2"
-
-endmenu
+source "common/sched/Kconfig"
 
 config CRYPTO
 	bool
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 62b34e69e9..2abb8250b0 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -3,7 +3,6 @@ obj-y += bitmap.o
 obj-y += bsearch.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
-obj-y += cpupool.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-y += domctl.o
@@ -38,12 +37,6 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
-obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o
-obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o
-obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o
-obj-$(CONFIG_SCHED_RTDS) += sched_rt.o
-obj-$(CONFIG_SCHED_NULL) += sched_null.o
-obj-y += schedule.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
@@ -74,6 +67,7 @@ obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall
 extra-y := symbols-dummy.o
 
 subdir-$(CONFIG_COVERAGE) += coverage
+subdir-y += sched
 subdir-$(CONFIG_UBSAN) += ubsan
 
 subdir-$(CONFIG_NEEDS_LIBELF) += libelf
diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
new file mode 100644
index 0000000000..883ac87cab
--- /dev/null
+++ b/xen/common/sched/Kconfig
@@ -0,0 +1,65 @@
+menu "Schedulers"
+	visible if EXPERT = "y"
+
+config SCHED_CREDIT
+	bool "Credit scheduler support"
+	default y
+	---help---
+	  The traditional credit scheduler is a general purpose scheduler.
+
+config SCHED_CREDIT2
+	bool "Credit2 scheduler support"
+	default y
+	---help---
+	  The credit2 scheduler is a general purpose scheduler that is
+	  optimized for lower latency and higher VM density.
+
+config SCHED_RTDS
+	bool "RTDS scheduler support (EXPERIMENTAL)"
+	default y
+	---help---
+	  The RTDS scheduler is a soft and firm real-time scheduler for
+	  multicore, targeted for embedded, automotive, graphics and gaming
+	  in the cloud, and general low-latency workloads.
+
+config SCHED_ARINC653
+	bool "ARINC653 scheduler support (EXPERIMENTAL)"
+	default DEBUG
+	---help---
+	  The ARINC653 scheduler is a hard real-time scheduler for single
+	  cores, targeted for avionics, drones, and medical devices.
+
+config SCHED_NULL
+	bool "Null scheduler support (EXPERIMENTAL)"
+	default y
+	---help---
+	  The null scheduler is a static, zero overhead scheduler,
+	  for when there always are less vCPUs than pCPUs, typically
+	  in embedded or HPC scenarios.
+
+choice
+	prompt "Default Scheduler?"
+	default SCHED_CREDIT2_DEFAULT
+
+	config SCHED_CREDIT_DEFAULT
+		bool "Credit Scheduler" if SCHED_CREDIT
+	config SCHED_CREDIT2_DEFAULT
+		bool "Credit2 Scheduler" if SCHED_CREDIT2
+	config SCHED_RTDS_DEFAULT
+		bool "RT Scheduler" if SCHED_RTDS
+	config SCHED_ARINC653_DEFAULT
+		bool "ARINC653 Scheduler" if SCHED_ARINC653
+	config SCHED_NULL_DEFAULT
+		bool "Null Scheduler" if SCHED_NULL
+endchoice
+
+config SCHED_DEFAULT
+	string
+	default "credit" if SCHED_CREDIT_DEFAULT
+	default "credit2" if SCHED_CREDIT2_DEFAULT
+	default "rtds" if SCHED_RTDS_DEFAULT
+	default "arinc653" if SCHED_ARINC653_DEFAULT
+	default "null" if SCHED_NULL_DEFAULT
+	default "credit2"
+
+endmenu
diff --git a/xen/common/sched/Makefile b/xen/common/sched/Makefile
new file mode 100644
index 0000000000..359af4f8bb
--- /dev/null
+++ b/xen/common/sched/Makefile
@@ -0,0 +1,7 @@
+obj-y += cpupool.o
+obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o
+obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o
+obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o
+obj-$(CONFIG_SCHED_RTDS) += sched_rt.o
+obj-$(CONFIG_SCHED_NULL) += sched_null.o
+obj-y += schedule.o
diff --git a/xen/common/compat/schedule.c b/xen/common/sched/compat_schedule.c
similarity index 97%
rename from xen/common/compat/schedule.c
rename to xen/common/sched/compat_schedule.c
index 8b6e6f107d..2e450685d6 100644
--- a/xen/common/compat/schedule.c
+++ b/xen/common/sched/compat_schedule.c
@@ -37,7 +37,7 @@ static int compat_poll(struct compat_sched_poll *compat)
 #define do_poll compat_poll
 #define sched_poll compat_sched_poll
 
-#include "../schedule.c"
+#include "schedule.c"
 
 int compat_set_timer_op(u32 lo, s32 hi)
 {
diff --git a/xen/common/cpupool.c b/xen/common/sched/cpupool.c
similarity index 100%
rename from xen/common/cpupool.c
rename to xen/common/sched/cpupool.c
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched/sched_arinc653.c
similarity index 100%
rename from xen/common/sched_arinc653.c
rename to xen/common/sched/sched_arinc653.c
diff --git a/xen/common/sched_credit.c b/xen/common/sched/sched_credit.c
similarity index 100%
rename from xen/common/sched_credit.c
rename to xen/common/sched/sched_credit.c
diff --git a/xen/common/sched_credit2.c b/xen/common/sched/sched_credit2.c
similarity index 100%
rename from xen/common/sched_credit2.c
rename to xen/common/sched/sched_credit2.c
diff --git a/xen/common/sched_null.c b/xen/common/sched/sched_null.c
similarity index 100%
rename from xen/common/sched_null.c
rename to xen/common/sched/sched_null.c
diff --git a/xen/common/sched_rt.c b/xen/common/sched/sched_rt.c
similarity index 100%
rename from xen/common/sched_rt.c
rename to xen/common/sched/sched_rt.c
diff --git a/xen/common/schedule.c b/xen/common/sched/schedule.c
similarity index 99%
rename from xen/common/schedule.c
rename to xen/common/sched/schedule.c
index e70cc70a65..a550dd8f93 100644
--- a/xen/common/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -3125,7 +3125,7 @@ void __init sched_setup_dom0_vcpus(struct domain *d)
 #endif
 
 #ifdef CONFIG_COMPAT
-#include "compat/schedule.c"
+#include "compat_schedule.c"
 #endif
 
 #endif /* !COMPAT */
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:10 +0100
Received: from MIAPEX02MSOL01.citrite.net (10.52.109.11) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:08 -0500
Received: from esa3.hc3370-68.iphmx.com (10.9.154.239) by
 MIAPEX02MSOL01.citrite.net (10.52.109.11) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Wed, 18 Dec 2019 02:49:07 -0500
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa3.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa3.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa3.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: cqDaaviZ3XT+XoPhkD5jXJcFIbmp/UsK6x+JTd1qgd7PB7HOFFs4RePSuPEaHRTRHQFXyPCb65
 vmkAB8E/xo/qW0LlZF4PvgWjUGBKkRWFAnD1gGRLl+JghzXsUwGfU40syE9aTftJAorZvo8aFx
 k82Ngqcn2E+qD/6aII6QHAAVD76BIizdIffySr9mxmfeyUByAWn+Wlw+MHRuf68Feg9msMcWdg
 WDXHpJyMzjO21v9CAjKluz4gKHCIm7skLLCkZYz9Zj9/GcQJv2a/BXyQPoYkO6/VKaA03ftf4l
 gWX4qdkTaKe3v/WGbQAgHRBQ
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 9853020
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 9853020
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3A629iRhcQMsPnsHVCXjxYri0YlGMj4u6mDksu8p?=
 =?us-ascii?q?Mizoh2WeGdxcS4YB7h7PlgxGXEQZ/co6odzbaP6Oa6ATxLuM/a+Fk5M7V0Hy?=
 =?us-ascii?q?cfjssXmwFySOWkMmbcaMDQUiohAc5ZX0Vk9XzoeWJcGcL5ekGA6ibqtW1aFR?=
 =?us-ascii?q?rwLxd6KfroEYDOkcu3y/qy+5rOaAlUmTaxe7x/IAi4oAnLqMUanYhvJqksxh?=
 =?us-ascii?q?fUrHZDZvhby35vKV+PhRj3+92+/IRk8yReuvIh89BPXKDndKkmTrJWESorPX?=
 =?us-ascii?q?kt6MLkqRfMQw2P5mABUmoNiRpHHxLF7BDhUZjvtCbxq/dw1zObPc3ySrA0RC?=
 =?us-ascii?q?ii4qJ2QxLmlCsLKzg0+3zMh8dukKxUvg6upx1nw47Vfo6VMuZ+frjAdt8eXG?=
 =?us-ascii?q?ZNQ9pdWzBEDo66coABDfcOPfxAoof9uVUAsBWwCwqrCuztxD9FnWP60bEg3u?=
 =?us-ascii?q?g9CwzKwBAsEtQTu3rUttX1M6ISXPixwqnV0zrDdfxW1in76IPVcx4hu/aMXa?=
 =?us-ascii?q?lrccHMzkQvFQzFjk+XqYz+JDOYzf8Ns3WA7+V+T+6gl2knqwRorzWp28wiiZ?=
 =?us-ascii?q?HJi5oIxl3A9Sh12ps5KNO4RUJhf9KoDodcuzuHO4Z2Ws8uXmVltSYgxrEbt5?=
 =?us-ascii?q?O2fDIGxIo5yxLDcfCLbYiF7x3lWe2MOzl3nmhld6i6hxuq8Uiv1On8Vs6s3V?=
 =?us-ascii?q?ZPoStJjMPAtmsQ1xzI9MeLUOZy8Vm51TaO0QDc9P1ELFgpmaffK5Mt2KM8m5?=
 =?us-ascii?q?QTvEjZACP6hln6gLWLekgk4uSo7v7oYrTipp+SLY90jQT+P7wsmsywG+Q4NR?=
 =?us-ascii?q?MOX3OA9OSnyb3j5lP2TK9Wgf0xl6nVqIraKtgDpq6lHw9V1Z4u5RehADehyt?=
 =?us-ascii?q?QYkmcIIEhLdhKaiYjpP0vBIOjjAPihnlSgiitkx/HaPr37A5XMIWLPn6vmfb?=
 =?us-ascii?q?Z480Rc0hY8zchD55JIDbEMOPPzVVX3tNDCDB82KRC7w+X8CNV60IMTQnyAAr?=
 =?us-ascii?q?KDPKzOsF+I+vgvI+iDZYMPpDn9LP0ltLbSiioplFlYcaS30J8/bHGjAu8gM0?=
 =?us-ascii?q?ifeWDrgNoKDSENpAVtYvbtjQigTDNJaHu0F40m4SogQNakAp3EXZuFm6Ga0W?=
 =?us-ascii?q?GwGZgANTMOMUyFDXq9L9bMYPwLci/HZ5Y5ymZeB5G8V48s0w2vvwbmyr1ha9?=
 =?us-ascii?q?DZ4TAcqYm6hIgn6vbazlcy/mcvUJzYjTHLTnl0m3NOTDgzj+hzok14n1GE16?=
 =?us-ascii?q?Uww/lVDsda6P4BVAAmfYXdwOp3B5G6Wg/IctqTDlfzRNKgDGJ5Vco/lvkJZU?=
 =?us-ascii?q?s1ANC+llbbxSP/AbAPkKeQLIcp6a+a1H/0dI5m03iT7KA6lBE9R9dXc2ivh6?=
 =?us-ascii?q?px7Q/WUozGiUKCjI6xaL8RmiXK8TTL1nKA6WdfVgM4SqDZRTYfa0/R+Mz+/V?=
 =?us-ascii?q?/HRqSyBK4PNxsbj9WfMaYMZtCw1wd8SfzuOcrTbyeKo0nqVUfa4LSKYcKqdn?=
 =?us-ascii?q?4Uh2PdAxNfzFhWoybANBA+AzfnqGXbXnRoEhr0bkXg/PMbyjvzR1IozwyMc0?=
 =?us-ascii?q?xq1qaksh8Ti/uGTvoP37UC8C4/ojRwFVy50prYEd2F7wZmeaxdZ5s67jIlnS?=
 =?us-ascii?q?rbuBZ6JYeINL15ixgVdAE290Li2hNrC5lRxNAwpSBixw5zJKSElVJZImrJgN?=
 =?us-ascii?q?apYeCRcDOquknyOMu0khnE3d2b+7kC8qE1sFy4+gGiTRF9qzA5g59Uy3uZ9t?=
 =?us-ascii?q?PBCw9BNPC5Gksx6RV+oKnXJycn4IaBn39jK66vqRfZxskkQuAiz1zzN8caK6?=
 =?us-ascii?q?6CGALoRocYCNKpM/cCgEWyY1QPO+UYp8tWd4u2MvCB3qCsJuNpmjmr2H9G7I?=
 =?us-ascii?q?5K2UWJ7yNgS+TM0s9fkcuV1QaGSTrwyWyZnJCswNJ8bCoJViqy0iG/Qo5aPf?=
 =?us-ascii?q?YtJcNVWSGvO82y1pN1gJu/E3Jf8VeiARsB1qrLMVKQYEb6xhZ4zlkMrDqsni?=
 =?us-ascii?q?7wwzFvkj4vp7aSx2SXkrmkLUFafDQRHC870B/lOsCsgsofXVS0YgRM9lPt/k?=
 =?us-ascii?q?v8y6VB5ex+I2TVXUZUbn3zJmBmXLG3s+nKaMpO5ZU09CRPBb3tPBbDEeO7+U?=
 =?us-ascii?q?FClXq+TA48jHghejqnu4v0hUl3k2vGanZ49yGGIYQulVHe/NzZVbha2T9VIU?=
 =?us-ascii?q?swwTTRGFW4OMGkuNuOkJKW+OKxTWW6TbVIbDLmi4iHsWHorX0vGhC5k/2pz5?=
 =?us-ascii?q?flHhI9yjTTzMRxWGPDqxO2MeyJn+yqdOlgeEdvHlr17cF3T5p/noUHj5YVwX?=
 =?us-ascii?q?EGh5+R8ClPgSLpPN5cw667cGsVSGtB3YvO+Aa8kh4GTDrB18fjW36a2Mcke9?=
 =?us-ascii?q?SqfjZcxHcm984TQKaMsO4dxXAz8xzp80SJJqIn1jYFlal3sSFc2rtR/lNziH?=
 =?us-ascii?q?3aWOl3fwEQPDSwxU3Tt5bj8+MPIj7oKOb41VIiz4n6U/fc/0cEHi6+I8lqHD?=
 =?us-ascii?q?csvJwjaxSVgCO1s9i1PoGXN4521FXckg+c3bEJd9Rr0KJM3W0+Zyr8pSF3kr?=
 =?us-ascii?q?Jqy0Aym8n85M/edS1s5P7rW0ACcGepNoVJo2qr1PgC+6Tel4G3Qsc4RGlNDc?=
 =?us-ascii?q?quFbTwV2tM/fX/a1TXTmB68yfDX+ODTEnGsBwurmqTQcn0bDfNfCJfl407Ak?=
 =?us-ascii?q?DFfyk9yEgVRGlox89iUFnylYq7LRs/vndLvRb5skcek7M0cUCnDCGF/l/uMn?=
 =?us-ascii?q?BuE/39ZFJX9l0QvhaJd5XHsKQqTnEeptr790SMMjDJPlUVSzhSBwreXQilZv?=
 =?us-ascii?q?70vryiu6CZHrbsdqqTJ+/S9KoEDK/OndX1icNn52reb5TReCA+SaRkggwbGi?=
 =?us-ascii?q?oiU8XBx2dUF2pNyniLNZTF4kbnn08/5sG5+/D2VA+9/pOBUv1UNsti/xTwiq?=
 =?us-ascii?q?CGf++WjyJkJTsKzYsCnznOz6YS2FpUgCZrElvlWbUGvirQQK+CgbdZVFgdbD?=
 =?us-ascii?q?1+MM8O5KU5lghLMsrGht6nzaZ2yOYvDEtIXkDgncfvYtEWJ2a6NxXMA0PuVv?=
 =?us-ascii?q?zOYDTPx4uuO/GGRLZdjftZu1iLgRjASxCxGDOFmnGpWgumaqdMh3rAY0Qb5t?=
 =?us-ascii?q?H7cw5tDHilR9XjOFW9N5dsgDs6zKdR5DuCPHMAMTV6b0JGr6GBpSJejPJlHm?=
 =?us-ascii?q?Vd731jZeCakied5uPcJ94Yq/xuSihzkutb5jw9xd43pGlcQ+dpnSLJstN0i1?=
 =?us-ascii?q?S2yK+U1yFqFhZD635KiI+NoUR+KPDZ+51HChOmtFoG6WSdDQhPpsMwU4yy/f?=
 =?us-ascii?q?kLl56Vzvq1cWoRu8jZ9sYdGcXOfc+cOSFnMRGyQ2GMSVVVCz+zNWTPwUdala?=
 =?us-ascii?q?L3lDXdo54kp5zrgJdLRKVcUQl/FPwADV9+NMcfO5oxVTQh2+3+7oZA9T+loR?=
 =?us-ascii?q?/dSd8P9IjATe6XCO7zJSyxiKkeIQAV2r6+IYlZZeiZkwRyL1J9morNAU/ZW9?=
 =?us-ascii?q?tA9zZgYgEDq0JI6HFiT2c31hu4ODPo22caELuPpjBzigZ6Zr12pjL8uREsO0?=
 =?us-ascii?q?HH4ic9whFoyIfVxAuJeTu0F5+eGIRfCi76rU80a8upWBtuYEu5mkk2bW6YFY?=
 =?us-ascii?q?IUtKNpcCVQsCGZoYFGQKYOVrBfbVkbwvTFP/g=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0FQAQD62Pldhg/ch8NlHAEBAQEBBwE?=
 =?us-ascii?q?BEQEEBAEBgX6BdCeBRiMECyqTL51ACQQBAQsvAQEBhD8CghocBwEENBMCAwE?=
 =?us-ascii?q?MAQEBAwEBAQIBAgMCAgEBAhABAQEKCQsIKYVKgjspAYNPAgEDEggNUhA/Elc?=
 =?us-ascii?q?ZIoI1S4J8oSU9AiMBTIEEin4ziQ+BSIE2hz+EWRqBQT+BEYNRHIobBI0wDgS?=
 =?us-ascii?q?COJ8Pgj6WBgwbgkOMDot9kBSZMIFpgXszGggbFYMnUBEUjR4OCY4kQDOPJQE?=
 =?us-ascii?q?B?=
X-IPAS-Result: =?us-ascii?q?A0FQAQD62Pldhg/ch8NlHAEBAQEBBwEBEQEEBAEBgX6Bd?=
 =?us-ascii?q?CeBRiMECyqTL51ACQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwEBAQIBA?=
 =?us-ascii?q?gMCAgEBAhABAQEKCQsIKYVKgjspAYNPAgEDEggNUhA/ElcZIoI1S4J8oSU9A?=
 =?us-ascii?q?iMBTIEEin4ziQ+BSIE2hz+EWRqBQT+BEYNRHIobBI0wDgSCOJ8Pgj6WBgwbg?=
 =?us-ascii?q?kOMDot9kBSZMIFpgXszGggbFYMnUBEUjR4OCY4kQDOPJQEB?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="9853020"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDFS7cs8gMAjx8ER8/f7X6Sl0l/lSUPlWzV81g?=
 =?us-ascii?q?RQE0jdHBCE9HKG0RcCjmpfPyKSgwh6q6uH5L+MAvJiWhVHSwAvq+2gyj?=
 =?us-ascii?q?DVGMllpbI38KILHkB7qcNml0nNjZWESnxbYXieyogkU2Ckp45y6/9xT6?=
 =?us-ascii?q?6M2d9h6Put1XZQJ5OsL4txxA=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa3.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:06 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 62ED1AC71;
	Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien@xen.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>, "Dario
 Faggioli" <dfaggioli@suse.com>, Josh Whitehead
	<josh.whitehead@dornerworks.com>, Stewart Hildebrand
	<stewart.hildebrand@dornerworks.com>, Meng Xu <mengxu@cis.upenn.edu>
Subject: [PATCH 2/9] xen/sched: make sched-if.h really scheduler private
Date: Wed, 18 Dec 2019 08:48:52 +0100
Message-ID: <20191218074859.21665-3-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: b7093713-9ba2-4bda-201a-08d7838ec34f
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: MIAPEX02MSOL01.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

include/xen/sched-if.h should be private to scheduler code, so move it
to common/sched/sched-if.h and move the remaining use cases to
cpupool.c and schedule.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/dom0_build.c                    |   5 +-
 xen/common/domain.c                          |  70 ----------
 xen/common/domctl.c                          | 135 +------------------
 xen/common/sched/cpupool.c                   |  13 +-
 xen/{include/xen => common/sched}/sched-if.h |   3 -
 xen/common/sched/sched_arinc653.c            |   3 +-
 xen/common/sched/sched_credit.c              |   2 +-
 xen/common/sched/sched_credit2.c             |   3 +-
 xen/common/sched/sched_null.c                |   3 +-
 xen/common/sched/sched_rt.c                  |   3 +-
 xen/common/sched/schedule.c                  | 191 ++++++++++++++++++++++++++-
 xen/include/xen/domain.h                     |   3 +
 xen/include/xen/sched.h                      |   7 +
 13 files changed, 228 insertions(+), 213 deletions(-)
 rename xen/{include/xen => common/sched}/sched-if.h (99%)

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 28b964e018..56c2dee0fc 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -9,7 +9,6 @@
 #include <xen/libelf.h>
 #include <xen/pfn.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 
 #include <asm/amd.h>
@@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void)
         dom0_nodes = node_online_map;
     for_each_node_mask ( node, dom0_nodes )
         cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node));
-    cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid);
+    cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0));
     if ( cpumask_empty(&dom0_cpus) )
-        cpumask_copy(&dom0_cpus, cpupool0->cpu_valid);
+        cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0));
 
     max_vcpus = cpumask_weight(&dom0_cpus);
     if ( opt_dom0_max_vcpus_min > max_vcpus )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 611116c7fc..f4f0a66262 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -10,7 +10,6 @@
 #include <xen/ctype.h>
 #include <xen/err.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
@@ -565,75 +564,6 @@ void __init setup_system_domains(void)
 #endif
 }
 
-void domain_update_node_affinity(struct domain *d)
-{
-    cpumask_var_t dom_cpumask, dom_cpumask_soft;
-    cpumask_t *dom_affinity;
-    const cpumask_t *online;
-    struct sched_unit *unit;
-    unsigned int cpu;
-
-    /* Do we have vcpus already? If not, no need to update node-affinity. */
-    if ( !d->vcpu || !d->vcpu[0] )
-        return;
-
-    if ( !zalloc_cpumask_var(&dom_cpumask) )
-        return;
-    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-    {
-        free_cpumask_var(dom_cpumask);
-        return;
-    }
-
-    online = cpupool_domain_master_cpumask(d);
-
-    spin_lock(&d->node_affinity_lock);
-
-    /*
-     * If d->auto_node_affinity is true, let's compute the domain's
-     * node-affinity and update d->node_affinity accordingly. if false,
-     * just leave d->auto_node_affinity alone.
-     */
-    if ( d->auto_node_affinity )
-    {
-        /*
-         * We want the narrowest possible set of pcpus (to get the narowest
-         * possible set of nodes). What we need is the cpumask of where the
-         * domain can run (the union of the hard affinity of all its vcpus),
-         * and the full mask of where it would prefer to run (the union of
-         * the soft affinity of all its various vcpus). Let's build them.
-         */
-        for_each_sched_unit ( d, unit )
-        {
-            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-                       unit->cpu_soft_affinity);
-        }
-        /* Filter out non-online cpus */
-        cpumask_and(dom_cpumask, dom_cpumask, online);
-        ASSERT(!cpumask_empty(dom_cpumask));
-        /* And compute the intersection between hard, online and soft */
-        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
-
-        /*
-         * If not empty, the intersection of hard, soft and online is the
-         * narrowest set we want. If empty, we fall back to hard&online.
-         */
-        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-                           dom_cpumask : dom_cpumask_soft;
-
-        nodes_clear(d->node_affinity);
-        for_each_cpu ( cpu, dom_affinity )
-            node_set(cpu_to_node(cpu), d->node_affinity);
-    }
-
-    spin_unlock(&d->node_affinity_lock);
-
-    free_cpumask_var(dom_cpumask_soft);
-    free_cpumask_var(dom_cpumask);
-}
-
-
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
 {
     /* Being disjoint with the system is just wrong. */
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 03d0226039..3407db44fd 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -11,7 +11,6 @@
 #include <xen/err.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/domain.h>
 #include <xen/event.h>
 #include <xen/grant_table.h>
@@ -65,9 +64,9 @@ static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
     return err;
 }
 
-static int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
-                                   const struct xenctl_bitmap *xenctl_bitmap,
-                                   unsigned int nbits)
+int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
+                            const struct xenctl_bitmap *xenctl_bitmap,
+                            unsigned int nbits)
 {
     unsigned int guest_bytes, copy_bytes;
     int err = 0;
@@ -200,7 +199,7 @@ void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info)
     info->shared_info_frame = mfn_to_gmfn(d, virt_to_mfn(d->shared_info));
     BUG_ON(SHARED_M2P(info->shared_info_frame));
 
-    info->cpupool = d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
+    info->cpupool = cpupool_get_id(d);
 
     memcpy(info->handle, d->handle, sizeof(xen_domain_handle_t));
 
@@ -234,16 +233,6 @@ void domctl_lock_release(void)
     spin_unlock(&current->domain->hypercall_deadlock_mutex);
 }
 
-static inline
-int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
-{
-    return vcpuaff->flags == 0 ||
-           ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
-            guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
-           ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
-            guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
-}
-
 void vnuma_destroy(struct vnuma_info *vnuma)
 {
     if ( vnuma )
@@ -608,122 +597,8 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
     case XEN_DOMCTL_setvcpuaffinity:
     case XEN_DOMCTL_getvcpuaffinity:
-    {
-        struct vcpu *v;
-        const struct sched_unit *unit;
-        struct xen_domctl_vcpuaffinity *vcpuaff = &op->u.vcpuaffinity;
-
-        ret = -EINVAL;
-        if ( vcpuaff->vcpu >= d->max_vcpus )
-            break;
-
-        ret = -ESRCH;
-        if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
-            break;
-
-        unit = v->sched_unit;
-        ret = -EINVAL;
-        if ( vcpuaffinity_params_invalid(vcpuaff) )
-            break;
-
-        if ( op->cmd == XEN_DOMCTL_setvcpuaffinity )
-        {
-            cpumask_var_t new_affinity, old_affinity;
-            cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
-
-            /*
-             * We want to be able to restore hard affinity if we are trying
-             * setting both and changing soft affinity (which happens later,
-             * when hard affinity has been succesfully chaged already) fails.
-             */
-            if ( !alloc_cpumask_var(&old_affinity) )
-            {
-                ret = -ENOMEM;
-                break;
-            }
-            cpumask_copy(old_affinity, unit->cpu_hard_affinity);
-
-            if ( !alloc_cpumask_var(&new_affinity) )
-            {
-                free_cpumask_var(old_affinity);
-                ret = -ENOMEM;
-                break;
-            }
-
-            /* Undo a stuck SCHED_pin_override? */
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE )
-                vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
-
-            ret = 0;
-
-            /*
-             * We both set a new affinity and report back to the caller what
-             * the scheduler will be effectively using.
-             */
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
-            {
-                ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
-                                              &vcpuaff->cpumap_hard,
-                                              nr_cpu_ids);
-                if ( !ret )
-                    ret = vcpu_set_hard_affinity(v, new_affinity);
-                if ( ret )
-                    goto setvcpuaffinity_out;
-
-                /*
-                 * For hard affinity, what we return is the intersection of
-                 * cpupool's online mask and the new hard affinity.
-                 */
-                cpumask_and(new_affinity, online, unit->cpu_hard_affinity);
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
-                                               new_affinity);
-            }
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
-            {
-                ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
-                                              &vcpuaff->cpumap_soft,
-                                              nr_cpu_ids);
-                if ( !ret)
-                    ret = vcpu_set_soft_affinity(v, new_affinity);
-                if ( ret )
-                {
-                    /*
-                     * Since we're returning error, the caller expects nothing
-                     * happened, so we rollback the changes to hard affinity
-                     * (if any).
-                     */
-                    if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
-                        vcpu_set_hard_affinity(v, old_affinity);
-                    goto setvcpuaffinity_out;
-                }
-
-                /*
-                 * For soft affinity, we return the intersection between the
-                 * new soft affinity, the cpupool's online map and the (new)
-                 * hard affinity.
-                 */
-                cpumask_and(new_affinity, new_affinity, online);
-                cpumask_and(new_affinity, new_affinity,
-                            unit->cpu_hard_affinity);
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
-                                               new_affinity);
-            }
-
- setvcpuaffinity_out:
-            free_cpumask_var(new_affinity);
-            free_cpumask_var(old_affinity);
-        }
-        else
-        {
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
-                                               unit->cpu_hard_affinity);
-            if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
-                ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
-                                               unit->cpu_soft_affinity);
-        }
+        ret = vcpu_affinity_domctl(d, op->cmd, &op->u.vcpuaffinity);
         break;
-    }
 
     case XEN_DOMCTL_scheduler_op:
         ret = sched_adjust(d, &op->u.scheduler_op);
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 4d3adbdd8d..d5b64d0a6a 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -16,11 +16,12 @@
 #include <xen/cpumask.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/warning.h>
 #include <xen/keyhandler.h>
 #include <xen/cpu.h>
 
+#include "sched-if.h"
+
 #define for_each_cpupool(ptr)    \
     for ((ptr) = &cpupool_list; *(ptr) != NULL; (ptr) = &((*(ptr))->next))
 
@@ -876,6 +877,16 @@ int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op)
     return ret;
 }
 
+int cpupool_get_id(const struct domain *d)
+{
+    return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
+}
+
+cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
+{
+    return pool->cpu_valid;
+}
+
 void dump_runq(unsigned char key)
 {
     unsigned long    flags;
diff --git a/xen/include/xen/sched-if.h b/xen/common/sched/sched-if.h
similarity index 99%
rename from xen/include/xen/sched-if.h
rename to xen/common/sched/sched-if.h
index b0ac54e63d..a702fd23b1 100644
--- a/xen/include/xen/sched-if.h
+++ b/xen/common/sched/sched-if.h
@@ -12,9 +12,6 @@
 #include <xen/err.h>
 #include <xen/rcupdate.h>
 
-/* A global pointer to the initial cpupool (POOL0). */
-extern struct cpupool *cpupool0;
-
 /* cpus currently in no cpupool */
 extern cpumask_t cpupool_free_cpus;
 
diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c
index 565575c326..fe15754900 100644
--- a/xen/common/sched/sched_arinc653.c
+++ b/xen/common/sched/sched_arinc653.c
@@ -26,7 +26,6 @@
 
 #include <xen/lib.h>
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/timer.h>
 #include <xen/softirq.h>
 #include <xen/time.h>
@@ -35,6 +34,8 @@
 #include <xen/guest_access.h>
 #include <public/sysctl.h>
 
+#include "sched-if.h"
+
 /**************************************************************************
  * Private Macros                                                         *
  **************************************************************************/
diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c
index aa41a3301b..a098ca0f3a 100644
--- a/xen/common/sched/sched_credit.c
+++ b/xen/common/sched/sched_credit.c
@@ -15,7 +15,6 @@
 #include <xen/delay.h>
 #include <xen/event.h>
 #include <xen/time.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <asm/atomic.h>
 #include <asm/div64.h>
@@ -24,6 +23,7 @@
 #include <xen/trace.h>
 #include <xen/err.h>
 
+#include "sched-if.h"
 
 /*
  * Locking:
diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c
index f7c477053c..5bfe1441a2 100644
--- a/xen/common/sched/sched_credit2.c
+++ b/xen/common/sched/sched_credit2.c
@@ -18,7 +18,6 @@
 #include <xen/event.h>
 #include <xen/time.h>
 #include <xen/perfc.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <asm/div64.h>
 #include <xen/errno.h>
@@ -26,6 +25,8 @@
 #include <xen/cpu.h>
 #include <xen/keyhandler.h>
 
+#include "sched-if.h"
+
 /* Meant only for helping developers during debugging. */
 /* #define d2printk printk */
 #define d2printk(x...)
diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c
index 3f3418c9b1..5a23a7e7dc 100644
--- a/xen/common/sched/sched_null.c
+++ b/xen/common/sched/sched_null.c
@@ -29,10 +29,11 @@
  */
 
 #include <xen/sched.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <xen/trace.h>
 
+#include "sched-if.h"
+
 /*
  * null tracing events. Check include/public/trace.h for more details.
  */
diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index b2b29481f3..379b56bc2a 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -20,7 +20,6 @@
 #include <xen/time.h>
 #include <xen/timer.h>
 #include <xen/perfc.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <asm/atomic.h>
 #include <xen/errno.h>
@@ -31,6 +30,8 @@
 #include <xen/err.h>
 #include <xen/guest_access.h>
 
+#include "sched-if.h"
+
 /*
  * TODO:
  *
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index a550dd8f93..c751faa741 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -23,7 +23,6 @@
 #include <xen/time.h>
 #include <xen/timer.h>
 #include <xen/perfc.h>
-#include <xen/sched-if.h>
 #include <xen/softirq.h>
 #include <xen/trace.h>
 #include <xen/mm.h>
@@ -38,6 +37,8 @@
 #include <xsm/xsm.h>
 #include <xen/err.h>
 
+#include "sched-if.h"
+
 #ifdef CONFIG_XEN_GUEST
 #include <asm/guest.h>
 #else
@@ -1607,6 +1608,194 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason)
     return ret;
 }
 
+static inline
+int vcpuaffinity_params_invalid(const struct xen_domctl_vcpuaffinity *vcpuaff)
+{
+    return vcpuaff->flags == 0 ||
+           ((vcpuaff->flags & XEN_VCPUAFFINITY_HARD) &&
+            guest_handle_is_null(vcpuaff->cpumap_hard.bitmap)) ||
+           ((vcpuaff->flags & XEN_VCPUAFFINITY_SOFT) &&
+            guest_handle_is_null(vcpuaff->cpumap_soft.bitmap));
+}
+
+int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
+                         struct xen_domctl_vcpuaffinity *vcpuaff)
+{
+    struct vcpu *v;
+    const struct sched_unit *unit;
+    int ret = 0;
+
+    if ( vcpuaff->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    if ( (v = d->vcpu[vcpuaff->vcpu]) == NULL )
+        return -ESRCH;
+
+    if ( vcpuaffinity_params_invalid(vcpuaff) )
+        return -EINVAL;
+
+    unit = v->sched_unit;
+
+    if ( cmd == XEN_DOMCTL_setvcpuaffinity )
+    {
+        cpumask_var_t new_affinity, old_affinity;
+        cpumask_t *online = cpupool_domain_master_cpumask(v->domain);
+
+        /*
+         * We want to be able to restore hard affinity if we are trying
+         * setting both and changing soft affinity (which happens later,
+         * when hard affinity has been succesfully chaged already) fails.
+         */
+        if ( !alloc_cpumask_var(&old_affinity) )
+            return -ENOMEM;
+
+        cpumask_copy(old_affinity, unit->cpu_hard_affinity);
+
+        if ( !alloc_cpumask_var(&new_affinity) )
+        {
+            free_cpumask_var(old_affinity);
+            return -ENOMEM;
+        }
+
+        /* Undo a stuck SCHED_pin_override? */
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_FORCE )
+            vcpu_temporary_affinity(v, NR_CPUS, VCPU_AFFINITY_OVERRIDE);
+
+        ret = 0;
+
+        /*
+         * We both set a new affinity and report back to the caller what
+         * the scheduler will be effectively using.
+         */
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+        {
+            ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
+                                          &vcpuaff->cpumap_hard, nr_cpu_ids);
+            if ( !ret )
+                ret = vcpu_set_hard_affinity(v, new_affinity);
+            if ( ret )
+                goto setvcpuaffinity_out;
+
+            /*
+             * For hard affinity, what we return is the intersection of
+             * cpupool's online mask and the new hard affinity.
+             */
+            cpumask_and(new_affinity, online, unit->cpu_hard_affinity);
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard, new_affinity);
+        }
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
+        {
+            ret = xenctl_bitmap_to_bitmap(cpumask_bits(new_affinity),
+                                          &vcpuaff->cpumap_soft, nr_cpu_ids);
+            if ( !ret)
+                ret = vcpu_set_soft_affinity(v, new_affinity);
+            if ( ret )
+            {
+                /*
+                 * Since we're returning error, the caller expects nothing
+                 * happened, so we rollback the changes to hard affinity
+                 * (if any).
+                 */
+                if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+                    vcpu_set_hard_affinity(v, old_affinity);
+                goto setvcpuaffinity_out;
+            }
+
+            /*
+             * For soft affinity, we return the intersection between the
+             * new soft affinity, the cpupool's online map and the (new)
+             * hard affinity.
+             */
+            cpumask_and(new_affinity, new_affinity, online);
+            cpumask_and(new_affinity, new_affinity, unit->cpu_hard_affinity);
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft, new_affinity);
+        }
+
+ setvcpuaffinity_out:
+        free_cpumask_var(new_affinity);
+        free_cpumask_var(old_affinity);
+    }
+    else
+    {
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_HARD )
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_hard,
+                                           unit->cpu_hard_affinity);
+        if ( vcpuaff->flags & XEN_VCPUAFFINITY_SOFT )
+            ret = cpumask_to_xenctl_bitmap(&vcpuaff->cpumap_soft,
+                                           unit->cpu_soft_affinity);
+    }
+
+    return ret;
+}
+
+void domain_update_node_affinity(struct domain *d)
+{
+    cpumask_var_t dom_cpumask, dom_cpumask_soft;
+    cpumask_t *dom_affinity;
+    const cpumask_t *online;
+    struct sched_unit *unit;
+    unsigned int cpu;
+
+    /* Do we have vcpus already? If not, no need to update node-affinity. */
+    if ( !d->vcpu || !d->vcpu[0] )
+        return;
+
+    if ( !zalloc_cpumask_var(&dom_cpumask) )
+        return;
+    if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
+    {
+        free_cpumask_var(dom_cpumask);
+        return;
+    }
+
+    online = cpupool_domain_master_cpumask(d);
+
+    spin_lock(&d->node_affinity_lock);
+
+    /*
+     * If d->auto_node_affinity is true, let's compute the domain's
+     * node-affinity and update d->node_affinity accordingly. if false,
+     * just leave d->auto_node_affinity alone.
+     */
+    if ( d->auto_node_affinity )
+    {
+        /*
+         * We want the narrowest possible set of pcpus (to get the narowest
+         * possible set of nodes). What we need is the cpumask of where the
+         * domain can run (the union of the hard affinity of all its vcpus),
+         * and the full mask of where it would prefer to run (the union of
+         * the soft affinity of all its various vcpus). Let's build them.
+         */
+        for_each_sched_unit ( d, unit )
+        {
+            cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
+            cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
+                       unit->cpu_soft_affinity);
+        }
+        /* Filter out non-online cpus */
+        cpumask_and(dom_cpumask, dom_cpumask, online);
+        ASSERT(!cpumask_empty(dom_cpumask));
+        /* And compute the intersection between hard, online and soft */
+        cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
+
+        /*
+         * If not empty, the intersection of hard, soft and online is the
+         * narrowest set we want. If empty, we fall back to hard&online.
+         */
+        dom_affinity = cpumask_empty(dom_cpumask_soft) ?
+                           dom_cpumask : dom_cpumask_soft;
+
+        nodes_clear(d->node_affinity);
+        for_each_cpu ( cpu, dom_affinity )
+            node_set(cpu_to_node(cpu), d->node_affinity);
+    }
+
+    spin_unlock(&d->node_affinity_lock);
+
+    free_cpumask_var(dom_cpumask_soft);
+    free_cpumask_var(dom_cpumask);
+}
+
 typedef long ret_t;
 
 #endif /* !COMPAT */
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 769302057b..c931eab4a9 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -27,6 +27,9 @@ struct xen_domctl_getdomaininfo;
 void getdomaininfo(struct domain *d, struct xen_domctl_getdomaininfo *info);
 void arch_get_domain_info(const struct domain *d,
                           struct xen_domctl_getdomaininfo *info);
+int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
+                            const struct xenctl_bitmap *xenctl_bitmap,
+                            unsigned int nbits);
 
 /*
  * Arch-specifics.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 9f7bc69293..2507a833c2 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -50,6 +50,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
 /* A global pointer to the hardware domain (usually DOM0). */
 extern struct domain *hardware_domain;
 
+/* A global pointer to the initial cpupool (POOL0). */
+extern struct cpupool *cpupool0;
+
 #ifdef CONFIG_LATE_HWDOM
 extern domid_t hardware_domid;
 #else
@@ -929,6 +932,8 @@ int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
 int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
+int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
+                         struct xen_domctl_vcpuaffinity *vcpuaff);
 
 void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
@@ -1054,6 +1059,8 @@ int cpupool_add_domain(struct domain *d, int poolid);
 void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
+int cpupool_get_id(const struct domain *d);
+cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
 void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
 
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:09 +0100
Received: from MIAPEX02MSOL02.citrite.net (10.52.109.12) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:07 -0500
Received: from esa6.hc3370-68.iphmx.com (10.9.154.239) by
 MIAPEX02MSOL02.citrite.net (10.52.109.12) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Wed, 18 Dec 2019 02:49:07 -0500
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa6.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: 5AHlZpcuN0oQE0jWt7j2JFvOxLrEq5aX3qsccfGzRXuTwBDz5xHi+uW8InO8uDFZf42rcLjJma
 N37Z/hooXVWkhTipsh/omyxNt/93alSGTCYTWq9QeyApFh7ryMrYdvdyz9+lXK8fHU+ASlAaUt
 Np3RFPHimPHHaFExayNaM2zNweevNVUBE8rHbKa1Huue6owMB1JaS71dTARLbMoVcaAfpz4IP7
 jDOxodkWVigR99K44Uq5LkneZQhRF54KoSBsSP07z6fNoNcGK6VrCK2h203llZ2VrS5hBGL/sF
 p+rG4VvkiaGSoteiAWDmRPDZ
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 10279307
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 10279307
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3A7yFWAxRDEMJUtR9nSoVBKkpyLdpsv+yvbD5Q0Y?=
 =?us-ascii?q?Iujvd0So/mwa6yZhCN2/xhgRfzUJnB7Loc0qyK6vumAzRQqs/Y6zgrS99lb1?=
 =?us-ascii?q?c9k8IYnggtUoauKHbQC7rUVRE8B9lIT1R//nu2YgB/Ecf6YEDO8DXptWZBUh?=
 =?us-ascii?q?rwOhBoKevrB4Xck9q41/yo+53Ufg5EmCexbal9IRmrowjdrNcajZdhJ6o+1x?=
 =?us-ascii?q?fFv3VFcPlKyG11Il6egwzy7dqq8p559CRQtfMh98peXqj/Yq81U79WAik4Pm?=
 =?us-ascii?q?4s/MHkugXNQgWJ5nsHT2UZiQFIDBTf7BH7RZj+rC33vfdg1SaAPM32Sbc0WS?=
 =?us-ascii?q?m+76puVRTlhjsLOyI//WrKkcF7kr5Vrwy9qBx+247UYZ+aNPxifqPGYNgWQX?=
 =?us-ascii?q?NNUttNWyBdB4+xaZYEAegcMuZCt4Tzp0UAowawCwevA+3gyDFIi2Tq0aEmye?=
 =?us-ascii?q?ktDRvL0BA8E98IrX/arM/1NKAXUe2tyKfI0CvMb+lZ2Tjj7ojDbxEvoeuLXb?=
 =?us-ascii?q?Jrasra1E4iFwHKjlWKrozlJCiV2/8Ws2iG9OpvS/ijhHIgqwF0uzWiwNonhI?=
 =?us-ascii?q?rRho8N11zJ8SV0zJwoKdC2SEN3e8CoHIVMuy2AKod7QtsuT3xstSs60LEKpJ?=
 =?us-ascii?q?C2cSgQxJg52RLTd+aLf5aI7x/sUuuaPC12i2h/eL2lgha/6UigxfP4VsmzyF?=
 =?us-ascii?q?tKqyVEnsfWunAWyhzT8daIRuFg8Ui/wTqP1gbT5f9YIU0siKbWL54szqQtmp?=
 =?us-ascii?q?cdsUnPBDH6lFvqgKOMa0kp+vCk6+H9bbXnop+cOZV0igb7Mqk2mMy/Dv44Mg?=
 =?us-ascii?q?8VX2iA4um8z6Dj/VbnT7lQkvI2lazZvIjAJcsHvq65HxNV0oE75hawETim18?=
 =?us-ascii?q?4YnXYZI15fZR2Hko7pO1XUL/DgFvqwnUmsnC13yPDHIr3hGJTNL3fZnLj9er?=
 =?us-ascii?q?Z97lZWyBAvwtBH+5JUFrYBLeryWkDrstzUFB05PBaozObkE9V90YUeVHmRDa?=
 =?us-ascii?q?+EKq/drV6I5v41I+mNa44ZoiryK/8g562msXhsiVIbOKWkw5YTQHS5Beh9ZV?=
 =?us-ascii?q?WUZ2L2hdUMGntMuRAxH8Lwj1jXcyNefXm/WeoT/DYgE8ryD4jZQZu2qKecxy?=
 =?us-ascii?q?r9FZpTMDMVQmuQGGvlIt3XE8wHbzifd4o4ymReD+qIVpMh2BeytQTz17tgKK?=
 =?us-ascii?q?/u9zYFsY74jYglteDInE909TcvVJvFlj/dCWBsnmYYATQx2fM3rU98zwKF1q?=
 =?us-ascii?q?51y7xdGMdI7vxEGgE9KdbHzuN8BtyzEgLMd9uEUhCnF9OhBzxiBskpzYooZE?=
 =?us-ascii?q?BwU86nkgiFxzCjVr0ajbuQH7Qv77nRmXP2IpU10G7IgZEolEJuWc5TLSujj6?=
 =?us-ascii?q?97+RLUAtvLnF+ejL2CbrkH0WjG82LQhXGWshR+Vwh9Gb7AQWhZZkbSqoHh4V?=
 =?us-ascii?q?jeSra1Fbk9Gg5Rk4iZN7BHLNHk1A8UfvroNdXAbm70oF+eX0bZlJWLaoeiO2?=
 =?us-ascii?q?gG1XubCENfyF9LuCjWcwkmBiKx5WnZCW4mE1WneE7q/eRkzRHzBkYp0wGHaV?=
 =?us-ascii?q?Fg3Lup61YUg/KbUfYawrMDvm8otTx1GF+329+eBcCHokJte6BVYNV151kity?=
 =?us-ascii?q?qRtQNnOYe7B7t/nVNYeANy/gvv2xhxFoRcgJ0ytnp5hAF2KK+ezBZAb2bBjM?=
 =?us-ascii?q?22Y+aKbDOopFb2Nv2zuBmWytud96YR5e5tpk7q4kepHRF5rC0ijYkT0mOc44?=
 =?us-ascii?q?WMBw0XAveTGg468QZ3o7bCb2wz/YTRgDduPrOzqSTqwM8yCa0uzRPqLJ9Pdb?=
 =?us-ascii?q?iJEgP/CZhQBce0L/cxs0O0dR9CN+dXvv1RXYvuZ76N36ilO/xllTStgDFc4Y?=
 =?us-ascii?q?xz5UmL8jJ1Vu/C25tcmaOi0wCKVinxgBKaiu6nwtkWXTgJBSL/xDPtWshRbf?=
 =?us-ascii?q?YpIt5OVzfoItW3w8U4jJnoCTZU81uqBlVO38HMG1LabVPn2hZL/V8KunHhki?=
 =?us-ascii?q?y9hzB5iDAmqKOD0TeGmrW/MkNcZygRHjEk1AqkKJP8l90AWUm0cwUl8XntrV?=
 =?us-ascii?q?33waRWvuU3LmXeR1tJYzmjKmhjVqWqsb/RK8VL6Z4urWBWSLHlMA3cE+Wg5U?=
 =?us-ascii?q?FAi2W6Qjg7pnhzbTyht5TnkgYvhXmUcjBzpyGCJpk1m0eZ5cTcQO4X1T0DF0?=
 =?us-ascii?q?wawXHaAEaxO96x8JCajZDG56q8WHimTYZ7ajTwwMWLsy7xtiV6RAaymfy+gI?=
 =?us-ascii?q?itEwcg1jTg/8J3TiiOpxH5KNqOtezyIadseU9mA0X54sxxF9Rlk4c+s5oX3G?=
 =?us-ascii?q?ATmpSf+XdU2Xe2K9hQ3rjyKWYcXTNeicCA+xDrgQcwSxDBj5K8THiWxdFtIs?=
 =?us-ascii?q?W3cn9DkDxo9NhEUe+V9OAWxHMu5Av+/FyXOb8nxH8c0ad8sSdFxbhT4Ex0kG?=
 =?us-ascii?q?PFRepLeCsQdS30y0bRtIj48foRPCD3NuHunEtmwYL4V+7E/FwaADChPc58VS?=
 =?us-ascii?q?5ospcmbAmKiS2trNi9KZ+IKohP03/c2xbY07oMdM13zLxSw3IgYDy1vGV5mb?=
 =?us-ascii?q?dn3Foygdfi+tDAcjsl/brlUEcGbHuvO5JVoGi2y/8AwqP0l8iuBskzQ2tXGs?=
 =?us-ascii?q?q5C6ruSHVL67zmL1rcSWNh7C7LQfyGQV7ZsR8953PXT8LyaCrRfiFCi486HV?=
 =?us-ascii?q?/EfBc64khcXS1mzMdoTUbznJ2nKholoGhWvAWwqwMQmLs5a1+mCjmZ/l3wLG?=
 =?us-ascii?q?9zEsP6TlIe7xketR6EbYrHt7w1RnAIuMXm9lfFK3THNV0ZUCdQARHCXQq7eO?=
 =?us-ascii?q?Hpv4KlkaDQB/LifaKfPPPU9bEYDazQg8roiNAu/i7QZJzeYD87U7tihRIFBT?=
 =?us-ascii?q?cjQIzYg2ldEHBRzX+XKZTB+lHtoUgV5oi+9v/vRQ7it5CXBeIUN9Jx9hSyx6?=
 =?us-ascii?q?yEMqaRgi19NDpVhIgUyyWOzr8B0VoWzSppclzPWfwBsyXJUa7dyLRPAUVdbS?=
 =?us-ascii?q?RtOc9Mqak720FMP8Xfl9/4hKVggLgtEVBZWFf9m8avI8sXP2W6M1CBD0GOUd?=
 =?us-ascii?q?bObXjCx8W9OPnuc7BbgeRKuhH1gg61SBe8bBKEkTShFx20OL8KjCrAZ0MG/d?=
 =?us-ascii?q?/tNBd1CW3zCtnhb0/zNtg/ljAwzbAu4xGCfWcBLThxdV9MpbyM/GtZhPt4AW?=
 =?us-ascii?q?lI8ntiK6GNhS+Y6+DSLptev+FsB2x4kOdT4XJyzLUwjmkMXPtuhC7btcJjuX?=
 =?us-ascii?q?mjgrPJ0Sd8XVxCp3cDhY6Gu1ljJbSM9pREXiWhnlpF5mGRBhIW4tp9X4S+5u?=
 =?us-ascii?q?YJkYWJzvirbm4foJrO8MARBtbZMpeKK3N/dxriQ2WLVE5bHXiqLWHalwpWl/?=
 =?us-ascii?q?TBkx/d5pU8tJXon4IDD7FBU1lgXPEVEEN+B/QZPYx6GDgjlPTI6axArWr7tx?=
 =?us-ascii?q?TXSMhA69rfUemOBPz0NDuDpbxUPV0T3Kj1a4gefN6euQQqehxxm4LEHFDVVN?=
 =?us-ascii?q?ZGr3h6bwM6l05K9WB3Umw530+8N1GdpUQLHPvxpSYYzwtzZeN0r2Xp8w1xPU?=
 =?us-ascii?q?fRqW08nRtpwIm3sXWqaDf0aZyIc8ROESOt7xoqL4j2BQ1yaF/qxB02BHL/X7?=
 =?us-ascii?q?tUyoBYWyVugQ7Yt4FIHKcEH7ZZexJWzvaSNawl?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0FNAgBM2fldhg/ch8NlHgELHIQZgUY?=
 =?us-ascii?q?jBAsqky+dQAkEAQELLwEBAYQ/AoIaHAcBBDQTAgMBDAEBAQMBAQECAQIDAgI?=
 =?us-ascii?q?BAQIQAQEBCgkLCCmFSoI7KQGDTwIBAxIVUhA/ElcZIoMAgnyhKD0CIwFMgQS?=
 =?us-ascii?q?KfjOJDoFIgTaHP4RZGoFBP4RiijcEjUKCOJ8Pgj6WBgwbjlGLfalEgWmBezM?=
 =?us-ascii?q?aCBsVgydQERSNHg4JjiRAM48lAQE?=
X-IPAS-Result: =?us-ascii?q?A0FNAgBM2fldhg/ch8NlHgELHIQZgUYjBAsqky+dQAkEA?=
 =?us-ascii?q?QELLwEBAYQ/AoIaHAcBBDQTAgMBDAEBAQMBAQECAQIDAgIBAQIQAQEBCgkLC?=
 =?us-ascii?q?CmFSoI7KQGDTwIBAxIVUhA/ElcZIoMAgnyhKD0CIwFMgQSKfjOJDoFIgTaHP?=
 =?us-ascii?q?4RZGoFBP4RiijcEjUKCOJ8Pgj6WBgwbjlGLfalEgWmBezMaCBsVgydQERSNH?=
 =?us-ascii?q?g4JjiRAM48lAQE?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="10279307"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDEwUyy/5Umc1R+ooIoSsUzGk1x7dMdNLSlGey?=
 =?us-ascii?q?ZPLRRTBFGBjo3NF/EwMNqjnIEWErRTRr7xwhuhDBakcvRZxSTQx2Qe8N?=
 =?us-ascii?q?enNFnyNZ91ThSX15Gg7wWu5LPp97RF08RTrSEElo5aaiik8+BTY3VbCp?=
 =?us-ascii?q?IEYaPBkIjZJBhCkMsvDkrEiA=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa6.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:04 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 84EE8AD5F;
	Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Dario Faggioli <dfaggioli@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/9] xen/sched: cleanup sched.h
Date: Wed, 18 Dec 2019 08:48:53 +0100
Message-ID: <20191218074859.21665-4-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: 9438aefa-6e2f-459e-d6f5-08d7838ec312
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: MIAPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

There are some items in include/xen/sched.h which can be moved to
sched-if.h as they are scheduler private.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/sched-if.h | 13 +++++++++++++
 xen/common/sched/schedule.c |  2 +-
 xen/include/xen/sched.h     | 17 -----------------
 3 files changed, 14 insertions(+), 18 deletions(-)

diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h
index a702fd23b1..edce354dc7 100644
--- a/xen/common/sched/sched-if.h
+++ b/xen/common/sched/sched-if.h
@@ -533,6 +533,7 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
 struct cpupool
 {
     int              cpupool_id;
+#define CPUPOOLID_NONE    -1
     unsigned int     n_dom;
     cpumask_var_t    cpu_valid;      /* all cpus assigned to pool */
     cpumask_var_t    res_valid;      /* all scheduling resources of pool */
@@ -618,5 +619,17 @@ affinity_balance_cpumask(const struct sched_unit *unit, int step,
 
 void sched_rm_cpu(unsigned int cpu);
 const cpumask_t *sched_get_opt_cpumask(enum sched_gran opt, unsigned int cpu);
+void schedule_dump(struct cpupool *c);
+struct scheduler *scheduler_get_default(void);
+struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
+void scheduler_free(struct scheduler *sched);
+int cpu_disable_scheduler(unsigned int cpu);
+int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
+int schedule_cpu_rm(unsigned int cpu);
+int sched_move_domain(struct domain *d, struct cpupool *c);
+struct cpupool *cpupool_get_by_id(int poolid);
+void cpupool_put(struct cpupool *pool);
+int cpupool_add_domain(struct domain *d, int poolid);
+void cpupool_rm_domain(struct domain *d);
 
 #endif /* __XEN_SCHED_IF_H__ */
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index c751faa741..db8ce146ca 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -1346,7 +1346,7 @@ int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity)
     return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_hard_affinity);
 }
 
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
+static int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity)
 {
     return vcpu_set_affinity(v, affinity, v->sched_unit->cpu_soft_affinity);
 }
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2507a833c2..55335d6ab3 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -685,7 +685,6 @@ int  sched_init_vcpu(struct vcpu *v);
 void sched_destroy_vcpu(struct vcpu *v);
 int  sched_init_domain(struct domain *d, int poolid);
 void sched_destroy_domain(struct domain *d);
-int sched_move_domain(struct domain *d, struct cpupool *c);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
@@ -918,19 +917,10 @@ static inline bool sched_has_urgent_vcpu(void)
     return atomic_read(&this_cpu(sched_urgent_count));
 }
 
-struct scheduler;
-
-struct scheduler *scheduler_get_default(void);
-struct scheduler *scheduler_alloc(unsigned int sched_id, int *perr);
-void scheduler_free(struct scheduler *sched);
-int schedule_cpu_add(unsigned int cpu, struct cpupool *c);
-int schedule_cpu_rm(unsigned int cpu);
 void vcpu_set_periodic_timer(struct vcpu *v, s_time_t value);
-int cpu_disable_scheduler(unsigned int cpu);
 void sched_setup_dom0_vcpus(struct domain *d);
 int vcpu_temporary_affinity(struct vcpu *v, unsigned int cpu, uint8_t reason);
 int vcpu_set_hard_affinity(struct vcpu *v, const cpumask_t *affinity);
-int vcpu_set_soft_affinity(struct vcpu *v, const cpumask_t *affinity);
 void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
@@ -1051,17 +1041,10 @@ extern enum cpufreq_controller {
     FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
 } cpufreq_controller;
 
-#define CPUPOOLID_NONE    -1
-
-struct cpupool *cpupool_get_by_id(int poolid);
-void cpupool_put(struct cpupool *pool);
-int cpupool_add_domain(struct domain *d, int poolid);
-void cpupool_rm_domain(struct domain *d);
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
 cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
-void schedule_dump(struct cpupool *c);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from AMSPEX02CL02.citrite.net (10.69.22.126) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:07 +0100
Received: from LASPEX02MSOL02.citrite.net (10.160.21.46) by
 AMSPEX02CL02.citrite.net (10.69.22.126) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 08:49:06 +0100
Received: from esa2.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL02.citrite.net (10.160.21.46) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 23:49:06 -0800
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: CXMz9w7m4CikXJm9XrosHHRafvocy1IyLqVP3kQTNzdJdlNkyMGAkrZ3CziACeYq9VHPPA+JTj
 Lcjd6NXmxTQDLyJwlsJuVBpZRpYMUB69FD6azBPu6tBfikd/FD0CbN6G89iSdXk1hgCE61RGtr
 STuWfOk7FXTvl2pCpFUAHRgj+LvmEminPL6Jybue5dc71NzVJ7rvnGO9Rrz+0xZLw7ZHgehKx6
 d/vL4CUhWlp7o9Ncdfwo8HhbwRBAyruijQQaqjMOxL1aYcz5Abl4ZMHLy34d6xB76k5E9Rh1m4
 Plp1VckS/SAClg+p7/3Ab/nx
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 9869624
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 9869624
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3ADLdQQBJSyGgh28f5ONmcpTZWNBhigK39O0sv0r?=
 =?us-ascii?q?FitYgRL/TxwZ3uMQTl6Ol3ixeRBMOHsqkC0bKG+Pm5AiQp2tWoiDg6aptCVh?=
 =?us-ascii?q?sI2409vjcLJ4q7M3D9N+PgdCcgHc5PBxdP9nC/NlVJSo6lPwWB6nK94iQPFR?=
 =?us-ascii?q?rhKAF7Ovr6GpLIj8Swyuu+54Dfbx9HiTagb75+Ngu6oRnSu8UZnIduN6g8wQ?=
 =?us-ascii?q?bVr3VVfOhb2XlmLk+JkRbm4cew8p9j8yBOtP8k6sVNT6b0cbkmQLJBFDgpPH?=
 =?us-ascii?q?w768PttRnYUAuA/WAcXXkMkhpJGAfK8hf3VYrsvyTgt+p93C6aPdDqTb0xRD?=
 =?us-ascii?q?+v4btnRAPuhSwaLDMy7n3ZhdJsg6JauBKhpgJww4jIYIGOKfFyerrRcc4GSW?=
 =?us-ascii?q?ZdW8pcUTFKDIGhYIsVF+cPPfhWoZTzqVUNqhWxBwesCfjzxjNUnHL6wbc33/?=
 =?us-ascii?q?g9HQzAwQcuH8gOsHPRrNjtMKkSTP66zLPHzTrdafNdxDbz6JLPchA6uvGHQL?=
 =?us-ascii?q?V9ccjeyUk1EAPFlU6QpJf+PzOIzeQCrXKX4PZnVeKqkmMqrRx6rDu3xso0l4?=
 =?us-ascii?q?XEgoIYxkrZ+Sh3wIs5P8O0RFBnbdK+DZddtzmWO5ZyT84iWW1kpSg3x70ctZ?=
 =?us-ascii?q?KlYiQG1I4rywDfZvGIdYWD/wjtW/yLIThigXJoYLK/iAi28Uin0uD8U8a13E?=
 =?us-ascii?q?hQoipej9nMrW4C2wbO5ceZUvd9/0Gh1iiT1w3L9+1JJUA5mbDGJ5MvwrM8jI?=
 =?us-ascii?q?QfvVrfEiPshUn7jLeadkA+9eip7+TnbK/mppiZN4JsiAH+Pb4umsukAeQjKQ?=
 =?us-ascii?q?UORHWb+f+i27H54UL5R7BKguUskqbFqJDaOdgbpqmhDg9azIkj7Ay/Dzi439?=
 =?us-ascii?q?gCgHYHMVRFeBadgIjxIFzBPu34Ae2jjFStjDdr3fHGP7L7DprRKXjDleSpQb?=
 =?us-ascii?q?Eo+0NajQY+091bz5ZVEa0aZuL+XFfrs97VBQN/NBa7kMj9D9Ao9J4TQ22CBO?=
 =?us-ascii?q?e2KqTJqhfc5O01JPKXTJQIozu7IP8gsa29xUQlkEMQKPH6laAcb2q1S7E/ex?=
 =?us-ascii?q?3DOyjFn8sBHGEWvwE3UO3tjhi4XCVOY2qpBvJstDwgAdjgDYyYHd/1xeLRmi?=
 =?us-ascii?q?ajHphGIGtBDwPEHXTpctCCXPEBIGKXL9R6mzMJHb6mV8c61B6ouQO7g7pqJ+?=
 =?us-ascii?q?bZ4GsZ4JTk0tUmr/bLm0QU8jp5R9+Yz3nLV3t9y2EHXTgtx4hkvFdwjFyE1P?=
 =?us-ascii?q?swmORWQORa/OgBSQImLdjZxu1+Bcr1X1fDcc2OU02OWci9DHc6Sddii8QWbR?=
 =?us-ascii?q?NbHNOvxgvGwzLsA7IRkOmTA4co96vHw3XrD8NtkTDdybIsyVUrG5EdD2Cti6?=
 =?us-ascii?q?9h+gSWPLbnyRXIxYCtc6lUnCPW/T3FzWHV5x4AFVAgF6TdXXUPIEDRqIax4E?=
 =?us-ascii?q?SKVLKoBbk9V2kJgcefNqtHbMHohlRaVb/iPtrZeWe4h2a3A16B2LqNaIPgf2?=
 =?us-ascii?q?hV0j/aDQAIlAUa/HDOMgZbZG/poW3ECyd1PUnyeE6q+u576Tu6QkIy0wCWfh?=
 =?us-ascii?q?h5zbPmshURhPGaV7YSxudd6Xpn8m4pWgznh5SPVonlxUIpZqhXbNIj7U0S2H?=
 =?us-ascii?q?nQ7Up9Ncf7cPgn2A5Ychx3ulOo3BJyWeAi2YAnqm0nyA1qJOeWylREInma0o?=
 =?us-ascii?q?r3Iab/MXTp8VakbKuciRnOlc2b/KsC8qFyqVr5uxqyPlE/6Hgh2N5QmSj5hN?=
 =?us-ascii?q?2CHE8ZVpT/VVwy/h5xquTBYyUz0IjT0GVlLai+tjKbgYATCeAoywitc5JkCI?=
 =?us-ascii?q?3UT1agK8QBHIDuJfcjwR6pZUlfY7gXqv5yPtumcuvA06mubq5smzevjGIP54?=
 =?us-ascii?q?4YsArE9SVmTfXT940Y2Pze1QyCHzvxl1autMnrlJsMPGBPWDPklG6+XtYXP/?=
 =?us-ascii?q?A6dJ1DEWq0Jsyr2thy4vylE2VV8lKuHRJO2cOkfwaTc02o2ARR0UoNpnn00S?=
 =?us-ascii?q?C8zjFyj3Qotv/GjXGImr+/MkBdYigSGToH7x+kO4W/gtEEUVL9YhMgzl2l7h?=
 =?us-ascii?q?2hmPAe+vQ5LnHTRFcOdC/zfAQAGuO9sKSPZ8lX5dYmqyJSBa6+bkqdUaXVuA?=
 =?us-ascii?q?YB3mXoGG4Ul3grMiqnvJn0hUkwim2HK21ohGHEYsw2zhDaro+5J7YZzn8NQy?=
 =?us-ascii?q?92jiPSD163Moyy/NmaoJzEt/i3S2OrUpAAOTmu146LszG3oHF7GRDq1e7mgc?=
 =?us-ascii?q?XpSEJptE2zn8kvTyjDqwzwJ5Xmx7jve/wyZVFmXRf985YoQ9kl1Np21c9WgT?=
 =?us-ascii?q?9A3N2U5SZVyD6sd4wBgeSnMjxVAmRSprydqAn9hB86dSrPmt2/DjPFhZI/L9?=
 =?us-ascii?q?iiPjFPgX17sZ0MUOHMq+Ua1Spt/gjh8VmXOKgj2G5EmL12ryRG5oNB8As1kn?=
 =?us-ascii?q?fEXu9URw8Ae3SrzkrA7sji/vwLNSDzLf7okhI5xI7pDane8FgCAjCjJsZkR3?=
 =?us-ascii?q?YrqJwhbDeumDXy8t22IYeMK4NC8EXSy1CZ0aBUMM5jz6RXw3M3YSSk4Sxjkr?=
 =?us-ascii?q?ZgxR12gcPj49PBcT02uvjhW1gBbmaQBYtb+ymx3/8PxYDPh9HpRMg9XG1MBs?=
 =?us-ascii?q?CgTOr0QmtD6bK+b1jISmdk7C/cQ+G6f0fX6V86/SueT9bybi/RfShJi40lHk?=
 =?us-ascii?q?LVJVQD0llFAXNjxcN/TFrygpSmKhwchHhZ50ak+EIdl6Qya0S5CDyH4l/vMG?=
 =?us-ascii?q?x8SYDDfkMHsEcbuwGMbpTYtqUpRGlZ5sHz91DLcDHBIV4SUydYASnmTxjiOL?=
 =?us-ascii?q?Kq+NXNofOACLP4KfzQbLGK7+dZUr+Jw5mrz4drrS2UOJ/JOH58Avk/nEFEWB?=
 =?us-ascii?q?UbU4zYnT4LViAaxTnVYZXdohCi9ylz6Mu49bzqXgnr+IeCWaBKP5N34ReqjK?=
 =?us-ascii?q?yfNumWwiFkNTJf0ZBKznjNrdpXlBsbjyUkLmP/K7kLuC/TQa6VoZd5VEZEOQ?=
 =?us-ascii?q?V0MsYAr68n11MLOcWA0Yytk+AoyP8tC1JVE1fmn5PhY8tCOGy7OF7dYSTDfL?=
 =?us-ascii?q?2bOT3Gxd32aqKgWPVRiutTrRi5pTecFQfqID2CkzDjUx3nP/tLiWmXOxlXuY?=
 =?us-ascii?q?f1dRgIayCrVNX9dhiyK8N6lxUz0edynW7ROCgQPHk0ck9AqKGR8TINgvh7HD?=
 =?us-ascii?q?8kjDItJu2FliCFqujAf89H7L0yWXkyzbgCpi1nmPNP4SpJReJ4gn7ftdc05V?=
 =?us-ascii?q?GtybLQkn85AFxPsjZOlMSAukAxXMeRvpRGR3vA+woAqGuKDBFf7dlqENr0oI?=
 =?us-ascii?q?hL18PC0qn0LX0RlrCctdtZHMXSJM+dZTA5NgH1HTfPEAYfZTu7byfEmldQ1v?=
 =?us-ascii?q?2fvC7wzNByut3nn5wATaVeXVo+G6YBC0hrK9cFJY9+QjIulbPC0Zw4oEGmpR?=
 =?us-ascii?q?yUf/10+5DKUvXLWqfqOGzflqRfal0EzOGgdNVBBsjAw0VnL2JCssHPEkvUU8?=
 =?us-ascii?q?pKp3Q5PBQpu0gL+397HDRqhxDVLzi16XpWLsaa2wYsg1ImM/8w7zqq6FAydA?=
 =?us-ascii?q?LH?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0ELAACr2fldhg/ch8NlHQEBAQkBEQU?=
 =?us-ascii?q?FAYFqCAELAYIagUYjBAsqjCeKGpgzFIFnCQQBAQsvAQEBhD8CghocBwEEMAk?=
 =?us-ascii?q?OAgMBDAEBAQMBAQECAQIDAgIBAQIQAQEBCgkLCCmFSoI7KQGDTwIBAxIVUhA?=
 =?us-ascii?q?/ElcZIoMAgnyhMD0CIwFMgQSKfjOJDoFIgTYBhz6EWRqBQT+BEYJec4QxZYU?=
 =?us-ascii?q?hBI1CoUeCPpYGDBuOUYt9jV+bZYFSghIzGggbFYMnUBEUjR4OCY4kQDOMZCm?=
 =?us-ascii?q?CGAEB?=
X-IPAS-Result: =?us-ascii?q?A0ELAACr2fldhg/ch8NlHQEBAQkBEQUFAYFqCAELAYIag?=
 =?us-ascii?q?UYjBAsqjCeKGpgzFIFnCQQBAQsvAQEBhD8CghocBwEEMAkOAgMBDAEBAQMBA?=
 =?us-ascii?q?QECAQIDAgIBAQIQAQEBCgkLCCmFSoI7KQGDTwIBAxIVUhA/ElcZIoMAgnyhM?=
 =?us-ascii?q?D0CIwFMgQSKfjOJDoFIgTYBhz6EWRqBQT+BEYJec4QxZYUhBI1CoUeCPpYGD?=
 =?us-ascii?q?BuOUYt9jV+bZYFSghIzGggbFYMnUBEUjR4OCY4kQDOMZCmCGAEB?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="9869624"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDFbEB4rFsWFVmpnuC0bAdNuZ8Ye25XQOyh8y9?=
 =?us-ascii?q?EjQEFAsh1UboxgqRCLgYIErf5mrugxxvHyqeY6EGkqAJ1D/YTc6pyqQP?=
 =?us-ascii?q?KgSIwC9wX5eD1o/BwdX0FBcMDHwqEM8OIKjYOGff6Eu3auEnyVFHXCTr?=
 =?us-ascii?q?bpNKuIgd+mBeBvt/b6+83POQ=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa2.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:06 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id A2FF1AD69;
	Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 4/9] xen/sched: remove special cases for free cpus in schedulers
Date: Wed, 18 Dec 2019 08:48:54 +0100
Message-ID: <20191218074859.21665-5-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: 03f170b4-f579-40bd-085f-08d7838ec2be
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

With the idle scheduler now taking care of all cpus not in any cpupool
the special cases in the other schedulers for no cpupool associated
can be removed.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/sched_credit.c  |  7 ++-----
 xen/common/sched/sched_credit2.c | 30 ------------------------------
 2 files changed, 2 insertions(+), 35 deletions(-)

diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c
index a098ca0f3a..8b1de9b033 100644
--- a/xen/common/sched/sched_credit.c
+++ b/xen/common/sched/sched_credit.c
@@ -1690,11 +1690,8 @@ csched_load_balance(struct csched_private *prv, int cpu,
 
     BUG_ON(get_sched_res(cpu) != snext->unit->res);
 
-    /*
-     * If this CPU is going offline, or is not (yet) part of any cpupool
-     * (as it happens, e.g., during cpu bringup), we shouldn't steal work.
-     */
-    if ( unlikely(!cpumask_test_cpu(cpu, online) || c == NULL) )
+    /* If this CPU is going offline, we shouldn't steal work.  */
+    if ( unlikely(!cpumask_test_cpu(cpu, online)) )
         goto out;
 
     if ( snext->pri == CSCHED_PRI_IDLE )
diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c
index 5bfe1441a2..f9e521a3a8 100644
--- a/xen/common/sched/sched_credit2.c
+++ b/xen/common/sched/sched_credit2.c
@@ -2744,40 +2744,10 @@ static void
 csched2_unit_migrate(
     const struct scheduler *ops, struct sched_unit *unit, unsigned int new_cpu)
 {
-    struct domain *d = unit->domain;
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_runqueue_data *trqd;
     s_time_t now = NOW();
 
-    /*
-     * Being passed a target pCPU which is outside of our cpupool is only
-     * valid if we are shutting down (or doing ACPI suspend), and we are
-     * moving everyone to BSP, no matter whether or not BSP is inside our
-     * cpupool.
-     *
-     * And since there indeed is the chance that it is not part of it, all
-     * we must do is remove _and_ unassign the unit from any runqueue, as
-     * well as updating v->processor with the target, so that the suspend
-     * process can continue.
-     *
-     * It will then be during resume that a new, meaningful, value for
-     * v->processor will be chosen, and during actual domain unpause that
-     * the unit will be assigned to and added to the proper runqueue.
-     */
-    if ( unlikely(!cpumask_test_cpu(new_cpu, cpupool_domain_master_cpumask(d))) )
-    {
-        ASSERT(system_state == SYS_STATE_suspend);
-        if ( unit_on_runq(svc) )
-        {
-            runq_remove(svc);
-            update_load(ops, svc->rqd, NULL, -1, now);
-        }
-        _runq_deassign(svc);
-        sched_set_res(unit, get_sched_res(new_cpu));
-        return;
-    }
-
-    /* If here, new_cpu must be a valid Credit2 pCPU, and in our affinity. */
     ASSERT(cpumask_test_cpu(new_cpu, &csched2_priv(ops)->initialized));
     ASSERT(cpumask_test_cpu(new_cpu, unit->cpu_hard_affinity));
 
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from AMSPEX02CL03.citrite.net (10.69.22.127) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:09 +0100
Received: from LASPEX02MSOL02.citrite.net (10.160.21.46) by
 AMSPEX02CL03.citrite.net (10.69.22.127) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 08:49:08 +0100
Received: from esa2.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL02.citrite.net (10.160.21.46) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 23:49:08 -0800
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa2.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: ObGuTOtqEPqJOLLFISfyG9Dh3j17kL/gUejBzggtF02nEJS+ecedN9MaeSjlfU+HDlYwzSn/bt
 DZz6KMJPVG6KyI+Tc3yEM1rhxGFX87emxEhINJyrS2qSladZzua4JEeA5lOmgff4FSOObE6akE
 +09vG8esK0DlaL6bKgkszW5qqPExVGCa3ot4Nn5CNm8GUtql7DJ65x4NxejECeaQWezVdJIyzi
 ktDDhQ98/IL39UfncW1mxGQaKUnDHcLsE1hYFnzFswuV1g9Zh/rkvKEIzUp9jw1jaZLJsqzFZY
 /+dneIS2EnRAFlSeSdu9GPZK
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 9869627
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 9869627
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3Av5capRc4lcVRMY6lG57wYB87lGMj4u6mDksu8p?=
 =?us-ascii?q?Mizoh2WeGdxcu/bB7h7PlgxGXEQZ/co6odzbaP6Oa6ATxLuM/a+Fk5M7V0Hy?=
 =?us-ascii?q?cfjssXmwFySOWkMmbcaMDQUiohAc5ZX0Vk9XzoeWJcGcL5ekGA6ibqtW1aFR?=
 =?us-ascii?q?rwLxd6KfroEYDOkcu3y/qy+5rOaAlUmTaxe7x/IAi4oAnLqMUanYhvJqksxh?=
 =?us-ascii?q?fUrHZDZvhby35vKV+PhRj3+92+/IRk8yReuvIh89BPXKDndKkmTrJWESorPX?=
 =?us-ascii?q?kt6MLkqRfMQw2P5mABUmoNiRpHHxLF7BDhUZjvtCbxq/dw1zObPc3ySrA0RC?=
 =?us-ascii?q?ii4qJ2QxLmlCsLKzg0+3zMh8dukKxUvg6upx1nw47Vfo6VMuZ+frjAdt8eXG?=
 =?us-ascii?q?ZNQ9pdWzBEDo66coABDfcOPfxAoof9uVUAsBWwCwqrCuztxD9FnWP60bEg3u?=
 =?us-ascii?q?g9CwzKwBAsEtQTu3rUttX1M6ISXPixwqnV0zrDdfxW1in76IPVcx4hu/aMXa?=
 =?us-ascii?q?lrccHMzkQvFQzFjk+XqYz+JDOYzf8Ns3WA7+V+T+6gl2knqwRorzWp28wiiZ?=
 =?us-ascii?q?HJi5oIxl3A9Sh12ps5KNO4RUJhf9KoDodcuzuHO4Z2Ws8uXmVltSYgxrEbt5?=
 =?us-ascii?q?O2fDIGxIo5yxLDcfCLbYiF7x3lWe2MOzl3nmhld6i6hxuq8Uiv1On8Vs6s3V?=
 =?us-ascii?q?ZPoStJjMPAtmsQ1xzI9MeLUOZy8Vm51TaO0QDc9P1ELFgpmaffK5Mt2KM8m5?=
 =?us-ascii?q?QTvEjZACP6hln6gLWLekgk4uSo7v7oYrTipp+SLY90jQT+P7wsmsywG+Q4NR?=
 =?us-ascii?q?MOX3OA9OSnyb3j5lP2TK9Wgf0xl6nVqIraKtgDpq6lHw9V1Z4u6xmhADehyt?=
 =?us-ascii?q?QYkmcIIEhYdxKaiYjpP0vBIOjjAPihnlSgiitkx/HaPr37A5XMIWLPn6vmfb?=
 =?us-ascii?q?Z4uAZgz18jwNYa659KB7UpJPPoRlS3pNHeFgU+MQG/36DgEtou+JkZXDetH6?=
 =?us-ascii?q?KDP67U+XCS4fk0a72Oa5USoy3VMOU+6rjlinpvygxVRrWgwZZCMCPwJf9hOU?=
 =?us-ascii?q?jMJCO02o1bQ04XogozSvDrg1SeUDlVIky/RL84+ipiWNL0AJzKHMati+fag3?=
 =?us-ascii?q?/+QM0QZ3hGD0DKGnDtJM2IWPYJPSSVJMIp0jkJTqOoRIJp0xay/BT7xLxqIq?=
 =?us-ascii?q?uc+iARuZ/5ktkg4erVmEJ67iR6WuKa1WzFVGRohiUQXTZj3q9lpldm4kyeyq?=
 =?us-ascii?q?U+iPtdRpRI//0cag4hLtbHyvBiTdX7WwbPZNCMHVSpWNK9GhkqU8k8hdQJZh?=
 =?us-ascii?q?U1AM2s2zbE2SfiGLoJj/qLCZgzp7rbxGT0Lt1hxmzu0bl7yUI7WcYJOWD/3P?=
 =?us-ascii?q?xF+gPeBpDEnwCir4jwJfVO+inL+S/DwHGH5gdYW1UrDv2AAyBZZ1PWqMS/7U?=
 =?us-ascii?q?THHfeoDvw8Pw1NxNTnSOMCY8D1jVhAWPboOcjPK2O3lWCqAB+Ux7SKJIP0cm?=
 =?us-ascii?q?QZ1S/ZBQAKiQcWtXqBMAE/AG+mrQe8RHRsGkjoeFjE6vRlpTWwSUp1hwCGYk?=
 =?us-ascii?q?t91qakrwYPjK/UQPcS07QY/SY5/mwlQRDkhI+QUYDY4VM8L8A+KZsn7VxK1H?=
 =?us-ascii?q?zUrVlwJZX6aaBp3QVBK0Ep7wXvzxVyGsNLls15yRFihAd0N6+c10tMMj2C2p?=
 =?us-ascii?q?WlcL/YMG7p5zi0drXbnFrZ1ZzFsrdK8/k+p1j56UurGVAl6G5PyMRO3j2X4Z?=
 =?us-ascii?q?CAX29wGdrhF00w8RZ9vbTTZCIwspjV2XNbOq6xqjbe2tgtCbJ/mCytdNpeLq?=
 =?us-ascii?q?6IUTTKPZ1BXpqWIfcx0xikdRteeuBZr/VrZ4b4J72HwK6uLKBrmzf0xWhA5Y?=
 =?us-ascii?q?l81AqL+U8eAqbK0IwC2OqwxRacWnH3i1Lpvs3smI9CbC0fBSLmmHWiXdUMIP?=
 =?us-ascii?q?coJcBSVC+nOIWvy897hoLxVnI9lhbrHF4A1MKzOFKTY1H7wQxMxBESqH2jlz?=
 =?us-ascii?q?G/ym88mDUoo6yDmS3Wlr2zKVxeYTMNHjM6yw6xcu3Wx5gAUUOlbhYkjk6o/k?=
 =?us-ascii?q?+ggahQ//8gdyyNEQFJZyjzPydpVa7j09jKK8NJ9p4stj1aFeqmZlXPALzyuR?=
 =?us-ascii?q?wByAv4AnBTgjs8cnv58oW8hBF8hG+HeTxxo2DeYtpY3grE6ZrXQvsbjV9kDG?=
 =?us-ascii?q?FozDLQAFa7Jdyg+96ZwozCvu6JXGWkTpRPcCPvwNDIpG6h6GZtGxH6g+Grl4?=
 =?us-ascii?q?itD10hySGinYoPN22AvFPmb4Lszaj/Le93YhwiGgrn88QjUoBmztlp3MpWiD?=
 =?us-ascii?q?5L2NPNuiBd2WbrbYcCgP24NSBVA2dVhYaSulGAugUrL2rVlduiDjPAnY05IY?=
 =?us-ascii?q?H9Oz1e2zphvZkRV+HKsfoc23My+wTwrBqNM6EhxXFEmaVosSdLxblO4lZIrG?=
 =?us-ascii?q?3VA6hOTxAEYmq20UzOt5fn8u1WfDr9KOThkhYi24jnVPbb/EldQCqrI8l4W3?=
 =?us-ascii?q?Usv4MlaQmKiSOvj+OsMNjIMYBK6EHSwkaG164Nb8tv3vsS2Xg+Zz+75C19jb?=
 =?us-ascii?q?dn3Fo3hcryvZDbeTwyp+TjWkUebGWsIZpLoVSPxe5fhprEhtH0WMw5S3NRAc?=
 =?us-ascii?q?euFbXyS3oTrai1blfSVmRk9DHBQ+KZRUjFsyIE5zrOC8z5biDLYiBAk5M7FU?=
 =?us-ascii?q?nbfAsF20gVRGlox89kUFnxmYq7Kh8/vndLvTua4lNN0r46bkiiFD2D+UHyMW?=
 =?us-ascii?q?9yEN/Fd1JX9l0QvhaLd5XCqLssR2cAucf65A2Vdj7AO0ISUDtPABbaQQG4W9?=
 =?us-ascii?q?vmrdjG+OyFCufsNOPAOPOIrvJTU/POzpWqmopg9DKROsjdJWFsVbs93VROWX?=
 =?us-ascii?q?Q/EMPc/ldHAy0RnCbQY8PJvw+yoWtxqd6y9PCtUwXqrYqJAL9PPdg94Ai4x7?=
 =?us-ascii?q?+ZMPKdjzp4LjAe0Y4QwXjPy/4U21t36WkmMjCpFf5Z73z3Qandm7FaA1sgUw?=
 =?us-ascii?q?0obJQayaU600ENNNXS0JX13eUj0aZwVQcDVEThn9HvbssPcSm7MxvcCUCHOa?=
 =?us-ascii?q?7jR3WDytzrYa66VbxbjflF/xy2tzGBFkb/PzOF3zD3XhGrOOtIgWmVJhtb8I?=
 =?us-ascii?q?26dx9sDyDkQreEIlWjN8RriDQt3bAurnbabygHLCNxNU9A7/WR4S5envRjCj?=
 =?us-ascii?q?lB435ifozm026S6+jVLIpTsOM+W3Up0bsBvTJgkeMTtXwXDOZ4kybTsNN09l?=
 =?us-ascii?q?y9m7PJyjE8C0UW72gbwoOTvUByf67e88olOz6M8RQT4GGXExlPqcFiD4ilua?=
 =?us-ascii?q?9KzcPUvLnuMzoE+NXRt5h5ZYCcOIecPXwtPAC8UibTFxcARCW3OHv3glwH1u?=
 =?us-ascii?q?qP7XDTopVw+f2O0NIeD7RcUlIyDPYTDE9oSccDLJlAVTQhibeHjcQM6CPv/i?=
 =?us-ascii?q?mUf91Tu9X8btzXBPzuLDiDirwdP0kT3Kj1a48UM9+ighEwWhxBhI3PXnHoc5?=
 =?us-ascii?q?VNrylmN1RmpVUXtmNjVWB110+3Mlrxsk9WLua9m1sNsiU7ef4krW+++EosKx?=
 =?us-ascii?q?zBoy5iyEQ=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0HiAACr2fldhg/ch8NlHQEBAQkBEQU?=
 =?us-ascii?q?FAYF+gXQngUYjBAsqsG8JBAEBCy8BAQGEPwKCGhwHAQQ0EwIDAQwBAQEDAQE?=
 =?us-ascii?q?BAgECAwICAQECEAEBAQoJCwgphUqCOykBg08CAQMSFVIQPxJXGSKDAIJ8oTA?=
 =?us-ascii?q?9AiMBTIEEin4ziQ6BSIE2hz+EWRqBQT+BEYNRijcEjUKhR4I+lgYMG45Ri32?=
 =?us-ascii?q?pRIFpgXszGggbFYMnUBEUjSyBDQEHjRhAM48lAQE?=
X-IPAS-Result: =?us-ascii?q?A0HiAACr2fldhg/ch8NlHQEBAQkBEQUFAYF+gXQngUYjB?=
 =?us-ascii?q?AsqsG8JBAEBCy8BAQGEPwKCGhwHAQQ0EwIDAQwBAQEDAQEBAgECAwICAQECE?=
 =?us-ascii?q?AEBAQoJCwgphUqCOykBg08CAQMSFVIQPxJXGSKDAIJ8oTA9AiMBTIEEin4zi?=
 =?us-ascii?q?Q6BSIE2hz+EWRqBQT+BEYNRijcEjUKhR4I+lgYMG45Ri32pRIFpgXszGggbF?=
 =?us-ascii?q?YMnUBEUjSyBDQEHjRhAM48lAQE?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="9869627"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDFDMPs00M9jfsAOsg7bQA7bS7pQRKydr13BjX?=
 =?us-ascii?q?y3/yyX85qLCqfIXOUikAbLxaaaOrjwdbOdh0GkeNtHoM8/P0IbaYVG53?=
 =?us-ascii?q?c0SwJTpIqN6l3iPqo95v7OJ0s3TS1HoqRNlGsoAtH13Rr4IGr8BFk1Gk?=
 =?us-ascii?q?i3HyynvbHHCRh5RiXIu3XRAQ=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa2.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:08 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id CD2CAAD95;
	Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>, Meng
 Xu <mengxu@cis.upenn.edu>, George Dunlap <george.dunlap@eu.citrix.com>
Subject: [PATCH 5/9] xen/sched: use scratch cpumask instead of allocating it on the stack
Date: Wed, 18 Dec 2019 08:48:55 +0100
Message-ID: <20191218074859.21665-6-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: df008393-a3ee-48aa-33e2-08d7838ec3f5
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

In sched_rt there are three instances of cpumasks allocated on the
stack. Replace them by using cpumask_scratch.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/sched_rt.c | 56 ++++++++++++++++++++++++++++++---------------
 1 file changed, 37 insertions(+), 19 deletions(-)

diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 379b56bc2a..264a753116 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -637,23 +637,38 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
  * and available resources
  */
 static struct sched_resource *
-rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
-    cpumask_t cpus;
+    cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
     cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
-    cpumask_and(&cpus, online, unit->cpu_hard_affinity);
+    cpumask_and(cpus, online, unit->cpu_hard_affinity);
 
-    cpu = cpumask_test_cpu(sched_unit_master(unit), &cpus)
+    cpu = cpumask_test_cpu(sched_unit_master(unit), cpus)
             ? sched_unit_master(unit)
-            : cpumask_cycle(sched_unit_master(unit), &cpus);
-    ASSERT( !cpumask_empty(&cpus) && cpumask_test_cpu(cpu, &cpus) );
+            : cpumask_cycle(sched_unit_master(unit), cpus);
+    ASSERT( !cpumask_empty(cpus) && cpumask_test_cpu(cpu, cpus) );
 
     return get_sched_res(cpu);
 }
 
+/*
+ * Pick a valid resource for the unit vc
+ * Valid resource of an unit is intesection of unit's affinity
+ * and available resources
+ */
+static struct sched_resource *
+rt_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
+{
+    struct sched_resource *res;
+
+    res = rt_res_pick_locked(unit, unit->res->master_cpu);
+
+    return res;
+}
+
 /*
  * Init/Free related code
  */
@@ -886,11 +901,14 @@ rt_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
     struct rt_unit *svc = rt_unit(unit);
     s_time_t now;
     spinlock_t *lock;
+    unsigned int cpu = smp_processor_id();
 
     BUG_ON( is_idle_unit(unit) );
 
     /* This is safe because unit isn't yet being scheduled */
-    sched_set_res(unit, rt_res_pick(ops, unit));
+    lock = pcpu_schedule_lock_irq(cpu);
+    sched_set_res(unit, rt_res_pick_locked(unit, cpu));
+    pcpu_schedule_unlock_irq(lock, cpu);
 
     lock = unit_schedule_lock_irq(unit);
 
@@ -1003,13 +1021,13 @@ burn_budget(const struct scheduler *ops, struct rt_unit *svc, s_time_t now)
  * lock is grabbed before calling this function
  */
 static struct rt_unit *
-runq_pick(const struct scheduler *ops, const cpumask_t *mask)
+runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
 {
     struct list_head *runq = rt_runq(ops);
     struct list_head *iter;
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
-    cpumask_t cpu_common;
+    cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
     cpumask_t *online;
 
     list_for_each ( iter, runq )
@@ -1018,9 +1036,9 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask)
 
         /* mask cpu_hard_affinity & cpupool & mask */
         online = cpupool_domain_master_cpumask(iter_svc->unit->domain);
-        cpumask_and(&cpu_common, online, iter_svc->unit->cpu_hard_affinity);
-        cpumask_and(&cpu_common, mask, &cpu_common);
-        if ( cpumask_empty(&cpu_common) )
+        cpumask_and(cpu_common, online, iter_svc->unit->cpu_hard_affinity);
+        cpumask_and(cpu_common, mask, cpu_common);
+        if ( cpumask_empty(cpu_common) )
             continue;
 
         ASSERT( iter_svc->cur_budget > 0 );
@@ -1092,7 +1110,7 @@ rt_schedule(const struct scheduler *ops, struct sched_unit *currunit,
     }
     else
     {
-        snext = runq_pick(ops, cpumask_of(sched_cpu));
+        snext = runq_pick(ops, cpumask_of(sched_cpu), cur_cpu);
 
         if ( snext == NULL )
             snext = rt_unit(sched_idle_unit(sched_cpu));
@@ -1186,22 +1204,22 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
     struct rt_unit *iter_svc;
     struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
-    cpumask_t not_tickled;
+    cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
     cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
 
     online = cpupool_domain_master_cpumask(new->unit->domain);
-    cpumask_and(&not_tickled, online, new->unit->cpu_hard_affinity);
-    cpumask_andnot(&not_tickled, &not_tickled, &prv->tickled);
+    cpumask_and(not_tickled, online, new->unit->cpu_hard_affinity);
+    cpumask_andnot(not_tickled, not_tickled, &prv->tickled);
 
     /*
      * 1) If there are any idle CPUs, kick one.
      *    For cache benefit,we first search new->cpu.
      *    The same loop also find the one with lowest priority.
      */
-    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), &not_tickled);
+    cpu = cpumask_test_or_cycle(sched_unit_master(new->unit), not_tickled);
     while ( cpu!= nr_cpu_ids )
     {
         iter_unit = curr_on_cpu(cpu);
@@ -1216,8 +1234,8 @@ runq_tickle(const struct scheduler *ops, struct rt_unit *new)
              compare_unit_priority(iter_svc, latest_deadline_unit) < 0 )
             latest_deadline_unit = iter_svc;
 
-        cpumask_clear_cpu(cpu, &not_tickled);
-        cpu = cpumask_cycle(cpu, &not_tickled);
+        cpumask_clear_cpu(cpu, not_tickled);
+        cpu = cpumask_cycle(cpu, not_tickled);
     }
 
     /* 2) candicate has higher priority, kick out lowest priority unit */
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:11 +0100
Received: from LASPEX02MSOL02.citrite.net (10.160.21.46) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:08 -0500
Received: from esa5.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL02.citrite.net (10.160.21.46) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 23:49:08 -0800
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: +gnC2HAAnrie2nW6sXWQqjgZmsFPgBgRoXON2JG79thNG414YCOIFZjLIQdWwMWmekVJMbnHIW
 gS2B1g3KMITGhFdKkLMRdnlO5SkNdNkvRUQbwZacP2LMzmgvI4/z7GMxuKhOM//djPravrfOiW
 3zHIqqWVHb34OGjuLh0jW0CTlojDVacIt/0xMOSFj6Ip37UVsC5gegBi79+WDrrkpFrQrAL2pZ
 nBGZpGLTGOyMz6lEZiFNd5UDEGTAhD+w9RJ1O4w8Pdopc49sEBJorwSoCZt1BqJNI2exraQtKp
 kMsOF+4UHv/Q5HXNDO8dHA+V
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 10226264
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 10226264
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3ACSMOnR3UbK6RoScZsmDT+DRfVm0co7zxezQtwd?=
 =?us-ascii?q?8Zse0TIvad9pjvdHbS+e9qxAeQG9mCsLQe07qd6vm7EUU7or+5+EgYd5JNUx?=
 =?us-ascii?q?JXwe43pCcHRPC/NEvgMfTxZDY7FskRHHVs/nW8LFQHUJ2mPw6arXK99yMdFQ?=
 =?us-ascii?q?viPgRpOOv1BpTSj8Oq3Oyu5pHfeQpFiCezbL9oMhm6sQbcusYLjYd/JKs61w?=
 =?us-ascii?q?fErGZPd+lK321jOEidnwz75se+/Z5j9zpftvc8/MNeUqv0Yro1Q6VAADspL2?=
 =?us-ascii?q?466svrtQLeTQSU/XsTTn8WkhtTDAfb6hzxQ4r8vTH7tup53ymaINH2QLUpUj?=
 =?us-ascii?q?ms86tnVBnlgzocOjUn7G/YlNB/jKNDoBKguRN/xZLUYJqIP/Z6Z6/RYM8WSX?=
 =?us-ascii?q?ZEUstXSidPAJ6zb5EXAuQBI+hWsofzqVgJoxS8CwmhH//jxiNSi3Pqx6A2z/?=
 =?us-ascii?q?gtHAfb1wIgBdIOt3HUoc3xOqcPT++11qbIwivFb/hL2Dn98o/Icgs6ofqRWr?=
 =?us-ascii?q?9wc8XRyVMoFwPelVWdspflPy6O1usTqWSU8+1gVee2hmMhtgp/oSCvy98xho?=
 =?us-ascii?q?TNho8Z0E3I+Ct5zYovO9G0VlJ3bcS5HJZQry2WKo97T8M4T211tis3yqcKtY?=
 =?us-ascii?q?ClcCQW0pgr2R7SZvOBfoOV+BzsTvyRLi19hH99eLKwmRKy8U+4x+35Wci4zk?=
 =?us-ascii?q?xGrjFYktnXqH8BzQHc5dafRvt8+EeuxyqP2hjO5uxAIU04j7fXJpAhz7IqiJ?=
 =?us-ascii?q?Yfr1jPEjXrlEj2lKOWc18r+ums6+TpeLXmoZqcOpd2igHxKKQunde/Af8jPw?=
 =?us-ascii?q?gVQmib4vqz2Kf/8k3+RbVGluc2nbXBsJDGOcQboba0AwBL3YYk8Ra/ACmp3M?=
 =?us-ascii?q?4FknkaKlJFfAiIj5DyNl7QPfD0F+uwg1WwkDdxxvDHMaftDYnKLnjGw//deu?=
 =?us-ascii?q?Nt5kgZxActwNR345NPFqpHMP/1QlX2ttHTElk+KQPn7fzgDYBfy44EVm+JSo?=
 =?us-ascii?q?CDN7jJ+QuK6fgoOPKkf5IOtXD2LP1ztK2mtmMwhVJIJfrh5pAQcn3tW60+ex?=
 =?us-ascii?q?/DM1PxntcMF3sLtQMiTevszWePSiNXe23rD/Ju6yoyVcSmBtyYGdjo3uzH3T?=
 =?us-ascii?q?+7G40QbWdDWRiAEnbtIoODXfpEKCefOdRonTFMU7+9A5Qg2hejuE6yy7duIu?=
 =?us-ascii?q?fOvCxNs5Xl2Ys9/PXdwDc18zE8FMGByyeVVWghnGwSQCQt9LtiukE7wVCGgu?=
 =?us-ascii?q?Bjm/INLdVI/LtSVxsic5vVzuh0Edf3DwDOZNCSU369X86rRzo2S4F52McANm?=
 =?us-ascii?q?B6HdjqlRXfx2yqDrsSwqSMH4Ax+7nA0mLZItYnjW3bz6Rng1R/GJl1OGarh7?=
 =?us-ascii?q?By+03oP6CTwxTLsaGseOxc0TXEqSGDxjHV4xkdDl42UL3FWGBZbUzT/5z/4Q?=
 =?us-ascii?q?vZQrmiBK5CUEMJwNOeKqZMdtzijElXDPblNtPEZmutmmC2TR+Wz7KIZYDudi?=
 =?us-ascii?q?0TxiLYQEQDlgkS+z6BO21cTm+jrHjZFydGDk/0bgXn9uw/4HK3Q0kozh2bOl?=
 =?us-ascii?q?V73un98RoUiPqADvILi+tV5WF48GgyRgzlmYuLWL/i70J7ca5RYM0w+gJKzm?=
 =?us-ascii?q?uE8Qx2Zcf/d+U81hgfawRyrwXl0BAkb+cI2cUssn4uyxJ/bKyC11YUPTGXx5?=
 =?us-ascii?q?frIZXMN3L/uhuobuSFkkGby9uQ9qoVvb46pE/kpxqBDVc5/jNs1NwfgB7+rt?=
 =?us-ascii?q?3aSQEVV5z2SEM+8RN38qrbbicK7ITRzXRwMKOwv2aYiegkD+Yk1BusOuxnHv?=
 =?us-ascii?q?PfTV3KGtYBT4ijM+V33ViiNUlbZKUMpOg1J8OjZ72N36v5dOpnmTuniyxA7u?=
 =?us-ascii?q?UfmgqO+DB7UfXgxIsezreT2Q7PWzrnjVinu9z6gsgdNGxURzHij3K+XMgIO+?=
 =?us-ascii?q?V7Zs4TBH2rItGry9kb5dalQHNe+FO5RhsH1MKvZRuOfgn41AxU214QpC/vki?=
 =?us-ascii?q?+5wjpo1jAx+/POgWqXmLmkLkBffDcWFwwAxR/2LIO5js4XRh2ldAF00huuvh?=
 =?us-ascii?q?2ln+0F9OJ+N2nWUQFDeC2lSgMqGqa2qLeGZNZCrZ0ytiACGuaze1eBUZbmvg?=
 =?us-ascii?q?AXlSjkGiENjCB+bDystpjjylZ4h3iaN21bt2fCdId7whKVt7m+DbZBmzEBQi?=
 =?us-ascii?q?d/kzzeAFOxasKo8dujnJDGqumiVmilW84bYWzxwIiHrif++XxyDEj1gaWogt?=
 =?us-ascii?q?O+W1tfs2ezx5xwWC7Pthq5foT7y/HwL7d8ZkcxTF7kt5gjQto4w81h2tdIni?=
 =?us-ascii?q?JAzpSNoSheyDi1aI8Dn/ugKiJKH2duoZad4RC5ihQ4djTUmMShEC3am5UpZs?=
 =?us-ascii?q?HmMDpNhWRjsoYSTv/TteIMnDMp8APk9kSIMaY7x25Mj6NpsiJ/4alBuRJxnH?=
 =?us-ascii?q?zBXfZIWxEeZWu1yVyJ94zs9fkGIjrzN+D2jA0nw5igFO3Q+18GHiylK9F4RH?=
 =?us-ascii?q?Q2t5sjVTCEmHzrttO9JoWWPINV7lvM1E6ex+lNdMBoyKdM3Ho/fzuh7Dt8k7?=
 =?us-ascii?q?VzjAQyj8jl587dcz4rpvrgREcAbViXL4sS4m2/1PcF2J/Ojsb1Q8knQnJRB9?=
 =?us-ascii?q?PpVa76SWNO862/aUDUSmV68jDBQt+9VUee8Bs09iOTVcnyaTfOdSlflZI4H1?=
 =?us-ascii?q?GcPBAN2VFJGmxhw9hjTFjsnIu7KyIbrngQ/gKq80oRjLg0al+lCT+Z/EDyMX?=
 =?us-ascii?q?81UMTNdUsIqFgeoR6KaYrGqbgoVyBAos/68ErXdzbdPV4WSzBSPy7MT1HlNb?=
 =?us-ascii?q?2z6dSS6PCWX6y4KOXDZbHIoutbH/aOzpay3oY05C6CbIOJOWdvC/l90U1GOB?=
 =?us-ascii?q?IxU8XfkDESRyFFjDrDNIidowmx/ikxpce6uPXtUwby6YbdEKNcd811/Ay7ir?=
 =?us-ascii?q?uCMOjWgztlLTFf1dUHwnqtqvBXmVcWj2s3LWuXHL8NtDDAQOfrooEMV0FJTS?=
 =?us-ascii?q?R1OYMI4rk1g09NMpWA1YuwiO4+j+Y1DkcDXlvkyImvYoQRLmexOUmiZg7DPa?=
 =?us-ascii?q?maJTDN38D8YL+tAbxWguJOshSsuDGdW0b9NzWHnjPtWlihK+ZJxC2cORVfvs?=
 =?us-ascii?q?m6fHMPQSD7S8n6bxSgLNJtpTgmm/svm2jHc2IRcHB9f05LsryM/HZYj/F4SA?=
 =?us-ascii?q?kjpjJuKeiJnTrc7vGNc8dL96IwX2IuzbocuSlpg6FY5yxFWvFvzSbJp4Moo1?=
 =?us-ascii?q?r9ybbXj2Q3FhtWqjNbwomMuBYHW+2R+59eVHLD5B9I43+XDkFApdR/Dcb0k7?=
 =?us-ascii?q?tN0dWJn6X2YmQnkZqc7Y4HCs7YJdjSemImKgbsESXIAREtSC7xc3rCnEEbnP?=
 =?us-ascii?q?zYpRj35tAq75PrnpQJULpSUlc4Q+gbBkpSF9sHOJ5rXzkgnOfG3v5N3mK3qV?=
 =?us-ascii?q?zqfOsfvp3DUavJU/D/cnCCkKJJIRcFk+uhcdYjc7bj0kknUWFU2ZzQEhOID8?=
 =?us-ascii?q?tQuSAnZQgx8h0UoSpOC1Yr0kegUTuDpXoaFPq6hBkz01Msevk2+XHn5FJlf1?=
 =?us-ascii?q?c=3D?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0GbAACL2fldhg/ch8NlHAEBAQEBBwE?=
 =?us-ascii?q?BEQEEBAEBgX6CG4FGIwQLKrBvCQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQE?=
 =?us-ascii?q?BAwEBAQIBAgMCAgEBAhABAQEKCQsIKYUSCDCCOykBg08CAQMSFT4UED8SVxk?=
 =?us-ascii?q?igwCCfKEtPQIjAUyBBIp+M4kOgUiBNoc/hFkagUE/gRGDUYo3BI1CoUeCPpY?=
 =?us-ascii?q?GDBuOUYt9qUSBaYF7MxoIGxWDJ1ARFIhKhFQOCY4kQDOPJQEB?=
X-IPAS-Result: =?us-ascii?q?A0GbAACL2fldhg/ch8NlHAEBAQEBBwEBEQEEBAEBgX6CG?=
 =?us-ascii?q?4FGIwQLKrBvCQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwEBAQIBAgMCA?=
 =?us-ascii?q?gEBAhABAQEKCQsIKYUSCDCCOykBg08CAQMSFT4UED8SVxkigwCCfKEtPQIjA?=
 =?us-ascii?q?UyBBIp+M4kOgUiBNoc/hFkagUE/gRGDUYo3BI1CoUeCPpYGDBuOUYt9qUSBa?=
 =?us-ascii?q?YF7MxoIGxWDJ1ARFIhKhFQOCY4kQDOPJQEB?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="10226264"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDE2YdaZ8AzYtCtwwdsNA5xxjXdXIMx/JvrDVC?=
 =?us-ascii?q?Njrl/UNufz5vdCKEdXX1oNw+Qma73sIugTMCGIoAiPfg9we63dmdc6de?=
 =?us-ascii?q?mBdcR9PVFOqycDd9UBb3wGmYxaE3Pz/xz53u1o6bfCx6gMhtESYjyXDQ?=
 =?us-ascii?q?vTOnl2/lOSufOl0ewbPzxjqQ=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa5.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:07 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id F21C8ADEB;
	Wed, 18 Dec 2019 07:49:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 6/9] xen/sched: replace null scheduler percpu-variable with pdata hook
Date: Wed, 18 Dec 2019 08:48:56 +0100
Message-ID: <20191218074859.21665-7-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: 2ae91f91-cd54-4250-02dd-08d7838ec3da
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

Instead of having an own percpu-variable for private data per cpu the
generic scheduler interface for that purpose should be used.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/sched_null.c | 89 +++++++++++++++++++++++++++++--------------
 1 file changed, 60 insertions(+), 29 deletions(-)

diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c
index 5a23a7e7dc..11aab25743 100644
--- a/xen/common/sched/sched_null.c
+++ b/xen/common/sched/sched_null.c
@@ -89,7 +89,6 @@ struct null_private {
 struct null_pcpu {
     struct sched_unit *unit;
 };
-DEFINE_PER_CPU(struct null_pcpu, npc);
 
 /*
  * Schedule unit
@@ -159,32 +158,48 @@ static void null_deinit(struct scheduler *ops)
     ops->sched_data = NULL;
 }
 
-static void init_pdata(struct null_private *prv, unsigned int cpu)
+static void init_pdata(struct null_private *prv, struct null_pcpu *npc,
+                       unsigned int cpu)
 {
     /* Mark the pCPU as free, and with no unit assigned */
     cpumask_set_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
 }
 
 static void null_init_pdata(const struct scheduler *ops, void *pdata, int cpu)
 {
     struct null_private *prv = null_priv(ops);
 
-    /* alloc_pdata is not implemented, so we want this to be NULL. */
-    ASSERT(!pdata);
+    ASSERT(pdata);
 
-    init_pdata(prv, cpu);
+    init_pdata(prv, pdata, cpu);
 }
 
 static void null_deinit_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
     struct null_private *prv = null_priv(ops);
+    struct null_pcpu *npc = pcpu;
 
-    /* alloc_pdata not implemented, so this must have stayed NULL */
-    ASSERT(!pcpu);
+    ASSERT(npc);
 
     cpumask_clear_cpu(cpu, &prv->cpus_free);
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
+}
+
+static void *null_alloc_pdata(const struct scheduler *ops, int cpu)
+{
+    struct null_pcpu *npc;
+
+    npc = xzalloc(struct null_pcpu);
+    if ( npc == NULL )
+        return ERR_PTR(-ENOMEM);
+
+    return npc;
+}
+
+static void null_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
+{
+    xfree(pcpu);
 }
 
 static void *null_alloc_udata(const struct scheduler *ops,
@@ -268,6 +283,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
     cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -286,8 +302,7 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
          * don't, so we get to keep in the scratch cpumask what we have just
          * put in it.)
          */
-        if ( likely((per_cpu(npc, cpu).unit == NULL ||
-                     per_cpu(npc, cpu).unit == unit)
+        if ( likely((npc->unit == NULL || npc->unit == unit)
                     && cpumask_test_cpu(cpu, cpumask_scratch_cpu(cpu))) )
         {
             new_cpu = cpu;
@@ -336,9 +351,11 @@ pick_res(struct null_private *prv, const struct sched_unit *unit)
 static void unit_assign(struct null_private *prv, struct sched_unit *unit,
                         unsigned int cpu)
 {
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+
     ASSERT(is_unit_online(unit));
 
-    per_cpu(npc, cpu).unit = unit;
+    npc->unit = unit;
     sched_set_res(unit, get_sched_res(cpu));
     cpumask_clear_cpu(cpu, &prv->cpus_free);
 
@@ -363,12 +380,13 @@ static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
     struct null_unit *wvc;
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(list_empty(&null_unit(unit)->waitq_elem));
-    ASSERT(per_cpu(npc, cpu).unit == unit);
+    ASSERT(npc->unit == unit);
     ASSERT(!cpumask_test_cpu(cpu, &prv->cpus_free));
 
-    per_cpu(npc, cpu).unit = NULL;
+    npc->unit = NULL;
     cpumask_set_cpu(cpu, &prv->cpus_free);
 
     dprintk(XENLOG_G_INFO, "%d <-- NULL (%pdv%d)\n", cpu, unit->domain,
@@ -436,7 +454,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
      */
     ASSERT(!local_irq_is_enabled());
 
-    init_pdata(prv, cpu);
+    init_pdata(prv, pdata, cpu);
 
     return &sr->_lock;
 }
@@ -446,6 +464,7 @@ static void null_unit_insert(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
     unsigned int cpu;
     spinlock_t *lock;
 
@@ -462,6 +481,7 @@ static void null_unit_insert(const struct scheduler *ops,
  retry:
     sched_set_res(unit, pick_res(prv, unit));
     cpu = sched_unit_master(unit);
+    npc = get_sched_res(cpu)->sched_priv;
 
     spin_unlock(lock);
 
@@ -471,7 +491,7 @@ static void null_unit_insert(const struct scheduler *ops,
                 cpupool_domain_master_cpumask(unit->domain));
 
     /* If the pCPU is free, we assign unit to it */
-    if ( likely(per_cpu(npc, cpu).unit == NULL) )
+    if ( likely(npc->unit == NULL) )
     {
         /*
          * Insert is followed by vcpu_wake(), so there's no need to poke
@@ -519,7 +539,10 @@ static void null_unit_remove(const struct scheduler *ops,
     /* If offline, the unit shouldn't be assigned, nor in the waitqueue */
     if ( unlikely(!is_unit_online(unit)) )
     {
-        ASSERT(per_cpu(npc, sched_unit_master(unit)).unit != unit);
+        struct null_pcpu *npc;
+
+        npc = unit->res->sched_priv;
+        ASSERT(npc->unit != unit);
         ASSERT(list_empty(&nvc->waitq_elem));
         goto out;
     }
@@ -548,6 +571,7 @@ static void null_unit_wake(const struct scheduler *ops,
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
     unsigned int cpu = sched_unit_master(unit);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(!is_idle_unit(unit));
 
@@ -569,7 +593,7 @@ static void null_unit_wake(const struct scheduler *ops,
     else
         SCHED_STAT_CRANK(unit_wake_not_runnable);
 
-    if ( likely(per_cpu(npc, cpu).unit == unit) )
+    if ( likely(npc->unit == unit) )
     {
         cpu_raise_softirq(cpu, SCHEDULE_SOFTIRQ);
         return;
@@ -581,7 +605,7 @@ static void null_unit_wake(const struct scheduler *ops,
      * and its previous resource is free (and affinities match), we can just
      * assign the unit to it (we own the proper lock already) and be done.
      */
-    if ( per_cpu(npc, cpu).unit == NULL &&
+    if ( npc->unit == NULL &&
          unit_check_affinity(unit, cpu, BALANCE_HARD_AFFINITY) )
     {
         if ( !has_soft_affinity(unit) ||
@@ -622,6 +646,7 @@ static void null_unit_sleep(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     unsigned int cpu = sched_unit_master(unit);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     bool tickled = false;
 
     ASSERT(!is_idle_unit(unit));
@@ -640,7 +665,7 @@ static void null_unit_sleep(const struct scheduler *ops,
             list_del_init(&nvc->waitq_elem);
             spin_unlock(&prv->waitq_lock);
         }
-        else if ( per_cpu(npc, cpu).unit == unit )
+        else if ( npc->unit == unit )
             tickled = unit_deassign(prv, unit);
     }
 
@@ -663,6 +688,7 @@ static void null_unit_migrate(const struct scheduler *ops,
 {
     struct null_private *prv = null_priv(ops);
     struct null_unit *nvc = null_unit(unit);
+    struct null_pcpu *npc;
 
     ASSERT(!is_idle_unit(unit));
 
@@ -686,7 +712,8 @@ static void null_unit_migrate(const struct scheduler *ops,
      * If unit is assigned to a pCPU, then such pCPU becomes free, and we
      * should look in the waitqueue if anyone else can be assigned to it.
      */
-    if ( likely(per_cpu(npc, sched_unit_master(unit)).unit == unit) )
+    npc = unit->res->sched_priv;
+    if ( likely(npc->unit == unit) )
     {
         unit_deassign(prv, unit);
         SCHED_STAT_CRANK(migrate_running);
@@ -720,7 +747,8 @@ static void null_unit_migrate(const struct scheduler *ops,
      *
      * In latter, all we can do is to park unit in the waitqueue.
      */
-    if ( per_cpu(npc, new_cpu).unit == NULL &&
+    npc = get_sched_res(new_cpu)->sched_priv;
+    if ( npc->unit == NULL &&
          unit_check_affinity(unit, new_cpu, BALANCE_HARD_AFFINITY) )
     {
         /* unit might have been in the waitqueue, so remove it */
@@ -788,6 +816,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
     unsigned int bs;
     const unsigned int cur_cpu = smp_processor_id();
     const unsigned int sched_cpu = sched_get_resource_cpu(cur_cpu);
+    struct null_pcpu *npc = get_sched_res(sched_cpu)->sched_priv;
     struct null_private *prv = null_priv(ops);
     struct null_unit *wvc;
 
@@ -802,14 +831,14 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         } d;
         d.cpu = cur_cpu;
         d.tasklet = tasklet_work_scheduled;
-        if ( per_cpu(npc, sched_cpu).unit == NULL )
+        if ( npc->unit == NULL )
         {
             d.unit = d.dom = -1;
         }
         else
         {
-            d.unit = per_cpu(npc, sched_cpu).unit->unit_id;
-            d.dom = per_cpu(npc, sched_cpu).unit->domain->domain_id;
+            d.unit = npc->unit->unit_id;
+            d.dom = npc->unit->domain->domain_id;
         }
         __trace_var(TRC_SNULL_SCHEDULE, 1, sizeof(d), &d);
     }
@@ -820,7 +849,7 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
         prev->next_task = sched_idle_unit(sched_cpu);
     }
     else
-        prev->next_task = per_cpu(npc, sched_cpu).unit;
+        prev->next_task = npc->unit;
     prev->next_time = -1;
 
     /*
@@ -921,6 +950,7 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
+    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
     struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
@@ -930,9 +960,8 @@ static void null_dump_pcpu(const struct scheduler *ops, int cpu)
     printk("CPU[%02d] sibling={%*pbl}, core={%*pbl}",
            cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)),
            CPUMASK_PR(per_cpu(cpu_core_mask, cpu)));
-    if ( per_cpu(npc, cpu).unit != NULL )
-        printk(", unit=%pdv%d", per_cpu(npc, cpu).unit->domain,
-               per_cpu(npc, cpu).unit->unit_id);
+    if ( npc->unit != NULL )
+        printk(", unit=%pdv%d", npc->unit->domain, npc->unit->unit_id);
     printk("\n");
 
     /* current unit (nothing to say if that's the idle unit) */
@@ -1010,6 +1039,8 @@ static const struct scheduler sched_null_def = {
 
     .init           = null_init,
     .deinit         = null_deinit,
+    .alloc_pdata    = null_alloc_pdata,
+    .free_pdata     = null_free_pdata,
     .init_pdata     = null_init_pdata,
     .switch_sched   = null_switch_sched,
     .deinit_pdata   = null_deinit_pdata,
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:13 +0100
Received: from LASPEX02MSOL02.citrite.net (10.160.21.46) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:09 -0500
Received: from esa5.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL02.citrite.net (10.160.21.46) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 23:49:09 -0800
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: k8i+TZ/MS9bV0I7gT00FwjkYvHsnlyzD9TCaP1grXL+t+3sWpFQBkTvx4dESbzPBNUS+a0lbkJ
 Gt1W0T7WhAvod+aO9wLzTFwJPhtbVJubHlL1Raal/FBK1womcG04zMh4aeVdImhrqeLdhOaYmQ
 mP8n6BB9gmW1iCbvu8aY2zcfTTZ+W1M/7CI6F7nPLHRzztRMrBajNX9XmLa8YV7jdA0ke0Q9Jw
 TwL4Cr2ec8zov4iMy416tkfr8hRjf2UdIwRfemi5EMHbZQIB7XzIRel05vepWPeOt19lN7VUwY
 UnCJjnVZMsbxG9Z6J1lKSHBT
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 10226265
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 10226265
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3AskcKhh+SWp4oJv9uRHKM819IXTAuvvDOBiVQ1K?=
 =?us-ascii?q?B30e4cTK2v8tzYMVDF4r011RmVBN6dsaoUwLOP6OjJYi8p2d65qncMcZhBBV?=
 =?us-ascii?q?cuqP49uEgeOvODElDxN/XwbiY3T4xoXV5h+GynYwAOQJ6tL1LdrWev4jEMBx?=
 =?us-ascii?q?7xKRR6JvjvGo7Vks+7y/2+94fcbglVijexe61+IAm1oAneq8UanIVvIbstxx?=
 =?us-ascii?q?XUpXdFZ/5Yzn5yK1KJmBb86Maw/Jp9/ClVpvks6c1OX7jkcqohVbBXAygoPG?=
 =?us-ascii?q?4z5M3wqBnMVhCP6WcGUmUXiRVHHQ7I5wznU5jrsyv6su192DSGPcDzULs5Vy?=
 =?us-ascii?q?iu47ttRRT1kyoMKSI3/3/LhcxxlKJboQyupxpjw47PfYqZMONycr7Bcd8GQG?=
 =?us-ascii?q?ZMWNtaWS5cDYOmd4YBD/QPM/tWoYf+oFUBrxW+CBKwCO/z0DJEmmX70bEm3+?=
 =?us-ascii?q?knDArI3BYgH9ULsHnMqNv1KbkdUfq6zKbWyzXIcvJY2S366IjTaRAqvPaBXb?=
 =?us-ascii?q?B/ccrLzkkvDBjFg06LqYz4JDOayP4BvHSc7+plTO+ijXMspQJpojW328shjo?=
 =?us-ascii?q?nEipgIxl3K9Sh12pg5KcOkREJhfNKpEINcuzyeOoZ2WM8uXm9ltSYgxrEYp5?=
 =?us-ascii?q?K3YDIGxIonyhPQbfGMbpKG7Qj5VOmLJDd1nHJld6y7hxa16UWgz+L9Wteu3F?=
 =?us-ascii?q?ZRsypFicfDumoQ2BPN8sSHS/198Vm92TuXygze7uFJLVopmafVK5Mt2KA8mo?=
 =?us-ascii?q?QPvUjZAyP7mF36jKqMeUUl/uio5f7nYrLjppKENY90hRv+MqM1msykGuk4KR?=
 =?us-ascii?q?UOX3OF9uimyLLj+kj5TK1QjvIqiqnZrIzaJcMDq662HQBV1Zwj6xmhADu8zd?=
 =?us-ascii?q?sYgH8HLFZfdxKflIXmJUzOLOrkAve+n1SsijZrxv/cMrL9BZXNK2DJkK39cr?=
 =?us-ascii?q?Zl905c1A0zwMhe551ODLENOvDzVVXtu9zbFR85NAq0zv35B9VgzI8RRWOPAr?=
 =?us-ascii?q?ODMKPTvl6E/P4gI+6JZNxdhDGoO/UjovLjk3I9sVscZrWym4sabmiiGfZrKF?=
 =?us-ascii?q?nfZmDj0fkbFmJfnBczVuPnjhWtSzlIfD7mXa0m4SogIJm7FoqFTYeo1u/SlB?=
 =?us-ascii?q?ynF4FbMzgVQmuHFm3lIsDdA69WMnC7P9NhnzoYVLOoV44m01SUuRTnz6Z8d7?=
 =?us-ascii?q?qEqCADvM+l1NMuvLKL0E9isz1sD8GNlWqKSjI8kmAJQmow26Zy6Qx4x0yY2K?=
 =?us-ascii?q?d1y/pfCZRI5vxPXwt7fZ7RxuB3EZbzDwTGeNraAE2+TICABjc8Bsk038dIe1?=
 =?us-ascii?q?x0TtetlBfYxAKxHqQY0beMAc986brSikD4PN010HPazO8khlgiTNFIMDikib?=
 =?us-ascii?q?R47BP7HJPSngOSkKP5Pb8E0nvr82GOhXGLoFkeUAN0VvDdWmsDY0LNsdnjzk?=
 =?us-ascii?q?bSFfm1FK8qdAdMkJTQEKZBZ9z3gFkDf8/NYoSPMUS2nWr4RRuTz+7KbIGxID?=
 =?us-ascii?q?pFmXuNTkkckwUDu32BMFp2ACDpuG/YADF0cDCnK0rx7elzrm+6RU4o3kmLaU?=
 =?us-ascii?q?Nmzb+85h8Sg7SVVfoS2rsOvCpppS9zGR6x2NffCtzIoAQEHu0UY9wn50xc/X?=
 =?us-ascii?q?nErAE7NZulbuhjilMYbwVrrhb2zRwkQo5EkMUssDYr1F8reP/eiQgcMWnChd?=
 =?us-ascii?q?apZ+6ySCG65h2kZq/I10uL3c2fp+EP469j9A2m4lHvF1Ii9mUh2N5QgB7+rt?=
 =?us-ascii?q?3HChQfVZXpXwM57R9/8vvYbTM6/Jj8zmB3PO+/tTqIiJo5QfAozBqtZYIVMq?=
 =?us-ascii?q?yeExTpO9YHHMXoI+sv0QvMDFpMLKVZ86g6ONmjfv2N1fuwPepuqzmhiHxO/I?=
 =?us-ascii?q?F31k/VrXhGR+XF3ogIz7Sj5iXXDG6utF66qYi3lJtNOHcSFTHklnChWt4XZ7?=
 =?us-ascii?q?VyeJZNAmCrcYW7wdB3hpilXHA9lhbrBVwc39SyUQGPdFG71gpVnUgaun2ony?=
 =?us-ascii?q?Kkwic8ymB06PPEgWqUmbykLUdPM3UDXGR4iFbwPYW46rJSFFOlaQQkjlrt5E?=
 =?us-ascii?q?r3wbRauLUqKmDSRUlSeC2lZ2pmU6a2qv+De5sWuc5u6H4KFr7mORbDF+2YwV?=
 =?us-ascii?q?NSyS7oEmpAySpueiqj59P5lEch1zrYciY1rWLZfNE2zhDat7m+DbZc2CQLQC?=
 =?us-ascii?q?5ghHzZHF+5apOr8s+TjIzrqf2lWiSqUZgZImH7iJiNsie2/zggAxKlmOuoss?=
 =?us-ascii?q?b6CgV82ij+nYoPN22AvFP3ZY/l0L6/OORscxxzBVPy3MF9H5l3joo6gJxDkW?=
 =?us-ascii?q?hfnJie+mAL1HviKdgOk7yrd2IDHHRYprydqBigwkBoKWiFgp70RmnIiNU0fM?=
 =?us-ascii?q?G0OysXwn5vtpgRTvbSveICxWwv+BK5tV6DOKEi2G5Hkb13tjhCxLtQ3Whlhi?=
 =?us-ascii?q?SFXuJLRhEeYHOqz1LQqIng5KRPODT2Kuf2ihc4xZb5S+jc6gBEBCSgJsxkRH?=
 =?us-ascii?q?8uqJwmYRSUlyevj+OsMNjIMYBK6kLSyVGYybASccx5l+JW13M7aCSh4Tt9jb?=
 =?us-ascii?q?Z81EIm3Inm7tHZcSM3oPP/WEcDcGSrO6ZxsnnslfoMx5jKmdH+WM0nQnJSAt?=
 =?us-ascii?q?PpVa76SWxP8623b0DXTGx68y/AUbvHQV3Gtxwg9SqUVcrzbjfOfCJ8r50qRQ?=
 =?us-ascii?q?HBdhUO20ZNBGR8xMNlUFjtnpepcV8ltGlAuhih8kUKkLgwcUG4CDy6xk/gay?=
 =?us-ascii?q?9oGsHOcFwPt18EvhaFd5TZtLk7HjkErMTw8UrUcTHdPV4QSzlOABfhZRirP6?=
 =?us-ascii?q?Hyt4CRrq7CWrf4daCWJ+3J87cWVu/Ul8v+jM0/o3DVaZ3JZj44VLU6whYRBi?=
 =?us-ascii?q?w/QZmD3W9VFWpGzEevJ4aaohy45yF6/Nun/q2tXgXx6I+LTbBVNJ1u/RuynK?=
 =?us-ascii?q?uOZfWIinw/LzFG25cIg3TPzd19lBYTjSpqajWgQ68Yu3WLSqXOl6tTSRUcbm?=
 =?us-ascii?q?tyMsBM8q4xjBJVN4jDkNTp27Vkj/kzTVBYSVjmncLvbssPRgP1fBbOAELBbe?=
 =?us-ascii?q?7UHzDAzsDpbK/5coV+170L5Tu3vzvTU0L4N2nFlzKyDE/3dL8cyiCDPBlO/o?=
 =?us-ascii?q?q6d0QlD2+rV9/gZhChVb0/xTQr3b05gG/LPm8AIHB9dU1KtLiZ8SJfhL12BW?=
 =?us-ascii?q?VA6nNvKeTMlTye6qHULZMfsP0jBSoR9aoS+HMh175c9z1JXtRwiHGUtcN1rh?=
 =?us-ascii?q?eqn6jHyzZqVgZPticehI+PuhYHW+2R/Z1BVHDYuRMVuDzBVlJT/IYjUIe3/f?=
 =?us-ascii?q?kNmZDVman+KSlP6YfZ5sJBQcjfc5nYaD9/YVzoAD7RHE0OSjv4UAOXz0Fbjv?=
 =?us-ascii?q?yW8WWY65YgrZ250pgBULhAT3QuC+gXTE9iGZZRRfU/FiNhirOdgMMSsDCmqw?=
 =?us-ascii?q?LNQcxBop3dfveCW7P0NS2Uy7VJLUhto/uwPcEYMYv13FZnY19xkdHRGkbeat?=
 =?us-ascii?q?tKpzVocg4+pEgUoSpOC1Yr0kegUTuDpXoaFPq6hBkz01Msevk2+XHn5FJlfw?=
 =?us-ascii?q?OW9ht1q1E4nJDeuR7UaCT4df/iRpxNBmz/sE1jasqmETYwVhW7mAlfDBmBR7?=
 =?us-ascii?q?9ViOI7J2V70knHpIBCX/JRH/VJ?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0HoAQCL2fldhg/ch8NlHQEBAQkBEQU?=
 =?us-ascii?q?FAYF+ghuBRiMECyqTL4MSmi4JBAEBCy8BAQGEPwKCGhwHAQQ0EwIDAQwBAQE?=
 =?us-ascii?q?DAQEBAgECAwICAQECEAEBAQoJCwgphUqCOykBg08CAQMSFVIQPxJXGSKDAIJ?=
 =?us-ascii?q?8oS09AiMBTIEEin4ziQ6BSIE2hz+EWRqBQT+BEYJec4QjhhQEj3qfD4I+lgY?=
 =?us-ascii?q?MG45Ri32pRIFpgXszGggbFTuCbFARFI0eDgmOJEAzjGWCQAEB?=
X-IPAS-Result: =?us-ascii?q?A0HoAQCL2fldhg/ch8NlHQEBAQkBEQUFAYF+ghuBRiMEC?=
 =?us-ascii?q?yqTL4MSmi4JBAEBCy8BAQGEPwKCGhwHAQQ0EwIDAQwBAQEDAQEBAgECAwICA?=
 =?us-ascii?q?QECEAEBAQoJCwgphUqCOykBg08CAQMSFVIQPxJXGSKDAIJ8oS09AiMBTIEEi?=
 =?us-ascii?q?n4ziQ6BSIE2hz+EWRqBQT+BEYJec4QjhhQEj3qfD4I+lgYMG45Ri32pRIFpg?=
 =?us-ascii?q?XszGggbFTuCbFARFI0eDgmOJEAzjGWCQAEB?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="10226265"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDGv0IRFckNIwcZFQml/6Va198tKPillF4KNc/?=
 =?us-ascii?q?vvWuP1cI6erK6QrdT6ItodiAFqwgg7qSD35zeEYaHzpaYJVoRwzorrNQ?=
 =?us-ascii?q?ZJ23LHFx6ByiZj+CKOfHnIWjLlTRd8DDxBpL/ssP4O+Fzgg7O1bez90i?=
 =?us-ascii?q?Y79xO0Lgw3g5+eAF0naJOtSw=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa5.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:07 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 5BFCDAE19;
	Wed, 18 Dec 2019 07:49:04 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Josh Whitehead
	<josh.whitehead@dornerworks.com>, Stewart Hildebrand
	<stewart.hildebrand@dornerworks.com>, Meng Xu <mengxu@cis.upenn.edu>
Subject: [PATCH 7/9] xen/sched: switch scheduling to bool where appropriate
Date: Wed, 18 Dec 2019 08:48:57 +0100
Message-ID: <20191218074859.21665-8-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: 13543f40-50d9-4061-bd5a-08d7838ec45b
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

Scheduling code has several places using int or bool_t instead of bool.
Switch those.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c        | 10 +++++-----
 xen/common/sched/sched-if.h       |  2 +-
 xen/common/sched/sched_arinc653.c |  8 ++++----
 xen/common/sched/sched_credit.c   | 12 ++++++------
 xen/common/sched/sched_rt.c       | 14 +++++++-------
 xen/common/sched/schedule.c       | 14 +++++++-------
 xen/include/xen/sched.h           |  6 +++---
 7 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d5b64d0a6a..14212bb4ae 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -154,7 +154,7 @@ static struct cpupool *alloc_cpupool_struct(void)
  * the searched id is returned
  * returns NULL if not found.
  */
-static struct cpupool *__cpupool_find_by_id(int id, int exact)
+static struct cpupool *__cpupool_find_by_id(int id, bool exact)
 {
     struct cpupool **q;
 
@@ -169,10 +169,10 @@ static struct cpupool *__cpupool_find_by_id(int id, int exact)
 
 static struct cpupool *cpupool_find_by_id(int poolid)
 {
-    return __cpupool_find_by_id(poolid, 1);
+    return __cpupool_find_by_id(poolid, true);
 }
 
-static struct cpupool *__cpupool_get_by_id(int poolid, int exact)
+static struct cpupool *__cpupool_get_by_id(int poolid, bool exact)
 {
     struct cpupool *c;
     spin_lock(&cpupool_lock);
@@ -185,12 +185,12 @@ static struct cpupool *__cpupool_get_by_id(int poolid, int exact)
 
 struct cpupool *cpupool_get_by_id(int poolid)
 {
-    return __cpupool_get_by_id(poolid, 1);
+    return __cpupool_get_by_id(poolid, true);
 }
 
 static struct cpupool *cpupool_get_next_by_id(int poolid)
 {
-    return __cpupool_get_by_id(poolid, 0);
+    return __cpupool_get_by_id(poolid, false);
 }
 
 void cpupool_put(struct cpupool *pool)
diff --git a/xen/common/sched/sched-if.h b/xen/common/sched/sched-if.h
index edce354dc7..9d0db75cbb 100644
--- a/xen/common/sched/sched-if.h
+++ b/xen/common/sched/sched-if.h
@@ -589,7 +589,7 @@ unsigned int cpupool_get_granularity(const struct cpupool *c);
  * * The hard affinity is not a subset of soft affinity
  * * There is an overlap between the soft and hard affinity masks
  */
-static inline int has_soft_affinity(const struct sched_unit *unit)
+static inline bool has_soft_affinity(const struct sched_unit *unit)
 {
     return unit->soft_aff_effective &&
            !cpumask_subset(cpupool_domain_master_cpumask(unit->domain),
diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c
index fe15754900..dc45378952 100644
--- a/xen/common/sched/sched_arinc653.c
+++ b/xen/common/sched/sched_arinc653.c
@@ -75,7 +75,7 @@ typedef struct arinc653_unit_s
      * arinc653_unit_t pointer. */
     struct sched_unit * unit;
     /* awake holds whether the UNIT has been woken with vcpu_wake() */
-    bool_t              awake;
+    bool                awake;
     /* list holds the linked list information for the list this UNIT
      * is stored in */
     struct list_head    list;
@@ -427,7 +427,7 @@ a653sched_alloc_udata(const struct scheduler *ops, struct sched_unit *unit,
      * will mark the UNIT awake.
      */
     svc->unit = unit;
-    svc->awake = 0;
+    svc->awake = false;
     if ( !is_idle_unit(unit) )
         list_add(&svc->list, &SCHED_PRIV(ops)->unit_list);
     update_schedule_units(ops);
@@ -473,7 +473,7 @@ static void
 a653sched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     if ( AUNIT(unit) != NULL )
-        AUNIT(unit)->awake = 0;
+        AUNIT(unit)->awake = false;
 
     /*
      * If the UNIT being put to sleep is the same one that is currently
@@ -493,7 +493,7 @@ static void
 a653sched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     if ( AUNIT(unit) != NULL )
-        AUNIT(unit)->awake = 1;
+        AUNIT(unit)->awake = true;
 
     cpu_raise_softirq(sched_unit_master(unit), SCHEDULE_SOFTIRQ);
 }
diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c
index 8b1de9b033..05930261d9 100644
--- a/xen/common/sched/sched_credit.c
+++ b/xen/common/sched/sched_credit.c
@@ -245,7 +245,7 @@ __runq_elem(struct list_head *elem)
 }
 
 /* Is the first element of cpu's runq (if any) cpu's idle unit? */
-static inline bool_t is_runq_idle(unsigned int cpu)
+static inline bool is_runq_idle(unsigned int cpu)
 {
     /*
      * We're peeking at cpu's runq, we must hold the proper lock.
@@ -344,7 +344,7 @@ static void burn_credits(struct csched_unit *svc, s_time_t now)
     svc->start_time += (credits * MILLISECS(1)) / CSCHED_CREDITS_PER_MSEC;
 }
 
-static bool_t __read_mostly opt_tickle_one_idle = 1;
+static bool __read_mostly opt_tickle_one_idle = true;
 boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
 DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
@@ -719,7 +719,7 @@ __csched_unit_is_migrateable(const struct csched_private *prv,
 
 static int
 _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit,
-                 bool_t commit)
+                 bool commit)
 {
     int cpu = sched_unit_master(unit);
     /* We must always use cpu's scratch space */
@@ -871,7 +871,7 @@ csched_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
      * get boosted, which we don't deserve as we are "only" migrating.
      */
     set_bit(CSCHED_FLAG_UNIT_MIGRATING, &svc->flags);
-    return get_sched_res(_csched_cpu_pick(ops, unit, 1));
+    return get_sched_res(_csched_cpu_pick(ops, unit, true));
 }
 
 static inline void
@@ -975,7 +975,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
          * migrating it to run elsewhere (see multi-core and multi-thread
          * support in csched_res_pick()).
          */
-        new_cpu = _csched_cpu_pick(ops, currunit, 0);
+        new_cpu = _csched_cpu_pick(ops, currunit, false);
 
         unit_schedule_unlock_irqrestore(lock, flags, currunit);
 
@@ -1108,7 +1108,7 @@ static void
 csched_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
-    bool_t migrating;
+    bool migrating;
 
     BUG_ON( is_idle_unit(unit) );
 
diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 264a753116..8646d77343 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -490,10 +490,10 @@ rt_update_deadline(s_time_t now, struct rt_unit *svc)
 static inline bool
 deadline_queue_remove(struct list_head *queue, struct list_head *elem)
 {
-    int pos = 0;
+    bool pos = false;
 
     if ( queue->next != elem )
-        pos = 1;
+        pos = true;
 
     list_del_init(elem);
     return !pos;
@@ -505,14 +505,14 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
                       struct list_head *queue)
 {
     struct list_head *iter;
-    int pos = 0;
+    bool pos = false;
 
     list_for_each ( iter, queue )
     {
         struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
-        pos++;
+        pos = true;
     }
     list_add_tail(elem, iter);
     return !pos;
@@ -605,7 +605,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
     struct rt_unit *rearm_svc = svc;
-    bool_t rearm = 0;
+    bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
 
@@ -622,7 +622,7 @@ replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
     {
         deadline_replq_insert(svc, &svc->replq_elem, replq);
         rearm_svc = replq_elem(replq->next);
-        rearm = 1;
+        rearm = true;
     }
     else
         rearm = deadline_replq_insert(svc, &svc->replq_elem, replq);
@@ -1279,7 +1279,7 @@ rt_unit_wake(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct rt_unit * const svc = rt_unit(unit);
     s_time_t now;
-    bool_t missed;
+    bool missed;
 
     BUG_ON( is_idle_unit(unit) );
 
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index db8ce146ca..3307e88b6c 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -53,7 +53,7 @@ string_param("sched", opt_sched);
  * scheduler will give preferrence to partially idle package compared to
  * the full idle package, when picking pCPU to schedule vCPU.
  */
-bool_t sched_smt_power_savings = 0;
+bool sched_smt_power_savings;
 boolean_param("sched_smt_power_savings", sched_smt_power_savings);
 
 /* Default scheduling rate limit: 1ms
@@ -574,7 +574,7 @@ int sched_init_vcpu(struct vcpu *v)
     {
         get_sched_res(v->processor)->curr = unit;
         get_sched_res(v->processor)->sched_unit_idle = unit;
-        v->is_running = 1;
+        v->is_running = true;
         unit->is_running = true;
         unit->state_entry_time = NOW();
     }
@@ -983,7 +983,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
     unsigned long flags;
     unsigned int old_cpu, new_cpu;
     spinlock_t *old_lock, *new_lock;
-    bool_t pick_called = 0;
+    bool pick_called = false;
     struct vcpu *v;
 
     /*
@@ -1029,7 +1029,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
             if ( (new_lock == get_sched_res(new_cpu)->schedule_lock) &&
                  cpumask_test_cpu(new_cpu, unit->domain->cpupool->cpu_valid) )
                 break;
-            pick_called = 1;
+            pick_called = true;
         }
         else
         {
@@ -1037,7 +1037,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
              * We do not hold the scheduler lock appropriate for this vCPU.
              * Thus we cannot select a new CPU on this iteration. Try again.
              */
-            pick_called = 0;
+            pick_called = false;
         }
 
         sched_spin_unlock_double(old_lock, new_lock, flags);
@@ -2148,7 +2148,7 @@ static void sched_switch_units(struct sched_resource *sr,
             vcpu_runstate_change(vnext, vnext->new_state, now);
         }
 
-        vnext->is_running = 1;
+        vnext->is_running = true;
 
         if ( is_idle_vcpu(vnext) )
             vnext->sched_unit = next;
@@ -2219,7 +2219,7 @@ static void vcpu_context_saved(struct vcpu *vprev, struct vcpu *vnext)
     smp_wmb();
 
     if ( vprev != vnext )
-        vprev->is_running = 0;
+        vprev->is_running = false;
 }
 
 static void unit_context_saved(struct sched_resource *sr)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 55335d6ab3..b2f48a3512 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -557,18 +557,18 @@ static inline bool is_system_domain(const struct domain *d)
  * Use this when you don't have an existing reference to @d. It returns
  * FALSE if @d is being destroyed.
  */
-static always_inline int get_domain(struct domain *d)
+static always_inline bool get_domain(struct domain *d)
 {
     int old, seen = atomic_read(&d->refcnt);
     do
     {
         old = seen;
         if ( unlikely(old & DOMAIN_DESTROYED) )
-            return 0;
+            return false;
         seen = atomic_cmpxchg(&d->refcnt, old, old + 1);
     }
     while ( unlikely(seen != old) );
-    return 1;
+    return true;
 }
 
 /*
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:11 +0100
Received: from LASPEX02MSOL02.citrite.net (10.160.21.46) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:09 -0500
Received: from esa5.hc3370-68.iphmx.com (10.160.38.12) by
 LASPEX02MSOL02.citrite.net (10.160.21.46) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Tue, 17 Dec 2019 23:49:09 -0800
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa5.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: 1SjnUNu8WBr0rIzHzh5XedMqCVK7EWg5omeoga46dipLo8puJy47KkEAD1PTTpTqmzqq/WEsRq
 N/6KekuZtVVIf2Ij1qNI0rIHjOUCOTaw8E4ZAZ+kIDcql8QIxPKKcjOBUApJJZiztRyWfXtqtd
 F/EcG3o5o7L5iQSVCb0xbL0ETFwFKYXDTW11Ob6s57fpZv5cmOT0Zt8A4yNXpyWqm3u72K9y86
 OxaTY65qYhOH1MY0SNFB16c0ZbdtZTc3OV+9TCWbWYMvsWlW340oi4sIQpkj3wIVHa+FXGXT8j
 8iiRUxFUmwV+bOIL+JIhLEIF
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 10226263
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 10226263
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3ArYvC6B/HYB0wW/9uRHKM819IXTAuvvDOBiVQ1K?=
 =?us-ascii?q?B30e4cTK2v8tzYMVDF4r011RmVBN6dsaoUwLOP6OjJYi8p2d65qncMcZhBBV?=
 =?us-ascii?q?cuqP49uEgeOvODElDxN/XwbiY3T4xoXV5h+GynYwAOQJ6tL1LdrWev4jEMBx?=
 =?us-ascii?q?7xKRR6JvjvGo7Vks+7y/2+94fcbglVijexe61+IAm1oAneq8UanIVvIbstxx?=
 =?us-ascii?q?XUpXdFZ/5Yzn5yK1KJmBb86Maw/Jp9/ClVpvks6c1OX7jkcqohVbBXAygoPG?=
 =?us-ascii?q?4z5M3wqBnMVhCP6WcGUmUXiRVHHQ7I5wznU5jrsyv6su192DSGPcDzULs5Vy?=
 =?us-ascii?q?iu47ttRRT1kyoMKSI3/3/LhcxxlKJboQyupxpjw47PfYqZMONycr7Bcd8GQG?=
 =?us-ascii?q?ZMWNtaWS5cDYOmd4YBD/QPM/tWoYf+oFUBrxW+CBKwCO/z0DJEmmX70bEm3+?=
 =?us-ascii?q?knDArI3BYgH9ULsHnMqNv1KbkdUfq6zKbWyzXIcvJY2S366IjTaRAqvPaBXb?=
 =?us-ascii?q?B/ccrLzkkvDBjFg06LqYz4JDOayP4BvHSc7+plTO+ijXMspQJpojW328shjo?=
 =?us-ascii?q?nEipgIxl3K9Sh12pg5KcOkREJhfNKpEINcuzyeOoZ2WM8uXm9ltSYgxrEYp5?=
 =?us-ascii?q?K3YDIGxIonyhPQbfGMbpKG7Qj5VOmLJDd1nHJld6y7hxa16UWgz+L9Wteu3F?=
 =?us-ascii?q?ZRsypFicfDumoQ2BPN8sSHS/198Vm92TuXygze7uFJLVopmafVK5Mt2KA8mo?=
 =?us-ascii?q?QPvUjZAyP7mF36jKqMeUUl/uio5f7nYrLjppKENY90hRv+MqM1msykGuk4KR?=
 =?us-ascii?q?UOX3OF9uimyLLj+kj5TK1QjvIqiqnZrIzaJcMDq6C4HQBV1Zwj6xmhADu83t?=
 =?us-ascii?q?oYgH8HLFZfdxKflIXmJUzOLOrkAve+n1SsijZrxv/cMrL9BZXNK2DJkK39cr?=
 =?us-ascii?q?Zl905c1A0zwMhe551ODLENOvDzVVXtu9zbFR85NAq0zv35B9VgzI8RRWOPAr?=
 =?us-ascii?q?ODMKPTvl6E/P4gI+6JZNxdhDGoO/UjovLjk3I9sVscZrWym4sabmiiGfZrKF?=
 =?us-ascii?q?nfZmDj0fkbFmJfnBczVuPnjhWtSzlIfD7mXa0m4SogIJm7FoqFTYeo1u/SlB?=
 =?us-ascii?q?ynF4FbMzgVQmuHFm3lIsDdA69WMnC7P9NhnzoYVLOoV44m01SUuRTnz6Z8d7?=
 =?us-ascii?q?qEqCADvM+l1NMuvLKL0E9isz1sD8GNlWqKSjI8kmAJQmow26Zy6Qx4x0yY2K?=
 =?us-ascii?q?d1y/pfCZRI5vxPXwt7fZ7RxuB3EZbzDwTGeNraAE2+TICABjc8Bsk038dIe1?=
 =?us-ascii?q?x0TtetlBfYxAKxHqQY0beMAc986brSikD4PN010HPazO8khlgiTNFIMDikib?=
 =?us-ascii?q?R47BP7HJPSngOSkKP5Pb8E0nvr82GOhXGLoFkeUAN0VvDdWmsDY0LNsdnjzk?=
 =?us-ascii?q?bSFfm1FK8qdAdMkJTQEKZBZ9z3gFkDf8/NYoSPMUS2nWr4RRuTz+7KbIGxID?=
 =?us-ascii?q?pFmXuNTkkckwUDu32BMFp2ACDpuG/YADF0cDCnK0rx7elzrm+6RU4o3kmLaU?=
 =?us-ascii?q?Nmzb+85h8Sg7SVVfoS2rsOvCpppS9zGR6x2NffCtzIoAQEHu0UY9wn50xc/X?=
 =?us-ascii?q?nErAE7NZulbuhjilMYbwVrrhb2zRwkQo5EkMUssDYr1F8reP/eiQgcMWnChd?=
 =?us-ascii?q?apZ+6ySCG65h2kZq/I10uL3c2fp+EP469j9A2m4lHvF1Ii9mUh2N5QgB7+rt?=
 =?us-ascii?q?3HChQfVZXpXwM57R9/8vvYbTM6/Jj8zmB3PO+/tTqIiJo5QfAozBqtZYIVMq?=
 =?us-ascii?q?yeExTpO9YHHMXoI+sv0QvMDFpMLKVZ86g6ONmjfv2N1fuwPepuqzmhiHxO/I?=
 =?us-ascii?q?F31k/VrXhGR+XF3ogIz7Sj5iXXDG6utF66qYi3lJtNOHcSFTHklnChWt4XZ7?=
 =?us-ascii?q?VyeJZNAmCrcYW7wdB3hpilXHA9lhbrBVwc39SyUQGPdFG71gpVnUgaun2ony?=
 =?us-ascii?q?Kkwic8ymB06PPEgWqUmbykLUdPM3UDXGR4iFbwPYW46rJSFFOlaQQkjlrt5E?=
 =?us-ascii?q?r3wbRauLUqKmDSRUlSeC2lZ2pmU6a2qv+De5sWuc5u6H4KFr7mORbDF+2YwV?=
 =?us-ascii?q?NSyS7oEmpAySpueiqj59P5lEch1zrYciY1rWLZfNE2zhDat7m+DbZc2CQLQC?=
 =?us-ascii?q?5ghHzZHF+5apOr8s+TjIzrqf2lWiSqUZgZImH7iJiNsie2/zggAxKlmOuoss?=
 =?us-ascii?q?b6CgV82ij+nYoPN22AvFP3ZY/l0L6/OORscxxzBVPy3MF9H5l3joo6gJxDkW?=
 =?us-ascii?q?hfnJie+mAL1HviKdgOk7yrd2IDHHRYprydqBigwkBoKWiFgp70RmnIiNU0fM?=
 =?us-ascii?q?G0OysXwn5vtpgRTvbSveICxWwv+BK5tV6DOKEi2G5Hkb13tjhCxLtQ3Whlhi?=
 =?us-ascii?q?SFXuJLRhEeYHOqz1LQqIng5KRPODT2Kuf2ihc4xZb5S+jc6gBEBCSgJsxkRH?=
 =?us-ascii?q?8uqJwmYRSUlyevj+OsMNjIMYBK6kLSyVGYybASccx5l+JW13M7aCSh4Tt9jb?=
 =?us-ascii?q?Z81EIm3Inm7tHZcSM3oPP/WEcDcGSrO6ZxsnnslfoMx5jKmdH+WM0nQnJSAt?=
 =?us-ascii?q?PpVa76SWxP8623b0DXTGx68y/AUbvHQV3Gtxwg9SqUVcrzbjfOfCJ8r50qRQ?=
 =?us-ascii?q?HBdhUO20ZNBGR8xMNlUFjtnpepcV8ltGlAuhih8kUKkLgwcUG4CDy6xk/gay?=
 =?us-ascii?q?9oGsHOcFwPt18EvhaFd5TZtLk7HjkErMTw8UrUcTHdPV4QSzlOABfhZRirP6?=
 =?us-ascii?q?Hyt4CRrq7CWrf4daCWJ+3J87cWVu/Ul8v+jM0/o3DVaZ3JZj44VLU6whYRBi?=
 =?us-ascii?q?w/QZmD3W9VFWpGzEevJ4aaohy45yF6/Nun/q2tXgXx6I+LTbBVNJ1u/RuynK?=
 =?us-ascii?q?uOZfWIinw/LzFG25cIg3TPzd19lBYTjSpqajWgQ68Yu3WLSqXOl6tTSRUcbm?=
 =?us-ascii?q?tyMsBM8q4xjBJVN4jDkNTp27Vkj/kzTVBYSVjmncLvbssPRgP1fBbOAELBbe?=
 =?us-ascii?q?7UHzDAzsDpbK/5coV+170L5Tu3vzvTU0L4N2nFlzKyDE/3dL8cyiCDPBlO/o?=
 =?us-ascii?q?q6d0QlD2+rV9/gZhChVb0/xTQr3b05gG/LPm8AIHB9dU1KtLiZ8SJfhL12BW?=
 =?us-ascii?q?VA6nNvKeTMlTye6qHULZMfsP0jBSoR9aoS+HMh175c9z1JXtRwiHGUtcN1rh?=
 =?us-ascii?q?eqn6jHyzZqVgZPticehI+PuhYHW+2R/Z1BVHDYuRMVuDzBVlJT/IYjUIe3/f?=
 =?us-ascii?q?kNmZDVman+KSlP6YfZ5sJBQcjfc5nYaD9/YVzoAD7RHE0OSjv4UAOXz0Fbjv?=
 =?us-ascii?q?yW8WWY65YgrZ250pgBULhAT3QuC+gXTE9iGZZRRfU/FiNhirOdgMMSsDCmqw?=
 =?us-ascii?q?LNQcxBop3dfveCW7P0NS2Uy7VJLUhto/uwPcEYMYv13FZnY19xkdHRGkbeat?=
 =?us-ascii?q?tKpzVocg4+pEgUoSpOC1Yr0kegUTuDpXoaFPq6hBkz01Msevk2+XHn5FJlfw?=
 =?us-ascii?q?OW9ht1q1E4nJDeuR7UaCT4df/iRpxNBmz/sE1jasqmETYwVhW7mAlfDBmBR7?=
 =?us-ascii?q?9ViOI7J2V70knHpIBCX/JRH/VJ?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0FOAgCL2fldhg/ch8NlHgELHIQZgUY?=
 =?us-ascii?q?jBAsqky+WHYcjCQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwEBAQIBAgM?=
 =?us-ascii?q?CAgEBAhABAQEKCQsIKYVKgjspAYNPAgEDEhVSED8SVxkigwCCfKEtPQIjAUy?=
 =?us-ascii?q?BBIp+M4kOgUiBNoc/hFkagUE/gRGDUYQthgoEjUKCOJ8Pgj6WBgwbjlGLfS2?=
 =?us-ascii?q?pF4FpgXszGggbFYMnUBEUjR4OCY4kQDOPJQEB?=
X-IPAS-Result: =?us-ascii?q?A0FOAgCL2fldhg/ch8NlHgELHIQZgUYjBAsqky+WHYcjC?=
 =?us-ascii?q?QQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwEBAQIBAgMCAgEBAhABAQEKC?=
 =?us-ascii?q?QsIKYVKgjspAYNPAgEDEhVSED8SVxkigwCCfKEtPQIjAUyBBIp+M4kOgUiBN?=
 =?us-ascii?q?oc/hFkagUE/gRGDUYQthgoEjUKCOJ8Pgj6WBgwbjlGLfS2pF4FpgXszGggbF?=
 =?us-ascii?q?YMnUBEUjR4OCY4kQDOPJQEB?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="10226263"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDHViP8G7mHQSrvo8ZzuqFGGdiXLRtoczeEzll?=
 =?us-ascii?q?DaVgPNfJb1tWXqPmDpofYil1NxCexxIGTn2/2CsgEu6Rpu1YBA8poiRj?=
 =?us-ascii?q?tmxCTLn5sKrKVKSE/s1mqlv//ant0cKNWOvyJL+wzB3lFNLSCl2nzPZf?=
 =?us-ascii?q?+fiBUh1Za90C0mi5qOQOWhmw=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa5.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:07 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id AC402AE52;
	Wed, 18 Dec 2019 07:49:04 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Dario Faggioli
	<dfaggioli@suse.com>
Subject: [PATCH 8/9] xen/sched: eliminate sched_tick_suspend() and sched_tick_resume()
Date: Wed, 18 Dec 2019 08:48:58 +0100
Message-ID: <20191218074859.21665-9-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: d0b607e7-f80a-4c41-faeb-08d7838ec420
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: LASPEX02MSOL02.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

sched_tick_suspend() and sched_tick_resume() only call rcu related
functions, so eliminate them and do the rcu_idle_timer*() calling in
rcu_idle_[enter|exit]().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/arm/domain.c         |  6 +++---
 xen/arch/x86/acpi/cpu_idle.c  | 15 ++++++++-------
 xen/arch/x86/cpu/mwait-idle.c |  8 ++++----
 xen/common/rcupdate.c         |  7 +++++--
 xen/common/sched/schedule.c   | 12 ------------
 xen/include/xen/rcupdate.h    |  3 ---
 xen/include/xen/sched.h       |  2 --
 7 files changed, 20 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c0a13aa0ab..aa3df3b3ba 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -46,8 +46,8 @@ static void do_idle(void)
 {
     unsigned int cpu = smp_processor_id();
 
-    sched_tick_suspend();
-    /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+    rcu_idle_enter(cpu);
+    /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
     process_pending_softirqs();
 
     local_irq_disable();
@@ -58,7 +58,7 @@ static void do_idle(void)
     }
     local_irq_enable();
 
-    sched_tick_resume();
+    rcu_idle_exit(cpu);
 }
 
 void idle_loop(void)
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 5edd1844f4..2676f0d7da 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -599,7 +599,8 @@ void update_idle_stats(struct acpi_processor_power *power,
 
 static void acpi_processor_idle(void)
 {
-    struct acpi_processor_power *power = processor_powers[smp_processor_id()];
+    unsigned int cpu = smp_processor_id();
+    struct acpi_processor_power *power = processor_powers[cpu];
     struct acpi_processor_cx *cx = NULL;
     int next_state;
     uint64_t t1, t2 = 0;
@@ -648,8 +649,8 @@ static void acpi_processor_idle(void)
 
     cpufreq_dbs_timer_suspend();
 
-    sched_tick_suspend();
-    /* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+    rcu_idle_enter(cpu);
+    /* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
     process_pending_softirqs();
 
     /*
@@ -658,10 +659,10 @@ static void acpi_processor_idle(void)
      */
     local_irq_disable();
 
-    if ( !cpu_is_haltable(smp_processor_id()) )
+    if ( !cpu_is_haltable(cpu) )
     {
         local_irq_enable();
-        sched_tick_resume();
+        rcu_idle_exit(cpu);
         cpufreq_dbs_timer_resume();
         return;
     }
@@ -786,7 +787,7 @@ static void acpi_processor_idle(void)
         /* Now in C0 */
         power->last_state = &power->states[0];
         local_irq_enable();
-        sched_tick_resume();
+        rcu_idle_exit(cpu);
         cpufreq_dbs_timer_resume();
         return;
     }
@@ -794,7 +795,7 @@ static void acpi_processor_idle(void)
     /* Now in C0 */
     power->last_state = &power->states[0];
 
-    sched_tick_resume();
+    rcu_idle_exit(cpu);
     cpufreq_dbs_timer_resume();
 
     if ( cpuidle_current_governor->reflect )
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 52413e6da1..f49b04c45b 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -755,8 +755,8 @@ static void mwait_idle(void)
 
 	cpufreq_dbs_timer_suspend();
 
-	sched_tick_suspend();
-	/* sched_tick_suspend() can raise TIMER_SOFTIRQ. Process it now. */
+	rcu_idle_enter(cpu);
+	/* rcu_idle_enter() can raise TIMER_SOFTIRQ. Process it now. */
 	process_pending_softirqs();
 
 	/* Interrupts must be disabled for C2 and higher transitions. */
@@ -764,7 +764,7 @@ static void mwait_idle(void)
 
 	if (!cpu_is_haltable(cpu)) {
 		local_irq_enable();
-		sched_tick_resume();
+		rcu_idle_exit(cpu);
 		cpufreq_dbs_timer_resume();
 		return;
 	}
@@ -806,7 +806,7 @@ static void mwait_idle(void)
 	if (!(lapic_timer_reliable_states & (1 << cstate)))
 		lapic_timer_on();
 
-	sched_tick_resume();
+	rcu_idle_exit(cpu);
 	cpufreq_dbs_timer_resume();
 
 	if ( cpuidle_current_governor->reflect )
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index a56103c6f7..cb712c8690 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -459,7 +459,7 @@ int rcu_needs_cpu(int cpu)
  * periodically poke rcu_pedning(), so that it will invoke the callback
  * not too late after the end of the grace period.
  */
-void rcu_idle_timer_start()
+static void rcu_idle_timer_start(void)
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -475,7 +475,7 @@ void rcu_idle_timer_start()
     rdp->idle_timer_active = true;
 }
 
-void rcu_idle_timer_stop()
+static void rcu_idle_timer_stop(void)
 {
     struct rcu_data *rdp = &this_cpu(rcu_data);
 
@@ -633,10 +633,13 @@ void rcu_idle_enter(unsigned int cpu)
      * Se the comment before cpumask_andnot() in  rcu_start_batch().
      */
     smp_mb();
+
+    rcu_idle_timer_start();
 }
 
 void rcu_idle_exit(unsigned int cpu)
 {
+    rcu_idle_timer_stop();
     ASSERT(cpumask_test_cpu(cpu, &rcu_ctrlblk.idle_cpumask));
     cpumask_clear_cpu(cpu, &rcu_ctrlblk.idle_cpumask);
 }
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index 3307e88b6c..ddbface969 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -3265,18 +3265,6 @@ void schedule_dump(struct cpupool *c)
     rcu_read_unlock(&sched_res_rculock);
 }
 
-void sched_tick_suspend(void)
-{
-    rcu_idle_enter(smp_processor_id());
-    rcu_idle_timer_start();
-}
-
-void sched_tick_resume(void)
-{
-    rcu_idle_timer_stop();
-    rcu_idle_exit(smp_processor_id());
-}
-
 void wait(void)
 {
     schedule();
diff --git a/xen/include/xen/rcupdate.h b/xen/include/xen/rcupdate.h
index 13850865ed..174d058113 100644
--- a/xen/include/xen/rcupdate.h
+++ b/xen/include/xen/rcupdate.h
@@ -148,7 +148,4 @@ int rcu_barrier(void);
 void rcu_idle_enter(unsigned int cpu);
 void rcu_idle_exit(unsigned int cpu);
 
-void rcu_idle_timer_start(void);
-void rcu_idle_timer_stop(void);
-
 #endif /* __XEN_RCUPDATE_H */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index b2f48a3512..e4263de2d5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -688,8 +688,6 @@ void sched_destroy_domain(struct domain *d);
 long sched_adjust(struct domain *, struct xen_domctl_scheduler_op *);
 long sched_adjust_global(struct xen_sysctl_scheduler_op *);
 int  sched_id(void);
-void sched_tick_suspend(void);
-void sched_tick_resume(void);
 void vcpu_wake(struct vcpu *v);
 long vcpu_yield(void);
 void vcpu_sleep_nosync(struct vcpu *v);
-- 
2.16.4


From - Wed Dec 18 11:05:12 2019
Received: from FTLPEX02AMS01.citrite.net (10.13.108.166) by
 AMSPEX02CL01.citrite.net (10.69.22.125) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Mailbox Transport; Wed, 18 Dec 2019 08:49:12 +0100
Received: from MIAPEX02MSOL01.citrite.net (10.52.109.11) by
 FTLPEX02AMS01.citrite.net (10.13.108.166) with Microsoft SMTP Server (TLS) id
 15.0.1473.3; Wed, 18 Dec 2019 02:49:09 -0500
Received: from esa3.hc3370-68.iphmx.com (10.9.154.239) by
 MIAPEX02MSOL01.citrite.net (10.52.109.11) with Microsoft SMTP Server (TLS) id
 15.0.1473.3 via Frontend Transport; Wed, 18 Dec 2019 02:49:09 -0500
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none; spf=None smtp.pra=jgross@suse.com; spf=Pass smtp.mailfrom=jgross@suse.com; spf=None smtp.helo=postmaster@mx2.suse.de
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  jgross@suse.com) identity=pra; client-ip=195.135.220.15;
  receiver=esa3.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
  jgross@suse.com designates 195.135.220.15 as permitted
  sender) identity=mailfrom; client-ip=195.135.220.15;
  receiver=esa3.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com"; x-sender="jgross@suse.com";
  x-conformance=sidf_compatible; x-record-type="v=spf1";
  x-record-text="v=spf1 ip4:103.9.96.0/22 ip4:117.120.16.0/21
  ip4:130.57.0.0/16 ip4:137.65.0.0/16 ip4:143.186.119.0/24
  ip4:147.2.0.0/16 ip4:149.44.0.0/16 ip4:162.249.213.164
  ip4:164.99.0.0/16 ip4:165.180.149.103 ip4:173.203.201.103
  ip4:193.109.254.0/23 ip4:194.106.220.0/23
  ip4:194.116.198.0/23 ip4:195.135.220.0/23
  ip4:195.245.230.0/23 ip4:196.14.170.0/23 ip4:34.252.226.93
  include:spf1.novell.com include:spf2.novell.com
  include:spf3.novell.com include:spf.protection.outlook.com
  -all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
  authenticity information available from domain of
  postmaster@mx2.suse.de) identity=helo;
  client-ip=195.135.220.15; receiver=esa3.hc3370-68.iphmx.com;
  envelope-from="jgross@suse.com";
  x-sender="postmaster@mx2.suse.de";
  x-conformance=sidf_compatible
IronPort-SDR: thYnZq6RH4Y+5NfDqI4aTLXgIn4Nuco2/2lKTn5WGml5NE8kwgoFkncPsJMlZXtuFX2dpPDYkU
 iv/WNMlAFVhH8B+92QqjCT31Jds1m1tGzqwhN1RL3LTr+VXniX0z/lGKpWPrqgp9plmr3RyY/q
 HAAqyYl7o+Hpbfv/x703wHYbuECe6/Wf+u0SO5aDpTL2hOlPPYLrYwpxYs5/9GJjCUNuf8TNA4
 cRJycD6KLkba4lx2a18ngsHsXoH/rX0rhEUR2RkXL1p99CDOtYFYaiohOSR9bC6yz/HFPsMymX
 veshjYGJZXkhOq3FFHYS/VmD
X-IronPort-RemoteIP: 195.135.220.15
X-IronPort-MID: 9853024
X-IronPort-Reputation: 3.4
X-IronPort-Listener: InboundMail
X-IronPort-SenderGroup: ValidList
X-IronPort-MailFlowPolicy: $ACCEPTED
X-SBRS: 3.4
X-MesageID: 9853024
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 195.135.220.15
X-Policy: $ACCEPTED
IronPort-PHdr: =?us-ascii?q?9a23=3Ajf0oox9dcF90Df9uRHKM819IXTAuvvDOBiVQ1K?=
 =?us-ascii?q?B30e4cTK2v8tzYMVDF4r011RmVBN6dsaoUwLOP6OjJYi8p2d65qncMcZhBBV?=
 =?us-ascii?q?cuqP49uEgeOvODElDxN/XwbiY3T4xoXV5h+GynYwAOQJ6tL1LdrWev4jEMBx?=
 =?us-ascii?q?7xKRR6JvjvGo7Vks+7y/2+94fcbglVijexe61+IAm1oAneq8UanIVvIbstxx?=
 =?us-ascii?q?XUpXdFZ/5Yzn5yK1KJmBb86Maw/Jp9/ClVpvks6c1OX7jkcqohVbBXAygoPG?=
 =?us-ascii?q?4z5M3wqBnMVhCP6WcGUmUXiRVHHQ7I5wznU5jrsyv6su192DSGPcDzULs5Vy?=
 =?us-ascii?q?iu47ttRRT1kyoMKSI3/3/LhcxxlKJboQyupxpjw47PfYqZMONycr7Bcd8GQG?=
 =?us-ascii?q?ZMWNtaWS5cDYOmd4YBD/QPM/tWoYf+oFUBrxW+CBKwCO/z0DJEmmX70bEm3+?=
 =?us-ascii?q?knDArI3BYgH9ULsHnMqNv1KbkdUfq6zKbWyzXIcvJY2S366IjTaRAqvPaBXb?=
 =?us-ascii?q?B/ccrLzkkvDBjFg06LqYz4JDOayP4BvHSc7+plTO+ijXMspQJpojW328shjo?=
 =?us-ascii?q?nEipgIxl3K9Sh12pg5KcOkREJhfNKpEINcuzyeOoZ2WM8uXm9ltSYgxrEYp5?=
 =?us-ascii?q?K3YDIGxIonyhPQbfGMbpKG7Qj5VOmLJDd1nHJld6y7hxa16UWgz+L9Wteu3F?=
 =?us-ascii?q?ZRsypFicfDumoQ2BPN8sSHS/198Vm92TuXygze7uFJLVopmafVK5Mt2KA8mo?=
 =?us-ascii?q?QPvUjZAyP7mF36jKqMeUUl/uio5f7nYrLjppKENY90hRv+MqM1msykGuk4KR?=
 =?us-ascii?q?UOX3OF9uimyLLj+kj5TK1QjvIqiqnZrIzaJcMDq662HQBV1Zwj6xmhADu8zd?=
 =?us-ascii?q?sYgH8HLFZfdxKflIXmJUzOLOrkAve+n1SsijZrxv/cMrL9BZXNK2DJkK39cr?=
 =?us-ascii?q?Zl905c1A0zwMhe551ODLENOvDzVVXtu9zbFR85NAq0zv35B9VgzI8RRWOPAr?=
 =?us-ascii?q?ODMKPTvl6E/P4gI+6JZNxdhDGoO/UjovLjk3I9sVscZrWym4sabmiiGfZrKF?=
 =?us-ascii?q?nfZmDj0fkbFmJfnBczVuPnjhWtSzlIfD7mXa0m4SogIJm7FoqFTYeo1u/SlB?=
 =?us-ascii?q?ynF4FbMzgVQmuHFm3lIsDdA69WMnC7P9NhnzoYVLOoV44m01SUuRTnz6Z8d7?=
 =?us-ascii?q?qEqCADvMml1NMuvLKL0E9isz1sD8GNlWqKSjI8kmAJQmow26Zy6Qx4x0yY2K?=
 =?us-ascii?q?d1y/pfCZRI5vxPXwt7fZ7RxuB3EZbzDwTGeNraAE2+TICABjc8Bsk038dIe1?=
 =?us-ascii?q?x0TtetlBfYxAKxHqQY0beMAc986brSikD4PN010HPazO8khlgiTNFIMDikib?=
 =?us-ascii?q?R47BP7HJPSngOSkKP5Pb8E0nvr82GOhXGLoFkeUAN0VvDdWmsDY0LNsdnjzk?=
 =?us-ascii?q?bSFfm1FK8qdAdMkJTQEKZBZ9z3gFkDf8/NYoSPMUS2nWr4RRuTz+jKbIGxID?=
 =?us-ascii?q?pFmXuNTkkckwUDu32BMFp2ACDpuG/YADF0cDCnK0rx7elzrm+6RU4o3kmLaU?=
 =?us-ascii?q?Nmzb+85h8Sg7SVVfoS2rsOvCpppS9zGR6x2NffCtzIoAQEHu0UY9wn50xc/X?=
 =?us-ascii?q?nErAE7NZulbuhjilMYbwVrrhb2zRwkQo5EkMUssDYr1F8reP/eiQgcMWnChd?=
 =?us-ascii?q?apZ+6ySCG65h2kZq/I10uL3c2foeEP469j9A2m4lHvF1Ii9mUh2N5QgB7+rt?=
 =?us-ascii?q?3HChQfVZXpXwM57R9/8vvYbTM6/Jj8zmB3PO+/tTqIiJo5QfAozBqtZYIVMq?=
 =?us-ascii?q?yeExTpO9YHHMXoI+sv0QvMDFpMLKVZ86g6ONmjfv2N1fuwPepuqzmhiHxO/I?=
 =?us-ascii?q?F31k/VrXhGR+XF3ogIz7Sj5iXXDG6utF66qYi3lJtNPncSFTHklnChWt4XZ7?=
 =?us-ascii?q?VyeJZNAmCrcYW7wdB3hpilXHA9lhbrBVwc39SyUQGPdFG71gpVnUgaun2ony?=
 =?us-ascii?q?Kkwic8ymB06PPEgWqUmbykLUdPM3UDXGR4iFbwPYW46rJSFFOlaQQkjlrt5E?=
 =?us-ascii?q?r3wbRauLUqKmDSRUlSeC2lZ2pmU6a2qv+De5sWuc5u6H4KFr7mORbDF+2YwV?=
 =?us-ascii?q?NSyS7oEmpAySpueiqj4dP5lEch1zrYciY1rWLZfNE2zhDat7m+DbZc2CQLQC?=
 =?us-ascii?q?5ghHzZHF+5apOr8s+TjIzrqf2lWiSqUZgZImH7iJiNsie2/zggAxKlmOuoss?=
 =?us-ascii?q?b6CgV82ij+nYoPN22AvFP3ZY/l0L6/OORscxxzBVPy3MF9H5l3joo6gJxDkW?=
 =?us-ascii?q?hfnJie+mAL1HviKdgOk7yrd2IDHHRYprydqBigwkBoKWiFgp70RmnIiNU0fM?=
 =?us-ascii?q?G0OysXwn5vtpgRTvbSveICxWwv+BK5tV6DOKEi2G5Hkb13tjhCxLtQ3Whlhi?=
 =?us-ascii?q?SFXuJLRhEeYHOqz1LQqIng5KRPODT2Kuf2ihc4xZb5S+jc6gBEBCSgJsxkRH?=
 =?us-ascii?q?8uqJwmYRSUlyevj+OsMNjIMYBK6kLSyVGYybASccx5l+JW13M7aCSh4Tt9jb?=
 =?us-ascii?q?Z81EIm3Inm7tHZcSM3oPP/WEcDcGSrO6ZxsnnslfoMx5jKmdH+WM0nQnJSAt?=
 =?us-ascii?q?PpVa76SWxP8623b0DXTGx68y/AUbvHQV3Gtxwg9SqUVcrzbjfOfCJ8r50qRQ?=
 =?us-ascii?q?HBdhUO20ZNBGR8xMNlUFjtnpepcV8ltGlAuhih8kUKkLgwcUG4CDy6xk/gay?=
 =?us-ascii?q?9oGsHOcFwPt18EvhaFd5TZtLk7HjkErMTw8UrUcTHdPV4QSzlOABfhZRirP6?=
 =?us-ascii?q?Hyt4CRrq7CWrf4daCWJ+3J87cWVu/Ul8v+jM0/o3DVaZ3JZj44VLU6whYRBi?=
 =?us-ascii?q?w/QZmD3W9VFWpGzEevJ4aaohy45yF6/Nun/q2tXgXx6I+LTbBVNJ1u/RuynK?=
 =?us-ascii?q?uOZfWIinw/LzFG25cIg3TPzd19lBYTjSpqajWgQ68Yu3WLSqXOl6tTSRUcbm?=
 =?us-ascii?q?tyMsBM8q4xjBJVN4jDkNTp27Vkj/kzTVBYSVjmncLvbssPRgP1fBbOAELBbe?=
 =?us-ascii?q?7UHzDAzsDpbK/5coV+170L5Tu3vzvTU0L4N2/FlzKyDE/3dL8cyiCDPBlO/o?=
 =?us-ascii?q?q6d0QlD2+rV9/gZhChVb0/xTQr3b05gG/LPm8AIHB9dU1KtLiZ8SJfhL12BW?=
 =?us-ascii?q?VA6nNvKeTMlTye6qHULZMfsP0jBSoR9aoS+HMh175c9z1JXtRwiHGUtcN1rh?=
 =?us-ascii?q?eqn6jHyzZqVgZPticehI+PuhYHW+2R/Z1BVHDYuRMVuDzBVlJT/IYjUIe3/f?=
 =?us-ascii?q?kNmZDVman+KSlP6YfZ5sJHQcjfc5nYaD9/YVzoAD7RHE0OSjv4UAOXz0Fbjv?=
 =?us-ascii?q?yW8WWY65YgrZ250pgBULhAT3QuC+gXTE9iGZZRRfU/FiNhirOdgMMSsDCmqw?=
 =?us-ascii?q?LNQcxBop3dfveCW7P0NS2Uy7VJLUhto/uwPcEYMYv13FZnY19xkdHRGkbeat?=
 =?us-ascii?q?tKpzVocg4+pEgUrCpOC1Yr0kegUTuDpWcJHKfszAUrkQY4auMopm+1vgUHY2?=
 =?us-ascii?q?HSrS51q3Ef3NXohTfIK2z0PP32RpxNBmz4uhppa8Kpc0NOdQS32HdcGnLBTr?=
 =?us-ascii?q?NVgaFncDkz2hTBopYJEvlZH/RJ?=
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: =?us-ascii?q?A0HqAQD62Pldhg/ch8NlHQEBAQkBEQU?=
 =?us-ascii?q?FAYF+gXQngUYjBAsqky+DEpouCQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQE?=
 =?us-ascii?q?BAwEBAQIBAgMCAgEBAhABAQEKCQsIKYVKgjspAYNPAgEDEggNPhQQPxJXGSK?=
 =?us-ascii?q?CNUuCfKElPQIjAUyBBIp+M4kPgUiBNoc/hFkagUE/gRGCXnOKNwSNQoI4nw+?=
 =?us-ascii?q?CPpYGDBuOUYt9pSqEGoFpgXszGggbFTuCbFARFI0eDgmOJEAzjyUBAQ?=
X-IPAS-Result: =?us-ascii?q?A0HqAQD62Pldhg/ch8NlHQEBAQkBEQUFAYF+gXQngUYjB?=
 =?us-ascii?q?Asqky+DEpouCQQBAQsvAQEBhD8CghocBwEENBMCAwEMAQEBAwEBAQIBAgMCA?=
 =?us-ascii?q?gEBAhABAQEKCQsIKYVKgjspAYNPAgEDEggNPhQQPxJXGSKCNUuCfKElPQIjA?=
 =?us-ascii?q?UyBBIp+M4kPgUiBNoc/hFkagUE/gRGCXnOKNwSNQoI4nw+CPpYGDBuOUYt9p?=
 =?us-ascii?q?SqEGoFpgXszGggbFTuCbFARFI0eDgmOJEAzjyUBAQ?=
X-IronPort-AV: E=Sophos;i="5.69,328,1571716800"; 
   d="scan'208";a="9853024"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
X-IronPort-Outbreak-Status: No, level 0, Unknown - Unknown
X-MGA-submission: =?us-ascii?q?MDFFNxRAul8K1IMcG7zzfao58/IELNhbTHHPQt?=
 =?us-ascii?q?9tmay1wNQAxu7/d1yw00Nr5qwjA7LOtctC8xpqP2RWptURjRUWChCxtm?=
 =?us-ascii?q?xarI93ODbsKyoUDaOJspLDDR74rI18i+f5HDbQxUyNLfL7M0BsPlggqE?=
 =?us-ascii?q?asDU9YlDG9FPtSjRswR3NrYA=3D=3D?=
Received: from mx2.suse.de ([195.135.220.15])
  by esa3.hc3370-68.iphmx.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 18 Dec 2019 02:49:08 -0500
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 195D5AF27;
	Wed, 18 Dec 2019 07:49:05 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: <xen-devel@lists.xenproject.org>
CC: Juergen Gross <jgross@suse.com>, Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Josh Whitehead
	<josh.whitehead@dornerworks.com>, Stewart Hildebrand
	<stewart.hildebrand@dornerworks.com>, Meng Xu <mengxu@cis.upenn.edu>
Subject: [PATCH 9/9] xen/sched: add const qualifier where appropriate
Date: Wed, 18 Dec 2019 08:48:59 +0100
Message-ID: <20191218074859.21665-10-jgross@suse.com>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20191218074859.21665-1-jgross@suse.com>
References: <20191218074859.21665-1-jgross@suse.com>
Return-Path: jgross@suse.com
Content-Type: text/plain
X-MS-Exchange-Organization-Network-Message-Id: bebd7b5a-4cbc-424c-afe7-08d7838ec458
X-MS-Exchange-Organization-AVStamp-Enterprise: 1.0
X-MS-Exchange-Organization-AuthSource: MIAPEX02MSOL01.citrite.net
X-MS-Exchange-Organization-AuthAs: Anonymous
MIME-Version: 1.0

Make use of the const qualifier more often in scheduling code.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c        |  2 +-
 xen/common/sched/sched_arinc653.c |  4 +--
 xen/common/sched/sched_credit.c   | 44 +++++++++++++++++----------------
 xen/common/sched/sched_credit2.c  | 52 ++++++++++++++++++++-------------------
 xen/common/sched/sched_null.c     | 17 +++++++------
 xen/common/sched/sched_rt.c       | 32 ++++++++++++------------
 xen/common/sched/schedule.c       | 25 ++++++++++---------
 xen/include/xen/sched.h           |  9 ++++---
 8 files changed, 96 insertions(+), 89 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 14212bb4ae..a6c04c46cb 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -882,7 +882,7 @@ int cpupool_get_id(const struct domain *d)
     return d->cpupool ? d->cpupool->cpupool_id : CPUPOOLID_NONE;
 }
 
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool)
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool)
 {
     return pool->cpu_valid;
 }
diff --git a/xen/common/sched/sched_arinc653.c b/xen/common/sched/sched_arinc653.c
index dc45378952..0de4ba6b2c 100644
--- a/xen/common/sched/sched_arinc653.c
+++ b/xen/common/sched/sched_arinc653.c
@@ -608,7 +608,7 @@ static struct sched_resource *
 a653sched_pick_resource(const struct scheduler *ops,
                         const struct sched_unit *unit)
 {
-    cpumask_t *online;
+    const cpumask_t *online;
     unsigned int cpu;
 
     /*
@@ -639,7 +639,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
                   void *pdata, void *vdata)
 {
     struct sched_resource *sr = get_sched_res(cpu);
-    arinc653_unit_t *svc = vdata;
+    const arinc653_unit_t *svc = vdata;
 
     ASSERT(!pdata && svc && is_idle_unit(svc->unit));
 
diff --git a/xen/common/sched/sched_credit.c b/xen/common/sched/sched_credit.c
index 05930261d9..f2fc1cca5a 100644
--- a/xen/common/sched/sched_credit.c
+++ b/xen/common/sched/sched_credit.c
@@ -233,7 +233,7 @@ static void csched_tick(void *_cpu);
 static void csched_acct(void *dummy);
 
 static inline int
-__unit_on_runq(struct csched_unit *svc)
+__unit_on_runq(const struct csched_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -349,11 +349,11 @@ boolean_param("tickle_one_idle_cpu", opt_tickle_one_idle);
 
 DEFINE_PER_CPU(unsigned int, last_tickle_cpu);
 
-static inline void __runq_tickle(struct csched_unit *new)
+static inline void __runq_tickle(const struct csched_unit *new)
 {
     unsigned int cpu = sched_unit_master(new->unit);
-    struct sched_resource *sr = get_sched_res(cpu);
-    struct sched_unit *unit = new->unit;
+    const struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_unit *unit = new->unit;
     struct csched_unit * const cur = CSCHED_UNIT(curr_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
     cpumask_t mask, idle_mask, *online;
@@ -509,7 +509,7 @@ static inline void __runq_tickle(struct csched_unit *new)
 static void
 csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 {
-    struct csched_private *prv = CSCHED_PRIV(ops);
+    const struct csched_private *prv = CSCHED_PRIV(ops);
 
     /*
      * pcpu either points to a valid struct csched_pcpu, or is NULL, if we're
@@ -652,7 +652,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
 #ifndef NDEBUG
 static inline void
-__csched_unit_check(struct sched_unit *unit)
+__csched_unit_check(const struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     struct csched_dom * const sdom = svc->sdom;
@@ -700,8 +700,8 @@ __csched_vcpu_is_cache_hot(const struct csched_private *prv,
 
 static inline int
 __csched_unit_is_migrateable(const struct csched_private *prv,
-                             struct sched_unit *unit,
-                             int dest_cpu, cpumask_t *mask)
+                             const struct sched_unit *unit,
+                             int dest_cpu, const cpumask_t *mask)
 {
     const struct csched_unit *svc = CSCHED_UNIT(unit);
     /*
@@ -725,7 +725,7 @@ _csched_cpu_pick(const struct scheduler *ops, const struct sched_unit *unit,
     /* We must always use cpu's scratch space */
     cpumask_t *cpus = cpumask_scratch_cpu(cpu);
     cpumask_t idlers;
-    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
+    const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     struct csched_pcpu *spc = NULL;
     int balance_step;
 
@@ -932,7 +932,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
 {
     struct sched_unit *currunit = current->sched_unit;
     struct csched_unit * const svc = CSCHED_UNIT(currunit);
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     const struct scheduler *ops = sr->scheduler;
 
     ASSERT( sched_unit_master(currunit) == cpu );
@@ -1084,7 +1084,7 @@ csched_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
 {
     struct csched_unit * const svc = CSCHED_UNIT(unit);
     unsigned int cpu = sched_unit_master(unit);
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
 
     SCHED_STAT_CRANK(unit_sleep);
 
@@ -1577,7 +1577,7 @@ static void
 csched_tick(void *_cpu)
 {
     unsigned int cpu = (unsigned long)_cpu;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched_pcpu *spc = CSCHED_PCPU(cpu);
     struct csched_private *prv = CSCHED_PRIV(sr->scheduler);
 
@@ -1604,7 +1604,7 @@ csched_tick(void *_cpu)
 static struct csched_unit *
 csched_runq_steal(int peer_cpu, int cpu, int pri, int balance_step)
 {
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     const struct csched_private * const prv = CSCHED_PRIV(sr->scheduler);
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
     struct csched_unit *speer;
@@ -1681,10 +1681,10 @@ static struct csched_unit *
 csched_load_balance(struct csched_private *prv, int cpu,
     struct csched_unit *snext, bool *stolen)
 {
-    struct cpupool *c = get_sched_res(cpu)->cpupool;
+    const struct cpupool *c = get_sched_res(cpu)->cpupool;
     struct csched_unit *speer;
     cpumask_t workers;
-    cpumask_t *online = c->res_valid;
+    const cpumask_t *online = c->res_valid;
     int peer_cpu, first_cpu, peer_node, bstep;
     int node = cpu_to_node(cpu);
 
@@ -2008,7 +2008,7 @@ out:
 }
 
 static void
-csched_dump_unit(struct csched_unit *svc)
+csched_dump_unit(const struct csched_unit *svc)
 {
     struct csched_dom * const sdom = svc->sdom;
 
@@ -2041,10 +2041,11 @@ csched_dump_unit(struct csched_unit *svc)
 static void
 csched_dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct list_head *runq, *iter;
+    const struct list_head *runq;
+    struct list_head *iter;
     struct csched_private *prv = CSCHED_PRIV(ops);
-    struct csched_pcpu *spc;
-    struct csched_unit *svc;
+    const struct csched_pcpu *spc;
+    const struct csched_unit *svc;
     spinlock_t *lock;
     unsigned long flags;
     int loop;
@@ -2132,12 +2133,13 @@ csched_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->active_sdom )
     {
-        struct csched_dom *sdom;
+        const struct csched_dom *sdom;
+
         sdom = list_entry(iter_sdom, struct csched_dom, active_sdom_elem);
 
         list_for_each( iter_svc, &sdom->active_unit )
         {
-            struct csched_unit *svc;
+            const struct csched_unit *svc;
             spinlock_t *lock;
 
             svc = list_entry(iter_svc, struct csched_unit, active_unit_elem);
diff --git a/xen/common/sched/sched_credit2.c b/xen/common/sched/sched_credit2.c
index f9e521a3a8..1ed7bbde2f 100644
--- a/xen/common/sched/sched_credit2.c
+++ b/xen/common/sched/sched_credit2.c
@@ -692,7 +692,7 @@ void smt_idle_mask_clear(unsigned int cpu, cpumask_t *mask)
  */
 static int get_fallback_cpu(struct csched2_unit *svc)
 {
-    struct sched_unit *unit = svc->unit;
+    const struct sched_unit *unit = svc->unit;
     unsigned int bs;
 
     SCHED_STAT_CRANK(need_fallback_cpu);
@@ -774,7 +774,7 @@ static int get_fallback_cpu(struct csched2_unit *svc)
  *
  * FIXME: Do pre-calculated division?
  */
-static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
+static void t2c_update(const struct csched2_runqueue_data *rqd, s_time_t time,
                           struct csched2_unit *svc)
 {
     uint64_t val = time * rqd->max_weight + svc->residual;
@@ -783,7 +783,8 @@ static void t2c_update(struct csched2_runqueue_data *rqd, s_time_t time,
     svc->credit -= val;
 }
 
-static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct csched2_unit *svc)
+static s_time_t c2t(const struct csched2_runqueue_data *rqd, s_time_t credit,
+                    const struct csched2_unit *svc)
 {
     return credit * svc->weight / rqd->max_weight;
 }
@@ -792,7 +793,7 @@ static s_time_t c2t(struct csched2_runqueue_data *rqd, s_time_t credit, struct c
  * Runqueue related code.
  */
 
-static inline int unit_on_runq(struct csched2_unit *svc)
+static inline int unit_on_runq(const struct csched2_unit *svc)
 {
     return !list_empty(&svc->runq_elem);
 }
@@ -849,9 +850,9 @@ static inline bool same_core(unsigned int cpua, unsigned int cpub)
 }
 
 static unsigned int
-cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+cpu_to_runqueue(const struct csched2_private *prv, unsigned int cpu)
 {
-    struct csched2_runqueue_data *rqd;
+    const struct csched2_runqueue_data *rqd;
     unsigned int rqi;
 
     for ( rqi = 0; rqi < nr_cpu_ids; rqi++ )
@@ -917,7 +918,7 @@ static void update_max_weight(struct csched2_runqueue_data *rqd, int new_weight,
 
         list_for_each( iter, &rqd->svc )
         {
-            struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
+            const struct csched2_unit * svc = list_entry(iter, struct csched2_unit, rqd_elem);
 
             if ( svc->weight > max_weight )
                 max_weight = svc->weight;
@@ -970,7 +971,7 @@ _runq_assign(struct csched2_unit *svc, struct csched2_runqueue_data *rqd)
 }
 
 static void
-runq_assign(const struct scheduler *ops, struct sched_unit *unit)
+runq_assign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -997,7 +998,7 @@ _runq_deassign(struct csched2_unit *svc)
 }
 
 static void
-runq_deassign(const struct scheduler *ops, struct sched_unit *unit)
+runq_deassign(const struct scheduler *ops, const struct sched_unit *unit)
 {
     struct csched2_unit *svc = unit->priv;
 
@@ -1203,7 +1204,7 @@ static void
 update_svc_load(const struct scheduler *ops,
                 struct csched2_unit *svc, int change, s_time_t now)
 {
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t delta, unit_load;
     unsigned int P, W;
 
@@ -1362,11 +1363,11 @@ static inline bool is_preemptable(const struct csched2_unit *svc,
  * Within the same class, the highest difference of credit.
  */
 static s_time_t tickle_score(const struct scheduler *ops, s_time_t now,
-                             struct csched2_unit *new, unsigned int cpu)
+                             const struct csched2_unit *new, unsigned int cpu)
 {
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct csched2_unit * cur = csched2_unit(curr_on_cpu(cpu));
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
     s_time_t score;
 
     /*
@@ -1441,7 +1442,7 @@ runq_tickle(const struct scheduler *ops, struct csched2_unit *new, s_time_t now)
     struct sched_unit *unit = new->unit;
     unsigned int bs, cpu = sched_unit_master(unit);
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
-    cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
+    const cpumask_t *online = cpupool_domain_master_cpumask(unit->domain);
     cpumask_t mask;
 
     ASSERT(new->rqd == rqd);
@@ -2005,7 +2006,7 @@ static void replenish_domain_budget(void* data)
 
 #ifndef NDEBUG
 static inline void
-csched2_unit_check(struct sched_unit *unit)
+csched2_unit_check(const struct sched_unit *unit)
 {
     struct csched2_unit * const svc = csched2_unit(unit);
     struct csched2_dom * const sdom = svc->sdom;
@@ -2541,8 +2542,8 @@ static void migrate(const struct scheduler *ops,
  *  - svc is not already flagged to migrate,
  *  - if svc is allowed to run on at least one of the pcpus of rqd.
  */
-static bool unit_is_migrateable(struct csched2_unit *svc,
-                                  struct csched2_runqueue_data *rqd)
+static bool unit_is_migrateable(const struct csched2_unit *svc,
+                                const struct csched2_runqueue_data *rqd)
 {
     struct sched_unit *unit = svc->unit;
     int cpu = sched_unit_master(unit);
@@ -3076,7 +3077,7 @@ csched2_free_domdata(const struct scheduler *ops, void *data)
 static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
-    struct csched2_unit *svc = unit->priv;
+    const struct csched2_unit *svc = unit->priv;
     struct csched2_dom * const sdom = svc->sdom;
     spinlock_t *lock;
 
@@ -3142,7 +3143,7 @@ csched2_runtime(const struct scheduler *ops, int cpu,
     int rt_credit; /* Proposed runtime measured in credits */
     struct csched2_runqueue_data *rqd = c2rqd(ops, cpu);
     struct list_head *runq = &rqd->runq;
-    struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_private *prv = csched2_priv(ops);
 
     /*
      * If we're idle, just stay so. Others (or external events)
@@ -3239,7 +3240,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                unsigned int *skipped)
 {
     struct list_head *iter, *temp;
-    struct sched_resource *sr = get_sched_res(cpu);
+    const struct sched_resource *sr = get_sched_res(cpu);
     struct csched2_unit *snext = NULL;
     struct csched2_private *prv = csched2_priv(sr->scheduler);
     bool yield = false, soft_aff_preempt = false;
@@ -3603,7 +3604,8 @@ static void csched2_schedule(
 }
 
 static void
-csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
+csched2_dump_unit(const struct csched2_private *prv,
+                  const struct csched2_unit *svc)
 {
     printk("[%i.%i] flags=%x cpu=%i",
             svc->unit->domain->domain_id,
@@ -3626,8 +3628,8 @@ csched2_dump_unit(struct csched2_private *prv, struct csched2_unit *svc)
 static inline void
 dump_pcpu(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = csched2_priv(ops);
-    struct csched2_unit *svc;
+    const struct csched2_private *prv = csched2_priv(ops);
+    const struct csched2_unit *svc;
 
     printk("CPU[%02d] runq=%d, sibling={%*pbl}, core={%*pbl}\n",
            cpu, c2r(cpu),
@@ -3695,8 +3697,8 @@ csched2_dump(const struct scheduler *ops)
     loop = 0;
     list_for_each( iter_sdom, &prv->sdom )
     {
-        struct csched2_dom *sdom;
-        struct sched_unit *unit;
+        const struct csched2_dom *sdom;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter_sdom, struct csched2_dom, sdom_elem);
 
@@ -3737,7 +3739,7 @@ csched2_dump(const struct scheduler *ops)
         printk("RUNQ:\n");
         list_for_each( iter, runq )
         {
-            struct csched2_unit *svc = runq_elem(iter);
+            const struct csched2_unit *svc = runq_elem(iter);
 
             if ( svc )
             {
diff --git a/xen/common/sched/sched_null.c b/xen/common/sched/sched_null.c
index 11aab25743..4906e02c62 100644
--- a/xen/common/sched/sched_null.c
+++ b/xen/common/sched/sched_null.c
@@ -278,12 +278,12 @@ static void null_free_domdata(const struct scheduler *ops, void *data)
  * So this is not part of any hot path.
  */
 static struct sched_resource *
-pick_res(struct null_private *prv, const struct sched_unit *unit)
+pick_res(const struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit), new_cpu;
-    cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const cpumask_t *cpus = cpupool_domain_master_cpumask(unit->domain);
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
 
     ASSERT(spin_is_locked(get_sched_res(cpu)->schedule_lock));
 
@@ -375,7 +375,7 @@ static void unit_assign(struct null_private *prv, struct sched_unit *unit,
 }
 
 /* Returns true if a cpu was tickled */
-static bool unit_deassign(struct null_private *prv, struct sched_unit *unit)
+static bool unit_deassign(struct null_private *prv, const struct sched_unit *unit)
 {
     unsigned int bs;
     unsigned int cpu = sched_unit_master(unit);
@@ -441,7 +441,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 {
     struct sched_resource *sr = get_sched_res(cpu);
     struct null_private *prv = null_priv(new_ops);
-    struct null_unit *nvc = vdata;
+    const struct null_unit *nvc = vdata;
 
     ASSERT(nvc && is_idle_unit(nvc->unit));
 
@@ -940,7 +940,8 @@ static void null_schedule(const struct scheduler *ops, struct sched_unit *prev,
     prev->next_task->migrated = false;
 }
 
-static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
+static inline void dump_unit(const struct null_private *prv,
+                             const struct null_unit *nvc)
 {
     printk("[%i.%i] pcpu=%d", nvc->unit->domain->domain_id,
             nvc->unit->unit_id, list_empty(&nvc->waitq_elem) ?
@@ -950,8 +951,8 @@ static inline void dump_unit(struct null_private *prv, struct null_unit *nvc)
 static void null_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct null_private *prv = null_priv(ops);
-    struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
-    struct null_unit *nvc;
+    const struct null_pcpu *npc = get_sched_res(cpu)->sched_priv;
+    const struct null_unit *nvc;
     spinlock_t *lock;
     unsigned long flags;
 
diff --git a/xen/common/sched/sched_rt.c b/xen/common/sched/sched_rt.c
index 8646d77343..560614ed9d 100644
--- a/xen/common/sched/sched_rt.c
+++ b/xen/common/sched/sched_rt.c
@@ -352,7 +352,7 @@ static void
 rt_dump_pcpu(const struct scheduler *ops, int cpu)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
+    const struct rt_unit *svc;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -371,8 +371,8 @@ rt_dump(const struct scheduler *ops)
 {
     struct list_head *runq, *depletedq, *replq, *iter;
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *svc;
-    struct rt_dom *sdom;
+    const struct rt_unit *svc;
+    const struct rt_dom *sdom;
     unsigned long flags;
 
     spin_lock_irqsave(&prv->lock, flags);
@@ -408,7 +408,7 @@ rt_dump(const struct scheduler *ops)
     printk("Domain info:\n");
     list_for_each ( iter, &prv->sdom )
     {
-        struct sched_unit *unit;
+        const struct sched_unit *unit;
 
         sdom = list_entry(iter, struct rt_dom, sdom_elem);
         printk("\tdomain: %d\n", sdom->dom->domain_id);
@@ -509,7 +509,7 @@ deadline_queue_insert(struct rt_unit * (*qelem)(struct list_head *),
 
     list_for_each ( iter, queue )
     {
-        struct rt_unit * iter_svc = (*qelem)(iter);
+        const struct rt_unit * iter_svc = (*qelem)(iter);
         if ( compare_unit_priority(svc, iter_svc) > 0 )
             break;
         pos = true;
@@ -547,7 +547,7 @@ replq_remove(const struct scheduler *ops, struct rt_unit *svc)
          */
         if ( !list_empty(replq) )
         {
-            struct rt_unit *svc_next = replq_elem(replq->next);
+            const struct rt_unit *svc_next = replq_elem(replq->next);
             set_timer(&prv->repl_timer, svc_next->cur_deadline);
         }
         else
@@ -604,7 +604,7 @@ static void
 replq_reinsert(const struct scheduler *ops, struct rt_unit *svc)
 {
     struct list_head *replq = rt_replq(ops);
-    struct rt_unit *rearm_svc = svc;
+    const struct rt_unit *rearm_svc = svc;
     bool rearm = false;
 
     ASSERT( unit_on_replq(svc) );
@@ -640,7 +640,7 @@ static struct sched_resource *
 rt_res_pick_locked(const struct sched_unit *unit, unsigned int locked_cpu)
 {
     cpumask_t *cpus = cpumask_scratch_cpu(locked_cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
     int cpu;
 
     online = cpupool_domain_master_cpumask(unit->domain);
@@ -1028,7 +1028,7 @@ runq_pick(const struct scheduler *ops, const cpumask_t *mask, unsigned int cpu)
     struct rt_unit *svc = NULL;
     struct rt_unit *iter_svc = NULL;
     cpumask_t *cpu_common = cpumask_scratch_cpu(cpu);
-    cpumask_t *online;
+    const cpumask_t *online;
 
     list_for_each ( iter, runq )
     {
@@ -1197,15 +1197,15 @@ rt_unit_sleep(const struct scheduler *ops, struct sched_unit *unit)
  * lock is grabbed before calling this function
  */
 static void
-runq_tickle(const struct scheduler *ops, struct rt_unit *new)
+runq_tickle(const struct scheduler *ops, const struct rt_unit *new)
 {
     struct rt_private *prv = rt_priv(ops);
-    struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
-    struct rt_unit *iter_svc;
-    struct sched_unit *iter_unit;
+    const struct rt_unit *latest_deadline_unit = NULL; /* lowest priority */
+    const struct rt_unit *iter_svc;
+    const struct sched_unit *iter_unit;
     int cpu = 0, cpu_to_tickle = 0;
     cpumask_t *not_tickled = cpumask_scratch_cpu(smp_processor_id());
-    cpumask_t *online;
+    const cpumask_t *online;
 
     if ( new == NULL || is_idle_unit(new->unit) )
         return;
@@ -1379,7 +1379,7 @@ rt_dom_cntl(
 {
     struct rt_private *prv = rt_priv(ops);
     struct rt_unit *svc;
-    struct sched_unit *unit;
+    const struct sched_unit *unit;
     unsigned long flags;
     int rc = 0;
     struct xen_domctl_schedparam_vcpu local_sched;
@@ -1484,7 +1484,7 @@ rt_dom_cntl(
  */
 static void repl_timer_handler(void *data){
     s_time_t now;
-    struct scheduler *ops = data;
+    const struct scheduler *ops = data;
     struct rt_private *prv = rt_priv(ops);
     struct list_head *replq = rt_replq(ops);
     struct list_head *runq = rt_runq(ops);
diff --git a/xen/common/sched/schedule.c b/xen/common/sched/schedule.c
index ddbface969..1d98e1fa8d 100644
--- a/xen/common/sched/schedule.c
+++ b/xen/common/sched/schedule.c
@@ -175,7 +175,7 @@ static inline struct scheduler *dom_scheduler(const struct domain *d)
 
 static inline struct scheduler *unit_scheduler(const struct sched_unit *unit)
 {
-    struct domain *d = unit->domain;
+    const struct domain *d = unit->domain;
 
     if ( likely(d->cpupool != NULL) )
         return d->cpupool->sched;
@@ -202,7 +202,7 @@ static inline struct scheduler *vcpu_scheduler(const struct vcpu *v)
 }
 #define VCPU2ONLINE(_v) cpupool_domain_master_cpumask((_v)->domain)
 
-static inline void trace_runstate_change(struct vcpu *v, int new_state)
+static inline void trace_runstate_change(const struct vcpu *v, int new_state)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
     uint32_t event;
@@ -220,7 +220,7 @@ static inline void trace_runstate_change(struct vcpu *v, int new_state)
     __trace_var(event, 1/*tsc*/, sizeof(d), &d);
 }
 
-static inline void trace_continue_running(struct vcpu *v)
+static inline void trace_continue_running(const struct vcpu *v)
 {
     struct { uint32_t vcpu:16, domain:16; } d;
 
@@ -302,7 +302,8 @@ void sched_guest_idle(void (*idle) (void), unsigned int cpu)
     atomic_dec(&per_cpu(sched_urgent_count, cpu));
 }
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate)
 {
     spinlock_t *lock;
     s_time_t delta;
@@ -324,7 +325,7 @@ void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate)
 uint64_t get_cpu_idle_time(unsigned int cpu)
 {
     struct vcpu_runstate_info state = { 0 };
-    struct vcpu *v = idle_vcpu[cpu];
+    const struct vcpu *v = idle_vcpu[cpu];
 
     if ( cpu_online(cpu) && v )
         vcpu_runstate_get(v, &state);
@@ -392,7 +393,7 @@ static void sched_free_unit_mem(struct sched_unit *unit)
 
 static void sched_free_unit(struct sched_unit *unit, struct vcpu *v)
 {
-    struct vcpu *vunit;
+    const struct vcpu *vunit;
     unsigned int cnt = 0;
 
     /* Don't count to be released vcpu, might be not in vcpu list yet. */
@@ -522,7 +523,7 @@ static unsigned int sched_select_initial_cpu(const struct vcpu *v)
 
 int sched_init_vcpu(struct vcpu *v)
 {
-    struct domain *d = v->domain;
+    const struct domain *d = v->domain;
     struct sched_unit *unit;
     unsigned int processor;
 
@@ -913,7 +914,7 @@ static void sched_unit_move_locked(struct sched_unit *unit,
                                    unsigned int new_cpu)
 {
     unsigned int old_cpu = unit->res->master_cpu;
-    struct vcpu *v;
+    const struct vcpu *v;
 
     rcu_read_lock(&sched_res_rculock);
 
@@ -1090,7 +1091,7 @@ static bool sched_check_affinity_broken(const struct sched_unit *unit)
     return false;
 }
 
-static void sched_reset_affinity_broken(struct sched_unit *unit)
+static void sched_reset_affinity_broken(const struct sched_unit *unit)
 {
     struct vcpu *v;
 
@@ -1176,7 +1177,7 @@ void restore_vcpu_affinity(struct domain *d)
 int cpu_disable_scheduler(unsigned int cpu)
 {
     struct domain *d;
-    struct cpupool *c;
+    const struct cpupool *c;
     cpumask_t online_affinity;
     int ret = 0;
 
@@ -1251,8 +1252,8 @@ out:
 static int cpu_disable_scheduler_check(unsigned int cpu)
 {
     struct domain *d;
-    struct vcpu *v;
-    struct cpupool *c;
+    const struct vcpu *v;
+    const struct cpupool *c;
 
     c = get_sched_res(cpu)->cpupool;
     if ( c == NULL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index e4263de2d5..fcf8e5037b 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -771,7 +771,7 @@ static inline void hypercall_cancel_continuation(struct vcpu *v)
 extern struct domain *domain_list;
 
 /* Caller must hold the domlist_read_lock or domlist_update_lock. */
-static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
+static inline struct domain *first_domain_in_cpupool(const struct cpupool *c)
 {
     struct domain *d;
     for (d = rcu_dereference(domain_list); d && d->cpupool != c;
@@ -779,7 +779,7 @@ static inline struct domain *first_domain_in_cpupool( struct cpupool *c)
     return d;
 }
 static inline struct domain *next_domain_in_cpupool(
-    struct domain *d, struct cpupool *c)
+    struct domain *d, const struct cpupool *c)
 {
     for (d = rcu_dereference(d->next_in_list); d && d->cpupool != c;
          d = rcu_dereference(d->next_in_list));
@@ -923,7 +923,8 @@ void restore_vcpu_affinity(struct domain *d);
 int vcpu_affinity_domctl(struct domain *d, uint32_t cmd,
                          struct xen_domctl_vcpuaffinity *vcpuaff);
 
-void vcpu_runstate_get(struct vcpu *v, struct vcpu_runstate_info *runstate);
+void vcpu_runstate_get(const struct vcpu *v,
+                       struct vcpu_runstate_info *runstate);
 uint64_t get_cpu_idle_time(unsigned int cpu);
 void sched_guest_idle(void (*idle) (void), unsigned int cpu);
 void scheduler_enable(void);
@@ -1042,7 +1043,7 @@ extern enum cpufreq_controller {
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-cpumask_t *cpupool_valid_cpus(struct cpupool *pool);
+const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);
 extern void dump_runq(unsigned char key);
 
 void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
-- 
2.16.4



^ permalink raw reply related	[flat|nested] 6+ messages in thread

* Re: git-am doesn't strip CRLF line endings when the mbox is base64-encoded
  2019-12-18 11:42 git-am doesn't strip CRLF line endings when the mbox is base64-encoded George Dunlap
@ 2019-12-18 12:15 ` George Dunlap
  2019-12-18 19:41   ` Todd Zullinger
  2020-01-06 11:58   ` George Dunlap
  0 siblings, 2 replies; 6+ messages in thread
From: George Dunlap @ 2019-12-18 12:15 UTC (permalink / raw)
  To: git

On 12/18/19 11:42 AM, George Dunlap wrote:
> Using git 2.24.0 from Debian testing.
> 
> It seems that git-am will strip CRLF endings from mails before applying
> patches when the mail isn't encoded in any way.  It will also decode
> base64-encoded mails.  But it won't strip CRLF endings from
> base64-encoded mails.
> 
> Attached are two mbox files for two different recent series.
> plainenc.am applies cleanly with `git am`, while base64enc.am doesn't.
> 
> Poking around the man pages, it looks like part of the issue might be
> that the CRLF stripping is done in `git mailsplit`, before the base64
> encoding, rather than after.

Poking around -- it looks like the CRLF stripping would be better done
in `git mailinfo` after the decoding.

Also, this can *almost* be worked around using hooks -- there's an
`applypatch-msg` hook which can strip the CLRFs from the commit message,
but no hook (AFAICT) corresponding `applypatch-patch` which is run on
the patch itself before being applied.

 -George


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: git-am doesn't strip CRLF line endings when the mbox is base64-encoded
  2019-12-18 12:15 ` George Dunlap
@ 2019-12-18 19:41   ` Todd Zullinger
  2020-01-06 11:58   ` George Dunlap
  1 sibling, 0 replies; 6+ messages in thread
From: Todd Zullinger @ 2019-12-18 19:41 UTC (permalink / raw)
  To: George Dunlap; +Cc: git

George Dunlap wrote:
> On 12/18/19 11:42 AM, George Dunlap wrote:
>> Using git 2.24.0 from Debian testing.
>> 
>> It seems that git-am will strip CRLF endings from mails before applying
>> patches when the mail isn't encoded in any way.  It will also decode
>> base64-encoded mails.  But it won't strip CRLF endings from
>> base64-encoded mails.
>> 
>> Attached are two mbox files for two different recent series.
>> plainenc.am applies cleanly with `git am`, while base64enc.am doesn't.
>> 
>> Poking around the man pages, it looks like part of the issue might be
>> that the CRLF stripping is done in `git mailsplit`, before the base64
>> encoding, rather than after.
> 
> Poking around -- it looks like the CRLF stripping would be better done
> in `git mailinfo` after the decoding.
> 
> Also, this can *almost* be worked around using hooks -- there's an
> `applypatch-msg` hook which can strip the CLRFs from the commit message,
> but no hook (AFAICT) corresponding `applypatch-patch` which is run on
> the patch itself before being applied.

This came up recently in <20191130180301.5c39d8a4@lwn.net>¹.
I don't know if any of that discussion will prove useful to
you if you want to poke at this further or not.

¹ https://lore.kernel.org/git/20191130180301.5c39d8a4@lwn.net/

-- 
Todd

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: git-am doesn't strip CRLF line endings when the mbox is base64-encoded
  2019-12-18 12:15 ` George Dunlap
  2019-12-18 19:41   ` Todd Zullinger
@ 2020-01-06 11:58   ` George Dunlap
  2020-01-06 17:07     ` Junio C Hamano
  1 sibling, 1 reply; 6+ messages in thread
From: George Dunlap @ 2020-01-06 11:58 UTC (permalink / raw)
  To: git

On 12/18/19 12:15 PM, George Dunlap wrote:
> On 12/18/19 11:42 AM, George Dunlap wrote:
>> Using git 2.24.0 from Debian testing.
>>
>> It seems that git-am will strip CRLF endings from mails before applying
>> patches when the mail isn't encoded in any way.  It will also decode
>> base64-encoded mails.  But it won't strip CRLF endings from
>> base64-encoded mails.
>>
>> Attached are two mbox files for two different recent series.
>> plainenc.am applies cleanly with `git am`, while base64enc.am doesn't.
>>
>> Poking around the man pages, it looks like part of the issue might be
>> that the CRLF stripping is done in `git mailsplit`, before the base64
>> encoding, rather than after.
> 
> Poking around -- it looks like the CRLF stripping would be better done
> in `git mailinfo` after the decoding.

Anyone want to take this up?  I mean, I could try to send a patch, but
since I've never looked at the git source code before, I'm sure it would
take me about 10x as much effort for me to do it as for someone already
familiar with the codebase.

(And I've already done that work for stackgit:
https://github.com/ctmarinas/stgit/pull/46)

 -George

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: git-am doesn't strip CRLF line endings when the mbox is base64-encoded
  2020-01-06 11:58   ` George Dunlap
@ 2020-01-06 17:07     ` Junio C Hamano
  2020-01-06 18:10       ` George Dunlap
  0 siblings, 1 reply; 6+ messages in thread
From: Junio C Hamano @ 2020-01-06 17:07 UTC (permalink / raw)
  To: George Dunlap; +Cc: git

George Dunlap <george.dunlap@citrix.com> writes:

> On 12/18/19 12:15 PM, George Dunlap wrote:
>> On 12/18/19 11:42 AM, George Dunlap wrote:
>>> Using git 2.24.0 from Debian testing.
>>>
>>> It seems that git-am will strip CRLF endings from mails before applying
>>> patches when the mail isn't encoded in any way.  It will also decode
>>> base64-encoded mails.  But it won't strip CRLF endings from
>>> base64-encoded mails.
>>>
>>> Attached are two mbox files for two different recent series.
>>> plainenc.am applies cleanly with `git am`, while base64enc.am doesn't.
>>>
>>> Poking around the man pages, it looks like part of the issue might be
>>> that the CRLF stripping is done in `git mailsplit`, before the base64
>>> encoding, rather than after.
>> 
>> Poking around -- it looks like the CRLF stripping would be better done
>> in `git mailinfo` after the decoding.
>
> Anyone want to take this up?  I mean, I could try to send a patch, but
> since I've never looked at the git source code before, I'm sure it would
> take me about 10x as much effort for me to do it as for someone already
> familiar with the codebase.

Even before writing a patch, somebody needs to come up with a
sensible design first.  --[no-]keep-cr is about "because transfer of
e-mail messages between MTAs and to the receiving MUA is defined in
terms of CRLF delimited lines per RFC, Git cannot tell if the CRLF
in the input was meant to be part of the patch (i.e. the diff is
describing a change between preimage and postimage of a file that
uses CRLF line endings) or they are cruft added during transit.  By
default we favor LF endings so we will strip, but we leave an option
to keep CRs at the end of lines".  

What you are asking for is quite different, isn't it?  "We know the
CRLF in the payload is from the original because they were protected
from getting munged during the transfer by being MIME-encased.
Please tell Git to preprocess that payload to convert CRLF to LF
before treating it as a patch".

So, if you are imagining to change the meaning of --[no-]keep-cr, I
do not think it will fly (that is why I said that we need a sensible
design before a patch).

And by stepping back a bit like so, and once we start viewing this
as "after receiving a piece of e-mail from MUA (where --[no-]keep-cr
may affect the outermost CRLF line endings) and unwrapping possible
MIME-encasing, we can optionally tell Git to pass the payload
further through a preprocess filter", we'd realize that this does
not have to be limited to just running dos2unix (you may want to run
iconv to fix encodings, for example), which would mean that the new
flag may not just want to be --strip-cr, which is too limiting, but
rather want to be --filter-message=<how> where <how> could be one of
the canned preprocess filter (among which your dos2unix may exist)
or an external script.

I am not saying that "--filter-message=<how>" must be the "sensible
design" I mentioned at the beginning of this message---the above is
to illustrate what kind of thought needs to go in before even the
first line of the patch gets written.

Thanks.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: git-am doesn't strip CRLF line endings when the mbox is base64-encoded
  2020-01-06 17:07     ` Junio C Hamano
@ 2020-01-06 18:10       ` George Dunlap
  0 siblings, 0 replies; 6+ messages in thread
From: George Dunlap @ 2020-01-06 18:10 UTC (permalink / raw)
  To: Junio C Hamano; +Cc: git

On 1/6/20 5:07 PM, Junio C Hamano wrote:
> George Dunlap <george.dunlap@citrix.com> writes:
> 
>> On 12/18/19 12:15 PM, George Dunlap wrote:
>>> On 12/18/19 11:42 AM, George Dunlap wrote:
>>>> Using git 2.24.0 from Debian testing.
>>>>
>>>> It seems that git-am will strip CRLF endings from mails before applying
>>>> patches when the mail isn't encoded in any way.  It will also decode
>>>> base64-encoded mails.  But it won't strip CRLF endings from
>>>> base64-encoded mails.
>>>>
>>>> Attached are two mbox files for two different recent series.
>>>> plainenc.am applies cleanly with `git am`, while base64enc.am doesn't.
>>>>
>>>> Poking around the man pages, it looks like part of the issue might be
>>>> that the CRLF stripping is done in `git mailsplit`, before the base64
>>>> encoding, rather than after.
>>>
>>> Poking around -- it looks like the CRLF stripping would be better done
>>> in `git mailinfo` after the decoding.
>>
>> Anyone want to take this up?  I mean, I could try to send a patch, but
>> since I've never looked at the git source code before, I'm sure it would
>> take me about 10x as much effort for me to do it as for someone already
>> familiar with the codebase.
> 
> Even before writing a patch, somebody needs to come up with a
> sensible design first.  --[no-]keep-cr is about "because transfer of
> e-mail messages between MTAs and to the receiving MUA is defined in
> terms of CRLF delimited lines per RFC, Git cannot tell if the CRLF
> in the input was meant to be part of the patch (i.e. the diff is
> describing a change between preimage and postimage of a file that
> uses CRLF line endings) or they are cruft added during transit.  By
> default we favor LF endings so we will strip, but we leave an option
> to keep CRs at the end of lines".  
> 
> What you are asking for is quite different, isn't it?  "We know the
> CRLF in the payload is from the original because they were protected
> from getting munged during the transfer by being MIME-encased.
> Please tell Git to preprocess that payload to convert CRLF to LF
> before treating it as a patch".

Actually that's not true. :-)   The RFC specifies CRLF for text sections
regardless of the encoding:

8<----
   The canonical form of any MIME "text" subtype MUST always represent a
   line break as a CRLF sequence.  Similarly, any occurrence of CRLF in
   MIME "text" MUST represent a line break.  Use of CR and LF outside of
   line break sequences is also forbidden.
----->8

[1] https://tools.ietf.org/html/rfc2046#section-4.1.1

Just as for plaintext encoding, we do not, in fact, know that CRLF is in
the original; in the example I included above, I'm confident that CRLF
is *not* in the original, but was added by an MTA afterwards.

As such, at the moment what `mailsplit` does is generate a load of
non-RFC-compliant email messages (since text-plain encoded messages
won't have CRLF, in contravention of the spec), and `mailinfo`
incorrectly interprets base64-encoded sections.  Moving the CR-stripping
from mailsplit to mailinfo would make them both more RFC-compliant.

(Sorry this wasn't in the original report -- I've been doing a lot more
digging since then.)

> So, if you are imagining to change the meaning of --[no-]keep-cr, I
> do not think it will fly (that is why I said that we need a sensible
> design before a patch).
> 
> And by stepping back a bit like so, and once we start viewing this
> as "after receiving a piece of e-mail from MUA (where --[no-]keep-cr
> may affect the outermost CRLF line endings) and unwrapping possible
> MIME-encasing, we can optionally tell Git to pass the payload
> further through a preprocess filter", we'd realize that this does
> not have to be limited to just running dos2unix (you may want to run
> iconv to fix encodings, for example), which would mean that the new
> flag may not just want to be --strip-cr, which is too limiting, but
> rather want to be --filter-message=<how> where <how> could be one of
> the canned preprocess filter (among which your dos2unix may exist)
> or an external script.
> 
> I am not saying that "--filter-message=<how>" must be the "sensible
> design" I mentioned at the beginning of this message---the above is
> to illustrate what kind of thought needs to go in before even the
> first line of the patch gets written.
As I mentioned in another thread, there's already an `applypatch-msg`
hook which can be used to do arbitrary modifications on commit messages
before applying.  Another way to fix this would be to add an
`applypatch-patch` hook, which allowed you to do arbitrary modifications
on the patch before applying.

I certainly think that `applypatch-patch` would be a useful thing to
add.  But since making `mailsplit` and `mailinfo` more RFC-compliant is
both good in itself, and probably easier, I still think that's the best
thing to do first.

 -George

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2020-01-06 18:10 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-12-18 11:42 git-am doesn't strip CRLF line endings when the mbox is base64-encoded George Dunlap
2019-12-18 12:15 ` George Dunlap
2019-12-18 19:41   ` Todd Zullinger
2020-01-06 11:58   ` George Dunlap
2020-01-06 17:07     ` Junio C Hamano
2020-01-06 18:10       ` George Dunlap

Code repositories for project(s) associated with this public inbox

	https://80x24.org/mirrors/git.git

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).