
Re: [PATCH 1/8] IOMMU/x86: drop locking from quarantine_init() hooks


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 12 Apr 2022 14:54:22 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Kevin Tian" <kevin.tian@xxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Delivery-date: Tue, 12 Apr 2022 12:54:51 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Apr 12, 2022 at 02:17:16PM +0200, Jan Beulich wrote:
> On 12.04.2022 13:05, Roger Pau Monné wrote:
> > On Mon, Apr 11, 2022 at 11:35:54AM +0200, Jan Beulich wrote:
> >> Prior extension of these functions to enable per-device quarantine page
> >> tables already didn't add more locking there, but merely left in place
> >> what had been there before. But really locking is unnecessary here:
> >> We're running with pcidevs_lock held (i.e. multiple invocations of the
> >> same function [or their teardown equivalents] are impossible, and hence
> >> there are no "local" races), while all consuming of the data being
> >> populated here can't race anyway due to happening sequentially
> >> afterwards. See also the comment in struct arch_pci_dev.
> > 
> > I would explicitly say that none of the code in the locked region
> > touches any data in the domain_iommu struct, so taking the
> > mapping_lock is unneeded.
> 
> But that would limit what the mapping_lock protects more than it actually
> does: The entire page tables hanging off of the root table are also
> protected by that lock.

Right, but at the point where fill_qpt() gets called,
hd->arch.amd.root_table is still NULL, and hence it seems completely
pointless to wrap this in a mapping_lock-protected region.
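
To put it in code (a much simplified sketch of how I read the hook
after this patch; the exact function shape and the assertions are
mine, not what's in the tree):

    /* Illustrative only - simplified shape of the quarantine_init hook. */
    int amd_iommu_quarantine_init(struct pci_dev *pdev)
    {
        struct domain_iommu *hd = dom_iommu(dom_io);

        ASSERT(pcidevs_locked());          /* serializes invocations */
        ASSERT(!hd->arch.amd.root_table);  /* nothing for mapping_lock
                                              to guard at this point */

        /*
         * The per-device identity-mapped tables populated here hang
         * off the pdev, never off hd->arch.amd.root_table, so a
         * mapping_lock region around this wouldn't protect anything.
         */
        return fill_qpt(pdev);
    }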

> It's just that for a pdev, after having
> installed identity mappings, the root doesn't hang off of hd. But in
> principle - i.e. if the per-device mappings weren't static once created -
> the lock would be the one to hold whenever any of these page tables was
> modified.

The lock would need to be held if page tables are modified while
installed in hd->arch.amd.root_table, or at least that's my
understanding.

This is a special case anyway, as the page tables here are per-device
rather than per-domain, but it seems clear to me that if the page
tables are not installed in hd->arch.amd.root_table, the lock in
hd->arch.mapping_lock is not supposed to be protecting them.
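
Roughly, the layout as I picture it (the field names below are purely
illustrative, not the actual declarations):

    struct domain_iommu {
        spinlock_t mapping_lock;      /* guards the tables reachable
                                         from the per-domain root */
        struct {
            struct page_info *root_table;   /* per-domain page tables */
        } amd;
    };

    struct arch_pci_dev {
        /*
         * Per-device quarantine tables: built once while holding
         * pcidevs_lock, never reachable via hd->arch.amd.root_table,
         * and hence outside mapping_lock's scope.
         */
        struct page_info *qpt_root;   /* illustrative name */
    };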

> > Long term we might wish to implement a per-device lock that could
> > be used here, instead of relying on the pcidevs lock.
> 
> Well, I would want to avoid this unless a need arises to not hold the
> pcidevs lock here. Or, of course, if a need arose to dynamically alter
> these page tables.

I think it's likely we will need such a lock for other purposes if we
ever manage to convert the pcidevs lock into an rwlock, so my comment
was not so much that it's required for the use case here, but rather
about a side effect we'd get if we ever manage to change the pcidevs
lock.
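
Something along these lines (purely illustrative, assuming a
hypothetical rwlock-ified pcidevs lock and a hypothetical per-device
lock):

    read_lock(&pcidevs_rwlock);    /* hypothetical rwlock */
    spin_lock(&pdev->lock);        /* hypothetical per-device lock */
    /* ... modify state local to this pdev ... */
    spin_unlock(&pdev->lock);
    read_unlock(&pcidevs_rwlock);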

Thanks, Roger.
