
Re: [PATCH v3 5/7] vpci: add SR-IOV support for PVH Dom0


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 12 May 2026 13:11:26 +0200
  • Cc: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stewart Hildebrand <stewart.hildebrand@xxxxxxx>, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>
  • Delivery-date: Tue, 12 May 2026 11:11:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, May 12, 2026 at 12:44:48PM +0200, Jan Beulich wrote:
> On 12.05.2026 10:58, Roger Pau Monné wrote:
> > On Tue, May 12, 2026 at 07:32:20AM +0000, Mykyta Poturai wrote:
> >> On 5/12/26 09:20, Jan Beulich wrote:
> >>> On 11.05.2026 16:10, Volodymyr Babchuk wrote:
> >>>> Okay, so let's clear this. If I remember correct, you discussed this
> >>>> with Mykyta in the previous version and suggested to put the vCPU to
> >>>> sleep for 100ms.
> >>>
> >>> I don't think I did (except perhaps from a very abstract perspective),
> >>> precisely because of ...
> >>>
> >>>> I don't think that this is a good idea, because guest
> >>>> kernel will not be happy about that.
> >>>
> >>> ... this. Instead iirc I suggested to refuse (short-circuit) handling
> >>> VF register accesses for the next 100ms.
> >>
> >> Do you have any suggestions on how to ensure that we accurately catch 
> >> the window where 100ms have already passed, but guests haven’t tried to 
> >> read anything yet, to flip this back? As I mentioned in the previous 
> >> version, Linux, for example, doesn’t attempt to re-read anything if the 
> >> first read failed after 100ms. So it appears to me that this approach 
> >> would be prone to racing with the guest for getting to the VF first.
> 
> When we do the write to the control register in Xen, our timer will start
> ticking before the guest's. Hence our 100ms will be over (slightly)
> earlier, and a well-behaved guest (having waited for the full 100ms
> according to its own tracking) will be handled fine.
> 
> >> One 
> >> approach I can think of is to somehow swap the register handlers back 
> >> in-flight during the first read by the guest if 100ms have already 
> >> passed. However, this would still depend on Dom0 for registering VFs, 
> >> but in a more convoluted way. We also can’t add the VFs before 100ms 
> >> have passed and add timing checks to all register handlers, because 
> >> pci_add_device and everything below it expects the device to be 
> >> functional at the moment of addition.
> 
> I fear I'm not following this.
> 
> > We could maybe do some middle ground here, kind of similar to what
> > Linux does.  The overall idea would be to put on hold any accesses to
> > the device(s) PCI config space for 100ms, that would include the PF
> > and any VFs.
> 
> For the PF, at most parts of the SR-IOV capability should be thus
> constrained, I think.

Linux blocks access to the whole device PCI config space, but that
might simply be because it's easier to implement that way on their
side.  Certainly the spec doesn't mention any restriction on accessing
the PF config space during that window.

As a simpler approach we might want to reject write accesses to the
SR-IOV capability during that window.
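Something along these lines, purely as a sketch (the per-PF
sriov_init_state structure is made up here, the handler shape only
approximates the vPCI write handlers, and the register constant
follows the pci_regs.h naming):

/* Hypothetical per-PF state; not an existing vPCI structure. */
struct sriov_init_state {
    s_time_t vf_enable_time;   /* when VF Enable was last set, 0 if never */
};

static void cf_check sriov_ctrl_write(const struct pci_dev *pdev,
                                      unsigned int reg, uint32_t val,
                                      void *data)
{
    struct sriov_init_state *st = data;

    /* Drop writes to the SR-IOV control register inside the 100ms window. */
    if ( st->vf_enable_time &&
         NOW() - st->vf_enable_time < MILLISECS(100) )
    {
        gprintk(XENLOG_WARNING,
                "%pp: SR-IOV ctrl write %#x dropped during VF init window\n",
                &pdev->sbdf, val);
        return;
    }

    /* Setting VF Enable opens the window. */
    if ( val & PCI_SRIOV_CTRL_VFE )
        st->vf_enable_time = NOW();

    pci_conf_write16(pdev->sbdf, reg, val);
}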

> >  At the point when VF enable is set Xen already knows the
> > position of the VFs in the PCI config space.
> > 
> > Any PCI config space access attempts to the PF or VFs during that
> > 100ms window would cause the guest vCPU to be put on hold, and the
> > access would only be retried once the 100ms window has passed and Xen
> > has registered the VFs with vPCI.  This approach needs extra logic to
> > put vPCI accesses on hold, similar to what Xen does when mapping a BAR
> > into the p2m, and a timer to defer the adding of the VFs and the
> > unlocking of the affected PCI config space region.
> 
> I was meaning to have this done in even simpler a way: Simply record
> when the VFs were configured, and within the next 100ms terminate all
> accesses (read all ones, discard writes).

Hm, I thought about such an approach also; I was mostly worried that
some drivers might know the device has a shorter initialization time,
and hence attempt to access it before the 100ms window has passed.
However, simply discarding accesses might be easier to implement
initially, and hence I would be fine with such an approach.  We would
need to log any such discarded accesses during the init window.
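
FWIW, what I would picture is something like the below, again purely
as a sketch (it reuses the made-up sriov_init_state from the earlier
snippet, and the handler shapes only approximate the vPCI read/write
handlers):

/* Sketch: terminate VF config space accesses during the 100ms window. */
static bool vf_in_init_window(const struct sriov_init_state *st)
{
    return st->vf_enable_time &&
           NOW() - st->vf_enable_time < MILLISECS(100);
}

static uint32_t cf_check vf_read(const struct pci_dev *pdev,
                                 unsigned int reg, void *data)
{
    const struct sriov_init_state *st = data;   /* state of the owning PF */

    if ( vf_in_init_window(st) )
    {
        /* Log the discarded access, as mentioned above. */
        gprintk(XENLOG_WARNING,
                "%pp: read from reg %#x dropped during VF init window\n",
                &pdev->sbdf, reg);
        return ~0U;                             /* read as all ones */
    }

    return pci_conf_read32(pdev->sbdf, reg);
}

static void cf_check vf_write(const struct pci_dev *pdev,
                              unsigned int reg, uint32_t val, void *data)
{
    const struct sriov_init_state *st = data;

    if ( vf_in_init_window(st) )
    {
        gprintk(XENLOG_WARNING,
                "%pp: write %#x to reg %#x dropped during VF init window\n",
                &pdev->sbdf, val, reg);
        return;                                 /* discard the write */
    }

    pci_conf_write32(pdev->sbdf, reg, val);
}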

Thanks, Roger.
