
SR-IOV: do we need to virtualize in Xen or rely on Dom0?


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • Date: Fri, 4 Jun 2021 06:37:27 +0000
  • Cc: Jan Beulich <jbeulich@xxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 04 Jun 2021 06:37:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: SR-IOV: do we need to virtualize in Xen or rely on Dom0?

Hi, all!

While working on PCI SR-IOV support for ARM I started porting [1] on top
of the current PCI on ARM support [2]. The question I have for this series
is whether we really need SR-IOV emulation code in Xen.

I have implemented a PoC for SR-IOV on ARM [3] (please see the top two
patches) and it "works for me": MSI support is still WIP, but I was able to
verify that VFs are properly seen in the guest and that their BARs are
properly programmed in the p2m.

What I can't fully understand is whether we can live with this approach or
whether there are use-cases I am not seeing.

Previously I was told that this approach might not work with FreeBSD running
as Domain-0, but it seems that "PCI Passthrough is not supported
(Xen/FreeBSD)" anyway [4].

I also see that the ACRN hypervisor [5] implements SR-IOV inside the
hypervisor itself, which makes me think I may be missing some important
use-case on x86.

I would like to ask for any advice on handling SR-IOV in the hypervisor:
any pointers to documentation or any other source which might help in
deciding whether we really need the SR-IOV complexity in Xen.

And it does bring complexity, as a comparison of [1] and [3] shows...

A few technical details on the approach implemented in [3] (illustrative
sketches follow the list):
1. We rely on PHYSDEVOP_pci_device_add to make Xen aware of the VFs.
2. We rely on the Domain-0 SR-IOV drivers to instantiate the VFs.
3. BARs are programmed in the p2m, implementing a guest view of them (we
have extended the vPCI code for that, and this path is used for "normal"
devices and VFs in the same way).
4. No need to trap PCI_SRIOV_CTRL.
5. No need to wait 100ms in Xen before attempting to access VF registers
when enabling virtual functions on the PF - this is handled by Domain-0
itself.
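
To illustrate items 1 and 2, here is a minimal sketch of how a Linux
Domain-0 already reports a newly created VF to Xen (modelled on
drivers/xen/pci.c and Xen's public/physdev.h; error handling, the
PXM/optarr handling and the non-IOV cases are omitted, and the helper
name is mine, not an existing function):

/* Hypothetical helper mirroring what xen_add_device() does for VFs. */
#include <linux/pci.h>
#include <xen/interface/physdev.h>   /* struct physdev_pci_device_add */
#include <asm/xen/hypercall.h>       /* HYPERVISOR_physdev_op()       */

static int report_vf_to_xen(struct pci_dev *vf)
{
    struct pci_dev *pf = pci_physfn(vf);        /* the parent PF */
    struct physdev_pci_device_add add = {
        .seg   = pci_domain_nr(vf->bus),
        .bus   = vf->bus->number,
        .devfn = vf->devfn,
        .flags = XEN_PCI_DEV_VIRTFN,            /* mark the device as a VF... */
    };

    add.physfn.bus   = pf->bus->number;         /* ...and tell Xen which PF */
    add.physfn.devfn = pf->devfn;               /* it belongs to            */

    return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);
}

And for item 3, the arithmetic the BAR handling relies on (this is just my
reading of the PCIe SR-IOV spec, not code taken from the PoC): a VF carries
no BAR values in its own config space; its memory ranges are derived from
the VF BARn base and per-VF size found in the PF's SR-IOV capability:

#include <stdint.h>

/* Host address of VF number vf_idx's BARn (vf_idx counts from 0). */
static uint64_t vf_bar_addr(uint64_t vf_barn_base,  /* VF BARn from PF's SR-IOV cap */
                            uint64_t vf_barn_size,  /* size of one VF's BARn        */
                            unsigned int vf_idx)
{
    return vf_barn_base + (uint64_t)vf_idx * vf_barn_size;
}

These are the addresses the (extended) vPCI code then maps into the guest's
p2m while presenting the guest view of the BARs.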

Thank you in advance,
Oleksandr

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-07/msg01494.html
[2] https://gitlab.com/xen-project/fusa/xen-integration/-/tree/integration/pci-passthrough
[3] https://github.com/xen-troops/xen/commits/pci_phase2
[4] https://wiki.freebsd.org/Xen
[5] https://projectacrn.github.io/latest/tutorials/sriov_virtualization.html
