
Re: [Xen-devel] [RFC][v2][PATCH 08/14] tools: extend xc_assign_device() to support rdm reservation policy



On 2015/6/7 19:27, Wei Liu wrote:
On Wed, Jun 03, 2015 at 10:58:31AM +0800, Chen, Tiejun wrote:
On 2015/6/3 0:36, Wei Liu wrote:
On Fri, May 22, 2015 at 05:35:08PM +0800, Tiejun Chen wrote:
This patch passes the rdm reservation policy to xc_assign_device() so that
the policy is checked when assigning devices to a VM.

Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
---
  tools/libxc/include/xenctrl.h       |  3 ++-
  tools/libxc/xc_domain.c             |  4 +++-
  tools/libxl/libxl_pci.c             | 11 ++++++++++-
  tools/libxl/xl_cmdimpl.c            | 23 +++++++++++++++++++----
  tools/libxl/xl_cmdtable.c           |  2 +-
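
For clarity, a minimal, hedged sketch of how a toolstack caller might use the
extended xc_assign_device() follows. The extra "flags" argument and the
XEN_DOMCTL_DEV_RDM_RELAXED value are assumptions based on this series, not
necessarily the final API:

/* Hedged sketch (not the patch itself): pass the rdm policy through the
 * extended xc_assign_device().  Names and values below are assumptions
 * taken from this series and may differ in the final patches. */
#include <stdbool.h>
#include <stdint.h>
#include <xenctrl.h>

#ifndef XEN_DOMCTL_DEV_RDM_RELAXED
#define XEN_DOMCTL_DEV_RDM_RELAXED 1   /* assumed flag value */
#endif

static int assign_with_rdm_policy(xc_interface *xch, uint32_t domid,
                                  uint32_t machine_sbdf, bool relaxed)
{
    /* 0 == "strict": an unresolved RMRR conflict fails the assignment;
     * the relaxed flag asks the hypervisor to warn and continue instead. */
    uint32_t flags = relaxed ? XEN_DOMCTL_DEV_RDM_RELAXED : 0;

    return xc_assign_device(xch, domid, machine_sbdf, flags);
}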

Where is the documentation for the new options you added to the xl pci commands?

Looks like I'm missing a description of something specific to pci-attach?

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index 4eb929d..2ebfd54 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -1368,10 +1368,15 @@ it will also attempt to re-bind the device to its original driver, making it
  usable by Domain 0 again.  If the device is not bound to pciback, it will
  return success.

-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<BDF> I<rdm policy>


The way you put it here suggests that "rdm policy" is mandatory. I don't
think this is the case?

If it is not mandatory, write [I<rdm>].

Yes, thanks for your correction.


  Hot-plug a new pass-through pci device to the specified domain.
  B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+B<rdm policy> specifies how to handle a conflict between reserved device
+memory and the guest address space. "strict" means an unresolved conflict
+leads to an immediate VM crash, while "relaxed" allows the VM to continue
+with a warning message. "strict" is the default.
+

  =item B<pci-detach> [I<-f>] I<domain-id> I<BDF>
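
(For reference, with the optional positional argument suggested above, a
hypothetical invocation might look like:

    xl pci-attach 1 0000:00:02.0 relaxed

where omitting the last argument falls back to the default "strict"
behaviour. The domain id and BDF here are made up, and the syntax is only
what this RFC proposes, so it may still change.)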



BTW you might want to consider rearranging the patches in this series so that

Yes, this is really what I intend to do.

you keep the tree bisectable.

Overall, I can separate this series into several parts:

#1. Introduce our policy configuration on the tools side
#2. Interact with the hypervisor to get rdm info
#3. Implement our policy with the rdm info on the tools side
#4. Make hvmloader align with our policy

If you already see something obviously wrong, let me know.


I think all toolstack patches should come after the hypervisor and hvmloader
patches. And then within the toolstack patches, libxc patches should come
before libxl patches, and libxl patches before xl patches.

The pattern is clear. Patches late in the series make use of functionality
provided by earlier patches. Breaking this pattern is definitely going to
break bisection.


I tried to rearrange these patches as follows:

#1. hypervisor
0001-xen-introduce-XENMEM_reserved_device_memory_map.patch
0002-xen-x86-p2m-introduce-set_identity_p2m_entry.patch
0003-xen-vtd-create-RMRR-mapping.patch
0004-xen-passthrough-extend-hypercall-to-support-rdm-rese.patch
0005-xen-enable-XENMEM_memory_map-in-hvm.patch
#2. hvmloader
0006-hvmloader-get-guest-memory-map-into-memory_map.patch
0007-hvmloader-pci-skip-reserved-ranges.patch
0008-hvmloader-e820-construct-guest-e820-table.patch
#3. tools/libxc
0009-tools-libxc-Expose-new-hypercall-xc_reserved_device_.patch
0010-tools-extend-xc_assign_device-to-support-rdm-reserva.patch
0011-tools-introduce-some-new-parameters-to-set-rdm-polic.patch
#4. tools/libxl
0012-tools-libxl-passes-rdm-reservation-policy.patch
0013-tools-libxl-detect-and-avoid-conflicts-with-RDM.patch
0014-tools-libxl-extend-XENMEM_set_memory_map.patch
#5. Misc
0015-xen-vtd-enable-USB-device-assignment.patch
0016-xen-vtd-prevent-from-assign-the-device-with-shared-r.patch

Thanks
Tiejun
