Re: [PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus
Andrew Cooper writes ("[PATCH for-4.15] dmop: Add XEN_DMOP_nr_vcpus"):
> Curiously absent from the stable API/ABIs is an ability to query the number of
> vcpus which a domain has.  Emulators need to know this information in
> particular to know how many struct ioreq's live in the ioreq server mappings.
>
> In practice, this forces all userspace to link against libxenctrl to use
> xc_domain_getinfo(), which rather defeats the purpose of the stable libraries.

Wat.

> For 4.15.  This was a surprise discovery in the massive ABI untangling effort
> I'm currently doing for XenServer's new build system.

Given that this is a new feature at a late stage, I am going to say this:
I will R-A it subject to it getting *two* independent Reviewed-by.  I will
try to be one of them myself :-).

...

> +/*
> + * XEN_DMOP_nr_vcpus: Query the number of vCPUs a domain has.
> + *
> + * The number of vcpus a domain has is fixed from creation time.  This bound
> + * is applicable to e.g. the vcpuid parameter of XEN_DMOP_inject_event, or
> + * number of struct ioreq objects mapped via XENMEM_acquire_resource.

AIUI from the code, the value is the maximum number of vcpus, in the
sense that they are not necessarily all online.  In which case I think
maybe you want to mention that here?

> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
> index 398993d5f4..cbbd20c958 100644
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -107,6 +107,7 @@
>  ?   dm_op_set_pci_intx_level        hvm/dm_op.h
>  ?   dm_op_set_pci_link_route        hvm/dm_op.h
>  ?   dm_op_track_dirty_vram          hvm/dm_op.h
> +?   dm_op_nr_vcpus                  hvm/dm_op.h
>  !   hvm_altp2m_set_mem_access_multi hvm/hvm_op.h
>  ?   vcpu_hvm_context                hvm/hvm_vcpu.h
>  ?   vcpu_hvm_x86_32                 hvm/hvm_vcpu.h

I have no idea what even.  I read the comment at the top of the file.

So, for *everything except the xlat.lst change*:

Reviewed-by: Ian Jackson <iwj@xxxxxxxxxxxxxx>

Thanks,
Ian.