
Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages



On Fri, Dec 08, 2017 at 11:06:43AM +0000, Paul Durrant wrote:
>> -----Original Message-----
>> From: Chao Gao [mailto:chao.gao@xxxxxxxxx]
>> Sent: 07 December 2017 06:57
>> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
>> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
>> <wei.liu2@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Tim
>> (Xen.org) <tim@xxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
>> xen-devel@xxxxxxxxxxxxx; Jan Beulich <jbeulich@xxxxxxxx>; Ian Jackson
>> <Ian.Jackson@xxxxxxxxxx>
>> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4
>> pages
>> 
>> On Thu, Dec 07, 2017 at 08:41:14AM +0000, Paul Durrant wrote:
>> >> -----Original Message-----
>> >> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx]
>> >> On Behalf Of Paul Durrant
>> >> Sent: 06 December 2017 16:10
>> >> To: 'Chao Gao' <chao.gao@xxxxxxxxx>
>> >> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu
>> >> <wei.liu2@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
>> >> Tim (Xen.org) <tim@xxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
>> >> xen-devel@xxxxxxxxxxxxx; Jan Beulich <jbeulich@xxxxxxxx>; Ian Jackson
>> >> <Ian.Jackson@xxxxxxxxxx>
>> >> Subject: Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of
>> >> IOREQ page to 4 pages
>> >>
>> >> > -----Original Message-----
>> >> > From: Chao Gao [mailto:chao.gao@xxxxxxxxx]
>> >> > Sent: 06 December 2017 09:02
>> >> > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
>> >> > Cc: xen-devel@xxxxxxxxxxxxx; Tim (Xen.org) <tim@xxxxxxx>; Stefano
>> >> > Stabellini <sstabellini@xxxxxxxxxx>; Konrad Rzeszutek Wilk
>> >> > <konrad.wilk@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; George
>> >> > Dunlap <George.Dunlap@xxxxxxxxxx>; Andrew Cooper
>> >> > <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Ian
>> >> > Jackson <Ian.Jackson@xxxxxxxxxx>
>> >> > Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page
>> >> > to 4 pages
>> >> >
>> >> > On Wed, Dec 06, 2017 at 03:04:11PM +0000, Paul Durrant wrote:
>> >> > >> -----Original Message-----
>> >> > >> From: Chao Gao [mailto:chao.gao@xxxxxxxxx]
>> >> > >> Sent: 06 December 2017 07:50
>> >> > >> To: xen-devel@xxxxxxxxxxxxx
>> >> > >> Cc: Chao Gao <chao.gao@xxxxxxxxx>; Paul Durrant
>> >> > >> <Paul.Durrant@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Stefano
>> >> > >> Stabellini <sstabellini@xxxxxxxxxx>; Konrad Rzeszutek Wilk
>> >> > >> <konrad.wilk@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; George
>> >> > >> Dunlap <George.Dunlap@xxxxxxxxxx>; Andrew Cooper
>> >> > >> <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Ian
>> >> > >> Jackson <Ian.Jackson@xxxxxxxxxx>
>> >> > >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page
>> >> > >> to 4 pages
>> >> > >>
>> >> > >> One 4K-byte page contains at most 128 'ioreq_t' structures. In
>> >> > >> order to remove the vcpu count constraint imposed by a single
>> >> > >> IOREQ page, bump the number of IOREQ pages to 4. With this
>> >> > >> patch, multiple pages can be used as IOREQ pages.
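
For reference, the arithmetic behind the 128 limit: sizeof(ioreq_t) is
32 bytes, so one 4K page holds 4096 / 32 = 128 slots, one per vcpu. A
minimal sketch of the page-count calculation (names illustrative, not
taken from the patch):

    /* One ioreq_t slot per vcpu; 4096 / 32 = 128 slots per 4K page. */
    #define IOREQS_PER_PAGE (PAGE_SIZE / sizeof(ioreq_t))

    static unsigned int nr_ioreq_pages(unsigned int nr_vcpus)
    {
        return (nr_vcpus + IOREQS_PER_PAGE - 1) / IOREQS_PER_PAGE;
    }

So a 128-vcpu guest still needs one page, while a 512-vcpu guest needs
the full 4.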
>> >> > >>
>> >> > >> Basically, this patch extends the 'ioreq' field in struct
>> >> > >> hvm_ioreq_server to an array. All accesses to the 'ioreq' field,
>> >> > >> such as 's->ioreq', are replaced with the FOR_EACH_IOREQ_PAGE
>> >> > >> macro.
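
FOR_EACH_IOREQ_PAGE itself is not shown in this excerpt; a rough
sketch of what such an iterator could look like, assuming 'ioreq'
becomes an array with an 'nr_ioreq_page' count alongside it (both
names illustrative):

    /* Illustrative only: visit each IOREQ page of server 's'. */
    #define FOR_EACH_IOREQ_PAGE(s, i, iorp)          \
        for ( (i) = 0, (iorp) = &(s)->ioreq[0];      \
              (i) < (s)->nr_ioreq_page;              \
              (iorp) = &(s)->ioreq[++(i)] )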
>> >> > >>
>> >> > >> In order to access an IOREQ page, QEMU should get the gmfn and
>> >> > >> map it into its virtual address space.
>> >> > >
>> >> > >No. There's no need to extend the 'legacy' mechanism of using magic
>> >> > >page gfns. You should only handle the case where the mfns are
>> >> > >allocated on demand (see the call to hvm_ioreq_server_alloc_pages()
>> >> > >in hvm_get_ioreq_server_frame()). The number of guest vcpus is
>> >> > >known at this point so the correct number of pages can be
>> >> > >allocated. If the creator of the ioreq server attempts to use the
>> >> > >legacy hvm_get_ioreq_server_info() and the guest has >128 vcpus
>> >> > >then the call should fail.
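
Concretely, the legacy path would then gain a guard along these lines
(a sketch, reusing the illustrative nr_ioreq_page field from above):

    /* Sketch: the legacy interface can hand back only one magic gfn,
     * so refuse any guest that needs more than one IOREQ page. */
    static int legacy_ioreq_check(const struct hvm_ioreq_server *s)
    {
        return s->nr_ioreq_page > 1 ? -EINVAL : 0; /* >128 vcpus */
    }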
>> >> >
>> >> > Great suggestion. I will introduce a new dmop, a variant of
>> >> > hvm_get_ioreq_server_frame(), for the creator to get an array of
>> >> > gfns and the size of the array. And the legacy interface will
>> >> > report an error if more than one IOREQ page is needed.
>> >>
>> >> You don't need a new dmop for mapping, I think. The mem op to map
>> >> ioreq server frames should work. All you should need to do is update
>> >> hvm_get_ioreq_server_frame() to deal with an index > 1, and provide
>> >> some means for the ioreq server creator to convert the number of
>> >> guest vcpus into the correct number of pages to map. (That might
>> >> need a new dm op.)
>> >
>> >I realise after saying this that an emulator already knows the size
>> >of the ioreq structure and so can easily calculate the correct number
>> >of pages to map, given the number of guest vcpus.
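
On the emulator side, that calculation plus the mapping itself might
look roughly like this. It assumes the resource-mapping interface from
Paul's in-flight series; the names below are the ones that later
appear in the public headers and may not match the version under
review:

    #include <sys/mman.h>
    #include <xenctrl.h>
    #include <xenforeignmemory.h>
    #include <xen/hvm/ioreq.h>
    #include <xen/memory.h>

    /* Map all synchronous IOREQ pages of an ioreq server in one go,
     * sizing the mapping from the guest's vcpu count. */
    static void *map_ioreq_pages(xenforeignmemory_handle *fmem,
                                 domid_t domid, ioservid_t id,
                                 unsigned int nr_vcpus)
    {
        unsigned int per_page = XC_PAGE_SIZE / sizeof(ioreq_t); /* 128 */
        unsigned long nr_frames = (nr_vcpus + per_page - 1) / per_page;
        void *addr = NULL;

        /* Frame 0 is the buffered ioreq page; the synchronous ioreq
         * pages start at frame 1. */
        return xenforeignmemory_map_resource(
            fmem, domid, XENMEM_resource_ioreq_server, id,
            XENMEM_resource_ioreq_server_frame_ioreq(0), nr_frames,
            &addr, PROT_READ | PROT_WRITE, 0);
    }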
>> 
>> How about the patch at the bottom? Is it heading in the right direction?
>
>Yes, certainly along the right lines. I would probably do away with
>MAX_NR_IOREQ_PAGE though. You should just dynamically allocate the
>correct number of ioreq pages when the ioreq server is created (since
>you already calculate nr_ioreq_page there anyway).
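
Along those lines, creation time could size everything directly; a
sketch with illustrative names, assuming Xen's xzalloc_array() and
DIV_ROUND_UP() helpers:

    /* Sketch: allocate the ioreq page array when the server is
     * created, instead of using a fixed MAX_NR_IOREQ_PAGE bound. */
    static int hvm_ioreq_server_alloc_ioreq_array(
        struct hvm_ioreq_server *s, const struct domain *d)
    {
        s->nr_ioreq_page = DIV_ROUND_UP(d->max_vcpus, IOREQS_PER_PAGE);
        s->ioreq = xzalloc_array(struct hvm_ioreq_page,
                                 s->nr_ioreq_page);

        return s->ioreq ? 0 : -ENOMEM;
    }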
>
>> Do you have the QEMU patch which replaces the old mapping method with
>> the new one? I want to integrate that patch and do some tests.
>
>Sure. There are a couple of patches. I have not tested them against
>recent rebases of my series so you may find some issues.

Hi, Paul.

I merged the two QEMU patches and the privcmd patch [1] and did some
tests. I encountered a small issue and am reporting it to you, so you
can pay attention to it in your own testing. The symptom is that using
the new interface to map the grant table in xc_dom_gnttab_seed() always
fails. After adding some printk calls in privcmd, I found it is
xen_remap_domain_gfn_array() that fails with error code -16 (-EBUSY).
Mapping the ioreq server pages doesn't have this issue.

[1] 
http://xenbits.xen.org/gitweb/?p=people/pauldu/linux.git;a=commit;h=ce59a05e6712

Thanks
Chao


 

