
Re: [Xen-devel] [PATCH v7 4/5] x86/ioreq server: Asynchronously reset outstanding p2m_ioreq_server entries.





On 3/13/2017 7:24 PM, Jan Beulich wrote:
On 11.03.17 at 09:42, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
On 3/11/2017 12:03 AM, Jan Beulich wrote:
But there's a wider understanding issue I'm having here: What is
an "entry" here? Commonly I would assume this to refer to an
individual (4k) page, but it looks like you really mean table entry,
i.e. possibly representing a 2M or 1G page.
Well, it should be an entry pointing to a 4K page (only).
For p2m_ioreq_server, we should never encounter huge pages, because
these entries are changed from p2m_ram_rw pages in
set_mem_type() -> p2m_change_type_one(), which calls p2m_set_entry()
with PAGE_ORDER_4K specified.
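To make this concrete, here is a minimal toy model (not the actual Xen
code -- the real p2m_change_type_one() does much more, and the names
here are made up to mirror the call chain) of why a 4K-order type
change can only ever produce a single 4K p2m_ioreq_server entry:

/*
 * Toy model, not hypervisor code: a fully split 2M range is 512
 * 4K entries, and a PAGE_ORDER_4K type change touches exactly one
 * of them, so no huge-page p2m_ioreq_server entry can appear.
 */
#include <assert.h>
#include <stdio.h>

#define ENTRIES_PER_TABLE 512
#define PAGE_ORDER_4K     0

typedef enum { p2m_ram_rw, p2m_ioreq_server } p2m_type_t;

/* One 2M range, already split into 512 4K entries of p2m_ram_rw. */
static p2m_type_t table[ENTRIES_PER_TABLE];

/* Hypothetical stand-in for p2m_change_type_one(): the order is
 * hard-coded to 4K, so only a single entry ever changes type. */
static void change_type_one(unsigned int gfn_offset,
                            p2m_type_t ot, p2m_type_t nt)
{
    assert(table[gfn_offset] == ot);
    table[gfn_offset] = nt;
}

int main(void)
{
    unsigned int i, n = 0;

    change_type_one(7, p2m_ram_rw, p2m_ioreq_server);

    for ( i = 0; i < ENTRIES_PER_TABLE; i++ )
        if ( table[i] == p2m_ioreq_server )
            n++;
    printf("%u ioreq_server entries, %u still ram_rw\n",
           n, ENTRIES_PER_TABLE - n);   /* prints 1 and 511 */
    return 0;
}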
And recombination of large pages won't ever end up hitting these?

Well, by recombination I guess you refer to the PoD pages? I do not
think p2m_ioreq_server pages will be recombined now, which means we do
not need to worry about recounting the p2m_ioreq_server entries when a
split happens.

And as to a type change from p2m_ram_rw to p2m_ioreq_server: even if it
is requested on a large page, p2m_change_type_one() will split the page
and mark only one EPT entry (which maps to a 4K page) as
p2m_ioreq_server (the other 511 entries remain p2m_ram_rw). So I still
believe counting p2m_ioreq_server entries here is correct.
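As a rough sketch of why that per-entry counting stays exact (the
helper and counter names below are hypothetical, not the patch's
actual code, though the patch keeps a similar counter in the p2m):
because p2m_ioreq_server entries are always 4K, every type change
adjusts the count by exactly one.

/* Hypothetical accounting sketch, one call per 4K entry changed. */
typedef enum { p2m_ram_rw, p2m_ioreq_server } p2m_type_t;

static unsigned long ioreq_entry_count;

static void account_type_change(p2m_type_t ot, p2m_type_t nt)
{
    if ( ot == nt )
        return;
    if ( nt == p2m_ioreq_server )
        ioreq_entry_count++;            /* one new 4K entry */
    else if ( ot == p2m_ioreq_server )
        ioreq_entry_count--;            /* one 4K entry reset */
}

int main(void)
{
    account_type_change(p2m_ram_rw, p2m_ioreq_server);  /* set   */
    account_type_change(p2m_ioreq_server, p2m_ram_rw);  /* reset */
    return (int)ioreq_entry_count;                      /* 0 again */
}

A split of a p2m_ram_rw superpage never sees p2m_ioreq_server on
either side of the change, so the count is untouched by splits, which
matches the argument above.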

Besides, looking at it from the XenGT requirement side, it is the guest
graphics page tables we are trying to write-protect, and those are 4K
in size.

Thanks
Yu

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel



 

