
Re: [Xen-devel] PV drivers and zero copying



On 07/31/2017 02:58 PM, Joao Martins wrote:
On 07/31/2017 12:41 PM, Oleksandr Andrushchenko wrote:
Hi, Joao!

On 07/31/2017 02:03 PM, Joao Martins wrote:
Hey Oleksandr,

On 07/31/2017 09:34 AM, Oleksandr Andrushchenko wrote:
Hi, all!

[snip]
Comparison for display use-case
===============================

1 Number of grant references used
1-1 grant references: nr_pages
1-2 GNTTABOP_transfer: nr_pages
1-3 XENMEM_exchange: not an option

2 Effect of DomU crash on Dom0 (its mapped pages)
2-1 grant references: pages can be unmapped by Dom0, Dom0 is fully
recovered
2-2 GNTTABOP_transfer: pages will be returned to the Hypervisor, lost
for Dom0
2-3 XENMEM_exchange: not an option

3 Security issues from sharing Dom0 pages to DomU
3-1 grant references: none
3-2 GNTTABOP_transfer: none
3-3 XENMEM_exchange: not an option

At the moment approach 1 with grant references seems to be the winner for
sharing buffers both ways, i.e. Dom0 -> DomU and DomU -> Dom0.
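
As an illustration of that path, here is a minimal sketch of the granting
side, assuming the Linux 4.x grant-table helpers (gnttab_grant_foreign_access
and friends; exact signatures vary between kernel versions) and a made-up
share_buffer() helper, not the actual driver code:

    #include <linux/errno.h>
    #include <xen/grant_table.h>
    #include <xen/page.h>

    /* Grant an already-allocated nr_pages buffer to the other end. */
    static int share_buffer(domid_t otherend_id, struct page **pages,
                            unsigned int nr_pages, grant_ref_t *refs)
    {
        unsigned int i;

        for (i = 0; i < nr_pages; i++) {
            int ref = gnttab_grant_foreign_access(otherend_id,
                                                  xen_page_to_gfn(pages[i]),
                                                  0 /* read-write */);
            if (ref < 0)
                goto fail;
            /* refs[] are then passed to the other end via the ring/XenStore */
            refs[i] = ref;
        }
        return 0;

    fail:
        while (i--)
            gnttab_end_foreign_access_ref(refs[i], 0);
        return -ENOSPC;
    }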

Conclusion
==========

I would like to get some feedback from the community on which approach is
more suitable for sharing large buffers and to have a clear view of the
cons and pros of each one: please feel free to add other metrics I missed
and correct the ones I commented on. I would appreciate help on comparing
approaches 2 and 3, as I have little knowledge of these APIs (2 seems to
be addressed by Christopher, and 3 seems to be relevant to what
Konrad/Stefano do WRT SWIOTLB).
Depending on your performance/memory requirements, there could be another
option, which is to keep the guest mapped on Domain-0 (what was discussed
in the Zerogrant session [0][1] and will be formally proposed in the next
month or so).
Unfortunately I missed that session during the Summit due to overlapping
sessions.
Hmm - Zerocopy Rx (Dom0 -> DomU) would indeed be an interesting topic to
bring up.

It is, especially for systems which require physically contiguous buffers.
But that would only address the grant maps/unmaps/copies done on Domain-0
(given the numbers you pasted a while ago, you might not really need to go
to such lengths).
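
For context, this is roughly the per-buffer work the backend does today,
i.e. what such an approach would remove. A sketch (assuming the Linux 4.x
grant-table helpers, error unwinding elided; backend_map() is a made-up
name):

    #include <linux/mm.h>
    #include <xen/grant_table.h>

    /* Map nr_pages grant refs from domid into ballooned local pages. */
    static int backend_map(domid_t domid, grant_ref_t *refs,
                           unsigned int nr_pages, struct page **pages,
                           struct gnttab_map_grant_ref *map_ops,
                           grant_handle_t *handles)
    {
        unsigned int i;
        int err;

        err = gnttab_alloc_pages(nr_pages, pages);
        if (err)
            return err;

        for (i = 0; i < nr_pages; i++)
            gnttab_set_map_op(&map_ops[i],
                              (unsigned long)pfn_to_kaddr(page_to_pfn(pages[i])),
                              GNTMAP_host_map, refs[i], domid);

        err = gnttab_map_refs(map_ops, NULL, pages, nr_pages);
        if (err)
            return err;

        for (i = 0; i < nr_pages; i++)
            handles[i] = map_ops[i].handle; /* kept for gnttab_unmap_refs() later */
        return 0;
    }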

[0]
http://schd.ws/hosted_files/xendeveloperanddesignsummit2017/05/zerogrant_spec.pdf
[1]
http://schd.ws/hosted_files/xendeveloperanddesignsummit2017/a8/zerogrant_slides.pdf
I will read these, thank you for the links.
For the buffers allocated on Dom0, and to safely grant buffers from Dom0
to DomU (which I am not so sure is possible today :()
We have this working in our setup for display (we have implemented
z-copy with grant references already)
Allow me to clarify :) I meant "possible to do it in a safe manner", IOW,
regarding what I mentioned below in the following paragraphs. But your
answer below clarifies that aspect.
good :)
, maybe a "contract" where DomU provides a set of transferable pages that
Dom0 holds on to for each Dom0 gref provided to the guest (and assuming
this is only a handful of guests, as the grant table is not that big).
It is an option
IIUC, from what you pasted above on "Buffer allocated @Dom0", it sounds
like Domain-0 could quickly run out of pages/OOM (and grants) if your
guest is misbehaving/buggy or malicious; *also* the Domain-0 grant table
is a rather finite/small resource (even though you can override the
number of frames in the hypervisor arguments).
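(The override in question is a hypervisor boot parameter; option names and
defaults depend on the Xen version, but along the lines of:

    gnttab_max_frames=64 gnttab_max_maptrack_frames=1024

on the Xen command line.)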
Well, you are right. But we are focusing on embedded appliances, so the
systems we use are not that "dynamic" in that respect. Namely, we have a
fixed number of domains and their functionality is well known, so we can
make rather precise assumptions about resource usage.
Interesting! So here I presume the backend trusts the frontend.
Yes, this is the case. What is more, the backend can decide whether to
allow a buffer allocation or reject the request.
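
A purely hypothetical sketch of such a policy on the backend side
(fe_budget and allow_buffer are made-up names, not existing code):

    #include <linux/types.h>

    /* Per-frontend quota, fixed up front since the set of domains is known. */
    struct fe_budget {
        unsigned int pages_in_use;
        unsigned int pages_max;
    };

    static bool allow_buffer(struct fe_budget *b, unsigned int nr_pages)
    {
        if (b->pages_in_use + nr_pages > b->pages_max)
            return false;   /* reject the allocation request */
        b->pages_in_use += nr_pages;
        return true;
    }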
Cheers,
Joao


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

