
Re: [Xen-devel] [PATCH v7 2/5] x86/ioreq server: Add DMOP to map guest ram with p2m_ioreq_server to an ioreq server.





On 3/14/2017 6:40 PM, Paul Durrant wrote:
-----Original Message-----
From: Yu Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
Sent: 14 March 2017 09:53
To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Jan Beulich
<JBeulich@xxxxxxxx>
Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap
<George.Dunlap@xxxxxxxxxx>; Jun Nakajima <jun.nakajima@xxxxxxxxx>;
Kevin Tian <kevin.tian@xxxxxxxxx>; zhiyuan.lv@xxxxxxxxx; xen-
devel@xxxxxxxxxxxxx; Tim (Xen.org) <tim@xxxxxxx>
Subject: Re: [PATCH v7 2/5] x86/ioreq server: Add DMOP to map guest ram
with p2m_ioreq_server to an ioreq server.



On 3/14/2017 5:40 PM, Paul Durrant wrote:
-----Original Message-----
[snip]
+struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                              unsigned int *flags)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct hvm_ioreq_server *s;
+
+    spin_lock(&p2m->ioreq.lock);
+
+    s = p2m->ioreq.server;
+    *flags = p2m->ioreq.flags;
+
+    spin_unlock(&p2m->ioreq.lock);
+    return s;
+}
I'm afraid this question was asked before, but since there's no
comment here or anywhere, I can't recall if there was a reason why
s potentially being stale by the time the caller looks at it is not a
problem.
Well, it is possible that s is stale. I did not take it as a problem
because the device model will later discard such an ioreq. And I believe
the current hvm_select_ioreq_server() has the same issue - the returned s
should also be considered stale if the MMIO/PIO address is removed from
the ioreq server's rangeset.
An enabled emulator has to be prepared to receive ioreqs for ranges it has
unmapped since there is no domain_pause() to prevent a race.
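
A minimal, self-contained sketch of the check Paul describes, assuming toy
stand-ins (fake_ioreq_t, range_claimed(), the claimed range) rather than
Xen's real ioreq_t or any actual device model code: the emulator keeps its
own record of the ranges it believes are mapped and still completes, rather
than rejects, any ioreq that no longer matches.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t addr;
    uint64_t data;
    uint32_t size;
    bool     is_read;
    bool     completed;
} fake_ioreq_t;                       /* toy stand-in, not Xen's ioreq_t */

/* The one MMIO range this toy emulator currently claims. */
static uint64_t claimed_start = 0xf0000000, claimed_end = 0xf0000fff;

static bool range_claimed(uint64_t addr, uint32_t size)
{
    return addr >= claimed_start && addr + size - 1 <= claimed_end;
}

static void handle_ioreq(fake_ioreq_t *req)
{
    if ( !range_claimed(req->addr, req->size) )
    {
        /*
         * The range was unmapped after the hypervisor selected us, so
         * the request is "stale".  It must still be completed, never
         * rejected: reads return all-ones, writes are dropped.
         */
        if ( req->is_read )
            req->data = ~0ULL;
        req->completed = true;
        return;
    }

    /* Real emulation would happen here. */
    if ( req->is_read )
        req->data = 0x12345678;
    req->completed = true;
}

int main(void)
{
    /* An access just outside the claimed range: completed, not refused. */
    fake_ioreq_t stale = { .addr = 0xf0001000, .size = 4, .is_read = true };

    handle_ioreq(&stale);
    printf("stale read completed=%d data=%#llx\n",
           stale.completed, (unsigned long long)stale.data);
    return 0;
}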

Thank you for the reply, Paul.
So you mean that taking the ioreq server lock around this process would not
prevent this either? Why?

Well, if emulation has already sampled the value on another vcpu then that
emulation request may race with the range being disabled. Remember that (for
good reason) the lock is not held by hvm_select_ioreq_server() and is only
held by hvm_unmap_io_range_from_ioreq_server() to protect against other
invocations of similar manipulation functions.
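
A deterministic, single-file sketch of that interleaving, with toy stand-ins
(toy_server, vcpu0_select(), vcpu1_unmap_range()) in place of the real
hvm_select_ioreq_server()/rangeset code: one vCPU samples the server without
the lock, another disables the range under it, and the first then dispatches
using its stale sample.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for an ioreq server and the range it claims. */
struct toy_server { const char *name; bool range_enabled; };

static struct toy_server srv = { "ioreq-srv", true };

/* "vCPU 0": selection samples the server; no lock is held here, just as
 * hvm_select_ioreq_server() takes none. */
static struct toy_server *vcpu0_select(void)
{
    return srv.range_enabled ? &srv : NULL;
}

/* "vCPU 1": unmapping the range would take the server lock, but that only
 * serialises it against other map/unmap calls, not against selection. */
static void vcpu1_unmap_range(void)
{
    srv.range_enabled = false;
}

int main(void)
{
    struct toy_server *sampled = vcpu0_select();   /* step 1: sample */

    vcpu1_unmap_range();                           /* step 2: unmap  */

    /* Step 3: vCPU 0 dispatches with its stale sample anyway. */
    if ( sampled )
        printf("ioreq sent to %s (range_enabled=%d)\n",
               sampled->name, srv.range_enabled);
    return 0;
}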

Oh. So that's why you mentioned domain_pause() to prevent the race, right?
Another thought: if you think it is inappropriate for the device model to
do the check, we could take spin_lock_recursive() on ioreq_server.lock to
protect the whole ioreq server selection, and only release the lock after
the ioreq has been sent out.
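
A toy sketch of that proposal's shape, with a recursive pthread mutex
standing in for Xen's spin_lock_recursive() and illustrative structures in
place of the ioreq server list; only the lock scope matters here - it is
held from selection until the request has been handed to the chosen server.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_server { const char *name; bool enabled; };

static pthread_mutex_t srv_lock;        /* stand-in for ioreq_server.lock */
static struct toy_server servers[2] = { { "srvA", true }, { "srvB", false } };

/* Called with srv_lock held; the recursive type means a nested taker
 * (e.g. a selection helper that also locks) would not deadlock. */
static struct toy_server *select_server(void)
{
    for ( unsigned int i = 0; i < 2; i++ )
        if ( servers[i].enabled )
            return &servers[i];
    return NULL;
}

static void send_to_server(struct toy_server *s)
{
    printf("ioreq sent to %s\n", s->name);
}

static void dispatch_ioreq(void)
{
    pthread_mutex_lock(&srv_lock);      /* held across select *and* send */
    {
        struct toy_server *s = select_server();

        if ( s )
            send_to_server(s);
    }
    pthread_mutex_unlock(&srv_lock);
}

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&srv_lock, &attr);

    dispatch_ioreq();
    return 0;
}

The obvious cost is that every emulated access then holds the server lock
across the whole select-plus-send path, which is where the performance
concern mentioned below comes from.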
Well, let's first ask Paul as to what his perspective here is, both
specifically for this change and more generally regarding what
you say above.
Paul, any suggestions on this and the above one? :)
Well, as I said above, the device model has to check whether it is willing
to handle an ioreq it is passed and terminate it appropriately under all
circumstances. There is no option for it to reject the I/O.
This may be ok for MMIO regions coming and going, but is there anything
more to consider here if we change a page type from ioreq_server back to
RAM? Clearly we need to make sure that there is no scope for I/O to that
page being incorrectly handled during the transition.

I don't think we need to worry about the p2m type change; patch 1 prevents
this. The s returned by p2m_get_ioreq_server() may be stale (and is then
discarded by the device model in our current code), but the p2m type will
not be stale. I agree the device model has the responsibility to do such a
check, but I also wonder whether it is possible for the hypervisor to
provide some kind of insurance.
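
One shape such "insurance" could take, sketched with toy stand-ins
(get_ioreq_server(), still_mapped(), mapped_server) rather than the real
p2m code: re-validate the sampled server against the current mapping
immediately before dispatch. Unless the send itself also happens under the
lock this only narrows the window rather than closing it, and it adds a
lock round-trip to every emulated access.

#include <stdbool.h>
#include <stdio.h>

struct toy_server { const char *name; };

/* Toy stand-in for p2m->ioreq.server; its lock is elided here. */
static struct toy_server srvA = { "srvA" };
static struct toy_server *mapped_server = &srvA;

/* Stand-in for p2m_get_ioreq_server(): sample under the (elided) lock. */
static struct toy_server *get_ioreq_server(void)
{
    return mapped_server;
}

/* The "insurance": just before dispatch, confirm the mapping still names
 * the server we sampled. */
static bool still_mapped(struct toy_server *s)
{
    return mapped_server == s;
}

int main(void)
{
    struct toy_server *s = get_ioreq_server();

    mapped_server = NULL;           /* mapping revoked on another vCPU */

    if ( s && still_mapped(s) )
        printf("ioreq sent to %s\n", s->name);
    else
        printf("stale server detected, ioreq handled as unclaimed\n");
    return 0;
}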

Not in a cheap way. More locking code could be added, but it's likely to be
convoluted and to have a detrimental effect on performance.

Yep. Sounds reasonable. :)

So, Jan, could we draw a conclusion here that, although the selected ioreq
server may be stale, it should be the device model that does the check?

Thanks
Yu

   Paul

Thanks
Yu

    Paul

Thanks
Yu
Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

