
Re: [PATCH v4 02/11] vpci: cancel pending map/unmap on vpci removal



On 18.11.2021 16:46, Oleksandr Andrushchenko wrote:
> On 18.11.21 17:41, Jan Beulich wrote:
>> On 18.11.2021 16:21, Oleksandr Andrushchenko wrote:
>>> On 18.11.21 17:16, Jan Beulich wrote:
>>>>    For the moment I can't help thinking that draining would
>>>> be preferable to canceling.
>>> Given that cancellation is going to happen on the error path or
>>> on device de-assign/remove, I think this can be acceptable.
>>> Any reason why not?
>> It would seem to me that the correctness of a draining approach is
>> going to be easier to prove than that of a canceling one, where I
>> expect races to be a bigger risk. Especially for something that gets
>> executed infrequently, if ever (error paths in particular), knowing
>> from testing that things are well typically isn't possible.
> Could you please then give me a hint on how to do that:
> 1. We have a scheduled SOFTIRQ on vCPU0 which is about to touch pdev->vpci
> 2. We have de-assign/remove on vCPU1
> 
> How do we drain that? Do you mean some atomic variable to be
> used in vpci_process_pending to flag that it is running, so that
> de-assign/remove needs to wait, spinning while checking it?

First of all, let's please keep remove and de-assign separate. I think we
have largely reached agreement that remove may need handling differently,
it being a Dom0-only operation.

As to draining during de-assign: I did suggest before that removing the
register handling hooks first would guarantee that no new requests
appear. Then it should merely be a matter of using hypercall
continuations until the respective domain has no pending requests left
for the device in question. Some locking (or a lock barrier) may of
course be needed to make sure another CPU isn't just about to pend a
new request.
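
As a rough illustration of the drain-on-de-assign idea, here is a
self-contained C sketch. Plain C11 atomics and sched_yield() stand in
for Xen's actual locking and hypercall-continuation machinery, and all
of the names below (pci_dev_state, pend_request, deassign_device, ...)
are made up for this example rather than taken from the Xen tree:

  #include <sched.h>      /* sched_yield() */
  #include <stdatomic.h>
  #include <stdbool.h>

  /* Hypothetical per-device state, modeling pdev->vpci bookkeeping. */
  struct pci_dev_state {
      atomic_bool handlers_active;   /* register hooks installed?   */
      atomic_int  pending_requests;  /* queued map/unmap work items */
  };

  /* vCPU0 side: softirq-like worker, processes one pending chunk. */
  void process_pending(struct pci_dev_state *d)
  {
      if (atomic_load(&d->pending_requests) > 0) {
          /* ... perform one chunk of map/unmap work here ... */
          atomic_fetch_sub(&d->pending_requests, 1);
      }
  }

  /* Guest register access: may only pend work while the hooks exist. */
  bool pend_request(struct pci_dev_state *d)
  {
      atomic_fetch_add(&d->pending_requests, 1);
      if (!atomic_load(&d->handlers_active)) {
          /* Lost the race with de-assign: back the request out. */
          atomic_fetch_sub(&d->pending_requests, 1);
          return false;
      }
      return true;
  }

  /* vCPU1 side: de-assign drains instead of canceling. */
  void deassign_device(struct pci_dev_state *d)
  {
      /* 1. Remove the register handling hooks: no new requests. */
      atomic_store(&d->handlers_active, false);

      /* 2. Wait for the worker to drain; in Xen this busy loop
       *    would be a hypercall continuation, not a yield loop.  */
      while (atomic_load(&d->pending_requests) > 0)
          sched_yield();

      /* 3. Only now is it safe to tear down pdev->vpci. */
  }

Note how pend_request() increments first and only then checks whether
the hooks are still there, backing out if not: that ordering plays the
role of the lock barrier mentioned above, closing the window in which
another CPU pends a request just as the drain begins.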

Jan
