
Re: [PATCH v4 02/11] vpci: cancel pending map/unmap on vpci removal



On 16.11.2021 14:27, Oleksandr Andrushchenko wrote:
> 
> 
> On 16.11.21 13:38, Jan Beulich wrote:
>> On 16.11.2021 09:23, Oleksandr Andrushchenko wrote:
>>>
>>> On 16.11.21 10:01, Jan Beulich wrote:
>>>> On 16.11.2021 08:32, Oleksandr Andrushchenko wrote:
>>>>> On 15.11.21 18:56, Jan Beulich wrote:
>>>>>> On 05.11.2021 07:56, Oleksandr Andrushchenko wrote:
>>>>>>> @@ -165,6 +164,18 @@ bool vpci_process_pending(struct vcpu *v)
>>>>>>>         return false;
>>>>>>>     }
>>>>>>>     
>>>>>>> +void vpci_cancel_pending(const struct pci_dev *pdev)
>>>>>>> +{
>>>>>>> +    struct vcpu *v = current;
>>>>>>> +
>>>>>>> +    /* Cancel any pending work now. */
>>>>>> Doesn't "any" include pending work on all vCPU-s of the guest, not
>>>>>> just current? Is current even relevant (as in: part of the correct
>>>>>> domain), considering ...
>>>>>>
>>>>>>> --- a/xen/drivers/vpci/vpci.c
>>>>>>> +++ b/xen/drivers/vpci/vpci.c
>>>>>>> @@ -51,6 +51,8 @@ void vpci_remove_device(struct pci_dev *pdev)
>>>>>>>             xfree(r);
>>>>>>>         }
>>>>>>>         spin_unlock(&pdev->vpci->lock);
>>>>>>> +
>>>>>>> +    vpci_cancel_pending(pdev);
>>>>>> ... this code path, when coming here from pci_{add,remove}_device()?
>>>>>>
>>>>>> I can agree that there's a problem here, but I think you need to
>>>>>> properly (i.e. in a race free manner) drain pending work.
>>>>> Yes, the code is inconsistent in this respect. I am thinking about:
>>>>>
>>>>> void vpci_cancel_pending(const struct pci_dev *pdev)
>>>>> {
>>>>>        struct domain *d = pdev->domain;
>>>>>        struct vcpu *v;
>>>>>
>>>>>        /* Cancel any pending work now. */
>>>>>        domain_lock(d);
>>>>>        for_each_vcpu ( d, v )
>>>>>        {
>>>>>            vcpu_pause(v);
>>>>>            if ( v->vpci.mem && v->vpci.pdev == pdev)
>>>> Nit: Same style issue as in the original patch.
>>> Will fix
>>>>>            {
>>>>>                rangeset_destroy(v->vpci.mem);
>>>>>                v->vpci.mem = NULL;
>>>>>            }
>>>>>            vcpu_unpause(v);
>>>>>        }
>>>>>        domain_unlock(d);
>>>>> }
>>>>>
>>>>> which seems to solve all the concerns. Is this what you mean?
>>>> Something along these lines. I expect you will want to make use of
>>>> domain_pause_except_self(),
>>> Yes, this is what is needed here, thanks. The only question is that
>>>
>>> int domain_pause_except_self(struct domain *d)
>>> {
>>> [snip]
>>>           /* Avoid racing with other vcpus which may want to be pausing us */
>>>           if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
>>>               return -ERESTART;
>>>
>>> so it is not clear what we do in case of -ERESTART: do we want to spin?
>>> Otherwise we will leave the job not done, effectively not canceling the
>>> pending work. Any idea other than spinning?
>> Depends on the call chain you come through. There may need to be some
>> rearrangements such that you may be able to preempt the enclosing
>> hypercall.
> Well, there are three places where the pending work may need to be
> canceled:
> 
> MMIO trap -> vpci_write -> vpci_cmd_write -> modify_bars -> vpci_cancel_pending (on modify_bars fail path)
> 
> PHYSDEVOP_pci_device_add -> pci_add_device (error path) -> vpci_remove_device -> vpci_cancel_pending
> 
> PHYSDEVOP_pci_device_remove -> pci_remove_device -> vpci_remove_device -> vpci_cancel_pending
> 
> So, taking into account the MMIO path, I think about the below code
> 
>      /*
>       * Cancel any pending work now.
>       *
>       * FIXME: this can be called from an MMIO trap handler's error
>       * path, so we cannot just return an error code here for upper
>       * layers to handle. The best we can do is to still try
>       * removing the range sets.
>       */

The MMIO trap path should simply exit to the guest to have the insn
retried. With the vPCI device removed, a subsequent emulation attempt
will no longer be able to resolve the address to a BDF, and will hence
do whatever it would do for an attempted access to config space not
belonging to any device.
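
As a rough sketch of that flow (handler and helper names below are made
up purely for illustration, not the actual vPCI MMIO code):

    static int guest_cfg_write(struct vcpu *v, unsigned long gpa,
                               unsigned int len, unsigned long data)
    {
        pci_sbdf_t sbdf;

        /*
         * Once vpci_remove_device() has run, the address no longer
         * resolves to a BDF, so a retried access is handled like a write
         * to config space belonging to no device and simply dropped.
         */
        if ( !gpa_to_sbdf(gpa, &sbdf) )              /* hypothetical helper */
            return X86EMUL_OKAY;

        /*
         * If the mapping work cannot be set up (e.g. modify_bars() failed
         * because the device is going away), exit back to the guest and
         * have the instruction retried instead of cancelling work here.
         */
        if ( cfg_write(sbdf, gpa, len, data) == -EBUSY ) /* hypothetical */
            return X86EMUL_RETRY;

        return X86EMUL_OKAY;
    }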

For the two physdevops it may not be possible to preempt the
hypercalls without further code adjustments.
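
One possible shape of such an adjustment, assuming pci_remove_device()
were taught to return -ERESTART while pending vPCI work still needs
draining (a sketch only, not existing code; ret, cmd and arg as in the
surrounding do_physdev_op()):

    case PHYSDEVOP_pci_device_remove: {
        struct physdev_pci_device dev;

        ret = -EFAULT;
        if ( copy_from_guest(&dev, arg, 1) )
            break;

        /*
         * Assumed: pci_remove_device() returns -ERESTART if the pending
         * vPCI map/unmap work cannot be drained yet.
         */
        ret = pci_remove_device(dev.seg, dev.bus, dev.devfn);
        if ( ret == -ERESTART )
            /* Re-enter the hypercall later instead of spinning here. */
            ret = hypercall_create_continuation(__HYPERVISOR_physdev_op,
                                                "ih", cmd, arg);
        break;
    }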

Jan

>      while ( (rc = domain_pause_except_self(d)) == -ERESTART )
>          cpu_relax();
> 
>      if ( rc )
>          printk(XENLOG_G_ERR
>                 "Failed to pause vCPUs while canceling vPCI map/unmap for %pp %pd: %d\n",
>                 &pdev->sbdf, pdev->domain, rc);
> 
> I am not sure how to handle this otherwise.
> @Roger, do you see any other good way?
>>
>> Jan
>>
> Thank you,
> Oleksandr
> 
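
For completeness, a minimal sketch of the draining helper along the
lines suggested above, assuming its callers can cope with -ERESTART
(e.g. via a hypercall continuation as sketched earlier) rather than
spinning; this is illustration only, not a committed implementation:

    int vpci_cancel_pending(const struct pci_dev *pdev)
    {
        struct domain *d = pdev->domain;
        struct vcpu *v;
        int rc;

        /* Pause every other vCPU so none can run vpci_process_pending(). */
        rc = domain_pause_except_self(d);
        if ( rc )
            return rc; /* -ERESTART: let the caller retry or continue. */

        for_each_vcpu ( d, v )
        {
            if ( v->vpci.mem && v->vpci.pdev == pdev )
            {
                rangeset_destroy(v->vpci.mem);
                v->vpci.mem = NULL;
            }
        }

        domain_unpause_except_self(d);

        return 0;
    }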