
Re: [PATCH] vpci: introduce per-domain lock to protect vpci structure


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 11 Feb 2022 12:51:53 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Fri, 11 Feb 2022 11:52:14 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, Feb 11, 2022 at 08:46:59AM +0000, Oleksandr Andrushchenko wrote:
> 
> 
> On 10.02.22 18:16, Roger Pau Monné wrote:
> > On Wed, Feb 09, 2022 at 03:36:27PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >>
> >> Introduce a per-domain read/write lock to check whether vpci is present,
> >> so that we can be sure there are no accesses to the contents of the vpci
> >> struct when it is not. This lock can be used (and in a few cases already
> >> is) so that vpci removal can be performed while holding the lock in write
> >> mode. Previously such removal could race with vpci_read, for example.
> > Sadly there's still a race in the usage of pci_get_pdev_by_domain wrt
> > pci_remove_device, and likely also when vPCI gets used in
> > {de}assign_device, I think.
> >
> How about the below? It seems to guarantee that we can access pdev
> safely, without requiring pcidevs_lock to be held.

Hm, I'm unsure this is correct. It's generally a bad idea to use a
per-domain lock to protect the consistency of elements moving between
domains.

In order for this to be safe you will likely need to hold both the
source and the destination per-domain locks, and then you could also
run into ABBA deadlock issues unless you always take the locks in the
same order.

I think it's safer to use a global lock in this case (pcidevs_lock).
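
For reference, something like the below is what I mean (just a sketch,
untested, assuming the per-domain vpci_rwlock from your patch plus a
hypothetical vpci_lock_pair() helper): both locks would have to be
taken in a fixed order, e.g. by domain id, to avoid the ABBA deadlock:

/*
 * Sketch only: take the vpci locks of two domains in a fixed order
 * (lowest domain_id first) so that two concurrent reassignments in
 * opposite directions cannot deadlock ABBA-style.
 * Assumes src != dst.
 */
static void vpci_lock_pair(struct domain *src, struct domain *dst)
{
    if ( src->domain_id < dst->domain_id )
    {
        write_lock(&src->vpci_rwlock);
        write_lock(&dst->vpci_rwlock);
    }
    else
    {
        write_lock(&dst->vpci_rwlock);
        write_lock(&src->vpci_rwlock);
    }
}

Even then you would need to special-case src == dst, which is part of
why a single global lock is the simpler option here.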

> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index e8b09d77d880..fd464a58b3b3 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -937,8 +937,14 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>       }
> 
>       devfn = pdev->devfn;
> +#ifdef CONFIG_HAS_VPCI
> +    write_lock(&d->vpci_rwlock);
> +#endif
>       ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
>                        pci_to_dev(pdev));
> +#ifdef CONFIG_HAS_VPCI
> +    write_unlock(&d->vpci_rwlock);
> +#endif
>       if ( ret )
>           goto out;
> 
> @@ -1474,6 +1480,9 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>       const struct domain_iommu *hd = dom_iommu(d);
>       struct pci_dev *pdev;
>       int rc = 0;
> +#ifdef CONFIG_HAS_VPCI
> +    struct domain *old_d;
> +#endif
> 
>       if ( !is_iommu_enabled(d) )
>           return 0;
> @@ -1487,15 +1496,34 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>       ASSERT(pdev && (pdev->domain == hardware_domain ||
>                       pdev->domain == dom_io));
> 
> +#ifdef CONFIG_HAS_VPCI
> +    /* pdev->domain is either hwdom or dom_io. We do not want the latter. */
> +    old_d = pdev->domain == hardware_domain ? pdev->domain : NULL;
> +    if ( old_d )
> +        write_lock(&old_d->vpci_rwlock);
> +#endif
> +
>       rc = pdev_msix_assign(d, pdev);

I don't think you need the vpci lock for this operation.

>       if ( rc )
> +    {
> +#ifdef CONFIG_HAS_VPCI
> +        if ( old_d )
> +            write_unlock(&old_d->vpci_rwlock);
> +#endif
>           goto done;
> +    }
> 
>       pdev->fault.count = 0;
> 
>       if ( (rc = iommu_call(hd->platform_ops, assign_device, d, devfn,
>                             pci_to_dev(pdev), flag)) )
> +    {
> +#ifdef CONFIG_HAS_VPCI
> +        if ( old_d )
> +            write_unlock(&old_d->vpci_rwlock);
> +#endif

As I've mentioned above, I'm unsure this is correct. You are holding
the lock of the previous domain, but at some point the device will be
assigned to a new domain, and that change won't be protected by the
lock of the new domain.
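
In other words (again only a sketch on top of your patch, reusing the
hypothetical vpci_lock_pair() from above), the assignment path would
need both owners locked across the point where pdev->domain changes:

    struct domain *old_d = pdev->domain;   /* hwdom or dom_io here */

    /* Lock both the current owner and the target domain. */
    vpci_lock_pair(old_d, d);
    rc = iommu_call(hd->platform_ops, assign_device, d, devfn,
                    pci_to_dev(pdev), flag);
    /* pdev->domain may have changed to d: unlock via the saved pointer. */
    write_unlock(&old_d->vpci_rwlock);
    write_unlock(&d->vpci_rwlock);

That quickly gets unwieldy, hence my suggestion to simply rely on
pcidevs_lock here instead.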

Thanks, Roger.
