
Re: [PATCH v5 06/14] vpci/header: implement guest BAR register handlers


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 31 Jan 2022 12:04:56 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "julien@xxxxxxx" <julien@xxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <Oleksandr_Tyshchenko@xxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Artem Mygaiev <Artem_Mygaiev@xxxxxxxx>, "jbeulich@xxxxxxxx" <jbeulich@xxxxxxxx>, "andrew.cooper3@xxxxxxxxxx" <andrew.cooper3@xxxxxxxxxx>, "george.dunlap@xxxxxxxxxx" <george.dunlap@xxxxxxxxxx>, "paul@xxxxxxx" <paul@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Mon, 31 Jan 2022 11:05:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Jan 31, 2022 at 09:47:07AM +0000, Oleksandr Andrushchenko wrote:
> Hi, Roger!
> 
> On 12.01.22 14:35, Roger Pau Monné wrote:
> > On Thu, Nov 25, 2021 at 01:02:43PM +0200, Oleksandr Andrushchenko wrote:
> >> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >> +static void guest_rom_write(const struct pci_dev *pdev, unsigned int reg,
> >> +                            uint32_t val, void *data)
> >> +{
> >> +}
> >> +
> >> +static uint32_t guest_rom_read(const struct pci_dev *pdev, unsigned int reg,
> >> +                               void *data)
> >> +{
> >> +    return 0xffffffff;
> >> +}
> > There should be no need for those handlers. As said elsewhere: for
> > guests registers not explicitly handled should return ~0 for reads and
> > drop writes, which is what you are proposing here.
> Yes, you are right: I can see in vpci_read that we end up reading ~0 if no
> handler exists (which is what I do here with guest_rom_read). But I am not that
> sure about the dropped writes:
> 
> void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>                  uint32_t data)
> {
>      unsigned int data_offset = 0;
> 
> [snip]
> 
>      if ( data_offset < size )
>          /* Tailing gap, write the remaining. */
>          vpci_write_hw(sbdf, reg + data_offset, size - data_offset,
>                        data >> (data_offset * 8));
> 
> so it looks like for the un-handled writes we still reach the HW register.
> Could you please tell if the code above needs improvement (like checking
> whether the write was handled), or if I still need to provide a write
> handler, e.g. guest_rom_write here?

Right now (given the current code) unhandled reads and writes will all
end up being forwarded to the hardware. This is intended for dom0, but
this is not how it's going to work for domUs, where accesses will be
discarded based on an accept list. IOW the handlers that you are
adding here should be the default behavior for registers not
explicitly handled in the domU case, and shouldn't require explicit
handling.

> >> +
> >>   static int init_bars(struct pci_dev *pdev)
> >>   {
> >>       uint16_t cmd;
> >> @@ -489,6 +542,7 @@ static int init_bars(struct pci_dev *pdev)
> >>       struct vpci_header *header = &pdev->vpci->header;
> >>       struct vpci_bar *bars = header->bars;
> >>       int rc;
> >> +    bool is_hwdom = is_hardware_domain(pdev->domain);
> >>   
> >>       switch ( pci_conf_read8(pdev->sbdf, PCI_HEADER_TYPE) & 0x7f )
> >>       {
> >> @@ -528,8 +582,10 @@ static int init_bars(struct pci_dev *pdev)
> >>           if ( i && bars[i - 1].type == VPCI_BAR_MEM64_LO )
> >>           {
> >>               bars[i].type = VPCI_BAR_MEM64_HI;
> >> -            rc = vpci_add_register(pdev->vpci, vpci_hw_read32, bar_write, reg,
> >> -                                   4, &bars[i]);
> >> +            rc = vpci_add_register(pdev->vpci,
> >> +                                   is_hwdom ? vpci_hw_read32 : guest_bar_read,
> >> +                                   is_hwdom ? bar_write : guest_bar_write,
> >> +                                   reg, 4, &bars[i]);
> >>               if ( rc )
> >>               {
> >>                   pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
> >> @@ -569,8 +625,10 @@ static int init_bars(struct pci_dev *pdev)
> >>           bars[i].size = size;
> >>           bars[i].prefetchable = val & PCI_BASE_ADDRESS_MEM_PREFETCH;
> >>   
> >> -        rc = vpci_add_register(pdev->vpci, vpci_hw_read32, bar_write, reg, 4,
> >> -                               &bars[i]);
> >> +        rc = vpci_add_register(pdev->vpci,
> >> +                               is_hwdom ? vpci_hw_read32 : guest_bar_read,
> >> +                               is_hwdom ? bar_write : guest_bar_write,
> >> +                               reg, 4, &bars[i]);
> >>           if ( rc )
> >>           {
> >>               pci_conf_write16(pdev->sbdf, PCI_COMMAND, cmd);
> >> @@ -590,8 +648,10 @@ static int init_bars(struct pci_dev *pdev)
> >>           header->rom_enabled = pci_conf_read32(pdev->sbdf, rom_reg) &
> >>                                 PCI_ROM_ADDRESS_ENABLE;
> >>   
> >> -        rc = vpci_add_register(pdev->vpci, vpci_hw_read32, rom_write, rom_reg,
> >> -                               4, rom);
> >> +        rc = vpci_add_register(pdev->vpci,
> >> +                               is_hwdom ? vpci_hw_read32 : guest_rom_read,
> >> +                               is_hwdom ? rom_write : guest_rom_write,
> >> +                               rom_reg, 4, rom);
> > This whole call should be made conditional on is_hwdom; as said above
> > there's no need for the guest_rom handlers.
> Yes, if writes are indeed dropped, please see question above
> >
> > Likewise I assume you expect IO BARs to simply return ~0 and drop
> > writes, as there's no explicit handler added for those?
> Yes, but that was not my intention: I simply didn't handle IO BARs
> and now we do need that handling: either with the default behavior
> for the unhandled read/write (drop writes, read ~0) or by introducing
> the handlers. I hope we can rely on the "unhandled read/write" and
> get what we want

Indeed, the default behavior should be changed for domUs to drop
writes, return ~0 for unhandled reads, then you won't need to add
dummy handlers for the registers you don't want to expose.

> >
> >>           if ( rc )
> >>               rom->type = VPCI_BAR_EMPTY;
> >>       }
> >> diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
> >> index ed127a08a953..0a73b14a92dc 100644
> >> --- a/xen/include/xen/vpci.h
> >> +++ b/xen/include/xen/vpci.h
> >> @@ -68,7 +68,10 @@ struct vpci {
> >>       struct vpci_header {
> >>           /* Information about the PCI BARs of this device. */
> >>           struct vpci_bar {
> >> +            /* Physical view of the BAR. */
> > No, that's not the physical view, it's the physical (host) address.
> Ok
> >
> >>               uint64_t addr;
> >> +            /* Guest view of the BAR: address and lower bits. */
> >> +            uint64_t guest_reg;
> > I continue to think it would be clearer if you store the guest address
> > here (gaddr, without the low bits) and add those in guest_bar_read
> > based on bar->{type,prefetchable}. Then it would be equivalent to the
> > existing 'addr' field.
> Ok, I'll re-work the code with this approach in mind: s/guest_reg/gaddr +
> required code to handle that
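> 
> A rough sketch of that suggestion (illustrative only, with hypothetical
> field and function shapes; the low-bit constants match pci_regs.h):
> 
> ```c
> #include <assert.h>
> #include <stdbool.h>
> #include <stdint.h>
> 
> #define PCI_BASE_ADDRESS_MEM_TYPE_64  0x04
> #define PCI_BASE_ADDRESS_MEM_PREFETCH 0x08
> 
> enum bar_type { BAR_MEM32, BAR_MEM64_LO, BAR_MEM64_HI };
> 
> /* Store only the guest address; the read-only low bits are
>  * synthesized from bar->{type,prefetchable} on every read, so no
>  * raw 'guest_reg' copy of the register is kept. */
> struct vpci_bar {
>     uint64_t gaddr;
>     enum bar_type type;
>     bool prefetchable;
> };
> 
> static uint32_t guest_bar_read(const struct vpci_bar *bar)
> {
>     switch ( bar->type )
>     {
>     case BAR_MEM64_HI:
>         return bar->gaddr >> 32;      /* high half: no flag bits */
>     case BAR_MEM64_LO:
>         return (bar->gaddr & 0xfffffff0u) | PCI_BASE_ADDRESS_MEM_TYPE_64 |
>                (bar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0);
>     case BAR_MEM32:
>     default:
>         return (bar->gaddr & 0xfffffff0u) |
>                (bar->prefetchable ? PCI_BASE_ADDRESS_MEM_PREFETCH : 0);
>     }
> }
> 
> int main(void)
> {
>     struct vpci_bar lo = { .gaddr = 0x2e0000000ULL,
>                            .type = BAR_MEM64_LO, .prefetchable = true };
>     struct vpci_bar hi = lo;
> 
>     hi.type = BAR_MEM64_HI;
>     assert(guest_bar_read(&lo) == 0xe000000cu); /* addr | 64bit | prefetch */
>     assert(guest_bar_read(&hi) == 0x2u);        /* upper 32 bits of gaddr */
>     return 0;
> }
> ```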
> >
> > I wonder whether we need to protect the added code with
> > CONFIG_HAS_VPCI_GUEST_SUPPORT, this would effectively be dead code
> > otherwise. Long term I don't think we wish to differentiate between
> > dom0 and domU vPCI support at build time, so I'm unsure whether it's
> > helpful to pollute the code with CONFIG_HAS_VPCI_GUEST_SUPPORT when
> > the plan is to remove those long term.
> I would have it without CONFIG_HAS_VPCI_GUEST_SUPPORT if you
> don't mind

Well, I guess if it's not too intrusive it's fine to add the defines,
removing them afterwards should be easy.

Thanks, Roger.
