
Re: [PATCH] xen, blkback: fix persistent grants negotiation


  • To: <paul@xxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 11 Jan 2022 13:26:50 +0100
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 11 Jan 2022 12:27:16 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Jan 11, 2022 at 11:50:32AM +0000, Durrant, Paul wrote:
> On 11/01/2022 11:11, Roger Pau Monné wrote:
> > On Thu, Jan 06, 2022 at 09:10:13AM +0000, Maximilian Heyne wrote:
> > > Suppose dom0 supports persistent grants but the guest does not.
> > > Then, when attaching a block device during the runtime of the guest,
> > > dom0 will enable persistent grants for this newly attached block device:
> > > 
> > >    $ xenstore-ls -f | grep 20674 | grep persistent
> > >    /local/domain/0/backend/vbd/20674/768/feature-persistent = "0"
> > >    /local/domain/0/backend/vbd/20674/51792/feature-persistent = "1"
> > 
> > The mechanism that we use to advertise persistent grants support is
> > wrong. 'feature-persistent' should always be set if the backend
> > supports persistent grants (like it's done for other features in
> > xen_blkbk_probe). The usage of the feature depends on whether both
> > parties support persistent grants, and the xenstore entry printed by
> > blkback shouldn't reflect whether persistent grants are in use, but
> > rather whether blkback supports the feature.
> > 
> > > 
> > > Here disk 768 was attached during guest creation while 51792 was
> > > attached at runtime. If the guest had advertised the persistent
> > > grant feature, there would be a xenstore entry like:
> > > 
> > >    /local/domain/20674/device/vbd/51792/feature-persistent = "1"
> > > 
> > > Persistent grants are also used when the guest tries to access the disk
> > > which can be seen when enabling log stats:
> > > 
> > >    $ echo 1 > /sys/module/xen_blkback/parameters/log_stats
> > >    $ dmesg
> > >    xen-blkback: (20674.xvdf-0): oo   0  |  rd    0  |  wr    0  |  f    0  |  ds    0 | pg:    1/1056
> > > 
> > > The "pg: 1/1056" shows that one persistent grant is used.
> > > 
> > > Before commit aac8a70db24b ("xen-blkback: add a parameter for disabling
> > > of persistent grants") vbd->feature_gnt_persistent was set in
> > > connect_ring. After the commit it was intended to be initialized in
> > > xen_vbd_create and then set according to the guest feature availability
> > > in connect_ring. However, with a running guest, connect_ring might be
> > > called before xen_vbd_create and vbd->feature_gnt_persistent will be
> > > incorrectly initialized. xen_vbd_create will overwrite it with the value
> > > of feature_persistent regardless of whether the guest actually supports
> > > persistent grants.
> > > 
> > > With this commit, vbd->feature_gnt_persistent is set only in
> > > connect_ring and this is the only use of the module parameter
> > > feature_persistent. This avoids races when the module parameter changes
> > > during the block attachment process.
> > > 
> > > Note that vbd->feature_gnt_persistent doesn't need to be initialized in
> > > xen_vbd_create. Its next use is in connect, which can only be called
> > > once connect_ring has initialized the rings. xen_update_blkif_status is
> > > checking for this.
> > > 
> > > Fixes: aac8a70db24b ("xen-blkback: add a parameter for disabling of persistent grants")
> > > Signed-off-by: Maximilian Heyne <mheyne@xxxxxxxxx>
> > > ---
> > >   drivers/block/xen-blkback/xenbus.c | 9 +++------
> > >   1 file changed, 3 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> > > index 914587aabca0c..51b6ec0380ca4 100644
> > > --- a/drivers/block/xen-blkback/xenbus.c
> > > +++ b/drivers/block/xen-blkback/xenbus.c
> > > @@ -522,8 +522,6 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
> > >           if (q && blk_queue_secure_erase(q))
> > >                   vbd->discard_secure = true;
> > > -	vbd->feature_gnt_persistent = feature_persistent;
> > > -
> > >           pr_debug("Successful creation of handle=%04x (dom=%u)\n",
> > >                   handle, blkif->domid);
> > >           return 0;
> > > @@ -1090,10 +1088,9 @@ static int connect_ring(struct backend_info *be)
> > >                   xenbus_dev_fatal(dev, err, "unknown fe protocol %s", protocol);
> > >                   return -ENOSYS;
> > >           }
> > > -	if (blkif->vbd.feature_gnt_persistent)
> > > -		blkif->vbd.feature_gnt_persistent =
> > > -			xenbus_read_unsigned(dev->otherend,
> > > -					"feature-persistent", 0);
> > > +
> > > +	blkif->vbd.feature_gnt_persistent = feature_persistent &&
> > > +		xenbus_read_unsigned(dev->otherend, "feature-persistent", 0);
> > 
> > I'm not sure it's correct to potentially read feature_persistent
> > multiple times like it's done here.
> > 
> > A device can be disconnected and re-attached multiple times, and that
> > implies multiple calls to connect_ring which could make the state of
> > feature_gnt_persistent change across reconnections if the value of
> > feature_persistent is changed. I think that would be unexpected.
> > 
> 
> Would that not be legitimate though? What happens if blkfront is upgraded
> (or downgraded)? Each connection should be 'groundhog day' for the backend
> IMO.

The previous implementation cached the value of feature_persistent for
the lifetime of the backend, and I assume this was the intended
behavior, i.e. so that an admin could create a set of backends with
persistent grants disabled and then switch the parameter on afterwards,
expecting only newly created backends to use the new value.

If the intention is indeed to read the value of feature_persistent on
each reconnection, it should be noted in the commit message, as it's a
behavior change.
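The difference between the two behaviors can be illustrated with a
minimal userspace sketch (the names below are hypothetical stand-ins,
not the actual blkback code):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the feature_persistent module parameter. */
static bool feature_persistent = true;

struct vbd {
    bool feature_gnt_persistent;
};

/* Cached behavior: latch the parameter once, at backend creation. */
static void vbd_create_cached(struct vbd *vbd)
{
    vbd->feature_gnt_persistent = feature_persistent;
}

/* On (re)connect, only honor the frontend if the cached value was set. */
static void connect_ring_cached(struct vbd *vbd, bool frontend_supports)
{
    if (vbd->feature_gnt_persistent)
        vbd->feature_gnt_persistent = frontend_supports;
}

/* Behavior of the patch under review: re-read the parameter on every
 * reconnect, so a changed parameter affects existing backends too. */
static void connect_ring_reread(struct vbd *vbd, bool frontend_supports)
{
    vbd->feature_gnt_persistent = feature_persistent && frontend_supports;
}
```

With the cached variant, flipping the parameter after backend creation
has no effect on that backend's reconnections; with the re-read
variant, the new parameter value takes effect on the next reconnect.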

Thanks, Roger.



 

