
Re: [Xen-devel] [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation



On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > From: Munehisa Kamata <kamatam@xxxxxxxxxx>
> > 
> > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> > events need to implement these xenbus_driver callbacks.
> > The freeze handler stops the block-layer queue and disconnects the
> > frontend from the backend while freeing ring_info and associated resources.
> > The restore handler re-allocates ring_info and re-connects to the
> > backend, so the rest of the kernel can continue to use the block device
> > transparently. Also, the handlers are used for both PM suspend and
> > hibernation so that we can keep the existing suspend/resume callbacks for
> > Xen suspend without modification. Before disconnecting from the backend,
> > we need to prevent any new IO from being queued and wait for existing
> > IO to complete.
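
[For context: the tail of this diff, not quoted below, hooks these handlers
into blkfront's xenbus_driver. A rough sketch of that registration, assuming
the freeze/thaw/restore hooks added to struct xenbus_driver earlier in this
series (comments mine, field order illustrative); the other fields are the
existing ones from xen-blkfront.c:

static struct xenbus_driver blkfront_driver = {
	.ids = blkfront_ids,
	.probe = blkfront_probe,
	.remove = blkfront_remove,
	.resume = blkfront_resume,	/* existing Xen (xenstore) suspend path */
	.otherend_changed = blkback_changed,
	.is_ready = blkfront_is_ready,
	.freeze = blkfront_freeze,	/* PM freeze: quiesce and disconnect */
	.thaw = blkfront_restore,	/* thaw after a failed/aborted hibernation */
	.restore = blkfront_restore,	/* reconnect when resuming from the image */
};

Here one handler would serve both thaw and restore, since in either case the
frontend just needs to reconnect and re-issue what it saved.]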
> 
> This is different from Xen (xenstore) initiated suspension, as in that
> case Linux doesn't flush the rings or disconnect from the backend.
Yes, AFAIK in Xen-initiated suspension the backend takes care of it.
> 
> This is done so that in case suspension fails the recovery doesn't
> need to reconnect the PV devices, and in order to speed up suspension
> time (ie: waiting for all queues to be flushed can take time as Linux
> supports multiqueue, multipage rings and indirect descriptors), and
> the backend could be contended if there's a lot of IO pressure from
> guests.
> 
> Linux already keeps a shadow of the ring contents, so in-flight
> requests can be re-issued after the frontend has reconnected during
> resume.
> 
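[For reference, the re-issue machinery mentioned here lives at the tail of
blkif_recover(); a simplified sketch follows (the function name is mine, and
the body assumes the driver's existing info->requests and info->bio_list
bookkeeping):

/* Simplified sketch of the end of blkif_recover(). */
static void reissue_saved_requests(struct blkfront_info *info)
{
	struct request *req, *n;
	struct bio *bio;

	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
		/* Requeue requests saved before the old ring was torn down. */
		list_del_init(&req->queuelist);
		blk_mq_requeue_request(req, false);
	}
	blk_mq_start_stopped_hw_queues(info->rq, true);
	blk_mq_kick_requeue_list(info->rq);

	/* Re-submit bios that were split off before suspend. */
	while ((bio = bio_list_pop(&info->bio_list)) != NULL)
		submit_bio(bio);
}

The @@ -2046 hunk further down makes the frozen case return before reaching
this, which is what prompts the question about requests still on the ring.]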
> > Freeze/unfreeze of the queues will guarantee that there
> > are no requests in use on the shared ring.
> > 
> > Note: for older backends, if a backend doesn't have commit '12ea729645ace'
> > (xen/blkback: unmap all persistent grants when frontend gets disconnected),
> > the frontend may see a massive amount of grant table warnings when freeing
> > resources:
> > [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> > [   36.855089] xen:grant_table: WARNING: g.e. 0x112 still in use!
> > 
> > In this case, persistent grants would need to be disabled.
> > 
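[Persistent grants are negotiated through the 'feature-persistent' xenstore
node, so a frontend-side way to disable them would be a knob along these
lines. This is purely illustrative; no such parameter exists in blkfront or
in this patch:

/* Hypothetical module parameter -- not part of this patch. */
static bool use_persistent_gnts = true;
module_param(use_persistent_gnts, bool, 0644);
MODULE_PARM_DESC(use_persistent_gnts,
		 "Advertise feature-persistent=0 so the backend keeps persistent grants off");

/* ...then, where talk_to_blkback() writes the frontend's features: */
err = xenbus_printf(xbt, dev->nodename, "feature-persistent", "%u",
		    use_persistent_gnts ? 1 : 0);
]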
> > [Anchal Changelog: Removed timeout/request during blkfront freeze.
> > Fixed major part of the code to work with blk-mq]
> > Signed-off-by: Anchal Agarwal <anchalag@xxxxxxxxxx>
> > Signed-off-by: Munehisa Kamata <kamatam@xxxxxxxxxx>
> > ---
> >  drivers/block/xen-blkfront.c | 119 ++++++++++++++++++++++++++++++++---
> >  1 file changed, 112 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> > index 478120233750..d715ed3cb69a 100644
> > --- a/drivers/block/xen-blkfront.c
> > +++ b/drivers/block/xen-blkfront.c
> > @@ -47,6 +47,8 @@
> >  #include <linux/bitmap.h>
> >  #include <linux/list.h>
> >  #include <linux/workqueue.h>
> > +#include <linux/completion.h>
> > +#include <linux/delay.h>
> >  
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > @@ -79,6 +81,8 @@ enum blkif_state {
> >     BLKIF_STATE_DISCONNECTED,
> >     BLKIF_STATE_CONNECTED,
> >     BLKIF_STATE_SUSPENDED,
> > +   BLKIF_STATE_FREEZING,
> > +   BLKIF_STATE_FROZEN
> >  };
> >  
> >  struct grant {
> > @@ -220,6 +224,7 @@ struct blkfront_info
> >     struct list_head requests;
> >     struct bio_list bio_list;
> >     struct list_head info_list;
> > +   struct completion wait_backend_disconnected;
> >  };
> >  
> >  static unsigned int nr_minors;
> > @@ -261,6 +266,7 @@ static DEFINE_SPINLOCK(minor_lock);
> >  static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo);
> >  static void blkfront_gather_backend_features(struct blkfront_info *info);
> >  static int negotiate_mq(struct blkfront_info *info);
> > +static void __blkif_free(struct blkfront_info *info);
> >  
> >  static int get_id_from_freelist(struct blkfront_ring_info *rinfo)
> >  {
> > @@ -995,6 +1001,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
> >     info->sector_size = sector_size;
> >     info->physical_sector_size = physical_sector_size;
> >     blkif_set_queue_limits(info);
> > +   init_completion(&info->wait_backend_disconnected);
> >  
> >     return 0;
> >  }
> > @@ -1218,6 +1225,8 @@ static void xlvbd_release_gendisk(struct blkfront_info *info)
> >  /* Already hold rinfo->ring_lock. */
> >  static inline void kick_pending_request_queues_locked(struct blkfront_ring_info *rinfo)
> >  {
> > +   if (unlikely(rinfo->dev_info->connected == BLKIF_STATE_FREEZING))
> > +           return;
> >     if (!RING_FULL(&rinfo->ring))
> >             blk_mq_start_stopped_hw_queues(rinfo->dev_info->rq, true);
> >  }
> > @@ -1341,8 +1350,6 @@ static void blkif_free_ring(struct blkfront_ring_info *rinfo)
> >  
> >  static void blkif_free(struct blkfront_info *info, int suspend)
> >  {
> > -   unsigned int i;
> > -
> >     /* Prevent new requests being issued until we fix things up. */
> >     info->connected = suspend ?
> >             BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> > @@ -1350,6 +1357,13 @@ static void blkif_free(struct blkfront_info *info, int suspend)
> >     if (info->rq)
> >             blk_mq_stop_hw_queues(info->rq);
> >  
> > +   __blkif_free(info);
> > +}
> > +
> > +static void __blkif_free(struct blkfront_info *info)
> > +{
> > +   unsigned int i;
> > +
> >     for (i = 0; i < info->nr_rings; i++)
> >             blkif_free_ring(&info->rinfo[i]);
> >  
> > @@ -1553,8 +1567,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
> >     struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
> >     struct blkfront_info *info = rinfo->dev_info;
> >  
> > -   if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> > -           return IRQ_HANDLED;
> > +   if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
> > +           if (info->connected != BLKIF_STATE_FREEZING)
> > +                   return IRQ_HANDLED;
> > +   }
> >  
> >     spin_lock_irqsave(&rinfo->ring_lock, flags);
> >   again:
> > @@ -2020,6 +2036,7 @@ static int blkif_recover(struct blkfront_info *info)
> >     struct bio *bio;
> >     unsigned int segs;
> >  
> > +   bool frozen = info->connected == BLKIF_STATE_FROZEN;
> >     blkfront_gather_backend_features(info);
> >     /* Reset limits changed by blk_mq_update_nr_hw_queues(). */
> >     blkif_set_queue_limits(info);
> > @@ -2046,6 +2063,9 @@ static int blkif_recover(struct blkfront_info *info)
> >             kick_pending_request_queues(rinfo);
> >     }
> >  
> > +   if (frozen)
> > +           return 0;
> > +
> >     list_for_each_entry_safe(req, n, &info->requests, queuelist) {
> >             /* Requeue pending requests (flush or discard) */
> >             list_del_init(&req->queuelist);
> > @@ -2359,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
> >  
> >             return;
> >     case BLKIF_STATE_SUSPENDED:
> > +   case BLKIF_STATE_FROZEN:
> >             /*
> >              * If we are recovering from suspension, we need to wait
> >              * for the backend to announce it's features before
> > @@ -2476,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
> >             break;
> >  
> >     case XenbusStateClosed:
> > -           if (dev->state == XenbusStateClosed)
> > +           if (dev->state == XenbusStateClosed) {
> > +                   if (info->connected == BLKIF_STATE_FREEZING) {
> > +                           __blkif_free(info);
> > +                           info->connected = BLKIF_STATE_FROZEN;
> > +                           complete(&info->wait_backend_disconnected);
> > +                           break;
> > +                   }
> > +
> >                     break;
> > +           }
> > +
> > +           /*
> > +            * We may somehow receive backend's Closed again while thawing
> > +            * or restoring and it causes thawing or restoring to fail.
> > +            * Ignore such unexpected state anyway.
> > +            */
> > +           if (info->connected == BLKIF_STATE_FROZEN &&
> > +                           dev->state == XenbusStateInitialised) {
> > +                   dev_dbg(&dev->dev,
> > +                                   "ignore the backend's Closed state: %s",
> > +                                   dev->nodename);
> > +                   break;
> > +           }
> >             /* fall through */
> >     case XenbusStateClosing:
> > -           if (info)
> > -                   blkfront_closing(info);
> > +           if (info) {
> > +                   if (info->connected == BLKIF_STATE_FREEZING)
> > +                           xenbus_frontend_closed(dev);
> > +                   else
> > +                           blkfront_closing(info);
> > +           }
> >             break;
> >     }
> >  }
> > @@ -2625,6 +2671,62 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
> >     mutex_unlock(&blkfront_mutex);
> >  }
> >  
> > +static int blkfront_freeze(struct xenbus_device *dev)
> > +{
> > +   unsigned int i;
> > +   struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > +   struct blkfront_ring_info *rinfo;
> > +   /* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> > +   unsigned int timeout = 5 * HZ;
> > +   int err = 0;
> > +
> > +   info->connected = BLKIF_STATE_FREEZING;
> > +
> > +   blk_mq_freeze_queue(info->rq);
> > +   blk_mq_quiesce_queue(info->rq);
> > +
> > +   for (i = 0; i < info->nr_rings; i++) {
> > +           rinfo = &info->rinfo[i];
> > +
> > +           gnttab_cancel_free_callback(&rinfo->callback);
> > +           flush_work(&rinfo->work);
> > +   }
> > +
> > +   /* Kick the backend to disconnect */
> > +   xenbus_switch_state(dev, XenbusStateClosing);
> 
> Are you sure this is safe?
> 
In my testing, running multiple fio jobs and other test scenarios running
a memory loader worked fine. I did not come across a scenario that would
have failed resume due to blkfront issues, unless you can suggest some?
> I don't think you wait for all requests pending on the ring to be
> finished by the backend, and hence you might lose requests, as the
> ones on the ring would not be re-issued by blkfront_restore AFAICT.
> 
AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should take care of there
being no requests in use on the shared ring. Also, I want to pause the queue
and flush all the pending requests in the shared ring before disconnecting
from the backend. Quiescing the queue seemed the better option here, as we
want to make sure ongoing request dispatches are totally drained.
I should admit that some of these notions are borrowed from how NVMe
freeze/unfreeze is done, although it's not an apples-to-apples comparison.
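
[In blk-mq terms, the drain-then-fence ordering relied on here is roughly the
following generic sketch (not blkfront-specific; function names are mine, and
the comments reflect the documented guarantees of these helpers):

#include <linux/blk-mq.h>

static void pm_drain_and_fence(struct request_queue *q)
{
	/* Block new submitters and wait until all in-flight requests complete. */
	blk_mq_freeze_queue(q);
	/* Wait out any .queue_rq() dispatch still in progress, so nothing
	 * further can be placed on the shared ring afterwards. */
	blk_mq_quiesce_queue(q);
}

static void pm_unfence_and_thaw(struct request_queue *q)
{
	blk_mq_unquiesce_queue(q);	/* allow dispatch again */
	blk_mq_unfreeze_queue(q);	/* admit new requests into the queue */
}
]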

Do you have any particular scenario in mind which may cause resume to fail?
> Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

